From: Andy Champ
Lew wrote:
>
> Andy Champ wrote:
>> In 1982 the manager may well have been right to stop them wasting
>> their time fixing a problem that wasn't going to be a problem for
>> another 18 years or so. The software was probably out of use long
>> before that.
>
> Sure, that's why so many programs had to be re-written in 1999.
>
> Where do you get your conclusions?
>

Pretty well everything I saw back in 1982 was out of use by 1999. How
much software do you know that made the transition?

Let's see... Operating systems. The PC world was... umm... CP/M 80?
Maybe MS-DOS 1.0? And by 1999 I was working on drivers for Windows
2000. That's at least two, maybe three depending on how you count it,
ground-up re-writes of the OS.

Along with that, almost all the PC apps had gone from 8-bit versions in
64 KB of RAM, to 16-bit DOS, to 16-bit Windows 3.1 with non-preemptive
multitasking, and finally to 32-bit apps with multi-threading and
pre-emptive multitasking running in hundreds of megs.

OK, so how about embedded stuff? That dot-matrix printer became a
laserjet. The terminal concentrator lost its RS232 ports, gained a
proprietary LAN, then lost that and got ethernet. And finally
evaporated in a cloud of client-server computing smoke.

I'm not so up on the mainframe world - but I'd be surprised if the
change from dumb terminals to PC clients didn't have a pretty major
effect on the software at the back end.

Where do you get your conclusions that there was much software out there
that was worth re-writing eighteen years ahead of time? Remember to
allow for compound interest on the money invested in that development...

Andy.
From: Seebs
On 2010-02-11, Andy Champ <no.way(a)nospam.invalid> wrote:
> Where do you get your conclusions that there was much software out there
> that was worth re-writing eighteen years ahead of time? Remember to
> allow for compound interest on the money invested in that development...

I'm using tens of thousands of lines of code right now that are over twenty
years old. It's called "OS X", and it contains a large hunk of code that
was written either at Berkeley or at NeXT in the 80s.

We're still using classes with names like "NSString", where "NS" stands for
NeXTStep. You know, the company that folded something like fifteen years
ago, and wrote this stuff originally prior to 1990?

Heck, I have code *I personally wrote* 19 years ago and which I still use.
It was last touched in any way in 1998, so far as I can tell. It's been
untouched ever since, *because it works correctly*.

And honestly, here's the big thing:

In reality, I do not think that writing things correctly costs that much
more. Because, see, it pays off in debugging time. The rule of thumb they
use at $dayjob is that in the progression from development to testing and
then to users, the cost of fixing a bug goes up by about a factor of 10
at each level. That seems plausible to me. So if I spend an extra day
on a project fixing a couple of bugs, those bugs would probably cost about
10 days total people-time if caught in testing, and 100 if caught by
our customers. And 1000+ if caught by their customers.
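
Back-of-the-envelope, that rule of thumb works out like this (a throwaway
sketch; the only input is the factor-of-ten estimate above, nothing here is
measured data):

    #include <stdio.h>

    /* Throwaway sketch of the factor-of-ten rule of thumb: a fix that
     * takes one person-day during development costs roughly ten times
     * more at each later stage it slips to.  The escalation factor is
     * the estimate from the post, not measured data. */
    int main(void)
    {
        const char *stage[] = { "development", "testing",
                                "our customers", "their customers" };
        double cost = 1.0;   /* one person-day spent on the fix up front */

        for (int i = 0; i < 4; i++) {
            printf("%-18s ~%5.0f person-days\n", stage[i], cost);
            cost *= 10.0;    /* ten times more expensive at each later stage */
        }
        return 0;
    }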

This is why I have successfully proposed "this is too broken to fix, let's
build a new one from scratch" on at least one occasion, and gotten it done.

-s
--
Copyright 2010, all wrongs reversed. Peter Seebach / usenet-nospam(a)seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
From: Bo Persson
MarkusSchaber wrote:
> Hi,
>
> On 10 Feb., 22:53, Andy Champ <no....(a)nospam.invalid> wrote:
>> Can any locksmith or
>> burglar alarm maker guarantee that a building will withstand all
>> attacks for 12 months? _That_ is the equivalent of withstanding
>> all virus attacks for 12 months - and it's on a far simpler system.
>
> Maybe not the locksmith itself, but there are insurance companies
> which calculate how high the risk is, and they take that liability.
>
> For locks, cars, even airplanes, insurance companies do that all the
> time. But there are only a few cases where this is done for
> software.

Is it? What about the software that controls the locks, cars, and
airplanes?


Bo Persson


From: Wojtek
Andy Champ wrote:
> Lew wrote:
>>
>> Andy Champ wrote:
>>> In 1982 the manager may well have been right to stop them wasting their
>>> time fixing a problem that wasn't going to be a problem for another 18
>>> years or so. The software was probably out of use long before that.
>>
>> Sure, that's why so many programs had to be re-written in 1999.
>>
>> Where do you get your conclusions?
>>
>
> How much software do you know that made the transition?

> I'm not so up on the mainframe world

Yup. Made lots of money transitioning mainframe apps.

--
Wojtek :-)


From: Lew
Andy Champ wrote:
>>> In 1982 the manager may well have been right to stop them wasting
>>> their time fixing a problem that wasn't going to be a problem for
>>> another 18 years or so. The software was probably out of use long
>>> before that.

Lew wrote:
>> Sure, that's why so many programs had to be re-written in 1999.
>>
>> Where do you get your conclusions?

Andy Champ wrote:
> Pretty well everything I saw back in 1982 was out of use by 1999. How
> much software do you know that made the transition?

Pretty much everything I saw back in 1982 is in production to this day, never
mind 1999.

Pretty much everything that had Y2K issues in 1999 had been in production
since the 1980s or earlier. By the 90s, more software was being written
without that bug.
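
For reference, the bug in question was nothing more exotic than two-digit
year arithmetic. A minimal sketch - hypothetical code, not taken from any
real system:

    #include <stdio.h>

    /* Classic Y2K defect: only the last two digits of the year are stored,
     * and every date is silently assumed to fall in the 1900s. */
    int main(void)
    {
        int opened_yy  = 98;   /* account opened in (19)98 */
        int current_yy = 2;    /* the year 2002, stored as 02 */

        int age = current_yy - opened_yy;        /* 2 - 98 = -96 */
        printf("account age: %d years\n", age);  /* nonsense after the rollover */
        return 0;
    }

Two bytes saved per date field, and the arithmetic quietly breaks at the
century boundary.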

Again, why do you think Y2K was such an issue, if affected software had gone
out of production by then?

> Let's see... Operating systems. The PC world was... umm... CP/M 80? Maybe
> MS-DOS 1.0? And by 1999 I was working on drivers for Windows 2000.
> That's at least two, maybe three depending on how you count it, ground-up
> re-writes of the OS.

PCs were not relevant in 1982. PCs largely didn't have Y2K issues; it was
mainly a mainframe issue.

> Along with that, almost all the PC apps had gone from 8-bit versions in
> 64 KB of RAM, to 16-bit DOS, to 16-bit Windows 3.1 with non-preemptive
> multitasking, and finally to 32-bit apps with multi-threading and
> pre-emptive multitasking running in hundreds of megs.

Largely irrelevant to the discussion of Y2K issues, which were a mainframe
issue for the most part.

PCs were not in common use in 1982.

> OK, so how about embedded stuff? That dot-matrix printer became a
> laserjet. The terminal concentrator lost its RS232 ports, gained a
> proprietary LAN, then lost that and got ethernet. And finally
> evaporated in a cloud of client-server computing smoke.

Not relevant to the discussion of Y2K issues.

> I'm not so up on the mainframe world - but I'd be surprised if the
> change from dumb terminals to PC clients didn't have a pretty major
> effect on the software at the back end.

This was mainframe stuff. Most PC software didn't have Y2K bugs, and there
weren't PCs in common use in 1982.

PCs have had negligible effect on mainframe applications, other than to
provide new ways of feeding them.

> Where do you get your conclusions that there was much software out there
> that was worth re-writing eighteen years ahead of time? Remember to
> allow for compound interest on the money invested in that development...

Software development costs are inversely proportional to the fourth power of
the time allotted. That's way beyond the inflation rate.
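
That fourth-power figure matches the exponent in Putnam's software equation
(for a fixed-size job, effort scales roughly as 1/schedule^4). A rough
sketch of what that implies - the baseline numbers are invented purely for
illustration:

    #include <stdio.h>
    #include <math.h>

    /* Rough sketch of the fourth-power schedule/effort trade-off.
     * The 18-month / 100 person-month baseline is invented. */
    int main(void)
    {
        const double base_time   = 18.0;   /* months allotted (invented) */
        const double base_effort = 100.0;  /* person-months at that pace (invented) */

        for (double t = base_time; t >= 6.0; t -= 6.0) {
            /* effort grows as (base_time / t)^4 when the schedule is compressed */
            double effort = base_effort * pow(base_time / t, 4.0);
            printf("%4.0f-month schedule -> ~%6.0f person-months\n", t, effort);
        }
        return 0;
    }

Compressing the same job from 18 months to 6 multiplies the effort by 3^4 = 81
on that model, which is the sense in which early, unhurried fixes come cheap.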

Y2K repair costs were inflated by the failure to deal with them early, not
reduced.

The point of my example wasn't that Y2K should have been handled earlier, but
that the presence of the bug was due not to developer fault but to a management
decision, a point you ignored.

--
Lew