From: mayayana on
>
> If this is the case, then I assume that you never defrag your hard drive?
> Or run diagnostics to monitor its health?
>

Actually, defrag reduces the movement of the arm
by keeping files contiguous. But as I said, I meant
my comment in the sense of "it doesn't hurt to be
efficient". (If only the hotshots at Microsoft would
take that approach, maybe we wouldn't have 7-9GB
operating systems that access the disk constantly
for no good reason. :)

I certainly don't believe that writing 700 files will
kill a disk.


From: Al Dunbar on


"mayayana" <mayayana(a)nospam.invalid> wrote in message
news:uhH6PaauKHA.812(a)TK2MSFTNGP06.phx.gbl...
>>
>> If this is the case, then I assume that you never defrag your hard drive?
>> Or run diagnostics to monitor its health?
>>
>
> Actually, defrag reduces the movement of the arm
> by keeping files contiguous.

Yes, but it does this by moving the arm a lot while the defrag operation is
in progress. I agree that defragging is a good idea, but more because of its
effect on the efficiency of the computer's operation than because it reduces
disk drive wear and tear. And there can be too much of a good thing: imagine,
for example, a batch script run as a scheduled task that defrags the hard
drive in an infinite loop.

> But as I said, I meant
> my comment in the sense of "it doesn't hurt to be
> efficient".

But it actually *can* hurt to "be efficient" sometimes. You need to apply
that efficiency only when it will actually make a measurable difference and
when the cost of making the change does not outweigh just leaving it be.

Suppose that by some simplification strategy we could reduce the disk
accesses for the OP's particular application from 5,000 down to 50. Hey, a
reduction of 99% looks pretty good.

But if the normal operation of his computer generates 500,000 disk accesses,
that saving of 4,950 accesses amounts to only about 1% of the total work
being done by the drive.
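
To put rough numbers on that, here is the arithmetic sketched in Python. The
5,000, 50, and 500,000 counts are just the illustrative figures from above,
not measurements of anything:

    # Illustrative arithmetic only, using the example figures from this post.
    script_before = 5_000   # disk accesses by the script, before optimizing
    script_after = 50       # disk accesses by the script, after optimizing
    background = 500_000    # disk accesses from normal operation

    saved = script_before - script_after          # 4,950 accesses saved
    vs_script = saved / script_before             # 0.99 -> a 99% reduction
    vs_total = saved / (background + script_before)   # ~0.0098 -> about 1%

    print(f"Saved {saved} accesses: {vs_script:.0%} of the script's work, "
          f"but only {vs_total:.1%} of the drive's total work")

Which percentage matters depends entirely on what you divide by.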

Add to this one other issue: how one determines the actual impact of a
script's disk accesses on the underlying hardware, and how one determines
the effect of changes made to the script.
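
One crude way to get at the second question is simply to time the two
versions of the script. A minimal sketch in Python (the file names and the
count of 700 are made up for illustration; this measures elapsed time, not
hardware wear, which I know of no practical way to measure from a script):

    import tempfile, time
    from pathlib import Path

    def write_many_small(folder: Path, count: int = 700) -> None:
        # One open/write/close per record, each in its own file.
        for i in range(count):
            (folder / f"record_{i}.txt").write_text(f"record {i}\n")

    def write_one_big(folder: Path, count: int = 700) -> None:
        # All records in a single file, one open/close total.
        with open(folder / "records.txt", "w") as f:
            for i in range(count):
                f.write(f"record {i}\n")

    for func in (write_many_small, write_one_big):
        with tempfile.TemporaryDirectory() as tmp:
            start = time.perf_counter()
            func(Path(tmp))
            print(f"{func.__name__}: {time.perf_counter() - start:.3f} s")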

In my opinion, the best result of "efficiency" is that efficient code can
often be simpler, and therefore less work to maintain. Now, show us some
statistics to indicate that computers running efficient scripts tend to
outlast those running inefficient ones, and I might change my mind.

> (If only the hotshots at Microsoft would
> take that approach, maybe we wouldn't have 7-9GB
> operating systems that access the disk constantly
> for no good reason. :)

Well, there must be a "reason" of some sort. It would be interesting to find
out which are the bad reasons. Fortunately, some of the less useful ones can
be turned off, like the "visual effects" you can disable under Performance on
the Advanced tab of System Properties.

/Al