From: Ken Smith on
In article <erpfgo$8qk_003(a)s934.apx1.sbo.ma.dialup.rcn.com>,
<jmfbahciv(a)aol.com> wrote:
>In article <ur3vt2tl5lg3ujbtu14tgksbjd6s35h51j(a)4ax.com>,
> MassiveProng <MassiveProng(a)thebarattheendoftheuniverse.org> wrote:
>>On Fri, 23 Feb 2007 14:59:04 +0000 (UTC), kensmith(a)green.rahul.net
>>(Ken Smith) Gave us:
>>
>>>No great amount of care is needed. I've done that sort of restore a few
>>>times with no great trouble. Since files are stored with the modification
>>>date, a copy command that checks dates does the hard part.
>>>
>> Batch (read DOS type batch file) driven backup routines worked
>>flawlessly for me for YEARS, and only backed up what was needed, and
>>never overwrote a newer file with an older file.
>
>Using your method, a restore would have to start with Backup Tape
>#1, then #2, then #3, ....finishing with the last backup tape made.
>If you have been doing this for three years, you have 1000 tapes
>to restore.

No, if the copy checks the dates, you can load the backups in any order.
What you do in practice is mount the complete backup and then the newest
incremental. You then mount the previous incremental and then the one
before that.

If your software is any good, it will let you know when you can stop. All
that's needed is to record the dates on all of the files. A fairly simple
script can tell you if you have more to do.
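
Roughly, the date check amounts to something like this (a sketch in
Python, assuming each backup set is already mounted as an ordinary
directory; the mount points are only examples):

import os
import shutil

def restore_newer(backup_dir, restore_dir):
    """Copy a file from backup_dir only if it is newer than what is
    already in restore_dir, so the backup sets can be applied in any
    order."""
    for root, _dirs, files in os.walk(backup_dir):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, backup_dir)
            dst = os.path.join(restore_dir, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            # copy2 preserves the modification date, which is the
            # whole basis of the date check
            if (not os.path.exists(dst)
                    or os.path.getmtime(src) > os.path.getmtime(dst)):
                shutil.copy2(src, dst)

# Full backup and incrementals, in whatever order they happen to mount:
# restore_newer("/mnt/full", "/restored")
# restore_newer("/mnt/incr-2007-02-20", "/restored")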


>Another method was to do a full backup save each day. This will
>work until you find that you lost a source file sometime in the
>last 12 years. Now how do you find the last save of that file?

This is not a problem in practice if the copy is smart about dates.

The usual practice is to do a full backup every so often and incremental
ones in between.
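
For the "which tape has the last save of that file" question, the same
date information is enough; a rough sketch along the same lines (the
mount points are invented for the example):

import os

def last_save(filename, backup_dirs):
    """Look through every mounted backup set and report the newest
    copy of a given file."""
    newest = None
    for backup in backup_dirs:
        for root, _dirs, files in os.walk(backup):
            if filename in files:
                path = os.path.join(root, filename)
                mtime = os.path.getmtime(path)
                if newest is None or mtime > newest[1]:
                    newest = (path, mtime)
    return newest  # (path, mtime) of the most recent copy, or None

# last_save("lost_source.c", ["/mnt/full", "/mnt/incr-jan", "/mnt/incr-feb"])
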
--
kensmith(a)rahul.net forging knowledge

From: Tony Lance on
On Fri, 23 Feb 2007 17:35:09 +0000, Tony Lance <judemarie(a)bigberthathing.co.uk> wrote:

Big Bertha Thing tidings
Cosmic Ray Series
Possible Real World System Constructs
http://web.onetel.com/~tonylance/tidings.html
Access page for 4K Web Page
Astrophysics net ring Access site
Newsgroup Reviews including de.sci.physik

Tidings of the battle of Lens

From the book
Twenty Years After
by Alexandre Dumas
Published by George G. Harrup & Co. Ltd., 1923
Reprinted 1929
(C) Copyright Tony Lance 1998
Distribute complete and free of charge to comply.


Big Bertha Thing nation

In "A Town Like Alice" by Nevil Shute the heroine has to ride
forty miles through the "wet", at her first time on a horse,
to save someone's life. She does it and ends up in hospital,
OK but unable to sit down for a week. Her fiancé gets told
"That's some Sheila that you have got there, mate."

Tony Lance
judemarie(a)bigberthathing.co.uk


Newsgroups: swnet.sci.astro, sci.chem
From: Tony Lance <judemarie(a)bigberthathing.co.uk>
Date: Fri, 16 Feb 2007 16:52:45 +0000
Local: Fri, Feb 16 2007 4:52 pm
Subject: Re: Big Bertha Thing unified

Friday, November 14, 1997 01:43:02 PM
Message
From: Tony Lance
Subject: Big Bertha Thing 5
To: FC Mods Discussion
Big Bertha Thing 5

Some people think that Big Bertha is less suitable for the
Mods Conf. than for the Classical Particle Conf.
Have they heard the long name for the CP Conf?
High Energy Particle Physics Rock Hard Science With a
Fringe Perspective.
They think that is a better place to put Big Bertha.
They must be beside themselves, which proves the thing.

From: Ken Smith on
In article <erpfvv$8qk_001(a)s934.apx1.sbo.ma.dialup.rcn.com>,
<jmfbahciv(a)aol.com> wrote:
>In article <ermvfo$rph$5(a)blue.rahul.net>,
> kensmith(a)green.rahul.net (Ken Smith) wrote:
>>In article <ermm1f$8qk_001(a)s774.apx1.sbo.ma.dialup.rcn.com>,
>> <jmfbahciv(a)aol.com> wrote:
>>>In article <era3ti$tvp$6(a)blue.rahul.net>,
>>> kensmith(a)green.rahul.net (Ken Smith) wrote:
>>[.....]
>>>>> The problem is that the software side
>>>>>of the biz is dragging its heels and the hardware types are
>>>>>getting away with not including assists to the OS guys.
>>>>
>>Most hardware guys have to design hardware to run with Windows.
>>>
>>>Sigh! Windows' OS has a grandfather that required those hardware
>>>assists. Your point is making an excuse for lousy design
>>>requirements.
>>
>>No, I am pointing out what has really happened. Windows was written to
>>run on the usual DOS hardware. Gradually, features got added until it
>>needed more stuff than DOS. If I designed a 4096 by 4096 display, I
>>wouldn't sell very many unless I could make a windows driver for it.
>>
>>
>>[.....]
>>>>Even just suggesting that there be true backups of peoples machines throws
>>>>them into a panic.
>>>
>>>Good. That's why you should just say the words. That will have
>>>a ripple effect throughout the industry.
>>
>>No, I need my computer to work and be backed up. I don't give the hind
>>most 3/4 of a rat what happens to the average windows user's data.
>
>I know that you don't care. I do care. That is why you don't
>understand about all kinds of computing usage and I do.

You are assuming that I don't know about things I don't care about; this is
a serious error on your part. I know that there are many people out there
who have not yet seen the light and still run Windows. I know that these
people are doomed to lose valuable data at some time in the future. I
know that fixing this will require some software that gets around things
Windows does. I don't run Windows. I run Linux. As a result, I want to
back up my data on a Linux box. I also want to protect myself from the
bad effects of Windows losing data on someone else's machine. This is why
I raise the issue.


>>>> "Imagine an evil person gets to the PC, deletes all
>>>>the files of that user and reformats the harddisk on the machine. How
>>>>long would it take to put it all back as a working system?" has been the
>>>>question I have asked.
>>>
>>>Instead of saying evil person, just say lightning strike or power
>>>surge or blizzard/hurricane when everything shuts down for 2 weeks.
>>
>>That is a lot less damage than an evil person can cause. Backing up by
>>storing on two computers will serve to protect against lightning.
>
>No it won't. There a billions of dollars spent on trying to
>make one set of computing services non-local.

Either you just lack imagination about what an evil person can do, or you
overestimate the problem caused by something like a lightning strike. An
evil person can destroy any copy on any machine he has the ability to
write to. This means that he can delete all the data on the remote
machines too. This is why you need some write-once storage in the system.
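
To make the write-once idea concrete, here is an illustration only (the
names are made up, and on ordinary storage a root-level attacker could
still delete the files; the point is that the write-once property has
to be enforced by the media or by a machine the attacker cannot write
to):

import os
import shutil
import time

def write_once_snapshot(source_dir, archive_dir):
    """Store a new, timestamped snapshot and never touch existing
    ones.  Nothing in this path ever overwrites or deletes an old
    copy."""
    stamp = time.strftime("%Y-%m-%d-%H%M%S")
    target = os.path.join(archive_dir, stamp)
    if os.path.exists(target):
        raise FileExistsError("refusing to overwrite an existing snapshot")
    shutil.copytree(source_dir, target)
    return target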

[.....]
>>>>On just a single PC it is quite easy.
>>>
>>>No, it is not. The way files, directories, structures and MOST
>>>importantly, data has been organized makes it almost impossible
>>>to manage files on a PC.
>>
>>We are talking about a backup. You can just copy the entire hard disk
>>when it is a single PC.
>
>That is not a backup of the files.
>
>You seem to be talking about a bit-to-bit copy. That will also
>copy errors which don't exist on the output device.

I am talking about a complete and correct image of the drive. It
is a bit-by-bit copy. Usually it is stored on a larger drive without
compression. If something goes bad, you can "loop back and mount" the
image. This gives you a read-only exact copy of the file system as it
was. You can then simply fix the damaged file system.
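
As a sketch of that procedure on a Linux box (device name, image path
and mount point are only examples, and both steps need root):

import subprocess

def image_and_mount(device="/dev/sda", image="/backup/disk.img",
                    mountpoint="/mnt/old-system"):
    """Take a raw bit-by-bit image of the drive, then loopback-mount
    it read-only so the old file system can be browsed while the
    damaged one is repaired."""
    # bit-by-bit copy of the whole device into an image file
    subprocess.run(["dd", "if=" + device, "of=" + image, "bs=4M"],
                   check=True)
    # present the image as a read-only file system
    subprocess.run(["mount", "-o", "loop,ro", image, mountpoint],
                   check=True)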


[....]
>>>That's called an incremental backup. Great care needs to occur
>>>to combine full and incremental backups.
>>
>>No great amount of care is needed. I've done that sort of restore a few
>>times with no great trouble. Since files are stored with the modification
>>date, a copy command that checks dates does the hard part.
>
>You are very inexperienced w.r.t. this computing task.

You seem to be claiming knowledge you don't have.

> It is not
>as easy as you make it out to be.

It can in fact be easier. I knew someone who wrote a lot of the software
used by banks and insurance companies. They stored the data transaction
by transaction, with daily incrementals, monthly near-full backups and
yearly total backups. The system for recovery was very well tested and
automated. After every software change, they had to requalify the code.
This meant restoring an old backup, making a new one and restoring
that. I assume that software like that is still the common practice.
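
The tiered scheme boils down to a simple rule for which sets a restore
has to read; a sketch of that rule (the set names and dates are
invented for the example):

import datetime

def sets_needed(restore_to, yearly, monthly, daily):
    """Given lists of (date, name) backup sets, pick the latest yearly
    total backup, then every monthly near-full after it, then every
    daily incremental after the last of those, up to the restore
    date."""
    def latest_before(sets, cutoff):
        eligible = [s for s in sets if s[0] <= cutoff]
        return max(eligible) if eligible else None

    plan = []
    start = datetime.date.min
    year_set = latest_before(yearly, restore_to)
    if year_set:
        plan.append(year_set)
        start = year_set[0]
    months = sorted(s for s in monthly if start < s[0] <= restore_to)
    plan.extend(months)
    if months:
        start = months[-1][0]
    plan.extend(sorted(s for s in daily if start < s[0] <= restore_to))
    return [name for _date, name in plan]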



> Now think about that fact
>and all the people who are going to be doing all banking online.

It doesn't matter if you bank online or in person. If your bank's
computers fail, you can't do a transaction. If they lose all their
computer data, you will have a devil of a time getting at your money.
This is why I always try to keep more than one bank, a couple of credit
cards and some cash. I know that there is some risk that a bank may have
a windows machine connected to the important information.

--
kensmith(a)rahul.net forging knowledge

From: Ken Smith on
In article <erpam2$8ss_002(a)s934.apx1.sbo.ma.dialup.rcn.com>,
<jmfbahciv(a)aol.com> wrote:
>In article <ermtbj$rph$1(a)blue.rahul.net>,
> kensmith(a)green.rahul.net (Ken Smith) wrote:
>>In article <ermofh$8qk_003(a)s774.apx1.sbo.ma.dialup.rcn.com>,
>> <jmfbahciv(a)aol.com> wrote:
>>>In article <er4i05$1ln$7(a)blue.rahul.net>,
>>> kensmith(a)green.rahul.net (Ken Smith) wrote:
>>>>In article <er47qv$8qk_001(a)s897.apx1.sbo.ma.dialup.rcn.com>,
>>>> <jmfbahciv(a)aol.com> wrote:
>>>>[.....]
>>>>>>NT was written in the first place for a processor that didn't do
>>>>>>interrupts well.
>>>>>
>>>>>Nuts. If the hardware doesn't do it, then you can make the software
>>>>>do it. As TW used to say, "A small matter of programming".
>>>>
>>>>On the N10 there was no way to code around it. The hardware was designed
>>>>so that it had to walk to the breakroom and back before it picked up the
>>>>phone. Nothing you could say over the phone would help.
>>>>
>>>>
>>>>
>>>>>> The N10 AKA 860 processor had to spill its entire
>>>>>>pipeline when interrupted. This slowed things down a lot when the code
>>>>>>involved interrupts. When the project was moved back to the X86 world, it
>>>>>>was marketed as secure ... well sort of .... well kind of .... it's better
>>>>>>than 98. I don't think a lot of time was spent on improving the interrupt
>>>>>>performance.
>>>>>
>>>>>You are confusing delivery of computing services by software with
>>>>>delivery of computing services of hardware.
>>>>
>>>>No, hardware sets the upper limit on what software can do.
>>>
>>>That all depends on who is doing the coding.
>>
>>If a CPU chip needs 1 hour to do an add instruction, you can't make it
>>go faster by anything you code. Like I said it sets the upper limit on
>>the performance.
>
>Sigh! If an ADD takes an hour and the computation has to be done
>in less time, then you don't use the ADD instruction. You do
>the addition by hand.

In other words: You need another CPU to do the operation. No amount of
fancy code on a machine that takes an hour per instruction will fix it.

This is what I have been trying to explain to you about the hardware
setting the upper limit on performance.

--
kensmith(a)rahul.net forging knowledge

From: Ken Smith on
In article <erpb68$8qk_001(a)s934.apx1.sbo.ma.dialup.rcn.com>,
<jmfbahciv(a)aol.com> wrote:
>In article <ermu1l$rph$2(a)blue.rahul.net>,
> kensmith(a)green.rahul.net (Ken Smith) wrote:
>>In article <ermmhd$8qk_001(a)s774.apx1.sbo.ma.dialup.rcn.com>,
>> <jmfbahciv(a)aol.com> wrote:
>>>In article <erhn0i$em5$4(a)blue.rahul.net>,
>>> kensmith(a)green.rahul.net (Ken Smith) wrote:
>>[.....]
>>>>Sure you can. If the computer is running a printer server, you can
>>>>predict the right order for the files to be read by the server. If there
>>>>is a task constantly running to take sliding billion point FFTs, you know
>>>>what is best for the FFT part. Just because the human may change
>>>>something it doesn't mean they change everything.
>>>
>>>All of this is single-task, single user thinking.
>>
>>No, it isn't. It is taking a practical example of a way that real
>>multiuser systems actually run.
>
>I know of plenty of OSes and how they actually ran. We even made
>them go.

Then you should know that I am correct in what I am saying about the real
usage.

>
>> It is very common for a small fraction of
>>the tasks to need the large fraction of the memory. This is just the way
>>it is in the real world.
>
>That all depends on the computer site and who the users are.

Everything "depends", but 99.9999% of the cases are like that. There are
very few where the jobs are evenly spread.


>>
>>The computer that is doing the work of posting this is a multiuser
>>machine. It has me on it using only several tens of kilobytes.
>
><GAG> That's too much.

That's what I am using. "pico" is smallish and there is a little overhead
from "bash".

>
>>> Computer usage
>>>by the general population requires more than this. You keep
>>>looking at the load balance from a naive user POV.
>>
>>No, you are just making stuff up because you've been shown to be wrong
>>about the real world of computers today.
>
>Keep thinking like that and you'll never learn something.

From you! You have been shown to be wrong on this subject.



>>I'm using a company that sells computer time like a timesharing company.
>>They also sell modem access, and data storage. This is the modern
>>business model for this sort of company.
>
>And that is one business.

There are a great many like it now. There are also a lot of internet ISP
companies. They have the same sort of usage profile.


[....]
>>You only think that because you haven't stopped to think about what I
>>wrote. We were discussing the case where swapping had to happen. There
>>is no point in asking at this point if it needs to happen because the
>>argument has already shown that it must. There is more data to be
>>maintained than will fit into the RAM of the machine. There is no
>>question about swapping being needed. The discussion is about the
>>advantages of having the code specific to the large memory usage make
>>informed choices about swapping.
>
>You are not talking about swapping; you are talking about the
>working set of pages. You do NOT have to swap code if the
>storage disk is as fast as the swapping disk.

What the devil are you talking about? You were sort of making sense until
you got to this. The "swapping" under discussion is between the swap
volume and the physical RAM. The swap volume can never be anything like
as fast as the RAM. A VM system makes it appear that there is more RAM
than is physically there by using the swap volume.

Do you think that computers still use drum storage or mumble tanks for the
memory?


--
kensmith(a)rahul.net forging knowledge