From: Richard Maine on
Steven Fisher <sdfisher(a)spamcop.net> wrote:

> In article <1jf17w7.1v1q9551p0p1cqN%nospam(a)see.signature>,
> nospam(a)see.signature (Richard Maine) wrote:
>
> > That's one reason I set my TM schedule to kick off at most every 4
> > hours instead of every hour.
>
> You realize that in making it go off every four hours, it'll have a much
> larger impact when it *does* backup?

That depends on usage patterns. It can, of course, have up to 4 times
the impact per run, but that assumes quite a bit about how the machine
actually gets used.

I happen to know that I have a pattern whereby a significant part of the
backup bandwidth comes from a moderately small set of files that get
modified fairly often. If I let it back up once an hour, those files will
get recopied each hour (well, not every hour, but many of them). If I
wait 4 hours, it will still be largely the same set of files getting
backed up - not 4 times as many.
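
To put rough numbers on that (the figures are invented purely for
illustration; the point is only that the "hot" files get copied once per
run no matter how long the interval is), a quick Python sketch:

HOT_FILES_MB = 500            # files that change every hour (assumed)
OTHER_CHURN_MB_PER_HOUR = 50  # genuinely new data per hour (assumed)

def per_run_volume(interval_hours):
    """MB copied in one backup run under the given schedule."""
    return HOT_FILES_MB + OTHER_CHURN_MB_PER_HOUR * interval_hours

for interval in (1, 4):
    print(f"every {interval} h: {per_run_volume(interval)} MB per run")
# prints:
#   every 1 h: 550 MB per run
#   every 4 h: 700 MB per run   <- nowhere near 4 x 550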

Plus, with a longer interval there is at least some chance that I won't
be on the computer at all when the time comes; with backups every hour,
there's essentially no chance of that. I've actually been tempted to set
the darn thing to something like a 24-hour interval and have it run only
at a time of day that I'm seldom on. I don't really feel the need for
"protection" more finely grained than that, and if I do something really
critical, I can take extra measures. The main reason I don't go to the
24-hour schedule is that I'm not sure TM would handle it well if the
finely-grained schedule landed on the same 24-hour interval as the daily
one; maybe it would.

No, I'm not talking about the huge files like virtual machine disks,
which I'm aware get excluded.

I can see this all quite clearly in my remote backups. That's because I
can monitor exactly which file is currently being backed up (and the
bandwidth is low enough that the larger files take long enough to be
pretty easy to spot).

I've got a pretty good idea what my backups are doing. I'm aware that
most people don't, but I know mine. I used to do sysadmin of some
systems at NASA before I retired, so I'm pretty used to having at least
a rough idea of system load impacts of various things and making some of
the tradeoffs. There was a time when I had one system where a full
restore from backup would have taken literally multiple weeks. That
needed some special design attention; mostly it used a scheme where the
first-level backup could be used "live" without going through a complete
restore process. Things have changed since then. I could now fit that
system on the 1TB disk of this iMac. It would be a little tight, but it
would fit. Back then, it was about a million dollars of robotic
storage... and doing it hard-disk-based would have been well beyond
the resources I could beg.

On the whole, my current backup bandwidth is pretty modest most times.
Every once in a while, a few GB of stuff gets added/changed/whatever,
but that's not the day-to-day stuff.

--
Richard Maine | Good judgment comes from experience;
email: last name at domain . net | experience comes from bad judgment.
domain: summertriangle | -- Mark Twain
From: David Empson on
Lewis <g.kreme(a)gmail.com.dontsendmecopies> wrote:

> In message <tom_stiller-3EAEB0.07142508032010(a)news.individual.net> Tom
> <tom_stiller(a)yahoo.com> wrote:
>
> >1 GB/Sec (1000baseT <full-duplex,flow-control> none). File transfers
> >can achieve several hundred MB/sec while TM backups rarely exceed a few
> >tens of MB/sec. HD Video is delivered over the same interface at about
> >16 MB/sec/channel (I have two channels in).
>
> You are confusing B (bytes) with b (bits).

Agreed. It is called "Gigabit" Ethernet for a reason.

The theoretical maximum transfer rate of 1000Base-T is just under
125 MB/s (megabytes per second). I've never come close to that in
reality, but it depends on the speed of both computers and of the hard
drives involved.

(Compare with Firewire 800, which can get 80 to 90 MB/s; Firewire 400
can get 40 to 45 MB/s; USB 2.0 can't achieve much more than 30 MB/s even
though its raw bit rate implies it might be able to get close to 60
MB/s.)
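
For reference, the raw arithmetic behind those figures: divide the line
rate in megabits per second by 8 to get the ceiling in megabytes per
second; protocol overhead keeps sustained transfers below that ceiling.
A quick sketch in Python (the nominal line rates are the usual published
numbers):

# Nominal line rates in Mb/s; dividing by 8 gives the MB/s ceiling.
interfaces = {
    "Gigabit Ethernet": 1000,
    "Firewire 800": 800,
    "Firewire 400": 400,
    "USB 2.0": 480,
}

for name, mbit in interfaces.items():
    print(f"{name}: {mbit} Mb/s raw -> {mbit / 8:.1f} MB/s ceiling")
# Gigabit Ethernet: 1000 Mb/s raw -> 125.0 MB/s ceiling
# Firewire 800: 800 Mb/s raw -> 100.0 MB/s ceiling
# Firewire 400: 400 Mb/s raw -> 50.0 MB/s ceiling
# USB 2.0: 480 Mb/s raw -> 60.0 MB/s ceiling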

> A decently fast SATA RAID will be able to write about 30-45MB/s, which is
> 250-350Mb/s

I assume you meant a NAS which is a "decently fast SATA RAID".

For a directly connected drive:

30 MB/s is about the speed of a single 5400 RPM 2.5" hard drive, and 45
MB/s would be slow for a single 7200 RPM 3.5" hard drive in the last few
years.

Modern 1 TB or larger drives can achieve transfer rates over 100 MB/s
for at least the outer part of the drive.

The theoretical limit of SATA-1 is 150 MB/s, and SATA-2 is 300 MB/s. A
decent striped SATA RAID array should be able to achieve 150 MB/s or
faster, if connected to a SATA or eSATA port on the computer. It would
saturate any of Gigabit Ethernet, Firewire 800/400 or USB 2.0.

If 30-45 MB/s is typical for a good Gigabit Ethernet NAS, then Firewire
800 is a better choice for connecting a RAID array to a single computer
(or to the computer which needs the best throughput). eSATA would be
even better, of course.

--
David Empson
dempson(a)actrix.gen.nz
From: Richard Maine on
Lewis <g.kreme(a)gmail.com.dontsendmecopies> wrote:

> In message <1jf17w7.1v1q9551p0p1cqN%nospam(a)see.signature>
> Richard <nospam(a)see.signature> wrote:
> > Lewis <g.kreme(a)gmail.com.dontsendmecopies> wrote:
>
> >> And how do you propose to throttle the file copying when it's a backup
> >> as opposed to any other time?
>
> > It is common enough with other kinds of backup software. My mozy remote
> > backup has an option to throttle the bandwidth it uses.
>
> Bandwidth != writing to disk

Sure it does. The term bandwidth is not restricted to networks. I'm
guessing that must be what you are thinking, because I can't imagine any
other reason why one might say that disk I/O does not have a bandwidth.
*ANYTHING* that involves data transfer has a bandwidth; that includes
disk writes. For that matter, since TM commonly runs over a network
connection, limiting its network bandwidth directly limits its disk
writes.
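
TM doesn't currently offer such a knob, but the principle carries over.
As a rough sketch (nothing Apple-specific; the rate cap and chunk size
are arbitrary assumptions), throttling disk writes looks just like
throttling an upload - cap the bytes moved per second and sleep whenever
the copy gets ahead of that budget:

import time

def throttled_copy(src_path, dst_path,
                   max_bytes_per_sec=10 * 1024 * 1024,  # assumed cap
                   chunk_size=1024 * 1024):
    """Copy a file while keeping the average write rate under a cap.

    Illustrative sketch only; a real backup tool would also deal with
    errors, metadata and sync policy.
    """
    start = time.monotonic()
    written = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            dst.write(chunk)
            written += len(chunk)
            # If we're ahead of the allowed budget, sleep off the excess.
            ahead = written / max_bytes_per_sec - (time.monotonic() - start)
            if ahead > 0:
                time.sleep(ahead)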

> I'm sure it must be
> trivial. Why don't you just whip up a version of Time Machine that
> behaves the way you want it to?

I was making a serious point. If you prefer to just make snarky
comments, then this thread is over as far as I'm concerned. Of course,
that might make you happy. Be happy, then. The snarky question doesn't
deserve an answer.

> Prepare to be surprised, if you ever decide to actually learn anything
> about disk IO.

Personal insults to go with the snarky comments. I don't recall making
personal insults in my post. Make that my prior post. Consider yourself
insulted back. I've no interest in trying to get cute about it.

Bye.

--
Richard Maine | Good judgment comes from experience;
email: last name at domain . net | experience comes from bad judgment.
domain: summertriangle | -- Mark Twain
From: isw on
In article <slrnhpahtu.2mqg.g.kreme(a)cerebus.local>,
Lewis <g.kreme(a)gmail.com.dontsendmecopies> wrote:

> In message <isw-A3B26B.09345908032010@[216.168.3.50]>
> isw <isw(a)witzend.com> wrote:
> > In article <slrnhp92uv.2nlf.g.kreme(a)cerebus.local>,
> > Lewis <g.kreme(a)gmail.com.dontsendmecopies> wrote:
>
> >> In message <isw-CFFDDE.20323107032010@[216.168.3.50]> isw
> >> <isw(a)witzend.com> wrote:
> >> >In article <4MLkn.10889$0w4.2497(a)unlimited.newshosting.com>, Lewis
> >> ><notmyemail(a)example.com> wrote:
> >>
> >> >>On 06-Mar-10 23:57, isw wrote:
> >> >>>I used to use Retrospect (in the ancient days of OS 8 and 9), and
> >> >>>even on 100 MHz Macs, it never ever caused poor performance.
> >> >>
> >> >>Disks were much smaller and you were moving far less data and it was
> >> >>being copied much slower.
> >>
> >> >Yes, and that is precisely how I think T-M should behave.
> >>
> >> And how happy would you be when copying files on your system works at
> >> 1992 speeds?
>
> > Just copying files is one thing, but for backups, I simply don't care
> > how long they take; what's important to me is that I not notice the
> > process because of its effect on other processes I'm using at the time.
>
> And how do you propose to throttle the file copying when it's a backup
> as opposed to any other time?

When I was using Retrospect with OS 8 and OS 9, the client app allowed
the user to select how intrusive/fast the backup was. I'd expect
something like that from a well-written contemporary app, too (so I'm
surprised that T-M doesn't have it).

Isaac
From: isw on
In article <slrnhpb7ef.gpu.g.kreme(a)cerebus.local>,
Lewis <g.kreme(a)gmail.com.dontsendmecopies> wrote:

> In message <1jf17w7.1v1q9551p0p1cqN%nospam(a)see.signature>
> Richard <nospam(a)see.signature> wrote:
> > Lewis <g.kreme(a)gmail.com.dontsendmecopies> wrote:
>
> >> And how do you propose to throttle the file copying when it's a backup
> >> as opposed to any other time?
>
> > It is common enough with other kinds of backup software. My mozy remote
> > backup has an option to throttle the bandwidth it uses.
>
> Bandwidth != writing to disk
>
> > It even allows
> > you to specify time periods when the throttling applies. I've seen other
> > utilities with similar things. Sure, they are most commonly throttling
> > to avoid hogging your internet connection instead of anything local, but
> > I see no essential difference in principle, or likely in implementation.
>
> Well, it's nice that you don't see any difference. I'm sure it must be
> trivial. Why don't you just whip up a version of Time Machine that
> behaves the way you want it to?
>
> Simply put, it is not at *ALL* the same.
>
> Hint: When I tried out Crashplan writing to another disk hogged the
> entire machine.
>
> > Apple doesn't happen to do that with TM at the moment, but I'd be
> > surprised if there were any fundamental reason why they couldn't.
>
> Prepare to be surprised, if you ever decide to actually learn anything
> about disk IO.
>
> > And I would question the "vast" in
>
> >> one that the vast majority of the time everyone wants completed as quickly
> >> as possible.
>
> > I wouldn't argue against it being a majority, but I think the "vast"
> > overstates it.
>
> I'll go further. I am willing to bet that, to five nines, people want disk
> IO to complete as fast as possible. That is 99.999% of the time.

You seem to be confusing "Disk IO" with the overall performance of a
backup application.

There is no reason a backup task needs to saturate the disk channel
continuously for a long period of time. If it writes small chunks, with
pauses in between, the user will experience less lag. Simple.
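
Something along these lines (a minimal sketch with arbitrary chunk and
pause values, not a claim about how any shipping backup tool actually
behaves):

import time

def polite_copy(src_path, dst_path, chunk_size=4 * 1024 * 1024,
                pause=0.05):
    """Copy in modest chunks, pausing briefly after each one so other
    processes get a turn at the disk. Illustrative only."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            dst.write(chunk)
            time.sleep(pause)  # deliberately leave idle gaps in the I/O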

Isaac