From: Henrik Carlqvist on
Mike Jones <Not(a)Arizona.Bay> wrote:
> It's when I set cp off copying/moving whole dir trees of data. Doesn't
> matter if I use cp, MC, Thunar, whatever. It's the data shifting that
> sucks up processing capacity.

So you think that cp eats your CPU capacity? If so you should be able to
see this with top.

How much of your CPU is spent in user time during the copy? (1)
How much of your CPU is spent in system time during the copy? (2)
How much of your CPU is spent in idle time during the copy? (3)

In which state is your cp process? (4)
How much CPU time has your cp process consumed when the cp process has
been working for about ten seconds of wall time? (5)

Example output from top, with the above measurements marked:

8:04am up 206 days, 17:48, 2 users, load average: 0.07, 0.02, 0.00
31 processes: 30 sleeping, 1 running, 0 zombie, 0 stopped
CPU states: 7.3% user, 7.3% system, 0.0% nice, 85.3% idle
            (1)        (2)                     (3)

Mem:  15084K av, 13888K used,  1196K free, 5272K shrd, 8580K buff
Swap: 66492K av,  1544K used, 64948K free              1836K cached

  PID USER  PRI NI SIZE RES SHRD STAT %CPU %MEM TIME COMMAND
 4118 henca  10  0  856 456  328 R    14.7  3.0 0:01 top
                                 (4)            (5)
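One way to collect those numbers without sitting and watching top is its batch mode. A sketch, with placeholder copy paths:

```shell
# Kick off a large copy in the background; the paths are placeholders.
cp -r /source/tree /dest/tree &
CP_PID=$!

# After roughly ten seconds of wall time, take one batch (-b)
# snapshot of top so the output can be read or redirected:
sleep 10
top -b -n 1 | head -n 5              # user/system/idle header: (1)-(3)
top -b -n 1 -p "$CP_PID" | tail -n 2 # cp's STAT and TIME columns: (4)-(5)
```

Redirecting the snapshot to a file makes it easy to paste into a follow-up post.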

> As for swap, I was messing about with a default auto-setup script that
> generates an emergency 500MiB swapfile,

As has already been suggested, if you care about performance you should
swap to a partition instead of a file.

> and just whacked in three more to make up the 2GiB RAM I've got on this
> machine.

Do you have any particular reason to match the swap size to your RAM
size? If done right, it can be good to split swap across different
partitions. If done wrong, this is yet another way of losing performance.
Think about your disks' head seek time.

> I don't think it's ever used any of it.

Thinking is good, but knowing is better. Once again top will be able to
tell you the truth.
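Besides top, free and /proc/swaps will show actual swap usage. A quick sketch (the awk line just reformats free's Swap row):

```shell
# Per-device swap size, usage and priority:
cat /proc/swaps

# Summarize the Swap line from free, values in MiB:
free -m | awk '/^Swap:/ {print "swap used: " $3 " MiB of " $2 " MiB"}'
```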

> That's what surprises me, that a machine with this kind of capacity
> should even feel moving data about in the background.

Define background. You seem to think that cp consumes your CPU. If a
process consumes CPU, how would "foreground" processes be able to stay
unaffected? OK, if you have multiple cores it would in theory be possible
for foreground tasks to still have all the CPU power they want.

However, I don't think that your cp process is consuming CPU. I think
it is consuming disk IO. I think you will see a "D" for the state of
your cp process in top. I also think that other processes want to use
your disk, possibly for swapping, and this slows your system down. But
this is only what I think; your output from top will let us know.
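A quick way to check for that "D" state (uninterruptible disk sleep) without watching top interactively, sketched with standard ps output:

```shell
# Print PID, state and command for every process whose state
# field starts with "D" (uninterruptible sleep, usually disk IO):
ps -eo pid,stat,comm | awk '$2 ~ /^D/ {print $1, $2, $3}'
```

Run it a few times during the copy; cp showing up repeatedly would support the disk-IO theory.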

regards Henrik
--
The address in the header is only to prevent spam. My real address is:
hc3(at)poolhem.se Examples of addresses which go to spammers:
root(a)localhost postmaster(a)localhost

From: Mike Jones on
Responding to Aaron W. Hsu:

> Mike Jones <Not(a)Arizona.Bay> writes:
>
>>I've noticed a slowdown in things when setting cp to copy chunks of
>>data.
>
>>This mostly affects resource hungry stuff like Seamonkey, making things
>>feel as if I've lost a CPU.
>
>>I'm thinking about sticking a 'nice' parameter in my .bashrc for cp
>
>>Now is the time to say "Argh! Don't do that!" and/or "Try this..."
>
>>Ideas?
>
> It's one of those tricks that the modern computer vendors play on you.
> See, you do have lots of RAM and lots of processing power in your
> system, but many systems have an I/O bottleneck when it comes to reading
> data to and from the disks. It is always slower to do this sort of
> thing. Unfortunately, many of the Desktop Environments in use today make
> rather heavy usage of the disk, and some programs, like SeaMonkey, do as
> well. This means that when you are saturating your I/O throughput with
> something like the cp program, you're also going to be slowing down
> programs that rely on disk I/O.
>
> In other words, your processors and memory probably are not saturated,
> but your disk throughput probably is. My solution to this is using
> programs that don't rely on the disk unless it makes sense. That way,
> even if the disk is under heavy load, most programs remain responsive,
> because they don't need to write to the disk or read from it.
>
> Aaron W. Hsu


Aha! This explains why LiveCD distros that ignore disk and load
everything into RAM are so "sprightly"!

Next up, how do I force applications to RAM instead of disk? ;)

--
*=( http://www.thedailymash.co.uk/
*=( For all your UK news needs.
From: goarilla on
Mike Jones wrote:
> Responding to Grant:
>
>> On Mon, 14 Dec 2009 16:22:06 GMT, Mike Jones <Not(a)Arizona.Bay> wrote:
>>
>>> Responding to Keith Keller:
>> ...
>>>> What's your disk subsystem like? LVM? md RAID? SATA, PATA, SAS?
>>>> What do the various disk testers (e.g. vmstat, iostat) say? Are you
>>>> swapping at all?
>> ...
>>> 2 SATA HDDs and sometimes a fair bit of data transfer across the LAN to
>>> and from these disks, and from disk to disk.
>> Can you narrow down what causes the slowdown to either HDD file
>> transfer, or over the LAN?
>>
>>> Have multiple swap files
>>> (4)
>>> adding up to RAM size, but this "dragging" thing appears with or without
>>> swap files or swap partitions.
>> Four swapfiles on two HDDs makes little sense, unless you really need
>> the swap space.
>>
>> I place one swap partition on each drive in a multi-drive box, and add
>> ",pri=1" after 'defaults' to the appropriate /etc/fstab lines so that
>> they're treated like RAID0. For example:
>>
>> $ cat /etc/fstab
>> # /etc/fstab for slackware64-13.0 on pooh64 -- 2009-10-07 #
>> /dev/sda3    /            reiserfs  defaults               0 0
>> /dev/sdb3    /usr         reiserfs  defaults               0 0
>> /dev/sdc2    /home/data   reiserfs  defaults,noauto        0 0
>> #
>> /dev/sda5    swap         swap      defaults,pri=1         0 0
>> /dev/sdb5    swap         swap      defaults,pri=1         0 0
>> #
>> #/dev/cdrom  /mnt/cdrom   auto      noauto,owner,ro        0 0
>> /dev/fd0     /mnt/floppy  auto      noauto,owner           0 0
>> #
>> devpts       /dev/pts     devpts    gid=5,mode=620         0 0
>> proc         /proc        proc      defaults               0 0
>> tmpfs        /dev/shm     tmpfs     defaults               0 0
>> #
>> deltree:/home/common  /home/common  nfs  hard,intr
>> deltree:/home/mirror  /home/mirror  nfs  hard,intr,noauto,user
>> black:/home/buffer    /home/buffer  nfs  hard,intr,noauto,user
>> #
>>
>> No matter what size swap you have available, it's the swap usage that
>> counts, use top, free or cat /proc/swaps to see usage.
>>
>> If you're regularly using lots of swap, it's better to add real memory.
>>
>> Grant.
>
>
>
> It's not a LAN thing. It's when I set cp off copying/moving whole dir
> trees of data. Doesn't matter if I use cp, MC, Thunar, whatever. It's
> the data shifting that sucks up processing capacity.
>
> As for swap, I was messing about with a default auto-setup script that
> generates an emergency 500MiB swapfile, and just whacked in three more to
> make up the 2GiB RAM I've got on this machine. I don't think it's ever
> used any of it.
>
> That's what surprises me, that a machine with this kind of capacity should
> even feel moving data about in the background.
>
Maybe your hard drives are in PIO mode instead of (m|s)DMA.
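For PATA drives that can be checked with hdparm. A sketch: the device names are placeholders, hdparm needs root, and on SATA/libata the DMA flag is largely meaningless, though the throughput test still gives a hint:

```shell
# PATA: "using_dma = 1 (on)" is what you want to see:
hdparm -d /dev/hda

# Either bus: rough cached vs. buffered read throughput:
hdparm -tT /dev/sda
```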
From: goarilla on
Mike Jones wrote:
> Responding to Aaron W. Hsu:
>
>> Mike Jones <Not(a)Arizona.Bay> writes:
>>
>>> I've noticed a slowdown in things when setting cp to copy chunks of
>>> data.
>>> This mostly affects resource hungry stuff like Seamonkey, making things
>>> feel as if I've lost a CPU.
>>> I'm thinking about sticking a 'nice' parameter in my .bashrc for cp
>>> Now is the time to say "Argh! Don't do that!" and/or "Try this..."
>>> Ideas?
>> It's one of those tricks that the modern computer vendors play on you.
>> See, you do have lots of RAM and lots of processing power in your
>> system, but many systems have an I/O bottleneck when it comes to reading
>> data to and from the disks. It is always slower to do this sort of
>> thing. Unfortunately, many of the Desktop Environments in use today make
>> rather heavy usage of the disk, and some programs, like SeaMonkey, do as
>> well. This means that when you are saturating your I/O throughput with
>> something like the cp program, you're also going to be slowing down
>> programs that rely on disk I/O.
>>
>> In other words, your processors and memory probably are not saturated,
>> but your disk throughput probably is. My solution to this is using
>> programs that don't rely on the disk unless it makes sense. That way,
>> even if the disk is under heavy load, most programs remain responsive,
>> because they don't need to write to the disk or read from it.
>>
>> Aaron W. Hsu
>
>
> Aha! This explains why LiveCD distros that ignore disk and load
> everything into RAM are so "sprightly"!
>
> Next up, how to I force applications to RAM instead of disk? ;)
>

I think they do it by copying /usr to a ramfs or tmpfs and then
remounting /usr, while keeping /bin, /sbin and /boot on the original
media. Another method which I guess would work would be to chroot into
the ramfs. Yes, I'm just _guessing_ here.

But the first method has problems when you update your system: either
you'll have to hack around with squashfs or copy the files multiple
times when updating.
From: Grant on
On Tue, 15 Dec 2009 17:13:22 +0100, "goarilla(a)work" <kevindotpaulus(a)mtmdotkuleuven.be> wrote:

>>>> 2 SATA HDDs and sometimes a fair bit of data transfer across the LAN to
....
>maybe your hard drives are in PIO mode instead of (m|s)DMA

SATA drives? I doubt it ;)

Grant.
--
http://bugsplatter.id.au