From: David Mathog on 11 Aug 2006 14:32

I'm trying to back up a remote system using tar over ssh to a local SDLT320 tape drive, /dev/nst0. I'm trying NOT to use RMT, since the remote machine is out on the public net and I don't want to punch that particular security hole into the firewall on the server that has the tape drive. Here's the command I'm trying:

  % ssh root@$REMOTE "cd $TMOUNTNAME ; tar cvlf - . " 2>>$BLOG | \
      dd of=/dev/nst0 bs=10240 2>>$BLOG

bs=10240 because the default block size on tar is 20 x 512 = 10240. Unfortunately, it doesn't work. Two different things mess up.

For $TMOUNTNAME = "/" the commands seem to execute right at first; for instance, on the remote node one sees:

  root 15562 15560 0 11:03 ? 00:00:00 bash -c cd / ; tar cvlf - .
  root 15607 15562 0 11:03 ? 00:00:00 tar cvlf - .

However, it isn't actually doing what it has been told to.

1. The log file shows that after it does ./lost+found (correctly), the rest of the files backed up are all from /home, but / and /home are different partitions on the remote system: /dev/hda1 and /dev/hdb2. -l is supposed to keep tar on the local file system, which is "/". Apparently it isn't. Note the remote machine is Debian, the local machine is Mandrake 10.

2. This runs for a while and then jams. There is not a clue why in any of the log files. All the programs are still running (dd, tar, ssh) but nothing is going on: no CPU usage, and iostat falls to zero on the disk being backed up. I tried running this manually:

  ssh root@$REMOTE "cd / ; tar cvlf - ." 2>>test.log | wc

and it went merrily along, much past the point at which the tape version with dd failed. Note that /home is around 160 GB. Apparently there's some sort of synchronization problem between ssh, dd, and maybe the tape drive that causes everything to jam up.

Any suggestions?

Thanks,

David Mathog
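(For reference, the same pipeline with the blocking made explicit on both ends; a sketch assuming GNU tar, whose -b option sets the blocking factor in 512-byte records and defaults to 20, which is where the 10240 comes from:

  % ssh root@$REMOTE "cd $TMOUNTNAME ; tar -b 20 -cvlf - ." 2>>$BLOG | \
      dd of=/dev/nst0 bs=10240 2>>$BLOG
)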
From: David W. Hodgins on 11 Aug 2006 15:34

On Fri, 11 Aug 2006 14:32:07 -0400, David Mathog <mathog(a)caltech.edu> wrote:

> 1. The log file shows that after it does ./lost+found (correctly)
> the rest of the files backed up are all from /home, but
> / and /home are different partitions on the remote system: /dev/hda1
> and /dev/hdb2. -l is supposed to keep tar on the local file
> system, which is "/". Apparently it isn't. Note the remote machine
> is Debian, the local machine is Mandrake 10.

man tar does not show -l as an option on Mandriva 2006. From a test ...

  # tar cvlf - . >fred
  tar: Semantics of -l option will change in the future releases.
  tar: Please use --one-file-system option instead.

Try "tar -cvf --one-file-system . >fred", or similar.

Regards, Dave Hodgins

--
Change nomail.afraid.org to ody.ca to reply by email.
(nomail.afraid.org has been set up specifically for use in usenet.
Feel free to use it yourself.)
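(Applied to the original pipeline, that suggestion would look something like the following; a sketch assuming GNU tar, with --one-file-system placed before -f, since -f takes the next argument as the archive name:

  % ssh root@$REMOTE "cd $TMOUNTNAME ; tar --one-file-system -cvf - ." 2>>$BLOG | \
      dd of=/dev/nst0 bs=10240 2>>$BLOG
)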
From: David Mathog on 14 Aug 2006 14:53
David Mathog wrote:
>
> Here's the command I'm trying:
>
> % ssh root@$REMOTE "cd $TMOUNTNAME ; tar cvlf - . " 2>>$BLOG | \
>     dd of=/dev/nst0 bs=10240 2>>$BLOG

I tried a variant of this:

  ssh root@$REMOTE "dump -0 -q -B 2000000000 -f - -u $TPARTITION | gzip -3 -c " 2>>$BLOG \
      | buffer -m 16384k -p 75 -B -o /dev/nst0 2>>$BLOG

When run MANUALLY (from the command line) it worked. I was able to verify that the dump moved stuff over by subsequently doing:

  mt -f /dev/nst0 rewind
  dd if=/dev/nst0 bs=10240 | zcat | restore -t -f -

which listed the partition's contents correctly. However, when this EXACT SAME COMMAND is run as part of a script, it locked up just like the previous command did. The log file shows that it gets to the "directories" part of the dump and then sticks. As before, all the expected processes are running on both sides, but they are "stuck", with none of them using CPU time.

That was the clue: what's the difference between ssh run interactively and in a script? Well, here stdout and stderr were taken care of, but stdin wasn't. Even though nothing should be read from stdin, it must be assigned a source that can be read inside the script (very unclear to me why). This form (note the </dev/null) works:

  ssh root@$REMOTE "dump -0 -q -B 2000000000 -f - -u $TPARTITION | gzip -3 -c " 2>>$BLOG </dev/null \
      | buffer -m 16384k -p 75 -B -o /dev/nst0 2>>$BLOG

Regards,

David Mathog
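(A presumably equivalent form uses ssh's -n flag, which redirects stdin from /dev/null itself, so the explicit redirection in the script is not needed; a sketch under that assumption:

  ssh -n root@$REMOTE "dump -0 -q -B 2000000000 -f - -u $TPARTITION | gzip -3 -c" 2>>$BLOG \
      | buffer -m 16384k -p 75 -B -o /dev/nst0 2>>$BLOG
)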