From: underh20 on 22 Dec 2009 14:57

We have two Solaris 10 servers (A & B) set up for sending files between them via SCP. The key/host configuration is in place between the two servers, and there's no problem when sending an individual file from one to the other.

However, when we send multiple files "concurrently" via shell script or commands from server A to server B, we get the following messages repeatedly. Some files are sent OK and some don't go through when the messages below appear, regardless of the size or type of these "unsent" files on server A:

:
:
ssh_exchange_identification: Connection closed by remote host
lost connection
ssh_exchange_identification: Connection closed by remote host
ssh_exchange_identification: Connection closed by remote host
lost connection
ssh_exchange_identification: Connection closed by remote host
lost connection
lost connection
ssh_exchange_identification: Connection closed by remote host
lost connection
ssh_exchange_identification: Connection closed by remote host
lost connection
:
:

Any idea how I could troubleshoot and resolve this issue?

Thanks,
Bill
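A minimal way to gather more detail, assuming both hosts run the stock Solaris 10 SSH tools: re-run one failing copy verbosely on server A and watch what sshd on server B logs while the batch runs. The log path below is only an example; the actual destination depends on the local syslog.conf.

   # server A: show the client side of one failing transfer
   scp -v /some/test/file serverB:/tmp/

   # server B: watch what sshd reports while the batch script runs
   # (syslog destination is an assumption; check /etc/syslog.conf)
   tail -f /var/adm/messages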
From: OldSchool on 22 Dec 2009 15:44

On Dec 22, 2:57 pm, underh20 <underh20.scubadiv...(a)gmail.com> wrote:
> We have two Solaris 10 servers (A & B) set up for sending files between
>
> There's no problem when sending an individual file between the two servers.
>
> However, when we send multiple files "concurrently" via shell
> script or commands
>

The first questions I'd have are:

How / what commands are being issued to send 'multiple files' concurrently?
How many are being sent?
When this occurs, is it always around the same number of files?

I sort of suspect that you've run into a limitation with ptys not being available on the server you're trying to connect to, or something similar.
From: Greg Andrews on 22 Dec 2009 18:48

underh20 <underh20.scubadiving(a)gmail.com> writes:
>
>We have two Solaris 10 servers (A & B) set up for sending files between
>them via SCP. The key/host configuration is in place between the two
>servers. There's no problem when sending an individual file between the
>two servers.
>
>However, when we send multiple files "concurrently" via shell
>script or commands from server A to server B, we get the
>following messages repeatedly.
>
>ssh_exchange_identification: Connection closed by remote host
>lost connection
>

One potential source is the configuration of the sshd daemon on the machine that's receiving the connections. From the man page for the sshd config file "sshd_config":

     MaxStartups
         Specifies the maximum number of concurrent unauthenticated
         connections to the SSH daemon. Additional connections will be
         dropped until authentication succeeds or the LoginGraceTime
         expires for a connection. The default is 10.

As someone else mentioned, a shortage of ptys can also cause similar symptoms (though I think the error message will be slightly different).

  -Greg

--
Do NOT reply via e-mail.
Reply in the newsgroup.
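A possible mitigation on the receiving host, sketched under the assumption that server B is a stock Solaris 10 box using SMF and /etc/ssh/sshd_config; the limit of 40 is only an example figure, not a recommendation from the thread.

   # check the current setting (absent means the default of 10)
   grep -i maxstartups /etc/ssh/sshd_config

   # edit /etc/ssh/sshd_config and raise the limit, e.g.
   #   MaxStartups 40

   # restart the SSH service so sshd rereads its configuration
   svcadm restart svc:/network/ssh:default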
From: OldSchool on 24 Dec 2009 12:05

FYI,

The OP sent me a copy of the script he's running to do this. There are 30+ lines of

   nohup scp <some_source_file> <some_dest_file> &

so, as noted above, he's probably exceeded MaxStartups, since they'd all process at approx. the same time....
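An alternative fix on the client side would be to throttle the background copies so only a handful run at once, keeping the count under the receiving sshd's MaxStartups limit. This is only a sketch with assumed paths and host name, not the OP's actual script.

   #!/bin/sh
   # send files in batches of 5 so the receiving sshd never sees more
   # unauthenticated connections than its MaxStartups limit allows
   BATCH=5
   n=0
   for f in /export/outbound/*          # assumed source directory
   do
           nohup scp "$f" serverB:/import/incoming/ &
           n=`expr $n + 1`
           if [ $n -ge $BATCH ]; then
                   wait                 # let the current batch finish
                   n=0
           fi
   done
   wait                                 # wait for the final partial batch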
From: Barry Margolin on 24 Dec 2009 16:49
In article <18a46a6f-8ba7-4e5c-a95b-07078cf8b6b4(a)k19g2000yqc.googlegroups.com>,
 OldSchool <scott.myron(a)macys.com> wrote:

> FYI,
>
> The OP sent me a copy of the script he's running to do this. There
> are 30+ lines of
>
>    nohup scp <some_source_file> <some_dest_file> &
>
> so, as noted above, he's probably exceeded MaxStartups, since they'd
> all process at approx. the same time....

If you're copying a bunch of files to a common destination directory, I believe it's best to do

   scp file1 file2 file3 ... dest

I expect scp will reuse the same SSH connection where possible, avoiding starting up lots of connections.

In some cases running all the transfers in parallel may provide a benefit. If there's high latency between the machines, individual connections may have limited throughput, so you'll get higher combined throughput with concurrent transfers.

--
Barry Margolin, barmar(a)alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
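Applying that suggestion to the script described above, with made-up paths and host name, the 30-odd background copies could collapse into a single invocation, which opens just one SSH connection to server B.

   # one connection, many files -- directory names and host are examples
   scp /export/outbound/file01.dat /export/outbound/file02.dat \
       /export/outbound/file03.dat serverB:/import/incoming/

   # or, if the files share a pattern, let the shell expand it
   scp /export/outbound/*.dat serverB:/import/incoming/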