From: glen herrmannsfeldt on
In comp.protocols.tcp-ip Paul Keinanen <keinanen(a)sci.fi> wrote:
(snip)

> Neither does TCP re-establish the connection from the client to the
> server if the server is rebooted, powered up, has its hardware
> replaced, or is switched to a redundant unit (and, if symbolic
> addressing is used, its IP address changes).

UDP will do some of those. The UDP implementation of NFS is
pretty good at surviving server reboots and continuing on as
if nothing changed. I believe the TCP implementations automatically
reconnect, transparent to the user.

> You still need to build an additional layer to handle these events
> and re-establish connections for unattended 24/7 operation.

-- glen
From: Paul Keinanen on
On Mon, 22 Feb 2010 22:16:23 +0000 (UTC), glen herrmannsfeldt
<gah(a)ugcs.caltech.edu> wrote:

>In comp.protocols.tcp-ip Paul Keinanen <keinanen(a)sci.fi> wrote:
>(snip)
>
>> Neither does TCP re-establish the connection from the client to the
>> server if the server is rebooted, powered up, has its hardware
>> replaced, or is switched to a redundant unit (and, if symbolic
>> addressing is used, its IP address changes).
>
>UDP will do some of those. The UDP implementation of NFS is
>pretty good at surviving server reboots and continuing on as
>if nothing changed. I believe the TCP implementations automatically
>reconnect, transparent to the user.
>
>> You still need to build an additional layer to handle these events
>> and re-establish connections for unattended 24/7 operation.
>
>-- glen

If the TCP implementation of NFS performs the reconnection, then
that is a feature of NFS, not of TCP.
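
That extra layer need not be large. Here is a minimal sketch in C,
assuming the BSD sockets API (the host and port arguments and the
5-second retry interval are illustrative), of a connect-with-retry
helper that re-resolves the host name on every attempt, so a
fail-over to a redundant unit with a different IP address is picked
up automatically:

#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Block until a connection succeeds, re-resolving the name on each
 * attempt so an address change on fail-over is handled too. */
static int connect_retry(const char *host, const char *port)
{
    for (;;) {
        struct addrinfo hints, *res, *ai;

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo(host, port, &hints, &res) == 0) {
            for (ai = res; ai != NULL; ai = ai->ai_next) {
                int fd = socket(ai->ai_family, ai->ai_socktype,
                                ai->ai_protocol);
                if (fd >= 0 &&
                    connect(fd, ai->ai_addr, ai->ai_addrlen) == 0) {
                    freeaddrinfo(res);
                    return fd;          /* connected */
                }
                if (fd >= 0)
                    close(fd);
            }
            freeaddrinfo(res);
        }
        sleep(5);   /* server may be rebooting; back off and retry */
    }
}

The application calls this again whenever a read or write on the
returned socket fails - that loop is the "additional layer" in a
nutshell.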


From: Karthik Balaguru on
On Feb 22, 1:12 am, Tim Watts <t...(a)dionic.net> wrote:
> karthikbalaguru <karthikbalagur...(a)gmail.com>
>   wibbled on Sunday 21 February 2010 08:57
>
> > On Feb 21, 12:43 pm, Tim Watts <t...(a)dionic.net> wrote:
> >> karthikbalaguru <karthikbalagur...(a)gmail.com>
> >> wibbled on Sunday 21 February 2010 03:05
>
> >> > On Feb 21, 4:19 am, Tim Watts <t...(a)dionic.net> wrote:
> >> >> I actually used this when I coded a bunch of servers in perl [1] to
> >> >> interface to dozens of identical embedded devices. It was actually
> >> >> mentally much easier than worrying about locking issues, as all the
> >> >> separate connections had to be coordinated onto one data set in RAM,
> >> >> ie they weren't functionally independent.
>
> >> > But, was it robust enough to handle near-simultaneous multiple
> >> > connections within a short timeframe from various clients?
> >> > Were you using some kind of buffering/pool mechanism which
> >> > the main process was checking as soon as it was done with the
> >> > action for a particular connection?
>
> >> Yes to the first question.
>
> > Cool :-)
>
> >> The OS takes care of that. Within (quite large)
> >> limits, Linux (and any other "proper" OS) will buffer the incoming SYN
> >> packets until the application gets around to doing an accept() on the
> >> listening socket. The application doesn't have to worry about that as
> >> long as it isn't going to block on something else for some silly amount
> >> of time.
>
> >> In practice, it was 10's of milliseconds at most.
>
> > Okay, it is not a problem as long as the buffer can hold
> > the packets without overflowing.
> > I have been searching the internet regarding the
> > buffering and arrived at various links for linux -
> >http://www.psc.edu/networking/projects/tcptune/
> >http://fasterdata.es.net/TCP-tuning/linux.html
> >http://www.mjmwired.net/kernel/Documentation/networking/ip-sysctl.txt
>
> > - 'sysctl' seems to hold the key!
> > - I find that the /proc special files can also be used for
> > configuring the system parameters.
>
> > Set maximum size of TCP transmit window -
> >    echo 108544 > /proc/sys/net/core/wmem_max
> > Set maximum size of TCP receive window -
> >    echo 108544 > /proc/sys/net/core/rmem_max
> > Set min, default, max receive window (used by the autotuning function) -
> >     echo "4096 87380 4194304" > /proc/sys/net/ipv4/tcp_rmem
> > Set min, default, max transmit window (used by the autotuning function) -
> >     echo "4096 16384 4194304" > /proc/sys/net/ipv4/tcp_wmem
> >     echo 1 > /proc/sys/net/ipv4/tcp_moderate_rcvbuf
>
> Most of those are "per connection" limits, not "per system".
>
> http://www.speedguide.net/read_articles.php?id=121
>
> They help make things faster under certain conditions, but are not related
> to the kernel max resources.
>
> You generally need not worry about kernel resources - they are comparatively
> enormous.
>

Good link!

> On my system:
>
> /proc/sys/net/ipv4/tcp_max_syn_backlog
> is set to 1024 - that is a per system limit, but it can be increased without
> issues (simply at the expense of something else).
>
> >> Different issue of course on a tiny system (you still haven't said what
> >> your target system is).
>
> > But, what kind of issues are there in a tiny system?
>
> Like having 1+GB (or TB if you use serious iron) on one system and 4k on the
> other! In the former, you generally don't care about trivia like network
> buffers - there is so much RAM the kernel will sort itself out.
>
> On 4k, you have space for a couple of 1500 byte ethernet packets and some
> RAM for your application. OK - SYN packets aren't 1500 bytes and they can be
> processed into a very small data structure - but you get the point. Not much
> space and every byte matters.
>

Thx for your suggestions.
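
On the backlog point above, the queueing is set up by the ordinary
listen()/accept() calls; a minimal sketch in C, assuming the BSD
sockets API (port 5000 and the backlog of 128 are illustrative, and
error handling is trimmed):

#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);        /* illustrative port */
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);

    /* The kernel queues pending SYNs (up to tcp_max_syn_backlog)
     * and completed connections (up to this per-socket backlog,
     * clamped by net.core.somaxconn) while the application is
     * busy elsewhere. */
    listen(lfd, 128);

    for (;;) {
        int cfd = accept(lfd, NULL, NULL);  /* pops one queued
                                               connection */
        if (cfd < 0)
            continue;
        /* ... handle the client quickly ... */
        close(cfd);
    }
}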

Karthik Balaguru
From: Maxwell Lol on
karthikbalaguru <karthikbalaguru79(a)gmail.com> writes:
> While reading about the various designs, interestingly I
> came across some info that the design of TCP servers is
> mostly such that whenever one accepts a connection,
> a new process is invoked to handle it.
> But, it seems that in the case of UDP server design,
> there is only a single process that handles all client
> requests. Why such a difference in the design of TCP and
> UDP servers? How is a TCP server able to handle a
> large number of very rapid, near-simultaneous connections?
> Any ideas?


The TCP server forks a new process (which is very fast, as nothing
needs to be copied up front) to handle each new connection. If you
want fast data transfer, then you need to let the TCP applications
(on both sides) buffer the data.
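
A minimal sketch of that fork-per-connection pattern in C, assuming
the BSD sockets API (port 5000 and the backlog are illustrative, and
error handling is trimmed):

#include <netinet/in.h>
#include <signal.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    signal(SIGCHLD, SIG_IGN);      /* let the kernel reap children */

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);   /* illustrative */
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, 128);

    for (;;) {
        int cfd = accept(lfd, NULL, NULL);
        if (cfd < 0)
            continue;
        if (fork() == 0) {         /* child: pages are shared
                                      copy-on-write, so this is cheap */
            close(lfd);            /* child keeps only the client fd */
            /* ... serve this one client on cfd ... */
            close(cfd);
            _exit(0);
        }
        close(cfd);                /* parent keeps only the listener */
    }
}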

Normally the network stack uses a slow-start algorithm to make sure
congestion does not degrade the network.

UDP provides no such congestion control and, used carelessly, can
cause congestion collapse - where usable bandwidth drops to zero and
never recovers, because the applications keep trying to consume all
of it.
From: Maxwell Lol on
Paul Keinanen <keinanen(a)sci.fi> writes:

> As long as you have a simple transaction system - one incoming request,
> one outgoing response - why on earth would any sensible person create a
> TCP/IP connection for this simple transaction?

If the kernel is overloaded, it can drop UDP packets from the queue
before it sends them out.

Also - if A sends to B, and B replies to A, how does B know whether A
got the reply or not?

And if the transaction is too large for one packet, you need to use
multiple packets. And then you have to keep track of these and
reassemble them.
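
That is exactly the machinery an application has to bolt on top of
UDP. A minimal sketch in C of the timeout-and-retransmit half,
assuming the BSD sockets API (the 2-second timeout and 5 attempts are
illustrative, and reassembly is left aside):

#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

/* Send one request and wait for a reply, retransmitting on timeout.
 * The application, not UDP, supplies the reliability; a real client
 * would also tag each request with an ID so it can match replies and
 * discard duplicates caused by retransmitted requests. */
static ssize_t udp_transact(int fd, const struct sockaddr_in *srv,
                            const void *req, size_t reqlen,
                            void *rsp, size_t rsplen)
{
    struct timeval tv = { 2, 0 };          /* 2 s per attempt */
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

    for (int attempt = 0; attempt < 5; attempt++) {
        sendto(fd, req, reqlen, 0,
               (const struct sockaddr *)srv, sizeof *srv);

        ssize_t n = recv(fd, rsp, rsplen, 0);
        if (n >= 0)
            return n;                      /* got a reply */
        /* timed out: request or reply was dropped somewhere; retry */
    }
    return -1;                             /* give up after 5 tries */
}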