From: Karthik Balaguru on
On Feb 22, 1:12 am, Tim Watts <t...(a)dionic.net> wrote:
> karthikbalaguru <karthikbalagur...(a)gmail.com>
>   wibbled on Sunday 21 February 2010 08:57
>
> > On Feb 21, 12:43 pm, Tim Watts <t...(a)dionic.net> wrote:
> >> karthikbalaguru <karthikbalagur...(a)gmail.com>
> >> wibbled on Sunday 21 February 2010 03:05
>
> >> > On Feb 21, 4:19 am, Tim Watts <t...(a)dionic.net> wrote:
> >> >> I actually used this when I coded a bunch of servers in perl [1] to
> >> >> interface to dozens of identical embedded devices. It was actually
> >> >> mentally much easier than worrying about locking issues as all the
> >> >> separate connections had to be coordinated onto one data set in RAM,
> >> >> ie they weren't functionally independent.
>
> >> > But, was it robust enough to handle near-simultaneous multiple
> >> > connections within a short timeframe from various clients ?
> >> > Were you using some kind of buffering/pool mechanism which
> >> > the main process was checking as soon as it is done with the
> >> > action for a particular connection ?
>
> >> Yes to the first question.
>
> > Cool :-)
>
> >> The OS takes care of that. Within (quite large)
> >> limits, Linux (and any other "proper" OS) will buffer the incoming SYN
> >> packets until the application gets around to doing an accept() on the
> >> listening socket. The application doesn't have to worry about that as
> >> long as it isn't going to block on something else for some silly amount
> >> of time.
>
> >> In practice, it was 10's of milliseconds at most.
>
> > Okay, it is not a problem as long as the buffer can hold the
> > packets without overflowing.
> > I have been searching the internet regarding the
> > buffering and arrived at various links for linux -
> >http://www.psc.edu/networking/projects/tcptune/
> >http://fasterdata.es.net/TCP-tuning/linux.html
> >http://www.mjmwired.net/kernel/Documentation/networking/ip-sysctl.txt
>
> > - 'sysctl' seems to hold the key!
> > - I find the /proc special files can also be used to
> > configure these system parameters.
>
> > Set maximum size of TCP transmit window -
> >    echo 108544 > /proc/sys/net/core/wmem_max
> > Set maximum size of TCP receive window -
> >    echo 108544 > /proc/sys/net/core/rmem_max
> > Set min, default, max receive window (used by the autotuning function) -
> >     echo "4096 87380 4194304" > /proc/sys/net/ipv4/tcp_rmem
> > Set min, default, max transmit window (used by the autotuning function) -
> >     echo "4096 16384 4194304" > /proc/sys/net/ipv4/tcp_wmem
> >     echo 1 > /proc/sys/net/ipv4/tcp_moderate_rcvbuf
>
> Most of those are "per connection" limits, not "per system".
>
> http://www.speedguide.net/read_articles.php?id=121
>
> They help make things faster under certain conditions, but are not related
> to the kernel max resources.
>
> You generally need not worry about kernel resources - they are comparatively
> enormous.
>

Good link !

> On my system:
>
> /proc/sys/net/ipv4/tcp_max_syn_backlog
> is set to 1024 - that is a per system limit, but it can be increased without
> issues (simply at the expense of something else).
>
> >> Different issue of course on a tiny system (you still haven't said what
> >> your target system is).
>
> > But, what kind of issues are there on a tiny system?
>
> Like having 1+GB (or TB if you use serious iron) on one system and 4k on the
> other! In the former, you generally don't care about trivia like network
> buffers - there is so much RAM the kernel will sort itself out.
>
> On 4k, you have space for a couple of 1500-byte Ethernet packets and some
> RAM for your application. OK - SYN packets aren't 1500 bytes and they can be
> processed into a very small data structure - but you get the point. Not much
> space and every byte matters.
>

Thx for your suggestions.
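
For my own notes - those rmem/wmem values are system-wide caps; what a
single connection gets is what the application asks for with setsockopt(),
clamped by the kernel to rmem_max/wmem_max. A rough C sketch of that
per-connection side (the function name and sizes here are just examples):

/* Ask for bigger per-socket buffers; the kernel clamps the request to
 * net.core.rmem_max / wmem_max, so read back what was actually granted.
 * (On Linux the value read back is roughly double what was requested,
 * to account for bookkeeping overhead.) */
#include <stdio.h>
#include <sys/socket.h>

int tune_buffers(int fd)
{
    int rcv = 262144, snd = 262144;          /* example sizes in bytes */
    socklen_t len = sizeof(rcv);

    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcv, sizeof(rcv));
    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &snd, sizeof(snd));

    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcv, &len);
    len = sizeof(snd);
    getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &snd, &len);
    printf("granted: rcvbuf=%d sndbuf=%d\n", rcv, snd);
    return 0;
}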

Karthik Balaguru
From: Maxwell Lol on
karthikbalaguru <karthikbalaguru79(a)gmail.com> writes:
> While reading about the various designs, interestingly I
> came across info that the design of TCP servers is
> mostly such that whenever the server accepts a connection,
> a new process is invoked to handle it.
> But it seems that in the case of UDP server design,
> there is only a single process that handles all client
> requests. Why such a difference in the design of TCP and
> UDP servers? How is a TCP server able to handle a
> large number of very rapid, near-simultaneous connections?
> Any ideas?


The TCP server forks a new process to handle each new connection
(which is fast, since the pages are shared copy-on-write rather than
copied). If you want fast data transfer, then you need to let the TCP
applications (on both sides) buffer the data.

Normally the TCP stack uses a slow-start algorithm to make sure
congestion does not degrade the network.

UDP provides no way to detect congestion, so heavy use of it can cause
congestion collapse - where the usable bandwidth drops to nearly zero
and never recovers, because the applications keep trying to consume
all of the bandwidth.
From: Maxwell Lol on
Paul Keinanen <keinanen(a)sci.fi> writes:

> As long as you have a simple transaction system, one incoming request,
> one outgoing response, why on earth would any sensible person create a
> TCP/IP connection for this simple transaction ?

If the kernel is overloaded, it can drop UDP packets from the queue
before it sends them out.

Also - if A sends to B, and B replies to A, how does B know whether A
got the reply?

And if the transaction is large, it needs multiple packets, and then
you have to keep track of them and reassemble them yourself.
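
To make that concrete, here is a rough sketch of the kind of retry logic a
UDP client ends up hand-rolling, since neither side can otherwise tell
whether the request or the reply was lost (the address, port, timeout and
message are just placeholders):

/* Hand-rolled timeout-and-retransmit for a UDP request; with TCP the
 * stack would do this for you. */
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in srv;
    memset(&srv, 0, sizeof(srv));
    srv.sin_family = AF_INET;
    srv.sin_port = htons(5000);                      /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &srv.sin_addr);  /* placeholder address */

    struct timeval tv = { 1, 0 };                    /* 1 second timeout */
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    const char req[] = "request";
    char reply[512];

    for (int attempt = 0; attempt < 3; attempt++) {
        sendto(fd, req, sizeof(req), 0, (struct sockaddr *)&srv, sizeof(srv));
        ssize_t n = recvfrom(fd, reply, sizeof(reply), 0, NULL, NULL);
        if (n >= 0)
            return 0;                                /* got a reply */
        /* timed out: request or reply was lost - try again */
    }
    fprintf(stderr, "no reply after 3 attempts\n");
    return 1;
}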
From: Barry Margolin on
In article
<597a7568-ddf4-43af-bff0-fb22d0dddb37(a)m35g2000prh.googlegroups.com>,
karthikbalaguru <karthikbalaguru79(a)gmail.com> wrote:

> While reading about the various designs, interestingly I
> came across info that the design of TCP servers is
> mostly such that whenever the server accepts a connection,
> a new process is invoked to handle it.
> But it seems that in the case of UDP server design,
> there is only a single process that handles all client
> requests. Why such a difference in the design of TCP and
> UDP servers? How is a TCP server able to handle a
> large number of very rapid, near-simultaneous connections?
> Any ideas?

TCP is usually used for services that require a large amount of
interaction between the client and server, and state that must be
maintained for the duration of the connection. Forking a process or
spawning a thread provides an easy way to maintain that state, and keep
other connections from interfering. And since the connection is likely
to be maintained for a significant period of time, the overhead of
forking a process is usually negligible.
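
A minimal sketch of that fork-per-connection pattern (port number arbitrary,
error handling and the actual service logic omitted):

/* Fork-per-connection TCP server: the parent only accepts, each child
 * keeps its own per-connection state until the client goes away. */
#include <signal.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    signal(SIGCHLD, SIG_IGN);            /* let the kernel reap children */

    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a = { 0 };
    a.sin_family = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_ANY);
    a.sin_port = htons(7000);            /* example port */
    bind(lfd, (struct sockaddr *)&a, sizeof(a));
    listen(lfd, 64);

    for (;;) {
        int cfd = accept(lfd, NULL, NULL);
        if (cfd < 0)
            continue;
        if (fork() == 0) {               /* child: owns this connection */
            close(lfd);
            /* ... read/write cfd, keep per-connection state here ... */
            close(cfd);
            _exit(0);
        }
        close(cfd);                      /* parent: back to accept() */
    }
}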

UDP is generally used for simple request/response protocols, where each
packet is independent and state doesn't need to be maintained. Speed is
often of the essence, so having to fork a process for each request would
slow things down noticeably.
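
A correspondingly minimal sketch of the single-process UDP pattern - one
socket, one loop, the reply simply sent back to whatever address the
datagram came from (port is arbitrary):

/* Single-process UDP server: every request is handled in one loop,
 * with no per-client process, thread or state. */
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in a = { 0 };
    a.sin_family = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_ANY);
    a.sin_port = htons(7000);            /* example port */
    bind(fd, (struct sockaddr *)&a, sizeof(a));

    char buf[1500];
    for (;;) {
        struct sockaddr_in peer;
        socklen_t plen = sizeof(peer);
        ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                             (struct sockaddr *)&peer, &plen);
        if (n < 0)
            continue;
        /* ... build the response in buf ... */
        sendto(fd, buf, n, 0, (struct sockaddr *)&peer, plen);
    }
}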

There are exceptions, though. If a server handles hundreds or thousands
of concurrent connections, forking so many processes is likely to
overload the system. Thread pools are often used for these types of
servers.
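
One common shape for that is a pre-threaded server: a small fixed pool of
worker threads all blocking in accept() on the same listening socket, so
nothing is forked or spawned per connection. A rough sketch (pool size and
port are arbitrary; build with -pthread):

/* Pre-threaded ("thread pool") server: a fixed set of worker threads
 * all block in accept() on the same listening socket. */
#include <pthread.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define POOL_SIZE 8                      /* arbitrary pool size */

static void *worker(void *arg)
{
    int lfd = *(int *)arg;
    for (;;) {
        int cfd = accept(lfd, NULL, NULL);   /* one blocked thread returns */
        if (cfd < 0)
            continue;
        /* ... serve this connection, then go back for the next one ... */
        close(cfd);
    }
    return NULL;
}

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a = { 0 };
    a.sin_family = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_ANY);
    a.sin_port = htons(7000);            /* example port */
    bind(lfd, (struct sockaddr *)&a, sizeof(a));
    listen(lfd, 128);

    pthread_t tid[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_create(&tid[i], NULL, worker, &lfd);
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_join(tid[i], NULL);       /* never returns in this sketch */
    return 0;
}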

--
Barry Margolin, barmar(a)alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
From: Maxwell Lol on
Maxwell Lol <nospam(a)com.invalid> writes:

> The TCP server forks a new process to handle each new connection
> (which is fast, since the pages are shared copy-on-write rather than
> copied). If you want fast data transfer, then you need to let the TCP
> applications (on both sides) buffer the data.

>
> Normally the TCP stack uses a slow-start algorithm to make sure
> congestion does not degrade the network.

I should qualify this to say I was thinking of several large-bandwidth
connections, not umpteen thousands of small ones (e.g. a video server
for surveillance). I wasn't sure what the OP's requirement was. This
is posted in comp.arch.embedded.

I was thinking that bandwidth was the limiting resource,
not the server resources.

If that is the case, forking processes would not be the limiting
factor; it's filling a high-bandwidth pipe with continuous data that
limits the total data transfer.

I hope the OP isn't thinking of a dedicated appliance that can be
a YouTube/Facebook-style server, where scalability is a real issue.