From: karthikbalaguru on
On Feb 21, 12:43 pm, Tim Watts <t...(a)dionic.net> wrote:
> karthikbalaguru <karthikbalagur...(a)gmail.com>
>   wibbled on Sunday 21 February 2010 03:05
>
> > On Feb 21, 4:19 am, Tim Watts <t...(a)dionic.net> wrote:
> >> I actually used this when I coded a bunch of servers in perl [1] to
> >> interface to dozens of identical embedded devices. It was actually
> >> mentally much easier than worrying about locking issues, as all the
> >> separate connections had to be coordinated onto one data set in RAM,
> >> i.e. they weren't functionally independent.
>
> > But, was it robust enough to handle multiple near-simultaneous
> > connections from various clients within a short timeframe?
> > Were you using some kind of buffering/pool mechanism that
> > the main process checked as soon as it was done with the
> > action for a particular connection?
>
> Yes to the first question.

Cool :-)

> The OS takes care of that. Within (quite large)
> limits, Linux (and any other "proper" OS) will buffer the incoming SYN
> packets until the application gets around to doing an accept() on the
> listening socket. The application doesn't have to worry about that as long
> as it isn't going to block on something else for some silly amount of time.
>
> In practice, it was tens of milliseconds at most.
>

Okay, it is not a problem as long as the buffer can hold the
packets without overflowing.
I have been searching the internet regarding this
buffering and arrived at various links for Linux -
http://www.psc.edu/networking/projects/tcptune/
http://fasterdata.es.net/TCP-tuning/linux.html
http://www.mjmwired.net/kernel/Documentation/networking/ip-sysctl.txt

- 'sysctl' seems to hold the key!
- The /proc special files can also be used to
configure these system parameters.
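Tim's point about the kernel queueing connections until accept() can be
seen directly from userspace. A minimal Python sketch (my own
illustration, not Tim's perl code; the port is ephemeral and the backlog
and client counts are arbitrary):

```python
import socket

# Listening socket with a small kernel backlog; the OS queues
# completed connections here until we get around to accept().
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))           # ephemeral port
srv.listen(8)                        # backlog hint to the kernel
port = srv.getsockname()[1]

# Several "near-simultaneous" clients connect before any accept().
clients = []
for _ in range(5):
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(("127.0.0.1", port))   # succeeds: the kernel queues it
    clients.append(c)

# The application drains the queue afterwards, at its leisure.
accepted = [srv.accept()[0] for _ in range(5)]
print(len(accepted))                 # all five were buffered by the OS

for s in clients + accepted + [srv]:
    s.close()
```

Overflow only becomes an issue once the pending connections exceed the
backlog (plus the kernel's own limits), which is exactly what the
tunables below adjust.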

Set the maximum socket send buffer size -
echo 108544 > /proc/sys/net/core/wmem_max
Set the maximum socket receive buffer size -
echo 108544 > /proc/sys/net/core/rmem_max
Set the min, default and max TCP receive buffer, used by the
autotuning function -
echo "4096 87380 4194304" > /proc/sys/net/ipv4/tcp_rmem
Set the min, default and max TCP send buffer, used by the
autotuning function -
echo "4096 16384 4194304" > /proc/sys/net/ipv4/tcp_wmem
Enable receive-buffer autotuning -
echo 1 > /proc/sys/net/ipv4/tcp_moderate_rcvbuf
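These rmem_max/wmem_max values also cap what an application can ask for
per socket via SO_RCVBUF/SO_SNDBUF. A small Python sketch of the
per-socket side (my own illustration; the 65536 request is arbitrary,
and the doubling behaviour described in the comment is Linux-specific):

```python
import socket

# Request per-socket buffer sizes; the kernel caps these at
# net.core.rmem_max / net.core.wmem_max.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65536)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 65536)

# On Linux, getsockopt() reports roughly double the requested value
# (the kernel includes its bookkeeping overhead), and never more
# than the sysctl cap allows.
rcv = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
snd = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(rcv >= 65536, snd >= 65536)
s.close()
```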


> Different issue of course on a tiny system (you still haven't said what your
> target system is).
>

But what kind of issues come up on a tiny system?

Thanks in advance,
Karthik Balaguru

From: David Schwartz on
On Feb 20, 5:10 am, karthikbalaguru <karthikbalagur...(a)gmail.com>
wrote:

> While reading about the various designs, interestingly i
> came across an info that the design of TCP servers is
> mostly such that whenever it accepts a connection,
> a new process is invoked to handle it .

This is like a singly-linked list. It's the first thing you learn, and
so many people tend to assume it's used a lot in the real world. In
actuality, it's only common in special cases, such as when each
instance needs its own security context (like in 'telnetd' and
'sshd').

> But, it seems that in the case of UDP servers design,
> there is only a single process that handles all client
> requests.

This is the norm for TCP too, most of the time. But it's more
obviously reasonable for UDP, since there are no connections. You
can't create one process for each connection because there's no such
thing.

> Why such a difference in design of TCP and
> UDP servers ?

To the extent there is such a difference, it's because of the
different things the servers do. For example, TCP is commonly used for
web servers. Each web connection is a separate logical operation with
multiple steps that affect only that operation. However, a typical UDP
server (such as a time server or resolver) needs to maintain some
global state that each packet received minorly interacts with.

> How is TCP server able to handle
> large number of very rapid near-simultaneous connections ?

Process-per-connection servers tend to do this very poorly. But one
trick is to create the processes before you need them rather than
after.

DS
From: Karthik Balaguru on
On Feb 21, 2:55 pm, David Schwartz <dav...(a)webmaster.com> wrote:
> On Feb 20, 5:10 am, karthikbalaguru <karthikbalagur...(a)gmail.com>
> wrote:
>
> > While reading about the various designs, interestingly i
> > came across an info that the design of TCP servers is
> > mostly such that whenever it accepts a connection,
> > a new process is invoked to handle it .
>
> This is like a singly-linked list. It's the first thing you learn, and
> so many people tend to assume it's used a lot in the real world. In
> actuality, it's only common in special cases, such as when each
> instance needs its own security context (like in 'telnetd' and
> 'sshd').
>
> > But, it seems that in the case of UDP servers design,
> > there is only a single process that handles all client
> > requests.
>
> This is the norm for TCP too, most of the time. But it's more
> obviously reasonable for UDP, since there are no connections. You
> can't create one process for each connection because there's no such
> thing.
>

True. It depends on the nature of TCP and UDP.

> > Why such a difference in design of TCP and
> > UDP servers ?
>
> To the extent there is such a difference, it's because of the
> different things the servers do. For example, TCP is commonly used for
> web servers. Each web connection is a separate logical operation with
> multiple steps that affect only that operation. However, a typical UDP
> server (such as a time server or resolver) needs to maintain some
> global state that each packet received minorly interacts with.
>

Agreed.

> > How is TCP server able to handle
> > large number of very rapid near-simultaneous connections ?
>
> Process-per-connection servers tend to do this very poorly.

Yeah, it will overload the server.

> But one
> trick is to create the processes before you need them rather than
> after.
>

The trick of creating the processes before they are actually needed
sounds interesting, and appears similar to pre-forking.
Here a server launches a number of child processes when it starts.
Those in turn serve new connection requests, with some kind of locking
mechanism around the call to accept() so that at any point in time
only one child holds it and the others block until the lock is
released. There seem to be ways around that locking problem.
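A toy Python sketch of that pre-forking-with-accept-lock scheme (my own
illustration, Unix-only because of os.fork(); each child serves exactly
one connection and exits, just to keep the demo finite -- real workers
would loop):

```python
import os
import socket
import multiprocessing

NUM_WORKERS = 3  # chosen up front -- the "how many" question

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(16)
port = srv.getsockname()[1]

accept_lock = multiprocessing.Lock()   # serialises accept() in children

pids = []
for _ in range(NUM_WORKERS):
    pid = os.fork()
    if pid == 0:                       # child: a pre-forked worker
        with accept_lock:              # only one child in accept() at a time
            conn, _ = srv.accept()
        conn.sendall(b"hello from %d\n" % os.getpid())
        conn.close()
        os._exit(0)
    pids.append(pid)

# The parent plays client: three connections, each served by a worker
# that already existed before the connection arrived.
replies = []
for _ in range(NUM_WORKERS):
    c = socket.create_connection(("127.0.0.1", port))
    replies.append(c.recv(100))
    c.close()

for pid in pids:
    os.waitpid(pid, 0)
print(len(replies))
srv.close()
```

As for the way out of the locking problem: modern kernels largely solve
it themselves by waking only one blocked accept()er per incoming
connection, so many pre-forked servers simply let all children block in
accept() on the shared listening socket with no explicit lock.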

Karthik Balaguru
From: karthikbalaguru on
On Feb 21, 2:55 pm, David Schwartz <dav...(a)webmaster.com> wrote:
> On Feb 20, 5:10 am, karthikbalaguru <karthikbalagur...(a)gmail.com>
> wrote:
>
> > How is TCP server able to handle
> > large number of very rapid near-simultaneous connections ?
>
> Process-per-connection servers tend to do this very poorly.

It overloads the server.

> But one
> trick is to create the processes before you need them rather than
> after.
>

But, how many processes should be created at the server ?
How will the server know about the number of processes that it
has to create ? Any ideas ?

Thanks in advance,
Karthik Balaguru
From: David Schwartz on
On Feb 21, 7:54 am, karthikbalaguru <karthikbalagur...(a)gmail.com>
wrote:

> But, how many processes should be created at the server ?
> How will the server know about the number of processes that it
> has to create ? Any ideas ?

Note that this is a key weakness of the 'process-per-connection'
model, and I recommend just not using that model unless it's mandated
by other concerns (such as cases where security is more important
than performance).

But there are two techniques, and they are typically used in
combination. One is static configuration. This is key on initial
server startup. For example, versions of Apache that were process per
connection let you set the number of processes to be started up
initially. They also let you set the target number of 'spare' servers
waiting for connections.

The other technique is dynamic tuning. You monitor the maximum number
of servers you've ever needed at once, and you keep close to that many
around unless you've had a long period of inactivity.
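A toy Python sketch of combining the two techniques (my own
illustration: the START_SERVERS/MIN_SPARE names are modelled loosely on
old prefork Apache's StartServers/MinSpareServers directives, and the
load samples are invented; a real server would also decay the peak
after long inactivity, which this sketch omits):

```python
START_SERVERS = 4   # static configuration: initial pool size
MIN_SPARE = 2       # always keep at least this many idle spares

class WorkerPool:
    """Tracks peak demand and sizes the worker pool to match."""
    def __init__(self):
        self.pool_size = START_SERVERS
        self.peak_busy = 0

    def observe(self, busy_now):
        # Dynamic tuning: remember the most workers ever needed at once
        self.peak_busy = max(self.peak_busy, busy_now)
        # Target: enough for the observed peak, plus some idle spares,
        # but never below the static floor
        self.pool_size = max(START_SERVERS, self.peak_busy + MIN_SPARE)
        return self.pool_size

pool = WorkerPool()
for busy in [1, 3, 7, 2, 5]:            # simulated load samples
    pool.observe(busy)
print(pool.peak_busy, pool.pool_size)   # peak of 7 busy -> pool of 9
```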

Servers are going to be slow to start up anyway, and the time to
create new processes is not way out of line with the time to fault in
pages of code and other delays during server startup that you can do
very little about.

DS