From: Barry Margolin on
In article
<597a7568-ddf4-43af-bff0-fb22d0dddb37(a)m35g2000prh.googlegroups.com>,
karthikbalaguru <karthikbalaguru79(a)gmail.com> wrote:

> While reading about the various designs, interestingly I
> came across the fact that TCP servers are mostly designed
> such that whenever one accepts a connection,
> a new process is invoked to handle it.
> But it seems that in the case of UDP server design,
> there is only a single process that handles all client
> requests. Why such a difference in the design of TCP and
> UDP servers? How is a TCP server able to handle a
> large number of very rapid, near-simultaneous connections?
> Any ideas?

TCP is usually used for services that require a large amount of
interaction between the client and server, and state that must be
maintained for the duration of the connection. Forking a process or
spawning a thread provides an easy way to maintain that state, and keep
other connections from interfering. And since the connection is likely
to be maintained for a significant period of time, the overhead of
forking a process is usually negligible.

UDP is generally used for simple request/response protocols, where each
packet is independent and state doesn't need to be maintained. Speed is
often of the essence, so having to fork a process for each request would
slow things down noticeably.

There are exceptions, though. If a server handles hundreds or thousands
of concurrent connections, forking so many processes is likely to
overload the system. Thread pools are often used for these types of
servers.

--
Barry Margolin, barmar(a)alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
From: Maxwell Lol on
Maxwell Lol <nospam(a)com.invalid> writes:

> The TCP server forks a new process (which is very fast, as nothing
> needs to be copied) to handle a new connection. If you want fast data
> transfer, then you need to let the TCP applications (on both sides)
> buffer the data.

>
> Normally the network stack uses a slow-start algorithm to make sure
> congestion does not degrade the network.

I should qualify this to say I was thinking of several high-bandwidth
connections, not umpteen thousands of small ones (e.g. a video server
for surveillance). I wasn't sure what the OP's requirement was. This
is posted in comp.arch.embedded.

I was thinking that bandwidth was the limited resource,
not the server resources.

If this is the case, forking processes would not be the limiting
factor. It's filling a high-bandwidth pipe with continuous data that
limits the total data transfer.

I hope the OP isn't thinking of a dedicated appliance that can be
a youtube/facebook server, where scalability is a real issue.


From: Boudewijn Dijkstra on
On Thu, 25 Feb 2010 03:41:07 +0100, Maxwell Lol
<nospam(a)com.invalid> wrote:
> karthikbalaguru <karthikbalaguru79(a)gmail.com> writes:
>> While reading about the various designs, interestingly I
>> came across the fact that TCP servers are mostly designed
>> such that whenever one accepts a connection,
>> a new process is invoked to handle it.
>> But it seems that in the case of UDP server design,
>> there is only a single process that handles all client
>> requests. Why such a difference in the design of TCP and
>> UDP servers? How is a TCP server able to handle a
>> large number of very rapid, near-simultaneous connections?
>> Any ideas?
>
>
> The TCP server forks a new process (which is very fast, as nothing
> needs to be copied)

Some things need to be copied, like file descriptors, but nothing big.
Still, forking incurs considerably more overhead in time and space than
managing the context yourself.



--
Made with Opera's revolutionary e-mail program:
http://www.opera.com/mail/
(remove the obvious prefix to reply by mail)