From: Jorgen Grahn on
On Sat, 2010-02-20, Tim Watts wrote:
> karthikbalaguru <karthikbalaguru79(a)gmail.com>
> wibbled on Saturday 20 February 2010 13:10
>
>
>> While reading about the various designs, interestingly i
>> came across an info that the design of TCP servers is
>> mostly such that whenever it accepts a connection,
>> a new process is invoked to handle it .
....

> In the meantime, speaking generally (without embedded systems specifically
> in mind):
>
> TCP = reliable stream connection oriented protocol. No worrying about
> sequences, out of order packet delivery, missed packets - except in as much
> as your application needs to handle the TCP stack declaring it's given up
> (exception handling). Some overhead in setting up (3 way handshake) and
> closedown.
>
> UDP = datagram protocol and your application needs to worry about all the
> rest above, if it cares. But very light - no setup/closedown.
>
> Regarding TCP service architecture, there are 3 main classes:
>
> 1) Forking server;
> 2) Threaded server;
> 3) Multiplexing server;
>
> 1 - simplest to program, heaviest on system resources.

Heaviest relatively speaking, yes, but less heavy than many people
think. I was going to say "measure before you decide", but you'd need
to write the server first to get realistic data :-/
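
For reference, here is roughly what the classic fork-per-connection
accept loop of (1) looks like (a minimal sketch assuming Unix sockets;
error handling is mostly omitted and the port number is just an
example):

/* Sketch of a fork-per-connection TCP echo server (Unix).
 * Error handling is mostly omitted; the port is an arbitrary example. */
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <signal.h>
#include <string.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(7000);          /* example port */

    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 16);                      /* backlog, see listen(2) */
    signal(SIGCHLD, SIG_IGN);             /* let the kernel reap children */

    for (;;) {
        int client = accept(srv, 0, 0);
        if (client < 0) continue;
        if (fork() == 0) {                /* child: serve this connection */
            close(srv);
            char buf[512];
            ssize_t n;
            while ((n = read(client, buf, sizeof buf)) > 0)
                write(client, buf, n);    /* trivial echo service */
            close(client);
            _exit(0);
        }
        close(client);                    /* parent: back to accept() */
    }
}

The cost is one fork() plus one process per connection, which is what
the "heaviest on system resources" label refers to.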

....
> 3 - Very efficient. One process maintains a state for all connections, often
> using event methodology to call service subroutines when something
> interesting happens (eg new connection, data arrived, output capable of
> accepting data, connection closed). Sounds horrible, but with an OO
> approach, very easy to get your head around. Now bearing in mind that
> anything in OO can be bastardised to a handle and an array of struct which
> holds the equivalent data that an OO object would, this could be a very
> suitable method for embedded systems where C may be the language of choice
> and there may be no OS or only a very simple one that doesn't map well.

[Here I'm sticking to Unix, which is what your original three-bullet
list assumed.]

There are few technical reasons not to use C++ in embedded systems
(no performance impact compared to C if you do it right), but there
may be cultural reasons in some places.

I spent Friday comparing my own C++ implementation of the
listen/accept part of (3) with someone else's C implementation of the
same thing, which I was also cleaning up. Mine was much simpler, but
only maybe 30% of that simplicity came from OO features which could
be simulated in C. Another 30% was due to other features of C++ such
as RAII, standard containers and algorithms. The remaining 40% was
sane naming and a lack of misleading documentation.
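
For what it's worth, the skeleton of (3) need not be much more than a
select() loop. A minimal sketch in plain C, no OO, with error handling
omitted and a naive fixed-size connection table:

/* Accept/read part of a multiplexing (select-based) server.
 * One process holds the state for all connections. */
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAXCONN 64

void serve(int srv)                /* srv: already bound and listening */
{
    int conn[MAXCONN];
    int nconn = 0;

    for (;;) {
        fd_set rd;
        FD_ZERO(&rd);
        FD_SET(srv, &rd);
        int maxfd = srv;
        for (int i = 0; i < nconn; i++) {
            FD_SET(conn[i], &rd);
            if (conn[i] > maxfd) maxfd = conn[i];
        }

        select(maxfd + 1, &rd, 0, 0, 0);

        if (FD_ISSET(srv, &rd) && nconn < MAXCONN)
            conn[nconn++] = accept(srv, 0, 0);   /* "new connection" event */

        for (int i = 0; i < nconn; i++) {
            if (!FD_ISSET(conn[i], &rd)) continue;
            char buf[512];
            ssize_t n = read(conn[i], buf, sizeof buf);
            if (n <= 0) {                        /* "connection closed" event */
                close(conn[i]);
                conn[i] = conn[--nconn];         /* compact the table */
                i--;                             /* re-check the moved slot */
            } else {
                write(conn[i], buf, n);          /* "data arrived" event */
            }
        }
    }
}

Real code would of course also watch for writability and keep
per-connection state next to each fd, but the shape is the same.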

> Now, doing 3 wouldn't be so far different to doing it all in UDP *except*
> you now have to care about packet delivery unreliability

That's *one* feature of TCP, but there are others which, if you forget
to implement them correctly, usually spell disaster. Just to mention
two: flow control and congestion avoidance.

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
From: Joe Pfeiffer on
karthikbalaguru <karthikbalaguru79(a)gmail.com> writes:

> On Feb 20, 8:08 pm, markhob...(a)hotpop.donottypethisbit.com (Mark
> Hobley) wrote:
>> karthikbalaguru <karthikbalagur...(a)gmail.com> wrote:
>> > While reading about the various designs, interestingly i
>> > came across an info that the design of TCP servers is
>> > mostly such that whenever it accepts a connection,
>> > a new process is invoked to handle it .
>>
>> TCP is a "reliable" connection, whereas UDP is "unreliable". If you understand
>> the difference between these two types of connections, it should be clear why
>> this is so, and you would know which connection type best suits your
>> application.
>>
>
> Agreed, but the query is about the design of the
> TCP server and the UDP server. In a TCP server,
> whenever a new connection arrives, it accepts the
> connection and invokes a new process to handle
> the new connection request. The main point here
> is that 'a new process is created to handle every
> new connection that arrives at the server'.
> In the case of a UDP server, it seems that most
> of the server designs are such that there is only
> one process to handle the various clients.
> Will the TCP server get overloaded if it creates
> a new process for every new connection ? How is
> it being managed ?

Tim Watts did an excellent job two posts up-thread describing three
different architectures for TCP servers. To summarize the part that
relates directly to your question: if you've got a really heavy load,
the server can indeed get overloaded. In that case, you need to work
harder and do something like a threaded or multiplexing server.
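
For illustration, a thread-per-connection server (2) is barely more
code than the forking one; a rough sketch using POSIX threads, with
error handling omitted:

/* Thread-per-connection sketch (POSIX threads, error handling omitted). */
#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>
#include <stdlib.h>

static void *handle(void *arg)
{
    int client = *(int *)arg;
    free(arg);
    char buf[512];
    ssize_t n;
    while ((n = read(client, buf, sizeof buf)) > 0)
        write(client, buf, n);            /* trivial echo service */
    close(client);
    return 0;
}

void serve(int srv)                       /* srv: already bound and listening */
{
    for (;;) {
        int *client = malloc(sizeof *client);
        *client = accept(srv, 0, 0);
        pthread_t t;
        pthread_create(&t, 0, handle, client);
        pthread_detach(t);                /* don't accumulate unjoined threads */
    }
}

Threads are cheaper than processes, but under a really heavy load you
still end up with one thread per client, which is one reason the
multiplexing design tends to scale further.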

>>
>> > How is TCP server able to handle large number of very rapid
>> > near-simultaneous connections ?
>>
>> The datagrams carry identification numbers that enable them to be related
>> to the controlling processes, enabling them to be easily managed.
>>
>
> The point here is, consider a scenario in which
> multiple connection requests arrive while the
> TCP server is busy creating a new process for
> the earlier connection request. How does TCP
> handle those multiple connection requests in
> that scenario ?

That's what the backlog parameter on the listen() call is for. If the
number of pending requests is less than or equal to that number, they
get queued. When the number of pending requests exceeds it, requests
start getting refused.
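
In code, that queue bound is simply the second argument to listen();
a sketch (the value 8 is just an example):

/* Sketch: the backlog argument to listen() bounds the queue of
 * connections that the TCP stack has completed but the application
 * has not yet picked up via accept(). */
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>

int make_listener(unsigned short port)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    bind(srv, (struct sockaddr *)&addr, sizeof addr);

    listen(srv, 8);   /* up to ~8 pending connections are queued; beyond
                         that, further attempts are refused (or retried
                         by the client's TCP, depending on the stack) */
    return srv;
}
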
--
As we enjoy great advantages from the inventions of others, we should
be glad of an opportunity to serve others by any invention of ours;
and this we should do freely and generously. (Benjamin Franklin)
From: Paul Keinanen on
On Sat, 20 Feb 2010 14:50:43 +0000, Tim Watts <tw(a)dionic.net> wrote:

>Paul Keinanen <keinanen(a)sci.fi>
> wibbled on Saturday 20 February 2010 14:12
>
>> On Sat, 20 Feb 2010 05:10:54 -0800 (PST), karthikbalaguru
>> <karthikbalaguru79(a)gmail.com> wrote:
>>
>>>While reading about the various designs, interestingly i
>>>came across an info that the design of TCP servers is
>>>mostly such that whenever it accepts a connection,
>>>a new process is invoked to handle it .
>>>But, it seems that in the case of UDP servers design,
>>>there is only a single process that handles all client
>>>requests. Why such a difference in design of TCP and
>>>UDP servers ? How is TCP server able to handle
>>>large number of very rapid near-simultaneous connections ?
>>>Any ideas ?
>>
>> While I understand that some lazy programmers might use TCP/IP for
>> some minor ad hoc applications.
>>
>
>
>Because it's a *reliable* transport protocol? Why waste effort in the
>application doing what an off the shelf stack can do for you?

This is true only as long as there is an existing TCP/IP connection.

Once this is lost, you have to use something else to establish a new
TCP/IP connection.

Once you have to create a new TCP/IP connection to replace a broken
one, you need a similar amount of logic to that of a UDP or raw
Ethernet packet system.

>Perhaps someone wants to shift more than 64k's worth of data and doesn't
>want to be bothered with checking for duplicate packets, sequencing or
>failure to deliver.

Only if there is a 100 % certainty that a TCP/IP connection that I
create today will remain there long after I am retired and long after
I am dead. A 99.9 % certainty is _far_ too low.

>Most of the internet runs quite happily on TCP/IP with only a very few
>critical components over UDP (eg DNS, NTP - and even then either *may* use
>TCP).
>
>But the choice goes deeper - is the problem better solved with a reliable
>connection oriented protocol or a datagram based one?
>
>> I still do not understand why anybody would use TCP/IP for any
>> critical 24x7 applications.
>
>That is a rather sweeping statement.

The main problem with TCP/IP is that you cannot take immediate action
as soon as the link is lost.

Once the link is lost, you need to implement roughly the same amount
of recovery logic as the TCP/IP stack itself would require.
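
For illustration, the kind of re-establishment logic being talked
about might look roughly like this on the client side (a sketch only;
the fixed 5-second retry delay and the helper names are invented for
the example):

/* Client-side reconnect sketch: detect a lost TCP connection and retry. */
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <string.h>

static int connect_once(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

void run(const char *ip, unsigned short port)
{
    for (;;) {
        int fd = connect_once(ip, port);
        if (fd < 0) { sleep(5); continue; }   /* retry after a delay */

        char buf[512];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0) {
            /* hand the data to the application here */
        }
        /* n <= 0: peer closed or the link broke; reconnect */
        close(fd);
        sleep(5);
    }
}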

From: Paul Keinanen on
On Sat, 20 Feb 2010 08:15:13 -0800 (PST), karthikbalaguru
<karthikbalaguru79(a)gmail.com> wrote:

>
>Agreed, but the query is about the design of the
>TCP server and the UDP server. In a TCP server,
>whenever a new connection arrives, it accepts the
>connection and invokes a new process to handle
>the new connection request. The main point here
>is that 'a new process is created to handle every
>new connection that arrives at the server'.
>In the case of a UDP server, it seems that most
>of the server designs are such that there is only
>one process to handle the various clients.
>Will the TCP server get overloaded if it creates
>a new process for every new connection ? How is
>it being managed ?

As long as you have a simple transaction system, one incoming request,
one outgoing response, why on earth would any sensible person create a
TCP/IP connection for this simple transaction ?
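
A single-process UDP request/response server really is this small (a
sketch with error handling omitted; no per-client state is kept at
all):

/* One socket, one process, one datagram in, one datagram out. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>

void serve_udp(unsigned short port)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    bind(s, (struct sockaddr *)&addr, sizeof addr);

    for (;;) {
        char req[1500];
        struct sockaddr_in peer;
        socklen_t plen = sizeof peer;
        ssize_t n = recvfrom(s, req, sizeof req, 0,
                             (struct sockaddr *)&peer, &plen);
        if (n < 0) continue;
        /* build the response here; this sketch just echoes the request */
        sendto(s, req, n, 0, (struct sockaddr *)&peer, plen);
    }
}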

From: karthikbalaguru on
On Feb 21, 12:02 am, Paul Keinanen <keina...(a)sci.fi> wrote:
> On Sat, 20 Feb 2010 08:15:13 -0800 (PST), karthikbalaguru
>
> <karthikbalagur...(a)gmail.com> wrote:
>
> >Agreed, but the query is about the design of the
> >TCP server and the UDP server. In a TCP server,
> >whenever a new connection arrives, it accepts the
> >connection and invokes a new process to handle
> >the new connection request. The main point here
> >is that 'a new process is created to handle every
> >new connection that arrives at the server'.
> >In the case of a UDP server, it seems that most
> >of the server designs are such that there is only
> >one process to handle the various clients.
> >Will the TCP server get overloaded if it creates
> >a new process for every new connection ? How is
> >it being managed ?
>
> As long as you have a simple transaction system, one incoming request,
> one outgoing response, why on earth would any sensible person create a
> TCP/IP connection for this simple transaction ?

Consider a scenario in which multiple high-speed
TCP connection requests arrive within an
extremely short time frame. In that scenario,
the TCP server would get overloaded if a separate
thread is created for every new connection that
arrives at the server.

Karthik Balaguru