From: karthikbalaguru on
On Feb 21, 4:19 am, Tim Watts <t...(a)dionic.net> wrote:
> karthikbalaguru <karthikbalagur...(a)gmail.com>
>   wibbled on Saturday 20 February 2010 19:49
>
>
>
> > Interesting to know a method for having a Light load TCP server
> > by using the existing utilities in Linux/Unix in the form of
> > Forking Server !
>
> Yes, it's actually very simple. Get yourself a linux machine, write a
> trivial program in perl, python, C, whatever that accepts lines on STDIN and
> replies with something trivial (e.g. the contents of STDIN echoed back) on STDOUT.
> Run it and see if it does what you expect.
>
> Now configure xinetd (the modern inetd, usually the default on any modern
> linux) to bind your program to say, TCP port 9000.
>
> On the same machine, telnet localhost 9000 and you should have the same
> experience as running the program directly. telnet to it 3 times
> simultaneously from 3 different terminal windows. telnet to it from a
> different machine on your network.
>
>

Interesting telnet example for TCP !
I checked the link http://en.wikipedia.org/wiki/Inetd and it seems to
have some good info about this, just as you conveyed.
I liked the errorlog-over-UDP example, where only one instance
of the service runs to handle all requests. Interesting
to know that just by specifying 'wait', inetd can be configured
to use only one instance of the server to handle all requests.
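
By way of a sketch, the trivial stdin/stdout program mentioned above could
look like this in C (untested; under inetd/xinetd the accepted socket is
already wired up as stdin/stdout, so no socket code is needed in the service
itself):

/* echo_service.c - minimal line-echo service for inetd/xinetd (sketch).
 * The superserver accepts the TCP connection and hands it to us as
 * fd 0/1, so plain stdio is enough. Error handling kept to a minimum.
 */
#include <stdio.h>

int main(void)
{
    char line[1024];

    setvbuf(stdout, NULL, _IOLBF, 0);   /* stdout is a socket: line-buffer it */

    while (fgets(line, sizeof line, stdin) != NULL)
        fputs(line, stdout);            /* echo each line straight back */

    return 0;
}

If I understand the xinetd side correctly, a stanza with socket_type =
stream, wait = no and server pointing at this binary (plus port = 9000 for an
unlisted service) binds it to the port and forks one instance per connection,
while wait = yes would hand everything to a single long-lived instance - the
one-instance behaviour mentioned above.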

>
> >> 2 - Popular - little harder to program, much more efficient, assuming
> >> your OS can handle thread creation more lightly than process creation.
>
> > Threaded Server seems to be good, but it might be
> > overloading the TCP server very quickly incase of fast
> > multiple connection requests within a very short timeframe.
> > Just as you said, i think if the thread creation is of less
> > overhead in the particular OS in which TCP server is
> > running, then it would be great.
>
> Yes - if you don't mind thread programming - it does have its own peculiar
> issues.
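
For reference, a rough sketch of that thread-per-connection idea in
C/pthreads (illustrative only; the port, backlog and buffer size are
arbitrary and error handling is omitted):

/* thread_per_conn.c - one detached thread per accepted connection (sketch). */
#include <pthread.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static void *handle_client(void *arg)
{
    int fd = (int)(long)arg;
    char buf[512];
    ssize_t n;

    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(fd, buf, n);              /* trivial echo back to the client */
    close(fd);
    return NULL;
}

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { 0 };

    sa.sin_family = AF_INET;
    sa.sin_port = htons(9000);          /* arbitrary example port */
    sa.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(lfd, (struct sockaddr *)&sa, sizeof sa);
    listen(lfd, 16);

    for (;;) {
        int cfd = accept(lfd, NULL, NULL);
        pthread_t tid;
        pthread_attr_t attr;

        if (cfd < 0)
            continue;
        pthread_attr_init(&attr);
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
        pthread_create(&tid, &attr, handle_client, (void *)(long)cfd);
        pthread_attr_destroy(&attr);
    }
}

The cost per connection here is one thread creation, which is exactly the
overhead being discussed above.
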
>
> > I came across preforking tricks too where a server launches
> > a number of child processes when it starts .
>
> Apache does that in one of its modes (and it has several modes). That is an
> example of a high performance bit of server software. Much more complicated
> to manage of course and less likely to be suitable for a tiny embedded
> system - but could be suitable for a decent 32 bit system with some sort of
> OS.
>
>

Interesting to know that Apache uses preforking tricks
in which the server launches a number of child processes when
it starts.
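
As a rough sketch of that preforking idea (not Apache's actual code; the
worker count and port are arbitrary, and a real server would add some
serialisation around accept() where needed and supervise its children):

/* prefork.c - fork N workers up front; each blocks in accept() (sketch). */
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/wait.h>

#define NUM_WORKERS 4

static void worker(int lfd)
{
    char buf[512];

    for (;;) {
        int cfd = accept(lfd, NULL, NULL);  /* each connection goes to one
                                               of the blocked workers */
        ssize_t n;

        if (cfd < 0)
            continue;
        while ((n = read(cfd, buf, sizeof buf)) > 0)
            write(cfd, buf, n);             /* trivial echo service */
        close(cfd);
    }
}

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { 0 };
    int i;

    sa.sin_family = AF_INET;
    sa.sin_port = htons(9000);              /* arbitrary example port */
    sa.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(lfd, (struct sockaddr *)&sa, sizeof sa);
    listen(lfd, 64);

    for (i = 0; i < NUM_WORKERS; i++)       /* children exist before any */
        if (fork() == 0)                    /* connection arrives */
            worker(lfd);                    /* never returns */

    for (;;)                                /* parent just reaps children */
        wait(NULL);
}

The point is that no fork() happens on the connection path, so a burst of
near-simultaneous connections only costs an accept() per connection.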

>
>
>
> > Those inturn
> > would be serving the new connection requests by having
> > some kind of locking mechanism around the call to accept
> > so that at any point of time, only one child can use it and
> > the others will be blocked until the lock is released.
> > There seem to be some way out of that locking problem.
> > But, i think the idea of creation of one child for every
> > new connection/client seems to be better than the preforking
> > trick, but these tricks in turn overload the TCP server
> > incase of fast successive/near-simultaneous connection
> > requests within a short time frame.
> > Just as you said, i think if the thread creation is of less
> > overhead in the particular OS in which TCP server is
> > running, then it would be great.
>
> >> 3 - Very efficient. One process maintains a state for all connections,
> >> often using event methodology to call service subroutines when something
> >> interesting happens (eg new connection, data arrived, output capable of
> >> accepting data, connection closed). Sounds horrible, but with an OO
> >> approach, very easy to get your head around. Now bearing in mind that
> >> anything in OO can be bastardised to a handle and an array of struct
> >> which holds the equivalent data that an OO object would, this could be a
> >> very suitable method for embedded systems where C may be the language of
> >> choice and there may be no OS or only a very simple one that doesn't map
> >> well.
>
> > Having one process for maintaining states of all
> > connections and implementing the event
> > methodology that calls service subroutines whenever
> > a certain specific instance happens sounds interesting.
> > Appears to be the ultimate method for embedded systems
> > where OS is absent and C is the main language .
> > Anyhow, need to analyze the drawbacks if any.
>
> I actually used this when I coded a bunch of servers in perl [1] to
> interface to dozens of identical embedded devices. It was actually mentally
> much easier than worrying about locking issues, as all the separate
> connections had to be coordinated onto one data set in RAM, ie they weren't
> functionally independent.
>

But, was it robust enough to handle near-simultaneous multiple
connections from various clients within a short timeframe ?
Were you using some kind of buffering/pool mechanism which
the main process checked as soon as it was done with the
action for a particular connection ?

> [1] Yes perl. This was a rapid prototyping exercise to prove a point. My
> aim was to recode in C if necessary. It wasn't as the overhead of using perl
> was nearly two orders of magnitude less significant than the load of talking
> to an RDBMS - so it stayed in perl working quite happily in production.
>
> >> Now, doing 3 wouldn't be so far different to doing it all in UDP *except*
> >> you now have to care about packet delivery unreliability - as you can get
> >> a variety of stacks for many embedded systems, why not let someone else's
> >> hard work help you out?
>
> >> --
>

Thx in advans,
Karthik Balaguru
From: karthikbalaguru on
On Feb 21, 4:34 am, Tim Watts <t...(a)dionic.net> wrote:
> karthikbalaguru <karthikbalagur...(a)gmail.com>
>   wibbled on Saturday 20 February 2010 21:26
>
>
>
> > True ! Need to decide on the best design methodology between
> > either threaded or multiplexing server.
>
> I might have missed it - but what system is your code going to run on?
> Linux, something else fairly "fat" or a teeny embedded system with no
> resources?
>

Linux !

> It makes a difference, because there's no point in looking at (say) forking
> servers if you have no processes!
>

:-) True !

> There is also the element of code simplicity and maintainability. If this
> were running on a high end system, you might be better to use a well known
> and debugged framework to manage your connections, so you write as little
> code as possible and what you do write deals mostly with the actual logic of
> your program rather than a whole overhead of connection management. No
> disrespect intended on your programming abilities ;-> but less code is always
> better :)
>
> I was in several minds how to approach my problem, until I found perl's less
> well known IO::Multiplex library - after that it was plain sailing with a
> multiplexed server. If that hadn't existed, I might well have used a
> multiprocess model with a lump of shared memory and some semaphores (I had
> the advantage that there was only one persistent TCP connection incoming
> from each of a finite number of embedded systems, so connection setup
> overhead was lost in the wash)
> --

Great. But is there a C language equivalent that can make it
plain sailing with a multiplexed server ?

I searched the internet to find the features available with perl's
IO::Multiplex. It seems that IO::Multiplex is designed to take the
effort out of managing multiple file handles. It is essentially a
really fancy front end to the select system call. In addition to
maintaining the select loop, it buffers all input and output to/from
the file handles. It can also accept incoming connections on one or
more listen sockets.
It is object oriented in design, and will notify you of significant
events by calling methods on an object that you supply. If you are not
using objects, you can simply supply __PACKAGE__ instead of an object
reference. You may have one callback object registered for each file
handle, or one global one. Possibly both -- the per-file handle
callback object will be used instead of the global one. Each file
handle may also have a timer associated with it. A callback function
is called when the timer expires.
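
From that description, the core mechanism appears to be a select() loop; a
rough sketch of the same idea in plain C (illustrative only; a real server
would also buffer partial writes and handle errors properly):

/* select_loop.c - single process multiplexing many TCP clients (sketch). */
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { 0 };
    fd_set all, rd;
    int maxfd, fd;

    sa.sin_family = AF_INET;
    sa.sin_port = htons(9000);            /* arbitrary example port */
    sa.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(lfd, (struct sockaddr *)&sa, sizeof sa);
    listen(lfd, 16);

    FD_ZERO(&all);
    FD_SET(lfd, &all);
    maxfd = lfd;

    for (;;) {
        rd = all;                         /* select() modifies the set it gets */
        if (select(maxfd + 1, &rd, NULL, NULL, NULL) < 0)
            continue;

        for (fd = 0; fd <= maxfd; fd++) {
            if (!FD_ISSET(fd, &rd))
                continue;
            if (fd == lfd) {              /* "new connection" event */
                int cfd = accept(lfd, NULL, NULL);
                if (cfd >= 0) {
                    FD_SET(cfd, &all);
                    if (cfd > maxfd)
                        maxfd = cfd;
                }
            } else {
                char buf[512];
                ssize_t n = read(fd, buf, sizeof buf);
                if (n <= 0) {             /* "connection closed" event */
                    close(fd);
                    FD_CLR(fd, &all);
                } else {                  /* "data arrived" event */
                    write(fd, buf, n);    /* trivial echo "callback" */
                }
            }
        }
    }
}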

Any equivalent C language package available ?

Thx in advans,
Karthik Balaguru
From: karthikbalaguru on
On Feb 21, 12:43 pm, Tim Watts <t...(a)dionic.net> wrote:
> karthikbalaguru <karthikbalagur...(a)gmail.com>
>   wibbled on Sunday 21 February 2010 03:05
>
> > On Feb 21, 4:19 am, Tim Watts <t...(a)dionic.net> wrote:
> >> I actually used this when I coded a bunch of servers in perl [1] to
> >> interface to dozens of identical embedded devices. It was actually
> >> mentally much easier than worrying about locking issues, as all the
> >> separate connections had to be coordinated onto one data set in RAM,
> >> ie they weren't functionally independent.
>
> > But, was it robust enough to handle near-simultaneous multiple
> > connections from various clients within a short timeframe ?
> > Were you using some kind of buffering/pool mechanism which
> > the main process checked as soon as it was done with the
> > action for a particular connection ?
>
> Yes to the first question.

Cool :-)

> The OS takes care of that. Within (quite large)
> limits, linux (and any other "proper" OS) will buffer the incoming SYN
> packets until the application gets around to doing an accept() on the
> listening socket. The application doesn't have to worry about that as long
> as it isn't going to block on something else for some silly amount of time.
>
> In practice, it was 10's of milliseconds at most.
>

Okay, it is not a problem as long as the buffer is able to
hold the packets without overflowing.
I have been searching the internet regarding the
buffering and arrived at various links for linux -
http://www.psc.edu/networking/projects/tcptune/
http://fasterdata.es.net/TCP-tuning/linux.html
http://www.mjmwired.net/kernel/Documentation/networking/ip-sysctl.txt

- 'sysctl' seems to hold the key !
- I find that the /proc special files can also be written to
configure the same system parameters.

Set the maximum size of the TCP transmit buffer -
echo 108544 > /proc/sys/net/core/wmem_max
Set the maximum size of the TCP receive buffer -
echo 108544 > /proc/sys/net/core/rmem_max
Set min, default, max receive window (used by the autotuning function) -
echo "4096 87380 4194304" > /proc/sys/net/ipv4/tcp_rmem
Set min, default, max transmit window (used by the autotuning function) -
echo "4096 16384 4194304" > /proc/sys/net/ipv4/tcp_wmem
Enable receive-buffer autotuning -
echo 1 > /proc/sys/net/ipv4/tcp_moderate_rcvbuf
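
Those are system-wide limits; I believe the same settings can also be
written with sysctl, e.g. 'sysctl -w net.core.rmem_max=108544'. Per socket,
an application can additionally request its own buffer sizes and listen
backlog, roughly like this (a sketch; the kernel clamps the requested sizes
to the maxima above, and setting them explicitly turns off autotuning for
that socket):

/* Per-socket tuning sketch: explicit buffer sizes and accept backlog. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int make_listener(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int buf = 108544;                     /* same figure as the echoes above */
    struct sockaddr_in sa = { 0 };

    /* Request explicit send/receive buffers for this socket. */
    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &buf, sizeof buf);
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &buf, sizeof buf);

    sa.sin_family = AF_INET;
    sa.sin_port = htons(9000);            /* arbitrary example port */
    sa.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, (struct sockaddr *)&sa, sizeof sa);

    /* The backlog bounds how many completed connections the kernel will
     * queue while the server has not yet called accept(). */
    listen(fd, 128);
    return fd;
}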


> Different issue of course on a tiny system (you still haven't said what your
> target system is).
>

But, what kind of issues are there on a tiny system ?

Thx in advans,
Karthik Balaguru

From: David Schwartz on
On Feb 20, 5:10 am, karthikbalaguru <karthikbalagur...(a)gmail.com>
wrote:

> While reading about the various designs, interestingly i
> came across an info that the design of TCP servers is
> mostly such that whenever it accepts a connection,
> a new process is invoked to handle it .

This is like a singly-linked list. It's the first thing you learn, and
so many people tend to assume it's used a lot in the real world. In
actuality, it's only common in special cases, such as when each
instance needs its own security context (like in 'telnetd' and
'sshd').

> But, it seems that in the case of UDP servers design,
> there is only a single process that handles all client
> requests.

This is the norm for TCP too, most of the time. But it's more
obviously reasonable for UDP, since there are no connections. You
can't create one process for each connection because there's no such
thing.
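
For illustration, a minimal single-process UDP service is just one receive
loop (a sketch; the port and buffer size are arbitrary):

/* udp_echo.c - one process, no connections: every datagram lands here. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in sa = { 0 }, peer;
    socklen_t plen;
    char buf[1500];

    sa.sin_family = AF_INET;
    sa.sin_port = htons(9000);            /* arbitrary example port */
    sa.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, (struct sockaddr *)&sa, sizeof sa);

    for (;;) {
        ssize_t n;
        plen = sizeof peer;
        n = recvfrom(fd, buf, sizeof buf, 0,
                     (struct sockaddr *)&peer, &plen);
        if (n > 0)                        /* reply to whichever client sent it */
            sendto(fd, buf, n, 0, (struct sockaddr *)&peer, plen);
    }
}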

> Why such a difference in design of TCP and
> UDP servers ?

To the extent there is such a difference, it's because of the
different things the servers do. For example, TCP is commonly used for
web servers. Each web connection is a separate logical operation with
multiple steps that affect only that operation. However, a typical UDP
server (such as a time server or resolver) needs to maintain some
global state that each packet received minorly interacts with.

> How is TCP server able to handle
> large number of very rapid near-simultaneous connections ?

Process-per-connection servers tend to do this very poorly. But one
trick is to create the processes before you need them rather than
after.

DS
From: Chris H on
In message <us90o5hjsgi2qe98mcps52heup86vemk2g(a)4ax.com>, Paul Keinanen
<keinanen(a)sci.fi> writes
>Only if there is a 100 % certainty that a TCP/IP connection that I
>create today, will remain there long after I am retired and long after
>I am dead. A 99.9 % certainty is _far_ too low.

But it is _*FAR*_ higher than _*ANY*_ other protocol.

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/