From: rickman on
On Feb 21, 10:54 am, karthikbalaguru <karthikbalagur...(a)gmail.com>
wrote:
> On Feb 21, 2:55 pm, David Schwartz <dav...(a)webmaster.com> wrote:
>
> > On Feb 20, 5:10 am, karthikbalaguru <karthikbalagur...(a)gmail.com>
> > wrote:
>
> > > How is TCP server able to handle
> > > large number of very rapid near-simultaneous connections ?
>
> > Process-per-connection servers tend to do this very poorly.
>
> It overloads the server.
>
> > But one
> > trick is to create the processes before you need them rather than
> > after.
>
> But, how many processes should be created at the server ?
> How will the server know about the number of processes that it
> has to create ? Any ideas ?

If it is really just process-creation time that is an issue for running
many processes, one could always be waiting as a hot spare. When it
is then turned loose on a new connection, a new process could be
created as a background task.
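That hot-spare scheme might look something like the following sketch
(my illustration, assuming a Unix-like system; `serve_with_hot_spare`
and the pipe-based handshake are inventions for the example, not code
from any real server):

```python
import os
import signal
import socket

def serve_with_hot_spare(listen_sock, handle):
    """Keep exactly one pre-forked 'hot spare' blocked in accept().

    When the spare takes a connection it signals the parent over a
    pipe, and the parent immediately forks the next spare while the
    old one services its client.
    """
    signal.signal(signal.SIGCHLD, signal.SIG_IGN)  # auto-reap workers
    while True:
        r, w = os.pipe()                 # spare -> parent: "I'm busy now"
        pid = os.fork()
        if pid == 0:                     # the hot-spare worker
            os.close(r)
            conn, _ = listen_sock.accept()
            os.write(w, b"x")            # ask the parent for a replacement
            os.close(w)
            handle(conn)
            conn.close()
            os._exit(0)
        os.close(w)
        os.read(r, 1)                    # block until the spare is in use
        os.close(r)
```

Creating the replacement overlaps with servicing the accepted
connection, which is exactly the "background task" part; a real server
would keep more than one spare around.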

Rick
From: karthikbalaguru on
On Feb 22, 12:38 am, David Schwartz <dav...(a)webmaster.com> wrote:
> On Feb 21, 7:54 am, karthikbalaguru <karthikbalagur...(a)gmail.com>
> wrote:
>
> > But, how many processes should be created at the server ?
> > How will the server know about the number of processes that it
> > has to create ? Any ideas ?
>
> Note that this is a key weakness of the 'process-per-connection'
> model, and I recommend just not using that model unless it's mandated
> by other concerns (such as cases where security is more important
> than performance).
>

But how is the 'process-per-connection' technique helpful for
security?

> But there are two techniques, and they are typically used in
> combination. One is static configuration. This is key on initial
> server startup. For example, versions of Apache that were process per
> connection let you set the number of processes to be started up
> initially. They also let you set the target number of 'spare' servers
> waiting for connections.
>

In the case of static configuration, wouldn't the number of servers
started initially load the server? There seems to be a drawback in
this approach: the 'spare' server processes might be created
unnecessarily even when there are only a few clients.
That is, with few clients, those servers would wait for connections
needlessly, which in turn consumes system resources.
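For concreteness, the Apache (prefork) knobs being described
correspond roughly to these httpd.conf directives; the numbers here
are purely illustrative:

```apacheconf
# Illustrative values only -- tune for your own load.
StartServers          5    # processes forked at startup
MinSpareServers       5    # fork more when idle spares fall below this
MaxSpareServers      10    # kill idle spares above this
MaxClients          150    # cap on simultaneous worker processes
```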

> The other technique is dynamic tuning. You monitor the maximum number
> of servers you've ever needed at once, and you keep close to that many
> around unless you've had a long period of inactivity.
>

Dynamic tuning appears to overcome the drawbacks of static
configuration, but the scenario of a 'long period of inactivity'
requires some thought. During that time we might have to terminate,
and later restart, a number of processes unnecessarily. And since we
cannot be completely sure when the maximum traffic will arrive, we
might end up keeping all those servers running needlessly for a long
time :-( . Any thoughts?

Terminating and recreating processes also consumes system resources.

> Servers are going to be slow to start up anyway, and the time to
> create new processes is not way out of line with the time to fault in
> pages of code and other delays during server startup that you can do
> very little about.
>

Thanks in advance,
Karthik Balaguru
From: karthikbalaguru on
On Feb 22, 5:23 am, rickman <gnu...(a)gmail.com> wrote:
> On Feb 21, 10:54 am, karthikbalaguru <karthikbalagur...(a)gmail.com>
> wrote:
>
> > On Feb 21, 2:55 pm, David Schwartz <dav...(a)webmaster.com> wrote:
>
> > > On Feb 20, 5:10 am, karthikbalaguru <karthikbalagur...(a)gmail.com>
> > > wrote:
>
> > > > How is TCP server able to handle
> > > > large number of very rapid near-simultaneous connections ?
>
> > > Process-per-connection servers tend to do this very poorly.
>
> > It overloads the server.
>
> > > But one
> > > trick is to create the processes before you need them rather than
> > > after.
>
> > But, how many processes should be created at the server ?
> > How will the server know about the number of processes that it
> > has to create ? Any ideas ?
>
> If it is really just process-creation time that is an issue for running
> many processes, one could always be waiting as a hot spare. When it
> is then turned loose on a new connection, a new process could be
> created as a background task.
>

But even while waiting, that spare process consumes resources unnecessarily.

Karthik Balaguru
From: David Schwartz on
On Feb 21, 6:22 pm, karthikbalaguru <karthikbalagur...(a)gmail.com>
wrote:

> > Note that this is a key weakness of the 'process-per-connection'
> > model, and I recommend just not using that model unless it's mandated
> > by other concerns (such as cases where security is more important
> > than performance).

> But, how is that technique of 'process-per-connection' very
> helpful for security ?

Processes are isolated from each other by the operating system and can
have their own security context. Threads share pretty much everything.
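A toy sketch of that isolation (my own example, not from this thread):
state clobbered in one connection's process never leaks into another,
and a real process-per-connection server can additionally drop
privileges in each child.

```python
import os

# Each fork() gives the child a copy-on-write copy of the parent's
# memory, so one connection's process cannot corrupt another's state.
state = {"owner": "parent"}

pid = os.fork()
if pid == 0:                      # pretend this child handles one connection
    state["owner"] = "clobbered"  # visible only inside this process
    # a real server could also call os.setuid() here to drop privileges
    os._exit(0)

os.waitpid(pid, 0)
print(state["owner"])             # the parent still sees "parent"
```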

> > But there are two techniques, and they are typically used in
> > combination. One is static configuration. This is key on initial
> > server startup. For example, versions of Apache that were process per
> > connection let you set the number of processes to be started up
> > initially. They also let you set the target number of 'spare' servers
> > waiting for connections.

> In the case of static configuration, wouldn't the number of servers
> started initially load the server? There seems to be a drawback in
> this approach: the 'spare' server processes might be created
> unnecessarily even when there are only a few clients.

So what? Who cares about performance when there's no load?

> That is, with few clients, those servers would wait for connections
> needlessly, which in turn consumes system resources.

So what? If there are fewer clients, you have system resources to
spare.

> > The other technique is dynamic tuning. You monitor the maximum number
> > of servers you've ever needed at once, and you keep close to that many
> > around unless you've had a long period of inactivity.

> Dynamic tuning appears to overcome the drawbacks of static
> configuration, but the scenario of a 'long period of inactivity'
> requires some thought. During that time we might have to terminate,
> and later restart, a number of processes unnecessarily. And since we
> cannot be completely sure when the maximum traffic will arrive, we
> might end up keeping all those servers running needlessly for a long
> time :-( . Any thoughts?

They won't be "running". They'll be waiting.

> Terminating and recreating processes also consumes
> system resources.

Again, so what? Why are you trying to optimize the case where the
server has little work to do?
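The dynamic-tuning rule described up-thread ("monitor the maximum
number of servers you've ever needed at once, and keep close to that
many around") could be sketched like this; the class and all names are
mine, purely illustrative:

```python
class SpareTuner:
    """Track the peak number of simultaneously busy workers and keep
    roughly that many processes around, shrinking the target only
    after a long period of inactivity."""

    def __init__(self, initial_peak=4, decay=0.5):
        self.peak_busy = initial_peak
        self.decay = decay

    def on_load_sample(self, busy_now):
        # called periodically with the current number of busy workers
        self.peak_busy = max(self.peak_busy, busy_now)

    def on_long_idle(self):
        # forget part of the old peak after prolonged inactivity
        self.peak_busy = max(1, int(self.peak_busy * self.decay))

    def spares_wanted(self, busy_now):
        # enough idle spares to absorb a burst up to the historical peak
        return max(0, self.peak_busy - busy_now)
```

The idle spares cost almost nothing while blocked in accept(), which
is the point above: they are waiting, not running.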

DS
From: Jorgen Grahn on
["Followup-To:" header set to comp.protocols.tcp-ip despite your
suggestion comp.arch.embedded, which I don't read. I think it's pretty
clear by now that he has no particular interest in embedded systems.]

On Mon, 2010-02-22, Tim Watts wrote:
> karthikbalaguru <karthikbalaguru79(a)gmail.com>
> wibbled on Monday 22 February 2010 04:19
>
>
>>
>> But even while waiting, that spare process consumes resources unnecessarily.
>>
>> Karthik Balaguru
>
> You need to tell us more about your system (hardware spec, purpose of
> server, expected load).

In general, he needs to tell us what his goal with this discussion is.
The questions jump all over the place, and every answer immediately
spawns N new questions -- with no clue what (if anything) he's trying
to accomplish, other than perhaps a focus on extremely high accept()
load.

Perhaps the OP would be better served by a book. You cannot learn all
you need to know about socket programming from a Usenet thread. I
recommend Stevens' "UNIX Network Programming" vol 1, and "TCP/IP
Illustrated" vol 1 in order to make sense of the former.

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .