From: Peter Olcott on

"Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote in message
news:zJydnZpsT6iljlrWnZ2dnUVZ_tGdnZ2d(a)giganews.com...
>
> "Henry" <se16(a)btinternet.com> wrote in message
> news:TZKdna2IBNo9_1vWnZ2dnUVZ8jOdnZ2d(a)bt.com...
>> On 15/04/2010 01:11, Peter Olcott wrote:
>>> Single Queue Multi Server (SQMS ) versus Multi Queue
>>> Multi
>>> Server (MQMS)
>>>
>>> http://users.crhc.illinois.edu/nicol/ece541/slides/queueing.pdf
>>>
>>> I am not a math guy, I am a computer science guy. I need to
>>> know exactly what it is about a SQMS that makes it so much
>>> more efficient than a MQMS as shown on slide 31 of the above
>>> link.
>>>
>>> For example is it that the SQ more uniformly distributes the
>>> jobs than the MQ, thus reducing idle time?
>>>
>>> Thanks.
>>>
>>
>> It is because with m M/M/1 queues you will often get some
>> waiting in a number of queues while other servers are
>> idle. That does not happen with one M/M/m queue where
>> idle time only happens when there is no waiting.
>
> Since (in my case) these servers are provided by sharing a
> single computer CPU using time slicing, this would not
> impact my performance because an idle server would make
> the remaining servers run proportionally more quickly.
>
> If each server would be provided by an individual core of
> a quad core machine, then what you said would directly
> apply.
>
> Does this make sense to you? Do you see any errors in what
> I said?
>

Besides the typo that I corrected on line 6,
providing-->provided

>>
>> With real people in queues, this would lead to some of
>> them changing queues, which would then equalise mean
>> response times, though there would still be higher
>> variance of response times with the m queue version.
>>
>
>
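
(As a concrete way to see the effect Henry describes, here is a
minimal simulation sketch. The arrival rate, service rate, and the
four-server count are assumed numbers, not anything from the slides;
it just contrasts one shared FIFO feeding all servers with a separate
FIFO per server.)

  #include <algorithm>
  #include <cstdio>
  #include <random>
  #include <vector>

  int main() {
      const int kServers = 4;
      const int kJobs = 200000;
      std::mt19937 rng(42);
      std::exponential_distribution<double> interarrival(3.2); // assumed arrival rate
      std::exponential_distribution<double> service(1.0);      // assumed service rate
      std::uniform_int_distribution<int> pick(0, kServers - 1);

      std::vector<double> arrive(kJobs), work(kJobs);
      double t = 0.0;
      for (int i = 0; i < kJobs; ++i) {
          t += interarrival(rng);
          arrive[i] = t;
          work[i] = service(rng);
      }

      // SQMS: one FIFO queue; each job goes to whichever server frees up first.
      std::vector<double> freeAt(kServers, 0.0);
      double waitSQ = 0.0;
      for (int i = 0; i < kJobs; ++i) {
          int s = int(std::min_element(freeAt.begin(), freeAt.end()) - freeAt.begin());
          double start = std::max(arrive[i], freeAt[s]);
          waitSQ += start - arrive[i];
          freeAt[s] = start + work[i];
      }

      // MQMS: each job is committed to a randomly chosen queue on arrival.
      std::fill(freeAt.begin(), freeAt.end(), 0.0);
      double waitMQ = 0.0;
      for (int i = 0; i < kJobs; ++i) {
          int s = pick(rng);
          double start = std::max(arrive[i], freeAt[s]);
          waitMQ += start - arrive[i];
          freeAt[s] = start + work[i];
      }

      std::printf("mean wait, single shared queue:  %.3f\n", waitSQ / kJobs);
      std::printf("mean wait, one queue per server: %.3f\n", waitMQ / kJobs);
  }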


From: Jasen Betts on
On 2010-04-15, Peter Olcott <NoSpam(a)OCR4Screen.com> wrote:
>
> "Jasen Betts" <jasen(a)xnet.co.nz> wrote in message
> news:hq6lh5$k98$1(a)reversiblemaps.ath.cx...
>> On 2010-04-15, Peter Olcott <NoSpam(a)OCR4Screen.com> wrote:
>>
>> suppose you are in a queue and the job in front of you is a
>> longer than usual one. If there's only one server at the head
>> of the queue you're in for a wait,
>>
>> if there are multiple servers one of the other servers will
>> likely be free soon enough
>
> This is multiple servers provided on a single CPU, using
> time slicing. Because of this the more servers there are the
> slower that each server is.

in that case using a time slice size that's longer than it usually
takes to serve a client may be the optimum strategy,

depending on what metric you are using to determine the performance
of a queuing strategy.

--- news://freenews.netfront.net/ - complaints: news(a)netfront.net ---
From: Peter Olcott on

"Jasen Betts" <jasen(a)xnet.co.nz> wrote in message
news:hq9889$1pp$2(a)reversiblemaps.ath.cx...
> On 2010-04-15, Peter Olcott <NoSpam(a)OCR4Screen.com> wrote:
>>
>> "Jasen Betts" <jasen(a)xnet.co.nz> wrote in message
>> news:hq6lh5$k98$1(a)reversiblemaps.ath.cx...
>>> On 2010-04-15, Peter Olcott <NoSpam(a)OCR4Screen.com> wrote:
>>>
>>> suppose you are in a queue and the job in front of you is a
>>> longer than usual one. If there's only one server at the head
>>> of the queue you're in for a wait,
>>>
>>> if there are multiple servers one of the other servers will
>>> likely be free soon enough
>>
>> This is multiple servers provided on a single CPU, using
>> time slicing. Because of this the more servers there are the
>> slower that each server is.
>
> in that case using a time slice size that's longer than it
> usually takes to serve a client may be the optimum strategy,
>
> depending on what metric you are using to determine the
> performance of a queuing strategy.

One set of jobs has a real-time constraint of 100 ms. The
remaining jobs only need to be done within 24 hours,
although they should be done as quickly as possible. Because
of this the first set of jobs must have absolute priority
over the remaining jobs, and the remaining jobs have equal
priority relative to each other. These aspects are
relatively easy to optimize for.
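
(As a minimal sketch of that strict two-level rule: the names and the
single-worker loop below are made up for illustration and ignore all
the IPC and persistence details.)

  #include <cstdio>
  #include <queue>
  #include <string>

  struct Job { std::string payload; };

  // Real-time jobs always drain first; the 24-hour jobs are touched
  // only when the real-time queue is empty, and are FIFO among themselves.
  class TwoLevelQueue {
  public:
      void pushRealTime(Job j) { realtime_.push(std::move(j)); }
      void pushBatch(Job j)    { batch_.push(std::move(j)); }

      bool pop(Job &out) {
          std::queue<Job> &q = !realtime_.empty() ? realtime_ : batch_;
          if (q.empty()) return false;      // nothing to do at all
          out = std::move(q.front());
          q.pop();
          return true;
      }

  private:
      std::queue<Job> realtime_;  // jobs with the 100 ms constraint
      std::queue<Job> batch_;     // jobs that only need to finish within 24 hours
  };

  int main() {
      TwoLevelQueue q;
      q.pushBatch({"large OCR batch"});
      q.pushRealTime({"interactive request"});
      Job j;
      while (q.pop(j)) std::printf("serving: %s\n", j.payload.c_str());
      // serves the interactive request first, then the batch job
  }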

The reason that I came to this group is that I needed to
make sure that I understood the mathematical underpinnings
well enough to choose the right queuing strategy. It does
seem that I was right all along on this, and a very bright
extremely experienced man with a PhD in computer science was
consistently and persistently wrong. He was using the
simplistic heuristic that SQMS is always much more efficient
than MQMS. This heuristic clearly does not apply in my case.
If I move my process to a quad-core computer, then the
heuristic applies.

>
> --- news://freenews.netfront.net/ - complaints:
> news(a)netfront.net ---


From: hsantos on
On Apr 16, 8:42 am, "Peter Olcott" <NoS...(a)OCR4Screen.com> wrote:
> "Jasen Betts" <ja...(a)xnet.co.nz> wrote in message
>
> news:hq9889$1pp$2(a)reversiblemaps.ath.cx...
>
>
>
> > On 2010-04-15, Peter Olcott <NoS...(a)OCR4Screen.com> wrote:
>
> >> "Jasen Betts" <ja...(a)xnet.co.nz> wrote in message
> >>news:hq6lh5$k98$1(a)reversiblemaps.ath.cx...
> >>> On 2010-04-15, Peter Olcott <NoS...(a)OCR4Screen.com> wrote:
>
> >>> suppose you are in a queue and the job in front of you is a
> >>> longer than usual one. If there's only one server at the head
> >>> of the queue you're in for a wait,
>
> >>> if there are multiple servers one of the other servers will
> >>> likely be free soon enough
>
> >> This is multiple servers provided on a single CPU, using
> >> time slicing. Because of this the more servers there are the
> >> slower that each server is.
>
> > in that case using a time slice size that's longer than it
> > usually takes to serve a client may be the optimum strategy,
>
> > depending on what metric you are using to determine the
> > performance of a queuing strategy.
>
> One set of jobs has a real-time constraint of 100 ms. The
> remaining jobs only need to be done within 24 hours,
> although they should be done as quickly as possible. Because
> of this the first set of jobs must have absolute priority
> over the remaining jobs, and the remaining jobs have equal
> priority relative to each other. These aspects are
> relatively easy to optimize for.
>
> The reason that I came to this group is that I needed to
> make sure that I understood the mathematical underpinnings
> well enough to choose the right queuing strategy. It does
> seem that I was right all along on this, and a very bright
> extremely experienced man with a PhD in computer science was
> consistently and persistently wrong. He was using the
> simplistic heuristic that SQMS is always much more efficient
> than MQMS. This heuristic clearly does not apply in my case.
> If I move my process to a quad-core computer, then the
> heuristic applies.

You should be ashamed of yourself for continuing to lie and now
bad-mouthing people.

Folks, Peter Olcott is a troll. I advise you to stop feeding the
troll; otherwise this thread will undoubtedly drag on for months
with no end.

What he is not telling you is that he has no capability to program
for threads and is stuck with a single EXE, started as four
instances, each with its own job priority.

What he is not telling you is that he wants to use Mongoose (an
open-source small web server he can embed in the EXE), which is
multithread-ready, therefore giving him four separate queues.

What he is not telling you is that his web clients will be sending
100 transactions per second, so if his worst-case work time is
100 ms, he has:

100 TPS = N * 1000 / worktime   (worktime in ms)

where N is the number of service handlers: threads, processes, a
single machine, distributed, what have you. This is a variation of
Little's Law, and solving for N means he needs a steady-state count
of 10 handlers to handle the worst case.
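
(A back-of-the-envelope check of that figure, using only the numbers
quoted above:)

  // Each handler busy for worktime ms per job can sustain
  // 1000 / worktime jobs per second, so N handlers sustain
  // N * 1000 / worktime. Solve for N at 100 TPS, 100 ms work time.
  #include <cstdio>
  int main() {
      double tps = 100.0;          // offered load, requests per second
      double worktime_ms = 100.0;  // worst-case service time per request
      double n = tps * worktime_ms / 1000.0;
      std::printf("handlers needed: %g\n", n);  // prints 10
  }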

When this was pointed out to him, since he has only 1 EXE for this
high-priority job, he conveniently changed the work time to 10 ms in
order to get N = 1.

What he is not telling you is that this project requires an SQL
database with transactions and journals, and he wants everything done
in pure memory. He didn't understand what virtual memory or
memory-mapped files were, and he wants to load 1.5 GB to 5 GB per
process, with no sharing. He failed to understand context switching
and quantum concepts. He believes Windows and Linux are RTOSes. He
believes he can take an SQLite engine and turn it into an ISAM
database system with direct access to physical records and offsets.
SQLite does not support multi-access at the record level: a writer
will lock the entire database, and a reader in progress will block
writer access.
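
(A minimal sketch of that locking behavior, assuming SQLite's default
rollback-journal mode; the file name and table are made up and error
handling is omitted:)

  #include <sqlite3.h>
  #include <cstdio>

  int main() {
      sqlite3 *writer = nullptr, *reader = nullptr;
      sqlite3_open("test.db", &writer);
      sqlite3_exec(writer, "CREATE TABLE IF NOT EXISTS t(x INTEGER);",
                   nullptr, nullptr, nullptr);

      sqlite3_open("test.db", &reader);
      // The reader opens a transaction and reads, so it holds a shared
      // lock on the whole database file; there is no record-level lock.
      sqlite3_exec(reader, "BEGIN; SELECT count(*) FROM t;",
                   nullptr, nullptr, nullptr);

      // The writer now tries to commit a change while that read
      // transaction is still open and cannot get the exclusive lock.
      int rc = sqlite3_exec(writer, "INSERT INTO t VALUES(1);",
                            nullptr, nullptr, nullptr);
      std::printf("writer result: %d (%s)\n", rc, sqlite3_errmsg(writer));
      // Expected: SQLITE_BUSY until the reader issues COMMIT or ROLLBACK.

      sqlite3_exec(reader, "COMMIT;", nullptr, nullptr, nullptr);
      sqlite3_close(reader);
      sqlite3_close(writer);
  }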

But even then, it was pointed out that this assumes a steady-state
flow of requests, one after another. He failed to consider a rate
distribution, for example a burst of, let's say, 50 requests within
the first 20 seconds.

What he essentially wants to do is:

Many Web Threads --> 4 EXE processes with 1 FIFO queue each,
separated by job priority.

It was explained to him that he will need a request delegator, the
web server itself, sort of like a proxy, but he was confused by this.

But he doesn't think so; he believes he can have four separate EXEs,
each with a single FIFO queue fed by a Mongoose web server over a
two-way named pipe, with user authentication, SQL interfacing, and
transaction logs for crash recovery all done within 10 ms.

He wants the low-priority EXE processes to stop working when a
single high-priority job arrives.

He was advised that he should consider sound working principles:
threads, memory maps, worker pools, and IOCP methods to better scale
the system; that he should not isolate each handler to a single job
type; and that his design is FLAWED simply from looking at the
request rate and the highly ambitious low EXE work time, which leaves
little overhead to do user authentication, SQL operations, file
logging, etc.
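
(For contrast, the shape of the design being recommended here, as a
sketch only: one shared queue feeding a small pool of worker threads,
so no job waits in one queue while another worker sits idle. The
class and its parameters are illustrative, not anyone's actual code.)

  #include <condition_variable>
  #include <cstdio>
  #include <functional>
  #include <mutex>
  #include <queue>
  #include <thread>
  #include <vector>

  class WorkerPool {
  public:
      explicit WorkerPool(int n) {
          for (int i = 0; i < n; ++i)
              workers_.emplace_back([this] { run(); });
      }
      ~WorkerPool() {                       // drains the queue, then joins
          { std::lock_guard<std::mutex> lk(m_); done_ = true; }
          cv_.notify_all();
          for (auto &w : workers_) w.join();
      }
      void submit(std::function<void()> job) {
          { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(job)); }
          cv_.notify_one();
      }
  private:
      void run() {
          for (;;) {
              std::function<void()> job;
              {
                  std::unique_lock<std::mutex> lk(m_);
                  cv_.wait(lk, [this] { return done_ || !q_.empty(); });
                  if (q_.empty()) return;   // shutting down, nothing left
                  job = std::move(q_.front());
                  q_.pop();
              }
              job();                        // any idle worker takes any job
          }
      }
      std::mutex m_;
      std::condition_variable cv_;
      std::queue<std::function<void()>> q_;
      std::vector<std::thread> workers_;
      bool done_ = false;
  };

  int main() {
      WorkerPool pool(4);                   // single queue, four servers
      for (int i = 0; i < 8; ++i)
          pool.submit([i] { std::printf("job %d handled\n", i); });
  }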

In summary, over a dozen experts, scientists, and engineers in both
the Windows and Linux forums, over not a 1, not 2, not 3 month
period but since 2001, have tried to help him design this system
that he never could build. He has defied everyone's advice and
input, which all unanimously agreed that his

Many Web Threads to 1 FIFO queue EXE thread

design is flawed. He was advised to test and explore and was even
handed simulation C/C++ code to help him out, but he has not done so.

He has neither the intention nor the capability to program for
threads and is stuck running four EXEs, each one handling one job
type, with no interfacing and no contention between them. He must
find the optimal setup for his single queue per EXE process. What he
doesn't realize is how the input will come in and be delegated to
the separate EXEs, which will have to share request handling.

Do yourself a favor and avoid a prolonged discussion with this troll.