From: Joseph M. Newcomer on
See below...
On Sun, 11 Apr 2010 19:05:19 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>message news:vgl4s5ppn64lkfr7a46ihkm2h4lrbqi6dg(a)4ax.com...
>> See below...
>> On Sun, 11 Apr 2010 17:05:01 -0500, "Peter Olcott"
>> <NoSpam(a)OCR4Screen.com> wrote:
>>
>>>
>>>>
>>>>
>>>> He now got it down to 50 TPS: 3,000 per minute, 180,000
>>>> per hour, 1,440,000 per work day after he was told that
>>>> he can't do his 10 ms OCR in 0 ms OVERHEAD :)
>>>
>>>Bullshit again. I already said that I estimated the
>>>overhead
>>>to not exceed 10 ms. You stupidly read this to mean that I
>>>said that the overhead will take 0 ms. I did say that my
>>>goal is to get as close to zero time as possible, and this
>>>might still be possible because of hyperthreading. Worst
>>>case scenario is that it will double my processing time.
>> ****
>> I have NO IDEA how hyperthreading can help in this case,
>> since you are not using any form
>> of threading, and some of the actions, such as transacted
>> file system, passing messages
>
>The main thing that I want to occur in parallel is the disk
>accesses. The disk drive controller might already do this
>for me. Another big source of parallelism is the multiple
>processes. Certainly the web server could be updating the
>FIFO queues, including writing the (transaction log) audit
>trail to disk, while one or more OCR processes are running.
>Because of hyperthreading it might take 10 + 10 ms per
>transaction, yet still process 100 TPS.
****
If you are worrying about disk concurrency at this level you are worrying about the wrong
problem. You have no control over it, it is entirely the domain of the OS.

Do you know what is meant by "elevator disk head scheduler algorithm"? (One of my
professors got his PhD from Stanford for providing a closed-form analytic solution that
shows this produces optimal behavior; prior to this we could only determine this by
discrete event simulation methods)
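For what it is worth, here is a minimal sketch of the elevator (SCAN) idea, purely
illustrative and nothing like real driver code: given the pending cylinder requests and
the current head position, service everything ahead of the head in one direction, then
sweep back.

// Illustrative sketch of the elevator (SCAN) ordering; not production driver code.
#include <algorithm>
#include <cstdio>
#include <vector>

// Order pending cylinder requests by sweeping up from 'head', then back down.
std::vector<int> elevator_order(std::vector<int> pending, int head)
{
    std::sort(pending.begin(), pending.end());
    std::vector<int> order;
    for (int c : pending)                                 // upward sweep
        if (c >= head) order.push_back(c);
    for (auto it = pending.rbegin(); it != pending.rend(); ++it)
        if (*it < head) order.push_back(*it);             // return sweep, descending
    return order;
}

int main()
{
    std::vector<int> pending = { 95, 180, 34, 119, 11, 123, 62, 64 };
    for (int c : elevator_order(pending, 50)) std::printf("%d ", c);
    std::printf("\n");                                    // 62 64 95 119 123 180 34 11
}

The point is that the OS (or the drive itself) already reorders your I/O to minimize head
movement, which is why worrying about disk concurrency at the application level buys you
nothing.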
****
>
>> via IPC, etc. are fundamentally serializing actions that
>> require waiting for one thread to
>> complete before another thread is allowed to proceed (and
>> as I pointed out, whether these
>> threads are in the same process or different processes
>> matters not in the slightest; you
>> have this weird idea that threads in separate processes
>> solve nonexistent problems that you
>> claim exist if the threads share a single process.
>> Indeed, there are such problems but
>
>Yes and you say this from your decade of experience with
>pthreads, right?
****
pthreads has come out of left field here. We have all been assuming he was using modern
preemptive threads managed by the OS, and here he is, living in the 1980s, talking about
the pseudo-thread library instead! But then again, we should have expected he would not
use any of the terminology correctly!
****
>
>> none of the problems you have described change no matter
>> how many processes are involved!)
>> Maybe this is a use of concurrency we have not seen
>> before...
>>
>> OK, if your overhead is 10ms, which is actually pretty
>> generous, then it makes sense. Note
>> that doubling your processing time does not account for
>> hundreds of ms of TCP/IP or HTTP
>> transaction overhead (including network delays).
>
>Of course not. I have another 400 ms allocated to that. I
>also fully realize that at any time unforeseen conditions,
>such as a sudden spike in network traffic, could arise and
>make this utterly infeasible for the duration of those
>conditions.
****
We still don't see how all of these pieces fit together; you haven't even shown us a
requirements document, or a specification document, and you are debating fine points of
how the implementation is going to work!
****
>
>>
>> But you still don't get why a SQMS architecture as I
>> described makes more sense.
>
>My goal is to provide absolute priority for the high-priority
>jobs over all the other jobs; SQMS can't do that. I can't
>see how it isn't a more complex design lacking any
>incremental benefits. Four queues, each with their own
>process, only involve each process seeing if it has a job in
>its own queue. My purpose of IPC in this case is to tell the
>process that it does have a new job in its queue.
****
Sorry, you have missed the point. OF COURSE SQMS can do this, even on a single-core
machine from the Museum of Obsolete Computers. I have even explained how this modern
concept, "time slicing", makes it work. Please demonstrate how it cannot work!
****
>
>> ****
>>>
>>>If you weren't so damn rude I would not be so harsh.
>>>Although I could have written what I said more clearly, it
>>>is not my fault for you getting it wrong. All you had to
>>>do
>>>to get it right is to read exactly what I said.
>> ****
>> We are rude because you keep ignoring what WE are saying
>> while busily accusing us of
>> ignoring what YOU are saying! You demanded "sound
>> reasoning" from me, and I showed that a
>> few seconds' thought and third-grade arithmetic prove your
>> design is faulty. You could
>> have done this "sound reasoning" on your own! It is not
>> only not Rocket Science, it is so
>> basic that, literally, a child can do it.
>
>You never explained (or that may have been in the messages
>that I ignored) why my design of MQMS was inferior to what
>you proposed.
****
Ignoring messages is your problem. I can't deal with your inability to face reality.
****
>
>In the future any statements lacking supporting reasoning
>will be taken to be pure malarkey at least until complete
>and sound reasoning is provided.
****
But then, I think you have to follow the same rules. So far, most of your design is based
on pure malarkey, and in addition, you have failed to demonstrate how SQMS is inferior to
MQMS, doing it NUMERICALLY, not just by a handwave. I demonstrated it conclusively by
showing actual numbers, and if you had any understanding of operating systems, you would
understand why it works correctly on a 1-core CPU (although it would be difficult to find
one to demonstrate it on).
****
>
>Mostly what both you and Hector have been doing is forming
>an erroneous view of what I am saying based on false
>assumptions about what I mean, and then pointing out how bad
>this misconception of what I am saying is without even
>pointing out why the misconception itself is bad. When you do
>this I have no way to correct the misconception.
>
****
OK, (a) show us a requirements document (b) show us a specification document.

I have a friend whose rates are based on the following: if you have a requirements
document, he charges $N/hr, and if you don't have one, he charges $1.5*N/hr, because of
the pain of trying to extract the requirements. All we've seen thus far is a set of
morphing requirements (which we have trouble tracking) and no specification document at
all.

And, a complete reluctance to do anything to prove we are right or wrong. The only
experiment you ever ran (and it took several days of badgering by us before you ran it)
was on the paging behavior. A discrete event simulation will show that SQMS is going to
beat MQMS. We designed these algorithms on single-core mainframes in the 1960s (in those
days, we called it "batch processing"), and nothing has changed that makes MQMS work
better.
*****
>Here is a really good heuristic for you: never make any
>assumptions at all. If you never make any assumptions then
>you also never make any false assumptions. There is NEVER a
>good reason to make an assumption in an ongoing dialogue.
****
Oh, really? You make assumptions all the time, and the only difference is that most of
yours are wrong. For example, the assumption that synchronization is so expensive that
you have to make copies of counter variables! Or that MQMS is going to perform better
than SQMS with a priority-ordered queue and simple priority-inversion prevention (I even
gave the algorithm!) Actually, the priority-inversion protection isn't needed, even on a
1-core CPU, if your servers are separate threads.
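If it helps, one common form of that protection is simple aging, sketched below with my own
illustrative names (an illustration of the idea, not necessarily the exact algorithm I
posted earlier): a job's effective priority improves the longer it waits, so nothing can be
starved forever, while fresh high-priority jobs still go first.

// Sketch of starvation protection by aging the effective priority of waiting jobs.
#include <chrono>
#include <cstddef>
#include <vector>

struct QueuedJob {
    int base_priority;                                  // 0 = highest
    std::chrono::steady_clock::time_point enqueued_at;
};

// Each full second of waiting improves the effective priority by one level.
int effective_priority(const QueuedJob& j,
                       std::chrono::steady_clock::time_point now)
{
    auto waited = std::chrono::duration_cast<std::chrono::seconds>(now - j.enqueued_at);
    int p = j.base_priority - static_cast<int>(waited.count());
    return p < 0 ? 0 : p;
}

// The dispatcher runs the job with the best (lowest) effective priority.
// Assumes the queue is non-empty.
std::size_t pick_next(const std::vector<QueuedJob>& queue)
{
    auto now = std::chrono::steady_clock::now();
    std::size_t best = 0;
    for (std::size_t i = 1; i < queue.size(); ++i)
        if (effective_priority(queue[i], now) < effective_priority(queue[best], now))
            best = i;
    return best;
}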

You make so many false assumptions it is hard to read your admonishment to us without
ROTFL.
joe
****
>
>>
>> And we never get a single coherent picture; we get
>> constantly-revised pieces, one at a
>> time, without any way to fit them back together to
>> whatever the requirements,
>> specification, or implementation strategy might be. And
>> what we see of the latter, we
>> know to be largely nonsensical. I'd fail freshman
>> programmers for such a design.
>> joe
>> ****
>>>
>>
>> Joseph M. Newcomer [MVP]
>> email: newcomer(a)flounder.com
>> Web: http://www.flounder.com
>> MVP Tips: http://www.flounder.com/mvp_tips.htm
>
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Joseph M. Newcomer on
See below....
On Sun, 11 Apr 2010 21:33:33 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>message news:72v4s5lhuvgvdnnal8rf6h1dpmkqntm0p8(a)4ax.com...
>> So prove any of us wrong. MEASURE a real system and tell
>> us what numbers you get!
>>
>> There is no other truth than the actual system. You have
>> two interesting parameters to
>> measure
>> end-to-end transaction time from the client
>> end-to-end transaction time in the server
>>
>> I already pointed out your MQMS model can give MASSIVE
>> end-to-end delays in the server,
>
>What is the exact scenario that produces this massive delay?
***
Sorry, I already explained this, complete with the arithmetic to prove it. I don't feel
like repeating the obvious.
*****
>
>> whereas a SQMS model with priority-inversion prevention
>> can MINIMIZE end-to-end delays in
>> the server.
>
>On a single core processor? If it does I don't see how. You
>have to explain these details. On a quad core it is almost
>obvious how it could help.
****
Yep, on a single-core processor! One of the little details is one of the most
carefully-guarded secrets of modern operating systems, so I'm not surprised you haven't
heard about it. It is called "time slicing", and I could tell you more about it, but then
I'd have to kill you, because this top-secret technique is known only to very few
initiates. I may have violated my sacred oaths even by hinting at its existence, and I
will have to watch out for the high priests of operating systems, who may declare me
excommunicated for revealing it.
****
>
>>
>> But then, performance clearly is not an important
>> consideration, or you would want a
>> design that minimizes end-to-end transaction time in the
>> server under high load
>> conditions. And you would not be so insistent that we
>> acknowledge your design must be
>
>No, it is just you ignoring design constraints again. Single
>core, not quad core.
****
I am curious where you are finding these single-core machines? The antique sales on eBay?
Is your ISP really willing to support these for you?

But SQMS works better on single-core machines (see reference to that secret technique
called "time slicing")
joe
****
>
>> right, when I was able to demonstrate, with third-grade
>> arithmetic, that it isn't very
>> good.
>> joe
>>
>
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Hector Santos on
Peter Olcott wrote:

> "Hector Santos" <sant9442(a)gmail.com> wrote in message
>>>
>> I just don't think he needs four different EXEs for this.
>
> Under Linux there is supposed to be a much greater chance of
> priority inversion with threads than processes because the
> OS locks all kinds of things on behalf of the process when
> using threads. At least that is what David Schwartz said.

I'm not surprised. pthreads always appeared to be an augmented "kludge" to
*nix OSes. But I'm always one that feels that if you can't use a tool
right, then you should rethink how you are using the tool or not use it at all.
Since UNIX is inherently process oriented, I bet 99.99% of the time
weenie programmers don't worry about it, but then again I can't use you
to measure anything about *nix systems. I'm sure thousands of
products and good programmers use pthreads correctly if it helps them.

Man, you are like a drug or a good cigar. Why don't you go harass
David a little? Go defy him for a while. :)

--
HLS
From: Joseph M. Newcomer on
See below...
On Sun, 11 Apr 2010 19:20:46 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>message news:m8m4s55k6vsqr36lrkfo5b70al9sa86hos(a)4ax.com...
>> See below...
>> On Sat, 10 Apr 2010 09:22:28 -0500, "Peter Olcott"
>> <NoSpam(a)OCR4Screen.com> wrote:
>>
>>>
>>>(1) Four queues each with their own OCR process, one of
>>>these processes has much more process priority than the
>>>rest.
>> ****
>> Did we not explain that messing with thread priorities
>> gets you in trouble?
>>
>> And you seem to have this totally weird idea that
>> "process" and "thread" have meaning. Get
>
>Oh like the complete fiction of a separate address space for
>processes and not for threads?
>This group really needs to be moderated.
****
Yes. Memory-mapped files allow processes to SHARE address space. So the fiction of
processes having purely separate address spaces is exactly that, a convenient fiction,
and it can be bypassed trivially at any time.
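Here is a minimal sketch of what "bypassed trivially" looks like on a POSIX system (the
object name is illustrative, and error handling is trimmed): two unrelated processes map
the same named shared-memory object and from then on see the same bytes.

// Both processes run essentially this same code; whatever one writes into the
// mapping, the other reads. "Separate address spaces" is a default, not a law.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstring>

int main()
{
    const char* name = "/ocr_shared_region";             // illustrative name
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);     // same call in both processes
    if (fd < 0 || ftruncate(fd, 4096) != 0) return 1;    // size the object (once is enough)

    void* p = mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                                            // the mapping survives the close
    if (p == MAP_FAILED) return 1;

    std::strcpy(static_cast<char*>(p), "visible to every process that maps this object");

    munmap(p, 4096);
    return 0;
}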

If this group was moderated, you would have been removed weeks ago.
****
>
>> this: THERE ARE ONLY THREADS! Processes are a fiction
>> that provide resource management; a
>> process holds the address space, file system handles, and
>> other resources. A process IS
>> NOT SCHEDULABLE! ONLY THREADS ARE SCHEDULABLE! And
>> therefore it doesn't matter if a set
>> of threads are distributed across a dozen processes or all
>> live in one process, the
>> problems DO NOT CHANGE (except that processes add more
>> overhead)
>> ****
>>>
>>>(2) My original design: four processes that immediately
>>>yield, putting themselves to sleep as soon as a higher priority
>>>process occurs. The highest one of these never yields.
>> ****
>> How? You are postulating mechanisms that do not exist in
>> any operating system I am aware
>
>// shared memory location
>if (NumberOfHighPriorityJobsPending !=0)
> nanosleep(20);
****
Why not
if(InterlockedIncrement(&NumberOfHighPriorityJobsPending) > 0)
which is how a real programmer would write it?

And InterlockedIncrement is a trivial subroutine to write if linux doesn't have it.

extern LONG InterlockedIncrement(LONG * value);

value$ = 4                      ; first (and only) argument is at [esp+4] in 32-bit cdecl
PUBLIC _InterlockedIncrement
_InterlockedIncrement:
    mov   ecx, value$[ESP]      ; ecx = address of the shared counter
    mov   eax, 1                ; amount to add
    lock  xadd [ecx], eax       ; atomic: eax gets the old value, memory gets old + 1
    inc   eax                   ; return the new (incremented) value in eax
    ret

so the cost of doing the increment is AT MOST one memory cycle (less if the location is in
the cache). And you are required to demonstrate that you understand multiprocessor cache
coherency before you start objecting that the cache means there are inconsistent values.
If necessary, I can provide PowerPoint slides on how cache coherency is maintained, since
I actually studied the Intel manuals. But hey, we know that the cost of synchronization
is FAR TOO HIGH to want to pay it, right? Oh, never mind, that's a false assumption. I
just proved it, above.

I hope the above obvious code constitutes "sound reasoning" enough for you.
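And since this is apparently headed for Linux: with GCC you do not even need the assembly.
Here is a sketch using the compiler's atomic builtin (available in GCC for years now, if
memory serves), which produces an equivalent locked read-modify-write:

// Same semantics as InterlockedIncrement: atomically add 1 and return the NEW value.
#include <cstdio>

static long NumberOfHighPriorityJobsPending = 0;   // would live in shared memory in your design

long interlocked_increment(long* value)
{
    return __sync_add_and_fetch(value, 1L);        // atomic; returns the incremented value
}

int main()
{
    long now = interlocked_increment(&NumberOfHighPriorityJobsPending);
    std::printf("new value = %ld\n", now);          // prints 1
}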
****
>
>It seems like every other message you switch into jerk mode.
***
Better than your every-message approach...
joe

****
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Joseph M. Newcomer on
See below...
On Sun, 11 Apr 2010 21:04:49 -0400, Hector Santos <sant9442(a)nospam.gmail.com> wrote:

>Peter Olcott wrote:
>
>> "Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>> message news:m8m4s55k6vsqr36lrkfo5b70al9sa86hos(a)4ax.com...
>>> See below...
>>> On Sat, 10 Apr 2010 09:22:28 -0500, "Peter Olcott"
>>> <NoSpam(a)OCR4Screen.com> wrote:
>>>
>>>> (1) Four queues each with their own OCR process, one of
>>>> these processes has much more process priority than the
>>>> rest.
>>> ****
>>> Did we not explain that messing with thread priorities
>>> gets you in trouble?
>>>
>>> And you seem to have this totally weird idea that
>>> "process" and "thread" have meaning. Get
>>
>> Oh like the complete fiction of a separate address space for
>> processes and not for threads?
>> This group really needs to be moderated.
>
>
>The funny thing is you seriously think you are normal! I
>realize we have gone beyond the call of duty ourselves to help you, but
>YOU really think you are of a sound mind. You are the one that really
>should be glad these public groups are not moderated - you would
>be the #1 person locked out. Maybe that is what happened in the linux
>forums - people told you to go away - "go to the WINDOWS FORUMS and
>cure them!"
***
I think this was a typo, and you meant "curse"...
****
>
>>> ****
>>> How? You are postulating mechanisms that do not exist in
>>> any operating system I am aware
>>
>> // shared memory location
>> if (NumberOfHighPriorityJobsPending !=0)
>> nanosleep(20);
>>
>> It seems like every other message you switch into jerk mode.
>
>
>And everything you post seems to be greater evidence of your
>incompetence. Everyone knows that using time to synchronize is the #1
>beginner's mistake in any sort of thread or process synchronization design.
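****
For the record, the non-beginner way to wait for work is to block on something the kernel
wakes you from, for example a named POSIX semaphore that the queuing process posts once per
job. A minimal sketch (the semaphore name and the queue are illustrative only):

// Instead of polling a shared counter and calling nanosleep(), the OCR process
// blocks here; the web server calls sem_post() on the same semaphore for every
// job it appends to the shared queue. No polling, no wasted wakeups.
#include <fcntl.h>
#include <semaphore.h>
#include <cstdio>

int main()
{
    sem_t* jobs = sem_open("/high_priority_jobs", O_CREAT, 0600, 0);
    if (jobs == SEM_FAILED) { std::perror("sem_open"); return 1; }

    for (;;) {
        sem_wait(jobs);       // sleeps in the kernel until the server posts a job
        // ...dequeue one request from the shared queue and OCR it here...
    }
}
****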
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm