From: Joseph M. Newcomer on 12 Apr 2010 22:54 The nice thing is that it ups my posting count (and Hector's, and you've got quite a few also). But if Microsoft just counts posting frequency, without looking at content, this might be the first AI program to get an MVP award! joe On Mon, 12 Apr 2010 14:29:31 -0400, "Pete Delgado" <Peter.Delgado(a)NoSpam.com> wrote: > >"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message >news:uhFwgp11KHA.140(a)TK2MSFTNGP05.phx.gbl... >> Live and learn. Which leads to the questions, if you are going to design >> for Linux, then; >> >> Why are you trolling in a WINDOWS development forum? >> >> Why are you here asking/stating design methods that defy logic >> under Windows when YOU think this logic is sound under UNIX? >> >> If you are going to design for Windows, then you better learn how to >> follow WINDOWS technology and deal with its OS and CPU design guidelines. > > >Hector, >You haven't yet figured out the riddle of Peter Olcott despite the repeated >clues? When you look back at the posts after I tell you his little secret, >it should become obvious and you should have one of those "Ah-ha!" moments. > >The truth of the matter is that there is no OCR technology at play here at >all but rather AI technology. The secret is that Peter Olcott is really an >AI program that is being entered to win the Loebner Prize. >http://loebner.net/Prizef/2010_Contest/Loebner_Prize_Rules_2010.html > >Let's look at the evidence again shall we?? Peter originally posed a >question to the group. From each of the answers he received, his follow-up >questions contained an amalgam of the original question and the resulting >answer. In each case, the mixture could be made and perceived by humans to >be reasonably logical because the original respondent had already considered >the answer in the context of the original question. This is pretty common >with many Turing Test style programs (mixture of question and response). 
I >recall some of the games that I had back in the 80's that used this >technique to appear intelligent. > >This also explains the magical morphing requirements and the circular >reasoning being used quite nicely. Each time a post was made by the Peter >Olcott program, it would incorporate the suggestions from previous posts by >members of this group. The interesting thing about this particular Turing >Test program is that if the group reached a consensus on a particular >approach, the program would respond *against* the suggestion even after many >attempts were made to justify the suggestion, thus generating even more posts >to the affirmative that the program could respond to. The architecture of >this part of the "personality" was sheer genius because it simulates the >average clueless programmer who has no motivation and below average >intelligence. > >Another clue must be the way the Turing Test program (Peter Olcott) fishes >for additional posts by always responding to *every* post on *every* branch >of a thread. The Turing Test program must make sure that its posts are the >leaf on every branch in order to ensure that *someone*, *somewhere* will >respond to it. Without responses, the machine is simply in a wait state which, >of course, means that the program has failed to convince humans that there >is a human intelligence behind the posts. > >I had originally thought that the real "programmer" would come forth on >April 1st and identify him/herself, but apparently the deception has gone so >swimmingly well that testing will continue so long as you and Joe post to >the threads. ;-) > >In an effort to find out who the real programmer was behind the Peter Olcott >Turing Test machine, I consulted with the internet anagram server at >http://wordsmith.org/anagram/ and typed in the name Peter Olcott in the >hopes that the real culprit had simply tried to mask his identity. 
The >responses included: > >Elect Pro Tot >Creep Lot Tot >Crop Let Tote >Cop Letter To > >The bottom line is that while we may not know the true identity of the >programmers behind the Peter Olcott hoax, it is possible that the internet >anagram program may have come up with an appropriate response to his >spamming of a Windows newsgroup with questions that are to be implemented on >Linux... send a letter to a cop! > >-Pete > >PS: Can we all get back to real MFC programs and real MFC programmers now??? >;-) > Joseph M. Newcomer [MVP] email: newcomer(a)flounder.com Web: http://www.flounder.com MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Joseph M. Newcomer on 12 Apr 2010 23:04 See below... On Mon, 12 Apr 2010 18:43:38 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote: > >"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in >message news:ppq6s5dvd9d7a50gdnuo6ari00e0pn43fo(a)4ax.com... >> See below... >> On Sun, 11 Apr 2010 19:05:19 -0500, "Peter Olcott" >> <NoSpam(a)OCR4Screen.com> wrote: > > >>>My goal is to provide absolute priority of the high >>>priority >>>jobs over all the other jobs, SQMS can't do that. I can't >>>see how it isn't a more complex design lacking any >>>incremental benefits. Four Queues each with their own >>>process only involves each process seeing if it has a job >>>in >>>its own queue. My purpose of IPC in this case is to tell the >>>process that it does have a new job in its queue. >> **** >> Sorry, you have missed the point. OF COURSE SQMS can do >> this, even on a single-core >> machine from the Museum of Obsolete Computers. I even >> have explained how this modern >> concept, "time slicing", makes it work. Please >> demonstrate how it cannot work! > >The scheduler pulls off the priority sorted jobs in priority >order. Since there are no high priority jobs, the scheduler >begins a 3.5 minute low priority job. Immediately following >this a high priority job arrives. After the 3.5 minute job >completes, the scheduler begins the high priority job that >has now exceeded its real-time threshold by a factor of >35-fold. **** That's why I gave you the priority-inversion-prevention algorithm, which is: if you have N handlers, you never dispatch more than K < N large jobs, which means that you have (N-K) threads to handle the fast-turnaround jobs. But I guess you missed that part. And no thread priority fiddling is required to make this work correctly! It involves a priority-sorted queue. SQMS. Pretty straightforward, easy to implement, easy to tune; you decide what value K needs to be to meet normal requirements. Simple, straightforward, easy to implement. 
What's wrong with it? ***** > >>>Mostly what both you and Hector have been doing is forming >>>an erroneous view of what I am saying based on false >>>assumptions of what I am meaning and then pointing out how >>>bad >>>this misconception of what I am saying is without pointing >>>out even why the misconception itself is bad. When you do >>>this I have no way to correct the misconception. > >> And, a complete reluctance to do anything to prove we are >> right or wrong. The only > >I can neither prove nor disprove dogma (there just isn't >enough to work with); I can only work with reasoning. **** Sadly, if I went back to my books on queueing theory, and found the appropriate formulae, I seriously doubt you could comprehend them. I suggest that if you think your approach is superior, you are guilty of presenting dogma, and you have not proven it, nor have you presented any sound reasoning to prove that it works better than SQMS. Something about pots and kettles comes to mind here, something about color.... You are guilty of the same issues you are accusing me of. You are using false assumptions and perhaps the I Ching to "prove" MQMS is necessarily better than SQMS, and your only evidence seems to be "I'm a superb designer". I, at least, have experience in queueing theory and realtime embedded systems. **** > >> experiment you ever ran (and it took several days of >> badgering by us before you ran it) >> was on the paging behavior. A discrete event simulation >> will show that SQMS is going to >> beat MQMS. We designed these algorithms on single-core >> mainframes in the 1960s (in those >> days, we called it "batch processing"), and nothing has >> changed that makes MQMS work >> better. > >If they work better, then there is a reason why they work >better; if there is not a reason why they work better, then >they don't work better. Please provide the reasoning. As >soon as I see sound reasoning that refutes my view, I change >my view. 
**** You have already pointed out that you have not read my careful analysis, so why should I reproduce it here? I wrote it once already. joe **** > > Joseph M. Newcomer [MVP] email: newcomer(a)flounder.com Web: http://www.flounder.com MVP Tips: http://www.flounder.com/mvp_tips.htm
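[Editor's note: the priority-inversion-prevention scheme Joe describes above (one priority-sorted queue, with at most K of the N handlers ever dispatched to long-running jobs) can be sketched roughly as follows. This is an illustrative reconstruction, not Joe's actual code; `Dispatcher`, `Job`, and `K` are names invented here.]

```cpp
#include <cassert>
#include <queue>
#include <vector>

// One priority-sorted queue (SQMS). Large jobs are capped at K concurrent
// dispatches, so (N-K) handlers always remain free for fast-turnaround work,
// and a 3.5-minute job can never block a high-priority job behind it.
struct Job {
    int priority;   // higher value = more urgent
    bool large;     // a long-running job (e.g. the 3.5-minute case above)
    bool operator<(const Job& o) const { return priority < o.priority; }
};

class Dispatcher {
    std::priority_queue<Job> q;  // single queue, highest priority on top
    int runningLarge = 0;
    const int K;                 // at most K of the N handlers run large jobs
public:
    explicit Dispatcher(int maxLarge) : K(maxLarge) {}
    void submit(const Job& j) { q.push(j); }
    // Hand out the next runnable job; defer large jobs while K are running.
    bool next(Job& out) {
        std::vector<Job> deferred;
        bool found = false;
        while (!q.empty()) {
            Job j = q.top();
            q.pop();
            if (j.large && runningLarge >= K) { deferred.push_back(j); continue; }
            if (j.large) ++runningLarge;
            out = j;
            found = true;
            break;
        }
        for (const Job& j : deferred) q.push(j);  // put skipped jobs back
        return found;
    }
    void finished(const Job& j) { if (j.large) --runningLarge; }
};
```

With K = 1 and two handlers, a queued high-priority small job is always served before a second large job is admitted, which is the whole point of the cap: no thread-priority fiddling, just an admission rule on one sorted queue.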
From: Peter Olcott on 12 Apr 2010 23:15 "Jerry Coffin" <jerryvcoffin(a)yahoo.com> wrote in message news:MPG.262d77475858ed16989871(a)news.sunsite.dk... > In article > <k_adnZQ1so5DJ17WnZ2dnUVZ_rednZ2d(a)giganews.com>, > NoSpam(a)OCR4Screen.com says... > > [ ... ] > >> > But SQMS works better on single-core machines (see >> > reference to that secret technique >> > called "time slicing") >> >> So I am already doing that with my MQMS, and because I >> have >> been told that Linux does not do a very good job with >> threads, my implementation is likely superior to yours. I >> have seen no reasoning to the contrary. > > What relationship do you see between multiple threads of > execution, > and multiple queues? None. A single queue might work well with multiple threads of a single process because IPC does not need to occur. The MQ is because multiple processes require IPC. > > You actually have things exactly backwards: with enough > processor > cores, separate queues start to gain an advantage due to > lower > contention over a shared resource -- though contention has > to be > quite high for that to apply. Even then, you basically > have to take > pretty careful (and frankly, somewhat tricky) steps to get > the > multiple queues to act like a single priority queue before > you get > any good from it. > > For a single processor, there's no room for question that > a single > priority queue is the way to go though -- a single > hyperthreaded core > simply won't even come close to producing the level of > contention > necessary to make multiple queues superior (something on > the order of > quad hyperthreaded cores might start to get close). I just don't see any scenarios with my current design where this would be true. Multiple queues are easy: simply write to the back and read from the front. A single queue is much more difficult: write at whatever location is appropriate at the moment and read from the head of whichever portion of the queue applies to this priority. 
What do you do, a linear search for the head? If you don't do a linear search and instead keep track of the different heads separately, then this is essentially four queues simply strung together. How can that be better than four queues?
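[Editor's note: the "four queues strung together" structure Peter describes is in fact the standard way to build one logical priority queue with no linear search over jobs: an array of FIFOs indexed by priority level, where the reader scans only the (fixed, small) set of level heads. A minimal C++17 sketch under that assumption; the names are illustrative.]

```cpp
#include <array>
#include <cassert>
#include <deque>
#include <optional>

// One logical priority queue over a fixed number of levels, built from an
// array of FIFOs. push() is O(1); pop() scans at most `Levels` heads (here 4),
// independent of how many jobs are queued -- no linear search over jobs.
// Crucially, one reader serves all levels, so the high-priority lane can
// never sit idle while a lower lane has the only worker's attention.
template <typename T, std::size_t Levels = 4>
class LeveledQueue {
    std::array<std::deque<T>, Levels> lanes;  // index 0 = highest priority
public:
    void push(std::size_t level, const T& item) {
        lanes.at(level).push_back(item);      // write to the back of its lane
    }
    std::optional<T> pop() {                  // always serve the highest level
        for (auto& lane : lanes) {
            if (!lane.empty()) {
                T item = lane.front();
                lane.pop_front();
                return item;
            }
        }
        return std::nullopt;                  // every lane empty
    }
};
```

The difference from "four queues, four processes" is not the storage layout, which is indeed almost identical, but that a single dispatcher drains all four lanes, which is what the SQMS argument in this thread is actually about.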
From: Peter Olcott on 12 Apr 2010 23:42 "Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in message news:9of7s5ts91k4e8jerscd9mm86rml0d6ap4(a)4ax.com... > Some useful terminology: > There is no "load balancing" here; instead, the queues are > disjoint and orthogonal. If > one queue is empty, the process is dead, and will not > execute at all. Thus, reducing > throughput and maximizing delay times. Only when it gets > something to do will it start > running, which is why MQMS always fails relative to SQMS. Not really, and you already know this; you are just trying to [force a square peg into a round hole with a sledgehammer] find any possible excuse, even false contrived excuses, just to criticize me. Everyone knows that on a single core machine with four concurrent processes, if any one of these processes quits using the CPU, the others get the rest. >>First, you are giving LINUX/UNIX a bad reputation with >>this primitive >>thinking. Second, SQL is not a packed record concept. >>Sorry. You >>want an ISAM or even varying record concept. But not SQL. >>Wrong TOOL. > *** > His unholy fascination with doing seeks and somehow > thinking this improves performance > shows how little he understands about file systems. Those > 12463 bytes could be fragmented Ah, so then the more disk seeks, the faster the throughput, right? NOT! > into potentially 24 sectors scattered ALL OVER the disk, > with MASSIVE seek time between > each, and the exact sector address is not derivable from > first principles based on the > file offset; he seems to work in a delusional system in > which all files are physically > contiguous on the disk (but remember, this is the guy who > wanted an application to > allocate contiguous physical memory!) So his > understanding of file systems is every bit > as bad as his understanding of virtual memory, or > scheduling, or pretty much any other OS > concept! 
I don't think that this is the way that this works (I could be wrong), but it would make much more sense to keep the file allocation table (or whatever it's called on whichever platform) cached in RAM; thus all this complex seeking could be done in RAM, and the final actual hardware disk drive seek could be singular. I do know that file fragmentation can drastically slow down file copies. According to my possibly incorrect theory there is no need for it to slow down seeks to a single record that does not span multiple sectors. > If I have a giant BLOB, it also might be fragmented all > over the disk. But hey, what's a > little friendly fragmentation among friends? Disk seeks > obviously take 0 ms. 4 ms to 9 ms > I have no idea how many experts it takes to convince him > he's wrong. But at a guess, an > infinite number is probably too small; we need > second-order infinities to express the > number of experts needed (map the experts to real numbers, > Aleph-1 infinity of experts. > And he probably still won't believe us!) Only one person providing complete and sound reasoning, or at least a link to reasonably concise complete and sound reasoning. > It sounds like a cool idea, and it can probably be used to > create a seek address, which > would work well if files were not fragmented and had zero > latency to fetch the sectors. It is far better to seek to a specific record number than it is to seek to an indexed value to find the record number to seek to; the latter needs twice as many disk seeks, thus half the potential throughput.
From: Peter Olcott on 13 Apr 2010 00:22
"Jerry Coffin" <jerryvcoffin(a)yahoo.com> wrote in message news:MPG.262d7a6da11542c4989872(a)news.sunsite.dk... > In article > <pYidndO7AuRyI17WnZ2dnUVZ_rednZ2d(a)giganews.com>, > NoSpam(a)OCR4Screen.com says... > > [ ... ] > >> So Linux thread time slicing is infinitely superior to >> Linux >> process time slicing? > > Yes, from the viewpoint that something that exists and > works (even > poorly) is infinitely superior to something that simply > doesn't exist > at all. David Schwartz from the Linux/Unix groups seems to disagree. I can't post a google link because it doesn't exist in google yet. Here is the conversation. > Someone told me that a process with a higher priority will > almost starve any other process of a lower priority, is > this > true? If it is true to what extent is this true? There are basically two concepts. First, the "priority" of a process is a combination of two things. First is the "static priority". That's the thing you can set with "nice". Second is the dynamic priority. A process that uses up its full timeslices has a very low dynamic priority. A process that blocks or yields will tend to have a higher dynamic priority. The process' priority for scheduling purposes is the static priority, adjusted by the dynamic priority. If a process becomes ready-to-run, it will generally pre-empt any process with a lower scheduling priority. However, if it keeps doing so, its dynamic priority will fall. (And then it will only continue to pre-empt processes with a lower static priority or processes that burn lots of CPU.) Among processes that are always ready-to-run, the old rule was to give the CPU to the highest-priority process that was ready-to-run. However, almost no modern operating system follows this rule (unless you specifically request it). They generally assign CPU time proportionately giving bigger slices to higher priority processes. 
DS > > From the viewpoint of the OS, a "process" is a data > structure holding > a memory mapping, and suspension state for some positive > number of > threads. There are also (often) a few other things > attached, but > that's pretty much the essentials. > > The scheduler doesn't schedule processes -- it schedules > threads. > User threads are always associated with some process, but > what's > scheduled is the thread, not the process. > >> One of my two options for implementing priority >> scheduling >> was to simply have the OS do it by using Nice to set the >> process priority of the process that does the high >> priority >> jobs to a number higher than that of the lower priority >> jobs. > > Which means (among other things) that you need yet another > process to > actually do that priority adjustment -- and for it to > be able to > reduce the priority of one of your high priority tasks, it > must have > a higher priority than any of the other threads. Since > it's going to > have extremely high priority, it needs to be coded > *extremely* > carefully to ensure it doesn't start to consume most of > the CPU time > itself (and if it does, killing it may be difficult). > > Don't get me wrong: it's *possible* for something like > this to work > -- but it's neither simple nor straightforward. > > -- > Later, > Jerry.
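[Editor's note: the "just use nice" option Peter mentions and Jerry cautions about looks roughly like this on POSIX systems. Here the low-priority worker demotes itself at startup, which sidesteps Jerry's tricky high-priority supervisor process entirely; `demote_self` is an illustrative name. As David Schwartz notes above, on Linux a higher nice value only biases the scheduler's proportional CPU sharing; it does not starve the process outright.]

```cpp
#include <sys/resource.h>

// Sketch: a low-priority worker raises its own nice value at startup.
// An unprivileged process may always *increase* its niceness (lower its
// static priority); decreasing it again requires privileges, which is one
// reason the external "supervisor adjusts priorities" design is fragile.
int demote_self(int niceness) {
    if (setpriority(PRIO_PROCESS, 0, niceness) != 0)
        return -1;                        // e.g. EPERM when lowering niceness
    return getpriority(PRIO_PROCESS, 0);  // report the resulting nice value
}
```

A worker handling only low-priority jobs would call something like `demote_self(10)` before entering its job loop; the high-priority worker simply stays at the default nice value, and the kernel's dynamic-priority adjustment described by David Schwartz does the rest.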