From: Hector Santos on 11 Apr 2010 11:03

Peter Olcott wrote:

>> You can't read, and that DOCUMENT is a blob of illusions.
>> Tell me how *YOUR PROJECT* can SQUEEZE out time that is
>> not available to you?
>>
>> 100 TPS === 1000/10ms
>>
>> If you go 1 ms above, you are HOSED.
>
> Yet another respondent that fails to pay attention.
>
> (1) The total processing time for the OCR is estimated at
> 10 ms.

You're dreaming. If your OCR process takes 10 ms, as was and is once again clearly established, then that leaves you 0 time to do anything else, including and beginning with the reception of the HTTP request.

>>>>> I am guessing that the sum total of these overhead
>>>>> sort of things will only take about 10 ms per
>>>>> transaction

See the word [overhead]? You missed that.

> (2) The total processing time for the overhead associated
> with this OCR processing is estimated at no more than an
> additional 10 ms.

So now you have 20 ms? That means you can only do 50 transactions per second.

> This provides at least 50 transactions per second and
> possibly the whole 100 will remain

OK, good, so you are now following the simple EQUATION and the table I provided to you. You are now REDUCING your TPS.

> because of hyperthreading
> allowing aspects of a single transaction to be performed
> concurrently.

Oh, shut up. You are putting your foot in your mouth again. The fact is you FINALLY realize that you can't do this in 10 ms, are willing to allow another 10 ms, and now REALIZE that your TPS is reduced to 50.

Take it from me, you are still way off. No way you can do this in 20 ms! Try more like 50-100 ms for WHAT YOU want to do.

And again, YOU are thinking in terms of a serialized, equalized streaming request input. That is NOT reality. The reality is a distribution. So you need to design for your worst case: a BLAST of requests within some X ms window, NO requests for Y ms, and the remaining requests spread over the rest of the second!
Example distribution:

    200 ms --> 25 requests --> 8 ms  --> WORKLOAD VIOLATED
    500 ms -->  0 requests
    300 ms --> 25 requests --> 12 ms --> WORKLOAD VIOLATED

Don't even bother with anything else until YOU get a realistic WORK TIME per request, which will help define what implementation methods you need.

--
HLS
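A quick way to see the point about arrival distributions is to simulate a single serial worker fed by the example bursts above. This sketch is mine, not from the thread; the arrival pattern (25 requests in 200 ms, silence for 500 ms, 25 more in 300 ms) and the 10 ms work time are the numbers under discussion.

```python
# Single-server FIFO simulation: why the *distribution* of arrivals
# matters, not just the average requests per second.

def simulate(arrivals_ms, service_ms):
    """Return per-request latency (queueing wait + service) for a FIFO server."""
    free_at = 0.0  # time the server next becomes free
    latencies = []
    for t in sorted(arrivals_ms):
        start = max(t, free_at)        # wait if the server is still busy
        free_at = start + service_ms   # occupy the server for this request
        latencies.append(free_at - t)  # total latency seen by this request
    return latencies

# Bursty second: 25 requests in the first 200 ms, none for 500 ms,
# 25 more in the last 300 ms -- only 50 requests/second on average.
burst1 = [i * (200 / 25) for i in range(25)]        # arrivals at 0..192 ms
burst2 = [700 + i * (300 / 25) for i in range(25)]  # arrivals at 700..988 ms
lat = simulate(burst1 + burst2, service_ms=10)
print(f"avg latency {sum(lat)/len(lat):.1f} ms, worst {max(lat):.1f} ms")
# -> avg latency 22.0 ms, worst 58.0 ms

# A perfectly uniform 50 req/s stream (one every 20 ms) never queues:
uniform = [i * 20 for i in range(50)]
lat_u = simulate(uniform, service_ms=10)
print(f"uniform stream worst latency {max(lat_u):.1f} ms")
# -> uniform stream worst latency 10.0 ms
```

Same average load, very different latencies: the first burst arrives faster than 10 ms per request can be served, so queueing delay accumulates to nearly 6x the service time.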
From: Hector Santos on 11 Apr 2010 11:15

Peter Olcott wrote:

> "Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
> news:eHBz1FY2KHA.4332(a)TK2MSFTNGP02.phx.gbl...
>> Peter Olcott wrote:
>>
>>> "Hector Santos" <sant9442(a)nospam.gmail.com> wrote in
>>> message news:%23psrlWX2KHA.5660(a)TK2MSFTNGP04.phx.gbl...
>>>> Peter Olcott wrote:
>>>>
>>>>> Since I only need to be able to process 100
>>>>> transactions per second, the speed of these overhead
>>>>> sort of things should not be too critical. I am
>>>>> guessing that the sum total of these overhead sort of
>>>>> things will only take about 10 ms per transaction,
>>>>> most of this being drive head seek time.
>>>> No, 10 ms is ONLY a CONVENIENT # that can be calculated
>>>> for 100 transactions per second. It was not based on
>>>> any real results you measured. You want 100 TPS, the
>>>> work load must be 10 ms. You want 1000 TPS, the work
>>>> load is 1 ms - period:
>>>>
>>>> TPS = 1000 / WORK LOAD (in ms)
>>>>
>>>> Period, period, period.
>>>>
>>>> There is no way on earth you can:
>>>>
>>>>   receive a HTTP request
>>>>   parse it
>>>>   authorize it via a database
>>>>   set some status points
>>>>   delegate it to some queue
>>>>   wait for a response <----> OCR process wakes up
>>>>                              sets status points
>>>>                              does image processing
>>>>                              sets status points
>>>>                              sends response
>>>>   send HTTP response
>>>>
>>>> all in 10 ms - you are SICK if you think you can do this
>>>> in 10 ms. And that's just for 1 single request. Throw
>>>> in 99 more per second, and you are completely whacked
>>>> once you realize the build-up of your queuing, where
>>>> each subsequent request will be delayed by a factor of
>>>> the magical 10 ms number.
>>>>
>>>> Until you realize this, nothing else you say matters.
>>>>
>>>> --
>>>> HLS
>>> It looks like you are wrong
>>> http://www.kegel.com/c10k.html#top
>>
>> You can't read, and that DOCUMENT is a blob of illusions.
>> Tell me how *YOUR PROJECT* can SQUEEZE out time that is
>> not available to you?
>>
>> 100 TPS === 1000/10ms
>>
>> If you go 1 ms above, you are HOSED.
>>
>> Let's imagine that your great Linux will do every step in
>> its 1 ms clock tick and that it is the only process in
>> the OS. In fact, there is no other OS process, kernel
>> logic or anything else to interrupt you.
>>
>>   1ms  receive a HTTP request
>>   1ms  log http header
>>   1ms  save posted data on disk
>>   1ms  read HTTP and authenticate via a database
>>   1ms  set some status point
>>   1ms  delegate it to some queue
>>   1ms  wait for a response <----> OCR process wakes up
>>   1ms  set status points
>>   1ms  read posted file
>>   1ms  do image processing
>>   1ms  set status points
>>   1ms  send response
>>   1ms  send HTTP response
>
> If you wouldn't be so rude with me I would tone down my
> criticism of you, but the above sequence does show an
> enormous degree of ignorance.

No it doesn't. It's reality. You're the one with a whole set of design assumptions based on ignorance. I speak with engineering experience.

The worst case for you is 1 ms PER clock tick on your LINUX system, and the above assumes YOU have FULL attention, which isn't going to be reality. While some steps can happen in far less than 1 ms, the major interface points with the outside world (outside your OCR box) can each take 1 or more clock ticks!

The reality is YOU will not get full attention. You will have interrupts, especially at each FILE I/O at the most basic level.

You WILL NOT BE ABLE TO DO WHAT YOU WANT in 10 or 20 ms per transaction. I say more like, at BEST, 50-100 ms, and that's just good engineering estimating based on all the ENGINEERING you want to do.

We already proved that your memory I/O for 1.5 GB will be far greater than 10 ms. You proved it to yourself and admitted how you finally realized that memory virtualization and fragmentation play a role and were not just a figment of everyone's imagination.

You need to get REAL.

--
HLS
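Adding up the optimistic 1 ms steps makes the arithmetic concrete. The sketch below is mine (the thread contains no code); the step names come from the list above, and the flat 1 ms per step is Hector's deliberately generous assumption.

```python
# Back-of-the-envelope: even at an idealized 1 ms per step, the 13
# steps in the request pipeline already blow the 10 ms budget that
# 100 transactions per second requires.

STEPS = [
    "receive HTTP request", "log http header", "save posted data on disk",
    "read HTTP and authenticate via a database", "set status point",
    "delegate to queue", "wait for OCR response", "set status points",
    "read posted file", "do image processing", "set status points",
    "send response", "send HTTP response",
]

MS_PER_STEP = 1.0                       # best-case assumption from the post
workload_ms = MS_PER_STEP * len(STEPS)  # total per-transaction work time
tps = 1000 / workload_ms                # Hector's formula: TPS = 1000 / workload

print(f"{len(STEPS)} steps x {MS_PER_STEP:.0f} ms = {workload_ms:.0f} ms "
      f"per transaction -> at most {int(tps)} TPS")
# -> 13 steps x 1 ms = 13 ms per transaction -> at most 76 TPS
```

Even under the fantasy schedule of one clock tick per step, the serial pipeline caps out below the 100 TPS target.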
From: Peter Olcott on 11 Apr 2010 11:23

"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
news:ePWHPnY2KHA.5820(a)TK2MSFTNGP06.phx.gbl...
> Peter Olcott wrote:
>
>> "Hector Santos" <sant9442(a)nospam.gmail.com> wrote in
>> message news:eHBz1FY2KHA.4332(a)TK2MSFTNGP02.phx.gbl...
>>> Peter Olcott wrote:
>>>
>>>> "Hector Santos" <sant9442(a)nospam.gmail.com> wrote in
>>>> message news:%23psrlWX2KHA.5660(a)TK2MSFTNGP04.phx.gbl...
>>>>> Peter Olcott wrote:
>>>>>
>>>>>> Since I only need to be able to process 100
>>>>>> transactions per second, the speed of these overhead
>>>>>> sort of things should not be too critical. I am
>>>>>> guessing that the sum total of these overhead sort of
>>>>>> things will only take about 10 ms per transaction,
>>>>>> most of this being drive head seek time.
>>>>> No, 10 ms is ONLY a CONVENIENT # that can be
>>>>> calculated for 100 transactions per second. It was
>>>>> not based on any real results you measured. You want
>>>>> 100 TPS, the work load must be 10 ms. You want 1000
>>>>> TPS, the work load is 1 ms - period:
>>>>>
>>>>> TPS = 1000 / WORK LOAD (in ms)
>>>>>
>>>>> Period, period, period.
>>>>>
>>>>> There is no way on earth you can:
>>>>>
>>>>>   receive a HTTP request
>>>>>   parse it
>>>>>   authorize it via a database
>>>>>   set some status points
>>>>>   delegate it to some queue
>>>>>   wait for a response <----> OCR process wakes up
>>>>>                              sets status points
>>>>>                              does image processing
>>>>>                              sets status points
>>>>>                              sends response
>>>>>   send HTTP response
>>>>>
>>>>> all in 10 ms - you are SICK if you think you can do
>>>>> this in 10 ms. And that's just for 1 single request.
>>>>> Throw in 99 more per second, and you are completely
>>>>> whacked once you realize the build-up of your queuing,
>>>>> where each subsequent request will be delayed by a
>>>>> factor of the magical 10 ms number.
>>>>>
>>>>> Until you realize this, nothing else you say matters.
>>>>>
>>>>> --
>>>>> HLS
>>>> It looks like you are wrong
>>>> http://www.kegel.com/c10k.html#top
>>>
>>> You can't read, and that DOCUMENT is a blob of illusions.
>>> Tell me how *YOUR PROJECT* can SQUEEZE out time that is
>>> not available to you?
>>>
>>> 100 TPS === 1000/10ms
>>>
>>> If you go 1 ms above, you are HOSED.
>>>
>>> Let's imagine that your great Linux will do every step
>>> in its 1 ms clock tick and that it is the only process
>>> in the OS. In fact, there is no other OS process, kernel
>>> logic or anything else to interrupt you.
>>>
>>>   1ms  receive a HTTP request
>>>   1ms  log http header
>>>   1ms  save posted data on disk
>>>   1ms  read HTTP and authenticate via a database
>>>   1ms  set some status point
>>>   1ms  delegate it to some queue
>>>   1ms  wait for a response <----> OCR process wakes up
>>>   1ms  set status points
>>>   1ms  read posted file
>>>   1ms  do image processing
>>>   1ms  set status points
>>>   1ms  send response
>>>   1ms  send HTTP response
>>
>> If you wouldn't be so rude with me I would tone down my
>> criticism of you, but the above sequence does show an
>> enormous degree of ignorance.
>
> No it doesn't. It's reality. You're the one with a whole
> set of design assumptions based on ignorance. I speak
> with engineering experience.

Now you are being asinine. If every little thing took 1 ms, then it would take at least several days for a machine to finish rebooting. I guess there is no sense in paying attention to you any more.

> The worst case for you is 1 ms PER clock tick on your
> LINUX system, and the above assumes YOU have FULL
> attention, which isn't going to be reality. While some
> steps can happen in far less than 1 ms, the major
> interface points with the outside world (outside your OCR
> box) can each take 1 or more clock ticks!
>
> The reality is YOU will not get full attention. You will
> have interrupts, especially at each FILE I/O at the most
> basic level.
>
> You WILL NOT BE ABLE TO DO WHAT YOU WANT in 10 or 20 ms
> per transaction. I say more like, at BEST, 50-100 ms, and
> that's just good engineering estimating based on all the
> ENGINEERING you want to do.
>
> We already proved that your memory I/O for 1.5 GB will be
> far greater than 10 ms. You proved it to yourself and
> admitted how you finally realized that memory
> virtualization and fragmentation play a role and were not
> just a figment of everyone's imagination.
>
> You need to get REAL.
>
> --
> HLS
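Whatever the true per-step times are, the queueing consequence being argued here holds whenever the real work time exceeds the budget. A sketch (mine, with illustrative values from the 20 ms / 50 TPS exchange): if requests keep arriving at 100 per second but each takes 20 ms of serial work, the backlog grows without bound.

```python
# Single serial worker: when arrival rate exceeds service capacity,
# the queue (and the wait of the newest request) grows every second.

arrival_rate = 100            # requests per second offered
service_ms = 20               # real per-request work time (OCR + overhead)
capacity = 1000 / service_ms  # max requests/second the worker can finish
growth = arrival_rate - capacity  # backlog added each second

for t in range(1, 6):
    backlog = growth * t                  # requests waiting after t seconds
    wait_s = backlog * service_ms / 1000  # time needed to drain that queue
    print(f"after {t}s: ~{int(backlog)} queued, newest request waits ~{wait_s:.1f}s")
```

At 20 ms per request the worker can finish only 50 per second, so 50 requests join the queue every second and after five seconds a new request waits around five seconds, which is why the work time per request has to be pinned down before anything else.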
From: Hector Santos on 11 Apr 2010 11:42

Peter Olcott wrote:

>> No it doesn't. It's reality. You're the one with a whole
>> set of design assumptions based on ignorance. I speak
>> with engineering experience.
>
> Now you are being asinine. If every little thing took 1
> ms, then it would take at least several days for a machine
> to finish rebooting. I guess there is no sense in paying
> attention to you any more.

Because YOU can't handle the TRUTH, and I keep proving it at each and every step in this polluted thread. Joe tells you the truth and you can't handle it. I tell you the truth and you can't handle it.

You have a MANY-THREAD to 1 FIFO QUEUE design:

  - IGNORANCE EQUALS PRESSURE

You want to use SQLITE in a WAY it was not designed to work, with a completely unrealistic freedom for read/write I/O that was never DESIGNED into SQLITE:

  - IGNORANCE EQUALS PRESSURE

You think that because LINUX allows for 1 ms clock ticks you can get 1 ms UNINTERRUPTED QUANTUMS. It doesn't mean you get 1 ms of real time - it is 1 ms of TOTAL time, and that is not real time:

  - IGNORANCE EQUALS PRESSURE

You think you have FOUR smooth named pipes with the ever-changing design you have:

  - IGNORANCE EQUALS PRESSURE

You think you have control of PAGING and MEMORY VIRTUALIZATION when you go back and forth on minimizing data loss and maximizing crash recovery:

  - IGNORANCE EQUALS PRESSURE

You have no idea of what's going on; you need to buy 10,000 pages worth of books that you can't follow anyway, and you still have a 25-year-old OS book you forgot to read the 2nd half of but want to finish now, thinking it still applies:

  - IGNORANCE EQUALS PRESSURE

And what's funny about all this: you won't be able to code for THREADS, even though you go back and forth on whether you will or not. And you can't code for memory maps. So it's all a PIPED DREAM.

  - IGNORANCE EQUALS PRESSURE

--
HLS
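The clock-tick point above (a 1 ms tick is not 1 ms of uninterrupted CPU) can be checked empirically. This sketch is mine, not from the thread: it asks the OS for 1 ms sleeps and measures what it actually gets; results vary by OS, timer resolution, and load, but the OS only guarantees *at least* the requested time.

```python
# Measure how long "sleep 1 ms" really takes. Scheduler ticks, timer
# granularity, and other processes decide how much extra you wait.
import time

target_ms = 1.0
n = 100
start = time.perf_counter()
for _ in range(n):
    time.sleep(target_ms / 1000)  # request a 1 ms sleep
elapsed_ms = (time.perf_counter() - start) * 1000 / n

print(f"asked for {target_ms:.1f} ms per sleep, got ~{elapsed_ms:.2f} ms average")
```

On a busy machine the average typically comes back noticeably above 1 ms, which is the gap between "total time" and "real time" being argued about here.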
From: Peter Olcott on 11 Apr 2010 13:43
"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
news:%237Nwl2Y2KHA.6048(a)TK2MSFTNGP06.phx.gbl...
> Peter Olcott wrote:
>
>>> No it doesn't. It's reality. You're the one with a whole
>>> set of design assumptions based on ignorance. I speak
>>> with engineering experience.
>>
>> Now you are being asinine. If every little thing took 1
>> ms, then it would take at least several days for a
>> machine to finish rebooting. I guess there is no sense in
>> paying attention to you any more.
>
> Because YOU can't handle the TRUTH, and I keep proving it
> at each and every step in this polluted thread. Joe tells
> you the truth and you can't handle it. I tell you the
> truth and you can't handle it.
>
> You have a MANY-THREAD to 1 FIFO QUEUE design:
>
>   - IGNORANCE EQUALS PRESSURE
>
> You want to use SQLITE in a WAY it was not designed to
> work, with a completely unrealistic freedom for read/write
> I/O that was never DESIGNED into SQLITE:
>
>   - IGNORANCE EQUALS PRESSURE
>
> You think that because LINUX allows for 1 ms clock ticks
> you can get 1 ms UNINTERRUPTED QUANTUMS. It doesn't mean
> you get 1 ms of real time - it is 1 ms of TOTAL time, and
> that is not real time.

Like I told Joe, it is beginning to look like reading the 10,000 pages of books that I recently bought is going to be a much more efficient and effective way of proceeding from here. One of these books covers the internals of the Linux kernel.

>   - IGNORANCE EQUALS PRESSURE
>
> You think you have FOUR smooth named pipes with the
> ever-changing design you have:
>
>   - IGNORANCE EQUALS PRESSURE
>
> You think you have control of PAGING and MEMORY
> VIRTUALIZATION when you go back and forth on minimizing
> data loss and maximizing crash recovery:
>
>   - IGNORANCE EQUALS PRESSURE
>
> You have no idea of what's going on; you need to buy
> 10,000 pages worth of books that you can't follow anyway,
> and you still have a 25-year-old OS book you forgot to
> read the 2nd half of but want to finish now, thinking it
> still applies:
>
>   - IGNORANCE EQUALS PRESSURE
>
> And what's funny about all this: you won't be able to code
> for THREADS, even though you go back and forth on whether
> you will or not. And you can't code for memory maps. So
> it's all a PIPED DREAM.
>
>   - IGNORANCE EQUALS PRESSURE
>
> --
> HLS