From: Moi on 3 Apr 2010 07:49

On Fri, 02 Apr 2010 14:49:08 -0500, Peter Olcott wrote:

> "Joe Beanfish" <joe(a)nospam.duh> wrote in message
> news:hp5fph$sec(a)news.thunderstone.com...
>> On 04/01/10 19:23, Peter Olcott wrote:
>>> I am trying to convert my proprietary OCR software into a web
>>> application. Initially there will be multiple threads, one for each
>>> web request, and a single-threaded process servicing these web
>>> requests. Eventually there may be multiple threads servicing these
>>> web requests.
>>
>> I'd use a database to maintain the queue. Sometimes you can use the
>> filesystem to accomplish database-like operations. One file per
>> record. Separate directories for pending and completed jobs. Mail
>> systems often do that. One file for the mail msg, one for the
>> headers, and maybe another for status info.
>>
>> If using the filesystem as a database, use "mv" to accomplish atomic
>> operations:
>>   write to tmpfile.pid
>>   mv tmpfile.pid readytogo.img
>>   queue reader looks for *.img
>
> I was going to use a single file with binary data and fixed-length
> records to keep track of all of the web requests. I also proposed
> named pipes as the means of notification of new web requests and
> completed requests.

I would choose a spooldir mechanism with one file per request.

The producer creates them in the spooldir (or in a tempdir, and finally
moves them into the spooldir). The consumer takes them out of the
spooldir and moves them to a workdir. Once completed, the worker moves
them into the resultsdir, where the webapp can pick them up.

This may seem a heavy mechanism in terms of filesystem operations, but
it is very robust, and restart after a crash takes almost no extra
work. Also, the coupling needs no special machinery, such as message
queues or (named) pipes, so it is easy to implement on both sides.

A spooldir implementation takes exactly the same count of
events/messages; the only difference is that some reads and writes are
replaced by creat(), link(), and unlink(). The number of syscalls could
stay approximately the same.

Whether the OCR program uses threads (and how) is a separate issue.

HTH,
AvK
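For concreteness, a minimal C sketch of the rename()-based handoff both
posts describe. rename() is atomic only when the temporary file and the
spool directory are on the same filesystem; the /var/ocr paths and the
job-naming scheme here are illustrative assumptions, not anyone's
actual layout:

    /* Write a job under a temporary name, then rename() it into the
     * spool directory. Because rename() is atomic, a consumer scanning
     * the spooldir never sees a partially written job file. */
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>

    int submit_job(const void *data, size_t len, long jobid)
    {
        char tmppath[256], spoolpath[256];

        snprintf(tmppath, sizeof tmppath,
                 "/var/ocr/tmp/%ld.%ld", jobid, (long)getpid());
        snprintf(spoolpath, sizeof spoolpath,
                 "/var/ocr/spool/%ld.img", jobid);

        int fd = open(tmppath, O_WRONLY | O_CREAT | O_EXCL, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, data, len) != (ssize_t)len || fsync(fd) < 0) {
            close(fd);
            unlink(tmppath);
            return -1;
        }
        close(fd);
        /* The atomic step: only now can the consumer pick the job up. */
        if (rename(tmppath, spoolpath) < 0) {
            unlink(tmppath);
            return -1;
        }
        return 0;
    }

The consumer's readdir() loop then only ever sees complete *.img files,
and moving a job between spooldir, workdir, and resultsdir is one more
rename() per state transition.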
From: Peter Olcott on 3 Apr 2010 10:47

"Ersek, Laszlo" <lacos(a)caesar.elte.hu> wrote in message
news:Pine.LNX.4.64.1004030617470.22478(a)login01.caesar.elte.hu...
> On Fri, 2 Apr 2010, Peter Olcott wrote:
>
>> "Ersek, Laszlo" <lacos(a)caesar.elte.hu> wrote in message
>> news:Pine.LNX.4.64.1004022206050.1774(a)login01.caesar.elte.hu...
>
>>> ... What about using SQLite for safe job storage, and using the
>>> other mechanisms only for notification, so you don't have to poll?
>>> <http://www.sqlite.org/threadsafe.html> I apologize if this has
>>> already been discussed.
>>
>> That looks like a good idea. I just bought the book on Amazon. What
>> other IPC mechanisms might you suggest?
>
> One idea might be: write a long-lived daemon, restarted by the init
> process if it crashes. The daemon would do the following:
>
> 1. create a PID file
> 2. block SIGUSR1 in the main thread and then install a simplistic
>    handler
> 3. spawn N worker threads (with SIGUSR1 blocked)
> 4. pull jobs out of the database and hand them off to the worker
>    threads until there are no more recently added jobs left
> 5. wait for SIGUSR1 with sigsuspend()
> 6. go back to step 4.
>
> Worker threads would process the requests and store the result back
> into the DB for later retrieval. (Same or different table.) You have
> to be very careful when designing and implementing the state
> transitions for individual jobs. Make sure it is no problem to pick
> up any job in any state (except the successful termination state) and
> to continue / retry from there. There don't need to be many states.
> Invent as few as possible. Don't try to prevent redundant operations
> after a crash, or in case a second daemon instance is started
> erroneously. Rather, make sure the operations (e.g. storing the
> result) are idempotent. This is more robust. Don't rely on the
> daemon's presence; rely on persistent job states and clearly defined
> elementary state transitions. Treat your daemon as a single-shot
> batch utility that happens to have a sometimes functional loop in it.
>
> The queue between the main thread and the worker threads can have
> limited depth. It is no problem if the main thread blocks in step 4
> for some time.
>
> Make another, short-lived CGI program invoked by the web server that

I have already decided upon this aspect of the architecture, and it
will not be CGI. A separate CGI instance must be loaded for every
request. I also looked into FastCGI. The solution I settled on is to
use a modified web server for this purpose.

http://en.wikipedia.org/wiki/Comparison_of_lightweight_web_servers

There are numerous web servers that have the source code available, so
I will modify one of these to talk directly with my OCR process. It
will be this modified web server that persists the incoming web
requests.

It is beginning to look like either SQLite or MySQL will provide the
required persistence. I was thinking that these may slow things down
too much, but, if SQLite can easily recover from a loss of power, this
may be the way to go. It may be too slow, taking a few dozen
milliseconds per transaction, because it only uses file locks instead
of record locks. I know that I have to have some sort of relational
database to keep track of the financial aspect of the transaction.

I might be leaning back towards some do-it-yourself approach for the
log file. The only missing piece here is providing a way to lock
portions of this file. It looks like appends are atomic, and pread()
and pwrite() are atomic.
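For reference, a minimal sketch of steps 2 through 5 of the daemon loop
Laszlo outlines above. drain_new_jobs() is a hypothetical stand-in for
step 4's database polling; the point of the mask handling is that
SIGUSR1 is unblocked only inside sigsuspend(), so a notification
arriving between the drain and the wait cannot be lost:

    #include <signal.h>
    #include <string.h>

    extern void drain_new_jobs(void);  /* hypothetical: step 4, DB polling */

    /* The handler's only purpose is to interrupt sigsuspend(). */
    static void on_usr1(int sig) { (void)sig; }

    int main(void)
    {
        sigset_t block_usr1, wait_mask;
        struct sigaction sa;

        sigemptyset(&block_usr1);
        sigaddset(&block_usr1, SIGUSR1);
        /* step 2: block SIGUSR1; keep the prior mask for sigsuspend() */
        sigprocmask(SIG_BLOCK, &block_usr1, &wait_mask);
        sigdelset(&wait_mask, SIGUSR1);

        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_usr1;
        sigaction(SIGUSR1, &sa, NULL);

        /* step 3: spawn N worker threads here; they inherit the mask,
         * so SIGUSR1 stays blocked in every thread but this one's wait */

        for (;;) {
            drain_new_jobs();        /* step 4 */
            sigsuspend(&wait_mask);  /* step 5: atomically unblock + wait */
        }
    }

Because the workers are spawned with SIGUSR1 blocked, the signal is
always delivered to the main thread's sigsuspend().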
All that I need now is a way to make pread() + pwrite() into a single
atomic operation. One way to do this would be to serialize the
financial transactions to a single thread or process; then I don't even
need a record lock. Multiple threads of one process could write to a
FIFO queue that has a single thread to do all the financial
transactions.

> stores the new job in the DB (with some unique ID strictly greater
> than IDs generated before), one that sends a SIGUSR1 to the process
> identified by the PID file thereafter. (Or implement this in PHP or
> whatever.) If a SIGUSR1 was pending on that process anyway (e.g. due
> to jobs arriving quickly in parallel, in a burst), this is
> idempotent; SIGUSR1 is not queued. If the daemon was already
> selecting jobs from the table, it will wake up immediately in step 5
> after finishing the loop and then make a possibly empty round, but
> that's no problem.
>
> If you don't trust the PID file to be valid (perhaps you try to send
> SIGUSR1 while init is reaping and restarting the crashed daemon), you
> could merge the CGI program into the daemon itself. If O_CREAT |
> O_EXCL succeeds with the PID file, become the daemon; otherwise, send
> a signal to the daemon. This is not infallible, but seems good
> enough. A variation: try to bind a unix domain datagram socket.
> EADDRINUSE -> send message to listening daemon (perhaps after setting
> O_NONBLOCK); success -> become listening daemon.
>
> Just my $0.02. Sorry if I misunderstood what you intend to do.
>
> lacos
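A minimal sketch of that single-writer idea, with an assumed record
layout and a fixed-depth in-memory queue: worker threads enqueue
transaction requests, and one dedicated thread applies them with
pread()/pwrite(), so every read-modify-write on the transaction file
is serialized without any record lock:

    #include <pthread.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define QDEPTH 64

    struct txn { off_t offset; long delta; };  /* e.g. adjust a balance */

    static struct txn queue[QDEPTH];
    static int qhead, qtail;
    static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;

    void enqueue_txn(struct txn t)      /* called by any worker thread */
    {
        pthread_mutex_lock(&qlock);
        queue[qtail] = t;
        qtail = (qtail + 1) % QDEPTH;   /* overflow check omitted */
        pthread_cond_signal(&qcond);
        pthread_mutex_unlock(&qlock);
    }

    void *txn_thread(void *arg)         /* the single writer */
    {
        int fd = *(int *)arg;           /* the transaction file */
        for (;;) {
            pthread_mutex_lock(&qlock);
            while (qhead == qtail)
                pthread_cond_wait(&qcond, &qlock);
            struct txn t = queue[qhead];
            qhead = (qhead + 1) % QDEPTH;
            pthread_mutex_unlock(&qlock);

            /* Only this thread touches the records, so this
             * read-modify-write is effectively atomic.
             * I/O error checks omitted for brevity. */
            long balance;
            pread(fd, &balance, sizeof balance, t.offset);
            balance += t.delta;
            pwrite(fd, &balance, sizeof balance, t.offset);
        }
        return NULL;
    }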
From: Peter Olcott on 4 Apr 2010 22:38
"Moi" <root(a)invalid.address.org> wrote in message news:e3681$4bb72b62$5350c024$29456(a)cache120.multikabel.net... > On Fri, 02 Apr 2010 14:49:08 -0500, Peter Olcott wrote: > >> "Joe Beanfish" <joe(a)nospam.duh> wrote in message >> news:hp5fph$sec(a)news.thunderstone.com... >>> On 04/01/10 19:23, Peter Olcott wrote: >>>> I am trying to convert my proprietary OCR software into >>>> a web >>>> application. Initially there will be multiple threads, >>>> one for each web request, and a single threaded process >>>> servicing >>>> these web requests. Eventually there may be multiple >>>> threads servicing >>>> these web requests. >>> >>> I'd use a database to maintain the queue. Sometimes you >>> can >>> use the filesystem to accomplish database like >>> operations. One >>> file per record. Separate directories for pending and >>> completed >>> jobs. Mail systems often do that. One file for the mail >>> msg, one >>> for the headers, and maybe another for status info. >>> >>> If using the filesystem as a database use "mv" to >>> accomplish >>> atomic operations: >>> write to tmpfile.pid >>> mv tmpfile.pid readytogo.img >>> queue reader looks for *.img >> >> I was going to use a single file with binary data and >> fixed length >> records to keep track of all of the web requests. I also >> proposed named >> pipes as the means of notification of new web requests, >> and completed >> requests. > > I would choose a spooldir mechanism with one file per > request. > > The producer creates them in the spooldir (or in a tempdir > and finally moves them into the spooldir) > The consumer takes them out of the spooldir and moves them > to a workdir > Once completed, the worker moves them into the resultsdir, > where the webapp > can pick them up. > > This may seem a heavy mechanism in terms of filesystem > operations > , but it is very robust, and restart after crash takes > almost no extra work. > Also the coupling takes no special operations, such as > message queues > or (named) pipes, so it easy to implement on both sides. Ok so it does seem an easy way to implement a system that can withstand crashes. But it may get a little sloppy if there are very many web requests and for some reason processing is delayed. Hundreds or possibly even thousands of little files could build up. It would not be easy to keep everything straight, for example strict FIFO order. Also it looks like this would require polled notification of data availability. With a little extra work beyond my original design, and employing the SQLite fault recovery design pattern, I could have a system that is just as fault tolerant, and only need a single transaction file. I would notify the OCR thread(s) of incoming data using a named pipe that has the offset within the transaction file as one part of this notification. > > A spooldir implementation takes exactly the same count of > events/messages, > the only difference is that some reads and writes are > replaced by creat() > , link() , unlink(). > The number of syscalls could stay approximately the same. > > Whether the OCR program uses threads (and how) is a > separate issue. > > HTH, > AvK |