From: William M. Klein on 16 May 2005 18:13

Kellie,
   I think you are mixing up one implementation (probably Micro Focus) with "COBOL" when you say, "COBOL does an excellent job of BackUp, Restore and archive any type of files." COBOL, itself, has NO "backup and restore" facility, and its "restart" capabilities are limited (at best). Specific vendors have been FORCED to create "solutions" to this serious lack. (Just as Standard COBOL - up until the '02 Standard - had *no* file-sharing or record-locking, and even in the '02 Standard this is "processor dependent", aka "OPTIONAL".)

If one is designing an application "from scratch", I would certainly go with a "standard" RDB (Relational Database) system over a COBOL-only solution any day of the week. These solutions (besides built-in "recovery" systems) also provide built-in (or relatively common) user-interface tools that may or may NOT be available for COBOL files.

--
Bill Klein
 wmklein <at> ix.netcom.com

"Kellie Fitton" <KELLIEFITTON(a)YAHOO.COM> wrote in message news:1116279557.772145.199780(a)g49g2000cwa.googlegroups.com...
> Hello Robert,
>
> You can lead a horse to the water, however, if you can make him float,
> you got something pretty good. :---))
>
> You are raising an excellent point, though --- however, I think that a
> skillful programmer with clairvoyant thinking, and a well thought-out
> program design approach, can produce or at least emulate a large database
> management system's functionality. Not to mention also, COBOL does an
> excellent job of BackUp, Restore and archive of any type of files.
>
> Personally, I think the only weakLink in my indexed file design is the
> actual indexed files per se; indexed files tend to bog down when they get
> very large in size. However, COBOL systems can work around that issue
> pretty easily, as Richard pointedOut above.
>
> Regards, Kellie.
From: docdwarf on 16 May 2005 18:14

In article <1116258479.794260.20980(a)g14g2000cwa.googlegroups.com>, Kellie Fitton <KELLIEFITTON(a)YAHOO.COM> wrote:

[snip]

>1). Should shared dataFiles use record-locking schemes?

Sometimes... but not always.

>2). Should shared dataFiles use a data compression mechanism?

Rarely... but not never.

>3). Should shared dataFiles (largeSize) be split into several
>    smaller files?

This can help... and it can also mess things up.

>4). Should shared dataFiles be handled by an I/O dynamically
>    called module per file, that services all processes running
>    concurrently?

At times this is good... at other times, less so.

>5). Should shared dataFiles be handled by an I/O independent
>    module per file, that runs separately and communicates with
>    the application through shared memory, and services all
>    processes running concurrently?

That depends on a few things.

You *do* realise that people spend... more than a little time testing these very possibilities on different platforms and under different conditions, don't you? If these questions were so readily answered they could save an awful lot of time.

DD
From: Richard on 16 May 2005 18:26

> I would like to use a manual lock with kept locks for multiple record locking;
> however, what would happen if some records are locked, and the workStation
> experiences a powerOutage of some sort? I know the operating system will
> release the records automatically, but I don't know after how long, though?

How do you _know_ that the locks will be released automatically?

With MS-DOS based programs using SHARE, an OS-based record lock was applied to a set of bytes at an offset. If the program crashed without releasing the locks, you had to reboot to reliably release them.

Certainly, with the systems that I use (non-Microsoft), the locks are released if the process crashes, but all programs run in the one box (server based). If a client machine crashes or just stops, then the server simply never gets another message. Specifically, it does _NOT_ get a message saying 'I have just crashed, please tidy up'. The server neither knows nor cares whether the client machine is switched off or the user has gone for an extended lunch and wants the record left locked.

The only way that a server will know if the client died is if it polls the machine. It may also release locks after a timeout - show where this is in the documentation before assuming that it will.

> doesn't compression/de-compression of the file records cause some performance
> problems or read/write operation delays?

There is some small amount of CPU time spent compressing and decompressing, but RLE (Run-Length Encoding) is fast and easy, and isn't a 'performance problem' even if you were still using 486 machines. The gain is the saving of network time, disk transfers, and head moves.

> I am trying to choose between a dynamically called or an independent I/O module.
> Dynamically speaking though, it will be an iteration of the same module for each
> runUnit within the same application. I want to run the interFace program on a
> client machine, while the application runs on the file server, so the shared
> dataFiles can be used efficiently on the netWork. Also, I am looking for a way to
> reduce the network traffic to exactly one roundTrip per I/O operation, if I can
> do that with my COBOL code. Do you consider that approach wise?

I don't think that you have stated a single approach yet. You appear to want to have the server run 'file system services' for your clients, but don't know whether it should be one run-unit or one per client.

A single 'file system server' will have to maintain its own lock list, as COBOL locks are only detectable by _another_ process. You would also probably want this to multi-thread. It would also have to maintain timeouts, or poll the clients, to detect system outages for lock releasing.

But I suspect that you are very, very far from having a finished product and are just at the sketch-the-boxes phase. By using CALLs to access files you can start with having these routines statically linked, move to having them dynamically loaded, and later build an IPC replacement to export the routines to the server. That way you can _learn_ about the advantages or disadvantages of each approach.

> All the runUnits are in one machine for each end-user --- provided though, the
> main application will run on the file server, where the main file searching will
> take place accordingly. Doesn't that approach reduce the roundTrip delay in the
> network connections and in the dataRate transfer as well??

What is your system? Unix/Linux? Windows? Windows TSE? How does the client talk to the server? Terminal to TSE? X terminal? tty? SMB? IPC?

If you have an "interFace program on a client", how do you start the server application for it? How do a dozen server applications start up for a dozen clients? Every different system has different ways of doing it; you seem to want to define the end point without having any steps in between.
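Richard's point above, that locks owned by a server-side process die with that process while a crashed client never sends any tidy-up message, can be demonstrated on a POSIX box. The sketch below uses Python and `fcntl` purely as an illustration; COBOL record locks go through the run-time's file handler, not `fcntl` directly.

```python
import fcntl
import os
import tempfile

# A byte-range record lock is owned by a process, and the kernel drops it
# when that process exits -- the program never sends an 'I have just
# crashed, please tidy up' message.  (POSIX advisory locking only.)

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b" " * 80)        # one 80-byte "record"
    path = f.name

pid = os.fork()
if pid == 0:
    # Child: lock the record, then die without unlocking.
    fd = os.open(path, os.O_RDWR)
    fcntl.lockf(fd, fcntl.LOCK_EX, 80, 0)
    os._exit(0)               # simulate a crash: no explicit unlock

os.waitpid(pid, 0)            # child is gone; its lock went with it

fd = os.open(path, os.O_RDWR)
# LOCK_NB: raise immediately if the record were still locked.
fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB, 80, 0)
lock_acquired_after_crash = True
os.close(fd)
os.unlink(path)
```

Note that this only shows the server-local case Richard describes: when the lock holder and the lock table live on the same machine, the kernel cleans up. It says nothing about a remote client that silently powers off, which is exactly the case he warns cannot be detected this way.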
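The compression point can also be made concrete. Below is a minimal run-length encoding sketch (Python for illustration; a real indexed-file handler packs runs into bytes rather than tuples), showing why fixed-length records padded with spaces compress so cheaply: one linear pass in each direction.

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Collapse runs of repeated bytes into (count, value) pairs."""
    runs: list[tuple[int, int]] = []
    for b in data:
        if runs and runs[-1][1] == b:
            runs[-1] = (runs[-1][0] + 1, b)
        else:
            runs.append((1, b))
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    """Expand (count, value) pairs back to the original bytes."""
    return b"".join(bytes([value]) * count for count, value in runs)

# An 80-byte fixed-length record padded with trailing spaces:
record = b"SMITH" + b" " * 75
encoded = rle_encode(record)           # 6 runs instead of 80 bytes
assert rle_decode(encoded) == record   # round-trips losslessly
```

The CPU cost is a single pass over the record each way, which supports Richard's claim that the real win is fewer bytes on the wire and on disk, not the (negligible) encode/decode time.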
From: Kellie Fitton on 16 May 2005 18:32

Hi Bill,

The functionality of BackUp, Restore and Archive does NOT need to be a built-in function within the COBOL language --- a well-designed COBOL module with good algorithm functions can do the job very well, as needed.

My compiler (NetExpress) provides file-sharing and an excellent locking mechanism as built-in features. However, regarding the "Restart" mode, perhaps you can give me some more information, or highlight the question?

Regards, Kellie.
From: Richard on 16 May 2005 18:41
> however, I think that a skillful programmer with clairvoyant thinking, and a
> well thought-out program design approach, can produce or at least emulate a
> large database management system's functionality.

That _may_ be true (but probably isn't), and I think that MicroFocus may have got quite close with its 'file share' product some years ago. But I very much doubt that it would be worthwhile. I suspect that you are not aware of what the full range of DBMS functionality actually is.

> Not to mention also, COBOL does an excellent job of BackUp,
> Restore and archive of any type of files.

It does? How so? What _COBOL_ features actually do this?

> Personally, I think the only weakLink in my indexed file design is the
> actual indexed files per se,

Of course, it may be that you only see one "only weakLink" because you haven't found the other problems yet. For example, you blithely assume that locks 'will be released when clients stop', you assume that network traffic is the bottleneck, or that indexed files 'bog down'.

You need to work with real cases to see why they arrived at the solutions that they have, and what problems those solutions don't cater for. If you do have a real case then comments need to be made based on what it actually is, so you need to outline the requirement.
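Richard's earlier remark that a 'file system server' must keep its own lock list, and expire entries on a timeout because it cannot rely on clients announcing their own death, can be sketched as follows. The `LockTable` class and its method names are invented for this illustration (Python here, not COBOL) and are not part of any actual product.

```python
import time

class LockTable:
    """Toy server-side lock list: the server, not the OS, tracks which
    client holds which record, and expires locks after a timeout --
    because a crashed client never sends an 'I died, tidy up' message."""

    def __init__(self, timeout_seconds: float, clock=time.monotonic):
        self.timeout = timeout_seconds
        self.clock = clock            # injectable clock, for testing
        self.locks = {}               # record_key -> (client_id, acquired_at)

    def _expire_stale(self):
        now = self.clock()
        stale = [k for k, (_, t) in self.locks.items()
                 if now - t > self.timeout]
        for k in stale:
            del self.locks[k]

    def acquire(self, client_id: str, record_key: str) -> bool:
        """Grant the lock if free or stale; refuse if a live client holds it."""
        self._expire_stale()
        holder = self.locks.get(record_key)
        if holder is not None and holder[0] != client_id:
            return False
        self.locks[record_key] = (client_id, self.clock())
        return True

    def release(self, client_id: str, record_key: str) -> bool:
        """Release only a lock this client actually holds."""
        holder = self.locks.get(record_key)
        if holder is not None and holder[0] == client_id:
            del self.locks[record_key]
            return True
        return False
```

The timeout value is the policy decision Richard is pointing at: too short and a slow client loses its lock mid-update; too long and a crashed workstation blocks everyone else, which is why this behaviour must be verified in the documentation rather than assumed.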