From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:k9m1r599bc1s32gn1va7q42luv1huqp0mv(a)4ax.com...
> See below...
>
> On Mon, 29 Mar 2010 10:22:59 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>
>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>message news:4ftvq5l0qdfmrfmd4a351cc0lt98er8p56(a)4ax.com...
>>> See below...
>>> On Fri, 26 Mar 2010 09:55:54 -0500, "Peter Olcott"
>>> <NoSpam(a)OCR4Screen.com> wrote:
>>>
>>>>
>>>>"Oliver Regenfelder" <oliver.regenfelder(a)gmx.at> wrote
>>>>in
>>>>message
>>>>news:e148c$4bac7685$547743c7$23272(a)news.inode.at...
>>>>> Hello,
>>>>>
>>>>> Peter Olcott wrote:
>>>>>> I don't know. It does not yet seem worth the learning
>>>>>> curve cost. The process is intended to be always
>>>>>> running
>>>>>> and loaded with data.
>>>>>
>>>>> I would say using memory mapped files with e.g. boost
>>>>> is
>>>>> not
>>>>> that steep a learning curve.
>>>>>
>>>>> Best regards,
>>>>>
>>>>> Oliver
>>>>
>>>>If we are talking on the order of one day to become an
>>>>expert on this, and it substantially reduces my data
>>>>load
>>>>times, it may be worth looking into. I am still
>>>>convinced
>>>>that it is totally useless for optimizing page faults
>>>>for
>>>>my
>>>>app because I am still totally convinced that preventing
>>>>page
>>>>faults is a far better idea than making them fast.
>>>>Locking
>>>>pages into memory will be my approach, or whatever the
>>>>precise terminology is.
>>> ***
> And your ***opinion*** about what is going to happen to
>>> your page fault budget is
>>> meaningless noise because you have NO DATA to tell you
>>> ANYTHING! You have ASSUMED that a
>>> MMF is going to "increase" your page faults, with NO
>>> EVIDENCE to support this ridiculous
>>
>>I did not say anything at all like this. Here is what I
>>said:
>>(1) I need zero page faults
> ****
> Which already says you are clueless. What you REALLY said
> was "I need to respond within
> 500ms" and somehow you have converted this requirement to
> "I need zero page faults". And
> you now have added "I must process page faults in order to
> use a transacted database" Huh?
> [Did you miss the part where people explained that the
> disk file system is based entirely
> on paging?]
>
>>(2) MMF does not provide zero page faults
> ****
> In this case, I have to say it: You are STUPID! I have
> explained SEVERAL TIMES why a MMF
> does NOT induce additional page faults,

It is not that I am stupid; it is that for some reason (I am
thinking intentionally) you fail to get what I am saying.
I need something that PREVENTS page faults; MMF does not
do that, VirtualLock() does.
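
To illustrate what I mean by PREVENTS, here is a minimal, untested
sketch; the sizes and the names are only placeholders:

    #include <windows.h>

    // Pin a buffer so that touching it can never page-fault.
    bool LockDataResident(SIZE_T dataSize, void** outData)
    {
        // Raise the working-set quota first, or VirtualLock() will
        // fail; the headroom figures here are illustrative only.
        if (!SetProcessWorkingSetSize(GetCurrentProcess(),
                                      dataSize + 16 * 1024 * 1024,
                                      dataSize + 32 * 1024 * 1024))
            return false;

        void* p = VirtualAlloc(NULL, dataSize,
                               MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        if (p == NULL)
            return false;

        // Once VirtualLock() succeeds, these pages stay resident.
        if (!VirtualLock(p, dataSize))
        {
            VirtualFree(p, 0, MEM_RELEASE);
            return false;
        }
        *outData = p;
        return true;
    }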

A thing can ONLY be said to PREVENT page faults if it makes
page faults impossible, by whatever means.

It is as if I am saying I need some medicine to save my life,
and you are saying here is Billy Bob, he is exactly what you
need, because Billy Bob does not kill people.

Everyone else here (whether they admit it or not) can also
see your communication errors. I don't see how anyone as
obviously profoundly brilliant as you could be making this
degree of communication error other than intentionally.

Notice that I am not resorting to ad hominem attacks.

> but you have failed to comprehend ANYTHING Hector
> and I have been telling you! I have even tried to explain
> how it can REDUCE your startup
> cost! But no, you REFUSE to pay attention to ANYTHING we
> have said.
>
> This is not an ad hominem attack. This is an evaluation
> based upon a long set of
> interactions in which we tell you "A" and you tell us "not
> A". And we say "A" and you
> insist "not A". And we say "you are wrong, it is A" and
> point you at references, and you
> apparently do not read them, and you say "not A". There
> are very few characterizations of
> a person who holds such a belief in the case where there
> is overwhelming evidence. One is
> "religious fanatic" and the other is "stupid". So as long
> as you continue to make stupid
> statements that conflict with reality, you will continue
> to demonstrate that you are not
> qualified to work in this profession. I'm tired of
> telling you that you are out of
> contact with reality. You apparently inhabit a reality
> which is NOT AT ALL the reality
> the rest of us live and work in.
> ****
>>(3) Locking memory does provide zero page faults
> ****
> Locking memory solves the zero page faults problem, and
> introduces a whole whopping lot of
> new problems, and does NOT change the PayPal or
> committed-transactions problems.
> ****
>>(4) Therefore I need locking memory and not MMF
> ***
> I repeat: YOU ARE STUPID! NOPLACE DID ANYONE *EVER* TELL
> YOU THAT MMF INDUCED ADDITIONAL
> PAGE FAULTS!!!! In fact, we tried to tell that this is
> NOT true! This is a fiction you
> have invented, based on your own flawed understanding of
> reality! It isn't that we
> haven't tried to educate you! A refusal to accept facts
> as stated and insist that your
> fantasy is reality is indicative of a serious loss of
> contact with reality. So the other
> option is that you are in need of serious psychological
> care because you are denying
> reality. Perhaps you are not stupid, just in need of
> professional counseling by someone
> whose profession is not computing but the human mind.
> Medication might help. Those
> little voices that keep telling you "Memory Mapped Files
> induce uncontrolled page faults"
> may be the problem. Hector and I do not hear them, and I
> know that I am not taking
> psychoactive drugs to suppress the little voices that
> whisper "Memory Mapped Files induce
> uncontrolled page faults". I suspect Hector isn't,
> either. I suspect that we both
> understand how the operating system works.
>
> And where, in all the reading you have done, is there any
> evidence that VirtualLock will
> not apply to a block of memory that is mapped to a file?
> Duh! In fact, it DOES apply.
> But had you actually spent any time READING about how any
> of this works, you would have
> UNDERSTOOD this obvious fact! It isn't that we didn't try
> to explain it!
> joe
>
> *****
>>
>>> position. At the very worst, you will get THE SAME
>>> NUMBER
>>> of page faults, and in the best
>>> case you will have FEWER page faults. But go ahead,
>>> assume your fantasy is correct, and
>>> lose out on optimizations you could have. You are
>>> fixated
>>> on erroneous premises, which
>>> the rest of us know are erroneous, and you refuse to
>>> learn
>>> anything that might prove your
>>> fantasy is flawed.
>>> joe
>>> ****
>>>>
>>> Joseph M. Newcomer [MVP]
>>> email: newcomer(a)flounder.com
>>> Web: http://www.flounder.com
>>> MVP Tips: http://www.flounder.com/mvp_tips.htm
>>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm


From: Hector Santos on
Peter Olcott wrote:

>> In this case, I have to say it: You are STUPID! I have
>> explained SEVERAL TIMES why a MMF
>> does NOT induce additional page faults,
>
> It is not that I am stupid; it is that for some reason (I am
> thinking intentionally) you fail to get what I am saying.
> I need something that PREVENTS page faults; MMF does not
> do that, VirtualLock() does.
>
> A thing can ONLY be said to PREVENT page faults if it makes
> page faults impossible, by whatever means.
>
> It is as if I am saying I need some medicine to save my life,
> and you are saying here is Billy Bob, he is exactly what you
> need, because Billy Bob does not kill people.
>
> Everyone else here (whether they admit it or not) can also
> see your communication errors. I don't see how anyone as
> obviously profoundly brilliant as you could be making this
> degree of communication error other than intentionally.


HA! Wow!

> Notice that I am not resorting to ad hominem attacks.

Well, maybe you should so that people can finally ignore you.

The only possible reason people continue is that you seem like a lost
soul who does not understand, and that maybe we can explain why your
thinking has flaws. I think we make the erroneous presumption that you
have some level of Windows programming ability, and that you can
explore this for yourself. It's clear that you can't even do that.

If you want to lock your pages, whether that is a good idea or not,
your only options are to use:

- VirtualLock(), which limits you to the 2GB user-mode maximum
- AWE (Address Windowing Extensions).

The SYSTEM MMF is a natural part of the system; you can't avoid using
it. The difference is what PART of it is in memory, locked or otherwise,
and it does this so fast that applications with large data needs can run
very well.

A USER MMF is one that you create programmatically, and that is what
has been suggested as something to explore for your read-only meta data.
You can also specify the mapping view sizes that will be kept in memory.
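
For example, a bare-bones user MMF over an existing data file looks
something like this (the file name is a placeholder and error checks
are omitted; it is only to show the calls involved):

    #include <windows.h>

    // Open the data file and create a read-only mapping over it.
    HANDLE hFile = CreateFileA("ocr.dat", GENERIC_READ, FILE_SHARE_READ,
                               NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL,
                               NULL);
    HANDLE hMap  = CreateFileMappingA(hFile, NULL, PAGE_READONLY, 0, 0,
                                      NULL);

    // Map the whole file (a nonzero last argument maps a smaller view).
    const BYTE* data =
        (const BYTE*)MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0);

    // 'data' is now addressable like ordinary memory; the OS pages it
    // in from the file on demand, and the pages can be shared by every
    // process that maps the same file.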

Finally, you have two designs here:

- Multiple threads sharing the process's shared read-only memory
- Multiple processes sharing a shared MMF DLL

In other words, that is where you will see good scalability: use one
reference to your READ ONLY meta data. Both can be made to work.

It's very simple: if you have an 8GB quad computer, not all of it is
yours to use. Take away at least 2GB for system use, leaving you with
6GB. If you wish to theoretically lock memory with no paging and no disk
use, you are limited to 4 processes of 1.5GB each.

So naturally, if you want more than 4 processes or threads, you have
to single-source the READ ONLY data into one 1.5GB block.

And that's your minimum, because you said it can go as high as 5GB or
more. You have no choice but to single-source it; otherwise you have
to rely on the OS paging services that it provides automatically for you.

--
HLS
From: Joseph M. Newcomer on
I will trust your views on this, because I've never had to implement any form of secure
transaction on the Internet. It was all handled by code outside what I was doing.
joe

On Mon, 29 Mar 2010 15:37:13 -0400, Hector Santos <sant9442(a)nospam.gmail.com> wrote:

>Joseph M. Newcomer wrote:
>
>>> Although it may be temporary ignorance on my part that
>>> suggested such a thing, I was thinking that it might be
>>> simpler to do it this way because every client request will
>>> be associated with a financial transaction. Each financial
>>> transaction depends upon its corresponding client request
>>> and each client request depends upon its corresponding
>>> financial transaction. With such mutual dependency it only
>>> seemed natural for the underlying representation to be
>>> singular.
>> ****
>> There are several possible approaches here, with different tradeoffs.
>> (a) get PayPal acknowledgement before starting the transaction
>> (b) Because the amount is so small, extend credit, and do the PayPal processing "offline"
>> out of the main processing thread; in fact, don't even request the PayPal debiting until
>> the transaction has completed, and if it is refused, put the PayPal processing FIRST for
>> their next request (thus penalizing those who have had a refused transaction); you might
>> lose a dime here and there, but you have high performance for everyone other than those
>> who haven't cleared.
>> (c) if the transaction fails, and you have already debited the account, have a background
>> process credit the account for the failed transaction.
>>
>> You are confusing IPC with a robustness mechanism. IPC is purely and simply a transport
>> mechanism; anything about robustness has to be implemented external to the IPC.
>> joe
>
>Joe, why is his application any different or special than any other
>client/server framework?
>
>He stated:
>
> 100 ms RTPT (Response Time per Transaction).
>
>That is a 1 thread/process rate. That means within 1 sec, 1 thread
>can handle 10 transactions per second.
>
>Since he also stated:
>
> 100 TPS (Transactions per Sec)
>
>That means he will need at least 10 threads in his worker pool,
>period, to handle the 100 TPS at 100 ms RTPT.
>
>His HTTP posted input REQUEST data will have at a minimum:
>
> - HTTP or COOKIE based authentication data, UserAuthData:
> - MIME based form fields meta data information, UserMetaData
> - MIME based form upload file data, UserFileData
>
>So when an HTTP POST arrives, he needs a "TRequestMetaData" structure
>with the above plus session-related state information. He can start
>with an enum for the request state:
>
>enum TRequestState {
> rsNone = 0, // just to make it understood
> rsPosted,
> rsProcessing,
> rsProcessingComplete,
> rsSendingResult,
> rsSendingComplete,
> rsSuccess
>};
>
>Altogether, TRequestMetaData structure will contain:
>
>struct TRequestMetaData {
> TUserAuthData UserAuthData;
> TUserMetaData UserMetaData;
> TUserFileData UserFileData;
> TUserRespData UserRespData;
> FILETIME PostTime;
> FILETIME ProcessTime;
> FILETIME CompleteTime;
> TRequestState RequestState;
>};
>
>As long as he keeps this a fixed typedef structure, he won't have to
>worry about class serialization and OLE/VARIANT. However, he can use
>a CRecordset with a data exchange handler for an ODBC interface.
>
>Either way, the fundamental state machine needs to be established.
>TRequestMetaData needs to be immediately flushed to disk, using whatever
>storage method he wishes to use, at each update point:
>
> 1 - Request.Add()
>
> Initialize new TRequestMetaData record
> set RequestState = rsPosted
> set some HASH for the HTTP data request
>
> RESTART CHECK
> - Lookup by HASH to see if request
> already exists. Go to the proper
> state accordingly.
>
> For a new request, append to the data file/base
> Get new row index
>
> 2 - Request.StartProcessing()
>
> set RequestState = rsProcessing
> set initialize other processing information
> Update Record at index location
>
> call the DFA/OCR processor (in threaded fashion)
> wait for completion (the thread blocks efficiently)
>
> set RequestState = rsProcessingComplete
> set other processing complete information
> Update Record at index location
>
> 3 - Request.SendResult()
>
> Prepare HTTP response
> set RequestState = rsSendingResult
> set UserRespData = HTTP response
> Update Record at index location
>
> Send HTTP result; it is blocking, and the
> only reason for failure is that the user disappeared
>
> set RequestState = rsSendingComplete
> set other sending complete information
> Update Record at index location
>
> 4 - Request.Complete()
>
> set RequestState = rsSuccess
> set other processing complete information
> Update Record at index location
>
>The above is the basic idea. Of course there are parts that could be
>folded or reduced or done better.
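>
>For illustration only (the names UpdateRecord and RunOcrJob are made
>up here, and the persistence call is just a stub), the skeleton of
>step 2 might look like:
>
>void StartProcessing(TRequestMetaData &req, long rowIndex)
>{
>    req.RequestState = rsProcessing;
>    GetSystemTimeAsFileTime(&req.ProcessTime);
>    UpdateRecord(rowIndex, req);     // flush the state change to disk
>
>    RunOcrJob(req);                  // the DFA/OCR processor, blocking
>
>    req.RequestState = rsProcessingComplete;
>    UpdateRecord(rowIndex, req);     // flush again before replying
>}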
>
>But as Peter D. told Peter O, now he doesn't have to worry much
>about faults at any of the above states. Each request state will be
>known and he can restart or rehandle accordingly. And as you and I
>told him, he has to negate his idea about no paging, no caching, not
>because that's bad, but because it's insignificant.
>
>Anyway, the only thing really different from any other common
>application framework like this is step #2 and the plug-and-play
>DFA/OCR processor.
>
>Just consider the odds are good the BUGS and FAULTS will be at this
>point. :)
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Joseph M. Newcomer on
See below...
On Mon, 29 Mar 2010 14:31:30 -0400, Hector Santos <sant9442(a)nospam.gmail.com> wrote:

>
>Kiddie stuff. :)
****
Yep. Pretty obvious. But "obvious" only to someone who expends more than a few
milliseconds thinking about system architectures. So I had to point it out.
joe
****
>
>Joseph M. Newcomer wrote:
>
>> Actually, the correct mechanism is
>>
>> (a) embedded server: handle WM_POWERBROADCAST, gracefully shut down
>> (b) external server: handle WM_POWERBROADCAST, use IPC mechanism to inform child processes
>> to gracefully shut down.
>>
>> Pretty simple.
>> joe
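>>
>> A rough sketch of (a), in a plain window procedure (BeginGracefulShutdown
>> is a hypothetical helper that flushes state and stops accepting work):
>>
>>     case WM_POWERBROADCAST:
>>         if (wParam == PBT_APMSUSPEND || wParam == PBT_APMQUERYSUSPEND)
>>             BeginGracefulShutdown();
>>         return TRUE;   // accept the suspend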
>>
>> On Mon, 29 Mar 2010 01:14:20 -0400, Hector Santos <sant9442(a)nospam.gmail.com> wrote:
>>
>>> Joseph M. Newcomer wrote:
>>>
>>>> My servers are all on UPS units and get notification if power is going to fail in the near
>>>> future; robust code handles WM_POWERBROADCAST messages.
>>>
>>> Exactly Joe, but he has to program for that! :)
>>>
>>> So unless he finds a utility that captures the signal and sends mouse
>>> strokes to close his vaporware application, it ain't going to happen.
>> Joseph M. Newcomer [MVP]
>> email: newcomer(a)flounder.com
>> Web: http://www.flounder.com
>> MVP Tips: http://www.flounder.com/mvp_tips.htm
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:0fn1r513fgotrpqe1u5puurd977jgrb240(a)4ax.com...
> See below...
> On Mon, 29 Mar 2010 10:19:07 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>I have redefined fault tolerance with the much narrower
>>scope of simply not losing any customer transactions, and
>>reporting any transactions that take too long.
> ****
> You have not defined how you plan to achieve this. As I
> explained earlier, you need a
> timeout. And the timeout will probably exceed 500ms, and
> all the effort is going to be on
> the client side. If you have to restart the app, and
> you've lost the TCP/IP connection,
> there is NO WAY to NOT "lose" the transaction!

It is only considered a transaction if the client request is
committed to disk.
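
Something like this, for example (a sketch only: the SQL text and the
table are placeholders, 'conn' is an already-open MySQL connection with
autocommit turned off):

    #include <mysql.h>

    // The request only "counts" as a transaction once this returns true.
    bool RecordRequest(MYSQL* conn, const char* insertRequestSql)
    {
        if (mysql_query(conn, insertRequestSql) != 0)  // INSERT the row
            return false;
        if (mysql_commit(conn) != 0)                   // durable only here
            return false;
        return true;
    }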

>
> Your failure to deal with the concept that "no client will
> be charged for an uncompleted
> transaction" is going to get you into trouble (seriously).
> You have not suggested a
> mechanism by which a lost transaction (e.g., due to
> catastrophic server failure) will be
> sent back to the client,

Sure I have, you just haven't gotten to it yet: email. I
haven't got any more time for this discussion. I am
convinced that I will get it right. I will get back to you
on this when I am pretty sure that I have it right, and you
can double-check my final design.

> so there is no recovery mechanism. To do recovery after a
> fault
> (one of the essential characteristics of fault tolerance)
> you have to posit a recovery
> mechanism. You have not done this.
>
> Yes, the use of MySQL, or some other transacted file
> mechanism, to record the incoming
> transaction is essential, but you need to justify why the
> page faults this will induce
> (greater than 0 by some considerable amount) will be
> acceptable given your (rather silly)
> "no page faults are acceptable" criterion. And you need
> to posit a "rollback" mechanism
> that handles crediting properly (oh, the best way would be
> a background process that
> handles the charging and credits, but this can interfere
> with the performance issues you
> hold so dear). You have not posited a way to inform the
> client of the failure (other than
> timeout, which will probably exceed 500ms) nor a mechanism
> to re-establish the transaction
> (which requires the client side re-submitting it! You
> can't reconnect to the client!
> TCP/IP doesn't allow this as a concept!) Instead, you
> just hope there is a zero-cost
> handwave that is going to magically solve all this. To do
> this, you need a deep
> understanding of the underlying mechanisms. Which you
> persist in not acquiring, because
> you want everything boiled down to its "essence". OK,
> here's the essence of fault
> tolerance: IT IS HARD. AND YOU HAVE TO UNDERSTAND HOW
> EVERY COMPONENT PLAYS A PART IN
> IMPLEMENTING IT. And to understand the components, you
> have to have a DEEP
> understanding of the underlying mechanisms!
>
> Here;s the simplest approach to fault tolerance:
> (a) screw it. Abort the transaction, put the component
> (app, system, subroutine) into an
> initial state, and start the next transaction
> (b) having aborted the transaction, let someone else
> discover this and deal with whatever
> the recovery plan is
>
> I've built several fault-tolerant systems using this
> methodology. Let me tell you,
> exceptions are your new best friend. Transacted file
> systems are essential if there is a
> system terminationl the recovery on system failure
> requires persistent storage (in one
> system, I added new APIs to the operating system that kept
> persistent state in the kernel;
> if the kernel crashed, nobody cared that my app had
> failed, so this was a good repository)
> joe
>
>
>>> ****
>>> Unless it fails...you just said, you are going to have
>>> to rebuild. So you move the response
>>
>>Maybe include an MD5 hash in the process to verify whether
>>or not the image needs to be rebuilt.
> ****
> Ohhh, what an insight! If the data is bad, why do you
> think "rebuilding" the same data is
> going to make the problem go away? Note that parity
> checking will generally detect data
> corruption errors. And, if you have been reading at all,
> you would know that you could
> use VirtualAlloc to change your pages to "read-only" so
> they could not accidentally
> change, so only a hardware failure (which will probably be
> detected) will corrupt the
> data; otherwise, the errors will be intrinsic to the data,
> so rebuilding will reintroduce
> the error (I solved this by erasing my heap and rebuilding
> it from scratch; not an easy
> thing to do unless you "own" the heap allocation code,
> which I did)
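>
> (A two-line sketch of that idea: the call that changes the protection
> on pages you already own is VirtualProtect, e.g.
>
>     DWORD oldProt;
>     VirtualProtect(data, dataSize, PAGE_READONLY, &oldProt);
>
> where 'data'/'dataSize' stand for the already-built block; after this,
> a stray write raises an access violation instead of silently
> corrupting the data.)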
>>
>>What are the causes of a corrupted image?
>>(1) Software faults
>>(2) Hardware faults
>>
>>What can I do about (1)? Write the best code that I can
>>and
>>make sure that the OS is a very stable release.
>>What else can I do about (1)? I don't know; what do you
>>know, Joe?
> ****
> (1) ignore it. See the above solution; if there is a
> software fault, rebuilding the data
> will not make it go away. Log the problem so you can fix
> it, abort the transaction, and
> go on to the next transaction. Then, somehow (usually by
> some external mechanism) detect
> that the transaction has failed. DO NOT RETRY THE
> TRANSACTION in this case, because
> this will block your server, failing repeatedly on the same
> transaction and blocking all
> future transactions.
>
> Instead, I would, just before I start the processing of a
> transaction, record in the
> persistent store that this transaction is "active", make
> sure that update is committed to
> the database, and upon resumption, ignore any pending
> transactions that were active. Only
> when the transaction was completed and the response sent
> to the user would I mark it as
> "completed" and commit that update to the disk. This
> keeps you out of the
> bashing-your-head-against-a-brick-wall syndrome where your
> program fails on a transaction,
> restarts it, fails, restarts it, fails, and so on,
> indefinitely.
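>
> A sketch of that bookkeeping (the table and column names are invented
> for the example, 'conn' is an open MySQL handle with autocommit off):
>
>     mysql_query(conn, "UPDATE requests SET state='active' WHERE id=42");
>     mysql_commit(conn);   // committed BEFORE the processing starts
>
>     /* ... run the OCR job, send the HTTP response ... */
>
>     mysql_query(conn, "UPDATE requests SET state='completed' WHERE id=42");
>     mysql_commit(conn);   // committed after the reply has gone out
>
>     // On restart, rows still marked 'active' are NOT re-run; they are
>     // reported/credited instead, so one bad transaction cannot wedge
>     // the server in a crash loop.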
>
> Note that there are at least two commits on each
> transaction, perhaps three (one to enter
> it, one to say it is active, and one to say it is
> completed). While the "is completed"
> transaction doesn't count in the budget for the
> transaction itself, it counts against
> starting the next transaction, thus adding to its
> processing time. But, of course, this
> is inconsistent with the "zero page faults" goal.
> Database commits are SLOW operations
> (in fact, the person who invented the concept of
> transacted databases was working on
> eliminating the concept by figuring out how to handle
> distributed and potentially
> disconnected databases, according to a friend of mine who
> was working with him, because of
> the serious performance problems with transactions)
>
> (2) ignore it. If it happens, you have a failure. The
> process will terminate, and you
> have no control over this. So your recovery mechanism has
> to detect that the process has
> failed, and deal with what it means to recover from this.
> ****
>>
>>What can I do about (2)?
>>(a) Preventative things
>>Multiple servers each with RAID 1 arrays
> ****
> Parity errors are one hardware problem.
>
> Page-in errors, when they occur, are another
>
> Disk read failures (related to page-in errors) are a
> fundamental hardware problem.
>
> Data corruption on the disk is another
>
> RAID-1 is a half-assed attempt to deal with this. The
> simplest solution is to ignore the
> reason for hardware failures and build in a recovery
> mechanism that handles all
> catastrophic hardware failures.
>
> Note that it is up to your ISP to deal with multiple
> servers, clustering, file system
> reliability, etc., and each additional requirement you
> give adds to your cost.
> ****
>>
>>(b) Corrective things after a problem occurs
>>Besides having the service provider replace the hardware, I
>>don't know.
>>Joe, do you know of any other alternatives that are not
>>very
>>expensive?
> ****
> I just outlined them. Of course, "expensive" has
> different metrics. Expensive in
> transaction time, expensive in development time, expensive
> in hardware costs. Since I was
> always working with fixed hardware, our expenses were
> always in development costs, and we
> were willing to pay the additional transaction times to
> get the robustness. You should
> assume fixed hardware, and be prepared to spend lots of
> your time building recovery
> mechanisms, and you have to realize that there will be
> performance costs.
> ****
>>
>>> time from 500ms to 3.5 minutes without warning? This
>>> means you have failed to define what
>>> you mean by "fault tolerant". One of the requirements
>>> of
>>> fault tolerance is knowing what
>>> "tolerance" means.
>>
>>The goal of 500 ms response time is for a narrow subset of
>>customers that will probably have their own dedicated
>>server. I will do comprehensive stress testing before I go
>>into production. I will do this by generating loads at the
>>server's threshold on a 1 gigabit intranet.
> ****
> Actually, you are making an assumption here, that they
> will have a server on their site.
> This is inconsistent with your security goals (you seem to
> have thought that "opening the
> box" was a relevant concept, but send me a server, and
> without any attempt to open it, I
> will very shortly have a copy of your program on my
> machine. This is because I own a
> number of "administrator tools" that let me copy files,
> reset passwords, etc. just by
> booting from a floppy, CD-ROM or USB device; and if you
> send me a computer sealed in a
> block of plastic or concrete (maybe with ventilation
> holes), I can use any number of
> well-known privilege escalation techniques to gain control
> of the machine over the network
> connection). Security? I See No Security Here! Welcome
> to the Real World. So once that
> machine leaves your hands, you can assume that it will be
> compromised. Even if you keep
> it in your building, it can be compromised.
>>
>>I do not ever envision providing a system that can produce
>>correct results even with defective hardware.
> ****
> Define "correct". If there is defective hardware, either
> (a) it is self-diagnosing
> (parity errors, etc.) and therefore results in an error
> condition that aborts the
> transaction or (b) the errors are undetected, in which
> case you will produce *a* result
> but will have no idea if it is correct or not!
> ****
>>
>>>
>>> I suppose my experience building fault-tolerant systems
>>> is
>>> another of those annoying
>>> biases I have.
>>> joe
>>> ****
>>
>>Initially fault tolerance will only be to not lose any
>>customer transactions, and to report any transactions that
>>take too long.
> *****
> Oh, now "fault tolerance" means "a transaction that takes
> too long". You keep changing
> the definition every time you give it.
>
> Define "lose". You are still riding far behind the
> specification curve. Requirements are
> not specifications. Before you start coding, you have to
> at least move from the
> requirements to a specification of how you are going to
> achieve them. ANd those
> specifications have to have realizable implementations.
> Handwaves don't work. You have
> to know how to write actual code. Right now, you are at
> the handwave-requirements stage. You
> have to go to really precise requirements, to really
> precise specifications of how you
> will implement those requirements, to actual code. You
> are a VERY long way from even a
> decent requirement description. At least every time you
> state them, they are slightly
> different.
> joe
> ****
>>
>>>>
>>> Joseph M. Newcomer [MVP]
>>> email: newcomer(a)flounder.com
>>> Web: http://www.flounder.com
>>> MVP Tips: http://www.flounder.com/mvp_tips.htm
>>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm