From: tmoran on 5 Jun 2010 02:13

>> The technical problem is that mechanical component faults have a
>> stochastic nature. I.e. you have a certain probability of fault (due
>> to physical processes involved in production and function of the given
>> component). On the contrary, a software fault is not stochastic,
>> neither in its production nor at run-time. A given bug is either here
>> or not.
>
> Whether the bug is encountered is sometimes stochastic. But generally
> you are correct.

There is a set of bugs in a given piece of software. On any given day, there's a certain probability that you will stumble across one. When you remove a bug you remove its probability component, so the total probability of going a day without a bug becomes larger (assuming any newly introduced bug is less likely than the removed one). Bugs that are more likely to bite will be found and removed sooner, so the rate of finding bugs will tend to drop. This is all describable with simple probability and statistics.

If you want to claim it's "not stochastic" then I would claim neither is a physical fault: the pressure applied today on the weak joint either is or is not sufficient to cause a fracture, and metal fatigue weakening is a straightforward physical process.
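[Editor's sketch, not part of the original thread: tmoran's argument can be put in numbers. The per-bug daily trigger probabilities below are invented, and the bugs are assumed independent.]

```python
# Model each latent bug as having an independent probability of being
# triggered on any given day. Then P(bug-free day) is the product of
# the per-bug survival probabilities, and removing any bug raises it.

def p_bug_free_day(trigger_probs):
    """P(no bug is encountered today) = product of (1 - p_i)."""
    p = 1.0
    for p_i in trigger_probs:
        p *= 1.0 - p_i
    return p

bugs = [0.10, 0.05, 0.01]        # hypothetical daily trigger probabilities
before = p_bug_free_day(bugs)    # all three bugs still present
after = p_bug_free_day(bugs[1:]) # the likeliest bug (0.10) found and fixed

# Removing a bug strictly increases the chance of a bug-free day,
# provided no new bug with a higher trigger probability was introduced.
assert after > before
```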
From: Martin Krischik on 4 Jun 2010 17:09

On 24.05.2010 at 11:31, Dmitry A. Kazakov <mailbox(a)dmitry-kazakov.de> wrote:

> Which is more shamanism than engineering.

I'd like to point out that a true shaman is supposed to make a visit to the other world as part of his initiation. He does it by nibbling some interesting mushrooms. If he returns from his visit to the other world, he is welcomed as a new member of the community of shamans. If he does not return, ah well, guess it was not his calling after all.

I wonder what would happen if we applied a similarly strict final to our CS graduates…

Martin

--
Martin Krischik
mailto://krischik(a)users.sourceforge.net
https://sourceforge.net/users/krischik
From: Martin Krischik on 4 Jun 2010 17:10

On 04.06.2010 at 21:23, Fritz Wuehler <fritz(a)spamexpire-201006.rodent.frell.theremailer.net> wrote:

> None of the bank software I have seen has ever been written in Ada, much
> less Spark.

The Swiss PostFinance uses Ada. And they are not the only ones.

Martin

--
Martin Krischik
mailto://krischik(a)users.sourceforge.net
https://sourceforge.net/users/krischik
From: Georg Bauhaus on 5 Jun 2010 03:47

On 6/4/10 9:23 PM, Fritz Wuehler wrote:

> Ada is better than COBOL except in one way. It is easier to write reports
> (the bulk of financial processing) and define decimal (money) fields in
> COBOL than in Ada. It *could* have been used in financial processing, but
> COBOL had a head start of two and a half decades.

How do Interfaces.COBOL and Ada.Text_IO.Editing fit in here?
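[Editor's sketch, not part of the original thread: the point of COBOL-style decimal money fields, which Ada also offers via decimal fixed-point types and Ada.Text_IO.Editing picture strings, is exact decimal arithmetic rather than binary floating point. The amounts below are invented; Python's decimal module merely stands in for the concept.]

```python
from decimal import Decimal, ROUND_HALF_UP

# Binary floats cannot represent most decimal fractions exactly:
float_total = 0.1 + 0.2       # 0.30000000000000004, not 0.30

# Exact decimal arithmetic with explicit rounding, the way a
# COBOL money field (e.g. PIC 9(7)V99) behaves:
price = Decimal("19.99")
qty = 3
total = (price * qty).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(total)  # 59.97
```

The quantize step pins the result to two decimal places with a stated rounding rule, which is exactly what financial reporting code needs and what picture-string editing formats for output.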
From: Dmitry A. Kazakov on 5 Jun 2010 04:00
On Sat, 5 Jun 2010 06:13:59 +0000 (UTC), tmoran(a)acm.org wrote:

>>> The technical problem is that mechanical component faults have a
>>> stochastic nature. I.e. you have a certain probability of fault (due
>>> to physical processes involved in production and function of the given
>>> component). On the contrary, a software fault is not stochastic,
>>> neither in its production nor at run-time. A given bug is either here
>>> or not.
>>
>> Whether the bug is encountered is sometimes stochastic. But generally
>> you are correct.
>
> There is a set of bugs in a given piece of software. On any given
> day, there's a certain probability that you will stumble across one.
> When you remove a bug you remove its probability component, so the
> total probability of going a day without a bug becomes larger
> (assuming any newly introduced bug is less likely than the removed one).
> Bugs that are more likely to bite will be found and removed sooner,
> so the rate of finding bugs will tend to drop. This is all describable
> with simple probability and statistics.

Yes, this is what I tried to describe in terms of program states (encountering a bug = the program transits to an "error state"). The problem with this is that it presumes the states are random, which they are not, because [most] programs are deterministic. Any randomness that might exist is derived from the inputs. I.e. it is the program's usage which makes the *same* program more or less reliable. According to this approach the most reliable car is the one you do not drive. [You booted reliable Windows? That's your fault! (:-))]

Another problem which worries me is program changes. Suppose I modify the program; the result is *another* program. Whose "reliability" am I then talking about? Well, the two share some code, but certainly we cannot consider source lines random. E.g. suppose I modify 0.01% of the source of a 90% "reliable" program. I can tell nothing about whether the result is 90% reliable +/- factor * 0.01%.

This model just does not work.

> If you want to claim it's
> "not stochastic" then I would claim neither is a physical fault - the
> pressure applied today on the weak joint either is or is not sufficient
> to cause a fracture, and metal fatigue weakening is a straightforward
> physical process.

Yes, one could say that physical components, at some macro level, have the nature of a discrete deterministic system, i.e. function like programs do. But the underlying processes, and the process of "discretization" (pressure > X), are stochastic.

Well, maybe the notion of reliability cannot be applied to complex physical systems? But on the other hand, the more complex a system is, the more random its behavior appears to the observer. Looks like a paradox...

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
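[Editor's sketch, not part of the original thread: Dmitry's point that the randomness lives in the inputs, not in the deterministic program, can be illustrated directly. The buggy function and the two usage profiles below are invented for the example.]

```python
import random

def buggy_divide(a, b):
    # Deterministic function with a deterministic bug:
    # it fails if and only if b == 0. The bug "is either here or not".
    return a / b

def observed_failure_rate(input_gen, trials=10_000):
    """Empirical failure rate of the *same* program under a given usage."""
    failures = 0
    for _ in range(trials):
        a, b = input_gen()
        try:
            buggy_divide(a, b)
        except ZeroDivisionError:
            failures += 1
    return failures / trials

rng = random.Random(42)
gentle_usage = lambda: (rng.randint(0, 9), rng.randint(1, 9))  # never hits b == 0
harsh_usage = lambda: (rng.randint(0, 9), rng.randint(0, 9))   # b == 0 about 10% of the time

print(observed_failure_rate(gentle_usage))  # 0.0: same program, "reliable" usage
print(observed_failure_rate(harsh_usage))   # roughly 0.1: same program, different inputs
```

The program never changes between the two measurements; only the input distribution does, yet the measured "reliability" differs, which is exactly the most-reliable-car-is-the-one-you-do-not-drive objection.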