From: tmoran on 5 Jun 2010 13:59

> is that it presumes that states are random, which are not, because [most
> of] programs are deterministic. Any randomness which might exist is derived
> from the inputs. I.e. it is the program usage, which makes the *same*
> program less or more reliable. According to this approach the most reliable
> car is one you do not drive.

Mechanical devices also fail due to unfortunate, unanticipated combinations of random inputs. Rockets don't fail in the middle of the night sitting in the assembly building. They fail when, for instance, the air temperature is very low and the rocket is on full thrust, and with those inputs the O-ring can't sufficiently do its job. You don't say "O-rings are or are not reliable" - you say "under such and such conditions O-rings are 99.9999% likely to prevent dangerous amounts of leakage. Under such and such other inputs, that drops to 99.9%, or 90%, or 10%."

> ... E.g. Let I modify 0,01% of the source of 90%
> "reliable" program. I can tell nothing about whether the result is 90%
> reliable +/- factor * 0.01%. This model just does not work.

The word "model" is key. It is meaningless to talk about whether something, software or hardware, *is* stochastic - but one can observe whether a stochastic *model* of the system is helpful or not.

As to program changes, one talks about how confident you are that the program will not hit a bug today, as compared to yesterday before you made the change. Your confidence will depend not just on the fraction of source code changed, but also on careful consideration of the nature and expected effects of the change, and observations while testing the changed version.
From: Robert A Duff on 5 Jun 2010 14:50

"Nasser M. Abbasi" <nma(a)12000.org> writes:

> I meant complex type in ada is not an elementary type.

Complex cannot be an elementary type, because it has components (real and imaginary parts). That's what "elementary" means in Ada -- no components.

> I just meant it seems "easier" to use complex numbers in FORTRAN than
> Ada, just because one does not to do all this instantiating every
> where.

You don't have to instantiate everywhere. If you're willing to stick to the predefined floating point types (Float, Long_Float, etc.), then you can use Ada.Numerics.Elementary_Functions, Ada.Numerics.Long_Elementary_Functions, etc. And of course if you're NOT willing to stick to the predefined floating point types, then you won't be using Fortran anyway, so there's no comparison.

Is there anything else? I mean reasons why complex in Fortran is "easier" than in Ada?

- Bob
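[Editor's note: Bob's point can be made concrete. For the predefined type Float, Annex G already supplies pre-instantiated packages (Ada.Numerics.Complex_Types and Ada.Numerics.Complex_Elementary_Functions), so complex arithmetic needs no instantiation at all. A minimal sketch; the procedure name is arbitrary:]

```ada
with Ada.Text_IO;                    use Ada.Text_IO;
with Ada.Numerics.Complex_Types;     use Ada.Numerics.Complex_Types;
with Ada.Numerics.Complex_Elementary_Functions;
use  Ada.Numerics.Complex_Elementary_Functions;

procedure Complex_Demo is
   Z : Complex := (Re => 3.0, Im => 4.0);
begin
   --  The usual arithmetic operators are predefined for Complex.
   Z := Z * Z + (1.0, 0.0);

   --  Re and Im are selector functions; "abs" yields the modulus.
   Put_Line ("Re (Z)  =" & Float'Image (Re (Z)));
   Put_Line ("abs (Z) =" & Float'Image (abs Z));

   --  Elementary functions such as Exp work on Complex directly.
   Put_Line ("abs (exp (i*pi)) ="
             & Float'Image (abs Exp ((0.0, Ada.Numerics.Pi))));
end Complex_Demo;
```

Only when a user-defined floating point type is involved does Generic_Complex_Types need an explicit instantiation, which is exactly the situation Bob notes has no Fortran counterpart.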
From: Dmitry A. Kazakov on 5 Jun 2010 15:34

On Sat, 05 Jun 2010 09:02:36 -0700, Nasser M. Abbasi wrote:

> On 6/5/2010 5:59 AM, Dmitry A. Kazakov wrote:
>
>> Sorry guys, maybe I missed the point, but Ada does have complex types. See
>> ARM G.1.
>
> I meant complex type in ada is not an elementary type. as in

BTW, as the name suggests, "complex" is not "elementary"! (:-))

> http://www.adaic.org/standards/05rm/html/RM-3-2.html
>
> "The elementary types are the scalar types (discrete and real) and the
> access types (whose values provide access to objects or subprograms).
> Discrete types are either integer types or are defined by enumeration of
> their values (enumeration types). Real types are either floating point
> types or fixed point types."

Well, in fact I don't know why the ARM defines that, because beyond the name there is nothing that could distinguish them from other types.

> and
>
> http://en.wikibooks.org/wiki/Ada_Programming/Type_System
>
> I copied the list from above:
>
> "Here is a broad overview of each category of types; please follow the
> links for detailed explanations. Inside parenthesis there are
> equivalences in C and Pascal for readers familiar with those languages."
>
> Signed Integers (int, INTEGER)
> Unsigned Integers (unsigned, CARDINAL)

Unsigned: they also have wrap-around functionality.

> Enumerations (enum, char, bool, BOOLEAN)
> Floating point (float, double, REAL)
> Ordinary and Decimal Fixed Point (DECIMAL)
> Arrays ( [ ], ARRAY [ ] OF, STRING )
> Record (struct, class, RECORD OF)
> Access (*, ^, POINTER TO)
> Task & Protected (no equivalence in C or Pascal)
> Interfaces (no equivalence in C or Pascal)
>
> I do not see complex type there :)

Same as above. Some of these are classes of types, some are not. To be sure, Complex is not a type you can derive from. But Integer isn't either. Complex cannot be constrained, but that does not make much sense anyway, and records cannot be constrained either.
It is not a formal generic class of types; neither are records.

> Ofcourse, a standard generic package for complex type, I knew that.
>
> In FORTRAN:
>
> http://www.fortran.com/F77_std/rjcnf-4.html#sh-4
>
> "4.1 Data Types
> The six types of data are:
>
> 1. Integer
> 2. Real
> 3. Double precision
> 4. Complex
> 5. Logical
> 6. Character
> "
>
> So, complex is an elementary type, like an integer is.

Maybe, but what does it mean semantically?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Dmitry A. Kazakov on 5 Jun 2010 15:59

On Sat, 5 Jun 2010 17:59:36 +0000 (UTC), tmoran(a)acm.org wrote:

>> is that it presumes that states are random, which are not, because [most
>> of] programs are deterministic. Any randomness which might exist is derived
>> from the inputs. I.e. it is the program usage, which makes the *same*
>> program less or more reliable. According to this approach the most reliable
>> car is one you do not drive.
>
> Mechanical devices also fail due to unfortunate, unanticipated,
> combinations of random inputs. Rockets don't fail in the middle of the
> night sitting in the assembly building. They fail when, for instance, the
> air temperature is very low and the rocket is on full thrust and with
> those inputs the O-ring can't sufficiently do its job. You don't say
> "O-rings are or are not reliable" - you say "under such and such
> conditions O-rings are 99.9999% likely to prevent dangerous amounts of
> leakage. Under such and such other inputs, that drops to 99.9%,
> or 90%, or 10%."

That is the difference: even if you fixed the inputs/environment, there would still be a probability of fault. A Maxwell's demon sits inside each of these things, deciding whether or not to let you go. There is no such thing in, say, integer addition. If you fixed the inputs, it would either overflow or not.

>> ... E.g. Let I modify 0,01% of the source of 90%
>> "reliable" program. I can tell nothing about whether the result is 90%
>> reliable +/- factor * 0.01%. This model just does not work.
>
> The word "model" is key. It is meaningless to talk about whether
> something, software or hardware, *is* stochastic - but one can observe
> whether a stochastic *model* of the system is helpful or not.

Hmm, I think a physicist would strongly disagree with that. AFAIK, there are no working deterministic models of quantum processes.
> As to program changes, one talks about how confident you are that the
> program will not hit a bug today, as compared to yesterday before you made
> the change.

Maybe, but:

1. Confidence has nothing to do with probability. It is an absolutely different model of uncertainty. That brings us back to square one: confidences and probabilities are incomparable.

2. Your confidence describes you; it does not describe the program. In fact, it amounts to saying: I give my word, it works. Fine, but why should anybody trust my word?

> Your confidence will depend not just on the fraction of
> source code changed, but also on careful consideration of the nature and
> expected effects of the change, and observations while testing the changed
> version.

No, it will not, because you defined it as confidence. If there is something behind it, then why call it confidence? Name that thing, and define reliability in terms of it. The problem is that there seems to be nothing there, except for the confidences of other people.

BTW, this is my concern about software certification procedures. In fact, they act quite as you suggested: they certify programmers, not the software.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: (see below) on 5 Jun 2010 16:14
On 05/06/2010 19:50, in article wcchblh9uw6.fsf(a)shell01.TheWorld.com, "Robert A Duff" <bobduff(a)shell01.TheWorld.com> wrote:

> "Nasser M. Abbasi" <nma(a)12000.org> writes:
>
>> I meant complex type in ada is not an elementary type.
>
> Complex cannot be an elementary type, because it has components
> (real and imaginary parts). That's what "elementary" means
> in Ada -- no components.
>
>> I just meant it seems "easier" to use complex numbers in FORTRAN than
>> Ada, just because one does not to do all this instantiating every
>> where.
>
> You don't have to instantiate everywhere. If you're willing to stick
> to the predefined floating point types (Float, Long_Float, etc),
> then you can use Ada.Numerics.Elementary_Functions,
> Ada.Numerics.Long_Elementary_Functions, etc.

I've never understood the objection to "all this instantiating every where". How much effort is a line or three of boilerplate code?

-- 
Bill Findlay
<surname><forename> chez blueyonder.co.uk
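[Editor's note: for the user-defined-type case, Bill's "line or three of boilerplate" is just the two instantiations below. A sketch; My_Real and the unit names are hypothetical:]

```ada
with Ada.Numerics.Generic_Complex_Types;
with Ada.Numerics.Generic_Complex_Elementary_Functions;

procedure Instantiation_Demo is
   type My_Real is digits 12;   --  hypothetical user-defined float type

   --  The two lines of "boilerplate" under discussion:
   package My_Complex_Types is
     new Ada.Numerics.Generic_Complex_Types (My_Real);
   package My_Complex_Functions is
     new Ada.Numerics.Generic_Complex_Elementary_Functions (My_Complex_Types);

   use My_Complex_Types, My_Complex_Functions;

   Z : Complex := (Re => 1.0, Im => 1.0);
begin
   --  All operators and elementary functions are now available for My_Real.
   Z := Sqrt (Z * Z + (1.0, 0.0));
end Instantiation_Demo;
```

After the instantiations, the code that actually uses Complex is no wordier than its Fortran counterpart; the cost is paid once, at the declaration.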