From: Steven Zenith on 19 Sep 2006 11:26

Charlie-Boo wrote:
> Every programming language can perform real number arithmetic to any
> given number of decimal places. Every time you refer to "most
> programming languages" you are not referring to Computer Science. The
> first step in Computer Science is to realize that all programming
> languages are equivalent, and to abstract the notion of all programming
> languages to the set of recursive functions. This is due to the work
> of Godel, Post, Church, Turing and other logicians in the 1930's.

Then you are doing logic, not computer science. Computer Science is an engineering science in that, ultimately, programs are executed by machines. Theoretical computer science deals with the theoretical aspects of that engineering.

You started this by asking why Computer Science has not been formalized. But it seems you are asking about logic - and that is more abstract, and has more than enough formalization to go around. Your target machines do not appear to reflect machines that exist in the world today, and you propose to ignore real theoretical and engineering issues such as floating point.

Formal methods in computer science and electrical engineering have been used extensively, and continue to be used (esp. in EE), in the verification that certain transformations are consistent: specification - software implementation - platform, for example. I gave a clear example of work completed in the late 1980s, by a team of which I was a member, in which the implementation of a floating point unit was formally shown, proven, to meet the specification of the IEEE standard.

In addition to verification, I know from personal experience of formal methods used to prove useful properties of programs using concurrency and communication. These include properties such as deadlock freedom (Roscoe), termination, error, failure, etc. Formal methods must include formal descriptions of these properties - I am familiar with CSP, but there are other formalisms that deal with these issues too. There are useful pragmatic first steps familiar to many engineers, encapsulated in common engineering products such as "Purify."

In a more general sense, in theoretical computer science, I do also think it is useful to consider value representation and alternatives to existing floating point implementation. But these issues are not going to assist the adoption of formal methods in today's engineering practice, and they will hamper its immediate application. I recognize that I may be introducing certain business pragmatics that prioritize my concerns, but that is in the nature of the industry and where I live (Silicon Valley).

With respect,
Steven

--
Dr. Steven Ericsson-Zenith
IASE, Sunnyvale, California
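A minimal sketch of the claim quoted at the top of the post above - that root 2 can be computed to any given number of decimal places - using only Integer arithmetic in Haskell. This is an illustration, not code from the thread; the names isqrt and sqrt2ToPlaces are mine.

    -- Integer square root by Newton's method: floor (sqrt n) for n >= 1.
    isqrt :: Integer -> Integer
    isqrt n = go n
      where
        go x = let y = (x + n `div` x) `div` 2
               in if y >= x then x else go y

    -- Decimal expansion of root 2, truncated to d places,
    -- e.g. sqrt2ToPlaces 10 == "1.4142135623".
    sqrt2ToPlaces :: Int -> String
    sqrt2ToPlaces d =
      let scaled = isqrt (2 * 10 ^ (2 * d))   -- floor (sqrt 2 * 10^d)
          s      = show scaled
      in take 1 s ++ "." ++ drop 1 s

    main :: IO ()
    main = putStrLn (sqrt2ToPlaces 50)

Note that this truncates rather than rounds the last digit; that detail does not affect the point that the precision is chosen by the caller.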
From: Steven Zenith on 19 Sep 2006 11:50

Jack Campin - bogus address wrote:
> The accuracy is determined by the caller.

In my experience, most programmers - no disrespect intended - will not appreciate the implications of specifying the degree of accuracy, and the accuracy will be arbitrarily assigned. Therefore, it is better for the library author or the platform design to determine the degree of accuracy available. That accuracy should then be in the formal specification of the platform and be used in the generation of a pragmatic analysis of the implementation of a program - or, if the requirements are strict, program verification should fail if the platform cannot execute the program as specified.

> Traditional (pre-computer) error analysis will work fine. The nice
> thing about lazy computation is that a lot of it can be deferred to
> run-time - you still need to structure expressions to avoid blow-ups,
> but the details of how much precision you need in subsidiary parts of
> a computation to guarantee a specific precision in the answer can be
> worked out on the fly.

You mean that the programmer doesn't care about precision, or that the program behaves consistently on all platforms?

> Googling "haskell numerical analysis" shows quite a few approaches to
> this. (Being able to do numerical analysis properly was one of the
> design goals of Haskell.) Your suggestion of using continued fractions
> is certainly possible, and has been done in experimental
> implementations, but I doubt you could get acceptable performance out
> of it - the period length will blow up with simple arithmetical
> expressions, and real-world constants in the computation will usually
> have unreasonably long periods to begin with.

I am familiar with Haskell; I was at Yale briefly, where much of that work was done.

> > Since when is root 2 a real number?
>
> It is in most programming language type systems. What else do you
> want it to be? (I know of an extension of Algol68 that allocated a
> type for every algebraic extension of the rationals, so there could
> have been a more refined answer in that system - you could do that in
> Haskell if you wanted, I guess.)

I hope it is clear that I am using root 2 as a straw man. I have been trying to make it clear, though, that any formal consideration needs to provide clear statements concerning such pragmatics as you observe here. I have already stated that a mathematical unit or library that uses continued fraction representation and operations may be a solution. It's not perfect, but at least it would then be as good as conventional mathematics. The reason we don't do this today is that the implementation is more complex and (probably) much slower than floating point - current implementations and conventions are entrenched. Changing precision would likely break many existing programs.

From my point of view, it remains a very real problem in computer science theory (and the foundations of mathematics) that root 2 does not terminate. But then this is a long-standing problem.

With respect,
Steven
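Since a continued fraction representation is discussed in the post above, here is a small illustrative sketch (mine, not from the thread): root 2 has the periodic expansion [1; 2, 2, 2, ...], and its convergents can be produced from a lazily consumed list of terms.

    import Data.Ratio ((%))

    -- Continued fraction terms of root 2: an infinite, lazily consumed list.
    cfSqrt2 :: [Integer]
    cfSqrt2 = 1 : repeat 2

    -- Convergents p_k/q_k via the standard recurrence
    --   p_k = a_k * p_(k-1) + p_(k-2),   q_k = a_k * q_(k-1) + q_(k-2)
    convergents :: [Integer] -> [Rational]
    convergents as = go as (1, 0) (0, 1)
      where
        go []       _        _        = []
        go (a:rest) (p1, q1) (p0, q0) =
          let p = a * p1 + p0
              q = a * q1 + q0
          in (p % q) : go rest (p, q) (p1, q1)

    -- take 5 (convergents cfSqrt2) == [1 % 1, 3 % 2, 7 % 5, 17 % 12, 41 % 29]

For root 2 the expansion is as simple as it gets; the performance concern in the quoted text is about the terms that arise once you start doing arithmetic on such representations.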
From: Charlie-Boo on 19 Sep 2006 11:54

Steven Zenith wrote:
> Charlie-Boo wrote:
> > Every programming language can perform real number arithmetic to any
> > given number of decimal places. Every time you refer to "most
> > programming languages" you are not referring to Computer Science. The
> > first step in Computer Science is to realize that all programming
> > languages are equivalent, and to abstract the notion of all programming
> > languages to the set of recursive functions. This is due to the work
> > of Godel, Post, Church, Turing and other logicians in the 1930's.
>
> Then you are doing logic, not computer science. Computer Science is an
> engineering science in that, ultimately, programs are executed by
> machines. Theoretical computer science deals with the theoretical
> aspects of that engineering.

Why am I doing logic and not Computer Science - because the founders were logicians? Sorry, but computers weren't invented until 1950. So Computer Science doesn't exist?

> You started this by asking why Computer Science has not been
> formalized. But it seems you are asking about logic - and that is more
> abstract, and has more than enough formalization to go around. Your
> target machines do not appear to reflect machines that exist in the
> world today and you propose to ignore real theoretical and engineering
> issues such as floating point.

I'm talking about Computer Science, ok? I listed 4 branches: Program Synthesis, Theory of Computation, Recursion Theory and Incompleteness in Logic. (For that last one, maybe I should call it Computing Science?)

> Formal methods in computer science and electrical engineering have been
> used extensively, and continue to be used (esp. in EE), in the
> verification that certain transformations are consistent: specification
> - software implementation - platform, for example. I gave a clear
> example of work completed in the late 1980s, by a team of which I was a
> member, in which the implementation of a floating point unit was
> formally shown, proven, to meet the specification of the IEEE standard.

We already went through this. You are talking about individual problems that were supposedly solved manually using formal means - like a mathematician proves a theorem. Computer programmers make formal deductions concerning programs all day. But they don't have a method of automating those conclusions (other than CBL, if they read Google Groups).

I am talking about a formalization not of one process, but of entire branches of Computer Science. In other words, you have a procedure for creating the theorems of that branch. Typically people use axioms and rules of inference. In fact, I have discovered using CBL that axioms and rules formalize all 4 branches of Computer Science listed above. And there is a single starting point for Computer Science, of which each branch is an instance.

Do you know of a procedure that creates the theorems of the Theory of Computation? Or of any of the other branches of Computer Science (that I listed)?

C-B

> With respect,
> Steven
> --
> Dr. Steven Ericsson-Zenith
> IASE, Sunnyvale, California
From: Steven Zenith on 19 Sep 2006 12:06

Charlie-Boo wrote:
> You have the time to write about 10 pages of claims, but not enough
> time to write 1/2 page giving the formalization of some results to
> prove your claims? I see.

I am working from memory and referring you to the references. If CSP (the basis of the Roscoe reference) is not a formalization of concurrency and communication (a branch of computer science), then please explain to me why you consider it inadequate. I'd be genuinely interested to hear your account.

> Ok, I'll do the work. . . . Hmmm . . . Gee, there is no result as you
> describe. I have done the work, and the conclusion that I reach is
> that there is no formalization of any branch of Computer Science. Do
> you somehow reach a different conclusion? How's that?

I really don't know. We may simply be miscommunicating. If you can answer my previous question, that may help me.

....

> Exactly what were you referring to (that I must have missed somehow)?
> What branch of Computer Science is it that has been formalized? What
> results have been shown formally? What are the formal expressions that
> represent them?
>
> You can end my 29-year quest in 5 minutes!

As previously stated, I am referring specifically to CSP and the work of Tony Hoare and Bill Roscoe - but I know that Robin Milner completed similar work at around the same time. I should probably mention Z also, which is now an international standard for formal specification. There is plenty of literature; I have mentioned specific results in both theoretical computer science and practice. You are suggesting that you have read the Roscoe book. What am I missing?

Your references to problems in CS - such as the halting problem - are straightforward enough. Any good formalism should be able to represent them, should they not?

Please provide a reference to CBL; I'll take a look at it.

With respect,
Steven
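Since the halting problem is mentioned just above as something any good formalism should be able to represent, here is the usual diagonal sketch in Haskell terms (my illustration, not from the thread; halts is a hypothetical procedure that cannot actually be written, which is the point).

    -- Suppose, for contradiction, a total decision procedure existed that
    -- decides whether applying a function to an argument terminates.
    halts :: (Integer -> Integer) -> Integer -> Bool
    halts = undefined   -- assumed to exist only for the sake of the argument

    -- Diagonal construction: diag n loops exactly when halts says it halts,
    -- and halts exactly when halts says it loops, so no correct total
    -- halts can exist.
    diag :: Integer -> Integer
    diag n = if halts diag n then diag n else 0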
From: Steven Zenith on 19 Sep 2006 12:23
Charlie-Boo wrote:
> ...
> Functions don't terminate. Programs terminate (or not). Functions are
> defined (or not).

This is semantics only. A defined function that has no instance is meaningless.

> > The root of 2 is not a real number,
>
> Yes it is. What do you think it is?

The square root of 2 is an irrational number.

> > so in most languages direct use of the number would formally
> > produce a type error,
>
> How can you "use the number in most languages"? How would that work,
> exactly?

Well, since the type of root 2 is an irrational type and the expression is likely of a floating point type, in strictly typed languages you would get a typing error, while in languages that are not strictly typed you would not.

> > I did then observe that even if we picked a number of decimal points,
> > either implicitly or explicitly, there was no guarantee that the value
> > could be computed on any computer
>
> No, all computers have the same computing power.

Not in practice. Also, architectures vary, giving vector machines, for example, different characteristics from non-vector machines.

> > If you choose, as you suggest, to implement root 2 in a particular way
> > I would have to ask you if that way meets the pragmatic understanding
> > of the programmer - an arbitrary selection of precision may not be
> > compatible with the programmer's expectation.
>
> It is just a fact of life that real number arithmetic can be calculated
> to any given number of decimal places, or simulated to complete
> precision by symbolically representing the expressions e.g. the square
> root of 2. It has nothing to do with the programmer's "expectations".

This does not help in practice when the program you are trying to verify will in fact execute on an IEEE floating point unit.

> > The fact that this function is a trivial example is not the point here -
> > it raises important questions.
>
> Oh really? Such as how you became so incredibly confused over such
> fundamental concepts as real numbers, functions, and the equivalence of
> programming languages?

I really don't know. I am mystified :-)

With respect,
Steven

--
Dr. Steven Ericsson-Zenith
IASE, Sunnyvale, California
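To make concrete the remark quoted above that real arithmetic can be "simulated to complete precision by symbolically representing the expressions e.g. the square root of 2", here is one minimal sketch (mine, not from the thread), in the spirit of the Algol68 extension mentioned earlier in the thread: values a + b*root 2 with rational a and b, with exact addition and multiplication and no precision ever chosen.

    -- Elements of the field Q(root 2): a + b*root 2 with rational a, b.
    data QRoot2 = QRoot2 Rational Rational
      deriving (Eq, Show)

    root2 :: QRoot2
    root2 = QRoot2 0 1

    add, mul :: QRoot2 -> QRoot2 -> QRoot2
    add (QRoot2 a b) (QRoot2 c d) = QRoot2 (a + c) (b + d)
    mul (QRoot2 a b) (QRoot2 c d) = QRoot2 (a*c + 2*b*d) (a*d + b*c)

    -- mul root2 root2 == QRoot2 2 0, i.e. exactly 2; no rounding is involved.

This covers only a single algebraic extension of the rationals; a general exact-real or symbolic library is considerably more involved, which is part of the pragmatic objection raised in this thread.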