From: Charlie-Boo on 17 Oct 2006 09:45

Peter Olcott wrote:
> <sillybanter(a)gmail.com> wrote in message news:MtsYg.9714$gx6.5379(a)trnddc05...
> > In comp.theory Peter Olcott <NoSpam(a)seescreen.com> wrote:
> >> "William Elliot" <marsh(a)hevanet.remove.com> wrote in message
> >> news:Pine.BSI.4.58.0610150722520.7240(a)vista.hevanet.com...
> >> > On Sun, 15 Oct 2006, Peter Olcott wrote:
> >> >
> >> >> The Halt program throws an "Invalid Input" exception.
> >> >> This would be analogous to the hardware exception of an attempt to
> >> >> divide by zero.
> >> >>
> >> > Error 001: Invalid input. Please try again.
> >>
> >> The point is there are not any questions too difficult for logic to
> >> correctly answer, logic itself is not incomplete as Kurt Godel proposed.
> >> There are merely some questions where the question itself is ill-formed.
> >
> > Brilliant. By defining any unsolvable problems as "ill-formed" then
> > we can say that all "well-formed" questions are solvable. No more
> > unsolvable problems!
> >
> Since the only reason that these problems are unsolvable is because they are
> ill-formed, this makes perfect sense. To see it any other way would be like me
> saying that you don't even know how to tell the time of day because of your
> inability to correctly answer the following question; What time is it blue or
> green?

Neither.

What's the prize?

C-B

> > Advancement of science by changing definitions. Gotta love it.
> >
> > --
> >
> > Steve Stringer
> > sillybanter(a)gmail.com
> >
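The divide-by-zero analogy quoted above can be made concrete with a short sketch. This is purely illustrative and not anything posted in the thread; the function name checked_divide and the choice of std::domain_error are assumptions made for the example. The point of the analogy is that the operation reports "this input poses no answerable question" instead of returning a wrong number:

    // Hypothetical checked division: instead of producing a bogus quotient,
    // reject the ill-posed question "what is n divided by zero?".
    #include <iostream>
    #include <stdexcept>

    int checked_divide(int numerator, int denominator) {
        if (denominator == 0)
            throw std::domain_error("Invalid input: division by zero");
        return numerator / denominator;
    }

    int main() {
        try {
            std::cout << checked_divide(10, 0) << '\n';
        } catch (const std::domain_error& e) {
            std::cout << e.what() << '\n';   // prints the invalid-input message
        }
    }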
From: Peter Olcott on 17 Oct 2006 10:07

"Patricia Shanahan" <pats(a)acm.org> wrote in message
news:zRXYg.10503$Y24.7729(a)newsread4.news.pas.earthlink.net...
> Peter Olcott wrote:
>> "Patricia Shanahan" <pats(a)acm.org> wrote in message
>> news:_FWYg.7998$Lv3.6750(a)newsread1.news.pas.earthlink.net...
>>> Peter Olcott wrote:
>>>> "Patricia Shanahan" <pats(a)acm.org> wrote in message
>>>> news:hHTYg.7886$Lv3.7266(a)newsread1.news.pas.earthlink.net...
>>>>> Peter Olcott wrote:
>>>>>> "Patricia Shanahan" <pats(a)acm.org> wrote in message
>>>>>> news:qfOYg.14496$UG4.1437(a)newsread2.news.pas.earthlink.net...
>>>>>>> Peter Olcott wrote:
>>>>>>> ....
>>>>>>>> To frame the exception handling mechanism within the historical context
>>>>>>>> of a Turing Machine we only need to establish three integer values
>>>>>>>> mapped to three different semantic meanings:
>>>>>>>> 0=NOT_HALTS 1=HALTS 2=INVALID_INPUT. The input is invalid in each
>>>>>>>> and every case where providing the result of the determination of
>>>>>>>> halting derives a different execution trace, thus changing the result.
>>>>>>> Yes, but what is the algorithm for determining whether the result of the
>>>>>>> determination of halting derives a different execution trace?
>>>>>> You have it incorrectly, the result does not provide a different
>>>>>> execution trace. It is the act of providing this result that derives the
>>>>>> different execution trace. Halt() can determine that the test program
>>>>>> will halt, it merely can not provide the results of this determination to
>>>>>> the caller in this case. Halt() determines that providing the results of
>>>>>> its correct and complete analysis is used to alter the execution trace,
>>>>>> thus forming a different execution trace than the one that is the subject
>>>>>> of the test.
>>>>> Are you saying that the behavior of the algorithm depends on what its
>>>>> caller is going to do with the result? If so, that is a far bigger change
>>>>> in the concept of algorithm than merely adding exceptions.
>>>> The difficulty of the Halting Problem is that because of its infinitely
>>>> recursive structure it is very difficult to see that the typical common
>>>> sense assumptions that apply to nearly every other algorithm do not apply
>>>> to this one. One of the most significant aspects of this atypical case, is
>>>> that in the very unusual case of the Halting Problem, a function can see
>>>> exactly how it is being called, and the effects of any actions that it
>>>> takes upon its own caller.
>>>>
>>>> Within the specification of the Halting Problem, the Halt() function and
>>>> its caller combined together form the program under test. In this case the
>>>> behavior of one of the constituent parts (the Halt() function) forms the
>>>> basis for the behavior of the other constituent part (the caller). Since
>>>> this structure is directly presented to the Halt() function as input, the
>>>> Halt() function can see exactly what its caller intends to do with any
>>>> result that it provides (or actions that it takes).
>>>>
>>>> So the Halt() function breaks out of this otherwise infinitely recursive
>>>> structure and provides the only possible correct answer given the
>>>> specification of this problem: it throws the INVALID_INPUT exception. The
>>>> Turing Machine equivalent of this would be halting with the ReadWrite Head
>>>> of its tape positioned at a token indicating the semantic meaning of
>>>> INVALID_INPUT.
>>> I'm not sure whether all that means "yes" or "no".
>>
>>
>> Are you saying that the behavior of the algorithm depends on what its caller
>> is going to do with the result?
>>
>> Yes! In the atypical instance of the Halting Problem the called can know in
>> advance both that its actions will effect the caller, and exactly how its
>> actions will effect the caller. Because it has this information in advance it
>> can decide what to do about it, and thereby effect the total outcome.
>
> HOW does h know "exactly how its actions will affect this caller"? All
> it knows is its caller has fed it a number, and asked "Is this the
> Goedel number of a computation that halts?". We know that the TM in that
> computation uses a copy of h as part of its source code, but it could be
> obfuscated.
>

The point that I am attempting to make transcends the historical mathematics
that frames the Halting Problem. My point exists within the context of the
natural language conclusions that we draw from this historical mathematical
proof. When one treats natural language more like a mathematical formalism,
one begins to realize that some commonly accepted notions lack a sufficient
basis. It would seem to me that one of these subtle errors applies to the
historical natural language conclusions that are formed on the basis of the
Halting Problem and its equivalents.

One of the problems that I see with this historical framework is the
artificial requirement that the result must be construed as Boolean. It would
seem to follow that a program must either halt or fail to halt; thus a Boolean
would seem to be the only possible result type. Whether or not a three-valued
result type might be more appropriate has never been fully explored, because
it would seem to violate common sense.

The other problem with the natural language conclusions pertaining to the
Halting Problem and its equivalents has to do with the subtle precise meaning
of words and expressions. If one takes the historical definition of the
problem, artificially constrained to providing a Boolean result, then one can
correctly conclude that a case can be constructed such that a Halt() function
cannot provide a correct Boolean result. It cannot provide a correct Boolean
result specifically because of the malignant self-referential structure of the
problem.

The inability to provide a correct Boolean result has been misconstrued to
mean that there are some problems that can never be solved, because there
exists a case of malignant self-reference where a correct Boolean result
cannot be provided. It is this last statement where the primary issue arises.
The conclusion does not logically follow from the premises. There is a subtle
fallacy of equivocation in the forming of this conclusion. Every point that I
have ever made on this subject boils down to this one single issue. In order
to fully see what I am s
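To make the construction being argued over concrete, here is a minimal C++ sketch of the classic self-referential caller together with the three-valued outcome described in the quoted exchange (0=NOT_HALTS, 1=HALTS, with INVALID_INPUT signaled as an exception, matching the "throws INVALID_INPUT" phrasing). The names Halt, Confound, and InvalidInput are illustrative assumptions, not anything from the thread; no general halting decider exists, and this stub only hard-codes recognition of the one self-referential case, so it shows the shape of the proposal rather than a working decider:

    #include <iostream>
    #include <stdexcept>
    #include <string>

    enum HaltResult { NOT_HALTS = 0, HALTS = 1 };   // the two "ordinary" answers

    // The proposed third outcome, raised instead of returning 0 or 1.
    struct InvalidInput : std::runtime_error {
        InvalidInput() : std::runtime_error("INVALID_INPUT") {}
    };

    // Hypothetical decider over textual program descriptions.  It only
    // recognizes the single hard-coded self-referential case; everything
    // else falls through to a placeholder answer.
    HaltResult Halt(const std::string& program, const std::string& input) {
        if (program == "Confound" && input == "Confound")
            throw InvalidInput();   // any 0/1 answer here would be inverted
                                    // by the caller, making it wrong
        return HALTS;               // placeholder for all unanalyzed cases
    }

    // The classic diagonal caller: do the opposite of whatever Halt()
    // reports about "Confound" run on itself.
    void Confound() {
        HaltResult r = Halt("Confound", "Confound");
        if (r == HALTS) { for (;;) {} }   // loop forever if told "halts"
        /* otherwise halt immediately */  // halt if told "does not halt"
    }

    int main() {
        try {
            Confound();
        } catch (const InvalidInput& e) {
            std::cout << "Halt() reported: " << e.what() << '\n';
        }
    }

Run as written, Confound() never receives a usable Boolean: the only report Halt() can make about this particular input without being contradicted by its caller is the third value.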
From: Peter Olcott on 17 Oct 2006 10:12

"Charlie-Boo" <shymathguy(a)gmail.com> wrote in message
news:1161092748.902472.215090(a)k70g2000cwa.googlegroups.com...
>
> Peter Olcott wrote:
>> <sillybanter(a)gmail.com> wrote in message news:MtsYg.9714$gx6.5379(a)trnddc05...
>> > In comp.theory Peter Olcott <NoSpam(a)seescreen.com> wrote:
>> >> "William Elliot" <marsh(a)hevanet.remove.com> wrote in message
>> >> news:Pine.BSI.4.58.0610150722520.7240(a)vista.hevanet.com...
>> >> > On Sun, 15 Oct 2006, Peter Olcott wrote:
>> >> >
>> >> >> The Halt program throws an "Invalid Input" exception.
>> >> >> This would be analogous to the hardware exception of an attempt to
>> >> >> divide by zero.
>> >> >>
>> >> > Error 001: Invalid input. Please try again.
>> >>
>> >> The point is there are not any questions too difficult for logic to
>> >> correctly answer, logic itself is not incomplete as Kurt Godel proposed.
>> >> There are merely some questions where the question itself is ill-formed.
>> >
>> > Brilliant. By defining any unsolvable problems as "ill-formed" then
>> > we can say that all "well-formed" questions are solvable. No more
>> > unsolvable problems!
>> >
>> Since the only reason that these problems are unsolvable is because they are
>> ill-formed, this makes perfect sense. To see it any other way would be like
>> me saying that you don't even know how to tell the time of day because of
>> your inability to correctly answer the following question; What time is it
>> blue or green?
>
> Neither.
>
> What's the prize?

Since you did not provide an answer from the set of possible choices, your
answer is incorrect. Bertrand Russell's Barber Paradox has this same essential
structure.

>
> C-B
>
>> > Advancement of science by changing definitions. Gotta love it.
>> >
>> > --
>> >
>> > Steve Stringer
>> > sillybanter(a)gmail.com
>> >
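For readers who have not seen it spelled out, a standard textbook formalization of the Barber Paradox (not something given in the thread) is:

    \exists b \, \forall x \, \bigl( \mathrm{Shaves}(b,x) \leftrightarrow \neg \mathrm{Shaves}(x,x) \bigr)

Instantiating x := b gives \mathrm{Shaves}(b,b) \leftrightarrow \neg\mathrm{Shaves}(b,b), a contradiction, so no such barber can exist. The structural parallel being claimed is that the halting construction likewise demands an answer that the act of answering would falsify.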
From: Patricia Shanahan on 17 Oct 2006 10:30

William Elliot wrote:
> On Tue, 17 Oct 2006, Patricia Shanahan wrote:
>> Peter Olcott wrote:
>
>>> Yes! In the atypical instance of the Halting Problem the called can
>>> know in advance both that its actions will effect the caller, and
>>> exactly how its actions will effect the caller. Because it has this
>>> information in advance it can decide what to do about it, and thereby
>>> effect the total outcome.
>> HOW does h know "exactly how its actions will affect this caller"? All
>> it knows is its caller has fed it a number, and asked "Is this the
>> Goedel number of a computation that halts?". We know that the TM in that
>> computation uses a copy of h as part of its source code, but it could be
>> obfuscated.
>>
> Here's a halting algorithm for OP to lament upon.
> 1) OP, to prove the validity of his notion, will always ignore or
> misconstrue your comments.
> 2) No matter what you say or do, the results will be the same, OP
> notions is always correct. It can not, will not ever be else.
> 3) The program of you replying and OP stringing you along won't
> halt.
> 4) Therefore, output is not-halt.
>
> Thus the algorithm, as the algorithm has put out output, has halted.
> Then, dear Patricia, upon the futility of your expert efforts, do you
> HALT!

You're right. Peter Olcott posted a message that was formatted as though it
were a reply to a very specific question I asked in my last message, but had
nothing at all to do with the question.

There is no point posting any more in this thread until he answers that
question.

Patricia
From: Peter Olcott on 17 Oct 2006 10:55
"Patricia Shanahan" <pats(a)acm.org> wrote in message news:JN5Zg.14870$UG4.10019(a)newsread2.news.pas.earthlink.net... > William Elliot wrote: >> On Tue, 17 Oct 2006, Patricia Shanahan wrote: >>> Peter Olcott wrote: >> >>>> Yes! In the atypical instance of the Halting Problem the called can >>>> know in advance both that its actions will effect the caller, and >>>> exactly how its actions will effect the caller. Because it has this >>>> information in advance it can decide what to do about it, and thereby >>>> effect the total outcome. >>> HOW does h know "exactly how its actions will affect this caller"? All >>> it knows is its caller has fed it a number, and asked "Is this the >>> Goedel number of a computation that halts?". We know that the TM in that >>> computation uses a copy of h as part of its source code, but it could be >>> obfuscated. >>> >> Here's a halting algorithm for OP to lament upon. >> 1) OP, to prove the validity of his notion, will always ignore or >> misconstrue your comments. >> 2) No matter what you say or do, the results will be the same, OP >> notions is always correct. It can not, will not ever be else. >> 3) The program of you replying and OP stringing you along won't >> halt. >> 4) Therefore, output is not-halt. >> >> Thus the algorithm, as the algorithm has put out output, has halted. >> Then, dear Patricia, upon the futility of your expert efforts, do you >> HALT! > > You're right. Peter Olcott posted a message that was formatted as though > it were a reply to a very specific question I asked in my last message, > but had nothing at all to do with the question. > > There is no point posting any more in this thread until he answers that > question. > > Patricia Take a look at the last reply that I just made to you. It sums up my whole point. |