From: Tim Little on 18 May 2010 21:35

On 2010-05-18, Tim Frink <plfriko(a)yahoo.de> wrote:
> Here the termination depends on the input that is unknown. Are such
> input-dependent programs also considered for the halting problem?

Certainly. The input to the halting problem is an encoding of a machine and all its input. In the case of a Turing machine, that would be an encoding of the state transition table, the initial state, and the nonblank cells of the tape. In particular, all of these must be finite.

Machines that accept external input during the execution of the program are not generally considered, though some of these could be modelled by a similar machine that accepts all input up-front and emulates delivery over time. Again, this requires that all the input be finite, or at least describable by a finite program.

- Tim
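For concreteness, here is a minimal Python sketch of what such a finite problem instance can look like, together with a step-bounded simulator. The class and function names and the chosen representation are illustrative assumptions, not any standard encoding; and note that exhausting the step budget tells us nothing about whether the machine would eventually halt, which is exactly the point of the halting problem.

    # A sketch only: the whole problem instance is finite data (transition
    # table, initial state, nonblank tape cells). Names are made up here.

    from dataclasses import dataclass

    @dataclass
    class TMInstance:
        # delta maps (state, symbol) -> (new_state, written_symbol, move),
        # where move is -1 (left) or +1 (right); a missing key means "halt".
        delta: dict
        initial_state: str
        tape: dict      # finite map: cell index -> nonblank symbol
        blank: str = "_"

    def run(instance, max_steps):
        """Simulate for at most max_steps steps.

        Returns ("halted", steps) if the machine halts within the budget,
        or ("running", max_steps) otherwise, which says nothing about
        whether it would halt eventually.
        """
        state, head = instance.initial_state, 0
        tape = dict(instance.tape)          # copy the finite nonblank portion
        for step in range(max_steps):
            symbol = tape.get(head, instance.blank)
            action = instance.delta.get((state, symbol))
            if action is None:              # no transition: the machine halts
                return ("halted", step)
            state, written, move = action
            tape[head] = written
            head += move
        return ("running", max_steps)

    # "Interactive" input can be modelled by writing the whole (finite) input
    # stream onto a reserved region of the tape up-front; the machine then
    # reads from there whenever it would have asked for input.
    busy = TMInstance(
        delta={("A", "_"): ("A", "1", +1)},  # writes 1s forever: never halts
        initial_state="A",
        tape={},
    )
    print(run(busy, max_steps=1000))         # ('running', 1000)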
From: Reinier Post on 21 May 2010 17:20

Tim Chow wrote:

[...]

> The reason that people prefer the Turing machine as a model of
> computation despite its seemingly unrealistic unbounded memory is
> that it captures the notion of algorithms where there's a clear
> concept of how to *extrapolate* the algorithm *if* we were given
> more memory.

[...]

I think you're being overly cautious in your phrasing here. The notion of algorithm doesn't imply any fixed bound on memory at all. On the contrary: Euclid's algorithm for determining the greatest common divisor, for instance, doesn't say 'if the input exceeds available storage, fail' or 'if the number of computation steps exceeds a given bound, give up altogether' or anything of the kind. Turing machines model algorithms, not "extrapolations of algorithms". I know algorithms are sometimes defined as necessarily deterministic and/or guaranteed to terminate, properties that Turing machines may lack, but I've never seen a definition in which an algorithm is required to operate within a fixed amount of working memory.

--
Reinier
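For concreteness, a minimal Python sketch of Euclid's algorithm as just described. It is an illustration only, and it leans on Python's arbitrary-precision integers; in other words, it quietly assumes unbounded working memory rather than imposing any fixed bound.

    # Euclid's algorithm: note what it does NOT say. There is no clause like
    # "fail if the input exceeds available storage" or "give up after N steps".

    def gcd(a, b):
        """Greatest common divisor of two non-negative integers, not both zero."""
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(1071, 462))         # 21
    print(gcd(10**10000 + 1, 7))  # handles numbers far beyond any fixed word size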
From: tchow on 21 May 2010 18:41

In article <4bf6f919$0$14133$703f8584(a)textnews.kpn.nl>,
Reinier Post <rp(a)raampje.lan> wrote:
> I think you're being overly cautious in your phrasing here. The notion
> of algorithm doesn't imply any fixed bound on memory at all. [...]
> I've never seen a definition in which an algorithm is required to
> operate within a fixed amount of working memory.

Technically you're right, but only because modern mathematics and computer science have been so thoroughly conquered by this point of view. I was trying to speak from the point of view of someone who is tempted by strict finitism, and who might think of an "algorithm" as "something that runs on an actual computer." How do you explain to such a person why we insist on defining the word "algorithm" in an intrinsically unbounded manner when we only ever run algorithms on bounded machines?

--
Tim Chow    tchow-at-alum-dot-mit-dot-edu
The range of our projectiles---even ... the artillery---however great, will never exceed four of those miles of which as many thousand separate us from the center of the earth. ---Galileo, Dialogues Concerning Two New Sciences
From: Casey Hawthorne on 21 May 2010 19:23

On 21 May 2010 21:20:25 GMT, rp(a)raampje.lan (Reinier Post) wrote:
> [...] I know algorithms are sometimes defined as necessarily
> deterministic and/or guaranteed to terminate, properties that Turing
> machines may lack, but I've never seen a definition in which an
> algorithm is required to operate within a fixed amount of working
> memory.

Where Turing machines lack those properties, so do algorithms. Turing machines, lambda calculus, ... are all "formalisms" of what it means to be computable.

--
Regards,
Casey
From: Casey Hawthorne on 24 May 2010 11:49

On Fri, 21 May 2010 16:23:40 -0700, Casey Hawthorne <caseyhHAMMER_TIME(a)istar.ca> wrote:
> [...] Turing machines, lambda calculus, ... are all "formalisms" of
> what it means to be computable.

Turing machines, lambda calculus, recursive function theory, ... are all equivalent "formalisms" of what it means to be computable.

--
Regards,
Casey
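As a small illustrative sketch of that claim (an illustration, not a proof of equivalence), the same function, addition of natural numbers, can be written once in a recursive-function style and once in a lambda-calculus style with Church numerals, both embedded in Python; the helper names are made up for the example.

    # Recursive-function style: addition by recursion on the second argument.
    def add(m, n):
        return m if n == 0 else add(m, n - 1) + 1

    # Lambda-calculus style: a Church numeral n is a function applying f n times.
    def church(n):
        return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

    def unchurch(c):
        # Recover an ordinary integer by applying "add one" to zero c times.
        return c(lambda k: k + 1)(0)

    # Church addition: m + n applications of f is m applications after n of them.
    church_add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    print(add(3, 4))                                   # 7
    print(unchurch(church_add(church(3))(church(4))))  # 7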