From: Daryl McCullough on
Peter Olcott says...

>Does tying you up, and taping your mouth shut make the problem
>undecidable for you, or do you still know that the program will
>halt, even though you can't tell anyone?

You are getting yourself tied up with your analogies. WillHalt is not
being "tied up" by anyone. It is perfectly free to answer the question:
Does LoopIfHalts(LoopIfHalts) halt, or not?

What's really going on has nothing whatsoever to do with being
"tied up" or having your output corrupted, or whatever. Those
analogies are completely bogus. What's really going on is this:

WillHalt is being asked to predict the behavior of another program,
LoopIfHalts. But what is LoopIfHalts doing? It's trying to predict
the behavior of WillHalt, so that it can do the opposite. Nobody
is tying anybody up, or taping anybody's mouth shut. It's all a
computational game.

A human analogy might be this: You and I are playing a game.
We each are trying to accomplish different things in this
game. What you are trying to do is to answer the question:
Will Daryl ever accomplish what he is trying to do? You are
supposed to answer "yes" or "no".

What I'm being asked to do is to predict what Peter is
going to say, and then I'm supposed to say the opposite.
If I can figure out that you are going to say "yes", then
I'm supposed to say "no", and vice-versa.

After being given our instructions, we each are taken to
our respective isolation booths, to figure out our answers.
There is *no* interaction between us. I'm not allowed to
watch what you are doing, and you aren't allowed to watch
what I'm doing. Instead, we have to use our knowledge about
the other person in order to make our predictions.

There is nothing ill-defined about this game. You have complete
knowledge about what I've been asked to do. We can also assume
that you have complete knowledge about how my brain works. Similarly,
I have complete knowledge about what you've been asked to do, and
I have complete knowledge about how your brain works. I'm not
in any way "taping your mouth shut" or fooling with your outputs.
You aren't in the room with me, and I have no way to affect your
decision making in the slightest.

What is ill-defined about what we are asked to do? It's perfectly
well-defined. It's just very, very hard. As Yogi Berra said, it's tough
to make predictions, especially about the future.

If instead of human beings, we play the game with deterministic
computer programs, then the Daryl role becomes much easier, because
one program can perfectly simulate another program. The Daryl program
just has to start simulating the behavior of the Peter program. If
the simulation ever halts, then the Daryl program can check what
Peter's answer is, and output the opposite answer.

On the other hand, the Peter program has a much harder job. If
the answer to the question "Will the Daryl program ever produce
the correct answer?" is "no", then it would take forever to find
that out by simulation. For the Peter program to correctly answer
"no", it has to take a "shortcut" that's quicker than simulation.

That's the real moral of the proof of the unsolvability of the
halting problem: to predict what another program is going to do,
there are no foolproof shortcuts. Simulation is the best you
can do.

--
Daryl McCullough
Ithaca, NY

From: Peter Olcott on

"Daryl McCullough" <stevendaryl3016(a)yahoo.com> wrote in message
news:ehc20o06q3(a)drn.newsguy.com...
> Peter Olcott says...
>
>>Does tying you up, and taping your mouth shut make the problem
>>undecidable for you, or do you still know that the program will
>>halt, even though you can't tell anyone?
>
> You are getting yourself tied up with your analogies. WillHalt is not
> being "tied up" by anyone. It is perfectly free to answer the question:
> Does LoopIfHalts(LoopIfHalts) halt, or not?
>
> What's really going on has nothing whatsoever to do with being
> "tied up" or having your output corrupted, or whatever. Those
> analogies are completely bogus. What's really going on is this:
>
> WillHalt is being asked to predict the behavior of another program,
> LoopIfHalts. But what is LoopIfHalts doing? It's trying to predict
> the behavior of WillHalt, so that it can do the opposite. Nobody

No, you are wrong; this is not what the specification of the problem entails.
LoopIfHalts() is never making any predictions whatsoever. The reason that you
are not seeing what I am saying is that you continue to make these tiny little
errors that make your whole conclusion come out wrong.

From: Daryl McCullough on
Peter Olcott says...

>Here is a reasonably possible scenario, and the one that I am proposing. It is
>not that all these people got the math wrong; they did not get the math wrong.
>It is the natural language conclusions that they derived from the mathematical
>conclusions that are incorrect.

Yes, they could all be wrong, but they're not. The natural language
conclusion is exactly right: the halting problem is undecidable in
precisely the natural language sense that there is no computer program
that can correctly answer all instances of the question "Will program P
halt on input X?"

Here's an intuitive way to see why the undecidability of the halting
problem makes sense.

Suppose *you* are asked to answer the question: "Does program P halt
on input X?". How would you do it? Well, the obvious way to do it
would be to start simulating program P on input X. If your simulation
ever halts, then the answer is clearly "yes". But how would you ever
know that the answer is "no"? You can't find that out by simulation,
because the answer "no" means that such a simulation must go on forever.
To answer "no", you somehow have to find a "shortcut" that allows you
to see the effect of an infinite amount of computation in a finite
amount of time.
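
As a concrete sketch (just an illustration, not part of the proof; it assumes
a program is represented by a Python "step" function that returns the next
state, or None once the program has halted):

    def halts_yes_only(step, state):
        # Simulate one step at a time. If the program halts, we can
        # answer "yes". If it never halts, this loop itself never ends,
        # so we can never answer "no".
        while state is not None:
            state = step(state)
        return True

    # Example: a countdown that halts when it reaches zero.
    countdown = lambda n: None if n == 0 else n - 1
    print(halts_yes_only(countdown, 5))   # True, after five simulated steps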

Sometimes there is such a shortcut; sometimes you can see that P
is getting itself into an infinite loop. But why would you think
that you can *always* find such a shortcut? If there is a shortcut
that allows you to figure out the effect of an infinite computation
in a finite amount of time, what's to prevent P from using that
shortcut?
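
Here is one such shortcut, again purely as an illustration (same "step"
representation as above): if a deterministic program ever revisits a state it
has already been in, it must repeat forever, so we can answer "no" without
running an infinite simulation. The catch is that this only works when the
program has finitely many reachable states; a general program can run forever
without ever repeating a state, and then this shortcut never fires.

    def halts_with_cycle_check(step, state):
        # Sound only for deterministic programs with hashable states;
        # terminates only when the reachable state space is finite.
        seen = set()
        while state is not None:
            if state in seen:
                return False          # repeated state: it will loop forever
            seen.add(state)
            state = step(state)
        return True

    # A program that bounces between two states forever:
    bounce = lambda s: 1 - s
    print(halts_with_cycle_check(bounce, 0))   # False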

The moral of the unsolvability of the halting problem is that
in some cases, the *only* way to figure out what a program is
going to do is to simulate it. There are no foolproof shortcuts.

>Specifically, calling the HP undecidable is the error. It might
>be undecidable in some artificially contrived sense that has
>nothing to do with whether or not a decision can be made, but,
>then this would be the error.

I think you have things exactly backwards. Your attempts to
solve the halting problem are artificially contrived, and have
nothing to do with whether or not a decision can be made.
In a very practical, real-world sense, the halting problem is
undecidable. This has implications for all sorts of things.
It means that there is no such thing as perfect data
compression. It means that there is no way to look at
a set of mathematical axioms and tell if they are consistent
or not. It means that there is no foolproof way to check the
correctness of computer programs (there will always be programs
that are correct, but there is no way to prove them correct
except by running them). It means that there is no perfect
optimizing compiler. It means that there is no way to decide
whether an arbitrary polynomial (Diophantine) equation has integer solutions.
It means that there is no way to look at two arbitrary
context-free grammars and decide whether they produce the
same language. The undecidability of the halting problem
has profound consequences in almost all branches of mathematics
and computer science. Your speculations about "malignant self
reference" as no relevance to solving any of these problems.

--
Daryl McCullough
Ithaca, NY

From: Daryl McCullough on
Peter Olcott says...

>"Daryl McCullough" <stevendaryl3016(a)yahoo.com> wrote

>> WillHalt is being asked to predict the behavior of another program,
>> LoopIfHalts. But what is LoopIfHalts doing? It's trying to predict
>> the behavior of WillHalt, so that it can do the opposite. Nobody
>
>No, you are wrong; this is not what the specification of the problem
>entails.

I'm talking about the normal statement and proof of the unsolvability
of the halting problem. In that proof, we assume that there is a program
Halt(p,x) which is purported to have the following behavior:

If p is the code of a program which halts on input x, then
Halt(p,x) outputs true. Otherwise, Halt(p,x) outputs false.

So clearly Halt is being asked to predict the behavior of program p.

The way that LoopIfHalts is defined is as follows:

LoopIfHalts(x) calls Halt(x,x). If Halt(x,x) returns false,
then LoopIfHalts halts immediately. If Halt(x,x) returns true,
then LoopIfHalts goes into an infinite loop.

So clearly, LoopIfHalts is dependent on the behavior of Halt,
and vice-versa.
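
The same construction in code, as a sketch (here programs are passed around as
Python function objects rather than as source code, and Halt is the purported
decider defined above, so of course no correct Halt can actually be written):

    def LoopIfHalts(x):
        # Ask the purported decider whether program x halts on its own code...
        if Halt(x, x):
            while True:        # ...and if it says "halts", loop forever instead
                pass
        return                 # if it says "never halts", halt immediately

    # Now consider Halt(LoopIfHalts, LoopIfHalts). If it returns true,
    # LoopIfHalts(LoopIfHalts) loops forever; if it returns false, it halts.
    # Either way, Halt answered wrongly about LoopIfHalts.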

>LoopIfHalts() is never making any predictions whatsoever.

That's not true. It *first* must figure out whether WillHalt
returns true or false. You defined LoopIfHalts in terms of
WillHalt.

--
Daryl McCullough
Ithaca, NY

From: Daryl McCullough on
Peter Olcott says...

>No, you are wrong; this is not what the specification of the
>problem entails. LoopIfHalts() is never making any predictions
>whatsoever. The reason that you are not seeing what I am
>saying is that you continue to make these tiny little
>errors that make your whole conclusion come out wrong.

No, I have no trouble seeing what you are saying, but I
see that it is nonsense. You really don't know what you
are talking about, Peter.

The original proof of the unsolvability of the halting
problem was about Turing machines. It's not really important
that it be about Turing machines, but the clearest statement
uses programs that are *functional*. That means that the output
is a deterministic function of the inputs. No matter how the
function is called, it *always* returns the same outputs given
the same inputs.

Such a functional program is not allowed to throw exceptions,
and it is not allowed to have its output depend on the context
in which it is called.

It is clear that your trick cannot possibly work for such
programs. There can be no deterministic program Halt(p,x) such
that

If p is the source code for a program that halts on input x,
then Halt(p,x) returns true. Otherwise, Halt(p,x) returns false.

You agree with that, right? The usual proof shows clearly that there
is no functional program Halt(p,x) that solves the halting problem.
Right?

Now, what you are proposing is a *non-functional* program.
It does *not* return the same output given the same inputs.
The output that it produces depends on the context in which
it is called. So the usual proof doesn't apply to your type
of program.

HOWEVER, it is clearly true that with ordinary programming languages,
anything that can be deterministically computed using non-functional
programs can also be computed using functional programs. The basic
idea is just to make all the dependencies of the non-functional
program explicit as arguments. So if there is no solution to the
halting problem using functional programs, then there is no solution,
period. Your excursion into nonfunctional programs makes *no* difference.
It makes the analysis more complicated, but it doesn't change the result.

--
Daryl McCullough
Ithaca, NY