From: Newberry on
On Nov 14, 9:13 pm, stevendaryl3...(a)yahoo.com (Daryl McCullough)
wrote:
> Newberry says...
>
> >On Nov 14, 4:56 pm, stevendaryl3...(a)yahoo.com (Daryl McCullough)
> >wrote:
> >> Why does it matter whether the machine can prove it,
> >> if the human can't prove it, either? As I said, for
> >> a computer program to be as powerful as a human in
> >> recognizing truth, all that's necessary is for the
> >> program to be able to *recognize* true statements.
> >> It's not necessary that the program be able to
> >> *prove* them.
>
> >How can a computer recognize that PA is consistent?
>
> By producing an output "true" when given the
> input question "Do you believe that PA is consistent?".
>
> Are you asking how one would go about *programming*
> a computer program that would emulate a human's
> mathematical reasoning? If so, I have no idea.

Easy. Just add this axiom: (Ex)P(x, #("F")) --> F
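
(Spelled out: this is a reflection schema, one axiom per sentence F,
asserting "if F is provable, then F". In LaTeX notation, a sketch of
the schema:

  \exists x\, P(x, \ulcorner F \urcorner) \rightarrow F
    \qquad \text{for each sentence } F

Taking F = (0=1) gives a statement equivalent to the consistency of
whatever theory P refers to.)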

> The issue is not whether we *currently* know
> how to make an artificial intelligent computer
> program. The issue is whether Godel's theorem
> implies that it is impossible. It doesn't
> imply any such thing.
>
> There is what I think is a pretty air-tight argument
> that no single human can do any mathematical reasoning
> that is noncomputable: A real human has a finite
> memory capacity, and so there are only finitely
> many different statements of mathematics that we
> can ever hold in our heads at one time. So the
> collection of all statements that any *actual*
> human would ever claim to be "unassailably true"
> is a finite set. Every finite set of formulas
> is computable.
>
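
That last step is worth making concrete: membership in any fixed
finite set of statements is decidable by table lookup. A minimal
sketch in Python (the listed statements are placeholders, not a
serious inventory):

# Any fixed finite set of statements is trivially computable:
# hard-code the elements and decide membership by lookup.

UNASSAILABLE = {
    "0 = 0",
    "for all x: x + 0 = x",
    "PA is consistent",
}

def is_unassailable(statement: str) -> bool:
    # Total, terminating decision procedure for the finite set.
    return statement in UNASSAILABLE

print(is_unassailable("PA is consistent"))  # True
print(is_unassailable("0 = 1"))             # False
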
> Now, you could argue about what an idealized
> human could do, where we idealize the human
> to have an infinite memory capacity. Could
> such an idealized human do something that
> no Turing machine could do? Well, it depends
> on the details of how the "ideal human" is
> idealized.
>
> --
> Daryl McCullough
> Ithaca, NY

From: Newberry on
On Nov 15, 3:12 am, stevendaryl3...(a)yahoo.com (Daryl McCullough)
wrote:
> Newberry says...
>
> >On Nov 14, 9:13 pm, stevendaryl3...(a)yahoo.com (Daryl McCullough)
> >wrote:
> >> There is what I think is a pretty air-tight argument
> >> that no single human can do any mathematical reasoning
> >> that is noncomputable: A real human has a finite
> >> memory capacity, and so there are only finitely
> >> many different statements of mathematics that we
> >> can ever hold in our heads at one time. So the
> >> collection of all statements that any *actual*
> >> human would ever claim to be "unassailably true"
> >> is a finite set. Every finite set of formulas
> >> is computable.
>
> >Bingo!! You got it! So we have that the human mind surpasses any
> >machine and that no single human can do any mathematical reasoning
> >that is noncomputable. A contradiction! That is what I was trying
> >to say all along.
>
> No, we *don't* have that the human mind surpasses any machine.
> There is no reason to believe that's true.

But we do. We know that G is true. Proof:
1) The axioms of PA are manifestly true.
2) Hence PA is consistent.
3) "PA is consistent" is equivalent to G.
4) Hence G. QED
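
In LaTeX notation, the intended chain is (a sketch; Con(PA)
abbreviates the arithmetized consistency statement):

  \mathbb{N} \models \mathrm{PA}
    \implies \mathrm{Con}(\mathrm{PA})
    \quad \text{(a theory with a model is consistent)}

  \mathrm{PA} \vdash \mathrm{Con}(\mathrm{PA}) \leftrightarrow G
    \quad \text{(provable in PA; Goedel)}

  \implies G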

This proof cannot be formalized. We can prove the consistency of PA
in ZFC. We believe ZFC to be true. We can prove the consistency of
ZFC only in a still stronger theory, of which we are not sure that it
is true. We observe two things:

1) This proof is not the same as the manifest-truth proof.
2) Since we have not proved the consistency of ZFC, we do not know
that the ZFC proof of the consistency of PA is not a falsehood. So
our certainty that PA is true does not come from this ZFC-based
proof.
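
(For concreteness, the ZFC proof mentioned above is the usual
model-theoretic one; in LaTeX sketch:

  \mathrm{ZFC} \vdash \mathbb{N} \models \mathrm{PA}
  \quad\text{and}\quad
  \mathrm{ZFC} \vdash (\exists M\; M \models \mathrm{PA})
    \rightarrow \mathrm{Con}(\mathrm{PA}),
  \quad\text{hence}\quad
  \mathrm{ZFC} \vdash \mathrm{Con}(\mathrm{PA}).)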

If the manifest-truth proof were formalizable, and its axioms/rules
were added to PA, we would have a contradiction. Hence it is not
formalizable, i.e., the human mind surpasses any machine.


From: Daryl McCullough on
Newberry says...
>
>On Nov 14, 9:13 pm, stevendaryl3...(a)yahoo.com (Daryl McCullough)
>wrote:

>> Are you asking how one would go about *programming*
>> a computer program that would emulate a human's
>> mathematical reasoning? If so, I have no idea.
>
>Easy. Just add this axiom: (Ex)P(x, #("F")) --> F

But Godel shows that adding that axiom is inconsistent.

--
Daryl McCullough
Ithaca, NY

From: Daryl McCullough on
Newberry says...
>
>On Nov 15, 3:12 am, stevendaryl3...(a)yahoo.com (Daryl McCullough)
>wrote:

>> No, we *don't* have that the human mind surpasses any machine.
>> There is no reason to believe that's true.
>
>But we do. We know that G is true.

That does not prove that the human mind surpasses any machine.
You can certainly program a machine to know, among its basic
facts, that G(PA) is true.

>Proof:
>The axioms of PA are manifestly true
>PA is consistent
>"PA is consistent" is equivalent to G
>G QED
>
>This proof cannot be formalized.

That's just not true. It can perfectly well be formalized.
It just can't be formalized in the language of PA.

Extend PA to a new theory PA-plus in the following
way:

Add a new predicate symbol T(x). For every statement S
in the language of PA, add the axiom

S <-> T(#S)

where #S means the Godel code of S. Then add the
axiom

(Ax)(Prove(PA,x) -> T(x))

where Prove(PA,x) is the formalization of the proof
predicate.

In PA-plus, it is perfectly straightforward to prove
G(PA) (the Godel statement for PA).
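
A sketch of that derivation in LaTeX notation (numbering mine):

  1.\ \text{PA-plus} \vdash \mathrm{Prove}(\mathrm{PA}, \#(0{=}1))
        \rightarrow T(\#(0{=}1))
      \quad \text{(instance of the last axiom)}
  2.\ \text{PA-plus} \vdash T(\#(0{=}1)) \leftrightarrow 0{=}1
      \quad \text{(truth axiom for } S = (0{=}1)\text{)}
  3.\ \text{PA-plus} \vdash \neg\mathrm{Prove}(\mathrm{PA}, \#(0{=}1)),
      \text{ i.e. } \mathrm{Con}(\mathrm{PA})
      \quad \text{(from 1, 2 and } \mathrm{PA} \vdash \neg(0{=}1)\text{)}
  4.\ \mathrm{PA} \vdash \mathrm{Con}(\mathrm{PA}) \rightarrow G(\mathrm{PA}),
      \text{ hence } \text{PA-plus} \vdash G(\mathrm{PA}).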

--
Daryl McCullough
Ithaca, NY

From: Newberry on
On Nov 15, 8:52 am, stevendaryl3...(a)yahoo.com (Daryl McCullough)
wrote:
> Newberry says...
>
> >On Nov 14, 9:13 pm, stevendaryl3...(a)yahoo.com (Daryl McCullough)
> >wrote:
> >> Are you asking how one would go about *programming*
> >> a computer program that would emulate a human's
> >> mathematical reasoning? If so, I have no idea.
>
> >Easy. Just add this axiom: (Ex)P(x, #("F")) --> F
>
> But Godel shows that adding that axiom is inconsistent.

I do not know if Goedel actually showed this particular case, but it
certainly is inconsistent. But what is wrong with this axiom? It is
just as compelling as the rest of them. When people say that G is
true because of its meaning, this is the derivation rule they use.
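
(For the record, the inconsistency conceded above follows from
Goedel's second theorem once P is read as the proof predicate of the
extended theory T itself; in LaTeX sketch:

  \text{taking } F = (0{=}1):\quad
  T \vdash \exists x\, P(x, \#(0{=}1)) \rightarrow 0{=}1,
  \text{ i.e. } T \vdash \mathrm{Con}(T),

and by the second incompleteness theorem no consistent recursively
axiomatized extension of PA proves its own consistency, so T is
inconsistent.)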