From: MoeBlee on
On Dec 18, 5:35 pm, MoeBlee <jazzm...(a)hotmail.com> wrote:

> They teach the induction schema to little kids?

Never mind. I see the point was ceded in another post.

MoeBlee

From: Newberry on
On Dec 17, 6:54 am, stevendaryl3...(a)yahoo.com (Daryl McCullough)
wrote:
> Newberry says...
>
> >Let me see if I can summarize your position.
> >a) The human neural system does not surpass the Turing machine.
>
> Yes, I believe that.

The same way you could prove that there is no free will. Do you also
believe that?

> >b) We are NOT equivalent to any theory PA, ZFC, ZFC + an axiom of
> >infinity, T_3, T_4, ..., T_n for any n
>
> It depends on what you mean by being "equivalent". Humans don't
> *have* a fixed theory. We can be talked into accepting statements
> as true, but there is no good reason to think that we are perfectly
> consistent about it. Perhaps a human could be talked into believing
> the Continuum Hypothesis, but with a different argument, could be
> talked into believing its negation.
>
> Humans are not very usefully characterized by a formal theory.
>
> >c) We are programmed as a heuristic learning algorithm
>
> I would say that we *have* heuristic learning algorithms.
> I wouldn't say that we *are* those algorithms, or that we
> were *programmed* (unless you want to call natural selection
> a form of programming).
>
> >d) Mathematics is at least partially an empirical science
>
> Yes.
>
> >Is this a fair characterization of your position?
>
> Pretty good.

Let me make sure that I understand the whole thing. By the heuristic
learning algorithms we have arrived at the conclusion that the axioms
of ZFC are true and also that ZFC is consistent. (What is the degree
of certainty, 100%?) In addition, there is also a consistency proof
of ZFC in a stronger system (ZFC + an axiom of infinity). (What is the
cogency of this proof?)
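A sketch of the stronger-system proof alluded to here, assuming "an axiom of infinity" means an inaccessible cardinal (that reading is my assumption, not stated in the thread):

```latex
% If \kappa is an inaccessible cardinal, the cumulative-hierarchy stage
% V_\kappa is a (set) model of ZFC; by soundness this yields Con(ZFC):
\mathrm{ZFC} + \exists\kappa\,(\kappa\ \text{inaccessible})
  \vdash (V_\kappa \models \mathrm{ZFC}),
  \quad\text{hence}\quad \mathrm{Con}(\mathrm{ZFC}).
```

The cogency question then reduces to how much one trusts the inaccessible-cardinal axiom itself.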

From: Newberry on
On Dec 17, 6:37 am, stevendaryl3...(a)yahoo.com (Daryl McCullough)
wrote:
> Newberry says...
>
> >So noted. I also note that most people do not find Gentzen's proof
> >very convincing. And I still do not see how a proof in a stronger
> >system could be more convincing than a proof in the system itself.
>
> The *strength* of the system is not relevant so much as whether the
> axioms are themselves intuitively true. A proof in a theory whose
> axioms are intuitively true is more useful and interesting than
> a proof in a theory whose axioms are not intuitively true.
>
> So, for example, a proof in PA + the negation of Goldbach's
> conjecture would not be very convincing, because we have no
> reason to believe that the negation of Goldbach's conjecture
> is true.

Why is a consistency proof of a theory in a stronger theory
interesting?
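One standard answer, sketched in LaTeX: Gentzen's proof reduces Con(PA) to a single principle, quantifier-free transfinite induction up to epsilon_0, which many regard as finitistically meaningful even though, by Goedel's second incompleteness theorem, PA itself cannot prove its own consistency:

```latex
% Gentzen (1936): primitive recursive arithmetic plus transfinite
% induction up to \varepsilon_0 proves the consistency of PA,
% while PA itself cannot (G\"odel's second incompleteness theorem):
\mathrm{PRA} + \mathrm{TI}(\varepsilon_0) \vdash \mathrm{Con}(\mathrm{PA}),
\qquad
\mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA}).
```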
From: Nam D. Nguyen on
MoeBlee wrote:
> On Dec 18, 2:59 pm, "Nam D. Nguyen" <namducngu...(a)shaw.ca> wrote:
>
>> A model
>> *always already* includes a *chosen* interpretation: hence a belief has been
>> "believed" already.
>
> A model is a mathematical object.

To be precise, it's more than a mere mathematical object: it's an *interpreted*
mathematical object. A syntactical wff, on the other hand, is a non-interpreted
mathematical object. What is the difference? Well, the entire Gestalt view of the
collection of components of a wff is supposed to be independent of any being's
interpretation or view: a FOL formula would mean the same to all - no choice of
an alternative. The Gestalt view of the collection of components of a
model-structure, by contrast, is basically a model, and is subject to an
individual reasoner's interpretation. Given the same structure, you could
subjectively interpret the Gestalt view differently!

For instance, given the following structure:

xxxbxxxxxxxxxxxxxxxxxxxxxxxxxx

Would you *interpret* that structure as a model of the blue-eye-dragon
theory T? Or would it *not* be a model of T, because you would interpret
"b" as "non-blue" (e.g. "brown"), in which case it could be a model of
a non-blue-eye-dragon theory T' (if you care to view it that way)?
On the other hand, given an appropriate language, the formula F df= "the dragon
has a blue eye" could not be viewed/interpreted differently. For instance,
that formula could not be viewed as ~F.
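The dragon illustration can be sketched in a few lines of Python (a hypothetical toy, with the theory and interpretations chosen by me to mirror the example): the same structure string counts as a model of T under one reading of "b" and fails to be one under another.

```python
# Toy sketch: one structure, two interpretations of the symbol "b".
structure = "xxxbxxxxxxxxxxxxxxxxxxxxxxxxxx"

def is_model_of_T(struct, interpretation):
    """Theory T says "some dragon has a blue eye": the structure is a
    model of T iff, under the chosen interpretation of its symbols,
    at least one position denotes a blue-eyed dragon."""
    return any(interpretation[symbol] == "blue" for symbol in struct)

blue_reading  = {"x": "unknown", "b": "blue"}   # read "b" as blue
brown_reading = {"x": "unknown", "b": "brown"}  # read "b" as brown

print(is_model_of_T(structure, blue_reading))   # True: a model of T
print(is_model_of_T(structure, brown_reading))  # False: not a model of T
```

The point mirrored here is that the satisfaction check only makes sense after an interpretation has been chosen; the bare string decides nothing by itself.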


> I don't know how you would argue
> that a model requires "a belief has been "believed" already".

Just change "a belief" to "an interpretation" and "believed" to "interpreted"
then you would know.

>
> MoeBlee
From: Peter_Smith on
On 19 Dec, 01:16, george <gree...(a)cs.unc.edu> wrote:

> The content of the axioms is what is relevant.

Indeed. I thought that was my Fregean point, against your Hilbertian
formalism, namely that the axioms do come with semantic content :-)