From: Tony Orlow on
malbrain(a)yahoo.com said:
> Virgil wrote:
> > In article <MPG.1d4863d52071fde5989f51(a)newsstand.cit.cornell.edu>,
> > Tony Orlow (aeo6) <aeo6(a)cornell.edu> wrote:
>
> > > and that I was trying to prove
> > > things about sets, not numbers, which is also bullshit, since I was
> > > proving a property regarding a set DEFINED by a natural number, which
> > > is ultimately a property of that number.
> >
> > But the set N is not defined by any one natural number
>
> Under Tony's theory, the number representing N is defined by a string
> of an infinite number of ones. Yes, more than one Turing machine can
> produce this. karl m
>
>
Actually there are two ways to look at it. In unsigned binary, yes, an infinite
number of 1's is the largest number possible. Since we start with all 0's
representing 0, the size of the set, N, will be one more than 111...111. It
will be 000...001:000...000, or one unit infinity.
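
As a finite sketch of the unsigned case, in C (the 8-bit width here is
chosen purely for illustration): the all-ones string is the largest
representable value, and the count of bit strings is one more than it,
which no longer fits in the same width.

    #include <stdio.h>

    int main(void)
    {
        unsigned char all_ones = 0xFF;   /* 11111111: largest 8-bit unsigned value */

        printf("largest 8-bit value    : %u\n", (unsigned)all_ones);       /* 255 */
        printf("number of 8-bit strings: %u\n", (unsigned)all_ones + 1u);  /* 256 */
        printf("255 + 1, kept to 8 bits: %u\n",
               (unsigned)(unsigned char)(all_ones + 1));                   /* 0: wraps */
        return 0;
    }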

Now, the other way to look at things, from a computer science standpoint, is
with 2's complement, the signed binary integer representation used in just
about every computer on the planet. A string of all zeroes, or 0, is obviously
its own negative. A string of all 1's, rather than being the largest number,
is -1. The largest positive number is 011...111, and the largest negative is
generally considered to be 100...000. BUT, this number is its own negative
value. It is both positive and negative, being at once the largest positive and
negative. 100...000 is really infinity, in the scheme of the signed binary
number circle, and just like + and -0 are equivalent, in this perspective + and
-oo are also equivalent.
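
To make that concrete, a small C sketch (again an 8-bit width, chosen only
for illustration, on an ordinary two's-complement machine): all zeroes
negates to itself, all ones is -1, and 10000000 comes back to itself when
negated.

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        signed char zero     = 0;          /* 00000000 */
        signed char all_ones = -1;         /* 11111111 in two's complement */
        signed char most_neg = SCHAR_MIN;  /* 10000000, i.e. -128 */

        printf("-(00000000) = %d\n", -(int)zero);     /* 0: zero is its own negative */
        printf(" 11111111   = %d\n", (int)all_ones);  /* -1, not the largest value   */
        printf(" 10000000   = %d\n", (int)most_neg);  /* -128, the most negative     */
        /* negating -128 gives +128, which an 8-bit register wraps back to -128
           on a typical two's-complement machine */
        printf("-(10000000) = %d\n", (int)(signed char)-(int)most_neg);
        return 0;
    }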
--
Smiles,

Tony
From: malbrain on
Daryl McCullough wrote:

> Nothing in mathematics is excepted without question. Not by
> mathematicians, anyway.

I think you meant ACCEPTED. See Barb's post for a discussion of
EXCEPTED.

> Yes, it is certainly the case that *if* you can prove by
> induction "forall x, Phi(x)", *then* you can write a
> corresponding recursive function that given a number n,
> produces a proof of Phi(n). Nobody disputes that. What
> people are disputing is your bizarre belief that proving
> "forall x, Phi(x)" by induction means that you have proved
> Phi(0), Phi(1), Phi(2), ... It means that you *can* prove
> all those infinitely many statements, not that you *have*.

Sorry, but the axiom states that you HAVE INDEED proved your assertion
for each and every n when its conditions are satisfied.
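
For concreteness, here is a sketch in C of the recursive construction
Daryl describes (Phi and the printed derivation lines are placeholders,
not any real proof system): handed one particular n, it unwinds the base
case and the induction step into a finite derivation of Phi(n).

    #include <stdio.h>

    /* Placeholder sketch: turns an induction (base case + step) into an
     * explicit, finite derivation of Phi(n) for one particular n. */
    static void derive_phi(unsigned n)
    {
        if (n == 0) {
            printf("Phi(0)                          [base case]\n");
            return;
        }
        derive_phi(n - 1);
        printf("Phi(%u) -> Phi(%u), hence Phi(%u)    [induction step]\n",
               n - 1, n, n);
    }

    int main(void)
    {
        derive_phi(3);   /* four lines of derivation, ending with Phi(3) */
        return 0;
    }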

> >For me, mathematics without meaning is unsatisfying, and
> >symbolic manipulation without understanding is boring.
>
> That's true for all mathematicians. The difference with you
> is that you don't want to do all the work necessary to understand
> real mathematics.

There was a decision in mathematics to branch on the axiom of infinity.
It's not part of the Peano axioms. karl m

From: malbrain on
Tony Orlow (aeo6) wrote:
> malbrain(a)yahoo.com said:
> > Virgil wrote:
> > > In article <MPG.1d4863d52071fde5989f51(a)newsstand.cit.cornell.edu>,
> > > Tony Orlow (aeo6) <aeo6(a)cornell.edu> wrote:
> >
> > > > and that I was trying to prove
> > > > things about sets, not numbers, which is also bullshit, since I was
> > > > proving a property regarding a set DEFINED by a natural number, which
> > > > is ultimately a property of that number.
> > >
> > > But the set N is not defined by any one natural number
> >
> > Under Tony's theory, the number representing N is defined by a string
> > of an infinite number of ones. Yes, more than one Turing machine can
> > produce this. karl m
> >
> >
> Actually there are two ways to look at it. In unsigned binary, yes, an infinite
> number of 1's is the largest number possible. Since we start with all 0's
> representing 0, the size of the set, N, will be one more than 111...111. It
> will be 000...001:000...000, or one unit infinity.

There's already a STANDARD method for coding real numbers using the
integers. It's published by the Institute of Electrical and Electronics
Engineers (IEEE).
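
For reference, a minimal C sketch of that coding; it assumes, as on most
machines, that double is the 64-bit IEEE 754 binary format, split into
1 sign bit, 11 exponent bits and 52 fraction bits.

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        double   x = -1.0;
        uint64_t bits;

        /* copy the object representation; assumes double is IEEE 754 binary64 */
        memcpy(&bits, &x, sizeof bits);

        /* for -1.0 this prints sign 1, exponent 1023 (the bias), fraction 0 */
        printf("sign     : %llu\n", (unsigned long long)(bits >> 63));
        printf("exponent : %llu\n", (unsigned long long)((bits >> 52) & 0x7FF));
        printf("fraction : %llu\n", (unsigned long long)(bits & 0xFFFFFFFFFFFFFULL));
        return 0;
    }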

> Now, the other way to look at things, from a computer science standpoint, is
> with 2's complement, the signed binary integer representation used in just
> about every computer on the planet. A string of all zeroes, or 0, is obviously
> its own negative. A string of all 1's, rather than being the largest number,
> is -1. The largest positive number is 011...111, and the largest negative is
> generally considered to be 100...000. BUT, this number is its own negative
> value. It is both positive and negative, being at once the largest positive and
> negative. 100...000 is really infinity, in the scheme of the signed binary
> number circle, and just like + and -0 are equivalent, in this perspective + and
> -oo are also equivalent.

You still haven't answered Daryl's questions from yesterday evening.
karl m

From: malbrain on
David Kastrup wrote:
> Tony Orlow (aeo6) <aeo6(a)cornell.edu> writes:
>
> > Various axioms have their various issues. The most pertinent to this
> > discussion right now, it seems, is Peano's 5th. I don't disagree
> > with the axiom or with the concept of inductive/recursive proof,
>
> There is no such thing as "recursive proof" in this context.

You're branching into the realm of the axiom of infinity. Stop that.

> > but in order to be careful that what we are doing is correct, we
> > need to keep in mind the original justifications for axioms when
> > applying them.
>
> Wrong. An axiom needs to stand on its own, absolutely. If it requires
> additional considerations, it was ill-chosen. Fortunately, this does
> not appear to be the case with the 5th Peano axiom.

I think you mean that one doesn't need to go beyond the axiom standing
on its own, for itself as part of a system.

> > If you are applying a method such as inductive proof, with an
> > inherent infinite loop, you cannot maintain finiteness through an
> > infinity of iterations, each involving finite increase in value.
>
> Completely irrelevant chitchat to the 5th Peano axiom. It is not
> bothered about "increase" in value, it is not bothered about
> "maintaining finiteness", it is not bothered about "iterations" or an
> "infinity" of them.

It does speak to the infinity of N. If you can apply it to your
assertion, then you can CONCLUDE it is true for each and every n, for all
n in N. It is very concerned about 'maintaining finiteness.'
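
For reference, the schema under discussion, in its usual first-order form
(written here in LaTeX notation):

    \bigl(\Phi(0) \land \forall n\,(\Phi(n) \rightarrow \Phi(n+1))\bigr)
        \rightarrow \forall n\,\Phi(n)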


> That's what makes it a good choice.

Only in the context of its history in making the choice of the axiom
of infinity.

karl m

From: malbrain on
David Kastrup wrote:
> Tony Orlow (aeo6) <aeo6(a)cornell.edu> writes:
>
> > Virgil said:
> >> In article <MPG.1d489d7fea8af732989f60(a)newsstand.cit.cornell.edu>,
> >> Tony Orlow (aeo6) <aeo6(a)cornell.edu> wrote:
> >>
> >> > I have explained the flaw in this proof, and it is met with
> >> > confusion because none of you seems to appreciate the recursive
> >> > nature of inductive proof.
> >>
> >> The inductive axiom shortcuts that recursion, which is the point of
> >> the inductive axiom. It says that if the recursive step can be
> >> proved in general, then it never need be applied recursively.
> >>
> >> If TO wishes to reject the inductive axiom, only then can he argue
> >> recursion.
> >
> > What a load of bilge water! Accepting the axiom as a general rule
> > does not mean one has to immediately forget about the logical basis
> > for the axiom.
>
> Of course not. But the basis for choosing an axiom is irrelevant to
> the application of the axiom. And this axiom was chosen exactly in a
> manner that does not require recursive application.

When working a proof one chooses one's axioms based on the desired
result. How can you say that the choice is irrelevant to the
application?

> Whether it was chosen because it is equivalent to arbitrarily deeply
> nested recursion or because Kronecker's neighbor had a particularly
> ugly elephant locked in his belfry is irrelevant to its application.

Do you know the actual history that went into its development as an
axiom? thanks, karl m