From: Tony Orlow on
David R Tribble wrote:
> Virgil wrote:
>>> If TO does not like to be able to tell one ball from another, he does
>>> not have to play the game, but he should not ever try to pull that in
>>> games of pool or billiards.
>
> Tony Orlow wrote:
>> If distinguishing balls gives a less exact answer, and a nonsensical one
>> to boot, then that attention can be judged to be ill spent, and not
>> contributing to a solution at all. It is clear that sum(x=1->oo: 9)
>> diverges, is infinite, not 0. It's ridiculous to think otherwise.
>
> What about:
> sum{n=0 to oo} (10n+1 + ... + 10n+10) - sum{n=1 to oo} (n)
> The left half specifies the number of balls added to the vase, and
> the right half specifies those that are removed.
>

Do you mean:
sum{n=0 to oo} (10) - sum{n=0 to oo} (1)?
That sounds like what you're describing, and termwise the difference is
sum(n=0 to oo) (9). That's infinite, eh?
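A quick numerical sketch (mine, not from the thread) of the termwise reading: each step adds ten balls and removes one, so the partial count after n steps is 9n, which grows without bound.

```python
# Termwise reading of the vase sums: 10 added, 1 removed, per step.
def balls_after(steps: int) -> int:
    """Net ball count after `steps` add-10/remove-1 operations."""
    count = 0
    for _ in range(steps):
        count += 10   # ten balls added
        count -= 1    # one ball removed
    return count

for n in (1, 10, 100, 1000):
    assert balls_after(n) == 9 * n  # partial sums diverge termwise
```

The sumwise (set-theoretic) reading disputed later in the thread tracks *which* balls remain, not this running count.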
From: Tony Orlow on
cbrown(a)cbrownsystems.com wrote:
> imaginatorium(a)despammed.com wrote:
>> Tony Orlow wrote:
>>
>> Hmm, it seems to me, Tony, that this post illustrates rather well just
>> how close to total is your ignorance of what mathematics is, and your
>> inability to grasp the notion of a formal argument. (So why do I
>> bother?...)
>>
>> The theme running through almost all of your comments here is one we
>> see a lot in JSH arguments. When little children learn arithmetic at
>> school, it's common to start with positive integers (which form a
>> semigroup under both addition and multiplication), so lots of things
>> "can't be done": 3-5, or 11 / 6. Later they learn further concepts,
>> such as positive rationals, which form a group under multiplication, so
>> things which previously "couldn't be done" now can. Similarly they
>> learn about negative numbers, so that now 5-23 can be calculated. In
>> the context of school arithmetic as a sort of "calculation
>> engineering", it's reasonable to see this as just learning more and
>> more powerful ways of "working out the answer". This may well include
>> what I call "Javascript arithmetic", in which "Infinity" is one
>> additional value, allowing us to calculate things to do with lenses in
>> a very useful way.
>
> I think this is exactly the problem that occurs with Tony, HdB, and many
> others. (Ross, WM and Zick have other, uh, issues).
>
> What's interesting to me is to consider exactly where this approach
> breaks down.
>
> In some sense, Tony is asking "why can't we simply /define/ some number
> B which is infinite (i.e., B > r for all r in R), and then simply
> evaluate formulas such as f(x) = (x+1)/(x^2+1) at x = B?"
>
> And of course we /can/ construct such a system (which I will call here
> the T-numbers); as I outlined in a previous post, for which his ideas
> hold as long as f is a rational function of polynomials over R. Even
> his "infinite induction" principle holds in its most straightforward
> mode: if f is a rational function of polynomials over R, and for all
> real numbers r > 0, f(r)>0, then f(B)>0.
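An illustrative sketch (my own, with f taken from the post below): for the rational function f(x) = (x+1)/(x^2+1), f(r) > 0 holds for every real r > 0, and f shrinks toward 0 as x grows, which is consistent with f(B) being a positive infinitesimal in the "T-number" reading.

```python
# Rational function from the post; positivity persists as x grows.
def f(x: float) -> float:
    return (x + 1) / (x**2 + 1)

values = [f(10.0**k) for k in range(1, 7)]
assert all(v > 0 for v in values)                      # f stays positive
assert all(a > b for a, b in zip(values, values[1:]))  # and shrinks monotonically
```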
>
> The system starts to fall apart when we say: what if f is /not/ a
> rational function of polynomials over R?
>
> For TO and others, "a function is a formula"; which loosely means: a
> rational function of polynomials + "other well-known functions, such as
> sin, cos, e^x, log, etc.".
>
> This is tied into his confusion over what the meaning of a "limit" is,
> and why R itself is very different from Q or the algebraic numbers.
>
> By TO's lights, to find the limit of a function f(x) as x->oo, one
> simply "plugs-in" B (or "any infinite") into the "formula", and voila!
> the resulting f(B) is the limit of f(x) as x->oo.
>
> This is a sort of "plug-and-chug" approach to limits that we are taught
> as a sort of rule of thumb: if a limit has a form of (n+1)/(n^2 - n),
> then we apply: oo+1 = oo, oo^2 - oo = oo^2, oo/oo^2 = 1/oo = 0; so
> "therefore" the limit is 0.
>
> In the more, um, extended domain of the T-numbers, this would read as
> "B+1 is negligibly different from B, B^2 - B is negligibly different
> from B^2, so (B+1)/(B^2 - B) is negligibly different from 1/B, which is
> different from 0 by an infinitesimal amount".
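A numeric check (my sketch, not part of the post) of the rule-of-thumb limit quoted above: (n+1)/(n^2 - n) behaves like 1/n for large n, matching the "negligibly different from 1/B" reading.

```python
# Rule-of-thumb limit from the post: (n+1)/(n^2 - n) -> 0 as n -> oo.
def g(n: float) -> float:
    return (n + 1) / (n**2 - n)

# The ratio g(n) / (1/n) tends to 1, i.e. g(n) ~ 1/n:
for k in range(2, 8):
    n = 10.0**k
    assert abs(g(n) * n - 1.0) < 10.0 / n  # error 2/(n-1) vanishes
```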
>
> But how does one evaluate the expression "sin(x)" when x = B? One
> presumes that it would be the same way that one would evaluate the
> function at any real number x: as the limit, as n->oo, of the sum
>
> x - x^3/3! + x^5/5! - x^7/7! + ... + (-1)^n*x^(2*n+1)/(2*n+1)!
>
> This is not the sort of "formula" where one can simply "plug in" B for
> n and "get the answer". TO cannot even answer the question "what is
> (-1)^B?" without getting into logical contortions regarding whether B
> is "prime or composite" (whatever that may mean), let alone deciding
> what is meant by "B!".
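A sketch of my own illustrating the point: for each fixed real x the partial sums quoted above converge to sin(x), but every partial sum is itself a polynomial, so "plugging in B" only ever reproduces the rational-function case, never sin(B).

```python
import math

def sin_partial(x: float, n: int) -> float:
    """Partial sum x - x^3/3! + x^5/5! - ... + (-1)^n x^(2n+1)/(2n+1)!."""
    total = 0.0
    for k in range(n + 1):
        total += (-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
    return total

# For a fixed real x the partial sums converge to sin(x)...
assert abs(sin_partial(2.0, 15) - math.sin(2.0)) < 1e-12
# ...but each truncation is a polynomial; none of them "is" sin at B.
```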
>
> All of his confusions seem to boil down to this problem of a "limit" as
> some kind of formulaic substitution of a symbol representing "the
> infinite case" into some kind of algebraic formula, which then exists
> as a "number".
>
> This isn't typically true for limits which are not taken as n->oo. For
> example, lim x->0 (sin(x)/x) isn't "sin(0)/0 = 0/0", where "0/0" is
> some new "number" that we then append to the reals (N.B.: by
> L'Hopital's rule, it's 1).
>
> Can the T-numbers be saved? I.e., can we define "limits" on the
> T-numbers in such a way that we can still calculate, e.g., lim 1/x to
> be some specific T-number, while also allowing quantities such as B and
> 1/B, and still remaining a field with the total order so beloved of
> Tony?
>
> Looking at y = lim x->oo 1/x, where y is some T-number, I think we have
> three choices:
>
> (i) The limit is understood to mean that for all positive real numbers
> e, there is a real number x such that for all real numbers z > x, |1/z
> - y| < e.
>
> First, y = 1/B clearly satisfies the conditions. If y is any
> infinitesimal satisfying the above limit condition, then 2y > y, and
> 1/z - y > 1/z - 2y > 0 follows (since 1/z is real), and so it is also
> the case that |1/z - 2y| < e; and in fact 2y is actually "closer" to
> the limit than y is! (i.e., for any e > 1/z - y, there is a smaller e'
> such that 1/z - y > e' > 1/z - 2y).
>
> We cannot even say "let y be the largest infinitesimal satisfying the
> limit condition", because there is no largest infinitesimal in the
> T-numbers (just as there is no smallest infinite in the T-numbers).
>
> So in definition (i), the lim x->oo 1/x is not defined as a particular
> T-number.
>
> (ii) The limit is understood to mean that for all positive T-numbers e,
> there is a real number x such that for all real numbers z > x,
> |1/z - y| < e.
>
> Again in this case, the limit is undefined, by similar logic to the
> above.
>
> (iii) The limit is understood to mean that for all positive T-numbers
> e, there is a T-number x such that for all T-numbers z > x,
> |1/z - y| < e.
>
> In this case, the limit is y = 0; because if y > 0, then let e = y/2.
> Then for all z > 2/y, |1/z - y| = y - 1/z > e. On the other hand, if y
> = 0, then for all e, set x = 1/e; then for all z > x, 1/z < e.
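A sketch of mine checking definition (iii) numerically over the reals: y = 0 satisfies the condition, since for any e > 0 one can take x = 1/e, after which 1/z < e for every z > x.

```python
# Check the epsilon/witness argument from case (iii) on sample values.
def witnesses_limit_zero(epsilons) -> bool:
    for e in epsilons:
        x = 1.0 / e                       # the bound named in the post
        for z in (x + 1.0, 2.0 * x, 10.0 * x):   # sample z > x
            if not abs(1.0 / z - 0.0) < e:
                return False
    return True

assert witnesses_limit_zero([1.0, 0.1, 1e-3, 1e-6])
```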
>
> I see no "fourth way" of preserving the idea of a limit, that yields
> lim x->oo 1/x = 1/B for some infinite B.
>
> Cheers - Chas
>

Hi Chas -

Gee, I'm sorry I h
From: Virgil on
In article <45286ce5$1(a)news2.lightlink.com>,
Tony Orlow <tony(a)lightlink.com> wrote:

> David R Tribble wrote:

> > What about:
> > sum{n=0 to oo} (10n+1 + ... + 10n+10) - sum{n=1 to oo} (n)
> > The left half specifies the number of balls added to the vase, and
> > the right half specifies those that are removed.
> >
>
> Do you mean:
> sum{n=0 to oo} (10) - sum{n=0 to oo} (1)?
> That sounds like what you're describing, and termwise the difference is
> sum(n=0 to oo) (9). That's infinite, eh?

But the sums are not given termwise in the question, but sumwise, so
cannot be calculated termwise in your answer, but must be done sumwise.

And sumwise they are no different.
From: Tony Orlow on
David R Tribble wrote:
> Tony Orlow wrote:
>>> For the sake of this argument, we can talk about infinite reals, of
>>> which infinite whole numbers are a subset.
>
> David R Tribble wrote:
>>> Every member of N has a finite successor. Can you prove that your
>>> "infinite naturals" are members of N?
>
> Tony Orlow wrote:
>> Yes, if "finite successor" is the only criterion.
>>
>> To prove finiteness of such a string:
>>
>> The bits over each sequence are indexed by natural numbers, which are
>> all finite, yes?
>>
>> For any finite bit position, the string up to and including that bit
>> position can only represent a finite value, yes?
>>
>> Therefore, there is no bit position where the string can have
>> represented anything but a finite value, see? If the length is
>> potentially, but not actually, infinite, so is the value.
>
> So you're saying that finite bitstrings can only represent finite
> naturals.
>

Strings with only finite bit positions.

>
>> To prove successorship of such a string:
>>
>> The rule for successorship for finite values is
>> 1. Find the rightmost (least significant) 0
>> 2. Invert from that 0 rightwards
>>
>> This works for all values where there is a rightmost 0. That excludes
>> ...111, which can only have a successor given ignored overflow, allowable
>> in some cases.
>
> So obviously this rule, given a starting point of 0, a finite natural
> and a finite-length bitstring, can never produce anything but another
> finite-length bitstring as a successor. So you've proven that N
> can contain only finite naturals.
>
> Unless you think that your rule allows an infinite bitstring successor
> to be formed from some finite bitstring?

You will not produce 1 bits in infinite positions without an infinite
number of successions.

>
>> You don't really question why the successor to ...11110000 is equal to
>> ...11110001, do you?
>
> Again, I can't answer that until you define those numbers in a
> meaningful way. As you proved above, they are obviously not
> members of N.
>

1) ....00000 is a number.
2) If x is a number, then the successive number, formed by inverting the
rightmost 0 and all 1's to the right of it, is also a number.
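The successor rule stated above (invert the rightmost 0 and every 1 to its right) can be sketched on finite bitstrings; this is my own illustration, with the least-significant bit written last. On any string that has a rightmost 0 it coincides with ordinary binary increment.

```python
# Successor rule: find the rightmost 0, invert it and the 1s after it.
def successor(bits: str) -> str:
    if '0' not in bits:
        raise OverflowError("no rightmost 0: ...111 has no successor here")
    i = bits.rindex('0')                               # rightmost 0
    return bits[:i] + '1' + '0' * (len(bits) - i - 1)  # invert it and the trailing 1s

assert successor('0000') == '0001'
assert successor('0111') == '1000'
assert int(successor('10110011'), 2) == int('10110011', 2) + 1  # plain increment
```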
From: Tony Orlow on
Dik T. Winter wrote:
> In article <1159976673.345698.261310(a)h48g2000cwc.googlegroups.com> "MoeBlee" <jazzmobe(a)hotmail.com> writes:
> > Tony Orlow wrote:
> ...
> > > It doesn't have primitive operators 'e'and 'succ'?
> >
> > It has the primitive 'S' (read as 'successor'), yes. It does NOT have
> > 'e'.
> >
> > The usual primitives are:
> >
> > 0, S, +, *, as well as = from identity theory.
>
> As far as I understand, you can even dispense with "+" and "*". But you
> get recursive definitions of "+" and "*". As I am not very far in logic,
> I do not know whether that fits entirely. On the other hand, it would
> be great if Tony (who is a computer scientist) wrote up the computer
> routines that would do arithmetic on naturals based on the existence
> of 0 and the S and = operation only. It is possible.

Of course, it's just miserably inefficient. You can implement '+' using
a loop indexed with one value, incrementing the other value, and then
implement '*' using a loop indexed with one value, applying our '+' to
the first value. I sure am glad we have adding and multiplication
circuits, and not everything must be based on increment. I would rather
get to implementing the H-riffic numbers in base 2, since that's
actually an interesting problem. Unfortunately it involves figuring
roots of fractions and powers of those, but I think I have it pretty
well outlined. Still, I'm hoping to find a shortcut, much like adding
and multiplication circuits.

:)
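Dik's suggestion above can be sketched directly; this is my own illustration, using Python ints as stand-ins for the naturals, with + and * defined recursively from zero, the successor S, and equality alone.

```python
# Arithmetic from 0 and S only, as described in the post.
def S(n: int) -> int:          # the successor primitive
    return n + 1

def add(a: int, b: int) -> int:
    # a + 0 = a;  a + S(b) = S(a + b)
    result = a
    for _ in range(b):          # b applications of S
        result = S(result)
    return result

def mul(a: int, b: int) -> int:
    # a * 0 = 0;  a * S(b) = (a * b) + a
    result = 0
    for _ in range(b):          # b applications of add
        result = add(result, a)
    return result

assert add(3, 4) == 7
assert mul(3, 4) == 12
```

As the posts note, this is miserably inefficient compared to hardware adders and multipliers, but it shows the construction works.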