From: Transfer Principle on
On Mar 3, 7:25 pm, Transfer Principle <lwal...(a)lausd.net> wrote:
> Here's what I'm getting at -- if M is a subset of Z, then the elements
> of M ought to be called "integers" -- even if M is a finite subset of
> Z such as {neZ|-65536<=n<=65535}

Correction: M is a finite subset of Z such as {neZ|-32768<=n<=32767}...
From: Transfer Principle on
On Mar 2, 9:09 pm, Marshall <marshall.spi...(a)gmail.com> wrote:
> On Mar 2, 7:22 pm, Transfer Principle <lwal...(a)lausd.net> wrote:
> > Therefore, by Virgil's standards, IEEE 754 arithmetic must be
> > "bloody useless," even though Virgil probably uses software
> > that adheres to IEEE 754 every time he turns on his computer.
> It's not something that qualifies as a model of anything
> the least bit applicable to mathematical proof, which means
> that *in context* Virgil's claim is entirely correct, even if
> perhaps a bit dramatically phrased.
> Algebraic properties so basic and fundamental as
> associativity of addition and multiplication do not hold in
> IEEE 754.

OK, we can attempt to have an ultrafinitist theory in which addition
and multiplication are still associative.
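Spight's claim about associativity failing is easy to check directly.
Here's a minimal sketch in Python (whose float type is an IEEE 754
binary64 double); the particular numbers are just illustrative:

```python
# IEEE 754 double addition is not associative: adding 1.0 to 1e16
# is lost to rounding, but adding it to 0.0 is not.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # = 0.0 + 1.0 = 1.0
right = a + (b + c)  # = 1e16 + (-1e16) = 0.0, since -1e16 + 1.0
                     #   rounds back to -1e16

print(left, right, left == right)  # 1.0 0.0 False
```

So (a+b)+c and a+(b+c) genuinely differ, which is the sense in which
IEEE 754 fails to model the algebra used in mathematical proof.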

But what I hope won't happen is that Spight will keep gradually
adding more "basic and fundamental" algebraic properties that
theories that aren't "bloody useless" must have, until all of the
properties of a complete ordered field are listed. I refuse to
believe that one must have a complete ordered field in order to
have a "useful" theory; one may well be able to have an
ultrafinitist theory that's still "useful" even if it doesn't have
_all_ of the properties of a complete ordered field, no matter what
Spight or Virgil might say to convince me otherwise.

But let's stick to associativity. It's easy to find a finite set in
which both addition and multiplication are associative -- namely the
rings Z/nZ, for each natural number n. (Notice that this goes back
to the Shanahan post about computer "int"s, which are of course just
addition and multiplication in Z/2^16Z (= Z/65536Z) or Z/2^32Z.)
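Concretely, here is a sketch (in Python, reducing mod 2^16 by hand,
since Python's own integers don't wrap) of how 16-bit "int" arithmetic
is arithmetic in Z/2^16Z, and of associativity surviving there:

```python
import random

M = 2**16   # the ring Z/2^16Z, i.e. 16-bit "int" arithmetic

def add16(a, b):
    # addition in Z/2^16Z: add, then reduce mod 2^16 ("wraparound")
    return (a + b) % M

def mul16(a, b):
    # multiplication in Z/2^16Z
    return (a * b) % M

print(add16(65535, 1))   # 0: the wraparound that C calls overflow

# Unlike IEEE 754, this arithmetic really is associative; spot-check:
for _ in range(1000):
    a, b, c = (random.randrange(M) for _ in range(3))
    assert add16(add16(a, b), c) == add16(a, add16(b, c))
    assert mul16(mul16(a, b), c) == mul16(a, mul16(b, c))
print("associative on all sampled triples")
```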

The ultrafinitist "crank" Doron Zeilberger once argued in favor of
using a ring Z/pZ, where p is some large prime number. Choosing a
prime modulus makes Z/pZ into a field, so not only do we have
associativity, but all of the field properties as well.

How big must p be? Zeilberger doesn't specify, but it's desirable that
we choose a prime number that's large compared to the numbers
that appear in physics (if this is to be applied to enough math for
the sciences). The best we can do, of course, is to let p be the
largest known prime number, which as of the time of this post is
the prime 2^43112609-1. So we consider the field Z/(2^43112609-1)Z.

(Of course I know that there exist infinitely many primes, but I'm
only considering the _known_ primes.)

Notice that in this field, 1/2 exists, but is actually a "large"
number, namely 2^43112608. But as long as we stick to the numbers
that appear in physics or cryptography, such as 10^500 or RSA-2048
and their reciprocals, there's no danger of fractions like 1/2 being
mistaken for large numbers.
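For illustration, here is the same computation in a toy version of
Zeilberger's field, using the small Mersenne prime 2^13 - 1 in place of
2^43112609 - 1 (the substitution is mine, purely so the sketch runs
instantly; the arithmetic is identical in form):

```python
p = 2**13 - 1   # 8191, a Mersenne prime standing in for 2^43112609 - 1

# 1/2 in Z/pZ is the modular inverse of 2 (pow with exponent -1
# works for this in Python 3.8+)
half = pow(2, -1, p)
print(half)             # 4096, i.e. 2^12 = (p + 1)/2

# Sanity check: 2 * (1/2) = 1 in the field
print((2 * half) % p)   # 1

# For any Mersenne prime p = 2^k - 1 we get 1/2 = 2^(k-1), which is
# the sense in which 1/2 mod (2^43112609 - 1) is the "large" number
# 2^43112608.
assert half == (p + 1) // 2 == 2**12
```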

This appears to be the best that we can do in an ultrafinitist theory,
since the next step up (an _ordered_ field) must be infinite (as it
contains an isomorphic copy of Q as a subset).

(Notice that if I were simply to ask, since the complete ordered field
is infinite, what is the greatest number of axioms of a complete
ordered field that a finite set may satisfy, one might answer that,
just by dropping the axioms that refer to additive or multiplicative
identities, the _empty set_ satisfies all the remaining axioms. But of
course, not even an ultrafinitist would recommend basing all of
arithmetic on the empty set. The Z/pZ approach of Zeilberger appears
to be the best nontrivial approach.)
From: Transfer Principle on
On Mar 3, 3:47 am, "Jesse F. Hughes" <je...(a)phiwumbda.org> wrote:
> Transfer Principle <lwal...(a)lausd.net> writes:
> > Let S be a set of natural numbers (and here we're returning to the
> > standard definition of "natural number"). Then the question is,
> > can we find a theory T such that (ZFC proves that) for every
> > natural number n, n is in S if and only if there exists a set M
> > such that the cardinality of M is n, and M is (a carrier set of) a
> > model of T?
> > But suppose S is the set of even natural numbers. [...]
> As far as I can tell, Russell is only interested in taking finite
> initial segments of N as his urelements and has not yet mentioned a
> connection between cardinality and those urelements.
> Maybe I've missed something, but what you're focusing on here doesn't
> look at all like Russell's work.

Au contraire. Let's go back to the very first post in this thread:

"Let N = 4. It is simple to show there are exactly
2^4 sets. There are 2^(2^4) possible Boolean
expressions with four variables. There can be no
more than 2^64 possible functions or 2^68 proper
classes. This theory is provably finite."

So RE is discussing cardinality -- in particular, counting how
many sets must exist based on how many urelements exist. So
RE seeks a theory which guarantees the existence of 2^4 sets
if there are four urelements and, in general, of 2^n sets if
there are n urelements. (We'll worry about proper classes later,
since RE has changed his definition of proper classes since making
this original post.)

So, continuing my post from above, let S be the set of all standard
natural numbers of the form n+2^n (n of which will be labeled as
"urelements" and 2^n of which will be labeled as "sets"):

S = {1, 3, 6, 11, 20, 37, 70, ...}

Taking a suggestion from William Elliot, we can let the models be
sets of the form U({x,P(x)}) -- that is, x u P(x) -- for some suitable
set x. As Hughes reminds us, RE intends his urelements to correspond
to small natural numbers, so we can let x be an initial segment of N
(which we view as von Neumann naturals). Then the elements of P(x)
(i.e., the subsets of x) correspond to the sets. We exclude the von
Neumann natural number zero, since it would correspond to both a
urelement (being a natural number) and a set (i.e., the empty set).
So x is the set of von Neumann ordinals between 1 and n inclusive,
for some n.

So we have:
"is a urelement" maps to "is a nonzero ordinal"
"is a set" maps to "is a set of nonzero ordinals"
"e" maps to membership _restricted to P(x)_
(Without this restriction, urelements would have elements, since the
von Neumann natural 1 is an element of the von Neumann natural 2.)

Notice that sets can't contain other sets as elements, for "set" means
set of nonzero ordinals. The elements of a set -- nonzero ordinals --
can't themselves be sets of nonzero ordinals, since every nonzero
ordinal contains zero as an element. This guarantees that the sets
x and P(x) are actually disjoint, so that the cardinality of their
binary union really is the sum of their cardinalities, which is n+2^n.
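A quick sketch of this count in Python, with frozensets playing the
role of the sets, ordinary integers standing in for the nonzero
ordinals, and n = 3 as an arbitrary illustrative choice:

```python
from itertools import combinations

n = 3                              # illustrative; any n works
x = set(range(1, n + 1))           # the "urelements": 1, 2, ..., n

# P(x): every subset of x, as a frozenset so it can live in a set
power_set = {frozenset(c)
             for r in range(n + 1)
             for c in combinations(x, r)}

# Integers and frozensets are distinct objects, mirroring the
# disjointness of x and P(x) argued above, so the union has exactly
# n + 2^n elements.
carrier = x | power_set
print(len(carrier))                # 11 = 3 + 2^3, an element of S
assert len(carrier) == n + 2**n
```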

The problem, of course, is that in this case, it's actually easier to
come up with _models_ of the theory than it is to come up with the
_theory_ itself, as I mentioned in that earlier post. We're still
looking for a theory which satisfies all of RE's requirements
regarding sets and urelements -- most notably that there be only
_finitely_ many of them.
From: Transfer Principle on
On Mar 3, 8:21 am, MoeBlee <jazzm...(a)hotmail.com> wrote:
> On Mar 2, 9:46 pm, Transfer Principle <lwal...(a)lausd.net> wrote:
> First, Transfer Prinicple, you blowhard, you lied again about me in
> your previous posts. And, so far, you've not responded to my latest
> requests that you stop doing that.

The reason that I don't promise to stop "lying" is that after I do,
I'd inevitably see a MoeBlee post that I'll consider to be
representative of what I call standard theorist/anti-"crank"
behavior, and then I'd use that post to make a generalization about
standard theorists or anti-"cranks," and that generalization would
be considered a lie. So I'd be making a promise that I know I
wouldn't be able to keep.

(Of course, the easiest way to stop posting "lies" about MoeBlee
on Usenet is just to stop posting on Usenet, period. But calling
me a "liar" isn't going to make me disappear that easily, any more
than calling someone a "crank" makes "cranks" stop posting.)

> You have my contempt for that.

We're opponents, and so I expect nothing less.

> > Now ZF+~AC proves the existence of nonempty sets
> > without choice functions. But according to the standard
> > theory ZFC, every nonempty set has a choice function. So
> > what if I were to claim that therefore, these nonempty
> > objects in ZF+~AC that lack choice functions aren't really
> > sets, so we should call them "rets" or "tets" instead?
> But that said, still, when someone uses ordinary terminology in a way
> that is RADICALLY different

I don't consider RE's use of the terminology to be _radically_
different at all. RE's "urelements" lack elements, and so do
urelements in standard urelement theories like ZFCU or NFU. RE's
natural numbers include "1,2,3," and so do the standard natural
numbers. Indeed, I informally think of the set of RE natural numbers
as being a proper subset of the set of standard natural numbers, and
if M is a subset of N, then the elements of M are still called
natural numbers. M being a finite set doesn't change its elements'
status as natural numbers.

To me, a radical change would be to let "urelements" have elements,
or something like that. That I would consider unreasonable, and it
would justify calling them "rments" instead. Similarly, letting 1/2,
sqrt(2), or pi be "natural numbers" would also justify use of a
different term. Of course, it appears that standard theorists draw
the line at less radical changes than I do (a generalization, which
might be viewed as a "lie").

> Also, for someone such as RussellE who does not understand the
> axiomatic method, using words like 'ret' and 'rment' emphasizes that
> his actual mathematical arguments may not make any use of the
> connotations, suggested associations, and other non-formal baggage
> associated with the terminology, but rather that the formal reasoning
> must be purely from the axioms and definitions (i.e., Hilbert's famous
> 'tables and beer mugs' explanation).

I sort of see what MoeBlee is getting at here.

So if I were to have a theory (in FOL=) whose language contains the
single symbol <= and whose axiom set contains the lone axiom "<=
is a total order," then this would be invalid, since the phrase
"total order" isn't part of the language. This would force "total
order" to become a primitive, and to emphasize that this primitive
doesn't necessarily have the same meaning as in standard theory, we
ought to call it something like "zotal zorder" instead.

Of course, a valid axiom would be something like:

Axyz (x<=x & ((x<=y & y<=x) -> x=y) & ((x<=y & y<=z) -> x<=z) & (x<=y v y<=x))

But what if I were to claim that "<= is a total order" is actually
_shorthand_ for the more formal axiom:

Axyz (x<=x & ((x<=y & y<=x) -> x=y) & ((x<=y & y<=z) -> x<=z) & (x<=y v y<=x))

and that the actual axiom of the theory isn't "<= is a total order"
but:

Axyz (x<=x & ((x<=y & y<=x) -> x=y) & ((x<=y & y<=z) -> x<=z) & (x<=y v y<=x))

instead, yet I choose to write "<= is a total order" because it's
simply easier to read. For it may take a reader up to a minute to
parse every symbol in:

Axyz (x<=x & ((x<=y & y<=x) -> x=y) & ((x<=y & y<=z) -> x<=z) & (x<=y v y<=x))

and realize what it is saying about the relation <=, but one can
understand what "<= is a total order" means in _seconds_. Also,
forcing everyone to write:

Axyz (x<=x & ((x<=y & y<=x) -> x=y) & ((x<=y & y<=z) -> x<=z) & (x<=y v y<=x))

instead of "<= is a total order" reinforces the common "crank" notion
that standard mathematics is all about manipulating symbols and
has nothing to do with the real world. So I'd argue that writing
"<= is a total order" is _superior_ to writing:

Axyz (x<=x & ((x<=y & y<=x) -> x=y) & ((x<=y & y<=z) -> x<=z) & (x<=y v y<=x))

I want to be able to write "<= is a total order" (or similar
statements involving equivalence relations, partial orders, possibly
wellorders depending on the circumstance) with the knowledge that
what I'm writing isn't formally an axiom, but is intended as
_shorthand_ for a longer formal expression that may be too cumbersome
to write (and corresponds to the _standard_ definition of the term
I'm using). But the standard theorists won't budge and insist that
everyone write in symbolic language (a generalization that might be
viewed as a "lie").
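To make the shorthand point concrete: on a finite carrier set, "<= is
a total order" is exactly the kind of statement a machine can expand
and check clause by clause, with one test per conjunct of the long
formula. A hypothetical checker, sketched in Python:

```python
def is_total_order(leq, domain):
    """Check the total-order axioms for the relation leq on a finite
    domain: reflexivity, antisymmetry, transitivity, and totality."""
    for x in domain:
        if not leq(x, x):                            # x <= x
            return False
        for y in domain:
            if leq(x, y) and leq(y, x) and x != y:   # antisymmetry
                return False
            if not (leq(x, y) or leq(y, x)):         # totality
                return False
            for z in domain:
                if leq(x, y) and leq(y, z) and not leq(x, z):
                    return False                     # transitivity
    return True

domain = range(5)
print(is_total_order(lambda a, b: a <= b, domain))  # True
print(is_total_order(lambda a, b: a != b, domain))  # False: not reflexive
```

Either way the axiom is written, this is the content being asserted;
the shorthand and the long formula describe the same four clauses.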
From: Aatu Koskensilta on
MoeBlee <jazzmobe(a)hotmail.com> writes:

> Thanks. You've said 'in a strictly logical sense' a few times. Would
> you amplify what you mean by that in this context?

Do we need the Lorentz group in our physical blather about relativity?
Not in any strictly logical sense, in that in physical applications we
can explain away any general reference to the group, by concentrating on
the concrete physical situation at hand. What we need, in a strictly
logical sense, in our physical thinking, are those basic mathematical
principles -- of a theory conservative over PA, say, in which we can do
stuff with sets of naturals, functions on naturals, reals, what have you
-- without which it is impossible to derive the (particular applications
of) the mathematics we make use of in our analysis of concrete physical
situations. (Here "concrete" is to be understood widely, in a rather
attenuated sense.) This observation, in the philosophy of mathematics,
is essentially a counter to the Quinean idea that classical mathematics
is justified because it is a part of and presupposed in our best
scientific stories, and hence we should accept e.g. infinitary set
theory. Unless one endorses such Quinean follies the observation is of
course perfectly consistent with the view that e.g. large large
cardinals are perfectly fine mathematics and eminently justified.

--
Aatu Koskensilta (aatu.koskensilta(a)uta.fi)

"Wovon man nicht sprechen kann, darüber muss man schweigen"
- Ludwig Wittgenstein, Tractatus Logico-Philosophicus