From: Mike Terry on
"LauLuna" <laureanoluna(a)yahoo.es> wrote in message
news:36cb4819-04fd-4dbe-b204-7168ae550e23(a)p8g2000yqb.googlegroups.com...
> On Nov 23, 12:22 pm, Herman Jurjus <hjm...(a)hetnet.nl> wrote:
> > Has anyone seen this before?
> >
> > http://possiblyphilosophy.wordpress.com/2008/09/22/guessing-the-resul...
> >
> > I'm not sure yet what to conclude from it; that AC is horribly wrong, or
> > that WM is horribly right, or something else altogether.
> >
> > In short the story goes like this:
> >
> > A game is played, in which infinitely many coins are tossed, and there's
> > one player, who makes infinitely many guesses. Both are done over a
> > finite period of time. The tosses and guesses are not made faster and
> > faster, however, but slower and slower: at t = 1/n. There's no 'first'
> > move.
> >
> > Claim:
> > There exists a strategy with which you're certain to guess all entries
> > correctly except for at most finitely many mistakes. Not 'certain' as in
> > 'probability is 100%', but absolutely certain.
> >
> > Reasoning:
> > On 2^w, consider the equivalence relation that makes x equivalent to y
> > when x(n) =/= y(n) for at most finitely many n. Next, using AC, create a
> > set S that contains precisely one element from every equivalence class.
> > Strategy: at every move, you already know the results of the previous
> > tosses, which is an infinite tail of some sequence in 2^w. Now take the
> > unique element from S associated to that tail, take the n'th element of
> > that sequence from S, and deliver that as your move.
> > After some thinking, you will see that with this strategy, you're indeed
> > certain to guess wrong at most finitely many times.
> >
> > Thanks, AC! Another nice mess you've gotten us into.
> >
> > --
> > Cheers,
> > Herman Jurjus
>
> In fact, you don't need the axiom of choice.
>
> At any 1/n hour past 12pm it is already determinate what equivalence
> class the eventual sequence is in. Since you know what the previous
> results are and there are only finitely many outstanding results, you
> can complete the sequence at random and take it as your
> representative. Which means that from any 1/n hr past 12pm on you can
> guess at random.

...but then how would you prove that there have been only a finite
number of incorrect guesses? It is because the guesses always follow
the representative sequence (whose existence follows from AC) that we
can deduce at the end that we've made only finitely many mistakes.
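
Here is a toy sketch (Python; names like representative_digit are
mine, purely for illustration) of why following a fixed representative
bounds the mistakes.  A genuine choice function on 2^w can't actually
be computed, so the sketch cheats: the "truth" sequences used are all
eventually zero, hence lie in a single equivalence class, whose chosen
representative we take to be the all-zero sequence.

    import random

    N = 1000                    # look at the first N positions only
    truth = [0] * N
    for i in random.sample(range(N), 7):   # finitely many 1s
        truth[i] = 1

    def representative_digit(n, observed_tail):
        # The tail truth[n+1:] pins down the equivalence class; the
        # chosen representative of that class here is the all-zero
        # sequence, so the guess at position n is its n-th digit: 0.
        return 0

    mistakes = sum(1 for n in range(N)
                   if representative_digit(n, truth[n+1:]) != truth[n])
    print("mistakes:", mistakes)
    # A mistake can occur only at a position where the truth differs
    # from the representative, and by definition of the equivalence
    # relation there are only finitely many such positions.

Purely random guesses give no such bound: the guesses already made
need not agree with any single representative sequence.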

>
> That is, you can make all your guesses at random and still be certain
> to guess wrong only finitely many times.
>

Your proof for this doesn't work...

Mike.




From: Tim Little on
On 2009-11-25, LauLuna <laureanoluna(a)yahoo.es> wrote:
> At any 1/n hour past 12pm it is already determinate what equivalence
> class the eventual sequence is in. Since you know what the previous
> results are and there are only finitely many outstanding results,
> you can complete the sequence at random and take it as your
> representative. Which means that from any 1/n hr past 12pm on you
> can guess at random.

That strategy fails for the same reason as the previous non-choice
strategy. The choice sequence works only because you provably used
the same sequence in the *past*, and hence at the time of your
decision had already made only finitely many errors.


> That is, you can make all your guesses at random and still be
> certain to guess wrong only finitely many times.

No, in your case you can't prove anything at all about how many of
your past guesses were correct.


> The following is a version of Benardete's. Consider an infinite past
> with a gong peal occurring each day, and a hearer who is deafened by
> a peal iff no previous peal has already deafened him. The hearer must
> have been deaf from eternity, which, paradoxically, implies that no
> gong peal deafens him.

Yes, mixing in other types of paradox is one reason why I prefer other
formulations of this kind of AC problem.


- Tim
From: Bill Taylor on
"Jesse F. Hughes" <je...(a)phiwumbda.org> wrote:

> Well, I'm not at all sure that there's no problem with forward
> supertasks. Surely, it is not difficult to come up with a
> problematic case.

Yes, the worth of supertasks as indicators of philosophical
concerns is very much up in the air. Some seem relevant, others
just stupid. Perhaps (temporally) well-ordered supertasks are
more sensible than most. But I doubt that's all there is to it.

One of my favourites is this; for naturals n:- compare...

a) At each time 1 - 1/n, add balls numbered
2^(n-1) to 2^n - 1 to the pot, and remove ball number n.

b) At each time 1 - 1/n, add 2^(n-1) - 1 balls to the pot, and
replace the numbering stickers in agreement with case (a).

After time 1:
the final situation in case (a) is that the pot is empty.
in case (b), the pot has infinitely many balls with no stickers!

And yet at any intermediate time the two cases are indistinguishable!

This sort of example shows that even omega-supertasks
can be remarkably silly!
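
A finite check of case (a) makes the tension vivid (a sketch of mine,
in Python, and of course only for the first 20 stages):

    pot = set()
    for n in range(1, 21):
        pot.update(range(2**(n-1), 2**n))  # add balls 2^(n-1)..2^n - 1
        pot.discard(n)                     # remove ball number n
        print(n, len(pot), min(pot) if pot else "-")

    # After stage n the pot is exactly {n+1, ..., 2^n - 1}: its size
    # explodes, yet any fixed ball k has left by stage k, which is
    # why the limit, taken ball by ball, is the empty pot.

Case (b) shows the very same counts at every finite stage, which is
exactly the point.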

> Now alter the situation slightly. At each step, again place 10 balls
> into the vase and then remove one ball, but remove the ball
> *randomly*. At the end, the vase may contain any number of balls

Actually NOT. The pot will be empty(!) [with probability 1]

For any ball, the probabilities of its being removed at successive
steps behave like a harmonic series, which sums to oo, which by
Borel-Cantelli means it will be removed for sure. [meaning
probability one, as always here]

HOWEVER - if you add (say) 1, 4, 9, 16... balls per turn,
and again remove one at random, each turn, then (Borel-Cantelli)
each ball has a positive probability of being left behind.

It is an interesting problem in probability generating functions
to work out the individual probabilities for each ball, and the
expected number of balls left in the pot at the end!
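
As a purely numerical illustration (mine, and only of the partial
products, not a generating-function treatment), one can track the
chance that a particular ball added on turn 2 is still in the pot
after N turns.  (Turn 2 rather than turn 1, since in the 1, 4, 9, ...
case the lone ball of turn 1 is removed immediately.)

    def survival(balls_added, start, N):
        # chance that one particular ball added on turn `start`
        # is still in the pot after turn N
        in_pot, p = 0, 1.0
        for n in range(1, N + 1):
            in_pot += balls_added(n)       # this turn's new balls
            if n >= start:
                p *= 1.0 - 1.0 / in_pot    # our ball escapes removal
            in_pot -= 1                    # one ball removed at random
        return p

    for N in (10**2, 10**4, 10**6):
        print(N, survival(lambda n: 10, 2, N),      # 10 per turn
                 survival(lambda n: n * n, 2, N))   # 1, 4, 9, ... per turn

The first column of probabilities keeps drifting towards 0 (slowly,
roughly like N^(-1/9)); the second settles down to a positive limit,
matching the divergent/convergent sums above.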

-- Borellic Bill
From: Bill Taylor on
> Apparently there's something wrong with backward supertasks (and not
> with ordinary, 'forward' supertasks). But why should that be?

So not only can Achilles not catch up with the tortoise,
but he can't even get off the starting line!! :)

-- Bionic Bill
From: Daryl McCullough on
Bill Taylor says...

>One of my favourites is this; for naturals n:- compare...
>
>a) At each time 1 - 1/n, add balls numbered
> 2^(n-1) to 2^n - 1 to the pot, and remove ball number n.
>
>b) At each time 1 - 1/n, add 2^(n-1) - 1 balls to the pot, and
> replace the numbering stickers in agreement with case (a).
>
>After time 1:
> the final situation in case (a) is that the pot is empty.
> in case (b), the pot has infinitely many balls with no stickers!
>
>And yet at any intermediate time the two cases are indistinguishable!
>
>This sort of example shows that even omega-supertasks
>can be remarkably silly!

Actually, I think that they are fun to think about.
What this example (and similar ones) shows is that
for tasks involving a transfinite number of steps,
you have to be more precise about what you are doing.
In order for mathematics to tell us what the result
of performing some supertask is, you have to describe
the supertask using standard mathematical objects
(typically sets). There are modeling choices that
have to be made, and the answer can depend on the
modeling choice. If it does depend on the modeling
choice, that means that the original supertask was
insufficiently specified.

The big ambiguity is how to compute limit states.
Associated with each ordinal alpha, there is a
corresponding state of the system, S_alpha. The
statement of the supertask explains how to go
from S_alpha to S_{alpha+1}. But that tells us
nothing about the state S_alpha when alpha is
a limit ordinal.

There are certain assumptions about the limit
states that are so "obvious" that they seem
to go without saying. For instance: "If a
ball is added at stage alpha_1, and is never
removed at any stage beta such that
alpha_1 < beta < alpha_2, and alpha_2 is a
limit ordinal, then the ball is present at
stage alpha_2." But that's an assumption about
the limit state. To really reason about supertasks,
you have to state all the assumptions about
limit states.
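
One explicit convention (my own gloss here, not something forced on
us by the supertask itself) is to take the limit state to be the
liminf of the earlier states:

    S_lambda = { b : there is an alpha < lambda such that
                     b is in S_beta for every beta with
                     alpha <= beta < lambda }

i.e. exactly the balls that are in the pot from some stage onwards.
Under that convention the "obvious" assumption above does hold, and
case (a) of the ball-and-sticker example really does end with an
empty pot; a different rule for S_lambda could give a different
answer.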




>
>> Now alter the situation slightly. At each step, again place 10 balls
>> into the vase and then remove one ball, but remove the ball
>> *randomly*. At the end, the vase may contain any number of balls
>
>Actually NOT. The pot will be empty(!) [with probability 1]
>
>For any ball, the probability of it being removed is like a harmonic
>series, which sums to oo, which by Borel-Cantelli means it will
>happen for sure. [meaning probability one, as always here]
>
>HOWEVER - if you add (say) 1, 4, 9, 16... balls per turn,
>and again remove one at random, each turn, then (Borel-Cantelli)
>each ball has a positive probability of being left behind.

Very interesting.

--
Daryl McCullough
Ithaca, NY