From: geo on
On Apr 16, 12:41 pm, geo <gmarsag...(a)gmail.com> wrote:
> On Apr 15, 10:35 am, steve <kar...(a)comcast.net> wrote:
>
>
>
> > Two items that George did not comment on are subnormal
> > numbers and how he determined the distribution is normal
> > over (0,1].  Does his generator create subnormals?  If
> > I understand Figure D-1 in Goldberg, "What every computer
> > scientist should know about floating-point arithmetic,"
> > the distribution cannot be normal.
>
> Users who insist on getting every possible float
> in the unit interval could form dUNI()*.5^j,
> with j taking values
> 0, 1, 2, 3, ...
> with probabilities
> q, qp, qp^2, qp^3, ...
> where p = 2^(-53) and q = 1-p.
> Even at the rate of 70 million per second,
> you would have to wait an average of over
> four years before you had to multiply by .5,
> 8 years to multiply by .5^2, etc.
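
For concreteness, the quoted scheme might look like this in C.
A sketch only: rand53() stands in as a hypothetical source of
uniform 53-bit integers (it is not part of the posted generator),
and dUNI() is the double generator referred to above.

#include <stdint.h>

extern double   dUNI(void);     /* the k/2^53 generator quoted above  */
extern uint64_t rand53(void);   /* hypothetical uniform 53-bit source */

/* Multiply dUNI() by 0.5^j, where j = 0, 1, 2, ... occurs with
   probability q, qp, qp^2, ..., p = 2^-53 and q = 1-p. */
double dense_uni(void)
{
    double x = dUNI();
    while (rand53() == 0)   /* each test succeeds with probability 2^-53 */
        x *= 0.5;
    return x;
}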

Reconsidering: the above won't help.

It is a sad, but not very sad, fact about this
representation, floated rationals of the form k/2^53:
1/2 of the set of possible k's, say K0,
will provide a full complement of 53 bits,

1/4 of the set of possible k's, say K1,
will provide only 52 effective bits,

1/8 of the set of possible k's, say K2,
will provide only 51 effective bits,
and so on.
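
To make the split concrete: the number of effective bits in
k/2^53 is just the bit length of k, so a few lines of C
(illustrative only) classify any k.

#include <stdint.h>

/* Effective mantissa bits of the double k/2^53 = bit length of k,
   since the low-order mantissa bits are zero-filled when k < 2^52. */
int effective_bits(uint64_t k)
{
    int b = 0;
    while (k) { b++; k >>= 1; }
    return b;
}
/* k >= 2^52 (half of the k's)  -> 53 bits (K0)
   2^51 <= k < 2^52 (a quarter) -> 52 bits (K1)
   2^50 <= k < 2^51 (an eighth) -> 51 bits (K2), and so on. */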

Ensuring that every float has its full complement
of 53 effective bits would seem to require that
a random extra bit be adjoined to those k's in K1,
two extra bits to those in K2, and so on.
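
A sketch of that repair, with rand53() again a hypothetical
53-bit integer source; this is only to make the bookkeeping
explicit, not a proposal to change the posted generator.

#include <stdint.h>

extern uint64_t rand53(void);   /* hypothetical uniform 53-bit source */

/* Pad k with fresh random bits until it has 53 significant bits:
   a k from K1 gets one extra bit, a k from K2 gets two, and so on,
   so the resulting double always carries a full 53-bit mantissa. */
double padded_uni(void)
{
    uint64_t k     = rand53();                /* 0 <= k < 2^53 */
    double   scale = 9007199254740992.0;      /* 2^53 */

    while (k != 0 && k < (UINT64_C(1) << 52)) {
        k     = (k << 1) | (rand53() & 1);    /* adjoin one random bit  */
        scale *= 2.0;                         /* keep the value's scale */
    }
    return (double)k / scale;
}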

But it doesn't seem worthwhile to add those extra
complexities. The average of 52 effective bits
(53/2 + 52/4 + 51/8 + ... = 52) should still be
adequate for most simulations.

Remember that we were forced to go to double
precision because the average of 22 effective
bits fell short---but often not by very much---
in single-precision simulations.

Only floating a 64-bit integer J>2^53
seems to guarantee a full complement
of 53 effective bits in the IEEE representation
of J/2^64, in one fell swoop.
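
A sketch of that one-step conversion (RAND64() is a placeholder
for any uniform 64-bit integer generator, not a name from the
code posted earlier):

#include <stdint.h>

extern uint64_t RAND64(void);   /* placeholder 64-bit integer generator */

/* Float a full 64-bit integer and scale by 2^-64.  The conversion
   to double rounds J to 53 significant bits, so the result carries
   a full 53-bit mantissa whenever J >= 2^53, i.e. all but a 2^-11
   fraction of the time.  (For the very largest J the rounding can
   land exactly on 1.0.) */
double uni53(void)
{
    uint64_t J = RAND64();
    return (double)J * (1.0 / 18446744073709551616.0);   /* J / 2^64 */
}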

George Marsaglia




From: Axel Vogt on
geo wrote:
....
>
> Applications such as in Law or Gaming may
> require enough seeds in the Q[1220] array to
> guarantee that each one of a huge set of
> possible outcomes can appear. For example,
> choosing a jury venire of 80 from a
> list of 2000 eligibles would require at least
> ten 53-bit seeds; choosing 180 from 4000 would
> require twenty 53-bit seeds.
> To get certification, a casino machine that could
> play forty simultaneous games of poker must be
> able to produce forty successive straight-flushes,
> with a resulting minimal seed set.
>
> Users can choose their 32-bit x,y for the
> above seeding process, or develop their own
> for more exacting requirements when a mere
> set of 64 seed bits may not be enough.
>
> Properties:
....
> George Marsaglia
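
A quick check of the seed counts quoted above, added here as an
aside: lgamma() is the standard C library function, and
choose_bits is just an illustrative helper name.

#include <math.h>
#include <stdio.h>

/* Bits of randomness needed to pick k items from n:
   log2 C(n,k) = ( lgamma(n+1) - lgamma(k+1) - lgamma(n-k+1) ) / log(2). */
static double choose_bits(double n, double k)
{
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)) / log(2.0);
}

int main(void)
{
    double b1 = choose_bits(2000, 80);    /* venire of 80 from 2000 */
    double b2 = choose_bits(4000, 180);   /* 180 from 4000          */
    printf("80 of 2000 : %6.1f bits -> %2.0f seeds of 53 bits\n", b1, ceil(b1 / 53));
    printf("180 of 4000: %6.1f bits -> %2.0f seeds of 53 bits\n", b2, ceil(b2 / 53));
    return 0;
}

Run, this should print roughly 480 and 1054 bits, consistent with
the ten and twenty 53-bit seeds mentioned in the quoted text.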

Maybe a somewhat bearish question (after all the
detailed replies):

What is the 'recipe' for (repeated) seeding?

From: steve on
On Apr 17, 7:57 am, geo <gmarsag...(a)gmail.com> wrote:
> On Apr 16, 12:41 pm, geo <gmarsag...(a)gmail.com> wrote:
> ....
>
> Reconsidering: the above won't help.
>
> It is a sad, but not very sad, fact about this
> representation, floated rationals of the form k/2^53:
> 1/2 of the set of possible k's, say K0,
> will provide a full complement of 53 bits,
>
> 1/4 of the set of possible k's, say K1,
> will provide only 52 effective bits,
>
> 1/8 of the set of possible k's, say K2,
> will provide only 51 effective bits,
> and so on.
>
> Ensuring that every float has its full complement
> of 53 effective bits would seem to require that
> a random extra bit be adjoined to those k's in K1,
> two extra bits to those in K2, and so on.
>
> But it doesn't seem worthwhile to add those extra
> complexities.  The average of 52 effective bits
> should still be adequate for most simulations.
>
> Remember that we were forced to go to double
> precision because the average of 22 effective
> bits fell short---but often not by very much---
> in single-precision simulations.
>
> Only floating a 64-bit integer J>2^53
> seems to guarantee a full complement
> of 53 effective bits in the IEEE representation
> of J/2^64, in one fell swoop.

George,

Thanks for the helpful follow-ups. IIUC, your new
generator is sampling only a subset of all possible
floating point values in (0,1]. This subset is then
uniformly distributed in the interval.

--
steve