From: David Bernier on
David Bernier wrote:
> The prime-counting function pi(x) gives the number of primes <= x,
> where x is a real number:
>
> < http://en.wikipedia.org/wiki/Prime-counting_function > .
>
> In the Wikipedia article on the Riemann Hypothesis, there's the statement:
>
> << Von Koch (1901) proved that the Riemann hypothesis is equivalent
> to the "best possible" bound for the error of the prime number
> theorem. >>
>
> [ intuitively, "best possible" reminds me of "assuming the primes are
> distributed pseudo-randomly with n being prime about 1/log(n) of the
> time" ].
>
> The Wikipedia article goes on to state:
>
> Schoenfeld (1976) showed that RH is equivalent to:
> | pi(x) - Li(x) | < 1/(8 pi) sqrt(x) log(x) for all x >= 2657.
>
>
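[ As a numerical sanity check on the Schoenfeld bound quoted above
(taking Li to be the logarithmic integral li, with li(2) ~= 1.0452),
here is a short Python sketch; the sieve limit 100000, the sample
values of x, and the trapezoid-rule integration are just convenient
choices of mine, not part of anyone's argument: ]

```python
# Sanity check of |pi(x) - Li(x)| < (1/(8*pi)) * sqrt(x) * log(x)
# at a handful of x >= 2657.  This checks nothing beyond small x.
import bisect
import math

def primes_upto(n):
    """Sieve of Eratosthenes; returns the sorted list of primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, is_p in enumerate(sieve) if is_p]

def li(x, steps=100000):
    """li(x) = li(2) + integral_2^x dt/log(t), by the trapezoid rule."""
    li2 = 1.045163780117493  # li(2)
    h = (x - 2.0) / steps
    s = 0.5 * (1.0 / math.log(2.0) + 1.0 / math.log(x))
    for k in range(1, steps):
        s += 1.0 / math.log(2.0 + k * h)
    return li2 + s * h

N = 100000
plist = primes_upto(N)
for x in [2657, 10000, 50000, 100000]:
    pi_x = bisect.bisect_right(plist, x)      # pi(x)
    bound = math.sqrt(x) * math.log(x) / (8.0 * math.pi)
    print(x, pi_x, round(li(x), 2), round(bound, 2),
          abs(pi_x - li(x)) < bound)
```

[ The last column is True at each of these x, as it must be if RH holds. ]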
> Following the ideas of Cramér's model
> [ cf. Section 5 of
> http://numbers.computation.free.fr/Constants/Primes/countingPrimes.html
> although not much detail is given]
>
> we could have independent variables Y_2, Y_3, ... Y_n with
> Y_j = 1 with probability 1/log(j) and Y_j = 0 with probability 1 -
> 1/log(j).
>
> If S_n = Y_2 + ... + Y_n, then
> E[S_n] = sum_{j=2 ... n} 1/log(j) and the variance of S_n is
> Var[S_n] = sum_{j=2 ... n} Var[Y_j]
>          = sum_{j=2 ... n} (1/log(j)) * (1 - 1/log(j)).
>
> As n gets larger and larger, the tail part with j large has
> (1 - 1/log(j)) very close to 1, so
>
> K_1 sum_{j=2 ... n} 1/log(j) < Var[S_n] < K_2 sum_{j=2 ... n} 1/log(j)
>
> for some fixed 0 < K_1 < K_2 and for n "sufficiently large" seems
> reasonable.
>
> Next, I'd like to apply the central limit theorem, but the Y_j are not
> identically distributed random variables ...
>
> Anyway, if we take the square root of the variance bounds above
> (noting that sum_{j=2 ... n} 1/log(j) = E[S_n]) we get:
> sqrt(K_1) sqrt(E[S_n]) < sqrt(Var[S_n]) < sqrt(K_2) sqrt(E[S_n]).
>
> If some weak form of the CLT can be applied, oscillations of the r.v.
> S_n around E[S_n] will typically be about sqrt(E[S_n]), to within a
> multiplicative factor of 10 (or 100 for more encompassing validity).
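
[ The model above is easy to simulate. A minimal Python sketch of mine
(starting the sum at j = 3, since 1/log(2) > 1 is not a genuine
probability; n, the number of trials, and the seed are arbitrary): ]

```python
# Monte Carlo experiment with the Cramer-style model: independent Y_j
# with P(Y_j = 1) = 1/log(j), j = 3..n.  The sample standard deviation
# of S_n should be close to sqrt(Var[S_n]), which is itself within
# constant factors of sqrt(E[S_n]).
import math
import random

def simulate_S(n, trials, seed=1):
    """Sample S_n = Y_3 + ... + Y_n over `trials` independent runs."""
    random.seed(seed)
    probs = [1.0 / math.log(j) for j in range(3, n + 1)]
    return [sum(1 for p in probs if random.random() < p)
            for _ in range(trials)]

n, trials = 10000, 300
samples = simulate_S(n, trials)
mean_S = sum(1.0 / math.log(j) for j in range(3, n + 1))       # E[S_n]
var_S = sum((1.0 / math.log(j)) * (1.0 - 1.0 / math.log(j))
            for j in range(3, n + 1))                          # Var[S_n]
emp_mean = sum(samples) / trials
emp_var = sum((s - emp_mean) ** 2 for s in samples) / (trials - 1)
print("E[S_n] = %.1f   sample mean = %.1f" % (mean_S, emp_mean))
print("sqrt(Var[S_n]) = %.2f   sample sd = %.2f   sqrt(E[S_n]) = %.2f"
      % (math.sqrt(var_S), math.sqrt(emp_var), math.sqrt(mean_S)))
```

[ The sample sd comes out close to both sqrt(Var[S_n]) and
sqrt(E[S_n]), as the bounds above suggest it should. ]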
>
> -----
>
> Gauss asked about the number of integral x- and y-coordinate points
> within a circle of radius 'x' centered at (0, 0).


The count is denoted N(x) (or N(r), using 'r' for the radius) in the
Wikipedia article:

< http://en.wikipedia.org/wiki/Gauss_circle_problem > .



> Since the area enclosed is pi*x^2, one would guess about pi*x^2
> integral coordinate points. I remember reading that although the
> error term seems empirically small [ perhaps of the order of x or
> x log(x) ], proving non-trivial upper bounds (as a function of the
> radius x) on the absolute value of the error term is very difficult.


N(x) is smoother than I thought:

< http://www.research.att.com/~njas/sequences/table?a=328&fmt=5 >

for a plot.


and
< http://www.research.att.com/~njas/sequences/A000328 >

for the sequence, formulas, references.
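
[ For what it's worth, N(r) is cheap to compute by brute force. A
small Python sketch comparing it with pi*r^2 and scaling the error by
sqrt(r); the radii are arbitrary choices of mine: ]

```python
# Count lattice points in the disc x^2 + y^2 <= r^2 (N(r) in the
# Gauss circle problem) and look at the error N(r) - pi*r^2.
import math

def N(r):
    """Number of integer points (x, y) with x^2 + y^2 <= r^2."""
    r2 = r * r
    total = 0
    for x in range(-r, r + 1):
        y_max = math.isqrt(r2 - x * x)   # largest |y| allowed for this x
        total += 2 * y_max + 1
    return total

for r in [10, 100, 1000]:
    err = N(r) - math.pi * r * r
    print(r, N(r), round(err, 1), round(err / math.sqrt(r), 2))
```

[ The first few values N(1) = 5, N(2) = 13, N(3) = 29 match A000328. ]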




> So I've been wondering if there are non ad-hoc
> < http://en.wikipedia.org/wiki/Ad_hoc >
> non sparse sequences of integers (or reals)
> with some semblance of pseudo-randomness that
> have large oscillations around the "mean".
>
> N.B. "natural" in the Subject line and "non ad-hoc" mean pretty much
> the same thing.
>
> If there are Q(x) numbers in the sequence <= x, large fluctuations
> could mean some oscillations larger than Q(x)^(1/2 + 1/(10^100)) for
> infinitely many x.
>
> David Bernier
From: David Bernier on
Gerry Myerson wrote:
> In article <hrg9s4017pd(a)news2.newsguy.com>,
> David Bernier <david250(a)videotron.ca> wrote:
>
>> So I've been wondering if there are non ad-hoc
>> < http://en.wikipedia.org/wiki/Ad_hoc >
>> non sparse sequences of integers (or reals)
>> with some semblance of pseudo-randomness that
>> have large oscillations around the "mean".
>>
>> N.B. natural in the Subject line and non ad-hoc mean pretty much the same
>> thing.
>>
>> If there are Q(x) numbers in the sequence <=x, large fluctuations could mean
>> some oscillations larger than Q(x)^(1/2 + 1/(10^100) ) for infinitely many
>> x.
>
> So you want the error term to be larger than the square root of
> the main term by a power of the main term. Maybe like this:


Yes, although cases where somewhat plausible heuristics or
probabilistic methods underestimate the error term would also be interesting.


> Let r(n) be the number of expressions of n as a sum of 3 squares.
> Then r(1) + r(2) + ... + r(n) = (4/3) pi n^(3/2) + O(n).
>

I wonder if you're counting the number of expressions the same way
as in the so-called "sphere problem" of the paper "On the Sphere
Problem" cited below ...

I've been looking through an article by H. Iwaniec and Fernando Chamizo,
"On the Sphere Problem" :

< http://dmle.cindoc.csic.es/pdf/MATEMATICAIBEROAMERICANA_1995_11_02_08.pdf > .

They write that with
f(R) = #{ (m, n, p) in Z^3 such that || (m, n, p)|| <= R } ,

f(R) = (4/3) pi R^3 + O(R^theta) with theta = 4/3 + epsilon is due to
Chen and to Vinogradov, if I'm not mistaken. Chamizo and Iwaniec improve
that to theta = 29/22 + epsilon. Here, sqrt(R^3) = R^(3/2) and

3/2 > 4/3 > 29/22 . Chamizo and Iwaniec write that the conjectured
exponent for the sphere problem is theta = 1 + epsilon. The conjectures
for dimensions two and three [ sphere problem ] are said to
" ( [be] supported by some mean results)". I'm not sure
what that means.
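
[ A brute-force version of their f(R) is easy to tabulate for small R.
This Python sketch just compares the count with (4/3) pi R^3 and
scales the error by R^(3/2); the radii are arbitrary, and of course
nothing about the exponents 4/3 or 29/22 can be seen at this scale: ]

```python
# Count (m, n, p) in Z^3 with m^2 + n^2 + p^2 <= R^2, i.e. f(R) in
# Chamizo-Iwaniec's notation, and look at f(R) - (4/3)*pi*R^3.
import math

def f(R):
    """Lattice points in the closed ball of radius R in Z^3."""
    R2 = R * R
    total = 0
    for m in range(-R, R + 1):
        for n in range(-R, R + 1):
            s = R2 - m * m - n * n
            if s >= 0:
                total += 2 * math.isqrt(s) + 1   # allowed values of p
    return total

for R in [10, 50, 100]:
    err = f(R) - (4.0 / 3.0) * math.pi * R ** 3
    print(R, f(R), round(err, 1), round(err / R ** 1.5, 3))
```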

David Bernier