From: Osher Doctorow
From Osher Doctorow

Probable Causation/Influence (PI) is strongly related to Memory, of
which Long Memory is a special case, as readers can see from previous
posts in this thread.

Magda Peligrad (U. Cincinnati, USA) and Hailin Sang (National
Institute of Statistical Sciences, Research Triangle Park, North
Carolina, USA) have a paper:

1) "Asymptotic properties of self-normalized linear processes with
long memory," arXiv: 1006.1572 v1 [stat.ME] 8 Jun 2010, 199 pages

which proves that Long Memory can yield asymptotic dependence of
increments for time series, via a type of Central Limit Theorem in
which "normalized" partial sums converge weakly to Fractional
Brownian Motion.

Fractional Brownian Motion is especially nicely presented in Kenneth
Falconer's (U. Bristol, U.K.) "Fractal Geometry: Mathematical
Foundations and Applications," Wiley: Chichester and New York, 1990.
As Falconer points out, there are two standard modifications of
Brownian Motion (a process with stationary, independent,
finite-variance Gaussian increments), namely:

2) Fractional Brownian Motion, which drops the independence of the
increments (so that they are dependent) while keeping them normally
(Gaussian) distributed, as with Brownian Motion.

3) Stable Processes, which have infinite variance.

"Increments" refer to quantities like X(t + h) - X(t) where X is a
random process.
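For readers who want to see the dependent increments concretely, here
is a minimal simulation sketch (my own illustration, not from
Falconer or the paper), in Python with numpy. It generates fractional
Gaussian noise, the increment process of Fractional Brownian Motion,
by Cholesky factorization of its covariance matrix; the Hurst index H
is a free parameter, with H = 0.5 recovering ordinary independent
Brownian increments:

import numpy as np

def fgn_covariance(n, H):
    # autocovariance of fractional Gaussian noise at lag k:
    # gamma(k) = 0.5 * (|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H))
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * H)
                   - 2.0 * np.abs(k) ** (2 * H)
                   + np.abs(k - 1) ** (2 * H))
    i, j = np.indices((n, n))
    return gamma[np.abs(i - j)]

def simulate_increments(n, H, rng):
    # the Cholesky factor turns i.i.d. normals into correlated Gaussians
    L = np.linalg.cholesky(fgn_covariance(n, H))
    return L @ rng.standard_normal(n)

rng = np.random.default_rng(0)
inc = simulate_increments(500, H=0.75, rng=rng)  # dependent Gaussian increments
path = np.cumsum(inc)                            # a Fractional Brownian Motion path
print(np.corrcoef(inc[:-1], inc[1:])[0, 1])      # positive lag-1 correlation for H > 0.5

For H = 0.5 the covariance matrix is the identity and the printed
correlation is near zero, matching the independent increments of
ordinary Brownian Motion.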

I will not go further into this paper now because of the late hour.

Osher Doctorow
From: Osher Doctorow
From Osher Doctorow

Technically, the paper proves that:

1) S_[nt] / B_n converges weakly to Fractional Brownian Motion, where
[nt] is the integer part of nt, S_n = X_1 + ... + X_n is the nth
partial sum of the random variables X_i, and B_n is a normalizing
sequence whose somewhat lengthy definition is not discussed here.
(A small simulation sketch of this statement appears after the
definitions below.)

The time series, a causal linear process {X_k}, is defined by:

2) X_k = sum_{i >= 1} a_i epsilon_{k-i}, where the epsilon_j are
independent and identically distributed with (possibly) infinite
variance, and the a_i for i >= 1 are a sequence of real constants
(the paper imposes further conditions not repeated here; a
simulation sketch follows below).

The Long Memory case studied has:

3) sum_{i >= 1} |a_i| = infinity.
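To make 1)-3) concrete, here is a small simulation sketch (again my
own illustration in Python with numpy, not the paper's construction).
It builds a truncated version of the causal linear process in 2) with
weights a_i = i^(-alpha); for 1/2 < alpha < 1 these weights satisfy
the Long Memory condition 3), sum |a_i| = infinity, while sum a_i^2 <
infinity. The innovations here are Gaussian for simplicity, and since
the paper's B_n is not reproduced here, the sketch uses a crude
stand-in: a Monte Carlo estimate of the standard deviation of S_n.

import numpy as np

def linear_process(n, alpha, rng, trunc=2000):
    # truncated causal MA(infinity): X_k = sum_{i=1}^{trunc} i^(-alpha) * eps_{k-i}
    a = np.arange(1, trunc + 1, dtype=float) ** (-alpha)
    eps = rng.standard_normal(n + trunc)   # i.i.d. innovations (Gaussian here)
    return np.convolve(eps, a, mode="valid")[:n]

rng = np.random.default_rng(1)
n, alpha, reps = 1000, 0.75, 100
# Monte Carlo stand-in for the paper's normalizer B_n
S_n = np.array([linear_process(n, alpha, rng).sum() for _ in range(reps)])
B_n = S_n.std()
X = linear_process(n, alpha, rng)
path = np.cumsum(X) / B_n    # S_[nt] / B_n for t = k/n, k = 1..n
print(path[-1])              # one draw of the approximate limit at t = 1

For these weights, standard results for long memory linear processes
suggest the limit is Fractional Brownian Motion with Hurst index
H = 3/2 - alpha (so H = 0.75 for alpha = 0.75), though the paper
should be consulted for the exact conditions and the true B_n.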

Osher Doctorow
From: jaimie

"Osher Doctorow" <osherdoctorow87(a)gmail.com> wrote in message
news:893e6af2-df07-4184-91d1-08e898c9bb80(a)34g2000prs.googlegroups.com...
> From Osher Doctorow
>
> Probable Causation/Influence (PI) is strongly related to Memory, of
> which Long Memory is a special case, as readers can see from previous
> posts in this thread.

A vast river of impotent blithering, entirely unrelated to
physics, from a google-posting fuckwit with a gmail address.