From: Dean on
Hi All,

Thank you for all your comments, really useful and encouraging.

I think I also prefer the classic non-parametric methods to AR methods.

To my understanding, the model order P should be chosen equal to 2*(number of sinusoids) in the ideal case. However, I get the error message shown below.

The signal is synthesized with no added noise. I don't understand why MATLAB thinks I should use a smaller P. Of course, I will never see three peaks with P = 4.

Cheers,

Dean

----------Error Message ---------------------------------
??? Error using ==> arspectra at 87
The variance estimate of the white noise input to the AR model is negative. Try to lower the order of the AR model.
----------END of Error Message ----------------------



-------------Code-------------------------------------------
reset(RandStream.getDefaultStream);
fs = 1000;                        % sampling rate (Hz)
t = (0:fs/15)/fs;                 % 67 samples at fs
A = [2 1 1];                      % sinusoid amplitudes
f = [140;150;155];                % sinusoid frequencies (Hz)
xn = A*sin(2*pi*f*t);             % noiseless sum of three sinusoids
[Pxx_cov,f]=pcov(xn,6,2048,fs);   % AR(6) covariance-method PSD estimate -> error above
From: Wayne King on
"Dean " <jiangwei0515(a)gmail.com> wrote in message <i3mqog$j8q$1(a)fred.mathworks.com>...
> To my understanding, the model order P should be chosen equal to 2*(number of sinusoids) in the ideal case.
> ...
> The signal is synthesized with no added noise. I don't understand why MATLAB thinks I should use a smaller P. Of course, I will never see three peaks with P = 4.
> ...

Hi Dean, fitting an AR(p) model explicitly models the output, y(n), as
y(n) - a(1)y(n-1) - a(2)y(n-2) - ... - a(p)y(n-p) = e(n)
where e(n) is white noise. The model assumes a white-noise input, which your noiseless sum of sinusoids does not provide.
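
For example (just a rough sketch of my own, not a claim about the right fix): adding a small white-noise term to your synthesized signal gives the AR model the white-noise input it assumes, which should make the negative-variance error go away (whether order 6 then resolves all three peaks from so few samples is a separate question).

reset(RandStream.getDefaultStream);
fs = 1000;
t = (0:fs/15)/fs;
A = [2 1 1];
f = [140;150;155];
xn = A*sin(2*pi*f*t) + 0.01*randn(size(t));   % same signal plus a little white noise (level is arbitrary)
[Pxx_cov,F] = pcov(xn,6,2048,fs);             % order 6 = 2 x 3 sinusoids
plot(F,10*log10(Pxx_cov)); xlabel('Hz'); ylabel('dB');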

To be fair to AR spectral methods, they do have a lot of important uses. There are applications where the physical nature of the signal-generation mechanism makes an AR model a good a priori choice. If you look at human speech analysis, you find a number of applications where AR models are quite useful. It just so happens that, for the things I've used spectral analysis for, nonparametric methods have been more appropriate.

Another thing you might find interesting about AR processes, given your other posts: AR processes have nice closed-form expressions for their PSDs. Therefore, you can generate a realization of an AR process, determine its true PSD, and then test spectral estimators against it. Note that the example below uses an AR(4) process to illustrate the problem of bias in the periodogram:


reset(RandStream.getDefaultStream);
x = randn(1024,1);
%AR(4) Process
A = [1 -2.7607 3.8106 -2.6535 0.9238];
y = filter(1,A,x);                            % realization of the AR(4) process
sigma = 1;                                    % white-noise standard deviation
f = 0:(1/length(x)):0.5;                      % frequency grid (cycles/sample)
% evaluate the AR polynomial A(e^{-j*2*pi*f}) term by term
tmp = zeros(length(2:length(A)),length(f));
for k = 2:length(A)
    tmp(k-1,:) = A(k)*exp(-1j*(k-1)*2*pi*f);
end
tmp = sum(tmp);
denom = abs(1+tmp).^2;                        % |A(e^{-j*2*pi*f})|^2
psdtrue = sigma^2./denom;                     % closed-form AR(4) PSD
h = spectrum.periodogram;                     % rectangular-window periodogram
psdest = psd(h,y,'Fs',1,'NFFT',length(x));
figure;
plot(f,10*log10(psdest.Data)); hold on;
plot(f,10*log10(psdtrue),'r','linewidth',2);

See how the periodogram inflates the estimate above about 0.2 Hz? That is a demonstration of the well-known bias (leakage) effect in the periodogram. Note how the use of a Hamming window alleviates that problem:

hwin = spectrum.periodogram('Hamming');       % Hamming-windowed (modified) periodogram
psdwin = psd(hwin,y,'Fs',1,'NFFT',length(x));
figure;
plot(f,10*log10(psdtrue),'r','linewidth',2);
hold on;
plot(f,10*log10(psdwin.Data));
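
If you prefer the function interface, the same comparison looks roughly like this (just an equivalent sketch; it assumes periodogram and hamming from the Signal Processing Toolbox):

Prect = periodogram(y,[],length(x),1);                    % rectangular-window periodogram
Pham  = periodogram(y,hamming(length(y)),length(x),1);    % Hamming-windowed periodogram
figure;
plot(f,10*log10(psdtrue),'r','linewidth',2); hold on;
plot(f,10*log10(Prect));
plot(f,10*log10(Pham),'k');
legend('true AR(4) PSD','rectangular window','Hamming window');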


Wayne
From: Rune Allnor on
On 8 Aug, 19:50, "Dean " <jiangwei0...(a)gmail.com> wrote:
> Hi All,
>
> Thank you for all your comments, really useful and encouraging.
>
> I think I also prefer the classic non-parametric methods to AR methods.

Agreed. Better than that: There exists a scientific argument to
support that subjective feeling. See below.

> To my understanding, the model order P should be chosen equal to 2*(number of sinusoids) in the ideal case.

Yep indeed!

Congratulations! You are one of the extremely few who are able
to figure that one out on your own! Seriously. For once none of
my irony or sarcasm; merely sheer joy and happiness that somebody
is able to actually figure this one out.

Yes, the model order is the CATCH 22 of DSP. The feasible range of
model orders depends on the number of data points available (the
exact details depend on the method in use). For simplicity, say
P = N/2 where N is the number of data points and P is the maximum
allowable order. Denote the *true* number of sinusoids as D.
For the AR-type predictors to work, D < P. If one uses the Levinson
recursion one can estimate D on the fly, using some sort of
order estimator.

If one contemplates this, one finds that a necessary criterion
for AR-type methods to work is that the relation

D < P = N/2

holds.

In other words, the designer / analyst who configures the AR-type
methods cannot choose N arbitrarily, but must know in advance what
scenario he is working in. Feed the method too few data points and
one ends up with

D >= N/2

and the method unconditionally breaks down.

Just try it. Use N = 12 data points, no noise, and feed
the AR method first data from an AR(1) process, then an AR(2),
then an AR(3), and so on. With the 'usual' model, you will be able
to recover the AR parameters (possibly with errors due to estimator
bias) for models up to and including AR(5). From AR(6) and beyond,
the computations need not make any sense at all.
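
Here is a rough sketch of that experiment in MATLAB (the pole locations and the use of arcov below are my own arbitrary choices, purely for illustration):

reset(RandStream.getDefaultStream);
N = 12;                                    % number of data points
for d = 1:5
    Atrue = poly(linspace(0.3,0.8,d));     % a stable AR(d) with real poles (arbitrary choice)
    x = filter(1,Atrue,randn(N,1));        % N samples from the true AR(d) process
    Ahat = arcov(x,d);                     % try to recover the coefficients (covariance method)
    fprintf('AR(%d): true vs estimated coefficients\n',d);
    disp([Atrue; Ahat])
end
% pushing d beyond 5 with only N = 12 samples is where things can fall apart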

All this forms the argument in support of non-parametric methods:
They might not produce the best results, but they have no such
Achilles heels either. The non-parametrics work where the
parametrics break down.

Rune
From: dbd on
On Aug 8, 10:50 am, "Dean " <jiangwei0...(a)gmail.com> wrote:
> Hi All,
>
> Thank you for all your comments, really useful and encouraging.
>
> I think I also prefer the classic non-parametric methods to AR methods.
>
> ...
> Dean

The classical non-parametric method for tones is really a two-part
process: first locate the peaks of the periodogram, then apply an
interpolation method to the DFT coefficients about each peak (a rough
sketch appears at the end of this post). See:

D. C. Rife and R. R. Boorstyn, “Single tone parameter estimation
from discrete-time observations,” IEEE Trans. Inform. Theory, vol.
IT-20, pp. 591-598, 1974.

D. C. Rife and R. R. Boorstyn, “Multiple tone parameter estimation
from discrete-time observations,” Bell Syst. Tech. J., vol. 55, pp.
1389-1410, 1976.

The stationary tone case has been frequently presented in the IEEE
Trans on Instrumentation and Measurement:

http://wwwtw.vub.ac.be/elec/Papers%20on%20web/Papers/JohanSchoukens/IM92Schoukens-The%20Interpolated.pdf

For the non-stationary case, one of the approaches can be found under
the label "frequency reassignment", for example:

http://perso.ens-lyon.fr/patrick.flandrin/0065-Ch05.pdf

Anyone who limits their comparison of non-parametric techniques to
peak-picking the periodogram is either uninformed or intentionally
misleading. You can find both in the literature.
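
To make the two-part process concrete, here is a rough sketch: pick the periodogram peak, then refine it with a simple quadratic (parabolic) interpolation of the log power about the peak. This is only a crude stand-in for the estimators in the references above, and the tone parameters are arbitrary.

fs = 1000;                              % arbitrary sampling rate (Hz)
N = 256;
t = (0:N-1)/fs;
f0 = 142.3;                             % arbitrary true tone frequency (Hz)
reset(RandStream.getDefaultStream);
x = sin(2*pi*f0*t) + 0.01*randn(1,N);   % tone plus a little noise
X = fft(x);
P = abs(X(1:N/2+1)).^2;                 % one-sided power spectrum
[~,k] = max(P);                         % step 1: pick the peak bin
% step 2: parabolic interpolation of the log power of the three bins about the peak
a = log(P(k-1)); b = log(P(k)); c = log(P(k+1));
delta = 0.5*(a - c)/(a - 2*b + c);      % fractional-bin offset
f_est = (k - 1 + delta)*fs/N;           % refined frequency estimate
fprintf('true %.2f Hz, peak bin %.2f Hz, interpolated %.2f Hz\n', ...
    f0, (k-1)*fs/N, f_est);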

Dale B. Dalrymple