From: wbbrdr on
On Apr 17, 6:24 pm, Jerry <Cephalobus_alie...(a)comcast.net> wrote:
>
> You can do almost anything you want with bad data, if you are
> willing to ignore proper error analysis.
>
Error analysis that is based on a false premise can also lead to good
data being painted as bad.
I have explained how that happened with Tom Roberts's analysis of
Miller's data.

>
> The flyby anomalies are indeed interesting, but unlike the Pioneer
> anomaly, there is unlikely to be any "new physics" involved.
>
Correct. There is nothing new about in vacuo light-speed anisotropy.
GR predicts it, so it has theoretically existed ever since the start
of GR.
What is new is that Cahill has proved that it can be measured. Of
course all mainstream physicists agree that it cannot be directly
measured, i.e. direct measurements of in vacuo light speed will
always yield c. However, Cahill's latest flyby paper shows that it
can be INDIRECTLY measured via Doppler effects. His earlier papers
showed that it can be INDIRECTLY measured with gas-mode
interferometers. That is, the MEASURED anisotropy of light speed IN
GASES can be used to CALCULATE the in vacuo anisotropy.
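To make the gas-mode rescaling concrete, here is a minimal sketch in
Python, assuming the calibration factor quoted in Cahill's papers,
k^2 = n(n^2 - 1); the 10 km/s apparent-speed figure is illustrative
only:

    # Minimal sketch of the gas-mode rescaling (assumption: Cahill's
    # calibration k^2 = n(n^2 - 1); the 10 km/s figure is illustrative).
    import math

    n_air = 1.00029                        # refractive index of air
    k = math.sqrt(n_air * (n_air**2 - 1))  # gas-mode sensitivity, ~0.0241
    v_apparent = 10.0                      # apparent speed (km/s) from a naive n = 1 calibration
    v_in_vacuo = v_apparent / k            # rescaled in vacuo anisotropy speed
    print(k, v_in_vacuo)                   # ~0.0241, ~415 km/s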
>
> Back to my original question, which you don't want to
> answer, for some reason:
>
I was focusing on other issues, but I have answered now below.

> Suppose I have a theory that the "true curve" should be a sine
> curve half the amplitude of the one suggested by the Miller data,
> but 90 degrees out of phase relative to Figure 16 in Cahill's
> Apeiron paper, i.e. exhibiting a peak where the Miller data
> suggests a trough:
> http://redshift.vif.com/JournalFiles/V11NO1PDF/V11N1CA2.pdf
>
> There are four rotations from the Joos data that I can average
> together to yield a sine curve of the stated characteristics.
> See a reproduction of Joos's curves in the following:
> http://allais.maurice.free.fr/EtudeFuerxer.pdf
>
> Am I justified in selecting these four runs as representing
> "good data" and rejecting the remaining 18 runs, because the
> average of these four match my theory?
>
I would say it depends on whether you have a way to distinguish
between good data and bad data. If you have no way to tell the
difference, you had better accept it all. If you can tell the
difference, you had better reject the bad data.

However, if you reject data, you would have a duty of care to readers
to tell them what you are rejecting and why.
Cahill wrote that:
"Out of 22 rotations, only in the one rotation, at 11 23 58 , does
the data actually look like the form expected. This is probably not
accidental as the maximum fringe shift was
expected at that time,based on the Miller direction of absolute
motion,and the sensitivity of the device was ±1 thousandth of a fringe
shift."

I don't have a problem with that.

From: Jerry on
On Apr 17, 10:25 pm, wbb...(a)gmail.com wrote:
> On Apr 17, 6:24 pm, Jerry <Cephalobus_alie...(a)comcast.net> wrote:
>
> > Suppose I have a theory that the "true curve" should be a sine
> > curve half the amplitude of the one suggested by the Miller data,
> > but 90 degrees out of phase relative to Figure 16 in Cahill's
> > Apeiron paper, i.e. exhibiting a peak where the Miller data
> > suggests a trough:
> > http://redshift.vif.com/JournalFiles/V11NO1PDF/V11N1CA2.pdf
>
> > There are four rotations from the Joos data that I can average
> > together to yield a sine curve of the stated characteristics.
> > See a reproduction of Joos's curves in the following:
> >  http://allais.maurice.free.fr/EtudeFuerxer.pdf
>
> > Am I justified in selecting these four runs as representing
> > "good data" and rejecting the remaining 18 runs, because the
> > average of these four match my theory?
>
> I would say it depends on whether you have a way to distinguish
> between good data and bad data. If you have no way to tell the
> difference, you had better accept it all. If you can tell the
> difference, you had better reject the bad data.
>
> However, if you reject data, you would have a duty of care to readers
> to tell them what you are rejecting and why.
> Cahill wrote that:
>  "Out of 22 rotations, only in the one rotation, at 11 23 58 , does
> the data actually look like the form expected. This is probably not
> accidental as the maximum fringe shift was
> expected at that time,based on the Miller direction of absolute
> motion,and the sensitivity of the device was ±1 thousandth of a fringe
> shift."
>
> I don't have a problem with that.

1) Joos specifically REJECTED run 11 conducted at 23:58 as a
probable instrumental artifact because the amplitude of the
fringe shifts during this run was extremely inconsistent with
the fringe shifts observed in the twenty-one other runs.
2) Cahill specifically ACCEPTED run 11 conducted at 23:58 and
REJECTED all other runs because run 11 agreed with his
theory and none of the other runs did.
3) I specifically ACCEPTED four of the runs and REJECTED eighteen
runs because the average of these four runs produced a curve
that agreed with my theory, and including more runs worsened
the fit.

Do you notice any difference in the way Joos rejected certain
data versus the way Cahill did and the way that I did?

Joos's accept/reject criteria were based on considerations of
instrumental stability and the internal consistency of the
data. Cahill's accept/reject criteria were based on whether the
data fit his theory. My accept/reject criteria were based on
whether the data fit -my- theory.

Joos's rejection of a single outlier used theory-independent
criteria, and is considered an acceptable practice.

Cahill's selection of the run that Joos specifically rejected
was motivated by his belief that this run supported his
theory. Theory-dependent data selection is considered an
unacceptable practice.

My selection of four nondescript runs was motivated by the
fact that these runs support my theory. Theory-dependent data
selection is considered an unacceptable practice.
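To make the distinction concrete, here is a minimal sketch of a
theory-independent cut of the kind Joos applied; the run amplitudes
and the 2-sigma threshold are hypothetical, purely for illustration:

    # Hedged sketch: theory-independent rejection based on internal
    # consistency of run amplitudes (all numbers are hypothetical).
    import statistics

    amplitudes = [1.1, 0.9, 1.0, 1.2, 0.8, 4.7, 1.0, 1.1]  # per-run fringe amplitudes
    mean = statistics.mean(amplitudes)
    sd = statistics.stdev(amplitudes)

    # Reject a run only if it is wildly inconsistent with the others,
    # with NO reference to any theory's predicted curve (2-sigma cut).
    kept     = [a for a in amplitudes if abs(a - mean) <= 2 * sd]
    rejected = [a for a in amplitudes if abs(a - mean) >  2 * sd]
    print(kept, rejected)   # only the 4.7 outlier is rejected

    # Theory-DEPENDENT selection, by contrast, would keep whichever
    # runs best matched a predicted curve; that is the objectionable
    # practice.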

Jerry
From: Surfer on
On Apr 17, 1:50 pm, Tom Roberts <tjroberts...(a)sbcglobal.net> wrote:
> wbb...(a)gmail.com wrote:
> > On Apr 17, 6:01 am, Tom Roberts <tjroberts...(a)sbcglobal.net> wrote:
> >> You must READ MY PAPER. I derived an errorbar
>
> > by misinterpreting signal fluctuations as measurement errors.
> > So your errorbar is completely invalid.
>
> Not true. My errorbar is derived directly from the raw data, without any
> signal dependence at all. It shows that the DATA THEMSELVES, using
> Miller's analysis technique, are not capable of displaying any
> orientation dependence (the errorbars from the averaging significantly
> exceed the variation in the averages). READ MY PAPER to see this.
>
> > On Apr 17, 10:03 am, Tom Roberts <tjroberts...(a)sbcglobal.net> wrote:
> >> > Bottom line: the patterns Miller found are not real, they are
> >> > INSIGNIFICANT.
>
> > Owing to the false premise in your paper you haven't proved that.
>
> There is no "false premise" --

Here is my explanation of your false premise.
==========================
The location of Tom Roberts's false premise:

The caption under Fig. 3 says:

"The assumed-linear systematic drift from the data of Fig. 1.
The lines are between successive Marker 1 values and the points are
Marker 9. These markers are 180 degrees apart, so any real signal has
the same value for every corner and every point; the variations are
purely an instrumentation effect."

This statement is FALSE, because measurements at Marker 1 and Marker 9
were not made simultaneously. So any real FLUCTUATING signal would
have different values at the two markers.

Light-speed anisotropy caused by motion through a calm and stable
medium would give rise to a non-fluctuating signal, but while it's
reasonable to suppose that light does propagate through a medium,
there is NO guarantee that such a medium would be calm and stable.

Hence if we wish to discover if a medium exists, the assumption that
it is calm and stable is unwarranted.
=========================
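A toy simulation illustrates the point; everything here (the period,
amplitude, and timing) is invented purely to show the effect:

    # Toy illustration: a time-fluctuating signal read at Marker 1 and
    # Marker 9 (half a rotation later) gives DIFFERENT values, so the
    # marker-to-marker difference is not purely instrumental drift.
    # All parameters are invented for illustration.
    import math

    def signal(t, period=600.0, amp=0.05):
        # a slowly fluctuating "real" signal, e.g. a turbulent medium
        return amp * math.sin(2 * math.pi * t / period)

    t_marker1 = 0.0    # time at which Marker 1 is read (s)
    t_marker9 = 12.5   # Marker 9 read half a rotation later (s)

    diff = signal(t_marker9) - signal(t_marker1)
    print(diff)   # nonzero: real fluctuation leaks into the "drift"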
>
> what I did is to apply standard
> statistical techniques to the average of a set of data points to derive
> an errorbar on the average.
>
How could you get a valid errorbar doing that, if every data point
just happened to be dead accurate?
You must have made some assumption as to how to distinguish signal
from error.
I think the caption under Figure 3 explains what you did.
But it's a false premise, as I explained.
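For what it's worth, the standard recipe you describe looks like this
in miniature (the readings are hypothetical); note that it treats ALL
scatter as error, which is exactly the assumption in dispute:

    # Hedged sketch of an errorbar on an average (hypothetical data).
    # A coherent fluctuating signal inflates this errorbar exactly as
    # measurement noise would, so the recipe cannot tell them apart.
    import statistics

    readings = [0.021, 0.035, 0.018, 0.042, 0.027]  # repeated measurements (made up)
    mean = statistics.mean(readings)
    sem = statistics.stdev(readings) / len(readings) ** 0.5  # standard error of the mean
    print(mean, "+/-", sem)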

From: Florian on
Surfer <wbbrdr(a)gmail.com> wrote:

> On Apr 17, 1:50 pm, Tom Roberts <tjroberts...(a)sbcglobal.net> wrote:

> > what I did is to apply standard
> > statistical techniques to the average of a set of data points to derive
> > an errorbar on the average.
> >
> How could you get a valid errorbar doing that, if every data point
> just happened to be dead accurate?
> You must have made some assumption as to how to distinguish signal
> from error.
> I think the caption under Figure 3 explains what you did.
> But it's a false premise, as I explained.

I think there is a way to discriminate between a noisy regular signal
that is accurately measured and pure noise.
You have to check the variation in the amplitude of the errorbars. If
the amplitude remains constant, then there is a good chance that there
is a noisy signal.

For example, looking at Fig 5 (*), I would say that there is a noisy
signal. But you have to calculate the variation of the errorbars to
verify it. Still, that would not be a comfortable proof of a real
signal...

(*) <http://arxiv.org/abs/physics/0608238v3>
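A minimal sketch of that check, with invented per-orientation
errorbar sizes and an arbitrary constancy threshold:

    # Hedged sketch of the proposed check: if errorbar amplitudes stay
    # roughly constant across orientations, a noisy but real signal is
    # plausible; wildly varying amplitudes look more like pure noise.
    # The data and the 0.2 threshold are invented for illustration.
    import statistics

    errorbars = [0.011, 0.012, 0.010, 0.013, 0.011, 0.012]  # per-orientation sizes
    cv = statistics.stdev(errorbars) / statistics.mean(errorbars)
    print("roughly constant" if cv < 0.2 else "highly variable")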

--
Florian
"Toute v�rit� passe par trois phases. D'abord, elle est ridiculis�e;
ensuite, elle rencontre une vive opposition avant d'�tre accept�e comme
une totale �vidence" - Arthur Schopenhauer
From: Dono on
On Apr 17, 8:25 pm, wbb...(a)gmail.com wrote:

> However, Cahill's latest flyby paper shows that it can be
> INDIRECTLY measured via doppler effects.

You mean by using Cahill's Newtonian explanation for the Doppler
effect mixed with a ballistic interpretation of relativity, Bozo?