From: Osher Doctorow

By previous posts in this thread, we have:

1) P(A-->B) = P'(A-->B) iff P(AB) = P(B), provided that P(B) <= P(A)
in P(A-->B).

2) P'(A-->B) >= P(A-->B)

The equation P(AB) = P(B) is equivalent to: P(B is a subset of A) = 1,
that is to say, B is a subset of A except for sets/subsets of
probability 0.
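As a numerical sketch of (1) and (2), using the formulas from this
thread, P(A-->B) = 1 + P(AB) - P(A) and P'(A-->B) = 1 + P(B) - P(A):

```python
# Formulas from this thread:
#   P(A-->B)  = 1 + P(AB) - P(A)
#   P'(A-->B) = 1 + P(B)  - P(A)
def p_implies(p_a, p_ab):
    """P(A-->B) = 1 + P(AB) - P(A)."""
    return 1 + p_ab - p_a

def p_prime_implies(p_a, p_b):
    """P'(A-->B) = 1 + P(B) - P(A)."""
    return 1 + p_b - p_a

# (1): the two coincide exactly when P(AB) = P(B), here with P(B) <= P(A).
assert p_implies(0.6, p_ab=0.4) == p_prime_implies(0.6, p_b=0.4)

# (2): since always P(AB) <= P(B), we get P'(A-->B) >= P(A-->B),
# with strict inequality when P(AB) < P(B).
assert p_implies(0.6, p_ab=0.3) < p_prime_implies(0.6, p_b=0.4)
```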

There are some disadvantages if P(A-->B) = P'(A-->B) as in (1),
because P(AB) = P(B) yields:

3) D_P(A)[P(AB)] = dP(AB)/dP(A) = 0, if we assume that A and B do not
mathematically affect each other (so that P(A) and P(B) vary
independently).
4) D_P(B)[P(AB)] = dP(AB)/dP(B) = 1.

From (3) and (4), A has little if any effect on P(AB), even though A
is the Cause in the expressions P(A-->B) and P'(A-->B).
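A finite-difference sketch of (3) and (4), under the stated
assumption that P(A) and P(B) can be varied independently while
P(AB) = P(B):

```python
# In the subset case of (1), P(AB) = P(B); treat P(A) and P(B) as
# independent variables and differentiate numerically.
def p_ab(p_a, p_b):
    return p_b  # P(AB) = P(B): B a subset of A up to probability 0

h = 1e-6
d_wrt_pa = (p_ab(0.6 + h, 0.4) - p_ab(0.6, 0.4)) / h  # (3): should be 0
d_wrt_pb = (p_ab(0.6, 0.4 + h) - p_ab(0.6, 0.4)) / h  # (4): should be 1
```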

This indicates that the optimum situation for A is when P(AB) does not
equal P(B), in which case A and B are either only partly intersecting
(with the parts outside each other having probabilities > 0) or
disjoint. Let's write that as a Principle:

5) Principle of A-Influence on P(AB): A does not have optimal
influence on P(AB) when B is a subset of A (except for sets of
probability 0), but does have optimal influence when B is not a subset
of A but is either partly intersecting A (with part outside A having
probability > 0) or disjoint from A.

To further study (5), let us consider what happens in (1) if A and B
are disjoint:

6) If P(AB) = 0, that is to say A and B are disjoint except for sets/
subsets of probability 0, then P(A-->B) = 1 + P(AB) - P(A) = 1 - P(A),
while P'(A-->B) = 1 + P(B) - P(A) for P(B) <= P(A). The two are equal
if and only if 1 - P(A) = 1 + P(B) - P(A), that is, P(B) = 0. So in
other words, if P(AB) = 0 then for P(B) <= P(A) we have P(A-->B) =
P'(A-->B) equivalent to P(B) = 0.
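Numerically, with the same formulas, the gap between the two
conditionals in the disjoint case is exactly P(B):

```python
# Sketch of (6): with P(AB) = 0 (A, B disjoint up to probability 0),
#   P(A-->B)  = 1 + 0    - P(A) = 1 - P(A)
#   P'(A-->B) = 1 + P(B) - P(A)
# so the two agree exactly when P(B) = 0.
def gap(p_a, p_b):
    """P'(A-->B) - P(A-->B) when P(AB) = 0; equals P(B)."""
    p_implies = 1 + 0.0 - p_a      # P(AB) = 0
    p_prime = 1 + p_b - p_a
    return p_prime - p_implies

assert gap(0.6, 0.0) == 0.0             # P(B) = 0: the two coincide
assert abs(gap(0.6, 0.2) - 0.2) < 1e-12  # otherwise they differ by P(B)
```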

We now have a further extension of (5):

7) Revised Principle of A-Influence on P(AB): Other than for P(B) = 0
(roughly speaking, B either being the Null Set or a Hologram (an
"infinitely thin" set in some direction/dimension)), A has optimal
influence on AB if 0 < P(AB) < P(B).

Visually, think of A and B as regions in a Venn diagram (such as
circles): partial intersection is, roughly speaking, the ideal
scenario for the influence of A on AB, unless Null Sets or Holograms
are desired or "admissible" for B.
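The cases of the revised Principle (7) can be sketched as a small
classifier; the helper name is illustrative, not from the thread, and
probabilities are read up to sets of probability 0:

```python
def a_influence(p_b, p_ab):
    """Classify A's influence on P(AB) per revised Principle (7).

    Assumes 0 <= p_ab <= p_b and P(B) <= P(A); the name and return
    strings are illustrative, not from the original posts.
    """
    if p_b == 0:
        return "excluded: B null (or an 'infinitely thin' Hologram)"
    if p_ab == 0:
        return "not optimal: A and B disjoint"
    if p_ab == p_b:
        return "not optimal: B a subset of A"
    return "optimal: 0 < P(AB) < P(B), A and B partly intersecting"

# e.g. a_influence(0.4, 0.2) falls in the optimal band 0 < P(AB) < P(B)
```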

Osher Doctorow
From: Osher Doctorow

If A is a wave-particle or field-particle fermion transmitting a boson
to a wave-particle or field-particle fermion B, then according to the
last post, A optimally influences the intersection of A and B when
0 < P(AB) < P(B), which is to say when A and B are partly intersecting
but neither disjoint nor one a subset of the other, ignoring sets B of
probability 0.

Since particles are regarded as ordinarily not intersecting
themselves, either the wave or field of A is PARTLY intersecting the
wave or field of B under optimality conditions of A on their
intersection, ignoring sets B of probability 0. So whatever
interaction is transmitted occurs during the partial-intersection
"phase", suggesting that the boson that transmits the interaction can
be identified with the intersection of A and B or of their
waves/fields. The disappearance of the boson would then occur when A
and B separate, since then P(AB) = 0.

Osher Doctorow
From: Osher Doctorow

We would arguably also want to exclude statistical/probabilistic
independence of A and B from the optimal-influence-of-A-on-AB
scenario, which is to say exclude P(AB) = P(A)P(B). Thus,
0 < P(AB) < P(B) is optimal provided that P(AB) does not equal
P(A)P(B). We could say that if P(AB) = P(A)P(B) (independence),
then A has "sub-optimal" influence on AB, which is not as bad as P(AB)
= P(B) (B being a subset of A up to sets of probability 0). Notice
that E. Lehmann's (late 1960s) Positive Quadrant Statistical
Dependence is equivalent to P(AB) > P(A)P(B), while Negative Quadrant
Statistical Dependence is equivalent to 0 < P(AB) < P(A)P(B). So
these things also tie in.
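A sketch placing P(AB) against the independence value P(A)P(B),
following the quadrant-dependence characterizations above (the
function name and return strings are illustrative):

```python
def dependence(p_a, p_b, p_ab):
    """Place P(AB) relative to the independence value P(A)P(B).

    Per the characterizations quoted above:
    PQD <=> P(AB) > P(A)P(B),  NQD <=> 0 < P(AB) < P(A)P(B).
    """
    indep = p_a * p_b
    if p_ab > indep:
        return "positive quadrant dependence"
    if 0 < p_ab < indep:
        return "negative quadrant dependence"
    if p_ab == indep:
        return "independent: sub-optimal influence of A on AB"
    return "disjoint: P(AB) = 0"

# e.g. dependence(0.6, 0.4, 0.3) -> positive quadrant dependence,
# since 0.3 > 0.6 * 0.4 = 0.24
```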

Osher Doctorow