From: Schoenfeld on
On Mar 3, 12:13 pm, Schoenfeld <schoenfeld.fore...(a)gmail.com> wrote:
> On Mar 3, 5:34 am, "Dono." <sa...(a)comcast.net> wrote:
>
> > On Mar 2, 7:11 am, Schoenfeld <schoenfeld.fore...(a)gmail.com> wrote:
>
> > > since substitution of [5] into the RHS of [4] gives [1]
>
> > > [6]     A'.B' = (TA).(T^-1 B)
> > >               = (T T^-1) A.B
> > >               = A.B
>
> > Matrix multiplication does not commute, so you can't write
>
> > (TA)T^-1=(T T^-1) A
>
> That step is valid using tensors
>
> [6] A'^r B'_r   = A^i (@x'^r/@x^i) B_j (@x^j/@x'^r)
>                 = A^i B_j (@x'^r/@x^i) (@x^j/@x'^r)
>                 = A^i B_j KroneckerDelta_ij
>                 = A^r B_r


TYPO: the above should read KroneckerDelta^i_j
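
For anyone who wants to check the contraction numerically, here is a minimal sketch (the 1+1 boost, the velocity, and the component values are just illustrative assumptions; any invertible T works, since only T T^-1 = I is used):

import numpy as np

v = 0.6                              # boost velocity, c = 1 (illustrative)
g = 1.0 / np.sqrt(1.0 - v**2)        # gamma
T = np.array([[g, -g*v],
              [-g*v, g]])            # T[r, i] = @x'^r/@x^i
Tinv = np.linalg.inv(T)              # Tinv[j, r] = @x^j/@x'^r

A = np.array([2.0, 3.0])             # contravariant components A^i
B = np.array([5.0, -1.0])            # covariant components B_j

Ap = np.einsum('ri,i->r', T, A)      # A'^r = (@x'^r/@x^i) A^i
Bp = np.einsum('jr,j->r', Tinv, B)   # B'_r = (@x^j/@x'^r) B_j

print(np.dot(Ap, Bp), np.dot(A, B))  # both give A^r B_r = 7 (up to rounding)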


NOTE: Standard vector algebra just isn't flexible enough, and you
actually "lose structure". Some people even say Maxwell's original
Electromagnetism (written with Quaternions) lost structure when
Gibbs converted it to standard vector algebra. I wonder what physical
principles exist but remain unknown because they can't be expressed
in Gibbs's representation of Maxwell's E&M.
From: mpalenik on
On Mar 2, 6:25 pm, "Dono." <sa...(a)comcast.net> wrote:
> On Mar 2, 1:58 pm, "Dono." <sa...(a)comcast.net> wrote:
>
>
>
>
>
> > On Mar 2, 12:07 pm, mpalenik <markpale...(a)gmail.com> wrote:
>
> > > On Mar 2, 2:34 pm, "Dono." <sa...(a)comcast.net> wrote:
>
> > > > On Mar 2, 7:11 am, Schoenfeld <schoenfeld.fore...(a)gmail.com> wrote:
>
> > > > > since substitution of [5] into the RHS of [4] gives [1]
>
> > > > > [6]     A'.B' = (TA).(T^-1 B)
> > > > >               = (T T^-1) A.B
> > > > >               = A.B
>
> > > > Matrix multiplication does not commute, so you can't write
>
> > > > (TA)T^-1=(T T^-1) A
>
> > > This is why I should have used indices in my original response.  We
> > > have:
>
> > > T^beta_alpha  (T^-1)^gamma_beta A^alpha B_gamma
>
> > > but T^beta_alpha (T^-1)^gamma_beta = delta^gamma_alpha
>
> > > I hope I wrote that correctly.  It's always hard for me to tell in
> > > ASCII.
>
> > You can also use an interesting property of the dot product in order
> > to achieve the same result:
>
> > <A_matrix*u_vector, B_matrix*v_vector> = <B_transposed*A_matrix*u_vector, v_vector>
>
> > Schoenfeld's derivation will hold only if T_transposed=T (i.e.
> > T=symmetric). Otherwise, it doesn't.
>
> Turns out that the general form of Lorentz matrix IS indeed symmetric:
>
> http://en.wikipedia.org/wiki/Lorentz_transformation#Matrix_form
>
> So, Schoenfeld's proof is now complete.

Also, (T^-1)T = T(T^-1) = I for any matrix and its inverse. But using
indices, as I did and as Schoenfeld did, really is, I think, the best
way to express what happens when you take these products. In this
case, it's which indices in the tensors you decide to match up with
each other and with the vectors that matters, not the arbitrary order
in which you write them down. For me, at least, it makes everything a
lot clearer, and some properties that weren't obvious before become
easier to see.
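
As a rough illustration of that point (a sketch only; einsum stands in for the index notation here, and the matrix and vectors are arbitrary made-up values):

import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(4, 4))          # arbitrary (almost surely invertible) matrix
Tinv = np.linalg.inv(T)
A = rng.normal(size=4)               # A^alpha
B = rng.normal(size=4)               # B_gamma

# T^beta_alpha (T^-1)^gamma_beta A^alpha B_gamma, written in two different orders
s1 = np.einsum('ba,gb,a,g->', T, Tinv, A, B)
s2 = np.einsum('a,g,gb,ba->', A, B, Tinv, T)

print(np.isclose(s1, s2), np.isclose(s1, np.dot(A, B)))  # True True

The answer depends only on which indices are contracted with which, not on the order the factors are written in; the corresponding matrix products, by contrast, have to be kept in a fixed order.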
From: Dono. on
On Mar 2, 6:31 pm, Schoenfeld <schoenfeld.fore...(a)gmail.com> wrote:
> On Mar 3, 12:13 pm, Schoenfeld <schoenfeld.fore...(a)gmail.com> wrote:
>
>
>
> > On Mar 3, 5:34 am, "Dono." <sa...(a)comcast.net> wrote:
>
> > > On Mar 2, 7:11 am, Schoenfeld <schoenfeld.fore...(a)gmail.com> wrote:
>
> > > > since substitution of [5] into the RHS of [4] gives [1]
>
> > > > [6] A'.B' = (TA).(T^-1 B)
> > > >           = (T T^-1) A.B
> > > >           = A.B
>
> > > Matrix multiplication does not commute, so you can't write
>
> > > (TA)T^-1=(T T^-1) A
>
> > That step is valid using tensors
>
> > [6] A'^r B'_r = A^i (@x'^r/@x^i) B_j (@x^j/@x'^r)
> >               = A^i B_j (@x'^r/@x^i) (@x^j/@x'^r)
> >               = A^i B_j KroneckerDelta_ij
> >               = A^r B_r
>
> TYPO: the above should read KroneckerDelta^i_j
>
> NOTE: Standard vector algebra just isn't flexible enough, and you
> actually "lose structure". Some people even say Maxwell's original
> Electromagnetism (written with Quaternions) lost structure when
> Gibbs converted it to standard vector algebra. I wonder what physical
> principles exist but remain unknown because they can't be expressed
> in Gibbs's representation of Maxwell's E&M.



I got it working with standard vectors and matrices; there is no need
for tensors.
Mind you, I am not disputing the validity of the tensorial proof, I
just tightened the matrix-vector proof. This remark is valid for both
you and mpalenik.
It is nice to have a sane conversation, uninterrupted by kooks, once
in a while :-)
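
For what it's worth, a minimal numeric sketch of the matrix-vector version (assuming the 1+1 boost and the <A*u, B*v> = <B_transposed*A*u, v> property quoted above; the numbers are arbitrary):

import numpy as np

v = 0.6
g = 1.0 / np.sqrt(1.0 - v**2)
T = np.array([[g, -g*v],
              [-g*v, g]])            # symmetric boost matrix
Tinv = np.linalg.inv(T)

u = np.array([1.0, 4.0])             # contravariant components
w = np.array([-2.0, 0.5])            # covariant components

lhs = np.dot(T @ u, Tinv @ w)        # <Tu, T^-1 w>
rhs = np.dot(Tinv.T @ T @ u, w)      # <(T^-1)^T T u, w>
print(np.isclose(lhs, rhs), np.isclose(lhs, np.dot(u, w)))  # True True

Because T is symmetric, (T^-1)^T T = T^-1 T = I, which is exactly the step the symmetry requirement buys you.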

From: Dono. on
On Mar 2, 6:45 pm, mpalenik <markpale...(a)gmail.com> wrote:
> On Mar 2, 6:25 pm, "Dono." <sa...(a)comcast.net> wrote:
>
>
>
> > On Mar 2, 1:58 pm, "Dono." <sa...(a)comcast.net> wrote:
>
> > > On Mar 2, 12:07 pm, mpalenik <markpale...(a)gmail.com> wrote:
>
> > > > On Mar 2, 2:34 pm, "Dono." <sa...(a)comcast.net> wrote:
>
> > > > > On Mar 2, 7:11 am, Schoenfeld <schoenfeld.fore...(a)gmail.com> wrote:
>
> > > > > > since substitution of [5] into the RHS of [4] gives [1]
>
> > > > > > [6] A'.B' = (TA).(T^-1 B)
> > > > > >           = (T T^-1) A.B
> > > > > >           = A.B
>
> > > > > Matrix multiplication does not commute, so you can't write
>
> > > > > (TA)T^-1=(T T^-1) A
>
> > > > This is why I should have used indices in my original response. We
> > > > have:
>
> > > > T^beta_alpha (T^-1)^gamma_beta A^alpha B_gamma
>
> > > > but T^beta_alpha (T^-1)^gamma_beta = delta^gamma_alpha
>
> > > > I hope I wrote that correctly. It's always hard for me to tell in
> > > > ASCII.
>
> > > You can also use an interesting property of the dot product in order
> > > to achieve the same result:
>
> > > <A_matrix*u_vector, B_matrix*v_vector> = <B_transposed*A_matrix*u_vector, v_vector>
>
> > > Schoenfeld's derivation will hold only if T_transposed=T (i.e.
> > > T=symmetric). Otherwise, it doesn't.
>
> > Turns out that the general form of Lorentz matrix IS indeed symmetric:
>
> >http://en.wikipedia.org/wiki/Lorentz_transformation#Matrix_form
>
> > So, Schoenfeld's proof is now complete.
>
> Also, (T^-1)T = T(T^-1) = I for any matrix and its inverse.

True, but not germane to the discussion; you really need T to be
symmetric in order for the proof to work. Since the general Lorentz
transform is symmetric, the proof worked out OK in the end.
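
A quick check of that symmetry claim for a pure boost in an arbitrary direction (a sketch assuming the block form Lambda = [[g, -g*b^T], [-g*b, I + (g-1) b b^T / b^2]] from the linked "Matrix form" section; the velocity is illustrative):

import numpy as np

b = np.array([0.3, -0.2, 0.5])       # boost velocity in units of c (illustrative)
b2 = b @ b
g = 1.0 / np.sqrt(1.0 - b2)

L = np.zeros((4, 4))                 # build the boost matrix block by block
L[0, 0] = g
L[0, 1:] = -g * b
L[1:, 0] = -g * b
L[1:, 1:] = np.eye(3) + (g - 1.0) * np.outer(b, b) / b2

print(np.allclose(L, L.T))           # True: the boost matrix is symmetric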





From: mpalenik on
On Mar 2, 10:05 pm, "Dono." <sa...(a)comcast.net> wrote:
> On Mar 2, 6:45 pm, mpalenik <markpale...(a)gmail.com> wrote:
>
>
>
>
>
> > On Mar 2, 6:25 pm, "Dono." <sa...(a)comcast.net> wrote:
>
> > > On Mar 2, 1:58 pm, "Dono." <sa...(a)comcast.net> wrote:
>
> > > > On Mar 2, 12:07 pm, mpalenik <markpale...(a)gmail.com> wrote:
>
> > > > > On Mar 2, 2:34 pm, "Dono." <sa...(a)comcast.net> wrote:
>
> > > > > > On Mar 2, 7:11 am, Schoenfeld <schoenfeld.fore...(a)gmail.com> wrote:
>
> > > > > > > since substitution of [5] into the RHS of [4] gives [1]
>
> > > > > > > [6]     A'.B' = (TA).(T^-1 B)
> > > > > > >               = (T T^-1) A.B
> > > > > > >               = A.B
>
> > > > > > Matrix multiplication does not commute, so you can't write
>
> > > > > > (TA)T^-1=(T T^-1) A
>
> > > > > This is why I should have used indices in my original response.  We
> > > > > have:
>
> > > > > T^beta_alpha  (T^-1)^gamma_beta A^alpha B_gamma
>
> > > > > but T^beta_alpha (T^-1)^gamma_beta = delta^gamma_alpha
>
> > > > > I hope I wrote that correctly.  It's always hard for me to tell in
> > > > > ASCII.
>
> > > > You can also use an interesting property of the dot product in order
> > > > to achieve the same result:
>
> > > > <A_matrix*u_vector, B_matrix*v_vector> = <B_transposed*A_matrix*u_vector, v_vector>
>
> > > > Schoenfeld's derivation will hold only if T_transposed=T (i.e.
> > > > T=symmetric). Otherwise, it doesn't.
>
> > > Turns out that the general form of Lorentz matrix IS indeed symmetric:
>
> > >http://en.wikipedia.org/wiki/Lorentz_transformation#Matrix_form
>
> > > So, Schoenfeld's proof is now complete.
>
> > Also, (T^-1)T = T(T^-1) = I for any matrix and its inverse.
>
> True, but not germane to the discussion; you really need T to be
> symmetric in order for the proof to work. Since the general Lorentz
> transform is symmetric, the proof worked out OK in the end.

Yeah, my mistake. I was thinking for a second that T(T^-1) =
(T^-1)(T+), when it's actually ((T^-1)+)(T+), where + indicates the
transpose.
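
A quick check of that identity, with an arbitrary invertible matrix standing in for T:

import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=(3, 3))          # almost surely invertible
Tinv = np.linalg.inv(T)

# (T T^-1)^+ = ((T^-1)^+)(T^+) = I
print(np.allclose(Tinv.T @ T.T, np.eye(3)))  # True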