From: John Baez on
Also available at http://math.ucr.edu/home/baez/week295.html

April 16, 2010
This Week's Finds in Mathematical Physics (Week 295)
John Baez

This week I'll talk about the principle of least power, and Poincare
duality for electrical circuits, and a generalization of Hamiltonian
mechanics that people have introduced for dissipative systems. But
first....

Now and then the world does something that forcefully reminds us of
its power. As you probably know, the Eyjafjallajökull volcano in
Iceland is emitting a plume of glass dust which has brought air
traffic to a halt over much of Europe. This dust is formed as lava
hits cold water and shatters. When sucked into a jet engine, it can
heat up to about 1400 degrees Celsius and re-melt. And when it cools
again, it can stick onto the turbine blades.

This is not good. In 1982, a British Airways Boeing 747 flew through
an ash cloud created by a volcano in Indonesia. All four engines cut
out. The plane descended from 11,000 meters to 3,700 meters before the
engines could be restarted. Whee!

Here's a picture of the Eyjafjallajökull plume, taken yesterday by
NASA's "Aqua" satellite:

1) NASA, Ash plume from Eyjafjallajokull Volcano over the North Atlantic
(afternoon overpass), http://rapidfire.sci.gsfc.nasa.gov/gallery/?2010105-0415

Here's what the volcano looked like back in March:

2) Bjarni T, 2010 Eruptions of Eyjafjallajökull,
http://www.fotopedia.com/en/2010_eruptions_of_Eyjafjallaj%C3%B6kull/slideshow/sort/MostVotedFirst/status/default/photos

Starting around 1821, the same volcano erupted and put out ash for
about 6 months. What will it do this time? Nobody seems to know.
If it goes on long enough, will people invent some sort of ash filter
for jet engines?

Oh well. Back to electrical circuits...

I want to explain the "principle of minimum power" and how we can use
it to understand electrical circuits built from linear resistors. In
future Weeks this will lead us to some symplectic geometry, complex
analysis and loop groups. But I want to start with some very basic
stuff! I want to illustrate the principle of minimum power by using
it to solve two basic problems: resistors in series and resistors in
parallel. But first I should work out the answers to these problems
using a more standard textbook approach - just in case you haven't
seen this stuff already.

In the textbook approach, we'll use Kirchhoff's voltage and current
laws over and over again. I explained these laws in "week293" and
"week294" - so if necessary, you can either review what I said there,
or just nod and act like you understand what I'm doing.

First, suppose we have two resistors "in series". This means they're
stuck together end to end, like this:

      |
      |
    ------
    | R1 |
    ------
      |
      |
    ------
    | R2 |
    ------
      |
      |

What happens when we put a voltage across this circuit? How much
current will flow through?

To answer this, fix the voltage across the whole circuit, say V.
By Kirchhoff's voltage law, this is the sum of the voltages across
the individual resistors, say V1 and V2:

      |
      |
    ------
    | R1 |  V1
    ------
      |          V = V1 + V2
      |
    ------
    | R2 |  V2
    ------
      |
      |

Next let's think about the current flowing through each resistor. By
Kirchhoff's current law, the current through the first resistor must
equal the current through the second one. So, let's call this current
I in each case:

      |
      |
    ------
  I | R1 |  V1
    ------
      |          V = V1 + V2
      |
    ------
  I | R2 |  V2
    ------
      |
      |

Now, Ohm's law says that the voltage across a linear resistor equals
the current through it times its resistance. Let's say our resistors
are linear. So, we get:

I R1 = V1

and

I R2 = V2

Adding these two equations we get:

I (R1 + R2) = V

This looks like Ohm's law again, but now for a resistor with
resistance R1 + R2.

The moral: two resistors in series act like a single resistor whose
resistance is the sum of theirs!
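
If you like, here's a quick numerical check of this - a minimal sketch
in Python, with made-up numbers:

  # Two resistors in series: fix a current I, compute the voltages
  # from Ohm's law, and check that V/I equals R1 + R2.
  R1, R2 = 100.0, 220.0     # ohms (arbitrary example values)
  I = 0.05                  # amperes

  V1 = I * R1
  V2 = I * R2
  V = V1 + V2

  print(V / I)              # 320.0, which is R1 + R2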

Next, suppose we have two resistors "in parallel". This means they're
stuck together side by side, like this:

             /\
            /  \
           /    \
          /      \
         /        \
        /          \
       /            \
      ------      ------
      | R1 |      | R2 |
      ------      ------
       \            /
        \          /
         \        /
          \      /
           \    /
            \  /
             \/

What happens when we make some current flow through this circuit?
What will the voltage across it be?

To answer this, fix the current through the whole circuit, say I.
By Kirchhoff's current law, this is the sum of the currents through
the individual resistors, say I1 and I2:

             /\
            /  \
           /    \
          /      \
         /        \
        /          \
       /            \
      ------      ------
   I1 | R1 |   I2 | R2 |   I = I1 + I2
      ------      ------
       \            /
        \          /
         \        /
          \      /
           \    /
            \  /
             \/

Next let's think about the voltage across each resistor. By Kirchhoff's
voltage law, the voltage across the first resistor must equal the voltage
across the second one. So, let's call this voltage V in each case:

             /\
            /  \
           /    \
          /      \
         /        \
        /          \
       /            \
      ------      ------
   I1 | R1 | V I2 | R2 | V   I = I1 + I2
      ------      ------
       \            /
        \          /
         \        /
          \      /
           \    /
            \  /
             \/

Now, Ohm's law says that the current through a linear resistor equals
the voltage across it divided by its resistance. So, if our resistors
are linear, we get

I1 = V / R1

and

I2 = V / R2

Adding these two equations we get:

I = V (1/R1 + 1/R2)

In our previous problem we were adding up resistances. Now we're
adding up reciprocals of resistances. Luckily, there's a name for
the reciprocal of a resistance: it's called an "admittance".

The moral: two resistors in parallel act like a single resistor whose
admittance is the sum of theirs!
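
Again, here's a minimal numerical check in Python, with made-up numbers:

  # Two resistors in parallel: fix a voltage V, compute the currents
  # from Ohm's law, and check that I/V equals 1/R1 + 1/R2.
  R1, R2 = 100.0, 220.0     # ohms (arbitrary example values)
  V = 12.0                  # volts

  I1 = V / R1
  I2 = V / R2
  I = I1 + I2

  print(I / V)              # 1/R1 + 1/R2 = 0.014545...
  print(V / I)              # the equivalent resistance, R1 R2/(R1 + R2) = 68.75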

And there's also another moral. If you compare this problem to the
previous one, you'll see that everything was almost exactly the same!
In fact, I repeated a lot of sentences almost word for word. I just
switched certain concepts, which come in pairs:

current and voltage
series and parallel
resistance and admittance

In fact, switching concepts like this is an example of Poincare
duality for electrical circuits, as mentioned in "week291".

You may know Poincare duality for graphs drawn on a sphere: you get a
new graph from an old one by:

drawing a new vertex in the middle of each old face,
replacing each edge with a new one that crosses the old one, and
drawing a new face centered at each old vertex.
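
If you happen to know the faces of your graph, this construction is easy
to carry out combinatorially: the dual gets one vertex per face, and one
edge per original edge, joining the two faces that edge separates. Here's
a minimal sketch in Python; the example is the cube drawn on the sphere,
whose dual comes out to be the octahedron. (The data structures are mine,
just for illustration.)

  # Poincare duality, combinatorially: one dual vertex per face, and one
  # dual edge per original edge, joining the two faces that edge borders.
  # Example: the cube graph on the sphere, with vertices a..h.
  faces = {
      "top":    {"ab", "bc", "cd", "da"},
      "bottom": {"ef", "fg", "gh", "he"},
      "front":  {"ab", "bf", "ef", "ae"},
      "right":  {"bc", "cg", "fg", "bf"},
      "back":   {"cd", "dh", "gh", "cg"},
      "left":   {"da", "ae", "he", "dh"},
  }

  all_edges = set().union(*faces.values())
  dual_edges = []
  for e in sorted(all_edges):
      pair = [name for name, boundary in faces.items() if e in boundary]
      assert len(pair) == 2      # on the sphere, every edge borders two faces
      dual_edges.append(tuple(pair))

  print(dual_edges)              # the 12 edges of the octahedron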

This works fine for "closed" planar circuits - but for circuits with
input and output wires, like we've got here, we need Poincare duality
for graphs drawn on a closed disk! This should probably be called
"Poincare-Lefschetz duality".

Instead of giving you a long-winded description of how this works,
let me just illustrate it. We start with two resistors in series.
This is a graph with two edges and three vertices drawn on something
that's topologically a closed disk. Let's draw it on a rectangle:

   ......x......
   .     |     .
   .     |     .
   .     |     .
   .     o     .
   .     |     .
   .     |     .
   .     |     .
   ......x......

The two dashed edges are the resistors. The two vertices on the
boundary of the square, drawn as x's, are the "input" and "output"
vertices. There's also a vertex in the interior of the square,
drawn as a little circle.

Now let's superimpose the Poincare dual graph:

   ......x......
   .  ___|___  .
   . /   |   \ .
   ./    |    \.
   x     o     x
   .\    |    /.
   . \___|___/ .
   .     |     .
   ......x......


This is a mess, so now let's remove the original graph:

   .............
   .  _______  .
   . /       \ .
   ./         \.
   x           x
   .\         /.
   . \_______/ .
   .           .
   .............

This Poincare dual graph shows two resistors in parallel! There's
an "input" at left connected to an "output" at right by two edges,
each with a resistor on it. In case you're wondering, the difference
between "input" and "output" is purely conventional here.

Poincare duality is cool. But now let's solve the same problems -
resistors in series and resistors in parallel - using the "principle
of least power". Here's what the principle says. Suppose we have any
circuit made of resistors and we fix boundary conditions at the wires
leading in and out. Then the circuit will do whatever it takes to
minimize the amount of power it uses - that is, turns into heat.

What do I mean by "boundary conditions"? Well, first of all, I'm
thinking of an electrical circuit as a graph with resistors on its
edges, and with some special vertices that we think of as inputs and
outputs:

        x           x
        |           |
        o-----------o
       / \          |
      /   \         |
     /     o--------o
     |    / \       |
     |   /   \      |
     o--o-----o-----o
        |     |     |
        x     x     x

The inputs and outputs are marked as x's here. I've drawn a planar
graph, but we could also have a nonplanar one, like this:

x x x x
\ \ / |
\ \ |
\ / \ |
\ / \ |
o--------o
/ \ |
/ \|
o-----o-----o
| | |
x x x

(Poincare duality works best for planar circuits, but I'm still
struggling to find its place in the grand scheme of things - for
example, how it permeates the big set of analogies between different
physical systems that I explained starting in "week288".)

But what do I mean by "boundary conditions"? Well, one sort of
boundary condition is to fix the "electrostatic potential" at the
input and output vertices of our graph. Remember from last week that
the electrostatic potential is a function phi on the vertices of our
graph. So, we'll specify the value of this function at the input and
output vertices. Then we'll compute its values at all the other
vertices using the principle of minimum power.

To do this, we need to remember some stuff from "week293" and "week294".
First, for any edge

e
x --------> y

the voltage across that edge, V(e), is given by

V(e) = phi(y) - phi(x)

Second, since we have a circuit made of linear resistors, the current
I(e) through that edge obeys Ohm's law:

V(e) = I(e) R(e)

where R(e) is the resistance.

Third, the power consumed by that edge will be

P(e) = V(e) I(e)

The principle of minimum power says: fix phi at the input and output
vertices. Then, to find phi at the other vertices, just minimize the
total power:

P = sum_e P(e)

Using all the equations I've lined up, we see that the total power
is indeed a function of phi, since:

P(e) = (phi(y) - phi(x))^2 / R(e)

The total power is a quadratic function in a bunch of variables, so
it's easy to minimize.

Let's actually do this for two resistors in series:

   phi0  x
         |
         |  R1
         |
   phi1  o
         |
         |  R2
         |
   phi2  x

We need to find the value of phi1 that minimizes the total power

P = (phi1 - phi0)^2 / R1 + (phi2 - phi1)^2 / R2

So, we differentiate P with respect to phi1 and set the derivative
to zero:

2 (phi1 - phi0) / R1 - 2 (phi2 - phi1) / R2 = 0

This implies that

V1 / R1 = V2 / R2

where V1 and V2 are the voltages across our two resistors.

By Ohm's law, voltage divided by resistance is current. So, we get

I1 = I2

where I1 and I2 are the currents through our two resistors. Hey - the
current flowing through the first resistor equals the current flowing
through the second one! That's no surprise: it's a special case of
Kirchhoff's current law! But the cool part is that we *derived*
Kirchhoff's current law from the principle of minimum power. This
works quite generally, not just in this baby example.

Since the currents I1 and I2 are equal, let's call them both I. Then
we're back to the textbook approach to this problem. Ohm's law says

I R1 = V1

and

I R2 = V2

Adding these equations, we see that when you put resistors in series,
their resistances add.
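
Here's a minimal numerical version of the same calculation, just to see
the principle of minimum power in action (the numbers are arbitrary):

  # The principle of minimum power for two resistors in series:
  # phi0 and phi2 are fixed at the terminals; we minimize the total
  # power over the interior potential phi1.
  from scipy.optimize import minimize_scalar

  R1, R2 = 100.0, 220.0       # ohms (arbitrary example values)
  phi0, phi2 = 0.0, 12.0      # fixed boundary potentials

  def power(phi1):
      return (phi1 - phi0)**2 / R1 + (phi2 - phi1)**2 / R2

  phi1 = minimize_scalar(power).x

  I1 = (phi1 - phi0) / R1     # current through R1
  I2 = (phi2 - phi1) / R2     # current through R2
  print(I1, I2)               # equal: Kirchhoff's current law, derived
  print((phi2 - phi0) / I1)   # 320.0, i.e. R1 + R2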

Okay, now let's try two resistors in parallel:

        x   phi0
       / \
      /   \
     /     \
    / R1    \ R2
    \       /
     \     /
      \   /
       \ /
        x   phi1

This problem is oddly boring. There are no vertices except the input
and the output, so the minimization problem is trivial! If we fix the
potential at the input and output, we instantly know the voltages across
the two resistors, and then using Ohm's law we get the currents.

Why was this problem more boring than two resistors in series? Shouldn't
they be very similar? After all, they're Poincare duals of each other!

Well, yeah. But the problem is, we're not using the Poincare dual
boundary conditions. For the resistors in series we had a graph with
a vertex in the middle:

   ......x......
   .     |     .
   .     |     .
   .     |     .
   .     o     .
   .     |     .
   .     |     .
   .     |     .
   ......x......


For the resistors in parallel we have a graph with a face in the middle:


   .............
   .  _______  .
   . /       \ .
   ./         \.
   x           x
   .\         /.
   . \_______/ .
   .           .
   .............

So, to treat the resistors in parallel in a Poincare dual way, we
should use boundary conditions that involve faces rather than
vertices. I talked about these faces back in "week293": electrical
engineers call them "meshes". Each mesh has a current flowing around
it. So, our boundary conditions should specify the current flowing
around each input or output mesh: that is, each mesh that touches
the boundary of our rectangle. We should then find currents flowing
around the internal meshes that minimize the total power. And in the
process, we should be able to derive Kirchhoff's *voltage* law.
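
Here's a minimal numerical sketch of that dual calculation for two
resistors in parallel. The parametrization is my own, just for
illustration: the total current I is held fixed, and a single free
variable J plays the role of an internal mesh current, shifting current
from one branch to the other. Minimizing the power then forces
R1 I1 = R2 I2 - equal voltages across the two branches, which is
Kirchhoff's voltage law in this little example.

  # Dual version of the principle of minimum power: fix the total
  # current I, let J shift current between the two parallel branches,
  # and minimize the dissipated power over J.
  from scipy.optimize import minimize_scalar

  R1, R2 = 100.0, 220.0       # ohms (arbitrary example values)
  I = 0.05                    # total current, held fixed

  def power(J):
      I1 = I/2 + J            # current through R1
      I2 = I/2 - J            # current through R2; I1 + I2 = I always
      return R1 * I1**2 + R2 * I2**2

  J = minimize_scalar(power).x
  I1, I2 = I/2 + J, I/2 - J

  print(R1 * I1, R2 * I2)     # equal voltages: Kirchhoff's voltage law
  print(R1 * I1 / I)          # the equivalent resistance, R1 R2/(R1 + R2)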

All this could be further illuminated using the chain complex approach
I outlined in "week293". Let me just sketch how that goes. We can
associate a cochain complex to our circuit:

       d            d
C^0 -------> C^1 -------> C^2

The electrostatic potential phi is a 0-cochain and the voltage

V = d phi

is a 1-cochain. As we've seen, the total power is

P = sum_e V(e)^2 / R(e)

We can write this in a slicker way using an inner product on
the space of 1-cochains:

P = <V, V>

The principle of minimum power says we should find the electrostatic
potential phi that minimizes the total power subject to some boundary
conditions. So, we're trying to minimize

P = <d phi, d phi>

while holding phi fixed at some "input and output vertices". If you
know some mathematical physics you'll see this is just a discretized
version of the minimum principle that gives Laplace's equation!
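
Here's a minimal sketch of this discretized Dirichlet problem in Python.
The coboundary d is just the incidence matrix of the graph, the inner
product on 1-cochains carries the weights 1/R(e), and minimizing
<d phi, d phi> over the free vertices amounts to solving a weighted
graph Laplacian equation. The little three-vertex example is our two
resistors in series, but the same code works for any circuit of linear
resistors. (The vertex and edge names are mine, for illustration.)

  import numpy as np

  vertices = ["in", "mid", "out"]
  edges = [("in", "mid", 100.0),       # (tail, head, resistance in ohms)
           ("mid", "out", 220.0)]

  n, m = len(vertices), len(edges)
  d = np.zeros((m, n))                 # coboundary: (d phi)(e) = phi(head) - phi(tail)
  W = np.zeros((m, m))                 # inner product weights 1/R(e)
  for k, (tail, head, R) in enumerate(edges):
      d[k, vertices.index(head)] = 1.0
      d[k, vertices.index(tail)] = -1.0
      W[k, k] = 1.0 / R

  L = d.T @ W @ d                      # weighted graph Laplacian

  boundary = {"in": 0.0, "out": 12.0}  # fixed boundary potentials
  fixed = [vertices.index(v) for v in boundary]
  free = [i for i in range(n) if i not in fixed]
  phi = np.zeros(n)
  for v, value in boundary.items():
      phi[vertices.index(v)] = value

  # Minimizing <d phi, d phi> means the gradient L phi vanishes at the
  # free vertices: a plain linear solve.
  phi[free] = np.linalg.solve(L[np.ix_(free, free)],
                              -L[np.ix_(free, fixed)] @ phi[fixed])

  V = d @ phi                          # voltages across the edges
  I = W @ V                            # currents, by Ohm's law
  print(phi)                           # [ 0.    3.75  12.  ]
  print(I)                             # both 0.0375: Kirchhoff's current law again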

There's also a dual version of this whole story. Our circuit
also gives a chain complex:

      delta         delta
C_0 <-------- C_1 <-------- C_2

The mesh currents define a 2-chain J and the currents along edges
define a 1-chain

I = delta J

In these terms, the total power is

P = sum_e R(e) I(e)^2

We can write this in a slicker way using an inner product on the space
of 1-chains:

P = <I, I>

In fact I already talked about this inner product in "week293".

In these terms, the principle of minimum power says we should find the
mesh current that minimizes the total power subject to some boundary
conditions. So, now we're trying to minimize

P = <delta J, delta J>

while holding J fixed along certain "input and output meshes".

In short, everything works the same way in the two dual formulations.
In fact, we can reinterpret our chain complex as a cochain complex
just by turning it around! This:

      delta         delta
C_0 <-------- C_1 <-------- C_2

effortlessly becomes this:

      delta         delta
C_2 --------> C_1 --------> C_0

And we didn't even need our graph to be planar! The only point in
having the graph be planar is that this gives us a specific choice of
meshes. Otherwise, we must choose them ourselves.

Finally, I want to mention an interesting book on nonequilibrium
thermodynamics. The "principle of minimum power" is also known as the
"principle of least entropy production". I'm very curious about this
principle and how it relates to the more familiar "principle of
least action" in classical mechanics. This book seems to be pointing
towards a unification of the two:

3) Hans Christian Oettinger, Beyond Equilibrium Thermodynamics,
Wiley, 2005.

I thank Arnold Neumaier for pointing it out! It considers a
fascinating generalization of Hamiltonian mechanics that applies to
systems with dissipation: for example, electrical circuits with
resistors, or mechanical systems with friction.

In ordinary Hamiltonian mechanics the space of states is a manifold
and time evolution is a flow on this manifold determined by a smooth
function called the Hamiltonian, which describes the *energy* of any
state. In this generalization the space of states is still a manifold,
but now time evolution is determined by two smooth functions: the
energy and the *entropy*! In ordinary Hamiltonian mechanics, energy
is automatically conserved. In this generalization that's also true,
but energy can go into the form of heat... and entropy automatically
*increases*!

Mathematically, the idea goes like this. We start with a Poisson
manifold, but in addition to the skew-symmetric Poisson bracket {F,G}
of smooth functions on some manifold, we also have a symmetric bilinear
bracket [F,G] obeying the Leibniz law

[F,GH] = [F,G]H + G[F,H]

and this positivity condition:

[F,F] >= 0

The time evolution of any function is given by a generalization of
Hamilton's equations:

dF/dt = {H,F} + [S,F]

where H is a function called the "energy" or "Hamiltonian", and S is a
function called the *entropy*! The first term on the right is the
usual one. The new second term describes dissipation: as we shall
see, it pushes the state towards increasing entropy.

If we require that

[H,F] = {S,F} = 0

for every function F, then we get conservation of energy, as usual
in Hamiltonian mechanics:

dH/dt = {H,H} + [S,H] = 0

But we also get the second law of thermodynamics:

dS/dt = {H,S} + [S,S] >= 0

Entropy always increases!
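
To make this concrete, here's a minimal numerical sketch of the matrix
form of these equations, dx/dt = L grad(E) + M grad(S), which is just
dF/dt = {H,F} + [S,F] written out in coordinates. The toy system is my
own choice, not taken from Oettinger's book: a damped oscillator (q,p)
together with one internal-energy variable e that the friction heats up.
The matrix L is antisymmetric and gives the Poisson bracket; M is
symmetric and positive semidefinite and gives the dissipative bracket;
and they satisfy the degeneracy conditions L grad(S) = 0 and
M grad(E) = 0, which are the conditions [H,F] = {S,F} = 0 above.

  import numpy as np

  m, k, gamma, T = 1.0, 1.0, 0.2, 1.0    # mass, spring constant, friction, temperature

  def grad_E(x):                         # E = p^2/2m + k q^2/2 + e
      q, p, e = x
      return np.array([k * q, p / m, 1.0])

  def grad_S(x):                         # S = e/T  (constant temperature, to keep it simple)
      return np.array([0.0, 0.0, 1.0 / T])

  L = np.array([[ 0.0, 1.0, 0.0],        # the usual symplectic structure on (q, p)
                [-1.0, 0.0, 0.0],
                [ 0.0, 0.0, 0.0]])

  def M(x):                              # friction on p, with matching heat flowing into e
      p = x[1]
      v = np.array([0.0, 1.0, -p / m])
      return gamma * T * np.outer(v, v)  # symmetric and positive semidefinite

  x = np.array([1.0, 0.0, 0.0])          # start stretched, at rest, no internal energy
  dt = 1.0e-3
  for _ in range(20000):
      x = x + dt * (L @ grad_E(x) + M(x) @ grad_S(x))

  q, p, e = x
  print(0.5 * p**2 / m + 0.5 * k * q**2 + e)   # ~0.5: energy conserved (up to Euler error)
  print(e / T)                                 # > 0: entropy has gone up

The oscillation dies away, but the energy just migrates into e, and S
climbs the whole time.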

Oettinger calls this framework "GENERIC" - an annoying acronym for
"General Equation for the NonEquilibrium Reversible-Irreversible
Coupling". There are lots of papers about it. But I'm wondering if
any geometers have looked into it!

If we didn't need the equations [H,F] = {S,F} = 0, we could easily
get the necessary brackets starting with a Kaehler manifold. The
imaginary part of the Kaehler structure is a symplectic structure,
say w, so we can define

{F,G} = w(dF,dG)

as usual to get Poisson brackets. The real part of the Kaehler structure
is a Riemannian structure, say g, so we can define

[F,G] = g(dF,dG)

This satisfies

[F,GH] = [F,G]H + G[F,H]

and

[F,F] >= 0

Don't be fooled: this stuff is not rocket science. In particular, the
inequality above has a simple meaning: when we move in the direction
of the gradient of F, the function F increases. So adding the second
term to Hamilton's equations has the effect of pushing the system
towards increasing entropy.
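
If you want to see these two brackets concretely, here's a small
symbolic check on the simplest Kaehler manifold of all, the plane
R^2 = C with coordinates (q,p): the symplectic structure gives the
usual Poisson bracket and the Euclidean metric gives the symmetric
bracket. The particular functions below are arbitrary, just to
exercise the identities.

  import sympy as sp

  q, p = sp.symbols("q p", real=True)

  def poisson(F, G):       # {F, G} = w(dF, dG): the usual Poisson bracket
      return sp.diff(F, q) * sp.diff(G, p) - sp.diff(F, p) * sp.diff(G, q)

  def metric(F, G):        # [F, G] = g(dF, dG): the symmetric bracket
      return sp.diff(F, q) * sp.diff(G, q) + sp.diff(F, p) * sp.diff(G, p)

  F = q**2 + sp.sin(p)
  G = q * p
  H = sp.exp(q) - p**3

  print(poisson(q, p))     # 1, as it should be

  # the Leibniz law  [F, GH] = [F,G]H + G[F,H]
  print(sp.simplify(metric(F, G * H) - (metric(F, G) * H + G * metric(F, H))))   # 0

  # positivity:  [F, F] = (dF/dq)^2 + (dF/dp)^2 >= 0
  print(metric(F, F))      # 4*q**2 + cos(p)**2, a sum of squares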

Note that I'm being a tad unorthodox by letting w and g eat cotangent
vectors instead of tangent vectors - but that's no big deal. The big
deal is this: if we start with a Kaehler manifold and define brackets
this way, we don't get [H,F] = 0 or {S,F} = 0 for all functions F
unless H and S are constant! That's no good for applications to physics.
To get around this, we would need to consider some sort of *degenerate*
Kaehler structure - one where w and g are degenerate bilinear forms on
the cotangent space.

Has anyone thought about such things? They remind me a little of "Dirac
structures" and "generalized complex geometry" - but I don't know
enough about those subjects to know if they're relevant here.

This GENERIC framework suggests that energy and entropy should be
viewed as two parts of a single entity - maybe even its real and
imaginary parts! And that in turn reminds me of other strange things,
like the idea of using complex-valued Hamiltonians to describe
dissipative systems, or the idea of "inverse temperature as imaginary
time". I can't tell yet if there's a big idea lurking here, or just
a mess....

-----------------------------------------------------------------------

Addendum: I thank Tom Leinster, Gunnar Magnusson and Esa Peuha for
catching typos. Also, Esa Peuha noticed that I was cutting corners in
my definition of "admittance" as the inverse of "resistance".
Admittance is the inverse of resistance for circuits made of linear
resistors, which is the situation I was talking about. But he notes:

In Week 295, you claim that admittance is the inverse of resistance,
but that's not true; admittance is the inverse of impedance. Of
course, resistance and impedance are the same thing for circuits
containing only resistors, but not in the presence of capacitors
and inductors. Usually it's said that the inverse of resistance is
conductance (and the inverse of reactance is susceptance), but
that's not quite right: resistance and reactance are the real and
imaginary parts of impedance, and conductance and susceptance are
the real and imaginary parts of admittance, so resistance, reactance,
conductance and susceptance don't usually have physically meaningful
inverses.

For more discussion visit the n-Category Cafe at:

http://golem.ph.utexas.edu/category/2010/04/this_weeks_finds_in_mathematic_56.html

-----------------------------------------------------------------------

Quote of the Week:

"I would rather discover a single fact, even a small one, than debate
the great issues at length without discovering anything new at all." -
Galileo Galilei

-----------------------------------------------------------------------
Previous issues of "This Week's Finds" and other expository articles on
mathematics and physics, as well as some of my research papers, can be
obtained at

http://math.ucr.edu/home/baez/

For a table of contents of all the issues of This Week's Finds, try

http://math.ucr.edu/home/baez/twfcontents.html

A simple jumping-off point to the old issues is available at

http://math.ucr.edu/home/baez/twfshort.html

If you just want the latest issue, go to

http://math.ucr.edu/home/baez/this.week.html
From: robert bristow-johnson on
On Apr 26, 4:29 pm, b...(a)math.removethis.ucr.andthis.edu (John Baez)
wrote:
....
>
> In our previous problem we were adding up resistances.  Now we're
> adding up reciprocals of resistances.  Luckily, there's a name for
> the reciprocal of a resistance: it's called an "admittance".

actually, John, the correct terminology is "conductance". both are
considered real quantities.

admittance is the reciprocal of complex quantity "impedance" which has
resistance as the real part and reactance as the imaginary part. when
admittance is split into real and imaginary, the real part is
conductance and the imaginary part has the weird label "susceptance".

i dunno if we're lucky to have such terms. they probably should have
used "impedance" and "admittance" and ditched the terms "resistance",
"conductance", "reactance", and especially "susceptance".

r b-j

From: Han de Bruijn on
On 26 apr, 22:29, b...(a)math.removethis.ucr.andthis.edu (John Baez)
wrote:

[ .. snip good stuff .. ]

Third attempt. The fact that my article is not shown in the (Google)
newsgroups is, of course, due to crossposting to, and censorship of,
a moderated group: 'sci.physics.research'. So let's try again. Sorry
if the article has shown up in the meantime.

In two dimensions, how about this:

http://www.xs4all.nl/~westy31/Electric.html#Irregular
http://hdebruijn.soo.dto.tudelft.nl/jaar2004/purified.pdf [ page 9 ]

In three dimensions, how about this:

http://hdebruijn.soo.dto.tudelft.nl/hdb_spul/belgisch.pdf [ page 7 ]

And this:

http://en.wikipedia.org/wiki/Dual_polyhedron [ related to ]
http://hdebruijn.soo.dto.tudelft.nl/jaar2010/octaeder.pdf

Han de Bruijn
From: spudnik on
I'll work on that proof.

thus:
1. The circle that measures ecliptic latitude, that is, the number of
degrees above or below the ecliptic of the Moon or the planets. It is
properly calibrated when it reads "zero" every day at noon, when
sighting the Sun.

2. A half-circle and plumb-bob attached to the sighting arm, which
gives the elevation of a star or planet above the horizon.
3. Equatorial plane, points to the celestial equator, by tilting it
from the horizontal by an angle equal to the co-latitude. The
14"circle on it is divided up into hours, for sidereal time or right
ascension (when necessary, these readings can easily be converted into
degrees, since 1 hour = 15 degrees).
4. Base, in the plane of the observer's horizon, oriented so that the
axis of symmetry is on the north-south meridian.
5. Ecliptic plane, also known as the 23.5-degree wedge, set parallel
to the plane of the ecliptic. The 12"circle on this plane is divided
up into 24 hours, giving ecliptic longitude, where the position of the
Sun is the sidereal time at noon for that day.

6. Sighting arm, with sights for "shooting"a planet, star, the Moon,
or the Sun.
Source: Adapted from Sentiel Rommel, "Maui's Tanawa: A Torquetum of
232 B.C.,"
21st Century, Spring 1999, p. 75.

> Aether is uncompressed matter and matter is compressed aether, so if
> you want to say light propagates through uncompressed matter, that
> would be correct.


> > what ever it says, Shapiro's last book is just a polemic;
> > his real "proof" is _1599_;
> > the fans of de Vere are hopelessly stuck-up --
> > especially if they went to Harry Potter PS#1.
http://www.takethetrashoutofgoogolURL.com/url?sa=D&q=http://entertainment.timesonline.co....

--Light: A History!
http://wlym.com
From: spudnik on
well, you made an assumption about the general tetrahedron,
early in your proof, that only applies
to a small class of them.

thus:
now that you've read some of it; so?

> Nice site, lyndon larouche & 21stcenturysciencetech.googolplexth.com.

thus:
he seems to be unaware of the necessity in a "proof,"
of "necessity AND sufficiency," as first stated
by Leibniz (although having one or the other is,
still, very good -- if actually so .-)
> state of the aether, as determined by our inability to detect it.

thus:
so, you applied Coriolis' Force to General Relativity, and
**** happened?

thus:
with only the "trivial" solutions on the curves o'Fermatttt,
it sounds like a "necessary but insufficient" proof;
PdF certainly could have done it.

> I have been interested in the odd and even aspect of FLT , and
> when Cn = 1. May I have your reference? DRMARJOHN

thus:
so, your coinage of pi(a,b) is the same as pi(b) - pi(a); now,
can you say the proof as a wordprolemmum?

--Light: A History!
http://wlym.takeTHEgoogolOUT.com