From: Andrew Magyar
From vector calculus, in 3 dimensions it is a known fact that a given vector field is the gradient field of a scalar function from R^3 -> R if the curl of that vector field is equal to 0.

Correct me if I am wrong, but this condition really boils down to the fact that the order in which second-order partial derivatives are taken is irrelevant (i.e., d^2f/dxdy = d^2f/dydx).

In general d dimensions, given a vector field, the question of interest to me is whether this vector field corresponds to the gradient field of a scalar function from R^d -> R.

Is the condition for which this is true similar to that in R^3, namely that the order in which you take second-order partial derivatives be irrelevant?
From: W. Dale Hall
Andrew Magyar wrote:
> From vector calculus, in 3 dimensions it is a known fact that a given
> vector field is the gradient field of a scalar function from R^3 -> R
> if the curl of that vector field is equal to 0.
>
> Correct me if I am wrong, but this condition really boils down to the
> fact that the order in which second-order partial derivatives are
> taken is irrelevant (i.e., d^2f/dxdy = d^2f/dydx).
>
> In general d dimensions, given a vector field, the question of
> interest to me is whether this vector field corresponds to the
> gradient field of a scalar function from R^d -> R.
>
> Is the condition for which this is true similar to that in R^3,
> namely that the order in which you take second-order partial
> derivatives be irrelevant?

This is more or less the case. You do have to be a little careful, since
holes in the domain of your vector field can cause things to go awry:

Let v(x,y) = -y/(x^2 + y^2) e_x + x/(x^2 + y^2) e_y

where e_x, e_y are the standard unit vectors in the x,y directions,
respectively. Note that v(x,y) is undefined at the origin x=y=0.

It's not hard to show that the "mixed partial" criterion is met:

D_y(-y/(x^2 + y^2)) = ((-1)(x^2 + y^2) - (-y)(2y))/(x^2 + y^2)^2
                    = (y^2 - x^2)/(x^2 + y^2)^2

D_x(x/(x^2 + y^2)) = ((1)(x^2 + y^2) - (x)(2x))/(x^2 + y^2)^2
                   = (y^2 - x^2)/(x^2 + y^2)^2

However, v(x,y) isn't really the gradient of an actual function;
rather, it's the gradient of the polar angle theta, and the problem
is that theta is only defined modulo 2 pi.
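
If you want to see both halves of this concretely, here is a quick
symbolic check (a sketch using sympy; v1, v2 are just my labels for
the two components of v):

    import sympy as sp

    x, y, t = sp.symbols('x y t', real=True)
    v1 = -y / (x**2 + y**2)        # e_x component of v
    v2 =  x / (x**2 + y**2)        # e_y component of v

    # The mixed-partial criterion holds away from the origin:
    print(sp.simplify(sp.diff(v1, y) - sp.diff(v2, x)))           # 0

    # ...yet integrating v around the unit circle gives 2*pi, not 0, so
    # v cannot be grad(f) for any single-valued f on the punctured plane:
    cx, cy = sp.cos(t), sp.sin(t)
    integrand = (v1*sp.diff(cx, t) + v2*sp.diff(cy, t)).subs({x: cx, y: cy})
    print(sp.integrate(sp.simplify(integrand), (t, 0, 2*sp.pi)))  # 2*pi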

This general phenomenon is best seen by considering (smooth) differential
forms (http://en.wikipedia.org/wiki/Differential_form), that is,
expressions of the form

sum_I(f_I(x1 ... xn) w_I)

where

I ranges over multi-indices (I1 I2 ... Ik) of length k
from the set {1 ... n},
the f_I(...) are smooth functions,
and w_I represents the wedge (totally antisymmetric
[or exterior] product)

w_I = dx_I1 ^ dx_I2 ^ ... ^ dx_Ik

These objects form the natural setting for doing multi-dimensional
integrals, and the antisymmetry accounts for orientations of the
domain of integration.
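
If it helps to see the sign bookkeeping in the wedge product done
mechanically, here is a tiny Python sketch (the tuple encoding of the
index sets is my own ad hoc choice, not standard notation):

    def wedge(I, J):
        """Wedge the basic forms dx_I ^ dx_J, given as tuples of indices.
        Returns (sign, K) with K sorted, or (0, None) if an index repeats."""
        K = list(I + J)
        if len(set(K)) < len(K):               # dx_i ^ dx_i = 0
            return 0, None
        sign = 1
        for a in range(len(K)):                # bubble sort; every swap of two
            for b in range(len(K) - 1 - a):    # adjacent factors flips the sign
                if K[b] > K[b + 1]:
                    K[b], K[b + 1] = K[b + 1], K[b]
                    sign = -sign
        return sign, tuple(K)

    print(wedge((1,), (2, 3)))   # (1, (1, 2, 3))
    print(wedge((2,), (1, 3)))   # (-1, (1, 2, 3)): dx2 ^ dx1 ^ dx3 = -dx1 ^ dx2 ^ dx3
    print(wedge((1,), (1, 2)))   # (0, None): a repeated index kills the product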

0-forms (forms of degree k = 0) are just smooth functions.

1-forms (i.e., forms of degree k = 1) are not quite vector fields;
instead, they are linear functions from vector fields to the reals
(or complexes, if that's your base field), or "dual" vector fields.
The correspondence dx_i <---> e_i (i'th canonical basis vector) is
always available, since it's just a description of the standard
inner product.

2-forms (forms of degree k = 2) don't look like vector
fields, except in dimension n = 3, where we can map the basic
2-forms to the canonical basis vectors as follows:

dx ^ dy <---> e_z
dy ^ dz <---> e_x
dz ^ dx <---> e_y

noting that the pattern here is dx_i ^ dx_j <---> e_k, with (i,j,k)
being a cyclic (more to the point: even) permutation of (1,2,3).
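
As an aside, under this identification the wedge of two 1-forms in R^3
corresponds to the cross product of the associated vectors. A quick
sympy sketch (the component names u1..u3, v1..v3 are mine):

    import sympy as sp
    from collections import defaultdict

    u = sp.symbols('u1:4')             # the 1-form  u1 dx + u2 dy + u3 dz
    v = sp.symbols('v1:4')             # the 1-form  v1 dx + v2 dy + v3 dz

    # Mechanically expand (sum_i u_i dx_i) ^ (sum_j v_j dx_j):
    two_form = defaultdict(int)        # coefficient of dx_a ^ dx_b, a < b
    for i in range(3):
        for j in range(3):
            if i == j:                 # dx_i ^ dx_i = 0
                continue
            sign = 1 if i < j else -1  # dx_j ^ dx_i = -dx_i ^ dx_j
            two_form[(min(i, j), max(i, j))] += sign * u[i] * v[j]

    # dy^dz <-> e_x, dz^dx <-> e_y, dx^dy <-> e_z (note dz^dx = -dx^dz):
    as_vector = sp.Matrix([two_form[(1, 2)], -two_form[(0, 2)], two_form[(0, 1)]])
    print(as_vector - sp.Matrix(u).cross(sp.Matrix(v)))   # Matrix([[0], [0], [0]])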

For other dimensions, you don't get a "curl" operator mapping
vector fields to vector fields. However, not all is lost:

There is a differential operator d (called the exterior differential)
that maps k-forms to (k+1)-forms, defined by this:

d(f(x1 ... xn) w) = sum(D_i f(x1 ... xn) dx_i ^ w, i = 1 ... n)

where w is a k-form of the form

w = dx_i1 ^ ... ^ dx_ik

accounting for antisymmetry as follows

first: dx_i ^ (dx_i1 ^ ... ^ dx_ik) equals any of the following
(each move of dx_i one slot to the right flips the sign):

- dx_i1 ^ dx_i ^ dx_i2 ^ ... ^ dx_ik
+ dx_i1 ^ dx_i2 ^ dx_i ^ ... ^ dx_ik
- dx_i1 ^ dx_i2 ^ dx_i3 ^ dx_i ^ ... ^ dx_ik

and so forth,

and: dx_i ^ dx_i = 0 (so any repeated index forces the
product to be zero)

then extending to all k-forms by linearity. Note that in the above
sum, the only derivatives that will appear will be those for which
the corresponding dx_i does *not* appear in w.

For reference, this antisymmetry is seen in the curious minus
signs that appear in the definition of curl(V).

One important feature of the exterior differential is that it squares
to zero: d(dQ) = 0 for any k-form Q.
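
Here is a rough sketch of d acting on forms stored as a dict
{sorted index tuple: sympy coefficient} (the representation and the
helper d() are my own ad hoc choices, not any standard API). It shows
both the curl remark above and d(d f) = 0 coming out of the equality
of mixed partials:

    import sympy as sp

    X = sp.symbols('x y z')       # the same code works in any dimension

    def d(form):
        """Exterior derivative of {I: f_I}, I a sorted tuple of coordinate indices."""
        out = {}
        for I, f in form.items():
            for i in range(len(X)):
                if i in I:                          # dx_i ^ dx_I has a repeated factor
                    continue
                pos  = sum(1 for j in I if j < i)   # slot where dx_i lands in dx_i ^ dx_I
                sign = (-1) ** pos                  # one sign flip per adjacent swap
                J = tuple(sorted(I + (i,)))
                out[J] = out.get(J, 0) + sign * sp.diff(f, X[i])
        return {J: g for J, g in out.items() if sp.simplify(g) != 0}

    # d of a 1-form in R^3 reproduces the curl (P, Q, R arbitrary smooth functions):
    P, Q, R = [sp.Function(n)(*X) for n in ('P', 'Q', 'R')]
    print(d({(0,): P, (1,): Q, (2,): R}))
    # coefficients Q_x - P_y on dx^dy, R_x - P_z on dx^dz, R_y - Q_z on dy^dz,
    # i.e. the components of curl(P, Q, R), remembering dz^dx = -dx^dz.

    # d squares to zero: for a 0-form f, d(d f) dies by equality of mixed partials.
    f = {(): X[0]**2 * sp.sin(X[1]) + sp.exp(X[1]*X[2])}
    print(d(d(f)))                # {}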

Finally, after all this hot air I've been spewing: the Poincare Lemma:

For Q any (k+1)-form defined on a contractible open set U
in R^n: if dQ = 0 on U, then there is a k-form P on U
for which dP = Q.

If U is not contractible, there may be forms Q that have dQ = 0
(these are called *closed* differential forms) but fail to have
a P for which Q = dP (if Q = dP then Q is called *exact*).
This leads to a host of techniques for studying the topology
of the domain U, and ultimately to tools for the study of
smooth manifolds [google "de Rham cohomology"].
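
For 1-forms the usual proof of the Lemma even gives an explicit
recipe: integrate the form along rays from the origin. A quick sketch
on U = R^2 (the particular closed form below is just an example of
mine):

    import sympy as sp

    x, y, t = sp.symbols('x y t', real=True)
    A = 2*x*y                # the 1-form  w = A dx + B dy
    B = x**2 + 3*y**2
    print(sp.simplify(sp.diff(A, y) - sp.diff(B, x)))   # 0, so dw = 0 (w is closed)

    # Poincare-Lemma potential: f(x,y) = int_0^1 [ A(tx,ty) x + B(tx,ty) y ] dt
    f = sp.integrate(A.subs({x: t*x, y: t*y})*x +
                     B.subs({x: t*x, y: t*y})*y, (t, 0, 1))

    print(sp.simplify(sp.diff(f, x) - A))   # 0
    print(sp.simplify(sp.diff(f, y) - B))   # 0  -->  w = df, so w is exact on R^2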

In the example v(x,y) I gave above, the vector field v actually
detects the hole in its domain by virtue of its failure to
be a gradient of a function.

Well, that's the short answer.

Dale