From: keith on
Kaba wrote:
> keith wrote:
>>> Let P = {p_1, ..., p_m} subset R^n
>>>
>>> The cost function is
>>> f : R^n -> R: f(x) = max {||p - x||^2 : p in P}
> With the same intuition as in the previous post, I say 'yes' and 'yes',
> with no deeper proofs. I think the term unimodal is not applicable here
> although I know that by that you mean 'there are no other minima'.
>
> Let C = {(x, h) | x in R^n, h >= f(x)}. Then in my mind (R^n x R) \ C
> looks (at least when n = 1, 2) like a deep hole in the ground whose
> walls and floor are made of pieces of quadratics (all of the same
> shape). Where the pieces join there are more or less sharp corners.
> Convexity of f is a consequence of the quadratics used being convex.

Still, you can make nonconvex functions by sticking quadratics together.
There must be some deeper intuition behind this.

The problem I have is that one or more (up to 4) points will dominate the
others. Say there is just one dominating point at first, and you follow a
path minimizing f until you 'catch' another dominating point. For a switch
of the quadratic to happen, the first and second dominating points must
first give an equal f. At that moment you can move closer to neither the
first nor the second dominating point, as either way you would make f
bigger. You can only move on a path where the two points are equidistant
(a plane in 3D), eventually 'catching' more points, which restricts
movement further. But if you are dealing with paths to which certain
points of P are equidistant, then no 'switch' in the quadratics ever
occurs.

What I am saying is that maybe a path exists from any point to the
global minimum, where no switch in the quadratics occurs and hence
derivatives are ok?
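
To make this concrete, here is a small numerical sketch in Python for the
two-point case (the points are made up); it only illustrates the idea, it
proves nothing:

import numpy as np

# Two made-up points; f(x) = max(||x - p1||^2, ||x - p2||^2).
p1 = np.array([0.0, 0.0])
p2 = np.array([2.0, 0.0])

def f(x):
    return max(np.sum((x - p1)**2), np.sum((x - p2)**2))

# Points on the perpendicular bisector x = 1 are equidistant from p1 and
# p2, so both quadratics stay equal along it and no 'switch' occurs.
for y in [3.0, 2.0, 1.0, 0.5, 0.0]:
    x = np.array([1.0, y])
    d1 = np.sum((x - p1)**2)
    d2 = np.sum((x - p2)**2)
    print(y, d1, d2, f(x))

# d1 == d2 at every step, and f decreases all the way down to the midpoint
# (1, 0), which is the global minimum for two points.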
From: keith on
keith wrote:
> Kaba wrote:
>> keith wrote:
>>>> Let P = {p_1, ..., p_m} subset R^n
>>>>
>>>> The cost function is
>>>> f : R^n -> R: f(x) = max {||p - x||^2 : p in P}

> What I am saying is that maybe a path exists from any point to the
> global minimum, where no switch in the quadratics occurs and hence
> derivatives are ok?

Well, I see now that there are infinitely many points (though not in the
1D line case you mention) where f is not differentiable.

Hence no derivatives.
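
For example (a small finite-difference check in Python, with two made-up
points in the plane): at every point of the perpendicular bisector of the
two points the one-sided slopes of f disagree, so there is a whole line of
kinks:

import numpy as np

p1 = np.array([0.0, 0.0])
p2 = np.array([2.0, 0.0])

def f(x):
    return max(np.sum((x - p1)**2), np.sum((x - p2)**2))

# Pick any point on the bisector x = 1, e.g. (1, 5), and compare the
# one-sided slopes of f in the x-direction.
x0 = np.array([1.0, 5.0])
h = 1e-6
left  = (f(x0) - f(x0 - np.array([h, 0.0]))) / h
right = (f(x0 + np.array([h, 0.0])) - f(x0)) / h
print(left, right)   # roughly -2 and +2: they differ, so f is not
                     # differentiable there; the same holds for every y.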
From: Kaba on
keith wrote:
> Kaba pravi:
> > keith wrote:
> >>> Let P = {p_1, ..., p_m} subset R^n
> >>>
> >>> The cost function is
> >>> f : R^n -> R: f(x) = max {||p - x||^2 : p in P}
> > With the same intuition as in the previous post, I say 'yes' and 'yes',
> > with no deeper proofs. I think the term unimodal is not applicable here
> > although I know that by that you mean 'there are no other minima'.
> >
> > Let C = {(x, h) | x in R^n, h >= f(x)}. Then in my mind (R^n x R) \ C
> > looks (at least when n = 1, 2) like a deep hole in the ground whose
> > walls and floor are made of pieces of quadratics (all of the same
> > shape). Where the pieces join there are more or less sharp corners.
> > Convexity of f is a consequence of the quadratics used being convex.
>
> Still, you can make nonconvex functions by sticking quadratics together.
> There must be some deeper intuition behind this.

Proof:

Let
f_j : R^n -> R: f_j(x) = ||x - p_j||^2
f = max {f_j}

If you accept that each f_j is convex, then for any t in [0, 1]:

(1 - t)f(x_1) + tf(x_2)
>= (1 - t)f_j(x_1) + tf_j(x_2) (definition of f)
>= f_j((1 - t)x_1 + tx_2) (convexity of f_j)

Because the previous holds for arbitrary j:

(1 - t)f(x_1) + tf(x_2)
>= max {f_j((1 - t)x_1 + tx_2)}
= f((1 - t)x_1 + tx_2)

Thus f is convex [].
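
As a numerical sanity check (not a substitute for the proof), the
inequality can also be tested on random data in Python; the sizes below
are arbitrary:

import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(5, 3))            # 5 arbitrary points in R^3

def f(x):
    return max(np.sum((p - x)**2) for p in P)

for _ in range(1000):
    x1, x2 = rng.normal(size=3), rng.normal(size=3)
    t = rng.uniform()
    lhs = f((1 - t)*x1 + t*x2)
    rhs = (1 - t)*f(x1) + t*f(x2)
    assert lhs <= rhs + 1e-9           # the convexity inequality holds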

> What I am saying is that maybe a path exists from any point to the
> global minimum, where no switch in the quadratics occurs and hence
> derivatives are ok?

Could be, I don't know for sure. Anyway, if there are at least two
points, f seems to be non-differentiable at the (global) minimum.
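
For instance, with two points on the real line (made-up values 0 and 2),
the minimum of f is at the midpoint x = 1, and the one-sided slopes there
have opposite signs (a quick Python check):

def f(x):
    return max(x**2, (x - 2.0)**2)   # f for the two points 0 and 2

h = 1e-6
left  = (f(1.0) - f(1.0 - h)) / h    # about -2
right = (f(1.0 + h) - f(1.0)) / h    # about +2
print(left, right)                   # a kink right at the minimum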

--
http://kaba.hilvi.org
From: keith on
Kaba wrote:
> keith wrote:
>> Kaba wrote:
>>> keith wrote:
>>>>> Let P = {p_1, ..., p_m} subset R^n
>>>>>
>>>>> The cost function is
>>>>> f : R^n -> R: f(x) = max {||p - x||^2 : p in P}
>
> Proof:
>
> Let
> f_j : R^n -> R: f_j(x) = ||x - p_j||^2
> f = max {f_j}
>
> If you accept that each f_j is convex, then for any t in [0, 1]:
>
> (1 - t)f(x_1) + tf(x_2)
>> = (1 - t)f_j(x_1) + tf_j(x_2) (definition of f)
>> = f_j((1 - t)x_1 + tx_2) (convexity of f_j)
>
> Because the previous holds for arbitrary j:
>
> (1 - t)f(x_1) + tf(x_2)
>> = max {f_j((1 - t)x_1 + tx_2)}
> = f((1 - t)x_1 + tx_2)
>
> Thus f is convex [].

I am no mathematician, but is convexity not determined by an inequality?
Furthermore, how can you be certain that the same f_j applies for both
x_1 and x_2?
From: Kaba on
keith wrote:
> > Thus f is convex [].
>
> I am no mathematician, but is convexity not determined by an inequality?

There are inequality signs (">=") at the start of some lines. However, it
seems newsreaders interpret them as quote markers. Reformatting:

Proof:

Let
f_j : R^n -> R: f_j(x) = ||x - p_j||^2
f = max {f_j}

If you accept that each f_j is convex, then for any t in [0, 1]:

(1 - t)f(x_1) + tf(x_2) >= (definition of f)
(1 - t)f_j(x_1) + tf_j(x_2) >= (convexity of f_j)
f_j((1 - t)x_1 + tx_2)

Because the previous holds for arbitrary j:

(1 - t)f(x_1) + tf(x_2) >=
max {f_j((1 - t)x_1 + tx_2)} =
f((1 - t)x_1 + tx_2)

Thus f is convex [].

> Furthermore, how can you be certain that the same f_j applies for both
> x_1 and x_2?

Perhaps you mean this step:

(1 - t)f(x_1) + tf(x_2) >= (definition of f)
(1 - t)f_j(x_1) + tf_j(x_2)

The coefficients (1 - t) and t are non-negative, and f(x_i) >= f_j(x_i)
for every j by the definition of f as a maximum. So the step holds
termwise for any single j, whichever quadratic happens to attain the
maximum at x_1 or x_2.

--
http://kaba.hilvi.org