From: Kaba on
keith wrote:
> What I do in my algorithm is this: I first derive an AABB of the point
> set and then put a starting rc at the center of this AABB. That's my
> initial guess. Then I use the Hooke-Jeeves algorithm (the paper is
> online, but it's better to check Schaum's Operations Research for an
> easy explanation) to minimize:
>
> max({||rp-rc||^2}) with rc as the argument and all rps from P taken
> into account. You'll get close to the minimum ball center this way and
> will also get the radius (the square root of the value returned by the
> cost function). Even so, the Welzl algorithm will usually outdo this
> approach, due to finite floating-point precision.
>
> What I wanted to ask is:
>
> Is the cost function max({||rp-rc||^2}) over all rp continuous?
> Differentiable? Maybe one could get to rc even faster using
> derivatives? This approach works; I have working code.

Let P = {p_1, ..., p_m} subset R^n

The cost function is
f : R^n -> R: f(x) = max {||p - x||^2 : p in P}
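
In code this amounts to very little; a minimal Python sketch (with
'points' holding P as an m-by-n array and 'center' playing the role of
x, both names mine):

import numpy as np

def cost(center, points):
    # f(center) = max over p in P of ||p - center||^2
    return float(np.max(np.sum((points - center) ** 2, axis=1)))

def aabb_center(points):
    # The initial guess from your post: the center of the axis-aligned
    # bounding box of the point set.
    return 0.5 * (points.min(axis=0) + points.max(axis=0))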

To get an intuition for what this cost function looks like, consider the
problem in 1D (on the real line) and visualize the graphs of the functions.
There the norm is the absolute value and each point in P generates a
quadratic curve around itself. The value of the cost function at x is
given by finding the maximum value on these curves at x. The 2D case can
similarly be visualized in 3D.

It should then intuitively be clear that the cost function is continuous
but not everywhere differentiable. The non-differentiable points are
exactly those
at which the generator of the maximum value changes.
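
For a concrete 1D example, take P = {0, 2}, so that

f(x) = max {x^2, (x - 2)^2}.

The two parabolas cross at x = 1. For x < 1 the maximum comes from
(x - 2)^2, whose slope tends to -2 as x -> 1; for x > 1 it comes from
x^2, whose slope tends to +2. The one-sided derivatives at x = 1 are
therefore -2 and +2, so f is continuous there but has a corner. That
corner, x = 1, is also the minimizer: the center of the minimum
enclosing interval [0, 2], with squared radius f(1) = 1.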

--
http://kaba.hilvi.org
From: keith on
> Let P = {p_1, ..., p_m} subset R^n
>
> The cost function is
> f : R^n -> R: f(x) = max {||p - x||^2 : p in P}
>
> To get an intuition for what this cost function looks like, consider the
> problem in 1D (on the real line) and visualize the graphs of the functions.
> There the norm is the absolute value and each point in P generates a
> quadratic curve around itself. The value of the cost function at x is
> given by finding the maximum value on these curves at x. The 2D case can
> similarly be visualized in 3D.
>
> It should then intuitively be clear that the cost function is continuous
> but not everywhere differentiable. The non-differentiable points are
> exactly those
> at which the generator of the maximum value changes.

How do you know that the transition between two generators is not smooth
enough for differentiability?
From: keith on
keith wrote:
>> Let P = {p_1, ..., p_m} subset R^n
>>
>> The cost function is
>> f : R^n -> R: f(x) = max {||p - x||^2 : p in P}
>>
>> To get an intuition for what this cost function looks like, consider the
>> problem in 1D (on the real line) and visualize the graphs of the functions.
>> There the norm is the absolute value and each point in P generates a
>> quadratic curve around itself. The value of the cost function at x is
>> given by finding the maximum value on these curves at x. The 2D case can
>> similarly be visualized in 3D.
>>
>> It should then intuitively be clear that the cost function is continuous
>> but not everywhere differentiable. The non-differentiable points are
>> exactly those
>> at which the generator of the maximum value changes.
>
> How do you know that the transition between two generators is not smooth
> enough for differentiability?

Hmm, yeah, now I see it. It's like switching from one parabola to
another at a point. Too sharp a switch. Thanks for the info. That's why,
to be on the safe side, I've used Hooke-Jeeves, which does not require
derivatives. The Ritter algorithm for the minimum sphere seems to be
much faster and mostly gives similar results.
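
In case it is useful to someone, here is a stripped-down sketch of the
kind of pattern search I mean (Python; not my actual code, just the
basic Hooke-Jeeves idea: exploratory moves along the coordinate axes
plus a pattern move, halving the step when nothing improves):

import numpy as np

def cost(center, points):
    # f(center) = max over p in P of ||p - center||^2
    return float(np.max(np.sum((points - center) ** 2, axis=1)))

def explore(base, step, points):
    # Exploratory move: try +/- step along each coordinate axis and
    # keep any change that lowers the cost.
    x = base.copy()
    for i in range(len(x)):
        for delta in (step, -step):
            trial = x.copy()
            trial[i] += delta
            if cost(trial, points) < cost(x, points):
                x = trial
                break
    return x

def min_ball_center(points, tol=1e-9):
    # Start at the center of the axis-aligned bounding box.
    lo, hi = points.min(axis=0), points.max(axis=0)
    base = 0.5 * (lo + hi)
    step = 0.25 * float(np.max(hi - lo)) + tol
    while step > tol:
        new = explore(base, step, points)
        if cost(new, points) < cost(base, points):
            # Pattern move: jump further along the improving direction,
            # explore around that point, and keep it only if it helps.
            pattern = explore(new + (new - base), step, points)
            if cost(pattern, points) < cost(new, points):
                base = pattern
            else:
                base = new
        else:
            step *= 0.5  # Nothing improved: refine the step.
    return base, float(np.sqrt(cost(base, points)))

# For example: center, radius = min_ball_center(np.random.rand(100, 3))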
From: keith on
> Let P = {p_1, ..., p_m} subset R^n
>
> The cost function is
> f : R^n -> R: f(x) = max {||p - x||^2 : p in P}

I have a more difficult question now, Kaba (or anyone else). Is the
function f convex? Is it unimodal? The reason I ask is that I have not
yet fallen into any local minimum while searching for x.
From: Kaba on
keith wrote:
> > Let P = {p_1, ..., p_m} subset R^n
> >
> > The cost function is
> > f : R^n -> R: f(x) = max {||p - x||^2 : p in P}
>
> I have a more difficult question now, Kaba (or anyone else). Is the
> function f convex? Is it unimodal? The reason I ask is that I have not
> yet fallen into any local minimum while searching for x.

With the same intuition as in the previous post, I say 'yes' and 'yes',
with no deeper proofs. I think the term 'unimodal' is not really
applicable here, although I know that by it you mean 'there are no other
minima'.

Let C = {(x, h) | x in R^n, h >= f(x)}. Then in my mind R^(n+1) \ C
looks (at least when n = 1, 2) like a deep hole in the ground whose
walls and floor are made of pieces of quadratics (all of the same
shape). Where the pieces join there are more or less sharp corners.
Convexity of f is a consequence of the quadratics used being convex.
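
That said, the convexity claim is easy to make precise. Each
g_p(x) = ||p - x||^2 is convex, and for any x, y and t in [0, 1],

f(t*x + (1-t)*y) = max {g_p(t*x + (1-t)*y) : p in P}
                <= max {t*g_p(x) + (1-t)*g_p(y) : p in P}
                <= t*max {g_p(x) : p in P} + (1-t)*max {g_p(y) : p in P}
                 = t*f(x) + (1-t)*f(y),

so f, as a maximum of convex functions, is convex. In fact each g_p is
strictly convex, which makes the first inequality strict whenever
x != y and 0 < t < 1, so f is strictly convex: it has a unique minimum
and no other local minima. That should explain why your search has
never been trapped.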

--
http://kaba.hilvi.org