From: Robert Macrae on
"Hannes Naudé" <naude.jj+matlab(a)gmail.com> wrote in message

> The reason why our simple block-based solvers appear to outperform the cutting-edge stuff from academia is that the problem as stated here does not correspond exactly to the compressive sampling problem typically studied in the real world. In that setting, one is not able to adjust one's query pattern dynamically based on the results of earlier queries. Rather, the entire set of query masks needs to be specified at the outset.

That is exactly why I rejected random (and Hadamard). However, I think the concept of large overlapping queries can still be made to work.
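
Just to make the non-adaptive setting concrete, here is a rough MATLAB sketch; the sizes and variable names are only illustrative:

  n = 32*32;                     % pixels, unrolled into a vector x(:)
  m = 200;                       % total query budget
  A = double(rand(m, n) > 0.5);  % all m random masks fixed up front, blind to the image
  y = A * x(:);                  % the only data a standard CS solver ever gets to see

In the contest, by contrast, mask k+1 can be chosen after looking at the answers to queries 1..k.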

> Being able to adjust our queries to zoom in on areas with detail is a significant (although not realistic) advantage

It's a different problem, but I don't think it's unrealistic.

Robert
From: Amitabh Verma on
"Alan Chalker" <alancNOSPAM(a)osc.edu> wrote in message <hrsnpd$rfa$1(a)fred.mathworks.com>...
> "Alan Chalker" <alancNOSPAM(a)osc.edu> wrote in message
> > "Extraction of puzzles in the test suite by manipulating the score, runtime, or error conditions is also forbidden. In the small scale, this has been an element of many past contests, but in the Blockbuster Contest, Alan Chalker turned this into a science."

Guess I have lots to learn :)

Thanks to all the contestants and the Matlab team, it was a really fun and educational trip!
From: Hannes Naudé on
> It sounds like Hannes' random entry will finally beat my 'final effort' series. Well done!
>
> Yi

Take a closer look, my friend; it's not as random as you might think. ;-)

Cheers
H
From: Hannes Naudé on
Robert:
> That is exactly why I rejected random (and Hadamard). However, I think the concept of large overlapping queries can still be made to work.

Agree with you there. The algorithms you and Oliver discussed here sounded very promising. However, I've learnt the hard way not to attempt any global changes after about Sunday. Past a certain point in the contest, the only edits that stand a chance are nested inside lots of conditions and fire only very rarely, so as not to disturb the solver from its resting place in a deep local minimum. :-(

Another unimplemented idea is excluding the outer edge from the estimation process until the very end and then simply assigning each pixel on the outer edge the value of its nearest neighbour in the interior. This gives us more resolution towards the center (where the detail is more likely to be in any case) in exchange for a minimal loss on the outer edge.
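
A rough MATLAB sketch of that trick (variable names like innerEst are purely illustrative): reconstruct only the interior with the full query budget, then pad the one-pixel border by replication:

  img = zeros(h, w);                                   % full-size output image
  img(2:end-1, 2:end-1) = innerEst;                    % interior estimated as usual
  img(1, 2:end-1)   = innerEst(1, :);                  % top edge copies the row just inside
  img(end, 2:end-1) = innerEst(end, :);                % bottom edge
  img(2:end-1, 1)   = innerEst(:, 1);                  % left edge
  img(2:end-1, end) = innerEst(:, end);                % right edge
  img([1 end], [1 end]) = innerEst([1 end], [1 end]);  % corners take the diagonal neighbour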

In a similar way, one can leave open regions BETWEEN queries and then use standard image inpainting algorithms to fill them in.
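
Again, just a sketch, with plain scattered interpolation standing in for a proper inpainting routine ("known" is an assumed logical mask of pixels covered by queries, "est" the current estimate):

  [r, c]   = find(known);                      % coordinates of queried pixels
  [rq, cq] = find(~known);                     % coordinates of the gaps left between queries
  vals = est(known);                           % current estimates at the queried pixels
  filled = est;
  filled(~known) = griddata(c, r, vals, cq, rq, 'natural');  % smooth infill of the gaps
  filled(isnan(filled)) = mean(vals);          % crude fallback for points outside the convex hull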

> > Being able to adjust our queries to zoom in on areas with detail is a significant (allthough not realistic) advantage

Well, by unrealistic I meant unrealistic in typical compressive sensing applications, where minimal processing power/battery life is available on the sensor end. I'm sure real-world applications which match the problem as studied here exist; I just don't know what they are.
From: Hannes Naudé on
"Nicholas Howe" <NikHow(a)hotmail.com> wrote in message <hrsld1$k01$1(a)fred.mathworks.com>...
> Thanks for clarifying this, Hannes. I feel better now about not putting the time into delving through that CS material! (As I write this, your entry stands at the top of the heap, so it looks like you managed to do quite well with other techniques also.)

Thanks. Unfortunately, in this case, "other techniques" refers to intentionally overfitting the dataset, something for which I have a deep-seated dislike. But hey, if you can't beat them, join them.