From: Hannes Naudé on
"Nicholas Howe" <NikHow(a)hotmail.com> wrote in message <hrsfd9$e8f$1(a)fred.mathworks.com>...
> "Robert Macrae" <robertREMOVEME(a)arcusinvest.com> wrote in message
> > The problem with overlapping regions is that they have to be
> > 1) reasonably localised (so that they capture smooth features well) and
> > 2) reasonably orthogonal (so that the information from them can be combined without too much iteration).
>
> Robert,
> I'm not an expert on this field, but the reading that I have done suggests that you actually want queries that are as random as possible (and hence entirely unlocalized). This runs counter to our usual intuition, but so does beating the Nyquist limit...

That's exactly right. Typical compressive sampling queries are just random masks. I experimented with some compressive sampling code in the darkness phase of the competition (from the L1 magic website as well as from David Wipf's homepage), but it wasn't competitive.
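
Just to make the random-mask idea concrete, here is a rough sketch of what such queries look like (illustration only: the stand-in image and the simulated measurements below are not the contest interface, and a real solver obviously never sees the image directly):

img    = kron(reshape(1:64, 8, 8), ones(8));  % stand-in 64x64 test image
nQuery = 500;                                 % measurement budget
Phi    = rand(nQuery, numel(img)) > 0.5;      % one random binary mask per row
y      = double(Phi) * img(:);                % each query returns a masked sum
% An off-the-shelf solver (e.g. l1-magic) would then try to recover img from
% (Phi, y) by l1-minimisation in a sparsifying basis such as wavelets.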

The reason why our simple block-based solvers appear to outperform the cutting-edge stuff from academia is that the problem as stated here does not correspond exactly to the compressive sampling problem typically studied in the real world. In that setting, one is not able to adjust one's query pattern dynamically based on the results of earlier queries. Rather, the entire set of query masks needs to be specified at the outset.

Being able to adjust our queries to zoom in on areas with detail is a significant (although not realistic) advantage and allows the custom-developed code to outperform the off-the-shelf CS code by orders of magnitude.
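
To illustrate the kind of advantage I mean, here is a rough quadtree-style sketch of adaptive querying (this is not my entry or anyone else's; the image and the query function are simulated stand-ins for the real interface):

img   = kron(reshape(1:64, 8, 8), ones(8));  % stand-in piecewise-flat image
query = @(r,c) sum(sum(img(r,c)));           % simulated sum-over-block oracle
est   = zeros(size(img));
todo  = {{1:size(img,1), 1:size(img,2)}};    % blocks still to be resolved
while ~isempty(todo)
    r = todo{end}{1};  c = todo{end}{2};  todo(end) = [];
    if numel(r)*numel(c) == 1
        est(r,c) = query(r,c);               % single pixel: read it off
        continue
    end
    rs = {r(1:ceil(end/2)), r(ceil(end/2)+1:end)};   % split rows in half
    cs = {c(1:ceil(end/2)), c(ceil(end/2)+1:end)};   % split columns in half
    kids = {};  m = [];
    for i = 1:2
        for j = 1:2
            if isempty(rs{i}) || isempty(cs{j}), continue, end
            kids{end+1} = {rs{i}, cs{j}};
            m(end+1) = query(rs{i}, cs{j}) / (numel(rs{i})*numel(cs{j}));
        end
    end
    if max(m) - min(m) < eps     % quadrant means agree: treat block as flat
        est(r,c) = m(1);
    else
        todo = [todo, kids];     % detail present: zoom in on the quadrants
    end
end
% For this piecewise-constant image est ends up equal to img while using far
% fewer queries than one per pixel.

The real block-based solvers are of course far more refined than this, but the point is that every query after the first can depend on what the earlier queries returned.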

Regards
Hannes
From: Nicholas Howe on
"Hannes Naudé" <naude.jj+matlab(a)gmail.com> wrote in message
> The reason why our simple block-based solvers appear to outperform the cutting-edge stuff from academia is that the problem as stated here does not correspond exactly to the compressive sampling problem typically studied in the real world. In that setting, one is not able to adjust one's query pattern dynamically based on the results of earlier queries. Rather, the entire set of query masks needs to be specified at the outset.
>
> Being able to adjust our queries to zoom in on areas with detail is a significant (although not realistic) advantage and allows the custom-developed code to outperform the off-the-shelf CS code by orders of magnitude.


Thanks for clarifying this, Hannes. I feel better now about not putting the time into delving through that CS material! (As I write this your entry stands at the top of the heap, so it looks like you managed to do quite well with other techniques also.)
From: Amitabh Verma on
"Alan Chalker" <alancNOSPAM(a)osc.edu> wrote in message
> That's an excellent counter-tactic! I'm glad to see someone trying to 'beat me at my own game';)


Just an afterthought... about the possibilities of using a tool.

In Daylight, using a tool/utility one can also query for the actual pixel value by checking it against the Result. If one runs it for 2 days with 10 ids, 48*60*10 = 28,800 attempts can yield 28,800/255 ≈ 113 true pixels.

255 is the maximum number of attempts required if one probes sequentially. Using a simple binary search (2^8 = 256 covers all possible values) one can find the true pixel value in at most 8 attempts, greatly increasing the yield to roughly 28,800/8 = 3,600 true pixels.
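
To make the bisection concrete, here is a tiny sketch; the isTooHigh oracle below is only a stand-in for whatever comparison the Result actually lets you make, not a real contest call:

truePixel = 173;                          % unknown value we are probing for
isTooHigh = @(guess) guess > truePixel;   % simulated yes/no answer per query
lo = 0;  hi = 255;  attempts = 0;
while lo < hi
    mid = floor((lo + hi) / 2);
    attempts = attempts + 1;
    if isTooHigh(mid + 1)                 % true value is at most mid
        hi = mid;
    else                                  % true value is above mid
        lo = mid + 1;
    end
end
fprintf('Recovered value %d in %d attempts\n', lo, attempts)  % at most 8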

The queue will also not get backed up, since this entry will run in ~2-3 seconds.

Now, the smallest image in the test suite had ~2500 pixels.

So now I have one image that is accurate pixel for pixel. I wonder how much score difference that can make?

My stand on this is that as long as it is within the rules defined by the MATLAB team, it's fine. However, if a number of people start doing this, things can go astray. Just my 2 cents.

PS: I used a modest total of only 10 ids, all named 'Uncle_Sam' 8-)
From: Sergey Y. on
As we can see, some people were trying to use the random mask approach.
I personally spent the first half of darkness trying to implement that method
(it is widely used in neuroscience for receptive field mapping in the visual cortex).
Unfortunately it did not look promising. Maybe using a Gabor function as the mask would be better, but I did not try it.
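
For what it is worth, this is the kind of mask I had in mind (a sinusoidal carrier under a Gaussian envelope; the parameter values are arbitrary, just for illustration):

sz     = 64;                  % mask size in pixels
sigma  = 8;                   % Gaussian envelope width
lambda = 16;                  % carrier wavelength
theta  = pi/4;                % carrier orientation
[x, y] = meshgrid((1:sz) - (sz+1)/2);
xr     = x*cos(theta) + y*sin(theta);            % coordinate along the carrier
mask   = exp(-(x.^2 + y.^2)/(2*sigma^2)) .* cos(2*pi*xr/lambda);
imagesc(mask), axis image, colormap gray         % have a look at it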
From: Sergey Y. on
"Amitabh Verma" <amtukv(a)gmail.com> wrote in message <hrsld1$k15$1(a)fred.mathworks.com>...
> "Alan Chalker" <alancNOSPAM(a)osc.edu> wrote in message
> > That's an excellent counter-tactic! I'm glad to see someone trying to 'beat me at my own game';)
>
> Just an afterthought... about the possibilities of using a tool.
>
> In Daylight, using a tool/utility one can also query for the actual pixel value by checking it against the Result. [snip]

However, direct probing is against the rules.