From: Nick Keighley on
On 12 May, 02:02, Richard Heathfield <r...(a)see.sig.invalid> wrote:
> Daniel T. wrote:

> > I think part of the problem here is that I'm being painted as some sort
> > of SESE bigot and attacked from those grounds.
>
> I don't think you're in any danger of being called a SESE bigot as long
> as I'm around. I'm a much easier target. :-)

I think of you as low-hanging fruit.



From: Nick Keighley on
On 12 May, 03:00, pete <pfil...(a)mindspring.com> wrote:
> Richard Heathfield wrote:
>
> > Daniel T. wrote:
> > <snip>
>
> > > I think part of the problem here is that
> > > I'm being painted as some sort
> > > of SESE bigot and attacked from those grounds.
>
> > I don't think you're in any danger of being called
> > a SESE bigot as long
> > as I'm around. I'm a much easier target. :-)
>
> I find SEME to be less objectionable in functions
> which are small enough so that you can't look at one exit
> without noticing all the rest of the exits.

is there any other sort of function?

:-)
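
For what it's worth, a minimal sketch of the sort of small multi-exit
function pete has in mind (the function name and signature here are
hypothetical): in a routine this short, both exits are visible at a glance.

#include <stddef.h>

/* Hypothetical example: a function small enough that every exit is
   visible at a glance. Returns the index of 'target' in 'a', or -1. */
int find_index(const int *a, size_t n, int target)
{
    for (size_t i = 0; i < n; i++) {
        if (a[i] == target)
            return (int)i;   /* exit 1: found */
    }
    return -1;               /* exit 2: not found */
}
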
From: Lie Ryan on
On 05/11/10 09:56, Ben Bacarisse wrote:
> Seebs <usenet-nospam(a)seebs.net> writes:
>
>> On 2010-05-10, Ben Bacarisse <ben.usenet(a)bsb.me.uk> wrote:
> <snip>
>>> You can't assume that the data is contiguous or, to be a little more
>>> general, that a single [] operation means the same as three of them.
>>> This is, after all, C++ not C.
>>
>> In C++ it's at least conceivable that someone could come up with some
>> exceptionally sneaky overloading of [] to make it viable.
>
> That's why I said "you can't assume...". It is conceivable but it is
> not a sensible assumption for the poster[1] to make. The post was about
> a re-write to simplify come code. If that involves a "sneaky
> overloading of []" any argument about simplification is blown away.

If it hides the complexity, then it could be argued that the result is
simpler, at least from the outside. C/C++ hides the complexity of
assembly; who would argue with that?

I think there are two separate definitions of complexity:

- subjective complexity: how complex the source code looks
- objective complexity: how complex the work the machine actually does

Nowadays, as machine prices go down, the tendency is to reduce
subjective complexity, while potentially increasing objective complexity
slightly.
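
As a rough sketch of the kind of "sneaky overloading of []" being
discussed (all class and member names here are made up): chained
operator[] calls can make a flat buffer read like a nested a[x][y][z]
access, lowering the subjective complexity while the index arithmetic
and the proxy temporaries are still there underneath.

#include <cstddef>
#include <vector>

// Hypothetical sketch: a flat buffer exposed through chained operator[],
// so g[x][y][z] reads like a nested array access while two proxy objects
// and the index arithmetic hide behind it.
class Grid3D {
    std::vector<double> data_;
    std::size_t ny_, nz_;

    class Row {
        double *p_;
    public:
        explicit Row(double *p) : p_(p) {}
        double &operator[](std::size_t z) { return p_[z]; }
    };

    class Plane {
        double *p_;
        std::size_t nz_;
    public:
        Plane(double *p, std::size_t nz) : p_(p), nz_(nz) {}
        Row operator[](std::size_t y) { return Row(p_ + y * nz_); }
    };

public:
    Grid3D(std::size_t nx, std::size_t ny, std::size_t nz)
        : data_(nx * ny * nz), ny_(ny), nz_(nz) {}

    Plane operator[](std::size_t x) {
        return Plane(data_.data() + x * ny_ * nz_, nz_);
    }
};

// Usage: Grid3D g(4, 5, 6); g[1][2][3] = 7.0;
// Three operator[] calls and two temporaries stand behind that one line.

Whether that counts as "simpler" is exactly the subjective-versus-objective
split above: the complexity is hidden, not removed.
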
From: Phil Carmody on
Richard Heathfield <rjh(a)see.sig.invalid> writes:
> Phil Carmody wrote:
>> Richard Heathfield <rjh(a)see.sig.invalid> writes:
>>> Willem wrote:
>>>> Let's take your example to make my point:
>>>> Richard Heathfield wrote:
>>>> ) found = 0;
>>>> ) for(x = 0; !found && x < xlim; x++)
>>
>>> No, the benefit of SESE is that you know that every loop has a single
>>> entry point and a single exit point.
>>
>> It can exit after !found fails, or after x < xlim fails.
>
> Nice try. They are, however, effectively the same exit point.

I disagree - there's a sequence point separating them.

Phil
--
I find the easiest thing to do is to k/f myself and just troll away
-- David Melville on r.a.s.f1
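
For reference, a minimal sketch of the two styles being argued over,
assuming a plain linear search over an array (variable names follow the
quoted snippet; the function names are hypothetical):

#include <cstddef>

// SESE style, as in the quoted loop: a flag keeps the single textual
// exit at the loop condition.
int search_sese(const int *a, std::size_t xlim, int target)
{
    int found = 0;
    std::size_t x;
    for (x = 0; !found && x < xlim; x++) {
        if (a[x] == target) {
            found = 1;
        }
    }
    /* x is incremented once more after the hit, hence x - 1 */
    return found ? (int)(x - 1) : -1;
}

// SEME style: leave the loop as soon as the element is seen.
int search_seme(const int *a, std::size_t xlim, int target)
{
    for (std::size_t x = 0; x < xlim; x++) {
        if (a[x] == target) {
            return (int)x;
        }
    }
    return -1;
}
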
From: gwowen on
On May 11, 6:44 pm, Seebs <usenet-nos...(a)seebs.net> wrote:
> On 2010-05-11, Richard Heathfield <r...(a)see.sig.invalid> wrote:
>
> > Seebs wrote:
> >>> You already have a list of foos, properly ordered for fast searching.
> >> No, I don't.
> > Then I suggest you get one.
>
> It's often completely unsuitable to the problem at hand.

If your problem is not amenable to Richard H's potted solutions, the
fault lies with your problem, not Richard's solution. You should know
this by now.

So if you have a disk-backed, self-caching nested container storing a
2 million * 2 million * 2 million high-resolution MRI scan image,
scrupulously reconstructed using Radon transforms, and you're searching
for pixels of a certain shade (ones that indicate a possible tumor,
say), then the first thing you should do is convert it into a
one-dimensional in-memory sorted C array. The fact that you won't be
able to obtain the co-ordinates afterwards to display the appropriate
slice to the physician is irrelevant. Sort and binary search. Any other
solution is wrong.
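
Spelling the sarcasm out: a sorted, flattened copy does answer "is this
shade present?" quickly, but the sorted position no longer maps back to
(x, y, z). A toy sketch, with made-up dimensions far smaller than the
ones above:

#include <algorithm>
#include <cstddef>
#include <vector>

int main()
{
    // Made-up, tiny dimensions standing in for the huge volume above.
    const std::size_t NX = 4, NY = 4, NZ = 4;
    std::vector<int> volume(NX * NY * NZ);

    // In the original layout, a flat index i maps back to coordinates:
    //   x = i / (NY * NZ), y = (i / NZ) % NY, z = i % NZ
    // so finding a value also tells you where it is.

    std::vector<int> flat = volume;      // copy, then sort for binary search
    std::sort(flat.begin(), flat.end());

    // binary_search answers "is this shade present?" in O(log n)...
    bool present = std::binary_search(flat.begin(), flat.end(), 42);

    // ...but the position in the sorted copy says nothing about
    // (x, y, z), which is exactly what the physician's slice view needs.
    (void)present;
    return 0;
}
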