From: Tim Wescott on
On Tue, 01 Sep 2009 23:41:14 +0000, Andrew Reilly wrote:

> On Tue, 01 Sep 2009 07:43:08 -0700, D Yuniskis wrote:
>> I've revisited this subject (and question) far too many times in my
>> career -- yet, I keep *hoping* the answer might change (sure sign of
>> insanity? :> )
>
> The answer *will* eventually change, because the environment (the number
> and popularity of languages) is changing.
>
>> I have a largish application that *really* would benefit (from the
>> standpoint of maintenance) from the use of something like C++.

-- snip --
>
> Like Tim said: watch out for allocation and avoid deallocation, and be
> wary of exceptions. To that I'd add: be gentle with the operator
> overloading: "cool" is the enemy of understandable. That still leaves
> you with first class inheritance and polymorphism, which are IMO the big
> check-boxes that you're looking for, to support object oriented design
> (and which aren't easy to do "nicely" in C).
>
> Cheers,

I _only_ use operator overloading for well-defined arithmetic types (e.g.
matrices & vectors), in the way the standard library already does (e.g.
the '>>' operator for stream input and '<<' for output), and for some
kinds of 'smart' pointers (e.g. the [] operator, for element selection).

In other words, if I don't change the obvious semantics, then I'll
consider it. Otherwise I leave it be.
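
For instance, something like this (a sketch only; Vec3 is invented
just for illustration):

    // Component-wise addition -- exactly what a reader expects
    // '+' to mean for a vector type.  No surprises.
    struct Vec3 {
        double x, y, z;
    };

    inline Vec3 operator+(const Vec3 &a, const Vec3 &b)
    {
        Vec3 r;
        r.x = a.x + b.x;
        r.y = a.y + b.y;
        r.z = a.z + b.z;
        return r;
    }

What I will _not_ do is overload '+' to mean, say, "append to a
log" -- anything a reader couldn't guess from the operator itself.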

--
www.wescottdesign.com
From: D Yuniskis on
Hi Vladimir,

Vladimir Vassilevsky wrote:
> D Yuniskis wrote:
>> Are there any folks who have successfully deployed larger
>> applications in an OO language? No, I'm not talking about
>> desktop apps where the user can reboot when the system
>> gets munged. I'm working in a 365/24/7 environment so
>> things "just HAVE to work".
>
> We have a relatively big embedded system (several M of the source code,
> filesystem, TCP/IP, RTOS, etc.) developed by a team of five programmers.
> This system works in the field, 24/7, unattended.

With a filesystem, I am assuming (not a "given") that you have
lots of resources available (?). E.g., does the OS support VM?
Or, are all of the tasks (processes) running on it known to have
bounded resource requirements?

Was the OS written in C++ or just the applications?

Does the OS have provisions to detect (and recover from)
crashed applications? Or, does a crashed application
bring the system to its knees?

> At the very beginning, there was the usual trivial argument about C vs
> C++, and it was decided to use C++. Now I can say that was a wise
> choice; it would be difficult to tackle the complexity in C.

Agreed. But, the problem (IMO) with C++ (or other high-level languages)
is that it is often hard to find folks who *really* know what's
happening "under the hood". I.e., the sort of intuitive
understanding of exactly what the compiler will generate for
*any* arbitrary code fragment.

E.g., I have been writing in C++ for many years now and I am
constantly surprised by things that happen "unexpectedly".
There are just way too many little things that go on that
catch me off guard. If I am writing for a desktop
environment, I can usually shrug and deal with it. But,
when I have a fixed, tightly constrained set of resources
to work within (TEXT, DATA and "time"), these "judgment
lapses" quickly get out of hand. :<

>> Any tips you can share that can help me get C-like
>> behavior from a C++-like implementation? (besides the
>> obvious: "use only the C subset of C++" :> )
>
> "There is no and there can't be any substitute for the intelligence,
> experience, common sense and good taste" (Stroustrup).

(sigh) But said by someone working for a firm with lots of
re$ource$ to devote to staff, etc. Things seem to be considerably
different in the "real world" (I am continually disappointed
with the caliber of the "programmers" I meet... "just get it
done" seems to be their mantra -- note that "right" is not
part of that! :< )

It is exactly this problem that has me vacillating about whether
a "highly structured C approach" would be better or worse than
doing it in C++ (or another HLL). I.e., which are "average Joes"
least likely to screw up? :-/
From: D Yuniskis on
larwe wrote:
> On Sep 1, 10:43 am, D Yuniskis <not.going.to...(a)seen.com> wrote:
>
>> But, doing things in such a "low level" language (C)
>> will ultimately make the code harder to maintain for those
>> that follow me. It really would be nice if I could leverage
>> some OO features to relax some of the discipline that I
>> have had to impose on the code doing "OO under C".
>
> A thought: Have you considered who is likely to follow you, and how
> they will be recruited? Consider that the job posting will (if you
> move to C++) likely say "Mandatory 5+ years C++ programming experience
> in an embedded environment". Ponder the fact that the qualifier "in an
> embedded environment" might just mean writing trivial UI code for some
> product that's a PC-in-a-box.

Exactly. But, the same can apply to a C implementation.
People seem to think (or, perhaps, just *claim*) they know
more than they do about language particulars. I am inherently
leery of anyone who doesn't have a hardware background and
hasn't spent a fair bit of time writing assembly language
code -- just so they have a good feel for what the "machine"
really is, can do, etc. But, hiring often gives way to
expediencies under the assumption that "what they don't
know, they can *learn*"...
From: D Yuniskis on
Hi Chris,

Chris Stratton wrote:
> On Sep 1, 10:43 am, D Yuniskis <not.going.to...(a)seen.com> wrote:
>
>> But, doing things in such a "low level" language (C)
>> will ultimately make the code harder to maintain for those
>> that follow me. It really would be nice if I could leverage
>> some OO features to relax some of the discipline that I
>> have had to impose on the code doing "OO under C".
>
> Personally, what I dislike is having to mention "objects" in multiple
> places. What I want is a system where I can instantiate things inline
> as I need them ("make me a menu with these choices"), but have all the
> allocations be predetermined during compilation (no runtime memory
> management surprises) and summarized for me.

Well, sometimes that just isn't possible. Or, at least not
easily automated (especially if you are resource constrained).
Memory management is always a tricky subject. At upper application
levels, you can afford (somewhat) to let the application deal
with memory problems (running out of heap, etc.). At lower levels
(e.g., within the OS), you often have to "guarantee" that
"/* Can't Happen */" REALLY CAN'T HAPPEN!

>> Are there any folks who have successfully deployed larger
>> applications in an OO language? No, I'm not talking about
>> desktop apps where the user can reboot when the system
>> gets munged. I'm working in a 365/24/7 environment so
>> things "just HAVE to work".
>
> The android phone I've been playing with comes close... pseudo-java
> with an active garbage collector...

<frown> I think anything that relies on GC will bite me
as it is hard to get deterministic performance when you don't
know how/when the GC will come along. Hence my suggestion
that new() be heavily overloaded (probably relying on lots
of "buffer pools" for the corresponding objects) just to
make sure "automatic allocation" can work in a reasonably
deterministic fashion (like purely static allocation)

But, I've had other folks mention android to me so it is probably
worth poking under the hood...
From: D Yuniskis on
Hi Tim,

Tim Wescott wrote:
> On Tue, 01 Sep 2009 07:43:08 -0700, D Yuniskis wrote:

[snip]

>> Are there any folks who have successfully deployed larger applications
>> in an OO language? No, I'm not talking about desktop apps where the
>> user can reboot when the system gets munged. I'm working in a 365/24/7
>> environment so things "just HAVE to work".
>>
>> Any tips you can share that can help me get C-like behavior from a
>> C++-like implementation? (besides the obvious: "use only the C subset
>> of C++" :> )
>
> I was software lead for a series of projects that brought my then-
> employer into using C++ for largish code bases. They have been quite
> successful, although there have been rough edges, too.
>
> Tips for success?
>
> * You're already aware of the heap problem. I solve this not so much by
> avoiding 'new' as by _never_ allocating off of the heap inside a task
> loop. I'll allocate off the heap in start up code (i.e. use 'new', but
> never 'delete'), I'll allocate off the heap in 'tweak and tune' code
> (i.e. code that'll be used by engineering, service and manufacturing
> personnel, but never by a customer in normal operation), and if pressed
> I'll make a pool of blocks and overload 'new' -- but I do this latter
> only rarely. Of course, you can only do this if you can allocate all
> required memory statically -- but if you can't, you probably don't have
> enough memory anyway.

The problem that I see with C++ is that it really *likes* to
create anonymous temporary objects. When those objects manage
heap storage, every temporary is a hidden allocation (unless you
are very careful and manually create each object that will be
used in a computation/operation and later destroy it; but, this
just makes the allocation more obvious -- it doesn't prevent it!).
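
The classic case, sketched with an invented Matrix type that owns
heap storage:

    // Hypothetical matrix that heap-allocates its elements.
    class Matrix {
    public:
        Matrix();                           // allocates
        Matrix(const Matrix &);             // allocates + copies
        ~Matrix();                          // frees
        Matrix &operator=(const Matrix &);  // may reuse storage
        Matrix &operator+=(const Matrix &); // in place: no allocation
    };

    // Returns by value: every use manufactures a new Matrix.
    Matrix operator+(const Matrix &a, const Matrix &b);

    void example(Matrix &result, const Matrix &a,
                 const Matrix &b, const Matrix &c)
    {
        // Reads beautifully -- and silently creates two anonymous
        // temporaries (a+b, then (a+b)+c), each one a hidden
        // alloc/free pair.
        result = a + b + c;

        // Same arithmetic with the allocations made explicit:
        // uglier, but nothing happens behind your back.
        result  = a;
        result += b;
        result += c;
    }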

Since resources are bounded, I have to think carefully about
what memory needs will be, *who* will need them and *when*.
Often, this lets me reuse memory regions among mutually
exclusive "tasks" (generic sense of the word) for an overall
economy.
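
In its crudest form, that reuse can be as simple as overlaying the
regions explicitly (names and sizes invented for illustration):

    // Two phases that never run concurrently share one
    // statically allocated region.
    union SharedArena {
        char boot_workspace[8 * 1024];  // used only during startup
        char report_buffer[8 * 1024];   // used only at runtime
    };
    static SharedArena arena;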

E.g., I cheat and redirect file system operations directly to
the actual (solid state) media (e.g., SD card, CF, etc.)
instead of pulling the data into "real" memory. It takes a
performance hit in some cases but eliminates that otherwise
redundant copy of the "file data".

> * Don't reuse code too early. The company has a great big pile of very
> useful reusable code, all of which was written the second or third time
> around in its current form. Reusable code is an art that requires you
> to capture just the right feature sets in your interfaces, and put deep
> thought into just what belongs where. It's not for everyone, but if you
> can do it you can _really_ leverage it to your advantage.

I don't worry about reusing code. I spend more time concentrating
on reusing *algorithms*. It's hard to reuse code when you may
be writing for different languages, different machines, etc.
OTOH, (re)using an *approach* to a problem saves you all the
engineering time (coding a known algorithm usually is pretty
trivial).

> * Most embedded C++ compilers have gotten pretty efficient with core
> language constructs (I'm told that even exception handling isn't so bad
> anymore, but I haven't tried it). _Don't_ use the standard libraries
> with anything but extreme caution, however -- if you want to find
> yourself pulling in everything including at least ten kitchen sinks,
> start using the STL in your embedded code.

<frown> Yes, I have found the STL to be incredibly *un*useful.
It's too generic and too "fat" for most applications (perhaps
desktop applications can afford this?).

> * Beware C programmers masquerading as C++ programmers, and beware
> desktop programmers masquerading as embedded programmers. When we were

Exactly! And, without meaning any disrespect to any of those
folks, often they simply don't *know* the differences (i.e.,
what they *don't* know).

> doing this we were embedded folks teaching ourselves C++, and we had much
> better success taking good, 'virgin' embedded C programmers and tossing
> them at C++ -- the "it looks like C but compiles when you name it .cpp"
> crowd is bad, and the "what do you mean I don't have virtual memory?"
> crowd is worse.

Yes. There is a very different mindset between the different camps.
Desktop programmers tend to do things "easier" (for themselves)
and run through lots of resources that just aren't available
in an embedded environment. And, they probably have never actually
looked at how well their code performs (e.g., what is your
maximum stack penetration? where are your timing bottlenecks?
how many page-ins/outs are happening under light load vs. heavy
load, etc.)

You can often catch these folks with trivial "tests" that
highlight some aspect of a language (or "operating environment")
that they might not be aware of. For example:

    for (i = 0; i < MAXI; i++)
        for (j = 0; j < MAXJ; j++)
            foo[i][j] = <whatever>;

vs. the "identical":

    for (j = 0; j < MAXJ; j++)
        for (i = 0; i < MAXI; i++)
            foo[i][j] = <whatever>;

(The first walks foo[][] in storage order -- C arrays are row-major --
while the second strides across rows; on a cached or paged machine
the performance difference can be dramatic.)

> * If you can, start small. Just use the basic OO features of C++,
> possibly in just a portion of your code. Grow from there. Trying to do
> a giant project at the same time that you learn C++ will be awkward --
> although if you're already using OO design techniques it may not be as
> bad as it sounds.

*Learning* C++ isn't the issue. The issue is deciding whether the
risks (of subsequent development/maintenance efforts) of using
*it* are greater than the risks of using highly structured C.

There are many cases where I have found that C++ *looks* like
the right way to approach a problem, but the performance issues
make it incredibly WRONG for that problem. Reading the C++ code
makes that solution look very tempting -- until you actually
profile it and compare it against an equivalent implementation
in C.

E.g., I wrote a gesture recognizer in C++. It was *slick*.
But, almost twice the size and three times slower than the
identical algorithm implemented in C. End result: figure
out how to gain the structure advantages of C++ without
dragging in all of its *cost*. :-(