From: D Yuniskis on
Hi Arlet,

Arlet wrote:
> On Sep 1, 4:43 pm, D Yuniskis <not.going.to...(a)seen.com> wrote:
>
>> I have a largish application that *really* would
>> benefit (from the standpoint of maintenance) from the
>> use of something like C++. Currently, I use an OO
>> programming *style* but write the code entirely in
>> C. I just find C++ too damn difficult to keep track
>> of all the magic that goes on behind the scenes so
>> getting deterministic behavior (RT application) is
>> much easier from C (i.e., I can more accurately
>> visualize what the machine is doing each time I write
>> a statement -- without having to worry about whether
>> an anonymous object is being created as a side-effect of
>> a statement).
>
> The Linux kernel also uses an OO programming style in C, and it works
> quite well. Maybe you can look at some of the design ideas (if you're
> not already familiar with them).

Ah! I wasn't aware of that! Thanks! I *think* I have a good
approach (my "highly structured C" style). Things *work*
well. I'm just afraid others won't be able to keep up that
discipline or fully understand what is going on (e.g., lots of
function pointers -- people seem not to like pointers... let
alone pointers to functions! :< )
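
For anyone following along, the kernel-style pattern boils down to
something like this -- a minimal sketch, with invented "device" names
(not taken from any real codebase):

#include <stddef.h>

struct device;                          /* forward declaration */

struct dev_ops {                        /* the "vtable" */
    int  (*open) (struct device *dev);
    int  (*write)(struct device *dev, const void *buf, size_t len);
    void (*close)(struct device *dev);
};

struct device {                         /* the "base class" */
    const struct dev_ops *ops;          /* dispatch table */
    void                 *priv;         /* per-"subclass" state */
};

/* One concrete "subclass": a UART. */
static int  uart_open (struct device *dev) { (void)dev; return 0; }
static int  uart_write(struct device *dev, const void *buf, size_t len)
{ (void)dev; (void)buf; return (int)len; }
static void uart_close(struct device *dev) { (void)dev; }

static const struct dev_ops uart_ops = { uart_open, uart_write, uart_close };

/* Callers dispatch through the table, never on the concrete type: */
int dev_write(struct device *dev, const void *buf, size_t len)
{
    return dev->ops->write(dev, buf, len);
}
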
From: D Yuniskis on
Hi Robert -- and Hans,

robertwessel2(a)yahoo.com wrote:
> On Sep 1, 3:25 pm, Hans-Bernhard Bröker <HBBroe...(a)t-online.de> wrote:
>> D Yuniskis wrote:
>>> I have a largish application that *really* would
>>> benefit (from the standpoint of maintenance) from the
>>> use of something like C++.
>> Beware, here be dragons. One of the more successfully disguised ones
>> is: how do you make sure you don't run out of stack at run-time, if
>> basically all function calls go through run-time evaluated function
>> pointers?
>>
>> Now that'll need some explaining. If you've used static stack size
>> determination tools before, you'll have noticed that basically none of
>> them can follow calls made via function pointers or recursive ones (and
>> ultimately, it's provably impossible to do so anyway). But in bona-fide
>> OO code every other function call gets dispatched via some object's
>> method table (a.k.a. "vtable"), i.e. deep down it's a function pointer.
>>
>> So from the point-of-view of static analysis, your whole OO program
>> falls apart into many disconnected shreds of call tree, and there's no
>> way left to put it all back together.
>
> At least theoretically, the problem is not nearly as bad as unbounded
> function pointers. A virtually dispatched function will come from a
> precisely defined set of possibilities - the classes derived from the
> base in question, and thus is no worse than a switch statement
> selecting a particular function to call (which obviously is amenable
> to worst case stack depth analysis).
>
> Obviously the ability of a particular tool to follow that construct
> through the entire code base is an issue.
>
> And it's only an issue with (base) classes with virtual functions,
> which does limit the scope somewhat.
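
To make that concrete, a hedged sketch in C (handler names and stack
figures are invented): if every pointer is constrained to a statically
enumerable table, the worst case is just the maximum over that table:

typedef void (*handler_fn)(void);

static void handler_a(void) { /* worst-case stack use: say, 64 bytes  */ }
static void handler_b(void) { /* worst-case stack use: say, 128 bytes */ }
static void handler_c(void) { /* worst-case stack use: say, 256 bytes */ }

/* The complete set of possible targets, known at compile time: */
static const handler_fn handlers[] = { handler_a, handler_b, handler_c };

void dispatch(unsigned idx)
{
    /* The target set is closed, so the worst-case stack depth here
     * is the max over handlers[] (256) -- analyzable, like a switch. */
    handlers[idx % (sizeof handlers / sizeof handlers[0])]();
}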

I only have to "worry" about those issues within the OS.
An application can run out of stack and the OS will compensate.
It will allocate additional stack space as required. And, if
it can't do this, it will stop the offending application
and notify it of the problem. It can then be restarted
with a more generous resource request (if *that* is too
much for the system to accommodate, then the application
isn't started, etc.).

I am particularly fond of recursive algorithms so I am
used to having to manually analyze stack penetration
(since there is no way to really tell the "application"
what types of input it *will* encounter). In fact, I have
another post pending that tries to address bounding what
would otherwise be an automatic variable's allocation
(to get deterministic performance from the algorithm
in the face of unbounded input).
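
The flavor of it is to trade the automatic allocation for an explicit,
fixed-size stack that fails deterministically -- a sketch only, with a
hypothetical node type and depth bound:

#define MAX_DEPTH 32                  /* chosen worst-case bound */

struct node { struct node *left, *right; int value; };

int visit_inorder(struct node *root, void (*visit)(int))
{
    struct node *stack[MAX_DEPTH];    /* bounded, deterministic */
    int top = 0;
    struct node *cur = root;

    while (cur != NULL || top > 0) {
        while (cur != NULL) {
            if (top == MAX_DEPTH)
                return -1;            /* bound exceeded: report it, */
            stack[top++] = cur;       /* don't blow the real stack  */
            cur = cur->left;
        }
        cur = stack[--top];
        visit(cur->value);
        cur = cur->right;
    }
    return 0;                         /* traversal completed */
}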
From: D Yuniskis on
Hi Andrew,

Andrew Reilly wrote:
> On Tue, 01 Sep 2009 07:43:08 -0700, D Yuniskis wrote:
>> I have a largish application that *really* would benefit (from the
>> standpoint of maintenance) from the use of something like C++.
>> Currently, I use an OO programming *style* but write the code entirely
>> in C. I just find C++ too damn difficult to keep track of all the magic
>> that goes on behind the scenes so getting deterministic behavior (RT
>> application) is much easier from C (i.e., I can more accurately
>> visualize what the machine is doing each time I write a statement --
>> without having to worry about whether an anonymous object is being
>> created as a side-effect of a statement).
>>
>> It also makes it much easier for me to keep track of physical resources
>> (it seems like I'd be constantly overloading new() in a C++
>> implementation just to make sure I can corral each wayward instance into
>> a known part of memory, etc.)
>
> I like to keep a mental distinction between design and implementation,
> and I don't find myself too limited by designing in an object-oriented
> style and coding in C. Well, that is, there was a time when I didn't.

Exactly. But I find that most C programmers have a hard time
grasping this. I.e., it *looks* (to them) like lots of extra
complexity and "machinery" -- especially because *they*
must do all of the things that the C++ compiler would have done.
You get this "why bother?" look from them...

> More recently I've been doing some higher-level projects and using higher-
> level, more dynamic languages, and I have to say, they certainly have
> some appeal. Have you completely discarded the possibility of using more
> than one language for your project? Just as in days of yore it was

There are several languages at play in the product.
At the highest levels, I use much more modern approaches
to problems -- but, the problems are usually much "simpler"
(e.g., the OS, together with the services that are directly
bundled with it, is by far the most complicated piece of code).
E.g., I can compute pi to any arbitrary precision in a
few lines of code. OTOH, guaranteeing that a TCP/IP
connection is serviced at the right relative priority wrt
other active connections takes considerably more code! :>

> common to code the high-level parts in C and fall back to assembly
> language for the tricky parts, I find it pretty effective to code the
> high level parts in C++/Java/scheme/whatever and fall back to C for the
> pieces that I really have to be certain about. All of the "higher
> level", OO languages have good ways to interact with C code.
>
>> But, doing things in such a "low level" language (C) will ultimately
>> make the code harder to maintain for those that follow me. It really
>> would be nice if I could leverage some OO features to relax some of the
>> discipline that I have had to impose on the code doing "OO under C".
>
> Consider that if your "OO under C" is making your C too complicated, then
> you are probably going into one of those areas where your C++ or whatever
> would also be difficult to reason about or understand.

Exactly. I think a C++ guru *might* be able to instinctively
code "appropriately" for this environment. But, I suspect
the garden-variety "C++ coder" is likely to be oblivious to
his errors. Possibly *through* development, production
and *deployment*! :<

I wonder if I'm just stuck with a "you really MUST hire talented
people" problem (in which case, does it really *matter* which
approach you take?)

>> I am sorely tempted to go back and rewrite the OS itself, as a first
>> exercise, just to see how damaging this venture might become (in terms
>> of code size, runtime performance and reliability). But, that's a fair
>> bit of work (the image is a bit over 500K currently) and I'd hate to
>> undertake it naively.
>>
>> Are there any folks who have successfully deployed larger applications
>> in an OO language? No, I'm not talking about desktop apps where the
>> user can reboot when the system gets munged. I'm working in a 365/24/7
>> environment so things "just HAVE to work".
>
> I know of at least two RTOSes that have been written in C++ (OK-L4 and
> eCos), and there have been a few in the past in Modula-3 and Java, so
> even automatic garbage collection can be managed for some systems.

eCos is tiny. I have no familiarity with OK-L4 (is it the
successor to "L3"? I don't know what the "OK" means...)

> When a similar discussion came up on Comp.arch last week, there was quite
> a bit of vocal support for both contemporary Ada and D, both of which are

<groan> Stay away from Ada. It *really* doesn't seem worth the
effort! :<

> on my list to try. I suspect that I'll be getting to D first, because
> I've been reading about a project on FreeBSD that will be using it...
>
>> Any tips you can share that can help me get C-like behavior from a
>> C++-like implementation? (besides the obvious: "use only the C subset
>> of C++" :> )
>
> Like Tim said: watch out for allocation and avoid deallocation, and be
> wary of exceptions. To that I'd add: be gentle with the operator
> overloading: "cool" is the enemy of understandable. That still leaves
> you with first class inheritance and polymorphism, which are IMO the big
> check-boxes that you're looking for, to support object oriented design
> (and which aren't easy to do "nicely" in C).

Inheritance is *way* too heavy a burden on the developer (under C).
You just have to remember to do too much "manually".

In looking at the OS itself (my first candidate for rewrite),
I don't think I would gain much/anything using inheritance.
The objects are too "orthogonal". While there might be *one* common
base class that applies to many/all of them, this could easily
be implemented through "discipline".
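
For comparison, here is roughly what that "discipline" looks like in C
(a sketch, with invented names) -- the base must be the *first* member
of every derived struct, and nothing but convention keeps the casts
honest:

struct object {                       /* the common "base class" */
    const char *name;
    void (*destroy)(struct object *self);
};

struct timer {                        /* a "derived class" */
    struct object base;               /* MUST be the first member */
    unsigned long period_ms;
};

static void timer_destroy(struct object *self)
{
    /* The down-cast is only valid because base is first: */
    struct timer *t = (struct timer *)self;
    (void)t;                          /* ... release timer resources ... */
}

/* The "virtual" call site -- nothing stops a caller from handing
 * this the wrong struct; that's where the discipline comes in: */
void object_destroy(struct object *obj)
{
    obj->destroy(obj);
}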
From: D Yuniskis on
Hi Chris,

Chris Burrows wrote:
> "D Yuniskis" <not.going.to.be(a)seen.com> wrote in message
> news:h7jbos$a1p$1(a)aioe.org...
>> Are there any folks who have successfully deployed larger
>> applications in an OO language? No, I'm not talking about
>> desktop apps where the user can reboot when the system
>> gets munged. I'm working in a 365/24/7 environment so
>> things "just HAVE to work".
>
> "Minos - The design and implementation of an embedded real-time operating
> system with a perspective of fault tolerance" (IMCSIT 2008):
>
> http://www.proceedings2008.imcsit.org/pliks/85.pdf
>
> Minos was written in Oberon-07. Oberon-07 hasn't got the full set of OO
> features of its cousin Oberon-2. However, it gives you the same level of
> control as C without the associated risks.

Thanks! I will look into that!
From: Chris Stratton on
On Sep 2, 2:18 am, D Yuniskis <not.going.to...(a)seen.com> wrote:

> > Personally, what I dislike is having to mention "objects" in multiple
> > places.  What I want is a system where I can instantiate things inline
> > as I need them ("make me a menu with these choices"), but have all the
> > allocations be predetermined during compilation (no runtime memory
> > management surprises) and summarized for me.
>
> Well, sometimes that just isn't possible.  Or, at least not
> easily automated (especially if you are resource constrained).
> Memory management is always a tricky subject.  At upper application
> levels, you can afford (somewhat) to let the application deal
> with memory problems (running out of heap, etc.).  At lower levels
> (e.g., within the OS), you often have to "guarantee" that
> "/* Can't Happen */" REALLY CAN'T HAPPEN!

Actually what I want to do is not very complicated. The end result I
want can be achieved by manually typing the right things in the right
places in the source files. What I dislike is that when I add, say, a
menu to my user interface, I then have to go back to a different place
in the file and list its choices. I've been considering writing a sort
of pre-preprocessor to find these "as needed" mentions and move them
to a place the compiler will tolerate... but then compiler error
locations would be off, and source-level debugging wouldn't reflect
the editable sources...
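
One macro-level trick that gets part of the way there is the "X-macro"
idiom -- a sketch with invented names, not a drop-in solution: each
menu's choices are listed exactly once, and the compiler expands that
single list into the statically allocated tables:

static void do_open(void) { /* ... */ }
static void do_save(void) { /* ... */ }
static void do_quit(void) { /* ... */ }

#define MAIN_MENU_CHOICES(X) \
    X(OPEN, "Open", do_open) \
    X(SAVE, "Save", do_save) \
    X(QUIT, "Quit", do_quit)

#define AS_LABEL(id, text, fn)   text,
#define AS_HANDLER(id, text, fn) fn,

static const char *main_menu_labels[] =
    { MAIN_MENU_CHOICES(AS_LABEL) };
static void (*main_menu_handlers[])(void) =
    { MAIN_MENU_CHOICES(AS_HANDLER) };

/* sizeof keeps the count in sync with the single list: */
enum { MAIN_MENU_COUNT =
       sizeof main_menu_labels / sizeof main_menu_labels[0] };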

> >> Are there any folks who have successfully deployed larger
> >> applications in an OO language?  No, I'm not talking about
> >> desktop apps where the user can reboot when the system
> >> gets munged.  I'm working in a 365/24/7 environment so
> >> things "just HAVE to work".
>
> > The android phone I've been playing with comes close... pseudo-java
> > with an active garbage collector...
>
> <frown>  I think anything that relies on GC will bite me
> as it is hard to get deterministic performance when you don't
> know how/when the GC will come along.  Hence my suggestion
> that new() be heavily overloaded (probably relying on lots
> of "buffer pools" for the corresponding objects) just to
> make sure "automatic allocation" can work in a reasonably
> deterministic fashion (like purely static allocation).

Not sure whether I can blame that, but it crashed when my alarm
clock went off this morning after about a week and a half of
uptime... fortunately the "alarm" that continued to sound was a
decent audio track.
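
In C terms, the "buffer pool" idea quoted above amounts to something
like this (a sketch; names and sizes are invented): each object type
draws from its own fixed, statically allocated pool, so allocation is
O(1), deterministic, and can't fragment -- roughly what a per-class
operator new() would wrap in C++:

#include <stddef.h>

#define POOL_SIZE 16

struct msg { struct msg *next; char payload[32]; };

static struct msg  pool_storage[POOL_SIZE];
static struct msg *free_list;

void msg_pool_init(void)                 /* chain the free list once */
{
    size_t i;
    for (i = 0; i < POOL_SIZE - 1; i++)
        pool_storage[i].next = &pool_storage[i + 1];
    pool_storage[POOL_SIZE - 1].next = NULL;
    free_list = pool_storage;
}

struct msg *msg_alloc(void)              /* O(1); no heap, no GC */
{
    struct msg *m = free_list;
    if (m != NULL)
        free_list = m->next;
    return m;                            /* NULL = pool exhausted */
}

void msg_free(struct msg *m)             /* O(1) */
{
    m->next   = free_list;
    free_list = m;
}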