From: Grant Edwards on
On 2009-09-03, Niklas Holsti <niklas.holsti(a)tidorum.invalid> wrote:
> Grant Edwards wrote:
>> On 2009-09-03, Niklas Holsti <niklas.holsti(a)tidorum.invalid> wrote:
>>
>>> Some quotes: "Ada cost almost half of what the C code cost,
>>> and contained significantly fewer defects per 1000 SLOC by 7x
>>> the C code (700%). Even on the new C++ code, Ada still has
>>> 440% fewer defects."
>>
>> Huh? 440% fewer defects? Doesn't "100% fewer defects" mean
>> zero defects? What does 440% fewer mean?
>
> Yeah, bad language, ugh and sorry -- but I just quoted.

I didn't mean to imply that you were the author of the
sentence, as it's clearly a quote from another document.

> Earlier in the quote, "700%" means the same as "7 times", so
> no doubt "440%" means that the C++ code had on the average 4.4
> times as many errors, per SLOC, as the Ada code. In other
> words, the error density in Ada code was about 1/4 of that in
> C++.
>
> If we can pass over this question of proper use of per cent,
> the numbers are pretty impressive, right?

Yes. Taken on face value, they are impressive.

> So why isn't Ada used more?

The only data point I have first-hand is from the late '80s.
The first and only time I used Ada, the toolset and development
environment were amazingly horrible. It was on a VAX/VMS
system. Rather than a stand-alone toolchain that could be used
alongside normal VMS stuff (text editors, file utilities, build
systems, configuration management) like all the other compilers
(Pascal, C, FORTRAN, etc.), the Ada compiler lived in its own
completely isolated (and very crippled) "world". It had its
own command-line interface, its own useless editor, build
management system, source control system, and
_even_its_own_file_system_. You could import/export files to
the normal VMS filesystem, but it wasn't easy.

It was absolutely awful.

The intention was apparently that the Ada "environment" (more
aptly called a "torture chamber") would be completely
host-system-independent and standardized, so that the user would
have the same identically painful and unproductive experience
under VMS as he would on Unix or OS/360 or any other host system.
I don't know how successfully the goal of system independence was
achieved, but I can vouch for the fact that the result was almost
impossible to use.

I had used Pascal for a lot of embedded stuff in the past and
thought it worked very well, and I actually rather liked the
Ada language. But, it was _by_far_ the worst language
implementation I'd ever used (and that includes some pretty bad
batch-mode, punched-card-based stuff).

Later Pascal's popularity waned and I had to start using C for
embedded stuff. It was definitely a step backwards in terms of
reliability and quality of resulting code.

I think the vast majority of embedded projects would be much
better off using something like Pascal, Modula 2/3, Oberon or
Ada instead of C or C++.

--
Grant Edwards    grante at visi.com    Yow! I wonder if I should
                                       put myself in ESCROW!!
From: Dombo on
Niklas Holsti wrote:

> If we can pass over this question of proper use of per cent, the numbers
> are pretty impressive, right? So why isn't Ada used more? I suspect that
> many C/C++ programmers feel a bit offended by statistics like these;
> sort of thinking "If that's right, then I must be stupid to use C/C++...
> Are you calling me stupid? No way!"

From what I have read/heard, Ada appears to me to be a very
interesting programming language. What has prevented me from
investing time to learn it is that none of my clients use it.

> In practice, there are many good reasons for choosing C in many embedded
> projects -- the compilers are often cheap and always available, there
> may be example code and legacy code in C,

Another consideration is the availability of programmers who are
actually proficient with the language. A couple of times a year I
speak with someone working for a company which has been using Ada
for their products for more than a decade. That company is moving
away from Ada, the most important reason being that they have
difficulties finding Ada programmers. Since they often hire
programmers on a temporary basis, having ready access to people
with the required skills is an important consideration.

Also the availability of libraries and tooling is something to consider.

The lack of an ecosystem like there is for C, and to a lesser degree C++,
probably prevents Ada from gaining momentum. Without a sufficiently big
player willing to invest to make Ada popular (like Sun did with Java and
Microsoft did with C#), Ada will likely remain a niche language, despite
its merits.

> stick with what you know, etc.
> But for larger projects, perhaps with a long life, these studies suggest
> that it could be cheaper to pay for Ada tools, perhaps even to select a
> processor for which an Ada compiler is available. And not to forget that
> GNU Ada (gnat) is available for many 32-bit targets, and there are Ada
> compilers that emit C code and so support almost any target processor
> (or so their vendors claim, unfortunately I haven't had the occasion to
> try them).

I suspect once a programming language is 'good enough', other aspects
become more important than the qualities of the programming language
itself.
From: Paul Carpenter on
In article <4aa02a6a$0$24788$4f793bc4(a)news.tdc.fi>,
niklas.holsti(a)tidorum.invalid says...
> Grant Edwards wrote:
> > On 2009-09-03, Niklas Holsti <niklas.holsti(a)tidorum.invalid> wrote:
> >
> >> Some quotes: "Ada cost almost half of what the C code cost,
> >> and contained significantly fewer defects per 1000 SLOC by 7x
> >> the C code (700%). Even on the new C++ code, Ada still has
> >> 440% fewer defects."
> >
> > Huh? 440% fewer defects? Doesn't "100% fewer defects" mean
> > zero defects? What does 440% fewer mean?
>
> Yeah, bad language, ugh and sorry -- but I just quoted. Earlier in the
> quote, "700%" means the same as "7 times", so no doubt "440%" means that
> the C++ code had on the average 4.4 times as many errors, per SLOC, as
> the Ada code. In other words, the error density in Ada code was about
> 1/4 of that in C++.
>
> If we can pass over this question of proper use of per cent, the numbers
> are pretty impressive, right?

No, they are just numbers. With any comparison involving languages
and humans, the error rates (percentage, degree of error,
deviations..) are VERY difficult to determine.

It is akin to asking "for a thousand people exposed to swine flu,
what are the relative percentages of those who are not affected,
through all the other stages, up to how many will die?" It depends
on so many factors that each group of 1000 people will have many
other factors that make it NOT fit the pattern.

Are the defects related to:

  What was the sample size for each comparison:
      - language
      - compiler
      - usage of the compiler
      - different competency levels of programmers
      - different abilities/speed of the tools used
      - different environments the programmers worked in
      - different coding-standards environments
  Were the defects due to differing specifications?
  Were the defects due to other problems beyond their control (APIs etc.)?
  Were the defects due to feature creep and poor timescales?
  Were the defects due to poor design at the stages BEFORE coding?
  Were the defects due to poor or non-existent test plans?

Many, many other factors!!!

> So why isn't Ada used more?

Inertia to change, the supply of C/C++ programmers, not needing to
cross-train development teams, time to market, perceived costs,
perceived applicability of the language... no language is the
'magic bullet'.

> I suspect that many C/C++ programmers feel a bit offended by statistics like these;
> sort of thinking "If that's right, then I must be stupid to use C/C++...
> Are you calling me stupid? No way!"

I have a screwdriver with a 1-inch-wide blade; since it is for use
with screws, therefore all screws must be the same.

What about other library support for whatever the program is to be
used in or with...

> In practice, there are many good reasons for choosing C in many embedded
> projects -- the compilers are often cheap and always available, there
> may be example code and legacy code in C, stick with what you know, etc.
> But for larger projects, perhaps with a long life, these studies suggest
> that it could be cheaper to pay for Ada tools, perhaps even to select a
> processor for which an Ada compiler is available. And not to forget that
> GNU Ada (gnat) is available for many 32-bit targets, and there are Ada
> compilers that emit C code and so support almost any target processor
> (or so their vendors claim, unfortunately I haven't had the occasion to
> try them).

There are many reasons it might not be used, not least that there
are one hell of a lot of NON-32-bit targets.

--
Paul Carpenter | paul(a)pcserviceselectronics.co.uk
<http://www.pcserviceselectronics.co.uk/> PC Services
<http://www.pcserviceselectronics.co.uk/fonts/> Timing Diagram Font
<http://www.gnuh8.org.uk/> GNU H8 - compiler & Renesas H8/H8S/H8 Tiny
<http://www.badweb.org.uk/> For those web sites you hate
From: D Yuniskis on
Hi Vladimir,

Vladimir Vassilevsky wrote:

>>>> Are there any folks who have successfully deployed larger
>>>> applications in an OO language? No, I'm not talking about
>>>> desktop apps where the user can reboot when the system
>>>> gets munged. I'm working in a 365/24/7 environment so
>>>> things "just HAVE to work".
>>>
>>> We have a relatively big embedded system (several M of the source
>>> code, filesystem, TCP/IP, RTOS, etc.) developed by a team of five
>>> programmers. This system works in the field, 24/7, unattended.
>>
>> With a filesystem, I am assuming (not a "given") that you have
>> lots of resources available (?).
>
> Not a lot. Only 600MHz, 64M. These days, that's nothing. :-)

Ah, I had assumed you also had magnetic media (for the file
system and, by extension, VM). My bad.

> Actually, the OS, filesystem and TCP/IP are not very resource-consuming:
> depending on the number of threads, files, sockets, etc., the system
> needs are practically in the ~100K range. At minimum, we can run from
> the BlackFin CPU L1 memory.

I'm considerably over that limit. The OS provides lots of
services -- and supports several "critical" services for
applications. So, it's got a pretty heavy footprint
(e.g., HRT guarantees, VM support, tightly and loosely
coupled multiprocessing, etc.).

>> E.g., does the OS support VM?
>
> Unfortunately, the BlackFin only has a rudimentary MMU. A full-featured
> MMU would be very useful; I will certainly consider a CPU with an MMU
> for the next project like that.

Yes. I am a firm believer in this as it probably does more to
"help" the developer than most language features! Of course, there
are costs associated with its use but they are easily (IMO)
outweighed by the extra tricks you can play...

>> Or, are all of the tasks (processes) running on it known to have
>> bounded resource requirements?
>
> You feel my pain...
>
>> Was the OS written in C++ or just the applications?
>
> The RTOS was written in C++. The multitasking and the hardware
> abstraction concepts fit nicely with the C++ paradigm; this was one of

*Exactly*! This is the "draw" (for me) to using C++ (or
similar OO). It is just *so* much more elegant to be able
to deal with OS constructs as tangible objects than as
just "handles", etc. And, being able to augment their
definitions with extra cruft to facilitate debugging
(e.g., have a CString in the DEBUG version of each object
that lets you tag the object's instantiation with something
descriptive that you can later use the debugger to inspect.)
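To make that concrete, here is a minimal sketch of the kind of debug
tag I mean (the class and member names are made up for illustration,
and I've used std::string rather than a CString; in a release build
the tag compiles down to nothing):

    #include <string>

    #ifdef DEBUG
    class Tagged {
    public:
        explicit Tagged(const char *name) : name_(name) {}
        const std::string &name() const { return name_; }
    private:
        std::string name_;   // inspect this in the debugger to identify the instance
    };
    #else
    class Tagged {
    public:
        explicit Tagged(const char *) {}   // no storage cost in release builds
    };
    #endif

    // A hypothetical OS object that carries the tag:
    class Mutex : public Tagged {
    public:
        explicit Mutex(const char *name) : Tagged(name) { /* create the OS mutex */ }
        void lock()   { /* ... */ }
        void unlock() { /* ... */ }
    };

    // usage:
    //   Mutex spi_bus_lock("SPI bus");   // shows up by name when inspected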

The OS itself is object oriented but not implemented in an
OO language. :(

> the arguments for using C++. I was very frustrated with the C call
> nuisance of µC/OS-II and ADI VDK before.
>
>> Does the OS have provisions to detect (and recover from)
>> crashed applications? Or, does a crashed application
>> bring the system to its knees?
>
> A crashed application can very well screw up everything. However, if
> this happens, we fall into the bootloader, so we can recover.

But there is nothing that *detects* that something has gone awry?
I.e., you rely on the errant application to clobber "something"
that *eventually* causes the system to crash? (perhaps a watchdog
brings you back to sanity?)
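(For the record, the pattern I have in mind is a *supervised*
watchdog -- roughly the sketch below. All of the names are invented,
not from the BlackFin or any particular RTOS; the point is that the
hardware watchdog only gets kicked when every monitored task has
recently checked in, so a wedged task is actually detected instead
of papered over.)

    #include <stdint.h>

    static void kick_hardware_watchdog(void)
    {
        /* write the magic value to the watchdog's kick register here */
    }

    enum { TASK_COMMS, TASK_CONTROL, TASK_LOGGER, NUM_TASKS };

    static volatile uint32_t checkins;          // one bit per monitored task

    void task_checkin(int task_id)              // each task calls this in its main loop
    {
        checkins |= (1u << task_id);
    }

    void supervisor_tick(void)                  // called periodically, e.g. from a timer
    {
        const uint32_t all = (1u << NUM_TASKS) - 1u;
        if (checkins == all) {
            kick_hardware_watchdog();           // only kicked if everyone is alive
            checkins = 0;
        }
        // otherwise let the watchdog expire and drop us into the bootloader
    }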

>>> At the very beginning, there was the usual trivial argument about C
>>> vs C++, and it was decided to use C++. Now I can say that was a wise
>>> choice; it would be difficult to tackle the complexity in C.
>>
>> Agreed. But, the problem (IMO) with C++ (or other 4G languages)
>> is that it is often hard to find folks who *really* know what's
>> happening "under the hood".
>
> That's the whole point: making the application programming available for
> dummies. The OO system is supposed to protect them from themselves.

At one level, it *does* (e.g., uninitialized variables, walking
off the end of arrays, etc. -- assuming a well defined set of
classes). But, on other levels, it can bring with it all sorts
of invisible overhead that might be hard for someone not
intimately familiar with the language to pick up on. (E.g.,
I am *constantly* startled by the presence of anonymous
objects that materialize in my code -- albeit of a transitory
nature.) As objects get "heavier" (e.g., adding debug support),
each one of these that the compiler creates starts to hammer
on memory...
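A concrete (toy) example of the kind of temporary I mean -- this is
just an illustration, not code from any real system:

    #include <string>

    struct Message {                        // imagine this grows a debug tag, timestamps, etc.
        std::string text;
        Message(const char *t) : text(t) {} // implicit conversion: the trap
    };

    void log(Message m) { (void)m; /* ... */ }   // pass-by-value parameter

    void f()
    {
        log("overrun on channel 3");        // quietly constructs an anonymous Message
                                            // (and its std::string) just for this call
    }

    // Two small fixes:
    //   explicit Message(const char *t);   // forces the caller to spell the construction out
    //   void log(const Message &m);        // pass by const reference, no copy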

>>>> Any tips you can share that can help me get C-like
>>>> behavior from a C++-like implementation? (besides the
>>>> obvious: "use only the C subset of C++" :> )
>>>
>>> "There is no and there can't be any substitute for the intelligence,
>>> experience, common sense and good taste" (Stroustrup).
>>
>> (sigh) But said by someone working for a firm with lots of
>> re$ource$ to devote to staff, etc. Things seem to be considerably
>> different in the "real world" (I am continually disappointed
>> with the caliber of the "programmers" I meet... "just get it
>> done" seems to be their mantra -- note that "right" is not
>> part of that! :< )
>
> On the other hand, the bulk of the programmer's work is nothing more
> than legwork; it doesn't have to be done brilliantly; it just has
> to work somehow.

I think that is true of work in a desktop environment.
There are no *typical* time constraints nor resource
constraints. If the application crashes, the user can
"try again". You (or the OS) can provide feedback to
the user in the form of dialog boxes, log messages, etc.

But, in an embedded system, the application often *must*
work. It may be unattended (no one there to push the
reset button) or perform some critical role, etc. Your
user I/O may be seriously constrained so conversing with
the user may be difficult or impractical (especially if
the user isn't *there*!).

>> It is exactly this problem that has me vacillating about whether
>> a "highly structured C approach" would be better or worse than
>> doing it in C++ (or other 4G HLL). I.e., which are "average Joes"
>> least likely to screw up? :-/
>
> Just recently I had to fix a project in C developed by "average Joes".
> There were the usual C problems with not initializing something,
> not providing enough memory for something, and running past the end
> of an array. So I think C++ is the better way to do things.

Again, I think you trade one set of problems for another.
I'm just trying to figure out where the "least pain" lies :<
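
To make the trade-off concrete, here's a toy illustration (not code
from either project) of the kind of defect Vladimir describes and the
C++ idiom that catches it:

    #include <vector>
    #include <stdexcept>

    // The C version: nothing stops any of the classic mistakes.
    //   int buf[8];        /* never initialized                        */
    //   buf[8] = 0;        /* off-by-one write, silently corrupts RAM  */

    // The C++ version: construction initializes, and at() checks bounds.
    void cxx_version()
    {
        std::vector<int> buf(8, 0);     // 8 elements, all zero-initialized
        try {
            buf.at(8) = 0;              // same off-by-one, now a thrown exception
        } catch (const std::out_of_range &) {
            // detected at the point of the error instead of corrupting something
        }
    }

Of course, the flip side (the "other set of problems") is that the
vector now allocates from the heap and the exception machinery has a
footprint of its own.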
From: D Yuniskis on
Hi Phil,

Phil O. Sopher wrote:
> It is a mystery to me as to how recent graduates of Computer Science
> are vaunted as experts on computers, yet haven't a clue about the actual
> operation of a computer at the assembly language (or even machine code)
> level.

I think this is a consequence of these folks being trained as
"programmers" instead of as "engineers". E.g., my background is
as an EE where "logic elements" and "processors" were just building
blocks like "transistors" and "rectifiers".

I.e., you can visualize how an op amp is "just" a bunch of
Q's, R's, D's, etc. in a miniaturized form. OTOH, if your
exposure to electronics was at the op amp level, someone
had to go to *extra* lengths to show you what was "inside".
Without that extra exposure, you simply were unaware of
how the devices were built and, as a result, how they
fundamentally operated as well as the reasons behind
their limitations, etc.

> Indeed, to understand the XOR subroutine for a PDP8, you not only
> had to understand assembly language (as it was so coded) but also had to
> understand the operation of Half and Full Adders. (You did an addition,
> and then subtracted the Logical And of the two input variables after that
> Logical And had been left shifted). This was much shorter than evaluating
> (A and not B) or (B and not A)
>
> Those of us who cut our teeth on assembly language find no difficulty
> in understanding the concepts inherent in any high level language, even
> those as arcane as LISP ("Lots of Infernal Stupid Parentheses")
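
(As an aside, the PDP-8 trick being described is just the identity
A xor B = (A + B) - ((A and B) << 1): addition produces the xor of
the inputs plus the carries, and the carries are exactly the AND
shifted left one place. A quick check:)

    #include <assert.h>

    unsigned xor_via_adder(unsigned a, unsigned b)
    {
        return (a + b) - ((a & b) << 1);   // add, then subtract the shifted AND
    }

    int main(void)
    {
        for (unsigned a = 0; a < 256; a++)
            for (unsigned b = 0; b < 256; b++)
                assert(xor_via_adder(a, b) == (a ^ b));
        return 0;
    }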

However (playing advocate, devil's), many people fail to "get
out of the mud" and rise above these levels of detail. Or, they
cling to their special knowledge of these low-level intricacies
at the expense of benefiting from higher-level abstractions.

E.g., writing an OS in ASM nowadays (for all but trivial processors)
is a self-indulgent waste of time. Comparable to debugging in
*hex* (instead of using a symbolic debugger).

<shrug>