From: Joerg on
John Larkin wrote:
> On Wed, 13 Aug 2008 10:40:47 +0100, Martin Brown
> <|||newspam|||@nezumi.demon.co.uk> wrote:
>
>
>>> I read that for a major bunch of Windows APIs, the only documentation
>>> was the source code itself.
>> That is probably slightly unfair (but also partly true). It was the
>> unruly Windows message API that eventually killed the strongly typed
>> language interface for me. Just about every message was a pointer to a
>> "heaven knows what object" that you had to manually prod and probe at
>> runtime to work out its length and then what it claimed to be.
>> Maintaining the strongly typed interface definitions even with tools
>> became too much of a chore.
>>
>> Imagine doing electronics where all components are uniformly sized and
>> coloured and you have to unpeel the wrapper to see what is inside. Worse
>> still some of them may contain uninitialised pointers if you are
>> unlucky, or let you write off the end of them with disastrous results.
>
> Hardware design keeps moving up in abstraction level too. I used to
> design opamps and voltage regulators out of transistors. Now I'm
> dropping sixteen isolated delta-sigma ADCs around an FPGA that talks
> to a 32-bit processor. That's sort of equivalent to building a
> software system using all sorts of other people's subroutines. We did
> just such a board recently, 16 channels of analog acquisition, from
> thermocouples to +-250 volt input ranges, all the standard
> thermocouple lookup tables, RTD reference junctions, built-in
> self-test, VME interface. No breadboards, no prototype. The board has
> 1100 parts and the first one worked.
>
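
A minimal sketch of the untyped message interface Martin describes
above, using the real Win32 WM_COPYDATA message; the payload tag and
struct here are invented for illustration:

#include <windows.h>

#define PAYLOAD_TEMPERATURE 42                   /* hypothetical message tag */
typedef struct { double celsius; } TempReading;  /* hypothetical payload */

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_COPYDATA) {
        /* lParam is just a pointer; the type system says nothing about
           what it points at, so probe the struct at runtime. */
        COPYDATASTRUCT *cds = (COPYDATASTRUCT *)lParam;
        if (cds->dwData == PAYLOAD_TEMPERATURE       /* what it claims to be */
            && cds->cbData == sizeof(TempReading)) { /* its claimed length */
            TempReading *t = (TempReading *)cds->lpData;
            /* ... use t->celsius ... */
        }
        return TRUE;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}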

My designs seem to go the other way. Yeah, also lots of delta-sigmas,
but even more transistor-level designs. The main reason is that clients
often can't find anyone else to do it, so it all lands on my pile.


> Hardware design works better than software. One reason is that the
> component interfaces are better defined. Another reason is that we
> check our work - each other's work - very carefully before we ever try
> to build it, much less run it. Most engineering - civil, electrical,
> mechanical, aerospace - works that way. People don't hack jet engine
> cores and throw them on a test stand to see what blows up.
>

But people do hack AGW "science" :-) SCNR.

--
Regards, Joerg

http://www.analogconsultants.com/

"gmail" domain blocked because of excessive spam.
Use another domain or send PM.
From: John Larkin on
On Wed, 13 Aug 2008 11:23:46 -0700, Joerg
<notthisjoergsch(a)removethispacbell.net> wrote:

>John Larkin wrote:
>> On Wed, 13 Aug 2008 10:40:47 +0100, Martin Brown
>> <|||newspam|||@nezumi.demon.co.uk> wrote:
>>
>>
>>>> I read that for a major bunch of Windows APIs, the only documentation
>>>> was the source code itself.
>>> That is probably slightly unfair (but also partly true). It was the
>>> unruly Windows message API that eventually killed the strongly typed
>>> language interface for me. Just about every message was a pointer to a
>>> "heaven knows what object" that you had to manually prod and probe at
>>> runtime to work out its length and then what it claimed to be.
>>> Maintaining the strongly typed interface definitions even with tools
>>> became too much of a chore.
>>>
>>> Imagine doing electronics where all components are uniformly sized and
>>> coloured and you have to unpeel the wrapper to see what is inside. Worse
>>> still some of them may contain uninitialised pointers if you are
>>> unlucky, or let you write off the end of them with disastrous results.
>>
>> Hardware design keeps moving up in abstraction level too. I used to
>> design opamps and voltage regulators out of transistors. Now I'm
>> dropping sixteen isolated delta-sigma ADCs around an FPGA that talks
>> to a 32-bit processor. That's sort of equivalent to building a
>> software system using all sorts of other people's subroutines. We did
>> just such a board recently, 16 channels of analog acquisition, from
>> thermocouples to +-250 volt input ranges, all the standard
>> thermocouple lookup tables, RTD reference junctions, built-in
>> self-test, VME interface. No breadboards, no prototype. The board has
>> 1100 parts and the first one worked.
>>
>
>My designs seem to go the other way. Yeah, also lots of delta-sigmas,
>but even more transistor-level designs. The main reason is that clients
>often can't find anyone else to do it, so it all lands on my pile.


We still use discrete parts here and there, especially for the fast
stuff, and high-power things. A board is typically a mix of
high-abstraction parts - big complex chips - and a bunch of simpler
stuff. "Glue logic", which actually does logic, is rare nowadays.

Lots of opamps and precision resistors. One good resistor can cost
more than an opamp.

Our stuff isn't as cost-sensitive as some of yours, so we don't mind
using an opamp if it works a little better than a transistor. And our
placement cost is high, so we like to minimize parts count.

People keep talking about analog programmable logic...

John


From: Joerg on
John Larkin wrote:
> On Wed, 13 Aug 2008 11:23:46 -0700, Joerg
> <notthisjoergsch(a)removethispacbell.net> wrote:
>
>> John Larkin wrote:
>>> On Wed, 13 Aug 2008 10:40:47 +0100, Martin Brown
>>> <|||newspam|||@nezumi.demon.co.uk> wrote:
>>>
>>>
>>>>> I read that for a major bunch of Windows APIs, the only documentation
>>>>> was the source code itself.
>>>> That is probably slightly unfair (but also partly true). It was the
>>>> unruly Windows message API that eventually killed the strongly typed
>>>> language interface for me. Just about every message was a pointer to a
>>>> "heaven knows what object" that you had to manually prod and probe at
>>>> runtime to work out its length and then what it claimed to be.
>>>> Maintaining the strongly typed interface definitions even with tools
>>>> became too much of a chore.
>>>>
>>>> Imagine doing electronics where all components are uniformly sized and
>>>> coloured and you have to unpeel the wrapper to see what is inside. Worse
>>>> still some of them may contain uninitialised pointers if you are
>>>> unlucky, or let you write off the end of them with disastrous results.
>>> Hardware design keeps moving up in abstraction level too. I used to
>>> design opamps and voltage regulators out of transistors. Now I'm
>>> dropping sixteen isolated delta-sigma ADCs around an FPGA that talks
>>> to a 32-bit processor. That's sort of equivalent to building a
>>> software system using all sorts of other people's subroutines. We did
>>> just such a board recently, 16 channels of analog acquisition, from
>>> thermocouples to +-250 volt input ranges, all the standard
>>> thermocouple lookup tables, RTD reference junctions, built-in
>>> self-test, VME interface. No breadboards, no prototype. The board has
>>> 1100 parts and the first one worked.
>>>
>> My designs seem to go the other way. Yeah, also lots of delta-sigmas,
>> but even more transistor-level designs. The main reason is that clients
>> often can't find anyone else to do it, so it all lands on my pile.
>
>
> We still use discrete parts here and there, especially for the fast
> stuff, and high-power things. A board is typically a mix of
> high-abstraction parts - big complex chips - and a bunch of simpler
> stuff. "Glue logic", which actually does logic, is rare nowadays.
>
> Lots of opamps and precision resistors. One good resistor can cost
> more than an opamp.
>
> Our stuff isn't as cost-sensitive as some of yours, so we don't mind
> using an opamp if it works a little better than a transistor. And our
> placement cost is high, so we like to minimize parts count.
>

That's what I've run into a lot: whenever something is made domestically
in a western country, placement costs are through the roof. When I
design circuits that will be produced on lines in Asia, I can adopt a
whole different design philosophy, where replacing a $1 chip with 15
discrete jelly-bean parts makes a lot of sense.


> People keep talking about analog programmable logic...
>

With me it's the other way around. I am not allowed to talk about it ;-)

--
Regards, Joerg

http://www.analogconsultants.com/

"gmail" domain blocked because of excessive spam.
Use another domain or send PM.
From: Nick Maclaren on

In article <cVDok.93318$tc1.22014(a)newsfe24.ams2>,
"Wilco Dijkstra" <Wilco.removethisDijkstra(a)ntlworld.com> writes:
|>
|> > It does as soon as you switch on serious optimisation, or use a CPU
|> > with unusual characteristics; both are common in HPC and rare outside
|> > it. Note that compilers like gcc do not have any options that count
|> > as serious optimisation.
|>
|> Which particular loop optimizations do you mean? I worked on a compiler
|> which did advanced HPC loop optimizations. I did find a lot of bugs in the
|> optimizations but none had anything to do with the interpretation of the C
|> standard. Do you have an example?

I didn't say loop optimisations. But you could include any of the
aliasing ambiguities (type-dependent and other), the sequence point
ambiguities and so on. They are fairly well-known.
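
To make the type-based aliasing case concrete, here is a textbook
example (not one drawn from the document mentioned above): under the
C99 effective-type rules an optimiser may assume that an unsigned* and
a float* never point at the same object.

#include <stdio.h>

unsigned probe(unsigned *pi, float *pf)
{
    *pi = 1;
    *pf = 0.0f;   /* if pf aliases pi, this overwrites the 1 */
    return *pi;   /* a strict-aliasing optimiser may still return 1 */
}

int main(void)
{
    unsigned u = 0;
    /* The cast breaks the effective-type rules, so -O0 and -O2 builds
       may legitimately print different values. */
    printf("%u\n", probe(&u, (float *)&u));
    return 0;
}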

|> You have to give more specific examples of differences of interpretation.

As I said, I will send you my document if you like, which includes
examples and explanations. Otherwise I suggest that you look at the
archives of comp.std.c, which has dozens of examples. I don't have
time to search my records for other examples for you.

|> I'd like to hear about failures of real software as a direct result of these
|> differences. I haven't seen any in over 12 years of compiler design besides
|> obviously broken compilers.

And I have seen hundreds. But I do know the C standards pretty well,
and a lot of "obviously broken compilers" actually aren't.

|> I bet that most code will compile and run without too much trouble.
|> C doesn't allow that much variation in targets. And the variation it
|> does allow (e.g. ones'-complement) is not something sane CPU
|> designers would consider nowadays.

The mind boggles. Have you READ the C standard?

|> Not specifying the exact size of types is one of C's worst mistakes.
|> Using sized types is the right way to achieve portability over a wide
|> range of existing and future systems (including ones that have different
|> register sizes). The change to 128-bit is not going to affect this software
|> precisely because it already uses correctly sized types.

On the contrary. Look, how many word-size changes have you been
through? Some of my code has been through about a dozen, in
succession, often with NO changes. Code that has 32 bits screwed in
will not be able to handle data that exceeds that size.
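
As a minimal sketch of the contrast (the function names are invented
for illustration):

#include <stddef.h>
#include <stdint.h>

/* Written against the abstract size type: recompiles unchanged as the
   platform word grows from 16 to 32 to 64 bits and beyond. */
size_t count_zero_bytes(const unsigned char *buf, size_t len)
{
    size_t n = 0;
    for (size_t i = 0; i < len; i++)
        n += (buf[i] == 0);
    return n;
}

/* 32 bits screwed in: still compiles on a 64-bit machine, but it can
   never describe a buffer larger than 4 GB. */
uint32_t count_zero_bytes32(const unsigned char *buf, uint32_t len);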

You are making PRECISELY the mistake that was made by the people
who coded the exact sizes of the IBM System/360 into their programs.
They learnt better, but have been replaced by a new set of kiddies,
determined to make the same old mistake :-(


Regards,
Nick Maclaren.
From: Wilco Dijkstra on

"Nick Maclaren" <nmm1(a)cus.cam.ac.uk> wrote in message news:g7vfa8$3km$1(a)gemini.csx.cam.ac.uk...
>
> In article <cVDok.93318$tc1.22014(a)newsfe24.ams2>,
> "Wilco Dijkstra" <Wilco.removethisDijkstra(a)ntlworld.com> writes:

> As I said, I will send you my document if you like, which includes
> examples and explanations. Otherwise I suggest that you look at the
> archives of comp.std.c, which has dozens of examples. I don't have
> time to search my records for other examples for you.

I'd certainly be interested in the document. My email is above, just make
the obvious edit.

> |> I bet that most code will compile and run without too much trouble.
> |> C doesn't allow that much variation in targets. And the variation it
> |> does allow (e.g. ones'-complement) is not something sane CPU
> |> designers would consider nowadays.
>
> The mind boggles. Have you READ the C standard?

More than that. I've implemented it. Have you?

It's only when you implement the standard that you realise many of the
issues are irrelevant in practice. Take sequence points, for example.
They are not even modelled by most compilers, so whatever ambiguities
there are, they simply cannot become an issue. Similarly, various
standards pedants are moaning about shifts not being portable, but they
can never mention a compiler that fails to implement them as expected...
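
For reference, the kind of sequence-point ambiguity being argued about
looks like this (a textbook example, not one from either poster):

#include <stdio.h>

int main(void)
{
    int i = 1;
    int a[4] = {0, 0, 0, 0};

    a[i] = i++;   /* i is modified and read with no intervening
                     sequence point: undefined behaviour, so the store
                     may land in a[1] or a[2] depending on the compiler */

    printf("%d %d %d\n", a[0], a[1], a[2]);
    return 0;
}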

Btw, do you happen to know the reasoning behind signed left shifts
being undefined while right shifts are implementation-defined?
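
The cases in question, as C99 6.5.7 classifies them (the comments are
mine):

void shift_cases(void)
{
    int x = -1;
    int sl = x << 1;   /* undefined behaviour: left shift of a
                          negative signed value */
    int sr = x >> 1;   /* implementation-defined: -1 with an arithmetic
                          shift, INT_MAX with a logical shift */
    unsigned u = 0x80000000u >> 1;   /* fully defined for unsigned
                                        operands: 0x40000000 */
    (void)sl; (void)sr; (void)u;
}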

> |> Not specifying the exact size of types is one of C's worst mistakes.
> |> Using sized types is the right way to achieve portability over a wide
> |> range of existing and future systems (including ones that have different
> |> register sizes). The change to 128-bit is not going to affect this software
> |> precisely because it already uses correctly sized types.
>
> On the contrary. Look, how many word size changes have you been
> through? Some of my code has been through about a dozen, in
> succession, often with NO changes. Code that screws 32 bits in
> will not be able to handle data that exceeds that.

It will work as long as the compiler supports a 32-bit type - which it
will, of course. But in the infinitesimal chance that it doesn't, why
couldn't one emulate a 32-bit type, just as 32-bit systems emulate
64-bit types?
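
A sketch of the kind of emulation meant here, assuming a hypothetical
machine whose narrowest suitable type is wider than 32 bits (the names
are invented):

/* Wrapping 32-bit unsigned arithmetic emulated by masking after each
   operation, e.g. on a 36-bit-word machine with no native 32-bit type. */
typedef unsigned long u32emu;   /* guaranteed to be at least 32 bits */
#define U32_MASK 0xFFFFFFFFUL

static u32emu u32_add(u32emu a, u32emu b) { return (a + b) & U32_MASK; }
static u32emu u32_mul(u32emu a, u32emu b) { return (a * b) & U32_MASK; }
static u32emu u32_shl(u32emu a, unsigned n) { return (a << n) & U32_MASK; }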

> You are making PRECISELY the mistake that was made by the people
> who coded the exact sizes of the IBM System/360 into their programs.
> They learnt better, but have been replaced by a new set of kiddies,
> determined to make the same old mistake :-(

Actually, various other languages support sized types, and most
software used them long before C99. In many cases they are essential
for correctness (imagine writing 32 bits to a peripheral when it
expects 16, etc.). So you really have to come up with some
extraordinary evidence to explain why you think sized types are
fundamentally wrong.
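
To make the peripheral case concrete (the register address here is
hypothetical):

#include <stdint.h>

/* A memory-mapped 16-bit control register. */
#define CTRL_REG (*(volatile uint16_t *)0x40001000u)

void set_mode(uint16_t mode)
{
    CTRL_REG = mode;   /* a 16-bit store by construction; a plain
                          'unsigned' here would be 16, 32 or 64 bits
                          depending on the platform */
}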

Wilco

