From: Jonathan Kirwan on
On Thu, 07 Sep 2006 17:52:31 GMT, Joerg
<notthisjoergsch(a)removethispacbell.net> wrote:

>Hello Jim,
>
>>> It would be hard to create an architecture worse than the x51 today. Limited
>>> stack, a single data pointer, three different types of data memory, no thought
>>> at all about high-level language, 12 clocks per cycle, etc.
>>> By any rules it should be dead long ago...
>>
>> Of course, with a clean slate, and a huge amount of hindsight, and a
>> shift of the target goal posts, with new 2006 processes and design
>> tools, then it's no real surprise a different answer would result.
>>
>> You could say the same about almost any processor, the Pentium included.
>>
>> Just one tiny problem: the real world is not a clean slate, and there
>> are huge amounts of IP and training invested.
>
>Fully agree. The other tiny problem: Mankind wants to be able to buy
>$9.95 thermostats, $19.95 sprinkler timers and whatnot at the stores. A
>big ARM processor or a fat DSP ain't in the cards there. Many things
>have to be produced in China for a grand total of a couple of dollars or less.
><snip>

Don't forget $3 digital thermometers in grocery stores.

Jon
From: Walter Banks on
I was in Taiwan a while ago looking at a very nice double-sided fiberglass board, a prototype for a mass-produced consumer product. My host apologized and told me that this was only a prototype; the production board would of course be
single-sided phenolic.

w..


Joerg wrote:

> And so do discrete parts. Look at "modern" TV sets: Many use non-SMT
> parts because they wanted to save a few cents on the circuit board and
> consequently chose, ahem, (Yuriy, close your eyes now) the good old
> phenolic board. There is a reason why smart companies like TI brought
> their new 430F2xx out in DIP. Hobbyists are most certainly not the reason.

From: Jonathan Kirwan on
On Thu, 07 Sep 2006 17:38:01 GMT, Joerg
<notthisjoergsch(a)removethispacbell.net> wrote:

><snip>
>> Yuriy wrote:
>>
>> Necessity to use assembler usually points to an inadequate processor
>> selection.
>
>For hardcore realtime apps there is no alternative to assembler yet. No
>matter which processor. Also, there are times (many times) when the BOM
>budget or the battery budget does not allow a fancy chip. I just have
>one of these: A Blackfin would just hit the spot. But it's too
>expensive and would deplete the batteries way too fast. IOW, with what
>you call an "adequate" processor you would not have a saleable product.
>Instead you would have an unhappy client or boss.

This is right on target with my own experiences, as well. Perhaps
some folks live in a world without compromising trade-offs and can
merrily use C and remain aloof from assembly. That's not been my
experience, though. Competition sets difficult size, battery life,
and other performance constraints, and instrumentation requirements
can mean precise, repeatable observation windows and information
delivery without the ability to tolerate variable latencies (closed
loops impose serious limits here for some process control
situations.) In these cases, letting a C compiler generate variable
prolog and epilog code (which isn't under control at the source
level) depending on coding may very well not be acceptable.
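
To illustrate the point, here is a minimal C sketch. These are ordinary
functions standing in for interrupt handlers, since ISR syntax is
compiler-specific, and the exact prolog/epilog depends entirely on the
toolchain and optimization settings -- treat the comments as assumptions
about a typical compiler, not guarantees:

#include <stdint.h>

volatile uint16_t adc_result;  /* stand-in for a memory-mapped ADC register */
volatile uint16_t dac_out;     /* stand-in for a memory-mapped DAC register */

/* Trivial handler body: few or no registers need saving, so the
   generated prolog/epilog is short and entry-to-output latency small. */
void handler_copy(void)
{
    dac_out = adc_result;
}

/* Heavier handler body: the locals and the loop raise register
   pressure, so the compiler emits a longer prolog/epilog (more saved
   registers).  Entry-to-output latency grows, and it can change again
   with any edit or optimization switch -- none of it visible or
   controllable at the C source level. */
void handler_filter(void)
{
    static uint16_t history[4];
    static uint8_t  idx;
    uint32_t sum = 0;

    history[idx] = adc_result;
    idx = (uint8_t)((idx + 1u) & 3u);
    for (uint8_t i = 0; i < 4; i++)
        sum += history[i];
    dac_out = (uint16_t)(sum >> 2);   /* 4-tap moving average */
}

In hand-written assembly you can count the cycles from vector entry to
the output write and hold that count fixed; C gives you no such handle.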

You mention power and battery time. Dissipation of the micro may
create thermal differentials and otherwise inject more significant
levels of noise into the system, not to mention that increasing power
means a more expensive and larger power supply. But there are cases
where the micro (and its oscillator) is the _most_ significant heat
source, where being a heat source is a VERY BAD thing and can ruin your
low-level signal performance to the point of no longer competing.
Battery life may similarly exclude your product from success.

Compromises and tradeoffs are the rule in my life. Assembly is yet
another tool to bring to the table in gaining some competitive edge by
using a part that improves a little on size, power drain, cost,
dissipation, etc. At least in the areas where I work. Those who aren't
competent at juggling these compromises to their advantage won't
survive as long.

Another small point. A lot of customers don't really know much about
closed loop control and simply think they can hook up some PID
controller to your output and make things work. And they can, if their
process is remarkably tolerant of near-ignorant use. But if an
instrument observes a process variable with precise timing, yet fails
to deliver those results with a fixed and short latency to some
controller, then it's a disaster for some processes. Precision timing
is vital if a priori analysis, or rare but periodic system tuning, is
to work well. What really kills closed loop control systems is
significant and unpredictable variable latency. It's bad enough to
have a long latency -- seriously bad -- but a latency that is not only
long but variable as well is worse still.

One case that hammered this home to me was a customer pulling gallium
arsenide boules using our instrumentation to observe temperature. At
the time, our instrument had a variable timing latency in driving the
analog outs (we used C) and the customer was using a stock PID
controller, a commercial and well-known variety, as part of their
closed loop control. They were seeing serious ripples in the boules
that they needed to figure out and fix. It turned out that I'd just
added PID control to this unit, and so we updated their software. This
was my very first experience writing PID software, btw, so I was no
expert at this and did just a basic job of it. The customer called me
back and simply raved about the performance, asking me what I did to
make things so good. It completely solved their problems. Very happy
customer.

On later reflection, I learned that there were several things working
in my favor. One was the fact that my PID had direct access to the raw
data as it was observed; the entire timing involved in the loop had
been greatly shortened. The other was that I'd taken pains to create a
repeatable latency in the control output. It didn't vary nearly as
much as it once did. Those two things alone made all the difference.
The commercial controller they used to use, I'm sure, was written from
long experience. But compared to a neophyte who could shorten the
delays and make them closer to fixed, they had no chance at all.
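
For anyone who hasn't written one: a bare-bones positional PID step
looks something like the sketch below. This is my illustration, not the
actual instrument code; the gains and the sample period are placeholders
you would get from tuning the real process.

/* Placeholder gains and fixed sample period -- illustration values,
   not recommendations. */
#define KP 2.0f
#define KI 0.5f
#define KD 0.1f
#define DT 0.001f                /* fixed sample period in seconds */

typedef struct {
    float integral;              /* accumulated error, for the I term */
    float prev_error;            /* previous error, for the D term */
} pid_state;

/* Classic positional PID step: out = Kp*e + Ki*integral(e) + Kd*de/dt.
   The arithmetic is the easy part.  What mattered in the story above
   is when it runs: once per fixed sample tick, directly on the raw
   measurement, with the result written to the output at the same point
   in every tick -- a short, constant latency. */
float pid_step(pid_state *s, float setpoint, float measured)
{
    float error = setpoint - measured;

    s->integral += error * DT;
    float derivative = (error - s->prev_error) / DT;
    s->prev_error = error;

    return KP * error + KI * s->integral + KD * derivative;
}

Call it from the sample tick itself and the measurement-to-output delay
is one tick, every tick -- which is the whole trick.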

Jon
From: Joerg on
Hello Walter,


> I was in Taiwan a while ago looking at a very nice double-sided fiberglass board, a prototype for a mass-produced consumer product. My host apologized and told me that this was only a prototype; the production board would of course be
> single-sided phenolic.
>

Sure. Way to go. Use only what's needed to reliably fulfill the function
but no more. It's a lesson many in the western world should take to
heart. The best method to learn it is to spend some time with Asian R&D
engineers.

For readers of this thread who still don't believe it: Open a TV remote
or a guitar tuner. Chances are pretty good that it's phenolic.

--
Regards, Joerg

http://www.analogconsultants.com
From: Joerg on
Hello Jon,

>
> Another small point. A lot of customers don't really know much about
> closed loop control and simply think they can hook up some PID
> controller to your output and make things work.
><snip>
> The commercial controller they used to use, I'm sure, was written from
> long experience. But compared to a neophyte who could shorten the
> delays and make them closer to fixed, they had no chance at all.
>

This stuff happens a lot. The nice and fancy solution is not necessarily
the best. As in your case with the commercial controller, it might
actually have been an approach that couldn't have worked at all, no
matter how hard they tried.

--
Regards, Joerg

http://www.analogconsultants.com