From: rickman on
Terran Melconian wrote:
> On 2006-10-05, Isaac Bosompem <x86asm(a)gmail.com> wrote:
> >
> > CBFalconer wrote:
> >
> >> A UART needs much better precision (and stability) than you can
> >> expect from any r/c oscillator.
> >
> > Yes, from what I've read you only got an error margin of 1 or 2%. Not
> > too big if you ask me.
>
> Assume a byte is 10 bits long (one start, one stop, no parity). If we
> sample the bits in the middle of where we think they are, then by the
> end of the byte we can be off by just under half a bit and still read
> correctly. This looks like about 5% to me.

That will work if the other end is timed perfectly, but in reality you
need to split the timing error budget between sender and receiver.
That gives you 2.5% per end in an otherwise perfect world, so in the
real world 2% is a better figure.


> I wonder how exactly they arrive at 1% or 2%; do you know?
>
> I've used the internal oscillator on an AVR and it usually worked
> (except when it didn't, of course); good enough if it was only to be
> used for debugging and turned off in the final product.

Exactly. I was under the impression that on the newer parts internal
oscillators were being trimmed and compensated to get under the 2%
figure over temperature and voltage. But since I normally use
crystals, I have not followed this closely.

From: Ulf Samuelsson on
>>
>> And the R/C oscillator is only useful in a small percentage of
>> applications where you don't need any more timing precision than
>> what is required to run a UART, and just barely that!
>
> A UART needs much better precision (and stability) than you can
> expect from any r/c oscillator.
>

Not when you can calibrate the R/C oscillator against a known
source (like the incoming data on the serial port).

The LIN protocol for automotive applications was designed
to allow the use of controllers running from an R/C oscillator.

The protocol starts with a "wakeup" byte followed by
a "calibration" byte, and after that the uC knows the bit time
in clocks regardless of its current frequency.
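
Roughly, the measurement can look like this on a part with timer
capture on the RX pin (just a sketch - the helper names are
placeholders, not any particular part's registers; in LIN the
calibration/sync byte is 0x55, whose five falling edges are spaced
exactly two bit times apart):

#include <stdint.h>

/* Hypothetical helpers - substitute your part's timer-capture and
   UART divisor registers. */
extern uint32_t timer_capture_wait_falling(void); /* count at next falling RX edge */
extern void     uart_set_divisor(uint32_t div);   /* program the UART clock divider */

void autobaud_calibrate(void)
{
    uint32_t first = timer_capture_wait_falling();
    uint32_t last  = first;

    for (int i = 0; i < 4; i++)                  /* four more falling edges */
        last = timer_capture_wait_falling();

    uint32_t eight_bits = last - first;          /* first-to-fifth edge = 8 bit times */
    uint32_t bit_time   = (eight_bits + 4) / 8;  /* rounded bit time in timer ticks */

    uart_set_divisor((bit_time + 8) / 16);       /* assuming a 16x oversampling UART */
}

The same trick works for any protocol where the first byte has a known
bit pattern.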


So if you have a choice of protocol and choose one that
requires a stable CPU clock, you just did not do your homework.

--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may,
or may not be shared by my employer Atmel Nordic AB


From: David Brown on
rickman wrote:
> Terran Melconian wrote:
>> On 2006-10-05, Isaac Bosompem <x86asm(a)gmail.com> wrote:
>>> CBFalconer wrote:
>>>
>>>> A UART needs much better precision (and stability) than you can
>>>> expect from any r/c oscillator.
>>> Yes, from what I've read you only got an error margin of 1 or 2%. Not
>>> too big if you ask me.
>> Assume a byte is 10 bits long (one start, one stop, no parity). If we
>> sample the bits in the middle of where we think they are, then by the
>> end of the byte we can be off by just under half a bit and still read
>> correctly. This looks like about 5% to me.
>
> That will work if the other end is timed perfectly, but in reality you
> need to split the timing error budget between sender and receiver. So
> that gives you 2.5% in an otherwise perfect world. So in the real
> world 2% is a better figure.
>

The theoretical limit (for 10-bit chars, including start and stop) is a
5% total mismatch between the two ends. If each end is within 2.5%,
you'll get a match - you don't need to chop off anything extra because
of uneven splits. In fact, you can often *gain* significant margin by
splitting unevenly - if one end is a PC, or other equipment with known
tight margins, you might be confident of a 1% margin at the PC end,
giving you 4% to play with at the embedded end.

However, there are other things that eat away at your margins. Assuming
a balanced split, you need to aim for (0.5 / 10) / 2 = 2.5% per end for
a half-bit error. If you want to take advantage of the majority-vote
noise immunity feature of many UARTs, you can be at most 7/16 of a bit
out over a total of 9 + 9/16 bits (only half the stop bit is used),
giving you 4.6% / 2 = 2.3% margins.
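
If you want to play with the numbers, the arithmetic boils down to a
couple of lines (plain C, nothing target-specific; the 7/16 and
9 + 9/16 figures are the ones quoted above):

#include <stdio.h>

int main(void)
{
    /* Half-bit rule: the tenth sample point must stay inside the last bit. */
    double half_bit = 0.5 / 10.0;                     /* 5% total mismatch */

    /* Majority-vote case: last sample at most 7/16 bit out over 9 + 9/16 bits. */
    double vote = (7.0 / 16.0) / (9.0 + 9.0 / 16.0);  /* about 4.6% total */

    printf("half-bit rule : %.1f%% total, %.2f%% per end\n",
           100 * half_bit, 100 * half_bit / 2);
    printf("majority vote : %.1f%% total, %.2f%% per end\n",
           100 * vote, 100 * vote / 2);
    return 0;
}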

Then you need to look at your drivers, terminations, cable load, etc.,
and how they affect your timings. In particular, you are looking for
the skew between the delays on a rising edge and the delays on a falling
edge. These delays are absolute timings, independent of the baud rate
and the length of each character.

Thermal and voltage effects will also affect your timing reference, and
can be considered as part of the same margin calculations.

>
>> I wonder how exactly they arrive at 1% or 2%; do you know?

I'd say 2% for a simple RS-232 or RS-485 link that was not stretching
speed or distance limits, and 1% when using optical isolation or cables
with significant capacitance. YMMV, of course.


>>
>> I've used the internal oscillator on an AVR and it usually worked
>> (except when it didn't, of course); good enough if it was only to be
>> used for debugging and turned off in the final product.
>
> Exactly. I was under the impression that on the newer parts internal
> oscillators were being trimmed and compensated to get under the 2%
> figure over temperature and voltage. But since I normally use
> crystals, I have not followed this closely.
>

If you have good control of the environment (temperature and voltage),
and simple links, then 2% is good enough. If one end of the
communication link has a more precise reference, then 2% is more than
sufficient.
From: CBFalconer on
Terran Melconian wrote:
> On 2006-10-05, Isaac Bosompem <x86asm(a)gmail.com> wrote:
>> CBFalconer wrote:
>>
>>> A UART needs much better precision (and stability) than you can
>>> expect from any r/c oscillator.
>>
>> Yes, from what I've read you only got an error margin of 1 or 2%. Not
>> too big if you ask me.
>
> Assume a byte is 10 bits long (one start, one stop, no parity). If we
> sample the bits in the middle of where we think they are, then by the
> end of the byte we can be off by just under half a bit and still read
> correctly. This looks like about 5% to me.
>
> I wonder how exactly they arrive at 1% or 2%; do you know?

There are two ends. If each is off by 2%, the disagreement is 4%.
You also haven't considered the error in locating the middle of the
bit in the first place. With a 16x master clock that sample position
can be off by 6% of a bit all by itself, which is the equivalent of
0.6% at the 10th bit position. Then there is the quantizing error,
again due to the period of the master clock.

Draw timing diagrams to see all this.
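
A rough way to put numbers on the 16x clock effect, using the same
simplified per-10-bits view (just a sketch):

#include <stdio.h>

int main(void)
{
    double budget      = 0.5 / 10.0;           /* half-bit rule: 5% total budget */
    double start_quant = (1.0 / 16.0) / 10.0;  /* 16x start-edge error, ~0.6% */
    double remaining   = budget - start_quant;

    printf("left for the two clocks combined: %.2f%%\n", 100 * remaining);
    printf("per end, split evenly           : %.2f%%\n", 100 * remaining / 2);
    return 0;
}

That is before counting the baud-rate divider's own quantizing error.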

--
Some informative links:
<news:news.announce.newusers>
<http://www.geocities.com/nnqweb/>
<http://www.catb.org/~esr/faqs/smart-questions.html>
<http://www.caliburn.nl/topposting.html>
<http://www.netmeister.org/news/learn2quote.html>
<http://cfaj.freeshell.org/google/>

From: Pete Bergstrom on
Everett M. Greene wrote:
> "Ulf Samuelsson" <ulf(a)a-t-m-e-l.com> writes:
> [snip]
>>> I did also read the tutorial but I didn't read through all of it.
>>> Eclipse is damn terrible, consumes a large amount of memory (seriously,
>>> on my system it consumes almost as much physical memory as that FEAR
>>> game) and is very slow.
>> I attended an Eclipse Seminar, and 1 GB RAM is the minimum;
>> many need 2 GB to run properly.
>
> Good gawd, Gertie! What were the implementors using for
> brains (presuming they had any)? The bloated size and
> user testimonial above indicate that the implementors
> have the intelligence level of a rock.

The implementors largely worked for large companies selling computer
hardware. They did their job.

Pete