From: JosephKK on
On Sat, 09 Aug 2008 09:09:29 -0700, John Larkin
<jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote:

>On Sat, 09 Aug 2008 09:02:53 -0700, JosephKK <quiettechblue(a)yahoo.com>
>wrote:
>
>>On Wed, 06 Aug 2008 19:57:23 -0700, John Larkin
>><jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote:
>>
>>>On Tue, 5 Aug 2008 12:54:14 -0700, "Chris M. Thomasson"
>>><no(a)spam.invalid> wrote:
>>>
>>>>"John Larkin" <jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote in message
>>>>news:rtrg9458spr43ss941mq9p040b2lp6hbgg(a)4ax.com...
>>>>> On Tue, 5 Aug 2008 13:30:52 +0200, "Skybuck Flying"
>>>>> <BloodyShame(a)hotmail.com> wrote:
>>>>>
>>>>>>As the number of cores goes up the watt requirements goes up too ?
>>>>>
>>>>> Not necessarily, if the technology progresses and the clock rates are
>>>>> kept reasonable. And one can always throttle down the CPUs that aren't
>>>>> busy.
>>>>>
>>>>>>
>>>>>>Will we need a zillion watts of power soon ?
>>>>>>
>>>>>>Bye,
>>>>>> Skybuck.
>>>>>>
>>>>>
>>>>> I saw suggestions of something like 60 cores, 240 threads in the
>>>>> reasonable future.
>>>>
>>>>I can see it now... A mega-core GPU chip that can dedicate 1 core per-pixel.
>>>>
>>>>lol.
>>>>
>>>>
>>>>
>>>>
>>>>> This has got to affect OS design.
>>>>
>>>>They need to completely rethink their multi-threaded synchronization
>>>>algorithms. I have a feeling that efficient distributed non-blocking
>>>>algorithms, which are comfortable running under a very weak cache
>>>>coherency model, will be all the rage. Getting rid of atomic RMW or
>>>>StoreLoad style memory barriers is the first step.
>>>
>>>Run one process per CPU. Run the OS kernel, and nothing else, on one
>>>CPU. Never context switch. Never swap. Never crash.
>>>
>>>John
>>
>>OK. How do you deal with I/O devices, user input and hot swap?
>>
>
>I/O and user interface, just like now: device drivers and GUIs. Just
>run them on separate CPUs, and have hardware control over anything
>that could crash the system, specifically global memory mapping. There
>have been OS's that, for example, pre-qualified the rights of DMA
>controllers so even a rogue driver couldn't punch holes in memory at
>random.
>
>But hot swap? What do you mean? All the CPUs are on one chip.
>
>John

There are several things in play here. More and more instruments have
ports for memory cards, USB memory sticks, and USB printers; IOW,
conventional UPnP-style hot plug.

Then there is the growing use of dynamic unit/core switch-out when a
unit makes a detected error, now even at the sub-chip level.

From: Kim Enkovaara on
Martin Brown wrote:
> It might be an interesting academic study to see how the error rates of
> hardware engineers using VHDL compare with those using Verilog tools for
> the same sorts of design. The latter I believe is less strongly typed.

In real industry design flows things get more complicated, because
almost all Verilog flows seem to suggest the use of lint-type tools
(which actually do much more than rudimentary language checks). Those
tools perform some of the checks that the VHDL type system does at
compile time, as part of the language.

--Kim
From: Kim Enkovaara on
Bernd Paysan wrote:
> Well, first of all, Verilog has way less types. There are only bits, bit
> vectors, 32 bit integers, and floats. You can't use the latter for
> synthesis; usually only bits and bit vectors are used for register data
> types.

All that changes with the synthesizable subset of SystemVerilog, which
adds enums, structs, etc. And synthesis tools are already starting to
support those features, because they map easily onto VHDL features
that the tools have had to support all along.

> My experience is that people make way less errors in Verilog, because it's
> all straight-forward, and not many traps to fall in. E.g. a typical VHDL
> error is that you define an integer subrange from 0..F, instead of a 4 bit
> vector, and then forget to mask the add, so that it doesn't wrap around but
> fails instead.

I have also seen major errors in Verilog, because it's so easy to
connect two different width vectors together etc. All languages have
their pros and cons.

And the synthesis results for the integer and the bit vector are the
same. The difference is that the integer version traps in simulation,
so the designer has to think about the error. In HW there is no bounds
checking.

We also have to differentiate what is meant by an error. Is it
something that traps the simulation and might be a bug, or is it
something that exists in the chip? I like code that traps as early as
possible and near the real problem; for that reason assertions and
bounds checking are a real timesaver in verification.

> My opinion towards good tools:
>
> * Straight forward operations
> * Simple semantics

At least Verilog's blocking vs. nonblocking assignments and its
general scheduling semantics are not very simple. VHDL scheduling is
much harder to misuse.

> * Don't offer several choices where one is sufficient

Sometimes people like to code differently; choices help with that. In
SystemVerilog there are at least so many ways to do things that
most people should be happy :)

> * Restrict people to a certain common style where the tool allows choices

Coding style definition is a good way to start endless religious wars :)

--Kim
From: Terje Mathisen on
Jan Panteltje wrote:
> On a sunny day (13 Aug 2008 14:32:44 GMT) it happened nmm1(a)cus.cam.ac.uk (Nick
> Maclaren) wrote in <g7urac$a3n$1(a)gemini.csx.cam.ac.uk>:
>
>> "Properly sized types like int32_t", forsooth! Those abominations
>> are precisely the wrong way to achieve portability over a wide range
>> of systems or over the long term.
>
> I dare say you show clue-lessness

Them are fightin' words...

> No, int32_t and friends became NECESSARY when the 32 to 64 bit wave hit.
> A simple example, an audio wave header spec:
> #ifndef _WAVE_HEADER_H_
> #define _WAVE_HEADER_H_
>
> typedef struct
> { /* header for WAV-Files */
> uint8_t main_chunk[4]; /* 'RIFF' */
> uint32_t length; /* length of file */

This is precisely the wrong way to write a portable specification!

Which byte order should be used for the length field?

If we had something like uint32l_t/uint32b_t with explicit
little-/big-endian byte ordering, then it would be portable.

The only way to make the struct above portable would be to make all
16/32-bit variables arrays of 8-bit bytes instead, and then explicitly
specify how they are to be merged.

Using a shortcut specification as above would only be allowable as a
platform-specific optimization, guarded by #ifdef's, for
machine/compiler combinations which match the actual specification.

The alternative is to hide the memory ordering behind access functions
that take care of any byte swapping that might be needed.

Terje
--
- <Terje.Mathisen(a)hda.hydro.com>
"almost all programming can be viewed as an exercise in caching"
From: Terje Mathisen on
Wilco Dijkstra wrote:
> time consuming ISO process used. One of the funny moments was when I
> spotted 5 mistakes in a 20-line function that calculated the log2 of an integer
> during a code review. I hope that wasn't their average bugrate!

Indeed.

How many ways can you define such a function?

The only serious alternatives would be in the handling of
negative-or-zero inputs or when rounding the actual fp result to integer:

Do you want the Floor(), i.e. truncate, Ceil() or
Round_to_nearest_or_even()?

Using the last alternative could make it harder to come up with a
perfect implementation, but otherwise it should be trivial.

Terje
--
- <Terje.Mathisen(a)hda.hydro.com>
"almost all programming can be viewed as an exercise in caching"