From: JosephKK on
On Thu, 27 May 2010 15:12:36 GMT, Jan Panteltje
<pNaonStpealmtje(a)yahoo.com> wrote:

>On a sunny day (Thu, 27 May 2010 06:55:46 -0500) it happened "Tim Williams"
><tmoranwms(a)charter.net> wrote in <htlmk9$sfc$1(a)news.eternal-september.org>:
>
>>*Cough*
>>
>>How long do you figure the original software took to write? 300 days? If
>>they had designed it for 300-core operation from the get-go, they wouldn't
>>have had that problem.
>>
>>Sounds like a failure of management to me :)
>>
>>Tim
>
>You are an idiot, I can hardly decrypt your rant.
>The original software was written when there WERE no multicores.
>And I wrote large parts of it,
>AND it cannot be split up into more than, say, 6 threads even if you wanted to.
>But, OK, I guess somebody could use a core for each pixel,
>plus perhaps do single-instruction-multiple-data (SIMD),
>will it be YOU who writes all that? Intel has a job for you!
>
>64 bit x86 is still around, and one of the reasons AMD was successful
>with it is that it would run EXISTING code (not all of it, though),
>and even a recompile was easy.
>But multicore is a totally different beast.
>
>I'd love to see a 300 GHz gallium arsenide x86 :-)
>I would buy one.

How could you feed it over 30 billion memory transactions per second?
For that matter over 3 billion memory transactions per second? Several
sequenced edges are involved, and what about bus width? 64 lane PCIe
2/3? Where are you going to get the RAM?

How are you going to cool it?

From: Paul Keinanen on
On Fri, 04 Jun 2010 14:53:53 -0700,
"JosephKK"<quiettechblue(a)yahoo.com> wrote:

>On Thu, 27 May 2010 15:12:36 GMT, Jan Panteltje
><pNaonStpealmtje(a)yahoo.com> wrote:

>>I'd love to see a 300 GHz gallium arsenide x86 :-)
>>I would buy one.
>
>How could you feed it over 30 billion memory transactions per second?
>For that matter over 3 billion memory transactions per second? Several
>sequenced edges are involved, and what about bus width? 64 lane PCIe
>2/3? Where are you going to get the RAM?

The RAM part does not sound too demanding.

Assuming sufficient on-chip memory, any external DRAM could act as the
backing storage in a virtual memory system. Assuming 4 KiB pages (the
x86 page size), a page load would transfer 32 Kib.

A 1 Gib DRAM is arranged as 32 Ki rows x 32 Ki columns. Inside a DRAM,
a read request uses the row address to present all bits in the row to
the sense amplifiers (which eventually write those bits back into the
row). The column address is then used to select one (or more) sense
amplifier(s) to the output pin(s). In video RAMs, the row is parallel
loaded into a shift register which is then clocked out at high speed.

Assuming a 50 ns RAS cycle time, a virtual memory page can be delivered
in less than 50 ns (compared to several milliseconds for disk-based
backing storage), corresponding to roughly 80 GB/s.
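The arithmetic here can be sanity-checked in a few lines; the 4 KiB page
size and 50 ns RAS cycle are the assumptions stated above, not measured
numbers:

```python
# Sanity check of the DRAM-as-backing-store numbers above.
# Assumptions from the post: 4 KiB x86 pages, one full DRAM row
# delivered per 50 ns RAS cycle.
PAGE_BYTES = 4 * 1024      # one x86 page
RAS_CYCLE_S = 50e-9        # assumed row access cycle time

bits_per_page = PAGE_BYTES * 8
bandwidth = PAGE_BYTES / RAS_CYCLE_S   # bytes per second

print(bits_per_page)          # 32768 bits = 32 Kib per page
print(bandwidth / 2**30)      # about 76 GiB/s, i.e. roughly 80 GB/s
```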

With such long transfers, the finite speed of light does not destroy
the throughput, even if the DRAM and CPU are at some distance from each
other (for cooling etc.).

Transferring 80 GB/s over one or a few pins would be challenging even
with multilevel coding; over optical fiber, however, this could be
realistic with multicolour (WDM) systems.

From: Paul Keinanen on
On Fri, 04 Jun 2010 23:48:27 -0400, Phil Hobbs
<pcdhSpamMeSenseless(a)electrooptical.net> wrote:

>JosephKK wrote:

>> Naw, 80 GHz (U)LVPECL 8-bitters and maybe 12 or 16 bitters. Single 1.5 V
>> supply.
>
>In GaAs? Don't think so. Just driving the wires at that speed would
>take insane amounts of power.

Why would driving a 50 ohm transmission line require a huge amount of
power? On the receiver side, how much power would be required to
_reliably_ detect whether a 0 or a 1 was sent?

Assuming -174 dBm/Hz thermal noise density at room temperature, at 80
GHz bandwidth, the thermal noise power would be -65 dBm and assuming a
few dB extra required for binary detection, we are still talking about
a few nanowatts at the receiver end.
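The receiver-sensitivity estimate works out as claimed; a sketch, where
the -174 dBm/Hz density and 80 GHz bandwidth are from the post and the
10 dB detection margin stands in for the "few dB extra" (my assumption):

```python
import math

# Thermal noise floor at room temperature over the detection bandwidth.
NOISE_DBM_PER_HZ = -174.0
BANDWIDTH_HZ = 80e9

noise_dbm = NOISE_DBM_PER_HZ + 10 * math.log10(BANDWIDTH_HZ)
print(round(noise_dbm))                  # -65 dBm, as stated above

SNR_DB = 10.0                            # assumed margin for binary detection
required_watts = 10 ** ((noise_dbm + SNR_DB) / 10) / 1000   # dBm -> W
print(required_watts * 1e9)              # a few nanowatts
```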

Of course at these frequencies, the transmission line skin effect and
dielectric losses on a PCB would be considerable, requiring a high
transmitter power and hence limiting the transfer distance.

At such high frequencies, a low loss waveguide would have nearly
manageable dimensions for "long distance" communication across the
PCB:-).

From: Paul Keinanen on
On Sat, 05 Jun 2010 20:53:02 -0400, Phil Hobbs
<pcdhSpamMeSenseless(a)electrooptical.net> wrote:

>Paul Keinanen wrote:
>> On Fri, 04 Jun 2010 23:48:27 -0400, Phil Hobbs
>> <pcdhSpamMeSenseless(a)electrooptical.net> wrote:
>>
>> [...]
>>
>
>Lines on ICs aren't 50 ohms, they're all RC.

When the speed goes up, the physical distance within which a single
synchronous clock can be used, and the logic treated with a simple RC
model, must shrink.

In the old days a complete 19" box might have been considered a single
entity, clocked by a central clock, with the interconnections analyzed
as RC circuits. The interconnections between boxes were handled with
serial or parallel transmission lines driven by proper line drivers
and receivers.

Later on a single card was a self contained unit with transmission
line communication through the backplane.

These days the interconnections between ICs on a PCB are often
transmission lines.

For even greater speeds, physically small sections within a single IC
chip must be considered as independent entities, interconnected by
asynchronous transmission lines to transfer data between them. The
popularity of multicore processors is a clear indication of this
trend.

>There are millions of
>them, so even with 200 mV swings you'd be talking about 400 watts per
>million wires. Lava city.

On an independent entity much less than 1 mm² in size, what forces the
use of such a huge voltage swing?

At lower speeds with unbalanced logic, ground bounce will eventually
eat the noise margin. With ECL-style gates having true and complement
outputs, the ground potential fluctuations would not be significant,
reducing the required voltage swing and hence the power dissipation.
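For what it's worth, the "400 watts per million wires" figure quoted
above is easy to reproduce; the 100 ohm load below is my assumption,
chosen because it yields that number (real on-chip lines are RC, so
this is only the order of magnitude being argued about):

```python
# Reproducing the quoted swing-power figure under an assumed load.
V_SWING = 0.2        # 200 mV swing, from the quoted post
R_LOAD = 100.0       # ohms, assumed (e.g. differential termination)
N_WIRES = 1_000_000

p_per_wire = V_SWING ** 2 / R_LOAD   # 0.4 mW per wire
print(p_per_wire * N_WIRES)          # 400 W total
```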

>Not to mention that the long lines all have
>repeaters to preserve the bandwidth, which multiplies the power dissipation.

How many decibels/mm are the losses on a transmission line on the
chip?


From: keithw86 on
On Jun 6, 12:39 am, Paul Keinanen <keina...(a)sci.fi> wrote:
> On Sat, 05 Jun 2010 20:53:02 -0400, Phil Hobbs
>
>
>
> <pcdhSpamMeSensel...(a)electrooptical.net> wrote:
>
> > [...]
>
> >Not to mention that the long lines all have
> >repeaters to preserve the bandwidth, which multiplies the power dissipation.
>
> How many decibels/mm are the losses on a transmission line on the
> chip?

Repeaters aren't used because of loss. They're used because the RC
delay is too high. The delay of a line grows roughly as the square of
its length, so at some point a gate delay becomes smaller than the
difference between (2l)^2 and 2(l^2) + gate. I've seen lines with four
repeaters. Major work was done to get the tools to just use inverters
when there was an even number of repeaters; even larger gain.
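The break-even argument can be sketched numerically; the RC constant
and gate delay below are made-up illustrative values, not real process
numbers:

```python
# Wire delay grows ~quadratically with length, so splitting a line into
# segments with repeaters trades quadratic wire delay for linear gate
# delay. k_rc and t_gate are illustrative constants in arbitrary units.
def line_delay(length, n_repeaters, k_rc=1.0, t_gate=0.5):
    """Total delay of a wire split by n_repeaters into equal segments."""
    n_seg = n_repeaters + 1
    seg = length / n_seg
    return n_seg * k_rc * seg ** 2 + n_repeaters * t_gate

# A length-4 line: unbroken delay 16.0, one repeater drops it to 8.5,
# with diminishing returns as more repeaters are added.
for n in range(5):
    print(n, line_delay(4.0, n))
```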

