From: David Kanter on
There were 2-3 very interesting presentations on inductive coupling at
ISSCC this year from a professor in Tokyo:
http://www.eetimes.com/news/design/showArticle.jhtml?articleID=222700422

IIRC, they put spiral inductors on an upper metal layer. The
downside is that your upper-level power grids get bigger, but the
penalty is only ~20-30%. The upside for communication bandwidth and power
looked really good, especially compared to TSVs.

David
From: Przemek Klosowski on
On Tue, 02 Mar 2010 19:55:31 -0500, nedbrek wrote:

> "MitchAlsup" <MitchAlsup(a)aol.com> wrote in message
> news:49270cd8-a1b5-4186-9b82-0564efee8c56(a)33g2000yqj.googlegroups.com...
> On Mar 2, 5:28 am, n...(a)cam.ac.uk wrote:

>> Consider a chip containing CPUs sitting in a package with a small-to-
>> medium number of DRAM chips. The CPU and DRAM chips are orchestrated with
>> an interface that exploits the on-die wire density that cannot escape
>> the package boundary.
>
> I'm not aware of any MCM that will support on die wire density.

What about solder bumps on face-to-face chips, like this (side view):

=======            memory chip facing down, with solder bump field at its right end
    UUU
    AAA
    =============  CPU chip facing up, with solder bump field on the left
                   and regular wire-bond pads on the right

The assembly would be similar to SMT: solder paste, and a heat profile that
melts the solder, which then pulls the parts into alignment by surface
tension. I think soldering temperatures are still compatible with bare
silicon.

One could even run bus-style parallel metal pass-through traces to another
field of pads on the left edge of the memory chip and continue this scheme,
although assembly would get progressively trickier with more than two
chips.
From: Del Cecchi on
Andy "Krazy" Glew wrote:
> Terje Mathisen wrote:
>> Robert Myers wrote:
>>> From that experience, I acquired several permanent prejudices:
>>>
>>> 1. For scientific/engineering applications, "programmers" should
>>> either be limited to sorting and labeling output, or (preferably) they
>>> should be shipped to the antarctic, where they could be sent, one at a
>>> time, to check the temperature gauge a quarter mile from the main
>>> camp.
>>>
>>> 2. No sane computational physicist should imagine that even a thorough
>>> knowledge of FORTRAN was adequate preparation for getting things done.
>>>
>>> 3. Computer architects are generally completely out of touch with
>>> reality.
>>
>> Hmmm... let's see...
>>
>> 1. I'm a programmer, but otoh I do like xc skiing and I would love to
>> be able to spend a season in Antarctica.
>>
>> 2. I learned programming on a Fortran 2 compiler, '27H' Hollerith text
>> constants and all. I've done CFC (computational fluid chemistry)
>> optimization, doubling the simulation speed.
>>
>> 3. Yes, my employer tends to put me in the 'Architect' role on the
>> staff diagrams.
>>
>> So Robert, do I satisfy your prejudices?
>
>
> Hey, I fill all the same criteria that Terje does.
>
> Does that mean we can both go spend a season in Antarctica? My
> telemarking will improve.
>
> (One of the big disappointments of my life is that it is not typical for
> programmers to work in isolated places, like on top of volcanoes.)

For a few months a year, Minnesota isn't that much different from
Antarctica. :-)
From: Morten Reistad on
In article <hmjf1a$72v$1(a)news.eternal-september.org>,
Stephen Fuld <SFuld(a)Alumni.cmu.edu.invalid> wrote:
>On 3/1/2010 7:15 PM, Robert Myers wrote:
>> On Mar 1, 8:05 pm, Del Cecchi<delcecchinospamoftheno...(a)gmail.com>
>> wrote:

>But there are lots of things we could do given enough money. For
>example, we could integrate the memory on chip or on an MCM to eliminate
>the pin restrictions. We are also not near the limit of pin wiggling speed.

Has anyone experimented with having a large (100s of megabytes) L2 cache,
used as RAM, and using all the DRAM as swap space? I mean, let the OS
actually control the transition, not just a simple hardware hash?
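
To make the idea concrete, here is a minimal user-space sketch (in C) of
what OS-controlled placement might look like: a small fast array stands in
for the cache used as RAM, a large slow array stands in for the DRAM used
as swap, and every transfer between the two is an explicit software
decision rather than a hardware hash. All sizes and names are made up for
illustration, not taken from any real system.

/* Software-managed "paging": a small fast memory holds the working set,
 * a large slow memory is the backing store, and misses are handled by
 * explicit copies chosen by software (here, simple FIFO replacement). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE   4096
#define FAST_PAGES  64            /* "L2 used as RAM":    64 * 4 KiB   */
#define SLOW_PAGES  1024          /* "DRAM used as swap": 1024 * 4 KiB */

static uint8_t fast_mem[FAST_PAGES][PAGE_SIZE];   /* small and fast */
static uint8_t slow_mem[SLOW_PAGES][PAGE_SIZE];   /* large and slow */

static int resident[SLOW_PAGES];  /* virtual page -> fast slot, or -1 */
static int owner[FAST_PAGES];     /* fast slot -> virtual page, or -1 */
static int victim;                /* FIFO eviction pointer            */

static void pager_init(void)
{
    memset(resident, -1, sizeof resident);
    memset(owner, -1, sizeof owner);
}

/* Software "page fault" handler: bring vpage into fast memory,
 * writing the evicted page back to slow memory first. */
static uint8_t *touch_page(int vpage)
{
    int slot = resident[vpage];
    if (slot < 0) {                       /* miss: software decides   */
        slot = victim;
        victim = (victim + 1) % FAST_PAGES;
        if (owner[slot] >= 0) {           /* write the victim back    */
            memcpy(slow_mem[owner[slot]], fast_mem[slot], PAGE_SIZE);
            resident[owner[slot]] = -1;
        }
        memcpy(fast_mem[slot], slow_mem[vpage], PAGE_SIZE);
        owner[slot] = vpage;
        resident[vpage] = slot;
    }
    return fast_mem[slot];
}

int main(void)
{
    pager_init();
    /* Touch more pages than fit in fast memory to force evictions. */
    for (int v = 0; v < 200; v++)
        touch_page(v)[0] = (uint8_t)v;    /* fake work on the page    */
    printf("touched 200 pages through %d fast slots\n", FAST_PAGES);
    return 0;
}

The only point of the sketch is the division of labor: a hit costs nothing
extra, and on a miss the software picks the victim and does the copies,
which is exactly where an OS could apply a smarter policy than a fixed
hardware hash.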

>I cite as a counter example, that if we had wanted more bandwidth and
>were willing to pay some more, and sacrifice some latency, we would all
>be using more banks of FB-DIMMs.

I use several systems with at least 8-way DRAM banks. But that is
in addition to huge caches, not instead of them.

-- mrr
From: nmm1 on
In article <80oa67-6k2.ln1(a)laptop.reistad.name>,
Morten Reistad <first(a)last.name> wrote:
>In article <hmjf1a$72v$1(a)news.eternal-september.org>,
>Stephen Fuld <SFuld(a)Alumni.cmu.edu.invalid> wrote:
>>On 3/1/2010 7:15 PM, Robert Myers wrote:
>>> On Mar 1, 8:05 pm, Del Cecchi<delcecchinospamoftheno...(a)gmail.com>
>>> wrote:
>
>>But there are lots of things we could do given enough money. For
>>example, we could integrate the memory on chip or on an MCM to eliminate
>>the pin restrictions. We are also not near the limit of pin wiggling speed.
>
>Has anyone experimented with having a large (100s of megabytes) L2 cache,
>used as RAM, and using all the DRAM as swap space? I mean, let the OS
>actually control the transition, not just a simple hardware hash?

I have heard that some of the embedded people do, though on a smaller
scale.
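
A common small-scale form of this is a software-managed scratchpad: a few
kilobytes of on-chip SRAM hold the current working block, the bulk data
lives in external memory, and every transfer is explicit. A minimal sketch
in C of that pattern, with memcpy standing in for the DMA engine a real
part would use, and all sizes made up for illustration:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define SRAM_WORDS 1024                     /* tiny on-chip scratchpad  */
static int32_t scratchpad[SRAM_WORDS];      /* fast, software-managed   */
static int32_t dram_data[64 * SRAM_WORDS];  /* big, slow backing store  */

/* Process dram_data block by block: pull a block into the scratchpad,
 * work on it at SRAM speed, then push it back.  The application (not a
 * hardware hash) decides every transfer. */
static void process_all(void)
{
    size_t nblocks = sizeof dram_data / sizeof scratchpad;
    for (size_t b = 0; b < nblocks; b++) {
        memcpy(scratchpad, &dram_data[b * SRAM_WORDS], sizeof scratchpad);
        for (size_t i = 0; i < SRAM_WORDS; i++)
            scratchpad[i] *= 2;             /* placeholder computation  */
        memcpy(&dram_data[b * SRAM_WORDS], scratchpad, sizeof scratchpad);
    }
}

int main(void)
{
    process_all();
    return 0;
}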


Regards,
Nick Maclaren.