From: Kenneth P. Turvey on 12 Jan 2010 17:12

On Tue, 12 Jan 2010 08:16:18 -0800, Wojtek wrote:
> I am curious. Just what WOULD need such a large index? Every sand grain
> on earth? Every star in every galaxy in the universe?

<biggeek> We'll need it to make sure we don't run into any of the stars in the various universes we pass through while using our faster-than-light trans-dimensional inter-universal warp drives. </biggeek>

--
Kenneth P. Turvey <evoturvey(a)gmail.com>
From: John B. Matthews on 12 Jan 2010 17:34

In article <dbe988bf-ad61-48f9-b189-a140d7d429a9(a)g25g2000yqd.googlegroups.com>, Lew <lew(a)lewscanon.com> wrote:

> Lew wrote:
> > Where will you store the array? Either you have a crapload of RAM
> > (one bit per atom storage density?) or the largest-capacity storage
> > device ever invented (one bit per atom storage density?).
> >
> > What's the average retrieval latency? Even at one bit per atom
> > storage density, it must take even a light beam noticeable time to
> > reach the further reaches of the storage device; anything slower like
> > a semiconductor must take a really long time.
> >
> > A silicon crystal lattice has a lattice spacing of just over half a
> > nanometer, or 5.4 x 10^-10 m. A three-dimensional storage medium for
> > a 9 x 10^18-element array would hold just over 2 x 10^6 elements to
> > the side. An average access would be halfway in each dimension, or
> > 10^6 elements, which in a silicon lattice is about 5.4 x 10^-4 m, times
> > three for a total traversal distance of about 1.6 x 10^-3 m. Each
> > way. For a round trip, slightly over 3 x 10^-3 m. A light beam
> > travels that in 0.1 microseconds (10^-7 s).
>
> Drat! Mixed up my CGS and MKS.

You caught it in time; not so for the Mars Climate Orbiter folks:

<http://programmer.97things.oreilly.com/wiki/index.php/Prefer_Domain-Specific_Types_to_Primitive_Types>

The article mentions Ada, but Java is well represented:

<http://jscience.org/api/javax/measure/unit/package-summary.html>

> That's 10^-5 s, or 10 microseconds. That's about 20,000
> clock cycles of latency on a modern processor,
> far more on the future processors of 2074.
>
> > Either we'll find a sparse representation for such arrays, we'll
> > invent much denser storage media and better ways to access them,
> > we'll find some way to keep the processor busy during that latency,
> > or we'll use super-luminal access speeds, perhaps through quantum
> > superposition.

--
John B. Matthews
trashgod at gmail dot com
<http://sites.google.com/site/drjohnbmatthews>
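[Editor's note: Lew's estimate can be sanity-checked with a short back-of-envelope program. The constants below are the thread's own assumptions (a cubic storage medium, one element per lattice site, 5.4 x 10^-10 m silicon lattice spacing), not measured values:]

```java
// Back-of-envelope check of the access-latency estimate quoted above.
// Assumptions from the thread: cubic layout, one element per silicon
// lattice site, lattice spacing 5.4e-10 m, light speed ~3e8 m/s.
public class AccessLatency {
    public static void main(String[] args) {
        double elements = 9e18;      // array size discussed in the thread
        double spacing  = 5.4e-10;   // silicon lattice spacing, metres
        double c        = 2.998e8;   // speed of light, m/s

        double side      = Math.cbrt(elements);     // elements per edge, ~2.08e6
        double perDim    = (side / 2) * spacing;    // average traversal per axis, m
        double oneWay    = 3 * perDim;              // ~1.7e-3 m, matching "1.6 x 10^-3 m"
        double roundTrip = 2 * oneWay;              // ~3.4e-3 m, "slightly over 3 x 10^-3 m"
        double seconds   = roundTrip / c;           // light-speed round trip, seconds

        System.out.printf("one way    = %.2e m%n", oneWay);
        System.out.printf("round trip = %.2e m%n", roundTrip);
        System.out.printf("light time = %.2e s%n", seconds);
    }
}
```

The distances agree with the figures quoted above; the light time it prints is what a reader can compare against the corrected numbers in the post.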
From: Joshua Cranmer on 12 Jan 2010 18:06

On 01/12/2010 02:08 PM, Lew wrote:
> That's about 200 clock
> cycles of latency on a modern processor, far more on the future
> processors of 2074.

I am not an electrical engineer, but I doubt general processor clock speeds will ever go significantly beyond the 3.5-ish GHz we have now, due primarily to significant power-dissipation issues as well as the fact that the chip will be too damn big. To my knowledge, we are hitting the physical limits of making an individual core much more powerful.

So I find it far more likely that the processors of 2074 will be 3.5-megacore processors, with each core having the performance of, say, a Pentium IV.

> Either we'll find a sparse representation for such arrays, we'll
> invent much denser storage media and better ways to access them, we'll
> find some way to keep the processor busy during that latency, or we'll
> use super-luminal access speeds, perhaps through quantum
> superposition.

It's been years since I last looked at quantum computers, but progress on them has been slow. The most powerful one I can find evidence of right now was 8 qubits, which used a design that I recall maxing out around 40 qubits. I also recall that many of the available designs have theoretical capacities below 100 qubits, which makes them inadequate for useful purposes. I also recall that quantum superposition doesn't allow you to transfer information at superluminal speeds. Then again, my knowledge of quantum mechanics is extremely poor, so I could be wrong.

--
Beware of bugs in the above code; I have only proved it correct, not tried it. -- Donald E. Knuth
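[Editor's note: both cycle counts traded in the thread (the "200" Joshua quotes and the corrected "20,000") follow directly from latency times clock rate. A 2 GHz clock is an assumption here; the thread never states the frequency used:]

```java
// Convert a memory-access latency into clock cycles.
// The 2 GHz clock rate is an assumed value, not one stated in the thread.
public class LatencyCycles {
    static long cycles(double latencySeconds, double clockHz) {
        return Math.round(latencySeconds * clockHz);
    }

    public static void main(String[] args) {
        double clockHz = 2e9; // assumed 2 GHz
        System.out.println(cycles(1e-7, clockHz)); // 200   -- the "200 clock cycles" figure
        System.out.println(cycles(1e-5, clockHz)); // 20000 -- the corrected "20,000" figure
    }
}
```

That both quoted figures fall out of the same assumed clock rate suggests they differ only in the latency estimate, not in the conversion.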
From: Arne Vajhøj on 17 Jan 2010 19:50

On 12-01-2010 11:16, Wojtek wrote:
> Arne Vajhøj wrote:
>> On 11-01-2010 17:09, Maarten Bodewes wrote:
>>> Arne Vajhøj wrote:
>>>> BTW, even long would be too small for indexes if Java
>>>> will be used after 2074, but somehow I doubt that would
>>>> be the case. And besides we do not have the verylong datatype
>>>> yet.
>>>
>>> You are expecting memory sizes of 9,223,372,036,854,775,807 bytes????
>>>
>>> That's 9,223 PETA bytes. Hmm, weird, may happen. But it is certainly
>>> rather large.
>>
>> In 2074? Yes!
>
> I am curious. Just what WOULD need such a large index? Every sand grain
> on earth? Every star in every galaxy in the universe?

I don't know. What I do know is that several times during the last 50 years somebody has said that you will never need more than X memory. And they have been wrong every time.

I think it is most logical to assume that the trend will continue and that we will indeed find something to use such huge address spaces for.

Arne
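[Editor's note: Maarten's figure is simply `Long.MAX_VALUE` (2^63 - 1) read as a byte count; a two-liner confirms the conversion, using decimal petabytes of 10^15 bytes:]

```java
// Long.MAX_VALUE taken as a byte count, expressed in decimal petabytes.
public class MaxIndex {
    public static void main(String[] args) {
        long max = Long.MAX_VALUE; // 9,223,372,036,854,775,807
        System.out.println(max);
        System.out.println(max / 1_000_000_000_000_000L + " PB"); // 9223 PB
    }
}
```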
From: Arne Vajhøj on 17 Jan 2010 19:51
On 12-01-2010 18:06, Joshua Cranmer wrote:
> On 01/12/2010 02:08 PM, Lew wrote:
>> That's about 200 clock
>> cycles of latency on a modern processor, far more on the future
>> processors of 2074.
>
> I am not an electrical engineer, but I doubt general processor clock
> speeds are ever going to go significantly further than the 3.5-ish GHz
> that we have now, due primarily to significant power dissipation issues
> as well as the fact that the chip will be too damn big. To my knowledge,
> we are hitting the physical limits of making an individual core much
> more powerful.
>
> So I find it far more likely that the processors of 2074 will be 3.5
> megacore processors with each core having the performance of, say, a
> Pentium IV.

That seems to be the direction for the next decade and possibly beyond. And it will have some implications for memory as well.

Arne