From: Del Cecchi` on 1 Apr 2010 23:48

MitchAlsup wrote:
> On Apr 1, 5:40 pm, timcaff...(a)aol.com (Tim McCaffrey) wrote:
>
>> The PCIe 2.0 links on the Clarkdale chips run at 5G.
>
> And how many dozen meters can these wires run?
>
> Mitch

Maybe 1 dozen meters, depending on the thickness of the wire. Wire
thickness depends on how many you want to be able to put in a cable,
and on whether you want to be able to bend those cables.

10GBASE-T, or whatever it is called, gets 10 Gbits/second over 4
twisted pairs for 100 meters by what I classify as unnatural acts
torturing the bits. The block codes and other machinery involved also
add a fair amount of latency, and considerable power is consumed.
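A back-of-the-envelope sketch of the per-pair signalling arithmetic
behind that claim. The figures below (4 pairs, roughly 800 Msymbols/s
per pair) are nominal 10GBASE-T numbers quoted from memory, so treat
them as assumptions rather than spec citations:

    # Rough 10GBASE-T arithmetic (nominal, assumed figures).
    pairs = 4
    payload_bps = 10e9        # target payload rate for the whole link
    symbol_rate = 800e6       # symbols/s on each pair (assumed nominal)

    per_pair_bps = payload_bps / pairs            # 2.5 Gbit/s per pair
    bits_per_symbol = per_pair_bps / symbol_rate  # ~3.1 info bits/symbol

    print(per_pair_bps, bits_per_symbol)

Squeezing more than 3 information bits out of every symbol on 100 m of
twisted pair is what forces the dense constellation and the heavy block
coding, which is where the latency and power Del mentions come from.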
From: Robert Myers on 2 Apr 2010 00:03

On Apr 1, 11:48 pm, Del Cecchi` <delcec...(a)gmail.com> wrote:
> MitchAlsup wrote:
>> On Apr 1, 5:40 pm, timcaff...(a)aol.com (Tim McCaffrey) wrote:
>>
>>> The PCIe 2.0 links on the Clarkdale chips run at 5G.
>>
>> And how many dozen meters can these wires run?
>>
>> Mitch
>
> Maybe 1 dozen meters, depending on the thickness of the wire. Wire
> thickness depends on how many you want to be able to put in a cable,
> and on whether you want to be able to bend those cables.
>
> 10GBASE-T, or whatever it is called, gets 10 Gbits/second over 4
> twisted pairs for 100 meters by what I classify as unnatural acts
> torturing the bits. The block codes and other machinery involved also
> add a fair amount of latency, and considerable power is consumed.

I could build a helluva computer in a dozen meters^3, if only I could
figure out how to get rid of the heat.

Robert.
From: Muzaffer Kal on 2 Apr 2010 00:07

On Thu, 01 Apr 2010 22:48:47 -0500, Del Cecchi` <delcecchi(a)gmail.com>
wrote:
> MitchAlsup wrote:
>> On Apr 1, 5:40 pm, timcaff...(a)aol.com (Tim McCaffrey) wrote:
>>
>>> The PCIe 2.0 links on the Clarkdale chips run at 5G.
>>
>> And how many dozen meters can these wires run?
>>
>> Mitch
>
> Maybe 1 dozen meters, depending on the thickness of the wire. Wire
> thickness depends on how many you want to be able to put in a cable,
> and on whether you want to be able to bend those cables.
>
> 10GBASE-T, or whatever it is called, gets 10 Gbits/second over 4
> twisted pairs for 100 meters by what I classify as unnatural acts
> torturing the bits.

You should also add that this is full duplex, i.e. simultaneous
transmission of 10G in both directions. One needs 4 equalizers, 4 echo
cancellers, and 12 NEXT and 12 FEXT cancellers, in addition to a fully
parallel LDPC decoder (don't even talk about the insane requirement on
the clock recovery block). Over the last 5 years probably US$ 100M of
VC money has been spent developing 10GBASE-T PHYs, with several
startups disappearing with not much to show for it. Torturing the bits
indeed (not to mention the torture of the engineers trying to make this
thing work).
--
Muzaffer Kal
DSPIA INC.
ASIC/FPGA Design Services
http://www.dspia.com
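A minimal sketch of where those canceller counts come from, assuming
the usual full-duplex, 4-pair arrangement Muzaffer describes: each
receiver has to cancel its own transmitter's echo plus crosstalk from
the three other pairs at both the near end and the far end:

    # Canceller bookkeeping for a 4-pair, full-duplex PHY (assumed layout).
    pairs = 4
    echo_cancellers = pairs                  # each RX cancels its own TX
    next_cancellers = pairs * (pairs - 1)    # crosstalk from 3 near-end TXs
    fext_cancellers = pairs * (pairs - 1)    # crosstalk from 3 far-end TXs

    print(echo_cancellers, next_cancellers, fext_cancellers)  # 4 12 12

Each of those is an adaptive filter running at the full symbol rate,
which accounts for much of the power and the engineering torture.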
From: Terje Mathisen on 2 Apr 2010 08:07

Morten Reistad wrote:
> Now, can we attack this from a simpler perspective; can we make
> the L2-memory interaction more intelligent? Like actually make
> a paging system for it? Paging revolutionised the disk-memory
> systems, remember?

Morten, I've been preaching these equivalences for more than 5 years:

Old Mainframe: cpu register -> memory -> disk -> tape
Modern micro:  cpu register -> cache  -> ram  -> disk

Current cache-RAM interfaces work in ~128-byte blocks, just like the
page size of some of the earliest machines with paging (PDP-10/11?).

RAM needs to be accessed in relatively large blocks, since the
hardware is optimized for sequential access.

Current disks are of course completely equivalent to old tapes: yes,
it is possible to seek randomly, but nothing but really large
sequential blocks will give you close to theoretical throughput.

Tape is out of the question now simply because the time to do a
disaster-recovery rollback of a medium-size (or larger) system is
measured in days or weeks, instead of a couple of hours.

Terje
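A quick sketch of why "disks are the new tapes". The access and
streaming numbers below are illustrative assumptions (roughly 8 ms per
random access, roughly 100 MB/s sequential), not measurements of any
particular drive; the shape of the curve is the point:

    # Effective throughput vs. transfer size for a seek-then-stream device.
    # Assumed, illustrative figures: ~8 ms access time, ~100 MB/s streaming.
    access_s = 8e-3
    stream_bytes_per_s = 100e6

    for size in (4e3, 64e3, 1e6, 16e6, 256e6):
        t = access_s + size / stream_bytes_per_s
        effective = size / t
        print(f"{size/1e3:9.0f} kB transfer -> {effective/1e6:6.1f} MB/s")

With 4 kB transfers you see well under 1% of the streaming rate; only
multi-megabyte sequential transfers get close to it. The same shape,
scaled down by several orders of magnitude, is why cache lines and RAM
bursts keep getting bigger.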
From: Stephen Fuld on 2 Apr 2010 11:23
On 4/2/2010 5:07 AM, Terje Mathisen wrote:
> Morten Reistad wrote:
>> Now, can we attack this from a simpler perspective; can we make
>> the L2-memory interaction more intelligent? Like actually make
>> a paging system for it? Paging revolutionised the disk-memory
>> systems, remember?
>
> Morten, I've been preaching these equivalences for more than 5 years:
>
> Old Mainframe: cpu register -> memory -> disk -> tape
> Modern micro:  cpu register -> cache  -> ram  -> disk
>
> Current cache-RAM interfaces work in ~128-byte blocks, just like the
> page size of some of the earliest machines with paging (PDP-10/11?).
>
> RAM needs to be accessed in relatively large blocks, since the
> hardware is optimized for sequential access.
>
> Current disks are of course completely equivalent to old tapes: yes,
> it is possible to seek randomly, but nothing but really large
> sequential blocks will give you close to theoretical throughput.
>
> Tape is out of the question now simply because the time to do a
> disaster-recovery rollback of a medium-size (or larger) system is
> measured in days or weeks, instead of a couple of hours.

While this is all, at least sort of, true, the question is what you
want to do about it. ISTM that the salient characteristics of the
paging (i.e. memory-to-disk) interface are that it requires OS
involvement to optimize, that the memory-to-disk interfaces have been
getting narrower (i.e. SATA, Fibre Channel and serial SCSI), not
wider, and that the CPU doesn't directly address the disk. Do you want
to narrow the CPU's addressing range to just the cache? Do you want
the software to get involved in cache-miss processing? This is all to
say that, as usual, the devil is in the details. :-(

--
- Stephen Fuld
(e-mail address disguised to prevent spam)
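To put a rough number on why software in the cache-miss path is the
sticking point: the latencies below are illustrative, order-of-magnitude
assumptions (a trap into a software handler around a microsecond, a
DRAM access around 100 ns, a disk access around 8 ms), not measurements:

    # Why OS-style handling is cheap for page faults but ruinous for
    # cache misses.  All latencies are assumed, order-of-magnitude values.
    trap_overhead_ns = 1_000       # enter/exit a software miss handler
    dram_miss_ns     = 100         # service an L2/L3 miss from DRAM
    disk_fault_ns    = 8_000_000   # one disk access (~8 ms)

    print("handler cost vs. disk page fault: %.2f%%"
          % (100 * trap_overhead_ns / disk_fault_ns))   # ~0.01%
    print("handler cost vs. DRAM cache miss: %.0fx"
          % (trap_overhead_ns / dram_miss_ns))          # ~10x

The handler is noise next to a disk access but an order of magnitude
more expensive than the miss it would be managing, which is one
concrete form those devilish details take.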