From: Anton Ertl on 25 Oct 2009 15:05

"Andy \"Krazy\" Glew" <ag-news(a)patten-glew.net> writes:
>I am not aware of an Itanium shipped or proposed that had an "x86 core
>on the side".
>
>There were proposals to have some special purpose hardware, like some
>x86 instruction decoders that packed into VLIW instructions.

And the IA-64 implementations before Montecito all had hardware for
executing IA-32 code, however that was implemented.

- anton
--
M. Anton Ertl                    Some things have to be seen to be believed
anton(a)mips.complang.tuwien.ac.at Most things have to be believed to be seen
http://www.complang.tuwien.ac.at/anton/home.html
From: nmm1 on 25 Oct 2009 15:23

In article <ffjer6-5l5.ln1(a)vimes.paysan.nom>,
Bernd Paysan <bernd.paysan(a)gmx.de> wrote:
>Robert Myers wrote:
>
>> Thus, even though you can't do operations with *no* net cost in
>> energy, we can still build and operate devices that act as quantum
>> mechanical computers to an arbitrarily good approximation.
>
>No, we can't. The quantum computer people are struggling with the same
>thermal de-Broglie wavelength as the superconducting people. Just that
>in quantum computing, the wavelength also depends on how many bits your
>system has - but unfortunately, it goes exponentially with the number of
>bits. So you can either make it arbitrarily slow for many bits, or
>reduce the number of bits to an almost useless amount, but not both at
>the same time.
>
>The people who work on these research projects are certainly more
>optimistic than I am, but maybe Andy is right, and I've taken some
>lessons from Nick ;-).

Heaven help you :-)

Every so often, there is an announcement from some research group that
they have found a way around the exponential increase in difficulty.
But it usually turns out that the statement is limited to the press
release, and the actual paper is a bit more circumspect. The delivered
number of bits does seem to be linear in time at present, though.

My main reason for doubting that this will get anywhere is that several
very good theoretical physicists I know of are similarly doubtful. And
I mean specialists in quantum mechanics, of course.

Regards,
Nick Maclaren.
From: Anton Ertl on 25 Oct 2009 15:12

Robert Myers <rbmyersusa(a)gmail.com> writes:
>I don't have enough insight into the other architectures to comment.
>I first looked at the chart and said, yup, just like I said, it's a
>compiler built and tuned around x86.

Gcc? The first target was the 68k architecture, so it's certainly not
built around IA32. Of course, different targets have received
different amounts of tuning, and IA32 is probably among those that
have received the most. But in any case, the performance advantage of
IA32 implementations on the Gforth benchmarks comes from
indirect-branch prediction.

>I just happened to have your charts fresh in mind when I made the
>comment, and neither your results nor the fact that binary translation
>doesn't work well is a surprise.

Your theory of how binary translation works is interesting, because
Gforth 0.7.0 works like such a naive binary translator for
straight-line Forth code: it just concatenates the code fragments for
the VM instructions in the Forth code sequence. However, for VM
control flow, Gforth uses indirect branches, and even naive binary
translators are more sophisticated. I also doubt that the
PA-RISC->IA64 binary translator is as naive as Gforth for
straight-line code.

- anton
--
M. Anton Ertl                    Some things have to be seen to be believed
anton(a)mips.complang.tuwien.ac.at Most things have to be believed to be seen
http://www.complang.tuwien.ac.at/anton/home.html
From: Robert Myers on 25 Oct 2009 16:32

On Oct 25, 2:54 pm, Bernd Paysan <bernd.pay...(a)gmx.de> wrote:
> Robert Myers wrote:
> > Let's see. Quantum mechanics properly applied takes account of
> > everything in the whole universe, which is, so far as I know, quantum
> > mechanical and reversible in its entirety.
>
> Nope. That's just wishful thinking from people who do QM. The concepts
> behind decoherence are partly understood, and some even quantified (e.g.
> the critical temperature corresponds to what has been called "thermal
> de-Broglie wavelength", which more or less describes the relation
> between random changes in the conductor and a "volume of coherence"),
> but overall, this is not part of QM. QM describes what happens within
> the volume of coherence, classical physics describes what happens
> outside. There is no accepted unified theory that gives a good reason
> for this boundary and works equally well on both sides of the coherence
> fence.
>
> Note that when the "observer" is actually a quantum mechanical object,
> it won't disturb the other parts of the system - it will be part of this
> reversible dance.

Thanks. I understand the disconnect better. The problem of people
using different foundational language to describe quantum mechanics is
ancient.

I'm troubled by the idea that QM describes what happens inside some
coherence volume and classical physics happens elsewhere. If you take
the appropriate limit of QM, you recover classical physics, an ancient
result that is reassuring to everyone, but "classical" systems still
obey the laws of QM. They just don't as readily display the weird
effects that make quantum mechanics *seem* so different from classical
physics.

Coherent radiation displays properties like speckle. For incoherent
radiation, the fundamental mathematics that lead to speckle are still
there, but they are blurred to the point where you can't see anything
that looks similarly interesting.
It's not as if, for light, there were some hard boundary: coherent and
incoherent. There are actually only degrees of coherence, which can be
described without semantic sloppiness by the use of properly-formulated
correlation functions. I've never been through a similar exercise with
quantum mechanics, but I'm fairly certain that the entire program would
go through without modification.

The problem is that, so far as I know, the mathematics of quantum
mechanics has no place for an "observer," which is always added as a
deus ex machina, and it is in trying to introduce an observer that
problems arise and mathematics and sometimes even sanity take a
beating. Coherence theory is a *huge* improvement over talking about
things like "the collapse of the wavefunction," but I don't find the
idea of drawing a box and saying things are quantum mechanical within
it and "classical" outside it to be particularly helpful.

I don't know how to introduce the notion of an observer cleanly, but I
don't think anyone else does, either, even after the advent of
coherence theory. You can make the observer part of the wavefunction
(or simply recognize that, yes, indeed, the laboratory and the observer
are part of the wavefunction whether you want to think about it or
not), but then you are left with the problem of observing the observer.

> > Thus, even though you can't do operations with *no* net cost in
> > energy, we can still build and operate devices that act as quantum
> > mechanical computers to an arbitrarily good approximation.
>
> No, we can't. The quantum computer people are struggling with the same
> thermal de-Broglie wavelength as the superconducting people. Just that
> in quantum computing, the wavelength also depends on how many bits your
> system has - but unfortunately, it goes exponentially with the number of
> bits. So you can either make it arbitrarily slow for many bits, or
> reduce the number of bits to an almost useless amount, but not both at
> the same time.
When I say something like "you can build and operate," I do not mean
what is practically possible (what can actually be done with equipment
that can at least be imagined with existing technology), but rather to
make a statement about what I believe the mathematics says. I really
have no clue as to how to build an actual device, and I'm quite sure
that there are important limitations that can be quantified as in any
other area of statistical physics, but I can't find anything in the
mathematics that would lead to the notion of QM in one place and
classical in another.

Robert.
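[Editorial note, not part of the thread: the "properly-formulated correlation functions" Robert alludes to do exist in standard optical coherence theory. One example is the degree of first-order coherence,]

```latex
g^{(1)}(\tau) \;=\; \frac{\langle E^{*}(t)\, E(t+\tau) \rangle}{\langle |E(t)|^{2} \rangle}
```

[where $|g^{(1)}(\tau)| = 1$ corresponds to full coherence, $|g^{(1)}(\tau)| \to 0$ to incoherence, and every intermediate value to partial coherence -- a continuum rather than a hard coherent/incoherent boundary, which is exactly Robert's point about light.]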
From: Terje Mathisen on 25 Oct 2009 16:51
Robert Myers wrote:
> On Oct 25, 4:10 am, Terje Mathisen<terje.wiig.mathi...(a)gmail.com>
>> I don't mind Kudos from you Robert, but I don't think I deserve it
>> this time:
>>
>> I didn't post anything about Linus' OoO ideas.
>
> Terje, you were kind enough to explain to me, in a private
> correspondence, that, no matter how inept the coder or the compiler,
> OoO hardware would eventually figure out and exploit a circumstance
> where software pipelining might conceivably have been helpful. That
> is to say (not your words, but mine), OoO hardware knows how to do
> software pipelining, even if, in some rare awkward instances, it
> might take a while.

Right, this is exactly what the PPro and other OoO CPUs were designed
to do: figure out the dependency chains within and between loop
iterations, and allow as many as possible to be in flight
simultaneously, so as to cover the chain latency.

An in-order CPU requires the asm programmer and/or compiler writer to
figure out statically how long each of those chains will be, and then
unroll the code sufficiently to handle it. This, btw, requires a _lot_
more architectural registers, and is quite brittle when faced with a
new CPU generation with slightly different latency numbers.

Terje
--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"