From: Quadibloc on 30 Apr 2010 05:49

On Apr 29, 11:51 pm, Robert Myers <rbmyers...(a)gmail.com> wrote:
> One possible (likely?) outcome is that we will ditch the Navier-Stokes
> equations (at least for computation) in favor of approaches like
> Lattice Boltzmann methods:
>
> http://en.wikipedia.org/wiki/Lattice_Boltzmann_methods

Ouch. I would have thought that a lattice Navier-Stokes method,
following the "relaxation" approach, would have been more efficient
as a parallel method than following individual particles around.

Yes, one can parallelize some problems by choosing algorithms with
higher total resource consumption. Avoiding *such* methods _is_ due
to inertia, as people tend to expect their programs might get run on
serial machines on occasion.

John Savard
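[Editor's sketch: a minimal illustration, in C with OpenMP, of why lattice
"relaxation" updates parallelise well. In a Jacobi-style sweep each new site
value depends only on the old values of its neighbours, so every interior site
can be updated concurrently. This is not a real Navier-Stokes or Lattice
Boltzmann solver; the grid size, sweep count and boundary handling are
arbitrary assumptions made for the example.]

/* Illustrative sketch only: Jacobi-style relaxation sweeps over a
 * lattice.  Each new value depends only on the OLD values of its four
 * neighbours, so all interior sites can be updated concurrently.
 * Grid size and sweep count are arbitrary; this is not a real
 * Navier-Stokes or Boltzmann solver. */
#include <stdio.h>
#include <stdlib.h>

#define N      512        /* arbitrary lattice edge */
#define SWEEPS 100        /* arbitrary number of sweeps */

int main(void)
{
    double *cur = calloc((size_t)N * N, sizeof *cur);
    double *nxt = calloc((size_t)N * N, sizeof *nxt);
    if (cur == NULL || nxt == NULL)
        return 1;

    cur[(N / 2) * N + N / 2] = 1.0;          /* one "hot" site */

    for (int s = 0; s < SWEEPS; s++) {
        /* every (i,j) update here is independent of every other */
        #pragma omp parallel for
        for (int i = 1; i < N - 1; i++)
            for (int j = 1; j < N - 1; j++)
                nxt[i * N + j] = 0.25 * (cur[(i - 1) * N + j]
                                       + cur[(i + 1) * N + j]
                                       + cur[i * N + j - 1]
                                       + cur[i * N + j + 1]);
        double *tmp = cur; cur = nxt; nxt = tmp;   /* swap buffers */
    }

    printf("centre after %d sweeps: %g\n", SWEEPS, cur[(N / 2) * N + N / 2]);
    free(cur);
    free(nxt);
    return 0;
}

Build with "cc -O2 -fopenmp relax.c"; without OpenMP the pragma is simply
ignored and the same code runs serially.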
From: Casper H.S. Dik on 30 Apr 2010 05:55

nmm1(a)cam.ac.uk writes:

>That being said, MOST of the problem IS only that people are very
>reluctant to change.  We could parallelise ten or a hundred times
>as many tasks as we do before we hit the really intractable cases.

Reluctant?  It's in our genes; we can only do one task at the same
time and whenever we subdivide a task, we'll do so serialized.
That's why we use the languages and the algorithms we use.

Casper
From: nmm1 on 30 Apr 2010 06:34

In article <4bdaa920$0$22940$e4fe514c(a)news.xs4all.nl>,
Casper H.S. Dik <Casper.Dik(a)Sun.COM> wrote:
>
>>That being said, MOST of the problem IS only that people are very
>>reluctant to change.  We could parallelise ten or a hundred times
>>as many tasks as we do before we hit the really intractable cases.
>
>Reluctant?  It's in our genes; we can only do one task at the same
>time and whenever we subdivide a task, we'll do so serialized.
>That's why we use the languages and the algorithms we use.

Oh, is it?  Maybe that's why I have never been able to understand
the bizarre 'thought processes' of the human race :-)

More seriously, it's two, not one, actually - and that's not the
real issue, anyway.  Your mistake is to assume that parallelism is
necessarily about doing several logically unrelated tasks at once.
That is only one form of it, and not the most useful one.  Many
mathematicians can 'think in parallel', which includes the ability
to think in terms of the transformation of invariants over a set
of data.

My point is that people are reluctant to move from the very serial
logic that they were taught at school - and I am including the top
level of academic scientists when I am using the word 'people' in
that respect.  We need a paradigm shift, in mathematics and science
teaching as much as computing.

Yes, I know that I am a long-haired and wild-eyed radical ....


Regards,
Nick Maclaren.
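[Editor's sketch: a toy illustration of the "whole-set" way of thinking
described above, in C with OpenMP; the data and the particular transformation
are arbitrary. Instead of "visit each element in order", the computation is
"apply one transformation to every element and combine the results", and
because the combining operation is associative the reduction parallelises
directly.]

/* Toy example only: a whole-set transformation plus an associative
 * reduction.  The per-element work is independent, so OpenMP can
 * split the loop across threads and combine the partial sums. */
#include <stdio.h>

#define N 1000000

static double x[N];        /* static so the array is not on the stack */

int main(void)
{
    for (int i = 0; i < N; i++)              /* arbitrary test data */
        x[i] = (double)i / N;

    double total = 0.0;
    #pragma omp parallel for reduction(+:total)
    for (int i = 0; i < N; i++)
        total += 2.0 * x[i];                 /* same map applied to every element */

    printf("total = %g\n", total);
    return 0;
}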
From: nedbrek on 30 Apr 2010 07:39

Hello all,

<nmm1(a)cam.ac.uk> wrote in message
news:hre2p7$3nf$1(a)smaug.linux.pwf.cam.ac.uk...
> That being said, MOST of the problem IS only that people are very
> reluctant to change.  We could parallelise ten or a hundred times
> as many tasks as we do before we hit the really intractable cases.

I'm curious what sort of problems these are?  My day-to-day tasks are:
1) Compiling (parallel)
2) Linking (serial)
3) Running a Tcl interpreter (serial)
4) Simulating microarchitectures (serial, but I might be able to run
multiple simulations at once, given enough RAM).

I'm particularly interested in parallel linking.

Thanks,
Ned
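[Editor's sketch, on point 4 above: a minimal POSIX example of launching
several independent simulator runs at once and waiting for them all. The
command name "./sim" and the config-file names are hypothetical placeholders,
not anyone's actual setup.]

/* Minimal POSIX sketch: fork one child per independent simulation,
 * then wait for all of them.  "./sim" and the config names are
 * hypothetical placeholders. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    const char *configs[] = { "run0.cfg", "run1.cfg", "run2.cfg", "run3.cfg" };
    int n = sizeof configs / sizeof configs[0];

    for (int i = 0; i < n; i++) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {                            /* child: one simulation */
            execl("./sim", "sim", configs[i], (char *)NULL);
            perror("execl");                       /* only reached on failure */
            _exit(127);
        }
    }

    while (wait(NULL) > 0)                         /* parent: wait for all runs */
        ;
    return 0;
}

The runs proceed concurrently, bounded only by cores and (as Ned notes) RAM;
a job limit could be added by waiting whenever some maximum is in flight.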
From: nmm1 on 30 Apr 2010 07:48
In article <hrec1m$jse$1(a)news.eternal-september.org>,
nedbrek <nedbrek(a)yahoo.com> wrote:
>
>> That being said, MOST of the problem IS only that people are very
>> reluctant to change.  We could parallelise ten or a hundred times
>> as many tasks as we do before we hit the really intractable cases.
>
>I'm curious what sort of problems these are?

Anything where the underlying problem requires a complete solution
to one step before proceeding to the next, and the solution of a
step is a provably intractable problem (except by executing the
logic).  The extreme answer is sequentially analysing data as it
comes in, in real time.

>My day-to-day tasks are:
>1) Compiling (parallel)
>2) Linking (serial)
>3) Running a Tcl interpreter (serial)
>4) Simulating microarchitectures (serial, but I might be able to run
>multiple simulations at once, given enough RAM).
>
>I'm particularly interested in parallel linking.

Linking is fairly simply parallelisable, in the same way that most
such transformations are - i.e. more in theory than practice.  The
only problem is when you have to do a large amount of the work of
one part to work out what other tasks that part implies.


Regards,
Nick Maclaren.
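[Editor's sketch: a schematic picture, in C with OpenMP, of the structure just
described; this is nowhere near a real linker, and scan_symbols() and the file
names are hypothetical. The "in theory" part is the independent per-file scan;
the serial residue is the loop that keeps resolving whatever new work the
previous resolutions exposed.]

/* Schematic only -- not a real linker.  Phase 1 (scanning each object
 * file for the symbols it defines and needs) is independent per file
 * and parallelises trivially.  Phase 2 (deciding which extra pieces
 * to pull in) depends on what it has already pulled in, so it
 * iterates serially until nothing changes. */
#include <stdio.h>

#define NOBJS 4

static int scan_symbols(const char *path)   /* stand-in for reading a symbol table */
{
    printf("scanning %s\n", path);
    return 1;                                /* pretend: undefined symbols found */
}

int main(void)
{
    const char *objs[NOBJS] = { "a.o", "b.o", "c.o", "d.o" };
    int undefined[NOBJS];

    /* Phase 1: embarrassingly parallel per-file work */
    #pragma omp parallel for
    for (int i = 0; i < NOBJS; i++)
        undefined[i] = scan_symbols(objs[i]);

    /* Phase 2: essentially serial -- resolving one symbol may expose
     * more work, so repeat until the state stops changing */
    int changed = 1;
    while (changed) {
        changed = 0;
        for (int i = 0; i < NOBJS; i++)
            if (undefined[i] > 0) {
                undefined[i]--;              /* pretend we resolved one */
                changed = 1;
            }
    }
    printf("link complete (schematically)\n");
    return 0;
}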