From: Friedrich on 2 Jul 2010 19:43

On Fri, 02 Jul 2010 21:22:21 +0200, Tobias Burnus <burnus(a)net-b.de> wrote:

>Well, the following is very rough but should be okay for an overview:
>
>OpenMP has a shared-memory model, i.e. all variables can be accessed
>unless they are explicitly marked as private. Additionally, the program
>runs serially (single thread) until a parallel section is reached.
>The consequence is that OpenMP is well suited for shared-memory systems,
>i.e. multi-processor/multi-core systems. Additionally, you can slowly
>start parallelizing by beginning with the hot spots of the program.
>
>MPI (and also coarrays, a Fortran 2008 feature one will soon see in
>more compilers) have a distributed-memory model: all data is private to
>each process, and data has to be explicitly transferred to be
>available to other processes.
>The advantage is that this model also works well for distributed-memory
>systems (i.e. clusters of computers linked with a fast interconnect
>[Gigabit Ethernet, InfiniBand, etc.]). The disadvantage is that one
>essentially needs to parallelize the whole program before it works, as
>the processes cannot share memory.
>
>Thus, if you want to parallelize the program for a few cores only,
>OpenMP makes it much easier to do it step by step. If you plan to run it
>on big machines, MPI (and coarrays) are better suited. Of course, one
>can also run MPI on a single (multi-core) computer - and there was also
>a compiler which supported OpenMP on distributed memory - but that's
>roughly the difference.
>
>Note: One can also combine both techniques in the same program.
>
>Tobias

Tobias, thanks for answering; if I may, just a short introduction. I'm just
getting into the very basics and principles of parallelism, together with an
older colleague I'm working with - experimenting, so to say (we have no
practical experience with either OpenMP or MPI, so we're experimenting to
see what we can accomplish on simpler models with OpenMP for now). We took
the OpenMP route first because at one point it seemed simpler, and because
by chance we stumbled upon, in a local library, two books we were about to
order online (one by Chandra, "Parallel Programming in OpenMP"; for the
other I'm not sure about the authors, but it's popular - it's on OpenMP's
front page, blue cover - I don't have it with me, since my colleague is
currently using it).

Making any decision between the two models at this point seems like rushing
into the unknown to me, but my strategy was generally to start with the
OpenMP model (since all four machines we have at our disposal are relatively
strong - they were bought in a "package" with the machines for running
Fluent), and then, once we've introduced ourselves to it, and should the
need arise, maybe move on to MPI.

Do you think that is a sensible way to go? Or, to put it bluntly, do you
think learning OpenMP first will be useful for us in the future, should we
at some point decide to move to MPI? Will our knowledge and experience carry
over in the transition from one to the other? (I don't know how to put this
better, but I believe you understand my dilemma.)

Friedrich
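[A minimal sketch of the two models Tobias describes above, in Fortran. The
dot-product kernel, the array sizes, and all variable names are illustrative
assumptions, not anything taken from the thread.]

! Shared-memory model (OpenMP): serial until the parallel region;
! x, y, total are shared by default, the loop index i is private,
! and the reduction clause keeps total consistent across threads.
program openmp_sketch
  implicit none
  integer, parameter :: n = 4000
  real :: x(n), y(n), total
  integer :: i

  x = 1.0
  y = 2.0
  total = 0.0

  !$omp parallel do reduction(+:total)
  do i = 1, n
     total = total + x(i) * y(i)
  end do
  !$omp end parallel do

  print *, 'dot product =', total
end program openmp_sketch

! Distributed-memory model (coarrays, Fortran 2008): each image owns only
! its own slice of the data; another image's copy is reachable only by an
! explicit coindexed reference such as partial[img].
program coarray_sketch
  implicit none
  integer, parameter :: nloc = 1000     ! elements owned by each image
  real :: x(nloc), y(nloc)
  real :: partial[*]                    ! coarray: one copy per image
  real :: total
  integer :: img

  x = 1.0
  y = 2.0
  partial = sum(x * y)                  ! purely local work

  sync all                              ! make every image's partial available

  if (this_image() == 1) then
     total = 0.0
     do img = 1, num_images()
        total = total + partial[img]    ! explicit transfer from image img
     end do
     print *, 'dot product =', total
  end if
end program coarray_sketch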
From: Friedrich on 2 Jul 2010 19:48

On Fri, 2 Jul 2010 13:28:56 -0700, nospam(a)see.signature (Richard Maine)
wrote:

>glen herrmannsfeldt <gah(a)ugcs.caltech.edu> wrote:
>
>> A language specifically designed around the problems of parallel
>> computation,

For some reason, I cannot see glen herrmannsfeldt's post (apart from the
quoted sentence above).

Apart from that - and this goes without saying, but I'll say it nevertheless
:-) - thanks to all the others who replied. My not replying does not mean I
did not read your posts; I found their information valuable in my learning.

Friedrich
From: glen herrmannsfeldt on 2 Jul 2010 21:36

As it seems that some might not have received this, I am posting it again.
Otherwise, it is the same.

> Richard Maine <nospam(a)see.signature> wrote: (after I wrote)

>> A language specifically designed around the problems of parallel
>> computation,

> Designing a language around a particular architecture feature seems to
> me like a mistake in most cases. I suppose there can be appropriate
> targets for such a thing, but the concept all but defines itself as
> aiming at a niche rather than being general purpose. Niche markets do
> exist.

Well, I suppose programming for parallel architectures has been a niche
part of programming for many years. It may not be able to stay that way,
though.

> I didn't bother to go research the details. I'm just reacting to the
> above phrase as it stands. If that phrase is not an accurate
> representation of something, then my comment would not apply.

That was just my thought when writing it, and may or may not apply. It does
seem that multicore processors and processor arrays are getting more and
more popular, and that the need to program them is increasing.

> In particular, there is a huge difference between designing a language
> around something and designing a language that accommodates something.
> That kind of difference came up in adding some of the object-oriented
> features in f2003. There was at least one person pushing for what I
> would categorize as redesigning the Fortran language around object-
> oriented programming. (The proposer might disagree with the
> categorization, but that's the way it seemed to me.)

In that case, I see the distinction. The name C/C++ is often used with the
assumption that the combination is a single language (most likely by people
who don't use either one). It might be interesting to have Fortran++, or
whatever one might want to call such a new language.

Even so, one can write object-oriented programs even in Fortran 66. (There
is a graphics package I used in the Fortran 66 days that was object
oriented, though I hadn't known that at the time.) You can also write
(mostly) non-object-oriented code in C++ or Java. (You likely need some
object-oriented I/O for Java, but the rest of a program could be pretty
much non-OO.)

> Instead, some
> object oriented features were added into the Fortran language in a way
> that fit with the language. To me, the difference was fundamental. The
> f2003 language is not designed around object orientation. You can
> program in f2003 without caring a thing about object orientation unless
> you happen to want to.

And, it seems, some parallel-programming features were also added along the
way. As with OO, adding some features does not obviate the demand for
languages designed around the concept. It seems that both languages designed
around, and languages accommodating, parallel programming could be useful.

In the case of shared-memory machines, one can almost program sequentially,
with some operations completing faster than they otherwise would. It isn't
so easy in the case of message passing, where minimizing the amount of data
to be transferred is very important. (Insert reference to Amdahl's law
here.)

In any case, I just wanted to bring it into the discussion. People can
decide for their own problems what the best solution is.

-- glen
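[For readers who want the reference glen gestures at: Amdahl's law says that
if a fraction p of the work can be parallelized over N workers, the speedup
is bounded by S(N) = 1 / ((1 - p) + p/N). The numbers here are only an
illustration: with p = 0.9 and N = 8 that gives S ~ 4.7, and even with
unlimited workers the limit is 1/(1 - p) = 10. Communication that cannot be
overlapped with computation behaves roughly like an addition to the serial
fraction, which is one way to see why minimizing transferred data matters so
much for message passing.]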
From: Ron Shepard on 2 Jul 2010 21:52

In article <i0lg4p$afj$1(a)speranza.aioe.org>, glen herrmannsfeldt
<gah(a)ugcs.caltech.edu> wrote:

> Now, if one wanted to design an implicitly parallel language
> that had the look of Fortran, that might not be a bad thing.
> (It seems to me that ZPL has the look of Pascal.)

These might qualify:

http://softlib.rice.edu/fortran_M.html
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.56.5681

$.02 -Ron Shepard
From: Victor Eijkhout on 13 Jul 2010 15:23
Colin Watters <boss(a)qomputing.com> wrote:

> OpenMP's single process means little or no communications overhead.

OpenMP has thread overhead. Not to mention core-affinity problems. The
communications in MPI can (often) be hidden behind computations.

Victor.
--
Victor Eijkhout -- eijkhout at tacc utexas edu
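[A sketch of the overlap Victor mentions, using nonblocking MPI from
Fortran. The buffer sizes, message tag, periodic neighbour ranks, and the
"interior" work are made-up assumptions for illustration only.]

program overlap_sketch
  use mpi
  implicit none
  integer, parameter :: n = 100000
  real :: halo_out(100), halo_in(100), interior(n)
  integer :: ierr, rank, nprocs, left, right, reqs(2)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
  left  = mod(rank - 1 + nprocs, nprocs)   ! periodic neighbours, just for the example
  right = mod(rank + 1, nprocs)

  halo_out = real(rank)
  interior = 1.0

  ! Post the communication first ...
  call MPI_Irecv(halo_in,  100, MPI_REAL, left,  0, MPI_COMM_WORLD, reqs(1), ierr)
  call MPI_Isend(halo_out, 100, MPI_REAL, right, 0, MPI_COMM_WORLD, reqs(2), ierr)

  ! ... then do work that does not need the halo while the messages are in flight.
  interior = interior * 2.0

  ! Only wait when the halo data is actually needed.
  call MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE, ierr)

  call MPI_Finalize(ierr)
end program overlap_sketch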