From: Dmitry A. Kazakov on 25 Mar 2010 04:24

On Wed, 24 Mar 2010 20:04:18 +0000 (UTC), Warren wrote:

> Georg Bauhaus expounded in news:4baa5987$0$6762$9b4e6d93
> @newsspool3.arcor-online.net:
>
>> Warren schrieb:
>>> Georg Bauhaus expounded in news:4baa27f2$0$6770$9b4e6d93
>>> @newsspool3.arcor-online.net:
>>>
>>>> Dmitry A. Kazakov schrieb:
>>>>> how the proposed algorithms map onto the
>>>>> Ada tasking model, especially taking into account that Ada tasking
>>>>> primitives are higher level, than ones known in other languages.
>>
>>>> As a side note: it seems anything but easy to explain
>>>> the idea of a concurrent language, not a library, and
>>>> not CAS things either, as the means to support the programmer
>>>> who wishes to express concurrency.
>>>> Concurrency is not seen as one of the modes of expression
>>>> in language X. Rather, concurrency is seen as an effect
>>>> of interweaving concurrency primitives and some algorithm.
>>>>
>>>> What can one do about this?
>>>
>>> I thought the Cilk project was rather interesting in
>>> their attempt to make C (and C++) more parallel
>>> to take advantage of multi-core cpus. But the language
>>> still requires that the programmer program the parallel
>>> aspects of the code with some simple language enhancements.
>>>
>>> As cores eventually move to 128+-way cores, this needs
>>> to change to take full advantage of shortened elapsed
>>> times, obviously. I think this might require a radical
>>> new high-level language to do it.
>>
>> Or efficient multicore Ada will have to go radically back to
>> the roots ;-)
>
> I do believe that an Ada compiler probably has enough
> internal info to manage something along this line. Some
> work would also have to be done to deal with explicitly
> coded tasking.
>
>> How did they achieve efficient execution on
>> massively parallel processors? HPF? Occam? What do Sisal
>> implementations do?
>
> I don't know about them, but if any of them are "interpreted",
> then there would be execution time semantics possible.

Occam was compiled, but its concurrency model was extremely low-level
(communication channels) and heavyweight compared to Ada's.

I believe that Ada's tasks could really shine on massively parallel
processors, given compiler support, especially because Ada's parameter
passing model is flexible enough to support both memory sharing and
marshalling. There is a problem, though, with tagged types, which are
by-reference; this would have to be fixed (e.g. by providing "tagged"
types without tags, and thus copyable).

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
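To make the parameter-passing point concrete, here is a minimal Ada
sketch (all names are illustrative, not taken from any of the posts):
an untagged record parameter that a compiler is free to pass by copy,
next to a tagged parameter that the language requires to be passed by
reference.

   with Ada.Text_IO;

   procedure Passing_Demo is

      type Sample is record        --  untagged: the compiler may pass this by copy
         X, Y : Float;
      end record;

      type Widget is tagged record --  tagged: a by-reference type in Ada
         Id : Natural := 0;
      end record;

      W : Widget;  --  updated in place through the entry call below

      task Worker is
         entry Process (Data : in Sample);      --  by-copy passing is permitted
         entry Touch   (Item : in out Widget);  --  must be passed by reference
         entry Stop;
      end Worker;

      task body Worker is
      begin
         loop
            select
               accept Process (Data : in Sample) do
                  Ada.Text_IO.Put_Line ("X =" & Float'Image (Data.X));
               end Process;
            or
               accept Touch (Item : in out Widget) do
                  Item.Id := Item.Id + 1;
               end Touch;
            or
               accept Stop;
               exit;
            end select;
         end loop;
      end Worker;

   begin
      Worker.Process ((X => 1.0, Y => 2.0));
      Worker.Touch (W);
      Worker.Stop;
   end Passing_Demo;

On a shared-memory machine the choice of mechanism is invisible; on a
machine without shared memory only the by-copy (marshalled)
interpretation would work, which is exactly where the by-reference rule
for tagged types gets in the way.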
From: Robert A Duff on 25 Mar 2010 09:44

"Dmitry A. Kazakov" <mailbox(a)dmitry-kazakov.de> writes:

> I believe that Ada's tasks could really shine on massively parallel
> processors, given compiler support, especially because Ada's parameter
> passing model is flexible enough to support both memory sharing and
> marshalling. There is a problem, though, with tagged types, which are
> by-reference; this would have to be fixed (e.g. by providing "tagged"
> types without tags, and thus copyable).

Tagged types are passed by copy when doing a remote procedure call.
By "by copy" I mean marshalling/unmarshalling, which of course
involves copying the data. Or did you mean something else?

- Bob
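The "by copy" that marshalling implies is essentially what Ada's stream
attributes already do for tagged types. A minimal sketch, using an
ordinary file as a stand-in for the transport (the type and file name
are made up for illustration):

   with Ada.Streams.Stream_IO;  use Ada.Streams.Stream_IO;
   with Ada.Text_IO;

   procedure Marshal_Demo is

      type Point is tagged record
         X, Y : Float := 0.0;
      end record;

      F        : File_Type;
      Original : constant Point := (X => 1.0, Y => 2.0);
      Copy     : Point;
   begin
      --  Marshal: 'Class'Output writes the external tag followed by the
      --  components, i.e. a deep copy of the object onto the stream.
      Create (F, Out_File, "point.dat");
      Point'Class'Output (Stream (F), Original);
      Close (F);

      --  Unmarshal: 'Class'Input reads the tag back and reconstructs a
      --  fresh object of the right specific type, a copy, as in an RPC.
      Open (F, In_File, "point.dat");
      Copy := Point (Point'Class'Input (Stream (F)));
      Close (F);

      Ada.Text_IO.Put_Line ("Copy.X =" & Float'Image (Copy.X));
   end Marshal_Demo;

In a real remote call the same flattened representation would travel
between partitions instead of through a file.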
From: Dmitry A. Kazakov on 25 Mar 2010 10:09

On Thu, 25 Mar 2010 09:44:11 -0400, Robert A Duff wrote:

> "Dmitry A. Kazakov" <mailbox(a)dmitry-kazakov.de> writes:
>
>> I believe that Ada's tasks could really shine on massively parallel
>> processors, given compiler support, especially because Ada's parameter
>> passing model is flexible enough to support both memory sharing and
>> marshalling. There is a problem, though, with tagged types, which are
>> by-reference; this would have to be fixed (e.g. by providing "tagged"
>> types without tags, and thus copyable).
>
> Tagged types are passed by copy when doing a remote procedure call.
> By "by copy" I mean marshalling/unmarshalling, which of course
> involves copying the data. Or did you mean something else?

Yes, this is what I meant. An entry call to a task running on another
processor (with no shared memory) should marshal.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
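Standard Ada provides no such transparent marshalling of entry calls
today; by hand one could only approximate it, e.g. by sending a
flattened representation through the entry. A rough sketch under that
assumption (all names hypothetical):

   with Ada.Streams;  use Ada.Streams;
   with Ada.Text_IO;

   procedure Remote_Entry_Sketch is

      --  On a shared-memory machine an implementation can hand entry
      --  parameters over directly; on a node without shared memory a
      --  distribution-aware compiler would have to flatten the actual
      --  parameter into bytes and ship it to the node running the task.
      --  Here the "wire" is simply modelled as an entry parameter.

      Dummy_Payload : constant Stream_Element_Array (1 .. 4) := (others => 0);

      task Remote_Worker is
         entry Process (Wire : in Stream_Element_Array);
         entry Stop;
      end Remote_Worker;

      task body Remote_Worker is
      begin
         loop
            select
               accept Process (Wire : in Stream_Element_Array) do
                  --  A real implementation would unmarshal Wire back into
                  --  the original parameter here and then do the work.
                  Ada.Text_IO.Put_Line
                    ("received" & Integer'Image (Wire'Length) & " octets");
               end Process;
            or
               accept Stop;
               exit;
            end select;
         end loop;
      end Remote_Worker;

   begin
      Remote_Worker.Process (Dummy_Payload);
      Remote_Worker.Stop;
   end Remote_Entry_Sketch;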