From: Stephen Fuld on 1 Jan 2010 19:53

Robert Myers wrote:
> I doubt if operating systems will ever be written in an elegant,
> transparent, programmer-friendly language.

What is your definition of such a language? I know of at least three
commercial OSs that are written in languages that are not C descendants.
One in an extended Algol, one substantially in an Algol descendant, and
one in a PL/1 variant.

--
- Stephen Fuld
(e-mail address disguised to prevent spam)
From: Robert Myers on 1 Jan 2010 20:21

On Jan 1, 7:53 pm, Stephen Fuld <SF...(a)alumni.cmu.edu.invalid> wrote:
> Robert Myers wrote:
> > I doubt if operating systems will ever be written in an elegant,
> > transparent, programmer-friendly language.
>
> What is your definition of such a language? I know of at least three
> commercial OSs that are written in languages that are not C descendants.
> One in an extended Algol, one substantially in an Algol descendant,
> and one in a PL/1 variant.

I should have been clearer. I never used Algol, but I used PL/1 to do
computations for my PhD thesis, as well as for classes that required a
computer. I regarded PL/1 as elegant, transparent, and programmer-friendly
and missed it tremendously when I had to adapt to Fortran.

I was aware that at least one operating system was written in PL/1. In
fact, I think that later versions of the PL/1 compiler were written in
PL/1. I knew that OSes were written in higher-level languages before C,
and that not all such languages are nearly as ugly as C.

It would have been more correct for me to say that it was unlikely that
systems programmers would ever give up the close-to-the-metal advantages
of C for something that didn't so naturally mimic the style, flexibility,
and degree of programmer control that assembly language offers.

Robert.
From: Mike on 1 Jan 2010 21:41

"Bill Todd" <billtodd(a)metrocast.net> wrote in message
news:QbWdnaCf-9eaBKPWnZ2dnUVZ_omdnZ2d(a)metrocastcablevision.com...
| Del Cecchi wrote:
| > "Mike" <mike(a)mike.net> wrote in message
| > news:v_qdnUeuT-97zKPWnZ2dnUVZ_hadnZ2d(a)earthlink.com...
|
| ...
|
| >> The IBM System i (not single threaded) places the file system in a
| >> single virtual address space in which all objects have a single
| >> constant virtual location which is never reassigned. That may provide
| >> a lead to a practical approach.
| >>
| > Back in the day it used to be said that system/i (os/400, s/38) didn't
| > really have a file system since it had a very large virtual address
| > space in which objects were located.
|
| Well, sort of - at least in the sense that it didn't have a file system
| that was exposed to applications.
|
| But it must have had something resembling a file system internally if it
| allowed objects to grow, because despite the fact that it had (for the
| time) an effectively infinite virtual address space into which to map
| them it had decidedly finite physical storage space on disk in which to
| hold them, hence needed a mechanism to map an arbitrarily large
| expandible object onto multiple separate areas on disk while preserving
| its virtual contiguity (and likely also required a means to instantiate
| new objects too large to fit into any existing physically-contiguous
| area of free space).
|
| The normal way a file system (just like almost everyone else) supports
| movable/expandible objects with unvarying addresses is via indirection,
| substituting the unvarying address of a small pointer for that of an
| awkwardly large and/or variable-size object. That unvarying address
| need not be physical, of course - e.g., the i-series may have hashed the
| constant virtual address to a chain address and then walked the chain
| entries until it found one stamped with the desired target virtual
| address.
|
| But it's not clear how applicable this kind of solution would be to the
| broader subject under discussion here.
|
| - bill

The problem, as I understand it, is that it is hard to build efficient
hardware that allows multiple CPUs to safely access a single data
structure, string, or array. The reason is that languages like C use
pointers to reference individual bytes, while modern CPUs cache words or
multiple-word cache lines that are invisible to the higher-level
languages. Andy Glew said part of the problem was relocation of objects
to different addresses, which the Sys i solves. The other part of the
problem is that compilers, and probably languages, need to provide the OS
additional information so that multiple threads accessing a common cache
line will not be executed on separate CPUs. The OS is already responsible
for thread / CPU affinity, so this does not seem insurmountable.

Mike
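A minimal C sketch of the indirection Bill describes may make it concrete:
the object's constant virtual address is hashed to a chain, and the chain
entries are walked until one stamped with that address is found; only that
entry's disk location is rewritten when the object is moved or grown. The
names here (map_entry, resolve, NBUCKETS) are hypothetical illustrations,
not the actual S/38 or OS/400 structures.

    /*
     * Hypothetical sketch of mapping a permanent ("constant") virtual
     * address to an object's current location on disk via a hash-chain
     * table. The virtual address handed out to callers never changes;
     * only disk_block does.
     */
    #include <stdint.h>

    #define NBUCKETS 1024u                 /* hypothetical table size */

    struct map_entry {
        uint64_t vaddr;                    /* permanent virtual address */
        uint64_t disk_block;               /* current physical location */
        struct map_entry *next;            /* chain of colliding entries */
    };

    static struct map_entry *buckets[NBUCKETS];

    static unsigned hash_vaddr(uint64_t vaddr)
    {
        /* hash the page-aligned part of the address to a bucket */
        return (unsigned)((vaddr >> 12) % NBUCKETS);
    }

    /* Resolve a permanent virtual address to its current disk block.
     * Returns 0 on success, -1 if the address is not mapped. */
    int resolve(uint64_t vaddr, uint64_t *disk_block)
    {
        for (struct map_entry *e = buckets[hash_vaddr(vaddr)]; e; e = e->next) {
            if (e->vaddr == vaddr) {
                *disk_block = e->disk_block;
                return 0;
            }
        }
        return -1;
    }

Relocating an object then amounts to rewriting disk_block in the matching
entry; callers keep the same virtual address, which is the property Mike
points to as already solved on the System i.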
From: Mayan Moudgill on 2 Jan 2010 04:32

Robert Myers wrote:
> The scientists I know generally want to speed things up because they
> are in a hurry.
>
> The question is: is it better to do a bit less physics and/or let the
> machine run longer, or is it better to use up expensive scientist/
> scientific programmer time and, at the same time, make the code opaque
> and not easily transportable?
>
> If we can't do "unbounded" ("scalable") parallelism, then there is an
> end of the road as far as some kinds of science are concerned, and we
> may already be close to it or even there in terms of massive
> parallelism (geophysical fluid dynamics would be an example). The
> notion that current solutions "scale" is pure bureaucratic fraud.
> Manufacturers who want to keep selling more of the same (do you know
> any?) cooperate in this fraud, since the important thing is what the
> customer thinks.

If your problems can be solved by simply increasing the number of
machines, why not go with Beowulf clusters or @Home-style parallelism?
They are cheap and easy to put together.

If your problem can't be solved with those approaches, then I suspect
that going to a different language (or approach, or whatever) is not
going to be a viable alternative.
From: Rob Warnock on 2 Jan 2010 04:44

Robert Myers <rbmyersusa(a)gmail.com> wrote:
+---------------
| I doubt if operating systems will ever be written in an elegant,
| transparent, programmer-friendly language.
+---------------

So the various Lisp Machines never existed? ;-} ;-}

Oh, wait: http://en.wikipedia.org/wiki/Lisp_Machine

    ...
    Several companies were building and selling Lisp Machines in the
    1980s: Symbolics (3600, 3640, XL1200, MacIvory and other models),
    Lisp Machines Incorporated (LMI Lambda), Texas Instruments (Explorer
    and MicroExplorer) and Xerox (InterLisp-D workstations). The
    operating systems were written in Lisp Machine Lisp, InterLisp
    (Xerox) and later partly in Common Lisp.
    ...
    Symbolics continued to develop the 3600 family and its operating
    system, Genera, and produced the Ivory, a VLSI implementation of the
    Symbolics architecture. Starting in 1987, several machines based on
    the Ivory processor were developed: ...
    ...
    The MIT-derived Lisp machines ran a Lisp dialect called ZetaLisp,
    descended from MIT's Maclisp. The operating systems were written from
    the ground up in Lisp, often using object-oriented extensions. Later
    these Lisp machines also supported various versions of Common Lisp
    (with Flavors, New Flavors and CLOS).
    ...

And there are still some who persist in working in this area even today
[well, fairly recently], e.g.:

    http://common-lisp.net/project/movitz/
        Movitz: a Common Lisp x86 development platform

    http://download.plt-scheme.org/mzscheme/mz-103p1-bin-i386-kernel-tgz.html
        Package: MzScheme
        Version: 103p1
        Platform: x86 Standalone Kernel

[Though the latter uses OSKit <http://www.cs.utah.edu/flux/oskit/> for
some of the lowest-level stuff.]

-Rob

-----
Rob Warnock <rpw3(a)rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607