From: Nick Maclaren on 27 Dec 2006 08:31

In article <Jo9kh.12353$1W1.2489(a)newsfe4-win.ntli.net>,
ChrisQuayle <nospam(a)devnul.co.uk> writes:

|> David Ball wrote:
|>
|> > I usually just lurk since I'm not a chip designer and haven't worked
|> > with debugging boards and writing firmware since the 8080/8085/Z-80
|> > days. I just wanted to point out that on the linux kernel mailing
|> > list, getting X to be responsive without messing up the rest of the
|> > system seemed to cause an incredible amount of problems in the
|> > scheduler. I think they ended up with a bunch of code to try to decide
|> > if a process was interactive and give it priority for short bursts of
|> > cpu time, then degrade it to non-interactive if it used too much cpu.
|> > IIRC, they spent months tuning it not to starve important processes
|> > and actually degrade the display or mess up things like playing mp3
|> > files. I don't follow the list as much as I used to so I'm not really
|> > sure if they ever found something they were reasonably satisfied with.

Whether or not they did, I can assure you that the users aren't. It isn't a soluble problem.

|> Perhaps the long term answer is to embed the X server and window manager
|> support into the graphics card and just talk to the card using the X
|> protocol, using DMA for speed. Download the selected widget set, look
|> and feel etc. to the card at login time.
|>
|> It's certainly doable with modern embedded technology, and is a halfway
|> house to full redesign, while retaining compatibility with present
|> systems...

Ugh. Yes, it could be done. Would it help? Doubtful. The time when data transfer speed was the problem is long since gone, and there is precious little more 'wire delay' between two processes on the same modern CPU than between two on the same graphics chip. The problems lie elsewhere. It wouldn't help at all with the RAS aspects, and it would embed a known, broken design in hardware.

For all of its faults, TCP/IP is orders of magnitude (decimal orders, too) cleaner and less buggy than X - and that applies whether you mean the design/protocols or the implementations.

Regards,
Nick Maclaren.
From: insert name on 28 Dec 2006 08:27

> Perhaps the long term answer is to embed the X server and window manager
> support into the graphics card and just talk to the card using the X
> protocol, using dma for speed. Download the selected widget set, look
> and feel etc to the card at login time.

Sutherland's wheel of reincarnation...

If you can't access the original paper, here is a modern paraphrase:
http://www.cap-lore.com/Hardware/Wheel.html
From: ChrisQuayle on 29 Dec 2006 10:01

insert name wrote:
>
> Sutherland's wheel of reincarnation...
>
> If you can't access the original paper, here is a modern paraphrase:
> http://www.cap-lore.com/Hardware/Wheel.html

Thanks - it turns out to be quite a rich resource for older graphics ideas. I have been trying to find info on early windowing system internals and data structures, to get a bit more background and pointers for an embedded GUI project. The basic GUI is written, but it has sparked off a more general interest.

There would be no problem embedding X onto the graphics card, and it would allow all kinds of on-board optimisation without affecting the call interface to the application. If Nick's earlier comments were primarily about the effect of X on overall system performance, that could be addressed by giving X its own CPU. An X terminal on a card?...

Chris
From: Joe Seigh on 14 Jan 2007 18:04

Anne & Lynn Wheeler wrote:
> Joe Seigh <jseigh_01(a)xemaps.com> writes:
>
>> The Java gui? It's threaded. There's an event handling thread. Most
>> programmers can't deal with that. STM (software transactional memory)
>> is a huge area of research precisely because of that, since it eliminates
>> the deadlock issue that most programmers don't know how to avoid.
>
> recent news URL on (hardware) transactional memory
>
> Getting Serious About Transactional Memory
> http://www.hpcwire.com/hpc/1196095.html
>
> from above:
>
> To that end, Intel researchers are looking to transactional memory as
> one of the key technologies that will enable developers to write the
> terascale killer apps of the next decade. The attraction of TM is that
> it appears to solve the most annoying problems of global locks:
> application robustness and scalability. These attributes are
> especially important for the type of large-scale concurrency required
> by terascale applications.
>
> ... snip ...

From the article:

"Like locks, transactional memory is a construct for concurrency control that enables access to data shared by multiple threads. But unlike locks it is an optimistic model. It assumes that in most cases only a single thread will be contending for a given data item."

Conventional locks scale really well under low contention, i.e. only a single thread attempting to get the lock. I don't understand, unless they're using a different definition of scalability here.

--
Joe Seigh

When you get lemons, you make lemonade.
When you get hardware, you make software.
From: David Hopwood on 14 Jan 2007 21:54
Joe Seigh wrote:
> From the article:
>
> "Like locks, transactional memory is a construct for concurrency control
> that enables access to data shared by multiple threads. But unlike locks
> it is an optimistic model. It assumes that in most cases only a single
> thread will be contending for a given data item."
>
> Conventional locks scale really well under low contention, i.e. only
> a single thread attempting to get the lock. I don't understand unless
> they're using a different definition of scalability here.

Besides, the fact that it's an "optimistic" model is a weakness. The worst-case performance of optimistic transactional memory is truly awful. (Note that non-optimistic transactional models are also perfectly possible.)

--
David Hopwood <david.nospam.hopwood(a)blueyonder.co.uk>