From: nmm1 on 8 Sep 2009 05:13

In article <h854if$fsf$1(a)aioe.org>, Noob <root(a)127.0.0.1> wrote:
>
>> Because you are running Macrosloth Bloatware - and even Linux seems
>> to be competing on that front :-(
>
>Are you reviling generic or custom Linux kernels? The ability to build
>custom kernels is an important advantage of Linux over Windows.
>(Indeed, of open source over closed source.)
>
>What makes software bloatware? Code size? Run-time? Another metric?

Any of those (and more), but relative to the functionality that is
actually wanted by the end users. It's non-trivial to measure, but
ranking systems of comparable functionality is usually feasible.

Regards,
Nick Maclaren.
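
To make "ranking systems of comparable functionality" concrete: one
crude but serviceable approach is to run each system under a small
harness and compare CPU time and peak resident set size. A minimal
POSIX sketch (the reporting format is my own invention; note that
ru_maxrss is in kilobytes on Linux but bytes on some other systems):

    /* bloatmeter.c - run a command, report its CPU time and peak RSS.
       A rough way to rank programs of comparable functionality.
       POSIX only; error handling mostly omitted. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>
    #include <sys/resource.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
            return 1;
        }
        pid_t pid = fork();
        if (pid == 0) {                  /* child: become the command */
            execvp(argv[1], &argv[1]);
            _exit(127);
        }
        int status;
        waitpid(pid, &status, 0);
        struct rusage ru;                /* usage of terminated children */
        getrusage(RUSAGE_CHILDREN, &ru);
        fprintf(stderr, "user %ld.%02lds  sys %ld.%02lds  maxrss %ld kB\n",
                (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec / 10000,
                (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec / 10000,
                ru.ru_maxrss);           /* kilobytes on Linux */
        return 0;
    }

Running two editors (say) over the same file and comparing the two
reports gives a defensible, if coarse, bloat ranking.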
From: Ken Hagan on 8 Sep 2009 05:18

On Mon, 07 Sep 2009 18:44:39 +0100, Kai Harrekilde-Petersen
<khp(a)harrekilde.dk> wrote:

> (who needs a multi-core-multi-GHz processor to show webpages, edit
> text and do a bit of email/blogging/twitter?).

More than you might imagine. I have an Atom-based netbook in my kitchen
that I mostly use for exactly that, and it feels palpably slower than
any other machine I work with. The bottleneck certainly isn't the few
Mbit/s of my broadband connection, because every other machine in the
house is faster. It *shouldn't* be the half gigabyte of RAM or the GHz+
CPU either, but it appears to be.

Perhaps that's because most of the web appears to have been written by
people who think it is clever to generate static web pages dynamically,
use scripted buttons instead of hyperlinks, and embed Flash instead of
standard graphics formats.

It used to be quite common for web sites to have a "low bandwidth"
option for those on slow dial-up links. Now that the bottleneck appears
to be the CPU, perhaps we ought to have "low bloat" options for those
of us not running our browser on a cryogenically cooled supercomputer.
From: Thomas Womack on 8 Sep 2009 06:19

In article <87r5uifczd.fsf(a)ami-cg.GraySage.com>,
Chris Gray <cg(a)graysage.com> wrote:
>Robert Myers <rbmyersusa(a)gmail.com> writes:
>
>> You've such a way with words, Nick. Browsers, which are probably the
>> OS of the future, are already multi-threaded or soon to be. No longer
>> does the browser freeze because of some JavaScript in an open tab.
>> Browsers that don't seize that advantage will fall by the wayside.
>> The same will happen all over software, and at increasing levels of
>> fineness of division of labor.
>
>I'm also in the camp of not believing this will go far. It all
>eventually has to be rendered to your screen. As far as I know, that
>currently involves serialization in things like the X interface, or
>the equivalent in Windows. Those interfaces are serialized so that you
>can predict what your display will look like after any given set of
>operations.

I believe current stuff (Windows since Vista, X with things like
Compiz, and certainly OS X) has a reasonable level of parallelism
across windows: each application talks to a layer which talks to the
GPU and has it render into a texture, and a separate job has the GPU
draw rectangles with those textures on them. I think there's only one
GPU in the picture, though - I don't know whether systems with more
than one graphics processor present can texture on GPU 1 using a
texture stored on GPU 2; I suspect not.

Tom
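
Structurally, that redirect-to-texture scheme looks something like the
sketch below. Every gpu_* name is a hypothetical stand-in (stubbed here
so the sketch is self-contained), not any real DWM/Compiz/Quartz API;
only the two-phase structure is the point:

    /* Hypothetical GPU-layer shims, standing in for whatever the real
       compositing layer exposes. */
    typedef int gpu_texture_t;
    typedef struct { int x, y, w, h; } rect_t;
    #define GPU_SCREEN 0
    static void gpu_set_render_target(gpu_texture_t t) { (void)t; }
    static void gpu_draw_textured_rect(gpu_texture_t t, rect_t r)
        { (void)t; (void)r; }
    static void gpu_present(void) { }

    typedef struct {
        gpu_texture_t tex;    /* offscreen buffer owned by the GPU */
        rect_t        bounds; /* where the window sits on screen   */
    } window_t;

    /* Phase 1: each application renders into its own texture. There
       is no shared framebuffer, so these can proceed in parallel. */
    static void application_redraw(window_t *w)
    {
        gpu_set_render_target(w->tex);
        /* ... application-specific drawing commands ... */
    }

    /* Phase 2: one compositor job - the only serialized step - draws
       each window's texture as a screen-aligned rectangle. */
    static void composite_frame(window_t *wins, int nwins)
    {
        gpu_set_render_target(GPU_SCREEN);
        for (int i = 0; i < nwins; i++)   /* back-to-front order */
            gpu_draw_textured_rect(wins[i].tex, wins[i].bounds);
        gpu_present();
    }

Only phase 2 needs the predictable ordering Chris describes; phase 1
is free to run concurrently across windows.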
From: nmm1 on 8 Sep 2009 06:39

In article <YUs*SVyQs(a)news.chiark.greenend.org.uk>,
Thomas Womack <twomack(a)chiark.greenend.org.uk> wrote:
>In article <87r5uifczd.fsf(a)ami-cg.GraySage.com>,
>Chris Gray <cg(a)graysage.com> wrote:
>>
>>I'm also in the camp of not believing this will go far. It all
>>eventually has to be rendered to your screen. As far as I know, that
>>currently involves serialization in things like the X interface, or
>>the equivalent in Windows. Those interfaces are serialized so that you
>>can predict what your display will look like after any given set of
>>operations.
>
>I believe current stuff (Windows since Vista, X with things like
>Compiz, and certainly OS X) has a reasonable level of parallelism
>across windows: each application talks to a layer which talks to the
>GPU and has it render into a texture, and a separate job has the GPU
>draw rectangles with those textures on them. I think there's only one
>GPU in the picture, though - I don't know whether systems with more
>than one graphics processor present can texture on GPU 1 using a
>texture stored on GPU 2; I suspect not.

That is correct. But experiment a bit, and you will discover that you
rarely have more than 2-3 active windows at any one time, and often
only one. Indeed, modern browser design (especially the default focus
handling) often makes it difficult to have multiple active windows.
Active means being updated or interacted with, of course, not just
displayed.

Also, where you can and do parallelise window use, the bottleneck is
often the network - i.e. it is rarely worthwhile displaying many
images/PDFs/videos/etc. simultaneously, as the aggregate bandwidth
isn't enough to make that fly.

Regards,
Nick Maclaren.
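
Some illustrative arithmetic on that last point (the bit rates are
assumptions typical of 2009-era web video and consumer broadband, not
measurements):

    4 simultaneous video streams x ~2 Mbit/s each  =  ~8 Mbit/s
    typical consumer downstream link               =  2-8 Mbit/s

That is, a handful of concurrent streams saturates the link long before
the CPU or GPU becomes the limit.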
From: Mayan Moudgill on 8 Sep 2009 07:30
vandys(a)vsta.org wrote:
> Kai Harrekilde-Petersen <khp(a)harrekilde.dk> wrote:
>
>>I wouldn't be surprised if we see a minor revival in processor design
>>centered around low power consumption and "acceptable" performance
>>(who needs a multi-core-multi-GHz processor to show webpages, edit
>>text and do a bit of email/blogging/twitter?).
>
> And don't forget FPGAs. The lines get fuzzy when anybody who can
> afford a couple grand can design in a space previously reserved for
> "architects" at a CPU vendor.

I don't disagree that FPGAs can be used to do architecture, but there
are some issues involved:

1. The ones with enough circuit-equivalents to really support
   "architecture" are expensive.

2. The constraints are different: your main concern is the efficient
   use of heterogeneous resources (multipliers, RAM, high-speed pins,
   etc.).

3. It usually (always?) makes more economic sense to treat them as ASIC
   equivalents and build the whole solution (possibly including a
   microcontroller equivalent), rather than first building a
   high-performance processor equivalent and then programming it.