From: Chris Barts on 14 Jan 2007 00:31

Tim Bradshaw <tfb+google(a)tfeb.org> wrote on Saturday 13 January 2007 08:12 in comp.lang.lisp <1168701139.573700.122790(a)a75g2000cwd.googlegroups.com>:

> Chris Barts wrote:
>> "Like, wow, dude! Language is whatever I say it is! Crumb buttercake up
>> the windowpane with the black shoehorn butterhorse!"
>
> I'm afraid I can make neither head nor tail of your curious colonial
> speech.

You know, I never thought I could jerk your chain this effectively.

--
My address happens to be com (dot) gmail (at) usenet (plus) chbarts,
wardsback and translated. It's in my header if you need a spoiler.
From: mark.hoemmen on 14 Jan 2007 02:49

Tim Bradshaw wrote:
> I don't know about these, but yes, there are profilers of course - to
> be useful they typically need to be able to get access to various
> counters in the processor which let them know how many instructions are
> stalling, what's happening to the caches, etc. What I really meant was
> that a lot of the tools people use to decide where the time is going
> don't usefully tell you whether the system is waiting for memory all
> the time or whether it's actually really busy.

A number of profilers do (I imagine Intel's VTune does, for example) --
you can count cache misses and compare them with the number of loads
and stores to get an idea of whether your application is successfully
exploiting locality. If you use the PAPI library, you can get that
information without paying for VTune.

mfh
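[Editor's note: the locality effect Mark describes can be seen even without PAPI or VTune hardware counters. A minimal, portable Python sketch (NumPy assumed available; the array size is arbitrary) sums the same matrix twice -- once walking memory contiguously, once with a large stride. The answers agree, but the strided traversal typically suffers far more cache misses and runs slower.]

```python
import time

import numpy as np

n = 1000
# C-contiguous layout: the elements of each row are adjacent in memory.
a = np.arange(n * n, dtype=np.float64).reshape(n, n)

def traverse(rows_first):
    """Sum all elements, slice by slice, in the given order."""
    total = 0.0
    if rows_first:
        for i in range(n):
            total += a[i, :].sum()  # contiguous access: cache-friendly
        return total
    for j in range(n):
        total += a[:, j].sum()  # strided access: n * 8 bytes between elements
    return total

t0 = time.perf_counter()
row_total = traverse(True)
t_row = time.perf_counter() - t0

t0 = time.perf_counter()
col_total = traverse(False)
t_col = time.perf_counter() - t0

# Same answer either way; only the memory access pattern differs.
print(row_total, col_total)
```

[With PAPI one would instead read counters such as PAPI_L1_DCM and PAPI_LD_INS around the two loops and compare miss ratios directly; the timing difference here is just the visible symptom of the same effect.]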
From: mark.hoemmen on 14 Jan 2007 02:56

Tim Bradshaw wrote:
> Yes, of course, but the HPC world is an odd one. A similar argument
> (and this isn't sarcasm) says that for performance it can help a lot if
> you don't use a dynamic/high-level language, avoid non-array datatypes
> &c &c. It can, but for most application areas there are other
> considerations.

Of course :) I'm not suggesting that Joe/Jane Programmer be required to
insert software prefetches into their code ;P One should of course
first code for correctness; then, if (and ONLY IF) performance is
inadequate, profile to find the bottlenecks, and then apply trickier
optimizations to those as necessary.

I would argue that the HPC world has a lot in common with the game
world (a large number of floating-point computations; physics
calculations; some tolerance for inaccuracy in many cases) and the
embedded world (stricter resource restrictions and performance
requirements than usual).

> Yes, I agree with this. but HPC is, as I said, odd (though I'm
> interested in it). For most applications you want to have some idea of
> what the performance model of the machine is like (which is beyond the
> vast majority of programmers for a start), to write code which (if
> performance is an issue which very often it is not) should sit well
> with that model, and then to allow the machine (in the `compiler, HW
> and OS' sense) do most of the boring work of, for instance, making sure
> memory is local to threads &c &c.

That's right: that kind of stuff should be automated, and the
infrastructure to do so already exists.

> I can see that I've now got completely sidetracked. My original point
> in this thread was that multicore systems will end up on desktops for
> reasons which have rather little to do with performance, and now I'm
> talking about HPC :-) I will go and devour some minions.

Heh heh, HPC will rule the world!!! *brainwashes more minions*

mfh
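[Editor's note: the "correctness first, then profile" workflow Mark describes is directly supported by Python's standard-library profiler. A minimal sketch -- the function names and data are made up for illustration -- shows how the profile report points at the hotspot worth optimizing:]

```python
import cProfile
import io
import pstats

def slow_part(data):
    # Deliberately naive quadratic duplicate count: stands in for a hotspot.
    dupes = 0
    for i in range(len(data)):
        for j in range(i + 1, len(data)):
            if data[i] == data[j]:
                dupes += 1
    return dupes

def cheap_part(data):
    return sum(data)

def work(data):
    return slow_part(data), cheap_part(data)

data = list(range(300)) + [0, 1, 2]  # three duplicated values

# Profile only after the code is known to be correct.
profiler = cProfile.Profile()
profiler.enable()
result = work(data)
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()

# The report ranks functions by time spent, singling out slow_part as
# the bottleneck -- that is where trickier optimization effort belongs.
print(result)
print("slow_part" in report)
```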
From: Tim Bradshaw on 15 Jan 2007 01:24

mark.hoemmen(a)gmail.com wrote:
> I would argue that the HPC world has a lot to do with the game world (a
> large number of mathematical floating-point computations; physics
> calculations; some tolerance for inaccuracy in many cases) and the
> embedded world (more strict resource restrictions and performance
> requirements than usual).

Yes, I think that's definitely true - game programming *is* HPC
programming, albeit you tend to be targeting a platform whose end-user
cost is hundreds, not millions, of dollars.

I think one important issue for general-purpose processors (or
general-purpose computer systems, be they multi-core, multi-socket, or
multi-board) is that they need to be able to support naive programs,
and support them without too catastrophic a performance hit. "Naive
programs" here means something like "programs that assume a cc-SMP
system". That's not true for real HPC systems or for special games
hardware, be it graphics cards or consoles -- though obviously even
there you don't want to make the thing *too* hard to program for.

--tim
From: Ray Dillinger on 18 Jan 2007 13:59
Ken Tilton wrote:
> Spiros Bousbouras wrote:
>> If you want to analyse chess positions you can never
>> have too much speed and it has nothing to do with
>> rendering. I'm sure it's the same situation with go and
>> many other games.
>
> That's kind of a reductio ad absurdum argument. Deciding the edge in a
> middle-game chess position is a tad trickier than deciding if that
> cluster bomb went off close enough to this NPC to kill it.

No, it's not. If you give game designers the power, they're going to do
generalized min-maxing with generalized pruning to decide exactly which
armaments to deploy in the next ten seconds. When it comes down to a
choice between spending ammo for the machine gun and spending a
missile, they're going to test both scenarios against the opponent's
responses and the odds of opponents still being able to respond, with a
3-move lookahead generating some thousands of scenarios; simulate them
all; and pick the highest-scored one. Just like they do now with simple
games such as chess.

Bear
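[Editor's note: the "generalized min-maxing with generalized pruning" Ray describes is standard minimax search with alpha-beta pruning. A minimal sketch over a hand-built game tree -- the tree and its leaf scores are made up for illustration, with the two root branches standing in for "machine gun" vs "missile":]

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over a nested-list game tree.

    Leaves are numeric scores from the maximizing player's point of
    view; internal nodes are lists of child subtrees.
    """
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # opponent would never allow this line: prune it
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break  # we already have a better option elsewhere: prune
    return value

# A 3-ply tree: our choice, the opponent's reply, then our follow-up.
tree = [
    [[3, 5], [6, 9]],   # option A: opponent steers toward the min
    [[1, 2], [0, -1]],  # option B
]
best = alphabeta(tree, float("-inf"), float("inf"), True)
print(best)
```

[Real game AI layers a physics simulation and an evaluation function under each leaf, but the lookahead-and-prune skeleton is exactly this, just as in chess engines.]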