From: Andy on 13 Jan 2010 12:22

On Jan 12, 2:19 pm, karthikbalaguru <karthikbalagur...(a)gmail.com> wrote:
> Maybe it would be better if it can do it in front of the user rather
> than behind the scenes, so that the user is aware of the kind of
> optimizations being done and can take care of them. It is also needed
> so that the developer will be able to take care of further
> developments on the same program/software, and also of the maintenance
> activities on that program/software.

As a hardware designer using an HDL (VHDL), I have also struggled with
this issue of writing "the best" code for a given purpose. I used to
fret over every gate and delay when I coded something, trying to make my
code lead directly to the optimal implementation (usually for
performance or size).

However, as I have gained more experience (and, I hope, knowledge too),
I have come to recognize that "the best" code is code that is easy to
read, write, understand (review), and maintain, while meeting
performance (speed, power, area) requirements. I start out with the most
straightforward implementation of the algorithm and see whether it meets
my requirements by the time the tools are finished optimizing it. If it
does meet them, I'm done. Only if/when it does not meet performance will
I start to sacrifice the read/write/understand/maintain goals.

In the same manner, if I can code a piece of SW such that it is easiest
to use (r/w/u/m), yet it is still sufficiently optimized by the tools to
meet performance requirements, that's what I should do. Coding something
so that it is harder to r/w/u/m for no or needless improvement in final
performance is a losing battle.

As an academic exercise, it may be helpful to understand how code can be
written well or poorly WRT un-optimized performance, especially if you
were writing an optimizer, but for everyday use, I don't see the
benefit.

If you are talking about much higher-level optimizations than can be
done by current tools, optimization technology will have to grow quite a
bit before it would be available in either "behind the scenes" or
"up-front" (source modification) versions.

Andy
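As a rough C analogue of Andy's point (a sketch only; the function names
and the micro-optimization are illustrative, not from the thread): the
readable version and the hand-"tuned" version usually compile to the
same machine code, so the readable one should win by default.

    #include <stdint.h>

    /* Readable version: say what you mean. */
    uint32_t scale_by_8(uint32_t x)
    {
        return x * 8u;
    }

    /* Hand-"optimized" version: harder to read, and a current
       optimizing compiler emits the same single shift for both. */
    uint32_t scale_by_8_shift(uint32_t x)
    {
        return x << 3;
    }

With gcc or clang at -O1 and above, both typically reduce to one shift
instruction, so the clearer form costs nothing until measurement proves
otherwise.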
From: karthikbalaguru on 14 Jan 2010 20:51

On Jan 15, 3:26 am, Hans-Bernhard Bröker <HBBroe...(a)t-online.de> wrote:
> David Brown wrote:
>> Nothing stops the compiler from doing this sort of thing in /theory/.
>> But /practice/ is a different matter.
>
> The same thing applies to the original question. If a compiler's
> full-blown optimizer, given practically infinite time to ponder the
> problem, can't get that analysis job done, then no other tool can, and
> certainly not while just looking over the programmers' shoulders as
> they type.

I think the tool should be trained to develop tiny pieces of logic by
giving it tiny bits of information. I think the tool should also be
trained to develop its own decision-making capability by giving it lots
of small tasks that require tiny decisions initially. That should make
it robust at finding alternate logic. Maybe it can also draw on reliable
sources on the internet or other servers to get more information in case
it runs out of steam. But I think it is better to avoid internet
dependencies, since sometimes we might need to use a PC that is not
connected to the internet!

Thx in advans,
Karthik Balaguru
From: Chris McDonald on 14 Jan 2010 20:57

Andrew Poelstra <apoelstra(a)localhost.localdomain> writes:
> .......
> What if all it did was count ops to get a big-O estimate, and check
> the function name for words like 'sort' or 'hash' or 'find' to suggest
> algorithms?

This has "Halting Problem" written all over it!

--
Chris.
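A small C illustration of Chris's objection (hypothetical, not taken
from the thread): the number of operations a function executes can
depend on its input in ways that no static "op counter" can bound in
general.

    #include <stdint.h>

    /* How many ops does this take for a given n?  The loop trip count
       depends on the data, and whether it even terminates for every n
       is an open problem (the Collatz conjecture) -- so a tool that
       counts ops to estimate big-O is up against undecidability. */
    unsigned collatz_steps(uint64_t n)
    {
        unsigned steps = 0;
        while (n > 1) {
            n = (n & 1) ? 3 * n + 1 : n / 2;
            steps++;
        }
        return steps;
    }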
From: Lie Ryan on 15 Jan 2010 01:57

On 01/15/10 12:51, karthikbalaguru wrote:
> On Jan 15, 3:26 am, Hans-Bernhard Bröker <HBBroe...(a)t-online.de>
> wrote:
>> David Brown wrote:
>>> Nothing stops the compiler from doing this sort of thing in /theory/.
>>> But /practice/ is a different matter.
>>
>> The same thing applies to the original question. If a compiler's
>> full-blown optimizer, given practically infinite time to ponder the
>> problem, can't get that analysis job done, then no other tool can, and
>> certainly not while just looking over the programmers' shoulders as
>> they type.
>
> I think the tool should be trained to develop tiny pieces of logic by
> giving it tiny bits of information. I think the tool should also be
> trained to develop its own decision-making capability by giving it
> lots of small tasks that require tiny decisions initially. That should
> make it robust at finding alternate logic. Maybe it can also draw on
> reliable sources on the internet or other servers to get more
> information in case it runs out of steam. But I think it is better to
> avoid internet dependencies, since sometimes we might need to use a PC
> that is not connected to the internet!

I found a similar tool: http://xkcd.com/416/
From: Nick Keighley on 15 Jan 2010 04:11
On 13 Jan, 13:23, Rainer Weikusat <rweiku...(a)mssgmbh.com> wrote:
> [***] A well implemented bubblesort will outperform any more advanced
> algorithm easily if the number of elements to sort is sufficiently
> small.

"It took a great deal of work to analyze the bubble sort [...] the
results are disappointing since they tell us that bubble sort isn't
really very good at all. Compared to straight insertion [...], bubble
sorting requires a more complicated program and takes more than twice
as long!"

"In short bubble sort seems to have nothing to recommend it, except a
catchy name and the fact that it leads to some interesting theoretical
problems."

Knuth, TAoCP
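For reference, minimal C versions of the two sorts being compared (a
sketch, not code from the thread):

    #include <stddef.h>

    /* Bubble sort: repeatedly swap adjacent out-of-order pairs. */
    void bubble_sort(int *a, size_t n)
    {
        for (size_t i = 0; i + 1 < n; i++)
            for (size_t j = 0; j + 1 < n - i; j++)
                if (a[j] > a[j + 1]) {
                    int t = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = t;
                }
    }

    /* Straight insertion: shift larger elements right until the slot
       for a[i] is found.  Simpler inner loop and fewer moves on
       average, which is why library quicksorts commonly fall back to
       it for small partitions rather than to bubble sort. */
    void insertion_sort(int *a, size_t n)
    {
        for (size_t i = 1; i < n; i++) {
            int key = a[i];
            size_t j = i;
            while (j > 0 && a[j - 1] > key) {
                a[j] = a[j - 1];
                j--;
            }
            a[j] = key;
        }
    }

Both are adequate for very small n, but Knuth's comparison is against
straight insertion, which is the usual choice when a small-n sort is
wanted.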