From: Rohit M on
Hi All,
   I want a method to time specific portions of my Tcl code. I know I
can do so using the "time" command provided by Tcl. However, I want
this to be included in my checked-in code, i.e. production code, and
it should be hidden behind a flag, so that I can switch the flag on
for my daily build test environment but disable it for the released
application. Something I can think of is to do the following:
1. Put the code I want to time into a function.
2. Check an env. variable: if it is true, call my function wrapped in
a "time" command with the iteration count set to 1; if it is not set,
just call my function directly. (See the sketch below.)
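
Roughly like this; "timedCall" and "MYAPP_TIMING" are placeholder
names I made up for illustration:

proc timedCall {func args} {
    if {[info exists ::env(MYAPP_TIMING)] && $::env(MYAPP_TIMING)} {
        # One [time] iteration; log the microseconds to stderr.
        set usec [lindex [time {set result [$func {*}$args]} 1] 0]
        puts stderr "TIMING $func: $usec microseconds"
    } else {
        set result [$func {*}$args]
    }
    return $result
}

Then each call site becomes "timedCall myFunc args" instead of
"myFunc args", which is exactly the per-function clutter I would
like to avoid.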

The problem is that I have to do this for each function, which will
make the code look messier.
Two questions:

1. Is there a cleaner method to do it, given that I am using
object-oriented Tcl?
2. Is there a way I can ensure that I don't affect the production
code's runtime (after all, I am adding an extra check for an env
variable)?

Thanks in advance for the answers / pointers.
rgds
RM



From: tom.rmadilo on
On May 24, 5:10 am, Rohit M <rohit.marka...(a)gmail.com> wrote:
> Hi All,
>    I want a method to time specific portions of my Tcl code. I know I
> can do so using the "time" command provided by Tcl. However, I want
> this to be included in my checked-in code, i.e. production code, and
> it should be hidden behind a flag, so that I can switch the flag on
> for my daily build test environment but disable it for the released
> application. Something I can think of is to do the following:
> 1. Put the code I want to time into a function.
> 2. Check an env. variable: if it is true, call my function wrapped in
> a "time" command with the iteration count set to 1; if it is not set,
> just call my function directly.
>
> The problem is that I have to do this for each function, which will
> make the code look messier.
> Two questions:
>
> 1. Is there a cleaner method to do it, given that I am using
> object-oriented Tcl?
> 2. Is there a way I can ensure that I don't affect the production
> code's runtime (after all, I am adding an extra check for an env
> variable)?

Some problems I have noticed with timing Tcl code:
1. Different runs produce different timings: if the improvements are
not huge, it is easy to get either the old or the new code to come out
faster on a given run. So one test run doesn't give you a definitive
answer.
2. The number of repetitions is very significant. Too many reps almost
always result in worse performance, but what counts as "too many" can
only be determined by testing.
3. Timing might be the least significant goal in code development. You
might reduce running time by making the code less maintainable,
readable, or reusable.

But assuming you are not changing the API of your code, you could use
the Tcl test framework to include timings. Maybe you could modify the
tcltest API to add timings (using [rename]).
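
An untested sketch of that idea; the ::tcltest::test.orig name is
arbitrary:

package require tcltest

# Keep the original command under a new name, then interpose a
# wrapper that also reports each test's wall-clock time.
rename ::tcltest::test ::tcltest::test.orig

proc ::tcltest::test {args} {
    set usec [lindex [time {
        uplevel 1 [list ::tcltest::test.orig {*}$args]
    } 1] 0]
    puts stderr "TIMING [lindex $args 0]: $usec microseconds"
}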


From: Donal K. Fellows on
On 24 May, 16:48, "tom.rmadilo" <tom.rmad...(a)gmail.com> wrote:
> 3. Timing might be the least significant goal in code development. You
> might reduce running time by making the code less maintainable,
> readable, or reusable.

Also, algorithmic improvements trump peephole improvements when you
need to do performance enhancement at all. (Mind you, switching to
using [lsort] instead of a homebrew list sorter is probably both at
the same time. :-))
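
For instance (quick sketch; homebrewSort here is a strawman O(n^2)
insertion sort):

proc homebrewSort {l} {
    set out {}
    foreach x $l {
        # Find the first element larger than $x and insert before it.
        set i 0
        foreach y $out {
            if {$x < $y} break
            incr i
        }
        set out [linsert $out $i $x]
    }
    return $out
}

set data {}
for {set i 0} {$i < 2000} {incr i} {
    lappend data [expr {int(rand() * 10000)}]
}
puts "homebrew: [time {homebrewSort $data}]"
puts "lsort:    [time {lsort -integer $data}]"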

Generally speaking, it's *hard* to do performance measurement, and the
hardest part is getting a machine that is doing nothing else.
Reproducibility of timing runs is a close second...

Donal (I like [time] but I acknowledge its weaknesses...)

From: Rohit M on
Thanks Tom and Donal for your comments.

1. I am only looking for significant changes, and there will be more
or less the same load on the machine running the tests every day.
2. I will do only one repetition per call, but the function itself
gets called thousands of times. I will keep a log, so it will give a
good average, fairly independent of machine specifics.
3. Timing is a significant goal for me, more so to keep track of the
notifications sent to my code (which are not controlled by me); i.e.,
any significant change in timing will mean that notifications have
increased or decreased considerably.

I am thinking of "clock" as an alternative; see the sketch below for
what I have in mind. Your thoughts on this, advantages / disadvantages
vis-a-vis "time"?
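
As a sketch (doWork, timedWork, and MYAPP_TIMING are placeholder
names; [clock microseconds] needs Tcl 8.5):

proc doWork {} {
    after 10    ;# stand-in for the real work
}

# Take timestamps around the call and append the delta to a log
# file; thousands of calls a day should average out machine noise.
proc timedWork {} {
    if {![info exists ::env(MYAPP_TIMING)] || !$::env(MYAPP_TIMING)} {
        return [doWork]
    }
    set t0 [clock microseconds]
    set result [doWork]
    set t1 [clock microseconds]
    set f [open timing.log a]
    puts $f "doWork [expr {$t1 - $t0}]"
    close $f
    return $result
}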

On May 25, 5:49 pm, "Donal K. Fellows"
<donal.k.fell...(a)manchester.ac.uk> wrote:
> On 24 May, 16:48, "tom.rmadilo" <tom.rmad...(a)gmail.com> wrote:
>
> > 3. Timing might be the least significant goal in code development. You
> > might reduce running time by making the code less maintainable,
> > readable, or reusable.
>
> Also, algorithmic improvements trump peephole improvements when you
> need to do performance enhancement at all. (Mind you, switching to
> using [lsort] instead of a homebrew list sorter is probably both at
> the same time. :-))
>
> Generally speaking, it's *hard* to do performance measurement, and the
> hardest part is getting a machine that is doing nothing else.
> Reproducibility of timing runs is a close second...
>
> Donal (I like [time] but I acknowledge its weaknesses...)

From: tom.rmadilo on
On May 26, 4:46 am, Rohit M <rohit.marka...(a)gmail.com> wrote:
> Thanks Tom and Donal for your comments.
>
> 1. I am only looking for significant changes, and there will be more
> or less the same load on the machine running the tests every day.
> 2. I will do only one repetition per call, but the function itself
> gets called thousands of times. I will keep a log, so it will give a
> good average, fairly independent of machine specifics.
> 3. Timing is a significant goal for me, more so to keep track of the
> notifications sent to my code (which are not controlled by me); i.e.,
> any significant change in timing will mean that notifications have
> increased or decreased considerably.
>
> I am thinking of "clock" as an alternative. Your thoughts on this,
> advantages / disadvantages vis-a-vis "time"?

I would just buy an additional processor, upgrade the amount of
memory, or both, and forget about timing everything. In the rare event
that I time code, it is usually to compare it with other code, and it
usually takes me a half-hour or more to figure out what parameters
work best and to perform a number of runs with both code versions. If
the two versions perform within about 20% of each other, it is
virtually guaranteed that each version will win some particular test
comparison. In other words, if you are making small changes, timing
the code may reveal that you have made some kind of huge mistake, or
discovered a much faster algorithm, but the performance of an
algorithm is highly data-dependent: you may pick parameters which
never expose a particular problem. Automated testing makes some sense
to me, but automated timing seems doomed either to suck away your
development time or to fail to do what you want it to.

My suggestion is to change the Tcl testing code to do a timing as well
as the API test. You get to reuse the testing framework and get
additional data for free. What the extra data means...
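
Anyway, a sketch of the usage: with the [rename] wrapper from my
earlier post loaded, an ordinary tcltest case would then report a
duration alongside its pass/fail result.

package require tcltest
namespace import ::tcltest::*

test sum-1.0 {sum a short list of integers} -body {
    set total 0
    foreach n {1 2 3 4 5} {incr total $n}
    set total
} -result 15

cleanupTests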