From: Gerry Snyder on
Bruce wrote:
> Uwe Klein wrote:
>> Alexandre Ferrieux wrote:
>>> set ::Defaults(procA) [ list -items 22 -levels 5]
>>> proc procA args {
>>> array set opt $::Defaults(procA)
>>> array set opt $args
>>> # body of proc, using $opt(-...)
>>> }
>>>
>>> -Alex
>> nice.
>> Thinking about where the downside is.
>>
>
> the one downside (and your version had it too ;)
> is that it silently ignores invalid options
> so if i typo an option name in the call e.g.
>
> procA -ietms 5
>
> instead of an error, i just get the default value
> and have no idea where i screwed up.
>
> Bruce

Various levels of error checking are easy to add. Here is what I use:

###################################################
# Set up default arguments
array set opts {
    -user me(a)mymail.com
    -password *****
}

# Minimal error checking: odd number of args,
# or first of pair not a valid option name
set optsLength [llength [array get opts]]
set optsNames [array names opts]
if {[llength $args] % 2 != 0} {
    puts "Unpaired option request. Valid options are:\n$optsNames"
    return
}
array set opts $args
if {[llength [array get opts]] != $optsLength} {
    puts "Illegal option. Valid options are:\n$optsNames"
    return
}
###################################################

It checks for an odd number of arguments and for the first element of a
pair not being a valid option name. In either case it lists the valid options.
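Assuming the checks sit inside a proc (here called login, a made-up
name, with made-up defaults), a mistyped option is now caught instead of
silently taking the default:

```tcl
proc login args {
    array set opts {-user me(a)mymail.com -password *****}
    set optsLength [llength [array get opts]]
    set optsNames [array names opts]
    if {[llength $args] % 2 != 0} {
        puts "Unpaired option request. Valid options are:\n$optsNames"
        return
    }
    array set opts $args
    if {[llength [array get opts]] != $optsLength} {
        puts "Illegal option. Valid options are:\n$optsNames"
        return
    }
    puts "logging in as $opts(-user)"
}

login -uesr bob     ;# typo: adds a new array entry, so the length
                     ;# check fires and the error message is printed
login -user alice   ;# ok: overwrites the default
```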

In many situations it would make no sense to run with no options being
set. It would be trivial to add another test or modify the first one to
check for that.
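A minimal sketch of such a test (the exact policy is an assumption;
Gerry leaves it open):

```tcl
# Require at least one option to be supplied (assumed policy):
if {[llength $args] == 0} {
    puts "No options given. Valid options are:\n$optsNames"
    return
}
```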

Showing what invalid option was used would not be hard, but I decided it
would not be useful enough to add it.
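For what it's worth, one way to name the offending option is to check
each name before the [array set]; a sketch, using [lsearch -exact] so it
works on older Tcl releases as well:

```tcl
foreach {name value} $args {
    if {[lsearch -exact $optsNames $name] < 0} {
        puts "Illegal option \"$name\". Valid options are:\n$optsNames"
        return
    }
}
```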


Gerry
From: tom.rmadilo on
On Sep 8, 3:36 am, "Donal K. Fellows"
<donal.k.fell...(a)manchester.ac.uk> wrote:
> On 8 Sep, 03:39, "tom.rmadilo" <tom.rmad...(a)gmail.com> wrote:
>
> > I'm not impressed by the speed gains (one million requests and I save
> > users 13 seconds), but the code is a little cleaner, so I'll update
> > the proc.
>
> 18% is quite a good improvement, FWIW, especially as it is really not
> much more than a peephole optimization. Small scale stuff usually only
> wins a percent or so...

I'm more surprised that nobody noticed I didn't use [upvar 1 otherVar
myVar].

The original reason for using individual variable names was that I was
under the mistaken impression that "myVar" must not exist. From the
manpage: "There must not exist a variable by the name myVar at the
time upvar is invoked." Apparently myVar doesn't count as a local
variable, as the faster version reuses "var". The manpage does explain
this later on: "it is possible to retarget an upvar variable by
executing another upvar command."

I still think the context for profiling code is at least at the proc
level. The speed gains in the foreach loop may depend on the number of
times the loop runs. Should I profile for 1,2,3...10 options? Should I
profile at 10, 100, 10000 reps? In my experience you sometimes get
more ambiguous results the more testing you do. And if your algorithm
spends only a small fraction of its overall time in that section of
code, the gains are not 18% but much less.
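For proc-level timing the built-in [time] command already averages over
a rep count; a sketch of varying the reps (the proc and the numbers here
are made up for illustration):

```tcl
proc withOpts args {
    array set opt {-items 22 -levels 5}
    array set opt $args
    return $opt(-items)
}

# [time] returns "N microseconds per iteration" averaged over the
# given count; comparing rep counts shows how stable the per-call
# figure is:
foreach reps {10 100 10000} {
    puts "$reps reps: [time {withOpts -items 3 -levels 2} $reps]"
}
```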