 set result [cached c [list myLongCommand $i $j]]

 proc cached {cacheName command} {
     upvar 1 $cacheName cache                     ;# caller's cache array
     if {![info exists cache($command)]} {
         set cache($command) [uplevel 1 $command] ;# compute and remember
     } else {
         set cache($command)                      ;# return the cached value
     }
 }

 # ---------------------------------------- testing:
 % time {cached c {stringSimilarity "Tool Command Language" "Tool Command Languages"}}
 422000 microseconds per iteration
 % time {cached c {stringSimilarity "Tool Command Language" "Tool Command Languages"}}
 0 microseconds per iteration
 % array get c
 {stringSimilarity "Tool Command Language" "Tool Command Languages"} 0.953488372093

The second call saves you more than 0.4 seconds... (on a second try with a cleared cache, it was only 63 ms, though). Note, however, that caching results only makes sense if the command always returns the same value - caching commands like
 gets $fp
 expr rand()
 clock seconds

is certainly a bad idea... And side effects are of course not produced, as the command isn't executed after the first time.
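To make the pitfall concrete, here is a minimal sketch using the cached proc above (the two-second delay and the manual unset are only illustrative):

 set t1 [cached c {clock seconds}]
 after 2000                          ;# wait two seconds
 set t2 [cached c {clock seconds}]   ;# stale: returns the first result, so $t2 == $t1
 unset {c(clock seconds)}            ;# drop the entry to force recomputation...
 set t3 [cached c {clock seconds}]   ;# ...now the command really runs again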
DKF notes: The above timings aren't very good (there are granularity problems), but on Solaris 8 (on a not-very-fast processor) I get:

 % time {cached c {stringSimilarity "Tool Command Language" "Tool Command Languages"}}
 97151 microseconds per iteration
 % time {cached c {stringSimilarity "Tool Command Language" "Tool Command Languages"}} 200
 44 microseconds per iteration
Philip Greenspun calls this concept "memoization"... A heavyweight XOTcl version is on Cacheable class. See also memoizing.
NaviServer has the commands ns_cache_eval[1] and ns_memoize[2], which are thread-friendly. The cache data is global and shared by all threads; the interlocking is transparent. Also, if one thread is computing expensive_function because the cache entry is empty or stale, and another thread then checks the cache, the second thread suspends until the first thread completes the cache update. Timeouts can be specified.
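For illustration only, a hedged sketch of how ns_memoize might be called - the argument form and the helper proc are assumptions here, so check the NaviServer ns_memoize documentation for the exact syntax and options of your version:

 proc expensive_function {a b} {
     # stand-in for a slow computation
     return [expr {$a ** $b}]
 }
 # NOTE: call form assumed; see the ns_memoize man page. The first caller
 # computes and caches the result; concurrent callers in other threads block
 # until the value is stored, then reuse it.
 set result [ns_memoize {expensive_function 6 7}]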
See Arrays as cached functions where you can call $f($x,$y)... or A size-bounded cache.
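As a rough illustration of the size-bounded idea (a minimal sketch, not the code on the linked page): keep the cache keys in insertion order and evict the oldest entry once a limit is exceeded. It is called like the cached proc above, e.g. set result [cachedBounded c 100 [list myLongCommand $i $j]].

 proc cachedBounded {cacheName maxSize command} {
     upvar 1 $cacheName cache
     upvar 1 ${cacheName}Order order          ;# keys in insertion order, oldest first
     if {![info exists cache($command)]} {
         set cache($command) [uplevel 1 $command]
         lappend order $command
         if {[llength $order] > $maxSize} {
             unset cache([lindex $order 0])   ;# evict the oldest entry
             set order [lrange $order 1 end]
         }
     }
     return $cache($command)
 }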