A page for tales of computing tasks where much work was put into computing something (which perhaps wasn't all that interesting to begin with). It's not about the goal, but about the journey there.
Of course, a rule for this kind of story is that the more elaborate and efficient the program used, the better.
Anyone can waste lots of CPU time by being stupid, but it's an art to waste CPU time on a task while still performing it efficiently.
For an oldie, FPX writes in factorial about his efforts in computing large factorials. His account ends with:
- Eventually, I extended my program to work over a network, and had 70 workstations grinding away for 48 hours on the computation of 1000000!. Never was CPU time more pointlessly wasted.
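For the curious, here is a minimal sketch of that kind of computation, using Tcl's built-in arbitrary-precision integers (Tcl 8.5 and later). It is an illustration only, not FPX's program, and certainly not the network-distributed version:

```tcl
# A minimal sketch, not FPX's program: n! via a straightforward loop,
# relying on Tcl's arbitrary-precision integer arithmetic (Tcl 8.5+).
proc factorial {n} {
    set result 1
    for {set i 2} {$i <= $n} {incr i} {
        set result [expr {$result * $i}]
    }
    return $result
}

puts [factorial 20]    ;# 2432902008176640000
# factorial 1000000 would eventually return a number of some 5.5 million
# digits; the naive loop above is nowhere near fast enough to make that
# pleasant on a single machine, hence the 70 workstations.
```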
Lars H: In a paper mentioned in Scientific papers mentioning Tcl, Daniel Andrén, Klas Markström, and I ran some very large jobs to determine the diameter of some Cayley graphs. Our main results (apart from a new fast algorithm for row-reduction of matrices over finite fields, which was found using pen and paper) are summarised in the table below:
| Diameter(n,q) | n=2 | n=3 | n=4 | n=5 | n=6 |
|---|---|---|---|---|---|
| q=2 | 2 | 4 | 7 | 10 | 13 |
| q=3 | 3 | 6 | 9 | 12 | |
| q=4 | 4 | 7 | 11 | | |
| q=5 | 4 | 7 | 11 | | |
| 7≤q≤23 | 4 | 8 | | | |
where the numbers in bold were computed by a Tcl program. The algorithm was a dead simple breadth-first search, but its implementation was not entirely trivial, since the spaces being searched through were rather large; for example, the case (n,q)=(3,4) has 250047 vertices and the case (n,q)=(4,3) has 40960000. Having to house that much data within a Tcl process forced me to think about compact data storage.
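To give a flavour of what that means in practice, here is a minimal sketch (my own illustration, not the code from the paper) of a breadth-first search where the per-vertex data is packed into a single Tcl string, one character of distance per vertex, rather than a list or dict entry per vertex. The proc neighbors is a hypothetical stand-in for the Cayley-graph adjacency rule, vertices are assumed to be numbered 0..numVertices-1, and distances are assumed to fit in one byte (the diameters above are all well below 256):

```tcl
# A minimal sketch, not the paper's implementation: BFS over a large,
# implicitly defined graph.  Distances live in one string, one character
# (one byte) per vertex; \xff means "not yet visited".  The proc
# "neighbors" is a hypothetical stand-in for the actual adjacency rule.
proc bfs {numVertices start} {
    set dist [string repeat \xff $numVertices]
    set dist [string replace $dist $start $start \x00]
    set frontier [list $start]
    set d 0
    while {[llength $frontier] > 0} {
        incr d
        set next {}
        foreach v $frontier {
            foreach w [neighbors $v] {
                if {[string index $dist $w] eq "\xff"} {
                    set dist [string replace $dist $w $w [format %c $d]]
                    lappend next $w
                }
            }
        }
        set frontier $next
    }
    # The last non-empty level gives the eccentricity of the start vertex.
    return [expr {$d - 1}]
}
```

Even at one byte per vertex, the (n,q)=(4,3) case means a string of some 40 million characters; the C port mentioned below went further still, down to the two bits per vertex quoted for the big runs.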
Anyway, the core part of one of the Tcl programs was subsequently ported to C (though it was still the Tcl script that produced the input to the C program), parallelized, and set to work on some really large problems. The last one we ran, (n,q) = (3,23), required 430 GB of RAM (that's only 2 bits per vertex), caused us to crash the then-largest computer in Norway, ran for several days on over 400 processors, and accumulated a runtime of 4.1 CPU-years. (Well, maybe it was rather the manufacturer SGI which crashed it. What first happened was that we couldn't run the program at all, since the OS had a 256 GB limit on the memory allocated to any single process, so the nice sysops asked SGI for a new OS version without that limit. After a couple of weeks SGI delivered such an update, which however they hadn't been able to fully test, since they didn't themselves have any machine with this much memory. And sure enough, there was a bug in it which caused the machine to crash.)
Never was CPU time more pointlessly wasted: we still don't know at what value of q the asymptotic diameter 9 for n=3 is attained! (The next possibility is q=25, which would require 954 GB of RAM to run.)