[isabelle-dev] Speed of @{thm}

Jasmin Blanchette jasmin.blanchette at gmail.com
Fri Aug 10 18:31:40 CEST 2012


Hi all,

In the new codatatype package, which we aim to integrate into Isabelle soon, there are 1277 @{thm} annotations spread over about 10 000 lines of ML. On my machine, each @{thm} costs about 30 ms, which means that about 40 s of CPU time is spent on theorem lookups alone whenever we load or build the package.
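(For concreteness, these are just compile-time theorem references embedded in the ML sources, along the lines of the sketch below; the binding names and theorems are made up for illustration.)

   (* typical usage pattern: bind existing theorems to ML values at compile time *)
   val my_refl = @{thm refl};
   val my_conj_commute = @{thm conj_commute};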

As a rough proof of concept, I experimented with an alternative @{fasthm} antiquotation that simply generates a theorem value:

   (* tokenize a string of outer syntax, dropping whitespace and comments *)
   val tokenize = Outer_Syntax.scan Position.start #> filter Token.is_proper;
   (* parse a theorem reference and retrieve the theorem in the current context *)
   fun thm s = Attrib.thm (Context.Proof @{context}, tokenize (s ^ ";")) |> fst;

   (* register @{fasthm}, which maps a quoted theorem name to the ML text  thm "<name>"  *)
   val _ = Context.>> (Context.map_theory
     (ML_Antiquote.value (Binding.name "fasthm")
        (Scan.lift Args.name >> (fn name => "thm " ^ ML_Syntax.print_string name))));
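Each occurrence thus generates the ML source text  thm "<name>"  (the quoting comes from ML_Syntax.print_string), and it is used with a quoted theorem name; a made-up example:

   val my_commute = @{fasthm "conj_commute"};   (* hypothetical binding, in a HOL context *)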

It's 20 times faster (!).

This is mind-boggling. Looking at the code (someone correct me if I'm wrong), it appears that @{thm} generates a variable that is bound in the background and stores the looked-up theorem. One would think this ensures that the lookup takes place only once, at compile time, even if @{thm} occurs in a tight loop. But despite taking no special precautions, @{fasthm} also has this property; the following is lightning fast:

   fun f n = @{fasthm "nibble.simps(1)"};
   map f (1 upto 10000);
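(Roughly, on that reading, an ML block containing @{thm conjI} is checked as if one had written something like the sketch below, with the lookup performed once when the block is compiled. The variable name is made up, and Proof_Context.get_thm merely stands in for whichever lookup function is actually used.)

   (* hypothetical picture of the background binding generated for @{thm conjI} *)
   val thm_0 = Proof_Context.get_thm @{context} "conjI";   (* once, at compile time *)
   fun f n = thm_0;                                        (* each occurrence just references it *)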

The hg history did not reveal any insight into why theorems are handled differently from (and less efficiently than) @{term}, @{typ}, etc. Any ideas?

Cheers,

Jasmin


