[isabelle-dev] LPAR and IWIL
Jasmin Blanchette
jasmin.blanchette at gmail.com
Mon Oct 18 15:36:26 CEST 2010
Hi all,
Last week I attended LPAR-17 (Int'l Conf. on Logic for Programming,
Artificial Intelligence and Reasoning) and IWIL-2010 (Int'l Workshop
on the Implementation of Logics) in Yogyakarta, Indonesia. The LPAR
proceedings are in my office; those for IWIL are online at
http://www4.in.tum.de/~schulz/PAPERS/STS-IWIL-2010.pdf.
LPAR attracts submissions on a variety of topics -- as one participant
put it, "If you're doing something with logic, you're welcome." SAT
solving, TRS termination and complexity (complexity is really in), and
nonmonotonic logics were well represented. Here are summaries of a few
of the talks.
LPAR:
* Lazy Abstraction for Size-Change Termination
Carsten Fuhs (w/ Michael Codish, Jürgen Giesl, Peter
Schneider-Kamp)
http://sunsite.informatik.rwth-aachen.de/Publications/AIB/2010/2010-14.pdf
"Lazy abstraction" means that the SAT solver is responsible for
choosing a base order at the same time as it solves the SCNP
constraints -- as opposed to fixing a base order and generating
SCNP constraints from that. This is now implemented in AProVE and
fits nicely with the rest of the dependency pair framework.
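To make the idea concrete, here is a toy sketch of my own (this is
not AProVE's actual encoding; the pysat library and all variable
choices are my assumptions): selector variables stand for the
candidate base orders, so the SAT solver picks an order and checks
the constraints in a single search instead of in two phases.

    # Toy illustration, not AProVE's encoding.  Variables 1 and 2
    # select between two candidate base orders; variables 3 and 4 are
    # two "decrease" constraints that the termination proof requires.
    from pysat.solvers import Glucose3

    with Glucose3() as solver:
        solver.add_clause([1, 2])      # choose at least one base order
        solver.add_clause([-1, -2])    # ... and at most one
        solver.add_clause([3])         # both decrease constraints must hold
        solver.add_clause([4])
        solver.add_clause([-3, 1])     # constraint 3 holds only under order 1
        solver.add_clause([-4, 1, 2])  # constraint 4 holds under either order
        if solver.solve():
            # the model shows which base order was chosen lazily
            print(solver.get_model())  # [1, -2, 3, 4]: order 1 was picked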
* Human and Unhuman Commonsense Reasoning (Invited Talk)
Michael J. Maher
Michael presented a defeasible logic FDL and compared it with
other such logics. A defeasible logic is a logic where a chain of
reasoning (say, a proof of "p") can be defeated by another chain
(e.g., a proof of "~ p"), much like real-life argumentation: "Birds
typically fly, and Tweety is a bird, so Tweety flies" is defeated
by "Tweety is a penguin". Two common issues with nonmonotonic
logics are that, even though they try in principle to capture
"commonsense reasoning", they sometimes give counterintuitive
results, and that they are rarely based on empirical evidence,
resting instead on the intuitions of their inventors. FDL addresses
these issues by being more conservative in its judgments and by
taking empirical evidence about how humans reason into account.
* Magically Constraining the Inverse Method Using Dynamic Polarity
Assignment
Kaustuv Chaudhuri
The Magic Sets Transformation (MST) is a technique for transforming
a Prolog program and query (or, more generally, input to any
top-down logic programming engine) into a form that a bottom-up
logic programming engine can evaluate to compute exactly the same
answers.
Kaustuv showed that dynamically assigning polarities can
characterize MST without having to transform the program or the
query. (One attendee commented that this had been done
independently by somebody else, but I didn't catch the name.)
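As a concrete toy (my own illustration, not from the talk), consider
the usual transitive-closure program with the query path(a, _). After
the magic sets transformation, a naive bottom-up fixpoint computes
exactly the answers a top-down engine would, with the magic predicate
restricting derivations to query-relevant facts. A minimal Python
sketch, all names hypothetical:

    # Magic-sets-transformed version of
    #   path(X,Y) :- edge(X,Y).
    #   path(X,Z) :- edge(X,Y), path(Y,Z).
    # for the query path(a, _).  The magic predicate records which
    # first arguments the query (transitively) demands.
    edge = {("a", "b"), ("b", "c"), ("d", "e")}
    magic = {"a"}   # magic_path(a): seeded from the query
    path = set()

    changed = True
    while changed:
        changed = False
        # magic_path(Y) :- magic_path(X), edge(X,Y).
        new_magic = {y for (x, y) in edge if x in magic} - magic
        # path(X,Y) :- magic_path(X), edge(X,Y).
        new_path = {(x, y) for (x, y) in edge if x in magic} - path
        # path(X,Z) :- magic_path(X), edge(X,Y), path(Y,Z).
        new_path |= {(x, z) for (x, y) in edge if x in magic
                     for (y2, z) in path if y2 == y} - path
        if new_magic or new_path:
            magic, path = magic | new_magic, path | new_path
            changed = True

    # path is now {("a","b"), ("b","c"), ("a","c")}; the ("d","e")
    # edge is never touched, mirroring top-down relevance.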
Because of my flight schedule, I missed the following two
Isabelle-related talks:
* Generic Methods for Formalising Sequent Calculi Applied to
Provability Logic
Jeremy E. Dawson (w/ Rajeev Goré)
They described generic methods for reasoning about multiset-based
sequent calculi that allow one to "mix and match" shallow and deep
embeddings, using a shallow embedding where possible and a deep one
otherwise. These methods are applied in the Isabelle proof of cut
admissibility for the sequent calculus GLS for the provability
logic GL.
* An Isabelle-Like Procedural Mode for HOL Light
Petros Papapanagiotou and Jacques Fleuriot
They implemented "{,d,e,f}rule{,_tac}" in HOL Light as normal
tactics and call this "Isabelle Light". They also have "case_tac",
"subgoal_tac", and even "simp". Isabelle proofs can be translated
one line at a time to this style; see the paper for examples. The
main challenge was to emulate features provided by Isabelle's
metalogic. They introduced ===> for metaimplication. There are
various limitations related to the use of metavariables and the
lack of metaquantification. The paper is rather light on
implementation details and feels more like a tool paper or system
description.
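For context, my own gloss (based on Isabelle, not on their paper):
Isabelle states inference rules with meta-implication, whereas HOL
implication is an object-level connective; their ===> plays the
former role in HOL Light. In LaTeX notation, conjunction
introduction as a rule versus as a formula:

    % the rule conjI, stated with meta-implication:
    \[ P \Longrightarrow Q \Longrightarrow P \wedge Q \]
    % the corresponding object-level HOL formula:
    \[ P \longrightarrow Q \longrightarrow P \wedge Q \]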
The motivation for this work is weak. They sell it along the lines
of "HOL Light is highly flexible and programmable, but hard to use
for natural deduction proofs" and "Isabelle supports natural
deduction proofs", but everything nice they have to say about HOL
Light applies equally well to Isabelle, which raises the question:
Why not simply use Isabelle?
IWIL:
* On Implementing Modular Complexity Analysis
Harald Zankl (w/ Martin Korp)
This work is about finding bounds on the complexity of term
rewriting systems. The novel aspect of this work, according to the
authors, is that the approach is modular, exploiting the fact that
dc(R \cup S) = dc(R/S) + dc(S/R)
where "dc" is the derivational complexity of a set of equations.
They break down the rewrite system into several parts by applying
the equation recursively and obtain something like O(n) + O(n^5) +
O(n^2). At this point, they focus on the O(n^5) part and try to
find a tighter bound -- modularity makes it possible to ignore the
rest of the problem.
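Spelled out with hypothetical components R1, R2, R3 (my own reading
of the recursion, assuming the equation extends to relative systems
in the obvious way):

    \[ dc(R_1 \cup R_2 \cup R_3)
         = dc(R_1/(R_2 \cup R_3)) + dc(R_2/(R_1 \cup R_3))
           + dc(R_3/(R_1 \cup R_2)) \]

With the bounds above this gives O(n) + O(n^5) + O(n^2) = O(n^5), so
any further effort is best spent tightening the dominating middle
summand.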
* Optimizing the AES S-Box using SAT
Carsten Fuhs
He showed how to use a SAT solver to synthesize XOR circuits used
in cryptographic algorithms. Some issues are familiar, such as
eliminating universal quantifiers and breaking symmetries.
Unsurprisingly, Soos's CryptoMiniSat (the first SAT solver to
support XOR constraints natively and even infer them from standard
CNF, and the 2010 SAT Race winner) performed best on those
problems, by a wide margin.
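For a flavor of the basic machinery, here is a minimal sketch of my
own (not the paper's encoding; it assumes the pysat library): an XOR
gate becomes the standard four CNF clauses, and a circuit is checked
against its specification by asserting that they disagree and
expecting UNSAT.

    from pysat.solvers import Glucose3

    def xor_gate(solver, a, b, z):
        # z <-> (a XOR b), as the standard four CNF clauses
        solver.add_clause([-a, -b, -z])
        solver.add_clause([a, b, -z])
        solver.add_clause([a, -b, z])
        solver.add_clause([-a, b, z])

    # Circuit: t = a XOR b, out = t XOR c.  Spec: s = a XOR b XOR c.
    a, b, c, t, out, s1, s = 1, 2, 3, 4, 5, 6, 7
    with Glucose3() as solver:
        xor_gate(solver, a, b, t)
        xor_gate(solver, t, c, out)
        xor_gate(solver, a, b, s1)
        xor_gate(solver, s1, c, s)
        solver.add_clause([out, s])    # assert out != s ...
        solver.add_clause([-out, -s])
        print(solver.solve())          # False (UNSAT): circuit matches spec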
On the whole, I have mixed feelings about LPAR. On the one hand, it
attracted many submissions (105) and had decent invited talks. With
all these formal methods people attending, it's also a nice place
to advertise Isabelle. On the other hand, the topics are quite
varied, and at any given time about half the attendees were missing
(even though there were no parallel tracks). For surprisingly many
talks, no questions were asked.
At lunch tomorrow, I can tell you the story of the rat I saw in a
local restaurant's kitchen.
Jasmin