[isabelle-dev] Fwd: Word Libraries

Florian Haftmann florian.haftmann at informatik.tu-muenchen.de
Sat Oct 9 10:41:16 CEST 2021


(Repost after using a bad mail address – sorry for the noise)

> There is some relevant experience from Isabelle2021. As a rough order of magnitude, it took about 2 full person months to do the complete proof update for l4v, which finished a bit more than a week ago now. 2 person months is about 1 person month more than a usual Isabelle update. Not quite all, but most of the changes were due to Word_Lib.
> 
> You did an initial foray into this proof update for one architecture (ARM, 32 bit) after the first half of the Isabelle2021 changes. The result of that foray was that those changes were relatively benign, which is why I was surprised by the much higher effort later. I think the main fallout was from a few small changes in the second half of the Isabelle2021 update.

In retrospect I did underestimate the side conditions imposed by the
life cycle of l4v wrt. Isabelle releases.

Do I understand correctly that there is a running version built on
Isabelle2021 / AFP 2021?  If yes, this can be used as a point of
reference when appropriate.

It seems there are a couple of things to iron out properly.  There is
still some time before the upcoming Isabelle release.  Beyond that, how
strictly is l4v coupled to the Isabelle / AFP release cycle?  Maybe
consolidations for Word_Lib could also happen on a dedicated AFP branch
*after* the Isabelle release and later converge into the (then) next
AFP release.

> - things started breaking immediately for 64-bit architectures, because the Sumo concept alone does not quite work. It is not enough to make all lemmas for all word lengths available. The point of the Word_SetupXX theories is to additionally establish a set of common names that refer to the default word type of the program (basically, whatever "unsigned int" is in C for that architecture, and what the register width is for the machine). Having these names in common means that the same proof text can work on different concrete types. This is different to actually generic lemmas that work for any word length or that need to resolve preconditions on the word length before they can be applied (the latter can be solved in theories like Word8 etc). It is not the prettiest form of genericity, but it is crucial for not having to duplicate thousands of lemmas that for other reasons do need to refer to a concrete word length (which is known in the proof state, but unknown to the proof text). Ultimately this comes from C, which, painful as it is, works on exactly the principle that the same type name can refer to different representations, depending on architecture.

My proposal:
* Theories Word_SetupXX re-appear offering the same name bindings as
pre-Isabelle2021
* Their purpose is documented in the Guide.thy
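
To make the intent concrete, a minimal sketch – all names below are
hypothetical stand-ins for the actual pre-Isabelle2021 bindings:

type_synonym machine_word_len = 32  ― ‹64 on 64-bit architectures›
type_synonym machine_word = ‹machine_word_len word›

abbreviation word_size_bits :: nat
  where ‹word_size_bits ≡ LENGTH(machine_word_len)›

The same proof text then refers to machine_word on every architecture,
while the concrete length is known only to the proof state.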

>>> The shift*1 operations have been auxiliary definitions in the
>>> distribution to bootstrap the bit shift operations, similarly to
>>> iszero, equipped only with a minimal set of lemmas. If one tool out
>>> there relies on them as explicit constants, e.g. due to pattern
>>> matching, I won’t shed a tear over keeping them as constants.
>> 
>> Great, let's keep them constants.
>> 
>>> (Aside, the primitive definitions then would be maybe: shiftl1 a = (a << 1), shiftr1 a = (a >> 1), sshiftr1 a = (a >>> 1)).
>> 
>> I'd be fine with these as definitions as long as the old forms are still available as lemmas, preferably under the previous names. Maybe we can make sure over time that these constants are no longer produced by any tools, and then they can finally be removed, but that is a number of release cycles away.

My proposal:
* Proper constants for shiftl1 a = (a << 1), shiftr1 a = (a >> 1),
sshiftr1 a = (a >>> 1)
* Facts for those available under the same names as Isabelle2021 / AFP
2021 (point of reference)
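
Concretely, a sketch of the intended definitions (modulo exact
signatures):

definition shiftl1 :: ‹'a::len word ⇒ 'a word›
  where ‹shiftl1 a = a << 1›

definition shiftr1 :: ‹'a::len word ⇒ 'a word›
  where ‹shiftr1 a = a >> 1›

definition sshiftr1 :: ‹'a::len word ⇒ 'a word›
  where ‹sshiftr1 a = a >>> 1›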

>> In terms of Isabelle's robustness: I'd agree with it being more robust for moving constants, where antiquotations can be used to refer to unqualified names, but renaming is still one of the major pain points in almost every Isabelle release -- compounded by the fact that not nearly all of these are documented in a way that would be usable for a simple search/replace. Those names that are listed in NEWS with their replacements are highly appreciated, but the lists are not complete, and there is no such NEWS for Word_Lib.
>> 
>> Maybe the impression of robustness comes from the incremental way we update the AFP. Each specific renaming is obvious to the author of that change and likely not hard to automate. On the other hand, being confronted with all of these at the same time, and having to find all instances by proof failure is time consuming and can be extremely frustrating when you have to hunt in hg history for what something has been renamed to.

Concerning NEWS, the Guide.thy has a changelog section.

Concerning robustness, I was referring to changes to internal
identifiers, e.g. due to a move to a different theory, not to renaming
proper, i.e. a change of the typical (unqualified) name visible to the
end user.  As far as I remember, no such renamings have happened.  Did
you stumble over a renaming, though?

>>>> For instance, who on earth wants to read something like "is_aligned (push_bit m k) m" instead of "is_aligned (k << m) m"? The syntax was introduced for good reasons. It is fine to make it a bundle to not pollute a global syntax space, it is not fine to change original material that was written with a purpose.
>>>
>>> If your point is to apply the ‹_ << _› syntax etc. thoroughly and
>>> consistently among the Word_Lib theories, I am happy to look after
>>> these slips.
>> 
>> That would be appreciated. My complaint was not that a new lemma did not use the word syntax (although it would be nice to remain consistent), but that old lemmas were rewritten to not use it. Possibly as part of another change.

There is one issue which prevents a 100% satisfactory solution here: the
syntax needs abbreviations, and these cannot be organized in bundles.
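
To make the constraint concrete – a sketch with hypothetical names,
assuming ‹<<› is meant to denote push_bit with swapped argument order:

abbreviation shiftl :: ‹'a::len word ⇒ nat ⇒ 'a word›
  where ‹shiftl a n ≡ push_bit n a›

bundle shiftl_syntax
begin
notation shiftl  (infixl ‹<<› 55)
end

The notation can be switched on and off via the bundle, but the
underlying abbreviation is a global declaration and stays visible.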

Taking into consideration that there seem to be enough rough edges at
the moment, I would not object to postponing the reconciliation of the
two different layers of bit shift operations to a future release, i.e.
keeping <<, >>, >>> as distinct constants for the moment.

(This would be one remaining item on my todo list, the other being the
final dismantling of Ancient_Numeral.thy sometime in the future.)

>> - the unat_arith tactic solves fewer goals. I might be mistaken, but I think this is mostly due to simpset changes, but I have not had the time to track down which exactly.

Tactic unat_arith has a technical deficiency: at the time of its
declaration it takes the surrounding simpset »as it is« as its base,
and hence its behavior is brittle wrt. movements in the surrounding
theories.  Since there are many things to work out concerning the simp
setup, I would postpone this issue until those have been resolved.
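
Schematically, the brittle pattern is (hypothetical ML, reduced to the
essence):

ML ‹
  (* the simpset is captured once, when this declaration is processed;
     simp declarations made later in other theories are invisible to
     the tactic, so its behavior depends on its position in the theory
     import graph *)
  val frozen_ss = simpset_of @{context}
›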

> - the default simpset setup changed, it no longer reliably normalises ground terms with numerals.

I.e. terms involving operations like AND, +, -, * and numerals 0, 1,
42?  If that breaks down, something is really weird.
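
I.e. an equation of the following kind is supposed to be settled by
simp alone (a sanity check of my expectation, not a claim about any
particular version):

lemma ‹(42 AND 23) + 1 = (3 :: 8 word)›
  by simp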

> This caused a bunch of proof changes, hunting for which rules are needed etc. My reading of NEWS is that this is getting worse, with NOT terms not being normalised any more. This potentially makes sense for int/nat, but I would suggest keeping them for Word (and making sure the rest of the numeral simplification actually covers all cases -- usually 0, 1, Suc 0 are the corner cases to look out for).

After Thomas Sewell pointed out some problems with simplifying bit
expressions over natural numbers, I took over the elementary numeral
rewriting approach by Andreas Lochbihler originating in session
Native_Word to overcome them; in its first stage that approach relied
on NOT not being simplified by default, but with everything now in
place it should not be difficult to restore the normalisation of NOT
terms.
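
The expectation to restore would then be, e.g. (sketch of the target
behavior, not of the current state):

lemma ‹NOT 1 = (14 :: 4 word)›
  by simp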

Simplification of bit expressions involving word numerals still works
the same way as pre-Isabelle2021: by rewriting to int first.  If this
by itself exhibits problems, I would need concrete examples.

> - the same issue caused C/assembly refinement to fail, as well as generated bitfield proofs (for 64 bit architectures) because arithmetic leaves different terms. This might be unavoidable for arbitrary terms in a larger update, but Isabelle should be able to compute with at least numeral ground terms reliably.

Yes, that is definitely supposed to work.  Do you have examples at
hand?  Otherwise I will augment Examples.thy to cover more
combinations.

>> - punning max_word with -1 conflicts with normalising numerals to the interval [0; 2^len). There were a few test lemmas in the repository that made sure this was not accidentally removed, I'm not sure where these went. Removing normalisation for -1 in turn leads to terms like unat(-1) which proofs then get stuck on. This can be fixed locally by finding the right rule to re-establish the old normalisation, but that will of course mean that abstract lemmas that refer to -1 won't apply any more. This means there is no principled way to deal with ground terms any more. I can see the appeal of using -1 for generic lemmas, because it has nice algebraic meaning (as opposed to max_word). Maybe the solution is to build a rule set for arithmetic that can be added on demand or as a bundle. It's a bit of a crutch, but at least it would provide a choice.

Concerning normalizing numerals to the interval [0; 2^len) – is that
supposed to work universally?  There seems to be no such mechanism in
either Isabelle2020 or Isabelle2021:

lemma ‹w = 2342342› for w :: ‹4 word›
  apply simp

(I think something like that would require a simproc).
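
Such a simproc would hook in roughly as follows – a skeleton only, with
a hypothetical name and the actual normalization left as a stub:

simproc_setup word_numeral_norm ("numeral k :: 'a::len word") = ‹
  fn _ => fn ctxt => fn ct =>
    NONE  (* here one would prove ‹numeral k = numeral k'› with k' the
             representative in [0; 2^len) and return that theorem *)
›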

Concerning max_word – would it help to let it abbreviate 2 ^ len - 1
rather than - 1?  If it is used in terms involving concrete numerals,
the abstract properties of - 1 are not that relevant.  (I have pondered
that issue in different directions, but will spare the details here for
the moment.)
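
I.e. the variant under discussion would be (sketch):

abbreviation max_word :: ‹'a::len word›
  where ‹max_word ≡ 2 ^ LENGTH('a) - 1›

In ground terms both candidates denote the same value, but simp treats
the two shapes differently.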

Coming back to word numeral normalization – the typical rules should
take »- _« into consideration properly – »should« in the sense of what
I would naively expect; I did not check that proactively.  If this
fails, I should augment the rules.

I have been reluctant to enforce a particular normalization policy
once and for all – who can tell whether e.g. [0; 2^len) or
[- 2 ^ (len - 1); 2 ^ (len - 1)) is the preferred interval?

If your application depends on normalizing »- _« away, maybe this
should be done in the setup of the application itself.
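
I.e. the application would arrange its own rules such that e.g.

lemma ‹- 1 = (15 :: 4 word)›
  oops

becomes provable by its simp setup (statement only – the equation
holds, the open question is merely which rule set should establish it).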

>> - similarly, there is a rule that automatically rewrites "x = 0" to something in the direction of "x dvd 2^len". This is rarely useful with concrete len, and preventing these terms from being produced is similarly manual as take_bit.

A plain

lemma ‹w = 0› for w :: ‹'a::len word›
  apply simp

does not exhibit this.  In which examples does this happen?  (I agree
that it shouldn't.)

>> There is a separate question of what a good setup is for the interplay between casts to different types and what good normal forms for these are. We had settled on never unfolding ucast/unat/uint etc automatically, instead producing abstract rules that describe the interaction with the usual operators and relationships between each other. I still think that is a reasonable setting.

Sounds like a good roadmap to follow.

>> - related to simp rules for "unsigned", there are many (hundreds of) instances where casts or unat now leave terms of the form "take_bit 32 .." in the goal, which have little chance of making progress. This was such a frequent cause of breakage that we introduced a bundle which removes the rules producing take_bit terms. Trying to remove them globally is hopeless, because any theory merge with a theory that still has them will re-introduce them.

Those terms appear when conversions to and from words are collapsed.
Given the policy quoted above, this should not happen by default, and I
can remove the collapsing rules from the default simp set.
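
To give the shape of the residue (statement only – the equation holds;
the point is that collapsing the casts is what leaves the take_bit
term):

lemma ‹unat (ucast (w :: 64 word) :: 32 word) = take_bit 32 (unat w)›
  oops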

> Overall, it looks like a number of aspects of the new simpset are convenient for reasoning abstractly about machine words when you are working within the library (esp. take_bit and 0/dvd), but counterproductive once you are working with specific word lengths, as you do in program verification.

The only instances I can spot at the moment are the conversions; are
there examples beyond that?  dvd is not supposed to appear spontaneously
after default simplification.

> - the new simpset often loops in the l4v proofs, because it either reversed polarity or added additional rules about ucast/unat. Some of this was fall-out from the more general treatment (e.g. unsigned).

OK, this should be revisited after the simp rules on conversions are
clarified.

> - not all new looping is due to that specific problem. I haven't been able to track down what rules exactly are the problem, but adding field_simps now almost always loops in word proofs. This might be a symptom of a different problem (i.e. not related to Word_Lib), because the usual fix was to instantiate specific commutativity rules. This indicates that something doesn't quite work any more with ordered rewriting. Possibly there is now an additional rewriting step or something like that so that the simple test in `simp` no longer suffices.

OK, this should be revisited after the simp rules on conversions are
clarified.

Cheers,
	Florian


