[Afp-submit] AFP entry Neural_Networks

Achim D. Brucker adbrucker at 0x5f.org
Thu Nov 20 14:21:02 CET 2025


Hi,
This is also my assumption. The values I used in the example below are the
result of pretty printing (i.e., conversion to strings).

The actual value stored is 0.0070820772089064121246337890625 (decoding
it "manually", following the standard). Printing it in a small C
program gives 0.00708207720890641212, which seems to be the official
decimal representation according to the standard. IEEE 754 is not easy ...
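
A minimal sketch of such a program (assuming an IEEE 754 binary32 float,
little-endian byte order as in PackReal32Little, and a printf that emits
as many digits as requested, e.g. glibc's):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* the four bytes from the AFP theory, in little-endian order */
        const uint8_t bytes[4] = {0xC5, 0x10, 0xE8, 0x3B};

        /* assemble them into the 32-bit pattern 0x3BE810C5 */
        uint32_t bits = (uint32_t)bytes[0]
                      | (uint32_t)bytes[1] << 8
                      | (uint32_t)bytes[2] << 16
                      | (uint32_t)bytes[3] << 24;

        /* reinterpret the bit pattern as a binary32 float */
        float f;
        memcpy(&f, &bits, sizeof f);

        printf("bits  = 0x%08" PRIX32 "\n", bits);
        printf("%%.7g  = %.7g\n", f);   /* 0.007082077    (the development/2025-1 printing) */
        printf("%%.10g = %.10g\n", f);  /* 0.007082077209 (the Isabelle 2025 printing)      */
        printf("%%.31f = %.31f\n", f);  /* exact decimal expansion of the stored value      */
        return 0;
    }

All three printf lines render one and the same stored value, just with
different numbers of decimal digits.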


Achim


On 20/11/2025 12:14, Peter Lammich wrote:
> It looks like what has changed is the pretty-printing. I think (I just
> did a quick check with Java) that, as an IEEE-32 value, both numbers are
> the same: the closest IEEE-32 floating-point number to either decimal
> representation is the same, namely 3be810c5.
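
The same check, sketched in C rather than Java (only strtof and a
byte-wise comparison of the results are assumed):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* return the raw bit pattern of a binary32 float */
    static uint32_t float_bits(float f) {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);
        return bits;
    }

    int main(void) {
        /* the two pretty-printed values from the two Isabelle versions */
        float a = strtof("0.007082077209", NULL);
        float b = strtof("0.007082077", NULL);

        printf("a = 0x%08" PRIX32 "\n", float_bits(a));  /* expected: 0x3BE810C5 */
        printf("b = 0x%08" PRIX32 "\n", float_bits(b));  /* expected: 0x3BE810C5 */
        printf("same float: %s\n",
               float_bits(a) == float_bits(b) ? "yes" : "no");
        return 0;
    }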
>
> -- 
>
>   Peter
>
>
>
> On 20/11/2025 12:35, Lawrence Paulson wrote:
>> I find it surprising that the meaning of a specific bit pattern as a 
>> real number according to an established floating-point standard can 
>> change from year to year. How do we know it won't change again?
>>
>> Larry
>>
>>> On 19 Nov 2025, at 22:37, Achim D. Brucker <adbrucker at 0x5f.org> wrote:
>>>
>>> Hi,
>>> I did a first investigation. Overall, this part of the theory is
>>> intended to serve as a regression test. Thus, it worked as it should,
>>> acting as an early warning for subtle changes that would otherwise go
>>> undetected, potentially creating "bit rot".
>>>
>>> The change is actually due to a difference between the Poly/ML versions
>>> used in Isabelle 2025 and in the current development version (and in
>>> 2025-1, for that matter). Namely:
>>>
>>> ML\<open>
>>>   PackReal32Little.fromBytes
>>>     (Word8Vector.fromList [0wxC5, 0wx10, 0wxE8, 0wx3B])
>>> \<close>
>>>
>>> returns a different value, namely:
>>>
>>> * Isabelle 2025:               val it = 0.007082077209: PackReal32Little.real
>>> * Isabelle development/2025-1: val it = 0.007082077: PackReal32Little.real
>>>
>>> As the rule of thumb for the precision of 32-bit IEEE 754 floating-point
>>> numbers is around 7-8 decimal digits, the new behaviour is OK. Hence, the
>>> fix to get the AFP entry working again is to update the theories to
>>> reflect the new values.
>>> We will take care of this tomorrow.
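
(For reference on where the 7-8 digit rule of thumb comes from: binary32
has a 24-bit significand, i.e. about 24*log10(2) ~ 7.2 decimal digits,
and C's float.h brackets this from both sides, as the following sketch
shows; it assumes a C11 float.h providing FLT_DECIMAL_DIG.)

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        /* FLT_DIG: digits that always survive text -> float -> text (6)       */
        /* FLT_DECIMAL_DIG (C11): digits needed for float -> text -> float (9) */
        printf("FLT_DIG         = %d\n", FLT_DIG);
        printf("FLT_DECIMAL_DIG = %d\n", FLT_DECIMAL_DIG);

        /* printing 9 significant digits identifies this float uniquely */
        float f = 0.0070820772089064121246337890625f;
        printf("%.9g\n", f);
        return 0;
    }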
>>>
>>> Best,
>>> Achim (and Amy)
>>>
>>>
>>> On 19/11/2025 16:25, Lawrence Paulson via isabelle-dev wrote:
>>>> Neural_Networks is badly broken. I took a look and the failure is 
>>>> in a proof that tests whether a certain generated construction is 
>>>> equal to an explicit construction. They are supposed to be 
>>>> literally identical and are not. I think we should get back to the 
>>>> authors to find out why it failed and how it can be made robust.
>>>>
>>>> Larry
>> _______________________________________________
>> Afp-submit mailing list
>> Afp-submit at mailman.proof.cit.tum.de
>> https://mailman46.in.tum.de/mailman/listinfo/afp-submit




More information about the isabelle-dev mailing list