[isabelle-dev] Jenkins maintenance
Makarius
makarius at sketis.net
Thu Oct 6 15:02:09 CEST 2016
On 05/10/16 18:07, Lars Hupel wrote:
>> According to the official README_REPOSITORY
>> (http://isabelle.in.tum.de/repos/isabelle/file/ec0fb01c6d50/README_REPOSITORY#l282)
>> there is
>>
>> * no need to test pdf documents
>
> This is arguable. In fact, it is very convenient to have documents in a
> continually working state, because Jenkins also serves a "continuous
> delivery" purpose: The AFP statistics page, and also everything else on
> <https://devel.isa-afp.org/> (including documents) is being produced by
> Jenkins. Failure to produce a document in that sense is also a failure
> to present an entry.
"Continous delivery" seems to be one of the Jenkins defaults that
suddenly intrude the Isabelle development process.
The reasons why generated PDFs for the Isabelle repository should not
be required:
* It demands precious additional minutes in the main build (with its
target of 10-20 min, but presently going slightly beyond 30 min on my
machine).
* A broken LaTeX document is easy to repair.
So if PDFs are not tested in every build, that is a gain for everyone.
There could be a weekly or monthly test to point out LaTeX problems
automatically.
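Such a periodic check could be as simple as a cron job that enables
document preparation for all sessions, e.g. (schedule and option usage
are only a sketch):

    # hypothetical weekly job: build everything with document output,
    # so LaTeX breakage surfaces once a week, not in every main build
    0 4 * * 0  isabelle build -a -o document=pdf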
For AFP, with its focus on PDF articles, the situation is certainly
different. But AFP is not the Isabelle repository.
Many of the open problems are caused by putting everything in one pot:
Isabelle, AFP, testboard, nightly builds with serious measurements.
> In the long term, it could also make sense to present the official
> Isabelle documentation (i.e. everything in "~~/src/Doc") in a similar
> fashion. Currently users have to build documents themselves.
There is already "isabelle build_doc -a". It could be included in a
separate build job.
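A minimal sketch of such a job (the publishing step and its target
directory are assumptions):

    # build all Isabelle documentation
    isabelle build_doc -a

    # publish the resulting PDFs from $ISABELLE_HOME/doc;
    # the web directory is hypothetical
    rsync -a "$(isabelle getenv -b ISABELLE_HOME)/doc/" /var/www/isabelle-doc/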
Even more, there could be a nightly job to run "Admin/Release/build"
and publish the resulting mini website, as a replacement for
http://isabelle.in.tum.de/devel with its "Download" section.
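Roughly like this (repository location, the arguments of
"Admin/Release/build", and the web target are all assumptions):

    # hypothetical nightly job for the development snapshot website
    cd /home/isatest/isabelle-repos && hg pull -u
    Admin/Release/build /home/isatest/release-out   # argument list is a guess
    rsync -a --delete /home/isatest/release-out/ /var/www/devel/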
>> The 1 thread test is the base of all other tests. Without that it
>> becomes very difficult to see anything concerning performance changes.
>
> This is not a problem; I have it on my list anyway.
>
> Makarius, last time we spoke I didn't get the impression that this is
> crucial for performance analysis, which is why I didn't prioritize it.
I pointed out the importance of the threads=1 test several times. A
proper test setup needs to provide a "mostly flat line" as a reference
for everything else. That has been the Isabelle standard for almost 10
years, and was only lost recently.
Without that, performance tuning becomes very difficult. It is one of
the reasons why no serious tuning has happened recently.
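Note that such a baseline requires no special infrastructure; it is
just the existing option mechanism, e.g. (session name only as an
example):

    # single-threaded reference build, producing the "mostly flat line"
    isabelle build -o threads=1 -b HOL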
> As always, I first need to figure out how long this takes on our
> hardware and if necessary, provision an appropriate machine.
I don't like this idea of exclusive "provisioning" of machines for the
Jenkins monster. It is the main reason why test hardware is lacking.
It is a paradoxical situation: a lot of CPU resources are wasted, while
at the same time there is scarcity and austerity.
> It would also be great if, as we discussed, there was a possibility to
> control "ML_OPTIONS" via options instead of environment variables.
This is now addressed by the new Admin/build_history tool (presently
2c5039363ea3), but that is a different story for a different thread.
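For context, the status quo is the environment-variable route, e.g.
(the Poly/ML heap size is only an example value):

    # ML runtime options can presently only be set via the environment
    env ML_OPTIONS="--maxheap 4096" isabelle build -b HOL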
> Pull: Implement a loosely-coupled extension for hgweb to fetch
> information from Jenkins. All changesets and their statuses are recorded
> in there and can be retrieved from the API, not unlike the statistics
> page. This can be done with Javascript and implemented without changing
> anything in Jenkins.
>
> Push: Embrace Bitbucket and use their "Build Status" API [2]. This
> information is displayed in the "Builds" column in the "Commits" view.
> Of course, this only works for repositories hosted on Bitbucket. This
> likely requires some Scala code to work (i.e. performing an
> authenticated HTTP request).
>
> I'd be happy if someone other than me could take the lead on this.
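For orientation, both directions amount to plain HTTP requests that
such code (in Scala or otherwise) would have to perform; sketched here
with curl, with host names, credentials, and identifiers as
placeholders:

    # pull: Jenkins publishes build results via its standard JSON API
    curl https://ci.example.org/job/isabelle/lastBuild/api/json

    # push: post a build status for a commit (Bitbucket 2.0 REST API)
    curl -u USER:PASSWORD -H "Content-Type: application/json" \
      -d '{"state": "SUCCESSFUL", "key": "isabelle-makeall",
           "url": "https://ci.example.org/job/isabelle/42/"}' \
      https://api.bitbucket.org/2.0/repositories/OWNER/REPO/commit/SHA/statuses/build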
I have already started to think about the problem of integrating test
data into Isabelle history myself.
Presently there are two main questions:
* Is it feasible to have a standard database server hosted at TUM
(one that can run for the next 10 years) that is publicly accessible
via some standard protocol and standard authentication? (A sketch
follows after this list.)
* What are the prospects for generally available test hardware --
non-exclusive as in the past?
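"Standard protocol and standard authentication" could mean e.g.
PostgreSQL over SSL with password login (host and database names are
hypothetical):

    # hypothetical read-only access to test data hosted at TUM
    psql "host=db.isabelle.in.tum.de dbname=isatest user=reader sslmode=require"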
> In software engineering, this is known as "Not Invented Here" syndrome :-)
>
>> * How much efforts are required to make proper use of it? I.e. what is
>> the "payload" of it as a "carrier system" for an actual application.
>
> The advantages, as outlined by Florian, by far outweigh the efforts.
Such discussions always need the true sources at hand. Where are the
Isabelle/Jenkins sources?
Makarius