How to skip (problematic) Isabelle sessions in AFP?

I tried to use the AFP (02/22/2021) with Isabelle 2021, but the jEdit/Isabelle PIDE wouldn't load after the AFP directory was added to the user ROOT file. The error is shown below and seems to be about a specific package:
I don't really need the entry in question (nor do I know what it does).
My question is:
Is there a way to use a subset of the AFP and exclude the problematic entries shown in the screenshot?
--- Update ---
As pointed out in the comments, the AFP download seemed to be lagging a couple of days behind. Using afp-02-24-2021, the initial error went away. However, when selecting the session Jordan_Normal_Form from jEdit, there is a new error about the JNF-AFP-Lib build failing, as shown below:
The question remains: the AFP is a large collection, and there could be multiple sources of error.
In case of such errors, is there a way to select a subset of the AFP to use or debug?
If not, is there a systematic way to test which AFP entries do or do not build?

Original question
The problem was that you (unknowingly) used an AFP release that did not match the Isabelle release. To avoid that problem in the future, I recommend using the Mercurial repository directly:
https://foss.heptapod.net/isa-afp/afp-2021
The Mercurial repository for a new Isabelle release gets created during the RC phase, so you can always be sure to have matching versions.
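If you go that route, the usual setup looks roughly like this (paths are illustrative; see the AFP "Using" instructions for the authoritative steps):
$ hg clone https://foss.heptapod.net/isa-afp/afp-2021
$ isabelle components -u afp-2021/thys
The second command registers the AFP session directory as an Isabelle component, so its sessions become available in jEdit's session selector.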
In the case of these mismatches, it is usually not possible to select a subset of the AFP that "works", because the session management changed considerably between 2020 and 2021.
Update
The problem you're facing here is that you have selected a big session as a prerequisite and it takes too long to build with the default configuration.
You can build the session on the command line with an increased timeout as follows:
isabelle build -bv -d <path to afp>/thys -o timeout_scale=3 Jordan_Normal_Form
Increase the factor if it keeps timing out.
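If you only need a subset of the AFP, or want to see which sessions build at all, the selection flags of isabelle build help (session names other than Jordan_Normal_Form are placeholders here):
isabelle build -bv -d <path to afp>/thys Jordan_Normal_Form
isabelle build -bv -d <path to afp>/thys -a -x Some_Broken_Entry
isabelle build -n -a -d <path to afp>/thys
The first line builds only the named session and its prerequisites, -x excludes a session and everything depending on it, and -n checks the session structure and dependencies without building anything.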
General remark
You ask, "In case of such errors, is there a way to select a subset of the AFP to use or debug?". The answer to that question depends on the kind of error. Your post currently contains two vastly different problems, so it is unfortunately not possible to give a general answer.

Related

Automake/autotools and using "--dry-run" and "--always-make" for make

I stumbled upon an issue when I recently switched to VSCode as my editor. I have several projects with a full (medium-complex+) autotools setup, and they all work fine. However, I discovered that the Makefile plugin for VSCode, in order to initialize itself (and find all dependencies and targets), starts by running
make --dry-run --always-make
as a first-time initialization. This throws the makefile (or actually the re-configuration) into an endless loop re-running "configure" (since the targets are never actually created on disk).
I have also confirmed this behavior with the smallest possible autoconf/automake setup. I can also kind of understand why it happens (and it seems make has an internal way to detect this exact situation via the special variable MAKE_RESTARTS, which could possibly be used to detect the cyclic behavior).
Is there a known best-practice workaround? Or is it even a reasonable expectation that these two options should work in combination? (Good to have a second opinion before I go down the rabbit hole of reminding myself of all the details I have forgotten about the magical land of autotools.)
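For illustration, the kind of guard hinted at with MAKE_RESTARTS might look roughly like this in a hand-written GNUmakefile (an untested sketch, and it only helps for rules you control yourself):
ifneq ($(MAKE_RESTARTS),)
$(warning make restarted $(MAKE_RESTARTS) time(s); skipping re-configuration)
# ... neutralise the config.status / Makefile remake rules here ...
endif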

Using git for feedback from proof readers

I am currently writing a text with R bookdown and have asked two friends to read it and give comments, corrections, and general feedback. My source files for the text are stored on GitHub, and I would like my collaborators to make changes to the files (one per chapter) with the help of git. However, none of us are really experts with git, which makes it hard to figure out a suitable workflow.
For now, we decided that each of them creates his own branch so that he does not push directly into the master branch. After reading their changes, I would like to decide what to merge into the master branch and what not. So far, it looks like each change needs to be in a separate commit, because I am not able to merge single lines from a specific commit (not sure if that is possible at all). However, this seems like a lot of annoying and unnecessary commits to create. So I guess I am looking for a way to avoid that, and/or general pointers towards a good workflow for this kind of project.
A useful command here is git cherry-pick; it allows you to apply selected commits from another branch.
A good general practice is that commits should be self-contained (they make sense if applied alone) and target a specific feature (in the use case mentioned, that could be a paragraph, a section, or a chapter).
In the end, if you would like to apply only specific changes from a commit, that has to happen manually: someone has to decide which parts to apply and which not. A commit can be edited using git rebase -i <branch name> before being merged. This question might also be useful.
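For example, to take a single commit from a reviewer's branch into master (the branch name and hash are illustrative):
$ git checkout master
$ git log reviewer-branch --oneline
$ git cherry-pick a1b2c3d
git log shows the commits on the reviewer's branch, and git cherry-pick applies just the chosen one to master.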
I finally found what worked for me here. Basically, on my master branch I had to use
git merge --no-commit --no-ff branch-to-merge
This merges all changes into my master branch but does not immediately commit them, so they can still be staged or unstaged. Then I can decide which line changes to include by staging the ones I want to keep and discarding all the others. Finally, I commit all staged line changes, et voilà, that's exactly what I wanted.
Side note: I am using GitKraken, and as a git beginner I enjoy the GUI, but the merge with the options "no-commit" and "no-fast-forward" had to be done on the git command line (at least I could not find a way to do that in the GUI). Choosing which lines to stage and which to discard is then an easy task in the GUI.
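For reference, the whole flow on the command line might look roughly like this (branch names are illustrative):
$ git checkout master
$ git merge --no-commit --no-ff reviewer-chapter1
$ git add -p
$ git checkout -- .
$ git commit -m "Incorporate feedback on chapter 1"
git add -p stages hunks interactively (and can split them further), git checkout -- . discards whatever was left unstaged, and the final commit contains only the accepted changes.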

How to install an R package to R-3.3.0 from GitHub, which is built on R-3.4.0?

We have R-3.3.0 on our university's computing system. For some reason, the IT staff do not want to update R to 3.4.0 any time soon. However, I need to install an R package from GitHub that is built on R-3.4.0. Is there any way to install that GitHub package into R-3.3.0?
@patrick's answer may work just fine. The benefit (if it works) is that you get all recent changes and functionality of the package. However, you may never know whether one of the changes requiring 3.4 is about accuracy or correctness, meaning you may still get a return value without knowing whether it is correct.
For this answer, I'm going to assume that there is a valid reason to not use the current version and trick R into installing it anyway.
Grab from a specific commit
Go to the repo, https://github.com/mshasan/OPWeight in this case.
Open the DESCRIPTION file and click on the "Blame" button on the right. This shows, for each group of lines, the message header and date of the most recent commit that touched them. In this case, it shows "Update DESCRIPTION":
Click on that commit message, and you're taken to the specific commit. Seeing that this is a single-line change, it is likely that an earlier commit was what actually made R (>= 3.4.0) a hard requirement. Take note of the commit hash (5c0a43c in this case).
Go back to the repo's main page and click on "Commits". If you now search for that 7-character hash prefix, you'll see it happened on June 20, 2017. Unfortunately, the commit descriptions and timeline do not give a clear indication of where the version-dependent change happened.
If you can find "the commit" that did it, then take that hash-substring and use that as your ref="..." argument to install_github. If not, however, you are either stuck (1) trying them iteratively or randomly, or (2) asking the author at which commit they started using 3.4-specific code.
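If you have a local clone (see the next section), git's "pickaxe" search can narrow this down faster than clicking through GitHub; for example (the search string is only a guess at what changed):
$ git log --oneline -S"3.4.0" -- DESCRIPTION
This lists the commits that added or removed the string "3.4.0" in the DESCRIPTION file.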
Once you know a ref to use (or want to try), then run
devtools::install_github("mshasen/OPWeight", ref="5c0a43c")
(This is obviously the wrong ref to use, since that's the first commit at which we are certain the dependency exists.)
Using tests to know which to use
Since the repo contains a tests/ subdirectory, one can hope/assume that the tests will accurately catch it if things are not working correctly in your R-3.3. This alternative method involves testing each commit (before the DESCRIPTION file was changed) on your specific version of R until the tests no longer fail.
Using git (command-line or GUI), clone the repo to your local computer.
$ git clone https://github.com/mshasan/OPWeight
Now, iterate through the references (found using the above method or perhaps with git log) with something like:
$ git checkout --detach <hash_substring>
... and in R, run
devtools::test("path/to/your/copy/of/OPWeight")
If the tests have been set up correctly and you chose a worthy version, then stick with it.
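A rough way to automate that iteration from the shell (an untested sketch; it assumes Rscript and devtools are available, and the paths are illustrative):
for ref in $(git -C OPWeight log --format=%h); do
  git -C OPWeight checkout --detach "$ref"
  echo "=== testing at $ref ==="
  Rscript -e 'devtools::test("OPWeight")'
done
Stop at the newest commit whose test run comes back clean.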
You can find the DESCRIPTION file for OPWeight here.
Change this part
Depends:
R (>= 3.4.0),
to whatever R version you are using and see if things break. The logic of the DESCRIPTION file is explained here. Obviously a last resort.
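Once the requirement is loosened (e.g. to R (>= 3.3.0)), the edited local copy can be installed in the usual way (the path is illustrative):
$ R CMD INSTALL path/to/your/copy/of/OPWeight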

How to avoid code duplication for non-data structures (views, stored procedures etc)

My project contains a lot of objects like views and stored procedures which change quite frequently. Right now I have to create a new SQL script for every update, containing the complete source code of the changed objects even though I've actually changed only a few rows. This leads to massive code duplication, and I also find it difficult to review these changes.
I'd like to have only one current version of the SQL script for every object (view, procedure, etc.) and recreate these objects every time I redeploy the database. As a result, I could change the existing source file (as in Java or C programming) instead of creating a new update script every time I need to alter a view or procedure.
Is there a way to execute some scripts every time I migrate the database with Flyway?
I'm not sure why this got so many downvotes; it's a perfectly understandable and valid question. Perhaps it's because it closely resembles this open question:
Migrating Stored Procedures with Flyway
We are actually starting to push against this issue now. We've been using flyway for development and testing (and love it). But we've come to a point where we're starting to have to use procs/triggers/views (p/t/v's) and the fundamental disconnect between how we did it before, and how we must use flyway, is starting to be a strain.
Before, for a given database object (let's say it's a procedure), there'd be one source file. And if you needed to change the proc 'n' times, there would be 'n' versions of the same file in your VCS. Diff tools work great, IDEs all understand this, merges detect when two developers working in separate branches change the proc, etc. You know, old school.
But with flyway, any one proc with 'n' changes is now scattered across 'n' files. Instead of "one object in one file with 'n' versions", you have "one object in 'n' files with one change each". I now need to do a text search in my IDE for any instance of "proc_name" if I want to know the history of changes to the proc. The VCS knows nothing about it. Devs can each make a migration in their own branches that succeeds when deployed on its own, but together they leave the proc missing an update.
I'm not saying any of this to complain about flyway, and I fully realize it's not a simple area. I'd almost say it's unsolvable (by flyway).
We're scheming how to handle this problem, and I'd be very interested to know how others have handled it.
Repeatable migrations are supported by Flyway as of version 4.0.
Just add SQL files starting with "R__" (and without any version information) to your migration folder:
R__Blue_cars.sql
You have to ensure that the script can be applied repeatedly.
This is usually done with "CREATE OR REPLACE" clauses in your DDL statements.
https://flywaydb.org/documentation/migration/repeatable
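For example, a repeatable migration for a view could look like this (file, table, and column names are made up; the exact CREATE OR REPLACE syntax varies between databases):
-- R__Blue_cars.sql, re-applied by Flyway whenever its checksum changes
CREATE OR REPLACE VIEW blue_cars AS
    SELECT id, license_plate
    FROM cars
    WHERE color = 'BLUE';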

How to properly debug OCaml code?

May I ask how experienced OCaml developers debug their code?
What I am doing is just using Printf.printf. It is too troublesome, as I have to comment all the prints out when I need clean output.
How can I better control this debugging process? Is there a special annotation to switch such logging on or off?
Thanks.
You can use Bolt for this purpose; it's a syntax extension.
By the way, OCaml also has a real debugger.
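If pulling in a syntax extension feels like overkill, the same idea can be hand-rolled: keep the prints in the code but guard them with a runtime flag. A minimal sketch (module and flag names are made up):
(* debug.ml: a toggleable replacement for bare Printf.printf calls *)
let enabled = ref false          (* set from a CLI flag, an environment variable, etc. *)

let printf fmt =
  if !enabled then Printf.eprintf fmt
  else Printf.ifprintf stderr fmt  (* same type as eprintf, but prints nothing *)

(* elsewhere: Debug.enabled := true;  Debug.printf "x = %d\n" x *)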
There is a feature of the OCaml debugger that you may not be aware of, which is not commonly found with stateful programming: it is called time travel (see section 16.4.4 of the manual). Basically, since all of the information from step to step is kept, with the changes associated with each step saved during processing, one can move back through those changes in time to see the values at any given step. Think of it as running the program once, logging all of the values at each step into a data store, and then indexing into that store by step number to see the values at that step.
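A rough picture of such a session (the program must be compiled to bytecode with -g; file, module, and variable names are illustrative): break sets a breakpoint, run executes forward to it, print inspects a value at the current step, and backstep moves one step back in time.
$ ocamlc -g -o prog.byte prog.ml
$ ocamldebug prog.byte
(ocd) break @ Prog 12
(ocd) run
(ocd) print x
(ocd) backstep
(ocd) print x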
You can also use ocp-ppx-debug, which adds a printf with the correct source location for you instead of you adding them manually.
https://github.com/OCamlPro-Couderc/ocp-ppx-debug
