Why does update discard information in the UpdateReport in sbt?

I was looking into making some evictions fatal in sbt builds by changing the update task. I quickly found that the UpdateReport it returns contains no eviction information. I then looked at the source code and noticed that evicted calls updateFull, whose only difference from update is that it flips two flags in a downstream call, which results in that information not being discarded.
The discarding is done very explicitly, and it is the sole distinction between update and updateFull, both of which are one-liners calling the same method. On the other hand, I see no purpose in discarding it. Replacing update with updateFull would let me build what I need on top of it, but I worry it might have unforeseen consequences.
I tried replacing update with updateFull and ran a build successfully, but that doesn't prove anything. So I'm wondering: why does update discard this information?
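For concreteness, this is roughly the kind of task I'd like to define on top of that information. It is only a sketch: failOnEvictions is my own name, and the EvictionWarning accessor is an assumption that may differ between sbt versions.

// build.sbt -- hedged sketch, not a confirmed API usage
val failOnEvictions = taskKey[Unit]("Abort the build if any evictions were reported")

failOnEvictions := {
  val warning = evicted.value   // sbt's EvictionWarning, which goes through updateFull
  val report  = warning.lines   // assumed accessor: human-readable eviction lines
  if (report.nonEmpty)
    sys.error(("Evictions detected:" +: report).mkString("\n"))
}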

Related

Is it safe to initialize a struct containing a std::shared_ptr with std::memset?

I'm modifying code written in C++ in order to add several features required by my company. I need to modify the code as little as possible, because it's public code obtained from a Git repository, and we want to avoid deviating from the original source in case we need to synchronize with new upstream versions in the future.
In this code, a structure is initialized with a call to std::memset, and I needed to add a shared pointer to this structure.
I notice no issue with that: the code compiles, links, and works as expected, and I don't even get any warnings during compilation.
But is it safe to do it this way? Can a std::shared_ptr be correctly initialized if it is part of a structure initialized with std::memset? Or are there side effects or hazards that make this a bad idea?
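For illustration, here is a minimal sketch of the pattern I'm describing; Config and its members are made-up names, not the actual code:

// Hedged sketch of the situation, with illustrative names.
#include <cstring>
#include <memory>

struct Config {
    int flags;
    double threshold;
    std::shared_ptr<int> payload;   // the shared pointer I added
};

int main() {
    Config cfg;
    std::memset(&cfg, 0, sizeof(cfg));   // the existing initialization I'd rather not touch
    return cfg.payload ? 1 : 0;          // appears to behave as an empty pointer in practice
}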

Automake/autotools and using "--dry-run" and "--always-make" for make

I stumbled upon an issue when I recently switched to VSCode as editor.
I have several projects with a full (medium-complexity) autotools setup, and they all work fine. However, I discovered that the Makefile plugin for VSCode, in order to initialize itself (and find all dependencies and targets), starts by running
make --dry-run --always-make
as a first-time initialization. This throws the makefile (or rather the re-configuration rules) into an endless loop, re-running "configure" over and over, since the targets are never resolved on disk.
I have also confirmed this behavior with the smallest possible autoconf/automake setup. I can also sort of understand why it happens (and it seems make has an internal way to detect this exact situation via the special variable MAKE_RESTARTS, which could possibly be used to break the cycle).
Is there a known best-practice workaround? Or is it even a reasonable expectation that these two options should work in combination? (It would be good to have a second opinion before I go down the rabbit hole of reminding myself of all the details I've forgotten about the magical land of autotools.)
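For reference, this is the kind of MAKE_RESTARTS-based guard I have in mind. It is only a sketch with illustrative rule names (I realize the real rules are generated by automake), and recipe lines must be indented with a tab:

# Skip the auto-reconfigure rules once make has already restarted itself;
# GNU make sets MAKE_RESTARTS after re-executing due to a remade makefile.
ifneq ($(MAKE_RESTARTS),)
Makefile config.status: ;
else
Makefile: Makefile.in config.status
	./config.status Makefile
config.status: configure
	./configure
endif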

How to skip (problematic) Isabelle sessions in AFP?

I tried to use the AFP (02/22/2021) with Isabelle 2021, but the jEdit/Isabelle PIDE wouldn't load after the AFP directory was added to the user ROOT file. The error is shown below and seems to be about a specific package:
I don't really need the entry in question (or know what it does).
My question is:
Is there a way to use a subset of the AFP and exclude the problematic entries shown in the screenshot?
--- Update ---
As pointed out in the comments, the AFP seemed to be lagging a couple of days behind. Using afp-02-24-2021, the initial error went away. However, when selecting the session Jordan-Normal-Form from jEdit, there is a new error about the JNF-AFP-Lib build failing, as shown below:
The question remains: the AFP is a large collection, and there could be multiple sources of error.
In case of such errors, is there a way to select a subset of the AFP to use or debug?
If not, is there a systematic way to test which AFP entries do or do not build?
Original question
The problem was that you (unknowingly) used an AFP release that did not match the Isabelle release. To avoid that problem in the future, I recommend using the Mercurial repository directly:
https://foss.heptapod.net/isa-afp/afp-2021
The Mercurial repository for a new Isabelle release gets created during the RC phase, so you can always be sure to have matching versions.
In the case of these version mismatches, it is usually not possible to select a subset of the AFP that "works", because the session management changed considerably between 2020 and 2021.
Update
The problem you're facing here is that you have selected a big session as a prerequisite and it takes too long to build with the default configuration.
You can build the session on the command line with an increased timeout as follows:
isabelle build -bv -d <path to afp>/thys -o timeout_scale=3 Jordan_Normal_Form
Increase the factor if it keeps timing out.
General remark
You ask, "In case of such errors, is there a way to select a subset of AFPs to use or debug?". The answer to that question depends on the kind of error. Your post currently contains two vastly different problems, so it's not possible to give a general answer, unfortunately.

Why doesn't RStudio clear its Global Environment when something goes wrong?

Using R Version 4.0.2 and RStudio Version 1.3.1056
This is honestly one of the strangest features I've seen in RStudio. I suppose there's probably a good reason for it to be there, but I'm currently not seeing it, and I feel it can lead to a lot of issues with misleading data.
Basically, to my understanding, when you create and open an R project in RStudio, RStudio creates a Session with a Global Environment.
Every time you run something, the result is added to the Global Environment; I assume it's kept as a cached value.
However, I've encountered situations where this feature leads to one of the following:
Outdated or wrong values being shown in my tests.
Cases where a function stops working altogether after changing one piece of code, executing the new code, and then undoing the change.
Functions "bleeding into" other files without importing/sourcing them.
Cases 1 and 2 obviously lead to a lot of issues while testing. Suppose you run a test like
test <- someFunction()
test # to display the value of test
If the code is correct, the test will execute and the result will be stored in test in the Global Environment.
However, if you then break the code and run the test again, the old value will print, because test already has a value stored in the Global Environment, even though the function failed and thus didn't return anything. Of course, if you scroll up the console feed, you might find a line after test <- someFunction() saying "someFunction failed for X reason", but I still think this is both misleading and unintuitive. Sometimes the result of a function is very large, and it's tedious to scroll all the way up the console to check whether the code exited with an error, whereas other IDEs would simply tell you at the end of the output that the code failed, rather than printing the old, outdated value.
Example: running the proper code (screenshot).
Then running the code after changing is.na to the non-existent is.not.na (screenshot).
Notice how it still prints the old value, belonging to the previous version of the function.
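In text form, the experiment looks roughly like this (someFunction is a stand-in name):

# Working version: drops the NA value.
someFunction <- function() { x <- c(1, NA, 3); x[!is.na(x)] }
test <- someFunction()   # assignment succeeds
test                     # prints 1 3

# Broken version: is.not.na does not exist, so the call errors out.
someFunction <- function() { x <- c(1, NA, 3); x[!is.not.na(x)] }
test <- someFunction()   # error, so the assignment never happens...
test                     # ...and this still prints the old 1 3 from the Global Environment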
The third case can also lead to misleading scenarios.
If you run a function definition in any file within your session, the function is stored in the Global Environment. This lets you call it from any other file, even if you haven't added a source statement at the top to load the file that defines it.
Once again, this can lead to cases where you inadvertently change or add a function in file B without running it, then invoke it from file A and get unexpected results, because you were actually calling the old, outdated version; the Global Environment knows nothing about the changes or the new function.
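As a tiny illustration (the file names and helperFn are made up): suppose file B defines

helperFn <- function(x) x * 2   # defined and run once from fileB.R

Once that line has been executed in the session, file A can call helperFn(21) and get 42 without ever sourcing fileB.R, and it will keep using that old definition even after fileB.R changes, until the new definition is run or the environment is cleared.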
All of these issues are rather easy to fix, but I think that's beside the point. Why is this a feature in the first place? Why isn't the Global Environment emptied when execution fails? I know you can manually empty it whenever you want, but it seems odd that the IDE doesn't do it on its own or, to my knowledge, even provide an option to do so.
I can imagine it provides some benefit at runtime, but is that benefit really significant enough to justify these behaviors?

How to properly debug OCaml code?

How does an experienced OCaml developer debug their code?
What I'm doing now is just using Printf.printf. It's troublesome because I have to comment all the calls out when I need clean output.
How can I better control this debugging process? Is there a special annotation to switch such logging on and off?
Thanks.
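To make the question concrete, this is the kind of switch I'm hoping exists, sketched by hand (dbg and the -debug flag are made-up names, not a library API):

(* Gate the printing behind a flag instead of commenting out the calls. *)
let debug = ref false

let dbg fmt =
  if !debug then Printf.eprintf fmt
  else Printf.ifprintf stderr fmt

let () =
  debug := Array.exists (fun a -> a = "-debug") Sys.argv;
  dbg "intermediate value: %d\n" 42;
  print_endline "clean output stays on stdout"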
You can use bolt for this purpose. It's a syntax extension.
By the way, OCaml has a real debugger.
There is a feature of the OCaml debugger that you may not be aware of, uncommon in debuggers for stateful languages, called time travel (see section 16.4.4 of the manual). Basically, because the changes associated with each step are saved during execution, you can move backwards and forwards in time to see the values at any given step. Think of it as running the program once while logging all the values at each step into a data store, then indexing into that store by step number to see the values at that step.
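A rough session looks like this (the module name, line number, and variable are placeholders):

ocamlc -g -o prog prog.ml      # time travel needs bytecode compiled with -g
ocamldebug ./prog
(ocd) break @ Prog 12          # breakpoint in module Prog at line 12
(ocd) run
(ocd) print x                  # inspect a value at the current time
(ocd) backstep                 # step one event backwards
(ocd) goto 0                   # jump back to the very start of the execution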
You can also use ocp-ppx-debug, which adds a printf with the correct source location instead of you adding it manually.
https://github.com/OCamlPro-Couderc/ocp-ppx-debug
