Only apply tag using semantic-release

As far as I can tell from the semantic-release documentation, it performs several steps in sequence: it applies the tag to the commit and publishes it to the GitHub releases.
Q: Is it possible to separate those two steps somehow?
In the 1st stage, I only want to apply the tag.
In the 2nd stage, I want to publish the artifact to GitHub.
Note: We are using the following command
npx semantic-release
The steps done by semantic-release: semantic-release: release-steps
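One possible direction (only a sketch, not verified; the config file name, tag, and artifact path below are hypothetical): point the first run at a shareable config whose plugins list omits @semantic-release/github, so it only analyzes commits and creates the git tag (tagging is a core semantic-release step), and then publish the artifact for that tag separately, e.g. with the GitHub CLI:
$ npx semantic-release --extends ./release.tag-only.json   # stage 1: tag only
$ gh release create v1.2.3 ./dist/my-artifact.tgz --notes-file CHANGELOG.md   # stage 2: publish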

Self-updating *.phpt tests with phpunit?

Background I: *.phpt in phpunit
I recently read an article about *.phpt support in PHPUnit:
https://www.moxio.com/blog/32/start-testing-with-phpt-tests-in-phpunit
The big advantages here:
Supported out of the box by PHPUnit: no custom code is needed to discover and handle those files.
A common format that everybody seems to agree upon.
One can directly pass a specific *.phpt file as a CLI argument (see the example after the snippet below).
A typical *.phpt file would look like this:
--TEST--
Basic arithmetic - addition
--FILE--
<?php
var_dump(42 + 1);
?>
--EXPECT--
int(43)
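For example (the file path is hypothetical), such a file can be run on its own:
$ ./vendor/bin/phpunit tests/addition.phpt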
Background II: Self-updating fixtures
Before that, I had read about unit tests that can update (i.e. overwrite) their own fixture files when an environment variable is set:
https://tomasvotruba.com/blog/2020/07/20/how-to-update-hundreds-of-test-fixtures-with-single-phpunit-run/
The following CLI command would then overwrite the "expected" part of the fixtures for all failing tests:
$ UPDATE_TESTS=1 ./vendor/bin/phpunit
One would then review the git diff, and either accept the changes as a desired change in functionality, or fix the behavior.
The fixture format proposed in that article is very similar to the *.phpt format, but in the end the format can be completely arbitrary and project-specific.
Benefits:
Start with an empty "--EXPECT--" section and have that part generated automatically.
Automatically update tests after a behavior change, e.g. after a bug was fixed or when BC-breaking changes were introduced.
Frequently update fixtures during early development, when the behavior is not stable yet.
No more copy/paste from CLI diff output.
Disadvantages:
No built-in support in PHPUnit; it requires custom code, which does not scale well if I want to do this in multiple projects.
One cannot target an individual fixture file as a CLI argument.
Question
Is there a built-in way to auto-update those *.phpt test files with phpunit?
Or should I perhaps use something other than phpunit?

Using git for feedback from proof readers

I am currently writing a text with R bookdown and have asked two friends to read it and give me comments, corrections and general feedback. The source files for the text are stored on GitHub, and I would like my collaborators to make their changes to the files (one per chapter) with the help of git. However, none of us are really git experts, which makes it hard to figure out a suitable workflow.
For now, we have decided that each of them creates his own branch so that he does not push directly into the master branch. After reading their changes, I would like to decide what to merge into the master branch and what not. So far, it looks like each change needs to be in a separate commit, because I am not able to merge single lines from a specific commit (I am not sure whether that is possible at all). However, this seems like a lot of annoying and unnecessary commits to create, so I guess I am looking for a way to avoid that and/or general pointers towards a good workflow for this kind of project.
A useful command here is git cherry-pick; it allows you to apply specific commits from a branch.
A good general practice is that commits should be self-contained (they make sense when applied on their own) and should target a specific piece of work (in the use case mentioned, that could be a paragraph, a section or a chapter).
In the end, if you would like to apply only specific changes from a commit, that has to happen manually: someone has to decide which parts to apply and which not. A commit can be edited with git rebase -i <branch name> before being merged. This question might also be useful.
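A minimal sketch of that (the branch name and commit hash are hypothetical):
$ git checkout master
$ git log reviewer-branch            # find the commits you want
$ git cherry-pick a1b2c3d            # apply a single commit from the reviewer's branch
$ git cherry-pick -n a1b2c3d         # or apply it without committing, so it can still be edited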
I finally found what works for me here. Basically, on my master branch I had to use
git merge --no-commit --no-ff branch-to-merge
This merges all changes into my master branch but does not commit them immediately, so they can still be staged/unstaged. I can then decide which line changes to include by staging the ones I want to keep and discarding all the others. Finally, I commit all staged line changes et voilà, that's exactly what I wanted.
Side note: I am using GitKraken, and as a git beginner I enjoy using the GUI, but the merge with the "no-commit" and "no-fast-forward" options had to be done via the git console (at least I could not find a way to do that in the GUI). Choosing which lines to stage and which to discard is then an easy task via the GUI.
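For reference, the whole sequence on the command line might look like this (a sketch; branch-to-merge stands for the reviewer's branch, and git reset -p / git checkout -p are one way to pick changes hunk by hunk):
$ git checkout master
$ git merge --no-commit --no-ff branch-to-merge
$ git reset -p         # interactively unstage the hunks you do not want to keep
$ git checkout -p      # and drop those hunks from the working tree as well
$ git commit           # commit whatever is still staged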

How to install an R package to R-3.3.0 from GitHub, which is built on R-3.4.0?

We have R-3.3.0 on our university's computing system. For some reason, the IT staff do not want to update to R-3.4.0 any time soon. However, I need to install an R package from GitHub that is built on R-3.4.0. Is there any way to install that package from GitHub under R-3.3.0?
@patrick's answer may work just fine. The benefit (if it works) is that you get all the recent changes and functionality of the package. However, you never know whether one of the changes that requires 3.4 is about accuracy or correctness, meaning you may still get a return value without knowing whether it is correct.
For this answer, I am going to assume that you have a valid reason not to use the current version of R, and trick R into installing the package anyway.
Grab from a specific commit
Go to the repo, https://github.com/mshasan/OPWeight in this case.
Open the DESCRIPTION file and click on the "Blame" button on the right. This shows, for each group of lines, the message header and date of the most recent commit that touched them. In this case, it shows "Update DESCRIPTION".
Click on that commit message, and you are taken to the specific commit. Seeing that this is a single-line change, it is likely that an earlier commit is what actually made R (>= 3.4.0) a hard requirement in the code. Take note of the commit hash (5c0a43c in this case).
Go back to the repo's main page and click on "Commits". If you search for that 7-character hash prefix, you will see that it happened on June 20, 2017. Unfortunately, the commit descriptions and timeline do not give a clear indication of where the version-dependent change happened.
If you can find "the commit" that introduced it, take that hash prefix and use it as the ref="..." argument to install_github. If not, you are stuck either (1) trying commits iteratively or randomly, or (2) asking the author at which commit they started using 3.4-specific code.
Once you know a ref to use (or want to try), then run
devtools::install_github("mshasen/OPWeight", ref="5c0a43c")
(This is obviously the wrong ref to use, since that's the first commit at which we are certain the dependency exists.)
Using tests to know which commit to use
Since the repo contains a tests/ subdirectory, one can hope/assume that the tests will catch whether things are not working correctly under your R-3.3.0. This alternative method involves testing each commit (from before the DESCRIPTION file was changed) on your specific version of R until the tests no longer fail.
Using git (command-line or GUI), clone the repo to your local computer.
$ git clone https://github.com/mshasan/OPWeight
Now, iterate through the references (found using the above method or perhaps with git log) with something like:
$ git checkout --detach <hash_substring>
... and in R, run
devtools::test("path/to/your/copy/of/OPWeight")
If the tests have been set up correctly and you chose a worthy version, then stick with it.
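Putting the two steps together, a rough shell sketch (assuming devtools and the package's test dependencies are installed under R-3.3.0; the commit range is only an example based on the hash above):
$ cd OPWeight
$ for ref in $(git rev-list --max-count=10 5c0a43c^); do
>     git checkout --detach "$ref"
>     echo "== testing $ref =="
>     Rscript -e 'devtools::test(".")'
> done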
You can find the DESCRIPTION file for OPWeight here.
Change this part
Depends:
R (>= 3.4.0),
to whatever R version you are using and see if things break. The logic of the DESCRIPTION file is explained here. This is obviously a last resort.
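In practice that could look like this (a sketch; the sed call assumes GNU sed, and loosening the constraint may of course still break at runtime if the package genuinely needs 3.4 features):
$ git clone https://github.com/mshasan/OPWeight
$ cd OPWeight
$ sed -i 's/R (>= 3.4.0)/R (>= 3.3.0)/' DESCRIPTION   # loosen the version constraint
$ R CMD INSTALL .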

Conditional addSbtPlugin based on scalaVersion

I'm using a plugin (sbt-scapegoat) which only works for Scala 2.11.
Can I have a conditional addSbtPlugin based on scalaVersion? Like:
if (scalaVersion.value.startsWith("2.11")) addSbtPlugin("com.sksamuel.scapegoat" %% "sbt-scapegoat" % "0.94.6")
How can I do this in SBT?
Jianshi
tl;dr It's not possible given the description of the problem.
There are at least two build configurations involved in an sbt project: the project proper (the one you actually want to bet your money on) and the meta-build that builds your build. Yes, I know it sounds a little weird, but it is a very powerful concept IMHO.
See sbt is recursive:
The project directory is another build inside your build, which knows how to build your build. To distinguish the builds, we sometimes use the term proper build to refer to your build, and meta-build to refer to the build in project. The projects inside the metabuild can do anything any other project can do. Your build definition is an sbt project.
sbt itself runs on top of Scala and requires a fixed version of it. There is no way to change that unless you fancy spending time on things you really should not be touching in the first place :)
What you can do is add the plugin in project/plugins.sbt and then, in the project itself, apply the plugin's settings selectively, based on the scalaVersion of the project's own build, not the meta-build's.
It is not as complicated as this answer reads, but explaining simple concepts is usually not an easy task for me. Have fun with sbt! It will pay you back very soon when used properly.
Updated answer for 2020: You can use .filter on addSbtPlugin.
For example, the following works:
val scalafixEnabled = System.getProperty("SCALAFIX", "").trim.nonEmpty
addSbtPlugin("ch.epfl.scala" % "sbt-scalafix" % "0.9.14").filter(_ => scalafixEnabled)

Adding additional make/test targets to an autotools project

We have an autotools project that has a mixture of unit and integration tests, all of which run via 'make check'. This isn't ideal, as some of the integration tests take a while, and have all sorts of dependencies (database, etc.)
I'd like to separate the integration tests and assign them their own make target. That way, unit tests can still be run often (via make check), and the integration tests can be run as needed in a similar fashion.
Is there a straightforward (or otherwise) way to add an additional make target?
NOTE: I should probably also add that this is a large project, so editing/maintaining every makefile by hand is not desirable. I'd like to do it the 'autotools way' if possible.
-- UPDATE 1 --
I've attempted Jon's solution, and it's a step closer, but not quite there. I still have a couple of issues:
1) Recursion - I'm OK with modifying the Makefile.am in the root of the build tree, as well as in any directory that contains tests, but it seems like there should be a way to do this without changing every Makefile.am in the hierarchy (the check target works that way, after all).
2) .PHONY - I keep getting warnings about .PHONY being redefined. That is understandable, because it is also being set by another package (specifically, doxygen). How do I make the two play nicely together?
In your .am files, any plain make syntax is passed through into the generated Makefile. So if you want a new target, just write it as you would in an ordinary Makefile and it will appear in the generated Makefile. Put the following at the bottom of the relevant .am files:
integration-tests: prerequisites...
	commands to run the tests
.PHONY: integration-tests
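After editing the Makefile.am files, regenerate the build system and run the new target (a sketch; the exact autoreconf/configure flags depend on the project):
$ autoreconf --install
$ ./configure
$ make integration-tests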
Since there haven't been any more responses, I'm going to answer with my solution.
I solved the recursion issue by eliminating the recursion altogether. Using this page
as a guide, I switched the entire project from recursive make to non-recursive make. I then cloned the non-recursive check-related targets (check, check-am, check-TESTS, etc.) into a new set of targets for the integration tests. So far, this works extremely well.
Note: you may be wondering why I didn't just clone the recursive targets instead. Quite frankly, I couldn't find them. Either I didn't know where to look (the rules weren't in the generated Makefile), or something happens implicitly and I don't understand autotools well enough to follow it.
As for the .PHONY redefinition issue, I still haven't found a solution other than conditionally excluding the other definition when I run the integration tests.
