Is it possible to generate artifacts without publishing with semantic-release?

I'm currently using semantic-release for versioning of my react library.
https://github.com/semantic-release/semantic-release
Question:
Is it possible to generate the artifacts without publishing them?
For example, in my use case I would like to generate:
- Version release number (@semantic-release/commit-analyzer)
- tar file that will be published to npm (@semantic-release/npm)
- change log (@semantic-release/release-notes-generator)
If you run with the dry-run option, it will print the version release number and the change log to the console, but I want to store them in a file. One workaround is to pipe the output and parse it, but it would be nice if the plugin could write the data to a file during the dry run.
The dry run won't run the publish stage, which is where the files get tarred up.
Any advice appreciated,
Thanks,
Derek

You can use the npmPublish option of the @semantic-release/npm plugin. This will generate the tar file for the npm package, but it won't publish it to the npm registry.
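A minimal sketch of such a configuration as a .releaserc.yaml file; npmPublish and tarballDir are documented options of @semantic-release/npm, though the dist directory name here is just an illustration:

# .releaserc.yaml -- analyze commits and generate notes, pack the tarball, but never publish
plugins:
  - "@semantic-release/commit-analyzer"
  - "@semantic-release/release-notes-generator"
  # npmPublish: false skips the registry publish; tarballDir keeps the npm-pack tarball in ./dist
  - ["@semantic-release/npm", {npmPublish: false, tarballDir: "dist"}]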

In your package.json file, add the following property, which ensures that npm will not publish the package:

"private": true

In your GitHub Actions workflow/release.yml file, use something like the following to store your tar file as a build artifact:

- run: |
    mkdir -p ~/new/artifact
    echo "${{ contents_of_your_file }}" > ~/new/artifact/yourtarfile
- uses: actions/upload-artifact@v2
  with:
    name: artifactname
    path: '~/new/**/*'
- name: download
  uses: actions/download-artifact@v1
  with:
    name: artifactname
    path: '~/new'
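If you use the tarballDir option from the sketch above, you don't need the echo placeholder at all; the upload step can point straight at the generated tarball (the dist path matches that assumed config):

- uses: actions/upload-artifact@v2
  with:
    name: npm-tarball
    path: dist/*.tgz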

Related

Serve RMarkdown outputs without version controlling them

We frequently use RMarkdown-based packages to create websites with R (bookdown, blogdown, distill...) and use github-pages to serve the html files via the url username.github.io/repo.
In this approach, the output (i.e. html / css) files are also version controlled and are frequently included in commits by mistake (git commit -a). This is annoying since these files clutter the commits and often lead to spurious file conflicts.
Ideally, the output files would not be version controlled at all, since the binary files (images) additionally bloat the repo. So I'm looking for a solution where either:
- Git ignores the output files completely but provides an alternative (but comparable1) method to gh-pages to serve them, or
- Git ignores the output files temporarily, and committing / pushing them to gh-pages is done in a separate, explicit command.
1: The method should be command line based and provide a nice URL to access the website.
You could have .html, .css, etc. ignored in the main and all other branches except the branch your GitHub page is built from, for example the gh-pages branch.
Git does not support different ignore files in different branches out of the box, so you would have to set up a bash script that replaces the .gitignore file each time you check out a new branch. See here for how to do that: https://gist.github.com/wizioo/c89847c7894ede628071
Maybe not the elegant solution you were hoping for, but it should work.
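A minimal sketch of such a hook, assuming you keep per-branch ignore files such as .gitignore.master and .gitignore.gh-pages in the repo root (those file names are illustrative, not taken from the linked gist):

#!/bin/bash
# .git/hooks/post-checkout -- swap in the ignore file for the branch just checked out
branch=$(git rev-parse --abbrev-ref HEAD)
if [ -f ".gitignore.${branch}" ]; then
  cp ".gitignore.${branch}" .gitignore
fi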
If you have a Python installation on your computer, you can use GitHub Pages Import, a tool designed specifically for this purpose.
You need a Python installation since it has to be installed with pip, but once it's installed it integrates beautifully into an R / RMarkdown workflow.
Once it's installed (pip install ghp-import), you can run ghp-import docs (assuming docs is where your RMarkdown outputs are stored).
There are a bunch of practical options that you can use, including -p to additionally push the changes to your remote after the commit.
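For example, a typical invocation (-n and -m are among ghp-import's documented options; -n adds a .nojekyll file and -m sets the commit message):

# commit the rendered site to the gh-pages branch and push it to the remote
ghp-import -n -p -m "Update site" docs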
You need to tell Git to ignore the folder the book gets built into.
So, for example, by default bookdown puts all the built files in a folder called "_book"
Just add the following line to the .gitignore file in your project.
_book
Then you can work on your book and build it and push changes without worrying about the site being updated.
To update the site, you want to create a gh-pages branch that is only used for the hosted content. Do that with these commands from in your book folder:
git checkout --orphan gh-pages
git rm -rf .
# create a hidden file .nojekyll
touch .nojekyll
git add .nojekyll
git commit -m"Initial commit"
git push origin gh-pages
Make sure (once you put your book content in that branch) that GitHub is set to use that branch for hosting, rather than the folder you were using before.
Then, you can switch back to your main branch with the following command:
git checkout master
Next, you will clone your gh-pages branch into your main project:
git clone -b gh-pages https://github.com/yourprojecturl.git book-output
Then, when you have a version of the built book (in the _book folder) ready to use as your live site, use the following commands to copy the content into the book-output folder and push it to the gh-pages branch, where the live site is:
cd book-output
git rm -rf *
cp -r ../_book/* ./
git add --all *
git commit -m"Update the book"
git push -q origin gh-pages
You can continue to use this last set of commands whenever you have a version in _book that you're ready to push live.
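Since you'll run that last block for every update, you can wrap it in a small script; a sketch, assuming the _book / book-output layout above (the script name publish-book.sh is arbitrary):

#!/bin/bash
# publish-book.sh -- copy the freshly built book into book-output and push it live
set -e
cd book-output
git rm -rf *             # clear the old site (the glob leaves dotfiles like .nojekyll alone)
cp -r ../_book/* ./      # copy in the new build
git add --all *
git commit -m "Update the book"
git push -q origin gh-pages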

How to add pre-commit git hooks to check that README.Rmd and index.Rmd have been knitted?

I have an R package with a pkgdown documentation site. I want to create a git hook so that if I try to commit and push changes to either README.Rmd or index.Rmd without first knitting them to create the corresponding .md files, I'm warned. Right now I just forget.
The book R Packages says to use usethis::use_readme_rmd() to create the README, which also creates the git hook. But I already have a README.Rmd file.
How can I create a hook for an existing .Rmd file generally, whether it's README.Rmd or the index.Rmd from my pkgdown site? I'd like to use the usethis package but if it's simpler to do it outside of that package, I'm open to that.
A different approach is to do this with GitHub Actions, if that is where your pkgdown site is.
- Create the folder .github within your repo.
- Within that, create the folder workflows.
- Within that, create the file render-readme.yml.
- Paste this code into that file:
on:
  push:
    paths:
      - README.Rmd
      - Index.Rmd
name: Render README and Index
jobs:
  render:
    name: Render README and Index
    runs-on: macOS-latest
    steps:
      - uses: actions/checkout@v2
      - uses: r-lib/actions/setup-r@v1
      - uses: r-lib/actions/setup-pandoc@v1
      - name: Install packages
        run: Rscript -e 'install.packages(c("rmarkdown", "knitr"))'
      - name: Render README
        run: Rscript -e 'rmarkdown::render("README.Rmd", output_format = "md_document")'
      - name: Render Index
        run: Rscript -e 'rmarkdown::render("Index.Rmd", output_format = "md_document")'
      - name: Commit results
        run: |
          git commit README.md -m 'Re-build README.Rmd' || echo "No changes to commit"
          git commit Index.md -m 'Re-build Index.Rmd' || echo "No changes to commit"
          git push origin || echo "No changes to commit"
Push this to GitHub and it should start working immediately. Note that it takes a bit of time to process; click the Actions tab in your GitHub repo to see the progress.
See https://github.com/r-lib/actions for examples. The code above is adapted from there.
Note that you might want to divide the action into two files, render-readme.yml and render-index.yml. That way, if the Action fails, you'll know which file has the problem.
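If you still want the local pre-commit hook the question asks about, here is a minimal sketch, saved as .git/hooks/pre-commit and made executable. It is modelled on, not copied from, the hook usethis installs, and the file list is illustrative:

#!/bin/bash
# .git/hooks/pre-commit -- block the commit when an .Rmd is newer than its rendered .md
for rmd in README.Rmd index.Rmd; do
  md="${rmd%.Rmd}.md"
  if [ -f "$rmd" ] && [ "$rmd" -nt "$md" ]; then
    echo "$rmd is newer than $md -- knit it before committing."
    exit 1
  fi
done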

AWS CodeBuild artifact handling

So I have a React SSR project for which I am trying to create a CI/CD pipeline. My project needs to deploy the following artifacts:
- dist folder created by Webpack
- appspec.yml file needed by CodeDeploy
- script folder for the scripts referenced in appspec.yml
When I tried to get the files one by one using this buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 10
    commands:
      - npm install
  build:
    commands:
      - npm run build
artifacts:
  files:
    - 'dist/*'
    - 'appspec.yml'
    - 'deploy-scripts'
I got dist with only part of its contents, plus the appspec.yml and the deploy-scripts folder.
Then I tried a different approach:
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 10
    commands:
      - npm install
  build:
    commands:
      - npm run build
artifacts:
  files:
    - '**/*'
  base-directory: "dist"
  discard-paths: yes
The dist folder now has the scripts and the appspec file inside it. It does deploy, but I lose the folder structure of dist, which my deploy scripts need.
I need to get all the contents of dist in their original folder structure, as well as the scripts and the appspec.yml file. The scripts and the appspec.yml can't be inside dist, but dist needs to keep all of its contents.
Can anyone help with this?
The solution was to use the first buildspec file, adding "**/*" to the dist entry.
So the dist line ends up being: "dist/**/*".
Applying this to the general case: any time you want a directory to be included along with single files in the artifacts section, add it like this:
"[directory_name]/**/*"
That will get you both the directory and everything inside it, recursively.
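Applied to the first buildspec above, the artifacts section would then look like this (deploy-scripts gets the same treatment, assuming it is a directory as the question describes):

artifacts:
  files:
    - 'dist/**/*'
    - 'appspec.yml'
    - 'deploy-scripts/**/*'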

Setting codebuild output source folder for SBT incremental compile

I am curious if you can control the output "src" folder in AWS CodeBuild.
Specifically, I see this when debugging the build in CodeBuild.
/codebuild/output/src473482839/src/github.....
I would love to be able to set/change/remove the src473482839 part of that path, because I have a feeling it is causing sbt to recompile my Scala source files. Although I am using CodeBuild's new local cache to cache my target folders between builds, the compiled classes' canonical paths change between builds, which is what I suspect is causing the problem.
After some more debugging I have managed to get my 6-minute builds down to 1:30.
Although you are not able to set or override CODEBUILD_SRC_DIR, I have found a workaround in my buildspec.
This is what my buildspec looks like now, with local caching enabled in CodeBuild:
version: 0.2
phases:
  pre_build:
    commands:
      - mkdir -p /my/build/folder/
      - cp -a ${CODEBUILD_SRC_DIR}/. /my/build/folder
  build:
    commands:
      - cd /my/build/folder
      - sbt compile test
cache:
  paths:
    - '/root/.ivy2/cache/**/*'
    - '/root/.cache/**/*'
    - 'target/**/*'
    - 'any other target folders you may need'
The key change I had to make was to copy the source (including the cached target directories) over in the pre_build phase, then change directory and compile from the new, static directory.
I hope this helps someone else down the road, until CodeBuild allows setting or overriding the CODEBUILD_SRC_DIR folder.

Xcode 'Run Script' step to create directory and copy file is failing

I have the following custom script step during my build:
mkdir -p "${CONTENTS_FOLDER_PATH}/Frameworks"
cp "${SRCROOT}/testing.1.dylib" "${CONTENTS_FOLDER_PATH}/Frameworks"
The script runs successfully, but when I check the bundle, the Frameworks directory does not exist.
Should this not work as I expect (the Frameworks folder created with testing.1.dylib in it)?
Edit: Added a screenshot of the Run Script step.
How about trying the following:
dst="${CONFIGURATION_BUILD_DIR}/${CONTENTS_FOLDER_PATH}/Frameworks"
mkdir -p "${dst}"
cp "..." "${dst}"
(I found your example and adapted it as above to copy a dylib into the 'Frameworks' folder of my framework. The key point is that ${CONTENTS_FOLDER_PATH} on its own is a relative path; prefixing it with ${CONFIGURATION_BUILD_DIR} makes it an absolute path inside the built product, so the directory actually ends up in the bundle.)
