I'm not sure how best to describe this, hence the rather vague title.
I have an R package that uses Github Actions to run checks. You can see the workflow file here:
https://github.com/Azure/Microsoft365R/blob/master/.github/workflows/check-standard.yaml
It's basically the same as the check-standard workflow in the r-lib/actions repo, with some tweaks for my particular requirements. My latest commit is failing the check for the macOS build with this error:
Run remotes::install_deps(dependencies = TRUE)
Error: Error: HTTP error 404.
Not Found
Did you spell the repo owner (`hongooi73`) and repo name (`AzureGraph`) correctly?
- If spelling is correct, check that you have the required permissions to access the repo.
Execution halted
Error: Process completed with exit code 1.
The step in question is this. It just scans the package's DESCRIPTION file and installs the dependencies for the package -- all very straightforward.
- name: Install dependencies
  run: |
    remotes::install_deps(dependencies = TRUE)
    remotes::install_cran(c("pkgbuild", "rcmdcheck", "drat"))
  shell: Rscript {0}
It looks like it's trying to install a dependency from the hongooi73/AzureGraph repo, which no longer exists. But my DESCRIPTION file doesn't list hongooi73/AzureGraph as a remote dependency; it uses Azure/AzureGraph, of which hongooi73/AzureGraph was a fork. It used to refer to hongooi73/AzureGraph, but that was several commits ago. Indeed, the Linux and Windows checks both run without problems, so they are clearly using the correct repo location.
What can be causing this failure? And how do I fix it? I've already tried rerunning the workflow, and deleting older workflows.
You're using actions/cache to cache your R libraries. This means you may be restoring a stale (and now invalid) cache if your key and restore-keys aren't set up properly.
Jan. 2021: at the moment, there is no direct way to manually clear the cache. For some other options, see "Clear cache in GitHub Actions".
June 2022: actually, there now is:
List and delete caches in your Actions workflows
You can now get more transparency and control over dependency caching in your actions workflows.
Actions users who use actions/cache to make jobs faster on GitHub Actions can now use our cache list and delete APIs to:
- list all the Actions caches within a repository and sort by specific metadata like cache size, creation time, or last accessed time;
- delete a corrupt or stale cache entry by providing the cache key or ID.
Learn more about Managing caching dependencies to speed up workflows.
See the updated answer to "Clear cache in GitHub Actions" from beatngu13 for the GitHub API call examples.
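As a concrete sketch of those APIs, the endpoints can be driven from the gh CLI. The owner, repo, and cache key below are placeholders for your own values, and the calls assume gh is authenticated with access to the repository:

```shell
# Sketch: list and delete GitHub Actions caches via the REST API,
# using the gh CLI. OWNER/REPO are placeholders.
OWNER=Azure
REPO=Microsoft365R

# List all Actions caches in the repository (supports ?sort= / ?direction=)
list_caches() {
  gh api "repos/$OWNER/$REPO/actions/caches"
}

# Delete every cache entry matching a given key
delete_cache() {
  gh api --method DELETE "repos/$OWNER/$REPO/actions/caches?key=$1"
}

# Example invocations (require an authenticated gh CLI):
#   list_caches
#   delete_cache "my-cache-key"
```

Deleting the stale macOS cache entry this way (or simply changing the cache key in the workflow) forces the next run to rebuild the library cache from the current DESCRIPTION file.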
Related
We have an Artifactory installation acting as proxy/cache for a remote Ubuntu repository. Sometimes packages are updated on the remote but the update doesn't fully propagate to the Artifactory cache and outdated packages are being served.
What's been tried:
Using the generic as well as deb option to add the remote repository
Adjusting the Metadata Retrieval Cache Period (Sec) - the Release/Packages files are updated and contain the correct checksums. However, the checksums of the previously cached packages do not match and remain unchanged.
Disable artifact resolution in repository ON/OFF - no difference.
For testing purposes, in an effort to reproduce the issue, apt-mirror was used to create a fake repository; the files there were replaced and dpkg-scanpackages was used to update the Release/Packages metadata on that repository.
I'd expect Artifactory to validate the cache against the remote package metadata and update it on a mismatch.
Am I overlooking something or is there any way to fix this that doesn't involve an ugly workaround?
We're making use of a remote repository and are storing artifacts locally. However, we are running into a problem because the remote repository regularly rebuilds all the artifacts that it hosts. In our current state, the metadata (e.g. repodata/repomd.xml) gets updated, but the artifacts are not.
We have to continually clear our local remote-repository-cache out in order to allow it to download the rebuilt artifacts.
Is there any way we can configure artifactory to allow it to recache new artifacts as well as the new artifact metadata?
In our current state, the error we regularly run into is
https://artifactory/artifactory/remote-repo/some/path/package.rpm:
[Errno -1] Package does not match intended download.
Suggestion: run yum --enablerepo=artifactory-newrelic_infra-agent clean metadata
Unfortunately, there is no good answer to that. Artifacts under a version should be immutable; it's dependency management 101.
I'd put as much effort as possible into convincing the team producing the artifacts to stop overriding versions. It's true that it can sometimes be cumbersome to change dependency versions in metadata, but there are ways around it (like resolving the latest patch during development, as supported in the semver spec), and in any case that's not a good excuse.
If that's not possible, I'd look into enabling direct repository-to-client streaming (i.e. disabling artifact caching) to prevent the problem of stale artifacts.
Another solution might be cleaning up the cache with a user plugin, or with a script built on the JFrog CLI, once you learn that newer artifacts have been published in the remote repository.
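As a minimal sketch of the JFrog CLI route: a remote repository's cache is addressable as "<repo-name>-cache", so deleting cached entries under a path forces the next request to re-fetch them from the upstream. The repository name and path pattern here are assumptions:

```shell
# Sketch: evict stale artifacts from a remote repository's cache using
# the JFrog CLI (jf). REMOTE_REPO is a placeholder for your repo name.
REMOTE_REPO=remote-repo

# Delete the cached copies under a path pattern; the next client request
# for those artifacts will be fetched fresh from the upstream repository
evict_cached() {
  jf rt del "${REMOTE_REPO}-cache/$1" --quiet
}

# Example (hypothetical path):
#   evict_cached "some/path/*.rpm"
```

This assumes a configured and authenticated jf client; you could run it from a cron job or a webhook handler that fires when the upstream republishes.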
We are running into a 404 error when pulling a specific package from the npm remote repository. It seems to only happen with @ngrx/effects@2.0.2. We are able to install the 2.0.0 version and other scoped packages correctly.
We tested it with scoped and unscoped packages that we have never installed before, and they install successfully. Just this package seems to have a problem.
We are on Artifactory version 5.1.0.
The issue is the metadata retrieval cache periods. In order to avoid the latency associated with upstream connections, Artifactory will cache certain metadata from the remote site (NPMJS in this case). This can mean that the period has to pass before you can see anything new.
You can read more about the settings on Artifactory Wiki entry for Advanced Settings. In your case, the relevant settings are Metadata Retrieval Cache Period and Missed Retrieval Cache Period. If you want to always receive the most up-to-date information, simply set those to zero (or a couple of minutes). This may slow down your builds a tad but it's a compromise between speed and completeness.
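If you don't want to lower the cache periods globally, another option is to evict just the offending cached entry through Artifactory's REST API, so the next request re-fetches it from npmjs. The base URL, repository name, and credential variables below are all placeholders:

```shell
# Sketch: force Artifactory to re-fetch a package by deleting its cached
# entry from the remote repo's "-cache" view. URL, repo name, and
# credentials are placeholders for your own setup.
ARTIFACTORY_URL=https://artifactory.example.com/artifactory
REPO_CACHE=npm-remote-cache

zap_cached() {
  curl -u "$ARTIFACTORY_USER:$ARTIFACTORY_TOKEN" \
       -X DELETE "$ARTIFACTORY_URL/$REPO_CACHE/$1"
}

# Example (the failing scoped package from the question):
#   zap_cached "@ngrx/effects"
```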
As administering my Artifactory install was not an option, I found an easy fix:
Remove the line containing the token for your Artifactory server from ~/.npmrc.
This may also be achievable with npm logout, though I didn't try that. In any case, the token being present resulted in 404 responses from the server.
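A small sketch of that edit, demonstrated on a throwaway file so nothing touches your real ~/.npmrc; the registry host is a made-up placeholder:

```shell
# Sketch: strip the _authToken line for a given host from an .npmrc file.
# A .bak copy is kept; the host and file below are placeholders.
strip_token() {
  sed -i.bak "/$1.*_authToken/d" "$2"
}

# Demo on a temporary file standing in for ~/.npmrc:
NPMRC=$(mktemp)
printf '%s\n' \
  'registry=https://artifactory.example.com/api/npm/npm-remote/' \
  '//artifactory.example.com/api/npm/npm-remote/:_authToken=abc123' \
  > "$NPMRC"
strip_token "artifactory.example.com" "$NPMRC"
```

To apply it for real, run strip_token against "$HOME/.npmrc" instead of the temp file.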
I am trying to install OpenStack through Ansible on a single node, using the All-In-One (AIO) setup.
When I run the setup-everything.yml file, I get the following error:
ERROR: config_template is not a legal parameter in an Ansible task or handler
Can you please help on the issue?
I know that this answer is a little late, but I found this and thought I'd try to help others out should they run into the same thing.
It's very likely that the system was not bootstrapped. We see this error from time to time when the action plugin is not present on the system. With OpenStack-Ansible you need to retrieve the roles and plugins listed in the ansible-role-requirements.txt file. After you've cloned the software, the first step in deployment is usually running ./scripts/bootstrap-ansible.sh, which installs Ansible into a venv, retrieves your roles, libraries, and plugins, and then creates the openstack-ansible CLI wrapper. You can also simply use the ansible-galaxy command with ansible-role-requirements.txt if you don't want to run that script. Once you have the roles and libraries, you should no longer see that error. More documentation on getting started can be found here: https://docs.openstack.org/developer/openstack-ansible/developer-docs/quickstart-aio.html
You can get access to the config_template module source code here: https://github.com/openstack/openstack-ansible-plugins/blob/master/action/_v2_config_template.py should you have issues specifically with the module or jump into the #openstack-ansible channel on freenode where there's generally someone available to help out.
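The two bootstrap routes described above can be sketched as shell functions. The clone path is an assumption; adjust it to wherever you keep the checkout:

```shell
# Sketch: two ways to get the roles and plugins (including the
# config_template action plugin) onto the system. OSA_DIR is a placeholder.
OSA_DIR=/opt/openstack-ansible

bootstrap_osa() {
  # Full bootstrap: installs Ansible into a venv, fetches roles, libraries,
  # and plugins, and creates the openstack-ansible CLI wrapper
  git clone https://github.com/openstack/openstack-ansible "$OSA_DIR"
  cd "$OSA_DIR" && ./scripts/bootstrap-ansible.sh
}

fetch_roles_only() {
  # Lighter alternative: just pull the roles from the requirements file
  cd "$OSA_DIR" && ansible-galaxy install -r ansible-role-requirements.txt
}
```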
config_template is a custom module developed by the OpenStack team. If you get ERROR: config_template is not a legal parameter in an Ansible task or handler, it might mean that Ansible cannot find the module; it could also be an indentation or syntax error. Check whether config_template is on the path in your ANSIBLE_LIBRARY environment variable. You can also pass the path on the command line with --module-path.
Also, the pull request for this module was closed by the Ansible developers, so it is likely that you can find similar functionality in a module supported by them.
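As a sketch of the search-path fix, assuming you've cloned the openstack-ansible-plugins repo to a local directory (the path below is a placeholder):

```shell
# Sketch: point Ansible at the directories holding the config_template
# module and action plugin. PLUGINS is a placeholder for your clone of
# https://github.com/openstack/openstack-ansible-plugins
PLUGINS=/opt/openstack-ansible-plugins

export ANSIBLE_LIBRARY="$PLUGINS/library"
export ANSIBLE_ACTION_PLUGINS="$PLUGINS/action"

# Or pass the module path for a single run instead:
#   ansible-playbook --module-path "$PLUGINS/library" setup-everything.yml
```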
I have followed every piece of advice on http://r-pkgs.had.co.nz/git.html and in the subsection http://r-pkgs.had.co.nz/git.html#git-branch, and I am still getting an error.
The steps I need/did (different from what Hadley's page dictates):
1. Grab the URL of the GitHub repo (e.g., https://github.com/OHDSI/Achilles.git).
2. Create a versioned project in RStudio with this URL.
3. Set up my global user names for git.
4. Select a dev branch (for example devXYZ).
At this point I got a "detached at origin/devXYZ" message.
Per instructions in Hadley's book, I tried to fix this using this command:
git push --set-upstream origin devXYZ
but it fails. The error is: origin does not appear to be a git repository or src refspec devXYZ does not match any
I tried fixing it by running this command (which may be wrong):
git remote add origin https://github.com/OHDSI/Achilles.git
I am using windows, latest R, latest RStudio, latest git from https://git-scm.com/download/win
EDIT: I also tried making a new branch using the recommended mechanism, but it also fails. The goal is to get instructions that do not involve git init, where the whole process starts with a URL and a new project in RStudio.
The desired next step would be 5. modify and commit into the devXYZ branch.
THIS ONLY APPLIES TO NON-MASTER BRANCHES:
If you are a newbie to git, simply don't try to do the git part in R at all.
Instead, use GitHub Desktop or SourceTree.
Point that tool at the desired repo and switch to the desired branch.
Start RStudio and do any development
Close RStudio and use that external tool to perform any git steps.
FOR MASTER BRANCHES:
the integrated RStudio git implementation works great.
I think I might know what the problem is: you're trying to push directly to the main repo. I'm guessing you're not one of the main contributors for that repo, so it won't allow you to create a branch there directly. In the book, the author is probably using his own repository as an example rather than an existing one.
The reason you're getting that error is that the branch doesn't exist on the remote repo, so git can't get a reference to it; that's what src refspec devXYZ does not match any is telling you.
The preferred workflow is to work on a fork of the main repo (basically your own personal copy of the main repo, stored on the server). Even if you end up as a contributor at some point, I think this is a good workflow to follow.
Here's a good explanation of how to use the fork workflow; there's other information on Stack Overflow as well.
Once you've made updates, you'd create what's called a pull request against the original repo (commonly referred to as upstream). This is basically a request to merge your changes from the fork into the main repo, which allows the repo owner to review them and decide whether to accept them or ask for changes.
Since you're just going through a tutorial, I'd say use your fork as the origin wherever it's used in the book for now.
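The branch-and-publish part of that workflow boils down to a handful of git commands. Here's a runnable sketch that uses a local bare repository to stand in for your fork on GitHub; in real use, replace the bare-repo path with your fork's URL (e.g. https://github.com/<you>/Achilles.git) and the identity settings with your own:

```shell
# Sketch: create a dev branch and publish it with --set-upstream.
# A local bare repo stands in for the fork; paths and identity are
# placeholders.
set -e
WORK=$(mktemp -d)
git init --bare "$WORK/fork.git"            # stands in for your fork on GitHub

git clone "$WORK/fork.git" "$WORK/project" 2>/dev/null
cd "$WORK/project"
git config user.email "you@example.com"     # placeholder identity
git config user.name  "Your Name"

# Create the dev branch locally, commit, then publish it to the fork;
# --set-upstream links the local branch to origin/devXYZ for future pushes
git checkout -b devXYZ 2>/dev/null
echo "Package: Achilles" > DESCRIPTION
git add DESCRIPTION
git commit -q -m "start devXYZ work"
git push --set-upstream origin devXYZ 2>/dev/null
```

After this, plain git push and git pull work on devXYZ without extra arguments, and RStudio's git pane no longer shows the branch as detached or unpublished.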