We set up an Artifactory instance with a generic repo that proxies https://nodejs.org/dist/.
The problem is that the upstream repo has files that change over time. I checked the available documentation but haven't found any pointers on whether it is possible to set up a cache invalidation policy that would let us purge stale content from the cache.
If you are running with Artifactory Pro, you can develop your own cache invalidation mechanism using a User Plugin. Have a look at this plugin as an example.
The idea is to mark a resource as "expired" just before Artifactory processes the download request, making Artifactory replace the cached version with the upstream version. Plugins such as this one are usually used to replace metadata files in repo types that aren't officially supported (CRAN, for example), but you can use it to expire any generic file. This functionality is documented as part of the beforeDownloadRequest comment block on our User Plugins Wiki.
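If you are not on Pro, or don't want to maintain a plugin, a rough alternative is to delete the cached copy of the known-mutable files over the REST API shortly before they are consumed, which forces Artifactory to fetch them from nodejs.org again on the next request. Below is a minimal sketch only; the host, credentials, and the repo key nodejs-dist-remote are placeholders, and it relies on the fact that a remote repository's cache is addressable as "<repo-key>-cache".

```python
import requests

BASE = "https://artifactory.example.com/artifactory"  # hypothetical host
AUTH = ("admin", "password")                          # placeholder credentials
REPO = "nodejs-dist-remote"                           # assumed remote repo key

# Files under https://nodejs.org/dist/ that change over time and therefore
# go stale in the cache (index.json is one example).
mutable_files = ["index.json"]

for path in mutable_files:
    # Deleting from the implicit "<repo-key>-cache" repository removes only the
    # cached copy; the next download request makes Artifactory fetch the file
    # from nodejs.org/dist again, which is what the plugin's "expired" flag
    # achieves automatically.
    resp = requests.delete(f"{BASE}/{REPO}-cache/{path}", auth=AUTH)
    print(path, resp.status_code)
```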
We have an Artifactory installation acting as a proxy/cache for a remote Ubuntu repository. Sometimes packages are updated on the remote, but the update doesn't fully propagate to the Artifactory cache and outdated packages are served.
What's been tried:
Adding the remote repository both as a generic and as a Debian (deb) repository
Adjusting the Metadata Retrieval Cache Period (Sec) - the Release/Packages files are updated and contain the correct checksums. However, the previously cached packages remain unchanged, so their checksums no longer match.
Toggling the "Disable artifact resolution" repository setting ON/OFF - no difference.
To reproduce the issue, apt-mirror was used to create a fake repository; the files there were then replaced and dpkg-scanpackages was used to regenerate the Release/Packages metadata.
I'd expect Artifactory to validate the cache against the remote package metadata and update it on a mismatch.
Am I overlooking something or is there any way to fix this that doesn't involve an ugly workaround?
I'm setting up a new Artifactory installation for the first time in my life. I downloaded the tar and extracted it OK, and got some firewall rules in place to allow HTTPS to jcenter.bintray.com. After an initial refresh I see loads of artifacts in the com tree that must come from JCenter, so all seems fine, but when I perform simple Maven tasks like mvn help:active-profiles I only get warnings and errors indicating that none of the relevant artifacts are available from my Artifactory.
I have checked the firewall logs and found no outgoing traffic from my Artifactory server to anything that isn't permitted. What have I missed? My Artifactory is OSS version 7.5.7 rev 705070900.
Artifactory remote repositories do not work as a mirror of the external repository they point at.
Remote Artifactory repositories proxy the external repository, which means that you have to actively request artifacts. When an artifact is requested, Artifactory fetches it from the external repository and caches it inside Artifactory. Further requests for a cached artifact are served from Artifactory without the need to go out to the external repository.
The list of artifacts being shown consists of artifacts that are available in the external repository. This feature is called remote browsing and is available for some of the package types supported by Artifactory.
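To illustrate the request-then-cache behavior, here is a minimal sketch; the host, credentials, repository key jcenter-remote, and artifact path are all placeholders:

```python
import requests

BASE = "https://artifactory.example.com/artifactory"  # hypothetical host
AUTH = ("admin", "password")                          # placeholder credentials
PATH = "com/google/guava/guava/28.2-jre/guava-28.2-jre.jar"  # example artifact

# First request: Artifactory proxies the external repository, downloads the
# artifact from upstream, and stores a copy in the remote repository's cache.
first = requests.get(f"{BASE}/jcenter-remote/{PATH}", auth=AUTH)
first.raise_for_status()

# The cached copy is now browsable under the implicit "<repo-key>-cache"
# repository; subsequent requests for the same path are served from the cache
# without contacting the external repository again.
cached = requests.get(f"{BASE}/jcenter-remote-cache/{PATH}", auth=AUTH)
print(cached.status_code)
```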
I found the issue, sort of. For reasons I now understand, I have plugin repositories configured. I added the true source for the plugins to my list of plugin repositories, and that solved the issue for me.
I have an Artifactory server with a bunch of remote repositories.
We are planning to upgrade from 5.11.0 to 5.11.6 to take advantage of a security patch in that version.
Questions are:
Do all repositories need to be on exactly the same version?
Is there anything else I need to think about when upgrading multiple connected repositories? (There is nothing specific about this in the manual.)
Do I need to do a system-level export just on the primary server, or should I be doing it on all of the remote repository servers?
Lastly, our repositories are huge... a full System Export for backup will take too long...
Is it enough to just take the config files/dirs?
Do I get just the config files/dirs by selecting "Exclude Content"?
If you have an Artifactory instance that points to other Artifactory instances via smart remote repositories, you will not have to upgrade all of the instances; they will be able to communicate with each other even if they are not on the same version. That said, it is always recommended to use the latest version of Artifactory (for all of your instances) in order to enjoy the latest features and bug fixes and the best compatibility between instances. You may find further information about the upgrade process in this wiki page.
In addition, it is always recommended to keep backups of your Artifactory instance, especially when attempting an upgrade. You may use the built-in backup mechanism, or you may manually back up your filestore (by default located in $ARTIFACTORY_HOME/data/filestore) and take database snapshots.
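If the repositories are huge and you only need configuration and metadata, the export can also be triggered through the System Export REST endpoint. The following is only a rough sketch; the host and credentials are placeholders, and the exact field names (such as excludeContent, which corresponds to the "Exclude Content" option) should be verified against the REST API documentation for your version:

```python
import requests

BASE = "https://artifactory.example.com/artifactory"  # hypothetical host
AUTH = ("admin", "password")                          # placeholder admin credentials

# Ask Artifactory to export system configuration and metadata while skipping
# repository binaries, which keeps the export small when the filestore is huge.
settings = {
    "exportPath": "/backup/artifactory-export",  # path on the Artifactory server
    "includeMetadata": True,
    "createArchive": False,
    "excludeContent": True,
}
resp = requests.post(f"{BASE}/api/export/system", json=settings, auth=AUTH)
resp.raise_for_status()
print(resp.text)
```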
What do you mean by
"do all repositories need to be on exactly the same version?"
Are you asking about Artifactory instances? Artifactory HA nodes?
Regarding the full system export:
https://www.jfrog.com/confluence/display/RTF/Managing+Backups
https://jfrog.com/knowledge-base/how-should-we-backup-our-data-when-we-have-1tb-of-files/
For more info, you might want to contact JFrog's support.
We're making use of a remote repository and are storing artifacts locally. However, we are running into a problem because the remote repository regularly rebuilds all artifacts that it hosts. In our current state, we update the metadata (e.g. repodata/repomd.xml), but artifacts are not updated.
We have to continually clear our local remote-repository-cache out in order to allow it to download the rebuilt artifacts.
Is there any way we can configure Artifactory to re-cache new artifacts as well as the new artifact metadata?
In our current state, the error we regularly run into is
https://artifactory/artifactory/remote-repo/some/path/package.rpm:
[Errno -1] Package does not match intended download.
Suggestion: run yum --enablerepo=artifactory-newrelic_infra-agent clean metadata
Unfortunately, there is no good answer to that. Artifacts under a version should be immutable; it's dependency management 101.
I'd put as much effort as possible into convincing the team producing the artifacts to stop overriding versions. It's true that it can sometimes be cumbersome to change the versions of dependencies in metadata, but there are ways around that (like resolving the latest patch during development, as supported by the semver spec), and in any case, that's not a good excuse.
If that's not possible, I'd look into enabling direct repository-to-client streaming (i.e. disabling artifact caching) to prevent the problem of stale artifacts.
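As a rough sketch of that direct-streaming option: the remote repository can be told not to keep a local copy of downloaded artifacts. I'm assuming here that storeArtifactsLocally in the remote repository configuration JSON is the relevant flag, and that the repo key and credentials below are placeholders; verify both against the documentation for your version.

```python
import requests

BASE = "https://artifactory.example.com/artifactory"  # hypothetical host
AUTH = ("admin", "password")                          # placeholder admin credentials
REPO = "remote-repo"                                  # assumed remote repo key

# Turn off local caching of downloaded artifacts so Artifactory streams them
# straight from the upstream repository to the client, avoiding stale copies.
config = {
    "storeArtifactsLocally": False,
}
resp = requests.post(f"{BASE}/api/repositories/{REPO}", json=config, auth=AUTH)
resp.raise_for_status()
print(resp.text)
```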
Another solution might be cleaning up the cache with a user plugin, or with a script using JFrog CLI, once you learn that newer artifacts have been published in the remote repository.
We are running into a 404 error when pulling a specific package from the npm remote repository. It seems to happen only with @ngrx/effects@2.0.2. We are able to install the 2.0.0 version and other scoped packages correctly.
I tested it with scoped and unscoped packages that we had never installed before, and it works successfully. Just this package seems to have a problem.
We are on version 5.1.0
The issue is the metadata retrieval cache periods. In order to avoid the latency associated with upstream connections, Artifactory will cache certain metadata from the remote site (NPMJS in this case). This can mean that the period has to pass before you can see anything new.
You can read more about these settings in the Artifactory Wiki entry for Advanced Settings. In your case, the relevant settings are Metadata Retrieval Cache Period and Missed Retrieval Cache Period. If you want to always receive the most up-to-date information, simply set them to zero (or a couple of minutes). This may slow down your builds a tad, but it's a compromise between speed and completeness.
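If you'd rather script the change than use the UI, here is a minimal sketch of updating the remote npm repository's cache periods over the REST API. The repository key npm-remote is a placeholder, and the field names retrievalCachePeriodSecs / missedRetrievalCachePeriodSecs are taken from the repository configuration JSON as I recall it, so double-check them for your version:

```python
import requests

BASE = "https://artifactory.example.com/artifactory"  # hypothetical host
AUTH = ("admin", "password")                          # placeholder admin credentials
REPO = "npm-remote"                                   # assumed remote npm repo key

# Shorten the retrieval/missed cache windows so freshly published packages
# (such as a new scoped package version) become visible without waiting for
# the default cache period to expire.
config = {
    "retrievalCachePeriodSecs": 120,
    "missedRetrievalCachePeriodSecs": 120,
}
resp = requests.post(f"{BASE}/api/repositories/{REPO}", json=config, auth=AUTH)
resp.raise_for_status()
print(resp.text)
```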
As administering my Artifactory install was not an option, I found an easy fix:
Remove the line containing the token for your Artifactory server in ~/.npmrc.
This may be done with npm logout; however, I didn't try that. In any case, the token being present resulted in 404 responses from the server.