We're using a remote repository and caching its artifacts locally. However, we're running into a problem because the remote repository regularly rebuilds all of the artifacts it hosts. In our current state, the metadata (e.g. repodata/repomd.xml) gets updated, but the artifacts themselves do not.
We have to continually clear out our local remote-repository cache so that it can download the rebuilt artifacts.
Is there any way to configure Artifactory so that it re-caches the rebuilt artifacts as well as the new artifact metadata?
In our current state, the error we regularly run into is:
https://artifactory/artifactory/remote-repo/some/path/package.rpm:
[Errno -1] Package does not match intended download.
Suggestion: run yum --enablerepo=artifactory-newrelic_infra-agent clean metadata
Unfortunately, there is no good answer to that. Artifacts under a version should be immutable; it's dependency management 101.
I'd put as much effort as possible into convincing the team producing the artifacts to stop overriding versions. It's true that changing dependency versions in metadata can sometimes be cumbersome, but there are ways around that (like resolving the latest patch during development, as supported by the semver spec), and in any case that's not a good excuse.
If that's not possible, I'd look into enabling direct repository-to-client streaming (i.e. disabling artifact caching) to prevent the problem of stale artifacts.
Another solution might be cleaning up the cache with a user plugin, or with a script using JFrog CLI, whenever you learn that newer artifacts have been published in the remote repository.
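For example, a minimal sketch of such a cleanup with JFrog CLI, assuming the remote repository from the question is called remote-repo (its cache is then addressable as remote-repo-cache):

# Preview which cached copies would be removed (nothing is deleted with --dry-run)
jf rt del "remote-repo-cache/some/path/*" --dry-run
# Delete the cached copies; the next client request re-fetches the rebuilt artifacts from the remote
jf rt del "remote-repo-cache/some/path/*" --quiet

Running something like this from a cron job or a CI hook after the upstream rebuild keeps the cache in step without wiping the whole repository.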
Related
We have an Artifactory installation acting as proxy/cache for a remote Ubuntu repository. Sometimes packages are updated on the remote but the update doesn't fully propagate to the Artifactory cache and outdated packages are being served.
What's been tried:
Using the generic as well as the deb option when adding the remote repository.
Adjusting Metadata Retrieval Cache Period (Sec) - the Release/Packages files are updated and contain the correct checksums, but the previously cached packages themselves remain unchanged, so their checksums no longer match.
Toggling Disable artifact resolution in the repository ON/OFF - no difference.
To reproduce the issue for testing purposes, apt-mirror was used to create a fake repository; the files there were then replaced and dpkg-scanpackages was used to regenerate the Release/Packages metadata for that repository (see the sketch below).
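Roughly, the regeneration step looked like this, assuming a flat (trivial) repository layout; apt-ftparchive is just one way to refresh the Release checksums, and the paths are illustrative:

# Rebuild the Packages index after swapping in the modified .deb files
dpkg-scanpackages --multiversion . /dev/null > Packages
gzip -kf Packages
# Refresh the Release file so its checksums match the new Packages index
apt-ftparchive release . > Release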
I'd expect Artifactory to validate the cache against the remote package metadata and update it on a mismatch.
Am I overlooking something or is there any way to fix this that doesn't involve an ugly workaround?
In the near future I will start using Artifactory in my project. I have been reading about local and remote repositories and I am a bit confused about their practical use. In general, as far as I understand:
Local repositories are for pushing and pulling artifacts. They have no connection to a remote repository (e.g. the npm registry at https://www.npmjs.com/).
Remote repositories are for pulling and caching artifacts on demand. They work only one way; it is not possible to push artifacts to them.
If I am right up to this point, then practically it means you only need a remote repository for npm if you do not develop npm modules but only use them to build your application. In contrast, if you need to both pull and push Docker container images, you need one local repository for pushing and pulling custom images and one remote repository for pulling official images.
Question #1
I am confused because our Artifactory admin created a local npm repository for our project. When I discussed the topic with him, he told me that I first need to download packages from the internet to my PC and then push them to the Artifactory server. This does not make any sense to me, because I have seen remote repositories on the same server and all we need is to pull packages from npm. Is there a point that I am missing?
Question #2
Are artifacts in a remote repository's cache kept until they are intentionally deleted? Is there a default retention policy (e.g. delete packages older than 6 months)? I ask because, due to the company's archiving policy, it is important to keep packages until a meteor hits the servers.
Question #3
We will need to pull official Docker images and customize them for CI. It would be a bit hard to maintain one local repo for pushing and pulling custom images and one remote repo for pulling official images. Let's say I need to pull the official ubuntu:latest image, modify it, push it, and finally pull the custom image back. In this case it would be pulled through the remote repository, pushed to the local repo, and pulled again from the local repo. Is it possible to use virtual repositories to do this seamlessly, as one repo?
Question #1 This does not make any sense to me because I have seen some remote repositories on the same server and what we need is only to pull packages from npm. Is there a point that I miss?
Generally, you would want to use a remote repository for this. You would then point your client to this remote repository and JFrog Artifactory would grab them from the remote site and cache them locally, as needed.
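As a rough sketch, with a placeholder host name and repository key, pointing the npm client at such a remote repository looks like this:

# Resolve everything through Artifactory's npm endpoint for the remote repo (host and repo key are placeholders)
npm config set registry https://myartifactory.example.com/artifactory/api/npm/npm-remote/
# The first request is fetched from npmjs.org and cached; later requests are served from the cache
npm install lodash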
In some very secure environments, corporate policies do not even allow this (the servers may not even be connected to the internet); instead, teams manually download, vet, and then upload those third-party libraries to a local repository. I don't think that is your case; your admin may just not understand the intended usage of these repository types.
Question #2 Are artifacts at remote repository cache saved until intentionally deleted? Is there a default retention policy?
They will not be deleted unless you actively configure Artifactory to do so.
Some repository types have built-in retention mechanisms, such as a maximum number of snapshots or tags, but not all of them do, and even where they exist they must be actively turned on. Different organizations have different policies for how long artifacts must be retained. There are a lot of ways to clean up old artifacts, but ultimately it will depend on your own requirements.
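As one example of such a cleanup, JFrog CLI can delete cached items older than a cutoff via an AQL file spec; the repository key and the 6-month window below are placeholders:

# Write a file spec matching cached items created more than 6 months ago (hypothetical repo key)
cat > cleanup-spec.json <<'EOF'
{
  "files": [
    {
      "aql": {
        "items.find": {
          "repo": "npm-remote-cache",
          "created": { "$before": "6mo" }
        }
      }
    }
  ]
}
EOF
# Dry-run first; drop --dry-run once the match list looks right
jf rt del --spec cleanup-spec.json --dry-run

Only run something like this if it is compatible with your archiving policy; by default nothing in the cache is ever removed.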
Question #3 Is it possible to use virtual repositories to do this seamlessly as one repo?
A virtual repository lets you aggregate your local and remote repositories so that they appear as a single source. So you can do something like:
docker pull myarturl/docker/someimage:sometag
... docker build ...
docker push myarturl/docker/someimage:sometag-my-modified-version
docker pull myarturl/docker/someimage:sometag-my-modified-version
It is also security-aware so if the user only has access to the local stuff and not the remote stuff, they will only be able to access the local stuff even though they are using the virtual repository that contains both of them.
That said, I don't see why it would be any harder to explicitly use different repositories:
docker pull myarturl/docker-remote/someimage:sometag
... docker build ...
docker push myarturl/docker-local/someimage:sometag-my-modified-version
docker pull myarturl/docker-local/someimage:sometag-my-modified-version
This also has the added advantage that you know they can only pull your modified version of the image and not the remote (though you can also accomplish that by creating the correct permissions).
I have an Artifactory server with a bunch of remote repositories.
We are planning to upgrade from 5.11.0 to 5.11.6 to take advantage of a security patch in that version.
Questions are:
do all repositories need to be on exactly the same version?
is there anything else I need to think about when upgrading multiple connected repositories? (there is nothing specific about this in the manual)
do I need to do a system-level export just on the primary server, or should I be doing it on all of the remote repository servers?
Lastly, our repositories are huge... a full system export for backup will take too long...
is it enough to just take the config files/dirs?
do I get just the config files/dirs by hitting "Exclude Content"?
If you have an Artifactory instance that points to other Artifactory instances via smart remote repositories, then you will not have to upgrade all of the instances as they will be able to communicate with each other even if they are not on the same version. With that said, it is always recommended to use the latest version of Artifactory (for all of your instances) in order to enjoy all the latest features and bug fixes and best compatibility between instances. You may find further information about the upgrade process in this wiki page.
In addition, it is always recommended to keep backups of your Artifactory instance, especially when attempting an upgrade. You may use the built-in backup mechanism, or you may manually back up your filestore (by default located in $ARTIFACTORY_HOME/data/filestore) and take database snapshots.
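For example, a rough sketch of such a manual backup, assuming default paths and a PostgreSQL database (adjust the names to your actual setup):

# Archive the binary filestore and the configuration directory (default Artifactory 5.x layout assumed)
tar -czf artifactory-filestore-$(date +%F).tar.gz "$ARTIFACTORY_HOME/data/filestore"
tar -czf artifactory-etc-$(date +%F).tar.gz "$ARTIFACTORY_HOME/etc"
# Snapshot the database; the user and database names are placeholders
pg_dump -U artifactory artifactory > artifactory-db-$(date +%F).sql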
What do you mean by
do all repositories need to be on exactly the same version?
Are you asking about Artifactory instances? Artifactory HA nodes?
Regarding the full system export:
https://www.jfrog.com/confluence/display/RTF/Managing+Backups
https://jfrog.com/knowledge-base/how-should-we-backup-our-data-when-we-have-1tb-of-files/
For more info, you might want to contact JFrog's support.
I have an Artifactory Pro (without support) server installed on my local network.
One major use case for this Artifactory instance was to use it as a local cache for remote artifacts from e.g. the repo1 Maven repository or the Lightbend Ivy2 repository. The hope was that I could speed up resolution of dependencies hosted on repo1 by caching them on my local Artifactory.
I am pretty sure my development machine is configured correctly to resolve artifacts exclusively against my local Artifactory.
However, every once in a while (suspiciously close to the interval configured as Metadata Retrieval Cache Period (Sec) in the Advanced tab of the remote repository settings), the resolution of dependencies originally hosted on Maven repo1 takes far longer than usual.
I suspect that at these times Artifactory refreshes the artifact metadata (pom, ivy.xml) of remote artifacts. But this takes far longer than I would expect, given that a simple pom or ivy download should take a few milliseconds rather than several seconds.
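As a rough check from my development machine, I can time a single metadata download directly through Artifactory; the URL below is only an illustrative example:

# Print just the total transfer time for one pom fetched through the remote repository
curl -s -o /dev/null -w '%{time_total}\n' https://my-artifactory.example.com/artifactory/repo1-remote/org/example/lib/1.0/lib-1.0.pom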
I am currently requesting root access to the server from Ops so I can attempt a tcpdump, which may take time...
So my question is:
Does anyone have an idea what might actually be happening that takes several seconds per dependency to refresh the metadata files of a remote repository, or am I looking in the wrong direction?
Update
My Artifactory version is
Artifactory Professional 5.1.3 rev 50019
We had a similar issue, but with npm repos, where the metadata recalculation was taking quite some time. Eventually we came to know that it was a bug in Artifactory that was resolved in version 6.1.0. It's worth checking the Artifactory JIRAs for any such bugs. Hope this helps!
Artifactory Jira Link
I am running Nexus 2.3.1-01. I define a proxy repository that proxies snapshots from an upstream nexus instance. When I browse the remote repository associated with this proxy, I can see the snapshot artifact of interest. However, when I search for all versions of this artifact in the Nexus admin web ui, older versions of the snapshot artifact appear, but not the more recent versions of interest. Yet those more recent versions are clearly visible when I browse the remote.
I've struggled with this for a few hours, and have tried expiring the proxy cache, rebuilding the index, and repairing the index. This is a fresh installation of Nexus, so a damaged index seems unlikely.
Might someone provide some guidance on what I can try next? I should add that my mvn clients cannot resolve the snapshot dependency of interest either.
I figured it out. My mistake, of course.
The POM my test project was using did not have a repository clause in it pointing to an appropriate repository. The only hint of a repository was a clause in my settings.xml file, which I do want, but which is not sufficient on its own.
What was the final hint? When I dumped the effective POM (mvn help:effective-pom), I saw the only repository configured was Maven Central. And snapshots were disabled. I (actually a coworker) realized that this single repository could not bootstrap the ability to resolve snapshots.
So I added a repository clause to my POM, enabled snapshots on it, and now everything, releases and snapshots, resolves fine. Of course, the repository has to be set up to hand back both releases and snapshots, but I already had that part of my Nexus config right.
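For anyone hitting the same thing, this is a quick way to see which repositories (and snapshot policies) Maven actually ends up with; the output file name is arbitrary:

# Write the effective POM to a file and inspect the repositories in effect
mvn help:effective-pom -Doutput=effective-pom.xml
grep -A 10 "<repositories>" effective-pom.xml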