I am looking at moving my filestore from S3 to NFS. I am not sure what that process would be. Has anyone had to do this? Any advice?
There is no out-of-the-box solution to move your artifacts from S3 to NFS.
That being said, I would recommend looking at third-party tools (e.g. rsync).
Regardless, after moving/copying the binaries it is important to validate the filestore's integrity against Artifactory's database. This can be done using the JFrog Filestore Integrity user plugin.
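As a quick sanity check alongside that plugin, here is a minimal Python sketch that walks a default checksum-based filestore (where each binary is stored under a directory named by the first two characters of its SHA-1, with the full SHA-1 as the filename) and flags any file whose content no longer matches its name after the copy. The filestore path is hypothetical; adjust it to your installation.

```python
import hashlib
import os

# Assumed default checksum-based layout: $ARTIFACTORY_HOME/data/filestore/<aa>/<sha1>
FILESTORE = "/var/opt/jfrog/artifactory/data/filestore"  # hypothetical path

def sha1_of(path, chunk_size=1 << 20):
    """Stream the file and return its SHA-1 hex digest."""
    digest = hashlib.sha1()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

bad = []
for dirpath, _dirnames, filenames in os.walk(FILESTORE):
    for name in filenames:
        path = os.path.join(dirpath, name)
        # In a checksum-based filestore the filename *is* the SHA-1,
        # so any mismatch means the copy corrupted or misplaced the binary.
        if sha1_of(path) != name:
            bad.append(path)

print("corrupted or misplaced files:", bad or "none")
```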
I hope this information helps you.
Related
We are using Artifactory Enterprise and, in addition to "normal" usage, we would like to just store some binaries in Artifactory. This is so we can limit egress and pull the binaries from Artifactory instead of the general Internet. Is this possible? Is there a documentation link that will help explain the process?
Yes, this can be done by creating a generic local repository and deploying the binaries through the UI or the REST API; you can then consume the binaries from that generic local repository. Refer to this blog as well.
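For the REST API route, a deployment is just an HTTP PUT to the repository path. A minimal Python sketch, where the host, repository key, credentials, target path and file name are all hypothetical:

```python
import requests

# Hypothetical values: adjust the host, repository key, credentials and file.
base = "https://artifactory.example.com/artifactory"
repo = "generic-local"
target_path = "installers/my-tool/1.0.0/my-tool-1.0.0.tar.gz"

with open("my-tool-1.0.0.tar.gz", "rb") as fh:
    resp = requests.put(
        f"{base}/{repo}/{target_path}",
        data=fh,                      # streamed upload of the binary
        auth=("admin", "password"),   # or use an API key / access token header
    )
resp.raise_for_status()
print(resp.json()["downloadUri"])    # Artifactory returns deploy metadata as JSON
```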
Is there a script or any other automated process for migrating artifacts into JFrog Artifactory? We are currently working on this and need more information to carry out the process. Please help us in achieving this. Thanks in advance.
If you have an existing artifact repository, JFrog Artifactory supports acting as an artifact proxy while you are in the process of migrating to Artifactory.
I would recommend the following:
Create a local repository in Artifactory.
Create a remote repository in Artifactory that points to your current artifact repository.
Create a virtual repository in Artifactory that contains both the local and remote repositories.
Iterate over all your projects so they publish to the local Artifactory repository and pull from the virtual repository.
The advantage of this workflow is that you can port things over piece by piece, rather than trying to do it all at once. If you point a dependency at Artifactory that hasn't been ported there yet, Artifactory will proxy it for you. When the dependency is ported over, the change will be transparent to its users.
When you have moved everything to your local Artifactory repository, you can remove the remote repository from your virtual repository.
The relevant documentation is available here: https://www.jfrog.com/confluence/display/RTF/Configuring+Repositories
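The repository setup in the first three steps can also be scripted against the Repositories REST API, which creates a repository from a PUT with a JSON config. A sketch in Python, where the host, credentials, repository keys and the legacy server URL are all hypothetical, and the package type is assumed to be Maven:

```python
import requests

base = "https://artifactory.example.com/artifactory"   # hypothetical host
auth = ("admin", "password")                           # or an access token

def create_repo(key, config):
    # PUT /api/repositories/<key> creates a repository from a JSON config.
    resp = requests.put(f"{base}/api/repositories/{key}", json=config, auth=auth)
    resp.raise_for_status()

# 1. Local repository that projects will publish to.
create_repo("libs-local", {"rclass": "local", "packageType": "maven"})

# 2. Remote repository proxying the legacy artifact server (hypothetical URL).
create_repo("legacy-remote", {
    "rclass": "remote",
    "packageType": "maven",
    "url": "https://old-repo.example.com/repository",
})

# 3. Virtual repository aggregating both; builds resolve from here.
create_repo("libs-virtual", {
    "rclass": "virtual",
    "packageType": "maven",
    "repositories": ["libs-local", "legacy-remote"],
})
```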
For an Enterprise account, I'd assume S3 storage and a significant number of artifacts, so there will be no easy, automated way to do it. It is also highly dependent on the storage implementation chosen for the on-prem solution. If you plan to use S3 storage, JFrog can help perform S3 replication. In other scenarios the solution will be different. I suggest contacting support.
I'm wondering how other Artifactory admins handle this, so here's my question:
We're starting to use Artifactory to manage our artifacts, internal as well as external. The external artifacts are all available in an internal repository. This is a result of a conversion from a file-based repository to Artifactory.
Now this is starting to cause issues and I'm wondering how others are managing the external dependencies? As an Artifactory Administrator I want to be sure that my developers only use artifacts which have the correct license so I don't want to have a "feel free to download everything from the internet" culture.
I want to provide some sort of a "whitelisted and approved" set of external Artifacts.
Is this possible using Artifactory OSS, or do we manually download the artifacts from a remote repository and deploy them to our local repository?
Thank you in advance!
This can be done by writing a user plugin, but it requires the Pro version of Artifactory. You can see here examples of a governance-control plugin that was written in the past.
With the OSS version you can't reject user downloads based on license.
Hope that answers your question.
I am looking for a solution that would allow me to have a network share where people can access (read-only) the artifacts from an Artifactory repository.
Why? We use Artifactory to also keep track of big binaries like installation kits, ISO images and so on, and it takes a lot of time to download all of them (sometimes as zips), unpack them and run them. If these were exported to an NFS/SMB share, people would be able to simply mount them in order to use them.
How can we achieve this? Please keep in mind that we also want to automate this, so the files would be updated by Artifactory when needed.
Artifactory supports WebDAV out of the box.
It seems that's not possible at the moment, and there is a feature request for enabling it:
https://www.jfrog.com/jira/browse/RTFACT-8302
Feel free to vote and comment on it, so JFrog realises how important this use case is.
I guess they should be able to provide a script that mirrors/syncs a repository to an NFS share, but that would almost double the storage space needed.
If they instead used hardlinks or symlinks to create a browsable tree of the repository inside the storage directory, this would be solved and no sync would be needed.
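In the meantime, a mirror script of the kind described above is straightforward to roll yourself. Here is a minimal Python sketch that uses Artifactory's file-list API and copies anything new or changed onto the share; the host, repository, share path and credentials are hypothetical, and it could run from cron so the share stays updated:

```python
import os
import requests

# Hypothetical values: adjust host, repository and target share path.
base = "https://artifactory.example.com/artifactory"
repo = "bigfiles-local"
share = "/mnt/artifactory-mirror"     # the NFS/SMB export
auth = ("mirror-user", "password")

# The file-list API returns every file in the repository as JSON.
listing = requests.get(f"{base}/api/storage/{repo}?list&deep=1", auth=auth)
listing.raise_for_status()

for entry in listing.json()["files"]:
    if entry.get("folder"):
        continue
    rel = entry["uri"].lstrip("/")
    dest = os.path.join(share, rel)
    # Naive change detection: re-download only when the size differs.
    if os.path.exists(dest) and os.path.getsize(dest) == int(entry["size"]):
        continue
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with requests.get(f"{base}/{repo}/{rel}", auth=auth, stream=True) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as fh:
            for chunk in resp.iter_content(1 << 20):
                fh.write(chunk)
```

As noted above, this roughly doubles the storage needed; a symlink/hardlink tree inside the storage directory would avoid that, but requires access to Artifactory's internals.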
Our CloudBees Jenkins SBT builds spend a lot of time re-downloading a considerable number of third-party jars any time we get a clean VM. If we could download the jars once and never again into a shared cache, that would speed things up wonderfully.
It would seem our WebDAV repo would fit the bill. The only issue I can think of is SBT's lock file, which should prevent contention between multiple builds, though I'm not sure whether that works on a shared drive (this suggests maybe not). Might there be other issues that could trip us up?
An alternative might be to use our CloudBees Artifactory server as a proxy for third-party jars, then mount Artifactory via WebDAV, though that sounds more complicated, and this suggests Ivy might still copy files from WebDAV to its cache (which is still better than downloading to the cache).
Thanks.
I heard people are saving the resulting artifacts of SBT builds to a Maven repo (I think this may help https://cloudbees.zendesk.com/entries/20836643-sbt-publish-to-repositories).
Note that the realm of the credentials must exactly match the realm of the server (https://groups.google.com/forum/?fromgroups=#!searchin/simple-build-tool/cloudbees/simple-build-tool/ovoxXM8fe7A/dAFQhdpcIvkJ).
Also be sure to create the folder before uploading the jars: AFAIK, WebDAV requires explicit directory creation.
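If you ever need to do that directory creation outside of sbt, WebDAV uses the MKCOL method, and each collection in the path has to be created one level at a time. A minimal Python sketch, where the endpoint, credentials, path and jar name are all hypothetical:

```python
import requests

# Hypothetical WebDAV endpoint and credentials.
base = "https://repository-example.forge.cloudbees.com/private"
auth = ("jenkins", "password")   # realm must match the server's, as noted above

# WebDAV has no implicit "mkdir -p": each collection (folder) in the path
# must be created explicitly with MKCOL before the jar can be PUT into it.
path_parts = ["com", "example", "mylib", "1.0.0"]
url = base
for part in path_parts:
    url = f"{url}/{part}"
    resp = requests.request("MKCOL", url, auth=auth)
    # 201 = created, 405 = collection already exists; anything else is an error.
    if resp.status_code not in (201, 405):
        resp.raise_for_status()

# Now the jar itself can be uploaded with a plain PUT.
with open("mylib-1.0.0.jar", "rb") as fh:
    requests.put(f"{url}/mylib-1.0.0.jar", data=fh, auth=auth).raise_for_status()
```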