Nexus Repository Manager OSS 3.9.0-01.
I wish to create a 'proxy' Nexus repository that will be a replica of the public PyPI repository. Other machines can then be configured to point to this Nexus repo, so that a 'pip install' on those machines works even if there is no Internet connection. Accordingly, I created a proxy repository of type 'pypi(proxy)'.
When I browse this repo, there aren't any components/assets, but whenever someone does a 'pip install' against this repo, the package shows up in the interface, e.g.: pip install --user pyspark --verbose
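For reference, pointing pip at the proxy looks something like this (the hostname and repository name are placeholders for whatever your Nexus instance uses):

    # Install through the Nexus PyPI proxy instead of pypi.org
    # (nexus.example.com and pypi-proxy are placeholders)
    pip install --user pyspark --verbose \
        --index-url https://nexus.example.com/repository/pypi-proxy/simple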
What I am looking for is to clone/copy all the packages in the PyPI repository at once, so that future 'pip install' runs refer to this local copy and don't go to the Internet every time. Once a day, the local copy should be updated.
Is it possible to do so in Nexus OSS?
What you are trying to achieve is a PyPI mirror repository, not a proxy.
The PyPI proxy repository behaviour you described is correct, because it is a proxy, not a mirror. Nexus Repository Manager does not provide functionality to create a mirror of another repository.
However, you could use a PyPI mirror client (e.g. bandersnatch) to obtain a copy of all packages, then move those files over to your PyPI hosted repository and ask Nexus to reindex the files. You would then have to repeat the process periodically to keep your mirror up to date.
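As a rough sketch of that approach (bandersnatch reads its settings from a config file, /etc/bandersnatch.conf by default in the versions I have used; note that a full PyPI mirror needs a very large amount of disk space):

    # Install the mirror client and run an initial sync
    pip install bandersnatch
    bandersnatch mirror    # the first run writes a default config to edit

    # Then keep the mirror current with a daily cron entry, e.g.:
    # 0 2 * * * /usr/local/bin/bandersnatch mirror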
Related
I need to add SLES repositories (updates.suse.com) as a remote repository in Artifactory. I guess this would be done via an "RPM" remote repository, but after entering the link (including the token) I always get a 403. I suspect the problem is that the link should be accessed as a yum repo, with the metadata being read first.
How can I get this working?
I want to prevent pulling in new dependencies from one build to the next, due to the way third parties may define their dependencies.
Currently I'm utilizing Remote Repositories in JFrog Artifactory, which reduces the downloads from public repos.
In my build scripts I can hard-code the versions of the third-party modules/libraries I want to pull in. But if one of those dependencies hasn't pinned its own dependencies, it could pull in a new version on the next build.
So I was curious whether there is a feature within JFrog Artifactory to copy an artifact and its dependencies from a remote repo into a local repo?
There is a concept of virtual repos in Artifactory; a virtual repo is a combination of remote and local repos.
local repo --> mostly used for custom build/artifacts/packages
remote repo --> points to the repo server of the technology configured.
For example, with an npm virtual repo:
npm local --> custom packages
npm remote --> https://www.npmjs.com/~npmre
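Coming back to the question of copying an artifact from a remote repo into a local repo: one option is Artifactory's copy REST API, since a remote repo's downloaded artifacts are addressable through its '-cache' repository. A sketch, where the host, credentials, repo names, and path are all hypothetical:

    # Copy one cached package from the remote repo's cache into a local repo
    curl -u admin:password -X POST \
      "https://artifactory.example.com/artifactory/api/copy/npm-remote-cache/lodash/-/lodash-4.17.21.tgz?to=/npm-local/lodash/-/lodash-4.17.21.tgz"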
My employer has been misusing Bintray as our binary repository for some time. We are finally moving to Artifactory instead and closing down Bintray. But this seems to be an almost impossible task. There is no way of exporting Bintray repos to a zip. Downloading the repos means manually downloading each file from the UI or through their API. I have tried two approaches for automation:
1) Using wget to crawl our Bintray, like this:
wget -e robots=off -o ~/wget.log -w 1 -m -np --user=<user> --password=<password> "https://<subdomain>.bintray.com"
which yielded all of the files in the repos. But this only solves half the problem: I couldn't find out how to import the files into a repository in Artifactory (all the repos are over 100 MB each and therefore can't be uploaded, for some reason).
2) I set the Bintray repos up as remote repositories and enabled Active Replication. That seems to have worked for now, but I don't know whether the content will be removed when the Bintray account is closed, or even whether it is actually stored in Artifactory. Therefore I would like to convert the remote repo into a local repo, to make sure that it is permanently stored in Artifactory. Is there a way of doing this? If so, how?
I'll try to address both of your questions below.
What do you mean you can't upload more than 100 MB? Which version of Artifactory are you using? An on-prem or SaaS-based installation? How are you trying to upload your files to Artifactory? Have you tried importing the content using Artifactory's import feature (Admin --> Import & Export --> Repository Import)?
It sounds like you are using the UI for the upload; if so, you can configure the max upload size in the Admin --> General Configuration page.
If you mean that you have all of the Bintray content cached in your remote repository cache in Artifactory, just use the "Copy" or "Move" option and move the content to a local repository. This will ensure that all of the content is stored locally.
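The same copy can be scripted against the REST API instead of the UI; a sketch, where the host, credentials, repo names, and folder are all hypothetical:

    # Copy a whole folder from the remote repo's cache into a local repo
    curl -u admin:password -X POST \
      "https://artifactory.example.com/artifactory/api/copy/bintray-remote-cache/some-folder?to=/bintray-local/some-folder"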
I am trying to install sbt on our Red Hat Linux server (RHEL 6.8). The server doesn't have an Internet connection.
I downloaded sbt-1.0.x.zip from GitHub, but I don't see installation instructions. The README.md file points to http://www.scala-sbt.org/release/docs/Getting-Started/Setup, which tells me to use the yum command. But that would require an Internet connection.
Can anyone help?
Thank you.
sbt 0.13.15 supports offline installation with a preloaded local repo:
sbt 0.13.15 adds two new repositories called “local-preloaded-ivy” and “local-preloaded” that point to ~/.sbt/preloaded/. The purpose for the repositories is to preload them with sbt artifacts during the initial installation, instead of resolving from the remote repository on the first run. This enables installation of sbt without network connection.
To enable resolving of your own dependencies, it should be sufficient to add them to the preloaded directory.
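As a sketch of that last step (the coordinates below are only an example, and the Maven-style layout under ~/.sbt/preloaded/ is an assumption based on how the preloaded repositories are described):

    # Fetch the jar and pom on a machine with Internet access, then copy
    # them into the offline machine's preloaded repository
    mkdir -p ~/.sbt/preloaded/com/typesafe/config/1.3.1
    cp config-1.3.1.jar config-1.3.1.pom ~/.sbt/preloaded/com/typesafe/config/1.3.1/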
You need to download the RPM from Bintray; see the instructions here:
Source: http://www.scala-sbt.org/0.13/docs/Installing-sbt-on-Linux.html
Depending on what dependencies are unfulfilled, you may need to download additional RPM files that sbt depends on.
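Once the RPM (and any dependency RPMs) are copied onto the offline server, the install itself doesn't need a repository at all; the exact file name depends on the version you downloaded:

    # Install the downloaded RPM directly, without contacting a repository
    rpm -ivh sbt-0.13.15.rpm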
I set up a test Artifactory server, downloaded some test RPMs from a public mirror, and then deployed those RPMs through Artifactory to a local Artifactory yum repo (the whole point of this is to test Artifactory with yum integration). I then set everything else up and did a bunch of testing, so generally speaking the environment has been shown to work (meaning artifactory.repo is configured correctly and whatever else needed to be done). However: if I "yum install" an RPM that is BOTH stored locally in my Artifactory yum repo AND available in the public mirrors, yum will pull from the public mirror.
How do I deal with this?
Should I deal with this?
Or should I just let yum pull from the public mirror all the time and only use artifactory with yum for my company's rpms?
I mean, what if I want to just build my own repo and not deal with the public mirrors in some cases...?
Is there anything that can "preference" this stuff as opposed to just blowing away the public mirror repo config files in /etc/yum.repos.d/?
Well, it depends on your use case.
By default, Artifactory remote repositories come with a local cache where they store all downloaded artifacts - so using a local repository to store the same artifacts is redundant.
You can configure your yum client to pull from the remote repo, the local repo, or both if you'd like - it has its own internal ordering for which one it will access first.
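For example, one way to make yum prefer the Artifactory repo over the public mirrors is the priorities plugin (a sketch: the repo id and URL are hypothetical, and on RHEL 6 this requires the yum-plugin-priorities package to be installed):

    # /etc/yum.repos.d/artifactory.repo  (repo id and URL are hypothetical;
    # lower priority number = higher preference, via yum-plugin-priorities)
    [artifactory-local]
    name=Artifactory local yum repo
    baseurl=https://artifactory.example.com/artifactory/yum-local/
    enabled=1
    gpgcheck=0
    priority=1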
You would mainly use a local repository for cases where you want to be 100% certain that only specific versions are used in your build and don't want to rely on what the yum client chooses as an appropriate version, or for cases where you build your own packages for internal use and don't want anyone from 'outside' (outside the team/company, whatever) having access to them.