Making internal nexus repos available publicly - nexus

I would like to make some Nexus repos, which are available only from within our company's network, publicly available. To this end, I set up a host in a DMZ and installed a Nexus instance there with proxy repos pointing to the internal Nexus repos. The problem is that anyone checking the Nexus UI running in the DMZ does not see the actual content of the internal repos, because artifacts appear in the proxy only as they are requested. I need some kind of smart-proxy-like feature, but I am running the OSS Nexus release.
One suggestion I found was to rsync the appropriate folders over to the externally available host. The problem is that I only want to make specific repositories available, and as far as I can see, repository contents are not separated at the filesystem level.
Nor could I find suitable methods in the REST API that would let me periodically sync the artifacts over.
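For illustration, this is roughly the kind of periodic sync I was hoping to script against the API (only a sketch; it assumes the v1 components endpoint is even available in this release, and the host and repository names are placeholders):

    import requests

    SRC = "http://internal-nexus.example:8081"   # internal Nexus (placeholder)
    REPO = "internal-releases"                   # repository to mirror (placeholder)

    # Walk the component list of one repository and download every asset,
    # so the files could then be pushed on to the DMZ instance.
    params = {"repository": REPO}
    while True:
        page = requests.get(f"{SRC}/service/rest/v1/components", params=params).json()
        for component in page["items"]:
            for asset in component["assets"]:
                data = requests.get(asset["downloadUrl"]).content
                # ... write `data` to disk or re-upload it to the DMZ Nexus here ...
        token = page.get("continuationToken")
        if not token:
            break
        params["continuationToken"] = token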
Does anyone have any suggestions?
Nexus release: OSS 3.13.0-01
Thanks in advance!

Related

How do I get (remote/virtual) repositories to actually show up in Artifactory OSS' repository administration page?

I have an instance of Artifactory OSS (latest version) running in a docker container locally.
We have a remote instance of Artifactory (non-OSS) running as well.
In my local instance, I set up a remote repository of package type Ivy pointing to each of the repositories we have set up in our remote non-OSS instance.
Once I have created a remote repository configuration, I can see each remote repository I created and its artifacts being served under the Application --> Artifactory --> Artifacts page. However, under the Administration --> Repositories --> Repositories page, where I expect to be able to make changes to the configurations (I have logged in as Admin with administrator privileges, by the way), none of the repositories I set up are actually visible! I only see a 0 Repositories count and a No remote repositories message where I expect to see a list.
When I do Ivy resolves against my local (remote) repository it works as expected, so the repositories are definitely working... they just don't show up for administration.
I have tried rolling back to earlier versions of Artifactory OSS but that hasn't changed anything.
I can tediously work around this with a combination of the REST API and the UI, but I really just want to be able to administer the configurations within the web app... Am I just being dumb about something, or is there a known issue regarding this? I have searched around and I haven't found an answer to this issue, so any help or direction would be greatly appreciated! Thanks!
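Edit: for anyone curious, the REST-based half of my tedious workaround looks roughly like this (an untested sketch; the URL, credentials, and repository key are placeholders):

    import requests

    BASE = "http://localhost:8081/artifactory"   # my local OSS instance (placeholder)
    AUTH = ("admin", "password")                 # admin credentials (placeholder)

    # List every repository configuration, including the remote Ivy ones
    # that the Administration page refuses to show.
    for repo in requests.get(f"{BASE}/api/repositories", auth=AUTH).json():
        print(repo["key"], repo["type"])

    # Fetch and update a single remote repository's configuration by key.
    key = "ivy-remote-example"                   # placeholder repository key
    config = requests.get(f"{BASE}/api/repositories/{key}", auth=AUTH).json()
    config["description"] = "updated via REST because the UI hides the repo"
    requests.post(f"{BASE}/api/repositories/{key}", auth=AUTH, json=config)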
I was able to reproduce it on my end as well on version 7.19.1; it seems this issue happens when working with Ivy on Artifactory OSS.
I have opened a bug for this on JFrog Jira.

Artifactory not syncing org tree with jcenter

I'm setting up a new Artifactory installation for the first time in my life. I downloaded the tar and extracted it OK, and got some firewall rules in place to allow HTTPS to jcenter.bintray.com. After an initial refresh I see loads of artifacts in the com tree that must come from jcenter, so all seems fine, but when I perform simple Maven tasks like mvn help:active-profiles I only get warnings and errors indicating that none of the relevant artifacts are available from my Artifactory.
I have checked the firewall logs and found no outgoing traffic from my Artifactory server to anything that's not permitted. What have I missed? My Artifactory is OSS version 7.5.7 rev 705070900.
Artifactory remote repositories do not work as a mirror of the external repository they are pointing at.
Remote Artifactory repositories proxy the external repository, which means that you have to actively request artifacts. When an artifact is requested, Artifactory fetches it from the external repository and caches it inside Artifactory. Further requests for a cached artifact are served from Artifactory without going out to the external repository.
The list of artifacts you are seeing consists of the ones available in the external repository. This feature is called remote browsing and is available for some of the package types supported by Artifactory.
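To see the caching in action, you can request an artifact through the remote repository and then check its -cache counterpart; a minimal sketch, with the base URL, repository name, and artifact path all being placeholders:

    import requests

    BASE = "http://localhost:8081/artifactory"      # your Artifactory (placeholder)
    REMOTE = "jcenter"                              # a remote (proxy) repository
    PATH = "junit/junit/4.12/junit-4.12.jar"        # example artifact path

    # First request: Artifactory fetches the artifact from the external
    # repository and stores it in the remote repository's cache.
    r = requests.get(f"{BASE}/{REMOTE}/{PATH}")
    print(r.status_code, len(r.content))

    # Subsequent requests are served from the local cache, which is also
    # browsable as the repository's -cache counterpart.
    print(requests.head(f"{BASE}/{REMOTE}-cache/{PATH}").status_code)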
I found the issue, sort of. For reasons I now understand I have plugin repositories. I added the true source for the plugins to my list of plugin repositories, and that solved the issue for me.

Uploading custom jar to cx-server nexus

So, I am trying to set up a CI/CD pipeline with the s4sdk. I successfully completed all the steps described in this blog. Everything seems to be running smoothly, but my build is failing with the following error message:
The following artifacts could not be resolved: com.sap.xs2.security:security-commons:jar:0.28.6, com.sap.xs2.security:java-container-security:jar:0.28.6, com.sap.xs2.security:java-container-security-api:jar:0.28.6, com.sap.security.nw.sso.linuxx86_64.opt:sapjwt.linuxx86_64:jar:1.1.19: Could not find artifact com.sap.xs2.security:security-commons:jar:0.28.6 in s4sdk-mirror (http://s4sdk-nexus:8081/repository/mvn-proxy/)
Now, this error message makes sense to me, since I remember downloading these artifacts from the SAP download center, so they are not available on Maven Central.
I think this error can be resolved by manually uploading those artifacts to the Nexus server, but I don't know how. According to the Nexus documentation, there is a web UI reachable under http://<cx-server-ip>:8081, but it is somehow not responding.
I can confirm with docker ps that both the Jenkins and Nexus containers are running and that the Nexus container is listening on TCP 8081. I am also able to reach the Jenkins frontend to configure and run my pipeline.
What am I missing? Is uploading the missing artifacts to Nexus the right approach? Any help is appreciated.
The Nexus container you see acts as a download cache and is by design not accessible from outside, to prevent accidental changes to it. Also, its life-cycle is controlled by the cx-server script, so even if you installed packages there manually, they would be gone once you upgrade Jenkins.
I think the best way to handle this would be to set up another Nexus instance where you install the required packages, and to configure the pipeline to use that as described here (mvn_repository_url). This Nexus needs to be configured as a mirror for Maven Central. We don't have specific docs on how to do that, but this post describes a similar setup.
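For the "install the required packages" part, a plain HTTP PUT into a hosted Maven repository on that separate Nexus is usually enough; a rough sketch, where the Nexus URL, credentials, and repository name are placeholders:

    import requests

    NEXUS = "http://nexus.example.com:8081"      # the separate Nexus instance (placeholder)
    AUTH = ("admin", "admin123")                 # deployment credentials (placeholder)
    REPO = "releases"                            # a hosted maven2 repository (placeholder)

    group_path = "com/sap/xs2/security"          # Maven layout path of the artifact
    artifact, version = "security-commons", "0.28.6"
    jar = f"{artifact}-{version}.jar"

    with open(jar, "rb") as f:
        r = requests.put(
            f"{NEXUS}/repository/{REPO}/{group_path}/{artifact}/{version}/{jar}",
            data=f,
            auth=AUTH,
        )
    print(r.status_code)  # 201 means the jar can now be resolved from this Nexus

    # Repeat the same PUT for the accompanying .pom file so Maven can also
    # resolve the artifact's metadata.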
In this setup, you might want to disable the download cache, as it is redundant (set cache_enabled to false).
I hope this helps.
Kind regards
Florian
The sidecar Nexus acts as a read-only cache for Maven and npm artifacts on the host (and agents) where cx-server is running. By default it looks up artifacts from Maven Central and the default npm registry. In the current implementation, the cache is completely deleted after stopping cx-server, leading to a loss of all internal state.
If you want to use custom sources, you can set them in server.cfg via mvn_repository_url and npm_registry_url. This is documented in the operations guide, which you can find here: https://github.com/SAP/cloud-s4-sdk-pipeline/blob/master/doc/operations/operations-guide.md
In your case, you have to specify a Maven repository that includes the dependencies in question.

Connect one Artifactory to another Artifactory

Our setup includes a company-wide Artifactory that holds in-house-built artifacts as well as fetches publicly available artifacts. I'm trying to set up a local Artifactory at our location that would fetch publicly available artifacts over the regular internet, but would connect to the company-wide Artifactory for our in-house-built artifacts. Is this possible?
In my local Artifactory setup, I put the company-wide Artifactory URL as a Remote Repository. I can hit the Test button and it tells me that it successfully connected. However, when I go to download an artifact it does not work. I should add that publicly available artifacts can be fetched through my local Artifactory, so at least I can get to jcenter.bintray.
Can one Artifactory be connected to another Artifactory? If yes, is there a way to test whether this connection works?
I don't think we would use all the contents of the company-wide Artifactory, so I don't want to do an export and import to the local instance, or set up replication. I would prefer to fetch on demand. Is this possible?
Edit: Thanks to #DarthFennec for pointing me to Smart Remote Repositories, I have solved my problem. For others who have the same problem:
Please follow the steps on the previously mentioned page to set up the Smart Remote Repository. In my case Artifactory did not detect that the remote was another instance of Artifactory and did not give me any extra options to set, but I was not interested in those anyway.
Note: You can always click the Test button to make sure that your connection to the Remote Repository works.
Next, go to Admin -> Virtual Repositories, select your Repository Key, and move your Smart Repository from the Available Repositories into the Selected Repositories. Click Save & Finish at the bottom and you should be good to go.
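If you prefer to script this instead of clicking through the UI, the repository configuration REST API can create the same Smart Remote Repository; a sketch with placeholder values (I have not double-checked every field name):

    import requests

    BASE = "http://localhost:8081/artifactory"   # local Artifactory (placeholder)
    AUTH = ("admin", "password")                 # local admin credentials (placeholder)

    # Minimal remote-repository configuration pointing at the company-wide instance.
    config = {
        "rclass": "remote",
        "packageType": "maven",
        "url": "https://artifactory.company.example/artifactory/libs-release",
        "username": "svc-user",                  # credentials for the remote instance
        "password": "secret",
    }

    # PUT creates a repository with the given key.
    r = requests.put(f"{BASE}/api/repositories/company-libs-remote", auth=AUTH, json=config)
    print(r.status_code, r.text)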
I'm not sure exactly what your problem ended up being, but if you want to remote one Artifactory repository from another, it should be a smart remote repository. This is when Artifactory detects that a remote is pointing at another Artifactory, and it enables a number of extra features, like download statistics, property replication, and remote browsing.
An important thing to keep in mind when configuring a smart remote repository is that depending on the package type, you might need to point the remote at <artifactory>/api/<type>/<repo>, rather than just <artifactory>/<repo>. This is the case for Bower, Chef, CocoaPods, Docker, Go, NuGet, Npm, Php Composer, Puppet, Pypi, RubyGems, and Vagrant repositories. Other repository types should use the standard <artifactory>/<repo> URL.
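A tiny helper to illustrate the rule (the type list comes from the paragraph above; the exact <type> path segment for each package type is illustrative and should be checked against the Artifactory docs):

    # Package types whose smart remote URL needs the /api/<type>/ prefix
    # (path segments below are illustrative, e.g. RubyGems -> "gems").
    API_PREFIX_TYPES = {
        "bower", "chef", "cocoapods", "docker", "go", "nuget", "npm",
        "composer", "puppet", "pypi", "gems", "vagrant",
    }

    def smart_remote_url(base: str, package_type: str, repo: str) -> str:
        """Build the URL to point a smart remote repository at."""
        if package_type.lower() in API_PREFIX_TYPES:
            return f"{base}/api/{package_type.lower()}/{repo}"
        return f"{base}/{repo}"

    # e.g. .../api/npm/npm-local for npm, but plain .../libs-release for Maven
    print(smart_remote_url("https://artifactory.example.com/artifactory", "npm", "npm-local"))
    print(smart_remote_url("https://artifactory.example.com/artifactory", "maven", "libs-release"))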

Is Nexus able to provide artifacts from repositories that are not configured?

I am using Nexus as a proxy on our company's build server. Sometimes developers add new dependencies to their projects without telling me. Hence, the list of proxy repositories is sometimes not in sync with what is really required. As a result, the jobs on our Jenkins build server fail because of missing artifacts. Jenkins is configured to use the Nexus proxy repositories.
Is it possible to tell Nexus to download an artifact from the original repository if it is not found in the proxied ones?
I assume you mean that developers add repository entries into their Maven pom file to get further dependencies and/or modify their settings.xml.
On the other hand, the CI server is configured to get everything from Nexus with mirrorOf set to *.
There is no automatic addition of repositories in this setup. You can do two things, imho:
create scripts that add the proxy repositories for you using the Nexus REST API (see the sketch after this list)
or educate your developers to tell you to add the proxy repos to Nexus
Potentially, you can even use the Maven Enforcer rule that disallows repositories in the POM, set up an explicit message, and allow them to create proxy repositories in Nexus. Just don't forget to have those added to the group you are using on the CI server.
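For the scripting option mentioned above, here is a rough sketch of creating a Maven proxy repository through the Nexus 3 repositories management API. This endpoint only exists on newer 3.x releases (older ones need the Groovy scripting API instead), the field names should be verified against your Nexus version, and all names and URLs below are placeholders:

    import requests

    NEXUS = "http://nexus.example.com:8081"      # your Nexus (placeholder)
    AUTH = ("admin", "admin123")                 # admin credentials (placeholder)

    # Minimal proxy-repository definition for a maven2 proxy repository.
    body = {
        "name": "spring-milestones-proxy",
        "online": True,
        "storage": {"blobStoreName": "default", "strictContentTypeValidation": True},
        "proxy": {
            "remoteUrl": "https://repo.spring.io/milestone",
            "contentMaxAge": 1440,
            "metadataMaxAge": 1440,
        },
        "negativeCache": {"enabled": True, "timeToLive": 1440},
        "httpClient": {"blocked": False, "autoBlock": True},
        "maven": {"versionPolicy": "RELEASE", "layoutPolicy": "STRICT"},
    }

    r = requests.post(f"{NEXUS}/service/rest/v1/repositories/maven/proxy", json=body, auth=AUTH)
    print(r.status_code)

    # Remember to also add the new repository to the group the CI server uses,
    # otherwise the mirrored builds still won't see it.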
