Currently, we have a Nexus hosted repository at a remote site (in a different geographic location) and a local proxy repository that proxies this hosted repository.
Whenever new versions of files are added to the remote hosted repository, the first request for a newly added file from the build system downloads it to the local proxy repository.
The problem I have now is that some of the files being added are really huge (around 400 MB), so the first build takes a long time to finish.
Is there a way we can poll the remote hosted repository and auto-mirror it?
Nexus Professional 2.x supports this as part of the Smart Proxy feature set. It is an experimental feature that is off by default, but it should work just fine. Give it a go!
To turn it on, go to "Administration/Capabilities", check "Show Advanced", then select the "Smart Proxy: Subscribe" capability and enable preemptive fetch.
Update: as of Nexus 2.3 this is no longer deemed experimental, and you can configure it for each repository that you proxy.
I cannot comment on Manfred's answer, so here is a new variant:
If you are running Nexus Professional, you can use Smart Proxy to synchronize repositories.
You first need to go through the general setup described at http://www.sonatype.com/books/nexus-book/reference/smartproxy.html (establish trust, set up the publishing hosted repository, set up the receiving proxy repository). Only then is the capability created and Manfred's answer applies:
Go to "Administration/Capabilities", check "Show Advanced" and select the
"Subscribe" capability for your proxy repo. There you can turn on preemptive
fetching, which will automatically download new artifacts from your hosted repository to the proxy.
In the near future I will start using Artifactory in my project. I have been reading about local and remote repositories and I am a bit confused about their practical use. As far as I understand:
Local repositories are for pushing and pulling artifacts. They have no connection to a remote repository (e.g. the npm registry at https://www.npmjs.com/).
Remote repositories are for pulling and caching artifacts on demand. They work only one way; it is not possible to push artifacts to them.
If I am right up to this point, then in practice you only need a remote repository for npm if you do not develop npm modules but only use them to build your application. In contrast, if you need to both pull and push Docker container images, you need one local repository for pushing and pulling custom images and one remote repository for pulling official images.
Question #1
I am confused because our Artifactory admin created a local npm repository for our project. When I discussed the topic with him, he told me that I first need to get packages from the internet to my PC and then push them to the Artifactory server. This does not make any sense to me, because I have seen remote repositories on the same server and all we need is to pull packages from npm. Is there a point that I am missing?
Question #2
Are artifacts in a remote repository's cache kept until they are intentionally deleted? Is there a default retention policy (e.g. delete packages older than 6 months)? I ask because it is important to keep packages until a meteor hits the servers (per the company's archiving policy).
Question #3
We will need to get official Docker images and customize them for CI. It would be a bit hard to maintain one local repo for pulling and pushing custom images and one remote repo for pulling official images. Let's say I need to pull the official Ubuntu latest image, modify it, push it, and finally pull the custom image back. In this case it would be pulled through the remote repository, pushed to the local repo, and pulled again from the local repo. Is it possible to use a virtual repository to do this seamlessly as one repo?
Question #1: This does not make any sense to me, because I have seen remote repositories on the same server and all we need is to pull packages from npm. Is there a point that I am missing?
Generally, you would want to use a remote repository for this. You point your client at the remote repository, and JFrog Artifactory fetches packages from the remote site and caches them locally as needed.
In some very secure environments, corporate policies do not even allow this (the servers may not even be connected to the internet); instead, third-party libraries are manually downloaded, vetted, and then uploaded to a local repository. I don't think that is your case, and your admin may simply not understand the intended usage of the repository types.
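For example, pointing an npm client at an Artifactory remote repository is a one-line change (a minimal sketch; the hostname and repository name are placeholders, and the api/npm/&lt;repo&gt; path is the usual Artifactory endpoint form for npm repositories):

# use the Artifactory npm remote repository as the registry (placeholder URL)
npm config set registry https://artifactory.example.com/artifactory/api/npm/npm-remote/
npm install lodash   # fetched from npmjs.com through Artifactory and cached there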
Question #2: Are artifacts in a remote repository's cache kept until intentionally deleted? Is there a default retention policy?
They will not be deleted unless you actively configure it to do so.
For some repository types there are built-in retention mechanisms, such as a maximum number of snapshots or a maximum number of tags, but not for all of them, and even where they exist they must be actively turned on. Different organizations have different policies for how long artifacts must be retained. There are many ways to clean up old artifacts, but ultimately it depends on your own requirements.
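As one illustration (not a recommendation; the repository name, credentials and age threshold are placeholders), an Artifactory Query Language search such as the following lists cached items older than six months and could feed a cleanup job:

# list items in a remote-repository cache created more than 6 months ago (placeholders throughout)
curl -u admin:password -X POST "https://artifactory.example.com/artifactory/api/search/aql" \
  -H "Content-Type: text/plain" \
  -d 'items.find({"repo":"npm-remote-cache","created":{"$before":"6mo"}})'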
Question #3: Is it possible to use a virtual repository to do this seamlessly as one repo?
A virtual repository lets you aggregate your local and remote repositories and expose them as a single source. So you can do something like:
docker pull myarturl/docker/someimage:sometag
... docker build ...
docker push myarturl/docker/someimage:sometag-my-modified-version
docker pull myarturl/docker/someimage:sometag-my-modified-version
It is also security-aware so if the user only has access to the local stuff and not the remote stuff, they will only be able to access the local stuff even though they are using the virtual repository that contains both of them.
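For reference, such a virtual repository can also be created through the Artifactory REST API rather than the UI; this is only a rough sketch, and the repository keys, hostname and credentials are placeholders:

# create a virtual Docker repository aggregating a local and a remote repository (admin permissions required)
curl -u admin:password -X PUT "https://myarturl/artifactory/api/repositories/docker" \
  -H "Content-Type: application/json" \
  -d '{"key":"docker","rclass":"virtual","packageType":"docker","repositories":["docker-local","docker-remote"]}'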
That said, I don't see why it would be any harder to explicitly use different repositories:
docker pull myarturl/docker-remote/someimage:sometag
... docker build ...
docker push myarturl/docker-local/someimage:sometag-my-modified-version
docker pull myarturl/docker-local/someimage:sometag-my-modified-version
This also has the added advantage that you know they can only pull your modified version of the image and not the remote one (though you can also accomplish that by setting up the correct permissions).
So, I am trying to set up a CI/CD pipeline with the s4sdk. I successfully completed all the steps described in this blog. Everything seems to be running smoothly, but my build is failing with the following error message:
The following artifacts could not be resolved: com.sap.xs2.security:security-commons:jar:0.28.6, com.sap.xs2.security:java-container-security:jar:0.28.6, com.sap.xs2.security:java-container-security-api:jar:0.28.6, com.sap.security.nw.sso.linuxx86_64.opt:sapjwt.linuxx86_64:jar:1.1.19: Could not find artifact com.sap.xs2.security:security-commons:jar:0.28.6 in s4sdk-mirror (http://s4sdk-nexus:8081/repository/mvn-proxy/)
This error message makes sense to me, since I remember downloading these artifacts from the SAP download center, so they are not available on Maven Central.
I think this error can be resolved by manually uploading those artifacts to the Nexus server, but I don't know how. According to the Nexus documentation, there is a web UI reachable under http://<cx-server-ip>:8081, but it is somehow not responding.
I can confirm with docker ps that both the Jenkins and Nexus containers are running and that the Nexus container is listening on TCP 8081. I am also able to reach the Jenkins frontend to configure and run my pipeline.
What am I missing? Is uploading the missing artifacts to the nexus the right approach? Any help is appreciated.
The Nexus container you see acts as a download cache and is by design not accessible from outside, to prevent accidental changes to it. Also, its life cycle is controlled by the cx-server script, so even if you installed packages there manually, they would be gone once you upgrade Jenkins.
I think the best way to handle this is to set up another Nexus instance where you install the required packages, and to configure the pipeline to use it as described here (mvn_repository_url). This Nexus needs to be configured as a mirror for Maven Central. We don't have specific docs on how to do that, but this post describes a similar setup.
In this setup, you might want to disable the download cache, as it is redundant (set cache_enabled to false).
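Once that Nexus instance is up, uploading the missing SAP artifacts could look roughly like this; it is only a sketch, the coordinates are taken from the error message above, and the repository URL, repository id and jar path are placeholders:

# upload one of the SAP artifacts to a hosted repository on your own Nexus
# (repeat per artifact; the matching <server> credentials must exist in settings.xml)
mvn deploy:deploy-file \
  -DgroupId=com.sap.xs2.security \
  -DartifactId=security-commons \
  -Dversion=0.28.6 \
  -Dpackaging=jar \
  -Dfile=security-commons-0.28.6.jar \
  -DrepositoryId=my-nexus \
  -Durl=http://my-nexus.example.corp:8081/repository/maven-releases/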
I hope this helps.
Kind regards
Florian
The sidecar Nexus acts as a read-only cache for Maven and npm artifacts on the host (and agents) where the cx-server is running. By default it looks up artifacts from Maven Central and the default npm registry. In the current implementation, the cache is completely deleted after stopping the cx-server, leading to a loss of all internal state.
If you want to use custom sources, you can set them in server.cfg via mvn_repository_url and npm_registry_url. This is documented in the operations guide, which you can find here: https://github.com/SAP/cloud-s4-sdk-pipeline/blob/master/doc/operations/operations-guide.md
In your case, you have to specify a maven repository which includes the dependencies in question.
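For illustration, the relevant server.cfg entries might look like this; the URLs are placeholders for your own repositories, and mvn_repository_url, npm_registry_url and cache_enabled are the settings named above:

# server.cfg (cx-server configuration) -- point the pipeline at your own repositories
mvn_repository_url="https://nexus.example.corp/repository/mvn-group/"
npm_registry_url="https://nexus.example.corp/repository/npm-group/"
# optionally disable the sidecar download cache if it is redundant in this setup
cache_enabled=false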
Our setup includes a company-wide Artifactory that holds in-house-built artifacts as well as fetches publicly available artifacts. I'm trying to set up a local Artifactory at our location that would fetch publicly available artifacts directly from the internet, but would connect to the company-wide Artifactory for our in-house-built artifacts. Is this possible?
In my local Artifactory setup, I added the company-wide Artifactory URL as a remote repository. I can hit the Test button and it tells me that it successfully connected. However, when I go to download an artifact it does not work. I should add that publicly available artifacts can be fetched through my local Artifactory, so at least the jcenter/bintray remote works.
Can one Artifactory be connected to another Artifactory? If yes, is there a way to test whether this connection works?
I don't think we will be using all the contents of the company-wide Artifactory, so I don't want to do an export and import to the local instance, or set up replication. I would prefer to fetch on demand. Is this possible?
Edit: Thanks to @DarthFennec for pointing me to Smart Remote Repositories, I have solved my problem. For others who have the same problem:
Follow the steps on the page mentioned above to set up the Smart Remote Repository. In my case Artifactory did not detect that the remote was another Artifactory instance and did not give me any options to set, but I was not interested in those anyway.
Note: you can always click the Test button to make sure that your connection to the remote repository works.
Next, go to Admin -> Virtual Repositories, select your Repository Key, and move your smart remote repository from the Available Repositories list to the Selected Repositories list. Click Save & Finish at the bottom and you should be good to go.
I'm not sure exactly what your problem ended up being, but if you want to remote one Artifactory repository from another, it should be a smart remote repository. This is when Artifactory detects that a remote is pointing at another Artifactory, and it enables a number of extra features, like download statistics, property replication, and remote browsing.
An important thing to keep in mind when configuring a smart remote repository is that depending on the package type, you might need to point the remote at <artifactory>/api/<type>/<repo>, rather than just <artifactory>/<repo>. This is the case for Bower, Chef, CocoaPods, Docker, Go, NuGet, Npm, Php Composer, Puppet, Pypi, RubyGems, and Vagrant repositories. Other repository types should use the standard <artifactory>/<repo> URL.
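Concretely, the two URL forms look like this (hostname and repository keys are placeholders):

# Maven and other standard types: point the smart remote at the repository itself
https://central-artifactory.example.com/artifactory/libs-release-local
# npm, Docker, PyPI and the other listed types: point it at the package-type API endpoint instead
https://central-artifactory.example.com/artifactory/api/npm/npm-local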
As we all know, we can run a meteor app by just typing meteor in a terminal.
By default it will start a server and use port 3000.
So why do I need to deploy it using MUP etc.?
I can configure it to use port 80 or use nginx to route to port 80 for the app. So the port is not the point.
Edit:
Assume meteor is running on a VPS or cloud server with public IP address, not a personal computer.
MUP does a few extra things you can do yourself:
it 'bundles' the code into a single archive, using meteor build
the JavaScript ends up in one file and the CSS in another; both are minified and obfuscated, so they are smaller, faster to load, and harder to decipher on the client.
some packages are also meant to be removed when running in production. For example Meteor Toys, the utility toolset to look up collections and much more, is not bundled into the production bundle, as per the instructions in its package. This ensures you don't deploy code with security vulnerabilities (Meteor Toys basically opens up client-side deletes/updates etc. if you're not careful).
So, in short, it installs a minimal version of your site, making sure that what's meant for development only doesn't get pushed to a production environment.
EDIT: one other reason to do this is that you don't need all the Meteor build tools on your production server; that can add up to a lot of stuff, especially if you keep caches going for a while...
I believe it also takes care of hooking up to a remote MongoDB instance (at least it used to be the case with the free Meteor hosting), which is more scalable and fault tolerant than running the database on the same instance as the web server, as well as provisioning storage etc. if needed.
Basically, to deploy a Meteor app yourself manually, you need to do the following (a command-line sketch follows this list):
on your dev box:
run meteor build to bundle your app into a tar file (using the --architecture flag corresponding to the OS of the server)
on the server:
install node v0.10 (or whatever version of node is currently required by Meteor)
you might have to install fibers@1.0.5 (but I believe this is now part of the meteor install already)
untar the bundle, get into bundle/programs/server/ and run npm install
run the server with node main.js in the bundle folder.
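Put together, a minimal sketch of those steps could look like this (app name, paths, hostname and the Mongo URL are placeholders; MONGO_URL, ROOT_URL and PORT are the environment variables a bundled Meteor app expects):

# on the dev box: bundle the app for the server's OS
meteor build ../output --architecture os.linux.x86_64
scp ../output/myapp.tar.gz user@myserver:/opt/myapp/

# on the server: unpack the bundle and install its server dependencies
cd /opt/myapp && tar -xzf myapp.tar.gz    # extracts a directory named "bundle"
(cd bundle/programs/server && npm install)

# run the app with the usual environment variables
cd bundle && MONGO_URL='mongodb://localhost:27017/myapp' \
  ROOT_URL='http://myserver.example.com' PORT=3000 node main.js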
The purpose of deploying an application is to situate your project on hardware outside of your local machine. For example, if you deploy an application on Heroku, you create a repository on Heroku's systems and that code base is used to serve your application from their servers.
If you just run an application on your personal system, you will suffer from limited network and resource availability, as well as under-use of compute at off-peak hours, since your machine must remain available for additional users while having no alternative tasks. Hosting providers supply resources as needed, and their diverse client base allows their systems to work around the clock on a global scale.
We upload artifacts to Nexus through the file protocol with the Maven deploy plugin. Sometimes those artifacts do not appear right away in the Nexus web interface; I have to do 'expire cache' and refresh the page. Moreover, this causes builds that depend on these artifacts to fail.
I guess this is because we deploy through the file protocol. Is there a way to prevent this? I saw the 'Not Found Cache TTL' setting in the Nexus interface, but I'm not sure I understand the doc. If I set it to zero, will this work?
Thanks
PW
Deploying directly to the file system should only be used in extreme cases such as bulk manipulations or imports. To make Nexus fully recognize the changes on disk, you need to expire the cache and you may then have to rebuild the metadata. Both of these can be triggered from the repository screen. If you want the artifacts to be searchable, you also have to fire off the indexer task.
All of those things happen automatically when you deploy via HTTP/HTTPS directly to Nexus, which is the way it is intended to be used.
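As a rough sketch of the switch, you can point the Maven deploy plugin at Nexus' HTTP endpoint instead of a file URL; the server id, hostname and repository name below are placeholders, the matching <server> credentials must exist in settings.xml, and with maven-deploy-plugin 3.x the property format drops the middle default:: segment:

# deploy over HTTP/HTTPS instead of the file protocol (placeholders throughout)
mvn deploy -DaltDeploymentRepository=nexus-releases::default::http://nexus.example.com:8081/repository/maven-releases/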