The situation is as follows:
We have a few repositories in a local Sonatype Nexus OSS instance for two environments: a "site" repository and "hosted" repositories.
Jenkins uploads its builds there; currently these are zip files with *.bin inside. In a particular Nexus repo I see a list of files like:
*_$BUILD_1.zip
*_$BUILD_2.zip
...
*_$BUILD_n.zip
I want a Rundeck job that chooses the appropriate version from the appropriate repo to deploy. I set up the nexus-rundeck-plugin in Nexus and the nexus-step-plugins in Rundeck and thought everything would be fine, but the Rundeck Nexus steps require values like "Group", "Artifact", "Version", "Packaging" and "Classifier", and my *.zip files in the local Nexus repo do not have these parameters. Do you have any suggestions on how I can integrate these?
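One way to bridge that gap, if the hosted repository uses the Maven format, is to have Jenkins publish each zip with explicit Maven coordinates so the Rundeck Nexus steps can resolve them. A minimal sketch, assuming Maven is available on the Jenkins node; the coordinates, server id and URL below are placeholders, not your actual values:

# Hypothetical coordinates and repo URL; credentials come from the "nexus" server entry in settings.xml
mvn deploy:deploy-file \
  -DgroupId=com.example.myapp \
  -DartifactId=myapp \
  -Dversion=${BUILD_NUMBER} \
  -Dpackaging=zip \
  -Dfile=myapp_${BUILD_NUMBER}.zip \
  -DrepositoryId=nexus \
  -Durl=http://nexus.example.com/content/repositories/releases

With coordinates in place, the Group/Artifact/Version/Packaging fields in the Rundeck step map directly onto what was deployed.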
I have a build machine for an Android app. This machine has no access to the Internet.
There is a local Nexus repository. The application's Gradle build is set up to load dependencies from this Nexus repository, and it works fine.
But I also need gradlew to download from that Nexus repository.
I've tried setting distributionUrl to the local Nexus URL and it works: the distribution is installed and the daemon is started.
But it then fails to load https://dl.google.com/android/repository/addons_list-5.xml with a timeout.
Is there a way to redirect all gradle requests to nexus repositories?
To redirect all Gradle requests to your Nexus repository, you can add the following to your gradle.properties file:
systemProp.http.proxyHost=<hostname>
systemProp.http.proxyPort=<port>
systemProp.http.nonProxyHosts=localhost|127.0.0.1
Replace <hostname> and <port> with the hostname and port of your Nexus repository. This tells Gradle to route all HTTP requests through your local Nexus repository, including the request for the addons_list-5.xml file that you are currently having trouble loading.
It's also a good idea to make sure that your Nexus repository is properly configured and has all the necessary dependencies. You can check the logs of your Nexus repository to see if there are any errors or issues that might be preventing Gradle from accessing the dependencies it needs.
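Note that the URL that times out is served over HTTPS, so the HTTPS counterparts of those properties are usually needed as well. A minimal sketch, again with <hostname> and <port> as placeholders for your Nexus host:

# HTTPS equivalents of the proxy settings above (placeholders, not real values)
systemProp.https.proxyHost=<hostname>
systemProp.https.proxyPort=<port>
systemProp.https.nonProxyHosts=localhost|127.0.0.1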
My employer has been misusing Bintray as our binary repository for some time. We are finally moving to Artifactory instead and closing down Bintray. But this seems to be an almost impossible task. There is no way of exporting Bintray repos to a zip. Downloading the repos means manually downloading each file from the UI or through their API. I have tried two approaches for automation:
1) Using wget to crawl our Bintray, like this:
wget -e robots=off -o ~/wget.log -w 1 -m -np --user <user> --password <password> "https://<subdomain>.bintray.com"
which yielded all of the files in the repos. But this only solves half the problem: I couldn't figure out how to import the files into a repository in Artifactory (all the repos are over 100 MB each and therefore, for some reason, can't be uploaded).
2) I set the Bintray repos up as remote repositories and enabled Active Replication. That seems to have worked for now, but I don't know whether they will be removed when the Bintray account is closed, or even whether they are actually stored in Artifactory. I would therefore like to convert the remote repos to local repos to make sure the content is permanently stored in Artifactory. Is there a way of doing this? If so, how?
I'll try to address both of your questions below.
What do you mean you can't upload more than 100 MB? Which version of Artifactory are you using? An on-prem or SaaS-based installation? How are you trying to upload your files to Artifactory? Have you tried importing the content using Artifactory's import feature (Admin --> Import & Export --> Repository Import)?
It sounds like you are using the UI for the upload; if so, you can configure the max upload size in the Admin --> General Configuration page.
If you mean that you have all of the content from Bintray cached in your remote repository cache in Artifactory, just use the "Copy" or "Move" option to move the content to a local repository. This will ensure that all of the content is stored locally.
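If you want to script that step instead of using the UI, the same thing can be done with Artifactory's item-copy REST API. A rough sketch with placeholder host, credentials and repository names (a remote repository's cache is addressed as <repo-key>-cache):

# Copy everything cached from the Bintray remote repo into a local repo (placeholder names and credentials)
curl -u admin:password -X POST \
  "https://artifactory.example.com/artifactory/api/copy/bintray-remote-cache/?to=/libs-release-local/"

The "Move" equivalent is the api/move endpoint with the same parameters.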
Firstly, I am new to Nexus, so please bear with me if this is too noob a question. Let me first explain how our current build/deployment process works.
HOW WE DO IT AT PRESENT:
We have a project that is Maven based. There is a parent pom.xml and two module pom.xmls; each child module produces a JAR file when built. Currently I do the build/deployments manually: I check out code from SVN to my local machine, run mvn clean install, and use a bash script I created to bundle the 2 JAR files + a few other resources (present only in the SVN repo and downloaded locally) into a tar.gz file. I then SCP this to the app server and run install scripts that deploy the tar.gz file.
HOW WE WANT TO DO IT:
We plan to automate the build in Bamboo (which I have already done). Then the built artifact needs to be uploaded to a Nexus repository (due to security restrictions, the SCP task in Bamboo does not work because SSH connectivity cannot be established from the Bamboo server to the app server).
MY FIRST HURDLE:
I have created a Bash Script task in Bamboo which does the bundling (the 2 JARs from the child module POMs + resources) into a tar.gz. This tar.gz is present at a path a/b/c/d on my Bamboo machine.
How do I upload this tar.gz to the Nexus repository?
MY CONFUSION:
I have read about uploading artifacts to Nexus, but I only understand it for the case where a single jar/ear/war file is created by the build. We want the bundle instead. So if I make changes to settings.xml & pom.xml to configure the upload to Nexus, each JAR file will be uploaded to a separate path in Nexus, and then I would have to configure the upload of the resource files (which are not part of the build) separately. Is my understanding correct? Please let me know how to proceed with this.
Thanks in advance!!!
Use the Maven Assembly Plugin to create an assembly that contains your artifacts and resources, and then your regular maven deploy will deploy it into Nexus.
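A minimal sketch of what that could look like, assuming the assembly is built from the parent pom.xml and using a hypothetical descriptor at src/assembly/bundle.xml (ids, directories and file names below are placeholders):

<!-- In the parent pom.xml: build and attach a tar.gz assembly so a regular "mvn deploy" uploads it -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptors>
      <descriptor>src/assembly/bundle.xml</descriptor>
    </descriptors>
  </configuration>
  <executions>
    <execution>
      <id>make-bundle</id>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
</plugin>

<!-- src/assembly/bundle.xml: the two module JARs plus the extra resources in one tar.gz -->
<assembly>
  <id>bundle</id>
  <formats>
    <format>tar.gz</format>
  </formats>
  <moduleSets>
    <moduleSet>
      <useAllReactorProjects>true</useAllReactorProjects>
      <binaries>
        <outputDirectory>lib</outputDirectory>
        <unpack>false</unpack>
      </binaries>
    </moduleSet>
  </moduleSets>
  <fileSets>
    <fileSet>
      <directory>resources</directory>
      <outputDirectory>resources</outputDirectory>
    </fileSet>
  </fileSets>
</assembly>

The attached tar.gz then gets its own coordinates alongside the parent artifact, so Nexus stores the whole bundle as a single artifact rather than only the individual JARs.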
I have an internal Nexus OSS server with dozens of regular Maven JAR artifacts in Inhouse, Snapshots and various proxied repos.
I have also installed the Nexus P2 Repository Plugin and the P2 Bridge Plugin (2.6.3-01) and the Nexus Unzip Plugin (0.12.0). I can browse to the .meta/p2 folder of our group repository, but it is currently empty.
My understanding is that I should be able to have one (or a combination) of the aforementioned plugins automagically convert the regular JARs into bundles (or maybe I need to manually run the Felix maven-bundle-plugin to redeploy each of them) and then make them available to Tycho.
What I don't understand is which repo I should be pointing the P2 plugins at in the Capabilities tab and in the update repo/meta inf scripts configuration - is it necessary to do each hosted/proxy repo individually, or can I do it for the Public group repo?
Also, how can I easily verify which bundles/plugins are available via the P2 repo?
I want to upload an artifact from Jenkins running on CloudBees to Nexus central, since mine is an OSS project stored in Maven Central. To do so, I need to install GPG keys locally. How can I do this on CloudBees? I've done it on my local Linux box, but I'd need access to some sort of Linux environment on CloudBees.
Regards,
Marco
You can upload your GPG key to your private repository on CloudBees Forge, and set the Maven job to use -Dgpg.homedir=/private/
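As a rough sketch of what the job's Maven invocation could look like (the path under /private/ and the passphrase handling are placeholders, not the actual Forge layout):

# Hypothetical invocation; point gpg.homedir at wherever the keyring was uploaded
mvn clean deploy -Dgpg.homedir=/private/<account>/gpg -Dgpg.passphrase=<passphrase>

Both gpg.homedir and gpg.passphrase are standard maven-gpg-plugin properties, so no extra scripting is needed on the build side.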