I am working on a delivery pipeline within Spinnaker. Spinnaker has support for searching Artifactory for artifacts and then triggering a pipeline. I have been publishing my Maven artifacts to bintray.com and assumed that this would work for triggering my pipelines.
I've configured Spinnaker with this information...
hal config repository artifactory enable
hal config repository artifactory search add bintray \
--base-url https://dl.bintray.com/$USERNAME \
--repo maven-repo \
--groupId $GROUP_ID \
--username $USERNAME \
--password $PASSWORD
However, I'm getting errors in the Igor service log saying...
2019-08-15 14:20:00.262 WARN 1 --- [RxIoScheduler-3] c.n.s.i.a.ArtifactoryBuildMonitor : Unable to query Artifactory for artifacts (HTTP 405):
I'm wondering if I am wrongly assuming that Bintray implements the Artifactory API.
Does bintray.com implement the Artifactory API?
Bintray's API isn't the same as Artifactory's.
It has its own API, documented here.
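For example (a hedged sketch; the subject, repo, and package names below are placeholders), you can query Bintray's own REST API directly with curl:
# Bintray's REST API lives at api.bintray.com, not behind Artifactory's API paths.
# $USERNAME, $API_KEY, $SUBJECT, and $PACKAGE_NAME are placeholders.
curl -u $USERNAME:$API_KEY \
  "https://api.bintray.com/packages/$SUBJECT/maven-repo/$PACKAGE_NAME"
As far as I can tell, Spinnaker's Artifactory poller issues POST search queries against the Artifactory API, which the dl.bintray.com download host doesn't serve; that would explain the HTTP 405 (method not allowed) above.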
I've run Artifactory using Docker.
I downloaded the JFrog CLI inside the container and configured it.
So ./jfrog rt ping returns
OK
Is there a way to perform a system-level export/import using the JFrog CLI?
I succeeded in performing it using the web UI, but couldn't find information on how to perform a system-level export/import in the documentation.
Edit
Succeeded to perform export using REST API:
curl -u admin:pass -X POST -H "Content-Type: application/json" --data @/tmp/export-settings.json http://localhost:8081/artifactory/api/export/system
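For reference, the settings payload can be created like this (a hedged sketch; the field values are assumptions, and the export API supports more fields than shown):
# Write a minimal export-settings payload; exportPath is where Artifactory
# (inside the container) will write the export. Values here are assumptions.
cat > /tmp/export-settings.json <<'EOF'
{
  "exportPath": "/tmp/artifactory-export",
  "includeMetadata": true,
  "createArchive": false,
  "failOnError": true
}
EOF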
You can invoke the same REST API using JFrog CLI's curl command as shown below. This way, you don't need to provide the URL and credentials. JFrog CLI's config storage will be used. You can manage this storage using the jfrog rt c command.
If you have multiple Artifactory servers configured, and you don't want to use the default server, the jfrog rt curl command also accepts the --server-id option, with the preconfigured Artifactory server ID as the value.
jfrog rt curl -X POST -H "Content-Type: application/json" --data @/tmp/export-settings.json api/export/system
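Putting the two together (a hedged sketch; the server ID "my-rt" and the credentials are placeholders):
# Register a server configuration under the ID "my-rt" (placeholder)
jfrog rt c my-rt --url=http://localhost:8081/artifactory --user=admin --password=pass
# Invoke the same export endpoint against that specific server
jfrog rt curl --server-id=my-rt -X POST -H "Content-Type: application/json" --data @/tmp/export-settings.json api/export/system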
This feature is currently not supported by the CLI.
Feel free to create a feature request.
I'm facing an issue while integrating Spinnaker with Nexus.
Basically, here is my process: I build a Docker image using Jenkins and upload it to Nexus. Next, I want to trigger Spinnaker pipelines when a new image is available on Nexus, to deploy apps on Kubernetes.
I've used these 2 commands
hal config provider docker-registry enable
hal config provider docker-registry account add my-docker-registry \
--address <pvtIP>:9082 \
--repositories repository/<repoName> \
--username <userName> \
--password
I'm getting the error below:
+ Get current deployment
Success
- Add the my-docker-registry account
Failure
Problems in default.provider.dockerRegistry.my-docker-registry:
! ERROR Unable to fetch tags from the docker repository:
repository/test-docker-snapshots/, Unrecognized SSL message, plaintext
connection?
? Can the provided user access this repository?
- WARNING None of your supplied repositories contain any tags.
Spinnaker will not be able to deploy any docker images.
? Push some images to your registry.
- Failed to add account my-docker-registry for provider
dockerRegistry.
Is it mandatory to have Nexus on HTTPS? I'm running on HTTP, and using it on an internal network only...
Please advise. Thanks.
If your Nexus repo is running on HTTP, then you should set the --insecure-registry flag in your command. Your final command would be as follows:
hal config provider docker-registry account add my-docker-registry \
--address <pvtIP>:9082 \
--repositories repository/<repoName> \
--insecure-registry true \
--username <userName> \
--password
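If the account still fails to validate, it can help to confirm that the registry answers the Docker Registry v2 API over plain HTTP (a hedged sketch; the host, credentials, and image name are placeholders):
# List repositories, then list tags for one image, via the Docker Registry v2 API
curl -u <userName> http://<pvtIP>:9082/v2/_catalog
curl -u <userName> http://<pvtIP>:9082/v2/<imageName>/tags/list
The second call should return a JSON tag list; the "None of your supplied repositories contain any tags" warning above suggests checking that at least one tagged image has actually been pushed.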
I need to set up a private Nexus OSS 3 instance for internal Node.js development at our company. The project dependencies have to be downloaded on a developer's computer, copied across to the private network, and then uploaded/published to the private Nexus instance.
We've written some scripts to pull all dependencies in .tgz format from the npm repo and copied them into the private network.
But how can I upload those .tgz files to the npm repo of my private Nexus without using the GUI?
You can upload using the UI, but you have chosen not to use this method.
You can upload using the API; see the docs.
You can upload using npm publish; e.g. npm --registry=http://nxrm.local/repository/npm-hosted publish package.tgz
For anyone interested in a fast solution, here's my procedure and scripts:
create a hosted npm repo in Nexus
create an account for package upload
activate the 'npm Bearer Token Realm' (under Security > Realms) so that account can log in with npm
run the download script to download packages from the public npm repository
run the upload script to upload packages to the private npm repository
Script for downloading npm packages from the public repository:
#!/bin/bash
# Collect the resolved tarball URL of every installed dependency and download it.
NODE_MODULES_PATH=./node_modules
PACKAGES_PATH=./packages
mkdir -p $PACKAGES_PATH
# Each installed package.json records its source tarball URL in the "_resolved"
# field; globstar is needed so ** also matches scoped packages in nested folders.
shopt -s globstar
for url in $(grep _resolved $NODE_MODULES_PATH/**/package.json | awk -F '"' '{print $4}' | sort -u); do
  if wget -c -q "$url" -P $PACKAGES_PATH; then
    echo "url=$url"
  else
    (>&2 echo "error downloading url=$url")
  fi
done
Script for uploading npm packages to the private Nexus npm repository:
#!/bin/bash
REPOSITORY=[REPOSITORY_URL]
PACKAGES_PATH=./packages
# Log in once; Nexus stores a bearer token used by the publishes that follow.
npm login --registry=$REPOSITORY
# Publish every downloaded tarball to the private registry.
for package in $PACKAGES_PATH/*.tgz; do
  npm publish --registry=$REPOSITORY $package
done
Note:
the packages have to be downloaded locally in the normal way (i.e. via npm install) first
the download script should be run in the project root directory
the [REPOSITORY_URL] can be obtained from your private hosted npm repository in Nexus
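For example (a hedged sketch; the host and repository name are placeholders), the registry URL usually has this shape:
# Placeholder URL; copy the real one from the repository's entry in the Nexus UI
REPOSITORY=http://nexus.internal:8081/repository/npm-hosted/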
You can also use the REST API to manage components directly:
POST /v1/components
For instance, to upload the package my-npm-package-0.0.0.tgz to the repository npm-private use the following:
curl -u user:password -X POST "http://localhost:8081/service/rest/v1/components?repository=npm-private" -H "accept: application/json" -H "Content-Type: multipart/form-data" -F "npm.asset=@my-npm-package-0.0.0.tgz;type=application/x-compressed"
The complete live API specs can be found at endpoint /#admin/system/api
The official nexus documentation can be found at https://help.sonatype.com/repomanager3/rest-and-integration-api/components-api
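If you have a whole folder of tarballs, the same endpoint can be driven from a loop (a hedged sketch; the user, password, host, and repository name are placeholders carried over from the curl above):
# Upload every .tgz in ./packages via the components API
for package in ./packages/*.tgz; do
  curl -u user:password -X POST "http://localhost:8081/service/rest/v1/components?repository=npm-private" \
    -H "accept: application/json" -H "Content-Type: multipart/form-data" \
    -F "npm.asset=@$package;type=application/x-compressed"
done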
Is there any option/way to download a Docker image using wget or curl?
My Docker image is present in JFrog Artifactory.
First, any curl command to an Artifactory repo would need the API key of your account. See "How to use docker registry API with Artifactory Docker Repository when not using docker client?"
You can use the following header: "X-JFrog-Art-Api" and pass the user's API key to authenticate. The user's API key can be retrieved from the "User Profile" page in Artifactory. The Artifactory REST API supports three forms of authentication, and you can use any one of them with the Docker repository.
Second, downloading an image is not trivial (as you need to get all the layers).
You might have some chance adapting the moby contrib script download-frozen-image-v2.sh
Or try docker-registry-debug which will print a curl command for fetching the layer, as explained here.
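To give an idea of the shape of those requests (a hedged sketch; the host, repo key, image name, and tag are placeholders), fetching a manifest from an Artifactory Docker repository looks roughly like:
# Placeholders throughout; X-JFrog-Art-Api carries the Artifactory API key
curl -H "X-JFrog-Art-Api: $API_KEY" \
  "https://artifactory.example.com/artifactory/api/docker/docker-local/v2/my-image/manifests/latest"
Each layer listed in the returned manifest then has to be fetched separately from the blobs endpoint, which is the part the scripts above automate.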
I found this answer while looking to do the same thing with GitLab. I modified the suggested moby contrib script to do the same thing for a GitLab instance.
Download download-gitlab-frozen-docker-image.sh
Mark it executable (chmod +x download-gitlab-frozen-docker-image.sh)
Run the script:
./download-gitlab-frozen-docker-image.sh <FOLDER_NAME> <DOCKER_URL>
where FOLDER_NAME is the folder in which to store the frozen Docker image and DOCKER_URL is the URL straight out of the GitLab container registry.
Import the frozen folder into docker (at your convenience/any future date):
tar -cC '<FOLDER_NAME>' . | docker load
We have built our first Node.js app and I want to integrate Jenkins for continuous integration. We are running the Node server behind Nginx as a proxy, with source control in GitLab. I need example configurations or steps.
I am looking for any doc or wiki link; if you can point me in the right direction, it will be helpful.
I have a CentOS server and managed to install and configure Jenkins, but I haven't found the proper way to connect my GitLab server. I need to run npm commands after each build. If anyone has already done that, please let me know.
Thanks
Your question is still vague, but I will try to describe here how I set up Jenkins and Node.js with GitLab integration. I have tested this on CentOS 6.
Steps
OpenJDK (Java) should be installed first.
sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
sudo yum install jenkins
sudo service jenkins start
Log in as the jenkins user
sudo -s -H -u jenkins
Now generate the SSH key in the folder /var/lib/jenkins/.ssh and copy the public key to GitLab
ssh-keygen
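For example (a hedged sketch; the key type and file path are defaults, adjust as needed):
# Generate a key pair under the jenkins user's home
ssh-keygen -t rsa -f /var/lib/jenkins/.ssh/id_rsa
# Print the public key; paste it into GitLab under Settings > SSH Keys
cat /var/lib/jenkins/.ssh/id_rsa.pub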
Install the Gitlab Hook Plugin and the GitLab Plugin in Jenkins.
Create a new item (project) by accessing your Jenkins in the browser.
After creating the project, go to its Configure page (left side menu). Most of the options there are self-explanatory: set up the Git repo URL and the Git browser URL.
Under Build Triggers, check the option Build when a change is pushed to GitLab. Jenkins displays a GitLab CI Service URL next to that option.
Paste that URL into the webhooks section of your GitLab repo's settings.
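That URL typically has this shape (a hedged sketch; the Jenkins host and job name are placeholders):
# Placeholder host and job name; GitLab will POST push events to this endpoint
http://jenkins.example.com:8080/project/my-nodejs-job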
This is how to run npm commands after the build:
There is a section called SSH Publisher.
In its exec commands section, add your commands (I have put my example; you can write your own commands):
cd project_dir
# Clean out the previous deployment
rm -rf public server package.json
# Unpack the new build artifact
tar -xvf projectname.tgz
ls
# Install production dependencies and restart the app
npm install --production
export NODE_ENV=production
forever restartall
# Run the API tests, then remove the uploaded archive
jasmine-node spec/api/frisbyapi_spec.js
rm -rf projectname.tgz
I have written most of the steps that I took to set up Jenkins, Node.js, and GitLab.
I might have forgotten a step. If you face any error, please post that as well.