Create multiple Artifactory repositories from JSON

I want to automate importing an existing repository structure from another Artifactory instance via a .json file.
So far, I have managed to create a single repo from JSON with the following command:
curl -X PUT --insecure -u admin -H "Content-type: application/json" -T repository-config.json "https://artifactory.test.net/artifactory/api/repositories/acqbo-docker-release-local"
Is there a way to import multiple repositories (an array) from a single JSON file with a single curl call?

I ended up writing my own bash script for this purpose.
You will have to make a file, repos.list, with the repositories you want to copy, each on a new line.
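For example, repos.list might look like this (the repository names are placeholders):
acqbo-docker-release-local
acqbo-docker-snapshot-local
acqbo-npm-release-local
The script: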
#!/bin/bash
#############
# This script copies the repository structure from one Artifactory server to another
# repos.list file is required to have repositories that we want to copy, each in new line.
# user with the admin rights is necessary
#############
#Where to copy repos from and to
ARTIFACTORY_SOURCE="https://source.group.net/artifactory/api/repositories/"
ARTIFACTORY_DESTINATION="https://destination.group.net/artifactory/api/repositories/"
NOLINES=$(wc -l < repos.list) #get total number of lines in repos.list
COUNTER=1 #Set the counter to 1
while [ $COUNTER -le $NOLINES ] #loop once for each line in repos.list
do
REPONAME=$(awk "NR==$COUNTER" repos.list) #get only repo name, line by line
curl -X GET --insecure -u admin:adminpass "$ARTIFACTORY_SOURCE$REPONAME" > xrep.json #fetch the config of each repo from the source Artifactory and write it to xrep.json
curl -X PUT --insecure -u admin:adminpass -H "Content-type: application/json" -T xrep.json "$ARTIFACTORY_DESTINATION$REPONAME" #send the config from the json to the destination Artifactory server
#print in blue color
printf "\e[1;34m$COUNTER repo done\n\e[0m"
((COUNTER++))
done
printf "\e[1;34mAll repos exported!\n\e[0m"


How to unpack a zip uploaded to JFrog Artifactory?

I have a directory like /tmp/some-js/ (with a lot of folders).
I added it to a zip: /tmp/some-js.zip.
I have a structure in Artifactory like /npm-dev/some-js/*.
I put this zip into Artifactory with the command:
curl -u user:api-key -k -X PUT https://xxx.xx.xx.xx:8081/artifactory/npm_dev/some-js/ -T /tmp/some-js.zip
And I got a directory in Artifactory at /npm-dev/some-js/some-js.zip/*.
Is there a way to unpack the contents of some-js.zip into /npm-dev/some-js?
Uploading an archive file (such as a zip file) to Artifactory and extracting its content to a specific directory is done by:
PUT https://<jfrog-platform>/artifactory/the-repo/path/to/dir/file.zip
X-Explode-Archive: true
<file-content>
The content of file.zip will be extracted and deployed under the-repo/path/to/dir/, preserving the relative directory structure in the zip file. So if file.zip has the following structure:
foo/
|- bar.txt
|- baz.txt
The following files will be created in Artifactory:
the-repo/path/to/dir/foo/bar.txt
the-repo/path/to/dir/foo/baz.txt
Using curl and the details in the question:
curl -u user:api-key \
-k \
-X PUT \
https://xxx.xx.xx.xx:8081/artifactory/npm_dev/some-js/some-js.zip \
-T /tmp/some-js.zip \
-H "X-Explode-Archive: true"
For more information, see the documentation on Deploy Artifacts from Archive.

How to delete all contents of local Artifactory repository via REST API?

I'm using a local repository as a staging repo and would like to be able to clear the whole staging repo via REST. How can I delete the contents of the repo without deleting the repo itself?
Since I have a similar requirement in one of my environments, I'd like to share a possible approach.
It is assumed the JFrog Artifactory instance has a local repository called JFROG-ARTIFACTORY, which holds the latest JFrog Artifactory Pro installation RPM(s). For listing and deleting, I created the following script:
#!/bin/bash
# The logged-in user will also be the admin account for the Artifactory REST API
A_ACCOUNT=$(who am i | cut -d " " -f 1)
LOCAL_REPO=$1
PASSWORD=$2
STAGE=$3
URL="example.com"
# Check if a stage was provided; if not, default to PROD
if [ -z "$STAGE" ]; then
STAGE="repository-prod"
fi
# Going to list all files within the local repository
# Doc: https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-FileList
curl --silent \
-u"${A_ACCOUNT}:${PASSWORD}" \
-i \
-X GET "https://${STAGE}.${URL}/artifactory/api/storage/${LOCAL_REPO}/?list&deep=1" \
-w "\n\n%{http_code}\n"
echo
# Going to delete all files in the local repository
# Doc: https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-DeleteItem
curl --silent \
-u"${A_ACCOUNT}:${PASSWORD}" \
-i \
-X DELETE "https://${STAGE}.${URL}/artifactory/${LOCAL_REPO}/" \
-w "\n\n%{http_code}\n"
echo
So after calling
./Scripts/deleteRepository.sh JFROG-ARTIFACTORY Pa\$\$w0rd! repository-dev
against the development instance, it listed all files in the local repository JFROG-ARTIFACTORY (the JFrog Artifactory Pro installation RPMs), deleted them, and left the local repository itself in place.
You may change and enhance the script for your needs; also have a look at How can I completely remove artifacts from Artifactory?
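If you would rather delete items one by one instead of DELETE-ing the repository root, the File List response can drive a loop. A minimal sketch using the same variables as the script above (assumes jq is installed and that no item paths contain spaces):
# delete every item returned by the File List API
for uri in $(curl --silent -u "${A_ACCOUNT}:${PASSWORD}" \
    "https://${STAGE}.${URL}/artifactory/api/storage/${LOCAL_REPO}/?list&deep=1" \
    | jq -r '.files[].uri'); do
    curl --silent -u "${A_ACCOUNT}:${PASSWORD}" \
        -X DELETE "https://${STAGE}.${URL}/artifactory/${LOCAL_REPO}${uri}"
done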

How can I set the version of a raw file in a (Sonatype) Nexus raw repository?

I'm making an automatic script to upload a raw file to Nexus, and I need to set the version of this file. Is this possible? I've been checking the API but it doesn't seem possible.
The command I'm currently using to upload is:
curl --proxy $my-proxy -v --user 'user:pass' --upload-file ./myRawFile 'http://12.34.56.78:1234/repository/MyRawRepo/LoL/TheUploadedFile'
This command is used from an automatic script (and works) to upload the file, but I don't know how to set the version.
curl -k -u "xxx:xxx" -H 'Content-Type: multipart/form-data' --data-binary "@output.zip" -X PUT https://nexus.xxx.com/repository/{raw-reponame}/xxx/{version}/output.zip
The version number can be set by changing the {version} path segment.
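Applied to the upload command from the question, that just means adding a version segment to the path. A sketch (the 1.0.0 version is an assumption; raw repositories have no version metadata of their own, so the version lives in the path):
curl --proxy $my-proxy -v --user 'user:pass' --upload-file ./myRawFile 'http://12.34.56.78:1234/repository/MyRawRepo/LoL/1.0.0/myRawFile'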

OpenStack Swift cannot use bulk operation to auto extract tar file

I want to upload many files in a single operation in OpenStack Swift. I found the Bulk Operations middleware, which can auto-extract files from a tar archive. However, I failed to get the files extracted from the tar.
I PUT the tar file using the bulk operation like this:
curl -X PUT http://127.0.0.1:8080/v1/AUTH_test/ContainerName/$?extract-archive=tar \
-T theTarName.tar \
-H "Content-Type: text/plain" \
-H "X-Auth-Token: token"
I am sure that the storage URL, tar file path, and token are accurate. But I didn't get any response (success or error). When I list the objects in the container, I find just one object named 0extract-archive=tar was uploaded; the files in the tar were not extracted.
I want to know how to extract the tar automatically in OpenStack Swift so that all of the files in the tar show up in the container.
Thanks in advance.
The issue is the $? part. $? refers to the exit code of the last command in bash (http://tldp.org/LDP/abs/html/exit-status.html), which I suspect you're using. That is also why a single object named 0extract-archive=tar appeared: $? expanded to 0, so Swift never received the extract-archive=tar query parameter and instead stored an object with that literal name.
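You can reproduce the expansion locally (a quick illustration; the 0 assumes the previous command exited successfully):
$ echo "ContainerName/$?extract-archive=tar"
ContainerName/0extract-archive=tar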
If you'd like to use $ as the archive prefix, escape it with \ and quote the URL:
$ curl -X PUT \
"http://127.0.0.1:8080/v1/AUTH_test/container/\$?extract-archive=tar" \
-T test.tar \
-H "X-Auth-Token: <token>"
You should get the following output:
Number Files Created: 3
Response Body:
Response Status: 201 Created
Errors:

containerized nginx log rotation with logrotate

Nginx doesn't have native log rotation, so an external tool, such as logrotate, is required. Nginx presents a challenge in that the logs have to be reopened post rotation. You can send a USR1 signal to it if the pid is available in /var/run.
But when running in a docker container, the pid file is missing in /var/run (and the pid actually belongs to the host, since it is technically a host process).
If you don't reopen the logs, nginx doesn't log anything at all, though it continues to function otherwise as web server, reverse proxy, etc.
You can get the process id from the Pid attribute using docker inspect and use kill -USR1 {pid} to have nginx reopen the logs.
Here's the /etc/logrotate.d/nginx file I created:
/var/log/nginx/access.log
{
size 2M
rotate 10
missingok
notifempty
compress
delaycompress
postrotate
docker inspect -f '{{ .State.Pid }}' nginx | xargs kill -USR1
endscript
}
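To test the rule without waiting for the schedule, logrotate's dry-run flag is handy (run on the host, where this file lives):
logrotate -d /etc/logrotate.d/nginx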
If you want to run logrotate in a dedicated container (e.g. to rotate both nginx logs and Rails' file log) rather than on the host machine, here's how I did it. The trickiest part by far was, as above, getting the reload signals to nginx, Rails, etc. so that they would create and log to fresh logfiles post-rotation.
Summary:
put all the logs on a single shared volume
export docker socket to the logrotate container
build a logrotate image with logrotate, cron, curl, and jq (see the sketch after this list)
build logrotate.conf with postrotate calls using docker exec API as detailed below
schedule logrotate using cron in the container
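A minimal sketch of that image, as a Dockerfile (the base image, file names, and paths are assumptions; adjust to taste):
FROM debian:bullseye-slim
# logrotate does the rotating, cron schedules it,
# curl and jq talk to the Docker API over the mounted socket
RUN apt-get update && \
    apt-get install -y --no-install-recommends logrotate cron curl jq && \
    rm -rf /var/lib/apt/lists/*
# logrotate.conf holds the postrotate docker-exec calls shown below
COPY logrotate.conf /etc/logrotate.conf
# cron entry that invokes logrotate periodically
COPY logrotate-cron /etc/cron.d/logrotate-cron
CMD ["cron", "-f"]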
The hard part:
To get nginx (et cetera) to reload and thus connect to fresh log files, I sent exec commands to the other containers using Docker's API via the socket. It expects a POST with the command in JSON format, to which it responds with an exec instance ID. You then need to explicitly start that instance.
An example postrotate section from my logrotate.conf file:
postrotate
exec_id=`curl -X POST --unix-socket /var/run/docker.sock \
-H "Content-Type: application/json" \
-d '{"cmd": ["nginx", "-s", "reopen"]}' \
http:/v1.41/containers/hofg_nginx_1/exec \
| jq -r '.Id'`
curl -X POST --unix-socket /var/run/docker.sock \
-H "Content-Type: application/json" \
-d '{"Detach": true}' \
http:/v1.41/exec/"$exec_id"/start
endscript
Commentary on the hard part:
exec_id=`curl -X POST --unix-socket /var/run/docker.sock \
This is the first of two calls to curl, saving the result into a variable to use in the second. Also, don't forget to (insecurely) mount the socket into the container: '/var/run/docker.sock:/var/run/docker.sock'.
-H "Content-Type: application/json" \
-d '{"cmd": ["nginx", "-s", "reopen"]}' \
Docker's API docs say the command can be a string or array of strings, but it only worked for me as an array of strings. I used the nginx command line tool, but something like 'kill -SIGUSR1 $(cat /var/run/nginx.pid)' would probably work too.
http:/v1.41/containers/hofg_nginx_1/exec \
I hard-coded the container name; if you're dealing with something more complicated, you're probably also using a fancier logging service.
| jq -r '.Id'`
The response is JSON-formatted; I used jq to extract the id (excuse me, 'Id') to use next.
curl -X POST --unix-socket /var/run/docker.sock \
-H "Content-Type: application/json" \
-d '{"Detach": true}' \
The Detach: true is probably not necessary; it's just a placeholder for POST data that was handy while debugging.
http:/v1.41/exec/"$exec_id"/start
Making use of the exec instance ID returned by the first curl to actually run the command.
I'm sure it will evolve (say with error handling), but this should be a good starting point.
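For completeness, here's a sketch of how the logrotate container itself might be started, with the shared log volume and the Docker socket mounted (the image and volume names are assumptions):
docker run -d \
  --name logrotate \
  -v app_logs:/var/log/shared \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-logrotate-image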
