Artifactory backup

I would like to back up Artifactory. My Artifactory instance is started by Docker in a VM, version 6.12 with the default configuration:
<config version="1">
  <chain template="file-system"/>
</config>
The metadata is stored in a Derby database (the default). All files live in the volume /art/data (docker ... -v /art/data/arti.docker.home:/var/opt/jfrog/artifactory).
The artifacts and database together are about 1 TB.
I don't want to use the built-in backup mechanism from the UI. Instead, I plan to back up /art/data directly. Is that a proper approach?

Anything that allows you a swift restore is a valid approach.
I advise you to test a restore scenario from the backup you plan to take. If you are able to restore successfully, you can implement this solution.
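For a consistent copy, it is safest to stop the container first, since the embedded Derby database and filestore must not change mid-copy. A minimal sketch (the container name artifactory and the backup path are assumptions; adjust to your setup):

docker stop artifactory
tar -czf /backup/artifactory-$(date +%F).tar.gz -C /art/data arti.docker.home
docker start artifactory

To test the restore, unpack the archive on a scratch VM, mount it the same way (-v .../arti.docker.home:/var/opt/jfrog/artifactory), and check that the instance comes up with its repositories intact.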

Related

Artifactory: Converting remote repo to local repo

My employer has been misusing Bintray as our binary repository for some time. We are finally moving to Artifactory instead and closing down Bintray, but this seems to be an almost impossible task: there is no way to export Bintray repos to a zip, and downloading the repos means manually downloading each file from the UI or through their API. I have tried two approaches to automate this:
1) Using wget to crawl our Bintray, like this:
wget -e robots=off -o ~/wget.log -w 1 -m -np --user --password "https://.bintray.com"
which yielded all of the files in the repos. But this only solves half the problem: I couldn't find out how to import the files into a repository in Artifactory (the repos are over 100 MB each and therefore can't be uploaded, for some reason).
2) I set the Bintray repos up as remote repositories and enabled active replication. That seems to have worked for now, but I don't know whether the content will be removed when the Bintray account is closed down, or even whether it is actually stored in Artifactory. I would therefore like to convert the remote repo to a local repo, to make sure that it is permanently stored in Artifactory. Is there a way of doing this? If so, how?
I'll try to address both of your questions below.
What do you mean you can't upload more than 100 MB? Which version of Artifactory are you using? Is it an on-prem or SaaS-based installation? How are you trying to upload your files to Artifactory? Have you tried importing the content using Artifactory's import feature? (Admin --> Import & Export --> Repository Import)
It sounds like you are using the UI for the upload; if so, you can configure the max upload size in the Admin --> General Configuration page.
If you mean that you have all of the content from Bintray cached in your remote repository cache in Artifactory, just use the "Copy" or "Move" option to move the content into a local repository. This will ensure that all of the content is stored locally.
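If you prefer to script this rather than click through the UI, the same operation is exposed through Artifactory's REST copy API; a remote repository's cache is addressed as <repo-name>-cache. A sketch (host, credentials, repository keys and path are placeholders):

curl -u admin:password -X POST "http://artifactory.example.com/artifactory/api/copy/bintray-remote-cache/org/acme?to=/my-local-repo/org/acme"

The call copies the whole folder tree under the given path into the local repository.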

How do I restore phabricator if I deleted the files but the database is still intact?

So, I did a stupid rm -rf on the folder that contained the complete Phabricator installation.
The whole Phabricator database is still intact, though.
I cloned the required repos on the same old location:
somewhere/ $ git clone https://github.com/phacility/libphutil.git
somewhere/ $ git clone https://github.com/phacility/arcanist.git
somewhere/ $ git clone https://github.com/phacility/phabricator.git
Apache was already configured during previous install.
I then ran:
./bin/storage upgrade
After that I went to the address pointing to the Phabricator folder. Now I get the following error:
1146: Table 'phabricator_user.user_cache' doesn't exist
How do I resolve it? Or, in general, what's the best way to reinstall Phabricator using the old database?
Thanks
Well, if you still have the database, make a mysqldump of the data (you should be doing this by default anyway: a cron job running a backup script that copies the dump to another backup machine/USB drive/hard disk/cloud).
Do a fresh reinstall of Phabricator (even of the whole LAMP stack, if needed).
Import the backup.sql you created.
After setting the user/password/host/port in path_to_phab/conf/local/local.json, via the command line or simply by editing the file, try to run:
./bin/storage upgrade
This should work fine if you have the file storage engine set to the MySQL DB (not recommended). If you use a different storage engine (such as local disk), also restore that data, reproducing the paths from the fresh installation's config files, alongside the MySQL import.
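A sketch of that dump-and-import cycle (assuming root MySQL credentials and the default phabricator_ database prefix; Phabricator also ships its own wrapper for this, ./bin/storage dump):

# dump every phabricator_* database from the surviving MySQL instance
DBS=$(mysql -uroot -p -N -e "SHOW DATABASES LIKE 'phabricator\_%'")
mysqldump -uroot -p --databases $DBS > backup.sql
# after the fresh install: import the dump, then rebuild the schema
mysql -uroot -p < backup.sql
./bin/storage upgrade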

How to publish builds to Artifactory from GitLab CI?

I am looking for an easy and clean way to publish artifacts built with GitLab CI to Artifactory.
I was able to spot https://gitlab.com/gitlab-org/omnibus/blob/af8af9552966348a15dc1bf488efb29a8ca27111/lib/omnibus/publishers/artifactory_publisher.rb but I wasn't able to find any documentation on how to configure it to make it work.
Note: I am looking for a .gitlab-ci.yml approach, not for implementing it externally.
At a basic level, this can be done with the JFrog CLI tools. Unless you want to embed configuration in your .gitlab-ci.yml (I don't), you will first need to run (on your runner):
jfrog rt c
This will prompt for your Artifactory URL and an API key by default. After entering these items, you'll find ~/.jfrog/jfrog-cli.conf containing JSON like so:
{
  "artifactory": {
    "url": "http://artifactory.localdomain:8081/artifactory/",
    "apiKey": "AKCp2V77EgrbwK8NB8z3LdvCkeBPq2axeF3MeVK1GFYhbeN5cfaWf8xJXLKkuqTCs5obpzxzu"
  }
}
You can copy this file to the GitLab runner's home directory - in my case, /home/gitlab-runner/.jfrog/jfrog-cli.conf
Once that is done, the runner will authenticate with Artifactory using that configuration. There are a bunch of other possibilities for authentication if you don't want to use API keys - check the JFrog CLI docs.
Before moving on, make sure the 'jfrog' executable is in a known location, with execute permissions for the gitlab-runner user. From here you can call the utility within your .gitlab-ci.yml - here is a minimal example for a node.js app that will pass the Git tag as the artifact version:
stages:
  - build-package

build-package:
  stage: build-package
  script:
    - npm install
    - tar -czf test-project.tar.gz *
    - /usr/local/bin/jfrog rt u --build-name="Test Project" --build-number="${CI_BUILD_TAG}" test-project.tar.gz test-repo
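One note beyond the original answer (my assumption about the intent of those flags): --build-name/--build-number only collect build info on the agent; to make the build appear under the Builds tab in Artifactory, you would follow the upload with the build-publish command, e.g. as an extra script line:

    - /usr/local/bin/jfrog rt bp "Test Project" "${CI_BUILD_TAG}"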
If you're building with Maven, this is how I managed to do mine.
Note: you need to have your Artifactory credentials (username and password) ready.
Create a master password and generate an encrypted password from it. The procedure for creating a master password can be found here.
In your pipeline settings in GitLab, create two secret variables, one for the username and the other for your encrypted password.
Update or create a settings.xml file in the .m2 directory for Maven builds. Your settings.xml should look like this:
<settings xmlns="http://maven.apache.org/SETTINGS/1.1.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.1.0 http://maven.apache.org/xsd/settings-1.1.0.xsd">
  <servers>
    <server>
      <id>central</id>
      <username>${env.ARTIFACTORY_USER}</username>
      <password>${env.ENCRYPTED_PASS}</password>
    </server>
  </servers>
</settings>
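For reference, the ${env.ENCRYPTED_PASS} value is produced with Maven's built-in password encryption (a sketch of the procedure behind the "here" link above; note that the runner also needs the resulting ~/.m2/settings-security.xml, or it cannot decrypt the password):

mvn --encrypt-master-password
# paste the output into ~/.m2/settings-security.xml inside <settingsSecurity><master>...</master></settingsSecurity>
mvn --encrypt-password
# store this output in the ENCRYPTED_PASS secret variable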
In your .gitlab-ci.yml file, you need to use this settings.xml like this:
image: maven:latest

variables:
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

cache:
  paths:
    - .m2/repository/
    - target/

build:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS compile
And that's it; this should work. You can visit here for more about how to use Artifactory with Maven.
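One piece the example leaves implicit (my assumption, since the job above only runs compile): for an actual mvn deploy to reach Artifactory, the POM also needs a <distributionManagement> section whose repository id matches the central server id from the settings.xml above. The URLs here are placeholders:

<distributionManagement>
  <repository>
    <id>central</id>
    <url>http://artifactory.example.com/artifactory/libs-release-local</url>
  </repository>
  <snapshotRepository>
    <id>central</id>
    <url>http://artifactory.example.com/artifactory/libs-snapshot-local</url>
  </snapshotRepository>
</distributionManagement>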
I know this doesn't exactly answer your question, but I got to this question from a related search, so I thought it might be relevant to others too:
I ended up using an mvn deploy job bound to the deploy stage in GitLab.
Here is the relevant job portion:
deploy:jdk8:
  stage: test
  script:
    - 'mvn $MAVEN_CLI_OPTS deploy site site:stage'
  only:
    - master
  # Archive up the built documentation site.
  artifacts:
    paths:
      - target/staging
  image: maven:3.3.9-jdk-8

Deploying binaries from Bamboo to Nexus repository

Firstly, I am new to Nexus, so please bear with me if this is too noob a question. Let me first explain how our current build/deployment process works.
HOW WE DO IT AT PRESENT:
We have a Maven-based project with a parent pom.xml and two module pom.xmls; each child module produces a JAR file when built. Currently I do the builds and deployments manually: I check out the code from SVN to my local machine and run mvn clean install. I have created a bash script that bundles the two JAR files, plus a few other resources (present only in the SVN repo, downloaded locally), into a tar.gz file. I then SCP this to the app server and run install scripts that deploy the tar.gz file.
HOW WE WANT TO DO IT:
We plan to automate the build in Bamboo (which I have already done). The built artifact then needs to be uploaded to a Nexus repository (due to security restrictions, the SCP task in Bamboo does not work, because SSH connectivity cannot be established from the Bamboo server to the app server).
MY FIRST HURDLE:
I have created a Bash script task in Bamboo which does the bundling (the two JARs from the child module POMs, plus the resources) into a tar.gz. This tar.gz is present at a path a/b/c/d on my Bamboo machine.
How do I upload this tar.gz to the Nexus repository?
MY CONFUSION:
I have read about uploading artifacts to Nexus, but only for the case where the build produces a single jar/ear/war file. We want the whole bundle. So if I change settings.xml and pom.xml to configure the upload to Nexus, each JAR file will be uploaded to a separate path in Nexus, and I would then have to configure the upload of the resource files (which are not part of the build) separately. Is my understanding correct? Please let me know how to proceed.
Thanks in advance!!!
Use the Maven Assembly Plugin to create an assembly that contains your artifacts and resources; your regular mvn deploy will then deploy it into Nexus.
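A minimal sketch of such a setup (the descriptor path and directory names are placeholders; in a multi-module build the assembly is usually configured in the parent or a dedicated distribution module):

In the pom.xml:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptors>
      <descriptor>src/assembly/bundle.xml</descriptor>
    </descriptors>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
</plugin>

And in src/assembly/bundle.xml:

<assembly>
  <id>bundle</id>
  <formats>
    <format>tar.gz</format>
  </formats>
  <moduleSets>
    <moduleSet>
      <!-- the JARs built by the two child modules -->
      <binaries>
        <outputDirectory>lib</outputDirectory>
        <unpack>false</unpack>
      </binaries>
    </moduleSet>
  </moduleSets>
  <fileSets>
    <fileSet>
      <!-- the extra resources kept in SVN -->
      <directory>src/main/bundle-resources</directory>
      <outputDirectory>resources</outputDirectory>
    </fileSet>
  </fileSets>
</assembly>

The resulting tar.gz is attached to the build with the bundle classifier, so mvn deploy uploads it to Nexus alongside the JARs, with no separate upload step.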

Uploading an artifact to Nexus from Jenkins on Cloudbees

I want to upload an artifact from Jenkins running on CloudBees to Nexus central, since mine is an OSS project published to Maven Central. To do so, I need to install GPG keys locally. How can I do this on CloudBees? I've done it on my local Linux box, but I'd need access to some sort of Linux environment on CloudBees.
Regards,
Marco
You can upload your GPG key to your private repository on CloudBees Forge, and set the Maven job to use -Dgpg.homedir=/private/
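For example (the key directory under /private/ is hypothetical; substitute the path where your key actually lives on the Forge), the job's Maven goals could then be:

mvn clean deploy -Dgpg.homedir=/private/your-account/gpg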
