Backup of Artifactory instance - users not imported on restore

I would appreciate it if someone could shed some light on the best way to back up and restore Artifactory. My primary concern is that the repositories plus users and permissions are available on restore.
I am trying the System Export and Import feature as well as an incremental backup plus import, but I am having no success with users/permissions when the restore happens. I expect to see ALL users after importing from the source, but the net result is the opposite: after the restore completes, the only user I see is access-admin.
I even lose the default admin and anonymous users on the destination instance.
As per the Artifactory documentation at https://www.jfrog.com/confluence/display/RTF/Importing+and+Exporting, a system export and import should take care of everything including security, BUT my tests do not seem to carry the user info across on import.
From the doc: At the system level, Artifactory can export and import the whole Artifactory server: configuration, security information, stored data and metadata. This is useful when manually running backups and for migrating and restoring a complete Artifactory instance (as an alternative to using database-level backup and restore).
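(For reference, the same full system export can also be triggered through the REST API instead of the UI. The sketch below is a minimal example against a default local install; the base URL, credentials, and export path are placeholders, and the exact set of export settings may differ between Artifactory versions.)

```python
# Minimal sketch (not from the original post): triggering a full system export
# via Artifactory's REST API. Base URL, credentials and export path are
# placeholders; the field names follow the documented ExportSettings JSON and
# may vary slightly between versions.
import requests

ARTIFACTORY_URL = "http://localhost:8081/artifactory"  # assumed base URL
AUTH = ("admin", "password")                           # admin credentials

export_settings = {
    "exportPath": "/mnt/backup/full-export",  # must be writable by the Artifactory process
    "includeMetadata": True,
    "createArchive": False,
    "verbose": True,
    "failOnError": True,
    "excludeContent": False,
}

resp = requests.post(f"{ARTIFACTORY_URL}/api/export/system",
                     json=export_settings, auth=AUTH)
resp.raise_for_status()
print(resp.text)  # progress also appears in the server-side Artifactory log
```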
artifactory.version=5.9.3
artifactory.timestamp=1521564024289
artifactory.revision=50903900
artifactory.buildNumber=820
I am using the default embedded Derby mode. On the new instance I am not restoring the database, since my understanding of the documentation is that a full system backup and restore will take care of all configuration.
But I am new to Artifactory - please correct me if my understanding is not correct.
I would like to point out that the restore is attempted on a separately spun-up new instance (from the JFrog AMI), not on the instance from which the data was backed up.
This is to test whether, if we lose our instance completely, we can spin up a new AWS instance and quickly restore the Artifactory environment.
Thanks in advance for your help.

Related

What is the best way to export / import from source to target?

We are planning to clone (make a copy of) our existing Artifactory. Our current setup runs Artifactory on an EC2 instance with a Derby DB and the files/artifacts stored in an S3 bucket. In the copy we would like Artifactory running on a new EC2 instance, the DB running on MySQL, and the files stored in a different S3 bucket.
We have built a base setup for the target and it is operational.
What is the best way to export/import from source to target? I see options for repository and system export. Should I do a repository-by-repository export/import?
Thanks
A system export -> import is usually the recommended way.
You can see the process detailed in JFrog's knowledge-base article on migrating Artifactory.
There is also another entry with a video: How to migrate Artifactory from one environment to another?
Whichever way you choose to go, make sure to test well and take a backup beforehand!
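If you want to script the target side of that export -> import, the system import also has a REST endpoint. This is only a rough sketch under the assumption of a default local install; the base URL, credentials, and import path are placeholders.

```python
# Rough sketch: triggering a full system import on the target instance via the
# REST API. Base URL, credentials and import path are placeholders.
import requests

TARGET_URL = "http://localhost:8081/artifactory"  # assumed target base URL
AUTH = ("admin", "password")

import_settings = {
    "importPath": "/mnt/backup/full-export",  # the export directory, readable by Artifactory
    "includeMetadata": True,
    "verbose": True,
    "failOnError": True,
}

resp = requests.post(f"{TARGET_URL}/api/import/system",
                     json=import_settings, auth=AUTH)
resp.raise_for_status()
print(resp.text)
```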

Migrate existing Artifactory OSS installation to existing Artifactory PRO installation

I tried setting up a remote repository on my PRO installation to replicate from the OSS installation, but I get an error:
Error testing pull replication config: Replication to remote
open-source Artifactory instance is not supported.
Is there a script that can use the CLI to download each OSS artifact and upload it to the PRO installation?
Or do I need to purchase a PRO license, export the OSS version, and import it into a new PRO installation, just to be able to replicate from one instance to the other?
I think your best option is to follow these instructions from the JFrog wiki.
Note that if you've already installed your new Pro instance and started uploading artifacts to it, you might need to run an export on each repo, do a "clean upgrade" as per the link, and import the repo data back in. Do not do a full export on your Pro instance, as the import will overwrite the OSS data you upgraded.
I ended up downloading all the artifacts from the OSS Artifactory (20 GB) and wrote a simple script using the JFrog CLI to upload the files to the PRO Artifactory. No downtime, and I didn't have to modify a working server just to be compatible with replication.
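For anyone wanting to reproduce that approach, the script boils down to something like the sketch below. It assumes the JFrog CLI is already configured with a server ID for each instance; the server IDs, repository names, and working directory are made-up placeholders, so verify the resulting layout on a small repository before running it for real.

```python
# Rough sketch of the download-then-upload migration using the JFrog CLI.
# Assumes server IDs "oss" and "pro" are already configured in the CLI; the
# repository list and working directory below are placeholders.
import subprocess

REPOS = ["libs-release-local", "libs-snapshot-local"]  # hypothetical repo names
WORK_DIR = "/tmp/artifactory-migration"

for repo in REPOS:
    local_dir = f"{WORK_DIR}/{repo}/"
    # Download the whole repository from the OSS instance.
    subprocess.run(
        ["jfrog", "rt", "download", f"{repo}/", local_dir,
         "--server-id", "oss", "--flat=false"],
        check=True,
    )
    # Upload the downloaded tree to the same-named repository on the PRO instance.
    # Double-check how the CLI laid out the downloaded tree before this step.
    subprocess.run(
        ["jfrog", "rt", "upload", "*", f"{repo}/",
         "--server-id", "pro", "--flat=false"],
        cwd=local_dir,
        check=True,
    )
```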
There is this (new) wiki page that talks about updating OSS to Pro in place. I couldn't make it work: the license would not apply properly and the startup kept failing. I also didn't really want to do an "in-place" update, and instead tried to run the Pro version on a separate system against a copy of the OSS data.
The remaining method (barring manually re-importing all artifacts as #Branson did) is a full export/import. There don't seem to be clear instructions on how to do this (anymore); the "Upgrading Artifactory" wiki page no longer talks about migrating between installation types. It looks like there was a section for this there before, judging by the fragment in the OP's URL, but it's no longer there.
Having just completed this myself, this is the process that I followed. Note that in my case, the Pro version runs on another system.
Mount a sufficiently large drive to do a full system export
Prepare the Pro instance - set up a (temporary) admin password and enter the license key
On the OSS instance - disable garbage collection and artifact cleanup in Administration→Artifactory→Advanced→Maintenance. I did this by simply setting both cron expressions to a date next year.
On the OSS instance - disable encryption as explained on this page (yes, this can only be done through a web service (REST) API call; see the sketch after this list). Failing to do so will likely land you on this problem, and you will have wasted your time.
On the OSS instance - start the full system export ("Export System") in Administration→Artifactory→Import & Export→System. Check "Output Verbose Log" and leave all other checkboxes unchecked.
If your database is of any decent size, the page will eventually show an "Oops" error. Ignore that, and keep monitoring the export process in the logs (artifactory-service.log).
Once the export is finished, detach the drive and attach it to the Pro instance. Mount the file system.
On the Pro instance - start the full system import ("Import System") in Administration→Artifactory→Import & Export→System. Check "Output Verbose Log" and leave all other checkboxes unchecked.
Again, the page will either time out with an "Oops" message or show a login screen again. Ignore that, and monitor artifactory-service.log for the import process. Don't touch the UI until the import completes. Once it is done, your user database will be whatever it was in your OSS version. For me, the import took about 220% of the export time.
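For the encryption step above, the call is a single POST to the decrypt endpoint. A minimal sketch, assuming a default local OSS install (base URL and credentials are placeholders):

```python
# Minimal sketch: deactivate Artifactory key encryption before the export,
# as referenced in the steps above. Base URL and credentials are placeholders.
import requests

OSS_URL = "http://localhost:8081/artifactory"  # assumed OSS base URL
AUTH = ("admin", "password")

resp = requests.post(f"{OSS_URL}/api/system/decrypt", auth=AUTH)
resp.raise_for_status()
print(resp.text)  # the response should confirm that decryption completed
```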

WSO2 API Manager - API-specific configuration files

We use a configuration management tool (Chef) for our WSO2 API Manager installation (v2.1.0). For each installation, the WSO2 directory is deleted and overwritten with the new changes/patches.
This process removes already-created APIs from the WSO2 API Publisher. (Since these are still present in the database, they cannot be re-created with the same name.) We had assumed that the entire API configuration is stored in the database, which is obviously not the case.
This API-specific file stands out to us:
<wso2am>/repository/deployment/server/synapse-configs/default/api/admin--my-api-definition_vv1.xml
Are there any other such files that must not be deleted during a new installation or is there a way to create these files from the information stored in the database?
We have considered using the "API import / export tool" (https://docs.wso2.com/display/AM210/Migrating+the+APIs+to+a+Different+Environment). However, according to the documentation, this also creates the database entries for the API, which in our case already exist.
You have to keep the content of the server folder (/repository/deployment/server). For this, you can use SVN-based deployment synchronization (dep-sync). Once you enable dep-sync by providing an SVN server location, all the server-specific data will be written to the SVN server.
When you install the newer pack, you just need to point it at the SVN location and the database. (I hope you are using a production-ready database rather than the built-in H2.)

Alfresco Community 5.1 does not start and keeps loading index.html forever

For some unexplained reason I haven't been able to use Alfresco since yesterday.
Let me explain what happens.
First of all, I didn't change any configuration file or anything like that.
I started the Tomcat and PostgreSQL services and after that I tried to load "localhost:8080/share", but it kept loading forever.
I checked the log files too, but to no avail. There are no error messages, nothing unusual.
After that I deleted the alfresco and share folders inside "webapps", just in case, but that failed too.
Finally, I can't stop these services from the service manager, because I am at work and have no access privileges.
My main concern is that I don't even know the cause of this issue, so I don't even know how to ask for help.
It sounds like you don't have permission to delete the folders (share + alfresco) or to stop the services. Without stopping the services, you can't completely delete the files from the alfresco and share folders.
You need to find out whether the problem is in Alfresco Share, the Alfresco repository, the database, or Tomcat; a small script that runs these checks is sketched below.
Check Tomcat
Open http://localhost:8080 and check whether Tomcat is running.
Check Database
Connect to the database service via the pgAdmin tool to check whether the database service is running.
Check Repo
http://localhost:8080/alfresco - it should display some basic information about the Alfresco repository; otherwise, the Alfresco repository itself has failed.
Check Share
http://localhost:8080/share - it should display the login page if everything is working fine.
Logs
Check and share the alfresco.log, share.log, solr.log, catalina, Tomcat stdout and Tomcat stderr log files. Some error information will almost certainly have been recorded in one of these log files.
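If it helps, those URL checks can be run in one go from the command line. A small sketch, assuming the default host and port from the question:

```python
# Quick sketch: probe Tomcat, the Alfresco repo and Share to narrow down which
# layer is failing. Host and port are the defaults used in the question.
import requests

BASE = "http://localhost:8080"
checks = {
    "Tomcat": f"{BASE}/",
    "Alfresco repo": f"{BASE}/alfresco",
    "Share": f"{BASE}/share",
}

for name, url in checks.items():
    try:
        resp = requests.get(url, timeout=10)
        print(f"{name}: HTTP {resp.status_code}")
    except requests.RequestException as exc:
        print(f"{name}: unreachable ({exc})")
```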

Best way to install web applications (e.g. Jira) on Unixes?

Can you share some pointers on the best way, or best practices,
to install web applications on Unix systems?
Like:
where to place the app and its databases, and so forth,
how to configure it to be secure and easy to back up,
etc.
For example, I know of one suggestion: set up a unique user for each app.
The app in question is JIRA on FreeBSD, but more general suggestions are also welcome.
Here's what I did for my JIRA install on Fedora Linux:
Created a separate user to run JIRA
Installed JIRA under the JIRA user's home directory
Made a soft link "/home/jira/jira" pointing to the JIRA installation directory (the directory as installed contains the version number, something like /home/jira/atlassian-jira-enterprise-4.0-standalone)
Created an /etc/init.d script to run JIRA as a service, and added it to chkconfig so that it runs at system startup - see these instructions
Created a MySQL database for JIRA on a separate data volume
Set up scheduled XML backups via the JIRA admin interface
Set up a remote backup script to dump the MySQL database and copy the DB dump and XML backups to a separate backup server (a rough sketch of such a script is shown below)
Set up an Apache virtual host "jira.myhost.com" and used mod_proxy to forward requests to the JIRA URL, in order to avoid having to open extra firewall ports
I set everything up on a virtual machine (an Amazon EC2 instance in my case) and cloned the machine image so that I can easily restart a new instance if the current one goes down.
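As an illustration of the remote backup step mentioned in the list, the script can be as simple as the sketch below. The database name, paths, and backup host are hypothetical placeholders, and it assumes mysqldump and rsync are available and that MySQL credentials come from ~/.my.cnf.

```python
# Hypothetical sketch of the remote backup step: dump the MySQL database and
# copy the dump plus JIRA's XML backups to a separate backup server.
# Database name, paths and the backup host are placeholders.
import os
import subprocess
from datetime import date

DB_NAME = "jiradb"
DUMP_DIR = "/var/backups/jira"
DUMP_FILE = f"{DUMP_DIR}/{DB_NAME}-{date.today()}.sql"
XML_BACKUP_DIR = "/home/jira/jira-home/export/"   # where JIRA writes its XML backups
BACKUP_TARGET = "backup@backuphost:/backups/jira/"

os.makedirs(DUMP_DIR, exist_ok=True)

# Dump the JIRA database (credentials are read from ~/.my.cnf in this sketch).
with open(DUMP_FILE, "w") as out:
    subprocess.run(["mysqldump", DB_NAME], stdout=out, check=True)

# Copy the DB dump and the XML backups to the backup server over SSH.
subprocess.run(["rsync", "-az", DUMP_FILE, XML_BACKUP_DIR, BACKUP_TARGET], check=True)
```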
