Import Nexus data from different sources - Nexus

I have two Nexus server instances (2.14.0-1 and 3.16.1) running in production. On the other side, we have created a new Nexus 3.16.1 server for our validation environment (an existing installation of Nexus Repository Manager 3.16.1, to be populated with the production data and configuration).
I'm looking for a way to copy my data from the production environment (stored on both servers, 2.14.0-1 and 3.16.1) over to the validation environment.
Is there a way to import these storages into the new destination environment, i.e. some kind of data import from a source (Nexus 2 / Nexus 3)?

Here is the answer to my question: https://support.sonatype.com/hc/en-us/articles/236210187
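The linked article covers the supported migration paths (including the Nexus 2 to Nexus 3 upgrade procedure). For ad-hoc copying of Maven-format content between instances, one simple option is to re-deploy the artifacts into a hosted repository on the target over HTTP PUT. Below is a minimal sketch of that idea; the target URL, repository name, and credentials are placeholders, and it assumes a local directory laid out like a Maven repository (e.g. copied from the source):

// Minimal sketch: walk a local Maven-layout directory and PUT each file
// into a hosted repository on the target Nexus 3 instance.
import java.net.{HttpURLConnection, URL}
import java.nio.file.{Files, Paths}
import java.util.Base64

object RepublishToNexus3 {
  // Placeholder target repository URL and credentials.
  val targetBase = "http://nexus-validation:8081/repository/maven-releases"
  val auth = Base64.getEncoder.encodeToString("admin:admin123".getBytes("UTF-8"))

  def putFile(relPath: String, bytes: Array[Byte]): Int = {
    val conn = new URL(s"$targetBase/$relPath").openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("PUT")
    conn.setDoOutput(true)
    conn.setRequestProperty("Authorization", s"Basic $auth")
    val out = conn.getOutputStream
    out.write(bytes)
    out.close()
    conn.getResponseCode // 201 Created on success
  }

  def main(args: Array[String]): Unit = {
    val root = Paths.get(args(0)) // local directory laid out like a Maven repository
    Files.walk(root).forEach { p =>
      if (Files.isRegularFile(p)) {
        val rel = root.relativize(p).toString.replace('\\', '/')
        println(s"$rel -> ${putFile(rel, Files.readAllBytes(p))}")
      }
    }
  }
}

Note this only re-uploads content; for carrying over configuration, users, and the Nexus 2 instance as a whole, the upgrade procedure in the article above is the route to follow.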

Related

What is the best way to export / import from source to target?

We are planning to clone (make a copy of) our existing Artifactory. Our current setup runs Artifactory on an EC2 instance with a Derby DB and the files/artifacts stored in an S3 bucket. In the copy we would like to have Artifactory running on a new EC2 instance, the DB running on MySQL, and the files stored in a different S3 bucket.
We have built a base setup for the target and it is operational.
What is the best way to export/import from source to target? I see options for repository and system export. Should I do a repository-by-repository export/import?
Thanks
A system export -> import is usually the recommended way.
You can see the process detailed in JFrog's knowledge-base article on migrating Artifactory.
There is also another entry with a video: How to migrate Artifactory from one environment to another?
Whichever way you choose to go, make sure to test well and back up first!
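As a rough illustration of the export side, the full system export can also be triggered through Artifactory's REST API (POST /api/export/system, admin credentials required) rather than the UI. A minimal sketch, with placeholder host, credentials, and export path:

// Minimal sketch: trigger a full system export on the source instance.
import java.net.{HttpURLConnection, URL}
import java.util.Base64

object TriggerSystemExport {
  def main(args: Array[String]): Unit = {
    // Placeholder source URL and admin credentials.
    val url = new URL("http://source-artifactory:8081/artifactory/api/export/system")
    val auth = Base64.getEncoder.encodeToString("admin:password".getBytes("UTF-8"))
    val body =
      """{"exportPath": "/var/opt/jfrog/export",
        |  "includeMetadata": true,
        |  "excludeContent": false,
        |  "createArchive": false,
        |  "verbose": true}""".stripMargin

    val conn = url.openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("POST")
    conn.setDoOutput(true)
    conn.setRequestProperty("Authorization", s"Basic $auth")
    conn.setRequestProperty("Content-Type", "application/json")
    val out = conn.getOutputStream
    out.write(body.getBytes("UTF-8"))
    out.close()
    println(s"HTTP ${conn.getResponseCode}") // 200 when the export request is accepted
  }
}

The resulting export directory is what you then import on the target instance; since the target moves from Derby to MySQL, run the import only after the target has been set up against its MySQL database.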

Backup of Artifactory Instance - User not imported

I would appreciate it if someone could shed some light on the best way to back up and restore Artifactory. The primary concern is that the repositories plus users and permissions are available on restore.
I am trying the system export and import feature, as well as incremental backup plus import, but having no success with users/permissions when the restore happens. I expected to see ALL users after importing from the source, but the net result is the opposite: after the restore completes, the only user I see is access-admin.
I even lose the admin and anonymous users on the destination instance.
As per the Artifactory doc https://www.jfrog.com/confluence/display/RTF/Importing+and+Exporting, system export and import should take care of everything including security, BUT my tests do not seem to carry user info over on import.
From the doc: at system level, Artifactory can export and import the whole Artifactory server: configuration, security information, stored data and metadata. This is useful when manually running backups and for migrating and restoring a complete Artifactory instance (as an alternative to using database-level backup and restore).
artifactory.version=5.9.3
artifactory.timestamp=1521564024289
artifactory.revision=50903900
artifactory.buildNumber=820
Default embedded Derby mode. In the new instance I am not restoring the DB since, as per my understanding, the documentation says a full system backup and restore will take care of all configurations.
But I am new to Artifactory, so please correct me if my understanding is not correct.
I would like to point out that the restore is attempted on a separately spun-up new instance built from the JFrog AMI, not on the instance the data was backed up from.
This is to test whether, if we lose our instance completely, we can spin up a new AWS instance and quickly restore the Artifactory environment.
Thanks in advance for help.
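One way to narrow this down is to compare the user list on the source before the export with the user list on the target after the import, e.g. via the security REST API (GET /api/security/users, admin only). A minimal sketch, with placeholder host and credentials:

// Minimal sketch: list the users an Artifactory instance knows about.
import java.net.{HttpURLConnection, URL}
import java.util.Base64
import scala.io.Source

object ListArtifactoryUsers {
  def main(args: Array[String]): Unit = {
    // Placeholder target URL and admin credentials.
    val url = new URL("http://target-artifactory:8081/artifactory/api/security/users")
    val auth = Base64.getEncoder.encodeToString("admin:password".getBytes("UTF-8"))
    val conn = url.openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestProperty("Authorization", s"Basic $auth")
    println(s"HTTP ${conn.getResponseCode}")
    println(Source.fromInputStream(conn.getInputStream, "UTF-8").mkString)
  }
}

If the source shows all users but the target shows only access-admin, the security portion of the import itself is failing, which is worth checking in the target's artifactory.log during the import.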

How to make it easier to deploy my jar to a Spark cluster in standalone mode?

I have a small cluster with 3 machines, and another machine for developing and testing. When developing, I set SparkContext to local. When everything is OK, I want to deploy the jar file I build to every node. Basically, I manually move this jar to the cluster and copy it to HDFS, which is shared by the cluster. Then I can change the code to:
// standalone mode: point the context at the cluster master
import org.apache.spark.SparkContext

val sc = new SparkContext(
  "spark://mymaster:7077",                                // master URL
  "Simple App",                                           // application name
  "/opt/spark-0.9.1-bin-cdh4",                            // Spark home
  List("hdfs://namenode:8020/runnableJars/SimplyApp.jar") // jar location
)
to run it from my IDE. My question: is there any easier way to move this jar to the cluster?
In Spark, the program creating the SparkContext is called 'the driver'. It's sufficient that the jar file with your job is available to the local file system of the driver in order for it to pick it up and ship it to the master/workers.
Concretely, your config will look like:
// favor using SparkConf to configure your SparkContext
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("spark://mymaster:7077")
  .setAppName("SimpleApp")
  .set("spark.local.ip", "172.17.0.1")        // driver address the workers can reach
  .setJars(Array("/local/dir/SimplyApp.jar")) // jar on the driver's local file system
val sc = new SparkContext(conf)
Under the hood, the driver will start a server where the workers will download the jar file(s) from the driver. It's therefore important (and often an issue) that the workers have network access to the driver. This can often be ensured by setting 'spark.local.ip' on the driver in a network that's accessible/routable from the workers.

How to use Cloudera Manager to monitor the components of CDH4

I have already installed CDH4 without using Cloudera Manager. I want to use Cloudera Manager so that I can monitor the different components of CDH4. Please suggest how I can use the manager now.
I have recently had to undertake the same task of importing already installed and running clusters into new Cloudera Manager instances.
I would firstly suggest taking your time to read through as much documentation as possible to fully understand the processes and key components.
As a short answer, you need to manually import all your cluster configurations and assignments into Cloudera Manager so that they can be managed. A rough outline of the plan I used is below (a small sanity-check sketch follows the outline):
Set up a MySQL instance on NEW hardware (PostgreSQL can also be used)
Create a Cloudera Manager user on all servers (must be sudo-enabled)
Set up SSH key access between the Cloudera Manager server and all other hosts
Useful Docs below:
- http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Cloudera-Manager-Installation-Guide/cmig_install_mysql.html
- http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Cloudera-Manager-Installation-Guide/cmig_install_path_B.html
Install Cloudera Manager and agent/daemon packages on Cloudera Manager server
Shut down all services using the cluster, as well as the cluster services themselves
Save the namespace
Back up metadata and configuration files to MULTIPLE LOCATIONS
Ensure the backup can be loaded by starting a single-instance NameNode
Install Cloudera Manager agent and daemon on all production servers
Start the services on the Cloudera Manager server
Access the Cloudera Manager interface
Skip Setup Wizard
Add all hosts to Cloudera Manager
Create HDFS service - DO NOT start the service
Check hosts assignments are correct
Input all configuration file parameters and verify (this means each server's conf files need to be entered manually)
Run host inspector and configuration check
Perform the above process for remaining services
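Once the agents are installed and the services are up, a quick sanity check that all hosts actually registered with Cloudera Manager can be done against its REST API (served on port 7180; /api/version reports the highest API version the server supports). A minimal sketch, with placeholder host and credentials:

// Minimal sketch: query the Cloudera Manager REST API for registered hosts.
import java.net.{HttpURLConnection, URL}
import java.util.Base64
import scala.io.Source

object CmHostCheck {
  val cm = "http://cloudera-manager:7180" // placeholder CM server
  val auth = Base64.getEncoder.encodeToString("admin:admin".getBytes("UTF-8"))

  def get(path: String): String = {
    val conn = new URL(cm + path).openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestProperty("Authorization", s"Basic $auth")
    Source.fromInputStream(conn.getInputStream, "UTF-8").mkString
  }

  def main(args: Array[String]): Unit = {
    println(get("/api/version"))  // highest supported API version, e.g. "v4"
    println(get("/api/v1/hosts")) // JSON list of hosts known to Cloudera Manager
  }
}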
I hope this provides some assistance. If you have any other questions I will be happy to try and assist you as much as I can.
Regards,
James
I just recorded a webinar titled "Installing Cloudera Manager in < 30 mins" for Global Knowledge. Available at: http://www.globalknowledge.com/training/coursewebsem.asp?pageid=9&courseid=20221&catid=248&country=United+States (register in the upper right of page). In the video, I install CM on Ubuntu, set up the core components (Hadoop only), and then browse through some of the graphs for monitoring.

Best way to install web applications (e.g. Jira) on Unixes?

Can you share some pointers on the best way / best practices
to install a web application on Unixes?
Like:
where to place the app and its databases and so forth,
how to configure it to be secure and easy to back up,
etc.
For example, I know one such suggestion: to set up a unique user for each app.
The app in question is Jira on FreeBSD, but more general suggestions are also welcome.
Here's what I did for my JIRA install on Fedora Linux:
Created a separate user to run JIRA
Installed JIRA under the JIRA user's home directory
Made a soft link "/home/jira/jira" pointing to the JIRA installation directory (the directory as installed contains the version number, something like /home/jira/atlassian-jira-enterprise-4.0-standalone)
Created an /etc/init.d script to run JIRA as a service, and added it to chkconfig so that it runs at system startup - see these instructions
Created a MySQL database for JIRA on a separate data volume
Set up scheduled XML backups via the JIRA admin interface
Set up a remote backup script to dump the MySQL database and copy the DB dump and XML backups to a separate backup server (a sketch of such a script follows this list)
In order to avoid having to open extra firewall ports, set up an Apache virtual host "jira.myhost.com" and used mod_proxy to forward requests to the JIRA URL.
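As an illustration of the remote backup step, here is a minimal sketch of such a script; the database name, credentials, paths, and backup host are all placeholders:

// Minimal sketch: dump the JIRA MySQL database and rsync it, along with
// JIRA's scheduled XML backups, to a separate backup server.
import java.time.LocalDate
import scala.sys.process._

object JiraBackup {
  def main(args: Array[String]): Unit = {
    val stamp = LocalDate.now.toString
    val dump = s"/home/jira/backups/jiradb-$stamp.sql"

    // Dump the JIRA database (placeholder DB name and credentials).
    (Seq("mysqldump", "-u", "jira", "-psecret", "jiradb") #> new java.io.File(dump)).!

    // Copy the DB dump and the XML backup directory to the backup server.
    Seq("rsync", "-az", dump, "/home/jira/jira/export/",
        "backup@backuphost:/backups/jira/").!
  }
}

Run it from cron on the JIRA host so the off-machine copy happens on a schedule.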
I set everything up on a virtual machine (an Amazon EC2 instance in my case) and cloned the machine image so that I can easily restart a new instance if the current one goes down.
