Nexus3 Pro Replication

I've installed the trial version of Nexus3 Pro for a POC of bi-directional replication.
I've followed all the instructions in the doc:
https://help.sonatype.com/repomanager3/nexus-repository-administration/repository-management/repository-replication
Currently, I've implemented only one-directional replication. When I upload a Docker image, I can see that the replication.txt file is created on the source instance, but when I execute the jar file mentioned in the doc, I don't see the repository replicated on the target instance.
Following is my config.yaml file:
sources:
  - path: /etc/opt/sonatype-work/nexus3/blobs/default
    type: file
targets:
  - path: /opt/sonatype-work/nexus3/blobs/default
    type: file
    repositoryName: Docker_destination
    connectionName: DockerImageReplication
The expected result is that the source repo contents are replicated to the target repo on a different Nexus instance.


Deploy Raw Source Code from GitLab Repository

I have a GitLab repository containing a WordPress theme: PHP, JS, and CSS files. My desired result is that when I push a change to the 'main' branch of the repo, the theme files are deployed, raw, without any build or test steps, to my remote server.
I have a .gitlab-ci.yml file set up with 'deploy' as its only step.
The job is restricted to 'only: - main' and successfully accesses my remote server via SSH.
What I'm unsure of is how to send the entire raw repository to the remote.
Here is the 'script' portion of my yml:
- rsync -rav --delete project-name/ /var/opt/gitlab/git-data/repositories/project-name/ username@my.ip.add.ress:public_html/wp-site/wp-content/themes/
When the pipeline runs, I receive the following two errors:
rsync: [sender] change_dir "/builds/username/project-name/project-name" failed: No such file or directory (2)
rsync: [sender] change_dir "/var/opt/gitlab/git-data/repositories/project-name" failed: No such file or directory (2)
Is looking in /builds/ GitLab's default behavior? I am not instructing it to do so in my yml.
Is there some other file path I should be using to access the working tree for 'main' in my repository?
Ok, I misunderstood the rsync syntax. I thought the --delete flag took a parameter, meaning 'delete any existing files in the following directory', when it actually takes no argument at all: it deletes files from the destination that no longer exist in the source. Once I removed the extra 'project-name/' argument and corrected the GitLab (origin) file path to '/builds/username/project-name/', the deployment occurs as intended.
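For reference, a minimal sketch of the corrected job, assuming the built-in 'deploy' stage and SSH access already configured on the runner; the host and the 'project-name' theme directory are placeholders from the question, not verified details:

deploy:
  stage: deploy
  only:
    - main
  script:
    # GitLab checks the repository out to $CI_PROJECT_DIR (/builds/<namespace>/<project>)
    - rsync -rav --delete "$CI_PROJECT_DIR/" username@my.ip.add.ress:public_html/wp-site/wp-content/themes/project-name/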

How to prepare the nodes folder structure for the network-bootstrapper?

I am trying to bootstrap a test network on AWS, and I am using this:
java -jar corda-tools-network-bootstrapper-4.5.jar --dir ./
I get:
Bootstrapping local test network in /home/ubuntu
No nodes found
The jar seems to be correct. The docs (https://docs.corda.net/docs/corda-os/4.5/network-bootstrapper.html) state:
java -jar network-bootstrapper-4.5.jar --dir <nodes-root-dir>
I cannot find network-bootstrapper-4.5.jar, only corda-tools-network-bootstrapper-4.5.jar. The error seems to be related to the node.conf file.
Has anyone any ideas?
If you follow the steps that are mentioned here, you will see that it says:
Create a directory containing a node config file...for each node
The keywords are node config file; so you must do the following:
Build your nodes: From the root folder of your project run ./gradlew deployNodes; this will create a folder for every node that you defined inside the deployNodes Gradle task of your root build.gradle file.
The folders will be inside path-to-project-folder/build/nodes. If you inspect the folders, you'll see that each node has a node.conf file which the documentation of the bootstrapper is talking about.
Run the bootstrapper command where <nodes-root-dir> is path-to-project-folder/build/nodes, since it contains all of your nodes (see the sketch below).
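Putting the steps together, a sketch assuming a typical CorDapp project; the node names are illustrative, taken from a common deployNodes task:

./gradlew deployNodes
java -jar corda-tools-network-bootstrapper-4.5.jar --dir path-to-project-folder/build/nodes

After deployNodes, the folder structure looks roughly like this, with each node folder containing the node.conf the bootstrapper needs:

build/nodes/
├── Notary/
│   └── node.conf
├── PartyA/
│   └── node.conf
└── PartyB/
    └── node.conf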

Rename an artifact during its uploading by Jenkins

I'm using Jenkins to upload RPM artifacts to a generic repository.
I would like to rename the rpm file from my_product_1.1.0.0.rpm to my_product.rpm.
I tried to add a
curl -umyUser:myP455w0rd! -T "http://artifactory:8081/../old name" "http://artifactory:8081/../new name"
command for the upload, where the source is the Artifactory repo and the destination is the same repo but with a different file name. It fails with "cannot find the source file".
Later, I tried to do it using the "Publish Artifacts" field in Jenkins:
/drop_folder/ => repo/my_product.rpm
but in this case, Artifactory created a folder "my_product.rpm" and uploaded my_product_1.1.0.0.rpm inside it.
Can it be done in a different way?
Using the JFrog CLI for Artifactory from a Jenkins pipeline, you have two options:
Copy the file under the new name, within the same repo or to another one (sketched below with the question's file names):
jfrog rt cp "your-artifactory-repo/oldname.extension" "your-artifactory-repo/newName.extension"
Download the artifact and re-upload it with the new name (not recommended).
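A minimal sketch applied to the file names from the question; the repo name 'my-repo' is a placeholder, and the JFrog CLI is assumed to be configured (jfrog rt config) on the Jenkins agent:

# Copy the versioned RPM to a stable name inside the same repo
jfrog rt cp "my-repo/my_product_1.1.0.0.rpm" "my-repo/my_product.rpm"

If the CLI is not available, Artifactory's REST copy API does the same:

curl -umyUser:myP455w0rd! -X POST "http://artifactory:8081/artifactory/api/copy/my-repo/my_product_1.1.0.0.rpm?to=/my-repo/my_product.rpm"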

Cannot unzip the file located on remote centos machine using Ansible

- name: Unzip the Elasticsearch file
  unarchive: src=/root/elasticsearch-1.4.0.tar.gz dest=/tmp/
TASK [Unzip the Elasticsearch file]
*******************************************
fatal: [54.173.94.235]: FAILED! => {"failed": true, "msg": "ERROR! file or module does not exist: /root/elasticsearch-1.4.0.tar.gz"}
Is it considering the local file? I am running the playbook from my local machine to unzip a file on the remote machine. How can I solve this problem?
By default, Ansible copies the file (src) from the control machine to the remote machine and unarchives it there. If you do not want Ansible to copy the file, set copy=no in your task.
The value of copy is yes by default, so Ansible will look for the src file on the local machine unless you set copy=no:
unarchive: src=/root/elasticsearch-1.4.0.tar.gz dest=/tmp/ copy=no
From the Ansible unarchive docs, on the copy option:
If true, the file is copied from local 'master' to the target machine,
otherwise, the plugin will look for src archive at the target machine.
Alternatively, add the option remote_src: yes to the unarchive module declaration; remote_src replaces copy in newer Ansible versions.
You can find it here: http://docs.ansible.com/ansible/latest/unarchive_module.html
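Putting it together, the same task in YAML dictionary form with remote_src; the paths are taken from the question:

- name: Unzip the Elasticsearch file already present on the remote host
  unarchive:
    src: /root/elasticsearch-1.4.0.tar.gz
    dest: /tmp/
    remote_src: yes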

OpsWorks war deployment failure from S3

I have a war file, myapp.war (it happens to be a Grails app, but this is not material).
I upload this to an S3 bucket, say myapp, in us-west-2.
I set up an OpsWorks app using the S3 repository type:
Repository Type: S3
Repository URL: https://myapp.s3-us-west-2.amazonaws.com/myapp.war
Access key ID: A key with read permission on the above bucket
Secret access key: the secret for this key
I deploy to an instance in a Java layer (Tomcat 7).
All lights are green and the deployment succeeds,
but the app isn't actually deployed.
Shelling into the instance and looking in /usr/share/tomcat7/webapps, I find a directory called 'myapp'. Inside this directory is a file called 'archive'. 'archive' appears to be a war file, but it is not named 'archive.war', and it is in a subdirectory of webapps, so Tomcat isn't going to deploy it anyway.
Now, the OpsWorks docs say the archive should be a 'zip' file. But:
Zipping up myapp.war into a zip archive 'myapp.war.zip' and changing the path to this file results in 'myapp' containing 'myapp.war'. No deployment, since Tomcat isn't looking for war files in 'webapps/myapp'.
Renaming 'myapp.war' to 'myapp.zip' and changing the repository path results in 'myapp' containing the single file 'archive' again.
So. Can anyone describe how to properly provide a war file to OpsWorks from S3?
It appears that the problem has to do with how the zip archive is made.
Jars, wars, and the like created with the Java 'jar' tool do not work. Zip archives created with a zip tool, and then renamed to have a '.war' extension, do.
This is explained here: https://forums.aws.amazon.com/thread.jspa?messageID=559582&#559582
Quoting that post's answer:
Our current extract script doesn't correctly identify WAR files. If you unpack the WAR file and use zip to pack it, it should work until we update our script.
So the procedure that works is to:
Explode the war made by your development environment. (In the case of Grails, the war build cleans up the staging directory for the war, so you don't have an exploded war directory lying around to zip up yourself; you have to unzip the war first.)
Zip the contents of the directory created by exploding the war using a zip tool (or, if your build tool leaves the exploded war directory in place, just zip it directly; see the sketch after these steps).
Optionally, rename the new zip archive to have a '.war' extension.
Resume the procedure from the original question at step 3 -- that is, upload the war to the S3 bucket and specify the S3 path to the war file as the repository in the OpsWorks setup.
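A sketch of the repack with standard unzip/zip tools; the file names follow the question, and 'myapp-repacked.war' is an illustrative name:

# Unpack the jar-built war, then re-zip its contents with a zip tool
mkdir exploded
cd exploded
unzip ../myapp.war
zip -r ../myapp-repacked.war .
cd ..
# Upload myapp-repacked.war to the S3 bucket and point the OpsWorks repository URL at it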
EDIT:
After answering this, I discovered that Grails can produce an exploded war directory after all.
// BuildConfig.groovy
...
grails.project.war.exploded.dir = "path/to/exploded/war-directory"
grails.war.exploded=true
...
That directory can be zipped or jarred or whatever you want by your builder/deployer.
From this wiki page you can see that a WAR file is just a special JAR file, and if you check out what a JAR is, you see it is just zipped-up compiled Java code.
This SuperUser question also touches on the .WAR vs .zip business. Basically, a WAR is just a special ZIP, so when you upload a WAR, you are uploading a ZIP.
Make sure it's a WAR file in the S3 bucket.
Provide the entire link to the S3 WAR file. To get this, right-click the WAR file in S3, select Properties, and copy the link.
