Cannot unzip a file located on a remote CentOS machine using Ansible - unzip

- name: Unzip the Elasticsearch file
  unarchive: src=/root/elasticsearch-1.4.0.tar.gz dest=/tmp/
TASK [Unzip the Elasticsearch file] *******************************************
fatal: [54.173.94.235]: FAILED! => {"failed": true, "msg": "ERROR! file or module does not exist: /root/elasticsearch-1.4.0.tar.gz"}
Is it looking for the file locally? I am running the playbook from my local machine to unarchive a file that is on the remote machine. How can I solve this problem?

By default, Ansible copies the file (src) from the control machine to the remote machine and unarchives it there. If you do not want Ansible to copy the file, set copy=no in your task.
The value of copy is yes by default, so Ansible will look for the src file on the local machine unless you set copy=no:
unarchive: src=/root/elasticsearch-1.4.0.tar.gz dest=/tmp/ copy=no
Ansible - Unarchive, copy:
If true, the file is copied from local 'master' to the target machine, otherwise, the plugin will look for src archive at the target machine.

Add the option remote_src: yes to the unarchive module declaration. You can find it documented here: http://docs.ansible.com/ansible/latest/unarchive_module.html
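For newer Ansible versions, here is a minimal sketch of the task in YAML dictionary syntax, using remote_src (which superseded copy) so the archive is read on the remote host rather than copied from the control machine:
- name: Unzip the Elasticsearch file already present on the remote host
  unarchive:
    src: /root/elasticsearch-1.4.0.tar.gz
    dest: /tmp/
    remote_src: yes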

Related

Nexus3 Pro Replication

I've installed the trial version of Nexus3 Pro for a POC of bi-directional replication.
I've followed all the instructions mentioned in the doc
https://help.sonatype.com/repomanager3/nexus-repository-administration/repository-management/repository-replication
Currently, I've implemented only one-directional replication. When I upload a Docker image, I can see the replication.txt file created on the source instance, but when I execute the jar file mentioned in the doc, I don't see the repository replicated on the target instance.
Following is my config.yaml file
sources:
  path: /etc/opt/sonatype-work/nexus3/blobs/default
  type: file
targets:
  path: /opt/sonatype-work/nexus3/blobs/default
  type: file
  repositoryName: Docker_destination
  connectionName: DockerImageReplication
The expected result is that the source repo's contents are replicated to the target repo on a different Nexus instance.

Deploy Raw Source Code from GitLab Repository

I have a GitLab repository containing a WordPress theme - php, js, and css files. My desired result is that when I push a change to the 'main' branch of the repo, the theme files are deployed, raw, without any build or test steps, to my remote server.
I have a .gitlab-ci.yml file set up with 'deploy' as its only step.
The script triggers on 'only: -main' and successfully accesses my remote server via ssh.
What I'm unsure of is how to send the entire raw repository to the remote.
Here is the 'script' portion of my yml:
- rsync -rav --delete project-name/ /var/opt/gitlab/git-data/repositories/project-name/ username@my.ip.add.ress:public_html/wp-site/wp-content/themes/
When the pipeline runs, I receive the following two errors:
rsync: [sender] change_dir "/builds/username/project-name/project-name" failed: No such file or directory (2)
rsync: [sender] change_dir "/var/opt/gitlab/git-data/repositories/project-name" failed: No such file or directory (2)
Is looking in /builds/ GitLab's default behavior? I am not instructing it to do so in my yml.
Is there some other file path I should be using to access the working tree for 'main' in my repository?
Ok, I misunderstood the rsync syntax. I thought the --delete flag took a directory parameter, meaning 'delete any existing files in the following directory', when in fact it takes no argument and simply deletes files in the destination that are not present in the source. Once I removed 'project-name/' and corrected the GitLab (origin) file path to '/builds/username/project-name/', the deployment occurs as intended.
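For reference, a minimal sketch of what the corrected deploy job could look like, assuming the placeholder names (username, project-name, my.ip.add.ress) from the question and that SSH access is already set up elsewhere in the yml:
deploy:
  stage: deploy
  only:
    - main
  script:
    # /builds/username/project-name/ is where the runner checks out the repo
    # (also exposed as $CI_PROJECT_DIR)
    - rsync -rav --delete /builds/username/project-name/ username@my.ip.add.ress:public_html/wp-site/wp-content/themes/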

How would I write an Ansible script to find if a directory is mounted or not?

I need to check whether a directory is mounted or not. I have checked docs.ansible.com for help, but the modules I found only cover create or delete.
I am using Visual Studio Code for Ansible and a VM for the directory files.
The setup module returns the list of mounted filesystems in the ansible_mounts fact. You can inspect it with:
ansible remote-host -m setup
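A minimal sketch of how that fact could be used in a playbook, assuming a hypothetical check_path variable for the directory in question:
- hosts: remote-host
  gather_facts: yes
  vars:
    check_path: /data   # hypothetical directory to test
  tasks:
    - name: Fail if the directory is not a mount point
      fail:
        msg: "{{ check_path }} is not mounted"
      when: check_path not in (ansible_mounts | map(attribute='mount') | list)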

Rename an artifact during its upload by Jenkins

I'm using Jenkins to upload RPM artifacts to a general repository.
I would like to rename the rpm file from my_product_1.1.0.0.rpm to my_product.rpm.
I tried to add a
curl -umyUser:myP455w0rd! -T "http://artifactory:8081/../old name" "http://artifactory:8081/../new name"
command for uploading, where the source is an Artifactory repo and the destination is the same repo but with a different file name. It fails with "cannot find the source file".
Later, I tried to do it using the "Publish Artifacts" field in Jenkins:
/drop_folder/ => repo/my_product.rpm
but in this case, a folder "my_product.rpm" was created and my_product_1.1.0.0.rpm was uploaded into it.
Can it be done in a different way?
Using the JFrog CLI for Artifactory from a Jenkins pipeline, you have two options:
Copy the file with the new name within Artifactory (to the same or another repo):
jfrog rt cp "your-artifactory-repo/oldname.extension" "your-artifactory-repo/newName.extension"
Download the artifact and re-upload it with the new name (not recommended).
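A minimal sketch of the first option as a Jenkins pipeline stage, assuming hypothetical repo and file names and that the JFrog CLI is already installed and configured on the agent:
pipeline {
  agent any
  stages {
    stage('Rename RPM in Artifactory') {
      steps {
        // server-side copy inside Artifactory; hypothetical repo/file names
        sh 'jfrog rt cp "your-artifactory-repo/my_product_1.1.0.0.rpm" "your-artifactory-repo/my_product.rpm"'
      }
    }
  }
}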

File in executable jar cannot be found when running on AWS EC2

I have a .jar file executing on an AWS EC2 instance which contains the following code:
List<String> lines = FileUtils.readLines(new File("googlebooks-eng-all-1gram-20120701-k"));
The file exists in projectname/res and also directly in projectname/. I included /res in the build path. I can also see that the file exists at the root of the jar when I export it from Eclipse.
If I run the jar locally on my PC it works fine, but if I run it on an EC2 instance it says:
java.io.FileNotFoundException: File 'googlebooks-eng-all-1gram-20120701-k' does not exist
How can that be?
On your PC it is reading from the actual file on the filesystem - that is what new File means - a file on the filesystem.
To access a resource in a jar file you need to call getResourceAsStream or something similar instead.
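A minimal sketch of reading the file from the classpath instead, reusing Commons IO (which FileUtils already comes from) and assuming the file sits at the root of the jar as described:
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;
import org.apache.commons.io.IOUtils;

public class ResourceExample {
    public static void main(String[] args) throws Exception {
        // Load the file from the classpath (i.e. from inside the jar), not from the filesystem
        try (InputStream in = ResourceExample.class
                .getResourceAsStream("/googlebooks-eng-all-1gram-20120701-k")) {
            List<String> lines = IOUtils.readLines(in, StandardCharsets.UTF_8);
            System.out.println(lines.size() + " lines read");
        }
    }
}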
