How can I find out whether the Rscript has run successfully in the Docker container via CWL? - r

I have written CWL code that runs the Rscript command in a Docker container. I have two files, a CWL file and a YAML file, and I run them with the command:
cwltool --debug a_code.cwl a_input.yaml
The output says that the process status is success, but there is no output file and the result reports "output": null. I want to know whether there is a way to confirm that the Rscript file actually ran in the Docker container, and I actually want to know why the output files are null.
The final part of the result is:
[job a_code.cwl] {
"output": null,
"errorFile": null
}
[job a_code.cwl] Removing input staging directory /tmp/tmpUbyb7k
[job a_code.cwl] Removing temporary directory /tmp/tmpkUIOnw
Removing intermediate output directory /tmp/tmpCG9Xs1
{
"output": null,
"errorFile": null
}
Final process status is success
R code:
library(cummeRbund)
cuff<-readCufflinks( dbFile = "cuffData.db", gtfFile = NULL, runInfoFile = "run.info", repTableFile = "read_groups.info", geneFPKM = "genes.fpkm_trac .... )
#setwd("/scripts")
sink("cuff.txt")
print(cuff)
sink()
My CWL file is:
class: CommandLineTool
cwlVersion: v1.0
id: cummerbund
baseCommand:
  - Rscript
inputs:
  - id: Rfile
    type: File?
    inputBinding:
      position: 0
  - id: cuffdiffout
    type: 'File[]?'
    inputBinding:
      position: 1
  - id: errorout
    type: File?
    inputBinding:
      position: 99
      prefix: 2>
      valueFrom: |
        error.txt
outputs:
  - id: output
    type: File?
    outputBinding:
      glob: cuff.txt
  - id: errorFile
    type: File?
    outputBinding:
      glob: error.txt
label: cummerbund
requirements:
  - class: DockerRequirement
    dockerPull: cummerbund_0
My input file (YAML) is:
inputs:
  Rfile:
    basename: run_cummeR.R
    class: File
    nameext: .R
    nameroot: run_cummeR
    path: run_cummeR.R
  cuffdiffout:
    - class: File
      path: cuffData.db
    - class: File
      path: genes.fpkm_tracking
    - class: File
      path: read_groups.info
    - class: File
      path: genes.count_tracking
    - class: File
      path: genes.read_group_tracking
    - class: File
      path: isoforms.fpkm_tracking
    - class: File
      path: isoforms.read_group_tracking
    - class: File
      path: isoforms.count_tracking
    - class: File
      path: isoform_exp.diff
    - class: File
      path: gene_exp.diff
  errorout:
    - class: File
      path: error.txt
Also, this is my Dockerfile for creating the image:
FROM r-base
COPY . /scripts
RUN apt-get update
RUN apt-get install -y \
    libcurl4-openssl-dev \
    libssl-dev \
    libmariadb-client-lgpl-dev \
    libmariadbclient-dev \
    libxml2-dev \
    r-cran-plyr \
    r-cran-reshape2
WORKDIR /scripts
RUN Rscript /scripts/build.R
ENTRYPOINT /bin/bash

I got the answer!
There were some problems in my program:
1. The Docker image was not pulled correctly, so CWL could not produce any output.
2. The inputs and outputs were not defined as mandatory, so I got a success status even when I did not have the proper inputs and outputs.
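For anyone hitting the same "success but outputs are null" symptom, here is a minimal sketch (not from the original post) of point 2, assuming the same file names and image tag as above: making the outputs required (no ?) forces cwltool to fail when cuff.txt is not produced, and the CWL stderr field captures errors natively instead of the prefix: 2> input, which is passed to Rscript as a literal argument rather than interpreted as a shell redirection.
class: CommandLineTool
cwlVersion: v1.0
id: cummerbund
label: cummerbund
baseCommand:
  - Rscript
requirements:
  - class: DockerRequirement
    dockerPull: cummerbund_0   # image must be pullable or already present locally
inputs:
  Rfile:
    type: File                 # required, not File?
    inputBinding:
      position: 0
  cuffdiffout:
    type: 'File[]'             # required, not 'File[]?'
    inputBinding:
      position: 1
stderr: error.txt              # replaces the 2> "redirect", which CWL does not interpret
outputs:
  output:
    type: File                 # required: the run now fails loudly if cuff.txt is missing
    outputBinding:
      glob: cuff.txt
  errorFile:
    type: stderr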

Related

How to add nested dictionary to dynamic host in Ansible

I have application details in respective vars files like the one below. For example, myapp1 in the "QA" environment would look like this:
cat myapp1_QA.yml
---
APP_HOSTS:
  - myapphost7:
      - logs:
          - /tmp/web/apphost7_access
          - /tmp/web/apphost7_error
  - myapphost9:
      - logs:
          - /tmp/web/apphost9_access
          - /tmp/web/apphost9_error
          - /tmp/web/apphost9_logs
WEB_HOSTS:
  - mywebhost7:
      - logs:
          - /tmp/webserver/webhost7.pid
In this example I wish to create a dynamic host containing the 3 hosts
myapphost7
myapphost9
mywebhost7
and each host has a variable logs that can be looped over to get the file paths.
Below is my ansible play:
---
- hosts: localhost
  tasks:
    - include_vars:
        file: "{{ playbook_dir }}/{{ appname }}_{{ myenv }}.yml"
    - name: Dsiplay dictionary data
      debug:
        msg: "{{ item[logs] }}"
      loop: "{{ APP_HOSTS }}"
I get the below error:
ansible-playbook read.yml -e appname=myapp1 -e myenv=QA
TASK [Dsiplay dictionary data] *********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'logs' is undefined\n\nThe error appears to be in '/root/read.yml': line 8, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Dsiplay dictionary data\n ^ here\n"}
My requirement is to store "myapphost7", "myapphost9", and "mywebhost7" in a group using add_host, with a variable logs: holding the list of log files for each host.
Note: if a host such as mywebhost7 is not defined under WEB_HOSTS: or APP_HOSTS:, then nothing should be added to the dynamic group for it.
Can you please suggest?
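One possible approach (a sketch, not from the original thread), assuming the vars files keep exactly the structure shown above; the group name dynamic_hosts is made up for the example:
- name: Build a dynamic group from APP_HOSTS and WEB_HOSTS
  add_host:
    name: "{{ (item | dict2items | first).key }}"
    groups: dynamic_hosts
    logs: "{{ (item | dict2items | first).value | map(attribute='logs') | list | flatten }}"
  loop: "{{ (APP_HOSTS | default([])) + (WEB_HOSTS | default([])) }}"

- name: Show the log paths for each dynamic host
  debug:
    msg: "{{ hostvars[item].logs }}"
  loop: "{{ groups['dynamic_hosts'] | default([]) }}"
The default([]) filters cover the note above: if APP_HOSTS: or WEB_HOSTS: is missing from the vars file, nothing is added from it.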

I am writing a GitLab pipeline to deploy Terraform files to a Nexus repo, and I want incremental versioning of my folder. I am confused, please help me

variables:
  TF_ROOT: ${CI_PROJECT_DIR}
  TF_CLI_CONFIG_FILE: $CI_PROJECT_DIR/.terraformrc
  TF_IN_AUTOMATION: "true"
  ARM_SUBSCRIPTION_ID: ""
  ARM_TENANT_ID: ""
  NEXUS_URL: ""

cache:
  key: "${TF_ROOT}"
  paths:
    - ${TF_ROOT}/.terraform/

.terraform-setup-and-init: &init
  - ls -al
  - source <(curl -O -k $NEXUS_URL)
  - ls -al
  - chmod +x setup-terraform.sh
  - ls -al *.sh
  - source ./setup-terraform.sh
  - export HTTPS_PROXY=
  - terraform init -var-file="./environments/US/sev.tfvars"

.validate:
  stage: validate
  script:
    - *init
    - terraform validate

.build:
  stage: build
  script:
    - *init
    - echo "executing terraform plan, needs provider credentials... Skipping"
    # - terraform plan -out="plan.cache"
    # - terraform show -json "plan.cache" > plan.json
  artifacts:
    paths:
      - ${TF_ROOT}/plan.cache
    reports:
      terraform: ${TF_ROOT}/plan.json

.deploy:
  stage: deploy
  script:
    - *init
    # - terraform apply -input=false "plan.cache"
  only:
    variables:
      - $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

stages:
  - validate
  - build
  - deploy

tf validate:
  extends: .validate

tf plan:
  extends: .build

tf apply:
  extends: .deploy
  dependencies:
    - tf plan
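On the versioning part: one common pattern (a sketch, not a drop-in job) is to build the version from GitLab's built-in, ever-increasing CI_PIPELINE_IID counter and upload the packaged folder to Nexus under that version. The job name, the NEXUS_USER/NEXUS_PASSWORD variables and the repository path below are assumptions, not values taken from the pipeline above:
upload to nexus:
  stage: deploy
  script:
    # the version increments automatically with every pipeline of this project
    - VERSION="1.0.${CI_PIPELINE_IID}"
    - tar czf "terraform-${VERSION}.tar.gz" ./environments
    # push the archive into a (hypothetical) raw-hosted Nexus repository
    - >
      curl --fail -k -u "${NEXUS_USER}:${NEXUS_PASSWORD}"
      --upload-file "terraform-${VERSION}.tar.gz"
      "${NEXUS_URL}/repository/terraform-releases/terraform-${VERSION}.tar.gz"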

How to check 777 permissions on multiple directories with Ansible

For a single directory my script runs fine, but how do I check the same thing for multiple directories?
Code for a single directory:
---
- name: checking directory permission
  hosts: test
  become: true
  tasks:
    - name: Getting permission to registered var 'p'
      stat:
        path: /var/SP/Shared/
      register: p
    - debug:
        msg: "permission is 777 for /var/SP/Shared/"
      when: p.stat.mode == "0777" or p.stat.mode == "2777" or p.stat.mode == "4777"
Reading the stat_module documentation shows that there is no parameter for recursion. Testing with_fileglob: did not give the expected result.
So it seems you would need to loop over the directories in a way like this:
- name: Get directory permissions
  stat:
    path: "{{ item }}"
  register: result
  with_items:
    - "/tmp/example"
    - "/tmp/test"
  tags: CIS

- name: result
  debug:
    msg:
      - "{{ result }}"
  tags: CIS
but I am sure more advanced solutions can still be found; for example, the registered results can be checked in a loop, as in the sketch below.
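A sketch of that follow-up, reusing the result registered above; the condition mirrors the single-directory check:
- name: Report directories with 777-style permissions
  debug:
    msg: "permission is {{ item.stat.mode }} for {{ item.item }}"
  loop: "{{ result.results }}"
  when: item.stat.exists and item.stat.mode in ['0777', '2777', '4777']
  tags: CIS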

Not able to execute lifecycle operation using script plugin

I'm trying to learn how to use the script plugin. I'm following the script plugin docs here but am not able to make it work.
I've tried to use the plugin in two ways. The first, where the cloudify.interfaces.lifecycle start operation is mapped directly to a script:
tosca_definitions_version: cloudify_dsl_1_3
imports:
  - 'http://www.getcloudify.org/spec/cloudify/4.5.5/types.yaml'
node_templates:
  Import_Project:
    type: cloudify.nodes.WebServer
    capabilities:
      scalable:
        properties:
          default_instances: 1
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          implementation: scripts/create_project.sh
          inputs: {}
The second, with a direct mapping to the plugin task:
tosca_definitions_version: cloudify_dsl_1_3
imports:
  - 'http://www.getcloudify.org/spec/cloudify/4.5.5/types.yaml'
node_templates:
  Import_Project:
    type: cloudify.nodes.WebServer
    capabilities:
      scalable:
        properties:
          default_instances: 1
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          implementation: script.script_runner.tasks.run
          inputs:
            script_path: scripts/create_project.sh
I've created a directory named scripts and placed the below create_project.sh script in this directory:
#! /bin/bash -e
ctx logger info "Hello to this world"
hostname
I'm getting errors while validating the blueprint.
Error when operation is mapped directly to a script:
[2019-04-13 13:29:40.594] [DEBUG] DslParserExecClient - got output from dsl parser Could not extract plugin from operation mapping 'scripts/create_project.sh', which is declared for operation 'start'. In interface 'cloudify.interfaces.lifecycle' in node 'Import_Project' of type 'cloudify.nodes.WebServer'
in: /opt/cloudify-composer/backend/dev/workspace/2/tmp-27O0e1t813N6as
in line: 3, column: 2
path: node_templates.Import_Project
value: {'interfaces': {'cloudify.interfaces.lifecycle': {'start': {'implementation': 'scripts/create_project.sh', 'inputs': {}}}}, 'type': 'cloudify.nodes.WebServer', 'capabilities': {'scalable': {'properties': {'default_instances': 1}}}}
Error when using a direct mapping:
[2019-04-13 13:25:21.015] [DEBUG] DslParserExecClient - got output from dsl parser node 'Import_Project' has no relationship which makes it contained within a host and it has a plugin 'script' with 'host_agent' as an executor. These types of plugins must be installed on a host
in: /opt/cloudify-composer/backend/dev/workspace/2/tmp-279QCz2CV3Y81L
in line: 2, column: 0
path: node_templates
value: {'Import_Project': {'interfaces': {'cloudify.interfaces.lifecycle': {'start': {'implementation': 'script.script_runner.tasks.run', 'inputs': {'script_path': 'scripts/create_project.sh'}}}}, 'type': 'cloudify.nodes.WebServer', 'capabilities': {'scalable': {'properties': {'default_instances': 1}}}}}
What is missing to make this work?
I also found that the Cloudify Script Plugin examples from the documentation do not work: https://docs.cloudify.co/4.6/working_with/official_plugins/configuration/script/
However, I found I can make the examples work by adding an executor line alongside the implementation line to override the host_agent executor, as follows:
tosca_definitions_version: cloudify_dsl_1_3
imports:
  - 'http://www.getcloudify.org/spec/cloudify/4.5.5/types.yaml'
node_templates:
  Import_Project:
    type: cloudify.nodes.WebServer
    capabilities:
      scalable:
        properties:
          default_instances: 1
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          implementation: scripts/create_project.sh
          executor: central_deployment_agent
          inputs: {}

Ansible Not copying directory using cp command

I have the following role:
---
- name: "Copying {{source_directory}} to {{destination_directory}}"
  shell: cp -r "{{source_directory}}" "{{destination_directory}}"
being used as follows:
- { role: copy_folder, source_directory: "{{working_directory}}/ipsc/dist", destination_directory: "/opt/apache-tomcat-base/webapps/ips" }
with the parameters: working_directory: /opt/demoServer
This is being executed after I remove the directory using this role (as I do not want the previous contents)
- name: "Removing Folder {{path_to_file}}"
command: rm -r "{{path_to_file}}"
with parameters: path_to_file: "/opt/apache-tomcat-base/webapps/ips"
I get the following output:
TASK: [copy_folder | Copying /opt/demoServer/ipsc/dist to /opt/apache-tomcat-base/webapps/ips] ***
<md1cat01-demo.lnx.ix.com> ESTABLISH CONNECTION FOR USER: my.user
<md1cat01-demo.lnx.ix.com> REMOTE_MODULE command cp -r "/opt/demoServer/ipsc/dist" "/opt/apache-tomcat-base/webapps/ips" #USE_SHELL
...
changed: [md1cat01-demo.lnx.ix.com] => {"changed": true, "cmd": "cp -r \"/opt/demoServer/ipsc/dist\" \"/opt/apache-tomcat-base/webapps/ips\"", "delta": "0:00:00.211759", "end": "2016-02-05 11:05:37.459890", "rc": 0, "start": "2016-02-05 11:05:37.248131", "stderr": "", "stdout": "", "warnings": []}
What is happening is that no folder ever appears in that directory.
Basically, the cp command is not doing its job, but I get no error. If I run the copy command manually on the machine, however, it works.
Use the copy module and set directory_mode to yes.
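A sketch of that suggestion using the same variables as the role above; remote_src is needed because the source directory already lives on the managed host, and recursive copying with remote_src requires a reasonably recent Ansible (directory_mode can additionally be set to control the permissions of newly created directories):
- name: "Copying {{ source_directory }} to {{ destination_directory }}"
  copy:
    src: "{{ source_directory }}/"     # trailing slash copies the directory's contents
    dest: "{{ destination_directory }}"
    remote_src: yes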
