My heat template is like:
windows_instance:
  type: OS::Nova::Server
  properties:
    image: { get_param: imagever }
    flavor: m1.large
    key_name: test
    networks:
      - port: { get_resource: publicport }
    user_data_format: RAW
    user_data:
      str_replace:
        template: |
          #ps1
          testps "$srcurl" "$dest" -Verbose
        params:
          $dest: { get_param: target_location }
          $srcurl: { get_param: url_src }
          testps: { get_file: test1.ps1 }
test1.ps1:
param([String]$src, [String]$dest)
Write-Host ("url is: " + $src)
Write-Host ("dest is: " + $dest)
But cloudbase-init.log reported:
executeuserdatascript C:\Program Files (x86)\Cloudbase Solutions\Cloudbase-Init\Python27\lib\site-packages\cloudbaseinit\plugins\windows\userdatautils.py:58 2015-04-27 18:40:06.905 1788 DEBUG cloudbaseinit.plugins.windows.userdatautils [-] Userdata stderr:
The term 'param' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At C:\Users\cloudbase-init\appdata\local\temp\6ea2afb5-645b-430c-91a2-a67c3201f5db.ps1:7 char:7
param <<<< ([String]$src, [String]$dest)
CategoryInfo : ObjectNotFound: (param:String) [], CommandNotFoundException
FullyQualifiedErrorId : CommandNotFoundException
So what is the correct way to pass parameters to a PowerShell script using a heat template?
I made it work by following the sample here:
https://github.com/openstack/heat-templates/tree/master/hot/Windows
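The pattern from that sample, sketched against the parameters from this question (the placeholder names like __src_url__ are my own choice, not from the sample): the script file itself becomes the str_replace template, so the #ps1 header stays on the first line and no param block is needed:

```yaml
user_data_format: RAW
user_data:
  str_replace:
    # The whole script is the template; placeholders inside it get substituted.
    template: { get_file: test1.ps1 }
    params:
      __src_url__: { get_param: url_src }
      __dest__: { get_param: target_location }
```

with test1.ps1 starting with #ps1 and using __src_url__ and __dest__ directly instead of declaring parameters.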
My script runs fine for a single directory, but how can I check the same for multiple directories?
Code for a single directory:
---
- name: checking directory permission
  hosts: test
  become: true
  tasks:
    - name: Getting permission to registered var 'p'
      stat:
        path: /var/SP/Shared/
      register: p
    - debug:
        msg: "permission is 777 for /var/SP/Shared/"
      when: p.stat.mode == "0777" or p.stat.mode == "2777" or p.stat.mode == "4777"
Reading the stat module documentation shows that there is no parameter for recursion. Testing with_fileglob: did not give the expected result.
So it seems you would need to loop over the directories in a way like
- name: Get directory permissions
  stat:
    path: "{{ item }}"
  register: result
  with_items:
    - "/tmp/example"
    - "/tmp/test"
  tags: CIS
- name: result
  debug:
    msg:
      - "{{ result }}"
  tags: CIS
but I am sure more advanced solutions can still be found.
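A sketch building on that loop (the paths are just examples): registering inside a loop produces a per-item results list, so each directory's mode can be checked individually rather than dumping the whole structure:

```yaml
- name: Get directory permissions
  stat:
    path: "{{ item }}"
  register: result
  loop:
    - /tmp/example
    - /tmp/test

- name: Report directories with 777-style permissions
  debug:
    msg: "permission is {{ item.stat.mode }} for {{ item.item }}"
  # result.results holds one stat entry per looped path
  loop: "{{ result.results }}"
  when: item.stat.exists and item.stat.mode in ['0777', '2777', '4777']
```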
I have a simple playbook that fetches some data from a Vault server using curl.
tasks:
  - name: role_id
    shell: 'curl \
      --header "X-Vault-Token: s.ddDblh8DpHkOu3IMGbwrM6Je" \
      --cacert vault-ssl-cert.chained \
      https://active.vault.service.consul:8200/v1/auth/approle/role/cpanel/role-id'
    register: vault_role_id
  - name: test1
    debug:
      msg: "{{ vault_role_id.stdout }}"
The output is like this:
TASK [test1] *********************************************************************************************************************************************************************
ok: [localhost] => {
"msg": {
"auth": null,
"data": {
"role_id": "65d02c93-689c-eab1-31ca-9efb1c3e090e"
},
"lease_duration": 0,
"lease_id": "",
"renewable": false,
"request_id": "8bc03205-dcc2-e388-57ff-cdcaef84ef69",
"warnings": null,
"wrap_info": null
}
}
Everything is OK if I access a first-level attribute, like .stdout in the previous example. But I need to reach a deeper-level attribute, like vault_role_id.stdout.data.role_id. When I try this, it fails with the following error:
"The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'data'\n\n
Do you have a suggestion for how to properly get attribute values from a deeper level of this object hierarchy?
"The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'data'\n\n
Yes. What's happening is that rendering it into msg: with {{ coerces the JSON text into a Python dict. If you do want it to be a dict, then use either msg: "{{ (vault_role_id.stdout | from_json).data.role_id }}", or use set_fact: { vault_role_data: "{{ vault_role_id.stdout }}" } and then vault_role_data will be a dict, for the same reason it was coerced by your msg.
You can see the opposite process by prefixing the msg with any characters:
- name: this one is text
  debug:
    msg: vault_role_id is {{ vault_role_id.stdout }}
- name: this one is coerced
  debug:
    msg: '{{ vault_role_id.stdout }}'
While this isn't what you asked: you should also add --fail to your curl so it exits with a non-zero return code if the request returns non-200 OK, or you can use the more Ansible-y way via the uri module with return_content: yes.
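A sketch of that uri-based alternative, reusing the URL and token from the question (uri parses a JSON response body into vault_role_id.json automatically; the ca_path parameter may require a reasonably recent Ansible, older versions can rely on the system CA store instead):

```yaml
- name: role_id via uri
  uri:
    url: https://active.vault.service.consul:8200/v1/auth/approle/role/cpanel/role-id
    headers:
      X-Vault-Token: "s.ddDblh8DpHkOu3IMGbwrM6Je"
    ca_path: vault-ssl-cert.chained
    return_content: yes
  register: vault_role_id

- name: test1
  debug:
    # .json is already a dict, no from_json needed
    msg: "{{ vault_role_id.json.data.role_id }}"
```

A non-2xx status also fails the task by default, which replaces curl's --fail.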
I am not able to use a variable in the driver configuration (feature file Background).
1) the variable is defined in the JS configuration file (karate-config.js):
config.driverType = 'geckodriver';
config.driverExecutable = 'geckodriver';
config.driverStart = false;
config.driverPort = 4444;
2) in the feature file (Background section) I need to configure the driver according to the variable values:
configure driver = { type: driverType, executable: driverExecutable, start: driverStart, port: driverPort}
to get the same result as this (which works):
configure driver = { type: 'geckodriver', executable: 'geckodriver', start: false, port: 4444}
3) when I print the variable ("print driverType") in a scenario, the value is printed correctly:
[print] geckodriver
but driver configuration fails:
WARN com.intuit.karate - unknown driver type: driverType, defaulting to 'chrome'
ERROR com.intuit.karate - driver config / start failed: class java.lang.String cannot be cast to class java.lang.Boolean (java.lang.String and java.lang.Boolean are in module java.base of loader 'bootstrap'), options: {type=chrome, executable=driverExecutable, start=driverStart, port=driverPort, target=null}
Could you help me solve this so that I can change the driver settings in the JS file (generally: how do I insert a variable into the driver configuration)?
Thank you.
Just make this change:
* configure driver = { type: '#(driverType)', executable: '#(driverExecutable)', start: '#(driverStart)', port: '#(driverPort)' }
Or this should also work:
* configure driver = ({ type: driverType, executable: driverExecutable, start: driverStart, port: driverPort })
There is a subtle difference, explained here: https://github.com/intuit/karate#enclosed-javascript
By the way, you can do the config like this also in karate-config.js:
config.driverConfig = { type: 'geckodriver', executable: 'geckodriver' };
And this would work in the feature file:
* configure driver = driverConfig
And you can do the driver config completely in karate-config.js if you want:
karate.configure('driver', { type: 'geckodriver', executable: 'geckodriver' });
Some time ago, somebody suggested using dynamic inventories to generate a different hosts file from a template depending on the location and other variables, but I faced a pretty big issue:
After I create the inventory from a template, I need to refresh it (I do this using meta: refresh_inventory) for Ansible to execute tasks on newly added hosts. However, if a host was not initially in the hosts file, Ansible does not execute tasks on it. On the other hand, if after changing the hosts file a host is absent from the newly formed file, then Ansible omits the host like it should, so refresh_inventory does half of the work. Is there any way to get around this issue?
E.g. I have 1 task to generate hosts file from template, then refresh inventory, then do a simple task on all hosts, like show message:
tasks:
  - name: Creating inventory template
    local_action:
      module: template
      src: hosts.j2
      dest: "/opt/ansible/inventories/{{ location }}/hosts"
      mode: 0777
      force: yes
      backup: yes
    ignore_errors: yes
    run_once: true
  - name: "Refreshing hosts file for {{ location }} location"
    meta: refresh_inventory
  - name: Force refresh of host errors
    meta: clear_host_errors
  - name: Show message
    debug: msg="This works for this host"
If initial hosts file has hosts A, B, C, D, and the newly created inventory has B, C, D, then all is good:
ok: [B] => {
"msg": "This works for this host"
}
ok: [C] => {
"msg": "This works for this host"
}
ok: [D] => {
"msg": "This works for this host"
}
However, if the newly formed hosts file has hosts B, C, D, E (E not being present in the initial hosts file), then the result is again:
ok: [B] => {
"msg": "This works for this host"
}
ok: [C] => {
"msg": "This works for this host"
}
ok: [D] => {
"msg": "This works for this host"
}
The task for E is missing. Now if I replay the playbook, only to add another host, say F, then the result looks like:
ok: [B] => {
"msg": "This works for this host"
}
ok: [C] => {
"msg": "This works for this host"
}
ok: [D] => {
"msg": "This works for this host"
}
ok: [E] => {
"msg": "This works for this host"
}
But no F, even though it was already added to the inventory file before the refresh.
So, any ideas?
Quoting from Basics
For each play in a playbook, you get to choose which machines in your infrastructure to target ... The hosts line is a list of one or more groups or host patterns ...
For example, it is possible to create the inventory in the 1st play and use it in the 2nd play. The playbook below
- hosts: localhost
  tasks:
    - template:
        src: hosts.j2
        dest: "{{ playbook_dir }}/hosts"
    - meta: refresh_inventory

- hosts: test
  tasks:
    - debug:
        var: inventory_hostname
with the template (fit it to your needs)
$ cat hosts.j2
[test]
test_01
test_02
test_03
[test:vars]
ansible_connection=ssh
ansible_user=admin
ansible_become=yes
ansible_become_user=root
ansible_become_method=sudo
ansible_python_interpreter=/usr/local/bin/python3.6
ansible_perl_interpreter=/usr/local/bin/perl
give
PLAY [localhost] ****************************************************************************
TASK [Gathering Facts] **********************************************************************
ok: [localhost]
TASK [template] *****************************************************************************
changed: [localhost]
PLAY [test] *********************************************************************************
TASK [Gathering Facts] **********************************************************************
ok: [test_02]
ok: [test_01]
ok: [test_03]
TASK [debug] ********************************************************************************
ok: [test_01] => {
"inventory_hostname": "test_01"
}
ok: [test_02] => {
"inventory_hostname": "test_02"
}
ok: [test_03] => {
"inventory_hostname": "test_03"
}
PLAY RECAP **********************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
test_01 : ok=2 changed=0 unreachable=0 failed=0
test_02 : ok=2 changed=0 unreachable=0 failed=0
test_03 : ok=2 changed=0 unreachable=0 failed=0
Even though the first answer provided here is correct, I think this deserves an explanation of how refresh_inventory and add_host behave, as I've seen a few other questions regarding this topic.
It does not matter whether you use a static or a dynamic inventory; the behavior is the same. The only thing specific to dynamic inventories that can change the behavior is caching. The following applies with caching disabled, or with the cache refreshed after adding the new host.
Both refresh_inventory and add_host allow you to execute tasks only in subsequent plays. However, they do allow you to access the hostvars of the added hosts in the current play as well. This behavior is only partially and very briefly mentioned in the add_host documentation and is easy to miss.
Use variables to create new hosts and groups in inventory for use in later plays of the same playbook.
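A minimal sketch of that behavior (the host name and group here are hypothetical): a host added with add_host in one play only receives tasks in a later play of the same playbook:

```yaml
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Add a host to the in-memory inventory
      add_host:
        name: new_host
        groups: dynamic
    # Tasks later in THIS play still do not run on new_host,
    # even though its hostvars are already visible.

- hosts: dynamic
  gather_facts: false
  tasks:
    - name: Runs on the host added above
      debug:
        msg: "Running on {{ inventory_hostname }}"
```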
Consider following inventory called hosts_ini-main.ini:
localhost testvar='testmain'
Now you can write a playbook that observes and tests the behavior of refresh_inventory. It overwrites the hosts_ini-main.ini inventory file (used by the playbook) with the contents of a second file, hosts_ini-second.ini:
localhost testvar='testmain'
127.0.0.2 testvar='test2'
The playbook prints hostvars before the inventory is changed, then changes the inventory, refreshes it, prints hostvars again, and finally tries to execute a task only on the newly added host.
The second play also tries to execute a task only on the added host.
---
- hosts: all
  connection: local
  become: false
  gather_facts: false
  tasks:
    - name: Print hostvars
      debug:
        var: hostvars
    - name: Print var for first host
      debug:
        var: testvar
      when: hostvars[inventory_hostname]['testvar'] == "testmain"
    - name: Copy alternate hosts file to main hosts file
      copy:
        src: "hosts_ini-second.ini"
        dest: "hosts_ini-main.ini"
    - name: Refresh inventory using meta module
      meta: refresh_inventory
    - name: Print hostvars for the second time in the first play
      debug:
        var: hostvars
    - name: Print var for added host
      debug:
        var: testvar  # This will not execute
      when: hostvars[inventory_hostname]['testvar'] == "test2"

# New play
- hosts: all
  connection: local
  become: false
  gather_facts: false
  tasks:
    - name: Print hostvars in a different play
      debug:
        var: testvar
      when: hostvars[inventory_hostname]['testvar'] == "test2"
Here is the execution (I've truncated parts of the output to make it more readable).
PLAY [all] *******************************************************************************
TASK [Print hostvars] ********************************************************************
ok: [localhost] => {
"hostvars": {
"localhost": {
"ansible_check_mode": false,
"ansible_config_file": "/home/john/dev-ansible/ansible.cfg",
"ansible_diff_mode": false,
"ansible_facts": {},
...
"testvar": "testmain"
}
}
}
TASK [Print var for first host] ***********************************************************
ok: [localhost] => {
"testvar": "testmain"
}
TASK [Copy alternate hosts file to main hosts file] ***************************************
changed: [localhost]
TASK [Refresh inventory using meta module] ************************************************
TASK [Print hostvars for the second time in the first play] *******************************
ok: [localhost] => {
"hostvars": {
"127.0.0.2": {
"ansible_check_mode": false,
"ansible_config_file": "/home/john/dev-ansible/ansible.cfg",
"ansible_diff_mode": false,
"ansible_facts": {},
...
"testvar": "test2"
},
"localhost": {
"ansible_check_mode": false,
"ansible_config_file": "/home/john/dev-ansible/ansible.cfg",
"ansible_diff_mode": false,
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
...
"testvar": "testmain"
}
}
}
TASK [Print var for added host] ***********************************************************
skipping: [localhost]
PLAY [all] ********************************************************************************
TASK [Print hostvars in a different play] *************************************************
skipping: [localhost]
ok: [127.0.0.2] => {
"testvar": "test2"
}
PLAY RECAP *******************************************************************************
127.0.0.2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
localhost : ok=4 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
As can be seen, hostvars contains information about the newly added host even in the first play, but Ansible is not able to execute a task on that host there. When the new play starts, the task is executed on the new host without problems.
I have written CWL code which runs an Rscript command in a docker container. I have two files, a CWL file and a YAML file, and run them with the command:
cwltool --debug a_code.cwl a_input.yaml
The output says the process status is success, but there is no output file, and "output": null is reported in the result. I want to know whether there is a way to find out if the Rscript file actually ran in the docker container successfully, and why the output files are null.
The final part of the result is:
[job a_code.cwl] {
"output": null,
"errorFile": null
}
[job a_code.cwl] Removing input staging directory /tmp/tmpUbyb7k
[job a_code.cwl] Removing input staging directory /tmp/tmpUbyb7k
[job a_code.cwl] Removing temporary directory /tmp/tmpkUIOnw
[job a_code.cwl] Removing temporary directory /tmp/tmpkUIOnw
Removing intermediate output directory /tmp/tmpCG9Xs1
Removing intermediate output directory /tmp/tmpCG9Xs1
{
"output": null,
"errorFile": null
}
Final process status is success
Final process status is success
R code:
library(cummeRbund)
cuff<-readCufflinks( dbFile = "cuffData.db", gtfFile = NULL, runInfoFile = "run.info", repTableFile = "read_groups.info", geneFPKM = "genes.fpkm_trac .... )
#setwd("/scripts")
sink("cuff.txt")
print(cuff)
sink()
My cwl file code is:
class: CommandLineTool
cwlVersion: v1.0
id: cummerbund
baseCommand:
  - Rscript
inputs:
  - id: Rfile
    type: File?
    inputBinding:
      position: 0
  - id: cuffdiffout
    type: 'File[]?'
    inputBinding:
      position: 1
  - id: errorout
    type: File?
    inputBinding:
      position: 99
      prefix: 2>
      valueFrom: |
        error.txt
outputs:
  - id: output
    type: File?
    outputBinding:
      glob: cuff.txt
  - id: errorFile
    type: File?
    outputBinding:
      glob: error.txt
label: cummerbund
requirements:
  - class: DockerRequirement
    dockerPull: cummerbund_0
My input file (YAML file) is:
inputs:
  Rfile:
    basename: run_cummeR.R
    class: File
    nameext: .R
    nameroot: run_cummeR
    path: run_cummeR.R
  cuffdiffout:
    - class: File
      path: cuffData.db
    - class: File
      path: genes.fpkm_tracking
    - class: File
      path: read_groups.info
    - class: File
      path: genes.count_tracking
    - class: File
      path: genes.read_group_tracking
    - class: File
      path: isoforms.fpkm_tracking
    - class: File
      path: isoforms.read_group_tracking
    - class: File
      path: isoforms.count_tracking
    - class: File
      path: isoform_exp.diff
    - class: File
      path: gene_exp.diff
  errorout:
    - class: File
      path: error.txt
Also, this is my Dockerfile for creating image:
FROM r-base
COPY . /scripts
RUN apt-get update
RUN apt-get install -y \
    libcurl4-openssl-dev \
    libssl-dev \
    libmariadb-client-lgpl-dev \
    libmariadbclient-dev \
    libxml2-dev \
    r-cran-plyr \
    r-cran-reshape2
WORKDIR /scripts
RUN Rscript /scripts/build.R
ENTRYPOINT /bin/bash
I got the answer!
There were two problems in my program:
1. The docker image was not pulled correctly, so the CWL tool couldn't produce any output.
2. The inputs and outputs were not defined as mandatory, so I got a success status even when I did not have proper inputs and outputs.
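A sketch of the second fix, reusing the names from the question: declaring the output as a required File (not File?) makes cwltool report a failure whenever cuff.txt is not produced, and CWL's native stderr field replaces the 2> prefix trick (which is not interpreted as shell redirection in a CommandLineTool):

```yaml
stderr: error.txt         # CWL-native stderr capture, no shell redirection needed
outputs:
  - id: output
    type: File            # required: the job now fails if cuff.txt is missing
    outputBinding:
      glob: cuff.txt
  - id: errorFile
    type: stderr          # shorthand for a File output bound to the stderr log
```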