I'm learning SaltStack right now and I was wondering if there is a way to get the stdout of a salt state, put it into a document, and then send it to the master. Or is there a better way to do this?
To achieve this, save the result of the script execution in a variable. It will contain a hash with the keys that show up under changes:. The stdout entry of that hash can then be written to a file.
{% set script_res = salt['cmd.script']('salt://test.sh') %}
create-stdout-file:
  file.managed:
    - name: /tmp/script-stdout.txt
    - contents: {{ script_res.stdout }}
The output is already going to the master. It would be better to output in JSON and query down to the data you want in your document on the master, such as the following.
Normal output
$ sudo salt salt00\* state.apply tests.test3
salt00.wolfnet.bad4.us:
----------
          ID: test_run
    Function: cmd.run
        Name: echo test
      Result: True
     Comment: Command "echo test" run
     Started: 10:39:51.103057
    Duration: 18.281 ms
     Changes:
              ----------
              pid:
                  8661
              retcode:
                  0
              stderr:
              stdout:
                  test

Summary for salt00.wolfnet.bad4.us
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:  18.281 ms
JSON output
$ sudo salt salt00\* state.apply tests.test3 --out json
{
    "salt00.wolfnet.bad4.us": {
        "cmd_|-test_run_|-echo test_|-run": {
            "name": "echo test",
            "changes": {
                "pid": 9057,
                "retcode": 0,
                "stdout": "test",
                "stderr": ""
            },
            "result": true,
            "comment": "Command \"echo test\" run",
            "__sls__": "tests.test3",
            "__run_num__": 0,
            "start_time": "10:40:55.582273",
            "duration": 19.374,
            "__id__": "test_run"
        }
    }
}
JSON parsed down with jq to just the stdout
$ sudo salt salt00\* state.apply tests.test3 --out=json | jq '.|.[]|."cmd_|-test_run_|-echo test_|-run"|.changes.stdout'
"test"
Also, for the record, it is considered bad practice to put code that changes the system into Jinja. Jinja always runs when a template is rendered and there is no way to control when that happens, so even a test=True run will still execute the Jinja code that makes changes, which could be very harmful to your systems.
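If the goal is just to land the script's stdout in a file on the minion without executing anything at render time, one way is to do the whole thing in states. A rough sketch (the state IDs, the /tmp/test.sh landing path and the 0755 mode are illustrative; the rest reuses the paths from this answer):
push-test-script:
  file.managed:
    - name: /tmp/test.sh
    - source: salt://test.sh
    - mode: '0755'

run-test-script:
  cmd.run:
    # shell redirection works because the cmd.run state hands the command to a shell
    - name: /tmp/test.sh > /tmp/script-stdout.txt
    - require:
      - file: push-test-script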
My ansible-playbook runs a long-running task with the async tag and also uses the "creates:" condition, so it is run only once on the server. When I was writing the playbook yesterday, I am pretty sure the task was skipped when the log set in the "creates:" tag existed.
It shows changed now though, every time I run it.
I am confused, as I do not think I changed anything, and I'd like my registered variable to be set correctly as unchanged when the condition is true.
Output of ansible-playbook (the debug section shows the task is changed: true):
TASK [singleserver : Install Assure1 SingleServer role] *********************************************************************************************************************************
changed: [crassure1]
TASK [singleserver : Debug] *************************************************************************************************************************************************************
ok: [crassure1] => {
    "msg": {
        "ansible_job_id": "637594935242.28556",
        "changed": true,
        "failed": false,
        "finished": 0,
        "results_file": "/root/.ansible_async/637594935242.28556",
        "started": 1
    }
}
But if I check the actual results file on the target machine, it correctly resolved the condition and did not actually execute the shell script, so the task should be unchanged (it shows the message that the task was skipped because the log exists):
[root@crassure1 assure1]# cat "/root/.ansible_async/637594935242.28556"
{"invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": true, "strip_empty_ends": true, "_raw_params": "/opt/install/install_command.sh", "removes": null, "argv": null, "creates": "/opt/assure1/logs/SetupWizard.log", "chdir": null, "stdin_add_newline": true, "stdin": null}}, "cmd": "/opt/install/install_command.sh", "changed": false, "rc": 0, "stdout": "skipped, since /opt/assure1/logs/SetupWizard.log exists"}[root@crassure1 assure1]# Connection reset by 172.24.36.123 port 22
My playbook section looks like this:
- name: Install Assure1 SingleServer role
  shell:
    #cmd: "/opt/assure1/bin/SetupWizard -a --Depot /opt/install/:a1-local --First --WebFQDN crassure1.tspdata.local --Roles All"
    cmd: "/opt/install/install_command.sh"
  async: 7200
  poll: 0
  register: Assure1InstallWait
  args:
    creates: /opt/assure1/logs/SetupWizard.log

- name: Debug
  debug:
    msg: "{{ Assure1InstallWait }}"

- name: Check on Installation status every 15 minutes
  async_status:
    jid: "{{ Assure1InstallWait.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 30
  delay: 900
  when: Assure1InstallWait is changed
Is there something I am missing, or is that some kind of a bug?
I am limited by the Ansible version available in the configured trusted repo, so I am using Ansible 2.9.25.
Q: "The module shell shows changed every time I run it"
A: In async mode the task can't be skipped immediately. First, the shell module must find out whether the file /opt/assure1/logs/SetupWizard.log exists on the remote host or not. Then, if the file exists, the module will decide to skip the execution of the command. But you run the task asynchronously. In this case, Ansible starts the module and returns without waiting for the module to complete. That's what the registered variable Assure1InstallWait says: the task started but hasn't finished yet.
"msg": {
"ansible_job_id": "637594935242.28556",
"changed": true,
"failed": false,
"finished": 0,
"results_file": "/root/.ansible_async/637594935242.28556",
"started": 1
}
The decision to mark such a task changed is correct, I think, because the execution on the remote host is still in progress.
Print the registered result of the async_status module. You'll see that the command was skipped because the file exists (you printed the async results file on the remote host instead). Here the attribute changed is set to false, because now we know the command didn't execute:
job_result:
  ...
  attempts: 1
  changed: false
  failed: false
  finished: 1
  msg: Did not run command since '/tmp/SetupWizard.log' exists
  rc: 0
  ...
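A task like this, placed after the async_status loop in the playbook above, would print that registered result during the run (a sketch reusing the job_result name):
- name: Print the async_status result (not the async handle)
  debug:
    msg: "{{ job_result }}"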
I have a simple playbook that fetches some data from a Vault server using curl.
tasks:
  - name: role_id
    shell: 'curl \
      --header "X-Vault-Token: s.ddDblh8DpHkOu3IMGbwrM6Je" \
      --cacert vault-ssl-cert.chained \
      https://active.vault.service.consul:8200/v1/auth/approle/role/cpanel/role-id'
    register: 'vault_role_id'

  - name: test1
    debug:
      msg: "{{ vault_role_id.stdout }}"
The output is like this:
TASK [test1] *********************************************************************************************************************************************************************
ok: [localhost] => {
    "msg": {
        "auth": null,
        "data": {
            "role_id": "65d02c93-689c-eab1-31ca-9efb1c3e090e"
        },
        "lease_duration": 0,
        "lease_id": "",
        "renewable": false,
        "request_id": "8bc03205-dcc2-e388-57ff-cdcaef84ef69",
        "warnings": null,
        "wrap_info": null
    }
}
Everything is OK if I access a first-level attribute, like .stdout in the previous example. I need to reach a deeper-level attribute, like vault_role_id.stdout.data.role_id. When I try this it fails with the following error:
"The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'data'\n\n
Do you have a suggestion for how to properly get attribute values from a deeper level of this object hierarchy?
"The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'data'\n\n
Yes, because what's happening is that rendering it into msg: with {{ coerces the JSON text into a Python dict. If you do want it to be a dict, then use either msg: "{{ (vault_role_id.stdout | from_json).data.role_id }}", or use set_fact: {vault_role_data: "{{ vault_role_id.stdout }}"}; then vault_role_data will be a dict for the same reason it was coerced by your msg.
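Spelled out as tasks, the two options might look like this (a sketch reusing the variable names from the question; vault_role_data is a made-up fact name):
- name: option 1 - parse the JSON text explicitly
  debug:
    msg: "{{ (vault_role_id.stdout | from_json).data.role_id }}"

- name: option 2 - let the coercion happen once via set_fact
  set_fact:
    vault_role_data: "{{ vault_role_id.stdout }}"

- name: then index into the dict normally
  debug:
    msg: "{{ vault_role_data.data.role_id }}"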
You can see the opposite process by prefixing the msg with any characters:
- name: this one is text
  debug:
    msg: vault_role_id is {{ vault_role_id.stdout }}

- name: this one is coerced
  debug:
    msg: '{{ vault_role_id.stdout }}'
While this isn't what you asked, you should also add --fail to your curl so it exits with a non-zero return code if the request returns non-200 OK, or you can use the more Ansible-y way via the uri module with the return_content: yes parameter.
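A sketch of that uri-based variant, reusing the URL and token from the question (note it relies on the system trust store for the certificate rather than the --cacert file, which may need adjusting):
- name: fetch the role-id with the uri module instead of curl
  uri:
    url: https://active.vault.service.consul:8200/v1/auth/approle/role/cpanel/role-id
    headers:
      X-Vault-Token: "s.ddDblh8DpHkOu3IMGbwrM6Je"
    return_content: yes
  register: vault_role_id_uri

- name: the body is exposed pre-parsed as .json when the response is JSON
  debug:
    msg: "{{ vault_role_id_uri.json.data.role_id }}"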
I have the following role:
---
- name: "Copying {{source_directory}} to {{destination_directory}}"
  shell: cp -r "{{source_directory}}" "{{destination_directory}}"
being used as follows:
- { role: copy_folder, source_directory: "{{working_directory}}/ipsc/dist", destination_directory: "/opt/apache-tomcat-base/webapps/ips" }
with the parameters: working_directory: /opt/demoServer
This is being executed after I remove the directory using this role (as I do not want the previous contents)
- name: "Removing Folder {{path_to_file}}"
  command: rm -r "{{path_to_file}}"
with parameters: path_to_file: "/opt/apache-tomcat-base/webapps/ips"
I get the following output:
TASK: [copy_folder | Copying /opt/demoServer/ipsc/dist to /opt/apache-tomcat-base/webapps/ips] ***
<md1cat01-demo.lnx.ix.com> ESTABLISH CONNECTION FOR USER: my.user
<md1cat01-demo.lnx.ix.com> REMOTE_MODULE command cp -r "/opt/demoServer/ipsc/dist" "/opt/apache-tomcat-base/webapps/ips" #USE_SHELL
...
changed: [md1cat01-demo.lnx.ix.com] => {"changed": true, "cmd": "cp -r \"/opt/demoServer/ipsc/dist\" \"/opt/apache-tomcat-base/webapps/ips\"", "delta": "0:00:00.211759", "end": "2016-02-05 11:05:37.459890", "rc": 0, "start": "2016-02-05 11:05:37.248131", "stderr": "", "stdout": "", "warnings": []}
What is happening is that no folder ever appears in that directory.
Basically, the cp command is not doing its job, but I get no error of any kind. If I run the copy command manually on the machine, however, it works.
Use the copy module and set directory_mode to yes.
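A minimal sketch of what that could look like, reusing the role's variables; remote_src: yes is assumed here because the source directory lives on the managed host (recursive copies with remote_src need Ansible 2.8+), and directory_mode takes the mode applied to directories created during the recursive copy:
- name: "Copying {{source_directory}} to {{destination_directory}}"
  copy:
    src: "{{source_directory}}/"
    dest: "{{destination_directory}}"
    remote_src: yes
    directory_mode: "0755"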
I am trying to create and upload an Ubuntu-based image to a trystack server using the packer tool. I am using Windows to do it. I have created a sample template and a script file for setting environment variables; provisioning is done with Chef. But when I run the packer build command I get:
1 error(s) occurred:
* Get /: unsupported protocol scheme ""
What am I missing here?
Here are the template and script files
template.json
{
  "builders": [
    {
      "type": "openstack",
      "ssh_username": "root",
      "image_name": "sensor-cloud",
      "source_image": "66a14661-2dfb-4370-b6d4-87aaefcffdce",
      "flavor": "3",
      "availability_zone": "nova",
      "security_groups": ["mySecurityGroup"]
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "sensorCloudCookbook.zip",
      "destination": "/tmp/sensorCloudCookbook.zip"
    },
    {
      "type": "shell",
      "inline": [
        "curl -L https://www.opscode.com/chef/install.sh | bash"
      ],
      "execute_command": "chmod +x {{ .Path }}; sudo -E {{ .Path }}"
    },
    {
      "type": "shell",
      "inline": [
        "unzip /tmp/sensorCloudCookbook.zip -d /tmp/sensorCloudCookbook"
      ],
      "execute_command": "chmod +x {{ .Path }}; sudo -E {{ .Path }}"
    },
    {
      "type": "shell",
      "inline": [
        "chef-solo -c /tmp/sensorCloudCookbook/solo.rb -l info -L /tmp/sensorCloudLogs.txt"
      ],
      "execute_command": "chmod +x {{ .Path }}; sudo -E {{ .Path }}"
    }
  ]
}
openstack-config.sh
#!/bin/bash
# To use an OpenStack cloud you need to authenticate against the Identity
# service named keystone, which returns a **Token** and **Service Catalog**.
# The catalog contains the endpoints for all services the user/tenant has
# access to - such as Compute, Image Service, Identity, Object Storage, Block
# Storage, and Networking (code-named nova, glance, keystone, swift,
# cinder, and neutron).
#
# *NOTE*: Using the 2.0 *Identity API* does not necessarily mean any other
# OpenStack API is version 2.0. For example, your cloud provider may implement
# Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is
# only for the Identity API served through keystone.
export OS_AUTH_URL=http://128.136.179.2:5000/v2.0
# With the addition of Keystone we have standardized on the term **tenant**
# as the entity that owns the resources.
export OS_TENANT_ID=trystack_tenant_id
export OS_TENANT_NAME="trystack_tenant_name"
export OS_PROJECT_NAME="trystack_project_name"
# In addition to the owning entity (tenant), OpenStack stores the entity
# performing the action as the **user**.
export OS_USERNAME="same_as_trystack_tenant_name"
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
# If your configuration has multiple regions, we set that information here.
# OS_REGION_NAME is optional and only valid in certain environments.
export OS_REGION_NAME="RegionOne"
# Don't leave a blank variable, unset it if it was empty
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
You need to source openstack-config.sh before running packer build.
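For example, in the same shell session that you run the build from (assuming the template above is saved as template.json):
$ source openstack-config.sh
$ packer build template.json
Sourcing the script exports OS_AUTH_URL and the other OS_* variables into the environment where the openstack builder looks for them; with them unset the auth URL is empty, which is most likely why you see the "unsupported protocol scheme" error.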
I am trying to invoke a command on provisioning via SaltStack. If the command fails, the state fails, and I don't want that (the command's retcode doesn't matter).
Currently I have the following workaround:
Run something:
  cmd.run:
    - name: command_which_can_fail || true
Is there any way to make such a state ignore the retcode using Salt features? Or maybe I can exclude this state from the logs?
Use check_cmd:
fails:
  cmd.run:
    - name: /bin/false

succeeds:
  cmd.run:
    - name: /bin/false
    - check_cmd:
      - /bin/true
Output:
local:
----------
          ID: fails
    Function: cmd.run
        Name: /bin/false
      Result: False
     Comment: Command "/bin/false" run
     Started: 16:04:40.189840
    Duration: 7.347 ms
     Changes:
              ----------
              pid:
                  4021
              retcode:
                  1
              stderr:
              stdout:
----------
          ID: succeeds
    Function: cmd.run
        Name: /bin/false
      Result: True
     Comment: check_cmd determined the state succeeded
     Started: 16:04:40.197672
    Duration: 13.293 ms
     Changes:
              ----------
              pid:
                  4022
              retcode:
                  1
              stderr:
              stdout:

Summary
------------
Succeeded: 1 (changed=2)
Failed:    1
------------
Total states run: 2
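Applied to the state from the question, the same pattern might look like this (a sketch; command_which_can_fail stands in for the real command):
Run something:
  cmd.run:
    - name: command_which_can_fail
    - check_cmd:
      - /bin/true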
If you don't care what the result of the command is, you can use:
Run something:
  cmd.run:
    - name: command_which_can_fail; exit 0
This was tested in Salt 2017.7.0 but would probably work in earlier versions.