Ansible-Vault conf file not being decrypted when running playbook - encryption

I'm working on an Ansible playbook to sign certificates. The playbook uses a conf file that contains an API key, and to hide the key I encrypted the file with ansible-vault. The problem is that when I run the playbook, it errors out with stdout saying the file contains no section headers:
fatal: [cxlabs-alln01-sslapi]: FAILED! => {
    "changed": true,
    "cmd": [
        "/usr/local/bin/sslapi_cli",
        "sign",
        "-csr",
        "/etc/sslapi_cli/xxxxxxxx.cisco.com.csr",
        "-out",
        "/etc/sslapi_cli/xxxxxxxx.cisco.com.cer",
        "-confFile",
        "/etc/sslapi_cli/sslapi_cli.conf",
        "-validityPeriod",
        "one_year"
    ],
    "delta": "0:00:00.209337",
    "end": "2022-04-04 15:47:37.772535",
    "invocation": {
        "module_args": {
            "_raw_params": "/usr/local/bin/sslapi_cli sign -csr /etc/sslapi_cli/xxxxxxxx.cisco.com.csr -out /etc/sslapi_cli/xxxxxxxx.cisco.com.cer -confFile /etc/sslapi_cli/sslapi_cli.conf -validityPeriod one_year",
            "_uses_shell": false,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "stdin_add_newline": true,
            "strip_empty_ends": true,
            "warn": true
        }
    },
    "msg": "non-zero return code",
    "rc": 2,
    "start": "2022-04-04 15:47:37.563198",
    "stderr": "File contains no section headers.\nfile: '/etc/sslapi_cli/sslapi_cli.conf', line: 1\n'$ANSIBLE_VAULT;1.1;AES256\\n'",
    "stderr_lines": [
        "File contains no section headers.",
        "file: '/etc/sslapi_cli/sslapi_cli.conf', line: 1",
        "'$ANSIBLE_VAULT;1.1;AES256\\n'"
    ],
    "stdout": "File contains no section headers.\nfile: '/etc/sslapi_cli/sslapi_cli.conf', line: 1\n'$ANSIBLE_VAULT;1.1;AES256\\n'",
    "stdout_lines": [
        "File contains no section headers.",
        "file: '/etc/sslapi_cli/sslapi_cli.conf', line: 1",
        "'$ANSIBLE_VAULT;1.1;AES256\\n'"
    ]
}
I'm not sure what this means, but I think it's because sslapi_cli.conf is not being decrypted when the playbook reads it.

Ansible Vault's purpose is not to encrypt files for other programs to read; it encrypts data that Ansible itself decrypts, which in practice means variables. When you encrypt a file with ansible-vault, the assumption is that the file is YAML and will be loaded by Ansible as variables; a command running on the target host just sees the raw $ANSIBLE_VAULT payload, which is exactly what your stderr shows.
You need to define the API key in an encrypted vars file, or encrypt it inline (https://docs.ansible.com/ansible/latest/user_guide/vault.html#creating-encrypted-variables):
# encrypted_file.yml
my_api_key: foo

# variable encrypted inline:
my_api_key: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  62313365396662343061393464336163383764373764613633653634306231386433626436623361
  6134333665353966363534333632666535333761666131620a663537646436643839616531643561
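For reference, both forms can be created with the ansible-vault CLI, and the playbook then needs the vault password at run time. A rough sketch (file names are illustrative; you can swap --ask-vault-pass for --vault-password-file):
ansible-vault create encrypted_file.yml                   # opens an editor; prompts for a vault password
ansible-vault encrypt_string 'foo' --name 'my_api_key'    # prints an inline !vault value to paste into a vars file
ansible-playbook playbook.yml --ask-vault-pass            # run the play so the vaulted value can be decrypted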
Then you need to create a template of your sslapi_cli.conf file with something like this:
sslapi_cli.conf.j2
ssl_api_key: {{ my_api_key }}
Then, before you execute your command task, run a template (https://docs.ansible.com/ansible/latest/collections/ansible/builtin/template_module.html) task to generate the sslapi_cli.conf file with the real API key, as sketched below.
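A minimal sketch of such a task (paths follow the question; it assumes my_api_key is available from the vaulted vars file or the inline value above):
- name: Render sslapi_cli.conf with the decrypted API key
  template:
    src: sslapi_cli.conf.j2                  # the Jinja2 template shown above
    dest: /etc/sslapi_cli/sslapi_cli.conf
    mode: "0600"                             # the rendered file contains the plain-text key
Point your sign command at the rendered file afterwards; since the key ends up on disk in clear text, keep the permissions tight.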

Related

Ansible showing task changed but the task has condition (creates: ) and does not actually execute

My ansible-playbook runs a long-running task with the async tag and also uses a "creates:" condition, so it is run only once on the server. When I was writing the playbook yesterday, I am pretty sure the task was skipped when the log file set in the "creates:" tag existed.
It shows changed now though, every time I run it.
I am confused, as I do not think I changed anything, and I'd like my registered variable to correctly show unchanged when the condition is true.
Output of ansible-playbook (the debug section shows the task as changed: true):
TASK [singleserver : Install Assure1 SingleServer role] *********************************************************************************************************************************
changed: [crassure1]
TASK [singleserver : Debug] *************************************************************************************************************************************************************
ok: [crassure1] => {
    "msg": {
        "ansible_job_id": "637594935242.28556",
        "changed": true,
        "failed": false,
        "finished": 0,
        "results_file": "/root/.ansible_async/637594935242.28556",
        "started": 1
    }
}
But if I check the actual results file on the target machine, it correctly resolved the condition and did not actually execute the shell script, so the task should be unchanged (it shows a message that the task was skipped because the log exists):
[root@crassure1 assure1]# cat "/root/.ansible_async/637594935242.28556"
{"invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": true, "strip_empty_ends": true, "_raw_params": "/opt/install/install_command.sh", "removes": null, "argv": null, "creates": "/opt/assure1/logs/SetupWizard.log", "chdir": null, "stdin_add_newline": true, "stdin": null}}, "cmd": "/opt/install/install_command.sh", "changed": false, "rc": 0, "stdout": "skipped, since /opt/assure1/logs/SetupWizard.log exists"}
[root@crassure1 assure1]# Connection reset by 172.24.36.123 port 22
My playbook section looks like this:
- name: Install Assure1 SingleServer role
  shell:
    #cmd: "/opt/assure1/bin/SetupWizard -a --Depot /opt/install/:a1-local --First --WebFQDN crassure1.tspdata.local --Roles All"
    cmd: "/opt/install/install_command.sh"
  async: 7200
  poll: 0
  register: Assure1InstallWait
  args:
    creates: /opt/assure1/logs/SetupWizard.log

- name: Debug
  debug:
    msg: "{{ Assure1InstallWait }}"

- name: Check on Installation status every 15 minutes
  async_status:
    jid: "{{ Assure1InstallWait.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 30
  delay: 900
  when: Assure1InstallWait is changed
Is there something I am missing, or is that some kind of a bug?
I am limited by the Ansible version available in the configured trusted repo, so I am using Ansible 2.9.25.
Q: "The module shell shows changed every time I run it"
A: In async mode the task can't be skipped immediately. First, the shell module must find out whether the file /opt/assure1/logs/SetupWizard.log exists on the remote host. Only then, if the file exists, will it decide to skip the command. But you run the task asynchronously, so Ansible starts the module and returns without waiting for it to complete. That's what the registered variable Assure1InstallWait says: the task started but didn't finish yet.
"msg": {
"ansible_job_id": "637594935242.28556",
"changed": true,
"failed": false,
"finished": 0,
"results_file": "/root/.ansible_async/637594935242.28556",
"started": 1
}
Marking such a task changed is correct, I think, because at that point the execution on the remote host is still going on.
Print the registered result of the async_status module instead (what you printed was the async results file on the remote host). You'll see that the command was skipped because the file exists; here the attribute changed is set to false, because now we know the command didn't execute (a minimal debug task is sketched after the output below):
job_result:
  ...
  attempts: 1
  changed: false
  failed: false
  finished: 1
  msg: Did not run command since '/tmp/SetupWizard.log' exists
  rc: 0
  ...
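A minimal sketch of that debug task (register name as in the question's playbook):
- name: Show the real outcome of the install command
  debug:
    msg: "{{ job_result }}"   # the async_status result carries the skipped/changed information, not Assure1InstallWait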

How do I get just the STDOUT of a salt state?

I'm learning salt stack right now and I was wondering if there was a way to get the stdout of a salt state and put it into a document and then send it to the master. Or is there a better way to do this?
To achieve this, we have to save the result of the script execution in a variable. It contains a hash whose keys show up under changes:. The stdout entry of this variable can then be written to a file.
{% set script_res = salt['cmd.script']('salt://test.sh') %}

create-stdout-file:
  file.managed:
    - name: /tmp/script-stdout.txt
    - contents: {{ script_res.stdout }}
The output already goes to the master. It would be better to output JSON and query it down to the data you want in your document on the master, such as the following:
Normal output
$ sudo salt salt00\* state.apply tests.test3
salt00.wolfnet.bad4.us:
----------
          ID: test_run
    Function: cmd.run
        Name: echo test
      Result: True
     Comment: Command "echo test" run
     Started: 10:39:51.103057
    Duration: 18.281 ms
     Changes:
              ----------
              pid:
                  8661
              retcode:
                  0
              stderr:
              stdout:
                  test

Summary for salt00.wolfnet.bad4.us
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:  18.281 ms
json output
$ sudo salt salt00\* state.apply tests.test3 --out json
{
    "salt00.wolfnet.bad4.us": {
        "cmd_|-test_run_|-echo test_|-run": {
            "name": "echo test",
            "changes": {
                "pid": 9057,
                "retcode": 0,
                "stdout": "test",
                "stderr": ""
            },
            "result": true,
            "comment": "Command \"echo test\" run",
            "__sls__": "tests.test3",
            "__run_num__": 0,
            "start_time": "10:40:55.582273",
            "duration": 19.374,
            "__id__": "test_run"
        }
    }
}
json parsed down with jq to just the stdout
$ sudo salt salt00\* state.apply tests.test3 --out=json | jq '.|.[]|."cmd_|-test_run_|-echo test_|-run"|.changes.stdout'
"test"
Also, for the record, it is considered bad practice to put code that changes the system into Jinja. Jinja always runs when a template is rendered and there is no way to control whether that happens, so even a test=True run will still execute the Jinja code that makes changes, which could be very harmful to your systems.

Re-create openstack artifacts from previous command output?

Is there an easy way to convert OpenStack show command outputs into openstack commands?
The goal is to rebuild an OpenStack environment after a complete wipe.
(For example: openstack network show myNet > out.txt,
then somehow generate the OpenStack CLI command with the appropriate fields to re-create this exact same network, based on out.txt?)
Thanks!
You can write the output of the show commands as a JSON-formatted string into a file, so you can easily read the information with a Python script to create and execute your desired commands (a rough sketch of such a script follows the example output below).
To print the output of an openstack command as JSON, add -f json at the end of the command.
Example:
openstack server show cirros -f json
{
  "OS-DCF:diskConfig": "MANUAL",
  "OS-EXT-AZ:availability_zone": "nova",
  "OS-EXT-SRV-ATTR:host": "test-system",
  "OS-EXT-SRV-ATTR:hypervisor_hostname": "test-system",
  "OS-EXT-SRV-ATTR:instance_name": "instance-00000001",
  "OS-EXT-STS:power_state": "Shutdown",
  "OS-EXT-STS:task_state": null,
  "OS-EXT-STS:vm_state": "stopped",
  "OS-SRV-USG:launched_at": "2020-07-22T08:41:06.000000",
  "OS-SRV-USG:terminated_at": null,
  "accessIPv4": "",
  "accessIPv6": "",
  "addresses": "test-network=192.168.62.207",
  "config_drive": "",
  "created": "2020-07-22T08:40:46Z",
  "flavor": "f1 (273a2179-ac85-4c54-a40a-2c0121b338ff)",
  "id": "6d302fcf-4de3-45a5-93c0-eb95650e5952",
  "image": "cirros (86dded1f-8e0f-4342-906e-8ff9fbd854e2)",
  "name": "cirros",
  "project_id": "cbba4b1f3cb4460ca63e8ddb87c9b5fb",
  "properties": "",
  "security_groups": "name='default'",
  "status": "SHUTOFF",
  "updated": "2020-08-17T13:26:55Z",
  "user_id": "b6505d6801e84fb98d77d2461f9719c2",
  "volumes_attached": ""
}
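A rough Python sketch of the idea (not a complete tool): it reads a saved show dump and assembles a re-create command from a few fields. The keys used here match the server output above; the flags you actually need depend on the resource type.
import json
import shlex

# e.g. openstack server show cirros -f json > server.json
with open("server.json") as f:
    srv = json.load(f)

# "flavor" and "image" look like "f1 (273a2179-...)" in this output; keep only the name part.
flavor = srv["flavor"].split(" ")[0]
image = srv["image"].split(" ")[0]

cmd = ["openstack", "server", "create", srv["name"],
       "--flavor", flavor, "--image", image]
print(" ".join(shlex.quote(part) for part in cmd))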

Upload Image on TryStack Server using Packer tool

I am trying to create and upload an Ubuntu-based image to the TryStack server using the Packer tool. I am doing this on Windows. I have created a sample template and a script file for setting environment variables, and provisioning is done with Chef. But when I run the packer build command I get:
1 error(s) occurred:
* Get /: unsupported protocol scheme ""
What am I missing here?
Here are the template and script files:
template.json
{
  "builders": [
    {
      "type": "openstack",
      "ssh_username": "root",
      "image_name": "sensor-cloud",
      "source_image": "66a14661-2dfb-4370-b6d4-87aaefcffdce",
      "flavor": "3",
      "availability_zone": "nova",
      "security_groups": ["mySecurityGroup"]
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "sensorCloudCookbook.zip",
      "destination": "/tmp/sensorCloudCookbook.zip"
    },
    {
      "type": "shell",
      "inline": [
        "curl -L https://www.opscode.com/chef/install.sh | bash"
      ],
      "execute_command": "chmod +x {{ .Path }}; sudo -E {{ .Path }}"
    },
    {
      "type": "shell",
      "inline": [
        "unzip /tmp/sensorCloudCookbook.zip -d /tmp/sensorCloudCookbook"
      ],
      "execute_command": "chmod +x {{ .Path }}; sudo -E {{ .Path }}"
    },
    {
      "type": "shell",
      "inline": [
        "chef-solo -c /tmp/sensorCloudCookbook/solo.rb -l info -L /tmp/sensorCloudLogs.txt"
      ],
      "execute_command": "chmod +x {{ .Path }}; sudo -E {{ .Path }}"
    }
  ]
}
openstack-config.sh
#!/bin/bash
# To use an OpenStack cloud you need to authenticate against the Identity
# service named keystone, which returns a **Token** and **Service Catalog**.
# The catalog contains the endpoints for all services the user/tenant has
# access to - such as Compute, Image Service, Identity, Object Storage, Block
# Storage, and Networking (code-named nova, glance, keystone, swift,
# cinder, and neutron).
#
# *NOTE*: Using the 2.0 *Identity API* does not necessarily mean any other
# OpenStack API is version 2.0. For example, your cloud provider may implement
# Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is
# only for the Identity API served through keystone.
export OS_AUTH_URL=http://128.136.179.2:5000/v2.0
# With the addition of Keystone we have standardized on the term **tenant**
# as the entity that owns the resources.
export OS_TENANT_ID=trystack_tenant_id
export OS_TENANT_NAME="trystack_tenant_name"
export OS_PROJECT_NAME="trystack_project_name"
# In addition to the owning entity (tenant), OpenStack stores the entity
# performing the action as the **user**.
export OS_USERNAME="same_as_trystack_tenant_name"
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
# If your configuration has multiple regions, we set that information here.
# OS_REGION_NAME is optional and only valid in certain environments.
export OS_REGION_NAME="RegionOne"
# Don't leave a blank variable, unset it if it was empty
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
You need to source openstack-config.sh before running packer build, so the OS_* variables (in particular OS_AUTH_URL) are set in the environment; an empty auth URL is what produces the Get /: unsupported protocol scheme "" error.
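A sketch of that workflow (file names as in the question):
source openstack-config.sh    # exports OS_AUTH_URL, OS_TENANT_*, OS_USERNAME, OS_PASSWORD into the current shell
packer build template.json    # the openstack builder picks the credentials up from the environment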

Chef::Exceptions::ChecksumMismatch when installing nginx-1.7.8 from source

I get the following error when running vagrant up --provision to set up my development environment with vagrant...
==> default: [2014-12-08T20:33:51+00:00] ERROR: remote_file[http://nginx.org/download/nginx-1.7.8.tar.gz] (nginx::source line 58) had an error: Chef::Exceptions::ChecksumMismatch: Checksum on resource (0510af) does not match checksum on content (12f75e)
My chef JSON has the following for nginx:
"nginx": {
"version": "1.7.8",
"user": "deploy",
"init_style": "init",
"modules": [
"http_stub_status_module",
"http_ssl_module",
"http_gzip_static_module"
],
"passenger": {
"version": "4.0.53",
"gem_binary": "/home/vagrant/.rbenv/shims/gem"
},
"configure_flags": [
"--add-module=/home/vagrant/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/passenger-3.0.18/ext/nginx"
],
"gzip_types": [
"text/plain",
"text/html",
"text/css",
"text/xml",
"text/javascript",
"application/json",
"application/x-javascript",
"application/xml",
"application/xml+rss"
]}
and Cheffile has the following cookbook:
cookbook 'nginx'
How do I resolve the Checksum mismatch?
The nginx cookbook requires you to edit the checksum attribute when using another version of nginx. The remote_file resource that is causing you an error is:
remote_file nginx_url do
  source nginx_url
  checksum node['nginx']['source']['checksum']
  path src_filepath
  backup false
end
You need to update the checksum value. Specifically node['nginx']['source']['checksum'].
So in your JSON, you would add this line:
"source": {"checksum": "insert checksum here" }
Edit: As pointed out in the comments, the checksum is SHA256. You can generate the checksum of the file like so:
shasum -a 256 nginx-1.7.8.tar.gz
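For clarity, a sketch of where that attribute sits in the node JSON from the question (the placeholder stands for the digest shasum prints; the other nginx attributes stay as they are):
"nginx": {
  "version": "1.7.8",
  "source": {
    "checksum": "<sha256 digest from shasum>"
  }
}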
