I'm trying to use the Euca 5 Ansible installer to set up a single server, "exp-euca.lan.com", running all services, with two node controllers, "exp-enc-[01:02].lan.com", running VPCMIDO. The install goes okay and I end up with a single server running all Euca services, including being able to run instances, but the Ansible scripts never take action to install and configure my node servers. I think I'm misunderstanding the inventory format. What could be wrong with the following? I don't want my main Euca server to run instances, and I do want the two node controllers installed and running instances.
---
all:
  hosts:
    exp-euca.lan.com:
    exp-enc-[01:02].lan.com:
  vars:
    vpcmido_public_ip_range: "192.168.100.5-192.168.100.254"
    vpcmido_public_ip_cidr: "192.168.100.1/24"
    cloud_system_dns_dnsdomain: "cloud.lan.com"
    cloud_public_port: 443
    eucalyptus_console_cloud_deploy: yes
    cloud_service_image_rpm: no
    cloud_properties:
      services.imaging.worker.ntp_server: "x.x.x.x"
      services.loadbalancing.worker.ntp_server: "x.x.x.x"
  children:
    cloud:
      hosts:
        exp-euca.lan.com:
    console:
      hosts:
        exp-euca.lan.com:
    zone:
      hosts:
        exp-euca.lan.com:
    nodes:
      hosts:
        exp-enc-[01:02].lan.com:
All of the plays related to nodes follow a pattern similar to this, where they succeed and acknowledge the main server exp-euca but then skip the nodes.
2021-01-14 08:15:23,572 p=57513 u=root n=ansible | TASK [zone assignments default] ***********************************************************************************************************************
2021-01-14 08:15:23,596 p=57513 u=root n=ansible | ok: [exp-euca.lan.com] => (item=[0, u'exp-euca.lan.com']) => {"ansible_facts": {"host_zone_key": "1"}, "ansible_loop_var": "item", "changed": false, "item": [0, "exp-euca.lan.com"]}
2021-01-14 08:15:23,604 p=57513 u=root n=ansible | skipping: [exp-enc-01.lan.com] => (item=[0, u'exp-euca.lan.com']) => {"ansible_loop_var": "item", "changed": false, "item": [0, "exp-euca.lan.com"], "skip_reason": "Conditional result was False"}
It should be node, not nodes, i.e.:
node:
  hosts:
    exp-enc-[01:02].lan.com:
The documentation for this is currently incorrect.
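Applied to the inventory above, the children section would then read (only the last group name changes):
children:
  cloud:
    hosts:
      exp-euca.lan.com:
  console:
    hosts:
      exp-euca.lan.com:
  zone:
    hosts:
      exp-euca.lan.com:
  node:
    hosts:
      exp-enc-[01:02].lan.com: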
I am trying to add a network to a podman container after it has already been created.
These are the steps I took:
Create and start a container:
podman run -it --name "container" --network=mgmtnet img_v1 /bin/bash
The container starts.
I now stop the container:
podman stop container
I edit the podman config.json file at:
/var/lib/containers/storage/overlay-containers/60dfc044f28b0b60f0490f351f44b3647531c245d1348084944feaea783a6ad5/userdata/config.json
I add an extra netns path in the namespaces section.
"namespaces": [
{
"type": "pid"
},
{
"type": "network",
>> "path": "/var/run/netns/cni-8231c733-6932-ed54-4dee-92477014da6e",
>>[+] "path": "/var/run/netns/test_net"
},
{
"type": "ipc"
},
{
"type": "uts"
},
{
"type": "mount"
}
],
I start the container:
podman start container
I expected the change (an extra interface) to show up in the container, but it doesn't. Also, checking config.json afterwards, I find that my changes are gone.
So starting the container removes the changes from the config. How can I overcome this?
Extra info:
[root@bng-ix-svr1 ~]# podman info
host:
  BuildahVersion: 1.9.0
  Conmon:
    package: podman-1.4.2-5.module+el8.1.0+4240+893c1ab8.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.1-dev, commit: unknown'
  Distribution:
    distribution: '"rhel"'
    version: "8.1"
  MemFree: 253316108288
  MemTotal: 270097387520
  OCIRuntime:
    package: runc-1.0.0-60.rc8.module+el8.1.0+4081+b29780af.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 5368705024
  SwapTotal: 5368705024
  arch: amd64
  cpus: 16
  hostname: bng-ix-svr1.englab.juniper.net
  kernel: 4.18.0-147.el8.x86_64
  os: linux
  rootless: false
  uptime: 408h 2m 41.08s (Approximately 17.00 days)
registries:
  blocked: null
  insecure: null
  search:
  - registry.redhat.io
  - registry.access.redhat.com
  - quay.io
  - docker.io
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 4
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 2
  RunRoot: /var/run/containers/storage
  VolumePath: /var/lib/containers/storage/volumes
That is correct. The config.json file is generated by Podman to instruct the OCI runtime how to run the container.
Any changes made directly to that file will be lost the next time you restart the container. The config.json file is used by the OCI runtime to create the container and is not consulted again after that.
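As a side note (not available on the Podman 1.4.2 shown in your podman info, so treat this as an assumption about newer versions): Podman 2.2+ has a supported command to attach an additional network to an existing rootful container, which avoids editing config.json entirely. A minimal sketch, reusing the test_net name and the container name from above:
# Create the additional network (skip if it already exists)
podman network create test_net
# Attach it to the existing container (Podman 2.2+, rootful)
podman network connect test_net container
# Start the container and check for the extra interface
podman start container
podman exec container ip addr   # assumes iproute2 is present in the image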
Some time ago, somebody suggested using dynamic inventories to generate a different hosts file depending on the location and other variables from a template, but I ran into a pretty big issue:
After I create the inventory from a template, I need to refresh it (I do it using meta: refresh_inventory) for Ansible to execute tasks on newly added hosts. However, if a host was not in the initial hosts file, Ansible does not execute tasks on it. On the other hand, if a host is absent from the newly formed file, Ansible omits that host as it should, so refresh_inventory only does half the job. Is there any way to get around this issue?
E.g. I have one task to generate the hosts file from a template, then refresh the inventory, then run a simple task on all hosts, like showing a message:
tasks:
  - name: Creating inventory template
    local_action:
      module: template
      src: hosts.j2
      dest: "/opt/ansible/inventories/{{location}}/hosts"
      mode: 0777
      force: yes
      backup: yes
    ignore_errors: yes
    run_once: true
  - name: "Refreshing hosts file for {{location}} location"
    meta: refresh_inventory
  - name: Force refresh of host errors
    meta: clear_host_errors
  - name: Show message
    debug: msg="This works for this host"
If the initial hosts file has hosts A, B, C, D, and the newly created inventory has B, C, D, then all is good:
ok: [B] => {
    "msg": "This works for this host"
}
ok: [C] => {
    "msg": "This works for this host"
}
ok: [D] => {
    "msg": "This works for this host"
}
However, if the newly formed hosts file has hosts B, C, D, E (E not being present in the initial hosts file), then the result is again:
ok: [B] => {
    "msg": "This works for this host"
}
ok: [C] => {
    "msg": "This works for this host"
}
ok: [D] => {
    "msg": "This works for this host"
}
The task for E is missing. Now if I replay the playbook, this time only adding another host, say F, the result looks like:
ok: [B] => {
    "msg": "This works for this host"
}
ok: [C] => {
    "msg": "This works for this host"
}
ok: [D] => {
    "msg": "This works for this host"
}
ok: [E] => {
    "msg": "This works for this host"
}
But there is no F, even though it was already added to the inventory file before the refresh.
So, any ideas?
Quoting from Basics:
For each play in a playbook, you get to choose which machines in your infrastructure to target ... The hosts line is a list of one or more groups or host patterns ...
For example, it is possible to create the inventory in the 1st play and use it in the 2nd play. The playbook below
- hosts: localhost
  tasks:
    - template:
        src: hosts.j2
        dest: "{{ playbook_dir }}/hosts"
    - meta: refresh_inventory

- hosts: test
  tasks:
    - debug:
        var: inventory_hostname
with the template (fit it to your needs)
$ cat hosts.j2
[test]
test_01
test_02
test_03
[test:vars]
ansible_connection=ssh
ansible_user=admin
ansible_become=yes
ansible_become_user=root
ansible_become_method=sudo
ansible_python_interpreter=/usr/local/bin/python3.6
ansible_perl_interpreter=/usr/local/bin/perl
gives
PLAY [localhost] ****************************************************************************
TASK [Gathering Facts] **********************************************************************
ok: [localhost]
TASK [template] *****************************************************************************
changed: [localhost]
PLAY [test] *********************************************************************************
TASK [Gathering Facts] **********************************************************************
ok: [test_02]
ok: [test_01]
ok: [test_03]
TASK [debug] ********************************************************************************
ok: [test_01] => {
    "inventory_hostname": "test_01"
}
ok: [test_02] => {
    "inventory_hostname": "test_02"
}
ok: [test_03] => {
    "inventory_hostname": "test_03"
}
PLAY RECAP **********************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
test_01 : ok=2 changed=0 unreachable=0 failed=0
test_02 : ok=2 changed=0 unreachable=0 failed=0
test_03 : ok=2 changed=0 unreachable=0 failed=0
Even though the answer above is correct, I think this deserves an explanation of how refresh_inventory and also add_host behave, as I've seen a few other questions regarding this topic.
It does not matter whether you use a static or a dynamic inventory; the behavior is the same. The only thing specific to dynamic inventories that can change the behavior is caching. The following applies with caching disabled, or with the cache refreshed after adding the new host.
Both refresh_inventory and add_host let you execute tasks on the new hosts only in subsequent plays. However, they do let you access the hostvars of the added hosts already in the current play. This behavior is only partially and very briefly mentioned in the add_host documentation and is easy to miss:
Use variables to create new hosts and groups in inventory for use in later plays of the same playbook.
Consider the following inventory, called hosts_ini-main.ini:
localhost testvar='testmain'
Now you can write a playbook that observes and tests the behavior of refresh_inventory. It overwrites the hosts_ini-main.ini inventory file (used by the playbook) with the contents of a second file, hosts_ini-second.ini:
localhost testvar='testmain'
127.0.0.2 testvar='test2'
The playbook prints hostvars before the inventory is changed, then changes the inventory, refreshes it, prints hostvars again, and finally tries to execute a task only on the newly added host.
The second play also tries to execute a task only on the added host.
---
- hosts: all
  connection: local
  become: false
  gather_facts: false
  tasks:
    - name: Print hostvars
      debug:
        var: hostvars
    - name: Print var for first host
      debug:
        var: testvar
      when: hostvars[inventory_hostname]['testvar'] == "testmain"
    - name: Copy alternate hosts file to main hosts file
      copy:
        src: "hosts_ini-second.ini"
        dest: "hosts_ini-main.ini"
    - name: Refresh inventory using meta module
      meta: refresh_inventory
    - name: Print hostvars for the second time in the first play
      debug:
        var: hostvars
    - name: Print var for added host
      debug:
        var: testvar # This will not execute
      when: hostvars[inventory_hostname]['testvar'] == "test2"

# New play
- hosts: all
  connection: local
  become: false
  gather_facts: false
  tasks:
    - name: Print hostvars in a different play
      debug:
        var: testvar
      when: hostvars[inventory_hostname]['testvar'] == "test2"
Here is the execution (I've truncated parts of the output to make it more readable).
PLAY [all] *******************************************************************************
TASK [Print hostvars] ********************************************************************
ok: [localhost] => {
    "hostvars": {
        "localhost": {
            "ansible_check_mode": false,
            "ansible_config_file": "/home/john/dev-ansible/ansible.cfg",
            "ansible_diff_mode": false,
            "ansible_facts": {},
            ...
            "testvar": "testmain"
        }
    }
}
TASK [Print var for first host] ***********************************************************
ok: [localhost] => {
    "testvar": "testmain"
}
TASK [Copy alternate hosts file to main hosts file] ***************************************
changed: [localhost]
TASK [Refresh inventory using meta module] ************************************************
TASK [Print hostvars for the second time in the first play] *******************************
ok: [localhost] => {
    "hostvars": {
        "127.0.0.2": {
            "ansible_check_mode": false,
            "ansible_config_file": "/home/john/dev-ansible/ansible.cfg",
            "ansible_diff_mode": false,
            "ansible_facts": {},
            ...
            "testvar": "test2"
        },
        "localhost": {
            "ansible_check_mode": false,
            "ansible_config_file": "/home/john/dev-ansible/ansible.cfg",
            "ansible_diff_mode": false,
            "ansible_facts": {
                "discovered_interpreter_python": "/usr/bin/python3"
            },
            ...
            "testvar": "testmain"
        }
    }
}
TASK [Print var for added host] ***********************************************************
skipping: [localhost]
PLAY [all] ********************************************************************************
TASK [Print hostvars in a different play] *************************************************
skipping: [localhost]
ok: [127.0.0.2] => {
    "testvar": "test2"
}
PLAY RECAP *******************************************************************************
127.0.0.2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
localhost : ok=4 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
As can be seen, hostvars contains information about the newly added host even in the first play, but Ansible is not able to execute a task on that host there. Once a new play starts, the task executes on the new host without problems.
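To illustrate the same rule with add_host, here is a minimal sketch (the host name new_host and group dynamic are made up for the example):
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Add a host to the in-memory inventory
      add_host:
        name: new_host                # hypothetical host name
        groups: dynamic
        ansible_connection: local     # keeps the sketch runnable without SSH
        testvar: added

    - name: Its hostvars are already visible in the current play
      debug:
        var: hostvars['new_host']['testvar']

# Tasks can target the new host only from the next play onwards
- hosts: dynamic
  gather_facts: false
  tasks:
    - debug:
        var: testvar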
When I try to run a task asynchronously as another user using become in an Ansible playbook, I get a "job not found" error. Can someone suggest how I can successfully check the async job status?
I am using Ansible version 2.7.
I read in some articles a suggestion to run the async_status task with the same become user as the async task in order to read the job status.
I tried that solution, but I still get the same "job not found" error.
- hosts: localhost
  tasks:
    - shell: startInstance.sh
      register: start_task
      async: 180
      poll: 0
      become: yes
      become_user: venu
    - async_status:
        jid: "{{start_task.ansible_job_id}}"
      register: start_status
      until: start_status.finished
      retries: 30
      become: yes
      become_user: venu
Expected Result:
I should be able to fire and forget the job.
Actual Result:
{"ansible_job_id": "386361757265.15925428", "changed": false, "finished": 1, "msg": "could not find job", "started": 1}
This is really weird. I'm launching some Windows 2012 servers into EC2 using salt-cloud, and even though I'm using this profile:
ec2_private_win_app1:
provider: company-nonpod-us-east-1
image: ami-xxxxxx
size: c4.large
network_interfaces:
- DeviceIndex: 0
PrivateIpAddresses:
- Primary: True
#auto assign public ip (not EIP)
AssociatePublicIpAddress: False
SubnetId: subnet-A
SecurityGroupId: sg-xxxxxx
#block_device_mappings:
# - DeviceName: /dev/sda1
# Ebs.VolumeSize: 120
# Ebs.VolumeType: gp2
# - DeviceName: /dev/sdf
# Ebs.VolumeSize: 100
# Ebs.VolumeType: gp2
tag: {'Engagement': '2112254190125', 'Owner': 'Tim', 'Name': 'production'}
And giving this command:
salt-cloud -p ec2_private_win_app1 USAB00005
The resulting server ends up in this subnet in AWS:
Subnet ID: subnet-B
I'm using salt-cloud version: salt-cloud 2016.9.0-410-gdedfd82
On a server running: CentOS Linux release 7.2.1511
Just what in the hell is going on?
It was a YAML formatting problem. I ran the YAML through an online YAML parser and was able to correct the issue:
ec2_private_win_app1:
  provider: company-nonpod-us-east-1
  image: ami-xxxxx
  size: c4.large
  ssh_username: root
  network_interfaces:
    - DeviceIndex: 0
      SubnetId: subnet-xxxxxx
      PrivateIpAddresses:
        - Primary: True
      #auto assign public ip (not EIP)
      AssociatePublicIpAddress: False
      SecurityGroupId:
        - sg-xxxxxx
Basically, I had to group the Subnet ID inside the network_interfaces section in order for the servers to land in the correct subnet.
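If you'd rather not paste configs into an online parser, a quick local check does the same job. A minimal sketch, assuming PyYAML is installed and the profile lives in a file named profiles.conf (adjust the filename to yours):
# Parse the file and dump the resulting structure; a formatting mistake
# either raises a parse error or shows keys nested at the wrong level.
python -c 'import sys, yaml, pprint; pprint.pprint(yaml.safe_load(open(sys.argv[1])))' profiles.conf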
I followed the steps in this blog post to get WordPress going:
https://blog.pivotal.io/pivotal-cloud-foundry/products/getting-started-with-wordpress-on-cloud-foundry
When I do a cf push, the app keeps crashing with the following lines in the error output:
2016-05-14T15:41:44.22-0700 [App/0] OUT total size is 2,574,495 speedup is 0.99
2016-05-14T15:41:44.24-0700 [App/0] ERR fusermount: entry for /home/vcap/app/htdocs/wp-content not found in /etc/mtab
2016-05-14T15:41:44.46-0700 [App/0] OUT 22:41:44 sshfs | fuse: mountpoint is not empty
2016-05-14T15:41:44.46-0700 [App/0] OUT 22:41:44 sshfs | fuse: if you are sure this is safe, use the 'nonempty' mount option
2016-05-14T15:41:44.64-0700 [DEA/86] ERR Instance (index 0) failed to start accepting connections
2016-05-14T15:41:44.68-0700 [API/1] OUT App instance exited with guid cf2ea899-3599-429d-a39d-97d0e99280e4 payload: {"cc_partition"=>"default", "droplet"=>"cf2ea899-3599-429d-a39d-97d0e99280e4", "version"=>"c94b7baf-4da4-44b5-9565-dc6945d4b3ce", "instance"=>"c4f512149613477baeb2988b50f472f2", "index"=>0, "reason"=>"CRASHED", "exit_status"=>1, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1463265704}
2016-05-14T15:41:44.68-0700 [API/1] OUT App instance exited with guid cf2ea899-3599-429d-a39d-97d0e99280e4 payload: {"cc_partition"=>"default", "droplet"=>"cf2ea899-3599-429d-a39d-97d0e99280e4", "version"=>"c94b7baf-4da4-44b5-9565-dc6945d4b3ce", "instance"=>"c4f512149613477baeb2988b50f472f2", "index"=>0, "reason"=>"CRASHED", "exit_status"=>1, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1463265704}
my manifest file:
cf-ex-wordpress$ cat manifest.yml
---
applications:
  - name: myapp
    memory: 128M
    path: .
    buildpack: https://github.com/cloudfoundry/php-buildpack
    host: near
    services:
      - mysql-db
    env:
      SSH_HOST: user@abc.com
      SSH_PATH: /home/user
      SSH_KEY_NAME: sshfs_rsa
      SSH_OPTS: '["cache=yes", "kernel_cache", "compression=no", "large_read"]'
vagrant@vagrant:~/Documents/shared/cf-ex-wordpress$
Please check your SSH mount, more details at https://github.com/dmikusa-pivotal/cf-ex-wordpress/issues
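Also, the sshfs lines in your log ("mountpoint is not empty" / "use the 'nonempty' mount option") hint at one experiment you could try while checking the mount: adding nonempty to SSH_OPTS in the manifest. A sketch of just that env entry, not a confirmed fix:
    env:
      # 'nonempty' is the fuse option the log message itself suggests
      SSH_OPTS: '["nonempty", "cache=yes", "kernel_cache", "compression=no", "large_read"]'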