I have the following working Salt state file, which: 1) downloads an image, 2) creates a container from that image, and 3) adds a NIC afterwards.
These states work using the following setup:
salt-ssh: version 3004
Python: 3.9.7
LXC (snapd): version 5.0.0
PyLXD: version 2.3.0
Linux ubuntu: aarch64
---
# Create Penguin Container
#---
get_focal:
  lxd_image.present:
    - name: 'focal'
    - source:
        type: simplestreams
        server: https://cloud-images.ubuntu.com/releases
        name: '20.04'

create_penguin:
  lxd_container.present:
    - name: penguin
    - profiles: ['default']
    - source: 'focal'
    - running: true
    - devices:
      ### I want to create NIC here. ###

add_nic_card:
  cmd.run:
    - name: |
        lxc config device add penguin eth0 nic nictype=bridged parent=br0
I need to combine states #2 and #3 so the NIC is created together with the container. This should be possible according to the official documentation, but I haven't been able to get the syntax right, and the error messages aren't helpful.
I've tried numerous variations of the following:
Variation 1:

create_penguin:
  lxd_container.present:
    - name: penguin
    - profiles: ['default']
    - source: 'focal'
    - running: true
    - devices:
        eth0: {
          type: "nic",
          nictype: "bridged",
          parent: "br0" }
Variation 2:

create_penguin:
  lxd_container.present:
    - name: penguin
    - profiles:
      - default
    - source: 'focal'
    - running: true
    - devices:
        eth0:
          type: nic
          nictype: bridged
          parent: br0
Variation 2 produces the following error:
----------
ID: create_penguin
Function: lxd_container.present
Name: penguin
Result: False
Comment: An exception occurred in this state: Traceback (most recent call last):
File "/var/tmp/.ubuntu_a31665_salt/pyall/salt/state.py", line 2179, in call
ret = self.states[cdata["full"]](
File "/var/tmp/.ubuntu_a31665_salt/pyall/salt/loader/lazy.py", line 149, in __call__
return self.loader.run(run_func, *args, **kwargs)
File "/var/tmp/.ubuntu_a31665_salt/pyall/salt/loader/lazy.py", line 1201, in run
return self._last_context.run(self._run_as, _func_or_method, *args, **kwargs)
File "/var/tmp/.ubuntu_a31665_salt/pyall/salt/loader/lazy.py", line 1216, in _run_as
return _func_or_method(*args, **kwargs)
File "/var/tmp/.ubuntu_a31665_salt/pyall/salt/loader/lazy.py", line 1249, in wrapper
return f(*args, **kwargs)
File "/var/tmp/.ubuntu_a31665_salt/pyall/salt/states/lxd_container.py", line 235, in present
__salt__["lxd.container_create"](
File "/var/tmp/.ubuntu_a31665_salt/pyall/salt/loader/lazy.py", line 149, in __call__
return self.loader.run(run_func, *args, **kwargs)
File "/var/tmp/.ubuntu_a31665_salt/pyall/salt/loader/lazy.py", line 1201, in run
return self._last_context.run(self._run_as, _func_or_method, *args, **kwargs)
File "/var/tmp/.ubuntu_a31665_salt/pyall/salt/loader/lazy.py", line 1216, in _run_as
return _func_or_method(*args, **kwargs)
File "/var/tmp/.ubuntu_a31665_salt/pyall/salt/modules/lxd.py", line 691, in container_create
container_device_add(name, dn, **dargs)
File "/var/tmp/.ubuntu_a31665_salt/pyall/salt/modules/lxd.py", line 1431, in container_device_add
return _set_property_dict_item(container, "devices", device_name, kwargs)
File "/var/tmp/.ubuntu_a31665_salt/pyall/salt/modules/lxd.py", line 3544, in _set_property_dict_item
raise SaltInvocationError("path must be given as parameter")
salt.exceptions.SaltInvocationError: path must be given as parameter
Started: 09:43:31.807609
Duration: 5147.141 ms
Changes:
----------
I took a cue from the lxd-formula: as described in the documentation, we need to supply a dict to devices: in YAML format, so the curly brackets { .. } are not required.
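As an aside, YAML flow style (with braces) and block style parse to the same mapping, so either spelling should hand Salt the same devices dict. A minimal illustration (the key names mirror the state above):

```yaml
# flow style and block style are equivalent YAML mappings
devices_flow:
  eth0: {type: nic, nictype: bridged, parent: br0}
devices_block:
  eth0:
    type: nic
    nictype: bridged
    parent: br0
```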
Following that, I found that the below syntax works fine:
create_penguin:
  lxd_container.present:
    - name: penguin
    - profiles: ['default']
    - source: 'focal'
    - running: true
    - devices:
        eth0:
          type: nic
          nictype: bridged
          parent: br0
        # another network interface
        eth1:
          type: nic
          nictype: bridged
          parent: br0
For reference, my setup:
Salt: 3003.3
Python: 3.8.10
dist: ubuntu 20.04 focal
lxc (apt): 4.0.9
pylxd: 2.3.1
And the output from Saltstack (I had removed eth1 from the state):
----------
ID: create_penguin
Function: lxd_container.present
Name: penguin
Result: True
Comment: 1 changes
Started: 12:38:58.041506
Duration: 27277.86 ms
Changes:
----------
devices:
----------
eth0:
Added device "eth0"
The following solution provides a satisfactory workaround to the original post. Instead of adding a separate stanza, I've created a profile named "natted" and added it to the container. While it works with my version of Salt, it's a little less explicit/transparent than the "official" answer in the documentation.
---
# Create & Start the Penguin Container
#---
penguin_create_start:
  lxd_container.present:
    - name: penguin
    - source: 'focal'
    - running: true
    - profiles: ['default', 'natted']
...
The profile looks something like this:
---
# equivalent to --> 'lxc config device add penguin eth0 nic nictype=bridged parent=br0'
#---
config: {}
description: Adds eth1 to 'lxdbr0' bridge
devices:
  eth1:
    nictype: bridged
    parent: lxdbr0
    type: nic
name: natted
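For completeness, the profile above can also be created imperatively; a sketch, assuming the lxdbr0 bridge already exists:

```shell
lxc profile create natted
lxc profile device add natted eth1 nic nictype=bridged parent=lxdbr0
lxc profile show natted
```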
I'm trying to use the Euca 5 Ansible installer to install a single server for all services ("exp-euca.lan.com") with two node controllers ("exp-enc-[01:02].lan.com") running VPCMIDO. The install goes okay, and I end up with a single server running all Euca services, including being able to run instances, but the Ansible scripts never take any action to install and configure my node servers. I think I'm misunderstanding the inventory format. What could be wrong with the following? I don't want my main Euca server to run instances, and I do want the two node controllers installed and running instances.
---
all:
  hosts:
    exp-euca.lan.com:
    exp-enc-[01:02].lan.com:
  vars:
    vpcmido_public_ip_range: "192.168.100.5-192.168.100.254"
    vpcmido_public_ip_cidr: "192.168.100.1/24"
    cloud_system_dns_dnsdomain: "cloud.lan.com"
    cloud_public_port: 443
    eucalyptus_console_cloud_deploy: yes
    cloud_service_image_rpm: no
    cloud_properties:
      services.imaging.worker.ntp_server: "x.x.x.x"
      services.loadbalancing.worker.ntp_server: "x.x.x.x"
  children:
    cloud:
      hosts:
        exp-euca.lan.com:
    console:
      hosts:
        exp-euca.lan.com:
    zone:
      hosts:
        exp-euca.lan.com:
    nodes:
      hosts:
        exp-enc-[01:02].lan.com:
All of the plays related to nodes follow a pattern similar to this: they succeed and acknowledge the main server exp-euca, but then skip the nodes.
2021-01-14 08:15:23,572 p=57513 u=root n=ansible | TASK [zone assignments default] ***********************************************************************************************************************
2021-01-14 08:15:23,596 p=57513 u=root n=ansible | ok: [exp-euca.lan.com] => (item=[0, u'exp-euca.lan.com']) => {"ansible_facts": {"host_zone_key": "1"}, "ansible_loop_var": "item", "changed": false, "item": [0, "exp-euca.lan.com"]}
2021-01-14 08:15:23,604 p=57513 u=root n=ansible | skipping: [exp-enc-01.lan.com] => (item=[0, u'exp-euca.lan.com']) => {"ansible_loop_var": "item", "changed": false, "item": [0, "exp-euca.lan.com"], "skip_reason": "Conditional result was False"}
It should be node, not nodes, i.e.:
node:
  hosts:
    exp-enc-[01:02].lan.com:
The documentation for this is currently incorrect.
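Unrelated to the group-name fix itself, it can help to sanity-check what the bracketed host pattern expands to. A rough Python sketch of the numeric range expansion Ansible performs on inventory hostnames (expand_range is a hypothetical illustration, not Ansible's actual code):

```python
import re

def expand_range(pattern):
    """Mimic Ansible's [start:end] numeric host-range expansion."""
    m = re.search(r"\[(\d+):(\d+)\]", pattern)
    if not m:
        return [pattern]  # no range syntax: a single literal host
    width = len(m.group(1))  # zero-padding width, e.g. '01' -> 2
    return [
        pattern[:m.start()] + str(i).zfill(width) + pattern[m.end():]
        for i in range(int(m.group(1)), int(m.group(2)) + 1)
    ]

print(expand_range("exp-enc-[01:02].lan.com"))
# ['exp-enc-01.lan.com', 'exp-enc-02.lan.com']
```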
I'd like to disable the built-in windows local administrator account if it is enabled.
As salt.states.user.present doesn't support disabling accounts, I'm using salt.modules.win_useradd.update. However, it disables the account even if it is already disabled.
I can't use unless or onlyif because they only use results parsed from shell commands.
Is there a way to use the boolean value of [user.info][account_disabled] in salt.modules.win_useradd.info's return data 'changes' dictionary as a requisite?
I'd like to do something like the following:
builtin_administrator:
  module.run:
    - user.info:
      - name: Administrator

disable_builtin_administrator:
  module.run:
    - user.update:
      - name: Administrator
      - account_disabled: true
    - require:
      - module: builtin_administrator
    - require:
      - module: builtin_administrator['changes']['user.info']['account_disabled']['false']
You can see the changes dictionary returned by win_useradd.info in the output:
local:
----------
ID: builtin_administrator
Function: module.run
Result: True
Comment: user.info: Built-in account for administering the computer/domain
Started: 15:59:56.440000
Duration: 15.0 ms
Changes:
----------
user.info:
----------
account_disabled:
True
account_locked:
False
active:
False
comment:
Built-in account for administering the computer/domain
description:
Built-in account for administering the computer/domain
disallow_change_password:
False
expiration_date:
2106-02-07 01:28:15
expired:
False
failed_logon_attempts:
0L
fullname:
gid:
groups:
- Administrators
home:
None
homedrive:
last_logon:
Never
logonscript:
name:
Administrator
passwd:
None
password_changed:
2019-10-09 09:22:00
password_never_expires:
True
profile:
None
successful_logon_attempts:
0L
uid:
S-1-5-21-3258603230-662395079-3947342588-500
----------
ID: disable_builtin_administrator
Function: module.run
Result: False
Comment: The following requisites were not found:
require:
module: builtin_administrator['changes']['user.info']['account_disabled']['false']
Started: 15:59:56.455000
Duration: 0.0 ms
Changes:
Summary for local
------------
Succeeded: 1 (changed=1)
Failed: 1
------------
Total states run: 2
Total run time: 15.000 ms
I'm testing with a Windows 10 1903 masterless salt-minion 2019.2.1 (Fluorine), where I set use_superseded for module.run in the minion config file.
Thanks in advance!
I settled for this:
localuser.disable.administrator:
  cmd.run:
    - name: "Get-LocalUser Administrator | Disable-LocalUser"
    - shell: powershell
    - onlyif: powershell -command "if ((Get-LocalUser | Where-Object {($_.Name -eq 'Administrator') -and ($_.Enabled -eq $true)}) -eq $null) {exit 1}"
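For reference, a render-time alternative avoids shelling out entirely: query user.info via Jinja when the state file is rendered, and only emit the state if the account is still enabled. A sketch, assuming the win_useradd module backs user.info on the minion:

```yaml
{% set admin = salt['user.info']('Administrator') %}
{% if admin and not admin.get('account_disabled') %}
disable_builtin_administrator:
  module.run:
    - user.update:
      - name: Administrator
      - account_disabled: true
{% endif %}
```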
I am working on Mac OSX El Capitan 10.11.2. I am quite new to Vagrant, and this would be my first project in Symfony. I set up the environment to start my first Symfony 4 project. Everything was installed correctly (including VirtualBox and Vagrant). I updated my hosts file and edited Homestead.yaml, but at the end, when I wanted to access my domain (symf01.test) in Chrome, I got a message that the site can't be reached, and a text file was downloaded automatically, containing this:
<?php

use Symfony\Component\Debug\Debug;
use Symfony\Component\HttpFoundation\Request;

require dirname(__DIR__).'/config/bootstrap.php';

if ($_SERVER['APP_DEBUG']) {
    umask(0000);
    Debug::enable();
}

if ($trustedProxies = $_SERVER['TRUSTED_PROXIES'] ?? $_ENV['TRUSTED_PROXIES'] ?? false) {
    Request::setTrustedProxies(explode(',', $trustedProxies), Request::HEADER_X_FORWARDED_ALL ^ Request::HEADER_X_FORWARDED_HOST);
}

if ($trustedHosts = $_SERVER['TRUSTED_HOSTS'] ?? $_ENV['TRUSTED_HOSTS'] ?? false) {
    Request::setTrustedHosts([$trustedHosts]);
}

$kernel = new Kernel($_SERVER['APP_ENV'], (bool) $_SERVER['APP_DEBUG']);
$request = Request::createFromGlobals();
$response = $kernel->handle($request);
$response->send();
$kernel->terminate($request, $response);
This is my Homestead.yaml file:
ip: "192.168.10.10"
memory: 2048
cpus: 1
provider: virtualbox
authorize: ~/.ssh/id_rsa.pub
keys:
  - ~/.ssh/id_rsa
folders:
  - map: /Users/username/Homestead/code/simba
    to: /home/vagrant/simba
    type: "nfs"
sites:
  - map: symf01.test
    to: /home/vagrant/simba/public
    type: symfony
databases:
  - homestead
# ports:
# - send: 50000
# to: 5000
# - send: 7777
# to: 777
# protocol: udp
# blackfire:
# - id: foo
# token: bar
# client-id: foo
# client-token: bar
# zray:
# If you've already freely registered Z-Ray, you can place the token here.
# - email: foo@bar.com
# token: foo
# Don't forget to ensure that you have 'zray: "true"' for your site.
This is what I get when I run service nginx status:
For future reference, this helped:
...
sites:
  - map: symf01.test
    to: /home/vagrant/simba/public
    type: symfony
to
...
sites:
  - map: symf01.test
    to: /home/vagrant/simba/public
    type: "symfony4"
The difference is in the value of type: it needs to be "symfony4", in quotes. That type tells Homestead which web-server configuration to apply so that your site can be accessed properly.
I'm running the same salt states on vagrant and on a cloud server. On vagrant the Started and Duration details show, but on the cloud server they don't.
I'm running masterless on v2014.7.5 (Helium)
Here's my vagrant config:
Vagrant.configure("2") do |config|
  config.vm.box = "trusty64"
  config.vm.network "private_network", ip: "192.168.33.11"
  config.vm.synced_folder "saltMount/", "/srv/saltMount/"

  config.vm.provision "salt" do |salt|
    salt.minion_config = "minion"
    salt.run_highstate = true
    salt.colorize = true
    salt.log_level = 'info'
    salt.verbose = true
  end

  config.vm.provider "virtualbox" do |v|
    v.memory = 1024
  end
end
Here is my minion file:
master: localhost
file_client: local
failhard: True
state_verbose: True
file_roots:
  base:
    - /srv/saltMount/stateFiles
pillar_roots:
  base:
    - /srv/saltMount/pillars
And this is the command I'm running:
salt-call state.highstate -l info
What am I missing to get the Duration and Started on my cloud highstate?
(This is a follow-up question to: salt-stack highstate - find slow states)