Can't generate Let's Encrypt certificate using SaltStack

I am trying to generate a Let's Encrypt certificate, and these are the steps that I followed:
Under /srv/salt/pillars/minion I added the file init.sls:
letsencrypt:
  config: |
    email = email
    auth:
      method: standalone
      type: http-01
      port: 8080
    agree-tos = True
    renew-by-default = True
  domainsets:
    mydomain:
      - mydomain.com
After that I updated the Salt pillar and ran:
# . update_salt.sh
# salt 'minion' state.sls letsencrypt
I got this result:
----------
          ID: letsencrypt-crontab-mydomain.com
    Function: cron.present
        Name: /usr/local/bin/renew_letsencrypt_cert.sh mydomain.com
      Result: False
     Comment: One or more requisite failed: letsencrypt.domains.create-initial-cert-mydomain.com
     Started:
    Duration:
     Changes:
----------
          ID: create-fullchain-privkey-pem-for-mydomain.com
    Function: cmd.run
        Name: cat /etc/letsencrypt/live/mydomain.com/fullchain.pem \
              /etc/letsencrypt/live/mydomain.com/privkey.pem \
              > /etc/letsencrypt/live/mydomain.com/fullchain-privkey.pem && \
              chmod 600 /etc/letsencrypt/live/mydomain.com/fullchain-privkey.pem
      Result: False
     Comment: One or more requisite failed: letsencrypt.domains.create-initial-cert-mydomain.com
     Started:
    Duration:
     Changes:
What should I modify in my configuration to get the certificate?
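The failed states above only report that a requisite failed; the actual error comes from the letsencrypt.domains.create-initial-cert-mydomain.com state, which is not shown in this output. One way to dig further is to run the state with debug logging directly on the minion and read the certbot output it prints, for example:
# salt-call -l debug state.sls letsencrypt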

Related

How do I combine two commands in one task? | Ansible

So, my problem is that I want to check whether nginx is installed on two different OSes with different package managers.
- name: Verifying nginx installation # RedHat
  command: "rpm -q nginx"
  when: ansible_facts.pkg_mgr in ["yum","dnf","rpm"] # or (ansible_os_family == "RedHat")
- name: Verifying nginx installation # Debian
  command: "dpkg -l nginx"
  when: ansible_facts.pkg_mgr in ["dpkg", "apt"] # or (ansible_os_family == "Debian")
Can I combine these into one task, and if so, how? I need to register the output and use it later on, and I can't figure it out.
An alternative solution is to use the package_facts module, like this:
- hosts: localhost
  tasks:
    - package_facts:
    - debug:
        msg: "Nginx is installed!"
      when: "'nginx' in packages"
But you could also register individual variables for your two tasks, and then combine the result:
- hosts: localhost
  tasks:
    - name: Verifying nginx installation # RedHat
      command: "rpm -q nginx"
      when: ansible_facts.pkg_mgr in ["yum","dnf","rpm"] # or (ansible_os_family == "RedHat")
      failed_when: false
      register: rpm_check
    - name: Verifying nginx installation # Debian
      command: "dpkg -l nginx"
      when: ansible_facts.pkg_mgr in ["dpkg", "apt"] # or (ansible_os_family == "Debian")
      failed_when: false
      register: dpkg_check
    - set_fact:
        nginx_result: >-
          {{
            (rpm_check is not skipped and rpm_check.rc == 0) or
            (dpkg_check is not skipped and dpkg_check.rc == 0)
          }}
    - debug:
        msg: "nginx is installed"
      when: nginx_result
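Since the goal is to use the result later on, any follow-up task can simply branch on the combined fact. A minimal sketch (the restart task is only an illustrative placeholder, not part of the original answer; the | bool cast guards against the fact being stored as a string):
    - name: Restart nginx only when it is installed   # placeholder follow-up step
      service:
        name: nginx
        state: restarted
      when: nginx_result | bool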

SaltStack: disable the local Windows Administrator if it is enabled

I'd like to disable the built-in Windows local Administrator account if it is enabled.
As salt.states.user.present doesn't support disabling accounts, I'm using salt.modules.win_useradd.update. However, it disables the account even if it is already disabled.
I can't use unless or onlyif because they only use results parsed from shell commands.
Is there a way to use the boolean value of [user.info][account_disabled] from salt.modules.win_useradd.info's return data ('changes' dictionary) as a requirement?
I'd like to do something like the following:
builtin_administrator:
  module.run:
    - user.info:
      - name: Administrator

disable_builtin_administrator:
  module.run:
    - user.update:
      - name: Administrator
      - account_disabled: true
    - require:
      - module: builtin_administrator
    - require:
      - module: builtin_administrator['changes']['user.info']['account_disabled']['false']
You can see the changes dictionary returned by win_useradd.info in the output:
local:
----------
          ID: builtin_administrator
    Function: module.run
      Result: True
     Comment: user.info: Built-in account for administering the computer/domain
     Started: 15:59:56.440000
    Duration: 15.0 ms
     Changes:
              ----------
              user.info:
                  ----------
                  account_disabled:
                      True
                  account_locked:
                      False
                  active:
                      False
                  comment:
                      Built-in account for administering the computer/domain
                  description:
                      Built-in account for administering the computer/domain
                  disallow_change_password:
                      False
                  expiration_date:
                      2106-02-07 01:28:15
                  expired:
                      False
                  failed_logon_attempts:
                      0L
                  fullname:
                  gid:
                  groups:
                      - Administrators
                  home:
                      None
                  homedrive:
                  last_logon:
                      Never
                  logonscript:
                  name:
                      Administrator
                  passwd:
                      None
                  password_changed:
                      2019-10-09 09:22:00
                  password_never_expires:
                      True
                  profile:
                      None
                  successful_logon_attempts:
                      0L
                  uid:
                      S-1-5-21-3258603230-662395079-3947342588-500
----------
          ID: disable_builtin_administrator
    Function: module.run
      Result: False
     Comment: The following requisites were not found:
                  require:
                      module: builtin_administrator['changes']['user.info']['account_disabled']['false']
     Started: 15:59:56.455000
    Duration: 0.0 ms
     Changes:

Summary for local
------------
Succeeded: 1 (changed=1)
Failed:    1
------------
Total states run:     2
Total run time:  15.000 ms
I'm testing with a Windows 10 1903 masterless salt-minion 2019.2.1 (Fluorine) where I set use_superseded for module.run in the minion config file.
Thanks in advance!
I settled for this:
localuser.disable.administrator:
  cmd.run:
    - name: "Get-LocalUser Administrator | Disable-LocalUser"
    - shell: powershell
    - onlyif: powershell -command "if ((Get-LocalUser | Where-Object {($_.Name -eq 'Administrator') -and ($_.Enabled -eq $true)}) -eq $null) {exit 1}"
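The onlyif command exits 1 when no enabled Administrator account is found, so the cmd.run only fires while the account is still enabled, which keeps the state idempotent. If you would rather branch on the account_disabled boolean itself, another option is to look it up from Jinja at render time, so the disabling state is only generated when needed. A minimal sketch, assuming the state is rendered on the minion (as in this masterless setup) and that user.info resolves to win_useradd.info on Windows:
{# only emit the state while the account is still enabled #}
{% set admin = salt['user.info']('Administrator') %}
{% if admin and not admin.get('account_disabled', False) %}
disable_builtin_administrator:
  module.run:
    - user.update:
      - name: Administrator
      - account_disabled: true
{% endif %}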

OpenStack: Packer + Cloud-Init

I want to create a customized OpenStack openSUSE 15 image that contains some custom software and a graphical interface. I have used an existing openSUSE 15.0 image and Packer to build that image. It works fine. The Packer JSON file is as follows:
"builders": [
{
"type" : "openstack",
"ssh_username" : "root",
"image_name": "OpenSUSE_15_custom_kde",
"source_image": "OpenSUSE 15",
"flavor": "m1.medium",
"networks": "public-network"
}
],
"provisioners":[
{
"type": "shell",
"inline": [
"sleep 10",
"sudo -s",
"zypper --gpg-auto-import-keys refresh",
"zypper -n up -y",
"zypper -n clean -a",
"zypper -n addrepo -f http://download.opensuse.org/repositories/devel\\:/languages\\:/R\\:/patched/openSUSE_Leap_15.0/ R-patched",
"zypper -n addrepo -f http://download.opensuse.org/repositories/devel\\:/languages\\:/R\\:/released/openSUSE_Leap_15.0/ R-released",
"zypper --gpg-auto-import-keys refresh",
"zypper -n install -y R-base R-base-devel R-recommended-packages rstudio",
"zypper -n clean -a",
"zypper --non-interactive install -y -t pattern kde kde_plasma devel_kernel devel_python3 devel_C_C++ office x11",
"zypper -n install xrdp",
"zypper -n clean -a",
"zypper -n dup -y",
"systemctl enable xrdp",
"systemctl start xrdp",
"cloud-init clean --logs",
"zypper -n install -y cloud-init growpart yast2-network yast2-services-manager acpid",
"cat /dev/null > /etc/udev/rules.d/70-persistent-net.rules",
"systemctl disable cloud-init.service cloud-final.service cloud-init-local.service cloud-config.service",
"systemctl enable cloud-init.service cloud-final.service cloud-init-local.service cloud-config.service sshd",
"sudo systemctl stop firewalld",
"sudo systemctl disable firewalld",
"sed -i 's/GRUB_TIMEOUT=.*$/GRUB_TIMEOUT=0/g' /etc/default/grub",
"exec grub2-mkconfig -o /boot/grub2/grub.cfg '$#'",
"systemctl restart cloud-init",
"systemctl daemon-reload",
"cat /dev/null > ~/.bash_history && history -c && sudo su",
"cat /dev/null > /var/log/wtmp",
"cat /dev/null > /var/log/btmp",
"cat /dev/null > /var/log/lastlog",
"cat /dev/null > /var/run/utmp",
"cat /dev/null > /var/log/auth.log",
"cat /dev/null > /var/log/kern.log",
"cat /dev/null > ~/.bash_history && history -c",
"rm ~/.ssh/authorized_keys"
]
},
{
"type": "file",
"source": "./cloud_init/cloud.cfg",
"destination": "/etc/cloud/cloud.cfg"
}
]
}
There are no errors in the building and provisioning phases with Packer.
In a second stage, when this base image is spawned through a Heat template via the OpenStack client, I want some personalized tasks to be completed: user creation, granting SSH access (including adjusting the sshd_config file, ...). This is done through the init_image.sh file.
#!/bin/bash
useradd -m $USERNAME -p $PASSWD -s /bin/bash
usermod -a -G sudo $USERNAME
tee /etc/ssh/banner <<EOF
You are one lucky user, if you bear the key...
EOF
tee /etc/ssh/sshd_config <<EOF
## SOME IMPORTANT SSHD CONFIGURATIONS
EOF
sudo -u $USERNAME -H sh -c 'cd ~;mkdir ~/.ssh/;echo "$SSHPUBKEY" > ~/.ssh/authorized_keys;chmod -R 700 ~/.ssh/;chmod 600 ~/.ssh/authorized_keys;'
systemctl restart sshd.service
voldata_dev="/dev/disk/by-id/virtio-$(echo $VOLDATA | cut -c -20)"
mkfs.ext4 $voldata_dev
mkdir -pv /home/$USERNAME/share
echo "$voldata_dev /home/$USERNAME/share ext4 defaults 1 2" >> /etc/fstab
mount /home/$USERNAME/share
chown -R $USERNAME:users /home/$USERNAME/share/
systemctl enable xrdp
systemctl start xrdp
For this purpose, I have created the following heat template.
heat_template_version: "2018-08-31"

description: "version 2017-09-01 created by HOT Generator at Fri, 05 Jul 2019 12:56:22 GMT."

parameters:
  username:
    type: string
    label: User Name
    description: This is the user name, and will be also the name of the key and the server
    default: test
  imagename:
    type: string
    label: Image Name
    description: This is the Name of the Image e.g. Ubuntu 18.04
    default: "OpenSUSE Leap 15"
  ssh_pub_key:
    type: string
    label: ssh public key
  flavorname:
    type: string
    label: Flavor Name
    description: This is the Name of the Flavor e.g. m1.small
    default: "m1.small"
  vol_size:
    type: number
    label: Volume Size
    description: This is the size of the volume that should be attached in GB
    default: 10
  password:
    type: string
    label: password
    description: This is the su password and user password

resources:
  init:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config:
        str_replace:
          template:
            {get_file: init_image.sh}
          params:
            $USERNAME: {get_param: username}
            $SSHPUBKEY: {get_param: ssh_pub_key}
            $PASSWD: {get_param: password}
            $VOLDATA: {get_resource: volume}

  my_key:
    type: "OS::Nova::KeyPair"
    properties:
      name:
        list_join:
          ["_", [{get_param: username}, 'key']]
      public_key: {get_param: ssh_pub_key}

  my_server:
    type: "OS::Nova::Server"
    properties:
      block_device_mapping_v2: [{ device_name: "vda", image: { get_param: imagename }, delete_on_termination: "false", volume_size: 20 }]
      name: {get_param: username}
      flavor: {get_param: flavorname}
      key_name: {get_resource: my_key}
      admin_pass: {get_param: password}
      user_data_format: RAW
      user_data: {get_resource: init}
      networks:
        - network: "public-network"
    depends_on:
      - my_key
      - init
      - volume

  volume:
    type: "OS::Cinder::Volume"
    properties:
      # Size is given in GB
      size: {get_param: vol_size}
      name:
        list_join: ["-", ["vol_", {get_param: username}]]

  volume_attachment:
    type: "OS::Cinder::VolumeAttachment"
    properties:
      volume_id: { get_resource: volume }
      instance_uuid: { get_resource: my_server }
    depends_on:
      - volume

outputs:
  instance_ip:
    description: The IP address of the deployed instances
    value: { get_attr: [my_server, first_address] }
If I use the original image in the template, I have no problems (however, the building process takes very long, and I need to restart the instance to get the graphical KDE interface).
However, if I use the image built with Packer, my user_data is ignored: I cannot log in, and the personalized user is not created. What have I missed? Why does it not work? As you see, I clean cloud-init and restart the services... I am stuck big time...
UPDATE
Here is the accessible boot log from the machine.
UPDATE 2
This is the output of cloud-init analyze show:
-- Boot Record 01 --
The total time elapsed since completing an event is printed after the "#" character.
The time the event takes is printed after the "+" character.
Starting stage: init-local
|`->no cache found #00.01000s +00.00000s
|`->no local data found from DataSourceOpenStackLocal #00.04700s +15.23000s
Finished stage: (init-local) 15.31200 seconds
Starting stage: init-network
|`->no cache found #16.01000s +00.00100s
|`->no network data found from DataSourceOpenStack #16.01700s +00.02600s
|`->found network data from DataSourceNone #16.04300s +00.00100s
|`->setting up datasource #16.09000s +00.00000s
|`->reading and applying user-data #16.10000s +00.00200s
|`->reading and applying vendor-data #16.10200s +00.00000s
|`->activating datasource #16.12100s +00.00100s
|`->config-migrator ran successfully #16.17900s +00.00100s
|`->config-seed_random ran successfully #16.18000s +00.00100s
|`->config-bootcmd ran successfully #16.18200s +00.00000s
|`->config-write-files ran successfully #16.18200s +00.00100s
|`->config-growpart ran successfully #16.18300s +00.46100s
|`->config-resizefs ran successfully #16.64500s +01.33400s
|`->config-disk_setup ran successfully #17.98100s +00.00300s
|`->config-mounts ran successfully #17.98500s +00.00400s
|`->config-set_hostname ran successfully #17.99000s +00.09800s
|`->config-update_hostname ran successfully #18.08900s +00.01000s
|`->config-update_etc_hosts ran successfully #18.10000s +00.00100s
|`->config-rsyslog ran successfully #18.10100s +00.00200s
|`->config-users-groups ran successfully #18.10400s +00.00200s
|`->config-ssh ran successfully #18.10700s +00.61400s
Finished stage: (init-network) 02.73600 seconds
Starting stage: modules-config
|`->config-locale ran successfully #35.00200s +00.00400s
|`->config-set-passwords ran successfully #35.00600s +00.00100s
|`->config-zypper-add-repo ran successfully #35.00700s +00.00200s
|`->config-ntp ran successfully #35.01000s +00.00100s
|`->config-timezone ran successfully #35.01100s +00.00200s
|`->config-disable-ec2-metadata ran successfully #35.01300s +00.00100s
|`->config-runcmd ran successfully #35.01800s +00.00200s
Finished stage: (modules-config) 00.05100 seconds
Starting stage: modules-final
|`->config-package-update-upgrade-install ran successfully #35.87400s +00.00000s
|`->config-puppet ran successfully #35.87500s +00.00000s
|`->config-chef ran successfully #35.87600s +00.00000s
|`->config-mcollective ran successfully #35.87600s +00.00100s
|`->config-salt-minion ran successfully #35.87700s +00.00100s
|`->config-rightscale_userdata ran successfully #35.87800s +00.00100s
|`->config-scripts-vendor ran successfully #35.87900s +00.00500s
|`->config-scripts-per-once ran successfully #35.88400s +00.00100s
|`->config-scripts-per-boot ran successfully #35.88500s +00.00000s
|`->config-scripts-per-instance ran successfully #35.88500s +00.00100s
|`->config-scripts-user ran successfully #35.88600s +00.00100s
|`->config-ssh-authkey-fingerprints ran successfully #35.88700s +00.00100s
|`->config-keys-to-console ran successfully #35.88800s +00.09000s
|`->config-phone-home ran successfully #35.97900s +00.00100s
|`->config-final-message ran successfully #35.98000s +00.00600s
|`->config-power-state-change ran successfully #35.98700s +00.00100s
Finished stage: (modules-final) 00.13600 seconds
Total Time: 18.23500 seconds
1 boot records analyzed
UPDATE 3
Apparently, when one does not update with zypper up, cloud-init behaves well and finds the user data. Hence, I will not update the image during provisioning. However, once provisioned, it makes sense to update.
At the end of your provisioning you should stop cloud-init and wipe its state. Otherwise, when the image is launched, cloud-init thinks it has already executed its first boot.
systemctl stop cloud-init
rm -rf /var/lib/cloud/
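In this Packer setup that means finishing the shell provisioner's inline list with the cleanup, roughly like this (a sketch showing only the tail of the list from the template above):
        ...
        "cat /dev/null > ~/.bash_history && history -c",
        "rm ~/.ssh/authorized_keys",
        "systemctl stop cloud-init",
        "rm -rf /var/lib/cloud/"
      ]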

Homestead not loading the page (symfony)

I am working on Mac OS X El Capitan 10.11.2. I am quite new to Vagrant, and this would be my first project in Symfony. I set up the environment to start my first Symfony 4 project. Everything was installed correctly (including VirtualBox and Vagrant). I updated my hosts file and edited Homestead.yaml, and in the end, when I wanted to access my domain (symf01.test) in Chrome, I got a message that the site can't be reached, and a text file was downloaded automatically containing this:
<?php
use Symfony\Component\Debug\Debug;
use Symfony\Component\HttpFoundation\Request;
require dirname(__DIR__).'/config/bootstrap.php';
if ($_SERVER['APP_DEBUG']) { unmask(0000);
Debug::enable();}
if ($trustedProxies = $_SERVER['TRUSTED_PROXIES'] ?? $_ENV['TRUSTED_PROXIES'] ?? false)
{ Request::setTrustedProxies(explode(',', $trustedProxies), Request::HEADER_X_FORWARDED_ALL ^ Request::HEADER_X_FORWARDED_HOST);}
if ($trustedHosts = $_SERVER['TRUSTED_HOSTS'] ?? $_ENV['TRUSTED_HOSTS'] ?? false)
{ Request::setTrustedHosts([$trustedHosts]);}
$kernel = new Kernel($_SERVER['APP_ENV'], (bool) $_SERVER['APP_DEBUG']);
$request = = Request::createFromGlobals();
$response = $kernel->handle($request);
$response->send();
$kernel->terminate($request, $response);
This is my Homestead.yaml file:
ip:"192.168.10.10"
memory: 2048
cpus: 1
provider: virtualbox
authorize: ~/.ssh/id_rsa.pub
keys:
- ~/.ssh/id_rsa
folders:
- map: /Users/username/Homestead/code/simba
to: /home/vagrant/simba
type: "nfs"
sites:
- map: symf01.test
to: /home/vagrant/simba/public
type: symfony
databases:
- homestead
# ports:
# - send: 50000
# to: 5000
# - send: 7777
# to: 777
# protocol: udp
# blackfire:
# - id: foo
# token: bar
# client-id: foo
# client-token: bar
# zray:
# If you've already freely registered Z-Ray, you can place the token here.
# - email: foo#bar.com
# token: foo
# Don't forget to ensure that you have 'zray: "true"' for your site.
This is what I get when I run service nginx status:
For future reference, this helped:
...
sites:
    - map: symf01.test
      to: /home/vagrant/simba/public
      type: symfony
to
...
sites:
    - map: symf01.test
      to: /home/vagrant/simba/public
      type: "symfony4"
The difference is in the syntax: the value of type needs to be in quotes.
That type tells Homestead how to modify .htaccess files so that your websites can be properly accessed.
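Note that Homestead only picks up changes to Homestead.yaml when the box is re-provisioned, so after changing the site type you would typically run the following from the Homestead directory:
vagrant reload --provision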

Saltstack: ignoring result of cmd.run

I am trying to invoke a command during provisioning via SaltStack. If the command fails, the state fails, and I don't want that (the command's retcode doesn't matter).
Currently I have the following workaround:
Run something:
  cmd.run:
    - name: command_which_can_fail || true
Is there any way to make such a state ignore the retcode using Salt features? Or maybe I can exclude this state from the logs?
Use check_cmd:
fails:
  cmd.run:
    - name: /bin/false

succeeds:
  cmd.run:
    - name: /bin/false
    - check_cmd:
      - /bin/true
Output:
local:
----------
          ID: fails
    Function: cmd.run
        Name: /bin/false
      Result: False
     Comment: Command "/bin/false" run
     Started: 16:04:40.189840
    Duration: 7.347 ms
     Changes:
              ----------
              pid:
                  4021
              retcode:
                  1
              stderr:
              stdout:
----------
          ID: succeeds
    Function: cmd.run
        Name: /bin/false
      Result: True
     Comment: check_cmd determined the state succeeded
     Started: 16:04:40.197672
    Duration: 13.293 ms
     Changes:
              ----------
              pid:
                  4022
              retcode:
                  1
              stderr:
              stdout:

Summary
------------
Succeeded: 1 (changed=2)
Failed:    1
------------
Total states run: 2
If you don't care what the result of the command is, you can use:
Run something:
  cmd.run:
    - name: command_which_can_fail; exit 0
This was tested in Salt 2017.7.0 but would probably work in earlier versions.
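On newer Salt releases, cmd.run also accepts a success_retcodes argument (added around 2019.2.0, so not available in the 2017.7.0 mentioned above), which treats the listed return codes as success instead of masking them in the shell. A minimal sketch, assuming the command only ever fails with retcode 1:
Run something:
  cmd.run:
    - name: command_which_can_fail
    - success_retcodes:
      - 1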
