Install specific MongoDB version using enix.mongodb Ansible Galaxy role - ansible-galaxy

I want to modify an Ansible playbook to install a specific version of MongoDB using the enix.mongodb role. According to the docs there is a mongodb_version role variable that I can set to do this. I have tried updating the Ansible playbook, but Ansible doesn't accept the way I am specifying it (see the error below).
- hosts: development_ec2
  remote_user: ubuntu
  become: yes
  pre_tasks:
    - name: Update all apt packages
      apt: update_cache=yes
  roles:
    - role: geerlingguy.nodejs
    - role: geerlingguy.git
    - role: geerlingguy.docker
    - role: geerlingguy.helm
    - role: enix.mongodb:
      mongodb__version: 4.0
Where, and how, should I specify it? There is also a requirements.yml file where the roles are specified.
roles:
  - name: geerlingguy.nodejs
    version: 5.1.1
  - name: geerlingguy.git
    version: 2.1.0
  - name: geerlingguy.docker
    version: 2.7.0
  - name: enix.mongodb
    version: 1.1.0
  - name: geerlingguy.helm
    version: 1.0.0
Below is the error I get when I run the playbook:
ERROR! We were unable to read either as JSON nor YAML, these are the errors we got from each:

  JSON: No JSON object could be decoded

  Syntax Error while loading YAML.
    mapping values are not allowed in this context

The error appears to be in '/home/ubuntu/tc-ansible/playbooks/development.yml': line 14, column 25, but may be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

    - role: geerlingguy.helm
    - role: enix.mongodb:
                        ^ here
Many thanks.

For anyone passing by who is still looking for how to select the MongoDB version, you can try this:
roles:
  - role: enix.mongodb
    mongodb__version: 4.4
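For context, a sketch of how that entry slots into the play from the question (note the stray trailing colon after enix.mongodb is gone; the exact variable name, mongodb__version with a double underscore, and the value format should be double-checked against the enix.mongodb role docs):

- hosts: development_ec2
  remote_user: ubuntu
  become: yes
  pre_tasks:
    - name: Update all apt packages
      apt: update_cache=yes
  roles:
    - role: geerlingguy.nodejs
    - role: geerlingguy.git
    - role: geerlingguy.docker
    - role: geerlingguy.helm
    - role: enix.mongodb
      mongodb__version: 4.4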

Related

hydra - structured config group/package override via yaml

I'm trying, so far without success, to figure out how to override a config group/package via a yaml file. I'll explain my problem using the example (files and folder structure) from the hydra documentation https://hydra.cc/docs/tutorials/structured_config/schema/.
With config.yaml as:
defaults:
  - base_config # --> reference to dataclass
  - db: base_mysql # --> reference to dataclass
  - _self_

debug: true
it gives the expected output (printed when running myapp.py):
db:
  driver: mysql
  host: localhost
  port: 3306
  user: ???
  password: ???
Using the yaml file instead of the base_mysql dataclass is also fine; thus, with config.yaml as:
defaults:
  - base_config
  - db: mysql # --> reads db/mysql.yaml
  - _self_

debug: true
prints again as expected
db:
  driver: mysql
  host: localhost
  port: 3306
  user: omry
  password: secret
Overriding individual fields works fine as well, e.g. with config.yaml like:
defaults:
  - base_config
  - db: mysql
  - _self_

debug: true

db:
  password: UpdatedPassword
What I'm not able to figure out is how to override the full db group via another yaml file - defining the structure via a dataclass and then overriding/setting the values like:
defaults:
  - base_config
  - db: base_mysql # --> reference to dataclass to define the structure
  - _self_

debug: true

db: mysql # --> mysql.yaml
throws the following error:
In 'config': Validation error while composing config:
Merge error: str is not a subclass of MySQLConfig. value: mysql
    full_key:
    object_type=Config
Searching the internet/Stack Overflow already showed me that moving _self_ to the first position gets rid of the error - but then the composition order is "wrong".
Keeping the order as it is and using mysql.yaml for an override works well when done via the command line (python myapp.py db=mysql, with the "db: mysql" line not present), but for my use case it is much more convenient to handle it all via the yaml file(s).
Somehow I assume that the same functionality is available via the CLI and via files/code, and that I just did not manage to figure out how it works.
(hydra version 1.1 in a conda environment with python 3.9)
Thank you very much in advance for any help that you can provide.
If I understand correctly, you want to use the defaults list in your primary yaml file to merge together the base_mysql config with the mysql config. This will do the trick:
defaults:
  - base_config
  - db: [base_mysql, mysql]
  - _self_

debug: true
Passing a list [base_mysql, mysql] of config names causes those configs base_mysql and mysql to be merged together. This is documented here -- see the "CONFIG_NAMES" alternative for specifying an option in the defaults list.
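For reference, with this approach db/mysql.yaml only needs to carry the fields that differ from the dataclass defaults, since the schema and the remaining fields come from base_mysql - a minimal sketch, with the values taken from the output shown in the question:

# db/mysql.yaml - minimal sketch; only the fields overriding the MySQLConfig defaults
user: omry
password: secret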
Note that passing the CLI override db=mysql (as in python myapp.py db=mysql) results in modification of the defaults list; the resulting defaults list will be the same as if you had used the following in your yaml file:
defaults:
  - base_config
  - db: mysql
  - _self_

debug: true
You can pass a list [base_mysql, mysql] of config names at the CLI like this:
python my_app.py 'db=[base_mysql, mysql]'

Salt: install package and run service (name different than package)

Currently trying to wrap my head around Salt.
In essence I'd like to install a package (rpm) and enable and run the service (if the package installed successfully).
Surprise: The service is called differently than the package.
Let's say
the package is called x
but the (systemd/init) service this package installs is called y
This does not work
my_state_id:
  pkg.installed:
    - pkgs:
      - x
  service.running:
    - name: y
    - enable: true
    - require:
      - pkg: x
Result:
    Comment: The following requisites were not found:
                 require:
                     pkg: x
It looks like I have to write it like this and reference the state and not the package:
my_state_id:
  pkg.installed:
    - pkgs:
      - x
  service.running:
    - name: y
    - enable: true
    - require:
      - pkg: my_state_id
But: What does require: pkg: my_state_id mean? =D "if the state up to this point didn't fail, then run the current module"?
Quoting from the requisites documentation:
The generalized form of a requisite target is <state name>: <ID or name>.
If we break up your my_state_id ID:
my_state_id is the ID
pkg and service are the state names
the pkg state does not have a name parameter, but the service state does, and it is y
Since the pkg state does not have a name parameter, we need to use its ID to specify it as a requisite:
On the left side we will have pkg
On the right side it will be the ID my_state_id
- require:
  - pkg: my_state_id
The other way to write the same would be:
# give the package name in 'name' parameter
my_state_id:
  pkg.installed:
    - name: x
  service.running:
    - name: y
    - enable: true
    - require:
      - pkg: x
So it is a way to tell SaltStack to take actions conditionally: in this case, if the package install failed, it should not try to start the service (and fail).
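For completeness, a sketch of the same thing split into two state IDs (the IDs install_x and run_y are made up for illustration); some people find this easier to read, since each requisite then names a dedicated state:

install_x:
  pkg.installed:
    - name: x

run_y:
  service.running:
    - name: y
    - enable: true
    - require:
      - pkg: install_x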

Azure Form Recognizer Label Tool Docker: Missing EULA=accept command line option. You must provide this to continue

I am trying to run the Azure Forms Recognizer Label Tool in Azure Container instance.
I have followed the instructions given in here.
I was able to deploy the container image but when I try to start it, it terminates with the following message:
Missing EULA=accept command line option. You must provide this to continue.
This is quite surprising, because this option is specified in my YAML file (see below).
What can I do to fix this?
My YAML file:
apiVersion: 2018-10-01
location: West Europe
name: renecognitiveservice
imageRegistryCredentials: # This is required when pulling a non-public image
  - server: mcr.microsoft.com
    username: xxx
    password: xxx
properties:
  containers:
    - name: xxxeamlabelingtool
      properties:
        image: mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool
        environmentVariables: # These env vars are required
          - name: eula
            value: accept
          - name: billing
            value: https://rk-formsrecognizer.cognitiveservices.azure.com/
          - name: apikey
            value: xxx
        resources:
          requests:
            cpu: 2 # Always refer to recommended minimal resources
            memoryInGb: 4 # Always refer to recommended minimal resources
        ports:
          - port: 5000
  osType: Linux
  restartPolicy: OnFailure
  ipAddress:
    type: Public
    ports:
      - protocol: tcp
        port: 5000
tags: null
type: Microsoft.ContainerInstance/containerGroups
Apparently you can run it with command:
"command": [
"./run.sh", "eula=accept"
],
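If you deploy from the YAML file instead of the portal, the same override can likely be expressed with the command field of the container's properties - a sketch, to be verified against the ACI YAML reference for your apiVersion:

# sketch: goes under the container's properties in the YAML above
command:
  - ./run.sh
  - eula=accept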
This worked from the portal: https://github.com/MicrosoftDocs/azure-docs/issues/46623
It is the command override you want to add in the Azure portal while creating the container instance; you will find it in the "Advanced" tab. Afterwards you can access the IP address of that instance to open the label tool.
"./run.sh", "eula=accept"

Generating documentation for salt stack states

I have a repository with Salt states for provisioning my cluster of servers in the cloud. Over time, I kept adding more states - the .sls files - to this repo. Now I'm starting to struggle to keep track of what is what and what is where.
I am wondering if there is some software utility/package that will generate documentation from my states repository, preferably as HTML pages, so that I can browse them and see their interdependencies.
UPDATE:
The state sls files look like this:
include:
  - states.core.pip

virtualenv:
  pip.installed:
    - require:
      - sls: states.core.pip

virtualenvwrapper:
  pip.installed:
    - require:
      - sls: states.core.pip
And another sls example:
{% set user_home = '/home/username' %}

my_executable_virtualenv:
  virtualenv.managed:
    - name: {{ user_home }}/.virtualenvs/my_executable_virtualenv
    - user: username
    - system_site_packages: False
    - pip_pkgs:
      - requests
      - numpy
    - pip_upgrade: True
    - require:
      - sls: states.core

my_executable_supervisor_entry:
  file.managed:
    - name: /etc/supervisor/conf.d/my_executable.conf
    - source: salt://files/supervisor_config/my_executable.conf
    - user: username
    - group: username
    - mode: 644
    - makedirs: False
    - require:
      - sls: states.core
I did some research and found that Salt Stack has created one, and it works as HTML pages too. According to the documentation, if you have Python installed, installing Sphinx is as easy as:
C:\> pip install sphinx
Salt Stack's docs on this can be found here. According to the docs, making the HTML documentation is as easy as:
cd /path/to/salt/doc
make html
I hope this answer is what you were looking for!
This needs a custom plugin, which would have to be written.
There are no plugins directly available to render sls files.
There are some plugins available for rendering YAML files; maybe you can modify one of those to suit your requirement.
You can use some of the functions in the state module to list everything in the highstate for a minion:
# salt-call state.show_states --out=yaml
local:
- ufw.package.install
- ufw.config.file
- ufw.service.enable
- ufw.service.reload
- ufw.config.services
- ufw.config.applications
- ufw.service.running
- apt.apt_conf
- apt.unattended
- cacerts
- kerberos
- network
- editor
- mounts
- openssh
- openssh.config_ini
- openssh.known_hosts
...
And then view the compiled data for each one (also works with states not in the highstate):
# salt-call state.show_sls editor --out=yaml
local:
  vim-tiny:
    pkg:
    - installed
    - order: 10000
    __sls__: csrf.editor
    __env__: base
  editor:
    alternatives:
    - path: /usr/bin/vim.tiny
    - set
    - order: 10001
    __sls__: csrf.editor
    __env__: base
Or to get the entire highstate at once with state.show_highstate.
I'm not aware of any tools to build HTML documentation from that. You'd have to do that yourself.
To access all states (not just a particular highstate), you can use salt-run fileserver.file_list | grep '.sls$' to find every state, and salt-run state.orchestrate_show_sls to get the rendered data for each (though you may need to supply pillar data).

Service is already enabled, and is dead

I have the following states:
copy_over_systemd_service_files:
  file.managed:
    - name: /etc/systemd/system/consul-template.service
    - source: salt://mesos/files/consul-template.service
    - owner: consul

start_up_consul-template_service:
  service.running:
    - name: consul-template
    - enable: True
    - restart: True
    - require:
      - file: copy_over_systemd_service_files
    - watch:
      - /etc/systemd/system/consul-template.service
when I run my state file I get the following error:
          ID: start_up_consul-template_service
    Function: service.running
        Name: consul-template
      Result: False
     Comment: Service consul-template is already enabled, and is dead
     Started: 17:27:38.346659
    Duration: 2835.888 ms
     Changes:
I'm not sure what this means. All I want to do is restart the service once the unit file has been copied over, and I've done this before without issue. Looking back through the stack trace just shows that Salt ran systemctl is-enabled consul-template.
I think I was overcomplicating things. Instead I'm doing this:
consul-template:
  service.running:
    - require:
      - file: copy_over_systemd_service_files
    - watch:
      - /etc/systemd/system/consul-template.service
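A variant worth noting, as a sketch based on the states above rather than something tested here: pointing the watch at the file state itself instead of at the path, so the service is restarted whenever Salt changes the unit file:

consul-template:
  service.running:
    - enable: True
    - require:
      - file: copy_over_systemd_service_files
    - watch:
      - file: copy_over_systemd_service_files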
