I would like to store all Salt files (pillars, states, data files, etc.) in a git repository, so that this repository can be cloned on several different deployments.
Then I would like to be able to change the value of some pillar settings, such as a pathname, or a password, but without editing the original file which is in version control (i.e. those modifications would be local only and not necessarily versioned).
I would like to be able to pull new versions from the original repository (e.g. to add new pillar and state definitions) without losing the customized values.
E.g. the "base" or "default" pillar file would have settings like:
service:
  dir: /var/opt/myservice
  username: myuser
  password: mypassword
and I would like to customize some settings, in another file, without changing the base file:
service:
  dir: /mnt/data/myservice
  password: secret_password
The modified settings should take precedence over the base / default ones.
Is it possible to do this by using environments (e.g. a "base" environment and a "custom" environment)?
Or perhaps by including these custom pillar files?
The documentation seems to indicate that there isn't a fixed order for overriding pillar settings.
Let me first suggest a way where you keep the original file and the customized settings in the git repository. See below for how to override settings with a file outside of git.
Setup Git Pillar
I assume all files are stored in a git pillar like described here. I am using the syntax of salt version 2015.8 here.
ext_pillar:
  - git:
    - master https://gitserver/git-pillar.git:
      - env: base
In your top.sls file you can list different SLS files. They will override each other in the order listed in the top file:
# top.sls
base:
  '*':
    - standard
  '*qa':
    - qaservers
  'hostqa':
    - hostqaconfig
This will apply on all servers:
# standard.sls
test:
  setting1: A
  setting2: B
This will apply on all servers with the name ending with 'qa':
# qaservers.sls
test:
  setting2: B2
This will apply to the server with the name 'hostqa':
# hostqa.sls
test:
  setting1: A2
The commands salt hostqa saltutil.refresh_pillar and salt hostqa pillar.data will then show the values A2 and B2, as they have all been merged together.
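For this example, the merged pillar for hostqa should therefore look roughly like this:
test:
  setting1: A2
  setting2: B2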
As this works without specifying environments, I suggest not using environments here.
Override some local settings outside of Git
To override some of your settings locally, you can add another external pillar. One of the simplest is cmd_yaml, which runs a command (here: cat) and merges its output with the current pillar:
ext_pillar:
  - git:
    - master https://gitserver/git-pillar.git:
      - env: base
  - cmd_yaml: cat /srv/salt/local_override.sls
All external pillars are executed in the order listed in the configuration file.
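The override file itself is plain YAML that mirrors just the keys you want to replace. For the example from the question, /srv/salt/local_override.sls could contain something like this (illustrative content, kept out of git):
service:
  dir: /mnt/data/myservice
  password: secret_password
Since cmd_yaml is listed after the git pillar, its values should take precedence over the ones coming from the repository.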
I'm using hydra to log hyperparameters of experiments.
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(config_name="config", config_path="../conf")
def evaluate_experiment(cfg: DictConfig) -> None:
    print(OmegaConf.to_yaml(cfg))
    ...
Sometimes I want to do a dry run to check something. For this I don't need any saved parameters, so I'm wondering how I can completely disable saving to the filesystem in this case?
The answer from Omry Yadan works well if you want to solve this using the CLI. However, you can also add these flags to your config file such that you don't have to type them every time you run your script. If you want to go this route, make sure you add the following items in your root config file:
defaults:
  - _self_
  - override hydra/hydra_logging: disabled
  - override hydra/job_logging: disabled
hydra:
  output_subdir: null
  run:
    dir: .
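With these values in place, a plain run such as python foo.py (using the script name from the CLI example further below, so purely illustrative) should no longer create a separate output directory, a .hydra subdirectory, or log files.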
There is an enhancement request aimed at Hydra 1.1 to support disabling working directory management.
Working directory management does several things:
Creating a working directory for the run
Changing the working directory to the created dir.
There are other related features:
Saving log files
Saving files like config.yaml and hydra.yaml into .hydra in the working directory.
Different features have different ways of being disabled:
To prevent the creation of a working directory, you can override hydra.run.dir to . (the current directory).
To prevent saving the files into .hydra, override hydra.output_subdir to null.
To prevent the creation of logging files, you can disable logging output of hydra/hydra_logging and hydra/job_logging, see this.
A complete example might look like:
$ python foo.py hydra.run.dir=. hydra.output_subdir=null hydra/job_logging=disabled hydra/hydra_logging=disabled
Note that as always you can also override those config values through your config file.
I want to use relative paths when including sls files. This approach works when including state files but does not work when including pillar files.
Let's assume I have the following structure on my salt master:
file_roots:
  base:
    - /srv/salt/states
pillar_roots:
  base:
    - /srv/salt/pillars
And let's assume I have the following files:
/srv/salt/states/top.sls
/srv/salt/states/test/
/srv/salt/states/test/init.sls
/srv/salt/states/test/test_state.sls
In the top.sls file I include the test directory like this:
base:
  '*':
    - test
The init.sls file then includes the actual state file like this:
include:
  - .test_state
When I call the highstate everything works as expected. Now I use the same logic for pillar data. That means I have the following files:
/srv/salt/pillars/top.sls
/srv/salt/pillars/test/
/srv/salt/pillars/test/init.sls
/srv/salt/pillars/test/test_pillar.sls
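For reference, the pillar top.sls is not shown here; presumably it references the test directory the same way as on the state side, roughly like this:
base:
  '*':
    - test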
In the test_pillar.sls file I put one pillar like this:
test_pillar: text
The init.sls file looks like this (analogous to the init.sls file above):
include:
  - .test_pillar
When I call the highstate now I get the following error message:
Data failed to compile:
----------
Pillar failed to render with the following messages:
----------
Specified SLS '.test_pillar' in environment 'base' is not available on the salt master
So I go back to the init.sls file and make the file path absolute:
include:
  - test.test_pillar
Now it works.
To make a long story short: Salt lets me use relative paths in init.sls for state files, but complains when I do the same for pillar data.
Is this the intended behaviour? Or do I have to use some other syntax maybe?
Relative includes for pillar files were added with this pull request: https://github.com/saltstack/salt/pull/52156
But as of this writing, 15 Nov 2019, it doesn't look like that change has made it into a release yet.
The Ansible project has this directory structure:
roles/
    common/
        tasks/
            main.yml
group_vars/
    group1.yml
    group2.yml
inventory/
    hosts
When using the copy module inside the main.yml like this:
- name: Copy test directory
  copy:
    src: ./test
    dest: /tmp
    mode: 0600
    owner: user
    group: user
Where is Ansible going to look for the test directory?
I cannot find it in the documentation.
Q: "Where is Ansible going to look for the test directory?"
A: Quoting from The magic of ‘local’ paths:
... relative paths get attempted first with a files|templates|vars appended (if not already present), depending on the action being taken, ‘files’ is the default. (i.e include_vars will use vars/). The paths will be searched from most specific to the most general (i.e role before play). dependent roles WILL be traversed (i.e task is in role2, role2 is a dependency of role1, role2 will be looked at first, then role1,then play). i.e
role search path is rolename/{files|vars|templates}/, rolename/tasks/.
play search path is playdir/{files|vars|templates}/, playdir/.
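Applied to this question, where the copy module's default lookup subdirectory is files, the src: ./test path should therefore be searched in roughly this order (with <playbook_dir> standing in for the directory containing the playbook):
roles/common/files/test
roles/common/tasks/test
<playbook_dir>/files/test
<playbook_dir>/test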
I want to set some parameters as defined here (https://github.com/nteract/papermill#python-version-support). The catch is, I want to be able to do this via the UI. I have a JupyterHub installed on my cluster and, when opening it, I want certain parameters to be set by default.
Also, when I pass the parameters via papermill (the above script gets saved somewhere and then I will run it via papermill), I want the latter to override the former.
I tried looking into several topics on pure Jupyter notebooks, but in vain.
For the user to have access to certain parameters as soon as their notebook starts, IPython needs to know about the startup files. In the case of JupyterHub, this can be done with the following configuration:
proxy:
  secretToken: "yada yada"
singleuser:
  image:
    name: some_acc_id.dkr.ecr.ap-south-1.amazonaws.com/demo
    tag: 12h
  lifecycleHooks:
    postStart:
      exec:
        command: ["/bin/sh", "-c", 'ipython profile create; cd ~/.ipython/profile_default/startup; echo ''run_id = "sample" ''> aviral.py']
  imagePullSecret:
    enabled: true
    registry: some_acc_id.dkr.ecr.ap-south-1.amazonaws.com
    username: aws
    email: aviral@abc.com
Make sure you are escaping the quotes in the yaml correctly, or simply follow what I have done above.
Once this is done, papermill will override the params, but for that you have to make sure the cell is tagged as "parameters". For instance, in my JupyterHub every notebook that starts has a run_id variable with the value "sample".
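To actually override it, the new value is passed when running the saved notebook through papermill. A minimal invocation might look like this (the notebook names are just placeholders):
papermill input.ipynb output.ipynb -p run_id real_run_01
Papermill injects a new cell right after the cell tagged "parameters", so run_id ends up with the supplied value instead of "sample".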
I'm trying to use Salt to deploy an online tool to a new VPS. The process involves cloning a git repo and then various set-up commands; however, there seems to be an issue with including other .sls files from within subdirectories.
Here's a simplified version:
Master config file:
file_roots:
  base:
    - /srv/salt/saltstates
I have a file at /srv/salt/saltstates/test/test.sls containing:
base:
  '*':
    - basic
The file /srv/salt/saltstates/test/basic.sls contains:
Europe/London:
  timezone.system
However, when I run salt 'Minion1' state.sls test.test, an error is returned:
Minion1:
----------
ID: base
Function: *.basic
Result: False
Comment: State *.basic found in sls test.test is unavailable
Started:
Duration:
Changes:
OK, so you've confused several things here.
First of all, the contents you've put in /srv/salt/saltstates/test/test.sls are really what is called a top file, and that file should probably be moved to /srv/salt/saltstates/top.sls.
The top.sls is only needed if you want to do a highstate, but since you're trying to run salt 'Minion1' state.sls test.test you don't really need the top.sls.
Now since you have your sls file here: /srv/salt/saltstates/test/basic.sls, then the command you want to run is the following:
salt 'Minion1' state.sls test.basic
The "dot" traverses down directories.