I've been through the salt and pillar walkthroughs and in general, everything works as expected with my setup. In fact, there isn't anything that I'm aware of that isn't working properly...until now.
This is my first foray into using the pillar system. I have access keys that I want to protect, so I'd like to use pillar to control which minions get copies of them.
Here is my setup.
Directory structure:
[root@master config-mgmt]# tree /srv/pillar
/srv/pillar
├── awscreds.sls
├── data.sls
├── dev
└── top.sls
/srv/pillar/top.sls file:
[root@master config-mgmt]# cat /srv/pillar/top.sls
dev:
  'roles:*aws*':
    - match: grain
    - awscreds
  '*':
    - data
Eventually, I'd like to be able to match on my "roles" grain but for this test, to keep things simple, I am only concerned with the glob match ('*').
For all minions, it ought to apply the data pillar file, which is here:
[root@master config-mgmt]# cat /srv/pillar/data.sls
info: some data for poc
From my salt-master, I run refresh_pillar:
[root@master config-mgmt]# salt '*salttest*' saltutil.refresh_pillar
slave-salttesting-01.eng.example.com:
True
Seems okay. But, neither on the minion nor the master are the pillar attributes present in any form.
On the master:
[root@master config-mgmt]# salt '*salttest*' pillar.ls
slave-salttesting-01.eng.example.com:
On the minion:
[root@slave-salttesting-01 ~]# salt-call pillar.ls
local:
I'm running a recent version of salt:
[root@master config-mgmt]# salt --version
salt 2018.3.3 (Oxygen)
Any ideas why my minion isn't picking up any attributes?
I found the solution. I wasn't familiar with the /etc/salt/master file until my colleague suggested that I check it. The pillar_roots section had a typo for dev (it was defined as /sr/salt/dev instead of /srv/salt/dev), and base was pointing to a location other than the default. I made sure that base was set to /srv/salt and dev to /srv/salt/dev. I then went back into /srv/salt and made sure that top.sls was in that location, and I moved data.sls and awscreds.sls to /srv/salt/dev because I wanted those to be part of dev.
After that, everything worked as expected. It goes to show, don't take anything for granted. I thought our pillars were working but as it turned out, they weren't.
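For anyone hitting the same thing, here is roughly what the corrected pillar_roots section in /etc/salt/master looks like in my case (paths as described above; adjust to your own layout):
pillar_roots:
  base:
    - /srv/salt
  dev:
    - /srv/salt/dev
Remember that the salt-master has to be restarted after changing pillar_roots, and the minions need another saltutil.refresh_pillar before the new data shows up.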
Related
I have one large .yml config file, which gets loaded in a pillar .sls file and later used in states. To refactor that config file and make it a bit more readable, I would like to split it into multiple files placed in one directory.
Current structure of pillars is:
pillar
|- app_configuration.sls
|- config.yml
Desired structure is:
pillar
|- app_configuration.sls
|- config_files
|- config1.yml
|- config2.yml
|- config3.yml
Current code in app_configuration.sls loads yaml file config.yml like this:
{% import_yaml 'config.yml' as app_config %}
But with the updated structure I need to pass the directory path config_files, traverse all files in that directory, and merge their content together. How can this be achieved in SaltStack? The most important part for me is how to list all files in the config_files directory. I've already managed to write a for loop with merging code in Jinja, but when I try to use the salt.file.find function with a relative path (config_files), it does not work. It only works when I specify the absolute path, which is really long and doesn't feel right to me. I also thought about enumerating the config files explicitly, but I'd like to avoid that: when a new config file is added, it can easily be forgotten in the enumeration, and that doesn't scale.
There are two options for this: you can use an include statement inside a pillar SLS file, or load the files via the pillar top.sls.
Example of using top.sls for base environment for all minions:
base:
  '*':
    - app_configuration
    - config_files.*
Or, without editing the top.sls file, include the files from the config_files directory in app_configuration.sls.
At the top of app_configuration.sls:
include:
  - config_files.*
Another alternative is to use map.jinja files (see documentation on Formulas).
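If you really want to discover the files dynamically instead of listing them, something along these lines might work in app_configuration.sls. This is an untested sketch: it assumes a single pillar root for the base environment, relies on the file.find and file.read execution modules plus the load_yaml filter, and only does a shallow (top-level) merge:
{# Untested sketch: merge every *.yml under <pillar root>/config_files #}
{% set config_dir = opts['pillar_roots']['base'][0] ~ '/config_files' %}
{% set app_config = {} %}
{% for path in salt['file.find'](config_dir, name='*.yml', type='f') %}
{#   top-level keys from later files overwrite earlier ones #}
{%   do app_config.update(salt['file.read'](path) | load_yaml or {}) %}
{% endfor %}

app_config: {{ app_config | yaml }}
Since pillar SLS files are rendered on the master, the file.find and file.read calls above run against the master's filesystem, which is why the pillar root path can be taken from opts instead of being hard-coded.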
I am curious if you can control the output "src" folder in AWS CodeBuild.
Specifically, I see this when debugging the build in CodeBuild.
/codebuild/output/src473482839/src/github.....
I would love to be able to set/change/remove the src473482839 part of that path, because I suspect it is causing sbt to recompile my Scala source files. Although I am using CodeBuild's new local cache to cache my target folders between builds, the compiled classes' canonical paths change between builds, which is what I suspect is causing the problem.
After some more debugging I have managed to get my 6 minute builds down to about 1:30.
Although you are not able to set or override CODEBUILD_SRC_DIR, I have found a workaround in my buildspec.
This is what my buildspec looks like now, with local caching enabled in CodeBuild:
version: 0.2
phases:
  pre_build:
    commands:
      - mkdir -p /my/build/folder/
      - cp -a ${CODEBUILD_SRC_DIR}/. /my/build/folder
  build:
    commands:
      - cd /my/build/folder
      - sbt compile test
cache:
  paths:
    - '/root/.ivy2/cache/**/*'
    - '/root/.cache/**/*'
    - 'target/**/*'
    - 'any other target folders you may need'
The key change was to copy the source (with its cached target directories) over in the pre_build phase, then change into the new, static directory and compile from there.
I hope this helps someone else down the road until CodeBuild allows you to set/override the CODEBUILD_SRC_DIR folder.
I know I can use cp.get_dir to download a directory from the master to minions, but when the directory contains a lot of files, it's very slow. If I could tar up the directory and then download it to the minion, it would be much faster. But I can't figure out how to archive a directory on the master prior to downloading it to the minions. Any ideas?
What we do is tar the files manually, then extract them on the minion, as you said. We then either replace or modify any files that should be different from what is in the tar-file. This is a good approach for a configuration file that resides in the .tar file, for example.
To archive the file, we just ssh on the salt master and then use something like tar -cvzf files.tar.gz <yourfiles>.
You could also consider having the files on the machines from the start, with a rsync afterwards (via salt.states.rsync for example). This would just push over the changes in the files, not all the files.
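For example, here is a rough sketch of such an rsync state (the host, user and paths are made up, and it assumes rsync is installed on the minion and the source is reachable over ssh):
/opt/myfiles:
  rsync.synchronized:
    - source: syncuser@fileserver.example.com:/srv/exports/myfiles/
    - delete: True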
Adding to what Kai suggested, you could have a minion running on the salt master box and have it tar up the file before you send it down to all the minions.
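A rough sketch of that idea (the state ID and paths are invented; it assumes a minion runs on the master box and /srv/salt/files is inside your file_roots):
make-myfiles-tarball:
  cmd.run:
    - name: tar -czf /srv/salt/files/myfiles.tar.gz -C /srv/myfiles .
You would apply that state only to the master-side minion; the resulting salt://files/myfiles.tar.gz can then be pulled down by all the other minions, for example with archive.extracted as shown in the next answer.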
You can use the archive.extracted state. The source argument uses the same syntax as its counterpart in the file.managed state. Example:
/path/on/the/minion:
  archive.extracted:
    - source: salt://path/on/the/master/archive.tar.gz
I'm just learning saltstack to start automating provisioning and deployment. One thing I'm having trouble finding is how to recursively set ownership on a directory after extracting an archive. When I use the user and group properties, I get a warning that says this functionality will be dropped in archive.extracted in a future release (carbon).
This seems so trivial, but I can't find a good way to do the equivalent of chown -R user:user on the dir that's extracted from the tar I'm unpacking.
The only thing I could find via googling was to add a cmd.run statement in the state file that runs chown and requires the statement that unpacks the tar. There's gotta be a better way, right?
EDIT: the cmd.run approach works perfectly, by the way; it just seems like a workaround.
Here's how I have used it: I extract the archive and then have a file.directory state that sets the ownership and permissions.
/path/to/extracted/dir:
  file.directory:
    - user: <someuser>
    - group: <group>
    - mode: 755  # some permission
    - recurse:
      - user
      - group
    - require:
      - archive: <state id of the `archive.extracted` state>
Say I have a cmd.wait script that watches a managed git repository for changes. What’s the best way to trigger that script even if the repo hasn’t changed?
Here's the scenario:
my-repo:
  git.latest:
    - name: git@github.com:my/repo.git
    - rev: master
    - target: /opt/myrepo
    - user: me
    - require:
      - pkg: git

syncdb:
  cmd.run:
    - name: /opt/bin/syncdb.sh

load-resources:
  cmd.wait:
    - name: /opt/bin/load_script.py /opt/myrepo/resources.xml
    - require:
      - cmd: syncdb
    - watch:
      - git: my-repo

index-resources:
  cmd.wait:
    - name: /opt/bin/indexer.sh
    - watch:
      - cmd: load-resources
Say that I run this state, but syncdb fails. load-resources and index-resources fail as well because of missing prerequisites. But my-repo succeeded, and now has the latest checkout of the repository.
So I go ahead and fix the problem that was causing syncdb to fail, and it succeeds. But now my cmd.wait scripts won't run, because my-repo isn't reporting any changes.
I need to trigger, just once, load-resources, and going forward I want it to only trigger when the repo changes. Now, I could just change it to use cmd.run, but in actuality I have a bunch of these cmd.wait scripts in a similar state, and I really don't want to have to go through and switch them all and then switch them back. The same goes for introducing artificial changes into the git repo. There are multiple repos involved and that's annoying in many ways. Finally, I can foresee something like this happening again, and I want a sure solution for handling this case that doesn't involve a bunch of error-prone manual interventions.
So, is there a way to trigger those cmd.wait scripts manually? Or is there a clever way to rearrange the dependencies so that a manual triggering is possible?
Assuming the above SLS lives in /srv/salt/app.sls, you should be able to execute load-resources by doing this:
$: salt '*appserver*' state.sls_id load-resources app base
That said, there are surely many better ways to do this, so that you don't have to manually handle failures.
You could change your load-resources to use cmd.run with an unless command that actually checks whether the resources have been loaded or not. If that's not possible in business terms (i.e. there's no easy way to check), then something generic could do; it can be as simple as a file that load_script.py creates when it finishes. The file can contain the commit id of the git repo at the time of the import; if the file doesn't exist, or the commit id in it differs from that of the current git checkout, you know you have to re-import.
A better variation would be to bake the unless logic into load_script.py itself, making the script idempotent, just like Salt modules. Your SLS file would then be even simpler.
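To make that concrete, here is an untested sketch of the stamp-file idea (the stamp path and the exact command wiring are assumptions):
load-resources:
  cmd.run:
    - name: >
        /opt/bin/load_script.py /opt/myrepo/resources.xml &&
        git -C /opt/myrepo rev-parse HEAD > /var/cache/load_resources.commit
    - unless: test "$(cat /var/cache/load_resources.commit 2>/dev/null)" = "$(git -C /opt/myrepo rev-parse HEAD)"
    - require:
      - cmd: syncdb
      - git: my-repo
With this in place a manual re-run is just a normal state.apply; the unless check decides whether the import actually happens, and it keeps working automatically once the repo changes again.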