Saltstack "Malformed topfile" with GitFS - salt-stack

I'm trying to get SaltStack working with GitFS integration on Debian Squeeze. The salt-master and minions are running, and GitFS works via GitPython (pip install 'GitPython==0.3.2.RC1').
Issue: I receive the error Malformed topfile (state declarations not formed as a list) when requesting the top file via salt-call -l debug state.show_top. However, if I clone the repository locally and use fileserver_backend: roots, it works fine.
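For reference, a minimal master configuration for the kind of GitFS setup being described (a sketch; the remote URL is hypothetical):

fileserver_backend:
  - git
gitfs_remotes:
  - git://github.com/example/salt-states.git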
Some debug output:
root#/# salt-call -l debug state.show_top
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Configuration file path: /etc/salt/minion
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Decrypting the current master AES key
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] Mako not available
[DEBUG ] Decrypting the current master AES key
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Decrypting the current master AES key
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[INFO ] Loading fresh modules for state activity
[DEBUG ] Fetching file from saltenv 'development', ** attempting ** 'salt://top.sls'
[INFO ] Fetching file from saltenv 'base', ** skipped ** latest already in cache 'salt://top.sls'
[DEBUG ] Jinja search path: ['/var/cache/salt/minion/files/base']
[DEBUG ] Rendered data from file: /var/cache/salt/minion/files/base/top.sls:
base:
  '*':
    - system.test
production:
  'N#aws':
    - match: compound
    - system
  'N#aws and G#roles:redis':
    - match: compound
    - redis.server
  # TODO:
  'N#aws and G#roles:queue':
    - match: compound
    - rabbitmq
  # TODO:
  'N#aws and G#roles:cronmaster':
    - match: compound
    - php.ng.cli
  # TODO:
  'N#aws and G#roles:consumer':
    - match: compound
    - php.ng.cli
  'N#aws and G#roles:app_bob':
    - match: compound
    - app.bob
  'N#aws and G#roles:app_alice':
    - match: compound
    - app.alice
  'N#aws and G#roles:mysql':
    - match: compound
    - mysql
development:
  'vagrant':
    - devtools
    - redis.server
    - mysql
    - solr
    - app.bob
    - app.alice
nodegroup:
  aws: 'G#provider:aws'
  avnet: 'G#provider:avnet'
[DEBUG ] Results of YAML rendering:
OrderedDict([('base', OrderedDict([('*', ['system.test'])])), ('production', OrderedDict([('N#aws', [OrderedDict([('match', 'compound')]), 'system']), ('N#aws and G#roles:redis', [OrderedDict([('match', 'compound')]), 'redis.server']), ('N#aws and G#roles:queue', [OrderedDict([('match', 'compound')]), 'rabbitmq']), ('N#aws and G#roles:cronmaster', [OrderedDict([('match', 'compound')]), 'php.ng.cli']), ('N#aws and G#roles:consumer', [OrderedDict([('match', 'compound')]), 'php.ng.cli']), ('N#aws and G#roles:app_bob', [OrderedDict([('match', 'compound')]), 'app.bob']), ('N#aws and G#roles:app_alice', [OrderedDict([('match', 'compound')]), 'app.alice']), ('N#aws and G#roles:mysql', [OrderedDict([('match', 'compound')]), 'mysql'])])), ('development', OrderedDict([('vagrant', ['devtools', 'redis.server', 'mysql', 'solr', 'app.bob', 'app.alice'])])), ('nodegroup', OrderedDict([('aws', 'G#provider:aws'), ('avnet', 'G#provider:avnet')]))])
[DEBUG ] LazyLoaded .returner
[DEBUG ] Decrypting the current master AES key
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
local:
- Malformed topfile (state declarations not formed as a list)
- Malformed topfile (state declarations not formed as a list)
root#/srv# salt-call --versions-report
Salt: 2014.7.1
Python: 2.6.6 (r266:84292, Dec 26 2010, 22:31:48)
Jinja2: 2.5.5
M2Crypto: 0.20.1
msgpack-python: 0.1.10
msgpack-pure: Not Installed
pycrypto: 2.1.0
libnacl: Not Installed
PyYAML: 3.09
ioflo: Not Installed
PyZMQ: 13.1.0
RAET: Not Installed
ZMQ: 3.2.3
Mako: Not Installed

It appears the issue is related to the nodegroups. After some debugging in the Salt source code, the following turned up:
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
local:
- G#provider:aws
- Malformed topfile (state declarations not formed as a list)
- G#provider:avnet
- Malformed topfile (state declarations not formed as a list)
After changing the nodegroup to use a list, everything worked fine.
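The fix, spelled out: rewrite the nodegroup section so each match maps to a list (the compound expressions are unchanged from the top file above):

nodegroup:
  aws:
    - 'G#provider:aws'
  avnet:
    - 'G#provider:avnet'

This satisfies the check behind the error message, since every top-level key in a top file is treated as a saltenv whose state declarations must be lists.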

Related

Masterless Windows Salt minion - Where to keep state files

I have installed salt-minion on a Windows machine and made it masterless.
Where do I keep state files so that Salt is able to find them?
Currently, when I keep them in C:\salt and run salt-call state.sls test -l debug, I get the following log:
[DEBUG ] LazyLoaded roots.envs
[DEBUG ] Could not LazyLoad roots.init: 'roots.init' is not available.
[DEBUG ] Updating roots fileserver cache
[DEBUG ] Determining pillar cache
[DEBUG ] LazyLoaded jinja.render
[DEBUG ] LazyLoaded yaml.render
[INFO ] Loading fresh modules for state activity
[DEBUG ] LazyLoaded jinja.render
[DEBUG ] LazyLoaded yaml.render
[DEBUG ] Could not find file 'salt://test.sls' in saltenv 'base'
[DEBUG ] Could not find file 'salt://test/init.sls' in saltenv 'base'
[DEBUG ] compile template: False
[ERROR ] Template was specified incorrectly: False
[DEBUG ] LazyLoaded highstate.output
local:
Data failed to compile:
----------
No matching sls found for 'test' in env 'base'
Salt looks for state files in the directory (or directories) configured by the file_roots setting. Since this is a masterless Windows minion, the relevant file is the minion config rather than the master's. To print the configured location(s), run from a command prompt:
python2 -c "import salt.config; opts = salt.config.minion_config('c:/salt/conf/minion'); print(opts['file_roots'])"
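For a masterless Windows minion, a minimal config that makes salt-call find the state files (a sketch, assuming the default install paths that appear in the logs elsewhere on this page; put it in c:\salt\conf\minion):

# run everything locally instead of asking a master
file_client: local
# where salt-call looks for .sls files
file_roots:
  base:
    - c:\salt\srv\salt

With this in place, c:\salt\srv\salt\test.sls (or test\init.sls) is found by salt-call --local state.sls test.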

Why is salt-minion ignoring /etc/salt/minion.d/*?

My /etc/salt/minion only has the line:
default_include: minion.d/*.conf
I wrote a very simple minion.d configuration:
cat /etc/salt/minion.d/saltminion
master: 123.123.xxx.xxx
id: test2
salt-minion -l debug
[DEBUG ] Reading configuration from /etc/salt/minion
[INFO ] Using cached minion ID from /etc/salt/minion_id: test1
[DEBUG ] Configuration file path: /etc/salt/minion
[INFO ] Setting up the Salt Minion "test1"
[DEBUG ] Created pidfile: /var/run/salt-minion.pid
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] The `lspci` binary is not available on the system. GPU grains will not be available.
[DEBUG ] Attempting to authenticate with the Salt Master at 92.242.xxx.xxx
minion.d is completely ignored, and I don't know where this 92.242.xxx.xxx IP is coming from. It looks like a major security hole to me.
Version: salt-minion 2014.1.13+ds-3
Would someone shed some light here, please? Thanks.
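A likely cause, judging from the filenames above (an observation offered as a sketch, not a confirmed answer from this thread): default_include: minion.d/*.conf only matches files ending in .conf, and the drop-in file is named saltminion with no extension, so it is never included. The id test1 comes from the cached /etc/salt/minion_id, which takes precedence over the config. A hypothetical fix:

# rename the drop-in so it matches the *.conf glob
mv /etc/salt/minion.d/saltminion /etc/salt/minion.d/saltminion.conf
# remove the cached minion ID so id: test2 takes effect
rm /etc/salt/minion_id
service salt-minion restart

The 92.242.xxx.xxx address is then most likely whatever the default master hostname salt resolves to on that network, since the configured master: line was never loaded.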

Unable to install software on Windows 10 minion

I'm trying to get the master to install software on the minion, but it doesn't seem like it's working. When I run a debug, it looks like the problem is the following:
[DEBUG ] Missing configuration file: /root/.saltrc
I'm not sure how to go about fixing it. For more information, my salt-master version is 2015.5.10 (Lithium) and my minion version is 2015.5.1. The complete debug log is:
[DEBUG ] Reading configuration from /etc/salt/master
[DEBUG ] Including configuration from '/etc/salt/master.d/master.conf'
[DEBUG ] Reading configuration from /etc/salt/master.d/master.conf
[DEBUG ] Using cached minion ID from /etc/salt/minion_id: ECS-141abdb2.ecs.ads.autodesk.com
[DEBUG ] Missing configuration file: /root/.saltrc
[DEBUG ] Configuration file path: /etc/salt/master
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[DEBUG ] Reading configuration from /etc/salt/master
[DEBUG ] Including configuration from '/etc/salt/master.d/master.conf'
[DEBUG ] Reading configuration from /etc/salt/master.d/master.conf
[DEBUG ] Using cached minion ID from /etc/salt/minion_id: ECS-141abdb2.ecs.ads.autodesk.com
[DEBUG ] Missing configuration file: /root/.saltrc
[DEBUG ] MasterEvent PUB socket URI: ipc:///var/run/salt/master/master_event_pub.ipc
[DEBUG ] MasterEvent PULL socket URI: ipc:///var/run/salt/master/master_event_pull.ipc
[DEBUG ] Sending event - data = {'_stamp': '2016-10-31T15:49:34.567058'}
[DEBUG ] LazyLoaded local_cache.get_load
[DEBUG ] get_iter_returns for jid 20161031084934590375 sent to set(['ds']) will timeout at 08:49:39.597553
[DEBUG ] jid 20161031084934590375 return from ds
[DEBUG ] LazyLoaded nested.output
Any help would be greatly appreciated since I'm new to this :)
Command used was:
sudo salt '<minion>' pkg.install 'chrome'
On Windows you need some extra magic for this to work properly. Did you follow the instructions written here? A short summary of the docs:
# prepare your master to know about the win repos
salt-run winrepo.update_git_repos
# tell all connected windows minions to refresh
salt -G 'os:windows' pkg.refresh_db
# list available packages
salt -G 'os:windows' pkg.list_pkgs
# ask your minion for known chrome versions
salt '<minion id>' pkg.available_version chrome
# and finally install it; use version=x.x.x in case of a specific one.
salt '<minion id>' pkg.install 'chrome'

pkg.install from the Windows repo in top.sls for a nodegroup

I followed this guide on how to set up the Windows software repository.
I can manually install a package on a node by executing:
salt node pkg.install 'firefox'
How do I transform this into a configuration in a top.sls file using pkg.install (or something similar), so that I can target a nodegroup using state.highstate and only install software that is not already installed?
Using the answer from Utah_Dave, I get the following error message (given that Firefox is already installed on the machine; on a fresh install the message and logs are similar, and Firefox does install on the machine, but the state still returns False):
my-node:
----------
          ID: firefox
    Function: pkg.installed
      Result: False
     Comment: The following packages failed to install/update: firefox=40.0.3
And from the debug logs from the salt-minion:
[DEBUG ] Fetching file from saltenv 'base', ** attempting ** 'salt://firefox.sls'
[INFO ] Fetching file from saltenv 'base', ** done ** 'firefox.sls'
[DEBUG ] LazyLoaded cmd.run
[DEBUG ] Jinja search path: ['c:\\salt\\var\\cache\\salt\\minion\\files\\base']
[DEBUG ] Rendered data from file: c:\salt\var\cache\salt\minion\files\base\firefox.sls:
firefox:
  pkg.installed
[DEBUG ] LazyLoaded config.get
[DEBUG ] Results of YAML rendering:
OrderedDict([('firefox', 'pkg.installed')])
[DEBUG ] LazyLoaded pkg.install
[DEBUG ] LazyLoaded pkg.installed
[DEBUG ] Error loading module.tls: ['PyOpenSSL version 0.14 or later must be installed before this module can be used.']
[DEBUG ] Error loading module.nacl: libnacl import error, perhaps missing python libnacl package
[DEBUG ] Error loading module.ipmi: No module named pyghmi.ipmi
[DEBUG ] Error loading module.npm: npm execution module could not be loaded because the npm binary could not be located
[DEBUG ] Could not LazyLoad pkg.ex_mod_init
[INFO ] Running state [firefox] at time 08:18:32.643000
[INFO ] Executing state pkg.installed for firefox
[DEBUG ] Initializing COM library
[DEBUG ] Uninitializing COM library
[DEBUG ] Could not LazyLoad pkg.normalize_name
[DEBUG ] Could not LazyLoad pkg.check_db
[DEBUG ] Could not LazyLoad pkg.normalize_name
[DEBUG ] Initializing COM library
[DEBUG ] Uninitializing COM library
[WARNING ] Specified file https://download-installer.cdn.mozilla.net/pub/firefox/releases/40.0.3/win32/en-US/Firefox%20Setup%2040.0.3.exe is not present to generate hash
[DEBUG ] Reading configuration from c:\salt\conf\minion
[DEBUG ] file_roots is c:\salt\srv\salt
[DEBUG ] Using GET Method
[INFO ] Executing command ['c:\\salt\\var\\cache\\salt\\minion\\extrn_files\\base\\download-installer.cdn.mozilla.net\\pub\\firefox\\releases\\40.0.3\\win32\\en-US\\Firefox%20Setup%2040.0.3.exe', '/s'] in directory 'C:\\Users\\aes.jenkins'
[DEBUG ] Initializing COM library
[DEBUG ] Uninitializing COM library
[DEBUG ] Could not LazyLoad pkg.hold
[DEBUG ] Initializing COM library
[DEBUG ] Uninitializing COM library
[ERROR ] The following packages failed to install/update: firefox=40.0.3
[INFO ] Completed state [firefox] at time 08:18:46.939000
cat /srv/salt/firefox.sls:
firefox:
  pkg.installed
cat /srv/salt/top.sls:
base:
  'myNodeGroupName':
    - match: nodegroup
    - firefox
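One adjustment worth trying for the Result: False above (a sketch, not a confirmed fix from this thread): pin the version in the state so the post-install check compares against exactly what the winrepo definition advertises, assuming the repo entry lists 40.0.3:

firefox:
  pkg.installed:
    - version: 40.0.3

If the versions still disagree, re-run salt-run winrepo.update_git_repos on the master and pkg.refresh_db on the minion so the cached repo data matches the installer actually being downloaded.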

Introspection into forever-running salt highstate?

I've been experimenting with Salt, and I've managed to lock up my highstate command. It's been running for hours now, even though there's nothing that warrants that kind of time.
The last change I made was to modify the service.watch state for nginx. It currently reads:
nginx:
  pkg.installed:
    - name: nginx
  service:
    - running
    - enable: True
    - restart: True
    - watch:
      - file: /etc/nginx/nginx.conf
      - file: /etc/nginx/sites-available/default.conf
      - pkg: nginx
The last change I made was to add the second file: argument to watch.
After letting it run all night with no change in state, I Ctrl-C'd the process. The last output from sudo salt -v 'web*' state.highstate -l debug was:
[DEBUG ] Checking whether jid 20140403022217881027 is still running
[DEBUG ] get_returns for jid 20140403103702550977 sent to set(['web1.mysite.com']) will timeout at 10:37:04
[DEBUG ] jid 20140403103702550977 found all minions
Execution is still running on web1.mysite.com
^CExiting on Ctrl-C
This job's jid is:
20140403022217881027
The minions may not have all finished running and any remaining minions will return upon completion. To look up the return data for this job later run:
salt-run jobs.lookup_jid 20140403022217881027
Running it again immediately, I got this:
$ sudo salt -v 'web*' state.highstate -l debug
[DEBUG ] Reading configuration from /etc/salt/master
[DEBUG ] Missing configuration file: /home/eykd/.salt
[DEBUG ] Configuration file path: /etc/salt/master
[DEBUG ] Reading configuration from /etc/salt/master
[DEBUG ] Missing configuration file: /home/eykd/.salt
[DEBUG ] LocalClientEvent PUB socket URI: ipc:///var/run/salt/master/master_event_pub.ipc
[DEBUG ] LocalClientEvent PULL socket URI: ipc:///var/run/salt/master/master_event_pull.ipc
Executing job with jid 20140403103715454952
-------------------------------------------
[DEBUG ] Checking whether jid 20140403103715454952 is still running
[DEBUG ] get_returns for jid 20140403103720479720 sent to set(['web1.praycontinue.com']) will timeout at 10:37:22
[INFO ] jid 20140403103720479720 minions set(['web1.mysite.com']) did not return in time
[DEBUG ] Loaded no_out as virtual quiet
[DEBUG ] Loaded json_out as virtual json
[DEBUG ] Loaded yaml_out as virtual yaml
[DEBUG ] Loaded pprint_out as virtual pprint
web1.praycontinue.com:
Minion did not return
I then ran the same command, and received this:
$ sudo salt -v 'web*' state.highstate -l debug
[DEBUG ] Reading configuration from /etc/salt/master
[DEBUG ] Missing configuration file: /home/eykd/.salt
[DEBUG ] Configuration file path: /etc/salt/master
[DEBUG ] Reading configuration from /etc/salt/master
[DEBUG ] Missing configuration file: /home/eykd/.salt
[DEBUG ] LocalClientEvent PUB socket URI: ipc:///var/run/salt/master/master_event_pub.ipc
[DEBUG ] LocalClientEvent PULL socket URI: ipc:///var/run/salt/master/master_event_pull.ipc
Executing job with jid 20140403103729848942
-------------------------------------------
[DEBUG ] Loaded no_out as virtual quiet
[DEBUG ] Loaded json_out as virtual json
[DEBUG ] Loaded yaml_out as virtual yaml
[DEBUG ] Loaded pprint_out as virtual pprint
web1.mysite.com:
Data failed to compile:
----------
The function "state.highstate" is running as PID 4417 and was started at 2014, Apr 03 02:22:17.881027 with jid 20140403022217881027
There is no process running under PID 4417. Running sudo salt-run jobs.lookup_jid 20140403022217881027 displays nothing.
Unfortunately, I can't connect to the minion via ssh, as salt hasn't provisioned my authorized_keys yet. :\
So, to my question: what the heck is wrong, and how in the world do I find that out?
So, after a lot of debugging, this was a result of an improperly configured Nginx service. service nginx start was hanging, and thus so was salt-minion.
I had this happen when I aborted a run of state.highstate on the salt-master with Ctrl-C. It turned out that the PID referenced in the error message was actually the PID of a salt-minion process on the minion machine.
I was able to resolve the problem by restarting the salt-minion process on the minion and then re-executing state.highstate on the master.
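The recovery described above, as concrete commands (a sketch; the minion ID and jid are taken from the logs in this question):

# on the minion: restart the stuck salt-minion process
service salt-minion restart
# or, from the master, kill the stale job by jid if the minion still responds
salt 'web1.mysite.com' saltutil.kill_job 20140403022217881027
# then re-run the highstate
sudo salt -v 'web*' state.highstate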
