We're using Salt (masterless, fwiw) to maintain a fleet of 10 hosts, which will probably grow to 15-20 by year end. I have a pillar file called credentials/init.sls that consists of a single huge chunk of YAML that looks something like this:
host_credentials:
  host1.example.com:
    role: staging
    mysql:
      superdatabase:
        role1:
          username: role1_username
          password: someSecretSHA1
          from_hosts:
            - host2.example.com
          grants: select, insert, update, delete
        role2:
          username: role2_username
          password: anotherSecretSHA1
          from_hosts:
            - host3.example.com
          grants: select
  host2.example.com:
    role: staging
    superthing:
      mysql:
        database: superdatabase
        host: host1.example.com
        role: role1
The point here is that the code that configures host1 can fetch all of its pertinent data from host_credentials['host1.example.com'], and the code that sets up host2 can fetch its data from host_credentials['host2.example.com'], with some lookups into host1's entry, for example to learn the username and password for superdatabase on host1 under role1.
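For instance, a state rendered on host2 can walk those references with a bit of Jinja. A minimal sketch, assuming the pillar structure above; the state id, target path and config format are illustrative:

{% set creds = pillar['host_credentials'] %}
{% set upstream = creds[grains['id']]['superthing']['mysql'] %}
{% set account = creds[upstream['host']]['mysql'][upstream['database']][upstream['role']] %}
superthing_db_config:
  file.managed:
    - name: /etc/superthing/db.conf
    - contents: |
        host={{ upstream['host'] }}
        user={{ account['username'] }}
        password={{ account['password'] }}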
This works, but it's cumbersome. In fact, it's sufficiently cumbersome that the firewall rules, which at first glance might also have lived in this YAML structure, are now in a different pillar file. Moreover, nothing enforces that each host's YAML conforms to the pattern we want, so we have to do a fair bit of error checking in the salt code.
In addition, if we someday move to a salt master, this layout would force us to share all host information with all minions, which is hardly ideal.
The goal is to have each bit of information written down once. If I change a password for a mysql database user, for example, I shouldn't have to change it twice, once on the server, again on each client that uses that database. Moreover, I'd like to have semi-reasonable error messages when I do the wrong thing. If I tell host4 to connect to mysql on host2, I should get an error message that says something like "host4 requested to connect to mysql on host2, but host2 does not run a mysql server".
At the moment, the only better solution I've found is to write some python code (using the salt py renderer) that generates an empty object but does a lot of checking in the middle.
salt/codequal/init.sls:
#!py
def run():
    # Do the error checking here and raise on malformed data, e.g.:
    creds = __pillar__.get('host_credentials', {})
    for host, conf in creds.items():
        if 'role' not in conf:
            raise ValueError('{0} has no role defined'.format(host))
    return {}
Any feedback most welcome. Since I haven't seen this sort of thing written down, I suspect I've missed a more natural way to do what I want.
So I created an encrypted key using ansible-vault create my.key.
Then I use it as a var:
my_key: "{{ lookup('file','{{ inventory_dir }}/group_vars/my.key') }}"
And then I use it when running my playbook, like this:
- name: Create My Private Key
  ansible.builtin.copy:
    content: "{{ secrets.my_key }}"
    dest: "{{ secrets_key }}"
  no_log: true
It does properly create the key on the remote host, and the key ends up decrypted there. But I'm wondering if this is the right way to do it. Does it decrypt at the right time, so that I am not exposing sensitive data where it should not be?
I thought encrypted variables must also have the !vault keyword specified. But if I do this for my my_key, I get this error:
fatal: [v14-test]: FAILED! => {"msg": "input is not vault encrypted data. "}
So this got me worried that the file is decrypted at the wrong time, or maybe the message is misleading, or something.
Is this the right way to do it? Or I should do it differently?
Firstly, a definitive answer as to whether this approach is appropriate is directly linked to what you want to achieve from encryption. Therefore, all an answer here can do is describe how Vault works, and then you can decide if it is right for your requirements.
Fundamentally what you are doing is a 'correct' usage of Ansible Vault, although I have not previously seen it used in quite this workflow (typically I have seen create used for encrypting YAML files of vars).
Using your method, your secret is turned into ciphertext and stored in my.key (which can be confirmed using basic text tools such as cat, less or more). You will see that the first line of the file contains a bunch of metadata that allows Ansible to understand the file contents and decrypt them on demand.
At runtime, Ansible will then use the password/key for the encrypted file (accessed through a number of methods) to decrypt the file contents into plain text and then store it in the variable my_key for use during the play.
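As an aside on the error you hit: the !vault tag belongs to the other workflow, inline-encrypted values created with ansible-vault encrypt_string, where the ciphertext sits directly in a YAML vars file, something like this (ciphertext shortened to a placeholder here):

my_key: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          6138343036343366...

A file produced by ansible-vault create is encrypted as a whole and decrypted transparently when read (the file lookup handles this), so its contents never carry the !vault tag; applying !vault to the already-decrypted lookup result is most likely why you see "input is not vault encrypted data".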
A non-exhaustive list of things to consider when determining if Ansible Vault is the right approach for you:
- Ansible Vault encryption is purely designed to protect secrets at rest (i.e. when they are stored on your hard disk).
- At run time, the secrets are converted into plain text and treated like any other variable/string data, while the file on disk still contains ciphertext, so the plaintext is only accessible within the running Ansible process. On a multi-user system, nobody can view the plaintext simply by looking inside the my.key file; however, depending on their level of access, their skills, and what your Ansible tasks are doing, they may be able to access the plaintext from the running process.
- Given that inside the process the data is just plain text, it is vulnerable to leakage, for example by being written out into a log file (check out the Ansible no_log option).
- At run time, Ansible needs some way to access the key necessary to decrypt the ciphertext. It provides a variety of methods, including prompting the user, reading it from a file stored on disk, reading it from an environment variable, and using scripts/integrations to pull it from another secrets-management tool (two common invocations are shown after this list). Careful thought needs to be given to which option is chosen, relative to what you are looking to achieve from the encryption (e.g. if your goal is to protect your data in the event that your laptop gets stolen, then storing the key in a file on the same system renders the whole operation pointless). Quite often, with the more sophisticated methods, you can still end up in a 'chicken and egg' situation, once more relative to what your goal from using encryption is.
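For example, two standard ways to hand ansible-playbook the decryption password at run time (the playbook name is illustrative):

ansible-playbook site.yml --ask-vault-pass
ansible-playbook site.yml --vault-password-file ~/.vault_pass.txt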
I might be talking complete cobblers or be a nefarious individual trying to sow disinformation, so read the docs thoroughly if the value of the secrets is significant to you :)
Unfortunately there is no getting away from the fact that genuinely good security is harder to achieve than the illusion of good security :|
I have a number of files that I need to transfer to specific minion hosts in a secure manner, and I would like to automate that process using Salt. However, I am having trouble figuring out the best means of implementing a host restricted transfer.
The salt fileserver works great for non-host-specific transfers. However, some of the files that I need to transfer are customer-specific, so I need to ensure that they are only accessible from specific hosts. Presumably Pillar would be the ideal candidate for minion-specific restrictions, but I am having trouble figuring out a means of specifying file transfers using pillar as the source.
As far as I can tell Pillar only supports SLS based dictionary data, not file transfers. I’ve tried various combinations of file.managed state specifications with paths constructed using various convolutions (including salt://_pillar/xxx), but thus far I have not been able to access anything other than token data defined within an SLS file.
Any suggestions for how to do this? I am assuming that secure file transfers should be a common enough need that there should be a standard means of doing it, as opposed to writing a custom function.
The answer depends on what exactly you're trying to secure. If only a part of the files involved are "sensitive" (for example, passwords in configuration files), you probably want to use a template that pulls the sensitive parts in from pillar:
# /srv/salt/app/files/app.conf.jinja
[global]
user = {{ salt['pillar.get']("app:user") }}
password = {{ salt['pillar.get']("app:password") }}
# ...and so on
For this case you don't need to care if the template itself is accessible to minions.
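For completeness, a minimal sketch of the pieces around such a template; the pillar keys match the pillar.get calls above, while the file paths and values are illustrative assumptions:

# /srv/pillar/app/init.sls
app:
  user: appuser
  password: hunter2

# /srv/salt/app/init.sls
/etc/app/app.conf:
  file.managed:
    - source: salt://app/files/app.conf.jinja
    - template: jinja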
If the entire file(s) involved are sensitive, then I think you want to set up the file_tree external pillar, and use file.managed with the contents_pillar option. That's not something I've worked with, so I don't have a good example.
Solution Synopsis: Using PILLAR.FILE_TREE
A: On your master, set up a directory from which you will serve the private files (e.g.: /srv/salt/private).
B: Beneath that create a "hosts" subdirectory, and then beneath that create a directory for each of the hosts that will have private files.
/srv/salt/private/hosts/hostA
/srv/salt/private/hosts/hostB
… where hostA and hostB are the ids of the target minions.
See the docs if you want to use node-groups instead of host ids.
C: Beneath the host dirs, include any files you want to transfer via pillar.
echo 'I am Foo!' > /srv/salt/private/hosts/hostA/testme
D: In your master config file (e.g: /etc/salt/master), include the following stanza:
ext_pillar:
  - file_tree:
      root_dir: /srv/salt/private
      follow_dir_links: False
      keep_newline: True
      debug: True
E: Create a salt state file to handle the transfer.
cat > /srv/salt/files/base/foo.sls << END
/tmp/pt_test:
  file.managed:
    - contents_pillar: testme
END
F: Refresh the pillar data (salt hostA saltutil.refresh_pillar), and then run your state command:
salt hostA state.apply foo
Following the last step, hostA should have a file named /tmp/pt_test that contains the text "I am Foo!".
My place of business currently uses Reflection for Unix and OpenVMS to handle a database of customers. I access this database directly through the Reflection emulation. The only way to get data out of Reflection is to navigate to a single customer via keyboard input and print the information to a .txt.
Is there any way I can access the VM other than through Reflection, with the end goal of automating retrieval of customer information from a Java script executed outside of the Reflection environment? This is the information I can gather via the Reflection interface about what I am connecting to:
At the bottom of the Reflection interface - VT500-7 -- HOST_NAME via SECURE SHELL
Via the Connection Setup drop-down:
Host name: HOST_NAME
SSH config scheme: AutoKeyLogin
User name: username
Via the Security... button:
General tab:
Port number: 22
User Authentication: [x] Public Key
[x] Password
User Keys tab:
Use Name Type Location
[x] username1user DSA C:\Documents\PathToSSHKey\.ssh
Host Keys tab:
Host Type Fingerprint
HOST_NAME, 111.1.111.11, 22 DSA 39:14:f3:123:fds:restOfFingerprint
There is more information available if the solution is possible but I have just not provided enough to solve it, so please ask.
Given that I have the host name, port, .ssh, and host key, is it possible to connect to and read from the VM that I am otherwise connecting to normally via the Reflection emulator?
NO. Reflection (another example is PuTTY) is just a dumb-terminal emulator, here using the (secure) SSH protocol to connect to some operating system. From the information provided we cannot even tell which OS; maybe OpenVMS, maybe some Unix. Most certainly not a 'VM', but a physical box: maybe an Alpha, Integrity, Sun, IBM or Intel server.
IF, big if, it is OpenVMS you would possibly see something like this flash by on entry:
XXX - HP rx2600 (1.50GHz/6.0MB) OpenVMS IA64 V8.3-1H1
Last interactive login on Thursday, 7-DEC-2017 13:23:19.83
Last non-interactive login on Wednesday, 6-DEC-2017 12:35:45.80
Most likely the username used is set up to always start a (shell) script which starts a menu, from which a program is activated that knows how to access the data records. IF it is OpenVMS, then the actual data is likely stored in RMS (indexed) files, but it could be in a proper (Oracle Rdb or RDBMS) database.
If bulk access to the data is needed then you need to talk to the system/application manager for the system 'HOST_NAME' and ask them about the application and its database.
You may find that there is FTP, ODBC, JDBC or native DB (OCI?) access to the data available already, or that this can be requested. Likely tools in this space are ConnX, Attunity Connect, and such.
First you'll need to find out which OS/platform/version, which application (3rd party? home grown? 4GL? Cobol? Basic?) and, ultimately, which database/storage method.
That's not to say that some terminal emulator cannot be 'tricked' (google 'screen scraping') into being programmed to fetch a series of data on command, but that will always be error-prone and laborious, and only practical for limited volumes.
You are better off trying to get proper data access.
Good luck! You'll need some.
Hein
I'm trying to understand how to use Salt with roles like Chef can be used, but I have some holes in my understanding that reading lots of docs has failed to fill at this point.
The principal issue is that I'm trying to manage roles with Salt, like Chef does, but I don't know how to appropriately set the pillar value. What I want to do is assign, either by script or by hand, a role to a Vagrant box and then have the machine install the appropriate files on it.
What I don't understand is how I can set something that will tell salt-master what to install on the box given the particular role I want it to be. I tried setting a
salt.pillar({"site" => "my_site1"})
in the Vagrantfile and then checking it in the salt state top.sls file with
{% if pillar.get('site') == 'my_site1' %}
<do some stuff>
{% endif %}
But this doesn't work. What's the correct way to do this?
So, it becomes easier when matching ids in pillars. First, set the minion_id to something identifiable, e.g. test-mysite1-db, and above all unique (hence something like your username's initials at the end).
In the top.sls in /srv/pillar do:
base:
  '<regex_matching_id1>':
    - webserver.mysite1
  '<regex_matching_id2>':
    - webserver.mysite2
And then in webserver.mysite1 put, for example:
role: mysiteid1
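Concretely, with the example minion id from above, the pillar top file could match on a regular expression like this (note the explicit - match: pcre line; without it, a top-file pattern is treated as a shell-style glob):

base:
  'test-mysite1-.*':
    - match: pcre
    - webserver.mysite1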
Then in /srv/state/top.sls you can match with jinja, or just with:
base:
  'role:mysiteid1':
    - match: pillar
    - state
    - state2
Hence, roles derived from the ids. Works for me.
How to implement and utilize roles is intentionally vague in the salt documentation. Every permutation of how to implement, and then how to use, roles carries with it trade-offs -- so it is up to you to decide how to do it.
In your scenario I assume that you want rather singular 'roles' or purposes associated with a VirtualBox VM, and then to have state.highstate run the states associated with that role.
If the above is correct, I would go with grains rather than pillars while learning salt for the sake of simplicity.
On each minion
Just add role: webserver to /etc/salt/grains and restart the salt-minion.
On the master
Update the /srv/state/top.sls file to associate state .sls files with that grain:
base:
  '*':
    - fail2ban
  'role:webserver':
    - match: grain
    - nginx
  'role:dbserver':
    - match: grain
    - mysql
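If you'd rather not edit /etc/salt/grains by hand on each minion, the same grain can also be set from the master with the grains.setval execution function; a quick sketch (the minion id is illustrative):

salt 'test-mysite1-db' grains.setval role webserver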
I am connecting to a Teradata database through ODBC with Stata on an Ubuntu server (12.04 LTS). Everything works fine, except that I have my TD userid and password stored in the .odbc.ini file, which seems like a terrible idea. The alternative is to enter them in Stata, which seems even worse and is awkward. Is there a way to do this more securely? The login info that I use to ssh into the server is synced with the TD database. It seems that it should be possible to pass that information along.
In ODBC terms you do not need to store usernames/passwords in any of your ODBC ini files. Both the ODBC SQLConnect and SQLDriverConnect calls support passing in the username/password at the time they are called.
SQLDriverConnect would need something in your InConnectionString like "DSN=YourDataSourceName;UID=username;PWD=password".
You could go one step further and pass in the whole DSN as a command-line argument, meaning that you would not need an ODBC data source in an ini file at all. I'm sure one of the forum readers can post a sample for you from Teradata.
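In the meantime, a hedged illustration of what such a DSN-less connection string might look like; the driver name and host are assumptions, so check how your Teradata ODBC driver is registered in odbcinst.ini:

DRIVER={Teradata};DBCName=tdprod.example.com;UID=myuser;PWD=mypass;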
As for passing in the user name and password from your SSH login: your application would need to capture that and pass it to ODBC.
If you want to establish a finer grain of security around your odbc.ini file, or other files on your Ubuntu server that may contain user credentials, I would strongly suggest the use of Access Control Lists (ACLs). Beyond the typical Owner::Group::World permissions, you can specify permissions down to a specific user as to whether they are allowed or denied an explicit permission for a given file; a short sketch follows.
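A minimal sketch, assuming a shared ~/.odbc.ini and a hypothetical second user who needs read access (paths and names are illustrative; the acl package provides setfacl/getfacl on Ubuntu):

chmod 600 ~/.odbc.ini                    # baseline: owner-only access
setfacl -m u:stata_batch:r ~/.odbc.ini   # grant one named user read access
getfacl ~/.odbc.ini                      # verify the resulting ACL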
Other options regarding security on Teradata include the use of LDAP authentication if your environment supports it. Configuring LDAP on Teradata is beyond the scope of SO and in many cases a billable, professional services engagement with Teradata's Information Security CoE.