I have searched through lots of tutorials on the web and on YouTube, but no luck.
I want to configure Cisco switches via Ansible. I already have it set up and it works flawlessly, but I want to store the passwords (for vty lines, console, enable secret, ...) as variables, ideally in the hosts file and encrypted with Ansible Vault, so I can access them in my .yml file. I want them in the hosts file because we have different passwords for ASW, DSW and CSW, so it would be easier to manage.
I generated an encrypted variable on the CLI:
ansible-vault encrypt_string enable_password --ask-vault-pass
I copied the value into the variable in /etc/ansible/hosts:
...
[2960-X:vars]
ansible_become=yes
ansible_become_method=enable
ansible_network_os=ios
ansible_user=admin
enable_password= !vault |
$ANSIBLE_VAULT;1.1;AES256
.....
In config.yml:
- name: Set enable password
  ios_config:
    lines:
      - enable secret "{{ enable_password }}"
Right now, the password is going to be set as " !vault |"
I am not sure if this is even best practice; I have read recommendations on this, but all I could find was about server automation, not network devices.
I'm running Ansible 2.8.0
Any help is appreciated, thank you.
Let me quote from Variables and Vaults:
When running a playbook, Ansible finds the variables in the unencrypted file and all sensitive variables come from the encrypted file.
A best practice approach for this is to start with a group_vars/ subdirectory named after the group. Inside of this subdirectory, create two files named vars and vault. Inside of the vars file, define all of the variables needed, including any sensitive ones. Next, copy all of the sensitive variables over to the vault file and prefix these variables with vault_. You should adjust the variables in the vars file to point to the matching vault_ variables using jinja2 syntax, and ensure that the vault file is vault encrypted.
This scheme isn't limited to group_vars/ only and can be applied to any place where the variables come from.
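Applied to your inventory, a minimal sketch of that layout for the 2960-X group might look like the following (the file names and the vault_ prefix follow the quoted convention; the password value is only a placeholder):

# group_vars/2960-X/vars  (plain YAML, safe to read)
enable_password: "{{ vault_enable_password }}"

# group_vars/2960-X/vault  (encrypt the whole file: ansible-vault encrypt group_vars/2960-X/vault)
vault_enable_password: MyEnableSecret123

Because the whole vault file is encrypted, you also avoid embedding a multi-line !vault value in the INI-style hosts file, which is what leaves you with the literal " !vault |" string as the password.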
So I created an encrypted key file using ansible-vault create my.key.
Then I use it as a variable:
my_key: "{{ lookup('file','{{ inventory_dir }}/group_vars/my.key') }}"
And then when running my playbook, like this:
- name: Create My Private Key
  ansible.builtin.copy:
    content: "{{ secrets.my_key }}"
    dest: "{{ secrets_key }}"
  no_log: true
It does properly create the key on the remote host, and there it is unencrypted. But I'm wondering whether this is the right way to do it. Does it get decrypted at the right time, and am I not exposing sensitive data where it should not be?
I thought encrypted variables must also have the !vault keyword specified. But if I do this for my my_key, I get this error:
fatal: [v14-test]: FAILED! => {"msg": "input is not vault encrypted data. "}
So this got me worried that the file is decrypted at the wrong time, or maybe the message is misleading, or something.
Is this the right way to do it, or should I do it differently?
Firstly, a definitive answer as to whether this approach is appropriate is directly linked to what you want to achieve from encryption. Therefore, all the answers here can do is explain how Vault works, and then you can decide if it is right for your requirements.
Fundamentally what you are doing is a 'correct' usage of Ansible Vault, although I have not previously seen it used in quite this workflow (typically I have seen create used for encrypting YAML files of vars).
Using your method, your secret is turned into ciphertext and stored in my.key (which can be confirmed with basic text tools such as cat, less or more). You will see that the first line of the file contains a bunch of metadata that allows Ansible to understand the file contents and decrypt them on demand.
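For example (using the file from the question; ansible-vault view is the standard subcommand for inspecting an encrypted file):

# Show the vault header; everything after it is ciphertext
head -1 my.key
# $ANSIBLE_VAULT;1.1;AES256

# Decrypt to stdout on demand (prompts for the vault password)
ansible-vault view my.key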
At runtime, Ansible will then use the password/key for the encrypted file (accessed through a number of methods) to decrypt the file contents into plain text and then store it in the variable my_key for use during the play.
A non-exhaustive list of things to consider when determining if Ansible Vault is the right approach for you:
Ansible Vault encryption is purely designed to protect secrets at rest (i.e. when they are stored on your hard disk)
At run time, the secrets are converted into plain text and treated like any other variable/string data; however, the file on disk still contains ciphertext, so the plaintext is only accessible within the running Ansible process (i.e. on a multi-user system, at no point can anybody view the plaintext simply by looking inside the my.key file; however, depending on their level of access, their skills and what your Ansible tasks are doing, they may be able to access the plaintext from the running process).
Given that inside the process the data is just plain text, it is vulnerable to leakage (for example, by writing the contents out into a log file; check out the Ansible no_log option).
At run time, Ansible needs some way to access the key necessary to decrypt the ciphertext. It provides a variety of methods, including prompting the user, reading it from a file stored on disk, reading it from an environment variable, and using scripts/integrations to pull it from another secrets-management tool. Careful thought needs to be given to which option is chosen, relative to what you are looking to achieve from the encryption (e.g. if your goal is to protect your data in the event that your laptop gets stolen, then storing the key in a file on the same system renders the whole operation pointless). Quite often, with more sophisticated methods, you can still end up in a 'chicken and egg' situation, once more relative to what your goal from using encryption is.
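For illustration, a few of the standard ways of supplying the vault password at run time (the playbook and file names are placeholders):

# Prompt interactively
ansible-playbook site.yml --ask-vault-pass

# Read the vault password from a file on disk (mind its permissions)
ansible-playbook site.yml --vault-password-file ~/.vault_pass.txt

# Point Ansible at the password file via an environment variable
ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass.txt ansible-playbook site.yml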
I might be talking complete cobblers or be a nefarious individual trying to sow disinformation, so read the docs thoroughly if the value of the secrets is significant to you :)
Unfortunately there is no getting away from the fact that good security is generally harder to achieve than the illusion of good security :|
I am setting up a platform whereby data is stored on IPFS, with access then given to some of it (or in some cases all of it) through a front-end UI.
Storing on IPFS is straightforward, as is encrypting.
First I encrypt the file:
gpg --encrypt --recipient "myUserName" "myVideo.mp4"
Then I save the encrypted file:
ipfs add "myVideo.mp4.gpg"
So far so good. Recovering it is easy, as is decrypting:
ipfs cat _hashcode > "myVideo.mp4.gpg"
gpg "myVideo.mp4.gpg"
My question, though, is that this only works if I want to encrypt the file such that only I can decrypt it. How can I allow a certain group of users to access any given file, possibly even ALL users on the platform, but not people outside of the platform?
I know it is possible to set up groups in the gpg.conf file, but I won't know ahead of time who all of the users are that should have access, and that may change over time as well.
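For reference, this is roughly what a group entry in gpg.conf looks like (the group name and key IDs below are made up); any --recipient matching the group name expands to all of its members:

# ~/.gnupg/gpg.conf
group platform = 0x1111AAAA 0x2222BBBB 0x3333CCCC

# then, to encrypt for everyone in the group:
gpg --encrypt --recipient platform "myVideo.mp4"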
Can anyone help me with this please?
Thanks!
Can someone please give me an example of the corporatePasswordStore that is mentioned here:
https://docs.corda.net/node-administration.html#id2
I've been doing a lot of research over the last few days on how to hide the plain-text passwords in node.conf; it's a new topic for me, and this is what I've come up with so far:
1. Create a priv/pub key pair with gpg2.
2. Create a password store with pass (using the key that I generated earlier).
3. Store all the plain-text passwords from node.conf inside that password store.
4. Replace the plain-text passwords in node.conf with environment variables (e.g. keyStorePassword = ${KEY_PASS}).
5. Create a script file (e.g. start_node.sh, sketched below) that will do the following:
a. Set an environment variable to one of the passwords from the password store: export KEY_PASS=$(pass node.conf/keyStorePassword)
b. Start the node: java -jar corda.jar
c. Restart the gpg agent to clear the cached passwords, otherwise you can get any password from the store without entering the passphrase: gpgconf --reload gpg-agent
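A minimal sketch of what I mean by start_node.sh (the pass entry name and the KEY_PASS variable match steps 4 and 5a above; everything else is illustrative):

#!/usr/bin/env bash
# start_node.sh - rough sketch, not a hardened script

# 5a. Pull the password out of the pass store into the variable
#     that node.conf references as ${KEY_PASS}.
export KEY_PASS=$(pass node.conf/keyStorePassword)

# 5b. Start the node; it picks up KEY_PASS via node.conf substitution.
java -jar corda.jar

# 5c. Once the node exits, clear gpg-agent's cached passphrase so the
#     password store is locked again.
gpgconf --reload gpg-agent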
Pros:
Using the bash file start_node.sh allows setting many passwords as environment variables at once (e.g. keyStore, trustStore, DB passwords, RPC user password).
Since we are running the bash file with bash start_node.sh and not source start_node.sh, the environment variables are not exposed to the parent process (i.e. you cannot read those environment variable values from the terminal where you ran bash start_node.sh).
History commands are not enabled by default inside bash scripts.
Cons:
You can no longer have a service that automatically starts on VM startup, because the start_node.sh script will ask for the passphrase of the gpg key that was used to encrypt the passwords inside the password store (i.e. it's an interactive script).
Am I over-complicating this? Do you have an easier approach? Is it even necessary to hide the plain passwords?
I'm using Corda open source, so I can't use the Configuration Obfuscator (which is for Enterprise only): https://docs.corda.r3.com/tools-config-obfuscator.html#configuration-obfuscator
I wrote a detailed article here: https://blog.b9lab.com/enabling-corda-security-with-nodes-configuration-file-412ce6a4371c, which covers the following topics:
Enable SSL for database connection.
Enable SSL for RPC connection.
Enable SSL for Corda webserver.
Enable SSL for Corda standalone shell.
Hide plain text passwords.
Set permissions for RPC users.
I have a number of files that I need to transfer to specific minion hosts in a secure manner, and I would like to automate that process using Salt. However, I am having trouble figuring out the best means of implementing a host restricted transfer.
The Salt fileserver works great for non-host-specific transfers. However, some of the files that I need to transfer are customer-specific, so I need to ensure that they are only accessible from specific hosts. Presumably Pillar would be the ideal candidate for minion-specific restrictions, but I am having trouble figuring out a means of specifying file transfers using Pillar as the source.
As far as I can tell, Pillar only supports SLS-based dictionary data, not file transfers. I've tried various combinations of file.managed state specifications with paths constructed in various convoluted ways (including salt://_pillar/xxx), but thus far I have not been able to access anything other than token data defined within an SLS file.
Any suggestions for how to do this? I am assuming that secure file transfers should be a common enough need that there should be a standard means of doing it, as opposed to writing a custom function.
The answer depends on what exactly you're trying to secure. If only a part of the files involved are "sensitive" (for example, passwords in configuration files), you probably want to use a template that pulls the sensitive parts in from pillar:
# /srv/salt/app/files/app.conf.jinja
[global]
user = {{ salt['pillar.get']("app:user") }}
password = {{ salt['pillar.get']("app:password") }}
# ...and so on
For this case you don't need to care if the template itself is accessible to minions.
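For completeness, the matching pillar data for that template could be defined in an SLS along these lines (the app.sls name and the values are just placeholders; what matters is the app:user / app:password structure the template looks up):

# /srv/pillar/app.sls  (referenced from your pillar top.sls)
app:
  user: appuser
  password: supersecret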
If the entire file(s) involved are sensitive, then I think you want to set up the file_tree external pillar, and use file.managed with the contents_pillar option. That's not something I've worked with, so I don't have a good example.
Solution Synopsis: Using PILLAR.FILE_TREE
A: On your master, set up a directory from which you will serve the private files (e.g. /srv/salt/private).
B: Beneath that, create a "hosts" subdirectory, and then beneath that create a directory for each of the hosts that will have private files.
/srv/salt/private/hosts/hostA
/srv/salt/private/hosts/hostB
… where hostA and hostB are the ids of the target minions.
See the docs if you want to use node-groups instead of host ids.
C: Beneath the host dirs, include any files you want to transfer via pillar.
echo 'I am Foo!' > /srv/salt/private/hosts/hostA/testme
D: In your master config file (e.g. /etc/salt/master), include the following stanza:
ext_pillar:
  - file_tree:
      root_dir: /srv/salt/private
      follow_dir_links: False
      keep_newline: True
      debug: True
E: Create a salt state file to handle the transfer.
cat > /srv/salt/files/base/foo.sls << END
/tmp/pt_test:
  file.managed:
    - contents_pillar: testme
END
F: Run a pillar refresh, and then run your state command:
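For the refresh, the standard saltutil.refresh_pillar execution function should work (run it before the state command below):

salt hostA saltutil.refresh_pillar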
salt hostA state.apply foo
Following the last step, hostA should have a file named /tmp/pt_test that contains the text "I am Foo!".
http://hbase.apache.org/book.html#_server_side_configuration_3
I have checked the URL; with that approach the data is encrypted based on a Java java.security.KeyStore, but we need to keep the .jks file containing the master key on all of the HBase servers (all master and region servers have to have this file).
NOTE: Also, the password to open the keystore file is given in hbase-site.xml.
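For context, this is roughly the kind of configuration the linked section describes (property names as in the HBase book; the keystore path and password are placeholders):

<!-- hbase-site.xml -->
<property>
  <name>hbase.crypto.keyprovider</name>
  <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
</property>
<property>
  <name>hbase.crypto.keyprovider.parameters</name>
  <value>jceks:///path/to/hbase/conf/hbase.jks?password=changeme</value>
</property>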
For HDFS alone there is an option to keep the keystore file on the KMS server, but not for HBase; for now we still have to keep it in the local store.
I don't need the KMS option. I need some way to keep the value in a common place that can be accessed, instead of having the same file on all servers.
Is there any method/custom class available to get the master key from common storage such as a DB, Redis or ZooKeeper?
UPDATE #1
Someone has asked a similar question, but there is no solution for it: Encrypt HBase at-rest data in Cloud.