Real-world practice to store secrets and config in Lua scripts for nginx

I have some Lua scripts embedded in nginx. In one of those scripts I connect to my Redis cache and do it like so:
local redis_host = "127.0.0.1"
local redis_port = 6379
...
local ok, err = red:connect(redis_host, redis_port);
I do not like this, because I have to hard-code the host and port. Should I instead use something like an .ini file, parse it in Lua, and get the configuration from there? How is this solved in real-world practice?
Besides, in my scripts I use RSA encryption and decryption. For example, right now I do it like so:
local public_key = [[
-----BEGIN PUBLIC KEY-----
MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7udJ++o3T6lgbFwWfaD/9xUMEZMtbm
GvbI35gEgzjrRcZs4X3Sikm7QboxMJMrfzjQxISPLtsy9+vhbITQNVkCAwEAAQ==
-----END PUBLIC KEY-----
]]
...
local jwt_obj = jwt:verify(public_key, token)
Once again, what I do not like about this is that I have to hard-code the public key. Is it done like this in production, or are other techniques used to store secrets (such as environment variables)?

I'm sure some people do it this way in production. It is all a matter of what you're comfortable with and what your standards are. Some things that should determine your approach here:
What is the sensitivity of the data and risk if it were to be available publicly?
What is your deployment process? If you use an infrastructure as code approach or some type of config management then you surely don't want these items sitting embedded within code.
To address the first item, the sensitivity of the data, you'd need to consider the different ways of securing the secrets. Standard secret stores like AWS Parameter Store and CredStash are built for exactly this purpose; you'd pull the secrets at runtime and load them into memory.
For the second item, you could use a config file that is replaced per deployment.
To get the best of both worlds, you'd need to combine both a secure mechanism for storing secrets and a configuration approach for deployments/updates.
As was mentioned in the comments, there are books written on both of these topics, so a single SO answer is unlikely to cover them in enough detail.
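That said, one common lightweight approach with OpenResty is to keep the values out of the Lua source and read them from environment variables. A minimal sketch, assuming the variable names REDIS_HOST and REDIS_PORT (they are illustrative, and the env directive must whitelist them in nginx.conf or os.getenv will return nil):

-- nginx.conf must whitelist the variables at the top level, e.g.:
--   env REDIS_HOST;
--   env REDIS_PORT;
local redis_host = os.getenv("REDIS_HOST") or "127.0.0.1"
local redis_port = tonumber(os.getenv("REDIS_PORT")) or 6379

local redis = require "resty.redis"
local red = redis:new()

local ok, err = red:connect(redis_host, redis_port)
if not ok then
    ngx.log(ngx.ERR, "failed to connect to redis: ", err)
    return
end

The same idea applies to the public key: load it from a file path or environment variable injected at deploy time rather than embedding it in the script.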

Related

Ansible - properly encrypting/decrypting and using file content (not YAML)

So I created an encrypted key using ansible-vault create my.key.
Then I use it as a var:
my_key: "{{ lookup('file','{{ inventory_dir }}/group_vars/my.key') }}"
And then I use it when running my playbook, like this:
- name: Create My Private Key
  ansible.builtin.copy:
    content: "{{ secrets.my_key }}"
    dest: "{{ secrets_key }}"
  no_log: true
It does properly create the key on the remote host, and it is decrypted there. But is this the right way to do it? Does it decrypt at the right time, so that I'm not exposing sensitive data where it shouldn't be?
I thought encrypted variables must also have the !vault keyword specified. But if I do this for my my_key, I get this error:
fatal: [v14-test]: FAILED! => {"msg": "input is not vault encrypted data. "}
So this got me worried that the file is decrypted at the wrong time, or maybe the message is misleading, or something else is going on.
Is this the right way to do it, or should I do it differently?
Firstly, a definitive answer as to whether this approach is appropriate is directly linked to what you want to achieve from the encryption. So all an answer here can do is explain how Vault works; then you can decide whether it is right for your requirements.
Fundamentally what you are doing is a 'correct' usage of Ansible Vault, although I have not previously seen it used in quite this workflow (typically I have seen create used for encrypting YAML files of vars).
Using your method, your secret is turned into ciphertext and stored in my.key (which you can confirm with basic text tools such as cat, less or more). You will see that the first line of the file contains metadata that allows Ansible to understand the file contents and decrypt them on demand.
At runtime, Ansible will then use the password/key for the encrypted file (accessed through a number of methods) to decrypt the file contents into plain text and then store it in the variable my_key for use during the play.
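For illustration, a file produced by ansible-vault create looks roughly like this when viewed with cat (the header line is the real Vault format marker; the body is ciphertext and will differ for you):

$ cat my.key
$ANSIBLE_VAULT;1.1;AES256
<hex-encoded ciphertext, wrapped over several lines>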
A non-exhaustive list of things to consider when determining if Ansible Vault is the right approach for you:
Ansible Vault encryption is purely designed to protect secrets at rest (i.e. when they are stored on your hard disk)
At run time, the secrets are converted into plain text and treated like any other variable/string data; the file on disk still contains only ciphertext, so the plaintext is accessible only within the running Ansible process (i.e. on a multi-user system, nobody can view the plaintext simply by looking inside the my.key file; however, depending on their level of access, their skills, and what your Ansible tasks are doing, they may be able to extract the plaintext from the running process).
Given that inside the process the data is just plain text, it is vulnerable to leakage, for example by writing the contents out into a log file (check out the Ansible no_log option).
At run time, Ansible needs some way to access the key necessary to decrypt the ciphertext. It provides a variety of methods, including prompting the user, reading it from a file on disk, reading it from an environment variable, or using scripts/integrations to pull it from another secrets-management tool (see the examples after this list). Careful thought needs to be given to which option you choose, relative to what you want the encryption to achieve (e.g. if your goal is to protect your data in the event that your laptop gets stolen, then storing the key in a file on the same system renders the whole exercise pointless). Quite often, with the more sophisticated methods, you can still end up in a 'chicken and egg' situation, again relative to what your goal from using encryption is.
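To make those options concrete, a few standard ways of supplying the vault password at run time (these are stock ansible-playbook options; the file paths are just examples):

# prompt interactively
ansible-playbook site.yml --ask-vault-pass

# read the password from a file (protect this file!)
ansible-playbook site.yml --vault-password-file ~/.vault_pass.txt

# or point Ansible at the file via an environment variable
export ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass.txt
ansible-playbook site.yml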
I might be talking complete cobblers or be a nefarious individual trying to sow disinformation, so read the docs thoroughly if the value of the secrets is significant to you :)
Unfortunately there is no getting away from the fact that genuinely good security is harder to achieve than the illusion of good security :|

Mock real gRPC server responses

We have a microservice that needs to be integration tested (real calls, but no network communication with anything outside of the test namespace in kubernetes) in our pipeline. It also relies on an external gRPC server which we have no control over.
Above is a picture of what we'd like to have happen. The white box on the left is code that provides the Microservice Boundary with 'external' data. It then keeps calling the Code via REST until it gets back the proper number of records or it times out. The Code pulls records from an internal database, as well as data associated with those records from a gRPC call. Since we do not own the gRPC service, but are doing integration tests, we need a few pre-defined responses to the two gRPC services we call (blue box).
Since our integration tests are self-contained right now, and we don't want to write an entirely new gRPC server implementation just to mimic the calls, is there a way to stand up a real gRPC server and configure it to return canned responses? The requirement is pretty much like a mock setup, except with an actual server.
We need to be able to:
give the server multiple proto files to interpret and have it expose those as endpoints. Proto files must be able to have different package names
using files we can store in source control, configure the responses to each call
able to run in a linux docker container (no windows)
I did find GripMock, which seemed almost exactly what we need, but it only serves one proto file per container. It supposedly can serve more than one, but I can't get that to work, and their example that serves two files implies each proto file must have the same package name, which will likely never happen in our scenarios. In the meantime we are using it, but if we have 10 gRPC call dependencies, we now have to run 10 GripMock servers.
Wikipedia contains a list of API mocking tools. Looking at that list today there is a commercial tool that supports gRPC called Traffic Parrot which allows you to create gRPC mocks based on your Proto files. You can give it multiple proto files, store the mocks in Git and run the tool in Docker.
There are also open-source tools like GripMock, but it does not generate the stubs from your Proto files; you have to create them manually. Also, as of today the project has not been keeping up with Proto and gRPC developments, e.g. the package-name issue you discovered yourself above (it works only if the package names in the different proto files are the same). There are a few other open-source tools like grpc-wiremock, grpc-mock or bloomrpc-mock, but they still lack widespread adoption and hence might be risky to adopt for an important enterprise project.
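If you do stay on GripMock for now, the manually written stubs are plain JSON files that you can keep in source control and point the container at with its --stub flag. A rough sketch based on my recollection of its README (the service, method and field names below are placeholders, so check the exact format against the GripMock version you run):

{
  "service": "RecordService",
  "method": "GetRecordDetails",
  "input": {
    "equals": { "record_id": "123" }
  },
  "output": {
    "data": { "status": "ACTIVE", "owner": "test-customer" }
  }
}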
Keep in mind that any mock you generate will only be a test double; it will not replicate the full behaviour of the system the Proto file corresponds to. If you also want to partially replicate the semantics of the messages, consider recording real gRPC messages to create the mocks; that way you can see sample data as well.
Take a look at this JS library which hopefully does what you need:
https://github.com/alenon/grpc-mock-server
Usage example:
private static readonly PROTO_PATH: string = __dirname + "/example.proto"; // note the path separator
private static readonly PKG_NAME: string = "com.alenon.example";
private static readonly SERVICE_NAME: string = "ExampleService";
...
const implementations = {
    // stubbed implementation of the service's "ex1" RPC
    ex1: (call: any, callback: any) => {
        const response: any =
            new this.proto.ExampleResponse.constructor({msg: "the response message"});
        callback(null, response);
    },
};
this.server.addService(PROTO_PATH, PKG_NAME, SERVICE_NAME, implementations);
this.server.start();

Using encrypted variable with Ansible-Vault for network automation

I have searched through lots of tutorials on the web and YouTube, but no luck.
I want to configure a Cisco switch via Ansible. I already have it set up and it works flawlessly, but I want to store the passwords (for vty lines, console, enable secret, ...) encrypted with Ansible Vault as variables, ideally in the hosts file, so I can access them in my .yml files. I want them in the hosts file because we have different passwords for ASW, DSW and CSW, so it would be easier to manage.
I generated an encrypted variable on the CLI:
ansible-vault encrypt_string enable_password --ask-vault-pass
I copied the value into the variable in /etc/ansible/hosts:
...
[2960-X:vars]
ansible_become=yes
ansible_become_method=enable
ansible_network_os=ios
ansible_user=admin
enable_password= !vault |
$ANSIBLE_VAULT;1.1;AES256
.....
In config.yml:
- name: Set enable password
  ios_config:
    lines:
      - enable secret "{{ enable_password }}"
Right now, the password ends up being set to " !vault |".
I am not sure this is even best practice; the recommendations I found were all about server automation, not network automation.
I'm running Ansible 2.8.0
Any help is appreciated, thank you.
Let me quote from Variables and Vaults:
When running a playbook, Ansible finds the variables in the unencrypted file and all sensitive variables come from the encrypted file.
A best practice approach for this is to start with a group_vars/ subdirectory named after the group. Inside of this subdirectory, create two files named vars and vault. Inside of the vars file, define all of the variables needed, including any sensitive ones. Next, copy all of the sensitive variables over to the vault file and prefix these variables with vault_. You should adjust the variables in the vars file to point to the matching vault_ variables using jinja2 syntax, and ensure that the vault file is vault encrypted.
This scheme isn't limited to group_vars/ only and can be applied to any place where the variables come from.
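Applied to your inventory, a minimal sketch of that layout might look like this (the group name matches your [2960-X] group; file names and the password value are illustrative):

# group_vars/2960-X/vars.yml  -- stays in plain text and is safe to commit
enable_password: "{{ vault_enable_password }}"

# group_vars/2960-X/vault.yml -- encrypt this whole file with:
#   ansible-vault encrypt group_vars/2960-X/vault.yml
vault_enable_password: MySuperSecretEnable

Your task then keeps referencing {{ enable_password }} exactly as in your config.yml, and the inventory no longer needs the inline !vault block.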

What is the best means of securely delivering minion specific files using Salt?

I have a number of files that I need to transfer to specific minion hosts in a secure manner, and I would like to automate that process using Salt. However, I am having trouble figuring out the best means of implementing a host-restricted transfer.
The Salt fileserver works great for non-host-specific transfers. However, some of the files I need to transfer are customer-specific, so I need to ensure that they are only accessible from specific hosts. Presumably Pillar would be the ideal candidate for minion-specific restrictions, but I am having trouble finding a way to specify file transfers using Pillar as the source.
As far as I can tell, Pillar only supports SLS-based dictionary data, not file transfers. I've tried various combinations of file.managed state specifications with paths constructed in various convoluted ways (including salt://_pillar/xxx), but thus far I have not been able to access anything other than token data defined within an SLS file.
Any suggestions for how to do this? I am assuming that secure file transfers should be a common enough need that there should be a standard means of doing it, as opposed to writing a custom function.
The answer depends on what exactly you're trying to secure. If only a part of the files involved are "sensitive" (for example, passwords in configuration files), you probably want to use a template that pulls the sensitive parts in from pillar:
# /srv/salt/app/files/app.conf.jinja
[global]
user = {{ salt['pillar.get']("app:user") }}
password = {{ salt['pillar.get']("app:password") }}
# ...and so on
For this case you don't need to care if the template itself is accessible to minions.
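The matching pillar data would then live in an SLS that is targeted only at the relevant minions, for example (paths, minion IDs and values below are illustrative):

# /srv/pillar/top.sls
base:
  'appserver*':
    - app

# /srv/pillar/app.sls
app:
  user: appuser
  password: s3cr3tpassword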
If the entire file(s) involved are sensitive, then I think you want to set up the file_tree external pillar, and use file.managed with the contents_pillar option. That's not something I've worked with, so I don't have a good example.
Solution Synopsis: Using PILLAR.FILE_TREE
A: On your master, set up a directory from which you will serve the private files (e.g. /srv/salt/private).
B: Beneath that create a "hosts" subdirectory, and then beneath that create a directory for each of the hosts that will have private files.
/srv/salt/private/hosts/hostA
/srv/salt/private/hosts/hostB
… where hostA and hostB are the ids of the target minions.
See the docs if you want to use node-groups instead of host ids.
C: Beneath the host dirs, include any files you want to transfer via pillar.
echo 'I am Foo!' > /srv/salt/private/hosts/hostA/testme
D: In your master config file (e.g: /etc/salt/master), include the following stanza:
ext_pillar:
  - file_tree:
      root_dir: /srv/salt/private
      follow_dir_links: False
      keep_newline: True
      debug: True
E: Create a salt state file to handle the transfer.
cat > /srv/salt/files/base/foo.sls << END
/tmp/pt_test:
  file.managed:
    - contents_pillar: testme
END
F: Refresh the pillar data (e.g. salt hostA saltutil.refresh_pillar), and then run your state command:
salt hostA state.apply foo
Following the last step, hostA should have a file named /tmp/pt_test that contains the text “I am Foo!”.

AWS Auto Scaling Launch Configuration Encrypted EBS Cloud Formation Example

I am creating a CloudFormation template that will have an ELB. In the Auto Scaling launch configuration, I want to add an encrypted EBS volume, but I couldn't find an encrypted property within BlockDeviceMapping. How can I attach an encrypted EBS volume to an EC2 instance through an Auto Scaling launch configuration?
For some strange reason there is no such property when using launch configurations; however, it is there when using BlockDeviceMappings with plain EC2 instances. See:
launchconfig-blockdev vs ec2-blockdev
So you'll either have to use simple instances instead of autoscaling groups, or you can try this workaround:
SnapshotIds are accepted for the launch configuration's block devices too, and as stated here, "Snapshots that are taken from encrypted volumes are automatically encrypted. Volumes that are created from encrypted snapshots are also automatically encrypted."
Create a snapshot from an encrypted empty EBS volume and use it in the CloudFormation template. If your template should work in multiple regions then of course you'll have to create the snapshot in every region and use a Mapping in the template.
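In the template that might look something like the sketch below (YAML syntax; the snapshot IDs and device name are placeholders you would replace with the ones you created per region):

Mappings:
  EncryptedSnapshotByRegion:
    us-east-1:
      SnapshotId: snap-0123456789abcdef0   # placeholder
    eu-west-1:
      SnapshotId: snap-0fedcba9876543210   # placeholder

# ...and in the launch configuration's BlockDeviceMappings:
- DeviceName: /dev/xvdb
  Ebs:
    SnapshotId: !FindInMap [EncryptedSnapshotByRegion, !Ref "AWS::Region", SnapshotId]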
As Marton says, there is no such property (unfortunately it often takes a while for CloudFormation to catch up with the main APIs).
Normally each encrypted volume you create will have a different key. However, when using the workaround mentioned (of using an encrypted snapshot) the resulting encrypted volumes will inherit the encryption key from the snapshot and all be the same.
From a cryptography point of view this is a bad idea as you potentially have multiple, different volumes and snapshots with the same key. If an attacker has access to all of these then he can potentially use differences to infer information about the key more easily.
An alternative is to write a script that creates and attaches a new encrypted volume at the boot time of an instance. This is fairly easy to do. You'll need to give the instance permissions to create and attach volumes, and either have the AWS CLI tool installed or a library for your preferred scripting language. Once you have that, you can, from the instance that is booting, create a volume and attach it.
You can find a starting point for such a script here: https://github.com/guardian/machine-images/blob/master/packer/resources/features/ebs/add-encrypted.sh
There is an AutoScaling EBS Block Device type which provides the "Encrypted" option:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig-blockdev-template.html
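With that property, a minimal launch-configuration fragment looks roughly like the sketch below (YAML; the resource name, AMI ID, device name and sizes are placeholders):

Resources:
  AppLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-12345678          # placeholder
      InstanceType: t3.micro
      BlockDeviceMappings:
        - DeviceName: /dev/xvdb
          Ebs:
            VolumeSize: 20
            VolumeType: gp2
            Encrypted: true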
Hope this helps!
AWS recently announced Default Encryption for New EBS Volumes. You can enable this per region via
EC2 Console > Settings > Always encrypt new EBS volumes
https://aws.amazon.com/blogs/aws/new-opt-in-to-default-encryption-for-new-ebs-volumes/
