How to read environment variables from .env file rather than store in paw config - paw-app

Is there a way to have a Paw environment import configuration from a local file such as .env rather than save them in the Paw configuration itself? I'd like to check my Paw config into version control but do not want to leak auth tokens.

We are planning to add local environment variables that will only be accessible to you. For now, you can either set a File dynamic value in the environment variable to read from your local file and combine it with a regex (https://paw.cloud/docs/advanced/file-dynamic-value-anywhere), or you can use Secure dynamic values for sensitive data, which will be locally encrypted (https://paw.cloud/docs/security/obfuscate-and-encrypt).
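For illustration, a minimal sketch of the local-file approach, assuming a .env file with simple KEY=value lines; the AUTH_TOKEN name and the pattern are hypothetical, not something Paw prescribes:

# .env -- kept next to the project, excluded from version control
AUTH_TOKEN=replace-with-your-real-token

# Regex applied to the File dynamic value's output to capture only the token:
AUTH_TOKEN=(.*)

The Paw configuration then only stores the file reference and the pattern, so the token itself never ends up in version control.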

Related

firebase functions config - delete data

I added some data to the encrypted config service of firebase using the syntax described here.
Now that I don't need this anymore, I'd like to delete it from the config object. I understand that it's safely stored, I just don't want this config file to be polluted with information that I no longer need to use.
How do I delete data from the firebase functions config object?
The documentation you linked to has a section on additional environment commands.
firebase functions:config:unset key1 key2 removes the specified keys from the config
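As a hedged example (the key names are illustrative), removing keys you no longer need and then redeploying so the running functions pick up the change could look like:

firebase functions:config:unset someservice.key someservice.id
firebase deploy --only functions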

How to hide sensitive data from node.conf?

Can someone please give me an example for corporatePasswordStore that is mentioned here:
https://docs.corda.net/node-administration.html?fbclid=IwAR0gRwe5BtcWO0NymZVyE7_yMfthu2xxnU832vZHdbuv17S-wPXgb7iVZSs#id2
I've been doing a lot of research in the last few days on how to hide the plain passwords from node.conf; it's a new topic for me and this is what I came up with so far:
Create a priv/pub key with gpg2
Create a password store with pass (using the key that I generated earlier).
Store all the plain passwords from node.conf inside that password store.
Replace the plain passwords in node.conf with environment variables (e.g. keyStorePassword = ${KEY_PASS})
Create a script file (e.g. start_node.sh) that will do the following (a sketch of the full script follows these steps):
a. Set an environment variable to one of the passwords from the password store: export key_store_password=$(pass node.conf/keyStorePassword)
b. Start the node: java -jar corda.jar
c. Restart the gpg agent to clear the cached passwords, otherwise you can get any password from the store without passing the passphrase: gpgconf --reload gpg-agent
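Putting the steps together, a minimal sketch of such a start_node.sh, assuming the pass entries are named as above and that the ${...} placeholders in node.conf match the exported variable names (the exact names here are illustrative):

#!/bin/bash
# Export each secret from the pass store under the name referenced by node.conf
export KEY_PASS=$(pass node.conf/keyStorePassword)
export TRUST_PASS=$(pass node.conf/trustStorePassword)

# Start the node; the placeholders are resolved from the environment
java -jar corda.jar

# Drop the cached GPG passphrase so the store is locked again
gpgconf --reload gpg-agent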
Pros:
Using the bash file start_node.sh allows you to set many passwords as environment variables at once (e.g. keyStore, trustStore, db passwords, RPC user password).
Since we are running the bash file with bash start_node.sh and not source start_node.sh, the environment variables are not exposed to the parent process (i.e. you cannot read their values inside the terminal where you ran bash start_node.sh).
History commands are not enabled by default inside bash scripts.
Cons:
You can no longer have a service that automatically starts on VM startup, because the start_node.sh script will ask for the passphrase of the gpg key that was used to encrypt the passwords inside the password store (i.e. it's an interactive script).
Am I over-complicating this? Do you have an easier approach? Is it even necessary to hide the plain passwords?
I'm using Corda open source, so I can't use the Configuration Obfuscator (which is for Enterprise only): https://docs.corda.r3.com/tools-config-obfuscator.html#configuration-obfuscator
I wrote a detailed article here: https://blog.b9lab.com/enabling-corda-security-with-nodes-configuration-file-412ce6a4371c, which covers the following topics:
Enable SSL for database connection.
Enable SSL for RPC connection.
Enable SSL for Corda webserver.
Enable SSL for Corda standalone shell.
Hide plain text passwords.
Set permissions for RPC users.

Multiple Config File for Flyway

Let's say we have 10-15 microservices running, each with its own database. How do we maintain all of these config files on a single deployment server? Every DB server has different credentials, IP addresses and URLs. Will we manage all of them in a single file, or do we have to create a separate config file per microservice?
Each configuration refers to a single database, so you need to create one file per DB for properties such as the URL and credentials. Probably the best way to handle this is to have a repository of configurations named after their DBs, and have your deployment mechanism pick the appropriate one. However, you can keep a base configuration for common settings and supply only the connection details per database using the configFiles parameter, e.g.:
flyway migrate -configFiles=/usr/configurations/base.conf,/usr/configurations/db1.conf
will start from the base configuration and override any settings that also appear in the db1-specific configuration.
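As an illustration of how the two files might be split (the paths, values and shared settings are hypothetical; the property names are standard Flyway settings):

# /usr/configurations/base.conf -- settings shared by every service
flyway.locations=filesystem:/usr/configurations/migrations
flyway.connectRetries=3

# /usr/configurations/db1.conf -- connection details for one service's database
flyway.url=jdbc:postgresql://10.0.0.11:5432/service1_db
flyway.user=service1_user
flyway.password=service1_password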

Using encrypted variable with Ansible-Vault for network automation

I have searched lots of tutorials on the web and YouTube, but no luck.
I want to configure a Cisco switch via Ansible. I already have it set up and it works flawlessly, but I want to store the passwords (for vty lines, console, enable secret...) encrypted with Ansible Vault, ideally as variables in the hosts file so I can access them in my .yml file. I want them in the hosts file because we have different passwords for ASW, DSW and CSW, so it would be easier to manage.
I generated an encrypted variable on the CLI:
ansible-vault encrypt_string enable_password --ask-vault-pass
I copy the value to the variable in /etc/ansible/hosts:
...
[2960-X:vars]
ansible_become=yes
ansible_become_method=enable
ansible_network_os=ios
ansible_user=admin
enable_password= !vault |
$ANSIBLE_VAULT;1.1;AES256
.....
In config.yml:
- name: Set enable password
  ios_config:
    lines:
      - enable secret "{{ enable_password }}"
Right now, the password is going to be set as " !vault |"
I am not sure if this is even best practice; I read recommendations about this, but all I could find was about server automation, not networks.
I'm running Ansible 2.8.0
Any help is appreciated, thank you.
Let me quote from Variables and Vaults
When running a playbook, Ansible finds the variables in the unencrypted file and all sensitive variables come from the encrypted file.
A best practice approach for this is to start with a group_vars/ subdirectory named after the group. Inside of this subdirectory, create two files named vars and vault. Inside of the vars file, define all of the variables needed, including any sensitive ones. Next, copy all of the sensitive variables over to the vault file and prefix these variables with vault_. You should adjust the variables in the vars file to point to the matching vault_ variables using jinja2 syntax, and ensure that the vault file is vault encrypted.
This scheme isn't limited to group_vars/ only and can be applied to any place where the variables come from.
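As a hedged sketch of that layout applied to the inventory group from the question (file paths and the secret value are illustrative):

# group_vars/2960-X/vars -- plain YAML, safe to read
enable_password: "{{ vault_enable_password }}"

# group_vars/2960-X/vault -- encrypt the whole file with: ansible-vault encrypt group_vars/2960-X/vault
vault_enable_password: MyEnableSecret

The playbook keeps referring to enable_password, and Ansible resolves it through the vault-encrypted vault_enable_password at runtime.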

Is there a way to use both test keys localhost and live keys remote with firebase functions

I have a project where I set up keys as follows.
Live keys
functions:config:set stripe.secret="sk_live_..." stripe.publishable="pk_live_..."
Test keys
functions:config:set stripe.secret="sk_test_..." stripe.publishable="pk_test_..."
The application is live but still in its beta stage, so there are a lot of changes still being made in code.
So I want to avoid setting the keys each time I want to test out a new feature on localhost.
Is there a way to configure Firebase functions to correspond to different environments?
When on localhost, it should use the test keys, and when deployed, the live keys?
There isn't a special per-environment configuration. What you can do instead is use the unique ID of the project to determine which settings it should apply. Functions can read the deployed project ID from the process environment via GCP_PROJECT:
const project_id = process.env.GCP_PROJECT
The values you should use during development are a matter of opinion; do whatever suits you best.
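For example, a minimal sketch of that branching (the production project ID and the stripe_live / stripe_test config namespaces are illustrative assumptions, not something Firebase defines):

const functions = require('firebase-functions');

// GCP_PROJECT holds the ID of the project the function runs in
const projectId = process.env.GCP_PROJECT;
const isLive = projectId === 'my-app-prod'; // hypothetical production project ID

// Assumes both key sets were stored under separate namespaces, e.g.
// firebase functions:config:set stripe_live.secret="sk_live_..." stripe_test.secret="sk_test_..."
const config = functions.config();
const stripeSecret = isLive ? config.stripe_live.secret : config.stripe_test.secret;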
I believe you can make a .runtimeconfig.json file in your functions directory, which the emulators will read.
For example, first set your local values with firebase functions:config:set stripe.secret="sk_test_..."
Then run firebase functions:config:get > .runtimeconfig.json
When that file is present, in my experience your Firebase emulators will read from it, and you won't keep overwriting production config variables.
Docs: https://firebase.google.com/docs/functions/local-emulator#set_up_functions_configuration_optional
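An illustrative .runtimeconfig.json produced by that command (the values are placeholders) mirrors the config namespaces:

{
  "stripe": {
    "secret": "sk_test_...",
    "publishable": "pk_test_..."
  }
}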
