How to hide sensitive data from node.conf? - corda

Can someone please give me an example for corporatePasswordStore that is mentioned here:
https://docs.corda.net/node-administration.html?fbclid=IwAR0gRwe5BtcWO0NymZVyE7_yMfthu2xxnU832vZHdbuv17S-wPXgb7iVZSs#id2
I've been doing a lot of research in the last few days on how to hide the plain passwords from node.conf; it's a new topic for me and this is what I came up with so far:
Create a priv/pub key with gpg2
Create a password store with pass (using the key that I generated earlier).
Store all the plain passwords from node.conf inside that password store.
Replace the plain passwords in node.conf with environment variables (e.g. keyStorePassword = ${KEY_PASS})
Create a script file (e.g. start_node.sh) that will do the following (a sketch of it appears after this list):
a. Set an environment variable to one of the passwords from the password store: export KEY_PASS=$(pass node.conf/keyStorePassword)
b. Start the node: java -jar corda.jar
c. Restart the gpg agent to clear the cached passwords, otherwise you can get any password from the store without passing the passphrase: gpgconf --reload gpg-agent
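For illustration, a minimal sketch of such a start_node.sh, assuming node.conf references ${KEY_PASS} and ${TRUST_PASS} and the pass entries are named as above (all names here are examples, not anything required by Corda):
#!/bin/bash
# node.conf is assumed to contain lines such as:
#   keyStorePassword = ${KEY_PASS}
#   trustStorePassword = ${TRUST_PASS}
# pull the passwords out of the pass store (this prompts for the gpg passphrase)
export KEY_PASS=$(pass node.conf/keyStorePassword)
export TRUST_PASS=$(pass node.conf/trustStorePassword)
# start the node; the ${...} placeholders are resolved from the environment
java -jar corda.jar
# clear the gpg agent's cached passphrase so the store is locked again
gpgconf --reload gpg-agent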
Pros:
Using the bash file start_node.sh allows you to set many passwords as environment variables at once (e.g. keyStore, trustStore, db passwords, RPC user password).
Since we run the script with bash start_node.sh and not source start_node.sh, the environment variables are not exposed to the parent process (i.e. you cannot read their values inside the terminal where you ran bash start_node.sh).
History commands are not enabled by default inside bash scripts.
Cons:
You can no longer have a service that automatically starts on VM startup, because the start_node.sh script asks for the passphrase of the gpg key that was used to encrypt the passwords inside the password store (i.e. it's an interactive script).
Am I over-complicating this? Do you have an easier approach? Is it even necessary to hide the plain passwords?
I'm using Corda open source so I can't use the Configuration Obfuscator (which is for Enterprise only): https://docs.corda.r3.com/tools-config-obfuscator.html#configuration-obfuscator

I wrote a detailed article here: https://blog.b9lab.com/enabling-corda-security-with-nodes-configuration-file-412ce6a4371c, which covers the following topics:
Enable SSL for database connection.
Enable SSL for RPC connection.
Enable SSL for Corda webserver.
Enable SSL for Corda standalone shell.
Hide plain text passwords.
Set permissions for RPC users.

Related

How to change encrypted password in context file without using the studio

I am using a group context to configure the db connection. The password of the db has the password type. When deploying the job, the password is automatically encrypted in the default.properties under the contexts folder.
What if I want to change the password without using the Studio (on a client environment)? What can I use to encrypt the new password?
I was able to do it by creating a separate encryption job with a tJava component and the following code:
System.out.println(routines.system.PasswordEncryptUtil.encryptPassword(context.Password));
where context.Password is an input context variable of type String. When running the job, the user is prompted to enter a password and the encrypted Talend password is then printed. It will have the following format: enc:routine.encryption.key.v1:[encryptedPassword]. The routine encryption key can be modified if needed by following this link: https://help.talend.com/r/en-US/8.0/installation-guide-data-integration-windows/rotating-encryption-keys-in-talend-studio
There are actually a few ways to do this:
myJob.sh --context_param myPassword=pass123
Unfortunately, this can be seen by anyone via ps / Task Manager.
You can also edit the contexts/contextName.properties file and change the context parameters there. This way the context can only be seen if you have access to the file.
Theoretically both should be able to accept the cleartext/encrypted password.
Implicit context load feature can also be used to load contexts: https://help.talend.com/r/en-US/8.0/data-integration-job-examples/creating-job-and-defining-context-variables
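Putting the two answers together, a hypothetical end-to-end flow could look like this (the job names, the *_run.sh launcher scripts, and the context file path are illustrative, not names mandated by Talend):
# 1. run the separate encryption job (the tJava component above) and type the
#    clear-text password at its prompt; it prints the encrypted form, e.g.
#    enc:routine.encryption.key.v1:Xy9abc...
./EncryptPassword/EncryptPassword_run.sh
# 2. put that encrypted value into the deployed job's context file, which is
#    only visible to people with access to the file itself
echo 'Password=enc:routine.encryption.key.v1:Xy9abc...' >> MyJob/contexts/Default.properties
# 3. or pass it at launch time -- but then it can be seen via ps / Task Manager
./MyJob/MyJob_run.sh --context_param Password='enc:routine.encryption.key.v1:Xy9abc...'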

Ansible create encrypted file to an existing vault

I created an encrypted file with ansible vault like so:
ansible-vault create encrypted-example-file1
It seems that ansible creates a new vault here, because it asks for a new vault password. That is OK; I gave a password for the new vault to be created. This all seems to work fine: I give the password to get the file decrypted during the playbook run.
Now I want to create another encrypted file, and I would like to store it to the same vault I created earlier, so that I wouldn't need a separate password for the second file. How to do that? I tried to repeat the command:
ansible-vault create encrypted-example-file2
But the problem is that it again asks for a new vault password, which suggests it wants me to create yet another vault? I don't want to do that. So how can I use the existing Ansible vault for the new encrypted file? I tried reading the Ansible docs but did not find any guidance on how to do it.
Ansible Vault works for file encryption or variable encryption. If you want to encrypt a different file, you have to provide a password again to encrypt it with ansible-vault. You can use the same password for file1 and file2; while executing the playbook, Ansible will decrypt both files using that same password.
It's also possible to edit the encrypted encrypted-example-file1 with the ansible-vault edit command and add the additional content you intended for encrypted-example-file2 there:
ansible-vault edit encrypted-example-file1
Here are more details.
You do not necessarily have to type the password every time you use ansible-vault.
ansible, ansible-playbook, and ansible-vault all have a --vault-password-file parameter that you can use to specify a file in which you store the password.
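A short sketch of both points (the vault_pass.txt and site.yml names are just examples):
# create both files with the same vault password -- simply type the same
# password at each prompt; there is no shared "vault" object to reference
ansible-vault create encrypted-example-file1
ansible-vault create encrypted-example-file2
# or avoid the prompts by keeping the password in a protected file
echo 'my-vault-password' > vault_pass.txt && chmod 600 vault_pass.txt
ansible-vault create --vault-password-file vault_pass.txt encrypted-example-file2
# the playbook run then decrypts every file encrypted with that password
ansible-playbook site.yml --vault-password-file vault_pass.txt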

Accessing Reflection for unix and openvms outside of Reflection

My place of business currently uses Reflection for Unix and OpenVMS to handle a database of customers. I access this database directly through the Reflection emulation. The only way to get data out of Reflection is to navigate to a single customer via keyboard input and print the information to a .txt.
Is there any way I can access the VM other than through Reflection, with the end goal of automating retrieval of customer information from a Java script executed outside of the Reflection environment? This is the information I can gather via the Reflection interface about what I am connecting to:
At the bottom of the Reflection interface - VT500-7 -- HOST_NAME via SECURE SHELL
Via the Connection Setup drop-down:
Host name: HOST_NAME
SSH config scheme: AutoKeyLogin
User name: username
Via the Security... button:
General tab:
Port number: 22
User Authentication: [x] Public Key
[x] Password
User Keys tab:
Use Name Type Location
[x] username1user DSA C:\Documents\PathToSSHKey\.ssh
Host Keys tab:
Host Type Fingerprint
HOST_NAME, 111.1.111.11, 22 DSA 39:14:f3:123:fds:restOfFingerprint
There is more information available; if the solution is possible but I have not provided enough detail to find it, please ask.
Given that I have the host name, port, .ssh, and host key, is it possible to connect to and read from the VM that I am otherwise connecting to normally via the Reflection emulator?
NO. Reflection (another example is PuTTY) is just a dumb-terminal emulator, here using the (secure) SSH protocol to connect to some operating system. From the information provided we cannot even tell which OS: maybe OpenVMS, maybe some Unix. Most certainly not a 'VM', but a physical box. Maybe an Alpha, Integrity, Sun, IBM, or Intel server.
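For what it is worth, because it is plain SSH underneath, any standard client can open the same kind of session with the details listed in the question; you will simply land in whatever menu the account is set up to start, which is why this alone does not give you programmatic data access:
# the equivalent of the Reflection connection: port 22, public-key + password auth
ssh -p 22 -i /path/to/.ssh/id_dsa username@HOST_NAME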
IF, big if, it is OpenVMS you would possibly see something like this flash by on entry:
XXX - HP rx2600 (1.50GHz/6.0MB) OpenVMS IA64 V8.3-1H1
Last interactive login on Thursday, 7-DEC-2017 13:23:19.83
Last non-interactive login on Wednesday, 6-DEC-2017 12:35:45.80
Most likely the username used is set up to always start a (shell) script which launches a menu, from which a program is activated that knows how to access the data records. IF it is OpenVMS, then the actual data is likely stored in RMS (indexed) files, but it could be in a proper (Oracle Rdb or other RDBMS) database.
If bulk access to the data is needed then you need to talk to the system/application manager for the system 'HOST_NAME' and ask them about the application and its database.
You may find that there is FTP, ODBC, JDBC, or native DB (OCI?) access to the data available already, or that this can be requested. Likely tools in this space are ConnX, Attunity Connect, and such.
First you'll need to find out which OS/platform/version, which application (3rd party? home grown? 4GL? Cobol? Basic?), and ultimately which database/storage method.
That's not to say that some terminal emulator cannot be 'tricked' (google 'screen scraping') into being programmed to fetch a series of data on command, but that will always be error-prone and laborious, and only suitable for limited volumes.
You are better off trying to get proper data access.
Good luck! You'll need some.
Hein

How to avoid storing userid/password in the .odbc.ini file on Linux?

I am connecting to a Teradata database through ODBC with Stata on an Ubuntu server (12.04 LTS). Everything works fine, except that I have my TD userid and password stored in the .odbc.ini file, which seems like a terrible idea. The alternative is to enter them in Stata, which seems even worse and is awkward. Is there a way to do this more securely? The login info that I use to ssh into the server is synced with the TD database. It seems that it should be possible to pass that information along.
In ODBC terms you do not need to store usernames / passwords in any of your ODBC ini files. Both the ODBC SQLConnect and SQLDriverConnect calls support passing in the username / password at the time they are called.
SQLDriverConnect would need something in your InConnectionString like "DSN=YourDataSourceName;UID=username;PWD=password".
You could go one step further and pass in the whole DSN as a command line argument thus meaning that you would not need an ODBC data source in an ini file. I'm sure one of the forum readers can post a sample for you from Teradata.
As for passing in the user name and password from your SSH login: your application would need to capture that and pass it on to ODBC.
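As a rough sketch (the DSN name and the use of unixODBC's isql are only examples; the Teradata driver's own tools may differ), the credentials can be supplied at connect time instead of living in .odbc.ini:
# .odbc.ini keeps only the driver/host definition -- no UID/PWD lines
# prompt for the password at run time instead of storing it on disk
read -r -s -p "Teradata password: " TDPASS; echo
# unixODBC's isql takes DSN, user and password as arguments -- the same idea as
# an SQLDriverConnect string "DSN=tdprod;UID=$USER;PWD=..."
# (note: the arguments are briefly visible in ps, so this is still a trade-off)
isql -v tdprod "$USER" "$TDPASS"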
If you want to establish a finer grain of security around your odbc.ini file or other files on your Ubuntu server that may contain user credentials, I would strongly suggest the use of Access Control Lists (ACLs). Beyond the typical owner/group/world permissions, you can grant or deny explicit permissions for a given file down to the level of a specific user.
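A small example of tightening .odbc.ini with a POSIX ACL (the extra account name is made up):
# lock the file down to its owner, then grant one named account read access
chmod 600 ~/.odbc.ini
setfacl -m u:stata_svc:r ~/.odbc.ini
getfacl ~/.odbc.ini   # show the resulting ACL entries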
Other options regarding security on Teradata include the use of LDAP authentication if your environment supports it. Configuring LDAP on Teradata is beyond the scope of SO and in many cases a billable, professional services engagement with Teradata's Information Security CoE.

What's the best method for passing AWS credentials as user data to an EC2 instance?

I have a job-processing architecture based on AWS that requires EC2 instances to query S3 and SQS. In order for running instances to have access to the API, the credentials are sent as user data (-f) in the form of a base64-encoded shell script. For example:
$ cat ec2.sh
...
export AWS_ACCOUNT_NUMBER='1111-1111-1111'
export AWS_ACCESS_KEY_ID='0x0x0x0x0x0x0x0x0x0'
...
$ zip -P 'secret-password' ec2.zip ec2.sh
$ openssl enc -base64 -in ec2.zip
Many instances are launched...
$ ec2run ami-a83fabc0 -n 20 -f ec2.zip
Each instance decodes and decrypts ec2.zip using the 'secret-password' which is hard-coded into an init script. Although it does work, I have two issues with my approach.
'zip -P' is not very secure
The password is hard-coded in the instance (it's always 'secret-password')
The method is very similar to the one described here
Is there a more elegant or accepted approach? Using gpg to encrypt the credentials and storing the private key on the instance to decrypt it is an approach I'm considering now but I'm unaware of any caveats. Can I use the AWS keypairs directly? Am I missing some super obvious part of the API?
You can store the credentials on the machine (or transfer, use, then remove them.)
You can transfer the credentials over a secure channel (e.g. using scp with non-interactive authentication e.g. key pair) so that you would not need to perform any custom encryption (only make sure that permissions are properly set to 0400 on the key file at all times, e.g. set the permissions on the master files and use scp -p)
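A minimal sketch of that transfer (the key, file, and host names are placeholders):
# keep the credentials file readable only by its owner on the master host
chmod 0400 aws-credentials.sh
# non-interactive copy using an SSH key pair; -p preserves the 0400 mode
scp -p -i ~/.ssh/deploy_key aws-credentials.sh ec2-user@node1.example.com:~/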
If the above does not answer your question, please provide more specific details re. what your setup is and what you are trying to achieve. Are EC2 actions to be initiated on multiple nodes from a central location? Is SSH available between the multiple nodes and the central location? Etc.
EDIT
Have you considered parameterizing your AMI, requiring those who instantiate your AMI to first populate the user data (ec2-run-instances -f user-data-file) with their AWS keys? Your AMI can then dynamically retrieve these per-instance parameters from http://169.254.169.254/1.0/user-data.
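Retrieving those parameters from inside the instance is then a one-liner (shown with curl; wget works just as well):
# inside the instance: read the user data that was supplied at launch time
curl -s http://169.254.169.254/1.0/user-data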
UPDATE
OK, here goes a security-minded comparison of the various approaches discussed so far:
Security of data when stored in the AMI user-data unencrypted
low
clear-text data is accessible to any user who manages to log onto the AMI and has access to telnet, curl, wget, etc. (can access clear-text http://169.254.169.254/1.0/user-data)
you are vulnerable to proxy request attacks (e.g. attacker asks the Apache that may or may not be running on the AMI to get and forward the clear-text http://169.254.169.254/1.0/user-data)
Security of data when stored in the AMI user-data and encrypted (or decryptable) with easily obtainable key
low
easily-obtainable key (password) may include:
key hard-coded in a script inside an AMI (where the AMI can be obtained by an attacker)
key hard-coded in a script on the AMI itself, where the script is readable by any user who manages to log onto the AMI
any other easily obtainable information such as public keys, etc.
any private key (its public key may be readily obtainable)
given an easily-obtainable key (password), the same problems identified in point 1 apply, namely:
the decrypted data is accessible to any user who manages to log onto the AMI and has access to telnet, curl, wget, etc. (can access clear-text http://169.254.169.254/1.0/user-data)
you are vulnerable to proxy request attacks (e.g. attacker asks the Apache that may or may not be running on the AMI to get and forward the encrypted http://169.254.169.254/1.0/user-data, subsequently decrypted with the easily obtainable key)
Security of data when stored in the AMI user-data and encrypted with not easily obtainable key
average
the encrypted data is accessible to any user who manages to log onto the AMI and has access to telnet, curl, wget, etc. (can access encrypted http://169.254.169.254/1.0/user-data)
an attempt to decrypt the encrypted data can then be made using brute-force attacks
Security of data when stored on the AMI, in a secured location (no added value for it to be encrypted)
higher
the data is only accessible to one user, the user who requires the data in order to operate
e.g. file owned by user:user with mask 0600 or 0400
attacker must be able to impersonate the particular user in order to gain access to the data
additional security layers, such as denying the user direct log-on (having to pass through root for interactive impersonation) improves security
So any method involving the AMI user-data is not the most secure, because gaining access to any user on the machine (weakest point) compromises the data.
This could be mitigated if the S3 credentials were only required for a limited period of time (i.e. during the deployment process only), if AWS allowed you to overwrite or remove the contents of user-data when done with it (but this does not appear to be the case.) An alternative would be the creation of temporary S3 credentials for the duration of the deployment process, if possible (compromising these credentials, from user-data, after the deployment process is completed and the credentials have been invalidated with AWS, no longer poses a security threat.)
If the above is not applicable (e.g. S3 credentials needed by deployed nodes indefinitely) or not possible (e.g. cannot issue temporary S3 credentials for deployment only) then the best method remains to bite the bullet and scp the credentials to the various nodes, possibly in parallel, with the correct ownership and permissions.
I wrote an article examining various methods of passing secrets to an EC2 instance securely and the pros & cons of each.
http://www.shlomoswidler.com/2009/08/how-to-keep-your-aws-credentials-on-ec2/
The best way is to use instance profiles. The basic idea is:
Create an instance profile
Create a new IAM role
Assign a policy to the previously created role, for example:
{
  "Statement": [
    {
      "Sid": "Stmt1369049349504",
      "Action": "sqs:*",
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Associate the role and instance profile together.
When you start a new EC2 instance, make sure you provide the instance profile name.
If all works well, and the library you use to connect to AWS services from within your EC2 instance supports retrieving the credentials from the instance meta-data, your code will be able to use the AWS services.
A complete example taken from the boto-user mailing list:
First, you have to create a JSON policy document that represents what services and resources the IAM role should have access to. For example, this policy grants all S3 actions for the bucket "my_bucket". You can use whatever policy is appropriate for your application.
BUCKET_POLICY = """{
"Statement":[{
"Effect":"Allow",
"Action":["s3:*"],
"Resource":["arn:aws:s3:::my_bucket"]}]}"""
Next, you need to create an Instance Profile in IAM.
import boto
c = boto.connect_iam()
instance_profile = c.create_instance_profile('myinstanceprofile')
Once you have the instance profile, you need to create the role, add the role to the instance profile and associate the policy with the role.
role = c.create_role('myrole')
c.add_role_to_instance_profile('myinstanceprofile', 'myrole')
c.put_role_policy('myrole', 'mypolicy', BUCKET_POLICY)
Now, you can use that instance profile when you launch an instance:
ec2 = boto.connect_ec2()
ec2.run_instances('ami-xxxxxxx', ..., instance_profile_name='myinstanceprofile')
I'd like to point out that it is no longer necessary to supply any credentials to your EC2 instance. Using IAM, you can create a role for your EC2 instances. In these roles, you can set fine-grained policies that allow your EC2 instance to, for example, get a specific object from a specific S3 bucket and no more. You can read more about IAM roles in the AWS docs:
http://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html
As others have already pointed out here, you don't really need to store AWS credentials for an EC2 instance if you use IAM Roles -
https://aws.amazon.com/blogs/security/a-safer-way-to-distribute-aws-credentials-to-ec2/.
I will add that you can employ the same method to securely store NON-AWS credentials for your EC2 instance, say if you have some db credentials you want to keep secure. You save the non-AWS credentials in an S3 bucket and use an IAM role to access that bucket.
You can find more detailed information on that here - https://aws.amazon.com/blogs/security/using-iam-roles-to-distribute-non-aws-credentials-to-your-ec2-instances/
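A sketch of that pattern, assuming the instance's IAM role allows s3:GetObject on the bucket (the bucket, key, and target path are made up):
# no access keys on the instance: the AWS CLI obtains temporary credentials
# from the instance profile via the metadata service automatically
aws s3 cp s3://my-app-secrets/db-credentials.env /etc/myapp/db-credentials.env
# make the fetched secrets readable only by the application's user
chown myapp:myapp /etc/myapp/db-credentials.env
chmod 400 /etc/myapp/db-credentials.env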
