What do I need for Travis CI to decrypt secure variables on my fork?

I have forked a Github repository and would like to use travis-ci, as the original repository does, to run tests when I commit. However, the AWS keys, which are encrypted, are not decrypted and keep the tests from succeeding. Since my workplace owns the original repository, I have access to whatever is needed, but am unsure what information to retrieve, where to find it, or what to do with it.
For clarity, here is the pertinent part of the .travis.yml:
env:
  global:
    - NODE_ENV: test
    - [...]
    - secure: M3YSEJnWYd[...]
    - secure: kvvLABsWTq[...]
All of the environment variables are imported except the secure ones (which is to be expected, of course).

Travis documents that, for security reasons, encrypted variables are not available to forks (https://docs.travis-ci.com/user/environment-variables/#defining-encrypted-variables-in-travisyml). It should be possible, though, to set new secrets of your own, either in the fork's repository settings on Travis or by re-encrypting the values into the fork's .travis.yml.
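As a minimal sketch, assuming you have the raw AWS key values from the original repository and the travis CLI gem installed, you could re-encrypt them against the fork from inside a clone of the fork (the variable names here are placeholders):

# Run inside a clone of the fork so the values are encrypted with the
# fork's own public key; --add env.global appends the new secure
# entries to .travis.yml.
travis encrypt AWS_ACCESS_KEY_ID=... --add env.global
travis encrypt AWS_SECRET_ACCESS_KEY=... --add env.global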

Related

Deleting the Terraform resource aws_secretsmanager_secret_version does not delete the Secrets Manager secret entry

I created an AWS Secrets Manager secret and a secret key-value entry using Terraform as below. However, after I comment out the aws_secretsmanager_secret_version resource below and run terraform apply, Terraform shows that it deletes the secret key-value entry, but I can still see the entry in the AWS console and I can still fetch it with the CLI using aws secretsmanager get-secret-value --secret-id myTestName.
Is this entry really deleted? Why do I still see it in the AWS console? Or maybe it is deleted and what the console and CLI show is an old version? At least Terraform removed it from its state file.
resource "aws_secretsmanager_secret" "test" {
name = "myTestName"
}
# I deleted secret key-value entry by
# commenting out below and apply terraform again
resource "aws_secretsmanager_secret_version" "test" {
secret_id = aws_secretsmanager_secret.test.id
secret_string = <<EOF
{
"test-key": "test-value"
}
EOF
}
According to AWS documentation:
...Secrets Manager does not immediately delete secrets. Instead, Secrets Manager immediately makes the secrets inaccessible and scheduled for deletion after a recovery window of a minimum of seven days...
Due to the critical nature of secrets, this behaviour is there for a reason: to prevent you from accidentally deleting an important production-grade secret, which would cause serious problems for the services that depend on it.
If you still want to delete a secret, you can do it with force:
aws secretsmanager delete-secret --secret-id your-secret --force-delete-without-recovery --region your-region
You may need to force-delete it if you want to immediately create a new secret with the same name and avoid a name conflict.
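If the secret is managed by Terraform, a hedged alternative is to set recovery_window_in_days = 0 on the secret resource (the resource and secret names below are taken from the question), so that destroying it skips the recovery window entirely:

resource "aws_secretsmanager_secret" "test" {
  name = "myTestName"
  # 0 disables the recovery window, so terraform destroy deletes the
  # secret immediately instead of scheduling it for deletion.
  recovery_window_in_days = 0
}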
Update: As you clarified, in your specific case, where you wish to delete a version of the secret, it cannot be done while you have only one version of the secret carrying the AWSCURRENT label:
aws secretsmanager get-secret-value --secret-id myTestName
...
    "Name": "myTestName",
    "SecretString": " ...
    "VersionStages": [
        "AWSCURRENT"
    ]
...
From the Terraform documentation:
If the AWSCURRENT staging label is present on this version during resource deletion, that label cannot be removed and will be skipped to prevent errors when fully deleting the secret. That label will leave this secret version active even after the resource is deleted from Terraform unless the secret itself is deleted. Move the AWSCURRENT staging label before or after deleting this resource from Terraform to fully trigger version deprecation if necessary.
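If you do need to deprecate that version, the AWSCURRENT label can be moved to another existing version with the CLI, roughly like this (the version id is a placeholder, and another version of the secret must already exist):

aws secretsmanager update-secret-version-stage \
    --secret-id myTestName \
    --version-stage AWSCURRENT \
    --move-to-version-id <other-version-id>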

Is there a way to use both test keys on localhost and live keys remotely with Firebase functions?

I have a project where I set up the keys as follows.
Live keys
functions:config:set stripe.secret="sk_live_..." stripe.publishable="pk_live_..."
Test keys
functions:config:set stripe.secret="sk_test_..." stripe.publishable="pk_test_..."
The application is in its beta stage but live, so there are still a lot of changes being made to the code.
So I want to avoid setting the keys each time I want to test out some new feature on localhost.
Is there a way to configure firebase functions, to correspond to different Environments?
When on localhost it should use the test keys, and when deployed it should use the live keys?
There isn't a special per-environment configuration. What you can do instead is use the unique id of the project to determine which settings to apply. Functions can read the deployed project id from the process environment variable GCP_PROJECT:
const project_id = process.env.GCP_PROJECT
Which values you should use during development is a matter of opinion; do whatever suits you best.
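As a rough sketch (the project id and the fallback test key below are placeholders, not from the question), you could branch on that id when picking the Stripe keys:

const functions = require('firebase-functions');

// GCP_PROJECT is set for deployed functions; GCLOUD_PROJECT is a common
// fallback in newer runtimes and the emulator (assumption: verify for your runtime).
const projectId = process.env.GCP_PROJECT || process.env.GCLOUD_PROJECT;

// 'my-live-project' is a hypothetical production project id.
const isLive = projectId === 'my-live-project';

const stripeSecret = isLive
  ? functions.config().stripe.secret // live key from functions config
  : 'sk_test_...';                   // test key used during development (placeholder)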
I believe you can make a .runtimeconfig.json file in your functions directory, which the emulators will read.
For example, first set your local values with firebase functions:config:set stripe.secret="sk_test_...".
Then run firebase functions:config:get > .runtimeconfig.json.
When that file is present, in my experience the Firebase emulators will read from it, and you won't keep overwriting production config variables.
Docs: https://firebase.google.com/docs/functions/local-emulator#set_up_functions_configuration_optional
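For reference, the generated file is just the config tree as JSON; with the test keys above it would look roughly like this (values elided):

{
  "stripe": {
    "secret": "sk_test_...",
    "publishable": "pk_test_..."
  }
}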

Approach to securing test data for public repositories

We have set up nightly testing for an open source project (MERN stack). The Selenium tests require test data which we do not want to make public. Initially we tried to keep the test data in environment variables on the build server (CircleCI), but this approach is not scalable. We do not own any infrastructure, so any database or storage-bucket based solution would mean additional cost, which is not feasible on the org's current budget. Is there a smart solution to keep the test data files secure at no additional cost?
As you know, the challenge is that you need somewhere to put that data. If you're trying to do this without paying any providers, the best I can suggest is Amazon's free tier for either S3 storage or a database. https://aws.amazon.com/free/
Those could be securely accessed from CircleCI by just storing the API keys as project variables.
CircleCI's AWS S3 orb encapsulates the installation and setup of the AWS CLI to simplify this:
version: 2.1
orbs:
  aws-s3: circleci/aws-s3@1.0.2
jobs:
  build:
    docker:
      - image: 'circleci/node:10'
    steps:
      - checkout
      - aws-s3/copy:
          from: 's3://your-s3-bucket-name/test_data/somefile.ext'
          to: test_data.ext
      - run: # your test code here

How to get Travis CI to work with an SSH key: the build currently gets stuck when accessing my private repo (it wants the username)

I already followed the steps exactly specified at this link
However, I am still having the issue. My build will get stuck when accessing the private repo.
$ julia --check-bounds=yes -e 'Pkg.clone("https://github.com/xxxx/xxxx.git")'
INFO: Cloning xxxx from https://github.com/xxxx/xxxx.git
Username for 'https://github.com':
Done: Job Cancelled
Note: I manually cancel it after a few minutes of waiting. How can I get it to use the SSH key I have set up and bypass this username and password prompt?
Note: xxxx is used in place of the name of my project to make this post general. I have already checked out the links on Travis CI and they don't make it clear what needs to occur. Thank you!
Update: I tried adding a GitHub token, Pkg.clone("https://fake_git_hub_token@github.com/xxxx/xxxx.git"), and it still prompts me to sign in with the username. I gave that token full repo access. Also, note that I am using the Travis CI virtual machine infrastructure.
In the Travis CI docs they reference the following:
Assumptions:
The repository you are running the builds for is called “myorg/main” and depends on “myorg/lib1” and “myorg/lib2”.
You know the credentials for a user account that has at least read access to all three repositories.
To pull in dependencies with a password, you will have to use the user name and password in the Git HTTPS URL: https://ci-user:mypassword@github.com/myorg/lib1.git.
SOLUTION:
Add https://TravisCIUsername:mypassword@github.com/organizer_of_the_repo/Dependency.git as the clone URL in the .travis.yml file.
In my case, I am going to make a fake admin account to run the tests, since someone will have to expose their password to use this setup. Note that you can set up 2-factor authentication on that account so that only one person can access it even if others know the password.
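A hedged sketch of how this might look in .travis.yml, assuming the credentials (or a GitHub token) are stored in a hidden environment variable named GITHUB_TOKEN in the repository's Travis settings rather than committed in plain text:

# .travis.yml (sketch): rewrite GitHub HTTPS URLs to include the token held in
# the hidden GITHUB_TOKEN variable, so Pkg.clone no longer prompts for a username.
before_install:
  - git config --global url."https://${GITHUB_TOKEN}@github.com/".insteadOf "https://github.com/"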
You need to add the SSH key in the Travis UI as an environment variable for the desired repo, and you also need to reference the key in that repo's .travis.yml file. The Travis documentation is at https://docs.travis-ci.com.

How do I manage minion roles with salt-stack pillars?

I'm trying to understand how to use Salt with roles the way Chef can be used, but I have some holes in my understanding that reading lots of docs has so far failed to fill.
The principal issue is that I'm trying to manage roles with Salt, like Chef does, but I don't know how to appropriately set the pillar value. What I want to do is assign, either by script or by hand, a role to a Vagrant box and then have the machine install the appropriate files on it.
What I don't understand is how I can set something that will tell salt-master what to install on the box given the particular role I want it to be. I tried setting a
salt.pillar('site' => 'my_site1')
in the Vagrantfile and then checking it in the salt state top.sls file with
{% if pillar.get('site') == 'my_site1' %}
  <do some stuff>
{% endif %}
But this doesn't work. What's the correct way to do this?
It becomes easier when matching minion ids in pillars. First, set the minion_id to something identifiable and, above all, unique, for example test-mysite1-db (the suffix is just one way of keeping the id unique).
Then, in the top.sls in /srv/pillar, do:
base:
  '<regex_matching_id1>':
    - webserver.mysite1
  '<regex_matching_id2>':
    - webserver.mysite2
And then in webserver.mysite1 put, for example:
role: mysiteid1
Then in /srv/state/top.sls you can match with Jinja, or just with:
base:
  'role:mysiteid1':
    - match: pillar
    - state
    - state2
Hence, the roles are derived from the minion ids. This works for me.
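As a quick check (a sketch; test.ping is just a convenient no-op module), you can confirm from the master which minions the pillar role matches:

# Target minions by pillar data (-I / --pillar) to confirm the role match.
salt -I 'role:mysiteid1' test.ping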
How to implement and utilize roles is intentionally vague in the salt documentation. Every permutation of how to implement, and then how to use, roles carries with it trade-offs -- so it is up to you to decide how to do it.
In your scenario I assume you want a single 'role' or purpose associated with a VirtualBox VM, and then have state.highstate run the states associated with that role.
If the above is correct, I would go with grains rather than pillars while learning salt for the sake of simplicity.
On each minion
Just add role: webserver to /etc/salt/grains and restart the salt-minion.
On the master
Update the /srv/state/top.sls file to associate state .sls files with that grain:
base:
  '*':
    - fail2ban
  'role:webserver':
    - match: grain
    - nginx
  'role:dbserver':
    - match: grain
    - mysql
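Once the grain is set, a sketch of how you might verify the targeting from the master and then apply the states for just that role:

# Target minions by grain (-G / --grain) to confirm the role assignment...
salt -G 'role:webserver' test.ping
# ...and then run the highstate only on minions carrying that role.
salt -G 'role:webserver' state.highstate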
