How do I encrypt a build step?

I need a secret token to be part of a command executed by Travis CI, but I am in a public repository. I found that I can encrypt parts of .travis.yml to keep secrets safe. However, encrypting the whole command as in the following example fails with Y95MgqDf...Bc=}: No such file or directory:
after_deploy:
- secure: "Y95MgqDf...Bc="

You don't encrypt the step; that does not appear to be supported by Travis.
Instead, encrypt only the secret part:
$ travis encrypt TOKEN=verysecret
secure: "CnLZ...lI="
Put the secret in an environment variable:
env:
  global:
    secure: CnLZ...lI=
Then dereference the environment variable when you need your secret.
after_deploy:
- mycommand $TOKEN
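At run time the decrypted value is an ordinary environment variable, so any script the build invokes can read it. A minimal Python sketch for illustration (the name TOKEN matches the encrypted assignment above; the helper name is hypothetical):

```python
import os

def get_token() -> str:
    # Travis exports the decrypted secure variable as TOKEN
    token = os.environ.get("TOKEN")
    if not token:
        raise RuntimeError("TOKEN is not set; check the env.global secure entry")
    return token
```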


Airflow seems to be ignoring "fernet_key" config

Summary: Airflow seems to ignore the fernet_key value from both airflow.cfg and environment variables, even though the exposed config in the webserver GUI shows the correct value. All DAGs using encrypted variables therefore fail.
Now the details:
I have an Airflow 2.3.2 (webserver and scheduler) running on a VM (Ubuntu 20.04) in the cloud. To start and restart the services I am using systemctl. Here are the contents of the airflow-webserver.service:
[Unit]
Description=Airflow webserver daemon
After=network.target postgresql.service airflow-init.service
[Service]
EnvironmentFile=/etc/airflow/secrets.txt
Environment="AIRFLOW_HOME=/etc/airflow"
User=airflow
Group=airflow
Type=simple
ExecStart=/usr/local/bin/airflow webserver --pid /run/airflow/webserver.pid
Restart=on-failure
RestartSec=5s
PrivateTmp=true
RuntimeDirectory=airflow
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
AmbientCapabilities=CAP_NET_BIND_SERVICE
[Install]
WantedBy=multi-user.target
As you can tell I am using an environment file. It looks like this:
AIRFLOW__CORE__FERNET_KEY=some value
AIRFLOW__WEBSERVER__SECRET_KEY=some value
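As a sanity check, the EnvironmentFile format is plain KEY=value lines; a simplified parser sketch to confirm what systemd will see (this is an illustration only; systemd additionally supports quoting and continuation lines):

```python
def parse_env_file(text: str) -> dict:
    # systemd EnvironmentFile: one KEY=value per line, '#' starts a comment
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key] = value
    return env
```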
The setup itself seems to be working as confirmed by an exposed config in the webserver GUI:
link to a screenshot.
However, since upgrading to 2.3.2 (from 2.2.3) I am facing an issue that seems to be related to the fernet key configuration. The gist of it is that Airflow seems to ignore the fernet_key config and therefore fails to decrypt variables. This is how it manifests:
$ airflow variables get DBT_USER
Variable DBT_USER does not exist
$ airflow variables list -v
[2022-07-14 17:57:09,920] {variable.py:79} ERROR - Can't decrypt _val for key=DBT_USER, FERNET_KEY configuration missing
[2022-07-14 17:57:09,922] {variable.py:79} ERROR - Can't decrypt _val for key=DBT_PASSWORD, FERNET_KEY configuration missing
[2022-07-14 17:57:09,924] {variable.py:79} ERROR - Can't decrypt _val for key=DBT_USER, FERNET_KEY configuration missing
[2022-07-14 17:57:09,925] {variable.py:79} ERROR - Can't decrypt _val for key=DBT_PASSWORD, FERNET_KEY configuration missing
key
============
DBT_USER
DBT_PASSWORD
I receive exactly the same error when running the DAGs which use encrypted variables (templated or with Variable.get). The issue persists even if I hardcode the fernet_key in airflow.cfg. Since I don't have many variables, I tried deleting them and creating new ones to ensure that the fernet_key matches the key needed to decode the values stored in the database. I then confirmed that the fernet_key from my config can correctly decode the values stored in the db (I fetched the encrypted values by querying the variable table in PostgreSQL directly).
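That manual check can be scripted; a minimal sketch using the cryptography library (which Airflow itself relies on), where the key and ciphertext are placeholders you would take from your config and from the variable table:

```python
from cryptography.fernet import Fernet, InvalidToken

def key_matches(fernet_key: str, encrypted_val: str) -> bool:
    # Returns True if this fernet key decrypts a _val fetched from the DB
    try:
        Fernet(fernet_key.encode()).decrypt(encrypted_val.encode())
        return True
    except InvalidToken:
        return False
```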
I am out of ideas so any hint is greatly appreciated.

How to Fix NO_SECRET warning thrown by Next-Auth

I have a Next.js application that uses NextAuth.js. While in development I continuously get a warning stipulating that I need to set a secret, but I don't know where I should set it.
Following this reference I see I just need to run openssl rand -base64 32 to get a secret, but I have no idea where to put it.
In [...nextauth].js, outside providers and callbacks, you can set the secret and its value. As it is recommended to store such values in environment variables, you can do the following:
export default NextAuth({
  providers: [
  ],
  callbacks: {
  },
  secret: process.env.JWT_SECRET,
});
Run the command openssl rand -base64 32 in your Linux terminal; it will generate a token to put in an .env file under the variable name NEXTAUTH_SECRET=token_generated. The warning [next-auth][warn][NO_SECRET] will then no longer show up in the console.
In my case I had to upgrade the next and next-auth modules to the latest versions; next@12.1.6 and next-auth@4.5.0 worked for me.
NEXTAUTH_SECRET=secret string here
You can generate the secret key using the command openssl rand -base64 32 in Command Prompt or Windows PowerShell.
And in the next-auth configuration [...nextauth].ts file
export default NextAuth({
  providers: [
  ],
  callbacks: {
  },
  secret: process.env.NEXTAUTH_SECRET,
});
It is said that if secret is not defined in [...nextauth].ts, NEXTAUTH_SECRET is loaded from the environment, but I added it explicitly and it works like a charm. :)
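For reference, the openssl command above just produces 32 random bytes, base64-encoded; an equivalent sketch in Python:

```python
import base64
import secrets

def generate_nextauth_secret() -> str:
    # Same shape as `openssl rand -base64 32`: 32 random bytes, base64-encoded
    return base64.b64encode(secrets.token_bytes(32)).decode()
```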

Missing encryption key to decrypt file with. Ask your team for your master ... it in the ENV['RAILS_MASTER_KEY']. Platform.sh deployment aborting

ERROR MESSAGE:
W: Missing encryption key to decrypt file with. Ask your team for your master key and write it to /app/config/master.key or put it in the ENV['RAILS_MASTER_KEY'].
When deploying my project on Platform.sh, the operation failed because of the missing decryption key. From my Google search, I found that the decryption key can be supplied via an environment variable, which I export in my .bashrc:
My Ubuntu .bashrc
export RAILS_MASTER_KEY='ad5e30979672cdcc2dd4f4381704292a'
Rails project configuration for Platform.sh
.platform.app.yaml
# The name of this app. Must be unique within a project.
name: app
type: 'ruby:2.7'
# The size of the persistent disk of the application (in MB).
disk: 5120
mounts:
    'web/uploads':
        source: local
        source_path: uploads
relationships:
    postgresdatabase: 'dbpostgres:postgresql'
hooks:
    build: |
        gem install bundler:2.2.5
        bundle install
        RAILS_ENV=production bundle exec rake assets:precompile
    deploy: |
        RACK_ENV=production bundle exec rake db:migrate
web:
    upstream:
        socket_family: "unix"
    commands:
        start: "unicorn -l $SOCKET -E production config.ru"
    locations:
        '/':
            root: "public"
            passthru: true
            expires: "24h"
            allow: true
routes.yaml
# Each route describes how an incoming URL is going to be processed by Platform.sh.
"https://www.{default}/":
    type: upstream
    upstream: "app:http"
"https://{default}/":
    type: redirect
    to: "https://www.{default}/"
services.yaml
# The name given to the PostgreSQL service (lowercase alphanumeric only).
dbpostgres:
    type: postgresql:13
    # The disk attribute is the size of the persistent disk (in MB) allocated to the service.
    disk: 5120
db:
    type: postgresql:13
    disk: 5120
    configuration:
        extensions:
            - pgcrypto
            - plpgsql
            - uuid-ossp
environments/production.rb
config.require_master_key = true
I suspect that the master.key is not accessible during deployment, and I don't understand how to solve the problem.
From what I understand, your export is in the .bashrc on your local machine, so it won't be accessible when deploying on Platform.sh. (The logs you see in your terminal when building and deploying are streamed; the build itself doesn't happen on your machine.)
You need to make the RAILS_MASTER_KEY accessible on Platform.sh. To do so, this variable needs to be declared in your project.
Given the nature of the variable, I would suggest using the Platform.sh CLI to create it.
If this variable should be accessible on all your environments, you can make it a project level variable.
$ platform variable:create --level project --sensitive true env:RAILS_MASTER_KEY <your_key>
If it should only be accessible for a specific environment, then you need an environment level variable:
$ platform variable:create --level environment --environment '<your_environment>' --inheritable false --sensitive true env:RAILS_MASTER_KEY '<your_key>'
The env: prefix in the variable names tells Platform.sh to expose the variable with the rest of the environment variables. More information about this in the variables prefix section of the environment variables documentation page.
You could do the same via the management console if you prefer to avoid the command line.
Environment variables can also be configured directly in your .platform.app.yaml file, as described here. Keep in mind that since this file is versioned, you should not use this method for sensitive information such as encryption keys, API keys, and other kinds of secrets.
The RAILS_MASTER_KEY environment variable should now be accessible during your Platform.sh deployment.
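To double-check that the variable actually reaches the application environment, you can query it at runtime; a quick sketch in Python for illustration (a Rails app would read ENV['RAILS_MASTER_KEY'] instead, and the helper name is hypothetical):

```python
import os

def master_key_present() -> bool:
    # Platform.sh strips the env: prefix, so the app sees RAILS_MASTER_KEY
    return bool(os.environ.get("RAILS_MASTER_KEY"))
```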

How to read variables from a file decrypted at build time using Google Cloud Build and Google Cloud KMS

I am following this tutorial for getting encrypted keys into my cloudbuild YAML file.
I am trying to understand how I am supposed to "use the decrypted ... file in the workspace directory" in the subsequent steps of my YAML file.
My cloudbuild step where I decrypt the keys file is as follows:
- name: gcr.io/cloud-builders/gcloud
  args: ['kms', 'decrypt', '--ciphertext-file=<encrypted_file>', '--plaintext-file=<decrypted_file>', '--location=<location>', '--keyring=<keyring>', '--key=<key>']
The tutorial is not clear on how to do this and I cannot find anything on the Internet related to this.
Any help is very appreciated.
Thanks.
When you encrypt your content with gcloud kms encrypt, you can write the output to a file in your workspace, for example:
# replace with your values
gcloud kms encrypt \
--location=global \
--keyring=my-kr \
--key=my-key \
--plaintext-file=./data-to-encrypt \
--ciphertext-file=./encrypted-data
Where ./data-to-encrypt is a file on disk that contains your plaintext secret and ./encrypted-data is the destination path on disk where the encrypted ciphertext should be written.
When working directly with the API, the interaction looks like this:
plaintext -> kms(encrypt) -> ciphertext
However, when working with gcloud, it looks like this:
plaintext-file -> gcloud(read) -> kms(encrypt) -> ciphertext -> gcloud(write)
When you invoke Cloud Build, it effectively gets a tarball of your application, minus any files specified in a .gcloudignore. That means ./encrypted-data will be available on the filesystem inside the container step:
steps:
  # decrypt ./encrypted-data into ./my-secret
  - name: gcr.io/cloud-builders/gcloud
    args:
      - kms
      - decrypt
      - --location=global
      - --keyring=my-kr
      - --key=my-key
      - --ciphertext-file=./encrypted-data
      - --plaintext-file=./my-secret

  # use the decrypted secret in a subsequent step
  - name: gcr.io/my-project/my-image
    args: ['my-app', 'start', '--secret=./my-secret']
At present, the only way to share data between steps in Cloud Build is with files, but all build steps have the same shared filesystem.
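So a later step only needs to read the file the decrypt step wrote; a minimal sketch (the ./my-secret path matches the example above, the helper name is hypothetical):

```python
from pathlib import Path

def load_secret(path: str = "./my-secret") -> str:
    # The earlier `gcloud kms decrypt` step wrote the plaintext to this file
    return Path(path).read_text().rstrip("\n")
```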

disk encryption escrow files on centos via kickstart

I'm trying to automate CentOS installs via PXE and kickstart with encrypted filesystems. In case we mislay the passphrase, we want to use escrow files, encrypted with the public key attached to an X.509 certificate obtained from a web server. The relevant line in the kickstart file is:
logvol /home --fstype ext4 --name=lv02 --vgname=vg01 --size=1 --grow --encrypted --escrowcert=http://10.0.2.2:8080/escrow.crt --passphrase=XXXX --backuppassphrase
Leaving the cert PEM-encoded on the web server rather than DER doesn't seem to matter; either works up to a point.
The filesystem is created and encrypted using the supplied passphrase, and can be opened on reboot with no issues. Two escrow files are produced as expected. Using the NSS database containing the private key and the first escrow file, I obtain what I think is the passphrase, but it doesn't unlock the disk. For example:
# volume_key --secrets -d /tmp/nss e04a93fc-555b-430b-a962-1cdf921e320f-escrow
Data encryption key:	817E65AC37C1EC802E3663322BFE818D47BDD477678482E78986C25731B343C221CC1D2505EA8D76FBB50C5C5E98B28CAD440349DC0842407B46B8F116E50B34
I assume the string from 817 to B34 is the passphrase but using it in a cryptsetup command does not work.
[root@mypxetest ~]# cryptsetup -v status home
/dev/mapper/home is inactive.
Command failed with code 19.
[root@mypxetest ~]# cryptsetup luksOpen /dev/rootvg01/lv02 home
Enter passphrase for /dev/rootvg01/lv02:
No key available with this passphrase.
Enter passphrase for /dev/rootvg01/lv02:
When prompted I paste in the long hex string but get the "No key available" message. However, if I use the passphrase specified in the kickstart file, or the one from the backup-passphrase escrow file, the disk unlocks.
# volume_key --secrets -d /tmp/nss e04a93fc-555b-430b-a962-1cdf921e320f-escrow-backup-passphrase
Passphrase:	QII.q-ImgpN-0oy0Y-RC5qa
Then using the string QII.q-ImgpN-0oy0Y-RC5qa in the cryptsetup command works.
Has anyone any idea what I'm missing? Why don't both escrow files work?
I've done some more reading: the file ending in escrow is not an alternative passphrase for the LUKS volume; it contains the volume's encryption key, which is itself encrypted, of course. When decrypted, the long string is the data encryption key rather than a passphrase, and there's a clue in the rest of the volume_key output, which I confess I didn't read very well.
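In other words, volume_key hands back the LUKS data encryption (master) key as hex, which cryptsetup expects as raw bytes rather than as a typed passphrase. A sketch of the conversion (file names are hypothetical):

```python
from pathlib import Path

def write_master_key_file(hex_key: str, out_path: str) -> None:
    # volume_key prints the data encryption key as hex;
    # cryptsetup's master-key option expects the raw binary bytes
    Path(out_path).write_bytes(bytes.fromhex(hex_key))
```

The resulting file could then be passed to cryptsetup, e.g. cryptsetup luksOpen --master-key-file mk.bin /dev/rootvg01/lv02 home, assuming your cryptsetup version supports that option (newer releases name it --volume-key-file).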
