I have an encrypted file stored in a Google Cloud Storage bucket that was generated with the following command line:
gcloud kms encrypt --location=global --keyring=my-keyring --key=my-key --plaintext-file=my-file --ciphertext-file=my-file.enc
I am now trying to decrypt that file in a Cloud Run service with the following code:
const kms = require('@google-cloud/kms');
const client = new kms.KeyManagementServiceClient();
const file = storage.bucket("my-bucket").file('my-file.enc');
const name = client.cryptoKeyPath('projectId', 'global', 'my-keyring', 'my-key');
let encrypted = (await file.download())[0];
const [result] = await client.decrypt({name, encrypted});
I am getting the following error:
Error: Decryption failed: verify that 'name' refers to the correct CryptoKey.
Which, according to this, is misleading and really means the ciphertext could not be deciphered. I cannot shake the feeling that I am missing a base64 encode/decode somewhere, but I can't seem to find the solution.
If I run the decryption from the command-line it works just fine.
Any help is much appreciated.
Thanks.
EDIT:
Problem solved thanks to this awesome community. Here are the steps to make this work, in case others face the same issue:
Encrypt the file using the following command line and upload it via the web UI.
gcloud kms encrypt --location=global --keyring=my-keyring --key=my-key --plaintext-file=my-file --ciphertext-file=my-file.enc
Decrypt using the following code:
const kms = require('@google-cloud/kms');
const {Storage} = require('@google-cloud/storage'); // Cloud Storage client used to read the ciphertext object
const client = new kms.KeyManagementServiceClient();
const storage = new Storage();
const file = storage.bucket('my-bucket').file('my-file.enc');
const name = client.cryptoKeyPath('projectId', 'global', 'my-keyring', 'my-key');
// Download the raw ciphertext and base64-encode it before calling KMS
const encrypted = (await file.download())[0];
const ciphertext = encrypted.toString('base64');
const [result] = await client.decrypt({name, ciphertext});
// result.plaintext is base64-encoded, so decode it back to the original text
console.log(Buffer.from(result.plaintext, 'base64').toString('utf8'));
I spot a few things here:
Assuming your command is correct, my-file-enc should be my-file.enc instead (dot vs dash)
Verify that projectId is being set correctly. If you're populating this from an environment variable, console.log and make sure it matches the project in which you created the KMS key. gcloud defaults to a project (you can figure out which project by running gcloud config list and checking the core/project attribute). If you created the key in project foo, but your Cloud Run service is looking in project bar, it will fail.
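For reference, either of these commands prints the project your gcloud CLI is currently configured to use (the second is simply a shortcut for reading the same core/project attribute):
gcloud config list
gcloud config get-value project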
When using --ciphertext-file to write to a file, the data is not base64 encoded. However, you are creating a binary file. How are you uploading that binary string to Cloud Storage? The most probable culprit seems to be an encoding problem (ASCII vs UTF) which could cause the decryption to fail. Make sure you are writing and reading the file as binary.
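For example, copying the ciphertext file with gsutil keeps the raw bytes intact (bucket and object names reuse the ones from this question):
gsutil cp my-file.enc gs://my-bucket/my-file.enc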
Looking at the Cloud KMS Node.js documentation, it specifies that the ciphertext should be "exactly as returned from the encrypt call". The documentation says that the KMS response is a base64-encoded string, so you could try base64-encoding your data in your Cloud Run service before sending it to Cloud KMS for decryption:
let encrypted = (await file.download())[0];
let encryptedEncoded = encrypted.toString('base64');
const [result] = await client.decrypt({name, ciphertext: encryptedEncoded});
You may want to take a look at Berglas, which automates this process. There are really good examples for Cloud Run with node.
For more patterns, check out Secrets in Serverless.
This is the code snippet in my main.tf file:
provider "github" {
token = var.github_token_ssm
owner = var.owner
}
data "github_repository" "github" {
full_name = var.repository_name
}
The GitHub token is stored in an AWS Secrets Manager secret.
If the value of the token is a hardcoded GitHub token, it works fine.
If the value of the token is an AWS Secrets Manager ARN (e.g. arn:aws:secretsmanager:us-east-1:xxxxxxxxxxxx:secret:xxxx-Github-t0UOOD:xxxxxx), it does not work.
I don't want to hardcode the GitHub token in the code. How can I use the Secrets Manager secret for the token above?
As far as I know, Terraform does not support pulling the token from AWS Secrets Manager for this directly (but you can use Vault to store secrets).
You can also supply it as a TF_VAR environment variable:
export TF_VAR_db_username=admin TF_VAR_db_password=adifferentpassword
You can also run a script that pulls the secret from AWS and stores it in an environment variable.
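For example, a minimal sketch of that approach using the AWS CLI (the secret id is a placeholder; the variable name matches var.github_token_ssm above):
export TF_VAR_github_token_ssm="$(aws secretsmanager get-secret-value \
  --secret-id <your-github-token-secret-id> \
  --query SecretString \
  --output text)"
terraform plan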
Just remember to secure your state file (the secret will exist there in clear text).
In my Ionic 5 app, I am using the capacitor-community/sqlite plugin to create and encrypt a database. I am doing the following as per the documentation.
await this.sqlite.createConnection('database1', false, "no-encryption", 1);
await this.sqlite.createConnection('database1', true, "encryption", 1);
await this.sqlite.createConnection('database1', true, "secret", 1);
the mode "encryption" is to be used when you have an already existing database non encrypted and you want to encrypt it.
the mode "secret" is to be used when you want to open an encrypted database.
the mode "newsecret" is to be used when you want to change the secret of an encrypted database with the newsecret.
A secret and newsecret is maintained in the configuration file as an encryption password. When I create the connection with the secret it works fine but I am unable to use the newsecret.
await this.sqlite.createConnection('database1', true, "newsecret", 1);
The above code is supposed to change my connection secret, but it's not working. When I run it, it executes with no error, but when I then run await this.db.open();, it fails with the error "Open command failed: Failed in openOrCreateDatabase Wrong Secret". I couldn't find the correct way to use this mode in the official documentation.
Example:
upload a file to the server and save the resulting path to the database; only authenticated users should be able to upload files
How to implement this?
To summarize, we have 3 ways:
client uploads to S3 (or a similar service), gets the file URL, then makes an insert/update mutation to the right table
custom uploader - write an application/server that uploads files and mutates the db, and use nginx routing to redirect some requests to it
custom resolver using schema stitching (example)
If you are uploading files to AWS S3, there is a simple way that doesn't require launching another server to process the file upload or creating a handler for a Hasura action.
Basically, when you upload files to S3, it's better to get a signed URL from the backend and upload to S3 directly. By the way, for hosting multiple image sizes, this approach is easy and painless.
The critical point is how to get the S3 signed URL for the upload.
In Node.js, you can do:
const AWS = require("aws-sdk");
const s3 = new AWS.S3({ apiVersion: "2006-03-01" });
const signedUrl = s3.getSignedUrl("putObject", {
Bucket: "my-bucket",
Key: "path/to/file.jpg",
Expires: 600,
});
console.log("signedUrl", signedUrl);
A signedUrl example is like https://my-bucket.s3.amazonaws.com/path/to/file.jpg?AWSAccessKeyId=AKISE362FGWH263SG&Expires=1621134177&Signature=oa%2FeRF36DSfgYwFdC%2BRVrs3sAnGA%3D.
Normally, you would put the above code in a handler hosted on AWS Lambda or Glitch, add some logic for authorization, and perhaps also insert a row into a table.
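For illustration, here is a minimal sketch of such a handler (the bucket name, key prefix, request shape and authorization check are placeholder assumptions, not part of the original setup):
const AWS = require("aws-sdk");
const s3 = new AWS.S3({ apiVersion: "2006-03-01" });

exports.handler = async (event) => {
  // Placeholder authorization: reject requests without an Authorization header.
  // In practice you would validate a JWT or session token here.
  if (!event.headers || !event.headers.Authorization) {
    return { statusCode: 401, body: "Unauthorized" };
  }

  const { filename } = JSON.parse(event.body || "{}");

  // Same getSignedUrl call as above, keyed to the requested filename.
  const signedUrl = s3.getSignedUrl("putObject", {
    Bucket: "my-bucket",          // placeholder bucket
    Key: `uploads/${filename}`,   // placeholder key prefix
    Expires: 600,
  });

  return { statusCode: 200, body: JSON.stringify({ signedUrl }) };
};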
You can see that the most important part is Signature=oa%2FeRF36DSfgYwFdC%2BRVrs3sAnGA%3D. How can we make it easier to get Signature?
After digging into the AWS JS SDK, we can find that the signature is computed here:
return util.crypto.lib.createHmac(fn, key).update(string).digest(digest);
fn = 'sha1'
string = 'PUT\n\n\n1621135558\n/my-bucket/path/to/file.jpg'
digest = 'base64'
It's just an HMAC-SHA1 of a string in a certain format. This means we can just use Hasura computed fields and Postgres crypto functions to achieve the same result.
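For illustration, a minimal sketch that reproduces the same signature with Node's built-in crypto module (the secret, bucket, key and expiry below are placeholders):
const crypto = require("crypto");

// Placeholder values; in the SDK these come from your AWS credentials and request.
const awsSecret = process.env.AWS_SECRET_ACCESS_KEY || "AWS_SECRET";
const expires = 1621135558; // epoch seconds at which the URL expires
const stringToSign = `PUT\n\n\n${expires}\n/my-bucket/path/to/file.jpg`;

// Same computation as createHmac(fn, key).update(string).digest(digest) above
const signature = crypto
  .createHmac("sha1", awsSecret)
  .update(stringToSign)
  .digest("base64");

// URL-encode it before placing it in the Signature query parameter
console.log(encodeURIComponent(signature));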
So if you have a table "files"
CREATE TABLE files (
  id SERIAL,
  created_at timestamp,
  filename text,
  user_id integer
);
you can create a SQL function
CREATE OR REPLACE FUNCTION public.file_signed_url(file_row files)
RETURNS text
LANGUAGE sql
STABLE
AS $function$
  SELECT ENCODE(HMAC(
    'PUT' || E'\n' || E'\n' || E'\n' ||
    (cast(extract(epoch from file_row.created_at) as integer) + 600)
    || E'\n' || '/my-bucket/' || file_row.filename,
    'AWS_SECRET', 'SHA1'), 'BASE64')
$function$;
Finally, follow this to expose this computed field to Hasura.
This approach allows you to avoid adding any backend component and to handle permissions entirely in Hasura.
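For illustration, a rough sketch of how a client could assemble the full pre-signed URL from the computed field (the object below stands in for a row returned by a GraphQL query; the field and variable names are assumptions):
// Placeholder row as it might come back from Hasura, including the computed field.
const file = {
  filename: "path/to/file.jpg",
  created_at: "2021-05-16T02:45:58Z",
  file_signed_url: "oa/eRF36DSfgYwFdC+RVrs3sAnGA=", // raw base64 signature
};
const AWS_ACCESS_KEY_ID = "AKISE362FGWH263SG"; // key id paired with AWS_SECRET

// Expires must match the value used inside the SQL function (created_at + 600).
const expires = Math.floor(new Date(file.created_at).getTime() / 1000) + 600;

const uploadUrl =
  `https://my-bucket.s3.amazonaws.com/${file.filename}` +
  `?AWSAccessKeyId=${AWS_ACCESS_KEY_ID}` +
  `&Expires=${expires}` +
  `&Signature=${encodeURIComponent(file.file_signed_url)}`;

// The client can now PUT the file body directly to uploadUrl.
console.log(uploadUrl);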
I'm here because I'm working on this repo.
When I compare the hash from Firebase and the hash I created with the utility provided by Firebase, using the same password, the same salt and the same parameters, they are not the same.
Does anyone have an idea for a solution? I'm totally stuck and I do not understand what is happening here.
Thanks!
EDIT: the hash function (also here)
// `exec` comes from Node's child_process module; signerKey, saltSeparator,
// rounds and memCost are the Firebase scrypt parameters held on the instance.
hash (password, salt) {
  return new Promise((resolve, reject) => {
    exec(
      `${__dirname}/../scrypt/scrypt "${this.signerKey}" "${salt}" "${this.saltSeparator}" "${this.rounds}" "${this.memCost}" -P <<< "${password}"`,
      { shell: '/bin/bash' },
      (error, stdout) => error ? reject(error) : resolve(stdout),
    )
  })
}
EDIT 2: I forgot to mention that I export the users' password hashes using the Admin SDK.
The Firebase documentation states that "on successful sign-in, Firebase re-hashes the user's password with the internal Firebase hashing algorithm". If you've already logged in you would see a unique hash so I do not think you should be expecting an exact match to your project's keys.
I'll answer my own question: the problem was coming from the way I exported users. I was using the JS Admin SDK and the listUsers() function, but apparently that does not return the right password hash (maybe for security reasons; I asked and will update this post when I know more).
If you want to export your users and use firebase-scrypt to verify their passwords, export them using firebase-tools and the auth:export command.
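A typical invocation looks like this (the output filename and project id are placeholders):
firebase auth:export users.json --format=json --project <your-project-id>
The exported file includes each user's passwordHash and salt, which is what firebase-scrypt needs to verify passwords.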
I've just migrated to Cloud Functions 1.0 and am trying out Cloud Functions shell/emulator to run functions locally (using instructions at https://firebase.google.com/docs/functions/local-emulator)
One of the functions uses the code below to upload a file to Cloud Storage and then generate a signed URL for it, but I am getting the following error:
SigningError: Cannot sign data without client_email.
const bucket = gcs.bucket(bucketName);
bucket.upload(localFilePath, {destination: destinationPath})
  .then(data => {
    const file = data[0];
    return file.getSignedUrl({
      action: 'read',
      expires: '01-01-2099'
    });
  });
I can work around this locally by explicitly setting keyFilename as shown below, but it seems like this should not be necessary:
const gcs = require('@google-cloud/storage')({keyFilename: 'service-account.json'});
The link above mentions that "Cloud Firestore and Realtime Database triggers already have sufficient credentials, and do not require additional setup" (I'm triggering this code from a db write). I'm setting the GOOGLE_APPLICATION_CREDENTIALS env variable in any case, but it doesn't look like it's being picked up.