Is there a way to use encrypted passwords on Prisma ORM database config?

I'm starting with Prisma ORM and I want to know if it's possible to encrypt the database password in the Prisma schema file, in the snippet where we put the database config.
My .env file:
DATABASE_PASSWORD="postgresql"
DATABASE_USER="postgres"
DATABASE_URL="postgresql://${DATABASE_USER}:${DATABASE_PASSWORD}@localhost:5432/example_db?schema=public"
My prisma file:
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model Book {
  id          String @id @default(uuid())
  title       String
  description String
  bar_code    String @unique

  @@map("books")
}

Environment variables are read from the system's environment and are not publicly accessible; you would normally not push the contents of your .env file to GitHub. Also note that the .env file is not a JavaScript file, so you cannot use third-party encryption libraries in it. In short, there is no way to encrypt passwords in the Prisma ORM database config.
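If the goal is simply to keep the password out of the repository, a common pattern is to keep only the non-secret URL template in .env and set the credentials in the deployment environment; a minimal sketch, where the secret-store command is hypothetical (any CI/CD secret mechanism works):
# supply credentials from the environment at deploy time; nothing secret is committed
export DATABASE_USER="postgres"
export DATABASE_PASSWORD="$(my-secret-store get db-password)"   # hypothetical command
npx prisma migrate deploy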

Related

Include zipped secure connect file when build Next.js app on Amplify

There is a driver for connecting to the DataStax Astra DB Cassandra database in Node.js called 'cassandra-driver'. For a legacy connection it uses a connection secret-key file named secure-connect-{DB-Name}.zip, with syntax like this:
const { Client } = require('cassandra-driver');

const client = new Client({
  cloud: {
    secureConnectBundle: 'Address of the zipped file'
  },
  credentials: {
    username: 'This is client_id',
    password: 'This is client_secret',
  },
});
Locally this syntax works well, but when I deploy on AWS Amplify the file is not included in the Next.js bundle, so a file-not-found error is raised. Now the question: is there any way to keep the file inside the Next.js bundle on Amplify itself, without uploading it to external storage (like S3, a silly way!)?

Snowflake Connectivity from a .NET Core API Service

I have developed an API service (a C# program) using .NET Core 3.1. I use the Snowflake.Client library in my API service to connect to a Snowflake database. We have 3 Snowflake accounts (Dev, UAT and PROD). Recently the Snowflake database name was changed in these accounts, and now the name differs per account: for example, it is DataWareHouse_DEV in the Dev account, DataWareHouse_UAT in UAT and DataWareHouse in PROD.
Currently the Snowflake.Client constructor accepts only four parameters; see the code snippet below:
public SnowflakeClient(string user, string password, string account, string region = null);
My Configuration file:
"SnowflakeSettings": {
"Account": "AB4444",
"User": "SOME_SERVICE_ACCOUNT",
"Password": "Password#123!",
"Region": "east-us.azure"
}
I need help and input to handle this: I want to pass another parameter, DatabaseName, so that I can add it to the configuration. How do I pass another parameter besides the user, password, account and region, and what are the appropriate library methods to use for Snowflake connectivity?
Thanks
Vam
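Assuming the Snowflake.Client library also exposes a constructor that takes AuthInfo and SessionInfo objects (worth verifying against the library's README), the database name could be passed through SessionInfo; a sketch, where settings is the SnowflakeSettings section bound to a POCO extended with a Database property:
// sketch only: constructor overload and property names assumed from the library's docs
var client = new SnowflakeClient(
    new AuthInfo
    {
        User = settings.User,
        Password = settings.Password,
        Account = settings.Account,
        Region = settings.Region
    },
    new SessionInfo
    {
        Database = settings.Database   // e.g. "DataWareHouse_DEV" in the Dev environment
    });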

How to use AWS SSM parameter for token in provider github?

This is the code snippet in my main.tf file:
provider "github" {
token = var.github_token_ssm
owner = var.owner
}
data "github_repository" "github" {
full_name = var.repository_name
}
The GitHub token is stored in an AWS Secrets Manager secret.
If the value of the token is a hardcoded GitHub token, it works fine.
If the value of the token is an AWS Secrets Manager ARN (e.g. arn:aws:secretsmanager:us-east-1:xxxxxxxxxxxx:secret:xxxx-Github-t0UOOD:xxxxxx), it does not work.
I don't want to hardcode the GitHub token in the code. How can I use the Secrets Manager secret for the token above?
As far as I know, Terraform does not support AWS Secrets Manager here (but you can use Vault to store secrets).
You can also deploy it with a TF_VAR environment variable:
export TF_VAR_db_username=admin TF_VAR_db_password=adifferentpassword
You can also run a script that pulls the secret from AWS and stores it in an environment variable, as sketched below.
Just remember to secure your state file (the password will exist there in clear text).
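A sketch of that script approach, assuming the AWS CLI is configured with access to the secret (secret id copied from the question):
# fetch the token from Secrets Manager and expose it as a Terraform variable
export TF_VAR_github_token_ssm=$(aws secretsmanager get-secret-value \
  --secret-id "arn:aws:secretsmanager:us-east-1:xxxxxxxxxxxx:secret:xxxx-Github-t0UOOD" \
  --query SecretString --output text)
terraform plan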

How to set appsettings.json sensitive data into encrypted/hashed form using PowerShell and decrypt it in C# code

I am having a problem securing sensitive information stored as plain text in appsettings.json.
Currently my application is in .NET Core, and it reads those sensitive configuration values from appsettings.json, where they are stored in plain text.
For example:
{
  "UserName": "ABC",
  "Password": "xyz"
}
I want to make them encrypted/secured/masked so that an unauthorized user cannot read the data. Another option would be to encrypt appsettings.json at deployment time and decrypt it in memory when the configuration is used. How can I do that?
Any help would be appreciated.
With .NET Core you can use encrypted JSON via the EncryptedJsonConfiguration package. To add the package, install it using the NuGet manager or PowerShell:
Install-Package Miqo.EncryptedJsonConfiguration
Then in your Program.cs:
var key = Convert.FromBase64String(Environment.GetEnvironmentVariable("SECRET_SAUCE"));

Host.CreateDefaultBuilder(args)
    .ConfigureAppConfiguration((hostingContext, config) =>
    {
        config.AddEncryptedJsonFile("settings.ejson", key);
    })
    ...
Then in your Startup class:
services.AddJsonEncryptedSettings<AppSettings>(_configuration);
For more info you should check: https://github.com/miqoas/Miqo.EncryptedJsonConfiguration
The most recommended way to create encrypted files is the Kizuna command-line tool.
Read more: https://github.com/miqoas/Kizuna
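The Program.cs snippet above reads the key from the SECRET_SAUCE environment variable. One way to generate such a key is a one-off like the following sketch (assuming the library expects a Base64-encoded 256-bit key; check its docs):
// one-off key generation; store the printed value in the SECRET_SAUCE environment variable
var key = new byte[32];                                        // 256-bit key (size assumed)
System.Security.Cryptography.RandomNumberGenerator.Fill(key);  // cryptographically secure RNG
Console.WriteLine(Convert.ToBase64String(key));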

How to upload files or images on Hasura GraphQL Engine

Example:
upload a file to the server and save the resulting path to the database; only authenticated users should be able to upload files
How can this be implemented?
To summarize, we have 3 ways:
client uploads to S3 (or a similar service), gets the file URL, then makes an insert/update mutation to the right table
custom uploader: write an application/server that uploads files and mutates the db, and use nginx routing to redirect some requests to it
custom resolver using schema stitching (example)
If you are uploading files to AWS S3, there is a simple way that doesn't require launching another server to process the file upload, or creating a handler for a Hasura action.
Basically, when you upload files to S3, it's better to get a signed URL from the backend and upload to S3 directly. By the way, for hosting multiple image sizes, this approach is easy and painless.
The critical point is how to get the S3 signed URL for the upload.
In Node.js, you can do:
const AWS = require("aws-sdk");

const s3 = new AWS.S3({ apiVersion: "2006-03-01" });
const signedUrl = s3.getSignedUrl("putObject", {
  Bucket: "my-bucket",
  Key: "path/to/file.jpg",
  Expires: 600,
});
console.log("signedUrl", signedUrl);
An example signedUrl looks like https://my-bucket.s3.amazonaws.com/path/to/file.jpg?AWSAccessKeyId=AKISE362FGWH263SG&Expires=1621134177&Signature=oa%2FeRF36DSfgYwFdC%2BRVrs3sAnGA%3D.
Normally you would put the above code in a handler hosted on AWS Lambda or Glitch, and add some logic for authorization, perhaps even adding a row to a table.
You can see that the most important part is Signature=oa%2FeRF36DSfgYwFdC%2BRVrs3sAnGA%3D. How can we make it easier to compute the Signature?
After digging into the AWS JS SDK, we can find that the signature is computed here:
return util.crypto.lib.createHmac(fn, key).update(string).digest(digest);
where
fn = 'sha1'
string = 'PUT\n\n\n1621135558\n/my-bucket/path/to/file.jpg'
digest = 'base64'
It's just an HMAC-SHA1 over a string in a certain format. This means we can use Hasura computed fields and Postgres crypto functions to achieve the same result.
So if you have a table "files"
CREATE TABLE files (
  id SERIAL,
  created_at timestamp,
  filename text,
  user_id integer
);
you can create a SQL function
CREATE OR REPLACE FUNCTION public.file_signed_url(file_row files)
RETURNS text
LANGUAGE sql
STABLE
AS $function$
  SELECT ENCODE(HMAC(
    'PUT' || E'\n' || E'\n' || E'\n' ||
    (cast(extract(epoch from file_row.created_at) as integer) + 600)
    || E'\n' || '/my-bucket/' || file_row.filename,
    'AWS_SECRET', 'SHA1'), 'BASE64')
$function$
Finally, follow this guide to expose the computed field to Hasura; a query sketch using it follows below.
This way you don't need to add any backend code, and permissions can be handled entirely in Hasura.
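Once the computed field is tracked, clients can fetch the signed URL alongside the row. A hypothetical query (the exposed field name is assumed to match the function name):
query {
  files {
    id
    filename
    file_signed_url   # computed field backed by public.file_signed_url
  }
}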
