How to set up an AWS DynamoDB connection with access key, access secret and access token in DBeaver?

I would like to ask if it is possible to set up a new AWS DynamoDB connection in DBeaver with the
access key, access secret and access token that are available in my AWS SSO account.
In DBeaver I am able to enter the access key and access secret, but the access token field is missing from the connection settings.
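One possible workaround, assuming your DBeaver driver settings can use the AWS default credential chain or a named profile rather than only an explicit key pair, is to write the temporary SSO credentials into the standard AWS credentials file, where the token has its own key (aws_session_token). The profile name sso-temp and the values below are placeholders:

    # ~/.aws/credentials -- standard AWS SDK credentials file
    [sso-temp]
    aws_access_key_id     = AKIAEXAMPLEKEY
    aws_secret_access_key = exampleSecretAccessKey
    # the temporary token issued by AWS SSO goes here
    aws_session_token     = exampleSessionToken

Pointing the connection at that profile lets the driver pick up all three values, including the token that has no field of its own in the dialog.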

Related

How to access a password from a third-party provider such as Vault or LastPass in Google Cloud Composer?

Cloud Composer doesn't support storing the SMTP password in the environment. According to the tutorial we can specify a command that returns the SMTP password; I have tried a bash command to export an env var from airflow.cfg, but failed to store the password in airflow.cfg.
Our G Suite credential is stored in LastPass, so the ideal way is to connect Composer to LastPass to retrieve the LDAP credential. Some applicable solutions I can think of right now: 1) move the credential from LastPass to Vault, enable Vault in Composer via SSH and use vault kv get airflow/connections/smtp_default in Composer's smtp_password_cmd; 2) create connections in Airflow with LastPass and retrieve them with airflow.hooks.base in the code base (might need excessive config for security reasons); 3) easiest way, hide the hardcoded Gmail password somewhere (not sure where, though).
Can someone point out a direction for retrieving the LDAP credential from third-party storage (we don't want to use Google Secret Manager at the moment) in Cloud Composer?
You can use an existing secrets backend, or add your own custom secrets backend if none of them suits you. The existing backends include Vault, Google Secret Manager and AWS Secrets Manager, but rolling your own backend is super simple.
https://airflow.apache.org/docs/apache-airflow/2.2.0/security/secrets/secrets-backend/index.html
In the case of SMTP you need to define a connection (any type; http will do), which you then reference by its connection id, and the connection should carry the user/password for SMTP.
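As a rough sketch of such a custom backend (the module path, class name and the fetch_from_lastpass helper are hypothetical; the get_conn_uri hook is the one described in the Airflow 2.2 docs linked above):

    # my_secrets/lastpass_backend.py (hypothetical module)
    from typing import Optional

    from airflow.secrets import BaseSecretsBackend


    def fetch_from_lastpass(item: str) -> str:
        # Placeholder: wire this up to the LastPass CLI/API of your choice.
        raise NotImplementedError


    class LastPassSecretsBackend(BaseSecretsBackend):
        """Resolve Airflow connections from LastPass (sketch only)."""

        def get_conn_uri(self, conn_id: str) -> Optional[str]:
            # Returning None lets Airflow fall through to its other
            # secrets backends for connection ids we don't manage.
            if conn_id != "smtp_default":
                return None
            password = fetch_from_lastpass("gsuite-smtp")
            # "any type - http will do", per the answer above
            return f"http://myuser:{password}@smtp.gmail.com:587"

You then register it in the [secrets] section of airflow.cfg, e.g. backend = my_secrets.lastpass_backend.LastPassSecretsBackend.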

Quarkus custom credentials provider for MySQL

I have a Quarkus application which runs as a plain Java service on a server; it is not deployed as a Docker container image.
I've defined a custom credentials provider for decrypting the MySQL datasource password in my application, instead of using Vault storage for the encrypted password.
I created the credentials provider, registered my provider classes, and set the key quarkus.datasource.credentials-provider=custom in application.properties.
When I try to run the application it does not pick up my credentials provider class and gives the following error:
unable to find credentials provider of type default
Is there a way to run the application as a service on my prod server while providing an encrypted userid/password for MySQL?

Restrict access to Google Cloud VM to only Firebase server

I have a Google Cloud VM instance running a REST API server.
I want to remove all public access to the microservice VM; only Firebase, which represents my frontend server, should have access to the microservice server on the VM.
My thought was to block all access to the VM and allocate an IP in an internal virtual private network, so that it is accessible by the Firebase server.
I started researching VPC (Virtual Private Cloud) connectors here:
https://firebase.google.com/docs/storage/gcp-integration
However, that documentation is not very good and it is about Google Cloud Storage.
Is it possible to achieve this functionality with Firebase and a Google Cloud VM instance?
Step 1: Create a custom-mode private VPC network in Google Cloud by following the steps in [1].
Step 2: Select your existing Google Cloud VM instance and configure it to have a private IP by following the steps in [2].
Step 3: Create a Serverless VPC Access connector in the Google Cloud Console by following the steps in [3]; see the scripted sketch after the references.
Step 4: Edit your Cloud Functions to add the connector created in step 3, following the steps in [4].
Step 5: To add the connector to Firebase functions, follow the Stack Overflow answer (and its comments) in [5].
[1] https://cloud.google.com/vpc/docs/using-vpc#create-custom-network
[2] https://cloud.google.com/compute/docs/ip-addresses/reserve-static-internal-ip-address#how_to_reserve_a_static_internal_ip_address
[3] https://cloud.google.com/functions/docs/networking/connecting-vpc#create-connector
[4] https://cloud.google.com/vpc/docs/configure-serverless-vpc-access#functions
[5] https://stackoverflow.com/a/55825894/15803365
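If you prefer to script step 3 rather than click through the console, a hedged sketch with the google-cloud-vpc-access Python client is below; the project, region, connector name, network and IP range are all placeholders you would replace with your own:

    # pip install google-cloud-vpc-access
    from google.cloud import vpcaccess_v1

    client = vpcaccess_v1.VpcAccessServiceClient()

    # Create a Serverless VPC Access connector in the custom network
    # from step 1; the /28 range must be unused in that network.
    operation = client.create_connector(
        parent="projects/my-project/locations/us-central1",
        connector_id="my-connector",
        connector=vpcaccess_v1.Connector(
            network="my-custom-vpc",
            ip_cidr_range="10.8.0.0/28",
        ),
    )
    print(operation.result())  # blocks until the connector is ready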
What you can do here is use a signed JSON Web Token (JWT). The secret used for signing will live both on your server side and in Firebase Functions. As a best practice, store that secret in Google Cloud KMS and access it from there whenever you need it.
Let me briefly explain the process and why I think it's the best choice for you.
For a system running outside of Compute Engine called "Host1" (the Firebase server) and a Compute Engine instance called "VM1" (the backend server), VM1 can connect to Host1 and Host1 can validate the identity of that instance with the following process:
1. VM1 establishes a secure connection to Host1 over a secure connection protocol of your choice, such as HTTPS.
2. VM1 requests its unique identity token from the metadata server and specifies the audience of the token. In this example, the audience value is the URI for Host1. The request to the metadata server includes the audience URI so that Host1 can check the value later during the token verification step.
3. Google generates a new unique instance identity token in JWT format and provides it to VM1. The payload of the token includes several details about the instance and also includes the audience URI. Read Token Contents for a complete description of the token contents.
4. VM1 sends the identity token to Host1 over the existing secure connection. Host1 decodes the identity token to obtain the token header and payload values.
5. Host1 verifies that the token is signed by Google by checking the audience value and verifying the certificate signature against the public Google certificate.
6. If the token is valid, Host1 proceeds with the transmission and closes the connection when it is finished. Host1 and any other systems should request a new token for any subsequent connections to VM1.
You can refer to the Verifying the instance identity documentation for more details.
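A minimal sketch of both ends in Python, assuming the requests and google-auth packages; the audience URL is a placeholder for Host1's real URI:

    # --- On VM1: fetch a Google-signed identity token from the metadata server ---
    import requests

    AUDIENCE = "https://host1.example.com"  # placeholder for Host1's URI

    METADATA_URL = (
        "http://metadata.google.internal/computeMetadata/v1/"
        "instance/service-accounts/default/identity"
    )
    token = requests.get(
        METADATA_URL,
        params={"audience": AUDIENCE, "format": "full"},
        headers={"Metadata-Flavor": "Google"},  # required by the metadata server
    ).text

    # --- On Host1: check the Google signature and the audience claim ---
    import google.auth.transport.requests
    from google.oauth2 import id_token

    claims = id_token.verify_token(
        token,
        google.auth.transport.requests.Request(),
        audience=AUDIENCE,  # rejects tokens minted for a different audience
    )
    # With format=full the payload carries instance details.
    print(claims["google"]["compute_engine"]["instance_id"])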

How can I grant server-side access to my Firebase DB with rules?

I have a Firebase DB. It's still in the sandbox stage and I don't have any rules yet.
I have a collaborator who is maintaining a server that generates data. I would like that server to be able to push data to my Firebase database.
Is there any way (perhaps by generating a new secret key, or a service account for that DB) that I can grant that server Write permissions to only certain fields in my DB?

Kaa sandbox cqlsh user and password

What are the login name and password for Cassandra cqlsh?
I do not see data in the tables ep_profile, ep_user, etc. under anonymous access.
Thanks.
In the Sandbox no additional users are created for the Cassandra database; there is only the default user "cassandra".
You can see the data using anonymous access.
By default the NoSQL DB used in the Sandbox is MongoDB, so check your NoSQL database configuration.
Configure it to use Cassandra if that wasn't set, restart the kaa-node service and connect some client to Kaa.
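If your client does demand credentials (for example when PasswordAuthenticator is enabled), Cassandra's stock superuser is cassandra/cassandra. A quick connectivity check with the Python cassandra-driver package might look like the sketch below; the keyspace name kaa is an assumption about the Sandbox schema:

    from cassandra.auth import PlainTextAuthProvider
    from cassandra.cluster import Cluster

    # Stock Cassandra superuser; only needed when password auth is enabled.
    auth = PlainTextAuthProvider(username="cassandra", password="cassandra")
    cluster = Cluster(["127.0.0.1"], auth_provider=auth)
    session = cluster.connect("kaa")  # assumption: the Sandbox keyspace is "kaa"

    # ep_user is one of the tables mentioned above.
    for row in session.execute("SELECT * FROM ep_user LIMIT 10"):
        print(row)

    cluster.shutdown()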
