I'm deploying Airflow 2 on GKE Autopilot using the Helm chart and have provisioned a Cloud SQL instance (MySQL) to be used as Airflow's database.
I have created (using kubectl) a Kubernetes secret with this connection string as its value and want to pass it as an env var to all Airflow pods. So I tried to provide that in the
env: []
section of this chart (line no 239), but that section cannot use the valueFrom attribute; it needs a literal value. So I want to know what the ways are by which I can refer to a secret in this Helm chart and provide it as an env var value to all the containers this chart deploys.
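(For context, this is roughly what I tried; the env var name is a placeholder, and valueFrom is the standard Kubernetes syntax that this section of the chart rejects:)

env:
  - name: AIRFLOW_DB_CONNECTION   # hypothetical name
    valueFrom:                    # not accepted here; the chart expects a literal value
      secretKeyRef:
        name: mydatabase
        key: connection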
Answering my own question so others can find the correct solution:
1. Create the secret with a connection key whose value is the database URI.
2. Disable the postgres deployment in the chart's values.yaml.
3. Set data.metadataSecretName to the secret created in step 1. Airflow will pick it up and inject it as the connection URI.
The answer by Harsh Manvar is still valid and correct, but it is better suited to injecting arbitrary secrets as env vars. For changing the database and providing a custom URI, the approach I took is the recommended one - https://airflow.apache.org/docs/helm-chart/stable/production-guide.html#database
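A minimal values.yaml sketch of the three steps (the secret name mydatabase and the URI are placeholders):

# Step 1, beforehand:
#   kubectl create secret generic mydatabase \
#     --from-literal=connection=mysql://user:pass@host:3306/airflow
postgresql:
  enabled: false                   # step 2: disable the bundled postgres
data:
  metadataSecretName: mydatabase   # step 3: Airflow injects this as the connection URI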
You can check out line no. 244, which injects the secret into all pods.
I think this will do the same thing, since we can inject the secret as an env variable this way:
# Secrets for all airflow containers
secret: []
# - envName: ""
# secretName: ""
# secretKey: ""
values.yaml: https://github.com/apache/airflow/blob/main/chart/values.yaml#L243
Documentation details: https://github.com/apache/airflow/blob/main/docs/helm-chart/adding-connections-and-variables.rst#connections-and-sensitive-environment-variables
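For example, with the placeholders filled in (the env var name and secret name here are hypothetical; secretKey matches whatever key the Kubernetes secret holds):

secret:
  - envName: "AIRFLOW_CONN_MY_DB"    # exposed to every Airflow container
    secretName: "my-airflow-secret"  # existing k8s secret (hypothetical)
    secretKey: "connection"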
I'm trying to understand how to use secrets in Airflow.
I've configured HashiCorp Vault as a secrets backend in Airflow:
AIRFLOW__SECRETS__BACKEND: "airflow.contrib.secrets.hashicorp_vault.VaultBackend"
AIRFLOW__SECRETS__BACKEND_KWARGS: '{"url":"http://vault:8200","token":"My_TOKEN_TO_VAULT","variables_path":"variables","mount_point":"airflow","connections_path":"connections"}'
The variable was created with:
docker exec -it VAULT_DOCKER_ID sh
vault login My_TOKEN_TO_VAULT
vault secrets enable -path=airflow -version=2 kv
vault kv put airflow/variables/slack_token value=SOMETHING
Now I'm trying to use this variable in my DAG.
A simple Variable.get('slack_token') indeed works, but when I try to use it to connect to Slack I get an error.
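For reference, the lookup itself is minimal (the Vault path airflow/variables/slack_token follows from the mount_point and variables_path above):

from airflow.models import Variable

# Resolved through the configured Vault secrets backend
slack_token = Variable.get("slack_token")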
In the process of debugging, I noticed that this variable is printed as "***", so I suppose it is encrypted.
How do I get access to it?
Thanks :)
I have been trying to create VPC infrastructure in AWS through Terraform, but I am unable to run the terraform apply command. Has anyone had a similar problem while using a free trial account?
Error: Error creating VPC: UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: 4HZVo3-eWCS-YLhRy55P_0T13F_fPtA29TYrJrSe5_dyPxIcqRbh7_wCcrCZr2cpmb-B5--_fxVaOngBfHD_7yfnPH7NLf1rrqpb7ge1mvQrK8P0Ltfpgpm37nZXezZUoYf1t4peB25aCxnbfeboHpgJjcFnHvqvf5so5G2PufnGZSB4FUZMfdaqppnJ-sNT7b36TonHUDNbLhBVUl5Fwd8d02R-6ZraRYvDx-o4lDfP9xSWs6PMUFXNr1qzruYaeMYMxIe-9kGOQptgBLYZXsxr966ajor-p6aLJAKlIwPGN7Iz7v893oGpGgz_8wxTv4oEb5GnfYOuPOqSyEMLKI69b2JUvVU1m4tCcjKBaHJARP5sIiFSGhh4lb_E0_cKkmmFfKzyET2h8YkSD8U9Lm4rRtGbAEJvIoDZYDkNxlW7W2XvsccmLnQFeSxpLolVhguExkP7DT9uXffJzFEjQn-VkhqKnWlwv0vxIcOcoLP04Li5WAqRRr3l7yK2bYznfg
│ status code: 403, request id: 5c297a4d-7bcf-4bb4-b311-37480e1f26b8
Make sure you have properly set up your AWS credentials and permissions.
Check these two files:
~/.aws/credentials
~/.aws/config
These docs can help you:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
Did you configure your access keys?
provider "aws" {
  region     = "us-west-2"
  access_key = "my-access-key"
  secret_key = "my-secret-key"
}
There are multiple ways to do it (described here).
My example above can be a good start, but you don't want to commit those keys, so I recommend configuring them in ~/.aws/credentials (as you need them for the AWS CLI anyway). The aws provider will pick them up automatically, so you don't need to define them anywhere in your Terraform code.
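A minimal ~/.aws/credentials sketch (placeholder values):

[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXx

The provider block then only needs the region:

provider "aws" {
  region = "us-west-2"
}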
I am trying to connect from GCP Endpoints to a Cloud SQL (PostgreSQL) database in a different project. My Endpoints backend is an App Engine app in the flexible environment using Python.
The Endpoints API works fine for non-db requests, and for db requests when run locally. But the deployed API produces this result when DB access is required:
{
  "code": 13,
  "message": "BAD_GATEWAY",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "application"
    }
  ]
}
I've followed this link (https://cloud.google.com/endpoints/docs/openapi/get-started-app-engine) to create the Endpoints project, and this one (https://cloud.google.com/appengine/docs/flexible/python/using-cloud-sql-postgres) to link to Cloud SQL from a different project.
The one difference is that I don't use the SQLALCHEMY_DATABASE_URI env variable to connect, but instead take the connection string from a config file and use it with psycopg2 SQL strings. This code works on GCE servers in the same project.
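A minimal sketch of that psycopg2 connection over the Cloud SQL unix socket (all values are placeholders matching the app.yaml below):

import psycopg2

# App Engine flex exposes a unix socket for each instance listed in
# beta_settings.cloud_sql_instances
conn = psycopg2.connect(
    dbname="postgres",
    user="postgres",
    password="password",
    host="/cloudsql/cloudsql-project-id:us-east1:instance-id",
)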
I also double-checked that, in the project with the PostgreSQL db, Cloud SQL Editor access was given to the service account of the Endpoints project. And the db connection string works fine if the App Engine app is in the same project as the Cloud SQL db (i.e. not coming from the Endpoints project).
Not sure what else to try. How can I get more details on the BAD_GATEWAY? That's all that's in the Endpoints log file, and there's nothing in the Cloud SQL log file.
Many thanks --
Dan
Here's my app.yaml:
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app

runtime_config:
  python_version: 3

env_variables:
  SQLALCHEMY_DATABASE_URI: >-
    postgresql+psycopg2://postgres:password@/postgres?host=/cloudsql/cloudsql-project-id:us-east1:instance-id

beta_settings:
  cloud_sql_instances: cloudsql-project-id:us-east1:instance-id

endpoints_api_service:
  name: api-project-id.appspot.com
  rollout_strategy: managed
And requirements.txt:
Flask==0.12.2
Flask-SQLAlchemy==2.3.2
flask-cors==3.0.3
gunicorn==19.7.1
six==1.11.0
pyyaml==3.12
requests==2.18.4
google-auth==1.4.1
google-auth-oauthlib==0.2.0
psycopg2==2.7.4
(This should be a comment, but the formatting would really worsen the reading, so I will update here.)
I am trying to reproduce your error, and I have come up with some questions:
How are you handling the environment variables from the tutorials? Have you hard-coded them or are you using environment variables? They are reset with Cloud Shell (if you are using Cloud Shell).
This is not clear to me: do you see any kind of log file in Cloud SQL (without errors), or don't you see any logs at all?
The Cloud SQL, app.yaml and requirements.txt configurations are related. Could you provide more information on this? If you update the post, be careful and do not post usernames, passwords or other sensitive information.
Are both projects in the same region/zone? Sometimes this is a requirement, but I don't see anything pointing to this in the documentation.
My intuition points to a credentials issue, but it would be useful if you added more information to the post to better understand where the issue comes from.
I am trying to get FOSElasticaBundle working on AWS Elasticsearch. At the moment my development env is all set up and working perfectly, using a Docker container for Elasticsearch based on:
FROM docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
If I populate my Elasticsearch using:
docker-compose exec php php /var/www/symfony/bin/console fos:elastica:populate --env=prod
this all works perfectly and the index has searchable items in it.
However, moving this to AWS throws up an issue.
I have set up an Elasticsearch service (v6.2) within AWS using their VPC option. I am able to connect to it (I know it connects, as I got connection errors until I used this in the config):
fos_elastica:
    clients:
        default:
            transport: 'AwsAuthV4'
            aws_access_key_id: '%amazon.s3.key%'
            aws_secret_access_key: '%amazon.s3.secret%'
            aws_region: '%amazon.s3.region%'
When I run
php bin/console fos:elastica:populate --env=prod
it looks like it is populating:
3200/6865 [=============>--------------] 46% 4 secs/9 secs
Populating ppc/keywords
Refreshing ppc
But once complete, my Amazon console shows 0 searchableDocuments, and if I run a query I get nothing back.
Has anyone come across this, and any idea how to solve it? Even being able to get more feedback from populate would help me work out where it is going wrong.
Edit 17:29 31/5
So I created an Elasticsearch install in a Docker container on a standard EC2 instance and pointed at that, and it indexes perfectly, so it is something to do with the connection to the AWS service. One of the differences between them is that the Docker install doesn't have to use:
transport: 'AwsAuthV4'
aws_access_key_id: '%amazon.s3.key%'
aws_secret_access_key: '%amazon.s3.secret%'
aws_region: '%amazon.s3.region%'
I presume, then, that it's something to do with this, though I would have thought that if it wasn't authorised I would get an error. Although it's working currently, I would prefer to use the Amazon service, just so it takes one install out of my life to keep an eye on!
I had the same problem, but without using an access key.
The solution was adding the transport key with the value https to the client config:
fos_elastica:
    clients:
        default:
            host: vpc-xxxxxxxxxxxxxxxxxxxxxxxxx.es.amazonaws.com
            port: 443
            transport: https
My problem was empty aws_access_key_id and aws_secret_access_key values.
Please check them.
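Those placeholders are Symfony container parameters, so a quick way to check is to confirm they are actually set, e.g. in parameters.yml (hypothetical values):

parameters:
    amazon.s3.key: AKIAXXXXXXXXXXXXXXXX
    amazon.s3.secret: xXxXxXxXxXxXxXxXxXxX
    amazon.s3.region: us-east-1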
I have followed the instructions on http://cloudfoundry.github.com/docs/running/deploying-cf/openstack/install_microbosh_openstack.html to install micro BOSH in a VM.
I'm a little confused about the micro_bosh.yml:
name: microbosh-openstack

env:
  bosh:
    password: $6$u/dxDdk4Z4Q3$MRHBPQRsU83i18FRB6CdLX0KdZtT2ZZV7BLXLFwa5tyVZbWp72v2wp.ytmY3KyBZzmdkPgx9D3j3oHaDZxe6F.

level: DEBUG

network:
  name: default
  type: dynamic
  label: private
  ip: 192.168.22.34

resources:
  persistent_disk: 4096
  cloud_properties:
    instance_type: m1.small

cloud:
  plugin: openstack
  properties:
    openstack:
      auth_url: http://10.0.0.2:5000/v2.0/tokens
      username: admin
      api_key: f00bar
      tenant: admin
      default_key_name: admin-keypair
      default_security_groups: ["default"]
      private_key: /root/.ssh/admin-keypair.pem
What is the api_key used for? I don't understand the meaning of this key.
And what is the default key name?
Can someone please explain these configuration options better?
thanks
Bruno
EDIT
the answer to this question can be found here:
https://github.com/drnic/bosh-getting-started/blob/master/create-a-bosh/creating-a-micro-bosh-from-stemcell-openstack.md
http://10.0.0.2:5000/v2.0/tokens
likely refers to the Keystone Service API.
This API authenticates you to OpenStack's Keystone identity service. All REST API services are catalogued there in the service catalog. Additionally, all of OpenStack relies on Keystone to authenticate all API queries.
Knowing nothing about BOSH, the attribute 'api_key' requires better context for me.
Generally, OpenStack doesn't require an API key in its own concept of API authentication.
More about openstack api authentication here:
http://docs.openstack.org/api/quick-start/content/index.html#Getting-Credentials-a00665
However, there is a concept of an API key in relation to EC2 keys. These can be generated with this command:
keystone ec2-credentials-create
My guess is that's what it requires there.
More alternatives:
The credentials could be in the novarc file generated for your OpenStack project with the nova-manage project zipfile command. This is also available from the Horizon interface.
Alternatively, it could refer to a provider-specific API key such as Rackspace's (I doubt this):
http://docs.rackspace.com/servers/api/v2/cs-devguide/content/curl_auth.html
'default_key_name' probably refers to the name of a keypair that has been previously registered with OpenStack. This is a keypair that can be injected into an image at instance run time. It should correspond to the .pem filename. The key needs to be available to your user and to the tenant you choose in the config.
Check out a keypair creation / use example here:
http://docs.openstack.org/developer/nova/runnova/managing.instances.html
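For illustration, with the era-appropriate nova CLI, a keypair matching the config above could be created like this (the name and path are the ones from the question):

# Generates a keypair named admin-keypair; the private key is printed to stdout
nova keypair-add admin-keypair > /root/.ssh/admin-keypair.pem
chmod 600 /root/.ssh/admin-keypair.pem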
Best effort, man. Hope that gives you what you need.