Is there any way we can integrate Terraform with FastAPI? [closed] - fastapi

Is there a way to call a Terraform deployment utility from a service endpoint (FastAPI, Flask)?
To summarize: how do you integrate a service endpoint with Terraform?

After exploring this a bit, I observed that there is no direct module that can be used for this, so I arrived at a routed approach.
Following are the steps that can be used:
1. Use a template for deployment of the infrastructure on AWS.
2. Invoke an Ansible playbook:
---
- hosts: localhost
  become: no
  gather_facts: False
  vars:
    ansible_python_interpreter: /usr/bin/python3
  tasks:
    - name: Deployment CFT via terraform for aws infra
      community.general.terraform:
        project_path: '/root/terraform-sample-ansible'
        state: present
3. The Terraform module can only be used from within the Ansible playbook.
4. Add a task to the playbook that dumps the JSON output to a file after the deployment.
5. Now write the service endpoint code, say endpoint.py, using FastAPI.
Snippet:
import json
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def get_instances():
    """REST endpoint to query instances"""
    # Run the Ansible playbook that drives the Terraform deployment
    play_run = run_playbook(playbook='<path of ansible file>')
    with open("<file where output of ansible is dumped>", "r") as read_file:
        data = json.load(read_file)
    pretty_json = json.dumps(data, indent=4)
    print(pretty_json)
    return data
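The snippet assumes a run_playbook helper. A minimal sketch of such a helper, simply shelling out to ansible-playbook with subprocess (the helper name comes from the snippet above; the flags and error handling here are illustrative assumptions, not part of the original answer):
import subprocess

def run_playbook(playbook: str) -> int:
    """Run an Ansible playbook and return its exit code."""
    result = subprocess.run(
        ["ansible-playbook", playbook],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # ansible-playbook exits non-zero on failure; surface stderr for debugging
        print(result.stderr)
    return result.returncode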
This is one possible way to integrate Terraform, Ansible, and a service endpoint (FastAPI).
Thanks!

Related

How to use an SFTP connection to send files to a remote server using a public/private key (no password) in Informatica? [closed]

I am having trouble using an SFTP connection in Informatica.
I don't use passwords; the SFTP connection uses public/private key files generated on the remote server.
When I run the session, the message below is returned in the monitor:
Severity: DEBUG
Timestamp: 10/19/2020 2:58:06 PM
Node: node-XXXXXXXX
Thread: WRITER_1_*_1
Process ID: 26993
Message Code: FTP_14084
Message: Unable to access the private key or public key file. Verify that the correct file path was specified.
Can anybody explain why? Many thanks.
You need to add the keys to the file for the user that is running the Informatica process. Please ask the Informatica admin to set it up by following the steps below.
a) Check which user is running the Informatica process (we will assume it's 'infa'). Log in to the server with the user ID 'infa' and the corresponding password.
b) Create the keys on the 'source' server with the following command:
/usr/local/bin/ssh-keygen -t rsa
c) Copy the 'id_rsa.pub' file to the 'authorized_keys' file in the ~/.ssh directory on the 'remote' server.
d) Grant 777 permissions to the 'authorized_keys' file on the 'remote' server (they will be tightened in step f).
e) Log in to 'remote' with the SFTP user ID (we will assume it's 'sftp_user'). Use cat id_rsa.pub >> authorized_keys to append the keys.
f) Change the permissions of the file and directory below. Please note that SSH is very sensitive to permissions:
chmod 600 authorized_keys
chmod 700 .ssh
g) Log in to the 'source' server as 'infa' and execute the sftp command on the source server to verify:
sftp sftp_user@remote
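Outside of Informatica, one quick way to sanity-check that the key pair itself works is a short paramiko-based SFTP test (a sketch; the host, user, and key path are placeholders for your environment):
import paramiko

def check_sftp_key(host: str, user: str, key_path: str) -> None:
    """Open an SFTP session using only the private key and list the remote home directory."""
    key = paramiko.RSAKey.from_private_key_file(key_path)
    transport = paramiko.Transport((host, 22))
    transport.connect(username=user, pkey=key)
    sftp = paramiko.SFTPClient.from_transport(transport)
    print(sftp.listdir("."))  # succeeds only if key-based auth works
    sftp.close()
    transport.close()

check_sftp_key("remote", "sftp_user", "/home/infa/.ssh/id_rsa")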

BAD_GATEWAY when connecting Google Cloud Endpoints to Cloud SQL

I am trying to connect from GCP Endpoints to a Cloud SQL (PostgreSQL) database in a different project. My Endpoints backend is an App Engine app in the flexible environment, using Python.
The Endpoints API works fine for non-DB requests and for DB requests when run locally, but the deployed API produces this result when DB access is required:
{
  "code": 13,
  "message": "BAD_GATEWAY",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "application"
    }
  ]
}
I've followed this link (https://cloud.google.com/endpoints/docs/openapi/get-started-app-engine) to create the endpoints project, and this (https://cloud.google.com/appengine/docs/flexible/python/using-cloud-sql-postgres) to link to Cloud SQL from a different project.
The one difference is that I don't use the SQLALCHEMY_DATABASE_URI env variable to connect, but take the connection string from a config file and use it with psycopg2 SQL strings. This code works on Compute Engine servers in the same project.
I also double-checked that the service account of the Endpoints project was given Cloud SQL Editor access in the project with the PostgreSQL db. And the db connection string works fine if the App Engine app is in the same project as the Cloud SQL db (i.e. not coming from the Endpoints project).
Not sure what else to try. How can I get more details on the BAD_GATEWAY? That's all that's in the endpoints logfile and there's nothing in the Cloud SQL logfile.
Many thanks --
Dan
Here's my app.yaml:
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app
runtime_config:
  python_version: 3
env_variables:
  SQLALCHEMY_DATABASE_URI: >-
    postgresql+psycopg2://postgres:password@/postgres?host=/cloudsql/cloudsql-project-id:us-east1:instance-id
beta_settings:
  cloud_sql_instances: cloudsql-project-id:us-east1:instance-id
endpoints_api_service:
  name: api-project-id.appspot.com
  rollout_strategy: managed
And requirements.txt:
Flask==0.12.2
Flask-SQLAlchemy==2.3.2
flask-cors==3.0.3
gunicorn==19.7.1
six==1.11.0
pyyaml==3.12
requests==2.18.4
google-auth==1.4.1
google-auth-oauthlib==0.2.0
psycopg2==2.7.4
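For illustration, a minimal sketch of the psycopg2-over-unix-socket connection described in the question (the database name, credentials, and instance connection name are placeholders, not values from the post):
import psycopg2

# On App Engine flex, the Cloud SQL instance is exposed as a unix socket under /cloudsql
conn = psycopg2.connect(
    dbname="postgres",
    user="postgres",
    password="password",
    host="/cloudsql/cloudsql-project-id:us-east1:instance-id",
)
with conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())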
(This should be a comment, but formatting would really worsen the reading, so I will update here.)
I am trying to reproduce your error and I have come up with some questions:
How are you handling the environment variables in the tutorials? Have you hard-coded them or are you setting them as environment variables? They are reset with the Cloud Shell (if you are using Cloud Shell).
This is not clear to me: do you see any kind of log in Cloud SQL (just without errors), or do you not see logs at all?
The Cloud SQL, app.yaml, and requirements.txt configurations are related. Could you provide more information on this? If you update the post, be careful not to post usernames, passwords, or other sensitive information.
Are both projects in the same region/zone? Sometimes this is a requirement, but I don't see anything pointing to it in the documentation.
My intuition points to a credentials issue, but it would be useful if you added more information to the post so it's easier to understand where the issue comes from.

Can we add/attach a security group to the Terraform aws_cloudformation_stack resource?

I am using Terraform to provision EMR. In order to do so I am calling the
"aws_cloudformation_stack" resource and attaching a CloudFormation template to launch an EMR cluster. It's working; now I want my EMR cluster to have inbound port 22 open for SSH connections.
Please see the reference:
https://www.terraform.io/docs/providers/aws/r/cloudformation_stack.html
I can do this by attaching a security group. Could somebody please let me know how I can do this?
You should probably be able to handle this by creating the security group via Terraform, referencing it as a dependency to make sure it's created before the CloudFormation stack, and then passing the security group ID to the CloudFormation template as a parameter.

Is it possible to back up a Firebase DB? [closed]

I was wondering if there are any common practices for backing up a Firebase DB. My concern is some process accidentally wiping out our database.
Thanks!
As of the time of this question, Firebase backs up all instances daily. So while keeping your own backups may still be useful, it's not essential.
To create your own backups, you can simply curl the data:
curl https://<instance>.firebaseio.com/.json?format=export
Note that for multiple gigabytes of data, this will slow things down and lock read access for a short period. It would be better in this case to chunk the backups and work with smaller portions. The shallow parameter can help here by providing a list of keys for any given path in Firebase, without having to fetch the data first.
curl https://<instance>.firebaseio.com/.json?shallow=true
As previously mentioned, there are also several GitHub libs available for this, and incremental backups are practical with some creativity and a worker thread on the real-time SDK.
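For reference, the same REST export in Python (a sketch; the instance name is the placeholder from above, and the auth value, a legacy database secret or an access token, is an assumption you will need to adapt):
import json
import requests

url = "https://<instance>.firebaseio.com/.json"
params = {"format": "export", "auth": "<database-secret-or-token>"}

resp = requests.get(url, params=params)
resp.raise_for_status()

# Write the exported tree to a local backup file
with open("firebase-backup.json", "w") as f:
    json.dump(resp.json(), f, indent=2)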
There are now "Import Data" and "Export Data" buttons on the data page of the web interface for every project, so you can now backup your data with a button click!
Just yesterday I wrote a shell script which utilizes firebase-tools (npm install -g firebase-tools) in order to have these database dumps contained within my regular backup cronjob:
#!/bin/bash
# $1 is the Firebase projectId.
# $2 is the destination directory.
# example usage: cron_firebase.sh project-12345 /home/backups/firebase
# currently being triggered by /etc/cron.hourly/firebase-hourly.cron
PROJECTID=$1
DESTINATION=$2
FIREBASE="$(which firebase)"
NOW="$(date +"%Y-%m-%d_%H%M")"
cd $DESTINATION
$FIREBASE --project $PROJECTID database:get / > ./$PROJECTID.$NOW.json
tar -pczf $PROJECTID.$NOW.tar.gz ./$PROJECTID.$NOW.json && rm ./$PROJECTID.$NOW.json
Update: in the meantime, one can auto-backup to a Google Cloud Storage bucket:
go to Firebase Console -> Realtime Database -> and click the Backups tab.
It is now possible to back up and restore Firebase Firestore using the Cloud Firestore managed export and import service.
You do it by:
Creating a Cloud Storage bucket for your project.
Setting up gcloud for your project using gcloud config set project [PROJECT_ID].
EXPORT
Export all by calling
gcloud alpha firestore export gs://[BUCKET_NAME]
Or Export a specific collection using
gcloud alpha firestore export gs://[BUCKET_NAME] --collection-ids='[COLLECTION_ID_1]','[COLLECTION_ID_2]'
IMPORT
Import all by calling
gcloud alpha firestore import gs://[BUCKET_NAME]/[EXPORT_PREFIX]/
where [BUCKET_NAME] and [EXPORT_PREFIX] point to the location of your export files. For example - gcloud alpha firestore import gs://exports-bucket/2017-05-25T23:54:39_76544/
Import a specific collection by calling:
gcloud alpha firestore import --collection-ids='[COLLECTION_ID_1]','[COLLECTION_ID_2]' gs://[BUCKET_NAME]/[EXPORT_PREFIX]/
Full instructions are available here:
https://firebase.google.com/docs/firestore/manage-data/export-import
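The same managed export can also be triggered from Python with the Firestore Admin client (a sketch assuming the google-cloud-firestore package; [PROJECT_ID] and [BUCKET_NAME] are the placeholders used above):
from google.cloud import firestore_admin_v1

client = firestore_admin_v1.FirestoreAdminClient()
operation = client.export_documents(
    request={
        "name": "projects/[PROJECT_ID]/databases/(default)",
        "output_uri_prefix": "gs://[BUCKET_NAME]",
        # "collection_ids": ["[COLLECTION_ID_1]"],  # omit to export all collections
    }
)
print(operation.result())  # blocks until the export operation completes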
Just to expand @kato's answer using curl.
I was looking for ways to run the command every night. My solution:
1) Created a Compute Engine instance (basically a VM) in Google Cloud. You might be familiar with EC2 if you come from the AWS world.
2) Wrote a simple cronjob, something like this:
0 23 * * * /usr/bin/curl https://yourdatabaseurl.com/.json?format=export -o /tmp/backuptest_`date +\%d\%m\%y`.bk
I am sure there might be a simpler way to do this within the free tier itself, like using Cloud Functions.

Cloud Foundry - Installing Micro BOSH in a VM (OpenStack)

I have followed the instructions on http://cloudfoundry.github.com/docs/running/deploying-cf/openstack/install_microbosh_openstack.html to install Micro BOSH in a VM.
I'm a little confused about the micro_bosh.yml:
name: microbosh-openstack
env:
  bosh:
    password: $6$u/dxDdk4Z4Q3$MRHBPQRsU83i18FRB6CdLX0KdZtT2ZZV7BLXLFwa5tyVZbWp72v2wp.ytmY3KyBZzmdkPgx9D3j3oHaDZxe6F.
level: DEBUG
network:
  name: default
  type: dynamic
  label: private
  ip: 192.168.22.34
resources:
  persistent_disk: 4096
  cloud_properties:
    instance_type: m1.small
cloud:
  plugin: openstack
  properties:
    openstack:
      auth_url: http://10.0.0.2:5000/v2.0/tokens
      username: admin
      api_key: f00bar
      tenant: admin
      default_key_name: admin-keypair
      default_security_groups: ["default"]
      private_key: /root/.ssh/admin-keypair.pem
What is the api_key used for? I don't understand the meaning of this key.
And what about the default key name?
Can someone please explain these configuration options better?
Thanks,
Bruno
EDIT: the answer to this question can be found here:
https://github.com/drnic/bosh-getting-started/blob/master/create-a-bosh/creating-a-micro-bosh-from-stemcell-openstack.md
http://10.0.0.2:5000/v2.0/tokens
This likely refers to the Keystone service API.
This API authenticates you to OpenStack's Keystone identity service. All REST API services are catalogued there in the service catalog. Additionally, all of OpenStack relies on Keystone to authenticate API queries.
Knowing nothing about BOSH, the attribute 'api_key' requires better context for me.
Generally OpenStack doesn't require an API Key in its own concept of API authentication.
More about openstack api authentication here:
http://docs.openstack.org/api/quick-start/content/index.html#Getting-Credentials-a00665
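For reference, authenticating against that Keystone v2.0 tokens endpoint looks roughly like this (a sketch using the Python requests library; the URL is the one from the question, the credentials are placeholders, and the v2.0 API is long deprecated):
import requests

payload = {
    "auth": {
        "passwordCredentials": {"username": "admin", "password": "<password>"},
        "tenantName": "admin",
    }
}
resp = requests.post("http://10.0.0.2:5000/v2.0/tokens", json=payload)
resp.raise_for_status()
# The returned token id is what other OpenStack services expect as X-Auth-Token
print(resp.json()["access"]["token"]["id"])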
However, there is a concept of an API key in relation to EC2 keys. These can be generated with this command:
keystone ec2-credentials-create
My guess is that's what it requires there.
More alternatives there:
Credentials could be in the novarc file generated for your OpenStack project with the nova-manage project zipfile command. This is also available from the Horizon interface.
Alternatively, it could refer to a provider-specific API key such as Rackspace's (I doubt this):
http://docs.rackspace.com/servers/api/v2/cs-devguide/content/curl_auth.html
'default_key_name' probably refers to the name of a keypair that has been previously registered with OpenStack. This would be a keypair that can be injected into an image at instance run time. It should correspond to the .pem filename. The key would need to be available to the user and tenant you choose in the config.
Check out a keypair creation / use example here:
http://docs.openstack.org/developer/nova/runnova/managing.instances.html
Best effort man. Hope that gives you what you need.
