App on Vercel fails to connect to AWS RDS - Next.js

My Next.js app, which uses Prisma as its database ORM and is deployed to Vercel, fails to connect to a MySQL database hosted on AWS RDS: the connection times out. After a while it shows the following error:
502: BAD_GATEWAY
Code: NO_RESPONSE_FROM_FUNCTION
ID: bom1::zrmv2-1609789429213-86b5142a230c
I have also whitelisted the IP address of my Vercel-hosted app in the security group of the AWS RDS instance, but the app still times out and fails with the 502 error page. Please help.
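For reference, whitelisting a single caller in the RDS security group corresponds to an inbound rule like the sketch below (a hedged example, not the exact setup here: the security group ID, CIDR, and region are placeholders, and Vercel deployments generally do not have one fixed egress IP, so a single-IP rule may not be enough):

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # region is an assumption
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID for the RDS instance
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,  # MySQL
        "ToPort": 3306,
        "IpRanges": [{"CidrIp": "203.0.113.10/32", "Description": "app egress IP (placeholder)"}],
    }],
)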

Thank you, everyone. This has become frustrating, and I can't push it to production with my RDS database exposed to all IPs, so I am dropping this approach and converting my Next.js app to a CRA-based UI deployed to S3, so that RDS and S3 can share a common security setup. I have tested RDS with an EC2 instance sharing a common security group, and they connect and work out of the box. Thank you everyone once again.

Related

Connecting to an external (outside of Google Cloud Platform) PostgreSQL db from Google Cloud Run

I've been going around in circles looking for information on this, and all I find is how to connect to Google Cloud SQL from Google Cloud Run - that is not what I want.
I have a PostgreSQL database installed on a server outside of GCP. I have an R Shiny app, deployed as a Docker container through Google Cloud Run, that needs access to that external PostgreSQL database.
When I run my R Shiny Docker container on my local machine using Docker Desktop, it works fine: the connection is made and I see no errors. I also logged into my PostgreSQL database directly with the same username/password with no problem.
When I run my R Shiny Docker container in Google Cloud Run, I get the following errors:
Warning: Error in postgresqlNewConnection: RS-DBI driver: (could not connect xxxx#xxpostgresxx.client.ext:5432 on dbname "db_name": could not translate host name "xxpostgresxx.client.ext" to address: Name or service not known
In my *.R file, the connection is written like this:
library(RPostgreSQL)  # loads the PostgreSQL driver (and DBI) used below
drv <- dbDriver("PostgreSQL")
con <- dbConnect(drv, host = "xxx.xxx.ext", port = "5432", dbname = "db_name", user = "xxxx", password = "xxxx")
Is this an issue with my external PostgreSQL db not allowing the Google Cloud Run URL to access its database? Is it something with the "RS-DBI driver" itself? Or is there something I need to set up somewhere in GCP to get this connection to work?
I've tried loading the DBI library before the RPostgreSQL library (as someone suggested), but that didn't solve anything.
I am a newbie with Google Cloud Platform, so you'll probably have to give more detail in your explanations. Thank you so much in advance.
=====================================
UPDATE:
So, in order to make this work, I had to set up "Serverless VPC Access" with our networking people to allow my Cloud Run service to connect to resources within our private VPN network.
Cloud Run needs a public IP or DNS name by which to locate your Postgres database; it appears to be having problems using xxx.xxx.ext.
I'm unfamiliar with .ext. Is that a legitimate public Internet DNS name? If not, that's your (first) problem.
Corollary: could any other entity on the Internet, using a Postgres client and suitable credentials, connect to that Postgres database?
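One quick way to test that: a minimal sketch (Python purely for illustration; the hostname is a placeholder copied from the error message) that distinguishes a name that does not resolve from a host that resolves but is unreachable:

import socket

host, port = "xxpostgresxx.client.ext", 5432  # placeholders from the error message

try:
    addr = socket.gethostbyname(host)  # DNS resolution, the step failing in Cloud Run
except socket.gaierror as exc:
    raise SystemExit(f"name does not resolve: {exc}")

try:
    with socket.create_connection((addr, port), timeout=5):
        print("name resolves and the Postgres port is reachable")
except OSError as exc:
    print(f"name resolves to {addr} but the host is not reachable: {exc}")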

How to troubleshoot a "Communications link failure" error with Cloud Data Fusion

I have two GCP projects: a testing environment built from scratch, and a production environment with an already existing Cloud SQL for MySQL instance. My goal is to set up a replication pipeline with Data Fusion to replicate some MySQL tables.
In the testing environment I'm able to connect Data Fusion to MySQL. It is not working in the production environment; I get the following error:
Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
This error message is pretty mysterious to me. The two environments are set up exactly the same way as far as I can see, so I don't understand where this error is coming from. What can I do to better understand what's behind it?
My setup: one Cloud SQL for MySQL instance on a private IP, one VM with a SQL proxy, and one private Cloud Data Fusion instance.
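One way to narrow this down is to try a plain MySQL connection to the same endpoint from a VM in the same network the private Data Fusion instance uses. A hedged sketch (host, user, and password are placeholders): if this also fails, the problem is at the network level (firewall rules, private services access/peering, or the proxy) rather than in Data Fusion itself.

import pymysql  # pip install pymysql

try:
    conn = pymysql.connect(
        host="10.0.0.5",  # placeholder: Cloud SQL private IP or the proxy VM address
        port=3306,
        user="replication_user",  # placeholder credentials
        password="********",
        connect_timeout=5,
    )
    print("MySQL connection succeeded, server:", conn.get_server_info())
    conn.close()
except pymysql.err.OperationalError as exc:
    print("MySQL connection failed:", exc)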

Unable to access website, SSH, or FTP - Google Cloud and WordPress

I have a WordPress site hosted on Google Cloud, and it was working very well.
With no apparent cause, it stopped working and I can't access it, neither the front end nor the admin panel.
I can't access it via FTP or the SSH console.
The VM on Google Cloud is still running as far as I can see.
Errors I get:
When trying to access the website in Google Chrome:
ERR_CONNECTION_TIMED_OUT
When trying to access FTP via FileZilla:
Error: Connection timed out after 20 seconds of inactivity
Error: Could not connect to server
When trying to access SSH:
Connection via Cloud Identity-Aware Proxy Failed. Code: 4003. Reason: failed to connect to backend. You may be able to connect without using the Cloud Identity-Aware Proxy.
I just want to update this issue.
The problem was the memory quota.
I've increased the amount of memory, restarted the VM, and everything went back to working.
Thanks
This page with SSH troubleshooting steps might be able to help you.
The issue could be solved by working through these troubleshooting steps. I think the first one is the most likely cause of your issue, since you mentioned it did work before.
Does the instance have a full disk? Try to expand it!
Is the firewall correctly set up? Check your firewall rules and ensure that the default-allow-ssh rule is present (see the sketch after this list).
Check your IAM permissions: do you have the roles required to connect to the VM?
Enable the serial console from your instance settings, connect, and review the logs; they might give you some useful insights.
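For the firewall item above, a small sketch (assuming the google-cloud-compute client library and a placeholder project ID) that checks whether the default SSH rule exists:

from google.cloud import compute_v1  # pip install google-cloud-compute

project = "my-project-id"  # placeholder
client = compute_v1.FirewallsClient()

# List the project's firewall rules and look for the default SSH rule.
names = [fw.name for fw in client.list(project=project)]
if "default-allow-ssh" in names:
    print("default-allow-ssh rule is present")
else:
    print("no default-allow-ssh rule found - TCP port 22 may be blocked")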

Flask SQLAlchemy Database with AWS Elastic Beanstalk - waste of time?

I have successfully deployed a Flask application to AWS Elastic Beanstalk. The application uses an SQLAlchemy database, and I am using Flask-Security to handle login/registration, etc. I am using Flask-Migrate to handle database migrations.
The problem here is that whenever I use git aws.push, it pushes my local database to AWS and overwrites the live one. I guess what I'd like to do is only ever "pull" the live one from AWS EB, and only push it in rare circumstances.
Will I be able to access the SQLAlchemy database that I have pushed to AWS? Or is this not possible? Perhaps there is some combination of .gitignore and .elasticbeanstalk settings that could work?
I am using SQLite.
Yes: your database should not be in version control; it should live on persistent storage (most likely an Elastic Block Store (EBS) volume), and you should handle schema changes (migrations) using something like Flask-Migrate.
The AWS help article on EBS should get you started, but at a high level, what you are going to do is:
Create an EBS volume
Attach the volume to a running instance
Mount the volume on the instance
Expose the volume to other instances using a Network File System (NFS)
Ensure that when new Elastic Beanstalk instances launch, they mount the NFS
Alternatively, you can:
Wait until Elastic File System (EFS) is out of preview (or request access) and mount the EFS on all of your EB-started instances once EB supports it.
Switch to the Relational Database Service (RDS) (or run your own database server on EC2) and run an instance of (PostgreSQL|MySQL|Whatever you choose) locally for testing.
The key is hosting your database outside of your Elastic Beanstalk environment. If you don't, then as the load increases, different instances of your Flask app will be writing to their own local DBs, and there won't be a "master" database that contains all the commits.
The easiest solution is to use the Amazon Relational Database Service (RDS) to host your DB as an outside service. A good tutorial that walks through this exact scenario:
Deploying a Flask Application on AWS using Elastic Beanstalk and RDS
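For example, here is a minimal sketch of pointing Flask-SQLAlchemy at an external RDS database instead of a bundled SQLite file. It assumes the RDS_* environment variables that Elastic Beanstalk sets when an RDS database is attached to the environment, and a MySQL driver (pymysql); otherwise it falls back to local SQLite for development:

import os
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)

if "RDS_HOSTNAME" in os.environ:
    # Environment variables provided by Elastic Beanstalk when RDS is attached.
    app.config["SQLALCHEMY_DATABASE_URI"] = "mysql+pymysql://{user}:{pw}@{host}:{port}/{db}".format(
        user=os.environ["RDS_USERNAME"],
        pw=os.environ["RDS_PASSWORD"],
        host=os.environ["RDS_HOSTNAME"],
        port=os.environ["RDS_PORT"],
        db=os.environ["RDS_DB_NAME"],
    )
else:
    app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///local-dev.db"  # local testing only

db = SQLAlchemy(app)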
SQLAlchemy/Flask/AWS is definitely not a waste of time! Good luck.

Access TFS from Azure Service

In my application, I have to show the work items of the logged-in user. I have tried to pull TFS work items from the application, and it works fine on the local server, but when it is deployed to the cloud server it does not work and I get an error saying 'TF400324: The remote name could not be resolved', because the cloud application is trying to reach an internal TFS server.
Is there any way to connect to an internal TFS server from an application hosted on a cloud server?
Many thanks in advance.
You can set up an Azure VPN tunnel between your cloud service and your internal network:
http://azure.microsoft.com/en-us/services/virtual-network/
Or you could expose your internal TFS to the internet:
http://msdn.microsoft.com/en-us/library/bb668967.aspx
Or you could use Azure BizTalk Hybrid Connections to enable the communication:
http://azure.microsoft.com/en-us/documentation/articles/integration-hybrid-connection-overview/
After multiple tries, the Service Bus relay binding is the one that worked to reach the on-premises environment (internal TFS) from the Azure cloud server (web role).
The link below works like a charm:
Service Bus Relay Binding
Hope it helps somebody who has faced the same issue I faced. Have fun.
