BAD_GATEWAY when connecting Google Cloud Endpoints to Cloud SQL - google-cloud-endpoints

I am trying to connect from GCP Endpoints to a Cloud SQL (PostgreSQL) database in a different project. My Endpoints backend is an App Engine app in the flexible environment, using Python.
The Endpoints API works fine for non-DB requests, and for DB requests when run locally. But the deployed API produces this result for any request that needs DB access:
{
  "code": 13,
  "message": "BAD_GATEWAY",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "application"
    }
  ]
}
I've followed this link (https://cloud.google.com/endpoints/docs/openapi/get-started-app-engine) to create the endpoints project, and this (https://cloud.google.com/appengine/docs/flexible/python/using-cloud-sql-postgres) to link to Cloud SQL from a different project.
The one difference is that I don't use the SQLALCHEMY_DATABASE_URI env variable to connect; instead I take the connection string from a config file and use it with psycopg2 and plain SQL strings. This code works on Compute Engine servers in the same project.
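For reference, this is roughly how my code opens the connection (a minimal sketch; the database name, user, password and instance connection name are the placeholder values from the app.yaml below, not my real ones):

# Minimal sketch: connect to Cloud SQL over the unix socket that App Engine
# flex mounts under /cloudsql/ when beta_settings/cloud_sql_instances is set.
import psycopg2

conn = psycopg2.connect(
    dbname="postgres",
    user="postgres",
    password="password",
    host="/cloudsql/cloudsql-project-id:us-east1:instance-id",
)
with conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())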
Also double-checked that the Endpoints project's service account was granted the Cloud SQL Editor role in the project that owns the PostgreSQL db. And the db connection string works fine if the App Engine app is in the same project as the Cloud SQL db (i.e. not going through the Endpoints project).
Not sure what else to try. How can I get more details on the BAD_GATEWAY? That's all that's in the endpoints logfile and there's nothing in the Cloud SQL logfile.
Many thanks --
Dan
Here's my app.yaml:
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app
runtime_config:
  python_version: 3
env_variables:
  SQLALCHEMY_DATABASE_URI: >-
    postgresql+psycopg2://postgres:password@/postgres?host=/cloudsql/cloudsql-project-id:us-east1:instance-id
beta_settings:
  cloud_sql_instances: cloudsql-project-id:us-east1:instance-id
endpoints_api_service:
  name: api-project-id.appspot.com
  rollout_strategy: managed
And requirements.txt:
Flask==0.12.2
Flask-SQLAlchemy==2.3.2
flask-cors==3.0.3
gunicorn==19.7.1
six==1.11.0
pyyaml==3.12
requests==2.18.4
google-auth==1.4.1
google-auth-oauthlib==0.2.0
psycopg2==2.7.4
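For completeness, the linked App Engine tutorial would normally consume that SQLALCHEMY_DATABASE_URI variable roughly like this (a minimal sketch of the tutorial's pattern; as noted above, my code reads a connection string from a config file and uses psycopg2 directly instead):

# Sketch of the tutorial's Flask-SQLAlchemy setup (not what my code does).
import os

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = os.environ["SQLALCHEMY_DATABASE_URI"]
db = SQLAlchemy(app)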

(This should be a comment, but the formatting really worsens the reading, so I will update here.)
I am trying to reproduce your error and came up with some questions:
How are you handling the variables from the tutorials? Have you hard-coded the values or are you using environment variables? They are reset with the Cloud Shell (if you are using Cloud Shell).
This is not clear to me: do you see any logs at all in Cloud SQL (just without errors), or do you not even see logs?
The Cloud SQL, app.yaml, and requirements.txt configurations are related. Could you provide more information on this? If you update the post, be careful not to post usernames, passwords, or other sensitive information.
Are both projects in the same region/zone? Sometimes this is a requirement, but I don't see anything pointing to it in the documentation.
My intuition points to a credentials issue, but it would be useful if you added more information to the post so we can better understand where the issue comes from.

Related

Kubernetes Client API from Google Cloud Functions (Firebase) Token Refresh

I want to start Kubernetes jobs on a GKE cluster from a Google Cloud Function (Firebase)
I'm using the Kubernetes node client https://github.com/kubernetes-client/javascript
I've created a Kubernetes config file using `kubectl config view --flatten -o json`
and loaded it:
const k8s = require('@kubernetes/client-node');
const kc = new k8s.KubeConfig();
kc.loadFromString(config)
This works perfectly locally, but the problem is that when running on Cloud Functions the token can't be refreshed, so calls fail after a while.
My k8s config file contains:
"user": {
"auth-provider": {
"name": "gcp",
"config": {
"access-token": "redacted-secret-token",
"cmd-args": "config config-helper --format=json",
"cmd-path": "/usr/lib/google-cloud-sdk/bin/gcloud",
"expiry": "2022-10-20T16:25:25Z",
"expiry-key": "{.credential.token_expiry}",
"token-key": "{.credential.access_token}"
}
}
I'm guessing the command path points to the gcloud SDK, which is used to get a new token when the current one expires. This works locally, but on Cloud Functions it doesn't, as there is no /usr/lib/google-cloud-sdk/bin/gcloud there.
Is there a better way to authenticate, or a way to access the gcloud binary from Cloud Functions?
I have a similar mechanism (using Cloud Functions to authenticate to Kubernetes Engine) albeit written in Go.
This approach uses Google's Kubernetes Engine API to get the cluster's credentials and construct the KUBECONFIG using the values returned. This is equivalent to:
gcloud container clusters get-credentials ...
APIs Explorer has a Node.js example for the above method. The example uses Google's API Client Library for Node.js for Kubernetes Engine; also see here.
There's also a Google Cloud Client Library for Node.js for Kubernetes Engine, and this includes getCluster, which (I assume) is equivalent. Confusingly, there's getServerConfig too, and it's unclear from the API docs what the difference between these methods is.
Here's a link to the gist containing my Go code. It constructs a Kubernetes Config object that can then be used by the Kubernetes API client to authenticate you to a cluster.
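If it helps as a reference, here is a rough sketch of the same flow in Python rather than Go or Node.js (the project, location and cluster names are placeholders, and it assumes the google-cloud-container, google-auth and kubernetes packages): fetch the cluster's endpoint and CA certificate from the GKE API, get an OAuth token from the function's Application Default Credentials, and build the client configuration by hand instead of shelling out to gcloud.

# Sketch: authenticate to a GKE cluster without the gcloud binary.
import base64
import tempfile

import google.auth
import google.auth.transport.requests
from google.cloud import container_v1
from kubernetes import client as k8s_client


def make_batch_api(project_id, location, cluster_id):
    # Look up the cluster's endpoint and CA certificate via the GKE API
    # (the equivalent of `gcloud container clusters get-credentials`).
    gke = container_v1.ClusterManagerClient()
    name = f"projects/{project_id}/locations/{location}/clusters/{cluster_id}"
    cluster = gke.get_cluster(name=name)

    # Get an OAuth2 access token from Application Default Credentials
    # (the Cloud Function's service account) instead of calling gcloud.
    creds, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"])
    creds.refresh(google.auth.transport.requests.Request())

    # Write the cluster CA cert to a temp file and build the client config.
    ca_cert = tempfile.NamedTemporaryFile(delete=False, suffix=".crt")
    ca_cert.write(base64.b64decode(cluster.master_auth.cluster_ca_certificate))
    ca_cert.close()

    conf = k8s_client.Configuration()
    conf.host = f"https://{cluster.endpoint}"
    conf.ssl_ca_cert = ca_cert.name
    conf.api_key = {"authorization": "Bearer " + creds.token}
    return k8s_client.BatchV1Api(k8s_client.ApiClient(conf))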

VPC creation problem in aws via terraform

I have been trying to create VPC infrastructure in AWS through Terraform, but I am unable to run the `terraform apply` command. Has anyone had a similar problem while using a free trial account?
Error: Error creating VPC: UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: 4HZVo3-eWCS-YLhRy55P_0T13F_fPtA29TYrJrSe5_dyPxIcqRbh7_wCcrCZr2cpmb-B5--_fxVaOngBfHD_7yfnPH7NLf1rrqpb7ge1mvQrK8P0Ltfpgpm37nZXezZUoYf1t4peB25aCxnbfeboHpgJjcFnHvqvf5so5G2PufnGZSB4FUZMfdaqppnJ-sNT7b36TonHUDNbLhBVUl5Fwd8d02R-6ZraRYvDx-o4lDfP9xSWs6PMUFXNr1qzruYaeMYMxIe-9kGOQptgBLYZXsxr966ajor-p6aLJAKlIwPGN7Iz7v893oGpGgz_8wxTv4oEb5GnfYOuPOqSyEMLKI69b2JUvVU1m4tCcjKBaHJARP5sIiFSGhh4lb_E0_cKkmmFfKzyET2h8YkSD8U9Lm4rRtGbAEJvIoDZYDkNxlW7W2XvsccmLnQFeSxpLolVhguExkP7DT9uXffJzFEjQn-VkhqKnWlwv0vxIcOcoLP04Li5WAqRRr3l7yK2bYznfg
│ status code: 403, request id: 5c297a4d-7bcf-4bb4-b311-37480e1f26b8
Make sure you have properly set up your AWS credentials and permissions.
Check these two files:
~/.aws/credentials
~/.aws/config
These docs can help you:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
Did you configure your access keys?
provider "aws" {
region = "us-west-2"
access_key = "my-access-key"
secret_key = "my-secret-key"
}
There are multiple ways to do it (described here).
My example above can be a good start, but you don't want to commit those keys, so I recommend configuring them in ~/.aws/credentials (as you need them for the AWS CLI anyway). The aws provider will pick them up automatically, so you don't need to define them anywhere in your Terraform code.

Firebase + Datastore = need_index

I'm working through the appengine+go tutorial, which connects in with Firebase: https://cloud.google.com/appengine/docs/standard/go/building-app/. The code is available at https://github.com/GoogleCloudPlatform/golang-samples/tree/master/appengine/gophers/gophers-6, which aside from my Firebase keys is identical.
I have it working locally just fine under dev_appserver.py, and it queries the Vision API and adds labels. However, after I deploy to appengine I get an index error on datastore. If I go to the Firebase console, I see the collection (Post) and the field (Posted) which is a timestamp.
If I change this line: https://github.com/GoogleCloudPlatform/golang-samples/blob/master/appengine/gophers/gophers-6/main.go#L193 to remove the Order("-Posted"), then everything works (it's important to note that any Order call causes it to error), except that the test records I've posted come back in random order.
The error message when running in appengine is: "Getting posts: API error 4 (datastore_v3: NEED_INDEX): no matching index found."
I've attempted to create a composite index, or test locally with --require_indexes=true and it hasn't helped me debug the issue.
Edit: I've moved this over to use Firebase's Datastore libraries directly, instead of the GCP updates. I never solved this particular issue, but was able to move forward with my app actually working :)
By default the local development server automatically creates the composite indexes needed for the actual queries invoked in your app. From Creating indexes using the development server:
The development web server (dev_appserver.py) automatically adds
items to this file when the application tries to execute a query that
needs an index that does not have an appropriate entry in the
configuration file.
In the development server, if you exercise every query that your app
will make, the development server will generate a complete list of
entries in the index.yaml file.
When the development web server adds a generated index definition to
index.yaml, it does so below the following line, inserting it if
necessary:
# AUTOGENERATED
The development web server considers all index definitions below this
line to be automatic, and it might update existing definitions below
this line as the application makes queries.
But you also need to deploy the generated index configurations to the Datastore and let the Datastore update its indexing information (i.e. let the indexes reach the Serving state) so that the respective queries don't hit the NEED_INDEX error. From Updating indexes:
You upload your index.yaml configuration file to Cloud Datastore
with the gcloud command. If the index.yaml file defines any
indexes that don't exist in Cloud Datastore, those new indexes are
built.
It can take a while for Cloud Datastore to create all the indexes and
therefore, those indexes won't be immediately available to App Engine.
If your app is already configured to receive traffic, then exceptions
can occur for queries that require an index that is still in the
process of being built.
To avoid exceptions, you must allow time for all the indexes to build.
For more information and examples about creating indexes, see
Deploying a Go App.
To upload your index configuration to Cloud Datastore, run the
following command from the directory where your index.yaml is located:
gcloud datastore create-indexes index.yaml
For information, see the gcloud datastore reference.
You can use the GCP Console to check the status of your indexes.

Publish webapp to Azure as student

Alright, so I have a Microsoft Imagine account from school, through which I've gotten both Azure and Microsoft Visual Studio 2017 in order to learn ASP.NET (I worked with Django earlier).
So I've gone through a whole bunch of tutorials, from Code School to Virtual Academy to docs.microsoft, and finally got the first version of my webapp done and ready to be published to Azure.
So I look through the steps on how to publish, here's some info on that:
Subscription: Microsoft Imagine
Resource Group: <name> (northeurope)
App Service Plan:
Resource Group: <name>
Pricing Tier: Free
Location: North Europe
Status: Ready
Subscription Name: Microsoft Imagine
I click on "Explore additional azure services" (as per many tutorial instructions) and add a database. Fortunately I've already created the database in Azure, so I only have to connect it. Here's some info on the database (though creating it directly here generates the same error):
Resource Group: <name>
Status: Online
Location: North Europe
Subscription Name: Microsoft Imagine
Server Name: <servername>.database.windows.net
Pricing Tier: Free (5 DTUs)
Some info on the server:
Resource Group: <name>
Status: Available
Location: North Europe
So everything looks really good; I'm ready to publish and I hit the Create button.
Deploying: (step 0 out of 5) ...
Deploying: (step 4 out of 5) ...
ERROR
Details:
Template deployment failed. Deployment operation statuses:
Succeeded: /subscriptions/ ... /servers/mintentadbserver ()
Failed: /subscriptions/ ... /databases/Mintenta_db ()
40619: The edition 'Free' does not support the database data max size '1073741824'.
Succeeded: /subscriptions/ ... /firewallrules/AllowAllAzureIPs ()
Succeeded: /subscriptions/ ... /sites/MinTenta ()
Succeeded: /subscriptions/ ... /config/connectionstrings ()
The few duplicate questions I've found on this have close to no answers and just a few suggestions to upgrade (link1, link2).
So I suppose my question is, like many others:
1) How do you change the size of the database?
2) If that's not possible and you cannot have a database with a free account, why not just say that instead of using size restrictions?
I know this question is a little bit old, but I've just run across the same error and I also couldn't find an answer. However, I managed to work around this issue.
I was following this tutorial (https://learn.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-dotnet-sqldatabase) from Microsoft, and since you mentioned the same steps and the same error message I got, I'm assuming you were doing the same thing, or at least something similar.
When publishing directly from Visual Studio 2017 to Azure, VS tries to create the following resources:
App service plan
App service
SQL server
SQL database
From your error message (and mine as well), although the SQL database creation had an error, the other resources were published successfully. So, if you access Azure portal, you'll see those resources there.
Then, if you open the SQL server and click "New database", you'll be able to add a database manually - and more importantly, you'll be able to select the free option with max size of 32MB.
(In this example, the button is disabled because I've already added one database - I believe this is another limitation from the students' subscription).
Note that if you add the database manually, you'll also need to configure your connection strings. But that is quite easy:
Open your new database on Azure portal
Go to Settings > Connection Strings
Copy the connection string from there
Now open your App service and go to Settings > Application Settings
On Connection Strings, add a new one or edit the existing one, pasting the content that you just copied from the DB (don't forget to input your username and password)
You can have a DB using a trial (there are no restrictions on trial accounts as far as I'm aware, well, except money). I'm not sure how to work around this issue, as the template is pre-built by VS.
The more I look at this error, the more I don't get it. There is no "Free" tier of Azure SQL DB, and the cheapest (Basic) supports up to a 2GB database, so this shouldn't really restrict you.
Try setting the App Service plan to Shared? If that doesn't help, try deleting everything and just letting VS create all the resources for you; it should work in that case.

Multiple applications in the same Symfony2 application

This is quite a long question, but there's quite a lot to it.
It feels like it should be a reasonably common use case, so I'm hoping the Stack Overflow community can provide me with a 'best practice in Symfony2' answer.
The solution I describe below works, but there are several consequences I'd like to avoid:
In my local dev environment, if I have used the wrong db connection the tests will work in dev but fail in production
The routes of the ADMIN API are accessible on the PUBLIC API url, just denied.
If I have a mirror of live in my dev environment (3 separate checkouts with the corresponding parameters.yml file) then the feature tests for the other bundles fail
Is there a 'best practice in Symfony2' way to set up my project?
We're running a LAMP stack. We use git/(Atlassian) stash for version control.
We're using doctrine for the ORM and FOS-REST with OAuth plus symfony firewalls to authenticate and authorise the users.
We're committed to use Symfony2, so I am trying to find a 'best practice' solution:
I have a project with 3 applications:
A public-facing API (which gives read-only access to the data)
A protected API (which provides admin functionality)
A set of batch processes (to e.g. import data and monitor data quality)
Each application uses a set of shared models.
I have created 4 bundles: one for each application and a 4th for the shared models.
Each application must use a different database user to access the database.
There's only one database.
There are several tables; one is called 'prices'.
The admin API must only be accessible from one hostname (e.g. admin-api.server1)
The public API must only be accessible from a different hostname (e.g. public-api.server2)
Each application is hosted on a different server
In parameters.yml in my dev environment I have this
// parameters.yml
api_public_db_user: user1
api_public_db_pass: pass1
api_admin_db_user: user2
api_admin_db_pass: pass2
batch_db_user: user3
batch_db_pass: pass3
In config.yml I have this:
// config.yml
doctrine:
  dbal:
    connections:
      api_public:
        user: "%api_public_db_user%"
        password: "%api_public_db_pass%"
      api_admin:
        user: "%api_admin_db_user%"
        password: "%api_admin_db_pass%"
      batch:
        user: "%batch_db_user%"
        password: "%batch_db_pass%"
In my code I can do this (I believe this can be done from the service container too, but I haven't got that far yet)
$entityManager = $this->getContainer()->get('doctrine')->getManager('api_public');
$entityRepository = $this->getContainer()->get('doctrine')->getRepository('CommonBundle:Price', 'api_admin');
When I deploy my code to each of the live servers, I put junk values in the parameters.yml for the other applications
// parameters.yml on the public api server
api_public_db_user: user1
api_public_db_pass: pass1
api_admin_db_user: **JUNK**
api_admin_db_pass: **JUNK**
batch_db_user: **JUNK**
batch_db_pass: **JUNK**
I have locked down my application so that the database isn't accessible (and thus the other API features don't work)
I have also set up Symfony firewall security so that the different routes require different permissions
There's also security in the Apache vhost to deny access to, say, the admin API path from the public API directory.
So, I have secured my application and met the requirement of the security audit, but the dev process isn't ideal and something feels wrong.
As background:
We have previously looked at splitting it up into different applications within the same project (like this: "Symfony2 multiple applications and api centric application"; we actually followed this method: http://jolicode.com/blog/multiple-applications-with-symfony2), but ran into difficulties, and in any case Fabien says not to (https://groups.google.com/forum/#!topic/symfony-devs/yneojUuFiqw). That this existed in symfony1 and was removed in Symfony2 is enough of an argument for me.
We have previously gone down the route of splitting up each bundle and importing it using composer, but this caused too many development overheads (for example, having to modify many repositories to implement a feature; it not being possible to see all of the changes for a feature in a single pull request).
We are receiving an ever growing number of requests to create APIs, and we're similarly worried about putting each application in its own repository.
So, putting each of the three applications in a separate Symfony project / git repository is something we want to avoid too.
