I've been trying to migrate my solution from Google Cloud App Engine to Google Cloud Run and have been struggling a lot.
I have three App Engine services in each staging environment; let's call them api, front1, and front2. front1 has the custom domain www.example.com assigned, as well as the staging subdomain www.dev.example.com. Using dispatch.yaml rules I route requests to the appropriate services: for example, requests matching /api/* are routed to the api service, the default path routes to front1, and /foo* routes to front2. Everything works perfectly well there; however, hosting the solution on Cloud Run would cut costs considerably.
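For context, the dispatch.yaml described above would look roughly like this (a sketch using the example domain and service names from above):

```yaml
# dispatch.yaml - App Engine routing rules (sketch)
dispatch:
  # anything under /api/ goes to the api service
  - url: "*/api/*"
    service: api
  # /foo* goes to front2
  - url: "*/foo*"
    service: front2
# everything else falls through to front1 via the default routing
```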
So at the moment I am really struggling to understand how I could fully replicate this behavior on Cloud Run. I've tried following Firebase routing, but it requires hosting an app on Firebase as well, and it doesn't route all traffic for the /api/** route to my api service.
I've tried following this article, but I cannot select my service, since I selected the Internet Network Endpoint Group option for the backend type, which can only point to a single URL. I am also not sure whether it supports wildcard routing.
I'd highly appreciate any help here, I am totally out of options at the moment.
Related
We are using AWS API Gateway for our APIs. These APIs are mapped to a custom domain name set up using Amazon Route 53 and are accessed through a frontend application hosted in Amazon S3. We have 3-4 different environments for different stages (dev, stage, prod, etc.), with the API domain in the format stage.api.brighthub.io for non-prod environments and api.brighthub.io for prod. The one in prod works fine; however, recently the APIs in the other environments have not been accessible via the custom domain name. All the domain mappings, A records, and CNAME records have been added in Route 53, and everything worked until a few days ago. The NS records for stage.api.brighthub.io have been added under brighthub.io. I have also referred to this documentation - Routing traffic for subdomains - and tried adding the NS records for stage.api.brighthub.io to api.brighthub.io, but the API is still not accessible. There have been no major updates to the APIs either, yet suddenly they aren't accessible via the domain name. They can be accessed using the Invoke URL. I have also tested with another REST client, and the same issue persists.
Can someone help with this or point to something to troubleshoot?
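As a first troubleshooting step (a sketch, using the hostnames from the question), it may help to check whether the subdomain delegation and the custom-domain record actually resolve from the outside, since the Invoke URL works but the custom domain does not:

```
# is the NS delegation for the subdomain visible publicly?
dig NS stage.api.brighthub.io +short

# does the custom domain resolve at all?
dig stage.api.brighthub.io +short

# compare the response against the working Invoke URL
curl -sv https://stage.api.brighthub.io/ -o /dev/null
```

If the dig queries return nothing, the problem is in the delegation chain rather than in API Gateway itself.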
I'm developing an app that uses Firebase (Google Cloud) for the back-end. The project contains several Firebase Functions that are used to get data from a third party that only accepts one validated fixed IP address (which I managed to set up via a VPC).
My production launch is soon, so I felt now would be a good time to set up two different environments, one for development and one for production, which would mean two different Google Cloud projects (I want to separate the databases and the functions as I add new features) sharing the same IP address. Is there a way to share this setup between two different Google Cloud projects?
Here's my idea, though I have never tested it.
On your current project, create two Internet network endpoint groups (INEGs):
INEG-prod, which uses the fully qualified URL of your Firebase prod environment
INEG-dev, which uses the fully qualified URL of your Firebase dev environment
Then create a load balancer in the project where your IP address is reserved, and create two backends:
Backend-prod, which uses INEG-prod
Backend-dev, which uses INEG-dev
Then in the URL map:
route /* (for example) to Backend-prod
route /dev/* (for example) to Backend-dev
On the frontend, use your static IP and the HTTPS protocol.
It should work (allow about 5 minutes for the HTTPS load balancer to pick up the configuration).
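A rough sketch of those steps with gcloud (all names and FQDNs are placeholders; the flags may need adjusting for your setup):

```
# 1. Internet NEGs pointing at the two Firebase environments
gcloud compute network-endpoint-groups create ineg-prod \
  --global --network-endpoint-type=internet-fqdn-port
gcloud compute network-endpoint-groups update ineg-prod \
  --global --add-endpoint="fqdn=prod-project.web.app,port=443"

# 2. A backend service using that NEG (repeat both steps for dev)
gcloud compute backend-services create backend-prod \
  --global --protocol=HTTPS
gcloud compute backend-services add-backend backend-prod \
  --global --network-endpoint-group=ineg-prod \
  --global-network-endpoint-group

# 3. A URL map with backend-prod as the default
#    (add a path matcher for the dev paths afterwards)
gcloud compute url-maps create lb-map --default-service=backend-prod
```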
I'm working on a small change to the WSO2 developer portal. When we import an API into the WSO2 dev portal, it generates gateway URLs for that API assuming it was created in the WSO2 publisher. But what if we want to import an API from AWS? In such cases, the URLs generated by the dev portal will be incorrect. To fix this issue, I'm having trouble finding the code segments responsible for this. If you have any ideas, please let me know. Thanks in advance.
If the URLs are static values, you can hard-code them at the code level. Environments.jsx, which resides in the <APIM-Home>/repository/deployment/server/jaggeryapps/devportal/source/src/app/components/Apis/Details/ directory, contains the source code that renders the Gateway Environments section (starting from line 203).
Endpoint URLs are generated from the defined gateway environment. You can define a new gateway environment with AWS HTTP or HTTPS endpoints; the dev portal will then append the particular API's context and version to that endpoint and display it there. If the expected gateway URLs differ from protocol://GW_host:GW_port/context/version, you will need some additional changes in the UI, which you can make by changing the dev portal React code.
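For the first part (defining a new gateway environment that points at AWS), a sketch of the deployment.toml entry; the endpoint URLs are placeholders and the exact keys may vary by APIM version:

```toml
# <APIM-Home>/repository/conf/deployment.toml (sketch)
[[apim.gateway.environment]]
name = "AWS Gateway"
type = "production"
display_in_api_console = true
description = "Environment pointing at the AWS-hosted gateway"
http_endpoint = "http://your-aws-gateway.example.com"
https_endpoint = "https://your-aws-gateway.example.com"
```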
Do I still need NGINX to serve static content like JS etc. and reverse-proxy requests to the backend, or can it be done with just Spring Cloud Gateway?
The Spring docs show a diagram of the gateway architecture, but I found no description there of how to return static content to the client. Does that mean it's considered bad practice and I need the extra reverse-proxy step, with the latency it adds?
If not, where can I find more info on how to do this with Spring Cloud Gateway, especially since I'm going to implement OAuth2 authorization-code flow authentication with Spring Gateway?
I am using NGINX as a reverse proxy, but I wondered about the same question and tried it (same situation for me: OAuth2 authorization-code flow authentication). You can serve static content with just Spring Cloud Gateway; it is possible.
For example, if you are using React, take a build and copy all the build files under the resources/static/frontend-name location. Then disable (permitAll) web security for all those frontend locations. You can then access it by simply typing http://gatewayserver/frontend-name/index.html.
However, I wouldn't use this in a production environment; NGINX still sounds like the better idea to me. When I release a new frontend, why should I have to release the gateway at the same time, or vice versa? But if you have a small project, it might be an option.
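A minimal sketch of that setup: the React build goes under src/main/resources/static/frontend-name (Spring Boot serves it automatically, no route needed), and the gateway proxies the API paths. The route id and backend URI below are assumptions:

```yaml
# application.yml - proxy API calls; resources/static is served by Spring Boot itself
spring:
  cloud:
    gateway:
      routes:
        - id: backend-api
          uri: http://localhost:8081   # assumed backend address
          predicates:
            - Path=/api/**
```

You still have to permitAll the static paths in your security configuration, as noted above.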
I have seen that Google Firebase offers a static file hosting solution (for the front end) that is served over SSL and by a CDN. That means I can serve customers all around the world from a server located close to them, with good speeds.
Now I want to do the same with my Node.js backend code.
That means, instead of hosting my backend code on my own VPS, which will probably be fast only for those who live close to my server, I want to deploy the same server to Firebase's CDN and, of course, over HTTPS.
What I have found so far is Firebase Functions, which is probably a Node.js server. However, I am not sure whether it runs on a CDN, so that it will be as fast as the static file serving, or whether it's just a server located somewhere in the US that has to serve worldwide.
In addition, if there is such a service where I can host my backend code with SSL, can I keep the "standard" Express configuration I have now on my VPS?
And what about clusters/workers? How many workers can I have when using the Firebase solution (if there is one like that)?
Thanks.
SSL and firebase functions & hosting?
You get HTTPS by default for Hosting and Functions. If you need functions served from your custom domain rather than https://us-central1-[projectname].cloudfunctions.net, you will need to configure your firebase.json file to rewrite your routes to your Firebase functions. The main thing to flag here is that with both options you get HTTPS and certificates issued directly by Google/Firebase.
When you bring a custom domain over, it can take 1-2 hours for Firebase to issue the certificate, but all of this happens automatically without you having to do anything.
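The firebase.json rewrite mentioned above looks roughly like this (the function name api and the paths are assumptions):

```json
{
  "hosting": {
    "public": "public",
    "rewrites": [
      { "source": "/api/**", "function": "api" },
      { "source": "**", "destination": "/index.html" }
    ]
  }
}
```

With this in place, requests to your-domain.com/api/... hit the function while everything else is served as static content.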
Does firebase functions integrate with a CDN?
Yes, but you need to set the correct s-maxage header in your response to ensure the Firebase CDN will store it. See here for more info.
Cache invalidation is still hard with Firebase, so I would keep this in mind before you cache anything.
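A minimal sketch of setting that header from a function. The helper and the max-age values are my own; the s-maxage directive is what the Firebase CDN keys on:

```javascript
// Build a Cache-Control value: max-age for browsers, s-maxage for the CDN edge.
function cdnCacheControl(browserSeconds, cdnSeconds) {
  return `public, max-age=${browserSeconds}, s-maxage=${cdnSeconds}`;
}

// Inside a Firebase HTTPS function you would apply it like:
//   exports.api = functions.https.onRequest((req, res) => {
//     res.set('Cache-Control', cdnCacheControl(300, 600));
//     res.send(payload);
//   });

console.log(cdnCacheControl(300, 600)); // public, max-age=300, s-maxage=600
```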
How many workers can I have when using the Firebase solution (if there is one like that)?
One benefit of using Firebase Functions is that you don't really need to give much thought to the resources behind the backend. If you have heavier workloads, you can increase the RAM/CPU for the selected function in the Google console. The endpoint will scale up and down depending on how many requests it gets. On the flip side, if it doesn't get any requests (usually in non-prod environments), it will go into an idle state. You need to be aware of the cold-start problem before you fully commit to this as a replacement for your current Node.js VPS hosting solution.
I personally use the cache control headers to ensure the functions responses are pushed into the CDN edge, which takes the edge off the cold start issue (for me and my use case).