How to redirect traffic for a Firebase Function to another region when current one goes down? - firebase

Suppose I have a callable function that is deployed to multiple regions.
My client-side app does not specify a region (the default is us-central1), so if the default region goes down, does Firebase/Google Cloud automatically redirect traffic to the other regions that are up?
If that isn't the case, what should I do in such scenarios?
I'm sure there's something, but my searches haven't turned up anything.

No. Each deployed Cloud Function has its own URL that includes the region, and requests are routed to that function only. Cloud Functions does not provide load-balancer-like functionality by default; if the number of requests rises, it simply creates new instances to handle them.
You can check the user's location, find the nearest GCP region where your function is deployed, and call that one. That should also reduce latency a bit and spread requests across regions based on where users are.
Alternatively, if you want a single endpoint that routes each request to the function in the region nearest the user, check out the Global external HTTP(S) load balancer with Cloud Functions.
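If it helps to see the client side concretely, here is a minimal sketch of targeting a specific region from a web client with the Firebase JS SDK's modular API; the region string and callable name are placeholders, not something from your project:

// Sketch only: pick the region explicitly instead of the us-central1 default.
import { initializeApp } from "firebase/app";
import { getFunctions, httpsCallable } from "firebase/functions";

const app = initializeApp({ /* your Firebase config */ });

// The second argument selects the region the callable was deployed to.
const functions = getFunctions(app, "europe-west1");
const myCallable = httpsCallable(functions, "myCallable");

myCallable({ ping: true })
  .then((result) => console.log(result.data))
  .catch((err) => console.error(err));

Failing over to another region would still be your code's responsibility, for example by retrying the call with a different region string when the first one errors out.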

Related

Cloud Functions - DDoS protection with max instances cap + node express rate limiter?

I've been using Cloud Functions for a while and it's been great so far - though, it seems like there's no built-in way to set limits on how often a function is invoked.
I've set the max # of instances to a reasonable number, but Firebase doesn't really provide a way to cap the # of invocations. Would using a Node package that limits or slows down requests, combined with the capped max instances, be sufficient to slow down attacks if they happen?
I also know Cloud Endpoints exists - I'm pretty new to OpenAPI, and it seems like something that should just be integrated with Functions at an additional cost... but I'm wondering if that would be a good solution too.
Pretty new to all this so appreciate any help!
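(For reference, the in-function rate limiting the question describes would look roughly like the sketch below, assuming the express-rate-limit package; the /hello route is a placeholder. Keep in mind the limiter's default store is in memory, so the limit applies per instance, and rejected requests are still billed invocations.)

// Minimal sketch: per-IP rate limiting inside an HTTP function.
const functions = require("firebase-functions");
const express = require("express");
const rateLimit = require("express-rate-limit");

const app = express();
// Cloud Functions sits behind Google's front end, so trust the proxy
// headers in order to see the caller's IP rather than the proxy's.
app.set("trust proxy", 1);

app.use(
  rateLimit({
    windowMs: 60 * 1000, // 1-minute window
    max: 30,             // at most 30 requests per IP per window, per instance
  })
);

app.get("/hello", (req, res) => res.send("ok"));

exports.api = functions.https.onRequest(app);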
If you use only Google Cloud services (I don't know what the other cloud providers offer to solve your issue, or whether an existing framework does), you can limit unwanted access at different layers.
First, the Google Front End (GFE) protects all Google resources (Gmail, Maps, Cloud, your Cloud Functions, ...), especially against common layer 3 and layer 4 DDoS attacks. This layer is also in charge of establishing the TLS connection and will discard bad connections.
Activate the "private mode". This mode forbids unauthenticated requests. With this feature, the Google Front End checks whether
an id_token is present in the request header
the token is valid (correct signature, not expired)
the identity in the token is authorized to access the resource.
-> Only valid requests reach your service, and you pay only for those. All the bad traffic is processed by Google and "paid for" by Google.
Use a load balancer with Cloud Armor activated. You can also customize your WAF policies if you need them. Put it in front of your Cloud Functions thanks to the serverless NEG feature.
If you use API keys, you can use Cloud Endpoints (or API Gateway, a managed version of Cloud Endpoints), where you can enforce a rate limit per API key. I wrote an article on this (Cloud Endpoints + ESPv2).

Firebase cloud functions pricing for denied requests [duplicate]

As far as I understand, my Google Cloud Functions are globally accessible. If I want to control access to them, I need to implement authorization as a part of the function itself. Say, I could use Bearer token based approach. This would protect the resources behind this function from unauthorized access.
However, since the function is available globally, it can still be DDoS-ed by a bad actor. If the attack is not as strong as Google's defence, my function/service may still be responsive. This is good. However, I don't want to pay for the function calls made by a party I didn't authorize to access the function (since billing is per function invocation). That's why it's important for me to know whether Google Cloud Functions detects DDoS attacks and enables counter-measures before I become responsible for the charges.
I think the question about DDOS protection has been sufficiently answered. Unfortunately the reality is that, DDOS protection or no, it's easy to rack up a lot of charges. I racked up about $30 in charges in 20 minutes and DDOS protection was nowhere in sight. We're still left with "I don't want to pay for those function calls made by the party I didn't authorize to access the function."
So let's talk about realistic mitigation strategies. Google doesn't give you a way to put a hard limit on your spending, but there are various things you can do.
Limit the maximum instances a function can have
When editing your function, you can specify the maximum number of simultaneous instances it can spawn. Set it to something your users are unlikely to hit, but that won't immediately break the bank if an attacker does (a sketch of setting the same cap in code follows). Then...
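For completeness, the same cap can also be set in code when deploying with the Firebase CLI - a sketch, assuming a firebase-functions SDK recent enough to support maxInstances (the value 10 is arbitrary):

const functions = require("firebase-functions");

// Cap concurrent instances so a traffic spike cannot fan out indefinitely.
exports.api = functions
  .runWith({ maxInstances: 10 })
  .https.onRequest((req, res) => {
    res.send("ok");
  });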
Set a budget alert
You can create budgets and set alerts in the Billing section of the Cloud console. But these alerts can arrive hours late, and you might be asleep when they do, so don't depend on this too much.
Obfuscate your function names
This is only relevant if your functions are only privately accessed. You can give your functions obfuscated names (maybe hashed) that attackers are unlikely to be able to guess. If your functions are not privately accessed maybe you can...
Set up a Compute Engine instance to act as a relay between users and your cloud functions
Compute instances are fixed-price. Attackers can slow them down but can't make them break your wallet. You can set up rate limiting on the compute instance. Users won't know your obfuscated cloud function names, only the relay will, so no one can attack your cloud functions directly unless they can guess your function names.
Have your cloud functions shut off billing if they get called too much
Every time your function gets called, you can have it increment a counter in Firebase or in a Cloud Storage object. If this counter gets too high, your functions can automatically disable billing to your project.
Google provides an example for how a cloud function can disable billing to a project: https://cloud.google.com/billing/docs/how-to/notify#cap_disable_billing_to_stop_usage
In the example, billing is disabled in response to a Pub/Sub message from Billing. However, the spend reported in those messages lags by hours, so that alone seems like a poor strategy. Keeping a counter somewhere would be more responsive.
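A rough sketch of the counter idea, assuming Firestore as the store; the document path, the cap value, and disableBillingForProject() are illustrative placeholders (the latter standing in for the Cloud Billing API call shown in Google's sample):

const admin = require("firebase-admin");
admin.initializeApp();

const INVOCATION_CAP = 100000; // pick something well above normal traffic
const PROJECT_ID = process.env.GCLOUD_PROJECT; // may need to be hard-coded on newer runtimes

// Call this at the start of each function invocation.
async function countAndMaybeDisableBilling() {
  const ref = admin.firestore().doc("ops/invocationCounter");

  // A transaction keeps the counter consistent under concurrent invocations.
  const count = await admin.firestore().runTransaction(async (tx) => {
    const snap = await tx.get(ref);
    const next = (snap.exists ? snap.data().count : 0) + 1;
    tx.set(ref, { count: next }, { merge: true });
    return next;
  });

  if (count > INVOCATION_CAP) {
    // Placeholder: detach the billing account, as in Google's sample above.
    await disableBillingForProject(PROJECT_ID);
  }
}

Note that a Firestore write per invocation has its own (small) cost, so this is a tripwire, not a free lunch.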
I have sent an email to google-cloud support, regarding cloud functions and whether they were protected against DDoS attacks. I have received this answer from the engineering team (as of 4th of April 2018):
Cloud Functions sits behind the Google Front End which mitigates and absorbs many Layer 4 and below attacks, such as SYN floods, IP fragment floods, port exhaustion, etc.
I have been asking myself the same question recently and stumbled upon this information. To answer your question shortly: Google still does not auto-protect your GCF from massive DDoS attacks; hence, unless the Google infrastructure crashes from the attack attempts, you will have to pay for all traffic and computing time caused by the attack.
There are certain mechanisms that you should take a closer look at, though I am not sure whether each of them also applies to GCF:
https://cloud.google.com/files/GCPDDoSprotection-04122016.pdf
https://projectshield.withgoogle.com/public/
UPDATE JULY 2020: There seems to be a dedicated Google service addressing this issue, which is called Google Cloud Armor (Link to Google) as pointed out by morozko.
This is from my own, real-life, experience: THEY DON'T. You have to employ your own combo of rules, origin-detection, etc to protect against this. I've recently been a victim of DDoS and had to take the services down for a while to implement my own security wall.
From reading the docs at https://cloud.google.com/functions/quotas and https://cloud.google.com/functions/pricing, it doesn't seem that there's any abuse protection for HTTP functions. You should distinguish between a DDoS attack that makes Google's servers unresponsive and abuse where an attacker knows the URL of your HTTP function and invokes it millions of times; the latter case is only about how much you pay.
DDoS attacks can be mitigated by Google Cloud Armor, which is in the beta stage at the moment.
See also related Google insider's short example with GC Security Rules and the corresponding reference docs
I am relatively new to this world, but from my little experience and after some research, it's possible to benefit from Cloudflare's DDOS protection on a function's http endpoint by using rewrites in your firebase.json config file.
In a typical Firebase project, here's how I do this :
Add cloud functions and hosting to the project
Add a custom domain (with Cloudflare DNSs) to the hosting
Add the proper rewrites to your firebase.json
"hosting": {
// ...
// Directs all requests from the page `/bigben` to execute the `bigben` function
"rewrites": [ {
"source": "/bigben",
"function": "bigben",
"region": "us-central1"
} ]
}
Now, the job is on Cloudflare's side
One possible solution could be API Gateway, where you can use Firebase Authentication. After successful authentication at the API gateway, it can call your function, which is deployed with the --no-allow-unauthenticated flag.
However, I'm not sure whether you are charged for unauthenticated requests to the API gateway too.

How to deploy same firebase cloud function under different regions?

Currently we are using the Express framework within Firebase Cloud Functions, and we would like to have the same API distributed across different regions.
This is how it is exported now:
exports.api = functions.region("asia-east2").https.onRequest(app);
As our users are distributed globally, is there any way we can make sure an API request hits the function in the nearest region, for faster access and lower latency?
This is a little more complex than one might think (at least to my humble knowledge). The short answer is that you can't do it by default; this is more the job of an HTTPS load balancer.
Your functions are unique; "the trick" is to route your user to the copy deployed in the nearest region. Google Cloud DNS is your friend here (though not as cool a friend as AWS Route 53; I left a blog post to get you started). You can get the nearest nameserver by taking advantage of the anycast network, but that is as far as it goes. I'm not sure what your implementation details are, but these resources helped me get started:
https://cloud.google.com/load-balancing/docs/https/https-load-balancer-example
https://cloud.google.com/dns/docs/overview#performance_and_timing
https://aws.amazon.com/blogs/aws/latency-based-multi-region-routing-now-available-for-aws/
Again, it's not about routing the functions; it's about routing the traffic so the client points to the right function. Sadly, in my case I ended up using AWS Route 53's latency/geo routing, so I don't have too many details here.
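For what it's worth, the same handler can at least be deployed to several regions from a single export, since functions.region() accepts multiple region strings; each region gets its own copy and its own URL, so steering users to the nearest one is still up to the client or a load balancer/DNS layer. A sketch:

const functions = require("firebase-functions");
const express = require("express");

const app = express();
app.get("/ping", (req, res) => res.send("pong"));

// One export, one copy of the API per listed region.
exports.api = functions
  .region("us-central1", "europe-west1", "asia-east2")
  .https.onRequest(app);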

What is the origin of an external API call made by a firebase function?

I'm trying to make a call to an external api using a cloud function.
The external API requires me to register the origin of the call I am making with them.
For example https://mywebsite.com
What would the url to register with them be?
mywebsite.firebaseapp.com?
The domain name registered in firebase console, mywebsite.com?
Or something else like https://us-central1-mywebsite.cloudfunctions.net/functionName ?
When working with massively scalable cloud products like Cloud Functions, you don't have guarantees about where your network traffic appears to come from. Your source IP can (and will) change over time, and the addresses behind your project's DNS entries (for your cloudfunctions.net hostname) can be expected to change similarly.

Firebase cloud function: how to deal with continuous request

When working with Firebase (Firebase cloud function in this case), we have to pay for every byte of bandwidth.
So I wonder: how can we deal with the case where someone somehow finds our endpoint and then sends continuous requests intentionally (with a script or tool)?
I did some searching on the internet but didn't see anything that can help.
Except for this one, but it's not really useful.
Since you didn't specify which type of request, I'm going to assume that you mean HTTP(S) triggers on Firebase Cloud Functions.
There are multiple limiters you can put in place to 'reduce' the bandwidth consumed by the requests. I'll list a few that come to mind.
1) Limit the type of requests
If all you need is GET and, say, you don't need PUT, you can start off by returning a 403 for those before you go any further in your cloud function.
if (req.method === 'PUT') { return res.status(403).send('Forbidden!'); }
2) Authenticate if you can
Follow Google's example here and allow only authorized users to use your HTTPS endpoints. You can achieve this simply by verifying tokens, as in this SOF answer.
3) Check for origin
You can try checking the origin of the request before going any further in your cloud function. If I recall correctly, Cloud Functions gives you full access to the HTTP request/response objects, so you can set the appropriate CORS headers and respond to pre-flight OPTIONS requests.
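A minimal sketch of such an origin check plus pre-flight handling (ALLOWED_ORIGINS and the allowed methods are placeholders; remember the Origin header is client-controlled, so this is a convenience filter rather than real security):

const functions = require("firebase-functions");

const ALLOWED_ORIGINS = ["https://mywebsite.com"];

exports.api = functions.https.onRequest((req, res) => {
  const origin = req.get("Origin");
  if (origin && ALLOWED_ORIGINS.includes(origin)) {
    res.set("Access-Control-Allow-Origin", origin);
  }

  if (req.method === "OPTIONS") {
    // Answer the pre-flight request and stop here.
    res.set("Access-Control-Allow-Methods", "GET, POST");
    res.set("Access-Control-Allow-Headers", "Content-Type, Authorization");
    return res.status(204).send("");
  }

  res.send("ok");
});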
Experimental Idea 1
You can hypothetically put your functions behind a load balancer / firewall, and relay-trigger them. It would more or less defeat the purpose of cloud functions' scalable nature, but if a form of DoS is a bigger concern for you than scalability, then you could try creating an app engine relay, put it behind a load balancer / firewall and handle the security at that layer.
Experimental Idea 2
You can try using DNS level attack-prevention solutions to your problem by putting something like cloudflare in between. Use a CNAME, and Cloudflare Page Rules to map URLs to your cloud functions. This could hypothetically absorb the impact. Like this :
*function1.mydomain.com/* -> https://us-central1-etc-etc-etc.cloudfunctions.net/function1/$2
Now if you go to
http://function1.mydomain.com/?something=awesome
you can even pass the URL params to your functions. This is a tactic I read about in this Medium article during the summer, when I needed something similar.
Finally
In an attempt to make the questions on SOF more linked, and help everyone find answers, here's another question I found that's similar in nature. Linking here so that others can find it as well.
Returning a 403 or an empty body for unsupported methods will not do much for you. Yes, you will waste less bandwidth, but Firebase will still bill you for the request; the attacker could just send millions of requests and you would still lose money.
Also, authentication is not a solution to this problem. First of all, any auth process (creating a token, verifying/validating a token) is costly, and Firebase bills you based on the time it takes for the function to return a response. You cannot afford to rely on auth alone to prevent continuous requests.
Plus, a smart attacker would not just hit a request that returns 403. What stops the attacker from hitting the login endpoint a million times? And if they provide correct credentials (which they would, if they were smart), you waste bandwidth by returning a token each time; and if you are re-generating tokens, you waste time on each request, which further hurts your bill.
The idea here is to block this attacker completely (before the request reaches your API functions).
What I would do is use Cloudflare to proxy my endpoints. In my API I would define a max_req_limit_per_ip and a time_frame, save each request's IP in the DB, and on each request check whether that IP has gone over the limit for the given time frame; if so, use the Cloudflare API to block that IP at the firewall (a rough sketch follows the examples below).
Tip:
max_req_limit_per_ip and a time_frame can be custom for different requests.
For example:
an ip can hit a 403 10 times in 1 hour
an ip can hit the login successfully 5 times in 20 minutes
an ip can hit the login unsuccessfully 5 times in 1 hour
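Here is a rough sketch of that per-IP counter as Express middleware, assuming Firestore as the store; blockIpAtCloudflare() is only a placeholder for a call to Cloudflare's IP access rules API, and the limits mirror the illustrative values above:

const admin = require("firebase-admin");
admin.initializeApp();

const MAX_REQ_LIMIT_PER_IP = 100;
const TIME_FRAME_MS = 60 * 60 * 1000; // 1 hour

async function checkIpQuota(req, res, next) {
  // Cloudflare forwards the real client IP in CF-Connecting-IP.
  const ip = req.get("CF-Connecting-IP") || req.ip;
  const ref = admin.firestore().collection("ipHits").doc(ip.replace(/[.:]/g, "_"));

  // Read-then-write is not transactional; good enough for a sketch.
  const snap = await ref.get();
  const now = Date.now();
  const data = snap.exists ? snap.data() : { count: 0, windowStart: now };

  // Reset the window once the time frame has elapsed.
  if (now - data.windowStart > TIME_FRAME_MS) {
    data.count = 0;
    data.windowStart = now;
  }
  data.count += 1;
  await ref.set(data);

  if (data.count > MAX_REQ_LIMIT_PER_IP) {
    await blockIpAtCloudflare(ip); // placeholder for the Cloudflare API call
    return res.status(429).send("Too many requests");
  }
  return next();
}

// Usage: app.use(checkIpQuota);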
There is a solution for this problem: you can protect the HTTPS endpoint by verifying the caller.
Only users who pass a valid Firebase ID token as a Bearer token in the Authorization header of the HTTP request or in a __session cookie are authorized to use the function.
Checking the ID token is done with an ExpressJs middleware that also passes the decoded ID token in the Express request object.
Check this sample code from firebase.
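The core of that sample is an Express middleware along these lines (a sketch covering only the Authorization header case; the sample also supports the __session cookie):

const admin = require("firebase-admin");
admin.initializeApp();

async function validateFirebaseIdToken(req, res, next) {
  const header = req.get("Authorization") || "";
  if (!header.startsWith("Bearer ")) {
    return res.status(403).send("Unauthorized");
  }

  try {
    const idToken = header.split("Bearer ")[1];
    // Attach the decoded ID token so downstream handlers know who is calling.
    req.user = await admin.auth().verifyIdToken(idToken);
    return next();
  } catch (err) {
    return res.status(403).send("Unauthorized");
  }
}

// Usage: app.use(validateFirebaseIdToken);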
Putting access-control logic in your function is standard practice for Firebase, BUT the function still has to be invoked to access that logic.
If you don't want your function to fire at all except for authenticated users, you can take advantage of the fact that every Firebase Project is also a Google Cloud Project -- and GCP allows for "private" functions.
You can set project-wide or per-function permissions outside the function(s), so that only authenticated users can cause the function to fire, even if they try to hit the endpoint.
Here's documentation on setting permissions and authenticating users. Note that, as of writing, I believe using this method requires users to use a Google account to authenticate.
