Preventing Abuse: Cloud Functions for Firebase - firebase

What is the best way to stop an attacker from triggering a Cloud Function repeatedly, causing a huge bill or causing the project to run into quota limits?
Some ideas:
Use RTDB or Cloud Storage triggers as much as possible, since writes to those are protected by those products' security rules
Put functions behind a service like Cloudflare
Set up billing alerts, so a notification is sent if the monthly bill is unusually large

Check my answer here.
A short breakdown of the items from my answer:
Limit the type of requests
Authenticate if you can
Check the request origin (see the sketch after this list)
Use a load balancer in between
Use something like Cloudflare Page Rules
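For the authentication and origin points, here is a minimal sketch of an HTTP function that rejects requests lacking a valid Firebase ID token or carrying an unexpected Origin header. The allowed origin and function name are placeholders, and keep in mind that the Origin header can be spoofed by non-browser clients, so treat it as a speed bump rather than real security:

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// Placeholder: replace with your real front-end domain.
const ALLOWED_ORIGIN = "https://example.com";

export const guardedEndpoint = functions.https.onRequest(async (req, res) => {
  // Reject requests that don't come from the expected origin (a weak but cheap filter).
  if (req.headers.origin !== ALLOWED_ORIGIN) {
    res.status(403).send("Forbidden");
    return;
  }

  // Expect "Authorization: Bearer <Firebase ID token>".
  const match = (req.headers.authorization || "").match(/^Bearer (.+)$/);
  if (!match) {
    res.status(401).send("Missing token");
    return;
  }

  try {
    const decoded = await admin.auth().verifyIdToken(match[1]);
    res.send(`Hello ${decoded.uid}`);
  } catch (err) {
    res.status(401).send("Invalid token");
  }
});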
Hope it helps :)

Related

firebase billing - kill switch consequences

The Firebase documentation includes a warning about using a kill switch to stop billing when a budget cap is exceeded, as follows:
Warning: This example removes Cloud Billing from your project,
shutting down all resources. Resources might not shut down gracefully,
and might be irretrievably deleted. There is no graceful recovery if
you disable Cloud Billing. You can re-enable Cloud Billing, but there
is no guarantee of service recovery and manual configuration is
required.
I'm trying to investigate what gets irretrievably deleted. Does the datastore get deleted when the kill switch is activated? Is there any opportunity to save data previously stored in cloud firestore, before the deletion takes place? Is there a way to download the database so that I can keep a back up in this scenario?
Please review the following reply from Firebase team member (samstern) to gain more clarity on this:
these things are handled on a per-product basis and each product has different thresholds for quota overages and different procedures for what happens to inactive resources in a bad state.
For instance I know in Realtime Database if your DB is too big for the
free plan after you downgrade from a paid plan we will not delete
your data automatically. Instead you'll just be stopped from using the
database until you restore your billing.
However that statement clearly says that some products may delete data
if you pull Cloud Billing. It could be for technical reasons, it could
be for policy reasons.
If you want to turn off a product without risking this pulling your
billing account is NOT the best way to do this. It's the nuclear
option if you get into a bad situation and want to get out at all
costs. Instead you should use per-product APIs to shut down the
individual product or service and prevent future usage. This could
include doing things like flipping off APIs in the APIs console,
changing security rules to prevent all further writes, deleting Cloud
Functions, etc
The best source of information I've been able to uncover on this particular question is a discussion on reddit, which indicates that you can't recover access to your data until you pay the bill (including the blow-out charges). So maybe that buys some time, but if you don't pay, the project gets deleted. Data may also be lost for operations in flight when the kill switch activates.

Firebase cloud functions pricing for denied requests [duplicate]

As far as I understand, my Google Cloud Functions are globally accessible. If I want to control access to them, I need to implement authorization as a part of the function itself. Say, I could use Bearer token based approach. This would protect the resources behind this function from unauthorized access.
However, since the function is available globally, it can still be DDoS-ed by a bad guy. If the attack is not as strong as Google's defence, my function/service may still be responsive. This is good. However, I don't want to pay for those function calls made by the party I didn't authorize to access the function. (Since the billing is per number of function invocations). That's why it's important for me to know whether Google Cloud Functions detect DDoS attacks and enable counter-measures before I'm being responsible for charges.
I think the question about DDoS protection has been sufficiently answered. Unfortunately the reality is that, DDoS protection or no, it's easy to rack up a lot of charges. I racked up about $30 in charges in 20 minutes, and DDoS protection was nowhere in sight. We're still left with "I don't want to pay for those function calls made by the party I didn't authorize to access the function."
So let's talk about realistic mitigation strategies. Google doesn't give you a way to put a hard limit on your spending, but there are various things you can do.
Limit the maximum instances a function can have
When editing your function, you can specify the maximum number of simultaneous instances that it can spawn. Set it to something your users are unlikely to hit, but that won't immediately break the bank if an attacker does.
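If you deploy with the Firebase Functions SDK rather than through the console, the same cap can be set in code; a minimal sketch, where the limit of 10 is an arbitrary example:

import * as functions from "firebase-functions";

// Cap concurrent instances so an attacker can't fan the function out indefinitely.
export const cappedFunction = functions
  .runWith({ maxInstances: 10 })
  .https.onRequest((req, res) => {
    res.send("ok");
  });

Then...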
Set a budget alert
You can create budgets and set alerts in the Billing section of the Cloud console. But these alerts can arrive hours late, and you might be asleep when they do, so don't depend on this too much.
Obfuscate your function names
This is only relevant if your functions are only privately accessed. You can give your functions obfuscated names (maybe hashed) that attackers are unlikely to be able to guess. If your functions are not privately accessed maybe you can...
Set up a Compute Engine instance to act as a relay between users and your cloud functions
Compute instances are fixed-price. Attackers can slow them down but can't make them break your wallet. You can set up rate limiting on the compute instance. Users won't know your obfuscated cloud function names, only the relay will, so no one can attack your cloud functions directly unless they can guess your function names.
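A rough sketch of such a relay, assuming an Express server on the Compute Engine instance, the express-rate-limit package, and a Node 18+ runtime with global fetch; the function URL and route are placeholders:

import express from "express";
import rateLimit from "express-rate-limit";

const app = express();
app.use(express.json());

// Allow each client IP at most 30 requests per minute.
app.use(rateLimit({ windowMs: 60 * 1000, max: 30 }));

// The obfuscated function URL is known only to this relay (placeholder).
const FUNCTION_URL =
  "https://us-central1-my-project.cloudfunctions.net/obfuscatedName123";

app.post("/api/doWork", async (req, res) => {
  // Forward the rate-limited request to the hidden Cloud Function.
  const upstream = await fetch(FUNCTION_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req.body ?? {}),
  });
  res.status(upstream.status).send(await upstream.text());
});

app.listen(8080);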
Have your cloud functions shut off billing if they get called too much
Every time your function gets called, you can have it increment a counter in Firebase or in a Cloud Storage object. If this counter gets too high, your functions can automatically disable billing to your project.
Google provides an example for how a cloud function can disable billing to a project: https://cloud.google.com/billing/docs/how-to/notify#cap_disable_billing_to_stop_usage
In the example, it disables billing in response to a pub/sub message from billing. However, the spend reported in those pub/sub messages lags by hours, so this seems like a poor strategy on its own. Keeping a counter somewhere would be more effective.
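A hedged sketch of the counter idea, combining a Firestore counter with the billing-disable call from Google's sample; the threshold and document path are placeholders, the function's service account needs permission to change project billing, and detaching billing really does take the whole project down, so test carefully:

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";
import { google } from "googleapis";

admin.initializeApp();

const PROJECT_ID = process.env.GCLOUD_PROJECT; // or hard-code your project ID
const INVOCATION_LIMIT = 100000; // placeholder threshold

export const meteredFunction = functions.https.onRequest(async (req, res) => {
  // Atomically bump a global invocation counter.
  const counterRef = admin.firestore().doc("meta/invocations");
  await counterRef.set(
    { count: admin.firestore.FieldValue.increment(1) },
    { merge: true }
  );

  const count = (await counterRef.get()).data()?.count ?? 0;
  if (count > INVOCATION_LIMIT) {
    await disableBilling();
    res.status(429).send("Budget limit reached");
    return;
  }

  res.send("ok");
});

// Detach the billing account, as in Google's "disable billing to stop usage" sample.
async function disableBilling(): Promise<void> {
  const auth = await google.auth.getClient({
    scopes: ["https://www.googleapis.com/auth/cloud-billing"],
  });
  const billing = google.cloudbilling({ version: "v1", auth });
  await billing.projects.updateBillingInfo({
    name: `projects/${PROJECT_ID}`,
    requestBody: { billingAccountName: "" }, // an empty string removes the billing account
  });
}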
I have sent an email to google-cloud support, regarding cloud functions and whether they were protected against DDoS attacks. I have received this answer from the engineering team (as of 4th of April 2018):
Cloud Functions sits behind the Google Front End which mitigates and absorbs many Layer 4 and below attacks, such as SYN floods, IP fragment floods, port exhaustion, etc.
I have been asking myself the same question recently and stumbled upon this information. To answer your question briefly: Google still does not auto-protect your GCF from massive DDoS attacks, hence: unless the Google infrastructure crashes from the attack attempts, you will have to pay for all traffic and computing time caused by the attack.
There are certain mechanisms that you should take a closer look at, though I am not sure whether each of them also applies to GCF:
https://cloud.google.com/files/GCPDDoSprotection-04122016.pdf
https://projectshield.withgoogle.com/public/
UPDATE JULY 2020: There seems to be a dedicated Google service addressing this issue, called Google Cloud Armor, as pointed out by morozko.
This is from my own real-life experience: THEY DON'T. You have to employ your own combination of rules, origin detection, etc. to protect against this. I was recently a victim of a DDoS and had to take the services down for a while to implement my own security wall.
From reading the docs at https://cloud.google.com/functions/quotas and https://cloud.google.com/functions/pricing, it doesn't seem that there's any abuse protection for HTTP functions. You should distinguish between a DDoS attack that makes Google's servers unresponsive and abuse where an attacker knows the URL of your HTTP function and invokes it millions of times; the latter case is only about how much you pay.
DDoS attacks can be mitigated by Google Cloud Armor, which is in beta at the moment.
See also a related Google insider's short example with GC Security Rules, and the corresponding reference docs.
I am relatively new to this world, but from my limited experience and after some research, it's possible to benefit from Cloudflare's DDoS protection on a function's HTTP endpoint by using rewrites in your firebase.json config file.
In a typical Firebase project, here's how I do this:
Add cloud functions and hosting to the project
Add a custom domain (with Cloudflare DNSs) to the hosting
Add the proper rewrites to your firebase.json
"hosting": {
// ...
// Directs all requests from the page `/bigben` to execute the `bigben` function
"rewrites": [ {
"source": "/bigben",
"function": "bigben",
"region": "us-central1"
} ]
}
Now, the job is on Cloudflare's side.
One possible solution could be API Gateway, where you can use Firebase Authentication. After successful authentication at the API gateway, it can call your function, which you deploy with the --no-allow-unauthenticated flag.
However, I'm not sure whether you are charged for unauthenticated requests to the API gateway as well.

Best strategy to develop back end of an app with large userbase, taking into account limitations of bandwidth, concurrent connections etc.?

I am developing an Android app which basically does this: on the landing (home) page it shows a couple of words. These words need to be updated on a daily basis. Secondly, there is an 'experiences' tab in which a list of user experiences (around 500) shows up with their profile pics, descriptions, etc.
This basic app is expected to get around 1 million users daily who will open the app daily at least once to see those couple of words. Many may occasionally open up the experiences section.
Thirdly, the app needs to have a push notification feature.
I am planning to purchase managed WordPress hosting, set up a website, add a post each day with those couple of words, and use the JSON API to extract those words and display them on the app's home page. Similarly for the experiences, I will add each as a WordPress post and extract them from the WordPress database. The reason I am choosing WordPress is that it has ready-made interfaces for data entry, which will save my time and effort.
But I am stuck on this: will the WordPress DB be able to handle such a large number of queries? With such a large userbase and spiky traffic, I suspect I might cross the maximum concurrent connections limit.
What's the best strategy in my case? Should I use WP, or use Firebase or some other service? I also need to make sure the scheme is cost-effective.
My app is basically very similar to this one:
https://play.google.com/store/apps/details?id=com.ekaum.ekaum
For push notifications, I am planning to use third party services.
Kindly suggest the best strategy I should go with for designing the back end of this app.
Thanks to everyone out there in advance who are willing to help me in this.
I have never used Wordpress, so I don't know if or how it could handle that load.
You can still use WP for data entry, and write a scheduled function that would use WP's JSON API to copy that data into Firebase.
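For example, here is a hedged sketch of such a scheduled function, assuming the standard WordPress REST API at /wp-json/wp/v2/posts and a Node 18+ runtime with global fetch; the site URL and document path are placeholders:

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// Placeholder site; /wp-json/wp/v2/posts is the standard WordPress REST API endpoint.
const WP_POSTS_URL = "https://example-wp-site.com/wp-json/wp/v2/posts?per_page=1";

export const copyDailyWords = functions.pubsub
  .schedule("every day 00:05")
  .onRun(async () => {
    const response = await fetch(WP_POSTS_URL);
    const posts = (await response.json()) as Array<{
      title: { rendered: string };
      content: { rendered: string };
    }>;

    // Store today's words where the app can fetch them with a single document read.
    await admin.firestore().doc("content/today").set({
      title: posts[0].title.rendered,
      body: posts[0].content.rendered,
      updatedAt: admin.firestore.FieldValue.serverTimestamp(),
    });
  });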
RTDB-vs-Firestore scalability states that RTDB can handle 200 thousand concurrent connections and Firestore 1 million concurrent connections.
However, if I get it right, your app doesn't need connections to be active (i.e. receive real-time updates). You can get your data once, then close the connection.
For RTDB, Enabling Offline Capabilities on Android states that
On Android, Firebase automatically manages connection state to reduce bandwidth and battery usage. When a client has no active listeners, no pending write or onDisconnect operations, and is not explicitly disconnected by the goOffline method, Firebase closes the connection after 60 seconds of inactivity.
So the connection should close by itself after 1 minute, if you remove your listeners, or you can force close it earlier using goOffline.
For Firestore, I don't know if it happens automatically, but you can do it manually.
In Firebase Pricing you can see that 100K Firestore document reads cost $0.06. 1M reads (for the two words) should cost $0.60 plus some network traffic. In RTDB, the cost has to do with the amount of data, so it requires some calculations, but it shouldn't be much. I am not familiar with the finer details of the pricing, so you should do some more research.
In the app you mentioned, the experiences don't seem to change very often. You might want to try to build your own caching manually, and add the required versioning info in the daily data.
Edit:
It would possibly be more efficient and less costly if you used Firebase Hosting, instead of RTDB/Firestore directly. See Serve dynamic content and host microservices with Cloud Functions and Manage cache behavior.
In short, you create a HTTP function that reads your database and returns the data you need. You configure hosting to call that function, and configure the cache such that subsequent requests are served the cached result via hosting (without extra function invocations).
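A minimal sketch of that pattern, assuming a hosting rewrite points at the function as in the firebase.json example earlier; the cache lifetimes are arbitrary:

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

export const dailyWords = functions.https.onRequest(async (req, res) => {
  const snapshot = await admin.firestore().doc("content/today").get();

  // s-maxage lets the Firebase Hosting CDN serve cached copies,
  // so most requests never reach the function (or Firestore) at all.
  res.set("Cache-Control", "public, max-age=300, s-maxage=600");
  res.json(snapshot.data() ?? {});
});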

Cloud Functions, Cloud Firestore, Cloud Storage: How to protect against bots?

I already use ReCAPTCHA for Android apps client-side (I've also implemented, of course, its server-side verification).
However, this ReCAPTCHA is implemented only in one activity. But, of course, hackers can modify the app. For example:
they can simply remove ReCAPTCHA from all activities,
or start another activity that doesn't have ReCAPTCHA implemented; that's actually the case, by the way: I didn't implement ReCAPTCHA in every activity because it's useless, given the first problem I've just mentioned.
So I would want to detect bot and spam requests in Cloud Functions, Cloud Firestore and Cloud Storage, for the following accesses: read, write, function call. It would allow me to prevent unwanted content from being saved in Firestore (spam messages, etc.) and to avoid exceeding my monthly billing quota (because of spam requests to Firestore, for example).
Is it possible? How?
There is no "spam detection" for these products. Your security rules will determine who can access what data. If you don't have security rules in place, and allow public access, then anyone will be able to get that data, and you will be charged for it when that happens. This is the nature of publicly accessible cloud services.
If you want more control over the data in these products, you could stop all direct public access with security rules, and force clients to go through a backend you control. The backend could try to apply some logic to determine if a request is "spam", by whatever criteria you determine. There is no simple algorithm for this: you will need to define what "spam" means and reject the request if it meets your criteria.
Google does have some amount of abuse detection for its cloud products, but it will likely take a lot of abuse to trigger an alert. If you suspect abusive behavior, be sure to collect information and send that to Firebase support for assistance.
Just thought I'd add that there is another way to restrict access to Cloud Functions.
Doug already described Way 1, where you write the access logic within the cloud function. In that case, the function still gets invoked, but which code path is taken is up to your logic.
Way 2 is that you can set a function to be "private" so that it can't be invoked except by registered users (you decide on permissions). In this case, unauthenticated requests are denied and the function is not invoked at all.
Way 2 works because every Firebase project is also a Google Cloud Platform project, and GCP offers this functionality. Here are the relevant references to (a) Configuring functions as public/private, and then (b) authenticating end-users to your functions.
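To illustrate Way 1 concretely, here is a minimal sketch of a callable function that refuses to do any work for unauthenticated callers; the function name is hypothetical:

import * as functions from "firebase-functions";

export const createPost = functions.https.onCall((data, context) => {
  // Way 1: the function is still invoked, but the access logic runs first.
  if (!context.auth) {
    throw new functions.https.HttpsError(
      "unauthenticated",
      "Sign in before calling this function."
    );
  }
  return { ok: true, uid: context.auth.uid };
});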

Throttling callable https firebase cloud function execution per user?

I was not able to find any resources about this, hence I wanted to ask whether it is a good idea, or necessary, to add throttling to callable HTTPS Cloud Functions in Firebase on a per-user basis.
Example, I want to limit one user to be only able to call https function every 5 seconds.
If it is a viable thing to do, how would it be achieved?
There are no built-in per-user throttling capabilities in Cloud Functions. You have a few options for doing your own:
Put logic in your client-side apps that tracks how often a user is calling them, and deny the call if it's too frequent
The issue here is that if someone is trying to game you, this wouldn't be 100% effective, as they could use multiple windows, etc.
Implement a database solution where you track their usage, and at the beginning of your function check whether they are violating your rate limit (see the sketch after this list)
The issue here is that the triggered function invocations still incur costs.
If it is a really big issue for you, I would recommend looking at an API management platform such as Apigee, where you can apply policies such as rate limiting
This is a heavyweight solution with increased cost, so I wouldn't do it unless necessary
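For the database-backed option, here is a hedged sketch of a callable function that allows each user one call every 5 seconds, using one Firestore document per user; the document path is a placeholder, and as noted above the rejected invocations themselves still count as billable executions:

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

const WINDOW_MS = 5000; // one call per user every 5 seconds

export const throttledCall = functions.https.onCall(async (data, context) => {
  if (!context.auth) {
    throw new functions.https.HttpsError("unauthenticated", "Sign in first.");
  }

  const ref = admin.firestore().doc(`rateLimits/${context.auth.uid}`);

  // A transaction keeps concurrent calls from the same user from racing each other.
  await admin.firestore().runTransaction(async (tx) => {
    const snap = await tx.get(ref);
    const last = snap.data()?.lastCallMs ?? 0;
    if (Date.now() - last < WINDOW_MS) {
      throw new functions.https.HttpsError(
        "resource-exhausted",
        "Too many calls; wait a few seconds."
      );
    }
    tx.set(ref, { lastCallMs: Date.now() });
  });

  // ...actual work goes here...
  return { ok: true };
});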
