I currently have a backend Express service running on App Engine, serving as an API for a mobile app. Users request data from different endpoints at https://my-app.appspot.com/{endpoint}. To authenticate users, my Express app looks something like this: app.get("/home", checkIfAuthenticated, getHomeFeed), where checkIfAuthenticated looks at the authorization header of the request and uses the Firebase Admin SDK to get the user id from the token (as described here). If I don't get a valid uid, I immediately return a 404 response to block the request. If I get a valid uid, I pass it to the getHomeFeed function to get the appropriate data and send that data back as JSON.
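For context, a minimal sketch of what that middleware looks like on my end (it assumes firebase-admin is already initialized; names are illustrative):

const admin = require("firebase-admin");

// Express middleware: verify the Firebase ID token from the Authorization header.
async function checkIfAuthenticated(req, res, next) {
  const header = req.headers.authorization || "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : null;
  if (!token) {
    return res.status(404).send(); // no token: block the request
  }
  try {
    const decoded = await admin.auth().verifyIdToken(token);
    req.uid = decoded.uid; // pass the uid on to the route handler
    next();
  } catch (err) {
    return res.status(404).send(); // invalid or expired token
  }
}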
Here is my main question: since App Engine scales with the number of requests, how can I prevent a malicious actor (authenticated or not) from simply writing a Python program that makes thousands of requests to https://my-app.appspot.com/home, forcing App Engine to spin up lots of instances just to look at all those requests? Ideally I would have some sort of rate limiter that blocks by IP address, as well as authenticated users who make too many requests with valid bearer tokens, so that App Engine doesn't have to spin up a ton of instances just to verify them.
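To illustrate the kind of limiter I mean, something like the express-rate-limit package could be wired in as follows (the windows and limits are arbitrary placeholders):

const rateLimit = require("express-rate-limit");

// Per-IP limiter, applied to every request before authentication.
const ipLimiter = rateLimit({ windowMs: 15 * 60 * 1000, max: 300 });

// Per-user limiter, applied after checkIfAuthenticated so req.uid is set.
const userLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
  keyGenerator: (req) => req.uid,
});

app.use(ipLimiter);
app.get("/home", checkIfAuthenticated, userLimiter, getHomeFeed);

Note that express-rate-limit keeps its counters in each instance's memory by default, so on App Engine with several instances the limit is per instance; a shared store (e.g. Redis via rate-limit-redis) would be needed for a truly global limit.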
Related
Although I fully understand the use of App Check, I still wonder how it can help against spamming requests to an API endpoint.
In the scenario of a hacker using OpenBullet or some other hacker tool to spam thousands of requests per minute to a specific endpoint (for example, a signup endpoint, to create thousands of fake profiles in a social app):
once the hacker gets their hands on the App Check token from the device, can't they simply attach it to the request header and spam the API endpoint we secured on our backend by checking the App Check token, as much as they want?
I mean, as long as the TTL hasn't expired, I guess all their requests will pass the check, so they could use their hacker tool and pretend to come from the untampered app? Or am I missing something?
I guess a solution would be to:
1- force-refresh the App Check token on each fetch request from the mobile app
2- expire the received App Check token programmatically after successful verification on the backend, so that further requests would need a new one that can only be generated from the app, making it harder for the hacker?
Any help is appreciated! :)
I'll put it in a different way. While App Check offers a level of protection for your resources, it does not guarantee 100% protection. The scenario you gave is an example of how it could be bypassed. But what shouldn't be discounted is that App Check makes it harder for a malicious actor to roam around your services and consume them on your budget.
Take a look at this section of the documentation. Also take a look at this question, as it was asked after yours and has a Firebaser (Frank) responding to it.
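As for your idea #2, recent versions of the Admin SDK do offer a replay-protection option along those lines. A rough sketch (assuming a firebase-admin version that supports consuming limited-use App Check tokens):

const admin = require("firebase-admin");

// Express middleware: verify the App Check token and consume it so it
// cannot be replayed (requires limited-use tokens from the client).
async function checkAppCheck(req, res, next) {
  const appCheckToken = req.header("X-Firebase-AppCheck");
  if (!appCheckToken) {
    return res.status(401).send();
  }
  try {
    const result = await admin.appCheck().verifyToken(appCheckToken, { consume: true });
    if (result.alreadyConsumed) {
      return res.status(401).send(); // token was already used once: reject the replay
    }
    next();
  } catch (err) {
    return res.status(401).send(); // invalid App Check token
  }
}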
(Updated description)
Frontend: Android
Core requirements: I would like to write my own code and have it executed on the server. I want the whole backend to be automated (no admin creating tables in a database and inserting records). I still want to benefit from some basic BaaS functions like sending notifications to users, server maintenance, etc. to speed up the MVP development process.
Description of MVP functionality - survey app for entrepreneurs:
1. An entrepreneur adds the survey and information about it (questions, possible answers). It is sent to the server and saved. There are different variants of surveys (single choice, multi-choice, open questions, etc.), so a specific document has to be created automatically by the backend code. Similarly, the creation of a document for responses has to be handled by the backend, and the same goes for the document holding the final results of the survey research.
2. The respondent receives a notification about an available survey. The mobile app retrieves information about the survey from the server and the respondent completes the survey.
3. The application sends the respondent's responses to the server, and the server saves the information.
4. X respondents perform points 2 and 3.
5. When the survey is completed (the number of respondents set by the entrepreneur is reached), the server processes the data collected from all respondents and saves the results of the research (in the appropriate document).
6. The entrepreneur receives a notification about the completed research. The application downloads the results from the server.
Additional requirements:
Server has to be able to serve many entrepreneurs and respondents at the same time without any problems like data corruption.
No admin needed for creating tables or inserting records - Backend is 100% automated.
Certainly!
You could use your admin client side of the application to upload the questions, corresponding answers, answer response limit, and completion flag (Step 1).
You could then retrieve the data from your user side of the client app from Firebase Firestore, and have the users complete the surveys and upload the answers back to Firebase Firestore. (Steps 2 & 3).
Step 5 could be achieved with Cloud Functions for Firebase: a function can listen to Firestore writes, check when a survey document has reached its response limit, and then mark it as complete. Step 6 could also be handled in that same function, by sending a notification via FCM to your admin client.
I know this answer doesn't go into any code specifics, just wanted to let you know that this is most certainly possible with Firebase :)
I would certainly recommend creating an admin client app in addition to the user client app, rather than placing them in the same app!
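To give a rough idea of steps 5 and 6, a Cloud Function along these lines could do the job (collection and field names here are made up for illustration):

const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

// Fires whenever a survey document is updated; when the response count
// reaches the limit set by the entrepreneur, mark it complete and notify them.
exports.onSurveyUpdated = functions.firestore
  .document("surveys/{surveyId}")
  .onUpdate(async (change) => {
    const survey = change.after.data();
    if (survey.completed || survey.responseCount < survey.responseLimit) {
      return null;
    }
    // Mark the survey as complete (results aggregation would go here).
    await change.after.ref.update({ completed: true });
    // Notify the entrepreneur's device via FCM.
    await admin.messaging().send({
      token: survey.ownerFcmToken,
      notification: {
        title: "Survey complete",
        body: "Your survey has reached its response limit.",
      },
    });
    return null;
  });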
I'm trying to get started implementing Web Push in one of my apps. In the examples I have found, the client's endpoint URL is generally stored in memory with a comment saying something like:
In production you would store this in your database...
Since only registered users of my app can/will get push notifications, my plan was to store the endpoint URL in the user's meta data in my database. So far, so good.
The problem comes when I want to allow the same user to receive notifications on multiple devices. In theory, I will just add a new endpoint to the database for each device the user subscribes with. However, in testing I have noticed that endpoints change with each subscription/unsubscription on the same device. So, if a user subscribes/unsubscribes several times in a row on the same device, I wind up with several endpoints saved for that user (all but one of which are bad).
From what I have read, there is no reliable way to be notified when a user unsubscribes or an endpoint is otherwise invalidated. So, how can I tell if I should remove an old endpoint before adding a new one?
What's to stop a user from effectively mounting a denial of service attack by filling my db with endpoints through repeated subscription/unsubscription?
That's meant more as a joke (I can obviously limit the total endpoints for a given user), but the problem I see is that when it comes time to send a notification, I will blast notification services with hundreds of notifications for invalid endpoints.
I want the subscribe logic on my server to be:
Check if we already have an endpoint saved for this user/device combo
If not, add it; if yes, update it
The problem is that I can't figure out how to reliably do #1.
I will just add a new endpoint to the database for each device the user subscribes with
The best approach is to have a table like this:
endpoint | user_id
add a unique constraint (or a primary key) on the endpoint: you don't want to associate the same browser with multiple users, because it's a mess (if an endpoint is already present but has a different user_id, just update the user_id associated with it)
user_id is a foreign key that points to your users table
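A sketch of the corresponding upsert, for example with node-postgres (the table name push_subscriptions is just an example; it assumes a unique constraint on endpoint as described above):

const { Pool } = require("pg");
const pool = new Pool();

// Insert the endpoint, or re-assign it to the current user if it already exists.
async function saveSubscription(endpoint, userId) {
  await pool.query(
    `INSERT INTO push_subscriptions (endpoint, user_id)
     VALUES ($1, $2)
     ON CONFLICT (endpoint) DO UPDATE SET user_id = EXCLUDED.user_id`,
    [endpoint, userId]
  );
}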
if a user subscribes/unsubscribes several times in a row on the same device, I wind up with several endpoints saved for that user (all but one of which are bad).
Yes, unfortunately the push API has a wild unsubscription mechanism and you have to deal with it.
The endpoints can expire or can be invalid (or even malicious, like android.chromlum.info). You need to detect failures (using the HTTP status code, timeouts, etc.) when you try to send the push message from your application server. Then, for some kinds of failures (permanent failures, like expiration) you need to delete the endpoint.
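For example, with the web-push library for Node.js the send logic could look roughly like this (deleteEndpoint is a placeholder for whatever removal function your storage layer provides):

const webpush = require("web-push");

async function notify(subscription, payload) {
  try {
    await webpush.sendNotification(subscription, payload);
  } catch (err) {
    // 404 Not Found / 410 Gone mean the subscription is expired or invalid:
    // remove the endpoint so we never try it again.
    if (err.statusCode === 404 || err.statusCode === 410) {
      await deleteEndpoint(subscription.endpoint);
    } else {
      // Transient failure (timeout, 5xx, ...): keep the endpoint and retry later.
      console.error("Push failed for", subscription.endpoint, err.statusCode);
    }
  }
}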
What's to stop a user from effectively mounting a denial of service attack by filling my db with endpoints through repeated subscription/unsubscription?
As I described above, you need to properly delete the invalid endpoints, once you realize that they are expired or invalid. Basically they will produce at most one invalid request. Moreover, if you have high throughput, it takes only a few seconds for your server to make requests for thousands of endpoints.
My suggestions are based on a lot of experiments and thinking done when I was developing Pushpad.
Another way is to have a keep-alive field on your server and have your service worker update it whenever it receives a push notification. Then regularly purge endpoints that haven't checked in recently.
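A sketch of that keep-alive idea: the service worker reports back to the server each time it actually receives a push (the /push-received route is just a placeholder):

// service-worker.js
self.addEventListener("push", (event) => {
  const data = event.data ? event.data.json() : {};
  event.waitUntil((async () => {
    await self.registration.showNotification(data.title || "Update", { body: data.body });
    // Tell the server this endpoint is still alive so it isn't purged.
    const sub = await self.registration.pushManager.getSubscription();
    if (sub) {
      await fetch("/push-received", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ endpoint: sub.endpoint }),
      });
    }
  })());
});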
I have a server that needs to receive real time updates from Firebase, for multiple users, where each user grants Oauth access to his Firebase data to my app.
My server is implemented using Firebase REST Streaming, based on Server Sent Events.
I need to know if there is a way to multiplex Firebase data pertaining to multiple users on a single stream.
I would like to be able to set up the stream with Oauth tokens pertaining to multiple users, and to subsequently receive real time updates pertaining to the multiple users on the same stream.
Otherwise, it seems that I need to maintain a separate stream per Oauth token, which seems to be non-scalable.
I think Twitter has a Site Streams feature in its API like what I am looking for, implemented via an envelope that indicates which user a message is targeted to.
Does Firebase support anything similar?
A single Firebase REST call will only monitor a single node. E.g.
curl 'https://samplechat.firebaseio-demo.com/users/jack/name.json'
You can control what data is returned from under that node with the orderBy, startAt, endAt and limitTo... parameters. E.g.
curl 'https://samplechat.firebaseio-demo.com/users/.json?orderBy="name"&startAt="Jack"'
There is no way to have a single REST request return data from different nodes/nodesets. So unless you find a way to gather all data you want to return under single node, where it can be returned by a single set of query parameters (orderBy, etc), you will have to execute multiple REST requests to get your data.
Note that the SDKs that Firebase provides internally use a web-socket protocol, so are not impacted by this limitation. If an SDK is available for your server-side language (e.g. node.js, Java), you could solve it by using that one.
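For example, with the Node.js Admin SDK all listeners share a single connection, so you could attach one listener per user path (this sketch assumes your server uses admin credentials rather than the per-user OAuth tokens mentioned in the question):

const admin = require("firebase-admin");
admin.initializeApp({
  credential: admin.credential.applicationDefault(),
  databaseURL: "https://samplechat.firebaseio-demo.com",
});

const db = admin.database();

// One listener per user, all multiplexed over the SDK's single connection.
function watchUser(uid, onChange) {
  const ref = db.ref(`users/${uid}`);
  ref.on("value", (snapshot) => onChange(uid, snapshot.val()));
  return () => ref.off("value"); // call to stop watching this user
}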
There is an application that should read user tweets for every registered user, process them and store data for future usage.
It can reach Twitter in two ways: either via the REST API (polling Twitter every x minutes), or via its Streaming API to have tweets delivered.
Besides the completely different implementations on the server side, I wonder what the other server-side impacts are.
Say the application has thousands of users. Is it better to build some kind of queue and poll Twitter for each user (the simplest scenario), or is it better to use the Streaming API and keep an HTTP connection open for each user? I'm a bit worried about the latter, as it would require keeping thousands of connections open all the time. Are there any drawbacks I'm not aware of? If I wanted to deploy my app on Heroku or on an EC2 instance, would that be OK, or are there any limits?
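To illustrate the "simplest scenario" above, the polling queue I have in mind is roughly this (fetchTweetsForUser and processAndStore stand in for whatever Twitter client call and processing step are used, and the interval is arbitrary):

// Poll Twitter for each registered user in turn,
// instead of keeping one streaming connection open per user.
const POLL_INTERVAL_MS = 5 * 60 * 1000; // every x minutes

async function pollAllUsers(users) {
  for (const user of users) {
    try {
      const tweets = await fetchTweetsForUser(user); // placeholder Twitter REST call
      await processAndStore(user, tweets);           // placeholder processing step
    } catch (err) {
      console.error("Polling failed for", user.id, err);
    }
  }
}

setInterval(() => pollAllUsers(registeredUsers), POLL_INTERVAL_MS);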
How is it done in other apps that constantly need to fetch data for each user?