Is there any way to add Firebase 3 Storage security rules to limit how many files a single authenticated user can upload? For example, 100 files per user.
Or, alternatively, update a file count in the Firebase Database whenever someone uploads a file to Storage, and validate against that count later.
I'm trying to solve the problem of users being able to upload an unlimited amount of data to Storage.
It's not a simple solution, but...
https://medium.com/@felipepastoree/per-user-storage-limit-validation-with-firebase-19ab3341492d
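The gist of that approach, as I understand it, can be sketched as a Storage-triggered Cloud Function that maintains a per-user counter and removes any upload that exceeds the quota. A minimal TypeScript sketch (not necessarily the article's exact implementation), assuming files live under users/{uid}/... and the counter lives at fileCounts/{uid} in the Realtime Database, both paths being hypothetical:

```ts
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

const MAX_FILES_PER_USER = 100;

// Hypothetical layout: each user uploads under users/{uid}/...
export const enforceUploadLimit = functions.storage
  .object()
  .onFinalize(async (object) => {
    const name = object.name ?? '';
    const match = name.match(/^users\/([^/]+)\//);
    if (!match) return;
    const uid = match[1];

    // Atomically bump this user's file count in the Realtime Database.
    const counterRef = admin.database().ref(`fileCounts/${uid}`);
    const { snapshot } = await counterRef.transaction((count) => (count || 0) + 1);

    // Over quota: delete the file that pushed the user past the limit
    // and roll the counter back.
    if ((snapshot?.val() || 0) > MAX_FILES_PER_USER) {
      await admin.storage().bucket(object.bucket).file(name).delete();
      await counterRef.transaction((count) => (count || 1) - 1);
    }
  });
```

Note this enforces the limit after the fact: the file is accepted and then removed. A real version would also decrement the counter from an onDelete trigger so that deletions free up quota.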
I know there are several questions regarding this (e.g. https://stackoverflow.com/a/52808572/3481904), but I still don't have a good solution for my case.
My application has Groups, which are created/removed dynamically, and members (users) can be added/removed at anytime.
Each Group has 0..N private files (Firebase Storage), saved in different paths (all having the prefix groups/{groupId}/...).
In Firestore Security Rules, I use get() & exists() to know if the signed-in user is part of a group, but I cannot do this in the Firebase Storage Security Rules.
The two proposed solutions are:
User Claims:
but the token needs to be refreshed (by signing out/in, or renewing an expired token), which is not acceptable for my use case, because users need access immediately once invited. Also, a user can be part of many groups, so the claims could potentially grow over the 1000-byte limit.
File Metadata:
but Groups can have N files in different paths, so I would need to loop over the listing of all of a group's files and set the group members' userIds in the metadata of each file to allow access to it. This would be an action triggered by Firestore (via a Cloud Function) whenever a member is added or removed.
I don't like this approach because:
it needs to list N files and set metadata on each one (not very performant)
to add new files, I think I would need to make create public (as there is no metadata to check against yet), and then a Function would have to be triggered to add the userIds to the metadata
there might be a delay of some seconds before a file becomes accessible, which could cause problems in my case if a user opens the group page before then, giving a bad experience
So, my questions are:
Is there a better way?
If I only allow the client to get and create all files when authenticated (disallowing delete and list), would this be enough security? I think there is a chance that malicious users could upload anything using an anonymous user, or potentially read all private group files if they know the paths...
Thanks!
If custom claims don't work for you, there is really no "good" way to implement this. Your only real options are:
Make use of Cloud Functions in some way to mirror the relevant data from Firestore into Storage, placing Firestore document data into Storage object metadata to be checked by rules.
Route all access to Storage through a backend you control (could also be Cloud Functions) that performs all the relevant security checks. If you use Cloud Functions, this will not work for files whose content is greater than 10MB, as that's the limit for the size of the request and response with Cloud Functions.
Please file a feature request with Firebase support to allow the use of Firestore documents in Storage rules - it's a common request. https://support.google.com/firebase/contact/support
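To illustrate the first option, here is a rough TypeScript sketch of a Cloud Function that mirrors the group's member list into the custom metadata of every object under the group's prefix whenever the group document changes. The memberIds field and the groups/{groupId}/ prefix are assumptions taken from the question, not a fixed API:

```ts
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

// Mirror the group's member list into the custom metadata of every object
// under groups/{groupId}/ whenever the group document changes.
export const syncGroupMembers = functions.firestore
  .document('groups/{groupId}')
  .onWrite(async (change, context) => {
    const groupId = context.params.groupId;
    const memberIds: string[] = change.after.exists
      ? change.after.get('memberIds') ?? []
      : [];

    const bucket = admin.storage().bucket();
    const [files] = await bucket.getFiles({ prefix: `groups/${groupId}/` });

    // Custom metadata values must be strings, so join the ids into one field.
    await Promise.all(
      files.map((file) =>
        file.setMetadata({ metadata: { memberIds: memberIds.join(',') } })
      )
    );
  });
```

On the rules side, a read rule could then check something like request.auth.uid in resource.metadata.memberIds.split(','). Note this sketch still has the listing cost and the propagation delay the question complains about.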
I had a similar use case; here's another way to go about it without using file metadata.
1. Create a private bucket.
2. Upload files to this bucket via a Cloud Function:
   a. Validate the group membership, then upload to the bucket above.
   b. Generate a signed URL for the uploaded file.
   c. Put this signed URL in Firestore where only the group members can read it (e.g. /groups/id/urls).
3. In the UI, get the signed URL from Firestore for the given image id in a group and render the image.
Because we generate the signed URL and upload the file together, there is no delay before the image can be used. (The upload might take longer, but we can show a spinner.)
We also generate the URL only once, so we don't incur extra Class B operations or extra function invocations every time we add new members to a group.
If you want to be more secure, you could set a short expiry on the signed URLs and rotate them periodically. A sketch of this flow follows.
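A condensed TypeScript sketch of steps 2a-2c as a callable function. The memberIds field, the bucket name, and the document layout are assumptions for illustration, and fileName is assumed to be a plain id with no slashes:

```ts
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

export const uploadGroupImage = functions.https.onCall(async (data, context) => {
  const uid = context.auth?.uid;
  if (!uid) {
    throw new functions.https.HttpsError('unauthenticated', 'Sign in first.');
  }

  // 2a. Validate group membership (memberIds is an assumed field).
  const { groupId, fileName, base64Data } = data;
  const groupSnap = await admin.firestore().doc(`groups/${groupId}`).get();
  const members: string[] = groupSnap.get('memberIds') ?? [];
  if (!members.includes(uid)) {
    throw new functions.https.HttpsError('permission-denied', 'Not a group member.');
  }

  // Upload to the private bucket (name assumed).
  const file = admin
    .storage()
    .bucket('my-app-private-uploads')
    .file(`groups/${groupId}/${fileName}`);
  await file.save(Buffer.from(base64Data, 'base64'));

  // 2b. Generate a signed read URL.
  const [url] = await file.getSignedUrl({
    action: 'read',
    expires: Date.now() + 7 * 24 * 60 * 60 * 1000, // one week
  });

  // 2c. Store it where only group members can read it.
  await admin
    .firestore()
    .doc(`groups/${groupId}/urls/${fileName}`)
    .set({ url, uploadedBy: uid });

  return { url };
});
```

One caveat: callable function payloads are capped at around 10MB, so sending the file as base64 only works for small images; larger files would need a different transport.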
My goal is to have a Firebase Cloud Function track the upload of three separate files to the same storage bucket. These uploads are preceded by a write to the Realtime Database, which would preferably be the trigger for the Cloud Function to track the uploads.
The context is a user adding an item to her shopping cart. The data is written to the RTDB, and then a custom 3D model and 2 images are copied into a storage bucket. If any of these files fails to upload, I need to know that, roll back the 3 files in the storage bucket, and also remove the entry in the database. I could handle this client side, but that isn't ideal, since if the uploads fail it's usually because the connection with the client has failed.
I haven't been able to find any sort of batch add or transaction-type upload for Firebase Storage. Sorry for not having any code to show; I'm not even really sure how to get started on this. Any suggestions would be much appreciated. Thanks!
There are no transactions that cross products like this. Nor are there any transactions offered by Cloud Storage. You're going to have to check errors and manually undo things previously done. Or, have some job that checks for orphaned data and deletes it later.
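As an example of the "job that checks for orphaned data" option, here is a hedged TypeScript sketch of a scheduled Cloud Function. The cartItems path, the createdAt and filePaths fields, and the 10-minute grace period are all assumptions:

```ts
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

// Periodically find cart entries whose three files never all arrived and
// roll everything back: delete whatever files did upload, then the entry.
export const cleanupOrphanedUploads = functions.pubsub
  .schedule('every 15 minutes')
  .onRun(async () => {
    const cutoff = Date.now() - 10 * 60 * 1000;
    const snap = await admin
      .database()
      .ref('cartItems')
      .orderByChild('createdAt')
      .endAt(cutoff)
      .once('value');

    const bucket = admin.storage().bucket();
    const work: Promise<void>[] = [];

    snap.forEach((item) => {
      const paths: string[] = Object.values(item.val().filePaths ?? {});
      work.push(
        (async () => {
          const present = await Promise.all(
            paths.map((p) => bucket.file(p).exists().then(([ok]) => ok))
          );
          if (paths.length < 3 || present.some((ok) => !ok)) {
            // Incomplete upload: remove the stray files and the DB entry.
            await Promise.all(
              paths.map((p) => bucket.file(p).delete().catch(() => undefined))
            );
            await item.ref.remove();
          }
        })()
      );
    });

    await Promise.all(work);
  });
```

The client would write the expected file paths into the RTDB entry up front, so the job knows which objects to look for.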
I want a group of users to access files stored in Cloud Storage, but I want to make sure they are authorized. Do the unique ids generated by Firestore create enough protection to make them unguessable?
I have my files stored using this structure in Firestore:
/projects/uidOfProject/files/uidOfFile
I made sure that only authorized users can view uidOfProject and uidOfFile using Firestore Rules.
I store that actual files in Storage here:
/projects/uidOfProject/files/uidOfFile
But, I cannot lock down this path to only the authenticated user id, because other users can access this project.
Is the fact that I have two unique ids enough to prevent a user who doesn't have access from finding these files? What are the odds of a user figuring out both the uidOfProject and uidOfFile and manipulating that file? Is there a more secure way of doing this? I know cloud functions could offer a solution, but at a cost of speed.
Do the unique ids generated by Firestore create enough protection to make them unguessable?
Security through obscurity is NOT security. Good reference to read.
Unguessable, probably. However, due to the somewhat public nature of URLs, log files, information leaks, "hey, check this out" favors, etc., objects that are not properly protected will be discovered.
If only users of the project can access the files, can they also list them? If so, curiosity might lead someone to browse and see what is there.
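Concretely, rather than relying on unguessable ids alone, the Storage rules can at least require a signed-in user for get and deny list outright. A minimal sketch; the allow write: if false line assumes writes go through a trusted backend:

```
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /projects/{projectId}/files/{fileId} {
      // A leaked path alone is not enough: the caller must be signed in.
      allow get: if request.auth != null;
      // Deny listing so nobody can browse to discover object names.
      allow list: if false;
      // Assumption: writes are handled by a trusted backend, not clients.
      allow write: if false;
    }
  }
}
```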
Trying to work out how I should be storing each user's uploaded files. The files need to be private, so that only the person who uploaded a file can read/write it.
My question is, should I be creating one bucket per userId and securing the bucket to that user, or am I supposed to dump everything in a single bucket and make use of the GCS ACL permissions on each file?
Putting each user's files in their own bucket seems to make sense, but I'm just looking for some clarification around best practices.
In general, there is no need to create a new bucket for each user. That will not scale (in terms of effort) as you'll spend a lot of time administering all these buckets.
You should start with the documentation on Cloud Storage security rules. Especially the page on user based security. You use security rules to determine who can do what to the various files in storage. How you actually write those rules is going to depend on how you want to structure the files. Typically you use the Firebase Auth user id in the path of the files, and you use a wildcard in the rules to protect based on that uid.
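For example, with everything in one bucket and each user's files under their own uid, a typical rule looks like the sketch below; the users/{userId} layout is an assumption:

```
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    // The wildcard captures the uid from the path; only that user may
    // read or write files under users/{userId}/.
    match /users/{userId}/{fileName} {
      allow read, write: if request.auth != null && request.auth.uid == userId;
    }
  }
}
```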
I use Firebase Cloud Storage to upload images.
The app I am working on lets users send images to one another (a chat thing): one user uploads a photo, another downloads it, and once it has been downloaded it should be deleted from Storage.
Example of what I am talking about
User A sends a photo to User B by uploading it to Firebase Storage. User B notices that User A sent him an image and decides to download it; after User B has downloaded the image, it should be deleted from Storage.
My question
What if User A sends too many images and User B never downloads any of them? Then I will end up with useless images taking up space in Storage.
So, in this case, is there a way in Firebase to automatically delete a file some amount of time (n) after it has been uploaded (not client side)?
I am still in the middle of researching this, but it seems you can use lifecycle rules to delete files based on their age.
Here are a few examples listed in the intro of the doc:
Downgrade the storage class of objects older than 365 days to Coldline Storage.
Delete objects created before January 1, 2013.
Keep only the 3 most recent versions of each object in a bucket with versioning enabled.
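For the delete-after-n-days case in the question, a lifecycle rule only needs to be attached to the bucket once. A sketch using the Node.js client; the bucket name and the 7-day age are assumptions:

```ts
import { Storage } from '@google-cloud/storage';

// One-off admin script: attach a lifecycle rule that deletes any object
// older than 7 days.
async function setChatImageExpiry(): Promise<void> {
  const storage = new Storage();
  await storage.bucket('my-chat-images').setMetadata({
    lifecycle: {
      rule: [
        {
          action: { type: 'Delete' },
          condition: { age: 7 }, // days since the object was created
        },
      ],
    },
  });
}

setChatImageExpiry().catch(console.error);
```

Keep in mind the age condition counts whole days, so "(n) amount of time" can't be finer-grained than a day this way. The same rule can also be set with gsutil or in the Cloud Console.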