Set Firebase Storage file size and file number limit - firebase

I am successfully uploading images using AngularFire2 to Firebase Storage.
I have the following upload code:
this.AfStorage.ref(`images/${userId}/${timeStamp}`).putString(base64Image,'data_url');
There are a few issues that I want to solve.
How can I set a limit on the file size? Meaning I want the user to be able to upload only files that are smaller than 10 MB.
How can I limit the number of files? Meaning I want one user to be able to upload only 3 files.
If there are no Firebase server-side solutions, please suggest some client-side solutions.
Thanks

To limit the size of uploads, see this example from the documentation:
service firebase.storage {
  match /b/{bucket}/o {
    match /images {
      // Cascade read to any image type at any path
      match /{allImages=**} {
        allow read;
      }
      // Allow writes to files under the path "images/*", subject to the constraints:
      // 1) File is less than 5MB
      // 2) Content type is an image
      // 3) Uploaded content type matches existing content type
      // 4) File name (stored in imageId wildcard variable) is less than 32 characters
      match /{imageId} {
        allow write: if request.resource.size < 5 * 1024 * 1024
                     && request.resource.contentType.matches('image/.*')
                     && request.resource.contentType == resource.contentType
                     && imageId.size() < 32;
      }
    }
  }
}
There is no way to limit the number of files with security rules, so you'll have to look at workarounds such as the ones shown here:
https://groups.google.com/forum/#!topic/firebase-talk/ZtJPcEJr0Mc (seems to hint this is possible, but I've never tried it)
limit number of children with storage rules
Limit number of files in a firebase storage path
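In the meantime, a client-side guard can at least enforce the 10 MB limit for honest users before starting the upload. Here is a minimal TypeScript sketch, assuming the AngularFireStorage service from the question (the helper names are illustrative); remember a malicious client can bypass this, so the rules above remain the real gate.

import { AngularFireStorage } from '@angular/fire/storage';

// Rough client-side guard: estimate the decoded byte size of a base64 data
// URL and refuse to upload anything over 10 MB. This only protects honest
// clients; the security rules remain the authoritative check.
const MAX_BYTES = 10 * 1024 * 1024;

function base64PayloadBytes(dataUrl: string): number {
  const payload = dataUrl.substring(dataUrl.indexOf(',') + 1);
  // Every 4 base64 characters encode 3 bytes; padding reduces the total.
  const padding = (payload.match(/=+$/) ?? [''])[0].length;
  return (payload.length * 3) / 4 - padding;
}

function uploadIfSmallEnough(
  storage: AngularFireStorage,
  userId: string,
  timeStamp: number,
  base64Image: string
): void {
  if (base64PayloadBytes(base64Image) >= MAX_BYTES) {
    console.warn('Upload rejected: file is larger than 10 MB');
    return;
  }
  storage.ref(`images/${userId}/${timeStamp}`).putString(base64Image, 'data_url');
}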

Related

Firestore Security Rule: Check user input on get query

I'm creating a blog with Firestore. I have two collections called users and blogPosts. Each document in blogPosts contains name, createdAt, createdBy and password (plain string) fields.
I want to create a security rule so clients can access a document only if they provide the correct document password.
According to an idea in this link, I wrote a rule like this:
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /blogPosts/{postUid} {
      allow write: if
        request.resource.data.createdBy == request.auth.uid &&
        request.resource.data.name is string &&
        request.resource.data.name.size() > 2 &&
        request.resource.data.name.size() < 32 &&
        request.resource.data.password is string &&
        request.resource.data.password.size() > 5 &&
        request.resource.data.password.size() < 32;
      allow read: if
        request.auth != null &&
        request.resource.data.password == resource.data.password; // <---- THIS LINE IS NOT WORKING
    }
  }
}
I get this error in the playground with the rule above: Error: simulator.rules line [16], column [8]. Property resource is undefined on object. So it seems request.resource is not available on read queries.
How can I achieve my goal with Firebase security rules, so that only clients that have the blogPosts password can access the documents?
What you're trying to do isn't possible with security rules (and also isn't really "secure" at all). A client app can't simply pass along some password in a query. The only time input is checked is for document fields in a write operation, not document reads.
If you want to check a password, you will have to make some sort of API endpoint and require that the caller provide the password to that endpoint. Again, bear in mind that this is only as secure as your ability to keep that password a secret, because once it becomes known (perhaps by simply reverse engineering your app), anyone will be able to use it.
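For illustration only, such an endpoint could be an HTTPS callable Cloud Function along these lines (the function name checkPostPassword is an assumption, and the plain-text comparison mirrors the question's data model rather than good practice):

import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

// Illustrative callable endpoint: returns the post only when the caller
// supplies the matching password. Storing plain-text passwords is still a
// bad idea; at minimum store a hash and compare hashes here.
export const checkPostPassword = functions.https.onCall(async (data, context) => {
  if (!context.auth) {
    throw new functions.https.HttpsError('unauthenticated', 'Sign in first.');
  }
  const snap = await admin.firestore().doc(`blogPosts/${data.postUid}`).get();
  if (!snap.exists || snap.get('password') !== data.password) {
    throw new functions.https.HttpsError('permission-denied', 'Wrong password.');
  }
  // Strip the password before returning the document to the client.
  const { password, ...post } = snap.data()!;
  return post;
});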

Firebase Storage Security Rule (Image dimensions)

I am running into a problem with my app. I want to check on the backend whether an uploaded photo has the allowed dimensions (1024px * 1024px or 1024px * 1980px, for example), and also that the image is smaller than 30 MB (for all dimensions).
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /photos/{photo} {
      function hasValidSize() {
        // Max. photo size = 30MB (for all dimensions)
        return request.resource.size < 30 * 1024 * 1024;
      }
      allow read;
      allow write: if request.auth != null && hasValidSize();
    }
  }
}
Is it possible to check the photo dimensions as a security rule too?
PS: I have already implemented a photo cropper that has 3 possible dimensions, but what if a hacker downloads the client code and modifies it?
Thanks.
It's not possible to check image dimensions in security rules. The only thing you can check is the total size of the file, which is what request.resource.size is used for.
In fact, Cloud Storage can be used for any type of content at all. It's not limited to images, and doesn't have any special handling for images. To Cloud Storage, everything is just a sequence of bytes.
If you need to place limits on the contents of the file, you'll need to write some backend code for that, and make sure all your clients are using that backend for the upload. Either that, or use Cloud Functions to write a trigger that deletes invalid files after they've been uploaded.
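As an illustration of the Cloud Functions cleanup approach, here is a hedged sketch using the sharp library to read the image dimensions after upload (the allowed-size list mirrors the question and is illustrative):

import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';
import sharp from 'sharp';

admin.initializeApp();

// Illustrative trigger: after a photo lands under photos/, read its
// dimensions and delete it if they are not on the allowed list. This runs
// after upload, so treat the file as untrusted until the trigger has run.
const ALLOWED = [[1024, 1024], [1024, 1980]];

export const enforcePhotoDimensions = functions.storage
  .object()
  .onFinalize(async (object) => {
    if (!object.name?.startsWith('photos/')) return;
    const file = admin.storage().bucket(object.bucket).file(object.name);
    const [buffer] = await file.download();
    const { width, height } = await sharp(buffer).metadata();
    const ok = ALLOWED.some(([w, h]) => w === width && h === height);
    if (!ok) {
      await file.delete();
      console.log(`Deleted ${object.name}: ${width}x${height} not allowed`);
    }
  });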

Correct way to reference images in Firestore document

In my application, my user documents have an avatar image associated with them which is kept in cloud storage. Currently I have a field in the user object that references the download URL of its image. Just wondering if this is the correct/best way to do it.
There isn't really a best way to materialize the link between an avatar image that you store in Cloud Storage and a specific user of your Firebase project.
You can very well do it the way you do (having a "field in the user object that references the download URL").
Another approach would be to store the avatar images in a public "folder" under your default bucket, using the user UID to name the avatar image (see the note on "folders" at the bottom).
Then you can use a link with the following structure to directly download the image (or include it in an img src HTML tag):
https://firebasestorage.googleapis.com/v0/b/<yourprojectname>.appspot.com/o/users%2F38r174prM9aTx4JAdcm50r3V0Hq2.png?alt=media
where users is the name of the "folder" dedicated to public avatar images
and 38r174prM9aTx4JAdcm50r3V0Hq2.png is the image file name for a specific user (i.e. user UID + png extension).
Note that the / is encoded as %2F (standard URL encoding).
You would then set your Cloud Storage security rules like the following:
service firebase.storage {
  match /b/{bucket}/o {
    match /privateFiles { // All other files that are not under users
      match /{allprivateFiles=**} {
        allow read: if false;
        allow write: .....
      }
    }
    match /users/{userId} { // Public "folder"
      allow read;
    }
  }
}
Note: Actually Google Cloud Storage does not have true "folders", but by using a "/" delimiter character in the file path it will behave similarly to folders. In particular the Firebase console will display the files organised in folders.
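If you go with the first approach (a field on the user object), the usual flow is: upload the avatar, ask Storage for its download URL once the upload completes, then write that URL to the user document. A minimal AngularFire sketch, assuming a users collection and an avatars/{uid}.png path (both illustrative; exact import paths vary by AngularFire version):

import { AngularFireStorage } from '@angular/fire/storage';
import { AngularFirestore } from '@angular/fire/firestore';
import { finalize } from 'rxjs/operators';

// Illustrative helper: upload the avatar, then persist its download URL on
// the user document so the app can render it without touching Storage again.
function saveAvatar(
  storage: AngularFireStorage,
  firestore: AngularFirestore,
  uid: string,
  file: File
): void {
  const ref = storage.ref(`avatars/${uid}.png`);
  ref.put(file).snapshotChanges().pipe(
    // finalize fires once the upload task completes (or errors).
    finalize(async () => {
      const avatarUrl = await ref.getDownloadURL().toPromise();
      await firestore.doc(`users/${uid}`).update({ avatarUrl });
    })
  ).subscribe();
}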

Don't allow deletions on Firebase storage?

If I wanted to not allow users to delete a file stored in Firebase storage, what rule would I need to write to accomplish this?
I know for Firebase database I would do something like:
".write": "newData.val() != null"
But how would I do this for storage?
Pretty sure this has been answered a few times (in a few ways), but the easiest answer I've seen is:
allow write: if request.resource.someProperty == resource.someProperty || resource == null;
someProperty can be a hash (if you don't want to allow overwrites) or a name (if you want the contents to be overwritten by a new object).
One way to do this would be to only allow writes if the MD5 hash of the new file is the same as the existing file:
// Allow writes if the hash of the uploaded file is the same as the existing file
allow write: if request.resource.md5Hash == resource.md5Hash;
There are probably more/easier ways. But this is the first one I came across in https://firebase.google.com/docs/reference/security/storage/.

Please suggest a way to store a temp file in Windows Azure

I have a simple feature in an ASP.NET MVC3 app hosted on Azure.
1st step: user uploads a picture
2nd step: user crops the uploaded picture
3rd step: system saves the cropped picture and deletes the temp file, which is the original uploaded picture
Here is the problem I am facing now: where to store the temp file?
I tried storing it somewhere on the Windows file system and in LocalResources. The problem is that these resources are per instance, so there is no guarantee that the code showing the picture to crop runs on the same instance that saved the temp file.
Do you have any idea on this temp file issue?
Normally the file exists just for a short while before being deleted.
The temp file needs to be instance-independent.
Ideally the file should have an expiry setting (for example, 1 hour) so it deletes itself in case the code crashes somewhere.
OK. So what you're after is basically something that is shared storage but expires. Amazon have just announced a rather nice setting called object expiration (https://forums.aws.amazon.com/ann.jspa?annID=1303). Nothing like this for Windows Azure storage yet unfortunately, but that doesn't mean we can't come up with some other approach; indeed, we may even come up with a better (more cost effective) approach.
You say that it needs to be instance-independent, which means using a local temp drive is out of the picture. As others have said, my initial leaning would be towards Blob storage, but you will have cleanup effort there. If you are working with large images (>1MB) or low throughput (<100rps) then I think Blob storage is the only option. If you are working with smaller images AND high throughput, then the transaction costs for blob storage will start to really add up (I have a white paper coming out soon which shows some modelling of this, but some quick thoughts are below).
For a scenario with small images and high throughput, a better option might be to use the Windows Azure Cache as your temporary storage area. At first glance it will be eye-wateringly expensive on a per-GB basis (110GB/month for Cache, 12c/GB for Storage). But with storage your transactions are paid for, whereas with Cache they are 'free'. (Quotas are here: http://msdn.microsoft.com/en-us/library/hh697522.aspx#C_BKMK_FAQ8) This can really add up; e.g. using 100kb temp files held for 20 minutes with a system throughput of 1500rps, using Cache is about $1000 per month vs $15000 per month for storage transactions.
The Azure Cache approach is well worth considering, but to be sure it is the 'best' approach I'd really want to know:
Size of images
Throughput per hour
A bit more detail on the actual client interaction with the server during the crop process. Is it an interactive process where the user will pull the image into their browser and crop visually? Or is it just a simple crop?
Here is what I see as a possible approach:
1. user uploads the picture
2. your code saves it to a blob and stores, in some data backend, the relation between the user session and the uploaded image (marking it as a temp image)
3. display the image in the cropping user interface
4. when the user is done cropping on the client:
4.1. retrieve the original from the blob
4.2. crop it according to the data sent from the user
4.3. delete the original from the blob and the record in the data backend used in step 2
4.4. save the final image to another blob (the final blob)
And have one background process checking for "expired" temp images in the data backend (used in step 2) to delete the images and the records in the data backend.
Please note that even in a WebRole, you still have the RoleEntryPoint descendant, and you can still override the Run method. Implementing an infinite loop in Run() (that method shall never exit!), you can check whether there is anything to delete every N seconds (depending on your Thread.Sleep() interval in Run()).
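For reference, here is a minimal sketch of such an expiry sweep using the modern @azure/storage-blob Node SDK; the container name temp-images and the one-hour cutoff are assumptions from the question, and the equivalent logic could live inside the Run() loop described above.

import { BlobServiceClient } from '@azure/storage-blob';

// Illustrative expiry sweep: delete temp blobs older than one hour.
// Run this on a timer (or in an infinite loop with a sleep, as described
// above) from a single background worker.
const ONE_HOUR_MS = 60 * 60 * 1000;

async function sweepExpiredTempImages(connectionString: string): Promise<void> {
  const service = BlobServiceClient.fromConnectionString(connectionString);
  const container = service.getContainerClient('temp-images'); // assumed name
  const cutoff = Date.now() - ONE_HOUR_MS;
  for await (const blob of container.listBlobsFlat()) {
    const modified = blob.properties.lastModified?.getTime() ?? 0;
    if (modified < cutoff) {
      await container.deleteBlob(blob.name);
    }
  }
}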
You can use Azure blob storage. Have a look at this tutorial.
The sample below may help you: https://code.msdn.microsoft.com/How-to-store-temp-files-in-d33bbb10
You have two ways to handle temp files in Azure:
1. you can use the Path.GetTempPath() and Path.GetTempFileName() functions for the temp file name
2. you can use an Azure blob to simulate it.
// Assumes these usings and a CloudBlobContainer field initialised elsewhere:
// using System;
// using System.Collections.Generic;
// using System.IO;
// using System.Linq;
// using System.Threading.Tasks;
// using Microsoft.WindowsAzure.Storage.Blob;
private CloudBlobContainer container;

private long TotalLimitSizeOfTempFiles = 100 * 1024 * 1024;

private async Task SaveTempFile(string fileName, long contentLength, Stream inputStream)
{
    try
    {
        // First, check whether the container exists; if not, create it.
        await container.CreateIfNotExistsAsync();
        // Init a blob reference.
        CloudBlockBlob tempFileBlob = container.GetBlockBlobReference(fileName);
        // If the blob already exists, delete the old one.
        await tempFileBlob.DeleteIfExistsAsync();
        // Check whether the total blob size is over the limit; if so, clean up.
        await CleanStorageIfReachLimit(contentLength);
        // And upload the new file.
        await tempFileBlob.UploadFromStreamAsync(inputStream);
    }
    catch (Exception ex)
    {
        if (ex.InnerException != null)
        {
            throw ex.InnerException;
        }
        throw;
    }
}

// Check whether the total size of blobs is over the limit; if so, delete the
// oldest blobs until there is room for the new file.
private async Task CleanStorageIfReachLimit(long newFileLength)
{
    List<CloudBlob> blobs = container.ListBlobs()
        .OfType<CloudBlob>()
        .OrderBy(m => m.Properties.LastModified)
        .ToList();
    // Get the total size of all blobs.
    long totalSize = blobs.Sum(m => m.Properties.Length);
    // Calculate the space available before this upload.
    long realLimitSize = TotalLimitSizeOfTempFiles - newFileLength;
    // Delete oldest-first; stop as soon as enough space has been freed.
    foreach (CloudBlob item in blobs)
    {
        if (totalSize <= realLimitSize)
        {
            break;
        }
        await item.DeleteIfExistsAsync();
        totalSize -= item.Properties.Length;
    }
}
