Safety of exposing document IDs for "anyone with the link can view"-style functionality - firebase

I have an API endpoint that I'm using to return some data from a Cloud Firestore collection.
The data it returns is largely insensitive, but it's publicly callable, so I'm not using auth for this endpoint. I wouldn't want the collections to be listable, i.e. I want it to act like "anyone with the link can view" data.
I'm looking up data for a subcollection's document, so currently the call would look like something like this:
GET endpoint.example/?parentDoc=XXXX0000XXXX&subDoc=XXXX0000XXXXX
I was considering creating a separate "references" collection with a UUID or something to represent the two, in case revealing the document IDs like that is considered a bad practice(?) — e.g.
GET endpoint.example/?myOwnRef=123-234-123-234-ABC-DEF
Assuming I have the Firestore locked down with appropriate security rules, is it safe to assume that the only benefit I'd get from further hashing / creating my own (e.g. UUID) reference for the parent doc / subcollection doc is security by obscurity?
...Or is there more merit to further obscuring the IDs here if I'm after a private / shareable link style functionality to reference the data?
EDIT: As Doug Stevenson pointed out, this question refers to autogenerated Firestore document IDs.

It depends on whether your document IDs actually contain any data. If they are just randomly generated, then good security rules should be sufficient to prevent someone from doing something they're not supposed to with a document if they know its ID. There is no advantage to hashing it, since it's already an opaque value.
If the ID does contain some data, then you are putting that data into the hands of someone who might do something with it that you'd not like, and you might want to remove that from view by hashing it.
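For illustration, if clients were ever to read Firestore directly instead of going through your endpoint, the "anyone with the link can view" behaviour can be expressed in security rules by allowing single-document gets while denying listing. A minimal sketch, with hypothetical collection names:

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /parents/{parentDoc}/items/{subDoc} {
      // Anyone who knows the full path can fetch a single document...
      allow get: if true;
      // ...but nobody can list or query the collection, and nobody can write
      allow list: if false;
      allow write: if false;
    }
  }
}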

Related

Restrict specific object key values with authentication in Firestore

I have an object stored in the Firestore database. Among other keys, it has a userId of the user who created it. I now want to store an email address, which is a sensitive piece of info, in the object. However, I only want this email address to be retrieved by the logged in user whose userId is equal to the userId of the object. Is it possible to restrict this using Firebase rules? Or will I need to store that email address in a /private collection under the Firebase object, apply restrictive firebase rules, and then retrieve it using my server?
TL;DR: Firestore document reads are all or nothing; you can't retrieve a partial object from Firestore. So there is no feature at the rules level that will give you the granularity to restrict access to a specific field. The best approach is to create a subcollection with the sensitive fields and apply rules to it.
Taken from the documentation:
Reads in Cloud Firestore are performed at the document level. You either retrieve the full document, or you retrieve nothing. There is no way to retrieve a partial document. It is impossible using security rules alone to prevent users from reading specific fields within a document.
We solved this in two very similar approaches:
As you suggested, you can move your fields to a /private collection and apply rules there. However, this approach caused some issues for us because the /private collection is completely detached from the original doc. Resolving references implied multiple queries and extra calls to Firestore.
The second option, which is also what the documentation suggests and is IMHO a bit better, is to use a subcollection. It is pretty much the same as a collection, but it keeps a hierarchical relationship with the parent collection.
From the same docs:
If there are certain fields within a document that you want to keep hidden from some users, the best way would be to put them in a separate document. For instance, you might consider creating a document in a private subcollection
NOTE:
Those docs also include a good step-by-step on how to create this kind of structure in Firestore, how to apply rules to it, and how to consume the collections in various languages.
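For reference, a minimal rules sketch of such a private subcollection, assuming a structure like /users/{userId}/private/{document}:

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Public part of the user document
    match /users/{userId} {
      allow read: if true;

      // Sensitive fields live in documents under a private subcollection
      match /private/{document} {
        allow read, write: if request.auth != null && request.auth.uid == userId;
      }
    }
  }
}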

Best way to save multiple collections under one user UID

I am writing an app where there is not a lot of interaction with other users. Set and retrieve your own data only.
In Firebase Firestore how could I model this so that everything fits under a users UID?
Something that would look like this?
users/{uid}/user/
users/{uid}/settings/
users/{uid}/weather/
If I want to achieve something like this, then I need to create another UID:
users/{uid}/user/{uid}/{userInfo}
This feels a bit off to me.
Is this wrong? Would it be better if I moved every subcollection into its own collection?
Is this faster / more efficient?
Any help is appreciated!
The most common approaches for me:
Store the profile information, settings and weather in the user document (your {uid}) itself. This is most common for the profile information, but it's always worth considering for other types too: do they really need to be in their own documents?
Have a default name for a single subcollection for each user, and then have each information type as a document with a known name in there. So /users/$uid/documents/profile, /users/$uid/documents/settings, and /users/$uid/documents/weather. Now each information type is in a separate document, meaning you can, for example, secure access to them individually (see the sketch below).
If the information for a certain type is repeated, I'd put that in documents in a known/named subcollection. So if there are many weathers, you'd get /users/$uid/weather/$weatherdocs. So with this you can now have an endless set of the specific type of information.
None of these is inherently better or worse; it all depends on the use-cases of your app.
There will be performance differences between these approaches, as they require a different number of network requests. If this is a concern for your app, I'd recommend testing all approaches above to measure their relative performance against your requirements.
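For illustration, a minimal sketch of the second approach using the Node.js SDK, assuming db is an initialized Firestore instance, uid is the signed-in user's ID, and the theme field is made up:

// Each information type is its own document with a known name
const settingsRef = db.collection('users').doc(uid).collection('documents').doc('settings');

// Read the settings document (inside an async function)
const snapshot = await settingsRef.get();
const settings = snapshot.exists ? snapshot.data() : {};

// Update one field without touching the others
await settingsRef.set({ theme: 'dark' }, { merge: true });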

Is it always safe to use eventId as the Firestore document id?

This article here recommends using the eventId as the document id to prevent multiple creations of a document due to background process retries. Is it guaranteed that there will never be a collision?
The mentioned article shows how to avoid duplicate items created by retries of an unsuccessful function. In short, it says that if you use the add method (reference) and the function is retried (but failed after the Firestore write), you may end up with 2 identical documents in Firestore with different, automatically created IDs.
As a solution to this, the author proposes to create the document ID from the event ID and write to it using set (reference).
This approach guarantees that retries of the same function invocation will not create duplicate items.
Back to the question... I think you are afraid that 2 different invocations will have the same event_id and the document could be overwritten. This, I think, is possible, but in my opinion it's not in the scope of this article, as it answers a different question and creates as simple a use case as possible to help understand the approach.
Let's imagine we have two different functions invoked by the same event, writing different content to the same collection. The result will be unpredictable, I think. However, in such a situation you can use the same mechanism, upgraded a little bit, e.g. like this: <function_name>_<event_id>. Using the example from the article, it would be a small change like:
...
return db.collection('contents').doc('<function_name>_'+eventId).set(content).then
...
So, in my understanding, if you are afraid of collisions, you should add additional elements to the created document references, like in the example above.
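For context, a minimal sketch of how this could look in a 1st-gen background Cloud Function (the function and collection names here are made up); the event ID comes from context.eventId:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

// Background function made idempotent by keying the write on the event ID,
// prefixed with the function name so two functions listening to the same event
// cannot overwrite each other's document.
exports.mirrorMessage = functions.firestore
  .document('messages/{messageId}')
  .onCreate((snap, context) => {
    const content = snap.data();
    // set() on a deterministic ID makes retries overwrite instead of duplicating
    return db.collection('contents')
      .doc('mirrorMessage_' + context.eventId)
      .set(content);
  });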
From my point of view, the ability to use an event_id as a Firestore document ID depends on your context and requirements.
For example, from the "business" point of view: is the message/event really a unique business-related thing (thus you really would like to avoid duplication of messages)? Or is there some other business entity which is to be unique, but there can be more than one message (with different event_id) about that business entity?
On top of that, to the best of my knowledge, it may be a good practice to generate/create Firestore document IDs randomly (as a hash, a GUID, etc.). In that case, search/retrieval from Firestore should work "faster". So, I don't know if the event_id is "random" enough in your context. Maybe it is OK, maybe not...
In my personal experience, I try to generate a document ID as a hex digest of a hash of a string (maybe a composed string) which is supposed to be unique in the business context. For example, say the event/message is a google.storage.object.finalize event. In that case, I would use some metadata about the underlying object/file. Depending on the business context and requirements, that can (or need not) be a bucket name, object name, size, md5 or crc32c, etc., or a combination of those elements... The chosen elements are concatenated into a string, then a hash is calculated, and the hex digest of that hash becomes the document ID in the Firestore collection.
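For illustration, a minimal Node.js sketch of that idea; the chosen metadata fields and the files collection are assumptions, and objectMetadata stands for the event payload:

const crypto = require('crypto');

// Build a document ID from a hash of fields that are unique in the business context.
// Which fields to include (bucket, name, size, md5Hash, crc32c, ...) depends on your requirements.
function documentIdFor(object) {
  const key = [object.bucket, object.name, object.md5Hash].join('|');
  return crypto.createHash('sha256').update(key).digest('hex');
}

// The hex digest becomes the document ID, so reprocessing the same object
// writes to the same document instead of creating a new one (inside an async function).
const docId = documentIdFor(objectMetadata);
await db.collection('files').doc(docId).set({ processed: true });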

Using Firestore document's auto-generated ID versus using a custom ID

I'm currently deciding on my Firestore data structure.
I'll need a products collection, and the products items will live inside of it as documents.
Here are my product's fields:
uniqueKey: string
description: array of strings
images: array of objects
price: number
QUESTION
Should I use Firestore auto-generated ID's to be the ID of my documents, or is it better to use my uniqueKey (which I'll query for in many occasions) as the document ID? Is there a best option between the 2?
I imagine that if I use my uniqueKey, it will make my life easier when retrieving a single document, but I'll have to query for more than 1 product on many occasions too.
Using my uniqueKey as ID:
db.collection("products").doc("myUniqueKey").get();
Using my Firestore auto-generated ID:
db.collection("products").where("uniqueKey", "==", "myUniqueKey").get();
Is this enough of a reason to go with my uniqueKey instead of the auto-generated one? Is there a rule of thumb here? What's the best practice in this case?
In terms of making queries from a client, using only the information you've given in the question, I don't see that there's much practical difference between a document get using its known ID, or a query on a field that is also unique. Either way, an index is used on the server side, and it costs exactly 1 document read. The document get() might be marginally faster, but it's not worthwhile to optimize like this (in my opinion).
When making decision about data modeling like this, it's more important to think about things like system behavior under load and security rules.
If you're reading and writing a lot of documents whose IDs have a sequential property, you could run into hotspotting on those writes. So, if you want to use your own ID, and you expect to be reading and writing them in that sequence under heavy load, you could have a problem. If you don't anticipate this to be the situation, then it likely doesn't matter too much whose ID you use.
If you are going to use security rules to limit access to documents, and you use the contents of other documents to help with that, you'll need to be able to uniquely identify those documents in your rules. You can't perform a query against a collection in rules, so you might need meaningful IDs that give direct access when used by rules. If your own IDs can easily be used this way in security rules, that might be more convenient overall. If you're forced to use Firestore's generated IDs, it might become inconvenient, difficult, or expensive to try to maintain a relationship between your IDs and Firestore's IDs.
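For example, a rules sketch of that idea, using a hypothetical admins collection whose document IDs are user UIDs:

service cloud.firestore {
  match /databases/{database}/documents {
    match /products/{productId} {
      allow read: if true;
      // Rules can only reference other documents by a direct path, never by query,
      // so the "admins" document ID has to be something known up front (here, the UID).
      allow write: if exists(/databases/$(database)/documents/admins/$(request.auth.uid));
    }
  }
}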
In any event, the decision you're making is not just about which ID is "better" in a general sense, but which ID is better for your specific, anticipated situation, under load, with security in mind.

Managing Denormalized/Duplicated Data in Cloud Firestore

If you have decided to denormalize/duplicate your data in Firestore to optimize for reads, what patterns (if any) are generally used to keep track of the duplicated data so that they can be updated correctly to avoid inconsistent data?
As an example, if I have a feature like a Pinterest Board where any user on the platform can pin my post to their own board, how would you go about keeping track of the duplicated data in many locations?
What about creating a relational-like table for each unique location where the data can exist, which is used to reconstruct the paths that require updating?
For example, creating a users_posts_boards collection that is firstly a collection of userIDs with a sub-collection of postIDs that finally has another sub-collection of boardIDs with a boardOwnerID. Then you use those to reconstruct the paths of the duplicated data for a post (eg. /users/[boardOwnerID]/boards/[boardID]/posts/[postID])?
Also if posts can additionally be shared to groups and lists would you continue to make users_posts_groups and users_posts_lists collections and sub-collections to track duplicated data in the same way?
Alternatively, would you instead have a posts_denormalization_tracker that is just a collection of unique postIDs that includes a sub-collection of locations that the post has been duplicated to?
{
  postID: 'someID',
  locations: (   <---- collection
    "path/to/post/location1",
    "path/to/post/location2",
    ...
  )
}
This would mean that you would basically need to have all writes to Firestore done through Cloud Functions that can keep track of this data for security reasons... unless Firestore security rules are sufficiently powerful to allow add operations to the /posts_denormalization_tracker/[postID]/locations sub-collection without allowing reads or updates to the sub-collection or the parent postIDs collection.
I'm basically looking for a sane way to track heavily denormalized data.
Edit: oh yeah, another great example would be the post author's profile information being embedded in every post. Imagine the hellscape trying to keep all that up-to-date as it is shared across a platform and then a user updates their profile.
I'm answering this question because of your request from here.
When you are duplicating data, there is one thing you need to keep in mind: in the same way you are adding data, you need to maintain it. In other words, if you want to update/delete an object, you need to do it in every place where it exists.
What patterns (if any) are generally used to keep track of the duplicated data so that they can be updated correctly to avoid inconsistent data?
To keep track of all operations that we need to do in order to have consistent data, we add all operations to a batch. You can add one or more update operations on different references, as well as delete or add operations. For that please see:
How to do a bulk update in Firestore
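For illustration, a minimal Node.js sketch of such a batch; the paths and fields below are hypothetical locations where the author data was duplicated, and db is assumed to be an initialized Firestore instance:

const newAuthorData = { authorName: 'New name', authorPhotoUrl: 'https://example.com/photo.png' };

const batch = db.batch();
batch.update(db.doc('posts/post1'), newAuthorData);
batch.update(db.doc('users/user1/boards/board1/posts/post1'), newAuthorData);
batch.update(db.doc('groups/group1/posts/post1'), newAuthorData);

// Either all of these updates succeed, or none of them are applied (inside an async function)
await batch.commit();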
What about creating a relational-like table for each unique location where the data can exist, which is used to reconstruct the paths that require updating?
In my opinion there is no need to add an extra "relational-like table", but if you feel comfortable with it, go ahead and use it.
Then you use those to reconstruct the paths of the duplicated data for a post (eg. /users/[boardOwnerID]/boards/[boardID]/posts/[postID])?
Yes, you need to pass the corresponding document ID to each document() method in order to make the update operation work. Unfortunately, there are no wildcards in Cloud Firestore paths to documents. You have to identify the documents by their IDs.
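For example, a sketch of rebuilding one such path with concrete IDs; boardOwnerID, boardID and postID are placeholders coming from your own bookkeeping, and the updated field is made up:

const batch = db.batch();
// No wildcards: the full path of the duplicated post has to be assembled from known IDs
const duplicateRef = db
  .collection('users').doc(boardOwnerID)
  .collection('boards').doc(boardID)
  .collection('posts').doc(postID);
batch.update(duplicateRef, { title: 'Updated title' });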
Alternatively, would you instead have a posts_denormalization_tracker that is just a collection of unique postIDs that includes a sub-collection of locations that the post has been duplicated to?
I consider that this isn't necessary either, since it requires extra read operations. Since everything in Firestore is about the number of reads and writes, I think you should reconsider this approach. Please see Firestore usage and limits.
unless Firestore security rules are sufficiently powerful to allow add operations to the /posts_denormalization_tracker/[postID]/locations sub-collection without allowing reads or updates to the sub-collection or the parent postIDs collection.
Firestore security rules are powerful enough to do that. You can allow reads or writes, or even apply security rules to each CRUD operation you need.
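For illustration, a rules sketch along those lines for the tracker sub-collection mentioned in the question:

service cloud.firestore {
  match /databases/{database}/documents {
    match /posts_denormalization_tracker/{postID}/locations/{locationID} {
      // Clients may add new entries, but cannot read, change or delete them
      allow create: if request.auth != null;
      allow read, update, delete: if false;
    }
  }
}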
I'm basically looking for a sane way to track heavily denormalized data.
The simplest way I can think of is to add the operations to a key-value data structure. Let's assume we have a map that looks like this:
// Map each duplicated object to the document reference where it lives
Map<Object, DocumentReference> map = new HashMap<>();
map.put(customObject1, reference1);
map.put(customObject2, reference2);
map.put(customObject3, reference3);
// And so on
Iterate through the map, add all those keys and values to a batch, commit the batch, and that's it.
