Cosmos DB user id/email as partition key

I have a dilemma about choosing the best (synthetic) value for the partition key for storing user data.
User document has:
- id (guid)
- email (used to log in)
- profile data
There are 2 main types of queries:
Looking for user by id (most queries)
Looking for user by email (login and some admin queries)
I want to avoid cross partition queries.
If I choose id for the partitionKey (synthetic field), then login queries will be cross-partition.
On the other hand, if I choose email, then a user ever changing their email becomes a problem.
What I am thinking is to introduce a new type within the collection. Something like:
userId: "guid",
userEmail: "email1",
partitionKey: "users-mappings"
Then I can have the User document itself as:
id: "someguid",
type: "user",
partitionKey: "user_someguid",
profileData: {}
That way, when a user logs in, I first check the mappings type/partition by email, get the guid, and then fetch the actual User document by that guid.
Also, this way the email can be changed without affecting partitioning.
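In code, the login lookup would be roughly like this (a minimal sketch, assuming the @azure/cosmos Node.js SDK and a single container named "users"; names are illustrative):

const { CosmosClient } = require("@azure/cosmos");

const client = new CosmosClient({
  endpoint: process.env.COSMOS_ENDPOINT,
  key: process.env.COSMOS_KEY,
});
const container = client.database("mydb").container("users");

async function findUserByEmail(email) {
  // Step 1: single-partition query against the mappings partition.
  const { resources } = await container.items
    .query(
      {
        query: "SELECT c.userId FROM c WHERE c.userEmail = @email",
        parameters: [{ name: "@email", value: email }],
      },
      { partitionKey: "users-mappings" }
    )
    .fetchAll();
  if (resources.length === 0) return null;

  // Step 2: point-read the User document by id + its own partition key.
  const userId = resources[0].userId;
  const { resource: user } = await container
    .item(userId, "user_" + userId)
    .read();
  return user;
}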
Is this a valid approach? Any problems with it? Am I missing something?

Your question does not have one standard answer. In my opinion, your mapping-type solution requires two queries, which is also inefficient. Choosing a partition key is always a process of balancing the pros and cons. Please see the guidance in the official documentation.
Based on your description:
1. Looking for user by id (most queries)
2. Looking for user by email (login and some admin queries)
I suggest you prioritize the most frequent query, that is to say, id.
My reasons:
1. id won't change easily; it is relatively stable.
2. A session or cookie can be saved after login, so logins are not nearly as frequent as id lookups.
3. id is your most frequent query condition, so you won't cross all partitions every time.
4. If you are concerned about login performance, don't forget to add an indexing policy for the email property; it can also improve performance (see the sketch below).
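For example, a minimal sketch of such a policy applied at container creation (assuming email is a root-level property and the @azure/cosmos Node.js SDK; container name and key path are illustrative):

async function createUsersContainer(database) {
  // Index only the paths we query on; exclude everything else.
  const indexingPolicy = {
    indexingMode: "consistent",
    includedPaths: [{ path: "/email/?" }, { path: "/id/?" }],
    excludedPaths: [{ path: "/*" }],
  };
  const { container } = await database.containers.createIfNotExists({
    id: "users",
    partitionKey: { paths: ["/id"] },
    indexingPolicy,
  });
  return container;
}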

As you already know, when querying Cosmos DB, fan-out should be the last option, especially for a high-volume action such as logging in. Plus, the cost in RUs will be significantly higher with large data.
In the Cosmos DB SQL API, one pattern is to use synthetic partition keys. You can compose a synthetic partition key by concatenating the id and the email on write. This pattern works for a myriad of query scenarios, providing flexibility.
Something like this:
{
  "id": "123",
  "email": "joe@abc.com",
  "partitionKey": "123-joe@abc.com"
}
Then on read, do something like this:
SELECT s.something
FROM s
WHERE STARTSWITH(s.partitionKey, "123")
OR
ENDSWITH(s.partitionKey, "joe@abc.com")
You can also use SUBSTRING() etc...
With the above approach, you can search for a user either by their id or email and still use the efficiency of a partition key, minimizing your query RU cost and optimizing performance.
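On write, composing the key is a one-liner; a sketch, again assuming the @azure/cosmos Node.js SDK and a container handle:

async function createUser(container, user) {
  // Compose the synthetic partition key from id and email on every write.
  const doc = {
    ...user,
    partitionKey: user.id + "-" + user.email, // e.g. "123-joe@abc.com"
  };
  const { resource } = await container.items.create(doc);
  return resource;
}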

Related

Adding a new field to nested subcollection in firebase [duplicate]

I have two collections. One collection, "User", contains the user info (username, ...), and one collection, "Post", contains all posts of my Flutter application. A post document contains a "Text Post" and the "Username" of the writer. I added an option in my application to allow the user to change their nickname every 6 months. But then I must change the username in the "User" collection and in all posts they created in the "Post" collection. What is the best practice?
The user makes a query to update the username in the "User" collection; I intercept the onUpdate in a Cloud Function and update all posts server-side.
The user makes a query to update the username in the "User" collection and updates the whole "Post" collection client-side.
I guess if I do a getDocuments() there is a limit, so I should do it in multiple batches if I have too many "Post" documents, am I correct?
There is no singular best practice here. Both approaches you describe are valid, and neither is inherently better than the other.
A few things to keep in mind in either scenario:
You may not be able to handle all updates in a single batched write (since that can handle at most 500 documents at once), so I'd recommend not wasting energy on that.
In some scenarios it is also acceptable (and sometimes even required) not to update the existing documents, so I recommend always considering that too.
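If you do go the Cloud Function route, the fan-out could look roughly like this (a sketch only: the "User"/"Post" collection names and "Username" field come from your question, and looking posts up by the old username assumes usernames are unique; a dedicated author-id field on each post would be more robust):

const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

exports.onUsernameChange = functions.firestore
  .document("User/{uid}")
  .onUpdate(async (change) => {
    const before = change.before.data();
    const after = change.after.data();
    if (before.Username === after.Username) return null;

    // Find every post still carrying the old username.
    const posts = await admin.firestore()
      .collection("Post")
      .where("Username", "==", before.Username)
      .get();

    // Batched writes cap at 500 operations, so update in chunks.
    const docs = posts.docs;
    for (let i = 0; i < docs.length; i += 500) {
      const batch = admin.firestore().batch();
      docs.slice(i, i + 500).forEach((doc) =>
        batch.update(doc.ref, { Username: after.Username })
      );
      await batch.commit();
    }
    return null;
  });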

How to keep documents with 2 partition keys in sync / referential integrity?

I have a cosmos db with high cardinality synthetic partition keys and type properties.
I need a setup where users can share documents between them.
For example, this is a document:
{
  "id": "guid",
  "title": "Example document to share",
  "ownerUserId": "user1Guid",
  "type": "usersDocument",
  "partitionKey": "user_user1Guid_documents"
}
Now, a user wants to share this document with another user.
Assumptions:
one document can be shared with many users (thousands)
one user can have thousands of documents shared with him
For these 2 reasons:
I don't want to embed sharings in the document documents nor in the user documents (writes would very soon become inefficient/expensive); I would prefer to model this m:n relationship as separate documents.
I don't want to put the shares for all users/documents together, as that would create hot spots very soon.
I need both queries:
1. ListDocumentsSharedWithMe
In this query, at query time, I know the id of the user the documents are shared with.
2. ListAllUsersISharedThisDocumentWith
In this query, at query time, I know the id of the document that has been shared with different users.
All this makes me think I should have 2 separate document types with separate partition keys.
For listing all documents shared with me:
{
  "id": "documentGuid",
  "type": "sharedWithMe",
  "partitionKey": "sharedWithMe_myUserGuid"
}
(This could also be a single document holding a collection of shared documents; the important part here is the partitionKey.)
Now I can easily run SQL like SELECT * FROM c WHERE c.type = "sharedWithMe" against the partition key containing my user guid.
For listing all users I shared some document with, it's similar:
{
  "id": "userISharedWithGuid",
  "type": "documentSharings",
  "partitionKey": "documentShare_documentGuid"
}
Now I can easily run SQL like SELECT * FROM c WHERE c.type = "documentSharings" against the partition key containing my document guid.
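In code, both lookups then become single-partition queries; a sketch assuming the @azure/cosmos Node.js SDK and one shared container:

async function listDocumentsSharedWithMe(container, myUserGuid) {
  // Single-partition query: scoped to this user's "sharedWithMe" partition.
  const { resources } = await container.items
    .query("SELECT * FROM c WHERE c.type = 'sharedWithMe'", {
      partitionKey: "sharedWithMe_" + myUserGuid,
    })
    .fetchAll();
  return resources;
}

async function listUsersISharedThisDocumentWith(container, documentGuid) {
  // Single-partition query: scoped to this document's sharing partition.
  const { resources } = await container.items
    .query("SELECT * FROM c WHERE c.type = 'documentSharings'", {
      partitionKey: "documentShare_" + documentGuid,
    })
    .fetchAll();
  return resources;
}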
Question:
When a user shares a document with another user, both documents should be created, each with a different partition key (thus, no stored procedures/transactions).
How do I keep this "atomic-like", or avoid create/update anomalies?
Or is there any better way to model this?
I think your method makes sense; I do something similar, partitioning in multiple ways based on the scope of a query. I assume your main concern is a failure happening between saving the first and the last of the related documents? Unfortunately, the only way to manage the chain of documents as they save is within your application code: we make sure we save in the order that makes rollback easiest, then implement a rollback method within the exception handler. This works by keeping a collection of saved documents in memory.
As you say, since you are across partitions, there is no transaction handling out of the box.
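A sketch of that ordered save plus compensating rollback (illustrative only; assumes both document types live in one container accessed via the @azure/cosmos Node.js SDK):

async function shareDocument(container, documentGuid, targetUserGuid) {
  const shareDoc = {
    id: documentGuid,
    type: "sharedWithMe",
    partitionKey: "sharedWithMe_" + targetUserGuid,
  };
  const reverseDoc = {
    id: targetUserGuid,
    type: "documentSharings",
    partitionKey: "documentShare_" + documentGuid,
  };

  // Save in an order that is easy to roll back; remember what was created.
  const created = [];
  try {
    await container.items.create(shareDoc);
    created.push(shareDoc);
    await container.items.create(reverseDoc);
  } catch (err) {
    // Compensate: delete whatever was already written, then rethrow.
    for (const doc of created) {
      await container.item(doc.id, doc.partitionKey).delete();
    }
    throw err;
  }
}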

Cloud Firestore and data modeling: From RDBMS to No-SQL

I am building an iOS app that is using Cloud Firestore (not Firebase realtime database) as a backend/database.
Google is trying to push new projects towards Cloud Firestore, and to be honest, developers with new projects should opt in to Firestore (better querying, easier to scale, etc.).
My issue is the same one any relational database developer has when switching to a NoSQL database: data modeling.
I have a very simple scenario, that I will first explain how I would configure it using MySQL:
I want to show a list of posts in a table view, and when the user clicks on one post, to expand it and show more details for that post (let's say the user who wrote it). Sounds easy.
In a relational database world, I would create 2 tables: one named "posts" and one named "users". Inside the "posts" table I would have a foreign key indicating the user. Problem solved.
Using this approach, I can easily achieve what I described, and also, if a user updates his/her details, you will only have to change it in one place and you are done.
Lets now switch to Firestore. I like to think of RDBMS's table names as Firestore's collections and the content/structure of the table as the documents.
In my mind I have 2 possible solutions:
Solution 1:
Follow the same logic as the RDBMS: inside the posts collection, each document should have a key named "userId" and the value should be the documentId of that user. Then by fetching the posts you will know the user. Querying the database a second time will fetch all user related details.
Solution 2:
Data duplication: Each post should have a map (nested object) with a key named "user" and containing any user values you want. By doing this the user data will be attached to every post it writes.
Coming from the normalization realm of RDBMS this sounds scary, but a lot of NoSQL documentation encourages duplication(?).
Is this a valid approach?
What happens when a user needs to update their email address? How easily can you make sure that the email is updated in all places?
The only benefit I see in the second solution is that you can fetch both post and user data in one call.
Is there any other solution for this simple yet very common scenario?
PS: Go easy on me, first-time NoSQL dev.
Thanks in advance.
Use solution 1. Guidance on nesting vs not nesting will depend on the N-to-M relationship of those entities (for example, is it 1 to many, many to many?).
If you believe you will never access an entity without accessing its 'parent', nesting may be appropriate. In Firestore (or document-based NoSQL databases), you should decide whether to nest that entity directly in the document or in a subcollection based on the expected size of the nested entity. For example, messages in a chat should be a subcollection, as they may in total exceed the maximum document size.
Mongo, a leading noSQL db, provides some guides here
Firestore also provided docs
Hope this helps
@christostsang I would suggest a combination of option 1 and option 2. I like to duplicate data for the view layer and reference the user_id as you suggested.
For example, you will usually show a post and the created_by or author_name with the post. Rather than having to pay additional money and cycles for the user query, you could store both the user_id and the user_name in the document.
A model you could use would be an object/map in Firestore; here is an example model for you to consider:
posts = {
  id: xxx,
  title: xxx,
  body: xxx,
  likes: 4,
  user: { refId: xxx123, name: "John Doe" }
}

users = {
  id: xxx,
  name: xxx,
  email: xxx,
}
Now when you retrieve the post document(s) you also have the user/author name included. This makes it easy on a postList page, where you might show posts from many different users/authors without needing to query each user to retrieve their name. When a user clicks on a post and you want to show additional user/author information, like their email, you can perform the query for that one user on the postView page. FYI: you will need to consider changes that users make to their name and whether you will update all posts to reflect the name change.
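A sketch of the read side with this model (Firestore web SDK, v8-style namespaced API; collection names as above):

const db = firebase.firestore();

// Post list: one query; the author's name is embedded in each post document.
async function loadPostList() {
  const snap = await db.collection("posts").get();
  return snap.docs.map((doc) => ({ id: doc.id, ...doc.data() }));
}

// Post detail: only now pay for the user read, via the stored reference id.
async function loadPostDetail(post) {
  const userSnap = await db.collection("users").doc(post.user.refId).get();
  return { ...post, author: userSnap.data() }; // includes email etc.
}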

Firebase query for bi-directional link

I'm designing a chat app much like Facebook Messenger. My two current root nodes are chats and users. A user has an associated list of chats users/user/chats, and the chats are added by autoID in the chats node chats/a151jl1j6. That node stores information such as a list of the messages, time of the last message, if someone is typing, etc.
What I'm struggling with is where to define which two users are in the chat. Originally, I put a reference to the other user as the value of the chatId key in the users/user/chats node, but I thought that was a bad idea in case I ever wanted group chats.
What seems more logical is to have a chats/chat/members node in which I define userId: true, user2id: true. My issue with this is how to efficiently query it. For example, if the user is going to create a new chat with a user, we want to check if a chat already exists between them. I'm not sure how to do the query of "Find chat where members contains currentUserId and friendUserId" or if this is an efficient denormalized way of doing things.
Any hints?
Although the idea of having ids in the format id1---||---id2 definitely gets the job done, it may not scale if you expect to have large groups, and you have to account for id2---||---id1 comparisons, which get more complicated when you have more people in a conversation. You should go with that if you don't need to worry about large groups.
I'd actually go with using the autoId chats/a151jl1j6 since you get it for free. The recommended way to structure the data is to make the autoId the key in the other nodes with related child objects. So chats/a151jl1j6 would contain the conversation metadata, members/a151jl1j6 would contain the members in that conversation, messages/a151jl1j6 would contain the messages and so on.
"chats":{
"a151jl1j6":{}}
"members":{
"a151jl1j6":{
"user1": true,
"user2": true
}
}
"messages":{
"a151jl1j6":{}}
The part where this gets a little "inefficient" is querying for conversations that include both user1 and user2. The recommended way is to create an index of conversations for each user and then query the members data.
"user1":{
"chats":{
"a151jl1j6":true
}
}
This is a trade-off when it comes to querying relationships with a flattened data structure. The queries are fast since you are only dealing with a subset of the data, but you end up with a lot of duplicate data that needs to be accounted for when modifying/deleting; e.g. when a user leaves the chat conversation, you have to update multiple structures.
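For example, a sketch of the "does a chat between these two users already exist?" check with the JavaScript SDK, using the structures above:

const db = firebase.database();

// Walk user1's chat index, then test the other user's membership per chat.
async function findExistingChat(currentUserId, friendUserId) {
  const indexSnap = await db
    .ref("users/" + currentUserId + "/chats")
    .once("value");
  const chatIds = Object.keys(indexSnap.val() || {});

  for (const chatId of chatIds) {
    const memberSnap = await db
      .ref("members/" + chatId + "/" + friendUserId)
      .once("value");
    if (memberSnap.val() === true) return chatId; // existing chat found
  }
  return null; // no chat yet; create one with push()
}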
Reference: https://firebase.google.com/docs/database/ios/structure-data#flatten_data_structures
I remember I had a similar issue some time ago. The way I solved it:
user 1 has an unique ID id1
user 2 has an unique ID id2
Instead of adding a new chat by autoId chats/a151jl1j6, the ID of the chat was id1---||---id2 (super-original human-readable delimiter)
(which is exactly what you've originally suggested)
Originally, I put a reference to the other user as the value of the chatId key in the users/user/chats node, but I thought that was a bad idea in case I ever wanted group chats.
There is a saying: https://en.wikipedia.org/wiki/You_aren%27t_gonna_need_it
There might be a limitation on how many user IDs can live in the path; you can always hash the value...

How to entirely skip validation in simple schema and allow incomplete documents to be stored?

I'm creating an order form and have a schema defined for an Order (certain required fields such as address, customer info, items selected and their quantities, etc.).
a. User visits site.
b. A unique ID is generated for their session as well as a timestamp.
var userSession = {
  _id: createId(),
  timestamp: new Date(),
};
var sessionId = userSession._id;
c. The userSession is placed in local storage.
storeInLocalStorage('blahblah', sessionObject);
d. An Order object is created with the sessionId as the only field so far.
var newOrder = {
  sessionId: sessionId,
};
e. Obviously at this point the Order object won't validate according to the schema so I can't store it in Mongo. BUT I still want to store it in Mongo so I can later retrieve incomplete orders, or orders in progress, using the sessionID generated on the user's initial visit.
This won't work because it fails validation:
Orders.insert(newOrder);
f. When a user revisits the site I want to be able to get the incomplete order from Mongo and resume:
var sessionId = getLocalStorage('blahblah')._id;
var incompleteOrder = Orders.findOne({ sessionId: sessionId });
So I'm not sure how to go about doing this while accomplishing these points.
I want full simpleschema validation on the Orders collection when the user is entering in items on the forms and when the user is intending to submit a full, complete order.
I want to disable simpleschema validation on the Orders collection and still allow storing into the DB so that partial orders can be stored for resumption at a later time.
I can make a field conditionally required using this here but that would mean 50+ fields would be conditionally required just for this scenario and that seems super cumbersome.
It sounds like you want to have your cake, and eat it too!
I think the best approach here would be keep your schema and validation on the Orders collection, but store incomplete orders elsewhere.
You could store them in another collection (with a more relaxed schema) if you want them on the server (possibly to enable resuming on another device for a logged-in user), or more simply in Local Storage, and still enable the resume-previous-order behaviour you want.
Only write to the Orders collection when the order is complete (and passes validation).
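A sketch of that split in Meteor (the IncompleteOrders collection name and OrderSchema are illustrative; sessionId and newOrder come from the question):

import { Mongo } from "meteor/mongo";

// Strict collection: full SimpleSchema validation on every write.
const Orders = new Mongo.Collection("orders");
Orders.attachSchema(OrderSchema); // your existing order schema

// Relaxed holding area: no schema attached, partial orders welcome.
const IncompleteOrders = new Mongo.Collection("incompleteOrders");

// While the user works through the form:
IncompleteOrders.upsert({ sessionId: sessionId }, { $set: newOrder });

// On final submit: validated insert, then discard the draft.
Orders.insert(completeOrder); // rejected if it fails OrderSchema
IncompleteOrders.remove({ sessionId: sessionId });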
Here's a variation on @JeremyK's answer: add an inProgress key to your order of type [Object]. This object would have no deeper validation. Keep your in-progress order data in there until the order is final, then copy/move all the relevant data into the permanent keys and remove the inProgress key. This requires making all the real keys optional, of course. The advantage is that the object maintains its primary key throughout the life cycle.
I think this particular case has been solved, but just in case: you can skip SimpleSchema validation by accessing the MongoDB native API via Collection#rawCollection():
Orders.rawCollection().insert(newOrder);
While this question is very old, in the meantime there is a better solution. You probably use SimpleSchema together with collection2. Collection2 has the ability to attach multiple schemas based on a selector and then validate against the correct schema.
https://github.com/Meteor-Community-Packages/meteor-collection2#attaching-multiple-schemas-to-the-same-collection
E.g. you could have a selector {state: 'finished'} and only apply the full schema to those documents, while having another selector, e.g. {state: 'in-progress'}, for unfinished orders with a schema where fields are optional.
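A sketch of that setup (the state field, its values, and the example schemas are illustrative):

import SimpleSchema from "simpl-schema";

// Relaxed schema while the order is being filled in.
const draftSchema = new SimpleSchema({
  state: String,
  sessionId: String,
  address: { type: String, optional: true }, // ...all other fields optional
});

// Full schema once the user submits.
const fullSchema = new SimpleSchema({
  state: String,
  sessionId: String,
  address: String, // ...all required fields enforced
});

Orders.attachSchema(draftSchema, { selector: { state: "in-progress" } });
Orders.attachSchema(fullSchema, { selector: { state: "finished" } });

// collection2 picks the schema based on the document's own selector field:
Orders.insert({ state: "in-progress", sessionId: "abc123" });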
