Understanding Cosmos DB query stats - azure-cosmosdb

Can someone tell me how to read the query stats in Cosmos DB? I know that each item in Cosmos DB has a hard limit of 2 MB; however, the retrieved document size here is more than 5 MB. So I am confused: is this the correct place to find out the size of an item? Because if it is, then what about the hard limit of 2 MB?

I think you have misunderstood the limits: the 2 MB hard limit is for each individual document that you store.
However, in this particular sample that you provided, the 5.7 MB is the total size of the 646 documents that your query retrieved, not the size of a single item. You can see the exact definition of each metric explained here.
There is also a Response Size Quota per Page of Results (the default is 4 MB): for large query results, Cosmos DB will paginate the results, and each page is limited to that quota.
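If you want to see those per-page numbers yourself, the SDK exposes them on each response. Here is a minimal sketch using the JavaScript/TypeScript SDK (@azure/cosmos); the endpoint, key, and database/container names are placeholders, and the exact query-metrics format can vary by SDK version:

```typescript
import { CosmosClient } from "@azure/cosmos";

// Placeholder credentials and names; substitute your own.
const client = new CosmosClient({ endpoint: "https://<account>.documents.azure.com", key: "<key>" });
const container = client.database("mydb").container("mycontainer");

async function runQuery() {
  const iterator = container.items.query("SELECT * FROM c", {
    maxItemCount: 100,          // page size; each page also respects the 4 MB response quota
    populateQueryMetrics: true, // ask the service to return query metrics
  });

  while (iterator.hasMoreResults()) {
    const page = await iterator.fetchNext();
    console.log(`items in page: ${page.resources?.length ?? 0}`);
    console.log(`request charge (RU): ${page.requestCharge}`);
    // queryMetrics includes retrievedDocumentCount / retrievedDocumentSize,
    // which is what the portal's "retrieved document size" stat reports.
    console.log(`query metrics: ${page.queryMetrics}`);
  }
}

runQuery().catch(console.error);
```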

Related

Firebase Error: Document Exceeds 1MB size limit. How to work around it?

I'm having this problem when trying to register a user in the database. The database template is a document specific to the registered user, which contains 3 arrays of objects.
I searched the internet a little about this, and people seemed to say that creating a subcollection would solve it, but I don't get how that would solve the problem or avoid it happening again in the future when these arrays grow bigger, since documents in subcollections are also limited to 1 MB each; it might even require making multiple fetches instead of only one, if I understood correctly.
I believed Firestore was a database. Isn't it one? How can I not store data in a database, being so limited? What's the logic behind Firestore doing that?
And if so, how do I get around it so it never happens again?
The 1 MB size limit applies to each individual document. Documents in subcollections are not counted toward it; each of them gets its own 1 MB limit (see the sketch after the links below).
Also see:
Are Cloud Firestore subcollections included in document size calculation
Does 1 mb size limit apply to a sub collection in inside a document in Firestore?
Does the size of subcollections included while calculating the document size and add to the limit of 1MB?
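To make the subcollection approach concrete, here is a minimal sketch using the Firebase Admin SDK in TypeScript; the registerUser function, the field names, and the "items" subcollection are all hypothetical:

```typescript
import { initializeApp } from "firebase-admin/app";
import { getFirestore } from "firebase-admin/firestore";

initializeApp();
const db = getFirestore();

// Hypothetical registration data: the arrays that used to live inside the
// user document are written as subcollection documents instead.
async function registerUser(uid: string, profile: { name: string }, items: { title: string }[]) {
  const userRef = db.collection("users").doc(uid);
  const batch = db.batch();

  // Keep only small, fixed-size fields on the user document itself.
  batch.set(userRef, profile);

  // Each array element becomes its own document; none of these bytes
  // count toward the parent document's 1 MB limit.
  for (const item of items) {
    batch.set(userRef.collection("items").doc(), item);
  }

  await batch.commit(); // note: a batch is limited to 500 writes
}
```

The trade-off the question mentions is real: reading the data back now takes one query per subcollection instead of a single document read, which is the price of removing the size ceiling.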

What is the maximum size of a document in the Realtime Database in Firebase?

I am working on a more complicated database where I want to store lots of data. The issue with Firestore is the 1 MB limit per document. I am splitting my data into different documents, but according to my calculations the size will still be bigger than the limit. Yet I cannot find the limit for the Realtime Database, and I want to be sure before switching to it; a single document in some cases could hit 6-9 MB when scaling big. At first I wanted to go with MongoDB, but I wanted to try the Google Cloud services. Any idea if the size limit is the same for both the Realtime Database and Firestore?
Documents are part of Firestore (and have a 1 MB max size limit each), while the Realtime Database is just one large JSON tree. You can find the limits of the Realtime Database in the documentation.
| Property | Limit | Description |
| --- | --- | --- |
| Maximum depth of child nodes | 32 | Each path in your data tree must be less than 32 levels deep. |
| Length of a key | 768 bytes | Keys are UTF-8 encoded and can't contain new lines or any of the following characters: . $ # [ ] / or any ASCII control characters (0x00 - 0x1F and 0x7F). |
| Maximum size of a string | 10 MB | Data is UTF-8 encoded. |
There isn't a limit on the number of child nodes you can have, but keep the max depth in mind. It might also be best if you could share a sample of what currently takes over 6 MB in Firestore, so the database can perhaps be restructured.
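If you do switch to the Realtime Database, a shallow, fanned-out structure keeps you well away from the 32-level depth limit. A minimal sketch with the Firebase Admin SDK; the database URL, the paths, and the saveRecord helper are hypothetical:

```typescript
import { initializeApp } from "firebase-admin/app";
import { getDatabase } from "firebase-admin/database";

// The database URL is a placeholder.
initializeApp({ databaseURL: "https://<project>-default-rtdb.firebaseio.com" });
const db = getDatabase();

// Fan-out: one atomic multi-path update spreads a large record across
// several shallow top-level paths instead of one deep, heavily nested node.
async function saveRecord(id: string, meta: object, items: object[]) {
  const updates: Record<string, unknown> = {};
  updates[`/recordMeta/${id}`] = meta;
  items.forEach((item, i) => {
    updates[`/recordItems/${id}/${i}`] = item;
  });
  await db.ref().update(updates);
}
```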

How to handle Firestore max document size?

According to Firestore, the max document size is 1 MiB. In my app, every user has their own document. So can every user have a maximum storage size of 1 MiB, or do I understand this wrong?
Because in my app the user can input a lot of data, I fear that this storage is not enough. How do I handle this problem?
So can every user have a maximum storage size of 1 MiB or do I understand this wrong?
Yes, that's correct. You are limited to 1 MiB for each document.
Because in my app the user can input a lot of data, I fear that this storage is not enough
If you are afraid of reaching the limit, you should consider storing the data in other collections as well. In your case, I would create a collection of "plans" as well as one of "todos". That way, you aren't limited in the number of documents you can add to a collection.
How to handle this problem?
For Android, there is a library called FirestoreDocument-Android, which can help you check against the maximum quota of 1 MiB (1,048,576 bytes).
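If you're not on Android, you can approximate the same check yourself. The sketch below is a rough TypeScript estimator based on the size-calculation rules described in the Firestore documentation (strings cost their UTF-8 byte length plus 1, numbers 8 bytes, booleans and nulls 1 byte, plus 32 bytes of fixed per-document overhead); treat it as an approximation, not an exact accounting, and note that it ignores the document name and types like timestamps:

```typescript
// Rough estimate of a Firestore document's stored size, following the
// "Storage size calculations" rules from the documentation.
function estimateFieldSize(value: unknown): number {
  if (value === null) return 1;
  if (typeof value === "boolean") return 1;
  if (typeof value === "number") return 8;
  if (typeof value === "string") return Buffer.byteLength(value, "utf8") + 1;
  if (Array.isArray(value)) {
    return value.reduce((sum: number, v) => sum + estimateFieldSize(v), 0);
  }
  if (typeof value === "object") {
    // Maps: each entry costs its key (UTF-8 bytes + 1) plus its value.
    return Object.entries(value as Record<string, unknown>).reduce(
      (sum, [k, v]) => sum + Buffer.byteLength(k, "utf8") + 1 + estimateFieldSize(v),
      0
    );
  }
  return 0; // timestamps, references, etc. are not handled in this sketch
}

function estimateDocumentSize(data: Record<string, unknown>): number {
  return 32 + estimateFieldSize(data); // 32 bytes of fixed per-document overhead
}

// Usage: warn before writing if the estimate approaches the 1 MiB quota.
const LIMIT = 1_048_576;
const doc = { name: "Ada", todos: ["buy milk", "write code"] };
if (estimateDocumentSize(doc) > LIMIT * 0.9) {
  console.warn("Document is close to the 1 MiB limit; consider splitting it.");
}
```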

What is the maximum size of document I can save in firebase collection?

I'm trying to save a user profile, which contains 7-8 text fields with up to 500 words each, plus three pictures. I'm converting the images to Base64 URLs and storing the URLs in the respective fields. When the user tries to save the profile, it shows an error that the payload size exceeds the allowed limit. The documentation says that the maximum size of a document is 1 MB, which is too low in my case. Is there any way to increase the size, or any other way around this?
Is there any way to increase the size?
AFAIK there is no way to increase the 1 MiB limit on the size of a document.
Any other way around?
In your case you could save the images in Cloud Storage and only save their URLs in the Firestore document. You'll find the docs for Cloud Storage here.
If this is not sufficient, you will need to distribute the data for one user profile among several Firestore documents. To identify them, you could either group them in a subcollection or save a common unique ID in a dedicated field (e.g. the document ID of the first one).
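A minimal sketch of the Cloud Storage pattern with the Firebase Admin SDK in TypeScript; the bucket name, object path, field names, and the uploadProfileImage helper are all hypothetical:

```typescript
import { initializeApp } from "firebase-admin/app";
import { getFirestore } from "firebase-admin/firestore";
import { getStorage } from "firebase-admin/storage";

initializeApp({ storageBucket: "<project>.appspot.com" }); // placeholder bucket

// Upload the raw image bytes to Cloud Storage and store only the object's
// path in the (small) Firestore profile document.
async function uploadProfileImage(uid: string, imageBytes: Buffer): Promise<void> {
  const objectPath = `profiles/${uid}/photo.jpg`;
  await getStorage().bucket().file(objectPath).save(imageBytes, {
    contentType: "image/jpeg",
  });

  await getFirestore().collection("profiles").doc(uid).set(
    { photoPath: objectPath }, // a short string instead of megabytes of Base64
    { merge: true }
  );
}
```

Storing the path (or a download URL) keeps each profile document tiny, and the images themselves are no longer subject to Firestore's 1 MiB limit at all.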

Huge amount of RUs to write documents of 400 KB - 600 KB on Azure Cosmos DB

This is the log of my Azure Cosmos DB for the last write operations:
Is it possible that write operations on documents with sizes between 400 KB and 600 KB have these costs?
Here is my document (a list of coordinates):
At the beginning I thought it was a hot-partition problem, but afterwards I understood (I hope) that it is a problem with loading documents ranging in size from 400 KB to 600 KB. I wanted to understand whether there is something wrong in the database settings, in the indexing policy, or elsewhere, because it seems anomalous to me that about 3000 RUs are used to load a 400 KB JSON document when the documentation indicates that loading a file of around 100 KB takes about 50 RUs. The document to be loaded is a road route, so I would not know how else to model it.
This is my indexing policy:
Thanks to everybody. I spent months on this problem without finding a solution...
It's hard to know for sure what the expected RU cost should be to ingest a 400 KB - 600 KB item. The cost of this operation will depend on the size of the item, your indexing policy, and the structure of the item itself. Greater hierarchy depth is more expensive to index.
You can get a good estimate for what the cost for a single write for an item will be using the Cosmos Capacity Calculator. In the calculator, click Sign-In, cut/paste your index policy, upload a sample document, reduce the writes per second to 1, then click calculate. This should give you the cost to insert a single item.
One thing to note here: if you have frequent updates to a small number of properties, I would recommend you split the document into two, one with static properties and another that is frequently updated. This can drastically reduce the cost of updates on large documents.
Hope this is helpful.
You can also pull the RU cost for a write using the SDK.
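For example, with the JavaScript/TypeScript SDK (@azure/cosmos) the request charge comes back on every item response; the endpoint, key, container name, and document shape below are placeholders:

```typescript
import { CosmosClient } from "@azure/cosmos";

const client = new CosmosClient({ endpoint: "https://<account>.documents.azure.com", key: "<key>" });
const container = client.database("mydb").container("routes");

async function writeAndLogCharge() {
  // A stand-in for the real road-route document (a list of coordinates).
  const route = { id: "route-1", pk: "route-1", coordinates: [[9.19, 45.46], [9.20, 45.47]] };

  const response = await container.items.create(route);
  // requestCharge is the exact RU cost the service billed for this write.
  console.log(`write cost: ${response.requestCharge} RU`);
}

writeAndLogCharge().catch(console.error);
```

Comparing this number before and after an indexing-policy change is a quick way to confirm whether the policy is what is driving the 3000 RU writes.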
Check storage consumed
To check the storage consumption of an Azure Cosmos container, you can run a HEAD or GET request on the container and inspect the x-ms-resource-quota and x-ms-resource-usage headers. Alternatively, when working with the .NET SDK, you can use the DocumentSizeQuota and DocumentSizeUsage properties to get the storage consumed.
Link.
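A sketch of reading those headers with the JavaScript SDK; note that the populateQuotaInfo option and the exact header names are assumptions worth verifying against your SDK version:

```typescript
import { CosmosClient } from "@azure/cosmos";

const client = new CosmosClient({ endpoint: "https://<account>.documents.azure.com", key: "<key>" });

async function checkStorage() {
  const response = await client
    .database("mydb")
    .container("mycontainer")
    .read({ populateQuotaInfo: true }); // ask the service to include quota info

  // The raw response headers carry the quota and usage figures.
  console.log("quota:", response.headers["x-ms-resource-quota"]);
  console.log("usage:", response.headers["x-ms-resource-usage"]);
}

checkStorage().catch(console.error);
```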
