Calculating the size of a node in Firebase Realtime Database

I have this node in Realtime Database:
{
  fieldA: 123,
  field2: '456',
  field3: true,
}
How is the size calculated on an individual node basis?

If you're asking about the storage and bandwidth cost of this object, Firebase counts pretty much the JSON size of the node without whitespace. So for your data:
let node = {
  fieldA: 123,
  field2: '456',
  field3: true,
}
console.log(JSON.stringify(node).length);
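For the node above this logs 43, since JSON.stringify(node) is {"fieldA":123,"field2":"456","field3":true}. Note that the key names themselves are part of that size, so shorter keys make for slightly smaller nodes.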

Related

How To Add Subcollection To Document In Firestore?

Current Code
// items is an array:
Array [
  Object {
    "id": "KQJfb2RkT",
    "name": "first",
  },
  Object {
    "id": "1mvshyh9H",
    "name": "second",
  },
]
storeSale = async ({ items }) => {
  this.salesCollection.add({
    status: 1,
    created_at: new Date(),
    updated_at: new Date(),
  });
};
When adding a document to salesCollection, I want to add items as a subcollection of that document.
I would appreciate any advice.
I would like to save it like this:
[screenshot of the desired sales document with an items subcollection]
You can use a batched write, as follows:
// Get a new write batch
let batch = db.batch();

// Set the value of the parent document
const parentDocRef = db.collection("parentColl").doc();
batch.set(parentDocRef, {
  status: 1,
  created_at: new Date(),
  updated_at: new Date(),
});

// Set the value of a sub-collection doc
const parentDocId = parentDocRef.id;
const subCollectionDocRef = db.collection("parentColl").doc(parentDocId).collection("subColl").doc();
batch.set(subCollectionDocRef, {
  ...
});

// Commit the batch
await batch.commit();
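Applied to the code in the question, a minimal sketch could look like this (assuming db is the initialized Firestore instance, this.salesCollection is a CollectionReference, and each element of items should become its own document in an "items" subcollection):

storeSale = async ({ items }) => {
  const batch = db.batch();

  // Parent sale document with an auto-generated id
  const saleRef = this.salesCollection.doc();
  batch.set(saleRef, {
    status: 1,
    created_at: new Date(),
    updated_at: new Date(),
  });

  // One document per item in the "items" subcollection of the sale
  items.forEach((item) => {
    const itemRef = saleRef.collection("items").doc();
    batch.set(itemRef, item);
  });

  await batch.commit();
};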
One key point to note: from a technical perspective, a parent collection and the sub-collections of its documents are not related to each other at all.
Let's take an example: Imagine a doc1 document under the col1 collection
col1/doc1/
and another document subDoc1 under the subCol1 (sub-)collection
col1/doc1/subCol1/subDoc1
These two documents (and the two immediate parent collections, i.e. col1 and subCol1) just share a part of their path but nothing else.
One side effect of this is that if you delete a document, its sub-collection(s) still exist.

Firebase Realtime DB: Order query results by number of values for a key

I have a Firebase web Realtime DB with users, each of whom has a jobs attribute whose value is an object:
{
  userid1: {
    jobs: {
      guid1: {},
      guid2: {},
    },
  },
  userid2: {
    jobs: {
      guid1: {},
      guid2: {},
    },
  },
}
I want to query to get the n users with the most jobs. Is there an orderBy trick I can use to order the users by the number of values each user has in their jobs attribute?
I specifically don't want to store an integer count of each user's jobs, because I need to update users' jobs attribute as part of atomic multi-path updates that also change other user attributes, and I don't believe transactions (like incrementing/decrementing counters) can be part of those atomic updates.
Here's an example of the kind of atomic update I'm doing. Note I don't have the user that I'm modifying in memory when I run the following update:
firebase.database().ref('/').update({
  [`/users/${user.guid}/pizza`]: true,
  [`/users/${user.guid}/jobs/${job.guid}/scheduled`]: true,
})
Any suggestions on patterns that would work with this data would be hugely appreciated!
Realtime Database transactions run on a single node in the JSON tree, so it would be quite difficult to integrate the update of a jobsCounter node into your atomic multi-path update to several nodes (i.e. to /users/${user.guid}/pizza and /users/${user.guid}/jobs/${job.guid}/scheduled). You would need to run the transaction at the /users/${user.guid} level and calculate the counter value there, etc.
An easier approach is to use a Cloud Function to update a user's jobsCounter node each time there is a change to one of the jobs nodes that implies a change in the counter. In other words, if a job node is added or removed, the counter is updated; if an existing job node is only modified, the counter is not updated, since there was no change in the number of jobs.
const functions = require('firebase-functions');

exports.updateJobsCounter = functions.database.ref('/users/{userId}/jobs')
  .onWrite((change, context) => {
    if (!change.after.exists()) {
      // This is the case when no more jobs exist for this user
      const userJobsCounterRef = change.before.ref.parent.child('jobsCounter');
      return userJobsCounterRef.transaction(() => {
        return 0;
      });
    } else {
      if (!change.before.val()) {
        // This is the case when the first job is created
        const userJobsCounterRef = change.before.ref.parent.child('jobsCounter');
        return userJobsCounterRef.transaction(() => {
          return 1;
        });
      } else {
        const valObjBefore = change.before.val();
        const valObjAfter = change.after.val();
        const nbrJobsBefore = Object.keys(valObjBefore).length;
        const nbrJobsAfter = Object.keys(valObjAfter).length;
        if (nbrJobsBefore !== nbrJobsAfter) {
          // The number of jobs changed: update the jobsCounter node
          const userJobsCounterRef = change.after.ref.parent.child('jobsCounter');
          return userJobsCounterRef.transaction(() => {
            return nbrJobsAfter;
          });
        } else {
          // No need to update the jobsCounter node
          return null;
        }
      }
    }
  });
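With jobsCounter maintained by the function, the original question (the n users with the most jobs) becomes an ordered query. A minimal sketch, assuming the /users structure from the question and n = 5 (an ".indexOn": "jobsCounter" rule on /users is recommended so the ordering is done server-side):

firebase.database().ref('/users')
  .orderByChild('jobsCounter')
  .limitToLast(5) // the 5 users with the most jobs
  .once('value')
  .then((snapshot) => {
    // Children are returned in ascending order of jobsCounter
    snapshot.forEach((child) => {
      console.log(child.key, child.val().jobsCounter);
    });
  });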

Do I require a partition key when dealing with the CosmosDB emulator?

Trying to delete a CosmosDB document via DeleteDocumentAsync is giving me a Microsoft.Azure.Documents.DocumentClientException: Message: {"Errors":["Resource Not Found"]} no matter what I try.
I am using the CosmosDB local emulator with a single collection and a single record for now, so I haven't defined any partition key.
This is my document structure:
{
  "id": "a1032017-c131-4fe0-a045-1d342bc56410",
  "Code": "059058",
  "Key": "f9971f3a-9737-4da5-90df-2ab7f93ba679",
  "CreatedOn": "2019-09-30T15:50:53.0368614-04:00",
  "TTL": 1440,
  "PhoneNumber": "1112223333",
  "_rid": "35E3AOfSiUUBAAAAAAAAAA==",
  "_self": "dbs/35E3AA==/colls/35E3AOfSiUU=/docs/35E3AOfSiUUBAAAAAAAAAA==/",
  "_etag": "\"00000000-0000-0000-77c8-620aa5ca01d5\"",
  "_attachments": "attachments/",
  "_ts": 1569873059
}
Code to delete:
public async Task Delete<T>(T codeKeyPairModel) where T : CodeKeyPairModel
{
    var documentLink = UriFactory.CreateDocumentUri(cosmosDBId, collectionId, codeKeyPairModel.Id.ToString());
    var result = await cosmosDBClient.DeleteDocumentAsync(documentLink,
        new RequestOptions() { PartitionKey = new PartitionKey(Undefined.Value) });
}
documentLink value:
{dbs/CodeCheckerDB/colls/CodeKeyPair/docs/a1032017-c131-4fe0-a045-1d342bc56410}
Does the emulator require a partition key to be set even for smaller DBs? If so, how can I set one?
I did a test with your sample and it works for me. You can remove the PartitionKey setting because, as you said, your collection is a single (non-partitioned) collection; there is no need to point to any partition key.
My code:
DocumentClient documentClient = new DocumentClient(new Uri(endpointUrl), authorizationKey);
var documentLink = UriFactory.CreateDocumentUri(databaseId, collectionId, "a1032017-c131-4fe0-a045-1d342bc56410");
await documentClient.DeleteDocumentAsync(documentLink, null);

400 error when upsert using Cosmos SP

I'm trying to execute the below SP
function createMyDocument() {
  var collection = getContext().getCollection();
  var doc = {
    "someId": "123134444",
  };
  var options = {};
  options['PartitionKey'] = ["someId"];
  var isAccepted = collection.upsertDocument(collection.getSelfLink(), doc, options, function (error, resources, options) {
  });
}
and Cosmos keeps complaining that there's something wrong with the partition key:
{
  code: 400,
  body: '{"code":"BadRequest","message":"Message: {\\"Errors\\":[\\"PartitionKey extracted from document doesn\'t match the one specified in the header\\"]}"}'
}
Does anyone have any idea how to pass in the partition key in options so it gets past this validation?
Figured it out. The error was with how we call the stored proc.
How we were doing it
client.executeStoredProcedure('dbs/db1/colls/coll-1/sprocs/createMyDocument',
  {},
  {} // Here you have to pass in the partition key
);
How it has to be
client.executeStoredProcedure('dbs/db1/colls/coll-1/sprocs/createMyDocument',
  {},
  { "partitionKey": "43321" }
);
I think you misunderstand the meaning of the partitionKey property in the options.
For example, my container is created with "name" as the partition key. You can check your own collection's partition key in the same way.
And my documents as below :
{
  "id": "1",
  "name": "jay"
}
{
  "id": "2",
  "name": "jay2"
}
My partition key is 'name', so here I have two partitions: 'jay' and 'jay2'.
So, in your case you should set the partitionKey property to '123134444', not 'someId'.
More details about Cosmos DB partition keys.
Hope it helps you.
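Putting the two answers together, a sketch of the corrected flow might look like this (assuming the container's partition key path is /someId; adjust the names to your own container and client setup):

// Server side: no PartitionKey option is needed inside the stored procedure;
// the partition key value is read from the document body itself.
function createMyDocument() {
  var collection = getContext().getCollection();
  var doc = { someId: "123134444" };
  var accepted = collection.upsertDocument(collection.getSelfLink(), doc, function (error) {
    if (error) throw error;
  });
  if (!accepted) throw new Error("upsertDocument was not accepted");
}

// Client side: pass the partition key *value* (not the property name) in the options.
client.executeStoredProcedure(
  'dbs/db1/colls/coll-1/sprocs/createMyDocument',
  {},                            // stored procedure parameters
  { partitionKey: "123134444" }  // value of someId for the document being written
);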

Update Multiple Items with same Hash Key in DynamoDb

I have a DynamoDB table that stores users' videos.
It's structured like this:
{
  "userid": 324234234234234234,  // Hash key
  "videoid": 298374982364723648, // Range key
  "user": {
    "username": "mario"
  }
}
I want to update the username for all videos of a specific user. Is this possible with a simple update, or do I have to scan the complete table and update one item at a time?
var params = {
  TableName: DDB_TABLE_SCENE,
  Key: {
    userid: userid,
  },
  UpdateExpression: "SET username = :username",
  ExpressionAttributeValues: { ":username": username },
  ReturnValues: "ALL_NEW",
  ConditionExpression: 'attribute_exists (userid)'
};
docClient.update(params, function(err, data) {
  if (err) fn(err, null);
  else fn(err, data.Attributes.username);
});
I receive the following error, so I suppose the range key is necessary:
ValidationException: The provided key element does not match the schema
DynamoDB does not support write operations across multiple items (i.e. more than one item at a time). You will have to first scan or query the table, or otherwise build a list of all the items you'd like to update, and then update them one by one.
DynamoDB does provide a batching API, but that is still just a way to group writes into batches of 25 at a time; it's not a proxy for the kind of multi-item update you're trying to achieve.
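A sketch of that query-then-update approach, using the same DocumentClient (docClient), table, and variables as the question, and assuming the nested user.username attribute is the one to change (for large result sets the query may paginate via LastEvaluatedKey):

const queryParams = {
  TableName: DDB_TABLE_SCENE,
  KeyConditionExpression: "userid = :userid",
  ExpressionAttributeValues: { ":userid": userid },
  ProjectionExpression: "userid, videoid", // only the keys are needed for the updates
};

docClient.query(queryParams, function(err, data) {
  if (err) return fn(err, null);
  data.Items.forEach(function(item) {
    const updateParams = {
      TableName: DDB_TABLE_SCENE,
      Key: { userid: item.userid, videoid: item.videoid }, // both hash and range key
      UpdateExpression: "SET #user.username = :username",
      ExpressionAttributeNames: { "#user": "user" }, // "user" is a DynamoDB reserved word
      ExpressionAttributeValues: { ":username": username },
    };
    docClient.update(updateParams, function(updateErr) {
      if (updateErr) console.error(updateErr);
    });
  });
});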
