In the DynamoDB documentation it is stated that existing items cannot be updated with batch writing. However, when I try it, the new items replace the existing ones. How can I prevent it from updating items that already exist?
As stated in the documentation, if you re-put an item it replaces the old one.
Update item adds/changes attributes but doesn't remove the other ones.
So basically what you are doing is replacing items and not updating them.
With batch write you can't put conditions on individual items, so you can't prevent it from replacing them.
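That said, if you need writes that refuse to overwrite, the common workaround is to drop down to individual conditional puts instead of a batch. A minimal sketch, assuming the AWS SDK for JavaScript v3; the table name ("MyTable") and partition-key name (pk) are placeholders, not from the question:

import { DynamoDBClient, PutItemCommand, ConditionalCheckFailedException } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});

// Write the item only if no item with this partition key exists yet.
async function putIfAbsent() {
  try {
    await client.send(new PutItemCommand({
      TableName: "MyTable",
      Item: { pk: { S: "item-1" }, payload: { S: "hello" } },
      ConditionExpression: "attribute_not_exists(pk)",
    }));
  } catch (err) {
    if (err instanceof ConditionalCheckFailedException) {
      // The item already exists; the write was rejected instead of replacing it.
      return;
    }
    throw err;
  }
}

The trade-off is one request per item rather than up to 25 per batch, which is exactly why the condition can't be expressed through BatchWriteItem.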
I'm trying to delete a specific value from a Firebase Realtime Database, but I don't know how to do it because I don't know how to save or find the key of the child, which is automatically generated.
As you can see in the picture, I've only managed to remove all the children from the first key with
FirebaseDatabase.getInstance().reference.child("Comentarios").removeValue()
But I need to delete just by the child creadoPor
Is there any way of skipping an unnamed child?
Since you know the "grandparent" key of the data and the value of one of the node's properties, you can use a query to find the nodes that match that value:
FirebaseDatabase.instance
.ref("Comentarios")
.child("-NGi7xP...")
.orderByChild("creadoPor")
.equalTo("R7lji3...")
When you get the DataSnapshot from the query, you'll need to loop over its children as shown in the documentation on listening for value events. Even when there's only one result, you'll get a list of one child node and thus will need to loop over them.
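For completeness, here is a minimal sketch of that query-then-loop deletion using the Firebase web SDK (the same pattern applies in Kotlin). The truncated key and uid are placeholders copied from the snippet above:

import { getDatabase, ref, query, orderByChild, equalTo, get, remove } from "firebase/database";

const db = getDatabase();

// Find the children under the known parent whose creadoPor matches, then delete each match.
async function deleteByCreadoPor() {
  const q = query(
    ref(db, "Comentarios/-NGi7xP..."), // placeholder parent key from the question
    orderByChild("creadoPor"),
    equalTo("R7lji3...") // placeholder uid from the question
  );
  const snapshot = await get(q);
  const deletions: Promise<void>[] = [];
  snapshot.forEach((child) => {
    // child.ref points at the matching node, so removing it deletes just that comment.
    deletions.push(remove(child.ref));
  });
  await Promise.all(deletions);
}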
I am successfully using DataStore. However, I am wondering how to do the following:
I have set maxRecordsToSync to 1000 items. I can get an item directly from DynamoDB (one that is not in the local database, let's say item number 10,000) using the DynamoDBClient plus the QueryCommand and GetItemCommand.
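For reference, a minimal sketch of that direct read, assuming the AWS SDK for JavaScript v3; the table name and key shape are placeholders:

import { DynamoDBClient, GetItemCommand } from "@aws-sdk/client-dynamodb";
import { unmarshall } from "@aws-sdk/util-dynamodb";

const client = new DynamoDBClient({});

// Fetch an item that may be outside the locally synced window.
async function getRemoteItem(id: string) {
  const { Item } = await client.send(new GetItemCommand({
    TableName: "TodoTable", // placeholder table name
    Key: { id: { S: id } },
  }));
  return Item ? unmarshall(Item) : undefined;
}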
My problem is how to update this item. I could probably do it using the DynamoDBClient and the PutItemCommand; however, I have read somewhere that I should not do this, as it would not update the version number and would therefore cause problems with syncing.
My question is therefore: what is the best practice for updating an item retrieved directly from DynamoDB using the DynamoDBClient? I am assuming that I cannot use DataStore.save() since the item is not in the local database?
Thanks
I have a use case where DynamoDB is running in production and I need to add a new column IDUpdatedAt which will also be serving as a sort key for one of the GSIs.
I tried it out in a test environment where my application adds the new rows with IDUpdatedAt, and it works fine, but what about the existing rows? How do I add the values for those?
Also, new rows will not be added without IDUpdatedAt, but how will search be impacted for the older rows?
PS: IDUpdatedAt is being used as a filter in the application, i.e., user can search for specific ID and can get results sorted by date. That's why IDUpdatedAt is also a part of GSI (sort key).
Please help.
You've got the right idea by adding the field to new items. After all, DynamoDB does not enforce a particular schema outside of the primary key.
This also happens to be a very useful feature, especially when defining a GSI on that attribute; if the attribute exists on the item, it ends up in the index! For example, imagine modeling an email inbox in DDB where each item represents an email. You could include an attribute 'is_read' and define a GSI using that attribute.
If the 'is_read' attribute exists on the item, it's in the index. Otherwise, it's not. A cool way to use GSIs to implement filtering.
Pretty neat stuff!
However, there is no way to retroactively update all items with a new attribute other than manually updating each item (or in batches). The equivalent in SQL databases is defining a new column. Unfortunately, an analogous operation in DDB does not exist.
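In practice that backfill is a scan-and-update loop. A rough sketch, assuming the AWS SDK for JavaScript v3; the table name, key name (pk), and the id/updatedAt attributes used to derive IDUpdatedAt are all assumptions:

import { DynamoDBClient, ScanCommand, UpdateItemCommand } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});

// Walk the whole table page by page and add IDUpdatedAt to items that lack it.
async function backfillIDUpdatedAt() {
  let startKey: Record<string, any> | undefined;
  do {
    const page = await client.send(new ScanCommand({
      TableName: "MyTable", // placeholder
      FilterExpression: "attribute_not_exists(IDUpdatedAt)",
      ExclusiveStartKey: startKey,
    }));
    for (const item of page.Items ?? []) {
      await client.send(new UpdateItemCommand({
        TableName: "MyTable",
        Key: { pk: item.pk }, // placeholder key name
        UpdateExpression: "SET IDUpdatedAt = :v",
        // Hypothetical composite value: the item's ID plus its last-update timestamp.
        ExpressionAttributeValues: { ":v": { S: `${item.id?.S}#${item.updatedAt?.S}` } },
      }));
    }
    startKey = page.LastEvaluatedKey;
  } while (startKey);
}

Until the backfill finishes, the older rows simply won't appear in the GSI, since items missing the sort-key attribute are excluded from the index.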
I need to increment or decrement the field maxQty; the data structure is shown in the images below.
The image with the red mark shows the field, and I added another image to make the data structure clear.
Is there any way to do that?
According to the official documentation regarding updating array elements:
If your document contains an array field, you can use arrayUnion() and arrayRemove() to add and remove elements.
Unfortunately, arrayUnion() does not apply to your use case, as your businessCard array contains objects and not strings. There are two options: one would be to read the entire array, increase maxQty, and then write the document back to the server. The second would be to update the document with the help of a Map, manually copying values into it for each of the fields you want to change.
Please also note that the update operation is not compatible with the automatic field mapping that occurs with Java POJO objects; you can only use Map objects.
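A minimal sketch of the first option (read, modify, write back), shown here with the Firebase web SDK for brevity; on Android the same flow applies, writing a Map back as noted above. The document path and selecting the element by index are assumptions:

import { getFirestore, doc, getDoc, updateDoc } from "firebase/firestore";

const db = getFirestore();

// Read the whole businessCard array, bump maxQty on one element, write the array back.
async function incrementMaxQty(docPath: string, index: number, delta: number) {
  const ref = doc(db, docPath);
  const snap = await getDoc(ref);
  if (!snap.exists()) return;

  const cards: any[] = snap.data().businessCard ?? [];
  cards[index].maxQty = (cards[index].maxQty ?? 0) + delta;

  // Writing the modified array replaces the stored array wholesale.
  await updateDoc(ref, { businessCard: cards });
}

If several clients can increment concurrently, wrap the read and the write in a transaction so updates don't clobber each other.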
I need to query a collection and return all documents that are new or updated since the last query. The collection is partitioned by userId. I am looking for a value that I can use (or create and use) that would help facilitate this query. I considered using _ts:
SELECT * FROM collection WHERE userId=[some-user-id] AND _ts > [some-value]
The problem with _ts is that it is not granular enough and the query could miss updates made in the same second by another client.
In SQL Server I could accomplish this using an IDENTITY column in another table. Let's call that table version. In a transaction I would create a new row in the version table, then do the updates to the other table (including updating its version column with the new value). To query for new and updated rows I would use a query like this:
SELECT * FROM table WHERE userId=[some-user-id] and version > [some-value]
How could I do something like this in Cosmos DB? The Change Feed seems like the right option, but without the ability to query the Change Feed, I'm not sure how I would go about this.
In case it matters, the (web/mobile) clients connect to data in Cosmos DB via a web api. I have control of the entire stack - from client to back-end.
As stated in this link:
Today, you see all operations in the change feed. The functionality where you can control the change feed for specific operations, such as updates only and not inserts, is not yet available. You can add a "soft marker" on the item for updates and filter based on that when processing items in the change feed. Currently the change feed doesn't log deletes. Similar to the previous example, you can add a soft marker on the items that are being deleted; for example, you can add an attribute called "deleted" to the item, set it to "true", and set a TTL on the item so that it is automatically deleted. You can read the change feed for historic items, for example, items that were added five years ago. If the item is not deleted, you can read the change feed as far back as the origin of your container.
So the change feed by itself does not satisfy your requirements.
My idea:
Use an Azure Function with the Cosmos DB Trigger to collect all the operations on your specific Cosmos collection. Follow this document to configure the Azure Function's input as Cosmos DB, then follow this document to configure the output as Azure Queue Storage.
Get the ids of the changed items and send them into queue storage as messages. When you want to query the changed items, just read the messages from the queue, consume them at a specific unit of time, and after that clear the entire queue. No items will be missed.
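A rough sketch of such a function, assuming the Node.js programming model for Azure Functions; the binding names (documents, outputQueue) are assumptions and must match your function.json:

import { AzureFunction, Context } from "@azure/functions";

// Fires for every batch of inserts/updates on the monitored Cosmos container
// and forwards one queue message per changed document id.
const cosmosTrigger: AzureFunction = async function (context: Context, documents: any[]): Promise<void> {
  if (documents && documents.length > 0) {
    context.bindings.outputQueue = documents.map((d) => JSON.stringify({ id: d.id }));
  }
};

export default cosmosTrigger;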
With your approach, you can get the added/updated documents and save a reference value (the _ts and id fields) somewhere (like a blob):
SELECT * FROM collection WHERE userId=[some-user-id] AND _ts > [some-value] and id !='guid' order by _ts desc
This is similar to the approach we use to read data from Event Hubs and store checkpointing information (epoch number, sequence number, and offset value) in a blob, where at any time only one function can take a lease on that blob.
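A sketch of that checkpoint-style polling, assuming the JavaScript SDK (@azure/cosmos); loadCheckpoint/saveCheckpoint are hypothetical helpers backed by blob storage, and the database/container names are placeholders:

import { CosmosClient } from "@azure/cosmos";

// Hypothetical helpers that persist the last-seen (_ts, id) pair, e.g. in a blob.
declare function loadCheckpoint(userId: string): Promise<{ ts: number; id: string }>;
declare function saveCheckpoint(userId: string, cp: { ts: number; id: string }): Promise<void>;

const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING!);
const container = client.database("mydb").container("mycollection");

async function fetchChanges(userId: string) {
  const cp = await loadCheckpoint(userId);
  const { resources } = await container.items.query({
    query: "SELECT * FROM c WHERE c.userId = @userId AND c._ts > @ts AND c.id != @id ORDER BY c._ts DESC",
    parameters: [
      { name: "@userId", value: userId },
      { name: "@ts", value: cp.ts },
      { name: "@id", value: cp.id },
    ],
  }).fetchAll();
  if (resources.length > 0) {
    // Newest first, so the first row becomes the next checkpoint.
    await saveCheckpoint(userId, { ts: resources[0]._ts, id: resources[0].id });
  }
  return resources;
}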
If you go with the Change Feed, you can create a listener (a Function or a Job) to listen for all adds/updates on the collection and store those values in another collection, adding an identity/version field to every document as you save it. This approach may increase your Cosmos DB bill.
This is what the transaction consistency levels are for: https://learn.microsoft.com/en-us/azure/cosmos-db/consistency-levels
Choose strong consistency and your queries will always return the latest write.
Strong: Strong consistency offers a linearizability guarantee. The reads are guaranteed to return the most recent committed version of an item. A client never sees an uncommitted or partial write. Users are always guaranteed to read the latest committed write.