Amplify DataStore: how to update item not in local database - amazon-dynamodb

I am successfully using DataStore. However, I am wondering how to do the following:
I have set maxRecordsToSync to 1000 items. I can get an item directly from DynamoDB (one that is not in the local database, let's say item number 10,000) using the DynamoDBClient plus the QueryCommand and GetItemCommand.
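For reference, this is roughly how I am reading such an item directly (a minimal sketch with the AWS SDK for JavaScript v3; the region, table name, and key are placeholders):

import { DynamoDBClient, GetItemCommand } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({ region: "eu-west-1" });

// Read one item straight from the backing table, bypassing DataStore's local store.
const { Item } = await client.send(new GetItemCommand({
    TableName: "Todo-abc123-dev",          // placeholder: the Amplify-generated table
    Key: { id: { S: "some-item-id" } },
}));
// Item carries DataStore's sync metadata: _version, _lastChangedAt, _deleted.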
My problem is how to update this item. I could probably do it using the DynamoDBClient and the PutItemCommand; however, I have read that I should not do this, because it would not update the version number and would therefore cause problems with syncing.
My question is therefore: what is the best practice for updating an item retrieved directly from DynamoDB using the DynamoDBClient? I am assuming that I cannot use DataStore.save(), as the item is not in the local database?
Thanks

Related

How to delete an entire Cosmos DB partition using Spring Boot

What's the easiest way to delete an entire partition in Cosmos DB, assuming I'm using Spring Boot with the SQL API?
I have a class annotated with @Repository that extends CosmosRepository, and I want to delete every item from a particular partition.
I know that with CosmosClientBuilder I could do something like:
cosmosDbClient.getDatabase(dataBaseName)
    .getContainer(container)
    .deleteAllItemsByPartitionKey(
        PartitionKey("partitionKey0001"),
        CosmosItemRequestOptions())
Is it possible to access the container from the repository?
I don't want to use stored procedures for something that should be easy to do.
Thank you
Microsoft has been rolling this feature out for about a year now.
It is still in preview, and you will need to opt in before using it.

How can I query for all new and updated documents since last query?

I need to query a collection and return all documents that are new or updated since the last query. The collection is partitioned by userId. I am looking for a value that I can use (or create and use) that would help facilitate this query. I considered using _ts:
SELECT * FROM collection WHERE userId=[some-user-id] AND _ts > [some-value]
The problem with _ts is that it is not granular enough and the query could miss updates made in the same second by another client.
In SQL Server I could accomplish this using an IDENTITY column in another table. Let's call the table version. In a transaction I would create a new row in the version table, then do the updates to the other table (including updating its version column with the new value). To query for new and updated rows I would use a query like this:
SELECT * FROM table WHERE userId=[some-user-id] and version > [some-value]
How could I do something like this in Cosmos DB? The Change Feed seems like the right option, but without the ability to query the Change Feed, I'm not sure how I would go about this.
In case it matters, the (web/mobile) clients connect to data in Cosmos DB via a web api. I have control of the entire stack - from client to back-end.
As per the statements in this link:
Today, you see all operations in the change feed. The functionality where you can control change feed, for specific operations such as updates only and not inserts is not yet available. You can add a "soft marker" on the item for updates and filter based on that when processing items in the change feed. Currently change feed doesn't log deletes. Similar to the previous example, you can add a soft marker on the items that are being deleted, for example, you can add an attribute in the item called "deleted" and set it to "true" and set a TTL on the item, so that it can be automatically deleted. You can read the change feed for historic items, for example, items that were added five years ago. If the item is not deleted you can read the change feed as far as the origin of your container.
So the change feed, on its own, does not meet your requirements.
My idea:
Use an Azure Functions Cosmos DB trigger to collect all the operations on your specific Cosmos collection. Follow this document to configure Cosmos DB as the function's input, then follow this document to configure Azure Queue Storage as its output.
Get the ids of the changed items and send them to queue storage as messages. When you want to query the changed items, consume the messages from the queue at a specific interval and then clear the entire queue. No items will be missed.
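A minimal sketch of such a function (JavaScript, Azure Functions v3 programming model; the binding names documents and outputQueue are placeholders defined in function.json):

// index.js - fires for every batch of changes on the monitored Cosmos collection
module.exports = async function (context, documents) {
    if (documents && documents.length > 0) {
        // Send one queue message per changed item, carrying just its id.
        context.bindings.outputQueue = documents.map(doc => doc.id);
    }
};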
With your approach, you can get added/updated documents and save a reference value (the _ts and id fields) somewhere (like a blob):
SELECT * FROM collection WHERE userId=[some-user-id] AND _ts > [some-value] and id !='guid' order by _ts desc
This is similar to the approach we use to read data from Event Hubs and store checkpointing information (epoch number, sequence number, and offset) in a blob; at any time, only one function can hold a lease on that blob.
If you go with the change feed, you can create a listener (a Function or a job) for all add/update operations on the collection and store those values in another collection, adding an identity/version field to every document as you save it. This approach may increase your Cosmos DB bill.
This is what the transaction consistency levels are for: https://learn.microsoft.com/en-us/azure/cosmos-db/consistency-levels
Choose strong consistency and your queries will always return the latest write.
Strong: Strong consistency offers a linearizability guarantee. The reads are guaranteed to return the most recent committed version of an item. A client never sees an uncommitted or partial write. Users are always guaranteed to read the latest committed write.
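For example, the JavaScript SDK lets you request the level per client (a sketch assuming @azure/cosmos; the endpoint and key are placeholders, and the account itself must be provisioned to allow strong consistency):

const { CosmosClient } = require("@azure/cosmos");

// Request strong consistency so reads always return the latest committed write.
const client = new CosmosClient({
    endpoint: "https://your-account.documents.azure.com",   // placeholder
    key: "your-account-key",                                // placeholder
    consistencyLevel: "Strong",
});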

How can you create a transaction/batch write between multiple Firestore instances?

Firebase allows having multiple projects in a single application.
// Initialize another app with a different config
var secondary = firebase.initializeApp(secondaryAppConfig, "secondary");
// Retrieve the database.
var secondaryDatabase = secondary.database();
Example:
Project 1 has my users collection; Project 2 has my friends collection (suppose there's a reason for that). When I add a new friend in the Project 2 database, I want to increment the friendsCount in the user document in Project 1. For this reason, I want to create a transaction/batch write to ensure consistency in the data.
How can I achieve this? Can I create a transaction or a batch write between different Firestore instances?
No, you cannot use the database transaction feature across multiple databases.
If absolutely required, I'd probably instead create a custom locking feature. From Wikipedia:
To allow several users to edit a database table at the same time and also prevent inconsistencies created by unrestricted access, a single record can be locked when retrieved for editing or updating. Anyone attempting to retrieve the same record for editing is denied write access because of the lock (although, depending on the implementation, they may be able to view the record without editing it). Once the record is saved or edits are canceled, the lock is released. Records can never be saved so as to overwrite other changes, preserving data integrity.
In database management theory, locking is used to implement isolation among multiple database users. This is the "I" in the acronym ACID.
Source: https://en.wikipedia.org/wiki/Record_locking
It's been three years since the question, I know, but since I needed the same thing, I found a working solution for performing a double (or even n-way) transaction. You have to nest the transactions, like this:
db1.runTransaction(t1 =>
    db2.runTransaction(async t2 => {
        // Perform the writes on both databases inside the innermost body.
        t1.set(.....)
        t2.update(.....)
        // etc.
    })
).then(...).catch(...)
Since the error is propagated through the nested promises, it is safe to execute the double transaction this way: a failure in any one of the databases results in an error in all of them.

Firebase Database Unity differentiating between old and new data

I am building a chat engine using Firebase in Unity. I want to differentiate between the existing data and all the new data that gets added to the database. The Firebase web SDK has a once method that helps differentiate between old and new data; is anyone aware of something similar in Unity?
There is no direct way to do this. One workaround is to add a timestamp value to every entry in the database; each time you subscribe for new data, use OrderByValue/OrderByKey together with StartAt.
Initially the StartAt value will be 0, but whenever a child is added you can update the StartAt value to its timestamp, so that the next time the client subscribes to ChildAdded it only receives data after the last child, as in the sketch below.
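A sketch of the idea using the web SDK (the Unity C# API mirrors these query methods; the messages path and timestamp field are placeholders, and orderByChild is used here because the timestamp is stored as a field on each entry):

var lastSeen = 0; // timestamp of the newest entry already loaded; 0 on the first run

firebase.database()
    .ref("messages")
    .orderByChild("timestamp")
    .startAt(lastSeen + 1)
    .on("child_added", function (snapshot) {
        // Fires only for entries written after lastSeen.
        lastSeen = snapshot.val().timestamp;
    });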

DynamoDB batch write updates existing items

In this DynamoDB documentation it is stated that existing items cannot be updated with batch writing. However, when I try it, it replaces existing items with the new ones. How can I prevent it from updating items that already exist?
As stated in the documentation, if you re-put an item, it replaces the old one.
UpdateItem adds/changes attributes but doesn't remove other ones.
So basically what you are doing is replacing items, not updating them.
With batch write you can't put conditions on individual items, thus you can't prevent it from overwriting existing ones.
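If you need writes that skip existing items, one alternative is to drop down to individual PutItem calls with a condition expression (a sketch using the AWS SDK for JavaScript v3; the table and key names are placeholders):

import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});

try {
    await client.send(new PutItemCommand({
        TableName: "MyTable",                              // placeholder
        Item: { id: { S: "item-1" }, data: { S: "..." } },
        // Only write if no item with this key exists yet.
        ConditionExpression: "attribute_not_exists(id)",
    }));
} catch (err) {
    if (err.name === "ConditionalCheckFailedException") {
        // Item already exists - left untouched instead of being replaced.
    } else {
        throw err;
    }
}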
