I'm trying to implement a trigger that will take all incoming changes and write them to my other Cosmos DB.
I'm aware of the Change Feed and how it works, but for now I'm looking for a way to write the changes using container triggers.
What I can't find is how to create a connection to another database. Is that even possible inside a container trigger?
Triggers in Cosmos DB, like stored procedures, are scoped to a single logical partition and run inside the database engine, so they can't open a connection to another database or container. Triggers also have to be invoked explicitly on each request; they don't fire automatically on every write.
To copy data from one container to another you should use the Change Feed instead.
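For example, a low-ceremony way to consume the Change Feed is an Azure Function with a Cosmos DB trigger. Here's a rough TypeScript sketch; the env var, database, and container names are placeholders, and it assumes a cosmosDBTrigger binding in function.json pointing at the source container plus a leases container:

```typescript
import { AzureFunction, Context } from "@azure/functions";
import { Container, CosmosClient } from "@azure/cosmos";

// All names below (env var, database, container) are placeholders.
const client = new CosmosClient(process.env["TargetCosmosConnection"] ?? "");
const target: Container = client.database("TargetDb").container("TargetContainer");

// The trigger hands the function batches of inserted/updated documents
// from the source container's change feed.
const copyChanges: AzureFunction = async (context: Context, documents: any[]): Promise<void> => {
  for (const doc of documents) {
    // Upsert keeps the copy idempotent if a batch is redelivered.
    await target.items.upsert(doc);
  }
  context.log(`Copied ${documents.length} changed document(s) to the target container.`);
};

export default copyChanges;
```

If you're on .NET, the Change Feed Processor in the Cosmos DB SDK gives you the same loop with more control over leases and batching.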
I need to access Cosmos DB data through a middleware API that gives access to SQL queries but not the change feed (i.e. DocumentClient.CreateDocumentQuery() but not DocumentClient.CreateDocumentChangeFeedQuery()). Is it possible to query the change feed using regular SQL queries?
I was thinking about filtering documents on a recent _ts, but I'm not sure timestamps are guaranteed to be monotonically increasing across an entire collection, given potential clock drift across the VMs Cosmos DB runs on.
You cannot query the Change Feed using a SQL query. The Change Feed contains documents that have been inserted or updated, and any filtering needs to be done client-side after receiving those changes.
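If you're stuck behind a SQL-only API, the closest approximation is the _ts polling you describe, but treat it as polling, not as the Change Feed. A rough sketch with the JS/TS SDK (connection string and names are placeholders), caveats included:

```typescript
import { CosmosClient } from "@azure/cosmos";

// Connection string and names are placeholders.
const container = new CosmosClient(process.env["CosmosConnection"] ?? "")
  .database("MyDb")
  .container("MyContainer");

// Returns documents modified at or after the checkpoint. Caveats: _ts has
// one-second granularity, deleted documents never appear, and ordering is
// not guaranteed to be strictly monotonic, so keep a small overlap window
// and de-duplicate by id on the client.
async function changedSince(checkpointTs: number): Promise<unknown[]> {
  const { resources } = await container.items
    .query({
      query: "SELECT * FROM c WHERE c._ts >= @since",
      parameters: [{ name: "@since", value: checkpointTs }],
    })
    .fetchAll();
  return resources;
}
```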
I am trying to write a record with a different schema to a collection that already contains records. I don't get an exception, but I don't see the new record.
Do I need to use a different collection?
DocumentDBRepository<ScheduleViewModel>.CreateItemAsync(task).GetAwaiter();
Cosmos DB doesn't care what you put into it (like almost any other NoSQL database), so this is supported from the Cosmos DB perspective. From the code perspective, I suppose you need a connection or repository that supports the model you're using when you create the document. Note also that calling CreateItemAsync(task).GetAwaiter() without GetResult() (or simply await) neither waits for the write nor surfaces exceptions, which would explain seeing no error and no new record.
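To make the schema-agnostic point concrete, here's a minimal sketch (using the JS/TS SDK rather than the question's .NET code; all names are placeholders) that writes two differently shaped documents to one container:

```typescript
import { CosmosClient } from "@azure/cosmos";

// Names are placeholders for this sketch.
const container = new CosmosClient(process.env["CosmosConnection"] ?? "")
  .database("MyDb")
  .container("Tasks");

async function writeMixedShapes(): Promise<void> {
  // Two documents with completely different shapes in the same container.
  // Cosmos DB only requires an id and a value for the container's partition key.
  await container.items.create({ id: "1", type: "schedule", start: "2024-01-01T09:00:00Z" });
  await container.items.create({ id: "2", type: "note", text: "free-form payload" });
}
```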
I have a DynamoDB table that contains key value pairs that will be read by a number of applications. On startup each application will read the entire table and cache it in-memory.
The problem I'm trying to solve is that of getting the applications to update their cache if one or more items in the DynamoDB table have been modified.
DynamoDB streams initially seemed to be the right approach to solving the problem. I have implemented the consumer using Kinesis Client Library (KCL) as recommended by AWS. While implementing it, however, I have encountered some problems that make me believe that I'm on the wrong track. Specifically:
When I create a new consumer using KCL, it creates a new DynamoDB table to do the housekeeping of leases and checkpoints, so that when the application is restarted, KCL knows which records have been consumed and which have not. This is not what I need for this problem. Any stream records created while the application is offline are irrelevant, since the entire table is read upon application startup.
Several instances of the same application are running at the same time. Each of them needs to be notified of table updates. To implement that in KCL I need to assign a unique application name to each of them. Otherwise they will share the lease table and only one of the applications will get notified. One table for each application instance doesn't seem right. Also I would then need something to remove unused tables.
I also implemented it using the low level API instead. That works fine when there's a single shard. My implementation doesn't handle re-sharding like KCL, however, so it's too fragile. It seems wrong to have to implement handling of re-sharding for the simple problem I'm trying to solve.
I'm beginning to consider other solutions like:
Implementing a lambda function that gets triggered on updates to the table. The function sends a notification to an SNS topic. Consumers create SQS subscriptions on the topic and get notified via those. This solution has too many moving parts for my liking.
Make the applications periodically re-read the entire table and determine themselves if changes have been made. This solution feels a bit primitive, but seems to be the simplest.
All solutions that I have considered so far have quite significant drawbacks. What am I missing?
It depends on how your KCL is pushing to the dependent apps, but I believe the SQS path is the correct choice.
You can add a presumably infinite number of consumers without being throttled.
When you do add another dependent app, it won't require changing your KCL to push to it, the new app will simply watch the SQS queue.
You gain the ability to monitor the queue when issues happen.
More moving parts to set up, but once you have the Streams -> SNS -> SQS pipe in place, it's basically bulletproof.
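For illustration, the Streams -> SNS leg can be a small Lambda attached to the table's stream. This TypeScript sketch assumes a topic ARN in an environment variable and a minimal payload shape of my own choosing:

```typescript
import { DynamoDBStreamEvent } from "aws-lambda";
import { PublishCommand, SNSClient } from "@aws-sdk/client-sns";

const sns = new SNSClient({});
const TOPIC_ARN = process.env.CACHE_TOPIC_ARN ?? ""; // placeholder env var

export const handler = async (event: DynamoDBStreamEvent): Promise<void> => {
  for (const record of event.Records) {
    // Publish only the change type and key; each subscriber decides
    // whether to patch a single cache entry or re-read the table.
    await sns.send(
      new PublishCommand({
        TopicArn: TOPIC_ARN,
        Message: JSON.stringify({
          eventName: record.eventName, // INSERT | MODIFY | REMOVE
          keys: record.dynamodb?.Keys ?? {},
        }),
      })
    );
  }
};
```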
Just my 2¢.
Nowadays an AWS AppSync GraphQL API with subscriptions may be the simplest approach to power this type of application, with the fewest moving parts.
Whenever one of your applications starts up, it connects to your AppSync GraphQL API using the Amplify framework or the AppSync SDK and subscribes to the updates it's interested in. Then whenever an application updates information in the table via your GraphQL API, all your other applications are notified of the change, along with the relevant changed data.
AppSync integrates well with DynamoDB out of the box, letting you generate DynamoDB tables with appropriate indexes alongside your GraphQL schema, or generate GraphQL from your existing DynamoDB tables if you so choose. With its GraphQL transformers, Amplify can even generate an AppSync GraphQL API at a higher level, along with the associated DynamoDB tables, indexes, entity relationships, and extras such as Elasticsearch-backed search.
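As a rough sketch of the consumer side, assuming an Amplify v5 app and a schema that defines an onUpdateConfigItem subscription (the name and fields are hypothetical):

```typescript
import { API, graphqlOperation } from "aws-amplify";

// The subscription name and fields are assumptions about your schema.
const onUpdateConfigItem = /* GraphQL */ `
  subscription OnUpdateConfigItem {
    onUpdateConfigItem {
      key
      value
    }
  }
`;

const cache = new Map<string, string>(); // the app's in-memory copy

const subscription = (API.graphql(graphqlOperation(onUpdateConfigItem)) as any).subscribe({
  next: ({ value }: any) => {
    const item = value.data.onUpdateConfigItem;
    cache.set(item.key, item.value); // patch the cache instead of re-reading the table
  },
  error: (err: unknown) => console.error("subscription error", err),
});
```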
I have a question regarding Firebase's equivalents to MySQL features like:
Events
Triggers
Stored Procedures
In my case I want to migrate off MySQL to Firebase, but I need to know whether the use case can be replicated in Firebase.
In my current MySQL DB I have a table with a column called status; once it changes to 'Final', it triggers the execution of a stored procedure that runs a calculation over an entire table.
So, in other words, I would need to be able to add a 'trigger' on the actual Firebase data that then performs a 'stored procedure' to calculate something. Is this possible with Firebase?
You now have the capability to use Cloud Functions for Firebase to write code that triggers in response to changes in your Firebase project. There is lots of sample code that illustrates how to respond to writes to a particular location in your database, among many other types of events.
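As a hedged sketch of how that could look for this use case, using the Realtime Database trigger API in TypeScript (the /records path and the amount field are stand-ins for your actual schema):

```typescript
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// Paths and the 'amount' field are assumptions standing in for your schema.
export const onStatusFinal = functions.database
  .ref("/records/{recordId}/status")
  .onUpdate(async (change) => {
    if (change.after.val() !== "Final") {
      return null; // only react when status becomes 'Final'
    }
    // Stand-in for the stored procedure: recalculate over the whole table.
    const snapshot = await admin.database().ref("/records").once("value");
    let total = 0;
    snapshot.forEach((child) => {
      total += Number(child.child("amount").val() ?? 0);
      return false; // returning false keeps the iteration going
    });
    return admin.database().ref("/summary/total").set(total);
  });
```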
I'm using grids in VB.NET to display database records stored in Microsoft Access; the tables allow editing and deleting through the grid fields.
Is there a way I can monitor whenever a user deletes or edits a record? I want to be able to view details of every update or deletion to certain records, such as the date and users who did it.
What you're speaking of is known as "auditing", and certain databases, such as MS SQL Server, have built-in support for it. MS Access does not include this feature. In the absence of built-in auditing, a common way to implement it in a custom manner is with update triggers. Unfortunately, MS Access doesn't have triggers either. The only way you'll be able to do this is via an API you write yourself to interact with your tables, plus the discipline to stick to that API.
What you want to do is hook into the save commands on your inserts, updates, and deletes. You could also hook into the grid's events to capture the data. Either way, create an INSERT statement that dumps the log data into your log table, as sketched below.
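The pattern is language-agnostic, so here's a rough sketch in TypeScript (your app is VB.NET against Access, but the shape is the same: one write API that always records an audit row; the AuditLog table and its columns are hypothetical):

```typescript
// A generic async database handle; in the real app this would wrap your
// OleDb/Access connection.
interface Db {
  execute(sql: string, params: unknown[]): Promise<void>;
}

// Route every update/delete through one class so nothing bypasses the audit.
class AuditedRecords {
  constructor(private db: Db, private userName: string) {}

  async updateRecord(id: number, newValue: string): Promise<void> {
    await this.db.execute("UPDATE Records SET Value = ? WHERE Id = ?", [newValue, id]);
    await this.writeAudit("UPDATE", id);
  }

  async deleteRecord(id: number): Promise<void> {
    await this.db.execute("DELETE FROM Records WHERE Id = ?", [id]);
    await this.writeAudit("DELETE", id);
  }

  // The AuditLog table and its columns are assumptions for illustration.
  private writeAudit(action: string, recordId: number): Promise<void> {
    return this.db.execute(
      "INSERT INTO AuditLog (Action, RecordId, UserName, ChangedAt) VALUES (?, ?, ?, ?)",
      [action, recordId, this.userName, new Date()]
    );
  }
}
```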