DynamoDB Stream to EventBridge - amazon-dynamodb

I need to get data from DynamoDB into EventBridge. Can this happen without the need for a Lambda function? Can EventBridge listen to DynamoDB changes via a rule?

No. There is currently no direct integration between DynamoDB Streams and EventBridge.
You need to consider the different use cases for these two services. EventBridge is for high-importance, low-volume events in simpler systems integration, while DynamoDB Streams is for high-volume change events, which are already recorded in the DB. DynamoDB Streams is designed chiefly for analytical use cases, with the ability to aggregate or filter the events you want to analyze.
There are a few other event processing services in AWS, such as Kinesis and Amazon Managed Streaming for Apache Kafka (MSK), which are designed for a different scale than EventBridge.
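That said, the usual workaround is a thin Lambda function between the two. Here's a minimal sketch, assuming the account's default event bus and a hypothetical source name myapp.dynamodb, of a handler attached to the table's DynamoDB Stream that forwards each change record to EventBridge via PutEvents:

```python
import json
import boto3

# Assumption: this handler has the table's DynamoDB Stream configured as its
# Lambda event source, and targets the account's default event bus.
eventbridge = boto3.client("events")

def handler(event, context):
    entries = []
    for record in event["Records"]:
        entries.append({
            "Source": "myapp.dynamodb",         # hypothetical source name
            "DetailType": record["eventName"],  # INSERT | MODIFY | REMOVE
            "Detail": json.dumps(record["dynamodb"], default=str),
            "EventBusName": "default",
        })
    # PutEvents accepts at most 10 entries per call, so send in chunks.
    for i in range(0, len(entries), 10):
        eventbridge.put_events(Entries=entries[i:i + 10])
```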

Related

Using a database other than DynamoDB with iot-core

I'm just getting started with IoT and note that iot-core uses DynamoDB. That's not going to work for me (no table joins, no stored procedures, etc.). Is it possible to use a different DB? I usually use Postgres but appreciate it may not be the quickest for a lot of inserts. What do others use?
For IoT use cases and telemetry data storage, I'd suggest using Amazon Timestream. When data is sent to AWS IoT Core, you can use an IoT Rule to forward it to a Kinesis Data Stream (useful here for decoupling as well as for batch processing in the downstream pipeline). An AWS Lambda function can then pull data from the Kinesis Data Stream and enrich it if needed (for example, adding building, user, or company information) before storing it in Timestream. You can then use Grafana to visualize the data from Timestream.
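As a rough sketch of the Lambda step (the database/table names iot_db and telemetry, and the device_id/temperature payload fields, are assumptions for illustration):

```python
import base64
import json
import time
import boto3

# Assumption: this Lambda is subscribed to the Kinesis Data Stream fed by the
# IoT Rule, and the Timestream database/table already exist.
timestream = boto3.client("timestream-write")

def handler(event, context):
    records = []
    for item in event["Records"]:
        # Kinesis delivers the IoT payload base64-encoded.
        payload = json.loads(base64.b64decode(item["kinesis"]["data"]))
        records.append({
            "Dimensions": [
                {"Name": "device_id", "Value": str(payload["device_id"])},
            ],
            "MeasureName": "temperature",          # hypothetical measure
            "MeasureValue": str(payload["temperature"]),
            "MeasureValueType": "DOUBLE",
            "Time": str(int(time.time() * 1000)),  # milliseconds since epoch
        })
    if records:
        timestream.write_records(
            DatabaseName="iot_db", TableName="telemetry", Records=records
        )
```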

Stream on DynamoDB table

I have a DynamoDB table. When I check the Overview tab for the table in the AWS Web Console, I see two options to enable streams:
Manage Streams to Kinesis
Manage DynamoDB Stream
How are these two streams different from one another?
DynamoDB offers two streaming models for change data capture: Kinesis Data Streams for DynamoDB and DynamoDB Streams.
For the differences between the two, and to help you choose the right solution for your application, please refer to Streaming Options for Change Data Capture in the DynamoDB documentation.
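To make the distinction concrete, here's a hedged boto3 sketch (table and stream names are hypothetical). DynamoDB Streams is toggled as a property of the table itself, while Kinesis Data Streams for DynamoDB delivers changes to a separate Kinesis stream that you create and manage:

```python
import boto3

dynamodb = boto3.client("dynamodb")
kinesis = boto3.client("kinesis")

TABLE = "my-table"  # hypothetical table name

# Option 1: DynamoDB Streams -- enabled directly on the table.
dynamodb.update_table(
    TableName=TABLE,
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",  # capture before/after images
    },
)

# Option 2: Kinesis Data Streams for DynamoDB -- changes flow into a
# Kinesis stream you own; it must be ACTIVE before you attach it.
kinesis.create_stream(StreamName="my-table-changes", ShardCount=1)
kinesis.get_waiter("stream_exists").wait(StreamName="my-table-changes")
dynamodb.enable_kinesis_streaming_destination(
    TableName=TABLE,
    StreamArn="arn:aws:kinesis:us-east-1:123456789012:stream/my-table-changes",
)
```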

Firebase Document Write Limit

Hey, so with my current feed database design, I am using Redis as the cache for super-fast reads, which are routed through my Google Cloud Functions. The Redis database handles all post data and timeline updates, which is great and all, but I forgot one of the biggest caveats to this. Firebase Firestore only permits one document write per second, meaning that if I have a document that stores the post data (post_id, user_id, content, like_count), the like_count would be impossible to track given the possibility of many likes per second. Does anyone have any solutions to this?
You can shard your counter among multiple documents and query them in aggregate as needed.
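Here's a minimal Python sketch of that distributed-counter pattern (the shard count and collection names are illustrative): writes are spread randomly across shard documents so no single document exceeds the per-document write rate, and reads sum the shards.

```python
import random

from google.cloud import firestore

db = firestore.Client()
NUM_SHARDS = 10  # assumption: size this for the expected peak likes/second

post_ref = db.collection("posts").document("some-post-id")  # hypothetical post doc

def init_shards(post_ref):
    # One shard document per counter shard, each starting at zero.
    for i in range(NUM_SHARDS):
        post_ref.collection("like_shards").document(str(i)).set({"count": 0})

def increment_like(post_ref):
    # Pick a random shard so writes stay under ~1/sec per document.
    shard_id = str(random.randint(0, NUM_SHARDS - 1))
    shard_ref = post_ref.collection("like_shards").document(shard_id)
    shard_ref.update({"count": firestore.Increment(1)})

def get_like_count(post_ref):
    # Sum every shard to get the aggregate like count.
    return sum(s.to_dict()["count"]
               for s in post_ref.collection("like_shards").stream())
```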
You can also try a Cloud Tasks queue to smooth out the write frequency. It will add considerable complexity to the system, but it is really the only generalized way in GCP to manage the rate of some work. This might not work out the way you need, however.
If you use Cloud Tasks, your queue will need to be configured with a rate limit, and each task will have to deliver the document data to yet another function or other HTTP endpoint that performs the write.
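A rough sketch of the enqueue side (the project, queue, and endpoint names are all hypothetical; the queue's dispatch rate would be capped when you create it, e.g. with gcloud tasks queues create like-writes --max-dispatches-per-second=1):

```python
import json

from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()
# Assumption: the queue already exists with a dispatch rate limit configured.
parent = client.queue_path("my-project", "us-central1", "like-writes")

def enqueue_like_write(post_id: str):
    task = {
        "http_request": {
            "http_method": tasks_v2.HttpMethod.POST,
            # Hypothetical Cloud Function that performs the actual Firestore write.
            "url": "https://us-central1-my-project.cloudfunctions.net/applyLikeWrite",
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"post_id": post_id}).encode(),
        }
    }
    client.create_task(request={"parent": parent, "task": task})
```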

Monitoring table changes in DynamoDB

I am using AWS DynamoDB in order to store information.
I have two machines running separate code that accesses the information in the database.
One of the machines is writing into the database and the second one is reading.
Since the second one does not know whether or not the information in the database has changed, I need some way to monitor the database for changes.
I know that there is something called DynamoDB Streams that can provide information about changes made to the database, and I already have that code implemented.
The question is as follows: if I am monitoring the database constantly, I need to query this stream all the time, say once every minute.
What is the difference between doing that and actually querying the database every minute?
Is it much more efficient?
Is it less costly (resources, moneywise)?
Is there any other, more efficient way of monitoring changes in the database in a specific table from the code?
Any help would be appreciated, thank you.
Most people I have seen do something like this use DynamoDB Streams + Lambda for best results. Definitely check out the DynamoDB docs and the Lambda docs on this topic.
There's also an example in the docs of monitoring DynamoDB where changes fire off a message to an SNS topic.
DynamoDB Streams is more efficient and near real time. Think of using Lambda this way like you would a trigger in a relational database. Why go to the extra effort when the patterns are so well defined and people use them all the time?
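A minimal sketch of that trigger-style pattern, in the SNS variant mentioned above (the topic ARN and the stream-to-Lambda wiring are assumptions, not taken verbatim from the docs example):

```python
import json
import boto3

# Assumption: this Lambda has the table's DynamoDB Stream configured as its
# event source, and the SNS topic below was created separately.
sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:table-changes"  # hypothetical

def handler(event, context):
    for record in event["Records"]:
        # Forward each change (INSERT / MODIFY / REMOVE) as an SNS message,
        # so readers are notified instead of polling the table.
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject=f"DynamoDB {record['eventName']}",
            Message=json.dumps(record["dynamodb"], default=str),
        )
```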

Is DynamoDB streams the right option for this use case?

I have a DynamoDB table that contains key value pairs that will be read by a number of applications. On startup each application will read the entire table and cache it in-memory.
The problem I'm trying to solve is that of getting the applications to update their cache if one or more items in the DynamoDB table have been modified.
DynamoDB streams initially seemed to be the right approach to solving the problem. I have implemented the consumer using Kinesis Client Library (KCL) as recommended by AWS. While implementing it, however, I have encountered some problems that make me believe that I'm on the wrong track. Specifically:
When I create a new consumer using KCL, it creates a new DynamoDB table to do the housekeeping of leases and checkpoints, such that when the application is restarted, KCL knows which records have been consumed and which have not. This is not what I need for this problem. Any stream records that are created while the application is offline are irrelevant, since the entire table is read upon application startup.
Several instances of the same application are running at the same time. Each of them needs to be notified of table updates. To implement that in KCL, I would need to assign a unique application name to each of them. Otherwise they will share the lease table and only one of the applications will be notified. One table for each application instance doesn't seem right. Also, I would then need something to remove unused tables.
I also implemented it using the low level API instead. That works fine when there's a single shard. My implementation doesn't handle re-sharding like KCL, however, so it's too fragile. It seems wrong to have to implement handling of re-sharding for the simple problem I'm trying to solve.
I'm beginning to consider other solutions like:
Implementing a Lambda function that gets triggered on updates to the table. The function sends a notification to an SNS topic. Consumers create SQS subscriptions on the topic and get notified via that. This solution has too many moving parts for my liking.
Make the applications periodically re-read the entire table and determine themselves if changes have been made. This solution feels a bit primitive, but seems to be the simplest.
All solutions that I have considered so far have quite significant drawbacks. What am I missing?
It depends on how your KCL is pushing to the dependent apps, but I believe the SQS path is the correct choice.
You can add a presumably infinite number of consumers without being throttled.
When you do add another dependent app, it won't require changing your KCL to push to it, the new app will simply watch the SQS queue.
You gain the ability to monitor the queue when issues happen.
More moving parts to set up, but once you have the Streams -> SNS -> SQS pipe in place, it's basically bulletproof.
Just my 2¢.
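For the consuming side, a minimal sketch of an application instance watching its own queue (the queue URL is hypothetical; each instance would have its own SQS queue subscribed to the shared SNS topic so every instance sees every change):

```python
import boto3

sqs = boto3.client("sqs")
# Assumption: a per-instance queue subscribed to the SNS topic of table changes.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-instance-queue"

def poll_for_changes(on_change):
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,  # long polling to reduce empty receives
        )
        for msg in resp.get("Messages", []):
            on_change(msg["Body"])  # e.g. refresh the in-memory cache entry
            sqs.delete_message(
                QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"]
            )
```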
Nowadays an AWS AppSync GraphQL API with subscriptions may be the simplest approach to power this type of application, with the fewest moving parts.
Whenever one of your applications starts up, it connects to your AppSync GraphQL API using the Amplify framework or the AppSync SDK and subscribes to the updates it's interested in. Then, whenever an application updates information in the table via your GraphQL API, all your other applications will be notified of the change, along with the relevant changed data.
AppSync integrates well with DynamoDB out of the box, allowing you to generate DynamoDB tables with appropriate indexes alongside your GraphQL schema, or to generate GraphQL from your existing DynamoDB tables if you so choose. Amplify can even help you automatically generate an AppSync GraphQL API at a higher level, with associated DynamoDB tables, indexes, entity relationships, and more, like Elasticsearch-backed search capabilities, by using its GraphQL transformers.
