How do I subscribe directly to my AWS AppSync data source? - amazon-dynamodb

I have a DynamoDB table connected to Step Functions, and I am building a UI to display changes. I connected the table to an AppSync instance and have tried using subscriptions through AppSync, but it seems they only observe mutations made within the current AppSync API.
How can I subscribe to the data source changes directly?

You are correct. Currently, AppSync subscriptions are only triggered by GraphQL mutations. If changes are made to the DynamoDB table from a source other than AppSync, subscriptions will not trigger.
If you want to track all changes being made to a DynamoDB table and publish them through AppSync, you can do the following:
1) Set up a DynamoDB stream to capture changes and feed them to an AWS Lambda function
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html
2) Set up an AppSync mutation with a local (no data source) resolver. You can use this to publish messages to subscribers without writing to a data source.
https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-local-resolvers.html
3) Make the DynamoDB stream Lambda function (set up in step 1) call the AppSync mutation (set up in step 2), as sketched below.
This enables publishing ALL changes made to a DynamoDB table to AppSync subscribers, regardless of where the change came from.
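As a rough sketch of step 3, the stream-triggered Lambda could forward each record to the mutation over HTTP. The mutation name (publishChange), its arguments, the environment variables, and API-key auth are all assumptions to adapt to your own schema and auth mode:

```typescript
// Hypothetical DynamoDB stream handler that republishes changes through an
// AppSync mutation backed by a local (None) resolver. Assumes a mutation
// named `publishChange` and API-key auth; endpoint and key come from env vars.
import { DynamoDBStreamEvent } from "aws-lambda";

const APPSYNC_URL = process.env.APPSYNC_URL!;         // e.g. https://<api-id>.appsync-api.<region>.amazonaws.com/graphql
const APPSYNC_API_KEY = process.env.APPSYNC_API_KEY!;

const MUTATION = /* GraphQL */ `
  mutation PublishChange($id: ID!, $eventName: String!) {
    publishChange(id: $id, eventName: $eventName) {
      id
      eventName
    }
  }
`;

export const handler = async (event: DynamoDBStreamEvent): Promise<void> => {
  for (const record of event.Records) {
    const id = record.dynamodb?.Keys?.id?.S; // assumes a string partition key named "id"
    if (!id) continue;
    // Node 18+ Lambda runtimes provide a global fetch.
    const res = await fetch(APPSYNC_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json", "x-api-key": APPSYNC_API_KEY },
      body: JSON.stringify({
        query: MUTATION,
        variables: { id, eventName: record.eventName ?? "UNKNOWN" },
      }),
    });
    if (!res.ok) {
      throw new Error(`AppSync mutation failed: ${res.status} ${await res.text()}`);
    }
  }
};
```

Clients then subscribe to publishChange; because the resolver has no data source, the mutation fans out to subscribers without writing anything back to the table.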

Related

How to implement publish/subscribe with DynamoDB?

I need to implement publish/subscribe with DynamoDB. Each of my cloud nodes should send events to all the other nodes of my application that are connected to the same DynamoDB database. A typical use case is clearing caches when data in the database has changed.
Can DynamoDB Streams be a solution for this? It looks to me like a stream can only be consumed once, not by every node. Is this right?
Is there something like tailable-cursor support in DynamoDB?
Are there any other features I could use?
DynamoDB doesn't support Pub/Sub as a feature like you might see in Redis.
If you need cache functionality, you can look at DynamoDB Accelerator (DAX); see in particular the consistency best-practices documentation.
You can also use a dedicated Pub/Sub service such as Simple Notification Service (SNS).
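For the cache-invalidation use case, a minimal sketch with the AWS SDK for JavaScript v3 might look like this; the topic ARN and message shape are assumptions, and each node would subscribe to the topic (e.g. via SQS or HTTPS) to receive the events:

```typescript
// Minimal sketch: publish a cache-invalidation event to an SNS topic so
// that every subscribed node can clear its local cache.
import { SNSClient, PublishCommand } from "@aws-sdk/client-sns";

const sns = new SNSClient({});

export async function publishCacheInvalidation(tableName: string, key: string): Promise<void> {
  await sns.send(
    new PublishCommand({
      TopicArn: process.env.CACHE_EVENTS_TOPIC_ARN, // assumed env var holding the topic ARN
      Message: JSON.stringify({ type: "CACHE_INVALIDATE", tableName, key }),
    })
  );
}
```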

How to copy data from Cosmos DB API for MongoDB to another Cosmos DB account

How do I copy data (one collection) from one Cosmos DB API for MongoDB account to another Cosmos DB API for MongoDB account, in another subscription, placed in another Azure region?
Preferably, the copy should run periodically.
You can use Azure Data Factory to easily copy a collection from one Cosmos DB API for MongoDB account to another, in any other subscription and any other Azure region, using just the Azure Portal.
You need to deploy a few components, namely Linked Services, Datasets, and a Pipeline with a Copy data activity, to accomplish this task.
Use the Azure Cosmos DB (MongoDB API) Linked Service to connect Azure Data Factory to your Cosmos DB Mongo API account. Refer to Create a linked service to Azure Cosmos DB's API for MongoDB using UI for more details and deployment steps.
Note: You need to deploy two Azure Cosmos DB (MongoDB API) Linked Services: one for the source account from which you copy the collection, and another for the destination account to which the data will be copied.
Create Datasets using the Linked Services created in the step above; a dataset connects you to a collection. Again, you need to deploy two datasets: one for the source collection and another for the destination collection.
Now create a pipeline with a Copy data activity.
In the Source and Sink tabs of the Copy data activity settings, select the source dataset and sink dataset, respectively, which you created above.
Now publish the changes and click the Debug option to run the pipeline once. The pipeline will run and the collection will be copied to the destination.
If you want to run the pipeline periodically, you can create a trigger based on an event or a schedule. Check Create a trigger that runs a pipeline on a schedule for more details.
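If you would rather kick off runs from code instead of the portal, a hypothetical sketch with the Azure SDK for JavaScript follows; the resource group, factory, and pipeline names are placeholders:

```typescript
// Hypothetical sketch: start one run of the copy pipeline programmatically,
// as an alternative to the portal's Debug button or a schedule trigger.
import { DefaultAzureCredential } from "@azure/identity";
import { DataFactoryManagementClient } from "@azure/arm-datafactory";

const subscriptionId = process.env.AZURE_SUBSCRIPTION_ID!; // assumed env var
const client = new DataFactoryManagementClient(new DefaultAzureCredential(), subscriptionId);

async function runCopyPipeline(): Promise<void> {
  const run = await client.pipelines.createRun(
    "my-resource-group",   // placeholder resource group
    "my-data-factory",     // placeholder factory name
    "CopyMongoCollection"  // placeholder pipeline name
  );
  console.log(`Started pipeline run ${run.runId}`);
}

runCopyPipeline().catch(console.error);
```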

How to find the DynamoDB table created by a #model GraphQL API in Amplify?

My understanding is that when creating objects annotated with #model in the GraphQL API, a DynamoDB table is created and configured automatically. How do I access this table? I cannot see it in the DynamoDB console.
They should be there. Make sure you are looking in the same region as your Amplify application. For example, if you provisioned your Amplify app in us-east-2, your DynamoDB tables will also be in us-east-2.
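If you would rather verify from code than click around the console, a quick check with the AWS SDK for JavaScript v3 looks like the following; the region is an example, and Amplify-generated tables are typically named <Model>-<apiId>-<env>:

```typescript
// List DynamoDB tables in the region where the Amplify app was provisioned.
import { DynamoDBClient, ListTablesCommand } from "@aws-sdk/client-dynamodb";

async function listAmplifyTables(): Promise<void> {
  const client = new DynamoDBClient({ region: "us-east-2" }); // example region
  const { TableNames } = await client.send(new ListTablesCommand({}));
  console.log(TableNames); // look for names like Todo-<apiId>-<env>
}

listAmplifyTables().catch(console.error);
```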

How to drop and recreate local dynamodb/appsync/amplify database created by mock api?

I am using AWS Amplify to build a web application. I am using AppSync and DynamoDB and I've defined my GraphQL schema. Amplify offers the ability to test GraphQL endpoints locally by running "amplify mock api" from the command line. I did this and it successfully created some local GraphQL endpoints for me, and I was able to insert some data and run some local queries. (When I ran "amplify mock api" the first time, I got messages on the console that my tables were created.)
I have since made quite significant changes to my GraphQL schema, including keys, sort keys, etc. I don't think all of my changes were successfully applied to my local API and database tables, so I just want to completely delete my local "database" so that "amplify mock api" can regenerate it from my new schema. How do I do this? I don't know where this local Amplify database resides or what underlying technology it uses. (Otherwise I would just connect directly to the database and drop all tables to force a recreation.)
I have tried "amplify remove api", which removed the local endpoints. I even pushed this to AWS (I am in development mode currently, so I didn't mind destroying my AWS environment). I then did "amplify add api" again from scratch and typed out my schema again. But if I run "amplify mock api", it doesn't recreate the tables: the endpoint starts up, and if I perform a GraphQL query I get back the data I originally added, which means those tables persist.
How can I completely drop my local "mock" Amplify AppSync GraphQL endpoints and database to force a recreate? (I am using a Mac, if it's relevant.)
It ended up being very simple. Amplify stores the mock data in ./amplify/mock-data, so to delete the database and force a recreation I just deleted that directory from my project (e.g. rm -rf ./amplify/mock-data). This question was helpful in working out how the mock API and database setup work.

Improper data returned by reference.on('value') in firebase web api

When using the Firebase Realtime Database Node API (import "firebase-admin"), improper values (empty updates) are randomly delivered via query.on('value') as the initial update.
I rely on proper data being delivered from the very first update. The reason I'm using .on('value') is code reuse in other places that require the data stream.
What is the contract on data updates in Firebase?
If I switch to .once('value'), does it change the contract?
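For reference, the two call patterns in question look like this with firebase-admin; the database URL and path are placeholders:

```typescript
import * as admin from "firebase-admin";

admin.initializeApp({
  databaseURL: "https://my-project.firebaseio.com", // placeholder project URL
});
const ref = admin.database().ref("/some/path"); // placeholder path

// once('value'): resolves with a single snapshot of the current data,
// then detaches; no further updates are delivered.
async function readOnce(): Promise<void> {
  const snapshot = await ref.once("value");
  console.log("once:", snapshot.val());
}

// on('value'): fires with the current value first, then again on every
// subsequent change, until the listener is removed with ref.off().
function subscribe(): void {
  ref.on("value", (snapshot) => {
    console.log("on:", snapshot.val());
  });
}
```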
