DynamoDB and RDS MySQL

I am working on a proof of concept in which we receive live data streams from Kinesis and want to save them in DynamoDB.
However, the lookup table data lives in RDS (MySQL) instances, and we need to perform join operations against those lookup tables.
Questions:
Should we migrate the lookup table data into DynamoDB through AWS DMS, or is another approach more suitable?
Is DynamoDB suitable for join operations with lookup tables?
Can we use PartiQL in DynamoDB to query data and perform join operations with lookup tables?

You can use DynamoDB Streams to replicate data to any other application.
Please have a look here:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html
Furthermore, DynamoDB is not suitable for join operations; PartiQL for DynamoDB also operates on one table at a time and does not support joins, so any join against the lookup data has to be done in application code.
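As a rough illustration of the Streams-plus-Lambda pattern from the link above, a replication handler in Java could look like the sketch below (assuming the aws-lambda-java-events library; the write to the actual replication target is left as a comment):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent.DynamodbStreamRecord;

import java.util.Map;

// Sketch of a Lambda function subscribed to a DynamoDB stream.
// Each invocation receives a batch of change records from the source table.
public class LookupReplicator implements RequestHandler<DynamodbEvent, Void> {

    @Override
    public Void handleRequest(DynamodbEvent event, Context context) {
        for (DynamodbStreamRecord record : event.getRecords()) {
            String eventName = record.getEventName(); // INSERT, MODIFY or REMOVE
            Map<String, ?> newImage = record.getDynamodb().getNewImage(); // null for REMOVE

            if (newImage != null) {
                // Replace this log call with the write to your replication target
                // (another DynamoDB table, RDS, a search index, ...).
                context.getLogger().log(eventName + ": " + newImage);
            }
        }
        return null;
    }
}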

Related

How to copy a table from one database to another database in the same ADX cluster

I want to copy a table from one database to another database in the same ADX (Azure Data Explorer) cluster.
From the Microsoft documentation I can see that copying a table is possible within the same database using the commands below:
.set, .append, .set-or-append, .set-or-replace
https://learn.microsoft.com/en-us/azure/data-explorer/kusto/management/data-ingestion/ingest-from-query
I am looking to copy a table from one database to another database in the same ADX cluster. How can I achieve this?
You can use the database() function, for example:
.set Table <| database("myOtherDatabase").Table

Using Java to stream index updates into Elasticsearch from DynamoDB using a trigger

I am a novice with DynamoDB and Elasticsearch.
I need to stream index updates into Elasticsearch from a DynamoDB table via a trigger, using Java, i.e. whenever a new record is inserted into the DynamoDB table, the same record has to be updated in Elasticsearch.
Most of the examples available on the web are either incomplete or implemented in Python/Node.js. Any explanation of how to achieve this in Java, or any links/reference articles, would be welcome.
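There is no single official Java sample for this, but the usual shape is a Lambda function attached to the table's stream that indexes each new image into Elasticsearch. Below is a rough sketch assuming the aws-lambda-java-events library and the Elasticsearch 7.x high-level REST client; the endpoint, index name, and key attribute are hypothetical, only string attributes are copied for brevity, and authentication/signing for a managed domain is omitted:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent.DynamodbStreamRecord;

import org.apache.http.HttpHost;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Sketch: index every INSERT/MODIFY from a DynamoDB stream into Elasticsearch.
public class DynamoToElasticsearchHandler implements RequestHandler<DynamodbEvent, Void> {

    // Placeholder endpoint; in practice read it from an environment variable.
    private final RestHighLevelClient client = new RestHighLevelClient(
            RestClient.builder(new HttpHost("my-es-endpoint.example.com", 443, "https")));

    @Override
    public Void handleRequest(DynamodbEvent event, Context context) {
        for (DynamodbStreamRecord record : event.getRecords()) {
            if (record.getDynamodb().getNewImage() == null) {
                continue; // REMOVE events carry no new image; handle deletes separately.
            }
            // Use the table's key attribute (hypothetically named "id") as the document id.
            String id = record.getDynamodb().getKeys().get("id").getS();

            // Flatten the DynamoDB attribute map into a plain document.
            // Only string attributes are copied here; map other types as needed.
            Map<String, Object> doc = new HashMap<>();
            record.getDynamodb().getNewImage().forEach((name, value) -> doc.put(name, value.getS()));

            try {
                client.index(new IndexRequest("my-index").id(id).source(doc), RequestOptions.DEFAULT);
            } catch (IOException e) {
                throw new RuntimeException("Failed to index record " + id, e);
            }
        }
        return null;
    }
}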

Can Cosmos DB read data from a file blob, CSV, or JSON file in batches?

I am currently researching how to read data with Cosmos DB. Our current approach is a .NET Core C# application that uses the Cosmos DB SDK to read all the data from a file blob, CSV, or JSON file and then, in a for loop, pull each item's information from Cosmos DB one by one and compare/insert/update it. This feels inefficient.
We're curious whether Cosmos DB can read a batch of data (say 5,000 records) from a file blob, CSV, or JSON file and, similar to SQL Server, do a bulk insert or merge statement directly inside Cosmos DB. The goal is basically to avoid performing the same operation one by one for each item when interacting with Cosmos DB.
I've also looked into the BulkExecutor; its BulkUpdate looks like a straightforward way of directly updating an item without considering whether it should be updated at all. In my case, for example, if I have 1,000 items and only 300 of them have changed properties, I'd need to update just those 300 items without touching the remaining 700. Basically, I need a way to have Cosmos DB do the data comparison on the collection as a whole rather than looping over each single item; it could either perform the update itself or output a collection that I can use for a later update.
Would the (.NET + SDK) application be able to do that, or could a Cosmos DB stored procedure handle a similar job? Any other Azure tool is welcome as well!
What you are looking for is the Cosmos DB BulkExecutor library.
It is designed to operate on millions of records in bulk and is very efficient.
You can find the .NET documentation here.
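In the current SDKs the bulk capability is built in rather than a separate library (CosmosClientOptions.AllowBulkExecution in the .NET v3 SDK, bulk operations in the Java v4 SDK). As a rough sketch of the pattern, assuming the Azure Cosmos DB Java SDK v4 and a hypothetical Item POJO whose container is partitioned on /pk, batching upserts looks roughly like this; note that the compare step (deciding which of the 1,000 items actually changed) still happens in your application before the bulk call, since Cosmos DB has no server-side MERGE equivalent:

import com.azure.cosmos.CosmosAsyncClient;
import com.azure.cosmos.CosmosAsyncContainer;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.models.CosmosBulkOperations;
import com.azure.cosmos.models.CosmosItemOperation;
import com.azure.cosmos.models.PartitionKey;
import reactor.core.publisher.Flux;

import java.util.List;

public class BulkUpsertSketch {

    // Hypothetical POJO; "id" and "pk" must match the container's id and partition key path.
    public static class Item {
        public String id;
        public String pk;
        public String payload;
    }

    public static void bulkUpsert(List<Item> changedItems) {
        // Endpoint and key are placeholders; use your own configuration.
        CosmosAsyncClient client = new CosmosClientBuilder()
                .endpoint("https://my-account.documents.azure.com:443/")
                .key("<account-key>")
                .buildAsyncClient();
        CosmosAsyncContainer container = client.getDatabase("my-db").getContainer("my-container");

        // Turn the already-filtered list of changed items into bulk upsert operations.
        Flux<CosmosItemOperation> operations = Flux.fromIterable(changedItems)
                .map(item -> CosmosBulkOperations.getUpsertItemOperation(item, new PartitionKey(item.pk)));

        // The SDK groups and dispatches these concurrently; blockLast() simply waits in this sketch.
        container.executeBulkOperations(operations).blockLast();

        client.close();
    }
}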

DocumentDB Data Migration Tool - transformDocument procedure with partitions

I am trying to convert data from SQL Server to DocumentDB. I need to create embedded arrays in the DocumentDB documents.
I am using the DocumentDB Data Migration Tool, and it describes using transformDocument with a bulk-insert stored procedure... unfortunately we are using partitioned collections, and they do not support bulk insert.
Am I missing something, or is this not currently supported?
The migration tool only supports sequential data import into a partitioned collection. Please follow the sample below to bulk-import data efficiently into a partitioned collection.
https://github.com/Azure/azure-documentdb-dotnet/tree/master/samples/documentdb-benchmark

SQLite Multicore Processing

How do you configure SQLite 3 to process a single query using more than one CPU core?
Since version 3.8.7, SQLite can use multiple threads for parallel sorting of large data sets (enabled via PRAGMA threads and the SQLITE_MAX_WORKER_THREADS compile-time option).
sqlite3 itself does not do that.
However, I have a project called multicoresql on GitHub that provides utility programs and a C library for spreading SQL queries across multiple cores.
It uses sharding, so you have to break your large database or data file into multiple sqlite3 database files. A single SQL query must then be written as two SQL queries: a map query that runs first on all the shards, and a reduce query that computes the final result from the collected output of the map query across the shards.
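If you do not want to depend on multicoresql itself, the same map/reduce-over-shards idea can be sketched by hand. The example below is a rough illustration (not the multicoresql API), assuming the Xerial sqlite-jdbc driver on the classpath; the shard file names and the sales table are hypothetical. The map query computes a partial aggregate on each shard in parallel, and the reduce step merges the partials:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShardedQuery {

    public static void main(String[] args) throws Exception {
        // Hypothetical shard files produced by splitting one large database.
        List<String> shards = List.of("shard1.db", "shard2.db", "shard3.db", "shard4.db");

        // Reduce state: category -> running total across all shards.
        Map<String, Double> totals = new ConcurrentHashMap<>();

        ExecutorService pool = Executors.newFixedThreadPool(shards.size());
        for (String shard : shards) {
            pool.submit(() -> {
                // Map query: a partial aggregate computed independently on each shard.
                try (Connection conn = DriverManager.getConnection("jdbc:sqlite:" + shard);
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(
                             "SELECT category, SUM(amount) AS partial FROM sales GROUP BY category")) {
                    while (rs.next()) {
                        // Reduce step: merge each shard's partial sums into the shared totals.
                        totals.merge(rs.getString("category"), rs.getDouble("partial"), Double::sum);
                    }
                } catch (Exception e) {
                    throw new RuntimeException("Map query failed on " + shard, e);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);

        // totals now holds the same answer a single
        // SELECT category, SUM(amount) FROM sales GROUP BY category would give on the unsharded data.
        totals.forEach((category, sum) -> System.out.println(category + " = " + sum));
    }
}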
