What is DynamoDB's replication type?

What is the replication type of DynamoDB?
I'm assuming it is peer-to-peer based on online results, but can anyone confirm or deny this?

People often assume a connection between the Dynamo paper and DynamoDB, and so claim that DynamoDB uses leaderless replication.
However, while DynamoDB is based on many of Dynamo's principles, it is not an implementation of Dynamo. See reference 4 of the Wikipedia article on Dynamo for a quote explaining that it uses single-leader replication:

Dynamo had a multi-leader design requiring the client to resolve version conflicts and DynamoDB uses synchronous replication across multiple data centers for high durability and availability.


Is Cloudflare Workers KV a database like DynamoDB in AWS?

I can't find a clear comparison between Cloudflare Workers KV and DynamoDB in AWS. Can anyone explain it in simpler terms?
Although there are some similarities (i.e. both DynamoDB and Workers KV are offered as managed services), I would say they are more different than they are alike.
Workers KV is always eventually consistent, whereas DynamoDB can be strongly consistent for read-after-write operations.
DynamoDB has additional capabilities such as local and global secondary indexes allowing you to have different access patterns for the same underlying data.
Workers KV is heavily optimized for reads with infrequent writes, whereas DynamoDB doesn't have the same limitation (though DynamoDB also does better at reading data than writing in terms of throughput).
DynamoDB also has other features such as stream processing which allows you to do out of band processing in reaction to changes to the data stored in the database.
I'm not sure about the security model for Workers KV, but DynamoDB allows you to configure strict access policies on tables.
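To illustrate the consistency difference: each DynamoDB GetItem chooses its own consistency, whereas Workers KV has no such per-read option. A minimal sketch, assuming boto3-style request shapes (the table and key names here are hypothetical):

```python
def get_item_request(table_name, key, strong=False):
    """Build a DynamoDB GetItem request dict.

    ConsistentRead=True asks for a strongly consistent (read-after-write)
    read, at roughly double the read-capacity cost of an eventually
    consistent one.
    """
    return {
        "TableName": table_name,
        "Key": key,
        "ConsistentRead": strong,
    }

# With boto3 this would be passed as (sketch):
#   client.get_item(**get_item_request("users", {"user_id": {"S": "u-123"}},
#                                      strong=True))
```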

Verify dynamodb is healthy

I would like to verify in my service /health check that I have a connection with my dynamodb.
I am searching for something like SELECT 1 in MySQL (it only pings the DB and returns 1), but for DynamoDB.
I saw this post, but searching for a nonexistent item is an expensive action.
Any ideas on how to only ping my db?
I believe the SELECT 1 equivalent in DynamoDB is a Scan with a Limit of 1 item. You can read more here.
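A minimal sketch of that Scan-based ping, assuming boto3-style request shapes (the table name is hypothetical). Limit=1 stops the scan after examining a single item, so the probe consumes at most one item's worth of read capacity:

```python
def ping_request(table_name):
    """Build the cheapest data-plane probe: a Scan capped at one item."""
    return {"TableName": table_name, "Limit": 1}

# A /health handler might then do (sketch, with boto3):
#   try:
#       boto3.client("dynamodb").scan(**ping_request("my-table"))
#       healthy = True
#   except Exception:
#       healthy = False
```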
DynamoDB is a managed service from AWS and is highly available anyway. Instead of using a query to verify the health of DynamoDB, why not set up CloudWatch metrics on your table and check for recent DynamoDB alarms in CloudWatch? This will also prevent you from spending read units.
The question is perhaps too broad to answer as stated. There are many ways you could set this up, depending on your concerns and constraints.
My recommendation would be to not over-think or over-do it in terms of verifying connectivity from your service host to DynamoDB: for example, just performing a periodic GetItem should be sufficient to establish basic network connectivity.
Instead of going about the problem from this angle, perhaps you might want to consider a different approach:
a) set up canary tests that exercise all your service features periodically -- these should be "fail-fast" light tests that run constantly, and in the event of consistent failure you can take action
b) set up error metrics from your service and monitor those metrics: for example, CloudWatch allows you to take action on metrics -- you will likely get more mileage out of this approach than narrowly focusing on a single failure mode (i.e. DynamoDB, which, as others have stated, is a managed service with a very good availability SLA)

Trying to consolidate multiple Amazon DynamoDB tables into one

Scenario:
I've got a semi-structured dataset in JSON format. I'm storing the 3 subsets (new_records, updated_records, and deleted_records) from the dataset in 3 different Amazon DynamoDB tables, scheduled to truncate and load daily.
I'm trying to create a mapping to source data from these DynamoDB tables, append a few metadata columns (date_created, date_modified, is_active), and consolidate the data in a master DynamoDB table.
Issues and Challenges:
I tried AWS Glue - I created a Data Catalog for the source tables using a Crawler. I understand AWS Glue doesn't provide a way to store data in DynamoDB, so I changed the target to Amazon S3. However, the AWS Glue job results in creating some sort of reduced form of the data (Parquet objects) in my Amazon S3 bucket. I've limited experience with PySpark, Pig, and Hive, so excuse me if I'm unable to explain clearly.
Quick research on Google hinted that I should read the Parquet objects available on Amazon S3 using Amazon Athena or Redshift Spectrum.
I'm not sure, but this looks like overkill, doesn't it?
I read about AWS Data Pipeline, which offers quick data transfer between different AWS services, but I'm not sure whether it provides a mechanism to create mappings between source and target (in order to append additional columns) or whether it straightaway dumps data from one service to the other.
Can anyone hint at a lucid and minimalistic solution?
-- Update --
I've been able to consolidate the data from Amazon DynamoDB to Amazon Redshift using AWS Glue, which turned out to be actually quite simple.
However, Amazon Redshift has a few characteristic issues - its relational nature, and its inability to directly perform a single merge or upsert to update a table, are the major things I'm considering here.
I'm wondering whether Amazon Elasticsearch Service can be used here to index and consolidate the data from Amazon DynamoDB.
I'm not sure about your needs and assumptions. But let me post my thoughts that may help!
Why are you planning to do this migration? Think about this carefully.
Moving from 3 tables to 1 table, table size should not be an issue with DynamoDB, but think about read/write unit capacity.
Athena is a good option: you write SQL to query your data and pay based on the data scanned by your query, ... But Athena has a 30-minute query timeout. (I think you can request an increase for that, not sure!)
I think it is worth trying Data Pipeline. Yes, you can process the data while moving it.
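Whichever transfer service ends up doing the move, the mapping step from the question -- appending date_created, date_modified, and is_active -- can be sketched as a small per-item transform. The column names come from the question; the boto3 plumbing in the trailing comment is an assumption:

```python
from datetime import datetime, timezone

def with_metadata(item, source):
    """Return a copy of a DynamoDB item dict with the consolidation
    metadata columns appended.

    `source` names which staging subset the item came from:
    'new', 'updated', or 'deleted'.
    """
    now = datetime.now(timezone.utc).isoformat()
    out = dict(item)
    out.setdefault("date_created", now)   # keep an existing creation date
    out["date_modified"] = now
    out["is_active"] = source != "deleted"
    return out

# With boto3 the consolidation loop might look like (sketch):
#   for src in ("new_records", "updated_records", "deleted_records"):
#       for item in paginate_scan(src):            # hypothetical helper
#           master.put_item(Item=with_metadata(item, src.split("_")[0]))
```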

Does SQLite have an auto-deletion feature in an in-memory database?

I want to start using an in-memory database with an auto-deletion feature enabled and set to a couple of hours.
I have a huge volume of data and a requirement to support different types of complex queries (only on data persisted in the last one or two hours).
I believe that in-memory databases can help me with that. SQLite is widely adopted and trusted, and it has an in-memory mode. But I didn't find anything about a time-based auto-deletion feature.
I see that auto-deletion is a common feature of in-memory databases (IMDBs), but I didn't find anything about it in the SQLite documentation.
I want to have some feedback/direction from the community with more experience with in-memory databases.
Thanks.
Full disclosure: I work for the vendor. eXtremeDB (an in-memory database system available since 2001) has a time-to-live (TTL) feature. It can be expressed either in terms of the maximum number of rows to be stored for a table, or the amount of elapsed time after which a row is purged (but only when a new row comes in). There are non-SQL and SQL APIs available.
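For SQLite specifically, there is no built-in time-based auto-deletion, but a TTL can be approximated with a timestamp column and a periodic DELETE run from a scheduler. A minimal in-memory sketch (table and column names are illustrative):

```python
import sqlite3
import time

TTL_SECONDS = 2 * 60 * 60  # keep roughly the last two hours of data

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT, created_at REAL)"
)
# Index the timestamp so the purge and time-window queries stay cheap.
conn.execute("CREATE INDEX idx_events_created ON events (created_at)")

def insert(payload):
    conn.execute(
        "INSERT INTO events (payload, created_at) VALUES (?, ?)",
        (payload, time.time()),
    )

def purge_expired(now=None):
    """Delete rows older than the TTL; call periodically (or before queries)."""
    now = time.time() if now is None else now
    conn.execute("DELETE FROM events WHERE created_at < ?", (now - TTL_SECONDS,))
    conn.commit()
```

Note the in-memory database lives only as long as the connection, so the purge must run on the same connection (or a shared-cache URI).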

What documentation exists for DynamoDB's consistency model, CAP, partition recovery etc?

I'm considering using Amazon's DynamoDB. Naturally, if you're bothering to use a highly available distributed data store, you want to make sure your client deals with outages in a sensible way!
While I can find documentation describing Amazon's "Dynamo" database, it's my understanding that "DynamoDB" derives its name from Dynamo but is otherwise a separate product.
For DynamoDB itself, the only documentation I can find is a brief forum post which basically says "retry 500 errors". For most other databases much more detailed information is available.
Where should I be looking to learn more about DynamoDB's outage handling?
Amazon DynamoDB indeed lacks a detailed statement about its choices regarding the CAP theorem (I'm still hoping for a DynamoDB edition of Kyle Kingsbury's excellent Jepsen series; Call me maybe: Cassandra analyzes a Dynamo-inspired database). Jeff Walker Code Ranger's answer to DynamoDB: Conditional writes vs. the CAP theorem confirms the lack of clear information in this area, but asserts that we can make some pretty strong inferences.
In fact, the referenced forum post also suggests a strong emphasis on availability:
DynamoDB does indeed synchronously replicate across multiple
availability zones within the region, and is therefore tolerant to a
zone failure. If a zone becomes unavailable, you will still be able to
use DynamoDB and the service will persist any successful writes that
we have acknowledged (including writes we acknowledged at the time
that the availability zone became unavailable).
The customer experience when a complete availability zone is lost
ranges from no impact at all to delayed processing times in cases
where failure detection and service-side redirection are necessary.
The exact effects in the latter case depend on whether the customer
uses the service's API directly or connects through one of our SDKs.
Other than that, Werner Vogels' posts on Dynamo/DynamoDB provide further insight:
Amazon's Dynamo - about the original paper
Amazon DynamoDB – a Fast and Scalable NoSQL Database Service Designed for Internet Scale Applications - main introductory article including:
History of NoSQL at Amazon – Dynamo
Lessons learned from Amazon's Dynamo
Introducing DynamoDB - this features the most relevant information regarding the subject matter
Durable and Highly Available. Amazon DynamoDB replicates its data over
at least 3 different data centers so that the system can continue to
operate and serve data even under complex failure scenarios.
Flexible. Amazon DynamoDB is an extremely flexible system that does
not force its users into a particular data model or a particular
consistency model. DynamoDB tables do not have a fixed schema but
instead allow each data item to have any number of attributes,
including multi-valued attributes. Developers can optionally use
stronger consistency models when accessing the database, trading off
some performance and availability for a simpler model. They can also
take advantage of the atomic increment/decrement functionality of
DynamoDB for counters. [emphasis mine]
DynamoDB One Year Later: Bigger, Better, and 85% Cheaper… - about improvements
Finally, Aditya Dasgupta's presentation about Amazon's DynamoDB also analyzes its modus operandi regarding the CAP theorem.
Practical Guidance
In terms of practical guidance for retry handling, the DynamoDB team has meanwhile added a dedicated section about Handling Errors, including Error Retries and Exponential Backoff.
