What's your preferred strategy for dealing with DAX's maintenance windows?
DynamoDB itself has no MWs and is very highly available. When DAX is introduced into the mix, if it's the sole access point of clients to DDB then it becomes a SPOF. How do you then handle degradation gracefully during DAX scheduled downtimes?
My thinking was to not use the DAX Client directly but introduce some abstraction layer that allows it to fall back to direct DDB access when DAX is down. Is that a good approach?
A DAX maintenance window doesn't take the cluster offline unless it is a one-node cluster. DAX provides availability through multiple nodes in the cluster. For a multi-node cluster, each node goes through maintenance in a specific order so that the cluster remains available throughout. With retries configured on the DAX client, your workload shouldn't see an impact during maintenance windows.
Beyond maintenance windows, cluster nodes should also be spread across multiple AZs so the cluster stays available if an AZ goes down.
An abstraction layer that falls back to direct DynamoDB access is not a bad idea, but make sure the table has enough provisioned capacity to absorb the load spike when requests bypass the cache.
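If you do build that abstraction layer, a minimal sketch could look like the following. This assumes the Python amazondax client and boto3; the table name, region, endpoint, and the blanket exception handling are illustrative, not a production-ready implementation.

    import boto3
    import botocore.session
    from amazondax import AmazonDaxClient  # assumed: the Python DAX client package

    TABLE = "my-table"        # illustrative table name
    REGION = "us-west-2"      # illustrative region
    DAX_ENDPOINT = "my-cluster.xxxx.dax-clusters.us-west-2.amazonaws.com:8111"  # illustrative

    # Plain DynamoDB client as the fallback path.
    ddb = boto3.client("dynamodb", region_name=REGION)
    # DAX client as the cache-backed fast path.
    dax = AmazonDaxClient(botocore.session.get_session(),
                          region_name=REGION,
                          endpoints=[DAX_ENDPOINT])

    def get_item(key):
        """Read through DAX; fall back to DynamoDB if the cluster is unreachable."""
        try:
            return dax.get_item(TableName=TABLE, Key=key)
        except Exception:
            # DAX unreachable (e.g. a one-node cluster in its maintenance window):
            # serve the read directly from DynamoDB. The table needs enough
            # capacity to absorb this extra traffic.
            return ddb.get_item(TableName=TABLE, Key=key)

The broad except is deliberate for the sketch; in practice you would catch only connection-level errors and let request-level errors from DAX surface normally.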
I'm on a high-availability project that involves deploying a 2-node high-availability cluster for hot replacement of the services (applications) running on the cluster nodes. The applications have inbound and outbound TCP connections and also process UDP traffic (mainly for communicating with an NTP server).
The problem is fairly standard until you need hot migration of services to the backup node with all of the state held in RAM. The applications are agnostic of any backup mechanism, and it is highly undesirable to modify them.
The only approach I've come up with is duplication: both cluster nodes run the same applications, repeating each other's calculations. If the primary server fails, the backup server becomes the primary.
However, I have not found any ready-made proxy that does synchronous port mirroring. No existing proxy server (HAProxy, Dante, 3proxy, etc.) supports such a feature as far as I know. Have I missed something, or should I write a new one from scratch?
A rough sketch of the functionality can be found here:
p.s. I assume that it is possible to compare traffic from the two clones of the same application...
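For illustration only (this is not the sketch referenced above), the core of such a mirroring proxy is fairly small: accept a client TCP connection, copy every inbound chunk to both the primary and the backup clone, and return only the primary's responses to the client. A rough asyncio sketch, with all addresses hypothetical:

    import asyncio

    PRIMARY = ("10.0.0.1", 9000)   # hypothetical primary application endpoint
    BACKUP = ("10.0.0.2", 9000)    # hypothetical backup application endpoint

    async def handle_client(client_reader, client_writer):
        # One upstream connection per clone of the application.
        p_reader, p_writer = await asyncio.open_connection(*PRIMARY)
        b_reader, b_writer = await asyncio.open_connection(*BACKUP)

        async def client_to_upstreams():
            # Mirror every inbound chunk to both clones synchronously.
            while data := await client_reader.read(4096):
                p_writer.write(data)
                b_writer.write(data)
                await asyncio.gather(p_writer.drain(), b_writer.drain())

        async def primary_to_client():
            # Only the primary's responses go back to the client.
            while data := await p_reader.read(4096):
                client_writer.write(data)
                await client_writer.drain()

        async def drain_backup():
            # Read the backup's responses (and optionally compare them with
            # the primary's) so its socket buffers don't fill up.
            while await b_reader.read(4096):
                pass

        await asyncio.gather(client_to_upstreams(), primary_to_client(), drain_backup())

    async def main():
        server = await asyncio.start_server(handle_client, "0.0.0.0", 8000)
        async with server:
            await server.serve_forever()

    asyncio.run(main())

Connection teardown, failover to the backup, and UDP handling are omitted; those are where most of the real work would be.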
I have three workloads.
DATACENTER1: data shared via REST services - streaming ingest
DATACENTER2: bulk load - analysis
DATACENTER3: research
I want to isolate the workloads, so I am going to create one datacenter for each workload.
The objective is to prevent a heavy process from consuming all the resources and to guarantee high availability of the data.
Has anyone already tried this?
During a bulk load on DATACENTER2, does data availability stay good on DATACENTER1?
The short answer is that one workload won't disrupt the load on the other datacenters. How it works is as follows:
Conceptually, when you create a keyspace, Cassandra creates a virtual data center (VDC). Nodes with similar workloads must be assigned to the same VDC. Segregating workloads ensures that exactly one workload is ever executed in a given VDC. As long as you follow this pattern, it works.
Data sync needs to be monitored under load on busy nodes, but that's a normal concern on any Cassandra deployment.
DataStax Enterprise also supports this model, as can be seen from:
https://docs.datastax.com/en/datastax_enterprise/4.6/datastax_enterprise/deploy/deployWkLdSep.html#deployWkLdSep__srchWkLdSegreg
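In practice the separation comes from replicating the keyspace into each (virtual) datacenter and having clients use LOCAL_* consistency levels, so a bulk load in DATACENTER2 touches only its local replicas. A minimal sketch using the Python cassandra-driver, with the contact point, keyspace name, and replication factors purely illustrative:

    from cassandra.cluster import Cluster

    # Connect to any reachable node; the contact point is illustrative.
    cluster = Cluster(["10.0.1.10"])
    session = cluster.connect()

    # Replicate the keyspace into each datacenter so every workload
    # has a full local copy of the data.
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS app
        WITH replication = {
            'class': 'NetworkTopologyStrategy',
            'DATACENTER1': 3,
            'DATACENTER2': 3,
            'DATACENTER3': 2
        }
    """)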
Currently, I am researching Galera cluster on a number of servers (Linux CentOS). Scaling read traffic is very effective and easy, but scaling writes seems difficult (no improvement).
I have used many servers with MaxScale as a router (readconnroute) to distribute write queries in parallel to all servers, but the write speed is not improved.
One option would be to use the Spider storage engine in MariaDB. It supports sharding of tables and should improve write speeds compared to a Galera cluster. On the other hand, you will lose the high availability of the Galera cluster in favor of increased write speeds.
This slide set by Kentoku Shiba on Spider is a good overview of how Spider improves write scalability.
Galera does not improve write speed, because every server still has to apply every write. MySQL is generally poor at scaling writes. You could address this with a proxy (like the MaxScale you mentioned) and shard your data: pick a shard key for each table to distribute rows across multiple servers.
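To make the shard-key idea concrete, application-level sharding can be as simple as hashing the key to pick which server a row lives on. A toy sketch (hostnames, credentials, table, and key choice are all illustrative, and a real setup would use something more reshard-friendly than plain modulo hashing):

    import hashlib
    import pymysql  # assumed MySQL client library

    # One target per shard; hostnames are illustrative.
    SHARDS = ["db-shard-0.example.com", "db-shard-1.example.com", "db-shard-2.example.com"]

    def shard_for(user_id):
        # Map the shard key deterministically onto one server.
        digest = hashlib.md5(user_id.encode()).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]

    def insert_event(user_id, payload):
        # Each write hits exactly one shard, so total write throughput grows
        # with the number of shards (unlike Galera, where every node applies
        # every write).
        conn = pymysql.connect(host=shard_for(user_id), user="app",
                               password="secret", database="app")
        with conn.cursor() as cur:
            cur.execute("INSERT INTO events (user_id, payload) VALUES (%s, %s)",
                        (user_id, payload))
        conn.commit()
        conn.close()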
I would suggest using another NoSQL server instead, e.g. MongoDB, which has sharding built in for write-heavy use cases. MongoDB is much easier to set up and maintain than MySQL for this job.
Currently I am using one availability zone in my EC2 launch configuration. It is important that I don't get network partitions in my app, as RabbitMQ does not handle network partitions well when clustering and HA are used (which I am using).
I am very fuzzy on the concept of network partitions. Would it be safe for me to use two availability zones?
The different Amazon EC2 Availability Zones are in different physical locations. While the connections between availability zones are quite good, they are still effectively WAN links.
From the RabbitMQ docs:
RabbitMQ clusters do not tolerate network partitions well. If you are thinking of clustering across a WAN, don't. You should use federation or the shovel instead
(emphasis mine)
https://www.rabbitmq.com/partitions.html
In short, an interruption in connectivity of around a minute will cause a network partition. While this would be an unusual event within EC2, it can and sometimes will happen.
In the DynamoDB documentation and in many places around the internet I've seen that single-digit-millisecond response times are typical, but I cannot seem to achieve that even with the simplest setup. I have configured a t2.micro EC2 instance and a DynamoDB table, both in us-west-2, and when running the command below from the AWS CLI on the EC2 instance I get responses averaging about 250 ms. The same command run from my local machine (Denver) averages about 700 ms.
aws dynamodb get-item --table-name my-table --key file://key.json
When looking at the CloudWatch metrics in the AWS console it says the average get latency is 12 ms though. If anyone could tell me what I'm doing wrong or point me in the direction of information where I can solve this on my own I would really appreciate it. Thanks in advance.
The response times you are seeing are largely due to the cold-start cost of the AWS CLI. On every invocation of your get-item command the CLI has to be loaded into memory, fetch temporary credentials (if you're using an EC2 IAM role on your t2.micro instance), and establish a secure connection to the DynamoDB service. Only after all of that completes does it execute the get-item request and print the result to stdout. Your command also has to read key.json off the filesystem, which adds further overhead.
My experience running on a t2.micro instance is that the AWS CLI has around 200 ms of startup overhead, which seems in line with what you are seeing.
This will not be an issue for long-running programs, as they only pay a similar overhead price once, at start time. I run a number of web services on t2.micro instances that work with DynamoDB, and the DynamoDB response times are consistently sub-20 ms.
There are a lot of factors that go into the latency you will see when making a REST API call. DynamoDB can provide latencies in the single digit milliseconds but there are some caveats and things you can do to minimize the latency.
The first thing to consider is distance and the speed of light. Expect the best latency when accessing DynamoDB from an EC2 instance located in the same region. It is normal to see higher latencies when accessing DynamoDB from your laptop or another data center. Note that each region also has multiple data centers.
There are also performance costs from the client side based on the hardware, network connection, and programming language that you are using. When you are talking millisecond latencies the processing time on your machine can make a difference.
Another likely source of latency is the TLS handshake. Establishing an encrypted connection requires multiple round trips and computation on both sides. However, as long as you use keep-alive on the connection, you only pay this overhead for the first query; subsequent queries are substantially faster since they do not incur this initial penalty. Unfortunately the AWS CLI isn't going to keep the connection alive between requests, but the AWS SDKs for most languages will manage this for you automatically.
Another important consideration is that the latency that DynamoDB reports in the web console is the average. While DynamoDB does provide reliable average low double digit latency, the maximum latency will regularly be in the hundreds of milliseconds or even higher. This is visible by viewing the maximum latency in CloudWatch.
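You can see both effects (per-invocation CLI overhead and the one-time handshake cost) by reusing a single SDK client and timing repeated reads. A small boto3 sketch, with the table name, region, and key illustrative:

    import time
    import boto3

    # Reusing one client keeps the TLS connection alive between requests,
    # so only the first call pays the handshake and setup overhead.
    table = boto3.resource("dynamodb", region_name="us-west-2").Table("my-table")

    for i in range(5):
        start = time.perf_counter()
        table.get_item(Key={"id": "example-key"})
        elapsed_ms = (time.perf_counter() - start) * 1000
        print("request %d: %.1f ms" % (i, elapsed_ms))  # expect the first call to be the slowest

From an EC2 instance in the same region, the later iterations should land much closer to the latencies the CloudWatch console reports.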
They recently announced DAX (Preview).
Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second. For more information, see In-Memory Acceleration with DAX (Preview).