How do I enable continuous backups for my DynamoDB table when using the Serverless Framework?
Ideally, I would define something in serverless.yml that would enable automatic DynamoDB backups
It's a little hidden in a couple of docs, but this can be done by defining PointInTimeRecoverySpecification in the resources section of your serverless.yml file, e.g.
resources:
  Resources:
    developers:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: myTable
        AttributeDefinitions:
          - AttributeName: myId
            AttributeType: S
        KeySchema:
          - AttributeName: myId
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        PointInTimeRecoverySpecification:
          PointInTimeRecoveryEnabled: true
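After deploying, you can confirm the setting took effect with the AWS CLI (table name taken from the example above):

```shell
# Reports continuous-backup status for the table; look for
# "PointInTimeRecoveryStatus": "ENABLED" in the output.
aws dynamodb describe-continuous-backups --table-name myTable
```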
I followed this tutorial to connect AWS API Gateway and DynamoDB. My DynamoDB table is defined as follows:
Resources:
  DragonsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: mytable
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 1
        WriteCapacityUnits: 1
I was hoping to be able to use my own custom id values, but when I include one in the POST request, it is overridden and replaced with a UUID. Is there any way I can use my own IDs?
DynamoDB does not auto-generate IDs for you.
You are setting your mapping template to use the context requestId, as shown in the tutorial:
"id": { "S": "$context.requestId" }
Set this mapping template to the id you wish to use in your OpenAPI mapping.
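For example, if the POST body carries the id, a mapping template along these lines would pass it through instead of the request id. This is a sketch: it uses API Gateway's $input.path to read the id field from the request body, the table name is taken from the question, and any other item attributes are omitted:

```json
{
  "TableName": "mytable",
  "Item": {
    "id": { "S": "$input.path('$.id')" }
  }
}
```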
I am using AWS SAM to deploy a DynamoDB table and my template.yaml looks something like this:
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Resources:
  DynamoDBTable:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: owner
          AttributeType: S
      KeySchema:
        - AttributeName: owner
          KeyType: HASH
I do sam build && sam deploy to (re-)deploy it.
When I add a sortKey and/or a GlobalSecondaryIndex the yaml file looks something like this:
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Resources:
  DynamoDBTable:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: owner
          AttributeType: S
        - AttributeName: Timestamp
          AttributeType: S
      KeySchema:
        - AttributeName: owner
          KeyType: HASH
        - AttributeName: Timestamp
          KeyType: RANGE
      GlobalSecondaryIndexes:
        - IndexName: TestIndex
          KeySchema:
            - AttributeName: owner
              KeyType: HASH
            - AttributeName: Timestamp
              KeyType: RANGE
          Projection:
            ProjectionType: KEYS_ONLY
According to the docs updating these fields should be possible (no interruption).
But in my case the deploy command always recreates the whole table (deleting all data).
Am I doing something wrong here?
Edit
Maybe my explanation was a bit unclear. I tried to add both (GSI and sort key), but I also tried adding each one by one, i.e. just adding the GSI.
A DynamoDB table's key schema and LSIs can only be set during table creation; only GSIs can be added later.
Just to add on to this: we should set an explicit name attribute in SAM/CloudFormation for resources like databases and DynamoDB tables to avoid them getting deleted. When a named resource needs replacement, the deploy will fail rather than deleting it and replacing it with a new resource.
Ex:
DynamoDBTable:
  Type: AWS::DynamoDB::Table
  Properties:
    TableName: "test-table"
    BillingMode: PAY_PER_REQUEST
Adding a sort key, or otherwise changing the KeySchema, requires the table to be replaced. See the docs for the table definition.
Adding/Changing an LSI also requires replacement.
Adding a GSI can be done without interruption.
Though I think changing a GSI's KeySchema would require the GSI to be replaced as well... the docs seem to imply that it doesn't.
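One way to guard against data loss while iterating on the schema is to attach retention policies to the table, so CloudFormation keeps the underlying table even when the stack resource is deleted or replaced. A sketch using the question's own key (note that on replacement the old table is retained, but the stack points at a fresh, empty one):

```yaml
DynamoDBTable:
  Type: AWS::DynamoDB::Table
  DeletionPolicy: Retain        # keep the table if the resource is removed from the stack
  UpdateReplacePolicy: Retain   # keep the old table if an update forces replacement
  Properties:
    BillingMode: PAY_PER_REQUEST
    AttributeDefinitions:
      - AttributeName: owner
        AttributeType: S
    KeySchema:
      - AttributeName: owner
        KeyType: HASH
```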
How do I enable PointInTimeRecoverySpecification for existing DynamoDB tables using CloudFormation?
I have tried the following:
Resources:
  mytableenablerecovery:
    Properties:
      AttributeDefinitions:
        - AttributeName: ArtistId
          AttributeType: S
      KeySchema:
        - AttributeName: ArtistId
          KeyType: HASH
      PointInTimeRecoverySpecification:
        PointInTimeRecoveryEnabled: true
      ProvisionedThroughput:
        ReadCapacityUnits: "5"
        WriteCapacityUnits: "5"
      TableName: mytablename123
    Type: "AWS::DynamoDB::Table"
But it creates a new table if one does not exist; otherwise, it throws the error "mytablename123 already exists in stack arn:aws:cloudformation:us-east-"
While the list is expanding, only some resource types currently support importing existing resources into CloudFormation.
Luckily, AWS::DynamoDB::Table is currently one of those resource types.
To import existing resources of one of those supported resource types into CloudFormation, they must be imported using change sets, as described here.
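A sketch of that import flow with the AWS CLI (the stack, change-set, and file names are placeholders; the logical ID and table name are taken from the question). The template must already contain the table resource, and imported resources need a DeletionPolicy:

```shell
# resources-to-import.json maps the template's logical ID to the existing table:
# [{"ResourceType": "AWS::DynamoDB::Table",
#   "LogicalResourceId": "mytableenablerecovery",
#   "ResourceIdentifier": {"TableName": "mytablename123"}}]

aws cloudformation create-change-set \
  --stack-name my-stack \
  --change-set-name import-dynamodb-table \
  --change-set-type IMPORT \
  --resources-to-import file://resources-to-import.json \
  --template-body file://template.yaml

aws cloudformation execute-change-set \
  --stack-name my-stack \
  --change-set-name import-dynamodb-table
```

Once the table is part of the stack, a normal stack update can then set PointInTimeRecoverySpecification on it.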
I have several non-scaling DynamoDB tables created via CloudFormation. Each table auto-creates CloudWatch Alarms (and more for each GSI). In PROD this is okay, but in DEV this adds up in terms of cost. For example, for the action table with a GSI, I get the following alarms created:
action-ReadCapacityUnitsLimit-BasicAlarm
action-WriteCapacityUnitsLimit-BasicAlarm
action-siteId-lastCaptured-index-ReadCapacityUnitsLimit-BasicAlarm
action-siteId-lastCaptured-index-WriteCapacityUnitsLimit-BasicAlarm
My CF template is quite simple for each table. For example:
tableuser:
  Type: 'AWS::DynamoDB::Table'
  DependsOn: tablepage
  Properties:
    TableName: user
    AttributeDefinitions:
      - AttributeName: id
        AttributeType: S
    KeySchema:
      - AttributeName: id
        KeyType: HASH
    PointInTimeRecoverySpecification:
      PointInTimeRecoveryEnabled: true
    ProvisionedThroughput:
      ReadCapacityUnits:
        Ref: 5
      WriteCapacityUnits:
        Ref: 5
How can I disable CloudWatch Alarms for CloudFormation-created DynamoDB tables? Of course I would prefer to do this via CloudFormation templates itself, but since I am not specifying their creation, I am not sure if this is possible?
If you choose on-demand capacity (https://aws.amazon.com/dynamodb/pricing/) instead of provisioned, no alarms will be created.
Instead of
ProvisionedThroughput:
  ReadCapacityUnits:
    Ref: 5
  WriteCapacityUnits:
    Ref: 5
You say
BillingMode: PAY_PER_REQUEST
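Putting this together, the table from the question with on-demand billing would look something like this (same key and PITR settings; the ProvisionedThroughput block is dropped entirely):

```yaml
tableuser:
  Type: 'AWS::DynamoDB::Table'
  Properties:
    TableName: user
    BillingMode: PAY_PER_REQUEST
    AttributeDefinitions:
      - AttributeName: id
        AttributeType: S
    KeySchema:
      - AttributeName: id
        KeyType: HASH
    PointInTimeRecoverySpecification:
      PointInTimeRecoveryEnabled: true
```

Switching an existing table from provisioned to on-demand should be an in-place update, so no table replacement occurs.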
From the command line or the online API, I have no trouble creating a "composite primary key", but when I try to use CloudFormation to do the job for me, I don't see any JSON/YAML that will let me set something called a "composite primary key". The language is completely different, so I was hoping someone could guide me as to how to create such a key using CloudFormation.
My best guess is something like the following where I want the composite key to consist of both userId and noteId:
Resources:
  usersTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: notes_serverless
      AttributeDefinitions:
        - AttributeName: userId
          AttributeType: S
        - AttributeName: noteId
          AttributeType: S
      KeySchema:
        - AttributeName: userId
          KeyType: HASH
        - AttributeName: noteId
          KeyType: RANGE
      ProvisionedThroughput:
        ReadCapacityUnits: 1
        WriteCapacityUnits: 1
Here is the YAML syntax for DynamoDB table creation with partition and sort keys.
The OP's syntax is almost correct. I have just formatted it with proper quotes and rearranged the order of the properties.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  usersTable:
    Type: "AWS::DynamoDB::Table"
    Properties:
      AttributeDefinitions:
        - AttributeName: "userId"
          AttributeType: "S"
        - AttributeName: "noteId"
          AttributeType: "S"
      KeySchema:
        - AttributeName: "userId"
          KeyType: "HASH"
        - AttributeName: "noteId"
          KeyType: "RANGE"
      ProvisionedThroughput:
        ReadCapacityUnits: "5"
        WriteCapacityUnits: "5"
      TableName: "notes_serverless"
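With this composite key, each item is addressed by both attributes, so a GetItem call must supply userId and noteId together. A quick check with the AWS CLI (placeholder key values):

```shell
aws dynamodb get-item \
  --table-name notes_serverless \
  --key '{"userId": {"S": "user-1"}, "noteId": {"S": "note-1"}}'
```

A Query, by contrast, needs only the userId partition key and can range over noteId.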