Custom IDs with AWS API Gateway and DynamoDB - amazon-dynamodb

I followed this tutorial to connect AWS API Gateway and DynamoDB. My DynamoDB table is defined as follows:
Resources:
  DragonsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: mytable
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 1
        WriteCapacityUnits: 1
I was hoping to be able to use my own custom id values, but when I include one in the POST request it is overridden and replaced with a UUID. Is there any way I can use my own IDs?

DynamoDB does not auto-generate IDs for you.
You are setting your mapping template to use the context requestId, as shown in the tutorial:
  "id": { "S": "$context.requestId" }
You should change this template mapping to the id you wish to use in your OpenAPI mapping.
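For example, a PutItem mapping template that takes the id from the POST body instead (a sketch; it assumes the client sends a JSON body like {"id": "dragon-42"} and uses the table name from the question):

```json
{
    "TableName": "mytable",
    "Item": {
        "id": { "S": "$input.path('$.id')" }
    }
}
```

$input.path('$.id') is the API Gateway VTL accessor for the id field of the request body; the other item attributes from the tutorial's template would sit alongside id as before.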

Related

AWS SAM recreates DynamoDB Table when adding SortKey or GlobalSecondaryIndex

I am using AWS SAM to deploy a DynamoDB table and my template.yaml looks something like this:
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Resources:
  DynamoDBTable:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: owner
          AttributeType: S
      KeySchema:
        - AttributeName: owner
          KeyType: HASH
I do sam build && sam deploy to (re-)deploy it.
When I add a sortKey and/or a GlobalSecondaryIndex the yaml file looks something like this:
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Resources:
  DynamoDBTable:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: owner
          AttributeType: S
        - AttributeName: Timestamp
          AttributeType: S
      KeySchema:
        - AttributeName: owner
          KeyType: HASH
        - AttributeName: Timestamp
          KeyType: RANGE
      GlobalSecondaryIndexes:
        - IndexName: TestIndex
          KeySchema:
            - AttributeName: owner
              KeyType: HASH
            - AttributeName: Timestamp
              KeyType: RANGE
          Projection:
            ProjectionType: KEYS_ONLY
According to the docs, updating these fields should be possible (no interruption).
But in my case the deploy command always recreates the whole table (deleting all data).
Am I doing something wrong here?
Edit
Maybe my explanation was unclear on that point: I tried adding both (the GSI and the sort key) at once, but I also tried adding each one individually, e.g. just the GSI.
A DynamoDB table's key schema and LSIs can only be set at table creation; only GSIs can be added later.
To add to that: give resources such as databases and DynamoDB tables an explicit name attribute in SAM/CloudFormation to avoid them being deleted. When a named resource needs replacement, the deploy fails instead of deleting it and replacing it with a new resource.
Ex:
DynamoDBTable:
  Type: AWS::DynamoDB::Table
  Properties:
    TableName: "test-table"
    BillingMode: PAY_PER_REQUEST
Adding a sort key, or otherwise changing the KeySchema, requires the table to be replaced. See the AWS::DynamoDB::Table docs for the table definition.
Adding/Changing an LSI also requires replacement.
Adding a GSI can be done without interruption.
Though I think changing a GSI's KeySchema would require the GSI itself to be replaced as well... the docs seem to imply that it doesn't.
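As an extra guard (not from the answers above, just a common pattern): a DeletionPolicy of Retain keeps the underlying table alive if CloudFormation ever deletes or replaces the logical resource. Note it does not prevent a replacement; the stack will simply point at a fresh, empty table while the old one survives outside the stack:

```yaml
DynamoDBTable:
  Type: AWS::DynamoDB::Table
  DeletionPolicy: Retain   # the physical table outlives stack deletion/replacement
  Properties:
    BillingMode: PAY_PER_REQUEST
    AttributeDefinitions:
      - AttributeName: owner
        AttributeType: S
    KeySchema:
      - AttributeName: owner
        KeyType: HASH
```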

How to update dynamodb table using cloud formation

How to enable "PointInTimeRecoverySpecification" for existing dynamodb tables by using cloud formation.
I have tried like below:
Resources:
  mytableenablerecovery:
    Type: "AWS::DynamoDB::Table"
    Properties:
      AttributeDefinitions:
        - AttributeName: ArtistId
          AttributeType: S
      KeySchema:
        - AttributeName: ArtistId
          KeyType: HASH
      PointInTimeRecoverySpecification:
        PointInTimeRecoveryEnabled: true
      ProvisionedThroughput:
        ReadCapacityUnits: "5"
        WriteCapacityUnits: "5"
      TableName: mytablename123
But it creates a new table if one does not exist; otherwise it throws the error "mytablename123 already exists in stack arn:aws:cloudformation:us-east-".
While the list is expanding, only some resource types currently support importing existing resources into CloudFormation.
Luckily, AWS::DynamoDB::Table is currently one of those resource types.
To import existing resources of a supported type into CloudFormation, they must be imported using change sets, as described here.
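A sketch of that import flow (logical ID and table name taken from the question; double-check the flags against the current AWS CLI docs): describe the existing table in a resources-to-import file, then create a change set of type IMPORT.

```json
[
  {
    "ResourceType": "AWS::DynamoDB::Table",
    "LogicalResourceId": "mytableenablerecovery",
    "ResourceIdentifier": { "TableName": "mytablename123" }
  }
]
```

This file is passed as, e.g., aws cloudformation create-change-set --change-set-type IMPORT --resources-to-import file://import.json together with your template and stack name; once the import succeeds, later stack updates can toggle PointInTimeRecoverySpecification in place.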

serverless dynamodb enable continuous backups

How do I enable continuous backups for my DynamoDB table when using the Serverless Framework?
Ideally, I would define something in serverless.yml that would enable automatic DynamoDB backups
It's a little hidden in a couple of docs, but this can be done by defining PointInTimeRecoverySpecification in the resources section of your serverless.yml file, e.g.
resources:
  Resources:
    developers:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: myTable
        AttributeDefinitions:
          - AttributeName: myId
            AttributeType: S
        KeySchema:
          - AttributeName: myId
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        PointInTimeRecoverySpecification:
          PointInTimeRecoveryEnabled: true

Disable/Prevent CloudWatch Alarms when creating a new DynamoDB table with CloudFormation

I have several non-scaling DynamoDB tables created via CloudFormation. Each table gets CloudWatch Alarms auto-created (and more for each GSI). In PROD this is okay, but in DEV the cost adds up. For example, for the action table with a GSI I get the following alarms created:
action-ReadCapacityUnitsLimit-BasicAlarm
action-WriteCapacityUnitsLimit-BasicAlarm
action-siteId-lastCaptured-index-ReadCapacityUnitsLimit-BasicAlarm
action-siteId-lastCaptured-index-WriteCapacityUnitsLimit-BasicAlarm
My CF template is quite simple for each table. For example:
tableuser:
  Type: 'AWS::DynamoDB::Table'
  DependsOn: tablepage
  Properties:
    TableName: user
    AttributeDefinitions:
      - AttributeName: id
        AttributeType: S
    KeySchema:
      - AttributeName: id
        KeyType: HASH
    PointInTimeRecoverySpecification:
      PointInTimeRecoveryEnabled: true
    ProvisionedThroughput:
      ReadCapacityUnits:
        Ref: 5
      WriteCapacityUnits:
        Ref: 5
How can I disable CloudWatch Alarms for CloudFormation-created DynamoDB tables? Of course I would prefer to do this via CloudFormation templates itself, but since I am not specifying their creation, I am not sure if this is possible?
If you choose on-demand capacity (https://aws.amazon.com/dynamodb/pricing/) instead of provisioned, no alarms will be created.
Instead of
  ProvisionedThroughput:
    ReadCapacityUnits:
      Ref: 5
    WriteCapacityUnits:
      Ref: 5
you say
  BillingMode: PAY_PER_REQUEST
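Put together, the table from the question might look like this on-demand (a sketch using the names from the question):

```yaml
tableuser:
  Type: 'AWS::DynamoDB::Table'
  DependsOn: tablepage
  Properties:
    TableName: user
    BillingMode: PAY_PER_REQUEST   # on-demand capacity: no provisioned-capacity alarms
    AttributeDefinitions:
      - AttributeName: id
        AttributeType: S
    KeySchema:
      - AttributeName: id
        KeyType: HASH
    PointInTimeRecoverySpecification:
      PointInTimeRecoveryEnabled: true
```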

I need a list in a dynamodb using cloud formation

I have a users table and a requests table, with many requests per user. I would like to have a list of requests in the users table, but I am not sure how to write the CloudFormation call to build it. Currently I have just a flat set of attributes:
resources:
  Resources:
    DynamoDbTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: Employee
        AttributeDefinitions:
          - AttributeName: employeeid
            AttributeType: S
          - AttributeName: name
            AttributeType: S
          - AttributeName: requests
            AttributeType: S
        KeySchema:
          - AttributeName: employeeid
            KeyType: HASH
I would like requests to be a list of request IDs for the user, not a string value (so not type S), so I can loop through them and fetch the ones I want. Let me know if my schema is OK. Thanks in advance.
Take a look at the following documentation. Notice that as long as you don't use an attribute in an index, you don't need to define it:
DynamoDB is a NoSQL database and is schemaless, which means that, other than the primary key attributes, you do not need to define any attributes or data types at table creation time.
So in your case, the serverless.yml should only specify:
resources:
  Resources:
    DynamoDbTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: Employee
        AttributeDefinitions:
          - AttributeName: employeeid
            AttributeType: S
        KeySchema:
          - AttributeName: employeeid
            KeyType: HASH
And in your code you can dynamically write attributes to the table that hold lists, sets, maps, or even whole JSON documents.
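For illustration (hypothetical values), an Employee item with a requests list looks like this in DynamoDB's low-level JSON format; note that requests never appears in AttributeDefinitions:

```json
{
  "employeeid": { "S": "emp-001" },
  "name": { "S": "Alice" },
  "requests": {
    "L": [
      { "S": "request-100" },
      { "S": "request-101" }
    ]
  }
}
```

Each element of the L list is itself a typed value, so you can loop over the stored IDs and fetch the matching items from the requests table.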
