DynamoDB get_item "The provided key element does not match the schema"

I'm trying to implement a DynamoDB get_item call using the AWS SDK for Ruby (https://github.com/aws/aws-sdk-ruby/blob/0465bfacbf87e6bc78c38191961ed860413d85cd/gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/table.rb#L697) from our Ruby on Rails app, dr-recommends.
From reviewing the DynamoDB CloudFormation stack that I wrote, I expect the following get_item call to work, so I'm a little lost as to how to proceed.
Type: 'AWS::DynamoDB::Table'
Properties:
  AttributeDefinitions:
    - AttributeName: 'pk'
      AttributeType: 'S'
    - AttributeName: 'sk'
      AttributeType: 'S'
  KeySchema:
    - KeyType: 'HASH'
      AttributeName: 'pk'
    - KeyType: 'RANGE'
      AttributeName: 'sk'
  BillingMode: 'PAY_PER_REQUEST'
Do you see anything here that would explain why the following call might not work?
aws dynamodb get-item --table-name actual-table-name --key "{\"pk\":{\"S\":\"77119f89-d5bc-4662-91bb-f3e81e1d9b21\"}}" --projection-expression sk
# An error occurred (ValidationException) when calling the GetItem operation: The provided key element does not match the schema

GetItem requires the full primary key. Since your table's key schema defines both a hash (aka partition) key and a range (sort) key, you need to specify both pk and sk in the request.
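For example, a sketch of the corrected call (the sk value here is a placeholder, since the real sort key value isn't shown in the question):

aws dynamodb get-item --table-name actual-table-name --key "{\"pk\":{\"S\":\"77119f89-d5bc-4662-91bb-f3e81e1d9b21\"},\"sk\":{\"S\":\"SOME-SORT-KEY-VALUE\"}}" --projection-expression sk

If you only know the pk, use the Query operation with a key condition on pk instead of GetItem; GetItem can only fetch a single item addressed by its complete primary key.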

Related

Is there a way to override AppSync auto generated input types when using Amplify?

Let's say we have the following GraphQL schema:
type Customer
  @model
  @auth(
    rules: [
      { allow: owner }
      { allow: groups, groups: ["Admin"] }
      { allow: public, provider: iam }
    ]
  ) {
  id: ID! @primaryKey
  owner: ID!
  customer_last_name: String
}
When pushing the above schema to AppSync via AWS Amplify, the following is created in the autogenerated GraphQL schema:
type Query {
  getCustomer(id: ID!): Customer
    @aws_iam
    @aws_cognito_user_pools
  listCustomers(
    id: ID,
    filter: ModelCustomerFilterInput,
    limit: Int,
    nextToken: String,
    sortDirection: ModelSortDirection
  ): ModelCustomerConnection
    @aws_iam
    @aws_cognito_user_pools
}
Is it possible to pass and enforce a custom argument for the query input, such as
getCustomer(id: ID!, owner: ID!): Customer
instead of the autogenerated getCustomer(id: ID!): Customer?
This can be done by editing the autogenerated schema directly in the AppSync console, but the changes will be lost on the next Amplify push.
There can be only one hash key (aka partition key) in a DynamoDB table. If you want more than one attribute to form the key, you either concatenate those values into a single partition key attribute or use a partition key together with a sort key; the latter pattern is called a composite primary key, and it is what you want here.
For more information,
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.PrimaryKey
For creating a composite key with an Amplify GraphQL schema, see the details at the link below (assuming you are using the Amplify GraphQL Transformer v1):
https://docs.amplify.aws/cli-legacy/graphql-transformer/key/
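As a sketch only (assuming the Transformer v1 @key directive, as the answer above does), redefining the primary key of Customer as a composite of id and owner would make the generated getCustomer query require both arguments:

type Customer
  @model
  @key(fields: ["id", "owner"]) # id becomes the partition key, owner the sort key
  @auth(
    rules: [
      { allow: owner }
      { allow: groups, groups: ["Admin"] }
      { allow: public, provider: iam }
    ]
  ) {
  id: ID!
  owner: ID!
  customer_last_name: String
}

With this, Amplify should generate getCustomer(id: ID!, owner: ID!): Customer instead of getCustomer(id: ID!): Customer.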

Serverless Step Functions: getting an error when passing more than one field in the payload for Lambda

Error: Invalid State Machine Definition: 'SCHEMA_VALIDATION_FAILED: The value for the field 'Date.$' must be a valid JSONPath at /States/Insert Data Dynamodb/Parameters' (Service: AWSStepFunctions; Status Code: 400; Error Code: InvalidDefinition;
Below is the corresponding serverless.yml code.
I tried wrapping the two parameters into an encoded JSON string and passing it as a single payload field, which resulted in the same error; when there is only one plain field in the payload, this code deploys successfully.
Any suggestions on how to pass two parameters?
service: service-name
frameworkVersion: '2'
provider:
  name: aws
  runtime: go1.x
  lambdaHashingVersion: 20201221
  stage: ${opt:stage, self:custom.defaultStage}
  region: us-east-1
  tags: ${self:custom.tagsObject}
  logRetentionInDays: 1
  timeout: 10
  deploymentBucket: lambda-repository
  memorySize: 128
  tracing:
    lambda: true
plugins:
  - serverless-step-functions
configValidationMode: error
stepFunctions:
  stateMachines:
    sortData:
      name: datasorting-dev
      type: STANDARD
      role: ${self:custom.datasorting.${self:provider.stage}.iam}
      definition:
        Comment: "Data Sort"
        StartAt: Query Data
        States:
          Query Data:
            Type: Task
            Resource: arn:aws:states:::athena:startQueryExecution.sync
            Parameters:
              QueryString: >-
                select * from table.data
              WorkGroup: primary
              ResultConfiguration:
                OutputLocation: s3://output/location
            Next: Insert Data Dynamodb
          Insert Data Dynamodb:
            Type: Task
            Resource: arn:aws:states:::lambda:invoke
            Parameters:
              FunctionName: arn:aws:lambda:us-east-1:${account-id}:function:name
              Payload:
                OutputLocation.$: $.QueryExecution.ResultConfiguration.OutputLocation
                Date.$: ${self:custom.dates.year}${self:custom.dates.month}${self:custom.dates.day}
            End: true
Your Date.$ property has the value ${self:custom.dates.year}${self:custom.dates.month}${self:custom.dates.day}. Let's assume that:
const dates = {
  "year": "2000",
  "month": "01",
  "day": "20"
}
The result will be Date.$: "20000120", which is not a valid JSONPath.
A JSONPath needs to start with a $ sign, and each level is separated by a dot (.).
Do you want to achieve something like $.2000.01.20?
As you can see, the issue is not with passing two parameters but with the invalid JSONPath string created by string interpolation for Date.$.
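One possible fix (a sketch, not from the original answer): if the date is meant to be a static value resolved at deploy time rather than read from the state input, drop the .$ suffix so Step Functions treats Date as a literal string instead of a JSONPath:

Payload:
  OutputLocation.$: $.QueryExecution.ResultConfiguration.OutputLocation
  Date: ${self:custom.dates.year}${self:custom.dates.month}${self:custom.dates.day} # plain value, no JSONPath lookup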
Some useful links:
https://github.com/json-path/JsonPath
https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-paths.html

DynamoDB table name created with the serverless framework has a random suffix

I am using the serverless framework to create a DynamoDB table and then I want to access it from a Lambda function.
In the serverless.yml file I have the definitions below for the environment variable and CF resources.
What I was expecting was a table with the name accounts-api-dev-accounts, but what the CloudFormation stack creates for me is accounts-api-dev-accounts-SOME_RANDOM_LETTERS_AND_NUMBERS_SUFFIX.
In my Lambda function the environment variable DYNAMODB_ACCOUNTS_TABLE_NAME is exposed without the SOME_RANDOM_LETTERS_AND_NUMBERS_SUFFIX part. Is the CF stack supposed to add a random suffix? How do I actually retrieve the right table name?
service:
  name: accounts-api
provider:
  ...
  stage: ${opt:stage, 'dev'}
  environment:
    DYNAMODB_ACCOUNTS_TABLE_NAME: '${self:service}-${self:provider.stage}-accounts'
And the following CF resource:
Resources:
  AccountsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: ${env:DYNAMODB_ACCOUNTS_TABLE_NAME}
      AttributeDefinitions:
        - AttributeName: customerNumber
          AttributeType: S
        - AttributeName: accountNumber
          AttributeType: S
      KeySchema:
        - AttributeName: customerNumber
          KeyType: HASH
        - AttributeName: accountNumber
          KeyType: RANGE
      ProvisionedThroughput:
        ReadCapacityUnits: 1
        WriteCapacityUnits: 1
Maybe the environment variables are not yet resolved at the time the table definition is created? I'm not sure.
Try ${self:provider.environment.DYNAMODB_ACCOUNTS_TABLE_NAME} instead of ${env:DYNAMODB_ACCOUNTS_TABLE_NAME}. In the Serverless Framework, ${env:...} reads variables from the shell environment at deploy time, not from the provider.environment block in serverless.yml, so TableName most likely isn't resolving to the value you expect; use ${self:provider.environment....} to reference values defined there.
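A sketch of the resource with that change applied:

Resources:
  AccountsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: ${self:provider.environment.DYNAMODB_ACCOUNTS_TABLE_NAME}
      # ...rest of the properties unchanged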
I haven't seen this behavior yet (random characters after deploy); it could be a way to force uniqueness when the table has to be replaced. You could also use another environment variable and have its value populated from the Table resource itself. That way, CloudFormation injects the actual resource name into the Lambda environment variable. I haven't tried this, but it would be my first "go to":
environment:
  DYNAMODB_ACCOUNTS_TABLE_NAME: '${self:service}-${self:provider.stage}-accounts'
  ACTUAL_DYNAMODB_ACCOUNTS_TABLE_NAME:
    Ref: AccountsTable

Error while editing an existing GSI in my schema

I am using AppSync with DynamoDB and a GraphQL API. My current schema works as expected, but when I try to edit an existing GSI I get an error.
My current schema looks like this:
type Item @model
  @key(fields: ["id", "version"])
  @key(name: "type-subtype", fields: ["type", "subtype"])
{
  id: ID!
  version: String!
  type: String!
  subtype: String!
}
I want to change the GSI defined here to be:
@key(name: "version-type-subtype", fields: ["version", "type", "subtype"]
The error I am getting is: Schema Creation Status is FAILED with details: Failed to parse schema document - ensure it's a valid SDL-formatted document.
Can someone help? Is there any constraint on using the sort key of the primary index as the hash key of a GSI that I am not aware of?
Thanks

How to configure the StreamArn of an existing DynamoDB table

I'm creating a Serverless Framework project.
The DynamoDB table is created by another CloudFormation stack.
How can I refer to the existing DynamoDB table's StreamArn in serverless.yml?
I have the configuration below:
resources:
  Resources:
    MyDbTable: # 'arn:aws:dynamodb:us-east-2:xxxx:table/MyTable'

provider:
  name: aws
  ...

onDBUpdate:
  handler: handler.onDBUpdate
  events:
    - stream:
        type: dynamodb
        arn:
          Fn::GetAtt:
            - MyDbTable
            - StreamArn
EDIT:
- If your tables were created in another Serverless service, you can skip steps 1, 4 and 8.
- If your tables were created in a standard CloudFormation stack, edit that stack to add the outputs from step 2, and skip steps 1, 4 and 8.
Stuck with the same issue, I came up with the following workaround:
1. Create a new serverless service with only the tables in it (you want to make a copy of your existing tables' set-up):
service: MyResourcesStack
resources:
  Resources:
    FoosTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${opt:stage}-${self:service}-foos
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        StreamSpecification:
          StreamViewType: NEW_AND_OLD_IMAGES # This enables the table's stream
(Optional) You can use serverless-dynamodb-autoscaling to configure autoscaling from the serverless.yml:
plugins:
  - serverless-dynamodb-autoscaling
custom:
  capacities:
    - table: FoosTable # DynamoDB Resource
      read:
        minimum: 5  # Minimum read capacity
        maximum: 50 # Maximum read capacity
        usage: 0.75 # Targeted usage percentage
      write:
        minimum: 5  # Minimum write capacity
        maximum: 50 # Maximum write capacity
        usage: 0.75 # Targeted usage percentage
2. Set up the stack to output the table name, Arn and StreamArn:
Outputs:
  FoosTableName:
    Value:
      Ref: FoosTable
  FoosTableArn:
    Value: {"Fn::GetAtt": ["FoosTable", "Arn"]}
  FoosTableStreamArn:
    Value: {"Fn::GetAtt": ["FoosTable", "StreamArn"]}
3. Deploy the stack.
4. Copy the data from your old tables to the newly created ones.
To do so, I used this script, which works well if the old and new tables are in the same region and the tables are not huge. For larger tables, you may want to use AWS Data Pipeline.
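As a rough illustration only (this is not the script referenced above), a one-off copy of a small table can be done with the AWS CLI and jq; it ignores Scan pagination, and the table names old-foos and dev-MyResourcesStack-foos are placeholders:

aws dynamodb scan --table-name old-foos --output json \
  | jq -c '.Items[]' \
  | while read -r item; do
      aws dynamodb put-item --table-name dev-MyResourcesStack-foos --item "$item"
    done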
5. Replace the hardcoded references to your tables in your initial service with the previously outputted variables:
provider:
  environment:
    stage: ${opt:stage}
    region: ${self:provider.region}
    dynamoDBTablesStack: "MyResourcesStack-${opt:stage}" # Your resources stack's name and the current stage
    foosTable: "${cf:${self:provider.environment.dynamoDBTablesStack}.FoosTableName}"
    foosTableArn: "${cf:${self:provider.environment.dynamoDBTablesStack}.FoosTableArn}"
    foosTableStreamArn: "${cf:${self:provider.environment.dynamoDBTablesStack}.FoosTableStreamArn}"

functions:
  myFunction:
    handler: myFunction.handler
    events:
      - stream:
          batchSize: 100
          type: dynamodb
          arn: ${self:provider.environment.foosTableStreamArn}
6. Deploy those changes.
7. Test everything.
8. Back up and delete your old tables.
