I want to insert an item only if its partition (hash) key already exists in the table. I am attempting to use a condition expression along with attribute_exists to achieve this, but I am getting unexpected results.
The example table
{
"TableName": "example",
"KeySchema": [
{ "AttributeName": "PK", "KeyType": "HASH" },
{ "AttributeName": "SK", "KeyType": "RANGE" }
],
"AttributeDefinitions": [
{ "AttributeName": "PK", "AttributeType": "S" },
{ "AttributeName": "SK", "AttributeType": "S" }
]
}
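For reference, a table like this can be created against DynamoDB Local with something like the following (the billing mode is my assumption; provisioned throughput would work just as well):
aws dynamodb create-table --table-name "example" \
    --endpoint-url http://localhost:8000 \
    --attribute-definitions AttributeName=PK,AttributeType=S AttributeName=SK,AttributeType=S \
    --key-schema AttributeName=PK,KeyType=HASH AttributeName=SK,KeyType=RANGE \
    --billing-mode PAY_PER_REQUEST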
Insert an initial item with PK USER#123
$ aws dynamodb put-item --table-name "example" \
--endpoint-url http://localhost:8000 \
--item '{"PK": {"S":"USER#123"}, "SK":{"S":"PROFILE"}}'
$ aws dynamodb scan --table-name "example" --endpoint-url http://localhost:8000
{
"Items": [
{
"PK": {
"S": "USER#123"
},
"SK": {
"S": "PROFILE"
}
}
],
"Count": 1,
"ScannedCount": 1,
"ConsumedCapacity": null
}
Attempt to insert another item with the same PK. This results in a ConditionalCheckFailedException. Based on the docs and the various attribute_not_exists examples I have seen, I would expect this to succeed because the PK exists.
$ aws dynamodb put-item --table-name "example" \
--endpoint-url http://localhost:8000 \
--item '{"PK": {"S":"USER#123"}, "SK":{"S":"COMMENT#123"}}' \
--condition-expression "attribute_exists(PK)"
I would expect this to fail because the PK does not exist:
$ aws dynamodb put-item --table-name "example" \
--endpoint-url http://localhost:8000 \
--item '{"PK": {"S":"USER#321"}, "SK":{"S":"COMMENT#123"}}' \
--condition-expression "attribute_exists(PK)"
Instead, both of these operations fail.
If it helps, I am looking for the exact OPPOSITE of this Stack Overflow post.
There is no such concept as “the PK already exists”, because there is no PK entity: there are only items, some of which may share that PK. On a PutItem, the condition expression is evaluated against the existing item that has the same full primary key (PK plus SK) as the item being written; no item with PK USER#123 and SK COMMENT#123 exists, and none with PK USER#321 either, so attribute_exists(PK) fails in both cases.
If you really want to enforce this type of behavior, you'll need to put an actual item in the table to indicate to your application that this PK “exists”. Pick whatever SK you want for the marker item, then do a transactional write for your new item with a ConditionCheck that the marker item already exists.
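Using the PROFILE item from above as the marker, a sketch of such a transactional write via the CLI might look like this:
aws dynamodb transact-write-items \
    --endpoint-url http://localhost:8000 \
    --transact-items '[
      {"ConditionCheck": {
        "TableName": "example",
        "Key": {"PK": {"S": "USER#123"}, "SK": {"S": "PROFILE"}},
        "ConditionExpression": "attribute_exists(PK)"}},
      {"Put": {
        "TableName": "example",
        "Item": {"PK": {"S": "USER#123"}, "SK": {"S": "COMMENT#123"}}}}
    ]'
With the USER#123 marker present the Put goes through; for a PK like USER#321 with no marker, the whole transaction fails with a TransactionCanceledException (reason ConditionalCheckFailed).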
Related
Below is what the table structure looks like in DynamoDB when I scan the table using
aws dynamodb scan --table-name "hotel" --endpoint-url http://localhost:8088
{
"Count": 2,
"Items": [
{
"dc": {
"N": "0"
},
"sw": {
"L": [
{
"N": "1"
}
]
}
},
{
"dc": {
"N": "0"
},
"sw": {
"L":[]
},
}
],
"ScannedCount": 2,
"ConsumedCapacity": null
}
Now I want to query the table for items where sw is []. I am using the following query to retrieve the results.
aws dynamodb query --table-name "hotel" --key-conditions file:////tables/key1.json --endpoint-url http://localhost:8088
where key1.json is:
{
"sw":{
"ComparisonOperator":"EQ",
"AttributeValueList": [ {"L":[]} ]
}
}
But I get the following error:
An error occurred (ValidationException) when calling the Query operation: Query condition missed key schema element
Please suggest how I can query the table to retrieve the results.
The way you have structured your table, it is hard to query by the contents of the sw list. Try to save each element as its own item (row).
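If reading the whole table is acceptable, a Scan with a filter expression can match the empty list, since filter expressions (unlike a Query's key conditions) may reference non-key attributes. A sketch against your local endpoint:
aws dynamodb scan --table-name "hotel" \
    --endpoint-url http://localhost:8088 \
    --filter-expression "sw = :empty" \
    --expression-attribute-values '{":empty": {"L": []}}'
Note that the filter is applied after the items are read, so the scan still consumes capacity for every item in the table.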
Is it possible to enforce table-level schema validation on DynamoDB?
For instance, consider the following table
aws dynamodb create-table \
    --table-name spaces-tabs-votes \
    --attribute-definitions AttributeName=id,AttributeType=S \
    --key-schema AttributeName=id,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1 \
    --endpoint-url http://localhost:8000
At time T, the table looked like below (notice that votes is of type N):
$ aws dynamodb scan --table-name spaces-tabs-votes
{
"Count": 2,
"Items": [
{
"votes": {
"N": "104"
},
"id": {
"S": "space"
}
},
{
"votes": {
"N": "60"
},
"id": {
"S": "tab"
}
}
],
"ScannedCount": 2,
"ConsumedCapacity": null
}
At time T+1, I was able to change the type of votes from N to S. All I had to do was a put with votes set to the string "1" instead of incrementing the existing N value.
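A reconstruction of that put (the local endpoint is assumed from the create-table command above):
aws dynamodb put-item --table-name spaces-tabs-votes \
    --endpoint-url http://localhost:8000 \
    --item '{"id": {"S": "tab"}, "votes": {"S": "1"}}'
PutItem replaces the whole item, and only the key attributes (here, id) are type-checked against the table definition, so the new type is accepted without complaint.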
$ aws dynamodb scan --table-name spaces-tabs-votes
{
"Count": 2,
"Items": [
{
"votes": {
"N": "104"
},
"id": {
"S": "space"
}
},
{
"votes": {
"S": "1"
},
"id": {
"S": "tab"
}
}
],
"ScannedCount": 2,
"ConsumedCapacity": null
}
I would like schema enforcement: if the record I am trying to write to DynamoDB doesn't conform to a schema, I want it to throw an exception. Is that possible at all?
I cannot seem to find an example of how to add a Global Secondary Index to an existing table in DynamoDB using the AWS CLI.
This is what I know so far from the docs.
Any pointers would be appreciated.
Here is the update-table documentation.
Example:
aws dynamodb update-table --table-name <tableName> --global-secondary-index-updates file://gsi-command.json
Create a JSON file with either an Update, Create, or Delete action:
Keep one of the actions (Update, Create, or Delete) from the sample JSON below and update the attribute definitions accordingly.
[
{
"Update": {
"IndexName": "string",
"ProvisionedThroughput": {
"ReadCapacityUnits": long,
"WriteCapacityUnits": long
}
},
"Create": {
"IndexName": "string",
"KeySchema": [
{
"AttributeName": "string",
"KeyType": "HASH"|"RANGE"
}
...
],
"Projection": {
"ProjectionType": "ALL"|"KEYS_ONLY"|"INCLUDE",
"NonKeyAttributes": ["string", ...]
},
"ProvisionedThroughput": {
"ReadCapacityUnits": long,
"WriteCapacityUnits": long
}
},
"Delete": {
"IndexName": "string"
}
}
...
]
There is a small section in the Options section of the update-table documentation that mentions the required options specific to creating a new global secondary index: the attribute-definitions must include the key elements of the new index. Just adding that option to the end of the example provided by @notionquest should do the trick.
aws dynamodb update-table --table-name <tableName> --global-secondary-index-updates file://gsi-command.json --attribute-definitions AttributeName=<attributeName>,AttributeType=<attributeType>
Creating global secondary indexes in existing tables.
Use this CLI command and JSON file for the update.
aws dynamodb update-table --table-name sample --cli-input-json file://gsi-update.json --endpoint-url http://localhost:8000
Save the arguments in JSON format as gsi-update.json:
{
"AttributeDefinitions":[
{
"AttributeName":"String",
"AttributeType":"S"
},
{
"AttributeName":"String",
"AttributeType":"S"
}
],
"GlobalSecondaryIndexUpdates":[
{
"Create":{
"IndexName":"index-name",
"KeySchema":[
{
"AttributeName":"String",
"KeyType":"HASH"
},
{
"AttributeName":"String",
"KeyType":"RANGE"
}
],
"Projection":{
"ProjectionType":"ALL"
},
"ProvisionedThroughput":{
"ReadCapacityUnits":5,
"WriteCapacityUnits":5
}
}
}
]
}
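Index creation runs in the background; one way to check when the new index becomes ACTIVE is a describe-table call (a sketch, reusing the table and endpoint from above):
aws dynamodb describe-table --table-name sample \
    --endpoint-url http://localhost:8000 \
    --query "Table.GlobalSecondaryIndexes[].[IndexName,IndexStatus]"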
I'm trying to build a histogram of a certain attribute in my DynamoDB table.
I thought the easiest way would be to use multiple filter expressions.
This is my baseline query with a single filter expression, and it works:
aws dynamodb scan --table-name test --select "COUNT" --filter-expression "contains(score, :s)" --expression-attribute-values '{ ":s": { "N": "1" } }' --limit 100
Now I'm trying to extend it to multiple filter expressions, and I'm not sure how.
I have tried:
aws dynamodb scan --table-name test --select "COUNT" --filter-expression "score = :s" --filter-expression "score = :s1" --expression-attribute-values '{ ":s": { "N": "1" }, ":s1": { "N": "40" } }' --limit 100
and
aws dynamodb scan --table-name test --select "COUNT" --filter-expression "score = :s" | "score = :s1" --expression-attribute-values '{ ":s": { "N": "1" }, ":s1": { "N": "40" } }' --limit 100
I am probably late to answer, but I was searching for a similar scenario and found nothing, so I am answering in case someone else benefits.
aws dynamodb scan --table-name test --select "COUNT" \
--filter-expression "score = :s OR score = :s1" \
--expression-attribute-values '{ ":s": { "N": "1" }, ":s1": { "N": "40" } }' \
--limit 100
Filter expressions are a kind of condition expression. You can combine filter expressions with boolean logic. However, in your example, you can get away without using AND/OR operators to combine expressions.
aws dynamodb scan --table-name test --select "COUNT" \
--filter-expression "score IN :s, :s1" \
--expression-attribute-values '{ ":s": { "N": "1" }, ":s1": { "N": "40" } }' \
--limit 100
Using between:
aws dynamodb scan --table-name test --select "COUNT" \
--filter-expression "(score between :s and :s1)" \
--expression-attribute-values '{ ":s": { "N": "0" }, ":s1": { "N": "10" } }'
For those who are searching for the Node.js FilterExpression version:
// Params for a DocumentClient scan; Key is not a valid parameter
// alongside FilterExpression, so it is omitted here.
// order_id and now are assumed to be defined in the surrounding scope.
const params = {
  TableName: "orderMessages",
  FilterExpression: "#order_id = :ordrId OR #timestamp < :ts",
  // "timestamp" is a DynamoDB reserved word, hence the aliases
  ExpressionAttributeNames: {
    "#order_id": "order_id",
    "#timestamp": "timestamp"
  },
  ExpressionAttributeValues: {
    ":ordrId": order_id,
    ":ts": now
  },
}
// e.g. docClient.scan(params, callback)
Happy coding :)
It is possible to dump DynamoDB via Data Pipeline and also to import data into DynamoDB. The import works well, but the data is always appended to whatever already exists in the table.
For now I have found working examples that scan DynamoDB and delete items one by one or via batch writes, but for a large amount of data that is not a good option.
It is also possible to delete the table entirely and recreate it, but with that approach the indexes will be lost.
So the best way would be to overwrite the DynamoDB data via the Data Pipeline import, or to truncate the table somehow. Is that possible, and if so, how?
Truncate-table functionality is not available in DynamoDB, so kindly consider deleting the table and creating it again.
Reason: DynamoDB charges you based on the ReadCapacityUnits and WriteCapacityUnits you consume. If you delete all items with BatchWriteItem, every deletion consumes WriteCapacityUnits. To save those WriteCapacityUnits, it is better to delete the table and recreate it.
Steps to delete and create DynamoDB tables are as follows:
Delete Table via AWS CLI:
aws dynamodb delete-table --table-name <tableName>
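Deletion is asynchronous, so if you recreate the table in a script you can block until the old table is gone with the CLI's built-in waiter:
aws dynamodb wait table-not-exists --table-name <tableName>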
Delete Table via the Amazon DynamoDB API:
Sample Request
POST / HTTP/1.1
Host: dynamodb.<region>.<domain>;
Accept-Encoding: identity
Content-Length: <PayloadSizeBytes>
User-Agent: <UserAgentString>
Content-Type: application/x-amz-json-1.0
Authorization: AWS4-HMAC-SHA256 Credential=<Credential>, SignedHeaders=<Headers>, Signature=<Signature>
X-Amz-Date: <Date>
X-Amz-Target: DynamoDB_20120810.DeleteTable
{
"TableName": "Reply"
}
Create DynamoDB Table via the Amazon DynamoDB API:
POST / HTTP/1.1
Host: dynamodb.<region>.<domain>;
Accept-Encoding: identity
Content-Length: <PayloadSizeBytes>
User-Agent: <UserAgentString>
Content-Type: application/x-amz-json-1.0
Authorization: AWS4-HMAC-SHA256 Credential=<Credential>, SignedHeaders=<Headers>, Signature=<Signature>
X-Amz-Date: <Date>
X-Amz-Target: DynamoDB_20120810.CreateTable
{
"AttributeDefinitions": [
{
"AttributeName": "ForumName",
"AttributeType": "S"
},
{
"AttributeName": "Subject",
"AttributeType": "S"
},
{
"AttributeName": "LastPostDateTime",
"AttributeType": "S"
}
],
"TableName": "Thread",
"KeySchema": [
{
"AttributeName": "ForumName",
"KeyType": "HASH"
},
{
"AttributeName": "Subject",
"KeyType": "RANGE"
}
],
"LocalSecondaryIndexes": [
{
"IndexName": "LastPostIndex",
"KeySchema": [
{
"AttributeName": "ForumName",
"KeyType": "HASH"
},
{
"AttributeName": "LastPostDateTime",
"KeyType": "RANGE"
}
],
"Projection": {
"ProjectionType": "KEYS_ONLY"
}
}
],
"ProvisionedThroughput": {
"ReadCapacityUnits": 5,
"WriteCapacityUnits": 5
}
}
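The same request body can also be passed to the CLI instead of signing a raw HTTP request; assuming it is saved as create-thread.json (the filename is illustrative):
aws dynamodb create-table --cli-input-json file://create-thread.json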
Summary: Deleting the table and creating it again would be the best solution.