Multiple FilterExpression in dynamodb scan - amazon-dynamodb

I'm trying to build a histogram of a certain attribute in my DynamoDB table. I thought the easiest way would be to use multiple filter expressions.
This is my baseline query with a single filter expression, and it works:
aws dynamodb scan --table-name test --select "COUNT" --filter-expression "contains(score, :s)" --expression-attribute-values '{ ":s": { "N": "1" } }' --limit 100
Now I'm trying to extend it to multiple filter expressions, and I'm not sure how.
I have tried:
aws dynamodb scan --table-name test --select "COUNT" --filter-expression "score = :s" --filter-expression "score = :s1" --expression-attribute-values '{ ":s": { "N": "1" }, ":s1": { "N": "40" } }' --limit 100
and
aws dynamodb scan --table-name test --select "COUNT" --filter-expression "score = :s" | "score = :s1" --expression-attribute-values '{ ":s": { "N": "1" }, ":s1": { "N": "40" } }' --limit 100

I'm probably late to answer, but I was searching for a similar scenario and found nothing, so I'm answering in case someone else can benefit.
aws dynamodb scan --table-name test --select "COUNT" \
--filter-expression "score = :s OR score = :s1" \
--expression-attribute-values '{ ":s": { "N": "1" }, ":s1": { "N": "40" } }' \
--limit 100

Filter expressions are a kind of condition expression, so you can combine them with boolean logic (AND, OR, NOT). In your example, however, you can get away without combining expressions at all:
aws dynamodb scan --table-name test --select "COUNT" \
--filter-expression "score IN (:s, :s1)" \
--expression-attribute-values '{ ":s": { "N": "1" }, ":s1": { "N": "40" } }' \
--limit 100

Using BETWEEN:
aws dynamodb scan --table-name test --select "COUNT" \
--filter-expression "(score between :s and :s1)" \
--expression-attribute-values '{ ":s": { "N": "0" }, ":s1": { "N": "10" } }'

For those who are searching for the Node.js filter expression:
const params = {
  TableName: "orderMessages",
  Key: {
    order_id,
  },
  FilterExpression: "#order_id = :ordrId OR #timestamp < :ts",
  ExpressionAttributeNames: {
    "#order_id": "order_id",
    "#timestamp": "timestamp"
  },
  ExpressionAttributeValues: {
    ":ordrId": order_id,
    ":ts": now
  },
}
Happy coding :)
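Rather than running one filtered Scan per score value, a single Scan can feed a client-side histogram. A minimal Python sketch under the question's assumptions (the attribute is named score and is numeric; pagination with ExclusiveStartKey is omitted for brevity):

```python
from collections import Counter

def score_histogram(items):
    """Count occurrences of the numeric 'score' attribute across
    DynamoDB-JSON items (each value wrapped as {"N": "..."})."""
    counts = Counter()
    for item in items:
        attr = item.get("score")
        if attr and "N" in attr:
            counts[attr["N"]] += 1
    return dict(counts)

# Sample items in the shape a Scan response's "Items" field returns
items = [
    {"score": {"N": "1"}},
    {"score": {"N": "40"}},
    {"score": {"N": "1"}},
    {"other": {"S": "no score"}},
]
print(score_histogram(items))  # {'1': 2, '40': 1}
```

This also avoids paying for one full table read per bucket, since every Scan with a filter still reads the whole table.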

Related

Insert an item in DynamoDB only if the partition key exists

I want to insert an item only if the partition/hash key already exists. I am attempting to use a condition expression along with attribute_exists to achieve this, but I am getting unexpected results.
The example table
{
  "TableName": "example",
  "KeySchema": [
    { "AttributeName": "PK", "KeyType": "HASH" },
    { "AttributeName": "SK", "KeyType": "RANGE" }
  ],
  "AttributeDefinitions": [
    { "AttributeName": "PK", "AttributeType": "S" },
    { "AttributeName": "SK", "AttributeType": "S" }
  ]
}
Insert an initial item with PK USER#123:
$ aws dynamodb put-item --table-name "example" \
--endpoint-url http://localhost:8000 \
--item '{"PK": {"S":"USER#123"}, "SK":{"S":"PROFILE"}}'
$ aws dynamodb scan --table-name "example" --endpoint-url http://localhost:8000
{
  "Items": [
    {
      "PK": { "S": "USER#123" },
      "SK": { "S": "PROFILE" }
    }
  ],
  "Count": 1,
  "ScannedCount": 1,
  "ConsumedCapacity": null
}
Attempt to insert another item with the same PK. This results in ConditionalCheckFailedException. Based on the docs and various attribute_not_exists examples I have seen, I would expect this to succeed because the PK exists.
$ aws dynamodb put-item --table-name "example" \
--endpoint-url http://localhost:8000 \
--item '{"PK": {"S":"USER#123"}, "SK":{"S":"COMMENT#123"}}' \
--condition-expression "attribute_exists(PK)"
I would expect this to fail because the PK does not exist:
$ aws dynamodb put-item --table-name "example" \
--endpoint-url http://localhost:8000 \
--item '{"PK": {"S":"USER#321"}, "SK":{"S":"COMMENT#123"}}' \
--condition-expression "attribute_exists(PK)"
Instead, both of these operations fail.
If it helps, I am looking for the exact opposite of this Stack Overflow post.
There is no such concept as "the PK already exists", because there is no PK entity; there are only items, some of which may have that PK.
If you really want to enforce this type of behavior, you'll need to put an actual item in the table to indicate to your application that this PK "exists". Pick whatever SK you want for the marker item, then do a transactional write for your new item with a ConditionCheck, as part of the same transaction, that the marker item already exists.
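The marker-item approach above can be sketched as a TransactWriteItems payload. This is only an illustration: the helper name, the PK/SK attribute names, and the PROFILE marker SK are assumptions taken from the example table.

```python
def put_if_pk_exists(table, pk, sk, item_attrs=None, marker_sk="PROFILE"):
    """Build a TransactWriteItems payload that writes the new item only
    if the marker item (pk, marker_sk) already exists (illustrative helper)."""
    item = {"PK": {"S": pk}, "SK": {"S": sk}}
    item.update(item_attrs or {})
    return {
        "TransactItems": [
            {
                "ConditionCheck": {
                    "TableName": table,
                    "Key": {"PK": {"S": pk}, "SK": {"S": marker_sk}},
                    # fails the whole transaction if the marker is absent
                    "ConditionExpression": "attribute_exists(PK)",
                }
            },
            {
                "Put": {
                    "TableName": table,
                    "Item": item,
                }
            },
        ]
    }

req = put_if_pk_exists("example", "USER#123", "COMMENT#123")
# then e.g.: boto3.client("dynamodb").transact_write_items(**req)
```

If the marker item is missing, the whole transaction is rejected with a TransactionCanceledException, which gives you the "PK must exist" semantics the question asks for.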

JQ only returns one CIDR block from AWS CLI

I am trying to read the CIDR blocks of the VPCs via the AWS CLI; I will use this in a script when I'm done. I am using jq to parse the output:
aws ec2 describe-vpcs --region=us-east-1 | jq -r '.Vpcs[].CidrBlock'
10.200.3.0/24
However, jq returns only one of the VPC's CIDR blocks. This is the original JSON:
{
  "Vpcs": [
    {
      "CidrBlock": "10.200.3.0/24",
      "DhcpOptionsId": "dopt-d0aa95ab",
      "State": "available",
      "VpcId": "vpc-00de11103235ec567",
      "OwnerId": "046480487130",
      "InstanceTenancy": "default",
      "Ipv6CidrBlockAssociationSet": [
        {
          "AssociationId": "vpc-cidr-assoc-09f19d81c2e4566b9",
          "Ipv6CidrBlock": "2600:1f18:1f7:300::/56",
          "Ipv6CidrBlockState": { "State": "associated" },
          "NetworkBorderGroup": "us-east-1"
        }
      ],
      "CidrBlockAssociationSet": [
        {
          "AssociationId": "vpc-cidr-assoc-0511a5d459f937899",
          "CidrBlock": "10.238.3.0/24",
          "CidrBlockState": { "State": "associated" }
        },
        {
          "AssociationId": "vpc-cidr-assoc-05ad73e8c515a470f",
          "CidrBlock": "100.140.0.0/27",
          "CidrBlockState": { "State": "associated" }
        }
      ],
      "IsDefault": false,
      "Tags": [
        { "Key": "environment", "Value": "int01" },
        { "Key": "Name", "Value": "company-int01-vpc" },
        { "Key": "project", "Value": "company" }
      ]
    }
  ]
}
Why does jq only return part of the info I'm after? I need to get all VPC CIDR blocks in the output.
You have two keys, CidrBlock and CidrBlockAssociationSet, under each element of the Vpcs array.
aws ec2 describe-vpcs --region=us-east-1 |
jq -r '.Vpcs[] | .CidrBlock, .CidrBlockAssociationSet[].CidrBlock'
10.200.3.0/24
10.238.3.0/24
100.140.0.0/27
And this is a structure-independent solution that finds CidrBlock at any depth:
aws ... | jq -r '.. | if type == "object" and has("CidrBlock") then .CidrBlock else empty end'
And, inspired by jq170727's answer, a more concise form:
aws ... | jq -r '.. | objects | .CidrBlock // empty'
Here is a filter inspired by Dmitry's answer which is slightly shorter: .. | .CidrBlock? | values
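For readers who would rather do the recursive descent outside jq, the same "collect CidrBlock at any depth" idea can be sketched in Python. This is a hypothetical helper for scripting, not part of the AWS CLI:

```python
def all_cidr_blocks(node):
    """Recursively collect every value stored under a 'CidrBlock' key,
    mirroring jq's  .. | objects | .CidrBlock // empty  filter."""
    found = []
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "CidrBlock":
                found.append(value)
            found.extend(all_cidr_blocks(value))
    elif isinstance(node, list):
        for value in node:
            found.extend(all_cidr_blocks(value))
    return found

# Abbreviated version of the describe-vpcs output above
vpcs = {"Vpcs": [{
    "CidrBlock": "10.200.3.0/24",
    "CidrBlockAssociationSet": [
        {"CidrBlock": "10.238.3.0/24"},
        {"CidrBlock": "100.140.0.0/27"},
    ],
}]}
print(all_cidr_blocks(vpcs))
# ['10.200.3.0/24', '10.238.3.0/24', '100.140.0.0/27']
```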

Query GSI from AWS Dynamodb CLI

How do I query a global secondary index (GSI) in DynamoDB from the AWS CLI?
I have the following GSI query saved in a gsi.json file:
{
  "IndexName": {"S": "Index"},
  "KeyConditionExpression": "name = :name",
  "ExpressionAttributeValues": {
    ":name": {"S": "bob"},
    ":id": {"S": "bob-1234"}
  },
  "ProjectionExpression": {"S": "age"},
  "ScanIndexForward": {"BOOL": "false"}
}
Now, how do I run this query with the aws dynamodb CLI command?
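As a hedged sketch of what the corrected parameters would look like: the gsi.json above wraps plain request fields in attribute-value maps (IndexName, ProjectionExpression, and ScanIndexForward take plain values, only ExpressionAttributeValues uses the {"S": ...} form), the unused :id value would be rejected, and name is on DynamoDB's reserved-words list, so it needs an ExpressionAttributeNames placeholder. Building the low-level Query parameters in Python (the table name mytable is an assumption):

```python
def gsi_query_params(table, index, name_value):
    """Sketch of corrected low-level Query parameters for a GSI lookup.
    IndexName/ProjectionExpression/ScanIndexForward are plain values,
    and the reserved word 'name' is aliased via #n."""
    return {
        "TableName": table,
        "IndexName": index,
        "KeyConditionExpression": "#n = :name",
        "ExpressionAttributeNames": {"#n": "name"},
        "ExpressionAttributeValues": {":name": {"S": name_value}},
        "ProjectionExpression": "age",
        "ScanIndexForward": False,
    }

params = gsi_query_params("mytable", "Index", "bob")
# Serialized to JSON, this shape can be passed to the CLI via
# --cli-input-json, or directly to boto3.client("dynamodb").query(**params)
```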

How can I query list of objects in DynamoDb (using CLI)

Below is what the table structure looks like in DynamoDb when I scan the table using
aws dynamodb scan --table-name "hotel" --endpoint-url http://localhost:8088
{
  "Count": 2,
  "Items": [
    {
      "dc": { "N": "0" },
      "sw": {
        "L": [
          { "N": "1" }
        ]
      }
    },
    {
      "dc": { "N": "0" },
      "sw": { "L": [] }
    }
  ],
  "ScannedCount": 2,
  "ConsumedCapacity": null
}
Now I want to query the table for items where sw is the empty list []. I am using the following query to retrieve the results:
aws dynamodb query --table-name "hotel" --key-conditions file:////tables/key1.json --endpoint-url http://localhost:8088
where key1.json is:
{
  "sw": {
    "ComparisonOperator": "EQ",
    "AttributeValueList": [ {"L": []} ]
  }
}
But I get the following error:
An error occurred (ValidationException) when calling the Query operation: Query condition missed key schema element
Please suggest how I can query the table to retrieve these results.
The way you have structured your table, it is hard to query the list by its elements. Try saving each element as its own item.
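The ValidationException itself is telling you that Query can only match attributes in the table's key schema, and sw is not a key. If restructuring isn't an option, one workaround is to Scan and filter client-side for the empty list; a minimal Python sketch over DynamoDB-JSON items:

```python
def items_with_empty_sw(items):
    """Client-side filter: keep DynamoDB-JSON items whose 'sw' list
    attribute is present but empty."""
    return [i for i in items if i.get("sw", {}).get("L") == []]

# Sample items in the shape of the Scan output above
scanned = [
    {"dc": {"N": "0"}, "sw": {"L": [{"N": "1"}]}},
    {"dc": {"N": "0"}, "sw": {"L": []}},
]
print(items_with_empty_sw(scanned))
# [{'dc': {'N': '0'}, 'sw': {'L': []}}]
```

Note that, like any Scan-based approach, this reads the whole table before filtering.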

Enforce Schema validation in DynamoDB

Is it possible to enforce table-level schema validation in DynamoDB?
For instance, consider the following table:
aws dynamodb create-table \
  --table-name spaces-tabs-votes \
  --attribute-definitions AttributeName=id,AttributeType=S \
  --key-schema AttributeName=id,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1 \
  --endpoint-url http://localhost:8000
At time T, the table looked like this (notice that votes is of type N):
$ aws dynamodb scan --table-name spaces-tabs-votes
{
  "Count": 2,
  "Items": [
    {
      "votes": { "N": "104" },
      "id": { "S": "space" }
    },
    {
      "votes": { "N": "60" },
      "id": { "S": "tab" }
    }
  ],
  "ScannedCount": 2,
  "ConsumedCapacity": null
}
At time T+1, I was able to change the type of votes from N to S. All I had to do was a put with votes set to the string "1" instead of incrementing the existing N value.
$ aws dynamodb scan --table-name spaces-tabs-votes
{
  "Count": 2,
  "Items": [
    {
      "votes": { "N": "104" },
      "id": { "S": "space" }
    },
    {
      "votes": { "S": "1" },
      "id": { "S": "tab" }
    }
  ],
  "ScannedCount": 2,
  "ConsumedCapacity": null
}
I would like schema enforcement: if a record I try to put into DynamoDB doesn't conform to the schema, I want the write to throw an exception. Is this possible at all?
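DynamoDB itself only enforces the types of key attributes; everything else is schemaless, so validation for non-key attributes has to live in the application (or in a layer in front of it). A minimal client-side sketch, where the expected-types map is an assumption based on the example table:

```python
SCHEMA = {"id": "S", "votes": "N"}  # expected DynamoDB type per attribute

def validate_item(item, schema=SCHEMA):
    """Raise ValueError if a DynamoDB-JSON item does not carry the
    expected type tag for every attribute in the schema."""
    for attr, expected in schema.items():
        value = item.get(attr)
        if value is None or expected not in value:
            raise ValueError(f"{attr} must be of type {expected}")

validate_item({"id": {"S": "tab"}, "votes": {"N": "60"}})  # passes silently
try:
    validate_item({"id": {"S": "tab"}, "votes": {"S": "1"}})
except ValueError as e:
    print(e)  # votes must be of type N
```

Calling something like this before every put_item gives the "throw on bad schema" behavior the question asks for, at the cost of it being a convention rather than a table-level guarantee.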
