Query GSI from AWS DynamoDB CLI - amazon-dynamodb

How do I query a global secondary index (GSI) in DynamoDB from the AWS CLI?
I have the following GSI query saved in a gsi.json file:
{
    "IndexName": {"S": "Index"},
    "KeyConditionExpression": "name = :name",
    "ExpressionAttributeValues": {
        ":name": {"S": "bob"},
        ":id": {"S": "bob-1234"}
    },
    "ProjectionExpression": {"S": "age"},
    "ScanIndexForward": {"BOOL": "false"}
}
Now how do I run the query using an aws dynamodb CLI command?
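Note that for the CLI, IndexName, ProjectionExpression, and ScanIndexForward are plain values rather than {"S": ...} attribute-value maps (only ExpressionAttributeValues uses that form), and name is a DynamoDB reserved word, so it has to be aliased with an expression attribute name. A minimal sketch of the equivalent command-line call, assuming a hypothetical table named MyTable and using only the value referenced in the key condition:
aws dynamodb query \
    --table-name MyTable \
    --index-name Index \
    --key-condition-expression "#n = :name" \
    --expression-attribute-names '{"#n": "name"}' \
    --expression-attribute-values '{":name": {"S": "bob"}}' \
    --projection-expression "age" \
    --no-scan-index-forward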

Related

Insert an item in DynamoDB only if the partition key exists

I want to insert an item only if the partition/hash key already exists. I am attempting to use a condition expression along with attribute_exists to achieve this, but I am getting unexpected results.
The example table:
{
    "TableName": "example",
    "KeySchema": [
        { "AttributeName": "PK", "KeyType": "HASH" },
        { "AttributeName": "SK", "KeyType": "RANGE" }
    ],
    "AttributeDefinitions": [
        { "AttributeName": "PK", "AttributeType": "S" },
        { "AttributeName": "SK", "AttributeType": "S" }
    ]
}
Insert an initial item with PK USER#123
$ aws dynamodb put-item --table-name "example" \
--endpoint-url http://localhost:8000 \
--item '{"PK": {"S":"USER#123"}, "SK":{"S":"PROFILE"}}'
$ aws dynamodb scan --table-name "example" --endpoint-url http://localhost:8000
{
    "Items": [
        {
            "PK": {
                "S": "USER#123"
            },
            "SK": {
                "S": "PROFILE"
            }
        }
    ],
    "Count": 1,
    "ScannedCount": 1,
    "ConsumedCapacity": null
}
Attempt to insert another item with the same PK. This results in ConditionalCheckFailedException. Based on the docs and various attribute_not_exists examples I have seen, I would expect this to succeed because the PK exists.
$ aws dynamodb put-item --table-name "example" \
--endpoint-url http://localhost:8000 \
--item '{"PK": {"S":"USER#123"}, "SK":{"S":"COMMENT#123"}}' \
--condition-expression "attribute_exists(PK)"
I would expect this to fail because the PK does not exist:
$ aws dynamodb put-item --table-name "example" \
--endpoint-url http://localhost:8000 \
--item '{"PK": {"S":"USER#321"}, "SK":{"S":"COMMENT#123"}}' \
--condition-expression "attribute_exists(PK)"
Instead, both of these operations fail.
If it helps, I am looking for the exact OPPOSITE of this stackoverflow post
There is no such concept as “the PK already exists” because there is no PK entity, only items, some of which may have that PK.
If you really want to enforce this type of behavior, you'll need to put an actual item in the database to indicate to your application that this PK "exists". Pick whatever SK you want for the marker item. Then do a transactional write for your new item that includes a ConditionCheck verifying that the marker item already exists.
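A sketch of that transactional approach with the CLI, reusing the PROFILE item from above as the marker item (if the marker is missing, the whole transaction fails with a TransactionCanceledException):
aws dynamodb transact-write-items \
    --endpoint-url http://localhost:8000 \
    --transact-items '[
      {
        "ConditionCheck": {
          "TableName": "example",
          "Key": {"PK": {"S": "USER#123"}, "SK": {"S": "PROFILE"}},
          "ConditionExpression": "attribute_exists(PK)"
        }
      },
      {
        "Put": {
          "TableName": "example",
          "Item": {"PK": {"S": "USER#123"}, "SK": {"S": "COMMENT#123"}}
        }
      }
    ]'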

What is the JSON format for a firestore composite index including an array?

I want to filter a collection (let's call it documents) using array-contains on one column (say keywords) and sort by another column (say name).
I am able to create this composite index in the firebase console, but I can only guess at the format for adding it to firestore.indexes.json.
It's unfortunate we can't download the index file from the console.
Set the mode to ARRAY_CONTAINS:
{
  "collectionId": "documents",
  "fields": [
    {
      "fieldPath": "keywords",
      "mode": "ARRAY_CONTAINS"
    },
    {
      "fieldPath": "name",
      "mode": "ASCENDING"
    }
  ]
}
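Note: newer versions of the Firebase CLI describe indexes with collectionGroup, queryScope, and arrayConfig/order instead of collectionId and mode. A sketch of the same index in that newer firestore.indexes.json format, assuming a collection-scoped query:
{
  "indexes": [
    {
      "collectionGroup": "documents",
      "queryScope": "COLLECTION",
      "fields": [
        { "fieldPath": "keywords", "arrayConfig": "CONTAINS" },
        { "fieldPath": "name", "order": "ASCENDING" }
      ]
    }
  ]
}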
You can also list your current Cloud Firestore indexes in JSON from the Firebase CLI:
firebase firestore:indexes
To export Firestore indexes:
firebase firestore:indexes > firestore.indexes.json
Caution! This will replace everything in firestore.indexes.json.
Or if you have multiple firebase projects in your .firebaserc:
"projects": {
"production": "prod-v6909",
"staging": "prod-c6020"
}
Then you can export like this:
firebase --project staging firestore:indexes > firestore.indexes.json
Regarding firebase firestore:indexes: for me, when exporting collection group indexes, this produces badly formatted JSON. The command line forgets to close the array bracket on indexes:
{
"indexes": [],
"fieldOverrides": [
{
"collectionGroup": "restaurants_users_config",
"fieldPath": "user",
"indexes": [
{
"order": "ASCENDING",
"queryScope": "COLLECTION"
},
{
"order": "DESCENDING",
"queryScope": "COLLECTION"
},
{
"arrayConfig": "CONTAINS",
"queryScope": "COLLECTION"
},
{
"order": "ASCENDING",
"queryScope": "COLLECTION_GROUP"
}
}
]
}
Any suggestions? Is it right to just fix the JSON array by hand?
Edit: it was fixed by upgrading the Firebase CLI from 8.2.0 to 8.4.1.

How can I query a list of objects in DynamoDB (using the CLI)

Below is what the table structure looks like in DynamoDB when I scan the table using:
aws dynamodb scan --table-name "hotel" --endpoint-url http://localhost:8088
{
    "Count": 2,
    "Items": [
        {
            "dc": {
                "N": "0"
            },
            "sw": {
                "L": [
                    {
                        "N": "1"
                    }
                ]
            }
        },
        {
            "dc": {
                "N": "0"
            },
            "sw": {
                "L": []
            }
        }
    ],
    "ScannedCount": 2,
    "ConsumedCapacity": null
}
Now I want to query the table for items where sw is an empty list ([]). I am using the following query to retrieve the results:
aws dynamodb query --table-name "hotel" --key-conditions file:////tables/key1.json --endpoint-url http://localhost:8088
where key1.json is:
{
    "sw": {
        "ComparisonOperator": "EQ",
        "AttributeValueList": [ {"L": []} ]
    }
}
But I get the following error:
An error occurred (ValidationException) when calling the Query operation: Query condition missed key schema element
Please suggest how I can query the table to retrieve the results.
The way you have structured your table, it is hard to query the array by its fields. Try to save each item as a separate row.
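If restructuring the table is not an option, one workaround (a sketch, not part of the original answer) is to fall back to Scan with a filter expression, since Query key conditions can only reference the table's key schema attributes, which is what the "Query condition missed key schema element" error is pointing at:
aws dynamodb scan --table-name "hotel" \
    --endpoint-url http://localhost:8088 \
    --filter-expression "sw = :empty" \
    --expression-attribute-values '{":empty": {"L": []}}'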

Add Global Secondary Index to an existing table in DynamoDB using aws cli

I cannot seem to find an example of how to add a Global Secondary Index to an existing table in DynamoDB using the AWS CLI.
This is what I know so far from the docs.
Any pointers would be appreciated.
Here is the update-table documentation.
Example:
aws dynamodb update-table --table-name <tableName> --global-secondary-index-updates file://gsi-command.json
Create a JSON file with either an Update, Create, or Delete action.
Keep one of the actions (Update, Create, or Delete) from the sample JSON below and update the attribute definitions accordingly:
[
    {
        "Update": {
            "IndexName": "string",
            "ProvisionedThroughput": {
                "ReadCapacityUnits": long,
                "WriteCapacityUnits": long
            }
        },
        "Create": {
            "IndexName": "string",
            "KeySchema": [
                {
                    "AttributeName": "string",
                    "KeyType": "HASH"|"RANGE"
                }
                ...
            ],
            "Projection": {
                "ProjectionType": "ALL"|"KEYS_ONLY"|"INCLUDE",
                "NonKeyAttributes": ["string", ...]
            },
            "ProvisionedThroughput": {
                "ReadCapacityUnits": long,
                "WriteCapacityUnits": long
            }
        },
        "Delete": {
            "IndexName": "string"
        }
    }
    ...
]
There is a small section in the Options section of the update-table documentation that mentions the options required when creating a new global secondary index: the attribute-definitions must include the key elements of the new index. Just adding that option to the end of the example provided by @notionquest should do the trick.
aws dynamodb update-table --table-name <tableName> --global-secondary-index-updates file://gsi-command.json --attribute-definitions AttributeName=<attributeName>,AttributeType=<attributeType>
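For example, to add an index on a hypothetical string attribute named email (the index name and throughput values below are placeholders), gsi-command.json would hold only the Create action, with the key attribute declared on the command line:
[
    {
        "Create": {
            "IndexName": "email-index",
            "KeySchema": [
                { "AttributeName": "email", "KeyType": "HASH" }
            ],
            "Projection": { "ProjectionType": "ALL" },
            "ProvisionedThroughput": {
                "ReadCapacityUnits": 5,
                "WriteCapacityUnits": 5
            }
        }
    }
]
aws dynamodb update-table --table-name <tableName> \
    --global-secondary-index-updates file://gsi-command.json \
    --attribute-definitions AttributeName=email,AttributeType=S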
Creating global secondary indexes in existing tables.
Use this CLI command and JSON file for the update:
aws dynamodb update-table --table-name sample --cli-input-json file://gsi-update.json --endpoint-url http://localhost:8000
Save the arguments in JSON format (gsi-update.json):
{
    "AttributeDefinitions": [
        {
            "AttributeName": "String",
            "AttributeType": "S"
        },
        {
            "AttributeName": "String",
            "AttributeType": "S"
        }
    ],
    "GlobalSecondaryIndexUpdates": [
        {
            "Create": {
                "IndexName": "index-name",
                "KeySchema": [
                    {
                        "AttributeName": "String",
                        "KeyType": "HASH"
                    },
                    {
                        "AttributeName": "String",
                        "KeyType": "RANGE"
                    }
                ],
                "Projection": {
                    "ProjectionType": "ALL"
                },
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": 5,
                    "WriteCapacityUnits": 5
                }
            }
        }
    ]
}

Multiple FilterExpressions in a DynamoDB scan

I'm trying to build a histogram of a certain attribute in my DynamoDB table.
I thought the easiest way would be to use multiple filter expressions.
This is my baseline query with a single filter expression, and it works:
aws dynamodb scan --table-name test --select "COUNT" --filter-expression "contains(score, :s)" --expression-attribute-values '{ ":s": { "N": "1" } }' --limit 100
Now I'm trying to extend it to multiple filter expressions and I'm not sure how.
I have tried:
aws dynamodb scan --table-name test --select "COUNT" --filter-expression "score = :s" --filter-expression "score = :s1" --expression-attribute-values '{ ":s": { "N": "1" }, ":s1": { "N": "40" } }' --limit 100
and
aws dynamodb scan --table-name test --select "COUNT" --filter-expression "score = :s" | "score = :s1" --expression-attribute-values '{ ":s": { "N": "1" }, ":s1": { "N": "40" } }' --limit 100
I am probably late to answer, but I was searching for a similar scenario and found nothing, so I am still answering in case someone else can benefit.
aws dynamodb scan --table-name test --select "COUNT" \
--filter-expression "score = :s OR score = :s1" \
--expression-attribute-values '{ ":s": { "N": "1" }, ":s1": { "N": "40" } }' \
--limit 100
Filter expressions are a kind of condition expression. You can combine filter expressions with boolean logic. However, in your example, you can get away without using AND/OR operators to combine expressions.
aws dynamodb scan --table-name test --select "COUNT" \
--filter-expression "score IN :s, :s1" \
--expression-attribute-values '{ ":s": { "N": "1" }, ":s1": { "N": "40" } }' \
--limit 100
Using BETWEEN:
aws dynamodb scan --table-name test --select "COUNT" \
--filter-expression "(score between :s and :s1)" \
--expression-attribute-values '{ ":s": { "N": "0" }, ":s1": { "N": "10" } }'
For those who are searching for the Node.js filter expression equivalent:
// Params for a DocumentClient scan(): return order messages that either
// match order_id or are older than `now`.
const params = {
  TableName: "orderMessages",
  FilterExpression: "#order_id = :ordrId OR #timestamp < :ts",
  // "timestamp" is a DynamoDB reserved word, so both names are aliased
  ExpressionAttributeNames: {
    "#order_id": "order_id",
    "#timestamp": "timestamp"
  },
  ExpressionAttributeValues: {
    ":ordrId": order_id,
    ":ts": now
  }
};
Happy coding :)
