My table data looks like the one below:
{
"id": {
"S": "alpha-rocket"
},
"images": {
"SS": [
"apple/value:50",
"Mango/aa:284_454_51.0.0",
"Mango/bb:291",
"Mango/cc:4"
]
},
"product": {
"S": "fruit"
}
}
Below is my code to update the table. The variables I am passing to the function have these values: product_id is alpha-rocket, image_val is 284_454_53.0.0, and image is Mango/aa:284_454_53.0.0.
I am trying to update the value of Mango/aa from 284_454_51.0.0 to 284_454_53.0.0, but I am getting the error "The document path provided in the update expression is invalid for update".
import boto3

def update_player_score(product_id, image_val, image):
    dynamo = boto3.resource('dynamodb')
    tbl = dynamo.Table('orcus')
    result = tbl.update_item(
        Key={
            "product": "fruit",
            "id": product_id,
        },
        UpdateExpression="SET images.#image_name = :image_val",
        ExpressionAttributeNames={
            "#image_name": image,
        },
        ExpressionAttributeValues={
            ":image_val": image_val,
        },
        ReturnValues="ALL_NEW",
    )
Is there a way to update the value of Mango/aa, or to replace the full string "Mango/aa:284_454_51.0.0" with "Mango/aa:284_454_53.0.0"?
You cannot update a string in a set or list by matching the string. If the attribute were a list (L) and you knew the element's index, you could replace the string by index:
SET images[1] = :image_val
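For illustration, a minimal boto3 sketch of both variants, using the table name and key values from your code. Note that images in your item is a string set (SS), not a list (L), so the index form only applies if you store a list; for a set you can swap a member with DELETE and ADD:

import boto3

dynamo = boto3.resource('dynamodb')
tbl = dynamo.Table('orcus')

# Variant 1: if "images" were a list (L), replace the element at a known index.
tbl.update_item(
    Key={"product": "fruit", "id": "alpha-rocket"},
    UpdateExpression="SET images[1] = :image_val",
    ExpressionAttributeValues={":image_val": "Mango/aa:284_454_53.0.0"},
)

# Variant 2: since "images" is a string set (SS), remove the old member
# and add the new one in a single update.
tbl.update_item(
    Key={"product": "fruit", "id": "alpha-rocket"},
    UpdateExpression="DELETE images :old ADD images :new",
    ExpressionAttributeValues={
        ":old": {"Mango/aa:284_454_51.0.0"},
        ":new": {"Mango/aa:284_454_53.0.0"},
    },
)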
It seems like what you actually want is not a set of strings, but another map. So instead of your data looking like it does, you'd make it look like this, which would allow you to do the update you're looking for:
{
  "id": {
    "S": "alpha-rocket"
  },
  "images": {
    "M": {
      "apple": {
        "M": {
          "value": {
            "S": "50"
          }
        }
      },
      "Mango": {
        "M": {
          "aa": {
            "S": "284_454_51.0.0"
          },
          "bb": {
            "S": "291"
          },
          "cc": {
            "S": "4"
          }
        }
      }
    }
  },
  "product": {
    "S": "fruit"
  }
}
I would also consider putting the different values in different "rows" in the table and using queries to build the objects.
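With that map structure in place, the targeted update becomes an ordinary document-path SET. A minimal boto3 sketch, assuming the same table and key values as above:

import boto3

dynamo = boto3.resource('dynamodb')
tbl = dynamo.Table('orcus')

# Address the nested value images.Mango.aa directly via a document path.
tbl.update_item(
    Key={"product": "fruit", "id": "alpha-rocket"},
    UpdateExpression="SET images.#fruit.#ver = :image_val",
    ExpressionAttributeNames={"#fruit": "Mango", "#ver": "aa"},
    ExpressionAttributeValues={":image_val": "284_454_53.0.0"},
)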
Related
I have a DDB entry as below
RequestId:'1234567890' // Partition key
LastScanDateTime: '2023-01-12T11:00:00.111Z'
SubjectUpdateCounter: {
Maths: 1,
English: 1
}
I'm trying to update the entry and below is the update query.
{
"TableName": "EmpDetails",
"Key": {
"RequestId": {
"S": "1234567890"
}
},
"ConditionExpression": "(#subjectUpdateCounter > :subjectUpdateCounterLimit)",
"UpdateExpression": "ADD #subjectUpdateCounter :dec SET LastScanDateTime = :LastScanDateTime,",
"ExpressionAttributeValues": {
":dec": {
"N": "-1"
},
":LastScanDateTime": {
"S": "2023-02-13T18:14:52.143Z"
},
":subjectUpdateCounterLimit": {
"N": "0"
}
},
"ReturnValues": "NONE",
"ExpressionAttributeNames": {
"#subjectUpdateCounter": "SubjectUpdateCounter.Maths"
}
}
I'm getting the error below:
ConditionalCheckFailedException: The conditional request failed
....
....
'$fault': 'client',
'$metadata': {
httpStatusCode: 400,
requestId: '12345ygfsdfagagdf',
extendedRequestId: undefined,
cfId: undefined,
attempts: 1,
totalRetryDelay: 0
},
__type: 'com.amazonaws.dynamodb.v20120810#ConditionalCheckFailedException'
}
My current value in SubjectUpdateCounter.Maths is greater than 0, so the condition should succeed and this query should decrement the value of SubjectUpdateCounter.Maths to 0.
Why is the query throwing the above exception?
Your issue is here:
"ExpressionAttributeNames": {
"#subjectUpdateCounter": "SubjectUpdateCounter.Maths"
}
This means DynamoDB is looking for a single top-level attribute named "SubjectUpdateCounter.Maths", but there is none, as it's a nested value you are looking for.
Your request should look like the following:
{
"TableName": "EmpDetails",
"Key": {
"RequestId": {
"S": "1234567890"
}
},
"ConditionExpression": "(#subjectUpdateCounter.#maths > :subjectUpdateCounterLimit)",
"UpdateExpression": "ADD #subjectUpdateCounter.#maths :dec SET LastScanDateTime = :LastScanDateTime,",
"ExpressionAttributeValues": {
":dec": {
"N": "-1"
},
":LastScanDateTime": {
"S": "2023-02-13T18:14:52.143Z"
},
":subjectUpdateCounterLimit": {
"N": "0"
}
},
"ReturnValues": "NONE",
"ExpressionAttributeNames": {
"#subjectUpdateCounter": "SubjectUpdateCounter",
"#maths":"Maths"
}
}
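For reference, the same corrected request as a boto3 sketch (using the low-level client so the typed values stay identical); the key point is that each segment of the nested path gets its own placeholder, joined with a dot:

import boto3

client = boto3.client('dynamodb')

client.update_item(
    TableName="EmpDetails",
    Key={"RequestId": {"S": "1234567890"}},
    ConditionExpression="#subjectUpdateCounter.#maths > :subjectUpdateCounterLimit",
    # The nested path is built from two placeholders joined with a dot.
    UpdateExpression=(
        "ADD #subjectUpdateCounter.#maths :dec "
        "SET LastScanDateTime = :LastScanDateTime"
    ),
    ExpressionAttributeNames={
        "#subjectUpdateCounter": "SubjectUpdateCounter",
        "#maths": "Maths",
    },
    ExpressionAttributeValues={
        ":dec": {"N": "-1"},
        ":LastScanDateTime": {"S": "2023-02-13T18:14:52.143Z"},
        ":subjectUpdateCounterLimit": {"N": "0"},
    },
)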
{
"id":{"N": "1"},
"attributes": {
"L": [
{
"M": {
"name": { "S": "AA" }
}
},
{
"M": {
"name": { "S": "BB" }
}
}
]
}
},
{
"id":{"N": "1"},
"attributes": {
"L": [
{
"M": {
"name": { "S": "BB" }
}
}
]
}
}
With the above data in the same partition (id as the partition key), how can I find records where any of the attributes has name = 'BB'?
I can filter by the Nth item, for example:
KeyConditionExpression: 'id = :myId',
ExpressionAttributeValues: {':myId': {N: '1'}, ':myValue': {S: 'BB'}},
ExpressionAttributeNames:{'#name': 'name'},
FilterExpression: 'attributes[0].#name=:myValue'
This would return only the second item. But is there a way to filter by ANY item in the array? It should return both records.
I tried setting FilterExpression to attributes[*].#name=:myValue and to attributes.#name=:myValue; neither works.
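For reference, here is the index-based filter from above as a complete boto3 call (the table name is hypothetical); it matches only items whose first list element has name = 'BB':

import boto3

client = boto3.client('dynamodb')

resp = client.query(
    TableName="MyTable",  # hypothetical table name
    KeyConditionExpression="id = :myId",
    # Tests only the element at index 0; DynamoDB has no wildcard
    # such as attributes[*] for "any element".
    FilterExpression="attributes[0].#name = :myValue",
    ExpressionAttributeNames={"#name": "name"},
    ExpressionAttributeValues={
        ":myId": {"N": "1"},
        ":myValue": {"S": "BB"},
    },
)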
I'm trying to filter a query based on a nested object (not an array). I'm currently using AppSync and DynamoDB, and the expression with its expression values executes correctly, but the filtering doesn't seem to work.
This is the sample data I'm trying to get (filtering by indicator.id):
Here's my query:
{
"version": "2017-02-28",
"operation": "Query",
"query": {
"expression": "pk = :pk and begins_with(sk, :sk)",
"expressionValues": {
":pk": { "S": "tenant:5fc30406-346c-42e2-8083-fda33ab6000a" },
":sk": {
"S": "school-year:2019-2020:grades:bVgA9abd:subject:m_kpc1Ae6:indicator:"
}
}
},
"filter": {
"expression": " contains(#indicatorId, :sk1) or contains(#indicatorId, :sk2) or contains(#indicatorId, :sk3)",
"expressionNames": { "#indicatorId": "indicator" },
"expressionValues": {
":sk1": {
"M": { "id": { "S": "07c658dd-999f-4e6f-95b8-c6bae422760a" } }
},
":sk2": {
"M": { "id": { "S": "0cf9f670-e284-4a93-b297-5e4a40c50228" } }
},
":sk3": { "M": { "id": { "S": "cd7902be-6512-4b47-b29d-40aff30c73e6" } } }
}
}
}
I've also tried:
{
"version": "2017-02-28",
"operation": "Query",
"query": {
"expression": "pk = :pk and begins_with(sk, :sk)",
"expressionValues": {
":pk": { "S": "tenant:5fc30406-346c-42e2-8083-fda33ab6000a" },
":sk": {
"S": "school-year:2019-2020:grades:bVgA9abd:subject:m_kpc1Ae6:indicator:"
}
}
},
"filter": {
"expression": " contains(#indicatorId, :sk1) or contains(#indicatorId, :sk2) or contains(#indicatorId, :sk3)",
"expressionNames": { "#indicatorId": "indicator.id" },
"expressionValues": {
":sk1": { "S": "07c658dd-999f-4e6f-95b8-c6bae422760a" },
":sk2": { "S": "0cf9f670-e284-4a93-b297-5e4a40c50228" },
":sk3": { "S": "cd7902be-6512-4b47-b29d-40aff30c73e6" }
}
}
}
I've also searched around Stack Overflow and the Amazon forums and haven't found anything that directly addresses my problem:
How to filter by elements in an array (or nested object) in DynamoDB
Nested Query in DynamoDB returns nothing
Referring to this answer: according to the DDB Nested Attributes doc, the filter expression should look like the following format:
"filter" : {
"expression" : "#path.#filter = :${fp}", ## filter path parent.target = :target
"expressionNames": {
"#path" : "${path}",
"#filter" : "${fp}"
},
"expressionValues" : {
":${fp}" : $util.dynamodb.toDynamoDBJson(${$target[$fp].eq}) ## :target : value to filter for
}
}
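Applied to the query in the question, this means one placeholder per path segment, so indicator.id becomes #path.#filter. As a sketch, here is the equivalent filter expressed directly against DynamoDB with boto3 (the table name is hypothetical):

import boto3

client = boto3.client('dynamodb')

resp = client.query(
    TableName="MyAppSyncTable",  # hypothetical table name
    KeyConditionExpression="pk = :pk AND begins_with(sk, :sk)",
    # indicator.id as two placeholders joined by a dot.
    FilterExpression="#path.#filter = :id1 OR #path.#filter = :id2",
    ExpressionAttributeNames={"#path": "indicator", "#filter": "id"},
    ExpressionAttributeValues={
        ":pk": {"S": "tenant:5fc30406-346c-42e2-8083-fda33ab6000a"},
        ":sk": {"S": "school-year:2019-2020:grades:bVgA9abd:subject:m_kpc1Ae6:indicator:"},
        ":id1": {"S": "07c658dd-999f-4e6f-95b8-c6bae422760a"},
        ":id2": {"S": "0cf9f670-e284-4a93-b297-5e4a40c50228"},
    },
)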
I'm trying to flatten and filter my JSON data that is in CosmosDB.
The data looks like below, and I would like to flatten everything in the Variables array and then filter by a specific _id and Timestamp inside the array:
{
"_id": 21032,
"FirstConnected": {
"$date": 1522835868346
},
"LastUpdated": {
"$date": 1523360279908
},
"Variables": [
{
"_id": 99999,
"Values": [
{
"Timestamp": {
"$date": 1522835868347
},
"Value": 1
}
]
},
{
"_id": 99998,
"Values": [
{
"Timestamp": {
"$date": 1523270312001
},
"Value": 8888
}
]
}
]
}
If you want to flatten data from the Variables array with properties from the root object you can query your collection like this:
SELECT root._id, root.FirstConnected, root.LastUpdated, var.Values
FROM root
JOIN var IN root.Variables
WHERE var._id = 99998
This will result in:
[
{
"_id": 21032,
"FirstConnected": {
"$date": 1522835868346
},
"LastUpdated": {
"$date": 1523360279908
},
"Values": [
{
"Timestamp": {
"$date": 1523270312001
},
"Value": 8888
}
]
}
]
If you want to flatten the Values array as well, you will need to write something like this:
SELECT root._id, root.FirstConnected, root.LastUpdated,
var.Values[0].Timestamp, var.Values[0]["Value"]
FROM root
JOIN var IN root.Variables
WHERE var._id = 99998
Note that CosmosDB treats "Value" as a reserved keyword, so you need to use the escape syntax. The result for this query is:
[
{
"_id": 21032,
"FirstConnected": {
"$date": 1522835868346
},
"LastUpdated": {
"$date": 1523360279908
},
"Timestamp": "1970-01-01T00:00:00Z",
"Value": 8888
}
]
For more details, see https://learn.microsoft.com/en-us/azure/cosmos-db/sql-api-sql-query#Advanced
If you're only looking to filter by the nested '_id' property, you could use ARRAY_CONTAINS with the partial_match argument set to true. The query would look something like this:
SELECT VALUE c
FROM c
WHERE ARRAY_CONTAINS(c.Variables, {_id: 99998}, true)
If you also want to flatten the array, you could use JOIN:
SELECT VALUE v
FROM v IN c.Variables
WHERE v._id = 99998
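As a usage sketch, the ARRAY_CONTAINS query could be run from Python with the azure-cosmos SDK like this (account URL, key, database, and container names are placeholders):

from azure.cosmos import CosmosClient

# Placeholder connection details.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("<db>").get_container_client("<container>")

# The third argument (partial_match=true) lets ARRAY_CONTAINS match
# on the _id property alone rather than the whole object.
items = list(container.query_items(
    query="SELECT VALUE c FROM c WHERE ARRAY_CONTAINS(c.Variables, {_id: 99998}, true)",
    enable_cross_partition_query=True,
))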
Given the table schema defined below (create-table.json), I am getting the following error after calling put-item using add-event1.json followed by add-event2.json:
A client error (ConditionalCheckFailedException) occurred when calling the PutItem operation: The conditional request failed
Why doesn't the ConditionExpression allow me to write both records? (I expect to have 2 records after the second operation)
I suspected that it was because of the non-key conditions used, but I don't see anything in the docs that indicates a lack of support for non-key conditions.
Create Table
$ aws dynamodb create-table --cli-input-json file://create-table.json
create-table.json
{
"TableName": "EVENTS_TEST",
"KeySchema": [
{ "AttributeName": "aggregateId", "KeyType": "HASH" },
{ "AttributeName": "streamRevision", "KeyType": "RANGE" }
],
"AttributeDefinitions": [
{ "AttributeName": "aggregateId", "AttributeType": "S" },
{ "AttributeName": "streamRevision", "AttributeType": "N" }
],
"ProvisionedThroughput": {
"ReadCapacityUnits": 10,
"WriteCapacityUnits": 10
}
}
Add First Record
$ aws dynamodb put-item --cli-input-json file://add-event1.json
add-event1.json
{
"TableName": "EVENTS_TEST",
"Item": {
"aggregateId": { "S": "id" },
"id": { "S": "119" },
"context": { "S": "*" },
"aggregate": { "S": "*" },
"streamRevision": { "N": "0" },
"commitId": { "S": "1119" },
"commitSequence": { "N": "0" },
"commitStamp": { "N": "1470185631511" },
"dispatched": { "BOOL": false },
"payload": { "S": "{ \"event\": \"bla\" }" }
},
"ExpressionAttributeNames": { "#name": "aggregate" },
"ConditionExpression": "attribute_not_exists(aggregateId) and attribute_not_exists(streamRevision) and #name <> :name and context <> :ctx",
"ExpressionAttributeValues": {
":name": { "S": "*" },
":ctx": { "S": "*" }
}
}
Add Second Record
$ aws dynamodb put-item --cli-input-json file://add-event2.json
add-event2.json
{
"TableName": "EVENTS_TEST",
"Item": {
"aggregateId": { "S": "id" },
"id": { "S": "123" },
"context": { "S": "myCtx" },
"aggregate": { "S": "myAgg" },
"streamRevision": { "N": "0" },
"commitId": { "S": "1123" },
"commitSequence": { "N": "0" },
"commitStamp": { "N": "1470185631551" },
"dispatched": { "BOOL": false },
"payload": { "S": "{ \"event\": \"bla2\" }" }
},
"ExpressionAttributeNames": { "#name": "aggregate" },
"ConditionExpression": "aggregateId <> :id and streamRevision <> :rev and #name <> :name and context <> :ctx",
"ExpressionAttributeValues": {
":id": { "S": "id" },
":rev": { "N": "0" },
":name": { "S": "myAgg" },
":ctx": { "S": "myCtx" }
}
}
Your goal is to save both records. There are two issues here.
With your choice of hash and range key it is impossible to save both records.
The combination of hash and range key makes a record unique. Event 1 and event 2 have the same values for the hash and range key, so the second put-item would simply overwrite the first record.
Your ConditionExpression prevents the replacement of record 1 by record 2.
The ConditionExpression is evaluated just before putting the record. When event 2 is about to be written, DynamoDB finds that an item with the same key (aggregateId "id", streamRevision 0) already exists and evaluates the condition against that existing item. The clause "aggregateId <> :id" is false for it, so the condition fails and you receive the ConditionalCheckFailedException. This is what prevents record 2 from overwriting record 1.
If you want to save both records you will have to come up with another choice of hash key and/or range key that better represents the uniqueness of your data items. You cannot solve that with a ConditionExpression.
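For example, one common pattern (a sketch, not taken from the question) is to give each event of an aggregate a distinct streamRevision and guard the write with attribute_not_exists on the key attribute, so a duplicate insert fails loudly instead of overwriting:

import boto3
from botocore.exceptions import ClientError

client = boto3.client('dynamodb')

def put_event(aggregate_id, stream_revision, payload):
    # Fails instead of silently overwriting an existing (aggregateId, streamRevision).
    try:
        client.put_item(
            TableName="EVENTS_TEST",
            Item={
                "aggregateId": {"S": aggregate_id},
                "streamRevision": {"N": str(stream_revision)},
                "payload": {"S": payload},
            },
            # True only when no item with this exact key exists yet.
            ConditionExpression="attribute_not_exists(aggregateId)",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            raise RuntimeError("event with this streamRevision already exists") from err
        raise

# put_event("id", 0, '{ "event": "bla" }')
# put_event("id", 1, '{ "event": "bla2" }')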
I received a similar error:
The conditional request failed (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ConditionalCheckFailedException; Request ID: SMNKMSKJNSHBSGHVGHSB)
In my case I had not added a sort key to my table, and my second item had the same primary key as the first item. It worked after I added a sort key.
SOLUTION
An item with that ID already exists in the table.
You need to create a unique ID for the item you are trying to add.