AWS Java SDK DynamoDB, how to get attribute values from ExecuteStatementRequest response? - amazon-dynamodb

I'm using the AWS SDK for Java to query a DynamoDB table using ExecuteStatementRequest, but I don't know how to fetch the returned attribute values from the response.
Given I have the following query:
var response2 = client.executeStatement(ExecuteStatementRequest.builder()
        .statement("""
                UPDATE "my-table"
                SET thresholdValue = thresholdValue + 12.5
                WHERE assignmentId = 'item1#123#item2#456#item3#789'
                RETURNING ALL NEW *
                """)
        .build());
System.out.println(response2.toString());
System.out.println(response2.getValueForField("Items", Collections.class)); // Doesn't cast to
This query executes fine and returns the updated attributes as part of the response; however, I can't find a way to get these values out of the response object using Java.
How can I do that?

I have found a way to do it, however I'm not sure if this is the intended way, as the documentation doesn't provide any examples.
List<?> items = response2.getValueForField("Items", List.class).get();
for (Object item : items) {
    var values = (Map<String, AttributeValue>) item;
    System.out.println(values.get("assignmentId").s());
    System.out.println(values.get("thresholdValue").n());
}
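For what it's worth, in the v2 SDK the ExecuteStatementResponse also exposes the returned items directly through items(), which is typed as List<Map<String, AttributeValue>>, so the untyped getValueForField call isn't needed. A minimal sketch reusing response2 from the query above:

// items() returns List<Map<String, AttributeValue>> (types from software.amazon.awssdk.services.dynamodb.model)
for (Map<String, AttributeValue> item : response2.items()) {
    System.out.println(item.get("assignmentId").s());
    System.out.println(item.get("thresholdValue").n());
}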

Related

Unable to Upsert data in DynamoDB using WriteRequest JAVA

I have a Java application which is building a DynamoDB client write request as:
WriteRequest.builder().putRequest(PutRequest.builder().item(attributeValueMap).build()).build();
The above request is replacing the items with the same PartitionKey and SortKey instead of upserting the data into the table. Any idea what I am doing wrong, or do I need to pass any additional parameter in PutRequest?
As the commenter mentions, you want to use UpdateItem if you want to "patch" an item. PutItem will replace the entire item. You can read more about the differences here: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithItems.html#WorkingWithItems.WritingData
This is a simplified code sample of how that works in the Java v2 SDK.
HashMap<String, AttributeValue> itemKey = new HashMap<>();
itemKey.put(key, AttributeValue.builder()
        .s(keyVal)
        .build());

HashMap<String, AttributeValueUpdate> updatedValues = new HashMap<>();
// Put the attributes/values you wish to update here.
// Attributes you don't include won't be affected by the update.
updatedValues.put(name, AttributeValueUpdate.builder()
        .value(AttributeValue.builder().s(updateVal).build())
        .action(AttributeAction.PUT)
        .build());

UpdateItemRequest request = UpdateItemRequest.builder()
        .tableName(tableName)
        .key(itemKey)
        .attributeUpdates(updatedValues)
        .build();

ddb.updateItem(request);
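Note that attributeUpdates is the older, attribute-update style of the API. If you prefer the expression-based style, a roughly equivalent sketch (same placeholder variables tableName, itemKey, name, and updateVal as above) would be:

UpdateItemRequest request = UpdateItemRequest.builder()
        .tableName(tableName)
        .key(itemKey)
        .updateExpression("SET #attr = :val") // only this attribute is touched
        .expressionAttributeNames(Map.of("#attr", name))
        .expressionAttributeValues(Map.of(":val", AttributeValue.builder().s(updateVal).build()))
        .build();

ddb.updateItem(request);

Either way, attributes not mentioned in the update are left untouched, which gives the upsert behaviour you're after.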

DynamoDB PartiQL pagination using SDK

I'm currently working on pagination in DynamoDB using the JS AWS SDK's executeStatement with PartiQL, but the returned object does not contain a NextToken (only the Items array), which is what's used to paginate.
This is what the code looks like (pretty simple):
const statement = `SELECT "user", "id" FROM "TABLE-X" WHERE "activity" = 'XXXX'`;
const params = { Statement: statement };

try {
  const posted = await dynamodb.executeStatement(params).promise();
  return { posted: posted };
} catch (err) {
  throw new Error(err);
}
I was wondering if anyone has dealt with pagination using PartiQL for DynamoDB.
Could this be because my partition key is a string type?
Still trying to figure it out.
Thanks in advance!
It turns out that if you want a NextToken, DO NOT use version 2 of the AWS SDK for JavaScript; use version 3. Version 3 will always return a NextToken, even if it is undefined.
From there you can figure out your limits, etc. (the default limit until you actually get a NextToken is 1 MB). You'll need to look into the DynamoDB v3 executeStatement method.
You can also look into DynamoDB paginators, which I've never used, but plan on studying.
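For comparison with the Java v2 SDK used in the main question above, the same pagination loop can be sketched by feeding nextToken() from each response back into the next ExecuteStatementRequest (the statement is just the example query from this question):

String nextToken = null;
do {
    ExecuteStatementResponse page = client.executeStatement(ExecuteStatementRequest.builder()
            .statement("SELECT \"user\", \"id\" FROM \"TABLE-X\" WHERE \"activity\" = 'XXXX'")
            .nextToken(nextToken) // null on the first request
            .build());
    page.items().forEach(System.out::println);
    nextToken = page.nextToken();
} while (nextToken != null);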

Ingest from storage with persistDetails = true does not save the ingest status result

I'm implementing a program to migrate a large amount of data to ADX based on the Ingest from Storage feature of ADX, and I need to check the status of each ingestion request when it finishes, but I'm facing an issue.
Based on the MS documentation here,
if I set persistDetails = true, for example with the command below, it should save the ingestion status, but currently this setting seems not to work (with or without it):
.ingest async into table MigrateTable
(
    h'correct blob url link'
)
with (
    jsonMappingReference = 'table_mapping',
    format = 'json',
    persistDetails = true
)
The above command returns an OperationId, and when I use it to check the ingestion status after the ingest task finishes, I always get this error message:
Error An admin command cannot be executed due to an invalid state: State='Operation 'DataIngestPull' does not persist its operation results' clientRequestId: KustoWebV2;
Can someone clarify the root cause of this for me? To me it seems like a bug in ADX.
1. Ingesting data directly against the Data Engine, by running .ingest commands, is usually not recommended, compared to using Queued Ingestion (motivation included in the link). Using Kusto's ingestion client library allows you to track the ingestion status. Some tools/services already do that for you, and you can consider using them directly, e.g. LightIngest or Azure Data Factory.
2. If you don't follow option 1, you can still look up the state/status of your command using the operation ID you get when using the async keyword, by using .show operations.
3. You can also use the client request ID to filter the result set of .show commands to view the state/status of your command.
4. If you're interested in looking specifically at failures, .show ingestion failures is also available to you.
The persistDetails option you specified in your .ingest command actually has no effect - as mentioned in the docs:
Not all control commands persist their results, and those that do usually do so by default on asynchronous executions only (using the async keyword). Please search the documentation for the specific command and check if it does (see, for example data export).
============ Update: sample code following the suggestion from Yoni ============
It turns out another member of my team had messed up the access rights in ADX; after fixing that, everything works fine.
I just have one concern related to PartiallySucceeded that needs clarification from @yoni or someone with better knowledge of that.
try
{
    var ingestProps = new KustoQueuedIngestionProperties(model.DatabaseName, model.IngestTableName)
    {
        ReportLevel = IngestionReportLevel.FailuresAndSuccesses,
        ReportMethod = IngestionReportMethod.Table,
        FlushImmediately = true,
        JSONMappingReference = model.IngestMappingName,
        AdditionalProperties = new Dictionary<string, string>
        {
            { "jsonMappingReference", $"{model.IngestMappingName}" },
            { "format", "json" }
        }
    };

    var sourceId = Guid.NewGuid();
    var clientResult = await IngestClient.IngestFromStorageAsync(model.FileBlobUrl, ingestProps, new StorageSourceOptions
    {
        DeleteSourceOnSuccess = true,
        SourceId = sourceId
    });

    var ingestionStatus = clientResult.GetIngestionStatusBySourceId(sourceId);
    while (ingestionStatus.Status == Status.Pending)
    {
        await Task.Delay(WaitingInterval);
        ingestionStatus = clientResult.GetIngestionStatusBySourceId(sourceId);
    }

    if (ingestionStatus.Status == Status.Succeeded)
    {
        return true;
    }

    LogUtils.TraceError(_logger, $"Error when ingest blob file events, error: {ingestionStatus.ErrorCode.FastGetDescription()}");
    return false;
}
catch (Exception e)
{
    return false;
}

DynamoDb - .NET Object Persistence Model - LoadAsync does not apply ScanCondition

I am fairly new to this realm and any help is appreciated.
I have a table in a DynamoDB database named Tenant, as below:
"TenantId" is the hash primary key and I have no other keys. I also have a field named "IsDeleted", which is a boolean.
Table Structure
I am trying to run a query to get the record with a specified "TenantId" while it is not deleted ("IsDeleted" == 0).
I can get a correct result by running the following code (returns 0 items):
var filter = new QueryFilter("TenantId", QueryOperator.Equal, "2235ed82-41ec-42b2-bd1c-d94fba2cf9cc");
filter.AddCondition("IsDeleted", QueryOperator.Equal, 0);

var dbTenant = await _genericRepository.FromQueryAsync(new QueryOperationConfig
{
    Filter = filter
}).GetRemainingAsync();
But no luck when I try to get it with the following code snippet; it returns the item even though it is deleted (returns 1 item):
var queryFilter = new List<ScanCondition>();
var scanCondition = new ScanCondition("IsDeleted", ScanOperator.Equal, new object[] { 0 });
queryFilter.Add(scanCondition);

var dbTenant2 = await _genericRepository.LoadAsync("2235ed82-41ec-42b2-bd1c-d94fba2cf9cc", new DynamoDBOperationConfig
{
    QueryFilter = queryFilter,
    ConditionalOperator = ConditionalOperatorValues.And
});
Any idea why the ScanCondition has no effect?
Later I also tried this (throws an exception):
var dbTenant2 = await _genericRepository.QueryAsync("2235ed82-41ec-42b2-bd1c-d94fba2cf9cc", new DynamoDBOperationConfig
{
    QueryFilter = new List<ScanCondition>
    {
        new ScanCondition("IsDeleted", ScanOperator.Equal, 0)
    }
}).GetRemainingAsync();
It throws with: "Message": "Must have one range key or a GSI index defined for the table Tenants"
Why does it complain about a range key or index? I'm calling:
public AsyncSearch<T> QueryAsync<T>(object hashKeyValue, DynamoDBOperationConfig operationConfig = null);
You simply can't query a table giving only a single primary key (only the hash key), because there is one and only one item for that primary key. The result of the Query would still be that single item, which is really a Load operation, not a Query. You can only query if you have a composite primary key in this case (hash key (TenantId) plus range key) or a GSI (which doesn't impose key uniqueness and therefore accepts duplicate keys on the index).
The second code snippet attempts to filter the Load. DynamoDBOperationConfig's QueryFilter has this description:
// Summary:
// Query filter for the Query operation operation. Evaluates the query results and
// returns only the matching values. If you specify more than one condition, then
// by default all of the conditions must evaluate to true. To match only some conditions,
// set ConditionalOperator to Or. Note: Conditions must be against non-key properties.
So it works only with Query operations.
Edit: So after reading your comments on this...
I don't think conditional expressions apply to read operations. The AWS documentation indicates they are for put or update operations. However, I'm not entirely sure about this, since I never needed to do a conditional Load. There is no CheckIfExists functionality in general, either; you have to read the item and see if it exists. A conditional load would still consume read throughput, so your only advantage would be NOT retrieving the item, in other words saving bandwidth (which is negligible for a single item).
My suggestion is to read it and filter it in your application layer; don't query for it. However, if you really need it, you can use TenantId as the hash key and IsDeleted as the range key. If you do so, you always have to query when you want to get a tenant, and with the query you can set the range key (IsDeleted) to 0 or 1. This isn't how I would do it; as I said, I would just read it and filter it in my application.
Another suggestion could be setting a GSI on the IsDeleted field and writing null when it is 0. This way the attribute appears in the index only when it is 1. A GSI on such an attribute is called a sparse index. Later, if you need to get all the tenants that are deleted (IsDeleted = 1), you can simply scan that entire index without conditions. Because you write null when it is 0, DynamoDB won't put the item in the index in the first place.

DynamoDB: Not able to fetch two columns with FilterExpression

I am new to AWS DynamoDB, so pardon any silly mistake. I was trying to fetch two columns from my Activity table. I also wanted to fetch only those items where the partition key starts with a specific string. The partition key has the format activity_EnrolledStudentName (e.g. Dance_studentName), so I wanted to fetch all those items from the table where the activity is Dance. I was trying to use the following query:
public List<StudentDomain> getAllStudents(String activity) {
    List<StudentDomain> scanResult = null;
    DynamoDBUtil dynamoDBUtil = new DynamoDBUtil();
    AmazonDynamoDB dynamoDBClient = dynamoDBUtil.getDynamoDBClient();
    DynamoDBMapper mapper = new DynamoDBMapper(dynamoDBClient);

    DynamoDBScanExpression scanExpression = new DynamoDBScanExpression();
    scanExpression.withProjectionExpression("studentId, ActivitySkills")
            .addFilterCondition(STUDENT_PRIMARY_KEY,
                    new Condition().withComparisonOperator(ComparisonOperator.BEGINS_WITH)
                            .withAttributeValueList(new AttributeValue().withS(activity)));

    scanResult = mapper.scan(StudentDomain.class, scanExpression);
    return scanResult;
}
However, I am getting the following error when I execute the above query:
com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: Can not use both expression and non-expression parameters in the same request: Non-expression parameters: {ScanFilter} Expression parameters: {ProjectionExpression} (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: TMS27PABBC2BS3UU7LID731G0FVV4KQNSO5AEMVJF66Q9ASUAAJG)
Can anyone please suggest where I am mistaken and which other query I should use instead?
It's not completely clear what you are trying to achieve, but if I understood correctly then you don't need a scan operation for that, which actually scans the whole table and afterwards filters the result.
DynamoDB dynamoDB = new DynamoDB(client);
Table table = dynamoDB.getTable("TableName");

QuerySpec spec = new QuerySpec()
        .withKeyConditionExpression("studentId = :v_id")
        .withValueMap(new ValueMap().withString(":v_id", studentId));

ItemCollection<QueryOutcome> items = table.query(spec);
Iterator<Item> iterator = items.iterator();
Item item = null;
while (iterator.hasNext()) {
    item = iterator.next();
    System.out.println(item.toJSONPretty());
}
The filter expression you are using is intended to be used as a filter on secondary attributes, not on the range or partition keys. At least this is my interpretation of the documentation.
Please read the query documentation http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryingJavaDocumentAPI.html
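As a side note, the ValidationException in the question comes from mixing the legacy ScanFilter (addFilterCondition) with a ProjectionExpression in the same request. One way around it, as a sketch, is to use only expression parameters on the DynamoDBScanExpression, keeping the original projection and a begins_with filter (STUDENT_PRIMARY_KEY and activity are the same names used in the question):

DynamoDBScanExpression scanExpression = new DynamoDBScanExpression()
        .withProjectionExpression("studentId, ActivitySkills")
        .withFilterExpression("begins_with(#pk, :activity)")
        .withExpressionAttributeNames(Collections.singletonMap("#pk", STUDENT_PRIMARY_KEY))
        .withExpressionAttributeValues(
                Collections.singletonMap(":activity", new AttributeValue().withS(activity)));

List<StudentDomain> scanResult = mapper.scan(StudentDomain.class, scanExpression);

Note this is still a Scan; begins_with on the partition key can't be turned into a Query key condition, since a Query requires an exact equality on the hash key.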
