JSONPath query - parse object of object

I'm trying to write a JSONPath query that selects specific objects based on a condition.
I would like to select all volumes whose name contains 'mydrive' and whose 'hosts' array is empty. Can you help me find the right JSONPath query?
Here is the JSON output:
{
"purefa_info": {
"volumes": {
"DATASTORE": {
"bandwidth": null,
"host_encryption_key_status": "none",
"hosts": [
{
"host": "esxi1",
"lun": 251
},
{
"host": "esxi2",
"lun": 251
}
]
},
"RC1Clone": {
"bandwidth": null,
"host_encryption_key_status": "none",
"hosts": []
},
"RC2Clone": {
"bandwidth": null,
"host_encryption_key_status": "none",
"hosts": []
},
"mydrive-0d32e3799a": {
"bandwidth": null,
"host_encryption_key_status": "none",
"hosts": []
},
"mydrive-0e35cb6455": {
"bandwidth": null,
"host_encryption_key_status": "none",
"hosts": [
{
"host": "esxi1",
"lun": 251
},
{
"host": "esxi2",
"lun": 251
}
]
},
"mydrive-55c61ab79c": {
"bandwidth": null,
"host_encryption_key_status": "none",
"hosts": []
}
}
}
}
I would like:
[
"$['purefa_info']['volumes']['mydrive-0d32e3799a']",
"$['purefa_info']['volumes']['mydrive-55c61ab79c']"
]
Can you help me?
Thanks

If you are using JSONPath-Plus, which is what https://jsonpath.com/ uses, you can use the expression below.
$.purefa_info.*[?(@property.match(/mydrive/) && @.hosts.length == 0)]
Note: @property is not part of the original JSONPath spec.
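For reference, a minimal Node sketch of running that expression with the jsonpath-plus package; the inlined data object is a trimmed-down version of the JSON above, and resultType: 'path' asks for the bracketed paths shown in the expected output:
const { JSONPath } = require('jsonpath-plus');

// Trimmed-down version of the purefa_info document from the question
const data = {
  purefa_info: {
    volumes: {
      'DATASTORE': { hosts: [{ host: 'esxi1', lun: 251 }] },
      'mydrive-0d32e3799a': { hosts: [] },
      'mydrive-55c61ab79c': { hosts: [] }
    }
  }
};

const paths = JSONPath({
  path: "$.purefa_info.*[?(@property.match(/mydrive/) && @.hosts.length == 0)]",
  json: data,
  resultType: 'path' // return paths like $['purefa_info']['volumes'][...] instead of values
});

console.log(paths);
// ["$['purefa_info']['volumes']['mydrive-0d32e3799a']",
//  "$['purefa_info']['volumes']['mydrive-55c61ab79c']"]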

Related

ConditionalCheckFailedException: The conditional request failed (Nested field)

I have a DynamoDB item as below:
RequestId:'1234567890' // Partition key
LastScanDateTime: '2023-01-12T11:00:00.111Z'
SubjectUpdateCounter: {
Maths: 1,
English: 1
}
I'm trying to update the item, and below is the update request.
{
"TableName": "EmpDetails",
"Key": {
"RequestId": {
"S": "1234567890"
}
},
"ConditionExpression": "(#subjectUpdateCounter > :subjectUpdateCounterLimit)",
"UpdateExpression": "ADD #subjectUpdateCounter :dec SET LastScanDateTime = :LastScanDateTime,",
"ExpressionAttributeValues": {
":dec": {
"N": "-1"
},
":LastScanDateTime": {
"S": "2023-02-13T18:14:52.143Z"
},
":subjectUpdateCounterLimit": {
"N": "0"
}
},
"ReturnValues": "NONE",
"ExpressionAttributeNames": {
"#subjectUpdateCounter": "SubjectUpdateCounter.Maths"
}
}
I get the following error:
ConditionalCheckFailedException: The conditional request failed
....
....
'$fault': 'client',
'$metadata': {
httpStatusCode: 400,
requestId: '12345ygfsdfagagdf',
extendedRequestId: undefined,
cfId: undefined,
attempts: 1,
totalRetryDelay: 0
},
__type: 'com.amazonaws.dynamodb.v20120810#ConditionalCheckFailedException'
}
My current value in SubjectUpdateCounter.Maths is greater than 0, so the condition should succeed and this query should decrement the value of SubjectUpdateCounter.Maths to 0.
Why is the query throwing the above exception?
Your issue is here:
"ExpressionAttributeNames": {
"#subjectUpdateCounter": "SubjectUpdateCounter.Maths"
}
This means DynamoDB is looking for a single attribute literally named "SubjectUpdateCounter.Maths", but there is none, since it's a nested value you are looking for.
Your request should look like the following:
{
"TableName": "EmpDetails",
"Key": {
"RequestId": {
"S": "1234567890"
}
},
"ConditionExpression": "(#subjectUpdateCounter.#maths > :subjectUpdateCounterLimit)",
"UpdateExpression": "ADD #subjectUpdateCounter.#maths :dec SET LastScanDateTime = :LastScanDateTime,",
"ExpressionAttributeValues": {
":dec": {
"N": "-1"
},
":LastScanDateTime": {
"S": "2023-02-13T18:14:52.143Z"
},
":subjectUpdateCounterLimit": {
"N": "0"
}
},
"ReturnValues": "NONE",
"ExpressionAttributeNames": {
"#subjectUpdateCounter": "SubjectUpdateCounter",
"#maths":"Maths"
}
}
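For completeness, here is a minimal sketch of issuing the corrected request with the AWS SDK for JavaScript v3 (the client setup and the async wrapper are assumptions, not part of the original answer):
const { DynamoDBClient, UpdateItemCommand } = require('@aws-sdk/client-dynamodb');

const client = new DynamoDBClient({});

async function decrementMathsCounter() {
  // The aliases #subjectUpdateCounter.#maths resolve to the nested path SubjectUpdateCounter.Maths
  await client.send(new UpdateItemCommand({
    TableName: 'EmpDetails',
    Key: { RequestId: { S: '1234567890' } },
    ConditionExpression: '#subjectUpdateCounter.#maths > :subjectUpdateCounterLimit',
    UpdateExpression: 'ADD #subjectUpdateCounter.#maths :dec SET LastScanDateTime = :LastScanDateTime',
    ExpressionAttributeNames: {
      '#subjectUpdateCounter': 'SubjectUpdateCounter',
      '#maths': 'Maths'
    },
    ExpressionAttributeValues: {
      ':dec': { N: '-1' },
      ':LastScanDateTime': { S: '2023-02-13T18:14:52.143Z' },
      ':subjectUpdateCounterLimit': { N: '0' }
    }
  }));
}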

Removing null fields in the DynamoDB record

I am reading from a DynamoDB table in the form of Map<String, AttributeValue>. The record looks something like this:
{
"name": {
"s": "simran",
"n": null,
"b": null,
"m": null,
"l": null,
"ss": null,
"ns": null,
"bs": null,
"null": null,
"bool": null
},
"id": {
"s": "100",
"n": null,
"b": null,
"m": null,
"l": null,
"ss": null,
"ns": null,
"bs": null,
"null": null,
"bool": null
}
}
What I want to achieve is this:
{
"name": {
"S": "simran"
},
"id": {
"S": "100"
}
}
The first JSON is extracted from this piece of code (record here is the incoming com.amazonaws.services.dynamodbv2.model.Record):
Map<String, AttributeValue> newImage = record.getDynamodb().getNewImage();
Is there an AWS SDK utility that could convert the first JSON into the second form? If I use toString(), I get JSON in this format (with unnecessary trailing commas after the attribute values), making it invalid for further parsing: { "name": { "s": "simran", }, "id": { "s": "100", } }
I used RecordObjectMapper to serialise the value:
https://github.com/awslabs/dynamodb-streams-kinesis-adapter/blob/master/src/main/java/com/amazonaws/services/dynamodbv2/streamsadapter/model/RecordObjectMapper.java
ObjectMapper objectMapper = new RecordObjectMapper();
String json = objectMapper.writeValueAsString(newImage); // newImage is the Map<String, AttributeValue> read above

Can I get the user email within a Velocity template of AWS Amplify?

When I query a resolver in my GraphQL API, in which I have added a $util.error($ctx) to return the context object, I get the following result (unnecessary values removed):
{
"data": {
"listXData": null
},
"errors": [
{
"message": {
"arguments": {},
"args": {},
"info": {
"fieldName": "listXData",
"variables": {},
"parentTypeName": "Query",
"selectionSetList": [
"items",
"items/id",
"items/createdAt",
"items/updatedAt",
"nextToken"
],
"selectionSetGraphQL": "{\n items {\n id\n createdAt\n updatedAt\n }\n nextToken\n}"
},
"request": {...},
"identity": {
"sub": "",
"issuer": "",
"username": "013fe9d2-95f7-4885-83ec-b7e2e0a1423f",
"sourceIp": "",
"claims": {
"origin_jti": "",
"sub": "",
"event_id": "",
"token_use": "",
"scope": "",
"auth_time": ,
"iss": "",
"exp": ,
"iat": ,
"jti": "",
"client_id": "",
"username": "013fe9d2-95f7-4885-83ec-b7e2e0a1423f"
},
"defaultAuthStrategy": "ALLOW"
},
"stash": {},
"source": null,
"result": {
"items": [],
"scannedCount": 0,
"nextToken": null
},
"error": null,
"prev": {
"result": {}
}
},
"errorType": null,
"data": null,
"errorInfo": null,
"path": [
"listXData"
],
"locations": [
{
"line": 2,
"column": 3,
"sourceName": "GraphQL request"
}
]
}
]
}
As you can see, the username is an ID; however, I would prefer to (also) have the email. Is it possible to get the user's email (within the Velocity template)?
Let me know if I need to add more details or if my question is unclear.
The identity context only returns the Cognito username for the user pool. You will need to set up pipeline functions to perform additional queries to get your user information. Here is one intro to setting them up.
At this point, it seems that it is not possible to do this purely in VTL.
I have implemented it using a Lambda function, as follows:
Lambda function (node):
/* Amplify Params - DO NOT EDIT
ENV
REGION
Amplify Params - DO NOT EDIT */
const aws = require('aws-sdk')
const cognitoidentityserviceprovider = new aws.CognitoIdentityServiceProvider({
apiVersion: '2016-04-18',
region: 'eu-west-1'
})
exports.handler = async (event) => {
// In an AppSync Lambda data source, the invocation payload arrives as the handler's first argument
if (!event.identity?.username) {
throw new Error('Not signed in')
}
const params = {
'AccessToken': event.request.headers.authorization
}
const result = await cognitoidentityserviceprovider.getUser(params).promise()
const email = result.UserAttributes.find(attribute => attribute.Name === 'email')
return JSON.stringify({ email })
}
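The request mapping template for the pipeline function (the InvokeGetEmailLambdaDataSource.req.vtl referenced below) is not shown in the question; here is a minimal sketch of what it would need to contain so that the Lambda receives identity and request on its event:
{
  "version": "2018-05-29",
  "operation": "Invoke",
  "payload": {
    "identity": $util.toJson($ctx.identity),
    "request": $util.toJson($ctx.request)
  }
}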
CustomResources.json
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "An auto-generated nested stack.",
"Metadata": {...},
"Parameters": {...},
"Resources": {
"GetEmailLambdaDataSourceRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": {
"Fn::If": [
"HasEnvironmentParameter",
{
"Fn::Join": [
"-",
[
"GetEmail17ec",
{
"Ref": "GetAttGraphQLAPIApiId"
},
{
"Ref": "env"
}
]
]
},
{
"Fn::Join": [
"-",
[
"GetEmail17ec",
{
"Ref": "GetAttGraphQLAPIApiId"
}
]
]
}
]
},
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "appsync.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
},
"Policies": [
{
"PolicyName": "InvokeLambdaFunction",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction"
],
"Resource": {
"Fn::If": [
"HasEnvironmentParameter",
{
"Fn::Sub": [
"arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:GetEmail-${env}",
{
"env": {
"Ref": "env"
}
}
]
},
{
"Fn::Sub": [
"arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:GetEmail",
{}
]
}
]
}
}
]
}
}
]
}
},
"GetEmailLambdaDataSource": {
"Type": "AWS::AppSync::DataSource",
"Properties": {
"ApiId": {
"Ref": "AppSyncApiId"
},
"Name": "GetEmailLambdaDataSource",
"Type": "AWS_LAMBDA",
"ServiceRoleArn": {
"Fn::GetAtt": [
"GetEmailLambdaDataSourceRole",
"Arn"
]
},
"LambdaConfig": {
"LambdaFunctionArn": {
"Fn::If": [
"HasEnvironmentParameter",
{
"Fn::Sub": [
"arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:GetEmail-${env}",
{
"env": {
"Ref": "env"
}
}
]
},
{
"Fn::Sub": [
"arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:GetEmail",
{}
]
}
]
}
}
},
"DependsOn": "GetEmailLambdaDataSourceRole"
},
"InvokeGetEmailLambdaDataSource": {
"Type": "AWS::AppSync::FunctionConfiguration",
"Properties": {
"ApiId": {
"Ref": "AppSyncApiId"
},
"Name": "InvokeGetEmailLambdaDataSource",
"DataSourceName": "GetEmailLambdaDataSource",
"FunctionVersion": "2018-05-29",
"RequestMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/pipelineFunctions/${ResolverFileName}",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
},
"ResolverFileName": {
"Fn::Join": [
".",
[
"InvokeGetEmailLambdaDataSource",
"req",
"vtl"
]
]
}
}
]
},
"ResponseMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/pipelineFunctions/${ResolverFileName}",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
},
"ResolverFileName": {
"Fn::Join": [
".",
[
"InvokeGetEmailLambdaDataSource",
"res",
"vtl"
]
]
}
}
]
}
},
"DependsOn": "GetEmailLambdaDataSource"
},
"IsOrganizationMember": {
"Type": "AWS::AppSync::FunctionConfiguration",
"Properties": {
"FunctionVersion": "2018-05-29",
"ApiId": {
"Ref": "AppSyncApiId"
},
"Name": "IsOrganizationMember",
"DataSourceName": "PermissionsPerOrganizationTable",
"RequestMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.isOrganizationMember.req.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
},
"ResponseMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.isOrganizationMember.res.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
}
}
},
"OrganizationAccessPipeline": {
"Type": "AWS::AppSync::Resolver",
"Properties": {
"ApiId": {
"Ref": "AppSyncApiId"
},
"TypeName": "Query",
"Kind": "PIPELINE",
"FieldName": "listXData",
"PipelineConfig": {
"Functions": [
{
"Fn::GetAtt": [
"InvokeGetEmailLambdaDataSource",
"FunctionId"
]
},
{
"Fn::GetAtt": [
"IsOrganizationMember",
"FunctionId"
]
}
]
},
"RequestMappingTemplate": "{}",
"ResponseMappingTemplate": "$util.toJson($ctx.result)"
}
}
},
"Conditions": {...},
"Outputs": {...}
}
The Lambda is created with the Amplify CLI, and IsOrganizationMember is a regular VTL function that has the user email available in $context.prev.result.
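As a rough sketch (the exact shape depends on how the Lambda's response is passed through the pipeline), the IsOrganizationMember request template could read the address along these lines:
## Assumes the Lambda's stringified response landed unchanged in $ctx.prev.result
#set( $payload = $util.parseJson($ctx.prev.result) )
## `email` is the full Cognito attribute object, so take its Value
#set( $email = $payload.email.Value )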

Azure Time Series Insights query: how to denote a variable such as 'cpu.thermal.average' in the query?

My series has a property named cpu.thermal.average, and I can see its values in the TSI explorer for the given time range.
When I execute the following request, "temperatures" only returns null.
I am not sure how to write the equivalent of $event.cpu.thermal.average.
Any idea?
{
"aggregateSeries": {
"timeSeriesId": [
"MySeries",
],
"searchSpan": {
"from": "2021-03-10T07:00:00Z",
"to": "2021-03-10T20:00:50Z"
},
"interval": "PT3600M",
"filter": null,
"inlineVariables": {
"latitudes": {
"kind": "numeric",
"value": {
"tsx": "$event.latitude"
},
"filter": null,
"aggregation": {
"tsx": "avg($value)"
}
},
"temperatures": {
"kind": "numeric",
"value": {
"tsx": "$event['cpu.thermal.average']"
},
"filter": null,
"aggregation": {
"tsx": "avg($value)"
}
}
},
"projectedVariables": [
"latitudes",
"temperatures"
]
}
}
From here: https://learn.microsoft.com/en-us/rest/api/time-series-insights/reference-time-series-expression-syntax#value-expressions
When accessing nested properties, the Type is required.
It should work if you use $event.cpu.thermal.average.Double
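Applied to the query above, the temperatures inline variable would then look like this (only the tsx value changes):
"temperatures": {
  "kind": "numeric",
  "value": {
    "tsx": "$event.cpu.thermal.average.Double"
  },
  "filter": null,
  "aggregation": {
    "tsx": "avg($value)"
  }
}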

Easiest way to import a simple csv file to a graph with OrientDB ETL

I would like to import a very simple directed graph from a CSV file into OrientDB. Concretely, the file is the roadNet-PA dataset from the SNAP collection https://snap.stanford.edu/data/roadNet-PA.html. The first lines of the file are as follows:
# Directed graph (each unordered pair of nodes is saved once)
# Pennsylvania road network
# Nodes: 1088092 Edges: 3083796
# FromNodeId ToNodeId
0 1
0 6309
0 6353
1 0
6353 0
6353 6354
There is only one type of vertex (a road intersection) and the edges carry no information (I suppose OrientDB lightweight edges are the best option for this). Note also that the node IDs are separated by tabs.
I've tried to create a simple ETL configuration to import the file, with no success. Here it is:
{
"config": {
"log": "debug"
},
"source" : {
"file": { "path": "/tmp/roadNet-PA.csv" }
},
"extractor": { "row": {} },
"transformers": [
{ "csv": { "separator": " ", "skipFrom": 1, "skipTo": 4 } },
{ "vertex": { "class": "Intersection" } },
{ "edge": { "class": "Road" } }
],
"loader": {
"orientdb": {
"dbURL": "remote:localhost/roads",
"dbType": "graph",
"classes": [
{"name": "Intersection", "extends": "V"},
{"name": "Road", "extends": "E"}
], "indexes": [
{"class":"Intersection", "fields":["id:integer"], "type":"UNIQUE" }
]
}
}
}
The ETL runs, but it does not import the file as I expect. I suppose the problem is in the transformers. My idea is to read the CSV line by line and create an edge connecting both vertices, but I'm not sure how to express this in an ETL file. Any ideas?
Try this:
{
"config": {
"log": "debug"
},
"source" : {
"file": { "path": "/tmp/roadNet-PA.csv" }
},
"extractor": { "row": {} },
"transformers": [
{ "csv": { "separator": "\t", "skipFrom": 1, "skipTo": 4,
"columnsOnFirstLine": false,
"columns":["id", "to"] } },
{ "vertex": { "class": "Intersection" } },
{ "merge": { "joinFieldName":"id", "lookup":"Intersection.id" } },
{ "edge": {
"class": "Road",
"joinFieldName": "to",
"lookup": "Intersection.id",
"unresolvedLinkAction": "CREATE"
}
}
],
"loader": {
"orientdb": {
"dbURL": "remote:localhost/roads",
"dbType": "graph",
"wal": false,
"batchCommit": 1000,
"tx": true,
"txUseLog": false,
"useLightweightEdges" : true,
"classes": [
{"name": "Intersection", "extends": "V"},
{"name": "Road", "extends": "E"}
], "indexes": [
{"class":"Intersection", "fields":["id:integer"], "type":"UNIQUE" }
]
}
}
}
To speed up loading, I suggest you shut down the server and run the import with "plocal:" instead of "remote:", for example by replacing the existing dbURL with:
"dbURL": "plocal:/orientdb/databases/roads",
It finally worked. I moved the merge transformer before the vertex transformer, as suggested by Luca. I also changed the 'id' field to 'from' to avoid the error "property key is reserved for all elements id". Here is the snippet:
{
"config": {
"log": "debug"
},
"source" : {
"file": { "path": "/tmp/roads.csv" }
},
"extractor": { "row": {} },
"transformers": [
{ "csv": { "separator": "\t",
"columnsOnFirstLine": false,
"columns":["from", "to"] } },
{ "merge": { "joinFieldName":"from", "lookup":"Intersection.from" } },
{ "vertex": { "class": "Intersection" } },
{ "edge": {
"class": "Road",
"joinFieldName": "to",
"lookup": "Intersection.from",
"unresolvedLinkAction": "CREATE"
}
}
],
"loader": {
"orientdb": {
"dbURL": "remote:localhost/roads",
"dbType": "graph",
"wal": false,
"batchCommit": 1000,
"tx": true,
"txUseLog": false,
"useLightweightEdges" : true,
"classes": [
{"name": "Intersection", "extends": "V"},
{"name": "Road", "extends": "E"}
], "indexes": [
{"class":"Intersection", "fields":["from:integer"], "type":"UNIQUE" }
]
}
}
}
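Assuming the configuration above is saved as /tmp/roads-etl.json, the ETL can then be run with the oetl script that ships in the OrientDB bin directory:
cd $ORIENTDB_HOME/bin
./oetl.sh /tmp/roads-etl.json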
