I'm using DynamoDB Local. I'm trying to assign two addresses to one person, but I'm getting a "Duplicate Key" error.
I want to add more countries and more addresses for a single person, but whenever I try to add another country and address for that person, I get a Duplicate Key error.
Can someone please help me out? Any help is appreciated.
Here is the table description from my localhost instance:
{
"AttributeDefinitions": [
{
"AttributeName": "AddressID",
"AttributeType": "S"
},
{
"AttributeName": "CustomerID",
"AttributeType": "S"
},
{
"AttributeName": "Country",
"AttributeType": "S"
}
],
"TableName": "Addresses",
"KeySchema": [
{
"AttributeName": "AddressID",
"KeyType": "HASH"
},
{
"AttributeName": "CustomerID",
"KeyType": "RANGE"
}
],
"TableStatus": "ACTIVE",
"CreationDateTime": "2021-04-06T09:06:29.622Z",
"ProvisionedThroughput": {
"LastIncreaseDateTime": "1970-01-01T00:00:00.000Z",
"LastDecreaseDateTime": "1970-01-01T00:00:00.000Z",
"NumberOfDecreasesToday": 0,
"ReadCapacityUnits": 5,
"WriteCapacityUnits": 5
},
"TableSizeBytes": 732,
"ItemCount": 6,
"TableArn": "arn:aws:dynamodb:ddblocal:000000000000:table/Addresses",
"GlobalSecondaryIndexes": [
{
"IndexName": "UserIndex",
"KeySchema": [
{
"AttributeName": "AddressID",
"KeyType": "HASH"
},
{
"AttributeName": "Country",
"KeyType": "RANGE"
}
],
"Projection": {
"ProjectionType": "ALL"
},
"IndexStatus": "ACTIVE",
"ProvisionedThroughput": {
"ReadCapacityUnits": 5,
"WriteCapacityUnits": 5
},
"IndexSizeBytes": 732,
"ItemCount": 6,
"IndexArn": "arn:aws:dynamodb:ddblocal:000000000000:table/Addresses/index/UserIndex"
}
]
}
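For reference, the table's primary key is the composite pair (AddressID, CustomerID), and that pair must be unique per item. A toy sketch of the constraint in plain Python (not the DynamoDB API; the names and values are illustrative):

```python
# Toy illustration: the table's primary key is the composite
# (AddressID, CustomerID). Each item must use a distinct pair;
# reusing the same pair is what triggers the duplicate-key error.
table = {}

def put_item(address_id, customer_id, attrs):
    key = (address_id, customer_id)  # composite primary key
    if key in table:
        raise ValueError(f"Duplicate key: {key}")
    table[key] = attrs

put_item("addr-1", "cust-1", {"Country": "US"})
put_item("addr-2", "cust-1", {"Country": "DE"})  # same person, new AddressID: fine

try:
    put_item("addr-1", "cust-1", {"Country": "FR"})  # same pair: rejected
except ValueError as e:
    print(e)
```

So a second address for the same person needs a new AddressID; only the (AddressID, CustomerID) pair has to be unique, not either attribute on its own.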
I have the graph model below, which represents the sub-pattern I'd like to traverse or fetch; the nodes and their properties are shown as well.
The expected response to my query would look something like the following, where 's', 'c', 'aid', 'qid', 'p', 'r1', and 'r2' are the nodes that make up the sub-pattern or subgraph:
[
{
"s": {
"id": "345fbdad-9c67-47bb-9f3b-cf50c8cdbee4",
"label": "severity",
"type": "vertex",
"properties": {
"severity": [
{
"id": "a6a9e38f-0802-48b6-ac37-490f45e824e9",
"value": "High"
}
],
"pk": [
{
"id": "345fbdad-9c67-47bb-9f3b-cf50c8cdbee4|pk",
"value": "pk"
}
]
}
},
"c": {
"id": "345fbdad-9c67-47bb-9f3b-cf50c8cdbee4",
"label": "cve",
"type": "vertex",
"properties": {
"cve_id": [
{
"id": "a6a9e38f-0802-48b6-ac37-490f45e824e9",
"value": "CVE-xxxx-xxxx"
}
],
"publishedOn": [
{
"id": "fc5dde4d-c027-4c19-9b16-b3314b2b10c6",
"value": "xxx"
}
],
"pk": [
{
"id": "345fbdad-9c67-47bb-9f3b-cf50c8cdbee4|pk",
"value": "pk"
}
]
}
},
"aid": {
"id": "345fbdad-9c67-47bb-9f3b-cf50c8cdbee4",
"label": "aid",
"type": "vertex",
"properties": {
"aid": [
{
"id": "a6a9e38f-0802-48b6-ac37-490f45e824e9",
"value": "xxxx-xxxx"
}
],
"pk": [
{
"id": "345fbdad-9c67-47bb-9f3b-cf50c8cdbee4|pk",
"value": "pk"
}
]
}
},
"qid": {
"id": "345fbdad-9c67-47bb-9f3b-cf50c8cdbee4",
"label": "qid",
"type": "vertex",
"properties": {
"qid": [
{
"id": "a6a9e38f-0802-48b6-ac37-490f45e824e9",
"value": "xxxx-xxxx"
}
],
"pk": [
{
"id": "345fbdad-9c67-47bb-9f3b-cf50c8cdbee4|pk",
"value": "pk"
}
]
}
},
"p": {
"id": "345fbdad-9c67-47bb-9f3b-cf50c8cdbee4",
"label": "package",
"type": "vertex",
"properties": {
"name": [
{
"id": "a6a9e38f-0802-48b6-ac37-490f45e824e9",
"value": "xxxxx"
}
],
"version": [
{
"id": "fc5dde4d-c027-4c19-9b16-b3314b2b10c6",
"value": "xxx"
}
],
"pk": [
{
"id": "345fbdad-9c67-47bb-9f3b-cf50c8cdbee4|pk",
"value": "pk"
}
]
}
},
"r1": {
"id": "345fbdad-9c67-47bb-9f3b-cf50c8cdbee4",
"label": "release",
"type": "vertex",
"properties": {
"source": [
{
"id": "a6a9e38f-0802-48b6-ac37-490f45e824e9",
"value": "xxxx-xxxx"
}
],
"status": [
{
"id": "fc5dde4d-c027-4c19-9b16-b3314b2b10c6",
"value": "xxx"
}
],
"pk": [
{
"id": "345fbdad-9c67-47bb-9f3b-cf50c8cdbee4|pk",
"value": "pk"
}
]
}
},
"r2": {
"id": "345fbdad-9c67-47bb-9f3b-cf50c8cdbee4",
"label": "release",
"type": "vertex",
"properties": {
"source": [
{
"id": "a6a9e38f-0802-48b6-ac37-490f45e824e9",
"value": "xxxx-xxxx"
}
],
"status": [
{
"id": "fc5dde4d-c027-4c19-9b16-b3314b2b10c6",
"value": "xxx"
}
],
"pk": [
{
"id": "345fbdad-9c67-47bb-9f3b-cf50c8cdbee4|pk",
"value": "pk"
}
]
}
}
},
{
....
....
},
{
....
..
}
]
My question is: how do I build my traversal query to achieve this end result?
What I have so far is the following, but the project() step is not working as expected:
g.V().hasLabel('cve').as('c').and(
__.in('severity').as('s'),
__.out('cve_to_aid').as('aid').and(
__.out('has_qid').as('qid'),
__.in('package_to_aid').as('p'),
or(
__.in('r1_to_aid').has('status', 'Patched').as('r1'),
__.in('r2_to_aid').has('status', 'Patched').as('r2')
)
)
).project('c', 's', 'aid', 'qid', 'p', 'r1', 'r2').
by(('c').values('cve_id')).
by(('s').values('severity')).
by(('aid').values('aid')).
by(('qid').values('qid')).
by(('p').values()).
by(('r1').values()).
by(('r2').values())
I am doing this on Cosmos DB, so please only provide answers using the supported steps listed here: https://learn.microsoft.com/en-us/azure/cosmos-db/gremlin/support
It is possible to nest project() steps, e.g. on the TinkerGraph:
gremlin> g = TinkerFactory.createModern().traversal()
==>graphtraversalsource[tinkergraph[vertices:6 edges:6], standard]
gremlin> g.V(1).as('x').project('x').by(
select('x').project('id', 'label','properties').by(id).by(label).by(
project('name').by(properties())
)
)
==>[x:[id:1,label:person,properties:[name:vp[name->marko]]]]
gremlin>
but then you end up coding your entire data model into your query.
In full TinkerPop you could turn your result into a subgraph with the subgraph() step and write it to GraphSON with the io() step. In Cosmos DB you can add the returned vertices to a TinkerGraph instance client-side and again use the io() step to serialize the TinkerGraph to GraphSON.
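As a rough illustration of the client-side half of that idea, here is a plain-Python sketch. It is illustrative only: it uses ordinary dicts and the json module rather than a real TinkerGraph and the io() step, and the vertex values are made up.

```python
import json

# Collect the vertices returned from the query into a plain structure and
# serialize it to a GraphSON-like JSON document client-side.
returned_vertices = [
    {"id": "v1", "label": "cve", "properties": {"cve_id": "CVE-xxxx-xxxx"}},
    {"id": "v2", "label": "severity", "properties": {"severity": "High"}},
]

subgraph = {"vertices": returned_vertices, "edges": []}
serialized = json.dumps(subgraph, indent=2)
print(serialized)
```

A real TinkerGraph round-trip through io() requires a JVM TinkerPop runtime; the point here is only that the assembly and serialization can happen on the client after the query returns.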
I have this JSON format; I'm making an API using ASP.NET.
{
"0": {
"order_id": 11748,
"complete_date": "2021-04-19 14:48:41",
"shipping_code": "aramex.aramex",
"awbs": [
{
"aramex_id": "1314",
"order_id": "11748",
"awb_number": "46572146154",
"reference_number": "11748",
"date_added": "2021-03-04 03:46:58"
}
],
"payment": {
"method": {
"name": "الدفع عند الاستلام",
"code": "cod"
},
"invoice": [
{
"code": "sub_total",
"value": "120.8700",
"value_string": "120.8700 SAR",
"title": "الاجمالي"
},
{
"code": "shipping",
"value": "0.0000",
"value_string": "0.0000 SAR",
"title": "ارمكس"
},
{
"code": "coupon",
"value": "-13.9000",
"value_string": "-13.9000 SAR",
"title": "قسيمة التخفيض(RMP425)"
},
{
"code": "cashon_delivery_fee",
"value": "5.0000",
"value_string": "5.0000 SAR",
"title": "رسوم الدفع عند الاستلام"
},
{
"code": "tax",
"value": "18.1300",
"value_string": "18.1300 SAR",
"title": " ضريبة القيمة المضافة (15%)"
},
{
"code": "total",
"value": "130.1000",
"value_string": "130.1000 SAR",
"title": "الاجمالي النهائي"
}
]
},
"product": [
{
"id": 69,
"name": "مخلط 4 أو دو بيرفيوم للجنسين - 100 مل",
"sku": "45678643230",
"weight": "0.50000000",
"quantity": 1,
"productDiscount": "",
"images": []
}
]
}
}
How can I reach order_id? I created an object, let's call it obj1, tried a foreach over obj1, and stored obj1.order_id into a variable.
It stored null in the variable. The "0" is the numbering of the orders, which starts 0, 1, 2, etc.
You can deserialize that JSON to a Dictionary<string, dynamic> without creating a new class, as follows (this uses Newtonsoft.Json's JsonConvert):
var values = JsonConvert.DeserializeObject<Dictionary<string, dynamic>>(json);
var orderId = values["0"]["order_id"].ToString();
This will give you 11748 as a result.
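For illustration, the same access pattern in Python, with a trimmed-down payload: the orders live under the numeric string keys "0", "1", ..., so you index by the string key first and then by field name.

```python
import json

# Trimmed-down version of the payload: each order sits under a numeric
# string key ("0", "1", ...), not inside a JSON array.
payload = """
{
  "0": {"order_id": 11748, "complete_date": "2021-04-19 14:48:41"},
  "1": {"order_id": 11749, "complete_date": "2021-04-20 09:12:03"}
}
"""

orders = json.loads(payload)

# Iterate every order regardless of how many numeric keys there are.
for key, order in sorted(orders.items()):
    print(key, order["order_id"])

first_order_id = orders["0"]["order_id"]  # -> 11748
```

This is why a plain foreach over a typed object returned null: the top level is a map keyed by strings, not a list, so the lookup has to go through the "0" key (or iterate the dictionary's entries).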
I have this table test1 with a hash key s1 and range key s2 and the table has a global secondary index gsi1 with a hash key s3 and range key s4.
$ aws dynamodb describe-table \
--table-name test1
{
"Table": {
"AttributeDefinitions": [
{
"AttributeName": "s1",
"AttributeType": "S"
},
{
"AttributeName": "s2",
"AttributeType": "S"
},
{
"AttributeName": "s3",
"AttributeType": "S"
},
{
"AttributeName": "s4",
"AttributeType": "S"
}
],
"TableName": "test1",
"KeySchema": [
{
"AttributeName": "s1",
"KeyType": "HASH"
},
{
"AttributeName": "s2",
"KeyType": "RANGE"
}
],
"TableStatus": "ACTIVE",
"CreationDateTime": "2020-11-19T23:36:40.457000+08:00",
"ProvisionedThroughput": {
"LastIncreaseDateTime": "1970-01-01T08:00:00+08:00",
"LastDecreaseDateTime": "1970-01-01T08:00:00+08:00",
"NumberOfDecreasesToday": 0,
"ReadCapacityUnits": 3,
"WriteCapacityUnits": 3
},
"TableSizeBytes": 50,
"ItemCount": 3,
"TableArn": "arn:aws:dynamodb:ap-southeast-1:000000000000:table/test1",
"GlobalSecondaryIndexes": [
{
"IndexName": "gsi1",
"KeySchema": [
{
"AttributeName": "s3",
"KeyType": "HASH"
},
{
"AttributeName": "s4",
"KeyType": "RANGE"
}
],
"Projection": {
"ProjectionType": "ALL"
},
"IndexStatus": "ACTIVE",
"ProvisionedThroughput": {
"ReadCapacityUnits": 3,
"WriteCapacityUnits": 3
},
"IndexSizeBytes": 37,
"ItemCount": 1,
"IndexArn": "arn:aws:dynamodb:ddblocal:000000000000:table/test1/index/gsi1"
}
]
}
}
I have this item. Note that s4 is not there.
$ aws dynamodb get-item \
--table-name test1 \
--key '{"s1":{"S":"chen"}, "s2":{"S":"yan"}}'
{
"Item": {
"s3": {
"S": "fei"
},
"s1": {
"S": "chen"
},
"s2": {
"S": "yan"
}
}
}
I tried to query the index gsi1 for that item with this query and got the following error.
$ aws dynamodb query \
--table-name test1 \
--index-name gsi1 \
--expression-attribute-values '{
":s3": {"S": "fei"},
":s4": {"NULL": true}
}' \
--key-condition-expression 's3 = :s3 AND s4 = :s4'
An error occurred (ValidationException) when calling the Query operation: One or more parameter values were invalid: Condition parameter type does not match schema type
I also tried this query and got no items.
$ aws dynamodb query \
--table-name test1 \
--index-name gsi1 \
--expression-attribute-values '{
":s3": {"S": "fei"}
}' \
--key-condition-expression 's3 = :s3'
{
"Items": [],
"Count": 0,
"ScannedCount": 0,
"ConsumedCapacity": null
}
Any idea how I can query that index for that item?
You can't.
If an item doesn't have values for all of the GSI's key attributes, it isn't added to the GSI at all.
This is known as a "sparse index".
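A toy illustration of the sparse-index rule in plain Python (not the DynamoDB API): only items that carry values for both GSI key attributes appear in the index.

```python
# Items as stored in the base table; the first one (from the question)
# has s3 but no s4.
items = [
    {"s1": "chen", "s2": "yan", "s3": "fei"},      # no s4 -> not in gsi1
    {"s1": "a", "s2": "b", "s3": "x", "s4": "y"},  # both keys -> indexed
]

# gsi1 only materializes items that define both of its key attributes.
gsi1 = [item for item in items if "s3" in item and "s4" in item]

print(len(gsi1))  # the item without s4 is simply absent from the index
```

That matches the observed output: the query on gsi1 scans zero items because the item never made it into the index, and a key condition like s4 = {"NULL": true} is rejected because NULL doesn't match the S type declared for s4 in the schema.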
I am new to NoSQL and MongoDB, so please don't bash. I have used SQL databases in the past, but am now looking to leverage the scalability of NoSQL. One application that comes to mind is the collection of experimental results, where they are serialized in some manner with a start date, end date, part number, serial number, etc. Along with each experiment, there are many "measurements" collected, but the list of measurements may be unique in each experiment.
I am looking for ideas in how to structure the document to achieve the follow tasks:
1) Query based on date ranges, part numbers, and serial numbers
2) View the results in a "spreadsheet"-style table
3) Perform statistical calculations, perhaps with R, on the different "measurements"
An example might look like:
[
{
"_id": {
"$oid": "5e680d6063cb144f9d1be261"
},
"StartDate": {
"$date": {
"$numberLong": "1583841600000"
}
},
"EndDate": {
"$date": {
"$numberLong": "1583842007000"
}
},
"PartNumber": "1Z45NP7X",
"SerialNumber": "U84A3102",
"Status": "Acceptable",
"Results": [
{
"Sensor": "Pressure",
"Value": "14.68453",
"Units": "PSIA",
"Flag": "1"
},
{
"Sensor": "Temperature",
"Value": {
"$numberDouble": "68.43"
},
"Units": "DegF",
"Flag": {
"$numberInt": "1"
}
},
{
"Sensor": "Velocity",
"Value": {
"$numberDouble": "12.4"
},
"Units": "ft/s",
"Flag": {
"$numberInt": "1"
}
}
]
},
{
"_id": {
"$oid": "5e68114763cb144f9d1be263"
},
"StartDate": {
"$date": {
"$numberLong": "1583842033000"
}
},
"EndDate": {
"$date": {
"$numberLong": "1583842434000"
}
},
"PartNumber": "1Z45NP7X",
"SerialNumber": "U84A3103",
"Status": "Acceptable",
"Results": [
{
"Sensor": "Pressure",
"Value": "14.70153",
"Units": "PSIA",
"Flag": "1"
},
{
"Sensor": "Temperature",
"Value": {
"$numberDouble": "68.55"
},
"Units": "DegF",
"Flag": {
"$numberInt": "1"
}
},
{
"Sensor": "Velocity",
"Value": {
"$numberDouble": "12.7"
},
"Units": "ft/s",
"Flag": {
"$numberInt": "1"
}
}
]
},
{
"_id": {
"$oid": "5e68115f63cb144f9d1be264"
},
"StartDate": {
"$date": {
"$numberLong": "1583842464000"
}
},
"EndDate": {
"$date": {
"$numberLong": "1583842434000"
}
},
"PartNumber": "1Z45NP7X",
"SerialNumber": "U84A3104",
"Status": "Acceptable",
"Results": [
{
"Sensor": "Pressure",
"Value": "14.59243",
"Units": "PSIA",
"Flag": "1"
},
{
"Sensor": "Weight",
"Value": {
"$numberDouble": "67.93"
},
"Units": "lbf",
"Flag": {
"$numberInt": "1"
}
},
{
"Sensor": "Torque",
"Value": {
"$numberDouble": "122.33"
},
"Units": "ft-lbf",
"Flag": {
"$numberInt": "1"
}
}
]
}
]
Another approach might be:
[
{
"_id": {
"$oid": "5e680d6063cb144f9d1be261"
},
"StartDate": {
"$date": {
"$numberLong": "1583841600000"
}
},
"EndDate": {
"$date": {
"$numberLong": "1583842007000"
}
},
"PartNumber": "1Z45NP7X",
"SerialNumber": "U84A3102",
"Status": "Acceptable",
"Pressure (PSIA)" : "14.68453",
"Pressure - Flag": "1",
"Temperature (degF)": "68.43",
"Temperature - Flag": "1",
"Velocity (ft/s)": "12.4",
"Velocity Flag": "1"
},
{
"_id": {
"$oid": "5e68114763cb144f9d1be263"
},
"StartDate": {
"$date": {
"$numberLong": "1583842033000"
}
},
"EndDate": {
"$date": {
"$numberLong": "1583842434000"
}
},
"PartNumber": "1Z45NP7X",
"SerialNumber": "U84A3103",
"Status": "Acceptable",
"Pressure (PSIA)" : "14.70153",
"Pressure - Flag": "1",
"Temperature (degF)": "68.55",
"Temperature - Flag": "1",
"Velocity (ft/s)": "12.7",
"Velocity Flag": "1"
},
{
"_id": {
"$oid": "5e68115f63cb144f9d1be264"
},
"StartDate": {
"$date": {
"$numberLong": "1583842464000"
}
},
"EndDate": {
"$date": {
"$numberLong": "1583842434000"
}
},
"PartNumber": "1Z45NP7X",
"SerialNumber": "U84A3104",
"Status": "Acceptable",
"Pressure (PSIA)" : "14.59243",
"Pressure - Flag": "1",
"Weight (lbf)": "67.93",
"Weight - Flag": "1",
"Torque (ft-lbf)": "122.33",
"Torque - Flag": : "1"
}
]
An example table might look like this (with the columns aligned):
StartDate             EndDate               PartNumber  SerialNumber  Pressure  Pressure-Flag  Temperature  Temperature-Flag  Velocity  Velocity-Flag  Torque  Torque-Flag  Weight  Weight-Flag
2020-03-10T12:00:00Z  2020-03-10T12:06:47Z  1Z45NP7X    U84A3102      14.68453  1              68.43        1                 12.4      1              N/A     N/A          N/A     N/A
2020-03-10T12:07:13Z  2020-03-10T12:13:54Z  1Z45NP7X    U84A3103      14.70153  1              68.55        1                 12.7      1              N/A     N/A          N/A     N/A
2020-03-10T12:07:13Z  2020-03-10T12:13:54Z  1Z45NP7X    U84A3104      14.59243  1              N/A          N/A               N/A       N/A            122.33  1            67.93   1
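For what it's worth, a spreadsheet-style row like the table above can be pivoted out of the first (nested) schema at read time, so the documents themselves don't need the flat layout. A minimal sketch in Python, with field names taken from the example documents:

```python
# Two experiment documents in the first (nested) schema, abbreviated.
experiments = [
    {
        "PartNumber": "1Z45NP7X",
        "SerialNumber": "U84A3102",
        "Results": [
            {"Sensor": "Pressure", "Value": 14.68453, "Units": "PSIA", "Flag": 1},
            {"Sensor": "Temperature", "Value": 68.43, "Units": "DegF", "Flag": 1},
        ],
    },
    {
        "PartNumber": "1Z45NP7X",
        "SerialNumber": "U84A3104",
        "Results": [
            {"Sensor": "Pressure", "Value": 14.59243, "Units": "PSIA", "Flag": 1},
            {"Sensor": "Weight", "Value": 67.93, "Units": "lbf", "Flag": 1},
        ],
    },
]

def to_row(doc):
    """Flatten one experiment document into a spreadsheet-style row."""
    row = {"PartNumber": doc["PartNumber"], "SerialNumber": doc["SerialNumber"]}
    for r in doc["Results"]:
        row[f'{r["Sensor"]} ({r["Units"]})'] = r["Value"]
        row[f'{r["Sensor"]} - Flag'] = r["Flag"]
    return row

rows = [to_row(d) for d in experiments]
# Sensors an experiment didn't record simply appear as absent keys
# (rendered as N/A in the table).
```

With 200+ possible sensors, this keeps each stored document compact (only the sensors actually measured) while still producing the wide table on demand; the same pivot can be done server-side with an aggregation pipeline.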
Any thoughts on the best structure? In reality, there might be 200+ "sensor values".
Thanks,
DG
I need to be able to select elements within a JSON document based on the values in sub-elements which, unfortunately, reside in a list of key-value pairs (this is the structure I have to work with). I'm using Jayway 2.4.0.
Here is the JSON document:
{
"topLevelArray": [
{
"elementId": "Elem1",
"keyValuePairs": [
{
"key": "Length",
"value": "10"
},
{
"key": "Width",
"value": "3"
},
{
"key": "Producer",
"value": "alpha"
}
]
},
{
"elementId": "Elem2",
"keyValuePairs": [
{
"key": "Length",
"value": "20"
},
{
"key": "Width",
"value": "8"
},
{
"key": "Producer",
"value": "beta"
}
]
},
{
"elementId": "Elem3",
"keyValuePairs": [
{
"key": "Length",
"value": "15"
},
{
"key": "Width",
"value": "5"
},
{
"key": "Producer",
"value": "beta"
}
]
}
]
}
Here is the JsonPath I thought would do the trick:
$..topLevelArray[ ?( @.keyValuePairs[ ?(@.key=='Producer' && @.value=='beta') ] ) ]
and
$.topLevelArray[ ?( @.keyValuePairs[ ?(@.key=='Producer' && @.value=='beta') ] ) ]
Unfortunately, both are returning everything in the list, including the entry with a Producer of 'alpha'. Thanks in advance.
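For clarity, here is the selection being attempted, expressed in plain Python rather than JsonPath: keep an element only when a single key-value pair has both key == 'Producer' and value == 'beta'.

```python
# Abbreviated version of the document from the question.
doc = {
    "topLevelArray": [
        {"elementId": "Elem1",
         "keyValuePairs": [{"key": "Producer", "value": "alpha"}]},
        {"elementId": "Elem2",
         "keyValuePairs": [{"key": "Producer", "value": "beta"}]},
        {"elementId": "Elem3",
         "keyValuePairs": [{"key": "Producer", "value": "beta"}]},
    ]
}

# Keep an element only if SOME pair matches both conditions on the
# same pair, not if 'Producer' and 'beta' each appear somewhere.
matches = [
    e for e in doc["topLevelArray"]
    if any(kv["key"] == "Producer" and kv["value"] == "beta"
           for kv in e["keyValuePairs"])
]

print([e["elementId"] for e in matches])  # expect Elem2 and Elem3 only
```

The subtlety the path expression has to capture is that both conditions must hold on the same element of keyValuePairs, which is what the inner filter is meant to enforce.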