I'm trying to read a JSON object that has nested lists. It looks like this:
[{
"id": 70070037001,
"text": "List 1",
"isleaf": 0,
"children": [
{
"oid": 100,
"text": "Innerlistobject100",
"isleaf": 0,
"children": [
{
"sid": 1000,
"text": "Innerlistobject1000",
"isleaf": 1
},
{
"sid": 2000,
"text": "Innerlistobject2000",
"isleaf": 1
}
]
},
{
"oid": 200,
"text": "Innerlistobject200",
"isleaf": 0,
"children": [
{
"sid": 1000,
"text": "Innerlistobject1000",
"isleaf": 1
},
{
"sid": 2000,
"text": "Innerlistobject2000",
"isleaf": 1
}
]
}
]
}]
ref: https://sourceforge.net/p/pljson/discussion/935365/thread/375c0293/ - where the person is creating the object, but I want to do the opposite and read it.
Do I have to iterate like this (note the array name is "children" within "children")?
DECLARE
   l_json_obj       json;
   l_children_list  json_list;
   l_child_val      json_value;
   l_child_json_obj json;
BEGIN
   IF l_json_obj.exist('children') THEN
      IF l_json_obj.get('children').is_array THEN
         l_children_list := json_list(l_json_obj.get('children'));
         FOR i IN 1 .. l_children_list.count LOOP
            l_child_val      := l_children_list.get(i);
            l_child_json_obj := json(l_child_val);
            -- ...then repeat the same exist / is_array / loop logic on
            -- l_child_json_obj to reach the inner "children" array
         END LOOP;
      END IF;
   END IF;
END;
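If you are on Oracle 12c or later you don't have to iterate in PL/SQL at all: a single json_table call with NESTED PATH clauses unnests both levels of children in plain SQL: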
with json_example as (
select '{
"id": 70070037001,
"text": "List 1",
"isleaf": 0,
"children": [
{
"oid": 100,
"text": "Innerlistobject100",
"isleaf": 0,
"children": [
{
"sid": 1000,
"text": "Innerlistobject1000",
"isleaf": 1
},
{
"sid": 2000,
"text": "Innerlistobject2000",
"isleaf": 1
}
]
},
{
"oid": 200,
"text": "Innerlistobject200",
"isleaf": 0,
"children": [
{
"sid": 1000,
"text": "Innerlistobject1000",
"isleaf": 1
},
{
"sid": 2000,
"text": "Innerlistobject2000",
"isleaf": 1
}
]
}
]
}' as json_document
from dual
)
SELECT tab.*
FROM json_example a
join json_table (a.json_document, '$'
COLUMNS
(id NUMBER PATH '$.id'
,text VARCHAR2(50) PATH '$.text'
,isleaf NUMBER PATH '$.isleaf'
,NESTED PATH '$.children[*]'
COLUMNS
(oid NUMBER PATH '$.oid'
,otext VARCHAR2(150) PATH '$.text'
,oisleaf NUMBER PATH '$.isleaf'
,NESTED PATH '$.children[*]'
COLUMNS
(sid NUMBER PATH '$.sid'
,stext VARCHAR2(250) PATH '$.text'
,sisleaf NUMBER PATH '$.isleaf'
)
)
)
) tab on 1=1
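Each NESTED PATH clause expands its array into rows, so the query returns one row per innermost child, with the top-level columns (id, text, isleaf) and the middle-level columns (oid, otext, oisleaf) repeated on each of those rows.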
I am trying to add a new user to the JSON below for the policy item that matches group NP01-RW. I am able to do it without NP01-RW, but I am not able to select the users under NP01-RW and return the updated JSON.
{
"id": 181,
"guid": "c9b7dbde-63de-42cc-9840-1b4a06e13364",
"isEnabled": true,
"version": 17,
"service": "Np-Hue",
"name": "DATASCIENCE-CUROPT-RO",
"policyType": 0,
"policyPriority": 0,
"isAuditEnabled": true,
"resources": {
"database": {
"values": [
"hive_cur_acct_1dev",
"hive_cur_acct_1eng",
"hive_cur_acct_1rwy",
"hive_cur_acct_1stg",
"hive_opt_acct_1dev",
"hive_opt_acct_1eng",
"hive_opt_acct_1stg",
"hive_opt_acct_1rwy"
],
"isExcludes": false,
"isRecursive": false
},
"column": {
"values": [
"*"
],
"isExcludes": false,
"isRecursive": false
},
"table": {
"values": [
"*"
],
"isExcludes": false,
"isRecursive": false
}
},
"policyItems": [
{
"accesses": [
{
"type": "select",
"isAllowed": true
},
{
"type": "update",
"isAllowed": true
},
{
"type": "create",
"isAllowed": true
},
{
"type": "drop",
"isAllowed": true
},
{
"type": "alter",
"isAllowed": true
},
{
"type": "index",
"isAllowed": true
},
{
"type": "lock",
"isAllowed": true
},
{
"type": "all",
"isAllowed": true
},
{
"type": "read",
"isAllowed": true
},
{
"type": "write",
"isAllowed": true
}
],
"users": [
"user1",
"user2",
"user3"
],
"groups": [
"NP01-RW"
],
"conditions": [],
"delegateAdmin": false
},
{
"accesses": [
{
"type": "select",
"isAllowed": true
}
],
"users": [
"user1"
],
"groups": [
"NP01-RO"
],
"conditions": [],
"delegateAdmin": false
}
],
"denyPolicyItems": [],
"allowExceptions": [],
"denyExceptions": [],
"dataMaskPolicyItems": [],
"rowFilterPolicyItems": [],
"options": {},
"validitySchedules": [],
"policyLabels": [
"DATASCIENCE-CurOpt-RO_NP01"
]
}
Below is what I have tried, but it returns only the part of the JSON matching NP01-RW and not the full JSON:
jq --arg username "$sync_userName" '.policyItems[] | select(.groups[] | IN("NP01-RO")).users += [$username]' > ${sync_policyName}.json
Operator precedence in jq is not always intuitive. Your program is parsed as:
.policyItems[] | (select(.groups[] | IN("NP01-RO")).users += [$username])
This first streams all policyItems and only then changes them, leaving you with only the (modified) policyItems in the output rather than the whole document.
You need to make sure that the stream selects the correct values, which you can then assign:
(.policyItems[] | select(.groups[] | IN("NP01-RO")).users) += [$username]
This will do the assignment, but still return the full input (.).
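For example, a minimal sketch of the full command, assuming the policy document is in a file named policy.json (the file name here is just a placeholder):
# append $username to the users of every policyItem whose groups contain "NP01-RO",
# then write out the complete updated document
jq --arg username "$sync_userName" \
  '(.policyItems[] | select(.groups[] | IN("NP01-RO")).users) += [$username]' \
  policy.json > "${sync_policyName}.json"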
Is there a way to set up a where clause in Kusto to get specific records based on their child records?
For example, if I wanted Kyle from below:
where the address has code = street with value = grant AND code = Number with value = 55555
[
{
"Firstname": "Bob",
"lastName": "stevens",
"address": [
{
"code": "street",
"value": "Olsen"
},
{
"code": "Number",
"value": "123456"
}
]
},
{
"Firstname": "Kyle",
"lastName": "richards",
"address": [
{
"code": "street",
"value": "grant"
},
{
"code": "Number",
"value": "55555"
}
]
}
]
You could try using mv-apply and filtering for records in which the number of conditions met is as expected:
datatable(i:int, d:dynamic)
[
1, dynamic({"Firstname": "Bob", "lastName": "stevens", "address": [{ "code": "street", "value": "Olsen" }, { "code": "Number", "value": "123456" }]}),
2, dynamic({"Firstname": "Kyle", "lastName": "richards", "address": [{ "code": "street", "value": "grant" }, { "code": "Number", "value": "55555" }]}),
3, dynamic({"Firstname": "Kyle", "lastName": "richards", "address": [{ "code": "street", "value": "grant" }, { "code": "Number", "value": "11111" }]})
]
| mv-apply address = d.address on (
summarize c = countif((address.code == 'street' and address.value == 'grant') or
(address.code == 'Number' and address.value == 55555))
| where c == 2
)
| project-away c
i    d
2    { "Firstname": "Kyle", "lastName": "richards", "address": [ { "code": "street", "value": "grant" }, { "code": "Number", "value": "55555" } ]}
Update, in reply to your comment:
"I'm trying to do this with a sproc. Would I need to put this into a datatable and then query it like that? If so, how do I put a query into a datatable?"
First, there are no stored procedures in Kusto; there are stored functions.
Second, if you want to invoke similar logic over an existing table, you can define a stored function that takes a tabular argument as its input and, optionally, use the invoke operator.
For example:
.create function my_function(T:(d:dynamic)) {
T
| mv-apply address = d.address on (
summarize c = countif((address.code == 'street' and address.value == 'grant') or
(address.code == 'Number' and address.value == 55555))
| where c == 2
)
| project-away c
}
let my_table = datatable(i:int, d:dynamic)
[
1, dynamic({"Firstname": "Bob", "lastName": "stevens", "address": [{ "code": "street", "value": "Olsen" }, { "code": "Number", "value": "123456" }]}),
2, dynamic({"Firstname": "Kyle", "lastName": "richards", "address": [{ "code": "street", "value": "grant" }, { "code": "Number", "value": "55555" }]}),
3, dynamic({"Firstname": "Kyle", "lastName": "richards", "address": [{ "code": "street", "value": "grant" }, { "code": "Number", "value": "11111" }]})
];
my_table
| invoke my_function()
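Here, invoke passes my_table as the tabular parameter T of my_function, so the same mv-apply filter runs over the table's d column.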
I have never worked with JSON data in R and, unfortunately, I was sent a sample of data like this:
{
"task_id": "104",
"status": "succeeded",
"metrics": {
"requests_made": 2,
"network_errors": 0,
"unique_locations_visited": 0,
"requests_queued": 0,
"queue_items_completed": 2,
"queue_items_waiting": 0,
"issue_events": 9,
"caption": "",
"progress": 100
},
"message": "",
"issue_events": [
{
"id": "1234",
"type": "issue_found",
"issue": {
"name": "policy not enforced",
"type_index": 123456789,
"serial_number": "123456789183923712",
"origin": "https://test.com",
"path": "/robots.txt",
"severity": "low",
"confidence": "certain",
"caption": "/robots.txt",
"evidence": [
{
"type": "FirstOrderEvidence",
"detail": {
"band_flags": [
"in_band"
]
},
"request_response": {
"url": "https://test.com/robots.txt",
"request": [
{
"type": "DataSegment",
"data": "jaghsdjgasdgaskjdgasdgashdgsahdgasjkdgh==",
"length": 313
}
],
"response": [
{
"type": "DataSegment",
"data": "asudasjdgasaaasgdasgaksjdhgasjdgkjghKGKGgKJgKJgKJGKgh==",
"length": 303
}
],
"was_redirect_followed": false,
"request_time": "1234567890"
}
}
],
"internal_data": "jdfhgjhJHkjhdskfhkjhjs0sajkdfhKHKhkj=="
}
},
{
"id": "1235",
"type": "issue_found",
"issue": {
"name": "certificate",
"type_index": 12345845684,
"serial_number": "123456789165637150",
"origin": "https://test.com",
"path": "/",
"severity": "info",
"confidence": "certain",
"description": "The server description a valid, trusted certificate. This issue is purely informational.<br><br>The server presented the following certificates:<br><br><h4>Server certificate</h4><table><tr><td><b>Issued to:</b> </td><td>test.ie, test.com, www.test.com, www.test.ie</td></tr><tr><td><b>Issued by:</b> </td><td>GeoTrust EV RSA CA 2018</td></tr><tr><td><b>Valid from:</b> </td><td>Tue May 12 00:00:00 UTC 2020</td></tr><tr><td><b>Valid to:</b> </td><td>Tue May 17 12:00:00 UTC 2022</td></tr></table><h4>Certificate chain #1</h4><table><tr><td><b>Issued to:</b> </td><td>GeoTrust EV RSA CA 2018</td></tr><tr><td><b>Issued by:</b> </td><td> High Assurance EV Root CA</td></tr><tr><td><b>Valid from:</b> </td><td>Mon Nov 06 12:22:46 UTC 2017</td></tr><tr><td><b>Valid to:</b> </td><td>Sat Nov 06 12:22:46 UTC 2027</td></tr></table><h4>Certificate chain #2</h4><table><tr><td><b>Issued to:</b> </td><td> High Assurance EV Root CA</td></tr><tr><td><b>Issued by:</b> </td><td> High Assurance EV Root CA</td></tr><tr><td><b>Valid from:</b> </td><td>Fri Nov 10 00:00:00 UTC 2006</td></tr><tr><td><b>Valid to:</b> </td><td>Mon Nov 10 00:00:00 UTC 2031</td></tr></table>",
"caption": "/",
"evidence": [],
"internal_data": "sjhdgsajdggJGJHgjfgjhGJHgjhsdgfgjhGJHGjhsdgfjhsgfdsjfg098867hjhgJHGJHG=="
}
},
{
"id": "1236",
"type": "issue_found",
"issue": {
"name": "without flag set",
"type_index": 1254392,
"serial_number": "12345678965616",
"origin": "https://test.com",
"path": "/robots.txt",
"severity": "info",
"confidence": "certain",
"description": "my description text here....",
"caption": "/robots.txt",
"evidence": [
{
"type": "InformationListEvidence",
"request_response": {
"url": "https://test.com/robots.txt",
"request": [
{
"type": "DataSegment",
"data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdfkjhsdjkfh==",
"length": 313
}
],
"response": [
{
"type": "DataSegment",
"data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdfkjhsdjkfh=",
"length": 161
},
{
"type": "HighlightSegment",
"data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdf=",
"length": 119
},
{
"type": "DataSegment",
"data": "AasjkdhasjkhkjHKJSDHFJKSDFHKhjkHSKADJFHKhjkhjkh=",
"length": 23
}
],
"was_redirect_followed": false,
"request_time": "178454751191465"
},
"information_items": [
"Other: user_id"
]
}
],
"internal_data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKH=="
}
},
{
"id": "1237",
"type": "issue_found",
"issue": {
"name": "without flag set",
"type_index": 1234567,
"serial_number": "123456789056704",
"origin": "https://test.com",
"path": "/",
"severity": "info",
"confidence": "certain",
"description": "long description here zjkhasdjkh hsajkdhsajkd hasjkdhbsjkdash d",
"caption": "/",
"evidence": [
{
"type": "InformationListEvidence",
"request_response": {
"url": "https://test.com/",
"request": [
{
"type": "DataSegment",
"data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdfkjhsdjkfhsfdsfdsfdsfdsfdsfsdfdsf",
"length": 303
}
],
"response": [
{
"type": "DataSegment",
"data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdfkjhsdjkfh==",
"length": 151
},
{
"type": "HighlightSegment",
"data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdfkjhsdjkfh=",
"length": 119
},
{
"type": "DataSegment",
"data": "sdfdsfsdfSDFSDFdSFDS546SDFSDFDSFG657=",
"length": 23
}
],
"was_redirect_followed": false,
"request_time": "123541191466"
},
"information_items": [
"Other: user_id"
]
}
],
"internal_data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsd=="
}
},
{
"id": "1238",
"type": "issue_found",
"issue": {
"name": "parameter pollution",
"type_index": 4137000,
"serial_number": "123456789810290176",
"origin": "https://test.com",
"path": "/robots.txt",
"severity": "low",
"confidence": "firm",
"description": "very long description text here...",
"caption": "/robots.txt [URL path filename]",
"evidence": [
{
"type": "FirstOrderEvidence",
"detail": {
"payload": {
"bytes": "Q3jkeiZkcmg8MQ==",
"flags": 0
},
"band_flags": [
"in_band"
]
},
"request_response": {
"url": "https://test.com/%3fhdz%26drh%3d1",
"request": [
{
"type": "DataSegment",
"data": "W1QOIC8=",
"length": 5
},
{
"type": "HighlightSegment",
"data": "WRMnBGR6JTI2ZHJoJTNkMQ==",
"length": 16
},
{
"type": "DataSegment",
"data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdfkjhsdjkfhcvxxcvklxcvjkxclvjxclkvjxcklvjlxckjvlxckjvklxcjvxcklvjxcklvjxckljvlxckjvxcklvjxckljvxcklvjcklxjvcxkl==",
"length": 298
}
],
"response": [
{
"type": "DataSegment",
"data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdfkjhsdjkfh==",
"length": 130
},
{
"type": "HighlightSegment",
"data": "Q4jleiZkcmg9MQ==",
"length": 10
},
{
"type": "DataSegment",
"data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdfkjhsdjkfh==",
"length": 163
}
],
"was_redirect_followed": false,
"request_time": "51"
}
}
],
"internal_data": "adjkhajksdhaskjdhkjHKJHjkhaskjdhkjasdhKHKJHkjsdhfkjsdhfkjsdhKHJKHjksdfhsdjkfhksdjhKHKJHJKhsdkfjhsdkfjhKHJKHjksdkfjhsdkjfhKHKJHjkhsdkfjhsdkjfhsdjkfhksdjfhKJHKjksdhfsdjkfhksdjfhsdkjhKHJKhsdkfhsdkjfhsdkfhdskjhKHKjhsdfkjhsdjkfh="
}
}
],
"event_logs": [],
"audit_items": []
}
I read it into R using jsonlite:
df_orig <- fromJSON('dast_sample_output.json', flatten = T)
This gives a nested-list R object. I wish to convert this list to a data frame in a tidy format, with all the arrays and sub-arrays unnested.
If you run str(df_orig), you can see the nested data frames in there.
How do I convert it to a tidy format?
I tried unnest() and purrr but am struggling to get it into a tidy format for analysis. Any pointers would be highly appreciated.
Cheers,
Use the jsonlite package function fromJSON().
Edit: set the option flatten = TRUE.
Edit 2: use content(x, 'text') before flattening.
Here is a full example converting to a data.table:
library(httr)        # GET(), content()
library(jsonlite)    # fromJSON()
library(data.table)  # as.data.table()
get.json      <- GET(apicall.text)                        # call the API
get.json.text <- content(get.json, 'text')                # extract the response body as text
get.json.flat <- fromJSON(get.json.text, flatten = TRUE)  # parse JSON and flatten nested data frames
dt            <- as.data.table(get.json.flat)             # convert the flattened result to a data.table
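If you are reading a local file instead, as in the question, you can skip the httr GET/content step and pass the file path straight to fromJSON(), which is what df_orig <- fromJSON('dast_sample_output.json', flatten = T) already does.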
I have created a catalog service with the JSON pattern below.
JSON pattern:
{
"id": "b01ee924-78d3-4f3a-9568-5ee80cbad7a7",
"VendorName": "string",
"Industy": [
{
"Id": "0350ac6c-ca15-4a1e-9211-ad078fbf443c",
"IdustryId": 0,
"IdustryName": "string",
"Category": [
{
"id": "a7b71770-9daf-4b67-b471-0a8390843544",
"Name": "string",
"Description": "string",
"Subcategory": [
{
"id": "76a6ead4-9f4d-4d6e-9c30-70938f088ea3",
"Name": "string",
"Description": "string",
"Product": [
{
"Id": "abf95277-ccbc-4f9d-aeda-b6cc9c99953b",
"Name": "string",
"CurrentQuantity": 0,
"Tag": "string",
"Unit": "string",
"Price": 0,
"hasMethodOfPreparation": true,
"MethodOfPreparation": [
{
"id": "a78cb9ea-276f-494b-840a-6eab5e7d8f4b",
"Description": "string",
"Price": 0
}
],
"Addons": [
{
"id": "bdf97be3-5dd1-49e9-bdec-7ac0d3288adb",
"Description": "one",
"Price": 0
},
{
"id": "8f03d2e2-be1f-446d-b943-be9b8fe8ec4c",
"Description": "new add",
"Price": 0
I query the data like below.
Query:
SELECT product FROM catalog
join industry in catalog.Industy
join category in industry.Category
join product in category.Subcategory.Product
where catalog.id ='" + itemId + "'
Actual result (below). For a specific vendor, industry, category, and subcategory I need to get and create the product.
Note: there is more than one industry, category, and subcategory.
{
"Id": "abf95277-ccbc-4f9d-aeda-b6cc9c99953b",
"Name": "string",
"CurrentQuantity": 0,
"Tag": "string",
"Unit": "string",
"Price": 0,
"hasMethodOfPreparation": true,
"MethodOfPreparation": [
{
"id": "a78cb9ea-276f-494b-840a-6eab5e7d8f4b",
"Description": "string",
"Price": 0
}
But I need to check the industry id, category id, and subcategory id.
How do I do that?
Please give me a suggestion. Thanks in advance.
Please use this SQL:
SELECT product FROM c
join industry in c.Industy
join category in industry.Category
join Subcategory in category.Subcategory
join product in Subcategory.Product
where industry.Id ='<your item id>'
and category.Id = '<your item id>'
and Subcategory.Id = '<your item id>'
This is not a data analysis issue, so I don't have data to reproduce.
I installed the paws package from this GitHub page to extract facial features (e.g., smile) via Amazon Rekognition. I am doing this as part of a study comparing performance with Microsoft Azure and Face++. By the way, I replaced "AccessKeyHere" and "SecretKeyHere" with the appropriate security credentials.
library(paws)
Sys.setenv(
AWS_ACCESS_KEY_ID = "AccessKeyHere",
AWS_SECRET_ACCESS_KEY = "SecretKeyHere",
AWS_REGION = "us-east-1"
)
ec2 <- paws::ec2()
resp <- ec2$run_instances(
ImageId = "ami-f973ab84",
InstanceType = "t2.micro",
KeyName = "default",
MinCount = 1,
MaxCount = 1,
TagSpecifications = list(
list(
ResourceType = "instance",
Tags = list(
list(Key = "webserver", Value = "production")
)
)
)
)
Unfortunately, I get this error:
Error: InvalidKeyPair.NotFound: The key pair 'default' does not exist
I tried following the Setting Up Credentials document on the GitHub page, without success.
The results I want would look something like this (taken directly from the Amazon demo):
{
"FaceDetails": [
{
"BoundingBox": {
"Width": 0.20394515991210938,
"Height": 0.4204871356487274,
"Left": 0.1556132435798645,
"Top": 0.11629478633403778
},
"AgeRange": {
"Low": 20,
"High": 38
},
"Smile": {
"Value": true,
"Confidence": 98.88771057128906
},
"Eyeglasses": {
"Value": true,
"Confidence": 99.87944030761719
},
"Sunglasses": {
"Value": true,
"Confidence": 99.51188659667969
},
"Gender": {
"Value": "Female",
"Confidence": 99.98441314697266
},
"Beard": {
"Value": false,
"Confidence": 99.99455261230469
},
"Mustache": {
"Value": false,
"Confidence": 99.99205017089844
},
"EyesOpen": {
"Value": true,
"Confidence": 100
},
"MouthOpen": {
"Value": true,
"Confidence": 99.64435577392578
},
"Emotions": [
{
"Type": "ANGRY",
"Confidence": 0.5140029191970825
},
{
"Type": "DISGUSTED",
"Confidence": 0.36493897438049316
},
{
"Type": "SURPRISED",
"Confidence": 1.5832388401031494
},
{
"Type": "CALM",
"Confidence": 7.553433418273926
},
{
"Type": "CONFUSED",
"Confidence": 2.7683539390563965
},
{
"Type": "SAD",
"Confidence": 0.1280381977558136
},
{
"Type": "HAPPY",
"Confidence": 87.08799743652344
}
],
"Landmarks": [
{
"Type": "eyeLeft",
"X": 0.23317773640155792,
"Y": 0.2868470251560211
},
{
"Type": "eyeRight",
"X": 0.3252476453781128,
"Y": 0.27732565999031067
},
{
"Type": "mouthLeft",
"X": 0.2494768351316452,
"Y": 0.4339924454689026
},
{
"Type": "mouthRight",
"X": 0.32560691237449646,
"Y": 0.42571622133255005
},
{
"Type": "nose",
"X": 0.29963040351867676,
"Y": 0.3560841381549835
},
{
"Type": "leftEyeBrowLeft",
"X": 0.18990693986415863,
"Y": 0.25858017802238464
},
{
"Type": "leftEyeBrowRight",
"X": 0.2559714913368225,
"Y": 0.23907452821731567
},
{
"Type": "leftEyeBrowUp",
"X": 0.22477854788303375,
"Y": 0.23571543395519257
},
{
"Type": "rightEyeBrowLeft",
"X": 0.3101874887943268,
"Y": 0.23408983647823334
},
{
"Type": "rightEyeBrowRight",
"X": 0.3540191650390625,
"Y": 0.24142536520957947
},
{
"Type": "rightEyeBrowUp",
"X": 0.3341374397277832,
"Y": 0.2246120721101761
},
{
"Type": "leftEyeLeft",
"X": 0.21425437927246094,
"Y": 0.28872400522232056
},
{
"Type": "leftEyeRight",
"X": 0.2506107687950134,
"Y": 0.28627288341522217
},
{
"Type": "leftEyeUp",
"X": 0.23298975825309753,
"Y": 0.2797400951385498
},
{
"Type": "leftEyeDown",
"X": 0.2338254302740097,
"Y": 0.29329705238342285
},
{
"Type": "rightEyeLeft",
"X": 0.3053741455078125,
"Y": 0.2805119752883911
},
{
"Type": "rightEyeRight",
"X": 0.33686137199401855,
"Y": 0.2753002941608429
},
{
"Type": "rightEyeUp",
"X": 0.3239244222640991,
"Y": 0.2698554992675781
},
{
"Type": "rightEyeDown",
"X": 0.32346177101135254,
"Y": 0.28338298201560974
},
{
"Type": "noseLeft",
"X": 0.27390313148498535,
"Y": 0.37751662731170654
},
{
"Type": "noseRight",
"X": 0.3062724471092224,
"Y": 0.373584508895874
},
{
"Type": "mouthUp",
"X": 0.29330143332481384,
"Y": 0.4100639820098877
},
{
"Type": "mouthDown",
"X": 0.2929871082305908,
"Y": 0.4546505808830261
},
{
"Type": "leftPupil",
"X": 0.23317773640155792,
"Y": 0.2868470251560211
},
{
"Type": "rightPupil",
"X": 0.3252476453781128,
"Y": 0.27732565999031067
},
{
"Type": "upperJawlineLeft",
"X": 0.14384371042251587,
"Y": 0.3039131164550781
},
{
"Type": "midJawlineLeft",
"X": 0.1776188313961029,
"Y": 0.4594067335128784
},
{
"Type": "chinBottom",
"X": 0.2889330983161926,
"Y": 0.5328735709190369
},
{
"Type": "midJawlineRight",
"X": 0.3430669903755188,
"Y": 0.441012978553772
},
{
"Type": "upperJawlineRight",
"X": 0.3498701751232147,
"Y": 0.28120794892311096
}
],
"Pose": {
"Roll": -4.4155192375183105,
"Yaw": 10.105213165283203,
"Pitch": 0.32932278513908386
},
"Quality": {
"Brightness": 60.6755256652832,
"Sharpness": 94.08262634277344
},
"Confidence": 99.99998474121094
}
]
}
If I could advance to this stage, it would be fantastic. But it would be even nicer if the extracted data looked consistent with my Microsoft Azure results:
          anger  contempt  disgust  fear  happiness  neutral  sadness  surprise
emotion       0         0        0     0          0        1        0         0
emotion1      0         0        0     0          0    0.997    0.002         0
emotion2      0     0.001        0     0          0    0.994    0.004     0.001
emotion3      0         0        0     0          0    0.965    0.035         0
The error is with this line:
KeyName = "default",
It is referring to an Amazon EC2 key pair that should be attached to the Amazon EC2 instance. However, there is no key pair named default, so the call fails.
To fix it, instead of default use the name of a key pair that has already been created; you can see the list of key pairs in the EC2 management console. You could also remove this line (not specifying a KeyName at all), but then you would not be able to log in to the instance.
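For example, a minimal sketch of the corrected call, reusing the code from the question; "my-keypair" is only a placeholder for a key pair name that actually exists in your account and region:
ec2 <- paws::ec2()
resp <- ec2$run_instances(
  ImageId = "ami-f973ab84",
  InstanceType = "t2.micro",
  KeyName = "my-keypair",   # placeholder: use a key pair listed in the EC2 console for us-east-1
  MinCount = 1,
  MaxCount = 1
)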