Consider the following JSON responses.
If you run the graph query g.V().hasLabel('customer'), the response is:
[
{
"id": "75b9bddc-4008-43d7-a24c-8b138735a36a",
"label": "customer",
"type": "vertex",
"properties": {
"partitionKey": [
{
"id": "75b9bddc-4008-43d7-a24c-8b138735a36a|partitionKey",
"value": 1
}
]
}
}
]
If you run the SQL query select * from c where c.label = 'customer', the response is:
[
{
"label": "customer",
"partitionKey": 1,
"id": "75b9bddc-4008-43d7-a24c-8b138735a36a",
"_rid": "0osWAOso6VYBAAAAAAAAAA==",
"_self": "dbs/0osWAA==/colls/0osWAOso6VY=/docs/0osWAOso6VYBAAAAAAAAAA==/",
"_etag": "\"2400985f-0000-0c00-0000-5e2066190000\"",
"_attachments": "attachments/",
"_ts": 1579181593
}
]
Q: With this difference in structure around the partitionKey section, should this be referenced as /properties/partitionKey/*, or /partitionKey/? in the indexing policy?
Currently I have hedged my bets with:
{
"indexingMode": "consistent",
"automatic": true,
"includedPaths": [{
"path": "/properties/partitionKey/*"
},{
"path": "/partitionKey/?"
},{
"path": "/label/?"
}
],
"excludedPaths": [{
"path": "/*"
},{
"path": "/\"_etag\"/?"
}
]
}
TIA!
This should be referenced as "/partitionKey" in your indexing policy, not "/properties/partitionKey"; the indexing paths follow the underlying document structure, which is what the SQL query returns.
By the way, another thing worth pointing out: it's generally better to only exclude the paths you will never query on, rather than having to include every path you will. That way, if you add properties to your graph, you won't have to rebuild the index to query on the new properties.
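For example, a minimal exclude-first policy along those lines might look like the sketch below (illustrative only; add to excludedPaths whatever you are sure you will never filter or sort on):
{
"indexingMode": "consistent",
"automatic": true,
"includedPaths": [{
"path": "/*"
}],
"excludedPaths": [{
"path": "/\"_etag\"/?"
}]
}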
I've been working on an issue and seem to be stuck, so I'm asking on SO in case anyone can help.
To describe the issue: I've got an existing Azure Key Vault, and I wish to add a number of access policies to it. This needs to be conditional: if a function's name is "false", that function should not be added to the Key Vault access policy.
Variables section:
"variables": {
"functionAccess": {
"value": [
{
"name": "[parameters('Function_1')]"
},
{
"name": "[parameters('Function_2')]"
},
{
"name": "[parameters('Function_3')]"
}
]
}
}
My template:
{
"apiVersion": "2016-10-01",
"condition": "[not(equals(variables('functionAccess')[CopyIndex()].name, 'false'))]",
"copy": {
"batchSize": 1,
"count": "[length(variables('functionAccess'))]",
"mode": "Serial",
"name": "accessPolicies"
},
"name": "[concat(parameters('KeyVault_Name'), '/add')]",
"properties": {
"accessPolicies": [
{
"tenantId": "[subscription().tenantId]",
"objectId": "[if(not(equals(variables('functionAccess')[CopyIndex()].name, 'false')), reference(concat('Microsoft.Web/sites/', variables('functionAccess')[CopyIndex()].name), '2016-08-01', 'Full').identity.principalId, json('null'))]",
"permissions": {
"keys": [
"get",
"list"
],
"secrets": [
"get",
"list"
],
"certificates": [
"get",
"list"
]
}
}
]
},
"type": "Microsoft.KeyVault/vaults/accessPolicies"
}
When I deploy my ARM template for the Azure Key Vault, I get this error message:
The language expression property '0' can't be evaluated, property name must be a string.
I also tried the following, but got the same error:
{
"apiVersion": "2018-02-14",
"name": "[concat(parameters('KeyVault_Name'), '/add')]",
"properties": {
"copy": [
{
"batchSize": 1,
"count": "[length(variables('functionAccess'))]",
"mode": "serial",
"name": "accessPolicies",
"input": {
"condition": "[not(equals(variables('functionAccess')[copyIndex('accessPolicies')].name, 'false'))]",
"tenantId": "[subscription().tenantId]",
"objectId": "[if(not(equals(variables('functionAccess')[copyIndex('accessPolicies')].name, 'false')), reference(concat('Microsoft.Web/sites/', variables('functionAccess')[copyIndex('accessPolicies')].name), '2016-08-01', 'Full').identity.principalId, json('null'))]",
"permissions": {
"keys": [
"get",
"list"
],
"secrets": [
"get",
"list"
],
"certificates": [
"get",
"list"
]
}
}
}
]
},
"type": "Microsoft.KeyVault/vaults/accessPolicies"
}
There are a few options for filtering an array for a copy operation. I deploy my ARM templates from PowerShell scripts and use PowerShell to set up parameter values. When I need special logic to handle different inputs for different environments, I let PowerShell handle it.
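As an illustration only (the names below are hypothetical), filtering the list in PowerShell and passing the result in as a parameter might look something like this:
# Raw input list; 'false' entries mean the function should be skipped
$allFunctions = @('Function-0', 'false', 'Function-2')
# Drop the 'false' placeholders before deployment
$functionAccess = @($allFunctions | Where-Object { $_ -ne 'false' })
New-AzResourceGroupDeployment `
-ResourceGroupName 'my-resource-group' `
-TemplateFile '.\keyvault-access-policies.json' `
-TemplateParameterObject @{ functionAccess = $functionAccess }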
If you must handle the filtering in ARM and you have the option to input a CSV list of functions, then perhaps the following will work. You can then iterate over functionAccessArray in the copy operation.
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
},
"variables": {
"functionAccessCsv": "Function-0,Function-1,false,Function-4,false,Function-6,Function-7",
"functionAccessFiltered": "[replace(replace(variables('functionAccessCsv'), 'false', ''), ',,', ',')]",
"functionAccessArray": "[split(variables('functionAccessFiltered'), ',')]"
},
"resources": [
],
"outputs": {
"functionAccessCsvFiltered": {
"type": "string",
"value": "[variables('functionAccessFiltered')]"
},
"functionAccessArray": {
"type": "array",
"value": "[variables('functionAccessArray')]"
}
}
}
The result: functionAccessCsvFiltered evaluates to Function-0,Function-1,Function-4,Function-6,Function-7, and functionAccessArray is the corresponding five-element array.
I just had the same issue. By using an array parameter with a default value instead of a variable, I got it to work.
"parameters": {
"functionAccess": {
"type": "array",
"defaultValue": [
"value1",
"value2",
"value3"
]
}
}
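With a flat string array like this, the copy condition and the objectId lookup reference the array element directly rather than a .name property. An untested sketch of how those two expressions would change:
"condition": "[not(equals(parameters('functionAccess')[copyIndex()], 'false'))]",
"objectId": "[if(not(equals(parameters('functionAccess')[copyIndex()], 'false')), reference(concat('Microsoft.Web/sites/', parameters('functionAccess')[copyIndex()]), '2016-08-01', 'Full').identity.principalId, json('null'))]"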
I'm trying to filter the child documents returned when querying a parent document using the SQL API in Cosmos DB.
For example given this document:
{
"customerName": "Wallace",
"customerReference": 666777,
"orders": [
{
"date": "20181105T00:00:00",
"amount": 118.84,
"description": "Laptop Battery"
},
{
"date": "20181105T00:00:00",
"amount": 81.27,
"description": "Toner"
},
{
"date": "20181105T00:00:00",
"amount": 55.12,
"description": "Business Cards"
},
{
"date": "20181105T00:00:00",
"amount": 281.00,
"description": "Espresso Machine"
}]
}
I would like to query the customer to retrieve the name, the reference, and the orders over 100.00, producing a result like this:
[{
"customerName": "Wallace",
"customerReference": 666777,
"orders": [
{
"date": "20181105T00:00:00",
"amount": 118.84,
"description": "Laptop Battery"
},
{
"date": "20181105T00:00:00",
"amount": 281.00,
"description": "Espresso Machine"
}]
}]
The query I have so far is as follows:
SELECT c.customerName, c.customerReference, c.orders
from c
where c.customerReference = 666777
and c.orders.amount > 100
This returns an empty set:
[]
If I remove "and c.orders.amount > 100", it matches the document and returns all orders.
To reproduce this issue I simply set up a new database, added a new collection, and copied the JSON example in as the only document. The indexing policy is left as the default, which I've copied below.
{
"indexingMode": "consistent",
"automatic": true,
"includedPaths": [
{
"path": "/*",
"indexes": [
{
"kind": "Range",
"dataType": "Number",
"precision": -1
},
{
"kind": "Range",
"dataType": "String",
"precision": -1
},
{
"kind": "Spatial",
"dataType": "Point"
}
]
}
],
"excludedPaths": []
}
Cosmos DB doesn't support deep filtering in the way I attempted in my original query; because orders is an array, c.orders.amount doesn't resolve to a value that can be compared.
To achieve the results described, you need a subquery that combines ARRAY and VALUE, as follows:
SELECT
c.customerName,
c.customerReference,
ARRAY(SELECT VALUE ord FROM ord IN c.orders WHERE ord.amount > 100) orders
FROM c
WHERE c.customerReference = 666777
Note the use of 'ord': 'order' is a reserved word.
The query then produces the correct result, e.g.:
[{
"customerName": "Wallace",
"customerReference": 666777,
"orders": [
{
"date": "20181105T00:00:00",
"amount": 118.84,
"description": "Laptop Battery"
},
{
"date": "20181105T00:00:00",
"amount": 281.00,
"description": "Espresso Machine"
}
]
}]
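As an aside, if a flattened shape (one result per matching order) is also acceptable, a self-join is another way to express the same filter; a sketch:
SELECT c.customerName, c.customerReference, ord
FROM c
JOIN ord IN c.orders
WHERE c.customerReference = 666777 AND ord.amount > 100
Unlike the ARRAY subquery above, this returns one row per matching order rather than a single customer document with a filtered orders array.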
What is the correct JSON to exclude certain keys of an input document from being indexed by Azure Cosmos DB? We are using Cosmos DB in MongoDB mode. I was planning to change the index configuration in the Azure portal after creating the collection.
A sample input JSON being:
{
"name": "test",
"age": 1,
"location": "l1",
"height":5.7
}
If I were to include name and age in the index and remove location and height from the index, what would the includedPaths and excludedPaths look like?
I finally got it to work with the spec below:
{
"indexingMode": "consistent",
"automatic": true,
"includedPaths": [{
"path": "/*",
"indexes": [{
"kind": "Range",
"dataType": "Number",
"precision": -1
},
{
"kind": "Hash",
"dataType": "String",
"precision": 3
}
]
}],
"excludedPaths": [{
"path": "/\"location\"/?"
},
{
"path": "/\"height\"/?"
}
]
}
It looks like the underlying implementation has changed, and at the time of writing the documentation does not cover changing the indexing policy in MongoDB-flavoured Cosmos DB. The documents are actually stored in an unusual way, where all the keys hang off a root $v and every scalar field is stored as a sub-document containing value and type information. So your document will be stored as something like:
{
'_etag': '"2f00T0da-0000-0d00-0000-4cd987940000"',
'id': 'SDSDFASDFASFAFASDFASDF',
'_self': 'dbs/sMsxAA==/colls/dVsxAI8MBXI=/docs/sMsxAI8MBXIKAAAAAAAAAA==/',
'_rid': 'sMsxAI8MBXIKAAAAAAAAAA==',
'$t': 3,
'_attachments': 'attachments/',
'$v': {
'_id': {
'$t': 7,
'$v': "\\ร\x87\x14\x01\x15['\x18m\x01รบ"
},
'name': {
'$t': 2,
'$v': 'test'
},
'age': {
'$t': 1,
'$v': 1
},
...
},
'_ts': 1557759892
}
Therefore the indexingPolicy paths need to include the root $v and use /* (objects) instead of /? (scalars).
{
"indexingMode": "consistent",
"automatic": true,
"includedPaths": [
{
"path": "/*",
"indexes": [
{
"kind": "Range",
"dataType": "Number"
},
{
"kind": "Hash",
"dataType": "String"
}
]
}
],
"excludedPaths": [
{"path": "/\"$v\"/\"location\"/*"},
{"path": "/\"$v\"/\"height\"/*"}
]
}
PS: the Mongo API can also be used to drop the default indexes and create only the specific indexes required (see the sketch after the link below):
https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb-indexing
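A minimal mongo shell sketch, assuming a hypothetical collection named mycollection (check which index commands your Cosmos DB MongoDB server version supports):
// Drop the existing non-_id indexes on the hypothetical 'mycollection' collection
db.mycollection.dropIndexes()
// Index only the fields that are actually queried on
db.mycollection.createIndex({ "name": 1 })
db.mycollection.createIndex({ "age": 1 })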
I have written a query to get the list of all Event entities. The result coming back from Google Datastore looks like this:
[{
"key": {
"id": 5678669024460800,
"kind": "Event",
"path": [
"Event",
5678669024460800
]
},
"data": {
"createdAt": "2017-03-27T06:28:58.000Z",
"users":["test1#xxx.com","test2#xxx.com","test3#xxx.com"]
}
},
{
"key": {
"id": 5678669024460800,
"kind": "Event",
"path": [
"Event",
5678669024460800
]
},
"data": {
"createdAt": "2017-03-27T06:28:58.000Z",
"users":["test1#xxx.com"]
}
},
{
"key": {
"id": 5678669024460800,
"kind": "Event",
"path": [
"Event",
5678669024460800
]
},
"data": {
"createdAt": "2017-03-27T06:28:58.000Z",
"users":["test2#xxx.com","test3#xxx.com"]
}
}]
But I need to write a query that filters by email id, i.e. I need to fetch only the entities that match a given email id. For example, if I pass the email id "test1#xxx.com", I should get a final result like this. Can anybody help me with this?
[{
"key": {
"id": 5678669024460800,
"kind": "Event",
"path": [
"Event",
5678669024460800
]
},
"data": {
"createdAt": "2017-03-27T06:28:58.000Z",
"users":["test1#xxx.com","test2#xxx.com","test3#xxx.com"]
}
},
{
"key": {
"id": 5678669024460800,
"kind": "Event",
"path": [
"Event",
5678669024460800
]
},
"data": {
"createdAt": "2017-03-27T06:28:58.000Z",
"users":["test1#xxx.com"]
}
}]
The GQL query would be something like:
SELECT * FROM Event WHERE users='test1#xxx.com'
You need to make sure the users property is indexed in order for the search to work, otherwise you may not get any results back.
I have been using OpenRefine 2.6 Beta 1 without problems since its release, together with the reconciliation service at:
http://reconcile.freebaseapps.com/reconcile
However, in the past few days, I have not been able to use it at all. If I go to the URL:
http://reconcile.freebaseapps.com/
and type the multiple query:
{
"query": "Ford",
"type": "/people/person",
"properties": [
{
"pid": "/people/person/place_of_birth",
"v": "Detroit"
}
]
}
I obtain:
{
"result": [
{
"id": "/m/0j8pb6y",
"name": "Ford",
"type": [
{
"id": "/people/person",
"name": "Person"
},
{
"id": "/common/topic",
"name": "Topic"
},
{
"id": "/geography/mountaineer",
"name": "Mountaineer"
}
],
"notable": [],
"score": 1.1546246,
"match": false
},
{
"id": "/m/01vd3gv",
"name": "Ford",
"type": [
{
"id": "/common/topic",
"name": "Topic"
},
{
"id": "/music/artist",
"name": "Musical Artist"
}
],
"notable": [],
"score": 1.0330245999999998,
"match": false
},
{
"id": "/m/0cmdhzt",
"name": "James Meredith",
"type": [
{
"id": "/common/topic",
"name": "Topic"
},
{
"id": "/people/person",
"name": "Person"
},
{
"id": "/military/military_person",
"name": "Military Person"
},
{
"id": "/people/deceased_person",
"name": "Deceased Person"
}
],
"notable": [],
"score": 0.0681692,
"match": false
}
],
"duration": 369
}
But if I try a simple query:
{
"query": "Ford"
}
I get:
Status: error Error:undefined
Any insights into what's happening with the reconciliation service? Is there any other service I could use to replace freebaseapps.com?
Thanks
Try this in the Queries parameter at http://reconcile.freebaseapps.com/:
{
"q0": {
"query": "Ford"
}
}
For some reason, single queries are not accepted in the Query parameter, but they are accepted in the Queries parameter in the format above. I have not tested this in OpenRefine, so you might have to modify it.
I don't know the exact date for certain, but it was announced earlier this year that Freebase would be shutting down some services by June 30, 2015. Maybe the service is intermittent until the full shutdown? Sorry, this answer probably doesn't help much.