Conditional select using jq

I have JSON in the format below, and I want to extract the list of "id" values that satisfy a condition:
in this case, the ids whose matchers.value is "dev-stack" and whose status.state is "active".
{
"status": "success",
"data": [
{
"id": "b5e7f85d",
"matchers": [
{
"name": "stack",
"value": "dev-stack",
"isRegex": true
}
],
"startsAt": "2020-07-13T07:17:36Z",
"endsAt": "2020-07-15T07:15:44Z",
"updatedAt": "2020-07-13T07:15:59.643692023Z",
"createdBy": "api",
"comment": "Silence",
"status": {
"state": "active"
}
},
{
"id": "1fdaa4b5",
"matchers": [
{
"name": "stack",
"value": "qa-stack",
"isRegex": true
}
],
"startsAt": "2020-07-10T13:19:12Z",
"endsAt": "2020-07-10T13:20:55.510739499Z",
"updatedAt": "2020-07-10T13:20:55.510739499Z",
"createdBy": "api",
"comment": "Silence",
"status": {
"state": "expired"
}
}
]
}

Here is a solution which uses the update-assignment operator |=, map, and select to update .data.
Note that it uses any to avoid an undesirable Cartesian product when multiple .matchers meet the criteria.
.data |= map(select(
(.matchers | any(.value=="dev-stack")) and (.status.state=="active")
))
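If only the matching ids are wanted rather than the filtered objects, a small variation of the same filter collects them directly (a minimal sketch, assuming the JSON above is saved as input.json):
jq '[.data[]
     | select((.matchers | any(.value == "dev-stack")) and .status.state == "active")
     | .id]' input.json
For the sample input this yields ["b5e7f85d"].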

Related

Simplify array of objects with children using JQ

From Wikidata, I get the following json:
# Sparql query
query=$(cat ./myquery.sparql)
response=$(curl -G --data-urlencode query="${query}" https://wikidata.org/sparql?format=json)
echo "${response}" | jq '.results.bindings'
[
{
"language": {
"type": "uri",
"value": "https://lingualibre.org/entity/Q100"
},
"wikidata": {
"type": "literal",
"value": "Q36157"
},
"code": {
"type": "literal",
"value": "lub"
}
},
{
"language": {
"type": "uri",
"value": "https://lingualibre.org/entity/Q101"
},
"wikidata": {
"type": "literal",
"value": "Q36284"
},
"code": {
"type": "literal",
"value": "srr"
}
}
]
I would like to have the keys directly paired with their values, such as:
[
{
"language": "https://lingualibre.org/entity/Q100",
"wikidata": "Q36157",
"iso": "lub"
},
{
"language": "https://lingualibre.org/entity/Q101",
"wikidata": "Q36284",
"iso": "srr"
}
]
I currently have non-resilient code, which will break whenever the key names change:
jq 'map({"language":.language.value,"wikidata":.wikidata.value,"iso":.code.value})'
How can I pair the keys with their values in a resilient way (without naming the keys)?
I want to "prune" the child objects so that only the value is kept.
You could use map_values, which works like the outer map but for objects, i.e. it retains the object structure, including the field names:
jq 'map(map_values(.value))'
[
{
"language": "https://lingualibre.org/entity/Q100",
"wikidata": "Q36157",
"code": "lub"
},
{
"language": "https://lingualibre.org/entity/Q101",
"wikidata": "Q36284",
"code": "srr"
}
]
Note that this solution lacks the name conversion from code to iso.
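If the code-to-iso rename is needed, one way to keep the generic map_values approach and special-case only that key is with_entries (a sketch; it reintroduces the one key name that actually changes):
jq 'map(map_values(.value) | with_entries(if .key == "code" then .key = "iso" else . end))'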

Firestore Pagination: how to define **unique** 'startAt'-cursor for REST?

This is a follow-up question to one that has already been solved.
For that previous question, an answer was given showing how to define a cursor for query pagination with 'startAt' over REST; however, the cursor relates to a range of documents. In the example below, the cursor relates to all documents with an 'instructionNumber.stringValue' equal to "instr. 101". According to my testing, this results in documents being skipped.
New question:
How does the cursor have to be defined so that it refers not only to the stringValue of the field the query is ordered by, but to a distinct document (usually identified by its document ID)?
"structuredQuery": {
"from": [{"collectionId": "instructions"}],
"where": {
"fieldFilter": {
"field": {
"fieldPath": "belongsToDepartementID"
},
"op": "EQUAL",
"value": {
"stringValue": "toplevel-document-id"
}
}
},
"orderBy": [
{
"field": {
"fieldPath": "instructionNumber"
},
"direction": "ASCENDING"
}
],
"startAt": {
"values": [{
"stringValue": "instr. 101"
}]
},
"limit": 5
}
}
For better understanding, here is the condensed schema of the documents.
{
"document": {
"name": "projects/PROJECT_NAME/databases/(default)/documents/organizations/testManyInstructions/instructions/i104",
"fields":
"belongsToDepartementID": {
"stringValue": "toplevel-document-id"
},
"instructionNumber": {
"stringValue": "instr. 104"
},
"instructionTitle": {
"stringValue": "dummy Title104"
},
"instructionCurrentRevision": {
"stringValue": "A"
}
},
"createTime": "2022-02-18T13:55:47.300271Z",
"updateTime": "2022-02-18T13:55:47.300271Z"
}
}
For a query with no ordering:
"orderBy": [{
"direction": "ASCENDING",
"field": {"fieldPath": "__name__"}
}],
"startAt": {
"before": false,
"values": [{"referenceValue": "last/doc/ref"}]
}
For a query with ordering:
"orderBy": [
{
"direction": "DESCENDING",
"field": {"fieldPath": "instructionNumber"}
},
{
"direction": "DESCENDING",
"field": {"fieldPath": "__name__"}
}
],
"startAt":
{
"before": false,
"values": [
{"stringValue": "instr. 101"},
{"referenceValue": "last/doc/ref"}
]
}
Be sure to use the same direction for __name__ as the previous "orderBy" or it will need a composite index.
To ensure you identify a unique document to start at, you'll always want to include the document ID in your call to startAt.
I'm not sure of the exact syntax for the REST API, but the Firebase SDKs automatically pass this document ID when you call startAt with a DocumentSnapshot.
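For the REST API, a request along these lines should express that cursor (a sketch only: PROJECT_ID, the access token, and the i101 document path are placeholders inferred from the schema above, and the parent path assumes the instructions collection lives under organizations/testManyInstructions):
# POST the structured query with a document-reference cursor to the Firestore v1 runQuery endpoint
curl -X POST \
  "https://firestore.googleapis.com/v1/projects/PROJECT_ID/databases/(default)/documents/organizations/testManyInstructions:runQuery" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "structuredQuery": {
      "from": [{"collectionId": "instructions"}],
      "where": {
        "fieldFilter": {
          "field": {"fieldPath": "belongsToDepartementID"},
          "op": "EQUAL",
          "value": {"stringValue": "toplevel-document-id"}
        }
      },
      "orderBy": [
        {"field": {"fieldPath": "instructionNumber"}, "direction": "ASCENDING"},
        {"field": {"fieldPath": "__name__"}, "direction": "ASCENDING"}
      ],
      "startAt": {
        "before": false,
        "values": [
          {"stringValue": "instr. 101"},
          {"referenceValue": "projects/PROJECT_ID/databases/(default)/documents/organizations/testManyInstructions/instructions/i101"}
        ]
      },
      "limit": 5
    }
  }'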

The language expression property '0' can't be evaluated, property name must be a string - ARM Template error while adding Key Vault access policy

I've been working on an issue and seem to be stuck, so I'm asking here in case anyone can help.
To describe the issue: I've got an existing Azure Key Vault set up and I wish to add a number of access policies to it. This needs to be conditional: if a function's name is "false", that function should not be added to the Key Vault access policy.
My variables section:
"variables": {
"functionAccess": {
"value": [
{
"name": "[parameters('Function_1')]"
},
{
"name": "[parameters('Function_2')]"
},
{
"name": "[parameters('Function_3')]"
}
]
}
}
My template:
{
"apiVersion": "2016-10-01",
"condition": "[not(equals(variables('functionAccess')[CopyIndex()].name, 'false'))]",
"copy": {
"batchSize": 1,
"count": "[length(variables('functionAccess'))]",
"mode": "Serial",
"name": "accessPolicies"
},
"name": "[concat(parameters('KeyVault_Name'), '/add')]",
"properties": {
"accessPolicies": [
{
"tenantId": "[subscription().tenantId]",
"objectId": "[if(not(equals(variables('functionAccess')[CopyIndex()].name, 'false')), reference(concat('Microsoft.Web/sites/', variables('functionAccess')[CopyIndex()].name), '2016-08-01', 'Full').identity.principalId, json('null'))]",
"permissions": {
"keys": [
"get",
"list"
],
"secrets": [
"get",
"list"
],
"certificates": [
"get",
"list"
]
}
}
]
},
"type": "Microsoft.KeyVault/vaults/accessPolicies"
}
When I deploy my ARM template for the Azure Key Vault, I get this error message:
The language expression property '0' can't be evaluated, property name must be a string.
I also tried the following, but got the same error:
{
"apiVersion": "2018-02-14",
"name": "[concat(parameters('KeyVault_Name'), '/add')]",
"properties": {
"copy": [
{
"batchSize": 1,
"count": "[length(variables('functionAccess'))]",
"mode": "serial",
"name": "accessPolicies",
"input": {
"condition": "[not(equals(variables('functionAccess')[copyIndex('accessPolicies')].name, 'false'))]",
"tenantId": "[subscription().tenantId]",
"objectId": "[if(not(equals(variables('functionAccess')[copyIndex('accessPolicies')].name, 'false')), reference(concat('Microsoft.Web/sites/', variables('functionAccess')[copyIndex('accessPolicies')].name), '2016-08-01', 'Full').identity.principalId, json('null'))]",
"permissions": {
"keys": [
"get",
"list"
],
"secrets": [
"get",
"list"
],
"certificates": [
"get",
"list"
]
}
}
}
]
},
"type": "Microsoft.KeyVault/vaults/accessPolicies"
}
There are a few options for filtering an array for a copy operation. I deploy my ARM templates from PowerShell scripts and use PowerShell to set up parameter values. When I need special logic to handle different inputs for different environments, I let PowerShell handle it.
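For example, the filtering can happen entirely in the deployment script before the template ever sees the list (a sketch; the resource group name, template file, and function names are illustrative, and the template is assumed to take functionAccess as an array parameter):
# Build the function list, drop the 'false' placeholders, and pass the rest as a template parameter
$allFunctions   = @('Function-0', 'false', 'Function-2')
$functionAccess = @($allFunctions | Where-Object { $_ -ne 'false' })

New-AzResourceGroupDeployment `
    -ResourceGroupName 'my-resource-group' `
    -TemplateFile '.\keyvault-access-policies.json' `
    -TemplateParameterObject @{ functionAccess = $functionAccess }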
If you must handle the filtering in ARM and you have the option to input a CSV list of functions, then perhaps the following will work. You can then use the functionAccessArray to iterate over in the copy operation.
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
},
"variables": {
"functionAccessCsv": "Function-0,Function-1,false,Function-4,false,Function-6,Function-7",
"functionAccessFiltered": "[replace(replace(variables('functionAccessCsv'), 'false', ''), ',,', ',')]",
"functionAccessArray": "[split(variables('functionAccessFiltered'), ',')]"
},
"resources": [
],
"outputs": {
"functionAccessCsvFiltered": {
"type": "string",
"value": "[variables('functionAccessFiltered')]"
},
"functionAccessArray": {
"type": "array",
"value": "[variables('functionAccessArray')]"
}
}
}
The result: the deployment outputs contain the filtered CSV string and the corresponding array.
I just had the same issue. By using an array parameter with a default value instead of a variable, I got it to work.
"parameters": {
"functionAccess": {
"type": "array",
"defaultValue": [
"value1",
"value2",
"value3"
]
}
}
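With a parameter like that, the copy and condition can reference it the same way the variable was referenced; since the default value holds plain strings, there is no .name accessor (a sketch mirroring the original resource):
"condition": "[not(equals(parameters('functionAccess')[copyIndex()], 'false'))]",
"copy": {
  "batchSize": 1,
  "count": "[length(parameters('functionAccess'))]",
  "mode": "Serial",
  "name": "accessPolicies"
}
Indexing works here because the parameter itself is the array, whereas the original variable wrapped the array inside a value property, which is likely what the "property '0'" error points at.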

How to Query Google Cloud Datastore for array

I have written a query to get the list of all Event entities. The result coming back from Google Cloud Datastore looks like this.
[{
"key": {
"id": 5678669024460800,
"kind": "Event",
"path": [
"Event",
5678669024460800
]
},
"data": {
"createdAt": "2017-03-27T06:28:58.000Z",
"users":["test1#xxx.com","test2#xxx.com","test3#xxx.com"]
}
},
{
"key": {
"id": 5678669024460800,
"kind": "Event",
"path": [
"Event",
5678669024460800
]
},
"data": {
"createdAt": "2017-03-27T06:28:58.000Z",
"users":["test1#xxx.com"]
}
},
{
"key": {
"id": 5678669024460800,
"kind": "Event",
"path": [
"Event",
5678669024460800
]
},
"data": {
"createdAt": "2017-03-27T06:28:58.000Z",
"users":["test2#xxx.com","test3#xxx.com"]
}
}]
But I need to write a query that filters by email id, i.e. I need to fetch the entities that match the given email id. For example, if I pass the email id "test1#xxx.com", I should get a final result like this. Can anybody help me with this?
[{
"key": {
"id": 5678669024460800,
"kind": "Event",
"path": [
"Event",
5678669024460800
]
},
"data": {
"createdAt": "2017-03-27T06:28:58.000Z",
"users":["test1#xxx.com","test2#xxx.com","test3#xxx.com"]
}
},
{
"key": {
"id": 5678669024460800,
"kind": "Event",
"path": [
"Event",
5678669024460800
]
},
"data": {
"createdAt": "2017-03-27T06:28:58.000Z",
"users":["test1#xxx.com"]
}
}]
The GQL query would be something like:
SELECT * FROM Event WHERE users='test1#xxx.com'
You need to make sure the users property is indexed in order for the search to work, otherwise you may not get any results back.
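If the query is built in code rather than GQL, the equivalent filter with the Node.js Datastore client looks roughly like this (a sketch; it assumes the @google-cloud/datastore package and uses the older three-argument filter form):
// List Event entities whose repeated 'users' property contains the given email
const { Datastore } = require('@google-cloud/datastore');
const datastore = new Datastore();

async function eventsForUser(email) {
  const query = datastore
    .createQuery('Event')
    .filter('users', '=', email); // an array property matches if any element equals the value
  const [entities] = await datastore.runQuery(query);
  return entities;
}

eventsForUser('test1#xxx.com').then(events => console.log(events));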

How can I group a Gremlin Server (Titan 1.0) response by vertex id?

I'm trying the following query:
g.V(835776).out('Follow').in('WallPost').order().by('PostedTimeLong', decr).range(0,2)
and I'm getting the following response:
{
"requestId": "524462bc-5e46-40bf-aafd-64d00351dc87",
"status": {
"message": "",
"code": 200,
"attributes": { }
},
"result": {
"data": [
{
"id": 1745112,
"label": "Post",
"type": "vertex",
"properties": {
"PostImage": [
{
"id": "sd97-11ejc-2wat",
"value": ""
}
],
"PostedByUser": [
{
"id": "sc2j-11ejc-2txh",
"value": "orbitpage#gmail.com"
}
],
"PostedTime": [
{
"id": "scgr-11ejc-2upx",
"value": "2016-06-19T09:17:27.6791521Z"
}
],
"PostMessage": [
{
"id": "sbob-11ejc-2t51",
"value": "Hello #[tag:Urnotice_Profile|835776|1] , #[tag:Abhinav_Srivastava|872488|1] and #[tag:Rituraj_Rathore|839840|1]"
}
],
"PostedTimeLong": [
{
"id": "scuz-11ejc-2vid",
"value": 636019246476802029
}
]
}
},
{
"id": 1745112,
"label": "Post",
"type": "vertex",
"properties": {
"PostImage": [
{
"id": "sd97-11ejc-2wat",
"value": ""
}
],
"PostedByUser": [
{
"id": "sc2j-11ejc-2txh",
"value": "orbitpage#gmail.com"
}
],
"PostedTime": [
{
"id": "scgr-11ejc-2upx",
"value": "2016-06-19T09:17:27.6791521Z"
}
],
"PostMessage": [
{
"id": "sbob-11ejc-2t51",
"value": "Hello #[tag:Urnotice_Profile|835776|1] , #[tag:Abhinav_Srivastava|872488|1] and #[tag:Rituraj_Rathore|839840|1]"
}
],
"PostedTimeLong": [
{
"id": "scuz-11ejc-2vid",
"value": 636019246476802029
}
]
}
}
],
"meta": { }
}
}
Since the same post is attached to two different ids, it appears twice in the response. I want to group the response by vertex id (both entries have the same vertex id), or simply get a single object out of them, since both are the same.
I've tried the following queries, but nothing worked for me:
g.V(835776).out('Follow').in('WallPost').groupBy{it.id}.order().by('PostedTimeLong', decr).range(0,3)
g.V(835776).out('Follow').in('WallPost').group().by(id).order().by('PostedTimeLong', decr).range(0,3)
How can I group the result by vertex id?
The query
g.V(835776).out('Follow').in('WallPost').group().by(id).order().by('PostedTimeLong', decr).range(0,3)
should work, although order().by() and range() will have no effect. However, I don't think you really want group(); you more likely want dedup():
g.V(835776).out('Follow').in('WallPost').dedup().order().by('PostedTimeLong', decr).limit(3)
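If a grouped structure keyed by vertex id is still wanted after deduplication, the ordering and limit have to be applied before the group, roughly like this (a sketch in the same Gremlin dialect; syntax details may vary across TinkerPop versions):
g.V(835776).out('Follow').in('WallPost').
  dedup().
  order().by('PostedTimeLong', decr).
  limit(3).
  group().by(id).by(valueMap())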
