Filter the JSON according to a string in an array in JSONPath - jsonpath

I have a JSON string with a child array that contains only strings. Is there a way I can get a reference to the objects whose array contains a specific string?
Example:
{ "Books":{
"History":[
{
"badge":"y",
"Tags":[
"Indian","Culture"
],
"ISBN":"xxxxxxx",
"id":1,
"name":"Cultures in India"
},
{
"badge":"y",
"Tags":[
"Pre-historic","Creatures"
],
"ISBN":"xxxxxxx",
"id":1,
"name":"Pre-historic Ages"
}
]
}
}
To Achieve:
From the above JSON string, I need to get all books in History that contain "Indian" inside the "Tags" list.
I am using JSONPath in my project, but if there is another API that provides similar functionality, any help is welcome.

If you're using Goessner JSONPath, $.Books.History[?(@.Tags.indexOf('Indian') != -1)] as mentioned in Duncan's answer should work.
If you're using the Jayway Java port (github.com/jayway/JsonPath), then
$.Books.History[?(@.Tags[?(@ == 'Indian')] != [])] works, or, more elegantly, use the in operator: $.Books.History[?('Indian' in @.Tags)]. I tried them both.
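For completeness, here is a minimal Java sketch of the Jayway variant. The class name and the inlined, trimmed copy of the question's JSON are just for illustration, and it assumes the com.jayway.jsonpath:json-path dependency is on the classpath:

import com.jayway.jsonpath.JsonPath;
import java.util.List;
import java.util.Map;

public class TagFilterExample {
    public static void main(String[] args) {
        // The sample document from the question, trimmed to the relevant fields.
        String json = "{\"Books\":{\"History\":["
                + "{\"Tags\":[\"Indian\",\"Culture\"],\"id\":1,\"name\":\"Cultures in India\"},"
                + "{\"Tags\":[\"Pre-historic\",\"Creatures\"],\"id\":1,\"name\":\"Pre-historic Ages\"}"
                + "]}}";

        // Keep only the History entries whose Tags array contains 'Indian'.
        List<Map<String, Object>> books =
                JsonPath.read(json, "$.Books.History[?('Indian' in @.Tags)]");

        // Prints the single matching book, "Cultures in India".
        System.out.println(books);
    }
}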

Assuming you are using Goessner JSONPath (http://goessner.net/articles/JsonPath/) the following should work:
$.Books.History[?(@.Tags.indexOf('Indian') != -1)]
According to the Goessner site, you can use underlying JavaScript inside the ?() filter. You can therefore use the JavaScript indexOf function to check if your Tags array contains the tag 'Indian'.
See a working example here using this JSONPath query tester:
http://www.jsonquerytool.com/sample/jsonpathfilterbyarraycontents

Did you try using Underscore.js? You can get the Indian books like this:
var data = { "Books": ... }; // the JSON document from the question
var indianBooks = _.filter(data.Books.History, function(book) { return _.contains(book.Tags, "Indian"); })

Related

VTL: Add a new property to maps inside an array (AWS AppSync, DynamoDB)

I'm new to VTL and AWS AppSync and am trying to get my head around how things work.
The maps that represent Steps in a list should get an id property with a UUID before they are stored in DynamoDB. To accomplish this, I tried to iterate over the array and call the put method on each map, as in the example below.
{
"version" : "2017-02-28",
"operation" : "PutItem",
"key": {
"id" : $util.dynamodb.toDynamoDBJson($util.autoId())
},
#set($input = $util.dynamodb.toMapValues($ctx.args.input))
#set($input.createdAt = $util.dynamodb.toDynamoDB($util.time.nowISO8601()))
#foreach($step in $input.steps)
$step.put('id', $util.dynamodb.toDynamoDBJson($util.autoId()))
#end
"attributeValues": $util.toJson($input)
}
my second try:
#foreach($step in $input.steps)
#set($step.id = $util.dynamodb.toDynamoDBJson($util.autoId()))
#end
But still, my maps do not have the property id.
Could the problem be that the foreach loop gives me just a copy of the map I am trying to modify, and not the original object?
Thank you for your time!
Hopefully, my question will serve all newbies to VTL and AppSync.
They do; it's just that the toMapValues utility method returns DynamoDB-typed values.
So if input.steps is supposed to be a list, what you actually get there is an object like {"L": [ ... ]}.
Try this:
#foreach($step in $input.steps.L)
$step.put('id', $util.dynamodb.toDynamoDBJson($util.autoId()))
#end
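To make the shape concrete: for an input such as { "steps": [ { "name": "a" } ] }, toMapValues hands the template DynamoDB-typed values roughly like this (a sketch of the DynamoDB JSON format, not output copied from AppSync):
{
  "steps": {
    "L": [
      { "M": { "name": { "S": "a" } } }
    ]
  }
}
That is why iterating $input.steps directly finds nothing, while $input.steps.L iterates the actual list elements.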

Spring Cloud Contract: how to verify an array list (Kotlin-based project)

I would like to write a Groovy contract to verify an array list with string values.
Let's say I have an object:
data class MyDataObject(val messageList: List<String>)
my contract is the following:
package contracts
import org.springframework.cloud.contract.spec.Contract
Contract.make {
name("retrieve_list_of_objects")
description("""
given:
you want to have a list of MyObjects
when:
you get the list
then:
you have the list
""")
request {
method 'GET'
url '/10/my-objects'
headers {
contentType(applicationJson())
}
}
response {
status 200
body(
[
messageList: ["23412341324"]
]
)
headers {
contentType(applicationJson())
}
} }
The problem is that the generated test is translated to:
assertThatJson(parsedJson).array("['messageList']").contains("23412341324").value();
and that results in:
com.jayway.jsonpath.PathNotFoundException: Expected to find an object with property ['messageList'] in path $ but found 'net.minidev.json.JSONArray'. This is not a json object according to the JsonProvider: 'com.jayway.jsonpath.spi.json.JsonSmartJsonProvider'.
The question is: how can I write my contract to create the following test:
assertThatJson(parsedJson).array("['messageList']").contains("23412341324");
I ran your snippet in my project and it generated a test that looks like this (I don't know why my generated test looks different from yours):
MockMvcRequestSpecification request = given()
.header("Accept", "application/json")
.body("{\"messageList\":[\"23412341324\"]}");
If I am reading your question right, you want the body to be a list of MyObjects, and not just one.
I think the problem is that you need to surround MyObject with one more set of square brackets, if indeed you want this to verify a list of MyObjects.
body(
[[
messageList: ["23412341324"]
]]
)
In General
Use SQUARE BRACKETS to make objects (yes, I know that in JSON square brackets are for arrays; it's weird, I didn't invent it).
You can write field names with or without quotes; both seem to work.
body([
stringField1: value(regex(".*")),
stringField2: value(regex(alphaNumeric())),
innerObject1: [
innerStringField1: "Hardcoded1",
innerIntegerField1: anyInteger()
]
])
Wait? How do I make JSON lists then if square brackets are for objects?
Double square brackets. Seriously.
body(
[[
stringFieldOfObjectInList: regex(".*")
]]
)

How to query documents that contain an array whose values include ["val1", "val2"] in Firestore

How can I query a collection where the condition applies to an array inside the document?
Here is an example document. I would like to know how to query for the documents where the brands are fiat and seat:
{
"name":"test 1",
"brands":[
{
"brand":{
"id":1,
"name":"Fiat",
"slug":"fiat",
"image":null,
"year_end":null,
"year_start":null
},
"released_at":"2018-10-26"
},
{
"brand":{
"id":2,
"name":"Seat",
"slug":"seat",
"image":null,
"year_end":null,
"year_start":null
},
"released_at":"2018-10-26"
},
{
"brand":{
"id":3,
"name":"Mercedes",
"slug":"mercedes",
"image":null,
"year_end":null,
"year_start":null
},
"released_at":"2018-10-26"
},
{
"brand":{
"id":4,
"name":"Yamaha",
"slug":"yamaha",
"image":null,
"year_end":null,
"year_start":null
},
"released_at":"2018-10-26"
}
]
}
I have tried something like:
.collection("motors")
.where("brands.slug", "array-contains-any", ["fiat", "seat"])
but this is not working, and I cannot figure out from the documentation how to achieve it.
When using the array-contains-any operator, you can check the values of your array against the value of a property of type String, not against an array. There is currently no way to use the array-contains-any operator on an array like this. There are two options: create two separate fields and two separate queries, or, since it is only a single document, get the entire document and filter the data on the client.
Edit:
What @FrankvanPuffelen has commented is correct. I did some research and found that we can check against any type, even complex types, not just against strings as mentioned before. The key to solving this issue is to match the entire object, meaning all properties of that object, and not just a partial match, for example one of three properties.
What you are trying to achieve does not work with your current database structure because your slug property exists in an object that is nested within the actual object in your array. A possible solution is to duplicate some data: add only the desired values into an array and use the array-contains-any operator on this newly created array, as sketched below.
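A rough sketch of that duplicated-field approach using the Firestore Android/Java SDK follows. The flattened field name brandSlugs and the wrapper class are assumptions for illustration, not something from the original schema:

import com.google.firebase.firestore.FirebaseFirestore;
import java.util.Arrays;

class MotorQueries {
    // Assumes each "motors" document also stores the nested slugs in a flat
    // array field, e.g. brandSlugs: ["fiat", "seat", "mercedes", "yamaha"].
    void findFiatOrSeat() {
        FirebaseFirestore.getInstance()
                .collection("motors")
                .whereArrayContainsAny("brandSlugs", Arrays.asList("fiat", "seat"))
                .get()
                .addOnSuccessListener(snapshot ->
                        snapshot.getDocuments().forEach(doc ->
                                System.out.println(doc.getString("name"))));
    }
}

Keep in mind that array-contains-any matches documents whose array contains either value; if you need only documents that contain both fiat and seat, you would still have to filter on the client or restructure further.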

Elasticsearch in ASP.NET - using the ampersand sign

I'm new to Elasticsearch in ASP.NET, and I have a problem which I have so far been unable to resolve.
From the documentation, I've seen that the & sign is not listed as a special character. Yet, when I submit my search, the ampersand is ignored entirely. For example, if I search for procter & gamble, the & sign is ignored. That causes quite a lot of problems for me, because I have companies with names like M&S. When the & sign is ignored, I basically get everything that has M or S in it. If I try an exact search (M&S), I have the same problem.
My code is:
void Connect()
{
node = new Uri(ConfigurationManager.AppSettings["Url"]);
settings = new ConnectionSettings(node);
settings.DefaultIndex(ConfigurationManager.AppSettings["defaultIndex"]);
settings.ThrowExceptions(true);
client = new ElasticClient(settings);
}
private string escapeChars(string inStr) {
var temp = inStr;
temp = temp
.Replace(@"\", @"\\")
.Replace(@">", string.Empty)
.Replace(@"<", string.Empty)
.Replace(@"{", string.Empty)
.Replace(@"}", string.Empty)
.Replace(@"[", string.Empty)
.Replace(@"]", string.Empty)
.Replace(@"*", string.Empty)
.Replace(@"?", string.Empty)
.Replace(@":", string.Empty)
.Replace(@"/", string.Empty);
return temp;
}
And then inside one of my functions
Connect();
ISearchResponse<ElasticSearch_Result> search_result;
var QString = escapeChars(searchString);
search_result = client.Search<ElasticSearch_Result>(s => s
.From(0)
.Size(101)
.Query(q =>
q.QueryString(b =>
b.Query(QString)
//.Analyzer("whitespace")
.Fields(fs => fs.Field(f => f.CompanyName))
)
)
.Highlight(h => h
.Order("score")
.TagsSchema("styled")
.Fields(fs => fs
.Field(f => f.CompanyName)
)
)
);
I've tried including analyzers, but then I found out that they change the way tokenizers split words. I haven't been able to implement changes to the tokenizer.
I would like to have the following scenario:
Search: M&S Company Foo Bar
Tokens: M&S, Company, Foo, Bar (as a bonus, it would be nice to also get M and S as tokens)
I'm using Elasticsearch v5.0.
Any help is more than welcome. Including better documentation than the one found here: https://www.elastic.co/guide/en/elasticsearch/client/net-api/5.x/writing-queries.html.
By default, the analyzer applied to a text field is the standard analyzer. This analyzer applies the standard tokenizer along with the lowercase token filter. So when you index a value for that field, the standard analyzer is applied to that value and the resulting tokens are indexed against the field.
Let's understand this with an example. For the field companyName (text type), assume the value passed while indexing a document is M&S Company Foo Bar. The resulting tokens after the standard analyzer is applied will be:
m
s
company
foo
bar
Notice that not just whitespace but also & is used as a delimiter to split and generate the tokens.
When you query against this field and don't pass an analyzer in the search query, by default the same analyzer used for indexing is applied for search as well. Therefore, if you search for M&S, it gets tokenized to m and s, and the actual search query looks for these two tokens instead of M&S.
To solve this, you need to change the analyzer for the field companyName. Instead of the standard analyzer, you can create a custom analyzer that uses the whitespace tokenizer and the lowercase filter (to make search case-insensitive). For this, you need to change the settings and mapping as below:
{
"settings": {
"analysis": {
"analyzer": {
"whitespace_lowercase": {
"tokenizer": "whitespace",
"filter": [
"lowercase"
]
}
}
}
},
"mappings": {
"_doc": {
"properties": {
"companyName": {
"type": "text",
"analyzer": "whitespace_lowercase",
"fields": {
"keyword": {
"type": "keyword"
}
}
}
}
}
}
}
Now for the above input the tokens generated will be:
m&s
company
foo
bar
This will ensure that when searching for M&S, & is not ignored.
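If you want to double-check the tokens, you could run the text through the _analyze API against the index once the custom analyzer is in place (a sketch; the index name my-index is an assumption):
GET my-index/_analyze
{
  "analyzer": "whitespace_lowercase",
  "text": "M&S Company Foo Bar"
}
It should return the m&s, company, foo, bar tokens listed above.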

AppSync Resolver only works when I hard code the input. context.arguments does not work

Edit for clarity: There are no error messages; it simply returns an empty list if the input string comes from context.arguments, which suggests it simply isn't reading the input variable in the query tester (setting it up incorrectly brings up that famous typing error, of course). I've also made this into a pipeline with exactly the same result. Looking around, people suggest making an intermediate object, but surely I'm just reading my input variables wrong somehow.
I'm working on a project in AWS Appsync using DynamoDB and I've run into a problem with the context.arguments input.
Basically the code all works if I hardcode the string for the book id into the query (full context to follow), but if I use the context.arguments, it simply refuses to work properly, returning an empty array for the "spines".
I have the following types in my schema:
type Book {
id: ID!
title: String
spines: [Spine]
}
type Spine {
id: ID!
name: String
bookId: ID!
}
I use the following query type in my schema:
type Query {
getBook(id: ID!): Book
}
and I run this query:
query getBook($bookId: ID!){
getBook(id: $bookId){
title
id
spines {
name
bookId
}
}
}
With the following input (assume this is a relevant guid):
{
"bookId": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
}
And this resolver for the spines object:
{
"version" : "2017-02-28",
"operation" : "Query",
"index" : "bookId-index",
"query" : {
"expression": "#bookId = :bookId",
"expressionNames" : {
"#bookId" : "bookId"
},
"expressionValues" : {
":bookId" : { "S" : "${context.arguments.id}" }
}
}
}
I made sure my data set contained false positives too (spines for other books) so that I know when my query brings back the correct data.
This works if I hardcode a GUID string instead of using context.arguments, and it returns exactly what I'm looking for for each book GUID.
For example, replacing the expression values with this works perfectly:
"expressionValues" : {
":bookId" : { "S" : "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" }
}
Why does "${context.arguments.id}" not get the input variable here the same way as it seems to in other queries?
Thanks to @IonutTrestian for pointing me in the right direction.
$ctx.args was empty, but I decided to go up the chain to see what was in the entire context, so I dumped it with $util.error($util.toJson($ctx)).
The JSON object I found included a little object called "source", which contained the query return for the Book object.
Long story short, $ctx.source.id, when applied to my query, worked like a charm.
I also know a bit more about debugging DynamoDB resolvers in case I encounter problems like this in future. Thank you so much!
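For anyone who lands here later, the only change needed in the resolver above is the reference inside expressionValues, roughly like this ($ctx is just an alias for $context):
"expressionValues" : {
":bookId" : { "S" : "${context.source.id}" }
}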
