Is it possible to validate a JSONPath expression? - jsonpath

I want to know if it's possible to validate a JSONPath expression.
My JSONPath is $.phoneNumbers[:1].type
And my JSON is as follows:
{
  "phoneNumbers": [
    {
      "type": "iPhone",
      "number": "0123-4567-8888"
    },
    {
      "type": "home",
      "number": "0123-4567-8910"
    }
  ]
}
I want to know if I am using the right/valid JSONPath expression.

You can go to http://jsonpath.curiousconcept.com/ and try it there.
Or you can do it in a programming language.
Here's how you could do it in Ruby, using the jsonpath gem:
require 'jsonpath'

data = <<-EOS
{
  "phoneNumbers": [
    {
      "type": "iPhone",
      "number": "0123-4567-8888"
    },
    {
      "type": "home",
      "number": "0123-4567-8910"
    }
  ]
}
EOS

JsonPath.new("$.phoneNumbers[:1].type").on(data)
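If the expression parses and matches, the on method returns an array of matched values (here, something like ["iPhone"]); an empty array means the expression is valid but matched nothing. To validate an expression without real data, you can attempt to evaluate it and rescue the parse error. A minimal sketch, assuming the gem raises on malformed paths (the exact exception class varies by gem version, so this rescues broadly):

require 'jsonpath'

# Returns true if the expression parses, false otherwise.
def valid_jsonpath?(expression)
  JsonPath.new(expression).on('{}') # parse errors surface here; matches aren't needed
  true
rescue StandardError
  false
end

valid_jsonpath?("$.phoneNumbers[:1].type") # => true
valid_jsonpath?("$.phoneNumbers[")         # => false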

@Parag A - I just wrote a JavaScript library with which you can query JSON structures with XPath (which is standardised, as opposed to JSONPath). Using this XPath evaluator, you can paste in your JSON structure and validate your XPath queries in place:
http://www.defiantjs.com/#xpath_evaluator
Putting your question and the answer into JavaScript code, it'll look something like this:
var obj = {
  "phoneNumbers": [
    {
      "type": "iPhone",
      "number": "0123-4567-8888"
    },
    {
      "type": "home",
      "number": "0123-4567-8910"
    }
  ]
};
var qry = JSON.search(obj, '//phoneNumbers[1]/type');
// qry[0] = 'iPhone'
Defiant extends the global JSON object, and the search method always returns an array of matching selections (an empty array if no matches were found).

Related

How to parse using JSONPath?

Given JSON like the below, I want to get the value of 'pageName' where the value of 'createdTs' is '333'. How should I use JSONPath?
[
  {
    "Test1": {
      "pageName": "Apple"
    },
    "Test1-1": {
      "createdTs": "333"
    }
  },
  {
    "Test2": {
      "pageName": "Tomato"
    },
    "Test2-1": {
      "createdTs": "555"
    }
  }
]
My attempt, .Test1-1[?(@.createdTs == 333)], returns only this:
[
  {
    "createdTs": "333"
  }
]
The parsing result I want is this:
[
  {
    "Test1": {
      "pageName": "Apple"
    },
    "Test1-1": {
      "createdTs": "333"
    }
  }
]
If you are using Jayway JsonPath, a Java DSL for reading JSON documents, then you can use these JSONPath expressions:
$[?(@.['Test1-1'].createdTs == 333)]
Or, using the in filter operator:
$[?(333 in @..createdTs)]
Or, using the contains filter operator:
$[?(@..createdTs contains 333)]
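Putting the first expression to work with Jayway looks roughly like this. A minimal sketch (the class name and the inlined document are just for illustration; since createdTs is stored as a string in the document, the comparison here quotes '333'):

import com.jayway.jsonpath.JsonPath;
import java.util.List;

public class CreatedTsFilter {
    public static void main(String[] args) {
        String json = "[{\"Test1\":{\"pageName\":\"Apple\"},\"Test1-1\":{\"createdTs\":\"333\"}},"
                    + "{\"Test2\":{\"pageName\":\"Tomato\"},\"Test2-1\":{\"createdTs\":\"555\"}}]";
        // The filter keeps whole top-level elements, so the match includes
        // both "Test1" (with pageName) and "Test1-1".
        List<Object> matches = JsonPath.read(json, "$[?(@['Test1-1'].createdTs == '333')]");
        System.out.println(matches);
    }
}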

How to keep FULL_TRANSITIVE compatibility while adding new types to nested map in avro schema?

I have an existing Avro schema that contains a field with a nested map of maps of a record type (let's call it RecordA for now). I'm wondering if it's possible to add a new record type, RecordB, to this nested map of maps while maintaining FULL_TRANSITIVE compatibility?
My thinking was that as long as the inner maps get defaulted to an empty map, the data still adheres to the schema, so it's backwards/forwards compatible.
I've tried to redefine the type map<map<RecordA>> maps as map<map<union{RecordA, RecordB}>> maps in an .avdl file, but the schema registry is telling me this is not compatible.
I've also tried to default each map individually to an empty map ({}) in a generated .avsc file, but the schema registry says that's incompatible as well.
I do want to acknowledge that I know map<map<..>> is a bad practice, but what's been done has been done.
Registered Schema (original) .avdl:
record Outer {
  map<map<RecordA>> maps;
}

record RecordA {
  string value;
  string updateTime;
}
Attempt with .avdl:
record Outer {
  map<map<union{RecordA, RecordB}>> maps = {};
}

record RecordA {
  string value;
  string updateTime;
}

record RecordB {
  union{null, array<string>} values = null;
  union{null, string} updateTime = null;
}
Attempt with .avsc:
{
  "name": "maps",
  "type": {
    "type": "map",
    "values": {
      "type": "map",
      "values": [
        {
          "type": "record",
          "name": "RecordA",
          "fields": [
            { "name": "value", "type": "string" },
            { "name": "updateTime", "type": "string" }
          ],
          "default": {}
        },
        {
          "type": "record",
          "name": "RecordB",
          "fields": [
            { "name": "value", "type": ["null", "string"], "default": null },
            { "name": "values", "type": ["null", "string"], "default": null },
            { "name": "updateTime", "type": ["null", "string"], "default": null }
          ],
          "default": {}
        }
      ]
    }
  },
  "default": {}
}
The end goal is to have a map of maps to a record that has a field that can be either a string or an array<string>. The original schema was registered to a schema registry where the field has type string, with no union with null and no default, so I believe the map needs to map to a union of types covering either version of the field.
Each attempt has returned the following from the schema registry's compatibility API:
{
  "is_compatible": false
}
Any insight would be very much appreciated!
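(One schema evolution that registries generally do accept as FULL_TRANSITIVE-compatible is leaving the existing types alone and adding a new optional field, rather than widening an existing union. A hedged .avdl sketch of that pattern, not a fix for the union approach above; the field name valueList is made up:)

record RecordA {
  string value;
  string updateTime;
  // New optional field: null must be the first union branch for the null default.
  union{null, array<string>} valueList = null;
}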

Insert date as epoch_seconds, output as formatted date

I have a set of timestamps formatted as seconds since the epoch. I'd like to insert them into Elasticsearch as epoch_second, but when querying I would like to see the output as a pretty date, e.g. strict_date_optional_time.
My mapping below preserves whatever format the input came in - is there any way to normalize the output to just one format via the mapping API?
Current Mapping:
PUT example
{
  "mappings": {
    "time": {
      "properties": {
        "time_stamp": {
          "type": "date",
          "format": "strict_date_optional_time||epoch_second"
        }
      }
    }
  }
}
Example docs:
POST example/time
{
  "time_stamp": "2018-03-18T00:00:00.000Z"
}

POST example/time
{
  "time_stamp": "1521389162" // Would like this to output as: 2018-03-18T16:05:50.000Z
}
GET example/_search output:
{
  "total": 2,
  "max_score": 1,
  "hits": [
    {
      "_source": {
        "time_stamp": "1521389162" // Stayed as epoch_second
      }
    },
    {
      "_source": {
        "time_stamp": "2018-03-18T00:00:00.000Z"
      }
    }
  ]
}
Elasticsearch differentiates between the _source and the so-called stored fields. The former is supposed to represent your input.
If you actually use stored fields (by specifying store=true in your mapping) and specify multiple date formats, this is easy (quoting the docs, emphasis mine):
Multiple formats can be specified by separating them with || as a separator. Each format will be tried in turn until a matching format is found. The first format will be used to convert the milliseconds-since-the-epoch value back into a string.
I have tested this with Elasticsearch 5.6.4 and it works fine:
PUT /test
{
  "mappings": {
    "doc": {
      "properties": {
        "post_date": {
          "type": "date",
          "format": "basic_date_time||epoch_millis",
          "store": true
        }
      }
    }
  }
}

PUT /test/doc/2
{
  "user": "test1",
  "post_date": "20150101T121030.000+01:00"
}

PUT /test/doc/1
{
  "user": "test2",
  "post_date": 1525167490500
}
Note how the two different input formats result in the same output format when using GET /test/_search?stored_fields=post_date&pretty=1:
{
  "hits": [
    {
      "_index": "test",
      "_type": "doc",
      "_id": "2",
      "_score": 1.0,
      "fields": {
        "post_date": [
          "20150101T111030.000Z"
        ]
      }
    },
    {
      "_index": "test",
      "_type": "doc",
      "_id": "1",
      "_score": 1.0,
      "fields": {
        "post_date": [
          "20180501T093810.500Z"
        ]
      }
    }
  ]
}
If you want to change the input (in _source) you're not so lucky: the mapping-transform feature has been removed:
This was deprecated in 2.0.0 because it made debugging very difficult. As of now there really isn’t a feature to use in its place other than transforming the document in the client application.
If, instead of changing the stored data, you are interested in formatting the output, have a look at this answer to Format date in elasticsearch query (during retrieval).
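On newer Elasticsearch versions you can also format dates at query time with docvalue_fields, which returns the formatted value under fields while leaving _source untouched. A sketch (the per-field format parameter is not available on 5.6.x, so check the docs for your version):

GET example/_search
{
  "docvalue_fields": [
    { "field": "time_stamp", "format": "strict_date_optional_time" }
  ]
}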

API Gateway and DynamoDB PutItem for String Set

I can't seem to find how to correctly call PutItem for a StringSet in DynamoDB through API Gateway. If I call it like I would for a List of Maps, then I get objects returned. Example data is below.
{
  "eventId": "Lorem",
  "eventName": "Lorem",
  "companies": [
    {
      "companyId": "Lorem",
      "companyName": "Lorem"
    }
  ],
  "eventTags": [
    "Lorem",
    "Lorem"
  ]
}
And my example template call for companies:
"companies" : {
"L": [
#foreach($elem in $inputRoot.companies) {
"M": {
"companyId": {
"S": "$elem.companyId"
},
"companyName": {
"S": "$elem.companyName"
}
}
} #if($foreach.hasNext),#end
#end
]
}
I've tried calling it with SS as shown below, but it still errors out, telling me "Start of structure or map found where not expected" or that serialization failed.
"eventTags" : {
"SS": [
#foreach($elem in $inputRoot.eventTags) {
"S":"$elem"
} #if($foreach.hasNext),#end
#end
]
}
What is the proper way to call PutItem for converting an array of strings to a String Set?
If you are using the JavaScript AWS SDK, you can use the DocumentClient API (docClient.createSet) to store the SET data type.
docClient.createSet converts the array into a SET data type:
var docClient = new AWS.DynamoDB.DocumentClient();
var params = {
  TableName: table,
  Item: {
    "yearkey": year,
    "title": title,
    // createSet marshals the array as a DynamoDB set rather than a list
    "product": docClient.createSet(['milk', 'veg'])
  }
};
docClient.put(params, function(err, data) {
  if (err) console.error(err);
});
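As for the mapping-template error itself: in DynamoDB's JSON wire format a string set is a plain array of strings, with no inner {"S": ...} wrappers, which is what "Start of structure or map found where not expected" is pointing at. A sketch of the corrected template (untested against your setup):

"eventTags": {
  "SS": [
    #foreach($elem in $inputRoot.eventTags)
    "$elem"#if($foreach.hasNext),#end
    #end
  ]
}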

ASP.Net Core - Get All data of a post form

I want to save all the data of a form.
My form has these elements (using the Postman plugin).
My controller is like this:
[HttpPost]
public async Task<IActionResult> Insert(IFormCollection data)
{
    return Ok(data);
}
So, I am getting something like this:
[
  {
    "key": "user_id",
    "value": [
      "'12'"
    ]
  },
  {
    "key": "title",
    "value": [
      "123"
    ]
  },
  {
    "key": "text[]",
    "value": [
      "werwer",
      "ghj"
    ]
  }
]
I want to get the values of text. So, for this case:
"werwer",
"ghj"
I have tried something like this:
foreach (string description in data["text"])
{
    // description => empty
}
and also tried this:
data.text
and also:
data->text
But nothing works for me.
Can anyone please help? Thanks in advance.
Why not loop through the keys and, if the key is "text", get the values? Since the value converts to a comma-separated string, you can call the Split method on it to get an array that contains 2 items (from your sample input).
foreach (string description in data.Keys)
{
    if (description.Equals("text"))
    {
        // IFormCollection values are StringValues; converting to string
        // joins multiple submitted values with commas
        string v = data[description];
        var stringItems = v.Split(',');
        foreach (var stringItem in stringItems)
        {
            // do something with stringItem
        }
    }
}
BTW, the key should be text, not text[]. Even if you have multiple input fields with the same name "text", when you submit, it will be a single key ("text") with 2 items in the value property:
{
  "key": "text",
  "value": [
    "werwer",
    "ghj"
  ]
}
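If the client really does post the key as text[] (as in the output above), you can also index the form collection with that exact key; the returned StringValues is enumerable, so no Split is needed. A minimal sketch:

[HttpPost]
public IActionResult Insert(IFormCollection data)
{
    var texts = new List<string>();
    // data["text[]"] holds one entry per submitted value
    foreach (string description in data["text[]"])
    {
        texts.Add(description); // "werwer", then "ghj"
    }
    return Ok(texts);
}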
