Query:
g.withSack(0).
  V().hasLabel('A').has('label_A','A').
  union(__.emit().repeat(sack(sum).by(constant(1)).in()),
        emit().repeat(sack(sum).by(constant(-1)).out())).
  project('level','properties').
    by(sack()).
    by(tree().by(valueMap().by(unfold())).unfold())
Output:
{
  "level": 1,
  "properties": {
    "key": {
      "label_A": "A"
    },
    "value": {
      "{label_A=A}": {}
    }
  }
},
{
  "level": 2,
  "properties": {
    "key": {
      "label_A": "A"
    },
    "value": {
      "{label_A=A}": {}
    }
  }
}
I'm getting the keys in JSON format but not the values. Please suggest changes to the query to achieve the values in JSON format as well.
The tree() step returns a tree structure that is essentially a Map of Map instances, so the output is about what you can expect. In this case, I wonder if you need tree() at all and could instead get by with path(), as it seems to accomplish the same result without the added structure:
g.withSack(0).
  V().hasLabel('A').has('label_A','A').
  union(__.emit().repeat(sack(sum).by(constant(1)).in()),
        emit().repeat(sack(sum).by(constant(-1)).out())).
  project('level','properties').
    by(sack()).
    by(path().by(elementMap()))
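With path(), each result row should carry the whole traversal path as a flat list of element maps instead of the nested key/value structure, roughly like the following (illustrative shape only, not actual output; the second map stands in for a hypothetical adjacent vertex):
{
  "level": 1,
  "properties": [
    { "id": 0, "label": "A", "label_A": "A" },
    { "id": 1, "label": "B", "label_B": "B" }
  ]
}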
I can successfully use the Newtonsoft.Json.Schema.Generation.JSchemaGenerator to generate a valid JSON Schema for a given class. This works fine; however, a third-party consumer requires that it has a
"allOf": [ { "$ref": "#/definitions/ClassName" } ]
block in the emitted JSON Schema.
I can currently get the output to appear as
{
  "$id": "https://xxxxx/classname",
  "definitions": {
    "ClassName": {
      "type": "object",
      "properties": {
but I need it to look like
{
  "$id": "https://xxxxx/classname",
  "allOf": [
    {
      "$ref": "#/definitions/ClassName"
    }
  ],
  "definitions": {
    "ClassName": {
      "type": "object",
      "properties": {
Does anybody know what I need to do to achieve this? There is only one class being referenced.
Currently, I have the generator using code like this:
var _jsonSchemaGenerator = new JSchemaGenerator();
_jsonSchemaGenerator.SchemaIdGenerationHandling = SchemaIdGenerationHandling.None;
_jsonSchemaGenerator.SchemaLocationHandling = SchemaLocationHandling.Definitions;
var schema = _jsonSchemaGenerator.Generate(typeof(T));
Console.WriteLine(schema);
Any help would be appreciated.
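As far as I know, JSchemaGenerator has no setting that emits an allOf wrapper directly, so one workable approach is to post-process the generated schema as plain JSON with Newtonsoft.Json.Linq. A hedged sketch, not a definitive answer: ClassName stands in for your actual type, and the single definitions key is assumed to match what the generator produced.
using System;
using Newtonsoft.Json.Linq;
using Newtonsoft.Json.Schema;
using Newtonsoft.Json.Schema.Generation;

var generator = new JSchemaGenerator
{
    SchemaIdGenerationHandling = SchemaIdGenerationHandling.None,
    SchemaLocationHandling = SchemaLocationHandling.Definitions
};
// ClassName is a placeholder for your actual type.
var schema = generator.Generate(typeof(ClassName));

// Re-parse the schema as a plain JObject so the allOf block can be spliced in.
var json = JObject.Parse(schema.ToString());
var allOf = new JArray(new JObject(new JProperty("$ref", "#/definitions/ClassName")));

// Insert allOf just before the definitions block to match the desired layout.
json.Property("definitions")?.AddBeforeSelf(new JProperty("allOf", allOf));
Console.WriteLine(json.ToString());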
Assuming we have the following data structure:
"data": [
{
"type": "node--press",
"id": "f04eab99-9174-4d00-bbbe-cdf45056660e",
"attributes": {
"nid": 130,
"uuid": "f04eab99-9174-4d00-bbbe-cdf45056660e",
"title": "TITLE OF NODE",
"revision_translation_affected": true,
"path": {
"alias": "/press/title-of-node",
"pid": 428,
"langcode": "es"
}
...
}
The data returned is compliant with the JSON API standard, and I have no problem retrieving and processing it, except that I need to be able to filter the returned nodes by the path pid.
How can I filter my data by path.pid?
I have tried:
- node-press?filter[path][pid]=428
- node-press?filter[path][pid][value]=428
to no avail
It's not well defined in the filters section of the specification, but other parameters such as include describe accessing nested keys with dot notation. You could try ?filter[path.pid]=428 and see whether the backend parses the filter that way.
"field_country": {
"data": {
"type": "taxonomy_term--country",
"id": "818f11ab-dd9d-406b-b1ca-f79491eedd73"
}
}
The above structure can be filtered by ?filter[field_country.id]=818f11ab-dd9d-406b-b1ca-f79491eedd73
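If the dot-notation shorthand is not picked up, and the backend is Drupal's JSON:API module, its documented long-form condition syntax should express the same filter; pid_filter below is just an arbitrary label of my choosing:
?filter[pid_filter][condition][path]=path.pid&filter[pid_filter][condition][value]=428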
I'm using the following query:
g.V(741440).outE('Notification').order().by('PostedDateLong', decr).range(0,1).as('notificationInfo').match(
    __.as('notificationInfo').inV().as('postInfo')
  ).select('notificationInfo','postInfo')
It gives the following result:
{
  "requestId": "9846447c-4217-4103-ac2e-de3536a3c62a",
  "status": {
    "message": "",
    "code": 200,
    "attributes": { }
  },
  "result": {
    "data": [
      {
        "notificationInfo": {
          "id": "c0zs-fw3k-347p-g2g0",
          "label": "Notification",
          "type": "edge",
          "inVLabel": "Comment",
          "outVLabel": "User",
          "inV": 749664,
          "outV": 741440,
          "properties": {
            "ParentPostId": "823488",
            "PostedDate": "2016-05-26T02:35:52.3889982Z",
            "PostedDateLong": 635998269523889982,
            "Type": "CommentedOnPostNotification",
            "NotificationInitiatedByVertexId": "1540312"
          }
        },
        "postInfo": {
          "id": 749664,
          "label": "Comment",
          "type": "vertex",
          "properties": {
            "PostImage": [
              {
                "id": "amto-g2g0-2wat",
                "value": ""
              }
            ],
            "PostedByUser": [
              {
                "id": "am18-g2g0-2txh",
                "value": "orbitpage#gmail.com"
              }
            ],
            "PostedTime": [
              {
                "id": "amfg-g2g0-2upx",
                "value": "2016-05-26T02:35:39.1489483Z"
              }
            ],
            "PostMessage": [
              {
                "id": "aln0-g2g0-2t51",
                "value": "hi"
              }
            ]
          }
        }
      }
    ],
    "meta": { }
  }
}
I want to get the information of the vertex referenced by the NotificationInitiatedByVertexId edge property in the response as well.
For that, I tried the following query:
g.V(741440).outE('Notification').order().by('PostedDateLong', decr).range(0,2).as('notificationInfo').match(
__.as('notificationInfo').inV().as('postInfo'),
g.V(1540312).next().as('notificationByUser')
).select('notificationInfo','postInfo','notificationByUser')
Note: I tried directly with the vertex id in the subquery, as I wasn't aware of how to dynamically get the value from the edge property within the query itself.
It gives an error. I have tried a lot but am not able to find any solution.
I'm assuming that you are storing a Titan-generated identifier in that edge property called NotificationInitiatedByVertexId. If so, please consider the following, even though this first part doesn't really answer your question: I don't think you should store a vertex identifier on the edge. Your graph model should explicitly track the NotificationInitiatedBy relationship with an edge; by storing the identifier of the vertex on the edge itself, you are bypassing that. Also, if you ever have to migrate your data in some way, the ids won't be preserved (Titan will generate new ones), and trying to sort that out will be a mess.
Even if that is not a Titan-generated identifier but a logical one you created, I would still look to adjust your graph schema and promote that Notification to a vertex, as sketched below. Your Gremlin traversals would then flow more easily.
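Purely for illustration, the promoted model might look something like this (the receivedNotification and initiatedNotification edge labels are my own inventions, and the syntax assumes a recent TinkerPop version; Titan-era syntax differed):
// Illustrative only: the notification becomes its own vertex, and the initiating
// user is tracked with an explicit edge instead of a stored vertex id.
notification = g.addV('Notification').
                 property('Type', 'CommentedOnPostNotification').
                 property('PostedDateLong', 635998269523889982L).next()
g.V(741440).addE('receivedNotification').to(notification).iterate()
g.V(1540312).addE('initiatedNotification').to(notification).iterate()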
Now, assuming you don't change that, I don't see a reason not to just issue two queries in the same request and then combine the results into one data structure. You just need to do a lookup with the vertex id, which is going to be pretty fast and inexpensive:
edgeStuff = g.V(741440).outE('Notification').
order().by('PostedDateLong', decr).range(0,1).as('notificationInfo').
... // whatever logic you have
select('notificationInfo','postInfo').next()
vertexStuff = g.V(edgeStuff.get('notificationInfo').value('NotificationInitiatedByVertexId')).next()
[notificationInitiatedBy: vertexStuff, notification: edgeStuff]
This might be a silly question, but I could not manage to filter Elasticsearch indexes by a datetime field. I must be missing something.
This is the mapping:
"created_at": {
"type": "date",
"format": "strict_date_optional_time||epoch_millis"
},
This is what I got:
{
  "_index": "myindex",
  "_type": "myindextype",
  "_id": "21c",
  "_score": 1,
  "_source": {
    "code": "21c",
    "name": "hello",
    ...
    "created_at": "2015-04-30T13:10:50.107769Z"
  }
},
With this query:
"query": {
"filtered": {
"query": {},
"filter": {
"range": {
"created_at": {
"gte": "2015-05-02T13:10:50.107769Z"
"format": "strict_date_optional_time||epoch_millis"
}}}}}
I would expect to filter out the entry above. But it returns nothing.
Is there a problem with the time format? It comes directly from Django REST Framework's serializers; they claim it is ISO 8601, and Elasticsearch claims the same.
I would also like to be able to filter by a simpler date like "2015-05-02".
I am stuck. Thank you in advance.
Edit: It does not matter what I write in the range filter. It always returns all the entries.
This worked. I tried a lot of different things and lost my way at some point.
{
  "query": {
    "filtered": {
      "filter": {
        "range": {
          "created_at": {
            "gte": "2015-05-02"
          }
        }
      }
    }
  }
}
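For what it's worth, once the JSON is well formed, the full-timestamp version should behave the same way. A minimal sketch keeping the format hint, assuming the Elasticsearch 1.x filtered-query syntax from the question:
{
  "query": {
    "filtered": {
      "filter": {
        "range": {
          "created_at": {
            "gte": "2015-05-02T13:10:50.107769Z",
            "format": "strict_date_optional_time||epoch_millis"
          }
        }
      }
    }
  }
}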
I need to retrieve the document whose geo location and date-time are closest to the request, so I'm not looking for an exact match on the date-time but for the closest one. I solved it using a custom script, but I'm guessing there might be a better way to do it, similar to the way I'm filtering the geo location based on a location and a distance.
Here's my code (in python):
query = {
    "query": {
        "function_score": {
            "boost_mode": "replace",
            "query": {
                "filtered": {
                    "query": {
                        "match_all": {}
                    },
                    "filter": {
                        "geo_distance": {
                            "distance": "10km",
                            "location": json.loads(self.request.body)["location"]
                        }
                    }
                }
            },
            "script_score": {
                "lang": "groovy",
                "script_file": "calculate-score",
                "params": {
                    "stamp": json.loads(self.request.body)["stamp"]
                }
            }
        }
    },
    "sort": [
        {"_score": "asc"}
    ],
    "size": 1
}
response = requests.get('http://localhost:9200/meteo/meteo/_search', data=json.dumps(query))
The custom calculate-score.groovy script contains the following:
Math.abs(new java.text.SimpleDateFormat("yyyy-MM-dd'T'HH:mm").parse(stamp).getTime() - doc["stamp"].date.getMillis()) / 60000
The script returns the score as the absolute difference in minutes between the document date-time and the requested date-time.
Is there any other way to achieve this?
You should be able to use function_score to do this.
You could use the decay functions mentioned in the documentation to give a larger score to documents closer to the origin timestamp. Below is an example
where scale = 28800 minutes, i.e. 20 days.
Example:
PUT test
PUT test/test/_mapping
{
  "properties": {
    "stamp": {
      "type": "date",
      "format": "dateOptionalTime"
    }
  }
}
PUT test/test/1
{
  "stamp": "2015-10-15T00:00"
}
PUT test/test/2
{
  "stamp": "2015-10-15T12:00"
}
POST test/_search
{
  "query": {
    "function_score": {
      "functions": [
        {
          "linear": {
            "stamp": {
              "origin": "now",
              "scale": "28800m"
            }
          }
        }
      ],
      "score_mode": "multiply",
      "boost_mode": "multiply",
      "query": {
        "match_all": {}
      }
    }
  }
}
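Tying this back to the original query: the decay function can sit alongside the geo_distance filter, so the custom script may not be needed at all. A hedged sketch against the meteo index from the question (the origin and location values are placeholders; note that with a decay function a higher score means a closer document, hence the descending sort):
POST meteo/_search
{
  "query": {
    "function_score": {
      "query": {
        "filtered": {
          "query": { "match_all": {} },
          "filter": {
            "geo_distance": {
              "distance": "10km",
              "location": { "lat": 40.4, "lon": -3.7 }
            }
          }
        }
      },
      "functions": [
        {
          "linear": {
            "stamp": {
              "origin": "2015-10-15T12:00",
              "scale": "60m"
            }
          }
        }
      ],
      "boost_mode": "replace"
    }
  },
  "sort": [ { "_score": "desc" } ],
  "size": 1
}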