Sum paths in weighted graph - gremlin

I have a bi-directional graph, similar to this: https://gremlify.com/6zxjsbstb5f, where out edges have a weight property.
There is a closeness relationship between articles: the total weight of all paths between two articles.
So far I've been able to get the paths between articles, but the weighting is only the weight value of each individual path. I would like the aggregate (sum) weight of all paths between the starting article and each article in the set returned by: repeat(outE().inV().simplePath()).until(hasLabel('article'))
g.V('70679').
repeat(outE().inV().simplePath()).
until(hasLabel('article')).as('a').
path().as('p').
map(unfold().coalesce(values('weight'),constant(0)).sum()).as('weighting').
select('weighting', 'p')
Steps to create the sample graph (taken from Gremlify)
g.addV('article').as('1').
addV('brand').as('2').
addV('article').as('3').
addV('category').as('4').
addV('zone').as('5').
addV('article').as('6').
addV('article').as('7').
addE('zone').from('1').to('5').property('weight', 0.1).
addE('category').from('1').to('4').property('weight', 0.5).
addE('brand').from('1').to('2').property('weight', 0.8).
addE('article').from('2').to('6').
addE('article').from('2').to('1').
addE('article').from('2').to('3').
addE('zone').from('3').to('5').property('weight', 0.1).
addE('category').from('3').to('4').property('weight', 0.3).
addE('brand').from('3').to('2').property('weight', 0.4).
addE('article').from('4').to('1').
addE('article').from('4').to('3').
addE('article').from('5').to('6').
addE('article').from('5').to('7').
addE('article').from('5').to('1').
addE('article').from('5').to('3').
addE('zone').from('6').to('5').property('weight', 0.1).
addE('brand').from('6').to('2').property('weight', 0.6).
addE('zone').from('7').to('5').property('weight', 0.1)
I've been able to get this query, which is close to what we require (8630 is an article id in the graph):
g.V('8630')
.repeat(outE().inV().simplePath())
.until(hasLabel('article')).as('foundArticle')
.path()
.map(unfold().coalesce(values('weight'), constant(0)).sum()).as('pathWeight')
.group().by(select('foundArticle').id()).as('grouping')
This produces results similar to:
[
  {
    "8634": [0.1, 0.5, 0.8]
  },
  {
    "8640": [0.1, 0.8]
  },
  {
    "8642": [0.1]
  }
]
More desirable would be a result set similar to:
[
  {
    "8634": 1.4
  },
  {
    "8640": 0.9
  },
  {
    "8642": 0.1
  }
]
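For reference, the missing step is just a per-key sum over the grouped lists. A minimal Python sketch of the equivalent post-processing (the ids and weights are the example values from the output above):

```python
# Grouped result: article id -> list of per-path weights (from the output above)
grouped = {"8634": [0.1, 0.5, 0.8], "8640": [0.1, 0.8], "8642": [0.1]}

# Desired result: article id -> total weight across all paths
# (rounded to sidestep floating-point noise in the sums)
totals = {article: round(sum(weights), 6) for article, weights in grouped.items()}
print(totals)  # {'8634': 1.4, '8640': 0.9, '8642': 0.1}
```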

Just to make it easier, I gave each article a custom ID. The ID A1 corresponds to the article with ID 8630 in the example output you showed.
g.addV('article').as('1').property(id,'A1').
addV('brand').as('2').
addV('article').as('3').property(id,'A2').
addV('category').as('4').
addV('zone').as('5').
addV('article').as('6').property(id,'A3').
addV('article').as('7').property(id,'A4').
addE('zone').from('1').to('5').property('weight', 0.1).
addE('category').from('1').to('4').property('weight', 0.5).
addE('brand').from('1').to('2').property('weight', 0.8).
addE('article').from('2').to('6').
addE('article').from('2').to('1').
addE('article').from('2').to('3').
addE('zone').from('3').to('5').property('weight', 0.1).
addE('category').from('3').to('4').property('weight', 0.3).
addE('brand').from('3').to('2').property('weight', 0.4).
addE('article').from('4').to('1').
addE('article').from('4').to('3').
addE('article').from('5').to('6').
addE('article').from('5').to('7').
addE('article').from('5').to('1').
addE('article').from('5').to('3').
addE('zone').from('6').to('5').property('weight', 0.1).
addE('brand').from('6').to('2').property('weight', 0.6).
addE('zone').from('7').to('5').property('weight', 0.1)
The query you had produced was actually very close to the result you wanted. I just added a second by() step to the group() to sum up the values.
g.V('A1').
repeat(outE().inV().simplePath()).
until(hasLabel('article')).as('foundArticle').
path().
map(unfold().coalesce(values('weight'), constant(0)).sum()).as('pathWeight').
group().
by(select('foundArticle').id()).
by(sum()).
unfold()
Which yields
{'A2': 1.4}
{'A3': 0.9}
{'A4': 0.1}
I think your query can also be simplified. If I come up with something simpler I will add it to this answer.
UPDATED
Here's a version of the query that uses sack() and avoids needing to collect the path and post-process it.
g.withSack(0).
V('A1').
repeat(outE().sack(sum).by(coalesce(values('weight'),constant(0))).
inV().simplePath()).
until(hasLabel('article')).
group().
by(id()).
by(sack().sum()).
unfold()
which again yields
{'A2': 1.4}
{'A3': 0.9}
{'A4': 0.1}
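A rough Python analogue may help illustrate what the sack traversal is doing: each traverser carries a running sum (the sack), adds the edge weight as it crosses each out-edge (0 when absent, mirroring the coalesce in the earlier query), never revisits a vertex (simplePath()), and stops at the first article, where the sums are grouped by id. This is an illustration of the idea on the sample graph above, not Gremlin semantics:

```python
# Vertices and out-edges of the sample graph; unweighted edges count as 0.
labels = {"A1": "article", "A2": "article", "A3": "article", "A4": "article",
          "B": "brand", "C": "category", "Z": "zone"}
edges = {  # vertex -> [(neighbour, edge weight)]
    "A1": [("Z", 0.1), ("C", 0.5), ("B", 0.8)],
    "A2": [("Z", 0.1), ("C", 0.3), ("B", 0.4)],
    "A3": [("Z", 0.1), ("B", 0.6)],
    "A4": [("Z", 0.1)],
    "B": [("A3", 0), ("A1", 0), ("A2", 0)],
    "C": [("A1", 0), ("A2", 0)],
    "Z": [("A3", 0), ("A4", 0), ("A1", 0), ("A2", 0)],
}

def walk(v, path, sack, totals):
    for nxt, w in edges[v]:
        if nxt in path:                   # simplePath(): no revisits
            continue
        s = sack + w                      # sack(sum): running weight total
        if labels[nxt] == "article":      # until(hasLabel('article'))
            totals[nxt] = totals.get(nxt, 0) + s
        else:
            walk(nxt, path | {nxt}, s, totals)

totals = {}
walk("A1", {"A1"}, 0, totals)
print({k: round(v, 6) for k, v in sorted(totals.items())})
# {'A2': 1.4, 'A3': 0.9, 'A4': 0.1}
```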


How is the number of random walks determined in GDS/Neo4j?

I am running the random walk algorithm on my Neo4j graph named 'example', with the minimum allowed walk length (2) and walks per node (1). Namely,
CALL gds.beta.randomWalk.stream(
'example',
{
walkLength: 2,
walksPerNode: 1,
randomSeed: 42,
concurrency: 1
}
)
YIELD nodeIds, path
RETURN nodeIds, [node IN nodes(path) | node.name ] AS event_name
And I get 41 walks. How is this number determined? I checked the graph and it contains 161 nodes and 574 edges. Any insights?
Added later: Here is more info on the projected graph that I am constructing. Basically, I am filtering on nodes and relationships and just projecting the subgraph and doing nothing else. Here is the code -
// Filter for only IDH Codel recurrent events
WITH [path=(m:IDHcodel)--(n:Tissue)
WHERE (m.node_category = 'molecular' AND n.event_class = 'Recurrence')
AND NOT EXISTS((m)--(:Tissue{event_class:'Primary'})) | m] AS recur_events
// Obtain the sub-network with 2 or more patients in edges
MATCH p=(m1)-[r:hasIDHcodelPatients]-(m2)
WHERE (m1 IN recur_events AND m2 IN recur_events AND r.total_common_patients >= 2)
WITH COLLECT(p) AS all_paths
WITH [p IN all_paths | nodes(p)] AS path_nodes, [p IN all_paths | relationships(p)] AS path_rels
WITH apoc.coll.toSet(apoc.coll.flatten(path_nodes)) AS subgraph_nodes, apoc.coll.flatten(path_rels) AS subgraph_rels
// Form the GDS Cypher projection
CALL gds.graph.create.cypher(
'example',
'MATCH (n) where n in $sn RETURN id(n) as id',
'MATCH ()-[r]-() where r in $sr RETURN id(startNode(r)) as source , id(endNode(r)) as target, { LINKS: { orientation: "UNDIRECTED" } }',
{parameters: {sn: subgraph_nodes, sr: subgraph_rels} }
)
YIELD graphName AS graph, nodeQuery, nodeCount AS nodes, relationshipQuery, relationshipCount AS rels
RETURN graph, nodes, rels
Thanks.
It seems that the documentation is missing the description for the sourceNodes parameter, which would tell you how many walks will be created.
We don't know the default value, but we can use the parameter to set the source nodes that the walk should start from.
For example, you could use all the nodes in the graph to be treated as a source node (the random walk will start from them).
MATCH (n)
WITH collect(n) AS nodes
CALL gds.beta.randomWalk.stream(
'example',
{ sourceNodes:nodes,
walkLength: 2,
walksPerNode: 1,
randomSeed: 42,
concurrency: 1
}
)
YIELD nodeIds, path
RETURN nodeIds, [node IN nodes(path) | node.name ] AS event_name
This way you should get 161 walks as there are 161 nodes in your graph and the walksPerNode is set to 1, so a single random walk will start from every node in the graph. In essence, the number of source nodes times the walks per node will determine the number of random walks.

Finding peaks with minimum peak width in R - similar to MATLAB function

I need to find peaks in time series data, but the result needs to be equal to the result of the findpeaks function in MATLAB, with the argument 'MinPeakWidth' set to 10. I have already tried a lot of functions in order to achieve this: pracma::findpeaks, fluoR::find_peaks, splus2R::peaks, IDPmisc::peaks (this one has one argument regarding peak width, but the result is not the same). I have also looked into other functions, including packages for chromatography and spectroscopy analysis on Bioconductor. Beyond that, I have tried the functions (with small alterations) from this other Stack Overflow question: Finding local maxima and minima
The findpeaks function in MATLAB is used for finding local maxima and has the following characteristics:
Find the local maxima. The peaks are output in order of occurrence. The first sample is not included despite being the maximum. For the flat peak, the function returns only the point with lowest index.
The explanation for the 'MinPeakWidth' argument on the MATLAB website is:
Minimum peak width, specified as the comma-separated pair consisting of 'MinPeakWidth' and a positive real scalar. Use this argument to select only those peaks that have widths of at least 'MinPeakWidth'.
If you specify a location vector, x, then 'MinPeakWidth' must be expressed in terms of x. If x is a datetime array, then specify 'MinPeakWidth' as a duration scalar or as a numeric scalar expressed in days.
If you specify a sample rate, Fs, then 'MinPeakWidth' must be expressed in units of time.
If you specify neither x nor Fs, then 'MinPeakWidth' must be expressed in units of samples.
Data Types: double | single | duration
This is the data:
valores <- tibble::tibble(V1 = c(
0.04386573, 0.06169861, 0.03743560, 0.04512523, 0.04517977, 0.02927114, 0.04224937, 0.06596527, 2.15621006, 0.02547804, 0.03134409, 0.02867694,
0.08251871, 0.03252856, 0.06901365, 0.03201109, 0.04214851, 0.04679828, 0.04076178, 0.03922274, 1.65163662, 0.03630282, 0.04146608, 0.02618668,
0.04845364, 0.03202031, 0.03699149, 0.02811389, 0.03354410, 0.02975296, 0.03378896, 0.04440788, 0.46503730, 0.06128226, 0.01934736, 0.02055138,
0.04233819, 0.03398005, 0.02528630, 0.03694652, 0.02888223, 0.03463824, 0.04380172, 0.03297124, 0.04850558, 0.04579087, 1.48031231, 0.03735059,
0.04192204, 0.05789367, 0.03819694, 0.03344671, 0.05867103, 0.02590745, 0.05405133, 0.04941912, 0.63658824, 0.03134409, 0.04151859, 0.03502503,
0.02182294, 0.15397702, 0.02455722, 0.02775277, 0.04596132, 0.03900906, 0.03383408, 0.03517160, 0.02927114, 0.03888822, 0.03077891, 0.04236406,
0.05663730, 0.03619537, 0.04294887, 0.03497815, 0.03995837, 0.04374904, 0.03922274, 0.03596561, 0.03157820, 0.26390591, 0.06596527, 0.04050374,
0.02888223, 0.03824380, 0.05459656, 0.02969611, 0.86277224, 0.02385613, 0.03888451, 0.06496997, 0.03930725, 0.02931837, 0.06021005, 0.03330982,
0.02649659, 0.06600261, 0.02854480, 0.03691669, 0.06584168, 0.02076757, 0.02624355, 0.03679596, 0.03377049, 0.03590172, 0.03694652, 0.03575540,
0.02532416, 0.02818711, 0.04565318, 0.03252856, 0.04121822, 0.03147210, 0.05002047, 0.03809792, 0.02802299, 0.03399243, 0.03466543, 0.02829443,
0.03339476, 0.02129232, 0.03103367, 0.05071605, 0.03590172, 0.04386435, 0.03297124, 0.04323263, 0.03506247, 0.06225121, 0.02862442, 0.02862442,
0.06032925, 0.04400082, 0.03765090, 0.03477973, 0.02024540, 0.03564245, 0.05199116, 0.03699149, 0.03506247, 0.02129232, 0.02389752, 0.04996414,
0.04281258, 0.02587514, 0.03079668, 0.03895791, 0.02639014, 0.07333564, 0.02639014, 0.04074970, 0.04346211, 0.06032925, 0.03506247, 0.04950545,
0.04133673, 0.03835127, 0.02616212, 0.03399243, 0.02962473, 0.04800780, 0.03517160, 0.04105323, 0.03649472, 0.03000509, 0.05367187, 0.03858981,
0.03684529, 0.02941408, 0.04733265, 0.02590745, 0.02389752, 0.02385495, 0.03649472, 0.02508245, 0.02649659, 0.03152265, 0.02906310, 0.04950545,
0.03497815, 0.04374904, 0.03610649, 0.03799523, 0.02912771, 0.03694652, 0.05105353, 0.03000509, 0.02902378, 0.06425520, 0.05660319, 0.03065341,
0.04449069, 0.03638436, 0.02582273, 0.03753463, 0.02756006, 0.07215131, 0.02418869, 0.03431030, 0.04474425, 0.42589279, 0.02879489, 0.02872819,
0.02512494, 0.02450022, 0.03416346, 0.04560013, 1.40417366, 0.04784363, 0.04950545, 0.04685682, 0.03346052, 0.03255004, 0.07296053, 0.04491526,
0.02910482, 0.05448995, 0.01934736, 0.02195528, 0.03506247, 0.03157064, 0.03504810, 0.03754736, 0.03301058, 0.06886929, 0.03994190, 0.05130644,
0.21007323, 0.05630628, 0.02893721, 0.03683226, 0.03825290, 0.02494987, 0.02633410, 0.02721408, 0.03798986, 0.33473991, 0.04236406, 0.02389752,
0.03562747, 0.04662421, 0.02373767, 0.04918125, 0.04478894, 0.02418869, 0.03511514, 0.02871556, 0.05586166, 0.49014922, 0.03406339, 0.84823093,
0.03416346, 0.08729506, 0.03147210, 0.02889640, 0.06181828, 0.04940672, 0.03666858, 0.03019139, 0.03919279, 0.04864613, 0.03720420, 0.04726722,
0.04141298, 0.02862442, 0.29112744, 0.03964319, 0.05657445, 0.03930888, 0.04400082, 0.02722065, 0.03451685, 0.02911419, 0.02831578, 0.04001334,
0.05130644, 0.03134409, 0.03408579, 0.03232126, 0.03624218, 0.04708792, 0.06291741, 0.05663730, 0.03813209, 0.70582932, 0.04149421, 0.03607614,
0.03201109, 0.02055138, 0.03727305, 0.03182562, 0.02987404, 0.04142461, 0.03433624, 0.04264550, 0.02875086, 0.05797661, 0.04248705, 0.04476514))
From the data above, I obtain 22 peaks using the pracma::findpeaks function with the code below:
picos_r <- pracma::findpeaks(-valores$V1, minpeakdistance = 10)
Using the MATLAB function
picos_matlab = findpeaks(-dado_r, 'MinPeakWidth', 10);
I obtain 11 peaks, as the following:
picos_matlab <- c(-0.02547804, -0.02618668, -0.01934736, -0.02182294, -0.0245572200000000, -0.0202454, -0.02385495, -0.01934736, -0.02373767, -0.02862442, -0.02722065)
I used pracma::findpeaks because it already gave an equal result in another part of the function that I am writing. I have already tried to change the code of pracma::findpeaks, but with little success.
The package cardidates contains a heuristic peak-hunting algorithm that can be fine-tuned using the parameters xmax, minpeak and mincut. It was designed for a special problem, but may also be used for other things. Here is an example:
library("cardidates")
p <- peakwindow(valores$V1)
plot(p) # detects 14 peaks
p <- peakwindow(valores$V1, minpeak=0.18)
plot(p) # detects 11 peaks
Details are described in the package vignette and in https://doi.org/10.1007/s00442-007-0783-2
Another option is to run a smoother before peak detection.
I'm not sure what your test case is: -valores$V1, valores$V1, or -dado_r (what is that?).
I think pracma::findpeaks() does quite well if you do:
x <- valores$V1
P <- pracma::findpeaks(x,
minpeakdistance = 10, minpeakheight = sd(x))
plot(x, type = 'l', col = 4)
grid()
points(P[,2], P[, 1], pch=20, col = 2)
It finds 11 peaks that stick out, while four or five others are too close together to be counted. All peaks smaller than the standard deviation (the minpeakheight argument) are ignored.
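For completeness, SciPy's find_peaks has a direct analogue of MATLAB's 'MinPeakWidth': the width argument, measured in samples (by default at half the peak's prominence, so it is not guaranteed to match MATLAB's width estimate exactly). A sketch on synthetic data, not the series from the question:

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic series: one broad bump (FWHM ~ 25 samples) and one narrow spike.
t = np.arange(200)
x = np.exp(-((t - 60) / 15.0) ** 2)  # broad peak at sample 60
x[140] += 1.0                        # one-sample spike at 140

all_peaks, _ = find_peaks(x)             # finds both peaks
wide_peaks, _ = find_peaks(x, width=10)  # keeps only peaks >= 10 samples wide
print(all_peaks, wide_peaks)             # the spike at 140 is dropped
```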

Dart - How to take multiple output values and multiply with a single value?

This is my first project of my own on my self-taught, Dart/Flutter/coding road.
I’m trying to make an Animal Age Calculator. Basically, the user will select an animal from a group of animals and give the age of the animal and the code will output the relative age against other animals.
I’ve got a number of Maps (I think) with key:value pairs. Each Map has a k:v pair (animal and its corresponding age ratio). For example, if the user selects ‘dog’ the code will return the dogMap and the ratio values for bear, cat, chicken, elephant, human and rabbit.
I can get the ratio values but am stuck trying to use those values to calculate the relative age.
Here is the map -
var dogAgeRatio = {'dogTobear': 0.55, 'dogTocat': 0.88, 'dogTochicken': 1.47, 'dogToelephant': 0.31, 'dogTohuman': 0.28, 'dogTorabbit': 2.44};
and here is what I have so far to get the values -
if (animalName == "dog") {
print("these are the relative animal ages: ${dogAgeRatio.values}");
}
Would appreciate a pointer as to how I should go about using those values and applying a calculation to return the relative age of all the other animals.
Thanks in advance.
Cal
There are different ways to do this but a simple way is something like this where we have a Map<String, Map<String, num>> to represent our comparisons:
import 'dart:io';
void main() {
final animalMap = {
'dog': {
'bear': 0.55,
'cat': 0.88,
'chicken': 1.47,
'elephant': 0.31,
'human': 0.28,
'rabbit': 2.44
},
'alien': {
'human': 100
}
};
print('Please enter one of the following animals:');
print('==========================================');
animalMap.keys.forEach(print);
print('==========================================');
final animalInput = stdin.readLineSync();
if (animalInput != null && animalMap.containsKey(animalInput)) {
print('Enter age of animal:');
final age = int.parse(stdin.readLineSync()!);
print('Age compared to different animals:');
print('==========================================');
animalMap[animalInput]!.entries.forEach((element) =>
print('${element.key} => ${element.value * age} years old.'));
print('==========================================');
} else {
print('Wrong animal!');
}
}
So first key is the starting animal and the sub-map is how we get from the starting animal age to the sub-animal.
I have made my solution a little interactive just for fun. :)
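The same lookup-then-multiply pattern, sketched in Python with the ratios from the question (a conceptual analogue of the Dart code, not Dart itself):

```python
# Starting animal -> {other animal: age ratio}, as in the Dart animalMap
animal_map = {
    "dog": {"bear": 0.55, "cat": 0.88, "chicken": 1.47,
            "elephant": 0.31, "human": 0.28, "rabbit": 2.44},
}

def relative_ages(animal, age):
    # Multiply every ratio for the chosen animal by the given age
    return {other: round(ratio * age, 2)
            for other, ratio in animal_map[animal].items()}

print(relative_ages("dog", 10))  # {'bear': 5.5, 'cat': 8.8, ...}
```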
I'm not sure if I understand you correctly, but you could use something like:
dogAgeRatio.values.forEach((element) { print(element * dogage);});
Which basically iterates over every entry of your map and prints the value multiplied by the dog's age.

associative arrays in openscad?

Does openscad have any language primitive for string-keyed associative arrays (a.k.a hash maps, a.k.a dictionaries)? Or is there any convention for how to emulate associative arrays?
So far all I can think of is using vectors and using variables to map indexes into the vector to human readable names. That means there's no nice, readable way to define the vector, you just have to comment it.
Imagine I want to write something akin to the Python data structure:
bobbin_metrics = {
'majacraft': {
'shaft_inner_diameter': 9.0,
'shaft_outer_diameter': 19.5,
'close_wheel_diameter': 60.1,
# ...
},
'majacraft_jumbo': {
'shaft_inner_diameter': 9.0,
'shaft_outer_diameter': 25.0,
'close_wheel_diameter': 100.0,
},
# ...
}
such that I can reference it in model definitions in some recognisably hash-map-like way, like passing bobbin_metrics['majacraft'] to something as metrics and referencing metrics['close_wheel_diameter'].
So far my best effort looks like
# Vector indexes into bobbin-metrics arrays
BM_SHAFT_INNER_DIAMETER = 0
BM_SHAFT_OUTER_DIAMETER = 1
BM_CLOSE_WHEEL_DIAMETER = 2
bobbin_metrics_majacraft = [
9.0, # shaft inner diameter
19.5, # shaft outer diameter
60.1, # close-side wheel diameter
# ....
];
bobbin_metrics_majacraft_jumbo = [
9.0, # shaft inner diameter
25.0, # shaft outer diameter
100.0, # close-side wheel diameter
# ....
];
bobbin_metrics = [
bobbin_metrics_majacraft,
bobbin_metrics_majacraft_jumbo,
# ...
];
# Usage when passed a bobbin metrics vector like
# bobbin_metrics_majacraft as 'metrics' to a function
metrics[BM_SHAFT_INNER_DIAMETER]
I think that'll work. But it's U.G.L.Y.. Not quite "I write applications in bash" ugly, but not far off.
Is there a better way?
I'm prepared to maintain the data set outside openscad and have a generator for an include file if I have to, but I'd rather not.
Also, in honour of April 1 I miss the blink tag and wonder if the scrolling marquee will work? Tried 'em :)
I played around with the OpenSCAD search() function, which is documented in the manual here:
https://en.wikibooks.org/wiki/OpenSCAD_User_Manual/Other_Language_Features#Search
The following pattern allows a form of associative list. It may not be optimal, but it does provide a way to set up a dictionary structure and retrieve a value against a string key:
// associative searching
// dp 2019
// - define the dictionary
dict = [
["shaft_inner_diameter", 9.0],
["shaft_outer_diameter", 19.5],
["close_wheel_diameter", 60.1]
];
// specify the search term
term = "close_wheel_diameter";
// execute the search
find = search(term, dict);
// process results
echo("1", find);
echo ("2",dict[find[0]]);
echo ("3",dict[find[0]][1]);
The above produces:
Compiling design (CSG Tree generation)...
WARNING: search term not found: "l"
...
WARNING: search term not found: "r"
ECHO: "1", [2, 0]
ECHO: "2", ["close_wheel_diameter", 60.1]
ECHO: "3", 60.1
Personally, I would do this sort of thing in Python then generate the OpenSCAD as an intermediate file or maybe use the SolidPython library.
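That generator approach can be quite small. A hedged Python sketch that emits the question's bobbin metrics as OpenSCAD key/value vectors compatible with the search() pattern above (the emitted format is one choice among several):

```python
# Metrics copied from the question's Python-style data structure.
bobbin_metrics = {
    "majacraft": {"shaft_inner_diameter": 9.0,
                  "shaft_outer_diameter": 19.5,
                  "close_wheel_diameter": 60.1},
    "majacraft_jumbo": {"shaft_inner_diameter": 9.0,
                        "shaft_outer_diameter": 25.0,
                        "close_wheel_diameter": 100.0},
}

def to_scad(name, metrics):
    # Render one dict as an OpenSCAD vector of [key, value] pairs
    rows = ",\n".join(f'    ["{k}", {v}]' for k, v in metrics.items())
    return f"{name} = [\n{rows}\n];"

scad = "\n".join(to_scad(n, m) for n, m in bobbin_metrics.items())
print(scad)  # paste or include this in the .scad file
```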
Here is an example of a function that uses search() and does not produce any warnings.
available_specs = [
["mgn7c", 1,2,3,4],
["mgn7h", 2,3,4,5],
];
function selector(item) = available_specs[search([item], available_specs)[0]];
chosen_spec = selector("mgn7c");
echo("Specification was returned from function", chosen_spec);
The above will produce the following output:
ECHO: "Specification was returned from function", ["mgn7c", 1, 2, 3, 4]
Another, very similar approach is using a list comprehension with a condition, just like you would in Python. It does the same thing and looks a bit simpler.
function selector(item) = [
for (spec = available_specs)
if (spec[0] == item)
spec
];

Neo4j: match with multiple relations in timely manner

Consider following nodes that are connected between each other with 2 type of edges: direct and intersect. The query needs to discover all possible paths between 2 nodes that satisfies all following rules:
0..N direct edges
0..1 intersect edge
intersect edge can be between direct edges
These paths are considered valid between nodeA and nodeZ:
(nodeA)-[:direct]->(nodeB)-[:direct]->(nodeC)-[:direct]->(nodeZ)
(nodeA)-[:intersect]->(nodeB)-[:direct]->(nodeC)-[:direct]->(nodeZ)
(nodeA)-[:direct]->(nodeB)-[:intersect]->(nodeC)-[:direct]->(nodeZ)
(nodeA)-[:direct]->(nodeB)-[:direct]->(nodeC)-[:intersect]->(nodeZ)
Basically intersect edge can happen anywhere in the path but only once.
My ideal cypher query in non-existing neo4j version would be this:
MATCH (from)-[:direct*0..N|:intersect*0..1]->(to)
But Neo4j doesn't support multiple constraints for edge types :(.
UPDATE 23.04.16
There are 6,609 nodes (out of 550k total), 5,184 edges of type direct (out of 440k total) and 34,119 of type intersect (out of 37,289 total). Some circular references are expected (which Neo4j avoids, doesn't it?)
The query that looked promising but failed to finish in a matter of seconds:
MATCH p = (from {from: 1})-[:direct|intersect*0..]->(to {to: 99})
WHERE
123 < from.departureTS < 123 + 86400 //next day
AND REDUCE(s = 0, x IN RELATIONSHIPS(p) | CASE TYPE(x) WHEN 'intersect' THEN s + 1 ELSE s END) <= 1
return p;
Here is a query that conforms to the stated requirements:
MATCH p = (from)-[:direct|intersect*0..]->(to)
WHERE REDUCE(s = 0, x IN RELATIONSHIPS(p) |
CASE WHEN TYPE(x) = 'intersect' THEN s + 1 ELSE s END) <= 1
return p;
It returns all paths with 0 or more direct relationships and 0 or 1 intersect relationships.
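The REDUCE clause is just counting intersect relationships along each path and keeping paths where that count is at most 1; in Python terms (with hypothetical lists of relationship types):

```python
def at_most_one_intersect(rel_types):
    # Mirrors REDUCE(s = 0, x IN RELATIONSHIPS(p) |
    #   CASE WHEN TYPE(x) = 'intersect' THEN s + 1 ELSE s END) <= 1
    return sum(1 for t in rel_types if t == "intersect") <= 1

print(at_most_one_intersect(["direct", "intersect", "direct"]))      # True
print(at_most_one_intersect(["intersect", "direct", "intersect"]))   # False
```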
This will do what you want:
// Cybersam's correction:
MATCH p = ((from)-[:direct*0..]->(middle)-[:intersect*0..1]->(middle2)-[:direct*0..]->(to)) return DISTINCT p;
Here's the test scenario I used:
create (a:nodeA {name: "A"})
create (b:nodeB {name: "B"})
create (c:nodeC {name: "C"})
create (z:nodeZ {name: "Z"})
merge (a)-[:direct {name: "D11"}]->(b)-[:direct {name: "D21"}]->(c)-[:direct {name: "D31"}]->(z)
merge (a)-[:intersect {name: "I12"}]->(b)-[:direct {name: "D22"}]->(c)-[:direct {name: "D32"}]->(z)
merge (a)-[:direct {name: "D13"}]->(b)-[:intersect {name: "I23"}]->(c)-[:direct {name: "D33"}]->(z)
merge (a)-[:direct {name: "D14"}]->(b)-[:direct {name: "D24"}]->(c)-[:intersect {name: "I34"}]->(z)
merge (a)-[:intersect {name: "I15"}]->(z)
// Cybersam's correction:
MATCH p = ((from)-[:direct*0..]->(middle)-[:intersect*0..1]->(middle2)-[:direct*0..]->(to)) return DISTINCT p;
I made the mistake of thinking the graph in the browser reflected the data that was returned in "p" - it did not; you have to look at the "rows" part of the report to get all the details.
This query will also return single nodes, which fits the requirements.