I have a graph in Neo4j with the following vertices:
person:ID,name,value:int,:LABEL
1,Alice,1,Person
2,Bob,0,Person
3,Charlie,0,Person
4,David,0,Person
5,Esther,0,Person
6,Fanny,0,Person
7,Gabby,0,Person
8,XXXX,1,Person
and edges:
:START_ID,:END_ID,:TYPE
1,2,call
2,3,text
3,2,text
6,3,text
5,6,text
5,4,call
4,1,call
4,5,text
1,5,call
1,8,call
6,8,call
6,8,text
8,6,text
7,1,text
imported into Neo4j like this:
DATA_DIR_SAMPLE=/data_network/
$NEO4J_HOME/bin/neo4j-admin import --mode=csv \
--database=graph.db \
--nodes:Person ${DATA_DIR_SAMPLE}/vertices.csv \
--relationships ${DATA_DIR_SAMPLE}/edges.csv
Now when querying the graph like:
MATCH (source:Person)-[*1]-(destination:Person)
RETURN source.name, source.value, avg(destination.value), 'undir_1_any' as type
UNION ALL
MATCH (source:Person)-[*2]-(destination:Person)
RETURN source.name, source.value, avg(destination.value), 'undir_2_any' as type
one can see that the graph is traversed multiple times. Additionally, since I want to obtain a table like:
Vertex | value | type_undir_1_any | type_undir_2_any
Alice | 1 | 0.2 | 0
an additional aggregation step (a pivot/reshape) would be required.
In the future, I would like to add the following patterns:
undirected | directed
all relations | a specific relation type
each up to 3 levels into the graph, and all permutations of these.
Is there a better way to combine the queries?
You need to aggregate along the path length, computing the average value with a conditional sum:
MATCH p = (source:Person)-[*1..2]-(destination:Person)
WITH
length(p) as L, source, destination
RETURN
source.name as Vertex,
source.value as value,
1.0 *
sum(CASE WHEN L = 1 THEN destination.value ELSE 0 END) /
sum(CASE WHEN L = 1 THEN 1 ELSE 0 END) as type_undir_1_any,
1.0 *
sum(CASE WHEN L = 2 THEN destination.value ELSE 0 END) /
sum(CASE WHEN L = 2 THEN 1 ELSE 0 END) as type_undir_2_any
Or, a more elegant version with a function from the APOC library that calculates the average of a collection:
MATCH p = (source:Person)-[*1..2]-(destination:Person)
RETURN
source.name as Vertex,
source.value as value,
apoc.coll.avg(COLLECT(
CASE WHEN length(p) = 1 THEN destination.value ELSE NULL END
)) as type_undir_1_any,
apoc.coll.avg(COLLECT(
CASE WHEN length(p) = 2 THEN destination.value ELSE NULL END
)) as type_undir_2_any
I am applying the series_decompose_anomalies algorithm to time data coming from multiple meters. Currently, I am using the ADX dashboard feature to feed my meter identifier as a parameter into the algorithm and return my anomalies and scores as a table.
let dt = 3hr;
Table
| where meter_ID == dashboardParameter
| make-series num=avg(value) on timestamp from _startTime to _endTime step dt
| extend (anomalies,score,baseline) = series_decompose_anomalies( num, 3,-1, 'linefit')
| mv-expand timestamp, num, baseline, anomalies, score
| where anomalies ==1
| project dashboardParameter, todatetime(timestamp), toreal(num), toint(anomalies), toreal(score)
I would like to bulk process all my meters in one go and return a table with all anomalies found across them. Is it possible to feed an array as an iterable in KQL or something similar to allow my parameter to change multiple times in a single run?
Simply add by meter_ID to make-series (and remove | where meter_ID == dashboardParameter):
| make-series num=avg(value) on timestamp from _startTime to _endTime step dt by meter_ID
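Applied to the query from the question, a sketch of the full pipeline would look roughly like this (the time-range values here are hypothetical placeholders; meter_ID replaces the dashboard parameter in the projection):
// Sketch only: Table, value, timestamp and meter_ID are the names used in the question.
let dt = 3h;                   // 3-hour bins, as in the question
let _startTime = ago(7d);      // hypothetical time range; the question takes these from the dashboard
let _endTime = now();
Table
| make-series num = avg(value) on timestamp from _startTime to _endTime step dt by meter_ID
| extend (anomalies, score, baseline) = series_decompose_anomalies(num, 3, -1, 'linefit')
| mv-expand timestamp, num, baseline, anomalies, score
| where anomalies == 1
| project meter_ID, todatetime(timestamp), toreal(num), toint(anomalies), toreal(score)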
P.S.
An anomaly can be positive (num > baseline => flag = 1) or negative (num < baseline => flag = -1).
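If, like the original query, you only want the positive anomalies, filter on the flag value instead of flag != 0:
| where flag == 1   // keep only points above the baseline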
Demo
let _step = 1h;
let _endTime = toscalar(TransformedServerMetrics | summarize max(Timestamp));
let _startTime = _endTime - 12h;
TransformedServerMetrics
| make-series num = avg(Value) on Timestamp from _startTime to _endTime step _step by SQLMetrics
| extend (flag, score, baseline) = series_decompose_anomalies(num , 3,-1, 'linefit')
| mv-expand Timestamp to typeof(datetime), num to typeof(real), flag to typeof(int), score to typeof(real), baseline to typeof(real)
| where flag != 0
SQLMetrics | num | Timestamp | flag | score | baseline
write_bytes | 169559910.91717172 | 2022-06-14T15:00:30.2395884Z | -1 | -3.4824039875238131 | 170205132.25708669
cpu_time_ms | 17.369556143036036 | 2022-06-14T17:00:30.2395884Z | 1 | 7.8874529842826 | 11.04372634506527
percent_complete | 0.04595588235294118 | 2022-06-14T22:00:30.2395884Z | 1 | 25.019464868749985 | 0.004552738927738928
blocking_session_id | -5 | 2022-06-14T22:00:30.2395884Z | -1 | -25.019464868749971 | -0.49533799533799527
pending_disk_io_count | 0.0019675925925925924 | 2022-06-14T23:00:30.2395884Z | 1 | 6.4686836384225685 | 0.00043773741690408352
Fiddle
Given a dynamic field, say milestones, with a value like {"ta": 1655859586546, "tb": 1655859586646}:
How do I print a table with columns like "ta", "tb", etc., and a single row containing unixtime_milliseconds_todatetime(tolong(taValue)), unixtime_milliseconds_todatetime(tolong(tbValue)), and so on?
I figured that I'll need to write a function that I can call, so I created this:
let f = view(a:string ){
unixtime_milliseconds_todatetime(tolong(a))
};
I can use this function with a normal column, as in project f(columnName).
However, in this case it's a dynamic field, and the number of items in the list is large, so I do not want to enter the fields manually. This is what I have so far:
log_table
| take 1
| evaluate bag_unpack(milestones, "m_") // This gives me fields as columns
// | project-keep m_* // This would work if I just wanted the raw values; however, I want view(columnValue)
| project-keep f(m_*) // This of course doesn't work, but explains the idea.
Based on the mv-apply operator:
// Generate data sample. Not part of the solution.
let log_table = materialize(range record_id from 1 to 10 step 1 | mv-apply range(1, 1 + rand(5), 1) on (summarize milestones = make_bag(pack_dictionary(strcat("t", make_string(to_utf8("a")[0] + toint(rand(26)))), 1600000000000 + rand(60000000000)))));
// Solution Starts here.
log_table
| mv-apply kv = milestones on
(
extend k = tostring(bag_keys(kv)[0])
| extend v = unixtime_milliseconds_todatetime(tolong(kv[k]))
| summarize milestones = make_bag(pack_dictionary(k, v))
)
| evaluate bag_unpack(milestones)
record_id | ta | tb | tc | td | te | tf | tg | th | ti | tk | tl | tm | to | tp | tr | tt | tu | tw | tx | tz
(sparse result: one row per record_id and one column per generated milestone key; each populated cell holds the converted datetime, e.g. 2021-07-06T20:24:47.767Z for record_id 1)
Fiddle
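As a minimal sketch against the concrete bag from the question (a hypothetical single-row datatable stands in for log_table), the same pattern produces ta and tb as datetime columns:
datatable(milestones: dynamic)
[
    dynamic({"ta": 1655859586546, "tb": 1655859586646})
]
| mv-apply kv = milestones on
(
    // each expanded kv is a single-key bag, e.g. {"ta": 1655859586546}
    extend k = tostring(bag_keys(kv)[0])
    | extend v = unixtime_milliseconds_todatetime(tolong(kv[k]))
    | summarize milestones = make_bag(pack_dictionary(k, v))
)
| evaluate bag_unpack(milestones)
The inner summarize rebuilds the bag with converted values, and bag_unpack then turns each key back into its own column.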
I have data in pivot mode, produced by this pivot query:
database('MyDatabase').Test
| summarize AdjValue = sum(AdjValue) by Fylke, ClassSE
| extend p = pack(ClassSE, AdjValue)
| summarize bag=make_bag(p) by Fylke
| evaluate bag_unpack(bag)
I need to divide each value by the row sum (value / rowSum * 100 = some percentage).
I tried to use a join with a temporary pivot table, but was not successful. Please help.
The expected result should also be in pivot mode.
// This is not part of the solution, only generation of a sample dataset
let Test = materialize(range i from 1 to 100 step 1 | extend AdjValue = rand()*100, Fylke = strcat('Fylke_',tostring(toint(rand()*10))), ClassSE = strcat('ClassSE_',tostring(toint(rand()*5))));
// The solution starts here
let sum_by_Fylke_ClassSE = materialize(Test | summarize AdjValue = sum(AdjValue) by Fylke,ClassSE);
let sum_by_Fylke = sum_by_Fylke_ClassSE | summarize Fylke_AdjValue = sum(AdjValue) by Fylke;
sum_by_Fylke
| join sum_by_Fylke_ClassSE on Fylke
| evaluate pivot(ClassSE, sum(AdjValue/Fylke_AdjValue*100), Fylke)
| order by Fylke asc
Fylke | ClassSE_0 | ClassSE_1 | ClassSE_2 | ClassSE_3 | ClassSE_4
Fylke_0 | 49.395106915030119 | 46.755319585100125 | 0 | 0 | 3.8495734998697557
Fylke_1 | 62.292139898464924 | 5.2693450408156046 | 7.6552025348509991 | 6.2015378618740726 | 18.581774663994409
Fylke_2 | 50.145053387669094 | 1.2587789001232987 | 41.166356893005975 | 7.4298108192016352 | 0
Fylke_3 | 10.564746410722819 | 35.571795098974818 | 9.817452610031193 | 6.7291651195813156 | 37.316840760689857
Fylke_4 | 0 | 11.770542330107656 | 25.250380537085615 | 12.46115402880039 | 50.517923104006343
Fylke_5 | 11.098011115225455 | 24.401878297613749 | 37.849873348947106 | 16.221012456995606 | 10.429224781218091
Fylke_6 | 31.340691613236839 | 53.496440433838153 | 0 | 15.16286795292501 | 0
Fylke_7 | 31.764625835537881 | 34.741929615153026 | 7.9119328065215306 | 6.2721731408556778 | 19.309338601931888
Fylke_8 | 25.3982395190392 | 32.868425203681305 | 28.605169017331683 | 3.0705116629208007 | 10.057654597027003
Fylke_9 | 14.778417432435949 | 29.9861720571239 | 19.118237524156271 | 15.091700930745427 | 21.025472055538462
Fiddle
I am trying to display a result using a Kusto (KQL) query in a pie chart. The goal is to display the pie chart half-and-half in case of any failure, and in a single full color when everything passes.
Basically, the log from a site contains rows marked pass or fail. When all rows are pass, the pie chart should display 100% in one color. When even a single row fails, it should display 50% in one color and 50% in the other. The query below works when: 1) all rows pass (full color), and 2) some rows pass and some fail, or even one fails (half-and-half). 3) BUT WHEN ALL ROWS FAIL, it displays one color and does not split the pie chart half-and-half.
The query I used:
results
| where Name contains "jobqueues"
| where timestamp > ago(1h)
| extend PASS = (ErLvl > 2)
| extend FAIL = (ErLvl < 2)
| project PASS, FAIL
| extend status = iff(PASS==true,"PASS","FAIL")
| summarize count() by status
| extend display = iff(count_>0,1,0)
| summarize percentile(display, 50) by status
| render piechart
Please suggest what can be done to solve this problem. Thanks in advance.
Let's summarize your question:
There are only two outcomes of your query:
A piechart showing 50% vs 50%
A piechart showing 100%
From your description we learn that:
When all rows are PASS, we plot piechart 2.
When any row has a FAIL, we plot piechart 1.
Let's see how we can achieve this after these lines from your code:
| extend status = iff(PASS==true,"PASS","FAIL")
| summarize count() by status
We should have a table looking like so:
status | count_
PASS | x
FAIL | y
Looks like we need to perform some logic on this. You were originally plotting based on the operation result. My idea was to generate a table of pass = 1 and fail = 1 for the 50% vs 50% case, and another table of pass = 1 and fail = 0 for the 100% case.
So, following that logic, we need to perform the following mapping:
count_ (per status)            count2 (per status)
fail > 0, pass > 0   maps to   fail = 1, pass = 1
fail > 0, pass = 0   maps to   fail = 1, pass = 1
fail = 0, pass > 0   maps to   fail = 0, pass = 1
Logical representation (given count_ >= 0):
if fail count_ > 0 then fail count2 = 1, else fail count2 = 0
pass count2 is always equal to 1
We only need to apply this to the row where status == "FAIL", but summarize doesn't guarantee a row for a status that has no observations.
Guarantee summarize results:
| extend fail_count = iif(status == "FAIL", count_, 0)
| extend pass_count = iif(status == "PASS", count_, 0)
| project fail_count,pass_count
| summarize sum(fail_count), sum(pass_count)
Apply logic
| extend FAIL = iff(sum_fail_count > 0, 1, 0)
| extend PASS = 1
| project FAIL, PASS
Now our result looks like:
PASS | FAIL
1 | 1 or 0
In order to plot this as a pie chart, we just need to transpose it so that the PASS and FAIL columns become rows of a "Status" column.
We can use a simple pack and mv-expand for this:
//transpose for rendering
| extend tmp = pack("FAIL",FAIL,"PASS",PASS)
| mv-expand kind=array tmp
| project Status=tostring(tmp[0]), Count=toint(tmp[1])
| render piechart
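As a standalone sketch of this transpose trick, a print statement with hard-coded values (a hypothetical stand-in for the single row produced by the pipeline above) shows the pack / mv-expand step in isolation:
print FAIL = 1, PASS = 1
| extend tmp = pack("FAIL", FAIL, "PASS", PASS)
| mv-expand kind=array tmp              // each bag property becomes a [key, value] array
| project Status = tostring(tmp[0]), Count = toint(tmp[1])
| render piechart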
And that's it!
Final query:
results
| where Name contains "jobqueues"
| where timestamp > ago(1h)
| extend PASS = (ErLvl > 2)
| extend FAIL = (ErLvl < 2)
| project PASS, FAIL
| extend status = iff(PASS==true,"PASS","FAIL")
| summarize count() by status
//ensure results
| extend fail_count = iif(status == "FAIL", count_, 0)
| extend pass_count = iif(status == "PASS", count_, 0)
| project fail_count,pass_count
| summarize sum(fail_count), sum(pass_count)
//apply logic
| extend FAIL = iff(sum_fail_count > 0, 1, 0)
| extend PASS = 1
| project FAIL, PASS
//transpose for rendering
| extend Temp = pack("FAIL",FAIL,"PASS",PASS)
| mv-expand kind=array Temp
| project Status=tostring(Temp[0]), Count=toint(Temp[1])
| render piechart
I'm using a hash table to store some values. Here are the details:
There will be roughly 1M items to store (not known beforehand, so no perfect hash is possible).
The table has 10M slots.
The hash function is MurmurHash3.
I did some tests, and when storing 1M values I got 350,000 collisions and 30 elements in the most-colliding hash table slot.
Are these results good?
Would it make sense to implement binary search for the lists that get created at colliding slots?
What's your advice to improve performance?
EDIT: Here is my code:
var
  HashList: array [0..10000000 - 1] of Integer;
  I: Integer;
  Y: Cardinal; // assuming MurmurHash3 returns an unsigned 32-bit hash
  TotalCollisionsCount, MostCollidingSlotItemCount: Integer;
begin
  // empty the table and reset the counters
  for I := 0 to High(HashList) do
    HashList[I] := 0;
  TotalCollisionsCount := 0;
  MostCollidingSlotItemCount := 0;
  // hash the keys '1'..'1000000' and count how full each slot gets
  for I := 1 to 1000000 do
  begin
    Y := MurmurHash3(UIntToStr(I));
    Y := Y mod Length(HashList);
    Inc(HashList[Y]);
    if HashList[Y] > 1 then
      Inc(TotalCollisionsCount);
    if HashList[Y] > MostCollidingSlotItemCount then
      MostCollidingSlotItemCount := HashList[Y];
  end;
  Writeln('Total: ' + IntToStr(TotalCollisionsCount) + ' Max: ' + IntToStr(MostCollidingSlotItemCount));
end;
Here is the result I get:
Total: 48169 Max: 5
Am I missing something?
This is what you get when you put 1M items randomly into 10M cells:
calendar_size=10000000 nperson = 1000000
E/cell| Ncell | frac | Nelem | frac |h/cell| hops | Cumhops
----+---------+--------+----------+--------+------+--------+--------
0: 9048262 (0.904826) 0 (0.000000) 0 0 0
1: 905064 (0.090506) 905064 (0.905064) 1 905064 905064
2: 45136 (0.004514) 90272 (0.090272) 3 135408 1040472
3: 1488 (0.000149) 4464 (0.004464) 6 8928 1049400
4: 50 (0.000005) 200 (0.000200) 10 500 1049900
----+---------+--------+----------+--------+------+--------+--------
5: 10000000 1000000 1.049900 1049900
The left column is the number of items in a cell; the second is the number of cells having that item count.
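Assuming the hash spreads the keys uniformly over the table, the occupancy of a cell is approximately Poisson-distributed with rate $\lambda = n/m = 1{,}000{,}000 / 10{,}000{,}000 = 0.1$, so the expected number of cells holding exactly $k$ items is

$$ m \, e^{-\lambda} \frac{\lambda^{k}}{k!} \approx 9{,}048{,}374 \;(k=0),\quad 904{,}837 \;(k=1),\quad 45{,}242 \;(k=2),\quad 1{,}508 \;(k=3),\quad 38 \;(k=4), $$

which closely matches the simulated counts above. The expected number of collisions (keys that land in an already occupied cell) is $n - m\,(1 - e^{-\lambda}) \approx 48{,}374$, consistent with the Total: 48169 printed by the test program.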
Regarding the binary search: for chains this short (maximum chain length = 4, and most chains have length 1), linear search outperforms binary search. The crossover point is probably somewhere between 10 and 100 elements.