Gremlin.compile() cannot execute query that works in gremlin console

When I execute the following in a Gremlin console I get the expected result:
g.V('name', 'a').next().query().has('b', GREATER_THAN_EQUAL, 100).orderBy('timestamp', Order.DESC).edges()
Now I am trying to execute the same from Java (following this guide), but I cannot get it to work.
I've tried this:
Pipe pipe = Gremlin.compile("_().query().has('b', GREATER_THAN_EQUAL, 100).orderBy('timestamp', Order.DESC).edges()");
pipe.setStarts(new SingleIterator<Vertex>(graph.getVertices("name", 'a').iterator().next()));
for(Object name : pipe) {
}
javax.script.ScriptException: javax.script.ScriptException:
groovy.lang.MissingMethodException: No signature of method:
com.tinkerpop.gremlin.groovy.GremlinGroovyPipeline.query() is
applicable for argument types: () values: [] Possible solutions:
every(), every(groovy.lang.Closure), grep(),
tree([Lcom.tinkerpop.pipes.PipeFunction;),
tree([Lgroovy.lang.Closure;),
tree(com.tinkerpop.pipes.util.structures.Tree)
And this
Pipe pipe = Gremlin.compile("_().next().query().has('b', GREATER_THAN_EQUAL, 100).orderBy('timestamp', Order.DESC).edges()");
pipe.setStarts(new SingleIterator<Vertex>(graph.getVertices("name", 'a').iterator().next()));
for(Object name : pipe) {
}
javax.script.ScriptException: javax.script.ScriptException:
groovy.lang.MissingMethodException: No signature of method:
com.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine.query()
is applicable for argument types: () values: [] Possible solutions:
every(), every(groovy.lang.Closure), grep(), grep(java.lang.Object),
any(), dump()
Any ideas?

This line looks suspicious to me:
Pipe pipe = Gremlin.compile("_().next().query().has('b', GREATER_THAN_EQUAL, 100).orderBy('timestamp', Order.DESC).edges()");
You are trying to compile a Pipe out of something that doesn't evaluate to a Pipeline. In other words, you start with the identity pipe (_()), but then you next() it out and drop down into a vertex query that ends in edges(). edges() returns an Iterator, not a Pipeline. If you look at the example for Gremlin.compile, the evaluated code of the Gremlin string returns a pipeline.
Pipe pipe = Gremlin.compile("_().out('knows').name");
My guess is that if you instead changed your code to something like (untested):
Pipe pipe = Gremlin.compile("_().outE.has('b', GREATER_THAN_EQUAL, 100).order{it.b.timestamp <=> it.a.timestamp}");
pipe.setStarts(new SingleIterator<Vertex>(graph.getVertices("name", 'a').iterator().next()));
for(Object name : pipe) {
}
you might have some success. I suppose that if that worked, you would then want to figure out how to re-optimize your query, as I guess that some backends could optimize the orderBy of the vertex query, whereas the order Gremlin step is just an in-memory sort.

Okay, so I have decided to use GremlinGroovyScriptEngine instead of Gremlin.compile(). This approach is also described in the same guide, and I actually prefer it because it gives me parameterisation and I don't need to modify the original query (replacing g. with _()).
ScriptEngine engine = new GremlinGroovyScriptEngine();
Bindings bindings = engine.createBindings();
bindings.put("g", graph);
bindings.put("value", 100);
bindings.put("DESC", Order.DESC);
engine.eval("g.V('name', 'a').next().query().has('b', Compare.GREATER_THAN_EQUAL, value).orderBy('timestamp', DESC).edges()", bindings);
I would still be interested in knowing why Gremlin.compile did not work, hopefully the above will be helpful to someone else.

Related

How do I get information about function calls from a Lua script?

I have a script written in Lua 5.1 that imports a third-party module and calls some functions from it. I would like to get a list of the function calls from the module, with their arguments (when they are known before execution).
So, I need to write another script which takes the source code of my first script, parses it, and extracts information from its code.
Consider the minimal example.
I have the following module:
local mod = {}
function mod.foo(a, ...)
print(a, ...)
end
return mod
And the following driver code:
local M = require "mod"
M.foo('a', 1)
M.foo('b')
What is the best way to retrieve the "use" occurrences of the M.foo function?
Ideally, I would like to get the name of the function being called and the values of its arguments. From the example code above, it would be enough to get a mapping like this: {'foo': [('a', 1), ('b')]}.
I'm not sure if Lua has functions for reflection to retrieve this information. So probably I'll need to use one of the existing parsers for Lua to get the complete AST and find the function calls I'm interested in.
Any other suggestions?
If you cannot modify the files, you can read the files into strings, then parse the mod file to find all the functions in it, and use that information to parse the target file for all uses of the mod library:
functions = {}
for func in modFile:gmatch("function mod%.(%w+)") do
    functions[func] = {}
end
for func, call in targetFile:gmatch("M%.(%w+)%(([^%)]+)%)") do
    args = {}
    for arg in string.gmatch(call, "([^,]+)") do
        table.insert(args, arg)
    end
    table.insert(functions[func], args)
end
The resulting table can then be serialized:
['foo'] = {{"'a'", " 1"}, {"'b'"}}
Three possible gotchas:
M is not a very unique name and could possibly match unintended calls to another library.
This example does not handle a function call made inside the argument list, e.g. myfunc(getStuff(), true).
The resulting table does not know the types of the args, so they are all saved as string representations.
If modifying the target file is an option, you can create a wrapper around your required module:
function log(mod)
    local calls = {}
    local wrapper = {
        __index = function(_, k)
            if mod[k] then
                return function(...)
                    calls[k] = calls[k] or {}
                    table.insert(calls[k], {...})
                    return mod[k](...)
                end
            end
        end,
    }
    return setmetatable({}, wrapper), calls
end
Then you use this function like so:
local M, calls = log(require("mod"))
M.foo('a', 1)
M.foo('b')
If your module is not just functions, you would need to handle that in the wrapper; this wrapper assumes all indexes are functions.
After all your calls, you can serialize the calls table to get the history of all the calls made. For the example code, the table looks like:
{
['foo'] = {{'a', 1}, {'b'}}
}

What's the correct way to pass a cursor to a user defined function?

I'm trying to pass a cursor to a function like this:
.create-or-alter function GetLatestValues(cursor:string) {
Logs | where cursor_after(cursor)
}
But I get an "Error cursor_after(): argument #1 is invalid" response. Is there something I'm missing? Is string the wrong data type to use?
This approach works properly for lambda functions:
let GetLatestValues = (cursor:string) {
Logs | where cursor_after(cursor)
};
GetLatestValues('') | take 1
By default, defining a stored function in Kusto validates it, and since you're using a cursor, the system attempts that validation with an empty parameter, leading to failure.
You can override this behavior using the skipvalidation=true parameter (see the docs at
https://learn.microsoft.com/en-us/azure/kusto/management/functions#create-function):
.create-or-alter function with (skipvalidation = "true")
GetLatestValues(cursor:string)
{
Logs | where cursor_after(cursor)
}

Java 8 Functional Programming - Passing function along with its argument

I have a question on Java 8 Functional Programming. I am trying to achieve something using functional programming, and need some guidance on how to do it.
My requirement is to wrap every method execution inside timer function which times the method execution. Here's the example of timer function and 2 functions I need to time.
timerMethod(String timerName, Function func){
timer.start(timerName)
func.apply()
timer.stop()
}
functionA(String arg1, String arg2)
functionB(int arg1, intArg2, String ...arg3)
I am trying to pass functionA and functionB to timerMethod, but functionA and functionB expect different numbers and types of arguments.
Any ideas how I can achieve this?
Thanks!
You should separate this into two concerns (separation of concerns) to keep your code easy to use and maintain: one is timing, the other is invoking. For example:
// v--- invoking occurs in request-time
R1 result1 = timerMethod("functionA", () -> functionA("foo", "bar"));
R2 result2 = timerMethod("functionB", () -> functionB(1, 2, "foo", "bar"));
// the timerMethod only calculates the timing cost
<T> T timerMethod(String timerName, Supplier<T> func) {
    timer.start(timerName);
    try {
        return func.get();
    } finally {
        timer.stop();
    }
}
If you want to return a functional interface rather than the result of that method, you can do it as below:
Supplier<R1> timingFunctionA = timerMethod("A", () -> functionA("foo", "bar"));
Supplier<R2> timingFunctionB = timerMethod("B", () -> functionB(1, 2, "foo", "bar"));
<T> Supplier<T> timerMethod(String timerName, Supplier<T> func) {
    // v--- calculates the timing cost when the wrapper function is invoked
    return () -> {
        timer.start(timerName);
        try {
            return func.get();
        } finally {
            timer.stop();
        }
    };
}
Notes
If the return type of all of your functions is void, you can replace Supplier with Runnable, make timerMethod's return type void, and remove the return keyword from timerMethod.
If some of your functions throw a checked exception, you can replace Supplier with Callable and invoke Callable#call instead. (A sketch of both variants follows.)
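For instance, the two variants might look roughly like this (an illustrative, untested sketch; Callable is java.util.concurrent.Callable, and timer is the same object assumed in the snippets above):
// Variant for void functions: Runnable instead of Supplier, nothing to return.
void timerMethod(String timerName, Runnable func) {
    timer.start(timerName);
    try {
        func.run();
    } finally {
        timer.stop();
    }
}
// Variant for functions that throw a checked exception: Callable instead of Supplier.
<T> T timerMethod(String timerName, Callable<T> func) throws Exception {
    timer.start(timerName);
    try {
        return func.call();
    } finally {
        timer.stop();
    }
}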
Don't hold onto the arguments and then pass them at the last moment. Pass them immediately, but delay calling the function by wrapping it with another function:
Supplier<?> f1 = () -> functionA(arg1, arg2);
Supplier<?> f2 = () -> functionB(arg1, arg2, arg3);
Here, I'm wrapping each function call in a lambda (() ->...) that takes 0 arguments. Then, just call them later with no arguments:
f1.get()
f2.get()
This forms a closure over the arguments that you supplied in the lambda, which allows you to use the variables later, even though normally they would have been GC'd for going out of scope.
Note, I have a ? as the type of the Supplier since I don't know what type your functions return. Change the ? to the return type of each function.
Introduction
The other answers show how to use a closure to capture the arguments of your function, no matter their number. This is a nice approach and it's very useful if you know the arguments in advance, so that they can be captured.
Here I'd like to show two other approaches that don't require you to know the arguments in advance...
If you think about it in an abstract way, there are no such things as functions with multiple arguments. Functions either receive one set of values (aka a tuple), or they receive one single argument and return another function that receives another single argument, which in turn returns another one-argument function that returns... etc., with the last function of the sequence returning an actual result (aka currying).
Methods in Java might have multiple arguments, though. So the challenge is to build functions that always receive one single argument (either by means of tuples or currying), but that actually invoke methods that receive multiple arguments.
Approach #1: Tuples
So the first approach is to use a Tuple helper class and have your function receive one tuple, either a Tuple2 or a Tuple3.
The functionA of your example might then receive one single Tuple2<String, String> as an argument:
Function<Tuple2<String, String>, SomeReturnType> functionA = tuple ->
functionA(tuple.getFirst(), tuple.getSecond());
And you could invoke it as follows:
SomeReturnType resultA = functionA.apply(Tuple2.of("a", "b"));
Now, in order to decorate the functionA with your timerMethod method, you'd need to do a few modifications:
static <T, R> Function<T, R> timerMethod(
        String timerName,
        Function<? super T, ? extends R> func) {
    return t -> {
        timer.start(timerName);
        R result = func.apply(t);
        timer.stop();
        return result;
    };
}
Please note that you should use a try/finally block to make your code more robust, as shown in holi-java's answer.
Here's how you might use your timerMethod method for functionA:
Function<Tuple2<String, String>, SomeReturnType> timedFunctionA = timerMethod(
    "timerA",
    tuple -> functionA(tuple.getFirst(), tuple.getSecond()));
And you can invoke timedFunctionA as any other function, passing it the arguments now, at invocation time:
SomeReturnType resultA = timedFunctionA.apply(Tuple2.of("a", "b"));
You can take a similar approach with the functionB of your example, except that you'd need to use a Tuple3<Integer, Integer, String[]> for the argument (taking care of the varargs arguments).
The downside of this approach is that you need to create many Tuple classes, i.e. Tuple2, Tuple3, Tuple4, etc, because Java lacks built-in support for tuples.
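Since the JDK doesn't ship such classes, a minimal Tuple2 might look roughly like this (an illustrative sketch matching the of()/getFirst()/getSecond() calls used above, not part of the original answer):
// Minimal immutable 2-tuple used by the examples above.
public final class Tuple2<A, B> {
    private final A first;
    private final B second;
    private Tuple2(A first, B second) {
        this.first = first;
        this.second = second;
    }
    public static <A, B> Tuple2<A, B> of(A first, B second) {
        return new Tuple2<>(first, second);
    }
    public A getFirst() { return first; }
    public B getSecond() { return second; }
}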
Approach #2: Currying
The other approach is to use a technique called currying, i.e. functions that accept one single argument and return another function that accepts another single argument, etc, with the last function of the sequence returning the actual result.
Here's how to create a currified function for your 2-argument method functionA:
Function<String, Function<String, SomeReturnType>> currifiedFunctionA =
arg1 -> arg2 -> functionA(arg1, arg2);
Invoke it as follows:
SomeReturnType result = currifiedFunctionA.apply("a").apply("b");
If you want to decorate currifiedFunctionA with the timerMethod method defined above, you can do as follows:
Function<String, Function<String, SomeReturnType>> timedCurrifiedFunctionA =
arg1 -> timerMethod("timerCurryA", arg2 -> functionA(arg1, arg2));
Then, invoke timedCurrifiedFunctionA exactly as you'd do with any currified function:
SomeReturnType result = timedCurrifiedFunctionA.apply("a").apply("b");
Please note that you only need to decorate the last function of the sequence, i.e. the one that makes the actual call to the method, which is what we want to measure.
For the method functionB of your example, you can take a similar approach, except that the type of the currified function would now be:
Function<Integer, Function<Integer, Function<String[], SomeResultType>>>
which is quite cumbersome, to say the least. So this is the downside of currified functions in Java: the syntax to express their type. On the other hand, currified functions are very handy to work with and allow you to apply several functional programming techniques without needing to write helper classes.
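As a small illustration (not part of the original answer; Function and BiFunction are from java.util.function), a generic helper can build the curried form of any two-argument function, so you don't have to write the nested lambdas by hand:
// Turns a two-argument function into its curried equivalent.
static <A, B, R> Function<A, Function<B, R>> curry(BiFunction<A, B, R> f) {
    return a -> b -> f.apply(a, b);
}
// Usage, assuming functionA(String, String) as in the question:
// Function<String, Function<String, SomeReturnType>> currifiedFunctionA =
//         curry((arg1, arg2) -> functionA(arg1, arg2));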

Impossibility to iterate over a Map using Groovy within Jenkins Pipeline

We are trying to iterate over a Map, but without any success. We reduced our issue to this minimal example:
def map = [
'monday': 'mon',
'tuesday': 'tue',
]
If we try to iterate with:
map.each{ k, v -> println "${k}:${v}" }
Only the first entry is output: monday:mon
The alternatives we know of are not even able to enter the loop:
for (e in map)
{
println "key = ${e.key}, value = ${e.value}"
}
or
for (Map.Entry<String, String> e: map.entrySet())
{
println "key = ${e.key}, value = ${e.value}"
}
Both are failing, only showing the exception java.io.NotSerializableException: java.util.LinkedHashMap$Entry (which could be related to an exception occurring while raising the 'real' exception, preventing us from knowing what happened).
We are using the latest stable Jenkins (2.19.1) with all plugins up to date as of today (2016/10/20).
Is there a solution to iterate over the elements of a Map within a Jenkins pipeline Groovy script?
It's been some time since I played with this, but the best way to iterate through maps (and other containers) was with "classical" for loops or "for in". See Bug: Mishandling of binary methods accepting Closure.
To your specific problem: most (all?) pipeline DSL commands will add a sequence point; by that I mean it's possible to save the state of the pipeline and resume it at a later time. Think of waiting for user input, for example; you want to keep this state even through a restart.
The result is that every live instance has to be serialized - but the standard Map iterator is unfortunately not serializable. Original Thread
The best solution I can come up with is defining a function to convert a Map into a list of serializable map entries. The function does not use any pipeline steps, so nothing has to be serializable within it.
@NonCPS
def mapToList(depmap) {
    def dlist = []
    for (def entry2 in depmap) {
        dlist.add(new java.util.AbstractMap.SimpleImmutableEntry(entry2.key, entry2.value))
    }
    dlist
}
This obviously has to be called for each map you want to iterate, but the upside is that the body of the loop stays the same.
for (def e in mapToList(map))
{
println "key = ${e.key}, value = ${e.value}"
}
You will have to approve the SimpleImmutableEntry constructor the first time, or quite possibly you could work around that by placing the mapToList function in the workflow library.
Or, much simpler:
for (def key in map.keySet()) {
println "key = ${key}, value = ${map[key]}"
}

idl: pass keyword dynamically to isa function to test structure read by read_csv

I am using IDL 8.4. I want to use the isa() function to determine the type of input read by read_csv(). I want to use /number, /integer, /float and /string because I want to make sure some fields are float, others integer, and others I don't care about. I can do it like this, but it is not very readable to the human eye.
str = read_csv(filename, header=inheader)
; TODO check header
if not isa(str.(0), /integer) then stop
if not isa(str.(1), /number) then stop
if not isa(str.(2), /float) then stop
I am hoping I can do something like
expected_header = ['id', 'x', 'val']
expected_type = ['/integer', '/number', '/float']
str = read_csv(filename, header=inheader)
if not array_equal(strlowcase(inheader), expected_header) then stop
for i=0l,n_elements(expected_type) do
if not isa(str.(i), expected_type[i]) then stop
endfor
The above doesn't work, as '/integer' is taken literally, and I guess isa() is looking for a named structure. How can you do something similar?
Ideally I want to pick expected type based on header read from file, so that script still works as long as header specifies expected field.
EDIT:
My tentative solution is to write a wrapper for ISA(). Not very pretty, but it does what I wanted... if there is a cleaner solution, please let me know.
Also, read_csv is defined to return only one of long, long64, double and string, so I could write a function that tests within this limitation, but I just wanted to make it work in general so that I can reuse it for other similar cases.
function isa_generic,var,typ
; calls isa() http://www.exelisvis.com/docs/ISA.html with keyword
; if 'n', test /number
; if 'i', test /integer
; if 'f', test /float
; if 's', test /string
if typ eq 'n' then return, isa(var, /number)
if typ eq 'i' then return, isa(var, /integer)
if typ eq 'f' then return, isa(var, /float)
if typ eq 's' then return, isa(var, /string)
print, 'unexpected typename: ', typ
stop
end
IDL has some limited reflection abilities, which will do exactly what you want:
expected_types = ['integer', 'number', 'float']
expected_header = ['id', 'x', 'val']
str = read_csv(filename, header=inheader)
if ~array_equal(strlowcase(inheader), expected_header) then stop
foreach type, expected_types, index do begin
if ~isa(str.(index), _extra=create_struct(type, 1)) then stop
endforeach
It's debatable if this is really "easier to read" in your case, since there are only three cases to test. If there were 500 cases, it would be a lot cleaner than writing 500 slightly different lines.
This snippet uses some rather esoteric IDL features, so let me explain what's happening a bit:
expected_types is just a list of (string) keyword names in the order they should be used.
The foreach part iterates over expected_types, putting the keyword string into the type variable and the iteration count into index.
This is equivalent to using for index = 0, n_elements(expected_types) - 1 do and then using expected_types[index] instead of type, but the foreach loop is easier to read IMHO. Reference here.
_extra is a special keyword that can pass a structure as if it were a set of keywords. Each of the structure's tags is interpreted as a keyword. Reference here.
The create_struct function takes one or more pairs of (string) tag names and (any type) values, then returns a structure with those tag names and values. Reference here.
Finally, I replaced not (bitwise not) with ~ (logical not). This step, like foreach vs for, is not necessary in this instance, but can avoid headache when debugging some types of code, where the distinction matters.
--
Reflective abilities like these can do an awful lot, and come in super handy. They're work-horses in other languages, but IDL programmers don't seem to use them as much. Here's a quick list of common reflective features I use in IDL, with links to the documentation for each:
create_struct - Create a structure from (string) tag names and values.
n_tags - Get the number of tags in a structure.
_extra, _strict_extra, and _ref_extra - Pass keywords by structure or reference.
call_function - Call a function by its (string) name.
call_procedure - Call a procedure by its (string) name.
call_method - Call a method (of an object) by its (string) name.
execute - Run complete IDL commands stored in a string.
Note: Be very careful using the execute function. It will blindly execute any IDL statement you (or a user, file, web form, etc.) feed it. Never ever feed untrusted or web user input to the IDL execute function.
You can't access the keywords quite like that, but there is a typename parameter to ISA that might be useful. This is untested, but should work:
expected_header = ['id', 'x', 'val']
expected_type = ['int', 'long', 'float']
str = read_csv(filename, header=inheader)
if not array_equal(strlowcase(inheader), expected_header) then stop
for i = 0L, n_elements(expected_type) - 1L do begin
if not isa(str.(i), expected_type[i]) then stop
endfor
