Passing an XQuery XML element as an external variable to MarkLogic via XCC - xquery

We have fairly simple XQuery and Groovy code, as follows.
XQuery code:
declare variable $criteria as element(criteria) external ;
<scopedInterventions>{
$criteria/equals/field
}</scopedInterventions>
Here is the test code that tries to invoke it:
def uri = new URI("xcc://admin:admin@localhost:8001")
def contentSource = ContentSourceFactory.newContentSource(uri)
def session = contentSource.newSession()
def request = session.newModuleInvoke("ourQuery.xqy")
def criteria =
"""<criteria>
<equals>
<field>status</field>
<value>draft</value>
</equals>
</criteria>
"""
request.setNewVariable("criteria",ValueType.ELEMENT, criteria);
session.submitRequest(request).asString()
}
We are getting this error when executing:
Caused by: com.marklogic.xcc.exceptions.XQueryException: XDMP-LEXVAL:
xs:QName("element()") -- Invalid lexical value "element()" [Session:
user=admin, cb={default} [ContentSource: user=admin, cb={none}
[provider: address=localhost/127.0.0.1:9001, pool=1/64]]] [Client:
XCC/5.0-3, Server: XDBC/5.0-3] expr: xs:QName("element()") at
com.marklogic.xcc.impl.handlers.ServerExceptionHandler.handleResponse(ServerExceptionHandler.java:34)
at
com.marklogic.xcc.impl.handlers.EvalRequestController.serverDialog(EvalRequestController.java:83)
at
com.marklogic.xcc.impl.handlers.AbstractRequestController.runRequest(AbstractRequestController.java:84)
at
com.marklogic.xcc.impl.SessionImpl.submitRequestInternal(SessionImpl.java:373)
at
com.marklogic.xcc.impl.SessionImpl.submitRequest(SessionImpl.java:356)
at
com.zynx.galen.dataaccess.MarkLogicUtilities.executeQueryWithMultipleXMLParameters(MarkLogicUtilities.groovy:52)
at
com.zynx.galen.repositories.ScopedInterventionService.getScopedInterventionsByCriteria(ScopedInterventionService.groovy:20)
... 1 more
Any help would be greatly appreciated.

http://docs.marklogic.com/javadoc/xcc/overview-summary.html has the answer, I think:
Passing Variables With Queries
Variables may be bound to Request objects. When an execution request
is issued to the server with Session.submitRequest(Request) all the
variables currently bound to the Request object are sent along and
defined as external variables in the execution context in the server.
XCC lets you create XdmNodes and XdmSequences, as well as XdmAtomic
values. However, in the initial XCC release values of this type may
not be bound as external variables because MarkLogic Server cannot yet
accept them. This capability is anticipated for a future release.
Since XdmNode is not supported, I suppose its subclass XdmElement is not supported either. So these classes are only useful for responses, not requests. The error message could stand to be improved.
You could pass the XML string using setNewStringVariable, then call xdmp:unquote in your XQuery module. Note that xdmp:unquote returns a document-node, so the /* XPath step yields its root element.
declare variable $xml-string as xs:string external ;
declare variable $criteria as element(criteria) := xdmp:unquote($xml-string)/* ;
....
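The workaround boils down to "ship the element as a string, parse it back on the receiving side". The same pattern, sketched with Python's stdlib xml.etree purely as an illustration (the XQuery/Groovy code above is what actually runs against MarkLogic):

```python
import xml.etree.ElementTree as ET

# The criteria travel as a plain string (the setNewStringVariable side)...
criteria = """<criteria>
  <equals>
    <field>status</field>
    <value>draft</value>
  </equals>
</criteria>"""

# ...and are parsed back into an element on the receiving side,
# analogous to xdmp:unquote($xml-string)/* returning the root element.
root = ET.fromstring(criteria)
field = root.find('./equals/field').text
```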

Related

How to turn off reasoning in the Grakn python client

I am using the Grakn python client and I want to query for data without reasoning turned on.
client = GraknClient(uri=uri)
session = client.session(keyspace=keyspace)
tx = session.transaction().read()
Do I pass an argument in the transaction() method?
You can turn reasoning off for each individual query by passing the infer=False parameter, like this:
transaction.execute(query, infer=False, explain=False, batch_size=50)
Check out the documentation http://dev.grakn.ai/docs/client-api/python#lazily-execute-a-graql-query

Azure Stream Analytics returns Bad Request when calling an Azure Machine Learning function, even though the Azure ML service is called fine from C#

We have an Azure Machine Learning web service that is called fine from a C# program. It also works fine when called as an HTTP POST (with headers and a JSON string in the body). However, in Azure Stream Analytics you have to create a Function to call an ML service, and when this function is called in ASA, it fails with Bad Request.
The ML service's documentation describes the request as follows:
Request Body
Sample Request
{
  "Inputs": {
    "input": [
      {
        "device": "60-1-94-49-36-c5",
        "uid": "5f4736aabfc1312385ea09805cc922",
        "weight": "9-9-9-9-9-8-9-8-9-9-9-9-9-9-9-9-9-8-9-9-8-8-9-9-9-9-9-9-9-9-9-9-9-9-9-9-9-8-9-9-9-9-9-9-9-9-9-9-9-9-9-9-8-9-9-9-9-9-9-9-9-9-9-9-9-9-9-9-9-9-9-8-9-9-9-9-8-9-9-9-8-9-9-9-9-9-9-9-9-9-8-9-9-9-9-8-8-16-16-15-16-16-15-15-16-15-15-15-15-16-15-15-16-15-15-9-15-15-15-15-15-15-15-9-15-16-15-15-9-15-16-16-16-15-15-15-15-15-15-15-15-16-16-15-9-15-15-15-16-15-16-15-15-15-15-15-16-15-15-16-16-15-15-15"
      }
    ]
  },
  "GlobalParameters": {}
}
The Azure Stream Analytics function (that calls the ML service above) has this signature:
FUNCTION SIGNATURE
SmartStokML2018Aug17 ( device NVARCHAR(MAX) ,
uid NVARCHAR(MAX) ,
weight NVARCHAR(MAX) ) RETURNS RECORD
Here the function is expecting 3 string arguments and NOT a full JSON string. The 3 parameters are strings (NVARCHAR as shown).
The 3 parameters (device, uid and weight) have been passed in using different string formats: as JSON strings built with JSON.stringify() in a UDF, and as plain values with no wrapping. But all calls to the ML service fail.
WITH QUERY1 AS (
SELECT DEVICE, UID, WEIGHT,
udf.jsonstringify( concat('{"device": "',try_cast(device as nvarchar(max)), '"}')) jsondevice,
udf.jsonstringify( concat('{"uid": "',try_cast(uid as nvarchar(max)), '"}')) jsonuid,
udf.jsonstringify( concat('{"weight": "',try_cast(weight as nvarchar(max)), '"}')) jsonweight
FROM iothubinput2018aug21 ),
QUERY2 AS (
SELECT IntellistokML2018Aug21(JSONDEVICE, JSONUID, JSONWEIGHT) AS RESULT
FROM QUERY1
)
SELECT *
INTO OUT2BLOB20
FROM QUERY2
Most of the errors are:
ValueError: invalid literal for int() with base 10: '\\" {weight:9'\n\r\n\r\n
In what format does the ML Service expect these parameters to be passed in?
Note: the queries have been tried with ASA Compatibility Level 1 and 1.1.
In an ASA function, you don't need to construct the JSON input to Azure ML yourself; you just specify your event fields directly. For example:
WITH QUERY1 AS (
SELECT IntellistokML2018Aug21(DEVICE, UID, WEIGHT) AS RESULT
FROM iothubinput2018aug21
)
SELECT *
INTO OUT2BLOB20
FROM QUERY1
As mentioned in Dushyant's post, you don't need to construct the JSON input for Azure ML. However, I've noticed that your input is nested JSON with an array, so you need to extract the fields in your first step.
Here is an example:
WITH QUERY1 AS(
SELECT
GetRecordPropertyValue(GetArrayElement(inputs.input,0),'device') as device,
GetRecordPropertyValue(GetArrayElement(inputs.input,0),'uid') as uid,
GetRecordPropertyValue(GetArrayElement(inputs.input,0),'weight') as weight
FROM iothubinput2018aug21 )
Please note that if you have several messages in the "Inputs.input" array, you can use CROSS APPLY to read all of them (in my example I assumed there is only one).
More information on querying JSON here: https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-parsing-json
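For illustration only, the record/array navigation above maps onto plain JSON indexing; here is a stdlib Python sketch with a hypothetical payload shaped like the documented sample request:

```python
import json

# Hypothetical event shaped like the sample request body (weight shortened)
event = json.loads("""
{"Inputs": {"input": [{"device": "60-1-94-49-36-c5",
                       "uid": "5f4736aabfc1312385ea09805cc922",
                       "weight": "9-9-8"}]}}
""")

# GetArrayElement(inputs.input, 0), then GetRecordPropertyValue(..., 'device')
first = event["Inputs"]["input"][0]
device = first["device"]
```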
Let us know if it works for you.
JS (Azure Stream Analytics)
It turns out the ML service expects devices with a KNOWN MAC ID. If a device is passed in with an UNKNOWN MAC ID, the Python script fails. This should be handled more gracefully.
Now there are errors related to batch processing of rows:
"Error": "- Condition 'The number of events in Azure ML request ID 0 is 28 but the
number of results in the response is 1. These should be equal. The Azure ML model
is expected to score every row in the batch call and return a response for it.'
should not be false in method
'Microsoft.Streaming.CalloutProcessor.dll
!Microsoft.Streaming.Processors.Callout.AzureMLRRS.ResponseParser.Parse'
(Parse at offset 69 in file:line:column <filename unknown>:0:0\r\n)\r\n",
"Message": "An error was encountered while calling the Azure ML web service. An
error occurred when parsing the Azure ML web service response. Please check your
Azure ML web service and data model., - Condition 'The number of events in Azure ML
request ID 0 is 28 but the number of results in the response is 1. These should be
equal. The Azure ML model is expected to score every row in the batch call and
return a response for it.' should not be false in method
'Microsoft.Streaming.CalloutProcessor.dll
!Microsoft.Streaming.Processors.Callout.AzureMLRRS.ResponseParser.Parse' (Parse at
offset 69 in file:line:column <filename unknown>:0:0\r\n)\r\n, :
OutputSourceAlias:query2Callout;",
"Type": "CallOutProcessingFailure",
"Correlation ID": "2f87188e-1eda-479c-8e86-e2c4a827c6e7"
I am looking into this article for guidance: Scale your Stream Analytics job with Azure Machine Learning functions: https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/stream-analytics/stream-analytics-scale-with-machine-learning-functions.md
I am unable to add a comment to the original thread regarding this, so I am replying here:
"The number of events in Azure ML
request ID 0 is 28 but the number of results in the response is 1. These should be
equal"
ASA's call-out to Azure ML is modeled as a scalar function: every input event needs to generate exactly one output. In your case, it seems that you are generating one output for 28 input events. Can you modify your logic to generate an output per input event?
Regarding the JSON format:
{ "Inputs":{ "input":[ { "device":"60-c5", "uid":"5f422", "weight":"9--15" } ] }, "GlobalParameters":{ } }
All the extra markup will be added by ASA when calling AML. Do you have a way of inspecting the input received by your AML web service? For example, modify your model code to write it to blob storage.
AML calls are expected to follow scalar semantics - one output per input.
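The scalar contract can be sketched as follows (hypothetical helper names, not ASA or AML code):

```python
def score_batch(rows, score_one):
    # ASA's Azure ML call-out is scalar: N input events in, N results out.
    results = [score_one(row) for row in rows]
    assert len(results) == len(rows), "one output per input event"
    return results
```

A model that aggregates 28 rows into a single result violates this contract and produces the response-parser error quoted above.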

Robot Framework Getting Keyword failure reason

Trying to implement a listener interface for Robot Framework in order to collect information about keyword executions: time taken for execution, pass/fail status, and the failure message when the status is FAIL. Sample code is given below:
import os.path
import tempfile

class PythonListener:
    ROBOT_LISTENER_API_VERSION = 2
    ROBOT_LIBRARY_SCOPE = 'GLOBAL'

    def __init__(self, filename='listen.txt'):
        outpath = os.path.join(tempfile.gettempdir(), filename)
        self.outfile = open(outpath, 'w')

    def end_keyword(self, name, attrs):
        self.outfile.write(name + "\n")
        self.outfile.write(str(attrs) + "\n")

    def close(self):
        self.outfile.close()
All the information apart from the keyword failure message is available in the attributes that Robot Framework passes to the end_keyword method.
Documentation can be found here. https://github.com/robotframework/robotframework/blob/master/doc/userguide/src/ExtendingRobotFramework/ListenerInterface.rst#id36
The failure message is available in the attributes of the end_test() method, but that does not help if a keyword is run using Run Keyword And Ignore Error.
I could see that there is a special variable ${KEYWORD MESSAGE} in Robot Framework, which contains the possible error message of the current keyword. Is it possible to access this variable in the listener class?
https://github.com/robotframework/robotframework/blob/master/doc/userguide/src/CreatingTestData/Variables.rst#automatic-variables
Are there any other ways to collect the failure message information at the end of every keyword?
That's an interesting approach. Indeed, end_test will ensure an attributes.message field containing the failure (the same goes for end_suite if the failure occurs during the suite setup/teardown).
With end_keyword you don't get such a message, but you can at least filter on the FAIL status to detect which keyword failed. The message returned by Run Keyword And Ignore Error has to be logged explicitly by you; then you can capture such logs with the log_message hook. Otherwise nobody is aware of the message of the exception handled by the wrapper keyword, which returns a tuple of (status, message).
There's also the message hook, but I couldn't manage to get it called from a normally failing robot run:
Called when the framework itself writes a syslog message.
message is a dictionary with the same contents as with log_message method.
Side note: To not expose these hooks as keywords, you can precede the method names with _. Examples:
def _end_test(self, name, attributes): ...
def _log_message(self, message): ...
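Putting that together, a minimal listener sketch that filters keywords by FAIL status and captures FAIL-level log messages (dictionary keys follow the listener v2 API; verify them against your Robot Framework version):

```python
class FailureListener:
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self):
        self.failures = []  # names of keywords that ended with FAIL
        self.messages = []  # FAIL-level log lines, e.g. from explicit logging

    def _end_keyword(self, name, attrs):
        # No failure message is available here, but the status tells us
        # which keyword failed.
        if attrs['status'] == 'FAIL':
            self.failures.append(name)

    def _log_message(self, message):
        # Captures messages logged at FAIL level during execution.
        if message['level'] == 'FAIL':
            self.messages.append(message['message'])
```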

How to return error collection/object from AWS Lambda function and map to AWS API Gateway response code

I am attempting to return an object from an AWS Lambda function instead of a simple string.
// ...
context.fail({
"email": "Email address is too short",
"firstname": "First name is too short"
});
// ...
I have already used errorMessage for mapping error responses to status codes, and that has worked great:
// ...
context.fail('That "username" has already been taken.');
// ...
Am I simply trying to do something that the AWS API Gateway does not afford?
I have also already found this article which helped: Is there a way to change the http status codes returned by Amazon API Gateway?.
Update
Since the time of writing, Lambda has updated the invocation signature and now passes event, context, callback.
Instead of calling context.done(err, res) you should use callback(err, res). Note that what was true for context.done still applies to the callback pattern.
I should also add that with API Gateway's Lambda proxy integration, this entire thread is pretty much obsolete.
I recommend reading this article if you are integrating API Gateway with Lambda: http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-api-as-simple-proxy-for-lambda.html
Original response below
First things first, let's clear a few things up.
context.done() vs. context.fail()/context.success()
context.done(error, result); is nothing but a wrapper around context.fail(error); and context.success(response);
The Lambda documentation clearly states that result is ignored if error is non null:
If the Lambda function was invoked using the RequestResponse (synchronous) invocation type, the method returns response body as follows:
If the error is null, set the response body to the string representation of result. This is similar to the context.succeed().
If the error is not null, set the response body to error.
If the function is called with a single argument of type error, the error value will be populated in the response body.
http://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-model-context.html
What this means is that it won't matter whether you use a combination of fail/success or done, the behaviour is exactly the same.
API Gateway and Response Code Mapping
I have tested every conceivable combination of response handling from Lambda together with response code mapping in API Gateway.
The conclusion of these tests is that the "Lambda Error RegExp" is only executed against a Lambda error, i.e. you have to call context.done(error); or context.fail(error); for the RegExp to actually trigger.
Now, this presents a problem because, as has already been noted, Lambda takes your error, sticks it in an object, and calls toString() on whatever you supplied:
{ errorMessage: yourError.toString() }
If you supplied an error object you'll get this:
{ errorMessage: "[object Object]" }
Not very helpful at all.
The only workaround I have found thus far is to call
context.fail(JSON.stringify(error));
and then in my client do:
var errorObject = JSON.parse(error.errorMessage);
It's not very elegant but it works.
As part of my error I have a property called "code". It could look something like this:
{
code: "BadRequest",
message: "Invalid argument: parameter name"
}
When I stringify this object I get:
"{\"code\":\"BadRequest\",\"message\":\"Invalid argument: parameter name\"}"
Lambda will stick this string in the errorMessage property of the response and I can now safely grep for .*"BadRequest".* in the API Gateway response mapping.
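The whole round trip can be sketched with Python's stdlib json standing in for both the Lambda side and the client side (illustrative only; the actual code in this answer is JavaScript):

```python
import json

error = {"code": "BadRequest", "message": "Invalid argument: parameter name"}

# context.fail(JSON.stringify(error)): Lambda wraps the string it is given
response = {"errorMessage": json.dumps(error)}

# Client side, JSON.parse(error.errorMessage): recover the original object
recovered = json.loads(response["errorMessage"])
```

The API Gateway regex .*"BadRequest".* then matches against that errorMessage string.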
It's very much a hack that works around two somewhat strange quirks of Lambda and API Gateway:
Why does Lambda insist on wrapping the error instead of just giving it back as is?
Why doesn't API Gateway allow us to grep in the Lambda result, only the error?
I am on my way to open a support case with Amazon regarding these two rather odd behaviours.
You don't have to use context.fail; use context.succeed but send back a different statusCode and an errorMessage. Here is an example of how I format my output:
try {
    // Call the callable function with the defined array parameters.
    // Any exception thrown by the functions called here will be caught.
    result.data = callable_function.apply(this, params);
    result.statusCode = 200;
    result.operation = operation;
    result.errorMessage = "";
} catch (e) {
    result.data = [];
    result.statusCode = 500;
    result.errorMessage = e.toString();
    result.method = method;
    result.resource = resource;
}
// If everything went smoothly, send back the result.
// If context.succeed is not called, AWS Lambda may fire the function
// again because it did not exit successfully.
context.succeed(result);
Let the consumer handle the different error cases; don't forget that you pay for the time your function is running...
You should replace the use of context.fail with context.done, and reserve context.fail for very serious Lambda function failures, since it doesn't allow more than one output parameter. Integration Response can match a mapping template by running a regex on the first parameter passed to context.done; this also maps an HTTP status code to the response. You can't pass the response status code directly from Lambda, since it's the role of the API Gateway Integration Response to abstract the HTTP protocol.
See the following:
context.done('Not Found:', <some object you can use in the model>);
and in the Integration Response panel configure the matching setting.
You can replicate similar approach for any kind of error. You should also create and map the error model to your response.
For those who tried everything in this question and couldn't make it work (like me), check thedevkit's comment on this post (it saved my day):
https://forums.aws.amazon.com/thread.jspa?threadID=192918
Reproducing it entirely below:
I've had issues with this myself, and I believe that the newline
characters are the culprit.
foo.* will match occurrences of "foo" followed by any characters
EXCEPT newline. Typically this is solved by adding the '/s' flag, i.e.
"foo.*/s", but the Lambda error regex doesn't seem to respect this.
As an alternative you can use something like: foo(.|\n)*
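The behaviour is easy to reproduce in any regex engine; for example, in Python:

```python
import re

msg = "foo\nbar"

# '.' does not match a newline by default, so foo.*bar fails across lines...
assert re.search(r"foo.*bar", msg) is None

# ...but the (.|\n)* workaround from the comment matches:
assert re.search(r"foo(.|\n)*bar", msg) is not None

# In engines that honour it, the DOTALL ('s') flag is the cleaner fix:
assert re.search(r"foo.*bar", msg, re.DOTALL) is not None
```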

How to write python function to test the matched strings (to use for Robot framework keyword)?

I am writing a custom library for Robot Framework in Python. I don't want to use the builtin library, for various reasons.
My Python code:
import os
import re

def find_ip():
    cmd = 'ipconfig'
    output = os.popen(cmd).read()
    match1 = re.findall(r'.* (1\.1\.1\.1).*', output)
    mat1 = ['1.1.1.1']
    if match1 == mat1:
        print("PASS")
In the above program I have written a Python function to:
Execute the Windows command "ipconfig"
Write a regular expression to match 1.1.1.1
Create a list variable, mat1 = ['1.1.1.1']
Now I want to add a condition so that if match1 and mat1 are equal, my TEST should PASS in Robot Framework; otherwise it should fail.
Can anyone suggest how to write a Python function for this purpose?
Please note I don't want to use the "Should Match Regexp" keyword in Robot Framework, because I know it will do the same thing I am asking for.
To make a keyword pass, you don't need to do anything except return normally to the caller. To fail, you need to raise an exception:
def find_ip():
    ...
    if match1 != mat1:
        raise Exception("expected the matches to be equal; they are not")
This is documented in the robot user guide in the section Returning Keyword Status:
Reporting keyword status is done simply using exceptions. If an
executed method raises an exception, the keyword status is FAIL, and
if it returns normally, the status is PASS.
The error message shown in logs, reports and the console is created
from the exception type and its message. With generic exceptions (for
example, AssertionError, Exception, and RuntimeError), only the
exception message is used, and with others, the message is created in
the format ExceptionType: Actual message.
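Combining the regex from the question with the raise-to-fail pattern, a self-contained keyword might look like this (the name and the digit-based pattern are illustrative; note it also avoids the unescaped dots of the original regex):

```python
import re

def output_should_contain_ip(output, expected='1.1.1.1'):
    """Pass by returning normally; fail by raising an exception."""
    found = re.findall(r'\d+\.\d+\.\d+\.\d+', output)
    if expected not in found:
        raise AssertionError(
            'expected %s in output, found %s' % (expected, found))
```

From a test case this would be called as Output Should Contain Ip    ${output}.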
