Azure Cosmos DB query runs fine in portal but not as a text string - azure-cosmosdb-gremlinapi

My query string is properly escaped:
"g.addV('planet').property('name','D\\PA')
.property('class','terrestrial')
.property('objid','2875301669077')
.property('label','planet')
.property('radius',0.0814)
.property('mass',0.9808)
.property('isSupportsLife','True')
.property('isPopulated','True')
.property('isHomeworld','True')
.property('username','BillmanLocal2')
.property('objtype','planet')"
Running the same query in the portal works fine.
But running it as a string via callback = self.c.submitAsync(query) causes the exception:
GraphSyntaxException
Gremlin query syntax error: Unexpected token: ; in input: 'g.addV('planet').property('name',''. # line 1, column 34.
Is it possible that Azure's Cosmos is doing some cleaning in the process?
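One way to keep the backslash intact is to escape values before interpolating them into the Gremlin string; here is a minimal sketch assuming the gremlinpython driver is used with submitAsync (the endpoint, database, graph and key below are placeholders):
from gremlin_python.driver import client, serializer

def escape_gremlin_value(value):
    # Double any backslashes first, then escape single quotes, so the value
    # survives string parsing on the Gremlin server side
    return value.replace('\\', '\\\\').replace("'", "\\'")

c = client.Client(
    'wss://<account>.gremlin.cosmos.azure.com:443/',  # placeholder endpoint
    'g',
    username='/dbs/<database>/colls/<graph>',          # placeholder
    password='<primary-key>',                          # placeholder
    message_serializer=serializer.GraphSONSerializersV2d0())

name = 'D\\PA'  # Python literal for the value D\PA
query = "g.addV('planet').property('name','{}')".format(escape_gremlin_value(name))
callback = c.submitAsync(query)
print(callback.result().all().result())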

Related

How can my Flask app check whether a SQLite3 transaction is in progress?

I am trying to build some smart error messages using the @app.errorhandler(500) feature. For example, my route includes an INSERT command to the database:
if request.method == "POST":
    userID = int(request.form.get("userID"))
    topicID = int(request.form.get("topicID"))
    db.execute("BEGIN TRANSACTION")
    db.execute("INSERT INTO UserToTopic (userID,topicID) VALUES (?,?)", userID, topicID)
    db.execute("COMMIT")
If that transaction violates a constraint, such as UNIQUE or FOREIGN KEY, I want to catch the error and display a user-friendly message. To do this, I'm using the Flask @app.errorhandler as follows:
@app.errorhandler(500)
def internal_error(error):
    db.execute("ROLLBACK")
    return render_template('500.html'), 500
The "ROLLBACK" command works fine if I'm in the middle of a database transaction. But sometimes the 500 error is not related to the db, and in those cases the ROLLBACK statement itself causes an error, because you can't rollback a transaction that never started. So I'm looking for a method that returns a Boolean value that would be true if a db transaction is under way, and false if not, so I can use it to make the ROLLBACK conditional. The only one I can find in the SQLite3 documentation is for a C interface, and I can't get it to work with my Python code. Any suggestions?
I know that if I'm careful enough with my forms and routes, I can prevent 99% of potential violations of db rules. But I would still like a smart error catcher to protect me for the other 1%.
I don't know how transactions work in SQLite, but you can achieve what you are trying to do with try/except statements.
Use try/except within the error handler function:
try:
    db.execute("ROLLBACK")
except:
    pass
return render_template('500.html'), 500
Use try/except when inserting data.
from flask import abort
try:
    userID = int(request.form.get("userID"))
    [...]
except:
    db.rollback()
    abort(500)
I am not familiar with SQLite's specific errors, but if you know which error occurs, catch that specific exception instead of a bare except.
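For the Boolean check the question asks about, Python's own sqlite3 connection object exposes an in_transaction attribute; a minimal sketch, assuming the error handler has access to the underlying sqlite3 connection (the db wrapper shown above may not expose one):
import sqlite3
from flask import Flask, render_template

app = Flask(__name__)
conn = sqlite3.connect("app.db", check_same_thread=False)  # hypothetical connection

@app.errorhandler(500)
def internal_error(error):
    # in_transaction is True only while an uncommitted transaction is open
    if conn.in_transaction:
        conn.rollback()
    return render_template('500.html'), 500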

Room unexpected behavior when working with BLOB & TEXT

I am migrating my app's SQLite helper to Room. Basically I am just copying data from the old SQLite database to Room, and due to a schema mismatch I need to provide a migration. I am having an issue with BLOB data in Room.
I have the simple model below:
class NewCourse {
    var weekday: Array<String> = arrayOf()
}
I also have a TypeConverter:
@TypeConverter
fun toArray(concatenatedStrings: String?): Array<String>? {
    return concatenatedStrings?.split(",".toRegex())?.dropLastWhile { it.isEmpty() }?.toTypedArray()
}

@TypeConverter
fun fromArray(strings: Array<String>?): String? {
    return strings?.joinToString(",")
}
In my old appdatabase.db database I have a corresponding table Course with a field weekday of type BLOB.
Because of my TypeConverter, in the Room database weekday will have type TEXT. During the migration I run the SQL below:
INSERT INTO NewCourse (weekday) SELECT weekday FROM Course
Since weekday in the Course table is BLOB-typed and SQLite basically lets you store any kind of value in any column, I expect it to copy the BLOB-typed weekday in Course into the TEXT-typed weekday in NewCourse.
At first I expected some error due to the type mismatch, but "fortunately", though unexpectedly, Room doesn't throw any exception: it takes the BLOB value and copies it as TEXT.
My first question is: why does this work? i.e., how is the BLOB's value copied as TEXT into my newly created table?
I never cared about it, as it was working perfectly, until I did some testing with Robolectric.
Unfortunately, I get an error when testing with Robolectric. After copying the data in my migration, when I query for NewCourse I get the SQL error
android.database.sqlite.SQLiteException: Getting string when column is blob. Row 0, col 10
So I suppose that here it copies the data as a BLOB, and when querying for weekday it throws an exception because getWeekDay calls the cursor's getString.
My second question is: why does this fail when testing with Robolectric but work when just running the app?
I also tested the queries with plain SQL, not involving Android, and there it copies the BLOB as a BLOB even though the type of weekday in NewCourse is TEXT, as expected.
Robolectric is a testing library for Android applications. The keyword here is testing, meaning there shouldn't be any exceptions. Robolectric may be showing you the error because some Android devices would throw an exception there and crash your application. Try checking your logs while your application is running; maybe you are missing some warnings.
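To see the SQLite behavior at work outside Android, here is a minimal sketch using Python's sqlite3 module: column types are only affinities, and a BLOB value keeps its BLOB storage class even when copied into a TEXT column, which is why the migration copy succeeds and why a later getString-style read can still find a BLOB:
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Course (weekday BLOB)")
conn.execute("CREATE TABLE NewCourse (weekday TEXT)")

# The old table holds the comma-joined weekdays as a BLOB
conn.execute("INSERT INTO Course (weekday) VALUES (?)", (b"Mon,Wed,Fri",))

# The same copy the migration performs
conn.execute("INSERT INTO NewCourse (weekday) SELECT weekday FROM Course")

# The storage class is still 'blob' despite the TEXT column type
print(conn.execute("SELECT typeof(weekday), weekday FROM NewCourse").fetchone())
# ('blob', b'Mon,Wed,Fri')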

Azure Cosmos DB Entity Insert and Data Explorer Error

Just this morning when trying to view the Data Explorer UI for an Azure Cosmos DB table the window is totally blank and I see no rows (the table should not be empty). The only connection to this table is a Python script that pushes in simple rows with only a few variables however this has also stopped working just this morning.
I am still able to connect to the table service properly and I've even been able to create a new table through my Python script. However, as soon as I call table_service.insert_or_replace_entity('traps', task) ('traps' is the name of my table and task is the row I'm trying to push up), I receive back HTTP Error 400: The request URL is invalid.
For reference, my connection in Python is as follows where Account_Name = my personal account name and Account_Key = my personal account key.
table_service = TableService(connection_string="DefaultEndpointsProtocol=https;AccountName=Account_Name;AccountKey=Account_Key;TableEndpoint=https://Account_Name.table.cosmosdb.azure.com:443/;")
for i in list(range(0, len(times))):
    print(len(tags))
    print(len(times))
    print(len(locations))
    task = {'PartitionKey': '1', 'RowKey': '{}'.format(tags[i]), 'Date_Time': '{}'.format(times[i]), 'Location': '{}'.format(locations[i])}
    table_service.insert_or_replace_entity('traps', task)
UPDATE
In reference to the HTTP Error 400, I discovered that I was pushing a \n at the end of each of the tags strings (i.e. tags[0] = 'ab123\n'). Stripping out the \n has resolved the HTTP 400 error, but I am now receiving a 'The specified resource does not exist.' message when I attempt to upload, which makes more sense given that my Data Explorer is blank. I have tried uploading to a new table but it's the same thing.
Second Update
Silly mistake on the resource-not-found error: my table is called "Traps", not "traps". Data appears to be uploading correctly now on the API side. However, the table is still not displaying at all in the Data Explorer page of the Azure portal. If anyone has insight on this it would be appreciated, because the explorer is super helpful while we are still in development.
Third Update
I am able to connect to the table/database through Python and query data effectively. It all seems to be in there and up to date. The only thing I'm left unsure about is why the Data Explorer is not displaying properly. Aside from that, my recommendation is to obviously check your capital letters (my usual mistake haha) and DO NOT try to push up line feeds (\n) in the task/payload.
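For illustration, a minimal sketch of the sanitizing described above, reusing the hypothetical tags/times/locations lists from the earlier snippet and stripping trailing line feeds before the values go into the entity:
for i in range(len(times)):
    task = {
        'PartitionKey': '1',
        # strip() removes trailing '\n', which otherwise triggers HTTP 400
        'RowKey': str(tags[i]).strip(),
        'Date_Time': str(times[i]).strip(),
        'Location': str(locations[i]).strip(),
    }
    # Note the capital T: the existing table is named "Traps", not "traps"
    table_service.insert_or_replace_entity('Traps', task)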
I want to provide an official update and response to your issue. This issue is being hotfixed, with the rollout ETA of Monday (09/24/2018).

Azure Stream Analytics returns Bad Request when calling an Azure Machine Learning function, even though the Azure ML service is called fine from C#

We have an Azure Machine Learning web service that is called fine from a C# program, and it works fine when called as a plain HTTP POST (with headers and a JSON string in the body). However, in Azure Stream Analytics you have to create a Function to call an ML service, and when this function is called in ASA, it fails with Bad Request.
The documentation for the ML service gives the following sample request body:
{
    "Inputs": {
        "input": [
            {
                "device": "60-1-94-49-36-c5",
                "uid": "5f4736aabfc1312385ea09805cc922",
                "weight": "9-9-9-9-9-8-9-8-9-9-9-9-9-9-9-9-9-8-9-9-8-8-9-9-9-9-9-9-9-9-9-9-9-9-8-9-9-9-9-9-9-9-9-9-9-9-9-9-8-9-9-9-9-9-9-9-9-9-9-9-9-9-9-9-9-9-9-8-9-9-9-9-8-9-9-9-8-9-9-9-9-9-9-9-9-9-8-9-9-9-9-8-8-16-16-15-16-16-15-15-16-15-15-15-15-16-15-15-16-15-15-9-15-15-15-15-15-15-15-9-15-16-15-15-9-15-16-16-16-15-15-15-15-15-15-15-15-16-16-15-9-15-15-15-16-15-16-15-15-15-15-15-16-15-15-16-16-15-15-15"
            }
        ]
    },
    "GlobalParameters": {}
}
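Since the question notes that the service responds correctly to a plain HTTP POST with this body, here is a minimal sketch of such a call using Python's requests library (the endpoint URL and API key are placeholders, and the weight string is truncated):
import json
import requests

url = "https://<region>.services.azureml.net/workspaces/<workspace>/services/<service-id>/execute?api-version=2.0"  # placeholder
api_key = "<api-key>"  # placeholder

body = {
    "Inputs": {
        "input": [
            {
                "device": "60-1-94-49-36-c5",
                "uid": "5f4736aabfc1312385ea09805cc922",
                "weight": "9-9-9-9-9-8-..."  # truncated; the full string is shown above
            }
        ]
    },
    "GlobalParameters": {}
}

headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + api_key,
}
response = requests.post(url, data=json.dumps(body), headers=headers)
print(response.status_code, response.text)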
The Azure Stream Analytics function (that calls the ML service above) has this signature:
FUNCTION SIGNATURE
SmartStokML2018Aug17 ( device NVARCHAR(MAX) ,
uid NVARCHAR(MAX) ,
weight NVARCHAR(MAX) ) RETURNS RECORD
Here the function is expecting 3 string arguments and NOT a full JSON string. The 3 parameters are strings (NVARCHAR as shown).
The 3 parameters have been passed in (device, uid and weight) in different string formats: as JSON strings built with JSON.stringify() in a UDF, and as plain values with no field names ("device", "uid", "weight"). But all calls to the ML service fail.
WITH QUERY1 AS (
SELECT DEVICE, UID, WEIGHT,
udf.jsonstringify( concat('{"device": "',try_cast(device as nvarchar(max)), '"}')) jsondevice,
udf.jsonstringify( concat('{"uid": "',try_cast(uid as nvarchar(max)), '"}')) jsonuid,
udf.jsonstringify( concat('{"weight": "',try_cast(weight as nvarchar(max)), '"}')) jsonweight
FROM iothubinput2018aug21 ),
QUERY2 AS (
SELECT IntellistokML2018Aug21(JSONDEVICE, JSONUID, JSONWEIGHT) AS RESULT
FROM QUERY1
)
SELECT *
INTO OUT2BLOB20
FROM QUERY2
Most of the errors are:
ValueError: invalid literal for int() with base 10: '\\" {weight:9'\n\r\n\r\n
In what format does the ML Service expect these parameters to be passed in?
Note: the queries have been tried with ASA Compatibility Level 1 and 1.1.
In an ASA function, you don't need to construct the JSON input to Azure ML yourself. You just specify your event fields directly. Eg:
WITH QUERY1 AS (
SELECT IntellistokML2018Aug21(DEVICE, UID, WEIGHT) AS RESULT
FROM iothubinput2018aug21
)
SELECT *
INTO OUT2BLOB20
FROM QUERY1
As mentioned in Dushyant's post, you don't need to construct the JSON input for Azure ML. However, I've noticed that your input is nested JSON with an array, so you need to extract the fields in a first step.
Here is an example:
WITH QUERY1 AS(
SELECT
GetRecordPropertyValue(GetArrayElement(inputs.input,0),'device') as device,
GetRecordPropertyValue(GetArrayElement(inputs.input,0),'uid') as uid,
GetRecordPropertyValue(GetArrayElement(inputs.input,0),'weight') as weight
FROM iothubinput2018aug21 )
Please note that if you can have several messages in the "Inputs.input" array, you can use CROSS APPLY to read all of them (in my example I assumed there is only one).
More information on querying JSON here: https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-parsing-json
Let us know if it works for you.
JS (Azure Stream Analytics)
It turns out the ML service is expecting devices with a KNOWN MAC ID. If a device is passed in with an UNKNOWN MAC ID, then there is a failure in the Python script. This should be handled more gracefully.
Now there are errors related to batch processing of rows:
"Error": "- Condition 'The number of events in Azure ML request ID 0 is 28 but the
number of results in the response is 1. These should be equal. The Azure ML model
is expected to score every row in the batch call and return a response for it.'
should not be false in method
'Microsoft.Streaming.CalloutProcessor.dll
!Microsoft.Streaming.Processors.Callout.AzureMLRRS.ResponseParser.Parse'
(Parse at offset 69 in file:line:column <filename unknown>:0:0\r\n)\r\n",
"Message": "An error was encountered while calling the Azure ML web service. An
error occurred when parsing the Azure ML web service response. Please check your
Azure ML web service and data model., - Condition 'The number of events in Azure ML
request ID 0 is 28 but the number of results in the response is 1. These should be
equal. The Azure ML model is expected to score every row in the batch call and
return a response for it.' should not be false in method
'Microsoft.Streaming.CalloutProcessor.dll
!Microsoft.Streaming.Processors.Callout.AzureMLRRS.ResponseParser.Parse' (Parse at
offset 69 in file:line:column <filename unknown>:0:0\r\n)\r\n, :
OutputSourceAlias:query2Callout;",
Type": "CallOutProcessingFailure",
"Correlation ID": "2f87188e-1eda-479c-8e86-e2c4a827c6e7"
I am looking into this article for guidance:
Scale your Stream Analytics job with Azure Machine Learning functions: https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/stream-analytics/stream-analytics-scale-with-machine-learning-functions.md
I am unable to add a comment to the original thread regarding this so replying here:
"The number of events in Azure ML
request ID 0 is 28 but the number of results in the response is 1. These should be
equal"
ASA's call out to Azure ML is modeled as a scalar function. This means that every input event needs to generate exactly one output. In your case, it seems that you are generating one output for 28 input events. Can you modify your logic to generate an output per input event?
Regarding the JSON format:
{ "Inputs":{ "input":[ { "device":"60-c5", "uid":"5f422", "weight":"9--15" } ] }, "GlobalParameters":{ } }
All the extra markup will be added by ASA when calling AML. Do you have a way of inspecting the input received by your AML web service? For example, modify your model code to write its input to blob storage.
AML calls are expected to follow scalar semantics - one output per input.
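To make that scalar contract concrete, here is a minimal sketch assuming the model is exposed through an Execute Python Script module (the azureml_main entry point of classic Azure ML Studio); score_device and KNOWN_DEVICES are hypothetical stand-ins for the actual model. The essential point is that the returned frame has exactly one row per input row, even for unknown MAC IDs:
import pandas as pd

KNOWN_DEVICES = {"60-1-94-49-36-c5"}  # hypothetical lookup of known MAC IDs

def score_device(device, uid, weight):
    # Hypothetical stand-in for the real model; raises KeyError for unknown MAC IDs
    if device not in KNOWN_DEVICES:
        raise KeyError(device)
    return sum(int(x) for x in weight.split("-"))  # placeholder "score"

def azureml_main(dataframe1=None, dataframe2=None):
    scores = []
    for _, row in dataframe1.iterrows():
        try:
            scores.append(score_device(row["device"], row["uid"], row["weight"]))
        except KeyError:
            # Unknown MAC ID: still emit a row instead of dropping it,
            # so the response row count matches the request row count
            scores.append(None)
    dataframe1["Scored"] = scores
    # The module expects a sequence of DataFrames
    return dataframe1,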

CosmosDB - Is there a way to get MongoDB API RequestCharge

So, when using the MongoDB API with the C# driver against Cosmos DB, can we somehow get the RequestCharge from the Cosmos DB response for each query?
So, for anyone struggling with the same thing, here is the solution.
The Cosmos DB MongoDB API has a dedicated command called getLastRequestStatistics.
ref: https://learn.microsoft.com/en-us/azure/cosmos-db/request-units
So, immediately after a real query is executed, one should trigger:
var result = this._db.RunCommand<BsonDocument>(new BsonDocument{{ "getLastRequestStatistics", 1 }});
And that will give the actual response from Cosmos DB with the real cost. The response looks like:
{
    "_t" : "GetRequestStatisticsResponse",
    "ok" : 1,
    "CommandName" : "find",
    "RequestCharge" : 5.5499999999999998,
    "RequestDurationInMilliSeconds" : NumberLong(25)
}
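The same command can be issued from other drivers as well; for example, a minimal sketch with pymongo (connection string, database and collection names are placeholders):
from pymongo import MongoClient

client = MongoClient("<cosmos-db-mongodb-connection-string>")  # placeholder
db = client["<database>"]  # placeholder

# Run the real query first...
list(db["<collection>"].find({"name": "example"}))

# ...then immediately ask Cosmos DB for the cost of that last request
stats = db.command({"getLastRequestStatistics": 1})
print(stats["RequestCharge"], stats["CommandName"])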
