MS Azure Automated ML - Output JSON being sent as text

I have started using Azure ML Studio and have come across an issue with the Automated ML model. I create an AutoML run and get decent precision. I deploy the model and get an endpoint using the out-of-the-box deploy button. I use Postman to test the endpoint and get a response, but the response is in text format.
What I'm getting:
"{\"result\": [\"Prediction Label X\"]}"
What I'm expecting:
{"result":["Prediction Label X"]}
Postman has Accept and Content-Type both set to application/json.
Of course I could clean this text response up and parse it as JSON, but I'd rather get it directly from Azure in the correct format.
There doesn't appear to be anywhere in ML Studio to modify the code or the response format, and I'm new to Azure ML Studio.
Any thoughts?

The service is returning the raw text; you can use json.loads(response.content.decode("utf-8")) to convert it to JSON.
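For example, here is a minimal client-side sketch; the scoring URI, key, and input payload below are placeholders, not taken from the question. Because the body is a JSON string that itself contains JSON, it takes two json.loads passes to get back to a dictionary.

import json
import requests

# Placeholders -- substitute the real scoring URI, endpoint key, and input schema.
scoring_uri = "https://<your-scoring-uri>/score"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <your-endpoint-key>",
}
payload = {"data": [{"feature_1": 1.0, "feature_2": "A"}]}

response = requests.post(scoring_uri, json=payload, headers=headers)

# The body arrives as "{\"result\": [\"Prediction Label X\"]}", i.e. a JSON-encoded string.
inner_text = json.loads(response.content.decode("utf-8"))  # '{"result": ["Prediction Label X"]}'
result = json.loads(inner_text)                            # {'result': ['Prediction Label X']}
print(result["result"][0])                                 # Prediction Label X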

Related

Get bulk pipeline runs by providing run ids as a list from .NET

I have an endpoint which tells me the status of a given ADF pipeline from .NET. For that purpose I use the .NET SDK for ADF; specifically, I run PipelineRun pipelineRun = client.PipelineRuns.Get(resourceGroup, dataFactoryName, runId); and then retrieve the status from pipelineRun.Status. The only thing I get from a user here is runId. However, I have a scenario where I need to send a list of runIds. From what I've seen reading the official documentation, most of their APIs work with a runId of type str, which means they only work per runId. Has anyone ever stumbled upon a scenario like this, and how did you manage to get the status of multiple runIds? Did you use an already-built function from the SDK, or did you just for-loop PipelineRun pipelineRun = client.PipelineRuns.Get(resourceGroup, dataFactoryName, runId); listSize times?
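Not an authoritative answer, but the loop the question describes is straightforward. Below is a minimal sketch using the Python Data Factory SDK (azure-mgmt-datafactory) purely for illustration, with placeholder names; client.pipeline_runs.get(...) is the Python counterpart of the PipelineRuns.Get(...) call above.

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Placeholders -- not from the question; substitute real values.
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
data_factory_name = "<data-factory-name>"
run_ids = ["<run-id-1>", "<run-id-2>"]  # the list supplied by the caller

client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

# One GET per run id; .status mirrors pipelineRun.Status in the .NET snippet above.
statuses = {
    run_id: client.pipeline_runs.get(resource_group, data_factory_name, run_id).status
    for run_id in run_ids
}
print(statuses)

The SDKs also expose a pipeline-run query operation (QueryByFactory in .NET, pipeline_runs.query_by_factory in Python) that takes a filter on RunId with an In operator, which may avoid the loop entirely; check the SDK version you are on before relying on it.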

Azure ML Studio web service output

I made a web service from a Python notebook and the output is:
{"Results":{"output1":{"type":"table","value":{"Values":[["1"]]}},"output2":{"type":"table","value":{"Values":[["data:text/plain,Execution OK\r\n",null]]}}}}
But I just want the response to be the value of the "Values" key so that I don't have to parse it on the client side. Is that possible?
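If the client-side parse turns out to be unavoidable, the extraction itself is short; here is a minimal sketch that simply embeds the response body shown above as a literal.

import json

# The response body from the question, embedded as a literal for illustration.
raw = (
    '{"Results":{"output1":{"type":"table","value":{"Values":[["1"]]}},'
    '"output2":{"type":"table","value":{"Values":[["data:text/plain,Execution OK\\r\\n",null]]}}}}'
)

payload = json.loads(raw)
values = payload["Results"]["output1"]["value"]["Values"]
print(values)  # [['1']]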

RavenDB patch API in the embedded version of the server

Is there any difference in the patch API between the embedded and standard versions of the server?
Is there a need to configure the document store in some way to enable the patch API?
I'm writing a test which uses embedded RavenDB. The code works correctly on the standard version, but in the test it doesn't: I'm constantly receiving the patch result DocumentDoesNotExists. I've checked with the debugger and the document exists in the store, so it is not a problem with the test.
Here you can find a repro of my issue: https://gist.github.com/pblachut/c2e0e227fa3beb51f4f9403505c292bb
I've reached out to RavenDB support and I have an answer to my question.
There should be no difference between the embedded and normal versions of the server. The problem was that I did not explicitly pass the database against which I wanted to invoke the batch command; as a result, I was trying to patch a document in the system database.
var result = await documentStore.AsyncDatabaseCommands.ForDatabase("testDb").BatchAsync(new[] {command});
I assumed that the database name would be taken from the session (because I get documentStore from there), but the database name should always be passed explicitly.
var documentStore = session.Advanced.DocumentStore;

AWS PDF upload through HTTP POST

I am new to AWS and I am trying to upload a PDF document to S3 through an AWS API. I am using an HTML form with a POST method; the action of the form is the URL of the deployed API, and the API is integrated with a Lambda function. My question is: how can I extract the uploaded file within the Lambda function, to perform some processing before uploading to S3? Is it even possible?
I have tried the instructions found in this post:
Passing HTTP Post from AWS API GW to Lambda
However, I return the event from the Lambda function and this is what I get:
{file: file.pdf, acl: private, success_action_redirect: http://localhost/, AWSAccessKeyId: my_aws_key}
The file I uploaded is called file.pdf.
Any guidance will be appreciated.
A PDF file is a binary format. API Gateway does not currently support binary data. We know that binary data does not work and there are no workarounds to make it work reliably. A number of customers have requested that we add binary support to API Gateway, and it is prioritized on our backlog.

How to send API payload content to BAM

I'm using API Manager version 1.7.0 and BAM version 4.2.0. After installing the API_Manager_Analytics toolbox I have some predefined field values (e.g. payload_api, payload_apiPublisher, etc.). For the requests, I see them in the Cassandra DB under EVENT_KS org_wso2_apimgt_statistics_request. How do I get the field values of the requests used to invoke the APIs in org_wso2_apimgt_statistics_request? How do I pass the SOAP body payload content to BAM? Thanks, Gius
You can write a custom data publisher as mentioned here [1], or you can use the BAM mediator [2] and publish into a separate stream.
[1] https://nadeesha678.wordpress.com/2015/12/14/how-to-publish-custom-set-of-data-from-api-manager-to-wso2-business-activity-monitor/
[2] https://docs.wso2.com/display/ESB481/BAM+Mediator
