Paw - Get last request made using specific environment

I am trying to use a dynamic field from a parsed response. The parsed response must be from the last request made using a specific environment. Is this possible?
Here's the scenario:
1. Make Request 1 using Environment A; receive Response A1
2. Make Request 1 using Environment B; receive Response B1
3. Make Request 2 using Environment A, with a field from parsed Response A1; receive Response A2
4. Make Request 2 using Environment B, with a field from parsed Response B1; receive Response B2
How do I orchestrate steps 3 and 4?

We are planning to implement this properly using tabs on macOS Sierra: each tab will act as a dedicated session, and you will be able to pin the environment selection to a tab.
This is not implemented in Paw yet, but you can write a custom dynamic value for it or use a hacky workaround:
1. Choose a partitioning variable in your environments.
2. Set an X-paw-env header on Request 1 to the partitioning environment variable. This way, the header records the current value of the partitioning variable for whichever environment the request was sent with.
3. In Request 2, in the field where you are using Response Parsed Body, insert a Custom dynamic value instead. Inside it, get the latest exchange for Request 1 whose request header matches the value of your partitioning variable in the current environment, then extract the value you need from the response body using a RegExp Match dynamic value:
function evaluate(context) {
    // Current value of the partitioning variable in the active environment
    var variableValue = context.getEnvironmentVariableByName("myPartitioningVariable").getCurrentValue();
    // All recorded exchanges for Request 1
    var exchanges = context.getRequestByName("Request1").getAllExchanges();
    for (var i = 0; i < exchanges.length; i++) {
        console.log(i, exchanges[i].requestHeaders["X-paw-env"]);
        // Keep the exchange whose X-paw-env header was set from the same environment
        if (variableValue === exchanges[i].requestHeaders["X-paw-env"]) {
            // Extract the wanted field from the response body via a RegExp Match dynamic value
            var dv = new DynamicValue("com.luckymarmot.RegExMatch", {
                re: '"user":\\s*"([^"]*)',
                input: exchanges[i].responseBody
            });
            console.log(exchanges[i].responseBody);
            console.log(i, "returning");
            return dv.getEvaluatedString();
        }
    }
}


In Azure Stream Analytics, Bad Request results when calling an Azure Machine Learning function, even though the Azure ML service is called fine from C#

We have an Azure Machine Learning web service that is called fine from a C# program, and it works fine when called as an HTTP POST (with headers and a JSON string in the body). However, in Azure Stream Analytics you have to create a function to call an ML service, and when this function is called in ASA, it fails with Bad Request.
The documentation for the ML service gives the following sample request body:
{
  "Inputs": {
    "input": [
      {
        "device": "60-1-94-49-36-c5",
        "uid": "5f4736aabfc1312385ea09805cc922",
        "weight": "9-9-9-9-9-8-9-8-9-9-9-9-9-9-9-9-9-8-9-9-8-8-9-9-9-9-9-9-9-9-9-9-9-9-9-8-9-9-9-9-9-9-9-9-9-9-9-9-9-8-9-9-9-9-9-9-9-9-9-9-9-9-9-9-9-9-9-9-8-9-9-9-9-8-9-9-9-8-9-9-9-9-9-9-9-9-9-8-9-9-9-9-8-8-16-16-15-16-16-15-15-16-15-15-15-15-16-15-15-16-15-15-9-15-15-15-15-15-15-15-9-15-16-15-15-9-15-16-16-16-15-15-15-15-15-15-15-15-16-16-15-9-15-15-15-16-15-16-15-15-15-15-15-16-15-15-16-16-15-15-15"
      }
    ]
  },
  "GlobalParameters": {}
}
The Azure Stream Analytics function (that calls the ML service above) has this signature:
FUNCTION SIGNATURE
SmartStokML2018Aug17 ( device NVARCHAR(MAX),
                       uid NVARCHAR(MAX),
                       weight NVARCHAR(MAX) ) RETURNS RECORD
Here the function is expecting 3 string arguments (NVARCHAR as shown) and NOT a full JSON string.
The 3 parameters (device, uid and weight) have been passed in, and in different string formats: as JSON strings built with JSON.stringify() in a UDF, and as plain values with no field names ("device", "uid", "weight"). But all calls to the ML service fail.
WITH QUERY1 AS (
    SELECT DEVICE, UID, WEIGHT,
        udf.jsonstringify(concat('{"device": "', try_cast(device as nvarchar(max)), '"}')) jsondevice,
        udf.jsonstringify(concat('{"uid": "', try_cast(uid as nvarchar(max)), '"}')) jsonuid,
        udf.jsonstringify(concat('{"weight": "', try_cast(weight as nvarchar(max)), '"}')) jsonweight
    FROM iothubinput2018aug21 ),
QUERY2 AS (
    SELECT IntellistokML2018Aug21(JSONDEVICE, JSONUID, JSONWEIGHT) AS RESULT
    FROM QUERY1
)
SELECT *
INTO OUT2BLOB20
FROM QUERY2
Most of the errors are:
ValueError: invalid literal for int() with base 10: '\\" {weight:9'\n\r\n\r\n
In what format does the ML Service expect these parameters to be passed in?
Note: the queries have been tried with ASA Compatibility Level 1 and 1.1.
In an ASA function, you don't need to construct the JSON input to Azure ML yourself; you just specify your event fields directly. E.g.:
WITH QUERY1 AS (
    SELECT IntellistokML2018Aug21(DEVICE, UID, WEIGHT) AS RESULT
    FROM iothubinput2018aug21
)
SELECT *
INTO OUT2BLOB20
FROM QUERY1
As mentioned in Dushyant's post, you don't need to construct the JSON input for Azure ML. However, I've noticed that your input is nested JSON containing an array, so you need to extract the fields in a first step.
Here is an example:
WITH QUERY1 AS (
    SELECT
        GetRecordPropertyValue(GetArrayElement(inputs.input, 0), 'device') as device,
        GetRecordPropertyValue(GetArrayElement(inputs.input, 0), 'uid') as uid,
        GetRecordPropertyValue(GetArrayElement(inputs.input, 0), 'weight') as weight
    FROM iothubinput2018aug21 )
Please note that if you can have several messages in the "Inputs.input" array, you can use CROSS APPLY to read all of them (in my example above I assumed there is only one); see the sketch below.
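For illustration, a minimal sketch of that variant (the flat alias is mine; the input and field names are reused from above):
WITH QUERY1 AS (
    SELECT
        GetRecordPropertyValue(flat.ArrayValue, 'device') as device,
        GetRecordPropertyValue(flat.ArrayValue, 'uid') as uid,
        GetRecordPropertyValue(flat.ArrayValue, 'weight') as weight
    FROM iothubinput2018aug21 i
    CROSS APPLY GetArrayElements(i.inputs.input) AS flat )
Each element of "Inputs.input" then becomes its own row, so every message in the array gets scored.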
More information on querying JSON here: https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-parsing-json
Let us know if it works for you.
JS (Azure Stream Analytics)
It turns out the ML service is expecting devices with a KNOWN MAC ID. If a device is passed in with an UNKNOWN MAC ID, then there is a failure in the Python script. This should be handled more gracefully.
Now there are errors related to batch processing of rows:
"Error": "- Condition 'The number of events in Azure ML request ID 0 is 28 but the
number of results in the response is 1. These should be equal. The Azure ML model
is expected to score every row in the batch call and return a response for it.'
should not be false in method
'Microsoft.Streaming.CalloutProcessor.dll
!Microsoft.Streaming.Processors.Callout.AzureMLRRS.ResponseParser.Parse'
(Parse at offset 69 in file:line:column <filename unknown>:0:0\r\n)\r\n",
"Message": "An error was encountered while calling the Azure ML web service. An
error occurred when parsing the Azure ML web service response. Please check your
Azure ML web service and data model., - Condition 'The number of events in Azure ML
request ID 0 is 28 but the number of results in the response is 1. These should be
equal. The Azure ML model is expected to score every row in the batch call and
return a response for it.' should not be false in method
'Microsoft.Streaming.CalloutProcessor.dll
!Microsoft.Streaming.Processors.Callout.AzureMLRRS.ResponseParser.Parse' (Parse at
offset 69 in file:line:column <filename unknown>:0:0\r\n)\r\n, :
OutputSourceAlias:query2Callout;",
Type": "CallOutProcessingFailure",
"Correlation ID": "2f87188e-1eda-479c-8e86-e2c4a827c6e7"
I am looking into this article for guidance:
Scale your Stream Analytics job with Azure Machine Learning functions: https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/stream-analytics/stream-analytics-scale-with-machine-learning-functions.md
I am unable to add a comment to the original thread regarding this so replying here:
"The number of events in Azure ML
request ID 0 is 28 but the number of results in the response is 1. These should be
equal"
ASA's callout to Azure ML is modeled as a scalar function. This means that every input event needs to generate exactly one output. In your case, it seems that you are generating one output for 28 input events. Can you modify your logic to generate an output per input event?
Regarding the JSON format:
{ "Inputs":{ "input":[ { "device":"60-c5", "uid":"5f422", "weight":"9--15" } ] }, "GlobalParameters":{ } }
All the extra markup will be added by ASA when calling AML. Do you have a way of inspecting the input received by your AML web service? For example, modify your model code to write its input to blob storage.
AML calls are expected to follow scalar semantics: one output per input, as in the sketch below.
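For illustration, if the model were an Azure ML Studio "Execute Python Script" module, a minimal sketch of that contract might look like this (the scoring logic is a placeholder, not taken from this thread):
def azureml_main(dataframe1=None, dataframe2=None):
    # Score every row and return exactly one output row per input row,
    # which is what ASA's scalar-function callout expects.
    dataframe1["score"] = [score_row(row) for _, row in dataframe1.iterrows()]
    return dataframe1,

def score_row(row):
    # Placeholder scoring logic, for illustration only.
    return 0.0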

Test Step Move Properties on SoapUI with http post request

I created a SoapUI test suite with a test case.
The test case has an HTTP request as a test step.
The method of the HTTP request is POST.
The HTTP request has the parameters P_FILNR=1111&P_HDLNR=123456.
How can I set/modify these parameters within the test step?
As mentioned by @A Joly above, here is the code which can help you. I have used a custom property and a Groovy script.
First of all, you can reference a property in the parameter value like ${#TestCase#address}, which expands to the test case property named address.
You can now add a Groovy script step with the code below:
def values = ["India", "Russia", "USA"]
// Set the test case property "address" to each value, then run the request step
for (int i = 0; i < values.size(); i++) {
    testRunner.testCase.setPropertyValue("address", values[i])
    testRunner.runTestStepByName("Request 1")
}
So what happens here is that the test step we want to run is named "Request 1". We set the value of address dynamically and run the step via Groovy.
You can also disable the "Request 1" step so that it does not run on its own when you run the suite, because the Groovy script already runs the request 3 times for the 3 values.

AWS API Gateway - change to 404 if query returns nothing

I have a DynamoDB table with a few fields; my_id is the primary key. In the API Gateway I set up a response with a method that takes in a parameter {my_id}.
Then I have an Integration Request mapping template that takes the passed in parameter and queries the table to return all the fields that match.
Then I have an Integration response mapping template that cleans up the returned items the way I want.
This all works perfectly.
The thing I can't figure out is: if the parameter that is passed in doesn't match anything in the table, how do I change the 200 status into a 404?
From what I can tell, when the passed-in parameter doesn't match anything, it doesn't cause an error; it just doesn't return anything.
It seems like I need to change the mapping template on the Integration response to first check if the params are empty and then somehow tell it to change the response status.
I can find info about this type of thing with people using Lambda, but I am not using Lambda - just the Dynamodb table and the API Gateway.
You can use a mapping template to convert the response that you get from DynamoDB and override the response code. You can get more details at https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-override-request-response-parameters.html
If you are using CloudFormation, you can do this with the snippets below. A plain integration response like this always returns 200:
IntegrationResponses:
  - StatusCode: "200"
    ResponseTemplates:
      application/json: |
        {
          "payload": {}
        }
With a response override, the same integration response returns 404 when DynamoDB finds no item:
IntegrationResponses:
  - StatusCode: "200"
    ResponseTemplates:
      application/json: |
        #set($inputRoot = $input.path('$'))
        #if($inputRoot.toString().contains("Item"))
        $input.json("$")
        #set($context.responseOverride.status = 200)
        #else
        #set($context.responseOverride.status = 404)
        #end
API Gateway currently supports mapping the status code using the status code of the integration response (here, the DynamoDB response code). The only workaround is to use a Lambda function which outputs different error messages that can be mapped using an error regex: http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-method-settings-execution-console.html.
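A minimal sketch of that workaround (table name, key shape and error text are assumptions, not from the question): the Lambda throws a distinctive error message, and a 404 integration response whose selection pattern matches it (e.g. NotFound.*) returns the 404.
// Hypothetical Lambda lookup for the DynamoDB table described above.
// If no item matches, it throws an error whose message the integration
// response selection pattern "NotFound.*" can map to a 404.
const AWS = require("aws-sdk");
const ddb = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
    const result = await ddb.get({
        TableName: "my-table",          // assumed table name
        Key: { my_id: event.my_id }     // my_id passed in via the mapping template
    }).promise();
    if (!result.Item) {
        throw new Error("NotFound: no record for " + event.my_id);
    }
    return result.Item;
};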

Benchmarking Solr indexing with JMeter

I want to use JMeter to update a document on Solr using HTTP POST.
I want it to take a different file to update in every iteration, create a proper HTTP POST request, and monitor the responses from the server.
Can someone guide me on how this can be done:
Taking a different file every time.
Creating an HTTP POST request from it.
Your use case can be split into 2 parts:
Get the list of files to send
Send them to the server
Regarding point 1, I would suggest obtaining the file list via scripting.
Assuming the following Test Plan structure:
Add a Thread Group (all defaults)
Add a JSR223 Sampler as a child of the Thread Group
Select "beanshell" as the language
In the "Script" area, add the following code:
// Collect all files from the folder and expose each path as a
// JMeter variable: FILE_1, FILE_2, ...
File folder = new File("PATH TO FOLDER WITH YOUR FILES");
File[] files2send = folder.listFiles();
int counter = 1;
for (File file : files2send) {
    vars.put("FILE_" + counter, file.getPath());
    counter++;
}
This will save the files you will be sending as JMeter variables like:
FILE_1=d:\2solr\sometxtfile.txt
FILE_2=d:\2solr\somewordfile.docx
FILE_3=d:\2solr\someexcelfile.xlsx
After that, you can use a ForEach Controller to iterate through the files and add them to your request:
Add a ForEach Controller as a child of the Thread Group (same level as the JSR223 Sampler)
Make sure the ForEach Controller has the following configuration:
Input variable prefix: FILE
Output variable name: CURRENTFILE
"Add _ before number" is checked
Add an HTTP Request as a child of the ForEach Controller
Access the file you want to send as ${CURRENTFILE} in the "Send Files With The Request" section of the HTTP Request
It's just one of the approaches; if you are not very comfortable with JSR223 or Beanshell, you may wish to use a CSV Data Set Config instead, as sketched below.
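A rough outline of that alternative (file and variable names are assumptions): create a CSV file listing the file paths, then let a CSV Data Set Config feed them into the variable the HTTP Request uses.
files.csv (one file path per line):
d:\2solr\sometxtfile.txt
d:\2solr\somewordfile.docx
d:\2solr\someexcelfile.xlsx
CSV Data Set Config:
Filename: files.csv
Variable Names: CURRENTFILE
The HTTP Request then references ${CURRENTFILE} exactly as in the ForEach approach, with no scripting required.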

Is an Angular http request called again when a scope variable changes?

I have to use an HTTP GET request in my Angular script, where I have to send some variables to the server.
My question is: if the variable being sent is changed somehow, will the request be called again automatically, or do I have to call the request again?
Thanks
Updated:
Code in my controller:
$scope.startDate = "";
$http.get('/Controller/Action', { params: { startDate: $scope.startDate } })
    .success(function (data) {
        alert(data);
    });
If the value of startDate is changed somehow, will the HTTP request be called again, or do I have to place it into a watch?
While the question is unclear, I believe what you are referring to is a $watch set up on a scope property. If you make a normal request, such as this:
$scope.myResource = 'path/to/resource'; // could be used without $scope for this example
$http.get($scope.myResource) // etc.
the call is just made once, because that's all it is told to do. If you want it to update when the path "myResource" changes, then do this:
$scope.$watch('myResource', function(newPath) { // watching $scope.myResource for changes
    $http.get(newPath); // etc.
});
Now, when the value of $scope.myResource changes, the $http call will be made again, this time requesting the new path.
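Applied to the startDate code from the question above, a minimal sketch (using $http's params option to send the value as a query-string parameter) would be:
$scope.startDate = "";

// Re-issue the GET whenever startDate changes; note the watch also
// fires once when it is first registered.
$scope.$watch('startDate', function (newDate) {
    $http.get('/Controller/Action', { params: { startDate: newDate } })
        .success(function (data) {
            alert(data);
        });
});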
