Is it possible to use Paw and create a parsed loop? - paw-app

I'd like to parse the response of a request that returns many pieces of data. Can Paw build an array from it and rerun requests? Example: my GET returns 50 segments and I want to run a subsequent request against each of those 50 segments. Not the most efficient approach, but that's how the system I'm working with is built.
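For reference, this is the kind of loop being described, written outside Paw as a plain TypeScript sketch against a hypothetical /segments endpoint; the URL, the Segment shape and the follow-up route are assumptions purely to illustrate the fan-out pattern.

```typescript
// Hypothetical endpoint and response shape, only to illustrate the desired loop:
// one GET returning ~50 segments, then one follow-up request per segment.
interface Segment {
  id: string;
}

async function fanOut(baseUrl: string): Promise<void> {
  const listRes = await fetch(`${baseUrl}/segments`);
  const segments: Segment[] = await listRes.json();

  // Rerun the follow-up request once per parsed segment (sequentially here,
  // to mirror "run a subsequent request against each of those 50 segments").
  for (const segment of segments) {
    const detailRes = await fetch(`${baseUrl}/segments/${segment.id}/details`);
    console.log(segment.id, detailRes.status);
  }
}
```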

Related

How are requests and tests organized in Postman?

I am investigating Postman but I can't find an optimal way to organize the requests. Let me give an example:
Let's imagine I'm testing two microservices and my goal is to automate the tests and create a report that informs us of the outcome. Each microservice has 10 possible requests to perform (GET, POST, etc.).
The ideal solution would be to have a collection containing two folders (one per microservice), each folder with 10 requests, and to be able to run that collection from Newman with a CSV/JSON data file, generating a final report (with tools like htmlextra).
The problem comes with data and iterations. Regarding iterations, some requests, such as certain GETs, need only 1 execution, but others, such as certain POSTs, need several executions to check the status (whether it returns a 200, a 400, etc.). Regarding data, with 10 requests in the same collection and some of them occupying quite a few columns of the CSV, the data file becomes unreadable and hard to maintain.
So what I get when I run the collection is an unreadable data file and many iterations on requests that don't need them. As an alternative, I could create 1 collection for each request and a data file for each collection, but then I could not get a report as a whole when executing from Newman, since running several collections requires several Newman commands, each generating its own report file. I would end up with 20 HTML files as reports, 1 for each request (also not readable).
Sorry if I am pointing in the wrong direction, as I have no experience with the tool.
On the web I only see examples of reports and basic collections, but I'm left wanting to see something more 'real'.
Thanks a lot!!!
Summary of my notes:
Only 1 data file can be assigned to the collection runner.
Many requests have a lot of data, around 10 columns each, so one shared data file for several requests becomes unreadable. Ideally there would be 1 file per request.
This forces us to create a collection for each request, since only 1 file per collection is feasible.
Newman can only run 1 collection per command; to launch more, you have to repeat the command, which forces a report per collection. With 1 collection per request, we would get 1 report per request instead of a single report with the status of all the requests, which defeats the purpose. (A sketch of running Newman programmatically to work around this follows below.)
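A minimal sketch of one workaround under these constraints, assuming Newman's Node API (newman.run, with types from @types/newman) and hypothetical collection/data-file names: run one collection per request, each with its own small CSV, from a single script, and fold the run summaries into one overview instead of one HTML report per command.

```typescript
import * as newman from 'newman';

// Hypothetical collection/data-file pairs: one small, readable CSV per request.
const runs = [
  { collection: 'serviceA-get.postman_collection.json', iterationData: 'serviceA-get.csv' },
  { collection: 'serviceA-post.postman_collection.json', iterationData: 'serviceA-post.csv' },
  // ...one entry per request
];

// Wrap newman.run in a promise so the runs can be chained from one script.
function runOne(opts: { collection: string; iterationData: string }): Promise<newman.NewmanRunSummary> {
  return new Promise((resolve, reject) => {
    newman.run({ ...opts, reporters: ['cli'] }, (err, summary) =>
      err ? reject(err) : resolve(summary)
    );
  });
}

(async () => {
  for (const r of runs) {
    const summary = await runOne(r);
    const { requests, assertions } = summary.run.stats;
    // One combined overview line per collection instead of 20 separate HTML reports.
    console.log(
      `${r.collection}: ${requests.total} requests, ` +
        `${assertions.failed} of ${assertions.total} assertions failed`
    );
  }
})();
```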

Amazon Textract returns different results between WebApp Demo, AnalyzeDocumentRequest and StartDocumentAnalysisRequest

This is my first question on Stack Overflow. I would like to extract key-value pairs (FORMS) from a (scanned) PDF document via Amazon Textract. What I have noticed, however, is that some key-value pairs returned by the webapp demo (https://us-east-2.console.aws.amazon.com/textract/home?region=us-east-2#/demo) are absent from the results of the methods that can be implemented in code.
Furthermore, between these two methods, the synchronous one (AnalyzeDocumentRequest), which does not accept PDFs and forces a pre-conversion of the document into an image, in turn finds key-value pairs (Sync Result Example) that the asynchronous method does not (Async Result Example).
The problem is similar to this one, which discusses the difference in results between the two methods of analyzing the document: AWS Textract - GetDocumentAnalysisRequest only returns correct results for first page of document
The code implementation follows these examples:
Synchronous Method: https://docs.aws.amazon.com/textract/latest/dg/examples-extract-kvp.html
Asynchronous Method: https://github.com/awsdocs/amazon-textract-developer-guide/blob/master/doc_source/async-analyzing-with-sqs.md
Has anyone ever had the same problem?
We had this problem recently. The demo website provided by AWS found 50 fields; our own code using the provided API yielded 30 fields.
After some trial and error and a lot of Googling, we found that the response returned by GetDocumentAnalysisAsync included a NextToken, which is used to ask for more results. It turns out we had to call GetDocumentAnalysisAsync again with this token (rinse and repeat) until the response no longer included a NextToken.
At that point we knew we had all the data.
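A minimal sketch of that pagination loop, assuming the AWS SDK for JavaScript v3 (@aws-sdk/client-textract) and an already started analysis job; it keeps calling GetDocumentAnalysis with the returned NextToken until no token comes back.

```typescript
import {
  TextractClient,
  GetDocumentAnalysisCommand,
  Block,
} from '@aws-sdk/client-textract';

// Collect every block for a finished asynchronous analysis job by following
// NextToken until the last page is reached.
async function getAllBlocks(client: TextractClient, jobId: string): Promise<Block[]> {
  const blocks: Block[] = [];
  let nextToken: string | undefined;
  do {
    const page = await client.send(
      new GetDocumentAnalysisCommand({ JobId: jobId, NextToken: nextToken })
    );
    blocks.push(...(page.Blocks ?? []));
    nextToken = page.NextToken; // undefined once there are no more pages
  } while (nextToken);
  return blocks;
}
```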

JMeter: How do I create a separate report for each assertion?

Right now, my HTTP call has 3 assertions. The reporting options for the results listener only provide a checkbox for "Assertion Results", which lumps all of my assertion results into one value in the CSV output.
The team would like a CSV output for each assertion. The problem is that you can't add a results listener under an assertion; it must be under the HTTP call. I can't think of a way to create separate reports besides making three separate HTTP calls, each with its own results listener writing a report. That is not ideal.
I tried this with a JTL results file in XML format.
Below is the config. Save only the minimum parameters required, as saving everything consumes a lot of resources.
Below is the output.
But I think that if you also save response times in the same file, the same entry will appear twice, and other calculations might be affected as well. So you can use two listeners: one configured like this and another Simple Data Writer.
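A minimal sketch of splitting that XML JTL into one CSV per assertion, assuming the fast-xml-parser package and a results file saved with assertion results enabled; the element and attribute names (httpSample, assertionResult, ts, lb, s) follow the standard JTL 2.x XML layout, and the file names are illustrative.

```typescript
import { readFileSync, writeFileSync } from 'fs';
import { XMLParser } from 'fast-xml-parser';

// fast-xml-parser returns a single object instead of an array when only one
// element of a kind exists, so normalize to an array first.
const toArray = (x: any): any[] => (Array.isArray(x) ? x : x != null ? [x] : []);

const parser = new XMLParser({ ignoreAttributes: false, attributeNamePrefix: '@_' });
const jtl = parser.parse(readFileSync('results.jtl', 'utf8'));

// One CSV body per assertion name.
const reports: Record<string, string[]> = {};

for (const sample of toArray(jtl?.testResults?.httpSample)) {
  for (const assertion of toArray(sample.assertionResult)) {
    const name: string = assertion.name ?? 'unnamed-assertion';
    (reports[name] ??= ['timestamp,label,sampleSuccess,assertionFailure,failureMessage']).push(
      [
        sample['@_ts'],
        sample['@_lb'],
        sample['@_s'],
        assertion.failure,
        assertion.failureMessage ?? '',
      ].join(',')
    );
  }
}

for (const [name, lines] of Object.entries(reports)) {
  writeFileSync(`assertion-${name.replace(/\W+/g, '_')}.csv`, lines.join('\n'));
}
```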
Hope this helps.

Bulk operation on GET

What would be the best way to design an API to accept a request for a bulk GET? I currently have a scenario where I have about 100 IDs. I don't want to call the API 100 times to get each resource, but sending 100 GUIDs in a query string doesn't seem right either.
What is the proper way to handle this scenario?
If you have too many specific parameters, a common approach is to use POST with a JSON body where you specify what you want.
But having a big query string is not bad either (just remember there is a limit on its maximum length). You can even have an array in the query string; it is sent (and consumed) like this: http://something.com?ids[0]=7&ids[1]=33&ids[2]=5
Frameworks (like Spring) are able to automatically convert these parameters into arrays or lists.
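A short sketch of both options against a hypothetical https://api.example.com/items endpoint; the route names, the POST body shape and the ids are assumptions just to show the two request styles.

```typescript
const ids = ['0f8fad5b-d9cb-469f-a165-70867728950e', '7c9e6679-7425-40de-944b-e07fc1f90ae7'];

// Option 1: POST with a JSON body describing which resources you want.
async function bulkFetchViaPost(): Promise<unknown> {
  const res = await fetch('https://api.example.com/items/search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ ids }),
  });
  return res.json();
}

// Option 2: GET with the ids encoded as an array in the query string
// (keep the URL-length limit in mind for ~100 GUIDs).
async function bulkFetchViaQuery(): Promise<unknown> {
  const query = ids.map((id, i) => `ids[${i}]=${encodeURIComponent(id)}`).join('&');
  const res = await fetch(`https://api.example.com/items?${query}`);
  return res.json();
}
```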

Asynchronous merging of web responses in clojure

I am trying to write a piece of code that would:
access 2 web services with some request
the responses will be sequences of objects, each object identified by ID, with the responses sorted by ID in ascending order
the responses will be large and streamed (or gzip chunked)
the result will be a merge of data from the two inputs based on IDs
What I try to achieve is that once the corresponding parts of the responses are available, the output should be written out. I also don't want to wait for the whole responses to be in place, since that would run out of memory. I want to start streaming output as soon as I can and keep as little in memory as possible.
What would be a good way to start?
I have taken a look at aleph and lamina, and also async.http.client. It seems that these tools could help me, but I struggle with figuring out how to have one piece of code that reacts to having the same part of the responses from both web services.
You can do something like this (using Aleph, which under the hood uses the Lamina channels abstraction).
Use sync-http-request to create the 2 HTTP requests.
Get the :body from the 2 request objects created above. Example: https://github.com/ztellman/aleph/wiki/Consuming-and-Broadcasting-a-Twitter-Stream
The :body is a Lamina channel; use Lamina's join method to join the 2 channels into one channel.
Subscribe to the above channel (the result of the join call).
Now the subscription callback will receive each JSON object as soon as it arrives on either of the channels, and you can then keep a local atom holding a map whose keys are the values on which you want to combine the results from the 2 channels and whose values are vectors storing the items seen for each key. It goes something like this (see the sketch after this list):
On receiving an item in the callback, check whether the map in the local atom already has the key.
If the key is already there, store or do whatever other processing you need with the 2 items (the one already in the map and the one you just received), and remove the key from the map.
If the key is not there, add it with the value [item], i.e. a vector containing just the item received now.
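A minimal sketch of that pairing logic, written here in TypeScript with a plain Map standing in for the local atom and a callback standing in for the joined channel's subscription; the Item shape and the emit parameter are assumptions for illustration.

```typescript
interface Item {
  id: number;
  [field: string]: unknown;
}

// Items seen from only one of the two streams so far, keyed by id.
const pending = new Map<number, Item>();

// Called for every item arriving from either stream (the joined channel's
// subscription callback in the Aleph/Lamina setup described above).
function onItem(item: Item, emit: (merged: [Item, Item]) => void): void {
  const partner = pending.get(item.id);
  if (partner !== undefined) {
    // Second half of the pair has arrived: emit the merged pair and drop the
    // key, so memory only holds ids still waiting for their counterpart.
    pending.delete(item.id);
    emit([partner, item]);
  } else {
    pending.set(item.id, item);
  }
}
```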
