Request reuse in Postman

Our team wants to automate our REST API testing. Right now, we have a collection of Postman requests and make them jump through hoops manually.
We could create a collection/folder for each testing scenario, but that would mean a ton of duplication. Our API is still under heavy development, and I really don't want to fix the same thing in twenty places after it changes.
I would like to have each endpoint request only once in a collection, plus some kind of independent logic that can execute them in an arbitrary order. I know Postman doesn't support request reuse in any clean way, so I am looking for at least a hacky way to do it.

Create a file to load into the Postman Collection Runner, with the following structure:
[{
    "testSequence": ["First request name", "Second request name", "..."],
    "anyOtherData": "Whatever the request needs",
    "evenMoreData": "Whatever the request needs",
    "...": "..."
},{
    "testSequence": ["Login", "Check newsfeed", "Send a picture", "Logout"],
    "username": "Example",
    "password": "correcthorsebatterystaple"
},{
    "...": "keep the structure for any other test scenario or request sequence"
}]
Put all your test sequences in that file, then make Postman check the list after each request and decide what to execute next. This can be done, e.g., in the "Tests" script of the whole collection:
// Use the mechanism only if there is a test scenario file.
// This IF prevents the block from firing when running single requests in Postman.
if (pm.iterationData.get("testSequence")) {
    // Postman variables store strings, so the remaining sequence is kept as JSON.
    var sequence = JSON.parse(pm.globals.get("testSequence") || "null");
    if (!(sequence instanceof Array)) {
        // First request of this iteration: load the sequence from the data file.
        sequence = pm.iterationData.get("testSequence");
    }
    // Is there another request in the scenario?
    if (sequence.length > 0) {
        // If so, set it as the next one.
        var nextRequest = sequence.shift();
        pm.globals.set("testSequence", JSON.stringify(sequence));
        postman.setNextRequest(nextRequest);
    } else {
        // Otherwise, this was the last one. Clean up and finish the execution.
        pm.globals.unset("testSequence");
        postman.setNextRequest(null);
    }
}
If your requests need to use different data during different runs, you can define the data in the input file and use them as variables in the request.
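For example, the "Login" request above could take its body from the data file by referencing the fields as Postman variables; the Collection Runner substitutes the values of the current entry:
{
    "username": "{{username}}",
    "password": "{{password}}"
}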

IPFS uri format: https://ipfs.io/ipfs/<CID> vs. ipfs://<CID>?

Here is my test tokenURI.json file with the imageURI I pass to my token contract.setTokenURI():
{
    "attributes": [
        {
            "trait_type": "location",
            "value": "West Awesomeville"
        },
        {
            "display_type": "date",
            "trait_type": "created",
            "value": 1535250800
        }
    ],
    "description": "My awesome NFT.",
    "image": "https://ipfs.io/ipfs/QmaUXii41ESnUMxLJUoVcrEeXowz7RHcdTiumvrBmUvcwG?filename=test4.png",
    "name": "NFT 1"
}
Which is the best IPFS URI form to use, especially if I want to load this NFT into Opensea?
The docs in IPFS recommend:
https://ipfs.io/ipfs/<CID>
but the docs in Opensea recommend:
ipfs://<CID>
Which form is better and why?
In the above JSON I'm using the first form, the one recommended by IPFS. It works, but loading into Opensea is slow and somewhat unpredictable.
The form Opensea recommends is shorter and involves no gateway. Would the image load faster in Opensea if I used the second form?
IPFS docs: Address IPFS on the Web
Opensea docs:
If you use IPFS to host your metadata, your URL should be in the format ipfs://CID. For example, ipfs://QmTy8w65yBXgyfG2ZBg5TrfB2hPjrDQH3RCQFJGkARStJb.
The ipfs:// URL is the better way, because gateways can go down. The IPFS pinning service you're using (pinata.cloud?) can also go down, or you can stop paying them and they will make your stuff disappear.
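For example, the image field in the metadata above would become "image": "ipfs://QmaUXii41ESnUMxLJUoVcrEeXowz7RHcdTiumvrBmUvcwG" (the ?filename= query parameter is gateway-specific, so it is dropped).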
Opensea is not likely to care: as long as they can find your metadata and images from the URI returned by the contract, they will list your thing, and there's a way somewhere to do a metadata refresh (if you do a reveal).
If I may also suggest: it might be a good idea to include a way to update the baseURI in the contract, just in case.

Always get “Cannot parse non Measurement Protocol hits”

I have a little Python program that other people use, and I would like to offer opt-in telemetry so that I can get an idea of the usage patterns. Google Analytics 4 with the Measurement Protocol seems to be the thing that I want to use. I have created a new property and a new data stream.
I have tried to validate the request by POSTing to www.google-analytics.com/debug/mp/collect?measurement_id=G-LQDLGRLGZS&api_secret=JXGZ_CyvTt29ucNi9y0DkA with this JSON payload:
{
    "app_instance_id": "MyAppId",
    "client_id": "TestClient.xx",
    "events": [
        {
            "name": "login",
            "params": {}
        }
    ]
}
The response that I get is this:
{
    "validationMessages": [
        {
            "description": "Cannot parse non Measurement Protocol hits.",
            "validationCode": "INTERNAL_ERROR"
        }
    ]
}
I seem to be doing exactly what they do in the documentation and tutorials. I must be doing something wrong, but I don't know what is missing. What do I have to do in order to successfully validate the request?
Try to remove the /debug part of the URL. In the example you followed it is not present, so your request is not quite exactly the same.
We just came across the same issue, and the solution for us was to put https:// in front of the URL. Hope this helps.
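For illustration, a minimal Python sketch of the corrected call, assuming the requests library (the measurement_id and api_secret values are the placeholders from the question; note the explicit https:// scheme):
import requests

# The scheme must be present; a bare "www.google-analytics.com/..." URL is a
# common way to end up with "Cannot parse non Measurement Protocol hits".
url = "https://www.google-analytics.com/debug/mp/collect"
params = {
    "measurement_id": "G-LQDLGRLGZS",
    "api_secret": "JXGZ_CyvTt29ucNi9y0DkA",
}
# client_id identifies a web-stream client; app_instance_id is only for
# Firebase app streams, so send one or the other, not both.
payload = {
    "client_id": "TestClient.xx",
    "events": [{"name": "login", "params": {}}],
}

response = requests.post(url, params=params, json=payload)
print(response.status_code, response.text)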

Replicating Postman endpoints with vanilla python requests

After failing to get the provided Python API to work (I simply do not know enough about authentication), but succeeding with the provided Postman collections, I decided to try to replicate these collection endpoints in Python.
I got off to a good start with the auth endpoint.
Here it is in Postman (screenshot not shown), and here is my Python code replicating it:
import json, requests  # imports the snippet relies on

base_url = 'https://demo.docusign.net/restapi/v2/'
params = {'api_password': 'true'}
headers = {'X-DocuSign-Authentication': json.dumps(
               {"Username": username, "Password": password, "IntegratorKey": clientid}),
           'Content-Type': 'application/json'}
auth_req = requests.get(base_url + 'login_information', params=params, headers=headers)
The auth request yields a 200, just like Postman.
But then I try another request, to /templates/.
Here it is in Postman (screenshot not shown), with the same headers as the auth request above.
I tried many variations of the following:
params = {'accountId': '7787022'}
get_templates = requests.get(base_url + 'templates', params=params, headers=headers)
No matter what I try, I get a 404 instead of a 200 like in Postman.
Any idea what I'm doing wrong?
As per your comment, it looks like you don't have a fully built BaseUrl. The full form of a base URL includes the server, the REST API version, and your account number. Aside from the Login Information and other authentication calls, all standard* REST API calls will start with https://{{server}}.docusign.net/restapi/v2/accounts/{{accountId}}/
A call to GET templates would be made to https://{{server}}.docusign.net/restapi/v2/accounts/{{accountId}}/templates.
*Organization API calls are coming soon and will likely use a different URL.
The following did not fix the issue, though I thought it would, and I still think it is important information:
In the Postman auth call, under 'Tests', there is the following code:
// Parse the login response once and store the values for later requests
var jsonData = JSON.parse(responseBody);
postman.setEnvironmentVariable("accountId", jsonData.loginAccounts[0].accountId);
postman.setEnvironmentVariable("baseUrl", jsonData.loginAccounts[0].baseUrl);
postman.setEnvironmentVariable("password", jsonData.apiPassword);
Even though this is the 'Tests' section, it is often used to set variables (some people at my old company used to do this).
In my Python code, I need to take the response body from the auth request:
{
    "loginAccounts": [
        {
            "name": "Aiden McHugh",
            "accountId": "7787022",
            "baseUrl": "https://demo.docusign.net/restapi/v2/accounts/7787022",
            "isDefault": "true",
            "userName": "Aiden McHugh",
            "userId": "e87........6a4eb",
            "email": "aide....il.com",
            "siteDescription": ""
        }
    ],
    "apiPassword": "HheDl......3MQ="
}
and use the apiPassword value to reset the password in my header.
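A minimal sketch of that follow-up in Python (variable names mirror the question's snippet; the key point, per the answer above, is that baseUrl from the login response already contains /accounts/{accountId}):
login = auth_req.json()
account = login["loginAccounts"][0]

# Re-authenticate with the API password returned by the login call.
headers["X-DocuSign-Authentication"] = json.dumps({
    "Username": username,
    "Password": login["apiPassword"],
    "IntegratorKey": clientid,
})

# account["baseUrl"] already ends in /accounts/{accountId},
# so the accountId does not go into the query parameters.
get_templates = requests.get(account["baseUrl"] + "/templates", headers=headers)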
You could also check out the Python code example. It includes authentication and many examples.

Using webhooks with Google Analytics

I'm trying to integrate my CRM with Google Analytics to monitor lead changes (from lead to sale) and so on. As I understand it, I need to use the Google Measurement Protocol to receive webhooks from the CRM and translate them into Analytics conversions.
But in fact, I don't really understand how to do it. I need to make some script to translate the webhook payload for Analytics, but where do I need to place that script? Are there some templates? And so on.
So, if you know some tutorials/courses/freelancers that could help me with integrating webhooks with Analytics, I need your advice.
Example of webhook from CRM:
{
    "leads": {
        "status": {
            "id": "25399013",
            "name": "Lead title",
            "old_status_id": "7039101",
            "status_id": "142",
            "price": "0",
            "responsible_user_id": "102525",
            "last_modified": "1413554372",
            "modified_user_id": "102525",
            "created_user_id": "102525",
            "date_create": "1413554349",
            "account_id": "7039099",
            "custom_fields": [
                {
                    "id": "427183",
                    "name": "Checkbox custom field",
                    "values": ["1"]
                },
                {
                    "id": "427271",
                    "name": "Date custom field",
                    "values": ["1412380800"]
                },
                {
                    "id": "1069602",
                    "name": "Checkbox custom field",
                    "values": ["0"]
                },
                {
                    "id": "427661",
                    "name": "Text custom field",
                    "values": ["Валера"]
                },
                {
                    "id": "1075272",
                    "name": "Date custom field",
                    "values": ["1413331200"]
                }
            ]
        }
    }
}
"Webhook" is a fancy way of saying that your CRM can call a web based service whenever something interesting happens (i.e. the CRM can "hook" into a web based application). E.g. if a new lead is created you can call an url with the lead details as parameters.
Specifics depend on your CRM, but when you set up a webhook there should be a field to set a url; the script that evaluates the CRM data is located at the URL.
You have that big JSON thing as your example - No real way to tell without knowing your system, but I assume that is sent as request body. So in your script you evaluate the request body, extract the parameters you want to send to analytics (be mindful that you are not allowed to store personally identifiable information) and sent it via the measurement protocol as described in the documentation linked in the other answer.
Depending on the system you might even be able to call the measurement protocol without having a custom script in between (after all the measurement protocol is an url with a few parameters).
This is an awfully generic answer, but then the question is really broad.
I've done just this in my line of work.
You need to first decide on your data model: how you would like the CRM data to look within Google Analytics. This could be just mapping your data to Google Analytics' event category, event label, and event action, or perhaps using custom dimensions and metrics.
Then, to make it most useful, you would like to be able to link the CRM activity of a customer to their online activity. You can do this if they log in online. In that case, you can set the cid and/or uid of the user to your CRM id.
Then, if you send in a GA hit with the same cid/uid in your Measurement Protocol hit, you will link the online sessions with your offline CRM activity.
To make the actual record hit Google Analytics, you will need to program something that takes the CRM data and turns it into a Measurement Protocol hit, which is essentially just a URL with the correct parameters. Look here for reference: https://developers.google.com/analytics/devguides/collection/protocol/v1/reference
An example could be: http://www.google-analytics.com/collect?v=1&tid=UA-123456-1&cid=5555&t=pageview&dp=%2FpageA
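To make this concrete, here is a hedged sketch of such a bridging script, assuming Flask as the web framework; the field names come from the webhook example above, and the tracking id is the placeholder from the example hit:
from flask import Flask, request
import requests

app = Flask(__name__)

GA_ENDPOINT = "https://www.google-analytics.com/collect"
GA_TRACKING_ID = "UA-123456-1"  # placeholder property id from the example hit above

@app.route("/crm-webhook", methods=["POST"])
def crm_webhook():
    # The CRM posts the JSON shown in the question as the request body.
    lead = request.get_json()["leads"]["status"]
    # Map the CRM fields onto a Measurement Protocol event hit.
    hit = {
        "v": "1",
        "tid": GA_TRACKING_ID,
        "cid": lead["account_id"],  # ideally the user's real GA client id (see above)
        "t": "event",
        "ec": "crm",
        "ea": "lead_status_change",
        "el": lead["name"],
        "ev": lead["price"],
    }
    requests.post(GA_ENDPOINT, data=hit)
    return "", 204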
We usually have this as a separate process that fires when the CRM data is written to its database (the webhook in your example). If it's a lot of data, you should probably implement checks to see if the hit was successful, and caching in case the service is not online - you have an optional parameter that gives you 4 hours of leeway in sending data.
Hope this gets you at least started.

Restful api data syncing only changed models on big collections

I'm trying to find a best-practice method to make my API respond with a 204. Let's consider the following simplified steps:
Client GET: Server response 200 full collection of 10 records
No changes are made
Client GET: server response 304 data not changed
Changes are made to record 5. Record 11 is added. Record 2 is deleted
Client GET: server response 200 with the new collection of 10 records
For 10 records this is not a big issue. However, when a collection is, let's say, a few thousand records, you don't want to refresh your entire locally stored collection. In that case it's easier to change just the 3 updated models (delete record 2, update record 5, add record 11). So I want something like this:
Client GET: Server response 200 full collection (paginated or not)
No changes are made
Client GET: server response 304 data not changed
Changes are made to record 5. Record 11 is added. Record 2 is deleted
Client GET: server response 204 with only information about the 3 changed records
In the above case the request cycle is optimized, but the problem is: how do I get the server to respond like this? I must send some information from the client, in either a header or the body. I was thinking about exploiting the If-Modified-Since header. This would be the date of the last change on the server. The client stores this date whenever the status is 200 or 204. When the client sends this header to the server, the server responds only with records that have changed or been deleted since then. For example:
{
    "total": 113440,
    "from": 0,
    "till": 2,
    "count": 3,
    "removed": [1, 2],
    "parameters": ["id", "title"],
    "collection": [
        {
            "id": 3,
            "title": "Updated record"
        },
        {
            "id": 4,
            "title": "New record"
        },
        {
            "id": 5,
            "title": "Another new record"
        }
    ]
}
The downside is that the server will be more complex, because it needs to keep track of deleted data and when records were last updated.
Keep in mind that I did think of sending silent push updates, but I don't want to do this, since the user is not always happy with background data traffic.
What do you guys think about this solution, and do you have a similar or better solution, keeping the following in mind?
lower the amount of needed requests
make the API descriptive and discoverable (the API being its own documentation, allowing client-side generators)
be as live as possible
effectively deal with huge collections (e.g. pagination, fetching only updated records, caching)
You could send up a If-Modified-Since header with your GET requests for the collection. The endpoint could then return only those contents that have changed since the provided date.
Note that this is not the behaviour specified by the HTTP spec for conditional requests. Carefully consider whether this is the approach you want to take. I would not suggest it.
In the spec as written, there is no way I'm aware of to conditionally retrieve a subset of a collection. The closest I can think of is to have the collection contain only links to the individual elements. Make the client call GET on each individual element, using If-None-Match or If-Modified-Since, so you only pass stale entities over the wire. You'd still have to make a server hit for the collection entity, though you could also use a conditional header on that request.
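A sketch of that per-element conditional GET in Python (the ETag dict is a stand-in for whatever client-side cache you use):
import requests

etags = {}  # element URL -> last seen ETag (stand-in for your client-side cache)

def fetch_if_stale(url):
    headers = {}
    if url in etags:
        headers["If-None-Match"] = etags[url]
    resp = requests.get(url, headers=headers)
    if resp.status_code == 304:
        return None  # our cached copy is still fresh; no entity crossed the wire
    if "ETag" in resp.headers:
        etags[url] = resp.headers["ETag"]
    return resp.json()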
IMHO this calls for a query interface, where you tell the REST service in the GET parameters what you want, similar to how you would do it in SQL or in a Solr or Elasticsearch query.
Why hide something in HTTP headers that the client is explicitly querying, right?
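For illustration, a sketch of what that query-style delta fetch could look like from the client side (the endpoint and the updated_since parameter are hypothetical; the response shape is the one proposed in the question):
import requests

local_store = {}  # record id -> record; stand-in for your client-side cache

resp = requests.get(
    "https://api.example.com/records",                 # hypothetical endpoint
    params={"updated_since": "2015-06-01T12:00:00Z"},  # hypothetical parameter
)
delta = resp.json()

# Apply the delta: drop removed records, upsert changed and new ones.
for record_id in delta["removed"]:
    local_store.pop(record_id, None)
for record in delta["collection"]:
    local_store[record["id"]] = record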
