How to automate API testing for a REST API built using Django REST Framework

I want to automate testing of a REST API built using DRF. The automation should run the test cases every 2 minutes, continuously, and not on my local machine (the API is deployed on AWS). If any test case fails, the failure should be recorded in a log report. Any type of service is fine. I am currently using Postman to run the test cases, but I am on the free plan, so I have limited API calls, and Postman monitors don't support minute-level scheduling.
How can I do this? Please help!

Yes, after some research and with the help of my mentors I found a way to automate API testing.
Coding part:
I have a Python script that uses the requests package to call the API and then applies assertions to check that the response behaves according to the requirements.
A few example test cases are response status code, response time, and a schema test.
Automation: I am using an AWS Lambda function and Amazon EventBridge to schedule the execution of this script at the required time intervals.
In case of any exceptions or test failures, alerts can be sent to a Slack channel or Microsoft Teams.
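A minimal sketch of the kind of script described above might look like the following. The endpoint URL, Slack webhook, expected fields, and thresholds are placeholders, not real values; it is only meant to show the shape of the checks and the Lambda entry point.

```python
import json
import requests

# Placeholders - not the real endpoint, webhook, or schema.
API_URL = "https://example.com/api/items/"
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/..."
EXPECTED_FIELDS = {"id", "name", "created_at"}
MAX_RESPONSE_SECONDS = 2.0


def run_checks():
    """Call the API once and return a list of failure messages (empty means all passed)."""
    failures = []
    response = requests.get(API_URL, timeout=10)

    # Test 1: status code
    if response.status_code != 200:
        failures.append(f"Expected status 200, got {response.status_code}")

    # Test 2: response time
    elapsed = response.elapsed.total_seconds()
    if elapsed > MAX_RESPONSE_SECONDS:
        failures.append(f"Response took {elapsed:.2f}s (limit {MAX_RESPONSE_SECONDS}s)")

    # Test 3: schema - assume the endpoint returns a list of JSON objects
    try:
        body = response.json()
    except ValueError:
        failures.append("Response body is not valid JSON")
        return failures
    for item in body if isinstance(body, list) else [body]:
        missing = EXPECTED_FIELDS - set(item)
        if missing:
            failures.append(f"Missing fields {missing} in {item}")

    return failures


def notify_slack(failures):
    """Send a summary of the failures to a Slack incoming webhook."""
    text = "API test failures:\n" + "\n".join(failures)
    requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10)


def lambda_handler(event, context):
    """Entry point for AWS Lambda, triggered by an EventBridge rule such as rate(2 minutes)."""
    failures = run_checks()
    if failures:
        notify_slack(failures)
    # Whatever is printed here lands in CloudWatch Logs, which acts as the log report.
    print(json.dumps({"failures": failures}))
    return {"failed": len(failures)}
```

The EventBridge schedule invokes lambda_handler every 2 minutes, so no local machine is involved.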

Related

What is the best practice for handling asynchronous API calls that take time?

Suppose I have an API to create a cloud instance asynchronously. After I make the API call it just returns a success response, but the cloud instance will not have been initialized yet. It takes 1-2 minutes to create the cloud instance, and after that the cloud instance information (e.g. IP, hostname, OS) is saved to the DB, which means I have to wait 1-2 minutes before I can fetch the data again to show the cloud instance information. At first I tried making a loading component, but the problem is that I don't know when the cloud instance is initialized (each instance takes a different amount of time to create). I'm considering using WebSockets or a cron job, or should I redesign my API? Has anyone designed an asynchronous system before? How do you handle such a case?
If the API that you call gives you no information on when it's done with its asynchronous processing, it seems to me that you'll have to check at intervals until you find that the resource is ready; i.e. to poll it.
This seems to me to roughly fit the description and intent of the Polling Consumer pattern. In general, for asynchronous systems design, I can't recommend Enterprise Integration Patterns enough.
As others noted, you can either have a notification channel using WebSockets or poll the backend. Personally I'd probably go with the latter for this case and would create several endpoints: one for initiating the work, which returns a URL with a "job id" in it where the status of the job can be polled.
RESTfully that would look something like: POST /instances to initiate a job, GET /instances to see all the instances that are running/created/stopped, and GET /instances/<id> to see the status of a particular instance (initiating, failed, running, or whatever).
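To illustrate, a client could poll such a status endpoint roughly like this. This is only a sketch: the base URL, JSON field names, status values, and timings are assumptions.

```python
import time
import requests

BASE_URL = "https://api.example.com"  # placeholder


def create_instance(payload):
    """POST /instances kicks off the asynchronous job and returns the new instance id."""
    response = requests.post(f"{BASE_URL}/instances", json=payload, timeout=10)
    response.raise_for_status()
    return response.json()["id"]


def wait_for_instance(instance_id, poll_interval=10, timeout=300):
    """Poll GET /instances/<id> until the instance leaves the 'initiating' state."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        response = requests.get(f"{BASE_URL}/instances/{instance_id}", timeout=10)
        response.raise_for_status()
        body = response.json()
        if body["status"] == "running":
            return body  # contains ip, hostname, os, etc.
        if body["status"] == "failed":
            raise RuntimeError(f"Instance {instance_id} failed to initialize")
        time.sleep(poll_interval)
    raise TimeoutError(f"Instance {instance_id} not ready after {timeout}s")
```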
WebSockets would work, but might be overkill for this use case. I would probably display a status of 'creating' or something similar after receiving the success response from the API call, and then start polling the API to see if the creation process has finished.

Processing Requests in the Background

I'm writing a REST API using ASP.Net Core and Microsoft SQL Server. One of my requirements is that clients will POST certain data to this API and the API will have to transform/process the data in some way before it is used or read. Turns out this processing is costly. So I'm thinking of doing it asynchronously in the background without blocking the POST request. I'm considering doing the processing:
In a scheduled SQL job
Using a separate Windows Service running in the background that reads from the DB, does the processing and writes back to it. It'll be slower than the SQL job I presume, but the code will be more readable.
Using Hangfire. Never used it. Not sure how well it works.
What are the best options for this? Are there any best practices around this kind of thing?
Boilerplate
Store that data somewhere (RDBMS, nonSQL, etc)
Respond to user that his data has been scheduled for processing
Run some worker or pool of workers for job processing
Store result somewhere
Notify the client that the background job is complete (could be just a GET /jobs/<id> endpoint which the client can check)
Show that result
You can use your own daemon, process, or script. If that's not enough and you need more features, use Hangfire, which looks solid.
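As a language-agnostic illustration of the steps above (the question itself is about ASP.Net Core, so treat this only as a sketch of the pattern, with an in-memory job store and invented names):

```python
import queue
import threading
import uuid

jobs = {}                   # job_id -> {"status": ..., "result": ...}
work_queue = queue.Queue()  # stands in for durable storage (SQL table, SQS, Hangfire storage)


def expensive_transform(data):
    """Placeholder for the costly processing mentioned in the question."""
    return {"processed": data}


def submit_job(data):
    """Backs the POST endpoint: store the job, enqueue it, respond immediately."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "queued", "result": None}
    work_queue.put((job_id, data))
    return job_id  # the client later polls GET /jobs/<job_id> with this id


def get_job(job_id):
    """Backs GET /jobs/<id>: lets the client check whether processing is done."""
    return jobs.get(job_id)


def worker():
    """Background worker: pull jobs off the queue, process them, store the result."""
    while True:
        job_id, data = work_queue.get()
        jobs[job_id]["status"] = "processing"
        try:
            jobs[job_id]["result"] = expensive_transform(data)
            jobs[job_id]["status"] = "done"
        except Exception as exc:
            jobs[job_id]["status"] = f"failed: {exc}"
        finally:
            work_queue.task_done()


threading.Thread(target=worker, daemon=True).start()
```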
I have been using Hangfire in production for almost 3 years, and yes, it is a great option: retry policy out of the box, a UI dashboard, and so on. But there are additional options, such as:
Serverless (Azure Functions, AWS Lambda)
AWS SQS or Azure Queue combined with hosted services (see the docs)
Another option I've found is to implement IHostedService, a built-in interface in ASP.Net Core. See this page for details.

How to retry R testthat test on API error?

Some tests rely on some external services (e.g. APIs). Sometimes, these external services will go down. This can cause tests to fail and (worse) continuous integration tools to fail.
Is there a way to instruct testthat tests and regular package examples to re-run more than once, ideally with the second attempt 5 minutes after the first?
Ideally you would write your tests in such a way that they don't call the API or database.
Instead, you would mock the API endpoints according to the specification and also write tests for cases where the API returns unexpected results or errors.
Here is an example of a package that allows you to do so:
https://github.com/nealrichardson/httptest
If you are worried that your vendor might change the API, talk to them and extract details of their API change management.
Ask them this:
What is your change management process?
How do you avoid introducing breaking changes to existing endpoints that people are using?
(modified from this post)
If you have to check that the API is still the same, draw the line between API validation and testing of your own code.
You will need two separate processes:
Unit/acceptance tests that are executed against mocks of the API endpoints. These run fast and are focused on the logic of your application.
A pipeline for regular validation of the API. If your code is already live, you are likely to find out about any breaking changes in the API anyway, so this is highly redundant. In exceptional cases it can be useful, but only with a very bad vendor.
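This thread is about R and httptest, but purely for illustration, the same mock-the-endpoint idea looks roughly like this in Python with unittest.mock (the endpoint, function, and expected values are invented for the sketch):

```python
import unittest
from unittest import mock

import requests


def fetch_user_name(user_id):
    """Code under test: calls an external API (invented endpoint for the sketch)."""
    response = requests.get(f"https://api.example.com/users/{user_id}", timeout=10)
    response.raise_for_status()
    return response.json()["name"]


class FetchUserNameTest(unittest.TestCase):
    @mock.patch("requests.get")
    def test_returns_name_from_api(self, mock_get):
        # The real API is never called; the response is faked per the specification.
        mock_get.return_value.raise_for_status.return_value = None
        mock_get.return_value.json.return_value = {"name": "Ada"}
        self.assertEqual(fetch_user_name(42), "Ada")

    @mock.patch("requests.get")
    def test_raises_on_server_error(self, mock_get):
        # Also cover the case where the API returns an error, without the real service.
        mock_get.return_value.raise_for_status.side_effect = requests.HTTPError("500")
        with self.assertRaises(requests.HTTPError):
            fetch_user_name(42)


if __name__ == "__main__":
    unittest.main()
```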

Can we access Firebase during performance testing of mobile apps using Apache JMeter?

I was planning to perform load testing on an iOS application which uses Firebase for data storage. I have successfully recorded the test plan using Apache JMeter. But when I run the test plan in JMeter, it fails to access Firebase. Is there any way to access Firebase during the process of load testing?
I have one field in Firebase, "last_logged_in_time". When I log in with the iOS app on the iPhone, the time gets automatically updated in Firebase. But when I run the test script using JMeter it is not updated.
Most probably you are simply failing to actually log in.
Check the response you get after login using the View Results Tree element.
Usually this is due to a missing:
- cookie manager
- header to correlate
- parameter in request to correlate
If you don't see the value updated when you run a JMeter test then the test doesn't do what it is supposed to be doing.
In the majority of cases you won't be able to replay a recorded JMeter test as-is, as you might need to pass dynamic parameter(s) which are used for user identification, tracking, security purposes, etc.
The easiest way to detect whether your application is expecting some form of dynamic parameter is to record your test once again and compare the two recorded .jmx scripts. If you see any differences, you will need to correlate them. Correlation in JMeter is the process of:
Extracting dynamic parameter(s) from the previous response(s) using JMeter Post-Processors and storing them into JMeter Variables
Replacing recorded "hard-coded" values with the JMeter Variables from step 1 in the next request(s)
There is also an alternative way of recording a JMeter test; in this case you won't have to worry about proxies, SSL certificates, or handling dynamic parameters, as all of this is done automatically. Check out the How to Cut Your JMeter Scripting Time by 80% guide for more details.

How to verify JMeter recorded load test results

I have created a recorded test plan for my web application using JMeter. My web application basically creates a financial plan for new and existing customers. I recorded all the steps required to create a financial plan for a new customer.
I am not sure how to validate whether JMeter actually runs the recorded steps. I am using the Graph Results listener and checking throughput at the end of the recorded plan.
I am also not sure how to validate whether JMeter is actually running all thread users with the recorded steps. Any suggestions would be appreciated. Thanks!
Add a View Results Tree listener to your test plan and execute your test with 1-2 virtual users. Inspect the "Response Data" tab of each request to ensure it does what it is supposed to do.
If you use any JMeter Variables and want to check their values, add Debug Sampler(s) to the Test Plan where needed. Variable values can be checked via the aforementioned View Results Tree listener.
See the How to debug your Apache JMeter script guide for advanced information on debugging your JMeter test.
Don't forget to remove or disable the View Results Tree listener for the actual load test, as it is too resource intensive. Also make sure you run JMeter in command-line non-GUI mode for the actual load.
