I have created a recorded test plan for my web application using JMeter. The application creates a financial plan for new and existing customers, and I recorded all the steps required to create a financial plan for a new customer.
I am not sure how to validate that JMeter actually runs the recorded steps. Currently I am using the Graph Results listener and checking throughput at the end of the recorded plan.
I am also not sure how to verify that JMeter is actually running all thread users through the recorded steps. Any suggestions would be appreciated. Thanks!
Add a View Results Tree listener to your test plan and execute the test with 1-2 virtual users. Inspect the "Response Data" tab of each request to ensure it does what it is supposed to do.
If you use any JMeter Variables and want to check their values, add Debug Sampler(s) to the Test Plan where needed. The variable values can then be inspected via the aforementioned View Results Tree listener.
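If it is more convenient, variable values can also be written to jmeter.log from a JSR223 Sampler or PostProcessor. A minimal sketch (Java-style syntax, which JMeter's Groovy JSR223 engine accepts); the variable names "username" and "planId" are just placeholders for whatever your plan defines:

    // JSR223 element: print (hypothetical) variables to jmeter.log so their values
    // can be checked alongside the Debug Sampler output in View Results Tree.
    log.info("username = " + vars.get("username"));
    log.info("planId   = " + vars.get("planId"));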
See the How to Debug your Apache JMeter Script guide for more advanced information on debugging your JMeter test.
Don't forget to remove or disable the View Results Tree listener for the actual load test, as it is too resource-intensive. Also make sure you run JMeter in command-line non-GUI mode for the actual load.
What I'm trying to do:
Send HTTP requests to my web application from a dynamic workflow. I would like the load tester (JMeter) to generate its own workflow and post/get content to/from my site.
What I have done:
I have a dummy WordPress site that I'm using to test the CPU and memory utilization on my host machines, as well as the efficiency of my load balancing algorithm. Currently, I'm using JMeter to design my workflow and test my system. However, I realized that JMeter is only sending the same workflow to the load balancer. Because of this, the resource utilization on my backend servers is equal across the board. I would like to test and exploit differences in CPU utilization, so I need a way to dynamically post/receive content from my dummy WordPress site.
As of now, JMeter is not able to automatically generate an end-to-end test plan that simulates real users doing various things in your application. The options are:
Record anticipated WordPress user activities using the JMeter HTTP(S) Test Script Recorder; after correlating dynamic parameters and parameterizing test data (e.g. usernames) you should be able to conduct a more or less realistic load test (a small parameterization sketch follows below).
Use a ready-made script collection like the WordPress JMeter Template.
Use a machine-learning/AI-based test tool like up9.
If you're testing a web site behind a load balancer, make sure to add a DNS Cache Manager to your test plan.
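Regarding the parameterization mentioned in the first option, test data is usually fed from a CSV Data Set Config, but it can also be generated on the fly. A hedged sketch of a JSR223 PreProcessor (Java-style syntax run by the Groovy engine); the "username" variable name and the prefix are assumptions:

    // JSR223 PreProcessor: build a unique username per thread and iteration so that
    // each virtual user sends different data instead of replaying identical values.
    int threadNum = ctx.getThreadNum();        // number of the current JMeter thread
    int iteration = vars.getIteration();       // current iteration of the Thread Group
    vars.put("username", "loadtest_user_" + threadNum + "_" + iteration);

The recorded requests would then reference ${username} instead of the hard-coded recorded value.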
You can also put all the requests under a Random Order Controller:
Random Order Controller is much like a Simple Controller in that it will execute each child element at most once, but the order of execution of the nodes will be random
I am using NHibernate and ASP.NET/MVC. I use one session per request to handle database operations. For integration testing I am looking for a way to have each test run in an isolated mode that will not change the database and interfere with other tests running in parallel. Something like a transaction that can be rolled back at the end of the test. The main challenge is that each test can make multiple requests. If one request changes data the next request must be able to see these changes etc.
I tried binding the session to the auth cookie to create child sessions for the subsequent requests of a test, but that does not work well, as neither sessions nor transactions are thread-safe in NHibernate (it results in trying to open multiple DataReaders on the same connection).
I also checked whether TransactionScope could be a way, but could not figure out how to use it from multiple threads/requests.
What could be a good way to make this happen?
I typically do this by operating on different data.
For example, say I have an integration test which checks a basket total for an e-commerce website.
I would create a new user, activate it, add some items to a basket, calculate the total, assert on whatever I need, and then delete all the created data.
So the flow is: create the data you need, operate on it, check it, delete it. This way all the tests can run in parallel without interfering with each other, and the data is always cleaned up.
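The asker's stack is NHibernate/ASP.NET, so purely as an illustration of the lifecycle, here is a JUnit-style sketch in Java; BasketClient and its in-memory stub are hypothetical stand-ins for whatever API or service layer the real application exposes:

    // Hedged sketch of the "create the data, operate on it, check it, delete it" pattern.
    import org.junit.jupiter.api.Test;

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.UUID;

    import static org.junit.jupiter.api.Assertions.assertEquals;

    class BasketTotalIT {

        /** Hypothetical client; in a real test it would call the application over HTTP. */
        static class BasketClient {
            private final Map<String, List<Double>> baskets = new HashMap<>();

            String createAndActivateUser() {
                String userId = UUID.randomUUID().toString();   // unique per test, so tests can run in parallel
                baskets.put(userId, new ArrayList<>());
                return userId;
            }

            void addItem(String userId, double price) { baskets.get(userId).add(price); }

            double basketTotal(String userId) {
                return baskets.get(userId).stream().mapToDouble(Double::doubleValue).sum();
            }

            void deleteUser(String userId) { baskets.remove(userId); }
        }

        @Test
        void basketTotalForFreshlyCreatedUser() {
            BasketClient client = new BasketClient();
            String userId = client.createAndActivateUser();          // 1. create the data you need
            try {
                client.addItem(userId, 19.99);                        // 2. operate on it
                client.addItem(userId, 9.98);
                assertEquals(29.97, client.basketTotal(userId), 0.001); // 3. check it
            } finally {
                client.deleteUser(userId);                            // 4. delete it, even if the check fails
            }
        }
    }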
I was planning to perform load testing on an iOS application which uses Firebase for data storage. I have successfully recorded the test plan using Apache JMeter, but when I run the test plan in JMeter it fails to access Firebase. Is there any way to access Firebase during load testing?
I have one field in Firebase, "last_logged_in_time". When I log in with the iOS app on an iPhone, the time gets automatically updated in Firebase, but when I run the test script using JMeter it is not updated.
Most probably you are simply failing to really log in.
Check the response you get after login using the View Results Tree element (see the sketch after the list below).
Usually this is due to a missing:
- cookie manager
- header to correlate
- parameter in request to correlate
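One way to make a failed login obvious is to attach an assertion to the login request. A hedged sketch of a JSR223 Assertion (Java-style syntax run by the Groovy engine); the "Sign out" marker text is an assumption, use whatever text only appears for a logged-in user:

    // JSR223 Assertion on the login sampler: fail the sample if the response body
    // does not contain text that only a logged-in user would see.
    String body = SampleResult.getResponseDataAsString();
    if (!body.contains("Sign out")) {
        AssertionResult.setFailure(true);
        AssertionResult.setFailureMessage("Login response does not look like a logged-in page");
    }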
If you don't see the value updated when you run a JMeter test then the test doesn't do what it is supposed to be doing.
In the majority of cases you won't be able to replay a recorded JMeter test as-is, as you might need to pass dynamic parameter(s) which are used for user identification, tracking, security purposes, etc.
The easiest way to detect whether your application expects some form of dynamic parameter is to record your test once again and compare the 2 recorded .jmx scripts. If you see any differences, you will need to correlate them. Correlation in JMeter is the process of:
Extracting dynamic parameter(s) from the previous response(s) using JMeter Post-Processors and storing them into JMeter Variables
Replacing recorded "hard-coded" values with the JMeter Variables from step 1 in the next request(s)
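Step 1 is normally done with a Regular Expression Extractor (or a CSS/JSON extractor), but the same thing can be scripted. A hedged sketch of a JSR223 PostProcessor (Java-style syntax run by the Groovy engine); the "csrf_token" parameter name and the regex are assumptions about what the application returns:

    // JSR223 PostProcessor: pull a hypothetical hidden-field value out of the previous
    // response and store it as a JMeter Variable for use in the next request(s).
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    String body = prev.getResponseDataAsString();
    Matcher m = Pattern.compile("name=\"csrf_token\" value=\"([^\"]+)\"").matcher(body);
    if (m.find()) {
        vars.put("csrf_token", m.group(1));    // later requests reference ${csrf_token}
    } else {
        log.warn("csrf_token not found in response");
    }

Step 2 then just means replacing the recorded hard-coded value with ${csrf_token} in the subsequent request(s).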
There is also an alternative way of recording a JMeter test; in that case you won't have to worry about proxies, SSL certificates, or handling dynamic parameters, as all of this is done automatically. Check out the How to Cut Your JMeter Scripting Time by 80% guide for more details.
I have finally given up and I'm looking for some help. Here is what I have found so far.
First of all, web performance tests and/or load tests in Visual Studio do NOT use the browser during playback (it is only used while recording the test), which is when/where the ASPSessionId is stored in a cookie or form post parameter.
My web performance tests have extraction rules to get the ASPSessionID from the server, which I then try to set in a later request as a header/form post parameter. However, this doesn't seem to help; it appears that I am just reusing the same session ID over and over, causing the server to respond differently (it presents different pages).
On the system I am testing, a user goes to the site and fills out an application. If the user stays in the same session, they can fill out multiple subsequent applications and re-use some data; in that case the user is presented a page to select the re-usable data. If the session is new, the user does not get this page.
If I play the web test over and over manually, it works as expected (new session ID, no re-use data page presented). However, if I play that same test over and over in a load test, it passes the first time and fails on each subsequent run, because the session is kept open and the server then serves different pages than the ones in my web performance test. The failures on the subsequent applications include things like unexpected response URLs, failed extraction rules, etc.
So I was using an extraction rule to get the ASPSessionID from the server, storing it in a cookie and/or web form post parameters, and then setting it, but it is not working.
What can I do in the web performance test to successfully close the ASPSessionID so that the test runs like it is running for the first time in the load test?
In the LoadTest Test Mix, set the "Percentage of New Users" to 100. That completely solved it for me.
What are some good automated tools for load testing (stress testing) web applications, that do not use record and replay of HTTP network packets?
I am aware that there are numerous load testing tools on the market that record and replay HTTP network packets. But these are unsuitable for my purpose, because of this:
The HTTP packet format changes very often in our application (e.g. when we optimize an AJAX call). We do not want to adapt all test scripts just because there is a slight change in HTTP packet format.
Our test team shall not need to know any internals about our application to write their test scripts. A tool that replays HTTP packets, however, requires the team to know the format of HTTP requests and responses, such that they can adapt details of the replayed HTTP packets (e.g. user name).
The automated load testing tool I am looking for should be able to let the test team write "black box" test scripts such as:
Invoke web page at URL http://... .
First, enter XXX into text field XXX.
Then, press button XXX.
Wait until response has been received from web server.
Verify that text field XXX now contains the text XXX.
The tool should be able to simulate up to several thousand users, and it should be compatible with web applications using ASP.NET and AJAX.
I've found JMeter to be pretty helpful. It also has recording functionality, so you don't have to specify each GET/POST manually; instead you "click" through the use case once and then let JMeter repeat it.
http://jmeter.apache.org/
A license can be expensive (if you don't have MSDN), but the Visual Studio 2010 Ultimate edition has a great set of load and stress testing tools that do what you describe. You can try it out for free for 90 days here.
TestMaker by PushToTest.com can run recorded scripts such as Selenium scripts, as well as tests written in many different languages like HTML, Java, Ruby, Groovy, .NET, VB, PHP, etc. It has a common reporting infrastructure, and you can generate load in your own test lab or use cloud environments like EC2 as virtual test labs.
They provide free monthly webinars on using open source testing tools, and there is one next Tuesday.
http://www.pushtotest.com
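For reference, the "black box" flow described in the question (open a page, type into a field, press a button, wait, verify text) maps fairly directly onto a Selenium script, which tools like TestMaker or JMeter's WebDriver Sampler plugin can then drive under load. A hedged sketch using the Selenium 4 Java bindings; the URL, locators and expected text are placeholders:

    // Hypothetical black-box flow: the URL, element locators and the expected value
    // are placeholders, not taken from the question.
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;
    import java.time.Duration;

    public class BlackBoxFlow {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("http://example.com/form");                   // invoke web page at URL
                driver.findElement(By.id("amount")).sendKeys("1000");     // enter XXX into text field XXX
                driver.findElement(By.id("submit")).click();              // press button XXX
                WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
                String result = wait.until(                               // wait for the server response
                        ExpectedConditions.visibilityOfElementLocated(By.id("result"))).getText();
                System.out.println("Verification " + (result.contains("1000") ? "passed" : "FAILED"));
            } finally {
                driver.quit();
            }
        }
    }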
There are a few approaches; I've been in situations, however, where I've had to roll my own load generating utilities.
As far as your test script is concerned it involves:
sending a GET request to http://form entry page (only checking if a 200 response is given)
sending a POST request to http://form submit page with pre-generated key/value pairs for text XXX and performing a regexp check on the response
Unless your web page involves complex AJAX, there is no need to "simulate a button press" - this is taken care of by the POST request.
Given that your test consists of just a two-step process, there should be several automated load packages that could do this (a rough illustration of the two steps follows).
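For illustration, the two-step script above is small enough to roll by hand. A hedged sketch with Java's built-in HttpClient (Java 11+); the URLs, form fields and the success pattern are placeholders:

    // Two-step check: GET the form entry page, then POST pre-generated key/value pairs
    // and regexp-check the response. URLs, field names and "Thank you" are placeholders.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.regex.Pattern;

    public class TwoStepCheck {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // Step 1: GET the form entry page, only checking for a 200 response.
            HttpResponse<String> page = client.send(
                    HttpRequest.newBuilder(URI.create("http://example.com/form")).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            if (page.statusCode() != 200) {
                throw new IllegalStateException("GET failed: " + page.statusCode());
            }

            // Step 2: POST the form submit page and regexp-check the response body.
            String form = "field1=XXX&field2=YYY";
            HttpResponse<String> result = client.send(
                    HttpRequest.newBuilder(URI.create("http://example.com/submit"))
                            .header("Content-Type", "application/x-www-form-urlencoded")
                            .POST(HttpRequest.BodyPublishers.ofString(form))
                            .build(),
                    HttpResponse.BodyHandlers.ofString());
            boolean ok = Pattern.compile("Thank you").matcher(result.body()).find();
            System.out.println("POST " + result.statusCode() + ", body check: " + ok);
        }
    }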
I've previously used httperf for load testing a large website: it can simulate a session consisting of several requests and can simulate a large number of users (i.e. sessions) simultaneously. For example, if your website generated a session cookie from the home page you could make that the first request, httperf would then use that cookie for subsequent requests, until it had finished doing the list of requests supplied.
What about http://watin.sourceforge.net/ ?