I created a simple JMeter HTTP test.
I specified 50 users, each user will do 30 HTTP requests (one after the other), and the ramp-up time is 1 second.
Then I added a Graph Results listener and recorded the performance of my application for 10 minutes.
Question: what is the Graph Results listener measuring? Each HTTP request of each user, or all 30 HTTP requests of a user combined?
I mean, if I have an Average of 5 seconds, does that mean that each HTTP request gets a response in 5 seconds on average, or does it mean that all 30 HTTP requests (totaling their response times) take 5 seconds on average?
This depends on how you created your test plan. But if you have one action (the HTTP request) and you specified 30 iterations with 50 users, then 5 seconds is the average time per execution of that action, measured across all 50 * 30 = 1500 samples.
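To make the distinction concrete, here is a toy sketch (made-up numbers, not taken from the test above) showing the difference between averaging every individual sample and averaging per-user totals:

from statistics import mean
import random

# Made-up response times: 50 users * 30 requests = 1500 individual samples, roughly 5 s each.
samples_ms = [random.gauss(5000, 500) for _ in range(50 * 30)]

# This is what the listener's Average corresponds to: the mean per individual request.
print("average per request:", mean(samples_ms), "ms")

# This is NOT what it reports: the mean of each user's 30-request total.
per_user_totals = [sum(samples_ms[i:i + 30]) for i in range(0, len(samples_ms), 30)]
print("average per-user total:", mean(per_user_totals), "ms")

With those made-up numbers the per-request average comes out around 5 seconds, while a per-user total would be around 150 seconds, which is exactly the distinction being asked about.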
I am using the Telegram Bot API getChatMemberCount to query the number of people in various Telegram groups. I am not trying to send messages with the Bot API.
https://api.telegram.org/YOURBOT/getChatMemberCount?chat_id=#telegram
Some of the requests receive a 429 error.
I'm aware of the rate limits for sending messages, detailed here, and have tried waiting 5 and 10 seconds between requests, but am still getting the 429.
I've also tried waiting 10 minutes, and then 15 minutes, after receiving a 429 before re-sending the requests that errored, this time waiting 30 seconds between each query, and I still got a 429 on all of the ones that previously had a 429. If I run the request on groups that were previously successful, they work, while at the same time the groups that previously received a 429 still receive a 429. So it almost seems like the group itself is the issue, and not the actual length of time between requests.
For example:
INITIAL REQUEST: Group A (success), (wait 10 seconds), Group B (429), (wait 10 seconds), Group C (success), (wait 10 seconds), Group D (429)
WAIT 15 MINUTES
2nd REQUEST: Group B (429), Group D (429), Group A (success), Group C (success)
If anyone has insight into the Telegram Bot API rate limits for requests that are not sending messages, please let me know what worked for you.
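For reference, a minimal sketch of how one might call getChatMemberCount and honour the retry hint that the Bot API attaches to a 429 response (the response carries a parameters.retry_after value); the token placeholder and chat id below are hypothetical:

import time
import requests

BOT_TOKEN = "YOUR_BOT_TOKEN"  # hypothetical placeholder
URL = f"https://api.telegram.org/bot{BOT_TOKEN}/getChatMemberCount"

def member_count(chat_id, attempts=3):
    # Query the member count, sleeping for parameters.retry_after on a 429.
    for _ in range(attempts):
        resp = requests.get(URL, params={"chat_id": chat_id})
        data = resp.json()
        if data.get("ok"):
            return data["result"]
        if resp.status_code == 429:
            wait = data.get("parameters", {}).get("retry_after", 30)
            time.sleep(wait)
        else:
            raise RuntimeError(data.get("description"))
    return None

print(member_count("@somegroup"))  # hypothetical group

If the retry_after value keeps pointing far into the future for the same groups, that would support the observation that the limit is tracked per chat rather than per time between requests.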
We have a web portal and we are using Nginx for rate limiting by IP in this portal. Our current settings are something like:
rate: 100 requests per second,
burst: 50
As per the documentation, Nginx uses milliseconds to calculate the number of requests. So, 100 requests per second translates to 1 request every 10 milliseconds. My question/confusion is how the "burst" parameter will behave if our Nginx server receives 10 requests in 10 milliseconds.
And is rate: 100 requests per second, burst: 50 equivalent to rate: 140 requests per second, burst: 10?
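For illustration, here is a rough model of how limit_req accounts for requests under the settings above (leaky bucket, without nodelay). This is a simplified sketch of the algorithm, not nginx's actual code:

def simulate(arrival_times_ms, rate_per_sec=100, burst=50):
    # Excess over the configured rate drains at rate_per_sec; a request that
    # pushes the excess above burst is rejected, the rest are queued/delayed.
    drain_per_ms = rate_per_sec / 1000.0
    excess = 0.0
    prev = None
    decisions = []
    for t in arrival_times_ms:
        if prev is not None:
            excess = max(0.0, excess - (t - prev) * drain_per_ms) + 1
        prev = t
        if excess > burst:
            excess -= 1
            decisions.append((t, "rejected"))
        elif excess > 0:
            decisions.append((t, "queued/delayed"))
        else:
            decisions.append((t, "passed immediately"))
    return decisions

# 10 requests within 10 ms: the first passes, the other 9 fit inside burst=50
# and are delayed so that they are released at roughly one per 10 ms.
print(simulate(range(10)))

Because of that queueing, rate: 100 with burst: 50 is not the same thing as simply raising the rate: the burst requests are admitted but (without nodelay) smoothed back down to the 10 ms spacing rather than served immediately.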
The response time for a search transaction returning more than 1000 records is 4 to 5 times higher for the Singapore client than for the USA client during a 125-user load test. Please suggest what could cause this.
As per the JMeter Glossary:
Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
So the formula is:
Response time = Connect Time + Latency + Actual Server Response time
So the reasons could be:
Due to the long distance from your load generators to Singapore, the network packets take longer to travel back and forth, i.e. the latency is higher, which worsens the results.
Your Singapore instance is slower than the USA one due to, for example, worse hardware specifications, bandwidth, etc.
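One way to see which component dominates is to compare the Connect, Latency, and elapsed columns of the two result files. A minimal sketch (the .jtl file names are hypothetical, and the Connect column has to be enabled in the results saver configuration):

import csv
from statistics import mean

def breakdown(jtl_path):
    # Split elapsed time into its parts so you can tell whether the Singapore
    # penalty is mostly network (connect/latency) or server processing.
    elapsed, latency, connect = [], [], []
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            elapsed.append(int(row["elapsed"]))
            latency.append(int(row["Latency"]))
            connect.append(int(row["Connect"]))
    print(jtl_path)
    print("  avg connect :", mean(connect), "ms")
    print("  avg latency :", mean(latency), "ms")
    print("  avg elapsed :", mean(elapsed), "ms")
    print("  avg body download:", mean(e - l for e, l in zip(elapsed, latency)), "ms")

breakdown("singapore_results.jtl")
breakdown("usa_results.jtl")

If connect and latency account for most of the difference, the distance/network explanation is the likely one; if the gap remains after subtracting them, look at the Singapore instance itself.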
I have 2 similar servers: 16 vCPUs, 2.4 GHz, Intel Xeon E5-2676v3, 64 GiB memory.
The first of them generates load, the second processes requests.
Config load.ini:
[phantom]
address=0.0.0.0 ;target's address (changed, of course)
port=443 ;target's port
rps_schedule=step(1000,10000,1000,15s) ;load scheme
ssl=1
header_http = 1.1
headers = [Host: api.somehost.io]
[Content-Type: application/json]
[Connection: close]
uris = /api/test
Expected:
Load will be generated step by step, starting from 1,000 RPS and adding 1,000 RPS every 15 seconds, up to 10,000 RPS.
We have:
Expected 1000, have ~1000 (avg response time 7 ms).
Expected 2000, have ~2000 (avg response time 30 ms).
Expected 3000, have ~2700 (avg response time 250 ms).
Expected 4000, have ~2700 (avg response time 250 ms).
Beyond that, no matter how much the planned RPS increases, the actual rate stays around ~2700.
I have a couple of hypotheses:
1. Yandex Tank "understands" that the server cannot process such a load and does not increase it.
2. The server cannot establish more connections.
The tested URL, /api/test, is served by a Rails application with nginx as a proxy.
I carried out testing using static files to check the second hypothesis. Results: https://overload.yandex.net/8175
The number of connections was far more than 2,700: about 200,000.
But this number is less than what was requested in the load.ini file: const(500000,15s).
Question: why does Yandex Tank not generate the required load? Or maybe I am interpreting the results incorrectly?
With an average server response time of 250 ms, each phantom instance can send about 4 requests per second.
So with the default number of phantom instances (1000), the tank physically cannot send more than ~4000 RPS: it has no available instances, all of them are busy sending requests and waiting for data.
You could try to use more instances, e.g. by setting instances=10000 in the [phantom] section. It's mentioned in https://yandextank.readthedocs.io/en/latest/core_and_modules.html#basic-options
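The arithmetic behind that is a back-of-the-envelope Little's law check (numbers taken from the results above):

# Each phantom instance handles one request at a time, so the sustainable
# rate is bounded by instances / average response time.
def max_rps(instances, avg_response_time_s):
    return instances / avg_response_time_s

print(max_rps(1000, 0.250))    # ~4000 rps ceiling with the default 1000 instances
print(max_rps(10000, 0.250))   # raising instances to 10000 lifts the ceiling to ~40000 rps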
I am using Apache2 on Ubuntu 9.10, and I am trying to tune my configuration for a web application to reduce latency of responses to HTTP requests.
During a moderately heavy load on my small server, there are 24 apache2 processes handling requests. Additional requests get queued.
Using "netstat", I see 24 connections are ESTABLISHED and 125 connections are TIME_WAIT.
I am trying to figure out if that is considered a reasonable backlog.
Most requests get serviced in a fraction of a second, so I am assuming requests move through the accept-queue fairly quickly, probably within 1 or 2 seconds, but I would like to be more certain.
Can anyone recommend an easy way to measure the time an HTTP request sits in the accept-queue?
The suggestions I have come across so far seem to start the clock after the apache2 worker accepts the connection. I'm trying to quantify the accept-queue delay before that.
thanks in advance,
David Jones
I don't know if you can specifically measure the time before a connection is accepted, but you can measure the latency and variability of response times (and that's the part that really matters) using the ab tool that comes with the Apache utils.
It will generate traffic at whatever concurrency you configure, then break down the response times and give you the standard deviation.
Server Hostname: stackoverflow.com
Document Length: 192529 bytes
Concurrency Level: 3
Time taken for tests: 48.769 seconds
Complete requests: 100
Failed requests: 44
(Connect: 0, Receive: 0, Length: 44, Exceptions: 0)
Write errors: 0
Total transferred: 19427481 bytes
HTML transferred: 19400608 bytes
Requests per second: 2.05 [#/sec] (mean)
Time per request: 1463.078 [ms] (mean)
Time per request: 487.693 [ms] (mean, across all concurrent requests)
Transfer rate: 389.02 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 101 109 9.0 105 152
Processing: 829 1336 488.0 1002 2246
Waiting: 103 115 38.9 104 368
Total: 939 1444 485.2 1112 2351
Percentage of the requests served within a certain time (ms)
50% 1112
66% 1972
75% 1985
80% 1990
90% 2062
95% 2162
98% 2310
99% 2351
100% 2351 (longest request)
(SO didn't perform particularly well :)
The other thing you could do is put a request timestamp in the request itself and compare it as soon as the request is handled. If you generate the traffic on the same machine, or have clocks synchronised, this will let you measure how long a request waits before it is actually processed.
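A minimal sketch of that idea (the X-Request-Start header name and the port are just illustrative choices, not something Apache sets for you): the client stamps each request with the time it was sent, and the application prints how long it took before the request was actually handled.

import time
from wsgiref.simple_server import make_server

def queue_delay_app(environ, start_response):
    # The client sends the Unix time at which it issued the request.
    sent_at = environ.get("HTTP_X_REQUEST_START")
    if sent_at is not None:
        delay_ms = (time.time() - float(sent_at)) * 1000.0
        print(f"pre-handling delay: {delay_ms:.1f} ms")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok\n"]

if __name__ == "__main__":
    make_server("", 8000, queue_delay_app).serve_forever()

With traffic generated on the same machine (so the clocks agree), the printed delay approximates the time the request spent queued before processing, plus whatever network transit there was.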