How to measure nginx client's download speed rate - nginx

I want to have an entry in my access log which shows the client's download speed.
I know about limit_rate, but that is clearly not what I want.
I have also searched the lua_module, but I couldn't find such a variable.

http://wiki.nginx.org/HttpLogModule
$request_time, the time it took nginx to work on the request, in seconds with millisecond precision (just seconds for versions older than 0.5.19)
Maxim Dounin (nginx core dev):
$request_time is always time since start of the request (when first
bytes are read from client) till end of the request (when last bytes
are sent to client and logging happens).
$bytes_sent, the number of bytes transmitted to client
$body_bytes_sent, the number of bytes transmitted to the client, minus the response headers.
With these variables you get the values you need.
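For example, a rough per-request transfer rate can be derived offline from the access log. A minimal sketch, assuming a custom log_format whose lines end with "$request_time $bytes_sent" (the log layout and file path are illustrative, not from the original post):

    # Rough sketch: approximate per-request transfer rate from an nginx
    # access log whose lines end with "$request_time $bytes_sent".
    # The log layout and file path are assumptions for illustration only.
    def approx_rates_bytes_per_sec(log_path="access.log"):
        with open(log_path) as log:
            for line in log:
                fields = line.split()
                request_time = float(fields[-2])   # seconds, millisecond resolution
                bytes_sent = int(fields[-1])       # response headers + body
                if request_time > 0:
                    yield bytes_sent / request_time

    if __name__ == "__main__":
        for rate in approx_rates_bytes_per_sec():
            print(f"{rate:.0f} B/s")

Keep in mind that $request_time also covers reading the request and generating the response, so this ratio is only a rough proxy for the client's download speed, and is most meaningful for large responses.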

$request_time starts when the first bytes of the request are read from the client, so for a large upload it is dominated by the time spent receiving the request body. In that case, $request_length divided by $request_time approximates the client's upload speed.
But the OP asked about the client's download speed, which means we need two pieces of information:
The number of bytes sent by the server (e.g. $bytes_sent, or $body_bytes_sent / Content-Length, assuming the headers are relatively small);
The elapsed time between the first byte sent by the server and the last byte received by the client - which we don't immediately have from the NGINX server variables, since it's up to the TCP subsystem to complete the delivery and this is done asynchronously.
So we need some logic on the client side to calculate and report the latter value back to the application server: a do-it-yourself "speed test".
The NGINX $msec variable (the current time in seconds with millisecond resolution) could be used to timestamp the start of the response with millisecond accuracy, perhaps by using a custom HTTP header field (e.g. X-My-Timestamp).
The X-My-Timestamp value roughly maps to the HTTP response header Date: field.
Further, the client could read its system clock upon the complete reception of the response. The delta between the latter and X-My-Timestamp (assuming UTC and NTP-synchronized clocks) would yield the elapsed time required to calculate the download speed.
The last step would be to have the client report the value(s) back to the server in order to take appropriate action.
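A minimal client-side sketch of this do-it-yourself speed test, assuming the server adds an "X-My-Timestamp: $msec" header (the header name comes from the answer above), that both clocks are NTP-synchronized, and with the URL and the /report endpoint as placeholders:

    # Client-side sketch of the DIY speed test described above.
    # Assumptions: the server sends "X-My-Timestamp: $msec" (header name
    # taken from the answer), clocks are NTP-synchronized, and the URL plus
    # the /report endpoint are placeholders.
    import time
    import urllib.request

    URL = "https://example.com/some/large/file"

    with urllib.request.urlopen(URL) as resp:
        started = float(resp.headers["X-My-Timestamp"])   # server-side $msec
        body = resp.read()                                # last byte received here
        finished = time.time()                            # client clock, in seconds

    elapsed = finished - started
    if elapsed > 0:
        speed = len(body) / elapsed                       # bytes per second
        print(f"download speed ~ {speed:.0f} B/s")
        # Report the measurement back so the server can log it or act on it.
        urllib.request.urlopen(f"https://example.com/report?bps={speed:.0f}")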

Related

Response time for a search with more than 1000 records is high

A search transaction returning more than 1000 records has a much higher response time for the Singapore client (4 to 5 times higher than for the USA client) during a 125-user load test. Please suggest what could be causing this.
As per JMeter Glossary
Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
So the formula is:
Response time = Connect Time + Latency + Actual Server Response time
So the reasons could be:
Due to the long distance from your load generators to Singapore, the network packets take longer to travel back and forth, i.e. the results are worse because of higher latency.
Your Singapore instance is slower than the USA one due to, e.g., worse hardware specifications, bandwidth, etc.
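As a back-of-the-envelope illustration of the formula above (all numbers are made up), higher connect time and network latency alone can produce a several-fold difference in total response time for identical server-side work:

    # Made-up numbers: the same server-side work, but higher connect time and
    # latency for the Singapore client, already yields a ~5x difference.
    def response_time(connect_ms, latency_ms, server_ms):
        return connect_ms + latency_ms + server_ms

    usa = response_time(connect_ms=10, latency_ms=20, server_ms=100)          # 130 ms
    singapore = response_time(connect_ms=250, latency_ms=300, server_ms=100)  # 650 ms
    print(usa, singapore, singapore / usa)                                    # 130 650 5.0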

Nginx: Need explanation of $request_time

I have a server with nginx and I am sending an API response to a client.
I am logging $request_time in the logs. I need to know whether $request_time covers only the time taken to process the request on my server and send the response to the client, or whether it runs until the response has been received by the client.
Does anything change based on whether the connection is keep-alive or not?
I read the docs which said:
According to the nginx docs, the value of the $request_time variable (available only at logging time) is computed when all data has been sent and the connection has been closed (by all upstreams and the proxy as well). Only then is the info appended to the log.
But the connection being closed part is not explained there.
According to the documentation:
$request_time
request processing time in seconds with a milliseconds
resolution; time elapsed between the first bytes were read from the
client and the log write after the last bytes were sent to the client
I.e. it times up until all data has been handed off to the operating system for sending to the client, but not the time it takes those last buffered bytes to actually reach the client.
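If the API is proxied (proxy_pass/fastcgi_pass), one way to separate backend processing from the time spent sending to the client is to log $upstream_response_time next to $request_time and compare the two offline. A rough sketch, assuming each access-log line ends with "$upstream_response_time $request_time" and a single upstream attempt per request (this layout is an assumption, not from the original post):

    # Rough sketch: estimate how much of each request was spent sending the
    # response to the client, assuming log lines end with
    # "$upstream_response_time $request_time" (assumed layout).
    def send_time_estimates(log_path="access.log"):
        with open(log_path) as log:
            for line in log:
                fields = line.split()
                if fields[-2] == "-":              # not proxied / no upstream
                    continue
                upstream_time = float(fields[-2])  # backend processing time
                request_time = float(fields[-1])   # whole request, incl. send
                # The difference approximates the time nginx spent writing the
                # response out towards the client (until handed to the OS).
                yield request_time - upstream_time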

Any minimal source code to measure network latency (client server program)

I want to measure latency across two Linux boxes connected directly via a 10 gig optical fibre. Basically I want to measure RTT latency after a packet sent has been received back on the same machine. So basically client will send a packet to server and take the current time, server will return the packet back to client and second time stamp will be taken once the packet is received. Total latency will be difference of two time stamp.
I would like to measure latency for both the UDP and TCP protocols.
I have tried using sockperf, and it claims to do similar things, but I want something very simple: one-file code that I can use for benchmarking while fully understanding it.
Can you share any links to a simple program that does this? Please note that my interest is only in latency, not in throughput.
Sync the time on the two Linux boxes. Form a data buffer, filling in a timestamp in the header and dummy data in the payload. Then send the data over a TCP/UDP socket to the other end and echo the data back from there. Calculate the elapsed time from the header timestamp, which gives you the RTT.
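A minimal UDP sketch of that echo approach, with the RTT measured entirely on the client using a single clock (so strict clock sync is not needed just for RTT); the peer address and port are placeholders:

    # Run echo_server() on one box and measure_rtt() on the other.
    import socket
    import time

    PEER = ("192.0.2.10", 9000)   # placeholder address/port of the echo box

    def echo_server(port=9000):
        """Send every received datagram straight back to its sender."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        srv.bind(("0.0.0.0", port))
        while True:
            data, addr = srv.recvfrom(4096)
            srv.sendto(data, addr)

    def measure_rtt(count=10):
        cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        cli.settimeout(2.0)
        for i in range(count):
            start = time.perf_counter()
            cli.sendto(b"x" * 64, PEER)        # 64-byte dummy payload
            cli.recvfrom(4096)                 # wait for the echo
            rtt = time.perf_counter() - start
            print(f"packet {i}: RTT {rtt * 1e6:.1f} us")

A TCP variant is the same idea with SOCK_STREAM plus connect(), sendall() and recv().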

Clarification on Fiddler's Transfer Timeline View

I am trying to measure performance of the server side code. Looking at the request from the client, Fiddler gives me the following view:
Fiddler's documentation states: The vertical line indicates the time to first byte of the server's response (Timers.ServerBeginResponse).
Does that mean the time of the server's TCP-level response (e.g. an ACK), or does it mean that the server compiled all the data in less than half a second and then took about 5 seconds to transfer it?
TTFB is the time from the moment the request is fired to getting the first byte back from the server as a response. It includes all the steps for that to happen.
It is the duration from the virtual user making an HTTP request to the
first byte of the page being received by the browser. This time is
made up of the socket connection time, the time taken to send the HTTP
request and the time taken to get the first byte of the page.
So yes: less than half a second to respond, then about 5 seconds to transfer.
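To see the same two phases outside of Fiddler, one can time the first response byte and the full transfer separately. A rough plain-HTTP sketch (host and path are placeholders; HTTPS would additionally need an SSL context around the socket):

    import socket
    import time

    HOST, PATH = "example.com", "/"

    start = time.perf_counter()
    sock = socket.create_connection((HOST, 80), timeout=5)
    request = f"GET {PATH} HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
    sock.sendall(request.encode())

    first = sock.recv(4096)                    # first response bytes arrive
    ttfb = time.perf_counter() - start         # ~ Timers.ServerBeginResponse

    received = len(first)
    while chunk := sock.recv(4096):            # drain the rest of the response
        received += len(chunk)
    total = time.perf_counter() - start

    print(f"TTFB {ttfb:.3f}s, total {total:.3f}s, transfer {total - ttfb:.3f}s")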

How Browser Cache Behaves If Local Clock is Not Consistent With Server Clock?

While reading http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html, I found that the caching algorithm is more complex than I thought.
According to RFC 2616, an HTTP request is not sent as long as the cached response is fresh, and
response_is_fresh = (freshness_lifetime > current_age)
The freshness_lifetime can be derived from the max-age directive or the Expires header, neither of which has anything to do with the local clock. However, the calculation of current_age depends on the local clock.
So, if the browser's local clock is not consistent with the clock on the server side, is it possible that HTTP caching fails to prevent unnecessary requests from being sent?
Thanks
Their clocks do not have to be in sync, but the client needs a working clock to be able to determine the age of a cached resource and match that against max-age. If max-age is not present in the response, a client could calculate it by comparing the Date and Expires headers to each other.
However, if the client were to suffer from extreme clock skew, the cache would break down and resources could get cached incorrectly, since their age could not be reliably determined.
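A simplified sketch of that freshness check (it ignores the Age header and the request/response time corrections of RFC 2616 section 13.2.3), showing where the local clock enters the calculation:

    import email.utils
    import time

    def is_fresh(headers, response_time, now=None):
        """headers: response headers as a dict; response_time: the local time
        the response was received; now: the local clock (a skewed clock skews
        the computed age)."""
        now = time.time() if now is None else now
        date_value = email.utils.parsedate_to_datetime(headers["Date"]).timestamp()

        cache_control = headers.get("Cache-Control", "")
        if "max-age=" in cache_control:
            freshness_lifetime = int(cache_control.split("max-age=")[1].split(",")[0])
        else:
            expires = email.utils.parsedate_to_datetime(headers["Expires"]).timestamp()
            freshness_lifetime = expires - date_value

        apparent_age = max(0.0, response_time - date_value)
        resident_time = now - response_time        # local clock used here
        current_age = apparent_age + resident_time
        return freshness_lifetime > current_age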
