I am trying to measure the performance of the server-side code. Looking at the request from the client, Fiddler gives me the following view:
Fiddler's documentation states: The vertical line indicates the time to first byte of the server's response (Timers.ServerBeginResponse).
Does that mean the time of the server's TCP response (e.g. an ACK), or does it mean that the server compiled all the data in less than half a second and then took about 5 seconds to transfer it?
TTFB is the time from the moment the request is fired until the first byte of the server's response comes back. It includes all the steps needed for that to happen.
It is the duration from the virtual user making an HTTP request to the first byte of the page being received by the browser. This time is made up of the socket connection time, the time taken to send the HTTP request, and the time taken to get the first byte of the page.
So yes: less than half a second to respond, then about 5 seconds to transfer.
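If you want to reproduce that split yourself, here is a minimal sketch using Python's http.client (example.com is a stand-in host, not from the original question). getresponse() returns once the status line and headers have arrived, so the first timestamp roughly corresponds to Fiddler's ServerBeginResponse, and the second to the end of the transfer.

    import http.client
    import time

    HOST = "example.com"   # hypothetical host, for illustration only
    PATH = "/"

    conn = http.client.HTTPConnection(HOST, 80, timeout=30)

    start = time.monotonic()
    conn.request("GET", PATH)          # connects (if needed) and sends the request
    resp = conn.getresponse()          # returns once the status line and headers arrive
    ttfb = time.monotonic() - start    # roughly Fiddler's ServerBeginResponse offset

    body = resp.read()                 # read the rest of the response body
    total = time.monotonic() - start   # roughly the end of the transfer

    conn.close()
    print(f"TTFB : {ttfb * 1000:.0f} ms")
    print(f"Total: {total * 1000:.0f} ms ({len(body)} bytes)")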
I'm troubleshooting slowness in my web API application. The issue I see is:
- A packet arrives containing the HTTP POST with headers only.
- Then time passes (sometimes very long, ~30 sec; sometimes milliseconds).
- Another packet arrives containing the payload.
The split is always between headers and payload.
This causes the processing of the request in the app to be delayed by this interval.
Is it normal for a request to be split across packets like that?
Have you encountered it?
Is it possible an AWS ELBv1 is somehow doing that?
Any direction will help - I'm totally confused...
Where in Paw 2 can I see the duration of a request / response? I suppose I could subtract the response time from the request time but there must be an easier way.
It's shown in the title bar right after the request is completed.
Though, as of version Paw 2.1.1, this time isn't accurate enough to judge your server or network connection performance. The underlying HTTP library is optimized for logging/debugging and loses time between connecting, sending, receiving, etc.
For JBoss's AccessLogValve, we have activated the optional field,
%D - Time taken to process the request, in millis
This has been super-valuable in showing us which responses took the most time. I now need details on how this field is calculated. Specifically, when does JBoss stop the "time taken to process..." clock? For example, does JBoss stop it when the [first segment of the] response is given to the TCP/IP stack to transmit to the client? Or when the client sends an [ACK] following the last segment? Or when a network timeout occurs?
Knowing this will help me find root causes for some crazy-long response times in our production access logs.
I want to have an entry in my access log that shows the client's download speed.
I know about limit_rate, but that is clearly not what I want.
I have also searched the lua module, but I couldn't find such a variable.
http://wiki.nginx.org/HttpLogModule
$request_time, the time it took nginx to work on the request, in seconds with millisecond precision (just seconds for versions older than 0.5.19)
Maxim Dounin (nginx core dev):
$request_time is always time since start of the request (when first
bytes are read from client) till end of the request (when last bytes
are sent to client and logging happens).
$bytes_sent, the number of bytes transmitted to client
$body_bytes_sent, the number of bytes transmitted to the client, minus the response headers.
With these variables you get the numbers you need.
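As a rough illustration of combining those fields, here is a small post-processing sketch in Python. It assumes a hypothetical custom log_format whose last two fields are $body_bytes_sent and $request_time; as the answer below explains, this only approximates the client's download speed, since $request_time ends when nginx finishes writing the response, not when the client finishes receiving it.

    # Assumes a hypothetical nginx log_format such as:
    #   log_format timing '$remote_addr [$time_local] "$request" '
    #                     '$status $body_bytes_sent $request_time';
    line = '203.0.113.7 [10/Oct/2023:13:55:36 +0000] "GET /file HTTP/1.1" 200 1048576 2.134'

    prefix, body_bytes, request_time = line.rsplit(" ", 2)
    body_bytes = int(body_bytes)
    request_time = float(request_time)

    if request_time > 0:
        print(f"~{body_bytes / request_time / 1024:.1f} KiB/s")   # -> roughly 480 KiB/s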
Note that $request_time covers the whole exchange: it runs from the first bytes read from the client until the last bytes are handed to the client and the request is logged (see the quote above), so it lumps together reading the request, server-side processing, and writing the response, and it ends when nginx finishes writing rather than when the client actually finishes receiving.
The OP asked about the client's download speed, which means we need two pieces of information:
The number of bytes sent by the server (e.g. $bytes_sent, or $body_bytes_sent/Content-Length, assuming the headers are relatively small);
The elapsed time between the first byte sent by the server and the last byte received by the client - which we don't immediately have from the NGINX server variables, since it's up to the TCP subsystem to complete the delivery and this is done asynchronously.
So we need some logic on the client side to calculate and report the latter value back to the application server: a do-it-yourself "speed test".
The NGINX $msec variable (current time in seconds with millisecond resolution) could be used to timestamp the start of the response with millisecond accuracy, perhaps via a custom HTTP header field (e.g. X-My-Timestamp).
The X-My-Timestamp value roughly maps to the HTTP response header Date: field.
Further, the client could read its system clock upon the complete reception of the response. The delta between the latter and X-My-Timestamp (assuming UTC and NTP-synchronized clocks) would yield the elapsed time required to calculate the download speed.
The last step would be to have the client report the value(s) back to the server in order to take appropriate action.
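Here is a minimal client-side sketch of that do-it-yourself speed test, in Python. It assumes the server attaches the timestamp with something like add_header X-My-Timestamp $msec; (Unix time in seconds with millisecond resolution), that the client's clock is NTP-synchronized, and that a /report-speed endpoint exists to receive the result; the URLs and header name are illustrative.

    import time
    import urllib.parse
    import urllib.request

    # Hypothetical URL; the server is assumed to add "X-My-Timestamp: $msec"
    # to the response (e.g. via: add_header X-My-Timestamp $msec;).
    URL = "https://example.com/big-file"

    resp = urllib.request.urlopen(URL)
    body = resp.read()                     # blocks until the last byte has arrived
    finished = time.time()                 # client clock, assumed NTP-synchronized

    server_start = float(resp.headers["X-My-Timestamp"])
    elapsed = finished - server_start      # seconds from server send to full receipt

    if elapsed > 0:
        print(f"{len(body)} bytes in {elapsed:.3f} s ~ {len(body) / elapsed / 1024:.1f} KiB/s")

    # Report the measurement back to a hypothetical endpoint so the server can log it.
    data = urllib.parse.urlencode({"bytes": len(body), "seconds": f"{elapsed:.3f}"}).encode()
    urllib.request.urlopen("https://example.com/report-speed", data=data)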
I'm running my app on my local machine and I'm making ajax requests from Chrome. When I make a request, I see that the network tab shows 2 numbers in the Time column.
How should I interpret these numbers? The app is making a database call and then processes the data before sending it to the client. On the first row, it shows 133/106; does this mean that once the request hits the local machine it only takes 27 ms to process on the server?
Thanks.
Latency, the time between making the request and the server's first response, is shown in the lighter shade within each bar.
106 ms is the Latency, 133 ms is the Time, and 27 ms is the time spent receiving data.
Have a look at the Chrome Network tab timeline view.
Waiting is time spent waiting for initial response from server. Receiving is time spent receiving the response data.
So, roughly,
Latency = Waiting
Time = Waiting + Receiving
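Applied to the numbers from the question's first row, that relationship gives the following (all values in milliseconds; a trivial check, not Chrome's own code):

    time_total = 133   # "Time" column: full request duration
    waiting    = 106   # "Latency" / Waiting: time until the first response byte

    receiving = time_total - waiting
    print(f"Receiving: {receiving} ms")   # -> 27 ms spent downloading the response body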