Nginx: Need explanation of $request_time - http

I have a server with nginx and I am sending an API response to a client.
I am logging $request_time in my access log. I need to know whether $request_time covers only the time taken to process the request on my server and send the response to the client, or whether it runs until the response has actually been received by the client.
Does anything change based on whether the connection is keep-alive or not?
I read the docs which said:
According to the nginx docs, the value of the $request_time variable (available only at logging time) is computed when all data have been sent and the connection has been closed (by all upstreams and the proxy as well), and only then is the info appended to the log.
But the connection being closed part is not explained there.

According to the documentation:
$request_time
request processing time in seconds with a milliseconds
resolution; time elapsed between the first bytes were read from the
client and the log write after the last bytes were sent to the client
I.e. it measures up until all data has been sent to the client, but not the time it takes for the client to finish receiving it.
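To see the difference in practice, here is a minimal client-side sketch using only the Python standard library (the host and path are placeholders, not anything from the question): it measures time-to-first-byte and time-to-last-byte as observed by the client, which you can then compare with the $request_time nginx logs for the same request.

```python
import http.client
import time

HOST = "example.com"   # assumption: replace with your nginx host
PATH = "/api/data"     # assumption: replace with your API endpoint

conn = http.client.HTTPConnection(HOST, timeout=10)

start = time.monotonic()
conn.request("GET", PATH)
resp = conn.getresponse()     # status line and headers have been received
first_byte = time.monotonic()
body = resp.read()            # drain the full response body
last_byte = time.monotonic()
conn.close()

print("status            :", resp.status)
print("time to headers   : %.3fs" % (first_byte - start))
print("time to last byte : %.3fs" % (last_byte - start))
# Compare the last figure with $request_time in the nginx access log for the
# same request: the client-side number includes delivery over the network,
# while $request_time stops once nginx has sent the last bytes.
```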

Related

How does the browser map the web response back to the request?

Say I make a web request (www.amazon.com) to the Amazon web server through my browser. The browser makes the connection to the Internet through my Internet service provider.
The request reaches the Amazon server, which processes it and sends back the response. Two questions here:
Does the Amazon server make a new connection with the Internet to send the response back, or does the incoming request (initiated by me) wait on the socket until Amazon produces the response?
Once my browser receives the response, how does it map the response (sent from Amazon) back to the particular request? I believe there must be some unique identifier like a requestId present in the response through which the browser maps it to the request. Is that correct?
Does the Amazon server make a new connection with the Internet to send the response back, or does the incoming request (initiated by me) wait on the socket until Amazon produces the response?
It uses the same connection. Most of the time it's not even possible to connect back to a web browser due to firewall restrictions or Network Address Translation (NAT).
Once my browser receives the response, how does it map the response (sent from Amazon) back to the particular request? I believe there must be some unique identifier like a requestId present in the response through which the browser maps it to the request. Is that correct?
It receives the response on the same socket, so the socket is the identifier. If HTTP/2 multiplexing is used, each multiplexed stream has a stream identifier, which is used to map the response back to the request.
The client opens a TCP connection to the server, sends an HTTP request, and the server sends the response using the same connection. So the browser knows from the connection that the response belongs to a specific request. This applies to basic HTTP 1.
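A toy sketch of that idea in Python, using a raw socket (example.com is just a placeholder host): the request is written to one TCP connection and the response is read back from that same socket, so no request identifier is needed.

```python
import socket

HOST = "example.com"   # placeholder: any plain-HTTP host works

with socket.create_connection((HOST, 80), timeout=10) as sock:
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: " + HOST + "\r\n"
        "Connection: close\r\n"   # ask the server to close when it is done
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))

    # The response comes back on the very same socket; no request id involved.
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())   # e.g. "HTTP/1.1 200 OK"
```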
This has to be distinguished from the programming model of an AJAX web application which is asynchronous and not synchronous. The application does not actively wait for a response. It is instead triggered later when the response arrives. The connection handling described above is what happens "under the hood".
Back to the connection handling: there are optimizations of HTTP that make things more complicated. HTTP 1.1 has a feature called "keep-alive", and HTTP 2 goes further in this direction. The idea is to send more data over a single TCP connection, because establishing a TCP connection is expensive (three-way handshake, slow start). So multiple requests and responses are sent over a single TCP connection. Your question arises again in the case of this optimization: if, for example, there is a sequence of requests A, B and a sequence of corresponding responses B, A within a single HTTP connection, how does the browser know which request a response belongs to? HTTP 2 introduces the concept of streams (RFC 7540, section 5):
A single HTTP/2 connection can contain multiple concurrently open
streams, with either endpoint interleaving frames from multiple
streams.
The order in which frames are sent on a stream is significant.
Streams are identified by an integer.
So, the stream identifier and the order within a stream can be used by the browser to find out which request a response belongs to.
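As a purely illustrative sketch (not a real HTTP/2 implementation; the function names and frames are made up), this is roughly the bookkeeping a client needs to demultiplex interleaved frames: each frame carries its stream identifier, so its bytes are appended to whichever request opened that stream.

```python
in_flight = {}                     # stream id -> (request, buffered bytes)

def open_stream(stream_id, request):
    in_flight[stream_id] = (request, bytearray())

def on_data_frame(stream_id, payload, end_stream):
    request, buf = in_flight[stream_id]
    buf.extend(payload)
    if end_stream:                 # last frame of this stream
        print("response for %r complete: %d bytes" % (request, len(buf)))
        del in_flight[stream_id]

# Client-initiated streams use odd identifiers (1, 3, 5, ...).
open_stream(1, "GET /index.html")
open_stream(3, "GET /logo.png")

# Frames from different streams may arrive interleaved in any order; the
# stream id on each frame tells us which request the data belongs to.
on_data_frame(3, b"\x89PNG...", end_stream=False)
on_data_frame(1, b"<html>...</html>", end_stream=True)
on_data_frame(3, b"...rest of the image", end_stream=True)
```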
HTTP 2 introduces another interesting feature called "push". The server can proactively send resources to the client that the client has not even requested. So resources such as images can already be sent when the HTML is requested, avoiding another communication round trip.
HTTP uses the Transmission Control Protocol (TCP). This is how it happens:
Does the Amazon server make a new connection with the Internet to send the response back, or does the incoming request (initiated by me) wait on the socket until Amazon produces the response?
No. Most browsers use HTTP 1.1, so the connection between client and server is established only once and reused until closed (a persistent connection).
Once my browser receives the response, how does it map the response (sent from Amazon) back to the particular request? I believe there must be some unique identifier like a requestId present in the response through which the browser maps it to the request. Is that correct?
There is a protocol (HTTP) that governs how the messages are exchanged. HTTP dictates that responses must arrive in the order they were requested. So it goes like:
Request;Response;Request;Response;Request;Response;...
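As a rough illustration of that ordering, here is a small Python sketch (example.com and the paths are placeholders) that sends two requests over one persistent HTTP 1.1 connection and reads each response back in the order it was requested.

```python
import http.client

# One TCP connection (kept alive by default in HTTP/1.1) carries several
# request/response pairs; responses are read back in the order requested.
conn = http.client.HTTPConnection("example.com", timeout=10)  # placeholder host

for path in ("/", "/about"):       # hypothetical paths
    conn.request("GET", path)
    resp = conn.getresponse()
    body = resp.read()             # drain this response before the next request
    print(path, resp.status, resp.reason, len(body), "bytes")

conn.close()
```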
There is also a specific format for the HTTP request message (from your browser, the HTTP client) and the HTTP response message (from the Amazon HTTP server). Response status codes let the browser know whether the request succeeded, and otherwise describe the error.
A few sample codes: 200 OK, 301 Moved Permanently, 404 Not Found, 500 Internal Server Error.

HTTP Client Programming - How to know when the server has sent all the data

I am new to HTTP client and TCP/IP programming, so my question might be vague to experienced people, but please try to answer it.
I am implementing an HTTP client. After sending the request to the server I wait for a read event (asynchronous socket), and when the read event comes I extract the data with a read call and store it in a local buffer.
How do I know that the server has sent all the data, so that I can start processing the information?
I am confused at this stage.
Well, the content can be returned all at once or in chunks. When the server knows the length of the payload beforehand, it provides the Content-Length header in the response. But sometimes the server does not know the total length of the payload before it starts transmitting; then it uses chunked transfer encoding.
The response from the server should contain an HTTP header field named Content-Length. You can use that length to determine the amount of data the server should send, and you are done receiving data once the server has sent the given amount.
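A sketch of the two framing cases these answers describe, assuming rfile is a binary file-like object wrapping the socket (for example sock.makefile("rb")) positioned just after the response headers, and headers is a dict of lower-cased header names; trailers after a chunked body are ignored for simplicity.

```python
def read_exact(rfile, n):
    """Read exactly n bytes, or raise if the connection ends early."""
    data = b""
    while len(data) < n:
        chunk = rfile.read(n - len(data))
        if not chunk:
            raise ConnectionError("connection closed before the body was complete")
        data += chunk
    return data

def read_body(rfile, headers):
    if headers.get("transfer-encoding", "").lower() == "chunked":
        body = b""
        while True:
            size_line = rfile.readline().strip()        # e.g. b"1a3f"
            size = int(size_line.split(b";")[0], 16)    # ignore chunk extensions
            if size == 0:                               # zero-sized chunk: the end
                rfile.readline()                        # final CRLF (trailers ignored)
                return body
            body += read_exact(rfile, size)
            rfile.readline()                            # CRLF that follows each chunk
    elif "content-length" in headers:
        # Fixed-length body: read exactly Content-Length bytes and we are done.
        return read_exact(rfile, int(headers["content-length"]))
    else:
        # No framing information: the server signals the end by closing.
        return rfile.read()
```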

JBoss access log duration

For JBoss's AccessLogValve, we have activated the optional field,
%D - Time taken to process the request, in millis
This has been super-valuable in showing us which responses took the most time. I now need details on how this field is calculated. Specifically, when does JBoss stop the "time taken to process..." clock? For example, does JBoss stop it when the [first segment of the] response is given to the TCP/IP stack to transmit to the client? Or when the client sends an [ACK] following the last segment? Or when a network timeout occurs?
Knowing this will help me find root causes for some crazy-long response times in our production access logs.

How to measure an nginx client's download speed

I want to have an entry in my access log which shows the client's download speed.
I know about limit_rate, but clearly that is not what I want.
I have also searched the lua module, but I couldn't find such a variable.
http://wiki.nginx.org/HttpLogModule
$request_time, the time it took nginx to work on the request, in seconds with millisecond precision (just seconds for versions older than 0.5.19)
Maxim Dounin (nginx core dev):
$request_time is always time since start of the request (when first
bytes are read from client) till end of the request (when last bytes
are sent to client and logging happens).
$bytes_sent, the number of bytes transmitted to the client
$body_bytes_sent, the number of bytes transmitted to the client, excluding the response headers
With these variables you get the amounts you need.
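For example, a rough Python sketch that computes nginx-side throughput from those fields, assuming a custom log_format whose last two fields are $request_time and $bytes_sent (this measures how fast nginx pushed the bytes out, which, as the next answer explains, is not necessarily the client's real download speed):

```python
import sys

# Assumed log_format (last two fields):
#   log_format timed '$remote_addr [$time_local] "$request" '
#                    '$status $request_time $bytes_sent';
# Usage: python3 log_speed.py < access.log
for line in sys.stdin:
    fields = line.split()
    request_time = float(fields[-2])
    bytes_sent = int(fields[-1])
    if request_time > 0:
        rate = bytes_sent / request_time
        print("%10.0f bytes/s  %s" % (rate, line.rstrip()))
```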
$request_time spans the whole request, from the first bytes read from the client until nginx has sent the last bytes of the response, so it says nothing about when the client actually finished receiving the data. For a large upload with a small, quick response, $request_length divided by $request_time gives a rough indication of the client's upload speed.
But the OP asked about the client's download speed, which means we need two pieces of information:
The number of bytes sent by the server (e.g. $bytes_sent, or $body_bytes_sent/Content-Length if the headers are relatively small);
The elapsed time between the first byte sent by the server and the last byte received by the client - which we don't immediately have from the NGINX server variables, since it's up to the TCP subsystem to complete the delivery and this is done asynchronously.
So we need some logic on the client side to calculate and report the latter value back to the application server: a do-it-yourself "speed test".
The NGINX $msec (current time in seconds with the milliseconds resolution) could be used to timestamp the start of the response with millisecond accuracy, perhaps by using a custom HTTP header field (e.g. X-My-Timestamp).
The X-My-Timestamp value roughly maps to the HTTP response header Date: field.
Further, the client could read its system clock upon the complete reception of the response. The delta between the latter and X-My-Timestamp (assuming UTC and NTP-synchronized clocks) would yield the elapsed time required to calculate the download speed.
The last step would be to have the client report the value(s) back to the server in order to take appropriate action.
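A client-side sketch of that do-it-yourself speed test, assuming nginx stamps the start of the response with something like add_header X-My-Timestamp $msec; and that the client and server clocks are NTP-synchronized (the URL and the header name are placeholders taken from the description above):

```python
import time
import urllib.request

URL = "http://example.com/api/data"   # placeholder URL

with urllib.request.urlopen(URL, timeout=10) as resp:
    # Hypothetical header set on the server with: add_header X-My-Timestamp $msec;
    started = float(resp.headers["X-My-Timestamp"])
    body = resp.read()                 # the last byte has been received here
    finished = time.time()

elapsed = finished - started           # assumes NTP-synchronized clocks
print("received %d bytes in %.3fs = %.1f KiB/s"
      % (len(body), elapsed, len(body) / elapsed / 1024))
# The client would then report this figure back to the application server
# (for example with a follow-up POST) so it can be logged and acted upon.
```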

Clarification on Fiddler's Transfer Timeline View

I am trying to measure performance of the server side code. Looking at the request from the client, Fiddler gives me the following view:
Fiddler's documentation states: The vertical line indicates the time to first byte of the server's response (Timers.ServerBeginResponse).
Does that mean the time of the server's TCP response (e.g. an ACK), or does it mean that the server compiled all the data in less than half a second and took about 5 seconds to transfer it?
TTFB is the time from the moment the request is fired to getting the first byte back from the server as a response. It includes all the steps for that to happen.
It is the duration from the virtual user making an HTTP request to the first byte of the page being received by the browser. This time is made up of the socket connection time, the time taken to send the HTTP request, and the time taken to get the first byte of the page.
So yes: less than half a second to respond, then about 5 seconds to transfer.
