Fiddler ServerDoneResponse vs GotResponseHeaders - http

I am looking at fiddler statistics below is my result:
ACTUAL PERFORMANCE
--------------
ClientConnected: 13:37:48.551
ClientBeginRequest: 13:37:49.281
GotRequestHeaders: 13:37:49.281
ClientDoneRequest: 13:37:49.283
Determine Gateway: 0ms
DNS Lookup: 0ms
TCP/IP Connect: 0ms
HTTPS Handshake: 0ms
ServerConnected: 13:37:48.708
FiddlerBeginRequest: 13:37:49.283
ServerGotRequest: 13:37:49.284
ServerBeginResponse: 13:37:49.627
GotResponseHeaders: 13:37:49.627
ServerDoneResponse: 13:38:25.833
ClientBeginResponse: 13:38:25.835
ClientDoneResponse: 13:38:25.872
Overall Elapsed: 0:00:36.590
As you can see, ServerDoneResponse minus GotResponseHeaders seems to be the time required for the client to get the response from the server.
I have checked that ServerBeginResponse means "Exact time that Fiddler got the first bytes of the server's HTTP response."
ServerDoneResponse - "Exact time that Fiddler got the last bytes of the server's HTTP response."
(From http://fiddler.wikidot.com/timers)
But it does not mention GotResponseHeaders.
So my understanding is that transferring the response from the server to the client took most of the time, i.e. ServerDoneResponse (13:38:25.833) minus ServerBeginResponse (13:37:49.627). Is that correct?

I could not find any documentation for this, but I think 'GotResponseHeaders' is the time when Fiddler starts receiving the response from the server; it does not necessarily mean the response has completed at that time. To confirm this, I ran tests with the two scenarios below:
1. Added a delay in the server-side code so that it starts the response a little later.
2. Downloaded a 10 MB file so that the response takes some time to complete.
In the first case, when I added a delay before responding, 'ServerBeginResponse' was delayed too, and 'GotResponseHeaders' was the same as 'ServerBeginResponse'. This suggests Fiddler got the headers at the same time the server started responding (again, not necessarily that the response had completed).
In the second case too, 'GotResponseHeaders' was the same as 'ServerBeginResponse', suggesting that 'GotResponseHeaders' is the time when Fiddler started receiving the response. Note that since the file download took some time, 'ServerDoneResponse' was delayed by a few seconds.
In the case above, if ServerDoneResponse minus GotResponseHeaders is large, you should check the size of the response and the network latency.
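To make the arithmetic concrete, here is a small Python sketch (the "HH:MM:SS.fff" parsing format is an assumption based on the Statistics output quoted above) that computes the interval spent receiving the response body after the headers arrived:

```python
from datetime import datetime, timedelta

# Parse Fiddler's "HH:MM:SS.fff" timer values (format assumed from the
# Statistics output above).
def parse_timer(value: str) -> datetime:
    return datetime.strptime(value, "%H:%M:%S.%f")

got_headers = parse_timer("13:37:49.627")   # GotResponseHeaders
done = parse_timer("13:38:25.833")          # ServerDoneResponse

# Time spent receiving the response body after the headers arrived:
body_transfer = done - got_headers
print(body_transfer)  # 0:00:36.206000
```

This matches the Overall Elapsed value almost exactly, confirming that nearly all of the 36 seconds went into transferring the body.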

Related

In Fiddler, identify time it took to download a resource (like an image, js, css)

After capturing network traffic in Fiddler while accessing an application via a browser, how can I determine the amount of time it took for the browser to download a given resource?
For example, the browser is trying to download an image. I can see the usual statistics about client and server response times on the GET request, but which metric tells me how long it actually took to download the image itself?
ClientConnected: 09:12:32.951
ClientBeginRequest: 09:12:32.951
GotRequestHeaders: 00:00:00.000
ClientDoneRequest: 09:12:32.951
Determine Gateway: 0ms
DNS Lookup: 0ms
TCP/IP Connect: 0ms
HTTPS Handshake: 0ms
ServerConnected: 09:12:32.951
FiddlerBeginRequest: 09:12:32.951
ServerGotRequest: 09:12:32.951
ServerBeginResponse: 09:12:33.123
GotResponseHeaders: 00:00:00.000
ServerDoneResponse: 09:12:33.139
ClientBeginResponse: 09:12:33.139
ClientDoneResponse: 09:12:33.139
Overall Elapsed: 0:00:00.188
An HTTP request is a request irrespective of whether it's an API call or a request for an image resource. As @Robert mentioned in his comment, you should use the browser's integrated dev tools to measure such performance metrics.
One more thing to understand is that an HTML page is typically composed of multiple resources, so you will see multiple HTTP requests in Fiddler (or the browser's network tool) for the same page. For example, the following happens in order when you hit a URL that returns HTML:
1. The browser downloads the HTML page.
2. Once the browser has the HTML page, it starts parsing it in order to render it, and wherever it finds an <img...> tag it makes another request (e.g. http://yourwebsite.com/image.jpg), which appears as a new HTTP request in Fiddler with the same performance stats.
One more thing you should pay attention to is the expiry policy set on a resource: typically the browser downloads a resource the first time and, for a specific period after that, fetches it from cache instead of downloading it again to improve performance, so the stats might change next time.
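As a self-contained illustration (using a throwaway local server rather than a real site), the following Python sketch separates "time until the response headers arrive" from "time until the body has fully downloaded", which is roughly the GotResponseHeaders / ServerDoneResponse split Fiddler reports for a resource:

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A tiny local server stands in for the real site; it sends headers
# immediately, then drips out a 64 KiB "image" slowly.
class SlowImageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "65536")
        self.end_headers()
        for _ in range(64):                  # body in 1 KiB chunks
            self.wfile.write(b"x" * 1024)
            time.sleep(0.005)

    def log_message(self, *args):            # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), SlowImageHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

start = time.monotonic()
resp = urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/image.jpg")
headers_time = time.monotonic() - start      # urlopen returns once headers arrive
body = resp.read()                           # then stream the whole body
total_time = time.monotonic() - start
server.shutdown()

print(f"headers after {headers_time:.3f}s, "
      f"full body ({len(body)} bytes) after {total_time:.3f}s")
```

The gap between the two timings is the actual download time of the resource body.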

Can HTTP request fail half way?

I am talking about only one case here.
client sent a request to the server -> the server received it and returned a response -> unfortunately, the response was dropped.
I have only one question about this.
Is this case even possible? If it's possible, then what should the response code be, or will the client simply see it as a read timeout?
Since I want to sync status between client and server with 100% accuracy no matter how poor the network is, the answer to this question greatly affects the client's 'retry on failure' strategy.
Any comment is appreciated.
Yes, the situation you have described is possible and occurs regularly. It is called "packet loss". Since the packet is lost, the response never reaches the client, so no response code could possibly be received. Web browsers will display this as "Error connecting to server" or similar.
HTTP requests and responses are generally carried inside TCP packets. If a TCP packet is not acknowledged within the expected time window, the sender retransmits it. A packet will only be retransmitted a certain number of times before a timeout error occurs and the connection is considered broken or dead. (The number of retransmission attempts before a TCP timeout can be configured on both the client and the server.)
Is this case even possible?
Yes. It's easy to see why if you picture a physical cable between the client and the server. If I send a request down the cable to the server, and then, before the server has a chance to respond, unplug the cable, the server will receive the request, but the client will never "hear" the response.
If it's possible, then what should the response code be, or will the client simply see it as a read timeout?
It will be a timeout. If we go back to our physical cable example, the client is sitting waiting for a response that will never come. Hopefully, it will eventually give up.
Exactly how this is surfaced depends on the tool or library you're using, however: it might give you a specific error code for "timeout" or "network error"; it might wrap it up as some internal 5xx status code; or it might raise an exception inside your code.
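For the "retry on failure" strategy mentioned in the question, the key point is that a lost response is indistinguishable from a lost request, so a retried request may be executed twice by the server. A common mitigation is to send the same idempotency key on every attempt so the server can deduplicate replays. A minimal Python sketch (the URL is hypothetical, and the "Idempotency-Key" header is a convention the server must be written to honor, not something HTTP provides automatically):

```python
import socket
import urllib.error
import urllib.request
import uuid

# Retry-on-failure client sketch: the same Idempotency-Key is sent on
# every attempt so the server can detect and deduplicate replays.
def post_with_retry(url, data, attempts=3, timeout=2.0):
    key = str(uuid.uuid4())              # one key for all attempts
    last_error = None
    for _ in range(attempts):
        req = urllib.request.Request(
            url, data=data, headers={"Idempotency-Key": key})
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.status       # a response made it back
        except (urllib.error.URLError, socket.timeout) as exc:
            last_error = exc             # timeout / network error: try again
    raise last_error                     # still unknown whether the server acted
```

Note that when the final attempt fails, the client genuinely cannot know whether the server processed the request; only server-side deduplication gives the 100% accuracy the question asks for.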

Tcp latency analysis in wireshark

What factors do I need to check when analysing a latency issue in a Wireshark capture taken at the firewall?
I know about timestamps (time since the previous packet arrived), but nothing beyond that.
If you are talking about the latency of an HTTP transaction, you can consider 3 aspects:
Round-trip time: typically the time from your HTTP request to the TCP ACK for the request.
Initial response time: the time between your HTTP request and the first packet of the HTTP response.
Total response time: the time between your HTTP request and the last packet of the HTTP response (Wireshark will mark the last packet of the response, since that's when it sees the full HTTP response).
Good luck.
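The three figures can be computed directly from the packet timestamps in the capture. A small sketch (the timestamp values below are made up for illustration; in practice you would read them off Wireshark's Time column for the relevant packets):

```python
# Illustrative packet timestamps in seconds, as shown in Wireshark's
# Time column (values are made up):
request_sent   = 10.000   # packet carrying the HTTP request
request_acked  = 10.042   # TCP ACK for that request
first_resp_pkt = 10.310   # first packet of the HTTP response
last_resp_pkt  = 12.870   # packet Wireshark marks as the full HTTP response

roundtrip_time      = request_acked - request_sent    # network round trip
initial_response    = first_resp_pkt - request_sent   # server think time + RTT
total_response_time = last_resp_pkt - request_sent    # includes body transfer

print(roundtrip_time, initial_response, total_response_time)
```

Comparing the three tells you where the time went: a large round-trip time points at the network, a large gap between round-trip and initial response points at the server, and a large gap between initial and total response points at the transfer of the body.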

JBoss access log duration

For JBoss's AccessLogValve, we have activated the optional field,
%D - Time taken to process the request, in millis
This has been super-valuable in showing us which responses took the most time. I now need details on how this field is calculated. Specifically, when does JBoss stop the "time taken to process..." clock? For example, does JBoss stop it when the [first segment of the] response is given to the TCP/IP stack to transmit to the client? Or when the client sends an [ACK] following the last segment? Or when a network timeout occurs?
Knowing this will help me find root causes for some crazy-long response times in our production access logs.
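Whatever the exact stopping point turns out to be, pulling the %D values out of the access log is a quick way to find the outliers. A Python sketch (the log lines and their layout here are hypothetical; adjust the regex to your actual AccessLogValve pattern, which in this example is assumed to end with %D):

```python
import re

# Hypothetical access-log lines whose pattern ends with %D, the
# processing time in milliseconds.
lines = [
    '10.0.0.5 [12/Mar/2015:10:01:22 +0000] "GET /app/home HTTP/1.1" 200 5120 48',
    '10.0.0.9 [12/Mar/2015:10:01:23 +0000] "POST /app/save HTTP/1.1" 200 310 92341',
]

# Capture the request line, status, byte count, and the trailing %D field.
pattern = re.compile(
    r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\S+) (?P<millis>\d+)$')

slow = [(m.group("request"), int(m.group("millis")))
        for m in (pattern.search(line) for line in lines)
        if m and int(m.group("millis")) > 10_000]   # flag responses over 10 s
print(slow)  # [('POST /app/save HTTP/1.1', 92341)]
```

Correlating the flagged requests with response sizes and client networks can help separate slow server processing from slow clients draining the response.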

Clarification on Fiddler's Transfer Timeline View

I am trying to measure performance of the server side code. Looking at the request from the client, Fiddler gives me the following view:
Fiddler's documentation states: The vertical line indicates the time to first byte of the server's response (Timers.ServerBeginResponse).
Does that mean the time of the server's TCP response (e.g. the ACK), or does it mean that the server compiled all the data in less than half a second and then took about 5 seconds to transfer it?
TTFB is the time from the moment the request is fired to getting the first byte back from the server as a response. It includes all the steps for that to happen.
It is the duration from the virtual user making an HTTP request to the first byte of the page being received by the browser. This time is made up of the socket connection time, the time taken to send the HTTP request, and the time taken to get the first byte of the page.
So yes: less than half a second to respond, then about 5 seconds to transfer.
