Latency measurement in Chrome network tab - ASP.NET

I'm running my app on my local machine and making AJAX requests from Chrome. When I make a request, the Network tab shows two numbers in the Time column.
How should I interpret these numbers? The app makes a database call and then processes the data before sending it to the client. On the first row it shows 133/106; does this mean that once the request hits the local machine, it only takes 27 ms to process on the server?
Thanks.

Latency, the time between making the request and the server's first response, is shown in the lighter shade within each bar.
106 ms is the Latency, 133 ms is the Time, and the remaining 27 ms is the time spent receiving the response data.

Have a look at the Chrome Network tab's timeline view.
Waiting is the time spent waiting for the initial response from the server. Receiving is the time spent receiving the response data.
So, roughly,
Latency = Waiting
Time = Waiting + Receiving
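Plugging the numbers from the question into that (assuming the 133/106 reading above): 133 ms (Time) ≈ 106 ms (Waiting) + 27 ms (Receiving). So the 27 ms is download time, not server-side processing; the database call and processing happen inside the 106 ms of waiting, along with the (here negligible, local) network latency.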

Related

How to send big data to an HTTP server where the client has limited RAM?

I am designing a temperature and humidity logging system using microcontrollers and embedded C. The system sends the data to the server using the GET method (it could be POST, too) whenever it gathers new data from the sensors. Whenever power and/or the internet connection is lost, it logs to an SD card.
I want to send this logged data to the server when the internet comes back. When sent separately, each of my requests looks as follows and takes about 5 seconds to complete, even on my local network.
GET /add.php?userID=0000&mac=000:000:000:000:000:000&id=0000000000&sensor=000&temp=%2B000.000&hum=000.000 HTTP/1.1\r\nHost: 192.168.10.25\r\n\r\n
However, since my available RAM is limited to only about 400 bytes on this microcontroller, I cannot buffer and send all the logged data in one request.
For an electricity/internet loss of 2 days, about 3000 data sets are logged. How do I send these to the server quickly?
Well, I think a simple loop will do it. Here is the pseudocode:
foreach (loggedRequest) {
    loadTheRequestFromTheLogFile();       // read one saved request from the SD card
    sendTheDataForThisRequest();          // block here until the server returns a response
    clearMemoryFromPreviousRequest();     // free the buffer before loading the next one
}
By waiting for the server's response, you can be sure that the data reached the server and that you will not fill up the RAM.
You can also go with an inner loop that fires a fixed number of requests and then waits for all of their responses. This will be faster, but you will have to test to make sure you are not hitting the RAM limit.
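To make the batching idea a bit more concrete, here is a rough C sketch (not tested on any particular microcontroller) that streams all the logged records as the body of a single POST, so only one record ever sits in RAM at a time. The add_batch.php endpoint, the read_next_record() helper and the BSD-socket calls are all assumptions; on a real device you would swap in your TCP/IP stack's equivalents and a server-side script that accepts one record per line.

/* Sketch: stream many logged records in one HTTP POST so only one
 * record needs to be in RAM at a time. Assumes a BSD-socket style API;
 * error handling is trimmed for brevity. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Hypothetical helper: copies the next logged record into buf,
 * returns its length, or 0 when the log is exhausted. */
extern int read_next_record(char *buf, int maxlen);

int send_log(const char *server_ip, int port, long content_length)
{
    char buf[160];                       /* holds one record (or the header) at a time */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    int n;

    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, server_ip, &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0)
        return -1;

    /* One request header for the whole batch; content_length must be
     * computed beforehand (e.g. from the log file size). */
    n = snprintf(buf, sizeof buf,
                 "POST /add_batch.php HTTP/1.1\r\n"
                 "Host: %s\r\n"
                 "Content-Length: %ld\r\n"
                 "Connection: close\r\n\r\n",
                 server_ip, content_length);
    write(fd, buf, n);

    /* Body: stream the records one by one, never holding more than one in RAM. */
    while ((n = read_next_record(buf, sizeof buf)) > 0)
        write(fd, buf, n);

    /* Wait for the status line so we know the server accepted the batch. */
    n = read(fd, buf, sizeof buf - 1);
    if (n > 0)
        buf[n] = '\0';
    close(fd);
    return (n > 0 && strstr(buf, " 200 ")) ? 0 : -1;
}

The trade-off versus the loop above is that the whole batch rides on one TCP connection and one set of HTTP headers, which removes most of the per-request overhead that makes each 5-second GET so slow.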

Any minimal source code to measure network latency (client server program)

I want to measure latency across two Linux boxes connected directly via a 10 Gig optical fibre. Basically, I want to measure the RTT latency once a packet that was sent has been received back on the same machine: the client sends a packet to the server and records the current time, the server returns the packet to the client, and a second timestamp is taken once the packet is received. The total latency is the difference between the two timestamps.
I would like to measure latency for both the UDP and TCP protocols.
I have tried sockperf, and it claims to do something similar, but I want a very simple, single-file program that I can use for benchmarking while understanding it fully.
Can you share any links to a simple program that does this? Please note that my interest is only in latency, not throughput.
Sync the time on the two Linux boxes. Form a data buffer, putting a timestamp in the header and dummy data in the payload. Then send the data over the TCP/UDP socket to the other end and have the other end echo it back. Calculate the elapsed time from the header timestamp, which gives you an accurate RTT.
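If it helps, here is a minimal single-file sketch along those lines, using plain POSIX UDP sockets. Because the RTT is taken entirely from the client's own monotonic clock, the two boxes don't strictly need synchronized time for this particular measurement; the TCP variant is the same idea with SOCK_STREAM plus connect/accept. Treat it as a starting point, not a polished benchmark like sockperf.

/* Minimal UDP round-trip latency sketch (single file).
 * Run as:  ./rtt server <port>          on one box
 *          ./rtt client <ip> <port>     on the other
 * The client timestamps each packet locally and computes the RTT when
 * the echo comes back. Error handling is trimmed for brevity. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static double now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
}

int main(int argc, char **argv)
{
    char buf[64];
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (argc >= 3 && strcmp(argv[1], "server") == 0) {
        struct sockaddr_in addr = {0}, peer;
        socklen_t plen = sizeof peer;
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(atoi(argv[2]));
        bind(fd, (struct sockaddr *)&addr, sizeof addr);
        for (;;) {                               /* echo every packet back */
            ssize_t n = recvfrom(fd, buf, sizeof buf, 0,
                                 (struct sockaddr *)&peer, &plen);
            if (n > 0)
                sendto(fd, buf, n, 0, (struct sockaddr *)&peer, plen);
        }
    } else if (argc >= 4 && strcmp(argv[1], "client") == 0) {
        struct sockaddr_in srv = {0};
        srv.sin_family = AF_INET;
        srv.sin_port = htons(atoi(argv[3]));
        inet_pton(AF_INET, argv[2], &srv.sin_addr);
        for (int i = 0; i < 10; i++) {           /* ten ping-pongs, print each RTT */
            double t0 = now_us();
            sendto(fd, &t0, sizeof t0, 0, (struct sockaddr *)&srv, sizeof srv);
            recvfrom(fd, buf, sizeof buf, 0, NULL, NULL);
            printf("RTT %d: %.1f us\n", i, now_us() - t0);
        }
    } else {
        fprintf(stderr, "usage: %s server <port> | client <ip> <port>\n", argv[0]);
        return 1;
    }
    return 0;
}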

Implementing TCP keep alive at the application level

We have a shell script setup on one Unix box (A) that remotely calls a web service deployed on another box (B). On A we just have the scripts, configurations and the Jar file needed for the classpath.
After the batch job is kicked off, control is passed from A to B for the transactions to happen on B. Usually the processing finishes on B in less than an hour, but in some cases (when we receive larger data for processing) the process continues for more than an hour. In those cases the firewall tears down the connection between the two hosts after one hour of inactivity. Thus, control is never returned from B to A and we are not notified that the batch job has ended.
To tackle this, our network team has suggested implementing keep-alives at the application level.
My question is: where should I implement those, and how? Will that be in the web service code, in some parameters passed from the shell script, or somewhere else? I tried to Google around but could not find much.
You basically send an application-level message and wait for a response to it. That is, your applications must support sending, receiving and replying to those heartbeat messages. See the FIX Heartbeat message, for example:
The Heartbeat monitors the status of the communication link and identifies when the last of a string of messages was not received.
When either end of a FIX connection has not sent any data for [HeartBtInt] seconds, it will transmit a Heartbeat message. When either end of the connection has not received any data for (HeartBtInt + "some reasonable transmission time") seconds, it will transmit a Test Request message. If there is still no Heartbeat message received after (HeartBtInt + "some reasonable transmission time") seconds then the connection should be considered lost and corrective action be initiated....
Additionally, the message you send should include a local timestamp and the reply to this message should contain that same timestamp. This allows you to measure the application-to-application round-trip time.
Also, some NAT's close your TCP connection after N minutes of inactivity (e.g. after 30 minutes). Sending heart-beat messages allows you to keep a connection up for as long as required.
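For illustration, here is a rough sketch of such a heartbeat in C (the message format, interval and helper name are made up; in your setup the equivalent logic would live in the client that invokes the web service, e.g. the Java code driven by the shell script on A, with the peer echoing the message back). The idea is simply to put some application traffic on the otherwise idle connection well before the firewall's one-hour cut-off, and to declare the link dead if the peer stops echoing.

/* Sketch of an application-level heartbeat on an existing TCP socket.
 * Periodically writes a small HEARTBEAT line carrying a local timestamp;
 * the peer is expected to echo it back (that part is application-specific
 * and assumed here). If nothing comes back within the timeout, the
 * connection is treated as lost. */
#include <stdio.h>
#include <time.h>
#include <sys/select.h>
#include <sys/socket.h>

#define HB_INTERVAL_SEC 300   /* well under the firewall's 1-hour idle cut-off */
#define HB_TIMEOUT_SEC   30

/* Returns 0 while the link is alive, -1 once it should be considered lost.
 * Typical use, from the side that would otherwise sit idle:
 *   while (heartbeat_once(sock) == 0) sleep(HB_INTERVAL_SEC);
 */
int heartbeat_once(int sockfd)
{
    char msg[64], reply[64];
    long sent_at = (long)time(NULL);

    int n = snprintf(msg, sizeof msg, "HEARTBEAT %ld\n", sent_at);
    if (send(sockfd, msg, n, 0) != n)
        return -1;

    fd_set rfds;
    struct timeval tv = { HB_TIMEOUT_SEC, 0 };
    FD_ZERO(&rfds);
    FD_SET(sockfd, &rfds);
    if (select(sockfd + 1, &rfds, NULL, NULL, &tv) <= 0)
        return -1;                      /* no reply in time: link is dead */

    n = recv(sockfd, reply, sizeof reply - 1, 0);
    if (n <= 0)
        return -1;
    reply[n] = '\0';

    /* The echoed timestamp gives the application-to-application round trip. */
    long echoed;
    if (sscanf(reply, "HEARTBEAT %ld", &echoed) == 1)
        printf("heartbeat RTT ~ %ld s\n", (long)time(NULL) - echoed);
    return 0;
}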

Can someone interpret these apache bench results? Is there something that stands out?

Below is an apache bench run for 10K requests with 50 concurrent threads.
I need help understanding the results. Does anything stand out that might point to something blocking and restricting more requests per second?
I'm looking at the connection time section and see 'waiting' and 'processing'. It shows a mean waiting time of 208 ms, a mean connect time of 0 and a mean processing time of 208 ms, yet the total is 208 ms. Can someone explain this to me, as it doesn't make much sense?
Connect time is the time it took ab to establish a connection with your server. You are probably running it on the same server or within a LAN, so your connect time is 0.
Processing time is the total time the server took to process the request and send the complete response.
Wait time is the time between sending the request and receiving the first byte of the response.
Again, since you are running on the same server and the file is small, your processing time == wait time.
For a real benchmark, try ab from multiple points near your target market to get a realistic idea of latency. Right now all the information you have is the wait time.
This question is getting old, but I've run into the same problem, so I might as well contribute an answer.
You might benefit from disabling either Nagle's algorithm on the agent side or delayed ACK on the server side. They can interact badly and cause an unwanted delay; that is probably why your minimum time is exactly 200 ms, as it was in my case.
I can't confirm it, but my understanding is that the problem is cross-platform, since it's part of the TCP spec. It might only affect quick connections with a small amount of data sent and received, though I've seen reports of issues with larger transfers too. Maybe somebody who knows TCP better can pitch in.
Reference:
http://en.wikipedia.org/wiki/TCP_delayed_acknowledgment#Problems
http://blogs.technet.com/b/nettracer/archive/2013/01/05/tcp-delayed-ack-combined-with-nagle-algorithm-can-badly-impact-communication-performance.aspx
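If the Nagle/delayed-ACK interaction does turn out to be the cause, disabling Nagle is a one-line socket option on the sending side. A minimal sketch, assuming a BSD-socket environment (for ab itself you would need the tool or your client library to expose this option):

/* Sketch: disable Nagle's algorithm on a connected TCP socket with
 * TCP_NODELAY, so small writes are sent immediately instead of being
 * coalesced (the coalescing is what interacts badly with delayed ACK). */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int disable_nagle(int sockfd)
{
    int one = 1;
    return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
}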

Clarification on Fiddler's Transfer Timeline View

I am trying to measure the performance of the server-side code. Looking at the request from the client, Fiddler gives me the following view:
Fiddler's documentation states: The vertical line indicates the time to first byte of the server's response (Timers.ServerBeginResponse).
Does that mean the time of the server's TCP response (e.g. the ACK), or does it mean the server compiled all the data in less than half a second and took about 5 seconds to transfer it?
TTFB is the time from the moment the request is fired to getting the first byte back from the server as a response. It includes all the steps for that to happen.
It is the duration from the virtual user making an HTTP request to the first byte of the page being received by the browser. This time is made up of the socket connection time, the time taken to send the HTTP request and the time taken to get the first byte of the page.
So yes: less than half a second to respond, then about 5 seconds to transfer.
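If you want to confirm this outside Fiddler, you can split the measurement yourself. A rough C sketch that times first byte and full transfer separately on an already connected socket (it assumes the request carries Connection: close, so the read loop ends at EOF):

/* Sketch: split an HTTP fetch into time-to-first-byte and transfer time,
 * which is exactly the distinction the Fiddler timeline is drawing.
 * Error handling trimmed for brevity. */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <sys/socket.h>

static double secs(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

void time_response(int sockfd, const char *request)
{
    char buf[4096];
    double t_start = secs();
    send(sockfd, request, strlen(request), 0);

    ssize_t n = recv(sockfd, buf, sizeof buf, 0);   /* blocks until the 1st byte */
    double t_first = secs();

    while (n > 0)                                   /* drain the rest of the body */
        n = recv(sockfd, buf, sizeof buf, 0);
    double t_done = secs();

    printf("time to first byte: %.3f s\n", t_first - t_start);
    printf("transfer time:      %.3f s\n", t_done - t_first);
}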
