Any minimal source code to measure network latency (client server program) - networking

I want to measure latency across two Linux boxes connected directly via a 10 gig optical fibre. Basically I want to measure the RTT after a packet that was sent has been received back on the same machine: the client sends a packet to the server and takes the current time, the server returns the packet to the client, and a second timestamp is taken once the packet is received. The total latency is the difference between the two timestamps.
I would like to measure latency for both the UDP and TCP protocols.
I have tried sockperf, and it claims to do similar things, but I want something very simple, a one-file program that I can use for benchmarking while fully understanding it.
Can you share any links to a simple program that does this? Please note that my interest is only in latency, not in throughput.

Form a data buffer, filling a timestamp into the header and dummy data into the payload. Send the data over a TCP or UDP socket to the other end and have the other end echo it back. Calculate the elapsed time from the header timestamp when the echo arrives, which gives you the RTT. (Syncing the clocks of the two Linux boxes is only needed if you want one-way latency; for RTT both timestamps are taken on the same machine.)
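For UDP, a minimal single-file sketch of this echo approach could look like the following (port number, packet size and iteration count are arbitrary choices for illustration, and error handling is omitted). The same pattern works for TCP by switching to SOCK_STREAM with connect/accept and send/recv, done once before the timing loop.

/* udp_rtt.c - minimal UDP round-trip latency probe (illustrative sketch).
 * Server:  ./udp_rtt server 9000
 * Client:  ./udp_rtt client <server-ip> 9000
 */
#include <arpa/inet.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>

static double now_us(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);       /* monotonic clock: immune to NTP adjustments */
    return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
}

int main(int argc, char **argv) {
    if (argc < 3) { fprintf(stderr, "usage: %s server|client [ip] port\n", argv[0]); return 1; }
    int is_server = (strcmp(argv[1], "server") == 0);
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(atoi(argv[argc - 1]));
    char buf[64] = {0};                        /* dummy payload */

    if (is_server) {                           /* echo every datagram back unchanged */
        addr.sin_addr.s_addr = INADDR_ANY;
        bind(sock, (struct sockaddr *)&addr, sizeof(addr));
        for (;;) {
            struct sockaddr_in peer; socklen_t len = sizeof(peer);
            ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, (struct sockaddr *)&peer, &len);
            if (n > 0) sendto(sock, buf, n, 0, (struct sockaddr *)&peer, len);
        }
    } else {                                   /* timestamp, send, wait for echo, print RTT */
        inet_pton(AF_INET, argv[2], &addr.sin_addr);
        for (int i = 0; i < 1000; i++) {
            double t0 = now_us();              /* first timestamp, taken on the client */
            sendto(sock, buf, sizeof(buf), 0, (struct sockaddr *)&addr, sizeof(addr));
            recv(sock, buf, sizeof(buf), 0);   /* second timestamp after the echo arrives */
            printf("rtt = %.1f us\n", now_us() - t0);
        }
    }
    return 0;
}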

Related

Response time for a search with more than 1000 records is high

The response time for a search transaction with more than 1000 records is 4 to 5 times higher for the Singapore client than for the USA client during a 125-user load test. Please suggest what could cause this.
As per the JMeter Glossary:
Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
So the formula is:
Response time = Connect Time + Latency + Actual Server Response time
So the reasons could be:
Due to the long distance from your load generators to Singapore you get worse results because of the time required for the network packets to travel back and forth, i.e. high latency.
Your Singapore instance is slower than the USA one due to, for example, worse hardware specifications, less bandwidth, etc.
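To make the formula concrete with purely illustrative numbers: if the connect time is roughly one network round trip and the latency is roughly one round trip plus the server's processing time, a USA client with a 20 ms round trip and 100 ms of actual server time would report about 20 + 120 = 140 ms, while a Singapore client with a 250 ms round trip and the same 100 ms of server time would report about 250 + 350 = 600 ms, i.e. more than 4 times higher even though the server itself is identical.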

Is latency in TCP the Round Trip Time (RTT) or the one-way time?

I would like to know how the latency in TCP communication is calculated. Is it the Round Trip Time (RTT), i.e. the time between sending a message to the receiver and receiving the acknowledgement back at the sender? Or is it the one-way time, i.e. the time to send the message from the sender to the receiver?
Thanks,
Karthick
It's neither. It's the time between when data is sent by the application on one side and is received by the application on the other side. It must be at least the one way time. But depending on what's going on with the TCP protocol itself, it can include the time it takes an acknowledgement to go the other way or the time it takes certain TCP timers to expire.
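One concrete TCP mechanism behind that extra time is the interaction between Nagle's algorithm on the sender and delayed ACKs on the receiver, which can hold back small writes for tens of milliseconds or more. When measuring application-to-application latency it is common to disable Nagle; a minimal sketch (error handling omitted):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable Nagle's algorithm so small writes are sent immediately instead of
 * being coalesced while waiting for the peer's (possibly delayed) ACK. */
static void disable_nagle(int sock) {
    int one = 1;
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}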

How to send big data to an HTTP server where the client has limited RAM?

I am designing a temperature and humidity logging system using microcontrollers and embedded C. The system sends the data to the server using the GET method (it could be POST, too) whenever it gathers new data from the sensors. Whenever the power and/or the internet is gone, it logs to an SD card.
I want to send this logged data to the server whenever the internet comes back. When sent separately, each of my requests looks as follows and takes about 5 seconds to complete even on my local network.
GET /add.php?userID=0000mac=000:000:000:000:000:000&id=0000000000sensor=000&temp=%2B000.000&hum=000.000 HTTP/1.1\r\nHost: 192.168.10.25\r\n\r\n
However, since my available RAM is very limited, only about 400 bytes on this microcontroller, I cannot buffer and send all the logged data in one request.
For an electricity/internet loss of 2 days, about 3000 data sets are logged. How do I send these to the server in a very quick way?
Well I think a simple for loop will do it. Here is the pseudocode:
foreach (loggedRequest) {
    loadTheRequestFromTheLogFile();
    sendTheDataForThisRequest();      // Hang here until the server returns a response
    clearMemoryFromPreviousRequest();
}
By waiting for the server's response you can be sure that the data reached the server and that you will not fill up the RAM.
You can also go with an inner loop that issues a fixed number of requests and then waits for all their responses. This will be faster, but you have to test carefully to be sure you are not hitting the RAM limit.
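A rough C sketch of that loop, using POSIX sockets and stdio purely for illustration: on the actual microcontroller the SD-card and TCP APIs will differ, the file name and buffer sizes here are made up, and with only ~400 bytes of RAM the request would have to be written to the socket in smaller pieces.

#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Send each logged query string as its own GET request and drain the response
 * before loading the next line, so only one entry sits in RAM at a time. */
static void send_logged_requests(const char *server_ip, int port) {
    FILE *log = fopen("backlog.txt", "r");        /* hypothetical log file name */
    char line[160], resp[64];
    while (log && fgets(line, sizeof(line), log)) {
        line[strcspn(line, "\r\n")] = '\0';       /* strip the trailing newline */
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, server_ip, &addr.sin_addr);
        if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            char req[256];
            snprintf(req, sizeof(req),
                     "GET /add.php?%s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n",
                     line, server_ip);
            send(sock, req, strlen(req), 0);
            while (recv(sock, resp, sizeof(resp), 0) > 0)   /* hang here until the server answers */
                ;                                           /* discard the body; completion is enough */
        }
        close(sock);
    }
    if (log) fclose(log);
}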

How to compute HTTP request processing time without network latency?

Because of the geographic distance between server and client, network latency can vary a lot. So I want to get the "pure" request processing time of the service, without network latency.
I want to use the TCP connect time as the network latency. As far as I understand, this time depends mostly on the network.
Main idea is to compute:
TCP connecting time,
TCP first packet receive time,
Get "pure" service time = TCP first packet receive (waiting time) – TCP connecting.
I divide the TCP connect time by 2 because there are in fact two request-responses (the 3-way handshake).
I have two questions:
Should I compute the receive time of all TCP packets instead of only the first packet?
Is this method okay in general?
PS: As a tool I use Erlang's gen_tcp. I can show the code.
If anything, I guess the "pure" service time = TCP first packet receive - TCP connecting; you have written it the other way round.
A possible answer to your first question: you should ideally compute at least some sort of average over the pure service time of many packets rather than just the first packet.
Ideally you could also report worst-case, average-case and best-case service times.
To answer the second question we would need to know why you need the pure service time only. Since it is a network application, network latencies (connection time, etc.) should also be included in the "response time", not just the pure service time. That is my view based on the given information.
I worked on a similar question while working for a network performance monitoring vendor in the past.
IMHO, there are a few points to consider before proceeding:
Connection time and latency: if you base your network latency metric on the connect time, be aware that it covers 3 steps: the client sends a TCP SYN, the server responds with a TCP SYN-ACK, and the client responds with a final ACK to set up the TCP connection. This means that the CT is equivalent to 1.5 RTT (round trip time). This validates taking the first two steps of the TCP setup process into account, as you mention.
Taking into account later TCP exchanges: while this at first sounds like a great idea to keep evaluating network latency over the course of the session, it becomes a lot trickier. Here is why: 1. Not all packets have to be acknowledged (RFC 1122, or https://en.wikipedia.org/wiki/TCP_delayed_acknowledgment), which generates false measurements when it occurs, so you will need a heuristic to exclude these from your calculations. 2. Not all systems treat acknowledging packets as a high-priority task. This means that some high values will pollute your network latency data and simply reflect the load level of the server, for example.
So if you use only the first (and reliable) measurement you may miss some network delay variation (especially in apps using long-lasting TCP sessions).
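For reference, a minimal illustration of the two timestamps being discussed (connect time and time to first response byte), written here with POSIX sockets in C rather than gen_tcp; the address, port and request are placeholders and error handling is omitted.

#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

static double now_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
}

int main(void) {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(80) };
    inet_pton(AF_INET, "192.0.2.10", &addr.sin_addr);   /* placeholder address */

    double t0 = now_ms();
    connect(sock, (struct sockaddr *)&addr, sizeof(addr));
    double t_connect = now_ms() - t0;                   /* time for the TCP handshake as seen by the client */

    const char *req = "GET / HTTP/1.1\r\nHost: example\r\nConnection: close\r\n\r\n";
    double t1 = now_ms();
    send(sock, req, strlen(req), 0);
    char buf[512];
    recv(sock, buf, sizeof(buf), 0);                    /* returns on the first packet of the response */
    double t_first_byte = now_ms() - t1;

    /* "pure" service time estimate: waiting time minus the connect time */
    printf("connect %.1f ms, first byte %.1f ms, service ~%.1f ms\n",
           t_connect, t_first_byte, t_first_byte - t_connect);
    close(sock);
    return 0;
}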

Can TCP handle a stream which never ends in a single connection?

This is more of a theoretical question. Let us say that there is an infinite data source which keeps pushing data every second, some device which monitors "solar events" and sends events to a back-end system continuously, every nanosecond (meaning it is a continuous stream). And the back-end system wants to transmit the live data to another remote system over TCP. Can TCP handle the infinite data stream in a single TCP connection?
I'm aware of the sequence number limitation, but with TCP timestamps the sequence numbers will properly wrap around, so that should not pose a problem. Also, assume that the system has several terabytes of memory (which can be considered close to an infinite memory model). If I just give the base address of where the stream starts, will TCP be able to proceed (segmenting, transmitting, re-transmitting, etc.) continuously in a single TCP connection, without caring whether the data ever ends?
My guess is that since TCP never expects any stream length parameter, it should be possible. Am I right?
Basically, yes. As long as the data is byte ('octet') aligned, data on TCP streams can be piped anywhere (see any router). TCP communication is a byte stream; it doesn't care about message boundaries. The windowed protocol has built-in flow control, so it should all work.
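A sketch of what the sending side can look like, assuming 'sock' is an already-connected TCP socket and the data source is stubbed out: the point is simply that send() is handed one chunk at a time, no total length is ever declared, and flow control makes send() block when the peer or network falls behind, so the loop can run indefinitely.

#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

static void stream_forever(int sock) {
    char chunk[4096];
    for (;;) {
        /* fill 'chunk' from the live data source (placeholder) */
        memset(chunk, 0, sizeof(chunk));
        ssize_t sent = 0, n;
        while (sent < (ssize_t)sizeof(chunk)) {          /* handle partial writes */
            n = send(sock, chunk + sent, sizeof(chunk) - sent, 0);
            if (n <= 0) return;                          /* connection closed or error */
            sent += n;
        }
    }
}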
