Good day!
I know this is a simple question, but I can't find its answer; whenever I look up RTT, it is usually only loosely defined. So, is the buffering time in the transmitting node included in the RTT reported by ping?
RTT simply means "round-trip time." I'm not sure what "buffering" you're concerned about. The exact points of measurement depend on the exact ping program you're using, and there are many. For BusyBox, the ping implementation can be found here. Reading it shows that the outgoing time is stamped when the outgoing ICMP packet is prepared shortly before sendto() is called, and the incoming time is stamped when the incoming ICMP packet is parsed shortly after recvfrom() is called. (Look for the calls to monotonic_us().) The difference between the two is what's printed. Thus the printed value includes all time spent in the kernel's networking stack, NIC handling and so on. It also, at least for this particular implementation, includes time the ping process may have been waiting for a time slice. For a heavily loaded system with scheduling contention this could be significant. Other implementations may vary.
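As a rough illustration of those measurement points, here is a sketch that timestamps immediately before sendto() and immediately after recvfrom(), like the BusyBox code does, but using a UDP echo instead of ICMP (raw ICMP sockets generally need elevated privileges). The host and port are placeholders for an echo service you control.

    # Sketch: same measurement points as the ping implementation described above,
    # but over UDP instead of ICMP. HOST/PORT are placeholders.
    import socket, time

    HOST, PORT = "192.0.2.1", 7   # hypothetical echo server

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(2.0)

    payload = b"probe"
    t_send = time.monotonic()          # stamped just before sendto(), as in ping
    s.sendto(payload, (HOST, PORT))
    data, _ = s.recvfrom(1024)
    t_recv = time.monotonic()          # stamped just after recvfrom()

    print("RTT: %.3f ms" % ((t_recv - t_send) * 1000))

As in the BusyBox case, the difference includes kernel stack time, NIC handling, and any scheduling delay the process experiences between the two timestamps.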
I read a little about PAWS (Protection Against Wrapped Sequences). It's very interesting. I didn't know such complicated mechanisms are implemented to guarantee the reliability of TCP. Without PAWS, at high data rates, a delayed old packet could be received and mistaken for a new packet.
I didn't think much about this before, but now I have started to wonder how long a packet can stay in the network (especially a UDP packet, if the packet type matters). A packet can be delayed and temporarily sit in the network before it's delivered. But it can only stay there for a short period of time, right?
In other words, how long should one wait for a (UDP) packet before concluding that it will never arrive?
If there is an answer, how is it determined? How can it be estimated (e.g., for writing programs that need a packet timeout)?
A simplified example: a server receives 2 UDP packets, each containing an integer to indicate its order. It gets No.1 and No.3, so it knows No.2 is either delayed or lost. After a period of time, No.2 still hasn't arrived, so it concludes the packet is lost and no longer exists anywhere in the network (so it won't cause any trouble for new packets in the future, similar to the problem PAWS aims to solve). But how long should the server wait before concluding that No.2 no longer exists?
See RFC 791 #3.2:
Time to Live
The time to live is set by the sender to the maximum time the datagram is allowed to be in the internet system. If the datagram is in the internet system longer than the time to live, then the datagram must be destroyed.
This field must be decreased at each point that the internet header is processed to reflect the time spent processing the datagram. Even if no local information is available on the time actually spent, the field must be decremented by 1. The time is measured in units of seconds (i.e. the value 1 means one second). Thus, the maximum time to live is 255 seconds or 4.25 minutes. Since every module that processes a datagram must decrease the TTL by at least one even if it process the datagram in less than a second, the TTL must be thought of only as an upper bound on the time a datagram may exist. The intention is to cause undeliverable datagrams to be discarded, and to bound the maximum datagram lifetime.
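The quoted TTL is whatever the sender puts in the IP header, and in practice it is decremented per hop rather than per second, so it bounds hop count far more often than wall-clock lifetime. A minimal sketch of how an application can set it, assuming a stack where the standard socket.IP_TTL option is available; the destination address is a placeholder:

    # Sketch: the sender controls the initial TTL via a socket option.
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 64)   # 64 is a common default
    print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TTL))
    s.sendto(b"hello", ("192.0.2.1", 9))                  # placeholder address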
UDP is a fire-and-forget, best-effort protocol. There is no expectation by the receiving host that the UDP packet is coming. Upper layers can use their own guarantees or expectations, but UDP has none.
UDP doesn't wait on packets the way TCP does.
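So any "how long do I wait for No.2" decision lives in your application. A minimal sketch of such an upper-layer timeout, roughly matching the No.1/No.3 example from the question; the port number and the 2-second give-up deadline are arbitrary choices, not anything UDP itself provides:

    # Sketch: the receiver itself decides how long to wait for a missing
    # sequence number before declaring the packet lost.
    import socket, struct, time

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5005))     # placeholder port
    sock.settimeout(0.5)

    received = set()
    deadline = {}                    # seq -> time at which we give up on it
    GIVE_UP_AFTER = 2.0              # application policy, not a UDP property

    while True:
        try:
            data, _ = sock.recvfrom(1500)
            (seq,) = struct.unpack("!I", data[:4])
            received.add(seq)
            # any gap below the newest sequence number gets a deadline
            for missing in range(min(received), seq):
                if missing not in received:
                    deadline.setdefault(missing, time.monotonic() + GIVE_UP_AFTER)
        except socket.timeout:
            pass
        now = time.monotonic()
        for seq, t in list(deadline.items()):
            if seq in received:
                del deadline[seq]
            elif now > t:
                print("declaring packet %d lost" % seq)
                del deadline[seq]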
Because of the geographic distance between the server and the client, network latency can vary a lot. So I want to get the "pure" request-processing time of the service, without the network latency.
I want to estimate the network latency from the TCP connect time. As far as I understand, this time depends mostly on the network.
The main idea is to compute:
TCP connect time,
TCP first-packet receive time,
"pure" service time = TCP first-packet receive (waiting) time − TCP connect time.
I divide the TCP connect time by 2 because there are in fact two request/response exchanges (the 3-way handshake).
I have two questions:
Should I compute the receive time of all TCP packets instead of only the first packet?
Is this method okay in general?
PS: As a tool I use Erlang's gen_tcp. I can show the code.
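For what it's worth, here is a sketch of that measurement in Python rather than gen_tcp (the idea carries over directly); the host, port, and request bytes are placeholders:

    # Sketch: connect time vs. first-byte time, then subtract one from the other.
    import socket, time

    HOST, PORT = "example.com", 80    # placeholder service

    t0 = time.monotonic()
    s = socket.create_connection((HOST, PORT), timeout=5)
    t_connect = time.monotonic() - t0          # roughly one network round trip

    s.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    t1 = time.monotonic()
    s.recv(1)                                   # wait for the first byte only
    t_first_byte = time.monotonic() - t1

    # "pure" service time estimate: first-byte wait minus the network round trip
    print("connect: %.1f ms, first byte: %.1f ms, service estimate: %.1f ms"
          % (t_connect * 1e3, t_first_byte * 1e3, (t_first_byte - t_connect) * 1e3))
    s.close()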
If anything, I guess the "pure" service time = TCP first packet receive − TCP connecting; you have written it the other way round.
A possible answer to your first question is: ideally you should compute at least some sort of average, using the pure service time of many packets rather than just the first packet.
Ideally it could also cover worst-case, average-case, and best-case service times.
To answer the second question, we would need to know why you want the pure service time only. Since it is a network application, network latencies (connection time, etc.) should also be included in the "response time", not just the pure service time. That is my view based on the given information.
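A tiny sketch of that aggregation, using made-up sample values:

    # Sketch: best/average/worst of many per-request service times (in ms).
    import statistics

    service_times_ms = [12.4, 15.1, 9.8, 40.2, 13.0]   # made-up samples

    print("best:    %.1f ms" % min(service_times_ms))
    print("average: %.1f ms" % statistics.mean(service_times_ms))
    print("worst:   %.1f ms" % max(service_times_ms))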
I have worked on a similar question when working for a network performance monitoring vendor in the past.
IMHO, there are a certain number of questions to be asked before proceeding:
Connection time and latency: if you base your network latency metric on the connection time, beware that the setup involves 3 steps: the client sends a TCP SYN, the server responds with a SYN-ACK, and the client sends a final ACK to complete the TCP connection. On the wire that is 1.5 RTT (round-trip time), although the connect time measured at the client is usually close to 1 RTT, since the final ACK is sent without waiting for a reply. This validates taking the first two steps of the TCP setup process into account, as you mention.
Taking into account later TCP exchanges: while this at first sounds like a great idea for keeping track of network latency over the course of the session, it is actually a lot trickier. Here is why: 1. Not all packets have to be acknowledged (RFC 1122, or https://en.wikipedia.org/wiki/TCP_delayed_acknowledgment), which will generate false measurements when it occurs, so you will need a heuristic to exclude those from your calculations. 2. Not all systems consider acknowledging packets a high-priority task. This means that some high values will pollute your network latency data and simply reflect the load on the server, for example.
So if you use only the first (and reliable) measurement, you may miss some network delay variation (especially in apps using long-lived TCP sessions).
In talking to a MODBUS device, is there an upper bound on how long a device can take to respond before it's considered a timeout? I'm trying to work out what to set my read timeout to. Answers for both MODBUS RTU and TCP would be great.
In the MODBUS over serial line specification and implementation guide V1.0, section 2.5.2.1 (MODBUS Message ASCII Framing), there are suggestions that inter-character delays of up to 5 seconds are reasonable in slow WAN configurations.
Section 2.6 (Error Checking Methods) indicates that timeouts are configured, but without specifying any values.
The current Modicon Modbus Protocol Reference Guide PI–MBUS–300 Rev. J also provides no quantitative suggestions for these settings.
Your application's time sensitivity, along with the constraints your network imposes, will largely determine your choices.
If you identify the worst-case delay you can tolerate, take half that time (to allow a single retransmission to fail), and subtract a reasonable transmission time for a message of maximal length, then you should have a good candidate for a timeout. This lets you recover from a single error while not reporting errors unnecessarily often.
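As a worked example of that rule of thumb, with made-up numbers for a 9600-baud RTU link and a 2-second tolerable worst case:

    # Sketch: derive a response timeout from a tolerable worst-case delay.
    tolerable_worst_case = 2.0          # s: worst delay the application accepts (assumed)
    max_frame_bytes = 256               # maximal MODBUS RTU frame
    bits_per_byte = 11                  # start + 8 data + parity + stop
    baud = 9600

    frame_time = max_frame_bytes * bits_per_byte / baud   # ~0.29 s on the wire
    timeout = tolerable_worst_case / 2 - frame_time       # leave room for one retry

    print("response timeout: %.2f s" % timeout)            # ~0.71 s with these numbers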
Of course, the real problem is what to do when the error occurs. Is it likely to be a transient problem, or is it the result of a permanent fault that requires attention?
Alexandre Vinçon's comment about the ACKNOWLEDGE messages is also relevant. It may be that your device does not implement this, and extended delays may be intended.
The specification does not mention a particular value for the timeout, because it is not possible to normalize a timeout value for a wide range of MODBUS slaves.
However, it is a good assumption that you should receive a reply within a few hundred milliseconds.
I usually set my timeouts to 1 second with RTU and 500 ms with TCP.
Also, if the device takes a long time to reply, it is supposed to return an ACKNOWLEDGE message to prevent the expiration of the timeout.
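A small sketch of applying those values, assuming pyserial is available for the RTU side; the IP address and serial device name are placeholders:

    # Sketch: 500 ms timeout for MODBUS TCP (standard port 502),
    # 1 s read timeout for MODBUS RTU over a serial line.
    import socket
    import serial   # third-party: pyserial (assumed installed)

    tcp = socket.create_connection(("192.168.1.10", 502), timeout=0.5)
    tcp.settimeout(0.5)                 # applies to subsequent reads

    rtu = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1.0)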
Here is a paper named "TCP-RTM: Using TCP for Real Time Multimedia Applications" by Sam Liang and David Cheriton.
The paper adapts TCP for use in real-time applications.
The two major modifications, which I would actually like your help with, are:
On application-level read on the TCP connection, if there is no in sequence data queued to read but one or more out-of-order packets are queued for the connection, the first contiguous range of out-of-order packets is moved from the out-of-order queue to the receive queue, the receive pointer is advanced beyond these packets, and the resulting data delivered to the application. On reception of an out-of-order packet with a sequence number logically greater than the current receive pointer (rcv next ptr) and with a reader waiting on the connection, the packet data is delivered to the waiting receiver, the receive pointer is advanced past this data and this new receive pointer is returned in the next acknowledgment segment.
In the case that the sender’s send-buffer is full due to large amount of backlogged data, TCP-RTM discards the oldest data segment in the buffer and accepts the new data written by the application. TCP-RTM also advances its send-window past the discarded data segment. This way, the application write calls are never blocked and the timing of the sender application is not broken.
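This is not the paper's kernel code, but a toy sketch of the first modification's receive-side behaviour (promote the first contiguous out-of-order range and advance the receive pointer past it), which may help when deciding what to change in NS2; the data structures and function name are made up for illustration:

    # Sketch: on a read with nothing in sequence, skip the hole and deliver
    # the first contiguous out-of-order range, advancing rcv_next past it.
    def rtm_read(rcv_next, in_order, out_of_order):
        """in_order: list of (seq, data); out_of_order: dict seq -> data."""
        if in_order:
            seq, data = in_order.pop(0)
            return rcv_next + len(data), data
        if out_of_order:
            seq = min(out_of_order)                 # lowest queued sequence number
            chunks = []
            while seq in out_of_order:              # first contiguous range only
                data = out_of_order.pop(seq)
                chunks.append(data)
                seq += len(data)
            return seq, b"".join(chunks)            # new rcv_next is past this data
        return rcv_next, b""                        # nothing to deliver

    rcv_next, data = rtm_read(1000, [], {1100: b"Y" * 100, 1200: b"Z" * 100})
    print(rcv_next, len(data))   # 1300, 200 -- the 100-byte hole at 1000 is skipped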
They actually changed the 'TCP Reno with SACK' version of TCP in an old Linux 2.2 kernel, in a real environment.
But I want to simulate this in NS2.
I can work with NS2 (e.g., analyzing traces, making performance graphs, etc.). I looked through all the related files but can't find where to make the change.
So, would you please help me to do this?
I have noticed that viewing images or websites hosted on US servers (I'm in Europe) is considerably slower. The main reason is presumably the latency due to the distance.
But if 1 packet takes n milliseconds to be received, can't this be alleviated by sending more packets simultaneously?
Does this actually happen, or are the packets sent one by one? And if yes, what determines how many packets can be sent simultaneously (something to do with the cable, I guess)?
But if 1 packet takes n milliseconds to be received, can't this be alleviated by sending more packets simultaneously?
Not in a boundless way, by TCP/IP standards, because there are algorithms that determine how much data can be in flight and not yet acknowledged, to avoid overloading the whole network.
Does this actually happen or are the packets sent one by one?
TCP can and does keep up to a certain amount of packets and data "in flight".
And if yes, what determines how many packets can be sent simultaneously (something to do with the cable, I guess)?
What cable? The same standards apply whether you're on cabled, wireless, or mixed sequences of connections (remember your packet goes through many routers on its way to the destination, and the sequence of routers can change from packet to packet).
You can start your study of TCP at, e.g., Wikipedia. Your specific questions deal with congestion control algorithms and standards; Wikipedia will give you pointers to all the relevant algorithms and RFCs, but the full picture won't do you much good if you try to start studying at that spot without a lot of other understanding of TCP (e.g., its flow control concepts).
Wikipedia and similar encyclopedia/tutorial sites can only give you a summary of the summary, while RFCs are not written to be readable or understandable by non-experts. If you care about TCP, I'd recommend starting your study with Stevens' immortal trilogy of books (though there are many other valid ones, Stevens' are by far my personal favorites).
The issue is parallelism.
Latency does not directly affect your pipe's throughput. For instance, a dump truck across the country has terrible latency, but wonderful throughput if you stuff it full of 2TB tapes.
The problem is that your web browser can't start asking for things until it knows what to ask for. So, when you load a web page with ten images, you have to wait until the img tags arrive before you can send the request for them. So everything is perceptibly slower, not because your connection is saturated but because there's down time between making one request and the next.
A prefetcher helps alleviate this issue.
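As a sketch of what that parallelism looks like at the application level (the URLs are placeholders), fetching the images concurrently pays the transatlantic round trip once rather than once per image:

    # Sketch: fetch ten images in parallel instead of one after another.
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    urls = ["http://example.com/img%d.png" % i for i in range(10)]   # placeholders

    def fetch(url):
        with urlopen(url, timeout=10) as resp:
            return len(resp.read())

    with ThreadPoolExecutor(max_workers=10) as pool:
        sizes = list(pool.map(fetch, urls))
    print(sizes)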
As far as "multiple packets at a time" are concerned, a single TCP connection will have many packets in transit at once, as specified by the window scaling algorithm the ends are using. But that only helps with one connection at a time...
TCP uses what's called a sliding window. Basically, the window is the amount of buffer space, W, the receiver has available to reassemble out-of-order packets. The sender can send up to W bytes past the last acknowledged byte, say sequence number N. This way you can fill the pipe between sender and receiver with W unacknowledged bytes, under the assumption that the packets will likely get there, and if not, the receiver will let you know by not acknowledging the missing packets. On each response packet the receiver sends a cumulative acknowledgment, saying "I've received every byte up to this point." This lets it ack multiple packets at once.
Imagine a client sending 3 packets, X, Y, and Z, starting at sequence number N. Due to routing, Y arrives first, then Z, and then X. Y and Z will be buffered in the destination's stack, and when X arrives the receiver will ack N + (the cumulative lengths of X, Y, and Z). This bumps the start of the sliding window, allowing the client to send additional packets.
With selective acknowledgment it's possible to ack portions of the sliding window and ask the sender to retransmit just the lost portions. In the classic scheme, if Y was lost, the sender would have to resend both Y and Z; selective acknowledgment means the sender can resend just Y. Take a look at the Wikipedia page.
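Here is a toy model of that cumulative-ACK behaviour, using the same Y, Z, X arrival order as above with 100-byte packets starting at N = 1000:

    # Sketch: cumulative ACK only advances once the hole (X) is filled.
    def cumulative_ack(rcv_next, buffered, seq, length):
        buffered[seq] = length
        while rcv_next in buffered:            # consume any contiguous run
            rcv_next += buffered.pop(rcv_next)
        return rcv_next                        # next expected byte = the ACK we send

    N, ack, buffered = 1000, 1000, {}
    for seq, length in [(N + 100, 100), (N + 200, 100), (N, 100)]:  # Y, Z, then X
        ack = cumulative_ack(ack, buffered, seq, length)
        print("ack =", ack)
    # prints 1000, 1000, 1300 -- the ACK jumps past X, Y and Z once X arrives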
Regarding speed, one thing that may slow you down is DNS. That adds an additional round-trip, if the IP isn't cached, before you can even request the image in question. If it's not a common site this may be the case.
TCP/IP Illustrated, Volume 1, by W. Richard Stevens is tremendous if you want to learn more. The title sounds dry, but the packet diagrams and annotated arrows from one host to the other really make this stuff easier to understand. It's one of those books that you can learn from and then end up keeping as a reference. It's one of my 3 go-to books for networking projects.
The TCP protocol will try to send more and more data at a time, up to a certain limit, until it starts noticing that packets are being dropped (typically because router and switch queues along the path overflow, rather than because the Time To Live expires), and then it throttles back. This is how it determines the size of the window, and hence the bandwidth, it can use. If the sending host notices a lot of dropped packets from its end, the receiving host is just going to see it as a slow connection. The sender could very well be blasting you with data; you're just not seeing it.
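A toy additive-increase/multiplicative-decrease loop captures that "ramp up until drops, then throttle back" idea; the loss rounds here are just made up, not real network feedback:

    # Sketch: AIMD, the general shape of TCP's congestion-window behaviour.
    cwnd = 1.0                       # congestion window, in segments
    loss_at_round = {5, 11}          # pretend losses in these rounds

    for rnd in range(15):
        if rnd in loss_at_round:
            cwnd = max(1.0, cwnd / 2)    # multiplicative decrease on loss
        else:
            cwnd += 1.0                  # additive increase per round trip
        print("round %2d: cwnd = %.1f" % (rnd, cwnd))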
I guess parallel packet transmission is also possible (yes, there can be a limit on the number of packets sent at a time). You will find more information about packet transmission under the topics of message switching, packet switching, circuit switching, and virtual circuit packet switching.