Finding out how much time is required for TCP to reach a desired sequence number

Assume a TCP sender is transmitting at 100 Mbps. The sequence number starts at 329,114,852. How long will it take, in seconds, before the sequence number reaches 100,000,000?
I don't have the answer for this, so I am not sure if my working is correct: (329,144,852 - 100,000,000) / 100,000,000 = 2.29 seconds.
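One thing worth noting about that working: TCP sequence numbers count bytes, while 100 Mbps is a rate in bits per second, so the byte difference needs a factor of 8. A quick sketch, assuming the intended question is how long the sender takes to cover the gap between the two sequence numbers (using the figure from the question):

```python
# Quick check of the arithmetic, assuming the question asks how long it takes
# to cover the difference between the two sequence numbers. Sequence numbers
# count bytes; the link rate is given in bits per second.
link_rate_bps = 100_000_000                  # 100 Mbps
byte_gap = 329_114_852 - 100_000_000         # bytes of payload to send

time_s = byte_gap * 8 / link_rate_bps        # convert bytes to bits first
print(round(time_s, 2))                      # ~18.33 s, not 2.29 s
```

The 2.29 s figure comes from dividing the byte count by the bit rate directly, i.e. without the factor of 8.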

Related

Is it possible to encode date AND time (with some caveats) into 12 bits?

I have at my disposal 16 bits. Of them, 4 bits are the header and cannot be touched. This leaves us with 12 bits. I would like to encode date and time data into them. These are essentially logs being sent over LPWAN.
Obviously, it's impossible to encode a proper generic date and time into that, since the Unix timestamp uses 32 bits and projects like Compact Time Format use 5 bytes.
Let's say we don't really need the year, because this information is available elsewhere. Let's also say the time resolution of seconds doesn't have to be super accurate, so we can split the seconds into 30-second intervals. If we were to simply encode the data as is, then:
4 bits month (0-11)
5 bits day (0-31)
5 bits hour (0-23)
6 bits minute (0-59)
1 bit second (0,30)
-----------------------------
21 bits
21 bits is much better than 32, but it's still not 12. I could subtract one bit from the minutes (rounding to the nearest even minute) and remove the seconds bit, but that still leaves us with 19 bits, which is still far from 12.
Just wondering if it's possible, and if anyone has any ideas.
12 bits can hold 2^12 = 4096 values, which feels pretty tight for the task. Not much can be done in terms of compressing a date and time into one of 4096 values; it is simply too little space to represent this data.
There are some workarounds, none of them able to achieve what you want, but maybe something you could use anyway:
Split date and time: alternate, with some algorithm, between sending the date and the time; one bit can be used to indicate which of the two is being sent. This leaves 11 bits to encode either the date or the time. You could go a bit further and split the time this way as well. The receiving side can then reconstruct a full date and time from the previously received data.
You could have a scheme where one full-date packet is sent as a starting point, and subsequent packets carry a count of N-second intervals since that epoch.
Remove the date and time from the data completely, saving 12 bits, and send it periodically as a stand-alone heartbeat/datetime packet.
You could try compressing the whole data packet, which could allow using more bits to represent the date and time while still fitting into a smaller overall packet size.
If data is sent at reasonably fixed intervals, you could use a circular counter of N-second intervals. This may work if you have few devices and you can keep track of when they start transmitting. For example, a satellite was launched at date-time XYZ and sends a counter value every 30 seconds; if we receive a counter value of 100, we calculate the date with simple math: XYZ + 30*100 seconds.
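As a rough illustration of that last counter idea, here is a minimal sketch. It assumes both sides agree out of band on an epoch (such as the launch time) and a 30-second tick, and that packets arrive less than one counter wrap (about 34 hours) apart; all names and values are illustrative:

```python
# Minimal sketch of the circular-counter idea: the packet carries only a
# 12-bit counter, and the receiver reconstructs the full time from an epoch
# agreed out of band plus the last timestamp it has already reconstructed.
from datetime import datetime, timedelta, timezone

EPOCH = datetime(2024, 1, 1, tzinfo=timezone.utc)   # agreed out of band (illustrative)
TICK = timedelta(seconds=30)
MODULUS = 1 << 12                                   # 4096 counter values

def encode(ts: datetime) -> int:
    ticks = int((ts - EPOCH) / TICK)
    return ticks % MODULUS                          # wraps roughly every 34 hours

def decode(counter: int, last_seen: datetime) -> datetime:
    # Pick the wrap "era" closest to (and not before) the last reconstructed time.
    base_ticks = int((last_seen - EPOCH) / TICK)
    era = base_ticks - (base_ticks % MODULUS)
    candidate = era + counter
    if candidate < base_ticks:                      # counter already wrapped
        candidate += MODULUS
    return EPOCH + candidate * TICK
```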
No, unless you'd be happy with representing a span of less than a day and a half. Just count 4096 30-second intervals and you find that they cover 34 hours and eight minutes. 4096 two-minute intervals is just four times that, or five days, 16 hours, and 32 minutes, which is still a small fraction of a year.
If you can assure that the difference between successive log entries is small, then you can stuff that into 12 bits. You will need a special entry to give an initial date, and maybe you could insert such an entry when the difference between successive entries is too large; a sketch of this idea follows below.
@oleksii has some other good suggestions as well.
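Here is a rough sketch of that delta idea, assuming a reserved 12-bit value marks the special full-timestamp entry; the names and the 30-second unit are illustrative, not an existing format:

```python
# Rough sketch of delta-encoded log timestamps: each entry's 12-bit field
# stores the gap to the previous entry in 30-second units, and one reserved
# value marks a "reset" entry that carries a full timestamp by other means.

RESET = 0xFFF                 # reserved 12-bit value: "full timestamp follows"
MAX_DELTA = 0xFFE             # largest encodable gap: 4094 * 30 s, about 34 hours

def encode_delta(prev_unix: int, cur_unix: int) -> int:
    """Return the 12-bit field for this entry (times are Unix seconds)."""
    delta = (cur_unix - prev_unix) // 30
    if 0 <= delta <= MAX_DELTA:
        return delta
    return RESET              # gap too large (or clock went backwards): reset

def decode_delta(prev_unix: int, field: int) -> int | None:
    """Return the reconstructed Unix time, or None if this is a reset entry."""
    if field == RESET:
        return None           # caller must read the full timestamp elsewhere
    return prev_unix + field * 30
```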

AX.25 protocol interfering with sending data packet

I am very sorry not to be able to provide code for this question, but it is more of a logical situation. My termination sequence for the AX.25 protocol is "111111", which is six 1s. So if this sequence of 1s is found inside my data packet, it will denote the end of the packet and the packet will be sent without the rest of its data. I will do my best to explain my conclusions and test results so that you can understand my dilemma.
Programming in Arduino
Byte 1 contains 8 bits. Look below and try to picture a byte as a rectangular box; right next to it is byte 2, which also contains 8 bits.
Situation 1:
||_1_0_1_1_1_0_1_0_ ||_1_1_1_1_1_1_0_0_||
Attempted Solution 1: you could simply change 1 into 0 and keep track of it.
Situation 2:
||_1_0_1_1_1_0_1_1_ ||_1_1_1_1_0_0_1_0_||
Attempted Solution 2: attempted solution 1 breaks apart here, and I am stuck.
Individually, the bytes are safe from triggering the AX.25 termination sequence, but the bytes combined result in a problem.
Here is a list of possible cases:
1) six 1s = termination sequence activated for end of packet
2) six 1s inside actual data of packet = premature termination
3) if 1s are changed to 0s, then a sequence of six 0s can be a problem when reverting the changes back
4) can only read 1 byte at a time (EEPROM) due to memory limitations
5) six 1s occurring across two bytes will also prematurely activate the termination sequence.
Thank you in advance for any kind of help.
The solution mandated by the AX.25 protocol is bit stuffing.
Conceptually, any time the receiver sees five sequential one bits and a zero bit, it assumes that the zero bit has been stuffed by the sender (to break up erroneous frame sequences in the data), and removes it before emitting the received data. The only sequence of six 1-bits that can have been sent un-stuffed is the framing sequence; all data will have been sent stuffed. The receiver must always de-stuff.
To stuff or unstuff will generally require a couple of bytes of working ram (or a couple of bytes of registers), although there might be creative ways around that.
To quote the official TAPR protocol standard:
"In order to ensure that the flag bit sequence mentioned above does not appear accidentally anywhere else in a frame, the sending station monitors the bit sequence for a group of five or more contiguous “1” bits. Any time five contiguous “1” bits are sent, the sending station inserts a “0” bit after the fifth “1” bit. During frame reception, any time five contiguous “1” bits are received, a “0” bit immediately following five “1” bits is discarded."
A Google search for AX.25 bit stuffing should return as much detail as you might need.
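To make the rule concrete, here is a toy illustration of the stuffing and de-stuffing logic in Python, operating on lists of bits. A real TNC works on the serial bit stream (and the 01111110 flag is sent un-stuffed as the frame delimiter); this only shows the logic the quote describes:

```python
# Toy AX.25-style bit stuffing on lists of 0/1 values: the sender inserts a 0
# after any run of five 1s, and the receiver removes a 0 that follows five 1s.

def stuff(bits):
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:            # five 1s in a row: insert a zero bit
            out.append(0)
            run = 0
    return out

def unstuff(bits):
    out, run = [], 0
    for b in bits:
        if run == 5:            # bit right after five 1s
            run = 0
            if b == 0:          # stuffed zero: drop it
                continue
            # a sixth 1 here would be the 01111110 flag (or an error)
        out.append(b)
        run = run + 1 if b == 1 else 0
    return out

data = [1, 0, 1, 1, 1, 0, 1, 1,  1, 1, 1, 1, 0, 0, 1, 0]   # "Situation 2" above
assert unstuff(stuff(data)) == data   # the six 1s across the byte boundary survive
```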

Sliding window protocol, calculation of sequence number bits

I am preparing for my exams and was solving problems regarding the Sliding Window Protocol, and I came across these questions.
A 1000 km long cable operates at 1 MBPS. The propagation delay is 10 microsec/km. If the frame size is 1 kB, then how many bits are required for the sequence number?
A) 3 B) 4 C) 5 D) 6
I got the answer as option C, as follows:
propagation time is 10 microsec/km
so, for 1000 km it is 10*1000 microsec, i.e. 10 millisec
then RTT will be 20 millisec
in 10^3 millisec: 8*10^6 bits
so, in 20 millisec: X bits
X = 20*(8*10^6)/10^3 = 160*10^3 bits
now, 1 frame is of size 1 kB, i.e. 8000 bits
so the total number of frames will be 20; this will be the window size.
hence, to represent 20 frames uniquely we need 5 bits.
The answer was correct as per the answer key, and then I came across this one:
Frames of 1000 bits are sent over a 10^6 bps duplex link between two hosts. The propagation time is
25ms. Frames are to be transmitted into this link to maximally pack them in transit (within the link).
What is the minimum number of bits (l) that will be required to represent the sequence numbers distinctly?
Assume that no time gap needs to be given between transmission of two frames.
(A) l=2 (B) l=3 (C) l=4 (D) l=5
As per the earlier one, I solved this one as follows:
propagation time is 25 ms
then RTT will be 50 ms
in 10^3 ms: 10^6 bits
so, in 50 ms: X bits
X = 50*(10^6)/10^3 = 50*10^3 bits
now, 1 frame is of size 1 kb, i.e. 1000 bits
so the total number of frames will be 50; this will be the window size.
hence, to represent 50 frames uniquely we need 6 bits.
And 6 is not even in the options. The answer key uses the same solution but takes the propagation time, not the RTT, for the calculation, and their answer is 5 bits. I am totally confused: which one is correct?
I don't see what RTT has to do with it. The frames are only being sent in one direction.
Round-Trip Time means that you have to take into account the ACK (acknowledgement message) you must receive, which tells you that the frames you are sending are being received on the other side of the link. This 'time' window is the period in which you get to send the remaining frames that the window allows you to send before you anticipate an ACK.
Ideally you want to be able to transmit continuously, i.e. not having to stop at the window limit to wait for an ACK (which essentially turns into a stop-and-wait situation if you have to stop and wait for the ACK). The solution to this question is the minimum number of frames that will be transmitted from the moment the first frame is transmitted to the moment you get an ACK (also known as the size of the full window).
Your calculations look to be correct in both cases, and it would be safe to assume the answer choices for the second question are wrong.
Here it is a duplex channel, so your RTT = Tp; hence they have considered Tp.
Now you will get X = 25*10³ bits.
So the number of bits for the window will be 5.
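For what it's worth, both problems can be checked with the standard pipelining arithmetic. One reading of the second question ("maximally pack them in transit (within the link)") counts only the frames in flight one way, which appears to be what the answer key does; counting a full RTT's worth instead gives 6 bits, which matches your working. A rough check, treating 1 MBPS in the first problem as one megabyte per second, as the working above does:

```python
# Worked check of both problems (plain arithmetic only).
import math

# Problem 1: 1000 km at 10 us/km, link taken as 8*10^6 bits/s, 1 kB (8000-bit) frames.
Tp1 = 1000 * 10e-6                  # one-way propagation delay: 10 ms
Tt1 = 8000 / 8e6                    # transmission time per frame: 1 ms
w1 = 1 + 2 * Tp1 / Tt1              # frames outstanding over a full RTT: 21
bits1 = math.ceil(math.log2(w1))    # 5 bits, matching the answer key

# Problem 2: 1000-bit frames on a 10^6 bps link, Tp = 25 ms.
Tt2 = 1000 / 1e6                    # 1 ms per frame
Tp2 = 25e-3
in_transit = Tp2 / Tt2              # frames packed within the link one way: 25 -> 5 bits
full_window = 1 + 2 * Tp2 / Tt2     # frames over a full RTT: 51 -> 6 bits

print(bits1,
      math.ceil(math.log2(in_transit)),
      math.ceil(math.log2(full_window)))   # 5 5 6
```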

The relationship between window size and sequence number

The question is:
We have a transport protocol that uses pipelining and an 8-bit sequence number (0 to 255).
What is the maximum window size the sender can use? (How many packets can the sender send out on the net before it must wait for an ACK?)
For Go-Back-N the maximum window size is: w = 2^m - 1 = 255.
For Selective Repeat the maximum window size is: w = (2^m)/2 = 128.
I do not know which is correct and which formula I should use.
Thanks for the help.
Those two are different protocols having different issues.
In the case of Go-Back-N, you are correct: the window size can be up to 255. (2^8 - 1 is the last sequence number of the packets to send starting from 0, and it is also the maximum window size possible for the Go-Back-N protocol.)
However, the Selective Repeat protocol limits the window size to half of the sequence-number space, since the receiver cannot distinguish a retransmitted packet from a new packet with the same sequence number when the corresponding ACK was lost and never reached the sender in the previous window. Hence, the window size must be at most half the sequence-number range, so that consecutive windows can never contain duplicate sequence numbers.
Go-Back-N doesn't have this issue, since the sender pushes out packets up to the window size (at most 2^m - 1) and never slides the window until it gets cumulative ACKs for them. The two protocols therefore have different maximum window sizes.
Note: For Go-Back-N, the maximum window size is the maximum number of unique sequence numbers - 1. If the window were equal to the maximum number of unique sequence numbers and all the acknowledgements were lost, the receiver would accept all the retransmitted messages as a separate set of messages and deliver them an additional time to its application. To avoid this inconsistency, maximum window size = maximum number of unique sequence numbers - 1. This answer has been updated according to the fact provided in the comment by @noamgot.
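A quick sanity check of the two formulas for the 8-bit case in the question:

```python
# Quick check of the two formulas for an 8-bit sequence number.
m = 8
seq_space = 2 ** m                       # 256 distinct sequence numbers

go_back_n_max = seq_space - 1            # 255: one value must always stay unused
selective_repeat_max = seq_space // 2    # 128: window limited to half the space

print(go_back_n_max, selective_repeat_max)   # 255 128
```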

TCP sequence number question

This is more of a theoretical question than an actual problem I have.
If I understand correctly, the sequence number in the TCP header of a packet is the index of the first byte in the packet in the whole stream, correct? If that is the case, since the sequence number is an unsigned 32-bit integer, then what happens after more than FFFFFFFF = 4294967295 bytes are transferred? Will the sequence number wrap around, or will the sender send a SYN packet to restart at 0?
The sequence number loops back to 0. Source:
"TCP sequence numbers and receive windows behave very much like a clock. The receive window shifts each time the receiver receives and acknowledges a new segment of data. Once it runs out of sequence numbers, the sequence number loops back to 0."
Also see chapter 4 of RFC 1323.
It wraps. RFC 793:
It is essential to remember that the actual sequence number space is finite, though very large. This space ranges from 0 to 2**32 - 1. Since the space is finite, all arithmetic dealing with sequence numbers must be performed modulo 2**32. This unsigned arithmetic preserves the relationship of sequence numbers as they cycle from 2**32 - 1 to 0 again. There are some subtleties to computer modulo arithmetic, so great care should be taken in programming the comparison of such values. The symbol "=<" means "less than or equal" (modulo 2**32).
Read more: http://www.faqs.org/rfcs/rfc793.html#ixzz0lcD37K7J
The sequence number is not actually the "index of the first byte in the packet in the whole stream" since sequence numbers deliberately start at a random value (this is to stop a form of attack known as the TCP Sequence Prediction Attack).
No SYN is required, the sequence number simply loops back to zero again once it gets to the limit.
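To make the "modulo 2**32" arithmetic from the RFC quote concrete, here is one common way to compare wrapped sequence numbers. This is a sketch in the style of the before()/after() helpers found in many TCP implementations, not code from any particular stack:

```python
# Compare 32-bit sequence numbers modulo 2**32, so that comparisons still
# make sense after the counter wraps from 2**32 - 1 back to 0.
MOD = 1 << 32

def seq_lt(a: int, b: int) -> bool:
    """True if a comes 'before' b on the 32-bit sequence-number circle.
    (Values exactly 2**31 apart are inherently ambiguous.)"""
    diff = (b - a) % MOD
    return diff != 0 and diff < (1 << 31)

# Wrap-around example: 4294967290 is "before" 5, despite being numerically larger.
print(seq_lt(4294967290, 5))   # True
print(seq_lt(5, 4294967290))   # False
```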

Resources