I am preparing for my exams and was solving problems on the Sliding Window Protocol when I came across these questions.
A 1000 km long cable operates at 1 MBps. Propagation delay is 10 microsec/km. If the frame size is 1 kB, how many bits are required for the sequence number?
A) 3 B) 4 C) 5 D) 6
I got the answer as option C, as follows:
the propagation time is 10 microsec/km
so for 1000 km it is 10*1000 microsec, i.e. 10 ms
then the RTT will be 20 ms
taking 1 MBps as 8*10^6 bits per second, the link carries 8*10^6 bits in 10^3 ms
so, in 20 ms it carries X bits;
X = 20*(8*10^6)/10^3 = 160*10^3 bits
now, 1 frame is of size 1 kB, i.e. 8000 bits
so the total number of frames will be 160*10^3 / 8000 = 20. This will be the window size.
hence, to represent 20 frames uniquely we need 5 bits (2^5 = 32 ≥ 20).
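Just to double-check that arithmetic in code (treating 1 MBps as 8*10^6 bits/s and ignoring transmission time, exactly as above):

```python
import math

rate_bps = 8 * 10**6            # assumed reading of "1 MBps": 8 * 10^6 bits per second
tp = 1000 * 10 * 1e-6           # one-way propagation delay: 1000 km * 10 us/km = 0.01 s
rtt = 2 * tp                    # 0.02 s
frame_bits = 8000               # 1 kB frame, taking 1 kB = 1000 bytes

frames_in_flight = rate_bps * rtt / frame_bits          # bits sent in one RTT, in frames
seq_bits = math.ceil(math.log2(frames_in_flight))
print(frames_in_flight, seq_bits)                       # 20.0 frames -> 5 bits
```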
The answer was correct as per the answer key. Then I came across this one:
Frames of 1000 bits are sent over a 10^6 bps duplex link between two hosts. The propagation time is
25ms. Frames are to be transmitted into this link to maximally pack them in transit (within the link).
What is the minimum number of bits (l) that will be required to represent the sequence numbers distinctly?
Assume that no time gap needs to be given between transmission of two frames.
(A) l=2 (B) l=3 (C) l=4 (D) l=5
Following the same approach as before, I solved it like this:
the propagation time is 25 ms
then the RTT will be 50 ms
the link carries 10^6 bits in 10^3 ms
so, in 50 ms it carries X bits;
X = 50*(10^6)/10^3 = 50*10^3 bits
now, 1 frame is of size 1 kb, i.e. 1000 bits
so the total number of frames will be 50. This will be the window size.
hence, to represent 50 frames uniquely we need 6 bits.
But 6 is not even among the options. The answer key uses the same solution, except that it takes the one-way propagation time rather than the RTT for the calculation, and its answer is 5 bits. I am totally confused: which one is correct?
I don't see what RTT has to do with it. The frames are only being sent in one direction.
Round-trip time means that you have to take into account the ACK (acknowledgement message) you must receive, which tells you that the frames you are sending are being received on the other side of the link. This time window is the period in which you get to send the remaining frames the window allows before you expect an ACK.
Ideally you want to be able to transmit continuously, i.e. not have to stop at the window limit to wait for an ACK (which essentially turns the exchange into stop-and-wait). So the window size for this question is the minimum number of frames that can be transmitted from the moment the first frame is sent to the moment you get its ACK.
Your calculations look correct in both cases, and it would be safe to assume the answer choices for the second question are wrong.
Here it is a duplex channel, so RTT = Tp; hence they have considered only Tp.
With that you get X = 25*10^3 bits, i.e. 25 frames in transit.
So the number of bits needed for the window is 5 (2^5 = 32 ≥ 25).
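To see both readings of the second question side by side (a rough sketch that ignores transmission time, as both workings above do):

```python
import math

rate_bps = 10**6          # duplex link
frame_bits = 1000
tp = 0.025                # one-way propagation delay, 25 ms

for label, delay in (("one-way Tp", tp), ("full RTT", 2 * tp)):
    frames_in_transit = rate_bps * delay / frame_bits
    seq_bits = math.ceil(math.log2(frames_in_transit))
    print(label, frames_in_transit, seq_bits)

# one-way Tp -> 25 frames -> 5 bits (the answer key's reading)
# full RTT   -> 50 frames -> 6 bits (the RTT-based reading)
```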
I have at my disposal 16 bits. Of them, 4 bits are the header and cannot be touched. This leaves us with 12 bits. I would like to encode date and time data into them. These are essentially logs being sent over LPWAN.
Obviously, it's impossible to encode a proper generic date and time into that, since the Unix timestamp uses 32 bits and projects like Compact Time Format use 5 bytes.
Let's say we don't really need the year, because this information is available elsewhere. Let's also say the time resolution of seconds doesn't have to be super accurate, so we can split the seconds into 30 second intervals. If we were to simply encode the data as is then:
4 bits month (0-11)
5 bits day (0-31)
5 bits hour (0-23)
6 bits minute (0-59)
1 bit second (0,30)
-----------------------------
21 bits
21 bits is much better than 32, but it's still not 12. I could subtract one bit from the minutes (rounding to the nearest even minute) and remove the seconds, but that still leaves us with 19 bits, which is still far from 12.
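For concreteness, the straightforward 21-bit packing above would look something like this (just a sketch of the field layout, not something that fits in 12 bits):

```python
def pack_datetime_21(month, day, hour, minute, half_minute):
    """Pack month (0-11), day (0-31), hour (0-23), minute (0-59) and a
    half-minute flag (0 or 1) into a 21-bit integer, widths as listed above."""
    value = month                       # 4 bits
    value = (value << 5) | day          # 5 bits
    value = (value << 5) | hour         # 5 bits
    value = (value << 6) | minute       # 6 bits
    value = (value << 1) | half_minute  # 1 bit
    return value                        # needs 21 bits, 9 more than are available

print(bin(pack_datetime_21(month=10, day=23, hour=14, minute=7, half_minute=1)))
```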
Just wondering if it's possible, and if anyone has any ideas.
12 bits can hold 2^12 = 4096 values, which feels pretty tight for this task. I'm not sure much can be done in terms of compressing a date and time into one of 4096 values; it is too little space to represent this data.
There are some workarounds, none of them able to achieve what you want, but maybe something you could use anyway:
Split date and time. Alternate, with some algorithm, between sending the date and the time; one bit can be used to indicate which is being sent. This leaves 11 bits to encode either the date or the time. You could go a bit further and split the time like this as well. The receiving side can then reconstruct a full date and time from the previously received data.
You could have a scheme where one date packet is sent as a starting point, and subsequent packets are incremented in N-second intervals from that epoch.
Remove date time from data completely, saving 12 bits, but send it periodically as a stand-alone heartbeat/datetime packet.
You could try compressing the whole data packet which could allow using more bits to represent date time and still fit into a smaller overall packet size
If data is sent at reasonably fixed intervals, you could use a circular counter with an interval of N seconds. This may work if you have few devices and can keep track of when they start transmitting. For example, a satellite was launched at date-time XYZ and sends a counter every 30 seconds; if we receive a counter value of 100, we calculate the date with simple math: XYZ + 30*100 seconds.
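A minimal sketch of that last counter idea, assuming a 12-bit counter, a 30-second interval, and an epoch agreed out of band (all names invented for illustration):

```python
from datetime import datetime, timedelta

EPOCH = datetime(2024, 1, 1)     # assumed start of transmission, known to both sides
INTERVAL_S = 30                  # one tick per 30 seconds
COUNTER_BITS = 12                # the whole 12-bit payload is just the counter

def encode(now: datetime) -> int:
    ticks = int((now - EPOCH).total_seconds()) // INTERVAL_S
    return ticks % (1 << COUNTER_BITS)            # wraps every 4096 * 30 s ≈ 34 hours

def decode(counter: int, last_known: datetime) -> datetime:
    # The counter wraps, so the receiver needs a rough idea of the current time
    # (last_known) in order to pick the right wrap-around period.
    base_ticks = int((last_known - EPOCH).total_seconds()) // INTERVAL_S
    ticks = (base_ticks // (1 << COUNTER_BITS)) * (1 << COUNTER_BITS) + counter
    if ticks < base_ticks:
        ticks += 1 << COUNTER_BITS
    return EPOCH + timedelta(seconds=ticks * INTERVAL_S)

print(decode(encode(datetime(2024, 1, 2, 12, 0)), datetime(2024, 1, 2, 11, 0)))
# -> 2024-01-02 12:00:00
```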
No. Unless you'd be happy with representing less than a span of a day and a half. You can just count 4096 30-second intervals, and find that that will cover 34 hours and eight minutes. 4096 two-minute intervals is just four times that, or five days, 16 hours, and 32 minutes. Still a small fraction of a year.
If you can assure that the difference between successive log entries is small, then you can stuff that into 12 bits. You will need a special entry to give an initial date, and maybe you could insert such an entry whenever the difference between successive entries is too large.
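A rough sketch of that delta scheme (details invented): reserve one of the 4096 values as a "resync" marker, send every other entry as a 30-second delta from the previous one, and supply a full timestamp by some other means whenever the marker is used.

```python
RESET = 0xFFF            # reserved marker: receiver must resynchronise from a full timestamp
MAX_DELTA = RESET - 1    # 0..4094 ticks of 30 s between successive entries

def encode_entry(prev_ticks, cur_ticks):
    """Return the 12-bit value for a log entry, times given in 30-second ticks."""
    if prev_ticks is None:
        return RESET
    delta = cur_ticks - prev_ticks
    return delta if 0 <= delta <= MAX_DELTA else RESET

def decode_entry(prev_ticks, value, resync_ticks):
    """Recover the entry's time in ticks; resync_ticks accompanies a RESET marker."""
    return resync_ticks if value == RESET else prev_ticks + value
```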
#oleksii has some other good suggestions as well.
This question is from the book Computer Networking: A Top-Down Approach by Kurose (section 1.3.2, Circuit Switching).
Refer to this paragraph:
Suppose there are 10 users and that one user suddenly generates one thousand 1,000-bit packets (1,000,000 bits), while other users remain quiescent and do not generate packets. Under TDM circuit switching with 10 slots per frame and each slot consisting of 1,000 bits, the active user can only use its one time slot per frame to transmit data, while the remaining nine time slots in each frame remain idle. It will be 10 seconds before all of the active user’s one million bits of data has been transmitted
I don’t understand how the author arrived at 10 secs as the time to transfer the data.
From the context of that section, the link being discussed is one megabit per second, 1 Mbps = 1,000,000 bits/sec.
1,000 bits/pkt * 1,000 pkts = 1,000,000 bits
1 active slot / 10 slots/frame = 1/10 frame active
1 Mbps * 1/10 active = 100,000 bits/sec
1000000 bits / 100000 bits/sec = 10 secs
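The same arithmetic as a quick check in code:

```python
link_bps = 1_000_000                    # 1 Mbps link
total_bits = 1_000 * 1_000              # one thousand 1,000-bit packets
slots_per_frame = 10

effective_bps = link_bps / slots_per_frame   # the active user owns 1 slot out of 10
print(total_bits / effective_bps)            # 10.0 seconds
```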
I am very sorry not to be able to provide code for this question, but it is more of a logical situation. My termination sequence for the AX.25 protocol is "111111", i.e. six 1s. So if this sequence of 1s is found inside my data packet, it will denote the end of the packet and the packet will be sent without the rest of its data. I will do my best to explain my conclusions and test results so that you can understand my dilemma.
(Programming in Arduino)
Byte 1 contains 8 bits. Look below and try to picture each byte as a rectangular box; right next to byte 1 is byte 2, which also contains 8 bits.
Situation 1:
||_1_0_1_1_1_0_1_0_ ||_1_1_1_1_1_1_0_0_||
Attempted Solution 1: you could simply change a 1 into a 0 and keep track of it.
Situation 2:
||_1_0_1_1_1_0_1_1_ ||_1_1_1_1_0_0_1_0_||
Attempted Solution 2: attempted solution 1 breaks apart here, and I am stuck.
Individually the bytes are safe from activating the AX.25 termination sequence, but the combined bytes result in a problem.
Here is a list of possible cases:
1) six 1s = termination sequence activated for end of packet
2) six 1s inside actual data of packet = premature termination
3) if 1s are changed to 0s, then a sequence of six 0s can be a problem when reverting the changes
4) can only read 1 byte at a time (EEPROM) due to memory limitations
5) if six 1s occur between two bytes will also prematurely activate termination sequence.
Thank you in advance for any kind of help.
The solution mandated by the AX.25 protocol is bit stuffing.
Conceptually, any time the receiver sees five sequential one bits and a zero bit, it assumes that the zero bit has been stuffed by the sender (to break up erroneous frame sequences in the data), and removes it before emitting the received data. The only sequence of six 1-bits that can have been sent un-stuffed is the framing sequence; all data will have been sent stuffed. The receiver must always de-stuff.
To stuff or unstuff will generally require a couple of bytes of working ram (or a couple of bytes of registers), although there might be creative ways around that.
To quote the official TAPR protocol standard:
"In order to ensure that the flag bit sequence mentioned above does not appear accidentally anywhere else in a frame, the sending station monitors the bit sequence for a group of five or more contiguous “1” bits. Any time five contiguous “1” bits are sent, the sending station inserts a “0” bit after the fifth “1” bit. During frame reception, any time five contiguous “1” bits are received, a “0” bit immediately following five “1” bits is discarded."
A Google search for AX.25 bit stuffing should return as much detail as you might need.
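Here is a rough sketch of the stuffing/unstuffing logic in Python, just to show the idea (on the Arduino you would do the same thing a bit at a time, and the flag bytes are added around the stuffed payload, never stuffed themselves):

```python
FLAG = [0, 1, 1, 1, 1, 1, 1, 0]          # AX.25 frame delimiter, transmitted un-stuffed

def stuff(bits):
    """Insert a 0 after every run of five consecutive 1s in the payload."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)                 # stuffed bit
            run = 0
    return out

def unstuff(bits):
    """Drop the 0 that follows every run of five consecutive 1s.
    (In a real receiver, a 1 in that position means a flag or an abort, not data.)"""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                          # this is the stuffed 0 inserted by the sender
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            skip = True
    return out

payload = [1, 0, 1, 1, 1, 0, 1, 1,        # the two bytes from Situation 2 above
           1, 1, 1, 1, 0, 0, 1, 0]
assert unstuff(stuff(payload)) == payload
print(stuff(payload))                     # note the extra 0 after the fifth consecutive 1
```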
I'm writing a server that sends a "coordinates buffer" of game objects to clients every 300ms. But I don't want to send the full data each time. For example, suppose I have an array with elements that change over time:
0 0 100 50 -100 -50 at time t
0 10 100 51 -101 -50 at time t + 300ms
You can see that only the 2nd, 4th, and 5th elements have changed.
What is the right way to send not all the elements, but only the delta? Ideally I'd like a function that returns the complete data the first time and empty data when there are no changes.
Thanks.
Are you looking to optimize for efficiency, or is this a learning exercise? Some thoughts:
Unless there's a lot of data, it's probably easiest, and not terribly inefficient, to send all the data each time.
If you send deltas for all of the data points each time, you won't save much by sending zeroes for unchanged points instead of re-sending the previous values.
If you send data for only those points that change, you'll need to provide an index for each value. For example, if point 3 increases by 5 and point 8 decreases by 2, then you might send 3 5 8 -2. But now, since you're sending two values for each point that changes, you'll only win if fewer than half the points change.
If the values change relatively slowly, as compared to the rate at which you transmit updates, you might increase efficiency by transmitting the delta for each data point, but using only a few bits. For example, with 4 bits you can transmit values from -8 to +7. That would work as long as the deltas are never larger than that, or if it's ok to transmit several deltas before they "catch up" to the actual values.
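A sketch of that fixed-width small-delta idea (function names invented), packing one signed 4-bit delta per point, two per byte:

```python
def pack_deltas(prev, cur):
    """Pack one signed 4-bit delta (-8..+7) per point, two deltas per byte."""
    deltas = [c - p for p, c in zip(prev, cur)]
    assert all(-8 <= d <= 7 for d in deltas), "delta too large for 4 bits"
    if len(deltas) % 2:
        deltas.append(0)                              # pad to a whole number of bytes
    return bytes(((hi & 0xF) << 4) | (lo & 0xF)
                 for hi, lo in zip(deltas[0::2], deltas[1::2]))

def unpack_deltas(data, n):
    nibbles = [x for byte in data for x in (byte >> 4, byte & 0xF)]
    return [x - 16 if x > 7 else x for x in nibbles[:n]]   # sign-extend 4-bit values

packed = pack_deltas([0, 0, 100], [1, -2, 103])     # deltas 1, -2, 3
print(packed.hex(), unpack_deltas(packed, 3))       # '1e30' [1, -2, 3]
```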
It may not be worthwhile to have 2 different mechanisms: one to send the initial values, and another to send deltas. If you can tolerate the lag, it may make more sense to assume some constant initial value for every point, and then transmit only deltas.
There are lots of options. If most data isn't changing, just send (index,value) pairs of the changed elements. If most values change but the changes are small, compute deltas and gzip (or run length encode, or lots of other possibilities) the result.
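And a sketch of the (index, value) pair approach mentioned in both answers, matching the behaviour asked for in the question (names invented):

```python
def diff(prev, cur):
    """Return (index, new_value) pairs for changed elements; full data on the first call."""
    if prev is None:
        return list(enumerate(cur))               # first time: send everything
    return [(i, v) for i, (old, v) in enumerate(zip(prev, cur)) if old != v]

def apply_delta(state, delta):
    for i, v in delta:
        state[i] = v

prev = [0, 0, 100, 50, -100, -50]                 # values at time t (from the question)
cur  = [0, 10, 100, 51, -101, -50]                # values at time t + 300 ms
print(diff(prev, cur))                            # [(1, 10), (3, 51), (4, -101)]
print(diff(cur, cur))                             # [] -> empty when nothing changed
```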
The question is:
We have a transport protocol that uses pipelining and an 8-bit sequence number (0 to 255).
What is the maximum window size the sender can use? (How many packets can the sender send out on the net before it must wait for an ACK?)
For Go-Back-N the maximum window size is w = 2^m - 1 = 255.
For Selective Repeat the maximum window size is w = 2^m / 2 = 128.
I do not know which one is correct or which formula I should use.
Thanks for the help.
Those two are different protocols having different issues.
In the case of Go-Back-N, you are correct: the window size can be up to 255. (2^8 - 1 is the last sequence number when counting from 0, and it is also the maximum window size possible for the Go-Back-N protocol.)
However, the Selective Repeat protocol limits the window size to half of the sequence-number space, because the receiver cannot distinguish a retransmitted packet from a new packet carrying the same sequence number when the ACK for the original was lost and never reached the sender in the previous window. Hence, the window size must be at most half the range of sequence numbers, so that two consecutive windows can never contain the same sequence number.
Go-Back-N doesn't have this issue, since the sender pushes packets up to the window size (at most n - 1 for n sequence numbers) and never slides the window until it receives cumulative ACKs. So the two protocols have different maximum window sizes.
Note: for Go-Back-N, the maximum window size is the number of unique sequence numbers minus 1. If the window equals the number of unique sequence numbers and all the acknowledgements are lost, the receiver will accept all the retransmitted messages as a separate set of new messages and deliver them an additional time to its application. To avoid this inconsistency, maximum window size = number of unique sequence numbers - 1. (This answer has been updated according to the fact provided in the comment by #noamgot.)
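One way to see the Selective Repeat limit numerically: if every ACK for the current window is lost, the retransmitted window and the receiver's next expected window must not share any sequence numbers. A toy check (an assumed model, not any particular implementation), using 3-bit sequence numbers for readability:

```python
M = 3                                    # 3 sequence bits -> 8 sequence numbers
SPACE = 1 << M

def windows_overlap(w):
    """Old window [0, w) vs. the receiver's next window [w, 2w), both modulo SPACE."""
    old = {i % SPACE for i in range(w)}
    new = {(w + i) % SPACE for i in range(w)}
    return bool(old & new)

for w in range(1, SPACE + 1):
    print(w, "ambiguous" if windows_overlap(w) else "safe")

# Safe only up to w = SPACE // 2, i.e. 2^m / 2 -> with 8 bits that is 128 (Selective Repeat).
# Go-Back-N's receiver expects just one sequence number at a time, so the window only has
# to avoid that single value, giving 2^m - 1 -> 255.
```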