Selective Repeat protocol's window size - networking

Hi, I'm a newbie to networking. I understand the differences between Go-Back-N and Selective Repeat at the transport layer, but I'm confused about Selective Repeat. I know the receiver might not be able to tell whether an incoming packet is new or a retransmission when the window size is set too large, but what I don't understand is: why does the receiver need to know? The receiver can just send an ACK packet back to the sender and let the sender decide what to do next; then everything is fine and the protocol still works properly.

why does the receiver need to know? The receiver can just send an ACK packet
back to the sender and let the sender decide what to do next
It's easy to forget, but the point of networking is not just to send ACKs and sequence numbers between computers. The receiver also needs to actually receive the data.
If the receiver can't tell whether a packet carries new information or a re-transmission, then it can't tell what information it received.
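To make the ambiguity concrete, here is a small numeric sketch (a 2-bit sequence space is assumed purely for illustration):

```python
# A minimal sketch of why Selective Repeat needs window_size <= seq_space / 2.
SEQ_SPACE = 4   # 2-bit sequence numbers: 0, 1, 2, 3
WINDOW = 3      # too large: violates WINDOW <= SEQ_SPACE // 2

# The sender sends 0, 1, 2; the receiver delivers them and slides its window.
receiver_window = [i % SEQ_SPACE for i in range(3, 3 + WINDOW)]  # [3, 0, 1]

# Case A: all three ACKs are lost, so the sender retransmits packet 0 (OLD data).
# Case B: the ACKs arrive and the sender later sends a fifth packet, whose
#         sequence number wraps around to 0 as well (NEW data).
incoming_seq = 0
print(incoming_seq in receiver_window)  # True in both cases: the receiver must
# accept seq 0, yet it cannot tell whether the payload is old or new.
```

With a window of at most SEQ_SPACE // 2 (here, 2), the retransmitted 0 would fall outside the receive window and be recognised as a duplicate rather than delivered as new data.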

Related

Alternating bit protocol delayed packet

In the alternating bit protocol, how does the receiver know the difference between a delayed packet and a correct one? For example, suppose the sender sends a packet with seq#0 and it gets severely delayed on the way there, so much so that the sender and receiver complete two more packets in between, and the receiver, expecting the next packet with seq#0, instead receives the delayed one. Should the receiver keep temporary storage of the last few packets to check whether a packet is just delayed, or are there other ways to tell?
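For context, a sketch of the alternating-bit receiver's decision rule (the single-bit framing shown here is assumed for illustration; real implementations also handle checksums and ACK loss). Note that it checks nothing but the one sequence bit, which is exactly why a sufficiently delayed duplicate carrying the "expected" bit looks identical to a fresh packet:

```python
# Sketch of an alternating-bit receiver: it keys on a single sequence bit.
expected_bit = 0

def deliver(data: bytes) -> None:
    print("delivered:", data)

def send_ack(bit: int) -> None:
    print("ack", bit)

def on_packet(seq_bit: int, data: bytes) -> None:
    global expected_bit
    if seq_bit == expected_bit:
        deliver(data)          # looks new: accept it
        expected_bit ^= 1      # now expect the other bit
    send_ack(seq_bit)          # (re)acknowledge either way

on_packet(0, b"first")   # delivered; receiver now expects bit 1
on_packet(0, b"again")   # same bit: treated as a duplicate and only re-ACKed
```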

Can transmission of a packet finish before the first bit has reached the receiver?

I need to find a combination of packet length, bandwidth and link length for which the transmission of a packet finishes before the first bit of the packet has reached the receiver. Is this even possible?
TCP or UDP?
TCP requires receiving a response from the destination before it actually starts sending data, so it won't be possible here.
UDP has no concept of knowing whether or not a packet was received, which means that as soon as the packet has left the sender there is no further communication between the two.
Your question is worded ambiguously, though: how can you talk about 'transmission time' (which implies the time between sending and receiving) while comparing it to the 'receiving time' (which is already part of the transmission time)?
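For what it's worth, the arithmetic is straightforward: transmission time is L/R (packet length over bandwidth) and propagation delay is d/s (link length over signal speed), so you are looking for L/R < d/s. A quick check with assumed example numbers:

```python
# Back-of-the-envelope check (example numbers only): does the sender finish
# pushing the last bit onto the wire before the first bit arrives?
L = 1000 * 8          # packet length: 1000 bytes, in bits
R = 1_000_000_000     # bandwidth: 1 Gbit/s
d = 10_000            # link length: 10 km, in metres
s = 2e8               # signal speed in copper/fibre, roughly 2/3 of c

transmission_time = L / R   # 8 microseconds
propagation_delay = d / s   # 50 microseconds
print(transmission_time < propagation_delay)  # True: transmission finishes first
```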

Is there a way to verify whether the packets are received at a switch?

I have a router1-switch-router2 connection. My problem is that if I send a packet from router1 to router2, it is not received at router2. I am sure the IP address/subnet settings are correct, that packets are actually leaving router1, and that the internal port connections of the switch are right. I have access to the on-path switch. Is there a specific command on the switch to check whether the packet is received? Even ARP is not getting resolved.
You can run a packet capture app on both the sender and the receiver; it will show the incoming and outgoing packets on both boxes.
In this case your packet is probably getting dropped on either the sender or the receiver side. There can be a million reasons for a packet drop, but this is a good place to start.
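As a concrete starting point, here is a minimal capture sketch using scapy (assuming it is installed; the interface name and the ICMP filter are placeholders for your setup). Run it on both routers: if the packet shows up leaving router1 but never arrives at router2, the drop is somewhere in between:

```python
# Minimal packet capture with scapy (run with sufficient privileges).
from scapy.all import sniff

def show(pkt):
    print(pkt.summary())  # one line per captured packet

# The BPF filter and interface are examples; adjust to the traffic you trace.
sniff(iface="eth0", filter="icmp", prn=show, count=10)
```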

Can CSMA/CD in Ethernet tell the sender when the client has gotten a packet that is "damaged"?

If a client receives a damaged packet, it will know that after comparing the computed checksum to the one carried in the frame.
But can a sender know when a packet has reached the client in a damaged form?
First of all, it seems you are confusing things. CSMA/CD is for detecting when somebody else is using the link, so that collisions are avoided or, failing that, detected and the transmission aborted. That is its only purpose.
Second, Ethernet senders cannot find out whether the frame they sent arrived malformed. There is no acknowledgement; the upper-layer protocols must take their own precautions.
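To illustrate the receiver-side check: Ethernet carries a CRC-32 frame check sequence in the frame trailer, and the receiver silently drops frames that fail it. A rough sketch (zlib.crc32 standing in for the hardware CRC):

```python
# Receiver-side integrity check, sketched in Python. The receiver detects
# the damage, but note that nothing here reports back to the sender.
import zlib

def make_frame(payload: bytes) -> bytes:
    fcs = zlib.crc32(payload).to_bytes(4, "big")
    return payload + fcs  # checksum appended, as in an Ethernet trailer

def frame_ok(frame: bytes) -> bool:
    payload, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == fcs

frame = make_frame(b"hello")
corrupted = b"jello" + frame[5:]             # bits flipped in transit
print(frame_ok(frame), frame_ok(corrupted))  # True False: bad frame is dropped
```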

Can UDP retransmit lost data?

I know the protocol doesn't support this, but is it common for clients that require some level of reliability to build into their application a method for requesting retransmission of a packet that is found to be corrupt?
It is common for clients to implement reliability on top of UDP when they need some reliability (or sometimes just partial reliability) but none of the other things TCP offers, such as strict in-order delivery, and when they also want low latency (or multicast, where it works).
In general, you will only want reliable UDP if you have pressing reasons (very low latency and high throughput, e.g. for a twitch game). In every "normal" case, simply using TCP will serve you equally well or better.
Also note that it is easy to implement your own stack on top of UDP that performs worse than TCP.
See enet for an example of a library that implements reliability (and some other features) on top of UDP; RakNet or HawkNL would be other examples.
You might want to look at the answers to this question: What do you use when you need reliable UDP?
Of course. You can build a reliable protocol (like TCP) on top of UDP.
Example
Imagine you are building a fileserver:
* read the file in blocks of 1024 bytes
* construct a UDP packet with payload: 4 bytes for the "position" in the file, 4 bytes for the "length" of the contents of the packet.
The receiver now receives UDP packets. If it gets the following packets:
* 0-1024: DATA
* 1024-2048: DATA
* 3072-4096: DATA
it realises a packet went missing and asks the sending application to resend the part between 2048 and 3072.
This is a very basic example to show that your application code needs to deal with packet construction and payload interpretation. Don't use it as-is; it does not take edge cases (the last packet, checksums for corrupted packets, ...) into account.
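A rough sketch of that framing in code (the 4-byte position / 4-byte length layout is the one assumed above, and it inherits the same caveats):

```python
# Build and parse the hypothetical fileserver datagram described above.
import struct

HEADER = struct.Struct("!II")  # network byte order: position, length

def build_packet(position: int, data: bytes) -> bytes:
    return HEADER.pack(position, len(data)) + data

def parse_packet(packet: bytes) -> tuple[int, bytes]:
    position, length = HEADER.unpack_from(packet)
    return position, packet[HEADER.size:HEADER.size + length]

pkt = build_packet(2048, b"x" * 1024)
position, data = parse_packet(pkt)
print(position, len(data))  # 2048 1024 -- a gap before 2048 means "resend"
```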
Short answer: No.
Long answer: UDP doesn't care about packet loss. If a UDP packet arrives with a bad checksum, it is simply dropped; neither the sender nor the recipient is informed. UDP packets are also not numbered, so if you send four packets and only three arrive, the recipient cannot know that there used to be four and one is missing. Last but not least, packets are not acknowledged, so the sender never knows whether a packet it sent actually made it to the recipient.
Contrary to this, TCP breaks data into segments, each of which is numbered. The recipient can therefore detect a missing segment, and every segment must be acknowledged to the sender. If the sender receives no acknowledgment for a segment within a certain period of time, it assumes the segment was lost and sends it again.
Of course, you can add your own frame header on top of every data fragment you send over UDP. That way your application code can number the fragments and implement an acknowledgement/retransmission strategy, but the question is: will this really be better than what TCP already offers for free? Even if it were equally good, save yourself the time and just use TCP. A lot of people have thought they could do better than TCP, and usually they realise in the end that they cannot. TCP has its weaknesses, but in general it is pretty good at what it does.
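For illustration only, a toy version of such an acknowledgement/retransmission loop over UDP (a minimal sketch, not production code; it ignores duplicate ACKs, connection setup, and congestion control):

```python
# Stop-and-wait sender sketch: number each datagram, retransmit until ACKed.
import socket
import struct

def send_reliable(sock: socket.socket, addr, seq: int, payload: bytes,
                  timeout: float = 0.5, retries: int = 5) -> bool:
    packet = struct.pack("!I", seq) + payload   # 4-byte sequence number header
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(packet, addr)
        try:
            ack, _ = sock.recvfrom(4)
            if struct.unpack("!I", ack)[0] == seq:
                return True                     # recipient confirmed this number
        except socket.timeout:
            continue                            # no ACK in time: retransmit
    return False                                # give up after `retries` attempts
```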
