I was reading about the HDLC (High-Level Data Link Control) protocol, in which the frame's control field has a 'type bit'. Type bit 1 is for REJECT, which is basically a negative acknowledgement asking the sender to retransmit the damaged frame. I don't have a problem with that. But type bit 3 is for SELECTIVE REJECT. I googled it, and it is claimed to be the same as REJECT. This confuses me. What exactly is SELECTIVE REJECT?
Sorry for answering my own question. I researched a little more and found this answer. I promise to research more thoroughly before posting questions on SO. I hope it is useful for someone.
Reject (REJ):
If the value of the code subfield is 01, it is a REJ S-frame. This is a NAK frame, but not like the one used for Selective Repeat ARQ. It is a NAK that can be used in Go-Back-N ARQ to improve the efficiency of the process by informing the sender, before the sender's timer expires, that the last frame is lost or damaged. The value of N(R) is the negative acknowledgment number.
Selective reject (SREJ):
If the value of the code subfield is 11, it is an SREJ S-frame. This is a NAK frame used in Selective Repeat ARQ. Note that the HDLC Protocol uses the term selective reject instead of selective repeat. The value of N(R) is the negative acknowledgment number.
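To make the two NAK types concrete, here is a small Python sketch of my own (not from the quoted source) that maps the code subfield values mentioned above onto frame types. The bit layout of the surrounding control byte differs between references, so only the two-bit code value itself is handled here.

```python
# Minimal sketch: classifying an HDLC S-frame by its 2-bit code subfield,
# using the values quoted above (01 = REJ, 11 = SREJ).

S_FRAME_CODES = {
    0b01: "REJ",   # reject: NAK used with Go-Back-N ARQ
    0b11: "SREJ",  # selective reject: NAK used with Selective Repeat ARQ
}

def classify_s_frame(code_subfield: int) -> str:
    """Return the S-frame type for a two-bit code subfield value."""
    return S_FRAME_CODES.get(code_subfield & 0b11, "other S-frame (RR/RNR)")

print(classify_s_frame(0b01))  # REJ  -> retransmit everything from N(R)
print(classify_s_frame(0b11))  # SREJ -> retransmit only the frame N(R)
```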
I have done some research but I'm still confused about this field. I know its purpose is retransmission if the bit is set to 1. However, is it correct to say that if a packet is sent but no acknowledgement comes back, the sender will try to send it again and therefore sets the Retry field to 1?
Is that the correct explanation? If I am wrong, how should I explain it?
It is a wireless frame that is re-transmitted because the receiver did not acknowledge it. For more detail, refer to: https://www.youtube.com/watch?v=0wm9zK7kDvU
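To show where that bit lives, here is a rough Python sketch, assuming you have the raw 802.11 MAC frame bytes in hand (i.e. starting at the 2-byte Frame Control field, with no radiotap or other capture header in front):

```python
# Sketch: reading the Retry flag from an 802.11 frame.
# Byte 0 of the Frame Control field holds version/type/subtype;
# byte 1 holds the flags, where bit 3 (0x08) is the Retry flag.

def is_retransmission(frame: bytes) -> bool:
    """True if the Retry flag is set, i.e. this frame is a re-send."""
    return bool(frame[1] & 0x08)

# Hypothetical example: a frame whose flags byte has only Retry set.
sample = bytes([0x80, 0x08]) + bytes(22)
print(is_retransmission(sample))  # True
```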
I had a bug that indexed into a buffer preceding the buffer containing a SYN packet, and it dropped a byte into the ISN of this packet. Unfortunately, even though the packet was sent, I got nothing back from the other end. This surprised me. I didn't think the other end cared how this ISN was calculated, just that it should be difficult for someone to guess. Surely not every TCP/IP implementation uses the same calculation, such that the other end flagged this one as an invalid packet?!
Please can some kind soul explain this to my ignorant self.
NB: I believe the packet is going onto the wire because:
1. Wireshark displays it, and
2. the light on my network card winks at me.
Otherwise I might have thought my machine checked the ISN and stopped it.
I guess I'd not make the grade of a security analyst if I can't understand this, hey!
I'm currently studying fairly basic networking, and I'm on the subject of reliable transmission. I'm using the book Computer Networking by Kurose & Ross, and two of the review questions were as follows:
With the selective-repeat/go-back-n protocol, is it possible for the sender to receive an ACK for a packet that falls outside of its current window?
For the SR version, my answer to the question was as follows:
Yes, if the window size is too big for the sequence number space. For example, a receiver gets a number of packets equal to the size of the sequence number space. Its receive window has thus moved so that it is expecting a new set of packets with the same sequence numbers as the last one. The receiver now sends an ACK for each of the packets, but all of them are lost along the way. This eventually causes the sender to time out for each of the previous set of packets and retransmit each of them. The receiver thinks that this duplicate set of packets is really the new set it is expecting, and it sends ACKs for each of them that successfully reaches the sender. The sender now experiences a similar kind of confusion, where it thinks that the ACKs are confirmations that each of the old packets has been received, when they are really ACKs meant for the new, yet-to-be-sent packets.
I'm pretty sure this is correct (otherwise, please tell me!), since this kind of scenario seems to be the classic justification of why window size should be less than or equal to half the size of the sequence number space when it comes to SR protocols, but what about GBN?
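If it helps to see the arithmetic behind that rule, here is a toy Python check I put together (not a real SR implementation) of when the receiver's old and new windows overlap:

```python
# Toy model: with sequence space N and window size W, the receiver expects
# sequence numbers [W, 2W) mod N after accepting the first W packets. If any
# just-delivered number (0..W-1) reappears in that range, a retransmission is
# indistinguishable from new data, which is the confusion described above.

def ambiguous(N: int, W: int) -> bool:
    old = set(range(W))                    # seq numbers just delivered
    new = {(W + i) % N for i in range(W)}  # seq numbers now expected
    return bool(old & new)

for N, W in [(8, 4), (8, 5), (4, 2), (4, 3)]:
    print(f"N={N}, W={W}: ambiguous={ambiguous(N, W)}")
# W <= N/2 keeps the sets disjoint; W > N/2 makes them overlap.
```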
Can the same kind of wraparound issue occur for it, making the answers mostly identical? If not, are there any other cases that can cause a typical GBN sender to receive an ACK outside of its window?
Regarding the latter, the only example I can think of is the following:
A GBN sender sends packets A & B in order. The receiver receives both in order, and sends one cumulative ACK covering every packet before and up to A, and then another one covering every packet before and up to B (including A). The first one is so heavily delayed that the second one arrives first to the sender, causing its window to slide beyond A & B. When the first one finally arrives, it needlessly acknowledges that everything up to A has been correctly received, when A is already outside of the sender's window.
This example seems rather harmless and unlikely in contrast to the previous one, so I doubt that it's correct (but again, correct me if I'm wrong, please!).
In the practical world, what about a duplicate ACK delayed long enough to fall outside the window?
The protocol is between the sender and the receiver, but it has no control over how the medium (the network path) behaves.
The protocol is still reliable by design, but the implementation must be able to handle such out-of-window duplicate ACKs.
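As a concrete illustration, here is a minimal Python sketch of that handling, assuming a simple cumulative-ACK sender with hypothetical state names send_base (oldest unacknowledged sequence number) and next_seq (next number to send); sequence-number wraparound is ignored for brevity:

```python
def handle_ack(ack: int, send_base: int, next_seq: int) -> int:
    """Return the new send_base; out-of-window ACKs are simply ignored."""
    if send_base < ack <= next_seq:
        return ack        # in-window ACK: slide the window forward
    return send_base      # stale or duplicate ACK: no state change

base = handle_ack(ack=5, send_base=3, next_seq=7)     # advances to 5
base = handle_ack(ack=4, send_base=base, next_seq=7)  # delayed duplicate, ignored
print(base)  # 5
```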
Background: I've spent a while working with a variety of device interfaces and have seen a lot of protocols, many over serial and UDP, in which data integrity is handled at the application protocol level. I've been seeking to improve my receive-routine handling of protocols in general, and considering the "ideal" design of a protocol.
My question is: is there any protocol framing scheme out there that can definitively identify corrupt data in all cases? For example, consider the standard framing scheme of many protocols:
Field: Length in bytes
<SOH>: 1
<other framing information>: arbitrary, but fixed for a given protocol
<length>: 1 or 2
<data payload etc.>: based on length field (above)
<checksum/CRC>: 1 or 2
<ETX>: 1
For the vast majority of cases, this works fine. When you receive some data, you search for the SOH (or whatever your start byte sequence is), move forward a fixed number of bytes to your length field, and then move that number of bytes (plus or minus some fixed offset) to the end of the packet to your CRC, and if that checks out you know you have a valid packet. If you don't have enough bytes in your input buffer to find an SOH or to have a CRC based on the length field, then you wait until you receive enough to check the CRC. Disregarding CRC collisions (not much we can do about that), this guarantees that your packet is well formed and uncorrupted.
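For what it's worth, here is roughly how that receive routine looks as a Python sketch; the framing bytes, field offsets and CRC function are placeholders for this sketch rather than any specific protocol:

```python
import binascii

SOH, ETX = 0x01, 0x03   # example framing bytes; yours may differ
HDR_LEN = 2             # SOH + 1-byte length field, for this sketch

def try_parse(buf: bytearray):
    """Try to extract one frame from buf; return the payload or None."""
    # 1. Hunt for the start byte, discarding any leading garbage.
    while buf and buf[0] != SOH:
        buf.pop(0)
    if len(buf) < HDR_LEN:
        return None                          # need more bytes
    length = buf[1]
    frame_len = HDR_LEN + length + 2 + 1     # payload + 2-byte CRC + ETX
    if len(buf) < frame_len:
        return None                          # wait for the rest (the trouble
                                             # spot when `length` is corrupt)
    payload = bytes(buf[HDR_LEN:HDR_LEN + length])
    crc_rx = int.from_bytes(buf[HDR_LEN + length:HDR_LEN + length + 2], "big")
    if binascii.crc_hqx(payload, 0xFFFF) == crc_rx and buf[frame_len - 1] == ETX:
        del buf[:frame_len]                  # consume the valid frame
        return payload
    del buf[0]                               # bad frame: resync on the next SOH
    return None
```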
However, if the length field itself is corrupt and has a high value (which I'm running into), then you can't check the (corrupt) packet's CRC until you fill up your input buffer with enough bytes to meet the corrupt length field's requirement.
So is there a deterministic way to get around this, either in the receive handler or in the protocol design itself? I can set a maximum packet length or a timeout to flush my receive buffer in the receive handler, which should solve the problem on a practical level, but I'm still wondering if there's a "pure" theoretical solution that works for the general case and doesn't require setting implementation-specific maximum lengths or timeouts.
Thanks!
The reason why all protocols I know of, including those handling "streaming" data, chop up the data stream into smaller transmission units, each with its own checks on board, is exactly to avoid the problems you describe. Probably the fundamental flaw in your protocol design is that the blocks are too big.
The accepted answer to this SO question contains a good explanation and a link to a very interesting (but rather math-heavy) paper about this subject.
So in short, you should stick to smaller transmission units, not only because of practical programming-related arguments but also because of the message length's role in determining the protection offered by your CRC.
One way would be to encode the length parameter so that corruption of it is easily detected, saving you from reading in a large buffer just to check the CRC.
For example, the XModem protocol embeds an 8-bit packet number followed by its one's complement.
It could mean doubling the size of your length field, but it's an option.
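Applied to the length field, that idea might look roughly like this (a sketch of the XModem-style complement trick, not XModem itself):

```python
# Send the length byte followed by its one's complement, so a corrupted
# length is detected before committing to a huge read.

def encode_length(length: int) -> bytes:
    return bytes([length & 0xFF, (~length) & 0xFF])

def decode_length(pair: bytes):
    """Return the length if the complement checks out, otherwise None."""
    length, check = pair[0], pair[1]
    return length if (length ^ check) == 0xFF else None

print(decode_length(encode_length(42)))    # 42
print(decode_length(bytes([0xFF, 0xD5])))  # None: length byte is corrupt
```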
I've got a bit of an odd question. A friend of mine and I thought it would be funny to make a serial-port kind of communication between computers using sound. Basically, computers emit a series of beeps to send data, and listen for beeps over a microphone to receive data. In short, the world's most annoying serial port. I have all of the basics worked out. I can filter out sounds of only one frequency and I have sent data from one computer to another. Although the transmission is fairly error free, being affected only by very loud noises, some issues still exist. My question is: what are some good ways to check the data for errors and, more importantly, recover from these errors?
My serial communication is very standard once you get past the fact that it uses sound waves. I use one start bit, 8 data bits, and one stop bit in every frame. I have already considered cyclic redundancy checks, and I plan to factor this into my error checking, but CRCs don't account for some of the more insidious issues. For example, consider sending two bytes of data. You send the first one, and it is received correctly, but just after the stop bit of the first byte and the start bit of the next, a large book falls on the floor, which the receiver interprets as a start bit; now the true start bit is read as part of the data, and the receiver could be reading garbage data for many bytes to come. Eventually, a pause in the data could get things back on track.
That isn't the worst of it though. Bits can be dropped too, and most error checking schemes I can think of rely on receiving a certain number of bytes. What happens when the receiver keeps waiting for bytes that may not come?
So, you can see the complexity of this question. If you can direct me to any resources, or just give me a few tips, I would greatly appreciate your help.
A CRC is just a part of the solution. You can check for bad data, but then you have to do something about it. The transmitter has to re-send the data, and it needs to be told to do that. A protocol.
The starting point is that you split up the data into packets. A common approach is a start byte that indicates the start of the packet, followed by a packet number, followed by a length byte that indicates the length of the packet. Followed by the data bytes and the CRC. The receiver sends an ACK or NAK back to indicate success.
This solves several problems:
you no longer care about a bad start bit; the pause you need to recover is always there
you start a timer when you receive the first bit or byte, and declare failure if the timer expires before the entire packet is received
the packet number helps you recover from bad ACK/NAK returns. The transmitter times out and resends the packet, and you can detect the duplicate
RFC 916 describes such a protocol in detail. I have never heard of anybody actually implementing it (other than me). It works pretty well.
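As a starting point, a packet along those lines might be built and checked like this (a Python sketch with made-up constants and a CRC-16 from the standard library, not RFC 916 itself):

```python
import binascii

START = 0xAA  # arbitrary start-of-packet marker chosen for this sketch

def build_packet(seq: int, payload: bytes) -> bytes:
    """START | seq | length | payload | 16-bit CRC over seq+length+payload."""
    body = bytes([seq & 0xFF, len(payload) & 0xFF]) + payload
    crc = binascii.crc_hqx(body, 0xFFFF)
    return bytes([START]) + body + crc.to_bytes(2, "big")

def check_packet(pkt: bytes):
    """Return (seq, payload) on success, or None so the receiver can NAK."""
    if len(pkt) < 5 or pkt[0] != START:
        return None
    body, crc_rx = pkt[1:-2], int.from_bytes(pkt[-2:], "big")
    if binascii.crc_hqx(body, 0xFFFF) != crc_rx or body[1] != len(body) - 2:
        return None
    return body[0], bytes(body[2:])

pkt = build_packet(seq=1, payload=b"hello")
print(check_packet(pkt))                   # (1, b'hello') -> send ACK 1
corrupted = pkt[:4] + b"\x00" + pkt[5:]    # flip one payload byte
print(check_packet(corrupted))             # None -> send NAK, expect a resend
```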