The Ethernet II frame format does not contain a length field, and I'd like to understand how the end of a frame can be detected without it.
Unfortunately, I know very little about the physics involved, but the following sounds reasonable to me: we assume that Layer 1 (the Physical Layer) provides a way of transmitting raw bits such that it is possible to distinguish between the situation where bits are being sent and the situation where nothing is being sent (if digital data were coded into analog signals via phase modulation, this would be true, for example - but I don't know whether that is what is actually done). In that case, an Ethernet card could simply wait for a certain time interval during which no more bits are transmitted, and then conclude that the frame transmission is finished.
Is this really what's happening?
If yes: where can I find these things, and what are common values for the length of this "certain time interval"? And why does IEEE 802.3 have a length field?
If not: how is it done instead?
Thank you for your help!
Hanno
Your assumption is right. The length field inside the frame is not needed by Layer 1.
Layer 1 uses other means to detect the end of a frame, and these vary depending on the type of physical layer.
With 10BASE-T, a frame is followed by a TP_IDL waveform; the lack of further Manchester-coded data bits can be detected.
With 100BASE-TX, a frame ends with an End-of-Stream Delimiter, a bit pattern that cannot occur in payload data (because of its 4B/5B encoding).
You can find a rough description here, for example:
http://ww1.microchip.com/downloads/en/AppNotes/01120a.pdf "Ethernet Theory of Operation"
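To illustrate the 4B/5B point above: the End-of-Stream Delimiter is built from control code-groups (/T/ and /R/) that are not among the sixteen data code-groups, so encoded payload can never look like the delimiter. A small Python sketch (the table is the commonly published 100BASE-TX 4B/5B mapping; treat the values as illustrative, not as a reference):

    # Data nibbles -> 5-bit code-groups (commonly published 4B/5B table).
    DATA_4B5B = {
        0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
        0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
        0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
        0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
    }
    ESD = ("01101", "00111")   # the /T/ and /R/ control code-groups

    # The ESD code-groups are not valid data code-groups, so 4B/5B-encoded
    # payload can never accidentally contain the end-of-stream marker.
    assert not set(ESD) & set(DATA_4B5B.values())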
When a computer X sends data through a network to a computer Y, the data goes down through the OSI layers. This is fine, I understand that. But once the data is put on the medium as electrical signals, how does computer Y know what to reassemble, given that the headers and trailers of the data model generated by the OSI layers no longer exist once the data is on the electrical medium at layer 1?
The physical layer is just 1s and 0s, as you say - the trick is that there is a pattern that tells the receiver that this is the start of a packet. This is usually referred to as 'framing'.
Once the receiver knows that, it simply reads in as many bits as it needs for the Layer 2 header, and it then has that, and so on.
The headers are shown clearly in typical OSI or networking diagrams, e.g. https://www.ciscopress.com/articles/article.asp?p=2738463.
So the way the first two layers work on the receiver is:
Layer 1 just recognises whether the signal is a one or a zero and produces the stream of ones and zeros.
Layer 2 reads this stream, and when it recognises the start pattern it knows that the following bits are the header and so on, and hence it can identify the frames.
You can see examples of start and stop patterns online, e.g. http://sinauonline.50webs.com/Cisco/Cisco%20Exploration%20Sem1Chap7.html.
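As a minimal sketch of what the receiver does conceptually (in practice the network hardware handles this), assume the bit stream has already been collected into bytes and still includes the Ethernet preamble (bytes of 0x55) and the Start Frame Delimiter (0xD5):

    START = bytes([0x55] * 7 + [0xD5])   # preamble + SFD, the "start pattern"

    def parse_frame(stream: bytes):
        pos = stream.find(START)
        if pos < 0:
            return None                          # start pattern not seen yet
        hdr = stream[pos + len(START):]
        dst_mac, src_mac = hdr[0:6], hdr[6:12]   # fixed-size Layer 2 header fields
        ethertype = int.from_bytes(hdr[12:14], "big")
        payload = hdr[14:]                       # handed up to the next layer
        return dst_mac, src_mac, ethertype, payload

Once the start pattern is found, the header fields are simply read at fixed offsets, which is the "reads in as many bits as it needs" step described above.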
I am new to the DNP3 protocol and I have a question.
I understand that the data is stored in arrays.
But I did not understand: can the array indexes be non-contiguous?
In addition, is there any beginner-level source of information for the DNP3 protocol? (I have tried to read the DNP3 specification, but it was unclear to me.)
I would appreciate your answer!
Yes, data indexes may be non-contiguous.
To achieve "more efficient" transmission of data, section 5.1.2 of the IEEE Standard for Electric Power Systems Communications— Distributed Network Protocol (DNP3) states that "gaps in the point index range are permissible but should be avoided wherever possible."
The DNP3 standard does not specify how data is stored, but rather how it is transmitted. The indexes are part of an addressing scheme used to identify individual pieces of data in a device. A given piece of data, or point, is identified by its Group number and an Index. For example, "Group 30 : Index 9" is the 10th readable, analog value ("10th" because the lists are zero-based).
Another way to state the answer is that point addresses (meaning indexes within a Group) are not required to be contiguous.
Note that even if points in a device are indexed contiguously, the device could return data with non-contiguous indexes in a single transmission packet. For example, a packet of data from a device might contain the 2nd, 5th, and 12th readable analog inputs.
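To make the addressing idea concrete (this is not real DNP3 encoding, just an illustration with made-up values):

    # Each point is identified by (group, index); the indexes reported in one
    # response need not be contiguous.
    response_points = {
        (30, 1): 120.5,   # Group 30 (analog input), index 1  -> the 2nd analog value
        (30, 4): 0.0,     # index 4  -> the 5th analog value
        (30, 11): 66.2,   # index 11 -> the 12th analog value
    }

    for (group, index), value in sorted(response_points.items()):
        print(f"Group {group} : Index {index} = {value}")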
I don't have any specific recommendation for beginners information.
In bit stuffing, why is a non-information bit always added after five consecutive 1 bits? Is there any reason behind that?
Here is some information from tutorialspoint:
Bit-Stuffing: A pattern of bits of arbitrary length is stuffed in the message to differentiate from the delimiter.
The flag field is some fixed sequence of binary values, such as 01111110. The payload can also contain a similar pattern, so a machine on the network could get confused and misinterpret payload data as the flag field (indicating the end of the frame). To avoid this, extra bits are stuffed into the payload (specifically at points where the payload starts to look like the flag) to differentiate it from the flag. Since the 01111110 flag contains a run of six 1s, the sender inserts a 0 after every five consecutive 1s in the payload; the receiver then never sees six 1s in a row except in a genuine flag, which is why the stuffing happens after five bits.
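A minimal sketch of HDLC-style bit stuffing in Python (bits represented as a string purely for readability):

    FLAG = "01111110"

    def bit_stuff(payload_bits: str) -> str:
        """Insert a 0 after every run of five consecutive 1s in the payload."""
        out, run = [], 0
        for b in payload_bits:
            out.append(b)
            run = run + 1 if b == "1" else 0
            if run == 5:
                out.append("0")   # stuffed bit, removed again by the receiver
                run = 0
        return "".join(out)

    def bit_unstuff(stuffed_bits: str) -> str:
        """Drop the 0 that follows every run of five consecutive 1s."""
        out, run, skip = [], 0, False
        for b in stuffed_bits:
            if skip:
                skip = False      # this is the stuffed 0: discard it
                run = 0
                continue
            out.append(b)
            run = run + 1 if b == "1" else 0
            if run == 5:
                skip = True
                run = 0
        return "".join(out)

    data = "011111101111110"               # looks dangerously like the flag
    assert FLAG not in bit_stuff(data)     # stuffing prevents a false end-of-frame
    assert bit_unstuff(bit_stuff(data)) == data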
Background: I've spent a while working with a variety of device interfaces and have seen a lot of protocols, many serial and UDP in which data integrity is handled at the application protocol level. I've been seeking to improve my receive routine handling of protocols in general, and considering the "ideal" design of a protocol.
My question is: is there any protocol framing scheme out there that can definitively identify corrupt data in all cases? For example, consider the standard framing scheme of many protocols:
Field: Length in bytes
<SOH>: 1
<other framing information>: arbitrary, but fixed for a given protocol
<length>: 1 or 2
<data payload etc.>: based on length field (above)
<checksum/CRC>: 1 or 2
<ETX>: 1
For the vast majority of cases, this works fine. When you receive some data, you search for the SOH (or whatever your start byte sequence is), move forward a fixed number of bytes to your length field, and then move that number of bytes (plus or minus some fixed offset) to the end of the packet to your CRC, and if that checks out you know you have a valid packet. If you don't have enough bytes in your input buffer to find an SOH or to have a CRC based on the length field, then you wait until you receive enough to check the CRC. Disregarding CRC collisions (not much we can do about that), this guarantees that your packet is well formed and uncorrupted.
However, if the length field itself is corrupt and has a high value (which I'm running into), then you can't check the (corrupt) packet's CRC until you fill up your input buffer with enough bytes to meet the corrupt length field's requirement.
So is there a deterministic way to get around this, either in the receive handler or in the protocol design itself? I can set a maximum packet length or a timeout to flush my receive buffer in the receive handler, which should solve the problem on a practical level, but I'm still wondering if there's a "pure" theoretical solution that works for the general case and doesn't require setting implementation-specific maximum lengths or timeouts.
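To make the kind of receive handler I mean concrete, here is a rough sketch (Python; the 1-byte SOH, 2-byte big-endian length, 2-byte checksum and 1-byte ETX sizes are just example choices, and the checksum is a placeholder rather than a real CRC):

    SOH, ETX = 0x01, 0x03

    def checksum(data: bytes) -> int:
        return sum(data) & 0xFFFF          # placeholder; a real protocol would use a CRC

    def try_parse(buf: bytearray):
        """Return (payload, bytes_to_consume); payload is None while waiting or resyncing."""
        start = buf.find(bytes([SOH]))
        if start < 0:
            return None, len(buf)                      # no SOH at all: discard everything
        if len(buf) < start + 3:
            return None, start                         # wait for the length field
        length = int.from_bytes(buf[start + 1:start + 3], "big")
        end = start + 3 + length + 2 + 1               # payload + checksum + ETX
        if len(buf) < end:
            return None, start                         # wait -- a corrupt length stalls us here
        payload = bytes(buf[start + 3:start + 3 + length])
        rx_sum = int.from_bytes(buf[start + 3 + length:start + 3 + length + 2], "big")
        if buf[end - 1] == ETX and rx_sum == checksum(payload):
            return payload, end                        # well-formed, checksum OK
        return None, start + 1                         # bad frame: resync just past this SOH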
Thanks!
The reason why all protocols I know of, including those handling "streaming" data, chop up the data stream into smaller transmission units, each with its own checks on board, is exactly to avoid the problems you describe. The fundamental flaw in your protocol design is probably that the blocks are too big.
The accepted answer of this SO question contains a good explanation and a link to a very interesting (but rather heavy on math) paper about this subject.
So in short, you should stick to smaller transmission units, not only for practical programming-related reasons but also because of the message length's role in determining the protection offered by your CRC.
One way would be to encode the length parameter so that corruption of it can be detected immediately, saving you from reading in a large buffer just to check the CRC.
For example, the XModem protocol embeds an 8-bit packet number followed by its one's complement.
It could mean doubling the size of your length field, but it's an option.
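A minimal sketch of that idea, applied to a length byte instead of XModem's packet number (field names are mine, not from any real protocol):

    def encode_length(length: int) -> bytes:
        """Send the length together with its one's complement."""
        return bytes([length & 0xFF, (~length) & 0xFF])

    def decode_length(pair: bytes):
        """Return the length, or None if the pair is inconsistent (likely corruption)."""
        length, comp = pair[0], pair[1]
        if (length ^ comp) != 0xFF:
            return None   # don't trust it: resynchronise instead of buffering a huge frame
        return length

    assert decode_length(encode_length(200)) == 200
    assert decode_length(bytes([200, 0xFF])) is None   # corrupted complement is caught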
I am a newbie to serial port analysis, and I would appreciate some help on this. My specific question is...
If I have raw data from a serial port analyzer program, how can I locate measures like temperature, pressure, energy, etc.?
What should I look for in the raw data that will help me identify these units of measure?
What is the best way to extract the relevant data from this raw data?
I would be very grateful if you can provide me any help with respect to this. I am unable to figure out how to do this.
Thanks a lot.
The best way that I know of to do this is to find the "reset" identifier, also called the "End of Stream" identifier or sequence. I am assuming that the data is a continuous flow not a one-time transmission.
If the data is continuously cycling, you need to find where the transmission begins (or ends) and then start metering your capture from there. Most devices have an associated manual or documentation that gives you the end sequence (or, optionally, the start sequence) and the method by which they identify their data.
For instance, the device may end a message by sending 4 all-zero bytes in a row, then begin again by sending one byte that identifies the sensor and another two bytes with the data, followed by the next sensor, and so on.
You would then watch the stream for 4 zero bytes, and then start capturing 3 bytes at a time (one for the sensor and two for the data) until you see 4 zero bytes in a row again.
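As a concrete illustration of that last example in Python (the four-zero-byte end sequence, the one-byte sensor ID and the two data bytes are all assumptions taken from the hypothetical device above, not from any real manual):

    END_OF_STREAM = b"\x00\x00\x00\x00"   # assumed end/"reset" sequence

    def parse_records(chunks):
        """Yield (sensor_id, value) pairs from a stream of captured byte chunks."""
        buf = bytearray()
        synced = False
        for chunk in chunks:
            buf += chunk
            if not synced:                              # first, find the end-of-stream marker
                idx = buf.find(END_OF_STREAM)
                if idx < 0:
                    continue
                del buf[:idx + len(END_OF_STREAM)]
                synced = True
            while len(buf) >= 4:
                if buf[:4] == END_OF_STREAM:            # the next cycle starts here
                    del buf[:4]
                    continue
                sensor_id = buf[0]                      # one identifying byte ...
                value = int.from_bytes(buf[1:3], "big") # ... plus two data bytes
                yield sensor_id, value
                del buf[:3]

    # e.g. feed it the raw capture from the analyzer:
    # for sid, val in parse_records([captured_bytes]):
    #     print(sid, val)

Mapping the sensor IDs and scaling the raw values to physical units (temperature, pressure, energy) still requires the device documentation, since that information is not in the byte stream itself.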