Decode SNMP PDUs - Where to Start? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Hello, this is my first ever question on here; I'm in need of a bit of guidance.
I'm working on a packet sniffer, mainly to decode SNMP PDUs, but I am not entirely sure where to go with it.
Simply put, my packet sniffer can extract information from packets, but I am interested in the data payload field. It is written in C++ and I am using Winsock.
How should I go about this? Are the SNMP fields encoded with the Basic Encoding Rules, or will I have to delve deeper into ASN.1?
I am only looking to decode the SNMP fields within the data payload into human-readable form; they are going to be dumped into a text file, so I will be looking at decoding OIDs as well. I am verifying everything as I go along with Wireshark, and using GETIF to query my SNMP node.
Any guidance is appreciated.
EDIT:
Thanks user1793963, very well explained. Sorry to all who have marked this as too broad.
To elaborate on my original question, could anyone explain the initial part of the PDU itself?
Example: my program outputs the hex values 30 82 00 A3 02 01 00, which is SEQUENCE (30), LENGTH (82) and two other values. This is from a GetRequest PDU.
The GetResponse PDU shows the values 30 81 B7 02 01 00: SEQUENCE, 81 as LENGTH and another value.
Could someone explain the values 00 A3 and B7? If it uses the simple TLV structure, what are these values representing? What I know is the start of the sequence (30) and the total PDU length (which I took to be 82 and 81), and I know 02 01 00 is INTEGER, 1 in LENGTH, and VERSION 0. However, I do not understand 00 A3 (GetRequest) and B7 (GetResponse). What do these values represent?
Many thanks.
I am also using Wireshark to check values; however, it does not show the start of the PDU sequence.
Update, 9 years later :-)
In 30 82 00 A3 and 30 81 B7, the 30 is the data type (SEQUENCE). Following it is the length field, which uses the long form whenever the length exceeds 127 bytes.
In the long form, the highest bit of the first length byte is set as a flag to let the recipient know that the length spans more than one byte; the lower 7 bits hold the number of length bytes that follow, and those bytes hold the length itself as a plain big-endian number. So 0x82 means "the length is in the next two bytes" (0x00 0xA3 = 163), and 0x81 means "the length is in the next single byte" (0xB7 = 183).
Any length over 127 must be encoded using the long form; lengths of 0 to 127 fit in a single byte with the top bit clear.
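A minimal sketch of that length rule in C++ (the function name and buffer handling are mine; no bounds checks or indefinite-length handling):

#include <cstddef>
#include <cstdint>
#include <cstdio>

// Decode a BER length field starting at buf[pos]; advances pos past it.
std::size_t decodeBerLength(const uint8_t* buf, std::size_t& pos)
{
    uint8_t first = buf[pos++];
    if ((first & 0x80) == 0)
        return first;                        // short form: length is 0..127
    std::size_t numBytes = first & 0x7F;     // long form: count of length bytes
    std::size_t length = 0;
    for (std::size_t i = 0; i < numBytes; ++i)
        length = (length << 8) | buf[pos++]; // the length itself is plain big-endian
    return length;
}

int main()
{
    const uint8_t pdu[] = {0x30, 0x82, 0x00, 0xA3}; // start of the GetRequest above
    std::size_t pos = 1;                            // skip the SEQUENCE type byte
    printf("length = %zu\n", decodeBerLength(pdu, pos)); // prints: length = 163
}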

SNMP packets are encoded in ASN.1, but it's a very simple protocol (at least for SNMP v1 and v2c; I do not have a lot of experience with v3). It uses a simple TLV structure: type, length, value. For example the bytes 0x04 0x06 0x70 0x75 0x62 0x6c 0x69 0x63 are a string (type 4) with length 6 and value "public". You can find a list of types in the BER specification and the SNMP RFCs.
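A minimal sketch of reading one such TLV in C++ (the struct and names are mine; it assumes short-form lengths under 128 and a well-formed buffer):

#include <cstdint>
#include <cstdio>
#include <string>

struct Tlv { uint8_t type; uint8_t length; const uint8_t* value; };

Tlv readTlv(const uint8_t* buf)
{
    return Tlv{ buf[0], buf[1], buf + 2 }; // type, length, then the value bytes
}

int main()
{
    const uint8_t pkt[] = {0x04, 0x06, 0x70, 0x75, 0x62, 0x6c, 0x69, 0x63};
    Tlv t = readTlv(pkt);
    std::string s(reinterpret_cast<const char*>(t.value), t.length);
    printf("type=%u length=%u value=\"%s\"\n", (unsigned)t.type, (unsigned)t.length, s.c_str());
    // prints: type=4 length=6 value="public"
}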
I find it useful to write out packets like this:
1. 0x30 0x34
2. 0x2 0x1 0x1
3. 0x4 0x6 0x70 0x75 0x62 0x6c 0x69 0x63
4. 0xa2 0x27
5. 0x2 0x4 0x1 0x2 0x3 0x4
6. 0x2 0x1 0x0
7. 0x2 0x1 0x0
8. 0x30 0x19
9. 0x30 0x17
10. 0x6 0x8 0x2b 0x6 0x1 0x2 0x1 0x1 0x2 0x0
11. 0x6 0xb 0x2b 0x6 0x1 0x4 0x1 0x85 0x22 0xd5 0xf 0x97 0x54
This is a response to a get request where I requested the OID 1.3.6.1.2.1.1.2.0 (sysObjectID).
1. A list (type 0x30) with a length of 52 bytes
2. The version: SNMPv2c (0 = v1, 1 = v2c)
3. The community string: "public" (note how this is sent in cleartext)
4. An SNMP get-response (type 0xa2) with length 39
5. The request ID, a 32-bit integer
6. The error code; 0 means no error
7. The error index
8. A list with length 25
9. A list with length 23
10. An OID with length 8: 1.3.6.1.2.1.1.2.0
11. An OID with length 11: 1.3.6.1.4.1.674.10895.3028
As you can see, integers and strings are easy, but OIDs are a bit trickier. First of all, the first two parts ("1.3") are represented as a single byte (0x2b): the first byte encodes 40·X + Y for an OID starting with X.Y, so 1.3 becomes 40·1 + 3 = 43 = 0x2b. They did this to make each message a few bytes shorter.
The second problem is representing numbers larger than 255. To do this, SNMP uses only the 7 least significant bits of each byte to store data; the most significant bit is a flag to signal that the data continues in the next byte. Numbers lower than 128 are stored in a single byte.
0x7f
= 0 111 1111
= 127
0x85 0x22
= 1 000 0101, 0 010 0010
= 000 0101 010 0010
= 674
0xc0 0x80 0x80 0x00
= 1 100 0000, 1 000 0000, 1 000 0000, 0 000 0000
= 100 0000 000 0000 000 0000 000 0000
= 0x8000000
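Here is that 7-bit rule as a small C++ sketch (names and error handling are mine), including the 40·X + Y trick for the first byte:

#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Decode one base-128 sub-identifier: 7 data bits per byte,
// top bit set on every byte except the last.
uint32_t decodeSubId(const uint8_t* buf, std::size_t& pos)
{
    uint32_t value = 0;
    uint8_t byte;
    do {
        byte = buf[pos++];
        value = (value << 7) | (byte & 0x7F);
    } while (byte & 0x80);
    return value;
}

std::vector<uint32_t> decodeOid(const uint8_t* buf, std::size_t len)
{
    std::vector<uint32_t> oid;
    std::size_t pos = 0;
    uint32_t first = decodeSubId(buf, pos);
    oid.push_back(first / 40); // 0x2b -> 1
    oid.push_back(first % 40); // 0x2b -> 3
    while (pos < len)
        oid.push_back(decodeSubId(buf, pos));
    return oid;
}

int main()
{
    const uint8_t raw[] = {0x2b, 0x6, 0x1, 0x4, 0x1, 0x85, 0x22, 0xd5, 0xf, 0x97, 0x54};
    for (uint32_t part : decodeOid(raw, sizeof raw))
        printf(".%u", part);
    printf("\n"); // prints: .1.3.6.1.4.1.674.10895.3028
}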
If the length of a TLV field is larger than 127, the long form is used for the length instead: the first length byte has its high bit set and its lower 7 bits give the number of length bytes that follow, as in the 30 82 00 A3 example from the update above.
RFC 1592 describes the structure of the messages; take a look at page 11 for a similar example.
I can also recommend using Wireshark to analyze packets; it does an excellent job of translating them into something readable.

Related

Assembly Language hex address

I'm just starting to learn assembly language, and we are working with hex addresses. Below is a question of ours. I'm not sure how it adds up though. I know the answer is 0x202C, but how did we get there? Can you help explain the processes step by step, in the most basic way possible to help me understand? Thank you!!
The following data segment starts at memory address 0x2000 (hexadecimal)
.data
printString BYTE "Assembly is fun",0
moreBytes BYTE 24 DUP(0)
dateIssued DWORD ?
dueDate DWORD ?
What is the hexadecimal address of dueDate?
You have three data definitions to add together:
printString is ASCII text followed by a zero byte. The string part is 15 bytes long, and with the terminating zero byte that makes 16. So the offset of the next data item is 0x2010 (16 decimal is 0x10 hex). printString starts at 0x2000, and the next item starts right after the last byte of printString, so you add its length to its offset to get the next offset.
moreBytes is 24 bytes long, because that's how DUP works: BYTE X DUP(Y) means "X bytes of value Y". So the offset of the next data item is 0x2028, as 24 decimal is 0x18 hex.
dateIssued is 4 bytes long, because that's the definition of a DWORD. So the next one is at 0x202C, since 0x28 + 4 = 0x2C (8 + 4 = 12, which is 0xC in hex notation).
Alternatively, you could add the three lengths together, getting 44. 44 in hex is 0x2C, so dueDate is at 0x2000 + 0x2C = 0x202C.
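The same sum written out as a quick check (C++ here; only the sizes come from the assignment):

#include <cstdio>

int main()
{
    unsigned base        = 0x2000;                              // .data starts here
    unsigned printString = (unsigned)sizeof("Assembly is fun"); // 15 chars + zero byte = 16
    unsigned moreBytes   = 24;                                  // 24 DUP(0)
    unsigned dateIssued  = 4;                                   // one DWORD
    printf("dueDate is at 0x%X\n", base + printString + moreBytes + dateIssued);
    // prints: dueDate is at 0x202C
}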

How to prove CRC can detect even number of isolated bit errors

A 1024-bit message is sent that contains 992 data bits and 32 CRC bits. The CRC is computed using the IEEE 802 standardized 32-degree CRC polynomial. For each of the following, explain whether the errors during message transmission will be detected by the receiver:
(a) There was a single-bit error.
(b) There were two isolated bit errors.
(c) There were 18 isolated bit errors.
(d) There were 47 isolated bit errors.
(e) There was a 24-bit long burst error.
(f) There was a 35-bit long burst error.
In the above question, can anyone explain option (c)?
This 41-bit codeword with weight 18 (expressed as six bytes in hexadecimal) can be exclusive-ored with any message, starting at any bit position, and leave the CRC-32 of that message unchanged:
2f 18 3b a0 70 01
In other words, if the 18 isolated bit errors happen to land exactly on the set bits of such a codeword, the receiver's CRC check still passes, so 18 isolated bit errors are not guaranteed to be detected.
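A quick empirical check of that claim (a sketch, not from the original answer; the CRC-32 below is the standard reflected bitwise implementation, and the message contents and XOR offset are arbitrary):

#include <cstddef>
#include <cstdint>
#include <cstdio>

// Standard reflected CRC-32 (polynomial 0xEDB88320, init and xorout 0xFFFFFFFF).
uint32_t crc32(const uint8_t* data, std::size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (std::size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int b = 0; b < 8; ++b)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

int main()
{
    uint8_t msg[128];
    for (std::size_t i = 0; i < sizeof msg; ++i)
        msg[i] = (uint8_t)(i * 37 + 5);  // arbitrary message content
    const uint8_t codeword[6] = {0x2f, 0x18, 0x3b, 0xa0, 0x70, 0x01};
    uint32_t before = crc32(msg, sizeof msg);
    for (int i = 0; i < 6; ++i)
        msg[40 + i] ^= codeword[i];      // inject the 18 bit errors (byte-aligned here)
    uint32_t after = crc32(msg, sizeof msg);
    printf("before=%08x after=%08x -> %s\n", before, after,
           before == after ? "errors undetected" : "errors detected");
}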

Serial point to point protocol but with 8 bytes instead of 16

I was looking at the answers to Simple serial point-to-point communication protocol, but they don't help me enough with my issue. I am trying to communicate data between a computer and an 8-bit microcontroller at first; eventually I want that one microcontroller to talk to about 40 others via wireless radio modules. Basically one is designated as a master and the rest are slaves.
speed is an issue
The issue at hand is speed, because every packet needs to make a round trip between the master and each slave at least 4 times a second.
Let's assume the baud rate for data is 9600 bps. That's 960 bytes a second.
If I used 16-byte packets then: 40 (slaves) times 16 (bytes) times 2 (ways) = 1280. Divide that by 960 and a full poll cycle takes about 1.3 seconds. Not good.
If I used 8-byte packets then: 40 (slaves) times 8 (bytes) times 2 (ways) = 640. Divide that by 960 and it takes about 2/3 of a second. It's so-so.
But the thing is I need to watch my baud rate, because too high a baud rate might mean missed data at larger distances. Still, you can see the speed difference between an 8-byte and a 16-byte packet, as the calculation below shows.
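The arithmetic above, written out (a sketch; 960 bytes/s assumes 10 bits per byte on the wire, i.e. 8N1 framing):

#include <cstdio>

int main()
{
    const int slaves = 40, bytesPerSec = 9600 / 10; // 8N1: 10 bits per byte on the wire
    const int packetSizes[] = {16, 8};
    for (int pktBytes : packetSizes) {
        double cycle = (double)(slaves * pktBytes * 2) / bytesPerSec;
        printf("%2d-byte packets: %.2f s per full poll cycle\n", pktBytes, cycle);
    }
    // prints: 16-byte packets: 1.33 s ... / 8-byte packets: 0.67 s ...
}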
packet format idea
In my design I may need to transmit a number in the low millions, which takes 24 bits and fits my idea. Here's my initial packet layout:
Byte 1: Recipient address 0-255
Byte 2: Sender address 0-255
Byte 3: Command
Byte 4-6: Data
Byte 7-8: 16-bit Fletcher checksum of the above data
I don't mind if the above format is adjusted, just as long as I have at least 6 bits to identify the sender and receiver (since I'll only deal with 40 units), and the data with command included is at least 4 bytes total.
How should I modify my packet format so that even a device that turned on in the middle of a transmission can get in sync with the next packet? Is there a way without stripping a bit from each data byte?
Rely on the checksum! My packet would consist of:
Recipient's address (0..40) XORed with 0x55
Sender's address (0..40) XORed with 0xAA
Command Byte
Data Byte 0
Data Byte 1
Data Byte 2
CRC8 sum, as suggested by Vroomfondel
Every receiver should keep a sliding window of the last seven received bytes. Whenever a byte is shifted in, the window should be checked for validity:
Are the two addresses in the valid range?
Is it a valid command?
Is the CRC correct?
Especially the last check should safely reject packets that the receiver hopped onto off-sync.
If you have fewer than 32 command codes, you may go down to six bytes per packet: 40 [senders] times 40 [receivers] times 32 [commands] evaluates to 51200, which fits into 16 bits instead of 24 (see the sketch below).
Don't forget to turn off the parity bit!
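The 16-bit packing could look like this (a sketch; the field order and names are my choice):

#include <cstdint>

// Pack sender (0..39), receiver (0..39) and command (0..31) into 16 bits:
// 40 * 40 * 32 = 51200 combinations, which fits in a uint16_t.
uint16_t Pack(uint8_t sender, uint8_t receiver, uint8_t cmd)
{
    return (uint16_t)(((sender * 40u) + receiver) * 32u + cmd);
}

void Unpack(uint16_t v, uint8_t& sender, uint8_t& receiver, uint8_t& cmd)
{
    cmd      = (uint8_t)(v % 32);  v /= 32;
    receiver = (uint8_t)(v % 40);  v /= 40;
    sender   = (uint8_t)v;
}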
Update 2017-12-09: here is a receiving function:
#include <stdint.h>
typedef uint8_t U8;

#define MAX_CMD 31 /* highest valid command code -- a placeholder, pick your own */

void ByteReceived(U8 Byte)
{
    static U8 Buf[7]; // bytes received so far
    static U8 BufBC = 0;
    Buf[BufBC++] = Byte;
    if (BufBC < 7) return; // msg incomplete

    /*** Seven byte message received ***/
    // Check addresses
    U8 Adr;
    Adr = Buf[0] ^ 0x55; if (Adr >= 40) goto Fail;
    Adr = Buf[1] ^ 0xAA; if (Adr >= 40) goto Fail;
    if (Buf[2] > MAX_CMD) goto Fail;           // check Cmd
    if (CalcCRC8(Buf, 6) != Buf[6]) goto Fail; // CRC over the first six bytes
    Evaluate(...); // hand the valid packet to the application (left open in the original)
    BufBC = 0;     // empty Buf[]
    return;

Fail:
    // Seven byte msg invalid -> chop off the first byte, could use memmove()
    Buf[0] = Buf[1];
    Buf[1] = Buf[2];
    Buf[2] = Buf[3];
    Buf[3] = Buf[4];
    Buf[4] = Buf[5];
    Buf[5] = Buf[6];
    BufBC = 6;
}
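CalcCRC8 above is left to the reader; here is a minimal bitwise version (the polynomial 0x07 and the zero initial value are my assumptions, since the answer doesn't pin down a CRC-8 variant):

#include <cstdint>
typedef uint8_t U8;

U8 CalcCRC8(const U8* data, U8 len)
{
    U8 crc = 0x00;                  // initial value: an assumption
    for (U8 i = 0; i < len; ++i) {
        crc ^= data[i];
        for (U8 b = 0; b < 8; ++b)  // one bit at a time, MSB first
            crc = (crc & 0x80) ? (U8)((crc << 1) ^ 0x07) : (U8)(crc << 1);
    }
    return crc;
}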

In which cases, the TCP checksum will not detect an error?

I have a question related to computer networks: in which of the following cases will the TCP checksum not detect an error?
1) a single bit flip occurs in the 10th byte (i.e., one bit in the 10th byte goes from 0 to 1, or from 1 to 0)
2) the first byte of the payload that was originally 00000001 becomes 00000000 and the third byte of the payload that was originally 00000000 becomes 00000001
3) the third bit of the first byte of the payload flips from 1 to 0, AND the third bit of the second byte of the payload flips from 0 to 1
4) the first byte of the payload that was originally 00000001 becomes 00000000, and the second byte of the payload that was originally 00000000 becomes 00000001
RFC 793 says:
The checksum field is the 16 bit one's complement of the one's
complement sum of all 16 bit words in the header and text.
1) A single bit flip changes the checksum.
2) As the sum is over 16-bit words, this leaves the checksum unchanged: the first and third bytes are both high-order bytes of their 16-bit words, so one word decreases by 256 while the other increases by 256, and the two changes cancel in the one's complement sum.
3) The two flipped bits sit in different halves of the same 16-bit word, so they do not cancel, and the checksum changes.
4) Same as 3).
Only the second case will leave the checksum unchanged.
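A small demonstration of cases 2 and 4 (a sketch; the 4-byte payload is the one from the question, padded to whole 16-bit words):

#include <cstddef>
#include <cstdint>
#include <cstdio>

// 16-bit one's complement sum over big-endian byte pairs, as in RFC 793.
uint16_t onesComplementSum(const uint8_t* p, std::size_t len)
{
    uint32_t sum = 0;
    for (std::size_t i = 0; i + 1 < len; i += 2)
        sum += (uint32_t)((p[i] << 8) | p[i + 1]);
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16); // fold carries back in
    return (uint16_t)sum;
}

int main()
{
    const uint8_t original[4] = {0x01, 0x00, 0x00, 0x00};
    const uint8_t case2[4]    = {0x00, 0x00, 0x01, 0x00}; // bytes 1 and 3 trade values
    const uint8_t case4[4]    = {0x00, 0x01, 0x00, 0x00}; // change stays inside one word
    printf("original: %04x\n", onesComplementSum(original, 4)); // 0100
    printf("case 2:   %04x\n", onesComplementSum(case2, 4));    // 0100 -- unchanged
    printf("case 4:   %04x\n", onesComplementSum(case4, 4));    // 0001 -- changed
}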

Can someone explain hex offsets to me?

I downloaded Hex Workshop, and I was told to read a .dbc file.
It should contain 28,315 if you read
offset 0x04 and 0x05
I am unsure how to do this. What does 0x04 mean?
0x04 is hex for 4 (the 0x is just a common prefix convention for base 16 representation of numbers - since many people think in decimal), and that would be the fourth byte (since they are saying offset, they probably count the first byte as byte 0, so offset 0x04 would be the 5th byte).
I guess they are saying that the 4th and 5th bytes together hold 28,315, but did they say whether this is little-endian or big-endian?
28315 (decimal) is 0x6E9B in hexadecimal notation, probably in the file in order 0x9B 0x6E if it's little-endian.
Note: little-endian and big-endian refer to the order in which bytes are written. Humans typically write decimal and hexadecimal notation in a big-endian way, so:
256 would be written as 0x0100 (digits on the left are the biggest scale)
But that takes two bytes and little-endian systems will write the low byte first: 0x00 0x01. Big-endian systems will write the high-byte first: 0x01 0x00.
Typically Intel systems are little-endian and other systems vary.
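In code (a sketch; the byte values are just the ones from the 28,315 example):

#include <cstdint>
#include <cstdio>

int main()
{
    uint8_t b4 = 0x9B, b5 = 0x6E;                 // bytes at offsets 4 and 5
    uint16_t little = (uint16_t)(b4 | (b5 << 8)); // little-endian: low byte first
    uint16_t big    = (uint16_t)((b4 << 8) | b5); // big-endian: high byte first
    printf("little-endian: %u, big-endian: %u\n", little, big); // 28315, 39790
}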
Think of a binary file as a linear array of bytes.
0x04 would be the 5th (in a 0 based array) element in the array, and 0x05 would be the 6th.
The two values at 0x04 and 0x05 can be combined to create the number 28,315.
Since the value you are reading is 16 bits, you need to bit-shift one value over and then OR them together. For example, if you were manipulating the file in C#, you would use something like this:
int value = (ByteArray[4] << 8) | ByteArray[5]; // big-endian; for little-endian: ByteArray[4] | (ByteArray[5] << 8)
Hopefully this helps explain how hex addresses work.
It's the 4th and the 5th two-digit hex code you're viewing...
1 2 3 4 5 6
01 AB 11 7B FF 5A
So, the 0x04 and 0x05 is "7B" and "FF".
Assuming that's what they're saying, in your case 0x7B 0xFF should be equal to your desired value.
HTH
0x04 in hex is 4 in decimal. 0x10 in hex is 16 in decimal. calc.exe can convert between hex and decimal for you.
Offset 4 means 4 bytes from the start of the file. Offset 0 is the first byte in the file.
Look at bytes 4 and 5; they should have the values 0x6E 0x9B (or 0x9B 0x6E) depending on your endianness.
Start with a primer on hexadecimal notation. Once you learn how to read hexadecimal values, you'll be in much better shape to actually solve your problem.
