Proper UDP csum using Linux kernel functions - networking

I have an issue with calculating the checksum for an IPv6 packet in a Linux kernel module.
I tried the following approach:
struct in6_addr LINK_LOCAL_MULTICAST = {{{ 0xff,02,0,0,0,0,0,0,0,0,0,0,0,1,0,2 }}};
struct in6_addr LINK_LOCAL_SRC = {{{ 0xfe,0x80,0,0,0,0,0,0,0x0a,0x00,0x27,0xff,0xfe,0x5b,0x58,0xcf }}};
udph->len = htons(sizeof(struct udphdr)+sizeof(struct udp_payload));
__wsum csum = csum_partial((char*) udph, udph->len, 0);
udph->check = csum_ipv6_magic(&LINK_LOCAL_SRC, &LINK_LOCAL_MULTICAST, udph->len, IPPROTO_UDP,csum);
But the checksum seems to be incorrect. Could you please suggest what I have to change to get the correct checksum?
EDIT 1:
Please find the packet capture in Wireshark. I changed the offload settings (tx, rx), but the checksum is still incorrect. I am afraid the value in the checksum field itself is wrong.
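For cross-checking, the UDP checksum over IPv6 is defined (RFC 768 together with the IPv6 pseudo-header from RFC 2460) as the one's-complement sum over a pseudo-header (source address, destination address, 32-bit upper-layer length, next-header value) followed by the UDP header and payload. Below is a minimal Python sketch of that calculation, intended only to verify the value the module produces against what Wireshark expects; it is not kernel code. Note also that csum_partial() and csum_ipv6_magic() take their length argument in host byte order, so passing the htons()-converted udph->len is a likely source of the mismatch.

import struct

def udp6_checksum(src, dst, udp_segment):
    # src, dst: 16-byte IPv6 addresses (bytes); udp_segment: UDP header plus
    # payload with the checksum field set to zero.
    pseudo = src + dst + struct.pack("!I3xB", len(udp_segment), 17)  # 17 = IPPROTO_UDP
    data = pseudo + udp_segment
    if len(data) % 2:                      # pad to an even number of bytes
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                     # fold carries into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    checksum = ~total & 0xFFFF
    return checksum if checksum else 0xFFFF  # an all-zero checksum is transmitted as 0xFFFF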

Related

Decode register to 32-bit float (big endian) in Python on a Raspberry Pi 3B with the pymodbus 2.5.3 library

I'm trying to get the data stream of a sensor transmitter that uses the Modbus RTU communication protocol on my Raspberry Pi 3B. I'm able to get the data with the pymodbus 2.5.3 library.
For this I use the following code:
from pymodbus.client.sync import ModbusSerialClient  # Import the pymodbus library part for synchronous master (= client)

client = ModbusSerialClient(
    method='rtu',        # Modbus mode = RTU = via USB & RS485
    port='/dev/ttyUSB0', # Connected over ttyUSB0, not AMA0
    baudrate=19200,      # Baudrate was changed from 38400 to 19200
    timeout=3,
    parity='N',          # Parity = None
    stopbits=2,          # Stop bits were changed from 1 to 2
    bytesize=8
)
if client.connect():  # Trying to connect to Modbus server/slave
    # Reading from a holding register
    res = client.read_holding_registers(address=100, count=8, unit=1)  # Start register = 100, registers to read = 8, unit = slave ID 1
    if not res.isError():  # If registers don't show an error
        print(res.registers)  # Print content of registers
    else:
        print(res)  # Print error message, for meaning look at (insert GitHub)
else:  # If not able to connect, do this
    print('Cannot connect to the Transmitter M80 SM and Sensor InPro 5000i.')
    print('Please check the following things:')
    print('Does the RS485-to-USB Adapter have power? Which LEDs are active?')
    print('Are the cables connected correctly?')
And I get the following output:
[15872, 17996, 16828, 15728, 16283, 45436, 16355, 63231]
With the help of the Modbus Poll and Modbus Slave programs, I know that those results should decode to:
[0.125268, --, 23.53, --, 1.21094, --, 1.77344, --]
To get to the right results, I tried the .decode() call that the pymodbus GitHub page suggests:
res.decode(word_order = little, byte_order = little, formatters = float64)
[I know that those aren't the options I need, but I copied the suggested GitHub code to check whether it works.]
After putting that code segment into my script, the changed part looks like this:
if not res.isError():  # If registers don't show an error
    res.decode(word_order = little, byte_order = little, formatters = float64)
    print(res.registers)  # Print content of registers
else:
    print(res)  # Print error message
When I run this code, I get the following error, which traces back to the decoding line:
NameError: name 'little' is not defined
After this, I also imported the relevant pymodbus submodule, but it showed the same error.
How can I decode my incoming data?
You can use BinaryPayloadDecoder to help decode your payload. Here is a simplified example; change Endian.Big and Endian.Little if needed.
from pymodbus.payload import BinaryPayloadDecoder  # decoder for register payloads
from pymodbus.constants import Endian              # Endian.Big / Endian.Little

if client.connect():  # Trying to connect to Modbus server/slave
    # Reading from a holding register
    res = client.read_holding_registers(address=100, count=8, unit=1)  # Start register = 100, registers to read = 8, unit = slave ID 1
    if not res.isError():  # If registers don't show an error
        print(res.registers)  # Print raw register contents
        # ====== added code start ======
        # Byte order and word order below match the sample values in the question.
        decoder = BinaryPayloadDecoder.fromRegisters(res.registers, Endian.Big, wordorder=Endian.Big)
        first_reading = decoder.decode_32bit_float()   # e.g. 0.125268 from the first register pair
        second_reading = decoder.decode_32bit_float()  # e.g. 23.53 from the next pair
        # ====== added code end ======
    else:
        print(res)  # Print error message
Remember to keep the two imports (from pymodbus.payload import BinaryPayloadDecoder and from pymodbus.constants import Endian) at the top of your script, and add the necessary exception handlers in your final code.
Reference document: https://pymodbus.readthedocs.io/en/latest/source/library/pymodbus.html#pymodbus.payload.BinaryPayloadDecoder
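For a quick sanity check without pymodbus, the same conversion can be reproduced with the standard library alone: pack two consecutive registers as big-endian 16-bit words (high word first) and reinterpret the four bytes as a big-endian IEEE 754 float. This is only a sketch to confirm the byte and word order against the sample registers from the question; the remaining register pairs decode to values close to, but not exactly, the ones listed, presumably because the captures were taken at different moments.

import struct

def registers_to_float(high_word, low_word):
    # Two 16-bit holding registers -> one 32-bit big-endian IEEE 754 float.
    raw = struct.pack('>HH', high_word, low_word)
    return struct.unpack('>f', raw)[0]

print(registers_to_float(15872, 17996))  # ~0.125268
print(registers_to_float(16828, 15728))  # ~23.53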

how to extract ip address from QueueDiscItem in ns3?

I'm new to ns-3, and I was trying to extract the IP address of a packet from a QueueDiscItem.
When I have
Ptr<QueueDiscItem> item initialized and call:
item->Print(std::cout);
the output I get is
"tos 0x0 DSCP Default ECN Not-ECT ttl 63 id 265 protocol 6 offset (bytes) 0 flags [none] length: 76 10.1.4.2 > 10.1.2.1 0x7fffc67ec880 Dst addr 02-06-ff:ff:ff:ff:ff:ff proto 2048 txq"
but when I call:
Ipv4Header header;
item->GetPacket()->PeekHeader(header);
header.Print(std::cout);
the output I get is
"tos 0x0 DSCP Default ECN Not-ECT ttl 0 id 0 protocol 0 offset (bytes) 0 flags [none] length: 20 102.102.102.102 > 102.102.102.102"
How do I get the header data?
According to the list of TraceSources, the TraceSources associated with QueueDiscItems are for Queues. I'm guessing you were trying to attach to one of those TraceSources.
A QueueDiscItem encapsulates several things: a Ptr<Packet>, a MAC address, and several more things. Since you are using IPv4, the QueueDiscItem is actually an Ipv4QueueDiscItem (the latter is a subclass of the former). So, let's start by casting the QueueDiscItem to an Ipv4QueueDiscItem by
Ptr<const Ipv4QueueDiscItem> ipItem = DynamicCast<const Ipv4QueueDiscItem>(item);
Next, you need to know that at this point in the simulation, the Ipv4Header has not been added to the Ptr<Packet> yet. This is probably a design choice (that I don't understand). So, how can we get this information? Well, the Ipv4QueueDiscItem encapsulates the Ipv4Header, and at some point before passing the Ptr<Packet> to L2, the header is added to the packet. This Header can be retrieved by
const Ipv4Header ipHeader = ipItem->GetHeader();
So, now we have the Ipv4Header of the packet you're interested in. Now, we can safely get the address from the Ipv4QueueDiscItem by
ipHeader.GetSource();
ipHeader.GetDestination();
In summary, your TraceSource function should look something like this:
void
EnqueueTrace (Ptr<const QueueDiscItem> item) {
  Ptr<const Ipv4QueueDiscItem> ipItem = DynamicCast<const Ipv4QueueDiscItem>(item);
  const Ipv4Header ipHeader = ipItem->GetHeader();
  NS_LOG_UNCOND("Packet received at " << Simulator::Now() << " going from " << ipHeader.GetSource() << " to " << ipHeader.GetDestination());
}
Why does item->Print(std::cout); work?
All of the above makes sense, but why does
item->Print(std::cout);
print the correct addresses? First, it is important to realize that here Print() is a function of the QueueDiscItem, not the Packet. If we go to the source of this function, we find that Print() just prints the Header if it has already been added.

Arduino Ethernet shield not receiving UDP packets correctly

I'm using an Adafruit Ethernet FeatherWing plugged into an Adafruit Feather 328P, and I want to send and receive UDP packets from a Python application. I'm using the stock Arduino UDP code just to see what I'm sending. Here's my Python code:
def write_Arduino(self):
    sock_readresp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock_readresp.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock_readresp.bind((self.config["DEFAULT"]["THIS_IP_ADDRESS"], int(self.config["DEFAULT"]["RECEIVE_PORT"])))
    sock_readresp.settimeout(.2)
    MESSAGE = struct.pack("30c", b'0', b'1', b'2', b'3', b'4', b'5', b'6', b'7', b'8', b'9',
                          b'0', b'1', b'2', b'3', b'4', b'5', b'6', b'7', b'8', b'9',
                          b'0', b'1', b'2', b'3', b'4', b'5', b'6', b'7', b'8', b'9')
    print("Message is {}".format(MESSAGE))
    sock_read = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock_read.setblocking(0)
    sock_read.bind((self.config["DEFAULT"]["THIS_IP_ADDRESS"], 0))
    sock_read.sendto(MESSAGE, (self.config["DEFAULT"]["ARDUINO_IP_ADDRESS"], int(self.config["DEFAULT"]["SEND_PORT"])))
    sock_read.close()
My settings are:
THIS_IP_ADDRESS = 192.168.121.1
ARDUINO_IP_ADDRESS = 192.168.121.2
SEND_PORT = 8888
RECEIVE_PORT = 32001
and I've updated the Arduino code to reflect that. When I send this packet, I can confirm through Wireshark on my PC that I seem to be sending exactly what I think I am: a string of "012345678901234567890123456789" (Wireshark shows the ASCII characters in hex, as seen here). However, receiving it on the Arduino looks like this:
Received packet of size 30
From 192.168.121.1, port 64143
Contents:
012345678901234567890123D6789
The 25th and 26th bytes always show up like that, and I'm missing the actual data there. What could be going on here?

Fake access point not showing up as a wireless network

I'm trying to write a fake access point script in Ruby; the script is below:
require 'packetgen'

def fake_ap
  print 'Making a fake ap...'
  while true
    bssid = 'aa:aa:aa:aa:aa:aa'
    iface = 'mon0'
    ssid = 'NoWifi'
    broadcast = 'ff:ff:ff:ff:ff:ff'
    pkt = PacketGen.gen('RadioTap')
    pkt.add('Dot11::Management', mac1: broadcast, mac2: bssid, mac3: bssid)
    pkt.add('Dot11::Beacon', cap: '0x1114')
    pkt.dot11_beacon.add_element(type: 'SSID', value: ssid)
    pkt.dot11_beacon.add_element(type: 'Rates', value: "\x82\x84\x8b\x96\x24\x30\x48\x6c")
    pkt.dot11_beacon.add_element(type: 'DSset', value: "\x06")
    pkt.dot11_beacon.add_element(type: 'TIM', value: "\x00\x01 \0x00\0x00")
    pkt.calc
    pkt.to_w(iface)
  end
end

fake_ap
Hexdump of packet
The program is supposed to send beacon frames. I ran the program (with my wireless card in monitor mode), but it doesn't show up as an access point. Is there a problem with my code, or is it something else? The docs for the packetgen library are here. Thanks!

iOS UDP broadcast vs. PHP UDP broadcast

I'm trying to send data via UDP to the network. I've got some PHP code running on my local machine which works:
#!/usr/bin/php -q
<?php
$socket = stream_socket_client('udp://225.0.0.0:50000');
for($i=0;$i<strlen($argv[1]);$i++) $b.="\0\0\0".$argv[1][$i];
fwrite($socket,$b,strlen($argv[1])*4);
fclose($socket);
?>
This gives me the following output in tcpdump:
18:53:24.504447 IP 10.0.1.2.52919 > 225.0.0.0.50000: UDP, length 36
I'm trying to get to the same result on a remote iOS device with the following code:
- (void)broadcast:(NSString *)dx {
    NSData *data = [dx dataUsingEncoding:NSUTF8StringEncoding];
    NSLog(@"Broadcasting data: %@", dx);
    int fd = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP);
    struct sockaddr_in addr4client;
    memset(&addr4client, 0, sizeof(addr4client));
    addr4client.sin_len = sizeof(addr4client);
    addr4client.sin_family = AF_INET;
    addr4client.sin_port = htons(PORT);
    addr4client.sin_addr.s_addr = htonl(INADDR_BROADCAST);
    int yes = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_BROADCAST, (void *)&yes, sizeof(yes)) == -1) {
        NSLog([NSString stringWithFormat:@"Failure to set broadcast! : %d", errno]);
    }
    char *toSend = (char *)[data bytes];
    if (sendto(fd, toSend, [data length], 0, (struct sockaddr *)&addr4client, sizeof(addr4client)) == -1) {
        NSLog([NSString stringWithFormat:@"Failure to send! : %d", errno]);
    }
    close(fd);
}
Which gives me the following output in tcpdump:
19:01:22.776192 IP 10.0.1.4.60643 > broadcasthost.50000: UDP, length 9
It looks basically OK, but it doesn't arrive in Quartz Composer for some reason. I guess there should be an IP address or something instead of 'broadcasthost'.
Any ideas?
The problem was not in the implementation of the broadcaster, but in the format of the string. To work with Quartz Composer, every character needs to be preceded by a backslash-zero combination ("\0\0\0"), so "abc" has to be formatted and sent as "\0\0\0a\0\0\0b\0\0\0c".
See also Celso Martinho's blog article: Leopard’s Quartz Composer and Network events.
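Put differently, the expected payload is the string expanded so that every ASCII character is preceded by three NUL bytes, which is exactly what the PHP loop above builds and is equivalent to a UTF-32 big-endian encoding for ASCII text. A minimal Python sketch, only to illustrate the byte layout:

def quartz_payload(text):
    # Expand each character to 4 bytes: three NUL bytes followed by the character,
    # i.e. the string encoded as UTF-32 big-endian (without a byte-order mark).
    return text.encode('utf-32-be')

print(quartz_payload('abc'))  # b'\x00\x00\x00a\x00\x00\x00b\x00\x00\x00c'

On the iOS side, the equivalent change would presumably be to encode the NSString with a 32-bit big-endian encoding (for example NSUTF32BigEndianStringEncoding) instead of NSUTF8StringEncoding before sending, though that is an assumption based on the answer above rather than something verified against Quartz Composer.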
I suggest using AsyncSocket (Google it; it's on Google Code): very well tested Objective-C code that runs on iOS.
That way you can send data really easily using an NSData object; AsyncSocket manages the hard part for you.
If that isn't an option for you, you should use CFSocket. What you are doing is reimplementing code that has already been written for you: CFSocket.
