How to handle the leading 4-byte protocol header of tun FD packets when reading the tun FD directly in C instead of through packetFlow.readPackets in Swift?

The iOS SimpleTunnel VPN demo app reads the virtual tun interface through packetFlow.readPackets(). In my case, Swift hands control over to C to move data packets between the tunnel and the virtual tun interface. The tun file descriptor is passed from Swift to C through this conversion:
let tunFd = self.packetFlow.value(forKeyPath: "socket.fileDescriptor") as! Int32
as explained by spensaurus in this post, and also Dinesh's post.
But I'm stuck on the issue that there is no internet connection, even though the VPN shows as activated on the device and the logs show that the C code is moving data packets. Following the suggestion to strip the leading 4 bytes from packets read from the tun FD doesn't seem to help.
I realize that Apple support has made it clear that the above conversion and this way of using the tun FD are not supported. I still want to take this approach because it seems much easier than porting a whole C library over, which could take a lot more time.
So my questions are:
1. Should the 4 leading bytes be stripped from tun packets all the time, or only under certain conditions?
2. Is it required to prepend a 4-byte protocol header to packets written into the tun FD?
3. Anything else you are doing and I might have missed?
I'd very much appreciate any insight.

Through trial and error, I found that stripping the 4 leading bytes after reading from the tun FD, and adding them back before writing to it, works.
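For reference, here is a minimal C sketch of that strip/prepend step. It assumes the 4-byte header is the protocol family (AF_INET or AF_INET6) in network byte order, which is how other utun-based tools commonly treat it; MAX_PKT and the function names are just illustrative.

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <unistd.h>

    #define MAX_PKT 2048

    /* Read one packet from the tun fd, dropping the 4-byte protocol
     * family header that the utun device prepends. Returns the length
     * of the bare IP packet, or -1 on error. */
    static ssize_t tun_read_packet(int tun_fd, uint8_t *pkt, size_t cap)
    {
        uint8_t buf[4 + MAX_PKT];
        ssize_t n = read(tun_fd, buf, sizeof(buf));
        if (n <= 4 || (size_t)(n - 4) > cap)
            return -1;
        n -= 4;                              /* strip the leading header */
        memcpy(pkt, buf + 4, (size_t)n);
        return n;
    }

    /* Write one IP packet to the tun fd, prepending the protocol family
     * chosen from the IP version nibble of the outgoing packet. */
    static ssize_t tun_write_packet(int tun_fd, const uint8_t *pkt, size_t len)
    {
        uint32_t family = htonl(((pkt[0] >> 4) == 6) ? AF_INET6 : AF_INET);
        struct iovec iov[2] = {
            { .iov_base = &family,     .iov_len = sizeof(family) },
            { .iov_base = (void *)pkt, .iov_len = len            },
        };
        return writev(tun_fd, iov, 2);
    }

packetFlow.readPackets()/writePackets() handle this header for you (they expose the protocol family separately), which is presumably why it only becomes visible when you bypass packetFlow and read the fd directly.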

Related

How to read specific information from a TCP packet

I have a network analyzer written in C that sniffs the packets passing through a specific network card (something like Wireshark, but way simpler).
I have implemented a function where, given a pointer to the first byte of the packet in memory, I can extract information such as the source and destination IP addresses by shifting the pointer.
For example, I can read layer 2 by moving the pointer 8 octets to find the destination MAC, according to the layout described here: https://en.wikipedia.org/wiki/Ethernet_frame.
I'm able to do the same thing for layers 3 and 4 with the TCP protocol (and others), following the table below.
Now I would like to read whether the packet is using TLS and which port it's using.
For the port, I can't find its position inside the packet.
As for TLS, I've checked with Wireshark and it seems that TLS is always indicated in the packet with the hex code 0303, inside the application layer.
I would like to know if there's a standard encapsulation in the application layer for reading information such as Transport Layer Security.
Thanks
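For the port, a minimal sketch of the same pointer-offset approach follows. It assumes a pcap-style capture where the frame starts at the destination MAC (no preamble/SFD, so the offsets differ from the full Wikipedia layout) and carries IPv4; the ports are simply the first four bytes of the TCP header. The function name is just illustrative.

    #include <stddef.h>
    #include <stdint.h>

    /* Given a pointer to the start of a captured Ethernet frame, extract
     * the TCP source and destination ports. Returns 0 on success, -1 if
     * the packet is not IPv4/TCP or is too short. */
    static int tcp_ports(const uint8_t *frame, size_t len,
                         uint16_t *src_port, uint16_t *dst_port)
    {
        if (len < 14 + 20 + 20)                   /* Ethernet + min IP + min TCP */
            return -1;
        uint16_t ethertype = (uint16_t)((frame[12] << 8) | frame[13]);
        if (ethertype != 0x0800)                  /* not IPv4 */
            return -1;

        const uint8_t *ip = frame + 14;
        size_t ip_hdr_len = (size_t)(ip[0] & 0x0F) * 4;   /* IHL, in 32-bit words */
        if (ip[9] != 6)                           /* IP protocol 6 = TCP */
            return -1;
        if (14 + ip_hdr_len + 4 > len)
            return -1;

        const uint8_t *tcp = ip + ip_hdr_len;
        *src_port = (uint16_t)((tcp[0] << 8) | tcp[1]);   /* bytes 0-1 of TCP header */
        *dst_port = (uint16_t)((tcp[2] << 8) | tcp[3]);   /* bytes 2-3 of TCP header */
        return 0;
    }

As for TLS, there is no extra encapsulation header to look for: the TCP payload of a TLS connection starts directly with the TLS record layer, i.e. a content-type byte (0x16 for handshake, 0x17 for application data) followed by a two-byte version field, which is where the 0303 (TLS 1.2) shown by Wireshark comes from. A common heuristic is to check for one of those content types plus a plausible version, typically on port 443.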

Multi-drop bus to RS232 converter

I have a project using MDB (multi-drop bus) for a vending machine (VDM).
The VDM has an MDB-RS232 converter.
I'm not sure whether it converts 9-bit to 8-bit (MDB-UART).
How do I read data from the VDM on my computer?
Thanks all
MDB (multi-drop bus) is 9 bit, because after the standard 8 data bits (like in standard RS232 UART communication) there is a 9th bit called "mode".
(Wikipedia on MDB: "the mode bit differentiates between ADDRESS and DATA bytes.")
But you can read such data even with regular 8-bit RS232 interfaces, e.g. a plain standard USB-to-RS232 device for PC.
Here is how:
Use 9600 baud, 8 data bits, 1 stop bit, but RS232 parity setting "Space". Make sure you receive the original character value even in case of a Parity Error indication. Any MDB address byte from your VDM will be received with a Parity Error (but still be displayed correctly). Any data byte will be displayed without error.
For sending MDB ADDRESS and DATA bytes using a standard 8-bit RS232 port, you could apply temporary parity changes: Change the parity setting to "Mark" before sending an address byte, then change back to "Space" before sending data bytes.
On Windows, you can do such tricks with our Docklight software (see Docklight and MDB). It's free for basic testing and there is also a related 9-bit example project.
On Linux / Raspberry Pi, other users have successfully implemented the parity trick too; see this stackexchange post about an MDB + Pi setup.
But also with RealTerm, Teraterm, Termite, Bray, YAT or any other RS232 application you should be able to read the data, as long as it handles "Space" or "Mark" parity settings correctly.
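On Linux, a minimal termios sketch of this trick might look like the following. It assumes a standard USB-to-RS232 adapter at a placeholder path like /dev/ttyUSB0 and uses the Linux-specific CMSPAR ("stick parity") flag, which not every serial driver supports.

    #include <fcntl.h>
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    /* Open a port at 9600 baud, 8 data bits, 1 stop bit, SPACE parity.
     * With PARMRK set, bytes that arrive with a parity error (MDB address
     * bytes) are prefixed with 0xFF 0x00 in the read stream, so they can
     * be told apart from ordinary data bytes. */
    int open_mdb_port(const char *dev)
    {
        int fd = open(dev, O_RDWR | O_NOCTTY);
        if (fd < 0) { perror("open"); return -1; }

        struct termios tio;
        if (tcgetattr(fd, &tio) < 0) { perror("tcgetattr"); close(fd); return -1; }

        cfmakeraw(&tio);
        cfsetispeed(&tio, B9600);
        cfsetospeed(&tio, B9600);

        tio.c_cflag = (tio.c_cflag & ~CSIZE) | CS8;   /* 8 data bits          */
        tio.c_cflag &= ~CSTOPB;                       /* 1 stop bit           */
        tio.c_cflag |= PARENB | CMSPAR;               /* "stick" parity       */
        tio.c_cflag &= ~PARODD;                       /* stick + !odd = SPACE */
        tio.c_cflag |= CLOCAL | CREAD;

        tio.c_iflag |= INPCK | PARMRK;                /* mark parity errors   */
        tio.c_iflag &= ~IGNPAR;

        if (tcsetattr(fd, TCSANOW, &tio) < 0) { perror("tcsetattr"); close(fd); return -1; }
        return fd;
    }

For sending, the same idea applies in reverse: set PARODD as well (stick + odd = MARK parity) before transmitting an address byte, then clear it again before the data bytes, as described above.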
You'll need an adapter that does all the conversion on the fly and in real time. If you want to emulate the VMC (master), you'll need an MDB-UART master adapter. If you want to emulate an MDB peripheral device (such as a coin changer, bill validator etc.), you'll need this. For two-way "sniffing" of the MDB bus you'll need a combination of these devices.
A direct connection from a PC's RS-232 port to MDB will not work because of the strict MDB timings (the delay between a VMC command and the peripheral response must not exceed 5 ms, and delays between POLL requests are generally 50-300 ms). By that I mean functioning reliably enough for commercial purposes.

Can't get OK response from XBee upon "+++"

I have been trying to set up two XBees to communicate for the last three days. X-CTU seems to be the perfect option for doing so; however, it is a real menace when it comes to discovering XBees on serial ports.
I was able to detect one XBee just once, by luck, and the other one never showed up. I have even replaced both my XBees. I am trying to figure out an alternative, i.e. using a serial console to perform the operation. I haven't been able to receive an OK response from the device upon issuing +++.
Since I haven't had a good experience using a PC to communicate with ESP8266 devices in the past, I tried a workaround: using the second serial port of an Arduino to send the configuration messages and reading the response by printing it on the default serial console.
It also appears that configuration messages can differ depending on the mode of the device. If it's in API mode, the frame has to be generated in a specific format (I use the X-CTU frame generator for this purpose).
Why am I not able to receive a response from the XBee upon issuing a +++?
The devices are Series 1 XBees and the exact part number is XB24-AWI-001. Any help is highly appreciated.
Have you considered that the XBee might be in API mode? Maybe you should consider reflashing the device in AT mode to start playing with it.
To test if it's in API mode, you can refer to the guide, chapter 9 for the API mode structure:
http://eewiki.net/download/attachments/24313921/XBee_ZB_User_Guide.pdf?version=1&modificationDate=1380318639117&api=v2
Basically, a datagram in API mode starts with ~, and it's built as follows:
[0x7E|length(2B)|Command(1B)|Payload(length-1B)|Checksum(1B)]
As 0x7E is ~ on the ASCII table, you should try typing a bogus datagram in a serial terminal session like:
~ <C-d> AAAA
N.B.: <C-d> means Control-d under Unix, which is the EOF character.
Obviously such a message isn't likely to work, and you will receive a reply asking you to send the datagram again. That's because the EOF character is ASCII code 4, so the length of the datagram will be read as 4 bytes. You then send four bogus bytes, and the checksum ('A') is very unlikely to be right, so the receiver will assume the transmission has been corrupted and ask for the datagram again, meaning you will receive a datagram making that request.
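For reference, a minimal C sketch of how a well-formed API-mode frame is put together: start delimiter 0x7E, two length bytes, the frame data (API identifier plus payload), and a checksum that is 0xFF minus the low byte of the sum of the frame-data bytes. This assumes API mode 1 (no byte escaping), and the function name is just illustrative.

    #include <stddef.h>
    #include <stdint.h>

    /* Build an XBee API-mode-1 frame around frame_data (API identifier
     * followed by its payload). out must have room for len + 4 bytes.
     * Returns the total number of bytes written to out. */
    static size_t build_api_frame(const uint8_t *frame_data, size_t len, uint8_t *out)
    {
        out[0] = 0x7E;                        /* start delimiter ('~')    */
        out[1] = (uint8_t)(len >> 8);         /* length MSB               */
        out[2] = (uint8_t)(len & 0xFF);       /* length LSB               */

        uint8_t sum = 0;
        for (size_t i = 0; i < len; i++) {
            out[3 + i] = frame_data[i];
            sum = (uint8_t)(sum + frame_data[i]);
        }
        out[3 + len] = (uint8_t)(0xFF - sum); /* checksum over frame data */
        return len + 4;
    }

For example, an AT command query for the firmware version ("VR", API identifier 0x08, frame ID 0x01) comes out as 7E 00 04 08 01 56 52 4E, which you can cross-check against the X-CTU frame generator.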
Though I can only advise you to consider running it only in API mode (more reliable and a better API, even if you cannot play around with it and understand what's going on by tapping on the line with a logic analyzer… though given enough time, you'll start to read API datagrams like they're English ☺).
I wrote a page with a few resources to check on how to reflash the XBees:
https://github.com/hackable-devices/polluxnzcity/wiki/Flash-zigbee
and here's other advice from another, totally unrelated project:
https://github.com/andrewrapp/xbee-api#documentation
And I also wrote a lib (aimed at beaglebones but you can tweak it for your use) that handles API mode 2 with XBees:
https://github.com/hackable-devices/polluxnzcity/blob/master/PolluxGateway/include/xbee/xbee_communicator.h
https://github.com/guyzmo/polluxnzcity/blob/master/PolluxGateway/src/xbee/xbee_communicator.C
but I bet that with a little Google search you can find more widely used libraries than those, and even some aimed at Arduinos (N.B.: that lib was originally written for Arduinos and then adapted to run on a BeagleBone, so reversing the operation shouldn't be hard).

Understanding serial device settings

Please feel free to slap me and send a link if this question has already been answered; I just couldn't find it. I did search though.
I've been trouble-shooting communication with a serial device. In looking over lots of documentation, I now understand what the settings for "baud rate," "data bits," "stop bit," and "parity" mean. But what I can't seem to understand is who (sender or receiver) determines these settings.
Say I have a serial device plugged into my computer. In my code, I open a connection to the serial port and specify something like 9600,8,E,1. When I specify these settings, do they get sent to the sending device itself, so that it knows how to send the data to my receiver? Or is it more common for a sender to expect a receiver to comply with fixed settings?
The issue I'm having is that I attempted to use "Even" parity, and that resulted in tons of irregular transfer errors. When I use "Odd" parity, however, those errors go away. There is also a USB-to-serial adapter involved in my setup. There aren't any transfer errors with either Even or Odd parity without the adapter in the middle. So I'm having a hard time understanding whether the device itself doesn't support sending with Even parity, or whether the adapter is causing the trouble, etc.
Thanks.
When I specify these settings, do they get sent to the sending device itself, so that it knows how to send the data to my receiver?
No.
To expand on the comment by Hans Passant, both sides of the serial port have to agree on the settings, otherwise they won't talk to each other. If they don't agree, you will get gibberish data on either side as the hardware will read the data at an incorrect time. The settings are normally documented in the manual for the device that you are attempting to communicate with. For example, to communicate with a Cisco router, you will generally use the following settings:
Bits per sec : 9600
Data bits : 8
Parity : none
Stop bits : 1
Flow control : none
When you set up the serial port on your side, you must use these same settings; there is no hardware-level handshake between the two devices that determines the speed at which they will communicate.
Sometimes the serial port settings are given in a shorthand like the following:
9600,8,N,1
which is just shorthand for the settings quoted above (9600 baud, 8 data bits, no parity, 1 stop bit).
In my experience, most devices default to 9600,8,N,1; the next most common serial setting is 115200,8,N,1.
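As an illustration, opening a port with those default settings on a Linux/BSD-style system looks roughly like this; the device path and function name are placeholders.

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    /* Open a serial port at 9600,8,N,1 with no flow control. */
    int open_9600_8n1(const char *dev)
    {
        int fd = open(dev, O_RDWR | O_NOCTTY);
        if (fd < 0)
            return -1;

        struct termios tio;
        if (tcgetattr(fd, &tio) < 0) { close(fd); return -1; }

        cfmakeraw(&tio);                              /* raw mode, no translation */
        cfsetispeed(&tio, B9600);
        cfsetospeed(&tio, B9600);
        tio.c_cflag = (tio.c_cflag & ~CSIZE) | CS8;   /* 8 data bits        */
        tio.c_cflag &= ~PARENB;                       /* no parity          */
        tio.c_cflag &= ~CSTOPB;                       /* 1 stop bit         */
        tio.c_cflag &= ~CRTSCTS;                      /* no HW flow control */
        tio.c_cflag |= CLOCAL | CREAD;

        if (tcsetattr(fd, TCSANOW, &tio) < 0) { close(fd); return -1; }
        return fd;
    }

Only the c_cflag bits change for the other common settings, e.g. for 9600,8,E,1 you would set PARENB and clear PARODD (even parity).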

How can you count the number of packet losses in a file transfer?

One of my networks course projects has to do with 802.11 protocol.
My partner and I thought about exploring the "hidden terminal" problem by simulating it.
We've set up a private network. We have 2 wireless terminals that will attempt to send a file to a 3rd terminal that is connected to the router via Ethernet. RTS/CTS will be disabled.
To compare results, we'd like to measure the number of packet collisions that occurred during the transfer, so as to conclude that they are due to RTS being disabled.
We've read that it is impossible to measure packet collisions directly, as a collision is basically noise. We'll have to make do with counting the packets that didn't receive an ACK, i.e. the number of retransmissions.
How can we do that?
I suggested that instead of sending a file, we could make the 2 wireless terminals ping the 3rd terminal continually. The ping feature automatically counts the ping packets that didn't receive a "pong". Do you think it's a viable approach?
Thank you very much.
No, you'll get incorrect results. Ping is an application, i.e. it works at the application (highest) layer of the network. The 802.11 protocol operates at the MAC layer; there are at least 2 layers separating ping from 802.11. Whatever retransmissions happen at the MAC layer are hidden by the layers above it. You'll see a failure in ping only if all the retransmissions initiated by the lower layers have failed.
You need to work at the same layer you're investigating; in your case that's the MAC layer. You can use a sniffer (google for it) to get the statistics you want.
