How to send a 16-byte hexadecimal value with can-utils cansend?

I'm trying to send a 16-byte hexadecimal value via cansend from the can-utils package on Ubuntu 16.04.3 LTS.
The commands I tried:
cansend can0 100#000a000b000c000d
cansend can0 100#000a.000b.000c.000d
But my CAN bus logger shows that cansend only transmits 8-byte values.
So my question: is it even possible to send 16-byte hexadecimal values via cansend, or does someone know a workaround?

The CAN standard doesn't allow payloads larger than 8 bytes. This is why cansend only sent 8 of your 16 bytes.
There are a few solutions to your problem:
- Send your payload with two cansend commands
- Use the ISO-TP kernel module, which lets you do ISO-TP over SocketCAN (ISO-TP is a transport protocol for sending packets larger than 8 bytes over CAN) (how to)
- Use CAN FD; cansend supports this protocol without anything extra to install, but if your bus only supports classic CAN, you cannot use this solution
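The first workaround (two cansend invocations) can be sketched in C. This is a minimal sketch: the helper name, the 64-character command buffers, and the consecutive-ID scheme (0x100, 0x101, ...) are all assumptions for illustration, and your receiver must know how to reassemble the chunks.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: split a hex payload into 8-byte (16 hex digit)
 * chunks, since a classic CAN frame carries at most 8 data bytes, and
 * format one cansend command per chunk. Assumption: consecutive CAN IDs
 * starting at base_id; the receiver must reassemble in ID order. */
static int split_cansend(const char *iface, const char *payload_hex,
                         unsigned base_id, char out[][64], int max_cmds)
{
    size_t len = strlen(payload_hex);
    int n = 0;
    for (size_t off = 0; off < len && n < max_cmds; off += 16, n++) {
        char chunk[17] = {0};               /* 16 hex digits + NUL */
        strncpy(chunk, payload_hex + off, 16);
        snprintf(out[n], 64, "cansend %s %X#%s", iface, base_id + n, chunk);
    }
    return n;
}
```

For a 16-byte payload such as 000a000b000c000d000e000f00100011, this produces `cansend can0 100#000a000b000c000d` and `cansend can0 101#000e000f00100011`, which you would run one after the other.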

You can try sending to two IDs one after the other, in which case the endpoint should listen to both of them and reconstruct the message.
Another solution could be to use CANopen, which allows for more complex messaging on top of the CAN bus. Basically it does just what I've described above, but at a more complex level.

Related

How is Data Length Extension (DLE) enabled for BLE?

How is data length extension enabled? I have BlueZ on a Raspberry Pi 4B (the peripheral).
I want to maximise my L2CAP channel throughput and hence want to move from a data channel payload of 27 bytes to 251.
Is there a way to do this similar to how one changes the connection interval by editing
/sys/kernel/debug/bluetooth/hci0/conn_min_interval?
There are many articles explaining the concept but not how to actually enable it.

Is there any memory limit for NFC card emulation?

I want to send information from an Arduino to a phone via NFC.
To do this I have a PN532 module. The way I want to send information is to use the module to emulate an NFC tag and read the message from the phone. The reason I don’t want to use a real NFC card is due to the memory limitations. Most of them have near 800 bytes of memory and the ones with more memory are expensive. In case I emulate a card with PN532 module, will I still have some memory limitation?
I found this in the documentation:
PN532-HCE
What seemed important were the APDU byte limitations. I'm not really an expert in NFC and I don't know whether this would affect the emulated card's memory.
The information I want to transfer is JSON in plain text. I think that is supported in NDEF messages, so iPhones would be able to read it. The JSON could be up to 2500 characters or bytes and would change many times each day, so having to rewrite a physical card is a problem as well.
My understanding is that ISO 14443-4 is a transmission protocol https://webstore.iec.ch/preview/info_isoiec14443-4%7Bed4.0%7Den.pdf and therefore limits how much you can send/receive in one command. It does not stop you from using multiple commands to send and receive, which is how you emulate more memory.
So what should happen is that a device issues ISO 7816-4 commands to the emulated card over ISO 14443-4.
When reading, the device should obey the maximum transceive length it has advertised (in your case, 256 bytes for a short APDU command), and thus it should read the whole file (memory) in multiple 256-byte chunks.
See the ISO 7816-4 READ BINARY command https://cardwerk.com/smart-card-standard-iso7816-4-section-6-basic-interindustry-commands/#chap6_1 which has offset and length parameters.
So for larger data, your HCE response code on the Arduino would receive from the PN532 a "read binary of bytes 0 to 255" command, to which you would respond with the first 256 bytes of the JSON data.
The device would then issue a second "read binary of bytes 256 to 511" command, and so on, until all the data you want to return has been returned.
The device is therefore reading the emulated file (memory) in chunks of the maximum size a short APDU can carry (256 bytes).
Note I've not done any coding with this; I just have knowledge of the standards.
Note you can get cards with up to 32 KB of storage; yes, they cost more, but a 4 KB DESFire card is only about 150% the price of an NTAG216 with 888 bytes of memory.
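The chunked READ BINARY flow described above can be sketched as follows. This is a sketch of the command APDU a reader would build for each chunk, assuming the short form of READ BINARY with the offset encoded in P1/P2; the function name is illustrative.

```c
#include <stdint.h>

/* Build the 5-byte ISO 7816-4 READ BINARY command APDU for one chunk.
 * With the top bit of P1 clear, P1/P2 hold a 15-bit file offset (so
 * this simple form covers files up to 32 KB); Le = 0x00 requests the
 * short-APDU maximum of 256 bytes. */
static void read_binary_apdu(uint16_t offset, uint8_t apdu[5])
{
    apdu[0] = 0x00;                 /* CLA */
    apdu[1] = 0xB0;                 /* INS: READ BINARY */
    apdu[2] = (offset >> 8) & 0x7F; /* P1: offset high bits, bit 8 clear */
    apdu[3] = offset & 0xFF;        /* P2: offset low byte */
    apdu[4] = 0x00;                 /* Le: 0x00 => up to 256 bytes */
}
```

The first read would be read_binary_apdu(0, a), giving 00 B0 00 00 00; the second, read_binary_apdu(256, a), gives 00 B0 01 00 00; the emulated card answers each with the next 256 bytes of the JSON.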

Force OS to Send TCP Message

I wonder whether there is a way to force the OS to send TCP messages as soon as send() is called.
I have already turned on TCP_NODELAY, but according to tcpdump I can see that some messages are merged into one packet (e.g. I saw a 7 KB packet although each of my messages was only 186 bytes). It seems like the OS/NIC can't keep up and buffers the messages somewhere.
I also checked that TCP_MAXSEG was set to 536 by default, so it seems TCP_MAXSEG is being ignored?
I am looking for a way to tell the OS not to send such a huge packet. If the OS/NIC can't keep up, the OS should send multiple packets based on the max size I specify (somehow?).
Any input is welcome.
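For reference, the TCP_NODELAY setup the question describes looks like this with plain sockets (a minimal sketch; the wrapper name is illustrative). One likely explanation for the merged packets, worth checking: segmentation offload (GSO/TSO) makes tcpdump capture large buffers before the NIC splits them into wire-size segments, so what you see in the capture is not necessarily what is on the wire.

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable Nagle's algorithm so small writes are not delayed waiting
 * to coalesce. Even with Nagle off, the kernel and NIC (GSO/TSO) may
 * still hand tcpdump large merged buffers. */
static int enable_nodelay(int fd)
{
    int one = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}
```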

Should the TCP/UDP data field be converted to network byte order?

If machine A and machine B are communicating with each other but have different host byte orders, then, on the sending side, should the TCP/UDP data field be converted to network byte order?
Why?
Thanks!
Unless you're following a pre-existing specification, it will be safest to always use network byte order (aka "big-endian"):
You need to specify some byte order; you can't just send binary data and hope that the receiver can figure it out.
Because big-endian data is a standard of the Internet, there are lots of tools to convert to/from host byte order. You'd have to write your own tools to convert between host and little-endian.
The traditional argument against it is "all the world's a VAX" (or today, x86), which is little-endian, so network byte order imposes a performance tax on the data. Perhaps that was a valid argument 20 or 30 years ago, but it certainly isn't today. The time your processor takes to convert the data is an infinitesimal fraction of the time it takes to move that data across the network.
It is certainly recommended in many cases. To explain the reason why, let's look at an example:
You have a program that takes a 32-bit unsigned int, places it in a packet, and sends it to another host.
The other host pulls the data out of the packet and stores it as a 32-bit unsigned int.
Sending host is big endian, receiving host is little endian.
If the sending host in the above example sends the number 1024, it is stored on the sending host as 0x00000400, i.e. the byte sequence 00 00 04 00 in big-endian order. If the receiving host stores those bytes in memory unchanged and interprets them as little-endian, it reads them as 0x00040000, which is the decimal number 262,144: a totally different number than 1024.
Converting to network byte order lets programs rely on a standard encoding of the data and avoid confusion like in the example above. Functions to convert between network byte order and whatever order the receiver uses are readily available and simple to use.
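The conversion the answer recommends is exactly what the standard htonl()/ntohl() functions do; a minimal sketch (the wrapper names are just for illustration):

```c
#include <arpa/inet.h>
#include <stdint.h>

/* htonl() converts a 32-bit value from host to network (big-endian)
 * byte order before sending; ntohl() converts it back on the receiver.
 * On a big-endian host these are no-ops; on a little-endian host they
 * swap the bytes. */
static uint32_t encode_for_wire(uint32_t host_value)
{
    return htonl(host_value);
}

static uint32_t decode_from_wire(uint32_t wire_value)
{
    return ntohl(wire_value);
}
```

Sending 1024 through encode_for_wire() puts the bytes 00 00 04 00 on the wire regardless of the host's endianness, and decode_from_wire() on the receiver recovers 1024.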
TCP has built-in mechanisms for re-ordering received packets; UDP doesn't. I'm not sure what you mean by "different host byte order", but if the packets are received with byte-level errors, then it's layer 2's role to retransmit such packets.

Send large data on UDP socket

I need to send and receive very large data using UDP. Unfortunately UDP provides 8192 bytes per datagram, so the data needs to be divided into smaller pieces.
I'm using Qt and QUdpSocket. There is a QByteArray with a length of 921600 that I want to send to the client, 8192 bytes at a time.
What is the fastest way to split a QByteArray?
You should never need to explicitly split the data, just step through it 8 KB at a time. Typically the functions that write data to a socket (including QUdpSocket::writeDatagram, it seems) accept a pointer to the first byte and a byte count, so you can just provide a pointer into the array.
Do note that sending 8 KB datagrams is quite aggressive; they will very likely be fragmented at the IP layer which can affect delivery speed and reliability negatively.
Research the concept of "path MTU" and try to use that size for the sends; it might be faster even though it results in more datagrams.
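The pointer-stepping approach above can be sketched as follows. The callback type and function names are illustrative; in real code the callback body would be QUdpSocket::writeDatagram or sendto(), receiving a pointer into the array plus a length rather than a copy.

```c
#include <stddef.h>

/* A send callback takes (pointer, length) like the socket write
 * functions do; ctx is for whatever state the caller needs. */
typedef void (*send_fn)(const char *data, size_t len, void *ctx);

/* Walk the buffer in chunks of at most `chunk` bytes without copying,
 * invoking the callback once per datagram. Returns the number of
 * datagrams that would be sent. A NULL callback just counts. */
static size_t send_in_chunks(const char *buf, size_t total, size_t chunk,
                             send_fn send, void *ctx)
{
    size_t count = 0;
    for (size_t off = 0; off < total; off += chunk) {
        size_t len = (total - off < chunk) ? total - off : chunk;
        if (send)
            send(buf + off, len, ctx);
        count++;
    }
    return count;
}
```

With total = 921600 and chunk = 8192 this yields 113 datagrams: 112 full ones plus a final 4096-byte tail.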
Actually, the length field in a UDP header is 16 bits, so a UDP datagram can be up to ~65 KB (minus the size of the headers).
However, as unwind pointed out, it will most likely be fragmented along the way to fit the smallest MTU on the path to the destination.
8192 bytes is the default receive buffer size for Windows operating systems. The MTU is likely 1500 bytes if you are using Ethernet. Any UDP datagram larger than that will be fragmented. If your datagram encounters a device along the route with an even smaller MTU, it will be fragmented yet again.
You can use the QByteArray::mid(int start, int len) method (see documentation here) to get a QByteArray of length len starting from start.
Just make len your datagram size and start with 0*len, 1*len, 2*len, ... until everything is sent.
