Reliable message protocol with only 2 bits as data - serial-port

I have severe hardware limitations on a school project. To add a feature we desire, I am limited to 2-bit communication between two microcontrollers. They share two wires that can independently go low or high.
If I wanted to send the string "Hello World" hex-encoded across the wires, what protocol would ensure the message arrived safely and with minimal loss?

I2C (I-squared-C) is an easy-to-bit-bang synchronous protocol that has collision detection, but it requires open-collector lines since multiple devices could be driving the lines at the same time.
Otherwise you can bit-bang async serial; you just have to find a baud rate slow enough to stay reliable if software delays are all you have to control the timing.
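For illustration, a bit-banged transmit routine can be as small as the sketch below, which assumes hypothetical gpio_write() and delay_us() helpers for your MCU (both names are placeholders) and a deliberately slow rate so software-timed delays stay accurate:

    #include <stdint.h>

    #define BIT_US 833                     /* ~1200 baud: slow enough for software timing */

    extern void gpio_write(int level);     /* drive the TX wire high/low (placeholder)    */
    extern void delay_us(unsigned us);     /* busy-wait delay (placeholder)               */

    static void uart_tx_byte(uint8_t b)
    {
        gpio_write(0);                     /* start bit */
        delay_us(BIT_US);
        for (int i = 0; i < 8; i++) {      /* 8 data bits, LSB first */
            gpio_write((b >> i) & 1);
            delay_us(BIT_US);
        }
        gpio_write(1);                     /* stop bit; the line idles high */
        delay_us(BIT_US);
    }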
On top of that, layer a protocol something like:
<STX> <DATA> <ETX> <CRC>
You can make this more complex as needed. For instance, it's often much nicer if you can say up front how big the message is: <STX> <LENGTH> <DATA> <ETX> <CRC>
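As a rough sketch of that framing (the STX/ETX values, the CRC-8 polynomial and the function names are just illustrative choices, not a fixed standard):

    #include <stdint.h>
    #include <stddef.h>

    #define STX 0x02
    #define ETX 0x03

    /* Simple CRC-8 (polynomial 0x07, init 0) over the frame so far. */
    static uint8_t crc8(const uint8_t *buf, size_t len)
    {
        uint8_t crc = 0;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07) : (uint8_t)(crc << 1);
        }
        return crc;
    }

    /* Build <STX> <LENGTH> <DATA...> <ETX> <CRC> into 'frame' (must hold len + 4
     * bytes) and return the total frame size; the bytes would then be clocked
     * out with whatever bit-banged transmit routine you end up with. */
    static size_t build_frame(const uint8_t *data, uint8_t len, uint8_t *frame)
    {
        size_t n = 0;
        frame[n++] = STX;
        frame[n++] = len;
        for (uint8_t i = 0; i < len; i++)
            frame[n++] = data[i];
        frame[n++] = ETX;
        frame[n++] = crc8(frame, n);       /* CRC covers everything before it */
        return n;
    }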

Related

The elegant way to handle ADCs with DMA in an RTOS

I'm currently setting up Azure RTOS (ThreadX on an STM32) with Ethernet, SPI and ADCs activated.
This STM32 has to pass through configuration information from time to time, coming from my PC over the Ethernet port.
It has to pass this information via SPI to two other STM32s, which makes the first STM32 the system controller / system interface. This will be a low-priority task, since activation of the passed configuration will be started by sync lines running from the system controller to the two other STM32s.
While doing so, the system controller has to constantly read in ADC values and pass them via Ethernet/TCP to my computer.
I've used the ThreadX TCP server example, as given by STM, as a starting point.
From there I've managed to set up three servers on three ports, communicating successfully with a Python script on my PC (as a first test).
Now come the two great questions:
1)
Since my input signal may contain frequencies up to 2.5 MHz, I want to digitize this signal with the full 5 MSPS (Nyquist), which ADC3 is capable of.
The smallest internally available data type at full resolution is uint16_t, which makes the data rate work out to R = 16 bit * 5 MSPS = 80 Mbit/s. (That's the worst case; I bet there is optimization possible, e.g. 8-bit resolution, which halves the data rate but might not be enough, or 16 bits plus an FFT on the µC, which would also suffice since I'm mostly interested in energy per frequency band, but initially I wanted to do that on my computer for best flexibility.)
Even if the Ethernet interface is capable of 100 Mbit/s, the TCP layer of NetX Duo, I bet, is not.
(USB OTG is also available on this board, but since networked devices are in my opinion more versatile, I prefer Ethernet ... nevertheless, USB might be a backup solution.)
From my measurements, a data stream transmitted to the µC via TCP from Python and mirrored back to my PC within a thread achieves a relatively consistent 20 Mbit/s.
... How do I push this speed to a better level?
(I think 20 Mbit/s is the back-and-forth data rate, so one-way may be faster.)
However. Second question:
2)
The ADC within the STM is capable of storing data via DMA to memory.
There are two callbacks available, one at half-full, one at full buffer state.
My problem is mostly about the way of reading out the DMA and/or triggering the conversion in the first place.
How do you do this the "right" way on an RTOS (such that you don't break the RT in RTOS)?
I see some options here, what are the pros/cons you can think of?
a) Let the ADC run freely, calling the callbacks at the respective fill levels, and trigger a TCP transmission whenever one of the callbacks fires (see the sketch after this list)
-> may lead to glitches due to insufficient speed of the TCP layer, in my opinion.
b) Let the ADC conversion be triggered by a thread, which is preempted and will later TCP-transmit the data, as soon as the memory-buffer is full
-> may lead to inconsistency in the converted values, since you get burst-style conversions, with gaps in between, while the buffer is read
c) Let a thread trigger each conversion individually
-> A no-go I think, since threads aren't scheduled often enough to get a decent sample frequency.
d) Let a free-running ADC trigger callbacks, have one thread do the FFT, and transmit the data via TCP from another thread
-> May work, but is less flexible, since the data gets crunched within the µC.
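To make option (a)/(d) concrete, here is a rough sketch of what I have in mind, assuming the STM32 HAL half/full-transfer callbacks plus a ThreadX semaphore; the buffer size, the names and the process_and_send() helper are placeholders, and creating the semaphore/thread and starting the circular DMA are left out:

    #include "main.h"                        /* HAL handles (CubeMX-style project assumed) */
    #include "tx_api.h"
    #include <stdint.h>

    #define SAMPLES 4096
    static uint16_t adc_buf[SAMPLES];        /* circular DMA target                  */
    static TX_SEMAPHORE adc_sem;             /* created elsewhere with count 0       */
    static volatile uint16_t *ready_half;    /* which half is safe to read           */

    extern void process_and_send(const uint16_t *samples, unsigned count);  /* FFT / TCP, placeholder */

    void HAL_ADC_ConvHalfCpltCallback(ADC_HandleTypeDef *hadc)
    {
        ready_half = &adc_buf[0];            /* first half just finished filling     */
        tx_semaphore_put(&adc_sem);          /* wake the worker; ISR-safe in ThreadX */
    }

    void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
    {
        ready_half = &adc_buf[SAMPLES / 2];  /* second half just finished filling    */
        tx_semaphore_put(&adc_sem);
    }

    void adc_worker_entry(ULONG arg)
    {
        for (;;) {
            tx_semaphore_get(&adc_sem, TX_WAIT_FOREVER);
            /* Must finish before the other half fills again, or samples are lost. */
            process_and_send((const uint16_t *)ready_half, SAMPLES / 2);
        }
    }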
--> Are there other ways you can think of / what do you think about the ways I named here?
--> What do you think about question 1)?
Have a nice day!

Serial Comms baud rate, parity and stop bits. Which options to use and when?

I'm trying to pick up some serial comms for a new job I am starting. I have done some reading, which has helped a lot; however, most of it tells you about the specification of serial comms and what everything is, not when it is best to use particular options.
My searches for this information so far only seem to pull in the spec; perhaps as a novice I am searching for the wrong terms.
My questions then!
Baud Rate - I have read this is signal changes per second and is often mislabelled as bits per second. Is this essentially bits per second including the frame data if asynchronous, and actually bits per second if synchronous?
Parity - Even/Odd.. Is there any difference at all between the two? I'm thinking in terms of efficiency or similar. Does this only still exist for compatibility's sake?
Stop Bits - I have read so far you can have 1 or 2 stop bits. In C# there seems to be an option for 1.5 too. I can't find anything on why you would want/need more than 1.
If anyone can advise on these points, or point me to some recommended reading material I would be very grateful.
Thanks for reading.
You very rarely have a choice; you must match the settings that the device uses. If you don't know them, you need to look in a manual or pick up a phone. Do keep in mind that it is increasingly rare to work with a real serial port device, one that uses a UART. Most commonly you actually talk to an emulated serial port, implemented by a USB or Bluetooth device driver. The settings you use don't matter in such a case, since the actual signaling is implemented by the underlying bus.
If you can configure the device then basic guidelines are:
Baudrate is directly related to the length of the cable and the amount of electrical interference that's present. You have to go slower when you get bit errors. The RS-232 spec only allows for a maximum of 50 ft at 9600 baud.
Parity ought to be used when you don't use an error-correcting protocol. It does not matter whether you pick Odd or Even. Odd people pick odd, it's their prerogative.
Stop bits is usually 1. Picking 1.5 or 2 helps a bit to relieve pressure on a device whose interrupt response times are poor, which shows up as data loss.
Data bits is almost always 8, sometimes 7 if the device only handles ASCII codes.
Handshaking is an important setting that never stops causing trouble, since many programmers just overlook it. Modern computers are almost always fast enough not to need it, but that's not necessarily true for devices. The most basic stay-out-of-trouble configuration is to turn DTR on when you open the port and to tell the device driver to take care of RTS/CTS handshaking. XON/XOFF handshaking is sometimes used, depending on the device.
A good 90% of the battle is won by implementing solid error checking. It is almost always skimped on, bad idea. Very important for serial port devices since they have no error correcting capabilities themselves and very weak error detection. Always make sure that you can detect and properly report overrun, parity and framing errors. And test them by getting the settings intentionally wrong.
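To make those guidelines concrete, here is a minimal POSIX/termios sketch (the device path is an assumption, and cfmakeraw()/CRTSCTS are common BSD/Linux extensions rather than plain POSIX): 9600 baud, 8 data bits, even parity, 1 stop bit, RTS/CTS handshaking, with parity and framing errors marked in the input stream so they can actually be detected:

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    int open_serial(const char *path)          /* e.g. "/dev/ttyUSB0" */
    {
        int fd = open(path, O_RDWR | O_NOCTTY);
        if (fd < 0)
            return -1;

        struct termios tio;
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);                       /* raw byte stream, 8 data bits          */
        cfsetispeed(&tio, B9600);
        cfsetospeed(&tio, B9600);

        tio.c_cflag |= PARENB;                 /* even parity (add PARODD for odd)      */
        tio.c_cflag &= ~CSTOPB;                /* 1 stop bit                            */
        tio.c_cflag |= CRTSCTS;                /* hardware RTS/CTS handshaking          */
        tio.c_iflag |= INPCK | PARMRK;         /* check parity, mark bad bytes in-band  */

        tcsetattr(fd, TCSANOW, &tio);
        return fd;
    }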

Endianness of network data transmissions over TCP/IP

Here is a question I've been trying to solve for quite some time. It doesn't pertain to a particular language, although it's less relevant for languages whose VM specifies an endianness. I know, like the 99.9999% of people that use sockets to send data over TCP/IP, that the protocol specifies an endianness for the transmission elements, like destination address, port and such. The thing I don't know is whether it requires the payload to be in a specific format to prevent incompatibilities.
For example, let's say I develop a protocol that is not a presentation layer, and that I, due to the immense dominance of little-endian devices nowadays, decide to make it little endian (for example, the positions of the players and such are transmitted in little-endian order), say a network module for a game engine, where latency matters and byte conversion would cost a noticeable amount of time. Of course the address, port and all the protocol-related data would be specified in big endian as is mandatory; I'm talking about the payload, and only that.
Would that protocol work out of the box (translating the contents as necessary, of course, once the transmission is received) on a big-endian machine? Or would the checksums of the IP protocol or something of the kind get computed wrong since the data is in a different order, and the programmer has no control of them unless raw sockets are used?
Since the whole explanation can be misleading, feel free to ask for clarifications.
Thank you very much.
The thing I don't know is if it requires the payload to be in a specific format to prevent incompatibilities.
It doesn't, and it doesn't have a way of telling. To TCP it's just a byte-stream. It is up to the application protocol to decide endian-ness, and it is up to the implementors at each end to implement it correctly. There is a convention to use big-endian, but there's no compulsion.
Application-layer protocols dictate their own endianness. However, by convention, multi-byte integer values should be sent in network byte order (big endian) for consistency across platforms, such as by using the platform-provided hton...() (host-to-network) and ntoh...() (network-to-host) functions in your code. On little-endian systems, they do the necessary byte swapping. On big-endian systems, they are no-ops. The functions provide an abstraction layer so code does not have to worry about that.
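As a small illustration of that convention, here is how an application-defined payload (the struct and field names are made up for the example) could be packed into network byte order before send() and unpacked with the ntoh...() counterparts after recv():

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <string.h>

    struct player_pos { uint32_t id; uint16_t x, y; };   /* hypothetical payload */

    /* Serialize into exactly 8 wire bytes, independent of the host CPU's endianness. */
    static size_t pack_player_pos(const struct player_pos *p, uint8_t out[8])
    {
        uint32_t id = htonl(p->id);
        uint16_t x  = htons(p->x);
        uint16_t y  = htons(p->y);
        memcpy(out + 0, &id, sizeof id);
        memcpy(out + 4, &x,  sizeof x);
        memcpy(out + 6, &y,  sizeof y);
        return 8;
    }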

What is the difference between using mark/space parity and parity none?

What is the purpose of having created three types of parity bits that all define a state where the parity bit is precisely not used?
"If the parity bit is present but not used, it may be referred to as mark parity (when the parity bit is always 1) or space parity (the bit is always 0)" - Wikipedia
There is a very simple and very useful reason to have mark or space parity that appears to be left out here: node address flagging.
Very low-power and/or small embedded systems sometimes use an industrial serial bus like RS-485 or RS-422, and many very tiny processors may be attached to the same bus.
These tiny devices don't want to waste power or processing time looking at every single character that comes in over the serial port. Most of the time, it's not something they're interested in.
So you design a bus protocol that uses, for example, 9 bits: 8 data bits and a mark/space parity bit. Each data packet contains exactly one byte or word (the node address) with the mark parity bit set; everything else is space parity. Then these tiny devices can simply wait around for a parity error interrupt. Once a device gets the interrupt, it checks that byte. Is that my address? No, go back to sleep.
It's a very power-efficient system... and only 10% wasteful on bandwidth. In many environments, that's a very good trade-off.
So... if you've then got a PC-class system trying to TALK to these tiny devices, you need to be able to set/clear that parity bit. So, you set MARK parity when you transmit the node addresses, and SPACE parity everywhere else.
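As a sketch of what that looks like from the PC side, Linux (and some other systems) expose "stick" parity through the non-standard CMSPAR termios flag; whether it works depends on the UART driver, so treat the following as an assumption rather than a portable recipe:

    #define _GNU_SOURCE                       /* exposes CMSPAR on glibc */
    #include <termios.h>
    #include <unistd.h>
    #include <stdint.h>

    static void set_parity(int fd, int mark)
    {
        struct termios tio;
        tcgetattr(fd, &tio);
        tio.c_cflag |= PARENB | CMSPAR;       /* "stick" parity                   */
        if (mark)
            tio.c_cflag |= PARODD;            /* CMSPAR + PARODD  = mark  (1)     */
        else
            tio.c_cflag &= ~PARODD;           /* CMSPAR + !PARODD = space (0)     */
        tcsetattr(fd, TCSADRAIN, &tio);       /* apply once pending output drains */
    }

    /* Send the node address with the 9th bit set, then the data with it clear. */
    static void send_packet(int fd, uint8_t addr, const uint8_t *data, size_t len)
    {
        set_parity(fd, 1);
        write(fd, &addr, 1);
        set_parity(fd, 0);
        write(fd, data, len);
    }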
So there are five possibilities, not three: no parity, mark, space, odd and even. With no parity the extra bit is simply omitted from the frame; that is often selected when the protocol already checks for errors with a checksum or CRC, or when data corruption is not deemed likely or critical.
Almost nobody selects mark or space; that's just wasting bandwidth. The exception is the odd standard, like 9-bit data protocols, which hardware vendors like because it forces you to buy their hardware, since you have no real shot at reprogramming the UART on the fly without writing a driver.
Setting mark or space parity is useful if you're generating data to send to hardware that requires a parity bit (perhaps because it has a hard coded word length built into the electronics) but doesn't care what its value is.
RS-485 networks commonly use 9-bit transmission, as described above. RS-485 is widely used in industrial applications, whatever the controlled device 'size' (for instance, many air conditioners or refrigerators offer an RS-485 interface, not really 'tiny' things). RS-485 allows up to 10 Mbit/s throughput or distances up to 4000 feet. Using the parity bit to distinguish address/data bytes eases hardware implementation: each node of the network can have its own hardware to generate interrupts only if an address byte on the wire matches the node's address.
Very clear and helpful answers and remarks.
For those who find the concept perverse, relax; the term is a problem of semantics rather than information theory or engineering, the difficulty arising from the use of the word "parity".
"Mark" and "space" bits are not parity bits in those applications, and the term arises from the fact that they occupy the bit position in which a parity bit might be expected in other contexts. In reality they have nothing to do with parity, but are used for any relevant purpose where a constant bit value is needed, such as to mark the start of a byte or other signal, or as a delay,or to indicate the status of a signal as being data or address or the like.
Accordingly they sometimes are more logically called "stick (parity) bits", being stuck in "on" or "off" state. Sometimes they really are "don't cares".

What are the implications of half-duplex serial connections?

What are the implications of using a half-duplex serial connection versus a full-duplex one? What happens if both sides try sending data at the same time? Do you end up with corrupt data arriving on both ends? Does flow-control help you with this?
On the line the data will be garbled, which may or may not lead to devices receiving the garbled data. Sometimes this will be used to detect that the transmission failed due to a collision.
Normally you wouldn't use half-duplex in the same way as full-duplex to send single characters in asynchronous mode. Rather you'd use some packet protocol which determines who has the right to send at which times, and which includes some checksum (usually a CRC) to detect corruption.
Flow control doesn't help much for this. Its purpose is to ensure that the receiver is not overrun by too much data. There is software flow control, which uses the ASCII characters XON and XOFF to start and stop transmission, and hardware flow control, which uses the RTS (Request To Send) and CTS (Clear To Send) control lines. XON/XOFF-style software flow control won't work with half duplex.
These days you don't see half duplex with ordinary RS-232 and modems (I used it with acoustic couplers in the eighties, it was rare even then). But it is common for RS-485, which is used in industrial control with various protocols. There are also many other data transmission standards which operate in a half-duplex way, mostly when there are more than two devices attached to the same line (ancient 10base2 Ethernet, CAN, LIN, FlexRay, I2C, ...).
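For what it's worth, on Linux the usual way to deal with RS-485 turn-taking is to let the kernel drive the transceiver's direction pin via the RS-485 ioctl (support depends on the UART driver, so this is a sketch under that assumption); the application then just writes its request, waits for it to drain off the wire, and listens for the reply:

    #include <linux/serial.h>
    #include <sys/ioctl.h>
    #include <termios.h>
    #include <unistd.h>

    static int enable_rs485(int fd)
    {
        struct serial_rs485 rs485 = {0};
        rs485.flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND;
        return ioctl(fd, TIOCSRS485, &rs485);   /* fails on UARTs without RS-485 support */
    }

    static void transact(int fd, const unsigned char *req, size_t req_len,
                         unsigned char *resp, size_t resp_len)
    {
        write(fd, req, req_len);     /* our turn: the driver enables the transmitter */
        tcdrain(fd);                 /* wait until the last bit has left the wire    */
        read(fd, resp, resp_len);    /* bus released; now wait for the other side    */
    }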
My God, where did you find a half-duplex line in this day and age?
Anyway, the answer is that if both ends drive the line, it gets all confused. For this reason, there are specified ASCII characters like Clear To Send and Data Terminal Ready (CTS and DTR) that are used to make a handshake. See this tutorial for more.
Augh, I should have gone to bed. Tutorial right, me stoopid.
