I have a PIC24-based system equipped with a 24-bit, 8-channel ADC (google "MCP3914 Evaluation Board" for more details...).
I have got the board to sample all 8 channels, store the data in a 512x8 buffer, and transmit the data to a PC using a USB module when the buffer is full (this is done by different interrupts).
The only problem is that while the MCU is transmitting data (the UART transmit interrupt has a higher priority than the ADC reading interrupt) the ADC is not being sampled, hence there is data loss (the sample rate is around 500 samples/sec).
Is there any way to prevent this data loss? Maybe some multitasking?
Simply transmit the data to the UART register without using interrupts, by polling the TXIF bit:
while (PIR1.TXIF == 0);   /* wait until the transmit register is empty */
TXREG = data;             /* write the byte you want to send           */
The same applies to the ADC conversion: if you were using interrupts to start/stop a conversion, simply poll the required bits (ADON, GO/DONE) and that's it.
The TX bits and AD bits may vary depending on your PIC.
That prevents the MCU from entering an interrupt service routine and losing 3-4 samples.
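For the 512x8 buffer in the question, a complete polled transmit could look roughly like the sketch below. It assumes XC16 on a PIC24 using UART1 (U1STAbits.UTXBF and U1TXREG), 24-bit samples stored in 32-bit words, and high-byte-first order on the wire; all of those details are assumptions, not something stated in the original post.

#include <xc.h>
#include <stdint.h>

/* Polled transmit of the whole sample buffer: no interrupts involved,
   the loop simply waits whenever the UART1 transmit FIFO is full. */
void send_buffer(const int32_t buf[512][8])
{
    for (int s = 0; s < 512; s++) {
        for (int ch = 0; ch < 8; ch++) {
            int32_t v = buf[s][ch];
            for (int b = 2; b >= 0; b--) {          /* 3 bytes per 24-bit sample */
                while (U1STAbits.UTXBF) ;           /* wait: TX buffer full      */
                U1TXREG = (uint8_t)(v >> (8 * b));  /* send high byte first      */
            }
        }
    }
}

Note that at 500 samples/s x 8 channels x 3 bytes = 12,000 bytes/s, the UART baud rate still has to be comfortably above 120 kbaud (at 10 bits per byte on the wire) for the transmitter to keep up.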
On PIC24 devices an interrupt can be assigned one of 8 priority levels. Take a look at the corresponding section in the "Family Reference Manual": http://ww1.microchip.com/downloads/en/DeviceDoc/70000600d.pdf
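For example, if the sample-acquisition interrupt must never be starved, it can simply be given a higher priority than the UART transmit interrupt. A minimal sketch, assuming XC16 and a PIC24F part where both priority fields happen to live in IPC3; the exact IPCx register and field names (it might be SPI1IP or INT1IP instead of AD1IP if the MCP3914 is read over SPI or a data-ready line) depend on your device and wiring, so check the interrupt chapter of the datasheet:

/* Hedged sketch: make the sample-acquisition interrupt pre-empt the UART
   transmitter. The register and field names below are typical PIC24F
   names used for illustration only. */
IPC3bits.AD1IP  = 6;   /* sample-acquisition interrupt: priority 6 (high) */
IPC3bits.U1TXIP = 2;   /* UART1 transmit interrupt: priority 2 (low)      */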
Alternatively, you can use the DMA channels, which are very handy. You can configure your ADC to use DMA, so that sampling and filling the buffer won't use any CPU time; the same goes for the UART, I believe.
http://ww1.microchip.com/downloads/en/DeviceDoc/39742A.pdf
http://esca.atomki.hu/PIC24/code_examples/docs/manuallyCreated/Appendix_H_ADC_with_DMA.pdf
For a project I need to communicate over a CAN bus network, an Ethernet network, and RS-232. I want to use one single MCU that will act as the main unit of a CAN bus star topology and an Ethernet star topology, and that MCU will also forward the RS-232 data that comes to it to another device. I want to use high-speed CAN, which can run at up to 1 Mbit/s. However, RS-232 tops out at around 20 kbaud.

I wonder if it is doable with one MCU to handle three different communication interfaces (CAN bus, Ethernet and RS-232). I am afraid of getting overrun with data at some point. I can buffer data short-term if it comes in bursts that can be averaged out. For continuous data where I'll never be able to keep up, I'll need to discard messages, perhaps in a managed way. But I do not want to discard any data. So my question is: would using one MCU for this case work? And are there any software tricks that would help me here (like giving the CAN bus a higher priority, etc.)?
Yes, this can be done with a single MCU. Even a simple MCU should easily be able to handle data rates of 1 Mbps. Most likely you want to use DMA-enabled transfers so the CPU core only needs to act when the transfer of a chunk of data has completed.
The problem of being overrun by data due to the mismatch in data rate is a separate topic:
If the mismatch persists, no system can handle it, no matter how capable.
If the mismatch is temporary, it's just a function of the available buffer size.
So if the worst case you want to handle is 10 s of incoming data at 1 Mbps (with an outgoing rate of 20 kbps), then you will need 10 s x (1 Mbps - 20 kbps) = 9.8 Mbit = 1.225 MByte of buffer memory.
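As a sanity check, here is the same arithmetic in a few lines of C; the 10 s burst length is just the assumed worst case from above, not a property of the system.

#include <stdio.h>

int main(void)
{
    const double burst_s = 10.0;      /* assumed worst-case burst duration */
    const double in_bps  = 1.0e6;     /* CAN input rate: 1 Mbit/s          */
    const double out_bps = 20.0e3;    /* RS-232 output rate: 20 kbit/s     */

    double bits  = burst_s * (in_bps - out_bps);   /* 9.8 Mbit backlog     */
    double bytes = bits / 8.0;                     /* 1,225,000 bytes      */

    printf("required buffer: %.1f Mbit = %.3f MByte\n", bits / 1e6, bytes / 1e6);
    return 0;
}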
I am connecting an external ADC to an STM32L0 microcontroller using the SPI interface.
Will it be slower than using the internal ADC?
The external ADC might be faster, but does the SPI interface act as a bottleneck?
Or is there any other way to use an external ADC?
The speed of an ADC's conversion depends on how and what you measure:
How long does it take to initiate a conversion?
Some ADCs have an automated mode that repeatedly converts their input. Others just need a bit set; yet others need a whole command sequence.
How long does a conversion take?
There are different conversion algorithms: SAR, flash, you name it. Clocked conversions depend on the conversion clock rate, and most clocked ADCs have some overhead cycles for sampling, etc.
How long does it take to fetch the result?
This depends mainly on the interface. You mention SPI, so its clock rate and the number of bytes to exchange add to this. There are ADCs with a parallel interface.
Therefore, to decide whether this or that ADC is faster, you need to calculate all of these values. You might develop a gut feeling after some example calculations, though.
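To make that concrete, here is one such example calculation. Every number in it is an assumption picked purely for illustration: a hypothetical external SAR ADC with a 2 us conversion time, read as two bytes over an 8 MHz SPI clock.

#include <stdio.h>

int main(void)
{
    const double t_conv_us = 2.0;      /* assumed conversion time of the external ADC  */
    const double spi_hz    = 8.0e6;    /* assumed SPI clock on the STM32L0             */
    const double xfer_bits = 16.0;     /* 2 bytes clocked out to fetch a 12-bit result */

    double t_xfer_us  = xfer_bits / spi_hz * 1e6;   /* 2 us on the bus */
    double t_total_us = t_conv_us + t_xfer_us;      /* 4 us per sample */

    printf("per sample: %.2f us -> about %.0f ksamples/s\n",
           t_total_us, 1e3 / t_total_us);
    return 0;
}

Compared with the STM32L0's internal ADC, which ST specifies at roughly 1 Msample/s, the SPI transfer is indeed often the bottleneck the question suspects.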
I have been using the Teensy 3.6 microcontroller board (180 MHz ARM Cortex-M4 processor) to try and implement a driver for a sensor. The sensor is controlled over SPI and when it is commanded to make a measurement, it sends out the data over two lines, DOUT and PCLK. PCLK is a 5 MHz clock signal and the bits are sent over DOUT, measured on the falling edges of the PCLK signal. The data frame itself consists of 1,024 16-bit values.
My first attempt consisted of a relatively naïve approach: I attached an interrupt to the PCLK pin looking for falling edges. When it detects a falling edge, it sets a bool indicating that a new bit is available and sets another bool to the value of the DOUT line. The main loop of the program assembles uint16_t values from these bits and collects 1,024 of them for the full measurement frame.
However, this program locks up the Teensy almost immediately. From my experiments, it seems to lock up as soon as the interrupt is attached. I believe that the microprocessor is being swamped by interrupts.
I think that the correct way of doing this is by using the Teensy's DMA controller. I have been reading Paul Stoffregen's DMAChannel library but I can't understand it. I need to trigger the DMA measurements from the PCLK digital pin and have it read in bits from the DOUT digital pin. Could someone tell me if I am looking at this problem in the correct way? Am I overlooking something, and what resources should I view to better understand DMA on the Teensy?
Thanks!
I put this on the Software Engineering Stack Exchange because I feel that this is primarily a programming problem, but if it is an EE problem, please feel free to move it to the EE SE.
Is DMA the Correct Way to Receive High-Speed Digital Data on a Microprocessor?
There is more than one source of 'high speed digital data'. DMA is not the globally correct solution for all data, but it can be a solution.
it sends out the data over two lines, DOUT and PCLK. PCLK is a 5 MHz clock signal and the bits are sent over DOUT, measured on the falling edges of the PCLK signal.
I attached an interrupt to the PCLK pin looking for falling edges. When it detects a falling edge, it sets a bool that a new bit is available and sets another bool to the value of the DOUT line.
This approach would be called 'bit bashing'. You are using the CPU to physically measure the pins. It is a worst-case solution that I see many experienced developers implement; it will work with any hardware connection. Fortunately, the Kinetis K66 has several peripherals that may be able to assist you.
Specifically, the FTM, CMP, I2C, SPI and UART modules may be useful. These hardware modules are capable of reducing the workload from processing each bit to processing groups of bits. For instance, the FTM supports a capture mode. The idea is to ignore the PCLK signal and just measure the time between edges on the data line. These times will be multiples of a bit period (one PCLK cycle): if the timer captures a two-bit period, then you know that two identical ones or zeros were sent.
Also, your signal looks like SSI, which is a 'digital audio' channel. Unfortunately, the K66 doesn't have an SSI module. Typical I2C is open drain and always has a start bit and a fixed word size. It may be possible to use it if you have some knowledge of the data and/or can attach some circuitry to fake some bits (to be removed later).
You could use the UART and the time between characters to capture data. The time will correspond to a run of bits that aren't the start bit. However, it looks like this UART module requires stop bits (and the SIM feature is probably very limited).
Once you do this, the decision between DMA, interrupts and polling can be made. There is nothing faster than polling if the CPU actually uses the data. DMA and interrupts are needed if you need to multiplex the CPU with the data transfer. DMA is better if the CPU doesn't need to act on most of the data, or if the work the CPU is doing is not memory-intensive (number crunching). Interrupts depend on your context-save overhead, which can be minimized depending on the facilities your main loop uses.
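To illustrate the polling option (and why it is so demanding here), this is a minimal Teensyduino-style sketch that spins on PCLK and shifts DOUT into 16-bit words. The pin numbers and the idea that interrupts can be disabled for the whole frame are assumptions; at 5 MHz PCLK a 180 MHz core has only about 36 CPU cycles per bit, so this may or may not keep up and is shown only as a sketch.

#include <Arduino.h>

#define PIN_PCLK 2              /* assumed input pin wired to PCLK */
#define PIN_DOUT 3              /* assumed input pin wired to DOUT */

void read_frame(uint16_t frame[1024])
{
    noInterrupts();                                   /* nothing may steal cycles here    */
    for (int w = 0; w < 1024; w++) {
        uint16_t v = 0;
        for (int b = 0; b < 16; b++) {
            while (!digitalReadFast(PIN_PCLK)) ;      /* wait for PCLK to go high         */
            while ( digitalReadFast(PIN_PCLK)) ;      /* then wait for the falling edge   */
            v = (v << 1) | digitalReadFast(PIN_DOUT); /* sample DOUT on the falling edge  */
        }
        frame[w] = v;
    }
    interrupts();
}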
Some glue circuitry to adapt the signal to one of the K66 modules could go a long way to making a more efficient solution. If you can't change the signal, another (NXP?) SOC with an SSI module would work well. The NXP modules usually support chaining to an eDMA module as well as interrupts.
What is the difference between SPI and serial? In an article about inter-processor communications, it states that serial interfaces are being replaced with SPI for better/faster communication. What exactly is the difference?
The word "serial" doesn't mean much. But I'll assume that you are talking about traditional serial communication standards. What's fundamentally different about SPI is that it is synchronous. As opposed to, say, RS-232, an asynchronous signaling standard.
An important property of asynchronous signaling is the baud rate, the frequency at which the bits in a byte are sent. The receiver has to do extra work to recover the clock that was used by the transmitter. A typical UART does so by over-sampling the signal at a rate 16 times the baud rate. The start bit is important: it synchronizes the over-sampling clock. Delays between bytes can be arbitrary; the receiver re-synchronizes for each individual byte. The problems with this scheme are mismatch between the transmitter and receiver clock frequencies and clock jitter, which effectively limit the baud rate.
This is not a problem with SPI: it has an extra signal line that carries the clock, so both the transmitter and the receiver use the exact same clock. It is therefore immune to mismatch and jitter, allowing higher transfer rates. There are no stability requirements at all on the clock frequency; the signals can simply be generated in software. Typical wiring uses four lines:
SCLK is the clock signal, MOSI and MISO carry the data, and SS is a chip-select signal. A common ground is assumed. More about it in the Wikipedia article on SPI; electronics.stackexchange.com is a good site to ask further questions about it.
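The "generated in software" point is easy to see in code. Below is a minimal bit-banged SPI master sketch (mode 0, MSB first); gpio_write()/gpio_read() are hypothetical placeholders for whatever GPIO routines your platform provides.

#include <stdint.h>

void gpio_write(int pin, int level);   /* hypothetical GPIO helpers, platform-specific */
int  gpio_read(int pin);

enum { PIN_SCLK, PIN_MOSI, PIN_MISO, PIN_SS };

/* Exchange one byte with the selected slave; the clock is made purely in
   software, so its exact frequency and jitter do not matter. */
uint8_t spi_transfer(uint8_t out)
{
    uint8_t in = 0;
    for (int b = 7; b >= 0; b--) {
        gpio_write(PIN_MOSI, (out >> b) & 1);   /* put the next data bit on MOSI  */
        gpio_write(PIN_SCLK, 1);                /* rising edge: both sides sample */
        in = (in << 1) | gpio_read(PIN_MISO);   /* read the slave's bit on MISO   */
        gpio_write(PIN_SCLK, 0);                /* falling edge: both sides shift */
    }
    return in;
}

Contrast this with a UART receiver, which has to recover the transmitter's timing from the start bit and over-sampling alone.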
The previous answer is somewhat misleading.
SPI and UART both transfer binary data as bytes and/or words, depending on the hardware. As explained above, one is synchronous and one is asynchronous. Both require an extra data line to be bidirectional. ASCII is an agreed upon interpretation of the binary data and is not actually a factor in either.
The first answer is almost correct with some small comments:
1) SPI is a subtype of SSI (another example is RS-422).
2) SPI uses the master/slave concept with a CS/SS (chip select / slave select) pin...
Thus a master can have multiple slaves and select between them using the SS pin. Also, on some chips, using the SS the chip can be switched from master to slave.
SPI is a bidirectional data protocol. The difference is that SPI uses an exchange of binary data, while UART uses ASCII, making its data transfer much slower.
I'm new to microcontroller programming and I have interfaced my microcontroller board to another device that provides a status based on the command sent to it, but this status is provided on the same I/O pin that is used to provide data. So basically, I have an 8-bit data bus that is used as an output from the microcontroller, but for certain commands I get a status back on one of the data lines if I choose to read it. So I would be required to change the direction of this one line to read the status, converting the line from an output to an input and then back to an output. Is this acceptable practice, or will changing the direction of the I/O pin this frequently cause instability?
Thanks.
There should not be any problem with changing the direction of the I/O line to read the status returned by the peripheral, provided that you change the line to an input before the peripheral starts to drive it, and do not drive the line as an output again until the peripheral has stopped driving it. What you must avoid is contention between the two drivers, i.e. having the two ends driven to opposite states by the processor and the peripheral. That would result in, at best, a large spike in power consumption, or at worst, blown pin-driver circuitry in the processor, the peripheral, or both.
You do not say what the processor or peripheral are, so I cannot tell whether there are any control bits in the interface that gate when the remote device outputs the status, which would let you know whether the peripheral is driving the line at any given time.
I've done this on digital I/O pins without any problems but I'm very far from an expert on this. It probably depends entirely on which microcontroller you are using though.
Yes, it's perfectly fine to change I/O direction on microcontroller repeatedly.
That's the standard method of communicating over open-collector buses such as I2C and the iButton. (see PICList: busses for links to assembly-language code examples).
Transmit a 0 bit: set the output LATx bit to 0, and then set the TRISx bit to OUTPUT.
Transmit a 1 bit: keep the output LATx bit at 0, and set the TRISx bit to INPUT (let the external pull-up resistor pull the line high).
Listen for a response from the peripheral: keep the output LATx bit at 0, and set the TRISx bit to INPUT. The external pull-up resistor pulls the line high when the peripheral is transmitting a 1, or the peripheral pulls the line low when it is transmitting a 0. Read the bit from the PORTx pin.
If both ends of the bus correctly follow this protocol (in particular, if neither end ever actively drives the line high), then you never have to worry about contention or current spikes.
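Here is a minimal sketch of those three operations in C for a PIC that has LATx registers, assuming the bus sits on RB0 with an external pull-up; the pin choice is an assumption, the LATx/TRISx/PORTx names are the usual PIC register names.

#include <xc.h>

#define BUS_LAT   LATBbits.LATB0     /* output latch for the bus pin     */
#define BUS_TRIS  TRISBbits.TRISB0   /* direction: 1 = input, 0 = output */
#define BUS_PIN   PORTBbits.RB0      /* actual pin state                 */

void bus_init(void)
{
    BUS_LAT  = 0;        /* latch stays 0 forever: we only ever drive low      */
    BUS_TRIS = 1;        /* released: the external pull-up holds the line high */
}

void bus_write_bit(int bit)
{
    BUS_TRIS = bit ? 1   /* 1: release the line, the pull-up takes it high */
                   : 0;  /* 0: drive the line low through the 0 latch      */
}

int bus_read_bit(void)
{
    BUS_TRIS = 1;        /* release the line so the peripheral can drive it      */
    return BUS_PIN;      /* read whatever the peripheral (or pull-up) puts there */
}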
It's important to remember that any high-speed I/O switching generates EMI.
Depending on the switching frequency, board layout and device sensitivities, this EMI can affect the performance and reliability of your application.
If you are having problems in your application, use an oscilloscope to check for radiated EMI on your board traces.