I'm using an SSD1306 OLED and have a question about it.
When writing data to its buffer via I2C, some libraries write 16 bytes every time.
For example:
void SSD1306::sendFramebuffer(const uint8_t *buffer) {
    // Set Column Address (0x00 - 0x7F)
    sendCommand(SSD1306_COLUMNADDR);
    sendCommand(0x00);
    sendCommand(0x7F);

    // Set Page Address (0x00 - 0x07)
    sendCommand(SSD1306_PAGEADDR);
    sendCommand(0x00);
    sendCommand(0x07);

    for (uint16_t i = 0; i < SSD1306_BUFFERSIZE;) {
        i2c.start();
        i2c.write(0x40);
        for (uint8_t j = 0; j < 16; ++j, ++i) {
            i2c.write(buffer[i]);
        }
        i2c.stop();
    }
}
Why don't they write 1024 bytes directly?
Most of the I2C libraries I've seen source code for, including the Arduino's, chunk the data in this fashion. While the I2C standard doesn't require this, as another poster mentioned, there may be buffer considerations. The .stop() here might signal the device to process the 16 bytes just sent and prepare for more.
Invariably, you need to read the datasheet for your device and understand what it expects in order to display properly. They say "RTFM" in software, but hardware is at least as unforgiving: you must read and follow the datasheet whenever you interface with an external hardware device.
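For illustration, here is a minimal sketch of the same chunked transfer using the Arduino Wire library (the 0x3C display address is an assumption; the classic AVR Wire implementation buffers only 32 bytes per transaction, so 16 data bytes plus the 0x40 control byte fit comfortably):
#include <Wire.h>

void sendFramebufferWire(const uint8_t *buffer) {
    for (uint16_t i = 0; i < 1024; i += 16) {
        Wire.beginTransmission(0x3C);  // assumed SSD1306 I2C address
        Wire.write(0x40);              // control byte: display data follows
        Wire.write(&buffer[i], 16);    // one 16-byte chunk
        Wire.endTransmission();        // STOP gives the display time to digest
    }
}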
Segmenting data into more frames helps when the receiving device doesn't have enough buffer space or simply isn't fast enough to digest the data at full rate. The START/STOP approach might give the receiving device a bit of time to process the received data. In your specific case, the 16-byte chunks seem to be exactly one display line's worth of data.
Another reason for segmenting transfers is multi-master operation, but that doesn't seem to be the case here.
I am coding an offline, battery-powered ESP32 to take periodic sensor readings and store them until a hotspot is found, at which point it connects and pushes the data elsewhere. I am relatively new to the ESP32 and am asking for suggestions on the best way to do this.
I was thinking of storing the reading and DateTime in SPIFFS memory and running a webserver that starts when a network is found, checking every minute or so. Since it is battery-powered, I would also like to deep sleep the board to save power. Does the setup() function run again when the board comes out of deep sleep or would I need to have my connectToWiFi function inside the loop?
Is this viable? And are there any better routes to take? I've seen things about asynchronous servers and using the ESP32 as an access point that could maybe work. Is it best to download the file through a web server or send the file line by line to a free online database?
Deep sleep on the ESP32 is almost the equivalent of being power cycled - the CPU restarts, and any dynamic memory will have lost its contents. An Arduino program will enter setup() after deep sleep and will have to completely reinitialize everything the program needs to run.
There is a very small area (8 Kbytes) of static memory associated with the real-time clock (RTC) which is retained during deep sleep. You can directly reference variables stored there by using a special attribute (RTC_DATA_ATTR) when you declare the variable.
For instance, you could use a variable stored in this area to count the number of times the CPU has slept and woken up.
RTC_DATA_ATTR uint64_t sleep_counter = 0;

void setup() {
    sleep_counter++;
    Serial.begin(115200);
    Serial.print("ESP32 has woken up ");
    Serial.print(sleep_counter);
    Serial.println(" times");
}

void loop() {
    // go back to deep sleep; a timer wakeup (10 s here) is just one
    // possible wake source, so the counter increments on every wake
    esp_sleep_enable_timer_wakeup(10ULL * 1000000ULL);
    esp_deep_sleep_start();
}
Beware that it's generally not safe to store objects in this area - you don't necessarily know whether they've allocated memory that won't persist during deep sleep. So storing a String in this memory won't work. Also storing a struct with pointers generally won't work as the pointers won't point to storage in this area.
Also beware that if the ESP32 loses power, RTC_DATA_ATTR will be wiped out.
The RTC static RAM also has the advantage of not costing as much power to write to as SPIFFS.
If you need more storage than this, SPIFFS is certainly an option. Beware that ESP32 modules generally use cheap NOR flash memory which is rated for a maximum of maybe 100,000 write/erase cycles.
SPIFFS performs wear-leveling, which will help avoid writing to the same location in flash over and over again, but eventually it will still wear out. This isn't a problem for most projects but suppose you're writing to SPIFFS once a minute for two years - that's over a million writes. So if you're looking for persistent storage that's frequently written to over a very long time you might want to use a better quality of flash storage like an external SD card.
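If SPIFFS fits your write budget, appending a reading is straightforward. A minimal Arduino-ESP32 sketch (the file name and CSV layout are my assumptions; call SPIFFS.begin(true) once in setup()):
#include "SPIFFS.h"

// Append one timestamped reading as a CSV line
void logReading(const char *timestamp, float value) {
    File f = SPIFFS.open("/readings.csv", FILE_APPEND);
    if (!f) return;  // open failed (is SPIFFS mounted?)
    f.printf("%s,%.2f\n", timestamp, value);
    f.close();
}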
If an SD card is not an option (beware you don't pull out the SD card while writing!), I would write to SPIFFS, or do a direct write with esp_partition_write().
For the latter: if you use fixed-size structs for your sensor data (plus time etc.) and start the partition with a small header struct (a mini FAT) that records which entry to process next, it's easy to retrieve data (no fuss with reading lines). Keep in mind that every time you wipe the flash, the wear counts for that whole block! So if you can accept old data being present but ignored, this can dramatically reduce wear.
For example, say you write:
struct SensorRecord {  // the name is just illustrative
    uint8_t day;
    uint8_t month;
    uint8_t year;    // year minus 2000, max 255
    uint8_t hour;
    uint8_t minutes;
    uint8_t seconds;
    uint8_t sensorMSB;
    uint8_t sensorLSB;
};
That’s 8 bytes.
The first struct (call it the mini FAT):
struct MiniFat {  // the name is just illustrative
    uint8_t firstToProcessMSB;
    uint8_t firstToProcessLSB;
    uint8_t amountToProcessMSB;
    uint8_t amountToProcessLSB;
    uint8_t ID0;
    uint8_t ID1;
    uint8_t ID2;
    uint8_t ID3;
};
Also eight bytes. For the ID you can use some magic values to recognize a properly initialized partition, but that's up to you. It's only a suggestion!
In a 65,536-byte partition you can append 8192 records (minus one for the mini FAT) before you have to erase, and you can erase a maximum of about 100,000 times…
When your device makes contact you read out the first bytes. Check that the ID is OK, then read the start position. With fseek you step 8 bytes per hop to the end position and read all the values in one go until you reach the end. If successful, you change the start position to the end + 1, and only erase when things get tricky (not enough space).
It's wise to erase before the longest expected offline stretch would run you out of space; otherwise you will lose data. Or you just make the partition bigger.
In this example you could write every minute for about five days before an erase is needed.
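A minimal sketch of the append path with the ESP-IDF partition API (the "storage" partition label and the SensorRecord/MiniFat structs above are assumptions; error handling is reduced to a bool):
#include "esp_partition.h"

// Append record number 'index' after the 8-byte mini FAT.
// Assumes the partition was erased beforehand and records are written in order.
bool appendRecord(const SensorRecord *rec, uint16_t index) {
    const esp_partition_t *part = esp_partition_find_first(
        ESP_PARTITION_TYPE_DATA, ESP_PARTITION_SUBTYPE_ANY, "storage");
    if (part == NULL) return false;
    size_t offset = sizeof(MiniFat) + (size_t)index * sizeof(SensorRecord);
    if (offset + sizeof(SensorRecord) > part->size) return false;  // full: erase needed
    return esp_partition_write(part, offset, rec, sizeof(SensorRecord)) == ESP_OK;
}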
THE GOAL
In my Qt application, I need to control a GPIO pin, depending on data being sent over the serial bus. So, I need to set it to HIGH for as long as I transmit data, and to LOW, immediately after the transmission ends. Consider it as a serial communication flow control pin, which when set to 1 it enables transmission, and when set to 0 enables receive of data. The entire system is half-duplex and communicates in a master-slave fashion.
THE PROBLEM
I managed to come close to a solution by setting it to HIGH immediately before any transmission, introducing a constant delay (I used QThread::usleep()) that depends on the baud rate, and then setting it to LOW again, but I was getting random "stretchings" of the pulse (it stayed HIGH longer than it should) when I visualized it with an oscilloscope.
ATTEMPTED SOLUTIONS
Well, it seems that some "magic" is taking place, which adds some extra delay, on top of the one I have manually defined. In order to get rid of that possibility, I used the bytesWritten() signal, so I can fire my setPinLow() slot when we finish writing the actual data to the port. So my code now looks like this:
classTTY::classTTY(/*someStuff*/) : port(/*some other stuff*/)
{
    s_port = new QSerialPort();
    connect(s_port, SIGNAL(bytesWritten(qint64)), this, SLOT(setPinLow()));
    if (GPIOPin->open(QFile::ReadWrite | QFile::Truncate | QFile::Text | QFile::Unbuffered)) {
        qDebug() << "GPIO pin ready to switch.";
    } else {
        qDebug() << "Failed to access GPIO pin";
    }
}

bool classTTY::sendData(data, replyLength)
{
    gpioPinEnable(true);
    if (s_port->isOpen()) {
        s_expectedReplyLength = replyLength;
        s_receivedData.clear();
        s_port->flush();
        s_port->write(data);
        return true;
    }
    return false;
}

void classTTY::setPinLow()
{
    gpioPinEnable(false);
}

void classTTY::gpioPinEnable(bool enable)
{
    if (enable) {
        GPIOPin->write("1");
    } else {
        GPIOPin->write("0");
    }
}
After implementing it the pin started to give really short pulses, much more like "spikes", which implies (I think) that now it stays HIGH for as long as the Qt write() process lasts, and not while the actual propagation of the data lasts.
THE QUESTION(S)
1. What is that extra delay being added when I use the naive QThread::usleep approach, that causes the stretch of the pulse?
2. Why is the signal-slot approach not working, since it is event-driven?
3. In general, how can I instruct the pin to go active ONLY during the transmission of data and then drop again to zero, so I can receive the slave's reply?
What is that extra delay being added when I use the naive, QThread::usleep approach, that causes the stretch of the pulse?
Linux is not a real-time operating system; a thread sleep suspends the process for no less than the time specified. During the sleep, other threads and processes may run and may not yield the processor for longer than your sleep period, or may not yield at all and consume their entire OS-allocated time slice. Besides that, kernel driver interrupt handlers will always preempt a user-level process. Linux has a build option for real-time scheduling, but the guarantees remain less robust than a true RTOS and latencies are typically worse.
Note also that not only can your thread be suspended for longer than the sleep period, but the transmission may take longer than the number of bits divided by the baud rate: the kernel driver can be preempted by other drivers and introduce inter-character gaps over which you have no control.
Why is the signal-slot approach not working, since it is event-driven?
The documentation for QSerialPort::waitForBytesWritten() states:
This function blocks until at least one byte has been written to the serial port and the bytesWritten() signal has been emitted.
So it is clear that the semantics of this are "some data has been written" rather than "all data has been written". It will return whenever a byte is written; then if you call it again, it will likely return immediately if bytes are continuing to be written (because QSerialPort is buffered and will write data independently of your application).
In general, how can I instruct the pin to go active ONLY during the transmission of data and then drop again to zero, so I can receive the slave's reply?
Unfortunately Qt is not the answer; this behaviour needs to be implemented in the serial port kernel driver, or at least at a lower level than Qt. The Qt QSerialPort abstraction does not give you the level of control or insight into what actually occurs "on the wire" that you need. It is somewhat arm's-length from the hardware, for good reason.
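For reference, a minimal sketch of that lower-level route on Linux, assuming your UART driver supports RS-485 mode and the direction pin is wired to RTS (both assumptions):
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/serial.h>

// Ask the kernel driver to raise RTS for exactly the duration of each transmission
int open_rs485(const char *dev)
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;
    struct serial_rs485 rs485 = {0};
    rs485.flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND;
    if (ioctl(fd, TIOCSRS485, &rs485) < 0) {
        // driver has no RS-485 support; user-space GPIO cannot
        // achieve the same timing guarantees
    }
    return fd;
}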
However, there is a simple solution: don't bother! It seems entirely unnecessary. This is a master-slave communication, and as such the data itself is the flow control. The slave does not talk until spoken to, and the master must expect and wait for a reply after it has spoken. Why does the slave need any permission to speak other than that implied by being spoken to?
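A minimal sketch of that request/reply pattern with QSerialPort (blocking waits for brevity; the 100 ms timeouts are illustrative):
QByteArray sendAndReceive(QSerialPort &port, const QByteArray &request, int replyLength)
{
    port.write(request);
    port.waitForBytesWritten(100);  // drain the request
    QByteArray reply;
    while (reply.size() < replyLength && port.waitForReadyRead(100))
        reply += port.read(replyLength - reply.size());
    return reply;  // may be short on timeout
}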
I'm trying to do something really simple and I'm having a bit of trouble getting it to work. I'm working with the MPL3115A2 Altitude/Pressure Sensor and a PIC32 uC32 board, and I'm trying to communicate between the two using I2C. (The uC32 board is similar enough to an Arduino that it's practically the same in terms of coding.)
I'm using the wire library and I'm simply trying to read register 0x0C from the MPL3115A2, which should give me the device ID.
Here's a code snippet (the define is at the top of the code and the rest is in the main loop):
#define barAddress 0x60
Wire.beginTransmission(barAddress);
Wire.send(0x0C);
Wire.endTransmission();
Wire.requestFrom(barAddress, 1);
uint8_t a = Wire.receive();
Serial.println(a, HEX);
So I start the transmission with address 0x60 (from the datasheet: "The standard 7-bit I2C slave address is 0x60 or 1100000. 8-bit read is 0xC1, 8-bit write is 0xC0."). Then I send 0x0C because that's the register I want to access. I then end the transmission, request 1 byte from address 0x60, receive that byte into an 8-bit variable, and print it.
The problem I run into is that when I print it, I just get 0. I don't get the device ID, just 0. No matter what register I try to read, I get 0.
I've been banging my head against a wall for the past few days trying to get this to work. I've attached something I've captured with a logic analyzer, as well as a list of registers from the datasheet of the MPL3115A2 that I've been trying to access.
Using a logic analyzer I can see the clock and data lines. The clock seems normal and the data line gives me the following:
START
Write['192'] + ACK
'12' + ACK
STOP
START
Read['193'] + ACK
'0' + NAK
STOP
This all seems correct to me (192 and 193 come from 8-bit write and read being 0xC0 and 0xC1), except for the '0'. I should be getting the device ID, not 0.
Thanks for any help with this!
You should look at Freescale's app note AN4481, which is referred to by the datasheet. Page 5 shows the single-byte read operation which is what you are doing, except that the register address write must not be followed by a STOP but instead uses a REPEATED-START.
I'm not familiar with the Wire library, but it looks like what you need is for the Wire.endTransmission() between the send and requestFrom not to generate a STOP: on Wire versions that support it, call Wire.endTransmission(false) so the register write ends in a repeated start. (Removing the call entirely would mean the buffered register address is never sent at all.)
Hopefully, this will solve your problem.
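In case it helps, a hedged version of the corrected snippet (0xC4 is the device ID given in the MPL3115A2 datasheet; write()/read() are the current Wire names for send()/receive()):
#define barAddress 0x60

Wire.beginTransmission(barAddress);
Wire.write(0x0C);                  // WHO_AM_I register
Wire.endTransmission(false);       // false = no STOP, a repeated start follows
Wire.requestFrom(barAddress, 1);
uint8_t id = Wire.read();          // should print C4
Serial.println(id, HEX);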
I'm using the Bluetooth serial port profile to communicate with an Arduino. The Bluetooth module (HC-06) is connected to my digital pins 10 and 11 (RX, TX). The module is working properly, but I need an interrupt on data receive. I can't periodically check for incoming data because the Arduino is working on a time-sensitive task (music playing through a passive buzzer), and I need control signals to interrupt immediately on receive. I've looked through many documents, including Arduino's own site, and they all explain how to establish regular communication by periodically checking serialPort.available(). I've found one SO question, Arduino Serial Interrupts, but that's too complicated for my level. Any suggestions on reading real-time input through serial?
Note that the current version of SoftwareSerial actually uses PCINT to detect the individual bits. Hence I believe defining it again in your sketch would conflict with SoftwareSerial's own detection of bits.
I am reluctant to suggest this, as it means modifying a core library, which is difficult to avoid when sharing interrupts. But if desperate, you could modify that routine to suit your need.
within
\arduino-1.5.7\hardware\arduino\avr\libraries\SoftwareSerial\SoftwareSerial.cpp.
//
// The receive routine called by the interrupt handler
//
void SoftwareSerial::recv()
{
    ...
    // if buffer full, set the overflow flag and return
    if ((_receive_buffer_tail + 1) % _SS_MAX_RX_BUFF != _receive_buffer_head)
    {
        // save new data in buffer: tail points to where byte goes
        _receive_buffer[_receive_buffer_tail] = d; // save new byte
        _receive_buffer_tail = (_receive_buffer_tail + 1) % _SS_MAX_RX_BUFF;
#ifdef YOUR_THING_ENABLE
        // Quickly check if it is what you want and DO YOUR THING HERE!
#endif
    }
    ...
}
But beware: you are still in an ISR, all interrupts are off, and you are blocking everything. One should not lollygag nor dilly-dally here. Do your thing quickly and get out.
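A minimal sketch of the usual "get out quickly" pattern, assuming you hook the spot marked above (the names are illustrative): set a volatile flag in the ISR and do the real work in loop().
// shared with the hook inside SoftwareSerial::recv():
//     controlByte = d;
//     controlByteSeen = true;
volatile bool controlByteSeen = false;
volatile uint8_t controlByte = 0;

void loop() {
    if (controlByteSeen) {
        controlByteSeen = false;
        // react to controlByte here, with interrupts enabled again
    }
}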
Using two USARTs running at 115200 baud on an STM32F2, one to communicate with a radio module and one for serial from a PC. The clock speed is 120 MHz.
When receiving data from both USARTs simultaneously, overrun errors can occur on one USART or the other. Doing some quick back-of-the-envelope calculations, there should be enough time to process both, as the interrupt handlers simply copy the byte to a circular buffer.
Both in theory and from measurement, the interrupt code that pushes a byte to the buffer should/does run in the order of 2-4 µs, and at 115200 baud we have around 70 µs to process each character.
Why are we seeing occasional OREs on one or the other USART?
Update - additional information:
No other ISRs in our code are firing at this time.
We are running Keil RTX with the SysTick interrupt configured to fire every 10 ms.
We are not disabling any interrupts at this time.
According to this book (The Designer's Guide to the Cortex-M Processor Family), the interrupt latency is around 12 cycles (not really deadly).
Given all the above, 70 µs is at least a factor of 10 over the time we take to clear the interrupts, so I'm not sure it is so easy to explain. Should I be concluding that there must be some other factor I am overlooking?
MDK-ARM is version 4.70.
The SysTick interrupt is used by the RTOS, so we cannot use it to time this; the other ISRs take 2-3 µs to run per byte each.
I ran into a similar problem as yours a few months ago on a Cortex M4 (SAM4S). I have a function that gets called at 100 Hz based on a Timer Interrupt.
In the meantime I had a UART configured to interrupt on character reception. The expected data over UART was 64-byte packets, and interrupting on every character caused enough latency that my 100 Hz update function was running at about 20 Hz. 100 Hz is relatively slow on this particular 120 MHz processor, but interrupting on every character was causing massive delays.
I decided to configure the UART to use PDC (Peripheral DMA controller) and my problems disappeared instantly.
DMA allows the UART to store data in memory WITHOUT interrupting the processor until the buffer is full, saving lots of overhead.
In my case, I told the PDC to store UART data into a buffer (byte array) and specified the length. When the UART via PDC filled the buffer, the PDC issued an interrupt.
In the PDC ISR:
Give the PDC a new empty buffer
Restart the UART PDC (so it can collect data while we do other stuff in the ISR)
memcpy the full buffer into a ring buffer
Exit the ISR
As swineone recommended above, implement DMA and you'll love life.
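Since the original question is about an STM32F2, here is a minimal sketch of the same idea with the ST HAL (assuming huart1 and its DMA channel were already configured, e.g. by CubeMX; the buffer size is arbitrary):
uint8_t rx_buf[64];

// Arm DMA reception once at startup: the UART now fills rx_buf
// without a per-byte CPU interrupt
void start_uart_rx(void) {
    HAL_UART_Receive_DMA(&huart1, rx_buf, sizeof rx_buf);
}

// Called by the HAL when rx_buf is full
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart) {
    if (huart->Instance == USART1) {
        // copy rx_buf into your ring buffer here, then re-arm
        HAL_UART_Receive_DMA(&huart1, rx_buf, sizeof rx_buf);
    }
}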
Had a similar problem. Short solution: change the oversampling to 8, which makes the USART clock more precise. And choose your MCU clock wisely!
huart1.Init.OverSampling = UART_OVERSAMPLING_8;
Furthermore, add a USART error handler and a mechanism to check that your data is valid, such as a CRC16. Here is an example for the STM32F0xx series; I assume it should be pretty similar across the series.
void UART_flush(void) {
    // Flush UART RX buffer if RXNE is set
    if (READ_BIT(huart1.Instance->ISR, USART_ISR_RXNE)) {
        SET_BIT(huart1.Instance->RQR, UART_RXDATA_FLUSH_REQUEST);
    }
    // Not available on F030xx devices!
    // SET_BIT(huart1.Instance->RQR, UART_TXDATA_FLUSH_REQUEST);

    // Clear all errors (if needed)
    if (READ_BIT(huart1.Instance->ISR, USART_ISR_ORE | USART_ISR_FE | USART_ISR_NE)) {
        SET_BIT(huart1.Instance->ICR, USART_ICR_ORECF | USART_ICR_FECF | USART_ICR_NCF);
    }
}
// USART error handler
void HAL_UART_ErrorCallback(UART_HandleTypeDef *huart) {
    if (huart->Instance == USART1) {
        // See if we have any errors
        if (READ_BIT(huart1.Instance->ISR, USART_ISR_ORE | USART_ISR_FE | USART_ISR_NE | USART_ISR_RXNE)) {
            // Flush errors
            UART_flush();
            // Raise the error handler
            _Error_Handler(__FILE__, __LINE__);
        }
    }
}
DMA might help as well. My problem was related to USART clock tolerances, which can cause overrun errors even with DMA implemented, since it is a USART hardware problem. Anyway, hope this helps someone out there! Cheers!
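For the CRC16 suggestion, a minimal sketch of one common variant (CRC-16/CCITT-FALSE: polynomial 0x1021, init 0xFFFF; check which variant your protocol actually expects):
#include <stdint.h>
#include <stddef.h>

uint16_t crc16_ccitt(const uint8_t *data, size_t len) {
    uint16_t crc = 0xFFFF;
    while (len--) {
        crc ^= (uint16_t)(*data++) << 8;       // fold in the next byte
        for (int i = 0; i < 8; i++)            // one shift per bit
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}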
I had this problem recently, so I implemented the UART_ErrorCallback function that was not implemented yet (only the __weak version was).
It looks like this:
void HAL_UART_ErrorCallback(UART_HandleTypeDef *huart)
{
    if (huart == &huart1)
    {
        HAL_UART_DeInit(&huart1);
        MX_USART1_UART_Init(); // my initialization code
        ...
    }
}
And this solved the overrun issue.