I just want to know which is more efficient:
Using the NTPClient library and making a request to an NTP server to get the time.
Using an RTC and reading the time from it.
Take into account that I use deep sleep, and once the device wakes up it has to set up the WiFi anyway, because I am sending data to cloud storage; just before sending, I get the time with the NTPClient library.
But I am thinking that I may save some battery if I use an RTC.
What do you think?
Thanks
I'm starting to work on a project that uses both an RTC and NTP, running on battery and using deep sleep.
The advantage of using an RTC module (in my case over the I2C protocol) is that getting the time takes tens of milliseconds, whereas NTP needs at least 1 or 2 seconds depending on the library used.
Furthermore, an RTC module is much more reliable, as there is no possibility of connection problems or anything else. During my tests the RTC module has never failed, whereas the WiFi/internet/NTP connection sometimes has.
The RTC module can be programmed offline and then mounted in the circuit. It has a back-up battery that should guarantee a duration of a couple of years (like a wristwatch). In my case (as also recommended by Marcel Stör) I will use date and time coming from the RTC module and only once a week I will try the calibration using the NTP protocol.
Last but not least, keep in mind that many IoT cloud platforms accept only the data and use the arrival time of the stream itself as the timestamp, not one provided by the device.
So, for battery saving (and reliability), it is better to use an RTC.
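For what it's worth, here is a minimal sketch of that scheme on an ESP8266. It assumes a DS3231 module with Adafruit's RTClib plus the NTPClient library; the SSID, password, and NTP server are placeholders:

```cpp
// Minimal sketch of the RTC-first approach: read time locally from a DS3231
// over I2C, and only occasionally (here, once a week) sync it against NTP.
#include <Wire.h>
#include <RTClib.h>
#include <ESP8266WiFi.h>
#include <WiFiUdp.h>
#include <NTPClient.h>

RTC_DS3231 rtc;                       // assumed DS3231 module on I2C
WiFiUDP udp;
NTPClient ntp(udp, "pool.ntp.org");

const uint32_t WEEK_SECONDS = 7UL * 24UL * 3600UL;
uint32_t lastNtpSync = 0;             // unix time of last successful NTP sync

void setup() {
  Serial.begin(115200);
  Wire.begin();
  rtc.begin();
}

void loop() {
  DateTime now = rtc.now();           // tens of ms over I2C, no network needed
  Serial.println(now.unixtime());

  // Once a week, try to recalibrate the RTC against NTP; if the network
  // fails, the RTC simply keeps running on its own.
  if (now.unixtime() - lastNtpSync > WEEK_SECONDS) {
    WiFi.begin("your-ssid", "your-password");      // placeholder credentials
    unsigned long start = millis();
    while (WiFi.status() != WL_CONNECTED && millis() - start < 10000) delay(100);
    if (WiFi.status() == WL_CONNECTED) {
      ntp.begin();
      if (ntp.update()) {                          // the 1-2 s network hit
        rtc.adjust(DateTime((uint32_t)ntp.getEpochTime()));
        lastNtpSync = ntp.getEpochTime();
      }
    }
    WiFi.disconnect(true);                         // radio off again
  }
  delay(1000);
}
```

In a real deep-sleep design the loop body would run once per wake-up, with lastNtpSync kept in RTC memory or read back from the module.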
I am involved in a project where we have some kind of IoT device: an NXP processor with an LTE modem on a PCB. The software running on it connects to the modem over a single UART interface. It initializes the modem through AT commands, and finally makes a data call to the provider (PPP).
Then, it uses lwIP (light weight IP) to open some mqtt subscriptions, and allow user code to make http get/post requests to our servers.
Every 15 minutes we want to retrieve the signal strength from the modem and report it back to the server. What I do now is put the modem back in command mode, retrieve the signal strength info, go back to data mode, and resume normal operation.
The round trip from data mode to command mode and back to data mode takes several seconds (4-5 ish). This is annoying, because during that time we are not responsive to commands.
I've read about GSM mux 07.10. By following a defined protocol it allows you to create virtual serial ports over one physical UART. That sounds nice, although I realize it will come at some cost in performance (bytes will be added to each frame we send in either command mode or data mode).
The GSM mux 07.10 spec dates from 1999. I am far from an expert in mobile solutions. I was wondering: is muxing still the way to go? How does a typical smartphone deal with this, for example? Do they include modems with more than one UART to have parallel access to AT commands and a live internet connection? Or do they in fact still rely on GSM mux?
If somebody would be so kind as to give some insights, also on potential C libraries that implement GSM mux 07.10? It seems that TinyGSM implements it (although I can't seem to find where), and I can also find the Linux kernel driver that implements GSM mux 07.10. But that driver is written on top of the tty interfaces in Linux, so I would have to reverse engineer the kernel driver, strip out the tty code, and replace it with my own UART implementation.
First of all, the spec numbering is the old GSM specification numbering, so those old specs will never be updated; only the new specifications with the new numbering scheme will. I do not remember when the switch was made, but I do remember someone at work giving a presentation on 07.10 probably around 1998/1999, so the switch was probably a few years after that or around that time (and definitely before 2009).
The newer spec numbering scheme uses three digits for the first part.
So for instance the old AT command spec 07.07 is now 27.007, and the current 07.10 multiplex specification is 27.010.
The following is what I remember of 07.10.
The motivation for developing 07.10 was to support exactly the kind of scenario that you describe. Remember that back in the mid 90's, if a mobile phone had a serial interface, it was RS-232 through each manufacturer's proprietary connector at the bottom of the phone. One single serial interface.
However, in order to use the 07.10 mux in serial communication you needed to install specific serial drivers with 07.10 support in Windows (and I think there may have been some reliability issues with them?), and for that reason 07.10 never took off and became anything more than a rarely used solution.
Also, by the end of the 90's additional serial interfaces like Bluetooth and IrDA became available on many phones, and later USB as well, which added additional physical interfaces as well as native multiplexing within each protocol.
So the need for multiplexing over physical RS-232 became less of an issue, and whatever little popularity 07.10 ever had dwindled down to virtual nothing.
Fast forward a couple of decades and suddenly someone asks about it on stackoverflow. Good on you :) As far as I can tell, there are no fundamental problems with using it for the purpose you present.
Modern smartphones that support AT commands will most likely have a code base for AT command parsing with roots in the 90's, which most likely includes the AT+CMUX command. Of course manufacturers today have zero explicit wish to support it, but when it is already present it just comes along with the collection of all the other legacy AT commands that they support.
So if the modem supports AT+CMUX you should be good to go. I have no experience or recommendation with regards to client protocol libraries.
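To illustrate what "good to go" looks like from the host side, here is a rough sketch of entering mux mode. The uart_write/uart_read_line functions are hypothetical stand-ins for your own UART driver, and the exact AT+CMUX parameters and responses are modem-specific, so check your modem's AT manual:

```cpp
// Rough host-side sketch of switching a modem into 07.10 multiplexer mode.
// uart_write()/uart_read_line() are hypothetical stand-ins for a UART driver.
#include <cstddef>
#include <cstring>
#include <string>

extern void uart_write(const char *data, size_t len);     // assumed driver call
extern std::string uart_read_line(unsigned timeout_ms);   // assumed driver call

bool enter_cmux_basic_mode() {
  // AT+CMUX=<mode>[,<subset>,...]; mode 0 selects the basic option (27.010).
  const char *cmd = "AT+CMUX=0\r";
  uart_write(cmd, strlen(cmd));

  std::string resp = uart_read_line(1000);
  if (resp.find("OK") == std::string::npos)
    return false;

  // From this point on, raw AT traffic is over: every byte on the wire must
  // be wrapped in 07.10 frames. The first step is a SABM frame on DLCI 0
  // (the control channel); data channels (DLCI 1..n) are opened after that.
  return true;
}
```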
I am using an ESP8266 for a project which requires the ESP to establish a connection to the access point with as little delay as possible, but as of now it takes a minimum of 4-5 minutes to establish the connection, which is far too much. I have tried to set a static IP, gateway, subnet and DNS by passing them as parameters to the WiFi.config() function, still with no success. Could someone help me with this issue?
I have seen lengthy delays on ESP8266 WiFi connection if the WiFi is persisting its configuration to the flash memory. Anywhere from a few seconds to a minute or so.
Try to call WiFi.persistent( false ) before you call WiFi.mode() and WiFi.begin().
At the very least, that will help you narrow down the cause of the problem.
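For reference, a minimal sketch of that ordering (credentials are placeholders):

```cpp
// Minimal sketch: disable persisting WiFi settings to flash, then mode()/begin().
#include <ESP8266WiFi.h>

void setup() {
  WiFi.persistent(false);        // don't write credentials to flash on every begin()
  WiFi.mode(WIFI_STA);
  WiFi.begin("your-ssid", "your-password");   // placeholder credentials

  while (WiFi.status() != WL_CONNECTED) {
    delay(100);
  }
}

void loop() {}
```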
Ensure the access point frequency is 2.4 GHz (not 5 GHz). A 5 GHz-only network will cause a prolonged connection time (it never connects).
"The ESP8266 is not designed for 5 GHz." Source
The NodeMCU V1.0 (as pictured) uses an ESP8266 (ESP-12E) chip; its antenna is configured for 2.4 GHz only.
I think the version of the <ESP8266WiFi.h> library you are using may have problems; you could try an older version of it.
We have a custom-built microcontroller card (STM32 / ARM Cortex-M3) which has a camera attached. The camera captures 10-bit greyscale at 1280x1024 resolution. We need to send that image data back to a PC host over serial. That's quite a big chunk of data; at 115200 baud the transfer would take about 3 minutes, assuming everything goes fine. Anything I implement to ensure robustness would seem to slow that process down (e.g. split into blocks, checksum the blocks, ask for a resend if corrupt). So I was wondering how people make a good compromise between speed and integrity.
We are currently seeing real transfer times of about 6 minutes. We had to set the UART baud rate to a weird value - 1036800 - because at 115200 there were issues (the PC side is running at 115200). I'm more software than hardware, so any thoughts as to why that might happen would be helpful!
Start by doing some easy compression on your image.
Either run-length encoding or delta encoding will give you less data to send.
There are much better schemes, like TIFF's compression, but you may want to trade off the complexity of TIFF-ing your buffer for simpler software on the embedded side.
Then you can afford something simple like Xmodem for your compressed data.
That has the useful property of being a standard protocol too.
That might lead you to using a terminal+xmodem transfer style interface to your host.
That would make debugging the interface pretty simple too.
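As a concrete illustration of the delta + run-length idea, here is a sketch that encodes one row of pixels, assuming the 10-bit values have already been unpacked into 16-bit words; it is illustrative rather than tuned:

```cpp
// Illustrative sketch: delta-encode a row of 16-bit pixels, then run-length
// encode the deltas. Smooth image rows produce long runs of small/zero
// deltas, which RLE shrinks considerably.
#include <stdint.h>
#include <stddef.h>

// Delta encoding: store differences between neighboring pixels.
void delta_encode(const uint16_t *in, int16_t *out, size_t n) {
  uint16_t prev = 0;
  for (size_t i = 0; i < n; i++) {
    out[i] = (int16_t)(in[i] - prev);
    prev = in[i];
  }
}

// Simple RLE: emit (run_length, value) pairs. The worst case doubles the
// size, so a real implementation would fall back to sending raw rows that
// don't compress. Returns the number of int16 words written, 0 on overflow.
size_t rle_encode(const int16_t *in, size_t n, int16_t *out, size_t out_cap) {
  size_t w = 0;
  for (size_t i = 0; i < n; ) {
    size_t run = 1;
    while (i + run < n && in[i + run] == in[i] && run < 0x7FFF) run++;
    if (w + 2 > out_cap) return 0;   // output buffer too small
    out[w++] = (int16_t)run;
    out[w++] = in[i];
    i += run;
  }
  return w;
}
```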
Tim Williscroft's answer about compressing your data is a nice first step.
Now from the serial protocol side, the real transfer rate depends a lot on how you configure and implement your software on both sides. The baud rate is not the only thing to care about:
Are you using hardware flow control? If so, you will be able to increase the baud rate significantly (x10) without generating overrun errors.
On the STM32, are you using DMA, interrupts, or, even worse, polling to manage the data transmission? I don't know the exact STM32 part you are using, but on the STM32 I used, the UART transmit FIFO was limited to 1 byte, so you are practically obliged to use DMA if you have performance issues (see the sketch after this answer).
Still on the STM32 side, you can greatly improve performance by taking care over the bus accesses (and possible conflict arbitration) your application performs.
Moreover, on the STM32 all clocks are configurable. Using an external high-speed oscillator (if one is available on the board) may be a good way to improve performance over the internal RC oscillator. Also take care with the internal bus clock configuration!
Now from the PC side, the performance may be affected by how your application buffers and processes the received data.
The first thing to do is to find out where the time is actually going:
Observe your UART signal with a scope. Since the transfer takes twice the theoretical time, you shouldn't see a continuous signal; the line must be idle about half the time. Without hardware flow control, it is the STM32 that is taking time to output the data. With hardware flow control, also look at the flow-control signals to determine which side causes the pauses (it may be both).
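Regarding the DMA point above, here is a rough sketch of DMA-driven transmission with the ST HAL; huart1 and the usual CubeMX-generated init code are assumed to exist already:

```cpp
// Rough sketch of DMA-driven UART transmission with the ST HAL (assumes a
// configured huart1 + DMA channel from CubeMX).
#include "stm32f1xx_hal.h"   // pick the header matching your STM32 family

extern UART_HandleTypeDef huart1;   // configured elsewhere (CubeMX)

volatile uint8_t tx_busy = 0;

void send_block(uint8_t *buf, uint16_t len) {
  while (tx_busy) { /* wait for the previous DMA transfer to finish */ }
  tx_busy = 1;
  // Starts the transfer and returns immediately; the DMA engine feeds the
  // UART byte by byte with no CPU involvement until completion.
  HAL_UART_Transmit_DMA(&huart1, buf, len);
}

// The HAL calls this from the DMA transfer-complete interrupt.
void HAL_UART_TxCpltCallback(UART_HandleTypeDef *huart) {
  if (huart == &huart1) tx_busy = 0;
}
```

With double buffering (fill one block while the other is being sent) the UART line can be kept busy almost continuously.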
I have an Arduino Uno connected to a PC via USB, and I am communicating with a temperature sensor over serial from PHP.
At present, the temperature sensor records a value and sends it straight down the serial connection to the PC. However, this may not be read for a long period of time. Therefore, I think this method may be inefficient.
I was thinking I could listen on the Arduino for a serial message from the PC requesting the temperature before actually checking it and sending the reading back to the PC over serial, therefore becoming more efficient, as it's not checking the temperature every 0.1 seconds.
My Questions are as follows:
Is this actually worth doing from a code efficiency point?
Is there a better way to improve this than my suggested method?
Would these changes improve battery performance? (E.g. if I were using a different communication model, not Serial, and therefore might need batteries.)
A1: Since you already have the routines to measure the temperature and send it to the PC, there should not be much coding left to do in order to wait for a trigger from the PC before performing the routine (see the sketch below).
A2: There always is a 'better' way :)
A3: If your µC does not have many other tasks to perform that keep it busy you can definitely save a lot of juice by putting the µC to sleep between those short periods of activity - which you should do anyway when running off batteries.
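As a sketch of A1 on the Arduino side (the 'T' trigger byte and the analog pin are arbitrary choices):

```cpp
// Minimal request/response sketch: the Arduino only reads the sensor when the
// PC sends a trigger byte ('T' here is an arbitrary choice).
const int SENSOR_PIN = A0;   // assumed analog temperature sensor

void setup() {
  Serial.begin(9600);
}

void loop() {
  if (Serial.available() > 0) {
    char cmd = Serial.read();
    if (cmd == 'T') {
      int raw = analogRead(SENSOR_PIN);
      // The conversion depends on your sensor; for a TMP36 it would be
      // (raw * 5.0 / 1023 - 0.5) * 100 degrees C.
      Serial.println(raw);
    }
  }
  // With nothing to do, the MCU could be put into a sleep mode that wakes on
  // serial RX (e.g. via the avr/sleep.h API) to save battery.
}
```

On the PHP side you would write the trigger byte to the serial port and then read one line back.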
I'm building a clock. I want to set the clock by plugging an Ethernet cable into the clock. Most of the time the clock would not be plugged into the Internet.
I have an Arduino board and an Ethernet shield that can successfully connect to a time server and read the time (See the UdpNtpClient example file under Examples > Ethernet).
The problem is that to configure the Ethernet shield, the Ethernet.begin() call blocks for 60 sec if the shield is not connected to the Internet. I would like the clock to tell the time and periodically check to see if it has an Ethernet cable plugged in, and if so, make any corrections to the time. Most of the time this check is going to have a negative result, however, so I can't have the clock freeze for 60 sec each time.
Is it possible to detect whether the cable is connected in a quicker way than the Ethernet.begin() function? Is it possible to write a "multithreading" solution, where Ethernet.begin() is non-blocking?
Looking at the stock Ethernet library, it's not possible to prevent it from blocking.
I'm guessing you're using DHCP? This appears to be where the blocking comes from. Do you get the same problem when using a static IP address?
There's a number of blog posts available on Google covering this exact issue, including some forks of the Ethernet library that would allow you to do this in a non-blocking fashion.
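One concrete option: version 2.0 of the stock Ethernet library added Ethernet.linkStatus(), which reads the link state from the controller without blocking. Note that it reports Unknown on the older W5100 chip, so it may not help on every shield. A minimal sketch:

```cpp
// Quick cable-detection check with Ethernet library >= 2.0 (W5200/W5500 only;
// the original W5100 shield reports Unknown).
#include <SPI.h>
#include <Ethernet.h>

byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };  // placeholder MAC

void setup() {
  // Nothing needed up front; linkStatus() talks to the chip directly.
}

void loop() {
  if (Ethernet.linkStatus() == LinkON) {
    // Cable is plugged in: now it's worth paying the DHCP/NTP cost.
    if (Ethernet.begin(mac) != 0) {
      // ... query the NTP server and correct the clock ...
    }
  }
  delay(5000);  // poll the link every few seconds without blocking the clock
}
```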
In the DHCP.h header file you can find the class definition for a new DHCP connection.
Then you can see that there is a default timeout value of 60000ms.
(Helpful hint: if you get past the initial effort and start using Eclipse to manage your Arduino projects, it's really great, because you can just press F3 on a function like Ethernet.begin and take a bit of a trip through the libraries to find these kinds of settings.)
It's difficult to know how long the timeout should be, but a minute seems like a really long time. Of course you don't want to go too short.
I wouldn't go less than 15 seconds.
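If you would rather not edit the library, newer versions of the Ethernet library also expose that timeout as optional parameters of Ethernet.begin() (DHCP timeout and per-response timeout, in milliseconds), if I read the headers correctly:

```cpp
// Ethernet.begin(mac) with a shorter DHCP timeout. In Ethernet library 2.x the
// DHCP variant is begin(mac, timeout = 60000, responseTimeout = 4000), so the
// 60 s block can be cut down without touching the library source.
#include <SPI.h>
#include <Ethernet.h>

byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };  // placeholder MAC

void setup() {
  // Give up on DHCP after 15 s (per the advice above), 2 s per response.
  if (Ethernet.begin(mac, 15000, 2000) == 0) {
    // No DHCP lease: keep showing the locally kept time and retry later.
  }
}

void loop() {}
```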