ESP32-S2 QT Py Deep Sleep Wakeup - TCP

I currently have an ESP32-S2 QT Py that I am using to send sensor data to a PC via wifi using TCP. I want to preserve battery life while not sensing, so I put it into deep sleep. I see that there are timer, pin, and touch alarms to wake it back up, but none of these work well for my design. Is there any way to wake a deep-sleeping ESP32-S2 with a TCP packet sent from the PC server?

This functionality is commonly called "wake on LAN" and is generally used, as you described, to allow a "magic" packet to wake up a sleeping or hibernating device.
In this case, no. The ESP32's wifi radio is off during deep sleep. With the radio off there's nothing active that can receive or detect a packet.
If you're trying to conserve power and maximize battery life, you need to keep the radio off as it uses quite a bit of power just to stay connected to a wifi network.
The only parts of the ESP32 that remain active during deep sleep are the ULP ("Ultra Low Power") coprocessor, a very limited, slow processor that can perform a few I/O actions during sleep, and a small amount of static RAM associated with the real-time clock (RTC). You can learn more about the ULP and what it can do in Espressif's documentation.
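If a periodic check-in is acceptable for your design, the usual compromise is to wake on a timer, poll the server over TCP, and go back to sleep. Below is a minimal sketch of that pattern using CircuitPython's alarm module; the 60-second interval and the poll-the-server idea are assumptions, not something from your question.

import time
import alarm

# ... connect to wifi and do the sensing / TCP exchange here while awake ...

# Arm a timer alarm 60 seconds out, then enter deep sleep; on wakeup the
# program restarts from the top of the file.
time_alarm = alarm.time.TimeAlarm(monotonic_time=time.monotonic() + 60)
alarm.exit_and_deep_sleep_until_alarms(time_alarm)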

Related

Is there a way to send data from the FPGA logic on a Zedboard to an external CPU without involvement of the ZYNQ PS?

I am a high school student, who is not very familiar with FPGAs and the Xilinx line.
I am running a ring oscillator module on a Zybo Z7 board. I am also running a counter module, which I want to sample at a high rate. I am currently sending the data through AXI to the ZYNQ processing system, and then using the built-in UART-to-USB buffer to send the data through a USB cable to my computer. On the computer side, I treat this input as a virtual serial line and use a Python script to take and log the data from an I/O stream. This method is very slow, however, and I am trying to increase the sample rate. Thus, I was wondering if I could bypass the onboard PS and connect the FPGA fabric directly to the UART buffer.
I have tried optimizing my PS code, which I have written in C. I have reached the point where it takes 30 cycles of the onboard ZYNQ clock between samples. Now, however, I have created a newer and more reliable sampling framework in the FPGA logic, which requires a 'handshake' mechanism to start and stop the counter between samples. It takes a very long time for the PS to sample the counter, send the sample, and then restart the counter, so the uptime of my sampling framework is a fraction of what I want it to be. Removing the PS would be ideal, as I know I can automate this handshake signal within the PL if I am able to connect it to a UART interface.
You can implement logic in the PL that can handle the UART communication thus bypassing the PS.
Here's an implementation you can try using: https://github.com/jakubcabal/uart-for-fpga
You would connect the UART pins to one of the Zybo Z7 Pmod ports and use an external USB-to-UART adapter such as this one (any adapter will work as long as it supports 3.3 V): https://www.adafruit.com/product/5335
The adapter built into the board is connected directly to the PS MIO pins and cannot be used by the PL.
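On the PC side your existing Python logging approach carries over unchanged. A minimal pyserial reader for the external adapter might look like the sketch below; the port name and baud rate are assumptions and must match the generics of the UART core you instantiate in the PL.

import serial  # pyserial

# Port name and baud rate are placeholders; match them to the UART core's
# configuration and to whatever device node the adapter enumerates as.
ser = serial.Serial("/dev/ttyUSB0", 921600, timeout=1)
with open("samples.bin", "ab") as log:
    while True:
        data = ser.read(4096)  # grab whatever bytes have arrived
        if data:
            log.write(data)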

What is the lowest latency communication method between a computer and a microcontroller?

I have a project in which I need the lowest latency possible (in the 1-100 microsecond range at best) for communication between a computer (Windows + Linux + MacOSX) and a microcontroller (Arduino, STM32, or anything else).
I stress that not only does it have to be fast, it has to have low latency (for example, a fast communication link to the Moon will still have high latency).
For the moment the methods I have tried are serial over USB and HID packets over USB. I get results a little under a millisecond. My measurement method is a round-trip communication, then divide by two. This is OK, but I would be much happier with something faster.
EDIT:
The question seems to be quite hard to answer. The best workaround I found is to synchronize the clocks of the computer and the microcontroller. Synchronization does, of course, require communication. With the process below, dt is half a round trip, and sync is the difference between the clocks.
# Runnable sketch of the process, assuming pyserial on the PC side and an
# MCU that replies to "ACK" with its own timestamp:
import time, serial
ser = serial.Serial("/dev/ttyACM0", 115200, timeout=1)  # port/baud assumed
t = time.time()
ser.write(b"ACK")
remotet = float(ser.readline())    # the MCU's clock at the time of reply
dt = (time.time() - t) / 2         # one-way delay = half the round trip
sync = time.time() - remotet - dt  # offset between the two clocks
Note that the imprecision of this synchronization is at most dt. The fastest possible communication channel still matters, but now I have an estimate of the precision.
Also note the technicalities arising from different timestamp conventions on different systems (µs/ms since the epoch on Linux, ms/µs since the MCU booted on Arduino).
Pay attention to clock drift on the Arduino. It is safer to synchronize often (on every measurement in my case).
USB raw HID with a hacked 8 kHz poll rate (125 µs poll interval) combined with a Teensy 3.2 (or above). Mouse overclockers have achieved an 8 kHz poll rate with low USB jitter, and the Teensy 3.2 (an Arduino-compatible board) can sustain an 8 kHz poll rate with a slightly modified USB driver on the PC side.
If you need even better than that, you are looking at PCI Express parallel ports, doing lower-latency signalling via digital pins directly on the parallel port. They must be true parallel ports, not ones behind a USB layer. DOS apps on gigahertz-class PCs have been measured achieving sub-1 µs pin signalling (on a 1.4 GHz Pentium 4), and if you write a virtual device driver you can probably get sub-100 µs within Windows.
Use raised priorities and critical sections up the wazoo, preferably a non-garbage-collected language, a minimum of background apps, and essentially consume 100% of a CPU core on your critical loop, and you can reliably achieve <100 µs. Not 100% of the time, but certainly in five-nines territory (and probably far better than that), if you can tolerate the occasional aberration.
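As a rough illustration of the raised-priority, dedicated-core advice, here is a Linux-only Python sketch; the scheduler calls require root, and the core number is arbitrary.

import os

# Switch this process to the real-time FIFO scheduler at top priority and
# pin it to one core, so the polling loop is not preempted by normal tasks.
# Linux-only; requires root (or CAP_SYS_NICE).
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(99))
os.sched_setaffinity(0, {3})  # dedicate core 3 to this loop

while True:
    pass  # replace with the actual device poll; burning the core buys consistency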
To answer the question, there are two low latency methods:
Serial or parallel port. It is possible to get latency down to the millisecond scale, although performance may vary by manufacturer. One good brand is Brainboxes, although their cards can cost over $100!
Write your own driver. It should be possible to achieve latencies on the order of a few hundred microseconds, although obviously the kernel can interrupt your process midway if it needs to service something with a higher priority. This is how a lot of scientific equipment actually works (and a lot of the people telling you that a PC can't be made to meet short deadlines are wrong).
For info, I just ran some tests on a Windows 10 PC fitted with two dedicated PCIe parallel port cards.
Sending TTL (square wave) pulses out using Python code (actually using PsychoPy Builder and PsychoPy Coder), the 2-channel oscilloscope showed very consistent offsets between the two pulses of 4 µs to 8 µs.
This was when the Python code was run at 'above normal' priority.
When run at normal priority it was mostly the same, apart from a very occasional 30 µs gap (presumably when task switching took place).
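For reference, a pulse like the ones measured above can be generated with PsychoPy's parallel-port wrapper roughly as follows; the port address is an assumption, so check what the PCIe card was actually assigned.

from psychopy import parallel
import time

# 0x0378 is the legacy LPT1 address; a PCIe card is usually mapped elsewhere,
# so treat this value as a placeholder.
port = parallel.ParallelPort(address=0x0378)
port.setData(0xFF)  # drive all eight data pins high
time.sleep(0.001)   # hold for about 1 ms
port.setData(0x00)  # back low: one TTL pulse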
In short, PCs aren't set up to handle deadlines that short. Even using a bare-metal RTOS on an Intel Core series processor you end up with interrupt latency (how fast the processor can respond to interrupts) in the 2-3 µs range (see http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/industrial-solutions-real-time-performance-white-paper.pdf).
That's ignoring any sort of communication link like USB or ethernet (or other) that requires packetizing data, handshaking, buffering to avoid data loss, etc.
USB stacks are going to have latency, regardless of how fast the link is, because of buffering to avoid data loss. The same goes for ethernet. Really, any modern stack driver on a full-blown OS isn't going to be capable of low latency, because of everything else going on in the system and the need for robustness in the protocols.
If you have deadlines that are in the single digit of microseconds (or even in the millisecond range), you really need to do your real time processing on a microcontroller and have the slower control loop/visualization be handled by the host.
You have no guarantees about latency to userland without a real-time operating system. You're at the mercy of the kernel, its time slices, and its preemption rules, and those could exceed your 100 µs maximum.
In order for a workstation to respond to a hardware event you have to use interrupts and a device driver.
Your options are limited to interfaces that offer an IRQ:
Hardware serial/parallel port.
PCI
Some interface bridge on PCI.
Or, if you're into abusing I/O, the soundcard.
USB is not one of them; it has a 1 kHz polling rate.
Maybe Thunderbolt does, but I'm not sure about that.
Ethernet
Look for a board that has a gigabit ethernet port directly connected to the microcontroller, and connect it to the PC directly with a crossover cable.
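Whichever link you choose, the round-trip-divided-by-two measurement from the question is straightforward to reproduce. Here is a sketch over UDP, assuming the microcontroller board simply echoes every datagram back; the IP address and port are placeholders.

import socket
import time

# Assumes the MCU echoes each datagram; address and port are placeholders.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.connect(("192.168.1.2", 5005))
t0 = time.perf_counter()
sock.send(b"ping")
sock.recv(64)  # block until the echo arrives
rtt = time.perf_counter() - t0
print(f"one-way latency ~ {rtt / 2 * 1e6:.1f} us")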

Does a BLE device read advertising packets when not scanning? (autoconnect)

I have read in some places that advertising packets are sent to everyone within range. However, does the other device have to be scanning to receive them, or will it receive them anyway?
The problem:
Let's say I'm establishing a piconet between 5 or 6 BLE devices. At some point I have some connections between the slaves and one master. If one of the devices then gets removed or shut off for a few days, I would like it to reconnect to the network as soon as it is turned back on.
I read about the autoconnect feature, but it seems that when you set it to true, the device starts a background scan which is actually slower (in frequency) than manual scanning. This makes me conclude that for autoConnect to work, the device that is being turned on again needs to advertise again, right? And if autoconnect really runs a slow scan in the background, it seems to me that you can never receive the advertising packets instantly unless you're scanning somehow. Does that make sense?
If so, is there any way around it? I mean, a way to detect the device coming back into range instantly?
Nothing is "Instant". You are talking about radio protocols with delays, timeouts, retransmits, jamming, etc. There are always delays. The important thing is what you consider acceptable for your application.
A radio transceiver is either receiving, sleeping or transmitting, on one given channel at a time. Transmitting and receiving implies power consumption.
When a Central is idle (not handling any connection at all), all it has to do is scan. It can do so full time (even though the spec says this should be duty-cycled). You can expect to receive an advertising packet from a peer Peripheral the first time it is transmitted.
When a Central is maintaining connections to multiple Peripherals, its transceiver time is shared among all the connections it has to maintain. Background scanning is considered low priority and gets some of the remaining transceiver time, so an advertising Peripheral may send its ADV packet while the Central is not listening.
Here comes statistical magic:
The spec says the interval between two advertising events must be augmented with a (pseudo-)random delay. This ensures the Central (scanner) and the Peripheral (advertiser) will manage to see each other at some point. Without this random delay, their timing allocations could become harmonic but out of phase, and they might never see each other.
Depending on the parameters used on Central and Peripheral (advInterval, advDelay, scanWindow, scanInterval) and radio link quality, you can compute the probability to be able to reach a node after a given time. This is left as an exercise to the reader... :)
In the end, the question you should ask yourself looks like: "Is it acceptable for my Peripheral to be reconnected to my Central within 300 ms in 95% of cases?"
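For the curious, that "exercise" can be approximated numerically. The sketch below Monte Carlo simulates whether an advertising event lands inside a scan window before a deadline; all four timing values are illustrative, not taken from the spec.

import random

ADV_INTERVAL = 0.100   # seconds between advertising events (advInterval)
ADV_DELAY_MAX = 0.010  # random delay added to each interval (advDelay)
SCAN_WINDOW = 0.030    # scanner listens this long... (scanWindow)
SCAN_INTERVAL = 0.300  # ...once per this interval (scanInterval)
DEADLINE = 0.300       # "reconnected within 300 ms?"
TRIALS = 100_000

hits = 0
for _ in range(TRIALS):
    t = random.uniform(0, ADV_INTERVAL)       # phase of the first ADV event
    phase = random.uniform(0, SCAN_INTERVAL)  # phase of the scan schedule
    while t < DEADLINE:
        if (t - phase) % SCAN_INTERVAL < SCAN_WINDOW:
            hits += 1                         # heard inside a scan window
            break
        t += ADV_INTERVAL + random.uniform(0, ADV_DELAY_MAX)
print(f"heard within {DEADLINE * 1e3:.0f} ms in {100 * hits / TRIALS:.1f}% of trials")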

Bluetooth Low Energy Profile/Service Selection

My requirement is as follows:
I need to send a proximity sensor (reed switch/magnetic sensor) reading (on/off) from two input pins to a central PC.
I need to use a coin cell, so basically the application should stay in sleep mode, and once there is an interrupt on either of these two pins it should wake up and send its state to the central PC.
I have a DA18450 chip and a development board (Murata ZY type) with me.
Dialog Semiconductor 18450
Murata Bluetooth Smart Development Board
I am a beginner in Bluetooth technology and started reading about it just a week ago.
Could someone guide me to the most appropriate Profile/Service for my application?
If you want the device to actually sleep, then it is probably best for it to just transmit data via advertising packets when it wakes. Otherwise you have to maintain a connection, which requires staying awake at some level. However, advertising packets are broadcast, and the device can't know whether anything received them (you could have it broadcast several times for a fixed period, or broadcast constantly while the proximity alert is valid). Also, on the receiving end, with no connection there is no way of knowing the transmitting device is even there while nothing is being transmitted.
The advertising packets have a section for limited information and that's where you'd transmit data if you don't want to establish a connection.
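As a sketch of what that limited information can look like, here is one way the two pin states might be packed into a manufacturer-specific data field of the advertising payload. The 0xFFFF company ID is the value reserved for testing, and the one-byte bitmask layout is an assumption.

def build_adv_payload(pin1_closed: bool, pin2_closed: bool) -> bytes:
    # One status byte: bit 0 = pin 1, bit 1 = pin 2 (layout is an assumption).
    state = (pin1_closed << 0) | (pin2_closed << 1)
    mfg = bytes([0xFF, 0xFF, state])               # test company ID + status
    ad_struct = bytes([len(mfg) + 1, 0xFF]) + mfg  # length, AD type 0xFF, data
    flags = bytes([0x02, 0x01, 0x06])              # standard Flags AD structure
    return flags + ad_struct

print(build_adv_payload(True, False).hex())  # -> '02010604ffffff01'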

Serial Transfer UART Delay

I currently have an embedded device connected to a PC through a serial port. I am having trouble receiving data on the PC. When I use my PCI serial port card I receive data right away (no delays). When I use my USB-to-serial plug or the motherboard's built-in serial port, I have to delay reading the data (40 ms for 32-byte packets).
The only difference I can find between the hardware is the UART. The PCI card uses a 16650 and the plug/motherboard uses a standard 16550A. The PCI card is set to interrupt at 28 bytes and the plug is set to interrupt at 14 bytes.
I am connected at 57600 baud (if this helps).
The delay becomes the majority of the duty cycle and really increases transfer time (a 10-minute transfer becomes a 1-hour transfer).
Does anyone have an explanation for why I have to use a delay with the plug/motherboard? Can anyone suggest a possible solution to minimizing or removing this delay?
Linux has an ASYNC_LOW_LATENCY flag for the serial driver that may help. Whatever driver you're using may have something similar.
However, latency shouldn't make a difference on a bulk transfer. It should add 40 ms at the very start of the transfer and that's it, which is why drivers don't worry about it in the first place. I would recommend refactoring your transfer protocol to use a sliding window protocol, with a window size of around 100 packets, if you are doing 32-byte packets at that baud rate and latency. In other words, you only want to stop transmitting if you haven't received an ACK for the packet you sent 100 packets ago.
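A rough, self-contained sketch of that sliding-window loop; the I/O functions are stubs standing in for the real serial reads and writes.

WINDOW = 100  # packets allowed in flight before we stall
TOTAL = 1000  # packets in this (simulated) transfer

def send_packet(seq):
    pass                 # stub: write one 32-byte frame to the serial port

def recv_acks(next_seq):
    return next_seq - 1  # stub: in reality, parse ACK numbers from the port

acked = -1    # highest contiguously acknowledged sequence number
next_seq = 0
while acked < TOTAL - 1:
    if next_seq < TOTAL and next_seq - acked <= WINDOW:
        send_packet(next_seq)        # window has room: keep streaming
        next_seq += 1
    else:
        acked = recv_acks(next_seq)  # window full (or all sent): wait on ACKs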
You'll probably find that different USB-Serial converters produce different results. We've found that the FTDI ones work well for talking with embedded devices. Some converters seem to buffer the data for a long time and/or fragment it.
I've never seen a problem with a motherboard connection - not sure what is going on there! Can you change the interrupt point for the motherboard serial port?
I have a serial-to-USB converter. When I hook it up to my breakout box and create a loopback, I am able to send and receive at close to 1 Mbps without problems. The serial port sends binary data that may be translated into ASCII data.
Using .NET, I set my software to fire an event on every byte (ReceivedBytesThreshold = 1), though that doesn't mean it will.
