GPRS modem ATD timeout unspecified, how to interrupt?

With my GPRS modem I encounter very long durations for the ATD*99***1# command used for establishing the connection (more than 2 minutes in some areas with low RSSI, Received Signal Strength Indication).
My question is twofold:
As the timeout for the ATD command is not specified in the datasheet of the modem, what is the maximum duration I should expect? 5 or 10 minutes? (I would like the timeout in the chat script to be consistent with the timeout of the modem.)
In case I cannot manage a reasonable timeout, how can I interrupt the ATD command? (The modem is still in AT-command mode, not in data mode, so I suppose +++ won't work.)
For information, the sequence in the nominal case is:
send(ATD*99***1#)
recv(CONNECT)
Thank you.

I finally got some answers:
The ATD command is decomposed internally as:
GPRS attachment
switch to data mode
PDP context establishment
The timeout of the GPRS modem for GPRS attachment is 5 minutes, and this cannot be interrupted.
The timeout for PDP context establishment is 160 seconds. This one can be interrupted by +++.
This is of course dependent on the modem you use.
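A rough sketch of how the abort could look in practice (not taken from the question's modem; it assumes POSIX C, a modem on /dev/ttyUSB0, an arbitrary 60-second application-side timeout, and it omits the termios setup for baud rate and raw mode; per the answer above, +++ only helps once the modem is in the PDP-context phase):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <termios.h>
#include <time.h>
#include <unistd.h>

/* Read from the modem until `token` appears or `timeout_s` seconds elapse. */
static int wait_for(int fd, const char *token, int timeout_s)
{
    char buf[256];
    size_t len = 0;
    time_t deadline = time(NULL) + timeout_s;

    while (time(NULL) < deadline && len < sizeof(buf) - 1) {
        ssize_t n = read(fd, buf + len, sizeof(buf) - 1 - len);
        if (n > 0) {
            len += (size_t)n;
            buf[len] = '\0';
            if (strstr(buf, token))
                return 0;                 /* token seen */
        }
        usleep(50 * 1000);                /* no data yet, poll again */
    }
    return -1;                            /* our own timeout expired */
}

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY | O_NONBLOCK); /* assumed device */
    if (fd < 0) { perror("open"); return 1; }
    /* termios configuration (baud rate, raw mode) omitted for brevity */

    write(fd, "ATD*99***1#\r", 12);
    if (wait_for(fd, "CONNECT", 60) == 0) {
        puts("connected, hand the port over to PPP here");
    } else {
        sleep(1);                          /* guard time before the escape sequence */
        write(fd, "+++", 3);               /* abort the PDP-context phase */
        sleep(1);                          /* guard time after it */
        write(fd, "ATH\r", 4);             /* hang up, retry later */
    }
    close(fd);
    return 0;
}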

Related

Sim800l: how to know network state reliably

In the Sim800L AT-Commands Guide there are a lot of different status commands that should tell what state the device is in.
For example:
AT+CPAS - check if device is ready
AT+CGREG? - check registration status in network
AT+CGATT? - check if device "attached to network"
AT+CSQ - get signal level
But in some cases the answers to these commands can be "ERROR", or there is no answer at all.
I need a reliable and fast method to know whether the device is connected to the network or whether it is already in GPRS mode, and for now I have resorted to reading the blinking LED on the Sim800L to detect its state.
The LED has three blinking frequencies:
Fast blinking - GPRS connection is active
Medium speed of blinking - Network connection is not established yet
Slow blinking - Device is connected to the network (but not to GPRS)
I can use a photodiode and "read" the blinking of the LED, or I can wire the LED's power pin to an analog pin of the Arduino and read its voltage. Then I can measure how fast the LED is blinking and determine which state the Sim800L is in.
But how do I get this level of reliability without using such a crutch?
Given that "fast" means about 1 second, you could send an AT command e.g. five times and take 3 non-error responses as a valid result. See the pseudo code:
uint8_t errorCount = 0;
for (uint8_t i = 0; i < 5; i++) {
    // ... send AT command and read the response ...
    if (responseIsError) errorCount++;                 // response was "ERROR" or timed out
    if (errorCount >= 3) { errorHandling(); return; }  // too many failures, give up
}
// ... at least 3 of 5 replies were OK: process the successful AT command ...
Or use your HW approach with the LED wired to an analog pin.
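A rough Arduino sketch of that hardware approach (assumptions: the Sim800L status LED is read on A0, and the 512 ADC threshold and the period thresholds are guesses that would need tuning against the real blink timings of your module):

const uint8_t NETLIGHT_PIN = A0;   // LED / NETLIGHT signal wired to this analog pin

// Measure the time between two consecutive rising edges of the status LED, in ms.
// Note: this blocks until two edges have been seen.
unsigned long measureBlinkPeriod() {
  while (analogRead(NETLIGHT_PIN) > 512) {}    // wait until the LED is off
  while (analogRead(NETLIGHT_PIN) <= 512) {}   // first rising edge
  unsigned long t0 = millis();
  while (analogRead(NETLIGHT_PIN) > 512) {}    // LED turns off again
  while (analogRead(NETLIGHT_PIN) <= 512) {}   // second rising edge
  return millis() - t0;
}

void setup() {}

void loop() {
  unsigned long period = measureBlinkPeriod();
  if (period < 600) {
    // fast blinking: GPRS connection is active
  } else if (period < 1500) {
    // medium blinking: not registered on the network yet
  } else {
    // slow blinking: registered on the network, but no GPRS
  }
}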

BLUEZ - is propagation delay of ~50ms normal for BLE?

I'm trying to sample TI CC2650STK sensor data at ~120 Hz with my Raspberry Pi 3B, and when comparing the signal trace to a wired MPU6050 I seem to have a ~50 ms phase shift in the resulting signal (in the plot, orange is the data received over BLE and blue is the data received over I2C from the other sensor, the MPU6050).
The firmware on the sensor side doesn't seem to have any big buffers:
(50 ms / 8 ms per sample ≈ 6 samples, and each sample is 18 bytes long, so a buffer of only about 6 × 18 = 108 bytes would be needed to explain the delay, I guess).
On the RPi side I use BlueZ with the Bluepy library, and again I see no buffers that could cause such a delay. For test purposes the sensor is lying right next to my Pi, so surely the over-the-air transmission cannot be taking 40-50 ms? Moreover, timing my code that handles the incoming notifications shows that the whole handling (my high-level code + the Bluepy library + the BlueZ stack) takes less than 1-2 ms.
Is it normal to see such huge propagation delay or would you say I'm missing something in my code?
Looks legit to me.
BLE is time-slotted. A peripheral cannot transmit any time it wants; it has to wait for the next connection event to send its payload. If the next connection event comes right after the sensor data update, the message gets sent with no big latency. If the sensor data is generated right after a connection event, the peripheral stack has to wait a complete connection interval for the next connection event.
The connection interval is an amount of time, a multiple of 1.25 ms between 7.5 ms and 4 s, set by the Master of the connection (your Pi's HCI) upon connection. It can be updated by the Master arbitrarily. The Slave can kindly ask the Master to modify the parameters, but the Master can do whatever it wants (most Master implementations do try to respect the constraints requested by the Slave, though).
Since the data becomes ready at a random point within the interval, the average wait is about half a connection interval. If you measure an average delay of 50 ms, you are therefore probably using a connection interval of about 100 ms (probably a little less because of constant delays elsewhere in the chain).
BlueZ's hcitool has a lecup subcommand that is able to change the connection parameters for a given connection.
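For example, something along these lines (the values are only illustrative; --min and --max are in units of 1.25 ms and --timeout in units of 10 ms, so this requests a 30-50 ms connection interval; the connection handle can be listed with hcitool con):
sudo hcitool lecup --handle 64 --min 24 --max 40 --latency 0 --timeout 200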

Possible reason why Xbee is not able to send data

I am using an Arduino Pro Mini 328P (3.3 V, 8 MHz) with an XBee Series 1. I have set the CPU frequency to 1 MHz and the baud rate to 9600, and I have also set the baud rate to 9600 in the XBee. I have also tested that at this baud rate the XBee sends data properly in a normal scenario.
Now what I have done in my project:
I register my XBee with the gateway and then it goes to sleep (I use pin hibernate mode); it is then woken up by a digital pin of the Pro Mini. I have put in a delay of 19 ms, after which the XBee tries to send data. After sending the data it goes back to sleep.
The problem is that it behaves randomly when sending data to the gateway (which has the same XBee Series 1). Sometimes it sends the data perfectly, sometimes the send fails. I have also enabled RR to retry 6 times in case the XBee fails to send the data the first time.
I have no idea how to solve this problem because of the randomness in sending the data.
I have placed the two XBees near each other (I have two nodes with the same hardware and the same code). There is an interval of around 4 minutes between them. So when one XBee sends the data perfectly, the other one fails to send its data after that 4-minute gap (the time difference of the two RTCs on the different nodes). What can I conclude from this?
As a side note, the XBee tries to send the data every hour. To time that hour I use an RTC, which seems to work fine (I am sure because I have taken logs; the RTC never fails to generate an interrupt).
So I am wondering what the possible reason could be and how I can fix this problem (if it is possible without restarting anything, nothing would be better than that).
And restarting my controller is not an option.
How to debug this?
A few things. If possible, increase your baud rate so you spend less time sending data to/from the XBee. If you have a limited power budget, faster baud rates save time and energy. I don't know how the UARTs work on the Arduino, so I can't say whether 115,200 bps is possible with a 1 MHz CPU clock.
Second, make sure you wait for the XBee to assert CTS back to the Arduino after you wake it up. Never send to the XBee unless it's "clear to send".
Third, if you use API mode, you can watch for a "Transmit Status" frame from the local XBee back to the Arduino which will let you know when the module has successfully sent the frame, and it's safe for you to put it back to sleep.
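A rough sketch of that wake / CTS / send / sleep sequence on the Arduino side (the pin numbers and the 5-second timeout are assumptions, and sendApiFrame() / waitForTxStatus() are stand-ins for your own API-frame handling, not real library calls):

const uint8_t XBEE_SLEEP_RQ = 7;   // to XBee pin 9 (SLEEP_RQ), pinMode OUTPUT in setup()
const uint8_t XBEE_CTS      = 8;   // from XBee pin 12 (CTS, active low), pinMode INPUT

bool sendReading() {
  digitalWrite(XBEE_SLEEP_RQ, LOW);              // wake the XBee
  unsigned long start = millis();
  while (digitalRead(XBEE_CTS) == HIGH) {        // wait until it is clear to send
    if (millis() - start > 5000) return false;   // give up instead of sending blind
  }
  sendApiFrame();                                // your TX Request frame (API mode), not shown
  bool ok = waitForTxStatus();                   // wait for the Transmit Status frame, not shown
  digitalWrite(XBEE_SLEEP_RQ, HIGH);             // only now put the XBee back to sleep
  return ok;
}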

Serial driver hw fifo overrun at 460800 baud rate

I am using a 2.6.32 OMAP-based Linux kernel. I have observed that at a high data rate (serial port set to 460800 baud) serial port HW FIFO overflows happen.
The serial port is configured to generate an interrupt every 8 bytes in both the RX and TX directions (i.e. when the serial port HW FIFO is 8 bytes full, a serial interrupt is generated and the data is read from the serial port at once).
I am transmitting 114-byte packets continuously (the serial driver has no clue about the packet format; it receives data in raw mode). Based on calculations:
460800 bits/sec => 460800/10 = 46080 bytes/sec (with 1 start bit and 1 stop bit per byte), so in 1 second I can transmit in the worst case 46080/114 => 404.21 packets without any issue.
But I expect the serial port to handle at least 1000 packets per second, since I have configured the serial driver to generate an interrupt every 8 bytes.
I tried the same thing on Windows XP and I am able to read up to 600 packets/second.
Do you think this is feasible on Linux under the above circumstances, or am I missing something? Let me know your comments.
Could someone also send some important configuration settings that need to be set in the .config file? I am unable to attach the .config file here; otherwise I could share it.
There are two kinds of overflow that can occur for a serial port. The first one is the one you are talking about: the driver not responding to the interrupt fast enough to empty the FIFO. FIFOs are typically around 16 bytes deep, so getting a FIFO overflow requires the interrupt handler to be unresponsive for 1 / (46080 / 16) = 347 microseconds. That's a really, really long time. You have to have a pretty drastically screwed-up driver with a higher-priority interrupt to trip that.
The second kind is the one you didn't consider, and it offers more hope for a logical explanation. The driver copies the bytes from the FIFO into a receive buffer, where they sit until the user-mode program calls read() to fetch them. An overflow of that buffer happens when you don't configure any kind of handshaking with the device and the user-mode program is not calling read() often enough. It looks exactly like a FIFO overflow: bytes just disappear. There are status bits to warn about these problems, but not checking them is a frequent oversight. You didn't mention doing that either.
So start by improving the diagnostics: check the overflow status bits to find out what's going on. Then consider enabling handshaking if you find that it is actually a buffer overflow issue. Increasing the buffer size is possible, but not a solution if this is a fire-hose problem. Getting the user-mode program to call read() more often is a fix, but not an easy one. Just lowering the baud rate, yeah, that always works.
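A small diagnostic sketch along those lines (Linux-specific; the device path is an assumption): it reads the driver's error counters with the TIOCGICOUNT ioctl, which distinguishes hardware FIFO overruns from tty receive-buffer overruns, and turns on RTS/CTS handshaking via termios if the wiring supports it:

#include <fcntl.h>
#include <linux/serial.h>   /* struct serial_icounter_struct */
#include <stdio.h>
#include <sys/ioctl.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyO0", O_RDWR | O_NOCTTY);   /* assumed OMAP UART device */
    if (fd < 0) { perror("open"); return 1; }

    /* overrun = HW FIFO overruns, buf_overrun = tty receive-buffer overruns */
    struct serial_icounter_struct ic;
    if (ioctl(fd, TIOCGICOUNT, &ic) == 0)
        printf("hw overrun=%d  buffer overrun=%d  frame errors=%d\n",
               ic.overrun, ic.buf_overrun, ic.frame);
    else
        perror("TIOCGICOUNT");

    /* enable RTS/CTS hardware flow control */
    struct termios tio;
    tcgetattr(fd, &tio);
    tio.c_cflag |= CRTSCTS;
    tcsetattr(fd, TCSANOW, &tio);

    close(fd);
    return 0;
}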

TCP Socket no connection timeout

I open a TCP socket and connect it to another socket somewhere else on the network. I can then successfully send and receive data. I have a timer that sends something to the socket every second.
I then rudely interrupt the connection by forcibly breaking it (pulling out the Ethernet cable in this case). My socket still reports that it is successfully writing data out every second. This continues for approximately 1 hour and 30 minutes, when a write error is finally given.
What determines this timeout after which the socket finally accepts that the other end has disappeared? Is it the OS (Ubuntu 11.04), the TCP/IP specification, or a socket configuration option?
Pulling the network cable will not break a TCP connection (1), though it will disrupt communications. You can plug the cable back in and, once IP connectivity is re-established, all backed-up data will move. This is what makes TCP reliable, even on cellular networks.
When TCP sends data, it expects an ACK in reply. If none comes within some amount of time, it re-transmits the data and waits again. The time it waits between transmissions generally increases exponentially.
After some number of retransmissions or some amount of total time with no ACK, TCP will consider the connection "broken". How many times or how long depends on your OS and its configuration, but it typically times out on the order of many minutes.
From Linux's tcp.7 man page:
tcp_retries2 (integer; default: 15; since Linux 2.2)
The maximum number of times a TCP packet is retransmitted in
established state before giving up. The default value is 15, which
corresponds to a duration of approximately between 13 to 30 minutes,
depending on the retransmission timeout. The RFC 1122 specified
minimum limit of 100 seconds is typically deemed too short.
This is likely the value you'll want to adjust to change how long it takes to detect if your connection has vanished.
(1) There are exceptions to this. The operating system, upon noticing a cable being removed, could notify upper layers that all connections should be considered "broken".
If you want quick socket-error propagation to your application code, you may want to try this socket option:
TCP_USER_TIMEOUT (since Linux 2.6.37)
This option takes an unsigned int as an argument. When the
value is greater than 0, it specifies the maximum amount of
time in milliseconds that transmitted data may remain
unacknowledged before TCP will forcibly close the
corresponding connection and return ETIMEDOUT to the
application. If the option value is specified as 0, TCP will
use the system default.
See the full description in the Linux tcp(7) man page. This option is more flexible than editing tcp_retries2 (you can set it on the fly, right after socket creation) and it applies exactly to the situation where your client's socket is unaware of the server's state and may end up in a so-called half-closed state.
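A minimal sketch of using it (Linux 2.6.37+; on older libcs the TCP_USER_TIMEOUT constant may live in <linux/tcp.h> instead; the 10-second value is only an example):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    unsigned int timeout_ms = 10000;   /* give up after 10 s of unacknowledged data */
    if (setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
                   &timeout_ms, sizeof(timeout_ms)) < 0)
        perror("setsockopt(TCP_USER_TIMEOUT)");

    /* connect() and send() as usual; writes now fail with ETIMEDOUT once data
       stays unacknowledged for longer than timeout_ms */
    return 0;
}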
TCP user timeout may work for your case: The TCP user timeout controls how long transmitted data may remain unacknowledged before a connection is forcefully closed.
There are three OS-dependent TCP keepalive parameters.
On Linux the defaults are:
tcp_keepalive_time default 7200 seconds
tcp_keepalive_probes default 9
tcp_keepalive_intvl default 75 sec
The total timeout is tcp_keepalive_time + (tcp_keepalive_probes * tcp_keepalive_intvl); with these defaults, 7200 + (9 * 75) = 7875 seconds.
To set these parameters on Linux:
sysctl -w net.ipv4.tcp_keepalive_time=1800 net.ipv4.tcp_keepalive_probes=3 net.ipv4.tcp_keepalive_intvl=20
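Note that these kernel defaults only apply to sockets that actually enable keepalive. A rough sketch of opting in per socket and overriding the defaults with the Linux-specific socket options (the values here are just examples: probe after 60 s idle, every 10 s, give up after 5 failed probes):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int on = 1, idle = 60, intvl = 10, cnt = 5;
    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));          /* enable keepalive */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));     /* idle time before probing */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));  /* interval between probes */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt));        /* failed probes before reset */

    /* connect() and use the socket as usual; a dead peer is now detected after
       roughly 60 + 5 * 10 = 110 seconds of silence */
    return 0;
}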
