mqtt.ping() in Adafruit mqtt library - arduino

The following code comes from the Adafruit MQTT documentation:
// Adjust as necessary, in seconds. Default to 5 minutes (300 seconds).
#define MQTT_CONN_KEEPALIVE 300

// ping the server to keep the mqtt connection alive
// NOT required if you are publishing once every KEEPALIVE seconds
if (! mqtt.ping()) {
  mqtt.disconnect();
}
What does MQTT_CONN_KEEPALIVE actually do? I cannot figure it out. If I write the code as shown above and put it in my loop, the ping is executed constantly and all packets are rejected. I was expecting MQTT_CONN_KEEPALIVE to be used inside the ping() function so that the ping is only executed once the 300 seconds have passed, but it does not seem to work like that.
How am I supposed to write the code so that it pings only once every few minutes?

MQTT Keep Alive is part of the MQTT protocol and is used to maintain the connection between the broker and its clients. You can read more about it in the documentation.
MQTT uses a TCP/IP connection that is normally left open by the client so that it can send and receive data at any time. To detect a connection failure, MQTT uses a ping system: the client sends a message to the broker at a pre-determined interval if no other messages have been sent within a certain period (the keep alive).
Specific to the Adafruit_MQTT implementation: if you publish data, and you are sure you will keep publishing within the period set by MQTT_CONN_KEEPALIVE, then you are good to go.
If the broker receives no data and no PINGREQ from the client within MQTT_CONN_KEEPALIVE plus an extra 50% of MQTT_CONN_KEEPALIVE (i.e. 450 seconds for the default of 300), the broker will close the connection (timeout) and the client will have to re-establish it.
So if an MQTT client only subscribes to a topic without publishing, the client must send a ping (PINGREQ) to the broker at least once every MQTT_CONN_KEEPALIVE seconds. However, you don't want to ping the server constantly. One way of doing this is to send mqtt.ping() only once every MQTT_CONN_KEEPALIVE seconds:
#define MQTT_KEEP_ALIVE 300

unsigned long previousTime = 0;

void loop() {
  // Force the multiplication to be done as unsigned long: on 8-bit AVR
  // boards, 300 * 1000 would overflow a 16-bit int.
  if ((millis() - previousTime) > (unsigned long)MQTT_KEEP_ALIVE * 1000UL) {
    previousTime = millis();
    if (! mqtt.ping()) {
      mqtt.disconnect();
    }
  }
  // do something else
}

Related

BlueZ - is a propagation delay of ~50 ms normal for BLE?

I'm trying to sample TI CC2650STK sensor data at ~120 Hz with my Raspberry Pi 3B, and when comparing the signal trace to a wired MPU6050 I seem to have a ~50 ms phase shift in the resultant signal (in the image below, orange is the data received over BLE and blue is the data received over I2C from the MPU6050):
The firmware on the sensor side doesn't seem to have any big buffers:
(50 ms / 8 ms per sample ≈ 6 samples; each sample is 18 bytes long, so only a 6 × 18 = 108 byte buffer would be required, I guess.)
On the RPi side I use BlueZ with the Bluepy library, and again I see no buffers that could cause such a delay. For test purposes the sensor is lying right next to my Pi, so surely the over-the-air transmission cannot be taking 40-50 ms? Moreover, timing my code that handles the incoming notifications shows that the whole handling chain (my high-level code + the Bluepy library + the BlueZ stack) takes less than 1-2 ms.
Is it normal to see such a huge propagation delay, or would you say I'm missing something in my code?
Looks legit to me.
BLE is time-slotted. The peripheral cannot transmit any time it wants; it has to wait for the next connection event to send its payload. If the next connection event falls right after the sensor data update, the message gets sent with little latency. If the sensor data is generated right after a connection event, the peripheral stack has to wait a complete connection interval for the next one.
The connection interval is an amount of time, a multiple of 1.25 ms between 7.5 ms and 4 s, set by the Master of the connection (your Pi's HCI) upon connection. It can be updated by the Master arbitrarily. The Slave can kindly ask the Master to modify the parameters, but the Master can do whatever it wants (most Master implementations do try to respect the constraints requested by the Slave, though).
If you measure an average delay of 50 ms, you are probably using a connection interval of 100 ms: the wait is roughly uniformly distributed between 0 and one interval, so it averages half the interval (probably a little less than 100 ms here, because of constant delays elsewhere in the chain).
BlueZ provides an hcitool lecup command that can change the connection parameters of an established connection.
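For example, something like this should shorten the interval (a sketch only: the handle value 64 is illustrative and comes from the active connection list; min/max are in units of 1.25 ms and the supervision timeout in units of 10 ms):

sudo hcitool con                        # list handles of active connections
sudo hcitool lecup --handle 64 --min 6 --max 12 --latency 0 --timeout 200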

Arduino Ethernet Shield delay

I am working on an Arduino board with an Ethernet Shield (v2).
I took a look at this sample program: Files > Samples > Ethernet > WebServer
There is something very strange for me in the loop function: when the server finishes printing data to the client, I see a delay:
// give the web browser time to receive the data
delay(1);
// close the connection:
client.stop();
What happens if it takes more than that one millisecond (delay(1) waits 1 ms) to send the data to the client?
What happens if the data is sent immediately and a second client tries to connect to the server during that delay?
Is there a way to do something clean, for example waiting for the data to be flushed instead of waiting a fixed time? For example, would something like the sketch below be a better approach?
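(Just a sketch; I have not checked how the Ethernet v2 library buffers outgoing data, so the 10 ms window is an assumption on my part.)

// wait (bounded) until the client disconnects or the window expires,
// instead of a blind fixed delay
unsigned long start = millis();
while (client.connected() && millis() - start < 10) {
  if (client.available()) {
    client.read();  // drain any remaining request bytes
  }
}
client.stop();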
Thanks

How does port numbering work for receiving MODBUS TCP packets?

I am running an application on my microcontroller (MSP432) which writes data to Ethernet to send it over to a PC.
I am using Packet Sender to view the data received from the MC on port 502 of the PC.
Data received on PC
As you can see in the picture above, the MC's source port number increments for every packet sent.
What will happen when it reaches the maximum number?
Will it restart at some other port number and continue, or will it stop?
Edit 1: the Modbus protocol library used is from http://myarduinoprojects.com/modbus.html
Edit 2:
I am making a call to this function every time I have new data to send through MODBUS: Mb.Req(MB_FC_WRITE_MULTIPLE_REGISTERS, 0, 11, 0);
if (MbmClient.connect(ServerIp, 502)) {
  digitalWrite(GREEN_LED, HIGH);
#if DEBUG
  //Serial.println("connected with modbus slave");
  //Serial.print("Master : ");
  for (int i = 0; i < MbmByteArray[5] + 6; i++) {
    if (MbmByteArray[i] < 16) {
      //Serial.print("0");
    }
    //Serial.print(MbmByteArray[i],HEX);
    if (i != MbmByteArray[5] + 5) {
      //Serial.print(".");
    } else {
      //Serial.println();
    }
  }
#endif
  MbmClient.write(MbmByteArray, 13 + (Count * 2));
  MbmCounter = 0;
  MbmByteArray[7] = 0;
  MbmPos = Pos;
  MbmBitCount = Count;
  *state = true;
  MbmClient.stop();
  delay(100);
  digitalWrite(GREEN_LED, LOW);
} else {
  *state = false;
  MbmClient.stop();
}
It seems you are using this Modbus example. I have never worked with it, but I assume so because the destination port in the code is the same one you have in your sniffing image: 502.
Probably you are repeatedly calling this method:
void MgsModbus::Req(MB_FC FC, word Ref, word Count, word Pos)
Inside this method you can see this line:
if (MbmClient.connect(ServerIp,502)) {
...
So every time you call that function, a new connection is opened. When you open a connection through a socket, the operating system or network stack needs to select a source port and IP address from which the TCP message is sent.
This is why you always see a new source port, and why it keeps increasing. This is what is called an ephemeral port. How the source port is selected by your TCP stack is implementation dependent, though it is very common to begin with some port and, every time a connection is opened, select the next available one.
If the stack is well programmed, it will most probably wrap around and begin again at some specific port from 1024 up (the first 1024 ports are reserved for well-known services). The code I saw seems to close the port with this function:
MbmClient.stop()
You need to check that ports are actually closed after being used. Otherwise, at some point you will run out of available ports (a resource leak).
If you want your socket bound to a specific source port, you need to use a function similar to the Linux socket bind, as in the sketch below.
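A minimal POSIX sketch (not Arduino; the fixed port 40000 is an arbitrary choice for illustration, and error handling is omitted):

#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>

int main() {
  int s = socket(AF_INET, SOCK_STREAM, 0);

  // bind the client socket to a fixed source port before connect(),
  // so every outgoing connection reuses the same port
  struct sockaddr_in src;
  memset(&src, 0, sizeof(src));
  src.sin_family = AF_INET;
  src.sin_port = htons(40000);        // fixed source port
  src.sin_addr.s_addr = INADDR_ANY;
  bind(s, (struct sockaddr *)&src, sizeof(src));

  // then connect() to the Modbus server on port 502 as usual
  return 0;
}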
Now, a wiser way is to use the same connection all the time. You may need to modify that example to do so.
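For instance, something along these lines (a sketch only, reusing the names from your code; I have not tested it against the MgsModbus library):

// connect once, and only reconnect when the link has dropped
if (!MbmClient.connected()) {
  MbmClient.connect(ServerIp, 502);
}
if (MbmClient.connected()) {
  MbmClient.write(MbmByteArray, 13 + (Count * 2));
}
// note: no MbmClient.stop() here; the socket stays open,
// so the same source port is reused for every request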

Modem GPRS indicates be connected but doesn't receive or send any data

I'm implementing an application that sends data to a server and also receives data from it. To always be able to receive, I chose to keep the connection permanently open. For this I use a GL865 Telit modem in the client, configured to use TCP.
The problem I have is that sometimes the modem's socket status indicates that it is connected, and sending data returns OK, but nothing arrives at the server; likewise, when I send data from the server to the client, the client doesn't receive it either.
I'm testing by sending data every second. Many minutes after it begins to lose data, sending starts to return errors, and later the modem indicates that there is no GPRS connection at all.

Wavecom GSM modem as a TCP client

I've been trying to do TCP communication using my Wavecom Fastrack modem. What I want to achieve is to make the modem connect to a specified TCP server port so that I can transfer data to and from the server. I found some information on that in the user's guide.
Based on the information on page 66, I created an application that opens the serial port to which the modem is connected and writes the following AT commands:
AT+WIPCFG=1 //start IP stack
AT+WIPBR=1,6 //open GPRS bearer
AT+WIPBR=2,6,11,"APN" //set APN of GPRS bearer
AT+WIPBR=2,6,0 //username
AT+WIPBR=2,6,1 //password
AT+WIPBR=4,6,0 //start GPRS bearer
AT+WIPCREATE=2,1,"server_ip_address",server_port //create a TCP client on port "server_port"
AT+WIPDATA=2,1,1 //switch do data exchange mode
This is exactly what the user's guide says. After the last command is sent, the modem switches to data exchange mode, and from then on everything written to the serial port opened by my application should be received by the server, and everything the server sends should appear in that port's input buffer.
The thing is, I did not manage to maintain stable bidirectional communication between the server and my modem. When I write some data to the serial port (only a few bytes), it takes a long time before the data appears on the server's side, and in many cases it does not reach the server at all.
I performed a few tests, writing about 100 bytes to the serial port at once. Logging the data received by my server application, I noticed that the first piece of data (8-35 bytes) arrives after a second or two. The rest appears after 2-5 seconds (either as a whole or in pieces of the said size) or does not appear at all.
I do not know where to look for the cause of this behaviour. Did I use the wrong AT commands to switch the modem to TCP client mode? I can't believe the communication can be so slow and unstable.
Any advice will be appreciated. Thank you in advance.
What OS are you running? Windows does a pretty good job of hiding the messy details of communicating with a GPRS modem: all you have to do is create a new dial-up connection. To establish the connection you can make a call to the Win32 RasDial function. Once connected, you can use standard sockets to transfer data on a TCP port.
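A minimal sketch of that approach (assuming a dial-up entry named "GPRS" has already been created; error handling omitted):

#include <windows.h>
#include <ras.h>
#pragma comment(lib, "rasapi32.lib")

int main() {
  // dial the pre-configured entry; afterwards, ordinary Winsock
  // sockets run over the established PPP link
  RASDIALPARAMS params = {0};
  params.dwSize = sizeof(params);
  lstrcpy(params.szEntryName, TEXT("GPRS"));
  HRASCONN conn = NULL;
  DWORD err = RasDial(NULL, NULL, &params, 0, NULL, &conn);
  return (err == 0) ? 0 : 1;
}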
I have been using Wavecom modems for two years now. As far as I know from my experience, if you are able to send some of the data, then you can send all of the data.
The problem might be in the listening application which receives the data on the server side.
It could be that it is unable to deal with the amount of data that you are trying to send.
Try sending the same data in smaller bursts with some delay in between them; then you might receive all the data intact, as in the sketch below.
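Something like this, for example (a sketch only; modemSerial, buf and len are placeholders for your own serial port and payload):

// write the payload to the modem in small bursts with a pause
// between them, instead of one large 100-byte write
const size_t CHUNK = 16;
for (size_t i = 0; i < len; i += CHUNK) {
  size_t n = (len - i < CHUNK) ? (len - i) : CHUNK;
  modemSerial.write(buf + i, n);
  delay(50);  // give the modem time to push each burst onto the GPRS link
}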
