SMS vs TCP for weak signal GSM-based M2M communications?

I have a remote sensor that talks to the world via a low-bandwidth TCP connection over GSM. It can often be in a location with extremely patchy GSM connectivity, though. At the moment, the remote sensor waits for a GPRS network, initiates a TCP connection to the server, and then listens for commands (which are only a dozen or so bytes every hour or so).
Is an SMS more or less likely to get through to the remote sensor than being able to complete a TCP connection? I guess I'm wondering how likely it is that the network signal is strong enough for SMS but not for TCP.

Using standard text messaging services for M2M is a good idea if you can fit your data in 140 bytes. For short transmissions, opening a GPRS/1xRTT (2G) IP session, transmitting the data to a server and closing the session is less efficient and more likely to go wrong than sending an SMS.
As a side note, you can also use SMS (MT-SMS) to bring an IoT device online (a mechanism called "shoulder taps").

Compared to data (or voice), SMS uses only the signalling part of the mobile network. It's quite a cheap operation in terms of network resource reservation, so you're more likely to get through with an SMS than with a data connection in a weak signal environment. An additional advantage is the much lower power consumption of your terminal, which can matter in an M2M context.
As stated by #Jarek previously, you need to be able to pack your data into 140 bytes, or to build "long" SMSes, which are concatenations of ordinary 140-byte SMS segments.
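To show how little machinery this takes on the device side, here is a minimal sketch of sending one short payload as a single SMS through a GSM modem using the standard 3GPP TS 27.005 AT commands. The serial port, baud rate and destination number are placeholders, and a real deployment would parse the modem's responses instead of just sleeping.

# Minimal SMS send over a GSM modem via AT commands (sketch only).
import time
import serial  # pyserial

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=5)  # placeholder port

def at(cmd, pause=0.5):
    # Send one AT command, give the modem a moment, return its raw reply.
    ser.write(cmd.encode() + b"\r")
    time.sleep(pause)
    return ser.read(ser.in_waiting or 1)

print(at("AT"))               # check the modem is alive
print(at("AT+CMGF=1"))        # text mode: up to 160 GSM-7 chars (140 bytes)
ser.write(b'AT+CMGS="+15551234567"\r')          # placeholder destination number
time.sleep(0.5)                                 # wait for the "> " prompt
ser.write(b"TEMP=21.4;BAT=87" + bytes([0x1A]))  # payload, Ctrl-Z terminates it
time.sleep(5)
print(ser.read(ser.in_waiting or 1))            # expect +CMGS: <ref> and OK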


How do programs apply backpressure over a network?

Consider the example of a download stream that can be throttled (e.g. a torrent client, Dropbox sync, etc.). How does a program apply backpressure to the network?
My thoughts are that, from a software perspective, you can choose to read from a socket at a certain speed. But how does the socket you're reading from know that you only want your device to receive data at that rate? Does the actual NIC apply backpressure over the network somehow? If so, by what mechanism?
Backpressure is built into the TCP protocol. If a slow consumer does not read bytes from the connection in a timely manner, the producer cannot put in more bytes than there is buffer memory on the sending and receiving sides: the receiver advertises an ever smaller receive window, and once it reaches zero the sender has to stop and wait.
In contrast, UDP messages are not flow-controlled and are simply dropped if there is no free memory on the receiver side to store them.
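To see this from the application side, here is a sketch of a deliberately slow consumer in Python; the host, URL and rate limit are placeholders. The program only throttles its own recv() calls, yet the remote sender ends up slowing down because the kernel's receive buffer fills and the advertised TCP window shrinks toward zero.

# Throttled TCP consumer: backpressure emerges from not reading fast enough.
import socket
import time

RATE_LIMIT = 64 * 1024   # bytes per second we are willing to accept
CHUNK = 16 * 1024

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# A small receive buffer makes the effect visible sooner (optional).
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 32 * 1024)
sock.connect(("example.com", 80))
sock.sendall(b"GET /bigfile HTTP/1.0\r\nHost: example.com\r\n\r\n")

while True:
    data = sock.recv(CHUNK)
    if not data:
        break
    # ... hand the data to the slow consumer here ...
    time.sleep(len(data) / RATE_LIMIT)   # application-level throttle

sock.close()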

NodeMCU broadcast to all clients

I want to broadcast a request to all clients that are connected to my ESP8266-12F access point.
I used this to create a connection per client, meaning if there are 3 clients it will create 3 connections.
for mac, ip in pairs(wifi.ap.getclient()) do
    srv = net.createConnection(net.TCP, 0)
    srv:on("receive", function(client, b_response) srv:close() collectgarbage() end)
    srv:on("connection", function(client, b_request) client:send(request) end)
    srv:connect(80, ip)
end
I tried the broadcast IP, srv:connect(80, "255.255.255.255"), but nothing was sent.
The problem:
With what I used, every srv overwrites the previous srv, so I cannot get a response if it is delayed. I could give every srv a different name, like srv_1, srv_2, srv_3, but that takes too much memory.
What I want:
Create only one connection?
Your code is using TCP, which is inherently a single-connection, point-to-point protocol. There's no such thing as a "broadcast" TCP connection; TCP simply does not work using broadcast. That's like trying to use a car as a boat.
If you're sending a small amount of information, you might try UDP instead. The drawbacks are that UDP is unreliable (you can't be certain your message was received), you'd need a lot more code to receive a response if you want one, and you'd need to build a reliability mechanism (retransmit if no answer is received, detect retransmissions in case the answer was dropped) if you care about that.
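As a rough sketch of that UDP pattern (plain Python, purely to show the flow; the NodeMCU side would use its own UDP socket API, and the port number and payloads here are arbitrary):

# Sender side: one broadcast datagram reaches every listener on the subnet.
import socket

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
tx.settimeout(2.0)
tx.sendto(b"request", ("255.255.255.255", 9000))
try:
    while True:
        reply, addr = tx.recvfrom(256)   # collect replies until the timeout
        print(addr, reply)
except socket.timeout:
    pass

# Each client (a separate device) listens on the port and answers the sender.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("", 9000))
data, addr = rx.recvfrom(256)
rx.sendto(b"response", addr)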
I'd recommend that you check out the MQTT protocol - it's designed to make it easy to communicate with multiple clients. It's lightweight, and MQTT clients run well on NodeMCU and Arduino processors. There's an MQTT client built into NodeMCU's Lua implementation.
The downside is that you'd need an MQTT broker that all of your NodeMCUs will connect to. The broker is usually run on a more capable processor (a Raspberry Pi is a good choice) or externally on the Internet (Adafruit offers a broker at https://io.adafruit.com/), although there are some implementations that run on an ESP8266.
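To illustrate the topology rather than the NodeMCU API itself, here is a sketch using the paho-mqtt 1.x Python client and a hypothetical broker at 192.168.1.2: every device keeps a single connection to the broker and subscribes to a shared topic, so one publish to that topic reaches all of them.

# One connection per device, fan-out handled by the broker (sketch).
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    client.subscribe("devices/commands")        # every device listens here

def on_message(client, userdata, msg):
    print("command received:", msg.payload)
    client.publish("devices/responses", b"ack from node-1")

client = mqtt.Client("node-1")                  # hypothetical client id
client.on_connect = on_connect
client.on_message = on_message
client.connect("192.168.1.2", 1883, keepalive=60)   # hypothetical broker
client.loop_forever()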

How to spoof individual BLE packets

I'm doing a security analysis project on an IoT device that uses an unencrypted BLE connection (with ATT protocol) and I want to spoof an individual BLE packet with the source address of an already connected device. Is there some tool or API that would allow me to do this easily? I've already tried gatttool and spooftooph but they seem to be connection based and don't allow you to send out single packets with modified fields (as far as I could tell).
You will need some hardware where you can access the radio peripheral directly. What you basically need to do is find or write a BLE sniffer firmware, modified so that at a given moment it sends a packet on the connection it is currently listening to. Note that your signal must be stronger than the original device's signal, so that the original transmission doesn't interfere with yours.
The only open source project I'm aware of is Ubertooth. You will also be able to do this with an nRF52 but then you need to write your own sniffer firmware since Nordic Semiconductor's is closed source.
I can't comment on Emil's reply yet (< 50 rep), so as an addition:
Nordic Semi's nRF Sniffer v2 needs only the nRF52 DK and Wireshark to work as a general BLE sniffer. At $40 it's not that expensive. I know for a fact they will release a new dongle soon that will sell for roughly $10-15, if you can wait a month or two.

Is TCP/IP mandatory for MQTT?

If so, do you know examples of what can go wrong in a non-TCP network?
Learning about MQTT, I came across several mentions of the fact that MQTT relies on the TCP/IP stack. For example, from mqtt.org:
MQTT for Sensor Networks is aimed at embedded devices on non-TCP/IP
networks, whereas MQTT itself explicitly expects a TCP/IP stack.
But if you read the reference documents, you won't find anything like that. Moreover, there's a QoS field that can be used for reliable delivery, and whose values other than 0 are essentially useless in TCP/IP networks. Right now I see nothing that would prevent me from establishing an MQTT connection over a UNIX pipe, a domain socket or a UDP socket rather than a TCP socket.
MQTT just needs a transport that is ordered and reliable; it doesn't have to be TCP. SCTP works just fine, for example, but UDP doesn't, because there is no way to guarantee that a large PUBLISH packet split across multiple UDP datagrams will arrive in order and complete.
With regard to TCP reliability, in theory what you are saying is true, but in practice when an application calls write() and receives a successful return, it can't know when the data has actually made it out of the computer to the remote host. All write() (or send()) does is copy the data to the kernel buffers, at which point you have no further control.
The only way to be sure that a message reaches the remote host at the application level is to have the remote host reply.
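A bare-bones sketch of that idea at the application level (the host, port and "ACK" convention are made up for illustration): a successful sendall() only means the kernel has buffered the data, so the sender treats the message as delivered only once the remote host explicitly replies.

# Application-level acknowledgement over TCP (sketch).
import socket

sock = socket.create_connection(("example.com", 5000), timeout=10)
sock.sendall(b"MEASUREMENT 21.4\n")    # returns once the kernel has buffered it

try:
    reply = sock.recv(16)
    delivered = reply.startswith(b"ACK")   # made-up convention for this sketch
except socket.timeout:
    delivered = False                      # no reply: assume lost, retry later

print("delivered:", delivered)
sock.close()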
MQTT-SN (MQTT for Sensor Networks) is the answer to the problem of MQTT requiring TCP/IP.
MQTT-SN introduces the concept of an MQTT gateway, which bridges the non-TCP/IP side of the network to a regular MQTT broker.
http://emqttd-docs.readthedocs.io/en/latest/mqtt-sn.html

Benefits of polling multiple BLE connections vs simultaneous connections?

Say I have a BLE device that is both a server (has info) and a peripheral (needs to access outside info), and that upon receiving or generating its own data has to share it with other server/peripherals in its vicinity.
Would it be beneficial to only attempt to connect to the devices through BLE when there is data to be transferred "even though it periodically connects to each server sequentially to see if it can", or would it be better to keep the connections open simultaneously and use callbacks to determine when connected and simply transfer data when required (though from what I understand the devices that I use can only process one GATT operation at a time, meaning having 4 connections to quickly transfer data is irrelevant)?
In other words, is it beneficial to continually reconnect and disconnect a peripheral and server, or to simply keep a connection to as many servers as I need (even though apparently I can only perform one GATT operation at a time, i.e. 1 characteristic read/write)?
It's a question of balancing requirements.
If possible, instead of reading to see if there's data available, I would stay in connection with the devices and have them use Bluetooth notifications to communicate data when it becomes available.
Alternatively, you could consider having the Peripherals advertise only when they have data, to invite the Central device to connect. The Central would need to scan periodically to detect this, however.
Advantages/disadvantages depend on your priorities and the nature of the two types of device.
FYI, you're mixing terminology a little. When discovering devices you have a GAP Peripheral, which advertises, and a GAP Central, which scans. After the Central connects to the Peripheral you have a GATT Client and a GATT Server. Usually the GAP Peripheral becomes the GATT Server, but it does not need to be this way; the GAP Peripheral can just as easily become the GATT Client. It's the GATT Server that holds the state data in an attribute table, in the form of Services, Characteristics and Descriptors.
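As a sketch of the stay-connected-and-use-notifications approach from the GATT Client side, assuming the central happens to be a desktop or Raspberry Pi machine running Python with the bleak library (the address and characteristic UUID below are placeholders for your own peripheral):

# Subscribe to notifications instead of polling reads (sketch).
import asyncio
from bleak import BleakClient

ADDRESS = "AA:BB:CC:DD:EE:FF"                         # placeholder peripheral
CHAR_UUID = "0000abcd-0000-1000-8000-00805f9b34fb"    # placeholder characteristic

def on_notify(sender, data: bytearray):
    # Runs whenever the peripheral pushes new data; no polling required.
    print("notification:", data)

async def main():
    async with BleakClient(ADDRESS) as client:
        await client.start_notify(CHAR_UUID, on_notify)
        await asyncio.sleep(60)     # stay connected; data arrives as it is generated
        await client.stop_notify(CHAR_UUID)

asyncio.run(main())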
