SIM7080G module can't send data over TCP while using GNSS

I bought a SIMCom SIM7080G module to use for geolocation and for sending data over TCP. These modules are quite new on the market (first launched in mid-2019) and they show some strange behaviour. I would like to know whether other people who use this module run into the same problem.
My goal is to take the GNSS (latitude/longitude) information and send it over TCP.
Activate GNSS and read position information
AT+CGNSPWR=1
returns OK
AT+CGNSINF returns +CGNSINF: 1,1,20200517191239.000,4x.xxxxxx,6.xxxxxx,473.769,0.00,,0,,1.9,2.1,1.0,,7,,7.9,6.0
Connect to any TCP server
AT+CNACT=0,1 returns OK\r\n\r\n+APP PDP: 0,ACTIVE
AT+CAOPEN=0,0,"TCP",151.101.1.69,80 (151.101.1.69 is stackoverflow.com's IP address)
--> After a fairly long wait (around 40 seconds):
+CAOPEN: 0,23\r\n\r\nOK
Result code 23 means "remote refuse", but in my case the connection never even reached the server.
Only GNSS or TCP can be used, not both
What is weird about all of this is that I can connect to a TCP server just fine, but it stops working as soon as I activate GNSS.
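For anyone who wants to reproduce this from a host PC, below is a rough Python/pyserial sketch of the exact sequence above. The serial port name, baud rate and wait times are just placeholders for my setup.
import time
import serial

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)   # placeholder port/baud

def at(cmd, wait=2.0):
    # Send one AT command and return whatever the module printed back.
    ser.write((cmd + "\r\n").encode())
    time.sleep(wait)
    return ser.read(ser.in_waiting or 1).decode(errors="replace")

print(at("AT+CGNSPWR=1"))           # power up GNSS -> OK
print(at("AT+CGNSINF"))             # read the fix -> +CGNSINF: 1,1,...
print(at("AT+CNACT=0,1", wait=5))   # activate PDP -> +APP PDP: 0,ACTIVE
# TCP connect; with GNSS powered on this is where I get +CAOPEN: 0,23
print(at('AT+CAOPEN=0,0,"TCP",151.101.1.69,80', wait=45))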

I sent an email to SIMCom technical support.
After insisting that my problem be handled by a SIMCom engineer, I received an answer. I also encouraged them to make this clearer in their documentation, because this information appears nowhere (the module is quite new, so I suspect it simply hasn't made it into the documentation yet).
I hope this helps somebody in the same situation, or saves them from the same pricey mistake:
Hi Dardan,
"It is not possible to use GNSS and TCP at the same time."
This is a known limitation of this module. There are limitations in the LTE and GNSS parts: they cannot run simultaneously because they share some RF components (the SIM7070G is a low-cost version of the SIM7000G), so LTE and GNSS are time-multiplexed. This means GNSS performance may not be good if the customer needs to send GNSS data to a server at very short intervals (e.g. under 10 seconds). On the SIM7000G, LTE and GNSS can work simultaneously without problems. So the SIM7070G can be a good solution for "parcel tracking" and similar applications that do not need continuous navigation. Please go for the SIM7000G, thanks.
xxxx Sun

I found this issue a few hours before reading this post. My product publishes to an AWS IoT MQTT broker; that part works, but I need the GPS coordinates to be sent along in the message. I'm quite frustrated, because a limitation like this should be stated in SIMCom's documentation. My previous version used the SIM7600G, which is more expensive but works fine. I live in Brazil, and every time I need to test a different module I lose more than a month waiting for a new one to arrive from China or elsewhere. I tried turning GNSS and the TCP connection on and off alternately, but reconnecting to the GSM network makes the process far too slow.

Ran into this as well. With MQTT it is possible to power down the GPS unit, send and receive MQTT messages, and then power it back up, all without reconnecting and resubscribing to the broker. I set up a 30-second interval to enable/disable the GPS unit and tested it against the public HiveMQ broker. Receiving position data every 60 seconds is possible with this setup, which might be sufficient for some applications. For the price, this is still a good module.
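Roughly the loop I ended up with, sketched in Python with pyserial. publish_position() is a placeholder for whatever MQTT publish mechanism you already use; in my case the broker session stays up while GNSS is powered down, and the port name, baud rate and timings are only illustrative.
import time
import serial

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)   # placeholder port/baud

def at(cmd, wait=1.0):
    ser.write((cmd + "\r\n").encode())
    time.sleep(wait)
    return ser.read(ser.in_waiting or 1).decode(errors="replace")

def publish_position(cgnsinf_line):
    # Placeholder: hand the +CGNSINF fields to whatever publish path you use.
    print("would publish:", cgnsinf_line)

while True:
    at("AT+CGNSPWR=1")        # GNSS on; give it time to (re)acquire a fix
    time.sleep(30)
    fix = at("AT+CGNSINF")    # grab the latest position
    at("AT+CGNSPWR=0")        # GNSS off, freeing the shared RF path for LTE
    publish_position(fix)     # do the network traffic while GNSS is off
    time.sleep(30)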

I receive no response to an HTTP GET request for the http://httpbin.org/get URL. The following is the output from the AT Command Tester tool from https://m2msupport.net:
Checking registration status...
AT+CREG?
+CREG: 2,1,"912","3D73",0
OK
The device is registered in home network.
AT+CGREG?
+CGREG: 2,1,"912","3D73",0,"1"
OK
The device is registered in home network.
Device is registered..
Check the network APN...
AT+CGNAPN
+CGNAPN: 0,""
OK
Network did not send APN to the device.
Activate the network bearer...
AT+CNACT=0,1
OK
+APP PDP: 0,ACTIVE
Set up the HTTP URL...
AT+SHCONF="URL","httpbin.org"
OK
Set up the HTTP body length...
AT+SHCONF="BODYLEN",1024
OK
Set up the HTTP header length...
AT+SHCONF="HEADERLEN",350
OK
Initiating HTTP connection...
AT+SHCONN
OK
Get the HTTP connection state...
AT+SHSTATE?
+SHSTATE: 1
OK
HTTP connection is successful...
HTTP get request...
AT+SHREQ="http://httpbin.org/get",1
OK
No response received..
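One detail that is easy to miss with this flow: +SHREQ comes back as an unsolicited result code, often several seconds after the OK, so the host side has to keep reading the serial port rather than stopping at the first OK. Below is a rough pyserial wait loop; the +SHREQ and AT+SHREAD details are based on my reading of the SIM7080 series HTTP(S) application note and may differ between firmware versions, and the port name is a placeholder.
import time
import serial

# Assumes the SHCONF/SHCONN setup shown above has already been done.
ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)   # placeholder port/baud

def send(cmd):
    ser.write((cmd + "\r\n").encode())

send('AT+SHREQ="http://httpbin.org/get",1')   # 1 = GET
deadline = time.time() + 60
body_len = None
while time.time() < deadline:
    line = ser.readline().decode(errors="replace").strip()
    if line.startswith("+SHREQ:"):            # e.g. +SHREQ: "GET",200,<length>
        body_len = int(line.split(",")[-1])
        break

if body_len:
    send("AT+SHREAD=0,%d" % body_len)         # ask the module to dump the body
    time.sleep(1)
    print(ser.read(ser.in_waiting).decode(errors="replace"))
else:
    print("no +SHREQ URC within 60 s")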

Related

Why do we need a half-close socket?

According to this blog, it seems a half-open connection is what we want to avoid.
So why does Java still provide the facility to half-close a socket?
According to this blog, it seems a half-open connection is what we want to avoid.
The author of that blog explicitly notes that he is not talking about deliberately half-closed connections, but about half-open connections, which are caused by intermediate devices such as routers dropping their connection state after some timeout.
So why does Java still provide the facility to half-close a socket?
Because it is useful. A half-close just means that no more data will be sent on the socket, while it can still receive data. This behaviour is useful in situations where the client sends a single request and then receives a response, because the half-close can be used to signal the end of the request to the peer.
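In Java that facility is Socket.shutdownOutput(). Here is the same pattern sketched with Python's socket module, just to make the send-the-request-then-half-close idea concrete (example.com and the plain HTTP/1.0 request are only for illustration).
import socket

# Half-close as an end-of-request marker; socket.shutdown(SHUT_WR) plays the
# role of Java's Socket.shutdownOutput().
with socket.create_connection(("example.com", 80)) as s:
    s.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    s.shutdown(socket.SHUT_WR)      # FIN: "no more data from me"; peer sees EOF
    chunks = []
    while True:                     # we can still receive the whole response
        data = s.recv(4096)
        if not data:
            break
        chunks.append(data)
print(b"".join(chunks)[:200])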

Remote server push notification to Arduino (Ethernet)

I want to be able to send a message actively from the server, for example over UDP/TCP-IP, to a client running on an Arduino. This is known to be possible if the user has port-forwarded the specific port to the device on the local network. However, I don't want the user to have to set up port forwarding manually; would this be possible, perhaps by using another protocol?
1 Arduino Side
I think the closest you can get to this is opening a connection to the server from the Arduino and then using available() to wait for the server to stream some data back. Your code will still be polling the open connection, but you avoid all the back-and-forth of opening and closing connections, passing headers, etc.
2 Server Side
This means the bulk of the work will be on the server side, where you will need to manage open connections so you can instantly write to them when a user triggers some event which requires a message to be pushed to the arduino. How to do this varies a bit depending on what type of server application you are running.
2.1 Node.js "walk-through" of main issues
In Node.js, for example, you can call res.write() on a connection without closing it - this should give a similar effect to having an open serial connection to the Arduino. That leaves you with the issue of managing the connection: should the server periodically check a database for messages for the Arduino? That merely removes one link from the arduino -> server -> database polling chain, so we should be able to do better.
We can attach a function triggered by the event of a message being added to the database. Node-orm2 is a database Object Relational Model driver for node.js, and it offers hooks such as afterSave and afterCreate which you can utilize for this type of thing. Depending on your application, you may be better off not using a database at all and simply using javascript objects.
The only remaining issue, then, is: once the hook is triggered, how do we get the correct connection into scope so we can write to it? You can save all the relevant data you have about the request in some global data structure, perhaps a dictionary indexed by an Arduino ID, and in the triggered function you fetch that data, i.e. the request context, and write to it.
See this blog post for a great example, including node.js code which manages open connections, closing them properly and clearing from memory on timeout etc.
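The paragraphs above describe this in Node.js terms; purely as a sketch of the connection-registry idea, here is the same shape in Python (the device IDs, port number and framing are invented for illustration, not part of any real protocol).
import socket
import threading

open_connections = {}            # arduino_id -> connected socket

def accept_loop(server):
    while True:
        conn, _ = server.accept()
        arduino_id = conn.recv(64).decode().strip()   # device announces its ID
        open_connections[arduino_id] = conn           # keep it around for pushes

def push(arduino_id, message):
    # Called from whatever event/hook fires when a message is created.
    conn = open_connections.get(arduino_id)
    if conn:
        conn.sendall(message.encode() + b"\n")        # instant push, no polling

server = socket.create_server(("0.0.0.0", 9000))
threading.Thread(target=accept_loop, args=(server,), daemon=True).start()
# elsewhere, e.g. from an afterSave-style hook: push("arduino-42", "hello")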
3 Conclusion
I haven't tested this myself, but I plan to, since I already have an application using an Arduino and Node.js which is currently implemented with normal polling. Hopefully I will get around to it soon and report back with results.
Typically in long polling (from what I've read) the connection is closed once data is sent back to the client (the Arduino), although I don't see why that would be necessary. I plan to try keeping the same connection open for multiple messages, closing it only after a fixed time interval to re-establish the connection - and I hope to set this interval fairly high, maybe 5-15 minutes.
We use Pubnub to send notifications to a client web browser so a user can know immediately when they have received a "message" and stuff like that. It works great.
This seems to have the same constraints that you are looking at: no static IP, no port forwarding. The user can theoretically just plug the thing in...
It looks like Pubnub has an Arduino library:
https://github.com/pubnub/arduino

Delay before sending message over socket - how does that help?

I have a TCP/IP socket interface to a third-party software app. I've implemented this interface for several customer sites with no problem. The latest customer, though... problems. We've turned on logging in the apps on either end and also installed Wireshark on the PC to log raw TCP/IP traffic. With that, we've proved that my server app successfully sends the message out and the PC receives it, but the client app doesn't see it. (This is a totally intermittent problem, which is why it's such a pain to troubleshoot.)
The socket details are as simple as they come: one socket handling two way communications between the server and the pc. The messages are plain ascii text and fairly short (not XML). The server initiates communications by sending the first message, and then the client responds with several messages. The socket is kept open at all times while the apps are running. The client app is designed so that the end user can only process one case at a time, which prevents message collisions from happening. They have some sort of polling set up, their app "hibernates" until it sees the initiating message from the server.
The third party vendor has advised me to add a few second delay before I send them the initiating message. I can't see how that helps. If the client is "sleeping", just polling the socket waiting for a message, how does adding a delay before the first message help? It's not like we send two messages and the second one gets lost. It's losing the first message. So I don't see how it matters if we send that message now or two seconds from now.
I've asked them and they haven't given me details. It could be some proprietary detail in their coding that they don't want to disclose to me, and that's fair. So I'm asking here because I'm always learning new things about socket programming. Maybe you guys can shed some light on how polling a TCP/IP socket can be affected by message timing?
Since it's someone else's client and they won't tell you what it's doing (other than saying 'insert a delay'), the answer is probably that their client is reading and discarding the message because it's not yet in a state to deal with it. The delay gives the client time to get into a state where it can respond to the message properly.
In other words, the client has a race condition. One easy way this can happen is if they have one thread for reading messages and another for dealing with them.
Short of running strace(1) on the client to see what system calls it is making, it's tough to tell what the client is actually doing.
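To make the hypothesized race concrete, here is a deliberately simplified Python sketch of a client whose reader thread drains the socket before the message handler has been registered; anything arriving in that window is read and lost, which is exactly the kind of bug that a few seconds of delay on the sender's side would paper over. The socketpair and message names are stand-ins, not the real protocol.
import socket
import threading
import time

handler = None                            # registered only after slow start-up

def reader(sock):
    while True:
        data = sock.recv(4096)
        if not data:
            break
        if handler is None:               # not ready yet: message read and lost
            continue
        handler(data)

server_side, client_side = socket.socketpair()   # stand-in for the real TCP link
threading.Thread(target=reader, args=(client_side,), daemon=True).start()

server_side.sendall(b"INIT-CASE-123\n")   # sent immediately -> silently dropped
time.sleep(3)                             # client still doing its start-up work
handler = lambda d: print("client got:", d)
server_side.sendall(b"INIT-CASE-456\n")   # sent after a delay -> handled
time.sleep(0.5)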

Using NetConnection and URLStream to send/receive data at high frequency

I'm writing a Comet-like app using Flex on the client and my own hand-written server.
I need to be able to send short bursts of data from the client at quite a high frequency (e.g. of the order of 10ms between sends).
I also need the server to push short bursts of data at a similarly high frequency.
I'm using NetConnection.call() to send the data to the server, and URLStream (with chunked encoding) to push the data from the server to the client.
What I've found is that the data isn't being sent/received as soon as it's available. For example, in IE, it seems the data is sent every 200ms rather than as soon as NetConnection.call() is called. Similarly, URLStream isn't making the data available as soon as the server is sending it.
Judging by the difference in behaviour between the browsers, it seems as though the Flash Player (version 10) is relying on the host browser to do all the comms. Can anyone confirm this? Update: This is very likely as only the host browser would know about the proxy settings that might be set.
I've tried using the Socket class and there's no problem with speed there: it works perfectly. However, I'd like to be able to use HTTP-based (port 80) connections so that my app can run in heavily fire-walled environments (I tried using a Socket over port 80, but that has its problems).
Incidentally, all development/testing has been done on an internal LAN, so bandwidth/latency is not an issue.
Update: The data being sent/received is in small packets and doesn't need to be in any particular format. For example, I might need to send a short array of Numbers, and this could either be encoded in AMF (e.g. via NetConnection.call()) or could be put into GET parameters (e.g. using sendToURL()). The main point of my question is really to see whether anyone else has experienced the same problem in calling NetConnection/URLStream frequently, and whether there is a workaround (it's also possible that the fault lies with my server code of course, rather than Flash).
Thanks.
Turns out the problem had nothing to do with Flash/Flex or any of the host browsers. The problem was in my server code (written in C++ on Linux), and without access to my source code the cause is hard to find (so I couldn't have hoped for an answer from this forum).
Still - thank you everyone who chipped in.
It was only after looking carefully at the output shown in Wireshark that I noticed the problem, which was twofold:
Nagle's algorithm
I was sending replies in multiple packets by calling write() multiple times (e.g. once for the HTTP response header and again for the HTTP response body). Because of Nagle's algorithm, the server's TCP/IP stack was waiting for an ACK for the first small packet before sending the second, and the client's delayed-ACK mechanism waited up to 200 ms before sending that ACK, so the server took at least 200 ms to send the full HTTP response.
The solution is to call send() with the MSG_MORE flag until all the logically connected blocks have been written. I could also have used writev(), or setsockopt() with TCP_CORK, but send() suited my existing code better.
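My server is C++, but here is the same idea sketched in Python on Linux, using TCP_CORK (the socket-level cousin of the per-call MSG_MORE flag; both are Linux-specific):
import socket

def send_response(sock, header, body):
    # Hold partial frames until un-corked, so header + body leave as one segment
    # instead of two small writes tripping over Nagle plus delayed ACKs.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)
    sock.sendall(header)
    sock.sendall(body)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 0)   # flush as one unit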
Chunk-encoded streams
I'm using a never-ending HTTP response with chunked encoding to push data back to the client. Nagle's algorithm needs to be turned off here because, even if each chunk is written as a single packet (using MSG_MORE), the client OS's TCP/IP stack will still wait up to 200 ms before sending back an ACK, and with Nagle enabled the server can't push the next chunk until it gets that ACK.
The solution here is to stop the server from waiting for an ACK for each sent packet before sending the next one, which is done by calling setsockopt() with the TCP_NODELAY option.
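Again sketched in Python for brevity: disabling Nagle on the server-side socket so each chunk is pushed immediately instead of queueing behind the previous chunk's ACK.
import socket

def enable_low_latency(sock):
    # Disable Nagle's algorithm on this connection.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

def push_chunk(sock, payload):
    # One HTTP chunk: size in hex, CRLF, data, CRLF; goes out immediately.
    sock.sendall(b"%x\r\n%s\r\n" % (len(payload), payload))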
The above solutions only work on Linux and aren't POSIX-compliant (I think), but that isn't a problem for me.
I'm almost 100% sure the player relies on the browser for such communications. I can't find an official page stating so at the moment, but check this out, for example:
Applications hosting the Flash Player ActiveX control or Flash Player plug-in can use the EnforceLocalSecurity and DisableLocalSecurity API calls to control security settings.
Which I think somehow implies the idea. Also, I've run into some network-related bugs on Firefox/IE only, which again points to the player using each browser for networking (otherwise there wouldn't be such differences).
And regarding your latency problem, I think that if speed is critical, your best bet is sockets. You have some work to do, but it seems possible; check out the docs again:
This error occurs in SWF content. Dispatched if a call to Socket.connect() attempts to connect either to a server outside the caller's security sandbox or to a port lower than 1024. You can work around either problem by using a cross-domain policy file on the server.
HTH,
Juan

Detecting missing responses to long running HTTP (SOAP) requests

I need a way to detect a missing response to a long running HTTP POST request. This problem arises when the network infrastructure (firewalls, proxies, unplugged cables, etc.) drops the response packets. The server may detect this failure, but the client cannot send additional bytes after the POST to probe the state of the TCP connection. The failure may be limited to a single TCP connection. For example I may be able to subsequently open a new TCP connection to the server.
I'm looking for a solution that still uses HTTP POST and does not change the duration of the server side processing.
Some solutions that I can think of are:
Provide a side-channel interface to retrieve request & response history. If the history lists the response as having been sent (presumably resulting in a TCP error) but I have not received it within a reasonable time, I can generate a local error.
Use an X header to request that the server deliver "spurious" 100 Continue provisional responses at a regular interval. If I fail to see an expected 100 Continue or a non-provisional response, I can generate a local error.
Is there a state of the art solution for this problem?
It sounds to me like you are using SOAP for something that would be much better done with a stateful connection or a server-side push technology.
