I am communicating with a server from a Verifone Nurit 8320 (DTE) via a Siemens MC55 GSM modem (DCE).
I am sending AT commands to the Siemens MC55 (DCE) over the UART.
I insert the required delay of 100 ms between every AT command, and I flush the DTE's UART before sending any command on it.
Now the problem is this:
In many cases the DCE responds with the response to the previously executed AT command. The DCE's UART is never flushed.
Where can I get the set of AT commands that would let me flush the UART buffer of the DCE?
The problem you are trying to solve (flushing the DCE UART) is the wrong problem to focus on, because it is a problem that does not exist in AT command communication.
After sending an AT command to the DCE you MUST read every single character the DCE sends back in response, and parse that text until you have received a Final Result Code (e.g. OK, ERROR and a few more) before you can send the next AT command. Any other way is doomed to bring in an endless list of problems and will never, never, ever work reliably.
See this answer for a general outline of what your AT command sending/parsing should look like. Using a fixed time delay should never be done; it will either abort the command or, at best, waste time waiting unnecessarily long while still never removing the risk of aborting the command despite the wait. See this answer for more information about aborting AT commands.
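As a rough illustration of that loop, here is a minimal sketch in C. It assumes a POSIX-style blocking read()/write() on an already-opened UART descriptor (uart_fd), so on the Nurit the actual port I/O calls will differ, but the structure (collect complete lines, stop only on a final result code) is the point:

#include <string.h>
#include <unistd.h>

/* A line is "final" when it is one of the V.250 final result codes
   (or a +CME/+CMS ERROR from 3GPP 27.007/27.005). */
static int is_final_result(const char *line)
{
    return strcmp(line, "OK") == 0 ||
           strcmp(line, "ERROR") == 0 ||
           strcmp(line, "NO CARRIER") == 0 ||
           strcmp(line, "BUSY") == 0 ||
           strcmp(line, "NO ANSWER") == 0 ||
           strcmp(line, "NO DIALTONE") == 0 ||
           strncmp(line, "+CME ERROR", 10) == 0 ||
           strncmp(line, "+CMS ERROR", 10) == 0;
}

/* Send one AT command and consume the complete response before returning. */
static int send_at_command(int uart_fd, const char *cmd)
{
    char line[256];
    size_t len = 0;
    char c;

    write(uart_fd, cmd, strlen(cmd));       /* e.g. "AT+CSQ\r" */

    for (;;) {
        if (read(uart_fd, &c, 1) != 1)      /* blocking, one byte at a time */
            return -1;                      /* port error */
        if (c == '\r')
            continue;
        if (c != '\n') {
            if (len < sizeof(line) - 1)
                line[len++] = c;
            continue;
        }
        line[len] = '\0';
        len = 0;
        if (line[0] == '\0')
            continue;                       /* ignore empty lines */
        /* ...store or parse intermediate result lines here... */
        if (is_final_result(line))
            return 0;                       /* only now may the next command be sent */
    }
}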
I am designing software controlling several serial ports; the operating system is OpenWrt. The device the application runs on is a single-core ARM9 @ 450 MHz. The protocol on the serial ports is Modbus.
The problem is with the Modbus slave implementation. I designed it in a real-time manner, looping reading data from the serial port (the port is opened in non-blocking mode with 0 characters to wait for and no timeouts). The sequence/data-stream timeout is about 4 milliseconds at 9600/8/N/1 (3.5 characters, as advised). The timeout is checked only when the application sees nothing in the buffer, therefore if the application is slower than the incoming stream of characters, the timeout mechanism will not take effect until all characters have been removed from the buffer and the bus is quiet.
But I see that the CPU switches between threads and this thread is not scheduled for about 40-60 milliseconds, which is far too long for measuring those timeouts. While I guess the serial port buffer still receives the data (how large is that buffer?), I am unable to assess how much time passed between characters, and may treat the next message as a continuation of the previous one and miss a Modbus request targeted at my device.
Therefore, I guess, something must be redesigned (for the slave; the master is a different story). The first idea coming to mind is to forget about timeouts and just parse the incoming data after synchronizing with the whole stream (by finding an initial timeout). However, there are several problems: I must know everything about all types of Modbus messages to parse them correctly, find where they end and where the next message starts, and I do not see a way to differentiate a Modbus request from a Modbus response from the device. If the designers of the Modbus protocol had put a special bit in the command field identifying whether a message is a request or a response... but that is not the case, and I do not see a good way to tell whether the message I am getting is a request or a response without collecting the following bytes and checking the CRC16 at the would-be byte counts; that costs time while I am receiving bytes, and I may miss the window for responding to a request targeted at me.
Another idea would be using the blocking method with the VTIME timeout setting, but that value can only be set in tenths of a second (so the minimum is 100 ms), and this is too much given another +50 ms of possible CPU switching between threads; I think something like a 10 ms timeout is needed here. It is a good question whether VTIME is hardware time or software/driver time that is also subject to CPU thread interruptions, and how large the FIFO in the chip/driver is, i.e. how many bytes it can accumulate.
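For reference, here is a minimal sketch of the blocking VMIN/VTIME setup being discussed, assuming standard Linux termios and a made-up device path; it only illustrates the 100 ms floor rather than solving it:

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

/* Open 9600 8N1 so that a blocking read() returns a whole burst of bytes:
   it keeps reading until the line has been idle for VTIME after the first
   received byte (or until 255 bytes have arrived). */
int open_modbus_port(const char *dev)
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);
    cfsetispeed(&tio, B9600);
    cfsetospeed(&tio, B9600);
    tio.c_cc[VMIN]  = 255;  /* return only on the inter-character timeout */
    tio.c_cc[VTIME] = 1;    /* 1 = 100 ms; tenths of a second are the finest unit */
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}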
Any help/idea/advice will be greatly appreciated.
Update per @sawdust's comments:
non-blocking use has turned out not to be a good approach, because I do not poll hardware, I poll software, and the whole design is again subject to CPU switching and execution scheduling;
"using the built-in event-based blocking mode" would work if I were able to configure the UART timeout (VTIME) in 100 µs periods, and if, again, I could be sure I am not interrupted during execution; only then would I be able to get the timing right and restart reading in time to assess the next character's timing properly.
I'm working on the firmware of a device that is going to be connected to PCs using Bluetooth in serial port emulation mode.
During testing, I found out that modem-manager on Linux "helpfully" tries to detect it as a modem, sending the AT+GCAP command; to this, currently my device replies with something like INVALIDCMD AT+GCAP. That is the correct response for my protocol, but obviously isn't an AT reply, so modem-manager isn't satisfied and tries again with AT+GCAP and other modem-related stuff.
Now, I found some workarounds for modem-manager (see here and thus here, in particular the udev rule method), but:
they are not extremely robust (I have to make a custom udev rule that may break if we change the Bluetooth module);
I fear that not only modem-manager, but similar software/OS features (e.g. on Windows or OS X) may give me similar annoyances.
Also, having full control over the firmware, I can add a special case for AT+GCAP and similar stuff; so, coming to my question:
Is there a standard/safe reply to AT+GCAP and other similar modem-probing queries to tell "I'm not a modem, go away and leave me alone?"
(making an answer out of the comments)
In order to indicate "I do not understand any AT commands at all" (i.e. I am not a modem), the correct response to any received AT command is silence.
In order to indicate "I do not understand this particular AT command", the correct response is ERROR.
Anything in between will trigger implementation-defined behaviour in the entity sending the AT commands. Some will possibly give up right away, while modem-manager apparently is set on resending the command until it gets a "proper" response.
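If you do decide to special-case AT probes in the firmware (as the question suggests), a rough sketch could look like this; uart_send_line() is a placeholder for whatever send routine the firmware actually has:

#include <string.h>

extern void uart_send_line(const char *s);   /* hypothetical firmware send routine */

/* Called for each complete line received over the SPP link. */
void handle_line(const char *line)
{
    if (strncmp(line, "AT", 2) == 0) {
        /* Probe from modem-manager or similar: either stay completely
           silent ("I am not a modem at all") or answer with a bare
           final result code ("I do not understand this command"). */
        uart_send_line("ERROR");
        return;
    }
    /* ...normal protocol handling for everything else... */
}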
I have an app that is connected to a balance through the serial port. The balance is quite large and pressing the PRINT button is not an option, so my app asks the balance to print programmatically upon a certain user action. The balance interface allows it and defines a print command. All works for a while. Then, after weighing a few items, the balance starts outputting the previous weight... I am baffled at this point, since there are only a few commands defined and not many options for what can be done. I am already flushing the OUT buffer each time, so I don't know why it keeps giving me the old value.
Here is my code:
if (askedToPrint)
{
    _sp.DiscardOutBuffer();
    // ask the balance to print
    _sp.Write("P\r\n");
}
_sp is a SerialPort object.
I am using WinCE 6.0 and Compact Framework 2.0/C#
If you are reading data from the serial port using ReadLine() or Read(), there is a possibility that the balance has sent multiple packets which are now queued, so before reading you have to discard the already pending packets. The other way around is, before writing the print request, to use the ReadExisting() method to read all available data. If the balance still sends old packets after that, then there might be a problem with the balance itself.
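In .NET Compact Framework terms that means calling _sp.DiscardInBuffer() or _sp.ReadExisting() just before the Write(), and then reading exactly one reply per request. Since the same pattern applies to any serial link, here is a language-neutral sketch of it in C over a hypothetical POSIX serial descriptor:

#include <string.h>
#include <termios.h>
#include <unistd.h>

/* Ask the balance for the current weight; 'fd' is an already-configured
   serial descriptor and "P\r\n" mirrors the command from the question. */
int request_weight(int fd, char *reply, size_t replylen)
{
    tcflush(fd, TCIFLUSH);              /* drop any stale, unread responses */
    write(fd, "P\r\n", 3);              /* ask the balance to print */

    size_t len = 0;
    while (len < replylen - 1) {
        char c;
        if (read(fd, &c, 1) != 1)
            return -1;
        if (c == '\n')
            break;                      /* one complete record per request */
        reply[len++] = c;
    }
    reply[len] = '\0';
    return 0;
}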
I am designing and testing a client-server program based on TCP sockets (Internet domain). Currently I am testing it on my local machine and am not able to understand the following about SIGPIPE.
1. SIGPIPE appears quite randomly. Can it be made deterministic?
The first tests involved a single small (25-character) send operation from the client and a corresponding receive at the server. The same code, on the same machine, runs successfully or fails (SIGPIPE) totally outside my control. The failure rate is about 45% of runs (quite high). So, can I tune the machine in any way to minimize this?
2. The second round of testing was to send 40000 small (25-character) messages from the client to the server (1 MB of total data) and then have the server respond with the total size of data it actually received. The client sends data in a tight loop and there is a SINGLE receive call at the server. It works only for a maximum of 1200 bytes of total data sent, and again there are these non-deterministic SIGPIPEs, about 70% of the time now (really bad).
Can someone suggest some improvement to my design (probably it will be at the server)? The requirement is that the client shall be able to send a medium to very high amount of data (again, about 25 characters per message) after a single socket connection has been made to the server.
I have a feeling that multiple sends against a single receive will always be lossy and very inefficient. Should we be combining the messages and sending them in one send() operation only? Is that the only way to go?
SIGPIPE is sent when you try to write to a pipe/socket whose other end has been closed. Ignoring the signal (or installing a handler for it) will make send() return an error instead:
signal(SIGPIPE, SIG_IGN);
Alternatively, on BSD-derived systems (including macOS) you can disable SIGPIPE for an individual socket with SO_NOSIGPIPE:
int n = 1;
setsockopt(thesocket, SOL_SOCKET, SO_NOSIGPIPE, &n, sizeof(n));
Also, the data amounts you're mentioning are not very high. Likely there's a bug somewhere that causes your connection to close unexpectedly, giving a SIGPIPE.
SIGPIPE is raised because you are attempting to write to a socket that has been closed. This does indicate a probable bug so check your application as to why it is occurring and attempt to fix that first.
Attempting to just mask SIGPIPE is not a good idea because you don't really know where the signal is coming from and you may mask other sources of this error. In multi-threaded environments, signals are a horrible solution.
In the rare cases where you cannot avoid this, you can suppress the signal on a per-send basis. If you set the MSG_NOSIGNAL flag on send()/sendto(), it will prevent SIGPIPE from being raised. If you do trigger this error, send() returns -1 and errno will be set to EPIPE. Clean and easy. See man send for details.
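For example, a small helper along those lines (Linux-specific, since MSG_NOSIGNAL is not available everywhere; on BSD/macOS the per-socket SO_NOSIGPIPE option shown earlier is the equivalent):

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Send the whole buffer; returns 0 on success, -1 if the peer has gone away. */
int send_all(int sock, const char *buf, size_t len)
{
    while (len > 0) {
        ssize_t n = send(sock, buf, len, MSG_NOSIGNAL);  /* no SIGPIPE, just -1 */
        if (n < 0) {
            if (errno == EINTR)
                continue;
            return -1;      /* errno == EPIPE means the connection is closed */
        }
        buf += n;
        len -= (size_t)n;
    }
    return 0;
}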
I was wondering how TCP/IP communication is implemented in Unix. When you do a send over the socket, does the TCP-level work (assembling packets, CRC, etc.) get executed in the same execution context as the calling code?
Or, what seems more likely, is a message sent to some other daemon process responsible for TCP communication? That process then takes the message and performs the requested work of copying memory buffers, assembling packets, etc.? So the calling code resumes execution right away and the TCP work is done in parallel? Is this correct?
Details would be appreciated. Thanks!
The TCP/IP stack is part of your kernel. What happens is that you call a helper method which prepares a "kernel trap". This is a special kind of exception which puts the CPU into a mode with more privileges ("kernel mode"). Inside of the trap, the kernel examines the parameters of the exception. One of them is the number of the function to call.
When the function is called, it copies the data into a kernel buffer and prepares everything for the data to be processed. Then it returns from the trap, the CPU restores registers and its original mode and execution of your code resumes.
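To make "the number of the function to call" concrete: on Linux you can trigger the same trap yourself through the generic syscall() wrapper (a toy example; normally the libc wrappers such as write() and send() do this for you):

#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    const char msg[] = "hello\n";
    /* SYS_write is the function number the kernel examines inside the trap;
       the remaining arguments travel along in registers. */
    syscall(SYS_write, 1, msg, sizeof(msg) - 1);
    return 0;
}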
Some kernel thread will pick up the copy of the data and use the network driver to send it out, do all the error handling, etc.
So, yes, after copying the necessary data, your code resumes and the actual data transfer happens in parallel.
Note that this is for TCP packets. The TCP protocol does all the error handling and handshaking for you, so you can give it all the data and it will know what to do. If there is a problem with the connection, you'll notice only after a while since the TCP protocol can handle short network outages by itself. That means you'll have "sent" some data already before you'll get an error. That means you will get the error code for the first packet only after the Nth call to send() or when you try to close the connection (the close() will hang until the receiver has acknowledged all packets).
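A small sketch of that delayed-error effect (SIGPIPE ignored so the failure shows up as a return value; 'sock' is assumed to be a connected TCP socket whose peer may already be gone):

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/socket.h>

void demo_delayed_error(int sock)
{
    signal(SIGPIPE, SIG_IGN);       /* report write failures via errno instead */

    char chunk[1024] = {0};
    for (int i = 0; i < 1000; i++) {
        if (send(sock, chunk, sizeof(chunk), 0) < 0) {
            /* Typically EPIPE or ECONNRESET, and often only after a number of
               "successful" sends that merely landed in the kernel buffer. */
            printf("send %d failed: errno=%d\n", i, errno);
            return;
        }
    }
}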
The UDP protocol doesn't buffer. When the call returns, the packet is on its way. But it's "fire and forget", so you only know that the driver has put it on the wire. If you want to know whether it has arrived somewhere, you must figure out a way to do that yourself. The usual approach is to have the receiver send an ack UDP packet back (which might also get lost).
No - there is no parallel execution. It is true that the execution context when you're making a system call is not the same as your usual execution context. When you make a system call, such as for sending a packet over the network, you must switch into the kernel's context - the kernel's own memory map and stack, instead of the virtual memory you get inside your process.
But there are no daemon processes magically dispatching your call. The rest of the execution of your program has to wait for the system call to finish and return whatever values it will return. This is why you can count on return values being available right away when you return from the system call - values like the number of bytes actually read from the socket or written to a file.
I tried to find a nice explanation for how the context switch to kernel space works. Here's a nice in-depth one that even focuses on architecture-specific implementation:
http://www.ibm.com/developerworks/linux/library/l-system-calls/