I have to write an application that will stream a book to many clients. While writing it I ran into a segmentation fault. I know that when the program is killed, SIG45 is sent. Here is my code.
Server code: http://wklej.org/id/3365150/
Client code: http://wklej.org/id/3365151/
The problem occurs when I run the first client. Could you please tell me what I am doing wrong here?
I ran my example with gdb and this is what I got.
Program received signal SIG45, Real-time event 45.
main (argc=5, argv=0x7fffffffde28) at serverA.c:172
172 while(1) {}
I am trying to control a 5V 4-channel relay module (http://www.icstation.com/icstation-micro-channel-relay-module-control-relay-module-icse012a-p-4012.html) using a Raspberry Pi 4 Model B with 2 GB RAM.
But sometimes I observe the error "pl2303 ttyUSB0: pl2303_read_int_callback - usb_submit_urb failed with result -1" and the relay does not perform the required operation.
It would be good to know the root cause of this issue from the professionals on this forum.
Any hint or clue would be really helpful and appreciated :).
Thanks again.
TL;DR: there seems to be a defect in the pl2303 driver caused by a race or incorrect state after multiple connect/disconnect cycles. Simply reload the pl2303 kernel module to work around this issue when you hit it.
Error '-1' (EPERM) in the kernel USB stack is rare: I can find only one occurrence of it in the URB submit path, e.g.:
https://elixir.bootlin.com/linux/v5.4.99/source/drivers/usb/core/hcd.c#L1152
On top of this, the kernel documentation on usb_kill_urb says:
To cancel an URB synchronously, call usb_kill_urb():
void usb_kill_urb(struct urb *urb)
It does everything usb_unlink_urb() does, and in addition it waits until after
the URB has been returned and the completion handler has finished. It also marks
the URB as temporarily unusable, so that if the completion handler or anyone
else tries to resubmit it they will get a -EPERM error
So basically the driver tries to submit the INT IN URB after it has been killed. Looking at the pl2303 driver code, you can see only two places where the INT IN URB is poisoned, e.g.:
https://elixir.bootlin.com/linux/v5.4.65/source/drivers/usb/serial/pl2303.c#L756
https://elixir.bootlin.com/linux/v5.4.65/source/drivers/usb/serial/pl2303.c#L788
In both cases this happens on either close() or an unsuccessful open() of the USB serial device.
A proper fix would be to:
Check whether it is open() that fails in the usb_serial_generic_open() function
Check whether the INT IN URB killing is done properly and there is no race
Check whether the pl2303 code is aligned in general with recent USB serial driver logic
A quick fix on the running system would be simply:
rmmod pl2303 && modprobe pl2303
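If the error keeps coming back, that workaround can be automated. The sketch below is only an illustration, not part of a proper fix: it assumes the error line matches the pl2303 message quoted in the question, that dmesg supports --follow (util-linux), and that the program runs as root.

// Hypothetical watchdog that just automates the rmmod/modprobe workaround above.
#include <cstdio>
#include <cstdlib>
#include <cstring>

int main() {
    // Stream new kernel messages as they arrive.
    FILE *log = popen("dmesg --follow", "r");
    if (!log) { perror("popen"); return 1; }

    char line[1024];
    while (fgets(line, sizeof line, log)) {
        // Assumed error text, taken from the message quoted in the question.
        if (strstr(line, "pl2303") && strstr(line, "usb_submit_urb failed")) {
            // Same workaround as above, triggered automatically.
            std::system("rmmod pl2303 && modprobe pl2303");
        }
    }
    pclose(log);
    return 0;
}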
I have a Java process that is getting a shutdown signal. It is one of SIGTERM, SIGINT, or SIGHUP, since the shutdown hook is running.
I can't figure out why we are getting the signal. The process runs on Ubuntu and I can't find anything in dmesg to indicate that the OS sent the signal.
Is there anywhere else that these messages would go? Are there any tools I can attach to the PID to get information about the signal?
Thanks in advance
So I found the OOM killer message in the syslog. I was expecting it to be in dmesg. My mistake.
I'm trying to maintain a persistent connection between a client and a remote server using Qt. My server side is fine. I'm writing the client side in Qt, using QNetworkAccessManager and requesting the server with its get() method, which takes a QNetworkRequest. I am able to send requests and receive responses.
But after some time (approximately 2 minutes) the client informs the server that the connection has been closed, by automatically posting a request. I think QNetworkAccessManager is setting a timeout on this connection. I want to maintain a persistent connection between the two ends.
Is my approach correct? If not, can someone point me in the right direction?
This question is interesting, so let's do some research. I set up an nginx server with a large keep-alive timeout and wrote the simplest Qt application:
QApplication a(argc, argv);
QNetworkAccessManager manager;
QNetworkRequest r(QUrl("http://myserver/"));
manager.get(r);
return a.exec();
Also I used the following command (in Linux console) to monitor connections and check if the problem reproduces at all:
watch -n 1 netstat -n -A inet
I took a quick look at the Qt sources and found that it uses QTcpSocket and closes it in QHttpNetworkConnectionChannel::close. So I opened the debugger console (Window → Views → Debugger Log in Qt Creator) and added a breakpoint while the process was paused:
bp QAbstractSocket::close
Note: this is for cdb (the MS debugger); other debuggers require other commands. Another note: I use Qt with debug info, and this approach may not work without it.
After two minutes of waiting I got the backtrace of the close() call!
QAbstractSocket::close qabstractsocket.cpp 2587 0x13fe12600
QHttpNetworkConnectionPrivate::~QHttpNetworkConnectionPrivate qhttpnetworkconnection.cpp 110 0x13fe368c4
QHttpNetworkConnectionPrivate::`scalar deleting destructor' untitled 0x13fe3db27
QScopedPointerDeleter<QObjectData>::cleanup qscopedpointer.h 62 0x140356759
QScopedPointer<QObjectData,QScopedPointerDeleter<QObjectData>>::~QScopedPointer<QObjectData,QScopedPointerDeleter<QObjectData>> qscopedpointer.h 99 0x140355700
QObject::~QObject qobject.cpp 863 0x14034b04f
QHttpNetworkConnection::~QHttpNetworkConnection qhttpnetworkconnection.cpp 1148 0x13fe35fa2
QNetworkAccessCachedHttpConnection::~QNetworkAccessCachedHttpConnection untitled 0x13fe1e644
QNetworkAccessCachedHttpConnection::`scalar deleting destructor' untitled 0x13fe1e6e7
QNetworkAccessCachedHttpConnection::dispose qhttpthreaddelegate.cpp 170 0x13fe1e89e
QNetworkAccessCache::timerEvent qnetworkaccesscache.cpp 233 0x13fd99d07
(next lines are not interesting)
The class responsible for this action is QNetworkAccessCache. It sets up timers and makes sure that its objects are deleted when QNetworkAccessCache::Node::timestamp is in the past. And these objects are HTTP connections, FTP connections, and credentials.
Next, what is timestamp? When the object is released, its timestamp is calculated in the following way:
node->timestamp = QDateTime::currentDateTime().addSecs(ExpiryTime);
And ExpiryTime = 120 is hardcoded.
All involved classes are private, and I found no way to prevent this from happening. So it is much simpler to send keep-alive requests every minute (at least now you know that one minute is safe enough), as the alternative is to rewrite Qt code and compile a custom version.
I'd say that, by definition, a connection with a 2-minute timeout qualifies as persistent. If it weren't persistent, you'd have to reconnect on every request. 2 minutes is quite generous compared to some other software out there. But it is set to eventually time out after a period of inactivity, and that's a good thing which should not come as a surprise. Some software allows the timeout period to be changed, but from Pavel's investigation it appears that in the case of Qt the timeout is hardcoded.
Luckily the solution is simple: just rig a timer to send a heartbeat (a dummy request, not to be confused with a "heartbeat network") every minute or so to keep the connection alive. Before you use your connection, deactivate the timer; after you are done with the connection, restart the timer.
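For illustration, a minimal sketch of such a heartbeat with QTimer and QNetworkAccessManager could look like the following. The one-minute interval and the http://myserver/ping URL are assumptions; adapt both to your server.

#include <QCoreApplication>
#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QTimer>
#include <QUrl>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    QNetworkAccessManager manager;

    // Heartbeat: a cheap dummy GET every minute keeps the cached connection alive.
    QTimer heartbeat;
    heartbeat.setInterval(60 * 1000);
    QObject::connect(&heartbeat, &QTimer::timeout, [&manager]() {
        manager.get(QNetworkRequest(QUrl("http://myserver/ping")));
    });
    heartbeat.start();

    // Before issuing a real request, stop the timer; restart it once the
    // reply has finished, as described above.

    return app.exec();
}

The dummy request does not need to return anything meaningful; its only purpose is to reset the 2-minute expiry inside QNetworkAccessCache.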
I am communicating with a server from a Verifone Nurit 8320 (DTE) via a Siemens MC55 GSM modem (DCE).
I am passing AT commands over the UART to control the Siemens MC55 GSM modem (DCE).
I have added a delay of 100 ms (required) between every AT command, and I am flushing the DTE's UART before sending any command on it.
Now the problem is this:
In many cases the DCE responds with the response of the previously executed AT command. The DCE UART is never flushed.
Where can I find the set of AT commands that would let me flush the DCE's UART buffer?
The problem you are trying to solve (flushing the DCE UART) is the wrong problem to focus on, because it is a problem that does not exist in AT command communication.
After sending an AT command to the DCE you MUST read every single character sent back as a response from the DCE, and parse the text until you have received a Final Result Code (e.g. OK, ERROR, and a few more) before you can send the next AT command. Any other way is doomed to bring an endless list of problems and will never, never, ever work reliably.
See this answer for a general outline of how your AT command sending/parsing should look. A fixed time delay should never be used; it will either abort the command or, in the best case, waste time by waiting unnecessarily long while never removing the risk of aborting despite the wait. See this answer for more information about aborting AT commands.
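Not an authoritative implementation, but a rough sketch of that send-and-read-until-final-result-code loop over a POSIX serial port might look like this. The hard-coded result codes and the assumption that the port is already opened and configured are simplifications; a real parser should work line by line and also handle codes such as "+CME ERROR".

// Sketch only: send one AT command, then read every character of the
// response until a Final Result Code is seen, before sending the next command.
#include <string>
#include <unistd.h>

std::string send_at_command(int fd, const std::string &cmd)
{
    std::string wire = cmd + "\r";
    write(fd, wire.data(), wire.size());

    std::string response;
    char c;
    while (read(fd, &c, 1) == 1) {          // read every single character
        response += c;
        // Simplified Final Result Code check (OK / ERROR only).
        if (response.find("\r\nOK\r\n") != std::string::npos ||
            response.find("\r\nERROR\r\n") != std::string::npos)
            break;
    }
    return response;   // only now is it safe to send the next AT command
}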
I am designing and testing a client/server program based on TCP sockets (Internet domain). Currently I am testing it on my local machine and am not able to understand the following about SIGPIPE.
1. SIGPIPE appears quite randomly. Can it be deterministic?
The first tests involved a single small (25-character) send operation from the client and a corresponding receive at the server. The same code, on the same machine, either runs successfully or fails (SIGPIPE), totally out of my control. The failure rate is about 45% (quite high). So, can I tune the machine in any way to minimize this?
2. The second round of testing was to send 40,000 small (25-character) messages from the client to the server (about 1 MB of data in total), with the server then responding with the total size of data it actually received. The client sends data in a tight loop and there is a SINGLE receive call at the server. It works only for a maximum of 1200 bytes of total data sent, and again there are these non-deterministic SIGPIPEs, about 70% of the time now (really bad).
Can someone suggest some improvement in my design (probably it will be at the server)? The requirement is that the client shall be able to send a medium to very high amount of data (again, about 25 characters per message) after a single socket connection has been made to the server.
I have a feeling that multiple sends against a single receive will always be lossy and very inefficient. Should we be combining the messages and sending them in one send() operation only? Is that the only way to go?
SIGPIPE is raised when you try to write to a pipe or socket whose other end has been closed. Ignoring the signal (or handling it) will make send() return an error instead:
signal(SIGPIPE, SIG_IGN);
Alternatively, on platforms that provide SO_NOSIGPIPE (e.g. BSD/macOS; it is not available on Linux), you can disable SIGPIPE for a single socket:
int n = 1;
setsockopt(thesocket, SOL_SOCKET, SO_NOSIGPIPE, &n, sizeof(n));
Also, the data amounts you're mentioning are not very high. Likely there's a bug somewhere that causes your connection to close unexpectedly, giving a SIGPIPE.
SIGPIPE is raised because you are attempting to write to a socket that has been closed. This does indicate a probable bug, so check your application to see why it is occurring and attempt to fix that first.
Attempting to just mask SIGPIPE is not a good idea, because you don't really know where the signal is coming from and you may mask other sources of this error. In multi-threaded environments, signals are a horrible solution.
In the rare cases where you cannot avoid this, you can suppress the signal on a per-send basis. If you set the MSG_NOSIGNAL flag on send()/sendto(), it will prevent SIGPIPE from being raised. If you do trigger this error, send() returns -1 and errno is set to EPIPE. Clean and easy. See man send for details.
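A small sketch of that last approach, with the socket setup omitted since only the send call matters here:

#include <cerrno>
#include <cstdio>
#include <cstring>
#include <sys/socket.h>

// Send without raising SIGPIPE; a broken connection shows up as EPIPE instead.
ssize_t send_nosignal(int sock, const void *buf, size_t len)
{
    ssize_t n = send(sock, buf, len, MSG_NOSIGNAL);
    if (n == -1 && errno == EPIPE)
        fprintf(stderr, "peer closed the connection: %s\n", strerror(errno));
    return n;
}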