According to the manual page for recv(), errno is set to EAGAIN or EWOULDBLOCK if a receive timeout has been set using setsockopt(SO_RCVTIMEO) and that timeout expires.
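For reference, I mean a timeout set roughly like this (a sketch; the wrapper function is my own):

#include <sys/socket.h>
#include <sys/time.h>

/* Make recv() on 'sock' give up after 'seconds' and fail
 * with errno set to EAGAIN/EWOULDBLOCK. */
static int set_recv_timeout(int sock, time_t seconds)
{
    struct timeval tv = { .tv_sec = seconds, .tv_usec = 0 };
    return setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
}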
My question is: what happens if multiple such sockets are used with select()? Would select() return if one of the sockets times out due to inactivity, and if so, what would it return?
I am trying to implement a TFTP server with a feature to detect timeouts. One way could be to use a timeout with select(), but then I would have to use a different timeout value for each socket, keep updating the timer to the minimum value, do some more juggling, etc. It just feels like a lot of unnecessary work.
PS: The TFTP server is a concurrent server, with multiple clients handled using I/O multiplexing.
The timeout parameter of select() determines the maximum time that the select() call itself will wait for something to happen before the call returns, not how long individual sockets will wait before returning a timeout error.
It sounds like you want to declare some kind of error condition if you don't hear from a client for some period of time. With UDP, you will have to keep track of that yourself. For each client, keep a record of the last time you heard from it. Put select() in a loop with a timeout of something like 1 second; every time it returns, check the difference between the current time and the last time you heard from each client. When that difference exceeds whatever threshold you want, you have your error condition.
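A minimal sketch of that loop, assuming one (connected) UDP socket per client; the struct client bookkeeping, the constants, and the timeout handling are made up for illustration:

#include <stdio.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <time.h>

#define MAX_CLIENTS    16
#define CLIENT_TIMEOUT 5   /* seconds of silence before declaring an error */

struct client {
    int    sock;        /* -1 if the slot is unused */
    time_t last_heard;  /* updated on every datagram received */
};

static void serve(struct client clients[MAX_CLIENTS])
{
    char buf[516]; /* largest TFTP packet: 4-byte header + 512-byte block */

    for (;;) {
        fd_set rfds;
        int maxfd = -1;
        FD_ZERO(&rfds);
        for (int i = 0; i < MAX_CLIENTS; i++) {
            if (clients[i].sock != -1) {
                FD_SET(clients[i].sock, &rfds);
                if (clients[i].sock > maxfd)
                    maxfd = clients[i].sock;
            }
        }

        /* Wake up at least once a second, even if nothing is readable. */
        struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };
        if (select(maxfd + 1, &rfds, NULL, NULL, &tv) == -1)
            break; /* real code would inspect errno (e.g. EINTR) */

        time_t now = time(NULL);
        for (int i = 0; i < MAX_CLIENTS; i++) {
            if (clients[i].sock == -1)
                continue;
            if (FD_ISSET(clients[i].sock, &rfds)) {
                recv(clients[i].sock, buf, sizeof(buf), 0);
                clients[i].last_heard = now; /* heard from this client */
            } else if (now - clients[i].last_heard > CLIENT_TIMEOUT) {
                /* Timeout: retransmit the last block or drop the client. */
                fprintf(stderr, "client %d timed out\n", i);
            }
        }
    }
}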
We have a problem we've been suffering from for a long, long time: an unknown caller ID received from Asterisk in specific situations.
First, we have a SIP softphone (sipml5), and on the server side we have:
asterisk-11.25.0-0
elastix-4.0.0-1
Setup: we have an any-CID/DID inbound route that connects our calls to one ring group (which contains all the extensions).
The unknown caller ID shows up when we have:
1- A short timeout for the call (which leads the call to stick at Asterisk, and Asterisk resends the call to the extensions but with an unknown caller ID). A possible solution would be to use a bigger timeout.
2- All extensions hang up, the call sticks at Asterisk, and Asterisk resends it to the extensions with an unknown caller ID (a possible solution would be to prevent extensions from hanging up unless they answer the call first).
3- Receiving one unknown caller ID leads to successive unknown-caller-ID calls. No solution.
What we're trying to solve is the third problem, and our idea is to force Asterisk to wait for a specific timeout between inbound calls (we tried this manually by not allowing immediate successive calls, keeping a 4-5 second delay between calls, and it works fine).
We want to know which configuration has to be edited to force this timeout delay between inbound calls.
You can use the EPOCH function and store its value somewhere (ASTDB?). After that, compare it in the dialplan when dialling.
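For illustration, a rough and untested dialplan sketch of that idea (the context name and ASTDB key are made up, the 5-second window matches the delay you tested manually, and the final Goto target depends on your Elastix/FreePBX setup):

[custom-inbound-throttle]
exten => _X.,1,Set(LAST=${DB(inbound/last_call)})
 same => n,GotoIf($["${LAST}" = ""]?go)             ; first call: nothing to compare against
 same => n,GotoIf($[${EPOCH} - ${LAST} >= 5]?go)    ; enough time has passed since the last call
 same => n,Wait(5)                                  ; too soon: hold the call for the window
 same => n(go),Set(DB(inbound/last_call)=${EPOCH})  ; remember when this call arrived
 same => n,Goto(from-did-direct,${EXTEN},1)         ; continue the normal inbound flow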
No, Elastix does not support that and will not support it in the future (a weird request).
Let's suppose I have a custom server that listens for connections on some port and, once it has received a connection, starts sending data (a sort of logger). Here's the first question:
Can it be just binary data? Actually, I need just two non-zero 8-bit values, and I was thinking of a 0-value byte to separate each new portion of data.
These three bytes will be sent once or may be twice a second.
So now I am looking for a code snippet in Swift 2 to properly read this data. Normally, I would expect to call
connectSocket(IP,port)
which would connect to the socket, and once it receives the first chunk of data,
socketCallBack()
is called, or something like that.
Intuitively, I don't like the idea of checking data in a while (true) loop. Or is this the proper way?
I've seen an example where the client first sends a 'get' request to the server and immediately starts waiting for the response. Probably I could call that using a timer, once a second? Would that be correct?
What I am concerned about is traffic. Right now I have implemented it through a web server, but I don't like that it spends way too much traffic on the HTTP overhead.
Probably, with TCP connections on a timer, that would be much less, and it would save even more traffic if I establish just one connection at the beginning and transmit the data within that connection. Am I right?
I have an application that, when it has data to transmit, uses epoll to know whether a given TCP socket can be written to.
What I'm observing is that as the far end of the TCP connection falls behind and the send buffer of the TCP socket begins to fill, the frequency with which epoll returns an EPOLLOUT event appears to undergo exponential backoff. This happens before I ever receive an EAGAIN from the socket write.
The application is using EPOLLONESHOT and makes an EPOLL_CTL_MOD call to rearm the EPOLLOUT event after each occurrence. But as I noted above, each subsequent occurrence is exponentially later (I saw a progression of 40 ms, 80 ms, 160 ms, 320 ms, 640 ms, 1280 ms, etc.), up until EAGAIN finally happens.
Is this an undocumented feature of epoll? Can it be disabled? It's a problem because the data is getting stale and I would prefer to discard it rather than send it late.
Thanks in advance.
No, epoll has no such feature, but TCP does: the backoff you are seeing comes from TCP, not from epoll. epoll() blocks for at most the timeout you specify, and not a moment longer.
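For reference, a minimal sketch of the one-shot rearm pattern the question describes (the helper names are made up). Nothing here introduces any delay of its own; the next EPOLLOUT arrives only when TCP frees space in the send buffer, which is where the growing gaps come from:

#include <string.h>
#include <sys/epoll.h>

/* Register 'sock' for a single EPOLLOUT notification. */
static int arm_writable(int epfd, int sock)
{
    struct epoll_event ev;
    memset(&ev, 0, sizeof(ev));
    ev.events = EPOLLOUT | EPOLLONESHOT; /* delivered once, then disarmed */
    ev.data.fd = sock;
    return epoll_ctl(epfd, EPOLL_CTL_ADD, sock, &ev);
}

/* After each EPOLLOUT event, the one-shot registration must be
 * rearmed with EPOLL_CTL_MOD before another event can be delivered. */
static int rearm_writable(int epfd, int sock)
{
    struct epoll_event ev;
    memset(&ev, 0, sizeof(ev));
    ev.events = EPOLLOUT | EPOLLONESHOT;
    ev.data.fd = sock;
    return epoll_ctl(epfd, EPOLL_CTL_MOD, sock, &ev);
}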
I have a message-driven bean which serves messages in the following way:
1. It takes data from the incoming message.
2. Calls an external service via HTTP (literally, sends GET requests using HttpURLConnection), using the data from step 1. No matter how long the call takes, the message MUST NOT be dropped.
3. Uses the outcome of step 2 to persist data (using entity beans).
The rate of incoming messages is:
I. Low most of the time: on the order of units or tens per day.
II. Sometimes high: on the order of hundreds within a few minutes.
QUESTION:
Given that the service in step (2) is relatively slow (20 seconds per request, degrading as the workload increases), what is the best way to deal with situation II?
WHAT I TRIED:
1. Letting the MDB wait until the service call completes, no matter how long it takes. This tends to roll back MDB transactions on timeout and re-deliver the message, increasing the workload and making things even worse.
2. Setting a timeout on HttpURLConnection gives some guarantees about the completion time of the MDB's onMessage() method, but leaves an open question: how to proceed with 'timed out' messages.
Any ideas are very much appreciated.
Thank you!
In that case you can just increase the transaction timeout for your message-driven beans.
This is what I ended up with (mostly, this is application server configuration):
1. A relatively short (compared to the transaction timeout) timeout for the HTTP call; a sketch is below. The rationale: in my experience, long-running transactions tend to have adverse side effects, such as threads which look "hung" from the app server's point of view, or extra attention needed for database configuration, etc. I chose 80 seconds as the timeout value.
2. A re-delivery interval for failed messages increased to several minutes.
3. Careful adjustment of the number of threads which handle messages simultaneously. I balanced this value against the throughput of the HTTP service.
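A minimal sketch of the HTTP-call timeout from point 1, assuming plain HttpURLConnection as in the question (the class and method names here are made up for illustration):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ExternalServiceClient {

    /* 80-second cap on the HTTP call, kept well below the MDB transaction timeout. */
    private static final int TIMEOUT_MILLIS = 80_000;

    public static String callService(String serviceUrl) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(serviceUrl).openConnection();
        try {
            conn.setRequestMethod("GET");
            conn.setConnectTimeout(TIMEOUT_MILLIS); // time allowed to establish the connection
            conn.setReadTimeout(TIMEOUT_MILLIS);    // max time allowed between reads of the response
            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
            }
            return body.toString();
        } finally {
            conn.disconnect();
        }
    }
}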
I am designing and testing a client/server program based on TCP sockets (Internet domain). Currently I am testing it on my local machine, and I am not able to understand the following about SIGPIPE.
*. SIGPIPE appears quite randomly. Can it be deterministic?
The first tests involved a single small (25-character) send operation from the client and a corresponding receive at the server. The same code, on the same machine, either runs successfully or fails (SIGPIPE), totally out of my control. The failure rate is about 45% (quite high). So, can I tune the machine in any way to minimize this?
**. The second round of testing was to send 40,000 small (25-character) messages from the client to the server (1 MB of total data) and then have the server respond with the total size of the data it actually received. The client sends data in a tight loop and there is a SINGLE receive call at the server. It works only up to a maximum of 1200 bytes of total data sent, and again there are these non-deterministic SIGPIPEs, about 70% of the time now (really bad).
Can someone suggest an improvement to my design (probably it will be at the server)? The requirement is that the client shall be able to send a medium to very high amount of data (again, about 25 characters per message) after a single socket connection has been made to the server.
I have a feeling that multiple sends against a single receive will always be lossy and very inefficient. Should we be combining the messages and sending them in a single send() operation? Is that the only way to go?
SIGPIPE is sent when you try to write to an unconnected pipe/socket. Ignoring the signal (or installing a handler for it) will make send() return an error instead:
signal(SIGPIPE, SIG_IGN);
Alternatively, you can disable SIGPIPE for an individual socket (note that SO_NOSIGPIPE is available on BSD-derived systems such as macOS, but not on Linux):
int n = 1;
setsockopt(thesocket, SOL_SOCKET, SO_NOSIGPIPE, &n, sizeof(n));
Also, the data amounts you're mentioning are not very high. Likely there's a bug somewhere that causes your connection to close unexpectedly, giving a SIGPIPE.
SIGPIPE is raised because you are attempting to write to a socket that has been closed. This indicates a probable bug, so check your application for why it is occurring and attempt to fix that first.
Attempting to just mask SIGPIPE is not a good idea because you don't really know where the signal is coming from and you may mask other sources of this error. In multi-threaded environments, signals are a horrible solution.
In the rare cases where you cannot avoid this, you can mask the signal on send. If you set the MSG_NOSIGNAL flag on send()/sendto(), it will prevent SIGPIPE from being raised. If you do trigger this error, send() returns -1 and errno will be set to EPIPE. Clean and easy. See man send for details.
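For illustration, a minimal sketch of that approach (the wrapper name send_nosigpipe is made up):

#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>

/* Send a buffer without ever raising SIGPIPE. Returns the number of
 * bytes sent, or -1 on error (errno == EPIPE if the peer has closed). */
static ssize_t send_nosigpipe(int sock, const void *buf, size_t len)
{
    ssize_t n = send(sock, buf, len, MSG_NOSIGNAL);
    if (n == -1 && errno == EPIPE) {
        /* Instead of a SIGPIPE we get EPIPE from send() and
         * can handle the closed connection in-line. */
        fprintf(stderr, "peer closed the connection\n");
    }
    return n;
}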