Asterisk: force a timeout delay between consecutive inbound calls

We have a problem we've been suffering from for a long time: an unknown caller ID received from Asterisk in specific situations.
On the client side we have a SIP softphone (sipml5), and on the server side we have:
asterisk-11.25.0-0
elastix-4.0.0-1
Setup: an any-CID/DID inbound route connects our calls to one ring group (which contains all the extensions).
The unknown caller ID shows up when:
1. The call timeout is short (which makes the call stick at Asterisk, and Asterisk resends it to the extensions but with an unknown caller ID). A possible solution would be a longer timeout.
2. All extensions hang up, the call sticks at Asterisk, and Asterisk resends it to the extensions with an unknown caller ID (a possible solution would be to prevent extensions from hanging up unless they answer the call first).
3. Receiving one unknown-caller-ID call leads to successive unknown-caller-ID calls. No solution so far.
What we're trying to solve is the third problem, and our idea is to force Asterisk to wait for a specific timeout between inbound calls (we tried this manually by not allowing immediate successive calls, keeping a 4-5 second delay between calls, and it works fine).
We want to know which configuration has to be edited to force this timeout delay between inbound calls.

You can use the EPOCH function and store its value somewhere (ASTDB?).
Then, in the dialplan, compare it against the current time when dialing.
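A minimal sketch of that idea, assuming the check is hooked in via a custom context (the context names, the ASTDB key, and the 5-second gap below are all placeholders; on Elastix/FreePBX this would typically go in extensions_custom.conf):

[from-trunk-custom]
; Drop any call arriving less than 5 seconds after the last accepted one.
exten => _X.,1,Set(NOW=${EPOCH})
 same => n,Set(LAST=${DB(inbound/lastcall)})
 same => n,GotoIf($[${ISNULL(${LAST})}]?ok)      ; first call ever
 same => n,GotoIf($[${NOW} - ${LAST} >= 5]?ok)   ; enough time has passed
 same => n,Hangup()                              ; too soon, reject
 same => n(ok),Set(DB(inbound/lastcall)=${NOW})
 same => n,Goto(from-trunk,${EXTEN},1)           ; hand off to the normal inbound route

Instead of Hangup() you could also Wait() for the remaining seconds before passing the call on, if rejecting callers is not acceptable.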
No, Elastix does not support that and will not support it in the future (weird request).

Related

Asterisk DID: switch to an outgoing trunk?

I have a toll-free DID that users call to access my PBX service on an Asterisk box. The problem is that this DID comes with only a single channel, so the system can only receive one call at a time. My initial idea was to simply get the caller ID of the incoming call, disconnect the caller, and issue an automated callback to him to proceed with the call. This would free up my toll-free number but could of course be confusing for the caller, and there are also issues where the caller calls from behind an extension. The best solution would be to somehow seamlessly switch the call to an outgoing trunk to reconnect the caller, but now using my SIP trunk.
My question is: is there a way to do this in Asterisk (or, I guess, does SIP somehow allow such an operation)?
Thanks in advance.
That is called "callback".
Yes, you can do it. No, Asterisk has no built-in way to do it, and there is no way to do it that is not noticeable to the user.
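For reference, the usual way to implement a callback in Asterisk is to capture the caller ID, hang up, and drop a call file into /var/spool/asterisk/outgoing. A minimal sketch (the trunk name, number, and context are placeholders):

Channel: SIP/mytrunk/0123456789
MaxRetries: 2
Context: callback-context
Extension: s
Priority: 1

Asterisk originates the call over the trunk and, once answered, connects it to callback-context. The hang-up-and-call-back step itself remains noticeable to the caller, as said above.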

Hangup After First Ring Using Asterisk Call Files

Default Asterisk configuration (only sip.conf changes).
I use call files for calling, and I need to hang up after the first ring on every dial.
WaitTime: 4 doesn't always work, since it counts from the beginning (connecting to the SIP provider, etc.), so sometimes the client doesn't even receive the call.
00359894000001.call
Channel: SIP/flowroute/00359894000001
Extension: 00359894000001
WaitTime: 4
There is no way to predict the connection time or count how many rings occurred at the called side. The ringing is generated by the endpoint equipment; you have no control over it and no information about it.
If the connection time to the SIP provider etc. is constant, you should just measure it and add it to your 4 seconds.
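For example, if the setup time turned out to be a steady 3 seconds (a made-up figure; measure your own), the call file would become:

Channel: SIP/flowroute/00359894000001
Extension: 00359894000001
WaitTime: 7

i.e. the measured 3 seconds of setup plus the 4 seconds of ringing you actually want.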

Erlang accept incoming tcp connections dynamically

What I am trying to solve: have an Erlang TCP server that listens on a specific port (the code should reside in some kind of external-facing interface/API), where each incoming connection is handled by a gen_server (that is, even the gen_tcp:accept should be coded inside the gen_server), but without initially spawning a predefined number of processes to accept incoming connections. Is that somehow possible?
Basic Procedure
You should have one static process (implemented as a gen_server or a custom process) that performs the following procedure:
Listens for incoming connections using gen_tcp:accept/1
Every time it returns a connection, tell a supervisor to spawn off a worker process (e.g. another gen_server process)
Get the pid for this process
Call gen_tcp:controlling_process/2 with the newly returned socket and that pid
Send the socket to that process
Note: You must do it in that order, otherwise the new process might use the socket before ownership has been handed over, and the old process might receive messages related to the socket after the new process has taken over, resulting in dropped or mishandled packets.
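A minimal sketch of that procedure (the conn_sup supervisor and its child spec are assumed, not shown; a real version would add error handling):

-module(acceptor).
-export([start_link/1, init/1]).

%% The single static process that owns the listening socket.
start_link(Port) ->
    {ok, spawn_link(?MODULE, init, [Port])}.

init(Port) ->
    %% {active, false}: the worker decides when to read from its socket.
    {ok, LSock} = gen_tcp:listen(Port, [binary, {active, false}, {reuseaddr, true}]),
    loop(LSock).

loop(LSock) ->
    {ok, Sock} = gen_tcp:accept(LSock),
    %% conn_sup is an assumed simple_one_for_one supervisor of worker gen_servers.
    {ok, Pid} = supervisor:start_child(conn_sup, []),
    %% Hand over ownership BEFORE the worker is told about the socket.
    ok = gen_tcp:controlling_process(Sock, Pid),
    gen_server:cast(Pid, {socket, Sock}),
    loop(LSock).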
The listening process should have only one responsibility, and that is spawning off workers for new connections. This process will block when calling gen_tcp:accept/1, which is fine because the started workers handle ongoing connections concurrently. Blocking on accept ensures the quickest response time when new connections are initiated. If the process needs to do other things in between, gen_tcp:accept/2 could be used, with the other actions interleaved between timeouts.
Scaling
You can have multiple processes waiting with gen_tcp:accept/1 on a single listening socket, further increasing concurrency and minimizing accept latency (a sketch follows below).
Another optimization would be to pre-start some socket workers to further minimize latency after accepting the new socket.
Third and finally, you could make your processes more lightweight by implementing the OTP design principles in your own custom processes using proc_lib (more info). However, you should only do this if you benchmark and conclude that it is the gen_server behaviour that slows you down.
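A sketch of the first optimization, reusing loop/1 from the module above (N is arbitrary):

%% Run N concurrent acceptors on the same listening socket.
start_acceptors(N, LSock) ->
    [spawn_link(fun() -> loop(LSock) end) || _ <- lists:seq(1, N)].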
The issue with gen_tcp:accept is that it blocks, so if you call it within a gen_server, you block the server from receiving other messages. You can try to avoid this by passing a timeout, but that ultimately amounts to a form of polling, which is best avoided. Instead, you might try Kevin Smith's gen_nb_server; it uses an internal undocumented function, prim_inet:async_accept, and other prim_inet functions to avoid blocking.
You might want to check out http://github.com/oscarh/gen_tcpd and use the handle_connection function to convert the process you get to a gen_server.
You should use prim_inet:async_accept(ListenSocket, -1), as Steve said. The incoming connection will then be accepted in your handle_info callback (assuming your interface is also a gen_server), since you have used an asynchronous accept call.
On accepting the connection you can spawn another gen_server (I would recommend gen_fsm) and make it the controlling process by calling gen_tcp:controlling_process(CliSocket, PidOfSpawnedProcess). After this, all data from the socket will be received by that process rather than by your interface code. In the same way, a new controlling process is spawned for each further connection.
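A sketch of that pattern, modelled on what gen_nb_server does internally (prim_inet is undocumented, so the details below are assumptions that may change between OTP releases; conn_sup:start_worker/0 is a made-up supervisor API):

-module(async_acceptor).
-behaviour(gen_server).
-export([start_link/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

-record(state, {lsock, ref}).

start_link(Port) ->
    gen_server:start_link(?MODULE, [Port], []).

init([Port]) ->
    {ok, LSock} = gen_tcp:listen(Port, [binary, {active, false}]),
    %% Ask the driver to accept in the background; the result arrives
    %% later as an {inet_async, ...} message, so init/1 never blocks.
    {ok, Ref} = prim_inet:async_accept(LSock, -1),
    {ok, #state{lsock = LSock, ref = Ref}}.

handle_info({inet_async, LSock, Ref, {ok, CliSocket}},
            #state{lsock = LSock, ref = Ref} = State) ->
    %% CliSocket arrives as a raw port: register it so the inet layer
    %% treats it as a normal gen_tcp socket. A production version would
    %% also copy the listen socket's options over (see gen_nb_server).
    inet_db:register_socket(CliSocket, inet_tcp),
    {ok, Pid} = conn_sup:start_worker(),
    ok = gen_tcp:controlling_process(CliSocket, Pid),
    Pid ! {socket, CliSocket},
    %% Queue the next asynchronous accept.
    {ok, NewRef} = prim_inet:async_accept(LSock, -1),
    {noreply, State#state{ref = NewRef}};
handle_info({inet_async, _LSock, _Ref, Error}, State) ->
    {stop, {accept_failed, Error}, State}.

handle_call(_Req, _From, State) -> {reply, ok, State}.
handle_cast(_Msg, State) -> {noreply, State}.
terminate(_Reason, _State) -> ok.
code_change(_OldVsn, State, _Extra) -> {ok, State}.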

Does asynchronous receive guarantee the detection of connection failure?

From what I know, a blocking receive on a TCP socket does not always detect a connection error (due either to a network failure or to a remote-endpoint failure) by returning a -1 value or raising an IO exception: sometimes it could just hang indefinitely.
One way to manage this problem is to set a timeout on the blocking receive. If an upper bound on the reception time is known, that bound can be used as the timeout, and the connection can simply be considered lost when the timeout expires. When no such upper bound is known a priori, for example in a pub-sub system where a connection stays open to receive publications, the timeout is somewhat arbitrary, but its expiration could trigger a ping/pong request to verify that the connection (and the endpoint) is still up.
I wonder whether the use of an asynchronous receive also manages the problem of detecting a connection failure. In boost::asio I would call socket::async_read_some(), registering a handler to be called asynchronously, while in java.nio I would configure the channel as non-blocking and register it with a selector with an OP_READ interest flag. I imagine that correct connection-failure detection would mean that in the first case the handler is called with a non-zero error_code, while in the second case the selector selects the faulty channel but a subsequent read() on the channel either returns -1 or throws an IOException.
Is this behaviour guaranteed with an asynchronous receive, or could there be scenarios where, after a connection failure, the handler in boost::asio is never called, or the selector in java.nio never selects the channel?
Thank you very much.
I believe you're referring to the TCP half-open connection problem (in the RFC 793 sense of the term). In this scenario, the receiving OS never receives any indication of the lost connection, so it never notifies the app. Whether the app is reading synchronously or asynchronously doesn't enter into it.
The problem occurs when the transmitting side of the connection somehow is no longer aware of the network connection. This can happen, for example, when
the transmitting OS abruptly terminates/restarts (power outage, OS failure/BSOD, etc.).
the transmitting side closes and cleans up its side of the connection while there is a network disruption between the two sides: e.g. the transmitting OS reboots cleanly during the disruption, or a transmitting Windows machine is unplugged from the network.
When this happens, the receiving side may be waiting for data or a FIN that will never come. Unless the receiving side sends a message, there's no way for it to realize the transmitting side is no longer aware of the receiving side.
Your solution (a timeout) is one way to address the issue, but it should include sending a message to the transmitting side. Again, it doesn't matter whether the read is synchronous or asynchronous, just that it doesn't wait indefinitely for data or a FIN. Another solution is the TCP keepalive feature supported by some TCP stacks. But the hard part of any generalized solution is usually determining a proper timeout, since the timeout is highly dependent on the characteristics of the specific application.
Because of how TCP works, you will typically have to send data in order to notice a hard connection failure, to find out that no ACK packet will ever be returned. Some protocols attempt to identify conditions like this by periodically using a keep-alive or ping packet: if one side does not receive such a packet in X time (and perhaps after trying and failing one itself), it can consider the connection dead.
To answer your question: blocking and non-blocking receives should perform identically except for the act of blocking itself, so both suffer from this same issue. To make sure you can detect a silent failure of the remote host, you'll have to use a form of keep-alive like the one I described.
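The timeout-plus-ping pattern is language-agnostic; here is a sketch in Erlang (used elsewhere on this page) rather than asio or nio, with made-up intervals and a made-up line-based PING/PONG exchange:

-module(keepalive).
-export([recv_loop/1]).

%% If the peer sends nothing for 30 s, send a PING; if no PONG arrives
%% within 5 s, treat the connection as dead.
recv_loop(Sock) ->
    inet:setopts(Sock, [{active, once}]),
    receive
        {tcp, Sock, Data}  -> handle_data(Data), recv_loop(Sock);
        {tcp_closed, Sock} -> connection_lost
    after 30000 ->
        ok = gen_tcp:send(Sock, <<"PING\n">>),
        wait_for_pong(Sock)
    end.

wait_for_pong(Sock) ->
    inet:setopts(Sock, [{active, once}]),
    receive
        {tcp, Sock, <<"PONG\n">>} -> recv_loop(Sock);
        {tcp, Sock, Data}         -> handle_data(Data), wait_for_pong(Sock);
        {tcp_closed, Sock}        -> connection_lost
    after 5000 ->
        connection_lost   % no PONG: silent failure detected
    end.

handle_data(Data) ->
    io:format("got ~p~n", [Data]).   % stand-in for real application logic

The TCP-level alternative mentioned above can be enabled with inet:setopts(Sock, [{keepalive, true}]), but note that the default OS probe timers are typically on the order of hours.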

Does javax.jms.ExceptionListener delay the onException callback when network connection goes down?

I'm using Websphere Application Server (WAS) 6.1's default messaging provider for JMS. My remote client application creates a connection, then does a setExceptionListener to register the callback.
When I simply stop the messaging engine using the WAS Integrated Solutions Console, my app behaves as expected, i.e., onException is called immediately and my app reacts accordingly. However, when I pull the network cable, the onException callback does not fire for somewhere between 30 and 60 seconds.
The ugly result is that my app just keeps trying to send messages to WAS during this 30-to-60-second window, and those messages just get lost. I've done several searches trying to find out more about the ExceptionListener (e.g., is there some configuration parameter to specify a callback timeout?), but have not had any success.
Hopefully, this makes sense to someone out there. Any suggestions how I might be able to detect the cable "cut" scenario more quickly? Thanks for your help.
-Kris
You don't happen to have a 30-second TCP timeout defined?
If so, then MQ has temporarily handed over its responsibility to the JVM/OS and is waiting for it to ACK whatever network-related operation it has requested. Perhaps try lowering the TCP timeout value...
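As an illustration of that last suggestion: if the client runs on Linux (an assumption, the question doesn't say), the time an established connection keeps retransmitting before giving up is governed by the net.ipv4.tcp_retries2 sysctl; lowering it makes a dead peer show up as a socket error sooner, e.g.:

sysctl -w net.ipv4.tcp_retries2=8

The default of 15 corresponds to roughly 13-30 minutes of retransmission; 8 gives up after roughly 100 seconds. This is an OS-level knob, not a WAS or JMS setting.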
