I am using the rrmemory strategy in Q1 and Q2.
The queue members are Local/3001#agent, Local/3002#agent, Local/3003#agent and Local/3004#agent.
In Q1, Local/3001#agent and Local/3002#agent have penalty 0, while Local/3003#agent and Local/3004#agent have penalty 5.
In Q2 the penalties are reversed: Local/3001#agent and Local/3002#agent have penalty 5, and Local/3003#agent and Local/3004#agent have penalty 0.
Now the requirement is: if the penalty-0 agents are busy (on a call), don't send a new call to them; send it to the penalty-5 agents instead.
If the penalty-5 agents are also busy, then send the new call to the penalty-0 agents, even though they are already on a call.
I have limited agents to one call at a time, but here I need to send a new call to agents who are already on a call.
I don't think you can do this cleanly. If the call limit is set to 1, no new call will reach an agent while he is busy; and if it is not set, new calls will disturb him all the time.
So, is it impossible? Well, you can do it if every agent has two accounts. For example, agent 1 has Local/3001#agent, and you create an extra account for him (Local/4001#agent) with a penalty of 6. That way the "extra" accounts will only be called when all agents are on a call. It is not clean, but if you use Linphone or another softphone that can register two accounts on one device, it is possible.
In any case, in my opinion it is not a good idea to send two calls to one agent. If you want them to be notified that all agents are on a call, I would use another method (e-mail, push notifications to their browser, etc.).
Hope I have helped :)
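As a sketch, the extra-account idea above could look like this in queues.conf. The 4xxx account names are hypothetical, and member syntax is `member => interface,penalty`; adjust to your own naming:

```
[Q1]
strategy = rrmemory
member => Local/3001#agent,0
member => Local/3002#agent,0
member => Local/3003#agent,5
member => Local/3004#agent,5
; hypothetical "extra" accounts registered by the same agents,
; tried only when every penalty-0 and penalty-5 member is busy
member => Local/4001#agent,6
member => Local/4002#agent,6
```

Q2 would mirror this with the 0 and 5 penalties swapped, plus its own penalty-6 extras.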
In many audio call applications, every user can decide to selectively mute just one of the other users.
For example in a call with A, B and C, one participant (A) could decide to mute another (B) while B is still heard by C.
Does Asterisk allow something like this in conference/meetme or other applications? I couldn't find suitable commands in the documentation.
You can't do that. There is no way to mute a single USER for just one listener.
You can mute a channel (so that nobody hears it), but not per-user.
You can set up one-way audio via chan_spy.
You can transfer users from the conference into another conference where some channels are muted.
The scenario you want is possible, but the dialplan to support it would be REALLY complex. There is no way to do it with simple tricks.
For example, you could put each user in a dedicated room and create channels/connections between the rooms as you wish: room B connected to C but not connected to A. But again, that means high CPU usage and real complexity.
At the current moment both app_conference and app_meetme have ONE mixing function which sends the mixed audio to all users.
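For the one-way-audio workaround mentioned above, a minimal ChanSpy sketch might look like the following (the extension number and spied channel are assumptions):

```
; extensions.conf sketch: a caller dialing 700 hears SIP/3001 one-way
exten => 700,1,Answer()
 same => n,ChanSpy(SIP/3001,q)   ; q = quiet, no beep announcement
```

The listener hears SIP/3001 but is not heard, which is the "one way sound" building block; the full selective-mute scenario would still need the complex per-room plumbing described above.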
I'm writing an application using MPI (mpi4py, actually). The application may spawn new processes using MPI_Comm_spawn() (collectively, on all current processes), and some nodes from the parent group/communicator may send data to some nodes in the child group/communicator and vice versa. (Note that MPI_Comm_spawn() and the data sending/receiving happen in different threads, both for functionality [there are other features not directly relevant to this question, so I haven't described them] and for performance.)
Because the MPI_Comm_spawn() function may be called for several times and I expect all nodes can communicate with each other, I currently plan to use MPI_Intercomm_merge() to merge the two groups (parent and child) into one intracommunicator, and then send data through the new intracommunicator (and the next MPI_Comm_spawn() will happen on the new intracommunicator).
However, because the spawn-and-merge happens while the program is running, some data will already have been sent through the old communicator (but may not yet have been received by the destination). How can I safely switch from the old communicator to the new one (e.g. be able to delete the old communicator(s) at some point) while losing the least performance? MPI_Intercomm_merge() is the only way I know to guarantee that all processes can send data to each other (because if we don't merge, then after the next MPI_Comm_spawn() some processes can't directly send data to each other), and I don't mind changing it to another method as long as it works well.
For example, in the following chart, process A, B, C are initial processes (mpiexec -np 3), D is a spawned process:
A and B send continuous data to C; while they are sending, D is spawned; then C sends data to D. Suppose the old communicator that A, B and C use is comm1 and the merged intracommunicator is comm2.
What I want to achieve is to send data through comm1 initially, and have all processes switch to comm2 after D is spawned. What is lacking is a mechanism for knowing when C can safely switch from comm1 to comm2 for receiving data from A and/or B, so that I can then safely call MPI_Comm_free(comm1).
Simply sending a special tag through comm1 at the time of the switch would be a last resort, because C doesn't know how many processes will send data to it. It does know how many groups of processes will send data to it, so this could be achieved by introducing local leaders (but I'd like to know about other options).
Because A, B and C run in parallel, and send/recv and spawn happen in different threads, we can't guarantee there is no pending data when we call MPI_Comm_spawn(). E.g. if A and B send and C receives at the same rate, then when they call MPI_Comm_spawn(), C has only received half of the data from A and B, so we can't drop comm1 at C yet; we have to wait until C has received all pending data from comm1 (an unknown number of messages).
Are there any mechanisms provided by MPI or mpi4py (e.g. error codes or exceptions) to achieve this?
By the way, if my approach is apparently bad or if I misunderstand what MPI_Comm_free() does, please point out.
(My understanding is that MPI_Comm_free() is not a collective call; after calling MPI_Comm_free(comm1), no more send/recv calls on comm1 are allowed on the node that called MPI_Comm_free(comm1).)
so basically, C invokes MPI_Comm_spawn(..., MPI_COMM_SELF, ...)
why don't you have {A,B,C} invoke MPI_Comm_spawn(..., comm1, ...) instead ?
MPI_Intercomm_merge() is a collective operation, so you need to "synchronize" your tasks somehow anyway; why not "synchronize" them before MPI_Comm_spawn() instead?
then switching to the new communicator is trivial
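The remaining question of "when has C drained comm1?" (the local-leader/sentinel idea from the question) can be illustrated without MPI at all. This plain-Python sketch only shows the counting logic; in the real setup, `n_senders` would be the number of group leaders, and the queue stands in for the old communicator:

```python
import queue

SENTINEL = object()  # marks "no more data on the old communicator"

def drain_old_comm(old_comm: "queue.Queue", n_senders: int) -> list:
    """Receive from the old communicator until every sender has signalled
    end-of-stream; only then is it safe to free it and switch over."""
    received, done = [], 0
    while done < n_senders:
        msg = old_comm.get()
        if msg is SENTINEL:
            done += 1          # one sender has flushed all pending sends
        else:
            received.append(msg)
    return received            # old_comm may now be freed

# A and B each flush their pending data on comm1, then send a sentinel
# right after the spawn/merge point.
comm1 = queue.Queue()
for sender in ("A", "B"):
    for i in range(3):
        comm1.put((sender, i))
    comm1.put(SENTINEL)

pending = drain_old_comm(comm1, n_senders=2)
print(len(pending))  # 6 messages drained before the switch
```

In MPI terms, the sentinel would be a message with a reserved tag sent on comm1 by each group leader after the merge; once C has counted one per leader, it can call MPI_Comm_free(comm1) and receive only on comm2.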
I have 10 extensions grouped into a ring group with number "100" and the "ringall" strategy. Only 4 of the 10 extensions are online. Someone calls 100, and the 4 online extensions get the call and start ringing. So, how can one (or more) of the 6 offline extensions get this call if it comes online while the call is still active?
You should probably use a queue for such functionality; a ring group is not really good for this case, but there is a hack to achieve what you need even with a ring group.
First, make sure that the destination on no answer is the same ring group. Then make sure that "Ring Time" is set to a fairly low value, like 10 seconds.
In this case, when a call hits the ring group it will ring only the 4 available extensions for 10 seconds; after 10 seconds it will go back to the same group and ring all extensions available at that moment. So if one additional extension has come online, it will ring 5 extensions for the next 10 seconds, and so on.
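Outside a GUI like FreePBX, the same hack can be sketched in plain dialplan (extension names are assumptions): a short ring of everyone, then a loop back to the same group so newly registered extensions join the next round.

```
; extensions.conf sketch: ring whoever is reachable for 10s, then retry
exten => 100,1,Dial(SIP/3001&SIP/3002&SIP/3003&SIP/3004,10)
 same => n,Goto(100,1)   ; loop back; newly online extensions now ring too
```

In practice you would add a loop counter or total timeout so an unanswered call eventually goes to voicemail instead of ringing forever.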
I was going through the RPC semantics, at-least-once and at-most-once. How do they work?
I couldn't understand how they are implemented.
In both cases, the goal is to invoke the function once. However, the difference is in their failure modes. In "at-least-once", the system will retry on failure until it knows that the function was successfully invoked, while "at-most-once" will not attempt a retry (or will ensure that there is a negative acknowledgement of the invocation before retrying).
As to how these are implemented, this can vary, but the pseudo-code might look like this:
At least once:

    request_received = false
    while not request_received:
        send RPC
        wait for acknowledgement with timeout
        if acknowledgement received and acknowledgement.is_successful:
            request_received = true

At most once:

    request_sent = false
    while not request_sent:
        send RPC
        request_sent = true
        wait for acknowledgement with timeout
        if acknowledgement received and not acknowledgement.is_successful:
            request_sent = false
An example case where you want "at-most-once" would be something like payments (you wouldn't want to accidentally bill someone's credit card twice). An example case for "at-least-once" would be updating a database with a particular value (if you happen to write the same value to the database twice in a row, that really isn't going to have any effect on anything). You almost always want "at-least-once" for non-mutating (and more generally idempotent) operations; by contrast, most mutating operations (or at least ones that incrementally mutate the state and are thus dependent on the current/prior state when applying the mutation) need "at-most-once".
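As a runnable illustration of the two failure modes, here is a small simulation (the "server" and the lost acks are made up): at-least-once may execute the same RPC several times before an ack gets through, while at-most-once executes it once but can leave the client uncertain.

```python
class FlakyServer:
    """Executes every request, but the acks for the first N responses are 'lost'."""
    def __init__(self, drop_first_acks=2):
        self.executions = []
        self.drops_left = drop_first_acks

    def handle(self, rpc):
        self.executions.append(rpc)   # the RPC *did* run server-side
        if self.drops_left > 0:
            self.drops_left -= 1
            return None               # ack lost in transit (client sees timeout)
        return "ok"

def at_least_once(server, rpc):
    """Retry until an ack arrives; the server may execute the RPC several times."""
    while server.handle(rpc) is None:
        pass                          # timeout: resend

def at_most_once(server, rpc):
    """Send once; a lost ack leaves the outcome unknown, but never a duplicate."""
    return server.handle(rpc)

s1 = FlakyServer(drop_first_acks=2)
at_least_once(s1, "set x=5")
print(len(s1.executions))        # 3: executed three times before an ack arrived

s2 = FlakyServer(drop_first_acks=2)
ack = at_most_once(s2, "charge card")
print(len(s2.executions), ack)   # 1 None: ran exactly once, client can't tell
```

The duplicated executions in the first case are harmless only because "set x=5" is idempotent, which is exactly the distinction drawn above.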
I should add that it is fairly common to implement "at most once" semantics on top of an "at least once" system by including an identifier in the body of the RPC that uniquely identifies it and by ensuring on the server that each ID seen by the system is processed only once. You can think of the sequence numbers in TCP packets (ensuring the packets are delivered once and in order) as a special case of this pattern. This approach, however, can be somewhat challenging to implement correctly on distributed systems where retries of the same RPC could arrive at two separate computers running the same server software. (One technique for dealing with this is to record the transaction where the RPC is received, but then to aggregate and deduplicate these records using a centralized system before redistributing the requests inside the system for further processing; another technique is to opportunistically process the RPC, but to reconcile/restore/rollback state when synchronization between the servers eventually detects this duplication... this approach would probably not fly for payments, but it can be useful in other situations like forum posts).
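The request-id deduplication described above can be sketched in a few lines (single-server case only; the distributed complications in the paragraph still apply, and all names here are illustrative):

```python
def make_exactly_once_handler(process):
    """Wrap a handler so each request id is processed only once.

    Retries of an already-seen id get the cached result back,
    turning at-least-once delivery into at-most-once execution."""
    seen = {}
    def handle(request_id, payload):
        if request_id not in seen:
            seen[request_id] = process(payload)
        return seen[request_id]
    return handle

balance = {"alice": 100}
def charge(amount):
    balance["alice"] -= amount
    return balance["alice"]

handle = make_exactly_once_handler(charge)
handle("rpc-42", 30)   # first delivery: charges the account
handle("rpc-42", 30)   # duplicate retry: deduplicated, cached result returned
print(balance["alice"])  # 70, not 40
```

Note that `seen` grows without bound here; a real system would expire entries, which is part of why this is hard to do correctly across multiple servers.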
I have my own database for logging calls in Asterisk. I need to insert the duration of every call into a table. How can I do this? Can I do it in my dialplan?
You are not giving much information about which DB backend you would like to use, or whether you are asking how to write the call duration yourself or how to configure Asterisk to write the CDR in question.
So, generally speaking, you have 3 possible options for this (see below). For options 2 and 3 you would have to open the connection to the database yourself, write the queries needed to insert/update whatever rows are needed, handle errors, etc., while for option 1 you just need to configure Asterisk to do the job.
1) Asterisk can do this by default on its own, by writing the CDR (Call Detail Record) of every call to a backend. This backend can be CSV, MySQL, PostgreSQL, SQLite, and other databases through the cdr_odbc module. You have to configure your cdr.conf (and, depending on the backend you chose, cdr_mysql.conf, cdr_odbc.conf, or cdr_pgsql.conf) with your backend information, like credentials, table names, etc.
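A minimal configuration sketch for the MySQL backend follows; the credentials and database name are placeholders, and available options vary between Asterisk versions:

```
; cdr.conf
[general]
enable = yes

; cdr_mysql.conf (placeholder credentials)
[global]
hostname = localhost
dbname   = asteriskcdrdb
table    = cdr
user     = asterisk
password = secret
```

With this in place, Asterisk inserts one row per call into the configured table, including the duration fields described below.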
The CDR will be written by default with some contents, which are the CDR Variables (taken from the predefined asterisk variable list)
If the channel has a cdr, that cdr record has its own set of
variables which can be accessed just like channel variables. The
following builtin variables are available and, unless specified,
read-only.
The ones interesting for you at this point would be:
${CDR(duration)} Duration of the call.
${CDR(billsec)} Duration of the call once it was answered.
${CDR(disposition)} ANSWERED, NO ANSWER, BUSY
When the disposition is ANSWERED, billsec will contain the number of seconds to bill (the total "answered time" of the call), and duration will hold the total time of the call, including non-billed time.
2) If, on the other hand, you are not asking about the CDR but want to write the call duration yourself, you can have an AGI script that, after issuing a Dial(), reads the CDR(billsec) variable or the ANSWEREDTIME variable (set by the Dial() command):
${DIALEDTIME} * Time for the call (seconds)
${ANSWEREDTIME} * Time from dial to answer (seconds)
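For option 2, a dialplan sketch could hand the duration to your own code after Dial() returns; the AGI script name here is hypothetical, and ANSWEREDTIME is only set if the call was actually answered:

```
; extensions.conf sketch for internal 4-digit extensions
exten => _1XXX,1,Dial(SIP/${EXTEN},30)
 same => n,NoOp(answered=${ANSWEREDTIME}s total=${DIALEDTIME}s)
 same => n,AGI(log_duration.agi,${ANSWEREDTIME})   ; hypothetical AGI script
```

The AGI script would then perform the INSERT into your own table with whatever columns you need.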
3) You can also achieve the same result by having an AMI client listening for the event VarSet for the variable ANSWEREDTIME. The event in question will contain the channel for which this variable has been set.
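For option 3, AMI events arrive over the socket as CRLF-separated "Key: Value" lines. A small parser for the VarSet event might look like this (the raw event text below is a made-up example, not captured output):

```python
def parse_ami_event(raw: str) -> dict:
    """Parse one AMI event (CRLF-separated "Key: Value" lines) into a dict."""
    event = {}
    for line in raw.split("\r\n"):
        if ": " in line:
            key, value = line.split(": ", 1)
            event[key] = value
    return event

# Hypothetical VarSet event as it might arrive from the AMI socket
raw = ("Event: VarSet\r\n"
       "Channel: SIP/3001-00000001\r\n"
       "Variable: ANSWEREDTIME\r\n"
       "Value: 42\r\n")

event = parse_ami_event(raw)
if event.get("Event") == "VarSet" and event.get("Variable") == "ANSWEREDTIME":
    print(f"{event['Channel']} duration {event['Value']}s")
```

Your AMI client would filter events like this and write the channel/duration pair into your database.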
So, options 2 and 3 are clearly more useful if you already have an AGI script or an AMI client of your own controlling/handling calls, and option 1 is more generic but maybe slightly less flexible.
Hope it helps!