Asterisk ring group: get call after registration

I have 10 extensions grouped into a ring group with the number "100" and the "ringall" strategy. Only 4 of the 10 extensions are online. Someone calls 100, and the 4 online extensions receive the call and start ringing. How can an extension get this call if one (or more) of the 6 offline extensions comes online while the call is still active?

Probably you should use a queue for this kind of functionality; a ring group is not really suited to this case, but there is a hack to achieve what you need even with a ring group.
First, make sure that the destination if there is no answer is the same ring group. Then make sure that "Ring Time" is configured to a fairly low value, like 10 seconds.
In this case, when a call hits the ring group it will ring only the 4 available extensions for 10 seconds. After 10 seconds it will go back to the same group and ring all extensions available at that moment, so if 1 additional extension has come online, it will ring 5 extensions for the next 10 seconds, and so on.

Related

ClockKit Complication Synchronised With Time

I'm attempting to make an app that displays a different string based on the current time. It should display a different string every minute, synchronised to the Apple Watch's clock (i.e. when a new minute starts, replace the current string on the complication).
I have had lots of issues with complications for the Apple Watch, and I can see that lots of people find Apple's documentation confusing.
I believe my implementation of getCurrentTimelineEntry is correct: I simply grab the current date, floor it to the nearest minute (rounded down), process it into the relevant string and stick it on the complication.
I cannot for the life of me understand what the getTimelineEndDate method does, as no matter what I pass into the handler it seems to make no difference.
The most confusing part, however, is the getTimelineEntries method. I understand the concept, i.e. pre-fetching what the complication should look like. Here, I attempt to prefetch the next hour's worth of data (in my case, 60 different entries representing 60 different minutes). This seems to work; however, the method runs 10 times before stopping, by which point it has pre-fetched 600 entries, representing 10 hours' worth. This is completely unintended, though not disastrous. The worst part is that I have no idea how to fetch more data in the future, i.e. I want this method to be called at the date of the last prefetched entry, to fetch the next batch of entries.
In essence, once these bugs have been ironed out, I want to fetch 24 hours' worth of entries (60*24 minutes). And then, when the current time matches the time of the last entry, fetch the next 24 hours, and so on.
I will be grateful for any help, as the documentation for ClockKit complications is particularly poor.

Kill job if execution time less than 2 min

I have a chain with several jobs, and sometimes a certain job that normally takes about 2 hours finishes in less than 2 minutes.
What I would like to do is to kill this job if it ends in less than 2 minutes so that the chain won't proceed.
Is this possible?
Thanks
Well, you don't really want to kill anything, do you? If you do, see BMC's note (including video) on using ctmkilljob.
In this case your next job is dependent on 2 things: the predecessor job ending OK and the duration of the predecessor job. Add another input condition to your next job (in addition to the existing condition) to represent the "more than 2 minutes" duration.
On the job that needs to run for more than 2 minutes, add a Notify when the Exectime exceeds 2 mins (or 60 mins or whatever you decide the threshold is) and get it to shout to an entry in your shout destination table.
On the Control-M Server, create a new program entry in your shout destination table and create a small script referenced by the shout. The script should use the ctmcontb utility to create a new ODAT condition that your next job is waiting on. If you have a look at the BMC help note for ctmkilljob (and just substitute in ctmcontb), you'll see how to do this.
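For what it's worth, a minimal sketch of such a shout-triggered script in Python (the condition name is made up here and must match the extra input condition you add to the next job; check the exact ctmcontb arguments for your Control-M version):

#!/usr/bin/env python3
# Hypothetical script run by the shout destination entry: it adds the ODAT
# condition that the downstream job is waiting on.
import subprocess
import sys

CONDITION = "JOB_RAN_LONG_ENOUGH"  # assumed name; must match the job's extra input condition
ODATE = "ODAT"                     # order date argument passed to ctmcontb (check your version)

result = subprocess.run(["ctmcontb", "-ADD", CONDITION, ODATE],
                        capture_output=True, text=True)
if result.returncode != 0:
    sys.stderr.write(result.stderr)
    sys.exit(result.returncode)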

How to add and check for constraints when sending logs using the CloudWatch Logs SDK

I successfully sent my application logs, which are in JSON format, to CloudWatch Logs using the CloudWatch Logs SDK, but I could not understand how to handle the constraints imposed by the endpoint.
Question 1: Documentation says
If you call PutLogEvents twice within a narrow time period using the same value for sequenceToken, both calls may be successful, or one may be rejected.
Now what does "may be" mean here? Is there no certain outcome?
Question 2:
The restriction is that 10,000 InputLogEvents are allowed in one batch. That is not too hard to incorporate code-wise, but there is a size constraint too: only 1 MB can be sent in one batch. Does that mean that every time I append an InputLogEvent to the log event collection/batch I need to calculate the size of the batch? Does that mean I need to check both the number of InputLogEvents and the size of the overall batch when sending logs? Isn't that too cumbersome?
Question 3:
What happens if the batch reaches 1 MB partway through one of my InputLogEvents, say at its 100th character? I cannot simply send an incomplete last log with just 100 characters; I would have to take that InputLogEvent out of the batch completely and send it as part of another batch?
Question 4:
With multiple Docker containers writing logs, there will be constant changes to the sequence token, and a lot of calls will fail because the sequence token keeps changing.
Question 5:
In the official POC they have not checked any constraints at all. Why is that?
PutBatchEvent POC
Am I thinking in the right direction?
Here's my understanding of how to use CloudWatch Logs. Hope this helps.
Question 1: I believe there is no guarantee due to the nature of distributed systems; your request can land on the same cluster and be rejected, or land on different clusters and both of them accept it.
Question 2 & Question 3: For me, log events should always be small and put fairly rapidly. Most logging frameworks help you configure this (batch size for AWS / number of lines for file logging...). Take a look at those frameworks.
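To illustrate the batching bookkeeping from Questions 2 and 3, here is a rough sketch in Python; the 10,000-event limit, the 1,048,576-byte batch limit and the fixed per-event overhead are assumptions to be verified against the current PutLogEvents documentation:

# Illustrative batch splitter for events destined for PutLogEvents.
MAX_EVENTS_PER_BATCH = 10_000
MAX_BATCH_BYTES = 1_048_576
PER_EVENT_OVERHEAD = 26  # assumed fixed overhead counted per event

def event_size(event):
    # Size of one event as counted against the batch limit.
    return len(event["message"].encode("utf-8")) + PER_EVENT_OVERHEAD

def split_into_batches(events):
    # Yield lists of events that respect both the count and the size limit.
    batch, batch_bytes = [], 0
    for event in events:
        size = event_size(event)
        if batch and (len(batch) >= MAX_EVENTS_PER_BATCH
                      or batch_bytes + size > MAX_BATCH_BYTES):
            yield batch
            batch, batch_bytes = [], 0
        batch.append(event)
        batch_bytes += size
    if batch:
        yield batch

So the size check is just an integer you carry along while appending; an event that would overflow the current batch simply starts the next one, which is essentially the situation described in Question 3.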
Question 4: Each of your containers (or any parallel application unit) should use and maintain its own sequence token, and each of them will get a separate log stream.
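A minimal boto3 sketch of "each container keeps its own stream and its own token" might look like the following; the log group and stream names are made up, both are assumed to exist already, and retry handling for rejected sequence tokens is omitted:

import time
import boto3

logs = boto3.client("logs")
GROUP = "my-app"          # assumed log group
STREAM = "container-1"    # one stream per container / parallel unit

next_token = None  # the first call on a fresh stream needs no sequenceToken

def put_batch(batch):
    # Send one batch to this container's own stream, reusing its own token.
    global next_token
    kwargs = {"logGroupName": GROUP, "logStreamName": STREAM, "logEvents": batch}
    if next_token:
        kwargs["sequenceToken"] = next_token
    response = logs.put_log_events(**kwargs)
    next_token = response["nextSequenceToken"]

put_batch([{"timestamp": int(time.time() * 1000), "message": '{"level": "info"}'}])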

How to process calls one by one through the CVP script?

I have a problem that demands processing calls one by one. Once the first call enters our decision loop, any other call should be redirected to a different loop until the first call is processed. We have an issue where, when 2 calls enter the script one second apart, they are processed by Call Studio at the same time.
Is it somehow possible to dynamically separate those calls?
We already tried to separate the calls with a set-local-variable element, but it does not get updated quickly enough to distinguish the different calls.
We need calls to enter the script one at a time, not together.
You can manage this by adding a Wait node in ICM with 5 seconds: when the call enters the VXML script in ICM we add a 5-second delay. Before the Wait node, add a percent-allocator node which has two paths, one with the 5-second wait and one without. So when two calls are fired at the same time, one will go through the wait and one will immediately go to the decision, and you can flip-flop between the two.

How can I insert call duration of an Asterisk call into my own database?

I have my own database to log the calls in Asterisk. I need to insert the call duration of every call into a table. How can I do this? Can I do this in my dialplan?
You are not giving much information about which DB backend you would like to use, nor whether you are asking how to write the call duration yourself or how to configure Asterisk to write the CDR in question.
So, generally speaking, you have 3 possible options (see below). For options 2 and 3 you would have to open the connection to the database yourself, write the queries needed to insert/update whatever row(s) are needed, handle errors, etc., while for option 1 you just need to configure Asterisk to do the job.
1) Asterisk can do this by default on its own, by writing the CDR (Call Detail Record) of every call to a backend. This backend can be csv, mysql, pgsql, sqlite or other databases through the cdr_odbc module. You have to configure your cdr.conf (and, depending on the backend you chose, cdr_mysql.conf, cdr_odbc.conf or cdr_pgsql.conf) with your backend information, like credentials, table names, etc.
The CDR will be written by default with some contents, which are the CDR variables (taken from the predefined Asterisk variable list):
If the channel has a cdr, that cdr record has its own set of variables which can be accessed just like channel variables. The following builtin variables are available and, unless specified, read-only.
The ones interesting for you at this point would be:
${CDR(duration)} Duration of the call.
${CDR(billsec)} Duration of the call once it was answered.
${CDR(disposition)} ANSWERED, NO ANSWER, BUSY
When the disposition is ANSWERED, billsec will contain the number of seconds to bill (the total "answered time" of the call), and duration will hold the total time of the call, including non-billed time.
2) If, on the other hand, you are not asking about the CDR but want to write the call duration yourself, you can have an AGI script that, after issuing a Dial(), reads the CDR(billsec) variable or the ANSWEREDTIME variable (set by the Dial() command); a minimal sketch follows the options below:
${DIALEDTIME} * Time for the call (seconds)
${ANSWEREDTIME} * Time from dial to answer (seconds)
3) You can also achieve the same result by having an AMI client listening for the VarSet event for the variable ANSWEREDTIME. The event in question will contain the channel for which this variable has been set.
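To make option 2 a bit more concrete, here is a minimal AGI sketch in Python, invoked from the dialplan right after the Dial(); the sqlite3 backend, the database path and the table name are just placeholders for whatever database you actually use:

#!/usr/bin/env python3
# Minimal AGI script, called e.g. with: same => n,AGI(log_duration.py)
import sys
import sqlite3

def agi_read_env():
    # Consume the agi_* header block Asterisk sends when the AGI starts.
    env = {}
    for line in sys.stdin:
        line = line.strip()
        if not line:
            break
        key, _, value = line.partition(": ")
        env[key] = value
    return env

def agi_get_variable(name):
    # Issue GET VARIABLE and parse the "200 result=1 (value)" style response.
    sys.stdout.write("GET VARIABLE %s\n" % name)
    sys.stdout.flush()
    response = sys.stdin.readline().strip()
    if "(" in response:
        return response[response.index("(") + 1:response.rindex(")")]
    return ""

env = agi_read_env()
duration = agi_get_variable("ANSWEREDTIME") or "0"  # seconds from answer to hangup

conn = sqlite3.connect("/var/lib/asterisk/calls.db")  # placeholder path and backend
conn.execute("CREATE TABLE IF NOT EXISTS call_log (uniqueid TEXT, duration INTEGER)")
conn.execute("INSERT INTO call_log VALUES (?, ?)", (env.get("agi_uniqueid"), int(duration)))
conn.commit()
conn.close()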
So, options 2 and 3 are clearly more useful if you already have an AGI script or an AMI client of your own controlling/handling calls, and option 1 is more generic but maybe slightly less flexible.
Hope it helps!
