Asterisk Web API to calculate Wait Times

I would like to know if there is a web api for asterisk. I would also like to know if the average wait time to talk to a customer service agent is exposed through the api.
I have looked around online, but could not get a firm answer.
Any pointers are appreciated.

AFAIK, no, there is no such thing in Asterisk.
What does exist is the ability to parse the queue_log file. You can get the moment the call started, the moment the call was answered by an agent, and subtract them - this will give you the wait time. Also, the first extra data value of the CONNECT event contains the time waited.
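For illustration, here is a minimal sketch (not from the original answer) of computing the average wait from queue_log, assuming the default pipe-delimited layout epoch|callid|queue|agent|event|data1|... where the first data field of a CONNECT event is the seconds the caller waited:

using System;
using System.IO;
using System.Linq;

class QueueLogWaitTime
{
    static void Main()
    {
        // A CONNECT line looks roughly like: 1234567890|1234567890.12|support|SIP/agent1|CONNECT|15|...
        // Field index 5 (the first data value) is the hold time in seconds.
        var waits = File.ReadAllLines("/var/log/asterisk/queue_log")
            .Select(line => line.Split('|'))
            .Where(f => f.Length > 5 && f[4] == "CONNECT")
            .Select(f => double.Parse(f[5]))
            .ToList();

        Console.WriteLine(waits.Count > 0
            ? string.Format("Average wait: {0:F1}s over {1} calls", waits.Average(), waits.Count)
            : "No CONNECT events found.");
    }
}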
(If you are not in the mood for parsing a text file, you can have the queue log entries written to a database and use SQL to generate reports based on them. This is in fact my preferred approach.)
If you want to provide this information to other apps, you can write your own application which reads the queue_log file/table and exposes a web service that returns wait times. If you decide to do that, we can try to give some more concrete answers.

Related

NServiceBus, when are too many messages used?

When designing a service in NServiceBus, at what point do you start questioning whether the number of messages handled by a single service is too many, and start breaking them out into a new service?
Consider the following: I have a sales service which can currently be broken into a few distinct business components, these are sales order validation, sales order processing, purchase order validation and purchase order processing.
There are currently about 20 message handlers and 2 sagas used within this service. My concern is that during high-volume traffic from my website, an initial spike can push the number of queued messages into the hundreds. Considering that the messages need to be processed in the order they are taken off the queue, this can cause a delay for the last in the queue (depending on what processing each message does).
When separating concerns within a service into smaller business components, I find this makes things a little easier. Sure, it's a logical separation, but it seems to provide a layer of clarity and understanding. To me it seems an easier option than creating new services, where in the end the more services I have, the more maintenance I need to do.
Does anyone have any similar concerns to this?
I think you have actually answered your own question :)
As soon as the message volume reaches a point where the lag becomes an issue, you could look at standing up another instance of your endpoint. You do not necessarily need to reduce the number of handlers. You could simply install the service a number of times and have specific message types sent to the relevant endpoint by mapping.
So it becomes a matter of a simple instance installation and some config changes. You can then split messages either by source, so that messages from a particular origin end up on a particular endpoint (maybe priority), or by message type.
I happened to do the same thing on a previous project (not using NServiceBus though) where we needed document conversion messages coming from the UI to be processed ASAP. We simply installed the conversion service again with its own set of queues and changed the UI configuration to send the conversion messages to the new endpoint. The background conversion messages were still going to the previous endpoint. So here the source determined the separation.

Pattern for long running tasks invoked through ASP.NET

I need to invoke a long running task from an ASP.NET page, and allow the user to view the task's progress as it executes.
In my current case I want to import data from a series of data files into a database, but this involves a fair amount of processing. I would like the user to see how far through the files the task is, and any problems encountered along the way.
Due to limited processing resources I would like to queue the requests for this service.
I have recently looked at Windows Workflow and wondered if it might offer a solution?
I am thinking of a solution that might look like:
ASP.NET AJAX page -> WCF Service -> MSMQ -> Workflow Service *or* Windows Service
Does anyone have any ideas, experience or have done this sort of thing before?
I've got a book that covers explicitly how to integrate WF (WorkFlow) and WCF. It's too much to post here, obviously. I think your question deserves a longer answer than can readily be answered fully on this forum, but Microsoft offers some guidance.
And a Google search for "WCF and WF" turns up plenty of results.
I did have an app under development where we used a similar process using MSMQ. The idea was to deliver emergency messages to all of our stores in case of product recalls, or known issues that affect a large number of stores. It was developed and tested OK.
We ended up not using MSMQ because of a business requirement - we needed to know if a message was not received immediately so that we could call the store, rather than just letting the store get it when their PC was able to pick up the message from the queue. However, it did work very well.
The article I linked to above is a good place to start.
Our current design, the one that we went live with, does exactly what you asked about with a Windows service.
We have a web page to enter messages and pick distribution lists - these are saved in a database.
We have a separate Windows service (we call it the AlertSender) that polls the database and checks for new messages.
The store-level PCs have a Windows service that hosts a WCF client that listens for messages (the AlertListener).
When the AlertSender finds messages that need to go out, it sends them to the AlertListener, which is responsible for displaying the message to the stores and playing an alert sound.
As the messages are sent, the AlertSender updates the status of the message in the database.
As stores receive the message, a co-worker enters their employee # and clicks a button to acknowledge that they've received the message. (Critical business requirement for us because if all stores don't get the message we may need to physically call them to have them remove tainted product from shelves, etc.)
Finally, our administrative piece has a report (ASP.NET) tied to an AlertId that shows all of the pending messages, and their status.
You could have the back-end import process write status records to the database as it completes sections of the task, and the web-app could simply poll the database at arbitrary intervals, and update a progress-bar or otherwise tick off tasks as they're completed, whatever is appropriate in the UI.
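As a rough illustration of that idea (the ImportStatus table and its columns are invented names, not something from the answer above), the import process could update a status row as it works, and the page's AJAX endpoint could read it back:

using System;
using System.Data.SqlClient;

public static class ImportProgress
{
    // Called by the background import after each file (or section) is processed.
    public static void ReportProgress(string connStr, int importId, int filesDone, string lastError)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "UPDATE ImportStatus SET FilesDone = @done, LastError = @err WHERE ImportId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@done", filesDone);
            cmd.Parameters.AddWithValue("@err", (object)lastError ?? DBNull.Value);
            cmd.Parameters.AddWithValue("@id", importId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    // Called from the AJAX endpoint the page polls every few seconds.
    public static int GetPercentComplete(string connStr, int importId)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "SELECT FilesDone * 100 / TotalFiles FROM ImportStatus WHERE ImportId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", importId);
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}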

long running http process - how to put in separate process?

I know that similar questions have been asked all over the place, but I'm having trouble finding one that relates directly to what I'm after.
I have a website where a user uploads a data file, then that file is transformed and imported into SQL. The file can be up to 50 MB in size, and sometimes this process can take 30 minutes or even longer.
I realise I need to palm off the actual work to another process, and poll that process on the web page. I'm wondering what the best approach would be though? Being a web developer by trade, I'm finding all this new Windows Service stuff a bit confusing, and I just wanted somewhere to start.
So:
Can I do / should I be doing this with a Windows service? If so, how?
Should I use WCF? If this runs under IIS, will I have problems with aspnet_wp.exe recycling and timing out my process?
clarifications
The data is imported into sql, there's no file distribution taking place.
If there is a failure, it absolutely MUST be reported to the user. The web page will poll every, let's say, 5 seconds from the time the async task begins, to get the 'status' of the import. Once it's finished, another response will tell the page to stop polling for status updates.
queries on final decision
OK, so as I thought, it seems that a Windows service is the best idea. As to HOW to get it to work, it seems the 'put the file there and wait for the service to pick it up' idea is the generally accepted way. Is there a way I can start a process run by the service without it having to constantly check a database table / folder? As I said earlier, I don't have any experience with Windows services - I wondered, if I put a public method in the service, can I call it somehow?
well ...
// requires: using System.Threading;
var thread = new Thread(() =>
{
    // your action (e.g. transform the file and import it into SQL)
});
thread.Start();
but you will have problems with that:
what if the import to sql fails? should there be any response to the client?
if it fails, how do you ensure the file gets processed on a later request?
what if the application shuts down ... this newly created and started thread will be killed as well
...
it's not always a good idea to store everything in sql (especially files...). if you want to make the file available to several servers why not distribute them via ftp ...?
i believe that your whole concept is a bit messed up (sorry for assuming this), and it might be helpful if you elaborate and give us more information about your intentions!
edit:
Can I do / should I be doing this with a windows service? if so, how?
you can :) i advise you to create a simple console program and convert it with srvany and sc. you can get a rough overview of how to do it here (note: insert blanks after = ... that's a silly pitfall)
the term should is relative, because you did not answer the most important question
what if a record is persisted to the database, telling a consumer that file test.img should be persisted, but your service hasn't captured it or did not transform it yet?
so ... next on
Should I use WCF? If this runs under IIS, will I have problems with aspnet_wp.exe recycling and timing out my process?
you probably could create a WCF service which receives some binary data and then stores it in a database. this request could be async, yes. but what for?
once again:
please give us more insight into your workflow: what exactly are you trying to achieve? which "environmental conditions" do you have (eg. app A polls the db and expects file records which are referenced in table x to be persisted) ...
edit:
so you want to import a .csv-file. well that changes everything :)
but i won't advise you to use a wcf-service (there could be a usage: eg. a wcf-service which has a method to insert a single row, then your iteration through the file would be implemented in another app... not that good, though).
i would suggest following:
at first, do everything in your webapp (as you've already done), but rather use some sort of bulk insert and do your transformation/logic in the database.
if you then have some sort of bottleneck, i would suggest something like a minor job-service, eg:
the webapp will upload the file and insert a row into a job table. the job-service is continuously polling the table / or gets informed via wcf by the webapp (hey, hey, finally some sort of usage for WCF in your scenario ... :) ) and then does the import job, writing a finish-note to a table / or setting the state of the job to finished ...
but this is a bit overkill :)
Please see if my below comments helps you to resolve your issue:
•Can I do / should I be doing this with a windows service? if so, how?
Yes, you can do this with a Windows service, and I think that is the way you should be doing it. You can implement your own service to process your requests, or you can use the open source Job Processor code.
Basically the idea is:
You submit a request to process the csv file as a row in a database table, with a status of "not started".
Then your Windows service picks up the requests from the database table which are not started and updates them to an "in progress" status.
Once the processing completes successfully/unsuccessfully, your service updates the database table with a status of "Completed" / "Failed".
And your asp.net page can poll the database table for the current status every 5 seconds or so.
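To make those steps concrete, here is a minimal sketch of the polling loop; the ImportJobs table, its columns, and the status strings are made-up names for illustration, not part of the answer:

using System;
using System.Data.SqlClient;
using System.Threading;

public class ImportJobPoller
{
    private readonly string _connStr;
    public ImportJobPoller(string connStr) { _connStr = connStr; }

    // Typically run on a worker thread inside the Windows service.
    public void PollForever()
    {
        while (true)
        {
            int? jobId = ClaimNextJob();
            if (jobId == null) { Thread.Sleep(5000); continue; }
            try
            {
                // ... transform the csv and bulk-insert it here ...
                SetStatus(jobId.Value, "Completed");
            }
            catch (Exception ex)
            {
                SetStatus(jobId.Value, "Failed: " + ex.Message);
            }
        }
    }

    // Atomically flips one 'NotStarted' job to 'InProgress' and returns its id.
    private int? ClaimNextJob()
    {
        using (var conn = new SqlConnection(_connStr))
        using (var cmd = new SqlCommand(
            @"UPDATE TOP (1) ImportJobs SET Status = 'InProgress'
              OUTPUT inserted.JobId
              WHERE Status = 'NotStarted'", conn))
        {
            conn.Open();
            object id = cmd.ExecuteScalar();
            return id == null ? (int?)null : (int)id;
        }
    }

    private void SetStatus(int jobId, string status)
    {
        using (var conn = new SqlConnection(_connStr))
        using (var cmd = new SqlCommand(
            "UPDATE ImportJobs SET Status = @s WHERE JobId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@s", status);
            cmd.Parameters.AddWithValue("@id", jobId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}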
•Should I use WCF? If this runs under IIS, will I have problems with aspnet_wp.exe recycling and timing out my process?
you should not be using WCF for this purpose.

BackgroundWorker From ASP.Net Application

We have an ASP.Net application that lets administrators work with and perform operations on large sets of records. For example, we have a "Polish Data" task that an administrator can perform to clean up data for a record (e.g. reformat phone numbers, social security numbers, etc.). When performed on a small number of records, the task completes relatively quickly. However, when a user performs the task on a larger set of records, the task may take several minutes or longer to complete. So, we want to implement these kinds of tasks using some kind of asynchronous pattern. For example, we want to be able to launch the task, and then use AJAX polling to provide a progress bar and status information.
I have been looking into using the BackgroundWorker class, but I have read some things online that make me pause. I would love to get some additional advice on this.
For example, I understand that the BackgroundWorker will actually use the thread pool from the current application. In my case, the application is an ASP.Net web site. I have read that this can be a problem because when the application recycles, the background workers will be terminated. Some of the jobs I mentioned above may take 3 minutes, but others may take a few hours.
Also, we may have several hundred administrators all performing similar operations during the day. Will the ASP.Net application thread pool be able to handle all of these background jobs efficiently while still performing its normal request processing?
So, I am trying to determine if using the BackgroundWorker class and approach is right for our needs. Should I be looking at an alternative approach?
Thanks and sorry for such a long post!
Kevin
In your case it actually sounds like the solution you are looking for is multifaceted (and not a simple one-and-done project).
Since you said that some processes can last for hours, that is absolutely not something for ASP.NET to own. This should be run inside a Windows service and managed with native Windows threading.
You will need to implement some type of work queue in your service and a way to communicate with the queue. One way is to expose a WCF service for all the actions your service will govern. Another would be to have the service poll a database table and pick up work from the table.
To be able to express the status of the process, you will want the ASP.NET application to have some reference to the process ID - for example, the WCF service returns a GUID identifier. Then you have a method that, when you give it the processID, will return the status of the process. You can then implement the polling of that service call using AJAX and display any type of modal you wish.
Another thing to remember is that you need to design your processes to have knowledge of where they are and where they will be when finished, so they can track their state. For example, BatchJobA is run and has 1000 records to process. The service needs to know what record it's on, or what the current % of completion is, to be able to return information to the UI. For SQL queries that take a very long time to execute, it can be very problematic to accurately gauge where the process is, unless you do a lot of pre- and post-processing with temp tables, so that in the middle of the run you can read the status of the temp tables to understand how far along it is.
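A bare-bones sketch of what that WCF surface might look like; all the names here (the contract, the operations, JobStatus) are invented for illustration, not part of the answer:

using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IBatchJobService
{
    // Queues the work and returns the identifier the page will poll with.
    [OperationContract]
    Guid StartPolishData(int[] recordIds);

    // Returns percent complete plus a short status message for the given process.
    [OperationContract]
    JobStatus GetStatus(Guid processId);
}

[DataContract]
public class JobStatus
{
    [DataMember] public int PercentComplete { get; set; }
    [DataMember] public string Message { get; set; }
}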
Based on what you are saying I think that BackgroundWorker is not a good choice.
Furthermore, keeping this functionality as part of your main app can be problematic, specifically because you do not want the submitted processing to be interrupted if the main app recycles. You can play with async processing, but it will still be part of the main app's AppDomain - all of it will die if the app recycles.
I would suggest building a separate app implementing this functionality. In a similar situation I separated background processing into a Windows service and hosted a web service in it as a means of communication.
You might consider a slightly different approach.
For example, have a command and control table in which you send commands like "REFORMAT PHONE NUMBERS" or whatever.
Then have a windows service monitoring that table. Whenever a record shows up, run the command.
This eliminates any sort of worry about a background thread. Further you have a bit more flexibility with regards to what's in the queue, order of operations including priority, etc. Finally, you would have a definitive list of what is running or needs to run.
As an option, instead of a windows service you might just use a SQL job to execute every so often to watch your control table and perform the requested action.

sending an email, but not now

I'm writing an application where the user will create an appointment, and instantly get an email confirming their appointment. I'd also like to send an email the day of their appointment, to remind them to actually show up.
I'm in ASP.NET (2.0) on MS SQL. The immediate email is no problem, but I'm not sure about the best way to address the reminder email. Basically, I can think of three approaches:
Set up a SQL job that runs every night, kicking off SQL emails to people that have appointments that day.
Somehow send the email with a "do not deliver before" flag, although this seems like something I might be inventing.
Write another application that runs at a certain time every night.
Am I missing something obvious? How can I accomplish this?
Choice #1 would be the best option: create a table of emails to send, and update the table as you send each email. It's also best not to delete the entry but to mark it as sent - you never know when you'll have a problem one day and want to resend emails; I've seen this happen many times in similar setups.
One caution - tightly coupling the transmission of the initial email in the web application can result in a brittle architecture (e.g. SMTP server not available) - and lost messages.
You can introduce an abstraction layer via an MSMQ for both the initial and the reminder email - and have a service sweeping the queue on a scheduled basis. The initial message can be flagged with an attribute that means "SEND NOW"; the reminder message can be flagged as "SCHEDULED"; and the sweeper simply needs to send any messages it finds that are flagged "SEND NOW", or that are "SCHEDULED" and have a toBeSentDate <= the current date. Once the message is successfully sent, the unit of work can be concluded by deleting the message from the queue.
This approach ensures messages are not lost - and enables the distribution of load to off-peak hours by adjusting the service polling interval.
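For what it's worth, a rough sketch of such a sweeper using System.Messaging; the message shape, the queue path, and the helper names are invented, and a production version would also want transactional receives:

using System;
using System.Messaging;

public class EmailCommand
{
    public string To;
    public string Subject;
    public string Body;
    public DateTime ToBeSentDate;   // DateTime.MinValue can stand in for "SEND NOW"
}

public class ReminderSweeper
{
    public void Sweep(string queuePath)   // e.g. @".\private$\emailReminders"
    {
        using (var queue = new MessageQueue(queuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(EmailCommand) });
            foreach (Message peeked in queue.GetAllMessages())
            {
                // Take ownership of this specific message, then decide what to do with it.
                Message owned = queue.ReceiveById(peeked.Id);
                var cmd = (EmailCommand)owned.Body;
                if (cmd.ToBeSentDate <= DateTime.Now)
                    Send(cmd);          // due (or SEND NOW): the unit of work ends here
                else
                    queue.Send(cmd);    // not due yet: put it back for a later sweep
            }
        }
    }

    private void Send(EmailCommand cmd) { /* SmtpClient goes here */ }
}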
As Rob Williams points out - my suggestion of MSMQ is a bit of overkill for this specific question... but it is a viable approach to keep in mind when you start looking at problems of scale - and you want (or need) to minimize/reduce database read/write activity (especially during peak processing periods).
Hat tip to Rob.
For every larger project I usually also create a service which performs regular or periodic tasks.
The service updates its status and time of last execution somewhere in the database, so that the information is available for applications.
For example, the application posts commands to a command queue, and the service processes them at the scheduled time.
I find this solution easier to handle than SQL Server Tasks or Jobs, since it's only a single service that you need to install, rather than ensuring all required Jobs are set up correctly.
Also, as the service is written in C#, I have a more powerful programming language (plus libraries) at hand than T-SQL.
If it's really pure T-SQL stuff that needs to be handled, there will be an Execute_Daily stored procedure that the service is going to call on date change.
Create a separate batch service, as others have suggested, but use it to send ALL of the emails.
The web app should record the need to send notifications in a database table, both for the immediate notice and for the reminder notice, with both records annotated with the desired send date/time.
Using MSMQ is overkill--you already have a database and a simple application. As the complexity grows, MSMQ or something similar might help with that complexity and scalability.
The service should periodically (every few minutes to a few hours) scan the database table for notifications (emails) to send in the near future, send them, and mark them as sent if successful. You could eventually leverage this to also send text messages (SMS) or instant messages (IMs), etc.
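A compact sketch of that scan-send-mark loop; the Notifications table, its columns, and the sender address are placeholders, and the SMTP host/credentials are assumed to come from configuration:

using System;
using System.Data.SqlClient;
using System.Net.Mail;

public static class NotificationSweep
{
    // Run this from the service's timer every few minutes.
    public static void SendDueNotifications(string connStr)
    {
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            var select = new SqlCommand(
                @"SELECT Id, Recipient, Subject, Body FROM Notifications
                  WHERE Sent = 0 AND SendAfter <= GETDATE()", conn);
            var smtp = new SmtpClient();   // host/credentials from <system.net> config
            using (var reader = select.ExecuteReader())
            {
                while (reader.Read())
                {
                    smtp.Send("noreply@example.com", reader.GetString(1),
                              reader.GetString(2), reader.GetString(3));
                    // Mark as sent rather than deleting, so it can be resent if needed.
                    using (var conn2 = new SqlConnection(connStr))
                    using (var mark = new SqlCommand(
                        "UPDATE Notifications SET Sent = 1 WHERE Id = @id", conn2))
                    {
                        mark.Parameters.AddWithValue("@id", reader.GetInt32(0));
                        conn2.Open();
                        mark.ExecuteNonQuery();
                    }
                }
            }
        }
    }
}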
While you are at it, you should consider using the Command design pattern, and implement this service as a reusable Command executor. I have done this recently with a web application that needs to keep real estate listing (MLS) data synchronized with a third-party provider.
Your option 2 certainly seems like something you are inventing. I know that my mail system won't hold messages for future delivery if you were to send me something like that.
I don't think you're missing anything obvious. You will need something that runs the day of the appointment to send emails. Whether that might be better as a SQL job or as a separate application would be up to your application architecture.
I would recommend the first option, using either a SQL job or another application that runs automatically every day to send the e-mails. It's simple, and it works.
Microsoft Office has a delivery delay feature, but I think that is an Outlook thing rather than an Exchange/Mail Server thing, so you're going to have to go with option 1 or 3. Or option 4 would be to write a service. That way you won't have to worry about scheduled tasks to get the option 3 application to run.
If you are planning on having this app hosted at a cheap hosting service (like GoDaddy), then what I'd recommend is to spin off a worker thread in Global.asax at Application_Start and have it sleep, wake up, send emails, sleep...
Because you won't be able to run something on the SQL Server machine, and you won't be able to install your own service.
I do this, and it works fine.
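A minimal sketch of that Global.asax trick; the SendDueReminderEmails helper is hypothetical, and as noted earlier in this thread the worker thread dies whenever the application recycles:

using System;
using System.Threading;

public class Global : System.Web.HttpApplication
{
    private static Thread _emailWorker;

    protected void Application_Start(object sender, EventArgs e)
    {
        _emailWorker = new Thread(() =>
        {
            while (true)
            {
                try { SendDueReminderEmails(); }    // hypothetical helper: query reminders table, send due ones
                catch (Exception) { /* log and carry on */ }
                Thread.Sleep(TimeSpan.FromMinutes(15));
            }
        });
        _emailWorker.IsBackground = true;   // don't block app shutdown
        _emailWorker.Start();
    }

    private static void SendDueReminderEmails()
    {
        // scan the reminders table and send anything due; see the earlier sketches
    }
}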

Resources