URL on call end in studio? - twilio-studio

In Studio, is there a way to have it call a URL when the call is completed? When I'm not using Studio, I just use a "StatusCallback" URL with "StatusCallbackEvent" set to completed. But the docs for Studio warn to leave the "CALL STATUS CHANGES" setting for the phone number pointed at the Studio flow; otherwise I end up with "stuck" executions that I have to kill manually. Anywhere that I force the call to end I could put in an HTTP request before ending it, but if someone simply hangs up in my IVR, I want to be notified of it so that I can record the ending timestamp and call cost in my tracking system.
I thought about pointing the call status changes at my own page and having my page end the execution, but it appears there is no REST API to stop an active execution. The "stuck executions" article says to either use the Console to "stop execution" or to use the REST API to delete the execution. But those are far from the same thing: the REST API deletes the execution entirely, so it no longer even appears in the logs, rather than just ending its runtime. That would not be acceptable.
Worst case, I'll run a daily post-process that grabs the call end data for all calls made that day, but it's a bit ugly compared to how my non-Studio programs do it with the StatusCallback.
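If you do end up with the daily post-process, a minimal sketch of it might look like the following (C#, using the documented Calls list resource of the Twilio REST API; the account SID, auth token, and the "push into tracking system" step are placeholders):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class DailyCallReconciliation
{
    static async Task Main()
    {
        const string accountSid = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"; // placeholder
        const string authToken = "your_auth_token";                      // placeholder

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic",
            Convert.ToBase64String(Encoding.ASCII.GetBytes($"{accountSid}:{authToken}")));

        // Completed calls that started today (StartTime filters on the call's start date).
        string today = DateTime.UtcNow.ToString("yyyy-MM-dd");
        string url = $"https://api.twilio.com/2010-04-01/Accounts/{accountSid}/Calls.json" +
                     $"?Status=completed&StartTime={today}&PageSize=1000";

        string json = await http.GetStringAsync(url);
        using var doc = JsonDocument.Parse(json);

        foreach (var call in doc.RootElement.GetProperty("calls").EnumerateArray())
        {
            string sid = call.GetProperty("sid").GetString();
            string endTime = call.GetProperty("end_time").GetString();
            string duration = call.GetProperty("duration").GetString();
            string price = call.GetProperty("price").GetString(); // may be null until priced

            // Placeholder: record the ending timestamp and call cost in the tracking system here.
            Console.WriteLine($"{sid}: ended {endTime}, {duration}s, cost {price}");
        }
    }
}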

Related

Updating ColdFusion Solr collections with a scheduled task

So I'm pretty new to using the ColdFusion Solr search (I just moved from a CF8 Mac OS X server to a Linux CF9 server), and I'm wondering what the best way is to handle automatically updating the collections. I know scheduled tasks are meant for this, but I haven't been able to find any examples online.
I currently have a scheduled task set up to update all of the collections weekly by getting the list of collections and using the cfindex tag in a loop to run the refresh command. This is pretty processing-intensive, though, and takes about ten minutes to update the four collections I have set up so far. It works when I run it in the browser, but I get the error "The request has exceeded the allowable time limit Tag: CFLOOP" when I run the task from the scheduled task administration page.
Is there a better way to handle updating the collections? Would it be better if I made a task to update each collection individually?
Here's my update code.
<!--- Allow this request up to 30 minutes before timing out --->
<cfsetting requesttimeout="1800">
<!--- Get every Solr collection registered on this server --->
<cfcollection action="list" name="collections" engine="solr">
<cfloop query="collections">
    <!--- Re-crawl the collection's web root and refresh its index --->
    <cfindex collection="#name#" action="refresh" extensions=".pdf, .html, .htm, .cfml, .cfm" type="path" key="/home/#name#/public_html/" recurse="yes">
</cfloop>
In earlier versions of ColdFusion there was a URL parameter that could be passed on any HTTP request to change the server's timeout for the requested page. You might have guessed from the scheduled task configuration that there's an HTTP request running your task, so it functions just like any other page. In those earlier versions you would have just added &requesttimeout=900 to the URL and that gave the server 15 minutes to process that task.
In later versions they realized that this URL parameter was a security risk but they needed a way to allow developers to declare that an individual HTTP request should still be allowed to take longer than the default page timeout set in the ColdFusion Administrator. So they moved it from the URL parameter to the <cfsetting> tag.
<cfsetting requesttimeout="900" />
You need to put the cfsetting tag at the top of the page, rather than putting it inside your loop, because it's resetting the total allowable time from the beginning of the request, not just since the last cfsetting tag. Ben Nadel wrote a blog article about that here: http://www.bennadel.com/blog/626-CFSetting-RequestTimeout-Updates-Timeouts-It-Does-Not-Set-Them.htm
I'm not sure if there's an upper limit to the request timeout. I do know that in the past, when I've had a really long-running task like that, the server has gradually slowed down, in some cases until it crashed. I'm not sure I would expect reindexing Solr collections to degrade performance so badly; I think my tasks were doing some other things that were probably hogging memory at the time. Anyway, if you run into that issue, you may need to divide it up into separate tasks for each collection and make sure there's enough time between the tasks to allow each one to complete before the next one starts.
EDIT: Oops! I don't know how I missed the cfsetting tag in the original question. D'oh! In any event, when you execute a scheduled task via the CF Administrator, it performs a cfhttp request to execute the task. This is the way scheduled tasks are normally executed, and I suspect it's so the task can execute inside your own application scope, but the effect is that there are two separate requests executing. I don't think there's a cfsetting tag in the CFIDE page, but I suspect a person could add one if they wanted to give that page longer to wait for the task to complete.
EDIT: Okay, if you wanted to add the cfsetting in the CFIDE, you would first have to decrypt the template and then add your one line of code... which might void your warranty on the server, but is probably not dangerous. ;) For decrypting the template see: Can I get the source of a hacked Coldfusion template? - and the template to edit is /CFIDE/administrator/scheduler/scheduletasks.cfm.

ASP.NET feedback during long submit

This is probably a really simple thing. Basically, the user clicks a button and a potentially long-running task happens. I'd like to do a few things like toggle the button's enabled state, show a spinner, etc. In VB.NET WinForms I'd just call Application.DoEvents() and the updates would happen while the code continues. How can I do this in ASP.NET? (Preferably server-side or with minimal JavaScript.)
There are a few ways to approach this in ASP.NET, depending on exactly what your requirement is. Getting the process started is fairly easy, but monitoring it for completion from the client side will require more work on both the server and the client.
The basic outline of the solution is:
1) Perform some action on the client that initiates the action. For example, you could post the entire page back on a button click, initiate an AJAX request, or do a partial page postback, depending on how much information you need from the page.
2) On the server side, initiate the task using a BackgroundWorker, make an entry in a workflow queue, or store a request in a database table that is monitored by a service responsible for performing the action.
3) Back on the client side, use JavaScript to start a setTimeout loop that, when it fires, issues an AJAX request to the web server to check on completion. Using a timeout loop like this ensures that the UI remains responsive and that any animations display correctly. How you check on completion will depend on how your server-side implementation is designed, but it will almost certainly require a database. (A server-side sketch of steps 2 and 3 follows below.)
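As a rough, hypothetical sketch of the server side of steps 2 and 3 (the LongTaskStatus table, connection string, and page names are made up for illustration, and BackgroundWorker is just one of the options listed), starting the task and recording its status might look like this:

using System;
using System.ComponentModel;
using System.Data.SqlClient;
using System.Web.UI;

// Hypothetical Web Forms code-behind: the button click creates a status row,
// kicks off a BackgroundWorker, and returns immediately so the page stays responsive.
public partial class StartTaskPage : Page
{
    const string ConnectionString = "..."; // placeholder

    protected void StartButton_Click(object sender, EventArgs e)
    {
        Guid jobId = Guid.NewGuid();

        // Step 2: record the request so the client (and anyone else) can poll its status.
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO LongTaskStatus (JobId, Status) VALUES (@id, 'requested')", conn))
        {
            cmd.Parameters.AddWithValue("@id", jobId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }

        // Kick off the work on a background thread; the page response returns right away.
        var worker = new BackgroundWorker();
        worker.DoWork += (s, args) => RunLongTask((Guid)args.Argument);
        worker.RunWorkerAsync(jobId);

        // Hand the id to the client so its polling loop (step 3) knows what to ask about.
        ClientScript.RegisterHiddenField("jobId", jobId.ToString());
    }

    static void RunLongTask(Guid jobId)
    {
        try
        {
            // ... the actual long-running work goes here ...
            UpdateStatus(jobId, "completed");
        }
        catch (Exception)
        {
            UpdateStatus(jobId, "failed");
        }
    }

    static void UpdateStatus(Guid jobId, string status)
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(
            "UPDATE LongTaskStatus SET Status = @status WHERE JobId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@status", status);
            cmd.Parameters.AddWithValue("@id", jobId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}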
We use the following general approach for initiating reports from the web client, which can be long-running:
When the user initiates the report, open a new window to the report generation page using JavaScript, passing the page enough parameters to get it started. Opening a separate window allows the user to continue working but still see that something is happening.
The user interface for the report page basically contains an animated GIF so that the user knows that something is going on.
When the report page is initially loaded on the server, it generates a unique id for monitoring the status of the report and embeds this in JavaScript for use in monitoring the status. It then stores this unique identifier in a database table that contains the unique id and a status column, initializing the status to "requested".
Once the database entry has been made, the page fires off a BackgroundWorker to initiate the action and then returns the page to the user.
When the page is displayed, JavaScript starts a setTimeout loop that periodically fires off an AJAX request to the web server, which then checks the database for the status of the report using the unique identifier created earlier.
When the BackgroundWorker finishes the report, whether successfully or not, it updates the database table with the status, the location of the report or any error messages, and then terminates.
When the JavaScript loop finds that the report generation has completed, it displays either the report or the error message to the user.
Hopefully, this gives you some ideas for solving your issue.
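The status check that the client-side loop calls can be tiny. As a sketch under the same made-up names (the LongTaskStatus table and its JobId/Status columns), a generic handler might be:

using System;
using System.Data.SqlClient;
using System.Web;

// Hypothetical /ReportStatus.ashx handler: the browser's polling loop calls it with
// ?id=<unique id> and gets back "requested", "completed", "failed", etc.
public class ReportStatusHandler : IHttpHandler
{
    const string ConnectionString = "..."; // placeholder

    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        Guid jobId = Guid.Parse(context.Request.QueryString["id"]);

        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(
            "SELECT Status FROM LongTaskStatus WHERE JobId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", jobId);
            conn.Open();
            object status = cmd.ExecuteScalar();

            context.Response.ContentType = "text/plain";
            context.Response.Write(status == null ? "unknown" : status.ToString());
        }
    }
}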
The issue with this could be that once the page is posting back, you can't update other sections of the page.
You can use multiple asp:UpdatePanel controls and have them communicate with each other, causing the state in another panel to change.
Take a look at this link:
http://www.ajaxtutorials.com/ajax-tutorials/tutorials-using-multiple-updatepanels-in-net-3-5-and-vb/
It will show you how to accomplish this.

ASP.NET Lifecycle and long process

I know we need a better solution, but we need to get this done this way for right now. We have a long import process that's fired when you click the start import button on an ASPX web page. It takes a long time... sometimes several hours. I changed the timeout and that's fine, but I keep getting a connection reset error from the server after about an hour. I'm thinking it's the ASP.NET lifecycle, and I'd like to know if there are settings in IIS I can change to make this lifecycle last longer.
You should almost certainly do the long-running work in a separate process (not just a separate thread).
Write a standalone program to do the import. Have it set a flag somewhere (a column in a database, for example) when it's done, and put lines into a logfile or database table to show progress.
That way your page just gets the job started. Afterwards, it can self-refresh once every few minutes, until the 'completed' flag is set. You can display the log table if you want to be sure it's still running and hasn't died.
This is pretty straightforward stuff, but if you need code examples they can be supplied.
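For instance, a minimal sketch of such a standalone importer (a console app; the ImportJob and ImportLog tables and their columns are made up for illustration) could be:

using System;
using System.Data.SqlClient;

// Hypothetical standalone importer: run it from a scheduled task or launch it from the page.
// It logs progress rows as it goes and sets a 'completed' flag when it finishes,
// so the web page only has to poll the ImportJob table.
class ImportRunner
{
    const string ConnectionString = "..."; // placeholder

    static void Main(string[] args)
    {
        int jobId = int.Parse(args[0]); // id of the ImportJob row created by the web page

        try
        {
            for (int batch = 1; batch <= 100; batch++)   // stand-in for the real import loop
            {
                ImportBatch(batch);                      // ... the actual import work ...
                Log(jobId, $"Finished batch {batch} of 100");
            }
            SetCompleted(jobId, success: true);
        }
        catch (Exception ex)
        {
            Log(jobId, "FAILED: " + ex.Message);
            SetCompleted(jobId, success: false);
        }
    }

    static void ImportBatch(int batch) { /* placeholder for the real work */ }

    static void Log(int jobId, string message)
    {
        Execute("INSERT INTO ImportLog (JobId, LoggedAt, Message) VALUES (@id, GETDATE(), @msg)",
                cmd => { cmd.Parameters.AddWithValue("@id", jobId);
                         cmd.Parameters.AddWithValue("@msg", message); });
    }

    static void SetCompleted(int jobId, bool success)
    {
        Execute("UPDATE ImportJob SET Completed = 1, Succeeded = @ok WHERE JobId = @id",
                cmd => { cmd.Parameters.AddWithValue("@ok", success);
                         cmd.Parameters.AddWithValue("@id", jobId); });
    }

    static void Execute(string sql, Action<SqlCommand> bind)
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            bind(cmd);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}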
One other point to consider, which might explain the behaviour, is that aspnet_wp.exe recycles if too much memory is being consumed (do not confuse this with the page life cycle).
If your long process is taking up too much memory, ASP.NET will launch a new process and reassign all existing requests. I would suggest checking for this. You can do so by watching aspnet_wp in Task Manager and checking the memory size being used - if the size suddenly drops back down, it has recycled.
You can change the memory limit in machine.config:
<system.web>
    <processModel autoConfig="true"/>
</system.web>
Use memoryLimit to specify the maximum allowed memory size, as a percentage of total system memory, that the worker process can consume before ASP.NET launches a new process and reassigns existing requests. (The default is 60.)
<system.web>
    <processModel autoConfig="true" memoryLimit="10"/>
</system.web>
If this is what is causing a problem for you, the only solution might be to have a separate process for your long operation. You will need to set up IIS accordingly to give your other EXE the relevant permissions.
You can try running the process in a new thread. This means that the page will start the task and then finish the page's processing, but the separate thread will still run in the background. You won't be able to have any visual feedback, though, so you may want to log progress to a database and display that in a separate page instead.
You can also try running this as an AJAX call instead of a postback, which has different limitations...
Since you recognize this is not the way to do this, I won't list alternatives. I'm sure you know what they are :)
Extending the timeout is definitely not the way to do it. Response times should be kept to an absolute minimum. If at all possible, I would try to shift this long-running task out of the ASP.NET application entirely and have it run as a separate process.
After that it's up to you how you want to proceed. You might want the process to dump its results into a file that the ASP application can poll for, either via AJAX or having the user hit F5.
If it's taking hours, I would suggest a separate thread for this, and perhaps email a notification when the result is ready to download from the server (i.e. send a link to the finished result).
Or, if it is important to have a UI in the client's browser (if they are going to be hanging around for n hours), then you could have a WebMethod which is called from the client (JavaScript) using setInterval to periodically check whether it's done.
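As a sketch of the WebMethod half of that (the ImportJob table and page class are hypothetical, and calling it via PageMethods assumes a ScriptManager with EnablePageMethods="true" on the page):

using System.Data.SqlClient;
using System.Web.Services;
using System.Web.UI;

// Hypothetical page exposing a page method that the client polls with setInterval,
// either through PageMethods.IsImportDone(jobId, callback) or a plain XMLHttpRequest POST.
public partial class ImportStatusPage : Page
{
    const string ConnectionString = "..."; // placeholder

    [WebMethod]
    public static bool IsImportDone(int jobId)
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(
            "SELECT Completed FROM ImportJob WHERE JobId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", jobId);
            conn.Open();
            object completed = cmd.ExecuteScalar();
            return completed != null && (bool)completed;
        }
    }
}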

Stored proc and program are still running even though browser is closed?

I have a .NET page that performs a calculation by calling a stored proc in SQL Server when the user clicks a button. Normally, the calculation takes approximately 2 minutes to complete, and upon completion it updates a field in the table. The issue I have is that when the user accidentally closes the browser, the stored proc keeps running and still updates the table. I thought that the page would stop running and stop updating the field in the table, rather than keep on running even though the browser was closed? Any ideas how to stop the process?
When a page request results in a call to the database, the page will wait for it to finish, but the database has no knowledge of the page. So if the page stops waiting for whatever reason, the database will happily continue working until finished. In fact, the page request also has no knowledge of whether the browser is still open or not, so if the user closes the browser, the page request itself will still execute until finished. It's only that nobody will be listening for the result.
No.
The server knows nothing about whether the browser is still open. The browser just fires off the request, and by the time the page is downloaded, its interaction with the server is complete.
You can use a timed AJAX call (polling) to allow the server to determine whether the browser is still open and active.
In order to stop your stored procedure when this happens, you will need to make some major modifications to it. The only real way is to store a flag in the database indicating that the stored procedure should continue running, and then check that flag throughout your stored proc. If the browser is closed, you update this flag, and you can change your stored procedure to roll back/exit based on its value.
However, all of this is fairly complex for very little gain... Is it a problem that the stored procedure continues to run?
If you need to support some form of cancellation for this, you'll need to make a number of changes to your ASP.NET page and your calculation. Your calculation would have to look for a flag (maybe in the same table where it stores the result).
Then, in your code, you need to fire off the procedure execution asynchronously. Now, in order to do things cleanly, you really need the whole page to process asynchronously, and to wake periodically, check Response.IsClientConnected, and if the client is no longer connected, set the flag to cancel the calculation.
It's a fair chunk of work, and easy to get wrong. Also, your strategy for implementing this would vary wildly depending on whether your application needs to support 10 users or thousands (do you sleep in the .NET thread pool, and thus limit the scalability of your application, or have a dedicated thread polling the IsClientConnected property and working out which calculations to abort?)
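To make that concrete, here is a rough sketch under the simplest possible assumptions: a per-request watcher loop (the "sleep in the thread pool" option, so it does not scale), a made-up CalculationJob table whose CancelRequested column the stored procedure is assumed to check, and a hypothetical dbo.RunLongCalculation procedure.

using System;
using System.Data.SqlClient;
using System.Threading;
using System.Web.UI;

// Hypothetical code-behind: the calculation runs on a background thread while the
// request thread watches Response.IsClientConnected and flips a cancel flag that the
// stored procedure is assumed to check between steps.
public partial class CalculationPage : Page
{
    const string ConnectionString = "..."; // placeholder

    protected void StartButton_Click(object sender, EventArgs e)
    {
        int jobId = CreateJobRow();

        var worker = new Thread(() => RunCalculation(jobId));
        worker.Start();

        // Watch the client while the calculation runs (this blocks the request thread --
        // fine for a handful of users, not for thousands, as noted above).
        while (worker.IsAlive)
        {
            if (!Response.IsClientConnected)
            {
                Execute("UPDATE CalculationJob SET CancelRequested = 1 WHERE JobId = @id",
                        cmd => cmd.Parameters.AddWithValue("@id", jobId));
                break;
            }
            Thread.Sleep(5000); // poll every few seconds
        }
    }

    static int CreateJobRow()
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO CalculationJob (CancelRequested) VALUES (0); " +
            "SELECT CAST(SCOPE_IDENTITY() AS int)", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }

    static void RunCalculation(int jobId)
    {
        // The stored procedure receives the job id and is responsible for checking
        // CancelRequested periodically, rolling back or exiting if it has been set.
        Execute("EXEC dbo.RunLongCalculation @id", cmd =>
        {
            cmd.CommandTimeout = 0; // no client-side timeout for the long call
            cmd.Parameters.AddWithValue("@id", jobId);
        });
    }

    static void Execute(string sql, Action<SqlCommand> bind)
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            bind(cmd);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}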

Instruct the browser not to wait for more content while processing continues

How do I instruct the browser not to expect any more content to be coming down the line? Kinda like the server saying “I got what I need, you don’t need to wait while I finish the processing of the page”.
I have a server-side task, which is initiated when a certain page is hit. The task may be time-consuming, but the rendered HTML will always be the same, i.e. like a “Thank you for starting the task” message.
I’m considering starting a new thread to handle the task, but I believe there must be a more elegant solution, where you just send the right headers/instructions to the browser and then continue processing.
Any ideas?
This will send your message to the browser:
Response.Flush()
However, I believe the responsibility of notifying the client that the transmission is complete lies with IIS, which will not do so until the ASP.NET engine finishes its part of the request and passes control back to IIS. Your best bet is to start a new thread, which should not cause any problems, since the request thread will end almost immediately after the new one begins... you still only end up using about one thread per request.
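A minimal sketch of that suggestion (the page name and the StartLongTask body are placeholders):

using System;
using System.Threading;
using System.Web.UI;

// Hypothetical code-behind: send the "thank you" page content immediately, then let a
// background thread carry on with the long-running task.
public partial class StartTask : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Response.Write("<html><body>Thank you for starting the task.</body></html>");
        Response.Flush(); // push the rendered response to the browser now

        // Queue the real work; don't touch Request/Response from this thread,
        // because the request will already be finished by the time it runs.
        ThreadPool.QueueUserWorkItem(_ => StartLongTask());
    }

    static void StartLongTask()
    {
        // ... placeholder for the time-consuming work ...
    }
}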
