I'm playing with the service worker API on my computer so I can grasp how I can benefit from it in my real-world apps.
I came across a weird situation where I registered a service worker that intercepts the fetch event so it can check its cache for the requested content before sending a request to the origin.
The problem is that this code has an error which prevented the function from making the request, so my page is left blank; nothing happens.
Because the service worker has been registered, the second time I load the page it intercepts the very first request (the one that loads the HTML). Because of this bug, that fetch event fails, it never requests the HTML, and all I see is a blank page.
In this situation, the only way I know to remove the bad service worker script is through the chrome://serviceworker-internals/ console.
If this error makes it to a live website, what is the best way to solve it?
Thanks!
I wanted to expand on some of the other answers here, and approach this from the point of view of "what strategies can I use when rolling out a service worker to production to ensure that I can make any needed changes"? Those changes might include fixing any minor bugs that you discover in production, or it might (but hopefully doesn't) include neutralizing the service worker due to an insurmountable bug, a so-called "kill switch".
For the purposes of this answer, let's assume you call
navigator.serviceWorker.register('service-worker.js');
on your pages, meaning your service worker JavaScript resource is service-worker.js. (See below if you're not sure of the exact service worker URL that was used, perhaps because you added a hash or versioning info to the service worker script.)
The question boils down to how you go about resolving the initial issue in your service-worker.js code. If it's a small bug fix, then you can obviously just make the change and redeploy your service-worker.js to your hosting environment. If there's no obvious bug fix, and you don't want to leave your users running the buggy service worker code while you take the time to work out a solution, it's a good idea to keep a simple, no-op service-worker.js handy, like the following:
// A simple, no-op service worker that takes immediate control.
self.addEventListener('install', () => {
// Skip over the "waiting" lifecycle state, to ensure that our
// new service worker is activated immediately, even if there's
// another tab open controlled by our older service worker code.
self.skipWaiting();
});
/*
self.addEventListener('activate', () => {
// Optional: Get a list of all the current open windows/tabs under
// our service worker's control, and force them to reload.
// This can "unbreak" any open windows/tabs as soon as the new
// service worker activates, rather than users having to manually reload.
self.clients.matchAll({type: 'window'}).then(windowClients => {
windowClients.forEach(windowClient => {
windowClient.navigate(windowClient.url);
});
});
});
*/
That should be all your no-op service-worker.js needs to contain. Because there's no fetch handler registered, all navigation and resource requests from controlled pages will end up going directly against the network, effectively giving you the same behavior you'd get if there were no service worker at all.
Additional steps
It's possible to go further, and forcibly delete everything stored using the Cache Storage API, or to explicitly unregister the service worker entirely. For most common cases, that's probably going to be overkill, and following the above recommendations should be sufficient to get you in a state where your current users get the expected behavior, and you're ready to redeploy updates once you've fixed your bugs. There is some degree of overhead involved with starting up even a no-op service worker, so you can go the route of unregistering the service worker if you have no plans to redeploy meaningful service worker code.
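If you do decide to go further, here is a rough sketch, my own addition rather than anything prescribed by the platform, of what that more aggressive cleanup could look like inside the no-op service worker's activate handler:
self.addEventListener('activate', event => {
  event.waitUntil((async () => {
    // Delete every cache created via the Cache Storage API for this origin.
    const cacheKeys = await caches.keys();
    await Promise.all(cacheKeys.map(key => caches.delete(key)));
    // Optionally remove the registration entirely, so no service worker
    // starts up at all on subsequent visits.
    await self.registration.unregister();
  })());
});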
If you're already in a situation in which you're serving service-worker.js with HTTP caching directives giving it a lifetime that's longer than your users can wait for, keep in mind that a Shift + Reload on desktop browsers will force the page to reload outside of service worker control. Not every user will know how to do this, and it's not possible on mobile devices, though. So don't rely on Shift + Reload as a viable rollback plan.
What if you don't know the service worker URL?
The information above assumes that you know what the service worker URL is—service-worker.js, sw.js, or something else that's effectively constant. But what if you included some sort of versioning or hash information in your service worker script, like service-worker.abcd1234.js?
First of all, try to avoid this in the future; it's against best practices. But if you've already deployed a number of versioned service worker URLs and you need to disable things for all users, regardless of which URL they might have registered, there is a way out.
Every time a browser makes a request for a service worker script, regardless of whether it's an initial registration or an update check, it will set an HTTP request header called Service-Worker:.
Assuming you have full control over your backend HTTP server, you can check incoming requests for the presence of this Service-Worker: header, and always respond with your no-op service worker script response, regardless of what the request URL is.
The specifics of configuring your web server to do this will vary from server to server.
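As one illustration only, assuming a Node.js/Express backend (your server and framework may well differ), the check could look roughly like this:
const express = require('express');
const app = express();

// The body of the no-op service worker described earlier in this answer.
const NO_OP_SW = "self.addEventListener('install', () => self.skipWaiting());";

app.use((req, res, next) => {
  // Browsers send "Service-Worker: script" when fetching a script for
  // registration or for an update check.
  if (req.get('Service-Worker') === 'script') {
    res.set('Content-Type', 'application/javascript');
    res.set('Cache-Control', 'max-age=0');
    return res.send(NO_OP_SW);
  }
  next();
});

app.listen(3000);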
The Clear-Site-Data: response header
A final note: some browsers will automatically clear out specific data and potentially unregister service workers when a special HTTP response header is returned as part of any response: Clear-Site-Data:.
Setting this header can be helpful when trying to recover from a bad service worker deployment, and kill-switch scenarios are included in the feature's specification as an example use case.
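For example, continuing the hypothetical Express setup above, the header could be attached to the response for your main HTML page (the route and file path here are placeholders):
app.get('/', (req, res) => {
  // "cache" clears the HTTP cache; "storage" clears storage for the origin,
  // which includes service worker registrations, in supporting browsers.
  res.set('Clear-Site-Data', '"cache", "storage"');
  res.sendFile('/var/www/site/index.html'); // placeholder path
});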
It's important to check the browser support story for Clear-Site-Data: before you rely solely on it as a kill-switch. As of July 2019, it's not supported in 100% of the browsers that support service workers, so at the moment, it's safest to use Clear-Site-Data: along with the techniques mentioned above if you're concerned about recovering from a faulty service worker in all browsers.
You can 'unregister' the service worker using JavaScript.
Here is an example:
if ('serviceWorker' in navigator) {
navigator.serviceWorker.getRegistrations().then(function (registrations) {
//returns installed service workers
if (registrations.length) {
for(let registration of registrations) {
registration.unregister();
}
}
});
}
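One caveat worth adding: unregistering alone doesn't take already-open pages out of the broken worker's control. A reasonable extension of the snippet above, assuming you want those pages fixed immediately, is to reload once the unregistrations have resolved:
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.getRegistrations().then(function (registrations) {
    return Promise.all(registrations.map(function (registration) {
      return registration.unregister();
    }));
  }).then(function () {
    // Reload so the page is fetched from the network, outside service worker control.
    window.location.reload();
  });
}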
That's a really nasty situation, that hopefully won't happen to you in production.
In that case, if you don't want to go through the developer tools of the different browsers (chrome://serviceworker-internals/ for Blink-based browsers, or about:serviceworkers, soon about:debugging#workers, in Firefox), there are two things that come to my mind:
Use the service worker update mechanism. Your user agent will check whether there is any change to the registered worker, fetch it, and go through the activate phase again. So potentially you can change the service worker script, fix any weird situation (purge caches, etc.), and continue working. The only downside is that you will need to wait until the browser updates the worker, which could take up to a day.
Add some kind of kill switch to your worker: a special URL you can point users to that restores the state of your caches, etc.
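Here is a rough sketch of what such a kill switch could look like; the /reset-sw path is just a made-up example, not something standardized:
self.addEventListener('fetch', event => {
  const url = new URL(event.request.url);
  if (url.pathname === '/reset-sw') { // hypothetical kill-switch URL
    event.respondWith((async () => {
      // Purge every cache and remove this registration.
      const keys = await caches.keys();
      await Promise.all(keys.map(key => caches.delete(key)));
      await self.registration.unregister();
      return new Response('Service worker caches cleared and worker unregistered.');
    })());
    return;
  }
  // ...normal fetch handling for every other request...
});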
I'm not sure if clearing your browser data will remove the worker, so that could be another option.
I haven't tested this, but there is an unregister() and an update() method on the ServiceWorkerRegistration object. You can get this from navigator.serviceWorker.
navigator.serviceWorker.getRegistration('/').then(function(registration) {
registration.update();
});
update() should then immediately check whether there is a new service worker and, if so, install it. This bypasses the 24-hour waiting period and will download the serviceworker.js file every time this JavaScript is encountered.
For live situations you need to alter the service worker at byte level (put a comment on the first line, for instance) and it will be updated within the next 24 hours. You can emulate this with chrome://serviceworker-internals/ in Chrome by clicking the Update button.
This should work even in situations where the service worker itself got cached, as step 9 of the update algorithm sets a flag to bypass the service worker.
We had moved a site from godaddy.com to a regular WordPress install. The client (not us) had a service worker file (sw.js) cached in all their browsers, which completely messed things up. Our site, a normal WordPress site, has no service workers.
It's like a virus, in that it's on every page, it does not come from our server and there is no way to get rid of it easily.
We made a new, empty file called sw.js at the root of the server, then added the following to every page on the site.
<script>
if (navigator && navigator.serviceWorker && navigator.serviceWorker.getRegistration) {
navigator.serviceWorker.getRegistration('/').then(function(registration) {
if (registration) {
registration.update();
registration.unregister();
}
});
}
</script>
In case it helps someone else, I was trying to kill off service workers that were running in browsers that had hit a production site that used to register them.
I solved it by publishing a service-worker.js that contained just this:
self.registration.unregister();
Related
I have a PHP/MySQL/JS-jQuery based web site that records finish times for racers, then sends the time back to the server. The server inserts the finish time in the db, calculates the finish place based on a handicapping formula, stores that, and sends the finish place back to the web page, where it is updated on the screen.
It uses jQuery Ajax calls so the page doesn't get reloaded at all.
Everything works fine if the data connection is good.
If the data connection is bad my first version of this page would put a message up that the connection was bad.
Now I am trying to make it a bit smarter, so I have started with the HTML5 feature that tells the browser whether it is online or offline (I realize this may not be the best way yet, but it works for concept testing).
When a new finish time is recorded (or updated) and we are offline, the JS just adds a class of notSent to the tag of the finish time. The finish place, and all of the finish places that would normally come from the server, are greyed out, indicating the data is no longer valid (until it can communicate with the server).
When the browser finds itself back online, a simple jQuery each loop over each notSent element starts re-sending the AJAX requests, and if they all complete it processes the returned finish place information and displays it as up to date.
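For illustration, that resend loop could look something like the following sketch; the endpoint, data attributes, and response shape here are assumptions, not the actual code:
$(window).on('online', function () {
  $('.notSent').each(function () {
    var $cell = $(this);
    $.post('/save_finish_time.php', { // hypothetical endpoint
      racerId: $cell.data('racer-id'),
      finishTime: $cell.data('finish-time')
    }).done(function (response) {
      // The server returns the recalculated finish place; mark the row as synced.
      $cell.removeClass('notSent').text(response.finishPlace);
    });
  });
});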
It also disables all external links on the page when the browser is offline. This keeps the user from losing the data entry page by accidentally clicking a link that would give them a page-not-found error.
So my last issue is the browser's reload and close buttons: if the user clicks these while offline, they will lose the data entry screen and are out of luck until the connection comes back.
Can I disable these functions as well? A quick Stack Overflow search indicates it can be done, but most answers give the old "you really shouldn't, and if you think you need to, you should rethink your design" warning.
So, rethinking my design, I started learning about:
HTML5 local storage (decided I don't need it, since my data is already stored in an input box)
AppCache manifest, for controlling the cache of the page so that if it's reloaded in the browser offline it would get the cached version. After much reading I came to the conclusion that this could work for a static page, but not mine, where the data is updated all the time. Then I found that most browsers are deprecating this anyway.
Service workers seem to be the likely future for controlling offline caching, but not all browsers support them, they are pretty cumbersome to learn, and they are still very new.
Now I am stuck, leaning towards preventing browser reloads and deferring learning service workers until there is more support and better examples for dynamic content pages like mine.
Bottom line: am I missing something here? Is there an easy solution?
I think the best option is to use PouchDB to sync between the client and server, and use Background Sync to wake a service worker when you regain connectivity. If the Service Worker API is not present in the browser, it can sync the next time your user opens the browser.
There is a similar example of deferred requests explained in the Service Worker Cookbook.
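To make the Background Sync half of that concrete, here is a minimal sketch; the tag name, endpoint, and doSync helper are placeholders, PouchDB replication is omitted, and Background Sync isn't supported in every browser:
// In page code: ask the browser to fire a sync when connectivity returns.
navigator.serviceWorker.ready.then(function (registration) {
  if ('sync' in registration) {
    return registration.sync.register('sync-finish-times');
  }
});

// In the service worker: runs once the browser regains connectivity.
function doSync() {
  // Placeholder: push queued finish times to the server.
  return fetch('/sync-finish-times', { method: 'POST' });
}

self.addEventListener('sync', function (event) {
  if (event.tag === 'sync-finish-times') {
    event.waitUntil(doSync());
  }
});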
So I'm pretty new to using the ColdFusion Solr search (just moved from a CF8 Mac OS X server to a Linux CF9 server), and I'm wondering what the best way is to handle automatically updating the collections. I know scheduled tasks are meant for this, but I haven't been able to find any examples online.
I currently have a scheduled task set up to update all of the collections weekly by getting the list of collections and using the cfindex tag in a loop to run the refresh command. This is pretty processing-intensive, though, and takes about ten minutes to update the four collections I have set up so far. This works when I run it in the browser, but I get the error "The request has exceeded the allowable time limit Tag: CFLOOP" when I run the task from the scheduled task administration page.
Is there a better way to handle updating the collections? Would it be better if I made a task to update each collection individually?
Here's my update code.
<cfsetting requesttimeout="1800">
<cfcollection action="list" name="collections" engine="solr">
<cfloop query="collections">
<cfindex collection="#name#" action="refresh" extensions=".pdf, .html, .htm, .cfml, .cfm" type="path" key="/home/#name#/public_html/" recurse="yes">
</cfloop>
In earlier versions of ColdFusion there was a URL parameter that could be passed on any HTTP request to change the server's timeout for the requested page. You might have guessed from the scheduled task configuration that there's an HTTP request running your task, so it functions just like any other page. In those earlier versions you would have just added &requesttimeout=900 to the URL and that gave the server 15 minutes to process that task.
In later versions they realized that this URL parameter was a security risk but they needed a way to allow developers to declare that an individual HTTP request should still be allowed to take longer than the default page timeout set in the ColdFusion Administrator. So they moved it from the URL parameter to the <cfsetting> tag.
<cfsetting requesttimeout="900" />
You need to put the cfsetting tag at the top of the page, rather than putting it inside your loop, because it's resetting the total allowable time from the beginning of the request, not just since the last cfsetting tag. Ben Nadel wrote a blog article about that here: http://www.bennadel.com/blog/626-CFSetting-RequestTimeout-Updates-Timeouts-It-Does-Not-Set-Them.htm
I'm not sure if there's an upper limit to the request timeout. I do know that in the past when I've had a really long-running task like that, the server has gradually slowed down, in some cases until it crashed. I'm not sure I would expect reindexing Solr collections to degrade performance so badly; I think my tasks were doing some other things that were probably hogging memory at the time. Anyway, if you run into that issue, you may need to divide it up into separate tasks for each collection and just make sure there's enough time between the tasks to allow each one to complete before the next one starts.
EDIT: Oops! I don't know how I missed the cfsetting tag in the original question. D'oh! In any event, when you execute a scheduled task via the CF Administrator, it performs a cfhttp request to execute the task. This is the way scheduled tasks are normally executed, and I suspect it's so the task can execute inside your own application scope, but the effect is that there are two separate requests executing. I don't think there's a cfsetting tag in the CFIDE page, but I suspect a person could add one if they wanted to allow that page longer to wait for the task to complete.
EDIT: Okay, if you wanted to add the cfsetting in the CFIDE, you would first have to decrypt the template and then add your one line of code... which might void your warranty on the server, but is probably not dangerous. ;) For decrypting the template see: Can I get the source of a hacked Coldfusion template? - and the template to edit is /CFIDE/administrator/scheduler/scheduletasks.cfm.
I have a flex/LCDS stack, where I'm finding that after logout, I often (but not always) start receiving Duplicate HTTP Session errors on the client.
Here's the important facts of the stack:
The flex client has a login/logout functionality within the app. The page does not refresh after the logout. (Therefore, the Flex app, and the underlying mx.messaging.FlexClient remains initialised)
A user may have multiple tabs open.
per-client-authentication is set to false; we're trying to achieve SSO (integrating with CAS) so the user principal is bound to the JSession.
The problem is most evident when using long-polling for messaging, and when there are two (or more) tabs open.
The problem is very difficult to reproduce when using RTMP or Streaming channels.
A user is bound to a JSession - ie., if they log in on Tab1, they become logged in on Tab2.
When a user logs out from either tab, the JSession is invalidated.
Here's my current theory as to what's causing the issue:
Tab1 (T1) Starts client -> Issued ClientId1 (C1) -> JSession1 (J1) created
Tab2 (T2) Starts Client -> Issued ClientId2 (C2) -> Joins J1
T1 logs in -> J1 Unaffected
T2 logs in -> J1 Unaffected
T1 & T2 Both subscribe, start polling over amflongpolling
T1 sends logout -> J1 Invalidated -> J2 created
T2 sends poll (against J1)
T1 logout completes, returns with J2, updates cookie
The last two calls create a conflict, where LCDS sees that the FlexClient appears to be related to two JSessions.
As a result, an error along the lines of the following is received:
Server.Processing.DuplicateSessionDetected Detected duplicate
HTTP-based FlexSessions, generally due to the remote host disabling
session cookies. Session cookies must be enabled to manage the client
connection correctly.
Note: I've been able to recreate the problem in a stand-alone project. I believe it's not an issue with our application-specific code; instead it's caused by the stateful/session nature and conflicts between multiple tabs sharing the same session.
In summary, I believe the issue is caused where the session is invalidated on the server as a result of calls from one tab, but before the response is sent to the browser to inform it of the new JSession, calls are issued under the old Jsession.
What are some appropriate strategies to defend against this duplicate session issue?
Update
To clarify, while the scenario is similar to those discussed here, there are subtle differences which make the solutions in that article inappropriate.
Specifically, the article discusses preventing duplicate sessions by controlling the initial creation of JSessions across both browsers, using a JSP, or an orchestrated RemoteObject call.
Flex actually assists in this process by preventing outbound RemoteObject calls until the local FlexClient DSid variable is defined, showing that the initial session has been established.
My scenario differs, because the JSession (& associated LCDS FlexSession / Client-Side FlexClient objects) have already been established once, (using the techniques discussed in that article) and subsequently invalidated via logout - which calls session.invalidate() - destroying the JSession.
The issue arises when Tab2 sends a call with a stale JSession, causing a duplicate HTTP Session error. The situation then gets compounded: when LCDS throws the DuplicateHTTPSession error, it also invalidates all known JSessions attached to the client, meaning that Tab1, which had been fine, now has a stale JSession. The next time Tab1 sends a call, IT causes a DuplicateHTTPSession error, and the cycle repeats.
Unfortunately, the Flex framework hooks for delaying calls while sessions are established have no easy way (that I've found) of being re-enabled once set. I've tried the following, to no avail:
// Reset DSid to get a new FlexSession established on LCDS
use namespace mx_internal
public function resetFlexSession()
{
FlexClient.getInstance().id = null;
// Note - using FlexClient.NULL_ID also doesn't work.
}
I feel for you - I've fought this for a long time and never found a solution, but found a fix that worked for me so hopefully it will at least keep this issue under control until you can find the culprit. (And if you do, please post it here).
Now, I've got a slightly different environment than you (I'm using CF on the backend) so keep that in mind.
I also tried the whole "FlexClient.getInstance().id = null;" thing, and it didn't work by itself, but it was how and where I implemented it that made it work.
So, this is what I did that made the problem go away.
On my main form, before ANY RemoteServer calls are made, I setup a creationComplete handler and placed this code you already know and love:
// Not sure if this is needed anymore, but I'm leaving it in
FlexClient.getInstance().id = null;
Next, in my very first call to the server, I gracefully handle the failure, and clear that stinking ID out again:
public function login(event:Event): void {
Swiz.executeServiceCall(roUsers.login(),
function (event:ResultEvent): void {
// Handle a successful login here...
}
, function (faultevent:FaultEvent): void {
// This code fixes this issue with IE tabs dying and leaving Flex with a Duplicate Session problem.
if (faultevent.fault.faultString.indexOf("duplicate") !== -1) {
FlexClient.getInstance().id = null;
Swiz.dispatchEvent(event);
}
});
}
And it worked.
Basically, try the call, and if it fails for the duplicate session thing, then clear out that ID and reissue the call.
The key point being that I don't think clearing out the ID works until you've made at least one call to the server. Once you do, it worked like a CHARM for me, and in all of my apps.
Note that I'm using the SWIZ framework above so just translate it to your own world.
By the way, I've never seen this error in any other browser but IE, and I believe it may have something to do with the infamous Dead Tab Issue that IE suffers from.
If the above doesn't work, I also know of a few changes to some config files on the server that might help.
Good luck my friend!
This article titled, Avoiding duplicate session detected errors in LCDS, gives an in-depth explanation of what's happening in your situation. Here is a relevant quote:
...[LCDS] believes that the FlexClient it received the request from was already
associated with a different session on the server.
For the client application to make sure that FlexClients in the
application don’t get into this bad state, the client application must
make sure that a session is already established on the server before
multiple FlexClients connect to the server at the same time.
There are several approaches recommended to fixing this, including:
calling a jsp page to load the application
"The jsp page could both create a session for the client application and return an html wrapper to the client which would load the swf."
calling a Remoting destination
"which would automatically create a session for the client application on the server"
An additional, unrelated cause to be aware of:
Some browsers (Internet Explorer, for example) apply domain naming rules to cookies, and this means that a domain like "my_clientX.server.com", although it may return valid BlazeDS responses, will continually trigger duplicate session notifications, as access to the cookie will be blocked.
Changing the name to a valid name (without underscore) will resolve the issue.
Using ASP.NET Web Forms, I want to track visitor info just like Google Analytics does. Of course, I could use Google Analytics for this purpose, but I want to know how I can achieve the same thing with ASP.NET 3.5 and SQL Server 2008.
I want to store the visitor's IP, country, URL referrer, and resolution on each page request except postbacks. I am expecting 50k+ visits every day.
Main concern is I want to do it in a way that it should not block current request.
i.e., in general, when we save data into the db, the current request stops on the particular SP-calling statement and moves ahead only when it finishes executing the SP or T-SQL statement. I want to follow an "insert and forget" approach: it should insert in the background when I pass parameters to a particular event or function.
I found below alternatives for this :
1. PageAsynchTask
2. BeginExecuteNonQuery
3. jQuery post method and a web service (but I am not confident about this, and am wondering how I should go about it)
I hope I've mentioned my problem properly.
Can anybody tell me which one is the better approach? Also let me know if you have any other ideas or a better approach than those listed. Your help will be really appreciated.
The problem with any background thread on the server side is that each and every request is going to occupy two threads: one for serving the ASP.NET request and one for logging the stuff you want to log. So you end up having scalability issues due to exhaustion of ASP.NET threads. And logging each and every request in a database is a big no-no.
The best approach is to just write to log files using some high-performance logging library. Logging libraries are highly optimized for multi-threaded logging; they don't produce I/O calls on each and every write. Logs are stored in a memory buffer and flushed periodically. You should use EntLib or log4net for logging.
You can use an HttpModule that intercepts each and every GET and POST, and then inside the HttpModule you can check whether the Request.Url is an aspx or not. Then you can read Request.Headers["__ASYNCPOST"] and see if it's "true", which means it's an UpdatePanel async update. If all these conditions are true, you just log the request into a log file that stores the details you want to record.
You can get the client IP from:
HttpContext.Current.Request.UserHostAddress;
or
HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"];
To get the IP address of the machine and not the proxy, use the following code:
HttpContext.Current.Request.ServerVariables["HTTP_X_FORWARDED_FOR"];
However, you cannot get the country directly. You will have to log the IP in your log files and then process the log files using some console application or job that resolves the country from the IP. You need an IP-to-country database to do the job; I have used http://www.maxmind.com/app/geoip_country before.
For screen size, you will have to rely on some JavaScript. Use a script on each page that finds the size of the screen on the client side and stores it in a cookie.
var screenW = 640, screenH = 480;
if (parseInt(navigator.appVersion)>3) {
screenW = screen.width;
screenH = screen.height;
}
else if (navigator.appName == "Netscape"
&& parseInt(navigator.appVersion)==3
&& navigator.javaEnabled()
)
{
var jToolkit = java.awt.Toolkit.getDefaultToolkit();
var jScreenSize = jToolkit.getScreenSize();
screenW = jScreenSize.width;
screenH = jScreenSize.height;
}
Once you store it in a cookie (a minimal sketch of that step is below), you can read the screen dimensions from the HttpModule by using Request.Cookies and then log them in the log file.
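The cookie-writing step could look roughly like this, appended to the script above; the cookie name screenSize is an assumption for illustration:
// Store the detected dimensions so the server can read them on the next request.
document.cookie = "screenSize=" + screenW + "x" + screenH + "; path=/";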
So, this gives you a solution for logging the IP and screen size, finding the country from the IP, and filtering UpdatePanel async postbacks out of the logging.
Does this give you a complete solution to the problem?
Talking about the server side: if you're running on IIS and you don't need absolute real-time information, I recommend you use the IIS logs.
There is nothing faster than this, as it's been optimized for performance since IIS 1.0.
You can append your own information to these logs (HttpRequest.AppendToLog), they have a standard format, there is an API if you want to do custom things with them (but you can still use a text parser if you prefer), and there are lots of free tools, for example Microsoft Log Parser, which can transfer the data into a SQL database (among other destinations).
The first approach is looking good. (And I recommend it.) But it has 2 downsides:
Request will still block until task completes (or aborts on timeout).
You'll have to register your Task on every page.
The second approach looks inconvenient and might lead to errors. (You have to watch for a situation when your page renders faster than your query is processed. I'm not sure what will happen if your query is not complete when your page object is destroyed and GC goes on finalize() spree... but nothing good, I assume. You can avoid it by waiting for IAsyncResult.IsCompleted after render, but that's inconvenient.)
The third method is plain wrong. You should initiate your logging on the server side while processing the request you're going to log. But you can still call a web service from the server side (or a Windows service).
Personally, I'd like to implement logging in BeginRequest to avoid code duplication, but you need IsPostback... Still there might be a workaround.
Hi,
You can fire an asynchronous request and not wait for the response.
Here is some code I have implemented.
For that, you need to create a web service to do your database operation, or you can use it for your whole event handling.
From the server side you have to call the web service asynchronously, like this:
Declare a private delegate:
private delegate void ReEntryDelegate(long CaseID, string MessageText);
Now the method will contain the web service call, like this:
WebServiceTest.Notification service = new WebServiceTest.Notification();
IAsyncResult handle;
ReEntryDelegate objAscReEntry = new ReEntryDelegate(service.ReEntryNotifications);
handle = objAscReEntry.BeginInvoke(CaseID, MessageText, null, null);
The variable values (CaseID, MessageText) are passed in by the calling method.
Hope this is clear to you
All the Best
I have an Ajax request to a web service that typically takes 30-60 seconds to complete. In some cases it could take as long as a few minutes. During this time the user can continue working on other tasks, which means they will probably be on a different page when the task finishes.
Is there a way to tell that the original request has been completed? The only thing that comes to mind is to:
wrap the web service with a web service of my own
use my web service to set a flag somewhere
check for that flag in subsequent page requests
Any better ways to do it? I am using jQuery and ASP.Net, if it matters.
You could add another method to your web service that allows you to check the status of a previous request. Then you can use ajax to poll the web service every 30 seconds or so. You can store the request id or whatever in Session so your ajax call knows what request ID to poll no matter what page you're on.
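A rough jQuery sketch of that polling loop, assuming a hypothetical CheckStatus endpoint and a request id handed back when the task was started:
var currentRequestId = 'abc123'; // placeholder: id returned when the long task was started

var pollTimer = setInterval(function () {
  $.ajax({
    url: '/TaskService.asmx/CheckStatus', // hypothetical status method
    data: { requestId: currentRequestId },
    success: function (result) {
      // The response shape is assumed; adapt to whatever your service returns.
      if (result.completed) {
        clearInterval(pollTimer);
        // Notify the user, on whatever page they happen to be on.
        alert('Your long-running request has finished.');
      }
    }
  });
}, 30000); // poll every 30 seconds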
I would say you'd have to poll once in a while to see if the request has ended and show some notification, like this site does with badges, for example.
First, make your request return immediately with something like "Started processing...". Then use a different request to poll for the result. It is not good for either the server or the client's browser to have long-open HTTP sessions. Moreover, the user should be informed and educated that they are starting a request that could take some time to complete.
To display the result you could have a "notification area" in all of your web pages. Alternatively you could have a dedicated page for this and instruct the user to navigate there. As others have suggested, you could use polling to get the result.
You could use frames on your site, and perform all your long AJAX requests in an invisible frame. Frames add a certain level of pain to development, but might be the answer to your problems.
The only other way I could think of doing it is to actually load the other pages via an AJAX request, such that there are no real page reloads - this would mean that the AJAX requests aren't interrupted, but may cause issues with breaking browser functionality (back/forward, bookmarking, etc).
Since web development is stateless (you can't set a trigger/event on the server to update the client), the viable strategy is to set up a status function that you can call intermittently using a JavaScript timer to check whether your code has finished executing. When it finishes, you can update your view.