Updating service workers on single page webapps

I work on a single page webapp and am implementing service workers. I've learned that I need to trigger service worker updates manually, because a single page app doesn't produce the traditional navigation events that would otherwise prompt them. However, I'm unclear about the roles of clients.claim() and self.skipWaiting().
Do I need to call these functions after manually updating the service worker? I've seen skipWaiting() called in the install event, and claim() called in the activate event handler. If I need either or both of them, are these the correct places to make these calls?

clients.claim() is usually called in the activate event; it takes control of uncontrolled clients once the service worker is activated.
It only really matters on the very first load, and thanks to progressive enhancement the page is usually working happily without the service worker anyway.
skipWaiting() causes your service worker to kick out the current active worker and activate itself as soon as it enters the waiting phase (or immediately, if it's already in the waiting phase). It doesn't cause your worker to skip installing, just waiting. It's pretty common to call it in the install event.
In a single page app, where a tab can stay open indefinitely, you'll want to call it when updating the service worker; otherwise the new worker sits in the waiting phase until every controlled client has closed.
For more information, you can check the service worker lifecycle documentation.
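As a minimal sketch of how the pieces fit together (the file names and the hourly interval are my assumptions, not from the question):

```
// sw.js - the service worker itself
self.addEventListener('install', (event) => {
  // Move past the waiting phase as soon as installation finishes,
  // instead of waiting for the old worker's clients to close.
  self.skipWaiting();
});

self.addEventListener('activate', (event) => {
  // Take control of already-open, uncontrolled pages (such as the
  // long-lived SPA tab) without waiting for a navigation.
  event.waitUntil(self.clients.claim());
});
```

And since an SPA rarely navigates, the page can poll for a new worker itself:

```
// app.js - check for an updated worker manually
navigator.serviceWorker.register('/sw.js').then((registration) => {
  // registration.update() re-fetches sw.js and installs it if it changed.
  setInterval(() => registration.update(), 60 * 60 * 1000); // hourly
});
```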

Related

Symfony app stuck when getting EntityManager

I am working on an app which has a web component (visited via a browser) and a background task-processing component, to which the web component delegates some long-running work.
I've just hit an issue where I refreshed my web browser only to find it loading indefinitely (first spotted in AJAX requests, but later in normal ones too).
The cause was not readily apparent, but as soon as I shut down the background Symfony command, which also uses the EntityManager, the browser got unblocked and proceeded with the request.
My app uses RabbitMQ to store job requests, which are published by the web component. The Symfony command uses the same "backbone" to create a RabbitMQ consumer and consume those jobs.
I tried, without any result:
Restarting Apache
Restarting RabbitMQ
Purging RabbitMQ queue
Using different EntityManagers for web and command
I use OldSoundRabbitMqBundle (link) to facilitate communication between those two.
The web component gets stuck regardless of the action being called (it is not related to the RabbitMQ producer).
Has anyone stumbled upon a similar issue?
This happens on a dev box; I haven't gotten around to giving it a spin on a production server, nor would I until I find out more about this.
It would seem that I misused the locking mechanism in Postgres. The task-processing component is indeed a long-running task, but given that it is a Symfony command, the Doctrine connection is established as early as possible and stays open.
Now comes the tricky part: I used the LOCK TABLE statement to lock some tables against concurrent access (EXCLUSIVE mode). Because I never closed the connection (as opposed to the entity manager), those locks were left intact until I restarted the command (every 10th task).
This was the root cause.
I am still investigating some edge cases, but since I moved to advisory locking, I have had no more lock-ups.
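For reference, a minimal sketch of the advisory-lock pattern. The original code is a Symfony command using Doctrine, so this illustration uses node-postgres instead, and LOCK_KEY and processJob are invented names:

```
const { Client } = require('pg');

const LOCK_KEY = 42; // application-chosen lock id

async function processJob(client, job) {
  // Blocks until the lock is free. Unlike LOCK TABLE, a session-level
  // advisory lock is not released at transaction end; it lives until
  // unlocked or the session closes, so it must be released explicitly.
  await client.query('SELECT pg_advisory_lock($1)', [LOCK_KEY]);
  try {
    // ... the work that previously ran under LOCK TABLE ... EXCLUSIVE ...
  } finally {
    // Release even on failure, so a long-running consumer never holds
    // the lock indefinitely (the failure mode described above).
    await client.query('SELECT pg_advisory_unlock($1)', [LOCK_KEY]);
  }
}
```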

Thread execution

I have a web application that checks user accounts in the database to determine their source. I want to make sure that the thread that checks the database runs first, without being subject to the WebSphere server's scheduling.
More Clarification:
Even if I arrange for the method to run first, it takes time to gather all the information, so I want to make sure that the thread finishes getting everything from the database before the server proceeds with other threads.
Have you tried using javax.servlet.ServletContextListener.contextInitialized?
Note that the JavaDoc states "All ServletContextListeners are notified of context initialization before any filters or servlets in the web application are initialized."

With Meteor, how do I run a singleton that updates periodically while clients are connected?

I'm just getting started with Meteor and I have a REST API hooked up with publish/subscribe that can periodically update per client. How do I run this behavior once globally, and only refresh as long as a client is connected?
My first use case is periodically refreshing content while clients are active. My second use case is having some kind of global lock to make sure a task is only happening once at a time. I'm trying to use Meteor to make a deployment UI and I only want 1 deployment to happen at once.
publish/subscribe will automatically run only while clients are connected. However, do not put any functionality whose number of executions you need to control inside publish or subscribe functions; they might run an arbitrary number of times.
If you want some command to be executed on behalf of a client, use Meteor.methods on the server side and call it explicitly with Meteor.call from a client template event.
To make sure that only one deployment happens at any given time, the simplest way would be to create another collection, called, for example, CurrentDeployments. Any time the deployment function in Meteor.methods is executed, check with CurrentDeployments.findOne whether there is an ongoing deployment, and only start a new one if none is running.
As a side bonus, subscribe to CurrentDeployments on the client to disable the 'deploy' button in case one is already running; a sketch of the whole pattern follows.
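A minimal server-side sketch under assumptions: the collection, method, and publication names are invented for illustration.

```
// server/deployments.js
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

export const CurrentDeployments = new Mongo.Collection('currentDeployments');

Meteor.methods({
  'deployments.start'(target) {
    // Refuse to start a second deployment while one is running.
    if (CurrentDeployments.findOne({ running: true })) {
      throw new Meteor.Error('deploy-in-progress', 'A deployment is already running.');
    }
    const deploymentId = CurrentDeployments.insert({
      target,
      running: true,
      startedAt: new Date(),
    });
    try {
      // ... perform the actual deployment work here ...
    } finally {
      // Clear the flag even if the deployment throws.
      CurrentDeployments.update(deploymentId, {
        $set: { running: false, endedAt: new Date() },
      });
    }
  },
});

// Publish running deployments so clients can disable their 'deploy' button.
Meteor.publish('currentDeployments', function () {
  return CurrentDeployments.find({ running: true });
});
```

A client would then call Meteor.call('deployments.start', 'production') from a template event handler. Note that the findOne-then-insert check is not atomic across concurrent calls; a unique index or an upsert on a singleton document would make the lock airtight.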

How to know an application is available?

When I use Cloudify (2.7) to deploy an application (e.g. an application that includes two services, A and B), I try to use Admin.addEventListener() to add some event listeners, but it doesn't work!
I tried adding a ProcessingUnitStatusChangedEventListener; when I debug the code, the value of ((ProcessingUnitStatusChangedEvent) event).getNewStatus() changes from SCHEDULED to INTACT, then SCHEDULED, then INTACT again.
I also tried adding a ProcessingUnitInstanceLifecycleEventListener; when I debug the code, the status is INTACT, but the service is not available!
Is there any other listener or method to know that the application (not the services) is available, or am I using the listeners the wrong way?
First, the Admin API is internal - use it at your own risk. And you should not be using it the way you are - Cloudify adds a lot of logic on top of the internal Admin API.
Second, it is not exactly clear where you are executing your code from.
You can always use the rest client to get an accurate state of the application. Look at https://github.com/CloudifySource/cloudify/blob/master/rest-client/src/main/java/org/cloudifysource/restclient/RestClient.java#L388
In addition, if you are running this code in a service lifecycle event handler, the easiest way to implement this is to have your 'top' level service, the one that should be available last, write an application entry to the shared attributes store in its 'postStart' event. Everyone else can just periodically poll on this entry. The polling itself is very fast, all in-memory operations.
If you do not have a top-level service, or your logic is more complicated than that, you would need to use the Service Context API to scan each service and its instances to see if they are up. An explanation of getting service instance state is available here:
cloudify service dependsOn other service

Hosting WF as Windows Service

I am trying to construct a simple Windows Workflow to monitor a directory for inbound files and do some DB updates, using Windows WF 4.0. Currently I am planning to build a 'WCF Workflow Service' and host it as a 'Windows service' running 24/7 (with a daily service shutdown and startup).
Further in the future I am planning to consume this service using an ASP.NET/WPF application to create a basic dashboard kind of stuff.
Considering the idea of polling a directory for files with WF hosted in a Windows service, does it seem like a good idea? What are the cons of this?
Please advise whether there are any drawbacks, or whether this can be achieved by better means.
I'm actually doing this, but it is a bit more complex than you think, and should be avoided if possible.
You should not be blocking from within an Activity; if it is expected to be a long-running Activity that is waiting for input from the outside (a FileSystemWatcher event, for instance), the workflow should idle itself and wait to be woken from the outside.
How I did this was I created a workflow extension that hosted the FileSystemWatcher. Once the Activity was ready to watch for a file, it created a bookmark and passed it to the extension.
The extension then started the FSW, holding onto the bookmark.
When a FSW event was fired, the extension resumed the bookmark, passing in an object that contained details about the event. The Activity did what was needed with the event, then re-scheduled itself.
Normally I wouldn't have done this, but I had some requirements that forced me to use WF4 to accomplish this goal. If I didn't have to use WF4, I would have just spun up the FSW within the service and consumed the events.
Unless you expect to have to be very flexible with your configuration detailing what you do with the FSW event, and expect this to change relatively often during deployment of the service, I'd skip WF4.