Firebase-Queue Graceful Shutdown on GCE

This is a design question about the handling of tasks during the shutdown of a firebase-queue based app running on Google Compute Engine.
The use case I am working with is automatically scaling queue-workers depending on the load at any given time. Specific to our project is the fact that our tasks are long-running.
In an ideal world, the queue worker would have an opportunity to finish its current tasks before the virtual machine running the worker is terminated. We are working with Google Compute Engine instance groups to handle the scaling of our queue-worker app. Firebase-queue does provide a promise-based method to shut down a queue worker (i.e. queue.shutdown()). This stops the worker from accepting new tasks and allows running tasks to finish before the promise resolves.
The problem I am facing is how to let the queue worker shut down gracefully prior to instance termination (the same problem would occur during a rolling update). One option is to trigger the worker shutdown and have the worker trigger instance shutdown in turn, but this does not seem like the best design, because control is taken away from whatever service triggered the scale-down in the first place.
GCE does provide a mechanism to run a shutdown script prior to instance termination; however, it forcefully terminates the instance after about 90 seconds, which does not work for us.
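For concreteness, here is the drain-then-exit shape the worker needs, sketched in Java since the pattern is language-agnostic (the actual worker is Node with firebase-queue, where queue.shutdown() plays the role of the drain step; all names below are illustrative):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class GracefulWorker {
        public static void main(String[] args) {
            ExecutorService worker = Executors.newFixedThreadPool(4);

            // GCE signals the instance before terminating it; a JVM shutdown
            // hook is the closest analog of handling that signal here.
            Runtime.getRuntime().addShutdownHook(new Thread(() -> {
                worker.shutdown(); // stop accepting new tasks
                try {
                    // The crux of the problem: GCE only waits ~90 seconds,
                    // but our tasks run longer than that.
                    if (!worker.awaitTermination(90, TimeUnit.SECONDS)) {
                        System.err.println("Tasks still in flight at forced termination");
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }));

            // ... pull long-running queue tasks and submit them to `worker` ...
        }
    }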
I am interested in design ideas / patterns to follow here. Any help is much appreciated.

Related

Airflow resource pool usage on DAG-level?

I'm looking at using Airflow for scheduling test-case execution against shared hardware in a lab, and I have some best-practice questions on how to use the resource pool concept for a whole DAG instance instead of just at task level.
Basically, a test case (executed as an instance of a test-case DAG: deploy/execute/collect/un-deploy) needs certain physical resources, and should therefore request them from the different resource pools (modelling the physical resources) in order not to run into conflicting concurrent usage with other triggered DAG instances.
My question is whether it's possible to define resource usage at the DAG-instance level, or only at task level. If the latter, would one parallel task claiming the resource for the whole DAG-instance execution be the best way to avoid having to pass the resource claim between all tasks in the DAG? Other alternatives?
Update after questions from Viraj and dlamblin:
Running 1.10.1
Running LocalExecutor
Have verified that I can run parallel DAGS with concurrent tasks
The resources I want custom pools for are not worker resources, but rather different peripheral hardware units such as relays, routers, etc. that the tasks running in parallel on the LocalExecutor should block on if they are occupied (zero custom resource pool instances left) by one or more other tasks; see the sketch below.
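In other words, each custom pool should behave like a counting semaphore sized to the number of physical units. A purely illustrative sketch in Java (nothing here is Airflow API; pools would be configured in Airflow itself):

    import java.util.concurrent.Semaphore;

    // Illustrative only: the blocking semantics wanted from a custom pool
    // amount to a counting semaphore sized to the number of physical units.
    public class RelayPool {
        private final Semaphore slots;

        public RelayPool(int units) {
            this.slots = new Semaphore(units, true); // fair: first come, first served
        }

        public void runWithRelay(Runnable testCase) throws InterruptedException {
            slots.acquire(); // block while all units are occupied by other DAG runs
            try {
                testCase.run(); // deploy / execute / collect / un-deploy
            } finally {
                slots.release(); // free the unit for the next DAG run
            }
        }
    }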
The Kubernetes Executor allows certain node-type affinity to be configured at the task or DAG level. The Celery Executor has a queue concept for selecting from a worker group with certain resources available to its workers. You're probably not using a LocalExecutor, as your question doesn't quite make sense for that case.

Why do I not need to shutdown javafx.concurrent.Service

javafx.concurrent.Service internally uses a java.util.concurrent.ExecutorService to execute its Tasks. Instances of ExecutorService need to be shut down after usage. This does not seem to be the case for javafx.concurrent.Service. How and when does javafx.concurrent.Service shut down its ExecutorService?
I think your misunderstanding here comes from:
Instances of ExecutorService need to be shut down after usage.
Calling shutdown() prevents an ExecutorService from accepting new tasks. You only need to do that if it makes sense to do so (e.g., you are trying to exit the application and want to make sure that any new tasks submitted during application exit are ignored; this might happen with a scheduled executor service, for example).
The related shutdownNow() will additionally attempt to interrupt any currently running threads. So if your tasks are implemented to accept interrupts gracefully, calling shutdownNow() gives those tasks the opportunity to perform any cleanup operations necessary (closing connections, etc).
In many use cases, however, there is no need to call either of these methods. If you are assured no further tasks will be submitted to the executor, shutdown() is unnecessary. If your tasks are not long-running, shutdownNow() is unnecessary. When the application attempts to exit, any existing tasks will complete (presumably reasonably quickly) and then the application can exit.
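To make the distinction concrete, here is the standard drain-then-interrupt sequence as a minimal, self-contained example:

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ShutdownDemo {
        public static void main(String[] args) throws InterruptedException {
            ExecutorService executor = Executors.newFixedThreadPool(2);

            executor.submit(() -> {
                try {
                    Thread.sleep(10_000); // simulate long-running work
                } catch (InterruptedException e) {
                    // shutdownNow() interrupts us: clean up and return promptly
                    System.out.println("Interrupted, cleaning up");
                }
            });

            executor.shutdown(); // no new tasks accepted; running tasks continue
            if (!executor.awaitTermination(2, TimeUnit.SECONDS)) {
                // Interrupt anything still running; returns tasks that never started.
                List<Runnable> neverStarted = executor.shutdownNow();
                System.out.println(neverStarted.size() + " queued tasks never started");
            }
        }
    }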
Note that if your executor service uses daemon threads, then when the application attempts to exit, those threads will not prevent application exit (they will be terminated, without interruption). So if your tasks are short-lived, require no cleanup, and can be safely terminated at any point, this is a viable strategy.
There is nothing special about javafx.concurrent.Service here: in some sense it is a wrapper for an ExecutorService that provides additional functionality for interacting with the FX Application Thread. Just note that the default executor service provided uses daemon threads, so as above, if your tasks need cleanup you would likely provide a different executor service and shut it down gracefully in the application's stop() method.
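As a sketch of that last point (assuming your tasks do need cleanup): supply your own executor via setExecutor() and shut it down in the application's stop() method. The class and task here are illustrative:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javafx.application.Application;
    import javafx.concurrent.Service;
    import javafx.concurrent.Task;
    import javafx.stage.Stage;

    public class ServiceCleanupApp extends Application {
        // A non-daemon executor: tasks can finish their cleanup, but we must
        // shut it down ourselves or the JVM will not exit.
        private final ExecutorService executor = Executors.newFixedThreadPool(2);

        private final Service<String> service = new Service<String>() {
            @Override
            protected Task<String> createTask() {
                return new Task<String>() {
                    @Override
                    protected String call() throws Exception {
                        Thread.sleep(5_000); // long-running work that honors interruption
                        return "done";
                    }
                };
            }
        };

        @Override
        public void start(Stage stage) {
            service.setExecutor(executor); // replace the default daemon-thread executor
            service.start();
            stage.show();
        }

        @Override
        public void stop() {
            // Interrupt running tasks so they can clean up before the JVM exits.
            executor.shutdownNow();
        }
    }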

How to deploy a socket server in iis application scope

I am implementing an ASP.NET application that needs to service conventional HTTP requests, but the responses require data that I must acquire from providers: external executables that serve their data over sockets. My plan was:
1) In Application_Start, start a new thread that starts a socket server
2) In Session_Start, launch the session-specific process that will ultimately connect to the socket server, and from there do a Monitor.Wait on a session-specific lock object which I've stored in Application.Contents by Session key
3) When the socket server sees a new connection, make the data available to the appropriate session Contents and do a Monitor.Pulse on the session-specific lock object
Is this technically feasible in IIS? Can this concept function as a stable system?
Before answering, please bear in mind I am not asking "is this the recommended approach", I am aware it is not and if I had the option to write this system from scratch I would do this differently. I'm also not able to change the fact that the programs communicate using sockets.
Given the constraints this approach makes sense.
Shutdown and recycling of IIS worker processes are always thorny issues when it comes to keeping state in a web app. Note that your worker process can recycle at pretty much any time, for many reasons. Some of those reasons are unavoidable: a server reboot, an app deployment, a bug leading to a process crash. So you need to think through what happens in those cases: all sessions will be lost while the child processes still run. Suggested solution: add the children to a Windows Job Object and configure the Job to be killed when the parent exits.
With overlapped IIS worker recycling you can have two functioning workers running at the same time. You must deal with that possibility.
Consider the possibility that the child process immediately crashes. It will never make a connection. Make sure your app doesn't hang waiting for the connection forever.
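To make the handshake in steps 2 and 3 concrete (with the wait bounded, per the point above), here is the pattern in Java terms; C#'s Monitor.Wait/Pulse maps onto intrinsic-lock wait/notify, and every name below is illustrative rather than part of any real API:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class SessionRendezvous {
        private final Map<String, Object> locks = new ConcurrentHashMap<>();
        private final Map<String, byte[]> data = new ConcurrentHashMap<>();

        private Object lockFor(String sessionKey) {
            return locks.computeIfAbsent(sessionKey, k -> new Object());
        }

        // Step 2: the session thread waits for the provider's data.
        public byte[] awaitData(String sessionKey, long timeoutMillis) throws InterruptedException {
            Object lock = lockFor(sessionKey);
            synchronized (lock) {
                long deadline = System.currentTimeMillis() + timeoutMillis;
                while (!data.containsKey(sessionKey)) {
                    long remaining = deadline - System.currentTimeMillis();
                    if (remaining <= 0) return null; // don't wait forever: the child may have crashed
                    lock.wait(remaining);
                }
                return data.remove(sessionKey);
            }
        }

        // Step 3: the socket-server thread publishes data and wakes the waiter.
        public void publish(String sessionKey, byte[] payload) {
            Object lock = lockFor(sessionKey);
            synchronized (lock) {
                data.put(sessionKey, payload);
                lock.notifyAll();
            }
        }
    }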

Impact when changing instance count for asp.net web role in Azure

I'm having no luck trying to find out how changing the instance count for an ASP.NET web role affects requests currently being processed.
Here's the scenario:
An ASP.Net site is deployed with 6 instances
Via the console I reduce the instance count to 4
Is Azure smart enough not to remove instances from the pool while they are processing requests, or does it just kill them mid-request?
I've been through the Azure documentation, Google, and a number of emails to MS tech support, none of which answered this seemingly simple question. I know about the events that get triggered by a shutdown, etc., but that doesn't really help in a web-site scenario with a live person waiting for the response to their request.
You cannot choose which instances to kill off. Primarily this is due to Windows Azure's instance allocation scheme, where your instances are split into different fault domains (meaning different areas of the data center - different rack, etc.). If you were to choose the instances to kill, this could leave you in a state where your remaining instances are in the same fault domain, which would void the SLA.
Having said that: You get an event when your role instance is shutting down (the OnStop() event). If you capture this event, you can do instance cleanup in preparation for VM shutdown. I can't recall if you're taken out of the load balancer at this point, but you could always force yourself out with a simple PowerShell command (Set-RoleInstanceStatus -Busy). This way your asp.net instance stops taking requests, and you can more easily shut down in a graceful manner.
EDIT: Sorry - didn't quite address all of your question. Since you get to capture OnStop(), you might have to implement a mechanism to make sure nothing's being processed in that instance. Since you're out of the load balancer, and assuming your requests are processed fairly quickly (2-5 seconds), you shouldn't have to wait long to clear out remaining requests. There's probably a performance counter to check, to see how many active requests are being handled.
Just to add to David's answer: the OnStop event happens when you are off the load balancer. For web apps, it is usually sufficient time to bleed out all requests after you are disconnected from the LB until the instance is shutdown. However, for long running or stateful connections (perhaps to a worker role), there would be an abrupt disconnect in some cases. While the OnStop method removes you from the LB, it does not terminate open connections. It simply prevents you from getting new connections. For web apps, this is usually enough time to complete the request (and you can delay the shutdown if necessary in the OnStop as well if you really want to).
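A minimal sketch of the "make sure nothing's being processed" idea from the answers above, written in Java since the pattern is language-agnostic; none of these names are Azure APIs:

    import java.util.concurrent.atomic.AtomicInteger;

    // Illustrative only: track in-flight requests so a stop handler can
    // drain them after the instance leaves the load balancer.
    public class InFlightTracker {
        private final AtomicInteger inFlight = new AtomicInteger();

        public void begin() { inFlight.incrementAndGet(); } // call at request start
        public void end()   { inFlight.decrementAndGet(); } // call at request end

        // Called from the shutdown path (OnStop in the Azure case) once the
        // instance is no longer receiving new connections.
        public boolean drain(long timeoutMillis) throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            while (inFlight.get() > 0) {
                if (System.currentTimeMillis() > deadline) return false; // gave up
                Thread.sleep(100);
            }
            return true; // nothing left in flight; safe to shut down
        }
    }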

Can asp.net shut down an application-created thread?

The thread will be started on each Application_Start event.
It will be a monitoring thread which is supposed to run constantly.
So even if the app shuts down, once it is restarted the thread will start too, ensuring it runs all the time.
However I need to be sure that this thread will not be stopped / shut down while the application is running.
So, in a few words, does anybody know whether ASP.NET could shut down such a thread without actually stopping / recycling the application?
As a matter of design, you shouldn't depend on asp.net to run threads like this. Little things like app recycling can cause you a lot of trouble.
Instead, create a windows service to execute the thread. This way you don't have to worry about it.
Update
I just wanted to add a little more information.
IIS has the ability to execute your app across multiple threads and processes. A standard site installation usually has only a single worker process assigned (configuring more than one turns the app pool into a "web garden"), which spins up around 20 threads to handle request processing.
However, any IIS administrator can easily add more processes to the mix. They usually do this when a site can hose a single process: request processing takes too long, the number of handler threads isn't enough, or, as a temporary measure, the app has enough problems that a single bad request will hose the entire process fairly often.
If you spin up a thread on app start, then one will be created for each worker process the site has. This may be unexpected behavior to you or your successors.
Also, monitoring apps are almost always completely separate from the application they are monitoring. One of the primary reasons is that in the event the monitored process dies, hangs, or otherwise becomes unresponsive, the monitoring app itself still needs to carry on and log this information. Otherwise the monitored process could very well hose the monitoring app itself.
So, do yourself a favor and move this to its own process. The best way to do this on an IIS server is to create a Windows service and give it the appropriate execution rights to do what you need.
