Embedded Hazelcast cluster occasionally breaks for no apparent reason

The Hazelcast cluster is embedded in an application running on Kubernetes. I can't see any traces of partitioning or other problems in the logs. At some point, this exception starts to appear in the logs:
hz.dazzling_morse.partition-operation.thread-1 com.hazelcast.logging.StandardLoggerFactory$StandardLogger: app-name, , , , , - [172.30.67.142]:5701 [app-name] [4.1.5] Executor is shut down.
java.util.concurrent.RejectedExecutionException: Executor is shut down.
at com.hazelcast.scheduledexecutor.impl.operations.AbstractSchedulerOperation.checkNotShutdown(AbstractSchedulerOperation.java:73)
at com.hazelcast.scheduledexecutor.impl.operations.AbstractSchedulerOperation.getContainer(AbstractSchedulerOperation.java:65)
at com.hazelcast.scheduledexecutor.impl.operations.SyncBackupStateOperation.run(SyncBackupStateOperation.java:39)
at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:184)
at com.hazelcast.spi.impl.operationexecutor.OperationRunner.runDirect(OperationRunner.java:150)
at com.hazelcast.spi.impl.operationservice.impl.operations.Backup.run(Backup.java:174)
at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:184)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:256)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:237)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:452)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:166)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:136)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
I can't see any particular operation failing prior to that. I do run some scheduled operations myself, but they execute inside try-catch blocks and never throw.
The consequence is that whenever a node in the cluster restarts, no data is replicated to the new node, which eventually renders the entire cluster useless - all the data that is supposed to be cached and replicated among the nodes disappears.
What could be the cause? How can I get more details about what causes the executor Hazelcast uses to shut down?

Based on other conversations...
Your Runnable / Callable should implement HazelcastInstanceAware.
Don't pass the HazelcastInstance or IExecutorService as a non-transient argument... as the instance where the runnable is submitted will be different from the one where it runs.
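As a concrete illustration (the class name, map name, and schedule below are hypothetical), a task submitted to Hazelcast's IScheduledExecutorService would implement HazelcastInstanceAware and keep the injected instance in a transient field, so no HazelcastInstance is ever serialized with the task:

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastInstanceAware;
import java.io.Serializable;

public class CacheRefreshTask implements Runnable, Serializable, HazelcastInstanceAware {

    // Injected on the member that executes the task; transient so it is never serialized.
    private transient HazelcastInstance hazelcastInstance;

    @Override
    public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
        this.hazelcastInstance = hazelcastInstance;
    }

    @Override
    public void run() {
        try {
            // Use the locally injected instance, never one captured at submission time.
            hazelcastInstance.getMap("someCache").evictAll();
        } catch (Exception e) {
            // Never let the scheduled task throw out of run().
        }
    }
}

The task is then scheduled as usual, e.g. scheduledExecutor.scheduleAtFixedRate(new CacheRefreshTask(), 0, 1, TimeUnit.MINUTES).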

Related

CloudFoundry App instances - EF Core database migration

I've written a .NET Core REST API that migrates/updates the database (using Entity Framework Core) in Startup.cs. Currently, only one instance is running in the production environment, but it seems to be recommended to run 2 instances in production.
What happens while executing the cf push command? Are both instances stopped automatically or do I need to execute cf stop?
In addition, how do I prevent both instances from updating the database?
I've read about the CF_INSTANCE_INDEX environment variable. Is it OK to only start the database migration when CF_INSTANCE_INDEX is 0? Or does Cloud Foundry provide the following mechanism: start the first instance and, once it is up and running, start the second instance?
What happens while executing the cf push command? Are both instances stopped automatically or do I need to execute cf stop?
Yes, your app will stop. The new code will stage (i.e. buildpacks run) and produce a droplet. Then the system will bring up all the requested instances using the new droplet.
In addition, how do I prevent both instances from updating the database? I've read about the CF_INSTANCE_INDEX environment variable. Is it OK to only start the database migration when CF_INSTANCE_INDEX is 0?
You can certainly do it that way. The instance number is guaranteed to be unique and the zeroth instance will always exist, so if you limit to the zeroth instance then it's guaranteed to only run once.
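The question is about a .NET Core app, but the gating idea itself is language-agnostic; as a rough sketch (the class and the two method names are hypothetical placeholders), it boils down to reading the environment variable before running the migration:

public class MigrationGate {

    public static void main(String[] args) {
        // CF_INSTANCE_INDEX is set by Cloud Foundry for each app instance;
        // instance 0 is guaranteed to exist, so only it runs the migration.
        String index = System.getenv("CF_INSTANCE_INDEX");
        if ("0".equals(index)) {
            runDatabaseMigration(); // hypothetical: apply pending migrations here
        }
        startWebServer(); // hypothetical: every instance still serves traffic
    }

    private static void runDatabaseMigration() {
        // invoke your migration tooling
    }

    private static void startWebServer() {
        // start handling requests
    }
}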
Another option is to run your migration as a task (i.e. cf run-task). This runs in its own container, so it would only run once regardless of the number of instances you have. This SO post has some tips about running a migration as a task.
Or does Cloud Foundry provide the following mechanism: start the first instance and, once it is up and running, start the second instance?
It does, it's the --strategy=rolling flag for cf push.
See https://docs.cloudfoundry.org/devguide/deploy-apps/rolling-deploy.html
I'm not sure that this feature would work for ensuring your migration runs only once. According to the docs (See "How it works" section at the link above), your new and old containers could overlap for a short period of time. If that's the case, running the migration could potentially break the old instances. It'll be a short period of time, just until they get replaced with new instances, but maybe something to consider.

Airflow Dependencies Blocking Task From Getting Scheduled

I have an Airflow instance that had been running with no problems for 2 months until Sunday. There was a blackout in a system my Airflow tasks depend on and some tasks were queued for 2 days. After that we decided it was better to mark all the tasks for that day as failed and just lose that data.
Nevertheless, now all the new tasks get triggered at the proper time but are never set to any state (neither queued nor running). I checked the logs and I see this output:
Dependencies Blocking Task From Getting Scheduled
All dependencies are met but the task instance is not running. In most cases this just means that the task will probably be scheduled soon unless:
The scheduler is down or under heavy load
The following configuration values may be limiting the number of queueable processes: parallelism, dag_concurrency, max_active_dag_runs_per_dag, non_pooled_task_slot_count
This task instance already ran and had its state changed manually (e.g. cleared in the UI)
I get the impression the third point is the reason why it is not working.
The scheduler and the webserver were working; nevertheless, I restarted the scheduler and I am still getting the same outcome. I also deleted the data in the MySQL database for one job and it is still not running.
I also saw a couple of posts saying that tasks won't run when depends_on_past is set to true and the previous runs failed, but I checked and that is not my case.
Any input would be really appreciated. Thanks.
While debugging a similar issue I found this setting: AIRFLOW__SCHEDULER__MAX_DAGRUNS_PER_LOOP_TO_SCHEDULE (see http://airflow.apache.org/docs/apache-airflow/2.0.1/configurations-ref.html#max-dagruns-per-loop-to-schedule). Looking at the Airflow code, the scheduler queries for DAG runs to examine (i.e. to consider running task instances for), and that query is limited to this number of rows (20 by default). So if you have more than 20 DAG runs that are blocked in some way (in our case because their task instances were up-for-retry), the scheduler won't consider other DAG runs, even though those could run fine.
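If that limit is your bottleneck, a minimal sketch of raising it (the value 100 is only an illustrative choice) is either an airflow.cfg entry or the equivalent environment variable; the scheduler needs a restart to pick up the change:

[scheduler]
max_dagruns_per_loop_to_schedule = 100

AIRFLOW__SCHEDULER__MAX_DAGRUNS_PER_LOOP_TO_SCHEDULE=100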

Will WorkManager REPLACE affect an already running instance?

We get a large number of requests in a queue, and we enqueue WorkManager work as and when we get a request. How does ExistingWorkPolicy.REPLACE work?
The documentation says:
If there is existing pending (uncompleted) work with the same unique name, cancel and delete it.
Will it also kill an existing worker that is in the middle of running? We really do not want the existing worker to be stopped mid-run; it is OK for it to be replaced while it is enqueued, but not while it is running. Can we use the REPLACE option here?
https://developer.android.com/reference/androidx/work/ExistingWorkPolicy
As explained in WorkManager's guide and in your question, when you enqueue a new unique WorkRequest using REPLACE as the existing work policy, it will stop a previous worker that is currently running.
What happens to your worker then depends on how you implemented it (Worker, CoroutineWorker, or another ListenableWorker subclass) and how you handle stoppages and cancellations.
What this means is that your worker needs to "cooperatively" finish and clean up:
In the case of unique work, you explicitly enqueued a new WorkRequest with an ExistingWorkPolicy of REPLACE. The old WorkRequest is immediately considered terminated.
Under these conditions, your worker will receive a call to ListenableWorker.onStopped(). You should perform cleanup and cooperatively finish your worker in case the OS decides to shut down your app.
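As a rough illustration (the class name, work name, and chunked-upload logic below are hypothetical), a Worker that cooperates with being replaced checks isStopped() at safe points and cleans up in onStopped():

import android.content.Context;
import androidx.annotation.NonNull;
import androidx.work.Worker;
import androidx.work.WorkerParameters;

public class UploadWorker extends Worker {

    public UploadWorker(@NonNull Context context, @NonNull WorkerParameters params) {
        super(context, params);
    }

    @NonNull
    @Override
    public Result doWork() {
        for (int chunk = 0; chunk < 10; chunk++) {
            // If this work was replaced (or otherwise cancelled), stop cooperatively.
            if (isStopped()) {
                return Result.failure();
            }
            uploadChunk(chunk); // hypothetical resumable unit of work
        }
        return Result.success();
    }

    @Override
    public void onStopped() {
        // Called when the work is cancelled, e.g. replaced via ExistingWorkPolicy.REPLACE.
        // Release resources / roll back partial work here.
    }

    private void uploadChunk(int chunk) {
        // upload one chunk
    }
}

When a new request is enqueued with WorkManager.getInstance(context).enqueueUniqueWork("upload", ExistingWorkPolicy.REPLACE, newRequest), a currently running UploadWorker will see isStopped() return true and receive the onStopped() callback.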

Concurrency issues with Symfony and Doctrine

Good morning,
I have an issue when selecting the next record with Doctrine when there is concurrency. I have installed supervisord inside a Docker container, and it starts multiple processes running the same "dispatch" command. The dispatch command basically gets the next job in the queue from the DB and sends it to the right executor. Right now I have two Docker containers that each run multiple processes through supervisord; these two containers are on two different servers. I'm also using Doctrine optimistic locking. The Doctrine query to find the next job in the queue is the following:
$qb = $this->createQueryBuilder('job')
    ->andWhere('job.status = :todo')
    ->setMaxResults(1)
    ->orderBy('job.priority', 'DESC')
    ->addOrderBy('job.createdAt', 'ASC')
    ->setParameters(array('todo' => Job::STATUS_TO_DO));

return $qb->getQuery()->getOneOrNullResult();
The issue is that when a worker tries to get the next job with the above query, it frequently runs into an OptimisticLockException, which is fine in itself: it means the record has already been taken by another worker. When that happens the exception is caught, the worker stops, and another one starts. But I lose a lot of time because of this, since it takes workers multiple tries to finally get the next job instead of an OptimisticLockException.
I thought about getting a random job id in the above Doctrine query.
What's your take on this? Is there a better way to handle this?
I finally figured it out. There was a delay between one of the servers and the remote MySQL, so updates were not seen right away, and that triggered the OptimisticLockException. I fixed it by moving the MySQL DB to Azure, which is much faster than the old server and causes no delays.

How can I remove Host Instance Zombies from BTMessageBox

After moving most of our BizTalk applications from BizTalk 2009 to a BizTalk 2010 environment, we began the work of removing old applications and unused hosts. In this process we ended up with a zombie host instance.
This resulted in the bts_CleanupDeadProcesses job starting to fail with the error “Executed as user: RH\sqladmin. Could not find stored procedure 'dbo.int_ProcessCleanup_ProcessLabusHost'. [SQLSTATE 42000] (Error 2812). The step failed.”
After looking at the CleanupDeadProcesses process, I found the zombie host instance in the BTMsgBox.ProcessHeartBeats table, with dtNextHeartbeatTime set to the time when the host was removed.
(I'm assuming that the Host Instance Processes don't exist in your services any longer, and that the SQL Agent job fails)
From looking at the source of the [dbo].[bts_CleanupDeadProcesses] job, it loops through the dbo.ProcessHeartbeats table with a cursor (btsProcessCurse, lol) looking for 'dead' heartbeats.
Each process instance has its own cleanup sproc int_ProcessCleanup_[HostName] and a sproc for the heartbeat watchdog to call, viz. bts_ProcessHeartbeat_[HostName] (although FWIW the sproc calls it @ApplicationName), filtered by WHERE (s.dtNextHeartbeatTime < @dtCurrentTime).
It is thus tempting to just delete the record for your deleted / zombie host (or, if you aren't that brave, to simply update dtNextHeartbeatTime on the heartbeat record for your dead host instance to sometime next century). Either way, the SQL Agent job should then skip the dead instances.
An alternative could be to try to re-create the host and its instances with the same name through the Admin Console, just to delete them (properly) again. This might however cause additional problems, as BizTalk won't be able to create the two sprocs above because of the undeleted objects.
However, I obviously wouldn't do this on your prod environment until you've confirmed it works with a trial run first.
It looks like someone else got stuck with a similar situation here
And there is also a good dive into the details of how the heartbeat mechanism works by XiaoDong Zhu here
Have you tried BTSTerminator? That works for one-off cleanups.
http://www.microsoft.com/en-us/download/details.aspx?id=2846

Resources