I have an EJB with an @Asynchronous method.
I am trying to figure out what happens when all the threads configured in the pool are busy processing and one more asynchronous call comes in.
I found an answer in this post, but it only covers WebSphere.
I would like to know what happens on JBoss, and whether there is an option to queue the tasks as on WebSphere.
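For reference, the bean looks roughly like this (a simplified sketch; the class and method names are placeholders):

import java.util.concurrent.Future;

import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
public class ReportBean {

    // each call returns to the caller immediately and is dispatched
    // to the EJB3 subsystem's "default" async thread pool
    @Asynchronous
    public Future<String> generateReport(long id) {
        // ... long-running work ...
        return new AsyncResult<String>("report-" + id);
    }
}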
I configure the thread pool like this:
<subsystem xmlns="urn:jboss:domain:ejb3:1.2">
    <async thread-pool-name="default"/>
    <thread-pools>
        <thread-pool name="default">
            <max-threads count="10"/>
            <keepalive-time time="100" unit="milliseconds"/>
        </thread-pool>
    </thread-pools>
    ...
</subsystem>
I tried to use a bounded-queue-thread-pool inside the <thread-pools> element, but it does not work.
Thank you for your help.
Below is the schema's description of this pool type:
"A thread pool executor with an unbounded queue. Such a thread pool has a core size and a queue with no upper bound. When a task is submitted, if the number of running threads is less than the core size, a new thread is created. Otherwise, the task is placed in queue. If too many tasks are allowed to be submitted to this type of executor, an out of memory condition may occur. The "name" attribute is the name of the created executor. The "max-threads" attribute must be used to specify the thread pool size. The nested "keepalive-time" element may used to specify the amount of time that pool threads should be kept running when idle; if not specified, threads will run until the executor is shut down. The "thread-factory" element specifies the bean name of a specific threads subsystem thread factory to use to create worker threads. Usually it will not be set for an EJB3 thread pool and an appropriate default thread factory will be used."
Source: http://www.jboss.org/schema/jbossas/jboss-as-ejb3_1_2.xsd
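So an asynchronous call arriving while all ten threads are busy is simply queued, not rejected. In plain java.util.concurrent terms, the behavior described above corresponds roughly to the following sketch (an illustration of the semantics, not JBoss's actual implementation):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueuePoolSketch {
    public static void main(String[] args) {
        // max-threads == core size == maximum size; once all core threads
        // are busy, new tasks wait in an unbounded queue instead of being
        // rejected (hence the possible out-of-memory condition)
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10, 10,                              // max-threads count="10"
                100, TimeUnit.MILLISECONDS,          // keepalive-time
                new LinkedBlockingQueue<Runnable>()  // queue with no upper bound
        );
        pool.allowCoreThreadTimeOut(true); // idle threads may exit after keepalive
    }
}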
The Hazelcast cluster runs inside an application deployed on Kubernetes. I can't see any traces of partitioning or other problems in the logs. At some point, this exception starts appearing in the logs:
hz.dazzling_morse.partition-operation.thread-1 com.hazelcast.logging.StandardLoggerFactory$StandardLogger: app-name, , , , , - [172.30.67.142]:5701 [app-name] [4.1.5] Executor is shut down.
java.util.concurrent.RejectedExecutionException: Executor is shut down.
at com.hazelcast.scheduledexecutor.impl.operations.AbstractSchedulerOperation.checkNotShutdown(AbstractSchedulerOperation.java:73)
at com.hazelcast.scheduledexecutor.impl.operations.AbstractSchedulerOperation.getContainer(AbstractSchedulerOperation.java:65)
at com.hazelcast.scheduledexecutor.impl.operations.SyncBackupStateOperation.run(SyncBackupStateOperation.java:39)
at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:184)
at com.hazelcast.spi.impl.operationexecutor.OperationRunner.runDirect(OperationRunner.java:150)
at com.hazelcast.spi.impl.operationservice.impl.operations.Backup.run(Backup.java:174)
at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:184)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:256)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:237)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:452)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:166)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:136)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
I can't see any particular operation failing prior to that. I do run some scheduled operations myself, but they execute inside try-catch blocks and never throw.
The consequence is that whenever a node in the cluster restarts, no data is replicated to the new node, which eventually renders the entire cluster useless: all the data that is supposed to be cached and replicated among the nodes disappears.
What could be the cause? How can I get more details about what causes the executor Hazelcast uses to shut down?
Based on other conversations...
Your Runnable / Callable should implement HazelcastInstanceAware.
Don't pass the HazelcastInstance or IExecutorService as a non-transient field, since the instance where the runnable is submitted will be different from the one where it runs (see the sketch below).
See this.
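Concretely, a minimal sketch covering both points (the task class and map name are made up):

import java.io.Serializable;

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastInstanceAware;

public class EvictCacheTask implements Runnable, Serializable, HazelcastInstanceAware {

    // transient: never serialized along with the task; Hazelcast injects
    // the instance on the member that actually runs it
    private transient HazelcastInstance hazelcastInstance;

    @Override
    public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
        this.hazelcastInstance = hazelcastInstance;
    }

    @Override
    public void run() {
        // use the injected instance, not one captured at submit time
        hazelcastInstance.getMap("my-cache").evictAll();
    }
}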
We get a large number of requests in a queue, and we enqueue WorkManager work as and when we get a request. How does ExistingWorkPolicy.REPLACE work?
The documentation says:
If there is existing pending (uncompleted) work with the same unique name, cancel and delete it.
Will it also kill an existing worker that is already running? We really do not want a running worker to be stopped midway; it is fine for a worker to be replaced while it is still enqueued, but not while it is running. Can we use the REPLACE option here?
https://developer.android.com/reference/androidx/work/ExistingWorkPolicy
As explained in WorkManager's guide and quoted in your question, when you enqueue a new unique WorkRequest using REPLACE as the existing work policy, it will stop a previous worker that is currently running.
What happens to your worker then really depends on how you implemented it (Worker, CoroutineWorker, or another ListenableWorker subclass) and how you handle stoppages and cancellations.
What this means is that your Worker needs to finish "cooperatively" and clean up after itself:
In the case of unique work, you explicitly enqueued a new WorkRequest with an ExistingWorkPolicy of REPLACE. The old WorkRequest is immediately considered terminated.
Under these conditions, your worker will receive a call to ListenableWorker.onStopped(). You should perform cleanup and cooperatively finish your worker in case the OS decides to shut down your app.
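For illustration, a minimal sketch of such a cooperative worker (the class name and the work details are made up):

import android.content.Context;
import androidx.annotation.NonNull;
import androidx.work.Worker;
import androidx.work.WorkerParameters;

public class UploadWorker extends Worker {

    public UploadWorker(@NonNull Context context, @NonNull WorkerParameters params) {
        super(context, params);
    }

    @NonNull
    @Override
    public Result doWork() {
        for (int chunk = 0; chunk < 10; chunk++) {
            if (isStopped()) {
                // a REPLACE enqueue (or any cancellation) flips this flag;
                // bail out cleanly instead of finishing the loop
                return Result.failure();
            }
            uploadChunk(chunk);
        }
        return Result.success();
    }

    @Override
    public void onStopped() {
        // called when the work is cancelled/replaced: close connections,
        // delete temp files, etc.; the result of doWork() is ignored after this
    }

    private void uploadChunk(int chunk) {
        // hypothetical unit of work
    }
}

With that in place, enqueuing via WorkManager.getInstance(context).enqueueUniqueWork("upload", ExistingWorkPolicy.REPLACE, OneTimeWorkRequest.from(UploadWorker.class)) while an upload is running stops the old worker through this cooperative path rather than killing it abruptly.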
ASP.NET Web API 2 application on .NET 4.6.2, hosted on IIS on Windows Server 2016. From time to time, there are a lot (hundreds) of requests stuck for hours (despite the 60 s request timeout I have set) with no CPU usage. So I took a memory dump of the w3wp process, along with sos.dll, clr.dll, mscordacwks.dll, and all my project's DLLs and PDBs from the bin directory on the server, and used WinDbg as described in many blogs and tutorials. But in all of them, the authors are able to see the CLR stack directly by calling ~*e !clrstack. I can see the CLR stack trace for some Redis and ApplicationInsights workers, but for all other managed threads I see only:
OS Thread Id: 0x1124 (3)
Child SP IP Call Site
GetFrameContext failed: 1
0000000000000000 0000000000000000
!dumpstack for any of these gives just this:
0:181> !dumpstack
OS Thread Id: 0x1754 (181)
Current frame: ntdll!NtWaitForSingleObject+0x14
Child-SP RetAddr Caller, Callee
000000b942c7f6a0 00007fff33d63acf KERNELBASE!WaitForSingleObjectEx+0x8f, calling ntdll!NtWaitForSingleObject
000000b942c7f740 00007fff253377a6 clr!CLRSemaphore::Wait+0x8a, calling kernel32!WaitForSingleObjectEx
000000b942c7f7b0 00007fff25335331 clr!GCCoop::GCCoop+0xe, calling clr!GetThread
000000b942c7f800 00007fff25337916 clr!ThreadpoolMgr::UnfairSemaphore::Wait+0xf1, calling clr!CLRSemaphore::Wait
000000b942c7f840 00007fff253378b1 clr!ThreadpoolMgr::WorkerThreadStart+0x2d1, calling clr!ThreadpoolMgr::UnfairSemaphore::Wait
000000b942c7f8e0 00007fff253d952f clr!Thread::intermediateThreadProc+0x86
000000b942c7f9e0 00007fff253d950f clr!Thread::intermediateThreadProc+0x66, calling clr!_chkstk
000000b942c7fa20 00007fff37568364 kernel32!BaseThreadInitThunk+0x14, calling ntdll!LdrpDispatchUserCallTarget
000000b942c7fa50 00007fff3773e821 ntdll!RtlUserThreadStart+0x21, calling ntdll!LdrpDispatchUserCallTarget
So I have no idea where to look for the bug in my code.
(Here is the full result:
https://gist.github.com/rouen-sk/eff11844557521de367fa9182cb94a82
and here are the results of !threads:
https://gist.github.com/rouen-sk/b61cba97a4d8300c08d6a8808c4bff6e)
What can I do? A Google search for "GetFrameContext failed" turns up nothing helpful.
As mentioned, this is not trivial; however, you can find a case study of a similar problem here: https://blogs.msdn.microsoft.com/rodneyviana/2015/03/27/the-case-of-the-non-responsive-mvc-web-application/
In a nutshell:
Download NetExt. It is the zip file here:
https://github.com/rodneyviana/netext/tree/master/Binaries
Open your dump and load NetExt
Run !windex to index the heap
Run !whttp -order -running to see a list of running requests
If a request shows a thread number, you can go to that thread to see what is happening
If a request shows --- instead of a thread number, it is waiting for a thread, and this is a sign that some throttling is happening
If it is a WCF service, run !wservice to see the services
Run !wruntime to see runtime information
Run !wapppool to see Application Pool information
Run !wdae to list all errors
... And so it goes. When you do this again and again, you will be able to spot issues easily.
I'm writing a "multi-workers" application using top-shelf and rebus.
My idea is to use the MyWorker1Namespace - MyWorker1Namespace.Messages, MyWorker2Namespace - MyWorker2Namespace.Messages pattern.
I would like to run the application without spanning multiple process, instead I would like to configure the application with moultiple input queues in order to be ready to split it up to multiple processes if necessary.
Is there any way to declare multiple input queues and multiple worker threads in one application using Rebus?
I guess the configuration should be something like this:
<rebus inputQueue="MyWorker1Namespace.input" errorQueue="MyWorker1Namespace.error" workers="1" maxRetries="5"> </rebus>
<rebus inputQueue="MyWorker2Namespace.input" errorQueue="MyWorker2Namespace.error" workers="1" maxRetries="5"> </rebus>
...
Since the Rebus app.config XML is kind of optimized for one-bus-instance-per-process scenarios, you cannot configure multiple buses fully in XML, but there's nothing that keeps you from starting multiple bus instances inside the same process.
I've often done that e.g. in Azure worker roles where I want to host multiple logical endpoints without incurring the cost of physically separate deployments, and I've also sometimes done it with Topshelf-hosted Windows Services.
Usually, my app.config ends up with the following XML:
<rebus workers="1">
    <add messages="SomeAssembly.Messages" endpoint="someEndpoint.input"/>
    <add messages="AnotherAssembly.Messages" endpoint="anotherEndpoint.input"/>
</rebus>
thus allowing me to configure the default number of workers per bus and the endpoint mappings once and for all. Then, when my application starts up, it keeps one IoC container per bus for the duration of the application lifetime. With Windsor, I usually end up with a general bus installer that takes the queue names as parameters, which allows me to configure Windsor like this:
var containers = new List<IWindsorContainer> {
    new WindsorContainer()
        // always handlers first
        .Install(FromAssembly.Containing<SomeEndpoint.SomeHandler>())
        // and then the bus, which gets started at this point
        .Install(new RebusInstaller("someEndpoint.input", "error"))
        // and then e.g. background timers and other "living things"
        .Install(new PeriodicTimersInstance()),

    new WindsorContainer()
        .Install(FromAssembly.Containing<AnotherEndpoint.AnotherHandler>())
        .Install(new RebusInstaller("anotherEndpoint.input", "error"))
};

// and then remember to dispose each container when shutting down the process
where the RebusInstaller (a Windsor installer) basically just puts a bus with the right queue names into the container, e.g. like this:
Configure.With(new WindsorContainerAdapter(container))
    .Transport(t => t.UseMsmq(_inputQueueName, _errorQueueName))
    .(...) etc
    .CreateBus().Start();
I like the idea that each IoC container instance functions as a logically independent application of its own. This way, it would be easy to break things apart some time in the future if you want to, e.g., be able to deploy your endpoints independently.
I hope this provides a bit of inspiration for you :) please don't hesitate to ask if you need more pointers.
I'm using a simple Spray-based servlet. After deploying and running this servlet on Tomcat 7, I undeploy it (and possibly deploy it again afterwards) without restarting the servlet container (so the JVM instance is basically preserved).
The problem is that the threads created by Akka at each servlet deploy are not destroyed when the servlet is undeployed (i.e. when Akka shuts down), and a new set of threads is created at every deploy. Thus... leakage.
Calling system.shutdown() and system.awaitTermination() is useless.
Is there a way of killing these threads spawned at servlet initialization?
Here is a sample log entry from Tomcat7:
Nov 14, 2013 1:53:24 PM org.apache.catalina.loader.WebappClassLoader checkThreadLocalMapForLeaks
SEVERE: The web application [/...] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@68871741]) and a value of type [scala.concurrent.forkjoin.ForkJoinPool.Submitter] (value [scala.concurrent.forkjoin.ForkJoinPool$Submitter@155aa3ef]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
Have you tried calling system.shutdown() and system.awaitTermination() in ServletContextListener#contextDestroyed()? That should release all resources before the app is undeployed.
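For example, something along these lines, assuming the ActorSystem is stashed in the servlet context at deploy time (the attribute key here is made up):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import akka.actor.ActorSystem;

public class AkkaShutdownListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // the ActorSystem is assumed to be created at deploy time and stored
        // in the servlet context under the (hypothetical) key "actor-system"
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        ActorSystem system = (ActorSystem) sce.getServletContext().getAttribute("actor-system");
        if (system != null) {
            system.shutdown();         // ask Akka to stop all actors
            system.awaitTermination(); // block until the dispatcher threads are gone
        }
    }
}

The listener is registered in web.xml with a <listener> element, so Tomcat invokes it before discarding the webapp's classloader.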
If you're using the Scala API, I've created a PR for this: https://github.com/spray/spray/pull/787
Cheers
Tulio