JVM Restartability in WebLogic - asynchronous

I have a WebLogic server running in clustered mode with one admin server (A) and three managed servers (M1, M2, M3). A and M1 reside on the same machine. In WebLogic I am running a multi-tiered J2EE application in which an asynchronous batch process is invoked from the UI. We noticed that when M1 is killed (due to server load etc.) and restarted, the batch process that was invoked continues to run. My assumption would have been that the batch process also runs within the same JVM as M1. I have checked all the Java processes running on the box, but none of them point to this batch executing in a separate JVM.
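For context, the batch is kicked off asynchronously from the web tier, conceptually along the lines of the sketch below (simplified, with made-up names - not our actual code), which is exactly why I assumed the work would die together with M1:

// Simplified sketch (names invented): a stateless EJB whose @Asynchronous method
// runs on a container-managed thread inside the managed server's own JVM.
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
public class BatchLauncher {

    @Asynchronous
    public void runBatch(String batchId) {
        // long-running batch work; because this executes on a thread of the
        // managed server's JVM, killing M1 should kill it as well
    }
}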
Appreciate your help!

Related

How to run a memory-consuming application using Windows Scheduler?

I have a console application which consumes up to 5 GB of RAM while running. I want to launch this application using the Windows Scheduler. The problem is that the scheduler closes the application when it reaches 400-500 MB of RAM usage. I have also changed the priority of the app from 7 to 0 in the task's XML configuration, but have seen no improvement.
I received the following error message while trying to run the application. Any help?
[error message]

BizTalk Message restore

Requirement: Updating BizTalk application to a new version
Problem: The MSI import does not go through if there are running/suspended instances, and terminating them would result in loss of messages.
What I tried:
I had 100+ messages in the MessageBox, some active, some in a suspended (resumable) state.
I took a backup of BizTalkMsgBoxDb, then terminated all instances from the BizTalk Admin console, and then restored BizTalkMsgBoxDb.
I was expecting the messages to be back in BizTalkMsgBoxDb, but when I query from the BizTalk Admin console I don't find any of the messages.
Did I miss anything?
If your changes do not involve any changes to ports etc., try replacing the assemblies in the GAC and then restart your host instances.
Doing a backup of just one of the BizTalk databases and restoring it is a very dangerous practice, and I would strongly advise against it as it can cause some very nasty side effects.
The normal process for a deployment would be to switch off the receive locations, allow any running processes to finish, and resume or terminate any messages/orchestrations as appropriate.
Once there are no longer any suspended or running processes/messages, you would unenlist all the orchestrations and do the deployment.
If there are some long-running processes that cannot be completed or terminated inside the deployment window, then you would have to look at doing a side-by-side deployment. That involves changing the version number of all the DLLs, deploying the new version, and then switching off the receive locations of the old version and switching on those of the new one.
When the old version has finished processing, you stop it and undeploy it.

w3wp.exe is restarting but all GET requests are eventually queued and serviced?

I have a w3wp.exe that is restarting on my IIS server (see specs below). Memory gradually climbs to ~3 GB, then it randomly restarts itself about every 1-2 minutes.
[Memory usage graph]
The odd thing is that once this memory drop happens (what looks like a restart; note that the app pool does not get recycled/restarted), GET requests are queued but then serviced as soon as the service warms up/starts, causing a delay in responses to our clients, who had initially been reporting occasional delayed response times.
I have followed this link to get a stack dump once the .exe restarts (private bytes drop to ~0), but nothing gets logged (no .dmp file) by DebugDiag once the service restarts.
I see tons of warnings in my web server (IIS) log, but that's it:
A process serving application pool 'MyApplication' suffered a fatal
communication error with the Windows Process Activation Service. The
process id was '1732'. The data field contains the error number.
ASK: I'm not sure if this is a memory limitation, if caching is not playing well with my threads/tasks, if the cache is blowing up, if there is a watchdog service restarting my application, etc. Has anybody run across something similar with w3wp.exe restarting? It's hard to tell because DebugDiag is not giving me a dump once it restarts.
SPECS:
MVC4 Web API servicing GET requests (code is debug build with debug=true)
Uses MemoryCache for model and business objects, with cache eviction set to 2 hours; uses a Task (TPL) for each new request.
Database: SQL Server 2008R2
Web Servers: Windows Server 2008R2 Enterprise SP1 (64bit, 64G RAM)
IIS 7.5
One application pool...no other LOB applications running on this server
Your first step is to reproduce the problem in your test environment. Set up some kind of load-generation app (you can write one yourself pretty easily) and get the same problem happening. Then turn off debug in web.config and see if that fixes the issue. Then change it to a release build and test again.
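If it helps, a load generator along these lines is usually enough (a rough sketch in Java 11+ using java.net.http; the target URL, thread count and request count are placeholders - point it at a test endpoint, never at production):

// Minimal GET load generator. TARGET_URL, THREADS and REQUESTS_PER_THREAD are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LoadGen {
    private static final String TARGET_URL = "http://test-server/api/values";
    private static final int THREADS = 20;
    private static final int REQUESTS_PER_THREAD = 1000;

    public static void main(String[] args) throws InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(TARGET_URL)).GET().build();

        Runnable worker = () -> {
            for (int i = 0; i < REQUESTS_PER_THREAD; i++) {
                try {
                    // fire the GET and discard the body; we only care about generating load
                    client.send(request, HttpResponse.BodyHandlers.discarding());
                } catch (Exception e) {
                    System.err.println("request failed: " + e.getMessage());
                }
            }
        };

        Thread[] threads = new Thread[THREADS];
        for (int i = 0; i < THREADS; i++) {
            threads[i] = new Thread(worker);
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
    }
}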
I've never used MemoryCache - try reducing the cache eviction time, or just turn it off, and see if that fixes the issue. Good luck :)

Is it possible to choose whether to generate heap dump or not on the fly?

We have an application which is deployed to a WebSphere server running on UNIX, and we are experiencing two issues:
a system hang which recovers after a few minutes - to investigate, we will need the thread dump (javacore).
a system hang which does not recover and requires WebSphere to be restarted - to investigate, we will need the thread dump and heap dump.
The problem is: when a system hang occurs, we do not know whether it is issue 1 or 2.
Ideally we would like to manually generate the thread dump first, and wait to see if the system recovers. If it does not, then we generate the thread dump and the heap dump, before restarting WebSphere.
I know about the kill -3 (or kill -QUIT) command. The command generates a thread dump only (if the parameter IBM_HEAPDUMP=false), or a thread dump and a heap dump (if IBM_HEAPDUMP=true). However, IBM_HEAPDUMP has to be set before WebSphere is started and cannot be changed while WebSphere is running.
Is my understanding correct, regarding the IBM_HEAPDUMP parameter and the kill -3 command?
Also, is it possible to get the dumps in the way I described? (i.e. when generating JVM diagnostics, choose on the fly whether to generate a heap dump or not)
Your understanding is consistent with everything I've read.
However, I believe you can accomplish what you want by using wsadmin scripting. This article describes how to force javacores and heapdumps on a Windows platform where kill -3 is not available, but the same commands can be run on any WebSphere system.
From within wsadmin or a wsadmin script, execute:
set jvm [$AdminControl completeObjectName type=JVM,process=server1,*]
$AdminControl invoke $jvm generateHeapDump
$AdminControl invoke $jvm dumpThreads
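As an aside: if you are able to add a small admin hook (servlet, JMX bean, etc.) to the application yourself - an assumption on my part, the wsadmin approach above does not need it - the IBM JDK that WebSphere runs on also exposes these dumps programmatically through com.ibm.jvm.Dump, which maps directly onto the staged "thread dump first, heap dump only if it does not recover" flow you described:

// Requires the IBM (J9) JDK bundled with WebSphere; com.ibm.jvm.Dump is not part of
// Oracle/OpenJDK. The Dump methods are the IBM API; the wrapper class is an invented example.
import com.ibm.jvm.Dump;

public class DiagnosticTrigger {

    // Step 1: when the hang is first noticed, take only a javacore (thread dump).
    public static void threadDumpOnly() {
        Dump.JavaDump();
    }

    // Step 2: if the system does not recover, take a javacore plus a heap dump
    // before restarting WebSphere.
    public static void threadAndHeapDump() {
        Dump.JavaDump();
        Dump.HeapDump();
    }
}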

Pros and Cons of running Quartz.NET embedded or as a Windows service

I want to add quartz scheduling to an ASP.NET application.
It will be used to send queued up emails.
What are the pros and cons of running Quartz.NET as a Windows service vs. embedded?
My main concern is how Quartz.NET in embedded mode handles a variable number of worker processes in IIS.
Here are some things you can consider while you decide whether you should run embedded or not:
If you are going to be creating jobs ONLY from within the hosting application, then run embedded. Otherwise, run as a service.
If your jobs might need permissions that are different from the permissions that the web app has, run as a service.
If your jobs are long-running jobs, or jobs that use a lot of memory, run as a service.
If you need to run your jobs in a clustered environment for either performance, scalability or fault tolerance, run as a service.
From the items above you can deduce that my preference is to run it as a service. This is because if you are going to go through the trouble of setting up a job scheduler, this means that you have jobs that need to run on a schedule, or long running jobs. A service is usually the better choice for this type of work.
Quartz.NET can be instantiated on a per-application basis (a web farm configuration dictates the number of schedulers). You can safely run multiple schedulers if your jobs are backed by a database and Quartz.NET is configured in clustered mode (and the clocks are synced, naturally).
The main concern is application pool handling prior to IIS 7.5. Without constant checks, your application worker process can get recycled and your scheduler will be down until someone issues a web request to fire up the application pool again. IIS 7.5 has a new feature to keep application pools running all the time.
Otherwise there should not be a big difference between the two models.
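For what it's worth, Quartz.NET is a port of the Java Quartz scheduler, so the clustered, database-backed setup described above looks conceptually like the sketch below (written against the Java Quartz API with invented connection details; the .NET property names differ slightly, but the ideas - JDBC-backed job store, clustered = true, instanceId = AUTO - carry over):

// Sketch of a clustered, database-backed Quartz scheduler (Java API; connection details invented).
import java.util.Properties;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;

public class ClusteredSchedulerBootstrap {

    public static Scheduler start() throws SchedulerException {
        Properties props = new Properties();
        props.setProperty("org.quartz.scheduler.instanceName", "EmailScheduler");
        props.setProperty("org.quartz.scheduler.instanceId", "AUTO");   // unique id generated per node
        props.setProperty("org.quartz.threadPool.threadCount", "5");
        props.setProperty("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
        props.setProperty("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        props.setProperty("org.quartz.jobStore.isClustered", "true");   // clustered mode
        props.setProperty("org.quartz.jobStore.dataSource", "quartzDS");
        props.setProperty("org.quartz.dataSource.quartzDS.driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver");
        props.setProperty("org.quartz.dataSource.quartzDS.URL", "jdbc:sqlserver://dbhost;databaseName=quartz");
        props.setProperty("org.quartz.dataSource.quartzDS.user", "quartz");
        props.setProperty("org.quartz.dataSource.quartzDS.password", "secret");

        // Jobs and triggers are persisted in the database, so any node in the
        // cluster can pick them up if this one goes down.
        Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();
        scheduler.start();
        return scheduler;
    }
}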
