How can I gracefully shut down a Siebel Application Object Manager - CRM?

My requirement is to gracefully shut down a Siebel Application Object Manager component that still has more than 100 tasks to complete. I tried shutting down SCBroker, but then ran into strange issues while navigating the application.
BR

SCBroker brokers inbound connections between the SWSE and the Siebel Server, so shutting it down breaks navigation for users rather than stopping the AOM. To shut down an AOM component you can run:
shutdown comp <comp_name> for server <server_name>
or:
shutdown fast comp <comp_name> for server <server_name>
https://docs.oracle.com/cd/E14004_01/books/SystAdm/SystAdm_SrvrMgrCLI15.html
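For example, a srvrmgr session might look like the following (the gateway, enterprise, server, and component names are placeholders for your own environment). As I understand it, the plain shutdown form is the graceful one and lets running tasks finish, while shutdown fast brings the component down immediately:
srvrmgr /g gateway_host /e SIEBEL_ENT /s SBLSRVR01 /u SADMIN /p password
srvrmgr> list tasks for comp SCCObjMgr_enu
srvrmgr> shutdown comp SCCObjMgr_enu for server SBLSRVR01
srvrmgr> list comp SCCObjMgr_enu
Re-running the last list command lets you watch the component state until it reaches Shutdown.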

Related

How to continue serving requests in Java Servlet container while load balancer removes the server from the pool based on a health check?

I want to continue serving requests while my load balancer removes the server from the pool based on a health check. If I know the server is being shut down, I can change the health check to unhealthy and give it time to be removed without dropping any requests.
My attempt: contextDestroyed in ServletContextListener
I'm delaying shutdown of my web app like this in a ServletContextListener:
override def contextDestroyed(sce: ServletContextEvent): Unit = {
  logger.info(s"contextDestroyed, suspend for $ShutdownDelayMS ms")
  Thread.sleep(ShutdownDelayMS) // hold the shutdown callback for the grace period
  logger.info("Suspend finished, shutting down")
}
I see the logs and the delay in between, but as soon as contextDestroyed is called, the server will no longer serve requests, and in stdout I immediately see:
[INFO] Stopped ServerConnector@19877453{HTTP/1.1}{0.0.0.0:8080}
Am I misunderstanding how contextDestroyed is supposed to work? Jetty 9.2.9.v20150224, javax.servlet 3.0.1.
Docs for contextDestroyed say:
Receives notification that the ServletContext is about to be shut
down. All servlets and filters will have been destroyed before any
ServletContextListeners are notified of context destruction.
So if I'm reading that correctly, the container stops serving requests as soon as contextDestroyed is called. Is there another way to achieve what I'm trying to do, i.e. introduce a delay during which requests continue to be served?
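One direction that matches the goal described above is to keep the health check itself under application control and flip it to unhealthy before the container is asked to stop. A minimal sketch (assuming an embedded-Jetty style setup where you control when Server.stop() is called; the object, servlet, and shutdown sequence below are illustrative, not from the question):
import java.util.concurrent.atomic.AtomicBoolean
import javax.servlet.http.{HttpServlet, HttpServletRequest, HttpServletResponse}

object HealthState {
  // flipped to false by whatever triggers the shutdown (signal handler, admin request, ...)
  val healthy = new AtomicBoolean(true)
}

class HealthCheckServlet extends HttpServlet {
  override def doGet(req: HttpServletRequest, resp: HttpServletResponse): Unit =
    if (HealthState.healthy.get()) resp.setStatus(HttpServletResponse.SC_OK)
    else resp.setStatus(HttpServletResponse.SC_SERVICE_UNAVAILABLE)
}

// hypothetical shutdown sequence, run from your own trigger *before* stopping Jetty:
//   HealthState.healthy.set(false)   // load balancer starts seeing 503s
//   Thread.sleep(ShutdownDelayMS)    // give it time to drain this node
//   server.stop()                    // only now stop the embedded Jetty Server
The catch, as the quoted javadoc says, is that by the time contextDestroyed runs the servlets have already been destroyed, so the flag flip and the drain delay have to happen before the container's stop sequence begins at all.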

ODP.NET Connection Pooling Issues - Fault Tolerance After Database Goes Down

I have a Web API service using ODP.NET to make connections to several Oracle databases. Normally the web service is hit several times a second and never has long periods of inactivity. In our test site, however, we did not use it for 2-3 days. This morning, we hit the service and got "connection request timeout" exceptions from ODP.NET, suggesting that the connection pool was out of available connections. We are closing the connections after use. The service was working fine before the idle period, but today the very first query got the timeout exception. Our app pool in IIS is configured to never reset.
My question, then, is: what can cause the connection pool to fill up with bad connections after a period of inactivity, such that these connections are not cleaned up in the usual 3-minute cycle? It only happened to 2 out of 3 of our databases, and Validate Connection=true is set for all of them.
EDIT
So after talking to the DBA, there is a difference between a connection/session being killed manually or by timeout and the database server severing the TCP connections. In this case, the TCP connections were severed as part of a regular backup (why is not important here). I guess this happens when the whole database server goes offline at once. The basis of the question still applies, I think: why is ODP.NET unable to clean up severed connections over time? There is a performance counter that refers to "Stasis" connections; could those connections be stuck in that state? I would think that it should be able to see that a connection is no longer active (Validate Connection=True), kill it, and not return it to the pool.
Granted, this problem can be solved by just resetting the app pool every time the database goes down. I would still like to configure ODP.NET connection pooling to be more fault tolerant.
I have run into this same issue, and the only solution I have found is to use the Connection Lifetime connection string parameter in conjunction with Validate Connection.
In my particular case, the connection timeout was set at the server, and the connections in the pool would time out but not get removed from the pool, resulting in errors.
Setting both the Connection Lifetime and the Validate Connection parameters has resolved the issue.
Make sure the Connection Lifetime value that you choose is less than the server connection inactivity timeout.
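For example, the relevant part of the connection string could look like this (placeholder values, not recommendations; pick a Connection Lifetime shorter than the server-side inactivity timeout):
Data Source=MYDB;User Id=myuser;Password=secret;Pooling=true;Validate Connection=true;Connection Lifetime=120;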
The recommended solution is to use ODP.NET Fast Connection Failover (FCF). FCF automatically removes invalid connections from the pool, so you don't need Validate Connection, Connection Lifetime, or pool clearing.
To use FCF, set "HA Events=true", use connection pooling, and have your DBA set up Fast Application Notification (FAN) on the server side. FAN is what alerts the ODP.NET pool when a DB service or node goes down or is rebooted. Upon receiving the notification, ODP.NET knows which connections to remove from the pool and removes them, leaving all other valid connections untouched.
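With FCF the pool-related attributes might instead look like this (again placeholder values; FAN must also be configured on the database side for the events to arrive):
Data Source=MYDB;User Id=myuser;Password=secret;Pooling=true;HA Events=true;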
Something else is going on here. Min Pool Size and some of the other settings help when the connection is severed by things like DBA-configured idle timeouts and firewall TCP idle timeouts; a 'connection request timeout' occurs when creating a new connection.
This could be a simple network problem. Something could be interfering with DNS resolution of the servers. Another cause is not having fully qualified entries in tnsnames.ora. I've been bitten by the latter a couple of times.
The other issue is the one you've already recognized - full pool.
Double check that you don't have a connection leak somewhere. A missing .Close is one thing, but if you're not using a 'using' statement, a try/finally is required, since an unhandled exception could be thrown before the .Close is reached.
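For illustration only, the same discipline in JVM/JDBC terms (to stay consistent with the other snippets on this page; with ODP.NET the equivalent is C#'s using statement or an explicit try/finally around the OracleConnection, and jdbcUrl/user/password are placeholders):
import java.sql.DriverManager

val conn = DriverManager.getConnection(jdbcUrl, user, password)
try {
  // ... run the query ...
} finally {
  conn.close() // runs even if an exception is thrown before an explicit Close is reached
}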
To start with, I would use perfmon to monitor some of the connection statistics, e.g. NumberOfPooledConnections, NumberOfActiveConnections, etc.

Detect when asp.net application is shutting down during long running request

There have been some questions about how to detect when an application is shutting down using IRegisteredObject. However, IRegisteredObject.Stop will not be invoked until all active requests complete.
This will be the case for long running requests (pushlet, long polling, web socket), meaning that an app pool recycle can be held up by these requests indefinitely.
Is there a way to detect from a long running request that an application shut down is pending?
I've already tested using IRegisteredObject or polling HostingEnvironment.ShutdownReason. Neither one works until active requests are completed.
The Katana/Owin project accesses the internal System.Web.Hosting.UnsafeIISMethods.MgdHasConfigChanged method to detect a shutdown so that long running requests can detect this state.
See ShutdownDetector and UnsafeIISMethods for a sample implementation.

Idle time-out for a background thread on IIS

IIS version: 7.5
Application pool idle time-out: 20 minutes
Steps:
1. A user visits a page.
2. When the server receives the request, the code creates a new thread to process a complex operation, and at the same time a response is sent to the user saying that the request is being processed in the background.
After 20 minutes with no visits to the site, the worker process is shut down, even though the complex operation has not finished.
How can I make IIS treat the worker process as not idle while that thread is still running?
I have the same problem at the moment. The best solution I have found so far is to have the background work periodically issue a new request from the IIS worker process to the site itself, so that IIS does not consider the worker process idle and shut it down.

BlazeDS+ActiveMQ: non-graceful disconnection of Flex client from a durable topic does not remove it from ActiveMQ

I'm trying to make a Flex-based desktop application consume messages from an ActiveMQ topic with a durable subscription, using the JMS bridge of BlazeDS. The basic scenario is as follows:
1. Messages are produced by other producers in the topic to which the Flex client is subscribed.
2. The Flex client may go offline from time to time, but it must receive all the messages it has missed while offline when it connects to BlazeDS again. (Of course the Flex client connects with the same client ID every time.)
3. It cannot be guaranteed that the Flex client is shut down gracefully.
Everything works fine if I explicitly disconnect my consumer on the Flex side by calling disconnect() - I do it in the exit handler of the application. However, due to #3 above, it is not guaranteed that disconnect() is always called. When the Flex client shuts down without calling disconnect(), the subscription of the "proxy JMS client" that BlazeDS creates and associates with the Flex client seems to stay active towards ActiveMQ, so ActiveMQ still thinks the client is logged in. When the Flex app starts up the next time, it is unable to log in to BlazeDS because ActiveMQ refuses its subscription, claiming that the client ID is already taken. Why is this so, and what can I do to ensure that BlazeDS takes the "proxy JMS client" offline in ActiveMQ when its real Flex counterpart terminates unexpectedly?
More detailed information: some debugging revealed that:
BlazeDS does become aware of the termination of the Flex client: when running in debug mode it prints a few exceptions to the console. The messages are as follows:
[BlazeDS]23:18:13.688 [WARN] Endpoint with id 'my-streaming-amf' is closing the streaming connection to FlexClient with id '71E6466F-D91F-201C-F60A-A6CB52F95D9F' because endpoint encountered a socket write error, possibly due to an unresponsive FlexClient.
ClientAbortException: java.net.SocketException: Broken pipe
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:319)
at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:288)
at org.apache.catalina.connector.Response.flushBuffer(Response.java:542)
at org.apache.catalina.connector.ResponseFacade.flushBuffer(ResponseFacade.java:279)
at flex.messaging.endpoints.BaseStreamingHTTPEndpoint.handleFlexClientStreamingOpenRequest(BaseStreamingHTTPEndpoint.java:818)
at flex.messaging.endpoints.BaseStreamingHTTPEndpoint.serviceStreamingRequest(BaseStreamingHTTPEndpoint.java:1055)
at flex.messaging.endpoints.BaseStreamingHTTPEndpoint.service(BaseStreamingHTTPEndpoint.java:460)
at flex.messaging.MessageBrokerServlet.service(MessageBrokerServlet.java:353)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:263)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:584)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:737)
at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:434)
at org.apache.coyote.http11.InternalOutputBuffer.flush(InternalOutputBuffer.java:299)
at org.apache.coyote.http11.Http11Processor.action(Http11Processor.java:963)
at org.apache.coyote.Response.action(Response.java:183)
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:314)
... 20 more
[BlazeDS]23:18:13.689 [DEBUG] Streaming thread 'http-8400-1' for endpoint with id 'my-streaming-amf' is releasing connection and returning to the request handler pool.
[BlazeDS]23:18:13.689 [INFO] Number of streaming clients for FlexSession with id '5BC5E8D604A361BCA673B05AC624CCC1' is 0.
[BlazeDS]23:18:13.689 [DEBUG] Number of streaming clients for endpoint with id 'my-streaming-amf' is 0.
At this stage, the subscriptions are still shown on the ActiveMQ web admin interface as being active.
Killing BlazeDS (more precisely, the Tomcat server that hosts it) with kill -9 from the console makes ActiveMQ realize immediately that the "proxy JMS client" is gone, and it becomes offline on the ActiveMQ web admin interface. This made me conclude that BlazeDS is keeping the proxy JMS client alive explicitly: kill -9 gives BlazeDS no chance to unsubscribe the client, yet the client still goes offline in ActiveMQ.
So, the question once again: What can I do here to ensure that BlazeDS makes the "proxy JMS client" offline in ActiveMQ when its real Flex counterpart terminates unexpectedly? Is this a bug in BlazeDS or am I just missing some hidden configuration setting that would make it work?
Version information: BlazeDS 4.0, ActiveMQ 5.5.0, both freshly downloaded today. I'm using the Tomcat server in the BlazeDS turnkey but ActiveMQ is installed separately because the BlazeDS turnkey ships with ActiveMQ 4.1.1 only. By the way, that version of ActiveMQ has the same issue.
The problem is that there is no way for BlazeDS to detect that your Flex client was shut down; you will have to implement your own mechanism. My suggestion is to use a heartbeat implemented with messaging: if no message is received from the client within a given time interval, you can assume that the Flex client is gone and do the disconnect (or you can use the session timeout mechanism on the server and do the disconnect when the session expires).
What you have seen (the exception caught when the streaming channel is closed) is not enough to say with 100% certainty that the Flex client is gone. The streaming channel is implemented using an HTTP connection that is kept open forever (used to push server messages) plus periodic HTTP POST calls (initiated by the client to send messages). In some networks a firewall can decide to kill the HTTP connection after a couple of seconds, and you will receive the same error as the one you posted. However, that does not mean the Flex client is dead: the Flex client can use a fallback strategy and switch to short/long polling in this case. It would actually be a bug if BlazeDS automatically did the JMS disconnect in this situation.
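A minimal sketch of the heartbeat bookkeeping on the server side, in Scala to match the earlier snippet on this page. The client IDs, the staleness interval, and the disconnectProxyConsumer callback are all assumptions: the callback has to call into whatever hook your BlazeDS/JMS setup exposes for dropping the proxy consumer's durable subscription.
import java.util.concurrent.{ConcurrentHashMap, Executors, TimeUnit}

class HeartbeatMonitor(staleAfterMs: Long, disconnectProxyConsumer: String => Unit) {
  private val lastSeen = new ConcurrentHashMap[String, Long]()
  private val scheduler = Executors.newSingleThreadScheduledExecutor()

  // call this from the handler that receives the client's heartbeat messages
  def heartbeat(clientId: String): Unit =
    lastSeen.put(clientId, System.currentTimeMillis())

  // periodically evict clients that have stopped sending heartbeats
  scheduler.scheduleAtFixedRate(new Runnable {
    def run(): Unit = {
      val now = System.currentTimeMillis()
      val it = lastSeen.entrySet().iterator()
      while (it.hasNext) {
        val entry = it.next()
        if (now - entry.getValue > staleAfterMs) {
          it.remove()
          disconnectProxyConsumer(entry.getKey) // drop the "proxy JMS client" so ActiveMQ sees it offline
        }
      }
    }
  }, staleAfterMs, staleAfterMs, TimeUnit.MILLISECONDS)
}
The Flex side would then just publish a small message with its client ID to a heartbeat destination every few seconds; how you expose disconnectProxyConsumer depends entirely on how your JMS adapter is configured, so treat this as a starting point only.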
