Problem
We have a Java EE app deployed on WebLogic Server 10.3.1 that provides an HTTP web service; let's call it the 'server'. Another Java app, deployed on Tomcat on a different machine, makes web service calls to the server; let's call it the 'client'. Both the client and the server app use Axis.
Occasionally the client fails with the exception
"java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:168)",
and about one minute later the server app log shows:
org.apache.axis.Message ERROR -
java.io.IOException: java.net.SocketException: Socket closed
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99).
It fails no more than 5 times a day, and the total number of web service requests on the server is low, on the order of several hundred a day. It also tends to fail after the server has had a long 'rest': the first call each morning will probably fail (we have no business traffic at night).
The exception stack traces are:
Client side:
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:168)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read(BufferedInputStream.java:235)
at java.io.FilterInputStream.read(FilterInputStream.java:66)
at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
at javax.xml.parsers.SAXParser.parse(SAXParser.java:375)
at org.apache.axis.encoding.DeserializationContext.parse(DeserializationContext.java:227)
at org.apache.axis.SOAPPart.getAsSOAPEnvelope(SOAPPart.java:696)
... 12 more
Server side:
2013-03-06 08:37:29,491 org.apache.axis.Message ERROR - java.io.IOException:
java.net.SocketException: Socket closed
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99)
at java.net.SocketOutputStream.write(SocketOutputStream.java:137)
at weblogic.servlet.internal.ChunkOutput.writeChunkNoTransfer(ChunkOutput.java:530)
at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:487)
at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
at weblogic.servlet.internal.CharsetChunkOutput.flush(CharsetChunkOutput.java:315)
at weblogic.servlet.internal.ChunkOutput$2.checkForFlush(ChunkOutput.java:580)
at weblogic.servlet.internal.CharsetChunkOutput.write(CharsetChunkOutput.java:222)
at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:146)
at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:138)
at org.apache.axis.utils.ByteArray.writeTo(ByteArray.java:375)
at org.apache.axis.SOAPPart.writeTo(SOAPPart.java:265)
at org.apache.axis.Message.writeTo(Message.java:539)
at org.apache.axis.transport.http.AxisServlet.sendResponse(AxisServlet.java:902)
at org.apache.axis.transport.http.AxisServlet.doPost(AxisServlet.java:777)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.apache.axis.transport.http.AxisServletBase.service(AxisServletBase.java:327)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:821)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:27)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
at org.springframework.security.util.FilterChainProxy.doFilter(FilterChainProxy.java:170)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:168)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
at org.springframework.orm.jpa.support.OpenEntityManagerInViewFilter.doFilterInternal(OpenEntityManagerInViewFilter.java:112)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
at com.ulic.ucia.framework.util.AppFilter.doFilter(AppFilter.java:48)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
at com.ebao.pub.framework.AppFilter.doFilter(AppFilter.java:101)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3588)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2200)
at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2106)
at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1428)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
When the client fails, we check the app log and the database, and it turns out that the business transaction always succeeds; WebLogic somehow fails to write the HTTP response. The WebLogic server log says:
BEA-101366: The server could not send the HTTP message during the configured timeout value. The socket has been closed.
Sorted logs and some analyses
Both machines run an NTP service, so their clocks are in sync. I simplified the server and client logs and merged them, sorted by time, to get a clearer view:
2013-03-06 08:36:18,755 Web service client began its work
2013-03-06 08:36:18,758 Web service client made the HTTP call and waited for the response
2013-03-06 08:36:19,039 App on WLS received the request
2013-03-06 08:36:24,553 App on WLS app log says the transaction finished successfully
2013-03-06 08:36:24,575 Web service client received java.net.SocketException: Connection reset
2013-03-06 08:37:29,491 App on WLS logged org.apache.axis.Message ERROR - java.io.IOException: java.net.SocketException: Socket closed
Most notably, the client received a 'Connection reset' after about 6 seconds of waiting, right when the server app had just finished processing. So I think it must have something to do with the server: for some reason the server machine sent a TCP RST while WebLogic was trying to send the HTTP response. But what is that reason?
As far as I know, there are (at least) two scenarios in which an RST will be sent by a Java application (see the sketch after this list):
A Java thread ends without closing its socket; the TCP stack then sends an RST to the peer to indicate an error.
The TCP stack fails to send all pending data within the 'linger' time after close.
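For illustration, here is a minimal sketch of the mechanism behind the second scenario, using a placeholder host: with SO_LINGER set to zero, close() becomes an abortive close and the TCP stack emits an RST instead of the normal FIN exchange. This is a deliberate demonstration, not the server's actual code.

import java.net.Socket;

public class RstDemo {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket("example.com", 80)) { // placeholder host
            // Linger time 0: close() discards unsent data, aborts the
            // connection, and the peer sees a TCP RST instead of a FIN.
            s.setSoLinger(true, 0);
            s.getOutputStream().write("GET / HTTP/1.0\r\n\r\n".getBytes());
        } // the implicit close() here sends the RST
    }
}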
At this point I'm stuck and don't know what to try next. Any suggestions would be appreciated.
Try setting the Axis2 CHUNKED property of the HTTP connection to false. Some web servers can't process HTTP requests that lack a Content-Length header, and the client then fails with a SocketException. With chunking disabled, Axis2 buffers the request and sends a Content-Length header instead:
// Build the stub, then disable chunked transfer encoding on its HTTP options.
serviceProxy = new FunctionsServiceStub( ... );
Options options = serviceProxy._getServiceClient().getOptions();
options.setProperty(org.apache.axis2.transport.http.HTTPConstants.CHUNKED, Boolean.FALSE);
Related
Problem:
I am load testing my ASP.NET Web API application on my server (hosted on IIS) with JMeter. The concurrency is set to 2000 and the loop count to forever, and after several seconds I get a 'Connection timed out: connect' error.
What I have tried:
Set the HTTP connect timeout and response timeout to 200000 ms in the JMeter GUI.
Set requestQueueLimit to 65535 and the minimum number of processes to 15 in IIS Manager.
Set minWorkerThreads and minIoThreads to 200 and the timeout to 20 minutes in the web.config file, then restarted my application in IIS.
None of the above worked, and I found that the server's CPU usage stays low during the test.
[Screenshots omitted: CPU usage and the JMeter run]
Here is the error log:
org.apache.http.conn.HttpHostConnectException: Connect to XXX.XXX.com:80 [XXX.XXXX.com/XXXX] failed: Connection timed out: connect
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl$JMeterDefaultHttpClientConnectionOperator.connect(HTTPHC4Impl.java:408)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.executeRequest(HTTPHC4Impl.java:939)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.sample(HTTPHC4Impl.java:650)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy.sample(HTTPSamplerProxy.java:66)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1301)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1290)
at org.apache.jmeter.threads.JMeterThread.doSampling(JMeterThread.java:651)
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:570)
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:501)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:268)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.ConnectException: Connection timed out: connect
at java.net.DualStackPlainSocketImpl.connect0(Native Method)
Check Maximum Concurrent Connections and other limits in your web site configuration: Advanced Settings->Limits->Maximum Concurrent Connections
It may not be connected with IIS at all; the timeout can happen at your website level due to incorrect database configuration or inefficient algorithms. Consider re-running your test with profiler telemetry from a tool like YourKit or dotTrace - it will give you full information about what's going on under the hood.
Don't run load tests using the JMeter GUI; it is only for test development and debugging. For actual execution you should run your JMeter tests in command-line non-GUI mode (see the example after this list).
Remove all the listeners; they don't add any value and just consume valuable resources.
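For reference, a typical non-GUI invocation looks like this; the test plan and results file names are placeholders:

jmeter -n -t test.jmx -l results.jtl

Here -n selects non-GUI mode, -t points at the test plan, and -l writes the results to a .jtl file you can analyze afterwards.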
We are running a CXF 2.7.11 application on a WAS 8.5.5.2 server. The application uses the parent-last classloading policy, and we disabled the IBM JAX-WS engine as instructed in the CXF documentation.
The application runs fine for a couple of days; after that we get the exceptions below and the TCP channel appears to be full.
Because the stack traces contain web service classes I suspect CXF, but that may itself be the result of another problem.
The application is also a Spring MVC application that exposes REST resources.
[10.11.2014 05:00:20:887 EET] 00000049 TCPChannel W TCPC0004W: TCP Channel TCP_2 has exceeded the maximum number of open connections 20000.
[10.11.2014 05:02:16:343 EET] 0000023f SSLHandshakeE E SSLC0008E: Unable to initialize SSL connection. Unauthorized access was denied or security settings have expired. Exception is javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
at com.ibm.jsse2.b.a(b.java:56)
at com.ibm.jsse2.nc.a(nc.java:90)
at com.ibm.jsse2.nc.unwrap(nc.java:292)
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:26)
at com.ibm.ws.ssl.channel.impl.SSLConnectionLink.readyInbound(SSLConnectionLink.java:535)
at com.ibm.ws.ssl.channel.impl.SSLConnectionLink.ready(SSLConnectionLink.java:295)
at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:214)
at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:113)
at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:175)
at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217)
at com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161)
at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:138)
at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:204)
at com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:775)
at com.ibm.io.async.ResultHandler$2.run(ResultHandler.java:905)
at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1864)
This is a bit tricky. You can simply increase the maximum number of connections, which can be done from the console:
Servers > WebSphere Application Servers > SERVER_NAME > web container > web container transport chains > TCP CHANNEL
The reason I said this is tricky is that there could be a larger underlying issue, for example a connection leak. Reaching the point where you are using up 20K connections is quite a lot, although I don't know how much load you're expecting on this server. If this is simply a test environment, then you need to start looking into a possible connection leak.
Unable to initialize SSL connection. Unauthorized access was denied or security settings have expired. Exception is javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
This portion of the error message means that plain-text, non-SSL connections are being made to an SSL port. You might also want to look into that and see who is making these calls, because it adds overhead.
Using 20000 connections is extremely high. You probably have a bug in your client code that is leaking connections. If you are using CXF on the client side, take a look at https://issues.apache.org/jira/browse/CXF-5144 and at the sketch below.
Increasing the connection limit will not solve your issue; it will just delay it.
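As an illustration of the client-side fix, here is a minimal sketch of releasing a CXF JAX-WS proxy once you are done with it. The MyService/MyServicePortType names are hypothetical stand-ins for your generated stubs; CXF client proxies implement java.io.Closeable, so closing them releases the underlying HTTP conduit instead of leaking it.

import java.io.Closeable;
import java.io.IOException;

public class ClientCleanupSketch {
    public void callOnce() throws IOException {
        // Hypothetical generated JAX-WS service and port.
        MyServicePortType port = new MyService().getMyServicePort();
        try {
            port.doWork(); // normal web service call
        } finally {
            // Release the proxy and its pooled HTTP connections.
            ((Closeable) port).close();
        }
    }
}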
I deployed a CMT MDB on WebSphere 7 that gets messages from a WebSphere MQ 7 queue and, through a resource adapter, sends them to an external system.
After a message gets through the MDB logic and I try to open a connection to the external system through the resource adapter, I get the error:
Connection Error Request Stack: java.lang.Throwable
at com.ibm.ejs.j2c.ConnectionEventListener.connectionErrorOccurred(ConnectionEventListener.java:441)
at com.jbase.jremote.jca.EventNotifier$1.notify(Unknown Source)
at com.jbase.jremote.jca.JRemoteManagedConnection.notify(Unknown Source)
at com.jbase.jremote.jca.JRemoteManagedConnection.isAlive(Unknown Source)
at com.jbase.jremote.jca.JRemoteManagedConnectionFactory.matchManagedConnections(Unknown Source)
at com.ibm.ejs.j2c.PoolManager.getMCWrapperFromMatch(PoolManager.java:3909)
at com.ibm.ejs.j2c.PoolManager.claimVictim(PoolManager.java:3784)
at com.ibm.ejs.j2c.PoolManager.reserve(PoolManager.java:2474)
at com.ibm.ejs.j2c.ConnectionManager.allocateMCWrapper(ConnectionManager.java:1064)
at com.ibm.ejs.j2c.ConnectionManager.allocateConnection(ConnectionManager.java:701)
at com.jbase.jremote.jca.JRemoteConnectionFactoryImpl.getConnection(Unknown Source)
at com.jbase.jremote.jca.JRemoteConnectionFactoryImpl.getConnection(Unknown Source)
at com.temenos.tocf.grouping.mdb.MessageGroupingMDB.processRequest(MessageGroupingMDB.java:192)
at com.temenos.tocf.grouping.mdb.MessageGroupingMDB.sendMessage(MessageGroupingMDB.java:166)
at com.temenos.tocf.grouping.mdb.MessageGroupingMDB.onMessage(MessageGroupingMDB.java:123)
at com.ibm.ejs.container.MessageEndpointHandler.invokeMdbMethod(MessageEndpointHandler.java:1093)
at com.ibm.ejs.container.MessageEndpointHandler.invoke(MessageEndpointHandler.java:778)
at $Proxy28.onMessage(Unknown Source)
at com.ibm.mq.connector.inbound.MessageEndpointWrapper.onMessage(MessageEndpointWrapper.java:131)
at com.ibm.mq.jms.MQSession$FacadeMessageListener.onMessage(MQSession.java:147)
at com.ibm.msg.client.jms.internal.JmsSessionImpl.run(JmsSessionImpl.java:2598)
at com.ibm.mq.jms.MQSession.run(MQSession.java:862)
at com.ibm.mq.connector.inbound.WorkImpl.run(WorkImpl.java:229)
at com.ibm.ejs.j2c.work.WorkProxy.run(WorkProxy.java:399)
at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1662)
If I deploy the same MDB with the same resource adapter on WebSphere 8.5, it works without any error.
So I assume it's something related to WebSphere 7 and the way it fails to open or find a connection in the connection pool.
It's not a load test, so it's not the case that too many callers are competing for a free connection.
Thanks in advance for any suggestions about this issue!
It looks like your MDB is attempting to get a connection using the com.jbase resource adapter, and the IBM JCA runtime is searching for a matching connection in the free pool. It calls the com.jbase resource adapter's matchManagedConnections method, and the resource adapter performs an isAlive check that in turn invokes the connectionErrorOccurred callback you can see at the top of the stack. This happens when the resource adapter wants the JCA component to remove a 'bad' connection from the pool, i.e. the resource you were connected to has been stopped or the network connection has gone away.
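To make that mechanism concrete, here is a rough sketch of the pattern. The com.jbase internals are not public, so the isAlive() and notifyError() calls below are stand-ins for whatever the adapter really does; only the matchManagedConnections signature comes from the JCA SPI.

import java.util.Set;
import javax.resource.ResourceException;
import javax.resource.spi.ConnectionRequestInfo;
import javax.resource.spi.ManagedConnection;
import javax.security.auth.Subject;

// Inside the adapter's ManagedConnectionFactory:
public ManagedConnection matchManagedConnections(Set pool, Subject subject,
        ConnectionRequestInfo info) throws ResourceException {
    for (Object candidate : pool) {
        JRemoteManagedConnection mc = (JRemoteManagedConnection) candidate;
        if (mc.isAlive()) {   // hypothetical liveness probe
            return mc;        // healthy pooled connection: reuse it
        }
        mc.notifyError();     // hypothetical: fires CONNECTION_ERROR_OCCURRED,
                              // asking the pool to destroy the dead connection
    }
    return null;              // no match: the container creates a new one
}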
Having said all of the above, the reason you see the error message in the log is simply that tracing is on :) - the exception is generated only to capture a stack trace for the log file. It is never actually thrown, and processing should continue as normal.
If a client cancels its request, the application server is supposed to throw the following error:
java.net.SocketException: Connection reset by peer: socket write error
But what exactly is happening?
Let's say I'm doing a very expensive operation on the server side, and I write some data to the output stream every time my server-side service gets a new result (a kind of streaming).
In the middle of this operation, the client cancels the request. What happens?
Does the operation stop because the socket throws this error once the connection is closed? And if it doesn't stop, what happens to the data flushed to the output stream after that?
Thanks
I can't tell what Tomcat is doing, but here is what happens. Either:
the client closes the socket gracefully (the server is then notified of the close and closes its side of the connection too, in which case any buffered data still waiting to be sent is lost); or
the client cuts the socket brutally (the server is NOT notified, and it will detect the connection loss only after a timeout or at the first attempt to send data, which will fail).
So, if your streaming is 'constant', the server is always 'protected' against undetected lost connections (the first send attempt cleans things up).
If the streaming is not constant, you should allow for a timeout, or use TCP keep-alives to make sure the connection state is tested on a regular basis (see the sketch below).
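For illustration, a minimal servlet-style sketch of the 'constant streaming' case. The hasMoreResults/nextResult/stopExpensiveOperation helpers are hypothetical; the point is that each flush gives the container a chance to surface the broken connection as an IOException.

import java.io.IOException;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class StreamingServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        try {
            ServletOutputStream out = resp.getOutputStream();
            while (hasMoreResults()) {      // hypothetical result producer
                out.println(nextResult());  // write the next chunk
                out.flush();                // actually push it to the client
            }
        } catch (IOException clientGone) {
            // "Connection reset by peer" / "Broken pipe": the client is gone.
            // Stop the expensive work; anything written after the reset is
            // simply discarded and never reaches the client.
            stopExpensiveOperation();       // hypothetical cleanup
        }
    }

    private boolean hasMoreResults() { return false; }  // stub
    private String nextResult() { return ""; }          // stub
    private void stopExpensiveOperation() { }           // stub
}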
Hope it helps.
I'm trying to make a Flex-based desktop application consume messages from an ActiveMQ topic with a durable subscription, using the JMS bridge of BlazeDS. The basic scenario is as follows:
Messages are produced by other producers in the topic to which the Flex client is subscribed.
The Flex client may go offline from time to time, but it must receive all the messages it has missed while being offline when it connects to BlazeDS again. (Of course the Flex client connects with the same client ID every time).
It cannot be guaranteed that the Flex client is shut down gracefully.
Everything works fine if I explicitly disconnect my consumer on the Flex side by calling disconnect() - I do this in the exit handler of the application. However, because of #3 above, it is not guaranteed that disconnect() is always called. When the Flex client shuts down without calling disconnect(), the subscription of the "proxy JMS client" that BlazeDS creates and associates with the Flex client seems to stay active towards ActiveMQ, so ActiveMQ still thinks the client is logged in. When the Flex app starts up the next time, it is unable to log in to BlazeDS because ActiveMQ refuses its subscription, claiming that the client ID is already taken. Why is this so, and what can I do to ensure that BlazeDS takes the "proxy JMS client" offline in ActiveMQ when its real Flex counterpart terminates unexpectedly?
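For context, the "client ID is already taken" refusal is standard JMS durable-subscription behavior: while the broker still considers the first connection active, a second connection with the same client ID is rejected. A minimal sketch against a placeholder broker URL and client ID:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ClientIdCollision {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder
        Connection first = cf.createConnection();
        first.setClientID("flex-client-1"); // durable subscriptions need a fixed ID
        first.start();

        Connection second = cf.createConnection();
        // Throws InvalidClientIDException: the broker still sees the first
        // connection (here, the stale BlazeDS proxy) as active.
        second.setClientID("flex-client-1");
    }
}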
More detailed information: some debugging revealed the following.
BlazeDS does become aware of the termination of the Flex client: it prints a few exceptions to the console when running in debug mode. The messages are as follows:
[BlazeDS]23:18:13.688 [WARN] Endpoint with id 'my-streaming-amf' is closing the streaming connection to FlexClient with id '71E6466F-D91F-201C-F60A-A6CB52F95D9F' because endpoint encountered a socket write error, possibly due to an unresponsive FlexClient.
ClientAbortException: java.net.SocketException: Broken pipe
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:319)
at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:288)
at org.apache.catalina.connector.Response.flushBuffer(Response.java:542)
at org.apache.catalina.connector.ResponseFacade.flushBuffer(ResponseFacade.java:279)
at flex.messaging.endpoints.BaseStreamingHTTPEndpoint.handleFlexClientStreamingOpenRequest(BaseStreamingHTTPEndpoint.java:818)
at flex.messaging.endpoints.BaseStreamingHTTPEndpoint.serviceStreamingRequest(BaseStreamingHTTPEndpoint.java:1055)
at flex.messaging.endpoints.BaseStreamingHTTPEndpoint.service(BaseStreamingHTTPEndpoint.java:460)
at flex.messaging.MessageBrokerServlet.service(MessageBrokerServlet.java:353)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:263)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:584)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:737)
at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:434)
at org.apache.coyote.http11.InternalOutputBuffer.flush(InternalOutputBuffer.java:299)
at org.apache.coyote.http11.Http11Processor.action(Http11Processor.java:963)
at org.apache.coyote.Response.action(Response.java:183)
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:314)
... 20 more
[BlazeDS]23:18:13.689 [DEBUG] Streaming thread 'http-8400-1' for endpoint with id 'my-streaming-amf' is releasing connection and returning to the request handler pool.
[BlazeDS]23:18:13.689 [INFO] Number of streaming clients for FlexSession with id '5BC5E8D604A361BCA673B05AC624CCC1' is 0.
[BlazeDS]23:18:13.689 [DEBUG] Number of streaming clients for endpoint with id 'my-streaming-amf' is 0.
At this stage, the subscriptions are still shown on the ActiveMQ web admin interface as being active.
Killing BlazeDS (more precisely, the Tomcat server that hosts it) with kill -9 from the console makes ActiveMQ realize immediately that the "proxy JMS client" is gone, and it shows as offline on the ActiveMQ web admin interface. This made me conclude that BlazeDS keeps the proxy JMS client alive explicitly, since kill -9 gives BlazeDS no chance to unsubscribe the client, yet the client still goes offline in ActiveMQ.
So, the question once again: what can I do to ensure that BlazeDS takes the "proxy JMS client" offline in ActiveMQ when its real Flex counterpart terminates unexpectedly? Is this a bug in BlazeDS, or am I just missing some hidden configuration setting that would make it work?
Version information: BlazeDS 4.0, ActiveMQ 5.5.0, both freshly downloaded today. I'm using the Tomcat server in the BlazeDS turnkey but ActiveMQ is installed separately because the BlazeDS turnkey ships with ActiveMQ 4.1.1 only. By the way, that version of ActiveMQ has the same issue.
The problem is that there is no way for BlazeDS to detect that your Flex client was shut down; you will have to implement your own mechanism. My suggestion is to use a heartbeat implemented with messaging: if no message is received from a client within some interval, you can assume the Flex client is gone and do the disconnect (or you can use the session timeout mechanism on the server and do the disconnect on session expiry). A rough sketch of the server side follows.
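This is only a sketch of the idea, not a BlazeDS API: ping(clientId) would be wired to whatever heartbeat message the Flex client sends, and disconnectJmsClient(clientId) is a hypothetical hook where you tear down the proxy JMS subscription.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeartbeatMonitor {
    private static final long TIMEOUT_MS = 30000; // tune to your network
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();
    private final ScheduledExecutorService sweeper =
            Executors.newSingleThreadScheduledExecutor();

    public void start() {
        sweeper.scheduleAtFixedRate(this::sweep, TIMEOUT_MS, TIMEOUT_MS,
                TimeUnit.MILLISECONDS);
    }

    // Call whenever a heartbeat message arrives from a Flex client.
    public void ping(String clientId) {
        lastSeen.put(clientId, System.currentTimeMillis());
    }

    private void sweep() {
        long now = System.currentTimeMillis();
        lastSeen.forEach((id, seen) -> {
            if (now - seen > TIMEOUT_MS) {
                lastSeen.remove(id);
                disconnectJmsClient(id); // hypothetical: drop the proxy consumer
            }
        });
    }

    private void disconnectJmsClient(String clientId) {
        // ... unsubscribe the BlazeDS "proxy JMS client" for this ID ...
    }
}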
What you have seen (the exception caught when the streaming channel is closed) is not enough to conclude with 100% certainty that the Flex client is gone. The streaming is implemented with an HTTP connection kept open forever (used to send server messages) plus periodic HTTP POST calls (initiated by the client to send its messages). On some networks a firewall may decide to kill the HTTP connection after a couple of seconds, and you will receive the same error as the one you posted. However, that does not mean the Flex client was killed - the client can use a fallback strategy and switch to short/long polling in this case. It would actually be a bug if BlazeDS automatically did the JMS disconnect in this case.