Remote JMS communication works, EJB communication fails

I'm currently getting acquainted with Java EE, especially with how to use EJB and JMS.
The Issue in a Nutshell
I have a local client and a remote server on the internet. JMS communication works fine; EJB communication doesn't. Both kinds of communication succeed in my local network.
The Issue in More Detail
I'm using Maven to build:
an EAR containing a couple of stateless remote EJBs and one MDB
a desktop client which includes Maven-generated EJB client JARs that are used to communicate with the server.
I'm using a WildFly application server to deploy the EAR. According to the server logs, the EJBs are exported successfully. All traffic is based on http-remoting; Undertow switches protocols as needed based on the sniffed traffic.
Let's take the client login as an example. The login API call is based on remote EJB communication, while JMS messages are sent at the same time.
If I run both client and server on my local machine from Eclipse, login works fine and JMS messages are processed correctly. The same applies if I run the client on a different machine in the same network. However, if I run the client in my local network but the server on the internet, EJB communication fails while JMS messages are still processed correctly.
Why is that?
Log Content
The remote EJB's login method is never invoked, and there are no server-side logs. I have added log4j.logger.org.jboss.ejb.client=TRACE to log4j.properties. These are the client logs for a failing API call:
2015-06-21 16:47:49 [GS Desktop Init Thread] DEBUG PropertiesBasedEJBClientConfiguration:242 - endpoint.create.options. has the following options {}
2015-06-21 16:47:49 [GS Desktop Init Thread] TRACE PropertiesBasedEJBClientConfiguration:272 - Options {} have been merged with defaults {org.xnio.Options.THREAD_DAEMON=>true} to form {org.xnio.Options.THREAD_DAEMON=>true}
2015-06-21 16:47:49 [GS Desktop Init Thread] DEBUG PropertiesBasedEJBClientConfiguration:242 - remote.connectionprovider.create.options. has the following options {org.xnio.Options.SSL_ENABLED=>false}
2015-06-21 16:47:49 [GS Desktop Init Thread] TRACE PropertiesBasedEJBClientConfiguration:272 - Options {org.xnio.Options.SSL_ENABLED=>false} have been merged with defaults {} to form {org.xnio.Options.SSL_ENABLED=>false}
2015-06-21 16:47:49 [GS Desktop Init Thread] DEBUG PropertiesBasedEJBClientConfiguration:242 - remote.connection.default.connect.options. has the following options {org.xnio.Options.SASL_DISALLOWED_MECHANISMS=>[JBOSS-LOCAL-USER],org.xnio.Options.SASL_POLICY_NOANONYMOUS=>false}
2015-06-21 16:47:49 [GS Desktop Init Thread] TRACE PropertiesBasedEJBClientConfiguration:272 - Options {org.xnio.Options.SASL_DISALLOWED_MECHANISMS=>[JBOSS-LOCAL-USER],org.xnio.Options.SASL_POLICY_NOANONYMOUS=>false} have been merged with defaults {} to form {org.xnio.Options.SASL_DISALLOWED_MECHANISMS=>[JBOSS-LOCAL-USER],org.xnio.Options.SASL_POLICY_NOANONYMOUS=>false}
2015-06-21 16:47:49 [GS Desktop Init Thread] DEBUG PropertiesBasedEJBClientConfiguration:242 - remote.connection.default.channel.options. has the following options {}
2015-06-21 16:47:49 [GS Desktop Init Thread] DEBUG PropertiesBasedEJBClientConfiguration:490 - Connection org.jboss.ejb.client.PropertiesBasedEJBClientConfiguration$RemotingConnectionConfigurationImpl#33f49f38 successfully created for connection named default
2015-06-21 16:47:49 [GS Desktop Init Thread] DEBUG PropertiesBasedEJBClientConfiguration:295 - No clusters configured in properties
2015-06-21 16:47:49 [GS Desktop Init Thread] DEBUG EJBClientPropertiesLoader:100 - Looking for jboss-ejb-client.properties using classloader sun.misc.Launcher$AppClassLoader#58644d46
2015-06-21 16:47:49 [GS Desktop Init Thread] INFO client:45 - JBoss EJB Client version 2.1.1.Final
...
2015-06-21 16:47:51 [Remoting "config-based-ejb-client-endpoint" task-4] DEBUG RemotingConnectionEJBReceiver:191 - Channel Channel ID eb5d763d (outbound) of Remoting connection 25bff644 to euve1234.vserver.de/84.46.96.86:8080 opened for context EJBReceiverContext{clientContext=org.jboss.ejb.client.EJBClientContext#3baeae68, receiver=Remoting connection EJB receiver [connection=org.jboss.ejb.client.remoting.ConnectionPool$PooledConnection#3f310404,channel=jboss.ejb,nodename=euve1234]} Waiting for version handshake message from server
2015-06-21 16:47:51 [Remoting "config-based-ejb-client-endpoint" task-5] INFO remoting:103 - EJBCLIENT000017: Received server version 2 and marshalling strategies [river]
2015-06-21 16:47:51 [GS Desktop Thread 0] INFO remoting:218 - EJBCLIENT000013: Successful version handshake completed for receiver context EJBReceiverContext{clientContext=org.jboss.ejb.client.EJBClientContext#3baeae68, receiver=Remoting connection EJB receiver [connection=org.jboss.ejb.client.remoting.ConnectionPool$PooledConnection#3f310404,channel=jboss.ejb,nodename=euve1234]} on channel Channel ID eb5d763d (outbound) of Remoting connection 25bff644 to euve1234.vserver.de/84.46.96.86:8080
2015-06-21 16:47:51 [Remoting "config-based-ejb-client-endpoint" task-6] TRACE ChannelAssociation:375 - Received message with header 0x8
2015-06-21 16:47:51 [Remoting "config-based-ejb-client-endpoint" task-6] DEBUG RemotingConnectionEJBReceiver:763 - Received module availability report for 11 modules
2015-06-21 16:47:51 [Remoting "config-based-ejb-client-endpoint" task-6] DEBUG RemotingConnectionEJBReceiver:765 - Registering module EJBModuleIdentifier{appName='GSServerEAR-0.0.1', moduleName='GSAuthManagerEjb-0.0.1', distinctName=''} availability for receiver context EJBReceiverContext{clientContext=org.jboss.ejb.client.EJBClientContext#3baeae68, receiver=Remoting connection EJB receiver [connection=org.jboss.ejb.client.remoting.ConnectionPool$PooledConnection#3f310404,channel=jboss.ejb,nodename=euve1234]}
2015-06-21 16:47:51 [Remoting "config-based-ejb-client-endpoint" task-6] DEBUG RemotingConnectionEJBReceiver:765 - Registering module EJBModuleIdentifier{appName='GSServerEAR-0.0.1', moduleName='GSNotificationManagerEjb-0.0.1', distinctName=''} availability for receiver context EJBReceiverContext{clientContext=org.jboss.ejb.client.EJBClientContext#3baeae68, receiver=Remoting connection EJB receiver [connection=org.jboss.ejb.client.remoting.ConnectionPool$PooledConnection#3f310404,channel=jboss.ejb,nodename=euve1234]}
...
2015-06-21 16:47:51 [Remoting "config-based-ejb-client-endpoint" task-6] DEBUG RemotingConnectionEJBReceiver:765 - Registering module EJBModuleIdentifier{appName='GSServerEAR-0.0.1', moduleName='GSServerEAR-0.0.1', distinctName=''} availability for receiver context EJBReceiverContext{clientContext=org.jboss.ejb.client.EJBClientContext#3baeae68, receiver=Remoting connection EJB receiver [connection=org.jboss.ejb.client.remoting.ConnectionPool$PooledConnection#3f310404,channel=jboss.ejb,nodename=euve1234]}
2015-06-21 16:47:51 [GS Desktop Thread 0] DEBUG ConfigBasedEJBClientContextSelector:174 - Registered 1 remoting EJB receivers for EJB client context org.jboss.ejb.client.EJBClientContext#3baeae68
...
2015-06-21 16:47:51 [JavaFX Application Thread] WARN GsTask:38 - API call background task failed
java.lang.IllegalStateException: EJBCLIENT000025: No EJB receiver available for handling [appName:GSServerEAR, moduleName:GSAuthManagerEjb-0.0.1, distinctName:] combination for invocation context org.jboss.ejb.client.EJBClientInvocationContext#63dd58c4
at org.jboss.ejb.client.EJBClientContext.requireEJBReceiver(EJBClientContext.java:774)
at org.jboss.ejb.client.ReceiverInterceptor.handleInvocation(ReceiverInterceptor.java:116)
at org.jboss.ejb.client.EJBClientInvocationContext.sendRequest(EJBClientInvocationContext.java:186)
at org.jboss.ejb.client.EJBInvocationHandler.sendRequestWithPossibleRetries(EJBInvocationHandler.java:255)
at org.jboss.ejb.client.EJBInvocationHandler.doInvoke(EJBInvocationHandler.java:200)
at org.jboss.ejb.client.EJBInvocationHandler.doInvoke(EJBInvocationHandler.java:183)
at org.jboss.ejb.client.EJBInvocationHandler.invoke(EJBInvocationHandler.java:146)
at com.sun.proxy.$Proxy7.createSession(Unknown Source)
...
Some of My Thoughts
Could the issue be caused by
Invalid IPs?
No. After moving the server, I updated the standalone.xml configuration, and I've verified by observing network traffic on all machines that all calls are received. JMS works.
WildFly security settings, e.g. concerning security realm configuration?
No. Login works locally, and these settings remain valid after moving the server. Both JMS and EJB use the same WildFly application user. JMS works.
Could it be a networking/routing issue, since EJBs are based on RMI, or some kind of firewall issue?
Probably, but JMS works. I'm not yet very familiar with JMS, but isn't it based on RMI as well? I send javax.jms.ObjectMessages, and Session.createObjectMessage(Serializable object) takes a Serializable, which is why I assume RMI is involved there, too (see the sketch after this list).
Locally, WildFly runs on Windows 7; remotely, WildFly runs on Ubuntu (I've tried Ubuntu 10/12/14 for this).
Concerning Java and WildFly: these are platform-independent ("write once, run anywhere"), so I consider it very unlikely that the problem is caused by the underlying OS. I've verified that traffic flows and that JMS works.
Please correct me if I'm wrong.
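For reference, this is the kind of JMS usage I mean - a generic sketch of sending an ObjectMessage, not my actual code:

import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.ObjectMessage;
import javax.jms.Session;

static void sendObjectMessage(Session session, MessageProducer producer,
                              java.io.Serializable payload) throws JMSException {
    // createObjectMessage accepts any Serializable payload, which is why
    // it feels RMI-like: the payload is marshalled via Java serialization.
    ObjectMessage message = session.createObjectMessage(payload);
    producer.send(message);
}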
Additional Notes
I know that I've provided little information so far, since I didn't want to bloat my question. Please don't hesitate to ask if you need more information; this applies to code, too.
I'm using Java SE 8 / Java EE 7. Concerning WildFly: I've tested 8.1.0.Final, 8.2.0.Final and 9.0.0.Beta2.
Please don't simply refer me to online examples. I've been working through such examples for days already, and please keep in mind that the local communication already works fine.
I highly appreciate any thoughts and comments on this, since I'm really stuck. Many thanks in advance.
Update 1: EJB Implementation, Client Context Creation and EJB Lookup
Server:
@Remote
public interface GsAuthManager {
    GsClientSession createSession(String username, String password);
}

@Stateless
public class GsAuthManagerBean implements GsAuthManager {
    @Override
    public GsClientSession createSession(String username, String password) {
        // ...
    }
}
WildFly logs:
java:jboss/exported/GSServerEAR/GSAuthManagerEjb-0.0.1/GsAuthManagerBean!de.genesys.server.ejb.auth.GsAuthManager
Client:
static void initEjbClient(String serverHostname, String username, String password) {
    final Properties ejbClientProps = new Properties();
    ejbClientProps.put("remote.connections", "default");
    ejbClientProps.put("remote.connection.default.port", "8080");
    ejbClientProps.put("remote.connection.default.host", serverHostname);
    ejbClientProps.put("remote.connection.default.username", username);
    ejbClientProps.put("remote.connection.default.password", password);
    ejbClientProps.put("remote.connection.default.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS", "false");
    ejbClientProps.put("remote.connection.default.connect.options.org.xnio.Options.SASL_DISALLOWED_MECHANISMS", "JBOSS-LOCAL-USER");
    ejbClientProps.put("remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED", "false");

    EJBClientConfiguration clientConfig = new PropertiesBasedEJBClientConfiguration(ejbClientProps);
    ContextSelector<EJBClientContext> selector = new ConfigBasedEJBClientContextSelector(clientConfig);
    EJBClientContext.setSelector(selector);
}
static GsAuthManager initEjbProxy(String serverHostname, String username, String password) throws NamingException {
    Properties props = new Properties();
    props.put("jboss.naming.client.ejb.context", true);
    props.put("jboss.naming.client.connect.options.org.xnio.Options.SASL_POLICY_NOPLAINTEXT", "false");
    props.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
    props.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
    props.put(Context.PROVIDER_URL, "http-remoting://" + serverHostname + ":8080");
    props.put(Context.SECURITY_PRINCIPAL, username);
    props.put(Context.SECURITY_CREDENTIALS, password);

    // Stripped down; the original code keeps a strong reference and closes the context on program termination.
    InitialContext context = new InitialContext(props);
    return (GsAuthManager) context.lookup("ejb:GSServerEAR/GSAuthManagerEjb-0.0.1/GsAuthManagerBean!de.genesys.server.ejb.auth.GsAuthManager");
}

In the logs you get appName='GSServerEAR-0.0.1' on deployment, but when the exception occurs it is appName:GSServerEAR. So I suppose the name of your EAR file at deployment time contains the version, and it should not.
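A minimal sketch of one possible workaround, assuming the deployed application name really is the versioned one shown in the module availability report: make the client lookup string use that name (alternatively, configure the build so the deployed EAR carries no version suffix):

// Lookup string adjusted to the versioned appName from the module report (assumption):
return (GsAuthManager) context.lookup(
        "ejb:GSServerEAR-0.0.1/GSAuthManagerEjb-0.0.1/GsAuthManagerBean!de.genesys.server.ejb.auth.GsAuthManager");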

Related

JMeter testing of ASP.NET application gets timeouts

Problem:
I'm testing my ASP.NET Web API application on my server (using IIS). The concurrency number is set to 2000 and the loop count to forever; after several seconds I get a Connection timed out: connect error.
What I have tried:
Set the HTTP connect timeout and response timeout to 200000 ms in the JMeter GUI.
Set requestQueueLimit to 65535 and the minimum process count to 15 in IIS Manager.
Set minWorkerThreads and minIoThreads to 200 and the timeout to 20 minutes in the web.config file, then restarted my application in IIS.
None of the above worked, and I found that the server's CPU usage stays low. Here are the screenshots taken while testing with JMeter:
[Screenshots: CPU usage; JMeter results]
Here is the error log:
org.apache.http.conn.HttpHostConnectException: Connect to XXX.XXX.com:80 [XXX.XXXX.com/XXXX] failed: Connection timed out: connect
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl$JMeterDefaultHttpClientConnectionOperator.connect(HTTPHC4Impl.java:408)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.executeRequest(HTTPHC4Impl.java:939)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.sample(HTTPHC4Impl.java:650)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy.sample(HTTPSamplerProxy.java:66)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1301)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1290)
at org.apache.jmeter.threads.JMeterThread.doSampling(JMeterThread.java:651)
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:570)
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:501)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:268)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.ConnectException: Connection timed out: connect
at java.net.DualStackPlainSocketImpl.connect0(Native Method)
Check Maximum Concurrent Connections and other limits in your web site configuration: Advanced Settings->Limits->Maximum Concurrent Connections
It may not be connected with IIS at all; the timeout can happen at your website level due to incorrect database configuration or inefficient algorithms. Consider re-running your test with profiler tool telemetry like YourKit or dotTrace - it will give you full information regarding what's going on under the hood.
Don't run load tests using the JMeter GUI; it's only for test development and debugging. When it comes to execution, you should run your JMeter tests in command-line non-GUI mode, as shown below.
Remove all the listeners; they don't add any value and just consume valuable resources.
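For reference, a typical non-GUI run looks like this (test plan and result file names are placeholders; -n selects non-GUI mode, -t points at the test plan, -l writes the results file):

jmeter -n -t testplan.jmx -l results.jtl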

Consistent periodic first-chance SocketException in AWS ECS/Fargate hosted Asp.NET web service

I have an ASP.NET Core 5 web service (.NET 5) hosted in AWS ECS/Fargate. It uses an Application Load Balancer (ALB) with health checks enabled, pointing to /health, which uses the standard health checks library.
I'm monitoring the service with Datadog, with dotnet runtime metrics turned on. With this monitoring, we are seeing a consistent ~30 first-chance SocketExceptions per minute, even when sitting idle and not handling any real requests. The only requests being served are the health check requests.
I remote-exec'd into the ECS task and used dotnet-dump to extract the exception from the heap of a memory dump. Here's what I found:
Exception type: System.Net.Sockets.SocketException
Message: Operation canceled
InnerException: <none>
StackTrace (generated):
Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.dll!Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.Internal.SocketAwaitableEventArgs.<GetResult>g__ThrowSocketException|7_0(System.Net.Sockets.SocketError)+0x2a
Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.dll!Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.Internal.SocketAwaitableEventArgs.GetResult()+0x29
Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.dll!Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.Internal.SocketConnection+<ProcessReceives>d__28.MoveNext()+0x1b5
System.Private.CoreLib.dll!System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()+0x1c
System.Private.CoreLib.dll!System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(System.Threading.Tasks.Task)+0xcc
System.Private.CoreLib.dll!System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(System.Threading.Tasks.Task)+0x46
Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.dll!Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.Internal.SocketConnection+<DoReceive>d__27.MoveNext()+0xfb
We also see a less frequent first-chance ObjectDisposedException (~1 per minute) that could be related (or a race-condition artifact of the other), which I found in the memory dump analysis:
Exception type: System.ObjectDisposedException
Message: Cannot access a disposed object.
InnerException: <none>
StackTrace (generated):
System.Net.Sockets.dll!System.Net.Sockets.Socket.ThrowObjectDisposedException()+0x6a
System.Net.Sockets.dll!System.Net.Sockets.Socket.ReceiveAsync(System.Net.Sockets.SocketAsyncEventArgs, System.Threading.CancellationToken)+0x67
Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.dll!Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.Internal.SocketConnection+<ProcessReceives>d__28.MoveNext()+0x219
System.Private.CoreLib.dll!System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()+0x1c
System.Private.CoreLib.dll!System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(System.Threading.Tasks.Task)+0xcc
System.Private.CoreLib.dll!System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(System.Threading.Tasks.Task)+0x46
Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.dll!Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.Internal.SocketConnection+<DoReceive>d__27.MoveNext()+0xfb
I don't see this issue when running on my local machine, and I don't see any logs indicating that any requests were cancelled.
I suspected that the ALB was not reading the response body and was cancelling every request after reading only the status code, so I changed the health check's HealthCheckOptions.ResponseWriter to return 204 No Content with no response body, but that didn't help at all.

Producer clientId=producer-2 Connection to node -1 could not be established. Broker may not be available

I use Spring Boot as a Kafka consumer, and the program receives messages from Kafka, but the log contains the following information:
[kafka-producer-network-thread | producer-2] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-2] Connection to node -1 could not be established. Broker may not be available.
Please help me solve this problem.

Corda node shuts down along with Spring Boot client

I have a node running on my machine along with a Spring Boot client. The client connects to the node's RPC port and everything works well, but when I close my client the node crashes and I have to restart it. Why is this happening? Is it a bug, or am I doing something wrong? I have also deployed them to the cloud, and the same problem happens there.
Corda Open Source 4.0 (503a2ff)
May 16 11:37:43 broker java[16853]: Logs can be found in : /opt/corda/logs
May 16 11:37:58 broker java[16853]: Advertised P2P messaging addresses : 35.228.97.4:10011
May 16 11:37:58 broker java[16853]: RPC connection address : 10.166.0.2:10012
May 16 11:37:58 broker java[16853]: RPC admin connection address : 10.166.0.2:10050
May 16 11:38:01 broker java[16853]: Loaded 2 CorDapp(s) : Contract CorDapp: Template CorDapp version 1 by vendor Corda Ope
May 16 11:38:01 broker java[16853]: Node for "Broker" started up and registered in 19.86 sec
May 16 11:38:01 broker java[16853]: SSH server listening on port : 2222
May 16 12:10:03 broker java[16853]: Shutting down ...
It depends on how you create your CordaRPCOps class.
If it is a bean, Spring will call CordaRPCOps.shutdown when the client shuts down, because by default Spring triggers any method named shutdown on any bean when the context closes. Therefore, not creating it as a bean - for example, creating a wrapper class around CordaRPCOps as the bean instead - will resolve the issue.
Alternatively, you can tell Spring not to trigger the shutdown method by defining your bean like this:
@Bean(destroyMethod = "")
public CordaRPCOps proxy() { /* create and return the RPC proxy */ }
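A minimal sketch of such a bean definition, assuming Corda 4's CordaRPCClient API; the address and credentials are placeholders taken from the log above for illustration:

import net.corda.client.rpc.CordaRPCClient;
import net.corda.core.messaging.CordaRPCOps;
import net.corda.core.utilities.NetworkHostAndPort;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RpcConfig {

    // The empty destroyMethod stops Spring from invoking the inferred
    // shutdown() method on the proxy when the application context closes.
    @Bean(destroyMethod = "")
    public CordaRPCOps proxy() {
        CordaRPCClient client = new CordaRPCClient(NetworkHostAndPort.parse("10.166.0.2:10012"));
        return client.start("user", "password").getProxy(); // placeholder credentials
    }
}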

BlazeDS+ActiveMQ: non-graceful disconnection of Flex client from a durable topic does not remove it from ActiveMQ

I'm trying to make a Flex-based desktop application consume messages from an ActiveMQ topic with a durable subscription, using the JMS bridge of BlazeDS. The basic scenario is as follows:
Messages are produced by other producers in the topic to which the Flex client is subscribed.
The Flex client may go offline from time to time, but it must receive all the messages it has missed while being offline when it connects to BlazeDS again. (Of course the Flex client connects with the same client ID every time).
It cannot be guaranteed that the Flex client is shut down gracefully.
Everything works fine if I explicitly disconnect my consumer on the Flex side by calling disconnect(), which I do in the exit handler of the application. However, due to point 3 above, it is not guaranteed that disconnect() is always called. When the Flex client shuts down without calling disconnect(), the subscription of the "proxy JMS client" that BlazeDS creates and associates with the Flex client seems to stay active towards ActiveMQ, so ActiveMQ still thinks the client is logged in. When the Flex app starts up the next time, it is unable to log in to BlazeDS, because ActiveMQ refuses its subscription, claiming that the client ID is already taken. Why is that, and what can I do to ensure that BlazeDS takes the "proxy JMS client" offline in ActiveMQ when its real Flex counterpart terminates unexpectedly?
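For reference, the plain-JMS equivalent of the durable subscription concept described above - a generic sketch, not the BlazeDS bridge configuration itself (client ID and subscription name are placeholders):

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;

static TopicSubscriber subscribeDurably(Connection connection, Topic topic) throws JMSException {
    // The client ID must be set before the connection is started and must be
    // identical on every run, so the broker can associate the subscription
    // with the returning client and deliver messages missed while offline.
    connection.setClientID("flex-desktop-client");
    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    // A second connection with the same client ID is rejected while the first
    // is still considered active - the failure mode described above.
    return session.createDurableSubscriber(topic, "flex-desktop-subscription");
}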
More detailed information: some debugging revealed that:
BlazeDS does become aware of the termination of the Flex client: in debug mode it prints a few exceptions to the console. The messages are as follows:
[BlazeDS]23:18:13.688 [WARN] Endpoint with id 'my-streaming-amf' is closing the streaming connection to FlexClient with id '71E6466F-D91F-201C-F60A-A6CB52F95D9F' because endpoint encountered a socket write error, possibly due to an unresponsive FlexClient.
ClientAbortException: java.net.SocketException: Broken pipe
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:319)
at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:288)
at org.apache.catalina.connector.Response.flushBuffer(Response.java:542)
at org.apache.catalina.connector.ResponseFacade.flushBuffer(ResponseFacade.java:279)
at flex.messaging.endpoints.BaseStreamingHTTPEndpoint.handleFlexClientStreamingOpenRequest(BaseStreamingHTTPEndpoint.java:818)
at flex.messaging.endpoints.BaseStreamingHTTPEndpoint.serviceStreamingRequest(BaseStreamingHTTPEndpoint.java:1055)
at flex.messaging.endpoints.BaseStreamingHTTPEndpoint.service(BaseStreamingHTTPEndpoint.java:460)
at flex.messaging.MessageBrokerServlet.service(MessageBrokerServlet.java:353)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:263)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:584)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:737)
at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:434)
at org.apache.coyote.http11.InternalOutputBuffer.flush(InternalOutputBuffer.java:299)
at org.apache.coyote.http11.Http11Processor.action(Http11Processor.java:963)
at org.apache.coyote.Response.action(Response.java:183)
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:314)
... 20 more
[BlazeDS]23:18:13.689 [DEBUG] Streaming thread 'http-8400-1' for endpoint with id 'my-streaming-amf' is releasing connection and returning to the request handler pool.
[BlazeDS]23:18:13.689 [INFO] Number of streaming clients for FlexSession with id '5BC5E8D604A361BCA673B05AC624CCC1' is 0.
[BlazeDS]23:18:13.689 [DEBUG] Number of streaming clients for endpoint with id 'my-streaming-amf' is 0.
At this stage, the subscriptions are still shown as active on the ActiveMQ web admin interface.
Killing BlazeDS (more precisely, the Tomcat server that hosts it) with kill -9 from the console makes ActiveMQ realize immediately that the "proxy JMS client" is gone, and it goes offline on the ActiveMQ web admin interface. This made me conclude that BlazeDS keeps the proxy JMS client alive explicitly, since kill -9 gives BlazeDS no chance to unsubscribe the client, yet the client still goes offline in ActiveMQ.
So, the question once again: What can I do here to ensure that BlazeDS makes the "proxy JMS client" offline in ActiveMQ when its real Flex counterpart terminates unexpectedly? Is this a bug in BlazeDS or am I just missing some hidden configuration setting that would make it work?
Version information: BlazeDS 4.0, ActiveMQ 5.5.0, both freshly downloaded today. I'm using the Tomcat server in the BlazeDS turnkey but ActiveMQ is installed separately because the BlazeDS turnkey ships with ActiveMQ 4.1.1 only. By the way, that version of ActiveMQ has the same issue.
The problem is that there is no way for BlazeDS to detect that your Flex client was shut down; you will have to implement your own mechanism. My suggestion is to use a heartbeat implemented with messaging: if no message is received from the client within a certain time interval, you can assume that the Flex client is gone and do the disconnect (or you can use the session timeout mechanism on the server and do the disconnect on session expiry). A sketch of this idea follows below.
What you have seen (the exception caught when the streaming channel is closed) is not enough to say with 100% certainty that the Flex client is gone. Streaming is implemented using an HTTP connection that is kept open forever (used to send server messages) plus periodic HTTP POST calls (initiated by the client to send messages). In some networks a firewall may decide to kill the HTTP connection after a couple of seconds, and you will receive the same error as the one you posted. However, that does not mean the Flex client was killed - the Flex client can use a fallback strategy and switch to short/long polling in this case. It would actually be a bug if BlazeDS automatically did the JMS disconnect in this situation.
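A minimal, hypothetical sketch of the server-side half of such a heartbeat in plain Java - no BlazeDS APIs; the disconnectClient callback stands in for whatever actually unsubscribes the proxy JMS client, and the timeout value is up to you:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class HeartbeatMonitor {

    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();
    private final long timeoutMillis;
    private final Consumer<String> disconnectClient; // hypothetical disconnect hook

    public HeartbeatMonitor(long timeoutMillis, Consumer<String> disconnectClient) {
        this.timeoutMillis = timeoutMillis;
        this.disconnectClient = disconnectClient;
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(this::sweep, timeoutMillis, timeoutMillis, TimeUnit.MILLISECONDS);
    }

    /** Called whenever a heartbeat message arrives from a Flex client. */
    public void beat(String clientId) {
        lastSeen.put(clientId, System.currentTimeMillis());
    }

    /** Disconnects every client whose last heartbeat is older than the timeout. */
    private void sweep() {
        long now = System.currentTimeMillis();
        for (Map.Entry<String, Long> entry : lastSeen.entrySet()) {
            if (now - entry.getValue() > timeoutMillis) {
                lastSeen.remove(entry.getKey());
                disconnectClient.accept(entry.getKey());
            }
        }
    }
}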
