Here is what our setup looks like:
SignalR Server (an ASP.NET MVC application) on Windows Server 2012.
Sencha HTML5 apps (SignalR clients) on the same server (Windows Server 2012).
.NET Windows Service on a Windows Server 2008 R2 server. This also acts as a SignalR client.
We are using the SignalR development build from a couple of weeks ago (1.1.0). Occasionally, we find that the SignalR server loses the connection to the .NET client. Using the latest build allowed us to add client-side logging.
We have a good example of this happening last Thursday (28 Mar 2013 at 6:33 PM).
Client-side log (for brevity, I have removed entries that are not relevant). Note that the Hub class is called "Transmitter".
ChangeState(New connection, Disconnected, Connecting)
WS: wss://www.myURL.com/SignalRServer/signalr/connect?transport=webSockets&connectionToken=fsfY8BWv5eiSNuQtHW6wfY6XWDInkrVEn5fyHogSIyapob1BxoZLz0aMO_IuARmd9suam3my3yU0sQOEYMah-ZVj_4idQHXKQmhUKyRFlNgObCWlblCDfXflOq4Fxt8P0&connectionData=[{"Name":"Transmitter"}]
Auto: Failed to connect to using transport webSockets. System.PlatformNotSupportedException: The WebSocket protocol is not supported on this platform.
at System.Net.WebSockets.ClientWebSocket..ctor()
at Microsoft.AspNet.SignalR.Client.Transports.WebSocketTransport.<PerformConnect>d__0.MoveNext()
SSE: GET https://www.myURL.com/SignalRServer/signalr/connect?transport=serverSentEvents&connectionToken=fsfY8BWv5eiSNuQtHW6wfY6XWDInkrVEn5fyHogSIyapob1BxoZLz0aMO_IuARmd9suam3my3yU0sQOEYMah-ZVj_4idQHXKQmhUKyRFlNgObCWlblCDfXflOq4Fxt8P0&connectionData=[{"Name":"Transmitter"}]
ChangeState(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Connecting, Connected)
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: initialized)
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {"C":"B,25|1,0|2,0|3,0","M":[{"H":"Transmitter","M":"joined","A":["0e3cd780-9efc-4824-b9a2-bae81b9be17f","27/03/2013 8:16:46 PM"]}]})
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {"C":"B,25|1,1|2,0|3,0","M":[{"H":"Transmitter","M":"isLoggedIn","A":[true]}]})
OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, {"I":"0"})
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {})
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {})
............
............
............
............
............
............
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {})
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {})
Connection Timed-out : Transport Lost Connection 28/03/2013 8:33:14 AM
ChangeState(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Connected, Reconnecting)
SSE: GET https://www.myURL.com/SignalRServer/signalr/?transport=serverSentEvents&connectionToken=fsfY8BWv5eiSNuQtHW6wfY6XWDInkrVEn5fyHogSIyapob1BxoZLz0aMO_IuARmd9suam3my3yU0sQOEYMah-ZVj_4idQHXKQmhUKyRFlNgObCWlblCDfXflOq4Fxt8P0&messageId=B%2C6&connectionData=[{"Name":"Transmitter"}]
ChangeState(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Reconnecting, Connected)
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: initialized)
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {"C":"B,7","M":[{"H":"Transmitter","M":"rejoined","A":["0e3cd780-9efc-4824-b9a2-bae81b9be17f","28/03/2013 6:33:12 PM"]}]})
OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, {"I":"12"})
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {})
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {})
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {})
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {})
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {})
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {})
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {})
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {})
............
............
Before the connection times out, everything works as expected: the SignalR server can call the .NET client's methods and vice versa.
After the connection times out, the .NET client can call server Hub methods, but the server cannot call the .NET client methods.
Other things to note:
Note that the connection timeout reports "28/03/2013 8:33:14 AM" as the time. This differs from the times logged in subsequent entries; the actual time was 6:33:14 PM (Australian EST).
As can be seen above, Server-Sent Events is used as the transport because the SignalR .NET client is on a Windows Server 2008 R2 machine, where the WebSocket protocol is not supported (hence the PlatformNotSupportedException at the start of the log).
There is no pattern to the timeout. Sometimes it does not happen for days; other times it happens within a few hours.
The timeout is not related to either machine (the one hosting the SignalR server or the one hosting the SignalR client) being rebooted, IIS resets, etc.
Also, this is quite different from momentary connection losses and re-connections, which seem to work as required. An example of a momentary loss of connection is shown below:
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {})
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {})
ChangeState(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Connected, Reconnecting)
SSE: GET https://www.myURL.com/SignalRServer/signalr/?transport=serverSentEvents&connectionToken=fsfY8BWv5eiSNuQtHW6wfY6XWDInkrVEn5fyHogSIyapob1BxoZLz0aMO_IuARmd9suam3my3yU0sQOEYMah-ZVj_4idQHXKQmhUKyRFlNgObCWlblCDfXflOq4Fxt8P0&messageId=B%2C1D%7CF%2CE%7CG%2C0%7CH%2C0&connectionData=[{"Name":"Transmitter"}]
ChangeState(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Reconnecting, Connected)
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {"C":"B,2","M":[{"H":"Transmitter","M":"rejoined","A":["00fabf13-eb07-4704-8d43-4d8865a519f0","28/03/2013 4:08:26 PM"]}]})
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: initialized)
OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, {"I":"11"})
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {})
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {})
SSE: OnMessage(0e3cd780-9efc-4824-b9a2-bae81b9be17f, Data: {})
What I don't understand is why the momentary connection losses and re-connections seem to work, but the reconnections after a connection time-out do not work.
Is there anything that we can do to address this?
Thanks in advance.
When the client's Disconnected event is raised, it means the connection was removed by the server. After that occurs, it's up to the developer to restart the connection.
We do this because keeping state around on the server for a long time is expensive.
I am trying to run the example CorDapp across two AWS instances, with the Notary and PartyA on the first instance and PartyB and PartyC on the second instance.
I followed the steps here:
Corda nodes: how to connect two independent pc as two nodes?
In the conf files (a sketch of these edits is shown below):
Notary and PartyA: I edited the P2P address to reflect the private IP of Instance 1.
PartyB and PartyC: I edited the P2P address to reflect the private IP of Instance 2.
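Roughly, the edits look like this in each node's node.conf (the IP addresses, ports, and PartyA's legal name are placeholders, not our actual values):

// Instance 1 - PartyA/node.conf (illustrative values only)
myLegalName="O=PartyA,L=London,C=GB"
p2pAddress="10.0.1.10:10008"

// Instance 2 - PartyC/node.conf (illustrative values only)
myLegalName="O=PartyC,L=Paris,C=FR"
p2pAddress="10.0.2.20:10013"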
With the above conf files, I ran the Network Bootstrapper jar on Instance 1, copied the PartyB and PartyC folders to Instance 2, and started the Notary and the parties one by one on the corresponding instances.
All nodes started successfully, but when I try to execute an IOU flow from PartyA (in Instance 1) to PartyC (in Instance 2), it hangs at the "Collecting signatures from counterparties" step without proceeding further. Below is what I see in PartyA's console:
Fri Nov 30 08:39:10 UTC 2018>>> flow start ExampleFlow$Initiator iouValue: 50, otherParty: "O=PartyC,L=Paris,C=FR"
Verifying contract constraints.
Signing transaction with our private key.
Gathering the counterparty's signature.
Collecting signatures from counterparties. (hanging here and not proceeding further)
When I look at the log for PartyA's node, it displays the following:
[INFO ] 2018-11-30T08:39:10,077Z [main] messaging.RPCServer.start - Starting RPC server with configuration RPCServerConfiguration(rpcThreadPoolSize=4, reapInterval=PT1S, deduplicationCacheExpiry=PT24H) {}
[INFO ] 2018-11-30T08:39:10,115Z [Thread-0 (ActiveMQ-client-global-threads)] bridging.BridgeControlListener.processControlMessage - Received bridge control message Create(nodeIdentity=DLHBP432vnpLNpCNwGQJjx3hd6RDz4LiYxmZJo757W8Hbw, bridgeInfo=BridgeEntry(queueName=internal.peers.DL9tRWQ867M3tni7KRqkXEJKPrkyW5KVj6fyRyDBHGaGA6, targets=[[2001:0:9d38:953c:3c:ce3:cbd9:3c59]:10013], legalNames=[O=PartyC, L=Paris, C=FR])) {}
[INFO ] 2018-11-30T08:39:11,072Z [nioEventLoopGroup-2-2] netty.AMQPClient.nextTarget - Retry connect to [2001:0:9d38:953c:3c:ce3:cbd9:3c59]:10013 {}
[INFO ] 2018-11-30T08:39:12,171Z [nioEventLoopGroup-2-3] netty.AMQPClient.operationComplete - Failed to connect to [2001:0:9d38:953c:3c:ce3:cbd9:3c59]:10013 {}
[INFO ] 2018-11-30T08:39:14,172Z [nioEventLoopGroup-2-4] netty.AMQPClient.nextTarget - Retry connect to [2001:0:9d38:953c:3c:ce3:cbd9:3c59]:10013 {}
[INFO ] 2018-11-30T08:39:15,175Z [nioEventLoopGroup-2-1] netty.AMQPClient.operationComplete - Failed to connect to [2001:0:9d38:953c:3c:ce3:cbd9:3c59]:10013 {}
I am able to ping between the instances using the private IPs without any issues. Can someone help me figure out what I am missing?
Thanks in advance.
This issue was caused by a firewall on the node's machine that was preventing the node's messages from reaching the counterparty nodes.
You need to open:
Outbound ports for your node's P2P address
Inbound ports for the other nodes' P2P addresses
On a large network, this can mean opening many inbound ports, which can be an issue for some companies' security policies. This issue is addressed by the Corda Firewall.
I'm currently getting started with Java EE, especially with how to use EJB and JMS.
The Issue in a Nutshell
I have a local client and a remote server on the internet. JMS communication works fine; EJB communication doesn't. Both work in my local network.
The Issue in More Detail
I'm using Maven to build:
an EAR containing a couple of stateless remote EJBs and one MDB
a desktop client that includes Maven-generated EJB client JARs, which are used to communicate with the server.
I'm using a WildFly application server to deploy the EAR. According to the server logs, EJBs are successfully exported. All traffic is based on http-remoting. Undertow switches protocols according to sniffed traffic as needed.
Let's take the client login as an example. The login API call is based on remote EJB communication; meanwhile, JMS messages are sent.
If I run both client and server on my local machine from Eclipse, login works fine and JMS messages are processed correctly. The same applies if I run the client on a different machine in the same network. If I run the client in my local network but the server on the internet, EJB communication fails while JMS messages are still processed correctly.
Why is that?
Log Content
The remote EJB's login method is not invoked; there are no server-side logs. I have added log4j.logger.org.jboss.ejb.client=TRACE to log4j.properties. These are the client logs for a failing API call:
2015-06-21 16:47:49 [GS Desktop Init Thread] DEBUG PropertiesBasedEJBClientConfiguration:242 - endpoint.create.options. has the following options {}
2015-06-21 16:47:49 [GS Desktop Init Thread] TRACE PropertiesBasedEJBClientConfiguration:272 - Options {} have been merged with defaults {org.xnio.Options.THREAD_DAEMON=>true} to form {org.xnio.Options.THREAD_DAEMON=>true}
2015-06-21 16:47:49 [GS Desktop Init Thread] DEBUG PropertiesBasedEJBClientConfiguration:242 - remote.connectionprovider.create.options. has the following options {org.xnio.Options.SSL_ENABLED=>false}
2015-06-21 16:47:49 [GS Desktop Init Thread] TRACE PropertiesBasedEJBClientConfiguration:272 - Options {org.xnio.Options.SSL_ENABLED=>false} have been merged with defaults {} to form {org.xnio.Options.SSL_ENABLED=>false}
2015-06-21 16:47:49 [GS Desktop Init Thread] DEBUG PropertiesBasedEJBClientConfiguration:242 - remote.connection.default.connect.options. has the following options {org.xnio.Options.SASL_DISALLOWED_MECHANISMS=>[JBOSS-LOCAL-USER],org.xnio.Options.SASL_POLICY_NOANONYMOUS=>false}
2015-06-21 16:47:49 [GS Desktop Init Thread] TRACE PropertiesBasedEJBClientConfiguration:272 - Options {org.xnio.Options.SASL_DISALLOWED_MECHANISMS=>[JBOSS-LOCAL-USER],org.xnio.Options.SASL_POLICY_NOANONYMOUS=>false} have been merged with defaults {} to form {org.xnio.Options.SASL_DISALLOWED_MECHANISMS=>[JBOSS-LOCAL-USER],org.xnio.Options.SASL_POLICY_NOANONYMOUS=>false}
2015-06-21 16:47:49 [GS Desktop Init Thread] DEBUG PropertiesBasedEJBClientConfiguration:242 - remote.connection.default.channel.options. has the following options {}
2015-06-21 16:47:49 [GS Desktop Init Thread] DEBUG PropertiesBasedEJBClientConfiguration:490 - Connection org.jboss.ejb.client.PropertiesBasedEJBClientConfiguration$RemotingConnectionConfigurationImpl#33f49f38 successfully created for connection named default
2015-06-21 16:47:49 [GS Desktop Init Thread] DEBUG PropertiesBasedEJBClientConfiguration:295 - No clusters configured in properties
2015-06-21 16:47:49 [GS Desktop Init Thread] DEBUG EJBClientPropertiesLoader:100 - Looking for jboss-ejb-client.properties using classloader sun.misc.Launcher$AppClassLoader#58644d46
2015-06-21 16:47:49 [GS Desktop Init Thread] INFO client:45 - JBoss EJB Client version 2.1.1.Final
...
2015-06-21 16:47:51 [Remoting "config-based-ejb-client-endpoint" task-4] DEBUG RemotingConnectionEJBReceiver:191 - Channel Channel ID eb5d763d (outbound) of Remoting connection 25bff644 to euve1234.vserver.de/84.46.96.86:8080 opened for context EJBReceiverContext{clientContext=org.jboss.ejb.client.EJBClientContext#3baeae68, receiver=Remoting connection EJB receiver [connection=org.jboss.ejb.client.remoting.ConnectionPool$PooledConnection#3f310404,channel=jboss.ejb,nodename=euve1234]} Waiting for version handshake message from server
2015-06-21 16:47:51 [Remoting "config-based-ejb-client-endpoint" task-5] INFO remoting:103 - EJBCLIENT000017: Received server version 2 and marshalling strategies [river]
2015-06-21 16:47:51 [GS Desktop Thread 0] INFO remoting:218 - EJBCLIENT000013: Successful version handshake completed for receiver context EJBReceiverContext{clientContext=org.jboss.ejb.client.EJBClientContext#3baeae68, receiver=Remoting connection EJB receiver [connection=org.jboss.ejb.client.remoting.ConnectionPool$PooledConnection#3f310404,channel=jboss.ejb,nodename=euve1234]} on channel Channel ID eb5d763d (outbound) of Remoting connection 25bff644 to euve1234.vserver.de/84.46.96.86:8080
2015-06-21 16:47:51 [Remoting "config-based-ejb-client-endpoint" task-6] TRACE ChannelAssociation:375 - Received message with header 0x8
2015-06-21 16:47:51 [Remoting "config-based-ejb-client-endpoint" task-6] DEBUG RemotingConnectionEJBReceiver:763 - Received module availability report for 11 modules
2015-06-21 16:47:51 [Remoting "config-based-ejb-client-endpoint" task-6] DEBUG RemotingConnectionEJBReceiver:765 - Registering module EJBModuleIdentifier{appName='GSServerEAR-0.0.1', moduleName='GSAuthManagerEjb-0.0.1', distinctName=''} availability for receiver context EJBReceiverContext{clientContext=org.jboss.ejb.client.EJBClientContext#3baeae68, receiver=Remoting connection EJB receiver [connection=org.jboss.ejb.client.remoting.ConnectionPool$PooledConnection#3f310404,channel=jboss.ejb,nodename=euve1234]}
2015-06-21 16:47:51 [Remoting "config-based-ejb-client-endpoint" task-6] DEBUG RemotingConnectionEJBReceiver:765 - Registering module EJBModuleIdentifier{appName='GSServerEAR-0.0.1', moduleName='GSNotificationManagerEjb-0.0.1', distinctName=''} availability for receiver context EJBReceiverContext{clientContext=org.jboss.ejb.client.EJBClientContext#3baeae68, receiver=Remoting connection EJB receiver [connection=org.jboss.ejb.client.remoting.ConnectionPool$PooledConnection#3f310404,channel=jboss.ejb,nodename=euve1234]}
...
2015-06-21 16:47:51 [Remoting "config-based-ejb-client-endpoint" task-6] DEBUG RemotingConnectionEJBReceiver:765 - Registering module EJBModuleIdentifier{appName='GSServerEAR-0.0.1', moduleName='GSServerEAR-0.0.1', distinctName=''} availability for receiver context EJBReceiverContext{clientContext=org.jboss.ejb.client.EJBClientContext#3baeae68, receiver=Remoting connection EJB receiver [connection=org.jboss.ejb.client.remoting.ConnectionPool$PooledConnection#3f310404,channel=jboss.ejb,nodename=euve1234]}
2015-06-21 16:47:51 [GS Desktop Thread 0] DEBUG ConfigBasedEJBClientContextSelector:174 - Registered 1 remoting EJB receivers for EJB client context org.jboss.ejb.client.EJBClientContext#3baeae68
...
2015-06-21 16:47:51 [JavaFX Application Thread] WARN GsTask:38 - API call background task failed
java.lang.IllegalStateException: EJBCLIENT000025: No EJB receiver available for handling [appName:GSServerEAR, moduleName:GSAuthManagerEjb-0.0.1, distinctName:] combination for invocation context org.jboss.ejb.client.EJBClientInvocationContext#63dd58c4
at org.jboss.ejb.client.EJBClientContext.requireEJBReceiver(EJBClientContext.java:774)
at org.jboss.ejb.client.ReceiverInterceptor.handleInvocation(ReceiverInterceptor.java:116)
at org.jboss.ejb.client.EJBClientInvocationContext.sendRequest(EJBClientInvocationContext.java:186)
at org.jboss.ejb.client.EJBInvocationHandler.sendRequestWithPossibleRetries(EJBInvocationHandler.java:255)
at org.jboss.ejb.client.EJBInvocationHandler.doInvoke(EJBInvocationHandler.java:200)
at org.jboss.ejb.client.EJBInvocationHandler.doInvoke(EJBInvocationHandler.java:183)
at org.jboss.ejb.client.EJBInvocationHandler.invoke(EJBInvocationHandler.java:146)
at com.sun.proxy.$Proxy7.createSession(Unknown Source)
...
Some of My Thoughts
Could the issue be caused by
Invalid IPs?
No. After moving the server, I've updated the standalone.xml configuration, and I've ensured by observing network traffic on all machines that all calls are received. JMS works.
WildFly security settings, e.g. concerning security realm configuration?
No. Login works locally. These settings should be valid after moving the server. Both JMS and EJB use the same WildFly application user. JMS works.
Could be a networking/routing issue, since EJBs are based on RMI, or some kind of firewall issue?
Probably, but JMS works. I'm not yet very familiar with JMS, but isn't it based on RMI? I send javax.jms.ObjectMessages. Session.createObjectMessage(Serializable object) takes a Serializable, which is why I suppose we have RMI here, too.
Locally, WildFly runs on Windows 7. Remotely, WildFly runs on Ubuntu. I've tried Ubuntu 10/12/14 for this.
Concerning Java and WildFly: these are platform independent ("write once, run everywhere"), so I think it is very unlikely that the problem is caused by the underlying OS. I've verified that traffic flows and JMS works.
Please correct me if I'm wrong.
Additional Notes
I know that I've provided little information so far, since I didn't want to bloat my question. Please don't hesitate to ask if you need more information. This applies to code, too.
I'm using Java SE 8 / Java EE 7. Concerning WildFly: I've tested 8.1.0.Final, 8.2.0.Final and 9.0.0.Beta2.
Please don't simply refer me to online examples; I've been working through those for days already, and please keep in mind that the local communication already works fine.
I highly appreciate any thoughts and comments on this, since I'm really stuck. Many thanks in advance.
Update 1: EJB Implementation, Client Context Creation and EJB Lookup
Server:
@Remote
public interface GsAuthManager {
    GsClientSession createSession(String username, String password);
}

@Stateless
public class GsAuthManagerBean implements GsAuthManager {
    @Override
    public GsClientSession createSession(String username, String password) {
        // ...
    }
}
WildFly logs:
java:jboss/exported/GSServerEAR/GSAuthManagerEjb-0.0.1/GsAuthManagerBean!de.genesys.server.ejb.auth.GsAuthManager
Client:
static void initEjbClient(String serverHostname, String username, String password) {
    final Properties ejbClientProps = new Properties();
    ejbClientProps.put("remote.connections", "default");
    ejbClientProps.put("remote.connection.default.port", "8080");
    ejbClientProps.put("remote.connection.default.host", serverHostname);
    ejbClientProps.put("remote.connection.default.username", username);
    ejbClientProps.put("remote.connection.default.password", password);
    ejbClientProps.put("remote.connection.default.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS", "false");
    ejbClientProps.put("remote.connection.default.connect.options.org.xnio.Options.SASL_DISALLOWED_MECHANISMS", "JBOSS-LOCAL-USER");
    ejbClientProps.put("remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED", "false");

    EJBClientConfiguration clientConfig = new PropertiesBasedEJBClientConfiguration(ejbClientProps);
    ContextSelector<EJBClientContext> selector = new ConfigBasedEJBClientContextSelector(clientConfig);
    EJBClientContext.setSelector(selector);
}
static GsAuthManager initEjbProxy(String serverHostname, String username, String password) throws NamingException {
    Properties props = new Properties();
    props.put("jboss.naming.client.ejb.context", true);
    props.put("jboss.naming.client.connect.options.org.xnio.Options.SASL_POLICY_NOPLAINTEXT", "false");
    props.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
    props.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
    props.put(Context.PROVIDER_URL, "http-remoting://" + serverHostname + ":8080");
    props.put(Context.SECURITY_PRINCIPAL, username);
    props.put(Context.SECURITY_CREDENTIALS, password);

    InitialContext context = new InitialContext(props); // Stripped down, original code keeps a strong reference and closes context on program termination
    return (GsAuthManager) context.lookup("ejb:GSServerEAR/GSAuthManagerEjb-0.0.1/GsAuthManagerBean!de.genesys.server.ejb.auth.GsAuthManager");
}
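These helpers are then used roughly like this (a simplified sketch; the credentials are placeholders, and createSession is the call that appears in the stack trace above):

initEjbClient("euve1234.vserver.de", "myAppUser", "myAppPassword");
GsAuthManager authManager = initEjbProxy("euve1234.vserver.de", "myAppUser", "myAppPassword");
GsClientSession session = authManager.createSession("user@example.com", "userPassword"); // fails with EJBCLIENT000025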
In the logs you get appName='GSServerEAR-0.0.1' on deployment, but when the exception occurs it is appName:GSServerEAR. So I suppose the runtime name of your EAR file (when you deploy it) contains the version, and it should not.
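A quick way to test this theory (just a sketch; the versioned application name is taken from the module availability log above) is to include the version in the lookup string:

// Hypothetical check: look up the bean using the versioned application name
// ("appName='GSServerEAR-0.0.1'") instead of the unversioned "GSServerEAR".
GsAuthManager authManager = (GsAuthManager) context.lookup(
        "ejb:GSServerEAR-0.0.1/GSAuthManagerEjb-0.0.1/GsAuthManagerBean"
        + "!de.genesys.server.ejb.auth.GsAuthManager");

Alternatively, configure the build so that the deployed application name carries no version suffix, in which case the existing lookup should match.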
I have two Tomcats with identical Java apps ('clients') that communicate with a single MQ server.
On one client, the communication is fast, but on the other it takes a lot of time (> 13 sec).
On the infrastructure side, the only difference is that the 'slow client' is on a different domain than the server, while the 'fast client' is on the same domain; as far as I know, that shouldn't cause any trouble.
I've sniffed the communication both ways with Wireshark, and I've found one difference: the MQ server sends an additional "ACK" to the slow client, gets nothing back, waits for 13 seconds, and then sends the "INITIAL DATA" packet that actually starts the work, like this:
Client -> Server : IBM WebSphere SYN
Server -> Client : IBM WebSphere SYN-ACK
Client -> Server : IBM WebSphere ACK
Client -> Server : INITIAL DATA
Server -> Client : IBM WebSphere ACK // only on slow client!!!
// Waits for 13 seconds - only on slow client.
Server -> Client : INITIAL DATA // from here, it flows on both clients
Notes:
1) The communication to the server is not SSL, and uses IP addresses rather than DNS names, so name-resolution issues should not occur.
2) I've published a similar question before; this is not the same case! Last time it was a NetBIOS-over-TCP issue.
The way we eventually overcame the issue was to use a SimpleConnectionManager and a long timeout.
It's a workaround, but it works.
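A minimal sketch of that workaround, assuming the connection pooling API of the IBM MQ classes for Java (the values here are illustrative, not the ones we actually used):

import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQSimpleConnectionManager;

public class MqConnectionPoolSetup {
    public static void configure() {
        MQSimpleConnectionManager connMan = new MQSimpleConnectionManager();
        connMan.setActive(MQSimpleConnectionManager.MODE_AUTO); // let the pool manage itself
        connMan.setTimeout(3600000);                            // keep unused connections for a long time (1 hour, in ms)
        connMan.setMaxConnections(50);                          // upper bound on pooled connections
        connMan.setMaxUnusedConnections(25);                    // idle connections to keep in the pool
        MQEnvironment.setDefaultConnectionManager(connMan);     // used by MQQueueManager instances created afterwards
    }
}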
I'm trying to integrate SignalR with an existing ASP.NET Web Forms app.
After initially connecting successfully, with the server side then calling back to the client JS function, SignalR seems to have trouble maintaining a connection. I'm developing on a Windows 7 machine, so the 10-connection limit makes this somewhat challenging to debug. I have, however, seen what appears to be the same issue when the website is deployed to a Windows Server 2003 Enterprise Edition server, so I don't think I'm seeing a connection-limit issue (I stand ready to be corrected, though).
Looking in Fiddler, I do eventually get a 200 for the connection request, but the only JSON I get back is this:
{"C":"B,0|7,4|8,0|9,0","T":1,"M":[]}
I have no idea what this represents. Initially, when the connection is successful, I get this (which includes the data payload I expect):
"C": "B,0|BK,1|BL,0|BM,0",
"M": [{
"H": "notifyHub",
"M": "notificationReceived",
"A": ["[{\"TransitionNotificationId\":527,\"AuthorizationJobId\":53,\"TransitionType\":2,\"IsWorkShop\":true},
{\"TransitionNotificationId\":528,\"AuthorizationJobId\":53,\"TransitionType\":12,\"IsWorkShop\":true},
{\"TransitionNotificationId\":580,\"AuthorizationJobId\":61,\"TransitionType\":2,\"IsWorkShop\":true}]"]
}]
If I could interpret the JSON in the 'failed' request properly I'd have an idea of where to look for the problem.
Cheers in advance.
T:1 means you got a connection timeout. When using long polling, the connection will time out every 120 seconds (by default). This is because most load balancers/proxies will kill idle connections after some time. The other transports send a keep-alive to stop this from happening.
As for the rest of the payload:
C: Cursor
M: Messages (an array; within each message, H = Hub name, M = Method name, A = Method args)
T: Timeout
D: Disconnect
So the JSON you received ({"C":"B,0|7,4|8,0|9,0","T":1,"M":[]}) is simply a cursor, the timeout flag, and an empty message array.
I'm trying to make a Flex-based desktop application consume messages from an ActiveMQ topic with a durable subscription, using the JMS bridge of BlazeDS. The basic scenario is as follows:
Messages are produced by other producers in the topic to which the Flex client is subscribed.
The Flex client may go offline from time to time, but it must receive all the messages it has missed while being offline when it connects to BlazeDS again. (Of course the Flex client connects with the same client ID every time).
It cannot be guaranteed that the Flex client is shut down gracefully.
Everything works fine if I explicitly disconnect my consumer on the Flex side by calling disconnect(); I do it in the exit handler of the application. However, due to #3 above, it is not guaranteed that disconnect() is always called. When the Flex client shuts down without calling disconnect(), it seems that the subscription of the "proxy JMS client" that BlazeDS creates and associates with the Flex client stays active towards ActiveMQ, so ActiveMQ still thinks that the client is logged in. When the Flex app starts up the next time, it is unable to log in to BlazeDS because ActiveMQ refuses its subscription, claiming that the client ID is already taken. Why is that, and what can I do here to ensure that BlazeDS takes the "proxy JMS client" offline in ActiveMQ when its real Flex counterpart terminates unexpectedly?
More detailed information: some debugging revealed the following:
BlazeDS becomes aware of the termination of the Flex client because it prints a few exceptions to the console when in debug mode. The messages are as follows:
[BlazeDS]23:18:13.688 [WARN] Endpoint with id 'my-streaming-amf' is closing the streaming connection to FlexClient with id '71E6466F-D91F-201C-F60A-A6CB52F95D9F' because endpoint encountered a socket write error, possibly due to an unresponsive FlexClient.
ClientAbortException: java.net.SocketException: Broken pipe
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:319)
at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:288)
at org.apache.catalina.connector.Response.flushBuffer(Response.java:542)
at org.apache.catalina.connector.ResponseFacade.flushBuffer(ResponseFacade.java:279)
at flex.messaging.endpoints.BaseStreamingHTTPEndpoint.handleFlexClientStreamingOpenRequest(BaseStreamingHTTPEndpoint.java:818)
at flex.messaging.endpoints.BaseStreamingHTTPEndpoint.serviceStreamingRequest(BaseStreamingHTTPEndpoint.java:1055)
at flex.messaging.endpoints.BaseStreamingHTTPEndpoint.service(BaseStreamingHTTPEndpoint.java:460)
at flex.messaging.MessageBrokerServlet.service(MessageBrokerServlet.java:353)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:263)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:584)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:737)
at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:434)
at org.apache.coyote.http11.InternalOutputBuffer.flush(InternalOutputBuffer.java:299)
at org.apache.coyote.http11.Http11Processor.action(Http11Processor.java:963)
at org.apache.coyote.Response.action(Response.java:183)
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:314)
... 20 more
[BlazeDS]23:18:13.689 [DEBUG] Streaming thread 'http-8400-1' for endpoint with id 'my-streaming-amf' is releasing connection and returning to the request handler pool.
[BlazeDS]23:18:13.689 [INFO] Number of streaming clients for FlexSession with id '5BC5E8D604A361BCA673B05AC624CCC1' is 0.
[BlazeDS]23:18:13.689 [DEBUG] Number of streaming clients for endpoint with id 'my-streaming-amf' is 0.
At this stage, the subscriptions are still shown on the ActiveMQ web admin interface as being active.
Killing BlazeDS (more precisely, the Tomcat server that hosts it) with kill -9 from the console makes ActiveMQ realize immediately that the "proxy JMS client" is gone, and it goes offline on the ActiveMQ web admin interface. This made me conclude that BlazeDS is explicitly keeping the proxy JMS client alive, since kill -9 gives BlazeDS no chance to unsubscribe the client, yet the client still goes offline in ActiveMQ.
So, the question once again: what can I do here to ensure that BlazeDS takes the "proxy JMS client" offline in ActiveMQ when its real Flex counterpart terminates unexpectedly? Is this a bug in BlazeDS, or am I just missing some hidden configuration setting that would make it work?
Version information: BlazeDS 4.0, ActiveMQ 5.5.0, both freshly downloaded today. I'm using the Tomcat server in the BlazeDS turnkey but ActiveMQ is installed separately because the BlazeDS turnkey ships with ActiveMQ 4.1.1 only. By the way, that version of ActiveMQ has the same issue.
The problem is that there is no way for BlazeDS to detect that your Flex client was shut down; you will have to implement your own mechanism. My suggestion is to use a heartbeat implemented with messaging: if no message is received from the client after a given time interval, you can assume that the Flex client is gone and do the disconnect (or you can use the session timeout mechanism on the server and do the disconnect when the session expires).
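To make the heartbeat idea concrete, here is a minimal server-side sketch (all class and method names are hypothetical; wiring the expiry callback to the actual JMS unsubscribe/cleanup is up to you):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Tracks when each Flex client was last heard from and fires a callback once a
// client has been silent for longer than the configured timeout.
public class HeartbeatMonitor {

    // Invoked when a client is considered gone; wire this to your JMS cleanup.
    public interface ExpiryHandler {
        void onClientExpired(String flexClientId);
    }

    private final Map<String, Long> lastSeen = new ConcurrentHashMap<String, Long>();
    private final long timeoutMillis;
    private final ExpiryHandler handler;
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public HeartbeatMonitor(long timeoutMillis, ExpiryHandler handler) {
        this.timeoutMillis = timeoutMillis;
        this.handler = handler;
        // Sweep for expired clients at half the timeout interval.
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                sweep();
            }
        }, timeoutMillis / 2, timeoutMillis / 2, TimeUnit.MILLISECONDS);
    }

    // Call this whenever a heartbeat message arrives from a Flex client.
    public void onHeartbeat(String flexClientId) {
        lastSeen.put(flexClientId, System.currentTimeMillis());
    }

    private void sweep() {
        long now = System.currentTimeMillis();
        for (Map.Entry<String, Long> entry : lastSeen.entrySet()) {
            if (now - entry.getValue() > timeoutMillis) {
                lastSeen.remove(entry.getKey());
                handler.onClientExpired(entry.getKey());
            }
        }
    }
}

The Flex client would simply send a small message to a dedicated destination every few seconds, and the server-side message handler would call onHeartbeat with the client's ID.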
What you have seen (the exception caught when the streaming channel is closed) is not enough to say with 100% certainty that the Flex client is gone. The streaming is implemented using an HTTP connection kept open forever (used to send server messages) and periodic HTTP POST calls (initiated by the client to send messages). In some networks the firewall can decide to kill the HTTP connection after a couple of seconds, and you will receive the same error as the one you posted. However, that does not mean that the Flex client was killed; the Flex client can use a fallback strategy and switch to short/long polling in this case. Actually, it would be a bug if BlazeDS automatically did the JMS disconnect in this case.