I have an application with a Flex client and BlazeDS on the server side.
Scenario:
1. The client makes a remoting call over the secure AMF channel.
2. The server takes some time to process the request, usually more than 60 seconds.
When the server-side code tries to write the response back over the connection, it reports a broken pipe error.
I have found several questions on this topic but no conclusive solution.
Error log on the server side:
[BlazeDS]java.io.IOException: Broken pipe
Write failed: Broken pipe
Here is my services-config.xml snippet:
<channel-definition id="my-secure-amf" class="mx.messaging.channels.SecureAMFChannel">
<endpoint url="https://{server.name}:{server.port}/{context.root}/messagebroker/amfsecure" class="flex.messaging.endpoints.SecureAMFEndpoint"/>
<properties>
<add-no-cache-headers>false</add-no-cache-headers>
</properties>
</channel-definition>
The timeout of your AMF/HTTP connection is set by your HTTP server. Depending on which server you are running, look for the matching timeout setting. If you are running Apache Tomcat 7, see the HTTP connector configuration:
http://tomcat.apache.org/tomcat-7.0-doc/config/http.html
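For example, with the default HTTP connector in Tomcat's server.xml, the settings documented on that page look like the sketch below (the values are only illustrative; whether connectionTimeout or keepAliveTimeout is the one biting you depends on where the 60+ second stall happens):
<!-- server.xml: raise the connector timeouts beyond the longest expected request (illustrative values) -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="120000"
           keepAliveTimeout="120000"
           redirectPort="8443"/>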
It is also possible to change the timeout in services-config.xml (though this isn't applicable to you):
<channel-definition id="my-channel" .... >
<endpoint url...../>
<properties>
<add-no-cache-headers>false</add-no-cache-headers>
<connect-timeout-seconds>10</connect-timeout-seconds>
</properties>
</channel-definition>
I'm currently struggling with EJB remote calls against Wildfly 25 or 26.
The same client application works fine with Wildfly 10, 13, 16 and 20, and to some extent also with Wildfly 25 or 26.
The problem starts when the size of the object returned by the EJB call exceeds some limit, and that limit seems to vary. Example: I made a test EJB method that returns the same string I pass in as a parameter. Mostly, if the length of the string exceeds ~65,000 characters, the Wildfly EJB client hangs while reading the result, although sometimes the client only freezes at sizes above that limit. In my client I have registered an EJB call interceptor, and I see that the call freezes when the context.getResult() is invoked. On the server side, also based on an interceptor, I see that the call completed, but obviously something goes wrong when the return value is received through the EJB client.
This is the stacktrace of the hanging thread:
"main#1" prio=5 tid=0x1 nid=NA waiting
java.lang.Thread.State: WAITING
at java.lang.Object.wait(Object.java:-1)
at java.lang.Object.wait(Object.java:502)
at org.wildfly.httpclient.common.WildflyClientInputStream.read(WildflyClientInputStream.java:147)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at org.jboss.marshalling.SimpleDataInput.read(SimpleDataInput.java:111)
at org.jboss.marshalling.UTFUtils.readUTFBytes(UTFUtils.java:151)
at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:314)
at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:231)
at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)
at org.wildfly.httpclient.ejb.HttpEJBReceiver$2.getResult(HttpEJBReceiver.java:207)
at org.jboss.ejb.client.EJBClientInvocationContext.getResult(EJBClientInvocationContext.java:620)
at org.jboss.ejb.client.EJBClientInvocationContext.getResult(EJBClientInvocationContext.java:551)
at org.jboss.ejb.protocol.remote.RemotingEJBClientInterceptor.handleInvocationResult(RemotingEJBClientInterceptor.java:57)
at org.jboss.ejb.client.EJBClientInvocationContext.getResult(EJBClientInvocationContext.java:622)
at org.jboss.ejb.client.EJBClientInvocationContext.getResult(EJBClientInvocationContext.java:551)
at org.jboss.ejb.client.TransactionPostDiscoveryInterceptor.handleInvocationResult(TransactionPostDiscoveryInterceptor.java:148)
at org.jboss.ejb.client.EJBClientInvocationContext.getResult(EJBClientInvocationContext.java:622)
at org.jboss.ejb.client.EJBClientInvocationContext.getResult(EJBClientInvocationContext.java:551)
at org.jboss.ejb.client.DiscoveryEJBClientInterceptor.handleInvocationResult(DiscoveryEJBClientInterceptor.java:130)
at org.jboss.ejb.client.EJBClientInvocationContext.getResult(EJBClientInvocationContext.java:622)
at org.jboss.ejb.client.EJBClientInvocationContext.getResult(EJBClientInvocationContext.java:551)
at org.jboss.ejb.client.NamingEJBClientInterceptor.handleInvocationResult(NamingEJBClientInterceptor.java:87)
at org.jboss.ejb.client.EJBClientInvocationContext.getResult(EJBClientInvocationContext.java:622)
at org.jboss.ejb.client.EJBClientInvocationContext.getResult(EJBClientInvocationContext.java:551)
at org.jboss.ejb.client.AuthenticationContextEJBClientInterceptor$$Lambda$94.871790326.get(Unknown Source:-1)
at org.jboss.ejb.client.AuthenticationContextEJBClientInterceptor.call(AuthenticationContextEJBClientInterceptor.java:59)
at org.jboss.ejb.client.AuthenticationContextEJBClientInterceptor.handleInvocationResult(AuthenticationContextEJBClientInterceptor.java:52)
at org.jboss.ejb.client.EJBClientInvocationContext.getResult(EJBClientInvocationContext.java:622)
at org.jboss.ejb.client.EJBClientInvocationContext.getResult(EJBClientInvocationContext.java:551)
at com.ge.hac.ca.common.util.CommonClientInvocationInterceptor.handleInvocationResult(CommonClientInvocationInterceptor.java:196)
at org.jboss.ejb.client.EJBClientInvocationContext.getResult(EJBClientInvocationContext.java:622)
at org.jboss.ejb.client.EJBClientInvocationContext.getResult(EJBClientInvocationContext.java:551)
at org.jboss.ejb.client.TransactionInterceptor.handleInvocationResult(TransactionInterceptor.java:212)
at org.jboss.ejb.client.EJBClientInvocationContext.getResult(EJBClientInvocationContext.java:622)
at org.jboss.ejb.client.EJBClientInvocationContext.getResult(EJBClientInvocationContext.java:551)
at org.jboss.ejb.client.EJBClientInvocationContext.awaitResponse(EJBClientInvocationContext.java:1003)
at org.jboss.ejb.client.EJBInvocationHandler.invoke(EJBInvocationHandler.java:182)
at org.jboss.ejb.client.EJBInvocationHandler.invoke(EJBInvocationHandler.java:116)
at com.sun.proxy.$Proxy4.loopback(Unknown Source:-1)
at com.ge.hac.ca.perf.connection.TestConnectionToWildfly.checkEJBInvocation_LoggerService(TestConnectionToWildfly.java:225)
at com.ge.hac.ca.perf.connection.TestConnectionToWildfly.executeEJBIterations(TestConnectionToWildfly.java:191)
at com.ge.hac.ca.perf.connection.TestConnectionToWildfly.main(TestConnectionToWildfly.java:401)
I'm using Amazon Corretto jdk1.8.0_292.
Has anyone experienced a similar issue, and if so, how can it be solved?
EJB calls with huge result objects (e.g. a String >= 64 KB) don't work with Wildfly 25 or 26, because I now have to use the HTTP protocol instead of the "remoting" protocol I had used for years before.
Specifically, the HTTP/2 implementation doesn't work, but HTTP/1 seems to work.
Explanation:
I have checked and debugged the Wildfly client code, and in my opinion there are several bugs in both the server and the client code of the HTTP/2 protocol implementation.
First bug: in the middle of the data stream the server suddenly sends (mostly for bigger objects) a FRAME_TYPE_RST_STREAM (see io.undertow.protocols.http2.Http2FrameHeaderParser:191 --> type = header[3] & 0xff;), which marks the stream as broken (see io.undertow.server.protocol.framed.AbstractFramedStreamSourceChannel:684 --> state |= STATE_STREAM_BROKEN;) and afterwards causes ClosedChannelExceptions on all stream reads inside the client (see org.wildfly.httpclient.common.WildflyClientInputStream:58 --> int res = streamSourceChannel.read(pooled.getBuffer());).
I didn't debug the server-side code, so I can't tell why the server suddenly sends a "reset stream" for some huge objects. Sometimes the same huge object is transferred to the client without issues. With HTTP/1 this never happens, so it can't be a network issue.
Second bug (triggered by the reset stream above): the client can't correctly handle the ClosedChannelExceptions caused by the received reset stream, and it freezes the whole communication.
The client keeps reading from the channel, and after each read it invokes wait(0) on its lock object (see org.wildfly.httpclient.common.WildflyClientInputStream:147 --> lock.wait();).
Normally this wait(0) is released by a notifyAll() on the same lock object (in the ClosedChannelException case that is org.wildfly.httpclient.common.WildflyClientInputStream:94 --> lock.notifyAll();).
Unfortunately, after the last read, and therefore the last wait(0), no notifyAll() is ever invoked, so the EJB client call freezes forever. In my opinion this should not happen under any circumstances.
The workaround I found, though I'm not happy with it, is to disable the HTTP/2 protocol on the server and use HTTP/1 instead:
standalone-full.xml --> <https-listener name="https" socket-binding="https" enable-http2="false" />
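For context, that line sits inside the undertow subsystem's default server in standalone-full.xml; a sketch of the surrounding structure (the ssl-context/security-realm attributes and names vary per installation and are only illustrative here):
<!-- standalone-full.xml, undertow subsystem (surrounding attributes are illustrative) -->
<server name="default-server">
    <http-listener name="default" socket-binding="http"/>
    <!-- HTTP/2 disabled to work around the reset-stream / hang described above -->
    <https-listener name="https" socket-binding="https" ssl-context="applicationSSC" enable-http2="false"/>
</server>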
EJB calls made through the HTTP/1 protocol now work fine for huge objects as well.
The classes and line numbers mentioned above are all taken from the Wildfly 26.0.0 client jar.
I hope some Wildfly expert will read this and react.
I have an ESB from which I make a web service call. It works fine most of the time, but sometimes I get the exception below:
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
The weird thing is that after I get this exception, sometimes the HTTP outbound call still succeeds and sometimes it does not.
Why is this not consistent?
Is there some configuration on the Mule HTTP connector that can make this exception scenario behave consistently?
All I am asking is: how do I stop the HTTP outbound request from being processed after a read timed out exception is thrown?
The flow looks like the code shown below:
<queued-asynchronous-processing-strategy name="allow2Threads" maxThreads="2"/>
<flow name="TestFlow" processingStrategy="allow2Threads">
<vm:inbound-endpoint path="performWebserviceLogic" exchange-pattern="one-way" />
.... some transformation logic
....
<http:outbound-endpoint address="http://localhost:8080/firstwebservicecall" responseTimeout="65000" exchange-pattern="request-response"/>
....
.... some transformation logic on response...
<http:outbound-endpoint address="http://localhost:8080/secondWeberviceCall" responseTimeout="20000" exchange-pattern="request-response"/>
......some transformation logic on response...
<catch-exception-strategy>
<choice>
<when expression="#[groovy:message.getExceptionPayload().getRootException.getMessage().equals('Read timed out') and message.getSessionProperty('typeOfCall').equals('firstWeberviceCall')]">
.... unreliable ...result... as firstWeberviceCall may succeed even after the control comes here
and if we process http://localhost:8080/firstwebservicecall .. the transaction takes place twice... as already it succeeded above even after an exception is thrown
</when>
<when expression="#[groovy:message.getExceptionPayload().getRootException.getMessage().equals('Read timed out') and message.getSessionProperty('typeOfCall').equals('secondWeberviceCall')]">
..... reliable ... if control comes here and if we process http://localhost:8080/secondWeberviceCall .. the transaction takes place only once
</when>
<when expression="#[groovy:message.getExceptionPayload().getRootException.getMessage().equals('Connect timed out') and message.getSessionProperty('typeOfCall').equals('firstWeberviceCall')]">
....reliable
</when>
<when expression="#[groovy:message.getExceptionPayload().getRootException.getMessage().equals('Connect timed out') and message.getSessionProperty('typeOfCall').equals('secondWeberviceCall')]">
....reliable
</when>
</choice>
</catch-exception-strategy>
</flow>
You can configure, and thus increase, the timeouts of the HTTP transport in different places:
Response time out on the endpoints,
Connection and socket timeouts on the connector.
This just pushes the problem further out though: increasing the timeouts may solve your issue temporarily, but you're still exposed to the failure.
To handle it properly, I think you should strictly check the response status code after each HTTP outbound endpoint, for example using a filter to break the flow if the status code is not what you expect, as in the sketch below.
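A minimal sketch of such a check, assuming the Mule 3 HTTP transport, which exposes the status code as the http.status inbound property (whether it arrives as a String or an Integer varies between versions, hence the toString()):
<!-- illustrative only: stop the flow unless the previous HTTP call returned 200 -->
<message-filter throwOnUnaccepted="true">
    <expression-filter expression="#[message.inboundProperties['http.status'].toString() == '200']"/>
</message-filter>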
Also, it's quite possible that you get a response timeout after the HTTP request has been received by the server but before the response gets back to Mule. In that case, as far as Mule is concerned, the call has failed and must be retried. This means the remote service must be idempotent, i.e. the client should be able to safely retry any operation that failed (or that it thinks has failed).
Check the server SO_TIMEOUT on the HTTP connection and set it to 0; see the sketch below.
Also check https://www.mulesoft.org/jira/browse/MULE-6331
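On the Mule 3 HTTP connector that corresponds to the serverSoTimeout attribute already shown in your config (sketch only; the connector name here is a placeholder, and whether an infinite timeout of 0 is appropriate depends on your deployment):
<!-- illustrative: serverSoTimeout=0 disables the server-side socket read timeout -->
<http:connector name="HTTP_NoServerTimeout" serverSoTimeout="0" clientSoTimeout="10000"/>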
I have an application (say, TcpApp) sending pure TCP messages (i.e., no SOAP, no envelope ... just a raw string or even bytes). I need the ESB to listen for those messages on a specific port (say, 3333) and do some mediation (for now, doing nothing but logging is enough). I think it would be a good idea to have TcpApp publish to an ActiveMQ queue and then create a proxy service from JMS in the ESB (instead of connecting the ESB directly to the TcpApp).
I read several samples and answers, but the content is always XML and TCP is only the transport. What sometimes happens is that applications send data with no special format over TCP (sometimes called telegrams).
I tried changing the content type, but the ESB still refuses to read from the TCP port.
<parameter name="transport.tcp.contentType">text/plain</parameter>
Maybe I'm still confused about the architecture of the solution, but I think a broker, or an ESB like WSO2, should work in this case as a mediator for this TcpApp. I'd prefer to discuss the solution before getting to the actual config to make it work.
All comments welcome!
In WSO2 EI 6.1.1, I have found that I can successfully process plain text TCP messages if I also specify a recordDelimiter and recordDelimiterType. Example from a working proxy (with the line feed character as delimiter):
<parameter name="transport.tcp.responseClient">true</parameter>
<parameter name="transport.tcp.inputType">binary</parameter>
<parameter name="transport.tcp.recordDelimiter">0x0A</parameter>
<parameter name="transport.tcp.contentType">text/plain</parameter>
<parameter name="transport.tcp.port">50001</parameter>
<parameter name="transport.tcp.recordDelimiterType">byte</parameter>
The message body in the input sequence looks like this:
<text xmlns="http://ws.apache.org/commons/ns/payload">this_is_the_message</text>
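For reference, a sketch of the proxy service those parameters belong to (the proxy name and the log-only sequence are illustrative, matching the "do nothing but logging" goal; the parameters are the ones from the working example above):
<proxy xmlns="http://ws.apache.org/ns/synapse" name="TcpTelegramProxy" transports="tcp" startOnLoad="true">
    <target>
        <inSequence>
            <!-- log the full plain-text payload, then stop -->
            <log level="full"/>
            <drop/>
        </inSequence>
    </target>
    <parameter name="transport.tcp.responseClient">true</parameter>
    <parameter name="transport.tcp.inputType">binary</parameter>
    <parameter name="transport.tcp.recordDelimiter">0x0A</parameter>
    <parameter name="transport.tcp.contentType">text/plain</parameter>
    <parameter name="transport.tcp.port">50001</parameter>
    <parameter name="transport.tcp.recordDelimiterType">byte</parameter>
</proxy>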
You need to use the correct message formatters and builders to process anything. Use the following formatter and builder in the axis2.xml file:
<messageFormatter contentType="application/binary" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
<messageBuilder contentType="application/binary" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
Just change the content type to whatever you like and use the same value in the proxy service config as well. Actually, I have a blog post on this as well [1] :)
[1] - http://soatechflicks.blogspot.com/2017/05/processing-binary-data-from-tcp.html
I want to set a retry policy for an HTTP call, in case of occasional network failure, so I configured the following:
<http:connector name="HTTP_Retry" cookieSpec="netscape" validateConnections="true" sendBufferSize="0" receiveBufferSize="0" receiveBacklog="0" clientSoTimeout="10000" serverSoTimeout="10000" socketSoLinger="0" doc:name="HTTP\HTTPS">
<reconnect frequency="1000" count="3"/>
</http:connector>
....
<http:outbound-endpoint address="http://localhost:18081/mule/TheCreditAgencyService" doc:name="HTTP" exchange-pattern="request-response" method="POST" connector-ref="HTTP_Retry"/>
But the retry policy is not applied. I even configured a custom retry policy, debugged the application and set a breakpoint, but the program never runs into my custom class.
I read the documentation, but there is only an example for JMS.
Any tips? Have I misconfigured something?
Thanks in advance!
The ill-named retry policies take care of reconnecting connectors, not resending messages in case of failure.
On a connectionless connector like the HTTP one, a retry policy has no effect. It's useful on connectors like JMS, where a permanent connection is maintained towards a broker, a connection that needs reconnecting in case of failure.
What you are after is the until-successful routing message processor; see the sketch below.
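A minimal sketch, assuming Mule 3 (the object store bean, retry counts and the reuse of the endpoint from your snippet are placeholders; the spring namespace must be declared, and note that until-successful runs asynchronously by default in Mule 3, so check your version's options if you need the response in the same flow):
<!-- illustrative only: retry the HTTP call up to 3 times, 10 s apart -->
<spring:bean id="retryObjectStore" class="org.mule.util.store.SimpleMemoryObjectStore"/>
<until-successful objectStore-ref="retryObjectStore" maxRetries="3" secondsBetweenRetries="10">
    <http:outbound-endpoint address="http://localhost:18081/mule/TheCreditAgencyService"
                            exchange-pattern="request-response" method="POST"
                            connector-ref="HTTP_Retry"/>
</until-successful>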
My Flex application is hosted at http://<ip>:8080/MyApp/Login.html. When I go there, a request for http://<ip>:8080/crossdomain.xml is created, as well as a request for https://<ip>:8080/crossdomain.xml. This happens when I attempt to use a remote Java object call to the same server and grab assets from it. I am not hosting HTTPS on port 8080, so the HTTPS call will fail.
The problem is that the HTTPS call sometimes takes a long time to fail (it fails after the connectionTimeout of the Tomcat connector); other times it fails quickly. When it does take a long time to fail, the page doesn't finish loading, because I'm waiting on those assets and on the remote object calls for data.
I've tried setting up a forceful retrieval of a crossdomain.xml with the following inside it:
<?xml version="1.0"?>
<cross-domain-policy>
    <allow-access-from domain="*" to-ports="*"/>
</cross-domain-policy>
With the AS3 code:
Security.loadPolicyFile(browserUrl+"/assets/crossdomain.xml");
This is called in the application's initialize event. The forced crossdomain.xml file above is requested and retrieved correctly according to Chrome and Wireshark, but the default locations at the root of the server are still being attempted, the HTTPS attempt still times out, and the app does not finish loading until it does.
So I think I determined what this issue was. I had a secure channel first in the list of channels, and since that meant a different domain (http for the page versus https for the channel), it would do the crossdomain.xml lookup.
I reordered the channels to have the regular http channel first, and my application no longer does any crossdomain lookups except the forceful one (which I've since removed).
Since the first channel was over https, Flex was waiting on that lookup before falling back to the insecure channel.
I also think this should continue to work if actually using https, since I believe that a plaintext request to a secure connector will be rejected instantly instead of timing out.
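For anyone hitting the same thing, the reordering amounts to listing the plain AMF channel before the secure one in the destination's channel set. A sketch, where the destination id and the "my-amf" channel id are placeholders for whatever your config defines:
<!-- illustrative: the non-secure channel is tried first, so the https crossdomain.xml lookup no longer blocks startup -->
<destination id="myRemoteService">
    <channels>
        <channel ref="my-amf"/>
        <channel ref="my-secure-amf"/>
    </channels>
    <!-- properties/source omitted -->
</destination>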