WSO2 ESB out of memory when clients disconnect - out-of-memory

I have a WSO2 ESB 4.8.0 instance running with some proxies deployed in it.
All works great until the clients calling the proxies begin to disconnect before receiving the response from the ESB.
After a few minutes the ESB runs into an OutOfMemory error.
The average size of requests is 1.2 KB.
The average size of responses is 1.6 MB.
The server is running with: -Xms256m -Xmx1024m -XX:MaxPermSize=256m.
In the heap dump I can see that the major objects retaining memory are java.lang.Thread instances (PassThroughMessageProcessor), lots of them, each retaining about 36 MB.
Sometimes there is also this error: java.lang.OutOfMemoryError: GC overhead limit exceeded
If clients don't disconnect, everything works fine.
Any ideas?

If clients are disconnecting, maybe the root cause is that your back end is taking too long to answer.
There are a few things you can do:
Decrease the timeout for back-end responses, so messages don't pile up on the ESB (a configuration sketch follows below)
Use the throttle mediator or handler to reject messages above a certain rate
Both are targeted at the symptom (OOM at the ESB). Beyond that, a redesign of the integration model may be necessary, perhaps using a store-and-forward approach.
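For illustration, here is a minimal sketch of an endpoint timeout in Synapse configuration (the endpoint URI and the values are placeholders, not from the original setup):
<endpoint>
    <address uri="http://backend.example.com/service">
        <timeout>
            <duration>30000</duration>
            <responseAction>fault</responseAction>
        </timeout>
    </address>
</endpoint>
With responseAction set to fault, the ESB raises a fault and releases the resources held for the call instead of waiting indefinitely for the back end.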

Related

Load balancing TCP traffic using Apache Camel with Netty leads to transaction failures

I am new to Apache Camel and Netty and this is my first project. I am trying to use Camel with the Netty component to load balance heavy traffic in a back-end load test scenario. This is the setup I have right now:
from("netty:tcp:\\this-ip:9445?defaultCodec=false&sync=true").loadBalance().roundRobin().to("netty:tcp:\\backend1:9445?defaultCodec=false&sync=true,netty:tcp:\\backend2:9445?defaultCodec=false&sync=true)
The issue is unexpected buffer sizes in the responses seen by the client system sending TCP traffic to Camel. When I send multiple requests one after the other I see no issues and the buffer size is as expected. But when I run multiple users sending similar requests to Camel on the same port, I intermittently see unexpected buffer sizes, from 0 bytes up to more than the expected number of bytes. I tried playing around with multiple options mentioned on the Camel-Netty page, like:
Increasing backlog
keepAlive
buffersizes
timeouts
poolSizes
workerCount
synchronous
stream caching (did not work)
disabled useOriginalMessage for performance
System-level TCP parameters, among others.
I am yet to resolve the issue and am not sure if I'm fundamentally missing something. I did take a look at the encoders/decoders and wondered whether that could be the issue, but I don't understand why a load balancer needs to encode/decode messages. I have worked with other load balancers which just require endpoint configurations, so I am assuming that Camel does not require this. Am I right? Note that the issue is not with my client/back end: I ran a 2000-user load test from my client straight to the back end with less than 1% failures, but I see a large number of failures (not that there are no successes) with Camel in between. I have the following questions:
1. Is this a valid use case for Apache Camel-Netty? Should I be looking at Mina or others?
2. Can I try to route TCP traffic to JMS or other components and then finally to the TCP endpoint?
3. Do I need encoders/decoders, or should this configuration work?
4. Should I continue with this approach or try some other load balancer?
Please let me know if you have any other suggestions. TIA.
Edit1:
I also tried the same approach with the netty4 and mina components. The route looks similar to the netty one. The route with netty4 is as follows:
from("netty4:tcp:\\this-ip:9445?defaultCodec=false&sync=true").to("netty4:tcp:\\backend1:9445?defaultCodec=false&sync=true")
I read a few posts reporting the same issue but did not find any solution relevant to my case.
Edit2:
I increased the receive timeout at my client and immediately saw the mismatched buffer length issue fall to less than 1%. However, the response time of each transaction is huge with Camel in the path compared to without it; almost 10 times higher. Can you help me reduce the response times for each transaction? The message received back at my client varies from 5000 to 20000 bytes. Here is my latest route:
from("netty:tcp://this-ip:9445?sync=true&allowDefaultCodec=false&workerCount=20&requestTimeout=30000")
.threads(20)
.loadBalance()
.roundRobin()
.to("netty:tcp://backend-1:9445?sync=true&allowDefaultCodec=false","netty:tcp://backend-2:9445?sync=true&allowDefaultCodec=false")
I also used certain performance enhancements like:
context.setAllowUseOriginalMessage(false);
context.disableJMX();
context.setMessageHistory(false);
context.setLazyLoadTypeConverters(true);
Can you point me in the right direction about how I can reduce the individual transaction times?
For the netty4 component there is no parameter called defaultCodec; it is called allowDefaultCodec. See http://camel.apache.org/netty4.html
Also, try something like this first.
from("netty4:tcp:\\this-ip:9445?textline=true&sync=true").to("netty4:tcp:\\backend1:9445?textline=true&sync=true")
The above assumes the data being sent is plain text. If you are sending bytes or anything else, you will need to provide a decoder/encoder for Netty to handle the data.
And a side note: before running the Camel route, test manually by sending messages with a standard TCP tool like SocketTest to verify that everything works, then implement the same via Camel. You can find SocketTest at http://sockettest.sourceforge.net/ .
I finally solved the issue with the same route settings as above. The problem was that the request and response delimiters were not configured properly, so the route either closed the connection too early (leading to unexpected buffer sizes) or kept waiting after the entire buffer had been received (leading to high response times).
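For reference, a delimiter-aware variant of the route above might look like this; it is only a sketch, assuming newline-delimited text payloads (the decoderMaxLineLength value and the host names are placeholders):
from("netty:tcp://this-ip:9445?sync=true&textline=true&delimiter=LINE&decoderMaxLineLength=32768")
    .loadBalance()
    .roundRobin()
    .to("netty:tcp://backend-1:9445?sync=true&textline=true&delimiter=LINE",
        "netty:tcp://backend-2:9445?sync=true&textline=true&delimiter=LINE");
With textline=true, Netty frames each message on the delimiter, so a read completes exactly when the delimiter arrives, neither too early nor too late. For binary payloads you would register custom encoders/decoders instead.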

Connection timed out error in JMeter test execution

When I am running my JMeter scripts using the GUI, a few of the samples sometimes get a Connection timed out error and no response, but if I run the same test again after a few minutes I get responses for the same samples.
Can anybody please tell me the solution for this?
Currently I am checking the response time of each page; if I add timers, the page response times shown will be higher, right?
There are at least 3 possible reasons:
Your server (meaning the web servers handling requests and any components behind them) is not handling the load correctly and is slowing down; monitor the system and check
You have exhausted your injector's ephemeral ports; you need to adjust your OS TCP settings to increase the port range
You're running the load test in GUI mode with a View Results Tree listener in the test plan; this is bad practice, as GC will happen frequently, possibly triggering stop-the-world pauses that lead to exactly this. As per best practices, use non-GUI mode (see the command after the links below):
https://jmeter.apache.org/usermanual/best-practices.html
https://www.ubik-ingenierie.com/blog/jmeter_performance_tuning_tips/
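For reference, a typical non-GUI run looks like this (file names are placeholders):
jmeter -n -t testplan.jmx -l results.jtl
Here -n selects non-GUI mode, -t points at the test plan, and -l writes the sample results to a file you can analyze afterwards.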

ASP.Net MVC Delayed requests arriving long after client browser closed

I think I know what is happening here, but would appreciate confirmation and/or reading material that can turn that "think" into "know". The actual questions are at the end of the post in the TL;DR section:
Scenario:
I am in the middle of testing my MVC application for a case where one of the internal components is stalling (timeouts on connections to our database).
On one of my web pages there is a jQuery DataTable which queries for an update via AJAX every half second; my current task is to display a correct error if that data request times out. So to test, I made a stored procedure that asks the DB server to wait 3 seconds before responding, which is longer than the configured timeout settings, so this guarantees a timeout exception for me to trap.
I am testing in the Chrome browser with one client. The application is being debugged in VS2013 with IIS Express.
Problem:
I did not expect the following symptoms to show up when my purposeful slowdown is activated:
1) After launching the page with the rigged DataTable, the application slowed down in handling all requests from the client browser: there are 3 other components that send AJAX update requests in parallel to the one I purposefully broke, and the same slowdown applied to any action I took in the web application that would generate a request (like navigating to other pages). The browser's debugger showed the requests were being sent on time, but the corresponding breakpoints on the server side were getting hit much later (delays of over 10 seconds to even several minutes).
2) My server kept processing requests even after I closed the tab with the application. I closed the browser and made sure that the chrome.exe process was terminated, but breakpoints on various controller actions were still getting hit for 20 minutes afterward, mostly on the actions "triggered" by the automatically looping AJAX requests from the several pages I had tried to visit during my tests. Breakpoints were also hit on the main pages I had tried to navigate to. On a second test I used RawCap to monitor the loopback interface and make sure nothing was actually still making requests in the background.
Theory I would like confirmed or denied, or an alternate explanation:
The above scenario was making looped requests at a frequency the server couldn't handle: the client DataTable loop was sending them every 0.5 seconds, and each one would take at least 3 seconds to generate the timeout. And obviously somewhere in IIS Express there has to be a limit on how many concurrent requests it can handle...
What surprised me was this: I had assumed that if that limit (which I also assumed to exist) was reached, further requests would be denied; instead, it appears they were queued for an absurdly long time to be processed later. I mean, under what scenario would it be useful to process a queued web request half an hour later?
So my questions so far are these:
TL;DR questions:
Does IIS Express (the one that comes with Visual Studio 2013) have a concurrent connection limit?
If yes :
{
Is this limit configurable somewhere, and if yes, where?
How does IIS Express handle situations where that limit is reached, and is that handling also configurable somewhere (I mean queueing vs. an immediate "server busy" error)?
}
If no:
{
How does the server handle scenarios where requests come in faster than they can be processed, and can that handling be configured anywhere?
}
Here - http://www.iis.net/learn/install/installing-iis-7/iis-features-and-vista-editions -
I found that IIS7 at least allows an unlimited number of simultaneous connections, but how does that actually work if the server is just not fast enough to process all the requests? Can a limit be configured anywhere, along with the handling of that limit being reached?
Would appreciate any links to online reading material on the above.
First, here's a brief web server 101. Production-class web servers are multithreaded, and roughly one thread = one request. You'll typically see some sort of setting for your web server called its "max requests", and this roughly corresponds to how many threads it can spawn. Each thread has overhead in terms of CPU and RAM, so there's a very real upper limit to how many a web server can spawn given the resources of the machine it's running on.
When a web server reaches this limit, it does not start denying requests, but rather queues them to be handled once threads free up. For example, if a web server has a max requests of 1000 (typical) and it suddenly gets bombarded with 1500 requests, the first 1000 will be handled immediately and the remaining 500 will be queued until some of the initial requests have been responded to, freeing up threads and allowing some of the queued requests to be processed.
A related topic area here is async, which in the context of a web application, allows threads to be returned to the "pool" when they're in a wait-state. For example, if you were talking to an API, there's a period of waiting, usually due to network latency, between sending the request and getting a response from the API. If you handled this asynchronously, then during that period, the thread could be returned to the pool to handle other requests (like those 500 queued up requests from the previous example). When the API finally responded, a thread would be returned to finish processing the request. Async allows the server to handle resources more efficiently by using threads that otherwise would be idle to handle new requests.
Then, there's the concept of client-server. In protocols like HTTP, the client makes a request and the server responds to that request. However, there's no persistent connection between the two. (This is somewhat untrue as of HTTP 1.1: connections between the client and server are sometimes persisted, but only to allow faster future requests/responses by skipping the time it takes to initiate a connection; there's still no real persistent communication about the status of the client/server in this scenario.) The main point here is that if a client, like a web browser, sends a request to the server, and then the client is closed (such as closing the tab in the browser), that fact is not communicated to the server. All the server knows is that it received a request and must respond, and respond it will, even though there's technically nothing on the other end to receive it anymore. In other words, just because the browser tab has been closed doesn't mean that the server will just stop processing the request and move on.
Then there are timeouts. Both clients and servers will have some timeout value they abide by. The distributed nature of the Internet (enabled by protocols like TCP/IP and HTTP) means that nodes in the network are assumed to be transient. There's no persistent connection (aside from the same note above) and network interruptions could occur between the client making a request and the server responding to it. If the client/server did not plan for this, they could simply sit there waiting forever. However, these timeouts can vary widely. A server will usually time out a request within 30 seconds (though it could potentially be set indefinitely). Clients like web browsers tend to be a bit more forgiving, with timeouts of 2 minutes or longer in some cases. When the server hits its timeout, the request will be aborted; depending on why the timeout occurred, the client may receive various error responses. When the client times out, however, there's usually no notification to the server. That means that if the server's timeout is higher than the client's, the server will continue trying to respond even though the client has already moved on. Closing a browser tab could be considered an immediate client timeout, but again, the server is none the wiser and keeps trying to do its job.
So, what all this boils down to is this. First, when doing long-polling (which is what you're doing by submitting an AJAX request repeatedly on some interval), you need to build in a cancellation scheme. For example, if the last 5 requests have timed out, you should stop polling, at least for some period of time. Even better would be to have the response of one AJAX request initiate the next: instead of using something like setInterval, you could use setTimeout and have the AJAX callback initiate it. That way, the requests only continue if the chain is unbroken; if one AJAX request fails, the polling stops immediately. In that scenario you may need some fallback to re-initiate the request chain after some period of time, but this prevents endlessly bombarding your already failing server with new requests. Also, there should always be some upper limit on how long polling should continue. If the user leaves the tab open for days without using it, should you really keep polling the server all that time?
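As a rough sketch of that chained-polling idea (jQuery assumed, since the question uses a jQuery DataTable; pollUrl, updateTable, showPollingError and MAX_FAILURES are illustrative names, not from the original page):
var failures = 0;
var MAX_FAILURES = 5;

function poll() {
    $.ajax({ url: pollUrl, timeout: 5000 })
        .done(function (data) {
            failures = 0;
            updateTable(data);      // hypothetical UI refresh
            setTimeout(poll, 500);  // schedule the next request only after this one completes
        })
        .fail(function () {
            if (++failures < MAX_FAILURES) {
                setTimeout(poll, 500);
            } else {
                showPollingError(); // hypothetical error display; stop hammering the server
            }
        });
}
poll();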
On the server-side, you can use async with cancellation tokens. This does two things: 1) it gives your server a little more breathing room to handle more requests and 2) it provides a way to unwind the request if some portion of it should time out. More information about that can be found at: http://www.asp.net/mvc/overview/performance/using-asynchronous-methods-in-aspnet-mvc-4#CancelToken
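A minimal sketch of that server-side pattern, assuming ASP.NET MVC 4 or later as in the linked article (the controller, action, and data-access names are illustrative):
using System.Threading;
using System.Threading.Tasks;
using System.Web.Mvc;

public class UpdatesController : Controller
{
    [AsyncTimeout(5000)]
    public async Task<ActionResult> GetUpdates(CancellationToken cancellationToken)
    {
        // ASP.NET signals the token when the AsyncTimeout elapses,
        // letting the pending work unwind instead of holding a thread.
        var rows = await FetchRowsAsync(cancellationToken);
        return Json(rows, JsonRequestBehavior.AllowGet);
    }

    private Task<object> FetchRowsAsync(CancellationToken token)
    {
        // placeholder for the real (cancellable) database call
        return Task.FromResult<object>(null);
    }
}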

HTTP 400s to mid tier when server is under stress

I'm working on a project where we have an ASP.NET website which makes ASMX web service calls to a mid tier. The timeout to the mid tier is 5 s. One thing we've noticed is that occasionally, at peak traffic times, we get HTTP 400s when calling the mid tier.
We've done a network trace on the website tier for some of these HTTP 400 requests and noticed that:
1) the 3-way TCP handshake completes quickly
2) the actual first packet of the POST is not sent until 5 seconds later; the ACK for the first packet of the POST comes back quickly
3) a FIN-ACK is sent shortly thereafter (presumably due to timing out), after which the web service tier quickly comes back with the HTTP 400 (the 400 being understandable, as the POST was incomplete)
Sometimes there is an extra 5 s delay before step 3. Any idea why this may happen? Step 2 looks like very strange behavior to me. Could resource contention be causing this delay before the POST is sent? Perhaps some sort of resource that could be configured differently?
We're using the standard .NET async methods for making the request (BeginInvoke). The POST body is fully available as a string before we call the API.
I'm thinking the CPU could be too high, causing the delay before step 2. Has anybody seen that before? I know the CPU is at least 80% during this time. It could be higher, but our perf counter isn't very high resolution. We're trying to capture a higher-resolution perf counter during a repro to confirm this. Let me know if you have any other ideas!
Thanks!

Call to slow service over HTTP from within message-driven bean (MDB)

I have a message-driven bean which serves messages in the following way:
1. It takes data from the incoming message.
2. It calls an external service via HTTP (literally, it sends GET requests using HttpURLConnection), using the data from step 1. No matter how long the call takes, the message MUST NOT be dropped.
3. It uses the outcome of step 2 to persist data (using entity beans).
The rate of incoming messages is:
I. Low most of the time: on the order of units/tens per day.
II. Sometimes high: on the order of hundreds in a few minutes.
QUESTION:
Given that the service in step 2 is relatively slow (20 seconds per request, and it degrades as the workload increases), what is the best way to deal with situation II?
WHAT I TRIED:
1. Letting the MDB wait until the service call completes, no matter how long it takes. This tends to roll back MDB transactions by timeout and re-deliver the messages, increasing the workload and making things even worse.
2. Setting a timeout on the HttpURLConnection gives some guarantees about the completion time of the MDB's onMessage() method, but leaves an open question: how to proceed with the timed-out messages.
Any ideas are very much appreciated.
Thank you!
In that case you can just increase the transaction timeout for your message-driven beans.
This is what I ended up with (mostly, this is application server configuration):
A relatively short (compared to the transaction timeout) timeout for the HTTP call. The rationale: long-running transactions, in my experience, tend to have adverse side effects, such as threads that look "hung" from the application server's point of view, or extra attention needed for database configuration, etc. I chose 80 seconds as the timeout value (a sketch follows below).
A re-delivery interval for failed messages increased to several minutes.
Careful adjustment of the number of threads which handle messages simultaneously. I balanced this value against the throughput of the HTTP service.
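A minimal sketch of the bounded HTTP call, using plain HttpURLConnection as in the question (the URL, helper name, and exact limits are placeholders, not from the original setup):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

// called from onMessage() with data extracted from the incoming message
private String callService(String query) {
    try {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://slow-service.example.com/api?q=" + query).openConnection();
        conn.setConnectTimeout(10000); // fail fast if the service is unreachable
        conn.setReadTimeout(80000);    // 80 s: comfortably below the transaction timeout
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                sb.append(line);
            }
            return sb.toString();
        }
    } catch (SocketTimeoutException e) {
        // surface the timeout so the container rolls back the transaction
        // and the message is re-delivered after the longer interval
        throw new RuntimeException("Service call timed out", e);
    } catch (IOException e) {
        throw new RuntimeException("Service call failed", e);
    }
}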
