How do retries work in a DataPower mpgw service using routing-url to set the backside URL? - ibm-datapower

I have a DataPower mpgw service that accepts JSON POST and GET HTTPS requests. Persistent connections are enabled. It sets the backend URL using the DataPower routing-url variable. How do retries work for this? Is there a specific retry setting? Does it retry automatically up to a certain point? What if I don't want it to retry?
The backend app is taking about 1.5 minutes to return a 500 when it can't connect, but I want it to fail more quickly. I have the "backside timeout" set to 30 seconds. I'm wondering if the delay is because it's retrying a couple of times, but I can't find information on how retries work or are configured in this case.

I'm open to more answers, but what I found suggests that with persistent connections enabled, DataPower will retry after the backside timeout expires, up until the persistent connection timeout is reached.

Related

I am getting HTTP:GATEWAY_TIMED_OUT from CloudHub

I am using a requester connector in Mule 4 to call an API. That API takes 24 minutes to send the response back to the requester. So when I run my application in CloudHub, I get HTTP:GATEWAY_TIMED_OUT, error_code: 504.
The response timeout I am setting in the requester is 24 minutes, and the connection idle timeout is set to the default (30000 ms).
How can we update the response timeout in CloudHub?
I understand that you are making an HTTP request to a REST API implemented as an application deployed in CloudHub. You are probably using a URL that goes through the CloudHub load balancer (for example https://myapp.us-e1.cloudhub.io/api/...). The load balancer has a fixed 5-minute timeout that cannot be changed. Note that 24 minutes is a long time to keep connection resources open.
Some alternatives could be:
Restructure your application to operate in an asynchronous manner. This might require significant effort.
Skip the load balancer tier and connect to the application worker directly using its DNS name (example https://mule-worker-myapp.us-e1.cloudhub.io:8082/api/...). Be aware that going this way you will lose the benefits of using a load balancer.
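
If you go with the direct-worker option, keep in mind that the client itself then has to be willing to wait the full 24 minutes. A rough Java 11+ sketch of what that client side could look like; the worker hostname and path below are placeholders, not your actual application:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class DirectWorkerCall {
    public static void main(String[] args) throws Exception {
        // Connect timeout only covers establishing the TCP connection.
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(10))
                .build();

        // Hypothetical direct-worker URL that bypasses the CloudHub load balancer.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://mule-worker-myapp.us-e1.cloudhub.io:8082/api/slow-operation"))
                // The response timeout must exceed the ~24-minute backend processing time.
                .timeout(Duration.ofMinutes(25))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```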

Apache async HTTP client performance vs sync client

I am trying to switch my application to the async version of the Apache HttpComponents client. The goal is to be able to handle more outbound connections (in the near future).
The payload of the requests is quite small (<5KB)
The endpoints I hit are around 20 in number.
With the sync version of the Apache HTTP client, the throughput is about 200 requests/sec.
The average response time is about 100 ms/request.
I abort the requests after a maximum of 180 ms.
After switching to async, the response time went up by 20 ms/request.
The throughput also dropped to 160/sec. The number of aborted requests doubled.
This is after fine tuning the application a lot.
Is there anything I can do to improve the performance of async client?
I set maxConnectionsPerRoute high and have a large connection pool.
Are there any params that are key to getting the most out of async client?
Did you forget to set maxConnTotal?
The default maxConnTotal is 20, and it is a global limit.
I forgot to set it once.
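
If it helps, here is a rough sketch (HttpAsyncClient 4.x) of where both limits live on the client builder; the pool sizes, timeouts, and URL below are illustrative values for this scenario, not tuned recommendations:

```java
import java.util.concurrent.Future;

import org.apache.http.HttpResponse;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
import org.apache.http.impl.nio.client.HttpAsyncClients;

public class AsyncClientPool {
    public static void main(String[] args) throws Exception {
        RequestConfig config = RequestConfig.custom()
                .setConnectTimeout(50)   // ms allowed to establish a connection
                .setSocketTimeout(180)   // ms of inactivity before aborting, matching the 180 ms budget
                .build();

        CloseableHttpAsyncClient client = HttpAsyncClients.custom()
                .setMaxConnTotal(500)    // global limit; the default of 20 throttles everything
                .setMaxConnPerRoute(50)  // per-endpoint limit; ~20 endpoints in this case
                .setDefaultRequestConfig(config)
                .build();
        client.start();

        try {
            // Passing a null callback returns a Future you can block on.
            Future<HttpResponse> future =
                    client.execute(new HttpGet("https://example.com/api"), null);
            System.out.println(future.get().getStatusLine());
        } finally {
            client.close();
        }
    }
}
```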

Bing API HTTP request timeout

I'm using the Bing V7 API and sending HTTP requests to this endpoint:
https://api.cognitive.microsoft.com/bing/v7.0/search
When I define my HttpClient, I need to select the right timeout value. Too short a timeout will make me lose some answers from the server. Too long a timeout will make me wait even when the server is not there.
I looked in the Bing documentation and didn't find the right value.
What is the right HTTP request timeout for these calls?
What is the right HTTP request timeout for these calls?
I'm using the Bing V7 API and sending HTTP requests to this endpoint:
Bing provides their API via an HTTP endpoint. This has nothing really to do with the API itself, in my opinion, as HTTP is just the transport in this situation. HTTP request handling is normally done by, e.g., reverse proxies such as NGINX (or likely MS IIS here). Hence there is no documentation in the API docs.
When I define my HttpClient, I need to select the right timeout value. Too short a timeout will make me lose some answers from the server. Too long a timeout will make me wait even when the server is not there.
The timeout value in your HttpClient is just meant to eventually recover from a blocking situation. It means your program won't block indefinitely, but will at some point terminate the HTTP action at hand. This is useful if your HttpClient got into, e.g., a network split, deadlock, or similar situation where no reply will ever come.
A timeout value between 45 and 60 seconds is plenty.
Too long a timeout will make me wait even when the server is not there.
I would keep the HttpClient timeout at a fixed value, e.g. 60 seconds, and have a second "supervisor" thread run a more dynamic smoke test to check whether connectivity is OK or whether there is some other problem, at which point you can terminate the HttpClient early.
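
For what it's worth, a rough Java 11+ sketch of that fixed 60-second budget against the endpoint above; the subscription-key header value and the query are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class BingSearchCall {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(10))  // time allowed to open the connection
                .build();

        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://api.cognitive.microsoft.com/bing/v7.0/search?q=test"))
                .header("Ocp-Apim-Subscription-Key", "<your-key>")  // placeholder credential
                .timeout(Duration.ofSeconds(60))  // upper bound so the call can never block forever
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```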

Response not received back to client from Apigee Cloud

Postman client --> Apigee On-Cloud --> Apigee On-Premises --> Backend
The backend takes 67 seconds to respond, and I can see the response in Apigee Cloud as well; however, the same response is not sent back to the client, and a timeout is received instead.
I have also increased the timeout values in the HTTPTargetConnection properties, but the issue still persists.
Please let us know where to investigate.
There are two levels of timeout in Apigee -- first at the load balancer which has a 60 second timeout, then at the Apigee layer which I believe was 30 seconds but looks like it was increased to 60.
My guess is that the timeout response is coming from the load balancer and that the timing is just such that Apigee is able to get the response but the load balancer has already dropped the connection.
If this is a paid instance you should be able to get Apigee to adjust the timeouts to make this work (but, man... 67,000ms response times are pretty long...)

HTTP Keep Alive in a large Web Applications

I have a web application deployed on IIS 7.0. The application is accessed by a large number of users and manipulates large amounts of data. My question concerns the HTTP Keep-Alive option, which is set to true by default.
Is it a better approach to set HTTP Keep-Alive to false or true?
And if true, is it a good approach to use a timeout?
Keep-Alive should normally be used to handle the requests that immediately follow an HTML request. Let's say on the first visit to your site I get an HTML page with 5 CSS files, 5 JS files and 25 images; I will use my HTTP connection, which is still alive, to request these things (well, it depends on the browser; I'll maybe use 3 connections to speed things up).
To handle this we usually use a keep-alive of 2 s or 3 s. A longer keep-alive means the connection is waiting for the next page that the user may request. This may be a valid way of thinking: next time the user wants a page, we avoid losing time establishing the HTTP connection (and this can be the longest part of the request/response time). But for your server that means most of the HTTP connections it handles are doing... nothing. And you will reach your MaxConnections limit (W3SVC/MaxConnections, with a ridiculously low default of 10) with connections doing nothing. Really bad. So a long keep-alive needs big web servers and should be used only if your application really needs it.
If you use keep-alive on a 'classical website' you must change the connection timeout (2 min by default). In Apache you would have two settings, a keep-alive timeout (5 s by default) and a connection timeout (2 min). In IIS it seems the same timeout setting is used for both. So do not set it to 2 s (a client that is really slow in sending its request will time out), but something like 10 s is probably enough. One option is to disallow keep-alive and make the browser open more connections. Another is to use a modern web server (like nginx or Cherokee, for example), which handles keep-alive connections in a more elegant and resource-efficient way than Apache or IIS.
Even if you do not use keep-alive, what is the reason for waiting 2 minutes for a client timeout? It is certainly too high; decrease this value to something like 60 s.
Then you should check several settings related to timeouts (ConnectionTimeout, HeaderWaitTimeout, MinFileBytesPerSec) and this nice response on performance settings in the registry.
This article brings more insight; don't forget to check the "How do we fix it?" section:
http://mocko.org.uk/b/2011/01/23/http-keepalive-considered-harmful/
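
For completeness, the same trade-off can also be capped from the client side. A rough sketch with Apache HttpClient 4.5 (this is not IIS configuration, and the 10-second cap is just an illustrative value): honour the server's Keep-Alive hint, but never hold idle connections longer than you are willing to pay for.

```java
import java.util.concurrent.TimeUnit;

import org.apache.http.client.methods.HttpGet;
import org.apache.http.conn.ConnectionKeepAliveStrategy;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.DefaultConnectionKeepAliveStrategy;
import org.apache.http.impl.client.HttpClients;

public class CappedKeepAliveClient {
    public static void main(String[] args) throws Exception {
        // Use the server's Keep-Alive hint, but cap idle connection lifetime at 10 s.
        ConnectionKeepAliveStrategy capped = (response, context) -> {
            long serverHint = DefaultConnectionKeepAliveStrategy.INSTANCE
                    .getKeepAliveDuration(response, context);  // -1 means "no hint from the server"
            long capMillis = 10_000;
            return serverHint > 0 ? Math.min(serverHint, capMillis) : capMillis;
        };

        CloseableHttpClient client = HttpClients.custom()
                .setKeepAliveStrategy(capped)
                .evictIdleConnections(10, TimeUnit.SECONDS)  // background thread closes idle sockets
                .build();

        try {
            System.out.println(
                    client.execute(new HttpGet("https://example.com/")).getStatusLine());
        } finally {
            client.close();
        }
    }
}
```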
I think it's not a good idea to keep all users connected.
Because:
A user may just open your site but not use it, so why should we keep the connection open for a long time?
It's hard to keep many connections open (it takes more memory).
Use a connection timeout (a maximum of 5 minutes will be OK).
BUT: if your application is a live chat, you should keep all connections alive. In that case it's better to use Ajax long-polling requests + Node.js + some fast NoSQL DB to store the chat messages.