How to handle blocked RabbitMQ connections using Rebus - rebus

We are planning to use Rebus for all our RabbitMQ operations, but we have run into a problem: when the RabbitMQ connection is blocked, the _bus.Advanced.Routing.Send method hangs indefinitely, causing our application to hang as well.
I tried a (very ugly) hack that runs the _bus.Advanced.Routing.Send operation in another thread and aborts the thread after a timeout, but it causes other problems (System.NotSupportedException: Pipelining of requests forbidden).
Is there a way to time out operations on blocked connections (e.g. connections blocked by a high-memory alarm)? Or at least to detect blocked connections?
Thank you in advance.
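One note on the detection part: RabbitMQ actively notifies clients when it blocks a connection (a connection.blocked notification), so blocked connections can at least be detected at the client-library level. Below is a minimal sketch with the RabbitMQ Java client; the question concerns the .NET client, which exposes an equivalent ConnectionBlocked event, and the host and handler bodies here are purely illustrative.

    import com.rabbitmq.client.BlockedListener;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class BlockedConnectionDemo {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // illustrative broker address
            Connection connection = factory.newConnection();

            // Fires when the broker blocks the connection (e.g. due to a
            // high-memory alarm) and again when it unblocks it.
            connection.addBlockedListener(new BlockedListener() {
                @Override
                public void handleBlocked(String reason) {
                    // Illustrative reaction: flip a flag so publishers can
                    // fail fast instead of hanging on Send.
                    System.out.println("Connection blocked: " + reason);
                }

                @Override
                public void handleUnblocked() {
                    System.out.println("Connection unblocked");
                }
            });
        }
    }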

Related

How to mock "io.grpc.StatusRuntimeException: UNAVAILABLE: Network closed for unknown reason" when doing grpc call?

I'm implementing retry logic for my gRPC call: when it sees a StatusRuntimeException, it retries several times.
My question is: how can I mock the call so that it throws StatusRuntimeException?
My thought is to set the keep-alive time and keep-alive timeout really small, like 5 ms. Would that work? Or is there another good way to do it?
    ManagedChannel channel = NettyChannelBuilder.forAddress("localhost", 50051) // placeholder host/port
            .keepAliveTime(5, TimeUnit.MILLISECONDS)
            .keepAliveTimeout(5, TimeUnit.MILLISECONDS)
            .keepAliveWithoutCalls(true)
            .build();
grpc-java has a retry feature out of the box; you might try:
https://github.com/grpc/grpc-java/tree/master/examples/src/main/java/io/grpc/examples/retrying
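For context, that example enables retries through the channel's service config. A rough hand-built equivalent is sketched below; the service name, attempt count, and backoff values are illustrative, not taken from the question.

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;

    public class RetryConfigDemo {
        public static void main(String[] args) {
            // Retry policy for one service; service-config numbers are doubles.
            Map<String, Object> retryPolicy = new HashMap<>();
            retryPolicy.put("maxAttempts", 5.0);
            retryPolicy.put("initialBackoff", "0.5s");
            retryPolicy.put("maxBackoff", "10s");
            retryPolicy.put("backoffMultiplier", 2.0);
            retryPolicy.put("retryableStatusCodes", Arrays.asList("UNAVAILABLE"));

            Map<String, Object> name = new HashMap<>();
            name.put("service", "helloworld.Greeter"); // illustrative service name

            Map<String, Object> methodConfig = new HashMap<>();
            methodConfig.put("name", Collections.singletonList(name));
            methodConfig.put("retryPolicy", retryPolicy);

            Map<String, Object> serviceConfig = new HashMap<>();
            serviceConfig.put("methodConfig", Collections.singletonList(methodConfig));

            ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 50051)
                    .usePlaintext()
                    .defaultServiceConfig(serviceConfig)
                    .enableRetry() // ensure retries are enabled on this channel
                    .build();
        }
    }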
gRPC proposal A8 suggests never setting the keepalive time below one minute; if you do, you will see a GOAWAY from the server:
Suggests for clients to avoid configuring their keepalive much below one minute (see Server Enforcement section for additional details)
Actually, if you simply do not start a server, you see StatusRuntimeException: UNAVAILABLE: io exception.
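To illustrate that last point, a minimal sketch: point a channel at a port where nothing is listening and the RPC fails with UNAVAILABLE. The health-check stub from grpc-services is used here only so the example is self-contained; any generated stub behaves the same way.

    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;
    import io.grpc.StatusRuntimeException;
    import io.grpc.health.v1.HealthCheckRequest;
    import io.grpc.health.v1.HealthGrpc;

    public class UnavailableMockDemo {
        public static void main(String[] args) {
            // No server is started on this port, so the RPC should fail with
            // StatusRuntimeException: UNAVAILABLE.
            ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 59999)
                    .usePlaintext()
                    .build();
            try {
                HealthGrpc.newBlockingStub(channel)
                        .check(HealthCheckRequest.getDefaultInstance());
            } catch (StatusRuntimeException e) {
                System.out.println("Got expected status: " + e.getStatus().getCode());
            } finally {
                channel.shutdownNow();
            }
        }
    }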

High response time vs queuing

Say I have a web service, used internally by other web services, with an average response time of one minute.
What are the pros and cons of such a service returning "synchronous" responses, versus returning the id of the request, processing it in the background, and making the clients poll for results?
Are there any cons to HTTP connections that stay active for more than one minute? Does the default TCP keep-alive matter here?
Depending on your application, it may matter. A couple of things are worth mentioning:
HTTP protocol is sync
There is a very widespread misconception that HTTP is async. HTTP is a synchronous protocol, but your client can deal with it asynchronously. E.g., when you call a service over HTTP, your HTTP client may schedule the call on a background thread (async). However, the HTTP call itself waits until either it times out or the response comes back; during all that time, the call chain is waiting synchronously.
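To make the distinction concrete, here is a small sketch with Java's built-in HttpClient (the endpoint is hypothetical): the API hands you a future immediately, but the underlying request/response exchange still waits synchronously until a response or timeout arrives.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.CompletableFuture;

    public class AsyncClientDemo {
        public static void main(String[] args) {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/slow-operation")) // hypothetical endpoint
                    .build();

            // The calling thread is free immediately, but a background thread
            // is still waiting synchronously for the response (or a timeout)
            // before this future completes.
            CompletableFuture<HttpResponse<String>> future =
                    client.sendAsync(request, HttpResponse.BodyHandlers.ofString());

            future.thenAccept(resp -> System.out.println("Status: " + resp.statusCode()));
            future.join(); // only so the demo doesn't exit before the response
        }
    }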
Sockets
HTTP uses sockets, and there is a hard limit on sockets. Every HTTP connection (if created anew every time) opens a new socket. If you have hundreds of requests at a time, you can imagine how many HTTP calls are scheduled synchronously, and you may run out of sockets. I am not sure about other operating systems, but on Windows, even when you are done with a request, its socket is not disposed of straight away; it lingers for a couple of minutes.
Network Connectivity
Keeping an HTTP connection alive for a long time is not recommended. What if you lose network connectivity, partially or completely? Your HTTP request would time out, and you wouldn't know the status at all.
Keeping all these things in mind, it's better to schedule long-running tasks on a background process.
If you keep the user waiting while your long job is running on the server, you are tying up a valuable HTTP connection.
Best practice, from a RESTful point of view, is to reply with HTTP 202 (Accepted) and return a response with a link to poll, as sketched below.
If you do want to make the client wait, you should set a request timeout at the client end.
If there are firewalls in between, they might drop connections that are inactive for some time.
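A minimal sketch of that 202 pattern as a servlet (the job store, worker, and URL scheme are hypothetical):

    import java.io.IOException;
    import java.util.UUID;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class SubmitJobServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String jobId = UUID.randomUUID().toString();
            // ... hand the long-running work to a background worker here ...

            // Reply immediately: 202 Accepted plus a link the client can poll.
            resp.setStatus(HttpServletResponse.SC_ACCEPTED);
            resp.setHeader("Location", "/jobs/" + jobId + "/status"); // hypothetical poll URL
        }
    }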
Higher Response Throughput
Typically, you want your OLTP system (web server) to respond as quickly as possible. Since you queue the task in the background, your web server can handle more requests, which results in higher response throughput and processing capacity.
More Memory Friendly
Queuing long-running tasks as background jobs via message queues prevents abusive use of web server memory. This is good because it raises the out-of-memory threshold of your application.
More Resilient to Server Crash
If you queue the task in the background and something goes wrong, the job can be moved to a dead-letter queue, which helps you ultimately fix the problem and re-process the requests that caused your unhandled exceptions.
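As a sketch of how that can be wired up with the RabbitMQ Java client (queue and exchange names are illustrative), the work queue is declared with a dead-letter exchange so failed messages end up in a parking queue:

    import java.util.HashMap;
    import java.util.Map;
    import com.rabbitmq.client.Channel;

    public class DeadLetterSetup {
        // Declares a work queue whose rejected or failed messages are routed
        // to a dead-letter exchange for later inspection and re-processing.
        static void declare(Channel channel) throws Exception {
            channel.exchangeDeclare("dlx", "fanout", true);
            channel.queueDeclare("work.dead-letter", true, false, false, null);
            channel.queueBind("work.dead-letter", "dlx", "");

            Map<String, Object> args = new HashMap<>();
            args.put("x-dead-letter-exchange", "dlx"); // rejected/nacked messages go to dlx
            channel.queueDeclare("work", true, false, false, args);
        }
    }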

Can a gRPC client connect timeout be set independent of reconnect backoff settings?

We'd like to configure our gRPC client to reconnect very quickly after a connection is lost. (I believe the default behavior is to attempt to reconnect after 20 seconds, backing off to 120 seconds between attempts.) After a review of the available settings, we tried setting grpc.initial_reconnect_backoff_ms and grpc.min_reconnect_backoff_ms to 200. While that results in quick reconnects when a connection is lost, we sometimes see calls (from tests) fail with GRPC::Internal: 13:Completed without a response. Looking at the logging from a TCP reverse proxy sitting between the client and server, I see a connection lasting for just over 200 ms, then a second connection lasting longer. So it looks like the reconnect times are effectively serving as timeouts on connection attempts.
Is it possible to configure a gRPC client so that it will begin attempting a reconnect very quickly after a connection is lost, but allow creation of that connection to take longer than the reconnect time?
If it matters, this is a Ruby client.
The initial backoff is supposed to be 1 second.
You're experiencing a bug where the minimum connection timeout acts as both the timeout and the backoff (so the 1-second initial backoff is ignored). Both your initial problem and the failed workaround are thus caused by the same bug.
(The bug was noticed a month ago, but an issue wasn't filed due to a mix-up with a second bug. Your question here made me notice the missing issue.)

gSoap: is the keep-alive header mandatory for asynchronous messages?

Initially, I had a problem with the keep-alive option enabled (it blocked subsequent client calls; only the first call received an answer).
And now, I need to implement some asynchronous web services using gSoap.
So am I obliged to enable keep-alive in order to implement asynchronous web services?
Thanks a lot!
To give some background: establishing a TCP connection has significant setup overhead. The purpose of keep-alive is to reduce latency by allowing this overhead to be avoided on subsequent requests, reusing the already-open TCP connection instead of constructing a new one from scratch.
You can get the functionality of a web service without using keep-alive (after all, keep-alive was introduced in HTTP/1.1, and HTTP/1.0 worked for a long time without it). However, you will definitely see worse performance than if you properly supported keep-alive. It is also worth noting that, on mobile, tearing down connections and creating new ones from scratch, rather than keeping a connection open and reusing it, can have implications for battery life: closing and opening a connection may cause the radio to go to sleep and then wake up again, and the radio usually consumes more power transitioning from sleep to wake than it does in the steady state.
Your service should be multithreaded to support multiple clients; the gSOAP documentation explains how: http://www.cs.fsu.edu/~engelen/soapdoc2.html#tth_sEc19.11

Detecting aborted requests in an HttpServlet

Is there a way to find out whether an HttpServletRequest has been aborted?
I'm writing an instant browser application (a kind of chat): the clients ask for new events in a loop using AJAX HTTP requests. The server (Tomcat) handles the requests in an HttpServlet. If there are no new events for a client, the server delays the reply until a new event arrives or a timeout occurs (30 s).
Now I want to identify clients that are no longer polling. Therefore, I start a kick timer at the end of each request, which is stopped when a new request arrives. If the client closes the browser window, the TCP connection is closed and the HTTP request is aborted.
Problem: the client never runs into the kick timeout, because the servlet is still handling its event request, sleeping while waiting for an event or the timeout.
It would be great if I could somehow listen for connection-abort events and then notify the waiting request so it stops. But I couldn't find anything like that in HttpServletRequest or HttpServletResponse...
This probably won't help the OP any more, but it might help others trying to detect aborted HTTP connections in HttpServlet in general, as I was having a similar problem and finally found an answer.
The key is that when the client cancels the request, normally the only way for the server to find out is to send some data back to the client, which fails in that case. I wanted to detect when a client stops waiting for a long computation on the server, so I ended up periodically writing a single character to the response body through HttpServletResponse's writer. To force sending the data to the client, you must call HttpServletResponse.flushBuffer(), which throws a ClientAbortException if the connection has been aborted.
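A rough sketch of that approach (workIsDone() and cancelWork() are hypothetical hooks; ClientAbortException is Tomcat-specific, so the catch below uses its IOException superclass):

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class LongPollServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            try {
                while (!workIsDone()) {
                    resp.getWriter().write(" "); // heartbeat character
                    resp.flushBuffer();          // forces the send; throws if the client is gone
                    Thread.sleep(1000);
                }
                resp.getWriter().write("done");
            } catch (IOException e) {
                // On Tomcat this is a ClientAbortException: the client hung up.
                cancelWork();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        private boolean workIsDone() { return false; } // stub: real completion check goes here
        private void cancelWork() { }                  // stub: stop the long computation
    }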
You are probably using some sort of thread notification (semaphores or Object.wait) to hold and release the servlet threads. How about adding a timeout (~10 s) to the wait, then somehow checking whether the connection is still alive, and continuing the wait for another 10 s if it is?
I don't know whether there are reliable ways to poll the "liveness" of the connection (e.g., resp.getOutputStream() not throwing an exception) and, if so, which way is best (most reliable, least CPU-intensive).
It seems like having waiting requests could degrade the performance of your system pretty quickly; the threads that respond to requests would get used up fast if requests are held open. You could instead complete all requests immediately (returning "null" to clients when there is no message) and have a thread on the back end that keeps track of how long it has been since each client last polled. That thread could then mark clients as inactive.
