I have a few questions about the Rebus retry policy below:
Configure.With(...)
    .Options(b => b.SimpleRetryStrategy(maxDeliveryAttempts: 2))
    .(...)
https://github.com/rebus-org/Rebus/wiki/Automatic-retries-and-error-handling#customizing-retries
1 Can it be used for both the publisher (enqueuing messages) and the subscriber (dequeuing messages)?
2 I have a subscriber that is unable to dequeue the message, so the message is sent to the error queue.
Below is the error logged when the message is put into the error queue, but I cannot see any log entries for the retries.
[ERR] Rebus.Retry.PoisonQueues.PoisonQueueErrorHandler (Thread #9): Moving message with ID "<unknown>" to error queue "poison"
Rebus.Exceptions.RebusApplicationException: Received message with empty or absent 'rbs2-msg-id' header! All messages must be supplied with an ID. If no ID is present, the message cannot be tracked between delivery attempts, and other stuff would also be much harder to do - therefore, it is a requirement that messages be supplied with an ID.
Is it possible to define and store custom logging for each retry, not within IErrorHandler?
3 How long does each retry wait in between by default?
4 Is it possible to define a custom wait time for each retry (not within IErrorHandler)? If so, is Polly supported for this scenario, like below:
Random jitterer = new Random();
Policy
    .Handle<HttpResponseException>() // etc
    .WaitAndRetry(5, // exponential back-off plus some jitter
        retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt))
                        + TimeSpan.FromMilliseconds(jitterer.Next(0, 100))
    );
https://learn.microsoft.com/en-us/dotnet/standard/microservices-architecture/implement-resilient-applications/implement-http-call-retries-exponential-backoff-polly
Update
How can I test the retry policy?
Below is what I tried:
public class StringMessageHandler : IHandleMessages<String>
{
    public async Task Handle(String message)
    {
        // retry logic with Polly here
    }
}
I sent an invalid message of string type to the string topic; however, Handle(String message) is not invoked at all.
Rebus' retry mechanism is only relevant when receiving messages. It works by creating a "queue transaction"(*), and then if message handling fails, the message gets rolled back to the queue.
Pretty much immediately thereafter, the message will again be received and attempted to be handled. This means that there's no delay between delivery attempts.
For each failed delivery, a counter gets increased for that message's ID. That's why the message ID is necessary for Rebus to work, which also explains why your message without an ID gets immediately moved to the dead-letter queue.
Because of the disconnected nature of the delivery attempts (only a counter per message ID is stored), there's no good place to hook in a retry library like Polly.
If you want to Pollify your message handling, I suggest you carry out individual operations with Polly policies – that way, you can easily have different policies for dealing with failing web requests, failing file transfers on network drives, etc. I do that a lot myself.
To ensure that you can still shut down your bus instance properly if it happens to be in the middle of a very long Polly retry, you can pass Rebus' internal CancellationToken to your Polly executions like this:
public class PollyMessageHandler : IHandleMessages<SomeMessage>
{
    static readonly IAsyncPolicy RetryPolicy = Policy
        .Handle<Exception>()
        .WaitAndRetryForeverAsync(_ => TimeSpan.FromSeconds(10));

    readonly IMessageContext _messageContext;

    public PollyMessageHandler(IMessageContext messageContext)
    {
        _messageContext = messageContext;
    }

    public async Task Handle(SomeMessage message)
    {
        var cancellationToken = _messageContext.GetCancellationToken();

        await RetryPolicy.ExecuteAsync(DoStuffThatCanFail, cancellationToken);
    }

    async Task DoStuffThatCanFail(CancellationToken cancellationToken)
    {
        // do your risky stuff in here
    }
}
(*) The actual type of transaction depends on what is supported by the transport.
With MSMQ, it's a MessageQueueTransaction object that has Commit() and Abort() methods on it.
With RabbitMQ, Azure Service Bus, and others, it's a lease-based protocol, where a message becomes invisible for some time; if the message is ACKed within that time, the message is deleted. Otherwise – either if the message is NACKed, or if the lease expires – the message pops up again and can be received by other consumers.
With the SQL transports, it's just a database transaction.
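For illustration, here is a minimal sketch of what such a transport-level transaction looks like with the raw System.Messaging API (this is not Rebus' internal code; the queue path is made up, and the queue is assumed to be transactional):
using System.Messaging;

// Receive a message inside an MSMQ transaction: if handling fails, Abort()
// rolls the message back onto the queue so it can be received again.
using (var queue = new MessageQueue(@".\private$\some-input-queue"))
using (var transaction = new MessageQueueTransaction())
{
    transaction.Begin();
    try
    {
        var message = queue.Receive(transaction); // invisible to other consumers from here on

        // ... handle the message ...

        transaction.Commit(); // message is permanently removed from the queue
    }
    catch
    {
        transaction.Abort();  // message is rolled back and becomes visible again
        throw;
    }
}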
Related
I am programming a fullstack application which is used at festivals to monitor the inventory at bars (how many bottles of gin they have, for instance). It allows for creating a transfer request to get more stock to a specific bar and for looking up those requests. The problem arises when the connection is slow enough to cause a timeout (in my testing, at a 1 KB/s upload/download throttle it took approx. 10 s) but the data is still sent to the API.
My method which handles writing the data to the database looks like this:
public IActionResult WriteStorageTransfer([FromBody] StorageTransfer transfer)
{
    Console.WriteLine("Started the execution of method");

    var transferId = database.CreateNewDoc(transfer);

    foreach (var item in transfer.items)
    {
        var sql = $@"insert into sklpohyb(idsklkarta, iddoc, datum, pohyb, typp, cenamj, idakce, idbar, idpackage, isinbaseunit)
                     values ({item.id}, {transferId}, current_timestamp, {packMj}, {transfer.typ}, {item.prodejnicena}, {transfer.idakce}, {transfer.idbar}, case when {pack.idbaleni} = -1 then NULL else {pack.idbaleni} end, {pack.isinbaseunit})";

        database.ExecuteQueryAsTransmitter(sql);
    }

    return Ok(transferId); // transferId is then used by the frontend to display the created transfer request.
}
This would all be fine, except that the frontend appears to send the data to the API, the API processes it and writes it to the database, but then a timeout occurs on the HTTP request, crashing the method and thus never returning an HTTP response to the frontend (or returning code 0: 'Unknown error').
The exception thrown by the API:
Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware[1]
An unhandled exception has occurred while executing the request.
System.IO.IOException: The request stream was aborted.
---> Microsoft.AspNetCore.Connections.ConnectionAbortedException: The HTTP/2 connection faulted.
--- End of inner exception stack trace ---
at System.IO.Pipelines.Pipe.GetReadResult(ReadResult& result)
at System.IO.Pipelines.Pipe.GetReadAsyncResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http2.Http2MessageBody.ReadAsync(CancellationToken cancellationToken)
at System.Runtime.CompilerServices.PoolingAsyncValueTaskMethodBuilder`1.StateMachineBox`1.System.Threading.Tasks.Sources.IValueTaskSource<TResult>.GetResult(Int16 token)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpRequestStream.ReadAsyncInternal(Memory`1 destination, CancellationToken cancellationToken)
at System.Text.Json.JsonSerializer.ReadFromStreamAsync(Stream utf8Json, ReadBufferState bufferState, CancellationToken cancellationToken)
at System.Text.Json.JsonSerializer.ReadAllAsync[TValue](Stream utf8Json, JsonTypeInfo jsonTypeInfo, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Mvc.Formatters.SystemTextJsonInputFormatter.ReadRequestBodyAsync(InputFormatterContext context, Encoding encoding)
at Microsoft.AspNetCore.Mvc.Formatters.SystemTextJsonInputFormatter.ReadRequestBodyAsync(InputFormatterContext context, Encoding encoding)
at Microsoft.AspNetCore.Mvc.ModelBinding.Binders.BodyModelBinder.BindModelAsync(ModelBindingContext bindingContext)
at Microsoft.AspNetCore.Mvc.ModelBinding.ParameterBinder.BindModelAsync(ActionContext actionContext, IModelBinder modelBinder, IValueProvider valueProvider, ParameterDescriptor parameter, ModelMetadata metadata, Object value, Object container)
at Microsoft.AspNetCore.Mvc.Controllers.ControllerBinderDelegateProvider.<>c__DisplayClass0_0.<<CreateBinderDelegate>g__Bind|0>d.MoveNext()
--- End of stack trace from previous location ---
The size of the JSON sent to the API is usually ~10 KB, nothing serious like 100 MB, so I don't think the size is the problem.
This leaves the frontend hanging, and the users tend to click the button again, possibly writing multiple duplicates to the database, as they do not know whether the invoice has been processed or whether there is an error in the app.
Interestingly, the Console.WriteLine("Started the execution of method") output does not appear in the console window, yet the data does get written into the database (I checked manually).
The ideal outcome would be if I could notify the user that something went wrong while creating the transfer request and prevent it from being created in the database. I tried using a try/catch block targeting IOException.
Thanks a lot in advance, anything goes at this point
The problem arises when the connection is slow enough to cause a timeout but the data is still sent to the API.
Back to the drawing board (or time to give us more details). If your POS app (which I assume this is) wants to report a sale to the back-end, why would the user have to wait for this? And why would the user be able to report one sale twice?
Instead, have the client generate a unique transaction ID locally and store it in local storage, and (after each sale, and periodically, but most importantly: in the background) have the client try to synchronize its transactions to the server. The server can then reject duplicate transactions so it won't record the same sale twice, and your app can handle periods with spotty or no internet access.
As for your error: the timeout probably is a minute or so, which may be too long for this use case anyway. The client will ultimately throw an exception if it doesn't get an HTTP response, but do you want your bar person to wait on the POS for a minute? They are going to call it a POS then.
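A minimal sketch of what that could look like on the server side, reusing the controller from the question (the ClientTransferId property and the TransferAlreadyExists/GetTransferId/CreateTransfer helpers are hypothetical names, not part of the existing code):
[HttpPost]
public IActionResult WriteStorageTransfer([FromBody] StorageTransfer transfer)
{
    // The client generated ClientTransferId up front and can safely retry this
    // request: a duplicate submission is detected and simply returns the
    // transfer that was already created, instead of inserting it twice.
    if (database.TransferAlreadyExists(transfer.ClientTransferId))
    {
        return Ok(database.GetTransferId(transfer.ClientTransferId));
    }

    // Create the document and its rows in one transaction, keyed by ClientTransferId.
    var transferId = database.CreateTransfer(transfer);

    return Ok(transferId);
}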
I'm working on a microservice powered by Spring MVC and Spring Cloud Kafka.
For simplicity I will only focus on the part that makes HTTP request.
I have a binding function like the following (please note that I'm using the functional style binding).
@SpringBootApplication
public class ExampleApplication {

    // PayloadSender uses RestTemplate to send the HTTP request.
    @Autowired
    private PayloadSender payloadSender;

    @Bean
    public Function<KStream<String, Input>, KStream<String, Output>> process() {
        // payloadSender.send() is a blocking call which sends the payload using RestTemplate;
        // once the response is received it collects all the info and creates an "Output" object.
        return input -> input
                .map((k, v) -> KeyValue.pair(k, payloadSender.send(v))); // "send" is a blocking call
        // Question: if autoCommitOffset is set to true, would the offset be committed automatically right after the "map" function from KStream?
    }

    public static void main(String[] args) {
        SpringApplication.run(ExampleApplication.class, args);
    }
}
From this example you can see that payloadSender sends the payload from the input stream using RestTemplate and, upon receiving the response, creates the "Output" object and produces it to the output topic.
Since payloadSender.send() is blocking, I'm worried that this will cause performance issues. Most importantly, if the HTTP request times out, I'm afraid it will exceed the commit interval (usually the HTTP timeout is much, much greater than the consumer commit interval) and cause the Kafka broker to think the consumer is dead (please correct me if I'm wrong).
So is there a better solution for this case? I will eventually switch over to spring-reactive, but for the time being I need to make sure the MVC model works. Although I'm not sure spring-reactive would magically solve this issue.
The default max.poll.interval.ms is 5 minutes; you can increase it or reduce max.poll.records. You can also set a timeout on the REST call.
I'm struggling to find any good examples on how to implement error handling with Spring WebFlux.
The use case I want to handle is notifying HTTP clients that a stream has terminated unexpectedly. What I have found is that, with the out-of-the-box behaviour, a stream that is terminated (for example by raising a RuntimeException after x items have been processed) is handled too gracefully! The client is flushed all items up until the exception is raised, and then the connection is closed. As far as the client is concerned, the request was successful. The following code shows how this has been set up:
public Mono<ServerResponse> getItems(ServerRequest request) {
    Counter counter = new Counter(0);

    return ServerResponse
            .ok()
            .contentType(MediaType.APPLICATION_STREAM_JSON)
            .body(operations.find(query, Document.class, "myCollection")
                    .map(it -> {
                        counter.increment();
                        if (counter.getCount() > 500) {
                            throw new RuntimeException("an error has occurred");
                        }
                        return it;
                    }), Document.class);
}
What is the recommended way to handle the error and notify the HTTP client that the stream terminated unexpectedly?
It really depends on how you'd like to communicate that failure to the client. Should the client display some specific error message? Should the client reconnect automatically?
If this is a "business error" that doesn't prevent you from writing to the stream, you could communicate that failure using a specific event type (look at the Server Sent Events spec).
Spring WebFlux supports ServerSentEvent<T>, which allows you to control various fields such as event, id, comment and data (the actual data). Using the Flux::onErrorResume operator, you could emit a specific ServerSentEvent that has an "error" event type (look at ServerSentEvent.builder() for more).
But this is not transparent to the client, as you'd have to subscribe to specific events and change your JavaScript code; otherwise you may display error messages as regular messages.
I was trying to understand Asynchronous Controller implementation from one of links:
http://shengwangi.blogspot.in/2015/09/asynchronous-spring-mvc-hello-world.html
I was puzzled by the point that the controller thread receives the request and exits, and then the service method receives the request for further processing.
@RequestMapping("/helloAsync")
public Callable<String> sayHelloAsync() {
    logger.info("Entering controller");

    Callable<String> asyncTask = new Callable<String>() {
        @Override
        public String call() throws Exception {
            return helloService.doSlowWork();
        }
    };

    logger.info("Leaving controller");
    return asyncTask;
}
Since the controller exits and passes control to the appropriate handler mapping/JSP, what will the user see in the browser?
The browser waits for the response and then processes it.
Asynchronous processing takes place only at the server end and has nothing to do with the browser. The browser sends the request and waits for the server to write the response back.
Just because you returned a Callable doesn't mean that the controller exits the flow. Spring's response handlers will wait for the async task to be executed before writing the response back.
Please go through AsyncHandlerMethodReturnValueHandler, which handles the asynchronous response returned from the controller.
If you return a Callable, it will be handled by CallableMethodReturnValueHandler:
public void handleReturnValue(Object returnValue, MethodParameter returnType,
        ModelAndViewContainer mavContainer, NativeWebRequest webRequest) throws Exception {

    if (returnValue == null) {
        mavContainer.setRequestHandled(true);
        return;
    }

    Callable<?> callable = (Callable<?>) returnValue;
    WebAsyncUtils.getAsyncManager(webRequest).startCallableProcessing(callable, mavContainer);
}
I cleared my doubt using this link:
https://dzone.com/articles/jax-rs-20-asynchronous-server-and-client
However, they use a different way to accomplish the asynchronous processing, but the core concept should be the same for every approach.
An important part of the article:
The idea behind the asynchronous processing model is to separate connection-accepting and request-processing operations. Technically speaking, it means allocating two different threads, one to accept the client connection and the other to handle heavy and time-consuming operations. In this model, the container dispatches a thread to accept the client connection (acceptor), hands the request over to a processing (worker) thread and releases the acceptor. The result is sent back to the client by the worker thread. In this mechanism the client's connection remains open. While it may not impact performance much, such a processing model greatly impacts the server's THROUGHPUT and SCALABILITY.
What I want to do is achieve 100% thread agility with ASP.NET 4.5 async/await while waiting for a chat message (or an MSMQ message) to arrive. Async/await can release the HTTP request handling thread back to the thread pool, but how do I avoid using any thread at all while waiting for chat messages to arrive?
In Java, I can use the latest Jersey REST API to achieve this using the @ManagedAsync/@Suspended annotations:
// Java code, using the Jersey REST API
@Path("/Chatroom")
public class ChatHandler {

    private static final HashMap<String, AsyncResponse> map = new HashMap<>();

    @GET
    @Path("/JoinRoom")
    @ManagedAsync
    public void joinRoom(@PathParam("UserId") String id, @Suspended final AsyncResponse ar) {
        map.put(id, ar);
    }

    @GET
    @Path("/PostChat")
    public String sendChat(@PathParam("UserId") String id, @QueryParam("message") String message) throws InterruptedException {
        map.get(id).resume(message);
        return "Message successfully sent to user " + id;
    }
}
Here is a description of the above code. John first uses a URL like this to join the chat room as user john in order to receive chat messages. In the corresponding method joinRoom(), Jersey will, just like async/await in ASP.NET 4.5, return the HTTP request thread back to the thread pool. However, here I put the HttpContext (in Jersey, the AsyncResponse object) into a hash map for later use.
Then, say, for the next 50 seconds nobody sends a chat message; the backend Jersey server will not use any threads on anything during those 50 seconds. No thread is spent waiting for the next chat message. Then, in the 51st second, Mary goes to the URL to send a hello message to user john. In the sendChat() method, I retrieve the HttpContext (AsyncResponse) from the hash map and resume it with the chat message. At this point, Jersey will find an available thread in the HTTP request thread pool to resume the AsyncResponse and send the chat message "hello" to John's browser. Mary's browser will see "Message successfully sent to user john".
So how can I achieve the same 100% thread agility with ASP.NET async/await? In my await method, I cannot find a correct way to wait for the chat message without occupying a worker thread.
There is already an ASP.NET system designed for this: SignalR. It was designed with async in mind, so it makes optimum use of threads.
If you don't want to use SignalR, you can use any of a number of asynchronous coordination primitives. TPL Dataflow may be a good fit for this kind of scenario, or you can use SemaphoreSlim or TaskCompletionSource or any of the primitives in my AsyncEx library.
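To make the TaskCompletionSource option a bit more concrete, here is a minimal sketch (the controller, routes, and in-memory storage are made-up simplifications; real code would also need timeouts, cleanup of abandoned waiters, support for multiple waiters per user, and routing configuration):
using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Web.Http; // ASP.NET Web API 2

// Sketch only: JoinRoom awaits a TaskCompletionSource without holding a thread,
// and PostChat completes it when a message arrives, which resumes the waiting request.
public class ChatController : ApiController
{
    static readonly ConcurrentDictionary<string, TaskCompletionSource<string>> Waiters =
        new ConcurrentDictionary<string, TaskCompletionSource<string>>();

    [HttpGet]
    public async Task<string> JoinRoom(string userId)
    {
        var tcs = new TaskCompletionSource<string>();
        Waiters[userId] = tcs;

        // No thread is blocked here: the request "parks" until the TCS is completed.
        return await tcs.Task;
    }

    [HttpGet]
    public string PostChat(string userId, string message)
    {
        TaskCompletionSource<string> tcs;
        if (Waiters.TryRemove(userId, out tcs))
        {
            tcs.TrySetResult(message); // resumes the pending JoinRoom request
            return "Message successfully sent to user " + userId;
        }

        return "User " + userId + " is not currently waiting for messages";
    }
}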