Given that I have a web/SOAP service, how do I set up and tear down a proper transaction context for Rebus (the messaging bus)? When Rebus calls into a message handler this is not a problem, since Rebus sets up the transaction context before calling the handler - but what about the opposite direction, where a web service handler needs to send/publish a message via Rebus?
I am not interested in how to implement an HTTP module or similar - only the basics around Rebus: what is needed to prepare Rebus for sending a message?
The web service code has its own transaction going on when talking to the application database. I need to be able to set up Rebus when setting up the database transaction and commit/rollback Rebus when doing the same with the database.
I have a similar problem with standalone command-line programs that need to both interact with a database and send Rebus messages.
Rebus will automatically enlist send and publish operations in its own "ambient transaction context", which is accessed via the static(*) AmbientTransactionContext.Current property.
You could implement ITransactionContext yourself if you wanted to, but Rebus comes with DefaultTransactionContext in the box.
You use it like this:
using (var context = new DefaultTransactionContext())
{
    AmbientTransactionContext.Current = context;

    // send and publish things in here

    // complete the transaction
    await context.Complete();
}
which could easily be put e.g. in an OWIN middleware or something similar.
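For example, here is a rough OWIN middleware sketch (illustrative only, assuming Microsoft.Owin's app.Use overload that takes the OWIN context and a next delegate):
app.Use(async (owinContext, next) =>
{
    using (var transactionContext = new DefaultTransactionContext())
    {
        AmbientTransactionContext.Current = transactionContext;
        try
        {
            // run the rest of the pipeline - your DB work and bus.Send/bus.Publish calls go here
            await next();

            // outgoing messages are only actually sent when the context is completed
            await transactionContext.Complete();
        }
        finally
        {
            // don't leak the context to whatever reuses this execution context
            AmbientTransactionContext.Current = null;
        }
    }
});
If the request fails before Complete is called, the enlisted send/publish operations are simply discarded, which matches the commit/rollback semantics asked for above.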
(*) The property is static, but the underlying value is bound to the current execution context (by using CallContext.LogicalGet/SetData), which means that you can think of it as thread-bound, with the nice property that it flows as expected to continuations.
In Rebus 2.0.2 it is possible to customize the accessors used to get/set the context by calling AmbientTransactionContext.SetAccessors(...) with an Action<ITransactionContext> and a Func<ITransactionContext>, e.g. like this:
AmbientTransactionContext.SetAccessors(
    context =>
    {
        if (HttpContext.Current == null)
        {
            throw new InvalidOperationException("Can't set the transaction context when there is no HTTP context");
        }
        HttpContext.Current.Items["current-rbs-context"] = context;
    },
    () => HttpContext.Current?.Items["current-rbs-context"] as ITransactionContext
);
which in this case makes it work in a way that flows properly even when using old school HTTP modules ;)
Background:
I have a web application that also uses SignalR.
I'm using AutoFac as the DI container, and my DbContext is registered as
builder.RegisterType<MyDbContext>().AsSelf().InstancePerLifetimeScope();
i.e. MyDbContext is registered as a per-request dependency.
The ChatHub is also registered with the same lifetime, i.e.
builder.Register<IHubContext>((c) =>
{
    return GlobalHost.ConnectionManager.GetHubContext<ChatHub>();
})
.InstancePerLifetimeScope();
Problem:
The problem I am facing is that the DbContext throws an error saying multiple threads are calling it.
Here is the exact error:
System.NotSupportedException: A second operation started on this context before a previous asynchronous operation completed. Use 'await' to ensure that any asynchronous operations have completed before calling another method on this context. Any instance members are not guaranteed to be thread safe. at System.Data.Entity.Internal.ThrowingMonitor.EnsureNotEntered()
Note: I have looked through the entire code and I am 100% sure that I have awaited all async calls to the database.
Possible Solution:
If I change the AutoFac registration as shown below, the error goes away, but I suspect it will require more database connections.
builder.RegisterType<MyDbContext>().AsSelf();
i.e. remove InstancePerLifetimeScope
Expectation:
A better solution than increasing the number of database connections.
Make sure you don't open the same entity twice.
Example:
var x = db.user.FirstOrDefault(a => a.id == 1);

// some code here

var y = db.user.FirstOrDefault(a => a.id == 1);
y.userName = "";
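In other words, reuse the entity the context is already tracking instead of querying for it again; a sketch of the safer shape:
var x = db.user.FirstOrDefault(a => a.id == 1);

// some code here

x.userName = "";   // reuse the already-tracked entity instead of loading it a second time
db.SaveChanges();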
Consider this extremely simple .NET Core 3.1 (and .NET 5) application with no special config or hosted services:
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

internal class Program
{
    public static async Task Main(string[] args)
    {
        var builder = Host.CreateDefaultBuilder(args);
        builder.UseWindowsService();
        var host = builder.Build();

        var fireAndForget = Task.Run(async () => await host.RunAsync());

        await Task.Delay(5000);
        await host.StopAsync();
        await Task.Delay(5000);

        await host.RunAsync();
    }
}
The first Run (started as a background fire-and-forget task purely for the purpose of this test) and the subsequent Stop complete successfully. Upon calling Run a second time, I receive this exception:
System.AggregateException : 'Object name: 'EventLogInternal'.Cannot access a disposed object. Object name: 'EventLogInternal'.)'
If I do the same but using StartAsync instead of RunAsync (this time with no need for a fireAndForget task), I receive a System.OperationCanceledException upon calling StartAsync the second time.
Am I right to deduce that a .NET Generic Host isn't meant to be stopped and restarted?
Why do I need this?
My goal is to have a single application running as a Windows Service that hosts two different .NET Generic Hosts. This is based on a recommendation from here, in order to have separate configuration, dependency injection rules, and message queues.
One would stay active for the whole application lifetime (until the service is stopped in Windows Services) and would serve as an entry point that receives message events to start/stop the other one, which would be the main processing host with full services. This way the main services could sit in an "idle" state until they receive a message triggering their processing, and another message could return them to the idle state.
The host returned by CreateDefaultBuilder(...).Build() is meant to represent the whole application. From docs:
The main reason for including all of the app's interdependent resources in one object is lifetime management: control over app startup and graceful shutdown.
The default builder registers many services in singleton scope, and when the host is stopped, all of these services are disposed or switched to some "stopped" state. For example, before calling StopAsync you can resolve IHostApplicationLifetime:
var appLifetime = host.Services.GetService<IHostApplicationLifetime>();
It has cancellation tokens representing application states. When you call StartAsync or RunAsync after stopping, all of the tokens still have IsCancellationRequested set to true. That's why the OperationCanceledException is thrown in Host.StartAsync.
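For illustration, a small sketch continuing from the snippet above (the True comments assume the host has already been stopped):
await host.StopAsync();

// these tokens stay cancelled for the lifetime of this host instance,
// so a later StartAsync/RunAsync sees IsCancellationRequested == true
Console.WriteLine(appLifetime.ApplicationStopping.IsCancellationRequested); // True
Console.WriteLine(appLifetime.ApplicationStopped.IsCancellationRequested);  // True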
You can also list the other services registered by the default builder during configuration.
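For example, a sketch that enumerates the IServiceCollection inside ConfigureServices and prints each registration with its lifetime:
var builder = Host.CreateDefaultBuilder(args)
    .ConfigureServices((context, services) =>
    {
        // print everything the default builder has registered so far, with its lifetime
        foreach (var descriptor in services)
        {
            Console.WriteLine($"{descriptor.Lifetime}: {descriptor.ServiceType.FullName}");
        }
    });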
To me it sounds like you just need some background jobs to process messages, but I've never used NServiceBus, so I don't know how it would work with something like Hangfire. You can also implement IHostedService and use it in the generic host builder.
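If you go the IHostedService route, a bare-bones sketch could look like this (ProcessingService and the registration line are illustrative, not part of the original question):
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public sealed class ProcessingService : IHostedService
{
    public Task StartAsync(CancellationToken cancellationToken)
    {
        // start listening for messages / kick off background processing here
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        // stop processing and flush any in-flight work here
        return Task.CompletedTask;
    }
}

// registered in the generic host builder:
// .ConfigureServices(services => services.AddHostedService<ProcessingService>());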
I'm doing something like:
do
{
    using IHost host = BuildHost();
    await host.RunAsync();
} while (MainService.Restart);
with MainService constructor:
public MainService(IHostApplicationLifetime HostApplicationLifetime)
MainService.Restart is a static bool that MainService itself sets in response to some event, at which point it also calls HostApplicationLifetime.StopApplication().
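Putting those pieces together, a rough sketch of what such a MainService could look like (the restart trigger is hypothetical - wire it to whatever event you react to):
public sealed class MainService : IHostedService
{
    public static bool Restart { get; private set; }

    private readonly IHostApplicationLifetime _hostApplicationLifetime;

    public MainService(IHostApplicationLifetime HostApplicationLifetime)
    {
        _hostApplicationLifetime = HostApplicationLifetime;
    }

    public Task StartAsync(CancellationToken cancellationToken)
    {
        Restart = false; // reset every time a freshly built host starts
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken) => Task.CompletedTask;

    // call this from whatever event should trigger a restart
    private void OnRestartRequested()
    {
        Restart = true;
        _hostApplicationLifetime.StopApplication(); // RunAsync returns and the do/while loop rebuilds the host
    }
}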
I am thinking of a way to manage failed messages in Rebus.
In my second-level retry strategy I want to save the message and exception details to the database, so that I can later review the error details and decide whether to resend the message to be reprocessed, or ignore and delete it.
In the handler I am capturing details as follows:
public async Task Handle(IFailed<StudentCreated> failedMessage)
{
    // Logic to defer the message with rebus_defer_count not shown

    DictionarySerializer dictionarySerializer = new DictionarySerializer();
    ObjectSerializer objectSerializer = new ObjectSerializer();

    string headers = dictionarySerializer.SerializeToString(failedMessage.Headers);
    string message = objectSerializer.SerializeToString(failedMessage.Message);

    Exception lastException = failedMessage.Exceptions.Last();
    string exception = objectSerializer.SerializeToString(lastException);

    // Logic to save the message and error details in the database not shown
}
This enables me to save the message and error details in the database, where I can build a dashboard to view the messages and resolve them as I wish, rather than working directly against the broker queue (e.g. RabbitMQ).
Now my question is: how can I return the messages to the handler where the error was raised, using the information provided in the headers?
What is the best way to do this with Rebus, provided I have all the details from the failed message as shown in my code snippet?
Regards
What you're trying to achieve will be much easier if you make a small change to your application. You see, Rebus already has a built-in service in place for handling failed messages called IErrorHandler.
You can register your own error handler like this:
Configure.With(...)
    .(...)
    .Options(o => o.Register<IErrorHandler>(c => new MyCustomErrorHandler()))
    .Start();
thus replacing the default error handler (which, by the way, is PoisonQueueErrorHandler).
The error handler gets to handle the message in the form of the raw TransportMessage (i.e. simply headers and a byte[]) when all retries have failed, so this is the perfect place to save the message to your database.
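A sketch of what such an error handler could look like (check the exact IErrorHandler signature for your Rebus version; SaveToDatabaseAsync is a hypothetical helper):
class MyCustomErrorHandler : IErrorHandler
{
    public async Task HandlePoisonMessage(TransportMessage transportMessage,
        ITransactionContext transactionContext, Exception exception)
    {
        // the raw headers and the raw body bytes of the failed message
        var headers = transportMessage.Headers;
        var body = transportMessage.Body;

        await SaveToDatabaseAsync(headers, body, exception.ToString());
    }
}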
If you then look here, you can see how Rebus' default error handler adds its own queue name as the rbs2-source-queue header, meaning that the message can later be sent back to that queue.
With this information, it should be fairly easy to write some code that inspects the message for its source queue and sends a RabbitMQ message to that queue.
This will only work if the re-delivery service has access to the RabbitMQ instance where all of your Rebus endpoints are running, of course. It's less straightforward if you want to implement this in a general way: e.g. if you were using Fleet Manager, each Rebus instance would use a long-polling protocol to query the server for commands, which enables Fleet Manager to tell any Rebus instance to e.g. send a previously failed message to any queue it has access to.
I have been reading This Book on page 58 to understand how to do asynchronous event integration between microservices.
Using RabbitMQ and publish/subscribe patterns facilitates pushing events out to subscribers. However, given microservice architectures and Docker usage, I expect to have more than one instance of a microservice 'type' running. From what I understand, all instances will subscribe to the event and therefore would all receive it.
The book doesn't clearly explain how to ensure that only one of the instances handles the event.
I have looked into the deduplication section, but that describes a pattern for deduplicating within a service instance, not necessarily across instances...
Each microservice instance would subscribe using something similar to:
public void Subscribe<T, TH>()
    where T : IntegrationEvent
    where TH : IIntegrationEventHandler<T>
{
    var eventName = _subsManager.GetEventKey<T>();
    var containsKey = _subsManager.HasSubscriptionsForEvent(eventName);
    if (!containsKey)
    {
        if (!_persistentConnection.IsConnected)
        {
            _persistentConnection.TryConnect();
        }

        using (var channel = _persistentConnection.CreateModel())
        {
            channel.QueueBind(queue: _queueName,
                              exchange: BROKER_NAME,
                              routingKey: eventName);
        }
    }

    _subsManager.AddSubscription<T, TH>();
}
I need to understand how multiple instances of the same 'type' of microservice can deduplicate without losing the message if an instance goes down while processing it.
"From what I understand all instances will subscribe to the event and therefore would all receive it."
Only one instance of a subscriber will process a given message/event. When you have multiple instances of a service running and subscribed to the same subscription, the first one to pick up the message makes it invisible to the others (the visibility timeout). If that instance processes the message within the allotted time, it tells the queue to delete the message; if it doesn't, the message reappears in the queue for any instance to pick it up again.
All standard service buses (RabbitMQ, SQS, Azure Service Bus, etc.) provide this feature out of the box.
By the way, I have read this book and used the above code from eShopOnContainers, and it works the way I described.
You should look into the following pattern as well:
Competing Consumers pattern
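In RabbitMQ terms, competing consumers simply means that every instance of a service consumes from the same queue. A rough sketch with RabbitMQ.Client (queue name and handler are illustrative):
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// every instance of the service declares/consumes the SAME queue,
// so the broker delivers each message to only one of them
channel.QueueDeclare("student-service", durable: true, exclusive: false, autoDelete: false);
channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (_, ea) =>
{
    try
    {
        // handle the integration event here
        channel.BasicAck(ea.DeliveryTag, multiple: false);
    }
    catch
    {
        // unacked/nacked messages are redelivered, so nothing is lost if an instance dies mid-processing
        channel.BasicNack(ea.DeliveryTag, multiple: false, requeue: true);
    }
};
channel.BasicConsume("student-service", autoAck: false, consumer);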
Hope that helps!
I have been playing around with Rebus and RabbitMQ, and came across a scenario I cannot seem to get working.
I have a couple of queues, queue1 and queue2, and they take the same class/message type. Now, Rebus seems to prefer a different message type per queue; that is not an option for me right now, so I use the advanced routing API: bus.Advanced.Routing.Send("queue1", Message).
I would like to use the bus.Defer functionality but am unsure how to combine the two. I know I might need to introduce a waiting queue as an external timeout manager (which I have yet to get working too, but that's for another day).
Has anyone done anything similar?
How to send the message
As you have probably discovered, when you bus.Defer, Rebus will use the endpoint mappings to look up the destination queue from the type of the message being deferred (which is analogous to bus.Send/bus.SendLocal, in that it has an accompanying bus.DeferLocal too, which always sends to the sender's own input queue).
What is missing is something analogous to bus.Advanced.Routing.Send, but fortunately it is pretty easy to emulate a combination of bus.Defer and an explicitly routed message by setting the rbs2-deferred-recipient header on the message:
var headers = new Dictionary<string, string>
{
    { Headers.DeferredRecipient, "destination-queue" }
};

var delay = TimeSpan.FromMinutes(5);

await bus.DeferLocal(delay, yourMessage, headers);
How to configure the timeout manager
You can use Rebus' internal timeout manager by configuring some kind of timeout persistence – e.g. by pulling in Rebus.SqlServer and using SQL Server to store timeouts like so:
Configure.With(...)
    .(...)
    .Timeouts(t => t.StoreInSqlServer(...))
    .Start();
Another option is to install a Rebus endpoint as a dedicated timeout manager, which simply uses the same configuration as can be seen above, and then all other endpoints do this:
Configure.With(...)
    .(...)
    .Timeouts(t => t.UseExternalTimeoutManager("timeouts"))
    .Start();
assuming that your timeout manager uses the timeouts queue.
Update: relevant from Rebus 5
Rebus 5 (which is currently available as a prerelease package on NuGet.org) has built-in support for deferring messages to an explicitly specified destination queue.
It can be done like this:
var delay = TimeSpan.FromMinutes(2);
await bus.Advanced.Routing.Defer("dest-queue", delay, message);
which will simply carry out the steps mentioned above underneath the covers.