How to avoid a compile-time dependency on a Rebus message? - rebus

I want to integrate two bounded contexts via events raised by Context A and consumed in Context B.
However, I want to avoid a compile-time dependency, so that Context B does not have to include DLLs/libraries from Context A. (At least, I don't want the hassle of having to update a reference to A every time Context A exposes a new event type.)
Is there any preferred/best practice for doing this with Rebus?

There's a couple of ways, actually :)
Myself, I prefer to distribute the messages as separate NuGet packages – then it becomes a matter of looking into packages.config to see which dependencies each endpoint has.
As long as I keep published message schemas immutable (i.e. follow a strict append-only approach to evolving them), there's no problem in consuming events – the data is simply truncated when deserializing into an old version of the message schema.
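To make "append-only" concrete, here's what a hypothetical event contract could look like across two package versions (illustrative only, not a Rebus type):

using System;

public class OrderPlaced
{
    public Guid OrderId { get; set; }          // part of version 1 of the contract
    public DateTime PlacedAt { get; set; }     // part of version 1 of the contract
    public string CustomerEmail { get; set; }  // appended in version 2: subscribers compiled against v1 simply never see it
}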
But if you want your endpoints to be less coupled than that, you can do a couple of things.
Unless you change the serializer, the messages are serialized as UTF-8-encoded JSON. This means that a subscriber can always install its own JSON serializer that could, for example, deserialize the message into its own types, or simply into a JObject (assuming you are using Newtonsoft JSON.NET).
In fact – if I recall correctly – you can include the NuGet package Rebus.NewtonsoftJson and use it by going
Configure.With(new CastleWindsorContainerAdapter(container))
    .(...)
    .Serialization(s => s.UseNewtonsoftJson())
    .Start();
which brings Newtonsoft's JObject into the mix; you can then use it in your message handler by implementing IHandleMessages<JObject>, as sketched below.
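For example, a subscriber-side handler could look something like this (a sketch only: it assumes Rebus 2+ style async handlers, and the field name it reads is made up):

using System.Threading.Tasks;
using Newtonsoft.Json.Linq;
using Rebus.Handlers;

// Subscriber-side handler that never references Context A's assemblies:
// it receives the raw JSON as a JObject and picks out only the fields it needs.
public class ContextAEventHandler : IHandleMessages<JObject>
{
    public Task Handle(JObject message)
    {
        // Hypothetical field name: use whatever Context A actually publishes.
        var orderId = message.Value<string>("OrderId");

        // ... react to the event here ...
        return Task.CompletedTask;
    }
}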
I hope that gives you some inspiration :)

Related

Transient database contexts from separate dependencies fail for parallel queries

Background (TLDR: I need parallel queries)
I am building a REST service that needs to be able to answer queries very fast.
As such I'm pre-loading a large part of the database into memory and answering using that data instead of making complex database queries for each request. This works great, and the average response time of the API is well below the requirements and a lot faster than direct database queries.
But I have a problem. The service takes about 5 minutes to start and pre-load all of its information. During this time it cannot answer queries.
Problem
I want to change this so that during the pre-load phase it makes database queries until the in-memory cache is loaded.
This leads me to a problem. I need to have multiple active queries against my database. Anyone who has tried this in EF Core has probably seen this message.
System.InvalidOperationException: A second operation started on this context before a previous operation completed. This is usually caused by different threads using the same instance of DbContext. For more information on how to avoid threading issues with DbContext, see https://go.microsoft.com/fwlink/?linkid=2097913.
The first sentence on the linked page is
Entity Framework Core does not support multiple parallel operations
being run on the same DbContext instance.
I thought this would be easily solved by wrapping my cache-loading in one class and the direct queries in another, and then having each of these require its own instance of the database context. My service could then get both injected and use the two dependencies in parallel.
This should be what I have:
I have also set up my database context registration so that it uses a transient lifetime for all parts:
services.AddDbContext<IDataContext, DataContext>(options =>
    options.UseSqlServer(connectionString),
    ServiceLifetime.Transient, ServiceLifetime.Transient);
I have also enabled MultipleActiveResultSets=True
All of this, however, results in the exact same error as listed above.
Again, everything is transient except the HandlerService, which is a singleton because I want it to keep a copy of the cache in memory rather than load it for every request.
What have I failed to understand about the EF Core database context, or about DI in general?
I figured out what the problem was. In my case there is, as described above, one singleton handler. This handler has one (indirect) context, obtained through DI, for fulfilling requests until the cache is loaded. When multiple parallel queries are sent to the API before the cache is loaded, this error occurs because each of those requests uses the same context. In my tests I was always hitting the parallel requests as part of startup, so the singleton service was trying to use the same DbContext for multiple requests.
My solution is, in this one place, to step outside the "normal" dependency injection and use the IServiceScopeFactory to get a new instance of the dependency used to resolve requests before the cache is loaded (a rough sketch follows). Bohdan's answer led me to this conclusion and ultimate solution.
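In code, the fix looks roughly like this. It's only a sketch: Widget, DataContext and HandlerService are placeholders rather than my actual types, and it assumes the context itself is registered via AddDbContext<DataContext>(...) so it can be resolved from a scope.

using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

// Placeholder EF Core model, standing in for the real one.
public class Widget { public int Id { get; set; } }

public class DataContext : DbContext
{
    public DataContext(DbContextOptions<DataContext> options) : base(options) { }
    public DbSet<Widget> Widgets => Set<Widget>();
}

// Singleton handler that keeps the cache, but creates a fresh scope (and thus a
// fresh DbContext) for every query it has to answer from the database.
public class HandlerService
{
    private readonly IServiceScopeFactory _scopeFactory;

    public HandlerService(IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
    }

    // Called for requests that arrive before the in-memory cache is ready.
    public async Task<int> CountWidgetsAsync()
    {
        // Each call gets its own scope, and therefore its own DbContext,
        // so parallel requests no longer share one context instance.
        using (var scope = _scopeFactory.CreateScope())
        {
            var context = scope.ServiceProvider.GetRequiredService<DataContext>();
            return await context.Widgets.CountAsync();
        }
    }
}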
I'm not sure whether it qualifies for a full answer but it's too broad for a comment.
When writing .NET Core background services, which are obviously singletons too, I use IServiceScopeFactory to create services with a limited lifetime.
Here's how I create a context:
using (var scope = _scopeFactory.CreateScope())
{
    var context = scope.ServiceProvider.GetRequiredService<DbContext>();
}
My guess is that you could inject it into your handler and use it like this too. That would allow you to leave the context as scoped instead of transient, which is its default setting, btw.
Hope that helps.

ASP.NET thread agility - how to overcome?

ASP.NET is known to exhibit what is called "thread agility". In short, it means that multiple threads may be employed to fulfill a single request, although not more than one thread at a time. This is an optimization that means a thread waiting for asynchronous I/O may be returned to the pool and used to service other requests.
However, ASP.NET does not migrate all thread-related data when moving a request. Microsoft either forgot to do so, or thought that using thread-local storage (made easy by the ThreadStatic attribute) was something only the people coding ASP.NET themselves should do.
Based on quick googling, it seems to me that the only way to avoid the issue is to rely on HttpContext instead. The context is indeed migrated if ASP.NET decides to switch threads mid-request, so this overcomes the problem. But it creates a brand new headache instead: It ties your application logic to HttpContext, and therefore to a web context. That's not acceptable in all situations (in fact, I'd say it's unacceptable in most). Besides, since HttpContext is sealed and has internal constructors, you cannot mock or stub it, and therefore your logic also becomes untestable.
According to this (old) blog post, CallContext does NOT work, which is pretty infuriating given that a call context is conceptually precisely a logical thread!
Is there a simple way to reliably implement "per-LOGICAL-thread" isolation that will work in asp.net contexts as well as other contexts?
If not, does anyone know of a lightweight third-party framework that solves the problem? Does StructureMap behave correctly when ASP.NET migrates threads?
I would like a general answer, but in case anyone wonders, the specific use case I'm looking at is use of Entity Framework in a SharePoint context. We're unfortunately stuck with SP-2010 and EF 3.5 for a while. EF basically requires that data be saved using the same context it was originally read from - or else you have to keep track of changes yourself. I would like to introduce a "current model" concept. The first time the model is called upon while processing an HTTP request it should be instantiated, and then that same model instance should be used for the duration of the request. But the code relying on "Model.Current" should also work if executed in the context of a timer job. I'm fine with the timer job code explicitly disposing of the model when done with it (a task I'd like to give to a handler for HttpApplication.EndRequest in the SharePoint web context).
There may be reasons not to do this, and that's interesting too, but I would really appreciate learning of a way to achieve "logical thread isolation" in an ASP.NET context, as it'd be remarkably useful.
There is a nice post related to the problem: Implicit Async Context ("AsyncLocal").
If I got everything right, the logical call context, i.e. CallContext.LogicalGetData and CallContext.LogicalSetData, makes it possible to flow immutable data correctly, provided you live in the world past .NET 4.5. The immutability limitation is a pain, but still... it's the way to go.
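A minimal sketch of what that looks like on the full .NET Framework (4.5 or later, as the post assumes); the slot name and the "current model" value are placeholders for whatever you actually need to flow:

using System.Runtime.Remoting.Messaging;

public static class LogicalRequestData
{
    private const string Key = "MyApp.CurrentModelId"; // hypothetical slot name

    // Store an immutable value; it flows with the logical call context even if
    // ASP.NET resumes the request on a different thread.
    public static void Set(string modelId)
    {
        CallContext.LogicalSetData(Key, modelId);
    }

    public static string Get()
    {
        return (string)CallContext.LogicalGetData(Key);
    }
}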

Accessing a thread-unsafe COM object in classic ASP

I'm trying to fix a problem in a classic ASP application, but I am inexperienced with it. I tried to find more information but was unable to.
The app instantiates a COM object for data retrieval which is not thread-safe, so the following instructions were added:
Set comObject = CreateObject("comServer.comObject")
returnValue = comObject.DoWork(.......)
...
Set comObject = Nothing
However, when processing two different HTTP requests at the same time, the latter one seems to overwrite the first request, giving the first requester an error. It looks as if the comObject variable is shared between the requests.
How can I instantiate the object in such a way that every separate request in IIS gets its own instance of the comObject?
Without knowing what the object does or how it does it, it's impossible to give specific advice. A general description will have to do:
The object is broken/buggy. It is the object's responsibility to handle the problem.
A COM object is supposed to handle all threading issues internally, or defer to COM STA apartments if it cannot do it, or doesn't want to (for those aspects that an STA can handle). This goes deep into the design of the object.
Regardless of COM apartment choice, a DoWork(...) method with a semantic that precludes multiple separate COM objects in separate threads from handling simultaneous calls is a seriously problematic design at best. A proper design would either include mechanisms to handle the conflict explicitly, or just hide the conflict from the calling code and handle the conflict internally.
Depending on the details of what DoWork() does, there might be ways to fix the object in such a way that the calls can succeed in parallel, or block each other so the calls are effectively serialized, or to cause the second call to throw a "You already called me" error. Again, which approach is more appropriate depends heavily on what the method does.
If you can't modify this broken component, your best option would be to write a COM wrapper that ensures serialization to the real object.
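As a rough illustration of that last option (the DoWork signature and the wrapper shape are assumptions, not a known API), a COM-visible .NET class could serialize all calls through a process-wide lock and late-bind to the real component by its ProgID:

using System;
using System.Runtime.InteropServices;

[ComVisible(true)]
[ClassInterface(ClassInterfaceType.AutoDual)]
public class SerializedComWrapper
{
    // One gate for the whole process, so overlapping ASP requests take turns.
    private static readonly object Gate = new object();

    public object DoWork(string input)
    {
        lock (Gate)
        {
            // Late-bound call to the real, non-thread-safe component,
            // using the ProgID from the question.
            var type = Type.GetTypeFromProgID("comServer.comObject");
            dynamic real = Activator.CreateInstance(type);
            try
            {
                return real.DoWork(input);
            }
            finally
            {
                Marshal.ReleaseComObject((object)real);
            }
        }
    }
}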
In any case, there is nothing reasonable you can do from the client (ASP VBScript) side.

What is Unity's equivalent of Windsor's Release?

I have some unmanaged resources in classes I'm injecting into controllers, and I need to dispose of them once the controller is disposed (otherwise I'll have a memory leak). I have looked at IUnityContainer and did not find a Release (or similar) method that would allow me to do that.
After some trial and error (and reading), it seems to me that Unity does not keep track of what is going on with the types it creates. This is very different from Windsor, where I can call Release and the entire object graph will be released. This is actually one of the points of having a container in the first place (object lifecycle management). I should not need to call Dispose directly; the container should be able to do that for me, on the proper objects and in the proper order.
So, my question is, how can I tell Unity that an object is no longer needed and should be disposed?
If there is no way of doing that, is there a way to change the lifecycle to per web request?
As a note, changing the container is not an option. Unfortunately :(
You will have to look at the different lifetime managers in Unity. The ContainerControlledLifetimeManager will call Dispose on every item it creates. Unfortunately, this manager acts as a singleton for resolved objects, so it might not be appropriate for you.
The other alternative is to create your own lifetime manager that keeps track of the objects it creates and, when the container is disposed, disposes every one of them (a rough sketch follows).
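A rough sketch of that second alternative, written against the older Microsoft.Practices.Unity API (Unity 5+ changed the LifetimeManager signatures): it remembers every IDisposable it hands out and disposes them all when the manager itself is disposed.

using System;
using System.Collections.Concurrent;
using Microsoft.Practices.Unity;

public class DisposingTransientLifetimeManager : LifetimeManager, IDisposable
{
    private readonly ConcurrentBag<IDisposable> _tracked = new ConcurrentBag<IDisposable>();

    // Never return a cached value, so every Resolve builds a new instance.
    public override object GetValue()
    {
        return null;
    }

    // Unity hands us each newly built instance; remember the disposable ones.
    public override void SetValue(object newValue)
    {
        var disposable = newValue as IDisposable;
        if (disposable != null)
        {
            _tracked.Add(disposable);
        }
    }

    public override void RemoveValue()
    {
    }

    public void Dispose()
    {
        IDisposable disposable;
        while (_tracked.TryTake(out disposable))
        {
            disposable.Dispose();
        }
    }
}

You would pass a fresh instance of it to each registration, e.g. container.RegisterType<IRepository, Repository>(new DisposingTransientLifetimeManager()) with your own types, and dispose the manager when the unit of work (or web request) ends.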

NHibernate, Sqlite, missing tables and IOC fun

I'm doing unit testing on a class library that uses NHibernate for persistence. NHibernate is using a Sqlite in-memory database for testing purposes. Under normal circumstances, it's easy to get StructureMap to kick out a session for me.
However, because I'm using the in-memory database to improve testing speed, I need to have a single session available for the duration of a test (because the database is blown away when I create a new session). And there is another wrinkle. The case that is currently burning me is testing a custom NHibernate-based ASP.NET membership provider. These are apparently created once per AppDomain, so I shouldn't inject the session into it, for obvious reasons.
Is there a way in StructureMap to tell it to get rid of an instance of a particular type while still maintaining the bits that tell it how to instantiate that type? Really, if I could get away with it, I would just make it act like the HttpScoped object lifetime, but apparently I can only do that within the context of an HTTP request. Is there a straightforward way to manually control the lifetime of an object coming out of StructureMap?
I apologize for the length of this and the possibility that it is a dumb question. I'm solo on this project, so I don't really have anyone to bounce ideas off of.
You could wrap the session in your own ISession implementation which delegates to a real session whose lifetime you control. Then register your own ISession as an instance.
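For the "register as an instance" part, something along these lines should do it with a StructureMap 2.6-era API (assuming you use the static ObjectFactory, as the workaround below does):

using NHibernate;
using StructureMap;

public static class TestSessionRegistration
{
    // Pin every ISession request to the one session the test fixture opened,
    // leaving the rest of the configuration untouched.
    public static void UseSingleSession(ISession session)
    {
        ObjectFactory.Configure(x => x.For<ISession>().Use(session));
    }
}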
I ended up making two constructors for my provider, along with a private field of type Func<ISession>. By default, its value was set to my standard code for creating a session using StructureMap's ObjectFactory.
The overloaded constructor accepted a parameter of type Func<ISession>. That way, I can inject a strategy for creating a session when needed, but otherwise don't have to go to any extra effort. In the case of my test, I created the session in the NUnit SetUp method and destroyed it in the TearDown. I don't love this idea, but I don't currently hate it enough to rip it out... yet.
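In code, the shape I ended up with is roughly this (a sketch: the class name is made up and the MembershipProvider plumbing is omitted):

using System;
using NHibernate;
using StructureMap;

public class CustomMembershipProvider // : System.Web.Security.MembershipProvider in the real class
{
    private readonly Func<ISession> _sessionFactory;

    // Default path: resolve a session through StructureMap's ObjectFactory.
    public CustomMembershipProvider()
        : this(() => ObjectFactory.GetInstance<ISession>())
    {
    }

    // Test seam: inject a strategy that always returns the single in-memory session.
    public CustomMembershipProvider(Func<ISession> sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    private ISession Session
    {
        get { return _sessionFactory(); }
    }
}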
This got rid of the error I was experiencing with regard to the tables. However, it appears that NHibernate for some reason cannot write to an in-memory SQLite database under the conditions I created. I'm now working on testing whether I can write to one in the file system. It isn't ideal, but it will be a good long while (I hope) before the performance of writing to disk really starts to hurt.
