Autofac, ASP.NET integration, and Dispose

Autofac newbie here, but I like what I see so far. I'm trying to take advantage of request-lifetime of my resolved objects and I'm having trouble confirming that a dispose is actually happening after a request is done.
I have a disposable object that I get at the start of a page request and dispose at the end. I'm using autofac to get an instance of the object now and I wanted to see if autofac would do the disposing for me.
I've instrumented the Dispose() method on the object in question, and I can see it 'fire' when my page does the lifetime management itself. I see no evidence of it when I don't dispose the object myself and instead let Autofac do it.
I'm using these instructions to get things configured, including the web.config and global.asax changes. I am able to instantiate the object just fine, but I can't tell if it's really being disposed. Is there another step?

Whether you dispose the object manually within the page or let the Autofac module do it, there will be a difference in when your object is disposed with respect to the request lifecycle. The Autofac ContainerDisposalModule will not dispose the request container, and with it your object, until HttpApplication.EndRequest fires, which is at the very end of the request lifecycle.
Depending on how you are tracing the call to your object's Dispose method, it is possible that you simply don't see the output. How are you instrumenting your Dispose method?
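For what it's worth, writing to the page (e.g. Response.Write) from Dispose is likely to show nothing, because by the time EndRequest fires the response has already been rendered. A trace-based probe is a more reliable check; the class below is purely illustrative:
using System;
using System.Diagnostics;

// Illustrative only: a disposable component instrumented via Trace rather than the page output.
public class TrackedService : IDisposable
{
    public void Dispose()
    {
        // Trace output (visible in the debugger or a configured TraceListener) still works
        // after the page has finished rendering, when the request container is torn down.
        Trace.WriteLine("TrackedService disposed at " + DateTime.Now.ToString("HH:mm:ss.fff"));
    }
}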

Repeat of answer from your re-post:
Most of the time this happens (in any IoC container) you'll find that one component along a chain of dependencies is a singleton.
E.g. A -> B -> C
If A is 'factory', B is 'singleton' and C is 'factory', then resolving A will get a reference to the singleton B, which will always reference the same C.
In order for a new C to get created every time you resolve A, B must also be 'factory'.
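To make that concrete, here is a minimal sketch using current Autofac registration names (older versions spell the lifetimes differently); A, B and C stand for classes where A depends on B and B depends on C:
var builder = new ContainerBuilder();

builder.RegisterType<A>().InstancePerDependency(); // 'factory': a new A per resolve
builder.RegisterType<B>().SingleInstance();        // 'singleton': one B for the whole container
builder.RegisterType<C>().InstancePerDependency(); // 'factory', but effectively created once,
                                                   // because the single B holds on to its C

var container = builder.Build();

// Resolving A twice gives two As, but both share the same B and therefore the same C.
// For a fresh C on every resolve of A, B must also be InstancePerDependency
// (or at least scoped per request).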

I figured it out!
I was asking the WRONG container for the object instance - I was asking the application-container for the object and not the request-container.
D'oh!
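For anyone hitting the same thing: the difference looks roughly like the sketch below. The member names (IContainerProviderAccessor, ContainerProvider, RequestLifetime) come from the classic Autofac ASP.NET integration and may vary between Autofac versions, and MyService is a placeholder, so treat this as an assumption rather than the exact API:
// In a page or module, go through the request lifetime, not the application container.
var accessor = (IContainerProviderAccessor)HttpContext.Current.ApplicationInstance;

// Wrong: resolved from the application container, so never tied to the request:
// var service = accessor.ContainerProvider.ApplicationContainer.Resolve<MyService>();

// Right: resolved from the per-request lifetime, so it is disposed at EndRequest:
var service = accessor.ContainerProvider.RequestLifetime.Resolve<MyService>();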

IDisposable is nothing more than an interface that allows you to define a Dispose method. The most common reason for making a class disposable is that it holds resources that should be freed explicitly (such as a Windows resource handle). For the most part the IDisposable interface is not required, as the garbage collector is extremely powerful and will do a much better job of managing memory. However, there are obviously plenty of cases where handles must be freed immediately, which brings me to the next point: implementation of IDisposable.
What NOT to do:
var myClass = new MyDisposableClass();
// do stuff with myClass
myClass.Dispose();
Proper usage:
using (var myClass = new MyDisposableClass())
{
    // do stuff with myClass
}
The compiler will effectively build the same as the following:
MyDisposableClass myClass = new MyDisposableClass();
try
{
    // do stuff with myClass
}
finally
{
    myClass.Dispose();
}
The important distinction is that no matter what happens, you know your Dispose will get called. In addition, you can add a destructor (which, if it exists, is called by the garbage collector) and have it call your Dispose method; but if you need to do this for whatever reason, be sure not to free the same resource twice (set your references to null after releasing, or keep a disposed flag).
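That combination is usually written as the standard dispose pattern; a minimal sketch (the unmanaged handle is only illustrative):
using System;

public class ResourceHolder : IDisposable
{
    private IntPtr _handle;   // stand-in for some unmanaged resource
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // the finalizer is no longer needed once disposed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;     // guard against freeing the same resource twice

        if (disposing)
        {
            // release managed resources here (only safe when called from Dispose())
        }

        // release unmanaged resources here
        _handle = IntPtr.Zero;
        _disposed = true;
    }

    ~ResourceHolder()              // destructor: called by the GC if Dispose was never called
    {
        Dispose(false);
    }
}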

When to use Request.RegisterForDispose?

For ASP.NET Web API, I've been working on my own implementation of IHttpControllerActivator and am left wondering when (or why?) to use the HttpRequestMessage extension method "RegisterForDispose".
I see examples like this, and I can see the relevance in it, since IHttpController doesn't inherit IDisposable, and an implementation of IHttpController doesn't guarantee its own dispose logic.
public IHttpController Create(HttpRequestMessage request, HttpControllerDescriptor controllerDescriptor, Type controllerType)
{
    var controller = (IHttpController)_kernel.Get(controllerType);
    request.RegisterForDispose(new Release(() => _kernel.Release(controller)));
    return controller;
}
But then I see something like this and begin to wonder:
public IHttpController Create(
    HttpRequestMessage request,
    HttpControllerDescriptor controllerDescriptor,
    Type controllerType)
{
    if (controllerType == typeof(RootController))
    {
        var disposableQuery = new DisposableStatusQuery();
        request.RegisterForDispose(disposableQuery);
        return new RootController(disposableQuery);
    }
    return null;
}
In this instance RootController isn't registered for disposal, presumably because it's an ApiController or MVC controller and will therefore dispose of itself.
The instance of DisposableStatusQuery is registered for disposal since it's a disposable object, but why couldn't the controller dispose of the instance itself? RootController has knowledge of disposableQuery (or rather, of its interface or abstract base), so it would know the object is disposable.
When would I actually need to use HttpRequestMessage.RegisterForDispose?
One scenario I've found it useful for: a custom ActionFilter.
Because the attribute is cached/reused, items within the attribute shouldn't rely on the controller for their disposal (to my understanding, and probably with caveats)... so in order to create a custom attribute that isn't tied to a particular controller type/implementation, you can use this technique to clean up your stuff. In my case, it's for an ambient DbContextScope attribute.
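The shape of such a filter might look like the sketch below; AmbientScope is a stand-in for whatever disposable ambient resource the filter sets up (e.g. a DbContextScope), not a real API:
using System;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class AmbientScopeAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        var scope = new AmbientScope();

        // The attribute instance is cached and reused across requests, so the scope
        // must be tied to the request rather than to the attribute or the controller.
        actionContext.Request.RegisterForDispose(scope);
    }
}

public class AmbientScope : IDisposable
{
    public void Dispose() { /* complete / clean up the ambient resource */ }
}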
RegisterForDispose is a hook that will be called when the request is disposed. It is often used together with some of the dependency injection containers.
For instance, some containers (like Castle Windsor) by default track every dependency that they resolve. This is Windsor's default ReleasePolicy, LifecycledComponentsReleasePolicy, which keeps track of all components that were created. In other words, the garbage collector cannot clean up your component while the container is still tracking it, which results in memory leaks.
So, for example, when you define your own IHttpControllerActivator to use with a dependency injection container, you do it in order to resolve the concrete controller and all its dependencies. At the end of the request you need to release all the dependencies the container created, otherwise you end up with a big memory leak. RegisterForDispose gives you the opportunity to do exactly that.
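Put together, a Windsor-based activator might look something like the sketch below (the Release wrapper mirrors the one used in the earlier snippet; it is just an IDisposable that runs a callback when the request is disposed):
using System;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Dispatcher;
using Castle.Windsor;

public class WindsorControllerActivator : IHttpControllerActivator
{
    private readonly IWindsorContainer _container;

    public WindsorControllerActivator(IWindsorContainer container)
    {
        _container = container;
    }

    public IHttpController Create(HttpRequestMessage request,
        HttpControllerDescriptor controllerDescriptor, Type controllerType)
    {
        var controller = (IHttpController)_container.Resolve(controllerType);

        // Without this, Windsor's default release policy keeps tracking the controller
        // and its graph of dependencies for the lifetime of the container.
        request.RegisterForDispose(new Release(() => _container.Release(controller)));

        return controller;
    }

    private class Release : IDisposable
    {
        private readonly Action _release;
        public Release(Action release) { _release = release; }
        public void Dispose() { _release(); }
    }
}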
I use RegisterForDispose with DI containers. Based on a blog post, I dispose the (nested) container after each request so that it cleans up all the objects it created.
One may want to hook code around the life cycle of a request that (1) has little to do with controllers and (2) does not subclass the request type.
I would imagine the idiomatic form of such code takes the shape of extension methods on HttpRequestMessage, for example. If the code allocates disposable resources, it needs to hook the disposal code to something. I'm not too familiar with the various extension points of the ASP.NET pipeline, but I suppose hooking up code just to dispose of resources at the end of request processing was common enough to justify a dedicated registration mechanism for disposable resources (as opposed to a more general way of subscribing code to be executed).
Since you're asking, I found a nice example scenario in this sample. Here, an Entity Framework context is set as a property of the request and must be disposed of properly. While this property is intended to be used by controllers, it is not specific to any controller or controller super-class, so in my opinion this is a very sensible design choice. If you're curious why, it's because these requests are "OData batch requests", and controller actions will be invoked multiple times over the lifetime of each request (once per "operation"). Certain operations are grouped into atomic "changesets" that must be wrapped in transactions at a higher level than the controllers (a dedicated mechanism, an ODataBatchHandler, is used, so that the controllers themselves are oblivious to this). Hence, controllers alone are not enough, because they cannot dispose of the context themselves in this scenario.
Hope this helps.

Why would I want to use UnitOfWork with Repository Pattern?

I've seen a lot about UnitOfWork and the Repository pattern on the web but still don't have a clear understanding of why and when to use them -- it's somewhat confusing to me.
Considering I can make my repositories testable by using DI through an IoC container, as suggested in the post What are best practices for managing DataContext, I'm considering passing a context in as a dependency on my repository constructor and then disposing of it like so:
public interface ICustomObjectContext : IDisposable { }

public interface IRepository<T> { } // Not sure if I need to reference IDisposable here

public interface IMyRepository : IRepository<MyRepository> { }

public class MyRepository : IMyRepository
{
    private readonly ICustomObjectContext _customObjectContext;

    public MyRepository(ICustomObjectContext customObjectContext)
    {
        _customObjectContext = customObjectContext;
    }

    public void Dispose()
    {
        if (_customObjectContext != null)
        {
            _customObjectContext.Dispose();
        }
    }

    ...
}
My current understanding of using UnitOfWork with the Repository pattern is that it exists to perform an operation across multiple repositories -- this behavior seems to contradict what Ladislav Mrnka recommends for web applications:
For web applications use single context per request. For web services use single context per call. In WinForms or WPF application use single context per form or per presenter. There can be some special requirements which will not allow to use this approach but in most situation this is enough.
See the full answer here
If I understand him correctly, the DataContext should be short-lived and used on a per-request or per-presenter basis (I've seen this in other posts as well). In this case it would be appropriate for the repo to perform operations against the context, since the scope is limited to the component using it -- right?
My repos are registered in the IoC container as transient, so I should get a new one with each request. If that's correct, then I should be getting a new context (with the code above) with each request as well and then disposing of it -- that said... why would I use the UnitOfWork pattern with the Repository pattern if I'm following the convention above?
As far as I understand the Unit of Work pattern doesn't necessarily cover multiple contexts. It just encapsulates a single operation or -- well -- unit of work, similar to a transaction.
Creating your context basically starts a Unit of Work; calling DbContext.SaveChanges() finishes it.
I'd even go so far as to say that in its current implementation Entity Framework's DbContext / ObjectContext resembles both the repository pattern and the unit of work pattern.
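To illustrate the point (names like ShopContext are just placeholders), a single context already gives you the unit-of-work behavior:
// Changes accumulate on the context and are committed together by SaveChanges.
using (var context = new ShopContext())
{
    var person = context.Persons.Find(personId);
    var product = context.Products.Find(productId);

    person.CreateOrder(orderId, product);

    context.SaveChanges(); // one unit of work, one transaction
}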
I would use a simplified UoW if I wanted to push the context's SaveChanges out of the repositories when they share the same context instance across one web request.
I imagine you have something like a Save() method on your repositories that looks similar to _customObjectContext.SaveChanges(). Now let's assume you have two methods containing business logic that use repos to persist changes to the DB. For the sake of simplicity we'll call them MethodA and MethodB, both containing a fair amount of logic. MethodA is used on its own in the system, but it is also called by MethodB for some reason. What happens is that MethodA saves changes on some repository, and since we are still in the same request, the changes MethodB made before calling MethodA are also saved, whether we want that or not. So in such a case we unintentionally break the transaction inside MethodB and make the code harder to understand (see the sketch below).
I hope I described this clearly enough; it wasn't easy. Anyway, other than that I cannot see why UoW would be helpful in your scenario. As Dennis Traub pointed out quite correctly, ObjectContext and DbContext are in fact an implementation of a UoW, so you would probably be reinventing the wheel by implementing it yourself.
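Roughly, the situation described above (the repository members are hypothetical, purely to show the shape of the problem):
// Both methods receive the same repository, backed by the same request-scoped context.
public void MethodB(IMyRepository repository)
{
    repository.Add(new Order());  // pending change, not yet committed

    MethodA(repository);          // MethodA calls repository.Save() internally...

    // ...so the pending Order above was flushed as a side effect, even though
    // MethodB never decided to commit. Letting a unit of work own SaveChanges avoids this.
}

public void MethodA(IMyRepository repository)
{
    repository.Add(new Customer());
    repository.Save();            // saves *all* pending changes on the shared context
}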
The ObjectContext/DbContext is an implementation of the UnitOfWork pattern. It encapsulates several operations and makes sure they are submitted in one transaction to the database.
The only thing you are doing is wrapping it in your own class to make sure you're not depending on a specific implementation in the rest of your code.
In your case, the problem lies in the fact that your Context shouldn't be disposed of by your Repository. The Repository is not the one that instantiates the Context, so it shouldn't dispose of it either. The UnitOfWork that encapsulates multiple repositories is responsible for creating and disposing the Context and you will call a Save method on your UnitOfWork.
Code can look like this:
using (IUnitOfWork unitOfWork = new UnitOfWork())
{
    PersonRepository personRepository = new PersonRepository(unitOfWork);
    var person = personRepository.FindById(personId);

    ProductRepository productRepository = new ProductRepository(unitOfWork);
    var product = productRepository.FindById(productId);

    person.CreateOrder(orderId, product);
    personRepository.Save();
}

Asp.net: Can a delegate ("Action") be serialized into control state?

I am implementing a user control that has a method that takes an Action delegate as a parameter.
Attempting to store the delegate in control state yields a serialization error. Is it even possible to serialize a delegate into control state?
BP
Not easily - and it could open the door for potential problems.
It is theoretically possible to use reflection to determine which method of an object the delegate is invoking, and to write a custom serialization process for it. Upon deserialization you would once again need to write logic that converts that information back into a delegate reference.
The problem is that in the general case, discovering at runtime the object you need to re-create the delegate for is not always possible. If the delegate refers to a lambda or anonymous method, that complicates things even more because there may be closures involved.
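As a rough sketch of what that reflection approach involves (it only round-trips cleanly for static methods; instance targets and closures are exactly the hard part mentioned above):
using System;
using System.Reflection;

public static class DelegateSerializer
{
    // Capture enough information to find the method again later.
    public static string[] Capture(Action action)
    {
        MethodInfo method = action.Method;
        return new[] { method.DeclaringType.AssemblyQualifiedName, method.Name };
    }

    // Rebuild the delegate; works for public static methods only in this simple form.
    public static Action Restore(string[] data)
    {
        Type type = Type.GetType(data[0]);
        MethodInfo method = type.GetMethod(data[1], BindingFlags.Public | BindingFlags.Static);

        // For instance methods you would also have to locate or recreate the target
        // object, and for lambdas/anonymous methods the closure state as well.
        return (Action)Delegate.CreateDelegate(typeof(Action), method);
    }
}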
You are probably better off either:
Not preserving the Action delegate between requests and having the ASP.NET code re-attach the delegate on postback. This is the least risky option IMHO.
Storing the delegate reference in session state and reattaching it to the deserialized object on postback. This option is risky for two reasons:
a) you hold on to object references in memory indefinitely if the end user never posts back, or if you forget to clear the object from server state.
b) if the delegate references page elements (controls, etc.) you may run into subtle bugs, because the delegate will operate against the objects from the previous request, not the new request.
In this post the author serializes an Action object to be executed later.
You can adapt it to serialize your own action to a string instead of to a file.
Very interesting:
http://mikehadlow.blogspot.com/2011/04/serializing-continuations.html

Calling a getter without assigning it to anything (lazy loading)

I have a DTO which can be fully loaded or lazily loaded using the Lazy Load pattern. How it is loaded depends on what the Flex application needs. However, this DTO will be sent to a Flex application (SWF). Normally a collection, for instance, will only be loaded when accessed. In my case, though, the collection will only be accessed in Flex, so my implementation on the .NET side will obviously not work here (unless Flex made a server call, something I would like to avoid).
In the getter of the collection, the data is retrieved from the database. If I were working with ASP.NET pages it would work, but not if the DTO is sent to Flex.
How would you deal with this? I could call the getter before sending the DTO to Flex, but that seems awful... plus calling the getter can only be done if the result is assigned to something (and the local variable that would hold the collection would never be used...).
You can introduce a method to load dependents - loadDependencies - that takes care of all lazy loading for your DTO object before it is sent over the wire (to Flex). You can abstract this method into an interface to streamline the process across different DTOs. There is nothing against using getters the way you described inside this method.
I would probably introduce a Finalize method for the class, and perhaps a FinalizeAll extension method for various collections of the class. This method would simply go through and reference all the getters on the public properties of the class to ensure that they are loaded. You would invoke Finalize (or FinalizeAll) before sending the object(s) to your Flex app. You might even want to make this an interface so that you can test for the need for finalization before transferring your objects, and invoke the method based on a test for the interface rather than checking for each class individually.
NOTE: Finalize is just the first name that popped into mind. There may be (probably is) a better name for this.
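A sketch of what that might look like (the interface and method names here are purely illustrative, not an existing API):
using System.Collections.Generic;

public interface ITransferFinalizable
{
    void FinalizeForTransfer();
}

public class OrderDto { }

public class CustomerDto : ITransferFinalizable
{
    private IList<OrderDto> _orders;

    public IList<OrderDto> Orders
    {
        get { return _orders ?? (_orders = LoadOrders()); } // lazy-loaded on first access
    }

    // Touch every lazily loaded member so the data is materialized
    // before the DTO is serialized and sent to Flex.
    public void FinalizeForTransfer()
    {
        var orders = Orders;
    }

    private IList<OrderDto> LoadOrders()
    {
        return new List<OrderDto>(); // placeholder for the real database call
    }
}

public static class TransferExtensions
{
    public static void FinalizeAll(this IEnumerable<ITransferFinalizable> items)
    {
        foreach (var item in items)
            item.FinalizeForTransfer();
    }
}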

Entity Framework: What negatives can I bump into by not disposing of my object context?

EDIT: Duplicate of Should Entity Framework Context be Put into Using Statement?
I've been tossing this idea around for some time, wondering what bad things could happen by not properly disposing of my object context and instead letting it die with the GC. Normally I would shun this, but there is a valid reason to do it.
We are using partial classes. In those partial classes we expose properties that access FK objects. For example, let's say I have a Customer class with a CustomerType FK object. In the class, I would expose a CustomerTypeName property that does this:
public string CustomerTypeName {
    get {
        if (CustomerType == null) {
            CustomerTypeReference.Load();
        }
        return CustomerType.CustomerTypeName;
    }
}
This works out very handy if the original query did not do a .Include("CustomerType").
However, if I dispose the context, the above property no longer works. So... I guess this leads to a couple of questions:
1) If I never explicitly dispose of the context, what negatives will I see, if any?
2) Is there any other way to accomplish lazy loading in the above scenario and still dispose of the context?
In my answer to 'LINQ to SQL - where does your DataContext live?' we have the page as owner of the DataContext for the life of the page, and it is the page that properly disposes of the DataContext when the page is itself disposed of.
As #Chu points out it's a little dirty, but if you're going to use what is arguably a data transfer object directly in your UI, then your UI should control the lifetime of the DataContext.
Well, ObjectContexts that are left around indefinitely are fine, so long as you don't keep loading or adding lots of new objects.
Every object that is loaded or added is tracked by the ObjectContext until it is disposed, so if you never dispose and you keep tracking more objects, the context will just get bigger and bigger.
One option you could look at doing is using some utility method to either access some well known context or create a temporary context.
The key to this is using the EntityReference.EntityKey and making sure both entities are detached.
i.e.
this.CustomerType = Utility.GetDetachedObjectByKey<CustomerType>(
    this.CustomerTypeReference.EntityKey);
The basic implementation of GetDetachedObjectByKey is something like this:
public static T GetDetachedObjectByKey<T>(EntityKey key)
    where T : EntityObject
{
    using (MyContext ctx = new MyContext())
    {
        T t = ctx.GetObjectByKey(key) as T;
        ctx.Detach(t);
        return t;
    }
}
This will only work if the original object target is detached too. You could experiment with where the Context used by this method comes from.
Hope this helps
Alex
Why not keep the context around for the length of your screen?
