I made a custom handler that derives from MvcHandler. I have my routes using a custom RouteHandler that returns my new handler for GetHttpHandler(), and I override ProcessRequest() in my custom handler. The call to GetHttpHandler is triggering a breakpoint and my handler's constructor is definitely being called, but BeginProcessRequest() is being called on the base MvcHandler instead of ProcessRequest().
Why are the async methods being called when I haven't done anything to call them? I don't want asynchronous handling, and I certainly didn't do anything explicit to get it. My controllers all derive from Controller, not AsyncController.
I don't have the source code with me right now, but I can add it later if needed. I was hoping someone might know some of the reasons why BeginProcessRequest might be called when it's not wanted.
Brad Wilson responded to my post on the ASP.NET forums (http://forums.asp.net/t/1547898.aspx) with the following answer:
Short answer: yes.
With the addition of AsyncController, the MvcHandler class needs to be an IHttpAsyncHandler now, which means that as far as the ASP.NET core runtime is concerned, the entry points are now BeginProcessRequest and EndProcessRequest, not ProcessRequest.
It sounds like ProcessRequest is not even called anymore, but I could be mistaken. I can say that I haven't seen it in my testing.
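Given that, a handler that wants to hook the pipeline apparently needs to override the async entry points rather than ProcessRequest. A rough sketch of the idea (signatures written from memory, so double-check them against the MvcHandler source for your MVC version):

using System;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

// Sketch only: override the async entry points in addition to ProcessRequest,
// since the runtime now enters through BeginProcessRequest/EndProcessRequest.
public class MyCustomHandler : MvcHandler
{
    public MyCustomHandler(RequestContext requestContext)
        : base(requestContext)
    {
    }

    protected override IAsyncResult BeginProcessRequest(
        HttpContext httpContext, AsyncCallback callback, object state)
    {
        // Custom pre-processing goes here; the runtime enters through this method now.
        return base.BeginProcessRequest(httpContext, callback, state);
    }

    protected override void EndProcessRequest(IAsyncResult asyncResult)
    {
        base.EndProcessRequest(asyncResult);
        // Custom post-processing goes here.
    }
}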
For ASP.NET Web API, I've been working on my own implementation of IHttpControllerActivator and am left wondering when (or why?) to use the HttpRequestMessage extension method "RegisterForDispose".
I see examples like this, and I can see the point of it, since IHttpController doesn't inherit IDisposable, so an implementation of IHttpController isn't guaranteed to have its own dispose logic.
public IHttpController Create(HttpRequestMessage request, HttpControllerDescriptor controllerDescriptor, Type controllerType)
{
    var controller = (IHttpController)_kernel.Get(controllerType);
    request.RegisterForDispose(new Release(() => _kernel.Release(controller)));
    return controller;
}
But then I see something like this and begin to wonder:
public IHttpController Create(
    HttpRequestMessage request,
    HttpControllerDescriptor controllerDescriptor,
    Type controllerType)
{
    if (controllerType == typeof(RootController))
    {
        var disposableQuery = new DisposableStatusQuery();
        request.RegisterForDispose(disposableQuery);
        return new RootController(disposableQuery);
    }
    return null;
}
In this instance RootController isn't registered for disposal, presumably because it's an ApiController or MVC Controller, and will therefore dispose of itself?
The instance of DisposableStatusQuery is registered for disposal since it's a disposable object, but why couldn't the controller dispose of the instance itself? RootController has knowledge of disposableQuery (or rather, its interface or abstract base), so it would know it's disposable.
When would I actually need to use HttpRequestMessage.RegisterForDispose?
One scenario I've found it useful for: for a custom ActionFilter.
Because the Attribute is cached/re-used, items within the Attribute shouldn't rely on the controller to be disposed of (to my understanding - and probably with caveats)... so in order to create a custom attribute which isn't tied to a particular controller type/implementation, you can use this technique to clean up your stuff. In my case, it's for an ambient DbContextScope attribute.
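A rough sketch of what I mean (AmbientScope here is just a made-up stand-in for the DbContextScope-style object; the real filter does more):

using System;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

// Made-up disposable resource created per request.
public class AmbientScope : IDisposable
{
    public void Dispose() { /* tear down the ambient resource here */ }
}

public class AmbientScopeAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        var scope = new AmbientScope(); // some IDisposable created for this request

        // The attribute instance itself is cached and shared across requests,
        // so per-request state can't live on the filter; let the request
        // dispose of it instead.
        actionContext.Request.RegisterForDispose(scope);
    }
}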
RegisterForDispose is a hook that will be called when the request is disposed. It is often used along with some of the dependency injection containers.
For instance, some containers (like Castle Windsor) will, by default, track all dependencies that they resolve. This is governed by Windsor's release policy, LifecycledComponentsReleasePolicy, which keeps track of all components that were created. In other words, the garbage collector cannot clean up a component that the container still tracks, which results in memory leaks.
So, for example, when you define your own IHttpControllerActivator to use with a dependency injection container, you do so in order to resolve the concrete controller and all of its dependencies. At the end of the request you need to release everything the container created, otherwise you will end up with a big memory leak. RegisterForDispose gives you the opportunity to do that.
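For example, a Windsor-based activator could look roughly like this (a sketch, not production code; the Release helper simply wraps an Action as an IDisposable so the container can release the controller and its whole dependency graph when the request is disposed):

using System;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Dispatcher;
using Castle.Windsor;

public class WindsorControllerActivator : IHttpControllerActivator
{
    private readonly IWindsorContainer _container;

    public WindsorControllerActivator(IWindsorContainer container)
    {
        _container = container;
    }

    public IHttpController Create(HttpRequestMessage request,
        HttpControllerDescriptor controllerDescriptor, Type controllerType)
    {
        var controller = (IHttpController)_container.Resolve(controllerType);

        // Release the controller (and everything Windsor tracked for it)
        // when the request is disposed.
        request.RegisterForDispose(new Release(() => _container.Release(controller)));
        return controller;
    }

    private class Release : IDisposable
    {
        private readonly Action _release;
        public Release(Action release) { _release = release; }
        public void Dispose() { _release(); }
    }
}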
I use RegisterForDispose with DI containers. Based on a blog post, I implemented it to dispose the container (a nested container) after each request so that it clears all the objects it created.
One may want to hook code around the life cycle of a request that (1) has little to do with controllers and (2) does not subclass the request type.
I would imagine the idiomatic form of such code takes the shape of extension methods on HttpRequestMessage, for example. If the code allocates disposable resources, it would need to hook the disposal code to something. I'm not too familiar with the various extension points of the ASP.NET pipeline, but I suppose hooking code just to dispose of resources at the end of the request processing stage was common enough to justify a dedicated registration mechanism for disposable resources (as opposed to more generally subscribing code to be executed).
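Purely as an illustration, such an extension method might look something like this (MyDisposableResource and the property key are invented for the example):

using System;
using System.Net.Http;

// Made-up disposable resource allocated once per request.
public class MyDisposableResource : IDisposable
{
    public void Dispose() { /* release whatever was allocated */ }
}

public static class RequestResourceExtensions
{
    private const string Key = "__MyDisposableResource";

    public static MyDisposableResource GetResource(this HttpRequestMessage request)
    {
        object existing;
        if (request.Properties.TryGetValue(Key, out existing))
        {
            return (MyDisposableResource)existing;
        }

        var resource = new MyDisposableResource();
        request.Properties[Key] = resource;
        request.RegisterForDispose(resource); // cleaned up when the request is disposed
        return resource;
    }
}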
Since you're asking, I found a nice example scenario in this sample. Here, an Entity Framework context is set as a property of the request, and must be disposed of properly. While this property is intended to be used by controllers, it's not specific to any controller or controller super-class, so in my opinion this is a very sensible design choice. If you're curious why, it's because these requests are "OData batch requests", and controller actions will be invoked multiple times over the lifetime of each request (once per "operation"). Certain operations are grouped into atomic "changesets" that must be wrapped in transactions at a higher level than controllers (a dedicated mechanism, an ODataBatchHandler, is used, so the controllers themselves are oblivious to this). Hence, controllers alone are not enough, as one cannot have them dispose of the context themselves in this scenario.
Hope this helps.
I am creating a custom route by subclassing RouteBase. I have a dependency in there that I'd like to wire up with IoC. The method GetRouteData just takes HttpContext, but I want to add in my unit of work as well... somehow.
I am using StructureMap, but info on how you would do this with any IoC framework would be helpful.
Well, here is our solution. Many little details may be omitted, but the overall idea is here. This answer may be somewhat off-topic with regard to the original question, but it describes the general solution to the problem.
I'll try to explain the part that is responsible for plain custom HTML pages that are created by users at runtime and therefore can't have their own controller/action. So the routes either have to be built somehow at runtime or be "catch-all" routes with a custom IRouteConstraint.
First of all, let's state some facts and requirements.
We have some data and some metadata about our pages stored in the DB;
We don't want to generate a (hypothetical) million routes for all existing pages beforehand (i.e. on application startup), because things can change while the application runs and we don't want to deal with pushing those changes to the global RouteCollection;
So we do it this way:
1. PageController
Yes, a special controller that is responsible for all our content pages. There is a single action, Display(int id) (actually we have a special view model as the parameter, but I used an int id for simplicity).
The page with all its data is resolved by ID inside that Display() method. The method itself returns either a ViewResult (strongly typed to PageViewModel) or a NotFoundResult when the page is not found.
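In outline, the controller looks something like this (IPageRepository and PageViewModel are our own types, and the exact "not found" result depends on your MVC version):

using System.Web.Mvc;

public class PageController : Controller
{
    private readonly IPageRepository _pages;

    public PageController(IPageRepository pages)
    {
        _pages = pages;
    }

    public ActionResult Display(int id)
    {
        var page = _pages.GetById(id);
        if (page == null)
        {
            return new HttpNotFoundResult();
        }

        return View(new PageViewModel(page));
    }
}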
2. Custom IRouteConstraint
We have to define somewhere whether the URL the user actually requested refers to one of our custom pages. For this we have a special IsPageConstraint that implements the IRouteConstraint interface. In the Match() method of our constraint we simply call our PageRepository to check whether there is a page matching the requested URL. The PageRepository is injected by StructureMap. If we find the page, we add the "id" parameter (with its value) to the RouteData dictionary, and it is automatically bound to PageController.Display(int id) by the DefaultModelBinder.
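A simplified sketch of the constraint (FindIdByPath is our own repository method, returning null when no page matches):

using System.Web;
using System.Web.Routing;

public class IsPageConstraint : IRouteConstraint
{
    private readonly IPageRepository _pages;

    public IsPageConstraint(IPageRepository pages)
    {
        _pages = pages;
    }

    public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                      RouteValueDictionary values, RouteDirection routeDirection)
    {
        var pagePath = values["pagePath"] as string;
        int? pageId = _pages.FindIdByPath(pagePath);
        if (pageId == null)
        {
            return false;
        }

        // A page exists for this URL: put its id into the route values so the
        // DefaultModelBinder can bind it to PageController.Display(int id).
        values["id"] = pageId.Value;
        return true;
    }
}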
But we need a route parameter to check against. Where do we get that? Here comes...
3. Route mapping with "catch-all" parameter
Important note: this route is defined at the very end of the route mapping list because it is very general, not specific. We check all our explicitly defined routes first and only then check for a page (which is easy to change if needed).
We simply map our route like this:
routes.MapRoute("ContentPages",
"{*pagePath}",
new { controller = "Page", action = "Display" }
new { pagePath = new DependencyRouteConstraint<IsPageConstraint>() });
Stop! What is that DependencyRouteConstraint thing that appeared in the mapping? Well, that's what does the trick.
4. DependencyRouteConstraint<TConstraint> class
This is just another generic implementation of IRouteConstraint which takes the "real" IRouteConstraint (IsPageConstraint) and resolves it (the given TConstraint) only when the Match() method is called. It uses dependency injection, so our IsPageConstraint instance has all its actual dependencies injected!
Our DependencyRouteConstraint then just calls dependentConstraint.Match(), passing along all the parameters, thus delegating the actual "matching" to the "real" IRouteConstraint.
Note: this class does have a dependency on the Service Locator.
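A sketch of the class (ObjectFactory here is StructureMap's static facade, i.e. the Service Locator dependency just mentioned):

using System.Web;
using System.Web.Routing;
using StructureMap;

// The "real" constraint (TConstraint) is resolved from the container only when
// Match() is actually called, so it gets its dependencies injected at that point.
public class DependencyRouteConstraint<TConstraint> : IRouteConstraint
    where TConstraint : IRouteConstraint
{
    public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                      RouteValueDictionary values, RouteDirection routeDirection)
    {
        var dependentConstraint = ObjectFactory.GetInstance<TConstraint>();
        return dependentConstraint.Match(httpContext, route, parameterName, values, routeDirection);
    }
}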
Summary
That way we have:
Our route stays clear and clean;
The only class that has a dependency on the Service Locator is DependencyRouteConstraint;
Any custom IRouteConstraint uses dependency injection whenever needed;
???
PROFIT!
Hope this helps.
So, the problem is:
A route must be defined beforehand, during application startup.
A route's responsibility is to map the incoming URL pattern to the right controller/action to perform some task on the request, and vice versa: to generate links using that mapping data. Period. Everything else is a "Single Responsibility Principle" violation, which is actually what led to your problem.
But UoW dependencies (like NHibernate ISession, or EF ObjectContext) must be resolved at runtime.
And that is why I don't see children of the RouteBase class as a good place for a DB-related dependency. It makes everything tightly coupled and non-scalable, and it is practically impossible to perform dependency injection there.
From where you are now (I assume there is already some kind of working system), you really have just one more-or-less viable option:
Use the Service Locator pattern: resolve your UoW instance right inside the GetRouteData method (using CommonServiceLocator backed by the StructureMap IContainer). That is simple, but not really a nice thing, because this way your route takes a dependency on the static Service Locator itself.
With CSL you just have to call this inside GetRouteData:
var uow = ServiceLocator.Current.GetService<IUnitOfWork>();
or with just StructureMap (without CSL facade):
var uow = ObjectFactory.GetInstance<IUnitOfWork>();
and you're done. Quick and dirty. And the keyword is "dirty" actually :)
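A quick-and-dirty sketch of how that looks inside a RouteBase subclass (the actual matching logic is omitted):

using System.Web;
using System.Web.Routing;
using StructureMap;

public class MyCustomRoute : RouteBase
{
    public override RouteData GetRouteData(HttpContextBase httpContext)
    {
        // The Service Locator call: the route now depends on the static facade.
        var uow = ObjectFactory.GetInstance<IUnitOfWork>();

        // ... use uow and httpContext to decide whether this route matches ...

        return null; // return a populated RouteData when it does match
    }

    public override VirtualPathData GetVirtualPath(RequestContext requestContext,
                                                   RouteValueDictionary values)
    {
        return null;
    }
}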
Sure, there is much more flexible solution but it needs a few architectural changes. If you provide more details on exactly what data you get in your routes I can try to explain how we solved our Pages routing problem (using DI and custom IRouteConstraint).
I have a common base class from which all my ASMX webservice classes will inherit. In the constructor, I want to do some common authentication checks; if they fail, I would like to halt processing right away (subclass's code would not get executed) and return a 401-status-code response to the caller.
However, the common ASPX-like ways of doing this don't seem to work:
Context.Response.End() always kicks back a ThreadAbortException to the caller, inside a 500-status-code response. Even if I explicitly set Context.Response.StatusCode = 401 before calling End(), it is ignored. The result is still a 500 response, and the message is always the thread-abort exception.
MSDN suggests I use HttpContext.Current.ApplicationInstance.CompleteRequest() instead. However, this does not stop downstream processing: my subclass's functions are still executed as if the constructor had done nothing. (Kind of defeats the purpose of checking authorization in the constructor.)
I can throw a new HttpException. This is a little better in that it does prevent downstream processing, and at least it gives me control over the exception Message returned to the caller. However, it isn't perfect in that the response is still always a 500.
I can define a DoProcessing instance var, and set it to true/false within the constructor. Then have every single WebMethod in every single subclass wrap its functionality within an if (DoProcessing) block... but let's face it, that's hideous!
Is there a better / more thorough way to implement this sort of functionality, so it is common to all my ASMX classes?
edit: Accepting John's answer, as it is probably the best approach. However, due to client reluctance to adopt additional 3rd-party code, and some degree of FUD with AOP, we didn't take that approach. We ended up going with option #3 above, as it seemed to strike the best balance between speed-of-implementation and flexibility, and still fulfill the requirements.
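For reference, the base-class check we ended up with looks roughly like this (AuthorizeCaller is a placeholder for the real authentication logic):

using System.Web;
using System.Web.Services;

public abstract class AuthenticatedService : WebService
{
    protected AuthenticatedService()
    {
        if (!AuthorizeCaller(HttpContext.Current))
        {
            // Prevents the WebMethod from running. The caller still sees a 500,
            // but at least the exception message is under our control.
            throw new HttpException(401, "Not authorized");
        }
    }

    private static bool AuthorizeCaller(HttpContext context)
    {
        // Placeholder: the real check inspects headers/credentials here.
        return context != null && context.Request.IsAuthenticated;
    }
}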
The best way to do it would be to switch to WCF, which has explicit support for such a scenario.
If you must still use ASMX, then your best bet is to call the base class methods from each web method. You might want to use something like PostSharp to "magically" cause all of your web methods to call the base class method.
Context.Response.Write("My custom response message from constructor");
Context.Response.StatusCode = (int)HttpStatusCode.Forbidden;
Context.Response.End();
That code prevents execution from continuing into the web method after the constructor.
I need to get a value from an API I made with an ASHX handler. Normally it is called from JavaScript, but I need to call it directly from ASP.NET. I figured this shouldn't be a problem, but I'm not sure of the syntax.
Well, you have a couple of options:
You can refactor the code in your ASHX to be in a shared library so you can access the methods directly and so can the handler.
You can instantiate the handler and invoke the members if they aren't private.
You can create a webrequest to the handler and handle the response.
These are just a few of the easy ways.
I personally like the first method because it promotes code reuse, but depending on scenario you can do what you like.
Edit to answer the question in the comment.
Essentially, yes... Instead of having a bunch of code in your handler, you make a class named something contextually meaningful to you. Inside that class you place the logic that was in your handler. Then, from your handler, you can create an instance or call a static version of the class (depending on how you implemented it), passing it the HttpContext object or whatever is required for that logic to run correctly. Do the same thing in your ASPX page. You can now call into an object that contains the logic from anywhere in your app, instead of having it reside in the handler alone.
EX:
Public Class MyCommonLogic
    Public Shared Function ReturnSomethingCommon(context As HttpContext) As String
        Return "Hello World!"
    End Function
End Class
Then from the handler or the ASPX page...
Dim something As String = MyCommonLogic.ReturnSomethingCommon(...)
I made the function shared (static), but that is just an example; of course, I would implement it however makes the most sense in your scenario.
Changed the code to VB, sorry about that.
If the ASHX is on the same server, especially if it's within the same web app, you should refactor your logic out of the ASHX into a common class that both the ASPX and ASHX can call.
Otherwise, you can look at using System.Net.WebClient.
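For example, something roughly like this (the URL and query string are placeholders for your handler's actual address and parameters):

using System.Net;

public static class HandlerClient
{
    public static string GetValueFromHandler()
    {
        using (var client = new WebClient())
        {
            // Call the ASHX over HTTP and return the raw response body.
            return client.DownloadString("http://localhost/MyApp/MyHandler.ashx?id=42");
        }
    }
}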
All literature I see on creating custom handlers deals with associating an extension with a handler, e.g. if I wanted a handler for Ajax requests, I could implement the IHttpHandler interface in an AjaxHandler class.
Now, to have individual instances of AjaxHandlers, e.g. DocAjaxHandler, PersonAjaxHandler, etc., how would I derive the base AJAX handling of my AjaxHandler class without registering each individual *.ajax page?
It sounds like you're assuming 1 HttpHandler = 1 page or 1 control, but as I understand it, 1 HttpHandler can handle all pages of a certain file extension.
Your question is not very clear, and your response to another answerer makes no sense...
"In fact, it seemed, to me, a lot like I was asking Http handlers, using a .ajax handler as an example."
But I shall assume you are thinking a "DocAjaxHandler" and a "PersonAjaxHandler" should each be created for a "DocAjaxControl" and "PersonAjaxControl" respectively. I don't think that would be necessary; 1 handler should be able to handle all your ajax requests if you choose to do it that way, though it doesn't feel like the most intuitive solution to me (using HttpHandlers). Anyway, on to the details...
Every IHttpHandler object needs to implement:
public void ProcessRequest(HttpContext context)
which allows:
context.Response.Write("Your JSON Response in here");
but at the level of 'ProcessRequest()', you have no access to the instance of the control which created the ajax call, or to the 'System.Web.UI.Page' object that holds the control, or anything.
context.Request
to the rescue! With the Request object above you can read query strings and session state, and you can determine the path of the original HttpRequest (i.e. PersonAjaxObject may make an ajax call to 'myPersonobjPage.ajax' for its JSON data, but the '.ajax' extension lands the request at your custom HTTP handler and its ProcessRequest method).
If I were you, and I was going to use an HttpHandler for my ajax calls, I'd use query string data to provide enough info for my handler to know 'what type of object am I responding to' as well as 'what data is that object requesting'.
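To illustrate, a single handler registered for the .ajax extension could dispatch on query string data roughly like this (parameter names and JSON payloads are made up):

using System.Web;

public class AjaxHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // e.g. myPersonobjPage.ajax?type=person&id=42
        string objectType = context.Request.QueryString["type"];

        context.Response.ContentType = "application/json";

        switch (objectType)
        {
            case "person":
                context.Response.Write("{\"kind\":\"person\"}"); // build the real JSON here
                break;
            case "doc":
                context.Response.Write("{\"kind\":\"doc\"}");
                break;
            default:
                context.Response.StatusCode = 400; // unknown object type
                break;
        }
    }
}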
Hope that helps.
You can automatically handle AJAX requests in a number of ways. Here's how to do it with a web service:
http://www.asp.net/AJAX/Documentation/Live/Tutorials/ConsumingWebServicesWithAJAXTutorial.aspx
Well, one way to do it would be via query string params...