Okay, I'm open to being told I'm approaching the problem incorrectly, so go ahead if that's the case, but I have an unhandled exception provider I'm adding to builder.Services in Program.cs, along with some data services. I can't figure out a good way to get an existing data service into the unhandled exception provider (for custom logging to a DB via the data service).
I tried passing it in, and also injecting it, but in practice the data service keeps coming up as null inside the exception provider.
So I was thinking: is there a way to get a reference to the running WebApplication (the one started via app.Run() in Program.cs) from elsewhere in the program?
Logically, I'd like to do something like this in the unhandled exception provider:
MyService = GetRunningApp().services.BuildServiceProvider().GetService<IMyService>();
Am I missing an easier way to do this?
In Blazor it's not as easy to plug in middleware for global error handling as it is with Web API (correct me if I'm wrong!).
For Blazor I think the closest we have is the <ErrorBoundary>. Check out this solution by Alamakanambra. They provide an example of how to create a component that handles global exceptions. You can do whatever logging or handling you need in OnErrorAsync.
protected override Task OnErrorAsync(Exception exception)
{
    // receivedExceptions is a List<Exception> field on the component.
    receivedExceptions.Add(exception);
    return base.OnErrorAsync(exception);
}
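If you need the injected data service inside that handler (the original question), the same pattern works with normal component DI. A minimal sketch, assuming a hypothetical IMyService with a LogExceptionAsync method registered in builder.Services in Program.cs:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Components;
using Microsoft.AspNetCore.Components.Web;

// Custom boundary that logs every caught exception through an injected service.
public class LoggingErrorBoundary : ErrorBoundary
{
    // Resolved from the container built in Program.cs.
    [Inject]
    public IMyService MyService { get; set; } = default!;

    protected override async Task OnErrorAsync(Exception exception)
    {
        // LogExceptionAsync is a hypothetical method on the data service.
        await MyService.LogExceptionAsync(exception);
        await base.OnErrorAsync(exception);
    }
}

Then wrap your layout content in <LoggingErrorBoundary>...</LoggingErrorBoundary> instead of the built-in <ErrorBoundary>; because the framework creates the component, the injected service should not come up null.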
Related
I have a class library project which I am referencing in my ASP.NET Web Forms project.
Whenever any exception happens, I want to log it. But I am not doing any exception handling in any of the class library methods. That is, I have not used a try-catch block in any of the methods in the class library.
Instead, any exception that gets thrown from the class library methods is caught in my presentation layer/business layer (wherever I call the functions of the class library), and proper logging is done there in the Web Forms project.
Is it correct to do it this way?
It's OK to write it that way, but in that case your class library will throw exceptions carrying only the raw system exception message, which can sometimes be difficult to interpret and act on.
Instead, you can have a try-catch-finally block in the class library and re-throw the exception with a user-defined message to the calling method; this will help in tracing the issue.
Example:
catch (FileNotFoundException ex)
{
    // Wrap the original exception with a friendlier message and keep it as the InnerException.
    throw new FileNotFoundException("File not found at XYZ location. Please check that the file exists and retry.", ex);
}
Centralized Exception Logging:
There are multiple ways you can log error messages. One good way would be to use the Enterprise Library Exception Handling Block to log errors to a file, which can help when debugging issues.
This article can come in handy for Enterprise Library:
http://msdn.microsoft.com/en-us/library/ff649140.aspx
You can log exceptions centrally to CSV, XML, or even the Windows Event Viewer.
Create a separate class/project to perform all the exception logging in the project.
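If Enterprise Library is more than you need, a very small hand-rolled version of that "separate class" idea could look like this (a minimal sketch; the CSV path and columns are purely illustrative):

using System;
using System.IO;

// Central place for all exception logging; every layer calls ExceptionLogger.Log(...).
public static class ExceptionLogger
{
    private static readonly object Sync = new object();

    public static void Log(Exception ex, string source)
    {
        // One CSV row per exception: timestamp, source, exception type, message.
        string line = string.Format("{0:u},{1},{2},\"{3}\"",
            DateTime.UtcNow, source, ex.GetType().Name, ex.Message.Replace('"', '\''));

        lock (Sync)
        {
            File.AppendAllText(@"C:\Logs\exceptions.csv", line + Environment.NewLine);
        }
    }
}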
Seems like a reasonable approach to me. Delegating the exception logging to the application keeps your class library abstract, meaning it can be reused without taking a dependency on a logging library/framework.
Logging should be the responsibility of the application, not the component. There may be some exceptional circumstances where trace-level debugging is required at the component level; for those special cases, having a log-enabled version of the library that you could swap in might come in handy. However, I would argue that a properly tested component reduces the need for that.
I'm developing an app with VS2013, using EF 6.02 and Web API 2. I'm using the ASP.NET SPA template and creating a RESTful API against an Entity Framework data source backed by SQL Server. (In development, this resides on the local SQL Server instance.)
I've got two API methods so far (one that just reads data, one that writes data), and I'm testing them by calling them in the javascript. When I only call a single method in my script, either one works perfectly. But if I call both in script (without waiting for either's callback to fire), I get bad results and different exceptions in the debugger. Some exceptions state that the save can't be completed because there are pending transactions. Another exception stated something about a conflict with other threads. And sometimes, the read operation fails with a null pointer exception when trying to read a result set.
"New transaction is not allowed because there are other threads running in the session."
This makes me question whether I'm correctly getting a new DbContext per request. My code for this looks like:
static Startup()
{
    // Static constructor: runs once for the whole application.
    context = new Data.SqlServer.AppDbContext();
    ...
}
and then whenever instantiating a unit of work, I access Startup.context.
I've tried to implement the unit of work pattern, and each request shares a single UOW object which has a single DBContext object.
My question: Do I have additional responsibility to ensure that web requests "play nicely" with each other? I hope that this is a problem that others have already dealt with. Perhaps the errors that I'm seeing are legitimate in the sense that if one user's data is being touched, it is temporarily in an invalid state, and if other requests come in at that exact moment, they will indeed fail (and I should code anticipating these failures). I guess that even if each request has its own DbContext, they still share the same underlying SQL data source, so perhaps that's causing issues.
I can try to put together a test case, but I get differing behavior depending on where I put breakpoints and how long I spend on them, reaffirming to me that this is timing-related.
Thanks for any help or suggestions...
-Ben
Your problem is where you are creating your context. The Startup method runs when the entire application starts, so every request will use the same context. This is not a per-request setup but a per-application setup. As to why you are getting the errors: Entity Framework is NOT thread-safe. Since IIS spawns many threads to handle concurrent requests, your single context is being used across multiple threads.
As for a solution, you can look into:
-Dependency Injection frameworks (such as Ninject or Unity), which can scope the context per request (a sketch follows the third option's example below)
-placing a using statement in your UnitOfWork classes:
using (var context = new Data.SqlServer.AppDbContext()) { /* do stuff */ }
-Or, I have seen people create a class that gets the context for the current request and stores it in HttpContext.Items (using a unique key so you can retrieve it easily from any other class), so that the same context is reused for the whole request. Something like this:
public AppDbContext GetDbContext()
{
    var httpContext = HttpContext.Current;
    if (httpContext == null) return new AppDbContext();

    const string contextTypeKey = "AppDbContext";
    if (httpContext.Items[contextTypeKey] == null)
    {
        // First use in this request: create the context and cache it for the rest of the request.
        httpContext.Items.Add(contextTypeKey, new AppDbContext());
    }
    return httpContext.Items[contextTypeKey] as AppDbContext;
}
To use the above method, just make a simple call: var context = GetDbContext();
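For the first option (a DI framework), the per-request scoping could look roughly like this with Ninject (a sketch only; it assumes the Ninject.Web.Common package, and Unity has an equivalent per-request lifetime manager):

using Ninject.Modules;
using Ninject.Web.Common;

public class DataModule : NinjectModule
{
    public override void Load()
    {
        // One AppDbContext per HTTP request; Ninject disposes it when the request ends.
        Bind<Data.SqlServer.AppDbContext>().ToSelf().InRequestScope();
    }
}

Your UnitOfWork classes would then take the context through their constructors instead of reading Startup.context.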
Note
We have used all of the above methods, but this note applies specifically to the third one. It seems to work well, with two caveats. First, do not use the context in a using statement, because disposing it there means it will not be available to any other class for the rest of the request. Second, ensure that you have a handler for Application_EndRequest that actually disposes of it. We saw these little buggers hanging around in memory after the request ended, causing a huge spike in memory usage.
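For that second caveat, the end-of-request cleanup could look roughly like this in Global.asax (a sketch, reusing the same "AppDbContext" key as GetDbContext above):

protected void Application_EndRequest(object sender, EventArgs e)
{
    const string contextTypeKey = "AppDbContext";
    var context = HttpContext.Current.Items[contextTypeKey] as AppDbContext;
    if (context != null)
    {
        // Dispose the per-request context so it doesn't linger in memory.
        context.Dispose();
        HttpContext.Current.Items.Remove(contextTypeKey);
    }
}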
I have a very simple web service hosted on IIS; if you call any web method, it basically throws some exception. There is a third-party application installed on the same system where the web service is hosted which intercepts the web method and gathers all the information about the unhandled exception (method name, exception type, stack trace, code, etc.). Anyone who needs this exception info can subscribe to the event object with the third-party application. So I wrote the event subscription code in the IIS process itself.
So the flow is like this: the test client calls a web method which throws some exception, the third-party application catches that exception, and whoever subscribed for the exception info gets that information in XML format.
Now I want that XML information to be accessible in my test client; is there any way to achieve this? I am not sure whether this is even feasible, as I am new to the web service world, so please excuse me if this doesn't make any sense.
I found two solutions to this problem:
1) Use an HttpModule to edit the HTTP response and add the exception info XML to the response itself (see the sketch after this list).
2) Have a separate exe which runs in the background and subscribes to the exception info events using the API provided by the third-party application. Once the exe receives the exception information for a failed transaction, it writes that information to some shared storage. In my case I am using Azure Table Storage.
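A minimal sketch of option 1, assuming classic ASP.NET: an IHttpModule that hooks the application's Error event and writes a small XML fragment into the response (the XML shape here is illustrative, not the third-party application's format):

using System;
using System.Web;

public class ExceptionInfoModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.Error += OnError;
    }

    private void OnError(object sender, EventArgs e)
    {
        var application = (HttpApplication)sender;
        var exception = application.Server.GetLastError();
        if (exception == null) return;

        // Append the exception details so the client can read them from the response body.
        application.Response.ContentType = "text/xml";
        application.Response.Write(
            "<exceptionInfo>" +
            "<type>" + exception.GetType().FullName + "</type>" +
            "<message>" + HttpUtility.HtmlEncode(exception.Message) + "</message>" +
            "</exceptionInfo>");
    }

    public void Dispose() { }
}

The module still has to be registered in web.config (under system.webServer/modules for IIS integrated mode) before it will run.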
I have a simple app where I use Global.asax to map a ServiceRoute to a WCF service through a custom ServiceHostFactory in Application_Start. The constructor of that service does some initial processing to set up the service, which takes a bit of time.
I need this constructor to fire automatically when its ServiceRoute is added. I tried creating a client channel from Global.asax and making a dummy call to spin up the service, but discovered the service isn't up yet -- it seems Application_Start has to return first?
So how do I get the constructor of the service to fire when it is first mapped through Global.asax, without having to manually hit the service? Unfortunately AppFabric isn't an option for us, so I can't just use its built-in autostart.
UPDATE
I was asked for a bit more detail:
This is like a routing management service. So I have Service1 -- it gets added as a ServiceRoute in Global.asax. Now I have http://localhost/Service1.
Inside Service1 I have a method called 'addServiceRoute'. When called, it also registers a route for Service2. Now I have http://localhost/Service1/Service2.
My initial solution from Global.asax was to build a ChannelFactory to http://localhost/service1, but that wouldn't work: Service1 wasn't up yet and wouldn't come up until Application_Start returned (still not sure why). So then I thought I'd cheat and move that initial addServiceRoute call to the constructor of Service1. That also didn't work.
It was mentioned that this shouldn't be in the constructor -- I agree; this is just testing code.
A singleton was also mentioned, which might be OK, but I intend to have more than one instance of Service1 on a machine (in the same app pool), so I don't think that'll work?
UPDATE #2
I was asked for sample code; here it is from Global.asax (trimmed a bit for brevity). So http://localhost/Test DOES come up. But if I have to use AppFabric to warm up Test and get its constructor to fire, don't I need a Test.svc or something? How do I get AppFabric to even see that this service exists?
protected void Application_Start(object sender, EventArgs e)
{
    RouteTable.Routes.Ignore("{resource}.axd/{*pathInfo}");
    RouteTable.Routes.Add(
        new ServiceRoute("Test",
            new MyServiceHostFactory(typeof(ITestService), BindingType.BasicHttpBinding, true),
            typeof(TestService)));
}
What you describe requires a singleton service (something you should avoid), because normally each call or session gets a new instance, which means a new call to the constructor. In a self-hosted WCF service you can instantiate the singleton service instance yourself and pass it to the ServiceHost constructor. In the case of an IIS-hosted service used together with ServiceRoute, you can try creating your own class derived from ServiceHostFactory and passing the pre-created service instance as a parameter to its constructor. In that factory class, override the CreateServiceHost method and pass the existing service instance into the ServiceHost constructor. To make this work, your service class must still be treated as a singleton through its service behavior.
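A minimal sketch of that factory (assuming the service class is marked with InstanceContextMode.Single):

using System;
using System.ServiceModel;
using System.ServiceModel.Activation;

public class SingletonServiceHostFactory : ServiceHostFactory
{
    private readonly object _serviceInstance;

    // The instance is created (and warmed up) once, before the route is registered.
    public SingletonServiceHostFactory(object serviceInstance)
    {
        _serviceInstance = serviceInstance;
    }

    protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        // Host the pre-built instance instead of letting WCF construct one per call/session.
        return new ServiceHost(_serviceInstance, baseAddresses);
    }
}

In Application_Start you would then register something like new ServiceRoute("Test", new SingletonServiceHostFactory(new TestService()), typeof(TestService)), so the constructor work happens right there rather than on the first request.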
By the way, a constructor should not do any time-consuming work. A constructor is for constructing the object, not for initializing infrastructure; using the constructor for such initialization is bad practice in the first place.
AppFabric autostart is what I would recommend - even though you say you cannot use it - this is the problem it was meant to solve (warming up your service).
As an alternative from before AppFabric existed, you can use a scheduled task (a.k.a. a cron job) with an executable that calls into the service you want initialized. The way AppFabric autostart works is by using named pipes (net.pipe) to trigger the warm-up - it does exactly this whenever the service is recycled. The difference between the scheduled-task approach and AppFabric autostart is that the scheduled task doesn't know when your application pool has been recycled - you would need to poll your service periodically to keep it warm.
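A minimal sketch of the kind of warm-up executable a scheduled task could run (the URL matches the "Test" route from the sample above; any request that forces the route to activate will do):

using System;
using System.Net;

class WarmUp
{
    static void Main()
    {
        try
        {
            using (var client = new WebClient())
            {
                // Hitting the routed address forces IIS/WCF to activate the ServiceHost for that route.
                client.DownloadString("http://localhost/Test");
            }
        }
        catch (WebException ex)
        {
            // Even an error response means the host was spun up; just log and move on.
            Console.WriteLine(ex.Message);
        }
    }
}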
Alternatively, you could consider hosting your WCF application outside of IIS via self-hosting. This would avoid the warm-up issue, but you wouldn't get many of the benefits of the IIS-hosted container. See HttpSelfHostServer in the new MVC Web API, or review using a standard ServiceHost.
I thought I read somewhere that the WebRole runs in a different process than IIS on Windows Azure, making it possible to combine both Web and Worker roles: http://things.smarx.com/#Combine%20Web%20and%20Worker%20Roles
Assuming the following generic code:
public class WebRole : RoleEntryPoint
{
    public override void Run()
    {
        // ... Exception gets thrown here.
    }
}
Does this require a separate exception handling approach?
Is Run different from OnStart, meaning certain services have already been started?
Any best practices?
The title of the question and the inline question are different - which one are you most concerned about?
The WebRole in the 1.3+ SDK can run full IIS, which runs in a different process than the RoleEntryPoint. This means that, for exception-handling purposes, the RoleEntryPoint and the IIS web app are totally isolated: you need explicit error handling in each, as one does not apply to the other.
The other question you asked has to do with Run vs. OnStart. The OnStart method is called before your instance is connected to the load balancer. This is your chance to bootstrap the role with anything you need to do before active traffic hits it. You must return true and not throw an error in OnStart, or you will never get an active instance. Some folks use OnStart to programmatically create the IIS pieces they need (sites, apps, vdirs, etc.). The Run method is your entry point for the main worker logic; it is like static void Main() (but one you never exit from).
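A minimal sketch of a RoleEntryPoint that handles its own exceptions (it assumes the Microsoft.WindowsAzure.ServiceRuntime assembly; the IIS-hosted web app still needs its own handling, as noted above):

using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Bootstrap work before the instance joins the load balancer.
        // Returning false (or throwing) keeps the instance from ever going active.
        return base.OnStart();
    }

    public override void Run()
    {
        while (true)
        {
            try
            {
                // Background work goes here.
                Thread.Sleep(TimeSpan.FromSeconds(30));
            }
            catch (Exception ex)
            {
                // Log and decide whether to continue; an exception that escapes
                // Run() will recycle the role instance.
                Trace.TraceError(ex.ToString());
            }
        }
    }
}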
The reality is that the Web and Worker roles are pretty much identical, with the sole exception that the Web role now has some nice declarative syntax to set up IIS for you. All the other caveats of running in a Worker role apply to the Web role when using the RoleEntryPoint.
RoleEntryPoint gets initialized before the ASP.NET runtime is initialized. As far as I remember, if the Run method throws an exception the role will recycle, and you can see that in your Azure management portal. See this for some hints.