I'm currently using Application Insights for logging exceptions and tracing.
My project is ASP.NET MVC based and I have three different layers. One is the front end (MVC with Angular) and the other two are Web API service layers.
I have default telemetry logging enabled for all three layers and it is logging all the operations. My problem is that I'm not able to correlate an operation across the three layers. How can I achieve this? I've read that one option is to set a custom property (the userID held in session) in a Telemetry Initializer, but I don't have that information in my Web API layers because they are stateless.
One solution I can think of:
You can generate an internal identifier (e.g. a GUID) for each operation in your MVC layer, add it as a custom property (using a Telemetry Initializer), and pass that same ID to the Web API layers with each API call you make. For every telemetry event those Web API layers send, add the identifier as a custom property in the same fashion as above.
This way you will be able to correlate the events based on your internal ID.
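As a minimal sketch of what such an initializer might look like, assuming a classic ASP.NET host and an SDK version where TelemetryContext.Properties is available (the class name and the "internalOperationId" property name are illustrative, not part of the SDK):

using System;
using System.Web;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Hypothetical initializer that stamps every telemetry item sent during
// a request with one internal correlation ID.
public class InternalOperationIdInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        var context = HttpContext.Current;
        if (context == null) return;

        // Reuse the ID already stored for the current request, or create one.
        var id = context.Items["internalOperationId"] as string;
        if (id == null)
        {
            id = Guid.NewGuid().ToString();
            context.Items["internalOperationId"] = id;
        }

        telemetry.Context.Properties["internalOperationId"] = id;
    }
}

You would register it on startup, e.g. TelemetryConfiguration.Active.TelemetryInitializers.Add(new InternalOperationIdInitializer());, send the same ID (e.g. as an HTTP header) with every call to the Web API layers, and have an identical initializer there read it from the incoming request instead of generating it.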
Besides the alternatives already mentioned, there is partial support for this built into AI through the ParentOperationId and OperationId context fields. However, there is nothing in place right now that propagates those fields through HTTP calls automatically.
It should be possible to have your code add the necessary headers when making outgoing HTTP calls; if you use the right header names and the other side has the corresponding telemetry initializers, correlation should work.
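A sketch of the outgoing side, using a standard DelegatingHandler; the "x-ms-request-root-id" header name is an assumption about what the receiving side will look for:

using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Adds a correlation header to every outgoing HTTP call.
public class CorrelationHeaderHandler : DelegatingHandler
{
    private readonly string _rootOperationId;

    public CorrelationHeaderHandler(string rootOperationId, HttpMessageHandler innerHandler)
        : base(innerHandler)
    {
        _rootOperationId = rootOperationId;
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Propagate the operation ID to the downstream layer.
        request.Headers.Add("x-ms-request-root-id", _rootOperationId);
        return base.SendAsync(request, cancellationToken);
    }
}

Usage would look like: var client = new HttpClient(new CorrelationHeaderHandler(operationId, new HttpClientHandler()));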
Related
I'm working on a project structured in an N-Tier / 3-Tier architectural style. The layers (client-server) communicate with each other over HTTP and WCF. I want to use AppInsights to track everything in the project. However, I'm confused about where in the project to create or add the AppInsights packages. I have created two resource instances under one resource group on Azure. Then I configured the instrumentation key and other settings for the projects in the solution. Although I overrode the handler attribute with my own (as explained in the AppInsights docs), I couldn't see all the method calls and exception traces from client to server across the layers. Is it not possible to track exceptions in this type of project, or am I mistaken?
UPDATE with additional info to define the problem clearly: When I use one resource and one instrumentation key, AppInsights maps the whole project well and shows the relationships between my solution and its dependencies. But when a request fails due to an exception on the server, I can't see the whole exception trace, just the last method where the request was sent to the server. Because of that, I use two resource instances: one for the server and one for the client (which is not the real client; it's more like a middle layer).
If you are using multiple apps/keys and multiple layers, you might need to introduce custom code to set and pass through something like a correlationId via HTTP headers, plus code that reads it and uses it as Application Insights' "operation id", as sketched below. If you do the work across different applications/ikeys, searching across the apps will become more complicated.
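A minimal sketch of the reading side (the "x-correlation-id" header name and the class name are assumptions for your own convention):

using System.Web;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Reads the correlation header set by the upstream layer and uses it
// as the Application Insights operation id.
public class CorrelationIdTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (HttpContext.Current == null) return;

        var correlationId = HttpContext.Current.Request.Headers["x-correlation-id"];
        if (!string.IsNullOrEmpty(correlationId))
        {
            telemetry.Context.Operation.Id = correlationId;
        }
    }
}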
Instead, you might want to use just one ikey across all the layers, and use different ikeys for different environments, like dev/staging/prod.
There are some examples of reading/using operationId, like this blog post:
http://blog.marcinbudny.com/2016/04/application-insights-for-owin-based.html#.WBkL24WcFaQ
where someone does something similar with OWIN.
For exception tracking: if the exceptions are happening in one of the other ASP.NET layers, you might need to enable some extra settings to collect exceptions, or more information about them:
http://apmtips.com/blog/2016/06/21/application-insights-for-mvc-and-mvc-web-api/
I'm very new to Application Insights, and I'm thinking of using it for a set of services I plan on implementing with asp.net webapi. I was able to get the basic telemetry up and running very easily (right-clicking on a project on VS, Add Application Insights), but then I hit a block. I plan to have a correlation id set in the request headers for calls to downstream services, and I would like to tag all the telemetry related to one outside call with the same correlation id.
So far I've found that there is a way to configure a TelemetryInitializer, but if I understood correctly, this is run before I get to access the request, meaning I can't check if there is a correlation id that I should attach.
So I guess there might be two ways to solve this: 1) if I can somehow get access to the request headers before the initializer runs, that would obviously solve the problem, or 2) somehow get hold of the TelemetryClient instance that is used to report the automatically generated telemetry.
Perhaps the last resort would be to turn off all of the automatic stuff and do all of it manually, when I could of course control what properties are set on the TelemetryClient. But this would be quite a lot more work, so I'd prefer to find some other solution.
You were right in saying that you should use a TelemetryInitializer. All TelemetryInitializers are called when the Track method is called on any telemetry item. Autogenerated request telemetry is "tracked" on request OnEnd, so you should have all your custom headers available to you at that time.
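For example, a minimal sketch of an initializer that copies a correlation header onto every telemetry item as a custom property (the "x-correlation-id" header and "CorrelationId" property names are assumptions):

using System.Web;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

public class CorrelationPropertyInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        var context = HttpContext.Current;
        if (context == null) return;

        // By the time the request telemetry is tracked (OnEnd),
        // the incoming headers are available here.
        var correlationId = context.Request.Headers["x-correlation-id"];
        if (!string.IsNullOrEmpty(correlationId))
        {
            telemetry.Context.Properties["CorrelationId"] = correlationId;
        }
    }
}

You can register it in ApplicationInsights.config or via TelemetryConfiguration.Active.TelemetryInitializers.Add(...).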
Please also have a look at OperationId. This is part of the standard context managed by App Insights and is used exactly for the purpose of correlating requests with downstream execution. It is created and passed along automatically, including to traces (if you use TrackTrace).
Moreover, there is built-in support in the UX for easily seeing all telemetry for a particular operation: it can be found under "Search -> Details -> Related Items -> All telemetry for this operation".
So my question is very much related to this one: Entity persistence inside Domain Events using a repository and Entity Framework?
EDIT: A much better discussion on the topic is also here: Where to raise persistence-dependent domain events - service, repository, or UI?
However my question is rather more simple and technical, assuming that I'm taking the right approach.
Let's suppose I have the following projects:
MyDomainLayer -> very simple classes, Persistence Ignorance, a.k.a. POCOs
MyInfrastructureLayer -> includes code for repositories, Entity Framework
MyApplicationLayer -> includes ASP.Net MVC controllers
MyServicesLayer -> WCF-related code
MyWebApplication -> ASP.Net MVC (Views, Scripts, etc)
When an event is raised (for example a group membership has been granted),
then two things should be done (in two different layers):
To Persist data (insert a new group membership record in the DB)
To Create a notification for the involved users (UI related)
I'll take a simple example from the last reference I mentioned in the introduction:
The domain layer has the following code:
public void ChangeStatus(OrderStatus status)
{
// change status
this.Status = status;
DomainEvent.Raise(new OrderStatusChanged { OrderId = Id, Status = status });
}
Let's assume the event handler is in MyApplicationLayer (to be able to talk to the Services Layer).
It has the following code:
OrderStatusChanged orderStatusChanged = null;
DomainEvent.Register<OrderStatusChanged>(x => orderStatusChanged = x);
How does the wiring-up happen? I guess it is done with StructureMap, but what does this wire-up code look like exactly?
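For reference, a minimal hand-rolled sketch of the DomainEvent class these snippets assume, modeled on Udi Dahan's well-known pattern (in a real application a container such as StructureMap would typically resolve additional handlers):

using System;
using System.Collections.Generic;

public static class DomainEvent
{
    [ThreadStatic]
    private static List<Delegate> _actions;

    // Used for callbacks registered at runtime, like the Register call above.
    public static void Register<T>(Action<T> callback)
    {
        if (_actions == null) _actions = new List<Delegate>();
        _actions.Add(callback);
    }

    // Invokes every registered callback whose type matches the event.
    public static void Raise<T>(T args)
    {
        if (_actions == null) return;
        foreach (var action in _actions)
        {
            if (action is Action<T>) ((Action<T>)action)(args);
        }
    }
}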
First, your layering isn't exactly right. Corrections:
Application Layer - ASP.NET MVC controllers are normally thought of as forming an adapter between your application layer and HTTP/HTML. Therefore, the controllers aren't themselves part of the application layer. What belongs in application layer are application services.
MyServicesLayer - WCF-related code. WCF-implemented services are adapters in the hexagonal architecture referenced by Dennis Traub.
MyWebApplication - ASP.NET MVC (Views, Scripts, etc). Again, this forms an adapter in a hexagonal architecture. MVC controllers belong here as well; effectively they are an implementation detail of this adapter. This layer is very similar to a service layer implemented with WCF.
Next, you describe two things that should happen in response to an event. Persistence is usually achieved by committing a unit of work within a transaction, not in a handler responding to an event. Also, notifications should be sent after persistence is complete, in other words after the transaction is committed. This is best done in an eventually consistent manner, outside of the unit of work that generated the domain event in the first place; see the sketch below.
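A minimal sketch of that ordering, with a hypothetical UnitOfWork and publish delegate (names are illustrative, not from a specific framework), assuming events are collected during the transaction and published only after it commits:

using System;
using System.Collections.Generic;
using System.Transactions;

public class UnitOfWork
{
    private readonly List<object> _pendingEvents = new List<object>();
    private readonly Action<object> _publish;

    public UnitOfWork(Action<object> publish)
    {
        _publish = publish;
    }

    public void Collect(object domainEvent)
    {
        _pendingEvents.Add(domainEvent);
    }

    public void Commit(Action persist)
    {
        using (var scope = new TransactionScope())
        {
            persist();        // e.g. EF SaveChanges()
            scope.Complete(); // the transaction commits here
        }

        // Notifications go out only after the commit succeeded,
        // giving the eventual consistency described above.
        foreach (var e in _pendingEvents)
        {
            _publish(e);
        }
        _pendingEvents.Clear();
    }
}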
For specifics on how to implement a domain event pub/sub system take a look here.
My first recommendation: get rid of the notion of Layers and make yourself familiar with the concept of a Hexagonal Architecture, a.k.a. Ports and Adapters.
With this approach it is much easier to understand how the domain model can stay independent of any of the surrounding concerns. Basically, that is object-orientation at an architectural level. Layers are procedural.
For your specific problem, you might create a project containing the event handlers that project events into the database. These handlers can have direct access to the database or go through an ORM. You probably won't need any repositories there, since the events should contain all the information that's needed; see the sketch below.
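For example, a sketch of such a projection handler; the IHandles interface and OrdersContext (an assumed EF DbContext) are illustrative, while OrderStatusChanged comes from the question's snippet:

// Hypothetical handler contract.
public interface IHandles<TEvent>
{
    void Handle(TEvent e);
}

// Projects the event straight into the database.
public class OrderStatusChangedProjection : IHandles<OrderStatusChanged>
{
    public void Handle(OrderStatusChanged e)
    {
        using (var db = new OrdersContext()) // assumed EF DbContext
        {
            // The event carries all required data, so no repository is needed.
            db.Database.ExecuteSqlCommand(
                "UPDATE Orders SET Status = {0} WHERE Id = {1}",
                e.Status, e.OrderId);
        }
    }
}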
Consider 3 modules/classes in an ASP.NET Webforms application.
I need a web service for each of them, where each web service contains only one function.
Should I group them into one web service class, or should I keep the one web service for each class?
If they are related and need to be exposed for consumption by a single client, you could create one web service and call it an API. This means you and your client maintain/consume a single web service.
If they are clearly unrelated, separate them.
Group them into one class (the base web service class). If needed, you can branch out from there and instantiate more complicated classes, or even call external libraries (for example, if you have a data layer to call).
I have an ASP.NET MVC web app whose controllers use WCF to call into the domain model on a different server. The domain code needs to talk to a database and access to the database server isn't always possible from web servers (depends on the customer site) hence the use of WCF to get to a place where my code is allowed to connect to the database server.
This is configurable so if the controllers are able to access the database server directly then I use local instances of the domain objects rather than use WCF.
Let's say I have a page asking for person details like age, name, etc. This is a complex type that is a parameter on my WCF operation, like this:
[OperationContract]
string SayHello( Person oPerson);
When I generate the client code (e.g. by adding a service reference in my client) I get a separate Person class that fulfills the WCF contract. The client, an MVC web app, can use this client Person class as the view model and all is well. I pass it straight into the WCF client methods and it all works brilliantly.
If my MVC client app is configured NOT to use WCF, I have a problem. If I am calling my domain objects directly from the controller (assume I have a domain access factory/provider set up), then I need the original Person class and not the WCF-generated Person class. The result is that I will have to map from one object to the other whenever I don't use WCF.
The main problem with this is that there are many domain objects that would need to be mapped, and errors may be introduced, such as new properties being forgotten about in future changes.
I'm learning and experimenting with WCF and MVC; can you help me understand what my options are in this scenario? I'm sure there will be an easy way out of this, given the extensibility of WCF and MVC.
Thanks
It appears that you are not actually trying to use a service-oriented architecture. In this case, you can place the domain objects into a single assembly, and share it between the WCF service and the clients. When creating the clients, use "Add Service Reference", and on the "Advanced" tab, choose "Share Types". Either choose to share all types, or choose the list of assemblies whose types you want to share.
Sound service-oriented architecture dictates that you use message-based communication regardless of whether your service is on another machine, in another process, in another app domain, or in your own app domain. You can use different endpoints with different bindings to take advantage of the speed of the link (HTTP, TCP, named pipes) based on the location of your service, but the code using that service would remain the same.
This may not be the easiest or least time-consuming answer, but one thing you can do is avoid the "add service reference" option, copy your contract interfaces to your MVC application, and initiate the connection to WCF manually without automatically creating a service proxy. This allows you to use one set of classes for your model objects, and you can control explicitly when to use WCF or not; a sketch of this approach follows.
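A minimal sketch of that manual approach using ChannelFactory<T>, assuming a shared contract assembly; IPersonService, PersonServiceClient, and the endpoint address are illustrative names:

using System.Runtime.Serialization;
using System.ServiceModel;

// Shared contract assembly, referenced by both the server and the MVC client.
[DataContract]
public class Person
{
    [DataMember] public string Name { get; set; }
    [DataMember] public int Age { get; set; }
}

[ServiceContract]
public interface IPersonService
{
    [OperationContract]
    string SayHello(Person oPerson);
}

public class PersonServiceClient
{
    public string SayHello(Person person)
    {
        var factory = new ChannelFactory<IPersonService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://localhost:8080/PersonService"));

        var channel = factory.CreateChannel();
        try
        {
            // The same Person type is used by the domain layer and the
            // client, so no mapping between generated classes is needed.
            return channel.SayHello(person);
        }
        finally
        {
            ((IClientChannel)channel).Dispose();
            factory.Close();
        }
    }
}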
There's a good series of webcasts on WCF by Michele Leroux Bustamante, and I think in episode 2, she explains how to do exactly this. Check it out here: http://www.dasblonde.net/WCFWebcastSeries.aspx
Hope this helps!
One sound option is that you always use WCF, even if client and server are in the same process, as Aviad points out.
Another option is to define the service contracts as interfaces, and to put these, together with the data contracts, into an assembly that is shared between client and server. In the client, don't use svcutil or a service reference; instead, use ChannelFactory<T>.
This way, your client code will use the same interfaces and classes as the server.