I'm trying to create an end-to-end Application Map using Application Insights. Note all dependencies and metrics are captured and sent using the SDK.
Take the following scenario:
Windows service (batch processing) > (calls) WebAPI > (queries db)
I have two Application Insights resources - Windows Service and WebAPI. Both are capturing metrics, but in isolation. How can I create a dependency using the SDK between resource 1 (i.e. the service) and resource 2 (i.e. the WebAPI)? I need to be able to view the Application Map for resource 1 and see the entire end-to-end view of windows service > WebAPI > db.
I can currently see only windows service > WebAPI (App Map for resource 1) or WebAPI > db (App Map for resource 2). I need to bring both together somehow.
The Application Insights SDK only collects dependencies automatically for HTTP dependencies. That also only works when the Application Insights profiler is running on the machine (often installed on Azure websites through the Application Insights extension).
If you happen to be in one of the situations where the new beta SDK is not collecting dependencies for you, you can do it yourself by writing a little bit of code.
The SDK's autocollection code is open source, and you can use it as a guide for how to track these dependencies. The idea is to append the hash of the target component's instrumentation key to the dependency telemetry's Target field and set the dependency type to "Application Insights".
Here is how to compute the hash: Compute Hash
Here is how to add it to the target field and set the right dependency type on the dependency telemetry object: Add component correlation to DependencyTelemetryTarget
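For illustration, here is a minimal sketch of that approach, assuming (as the linked autocollection code did at the time of writing) that the hash is a Base64-encoded SHA-256 of the target component's instrumentation key; the dependency name, host name and timing values below are placeholders:

using System;
using System.Security.Cryptography;
using System.Text;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public static class ComponentCorrelation
{
    // Assumption: Base64-encoded SHA-256 of the instrumentation key, mirroring the
    // linked autocollection code. Verify against the current SDK source.
    public static string GetInstrumentationKeyHash(string instrumentationKey)
    {
        using (var sha256 = SHA256.Create())
        {
            var hash = sha256.ComputeHash(Encoding.UTF8.GetBytes(instrumentationKey));
            return Convert.ToBase64String(hash);
        }
    }

    public static void TrackCrossComponentCall(TelemetryClient client, string targetHost, string targetInstrumentationKey)
    {
        var dependency = new DependencyTelemetry
        {
            Name = "Call to WebAPI",
            // Append the target component's instrumentation key hash so the
            // Application Map can stitch the two resources together.
            Target = targetHost + " | " + GetInstrumentationKeyHash(targetInstrumentationKey),
            Type = "Application Insights",
            Timestamp = DateTimeOffset.UtcNow,
            Duration = TimeSpan.FromMilliseconds(150),
            Success = true
        };

        client.TrackDependency(dependency);
    }
}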
A little word of caution: there may soon be a change to the format in which the target field is captured and/or the name of the dependency type (see this discussion). If and when that happens, it should be an easy enough change for you to make as well.
My recommendation would be to use the same Application Insights resource (i.e. the same instrumentation key) for both your Windows Service and Web API.
You can separate the telemetry for those two services by adding a custom property indicating the originating service to all telemetry you emit. The easiest way to do this is to implement a telemetry initializer (see here for documentation).
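A rough sketch of such an initializer (the property name "Service" and the registration shown are just illustrative choices), registered once per service with its own value:

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Stamps every telemetry item with the emitting service's name so the two
// services can be told apart even though they share one instrumentation key.
public class ServiceNameTelemetryInitializer : ITelemetryInitializer
{
    private readonly string serviceName;

    public ServiceNameTelemetryInitializer(string serviceName)
    {
        this.serviceName = serviceName;
    }

    public void Initialize(ITelemetry telemetry)
    {
        var withProperties = telemetry as ISupportProperties;
        if (withProperties != null && !withProperties.Properties.ContainsKey("Service"))
        {
            withProperties.Properties["Service"] = serviceName;
        }
    }
}

// Registration at startup, e.g. in the Windows service:
// TelemetryConfiguration.Active.TelemetryInitializers.Add(
//     new ServiceNameTelemetryInitializer("WindowsService"));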
It is not possible today. Possible ways -
Use a single InstrumentationKey and identify each app by a custom property (as suggested by @EranG)
Export the data for both apps and do your own thing
Please vote on this UserVoice; the product team is already considering implementing this functionality in the future.
I am new to PACT and trying to use pact-net for contract testing for a .net microservice. I understand the concept of consumer test which generates a pact file.
There is the concept of a provider state middleware which is responsible for making sure that the provider's state matches the Given() condition in the generated pact.
I am a bit confused about the following and how to achieve it:
The provider tests are run against the actual service. So we start the provider service before tests are run. My provider service interacts with a database to store and retrieve records. PACT also mentions that all the dependencies of a service should be stubbed.
So do we run the actual provider API against the actual DB?
If we run the API against the actual DB, how do we inject the data into the DB? Should we be using the provider API's own endpoints to add the Given() data?
If the above is not the correct approach then what is?
All the basic blog articles I have come across do not explain this and usually have examples with no provider states or states that are just some text files on the file system.
Help appreciated.
I'm going to add to Matt's comment; you have three options:
1. Do your provider test against a connected environment, but you will have to do some cleanup manually afterwards and make sure your data is always available in your DB and/or that the external APIs are always up and running. Simple to write, but can be very hard to maintain.
2. You mock your API calls but call the real database.
3. You mock all your external dependencies: the API and the DB calls.
For 2) or 3) you will have to have test routes and inject the provider state middleware in your provider test fixture. Then you can configure provider states to be called to generate in-memory data for solution 3), or add some data initialization if you are in solution 2).
You can find an example here: https://github.com/pact-foundation/pact-net/tree/master/Samples/EventApi/Provider.Api.Web.Tests
The provider tests are run against the actual service
Do you mean against a live environment, or against the actual service running locally next to the unit test? (The former is not recommended, because of (2) above.)
This is one of the exceptions to that rule. You can choose to use a real DB or an in-memory one - whatever is most convenient. It's common to use Docker and similar tools for this kind of testing.
In your case, I'd have a specific test-only set of routes that respond to the provider state handler endpoints, and that also have access to the repository code so they can manipulate the state of the system.
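To make that concrete, here is a hedged sketch of a test-only provider state endpoint as ASP.NET Core middleware; the /provider-states path, the state text and the InMemoryOrderRepository are all hypothetical, and the exact wiring into the verifier depends on your pact-net version:

using System.Collections.Generic;
using System.IO;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Hypothetical test-only middleware: the Pact verifier POSTs the Given() description
// here, and we seed in-memory data to match it - no real database involved.
public class ProviderStateMiddleware
{
    private readonly RequestDelegate next;
    private readonly InMemoryOrderRepository repository;

    public ProviderStateMiddleware(RequestDelegate next, InMemoryOrderRepository repository)
    {
        this.next = next;
        this.repository = repository;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        if (!context.Request.Path.StartsWithSegments("/provider-states"))
        {
            await next(context);
            return;
        }

        using var reader = new StreamReader(context.Request.Body);
        var body = await reader.ReadToEndAsync();
        var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
        var request = JsonSerializer.Deserialize<ProviderStateRequest>(body, options);

        // Map each Given() text to the data it needs.
        if (request?.State == "an order with id 1 exists")
        {
            repository.Add(new Order { Id = 1, Description = "Seeded for verification" });
        }

        context.Response.StatusCode = StatusCodes.Status200OK;
    }
}

public class ProviderStateRequest { public string State { get; set; } }
public class Order { public int Id { get; set; } public string Description { get; set; } }

// Minimal stand-in for the repository that the test-only composition root injects
// into both this middleware and the real API controllers.
public class InMemoryOrderRepository
{
    private readonly List<Order> orders = new List<Order>();
    public void Add(Order order) => orders.Add(order);
    public Order Find(int id) => orders.Find(o => o.Id == id);
}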
I am using Elastic APM agent (https://www.elastic.co/guide/en/apm/agent/dotnet/current/index.html) to instrument an ASP.NET MVC Application. I added the nuget packages and added the module entry in the web.config.
I am able to get data in the Kibana APM tab and nicely shows the time spent by each call. (see screenshot below).
My question is: how can I drill down inside each of these calls to see where the time is spent in the stack trace? Is there something I am missing?
There are basically two ways the agent captures things:
Auto-instrumentation: in this case you don't write any code; the agent just captures things for you - this is what we see in your screenshot.
Manual code instrumentation: for this you can use the Public Agent API and capture things programmatically.
In a typical classic ASP.NET MVC application the agent has auto-instrumentation for outgoing HTTP calls with HttpClient and for database calls with EF6 (make sure to add the interceptor); SqlClient support is already work in progress and will hopefully be released soon. So unless you have any of these within those requests, the agent won't capture anything more out of the box.
If you want to capture more, currently the way to go is to place some agent-specific code - so basically manual code instrumentation - into your application and use the public agent API.
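For example (the class, method and span names below are made up; check the Public Agent API docs for the exact overloads), wrapping a slow piece of code in a span attaches it to the current request's transaction and gives you the drill-down you are after:

using Elastic.Apm;

public class ReportService
{
    public void GenerateReport()
    {
        // The agent created a transaction for the incoming ASP.NET request;
        // add a custom span to it so this section shows up in the trace.
        var transaction = Agent.Tracer.CurrentTransaction;

        transaction?.CaptureSpan("Calculate totals", "custom", () =>
        {
            // ... the code whose time you want to see ...
        });
    }
}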
End-to-end logging between multiple components using Application Insights (AI) is based on a hierarchical Request-Id header, so each component is responsible for honoring a possible incoming Request-Id. To get the full end-to-end hierarchical flow correct in Application Insights, the Request-Id header needs to be used as the AI Operation.Id and Operation.ParentId (as described here).
But when making a request with a Request-Id header to an Azure Function using an HttpTrigger binding, for example (Microsoft.NET.Sdk.Functions 1.0.24), with integrated Application Insights configured (as described here), a completely new Operation.Id is created and used - causing the whole flow in AI to be lost. Any ideas on how to get around this?
Setting up a separate custom TelemetryClient might be an option, but it seems to require a lot of configuration to get the full ExceptionTrackingTelemetryModule and DependencyTrackingTelemetryModule right - especially when using Functions v2 and Core (ref to AI config). Has anyone got that working successfully?
This is not yet supported by Functions but should start working sometime early next year.
If you want to hack it through, you can add a reference to ApplicationInsights SDK for AspNetCore (v 2.4.1) and configure RequestTrackingTelemetryModule.
// Hack: run the ASP.NET Core request tracking module ourselves so incoming
// Request-Id headers are honored. Static so it is initialized once per host.
private static readonly RequestTrackingTelemetryModule requestModule;

static Function1()
{
    requestModule = new RequestTrackingTelemetryModule();
    requestModule.Initialize(TelemetryConfiguration.Active);
}
This is pretty sketchy, not fully tested, and has drawbacks. E.g. the requests collected are no longer augmented with Functions details (invocation ID, etc.). To overcome this you need to get the real TelemetryConfiguration from the Function's dependency injection container and use it to initialize the module.
It should be possible, but is blocked by some issue.
But even with the code above, you should get requests that respect incoming headers and other telemetry correlated to the request.
Also, when out-of-the-box support for correlation of HTTP requests is rolled out, this may break. So this is a hacky temporary solution; use it only if you absolutely have to.
I have an application that consists of a web application and multiple Windows services; only one Windows service is installed, depending on which version of the backend software is used.
Currently, data is saved by the web app in a database, then the relevant installed service picks up the data and posts it into the backend system.
I want to change this to use WCF services so the resulting data is returned directly to the web app.
I have not used WCF services before, but I'm assuming I can do something like this:
WebApp.Objects.dll - contains database objects, e.g. a PurchaseOrder object
WebApp.Service.Contracts.dll - here I can describe the service methods; this will reference WebApp.Objects.dll so I can take a PurchaseOrder object as a parameter
WebApp.Service.2011.dll - the actual service for the 2011 version of the backend system; this will reference the WebApp.Service.Contracts dll
WebApp.Service.2012.dll - the actual service for the 2012 version of the backend system; this will reference the WebApp.Service.Contracts dll
So, my question is: does the web app need to know the specifics of which backend WCF service is used? I just want to call a service with the specified interface and not care about how it's implemented or what it does internally, just that it returns the purchase order that was created in the backend system (whether it returns an interface or a concrete class).
Will I be able to create a service client without needing to know whether it's the 2011 or the 2012 WCF service being used?
As long as you are able to use the exact same contract for all the versions, the web application does not need to know which version of the WCF service it is accessing.
In the configuration of the web application, you specify the URL and the contract. However, besides the contract there might be other differences between the services. In an extreme example this might mean that v2011 uses a different binding than v2012 of the backend - which is not very likely from your description. But subtle differences in the configuration or the behavior of the services should also be addressed in the configuration files. E.g. if v2012 needs longer for an action than v2011 does, the timeouts need to be configured so that the longer duration of v2012 does not lead to an expiration.
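For illustration, a sketch of the shared contract plus a client that only knows that contract (the endpoint name "PurchaseOrderEndpoint" and the contract members are placeholders); which concrete service answers is decided purely by the web app's configuration:

using System.Runtime.Serialization;
using System.ServiceModel;

// WebApp.Service.Contracts - the only assembly the web app needs to reference.
[ServiceContract]
public interface IPurchaseOrderService
{
    [OperationContract]
    PurchaseOrder CreatePurchaseOrder(PurchaseOrder order);
}

[DataContract]
public class PurchaseOrder
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public decimal Total { get; set; }
}

// In the web app: create a client from configuration without knowing whether the
// 2011 or 2012 service sits behind the endpoint.
public class PurchaseOrderClient
{
    public PurchaseOrder Create(PurchaseOrder order)
    {
        var factory = new ChannelFactory<IPurchaseOrderService>("PurchaseOrderEndpoint");
        var channel = factory.CreateChannel();
        try
        {
            return channel.CreatePurchaseOrder(order);
        }
        finally
        {
            ((IClientChannel)channel).Close();
            factory.Close();
        }
    }
}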
I have an ASP.NET MVC 3 application that I am currently working on. I am implementing a service layer, which contains the business logic, and which is utilized by the controllers. The services themselves utilize repositories for data access, and the repositories use entity framework to talk to the database.
So top to bottom is: Controller > Service Layer > Repository (each service layer depends on a single injectable repository) > Entity Framework > Single Database.
I am finding myself making items such as UserService, EventService, PaymentService, etc.
In the service layer, I'll have functions such as:
ChargePaymentCard(int cardId, decimal amount) (part of PaymentService)
ActivateEvent(int eventId) (part of EventService)
SendValidationEmail(int userId) (part of UserService)
Also, as an example of a second place I am using this, I have another simple console application that runs as a scheduled task, which utilizes one of these services. There is also an upcoming second web application that will need to use multiple of these services.
Further, I would like to keep us open to splitting things up (such as our single database) and to moving to a service-oriented architecture down the road, breaking some of these out into web services (conceivably even to be used by non-.NET apps some day). I've been keeping my eyes open for steps that might make the leap to SOA less painful down the road.
I have started down the path of creating a separate assembly (DLL) for each service, but am wondering if I have started down the wrong path. I'm trying to be flexible and keep things loosely coupled, but is this path helping me any (towards SOA or in general), or just adding complexity? Should I instead be creating a single assembly/DLL containing my entire service layer, and use that single assembly wherever any services need to be used?
I'm not sure the implications of the path(s) I'm starting down, so any help on this would be appreciated!
IMO, the answer depends on a lot of factors in your application.
Assuming that you are building a non-trivial application (i.e. is not a college/hobby project to learn SOA):
User Service / Event Service / Payment Service
-- Create its own DLL & expose it as a WCF service if more than one application uses the service and if it is too much of a risk to share the DLL with different applications
-- These services should not have inter-dependencies between each other & should focus on their individual area
-- Note: these services might share some common services like logging, authentication, data access etc.
Create a Composition Service
-- This service will do the composition of calls across all the other services
-- For example: if an Order is placed & the business flow is Order Placed > Confirm User Exists (User Service) > Raise an OrderPlaced event (Event Service) > Confirm Payment (Payment Service)
-- All such compositions of service calls can be handled in this layer
-- Again, depending on the environment, you might choose to expose this service as its own DLL and/or expose it as a WCF service
-- Note: this is the only service which will share the references to other services & will be the single point of composition
Now, with this layout, you have the option to call a service directly if you want to interact with that service alone, and you call the composition service when you need a business workflow where different services must be composed to complete the transaction.
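A bare-bones sketch of that composition service (all interface and method names below are illustrative, not taken from your code):

// Illustrative contracts - the real ones would live in their own service assemblies.
public interface IUserService { bool UserExists(int userId); }
public interface IEventService { void RaiseOrderPlaced(int orderId); }
public interface IPaymentService { bool ConfirmPayment(int orderId, decimal amount); }

// The only class that references the other services; it encodes the business flow
// Confirm User Exists > Raise OrderPlaced event > Confirm Payment.
public class OrderCompositionService
{
    private readonly IUserService users;
    private readonly IEventService events;
    private readonly IPaymentService payments;

    public OrderCompositionService(IUserService users, IEventService events, IPaymentService payments)
    {
        this.users = users;
        this.events = events;
        this.payments = payments;
    }

    public bool PlaceOrder(int userId, int orderId, decimal amount)
    {
        if (!users.UserExists(userId))
        {
            return false;
        }

        events.RaiseOrderPlaced(orderId);
        return payments.ConfirmPayment(orderId, amount);
    }
}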
As a starting point, I would recommend that you go through any of the books on SOA architecture - it will help clear a lot of concepts.
I tried to be as short as possible to keep this answer meaningful; there are tons of ways of doing the same thing, and this is just one of the possible ways.
HTH.
Having one DLL per service sounds like a bad idea. According to Microsoft, you'd want to have one large assembly over multiple smaller ones due to performance implications (from here via this post).
I would split your base or core services into a separate project and keep most (if not all) services in it. Depending on your needs you may have services that only make sense in the context of a web project or a console app and not anywhere else. Those services should not be part of the "core" service layer and should reside in their appropriate projects.
It is better to separate the services from the consumers. In our projects we have two levels of separation: we group all the service interfaces into one Visual Studio project, and all the service implementations into another project.
The consumer of the services needs to reference two DLLs, but it makes the solution more maintainable and scalable. We can have multiple implementations of the services.
For example, the interfaces project can define a contract for WebSearch, and there can be multiple implementations of WebSearch through different search providers like Google Search, Bing Search, Yahoo Search, etc.
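A small illustration of that layout (the names are made up): the interfaces project only ships the contract, while each provider lives in the implementations project, so consumers can swap providers without recompiling against a new contract.

using System.Collections.Generic;

// Interfaces project: the contract that consumers reference.
public interface IWebSearch
{
    IReadOnlyList<string> Search(string query);
}

// Implementations project: one class per search provider.
public class GoogleSearch : IWebSearch
{
    public IReadOnlyList<string> Search(string query)
    {
        // Call the Google search API here and map the results.
        return new List<string>();
    }
}

public class BingSearch : IWebSearch
{
    public IReadOnlyList<string> Search(string query)
    {
        // Call the Bing search API here and map the results.
        return new List<string>();
    }
}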