The process in Tiers & Nodes doesn't display on the Application Dashboard - AppDynamics

My process appears under Tiers & Nodes, but I cannot see it on the Application Dashboard. How can I make it visible on the Application Dashboard? Thanks for your help.

"Tiers & Nodes" shows any Node which has registered with the Controller in the Application.
On the "Application Dashboard" you will see a flow map of all registered Business Transactions with entry points on the Nodes in the Application.
So I am guessing that you are seeing the Node, but don't have any registered Business Transactions (check the "Business Transactions" screen to confirm).
i.e. you can see the service, but not any details of code execution.
You need to update your Business Transaction Detection rules / push traffic into the Application in order to register Business Transactions. These will be seen under "Business Transactions" once registered and visible on relevant Application / Tier / Node flow maps.
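If the node simply hasn't served any requests yet, pushing a bit of traffic is often enough for Business Transactions to register. A minimal sketch of a traffic generator in C#; the URL is a placeholder for one of your own service's endpoints:

// Sketch only: repeatedly call an endpoint on the instrumented service so the
// agent can detect and register Business Transactions.
// "http://localhost:8080/orders" is a hypothetical URL.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class TrafficGenerator
{
    static async Task Main()
    {
        using var client = new HttpClient();
        for (var i = 0; i < 100; i++)
        {
            var response = await client.GetAsync("http://localhost:8080/orders");
            Console.WriteLine($"Request {i}: {response.StatusCode}");
            await Task.Delay(TimeSpan.FromSeconds(1));
        }
    }
}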
See the following documentation:
Business Transactions - https://docs.appdynamics.com/appd/22.x/latest/en/application-monitoring/business-transactions
Configure Business Transactions - https://docs.appdynamics.com/appd/22.x/latest/en/application-monitoring/business-transactions/configure-business-transactions
Flow Maps Documentation - https://docs.appdynamics.com/appd/22.x/latest/en/application-monitoring/business-applications/flow-maps (see "Live Entity Data" for what is shown on flowmaps)

Related

How can I drill down into the stack trace in an ASP.NET MVC application using Elastic APM?

I am using Elastic APM agent (https://www.elastic.co/guide/en/apm/agent/dotnet/current/index.html) to instrument an ASP.NET MVC Application. I added the nuget packages and added the module entry in the web.config.
I am able to get data in the Kibana APM tab, and it nicely shows the time spent by each call (see screenshot below).
My question is: how can I drill down inside each of these calls to see where the time is spent in the stack trace? Is there something I am missing?
There are basically two ways the agent captures things:
Auto-instrumentation: in this case you don't write any code, the agent just captures things for you - this is what we see in your screenshot.
Manual code instrumentation: for this you can use the Public Agent API and capture things programmatically.
In a typical classic ASP.NET MVC application the agent has auto-instrumentation for outgoing HTTP calls made with HttpClient and for database calls made with EF6 (make sure to add the interceptor); SqlClient support is already work in progress and will hopefully be released soon. So unless you have any of these within those requests, the agent won't capture anything out of the box.
If you want to capture more than that, currently the way to go is to place some agent-specific code - so basically manual code instrumentation - into your application and use the public agent API.
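As a concrete illustration of the manual option, you can wrap a custom span around a slow block of code with the public agent API. A minimal sketch, assuming the Elastic.Apm package is referenced; the class, method and span names are made up:

using Elastic.Apm;

public class ReportService
{
    public void GenerateReport()
    {
        // Capture a custom span inside the transaction the agent already
        // started for the incoming MVC request. Span name and type are arbitrary.
        var transaction = Agent.Tracer.CurrentTransaction;
        transaction?.CaptureSpan("CalculateTotals", "internal", () =>
        {
            // ... the code whose time you want to see in the trace ...
        });
    }
}

The captured span then shows up as a child of the request transaction in the Kibana APM timeline, which gives you the drill-down you are after for code the agent doesn't instrument automatically.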

ASP.NET 4 Health Monitoring - capture User Agent Details

I am currently implementing health monitoring in my ASP.NET Web Forms application on .NET 4.
It gives me the sections of detail below when there is an unhandled exception on my site.
Summary
Application Information
Events
Process information:
Exception information:
Request information:
Thread information:
I would also like to capture user agent details. Is that possible?
Can anyone point me to the right source on how to achieve this?
I am also interested to know how I can capture these exception details in my database, if the exception did not originate from the database.
Well, I found this on MSDN; hope anyone in need will benefit from these links.
How to store events to SQL Server
How to forward events to WMI
How to forward events to e-mail
Custom Failure Audit Default at
http://msdn.microsoft.com/en-us/library/system.web.management.webfailureauditevent(v=vs.100).aspx
http://support.microsoft.com/kb/893664
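For the user agent part of the question specifically, one option (a sketch only, not the only approach) is a custom health-monitoring event provider that reads the user agent from the current request and writes the exception details to your own table. The LoggingDb connection string and ErrorLog table below are hypothetical:

using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Web;
using System.Web.Management;

// Sketch of a custom health-monitoring provider. Register it under
// <healthMonitoring><providers> in web.config and map it to the error
// events you care about via a <rules> entry.
public class UserAgentErrorEventProvider : WebEventProvider
{
    public override void ProcessEvent(WebBaseEvent raisedEvent)
    {
        var errorEvent = raisedEvent as WebBaseErrorEvent;
        if (errorEvent == null) return;

        // For unhandled exceptions the event is raised on the request
        // thread, so the current HttpContext (and its request) is available.
        var userAgent = HttpContext.Current != null
            ? HttpContext.Current.Request.UserAgent
            : null;

        // "LoggingDb" and the ErrorLog table are hypothetical names.
        using (var conn = new SqlConnection(
            ConfigurationManager.ConnectionStrings["LoggingDb"].ConnectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO ErrorLog (EventTime, Message, UserAgent) " +
            "VALUES (@time, @message, @userAgent)", conn))
        {
            cmd.Parameters.AddWithValue("@time", errorEvent.EventTime);
            cmd.Parameters.AddWithValue("@message", errorEvent.Message);
            cmd.Parameters.AddWithValue("@userAgent", (object)userAgent ?? DBNull.Value);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    public override void Flush() { }
    public override void Shutdown() { }
}

Hook it up the same way the built-in SqlWebEventProvider is wired: add it to the <providers> section of <healthMonitoring> and reference it from a rule that matches "All Errors" (or a narrower event source).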

how to wire-in domain event handlers in multi-layer applications

So my question is very much related to this one: Entity persitance inside Domain Events using a repository and Entity Framework?
EDIT: A much better discussion on the topic is also here: Where to raise persistence-dependent domain events - service, repository, or UI?
However my question is rather more simple and technical, assuming that I'm taking the right approach.
Let's suppose I have the following projects:
MyDomainLayer -> very simple classes, Persistence Ignorance, a.k.a. POCOs
MyInfrastructureLayer -> includes code for repositories, Entity Framework
MyApplicationLayer -> includes ASP.Net MVC controllers
MyServicesLayer -> WCF-related code
MyWebApplication -> ASP.Net MVC (Views, Scripts, etc)
When an event is raised (for example, a group membership has been granted), two things should be done (in two different layers):
Persist data (insert a new group membership record in the DB)
Create a notification for the involved users (UI related)
I'll take a simple example from the last reference I mentioned in the introduction.
The domain layer has the following code:
public void ChangeStatus(OrderStatus status)
{
    // change status
    this.Status = status;
    DomainEvent.Raise(new OrderStatusChanged { OrderId = Id, Status = status });
}
Let's assume the event handler is in MyApplicationLayer (to be able to talk to the Services Layer).
It has the following code:
DomainEvent.Register<OrderStatusChanged>(x => orderStatusChanged = x);
How does the wire-in happen? I guess it is done with StructureMap, but what does this wire-in code look like exactly?
First, your layering isn't exactly right. Corrections:
Application Layer - ASP.NET MVC controllers are normally thought of as forming an adapter between your application layer and HTTP/HTML. Therefore, the controllers aren't themselves part of the application layer. What belongs in application layer are application services.
MyServicesLayer - WCF-related code. WCF implemented services are adapters in the hexagonal architecture referenced by Dennis Traub.
MyWebApplication - ASP.Net MVC (Views, Scripts, etc). Again, this forms an adapter in a hexagonal architecture. MVC controllers belong here as well - effectively they are an implementation detail of this adapter. This layer is very similar to a service layer implemented with WCF.
Next, you describe two things that should happen in response to an event. Persistence is usually achieved by committing a unit of work within a transaction, not as a handler in response to an event. Also, notifications should be made after persistence is complete, or in other words after the transaction is committed. This is best done in an eventually consistent manner, outside of the unit of work that generated the domain event in the first place.
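One way to picture that split: have the unit of work collect the events raised during the transaction and only dispatch them once SaveChanges has succeeded. A simplified sketch; MyDbContext and DomainEvent.Dispatch are illustrative names, not from your code:

using System.Collections.Generic;

// Sketch only: queue domain events while the unit of work is open and
// dispatch them after the transaction has committed, so notification
// handlers never see uncommitted state.
public class UnitOfWork
{
    private readonly MyDbContext _context;                       // hypothetical EF context
    private readonly List<object> _pendingEvents = new List<object>();

    public UnitOfWork(MyDbContext context)
    {
        _context = context;
    }

    public void AddEvent(object domainEvent)
    {
        _pendingEvents.Add(domainEvent);
    }

    public void Commit()
    {
        _context.SaveChanges();              // 1. persist inside the transaction

        foreach (var e in _pendingEvents)    // 2. notify only after a successful commit
        {
            DomainEvent.Dispatch(e);         // hypothetical dispatcher / bus call
        }
        _pendingEvents.Clear();
    }
}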
For specifics on how to implement a domain event pub/sub system take a look here.
My first recommendation: get rid of the notion of layers and make yourself familiar with the concept of a hexagonal architecture, a.k.a. Ports and Adapters.
With this approach it is much easier to understand how the domain model can stay independent of any of the surrounding concerns. Basically that is object-orientation on an architectural level. Layers are procedural.
For your specific problem, you might create a project containing the event handlers that project events into the database. These handlers can have direct access to the database or go through an ORM. You probably won't need any repositories there since the events should contain all information that's needed.
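As for the literal wire-in code asked about above: a common shape (roughly the Udi Dahan domain events pattern) is to let DomainEvent.Raise resolve all handlers for the event type from the container, with Register kept for ad-hoc callbacks such as unit tests. A sketch assuming StructureMap; IHandles<T> and the bootstrapper are illustrative names, not part of your code:

using System;
using System.Collections.Generic;
using StructureMap;

public interface IHandles<T>
{
    void Handle(T domainEvent);
}

public static class DomainEvent
{
    // Set once at application start-up (e.g. in Global.asax / the bootstrapper).
    public static IContainer Container { get; set; }

    [ThreadStatic]
    private static List<Delegate> _callbacks;   // ad-hoc callbacks, mainly for tests

    public static void Register<T>(Action<T> callback)
    {
        if (_callbacks == null) _callbacks = new List<Delegate>();
        _callbacks.Add(callback);
    }

    public static void Raise<T>(T domainEvent)
    {
        if (Container != null)
        {
            // Resolve every handler registered for this event type.
            foreach (var handler in Container.GetAllInstances<IHandles<T>>())
                handler.Handle(domainEvent);
        }

        if (_callbacks != null)
        {
            foreach (var callback in _callbacks)
                ((Action<T>)callback)(domainEvent);
        }
    }
}

// Bootstrapper sketch: scan assemblies so every IHandles<T> implementation
// (e.g. an OrderStatusChanged handler in the application layer) gets picked up.
public static class Bootstrapper
{
    public static void Wire()
    {
        DomainEvent.Container = new Container(cfg =>
            cfg.Scan(scan =>
            {
                scan.AssembliesFromApplicationBaseDirectory();
                scan.ConnectImplementationsToTypesClosing(typeof(IHandles<>));
            }));
    }
}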

WF4 entity status handling, entities batch processing

I have created a simple order manager wf service (state machine) in WF4.
Order (EF entity) properties: Id, IsExport, NumOfProduct, ProductName, Status (waiting, approved, rejected).
State machine states:
1. OrderReceived (validation -> response activity)
2. Waiting (empty)
- Transitions:
update(update order activity) -> waiting state
approve(assign status field, update order and response activities) -> final state
3. Final state.
Correlation key: Order.Id
The implementation raised a few questions.
WF can manage the flow of one order instance; the order flow and the order entity are in a one-to-one relation.
The question is where and how I should implement the listing of entities according to a state filter (e.g. approved orders or waiting orders). The list should be accessible via a WCF service method.
What is the best practice for managing batch data processing? (E.g. multiple order approval; a "foreach" in the client is not the required solution.)
The state of the order is represented both by the "state activity persisted instances" and by the entity's status field in the DB.
What is the best practice for determining the state of the entity: listing the active persisted activity instances in the defined state, or selecting the entities from the DB (via an activity) according to a state filter parameter?
Any help would be appreciated.
Good questions!
Taking your first and third questions, there are several possible approaches to this. All require that you write a custom WCF service to enumerate the required orders. This would probably not be a WF service; it might be a REST or OData service. How would you implement the service?
You could do it entirely by querying your database through EF. This would have no dependency on WF at all, and is probably the easiest way. Your workflow would update the database record on each state change, and the service would only need to read that value.
You could rely on the tracking mechanism provided by WF, and the extensions that Ron Jacobs refers to in his answer to your question. The tracking infrastructure is described here on MSDN. It is possible to use the tracking object in memory to get the state of active workflows. However, this probably won't work well with IIS/WF services, which are automatically persisted and unloaded when dormant. You would be better off using the tracking facilities to write state records to a database. Your custom service would then just query this tracking database.
Unless you want comprehensive information about the state changes and updates that have occurred through your WF service, suggestion number one should suffice.
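If you go with the first suggestion, the listing service is little more than an EF query behind a WCF operation. A rough sketch; OrdersContext, OrderDto and the contract name are placeholders (OrderStatus is the enum from your entity):

using System.Collections.Generic;
using System.Linq;
using System.ServiceModel;

// Simple DTO returned to callers; mirrors a few Order fields.
public class OrderDto
{
    public int Id { get; set; }
    public string ProductName { get; set; }
    public OrderStatus Status { get; set; }
}

[ServiceContract]
public interface IOrderQueryService
{
    [OperationContract]
    IList<OrderDto> GetOrdersByStatus(OrderStatus status);
}

public class OrderQueryService : IOrderQueryService
{
    public IList<OrderDto> GetOrdersByStatus(OrderStatus status)
    {
        // Reads the Status column that the workflow updates on every state
        // change; no WF tracking infrastructure is involved.
        using (var db = new OrdersContext())   // hypothetical EF context over the Order table
        {
            return db.Orders
                     .Where(o => o.Status == status)
                     .Select(o => new OrderDto
                     {
                         Id = o.Id,
                         ProductName = o.ProductName,
                         Status = o.Status
                     })
                     .ToList();
        }
    }
}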
As for your second question, that is a little more complicated. Let's say you write a REST service that lists the orders awaiting approval. You write a Web page that displays those orders, and the user can check the orders he wants to approve. Now, the number of workflows that you need to update is the same as the number of orders he approves.
You could, as you mention, call the Web service multiple times—but for a large number of orders that would be an unnecessary overhead.
What's the alternative? You would need to write a custom service method on your non-WF service that takes an array of order ids. That service would have to call your WF service multiple times to update each one. Since the WF service is being called from another service on the same machine, you can use the .Net Named Pipe binding instead of one of the HTTP bindings so that the overhead is much less.
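In code, that batch method is essentially a loop over the ids, calling the WF service's approve operation once per order over the named pipe endpoint. A sketch with made-up contract and endpoint names:

using System.ServiceModel;

// Hypothetical client-side contract for the WF service's approve operation.
[ServiceContract]
public interface IOrderWorkflowService
{
    [OperationContract]
    void Approve(int orderId);
}

public class OrderBatchService
{
    // Called by the non-WF service with the ids the user ticked in the UI.
    public void ApproveOrders(int[] orderIds)
    {
        var factory = new ChannelFactory<IOrderWorkflowService>(
            new NetNamedPipeBinding(),
            new EndpointAddress("net.pipe://localhost/OrderWorkflowService"));

        var channel = factory.CreateChannel();
        try
        {
            foreach (var id in orderIds)
            {
                // Each call is routed to the right workflow instance by
                // content correlation on Order.Id, as configured in the service.
                channel.Approve(id);
            }
        }
        finally
        {
            ((IClientChannel)channel).Close();
            factory.Close();
        }
    }
}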
It's worth noting that Entity Framework doesn't support batched updates either. You'd need to write a stored procedure or custom SQL if you wanted the database update to be batched too.
Is all of this worth the effort? Probably! Using WCF and the named pipes binding is pretty standard with WF. You'll need to configure Windows Activation Service for named pipes. Also, if you're not already using AppFabric for Windows Server, have a look into it, because it adds some very good management tools for WF services.
I recently published some new samples to show how you can access the current state of the StateMachine and possible transitions. These might help you.
Windows Workflow Foundation (WF4) - Tracking State Machine Workflow Service
Windows Workflow Foundation (WF4) - Tracking State Machine

Biztalk client defined subscription items

I am designing a BizTalk solution which requires client applications to subscribe to and receive only a certain subset of event messages, depending on their user permissions. Subscription will be done through topic- or content-based routing. The client will subscribe once and receive many messages until they choose to unsubscribe.
Client applications will number in the hundreds and subscribed topics could change on a regular basis, so defining an individual send port from BizTalk for each receiver isn't a viable solution.
I have thought that I could build an additional message broker service which holds the individual client subscriptions and distributes messages sent from a BizTalk port.
I have also seen that a recipient list pattern can be built using orchestrations. This appears to me to still follow a request-response pattern, though, and I am after one subscribe message followed by many returned event messages.
My message broker solution seems to me to be doubling up on what BizTalk should be good at, so I imagine I am missing some important functionality somewhere. Has anyone tried such an application before and can give some pointers? Should I be investigating the ESB Toolkit as a solution? I have had a look on the net, but nothing makes it very clear for this type of topic-subscription model.
Thanks,
Phil
Do take a look at the ESB Toolkit. You can use the itinerary functionality that it adds to BizTalk, either with one of the built-in resolvers (e.g., UDDI) or with your own custom resolver. This allows you to route messages based on configuration (stored in Business Rules or elsewhere).
You will find a developer-oriented overview video of the ESB Toolkit on MSDN, which is a decent introduction to the design process and tooling. There are several other helpful videos there as well.
Your specific scenario can be accomplished with a single itinerary, as described here. Use a receive pipeline with the ESB Dispatch Disassembler component and configure multiple resolvers; for each resolver a new message is produced.
There are also two samples to look at:
The Itinerary On-Ramp Sample - builds a set of SOAP headers that contain the itinerary that you create in the test client, loads the specific message file from disk, appends the itinerary headers to the message, and submits it to the ESB through an Itinerary on-ramp for processing.
The Scatter-Gather Sample - Also appends SOAP headers containing the itinerary to the message, which is submitted to the ESB through an on-ramp for processing. A Broker orchestration analyzes the settings for its itinerary step, retrieves a collection of resolvers associated with the itinerary step, and for each of those resolvers resolves the service endpoint. After that, the orchestration activates the proper ServiceDispatcher orchestration instances to dispatch the outbound request messages.
You should also look at "How to: Route a Single Message to Multiple Recipients Using an Itinerary Routing Slip" or perhaps look into creating a custom itinerary message service (documentation is here).
