How do method invocation data collectors work? - AppDynamics

I'm getting started with AppDynamics; it provides different ways to collect data from the code.
One of them is method invocation data collectors, and I want to understand how this works under the hood. How can AppDynamics gather this data without adding any code or API calls, when just specifying the class name and method in the AppDynamics UI is enough?
How do they collect the stack trace details? Do they patch low-level capabilities of the language itself?
Thanks

The implementation varies per language, but generally the agent receives configuration from the controller that tells it what to instrument.
In Java, instrumentation is done through the Java Instrumentation API, with the agent injected via the -javaagent JVM flag. See https://medium.com/javarevisited/what-is-java-instrumentation-why-is-it-needed-1f9aa467433 for an explanation of the API.
In .NET, instrumentation is done through the CLR profiling interfaces, with the agent loaded from a DLL according to the profiler environment variables on the host. See https://learn.microsoft.com/en-us/dotnet/framework/unmanaged-api/profiling/setting-up-a-profiling-environment for the related MS docs.
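To make the Java side concrete, here is a minimal -javaagent skeleton. This is a sketch only: the target class name is hypothetical, and a real APM agent would return rewritten bytecode (produced with a library such as ASM or Javassist) instead of null.

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Packaged in a jar whose manifest declares Premain-Class: MinimalAgent,
// then loaded with: java -javaagent:agent.jar -jar app.jar
public class MinimalAgent {
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // Called for every class the JVM loads. An APM agent matches the
                // classes/methods configured in the UI and rewrites their bytecode
                // to capture arguments, return values and timings.
                if ("com/example/CheckoutService".equals(className)) { // hypothetical target
                    System.out.println("would instrument " + className);
                    // return the modified class bytes here
                }
                return null; // null = leave the class unchanged
            }
        });
    }
}

This is why no source changes are needed: the bytecode of the configured method is rewritten at class-load time, which is also where stack trace and timing details are captured.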

Related

PACT - Handling provider service state and running actual provider with mocked or actual database

I am new to PACT and trying to use pact-net for contract testing for a .NET microservice. I understand the concept of a consumer test, which generates a pact file.
There is the concept of a provider state middleware which is responsible for making sure that the provider's state matches the Given() condition in the generated pact.
I am a bit confused about the following and how to achieve it:
The provider tests are run against the actual service. So we start the provider service before tests are run. My provider service interacts with a database to store and retrieve records. PACT also mentions that all the dependencies of a service should be stubbed.
So do we run the actual provider API against the actual DB?
If we run the API against the actual DB, how do we inject data into the DB? Should we be using the provider API's own endpoints to add the Given() data?
If the above is not the correct approach, then what is?
All the basic blog articles I have come across do not explain this and usually have examples with no provider states or states that are just some text files on the file system.
Help appreciated.
I'm going to add to Matt's comment; you have three options:
1) Run your provider tests against a connected environment, but you will have to do some cleanup manually afterwards and make sure your data is always available in your DB and/or the external APIs are always up and running. Simple to write, but can be very hard to maintain.
2) Mock your API calls but call the real database.
3) Mock all your external dependencies: both the API and the DB calls.
For 2) or 3) you will have to add test routes and inject the provider state middleware into your provider test fixture. Then you can configure provider states that generate in-memory data (solution 3) or run some data initialization against the DB (solution 2).
You can find an example here: https://github.com/pact-foundation/pact-net/tree/master/Samples/EventApi/Provider.Api.Web.Tests
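That sample is pact-net; for the same idea on the JVM, here is a rough pact-jvm sketch (an analogue, not the pact-net API). The provider name, pact folder, port, state description, and TestDatabase helper are all hypothetical, and package names vary across pact-jvm versions (this follows the 3.x layout).

import au.com.dius.pact.provider.junit.PactRunner;
import au.com.dius.pact.provider.junit.Provider;
import au.com.dius.pact.provider.junit.State;
import au.com.dius.pact.provider.junit.loader.PactFolder;
import au.com.dius.pact.provider.junit.target.HttpTarget;
import au.com.dius.pact.provider.junit.target.Target;
import au.com.dius.pact.provider.junit.target.TestTarget;
import org.junit.runner.RunWith;

@RunWith(PactRunner.class)
@Provider("OrderApi")   // hypothetical provider name
@PactFolder("pacts")    // consumer-generated pact files
public class OrderApiPactTest {

    // The locally running provider that the interactions are replayed against
    @TestTarget
    public final Target target = new HttpTarget(8080);

    // Invoked before any interaction whose pact declares Given("order 42 exists"):
    // seed the test database (in-memory, or a docker container) accordingly.
    @State("order 42 exists")
    public void order42Exists() {
        TestDatabase.insertOrder(42); // hypothetical test helper
    }
}

The @State method plays the same role as the pact-net provider state middleware: it puts the system into the state the pact's Given() clause expects before the interaction is replayed.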
The provider tests are run against the actual service
Do you mean against a live environment, or against the actual service running locally next to the unit test? (The former is not recommended, because of (2) above.)
This is one of the exceptions to that rule. You can choose to use a real DB or an in-memory one - whatever is most convenient. It's common to use docker and tools like that for testing.
In your case, I'd create a specific test-only set of routes that respond to the provider state handler endpoints and also have access to the repository code, so they can manipulate the state of the system.

Replace XCC calls with REST calls in MarkLogic

In an application, .NET XCC is being used to communicate with the MarkLogic modules database to execute modules, functions, ad hoc queries, etc.
I want to replace these XCC calls with REST calls so that we can run the application on MarkLogic 9, as .NET XCC has been deprecated in MarkLogic 9.
I have tried the built-in REST API in MarkLogic. It only allows executing modules that already exist in the modules database.
Is there any online resource or anything else that could help us?
Any help would be appreciated.
Thanks,
ArvindKr
There is /v1/invoke to invoke modules in the modules database attached to the REST app-server you are addressing, but there is also /v1/eval, which allows running ad hoc queries.
HTH!
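As a rough sketch, here is an ad hoc query through /v1/eval from Java. The host, port, and credentials are assumptions; also note that MarkLogic app servers default to digest authentication, which java.net.http does not negotiate, so this assumes basic auth is enabled on the app server.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class MarkLogicEvalExample {
    public static void main(String[] args) throws Exception {
        // Ad hoc XQuery to evaluate on the server (the REST analogue of an XCC AdhocQuery)
        String xquery = "xdmp:database-name(xdmp:database())";
        String form = "xquery=" + URLEncoder.encode(xquery, StandardCharsets.UTF_8);

        // Assumed credentials; the app server must allow basic authentication
        String credentials = Base64.getEncoder()
                .encodeToString("rest-user:password".getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8000/v1/eval")) // assumed host/port
                .header("Content-Type", "application/x-www-form-urlencoded")
                .header("Accept", "multipart/mixed") // results come back as multipart parts
                .header("Authorization", "Basic " + credentials)
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
        // /v1/invoke works the same way, with module=/path/in/modules-db.xqy instead of xquery=
    }
}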
If you're going to replace XCC.NET with RESTful calls, try out XQRS; it allows you to build services in XQuery in a manner similar to JAX-RS for Java.
I only consider the following for cases such as yours, where compatibility with legacy code is useful or required and where other options are exhausted. This is not an elegant approach, but it may be useful in special cases.
The XDBC protocol (which is what XCC uses) is supported natively on exactly the same app servers and ports on which the REST API is exposed. You can see this on port 8000 in a default install. The server literally cannot tell a 'REST application' and an 'XCC application' apart except by the URI requested (and in some cases additional headers like cookies). REST and XDBC are both HTTP based, and at the HTTP layer they are so similar that they can share the same ports and configurations.
XDBC is 'passed through' the REST processing via the XML rewriter. XDBC uses /eval and /invoke while REST uses /v1/eval and /v1/invoke. If you look at the default rewriter.xml for port 8000 you can see how the routing is made. While the XDBC protocol is not formally published, it's not difficult to 'reverse engineer' by looking at the XCC code (public Java source) and the rewriter. For example, it's not difficult to construct the URL and payload data for a basic eval or invoke call. You should be able to replicate existing XCC.NET client behaviour exactly by using the /eval and /invoke endpoints (look for the xdbc attribute set in rewriter.xml; it causes the request handling to use pure XDBC protocol and behaviour).
Another alternative, if you cannot solve the external-variables problem, is to write new 'REST-friendly' APIs that xdmp:invoke() the legacy APIs, passing in the appropriate namespaces. One option is to put the legacy code in an entirely separate modules DB and then replicate the module URIs exactly with the new code. If you don't need to maintain co-existing versions, you can modify the old code to remove the namespaces from the parameters or assign local variable aliases.

DiagnosticSource - DiagnosticListener - For frameworks only?

There are a few new classes in .NET Core for tracing and distributed tracing. See the markdown docs in here:
https://github.com/dotnet/corefx/tree/master/src/System.Diagnostics.DiagnosticSource/src
As an application developer, should we be instrumenting events in our code, such as sales or inventory depletion, using DiagnosticListener instances, and then either subscribe and route messages to some metrics store, or allow tools like Application Insights to automatically subscribe and push these events to the AI cloud?
OR
Should we create our own metrics collecting abstraction and inject/flow it down the stack "as per normal" and pretend I never saw DiagnosticListener?
I have a similar need to publish "health events" to Service Fabric which I could also solve (abstract) using DiagnosticListener instances sprinkled around.
DiagnosticListener is intended to decouple a library/app from the tracing system: any library can use DiagnosticSource to notify any consumer about interesting operations.
A tracing system can dynamically subscribe to such events and get extensive information about the operation.
If you develop an application and use a tracing system that supports DiagnosticListener, e.g. Application Insights, you may either use DiagnosticListener to decouple your code from the tracing system, or use its API directly. The latter is more efficient, as there is no extra adapter converting your DiagnosticSource events into AppInsights (or other tracing system) events. You can also fine-tune these events more easily.
The former is better if you actually want this layer of abstraction.
You can configure Application Insights to use any DiagnosticListener (by specifying includedDiagnosticSourceActivities).
If you write a library and want to rely on something available on the platform so that any app can use it without bringing new extra dependencies - DiagnosticListener is your best choice.
Also consider that tracing and metrics collection are different: tracing is much heavier and does not assume any aggregation. If you just want custom metrics/events without in-process/out-of-process correlation, I'd recommend using the tracing system's APIs directly.

PACT: How to guard against consumer generating incorrect contracts

We have two microservices, Provider and Consumer, both built independently. The Consumer microservice makes a mistake in how it consumes the Provider service (for whatever reason), and as a result an incorrect pact is published to the Pact Broker.
The Consumer service build succeeds (and can go all the way to release!), but the next Provider service build will fail for the wrong reason. So we end up with a broken Provider build and a broken release of the Consumer.
What is the best practice to guard against situations like this?
I was hoping that the Pact Broker could trigger the Provider tests automatically when contracts are published and notify Consumers if they fail, but that doesn't seem to be the case.
Thanks!
This is the nature of consumer-driven contracts - the consumer gets a significant say in the API!
As a general rule, if the contract doesn't change, there is no need to run the Provider build, albeit there is currently no easy way to know this in the Broker (see feature request https://github.com/bethesque/pact_broker/issues/48).
As for solutions you could use one or more of the below strategies.
Effective use of code branches
It is of course very important that new assumptions on the contract be validated by the Provider before the Consumer can be safely released. Have branches tested against the Provider before you merge into master.
But most importantly - you must be collaborating closely with the Provider team!
Use source control to detect a modified contract:
If you also checked the master pact files into source control, your CI build could act conditionally: if the contract has changed, you must wait for a green provider build; if not, you can safely deploy!
Store in separate repository
If you really want the provider to maintain control, you could store contracts in an intermediate repository or file location managed by the provider. I'd recommend this is a last resort as it negates much of the collaboration pact intends to facilitate.
Use Pact Broker Webhooks:
I was hoping that Pact Broker can trigger the Provider tests automatically when contracts are published and notify Consumers if they fail, but it doesn't seem to be the case.
Yes, this is possible using webhooks on the Pact Broker. You could trigger a Provider build as soon as a new contract is published to the broker.
You could envisage this step working with options 1 and 2.
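For illustration, creating such a webhook through the Broker's HTTP API might look like the sketch below. This is an assumption-laden example: the broker URL, service names, and CI trigger URL are all hypothetical, and the endpoint/event name should be checked against the webhook docs for your Broker version.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateBrokerWebhook {
    public static void main(String[] args) throws Exception {
        // When a changed contract is published, ask the broker to POST to the
        // provider's CI trigger URL (hypothetical Jenkins-style endpoint here).
        String webhook = """
                {
                  "events": [ { "name": "contract_content_changed" } ],
                  "request": {
                    "method": "POST",
                    "url": "https://ci.example.org/job/provider-verify/build"
                  }
                }""";

        // Assumed broker location and service names
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9292/webhooks/provider/Provider/consumer/Consumer"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(webhook))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}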
See Using Pact where the Consumer team is different from the Provider team in our FAQ for more on this use case.
You're spot on; that is one of the things currently lacking in the Pact workflow, and it's something I've been meaning to work towards once a few other things align.
That being said, in the meantime this isn't solving your current problem, so I'm going to suggest a potential workaround in your process. Instead of running the consumer tests, having them pass, and then releasing straight away, you could have the tests run on the consumer, then wait for the provider tests to come back green before releasing the consumer/provider together. Another way would be to version your provider/consumer interactions (API versioning) so that you can release the consumer beforehand, but it isn't "turned on" until the correct version of the provider is released.
None of these solutions are great and I wholeheartedly agree. This is something that I'm quite passionate about and will be working on soon to fix the developer experience with pact broker and releasing the consumer/provider in a better fashion.
Any and all comments are welcome. Cheers.
I think the problem might be caused by the fact that contracts are generated on the consumer side. It means that consumers can modify those contracts however they want, but in the end the producer's build will suffer due to incorrect contracts generated by consumers.
Is there any way for contracts to be defined by the producer? I think the producer is responsible for maintaining its own contracts. For instance, in the case of Spring Cloud Contract it is recommended to have contracts defined in the producer's sources (e.g. in the same git repo as the producer source code) or in a separate SCM repo that can be managed by producer and consumer together.

Is there any JMX - REST bridge available?

Hi, I would like to monitor a Java application using the browser while utilising the existing JMX infrastructure.
I know that JMX provides an HTTP interface, but I think it provides a standard web GUI, and it's not possible to mash up its functionality with an existing system.
Are you aware of any REST interface for JMX?
My research on Google currently shows that there is one project which does something similar. Is this the only option?
Jolokia is a new (at this time) JMX agent you can install in your JVM; it exposes the MBeanServer over HTTP in JSON format.
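For a feel of the protocol, reading an MBean attribute is a single HTTP GET. A minimal sketch, assuming the Jolokia JVM agent is attached and listening on its default port 8778:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class JolokiaReadExample {
    public static void main(String[] args) throws Exception {
        // GET /jolokia/read/<mbean>/<attribute> returns a JSON envelope with
        // the attribute value under "value".
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(
                        "http://localhost:8778/jolokia/read/java.lang:type=Memory/HeapMemoryUsage"))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // e.g. {"value":{"used":12345678,"max":...},"status":200,...}
        System.out.println(response.body());
    }
}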
Tomcat provides a JMX Proxy Servlet in its Manager Application. I don't think it's exactly REST, but it's stateless and is built from simple HTTP requests, so it should be close enough.
For posterity, I've recently added a little web server to my SimpleJMX package. It exposes beans from the platform MBeanServer over HTTP via Jetty, if Jetty is on the classpath. There are also text versions of all pages, which makes them easy to scrape.
// create a new JMX server listening on a specific port
JmxServer jmxServer = new JmxServer(8000);
jmxServer.start();
// register any beans to jmx as necessary
jmxServer.register(someObj);
// create a web server publisher listening on a specific port
JmxWebServer jmxWebServer = new JmxWebServer(8080);
jmxWebServer.start();
There's a little test program which shows it in operation. Accessing java.lang:type=Memory from a browser, for example, produces very basic HTML output.
You might want to have a look at jmx4perl. It comes with an agent servlet which proxies REST requests to local JMX calls and returns a JSON structure with the answers. It supports read, write, exec, list (list of MBeans) and search operations, and knows how to dive into complex data structures via an XPath-like expression. Look at the protocol description for more details.
The forthcoming release can deal with bulk (i.e. multiple at once) requests as well, and adds the possibility to POST a JSON request as an alternative to a plain REST GET request.
One of the next releases will support a proxy mode, so that no agent servlet needs to be deployed on the target platform, only on an intermediate proxy server.
MX4J is another alternative. Quoting from its home page:
MX4J is a project to build an Open Source implementation of the Java(TM) Management Extensions (JMX) and of the JMX Remote API (JSR 160) specifications, and to build tools relating to JMX.
