Can AppDynamics (AppD) calculate the number of messages consumed and published by an application from a JMS queue and topic?

I have an application that is being monitored by AppDynamics. The application consumes from a JMS queue and publishes messages to a JMS topic.
I would like to create a widget that displays the number of messages it is consuming from the queue and the number of messages it is publishing to the topic.
How do I identify the data source for the queue and the topic, and then retrieve the number of messages the application is consuming and publishing?

Message consumption can be instrumented as Business Transactions (see https://docs.appdynamics.com/appd/22.x/latest/en/application-monitoring/configure-instrumentation/transaction-detection-rules/message-queue-entry-points); the BT count is then the number of messages consumed.
Publishing to a topic is represented as exit calls to a backend (assuming the downstream is not instrumented); the backend (a.k.a. Remote Service) call count is then the number of publishes to the topic.
To read the number of messages sitting on a queue directly, you may want to use a Machine Agent extension:
e.g.
WebSphere MQ: https://developer.cisco.com/codeexchange/github/repo/Appdynamics/websphere-mq-monitoring-extension
ActiveMQ: https://developer.cisco.com/codeexchange/github/repo/Appdynamics/activemq-monitoring-extension
RabbitMQ: https://developer.cisco.com/codeexchange/github/repo/Appdynamics/rabbitmq-monitoring-extension
Docs on using extensions can be found here: https://docs.appdynamics.com/appd/22.x/latest/en/infrastructure-visibility/machine-agent/extensions-and-custom-metrics

Related

Facing latency issue in Azure EventHub consumer

How can I avoid latency in EventHub consumer data?
My architecture (data flow): IoT Hub -> Event Hub -> Blob Storage (no deviation from the IoT Hub packet to the Blob Storage JSON packet).
Deviation occurs only on the consumer application side (the listener receives with a delay of 30-50 seconds).
Azure configuration: 4 partitions with a standard S2 tier subscription.
Publisher: 3000 packets per minute.
My question: Blob Storage has the proper data without deviation, so why is the listener receiving with latency? How can I overcome this?
I tried EventProcessorClient with the respective handlers as suggested in the GitHub sample code. It works fine without errors, but with huge latency. I tried EventHubProducerClient as well; still the same latency issue.
I can't speak to how IoT Hub manages data internally or what its expected latency is between IoT data being received and IoT Hub itself publishing to Event Hubs.
With respect to Event Hubs, you should expect to see individual events with varying degrees of end-to-end latency. Event Hubs is optimized for throughput (the number of events that flow through the system) and not for the latency of an individual event (the amount of time it takes for it to flow from publisher to consumer).
What I'd suggest monitoring is the backlog of events available to be read in a partition. If there are ample events already available in the partition and you’re not seeing them flow consistently through the processor as fast as you’re able to process them, that’s something we should look to tune.
Additional Event Hubs context
When an event is published - by IoT Hub or another producer - the operation completes when the service acknowledges receipt of the event. At this point, the service has not yet committed the event to a partition, and it is not available to be read. The time that it takes for an event to become available for reading varies and has no SLA associated with it. Most often it is milliseconds, but it can be several seconds in some scenarios - for example, if a partition is moving between nodes.
Another thing to keep in mind is that networks are inherently unreliable. The Event Hubs consumer types, including EventProcessorClient, are resilient to intermittent failures and will retry or recover, which will sometimes entail creating a new connection, opening a new link, performing authorization, and positioning the reader. This is also the case when scaling up/down and partition ownership is moving around. That process may take a bit of time and varies depending on the environment.
Finally, it's important to note that overall throughput is also limited by the time that it takes for you to process events. For a given partition, your handler is invoked and the processor will wait for it to complete before it sends any more events for that partition. If it takes 30 seconds for your application to process an event, that partition will only see 2 events per minute flow through.
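That last point can be sketched in code. Below is a minimal Java illustration (not the Azure SDK; the pump loop, pool size, and all names are stand-ins I've invented) of handing events off to a worker pool so the per-partition handler returns quickly instead of blocking the pump. Note that this trades away per-partition ordering, so it only fits workloads where ordering does not matter.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch (not the Azure SDK; all names invented): the handler hands
// each event to a worker pool instead of doing slow work inline. Inline work
// caps a partition at 60/T events per minute when each event takes T seconds;
// offloading keeps the pump moving, at the cost of per-partition ordering.
public class HandlerOffload {
    public static int pumpPartition(int events, int workerCount) throws InterruptedException {
        ExecutorService workers = Executors.newFixedThreadPool(workerCount);
        AtomicInteger processed = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(events);
        for (int i = 0; i < events; i++) {
            workers.submit(() -> {            // the "handler": enqueue and return at once
                // slow per-event work would run here, off the pump thread
                processed.incrementAndGet();
                done.countDown();
            });
        }
        done.await();                          // wait until every event was handled
        workers.shutdown();
        return processed.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(pumpPartition(100, 4));  // 100
    }
}
```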

Google Cloud Pub/Sub Publisher Lifecycle

I'm running a Cloud Pub/Sub PublisherClient instance as a Singleton in an ASP.NET web application (.NET Standard 2). Does this retain a persistent HTTPS connection to the specified Cloud Pub/Sub Topic and should I call the ShutdownAsync method explicitly, or just let the connection be severed when the app pool recycles?
I'm running this in conjunction with Quartz.NET, publishing messages to Pub/Sub in relatively small batches every 30 seconds. This seems to introduce server affinity in a 3-node Azure load balancer cluster, where the majority of traffic is routed to a single node after running for 1+ hours. Not 100% sure about best practices here.
Using Pub/Sub C# NuGet package V1 1.0 and Quartz NuGet 3.0.7
I assume you’re using this PublisherClient. Per the sample documentation, the PublisherClient instance should be shut down after use. This ensures that locally queued messages get sent. See also the ShutdownAsync documentation.
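To see why the explicit shutdown matters, here is a generic sketch in Java (not the Google client; the BatchingPublisher class and its methods are invented for illustration) of a publisher that queues messages locally. Anything still sitting in the buffer is only sent when shutdown flushes it; killing the process instead would drop those messages.

```java
import java.util.ArrayList;
import java.util.List;

// Generic sketch (not the Google Pub/Sub client; all names invented):
// a publisher that batches messages locally. Messages sit in the buffer
// until a flush, which is why an explicit shutdown that drains the buffer
// matters for messages published shortly before the app stops.
public class BatchingPublisher {
    private final List<String> buffer = new ArrayList<>();
    private final List<String> sent = new ArrayList<>();
    private final int batchSize;

    public BatchingPublisher(int batchSize) { this.batchSize = batchSize; }

    public void publish(String msg) {
        buffer.add(msg);
        if (buffer.size() >= batchSize) flush();  // full batches are sent eagerly
    }

    private void flush() {
        sent.addAll(buffer);   // stand-in for the actual network send
        buffer.clear();
    }

    // What a shutdown conceptually does: drain the local queue, then stop.
    // Returns the total number of messages delivered.
    public int shutdown() {
        flush();
        return sent.size();
    }

    public static void main(String[] args) {
        BatchingPublisher p = new BatchingPublisher(10);
        for (int i = 0; i < 7; i++) p.publish("msg-" + i);  // partial batch stays buffered
        System.out.println("delivered " + p.shutdown() + " messages");  // delivered 7 messages
    }
}
```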

Managing multiple Azure Service Bus Queues concurrently

I'm using an Azure environment and developing in .NET
I am running a web app (ClientApp) that takes client data to perform a series of calculations. The calculations are performance intensive, so they are running on a separate web app (CalcApp).
Currently, the ClientApp sends the calculation request to the CalcApp. The requests from every client are put into a common queue and run one at a time, FIFO. My goal is to create separate queues for each client and run several calculations concurrently.
I am thinking of using the Azure Service Bus queues to accomplish this. On the ClientApp, the service bus would check for an existing queue for that client and create one if needed. On the CalcApp, the app would periodically check for existing queues. If it finds a new queue, then it would create a new QueueClient that uses OnMessageAsync() and RunCalculationsAsync() as the callback function.
Is this feasible or even a good idea?
I would consider using multiple consumers instead, perhaps with a topic denoting the "client" if you need to differentiate the type of processing based on which client originated it. Each client can add an entry into the queue, and the consumers "fight" over the messages. There is no chance of the same message being processed twice if you follow this approach.
I'm not sure having multiple queues is necessary.
Here is more information on the Competing Consumers pattern.
https://msdn.microsoft.com/en-us/library/dn568101.aspx
You could also build one consumer and spawn multiple threads. In this model, you would have one queue and one consumer, but still be able to process more than one calculation at a time. Ultimately, though, competing consumers is far more scalable, and you can combine both strategies.
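As a sketch of the competing-consumers idea with in-process stand-ins (a Java BlockingQueue playing the role of the Service Bus queue; class and method names are invented): several workers race to take messages from one shared queue, and each message is handed to exactly one worker.

```java
import java.util.concurrent.*;

// Sketch of the Competing Consumers pattern with in-process stand-ins:
// one shared queue, several workers racing over it. BlockingQueue hands
// each element to exactly one thread, which is the property the pattern
// relies on to avoid double-processing.
public class CompetingConsumers {
    public static int consumeAll(int messages, int consumers) throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        for (int i = 0; i < messages; i++) queue.put(i);

        ExecutorService pool = Executors.newFixedThreadPool(consumers);
        ConcurrentHashMap<Integer, Boolean> seen = new ConcurrentHashMap<>();
        CountDownLatch done = new CountDownLatch(messages);
        for (int c = 0; c < consumers; c++) {
            pool.submit(() -> {
                Integer msg;
                while ((msg = queue.poll()) != null) {  // each take wins one message
                    seen.put(msg, Boolean.TRUE);        // "process" the message
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
        return seen.size();   // distinct messages processed
    }

    public static void main(String[] args) throws Exception {
        System.out.println(consumeAll(1000, 4));  // 1000
    }
}
```

With a real broker the same shape applies: each ClientApp enqueues work items, and several CalcApp instances pull from the one queue.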

using Message oriented middleware for communications within single web application realm

I wanted to check the viability of a design that uses Message Oriented Middleware (MOM) technology such as JMS, ActiveMQ, or RabbitMQ for asynchronous processing within a single web application, i.e. the publisher and the subscriber to the MOM server are contained in the same web application.
The rationale behind this design is to offload some of the heavy-duty processing as a background asynchronous operation. The publisher in this case is a server-side real-time web service method that needs to respond instantaneously (in under 1 second) to the calling web service client; it emits a message on a MOM topic. The subscriber is contained in the same web application as the publisher and uses the message to asynchronously perform the more complex, more time-consuming (5-7 seconds) functionality.
With this design we can avoid having to spawn new threads within the application server container to handle the heavy-duty processing.
Is using a MOM server overkill here, where the message publisher and message subscriber share the same web server address space? From what I have read, MOM technology is mainly used for inter-application communication, so I wanted to check whether it is also reasonable for intra-application communication.
Let me know your thoughts.
Thanks,
Perhaps you will not think it is a good example, but in the JEE world using JMS for intra-application communication is quite common. Spawning new threads is considered bad practice, and message-driven beans make consuming messages relatively easy, with transaction support included. A compliant application server like GlassFish has JMS on board, so producing and consuming messages does not involve socket communication, as it would with a standalone ActiveMQ. There may still be reasons to run a standalone JMS broker, e.g. if there is a cluster of consumers and you want the active instances to take over work from failed ones... but then the standalone JMS server becomes a single point of failure, so you want a cluster of those, and so on.
One significant feature of JMS is (optional) message persistence. You may be concerned that the long-running task fails for some reason and the client's request will be lost. But persistent messages are much more expensive as they cause disk IO.
From what you've described, of the usual features of MOM (asynchronous processing, guaranteed delivery, message ordering) you only need asynchronous processing. So if guarantees are not important, I would use some kind of thread pool.
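A minimal Java sketch of that thread-pool alternative (all names, the pool size, and the "heavy" work are placeholders I've invented): the fast web-service method submits the slow job and returns immediately, and the pool does the 5-7 second work in the background. There is no broker and no persistence, so a crash mid-task loses the work, which is exactly the guarantee you said you can live without.

```java
import java.util.concurrent.*;

// Thread-pool alternative to an in-process MOM broker (names invented):
// the request handler submits the slow job and returns at once; the pool
// runs it in the background. No persistence, so no delivery guarantee
// if the process dies mid-task.
public class AsyncOffload {
    private static final ExecutorService POOL = Executors.newFixedThreadPool(4);

    // Stand-in for the fast web-service method: enqueue and return immediately.
    public static Future<String> handleRequest(String payload) {
        return POOL.submit(() -> heavyProcessing(payload));
    }

    // Stand-in for the time-consuming (5-7s) background functionality.
    private static String heavyProcessing(String payload) {
        return "processed:" + payload;
    }

    public static void shutdownPool() { POOL.shutdown(); }

    public static void main(String[] args) throws Exception {
        Future<String> f = handleRequest("order-42");  // caller is not blocked
        System.out.println(f.get());                   // processed:order-42
        shutdownPool();
    }
}
```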

How to make a Windows Service listen for additional request while it is already processing the current request?

I need to build a Windows service in VB.NET under Visual Studio 2003. The service should read a flat file (a huge file of about a million records) from a local folder and upload it to the corresponding database table, inside a database transaction so the work can be rolled back. While transferring data to the table, the service should also listen for additional client requests: if a client requests a cancel operation mid-transfer, the service should roll back the transaction and give feedback to the client. The service also writes continuously to two log files recording status and error records.
My client is ASPX page (A website).
Can somebody explain how to organize and achieve this functionality in a Windows service (processing and listening for additional client requests simultaneously, e.g. a cancellation request)?
Could you also suggest the ideal way of achieving this (for example, whether it is best implemented as a web service, a Windows service, a remote object, or some other way)?
Thank you all for your help in advance!
You can architect your service to spawn "worker threads" that do the heavy lifting, while it simply listens for additional requests. Because future calls are likely to have to deal with the current worker, this may work better than, say, architecting it as a web service using IIS.
The way I would set it up is: service main thread is listening on a port or pipe for a communication. When it gets a call to process data, it spawns a worker thread, giving it some "status token" (could be as simple as a reference to a boolean variable) which it will check at regular intervals to make sure it should still be running. Thread kicks off, service goes back to listening (network classes maintain a buffer of received data so calls will only fail if they "time out").
If the service receives a call to abort, it sets the token to a "cancel" value. The worker thread reads this value on its next poll, gets the message, rolls back the transaction, and exits.
This can be set up to have multiple workers processing multiple files at once, belonging to callers keyed by their IP or some unique "session" identifier you pass back and forth.
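The status-token idea can be sketched like this (Java for brevity rather than VB.NET; the AtomicBoolean plays the role of the token, and the loop is a stand-in for inserting file records inside a transaction):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the "status token" (names invented; Java rather than VB.NET):
// the worker polls a shared flag at regular intervals; the listener thread
// flips it to request a rollback. The loop stands in for inserting file
// records inside a database transaction.
public class CancellableWorker {
    public static String processRecords(int total, AtomicBoolean cancel) {
        int inserted = 0;
        for (int i = 0; i < total; i++) {
            if (cancel.get()) {   // poll the token before each record
                // token says stop: undo the transaction and report back
                return "rolled back (" + inserted + " records discarded)";
            }
            inserted++;           // stand-in for inserting one record
        }
        return "committed " + inserted + " records";
    }

    public static void main(String[] args) {
        AtomicBoolean token = new AtomicBoolean(false);
        // The listener thread would do this when a cancel request arrives:
        token.set(true);
        System.out.println(processRecords(1_000_000, token));  // rolled back (0 records discarded)
    }
}
```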
You can design your service the way FTP does: FTP uses two ports, one for commands and another for data transfer.
Consider two classes, one for command parsing and another for data transfer, each running on its own thread.
Use a communication channel (such as a dedicated queue) between the threads. If you move to .NET 4.0, you can use System.Collections.Concurrent and more threading features such as CancellationTokens.
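A small Java sketch of that queue-based channel between the command thread and the data-transfer thread (the "CANCEL" command and all names are invented for illustration; a BlockingQueue stands in for the .NET concurrent collections):

```java
import java.util.concurrent.*;

// Sketch of a queue-based channel between a command thread and a
// data-transfer thread (names and the "CANCEL" command are invented).
// The transfer loop drains the command queue between chunks.
public class CommandChannel {
    public static String transfer(int chunks, BlockingQueue<String> commands) {
        for (int sent = 0; sent < chunks; sent++) {
            String cmd = commands.poll();           // non-blocking check between chunks
            if ("CANCEL".equals(cmd)) {
                return "aborted after " + sent + " chunks";
            }
            // ... send one chunk of data here ...
        }
        return "transferred " + chunks + " chunks";
    }

    public static void main(String[] args) {
        BlockingQueue<String> commands = new LinkedBlockingQueue<>();
        commands.offer("CANCEL");                   // the command thread enqueues this
        System.out.println(transfer(100, commands)); // aborted after 0 chunks
    }
}
```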
WCF has advantages over a plain web service, but comparing it to a Windows service needs more details of your project. In general, WCF is easier to implement than a Windows service.
