Alerting when heartbeat not received - azure-application-insights

I've got an Application Insights log being written to by a Windows Service. Is it possible to set an alert for inactivity, for instance "if it has not been written to in the last 15 minutes, raise an alert"?

I'm afraid there isn't such an option out of the box.
What I would recommend is firing a custom "keep alive" metric with a value of 1 every few minutes, and defining a custom alert (see here) on it that triggers if the value drops below some threshold.
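As a rough illustration, a minimal sketch of that idea for a Windows Service might look like the following; the metric name "KeepAlive" and the 5-minute interval are illustrative assumptions, not anything prescribed by the SDK:

using System;
using System.Timers;
using Microsoft.ApplicationInsights;

public class KeepAlivePublisher
{
    private readonly TelemetryClient _telemetry = new TelemetryClient();
    private readonly Timer _timer;

    public KeepAlivePublisher()
    {
        // Emit the keep-alive metric every 5 minutes (interval is illustrative).
        _timer = new Timer(TimeSpan.FromMinutes(5).TotalMilliseconds) { AutoReset = true };
        _timer.Elapsed += (sender, args) =>
        {
            _telemetry.TrackMetric("KeepAlive", 1);  // value of 1, as suggested above
            _telemetry.Flush();                      // don't let it sit in the send buffer
        };
    }

    public void Start() => _timer.Start();
    public void Stop() => _timer.Stop();
}

You would then define a metric alert on KeepAlive that fires when the number of samples over, say, a 15-minute window falls below the expected count.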

You might be interested in Application Insights Heartbeat metrics.
Q&A: https://github.com/microsoft/ApplicationInsights-Home/blob/master/heartbeat/heartbeat-q-and-a.md
Spec: https://github.com/microsoft/ApplicationInsights-Home/blob/master/heartbeat/heartbeat-specification.md

Related

Application insights | Sometimes End-to-end transaction details do not show all telemetry

I have a .NET Core app deployed on Azure with Application Insights enabled.
Sometimes the Azure Application Insights end-to-end transaction details do not display all telemetry.
Here it only shows the error and not the request, or perhaps the request was logged but the two are not displayed together (it is difficult to tell because many people use the app).
It should look like this:
Sometimes the request is logged but there is no error log.
What could be the reason for this? Do I need to look into a specific Application Insights setting or feature?
Edit:
As suggested here, I tried disabling the sampling feature, but it still does not work. Here is an open question about this as well.
This usually happens due to sampling. By default, adaptive sampling is enabled in ApplicationInsights.config, which basically means that only a certain percentage of each telemetry item type (Event, Request, Dependency, Exception, etc.) is sent to Application Insights. In your example, one part of the end-to-end transaction was probably sent to the server while another part was sampled out. If you want, you can turn off sampling for specific types, or completely remove the
AdaptiveSamplingTelemetryProcessor
from the config, which disables sampling entirely. Bear in mind that this leads to higher ingestion traffic and higher costs.
You can also configure sampling in the code itself, if you prefer.
Please find here a good overview of how sampling works and how it can be configured.
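Since the question mentions a .NET Core app, here is a minimal sketch of the code-based route, assuming the Microsoft.ApplicationInsights.AspNetCore package; turning adaptive sampling off this way carries the same ingestion-cost caveat as removing the processor from the config:

using Microsoft.ApplicationInsights.AspNetCore.Extensions;
using Microsoft.Extensions.DependencyInjection;

public static class TelemetrySetup
{
    // Registers Application Insights with adaptive sampling turned off,
    // so no telemetry items are sampled out (at the cost of higher ingestion volume).
    public static void ConfigureTelemetry(IServiceCollection services)
    {
        var options = new ApplicationInsightsServiceOptions
        {
            EnableAdaptiveSampling = false
        };
        services.AddApplicationInsightsTelemetry(options);
    }
}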
This may be related to the following:
When using SDK 2.x, you have to track all events yourself and send the telemetry to Application Insights.
When using auto-instrumentation with the 3.x agent, the agent automatically collects traffic, logs, etc., and you have to pay attention to the sampling configuration in applicationinsights.json, where you can filter events.
If you are using Java, these are the supported logging libraries:
- java.util.logging
- Log4j, which includes MDC properties
- SLF4J/Logback, which includes MDC properties

Azure Service Bus - Renew message lock automatically when using ServiceBusReceiver

Having spent long hours trying to find documentation and help around this without success, I have decided to reach out to the community.
I would like to read messages from a topic subscription. Using the message, a UI is populated for a human to work on it. Processing each message takes approximately 15 minutes, and each client can work on only one message at a time. After processing a message, the client can either stop processing messages or request a new one.
With the max lock time set at 5 minutes on the subscription, I need to be able to automatically renew my lock for up to 15 minutes.
The first approach I attempted was to use CreateReceiver to fetch the message, read it, and complete the message when done. The issue with this is that I have not been able to figure out how to automatically renew the lock for 15 minutes. I see the RenewLockAsync function, but I would like this to be automatic rather than having to run a background timer to keep track of the expiring lock.
The second approach I attempted was to use ServiceBusClient.CreateProcessor() with options to set the AutoLockRenewal timespan. The issue here is that the processor itself runs based on events in the background. Since I need to populate a UI, I need to be able to stop the processor after the message has been read, return from the callback, and, once the human interaction is done, complete the message. I have been unable to find a way to do this.
What would be a good approach to achieve this? The subscription acts as a work queue that multiple people pull items from and work on individually. Any help in proposing an approach is appreciated.
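For reference, a rough sketch of the first approach described above, using Azure.Messaging.ServiceBus with a hand-rolled renewal loop (the part the question would prefer to avoid writing); the topic and subscription names are illustrative placeholders, not anything from the original post:

using System;
using System.Threading;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class WorkItemSession
{
    // Receives one message, keeps its lock renewed in the background while a human
    // works on it in the UI, then completes it.
    public static async Task ProcessOneAsync(
        ServiceBusClient client,
        Func<ServiceBusReceivedMessage, Task> humanWork)
    {
        await using ServiceBusReceiver receiver =
            client.CreateReceiver("work-topic", "work-subscription");

        ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
        if (message == null)
            return;

        using var cts = new CancellationTokenSource();

        // Renew the lock every 4 minutes, staying under the 5-minute lock duration.
        Task renewal = Task.Run(async () =>
        {
            try
            {
                while (!cts.IsCancellationRequested)
                {
                    await Task.Delay(TimeSpan.FromMinutes(4), cts.Token);
                    await receiver.RenewMessageLockAsync(message, cts.Token);
                }
            }
            catch (OperationCanceledException)
            {
                // Expected once the work is finished and renewal is cancelled.
            }
        });

        try
        {
            await humanWork(message);                      // ~15 minutes of UI-driven work
            await receiver.CompleteMessageAsync(message);  // remove it from the subscription
        }
        finally
        {
            cts.Cancel();
            await renewal;
        }
    }
}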

Application Insights removing telemetry after it has been logged

I've had Application Insights set up on my ASP.NET project for a couple months with no issues. I use Custom Events for logging certain events.
Recently, I tried to add a Custom Event after a user has authenticated in order to track the login behavior. My custom event DOES get logged in the Application Insights debug session. I know this because I can see it in the telemetry when paused on a breakpoint just after the event.
However, when I continue running the application, my custom event no longer shows up in the telemetry. It just disappears.
I cannot understand what the issue is. Does anyone familiar have any (application) insights? I couldn't help myself ;)
There are some things to check (for the second item, see the sketch at the end of this answer):
Are you logging to one resource (iKey) and searching on another? A lot of people send data to one resource in dev/debug and a different resource in release/prod environments, so make sure you're sending to the place you expect and searching the place you expect.
Is the data actually going out successfully? You may need to use Fiddler or some other tool to watch your outbound HTTP for calls to dc.services.visualstudio.com. There may be something wrong with the data you're sending, or you may be getting capped or throttled by the service. If that's the case, the outbound requests will have responses other than 200, and will generally tell you the reason any rejected items weren't accepted.
If the data is getting sent successfully and is going where you expect it to go, there might just be a delay in backend processing. You can always check aka.ms/aistatus to see if there are any current issues with the service.
I am confused, however, by what you mean when you say
However, when I continue running the application, my custom event no longer shows up in the telemetry. It just disappears.
What do you mean by "it just disappears"? If you see it in the output window, then the SDK saw it and it will get sent, barring any of the above three issues. Where is it "disappearing" from? Unless you clear the output window, it's never gone from there. If you're talking about the VS search tools that show data sent by the AI SDK during debug, that tool currently caps at the most recent 250 items that have occurred during the debug session.
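On the second point, one way to check quickly whether items are leaving the process, without reaching for Fiddler, is to switch the channel into developer mode and flush explicitly while debugging. A rough sketch for a classic ASP.NET app on the 2.x SDK is below; the event name and property are illustrative, and this is a debugging aid only, not something to leave on in production:

using System.Collections.Generic;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

public static class TelemetryDebugging
{
    // Debug-only aid: developer mode makes the channel send each item immediately
    // instead of batching, and Flush pushes out anything still buffered, which makes
    // it easier to watch the outbound calls to dc.services.visualstudio.com.
    public static void TrackLoginForDebugging(string userName)
    {
        TelemetryConfiguration.Active.TelemetryChannel.DeveloperMode = true;

        var client = new TelemetryClient();
        client.TrackEvent("UserLoggedIn",
            new Dictionary<string, string> { ["user"] = userName });
        client.Flush();
    }
}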

Trigger a series of SMS alerts over time using Twilio/ASP.NET

I didn't see a situation quite like mine, so here goes:
Scenario highlights: The user wants a system that includes custom SMS alerts. A component of the functionality is a way to identify a start based on user input, then send SMS messages with a personalized message at pre-defined intervals after the trigger. I've never used Twilio before and am noodling around with the implementation.
First Pass Solution: Using my Twilio account, I designated the .aspx page that will receive the inbound triggering alert/SMS via GET. The receiving page declares and instantiates my SMSAlerter object within page load, which responds immediately with a first SMS and kicks off a System.Timers.Timer. Elementary, and functional to a point.
Problem: The alerts continue to be sent if the interval for the timer is a short time span. I tested it at a minute interval and was successful. When I went to 10 minutes, the immediate SMS is sent and the first message 10 minutes later is sent, but nothing after that.
My Observation: Since there is no interaction with the resource after the inbound text, the Session times out if left at the default 20 minutes. Increasing the Session timeout doesn't work, and even if it did, it would not seem correct, since the intervals will be on the order of hours, not minutes.
Using Cache to store each new SMSAlerter might be the way to go. For any SMSAlerter that is created, the schedule is used for roughly 12 hours and is replaced with a new SMSAlerter object when the same user notifies the system the following day. Is there a better way? Am I over/under-simplifying? I am not anticipating heavy traffic now (tens of users), but the user is thinking big.
Thank you for comments, suggestions. I didn't include the code, because the question is about design, not syntax.
I think your timer is going out of scope about 20 minutes after the original request, which kills it. I have a feeling that if you kept refreshing the aspx page it wouldn't happen, but obviously that doesn't help much.
You could launch a new thread that holds the System.Timers.Timer object so it stays alive and doesn't go out of scope when there are no follow-up requests to the server. This isn't a great idea, to be honest, although it might help with understanding the issue.
Ultimately, you'll need some sort of continuously running service, as you don't want to depend on the app pool for this. I'd suggest a Windows Service running in the background to handle it, which will be suitable as a long-term solution.
Hope this helps!
(Edited slightly to make the windows service aspect clearer)
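To make the Windows Service suggestion a bit more concrete, here is a rough sketch of an alerter that a long-running service process could own, using System.Timers.Timer and the Twilio C# helper library; the credentials, phone numbers, message text, and interval are all illustrative placeholders:

using System;
using System.Timers;
using Twilio;
using Twilio.Rest.Api.V2010.Account;
using Twilio.Types;

public class SmsAlerter
{
    private readonly Timer _timer;
    private readonly string _to;

    public SmsAlerter(string to, TimeSpan interval)
    {
        _to = to;
        // The timer lives as long as the hosting service does, not an ASP.NET session.
        _timer = new Timer(interval.TotalMilliseconds) { AutoReset = true };
        _timer.Elapsed += (sender, args) => Send("Scheduled reminder");
    }

    public void Start()
    {
        TwilioClient.Init("ACCOUNT_SID", "AUTH_TOKEN");   // placeholder credentials
        Send("Alert schedule started");                   // immediate first SMS
        _timer.Start();
    }

    public void Stop() => _timer.Stop();

    private void Send(string body) =>
        MessageResource.Create(
            to: new PhoneNumber(_to),
            from: new PhoneNumber("+15550100000"),        // placeholder Twilio number
            body: body);
}

A Windows Service (or any always-on host) would create one of these per triggering SMS and keep it alive for the roughly 12-hour schedule, instead of relying on Session or Cache tied to the app pool.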

Google analytics measurement protocol Queue Time doesn't work properly

I have a problem with the Queue Time parameter:
1. A hit is sent.
2. An offline action starts (a phone call).
3. During the conversation the visitor comes to the site from another source, so a new session is started.
4. The phone call ends and I send an event with the qt parameter set to the phone call length.
http://www.google-analytics.com/collect?v=1&t=event&tid=UA-74639540-1&cid=1141561565.1464086029&uip=195.138.65.154&ea=call&ec=PROPER&el=119&qt=127
5. The event appears in GA as if it were initiated by the last session, whereas by using qt I expect it to be placed in the session that was active at the moment the phone call started.
What is the problem? Is it possible to post an event to a session that is not the most recent one but ended less than 4 hours ago, as the "Queue Time" behaviour suggests? Do I need to specify some additional parameter?
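For reference, a hit like the one above can also be sent from code; a rough C# sketch is below. The parameter values mirror the example URL, and note that the Measurement Protocol defines qt as the delay, in milliseconds, between when the hit actually occurred and when it is reported.

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class MeasurementProtocol
{
    private static readonly HttpClient Http = new HttpClient();

    // Sends the "call" event hit from the example above, backdated by qt milliseconds.
    public static Task SendCallEventAsync(string clientId, TimeSpan timeSinceCallStarted)
    {
        string payload =
            "v=1&t=event" +
            "&tid=UA-74639540-1" +
            $"&cid={Uri.EscapeDataString(clientId)}" +
            "&ec=PROPER&ea=call&el=119" +
            $"&qt={(long)timeSinceCallStarted.TotalMilliseconds}";

        return Http.PostAsync(
            "https://www.google-analytics.com/collect",
            new StringContent(payload));
    }
}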
