How to set an alert for server up/down in AppDynamics?

I tried to set up an alert for whether the server is up or down, but I didn't see any suitable policy in AppDynamics. Can you help me with that?

Create a Health Rule using the Machine Agent "Availability" metric, with a Critical and/or Warning condition (for example, check whether it is zero every minute for 5 minutes).
If you want alerting on top of that, look at setting up a Policy that performs Actions (such as sending email alerts), and configure it to be triggered by the events your Health Rule creates.
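As a sketch, the Health Rule's condition would watch the Machine Agent availability metric path (the tier name below is a placeholder) and fire when the metric flatlines:

```
Metric:    Application Infrastructure Performance|<Tier>|Agent|Machine|Availability
Condition: value == 0 for the last 5 minutes, evaluated every minute
Severity:  Critical (optionally a shorter window for Warning)
```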

Related

How to setup web hooks to send message to Slack when Firebase functions crash?

I need to actively receive crash notifications for firebase functions.
Is there any way to set up Slack webhooks to receive a message when Firebase Functions throw an Error, functions crash, or something like that?
I would also love to receive messages based on velocity, e.g.: Firebase Functions crashed 50 times a day.
Thank you so much.
First you have to create a log-based (counter) metric that counts occurrences of the specific error, and second, you create an alerting policy with Slack as the notification channel.
Let's start by finding the logs that appear when the function throws an error. Since I didn't have a function that would crash, I used logs indicating that the function had started.
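If it helps to narrow the logs down, a Cloud Logging filter along these lines (a sketch; it assumes 1st-gen Cloud Functions, whose resource type is `cloud_function`) matches error-level function logs:

```
resource.type="cloud_function"
severity>=ERROR
```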
Next you have to create the log-based metric. Ignore the next screen and go to Monitoring > Alerting. Click "Create new policy", find your metric, and set "Rolling window" to whatever period you need; for testing I used 1 minute. Then set the "Rolling window function" to "mean".
Now configure when the alert should be triggered: I chose over 3 (within the 1-minute window).
On the next screen you select the notification channel. In the case of Slack, it has to be configured first under "Notification Channels".
You can save the policy now.
After a few minutes I had gathered enough data to generate two incidents:
And here is some alerting-related documentation that may help you understand how to use these features.

How to define session metric in Firebase

I'm setting up Firebase for a mobile app. I'm finding conflicting information about Firebase's definition of a session in Google's documentation as well as on Stack Overflow itself.
Firebase documentation and Stack Overflow answers state that I should use "SetMinimumSessionDuration" to define the metric; however, this parameter has recently been marked as Deprecated (see https://firebase.google.com/docs/reference/unity/deprecated/deprecated).
It is mentioned that "a session is initiated when an app is opened in the foreground" (see https://support.google.com/firebase/answer/9191807?hl=en), but I am not confident that this webpage has been published recently and is still valid.
Does anybody have solid info on how Firebase sets this metric?
Before Jan 2019 (Analytics version 16.3.0), the default value for minimum session duration was 10 seconds. The session_start event was triggered if there was no current session, and the app was in the foreground for more than 10 seconds.
After Jan 2019, it was changed so that the session_start event is triggered as soon as the app is foregrounded. The SetMinimumSessionDuration parameter is now deprecated and can no longer be changed. See The Firebase Blog.
However, when you run a SQL query in BigQuery, you should be able to specify a minimum duration of a session for the purpose of analysis. Here's a post to get you started with such queries.
You can also still change the setSessionTimeoutDuration parameter to control the duration of inactivity that terminates the current session (default: 30 minutes).
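As an illustration, on Android the inactivity timeout can be adjusted like this (a minimal sketch; the 45-minute value and the `context` variable are assumptions, and the SDK expects milliseconds):

```kotlin
import com.google.firebase.analytics.FirebaseAnalytics

// Sketch: extend the session inactivity timeout from the 30-minute default
// to 45 minutes. `context` is assumed to be a valid Android Context.
val analytics = FirebaseAnalytics.getInstance(context)
analytics.setSessionTimeoutDuration(45 * 60 * 1000L) // value is in milliseconds
```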

Firebase Remote Config A/B testing shows no results after 24 hours

I configured Firebase Remote Config A/B testing for Android, and we rolled it out to at least 10K devices.
For some reason, I see "0 users" in my A/B test after more than 24 hours.
Firebase GMS version is: 11.8.0
Should it show A/B participants in real time, or is it OK to see 0 users after 24 hours?
P.S.: We are able to get A/B test variants on test devices through the Firebase Instance ID; that works well.
The simplest experiment we are running has only the app package as a target, with no additional filters, and it shows 0 users as well.
Finally, we found an answer!
Maybe somebody will find it helpful:
For now, it happens (no data in a Firebase Remote Config A/B test experiment) if you have an activation event configured for the experiment.
If you have 2 different experiments, both will fail to get results even if the activation event is configured in only 1 of them.
Additionally, Remote Config will stop working as well: you'll only be able to get default values.
We have already reported this to Google, so I hope they'll fix it at some point.
More useful info that is really hard to find:
How long is it OK to see "0 Total Users" in an experiment I've just started?
It takes many hours before you can see any data in your experiment. We were able to see results only 21 hours after the experiment started, so if you configured everything correctly, don't worry and wait at least 24 hours. It will show "0 Total Users" for many hours after the start.
Should I use the app versionName or versionCode in the "Version" field of the experiment setup?
You should use versionName.
Some useful info from support:
Firebase SDK
Make sure your users have the version of your app with the latest SDK.
Since your experiment is with Remote Config
When activateFetched() is called, all events from that point on will be tagged with the experiment. If you have a goal or activation event that happens before activateFetched(), such as automatic events like first_open, session_start, etc., the experiment setup might be wrong.
Are you using an Activation Event?
Make sure to call fetch() and activateFetched() before the activation event occurs.
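As a sketch of that ordering, using the pre-17.x Remote Config API that matches the question's SDK era (`activateFetched()` is deprecated in newer SDKs in favor of `activate()`; the event name and `context` are hypothetical placeholders):

```kotlin
import com.google.firebase.analytics.FirebaseAnalytics
import com.google.firebase.remoteconfig.FirebaseRemoteConfig

// Fetch and activate Remote Config BEFORE the activation event fires,
// so that the event gets tagged with the experiment.
val remoteConfig = FirebaseRemoteConfig.getInstance()
remoteConfig.fetch().addOnCompleteListener { task ->
    if (task.isSuccessful) {
        remoteConfig.activateFetched()
        // Only now log the event the experiment is activated on.
        FirebaseAnalytics.getInstance(context)
            .logEvent("my_activation_event", null) // hypothetical event name
    }
}
```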
Experiment ID of the experiment (in case support asks you for it)
It's the number at the end of the URL while viewing experiment results.
Debug logging could also be useful for understanding what is going on.
Also:
A good way to check whether your experiment is working is to set it to target a specific version you haven't published yet, then check the Remote Config logs from a fresh app install (or after erasing all app data and restarting).
It should show a different variant every time you reinstall the app, since your Firebase Instance ID changes after a reinstall or app-data erase.
If you see the variants change, then the A/B test is running well.
In your build.gradle, don't forget to set the same versionName that you set in the experiment setup.
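For example (a sketch; the version string and application ID are placeholders, and the versionName must match the experiment's "Version" field exactly):

```groovy
android {
    defaultConfig {
        applicationId "com.example.app" // placeholder
        // Must match the "Version" field in the A/B experiment setup
        versionName "1.2.3"
    }
}
```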
In my case, I was receiving A/B testing results, but suddenly they stopped appearing. This went on for 7 days, and then the results appeared again. A Firebase support manager said:
what I suspected here is just a delay in showing the result in the experiments
Additionally, she said that
With that, I would suggest always using the latest SDK version and enabling Google Analytics data sharing.
In my case, I wasn't using the latest SDK version, but Google Analytics data sharing was enabled for "Benchmarking", "Technical Support", and "Account Specialists", though not for "Google products & services". I believe these settings were enabled by default (see the screenshot from Google Analytics):

Cannot use WSO2 ESB and AS simultaneously even after changing offset

I extracted ESB and AS, opened the ESB's repository/conf/carbon.xml file, and under the "Ports" configuration changed the "Offset" setting from 0 to 1.
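The offset change described above looks like this in the ESB's carbon.xml:

```xml
<!-- repository/conf/carbon.xml (ESB) -->
<Ports>
    <!-- Every default port is shifted by this offset, e.g. 9443 becomes 9444 -->
    <Offset>1</Offset>
</Ports>
```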
I can run both of them on different ports (9443 and 9444), but when I try to log in to both, one of them gets logged out.
Example: first I log in to AS; then, when I log in to ESB, AS logs out, and vice versa.
What should I do?
Looks like this is a bug in these products; it has been reported in JIRA [1].
As a workaround, you can log in to the two servers from two different browsers; the sessions then won't log each other out. (Browsers ignore the port when scoping cookies, so two management consoles on the same host otherwise overwrite each other's session cookie.)
[1] https://wso2.org/jira/browse/WSAS-2249

How to configure Asterisk realtime with mysql properly?

I currently have over 1k realtime users set up on a MySQL server for Asterisk (only 10-20 users register simultaneously). The problem is that SIP does not register every time; sometimes I get a 'registration timeout'. Is there a setup guide, or a setting I need to configure, to get >99% successful registrations?
I have never faced this issue myself, as I have fewer users.
But according to the Asterisk documentation:
If you have problems with your network connection going up and down (e.g. an unreliable cable connection) and you keep losing your sip registry, you may want to add registerattempts and registertimeout settings to the general section above the register definitions.
Setting registerattempts=0 will force Asterisk to attempt to reregister until it can (the default is 10 tries).
registertimeout sets the length of time in seconds between registration attempts (the default is 20 seconds).
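In sip.conf that would look something like this (a sketch; the register line uses placeholder credentials and host):

```ini
; sip.conf
[general]
registerattempts=0   ; 0 = keep retrying until registration succeeds (default: 10)
registertimeout=20   ; seconds between registration attempts (default: 20)

; register definitions go below the general settings
register => myuser:mysecret@sip.example.com
```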
About achieving 99% success:
I think you have to study your system and tune the above settings accordingly (dynamically). I suggest using Markovian models, such as an M/M/1 queue simulation, if your system is not too complicated.
