Is it possible to create a Stackdriver alert that will alert when any K8s pods are down?

I have the following setup:
K8s Ingress (GCP LB for SSL) --> K8s Service (NodePort) --> K8s Pods x n (containing the application)
I can set up a Stackdriver HTTPS uptime check to notify when the site is down, but this will only alert when all n pods are out of action.
Is it possible to create a Stackdriver alert that fires when any of the n application pods are down?

Firstly, you should consider putting correct scaling in place so that you can altogether avoid the need to alert when application pods are down. Additionally, it's best to alert on the symptoms your users experience (increased latency or errors) rather than on the underlying infrastructure, since it might be okay for some pods to be down temporarily, as long as user requests still get served.
That being said, if you're running in GKE you can alert on container uptime. From your question I'm assuming that's not the case, so you could either:
* Log your own uptime checks, create a logs-based metric, and alert when it falls under a certain threshold (a sketch of the logging side follows after the note below).
* Similarly, create a custom uptime metric and alert on that.
Note that I would avoid creating a custom metric and using metric absence as the alerting policy condition, so I didn't list that as an option.
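For the first option, a minimal sketch of the logging side might look like this; the pod health endpoints, the 60-second interval, and the log format are all assumptions, and a logs-based metric counting the failure lines would then drive the alert:

```python
# Minimal sketch: probe each pod and print one structured log line per check.
# A logs-based metric can count lines with status != 200 and trigger the alert.
# The pod endpoints and interval below are made up; adjust to your cluster.
import json
import time
import urllib.request

POD_URLS = [
    "http://10.0.0.11:8080/healthz",  # hypothetical pod health endpoints
    "http://10.0.0.12:8080/healthz",
]

while True:
    for url in POD_URLS:
        try:
            status = urllib.request.urlopen(url, timeout=5).status
        except Exception:
            status = 0  # unreachable counts as down
        print(json.dumps({"check": "pod-uptime", "url": url, "status": status}))
    time.sleep(60)
```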
HTH and thanks for using Stackdriver.

Related

Why is my WSO2 API Manager constantly reaching out to an endpoint on port 9099 when we don't have a WebSocket API published?

As the title suggests, I have an API Manager that is unsuccessfully reaching out on port 9099 multiple times per second and filling up the wso2carbon.log (as seen below)
I do not have any WebSocket APIs published, so I'm unsure what endpoint it's looking for. We tried commenting out the ws_endpoint in the apim.gateway.environment block of the deployment.toml, but that did not produce any perceptible change. Any help would be greatly appreciated.
UPDATE: I have noticed that whenever I shut down API Manager, the stack trace below appears. It fails to destroy this Inbound Endpoint, then tries to recreate it after the service starts, but says it already exists (so any changes to the WebSocket endpoint configs do nothing).
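For reference, the block being discussed looks roughly like this in a stock APIM 3.x deployment.toml (the exact keys and values here are illustrative defaults, not the asker's actual configuration); commenting out the ws_endpoint line is the change that was attempted:

```toml
# Rough excerpt of the default [[apim.gateway.environment]] block.
# Values are illustrative; adjust to your deployment.
[[apim.gateway.environment]]
name = "Default"
type = "hybrid"
service_url = "https://localhost:${mgt.transport.https.port}/services/"
# ws_endpoint = "ws://localhost:9099"    # the line that was commented out
wss_endpoint = "wss://localhost:8099"
http_endpoint = "http://localhost:${http.nio.port}"
https_endpoint = "https://localhost:${https.nio.port}"
```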

Is it possible to add/create REST APIs without restarting the server?

I am trying to implement a server that offers REST APIs. As time goes by I may have to add new REST APIs based on the need of the hour. I can do this with a simple Spring REST API service, where I add the new API and re-deploy the application to the server.
But it would be nicer if I could just keep adding APIs to the server whenever there is a need, without even stopping the server! Is that even possible?
I would appreciate any input on this topic.
Not familiar with Spring, but REST APIs can certainly be written in Python as well, and once running they can be served behind any HTTP server.
The question is: why don't you want to restart the server? Is it because you are afraid of downtime and missing out on some requests?
If so, you could probably adopt one of these two strategies (laid out in layman's terms):
Load Balancing: You have 2+ servers running in a cluster behind a common HTTP server (say A, B, C and D, for example): take A and B offline, update them, and bring them back online while taking C and D offline; update those, then bring them all back online.
Blue/Green: Similar to the previous one but with 2 clusters (one active and one idle - could be just 1 server per cluster, doesn't matter): update the idle one, swap it with the currently active one (i.e. channel all traffic from one to the other using the HTTP server).
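To make the blue/green swap concrete, here is a minimal nginx sketch (the addresses are hypothetical; the swap is done by editing the upstream and gracefully reloading, so no requests are dropped):

```nginx
# Hypothetical blue/green switch: "backend" points at whichever cluster is
# live. To swap, comment/uncomment the server lines and run `nginx -s reload`
# (a graceful reload; in-flight requests finish on the old workers).
upstream backend {
    server 10.0.0.10:8080;     # blue cluster: currently active
    # server 10.0.0.20:8080;   # green cluster: idle, update this one first
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```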

Alerting when heartbeat not received

I've got an Application Insights log being written to by a Windows Service. Is it possible to set an alert for inactivity, for instance "if it has not been written to in the last 15 minutes, raise an alert"?
I'm afraid there isn't such an option out of the box.
What I would recommend is firing a custom Keep Alive metric with a value of 1 every so-and-so minutes, and defining a custom alert on it for when the value drops below some threshold.
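The service in the question is presumably .NET, but the idea is the same in any SDK. As a minimal sketch using Python's applicationinsights package (the instrumentation key and the 5-minute interval are placeholders):

```python
# Minimal sketch: emit a KeepAlive metric on a fixed interval so an alert
# can fire when the value stops arriving or drops below a threshold.
import time
from applicationinsights import TelemetryClient

tc = TelemetryClient("<your-instrumentation-key>")  # placeholder key

while True:
    tc.track_metric("KeepAlive", 1)  # constant value; its absence is the signal
    tc.flush()                       # send immediately instead of batching
    time.sleep(300)                  # every 5 minutes (placeholder interval)
```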
You might be interested in Application Insights Heartbeat metrics.
Q&A: https://github.com/microsoft/ApplicationInsights-Home/blob/master/heartbeat/heartbeat-q-and-a.md
Spec: https://github.com/microsoft/ApplicationInsights-Home/blob/master/heartbeat/heartbeat-specification.md

Rate limiting in Yii2 vs using nginx for rate limiting

What is the difference between rate limiting via Yii2 and using nginx (for example, as a reverse proxy and rate limiter)?
REF: Yii2 Rate Limiting API
An application rate limit (like Yii2's) is more flexible. You can set different limits per user, for example, or put requests into a queue for future execution. But each request over the limit still hits your PHP scripts.
Nginx limits are less flexible, but they stop requests before any PHP script runs.
Nginx limits are usually used as DoS protection. A typical task: don't allow too many PHP processes to be spawned from one IP, for example.
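For illustration, a minimal nginx per-IP limit might look like this (the rate, burst, and upstream are arbitrary; rejected requests get a 503 before PHP ever runs):

```nginx
# Hypothetical per-IP limit: 10 req/s per client with a small burst.
# limit_req_zone belongs in the http {} context.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;
    location / {
        limit_req zone=perip burst=20 nodelay;  # excess requests rejected (503)
        proxy_pass http://127.0.0.1:8080;       # hypothetical PHP app upstream
    }
}
```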
Application rate limits, by contrast, protect the application backend, which can be a database or an external API, from overload. Application limits can also be part of business logic (different rate limits for different tariff plans, etc.).
The difference is in which layer of your web application you configure the rate limit for calls to your API server.
In the first case (Yii2), you configure the limitation directly in the PHP code.
With yii\filters\RateLimitInterface you implement the methods in an identity class (the model used to manage the data for the API calls); Yii will then automatically use yii\filters\RateLimiter to add the rate-limit headers to the response.
Conversely, with nginx you set this limitation directly in the HTTP server configuration; the server takes charge of the headers and limits the requests itself.
The real question here is: "Should I use the Yii or the nginx approach?" The answer can change depending on how you will build your API services.
Many people will say that having the HTTP server take care of this aspect is the most "natural" way; however, Yii2 lets you use PHP to customize the rate limiting, and this works to your advantage when you want to develop an API server of medium/high complexity.
In some (very) rare cases you can combine Yii2 with nginx to obtain something even more custom.
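To illustrate the kind of flexibility both answers attribute to application-level limiting (per-user, per-plan rules), here is a language-agnostic token-bucket sketch in Python; the plan rates and capacity are made-up numbers, and Yii2's own implementation differs in detail:

```python
# Minimal token-bucket sketch: one bucket per user, refill rate per plan.
# This is the kind of per-user / per-tariff logic an in-app limiter allows.
import time

PLAN_RATES = {"free": 1.0, "pro": 10.0}  # hypothetical tokens per second

buckets = {}  # user_id -> (tokens, last_refill_time)

def allow(user_id: str, plan: str, capacity: float = 20.0) -> bool:
    """Refill the user's bucket for the elapsed time, then spend one token."""
    now = time.monotonic()
    tokens, last = buckets.get(user_id, (capacity, now))
    tokens = min(capacity, tokens + (now - last) * PLAN_RATES[plan])
    if tokens >= 1.0:
        buckets[user_id] = (tokens - 1.0, now)
        return True
    buckets[user_id] = (tokens, now)  # over the limit: reject, keep the refill
    return False

# Usage: allow("alice", "free") returns True until Alice's bucket runs dry.
```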

JMeter: How to be sure that requests sent from JMeter reached the server when there is no way to verify via the DB or any analytics tool

I see no way to verify that specific requests reached the server after JMeter has run its VUsers.
Say "About Us" is the page that 10000 VUsers hit at once from JMeter, and the server shows some activity in Perfmon. Now say that, in JMeter, the VUser count has gone from 10000/10000 down to 0/10000, but there is no way to keep track of how many users actually hit the page, as analytics is not implemented in the app.
I want to make sure all 10000 VUsers hit at once. Is there any way I can find out how many of the 10000 VUsers visited the "About Us" page, given that JMeter does not show any failed responses?
You can monitor the request rate with custom listeners available via the JMeter Plugins project, like:
Server Hits per Second
Active Threads Over Time
You can set the desired request rate via JMeter Timers, i.e.
Constant Throughput Timer
Throughput Shaping Timer
To ensure you're really hitting your server, the best way is to check the web server's access logs.
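As a minimal sketch of that check, counting "About Us" hits straight from an access log (the log path and request pattern are assumptions; adjust to your server):

```python
# Count how many "About Us" requests actually reached the web server,
# straight from its access log. Path and URL pattern are assumptions.
count = 0
with open("/var/log/nginx/access.log") as log:   # hypothetical log path
    for line in log:
        if '"GET /about-us' in line:             # hypothetical request line
            count += 1

print(f"About Us requests recorded by the server: {count}")
```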
To then have some numbers in your report, use the HTML dashboard report introduced in JMeter 3.0:
https://jmeter.apache.org/usermanual/generating-dashboard.html
If you want live numbers:
https://jmeter.apache.org/usermanual/realtime-results.html
