How can I check if my server is alive with Metricbeat? Is it possible? - nginx

I've been using Elasticsearch, Metricbeat and ElastAlert to watch my server. I have nginx installed on it, used as a reverse proxy, and I need to send an alert if nginx goes down or returns an error. I already have some alerts configured, but how can I make a rule that sends an alert when nginx goes down or returns an error?
Thanks a lot.

Metricbeat only collects data about system resource usage. What you need is to install Filebeat and enable its nginx module. Then you can use the ElastAlert rule type any and filter on fileset.module: nginx and fileset.name: error:
name: your rule name
index: filebeat-*
type: any
filter:
- term:
    fileset.module: "nginx"
- term:
    fileset.name: "error"
alert:
- "slack"
... # your slack config stuff
realert:
  minutes: 1
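For completeness, a minimal sketch of the Filebeat side (assuming Filebeat is already installed and its output points at the same Elasticsearch cluster; paths and service names may differ on your distribution):

# Enable the nginx module so Filebeat ships the access and error logs
filebeat modules enable nginx
# Load the index template and ingest pipelines, then restart the service
filebeat setup -e
systemctl restart filebeat

Once the nginx error log events arrive in the filebeat-* indices, the rule above will match them.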

Related

Raspbian / Mercure - bind: permission denied

I'm trying to run Mercure on my Raspbian.
First:
I tried with mercure-legacy_0.13.0_Linux_armv6.tar.gz, using the following command to run Mercure:
JWT_KEY='example'; ADDR='localhost:3000'; DEMO='1'; ALLOW_ANONYMOUS='1'; CORS_ALLOWED_ORIGINS='*'; PUBLISH_ALLOWED_ORIGINS='*'; PUBLISHER_JWT_KEY='example' ./mercure run
It returns:
"msg":"Unexpected error","error":"listen tcp :80: bind: permission denied"
Second: I tried with mercure_0.13.0_Linux_armv6.tar.gz, using the following command to run Mercure:
MERCURE_PUBLISHER_JWT_KEY='!ChangeMe!' MERCURE_SUBSCRIBER_JWT_KEY='!ChangeMe!' ./mercure run
Caddyfile:
{
    {$GLOBAL_OPTIONS}
}
{
    auto_https off
}
{$SERVER_NAME:localhost}
log
route {
    encode zstd gzip
    mercure {
        # Transport to use (default to Bolt)
        transport_url {$MERCURE_TRANSPORT_URL:bolt://mercure.db}
        # Publisher JWT key
        publisher_jwt {env.MERCURE_PUBLISHER_JWT_KEY} {env.MERCURE_PUBLISHER_JWT_ALG}
        # Subscriber JWT key
        subscriber_jwt {env.MERCURE_SUBSCRIBER_JWT_KEY} {env.MERCURE_SUBSCRIBER_JWT_ALG}
        # Extra directives
        {$MERCURE_EXTRA_DIRECTIVES}
    }
    respond /healthz 200
    respond "Not Found" 404
}
It returns:
run: loading initial config: loading new config: http app module: start: tcp: listening on :443: listen tcp :443: bind: permission denied
Can anyone provide a solution? I intend to host my Symfony project on a web server using Apache2 on the same Raspberry Pi.
I don't know this specific application, but your error message:
listen tcp :80: bind: permission denied
could be related to the restriction on ports 80 and 443 (second message): a non-root user cannot bind to ports lower than 1024 on a standard Linux configuration. Try using a different port or, if you don't care about security (i.e. a local hobby project), run the app as root.
Keep in mind that you can run Nginx as a reverse proxy, so you can run your app on any high port (like 3000) as a standard user.
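For example, a sketch based on the Caddy-based build from the question (the SERVER_NAME value and the capability trick are assumptions about your setup):

# Option 1: bind to a high port as a normal user and put Nginx/Apache in front of it
MERCURE_PUBLISHER_JWT_KEY='!ChangeMe!' MERCURE_SUBSCRIBER_JWT_KEY='!ChangeMe!' \
  SERVER_NAME=':3000' ./mercure run

# Option 2: grant only this binary the right to bind ports below 1024, without running it as root
sudo setcap 'cap_net_bind_service=+ep' ./mercure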
It's a permissions issue with your user.
Try with sudo; it should work.

404 after upgrading artifactory from 6.20 to 7.6.2

I am getting a 404 accessing https://my-domain/ui/. If I try to access https://my-domain/artifactory it redirects to https://my-domain/ui/ with a 404. There are no log errors, only one warning:
2020-07-10T08:06:04.535L [tomct] [WARNING] [ ] [org.apache.catalina.startup.HostConfig] [org.apache.catalina.startup.HostConfig deployDescriptor] - A docBase [/opt/jfrog/artifactory/app/artifactory/tomcat/webapps/artifactory.war] inside the host appBase has been specified, and will be ignored
2020-07-10T08:06:04.540L [tomct] [WARNING] [ ] [org.apache.catalina.startup.HostConfig] [org.apache.catalina.startup.HostConfig deployDescriptor] - A docBase [/opt/jfrog/artifactory/app/artifactory/tomcat/webapps/access.war] inside the host appBase has been specified, and will be ignored
Just to confirm, can you try to access Artifactory using the server IP and port, like http://1.2.3.4:8082? If you are able to access the Artifactory UI using the server IP and port, I believe you need to tweak the reverse proxy being used.
Your problem is that with Artifactory 7.x the reverse proxy configuration is different. In this KB article you can find a working NGINX configuration.
One easy way to generate such a configuration is to bypass your reverse proxy and go to Artifactory directly; there in the UI you will be able to log in, head to HTTP Settings, and generate a new Apache or NGINX config.
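For instance, a quick way to confirm the suggestion above from the host itself (a sketch, assuming the default Artifactory 7 router port 8082):

# Bypass the reverse proxy and hit Artifactory directly
curl -I http://<server-ip>:8082/ui/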

How to control vhost_shared_traffic memory K8s nginx ingress?

Background
We run a Kubernetes cluster that handles several PHP/Lumen microservices. We started seeing the app's php-fpm/nginx report a 499 status code in its logs, and it seems to correspond with the client getting a blank response (curl returns curl: (52) Empty reply from server) while the applications log 499.
10.10.x.x - - [09/Mar/2020:18:26:46 +0000] "POST /some/path/ HTTP/1.1" 499 0 "-" "curl/7.65.3"
My understanding is that nginx will return the 499 code when the client socket is no longer open/available to return the content to. In this situation that appears to mean something before the nginx/application layer is terminating the connection. Our configuration currently is:
ELB -> k8s nginx ingress -> application
So my thoughts are that it's either the ELB or the ingress, since the application is the one with no socket left to return to. So I started digging into the ingress logs...
Potential core problem?
While looking through the ingress logs I'm seeing quite a few of these:
2020/03/06 17:40:01 [crit] 11006#11006: ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone "vhost_traffic_status"
Potential Solution
I imagine that if I gave vhost_traffic_status_zone some more memory, at least that error would go away and I could move on to finding the next one, but I can't seem to find any configmap value or annotation that would allow me to control this. I've checked the docs:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
Thanks in advance for any insight / suggestions / documentation I might be missing!
Here is the standard way to look up how to modify the nginx.conf in the ingress controller. After that, I'll link in some info with suggestions on how much memory you should give the zone.
First, get the ingress controller version by checking the image version on the deployment:
kubectl -n <namespace> get deployment <deployment-name> -o yaml | grep 'image:'
From there, you can retrieve the code for your version from the following URL. In the following, I will be using version 0.10.2.
https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.10.2
The nginx.conf template can be found at rootfs/etc/nginx/template/nginx.tmpl in the code, or at /etc/nginx/template/nginx.tmpl on a pod. This can be grepped for the line of interest. In the example case, we find the following line in nginx.tmpl:
vhost_traffic_status_zone shared:vhost_traffic_status:{{ $cfg.VtsStatusZoneSize }};
This gives us the config variable to look up in the code. Our next grep, for VtsStatusZoneSize, leads us to these lines in internal/ingress/controller/config/config.go:
// Description: Sets parameters for a shared memory zone that will keep states for various keys. The cache is shared between all worker processes
// https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone
// Default value is 10m
VtsStatusZoneSize string `json:"vts-status-zone-size,omitempty"`
This gives us the key "vts-status-zone-size" to be added to the configmap "ingress-nginx-ingress-controller". The current value can be found in the rendered nginx.conf template on a pod at /etc/nginx/nginx.conf.
When it comes to what size you may want to set the zone to, the docs here suggest setting it to more than usedSize * 2:
If the message("ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone") printed in error_log, increase to more than (usedSize * 2).
https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone
"usedSize" can be found by hitting the stats page for nginx or through the JSON endpoint. Here is the request to get the JSON version of the stats and if you have jq the path to the value: curl http://localhost:18080/nginx_status/format/json 2> /dev/null | jq .sharedZones.usedSize
Hope this helps.

How to disable interception of errors by Ingress in a Tectonic kubernetes setup

I have a couple of NodeJS backends running as pods in a Kubernetes setup, with Ingress-managed nginx over it.
These backends are API servers, and can return 400, 404, or 500 responses during normal operations. These responses would provide meaningful data to the client; besides the status code, the response has a JSON-serialized structure in the body informing about the error cause or suggesting a solution.
However, Ingress will intercept these error responses, and return an error page. Thus the client does not receive the information that the service has tried to provide.
There's a closed ticket in the kubernetes-contrib repository suggesting that it is now possible to turn off error interception: https://github.com/kubernetes/contrib/issues/897. Being new to kubernetes/ingress, I cannot figure out how to apply this configuration in my situation.
For reference, this is the output of kubectl get ingress <ingress-name>: (redacted names and IPs)
Name:             ingress-name-redacted
Namespace:        default
Address:          127.0.0.1
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host                        Path  Backends
  ----                        ----  --------
  public.service.example.com
                              /     service-name:80 (<none>)
Annotations:
  rewrite-target:         /
  service-upstream:       true
  use-port-in-redirects:  true
Events:  <none>
I have solved this on Tectonic 1.7.9-tectonic.4.
In the Tectonic web UI, go to Workloads -> Config Maps and filter by namespace tectonic-system.
In the config maps shown, you should see one named "tectonic-custom-error".
Open it and go to the YAML editor.
In the data field you should have an entry like this:
custom-http-errors: '404, 500, 502, 503'
which configures which HTTP responses will be captured and be shown with the custom Tectonic error page.
If you don't want some of those, just remove them, or clear them all.
It should take effect as soon as you save the updated config map.
Of course, you could do the same from the command line with kubectl edit:
$> kubectl edit cm tectonic-custom-error --namespace=tectonic-system
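Or, non-interactively, a sketch with kubectl patch (assuming you only want to keep the custom page for 502 and 503):

# Stop intercepting 404 and 500; keep the custom page for 502 and 503
kubectl -n tectonic-system patch cm tectonic-custom-error \
  --type merge -p '{"data":{"custom-http-errors":"502, 503"}}'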
Hope this helps :)

Tyk gateway with Nginx and Apache Tomcat 8 (ubuntu 14.04)

Just wondering what I am missing here when trying to create an API with Tyk Dashboard.
My setup is:
Nginx > Apache Tomcat 8 > Java Web Application > (database)
Nginx is already working, redirecting calls to apache tomcat at default port 8080.
Example: tomcat.myserver.com/webapp/get/1
200-OK
I have set up tyk-dashboard and tyk-gateway previously as follows, using a custom node port 8011:
Tyk dashboard:
$ sudo /opt/tyk-dashboard/install/setup.sh --listenport=3000 --redishost=localhost --redisport=6379 --mongo=mongodb://127.0.0.1/tyk_analytics --tyk_api_hostname=$HOSTNAME --tyk_node_hostname=http://127.0.0.1 --tyk_node_port=8011 --portal_root=/portal --domain="dashboard.tyk-local.com"
Tyk gateway:
/opt/tyk-gateway/install/setup.sh --dashboard=1 --listenport=8011 --redishost=127.0.0.1 --redisport=6379 --domain=""
/etc/hosts already configured (not really needed):
127.0.0.1 dashboard.tyk-local.com
127.0.0.1 portal.tyk-local.com
Tyk Dashboard configurations (nothing special here):
API name: foo
Listen path: /foo
API slug: foo
Target URL: tomcat.myserver.com/webapp/
What URI am I supposed to call? Is there any setup I need to add in Nginx?
myserver.com/foo → 502 nginx
myserver.com:8011/foo → does not respond
foo.myserver.com → 502 nginx
(everything is running under the same server)
SOLVED:
Tyk Gateway configuration was incorrect.
I needed to add the --mongo directive and remove the --domain directive in setup.sh:
/opt/tyk-gateway/install/setup.sh --dashboard=1 --listenport=8011 --redishost=localhost --redisport=6379 --mongo=mongodb://127.0.0.1/tyk_analytics
So, calling curl -H "Authorization: null" 127.0.0.1:8011/foo
I get:
{
  "error": "Key not authorised"
}
I am not sure about the /foo path; I think it previously did what the /hello path does now. But it appears there is a "Key not authorised" issue. If the call is made using the Gateway API, then the secret value may be missing; it is required when making calls to the gateway (except for the hello and reload paths):
x-tyk-authorization: <your-secret>
However, since there is a dashboard present, then I would suggest using the Dashboard APIs to create the API definition instead.
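As a quick sanity check against the Gateway API, something like the following can be useful (a sketch; the secret value comes from the "secret" field in your tyk.conf):

# List the API definitions the gateway has actually loaded
curl -H "x-tyk-authorization: <your-secret>" http://127.0.0.1:8011/tyk/apis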
