nginx reverse proxy for Kibana gives empty dashboard on Kubernetes

We have deployed Kibana and Elasticsearch on our Kubernetes cluster. I want to access Kibana through my nginx by configuring a reverse proxy. I am able to configure nginx.conf with the Kibana URL, but when I go to nginx_url/Kibana I get a white dashboard with no content on the page. It is strange because the same configuration worked on my local machine, yet on Kubernetes it gives an empty response.
Below is my Kibana deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  #namespace: vpd-cluster-elk
  name: kibana
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.10.1
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ELASTICSEARCH_URL
          value: "http://ES_Url:9200"
        - name: SERVER_BASEPATH
          value: "/app/home"
        - name: SERVER_REWRITEBASEPATH
          value: "false"
        ports:
        - containerPort: 5601
      volumes:
      - name: "config"
        configMap:
          name: kibana-config
---
apiVersion: v1
kind: Service
metadata:
  #namespace: vpd-cluster-elk
  name: kibana
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
    targetPort: 5601
  selector:
    app: kibana
  type: LoadBalancer
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana-config
data:
  kibana.yml: |
    ---
    server.name: kibana
    elasticsearch.url: http://ES_url:9200
    # server.basePath: /app/home
    # server.rewriteBasePath: true
Below is my nginx.conf file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    worker_processes 1;
    events {
      worker_connections 1024;
    }
    http {
      include mime.types;
      default_type application/octet-stream;
      sendfile on;
      keepalive_timeout 65;
      # To allow special characters in headers
      ignore_invalid_headers off;
      client_max_body_size 0;
      proxy_buffering off;
      server {
        listen 80;
        server_name _;
        location /app/home {
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection 'upgrade';
          proxy_set_header Host $host;
          proxy_cache_bypass $http_upgrade;
          proxy_pass http://Kibana_Url:5601;
        }
      }
    }
Kindly let me know what I am doing wrong, or whether I need any additional configuration.
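A likely culprit for the blank page here is the base path configuration: SERVER_BASEPATH should be the prefix the proxy exposes Kibana under, whereas /app/home is one of Kibana's own application routes, and with SERVER_REWRITEBASEPATH set to "false" Kibana expects the proxy to strip the prefix itself. A hedged sketch of one working combination, assuming Kibana should be exposed under /kibana and keeping the placeholder hosts from the question (note also that Kibana 7.x reads ELASTICSEARCH_HOSTS rather than the older ELASTICSEARCH_URL):

env:
- name: ELASTICSEARCH_HOSTS            # replaces ELASTICSEARCH_URL in Kibana 7.x
  value: "http://ES_Url:9200"
- name: SERVER_BASEPATH
  value: "/kibana"                     # the proxied prefix, no trailing slash
- name: SERVER_REWRITEBASEPATH
  value: "true"                        # Kibana strips /kibana from incoming requests itself

location /kibana/ {
  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection 'upgrade';
  proxy_set_header Host $host;
  proxy_cache_bypass $http_upgrade;
  proxy_pass http://Kibana_Url:5601;   # no URI part, so /kibana/... is passed through unchanged
}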

Related

Kubernetes qBittorrent WebUI showing only on path '/'

I am trying to host a qBittorrent server with Kubernetes. I have composed a YAML for the https://hub.docker.com/r/linuxserver/qbittorrent docker container.
The problem is that it is accessible only from path /. As soon as I move it to /torrent it is not found anymore: 404 Not Found.
Steps to replicate:
apply the following YAMLs
helm install nginx ingress-nginx/ingress-nginx
go to service_ip:8080, Settings, WebUI, uncheck "Enable Host header validation"
go to localhost:nginx_port/torrent
Result:
page not loading
Expected result:
the qBittorrent WebUI appears and works
What I tried:
adding nginx.ingress.kubernetes.io/rewrite-target: / to the annotations
server.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: torrent-deployment
  labels:
    app: torrent
spec:
  replicas: 1
  selector:
    matchLabels:
      pod-label: torrent-pod
  template:
    metadata:
      labels:
        pod-label: torrent-pod
    spec:
      containers:
      - name: linuxserver
        image: linuxserver/qbittorrent:amd64-latest
---
apiVersion: v1
kind: Service
metadata:
  name: torrent-service
  labels:
    app: torrent
spec:
  selector:
    pod-label: torrent-pod
  ports:
  - port: 8080
    name: torrent-deployment
ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: torrent-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: torrent
spec:
  rules:
  - http:
      paths:
      - path: /torrent
        pathType: Prefix
        backend:
          service:
            name: torrent-service
            port:
              number: 8080
Thanks to @matt_j I have found a workaround. I wrote a YAML for nginx myself, added the configuration from the post matt mentioned (https://github.com/qbittorrent/qBittorrent/wiki/NGINX-Reverse-Proxy-for-Web-UI), and it worked.
These are the YAMLs I came up with:
server.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
  namespace: nginx
spec:
  selector:
    matchLabels:
      pod-label: nginx
  template:
    metadata:
      labels:
        pod-label: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
          items:
          - key: nginx.conf
            path: nginx.conf
  replicas: 1
# status:
---
apiVersion: v1
kind: Service
metadata:
  namespace: nginx
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    pod-label: nginx
  ports:
  - port: 80
    name: nginx
config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
  namespace: nginx
data:
  nginx.conf: |
    user nginx;
    worker_processes auto;
    error_log /var/log/nginx/error.log notice;
    pid /var/run/nginx.pid;
    http {
      server {
        server_name 10.152.183.95;
        listen 80;
        location /torrent/ {
          proxy_pass http://torrent-service.qbittorrent:8080/;
          #proxy_http_version 1.1;
          proxy_set_header X-Forwarded-Host $http_host;
          proxy_set_header X-Forwarded-For $remote_addr;
          #proxy_cookie_path / "/; Secure";
        }
      }
    }
    events {
      worker_connections 1024;
    }
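A note on why this works: with a trailing slash on both the location and the proxy_pass URL, nginx replaces the /torrent/ prefix with / before proxying, so qBittorrent only ever sees root-relative paths. If you would rather keep the stock ingress-nginx controller instead of a hand-rolled nginx Deployment, the same prefix stripping can be done with a capture-group rewrite; a hedged sketch, untested against this setup, reusing the service name from the question:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: torrent-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # $2 is whatever followed /torrent; the prefix is stripped before proxying
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /torrent(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: torrent-service
            port:
              number: 8080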

Nginx Giving 403

I have a use case with two microservices, a frontend and a backend,
where the frontend pod is exposed to the outside world and renders pages from the backend pod.
While accessing the frontend service I am getting a 403 error.
Below are the YAMLs for the respective pods and the config map.
frontend pod
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    run: frontend
spec:
  volumes:
  - name: webpage
    hostPath:
      path: /home/vagrant/html
      type: Directory
  - name: nginx-config-volume
    configMap:
      name: nginx-config
  containers:
  - image: nginx
    name: frontend
    volumeMounts:
    - name: nginx-config-volume
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf
    - name: webpage
      mountPath: /var/www/html
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Exists
backend pod
apiVersion: v1
kind: Pod
metadata:
  name: backend
  labels:
    run: backend
spec:
  volumes:
  - name: webpage
    hostPath:
      path: /home/vagrant/html
      type: Directory
  containers:
  - image: php:7.2-fpm
    name: backend
    volumeMounts:
    - name: webpage
      mountPath: /app
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Exists
Config Map for Nginx
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
data:
  nginx.conf: |
    user nginx;
    worker_processes 1;
    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;
    events {
      worker_connections 1024;
    }
    http {
      server {
        listen 80 default_server;
        listen [::]:80 default_server;
        # Set nginx to serve files from the shared volume!
        root /var/www/html;
        server_name _;
        location / {
          try_files $uri $uri/ =404;
        }
        location ~ \.php$ {
          include fastcgi_params;
          fastcgi_param REQUEST_METHOD $request_method;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_pass backend:9000;
        }
      }
    }
Services
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
backend    ClusterIP   10.102.90.23    <none>        9000/TCP       60m
frontend   NodePort    10.111.88.172   <none>        80:30054/TCP   9h
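Two things stand out in this setup. First, with autoindex off (the default), a request for / that falls through to $uri/ returns 403 when the directory has no index file, which would explain the 403 from the frontend. Second, the PHP branch builds SCRIPT_FILENAME from nginx's $document_root (/var/www/html), while the php-fpm container mounts the same hostPath at /app, so php-fpm would look for scripts at a path that doesn't exist in its own filesystem. A hedged sketch of the fastcgi fix, assuming the shared files really do live at /app inside the backend container:

location ~ \.php$ {
  include fastcgi_params;
  fastcgi_param REQUEST_METHOD $request_method;
  # Use the path as seen by the php-fpm container, not nginx's document root.
  fastcgi_param SCRIPT_FILENAME /app$fastcgi_script_name;
  fastcgi_pass backend:9000;
}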

NGINX Container Not Loading Static Files using Traefik / Kubernetes

I am running the Traefik Ingress Controller on Kubernetes (AKS). I've successfully deployed my Django application using the Traefik Ingress, but it's not currently loading any static files (and therefore the styling isn't working).
Static files are served from a custom NGINX container under /static. So if my domain name is xyz.com, static content is served from xyz.com/static.
apiVersion: v1
kind: Service
metadata:
  name: nginxstaticservice
  labels:
    app: nginxstatic
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: nginxstatic
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginxstatic-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
    # traefik.ingress.kubernetes.io/frontend-entry-points: http,https
spec:
  rules:
  - host: xyz.com
    http:
      paths:
      - path: /static
        backend:
          serviceName: nginxstaticservice
          servicePort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxstatic-deployment
  labels:
    app: nginxstatic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginxstatic
  template:
    metadata:
      labels:
        app: nginxstatic
    spec:
      containers:
      - name: nginxstatic
        image: nginxstatic:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
This is the default.conf running on the NGINX container (this was previously working in a website configuration):
server {
  listen 80;
  server_name _;
  client_max_body_size 200M;
  set $cache_uri $request_uri;
  location = /favicon.ico { log_not_found off; access_log off; }
  location = /robots.txt { log_not_found off; access_log off; }
  ignore_invalid_headers on;
  add_header Access-Control-Allow-Origin *;
  location /static {
    autoindex on;
    alias /static;
  }
  location /media {
    autoindex on;
    alias /media;
  }
  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;
}
Resolved in the comments: PathPrefixStrip was used incorrectly, which caused nginx to see different paths than expected.
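Concretely: with traefik.frontend.rule.type: PathPrefixStrip, Traefik removes /static before forwarding, so nginx receives /styles.css instead of /static/styles.css and location /static never matches. A hedged sketch of the fix, keeping the nginx config as-is and changing only the annotation:

annotations:
  kubernetes.io/ingress.class: traefik
  # PathPrefix matches /static/... but forwards the path unchanged,
  # so nginx's "location /static" sees the prefix it expects
  traefik.frontend.rule.type: PathPrefix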

Is there a point of having pod-level nginx when using nginx ingress?

I was wondering if I should keep the pod-level nginx in the implementation below.
I was previously using a normal ingress and kube-lego after migrating from VMs, and now I am using cert-manager and GKE.
My Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: myapp-static-ip
    kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.allow-http: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/rewrite-target: /
    certmanager.k8s.io/cluster-issuer: letsencrypt
  namespace: default
spec:
  tls:
  - hosts:
    - myapp.com
    secretName: myapp-crt
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp
          servicePort: http
My service:
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32111
    protocol: "TCP"
    name: http
  selector:
    app: myapp
My Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: gcr.io/myapp-1/myapp:latest
        imagePullPolicy: Always
        env:
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: cloudsql
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql
              key: password
        - name: STATIC_ROOT
          value: https://storage.googleapis.com/myapp-api/static/
        - name: STATIC_URL
          value: https://storage.googleapis.com/myapp-api/static/
        - name: MEDIA_ROOT
          value: /myapp/media
        - name: MEDIA_URL
          value: http://myapp.com/media/
      - name: nginx
        image: nginx
        command: [nginx, -g, 'daemon off;']
        imagePullPolicy: Always
        volumeMounts:
        - name: api-nginx-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
        - name: myapp-media
          mountPath: /myapp/media/
        ports:
        - containerPort: 80
      - image: b.gcr.io/cloudsql-docker/gce-proxy:1.05
        name: cloudsql-proxy
        command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=myapp-1:europe-west1:myapp-api=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-oauth-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: ssl-certs
          mountPath: /etc/ssl/certs
        - name: cloudsql
          mountPath: /cloudsql
        - name: myapp-media
          mountPath: /myapp/media
      volumes:
      - name: cloudsql-oauth-credentials
        secret:
          secretName: cloudsql-oauth-credentials
      - name: cloudsql
        emptyDir: {}
      - name: api-nginx-config
        configMap:
          name: api-nginx-config
      - name: myapp-media
        persistentVolumeClaim:
          claimName: myapp-media
my nginx conf:
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-nginx-config
data:
  nginx.conf: |
    events {
      worker_connections 1024;
    }
    http {
      upstream api {
        server 127.0.0.1:8080 fail_timeout=0;
      }
      server {
        access_log /var/log/nginx/http-access.log;
        error_log /var/log/nginx/http-error.log;
        listen 80;
        listen [::]:80;
        server_name myapp.com;
        location /media/ {
          alias /myapp/media;
        }
        location = /favicon.ico {
          access_log off;
          log_not_found off;
        }
        location / {
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header Host $host;
          proxy_set_header X-NginX-Proxy true;
          proxy_pass http://127.0.0.1:8080/;
        }
      }
    }
Is it serving any major purpose, given that I could map myapp/media directly to /media in the volume mount and my TLS is handled by the ingress? My main concern is the pod-level nginx, as I highlighted earlier: is it useless in this case? Is it just baggage I am carrying over from the previous implementation?
Generally, there is not really a point to having an extra nginx pod. If you do, you have something of a double ingress: an nginx ingress controller pod already has nginx in it, and you can scale that up or down.
One reason you might want to keep it is backward compatibility: if, for example, you want to move to an ingress gradually, you can create the new nginx ingress, flip traffic so it flows through both the new ingress and your own nginx until all your pods are flipped, then remove your own nginx gradually until none are left.
Another reason is to support a very specific nginx configuration that the nginx ingress controller does not support yet.
You may need to run your own nginx as a Deployment for the reasons listed in the answer above; you may also need to scale the nginx Deployment, say to 10 replicas, and you can't scale an Ingress like that. But in any case, you only need one of them.
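If the sidecar's only remaining job is serving /media (TLS already terminates at the ingress), one hedged alternative is an extra path rule on the existing Ingress; myapp-static here is a hypothetical service backed by the same myapp-media PVC:

rules:
- host:
  http:
    paths:
    - path: /media
      backend:
        serviceName: myapp-static   # hypothetical static-file server mounting the myapp-media PVC
        servicePort: http
    - path: /
      backend:
        serviceName: myapp
        servicePort: http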

service not deployed onto NGINX kubernetes

So this is my current setup.
I have a k8s cluster with the nginx ingress controller installed; I installed it using Helm.
I have a simple apple service as below:
kind: Pod
apiVersion: v1
metadata:
  name: apple-app
  labels:
    app: apple
spec:
  containers:
  - name: apple-app
    image: hashicorp/http-echo
    args:
    - "-text=apple"
---
kind: Service
apiVersion: v1
metadata:
  name: apple-service
spec:
  selector:
    app: apple
  ports:
  - port: 5678 # Default port for image
and then I did a kubectl apply -f apples.yaml.
Now I have an ingress.yaml as below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /apple
        backend:
          serviceName: apple-service
          servicePort: 5678
and then I did kubectl apply -f ingress.yaml.
My ingress controller doesn't have an external IP address.
But even without the external IP, I did a
kubectl exec -it nginxdeploy-nginx-ingress-controller-5d6ddbb677-774xc /bin/bash
and tried doing a curl -kL http://localhost/apples,
and it's giving me a 503 error.
Can anybody help with this?
I've tested your configuration, and it seems to be working fine to me.
Pod responds fine:
$ kubectl describe pod apple-app
Name:         apple-app
Namespace:    default
Node:         kube-helm/10.156.0.2
Start Time:   Mon, 10 Sep 2018 11:53:57 +0000
Labels:       app=apple
Annotations:  <none>
Status:       Running
IP:           192.168.73.73
...
$ curl http://192.168.73.73:5678
apple
Service responds fine:
$ kubectl get service
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
apple-service   ClusterIP   10.111.93.194   <none>        5678/TCP   1m
$ curl http://10.111.93.194:5678
apple
Ingress also responds fine, but by default it redirects http to https:
$ kubectl exec -it nginx-ingress-controller-6c9fcdf8d9-ggrcs -n ingress-nginx /bin/bash
www-data@nginx-ingress-controller-6c9fcdf8d9-ggrcs:/etc/nginx$ curl http://localhost/apple
<html>
<head><title>308 Permanent Redirect</title></head>
<body bgcolor="white">
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx/1.13.12</center>
</body>
</html>
www-data@nginx-ingress-controller-6c9fcdf8d9-ggrcs:/etc/nginx$ curl -k https://localhost/apple
apple
If you check the nginx configuration in the controller pod, you will see the redirect configuration for the /apple location:
www-data@nginx-ingress-controller-6c9fcdf8d9-ggrcs:/etc/nginx$ more /etc/nginx/nginx.conf
...
location /apple {
    set $namespace     "default";
    set $ingress_name  "example-ingress";
    set $service_name  "apple-service";
    set $service_port  "5678";
    set $location_path "/apple";
    rewrite_by_lua_block {
    }
    log_by_lua_block {
        monitor.call()
    }
    if ($scheme = https) {
        more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains";
    }
    port_in_redirect off;
    set $proxy_upstream_name "default-apple-service-5678";
    # enforce ssl on server side
    if ($redirect_to_https) {
        return 308 https://$best_http_host$request_uri;
    }
    client_max_body_size "1m";
    proxy_set_header Host $best_http_host;
    # Pass the extracted client certificate to the backend
    # Allow websocket connections
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header X-Request-ID $req_id;
    proxy_set_header X-Real-IP $the_real_ip;
    proxy_set_header X-Forwarded-For $the_real_ip;
    proxy_set_header X-Forwarded-Host $best_http_host;
    proxy_set_header X-Forwarded-Port $pass_port;
    proxy_set_header X-Forwarded-Proto $pass_access_scheme;
    proxy_set_header X-Original-URI $request_uri;
    proxy_set_header X-Scheme $pass_access_scheme;
    # Pass the original X-Forwarded-For
    proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
    # mitigate HTTPoxy Vulnerability
    # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
    proxy_set_header Proxy "";
    # Custom headers to proxied server
    proxy_connect_timeout 5s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;
    proxy_buffering "off";
    proxy_buffer_size "4k";
    proxy_buffers 4 "4k";
    proxy_request_buffering "on";
    proxy_http_version 1.1;
    proxy_cookie_domain off;
    proxy_cookie_path off;
    # In case of errors try the next upstream server before returning an error
    proxy_next_upstream error timeout;
    proxy_next_upstream_tries 3;
    proxy_pass http://default-apple-service-5678;
    proxy_redirect off;
}
You can disable this default behavior by adding annotations:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /apple
        backend:
          serviceName: apple-service
          servicePort: 5678
www-data@nginx-ingress-controller-6c9fcdf8d9-ggrcs:/etc/nginx$ curl http://localhost/apple
apple
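One more detail that may be worth checking in the original report: the curl there was against /apples, while the Ingress path is /apple, so an unmatched path would fall through to the controller's default backend regardless of the redirect behavior.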
