I have had many services operating happily behind an Nginx Ingress on EKS for some time. Recently I've been trying to deploy a Next.js app behind this Ingress, but I can't get it to work.
The only solutions I can find online involve adding various headers as annotations in the ingress.yaml, but these have no effect. I can't get past the 404: a blank page loads because Next.js can't fetch the files it needs.
The 404 comes from Next.js rather than Nginx, so the request is at least reaching the container. The app works correctly when run locally with Docker.
I've tried variations of the config below with no success. I'm not sure whether the rewrite is interfering with things, but it doesn't seem to be.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /$3
    nginx.ingress.kubernetes.io/configuration-snippet: |
      location /app {
        #proxy_pass ; Is this needed in an Ingress?
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
      }
  labels:
    app: app-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      # Any path containing "file"
      - path: /(/|$)(((.*).*(file).*))
        pathType: Prefix
        backend:
          service:
            name: file-api
            port:
              number: 80
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: nextjs-service
            port:
              number: 80
Am I using the wrong values in the annotation or is my approach wrong entirely?
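One direction worth trying (a sketch only, not verified against this setup): Next.js generates its asset URLs under /_next/, so when the app is served under a /app prefix, a common fix is to build it with basePath: '/app' in next.config.js and route the prefix straight through, with no rewrite at all. Assuming basePath is set that way, the Ingress could be as simple as:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: example.com
    http:
      paths:
      # no rewrite: the app is built with basePath '/app', so /app and
      # /app/_next/... are forwarded to the Next.js service unchanged
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: nextjs-service
            port:
              number: 80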
Related
I am trying to achieve the proxy_pass settings below. Basically, one of the services is listening on subdomain.example.com/guacamole, but I want to serve it as subdomain.example.com.
location / {
    proxy_pass http://guacamole:8080/guacamole/;
    proxy_buffering off;
    proxy_http_version 1.1;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    proxy_cookie_path /guacamole/ /;
    access_log off;
    # allow large uploads (default=1m)
    # 4096m = 4GByte
    client_max_body_size 4096m;
}
Below is the Nginx Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: guacamole-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - subdomain.example.com
  rules:
  - host: subdomain.example.com
    http:
      paths:
      - path: /guacamole
        backend:
          serviceName: service-guacamole-frontend
          servicePort: 8080
I tried using nginx.ingress.kubernetes.io/rewrite-target: / but it didn't work.
Replacing path: /guacamole with path: / should do the trick.
rules:
- host: subdomain.example.com
  http:
    paths:
    - path: / # replace `/guacamole` with `/`
      backend:
        serviceName: service-guacamole-frontend
        servicePort: 8080
You should use the app-root annotation.
From the nginx-ingress docs:
If the Application Root is exposed in a different path and needs to be redirected, set the annotation nginx.ingress.kubernetes.io/app-root to redirect requests for /.
Try using:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: guacamole-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/app-root: /guacamole
spec:
  tls:
  - hosts:
    - subdomain.example.com
  rules:
  - host: subdomain.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-guacamole-frontend
          servicePort: 8080
Here you can find another example.
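For completeness, a third pattern sometimes used with the community ingress-nginx controller is a capture-group rewrite that prepends /guacamole on the way to the backend, which mirrors the original proxy_pass more closely. This is a sketch only; it does not reproduce the proxy_cookie_path rewrite from the Nginx config, so Guacamole's cookie paths may still need attention:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: guacamole-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    # prepend /guacamole before proxying to the backend
    nginx.ingress.kubernetes.io/rewrite-target: /guacamole/$1
spec:
  tls:
  - hosts:
    - subdomain.example.com
  rules:
  - host: subdomain.example.com
    http:
      paths:
      - path: /(.*)
        backend:
          serviceName: service-guacamole-frontend
          servicePort: 8080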
I have a Rancher cluster (v2.4.5) running on custom nodes with the following configuration:
External machine (example.com):
Runs Rancher server on port 8443;
Runs NGINX with (among other unrelated things) the following basic configuration:
user nginx;
worker_processes 4;
worker_rlimit_nofile 40000;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 8192;
}

http {
    upstream rancher_servers {
        least_conn;
        server <MY_NODE_IP>:443 max_fails=3 fail_timeout=5s;
    }

    server {
        listen 443 ssl http2;
        server_name example.com service1.example.com service2.example.com;

        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass https://rancher_servers;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_read_timeout 900s;
        }
    }
}
Internal machine (MY_NODE_IP):
Runs Rancher agent (etcd/control plane/worker)
Firewall rules are OK; I can deploy simple web apps that run on port 80 only and they get redirected to HTTPS automatically. An example of the YAML I'm using to deploy things is the following:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: www-deployment
  labels:
    app: www
spec:
  replicas: 1
  selector:
    matchLabels:
      app: www
  template:
    metadata:
      labels:
        app: www
    spec:
      containers:
      - name: www
        image: my-www-image
---
kind: Service
apiVersion: v1
metadata:
  name: www-service
spec:
  selector:
    app: www
  ports:
  - port: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: www-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: service1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: www-service
          servicePort: 80
The problem arises when I try to deploy a service that runs on both ports 80 and 443 but, when requested on port 80, automatically redirects to port 443. In that case, if I specify the Ingress as below (with port 443), I get a Bad Gateway response that does not come from the host machine's NGINX. I can tell because my host machine runs nginx/1.18.0 and the response comes from nginx/1.17.10.
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: www-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: service1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: www-service
          servicePort: 443
But then, if I change the configuration above to servicePort: 80, I keep getting ERR_TOO_MANY_REDIRECTS, because it enters an infinite loop of redirecting everything to its https:// counterpart.
Am I doing anything wrong here? Is there a workaround to make this work?
Found it. It turns out the only thing I needed to do was tell the nginx-ingress-controller that the backend was expecting HTTPS connections. The final YAML for exposing the service is the following:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: www-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: service1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: www-service
          servicePort: 443
I'm trying to set up basic auth for my test ingress rule and I can't figure out why it doesn't work. I can still access the site without a password prompt.
Versions:
EKS 1.16
Helm chart nginx-ingress-0.5.2
Nginx version 1.7.2 (also tried 1.7.0 and latest)
basic-auth secret content:
kubectl get secret basic-auth -o yaml
apiVersion: v1
data:
  auth: Zm9vOiRhcHIxJHZ4RzVoc1VQJE1KZmpNcEQ2WHdPV1RaaTFDQUdlYTEK
kind: Secret
metadata:
  creationTimestamp: "2020-07-02T04:46:58Z"
  name: basic-auth
  namespace: default
  resourceVersion: "8252"
  selfLink: /api/v1/namespaces/default/secrets/basic-auth
  uid: e3b8a6d3-009b-4a4c-ad8b-b460381933d8
type: Opaque
Ingress rule:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-world-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
spec:
  rules:
  - host: test.*****.com
    http:
      paths:
      - backend:
          serviceName: docker-hello-world-svc
          servicePort: 8088
Also, I haven't found a basic-auth section within the Nginx controller configuration file for the hello-world-ing service:
kubectl -n nginx-ingress exec -it dev-nginx-ingress-6d5f459bf5-s4qqg -- cat /etc/nginx/conf.d/default-hello-world-ing.conf
***
location / {
    proxy_http_version 1.1;
    proxy_connect_timeout 60s;
    proxy_read_timeout 60s;
    proxy_send_timeout 60s;
    client_max_body_size 1m;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_buffering on;
    proxy_pass http://default-hello-world-ing-***-docker-hello-world-svc-8088;
}
***
I haven't found anything suspicious in the controller logs.
Basic auth works fine with another Helm repo, stable/nginx-ingress, instead of nginx-stable/nginx-ingress.
The nginx-stable repository is for the commercial NGINX/NGINX Plus controller, which uses different configuration, while the official Helm stable/nginx-ingress chart uses the open source nginx ingress controller.
I am running Kubernetes v1.16 under Docker Desktop for Windows. I have installed the nginx-ingress controller v1.7.9 using Helm. I have updated my hosts file to have the following entry:
127.0.0.1 application.local
I have a backend service named hedgehog-service.
The following ingress definition correctly forwards requests to the backend:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ml-zoo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: application.local
    http:
      paths:
      - path: /hedgehog/
        backend:
          serviceName: hedgehog-service
          servicePort: 80
curl application.local/hedgehog works as expected and hits the backend service.
However, in order to use the backend service correctly I need to rewrite the target, removing the URL prefix /hedgehog. Hence I have the following ingress definition:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ml-zoo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: application.local
    http:
      paths:
      - path: /hedgehog(/|$)(.*)
        backend:
          serviceName: hedgehog-service
          servicePort: 80
As indicated here: https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite-target
Now when I curl application.local/hedgehog/test, the ingress controller does not communicate with the backend but, according to the logs, attempts to open a file:
2020/06/23 12:46:48 [error] 708#708: *792 open() "/etc/nginx/html/hedgehog/test" failed (2: No such file or directory), client: 192.168.65.3, server: application.local, request: "GET /hedgehog/test HTTP/1.1", host: "application.local"
192.168.65.3 - - [23/Jun/2020:12:46:48 +0000] "GET /hedgehog/test HTTP/1.1" 404 153 "-" "curl/7.65.3" "-"
Here is the content of /etc/nginx/conf.d/default-ml-zoo-ingress:
# configuration for default/ml-zoo-ingress
upstream default-ml-zoo-ingress-application.local-hedgehog-service-80 {
    zone default-ml-zoo-ingress-application.local-hedgehog-service-80 256k;
    random two least_conn;
    server 10.1.0.48:80 max_fails=1 fail_timeout=10s max_conns=0;
}

server {
    listen 80;
    server_tokens on;
    server_name application.local;

    location /hedgehog(/|$)(.*) {
        proxy_http_version 1.1;
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 1m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering on;
        proxy_pass http://default-ml-zoo-ingress-application.local-hedgehog-service-80;
    }
}
Does anyone know why my URLs are not getting rewritten and the requests are not being delivered to the backend service?
Thanks in advance!
OK, having played around with this for hours, once I had written the question my next Google search turned up an answer.
I had installed nginx using Helm from the stable/nginx-ingress repo. However, according to this issue, https://github.com/kubernetes/ingress-nginx/issues/5756, that is in fact a legacy repository. I uninstalled my controller and changed the repository to ingress-nginx:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
and everything appears to be working as expected. I'm still not sure why the previous controller installation failed, but I can get back to work :)
EDIT: For the aid of others who might end up here: in hindsight I wonder whether the reinstallation simply meant that I deleted and recreated my Ingress, which might have solved the original problem on its own. In other words, make sure you try recreating the Ingress before reinstalling the ingress controller with Helm.
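For anyone landing here on a newer cluster where the networking.k8s.io/v1beta1 Ingress API has been removed, the same rewrite example translates roughly to the v1 schema as follows (a sketch only; the service name and port are taken from the question above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ml-zoo-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: application.local
    http:
      paths:
      # regex paths are declared as ImplementationSpecific with ingress-nginx
      - path: /hedgehog(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: hedgehog-service
            port:
              number: 80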
My aim is to route a local HTTP service that is not in Kubernetes through a Kubernetes Ingress.
The configuration below works, so I'm able to open http://owncloud.example.com or https://owncloud.example.com from outside.
Here is the Kubernetes configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: owncloud
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/secure-backends: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/server-snippet: |
      location ~ ^/(.*) {
        proxy_pass http://192.168.250.100:8260/$1;
        proxy_set_header Host $host;
      }
      location ~ ^/api(.*) {
        proxy_pass http://192.168.250.100:8261/$1;
        proxy_set_header Host $host;
      }
spec:
  tls:
  - hosts:
    - owncloud.example.com
    secretName: owncloud-tls
  rules:
  - host: owncloud.example.com
The issue is that I see some strange errors in the browser's JavaScript console related to "meta". They come from deep inside the JavaScript code, so unfortunately there is no useful log. The website behaves oddly in a few places, while locally it works fine.
So it seems this has something to do with the Kubernetes Ingress.
Previously I used plain Nginx exposed to the outside and this worked great:
location / {
    proxy_pass http://192.168.250.100:8260/;
}
If I add exactly the same block to the server-snippet, the website doesn't load at all; requests fall through to the default Ingress backend.
How do I properly proxy_pass traffic from a Kubernetes Ingress to another service running outside of Kubernetes, so that nothing gets lost along the way?
It would also be nice to dig into the server-snippet to understand how Kubernetes Ingress configuration differs from standard Nginx usage.
Whatever options I try, I have not been able to find a way to proxy_pass to a different HTTP service when accessing the /api path.
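As an aside, another pattern sometimes used to route an out-of-cluster backend through an Ingress, without any snippets, is a Service with no selector plus a manually managed Endpoints object; the Ingress can then target it like any in-cluster Service. A sketch only, assuming the backend stays at 192.168.250.100:8260 (the object names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: owncloud-external
  namespace: default
spec:
  ports:
  - port: 8260
---
apiVersion: v1
kind: Endpoints
metadata:
  # must match the Service name so the Service routes to this address
  name: owncloud-external
  namespace: default
subsets:
- addresses:
  - ip: 192.168.250.100
  ports:
  - port: 8260
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: owncloud-external
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: owncloud.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: owncloud-external
          servicePort: 8260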
----------------- Updates -----------------
I have collected all the issues for comparison.
Locally - the working one:
If I click on manifest.json, it shows "Nothing to preview". If I use wget to download that JSON, I can see <!DOCTYPE html> in the first line; it's an HTML file that gets downloaded. But I can confirm this local version has been working perfectly for years, so this screenshot is just to show how it looks when it works.
Through Ingress - not working:
I logged in successfully and didn't spot anything weird from a user-experience point of view, but the issue exists:
I tried to log out and I'm not able to. It throws the Owncloud-specific error "Access forbidden: CSRF check failed", and on the console I see this:
If I go to the https://owncloud.example.com/login page on purpose:
If I try to access files on this Owncloud, it also fails with 400:
If I add additional annotations:
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/rewrite-target: /
  nginx.ingress.kubernetes.io/server-snippet: |
    location ~ ^/?(.*) {
      proxy_pass http://192.168.250.100:8260/$1;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto https;
    }
  # Owncloud tuning
  nginx.ingress.kubernetes.io/proxy-body-size: "500000m"
  nginx.ingress.kubernetes.io/proxy-max-temp-file-size: "500000m"
  nginx.ingress.kubernetes.io/proxy-read-timeout: "36000s"
  nginx.ingress.kubernetes.io/proxy-send-timeout: "36000s"
  nginx.ingress.kubernetes.io/proxy-connect-timeout: "36000s"
  nginx.ingress.kubernetes.io/proxy-buffering: "off"
  nginx.ingress.kubernetes.io/proxy-redirect-from: "off"
  nginx.ingress.kubernetes.io/connection-proxy-header: "keep-alive"
To summarise:
There are no errors on the application side. The first thing that comes to mind is the /logout behaviour: I get a 412 (Precondition Failed) response, which indicates that access to the target resource has been denied, plus a 400 Bad Request error.
Does anyone have the expertise to catch this issue?
Many thanks
Finally found a working solution.
I just corrected the location and proxy_pass to fix the root cause.
So if you have a local HTTP service that sits outside of the Kubernetes cluster and you want to serve it through Ingress, you just need this:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: owncloud
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/secure-backends: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/server-snippet: |
      location ~ "^/(.*)" {
        proxy_pass http://192.168.250.100:8260;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        # Owncloud tuning
        proxy_max_temp_file_size 0;
        client_max_body_size 500000m;
        proxy_read_timeout 36000s;
        proxy_send_timeout 36000s;
        proxy_connect_timeout 36000s;
        proxy_buffering off;
        proxy_redirect off;
        proxy_set_header Connection "Keep-Alive";
      }
    # Owncloud tuning
    nginx.ingress.kubernetes.io/proxy-max-temp-file-size: "0"
    nginx.ingress.kubernetes.io/proxy-body-size: "500000m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "36000s"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "36000s"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "36000s"
    nginx.ingress.kubernetes.io/proxy-buffering: "off"
    nginx.ingress.kubernetes.io/proxy-redirect-from: "off"
    nginx.ingress.kubernetes.io/connection-proxy-header: "keep-alive"
spec:
  rules:
  - host: owncloud.example.com
  tls:
  - hosts:
    - owncloud.example.com
    secretName: owncloud-example-tls
Remove the Owncloud tuning block if you have another service.
Remove the ssl, secure, X-Forwarded-Proto and tls: bits if you don't need HTTPS.
You can add more location blocks, such as ~ "^/api/(.*)", so it works like normal Nginx (see the sketch below).
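For example, routing /api to the second backend from the original question (assumed to still be listening on 192.168.250.100:8261) would mean adding another location to the same server-snippet, placed before the catch-all so the more specific regex is matched first. A sketch only:

nginx.ingress.kubernetes.io/server-snippet: |
  # /api goes to the second backend; this block must come before the catch-all
  location ~ "^/api/(.*)" {
    proxy_pass http://192.168.250.100:8261;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https;
  }
  location ~ "^/(.*)" {
    proxy_pass http://192.168.250.100:8260;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https;
    # ... Owncloud tuning as above ...
  }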
In my case this was useful for routing some local Docker Compose and old-fashioned services to the outside through a Kubernetes Ingress.
P.S. Don't forget to vote for #mWatney's comment if you came here to solve the Owncloud CSRF error.