Serving an HTTP/HTTPS service that lives outside the Kubernetes cluster through Ingress (nginx)

My aim is to route a local HTTP service that is not in Kubernetes through a Kubernetes Ingress.
The configuration below works, so I'm able to open http://owncloud.example.com or https://owncloud.example.com from outside.
Here is the Kubernetes configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: owncloud
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/secure-backends: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/server-snippet: |
      location ~ ^/(.*) {
        proxy_pass http://192.168.250.100:8260/$1;
        proxy_set_header Host $host;
      }
      location ~ ^/api(.*) {
        proxy_pass http://192.168.250.100:8261/$1;
        proxy_set_header Host $host;
      }
spec:
  tls:
    - hosts:
        - owncloud.example.com
      secretName: owncloud-tls
  rules:
    - host: owncloud.example.com
The issue is that I see some strange errors in the browser's JavaScript console related to "meta". They come from deep inside the JavaScript code, so unfortunately there is no useful log. The website behaves strangely in a few places, while locally it works fine.
So it seems this has something to do with the Kubernetes Ingress.
Previously I used a plain Nginx connected to the outside, and this worked great:
location / {
    proxy_pass http://192.168.250.100:8260/;
}
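(A note for later readers, since this distinction drives everything below: nginx treats proxy_pass differently depending on whether a URI part is present. A minimal sketch of the three variants that appear in this question, all using the same backend address:)

# With a URI part ("/"), the part of the request URI matching the
# location is replaced by that URI before forwarding.
location / {
    proxy_pass http://192.168.250.100:8260/;
}

# Without a URI part, the original request URI is forwarded unchanged,
# including percent-encoded characters.
location / {
    proxy_pass http://192.168.250.100:8260;
}

# Rebuilding the URI from a regex capture forwards the *decoded* $1,
# which can corrupt encoded paths on the way to the backend.
location ~ ^/(.*) {
    proxy_pass http://192.168.250.100:8260/$1;
}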
If I add exactly the same block to the server-snippet, the website doesn't load at all; requests get caught by the default Ingress backend.
How do I properly proxy_pass traffic from a Kubernetes Ingress to another service which is running outside of Kubernetes, so that nothing goes missing on the way through the proxy?
It would also be nice to have an exploration of server-snippet, to understand how Kubernetes Ingress configuration differs from standard Nginx usage.
With the other options I tried, I was not able to find a way to proxy_pass to a different HTTP backend when accessing the /api path.
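(A sketch of that exploration, based on how ingress-nginx documents its snippet annotations: the content of nginx.ingress.kubernetes.io/server-snippet is injected verbatim into the server block the controller generates for the host, alongside the location blocks generated from spec.rules. Roughly:)

server {
    server_name owncloud.example.com;
    # ... controller-generated listen/ssl/logging directives ...

    # server-snippet content is inlined here; regex locations take
    # precedence over generated prefix locations, so they win:
    location ~ ^/(.*) { ... }
    location ~ ^/api(.*) { ... }

    # generated from the Ingress rules; with no http.paths defined,
    # unmatched requests fall through to the default backend
    location / { ... }
}

This also suggests why adding a plain location / inside the snippet breaks everything: it collides with the location / the controller generates itself, so the rendered config is invalid and the default backend answers instead (an assumption on my part, not verified against the controller logs).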
----------------- Updates -----------------
I have collected all the issues for comparison.
Locally (the working one):
If I click on manifest.json, it shows "Nothing to preview". If I use wget to download that JSON, I can see <!DOCTYPE html> on the first line, so it's an HTML file that gets downloaded. But I can confirm this local version has been working perfectly for years; this screenshot is just to show how it looks when it works.
Through Ingress (the broken one):
I logged in successfully and didn't spot anything weird from a user-experience point of view, but the issue exists:
I tried to log out and am not able to. It throws the Owncloud-specific error "Access forbidden: CSRF check failed", and in the console I see this:
If I go to the https://owncloud.example.com/login page on purpose:
If I try to access files on this Owncloud, it also fails with a 400:
If I add additional annotations:
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/rewrite-target: /
  nginx.ingress.kubernetes.io/server-snippet: |
    location ~ ^/?(.*) {
      proxy_pass http://192.168.250.100:8260/$1;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto https;
    }
  # Owncloud tuning
  nginx.ingress.kubernetes.io/proxy-body-size: "500000m"
  nginx.ingress.kubernetes.io/proxy-max-temp-file-size: "500000m"
  nginx.ingress.kubernetes.io/proxy-read-timeout: "36000s"
  nginx.ingress.kubernetes.io/proxy-send-timeout: "36000s"
  nginx.ingress.kubernetes.io/proxy-connect-timeout: "36000s"
  nginx.ingress.kubernetes.io/proxy-buffering: "off"
  nginx.ingress.kubernetes.io/proxy-redirect-from: "off"
  nginx.ingress.kubernetes.io/connection-proxy-header: "keep-alive"
Summary
There are no errors on the application side, so the first thing that comes to mind is the /logout behaviour. I get an HTTP 412 (Precondition Failed, a client error indicating that access to the target resource has been denied) as well as a 400 Bad Request error.
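(For anyone comparing: a quick way to see what differs is to diff the response headers the backend sends directly against what arrives through the Ingress. A sketch, where status.php is a standard Owncloud endpoint and the addresses match the setup above:)

curl -sv  http://192.168.250.100:8260/status.php -o /dev/null
curl -skv https://owncloud.example.com/status.php -o /dev/null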
Any expertise to help catch this issue?
Many thanks

Finally found a working solution.
I just corrected the location and proxy_pass lines, which solved the root cause.
So if you have some local HTTP service which is outside of the Kubernetes cluster and you want to serve it through Ingress, you just need this:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: owncloud
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/secure-backends: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/server-snippet: |
      location ~ "^/(.*)" {
        proxy_pass http://192.168.250.100:8260;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        # Owncloud tuning
        proxy_max_temp_file_size 0;
        client_max_body_size 500000m;
        proxy_read_timeout 36000s;
        proxy_send_timeout 36000s;
        proxy_connect_timeout 36000s;
        proxy_buffering off;
        proxy_redirect off;
        proxy_set_header Connection "Keep-Alive";
      }
    # Owncloud tuning
    nginx.ingress.kubernetes.io/proxy-max-temp-file-size: "0"
    nginx.ingress.kubernetes.io/proxy-body-size: "500000m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "36000s"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "36000s"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "36000s"
    nginx.ingress.kubernetes.io/proxy-buffering: "off"
    nginx.ingress.kubernetes.io/proxy-redirect-from: "off"
    nginx.ingress.kubernetes.io/connection-proxy-header: "keep-alive"
spec:
  rules:
    - host: owncloud.example.com
  tls:
    - hosts:
        - owncloud.example.com
      secretName: owncloud-example-tls
Remove the Owncloud tuning block if you are serving another service.
Remove the ssl, secure, and X-Forwarded-Proto bits, plus the tls: section, if you don't need HTTPS.
You can add more location blocks, such as ~ "^/api/(.*)", so it works like normal Nginx; see the sketch below.
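(A sketch of such an extra block, assuming a second backend on port 8261 as in the question. Note that proxy_pass inside a regex location cannot carry a URI part, so the /api prefix is forwarded to the backend as-is; add a rewrite first if the backend should not see it:)

location ~ "^/api/(.*)" {
    proxy_pass http://192.168.250.100:8261;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https;
}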
In my case it was useful for routing some local Docker Compose and old-fashioned services to the outside through the Kubernetes Ingress.
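(A more Kubernetes-native alternative, sketched here without having used it for this setup: represent the external host as a selector-less Service plus a manually managed Endpoints object, and point an ordinary Ingress rule at it, avoiding server-snippet entirely. The name owncloud-external is made up for the example:)

apiVersion: v1
kind: Service
metadata:
  name: owncloud-external
  namespace: default
spec:
  ports:
    - port: 8260
      targetPort: 8260
---
apiVersion: v1
kind: Endpoints
metadata:
  name: owncloud-external   # must match the Service name
  namespace: default
subsets:
  - addresses:
      - ip: 192.168.250.100
    ports:
      - port: 8260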
P.S. Don't forget to vote for the #mWatney comment if you came here to solve the Owncloud CSRF error.

Related

Nginx Ingress + nextjs -> 404

I have had many services operating happily behind an Nginx Ingress on EKS for some time. Recently I've been trying to deploy a next.js app behind this Ingress, but I can't get it to work.
The only solutions I can find online seem to involve adding various headers as annotations in the ingress.yaml, but these have no effect. I can't get past the 404. A blank page is loaded, as next.js can't load the files it needs.
The 404 stems from next.js rather than Nginx, so the request is at least still reaching the container. The app works correctly when run locally using Docker.
I've tried variations of the config below with no success. I'm not sure whether the rewrite is interfering with things, but it doesn't seem to be.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /$3
    nginx.ingress.kubernetes.io/configuration-snippet: |
      location /app {
        #proxy_pass ; Is this needed in an Ingress?
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
      }
  labels:
    app: app-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          # Any path containing "file"
          - path: /(/|$)(((.*).*(file).*))
            pathType: Prefix
            backend:
              service:
                name: file-api
                port:
                  number: 80
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: nextjs-service
                port:
                  number: 80
Am I using the wrong values in the annotation, or is my approach wrong entirely?
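(An observation for readers, since the question was left open: rewrite-target: /$3 applies to every path in this Ingress, and the /app path defines no capture groups, so $3 expands to nothing there. The ingress-nginx rewrite example pairs rewrite-target: /$2 with a capture-group path; a sketch of that shape for the /app rule:)

- path: /app(/|$)(.*)
  pathType: ImplementationSpecific
  backend:
    service:
      name: nextjs-service
      port:
        number: 80

Even with the rewrite fixed, a next.js app served under a subpath usually also needs basePath set in next.config.js, otherwise it requests its chunks from the site root, which would explain the blank page.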

Gitlab Kubernetes Agent behind Nginx Reverse Proxy

Dear friendly developer,
I am trying to register a GitLab Kubernetes Agent inside a Minikube cluster with a self-hosted GitLab instance. The GitLab instance is a dockerized Omnibus installation. It does not have any exposed ports; instead, I chose to use an nginx within the same Docker network to proxy_pass requests to GitLab.
When I deploy the agent to the cluster and the container is running, it logs these errors:
{"level":"warn","time":"2022-02-26T00:12:59.647Z","msg":"GetConfiguration.Recv failed","error":"rpc error: code = Unauthenticated desc = unauthenticated","correlation_id":"01FWSNZ31HRVTAAD5J5700BBXH"}
{"level":"error","time":"2022-02-26T00:13:28.271Z","msg":"Error handling a connection","mod_name":"reverse_tunnel","error":"rpc error: code = Unauthenticated desc = unauthenticated","correlation_id":"01FWSP040J2CRGF5WFHMEX1ACC"}
Visiting http://gitlab.local/api/v4/internal/kubernetes/agent_info results in:
{
  "message": "KAS JWT authentication invalid"
}
The agent successfully connects to GitLab when I expose the GitLab ports directly on localhost (and change the agent's Kubernetes config accordingly). That is why I am quite sure it has to be a problem with my nginx websocket configuration.
I have triple-checked that the token inside the Kubernetes secret for the agent matches the base64 registration token generated by GitLab.
This is an excerpt of my docker-compose file for gitlab:
services:
  gitlab:
    image: gitlab/gitlab-ee:latest
    container_name: gitlab
    restart: always
    hostname: gitlab.local
    networks:
      - ci-cd
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://gitlab.local'
        registry_external_url 'http://gitlab.local:5050'
        registry['enable'] = true
        registry['env'] = {
          "REGISTRY_HTTP_RELATIVEURLS" => true
        }
        gitlab_kas['enable'] = true
        gitlab_kas['gitlab_address'] = 'http://gitlab.local'
    volumes:
      - $GITLAB_HOME/etc:/etc/gitlab:rw
      - $GITLAB_HOME/opt:/var/opt/gitlab:rw
      - $GITLAB_HOME/log:/var/log/gitlab:rw
    shm_size: "512m"
    ulimits:
      sigpending: 62793
      nproc: 131072
      nofile: 60000
      core: 0
    sysctls:
      net.core.somaxconn: 1024
The default API path that GitLab uses for the agent websocket connection is /-/kubernetes-agent/.
This is my nginx configuration:
upstream gitlab_container {
    server gitlab;
}
upstream gitlab_registry_container {
    server gitlab:5050;
}
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
server {
    listen 80;
    listen [::]:80;
    server_name gitlab.local;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header Host $host;
        proxy_pass http://gitlab_container;
        proxy_ssl_session_reuse off;
        proxy_redirect off;
        proxy_cache_bypass $http_upgrade;
    }
    location /-/kubernetes-agent/ {
        proxy_pass http://gitlab;
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header Host $host;
        proxy_set_header Sec-WebSocket-Protocol $http_sec_websocket_protocol;
        proxy_set_header Sec-WebSocket-Extensions $http_sec_websocket_extensions;
        proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
        proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_cache_bypass $http_upgrade;
    }
}
server {
    listen 5050;
    listen [::]:5050;
    server_name gitlab.local;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header Host $host;
        proxy_pass http://gitlab_registry_container;
        proxy_redirect off;
        proxy_ssl_session_reuse off;
        proxy_cache_bypass $http_upgrade;
    }
}
This is the kubernetes configuration for my agent:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab-agent
  namespace: gitlab-kubernetes-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitlab-agent
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: "8080"
        prometheus.io/scrape: "true"
      labels:
        app: gitlab-agent
    spec:
      hostAliases:
        - ip: ${INTERNAL_HOST_IP}
          hostnames:
            - "gitlab.local"
      containers:
        - args:
            - --token-file=/config/token
            - --kas-address
            - ws://gitlab.local/-/kubernetes-agent/
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          image: registry.gitlab.com/gitlab-org/cluster-integration/gitlab-agent/agentk:stable
          livenessProbe:
            httpGet:
              path: /liveness
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20
          name: agent
          readinessProbe:
            httpGet:
              path: /readiness
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          volumeMounts:
            - mountPath: /config
              name: token-volume
      serviceAccountName: gitlab-agent
      volumes:
        - name: token-volume
          secret:
            secretName: ${GITLAB_AGENT_TOKEN_NAME}
The handshake and the protocol upgrade seem to be working fine, as my nginx log shows:
172.19.0.1 - - [26/Feb/2022:00:29:32 +0000] "GET /-/kubernetes-agent/ HTTP/1.1" 101 3450 "-" "gitlab-agent/v14.8.1/86d5bf7" "-"
I guess that somehow the registration token gets lost when passing through the reverse proxy. Sadly, I cannot find any technical documentation on how the authentication works in detail.
Any clue as to what I am missing is highly appreciated!
I had exactly the same error, although probably for a different reason. My new GitLab server, due to an error of mine, was using an alternate DNS server which had an entry for GitLab's external URL pointing to my OLD GitLab server.
Everything else in the network (the Kubernetes cluster with the agent I was trying to install) was using the correct DNS and effectively pointing to the correct new GitLab server, but the GitLab server itself wasn't. I discovered it by looking at a tcpdump and noticing traffic on 443 between my old and new GitLab servers (so my advice is to trace the HTTPS traffic on your GitLab server). Took me 2 days :( These messages should be a bit more elaborate (if they gave the IPs and ports for the connection, I would have figured out my error in 2 minutes).
Hope this helps the next people with a similar problem to pinpoint the issue.
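(A sketch of the kind of capture described above, to be run on the GitLab server itself; the interface and filter are assumptions to adapt:)

# watch TLS traffic to see which peer the GitLab server is really talking to
tcpdump -nn -i any 'tcp port 443'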
Another option:
See GitLab 14.10 (April 2022):
The agent server for Kubernetes enabled by default in the Helm chart
The first step for using the agent for Kubernetes in self-managed instances is to enable the agent server, a backend service for the agent for Kubernetes.
In GitLab 14.8 we enabled the agent server for Omnibus based installations. The feature has matured in the past few months, so we are now making the agent server enabled by default in the GitLab Helm chart as well, to simplify setup for GitLab administrators. Besides being enabled by default, the agent server accepts various configuration options to customize it according to your needs.
See Documentation and Issue.
That might be easier than setting it up through NGINX.
This is confirmed with GitLab 15.1 (June 2022), which repeats the same announcement for the Helm chart.
See also, in GitLab 15.1 (June 2022):
GitLab agent for Kubernetes supports proxied connections
Many users require a proxy to connect Kubernetes clusters to GitLab. Previously, the default installation method for the GitLab agent for Kubernetes did not support proxied connections.
Now, you can use the HTTP_PROXY environment variable in the agentk Helm package to support proxied connections.
See Documentation and Issue.
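(For reference, a sketch of the Helm-based agent install these release notes refer to; the chart value names follow GitLab's documentation, but treat them as assumptions to verify against your version:)

helm repo add gitlab https://charts.gitlab.io
helm upgrade --install gitlab-agent gitlab/gitlab-agent \
  --namespace gitlab-agent --create-namespace \
  --set config.token=<your-agent-token> \
  --set config.kasAddress=ws://gitlab.local/-/kubernetes-agent/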
I don't know exactly what I did differently than the previous 10 times, but suddenly the agent connected successfully with the configuration shown above. I suppose it was one of these lines inside my nginx configuration for GitLab:
proxy_set_header Sec-WebSocket-Protocol $http_sec_websocket_protocol;
proxy_set_header Sec-WebSocket-Extensions $http_sec_websocket_extensions;
proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
Those I added last, but I cannot guarantee that was the breaking change. Good luck to everyone with similar issues reading this post.

Can't configure Kubernetes nginx ingress basic auth

I'm trying to set up basic auth for my test ingress rule, and I couldn't figure out why it doesn't work. I can still access the site without a password prompt.
Versions:
EKS 1.16
Helm chart nginx-ingress-0.5.2
Nginx version 1.7.2 (also tried 1.7.0 and latest)
basic-auth secret content:
kubectl get secret basic-auth -o yaml
apiVersion: v1
data:
  auth: Zm9vOiRhcHIxJHZ4RzVoc1VQJE1KZmpNcEQ2WHdPV1RaaTFDQUdlYTEK
kind: Secret
metadata:
  creationTimestamp: "2020-07-02T04:46:58Z"
  name: basic-auth
  namespace: default
  resourceVersion: "8252"
  selfLink: /api/v1/namespaces/default/secrets/basic-auth
  uid: e3b8a6d3-009b-4a4c-ad8b-b460381933d8
type: Opaque
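(For reference, such a secret is usually created along the lines of the ingress-nginx basic-auth example; the user foo here matches the decoded secret above:)

htpasswd -c auth foo
kubectl create secret generic basic-auth --from-file=auth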
Ingress rule:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-world-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
spec:
  rules:
    - host: test.*****.com
      http:
        paths:
          - backend:
              serviceName: docker-hello-world-svc
              servicePort: 8088
Also, I haven't found a basic-auth section within the nginx controller configuration file for the hello-world-ing service:
kubectl -n nginx-ingress exec -it dev-nginx-ingress-6d5f459bf5-s4qqg -- cat /etc/nginx/conf.d/default-hello-world-ing.conf
***
location / {
    proxy_http_version 1.1;
    proxy_connect_timeout 60s;
    proxy_read_timeout 60s;
    proxy_send_timeout 60s;
    client_max_body_size 1m;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_buffering on;
    proxy_pass http://default-hello-world-ing-***-docker-hello-world-svc-8088;
}
***
I haven't found anything suspicious in the controller logs.
Basic auth works fine with the other Helm repo, stable/nginx-ingress, instead of nginx-stable/nginx-ingress.
The nginx-stable repository is for the commercial NGINX/NGINX Plus controller, which uses a different configuration scheme, while the official Helm stable/nginx-ingress chart uses the open-source nginx ingress controller.
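(In practice the distinction shows up in the annotations: the nginx.ingress.kubernetes.io/* annotations used above are only understood by the community kubernetes/ingress-nginx controller, while NGINX Inc.'s controller expects its own nginx.org/* prefix. A sketch of installing the community controller from its current repo:)

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install my-ingress ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace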

Kubernetes NGINX ingress rewrite-target annotation breaking

I am running Kubernetes v1.16 under Docker Desktop for Windows. I have installed the nginx-ingress controller v1.7.9 using Helm, and I have updated my hosts file to have the following entry:
127.0.0.1 application.local
I have a backend service named hedgehog-service.
The following ingress definition correctly forwards request to the backend.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ml-zoo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: application.local
      http:
        paths:
          - path: /hedgehog/
            backend:
              serviceName: hedgehog-service
              servicePort: 80
curl application.local/hedgehog works as expected and hits the backend service.
However, in order to use the backend service correctly I need to rewrite the target, removing the URL prefix /hedgehog. Hence I have the following ingress definition:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ml-zoo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: application.local
      http:
        paths:
          - path: /hedgehog(/|$)(.*)
            backend:
              serviceName: hedgehog-service
              servicePort: 80
As indicated here: https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite-target
Now when I curl application.local/hedgehog/test, the ingress controller does not communicate with the backend but, according to the logs, attempts to open a file:
2020/06/23 12:46:48 [error] 708#708: *792 open() "/etc/nginx/html/hedgehog/test" failed (2: No such file or directory), client: 192.168.65.3, server: application.local, request: "GET /hedgehog/test HTTP/1.1", host: "application.local"
192.168.65.3 - - [23/Jun/2020:12:46:48 +0000] "GET /hedgehog/test HTTP/1.1" 404 153 "-" "curl/7.65.3" "-"
Here is the content of /etc/nginx/conf.d/default-ml-zoo-ingress:
# configuration for default/ml-zoo-ingress
upstream default-ml-zoo-ingress-application.local-hedgehog-service-80 {
    zone default-ml-zoo-ingress-application.local-hedgehog-service-80 256k;
    random two least_conn;
    server 10.1.0.48:80 max_fails=1 fail_timeout=10s max_conns=0;
}
server {
    listen 80;
    server_tokens on;
    server_name application.local;
    location /hedgehog(/|$)(.*) {
        proxy_http_version 1.1;
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 1m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering on;
        proxy_pass http://default-ml-zoo-ingress-application.local-hedgehog-service-80;
    }
}
Does anyone know why my URLs are not getting rewritten and the requests are not delivered to the backend service?
Thanks in advance!
OK, having played around with this for hours, once I had written the question my next Google search turned up an answer.
I had installed nginx using Helm from the stable/nginx-ingress repo. However, according to this issue, https://github.com/kubernetes/ingress-nginx/issues/5756, that is in fact a legacy repository. I uninstalled my controller and changed the repository to ingress-nginx:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
and everything appears to be working as expected. I'm still not sure why the previous controller installation failed, but I can get back to work :)
EDIT: For the aid of others who might end up here: in hindsight I wonder whether the reinstallation simply meant that I deleted and recreated my Ingress, which might have solved the original problem. In other words, make sure you try recreating the Ingress before reinstalling the ingress controller with Helm.
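(In hindsight the generated config above is itself a clue: the path shows up as a literal prefix location with no rewrite directive at all, whereas the community ingress-nginx controller renders rewrite-target roughly like the sketch below, a regex location plus a rewrite. Treat this as an illustrative approximation of its output, not an exact dump:)

location ~* "^/hedgehog(/|$)(.*)" {
    rewrite "(?i)/hedgehog(/|$)(.*)" /$2 break;
    proxy_pass http://upstream_balancer;
}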

Nginx reverse proxy to web app in kubernetes gives 404 for static web resources

Hi nginx extraordinaires,
I am using nginx as a load balancer and reverse proxy, playing the role of an external-facing API gateway in front of applications and APIs hosted in Kubernetes. These are all exposed via ingress.
The issue I am facing is that nginx gives me 404s when I try to access a standard Angular web app via the /test URL.
Nginx was set up using the following config (nginx.conf):
events {
    worker_connections 1024;
}
http {
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    ssl on;
    ssl_certificate /etc/letsencrypt/live/myhostname/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myhostname/privkey.pem;
    client_max_body_size 1G;
    upstream k8snodes {
        server 192.168.2.10;
        server 192.168.2.11;
    }
    server {
        listen 443 ssl;
        location / {
            proxy_pass http://k8snodes/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
        include /etc/nginx/conf.d/*.conf;
    }
}
Sitting behind /test is a Kubernetes ingress controller that serves the Angular application. I can confirm the application can be accessed fine when going directly through the ingress, so there is something nginx is not happy with.
Ingress config
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  creationTimestamp: "2019-09-14T07:43:49Z"
  generation: 2
  name: test-ingress
  namespace: default
  resourceVersion: "12193067"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/test-ingress
  uid: f8c5ea11-9caf-431e-9a18-19bd3981eece
spec:
  rules:
    - host: myhostname
      http:
        paths:
          - backend:
              serviceName: test-svc
              servicePort: 80
            path: /test
status:
  loadBalancer:
    ingress:
      - {}
Is there something I have done wrong in the Nginx config? I have other APIs that are reverse proxied fine; it seems to be the web applications that serve static files that are giving issues.
