NGINX Ingress loads only HTML on Kubernetes AKS

I'm setting up my Kubernetes Cluster using Azure AKS and I deployed NGINX Ingress following this guide:
https://learn.microsoft.com/en-us/azure/aks/ingress-internal-ip
The guide works fine for the demo applications it uses, but when I deployed one of my own apps, only the HTML loaded; the JS, CSS and PNG requests failed with errors in the browser console.
When I deploy the application on my K8s cluster without NGINX Ingress it works perfectly, but when I expose it through NGINX Ingress I get these errors.
I also tried deploying another app, pgAdmin, and for that one I only get a 503 error when I try to reach its page.
I tried several workarounds I found on the web, but nothing fixed it.
Files:
internal-ingress.yaml:
controller:
  service:
    loadBalancerIP: 10.50.0.253
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
hello-world-ingress.yaml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: infra
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: aks-helloworld
          servicePort: 80
        path: /hello-world-one(/|$)(.*)
      - backend:
          serviceName: pgadmin
          servicePort: 80
        path: /pgadmin(/|$)(.*)
      - backend:
          serviceName: pgadmin
          servicePort: 80
        path: /(.*)
      - backend:
          serviceName: box-model
          servicePort: 80
        path: /box-model(/|$)(.*)
pgadmin deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.pgadmin.name }}
spec:
  selector:
    matchLabels:
      app: {{.Values.pgadmin.name}}
  replicas: {{.Values.pgadmin.deployment.replicas}}
  strategy:
    type: {{.Values.pgadmin.deployment.strategy}}
  template:
    metadata:
      labels:
        app: {{.Values.pgadmin.name}}
    spec:
      containers:
      - env:
        - name: PGADMIN_DEFAULT_EMAIL
          valueFrom:
            secretKeyRef:
              name: {{.Values.pgadmin.secrets.PGADMIN_DEFAULT_EMAIL.name}}
              key: {{.Values.pgadmin.secrets.PGADMIN_DEFAULT_EMAIL.key}}
        - name: PGADMIN_DEFAULT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: {{.Values.pgadmin.secrets.PGADMIN_DEFAULT_PASSWORD.name}}
              key: {{.Values.pgadmin.secrets.PGADMIN_DEFAULT_PASSWORD.key}}
        image: {{.Values.pgadmin.deployment.image}}
        imagePullPolicy: {{.Values.pgadmin.deployment.imagePullPolicy}}
        name: {{.Values.pgadmin.name}}
        ports:
        - containerPort: {{.Values.pgadmin.deployment.containerPort}}
        volumeMounts:
        - mountPath: {{.Values.pgadmin.deployment.volumeMounts.name}}
          name: {{.Values.pgadmin.deployment.volumeMounts.name}}
        resources: {}
      restartPolicy: {{.Values.pgadmin.deployment.restartPolicy}}
      serviceAccountName: ""
      volumes:
      - name: {{.Values.pgadmin.volumes.pvc.name}}
        persistentVolumeClaim:
          claimName: {{.Values.pgadmin.volumes.pvc.name}}
status: {}
pgadmin service yaml:
apiVersion: v1
kind: Service
metadata:
  name: {{.Values.pgadmin.name}}
spec:
  ports:
  - name: {{.Values.pgadmin.deployment.containerPort | quote}}
    port: {{.Values.pgadmin.deployment.containerPort}}
    targetPort: {{.Values.pgadmin.deployment.containerPort}}
status:
  loadBalancer: {}
Please tell me if any relevant information is missing and I will add it.

Maybe nginx.ingress.kubernetes.io/rewrite-target: /$1 is causing the issue. I am not sure.
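For what it's worth: with two capture groups like (/|$)(.*) in the path, the rewrite target usually has to be /$2, because /$1 rewrites the request to just "/" (or an empty string) and drops the rest of the path. A sketch of what one rule could look like under that assumption, keeping the question's v1beta1 backend syntax (not a verified fix):

  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    # /$2 forwards whatever the second group captured after the prefix
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  ...
      paths:
      - backend:
          serviceName: box-model
          servicePort: 80
        path: /box-model(/|$)(.*)

Also note that if the app's HTML references its assets with absolute URLs (for example /static/app.js), those browser requests never carry the /box-model prefix and will fall into the /(.*) catch-all rule, which here points at pgadmin.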

Related

Can I deploy ingress.yaml file in another namespace and run my deploy.yaml file in AKS

I have created two namespaces, "ingress-basic" and "wallarm-ingress". I have applied the deployment file in the "ingress-basic" namespace, and I want to know whether I can apply my ingress.yaml file in the "wallarm-ingress" namespace and expose the deployment to the internet.
This is the deployment yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: newwallarmacr.azurecr.io/api-app:v1
        ports:
        - containerPort: 3333
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: ClusterIP
  ports:
  - port: 3333
  selector:
    app: api
And this is the ingress.yaml file
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    kubernetes.io/ingress.class: nginx
  name: api
  namespace: ingress-basic
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /one(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: api
            port:
              number: 3333
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: api
            port:
              number: 3333
I tried this and it didn't work, so I want to know which parts should be added or edited to get this deployment exposed to the internet.
By default, every workload (Pods, Endpoints, Services) is addressed by its short name only within its own namespace. In your case you want to access a service hosted in wallarm-ingress via an ingress hosted in ingress-basic, so the service should be referenced with this syntax:
{serviceName}.{serviceName-namespace}.svc
So your ingress object should look like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    kubernetes.io/ingress.class: nginx
  name: api
  namespace: ingress-basic
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /one(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: api.wallarm-ingress.svc
            port:
              number: 3333
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: api.wallarm-ingress.svc
            port:
              number: 3333
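If the controller does not accept a dotted service name in the backend (an Ingress backend is normally expected to be a Service in the Ingress's own namespace), another common workaround is an ExternalName Service in ingress-basic that forwards to the Service in wallarm-ingress. The api-proxy name below is only illustrative:

apiVersion: v1
kind: Service
metadata:
  # hypothetical name; lives in the same namespace as the Ingress
  name: api-proxy
  namespace: ingress-basic
spec:
  type: ExternalName
  # DNS name of the real Service in the wallarm-ingress namespace
  externalName: api.wallarm-ingress.svc.cluster.local

The Ingress backend would then point at api-proxy on port 3333 instead of the dotted name.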

GKE ingress is not working with WordPress deployment

I am trying to deploy WordPress on GKE. Everything is OK except the ingress: it cannot connect to the backend service and shows "SOME BACKEND SERVICES ARE IN UNHEALTHY STATE".
I would be grateful if someone could help me.
Wordpress deployment yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-deployment
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress
        ports:
        - containerPort: 80
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql-service
        - name: WORDPRESS_DB_USER
          value: wpuser
        - name: WORDPRESS_DB_PASSWORD
          value: pass#123
        - name: WORDPRESS_DB_NAME
          value: wpdb
        - name: WORDPRESS_DEBUG
          value: "1"
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wordpress-volumeclaim
Service yaml file
apiVersion: v1
kind: Service
metadata:
  name: wordpress-service
spec:
  type: NodePort
  selector:
    app: wordpress
  ports:
  - name: portname
    nodePort: 30100
    port: 80
    targetPort: 80
Ingress yaml file
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-address
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - host: example.com
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: wordpress-service
            port:
              number: 80
(Screenshots: the Ingress in the GCP Console and the GCP logs.)
In your spec:
spec:
  rules:
  - host: example.com
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: wordpress-service
            port:
              number: 80
you accidentally have two different backends: one for "example.com" and one based on the path "/" for anything else. Since you are not specifying a backend for the "example.com", Ingress uses the default backend, which will never return healthy.
My guess is that you don't actually want "example.com", so deleting it from the spec should solve your issue:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-address
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: wordpress-service
            port:
              number: 80
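Alternatively, if you do want the example.com host, it is enough to drop the stray dash so that host and http belong to the same rule entry:

spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: wordpress-service
            port:
              number: 80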
You can also try going to https://console.cloud.google.com/compute/healthChecks/ and modifying the health check for the WordPress backend. For example, changing it from / to /wp-admin/images/wordpress-logo.svg solved the issue in my case. This is described in this post: https://serverfault.com/questions/826719/how-to-create-a-url-in-a-wordpress-that-will-return-code-200.
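If you would rather keep that change in your manifests than edit the health check in the console, GKE can also be told which path to probe via a BackendConfig attached to the Service. A minimal sketch, assuming the BackendConfig CRD that ships with GKE and a hypothetical resource name:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: wordpress-backendconfig   # hypothetical name
spec:
  healthCheck:
    type: HTTP
    # any path WordPress serves with a 200, as in the workaround above
    requestPath: /wp-admin/images/wordpress-logo.svg
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress-service
  annotations:
    # tells the GCE ingress controller to use the BackendConfig above
    cloud.google.com/backend-config: '{"default": "wordpress-backendconfig"}'
spec:
  type: NodePort
  selector:
    app: wordpress
  ports:
  - name: portname
    nodePort: 30100
    port: 80
    targetPort: 80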

Ingress not forwarding traffic to pod

Ingress is not forwarding traffic to pods.
Application is deployed on Azure Internal network.
I can access the app successfully using the pod IP and port, but when I try the ingress IP/host I get 404 Not Found. I do not see any errors in the ingress logs.
Below are my config files.
Please tell me if I am missing anything, or how I can troubleshoot to find the issue.
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-one
  template:
    metadata:
      labels:
        app: aks-helloworld-one
    spec:
      containers:
      - name: aks-helloworld-one
        image: <image>
        ports:
        - containerPort: 8290
          protocol: "TCP"
        env:
        - name: env1
          valueFrom:
            secretKeyRef:
              name: configs
              key: env1
        volumeMounts:
        - mountPath: "mnt/secrets-store"
          name: secrets-mount
      volumes:
      - name: secrets-mount
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "azure-keyvault"
      imagePullSecrets:
      - name: acr-secret
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-one
spec:
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 8290
  selector:
    app: aks-helloworld-one
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: ingress-basic
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld
            port:
              number: 80
Correct your service name and service port in ingress.yaml.
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            # wrong: name: aks-helloworld
            name: aks-helloworld-one
            port:
              # wrong: number: 80
              number: 8080
You can also use the command below to confirm whether the ingress has any endpoints.
kubectl describe ingress hello-world-ingress -n ingress-basic
You have mentioned the wrong service name under the ingress definition. Service name should be aks-helloworld-one as per the service definition.
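To double-check, you can also list the Service's endpoints in whatever namespace the Service was created in; an empty endpoints list means the selector does not match any running pod:

kubectl get endpoints aks-helloworld-one
kubectl get svc aks-helloworld-one -o wide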

K8S Ingress 404 ssl backend

I have an issue I can't figure out. I have set up the NGINX Ingress Controller on my managed K8s cluster. I'm trying to reach an SSL-enabled pod behind it and it does not work: I get 404 Not Found from NGINX, and the certificate presented is the NGINX default one. I deployed the controller using their GitHub repo and the default files, following their docs.
I have set up a plain HTTP pod for test purposes and it works, so the problem seems to be related to SSL.
I have tried many things to no avail. How can I reach an SSL pod behind NGINX?
Here are the Deployment + Service resources (for the HTTPS one) I have set up:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: moulip-https
spec:
  selector:
    matchLabels:
      app: moulip-https
  replicas: 2
  template:
    metadata:
      labels:
        app: moulip-https
    spec:
      containers:
      - name: "wabam"
        image: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
        ports:
        - containerPort: 443
      imagePullSecrets:
      - name: regcrd
---
apiVersion: v1
kind: Service
metadata:
  name: https-svc
  labels:
    app: moulip-https
spec:
  ports:
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: moulip-https
and my Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /
  namespace: default
spec:
  rules:
  - host: https.moulip.lan
    http:
      paths:
      - backend:
          serviceName: https-svc
          servicePort: 443
  - host: test.moulip.lan
    http:
      paths:
      - backend:
          serviceName: hostname-svc
          servicePort: 80
Many thanks for any guidance you could provide me with.
You are missing the TLS configuration in the ingress. Follow the sample below:
apiVersion: v1
kind: Secret
metadata:
  name: testsecret-tls
  namespace: default
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
type: kubernetes.io/tls
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - sslexample.foo.com
    secretName: testsecret-tls
  rules:
  - host: sslexample.foo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service1
          servicePort: 80
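If you do not want to base64-encode the certificate and key by hand, the same secret can be created directly from the files (the paths below are placeholders):

kubectl create secret tls testsecret-tls \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key \
  --namespace default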

Using master - minion Nginx ingress with oauth2-proxy authentication

I have an app running in a Kubernetes cluster that uses TLS and OAuth2 authentication as part of the NGINX ingress. It all runs fine, but I now want to split my ingresses so that I have a master and a number of minions, while making sure that all the authentication is handled for the complete host domain. When I do this, the forced sign-in breaks: I can still reach it if I add the path manually, but it is no longer required in order to reach the application. Is this possible to solve?
Example
Regular ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/auth-url: "https://my-app.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://my-app.com/oauth2/start?rd=https%3A%2F%2F$host$request_uri"
spec:
  tls:
  - secretName: my-app-com-tls
    hosts:
    - my-app.com
  rules:
  - host: my-app.com
    http:
      paths:
      - path: /(.*)
        backend:
          serviceName: my-app
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: oauth2-proxy
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
  labels:
    app: oauth2-proxy
    app.kubernetes.io/managed-by: Helm
    chart: oauth2-proxy-3.1.0
    heritage: Helm
    release: oauth2-proxy
spec:
  rules:
  - host: my-app.com
    http:
      paths:
      - backend:
          serviceName: oauth2-proxy
          servicePort: 80
        path: /oauth2
  tls:
  - hosts:
    - my-app.com
    secretName: my-app-com-tls
Master - minion
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress-master
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/mergeable-ingress-type: "master"
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/auth-url: "https://my-app.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://my-app.com/oauth2/start?rd=https%3A%2F%2F$host$request_uri"
spec:
  tls:
  - secretName: my-app-com-tls
    hosts:
    - my-app.com
  rules:
  - host: my-app.com
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress-minion
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/mergeable-ingress-type: "minion"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: my-app.com
    http:
      paths:
      - path: /(.*)
        backend:
          serviceName: my-app
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: oauth2-proxy
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/mergeable-ingress-type: minion
  labels:
    app: oauth2-proxy
    app.kubernetes.io/managed-by: Helm
    chart: oauth2-proxy-3.1.0
    heritage: Helm
    release: oauth2-proxy
spec:
  rules:
  - host: my-app.com
    http:
      paths:
      - backend:
          serviceName: oauth2-proxy
          servicePort: 80
        path: /oauth2
It turns out that I had unintentionally mixed features that are defined in two different NGINX ingress controller projects (NGINX Inc's and the Kubernetes community's). The reason it breaks is simply that the controller I am actually using in my cluster has no support for the master/minion hierarchy, and the other project does not seem to support the authentication annotations.
I have created a feature suggestion.
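A quick way to check which of the two controllers is actually running is to look at the controller image; as far as I know, the kubernetes/ingress-nginx images come from registry.k8s.io/ingress-nginx/controller (formerly k8s.gcr.io), while NGINX Inc's come from nginx/nginx-ingress:

kubectl get deployments,daemonsets --all-namespaces -o wide | grep -i nginx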
