Kubernetes nginx ingress is not resolving services

Cloud: Google Cloud Platform.
I have the following configuration:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      run: api
  template:
    metadata:
      labels:
        run: api
    spec:
      containers:
        - name: api
          image: gcr.io/*************/api
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /_ah/health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
---
kind: Service
apiVersion: v1
metadata:
  name: api
spec:
  selector:
    run: api
  type: NodePort
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /api/*
            backend:
              serviceName: api
              servicePort: 8080
All set. GKE reports that all deployments are healthy, the desired pod counts are met, and the main ingress with the nginx-ingress-controller is up as well. But I'm not able to reach any of the services. Not even an application-specific 404. Nothing, as if the path isn't resolved at all.
A related question concerns the entrypoints. There are two: the first goes through main-ingress, which created its own LoadBalancer with its own IP address; the second is the address of the nginx-ingress-controller. The second one at least returns a 404 from the default backend, but it also does not route to the expected api service.
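One thing that may explain the nginx entrypoint's behaviour (an assumption based on the manifests, not something stated in the question): the wildcard path /api/* is GCE ingress syntax, while the nginx controller treats path as a literal prefix, so /api/* may match nothing. A sketch of the same Ingress written for the nginx controller, stripping the /api prefix before proxying; note that newer controller versions require a capture-group regex for rewrite-target instead of a bare /:

```yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # strip the /api prefix before proxying to the backend
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /api/
            backend:
              serviceName: api
              servicePort: 8080
```

On the two entrypoints: GKE's own GCE ingress controller creates a load balancer for any Ingress it considers unclaimed, so seeing two external addresses usually means both controllers are reconciling Ingress resources; the kubernetes.io/ingress.class: nginx annotation is what should keep the GCE controller away from this one.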

Related

FastAPI docs on kubernetes not working with devspace on a minikube cluster. 502 bad gateway

I am trying to develop an application on kubernetes with hot-reloading (instant code sync). I am using DevSpace. When running my application on a minikube cluster, everything works and I am able to hit the ingress to reach my FastAPI docs. The problem is when I try to use devspace, I can exec into my pods and see my changes reflected right away, but then when I try to hit my ingress to reach my FastAPI docs, I get a 502 bad gateway.
I have an api-pod.yaml file as such:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: project-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: project-api
  template:
    metadata:
      labels:
        app: project-api
    spec:
      containers:
        - image: project/project-api:0.0.1
          name: project-api
          command: ["uvicorn"]
          args: ["endpoint:app", "--port=8000", "--host", "0.0.0.0"]
          imagePullPolicy: IfNotPresent
          livenessProbe:
            httpGet:
              path: /api/v1/project/tasks/
              port: 8000
            initialDelaySeconds: 5
            timeoutSeconds: 1
            periodSeconds: 600
            failureThreshold: 3
          ports:
            - containerPort: 8000
              name: http
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: project-api
spec:
  selector:
    app: project-api
  ports:
    - port: 8000
      protocol: TCP
      targetPort: http
  type: ClusterIP
I have an api-ingress.yaml file as such:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: project-ingress
spec:
  rules:
    - http:
        paths:
          - path: /api/v1/project/tasks/
            pathType: Prefix
            backend:
              service:
                name: project-api
                port:
                  number: 8000
  ingressClassName: nginx
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
Using kubectl get ep, I get:
NAME          ENDPOINTS         AGE
project-api   172.17.0.6:8000   17m
Using kubectl get svc, I get:
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
project-api   ClusterIP   10.97.182.167   <none>        8000/TCP   17m
Using kubectl get ingress, I get:
NAME          CLASS   HOSTS   ADDRESS          PORTS   AGE
api-ingress   nginx   *       192.168.64.112   80      17m
To reiterate, my problem is that when I try reaching the FastAPI docs at 192.168.64.112/api/v1/project/tasks/docs, I get a 502 Bad Gateway.
I'm running:
macOS Monterey: 12.4
Minikube version: v1.26.0 (with hyperkit as the VM driver)
Ingress controller: k8s.gcr.io/ingress-nginx/controller:v1.2.1
DevSpace version: 5.18.5
I believe the problem was within DevSpace. I am now comfortably using Tilt, and everything works as expected.
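For anyone making the same switch, here is a minimal Tiltfile sketch for this setup; the image reference and manifest filenames come from the question, while the build context and sync paths are assumptions:

```python
# Tiltfile (Starlark): build the API image and sync source changes live
docker_build(
    'project/project-api',      # image ref used by the Deployment
    '.',                        # build context (assumed)
    live_update=[
        sync('.', '/app'),      # sync local source into the container (path assumed)
    ],
)

# deploy the manifests from the question
k8s_yaml(['api-pod.yaml', 'api-ingress.yaml'])

# forward the API port for local access
k8s_resource('project-api', port_forwards=8000)
```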

Intermittent 502 errors on local nginx-ingress router in microk8s cluster

I'm new to Kubernetes and am experiencing a weird issue with my nginx-ingress router. My cluster runs locally on a Raspberry Pi using MicroK8s, where I have 4 different deployments. The cluster uses an ingress router to route packets for the UI and API.
In short, my issue is that I receive intermittent 502 errors on calls from the UI to the backend. Intermittent meaning that for every few successful POSTs there are a similar number of 502 responses, regardless of how quickly the requests are issued.
I've applied the following Ingress configuration to my cluster:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-router
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ui
                port:
                  number: 80
    - http:
        paths:
          - path: /lighting/.*
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 8000
And the deployments for UI and API are as follows:
UI:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui
spec:
  selector:
    matchLabels:
      app: iot-control-center
  replicas: 3
  template:
    metadata:
      labels:
        app: iot-control-center
    spec:
      containers:
        - name: ui-container
          image: canadrian72/iot-control-center:ui
          imagePullPolicy: Always
          ports:
            - containerPort: 80
API:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  selector:
    matchLabels:
      app: iot-control-center
  replicas: 3
  template:
    metadata:
      labels:
        app: iot-control-center
    spec:
      containers:
        - name: api-container
          image: canadrian72/iot-control-center:api
          imagePullPolicy: Always
          ports:
            - containerPort: 8000
            - containerPort: 1883
After looking around online, I found a Reddit post which most closely resembles my issue, although I'm not too sure where to go from there. I have a feeling it's a load issue for either the pods or the ingress controller, so I scaled each deployment to 3 replicas (from 1), but this only decreased the frequency of the 502 errors.
Edit
It's not necessarily one-to-one between 200 and 502 responses; it's fairly random, but roughly an even distribution of the two. I should also add that I had configured this same setup with a LoadBalancer (MetalLB) and everything worked like a charm, except for CORS, which is why I switched to an Ingress to deal with CORS.
Leaving this up in case anyone has a similar issue. My problem was that in the ingress configuration, the backends referenced the deployments themselves instead of a NodePort/ClusterIP Service.
I instead created two cluster IP services for the UI and API, like follows:
UI
apiVersion: v1
kind: Service
metadata:
  name: ui-cluster-ip
spec:
  type: ClusterIP
  selector:
    app: iot-control-center
    svc: ui
  ports:
    - port: 80
API
apiVersion: v1
kind: Service
metadata:
  name: lighting-api-cluster-ip
spec:
  type: ClusterIP
  selector:
    app: iot-control-center
    svc: lighting-api
  ports:
    - port: 8000
And then referenced these services from the ingress yaml as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-router
  annotations:
    kubernetes.io/ingress.class: public
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Access-Control-Allow-Origin: $http_origin";
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: PUT, GET, POST, OPTIONS, DELETE, PATCH
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ui-cluster-ip
                port:
                  number: 80
    - http:
        paths:
          - path: /lighting/.*
            pathType: Prefix
            backend:
              service:
                name: lighting-api-cluster-ip
                port:
                  number: 8000
I'm not sure why the previous configuration resulted in some 502 responses and some 200s (in my mind they should all have failed), but in any case this solution works.
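One detail implied by the fix above (an assumption, since the updated deployment manifests aren't shown): the new ClusterIP selectors match on both the app and svc labels, so each deployment's pod template must also carry the matching svc label, or the services would have no endpoints. This also suggests an explanation for the intermittent errors: with only the shared app: iot-control-center label, a service selecting on that label alone would pick up pods from both deployments and send some requests to the wrong backend. A sketch of what the UI deployment's pod template would need:

```yaml
# pod template labels required by the ui-cluster-ip selector (assumed)
template:
  metadata:
    labels:
      app: iot-control-center
      svc: ui
```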

Problem with NGINX, Kubernetes and CloudFlare

I am experiencing exactly this issue: Nginx-ingress-controller fails to start after AKS upgrade to v1.22, with the exception that none of the proposed solutions works in my case.
I am running a Kubernetes cluster on Oracle Cloud and I accidentally upgraded it; now I can no longer connect to the services through the nginx controller. After reading the official nginx documentation, I am aware of the new nginx version, so I checked the documentation and re-installed the nginx controller following Oracle Cloud's official documentation.
I am able to follow it step by step, running:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml
An ingress-nginx namespace is then created, along with a LoadBalancer. Then, as in the guide, I created a simple hello application (though not running on port 80):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-hello-world
  labels:
    app: docker-hello-world
spec:
  selector:
    matchLabels:
      app: docker-hello-world
  replicas: 1
  template:
    metadata:
      labels:
        app: docker-hello-world
    spec:
      containers:
        - name: docker-hello-world
          image: scottsbaldwin/docker-hello-world:latest
          ports:
            - containerPort: 8088
---
apiVersion: v1
kind: Service
metadata:
  name: docker-hello-world-svc
spec:
  selector:
    app: docker-hello-world
  ports:
    - port: 8088
      targetPort: 8088
  type: ClusterIP
and then the ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - secretName: tls-secret
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: docker-hello-world-svc
                port:
                  number: 8088
But when running the curl commands, I only get curl: (56) Recv failure: Connection reset by peer.
So I then tried to connect to some Python microservices that are already running, simply by editing the ingress, but whatever I do I get the same error message. And when setting the host as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ing
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: SUBDOMAIN.DOMAIN.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ANY_MICROSERVICE_RUNNING_IN_CLUSTER
                port:
                  number: EXPOSED_PORT_BY_MICROSERVICE
Then, after setting up the subdomain on Cloudflare, I only get a Cloudflare 520 error.
Can you help me find what it is that I do not see?
This may be related to your Ingress resource.
In Kubernetes v1.19 and above, Ingress resources should use spec.ingressClassName instead of the older kubernetes.io/ingress.class annotation syntax. Additional information on what to do when upgrading can be found in the official Kubernetes documentation.
With those changes applied, at face value and from the information you've provided so far, your Ingress resource should look like this:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ing
spec:
  ingressClassName: nginx
  rules:
    - host: SUBDOMAIN.DOMAIN.com
      http:
        paths:
          - backend:
              service:
                name: docker-hello-world-svc
                port:
                  number: 8088
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - SUBDOMAIN.DOMAIN.com
      secretName: tls-secret
Additionally, please provide the nginx ingress controller's logs if you still have issues (e.g. kubectl logs -n ingress-nginx deployment/ingress-nginx-controller), as the Cloudflare error does not detail what could be wrong beyond providing a starting point.
As someone who uses Cloudflare and Nginx: there are multiple reasons why you might receive a 520 error, so it would be better if we could narrow down the scope of the main issue. Let me know if you have any questions.

forward from ingress nginx controller to different nginx pods according to port numbers

In my k8s cluster I have an nginx ingress controller exposed as a LoadBalancer, reachable at a DDNS address like hedehodo.ddns.net, which forwards web traffic to another nginx instance.
Now I have deployed another nginx that serves a Node.js app, but I cannot get the ingress controller to forward requests for port 3000 to that second nginx.
Here is the Ingress yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  rules:
    - host: hedehodo.ddns.net
      http:
        paths:
          - path: /
            backend:
              serviceName: my-nginx
              servicePort: 80
          - path: /
            backend:
              serviceName: helloapp-deployment
              servicePort: 3000
The helloapp deployment works behind a LoadBalancer and I can access it at IP:3000.
Could anybody help me?
A host cannot have duplicate paths, so in your example a request to host hedehodo.ddns.net will always map to the first service listed: my-nginx:80.
To use another service, you have to specify a different path; that path can use any service you want. Your ingress should always point to a Service, and that Service can point to a Deployment.
You should also use HTTPS by default for your ingress.
Ingress example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
    - host: my.example.net
      http:
        paths:
          - path: /
            backend:
              serviceName: my-nginx
              servicePort: 80
          - path: /hello
            backend:
              serviceName: helloapp-svc
              servicePort: 3000
Service example:
---
apiVersion: v1
kind: Service
metadata:
  name: helloapp-svc
spec:
  ports:
    - port: 3000
      name: app
      protocol: TCP
      targetPort: 3000
  selector:
    app: helloapp
  type: NodePort
Deployment example:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloapp
  labels:
    app: helloapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloapp
  template:
    metadata:
      labels:
        app: helloapp
    spec:
      containers:
        - name: node
          image: my-node-img:v1
          ports:
            - name: web
              containerPort: 3000
You can't have the same path: / twice for the same host. Change the path to a different one for the new service.

Loadbalancing/Redirecting from Kubernetes NodePort Services

I've got a bare-metal cluster with a few NodePort deployments of my services (HTTP and HTTPS). I would like to access them from a single URL like myservices.local, with (sub)paths.
The config could be something like the following (pseudocode):
/app1
  http://10.100.22.55:30322
  http://10.100.22.56:30322
  # browser access: myservices.local/app1
/app2
  https://10.100.22.55:31432
  https://10.100.22.56:31432
  # browser access: myservices.local/app2
/...
I tried a few things with HAProxy and nginx, but nothing really worked (for someone inexperienced in webserver/LB matters, the syntax/config style is kind of confusing, in my opinion).
What is the easiest solution for a case like this?
The easiest and most widely used approach is the NGINX Ingress Controller. It is built around the Kubernetes Ingress resource, using a ConfigMap to store the NGINX configuration.
In the documentation we can read:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
    internet
        |
   [ Ingress ]
   --|-----|--
   [ Services ]
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
This is exactly what you want to achieve.
The first thing you need to do is to install the NGINX Ingress Controller in your cluster. You can follow the official Installation Guide.
An Ingress always points to a Service, so you need a Deployment, a Service, and an NGINX Ingress resource.
Here is an example of an application similar to your example.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app1
  name: app1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: app1
          image: nginx
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app2
  name: app2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app2
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
        - name: app2
          image: nginx
          imagePullPolicy: Always
          ports:
            - containerPort: 5001
---
apiVersion: v1
kind: Service
metadata:
  name: app1
  labels:
    app: app1
spec:
  type: ClusterIP
  ports:
    - port: 5000
      protocol: TCP
      targetPort: 5000
  selector:
    app: app1
---
apiVersion: v1
kind: Service
metadata:
  name: app2
  labels:
    app: app2
spec:
  type: ClusterIP
  ports:
    - port: 5001
      protocol: TCP
      targetPort: 5001
  selector:
    app: app2
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress  # ingress resource
metadata:
  name: myservices
  labels:
    app: myservices
spec:
  rules:
    - host: myservices.local  # only match connections to myservices.local
      http:
        paths:
          - path: /app1
            backend:
              serviceName: app1
              servicePort: 5000
          - path: /app2
            backend:
              serviceName: app2
              servicePort: 5001
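Note that the networking.k8s.io/v1beta1 Ingress API was removed in Kubernetes v1.22. On newer clusters the same Ingress would be written against networking.k8s.io/v1, which requires pathType and the nested backend.service form; a sketch (the "nginx" class name assumes the standard NGINX Ingress Controller installation):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myservices
  labels:
    app: myservices
spec:
  ingressClassName: nginx   # assumes the controller registered the "nginx" IngressClass
  rules:
    - host: myservices.local
      http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1
                port:
                  number: 5000
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2
                port:
                  number: 5001
```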
