Kubernetes NGINX-ingress returns 404 - nginx

I'm a k8s newbie and I am trying to expose a port from the cluster into the local network.
I've tried to do it with a MetalLB layer2 config plus a load balancer controller, and that works fine.
I have set up a 3-node environment with Kubespray (192.168.0.1[5,6,7]).
However, now I'm trying to expose an API with NodePort and NGINX Ingress. The NodePort API service is running fine (I can make successful requests via NODE_IP:NODE_nodeport). But if I try this ingress configuration, it just keeps telling me "connection refused":
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-api
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: "k8s.customhostname.com" # resolves to 192.168.0.17, which has a running pod with the api.
    http:
      paths:
      - path: /test
        pathType: Prefix
        backend:
          service:
            name: svc-api
            port:
              number: 8080
Then I check the service:
Name: svc-smouapimapes
Namespace: smouapi
Labels: app=apimapes
Annotations: <none>
Selector: app=apimapes
Type: ClusterIP
IP: 10.233.26.225
Port: springboot 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.233.90.4:8080,10.233.92.8:8080
Session Affinity: None
Events: <none>
And then check the ingress:
Name: ingress-smouapimapes
Namespace: smouapi
Address: 192.168.0.17
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
k8s.nexusgeografics.com
/test svc-smouapimapes:8080 (10.233.90.4:8080,10.233.92.8:8080)
Annotations: nginx.ingress.kubernetes.io/ssl-redirect: false
Events: <none>
Whenever I call:
curl -I http://k8s.nexusgeografics.com/test
# CONNECTION REFUSED
What am I doing wrong?
Thanks

Try adding the following to the Ingress ingress-smouapimapes.
Add the annotation:
nginx.ingress.kubernetes.io/rewrite-target: /$2
and instead of path: /test, use path: /test(/|$)(.*)
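For illustration, a minimal sketch of the Ingress with both changes applied, reusing the svc-api backend from the question; the ingressClassName, the use-regex annotation and the ImplementationSpecific pathType are assumptions about a typical ingress-nginx setup, not something from the original post:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-api
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # Forward only the part captured by the second group ($2) to the backend.
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    # Assumption: explicitly enable regex matching; newer ingress-nginx versions
    # also enable it automatically when rewrite-target uses capture groups.
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx   # assumption: the controller's IngressClass is named "nginx"
  rules:
  - host: "k8s.customhostname.com"
    http:
      paths:
      - path: /test(/|$)(.*)
        pathType: ImplementationSpecific   # regex paths are controller-specific
        backend:
          service:
            name: svc-api
            port:
              number: 8080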

Enin, you should configure the domain k8s.nexusgeografics.com to resolve to the IP that serves the nginx-ingress controller; that way you can access your service through nginx-ingress.
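For example (a sketch; the ingress-nginx namespace and Service name below are assumptions about a standard installation), you can look up the address the controller is exposed on and point the DNS record, or a hosts entry, at it:
# Find the external IP / node ports the ingress controller Service is exposed on.
kubectl get svc -n ingress-nginx ingress-nginx-controller
# Then make the hostname resolve to that address, e.g. with a hosts entry like
# (using one of the node IPs from the question purely as an example):
# 192.168.0.15   k8s.nexusgeografics.com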

Related

How to assign IP address to nginx Ingress resource in k8s?

I want to install the nginx controller in my Kubernetes cluster. I set up my master node on one server and a worker node on another server. I am using Ubuntu 20.04.
I followed the link (https://github.com/kubernetes/ingress-nginx/blob/main/deploy/static/provider/cloud/1.23/deploy.yaml) and used 'kubectl apply -f file_name.yaml' to install the controller.
The controller pod is running:
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-c57bb9674-p2z9d 1/1 Running 0 70s
Now I want to create an Ingress resource. I used this yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-hello
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Exact
        backend:
          service:
            name: ingress-hello
            port:
              number: 80
However, when I applied this yaml file and ran 'kubectl get ingress -n ingress-nginx', I saw:
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-hello <none> * 80 24s
I noticed that the address for this Ingress resource is empty.
I am just wondering: is it possible to assign an IP address to it? Is there any method/setting to assign the address?
Thanks.
You can access your service at http://localhost:80/hello, and if you want to specify a custom host you need to modify your ingress file.
This is an example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-hello
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: your_host
    http:
      paths:
      - path: /hello
        pathType: Exact
        backend:
          service:
            name: ingress-hello
            port:
              number: 80
and you need to open your hosts file (/etc/hosts on Linux and macOS, C:\Windows\System32\drivers\etc\hosts on Windows) and add your customized host; your service will then be accessible through
http://your_host:80/hello
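For example, on a local setup the entry might simply point your custom host at the loopback or node address (illustrative values only):
# hosts file entry (e.g. /etc/hosts)
127.0.0.1   your_host
# then
curl http://your_host:80/hello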

nginx ingress redirect traffic

I deployed the nginx ingress controller and randomApp to a minikube cluster.
I have 2 requirements:
All traffic for "random/.*" should go to the random service.
Other paths should go to nginx.
Is this configuration correct?
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-rule-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  defaultBackend:
    service:
      name: ingress-nginx-controller
      port:
        number: 80
  rules:
  - host: random.localdev.me
    http:
      paths:
      - path: /random/(.*)
        backend:
          service:
            name: random
            port:
              number: 8000
        pathType: Prefix
You also need to add the annotation kubernetes.io/ingress.class: "nginx" (under metadata.annotations) or set spec.ingressClassName: nginx so that the nginx ingress controller discovers the Ingress.
Also, you shouldn't define the default backend service as ingress-nginx-controller: you will get 503 Service Temporarily Unavailable, because the controller's nginx.conf is not configured to serve itself as a backend. Point the default backend at another nginx server's Service instead.
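As a sketch, the Ingress from the question with those two changes applied (the plain-nginx default backend name is a placeholder for whatever separate nginx Service you want to catch the remaining traffic):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-rule-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx            # lets the nginx ingress controller pick up this Ingress
  defaultBackend:
    service:
      name: plain-nginx              # placeholder: a separate nginx Service, not the controller itself
      port:
        number: 80
  rules:
  - host: random.localdev.me
    http:
      paths:
      - path: /random/(.*)
        pathType: ImplementationSpecific   # regex path, so not a literal Prefix
        backend:
          service:
            name: random
            port:
              number: 8000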

External OAuth authentication with Nginx in Kubernetes

Having trouble setting up external authentication for a web application behind an nginx ingress. When I try to access the URL https://site.example.com from outside, I get no redirection to the GitHub login; I reach the web application directly instead.
Running Pods for my environment:
NAME READY STATUS
nginx-ingress-68df4dfc4f-wpj5t 1/1 Running
oauth2-proxy-6675d4b57c-cspw8 1/1 Running
web-deployment-7d4bd85b46-blxb8 1/1 Running
web-deployment-7d4bd85b46-nqjgl 1/1 Running
Active Services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
nginx-ingress LoadBalancer 10.96.156.157 192.168.1.82 80:31613/TCP,443:32437/TCP
oauth2-proxy ClusterIP 10.100.101.251 <none> 4180/TCP
web-service ClusterIP 10.108.237.188 <none> 8080/TCP
Two Ingress resources:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$request_uri"
  labels:
    app: webapp
spec:
  rules:
  - host: site.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web-service
          servicePort: 8080
  tls:
  - hosts:
    - site.example.com
    secretName: example-tls
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: oauth2-proxy
  namespace: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: oauth2-proxy
spec:
  rules:
  - host: site.example.com
    http:
      paths:
      - backend:
          serviceName: oauth2-proxy
          servicePort: 4180
        path: /oauth2
  tls:
  - hosts:
    - site.example.com
    secretName: example-tls
Ingress output:
NAME CLASS HOSTS ADDRESS PORTS
ingress <none> site.example.com 192.168.1.82 80, 443
oauth2-proxy <none> site.example.com 80, 443
I see these errors in Ingress oauth2-proxy events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Rejected 54m nginx-ingress-controller All hosts are taken by other resources
oauth2-proxy was built from the deployment here, with Client ID, Client Secret and SECRET set according to the OAuth app created in my GitHub account.
No entries appear in the oauth2-proxy logs; I suppose that's because it's never invoked in the process.
UPDATE:
This question was incomplete: I forgot to mention the NGINX image employed (NGINX 1.9.0 from the installation guide).
Changing the image to the one below:
NGINX Ingress controller
Release: v0.41.2
Build: d8a93551e6e5798fc4af3eb910cef62ecddc8938
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.4
makes the error disappear. In brief, both Ingress configurations, the one from my question and the one from the answer, are working.
In your configuration you are using two Ingresses. As you described, in your oauth2-proxy Ingress, in the Events section, you can find this information:
All hosts are taken by other resources
The issue you have encountered here is called Host Collisions. It occurred because in both of your Ingresses you have used:
spec:
  rules:
  - host: site.example.com
In that kind of situation, the Ingress controller uses a default algorithm called Choosing the Winner.
If multiple resources contend for the same host, the Ingress Controller will pick the winner based on the creationTimestamp of the resources: the oldest resource will win.
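You can check which of your two Ingresses would win by listing them ordered by creation time, for example:
kubectl get ingress -n nginx-ingress \
  --sort-by=.metadata.creationTimestamp \
  -o custom-columns=NAME:.metadata.name,CREATED:.metadata.creationTimestamp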
A quick solution for your issue is to create one Ingress with two paths.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$request_uri"
spec:
  rules:
  - host: site.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web-service
          servicePort: 8080
      - path: /oauth2
        backend:
          serviceName: oauth2-proxy
          servicePort: 4180
  tls:
  - hosts:
    - site.example.com
    secretName: example-tls
Another way to resolve this issue is to use Merging Configuration for the Same Host; however, it shouldn't be applied in this scenario.
Lastly, you can follow the official Nginx Ingress tutorial - External OAUTH Authentication.

Kubernetes NGINX-INGRESS Do I need an NGINX Service running?

I am attempting to create an NGINX ingress (locally at first, then to be deployed to AWS behind a load balancer). However, I am new to Kubernetes, and while I understand the Ingress model for NGINX, the configurations are confusing me as to whether I should be deploying an NGINX-INGRESS Service, an Ingress, or both.
I am working with multiple Flask apps that I would like to have routed by path (/users, /content, etc.). My services are named users-service on port: 8000 (their container port is 8000 as well).
In this example an Ingress is defined. However, when I apply the ingress (in the same namespace as my Flask apps), there is no response from http://localhost:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-name
  namespace: my-namespace
spec:
  rules:
  - http:
      paths:
      - path: /users
        backend:
          serviceName: users-service
          servicePort: 8000
      - path: /content
        backend:
          serviceName: content-service
          servicePort: 8000
Furthermore, looking at the nginx-ingress "Deployment" docs, under Docker for Mac (which I assume I can use, since I am running Docker on macOS), they define a Service like so:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
---
This seems to work for me (when I open "localhost" I get the Nginx "not found" page), but it is a service in a different namespace than my apps, and there is no association between ports 80/443 and my service ports.
For reference here is one of my deployment/service definitions:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: users-service
  labels:
    app: users-service
  namespace: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: users-service
  template:
    metadata:
      labels:
        app: users-service
    spec:
      containers:
      - name: users-service
        image: users-service:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8000
---
kind: Service
apiVersion: v1
metadata:
  name: users-service
spec:
  selector:
    app: users-service
  ports:
  - protocol: TCP
    port: 8000
Update
I followed a video for setting up an NGINX controller + Ingress; here are the results. Entering "localhost/users" does not work.
describe-ingress:
(base) MacBook-Pro-2018-i9:microservices jordanbaucke$ kubectl describe ingress users-ingress
Name: users-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
*
/users users-service:8000 (10.1.0.75:8000)
Annotations:
Events: <none>
users-service:
(base) MacBook-Pro-2018-i9:microservices jordanbaucke$ kubectl describe svc users-service
Name: users-service
Namespace: default
Labels: <none>
Annotations:
Selector: app=users-service
Type: ClusterIP
IP: 10.100.213.229
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
Endpoints: 10.1.0.75:8000
Session Affinity: None
Events: <none>
nginx-ingress
(base) MacBook-Pro-2018-i9:microservices jordanbaucke$ kubectl describe svc nginx-ingress
Name: nginx-ingress
Namespace: default
Labels: <none>
Annotations:
Selector: name=nginx-ingress
Type: NodePort
IP: 10.106.167.181
LoadBalancer Ingress: localhost
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 32710/TCP
Endpoints: 10.1.0.74:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32240/TCP
Endpoints: 10.1.0.74:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Now when I try to enter the combination NodeIP:NodePort/users, it does not connect.
From inside my nginx-ingress pod, calling:
curl 10.1.0.75:8000 or curl 10.100.213.229:8000 returns results.
For nginx or any other ingress to work properly:
The nginx ingress controller needs to be deployed on the cluster.
A LoadBalancer or NodePort type Service needs to be created to expose the nginx ingress controller via ports 80 and 443, in the same namespace where the controller is deployed. LoadBalancer works in supported public clouds (AWS etc.); NodePort works when running locally.
A ClusterIP type Service needs to be created for the workload pods, in the namespace where the workload pods are deployed.
The workload pods will be exposed via the nginx ingress, and you need to create the Ingress resource in the same namespace as the ClusterIP Service of your workload pods.
You will use either the LoadBalancer (if the nginx ingress controller was exposed via LoadBalancer) or NodeIP:NodePort (if it was exposed via NodePort) to access your workload pods.
So in this case, since Docker Desktop is being used, the LoadBalancer type service (ingress-nginx) used to expose the nginx ingress controller will not work; it needs to be of NodePort type, as sketched below. Once that is done, the workload pods can be accessed via NodeIP:NodePort/users and NodeIP:NodePort/content. NodeIP:NodePort alone should give the nginx homepage as well.
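A minimal way to make that switch is to patch the existing Service (assuming the ingress-nginx Service name and namespace from the manifest quoted above):
# Change the controller Service from LoadBalancer to NodePort.
kubectl patch svc ingress-nginx -n ingress-nginx -p '{"spec": {"type": "NodePort"}}'
# Check which node ports were assigned, then use NodeIP:NodePort/users etc.
kubectl get svc ingress-nginx -n ingress-nginx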

I need help understanding kubernetes architecture best practices

I have 2 nodes on GCP in a Kubernetes cluster. I also have a load balancer in GCP. This is a regular cluster (not GKE). I am trying to expose my front-end service to the world, and I am trying nginx-ingress with a type: NodePort Service as a solution. Where should my load balancer be pointing? Is this a good architecture approach?
world --> GCP-LB --> nginx-ingress-resource(GCP k8s cluster) --> services(pods)
To access my site I would have to point the LB to the worker node IP where the nginx pod is running. Is this bad practice? I am new to this subject and trying to understand.
Thank you
deployservice:
apiVersion: v1
kind: Service
metadata:
  name: mycha-service
  labels:
    run: mycha-app
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
  selector:
    app: mycha-app

nginxservice:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  labels:
    app: nginx-ingress
spec:
  type: NodePort
  ports:
  - nodePort: 31000
    port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    name: nginx-ingress

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  labels:
    run: nginx-ingress
spec:
  type: NodePort
  ports:
  - nodePort: 31000
    port: 80
    targetPort: 3000
    protocol: TCP
  selector:
    app: nginx-ingress

nginx-resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mycha-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: mycha-service
          servicePort: 80
This configuration is not working.
When you use an ingress in front of your workload pods, the Service type for the workload pods will always be ClusterIP, because you are not exposing the pods directly outside the cluster.
But you do need to expose the ingress controller outside the cluster, either with a NodePort type Service or with a LoadBalancer type Service; for production it's recommended to use a LoadBalancer type Service.
This is the recommended pattern:
Client -> LoadBalancer -> Ingress Controller -> Kubernetes Pods
The ingress controller avoids the use of kube-proxy and the load balancing provided by kube-proxy; you can configure layer 7 load balancing in the ingress itself.
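As a sketch of that pattern applied to the manifests in the question, the Ingress could be written against the networking.k8s.io/v1 API with an ingressClassName instead of the deprecated annotation (the class name nginx is an assumption about how the controller was installed):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mycha-ingress
spec:
  ingressClassName: nginx          # assumption: the controller's IngressClass is named "nginx"
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mycha-service    # the ClusterIP Service in front of the workload pods
            port:
              number: 80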
The best practice for exposing an application is:
World > LoadBalancer/NodePort (for connecting to the cluster) > Ingress (mostly to redirect traffic) > Service
If you are using Google Cloud Platform, I would use GKE, as it is optimized for containers and configures many things automatically for you.
Regarding your issue, I also couldn't obtain an IP address for the LB (it stayed in the <Pending> state); however, you can expose your application using NodePort and the VMs' IPs. I will try a few other configurations to obtain an ExternalIP and will edit the answer.
Below is one example of how to expose your app using kubeadm on GCE.
On GCE, your VM already has an ExternalIP. This way you can just use a Service with NodePort plus an Ingress to redirect traffic to the proper services.
Deploy Nginx Ingress using Helm 3, as Tiller is not required anymore ($ helm install nginx stable/nginx-ingress).
By default it will deploy the service with the LoadBalancer type, but it won't get an externalIP and will be stuck in the <Pending> state. You have to change it to NodePort and apply the changes:
$ kubectl edit svc nginx-nginx-ingress-controller
By default this opens the Vi editor. If you want another one, you need to specify it:
$ KUBE_EDITOR="nano" kubectl edit svc nginx-nginx-ingress-controller
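Alternatively, instead of editing the Service interactively, the same change can be made with a one-line patch (assuming the Helm release name nginx used above):
$ kubectl patch svc nginx-nginx-ingress-controller -p '{"spec": {"type": "NodePort"}}'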
Now you can deploy service, deployment and ingress.
apiVersion: v1
kind: Service
metadata:
  name: fs
spec:
  selector:
    key: app
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fd
spec:
  replicas: 1
  selector:
    matchLabels:
      key: app
  template:
    metadata:
      labels:
        key: app
    spec:
      containers:
      - name: hello1
        image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mycha-deploy
  labels:
    app: mycha-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mycha-app
  template:
    metadata:
      labels:
        app: mycha-app
    spec:
      containers:
      - name: mycha-container
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mycha-service
  labels:
    app: mycha-app
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: mycha-app
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.pod.svc
    http:
      paths:
      - path: /mycha
        backend:
          serviceName: mycha-service
          servicePort: 80
      - path: /hello
        backend:
          serviceName: fs
          servicePort: 80
service/fs created
deployment.apps/fd created
deployment.apps/mycha-deploy created
service/mycha-service created
ingress.extensions/two-svc-ingress created
$ kubectl get svc nginx-nginx-ingress-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-nginx-ingress-controller NodePort 10.105.247.148 <none> 80:31143/TCP,443:32224/TCP 97m
Now you should use your VM's ExternalIP (the worker VM) with the port from the NodePort service. My VM ExternalIP: 35.228.133.12, service: 80:31143/TCP,443:32224/TCP
IMPORTANT
If you curl your VM on that port, you would get this response:
$ curl 35.228.235.99:31143
curl: (7) Failed to connect to 35.228.235.99 port 31143: Connection timed out
As you are doing this manually, you also need to add a firewall rule to allow traffic from outside on this specific port or range.
Information about creation of Firewall Rules can be found here.
If you set the proper values (open ports, source IP range 0.0.0.0/0, etc.) you will be able to reach the service from your machine.
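For example, on GCE a rule opening the default NodePort range could look like this (adjust the network name, and ideally narrow the source range instead of 0.0.0.0/0):
$ gcloud compute firewall-rules create allow-nodeports \
    --network=default \
    --allow=tcp:30000-32767 \
    --source-ranges=0.0.0.0/0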
Curl from my local machine:
$ curl -H "HOST: my.pod.svc" http://35.228.235.99:31143/mycha
<!DOCTYPE html>
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
$ curl -H "HOST: my.pod.svc" http://35.228.235.99:31143/hello
Hello, world!
Version: 1.0.0
Hostname: fd-c6d79cdf8-dq2d6
