Access K8s Services via Ingress - nginx

We have configured MetalLB because our K8s cluster is hosted on bare-metal infrastructure. It seems to be running fine, with all pods up and running.
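For reference, a Layer 2 address pool for a MetalLB of this vintage is configured with a ConfigMap roughly like the one below (a sketch; the pool range is an assumption inferred from the 1.2.3.4x LoadBalancer IPs shown later):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 1.2.3.40-1.2.3.50   # assumed range covering the LB IPs seen below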
[~]# kubectl get all -n metallb-system
NAME READY STATUS RESTARTS AGE
pod/controller-b78574c59-47qfv 1/1 Running 0 24h
pod/speaker-4q2vm 1/1 Running 0 24h
pod/speaker-m8kwk 1/1 Running 0 24h
pod/speaker-t4rvs 1/1 Running 0 24h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/speaker 3 3 3 3 3 kubernetes.io/os=linux 24h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/controller 1/1 1 1 24h
NAME DESIRED CURRENT READY AGE
replicaset.apps/controller-b78574c59 1 1 1 24h
We installed the ingress controller via Helm using the chart from https://github.com/kubernetes/ingress-nginx/releases/tag/helm-chart-3.29.0, setting hostNetwork to true, ingressClass to ingress-nginx, and kind to DaemonSet in values.yaml. The Helm installation appears to have worked: all DaemonSet pods are running and the created ingress controller service was assigned an LB IP.
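The relevant values.yaml overrides would look roughly like this (a sketch based on the chart's documented controller keys; exact nesting may vary between chart versions):

controller:
  hostNetwork: true          # bind the controller directly to the node's network
  ingressClass: ingress-nginx
  kind: DaemonSet            # run one controller pod per node instead of a Deployment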
[~]# kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/devingress-ingress-nginx-controller-c2x42 1/1 Running 0 18h
pod/devingress-ingress-nginx-controller-wtmgw 1/1 Running 0 18h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/devingress-ingress-nginx-controller LoadBalancer x.x.x.x 1.2.3.40 80:32386/TCP,443:30020/TCP 18h
service/devingress-ingress-nginx-controller-admission ClusterIP x.x.x.x <none> 443/TCP 18h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/devingress-ingress-nginx-controller 2 2 2 2 2 kubernetes.io/os=linux 18h
We then deployed two workloads: nginx, exposed with a LoadBalancer service, and nginx-deploy-main, exposed with a ClusterIP service.
[~]# kubectl get all -n default
NAME READY STATUS RESTARTS AGE
pod/nginx-854cf6b4d7-lv5ss 1/1 Running 0 18h
pod/nginx-deploy-main-6b5457fbb5-7tg9z 1/1 Running 0 18h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx LoadBalancer x.x.x.x 1.2.3.41 8080:31101/TCP 18h
service/nginx-deploy-main ClusterIP x.x.x.x <none> 80/TCP 18h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 1/1 1 1 18h
deployment.apps/nginx-deploy-main 1/1 1 1 18h
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-854cf6b4d7 1 1 1 18h
replicaset.apps/nginx-deploy-main-6b5457fbb5 1 1 1 18h
Below is the ingress resource set up to access nginx-deploy-main.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  ingressClassName: nginx
  rules:
  - host: nginx-main.int.org.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-deploy-main
          servicePort: 80
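As an aside, the networking.k8s.io/v1beta1 Ingress API is deprecated and was removed in Kubernetes 1.22; the same resource in networking.k8s.io/v1 would look roughly like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  ingressClassName: nginx
  rules:
  - host: nginx-main.int.org.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-deploy-main
            port:
              number: 80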
And the ingress resource seems to have been created correctly, pointing to the nginx-deploy-main service.
[~]# kubectl get ing -n default
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-resource nginx nginx-main.int.org.com 80 19h
[~]# kubectl describe ing/ingress-resource -n default
Name: ingress-resource
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
nginx-main.int.org.com
/ nginx-deploy-main:80 (x.x.x.x:80)
Annotations: kubernetes.io/ingress.class: nginx
Events: <none>
Outside the K8s cluster, we have nginx set up as a reverse proxy, with DNS resolution for the domain int.org.com.
Below is the nginx configuration that should let me hit http://nginx-main.int.org.com and get a response, but instead it returns a 404.
upstream nginx-main.int.org.com {
    server 1.2.3.40:80; ## Ingress Controller Service IP
}
server {
    listen 80;
    server_name nginx-main.int.org.com;
    location / {
        proxy_pass http://nginx-main.int.org.com;
    }
}
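As a quick way to tell whether the 404 originates inside the cluster rather than at the external proxy, one can curl the controller's LoadBalancer IP directly with the expected Host header (an illustrative check):

curl -i -H "Host: nginx-main.int.org.com" http://1.2.3.40/
# a 404 here too would mean the ingress controller itself is not matching the rule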
However, when I access the nginx pod (not nginx-main) through its LoadBalancer service IP with the configuration below, it responds and works just fine:
upstream nginx.int.org.com {
    server 1.2.3.41:8080;
}
server {
    listen 80;
    server_name nginx.int.org.com;
    location / {
        proxy_pass http://nginx.int.org.com;
    }
}
Am I missing something here with regard to the ingress controller or the ingress resource? Port forwarding works fine, and I am able to access the services that way.
This is a blocker, so any help or documentation reference would be really useful.

We tried another ingress controller, https://github.com/nginxinc/kubernetes-ingress, and were able to make it work.
Below are the steps we followed.
[~] git clone https://github.com/nginxinc/kubernetes-ingress/
[~] cd kubernetes-ingress/deployments
[~] git checkout v1.11.1
[~] kubectl apply -f common/ns-and-sa.yaml
[~] kubectl apply -f rbac/rbac.yaml
[~] kubectl apply -f common/default-server-secret.yaml
[~] kubectl apply -f common/nginx-config.yaml
[~] kubectl apply -f common/ingress-class.yaml
We created the DaemonSet pods with an extra argument, --enable-custom-resources=false, added to the YAML because of errors in the controller logs.
Refer: "Kubernetes cluster working but getting this error from the NGINX controller"
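The change is roughly the following excerpt (a sketch; the surrounding fields and the first two args are assumed to match the repo's daemon-set/nginx-ingress.yaml defaults):

      containers:
      - name: nginx-ingress
        image: nginx/nginx-ingress:1.11.1
        args:
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
        - -enable-custom-resources=false   # added to silence the custom-resource errors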
[~] kubectl apply -f daemon-set/nginx-ingress.yaml
[~] kubectl get all -n nginx-ingress -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-ingress-gd8gw 1/1 Running 0 3h55m x.x.x.x worker1 <none> <none>
pod/nginx-ingress-kr9lx 1/1 Running 0 3h55m x.x.x.x worker2 <none> <none>
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
daemonset.apps/nginx-ingress 2 2 2 2 2 <none> 5h14m nginx-ingress nginx/nginx-ingress:1.11.1 app=nginx-ingress
Hitting the respective worker nodes on port 80 and getting a 404 response means the controller itself is working fine.
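For example (assuming 1.2.3.41 and 1.2.3.42 are the worker node IPs, as in the proxy config further down):

[~] curl -i http://1.2.3.41/
[~] curl -i http://1.2.3.42/
# expect "HTTP/1.1 404 Not Found" served by the controller's default server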
We deployed a sample application from https://github.com/vipin-k/Ingress-Controller-v1.9.0/blob/main/hotel.yml, updating the host entry within the Ingress object to hotel.int.org.com.
[~] kubectl create -f hotel.yaml
[~] kubectl get all -n hotel -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/hotel-65d644c8f7-bj597 1/1 Running 0 3h51m x.x.x.x worker1 <none> <none>
pod/hotel-65d644c8f7-csvgp 1/1 Running 0 3h51m x.x.x.x worker2 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/hotel-svc ClusterIP x.x.x.x <none> 80/TCP 3h51m app=hotel
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/hotel 2/2 2 2 3h51m hotel nginxdemos/hello:plain-text app=hotel
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/hotel-65d644c8f7 2 2 2 3h51m hotel nginxdemos/hello:plain-text app=hotel,pod-template-hash=65d644c8f7
[~] kubectl get ing -n hotel
NAME CLASS HOSTS ADDRESS PORTS AGE
hotel-ingress nginx hotel.int.org.com 80 3h52m
[~] kubectl describe ing hotel-ingress -n hotel
Name: hotel-ingress
Namespace: hotel
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
hotel.int.org.com
/ hotel-svc:80 (x.x.x.x:80,x.x.x.x:80)
Annotations:
Events: <none>
We updated the external nginx configuration, with domain resolution enabled:
upstream hotel.int.org.com {
    server 1.2.3.41:80; # worker1
    server 1.2.3.42:80; # worker2
}
server {
    listen 80;
    server_name hotel.int.org.com;
    location / {
        proxy_pass http://hotel.int.org.com;
    }
}
After restarting nginx, we verified via browser and curl that responses are served by the running pods in the hotel namespace:
[~]# curl hotel.int.org.com
Server address: x.x.x.x:80
Server name: hotel-65d644c8f7-bj597
Date: 28/Apr/2021:05:47:15 +0000
URI: /
Request ID: 28d5cfab4ea28beea49879422b7e8f4c
[~]# curl hotel.int.org.com
Server address: x.x.x.x:80
Server name: hotel-65d644c8f7-csvgp
Date: 28/Apr/2021:05:52:06 +0000
URI: /
Request ID: 4135cacf83f8bf41c9677104500e610b
We are exploring MetalLB as well and will post the solution once it works.

Related

The External IP of my Nginx load balancer does not work

OK, let me explain my problem...
I have deployed a Kind Kubernetes cluster. This is my config:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: kind
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
  # Mongo
  - containerPort: 30005
    hostPort: 27017
    protocol: TCP
- role: worker
  extraMounts:
  - hostPath: C:/Kind
    containerPath: /data
- role: worker
  extraMounts:
  - hostPath: C:/Kind
    containerPath: /data
- role: worker
  extraMounts:
  - hostPath: C:/Kind
    containerPath: /data
The next step is to deploy MetalLB (the load balancer). I used these YAMLs:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
To configure layer 2, I set an IP range inside the Kind network. To find it:
docker network inspect -f '{{.IPAM.Config}}' kind
This command shows:
[{172.18.0.0/16 172.18.0.1 map[]} {fc00:f853:ccd:e793::/64 fc00:f853:ccd:e793::1 map[]}]
So, I set the following ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.18.255.200-172.18.255.250
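Then applied it (the filename here is illustrative):

kubectl apply -f metallb-config.yaml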
OK, the last step is to install the Nginx controller, which I did with the following command:
helm install nginx-ingress-controller bitnami/nginx-ingress-controller
Everything deployed OK, and I can see it all with this command:
kubectl get all
This command shows:
NAME READY STATUS RESTARTS AGE
pod/ddclient-deployment-fcbf95d66-ndldk 1/1 Running 0 51m
pod/nginx-ingress-controller-6b9cf4684f-7hsw2 1/1 Running 0 64s
pod/nginx-ingress-controller-default-backend-6798d86668-7b552 1/1 Running 0 64s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h
service/nginx-ingress-controller LoadBalancer 10.96.49.179 172.18.255.200 80:30307/TCP,443:31387/TCP 64s
service/nginx-ingress-controller-default-backend ClusterIP 10.96.247.49 <none> 80/TCP 64s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ddclient-deployment 1/1 1 1 51m
deployment.apps/nginx-ingress-controller 1/1 1 1 64s
deployment.apps/nginx-ingress-controller-default-backend 1/1 1 1 64s
NAME DESIRED CURRENT READY AGE
replicaset.apps/ddclient-deployment-fcbf95d66 1 1 1 51m
replicaset.apps/nginx-ingress-controller-6b9cf4684f 1 1 1 64s
replicaset.apps/nginx-ingress-controller-default-backend-6798d86668 1 1 1 64s
Well, here is the problem. In theory, if you put the load balancer external IP:
service/nginx-ingress-controller LoadBalancer 10.96.49.179 172.18.255.200 80:30307/TCP,443:31387/TCP 64s
in the browser, you should see the nginx web page. I can't; I just get an error message saying
"ERR_CONNECTION_TIMED_OUT".
I don't know what I am missing...
Thanks for the help!

Ingress not forwarding the requests - Docker desktop for Windows and kubernetes

EDIT:
I deleted minikube, enabled kubernetes in Docker desktop for Windows and installed ingress-nginx manually.
$helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace
Release "ingress-nginx" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ServiceAccount "ingress-nginx" in namespace "ingress-nginx" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "ingress-nginx"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "ingress-nginx"
It gave me an error, but I think that's because I had already installed it before, since:
$kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.106.222.233 localhost 80:30199/TCP,443:31093/TCP 11m
ingress-nginx-controller-admission ClusterIP 10.106.52.106 <none> 443/TCP 11m
Then I applied all my YAML files again, but this time the ingress is not getting an address:
$kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
myapp-ingress <none> myapp.com 80 10m
I am using Docker Desktop (Windows) and installed the nginx-ingress controller via the minikube addons enable command:
$kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create--1-lp4md 0/1 Completed 0 67m
ingress-nginx-admission-patch--1-jdkn7 0/1 Completed 1 67m
ingress-nginx-controller-5f66978484-6mpfh 1/1 Running 0 67m
And applied all my yaml files:
$kubectl get svc --all-namespaces -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default event-service-svc ClusterIP 10.108.251.79 <none> 80/TCP 16m app=event-service-app
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16m <none>
default mssql-clusterip-srv ClusterIP 10.98.10.22 <none> 1433/TCP 16m app=mssql
default mssql-loadbalancer LoadBalancer 10.109.106.174 <pending> 1433:31430/TCP 16m app=mssql
default user-service-svc ClusterIP 10.111.128.73 <none> 80/TCP 16m app=user-service-app
ingress-nginx ingress-nginx-controller NodePort 10.101.112.245 <none> 80:31583/TCP,443:30735/TCP 68m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.105.169.167 <none> 443/TCP 68m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 72m k8s-app=kube-dns
All pods and services seem to be running properly. I checked the pod logs; all migrations etc. have worked and the app is up and running. But when I try to send an HTTP request, I get a socket hang up error. I've checked the logs of all pods and couldn't find anything useful.
$kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
myapp-ingress nginx myapp.com localhost 80 74s
This one is also a bit weird: I was expecting ADDRESS to be set to an IP, not to localhost. So adding a 127.0.0.1 entry for myapp.com in /etc/hosts also didn't seem right.
My question is: what might I be doing wrong? And how can I trace where my requests are being forwarded to? (An illustrative check follows the YAML files below.)
ingress-svc.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /api/Users
        pathType: Prefix
        backend:
          service:
            name: user-service-svc
            port:
              number: 80
      - path: /api/Events
        pathType: Prefix
        backend:
          service:
            name: event-service-svc
            port:
              number: 80
events-depl.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-service-app
  labels:
    app: event-service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: event-service-app
  template:
    metadata:
      labels:
        app: event-service-app
    spec:
      containers:
      - name: event-service-app
        image: ghcr.io/myapp/event-service:master
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: myapp
---
apiVersion: v1
kind: Service
metadata:
  name: event-service-svc
spec:
  selector:
    app: event-service-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
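As an illustrative way to trace where requests go, one can bypass DNS and /etc/hosts entirely and set the Host header explicitly against the controller (assuming it is reachable on localhost:80, as the LoadBalancer output above suggests):

curl -v -H "Host: myapp.com" http://localhost/api/Users
# -v shows which server answers; a controller 404 vs. a backend response narrows down the hop that fails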
Reproduction
I reproduced the case using minikube v1.24.0, Docker desktop 4.2.0, engine 20.10.10
First, localhost in the ingress ADDRESS column appears by design; it doesn't really matter what IP address is behind the domain in /etc/hosts. I added a different one for testing and it still showed localhost. Only MetalLB will provide an IP address from a configured network.
What happens
When the minikube driver is docker, minikube creates one big container (the VM) in which the kubernetes components run. This can be checked by running the docker ps command on the host system:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f087dc669944 gcr.io/k8s-minikube/kicbase:v0.0.28 "/usr/local/bin/entr…" 16 minutes ago Up 16 minutes 127.0.0.1:59762->22/tcp, 127.0.0.1:59758->2376/tcp, 127.0.0.1:59760->5000/tcp, 127.0.0.1:59761->8443/tcp, 127.0.0.1:59759->32443/tcp minikube
Then minikube ssh gets you inside this container, where docker ps shows all the kubernetes containers.
Moving forward. Before introducing ingress, it's already clear that even NodePort doesn't work as intended. Let's check it.
There are two ways to get the minikube VM IP:
run minikube ip
kubectl get nodes -o wide and find the node's IP
Requests should then go to minikube_IP:NodePort, but this doesn't work. It happens because the docker containers inside the minikube VM are not exposed outside of the VM, which is itself just another docker container.
On minikube, to access services within the cluster there is a special command, minikube service %service_name%, which creates a direct tunnel to the service inside the minikube VM (you can see that the output contains the service URL with the NodePort that is supposed to be working):
$ minikube service echo
|-----------|------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------|-------------|---------------------------|
| default | echo | 8080 | http://192.168.49.2:32034 |
|-----------|------|-------------|---------------------------|
* Starting tunnel for service echo.
|-----------|------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------|-------------|------------------------|
| default | echo | | http://127.0.0.1:61991 |
|-----------|------|-------------|------------------------|
* Opening service default/echo in default browser...
! Because you are using a Docker driver on windows, the terminal needs to be open to run it
And now it's available on host machine:
$ curl http://127.0.0.1:61991/
StatusCode : 200
StatusDescription : OK
Adding ingress
Moving forward and adding ingress.
$ minikube addons enable ingress
$ kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default echo NodePort 10.111.57.237 <none> 8080:32034/TCP 25m
ingress-nginx ingress-nginx-controller NodePort 10.104.52.175 <none> 80:31041/TCP,443:31275/TCP 2m12s
Trying to get any response from ingress by hitting minikube_IP:NodePort with no luck:
$ curl 192.168.49.2:31041
curl : Unable to connect to the remote server
At line:1 char:1
+ curl 192.168.49.2:31041
Trying to create a tunnel with minikube service command:
$ minikube service ingress-nginx-controller -n ingress-nginx
|---------------|--------------------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|---------------|--------------------------|-------------|---------------------------|
| ingress-nginx | ingress-nginx-controller | http/80 | http://192.168.49.2:31041 |
| | | https/443 | http://192.168.49.2:31275 |
|---------------|--------------------------|-------------|---------------------------|
* Starting tunnel for service ingress-nginx-controller.
|---------------|--------------------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|---------------|--------------------------|-------------|------------------------|
| ingress-nginx | ingress-nginx-controller | | http://127.0.0.1:62234 |
| | | | http://127.0.0.1:62235 |
|---------------|--------------------------|-------------|------------------------|
* Opening service ingress-nginx/ingress-nginx-controller in default browser...
* Opening service ingress-nginx/ingress-nginx-controller in default browser...
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
And we get a 404 from ingress-nginx, which means we can send requests to the ingress:
$ curl http://127.0.0.1:62234
curl : 404 Not Found
nginx
At line:1 char:1
+ curl http://127.0.0.1:62234
Solutions
Above I explained what happens. Here are three solutions to get it working:
Use another minikube driver (e.g. virtualbox; I used hyperv since my laptop has Windows 10 Pro).
minikube ip will then return a "normal" IP address of the virtual machine and all network functionality will work just fine. You will need to add this IP address into /etc/hosts for the domain used in the ingress rule.
Note! Do this even though localhost was shown in the ADDRESS column of the kubectl get ing ingress output.
Use the built-in kubernetes feature in Docker desktop for Windows.
You will need to manually install ingress-nginx and change the ingress-nginx-controller service type from NodePort to LoadBalancer so that it is available on localhost and working (see the patch sketch after this list). Please find my other answer about Docker desktop for Windows.
(testing only) - use port-forward
It's almost exactly the same idea as the minikube service command, but with more control. You open a tunnel from host port 80 to the ingress-nginx-controller service (and eventually the pod) on port 80 as well. /etc/hosts should contain a 127.0.0.1 test.domain entry.
$ kubectl port-forward service/ingress-nginx-controller -n ingress-nginx 80:80
Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80
And testing it works:
$ curl test.domain
StatusCode : 200
StatusDescription : OK
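For solution 2, a minimal sketch of that service-type change with kubectl patch (service name and namespace assumed from a default ingress-nginx install):

kubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{"spec":{"type":"LoadBalancer"}}'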
Update for kubernetes in Docker desktop on Windows and ingress:
On modern ingress-nginx versions, .spec.ingressClassName should be added to ingress rules (see the latest updates), so an ingress rule should look like:
apiVersion: networking.k8s.io/v1
kind: Ingress
...
spec:
  ingressClassName: nginx # can be checked by kubectl get ingressclass
  rules:
  - host: myapp.com
    http:
      ...

Error 404 after deploying and exposing Nginx pod

I have deployed and exposed Nginx with the following commands:
sudo kubectl create deployment mynginx1 --image=nginx
sudo kubectl expose deployment mynginx1 --type NodePort --port 8080
I access it using http://<master node IP>:<port>, i.e. http://172.17.135.42:31788
But I am getting Error 404. Help appreciated.
gtan#master:~$ kubectl get pods -owide -A
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default mynginx1-f544c49cb-g92w2 1/1 Running 0 3m19s 172.168.10.2 slave1 <none> <none>
kube-system coredns-66bff467f8-92r4n 1/1 Running 0 7m56s 172.168.10.2 master <none> <none>
kube-system coredns-66bff467f8-gc7tc 1/1 Running 0 7m56s 172.168.10.3 master <none> <none>
kube-system etcd-master 1/1 Running 0 8m6s 172.17.82.100 master <none> <none>
kube-system kube-apiserver-master 1/1 Running 0 8m6s 172.17.82.100 master <none> <none>
kube-system kube-controller-manager-master 1/1 Running 0 8m6s 172.17.82.100 master <none> <none>
kube-system kube-flannel-ds-amd64-24pwc 1/1 Running 3 4m58s 172.17.82.110 slave1 <none> <none>
kube-system kube-flannel-ds-amd64-q5qwg 1/1 Running 0 5m28s 172.17.82.100 master <none> <none>
kube-system kube-proxy-hf59b 1/1 Running 0 4m58s 172.17.82.110 slave1 <none> <none>
kube-system kube-proxy-r7pz6 1/1 Running 0 7m56s 172.17.82.100 master <none> <none>
kube-system kube-scheduler-master 1/1 Running 0 8m5s 172.17.82.100 master <none> <none>
gtan#master:~$
gtan#master:~$ curl -IL http://172.17.82.100:30131
curl: (7) Failed to connect to 172.17.82.100 port 30131: Connection refused where "172.17.82.100" is the master node ip address.
gtan#master:~$ kubectl get services -o wide -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 15m <none>
default mynginx1 NodePort 10.102.106.240 <none> 80:30131/TCP 10m app=mynginx1
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 15m k8s-app=kube-dns
What is the architecture of your setup? Do you have the worker node and master node on the same machine?
Check the nginx pod status with:
kubectl get pods
If the pod is running without issues, then hit your worker machine IP with the NodePort: http://Workernode_IP:NodePort
The default nginx container port is 80, as you can see here. Just change the container port from 8080 to 80 in your second command:
sudo kubectl expose deployment mynginx1 --type NodePort --port 80
and try to reach the service using the NodePort shown in the output of the command, for example:
$kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mynginx1 NodePort 10.97.142.170 <none> 80:31591/TCP 8m9s
Alternatively, you can use this YAML spec to configure your pod and service:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
Testing using curl:
$ curl -IL http://localhost:31591
HTTP/1.1 200 OK
Server: nginx/1.17.10
Date: Tue, 12 May 2020 10:05:04 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
Connection: keep-alive
ETag: "5e95c66e-264"
Accept-Ranges: bytes
Also, I recommend you take some time to look at these documentation pages:
Kubernetes Concepts
Services

Kubernetes MetalLB External IP not reachable

I can't access the network IP assigned by the MetalLB load balancer.
I created a Kubernetes cluster with k3s. It has 1 master and 1 worker, each with its own private IP.
Master 192.168.0.13
Worker 192.168.0.14
I installed k3s with INSTALL_K3S_EXEC=" --no-deploy servicelb --no-deploy traefik"
Now I am trying to deploy an app using MetalLB and nginx ingress:
--set configInline.address-pools[0].name=default \
--set configInline.address-pools[0].protocol=layer2 \
--set configInline.address-pools[0].addresses[0]=192.168.0.21-192.168.0.30
helm install nginx-ingress stable/nginx-ingress --namespace kube-system \
  --set controller.image.repository=quay.io/kubernetes-ingress-controller/nginx-ingress-controller \
  --set controller.image.tag=0.30.0 \
  --set controller.image.runAsUser=33 \
  --set defaultBackend.enabled=false
I can see every pod up and running:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-d798c9dd-lsdnp 1/1 Running 5 37h 10.42.0.25 c271-k3s-ocrh <none> <none>
local-path-provisioner-58fb86bdfd-bcpl7 1/1 Running 5 37h 10.42.0.22 c271-k3s-ocrh <none> <none>
metrics-server-6d684c7b5-v9tmh 1/1 Running 5 37h 10.42.0.24 c271-k3s-ocrh <none> <none>
metallb-speaker-4kbmw 1/1 Running 0 4m7s 192.168.0.14 c271-k3s-agent <none> <none>
metallb-controller-75bf779d4f-nb47l 1/1 Running 0 4m7s 10.42.1.45 c271-k3s-agent <none> <none>
metallb-speaker-776p9 1/1 Running 0 4m7s 192.168.0.13 c271-k3s-ocrh <none> <none>
nginx-ingress-default-backend-5b967cf596-554bq 1/1 Running 0 98s 10.42.1.46 c271-k3s-agent <none> <none>
nginx-ingress-controller-674675d5b6-blndp 1/1 Running 0 98s 10.42.1.47 c271-k3s-agent <none> <none>
The app gets the IP 192.168.0.21:
❯ kubectl get services -n kube-system -l app=nginx-ingress -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
nginx-ingress-default-backend ClusterIP 10.43.170.195 <none> 80/TCP 112s app=nginx-ingress,component=default-backend,release=nginx-ingress
nginx-ingress-controller LoadBalancer 10.43.220.166 192.168.0.21 80:31735/TCP,443:31566/TCP 111s app=nginx-ingress,component=controller,release=nginx-ingress
I can access the app from the master and the worker by curling the nginx controller pod:
HTTP/1.1 200 OK
Server: nginx/1.17.8
Date: Sat, 21 Mar 2020 10:43:34 GMT
Content-Type: text/html
Content-Length: 153
Connection: keep-alive
But the IP 192.168.0.21 is not reachable from the local network.
Diagnosis: DHCP is on, and 192.168.0.21-192.168.0.30 is absolutely free. When I try to allocate 192.168.0.21 to the master or the agent via netplan config, they do get the IP.
Please guide me; what am I missing?
You need to make sure that the source IP address (the external IP assigned by MetalLB) is preserved. To achieve this, set the externalTrafficPolicy field of the ingress controller Service spec to Local. For example:
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    helm.sh/chart: webapp-0.1.0
    app.kubernetes.io/name: webapp
    app.kubernetes.io/instance: my-app
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: http
    protocol: TCP
    name: http
  selector:
    app.kubernetes.io/name: webapp
    app.kubernetes.io/instance: my-app
  externalTrafficPolicy: Local
The default value of the externalTrafficPolicy field is Cluster, so change it to Local.
In my setup with Cilium and the HAProxy ingress controller, I had to change externalTrafficPolicy from Local to Cluster:
kubectl --namespace ingress-controller patch svc haproxy-ingress \
-p '{"spec":{"externalTrafficPolicy":"Cluster"}}'
For two years I've been using MetalLB in my home lab, and I didn't hit this error (although I got other errors, for example MetalLB failing to assign an IP address from the pool).
I want to share my current setup with folks who are still struggling:
helm install --create-namespace metallb metallb/metallb -n metallb-system -f values.yaml
configInline:
  address-pools:
  - name: default
    protocol: layer2
    addresses:
    - 192.168.0.21/30
    # can use a range like 192.168.0.21-192.168.0.24 too
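Note that configInline applies to charts for MetalLB before v0.13; from v0.13 on, the inline config was replaced by CRDs, so an equivalent pool would look roughly like this (a sketch, not taken from this setup):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 192.168.0.21/30
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system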
Debugging: try to get logs from all the pods in the metallb namespace.
kail -n metallb
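If kail isn't available, plain kubectl works too (the label selector is an assumption based on the chart's usual labels; verify with kubectl get pods -n metallb-system --show-labels):

kubectl logs -n metallb-system -l app.kubernetes.io/name=metallb --all-containers --tail=100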
K8S installed with calico using https://github.com/geerlingguy/ansible-role-kubernetes
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1", GitCommit:"3ddd0f45aa91e2f30c70734b175631bec5b5825a", GitTreeState:"clean", BuildDate:"2022-05-24T12:17:11Z", GoVersion:"go1.18.2", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:23:26Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
Maybe switching externalTrafficPolicy between Local and Cluster may help; I didn't try it, however, since my setup works out of the box.
Good luck.

Kubernetes whitelist-source-range blocks instead of whitelisting IPs

Running Kubernetes on GKE.
Installed the Nginx controller from the latest stable release using helm.
Everything works well, except that adding the whitelist-source-range annotation gets me completely locked out of my service.
Ingress config
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: staging-ingress
  namespace: staging
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/whitelist-source-range: "x.x.x.x, y.y.y.y"
spec:
  rules:
  - host: staging.com
    http:
      paths:
      - path: /
        backend:
          serviceName: staging-service
          servicePort: 80
I connected to the controller pod and checked the nginx config and found this:
# Deny for staging.com/
geo $the_real_ip $deny_5b3266e9d666401cb7ac676a73d8d5ae {
    default 1;
    x.x.x.x 0;
    y.y.y.y 0;
}
It looks like it is locking me out instead of whitelisting these IPs, and it's locking out all other addresses too... I get a 403 when coming from the staging.com host.
I figured it out by myself: the service has to have externalTrafficPolicy: Local set, so that the actual client IP is used instead of the internal cluster IP.
To accomplish this, run:
kubectl patch svc nginx-ingress-controller -p '{"spec":{"externalTrafficPolicy":"Local"}}'
Your nginx controller service has to be set to externalTrafficPolicy: Local, which means the actual client IP will be used instead of the cluster's internal IP.
You need to get the real service name from the kubectl get svc command. The service will be something like:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nobby-leopard-nginx-ingress-controller LoadBalancer 10.0.139.37 40.83.166.29 80:31223/TCP,443:30766/TCP 2d
nobby-leopard-nginx-ingress-controller is the service name you want to use.
To finish, run:
kubectl patch svc nobby-leopard-nginx-ingress-controller -p '{"spec":{"externalTrafficPolicy":"Local"}}'
When setting up a new nginx ingress controller, you can use the command below:
helm install stable/nginx-ingress \
  --namespace kube-system \
  --set controller.service.externalTrafficPolicy=Local
to have the nginx ingress controller accept the whitelist right after installation.
