I'm trying to access my deployment but can't reach it over the NodePort.
curl 10.99.12.214:30991
curl: (7) Failed connect to 10.99.12.214:30991; No route to host
kubectl get ep
NAME ENDPOINTS AGE
dark-room-dep 172.17.0.10:8085,172.17.0.9:8085 19h
kubernetes 10.66.222.223:6443 8d
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dark-room-dep NodePort 10.99.12.214 <none> 8085:30991/TCP 19h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
kubectl cluster-info
Kubernetes master is running at https://10.66.222.223:6443
Heapster is running at https://10.66.222.223:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://10.66.222.223:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
monitoring-grafana is running at https://10.66.222.223:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at https://10.66.222.223:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy
kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
dark-room-dep 2 2 2 2 20h
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default dark-room-dep-577bf64bb8-9n5p7 1/1 Running 0 20h
default dark-room-dep-577bf64bb8-jmppg 1/1 Running 0 20h
kube-system etcd-localhost.localdomain 1/1 Running 6 8d
kube-system heapster-69b5d4974d-qvtrj 1/1 Running 0 1d
kube-system kube-apiserver-localhost.localdomain 1/1 Running 5 8d
kube-system kube-controller-manager-localhost.localdomain 1/1 Running 4 8d
kube-system kube-dns-86f4d74b45-njzj9 3/3 Running 0 1d
kube-system kube-flannel-ds-h9c2m 1/1 Running 3 6d
kube-system kube-flannel-ds-tcbd7 1/1 Running 5 8d
kube-system kube-proxy-7v6mf 1/1 Running 3 6d
kube-system kube-proxy-hwbwl 1/1 Running 4 8d
kube-system kube-scheduler-localhost.localdomain 1/1 Running 6 8d
kube-system kubernetes-dashboard-7d5dcdb6d9-q42q5 1/1 Running 0 1d
kube-system monitoring-grafana-69df66f668-zf2kc 1/1 Running 0 1d
kube-system monitoring-influxdb-78d4c6f5b6-nhdbx 1/1 Running 0 1d
route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.66.222.1 0.0.0.0 UG 100 0 0 ens192
10.66.222.0 0.0.0.0 255.255.254.0 U 100 0 0 ens192
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.25.1.0 172.25.1.0 255.255.255.0 UG 0 0 0 flannel.1
kubectl get nodes --all-namespaces
NAME STATUS ROLES AGE VERSION
k8s-01 Ready <none> 6d v1.10.2
localhost.localdomain Ready master 8d v1.10.2
from k8s-master:
curl 10.66.222.223:30991
curl: (7) Failed connect to 10.66.222.223:30991; No route to host
from a random PC:
PS C:\Users\XXX> curl 10.66.222.223:30991
curl : Unable to connect to the remote host
At line:1 char:1
+ curl 10.66.222.223:30991
kubectl describe svc dark-room
Name: dark-room-dep
Namespace: default
Labels: app=dark-room
Annotations: <none>
Selector: app=dark-room
Type: NodePort
IP: 10.99.12.214
Port: <unset> 8085/TCP
TargetPort: 8085/TCP
NodePort: <unset> 30991/TCP
Endpoints: 172.17.0.10:8085,172.17.0.9:8085
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
cat dark-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: dark-room
  namespace: default
  labels:
    run: dark-room
    app: dark-room-svc
spec:
  externalIPs:
  - 10.66.222.223
  type: ClusterIP
  ports:
  - name: http
    port: 8085
    nodePort: 8086
    targetPort: http
    protocol: TCP
  selector:
    run: dark-room
    app: dark-room
A NodePort service binds the external port on every node's IP.
Try
curl <node external IP>:<external port>
curl 10.66.222.223:30991
or
curl <service internal IP>:<internal port>
curl 10.99.12.214:8085
The nodePort range is 30000-32767. Try replacing targetPort: http
with targetPort: 80 – gavinlin
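For illustration, a corrected Service along the lines of that comment might look like the sketch below: type NodePort, a nodePort inside the 30000-32767 range, and a numeric targetPort of 80 (assuming the container really listens on 80, as the follow-up below confirms). The selector is taken from the service that is already deployed.
apiVersion: v1
kind: Service
metadata:
  name: dark-room
  namespace: default
spec:
  type: NodePort
  selector:
    app: dark-room
  ports:
  - name: http
    port: 8085
    targetPort: 80
    nodePort: 30991
    protocol: TCP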
Thank you.
It works when I force port 80:
kubectl expose deployment dark-room-dep --type=NodePort --port=80 --name=dark-svc
But I don't understand why it doesn't work on any other port I try (I have no firewall and setenforce 0).
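For what it's worth, kubectl expose also lets the Service port differ from the container port; a variant that keeps the Service on 8085 while still targeting the container's port 80 would be:
kubectl expose deployment dark-room-dep --type=NodePort --port=8085 --target-port=80 --name=dark-svc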
OK, let me explain my problem...
I have deployed a Kubernetes cluster with kind. This is my config:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: kind
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
  # Mongo
  - containerPort: 30005
    hostPort: 27017
    protocol: TCP
- role: worker
  extraMounts:
  - hostPath: C:/Kind
    containerPath: /data
- role: worker
  extraMounts:
  - hostPath: C:/Kind
    containerPath: /data
- role: worker
  extraMounts:
  - hostPath: C:/Kind
    containerPath: /data
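Assuming the config above is saved as kind-config.yaml, the cluster is created with:
kind create cluster --config kind-config.yaml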
The next step is to deploy MetalLB (the load balancer). I used these YAMLs:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
To configure layer 2 mode, I set an IP range inside the kind network. To find it:
docker network inspect -f '{{.IPAM.Config}}' kind
This command shows:
[{172.18.0.0/16 172.18.0.1 map[]} {fc00:f853:ccd:e793::/64 fc00:f853:ccd:e793::1 map[]}]
So, I set the following ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.18.255.200-172.18.255.250
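Assuming the ConfigMap above is saved as metallb-config.yaml, it is applied with:
kubectl apply -f metallb-config.yaml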
OK, the last step is to install the NGINX ingress controller, which I did with the following command:
helm install nginx-ingress-controller bitnami/nginx-ingress-controller
Everything deployed OK, and I can see it all with this command:
kubectl get all
It shows:
NAME READY STATUS RESTARTS AGE
pod/ddclient-deployment-fcbf95d66-ndldk 1/1 Running 0 51m
pod/nginx-ingress-controller-6b9cf4684f-7hsw2 1/1 Running 0 64s
pod/nginx-ingress-controller-default-backend-6798d86668-7b552 1/1 Running 0 64s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h
service/nginx-ingress-controller LoadBalancer 10.96.49.179 172.18.255.200 80:30307/TCP,443:31387/TCP 64s
service/nginx-ingress-controller-default-backend ClusterIP 10.96.247.49 <none> 80/TCP 64s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ddclient-deployment 1/1 1 1 51m
deployment.apps/nginx-ingress-controller 1/1 1 1 64s
deployment.apps/nginx-ingress-controller-default-backend 1/1 1 1 64s
NAME DESIRED CURRENT READY AGE
replicaset.apps/ddclient-deployment-fcbf95d66 1 1 1 51m
replicaset.apps/nginx-ingress-controller-6b9cf4684f 1 1 1 64s
replicaset.apps/nginx-ingress-controller-default-backend-6798d86668 1 1 1 64s
Well, here is the problem. In theory, if you put the load balancer's external IP:
service/nginx-ingress-controller LoadBalancer 10.96.49.179 172.18.255.200 80:30307/TCP,443:31387/TCP 64s
in the browser, you should see the nginx welcome page. I can't; I just get an error saying
"ERR_CONNECTION_TIMED_OUT".
I don't know what I am missing...
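One check that could narrow this down (not part of the original post): curl the MetalLB address from inside the Docker network that kind uses (the network is named kind, per the inspect command above), to see whether the LoadBalancer IP answers there even when the host cannot route to it:
docker run --rm --network kind curlimages/curl -sI http://172.18.255.200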
Thanks for the help!
I'd like to put together a dev environment with a Kubernetes cluster (I intend to use Microk8s with multiple nodes in the end). The reason is that I'll have a prod system running on this cluster alongside test environments, and eventually, when a new PR is created, a totally new system will be created based on the PR id and its URL will be different. Something like this: prod system: http://my-system.com, test: http://test-pr-63.my-system.com.
But first I need a Kubernetes cluster with a minimal ingress listening on an IP address and able to route traffic to services/pods based on URL. I'm not well versed in the Kubernetes space.
The end result is always connection refused when I call the IP via curl, and I don't know why.
I do the following steps to install the system on a newly created Microk8s environment on my MacBook Pro.
Status
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
ha-cluster # Configure high availability on the current node
disabled:
Enable services
~/: microk8s enable ingress dns storage
Enabling Ingress
ingressclass.networking.k8s.io/public created
namespace/ingress created
serviceaccount/nginx-ingress-microk8s-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-microk8s-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-microk8s-role created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-microk8s created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-microk8s created
configmap/nginx-load-balancer-microk8s-conf created
configmap/nginx-ingress-tcp-microk8s-conf created
configmap/nginx-ingress-udp-microk8s-conf created
daemonset.apps/nginx-ingress-microk8s-controller created
Ingress is enabled
Enabling DNS
Applying manifest
serviceaccount/coredns created
configmap/coredns created
Warning: spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead
deployment.apps/coredns created
service/kube-dns created
clusterrole.rbac.authorization.k8s.io/coredns created
clusterrolebinding.rbac.authorization.k8s.io/coredns created
Restarting kubelet
DNS is enabled
Enabling default storage class
deployment.apps/hostpath-provisioner created
storageclass.storage.k8s.io/microk8s-hostpath created
serviceaccount/microk8s-hostpath created
clusterrole.rbac.authorization.k8s.io/microk8s-hostpath created
clusterrolebinding.rbac.authorization.k8s.io/microk8s-hostpath created
Storage will be available soon
The cluster looks like this:
~/: kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress pod/nginx-ingress-microk8s-controller-5w7tw 1/1 Running 0 92s
kube-system pod/coredns-7f9c69c78c-94nbl 1/1 Running 0 91s
kube-system pod/calico-kube-controllers-69d7f794d9-wz79l 1/1 Running 0 4m10s
kube-system pod/calico-node-2r6bv 1/1 Running 0 4m11s
kube-system pod/hostpath-provisioner-566686b959-8rwkv 1/1 Running 0 14s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 7m59s
kube-system service/kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 91s
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 4m48s
ingress daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 92s
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 4m48s
kube-system deployment.apps/coredns 1/1 1 1 92s
kube-system deployment.apps/hostpath-provisioner 1/1 1 1 79s
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/calico-kube-controllers-69d7f794d9 1 1 1 4m11s
kube-system replicaset.apps/coredns-7f9c69c78c 1 1 1 92s
kube-system replicaset.apps/hostpath-provisioner-566686b959 1 1 1 14s
Install Kafka via Helm using Bitnami's chart.
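The exact Helm command isn't shown; judging by the release name and namespace in the output below, it was presumably something along the lines of:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install eg-kafka bitnami/kafka --namespace eg --create-namespace
After the install, kubectl get all --all-namespaces shows: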
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress pod/nginx-ingress-microk8s-controller-5w7tw 1/1 Running 0 3m55s
kube-system pod/coredns-7f9c69c78c-94nbl 1/1 Running 0 3m54s
kube-system pod/calico-kube-controllers-69d7f794d9-wz79l 1/1 Running 0 6m33s
kube-system pod/calico-node-2r6bv 1/1 Running 0 6m34s
kube-system pod/hostpath-provisioner-566686b959-8rwkv 1/1 Running 0 2m37s
eg pod/eg-kafka-zookeeper-0 1/1 Running 0 48s
eg pod/eg-kafka-zookeeper-1 1/1 Running 0 48s
eg pod/eg-kafka-zookeeper-2 1/1 Running 0 48s
eg pod/eg-kafka-0 1/1 Running 1 (32s ago) 48s
eg pod/eg-kafka-1 1/1 Running 1 (31s ago) 48s
eg pod/eg-kafka-2 1/1 Running 1 (30s ago) 48s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 10m
kube-system service/kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 3m54s
eg service/eg-kafka-zookeeper-headless ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 48s
eg service/eg-kafka ClusterIP 10.152.183.18 <none> 9092/TCP 48s
eg service/eg-kafka-zookeeper ClusterIP 10.152.183.140 <none> 2181/TCP,2888/TCP,3888/TCP 48s
eg service/eg-kafka-headless ClusterIP None <none> 9092/TCP,9093/TCP 48s
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 7m11s
ingress daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 3m55s
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 7m11s
kube-system deployment.apps/coredns 1/1 1 1 3m55s
kube-system deployment.apps/hostpath-provisioner 1/1 1 1 3m42s
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/calico-kube-controllers-69d7f794d9 1 1 1 6m34s
kube-system replicaset.apps/coredns-7f9c69c78c 1 1 1 3m55s
kube-system replicaset.apps/hostpath-provisioner-566686b959 1 1 1 2m37s
NAMESPACE NAME READY AGE
eg statefulset.apps/eg-kafka-zookeeper 3/3 48s
eg statefulset.apps/eg-kafka 3/3 48s
Deploy the single service to which the traffic will be routed.
It is a simple web API using Spring; the image works and spins up without any problem.
apiVersion: v1
kind: Service
metadata:
  name: webapi
spec:
  selector:
    app: webapi
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapi
spec:
  selector:
    matchLabels:
      app: webapi
  template:
    metadata:
      labels:
        app: webapi
    spec:
      containers:
      - name: webapi
        image: ghcr.io/encyclopediagalactica/sourceformats.api.rest/sourceformat-api-rest:latest
        resources:
          limits:
            memory: "1Gi"
            cpu: "500m"
        ports:
        - containerPort: 80
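The manifest doesn't set a namespace, but the resources show up in the eg namespace below, so presumably it was applied with something like this (the file name is assumed):
kubectl apply -n eg -f webapi.yaml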
kubectl get all --all-namespaces (the newly added entries are marked with ====>):
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress pod/nginx-ingress-microk8s-controller-5w7tw 1/1 Running 0 8m45s
kube-system pod/coredns-7f9c69c78c-94nbl 1/1 Running 0 8m44s
kube-system pod/calico-kube-controllers-69d7f794d9-wz79l 1/1 Running 0 11m
kube-system pod/calico-node-2r6bv 1/1 Running 0 11m
kube-system pod/hostpath-provisioner-566686b959-8rwkv 1/1 Running 0 7m27s
eg pod/eg-kafka-zookeeper-0 1/1 Running 0 5m38s
eg pod/eg-kafka-zookeeper-1 1/1 Running 0 5m38s
eg pod/eg-kafka-zookeeper-2 1/1 Running 0 5m38s
eg pod/eg-kafka-0 1/1 Running 1 (5m22s ago) 5m38s
eg pod/eg-kafka-1 1/1 Running 1 (5m21s ago) 5m38s
eg pod/eg-kafka-2 1/1 Running 1 (5m20s ago) 5m38s
====> eg pod/webapi-7755b88f98-kwsmx 1/1 Running 0 52s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 15m
kube-system service/kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 8m44s
eg service/eg-kafka-zookeeper-headless ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 5m38s
eg service/eg-kafka ClusterIP 10.152.183.18 <none> 9092/TCP 5m38s
eg service/eg-kafka-zookeeper ClusterIP 10.152.183.140 <none> 2181/TCP,2888/TCP,3888/TCP 5m38s
eg service/eg-kafka-headless ClusterIP None <none> 9092/TCP,9093/TCP 5m38s
====> eg service/webapi ClusterIP 10.152.183.96 <none> 80/TCP 52s
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 12m
ingress daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 8m45s
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 12m
kube-system deployment.apps/coredns 1/1 1 1 8m45s
kube-system deployment.apps/hostpath-provisioner 1/1 1 1 8m32s
====> eg deployment.apps/webapi 1/1 1 1 52s
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/calico-kube-controllers-69d7f794d9 1 1 1 11m
kube-system replicaset.apps/coredns-7f9c69c78c 1 1 1 8m45s
kube-system replicaset.apps/hostpath-provisioner-566686b959 1 1 1 7m27s
eg replicaset.apps/webapi-7755b88f98 1 1 1 52s
NAMESPACE NAME READY AGE
eg statefulset.apps/eg-kafka-zookeeper 3/3 5m38s
eg statefulset.apps/eg-kafka 3/3 5m38s
Ingress rule
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapi-ingress
  #annotations:
  #  kubernetes.io/ingress.class: public
spec:
  rules:
  - host: blabla.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: webapi
            port:
              number: 80
  ingressClassName: public
result:
~/: kubectl get ingress -n eg
NAME CLASS HOSTS ADDRESS PORTS AGE
webapi-ingress public blabla.com 127.0.0.1 80 48s
Additional info:
Regarding the commented-out lines: there is this post which points out that the annotations should match. If I follow what is described in that answer, the result is the same.
There is this post which emphasizes using the ingressClassName property. If I follow this, the end result is the same. The two suggestions cannot be combined because Kubernetes throws an error.
Call the endpoint
~/: curl http://blabla.com/get
curl: (7) Failed to connect to blabla.com port 80: Connection refused
My /etc/hosts looks like the following:
~/: cat /etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
127.0.0.1 blabla.com
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
Why 127.0.0.1? I watched this video where Nana says that whatever IP the ingress rule is bound to, I have to map the domain to it in the /etc/hosts file in order to route traffic there. It makes sense to me.
I also followed this answer's suggestions and the result is the same.
How about debugging, you might ask... The nginx logs don't say a word. It seems like the traffic doesn't even get to nginx.
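One check that would isolate whether the Service itself responds, independent of the ingress (the pod name and image below are just examples):
kubectl run curl-test --rm -it --restart=Never -n eg --image=curlimages/curl --command -- curl -sI http://webapi.eg.svc.cluster.local/get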
I can't decide whether I need MetalLB... but it's discouraging that the Microk8s docs mention that some parts of MetalLB won't work due to MacBook network traffic manipulation.
So, I have tried at least 4-5 scenarios and the result is the same. I assume that 1) I'm missing something fundamental, or 2) my MacBook has some magic which doesn't let the traffic reach Kubernetes, or 3) both, which wouldn't be a big surprise. :)
Have you tried minikube, you might ask... Yes, I tried. It can't deal with setting up the Kafka instances, so it's not an option for me.
So, what am I doing wrong here? Is there a tutorial that would help me set up this cluster?
I have deployed and exposed Nginx with the following commands:
sudo kubectl create deployment mynginx1 --image=nginx
sudo kubectl expose deployment mynginx1 --type NodePort --port 8080
I access using http://<master node IP>:<port> or http://172.17.135.42:31788
But I am getting Error 404. Help appreciated.
gtan#master:~$ kubectl get pods -owide -A
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default mynginx1-f544c49cb-g92w2 1/1 Running 0 3m19s 172.168.10.2 slave1 <none> <none>
kube-system coredns-66bff467f8-92r4n 1/1 Running 0 7m56s 172.168.10.2 master <none> <none>
kube-system coredns-66bff467f8-gc7tc 1/1 Running 0 7m56s 172.168.10.3 master <none> <none>
kube-system etcd-master 1/1 Running 0 8m6s 172.17.82.100 master <none> <none>
kube-system kube-apiserver-master 1/1 Running 0 8m6s 172.17.82.100 master <none> <none>
kube-system kube-controller-manager-master 1/1 Running 0 8m6s 172.17.82.100 master <none> <none>
kube-system kube-flannel-ds-amd64-24pwc 1/1 Running 3 4m58s 172.17.82.110 slave1 <none> <none>
kube-system kube-flannel-ds-amd64-q5qwg 1/1 Running 0 5m28s 172.17.82.100 master <none> <none>
kube-system kube-proxy-hf59b 1/1 Running 0 4m58s 172.17.82.110 slave1 <none> <none>
kube-system kube-proxy-r7pz6 1/1 Running 0 7m56s 172.17.82.100 master <none> <none>
kube-system kube-scheduler-master 1/1 Running 0 8m5s 172.17.82.100 master <none> <none>
gtan#master:~$
gtan#master:~$ curl -IL http://172.17.82.100:30131
curl: (7) Failed to connect to 172.17.82.100 port 30131: Connection refused where "172.17.82.100" is the master node ip address.
gtan#master:~$ kubectl get services -o wide -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 15m <none>
default mynginx1 NodePort 10.102.106.240 <none> 80:30131/TCP 10m app=mynginx1
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 15m k8s-app=kube-dns
What is the architecture of your setup? Do you have the worker node and master node on the same machine?
Check the nginx pod status with:
kubectl get pods
If the pod is running without issues, then hit your worker machine's IP with the NodePort: http://<worker-node-IP>:<NodePort>
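Using the addresses from the question (the pod runs on slave1, which appears to be 172.17.82.110, and the NodePort is 30131), that check would look like:
curl -IL http://172.17.82.110:30131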
The default nginx container port is 80, as you can see here. Just change the port from 8080 to 80 in your second command:
sudo kubectl expose deployment mynginx1 --type NodePort --port 80
and try to reach the service using the NodePort shown in the output of the command, for example:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mynginx1 NodePort 10.97.142.170 <none> 80:31591/TCP 8m9s
Alternatively, you can use this YAML spec to configure your pod and service:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
Testing using curl:
$ curl -IL http://localhost:31591
HTTP/1.1 200 OK
Server: nginx/1.17.10
Date: Tue, 12 May 2020 10:05:04 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
Connection: keep-alive
ETag: "5e95c66e-264"
Accept-Ranges: bytes
Also, I recommend you set aside some time to take a look at these documentation pages:
Kubernetes Concepts
Services
I am trying to set up a monitoring stack (Prometheus + Alertmanager + node_exporter, etc.) via helm install stable/prometheus onto a Raspberry Pi k8s cluster (1 master + 3 worker nodes) which I set up.
I managed to get all the required pods running.
pi-monitoring-prometheus-alertmanager-767cd8bc65-89hxt 2/2 Running 0 131m 10.17.2.56 kube2 <none> <none>
pi-monitoring-prometheus-node-exporter-h86gt 1/1 Running 0 131m 192.168.1.212 kube2 <none> <none>
pi-monitoring-prometheus-node-exporter-kg957 1/1 Running 0 131m 192.168.1.211 kube1 <none> <none>
pi-monitoring-prometheus-node-exporter-x9wgb 1/1 Running 0 131m 192.168.1.213 kube3 <none> <none>
pi-monitoring-prometheus-pushgateway-799d4ff9d6-rdpkf 1/1 Running 0 131m 10.17.3.36 kube1 <none> <none>
pi-monitoring-prometheus-server-5d989754b6-gp69j 2/2 Running 0 98m 10.17.1.60 kube3 <none> <none>
However, after port-forwarding the Prometheus server port 9090 and navigating to the Targets page, I realized none of the node_exporters are registered.
Digging through the logs, I found this:
level=error ts=2020-04-12T05:15:05.083Z caller=klog.go:94 component=k8s_client_runtime func=ErrorDepth msg="/app/discovery/kubernetes/kubernetes.go:333: Failed to list *v1.Node: Get https://10.18.0.1:443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 10.18.0.1:443: i/o timeout"
level=error ts=2020-04-12T05:15:05.084Z caller=klog.go:94 component=k8s_client_runtime func=ErrorDepth msg="/app/discovery/kubernetes/kubernetes.go:299: Failed to list *v1.Service: Get https://10.18.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.18.0.1:443: i/o timeout"
level=error ts=2020-04-12T05:15:05.084Z caller=klog.go:94 component=k8s_client_runtime func=ErrorDepth msg="/app/discovery/kubernetes/kubernetes.go:261: Failed to list *v1.Endpoints: Get https://10.18.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.18.0.1:443: i/o timeout"
level=error ts=2020-04-12T05:15:05.085Z caller=klog.go:94 component=k8s_client_runtime func=ErrorDepth msg="/app/discovery/kubernetes/kubernetes.go:262: Failed to list *v1.Service: Get https://10.18.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.18.0.1:443: i/o timeout"
Question: why is the Prometheus pod unable to call the apiserver endpoints? I'm not really sure where the configuration went wrong.
I followed the debug guide and realized that individual nodes are unable to resolve services on other nodes.
I've been troubleshooting for the past day, reading various sources, but to be honest I'm not even sure where to begin.
These are the pods running in the kube-system namespace. Hopefully this gives a better idea of how my system is set up.
pi#kube4:~ $ kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-66bff467f8-nzvq8 1/1 Running 0 13d 10.17.0.2 kube4 <none> <none>
coredns-66bff467f8-z7wdb 1/1 Running 0 13d 10.17.0.3 kube4 <none> <none>
etcd-kube4 1/1 Running 0 13d 192.168.1.214 kube4 <none> <none>
kube-apiserver-kube4 1/1 Running 2 13d 192.168.1.214 kube4 <none> <none>
kube-controller-manager-kube4 1/1 Running 2 13d 192.168.1.214 kube4 <none> <none>
kube-flannel-ds-arm-8g9fb 1/1 Running 1 13d 192.168.1.212 kube2 <none> <none>
kube-flannel-ds-arm-c5qt9 1/1 Running 0 13d 192.168.1.214 kube4 <none> <none>
kube-flannel-ds-arm-q5pln 1/1 Running 1 13d 192.168.1.211 kube1 <none> <none>
kube-flannel-ds-arm-tkmn6 1/1 Running 1 13d 192.168.1.213 kube3 <none> <none>
kube-proxy-4zjjh 1/1 Running 0 13d 192.168.1.213 kube3 <none> <none>
kube-proxy-6mk2z 1/1 Running 0 13d 192.168.1.211 kube1 <none> <none>
kube-proxy-bbr8v 1/1 Running 0 13d 192.168.1.212 kube2 <none> <none>
kube-proxy-wfsbm 1/1 Running 0 13d 192.168.1.214 kube4 <none> <none>
kube-scheduler-kube4 1/1 Running 3 13d 192.168.1.214 kube4 <none> <none>
The Flannel documentation states:
NOTE: If kubeadm is used, then pass --pod-network-cidr=10.244.0.0/16 to kubeadm init to ensure that the podCIDR is set.
This is because the flannel ConfigMap is configured by default to work with "Network": "10.244.0.0/16".
You have configured your kubeadm with --pod-network-cidr=10.17.0.0/16, so this now needs to be configured in the flannel ConfigMap kube-flannel-cfg to look like this:
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.17.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
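After updating the ConfigMap, the flannel pods have to be recreated so they pick up the new net-conf.json; assuming the DaemonSet pods carry the usual app=flannel label, something like:
kubectl -n kube-system delete pod -l app=flannel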
Thanks to #kitt for his debugging help.
I suspect there is a networking issue that prevents you from reaching the API server. "dial tcp 10.18.0.1:443: i/o timeout" generally means that you are not able to connect to or read from the server. You can use the steps below to figure out the problem:
1. Deploy a busybox pod using kubectl run busybox --image=busybox -n kube-system
2. Get into the pod using kubectl exec -n kube-system -it <podname> sh
3. Try to telnet from the tty, e.g. telnet 10.18.0.1 443, to figure out the connection issue
Let me know the output.
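A compact one-shot version of the same check (assuming the busybox image's built-in telnet applet):
kubectl run busybox --rm -it --restart=Never -n kube-system --image=busybox -- telnet 10.18.0.1 443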
After much troubleshooting, I realized I was not able to ping pods on other nodes, only pods within the same node. The issue turned out to be the iptables config, as covered here: https://github.com/coreos/flannel/issues/699.
tl;dr: running iptables --policy FORWARD ACCEPT solved my problem.
Prior to updating the iptables policy:
Chain FORWARD (policy DROP)
target prot opt source destination
KUBE-FORWARD all -- anywhere anywhere /* kubernetes forwarding rules */
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
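For reference, the command that flips the policy, plus (assuming a Debian/Raspbian host with the iptables-persistent package installed) one way to keep it across reboots:
sudo iptables --policy FORWARD ACCEPT
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'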
The issue is solved now. Thanks #kitt for the help earlier!
I can't access the network IP assigned by the MetalLB load balancer.
I created a Kubernetes cluster with k3s. It has 1 master and 1 worker, each with its own private IP.
Master 192.168.0.13
Worker 192.168.0.14
I installed k3s with INSTALL_K3S_EXEC=" --no-deploy servicelb --no-deploy traefik"
Now I am trying to deploy an app using MetalLB and nginx-ingress. MetalLB was installed via Helm with the following address pool values:
--set configInline.address-pools[0].name=default \
--set configInline.address-pools[0].protocol=layer2 \
--set configInline.address-pools[0].addresses[0]=192.168.0.21-192.168.0.30
helm install nginx-ingress stable/nginx-ingress --namespace kube-system \
--set controller.image.repository=quay.io/kubernetes-ingress-controller/nginx-ingress-controller\
--set controller.image.tag=0.30.0 \
--set controller.image.runAsUser=33 \
--set defaultBackend.enabled=false
I can see every pod up and running:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-d798c9dd-lsdnp 1/1 Running 5 37h 10.42.0.25 c271-k3s-ocrh <none> <none>
local-path-provisioner-58fb86bdfd-bcpl7 1/1 Running 5 37h 10.42.0.22 c271-k3s-ocrh <none> <none>
metrics-server-6d684c7b5-v9tmh 1/1 Running 5 37h 10.42.0.24 c271-k3s-ocrh <none> <none>
metallb-speaker-4kbmw 1/1 Running 0 4m7s 192.168.0.14 c271-k3s-agent <none> <none>
metallb-controller-75bf779d4f-nb47l 1/1 Running 0 4m7s 10.42.1.45 c271-k3s-agent <none> <none>
metallb-speaker-776p9 1/1 Running 0 4m7s 192.168.0.13 c271-k3s-ocrh <none> <none>
nginx-ingress-default-backend-5b967cf596-554bq 1/1 Running 0 98s 10.42.1.46 c271-k3s-agent <none> <none>
nginx-ingress-controller-674675d5b6-blndp 1/1 Running 0 98s 10.42.1.47 c271-k3s-agent <none> <none>
The app gets the IP 192.168.0.21:
❯ kubectl get services -n kube-system -l app=nginx-ingress -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
nginx-ingress-default-backend ClusterIP 10.43.170.195 <none> 80/TCP 112s app=nginx-ingress,component=default-backend,release=nginx-ingress
nginx-ingress-controller LoadBalancer 10.43.220.166 192.168.0.21 80:31735/TCP,443:31566/TCP 111s app=nginx-ingress,component=controller,release=nginx-ingress
I can access the app from the master and the worker by curling the nginx controller pod:
HTTP/1.1 200 OK
Server: nginx/1.17.8
Date: Sat, 21 Mar 2020 10:43:34 GMT
Content-Type: text/html
Content-Length: 153
Connection: keep-alive
But the IP 192.168.0.21 is not accessible from the local network.
Diagnosis: DHCP is on, and 192.168.0.21-192.168.0.30 is absolutely free. When I try to allocate 192.168.0.21 to the master or the agent via a netplan config, they do get the IP.
Please guide me; what am I missing?
You need to make sure that the source IP address (the external IP assigned by MetalLB) is preserved. To achieve this, set the externalTrafficPolicy field of the ingress-controller Service spec to Local. For example:
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    helm.sh/chart: webapp-0.1.0
    app.kubernetes.io/name: webapp
    app.kubernetes.io/instance: my-app
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: http
    protocol: TCP
    name: http
  selector:
    app.kubernetes.io/name: webapp
    app.kubernetes.io/instance: my-app
  externalTrafficPolicy: Local
The default value for the externalTrafficPolicy field is Cluster, so change it to Local.
In my setup with Cilium and the HAProxy ingress controller, I had to change externalTrafficPolicy from Local to Cluster:
kubectl --namespace ingress-controller patch svc haproxy-ingress \
-p '{"spec":{"externalTrafficPolicy":"Cluster"}}'
For two years I've been using MetalLB in my home lab, and I haven't hit this error (although I got other errors, for example MetalLB failing to assign an IP address from the pool).
I want to share my current setup with folks who are still struggling on the internet.
helm install --create-namespace metallb metallb/metallb -n metallb-system -f values.yaml
configInline:
  address-pools:
  - name: default
    protocol: layer2
    addresses:
    - 192.168.0.21/30
    # can use a series like 192.168.0.21-24 too
Debugging: try to get logs from all the pods in the metallb namespace.
kail -n metallb
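If kail isn't installed, an approximate plain-kubectl equivalent (the label selector is an assumption and may vary by chart version):
kubectl logs -n metallb-system -l app.kubernetes.io/name=metallb --all-containers --tail=100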
K8S installed with calico using https://github.com/geerlingguy/ansible-role-kubernetes
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1", GitCommit:"3ddd0f45aa91e2f30c70734b175631bec5b5825a", GitTreeState:"clean", BuildDate:"2022-05-24T12:17:11Z", GoVersion:"go1.18.2", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:23:26Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
Maybe switching externalTrafficPolicy to Local/Cluster may help; however, I didn't try it. My setup works out of the box.
Good luck.