I want to point two Namecheap domains, testA.com and testB.com, to two different services (websites) on my Raspberry Pi cluster.
I set everything up following an updated version of this guide, so k3s, MetalLB, nginx ingress and cert-manager are all fully deployed and working.
% kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system metallb-speaker-bsxfg 1/1 Running 1 30h
kube-system metallb-speaker-6pwsb 1/1 Running 1 30h
kube-system nginx-ingress-ingress-nginx-controller-7cc994599f-db285 1/1 Running 1 28h
cert-manager cert-manager-7998c69865-754mr 1/1 Running 2 27h
kube-system metallb-speaker-z8p97 1/1 Running 1 30h
webserver httpd-554794f9fd-npd4g 1/1 Running 1 21h
kube-system metallb-controller-df647b67b-2khlr 1/1 Running 1 30h
kube-system coredns-854c77959c-dl74f 1/1 Running 2 33h
cert-manager cert-manager-webhook-7d6d4c78bc-97g2g 1/1 Running 1 27h
kube-system metrics-server-86cbb8457f-2vqmt 1/1 Running 3 33h
cert-manager cert-manager-cainjector-7b744d56fb-bvwjd 1/1 Running 2 27h
kube-system local-path-provisioner-5ff76fc89d-vbqs9 1/1 Running 4 33h
% kubectl get services -n kube-system -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
nginx-ingress-ingress-nginx-controller-admission ClusterIP 10.43.116.250 <none> 443/TCP 28h app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
nginx-ingress-ingress-nginx-controller LoadBalancer 10.43.10.136 192.168.178.240 80:31517/TCP,443:31733/TCP 28h app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
The guide describes this for DynDNS. How should I do it with two domains and two different websites? Is this done with a containerised certbot, or do I need a CNAME record?
Run the command above and note the LoadBalancer IP; that is the externally exposed IP.
Add this IP on the DNS side as an A record (or point a CNAME at a name that resolves to it) for both domains, and you are done. Both domains will then send their traffic to the Kubernetes cluster, and inside the cluster you create Ingress rules to route each host to its specific service.
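For example, after creating A records for both domains pointing at 192.168.178.240 (the LoadBalancer IP shown above), host-based routing can be expressed with one Ingress per domain. This is only a sketch: the Service names site-a and site-b are placeholders for your actual website Services (create each Ingress in the same namespace as its Service), and ingressClassName must match whatever class your nginx-ingress install registered (nginx is the chart default).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: testa-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: testa.com          # Ingress hosts must be lowercase DNS names
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: site-a     # placeholder: the Service for the testA.com website
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: testb-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: testb.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: site-b     # placeholder: the Service for the testB.com website
            port:
              number: 80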
Related
I'd like to put together a dev environment where there is a kubernetes cluster (I intend to use Microk8s with multiple nodes at the end). The reason is that I'll have a prod system running on this cluster with test environments, and eventually when a new PR is created based on the PR id a totally new system will be created and the url will be different. Something like this: prod system: http://my-system.com, test: http://test-pr-63.my-system.com.
But first I need a Kubernetes cluster with a minimal ingress listening on an IP address and able to route traffic to services/pods based on URL. I'm not well versed in the Kubernetes space.
The end result is always connection refused when I call the IP via curl, and I don't know why.
I perform the following steps to install the system on a newly created MicroK8s environment on my MacBook Pro.
Status
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
ha-cluster # Configure high availability on the current node
disabled:
Enable services
~/: microk8s enable ingress dns storage
Enabling Ingress
ingressclass.networking.k8s.io/public created
namespace/ingress created
serviceaccount/nginx-ingress-microk8s-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-microk8s-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-microk8s-role created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-microk8s created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-microk8s created
configmap/nginx-load-balancer-microk8s-conf created
configmap/nginx-ingress-tcp-microk8s-conf created
configmap/nginx-ingress-udp-microk8s-conf created
daemonset.apps/nginx-ingress-microk8s-controller created
Ingress is enabled
Enabling DNS
Applying manifest
serviceaccount/coredns created
configmap/coredns created
Warning: spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead
deployment.apps/coredns created
service/kube-dns created
clusterrole.rbac.authorization.k8s.io/coredns created
clusterrolebinding.rbac.authorization.k8s.io/coredns created
Restarting kubelet
DNS is enabled
Enabling default storage class
deployment.apps/hostpath-provisioner created
storageclass.storage.k8s.io/microk8s-hostpath created
serviceaccount/microk8s-hostpath created
clusterrole.rbac.authorization.k8s.io/microk8s-hostpath created
clusterrolebinding.rbac.authorization.k8s.io/microk8s-hostpath created
Storage will be available soon
The cluster looks like this:
~/: kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress pod/nginx-ingress-microk8s-controller-5w7tw 1/1 Running 0 92s
kube-system pod/coredns-7f9c69c78c-94nbl 1/1 Running 0 91s
kube-system pod/calico-kube-controllers-69d7f794d9-wz79l 1/1 Running 0 4m10s
kube-system pod/calico-node-2r6bv 1/1 Running 0 4m11s
kube-system pod/hostpath-provisioner-566686b959-8rwkv 1/1 Running 0 14s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 7m59s
kube-system service/kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 91s
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 4m48s
ingress daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 92s
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 4m48s
kube-system deployment.apps/coredns 1/1 1 1 92s
kube-system deployment.apps/hostpath-provisioner 1/1 1 1 79s
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/calico-kube-controllers-69d7f794d9 1 1 1 4m11s
kube-system replicaset.apps/coredns-7f9c69c78c 1 1 1 92s
kube-system replicaset.apps/hostpath-provisioner-566686b959 1 1 1 14s
Install Kafka with Helm using Bitnami's chart
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress pod/nginx-ingress-microk8s-controller-5w7tw 1/1 Running 0 3m55s
kube-system pod/coredns-7f9c69c78c-94nbl 1/1 Running 0 3m54s
kube-system pod/calico-kube-controllers-69d7f794d9-wz79l 1/1 Running 0 6m33s
kube-system pod/calico-node-2r6bv 1/1 Running 0 6m34s
kube-system pod/hostpath-provisioner-566686b959-8rwkv 1/1 Running 0 2m37s
eg pod/eg-kafka-zookeeper-0 1/1 Running 0 48s
eg pod/eg-kafka-zookeeper-1 1/1 Running 0 48s
eg pod/eg-kafka-zookeeper-2 1/1 Running 0 48s
eg pod/eg-kafka-0 1/1 Running 1 (32s ago) 48s
eg pod/eg-kafka-1 1/1 Running 1 (31s ago) 48s
eg pod/eg-kafka-2 1/1 Running 1 (30s ago) 48s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 10m
kube-system service/kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 3m54s
eg service/eg-kafka-zookeeper-headless ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 48s
eg service/eg-kafka ClusterIP 10.152.183.18 <none> 9092/TCP 48s
eg service/eg-kafka-zookeeper ClusterIP 10.152.183.140 <none> 2181/TCP,2888/TCP,3888/TCP 48s
eg service/eg-kafka-headless ClusterIP None <none> 9092/TCP,9093/TCP 48s
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 7m11s
ingress daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 3m55s
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 7m11s
kube-system deployment.apps/coredns 1/1 1 1 3m55s
kube-system deployment.apps/hostpath-provisioner 1/1 1 1 3m42s
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/calico-kube-controllers-69d7f794d9 1 1 1 6m34s
kube-system replicaset.apps/coredns-7f9c69c78c 1 1 1 3m55s
kube-system replicaset.apps/hostpath-provisioner-566686b959 1 1 1 2m37s
NAMESPACE NAME READY AGE
eg statefulset.apps/eg-kafka-zookeeper 3/3 48s
eg statefulset.apps/eg-kafka 3/3 48s
Deploy the single service to which the traffic will be routed
It is a simple web API using Spring; the image works and spins up without any problem.
apiVersion: v1
kind: Service
metadata:
name: webapi
spec:
selector:
app: webapi
ports:
- port: 80
targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: webapi
spec:
selector:
matchLabels:
app: webapi
template:
metadata:
labels:
app: webapi
spec:
containers:
- name: webapi
image: ghcr.io/encyclopediagalactica/sourceformats.api.rest/sourceformat-api-rest:latest
resources:
limits:
memory: "1Gi"
cpu: "500m"
ports:
- containerPort: 80
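(As a sanity check before adding the Ingress, the Service can be verified to actually select the pod; the eg namespace is the one visible in the output further down.)
kubectl -n eg get endpoints webapi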
kubectl get all --all-namespaces (the newly added resources are marked with ====>)
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress pod/nginx-ingress-microk8s-controller-5w7tw 1/1 Running 0 8m45s
kube-system pod/coredns-7f9c69c78c-94nbl 1/1 Running 0 8m44s
kube-system pod/calico-kube-controllers-69d7f794d9-wz79l 1/1 Running 0 11m
kube-system pod/calico-node-2r6bv 1/1 Running 0 11m
kube-system pod/hostpath-provisioner-566686b959-8rwkv 1/1 Running 0 7m27s
eg pod/eg-kafka-zookeeper-0 1/1 Running 0 5m38s
eg pod/eg-kafka-zookeeper-1 1/1 Running 0 5m38s
eg pod/eg-kafka-zookeeper-2 1/1 Running 0 5m38s
eg pod/eg-kafka-0 1/1 Running 1 (5m22s ago) 5m38s
eg pod/eg-kafka-1 1/1 Running 1 (5m21s ago) 5m38s
eg pod/eg-kafka-2 1/1 Running 1 (5m20s ago) 5m38s
====> eg pod/webapi-7755b88f98-kwsmx 1/1 Running 0 52s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 15m
kube-system service/kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 8m44s
eg service/eg-kafka-zookeeper-headless ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 5m38s
eg service/eg-kafka ClusterIP 10.152.183.18 <none> 9092/TCP 5m38s
eg service/eg-kafka-zookeeper ClusterIP 10.152.183.140 <none> 2181/TCP,2888/TCP,3888/TCP 5m38s
eg service/eg-kafka-headless ClusterIP None <none> 9092/TCP,9093/TCP 5m38s
====> eg service/webapi ClusterIP 10.152.183.96 <none> 80/TCP 52s
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 12m
ingress daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 8m45s
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 12m
kube-system deployment.apps/coredns 1/1 1 1 8m45s
kube-system deployment.apps/hostpath-provisioner 1/1 1 1 8m32s
====> eg deployment.apps/webapi 1/1 1 1 52s
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/calico-kube-controllers-69d7f794d9 1 1 1 11m
kube-system replicaset.apps/coredns-7f9c69c78c 1 1 1 8m45s
kube-system replicaset.apps/hostpath-provisioner-566686b959 1 1 1 7m27s
eg replicaset.apps/webapi-7755b88f98 1 1 1 52s
NAMESPACE NAME READY AGE
eg statefulset.apps/eg-kafka-zookeeper 3/3 5m38s
eg statefulset.apps/eg-kafka 3/3 5m38s
Ingress rule
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: webapi-ingress
#annotations:
#kubernetes.io/ingress.class: public
spec:
rules:
- host: blabla.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: webapi
port:
number: 80
ingressClassName: public
result:
~/: kubectl get ingress -n eg
NAME CLASS HOSTS ADDRESS PORTS AGE
webapi-ingress public blabla.com 127.0.0.1 80 48s
Additional info:
Regarding the commented-out lines: there is this post which points out that the annotations should match. If I follow what is described in that answer, the result is the same.
There is also this post which emphasizes using the ingressClassName property. If I follow that, the end result is the same. The two suggestions cannot be combined because Kubernetes throws an error.
Call the endpoint
~/: curl http://blabla.com/get
curl: (7) Failed to connect to blabla.com port 80: Connection refused
My /etc/hosts looks like the following:
~/: cat /etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
127.0.0.1 blabla.com
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
Why 127.0.0.1? I watched this video where Nana says that whatever IP the ingress rule is bound to has to be mapped to the domain in the /etc/hosts file in order to route the traffic there. That makes sense to me.
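As a side note, the same host-to-IP mapping can be tested without touching /etc/hosts by pinning the resolution in curl (standard curl option):
curl --resolve blabla.com:80:127.0.0.1 http://blabla.com/get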
I also followed this answer's suggestions and the result is the same.
What about debugging, you might ask... The nginx logs don't say a word; it seems the traffic never reaches nginx.
I can't decide whether I need MetalLB... but it is discouraging that the MicroK8s docs mention that parts of MetalLB won't work because of how the MacBook handles network traffic.
So, I have tried at least 4-5 scenarios and the result is the same. I assume that either (1) I am missing something fundamental, or (2) my MacBook has some magic which doesn't let the traffic reach Kubernetes, or (3) both, which wouldn't be a big surprise. :)
Have you tried minikube, you might ask... Yes, I tried. It can't deal with setting up the Kafka instances. Not an option for me.
So, what am I doing wrong here? Is there a tutorial that would help me set up this cluster?
We have configured MetalLB since our K8s cluster is hosted on bare metal infrastructure. It seems to be running fine with all pods up and running.
[~]# kubectl get all -n metallb-system
NAME READY STATUS RESTARTS AGE
pod/controller-b78574c59-47qfv 1/1 Running 0 24h
pod/speaker-4q2vm 1/1 Running 0 24h
pod/speaker-m8kwk 1/1 Running 0 24h
pod/speaker-t4rvs 1/1 Running 0 24h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/speaker 3 3 3 3 3 kubernetes.io/os=linux 24h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/controller 1/1 1 1 24h
NAME DESIRED CURRENT READY AGE
replicaset.apps/controller-b78574c59 1 1 1 24h
We have configured the ingress controller via Helm from https://github.com/kubernetes/ingress-nginx/releases/tag/helm-chart-3.29.0, setting hostNetwork, ingressClass and kind to true, ingress-nginx and DaemonSet respectively in values.yaml. The Helm installation seems to have worked fine: all DaemonSet pods are running and a LoadBalancer IP was assigned to the ingress controller service.
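For reference, the values changes described above are roughly equivalent to the following Helm invocation (a sketch only; the release name devingress is taken from the pod names below, and the value names are assumed from the 3.x chart):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install devingress ingress-nginx/ingress-nginx --version 3.29.0 \
  --namespace ingress-nginx --create-namespace \
  --set controller.hostNetwork=true \
  --set controller.kind=DaemonSet \
  --set controller.ingressClass=ingress-nginx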
[~]# kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/devingress-ingress-nginx-controller-c2x42 1/1 Running 0 18h
pod/devingress-ingress-nginx-controller-wtmgw 1/1 Running 0 18h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/devingress-ingress-nginx-controller LoadBalancer x.x.x.x 1.2.3.40 80:32386/TCP,443:30020/TCP 18h
service/devingress-ingress-nginx-controller-admission ClusterIP x.x.x.x <none> 443/TCP 18h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/devingress-ingress-nginx-controller 2 2 2 2 2 kubernetes.io/os=linux 18h
Now we have deployed two pods, namely nginx with a LoadBalancer service type and nginx-deploy-main with a ClusterIP service type.
[~]# kubectl get all -n default
NAME READY STATUS RESTARTS AGE
pod/nginx-854cf6b4d7-lv5ss 1/1 Running 0 18h
pod/nginx-deploy-main-6b5457fbb5-7tg9z 1/1 Running 0 18h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx LoadBalancer x.x.x.x 1.2.3.41 8080:31101/TCP 18h
service/nginx-deploy-main ClusterIP x.x.x.x <none> 80/TCP 18h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 1/1 1 1 18h
deployment.apps/nginx-deploy-main 1/1 1 1 18h
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-854cf6b4d7 1 1 1 18h
replicaset.apps/nginx-deploy-main-6b5457fbb5 1 1 1 18h
Below is the ingress resource setup to access nginx-deploy-main.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress-resource
annotations:
kubernetes.io/ingress.class: nginx
spec:
ingressClassName: nginx
rules:
- host: nginx-main.int.org.com
http:
paths:
- path: /
backend:
serviceName: nginx-deploy-main
servicePort: 80
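For reference, the same Ingress expressed against the networking.k8s.io/v1 API (required on Kubernetes 1.22 and later) would look roughly like this; treat it as a sketch with the same assumed service name and port, with pathType added since v1 requires it:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource
spec:
  ingressClassName: nginx
  rules:
  - host: nginx-main.int.org.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-deploy-main
            port:
              number: 80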
The ingress resource seems to be created correctly, pointing to the nginx-deploy-main service.
[~]# kubectl get ing -n default
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-resource nginx nginx-main.int.org.com 80 19h
[~]# kubectl describe ing/ingress-resource -n default
Name: ingress-resource
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
nginx-main.int.org.com
/ nginx-deploy-main:80 (x.x.x.x:80)
Annotations: kubernetes.io/ingress.class: nginx
Events: <none>
Outside of the K8s cluster, we have nginx set up as a reverse proxy, with DNS resolution for the int.org.com domain.
Below is the nginx configuration that should let me hit http://nginx-main.int.org.com and get a response, but the response returned is 404.
upstream nginx-main.int.org.com {
server 1.2.3.40:80; ## Ingress Controller Service IP
}
server {
listen 80;
server_name nginx-main.int.org.com;
location / {
proxy_pass http://nginx-main.int.org.com;
}
}
Now, when I try to access the nginx pod (not nginx-main) using its LoadBalancer service IP with the configuration below, it is able to respond and works just fine:
upstream nginx.int.org.com {
server 1.2.3.41:8080;
}
server {
listen 80;
server_name nginx.int.org.com;
location / {
proxy_pass http://nginx.int.org.com;
}
}
Am I missing something here with regard to the ingress controller or the ingress resource? Port forwarding works fine and I am able to access the services that way.
This really is a blocker, and any help or documentation reference would be really useful.
We tried another ingress controller, i.e. https://github.com/nginxinc/kubernetes-ingress, and were able to make it work.
Below are the steps we followed.
[~] git clone https://github.com/nginxinc/kubernetes-ingress/
[~] cd kubernetes-ingress/deployments
[~] git checkout v1.11.1
[~] kubectl apply -f common/ns-and-sa.yaml
[~] kubectl apply -f rbac/rbac.yaml
[~] kubectl apply -f common/default-server-secret.yaml
[~] kubectl apply -f common/nginx-config.yaml
[~] kubectl apply -f common/ingress-class.yaml
Created the DaemonSet pods with an extra argument, --enable-custom-resources=false, added in the YAML due to an issue seen in the controller logs; the relevant container args are sketched below.
Refer: Kubernetes cluster working but getting this error from the NGINX controller
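For reference, the relevant part of daemon-set/nginx-ingress.yaml after the change looks roughly like this (the first two args are assumed from the stock v1.11.1 manifest and shown only for context):
      containers:
      - name: nginx-ingress
        image: nginx/nginx-ingress:1.11.1
        args:
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
        - -enable-custom-resources=false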
[~] kubectl apply -f daemon-set/nginx-ingress.yaml
[~] kubectl get all -n nginx-ingress -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-ingress-gd8gw 1/1 Running 0 3h55m x.x.x.x worker1 <none> <none>
pod/nginx-ingress-kr9lx 1/1 Running 0 3h55m x.x.x.x worker2 <none> <none>
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
daemonset.apps/nginx-ingress 2 2 2 2 2 <none> 5h14m nginx-ingress nginx/nginx-ingress:1.11.1 app=nginx-ingress
Hitting the respective worker nodes on port 80 returns a 404 response, which means the controller is working fine.
Deployed a sample application using the GitHub link https://github.com/vipin-k/Ingress-Controller-v1.9.0/blob/main/hotel.yml and updated the host entry within the Ingress object to hotel.int.org.com.
[~] kubectl create -f hotel.yaml
[~] kubectl get all -n hotel -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/hotel-65d644c8f7-bj597 1/1 Running 0 3h51m x.x.x.x worker1 <none> <none>
pod/hotel-65d644c8f7-csvgp 1/1 Running 0 3h51m x.x.x.x worker2 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/hotel-svc ClusterIP x.x.x.x <none> 80/TCP 3h51m app=hotel
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/hotel 2/2 2 2 3h51m hotel nginxdemos/hello:plain-text app=hotel
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/hotel-65d644c8f7 2 2 2 3h51m hotel nginxdemos/hello:plain-text app=hotel,pod-template-hash=65d644c8f7
[~] kubectl get ing -n hotel
NAME CLASS HOSTS ADDRESS PORTS AGE
hotel-ingress nginx hotel.int.org.com 80 3h52m
[~] kubectl describe ing hotel-ingress -n hotel
Name: hotel-ingress
Namespace: hotel
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
hotel.int.org.com
/ hotel-svc:80 (x.x.x.x:80,x.x.x.x:80)
Annotations:
Events: <none>
Updated the external nginx configuration with domain resolution enabled.
upstream hotel.int.org.com {
server 1.2.3.41:80; #worker1
server 1.2.3.42:80; #worker2
}
server {
listen 80;
server_name hotel.int.org.com;
location / {
proxy_pass http://hotel.int.org.com;
}
}
Restarted nginx and verified via the browser that it is serving responses from the running pods in the hotel namespace.
[~]# curl hotel.int.org.com
Server address: x.x.x.x:80
Server name: hotel-65d644c8f7-bj597
Date: 28/Apr/2021:05:47:15 +0000
URI: /
Request ID: 28d5cfab4ea28beea49879422b7e8f4c
[~]# curl hotel.int.org.com
Server address: x.x.x.x:80
Server name: hotel-65d644c8f7-csvgp
Date: 28/Apr/2021:05:52:06 +0000
URI: /
Request ID: 4135cacf83f8bf41c9677104500e610b
We are exploring MetalLB too and will post the solution once it works.
The documentation says that I need to enter the pod, but I can't.
sudo kubectl get pods -n kube-system gives me the following output:
coredns-66bff467f8-bhwrx 1/1 Running 4 10h
coredns-66bff467f8-ph2pb 1/1 Running 4 10h
etcd-ubuntu-xenial 1/1 Running 3 10h
ingress-nginx-admission-create-mww2h 0/1 Completed 0 4h48m
ingress-nginx-admission-patch-9dklm 0/1 Completed 0 4h48m
ingress-nginx-controller-7bb4c67d67-8nqcw 1/1 Running 1 4h48m
kube-apiserver-ubuntu-xenial 1/1 Running 3 10h
kube-controller-manager-ubuntu-xenial 1/1 Running 3 10h
kube-proxy-hn9qw 1/1 Running 3 10h
kube-scheduler-ubuntu-xenial 1/1 Running 3 10h
storage-provisioner 1/1 Running 4 10h
When I try to enter with sudo kubectl exec ingress-nginx-controller-7bb4c67d67-8nqcw -- /bin/bash/ I receive the following error:
Error from server (NotFound): pods "ingress-nginx-controller-7bb4c67d67-8nqcw" not found
The reason I'm running everything with sudo is that I'm using vm-driver=none.
The reason I need to know the ingress controller version is that I want to use a wildcard in the host name to forward multiple subdomains to the same service/port, and I know that feature is only available from ingress controller version 1.18.
You get that error because you are not passing the namespace parameter (-n kube-system).
And to get the version, you would do this:
kubectl get po ingress-nginx-controller-7bb4c67d67-8nqcw -n kube-system -oyaml | grep -i image:
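With the namespace flag added, both entering the pod and a direct version check work; note that the controller binary path below is the usual one inside the ingress-nginx image, so treat it as an assumption:
sudo kubectl exec -it ingress-nginx-controller-7bb4c67d67-8nqcw -n kube-system -- /bin/bash
sudo kubectl exec ingress-nginx-controller-7bb4c67d67-8nqcw -n kube-system -- /nginx-ingress-controller --version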
I am trying to set up a monitoring stack (Prometheus + Alertmanager + node_exporter, etc.) via helm install stable/prometheus onto a Raspberry Pi k8s cluster (1 master + 3 worker nodes) which I set up myself.
I managed to get all the required pods running.
pi-monitoring-prometheus-alertmanager-767cd8bc65-89hxt 2/2 Running 0 131m 10.17.2.56 kube2 <none> <none>
pi-monitoring-prometheus-node-exporter-h86gt 1/1 Running 0 131m 192.168.1.212 kube2 <none> <none>
pi-monitoring-prometheus-node-exporter-kg957 1/1 Running 0 131m 192.168.1.211 kube1 <none> <none>
pi-monitoring-prometheus-node-exporter-x9wgb 1/1 Running 0 131m 192.168.1.213 kube3 <none> <none>
pi-monitoring-prometheus-pushgateway-799d4ff9d6-rdpkf 1/1 Running 0 131m 10.17.3.36 kube1 <none> <none>
pi-monitoring-prometheus-server-5d989754b6-gp69j 2/2 Running 0 98m 10.17.1.60 kube3 <none> <none>
However, after port-forwarding the Prometheus server port 9090 and navigating to the Targets page, I realized none of the node_exporters are registered.
Digging through the logs, I found this:
level=error ts=2020-04-12T05:15:05.083Z caller=klog.go:94 component=k8s_client_runtime func=ErrorDepth msg="/app/discovery/kubernetes/kubernetes.go:333: Failed to list *v1.Node: Get https://10.18.0.1:443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 10.18.0.1:443: i/o timeout"
level=error ts=2020-04-12T05:15:05.084Z caller=klog.go:94 component=k8s_client_runtime func=ErrorDepth msg="/app/discovery/kubernetes/kubernetes.go:299: Failed to list *v1.Service: Get https://10.18.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.18.0.1:443: i/o timeout"
level=error ts=2020-04-12T05:15:05.084Z caller=klog.go:94 component=k8s_client_runtime func=ErrorDepth msg="/app/discovery/kubernetes/kubernetes.go:261: Failed to list *v1.Endpoints: Get https://10.18.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.18.0.1:443: i/o timeout"
level=error ts=2020-04-12T05:15:05.085Z caller=klog.go:94 component=k8s_client_runtime func=ErrorDepth msg="/app/discovery/kubernetes/kubernetes.go:262: Failed to list *v1.Service: Get https://10.18.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.18.0.1:443: i/o timeout"
Question: why is the Prometheus pod unable to call the apiserver endpoints? I am not really sure where the configuration went wrong.
I followed the debugging guide and realized that individual nodes are unable to resolve services on other nodes.
I have been troubleshooting for the past day, reading various sources, but to be honest I am not even sure where to begin.
These are the pods running in the kube-system namespace. I hope this gives a better idea of how my system is set up.
pi#kube4:~ $ kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-66bff467f8-nzvq8 1/1 Running 0 13d 10.17.0.2 kube4 <none> <none>
coredns-66bff467f8-z7wdb 1/1 Running 0 13d 10.17.0.3 kube4 <none> <none>
etcd-kube4 1/1 Running 0 13d 192.168.1.214 kube4 <none> <none>
kube-apiserver-kube4 1/1 Running 2 13d 192.168.1.214 kube4 <none> <none>
kube-controller-manager-kube4 1/1 Running 2 13d 192.168.1.214 kube4 <none> <none>
kube-flannel-ds-arm-8g9fb 1/1 Running 1 13d 192.168.1.212 kube2 <none> <none>
kube-flannel-ds-arm-c5qt9 1/1 Running 0 13d 192.168.1.214 kube4 <none> <none>
kube-flannel-ds-arm-q5pln 1/1 Running 1 13d 192.168.1.211 kube1 <none> <none>
kube-flannel-ds-arm-tkmn6 1/1 Running 1 13d 192.168.1.213 kube3 <none> <none>
kube-proxy-4zjjh 1/1 Running 0 13d 192.168.1.213 kube3 <none> <none>
kube-proxy-6mk2z 1/1 Running 0 13d 192.168.1.211 kube1 <none> <none>
kube-proxy-bbr8v 1/1 Running 0 13d 192.168.1.212 kube2 <none> <none>
kube-proxy-wfsbm 1/1 Running 0 13d 192.168.1.214 kube4 <none> <none>
kube-scheduler-kube4 1/1 Running 3 13d 192.168.1.214 kube4 <none> <none>
Flannel documentation states:
NOTE: If kubeadm is used, then pass --pod-network-cidr=10.244.0.0/16 to kubeadm init to ensure that the podCIDR is set.
This is because the flannel ConfigMap is by default configured to work with "Network": "10.244.0.0/16".
You have configured kubeadm with --pod-network-cidr=10.17.0.0/16, so this now needs to be reflected in the flannel ConfigMap kube-flannel-cfg so that it looks like this:
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.17.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
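After updating the ConfigMap, the flannel pods have to be recreated so they pick up the new net-conf.json. A sketch, assuming the manifest above is saved as kube-flannel-cfg.yaml and using the DaemonSet name from your pod listing:
kubectl apply -f kube-flannel-cfg.yaml
kubectl -n kube-system rollout restart daemonset kube-flannel-ds-arm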
Thanks to #kitt for his debugging help.
I suspect there is a networking issue that prevents you from reaching the API server. "dial tcp 10.18.0.1:443: i/o timeout" generally means that you are not able to connect to or read from the server. You can use the steps below to figure out the problem:
1. Deploy one busybox pod using kubectl run busybox --image=busybox -n kube-system
2. Get into the pod using kubectl exec -n kube-system -it <podname> sh
3. Try to do telnet from the tty like telnet 10.18.0.1 443 to figure out the connection issues
Let me know the output.
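A one-shot variant of the same check that cleans up after itself (standard kubectl run flags; adjust the API server IP if yours differs):
kubectl run busybox -n kube-system --image=busybox --restart=Never --rm -it -- telnet 10.18.0.1 443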
After much troubleshooting, I realized I am not able to ping pods on other nodes, only pods within the same node. The issue seems to be with the iptables config, as covered here: https://github.com/coreos/flannel/issues/699.
tl;dr: running iptables --policy FORWARD ACCEPT solved my problem.
prior to updating iptables policy
Chain FORWARD (policy DROP)
target prot opt source destination
KUBE-FORWARD all -- anywhere anywhere /* kubernetes forwarding rules */
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
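For completeness, applying and verifying the change on each node looks like this (note the policy does not survive a reboot unless it is persisted, e.g. with iptables-persistent):
sudo iptables --policy FORWARD ACCEPT
sudo iptables -L FORWARD -n | head -1    # should now print: Chain FORWARD (policy ACCEPT)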
The issue is solved now. Thanks #kitt for the help earlier!
Hi everybody,
As the title says, I would be curious to understand why I can't connect via sparklyr to Google Cloud clusters using Kubernetes.
The steps to configure the system were as follows:
Project creation on Google Cloud (free trial, $300 credit)
Cloud SDK installation on macOS
Kubectl binary installation with curl on macOS
Docker installation
From the terminal:
Configure the cluster:
gcloud config set compute/zone us-central1-f
gcloud container clusters create spark-on-gke --machine-type n1-standard-2
Bind the cluster-admin role to the account email:
kubectl create clusterrolebinding user-admin-binding --clusterrole=cluster-admin --user=pesca#gmail.com
kubectl create clusterrolebinding --clusterrole=cluster-admin --serviceaccount=default:default spark-admin
From R, connect to the MASTER_IP, using the public image offered by jluraschi:
remotes::install_github("rstudio/sparklyr"); library(sparklyr)
sc <- spark_connect(config = spark_config_kubernetes(
"k8s://https://<k8s-ip>",
account = "default",
image = "docker.io/jluraschi/spark:sparklyr",
version = "2.4"))
And the error that appears is:
Error from server (NotFound): pods "sparklyr-c27317e4b89" not found
Thank you so much for your answer!
At the end of the post you will also find screenshots of the cluster configuration made in gcloud. Meanwhile, here are the results from the terminal:
MBP-di-Simone:~ simone$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.3.240.1 <none> 443/TCP 13d
MBP-di-Simone:~ simone$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default alpine 1/1 Running 0 13d
kube-system event-exporter-v0.2.4-5f88c66fb7-67pb6 2/2 Running 0 13d
kube-system fluentd-gcp-scaler-59b7b75cd7-mbgxj 1/1 Running 0 13d
kube-system fluentd-gcp-v3.2.0-9dlx8 2/2 Running 0 5d8h
kube-system fluentd-gcp-v3.2.0-9w6t2 2/2 Running 0 5d8h
kube-system fluentd-gcp-v3.2.0-dwrlz 2/2 Running 0 5d8h
kube-system heapster-5f6cdd4bd-qmlhb 3/3 Running 0 13d
kube-system kube-dns-79868f54c5-5sqvb 4/4 Running 0 13d
kube-system kube-dns-79868f54c5-g9h4q 4/4 Running 0 13d
kube-system kube-dns-autoscaler-bb58c6784-9bbcg 1/1 Running 0 13d
kube-system kube-proxy-gke-spark-on-gke-default-pool-7fad1be1-2279 1/1 Running 0 13d
kube-system kube-proxy-gke-spark-on-gke-default-pool-7fad1be1-70hn 1/1 Running 0 13d
kube-system kube-proxy-gke-spark-on-gke-default-pool-7fad1be1-pnpj 1/1 Running 0 13d
kube-system l7-default-backend-fd59995cd-8tzjv 1/1 Running 0 13d
kube-system metrics-server-v0.3.1-57c75779f-gz776 2/2 Running 0 13d
kube-system prometheus-to-sd-ktvbk 2/2 Running 0 13d
kube-system prometheus-to-sd-tmwkw 2/2 Running 0 13d
kube-system prometheus-to-sd-xxx4p 2/2 Running 0 13d
MBP-di-Simone:~ simone$ kubectl describe pods [sparklyr-2e62d04d5dd]
Error from server (NotFound): pods "[sparklyr-2e62d04d5dd]" not found
MBP-di-Simone:~ simone$
[Screenshots: gCloud Cluster a, gCloud Cluster b, gCloud Cluster permission, gCloud Cluster label]