Unable to access nginx through a Kubernetes LoadBalancer service

I'm using the Kubernetes cluster provided with Docker Desktop (Windows).
My deployment.yml file is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx-container
        image: nginx:stable-alpine
        ports:
        - containerPort: 80
and my service.yml file is:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: nginx-app
spec:
  selector:
    app: nginx-app
  type: NodePort
  ports:
  - nodePort: 31000
    port: 80
    targetPort: 80
All of this is up and running, but I'm unable to access the application:
>curl localhost:31000
curl: (7) Failed to connect to localhost port 31000: Connection refused
>kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-685658ccbf-g84w5 1/1 Running 0 8s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14h
service/my-service LoadBalancer 10.96.210.40 localhost 80:31000/TCP 4s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 1/1 1 1 8s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-deployment-685658ccbf 1 1 1 8s
Note: I created an inbound/outbound rule for port 31000 in the Windows firewall to make sure it isn't being blocked.

There are some questions you can try to answer in order to Debug Services:
Does the Service exist?: In your case we see that it does.
Does the Service work by DNS name?: One of the most common ways that clients consume a Service is through a DNS name.
Does the Service work by IP?: Assuming you have confirmed that DNS works, the next thing to test is whether your Service works by its IP address.
Is the Service defined correctly?: You should really double and triple check that your Service is correct and matches your Pod's port. Also:
Is the Service port you are trying to access listed in spec.ports[]?
Is the targetPort correct for your Pods (some Pods use a different port than the Service)?
If you meant to use a numeric port, is it a number (9376) or a string "9376"?
If you meant to use a named port, do your Pods expose a port with the same name?
Is the port's protocol correct for your Pods?
Does the Service have any Endpoints?: Check that the Pods you ran are actually being selected by the Service.
Are the Pods working?: Check again that the Pods are actually working.
Is the kube-proxy working?: Confirm that kube-proxy is running on your Nodes.
Going through the above steps will help you find the cause of this and possible future issues with services.
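For reference, the checks above map to commands like the following (the Service and label names are taken from the question; busybox is used only as a throwaway DNS test client):
$ kubectl get endpoints my-service                  # should list the Pod IP, not <none>
$ kubectl describe service my-service               # verify port, targetPort and nodePort
$ kubectl get pods -l app=nginx-app -o wide         # are the selected Pods actually Running?
$ kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup my-service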

Related

Communication Between Pods in Different Cluster in K8s using yaml

Can someone provide references or a basic idea of how communication is done between pods in different clusters?
Suppose Cluster A has Pod A and Cluster B has Pod B. How can we ensure Pod A can communicate with Pod B using yaml? Thanks in advance.
Posting this answer as a community wiki for better visibility and to add some additional resources, as the solution was posted in the comments by user @David Maze:
If the pods are in different clusters, they can't directly communicate with each other (without using NodePort or LoadBalancer services, or otherwise making the destination service accessible from outside its own cluster).
With the most common setups, the way for Pod1 in Cluster1 to communicate with Pod2 in Cluster2 would be to use:
Service of type NodePort
Service of type LoadBalancer
Ingress resource - specific to HTTP/HTTPS traffic
All of the above solutions will heavily depend on where your Kubernetes cluster is deployed.
For example:
With cloud solutions like GKE, AKS or EKS you can use a Service of type LoadBalancer or an Ingress resource to direct the traffic to your pod.
With a bare metal setup you would need an additional tool like MetalLB to use a Service of type LoadBalancer (a minimal configuration sketch follows).
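As an illustration only: older, ConfigMap-based releases of MetalLB were configured with an address pool roughly like the one below (the address range is a placeholder on your node network; newer MetalLB releases use CRDs instead, so check the documentation for your version):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # placeholder range, must be routable in your network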
You could also look at these resources:
Istio.io: Install: Multicluster: Gateways
Istio.io: Blog: Multi-cluster mesh automation
As an example, assume that you have 2 Kubernetes clusters that can expose traffic with a Service of type LoadBalancer.
Apply on first cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
Check the EXTERNAL-IP associated with the service:
$ kubectl get service nginx-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-service LoadBalancer 10.92.10.48 A.B.C.D 80:30994/TCP 26s
Switch to the second cluster and run:
$ kubectl run -it ubuntu --image=ubuntu -- /bin/bash
$ apt update && apt install curl
$ curl A.B.C.D
You should be able to see:
<--- REDACTED --->
<p><em>Thank you for using nginx.</em></p>
<--- REDACTED --->
Additional resources:
Kubernetes.io: Concepts: Services
Medium.com: Kubernetes NodePort vs LoadBalancer vs Ingress, when should I use what - could be somewhat specific to GKE
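Switching between the two clusters in the example above is typically done with kubectl contexts (the context name below is a placeholder; use the names reported by get-contexts):
$ kubectl config get-contexts
$ kubectl config use-context cluster-2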

How to access simple nginx deployment on kubernetes?

I want to deploy a simple nginx app on my own Kubernetes cluster.
I used the basic nginx deployment on the machine with IP 192.168.188.10, which is part of a cluster of 3 Raspberry Pis.
NAME STATUS ROLES AGE VERSION
master-pi4 Ready master 2d20h v1.18.2
node1-pi4 Ready <none> 2d19h v1.18.2
node2-pi3 Ready <none> 2d19h v1.18.2
$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
$ kubectl create service nodeport nginx --tcp=80:80
service/nginx created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-8fb6d868-6957j 1/1 Running 0 10m
my-nginx-8fb6d868-8c59b 1/1 Running 0 10m
nginx-f89759699-n6f79 1/1 Running 0 4m20s
$ kubectl describe service nginx
Name: nginx
Namespace: default
Labels: app=nginx
Annotations: <none>
Selector: app=nginx
Type: NodePort
IP: 10.98.41.205
Port: 80-80 80/TCP
TargetPort: 80/TCP
NodePort: 80-80 31400/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
But I always get a timeout:
$ curl http://192.168.188.10:31400/
curl: (7) Failed to connect to 192.168.188.10 port 31400: Connection timed out
Why is the nginx web server not reachable? I tried to reach it from the same machine I deployed it to. How can I make it accessible from another machine on the network on port 31400?
As mentioned by @suren, you are creating a stand-alone service that is not linked to your deployment.
You can solve this by using the command from suren's answer, or by creating a new deployment and service with the following YAML spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Afterwards, run kubectl get svc to get the NodePort for accessing your service:
nginx-svc NodePort 10.100.136.135 <none> 80:31816/TCP 34s
To access it, use http://<YOUR_NODE_IP>:31816
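You can also confirm that the new Service actually selects the Pods; unlike the Endpoints: <none> shown in the question, it should now list a Pod IP:
$ kubectl get endpoints nginx-svc
$ kubectl get pods -l app=nginx --show-labels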
So, is 192.168.188.10 your host IP / your VM IP?
You have to check first whether another service is already using that port, or whether you haven't added the port to your security group if you are using a cloud platform.
Just to make sure, you can also create a pod and access the service using its FQDN, like my-svc.my-namespace.svc.cluster-domain.example (a quick test is sketched below).
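A minimal sketch of that FQDN test, assuming the nginx-svc Service above lives in the default namespace and the default cluster.local cluster domain (busybox is just a throwaway client):
$ kubectl run -it --rm test --image=busybox --restart=Never -- wget -qO- http://nginx-svc.default.svc.cluster.local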

Kubernetes, access IP outside the cluster

I have a corporate network (10.22..) which hosts a Kubernetes cluster (10.225.0.1). How can I access a VM in the same network but outside the cluster from within a pod in the cluster?
For example, I have a VM at 10.22.0.1:30000 which I need to access from a Pod in the Kubernetes cluster. I tried to create a Service like this:
apiVersion: v1
kind: Service
metadata:
  name: vm-ip
spec:
  selector:
    app: vm-ip
  ports:
  - name: vm
    protocol: TCP
    port: 30000
    targetPort: 30000
  externalIPs:
  - 10.22.0.1
But when I do curl http://vm-ip:30000 from a Pod (kubectl exec -it), it returns a "connection refused" error, although it works with "google.com". What are the ways of accessing external IPs?
You can create an endpoint for that.
Let's go through an example:
In this example, I have a http server on my network with IP 10.128.15.209 and I want it to be accessible from my pods inside my Kubernetes Cluster.
First thing is to create an endpoint. This is going to let me create a service pointing to this endpoint that will redirect the traffic to my external http server.
My endpoint manifest looks like this:
apiVersion: v1
kind: Endpoints
metadata:
  name: http-server
subsets:
- addresses:
  - ip: 10.128.15.209
  ports:
  - port: 80
$ kubectl apply -f http-server-endpoint.yaml
endpoints/http-server configured
Let's create our service:
apiVersion: v1
kind: Service
metadata:
  name: http-server
spec:
  ports:
  - port: 80
    targetPort: 80
$ kubectl apply -f http-server-service.yaml
service/http-server created
Checking that our service exists and saving its clusterIP for later usage:
user@minikube-server:~$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-server ClusterIP 10.96.228.220 <none> 80/TCP 30m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 10d
Now it's time to verify if we can access our service from a pod:
$ kubectl run ubuntu -it --rm=true --restart=Never --image=ubuntu bash
This command will create and open a bash session inside a ubuntu pod.
In my case I'll install curl to be able to check if I can access my http server. You may need to install mysql instead, depending on your service:
root@ubuntu:/# apt update; apt install -y curl
Checking connectivity with my service using its clusterIP:
root@ubuntu:/# curl 10.96.228.220:80
Hello World!
And finally using the service name (DNS):
root@ubuntu:/# curl http-server
Hello World!
So, in your specific case you have to create this:
apiVersion: v1
kind: Endpoints
metadata:
  name: vm-server
subsets:
- addresses:
  - ip: 10.22.0.1
  ports:
  - port: 30000
---
apiVersion: v1
kind: Service
metadata:
  name: vm-server
spec:
  ports:
  - port: 30000
    targetPort: 30000
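Then, mirroring the checks above, kubectl get endpoints vm-server should list 10.22.0.1:30000, and curl vm-server:30000 from inside a Pod should reach the VM (what it responds with depends on what the VM serves on that port):
$ kubectl get endpoints vm-server
$ kubectl run -it --rm test --image=ubuntu --restart=Never -- bash
root@test:/# apt update; apt install -y curl
root@test:/# curl vm-server:30000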

How to set external IP for nginx-ingress controller in private cloud kubernetes cluster

I am setting up a Kubernetes cluster to run Hyperledger Fabric apps. My cluster is on a private cloud, hence I don't have a load balancer. How do I set an IP address for my nginx-ingress-controller (pending) to expose my services? I think it is interfering with my creation of pods, since when I run kubectl get pods I see very many evicted pods. I am using cert-manager, which I think also needs IPs.
CA_POD=$(kubectl get pods -n cas -l "app=hlf-ca,release=ca" -o jsonpath="{.items[0].metadata.name}")
This does not create any pods.
nginx-ingress-controller-5bb5cd56fb-lckmm 1/1 Running
nginx-ingress-default-backend-dc47d79c-8kqbp 1/1 Running
The rest take the form
nginx-ingress-controller-5bb5cd56fb-d48sj 0/1 Evicted
ca-hlf-ca-5c5854bd66-nkcst 0/1 Pending 0 0s
ca-postgresql-0 0/1 Pending 0 0s
I would like to create pods from which I can run exec commands like
kubectl exec -n cas $CA_POD -- cat /var/hyperledger/fabric-ca/msp/signcertscert.pem
You are not exposing the nginx-controller IP address; instead you expose nginx's Service via a NodePort. For example:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-controller
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: http
  selector:
    app: nginx
In this case you'd be able to reach your service like this:
curl -v <NODE_EXTERNAL_IP>:30080
As for the question why your pods are in Pending or Evicted state, please describe the misbehaving pods:
kubectl describe pod nginx-ingress-controller-5bb5cd56fb-d48sj
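Evicted pods are often the result of node resource pressure, so the Events section of that output and the node conditions are worth checking (the node name below is a placeholder):
kubectl describe node <node-name>          # look at Conditions such as MemoryPressure / DiskPressure
kubectl get events --sort-by=.metadata.creationTimestamp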
The best approach is to use Helm:
helm install stable/nginx-ingress
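If no load balancer is available in the private cloud, the chart can be installed with a NodePort service instead; assuming the stable/nginx-ingress chart still accepts the controller.service.type value (check the chart's values for your version):
helm install stable/nginx-ingress --set controller.service.type=NodePort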

Kubernetes exposed pod connection refused - one time works, sometime not

I have a Kubernetes installation with a master and 1 node.
It is configured and everything is working very well.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mantis-gfs 1/1 Running 1 22h
mongodb-gfs 1/1 Running 0 14h
I exposed the pod mongodb-gfs:
$ kubectl expose pod mongodb-gfs --port=27017 --external-ip=10.9.8.100 --name=mongodb --labels="env=development"
The external IP 10.9.8.100 is the IP of the Kubernetes master node.
The service was created successfully.
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
glusterfs-cluster ClusterIP 10.111.96.254 <none> 1/TCP 23d
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29d
mongodb ClusterIP 10.100.149.90 10.9.8.100 27017/TCP 1m
Now I am able to access mongo using:
mongo 10.9.8.100:27017
And here is the problem: it works sometimes, but sometimes not.
I connect once and get the shell; I connect a second time and get:
$ mongo 10.9.8.100:27017
MongoDB shell version v3.4.17
connecting to: mongodb://10.9.8.100:27017/test
2018-11-01T09:27:23.524+0100 W NETWORK [thread1] Failed to connect to 10.9.8.100:27017, in(checking socket for error after poll), reason: Connection refused
2018-11-01T09:27:23.524+0100 E QUERY [thread1] Error: couldn't connect to server 10.9.8.100:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:240:13
@(connect):1:6
exception: connect failed
Then I try again and it works, try again and it works, try again and it does not work...
Any clues what may cause the problem?
I found the problem and the solution. The problem was the pod definition: both pods, mongodb-gfs and mantis-gfs, had the same label settings, and I then exposed services with the same label "env=development". As a result, the traffic that I expected to always go to one pod was "load balanced" between the two pods (they have the same label), even though they are of different types.
Changing the label in the mongodb-gfs pod definition solved the connection issues.
I had the same issue, and przemas led me in the right direction with the selectors. When different pods have the same selector labels, k8s sometimes selects the wrong one. You have to choose unique labels for the selectors. It is odd that it is not logged anywhere which pod gets selected.
apiVersion: v1
kind: Service
metadata:
  name: revproxy-svc
spec:
  selector:
    role: app
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: loadbalance-svc
spec:
  selector:
    role: app
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loadbalancer-python
spec:
  replicas: 1
  selector:
    matchLabels:
      role: app
  template:
    metadata:
      labels:
        role: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: revproxy
spec:
  replicas: 2
  selector:
    matchLabels:
      role: app
  template:
    metadata:
      labels:
        role: app
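A minimal sketch of the fix (the label values below are illustrative): give each Deployment its own label and point each Service at only that label; the Deployments' matchLabels and pod template labels would have to change accordingly.
apiVersion: v1
kind: Service
metadata:
  name: revproxy-svc
spec:
  selector:
    role: revproxy        # matches only the revproxy Deployment's pods
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: loadbalance-svc
spec:
  selector:
    role: loadbalancer    # matches only the loadbalancer-python Deployment's pods
  type: NodePort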
