Kubernetes exposed pod connection refused - sometimes it works, sometimes not - networking

I have a Kubernetes installation with a master and one node.
It is configured and everything is working well.
$ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
mantis-gfs    1/1     Running   1          22h
mongodb-gfs   1/1     Running   0          14h
I exposed the pod mongodb-gfs:
$ kubectl expose pod mongodb-gfs --port=27017 --external-ip=10.9.8.100 --name=mongodb --labels="env=development"
The external IP 10.9.8.100 is the IP of the Kubernetes master node.
The service was created successfully.
$ kubectl get services
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
glusterfs-cluster   ClusterIP   10.111.96.254   <none>        1/TCP       23d
kubernetes          ClusterIP   10.96.0.1       <none>        443/TCP     29d
mongodb             ClusterIP   10.100.149.90   10.9.8.100    27017/TCP   1m
Now I am able to access mongo using:
mongo 10.9.8.100:27017
And here is the problem: it works sometimes, and sometimes it doesn't.
I connect once and get the shell; I connect a second time and get:
$ mongo 10.9.8.100:27017
MongoDB shell version v3.4.17
connecting to: mongodb://10.9.8.100:27017/test
2018-11-01T09:27:23.524+0100 W NETWORK [thread1] Failed to connect to 10.9.8.100:27017, in(checking socket for error after poll), reason: Connection refused
2018-11-01T09:27:23.524+0100 E QUERY [thread1] Error: couldn't connect to server 10.9.8.100:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:240:13
@(connect):1:6
exception: connect failed
Then I try again and it works, try again and it works, try again and it fails...
Any clues as to what may be causing the problem?

I found the problem and the solution. The problem was the pod definitions: both pods, mongodb-gfs and mantis-gfs, had the same label settings, and I then exposed services using the same label (env=development). As a result, the traffic that I expected to always go to one pod was "load balanced" between two pods of different types, because they share the same label.
Changing the label in the mongodb-gfs pod definition solved the connection issues.
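For illustration, a minimal sketch of the fix, using the names from the question (the unique label key/value app=mongodb-gfs is a hypothetical choice):

# give the mongodb pod a unique label that no other pod carries
kubectl label pod mongodb-gfs app=mongodb-gfs --overwrite

# expose it selecting only on that unique label
kubectl expose pod mongodb-gfs --port=27017 --external-ip=10.9.8.100 \
  --name=mongodb --selector="app=mongodb-gfs"

# verify the service now has exactly one endpoint
kubectl get endpoints mongodb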

Had the same issue, and przemas led me in the right direction with the selectors. When different pods carry the same labels, a Service whose selector matches those labels load-balances across all of them, so traffic sometimes lands on the wrong pod. You have to choose unique label values for the selectors. Oddly, which pod gets selected is not logged anywhere. In my case, both Services and both Deployments used the same role: app selector:
apiVersion: v1
kind: Service
metadata:
  name: revproxy-svc
spec:
  selector:
    role: app
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: loadbalance-svc
spec:
  selector:
    role: app
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loadbalancer-python
spec:
  replicas: 1
  selector:
    matchLabels:
      role: app
  template:
    metadata:
      labels:
        role: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: revproxy
spec:
  replicas: 2
  selector:
    matchLabels:
      role: app
  template:
    metadata:
      labels:
        role: app
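A minimal sketch of the fixed version for one of the pairs, assuming hypothetical unique label values (role: revproxy here, and likewise role: loadbalancer for the other pair):

apiVersion: v1
kind: Service
metadata:
  name: revproxy-svc
spec:
  selector:
    role: revproxy    # unique value: matches only the revproxy pods
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: revproxy
spec:
  replicas: 2
  selector:
    matchLabels:
      role: revproxy
  template:
    metadata:
      labels:
        role: revproxy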

Related

Kubernetes NGINX Ingress Controller 404 Not found / Object not found

I am taking a course on Udemy and am new to the world of Kubernetes. I am trying to configure the NGINX ingress controller in Kubernetes, but it returns 404 Not Found when I send a request to the specified URL. I have been trying to fix this for 10 days; I have looked at similar questions, but none of their answers work for me. I am also using Skaffold to build and deploy the image to Docker Hub automatically when I change something in the files.
My express app server:
app.get('/api/users/currentuser', (req, res) => {
  res.send('Hi there');
});

app.listen(3000, () => {
  console.log('[Auth] - Listening on port 3000');
});
ingress-srv.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: ticketing.com
      http:
        paths:
          - path: /api/users/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 3000
auth-depl.yaml (Auth deployment & srv)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: myusername/auth:latest
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  type: ClusterIP
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
skaffold.yaml file:
apiVersion: skaffold/v2beta25
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: username/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
Dockerfile:
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "start"]
I also executed the command from the NGINX Ingress Controller docs:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.5/deploy/static/provider/cloud/deploy.yaml
I also changed the hosts file on the system:
127.0.0.1 ticketing.com
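With that entry in place, the request that should reach the ingress is, for example:

curl http://ticketing.com/api/users/currentuser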
Logs:
kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
auth-depl-5f89899d9f-wtc94   1/1     Running   0          6h33m

kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
auth-srv     ClusterIP   10.96.23.71   <none>        3000/TCP   23h
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP    25h

kubectl get pods --namespace=ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-7fm56        0/1     Completed   0          23h
ingress-nginx-admission-patch-5vflr         0/1     Completed   1          23h
ingress-nginx-controller-5c8d66c76d-89zhp   1/1     Running     0          23h

kubectl get ing
NAME          CLASS    HOSTS           ADDRESS     PORTS   AGE
ingress-srv   <none>   ticketing.com   localhost   80      23h
kubectl describe ing ingress-srv
Name:             ingress-srv
Namespace:        default
Address:          localhost
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host           Path              Backends
  ----           ----              --------
  ticketing.com
                 /api/users/?(.*)  auth-srv:3000 (10.1.0.10:3000)
Annotations:     kubernetes.io/ingress.class: nginx
                 nginx.ingress.kubernetes.io/use-regex: true
Events:
  Type    Reason  Age                 From                      Message
  ----    ------  ----                ----                      -------
  Normal  Sync    22m (x18 over 23h)  nginx-ingress-controller  Scheduled for sync
Could the problem be with the Windows IIS web server? I previously configured something with it for another project, and in the screenshot above I see:
Requested URL: http://ticketing.com:80/api/users/currentuser
Physical Path: C:\inetpub\wwwroot\api\users\currentuser
The screenshot also shows port :80 in the requested URL, but my server listens on port 3000. And when I make the request over HTTPS it returns:
502 Bad Gateway
nginx
C:\inetpub\wwwroot also looks strange to me.
Any ideas would help me a lot with continuing the course.
After a few days of research I finally solved the problem. It was the IIS web server, which I had enabled while working on an ASP.NET Core project; it was occupying port 80 for ticketing.com (127.0.0.1), so requests never reached the ingress controller. I uninstalled IIS and the problem was solved.
How to uninstall IIS from Windows 10:
Go to Control Panel > Programs and Features
Click Turn Windows features on or off
Scroll down to Internet Information Services
Clear the checkbox next to Internet Information Services
Click OK and restart the PC (required).
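Before uninstalling, you can confirm that something else owns port 80; a quick check with standard Windows commands (run in an elevated prompt; <PID_FROM_NETSTAT> stands for whatever PID the first command reports):

netstat -ano | findstr :80
tasklist /FI "PID eq <PID_FROM_NETSTAT>"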

Unable to access the nginx through Kubernetes LoadBalancer service

I'm using the Kubernetes that ships with Docker Desktop (Windows).
My deployment.yml file is
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
        - name: nginx-container
          image: nginx:stable-alpine
          ports:
            - containerPort: 80
and my service yml file is
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: nginx-app
spec:
  selector:
    app: nginx-app
  type: NodePort
  ports:
    - nodePort: 31000
      port: 80
      targetPort: 80
Everything is up and running, but I'm unable to access the application:
>curl localhost:31000
curl: (7) Failed to connect to localhost port 31000: Connection refused
>kubectl get all
NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-685658ccbf-g84w5   1/1     Running   0          8s

NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP      10.96.0.1      <none>        443/TCP        14h
service/my-service   LoadBalancer   10.96.210.40   localhost     80:31000/TCP   4s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   1/1     1            1           8s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-685658ccbf   1         1         1       8s
Note: I created inbound/outbound rules for port 31000 in the Windows firewall to make sure it isn't being blocked.
There are some questions you can try to answer in order to debug Services:
Does the Service exist?: In your case we see that it does.
Does the Service work by DNS name?: One of the most common ways that clients consume a Service is through a DNS name.
Does the Service work by IP?: Assuming you have confirmed that DNS works, the next thing to test is whether your Service works by its IP address.
Is the Service defined correctly?: You should really double and triple check that your Service is correct and matches your Pod's port. Also:
Is the Service port you are trying to access listed in spec.ports[]?
Is the targetPort correct for your Pods (some Pods use a different port than the Service)?
If you meant to use a numeric port, is it a number (9376) or a string "9376"?
If you meant to use a named port, do your Pods expose a port with the same name?
Is the port's protocol correct for your Pods?
Does the Service have any Endpoints?: Check that the Pods you ran are actually being selected by the Service.
Are the Pods working?: Check again that the Pods are actually working.
Is the kube-proxy working?: Confirm that kube-proxy is running on your Nodes.
Going through the above steps will help you find the cause of this and possible future issues with Services; a sketch of the corresponding commands follows.
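For illustration, the checklist above maps onto concrete commands; a minimal sketch, using the my-service name and cluster IP from the question:

# Does the Service exist?
kubectl get service my-service

# Does it select any endpoints?
kubectl get endpoints my-service

# Does the Service work by DNS name? (from a throwaway pod inside the cluster)
kubectl run -it --rm dns-test --image=busybox --restart=Never -- \
  nslookup my-service

# Does the Service work by its cluster IP? (10.96.210.40 from the question)
kubectl run -it --rm curl-test --image=busybox --restart=Never -- \
  wget -qO- http://10.96.210.40:80

# Is kube-proxy running on the nodes?
kubectl get pods -n kube-system -l k8s-app=kube-proxy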

How to access simple nginx deployment on kubernetes?

I want to deploy a simple nginx app on my own Kubernetes cluster.
I used the basic nginx deployment, on the machine with the IP 192.168.188.10. It is part of a cluster of 3 Raspberry Pis:
NAME         STATUS   ROLES    AGE     VERSION
master-pi4   Ready    master   2d20h   v1.18.2
node1-pi4    Ready    <none>   2d19h   v1.18.2
node2-pi3    Ready    <none>   2d19h   v1.18.2
$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
$ kubectl create service nodeport nginx --tcp=80:80
service/nginx created
$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
my-nginx-8fb6d868-6957j   1/1     Running   0          10m
my-nginx-8fb6d868-8c59b   1/1     Running   0          10m
nginx-f89759699-n6f79     1/1     Running   0          4m20s
$ kubectl describe service nginx
Name:                     nginx
Namespace:                default
Labels:                   app=nginx
Annotations:              <none>
Selector:                 app=nginx
Type:                     NodePort
IP:                       10.98.41.205
Port:                     80-80  80/TCP
TargetPort:               80/TCP
NodePort:                 80-80  31400/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
But I always get a timeout:
$ curl http://192.168.188.10:31400/
curl: (7) Failed to connect to 192.168.188.10 port 31400: Connection timed out
Why is the nginx web server not reachable? I tried to reach it from the same machine I deployed it on. How can I make it accessible from another machine on the network on port 31400?
As mentioned by @suren, you are creating a stand-alone service with no link to your deployment (note the Endpoints: <none> in the describe output above).
You can solve it using the command from suren's answer, or by creating a new deployment using the following YAML spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - name: http
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Afterwards, run kubectl get svc to get the NodePort assigned to your service:
nginx-svc   NodePort   10.100.136.135   <none>   80:31816/TCP   34s
To access it, use http://<YOUR_NODE_IP>:31816
So, is 192.168.188.10 your host IP or your VM IP?
You have to check first whether any other service is using that port, and whether you have added it to your security group if you are using a cloud platform.
Just to make sure things work inside the cluster, you can create a pod and access the service using its FQDN, like my-svc.my-namespace.svc.cluster-domain.example.
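For illustration, a quick in-cluster check along those lines, assuming the nginx-svc Service from the answer above lives in the default namespace:

# resolve and fetch the service by its fully qualified DNS name
kubectl run -it --rm fqdn-test --image=busybox --restart=Never -- \
  wget -qO- http://nginx-svc.default.svc.cluster.local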

How to set external IP for nginx-ingress controller in private cloud kubernetes cluster

I am setting up a Kubernetes cluster to run Hyperledger Fabric apps. My cluster is on a private cloud, hence I don't have a load balancer. How do I set an IP address for my nginx-ingress-controller (stuck in pending) to expose my services? I think this is interfering with my creation of pods, since when I run kubectl get pods I see very many evicted pods. I am using cert-manager, which I think also needs IPs.
CA_POD=$(kubectl get pods -n cas -l "app=hlf-ca,release=ca" -o jsonpath="{.items[0].metadata.name}")
This does not create any pods.
nginx-ingress-controller-5bb5cd56fb-lckmm 1/1 Running
nginx-ingress-default-backend-dc47d79c-8kqbp 1/1 Running
The rest take the form
nginx-ingress-controller-5bb5cd56fb-d48sj 0/1 Evicted
ca-hlf-ca-5c5854bd66-nkcst 0/1 Pending 0 0s
ca-postgresql-0 0/1 Pending 0 0s
I would like to create pods on which I can run exec commands, like:
kubectl exec -n cas $CA_POD -- cat /var/hyperledger/fabric-ca/msp/signcerts/cert.pem
You don't expose the nginx-controller's IP address; you expose nginx's service via a NodePort instead. For example:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-controller
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
      name: http
  selector:
    app: nginx
In this case you'd be able to reach your service like:
curl -v <NODE_EXTERNAL_IP>:30080
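To find <NODE_EXTERNAL_IP>, you can list the nodes together with their addresses:

kubectl get nodes -o wide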
As for why your pods are in the Pending state, describe the misbehaving pods:
kubectl describe pod nginx-ingress-controller-5bb5cd56fb-d48sj
The best approach is to use Helm:
helm install stable/nginx-ingress
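Since a private cloud has no load balancer to provision the ingress controller's Service, a sketch of installing it with a NodePort Service instead (controller.service.type is a value exposed by the stable/nginx-ingress chart):

helm install stable/nginx-ingress --set controller.service.type=NodePort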

Kubernetes services cannot reach each other anymore

I'm running Kubernetes on GKE. This was working before, but about 2 days ago something changed; I don't think I changed anything in my configuration. My services no longer seem to work. None of my services can talk to each other. When SSHing into a running pod, I cannot ping them via their service names, nor via their internal IP addresses. The external IP of the load balancer is not reachable. Here is an example of how I define the deployment:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    ksonnet.io/component: app-name
  name: app-name
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app-name
And here is the service:
apiVersion: v1
kind: Service
metadata:
  labels:
    ksonnet.io/component: app-name
  name: app-name
spec:
  loadBalancerIP: x.x.x.x
  ports:
    - port: 4999
      targetPort: 5000
  selector:
    app: app-name
  type: LoadBalancer
I am fairly new to Kubernetes and networking, and I have no clue where to look or how to debug this issue.
EDIT:
Here is the relevant output of kubectl get services -n test:
dashboard   ClusterIP      10.47.242.176   <none>        5000/TCP         1h
app-name    LoadBalancer   10.47.246.63    x.xxx.xx.xx   4999:31439/TCP   1h
Then here is kubectl describe service app-name -n test:
Name:                     app-name
Namespace:                test
Labels:                   app.kubernetes.io/deploy-manager=ksonnet
                          ksonnet.io/component=app-name
Annotations:              ksonnet.io/managed: {pristine...}
Selector:                 app=app-name
Type:                     LoadBalancer
IP:                       10.47.246.63
IP:                       xx.xxx.xx.x
LoadBalancer Ingress:     xx.xxx.xx.x
Port:                     <unset>  4999/TCP
TargetPort:               5000/TCP
NodePort:                 <unset>  31439/TCP
Endpoints:                10.44.1.141:5000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
EDIT 2: I tried the curl command on the default port and it timed out:
curl: (7) Failed to connect to app-name port 80: Connection timed out
When trying the full endpoint, I got connection refused:
curl: (7) Failed to connect to app-name port 4999: Connection refused
When looking at the deployment I get the following pod template:
Pod Template:
Labels: app=app-name
Containers:
model-manager:
Image: gcr.io/ns-delay/app-name:0.1
Port: 5000/TCP
Host Port: 0/TCP
As I see it, the selector in your Service does not match the labels on your Deployment. Change the metadata in your Deployment to:
metadata:
  labels:
    app: app-name
and it should work then.
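For illustration, a quick way to verify that the selector and labels line up, using the names from the question:

# pods that the Service's selector actually matches
kubectl get pods -n test -l app=app-name

# the Service's resolved endpoints; an empty list means a selector/label mismatch
kubectl get endpoints app-name -n test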
