Kubernetes services - port-forwarding to all pods through a deployment

I want to showcase Kubernetes load-balancing capabilities. On my local system I have a single-node cluster. I want to deploy an nginx container in 3 pods and replace the default index.html with modified versions (each with small variations). I am creating a service that forwards all requests to port 80 of the containers, and I want to reach the pods at http://localhost:3030. Depending on which pod a request hits, a different index.html should be displayed. However, with the deployment and service below I could not reach any pod. If I port-forward to an individual pod, I can reach it.
I followed the approach explained here, but no luck. Any idea what I am missing?
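A quick way to check whether the Service actually selects the pods, and whether its targetPort matches the containerPort, is to look at the Endpoints object (a rough diagnostic sketch, not part of the original setup):
$ kubectl describe service app-service
$ kubectl get endpoints app-service
If the Endpoints line is empty, the selector does not match the pod labels; if it lists pod IPs with a port nothing is listening on (here targetPort 3030, while the nginx containers listen on 80), requests will hang or be refused.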
Here is what I see when I run get all.
$ k get all
NAME READY STATUS RESTARTS AGE
pod/app-server-6ccf5d55db-2qt2r 1/1 Running 0 3d20h
pod/app-server-6ccf5d55db-96lkb 1/1 Running 0 3d20h
pod/app-server-6ccf5d55db-ljsc4 1/1 Running 0 3d20h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/app-server 3/3 3 3 3d20h
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  type: NodePort
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 3030
  selector:
    app: app-server
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-server
  labels:
    app: app-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app-server
  template:
    metadata:
      labels:
        app: app-server
    spec:
      containers:
        - name: web-server
          image: nginx:latest
          ports:
            - containerPort: 80

OK, I made two mistakes.
Both the service and the app-server deployment were in a single file.
I messed up the port, targetPort, and nodePort values.
Here are the changes I made that worked.
Service.yml
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  type: NodePort
  ports:
    - name: httpport
      protocol: TCP
      port: 32766
      nodePort: 32766
      targetPort: 80
  selector:
    app: app-server
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-server
  labels:
    app: app-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app-server
  template:
    metadata:
      labels:
        app: app-server
    spec:
      containers:
        - name: web-server
          image: nginx:latest
          ports:
            - containerPort: 80
I deployed the app server first and then the service. After that I was able to reach the nginx server at http://localhost:32766
Here is the output of my k get all
$ k get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/app-server-6ccf5d55db-9xjwh 1/1 Running 0 60s 10.1.0.201 docker-desktop <none> <none>
pod/app-server-6ccf5d55db-mdtrx 1/1 Running 0 60s 10.1.0.200 docker-desktop <none> <none>
pod/app-server-6ccf5d55db-smmcg 1/1 Running 0 60s 10.1.0.199 docker-desktop <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/app-service NodePort 10.110.72.85 <none> 32766:32766/TCP 54s app=app-server
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20d <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/app-server 3/3 3 3 60s web-server nginx:latest app=app-server
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/app-server-6ccf5d55db 3 3 3 60s web-server nginx:latest app=app-server,pod-template-hash=6ccf5d55db
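To actually see the round-robin behaviour, one option (a sketch, assuming the stock nginx image and its default html path) is to write each pod's name into its index.html and then curl the NodePort a few times:
# write a distinct index.html into every pod of the deployment
for p in $(kubectl get pods -l app=app-server -o name); do
  kubectl exec "$p" -- /bin/sh -c 'echo "served by $(hostname)" > /usr/share/nginx/html/index.html'
done
# repeat a few times; the responding pod name should vary
curl http://localhost:32766/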

Related

The external IP of my Nginx load balancer does not work

OK, let me explain my problem...
I have deployed a Kubernetes cluster with kind. This is my config:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: kind
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
  # Mongo
  - containerPort: 30005
    hostPort: 27017
    protocol: TCP
- role: worker
  extraMounts:
  - hostPath: C:/Kind
    containerPath: /data
- role: worker
  extraMounts:
  - hostPath: C:/Kind
    containerPath: /data
- role: worker
  extraMounts:
  - hostPath: C:/Kind
    containerPath: /data
The next step is to deploy MetalLB (the load balancer). I used these YAMLs:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
To configure layer 2 mode, I set an IP range inside the kind network. To find it:
docker network inspect -f '{{.IPAM.Config}}' kind
This command shows:
[{172.18.0.0/16 172.18.0.1 map[]} {fc00:f853:ccd:e793::/64 fc00:f853:ccd:e793::1 map[]}]
So, I set the following configmap:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.18.255.200-172.18.255.250
OK, the last step is to install the NGINX ingress controller, which I did with the following command:
helm install nginx-ingress-controller bitnami/nginx-ingress-controller
Everything deployed OK, and with this command I can see it all:
kubectl get all
This command shows:
NAME READY STATUS RESTARTS AGE
pod/ddclient-deployment-fcbf95d66-ndldk 1/1 Running 0 51m
pod/nginx-ingress-controller-6b9cf4684f-7hsw2 1/1 Running 0 64s
pod/nginx-ingress-controller-default-backend-6798d86668-7b552 1/1 Running 0 64s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h
service/nginx-ingress-controller LoadBalancer 10.96.49.179 172.18.255.200 80:30307/TCP,443:31387/TCP 64s
service/nginx-ingress-controller-default-backend ClusterIP 10.96.247.49 <none> 80/TCP 64s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ddclient-deployment 1/1 1 1 51m
deployment.apps/nginx-ingress-controller 1/1 1 1 64s
deployment.apps/nginx-ingress-controller-default-backend 1/1 1 1 64s
NAME DESIRED CURRENT READY AGE
replicaset.apps/ddclient-deployment-fcbf95d66 1 1 1 51m
replicaset.apps/nginx-ingress-controller-6b9cf4684f 1 1 1 64s
replicaset.apps/nginx-ingress-controller-default-backend-6798d86668 1 1 1 64s
Well, here is the problem. In theory, if you put the load balancer's external IP:
service/nginx-ingress-controller LoadBalancer 10.96.49.179 172.18.255.200 80:30307/TCP,443:31387/TCP 64s
into the browser, you should see the nginx web page. I can't; I just see an error message saying
"ERR_CONNECTION_TIMED_OUT".
I don't know what I am missing...
Thanks for the help!
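One thing worth checking (a sketch, not from the original post): on Docker Desktop for Windows the kind nodes and the MetalLB address pool live on a Docker bridge network that is generally not routable from the host's browser, which would explain the timeout. A rough way to test the LoadBalancer IP from inside that network, assuming the kind Docker network is named kind as shown above:
docker run --rm --network kind curlimages/curl -sS http://172.18.255.200
If that returns a response (for example the controller's default-backend 404), MetalLB and the ingress controller are working and the remaining problem is host-to-network reachability.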

How to access a simple nginx deployment on Kubernetes?

I want to deploy a simple nginx app on my own Kubernetes cluster.
I used the basic nginx deployment on the machine with the IP 192.168.188.10; it is part of a cluster of 3 Raspberry Pis.
NAME STATUS ROLES AGE VERSION
master-pi4 Ready master 2d20h v1.18.2
node1-pi4 Ready <none> 2d19h v1.18.2
node2-pi3 Ready <none> 2d19h v1.18.2
$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
$ kubectl create service nodeport nginx --tcp=80:80
service/nginx created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-8fb6d868-6957j 1/1 Running 0 10m
my-nginx-8fb6d868-8c59b 1/1 Running 0 10m
nginx-f89759699-n6f79 1/1 Running 0 4m20s
$ kubectl describe service nginx
Name: nginx
Namespace: default
Labels: app=nginx
Annotations: <none>
Selector: app=nginx
Type: NodePort
IP: 10.98.41.205
Port: 80-80 80/TCP
TargetPort: 80/TCP
NodePort: 80-80 31400/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
But I always get a timeout:
$ curl http://192.168.188.10:31400/
curl: (7) Failed to connect to 192.168.188.10 port 31400: Connection timed out
Why is the nginx web server not reachable? I tried to reach it from the same machine I deployed it on. How can I make it accessible from another machine on the network on port 31400?
As mentioned by @suren, you are creating a stand-alone service that has no link to your deployment.
You can solve it using the command from suren's answer, or by creating a new deployment using the following YAML spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Afterwards, run kubectl get svc to get the NodePort and access your service.
nginx-svc NodePort 10.100.136.135 <none> 80:31816/TCP 34s
To access it, use http://<YOUR_NODE_IP>:31816
So, is 192.168.188.10 your host IP / your VM IP?
You have to check first whether any other service is using that port, or whether you haven't added it to your security group if you are using a cloud platform.
Just to make sure, you can create a pod and access the service using its FQDN, like my-svc.my-namespace.svc.cluster-domain.example
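For example (a sketch, assuming the nginx-svc Service from the answer above lives in the default namespace), you can test the in-cluster DNS name from a throwaway pod:
kubectl run tmp --rm -it --restart=Never --image=busybox -- wget -qO- http://nginx-svc.default.svc.cluster.local
If that prints the nginx welcome page, the Service and its endpoints are fine and the remaining problem is node-level access (NodePort, firewall, or security group).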

Kubernetes services cannot reach each other anymore

I’m running Kubernetes on GKE. This was working before, but about 2 days ago something changed; I don’t think I changed anything in my configuration. My services do not seem to work anymore: none of them can talk to each other. When SSHing into a running pod I cannot ping them via their service names, nor via their internal IP addresses. The external IP of the load balancer is not reachable either. Here is an example of how I define the deployment:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    ksonnet.io/component: app-name
  name: app-name
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app-name
And here is the service:
apiVersion: v1
kind: Service
metadata:
  labels:
    ksonnet.io/component: app-name
  name: app-name
spec:
  loadBalancerIP: x.x.x.x
  ports:
  - port: 4999
    targetPort: 5000
  selector:
    app: app-name
  type: LoadBalancer
I am fairly new to Kubernetes and networking and I have no clue where to look or how to debug this issue.
EDIT:
Here are the relevant kubectl get services -n test
dashboard ClusterIP 10.47.242.176 <none> 5000/TCP 1h
app-name LoadBalancer 10.47.246.63 x.xxx.xx.xx 4999:31439/TCP 1h
Then here is the kubectl describe service app-name -n test
Name: app-name
Namespace: test
Labels: app.kubernetes.io/deploy-manager=ksonnet
ksonnet.io/component=app-name
Annotations: ksonnet.io/managed: {pristine...}
Selector: app=app-name
Type: LoadBalancer
IP: 10.47.246.63
IP: xx.xxx.xx.x
LoadBalancer Ingress: xx.xxx.xx.x
Port: <unset> 4999/TCP
TargetPort: 5000/TCP
NodePort: <unset> 31439/TCP
Endpoints: 10.44.1.141:5000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
EDIT 2: I tried the curl command on the default port and it timed out:
curl: (7) Failed to connect to app-name port 80: Connection timed out
When trying the full endpoint I got a connection refused:
curl: (7) Failed to connect to app-name port 4999: Connection refused
When looking at the deployment I get the following pod template:
Pod Template:
Labels: app=app-name
Containers:
model-manager:
Image: gcr.io/ns-delay/app-name:0.1
Port: 5000/TCP
Host Port: 0/TCP
As far as I can see, the selector in your Service does not match the labels in your Deployment's pod template. Change it to:
metadata:
  labels:
    app: app-name
in your Deployment and it should work.
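One way to verify the fix (a sketch, using the namespace from the question): check that the Service now has endpoints and that the service port answers from inside the cluster.
kubectl get endpoints app-name -n test
kubectl exec -it <some-pod> -n test -- curl -v http://app-name:4999
Here <some-pod> is a placeholder for any running pod in that namespace that has curl available.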

How do I get one pod to network to another pod in Kubernetes? (SIMPLE)

I've been banging my head against this wall on and off for a while. There is a ton of information on Kubernetes on the web, but it's all assuming so much knowledge that n00bs like me don't really have much to go on.
So, can anyone share a simple example of the following (as a yaml file)? All I want is
two pods
let's say one pod has a backend (I don't know - node.js), and one has a frontend (say React).
A way to network between them.
And then an example of making an API call from one to the other.
I start looking into this sort of thing, and all of a sudden I hit this page - https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this. This is super unhelpful. I don't want or need advanced network policies, nor do I have the time to go through several different service layers that are mapped on top of kubernetes. I just want to figure out a trivial example of a network request.
Hopefully if this example exists on stackoverflow it will serve other people as well.
Any help would be appreciated. Thanks.
EDIT; it looks like the easiest example may be using the Ingress controller.
EDIT EDIT;
I'm working to try and get a minimal example deployed - I'll walk through some steps here and point out my issues.
So below is my yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: www.kubeplaytime.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
      - path: /api
        backend:
          serviceName: backend
          servicePort: 80
What I believe this is doing is:
Deploying a frontend and a backend app - I pushed patientplatypus/frontend_example and patientplatypus/backend_example to Docker Hub and then pull the images down. One open question I have: what if I don't want to pull the images from Docker Hub and would rather just load them from my localhost - is that possible? In that case I would push my code to the production server, build the Docker images on the server, and then upload them to Kubernetes. The benefit is that I don't have to rely on Docker Hub if I want my images to be private.
It is creating two service endpoints that route outside traffic from a web browser to each of the deployments. These services are of type LoadBalancer because they balance the traffic among the (in this case 3) replicas in each deployment.
Finally, I have an Ingress which is supposed to route www.kubeplaytime.example and www.kubeplaytime.example/api to my services. However, this is not working.
What happens when I run this?
patientplatypus:~/Documents/kubePlay:09:17:50$kubectl create -f kube-deploy.yaml
deployment.apps "frontend" created
service "frontend" created
deployment.apps "backend" created
service "backend" created
ingress.extensions "frontend" created
So first, it appears to create all the parts that I need fine with no errors.
patientplatypus:~/Documents/kubePlay:09:22:30$kubectl get --watch services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend LoadBalancer 10.0.18.174 <pending> 80:31649/TCP 1m
frontend LoadBalancer 10.0.100.65 <pending> 80:32635/TCP 1m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10d
frontend LoadBalancer 10.0.100.65 138.91.126.178 80:32635/TCP 2m
backend LoadBalancer 10.0.18.174 138.91.121.182 80:31649/TCP 2m
Second, if I watch the services, I eventually get IP addresses that I can use to navigate to these sites in my browser. Each of the above IP addresses routes me to the frontend and backend respectively.
HOWEVER
I hit an issue when I try to use the Ingress - it seemingly deployed, but I don't know how to reach it.
patientplatypus:~/Documents/kubePlay:09:24:44$kubectl get ingresses
NAME HOSTS ADDRESS PORTS AGE
frontend www.kubeplaytime.example 80 16m
So I have no address I can use, and www.kubeplaytime.example does not appear to work.
What it appears I have to do to route traffic to the Ingress I just created is to deploy a service and deployment for the ingress controller itself in order to get an IP address, but this starts to look incredibly complicated very quickly.
For example, take a look at this medium article: https://medium.com/#cashisclay/kubernetes-ingress-82aa960f658e.
It would appear that the code needed just for routing the service to the Ingress (i.e. what he calls the Ingress Controller) is this:
---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: ingress-nginx
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-default-backend
spec:
  ports:
  - port: 80
    targetPort: http
  selector:
    app: nginx-default-backend
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nginx-default-backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-default-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
This would seemingly need to be appended to my other YAML code above in order to get a service entry point for my ingress routing, and it does appear to give an IP:
patientplatypus:~/Documents/kubePlay:09:54:12$kubectl get --watch services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend LoadBalancer 10.0.31.209 <pending> 80:32428/TCP 4m
frontend LoadBalancer 10.0.222.47 <pending> 80:32482/TCP 4m
ingress-nginx LoadBalancer 10.0.28.157 <pending> 80:30573/TCP,443:30802/TCP 4m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10d
nginx-default-backend ClusterIP 10.0.71.121 <none> 80/TCP 4m
frontend LoadBalancer 10.0.222.47 40.121.7.66 80:32482/TCP 5m
ingress-nginx LoadBalancer 10.0.28.157 40.121.6.179 80:30573/TCP,443:30802/TCP 6m
backend LoadBalancer 10.0.31.209 40.117.248.73 80:32428/TCP 7m
So ingress-nginx appears to be the site I want to get to. Navigating to 40.121.6.179 returns a default 404 message (default backend - 404) - it does not route to frontend as / ought to. /api returns the same. Navigating to my host name www.kubeplaytime.example returns a 404 from the browser - no error handling.
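A diagnostic sketch (not part of the original post): the "default backend - 404" response usually means the request's Host header did not match any Ingress rule, so sending the expected host explicitly separates a routing problem from a DNS problem:
curl -H "Host: www.kubeplaytime.example" http://40.121.6.179/
curl -H "Host: www.kubeplaytime.example" http://40.121.6.179/api
kubectl describe ingress frontend
If the curls with the Host header work, the Ingress rules are fine and only the DNS entry for www.kubeplaytime.example is missing.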
QUESTIONS
Is the Ingress Controller strictly necessary, and if so is there a less complicated version of this?
I feel I am close, what am I doing wrong?
FULL YAML
Available here: https://gist.github.com/patientplatypus/fa07648339ee6538616cb69282a84938
Thanks for the help!
EDIT EDIT EDIT
I've attempted to use Helm. On the surface it appears to be a simple interface, so I tried spinning it up:
patientplatypus:~/Documents/kubePlay:12:13:00$helm install stable/nginx-ingress
NAME: erstwhile-beetle
LAST DEPLOYED: Sun May 6 12:13:30 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
erstwhile-beetle-nginx-ingress-controller 1 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
erstwhile-beetle-nginx-ingress-controller LoadBalancer 10.0.216.38 <pending> 80:31494/TCP,443:32118/TCP 1s
erstwhile-beetle-nginx-ingress-default-backend ClusterIP 10.0.55.224 <none> 80/TCP 1s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
erstwhile-beetle-nginx-ingress-controller 1 1 1 0 1s
erstwhile-beetle-nginx-ingress-default-backend 1 1 1 0 1s
==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
erstwhile-beetle-nginx-ingress-controller 1 N/A 0 1s
erstwhile-beetle-nginx-ingress-default-backend 1 N/A 0 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
erstwhile-beetle-nginx-ingress-controller-7df9b78b64-24hwz 0/1 ContainerCreating 0 1s
erstwhile-beetle-nginx-ingress-default-backend-849b8df477-gzv8w 0/1 ContainerCreating 0 1s
NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w erstwhile-beetle-nginx-ingress-controller'
An example Ingress that makes use of the controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: example
  namespace: foo
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          serviceName: exampleService
          servicePort: 80
        path: /
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
  - hosts:
    - www.example.com
    secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: foo
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
type: kubernetes.io/tls
Seemingly this is really nice - it spins everything up and gives an example of how to add an Ingress. Since I spun Helm up on a blank cluster, I used the following YAML file to add what I thought would be required.
The file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /api
        backend:
          serviceName: backend
          servicePort: 80
      - path: /
        frontend:
          serviceName: frontend
          servicePort: 80
Deploying this to the cluster, however, runs into this error:
patientplatypus:~/Documents/kubePlay:11:44:20$kubectl create -f kube-deploy.yaml
deployment.apps "frontend" created
service "frontend" created
deployment.apps "backend" created
service "backend" created
error: error validating "kube-deploy.yaml": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[1]): unknown field "frontend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath, ValidationError(Ingress.spec.rules[0].http.paths[1]): missing required field "backend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath]; if you choose to ignore these errors, turn validation off with --validate=false
So the question then becomes: well, crap, how do I debug this?
If you spit out the code that Helm produces, it's basically unreadable - there's no way to go in there and figure out what's going on.
Check it out: https://gist.github.com/patientplatypus/0e281bf61307f02e16e0091397a1d863 - over 1000 lines!
If anyone has a better way to debug a helm deploy add it to the list of open questions.
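One possibility (a sketch, using the Helm 2 syntax from the commands above) is to render the chart without installing it, so the generated manifests can be read before they ever hit the cluster:
helm install stable/nginx-ingress --dry-run --debug
# or write the rendered manifests to a file for inspection
helm template stable/nginx-ingress > rendered.yaml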
EDIT EDIT EDIT EDIT
To simplify in the extreme, I attempt to make a call from one pod to another using only the service name.
So here is my React code where I make the http request:
axios.get('http://backend/test')
  .then(response => {
    console.log('return from backend and response: ', response);
  })
  .catch(error => {
    console.log('return from backend and error: ', error);
  })
I've also attempted to use http://backend.exampledeploy.svc.cluster.local/test without luck.
Here is my node code handling the get:
router.get('/test', function(req, res, next) {
  res.json({"test": "test"})
});
Here is the YAML file that I am uploading to the cluster:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  namespace: exampledeploy
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: exampledeploy
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  namespace: exampledeploy
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: exampledeploy
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
The upload to the cluster appears to work, as I can see in my terminal:
patientplatypus:~/Documents/kubePlay:14:33:20$kubectl get all --namespace=exampledeploy
NAME READY STATUS RESTARTS AGE
pod/backend-584c5c59bc-5wkb4 1/1 Running 0 15m
pod/backend-584c5c59bc-jsr4m 1/1 Running 0 15m
pod/backend-584c5c59bc-txgw5 1/1 Running 0 15m
pod/frontend-647c99cdcf-2mmvn 1/1 Running 0 15m
pod/frontend-647c99cdcf-79sq5 1/1 Running 0 15m
pod/frontend-647c99cdcf-r5bvg 1/1 Running 0 15m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/backend LoadBalancer 10.0.112.160 168.62.175.155 80:31498/TCP 15m
service/frontend LoadBalancer 10.0.246.212 168.62.37.100 80:31139/TCP 15m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/backend 3 3 3 3 15m
deployment.extensions/frontend 3 3 3 3 15m
NAME DESIRED CURRENT READY AGE
replicaset.extensions/backend-584c5c59bc 3 3 3 15m
replicaset.extensions/frontend-647c99cdcf 3 3 3 15m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/backend 3 3 3 3 15m
deployment.apps/frontend 3 3 3 3 15m
NAME DESIRED CURRENT READY AGE
replicaset.apps/backend-584c5c59bc 3 3 3 15m
replicaset.apps/frontend-647c99cdcf 3 3 3 15m
However, when I attempt to make the request I get the following error:
return from backend and error:
Error: Network Error
Stack trace:
createError#http://168.62.37.100/static/js/bundle.js:1555:15
handleError#http://168.62.37.100/static/js/bundle.js:1091:14
App.js:14
Since the axios call is being made from the browser, I'm wondering if it is simply not possible to use this method to call the backend, even though the backend and the frontend are in different pods. I'm a little lost, as I thought this was the simplest possible way to network pods together.
EDIT X5
I've determined that it is possible to curl the backend from the command line by exec'ing into the pod like this:
patientplatypus:~/Documents/kubePlay:15:25:25$kubectl exec -ti frontend-647c99cdcf-5mfz4 --namespace=exampledeploy -- curl -v http://backend/test
* Hostname was NOT found in DNS cache
* Trying 10.0.249.147...
* Connected to backend (10.0.249.147) port 80 (#0)
> GET /test HTTP/1.1
> User-Agent: curl/7.38.0
> Host: backend
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Content-Type: application/json; charset=utf-8
< Content-Length: 15
< ETag: W/"f-SzkCEKs7NV6rxiz4/VbpzPnLKEM"
< Date: Sun, 06 May 2018 20:25:49 GMT
< Connection: keep-alive
<
* Connection #0 to host backend left intact
{"test":"test"}
What this means, without a doubt, is that because the front-end code is executed in the browser, it needs Ingress to gain entry into the pod: HTTP requests from the front end are what break with simple pod networking. I was unsure of this, but it means Ingress is necessary.
First of all, let's clarify some apparent misconceptions. You mentioned your front end is a React application, which will presumably run in the user's browser. For this to work, your actual problem is not your back-end and front-end pods communicating with each other, but rather that the browser needs to be able to connect to both of these pods (to the front-end pod in order to load the React application, and to the back-end pod for the React app to make API calls).
To visualize:
                                         +---------+
                                     +---| Browser |---+
                                     |   +---------+   |
                                     V                 V
+-----------+     +----------+   +-----------+   +----------+
| Front-end |---->| Back-end |   | Front-end |   | Back-end |
+-----------+     +----------+   +-----------+   +----------+
    (what you asked for)               (what you need)
As already stated, the easiest solution for this would be to use an Ingress controller. I won't go into detail on how to set up an Ingress controller here; in some cloud environments (like GKE) you will be able to use an Ingress controller provided to you by the cloud provider. Otherwise, you can set up the NGINX Ingress controller. Have a look at the NGINX Ingress controller's deployment guide for more information.
Define services
Start by defining Service resources for both your front-end and back-end application (these would also allow your Pods to communicate with each other). A service definition might look like this:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Make sure that your Pods have labels that can be selected by the Service resource (in this example, I'm using app=backend and app=frontend as labels).
If you want to establish Pod-to-Pod communication, you're done now. In each Pod, you can now use backend.<namespace>.svc.cluster.local (or backend as shorthand) and frontend as host names to connect to that Pod.
Define Ingresses
Next up, you can define the Ingress resources; since both services will need connectivity from outside the cluster (the users browser), you will need Ingress definitions for both services.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: www.your-application.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: backend
spec:
  rules:
  - host: api.your-application.example
    http:
      paths:
      - path: /
        backend:
          serviceName: backend
          servicePort: 80
Alternatively, you could also aggregate frontend and backend with a single Ingress resource (no "right" answer here, just a matter of preference):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: www.your-application.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
      - path: /api
        backend:
          serviceName: backend
          servicePort: 80
After that, make sure that both www.your-application.example and api.your-application.example point to your Ingress controller's external IP address, and you should be done.
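For local testing without real DNS, a common shortcut (a sketch; 203.0.113.10 is a placeholder for your Ingress controller's external IP) is a hosts-file entry:
echo "203.0.113.10 www.your-application.example api.your-application.example" | sudo tee -a /etc/hosts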
As it turns out, I was over-complicating things. Here is the Kubernetes file that does what I want. You can do this using two deployments (front end and back end) and one service entry point. As far as I can tell, a service can load-balance to many (not just 2) different deployments, meaning for practical development this should be a good start for microservice development. One of the benefits of an ingress method is allowing the use of path names rather than port numbers, but given the difficulty it doesn't seem practical in development.
Here is the yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: exampleapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: exampleapp
  template:
    metadata:
      labels:
        app: exampleapp
    spec:
      containers:
      - name: nginx
        image: patientplatypus/kubeplayfrontend
        ports:
        - containerPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: exampleapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: exampleapp
  template:
    metadata:
      labels:
        app: exampleapp
    spec:
      containers:
      - name: nginx
        image: patientplatypus/kubeplaybackend
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: entrypt
spec:
  type: LoadBalancer
  ports:
  - name: backend
    port: 8080
    targetPort: 5000
  - name: frontend
    port: 81
    targetPort: 3000
  selector:
    app: exampleapp
Here are the bash commands I use to spin it up (you may have to add a login command - docker login - to push to Docker Hub):
#!/bin/bash
# stop all containers
echo stopping all containers
docker stop $(docker ps -aq)
# remove all containers
echo removing all containers
docker rm $(docker ps -aq)
# remove all images
echo removing all images
docker rmi $(docker images -q)
echo building backend
cd ./backend
docker build -t patientplatypus/kubeplaybackend .
echo push backend to dockerhub
docker push patientplatypus/kubeplaybackend:latest
echo building frontend
cd ../frontend
docker build -t patientplatypus/kubeplayfrontend .
echo push frontend to dockerhub
docker push patientplatypus/kubeplayfrontend:latest
echo now working on kubectl
cd ..
echo deleting previous variables
kubectl delete pods,deployments,services entrypt backend frontend
echo creating deployment
kubectl create -f kube-deploy.yaml
echo watching services spin up
kubectl get services --watch
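Once the entrypt service gets an EXTERNAL-IP, a usage sketch based on the port mapping above (<EXTERNAL-IP> is a placeholder):
curl http://<EXTERNAL-IP>:81/          # frontend, port 81 -> targetPort 3000
curl http://<EXTERNAL-IP>:8080/test    # backend, port 8080 -> targetPort 5000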
The actual code is just a frontend react app making an axios http call to a backend node route on componentDidMount of the starting App page.
You can also see a working example here: https://github.com/patientplatypus/KubernetesMultiPodCommunication
Thanks again everyone for your help.
To use an ingress controller you need a valid domain (a DNS server configured to point to your ingress controller's IP). This is not due to any Kubernetes "magic" but due to the way vhosts work (here is an example for nginx - very often used as an ingress server, but any other ingress implementation will work the same way under the hood).
If you can't configure your domain, the easiest way for dev purposes would be to create a Kubernetes service. There is a nice shortcut for doing it using kubectl expose:
kubectl expose pod frontend-pod --port=444 --name=frontend
kubectl expose pod backend-pod --port=888 --name=backend
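A variant sketch, exposing the Deployments instead of individual pods (port numbers taken from the manifests earlier in the thread), so the Services keep working when pods are replaced:
kubectl expose deployment frontend --port=80 --target-port=3000 --type=NodePort --name=frontend
kubectl expose deployment backend --port=80 --target-port=5000 --type=NodePort --name=backend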

Cannot access Kubernetes service

We are not able to access nginx from outside the cluster. Kindly help us understand whether the configuration below is right and which port will be serving nginx. Running curl on NodeIP:NodePort returns our company proxy's access-denied page. We have a VM on OpenStack and the security group is open.
[root@ip-10-0-0-3 pods]# kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-demo 2 2 2 2 4m
[root@ip-10-0-0-3 pods]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-demo-1947000120-6omcz 1/1 Running 0 5m
nginx-demo-1947000120-exewa 1/1 Running 0 5m
Below are the Kubernetes Deployment and Service files.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  minReadySeconds: 20
  template:
    metadata:
      labels:
        app: nginx-demo
        version: v0.1
    spec:
      containers:
      - name: nginx-demo
        image: nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
        env:
        - name: DEMO_ENV
          value: staging
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo-svc
spec:
  ports:
  - port: 80
    protocol: TCP
    name: www
    nodePort: 30089
  selector:
    app: nginx-demo
  type: NodePort
[root@ip-10-0-0-3 pods]# kubectl describe svc
Name: nginx-demo-svc
Namespace: default
Labels: app=nginx-demo
Selector: app=nginx-demo
Type: NodePort
IP: 192.168.1.20
Port: www 80/TCP
NodePort: www 30089/TCP
Endpoints: 172.17.50.2:80,172.17.67.2:80
Session Affinity: None
No events.
[root@ip-10-0-0-3 pods]# kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 2d
nginx-demo-svc 192.168.1.20 <nodes> 80/TCP 9m
The selector section of your service must contain all the labels:
selector:
  app: nginx-demo
  version: v0.1
Is that better?
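Since the question mentions a company proxy page, it is also worth ruling out the HTTP proxy on the client side (a sketch; <NodeIP> is a placeholder for the node's address):
curl --noproxy '*' http://<NodeIP>:30089/
kubectl get endpoints nginx-demo-svc
The first command bypasses any configured proxy; the second confirms the Service still has pod endpoints behind it.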
