I have deployed and exposed Nginx with the following commands:
sudo kubectl create deployment mynginx1 --image=nginx
sudo kubectl expose deployment mynginx1 --type NodePort --port 8080
I access it using http://<master node IP>:<NodePort>, for example http://172.17.135.42:31788.
But I am getting Error 404. Help appreciated.
gtan@master:~$ kubectl get pods -owide -A
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default mynginx1-f544c49cb-g92w2 1/1 Running 0 3m19s 172.168.10.2 slave1 <none> <none>
kube-system coredns-66bff467f8-92r4n 1/1 Running 0 7m56s 172.168.10.2 master <none> <none>
kube-system coredns-66bff467f8-gc7tc 1/1 Running 0 7m56s 172.168.10.3 master <none> <none>
kube-system etcd-master 1/1 Running 0 8m6s 172.17.82.100 master <none> <none>
kube-system kube-apiserver-master 1/1 Running 0 8m6s 172.17.82.100 master <none> <none>
kube-system kube-controller-manager-master 1/1 Running 0 8m6s 172.17.82.100 master <none> <none>
kube-system kube-flannel-ds-amd64-24pwc 1/1 Running 3 4m58s 172.17.82.110 slave1 <none> <none>
kube-system kube-flannel-ds-amd64-q5qwg 1/1 Running 0 5m28s 172.17.82.100 master <none> <none>
kube-system kube-proxy-hf59b 1/1 Running 0 4m58s 172.17.82.110 slave1 <none> <none>
kube-system kube-proxy-r7pz6 1/1 Running 0 7m56s 172.17.82.100 master <none> <none>
kube-system kube-scheduler-master 1/1 Running 0 8m5s 172.17.82.100 master <none> <none>
gtan@master:~$
gtan@master:~$ curl -IL http://172.17.82.100:30131
curl: (7) Failed to connect to 172.17.82.100 port 30131: Connection refused
where "172.17.82.100" is the master node IP address.
gtan@master:~$ kubectl get services -o wide -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 15m <none>
default mynginx1 NodePort 10.102.106.240 <none> 80:30131/TCP 10m app=mynginx1
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 15m k8s-app=kube-dns
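A quick sanity check (a sketch using the pod IP from the table above) is to confirm that nginx itself answers on port 80 and that the service has endpoints:
kubectl get endpoints mynginx1
curl -I http://172.168.10.2:80   # pod IP from the table above; run from a cluster node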
What is the architecture of your setup? Do you have the worker node and master node on the same machine?
Check the nginx pod status with:
kubectl get pods
If the pod is running without issues, then hit your worker machine IP with the NodePort: http://<worker_node_IP>:<NodePort>. (A NodePort is opened on every node, but the pod here is scheduled on slave1.)
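For example, a quick way to find the right IP and port (a sketch based on the outputs shown in the question):
kubectl get nodes -o wide    # INTERNAL-IP column; slave1 is 172.17.82.110 here
kubectl get svc mynginx1     # PORT(S) shows <port>:<NodePort>, e.g. 80:30131
curl -I http://172.17.82.110:30131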
The default nginx container port is 80, as you can see here. Just change the port from 8080 to 80 in your second command:
sudo kubectl expose deployment mynginx1 --type NodePort --port 80
and try to reach the service using the NodePort shown in the output of the command, for example:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mynginx1 NodePort 10.97.142.170 <none> 80:31591/TCP 8m9s
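Note that kubectl expose will fail if a service named mynginx1 already exists, so delete the old one first (a minimal sketch):
sudo kubectl delete service mynginx1
sudo kubectl expose deployment mynginx1 --type NodePort --port 80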
Alternatively, you can use this YAML spec to configure your pod and service:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
Testing using curl:
$ curl -IL http://localhost:31591
HTTP/1.1 200 OK
Server: nginx/1.17.10
Date: Tue, 12 May 2020 10:05:04 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
Connection: keep-alive
ETag: "5e95c66e-264"
Accept-Ranges: bytes
Also, I recommend you take some time to look at these documentation pages:
Kubernetes Concepts
Services
OK, let me explain my problem...
I have deployed a Kind Kubernetes cluster. This is my config:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: kind
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
  # Mongo
  - containerPort: 30005
    hostPort: 27017
    protocol: TCP
- role: worker
  extraMounts:
  - hostPath: C:/Kind
    containerPath: /data
- role: worker
  extraMounts:
  - hostPath: C:/Kind
    containerPath: /data
- role: worker
  extraMounts:
  - hostPath: C:/Kind
    containerPath: /data
The next step is to deploy MetalLB (the load balancer). I have used these YAMLs:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
To configure layer 2 mode, I set an IP range inside the kind network. To find it:
docker network inspect -f '{{.IPAM.Config}}' kind
This command shows:
[{172.18.0.0/16 172.18.0.1 map[]} {fc00:f853:ccd:e793::/64 fc00:f853:ccd:e793::1 map[]}]
So I set the following ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.18.255.200-172.18.255.250
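To load it (a sketch; the filename is my own choice):
kubectl apply -f metallb-config.yaml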
OK, the last step is to install the Nginx ingress controller, which I did with the following command:
helm install nginx-ingress-controller bitnami/nginx-ingress-controller
Everything deployed OK, and I can see it all with:
kubectl get all
This command shows:
NAME READY STATUS RESTARTS AGE
pod/ddclient-deployment-fcbf95d66-ndldk 1/1 Running 0 51m
pod/nginx-ingress-controller-6b9cf4684f-7hsw2 1/1 Running 0 64s
pod/nginx-ingress-controller-default-backend-6798d86668-7b552 1/1 Running 0 64s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h
service/nginx-ingress-controller LoadBalancer 10.96.49.179 172.18.255.200 80:30307/TCP,443:31387/TCP 64s
service/nginx-ingress-controller-default-backend ClusterIP 10.96.247.49 <none> 80/TCP 64s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ddclient-deployment 1/1 1 1 51m
deployment.apps/nginx-ingress-controller 1/1 1 1 64s
deployment.apps/nginx-ingress-controller-default-backend 1/1 1 1 64s
NAME DESIRED CURRENT READY AGE
replicaset.apps/ddclient-deployment-fcbf95d66 1 1 1 51m
replicaset.apps/nginx-ingress-controller-6b9cf4684f 1 1 1 64s
replicaset.apps/nginx-ingress-controller-default-backend-6798d86668 1 1 1 64s
Well, here is the problem. In theory, if you put the load balancer's external IP:
service/nginx-ingress-controller LoadBalancer 10.96.49.179 172.18.255.200 80:30307/TCP,443:31387/TCP 64s
in the browser, you should see the nginx web page. I can't; I just see an error message saying
"ERR_CONNECTION_TIMED_OUT".
I don't know what I am missing...
Thanks for the help!
I want to showcase Kubernetes load-balancing capabilities. On my local system I have one node in the cluster. I want to deploy the nginx container in 3 pods and replace the default index.html with a modified index.html (each having some variance). I am creating a service and assigning a port to forward all requests to port 80 of the containers. I want to access my pods as http://localhost:3030; depending on which pod the request hits, the index.html will display different content. However, with the deployment and service code below I could not hit any pod. If I port-forward to an individual pod, I can reach it, though.
I followed the approach explained here, but no luck. Any idea what I am missing?
Here is what I see with get all:
$ k get all
NAME READY STATUS RESTARTS AGE
pod/app-server-6ccf5d55db-2qt2r 1/1 Running 0 3d20h
pod/app-server-6ccf5d55db-96lkb 1/1 Running 0 3d20h
pod/app-server-6ccf5d55db-ljsc4 1/1 Running 0 3d20h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/app-server 3/3 3 3 3d20h
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 3030
  selector:
    app: app-server
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-server
  labels:
    app: app-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app-server
  template:
    metadata:
      labels:
        app: app-server
    spec:
      containers:
      - name: web-server
        image: nginx:latest
        ports:
        - containerPort: 80
OK, I made two mistakes.
Both the service and the deployment were in a single file.
I messed up the port and servicePort values.
Here are the changes I made, which worked.
Service.yml
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  type: NodePort
  ports:
  - name: httpport
    protocol: TCP
    port: 32766
    nodePort: 32766
    targetPort: 80
  selector:
    app: app-server
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-server
  labels:
    app: app-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app-server
  template:
    metadata:
      labels:
        app: app-server
    spec:
      containers:
      - name: web-server
        image: nginx:latest
        ports:
        - containerPort: 80
I deployed the server first and then the service. Then I was able to reach the nginx server at http://localhost:32766.
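For example (a quick check from the host):
curl -I http://localhost:32766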
Here is the output of my k get all
$ k get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/app-server-6ccf5d55db-9xjwh 1/1 Running 0 60s 10.1.0.201 docker-desktop <none> <none>
pod/app-server-6ccf5d55db-mdtrx 1/1 Running 0 60s 10.1.0.200 docker-desktop <none> <none>
pod/app-server-6ccf5d55db-smmcg 1/1 Running 0 60s 10.1.0.199 docker-desktop <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/app-service NodePort 10.110.72.85 <none> 32766:32766/TCP 54s app=app-server
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20d <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/app-server 3/3 3 3 60s web-server nginx:latest app=app-server
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/app-server-6ccf5d55db 3 3 3 60s web-server nginx:latest app=app-server,pod-template-hash=6ccf5d55db
I can't access the network IP assigned by the MetalLB load balancer.
I created a Kubernetes cluster with k3s: 1 master and 1 worker, each with its own private IP.
Master 192.168.0.13
Worker 192.168.0.14
I installed k3s with INSTALL_K3S_EXEC=" --no-deploy servicelb --no-deploy traefik"
Now I am trying to deploy an app using MetalLB and nginx ingress. MetalLB is configured via Helm with:
--set configInline.address-pools[0].name=default \
--set configInline.address-pools[0].protocol=layer2 \
--set configInline.address-pools[0].addresses[0]=192.168.0.21-192.168.0.30
helm install nginx-ingress stable/nginx-ingress --namespace kube-system \
--set controller.image.repository=quay.io/kubernetes-ingress-controller/nginx-ingress-controller \
--set controller.image.tag=0.30.0 \
--set controller.image.runAsUser=33 \
--set defaultBackend.enabled=false
I can see every pod up and running:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-d798c9dd-lsdnp 1/1 Running 5 37h 10.42.0.25 c271-k3s-ocrh <none> <none>
local-path-provisioner-58fb86bdfd-bcpl7 1/1 Running 5 37h 10.42.0.22 c271-k3s-ocrh <none> <none>
metrics-server-6d684c7b5-v9tmh 1/1 Running 5 37h 10.42.0.24 c271-k3s-ocrh <none> <none>
metallb-speaker-4kbmw 1/1 Running 0 4m7s 192.168.0.14 c271-k3s-agent <none> <none>
metallb-controller-75bf779d4f-nb47l 1/1 Running 0 4m7s 10.42.1.45 c271-k3s-agent <none> <none>
metallb-speaker-776p9 1/1 Running 0 4m7s 192.168.0.13 c271-k3s-ocrh <none> <none>
nginx-ingress-default-backend-5b967cf596-554bq 1/1 Running 0 98s 10.42.1.46 c271-k3s-agent <none> <none>
nginx-ingress-controller-674675d5b6-blndp 1/1 Running 0 98s 10.42.1.47 c271-k3s-agent <none> <none>
The app gets the IP 192.168.0.21:
❯ kubectl get services -n kube-system -l app=nginx-ingress -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
nginx-ingress-default-backend ClusterIP 10.43.170.195 <none> 80/TCP 112s app=nginx-ingress,component=default-backend,release=nginx-ingress
nginx-ingress-controller LoadBalancer 10.43.220.166 192.168.0.21 80:31735/TCP,443:31566/TCP 111s app=nginx-ingress,component=controller,release=nginx-ingress
I can access the app from the master and the worker by curling the nginx controller pod:
HTTP/1.1 200 OK
Server: nginx/1.17.8
Date: Sat, 21 Mar 2020 10:43:34 GMT
Content-Type: text/html
Content-Length: 153
Connection: keep-alive
But the IP 192.168.0.21 is not accessible from my local network.
Diagnosis: DHCP is on, and 192.168.0.21-192.168.0.30 is absolutely free. When I try to allocate 192.168.0.21 to the master or the agent via netplan config, they do get the IP.
Please guide me; what am I missing?
You need to make sure that the source IP address (the external IP assigned by MetalLB) is preserved. To achieve this, set the value of the externalTrafficPolicy field of the ingress-controller Service spec to Local. For example:
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    helm.sh/chart: webapp-0.1.0
    app.kubernetes.io/name: webapp
    app.kubernetes.io/instance: my-app
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: LoadBalancer # externalTrafficPolicy may only be set on NodePort/LoadBalancer services
  ports:
  - port: 80
    targetPort: http
    protocol: TCP
    name: http
  selector:
    app.kubernetes.io/name: webapp
    app.kubernetes.io/instance: my-app
  externalTrafficPolicy: Local
The default value of the externalTrafficPolicy field is Cluster, so change it to Local.
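You can also patch the existing service in place (a sketch, assuming the service name and namespace from the question):
kubectl --namespace kube-system patch svc nginx-ingress-controller \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'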
In my setup with Cilium and the HAProxy ingress controller, I had to change externalTrafficPolicy from Local to Cluster:
kubectl --namespace ingress-controller patch svc haproxy-ingress \
-p '{"spec":{"externalTrafficPolicy":"Cluster"}}'
For two years I have been using MetalLB in my home lab, and I never got this error (although I did get others, for example MetalLB failing to assign an IP address from the pool).
I want to share my current setup with folks who are still struggling:
helm install --create-namespace metallb metallb/metallb -n metallb-system -f values.yaml
values.yaml:
configInline:
  address-pools:
  - name: default
    protocol: layer2
    addresses:
    - 192.168.0.21/30
    # a full range like 192.168.0.21-192.168.0.24 works too
Debugging: try to get logs from all the pods in the metallb-system namespace.
kail -n metallb-system
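If you don't have kail, plain kubectl works too (a sketch; the label may differ between chart versions):
kubectl logs -n metallb-system -l app.kubernetes.io/name=metallb --all-containers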
K8S installed with calico using https://github.com/geerlingguy/ansible-role-kubernetes
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1", GitCommit:"3ddd0f45aa91e2f30c70734b175631bec5b5825a", GitTreeState:"clean", BuildDate:"2022-05-24T12:17:11Z", GoVersion:"go1.18.2", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:23:26Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
Maybe switching externalTrafficPolicy to Local/Cluster may help; however, I didn't try it. My setup works out of the box.
Good luck.
I'm trying to expose the Kubernetes dashboard publicly via an ingress on a single-master bare-metal cluster. The issue is that the LoadBalancer (nginx ingress controller) service I'm using is not opening ports 80/443, which I would expect it to open/use. Instead it takes some random ports from the 30000-32767 range. I know I can set this range with --service-node-port-range, but I'm quite certain I didn't have to do this a year ago on another server. Am I missing something here?
Currently this is my stack/setup (clean install of Ubuntu 16.04):
Nginx Ingress Controller (installed via helm)
MetalLB
Kubernetes Dashboard
Kubernetes Dashboard Ingress to deploy it publicly on <domain>
Cert-Manager (installed via helm)
k8s-dashboard-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    # add an annotation indicating the issuer to use.
    cert-manager.io/cluster-issuer: letsencrypt-staging
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: kubernetes-dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: <domain>
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
        path: /
  tls:
  - hosts:
    - <domain>
    secretName: kubernetes-dashboard-staging-cert
This is what my kubectl get svc -A looks like:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cert-manager cert-manager ClusterIP 10.101.142.87 <none> 9402/TCP 23h
cert-manager cert-manager-webhook ClusterIP 10.104.104.232 <none> 443/TCP 23h
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d6h
ingress-nginx nginx-ingress-controller LoadBalancer 10.100.64.210 10.65.106.240 80:31122/TCP,443:32697/TCP 16m
ingress-nginx nginx-ingress-default-backend ClusterIP 10.111.73.136 <none> 80/TCP 16m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 6d6h
kubernetes-dashboard cm-acme-http-solver-kw8zn NodePort 10.107.15.18 <none> 8089:30074/TCP 140m
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.96.228.215 <none> 8000/TCP 5d18h
kubernetes-dashboard kubernetes-dashboard ClusterIP 10.99.250.49 <none> 443/TCP 4d6h
Here are some more examples of what's happening:
1. curl -D- http://<public_ip>:31122 -H 'Host: <domain>'
returns 308, as the protocol is http not https. This is expected.
2. curl -D- http://<public_ip> -H 'Host: <domain>'
curl: (7) Failed to connect to <public_ip> port 80: Connection refused
port 80 is closed
3. curl -D- --insecure https://10.65.106.240 -H "Host: <domain>"
reaching the dashboard through an internal IP obviously works and I get the correct k8s-dashboard html.
--insecure is due to Let's Encrypt not working yet, as the ACME challenge on port 80 is unreachable.
So to recap, how do I get 2. working? E.g. reaching the service through 80/443?
EDIT: Nginx Ingress Controller .yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-02-12T20:20:45Z"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.30.1
    component: controller
    heritage: Helm
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: ingress-nginx
  resourceVersion: "1785264"
  selfLink: /api/v1/namespaces/ingress-nginx/services/nginx-ingress-controller
  uid: b3ce0ff2-ad3e-46f7-bb02-4dc45c1e3a62
spec:
  clusterIP: 10.100.64.210
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 31122
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 32697
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 10.65.106.240
EDIT 2: metallb configmap yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.65.106.240-10.65.106.250
So, to solve the 2nd question: as I suggested, you can use the hostNetwork: true parameter to map the container port to the host it is running on. Note that this is not a recommended practice, and you should always avoid doing this unless you have a reason.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  hostNetwork: true
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 80 # this parameter is optional, but recommended when using host network
      name: nginx
When I deploy this yaml, I can check where the pod is running and curl that host's port 80.
root@v1-16-master:~# kubectl get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 105s 10.132.0.50 v1-16-worker-2 <none> <none>
Note: now I know the pod is running on worker node 2. I just need its IP address.
root@v1-16-master:~# kubectl get no -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
v1-16-master Ready master 52d v1.16.4 10.132.0.48 xxxx Ubuntu 16.04.6 LTS 4.15.0-1052-gcp docker://19.3.5
v1-16-worker-1 Ready <none> 52d v1.16.4 10.132.0.49 xxxx Ubuntu 16.04.6 LTS 4.15.0-1052-gcp docker://19.3.5
v1-16-worker-2 Ready <none> 52d v1.16.4 10.132.0.50 xxxx Ubuntu 16.04.6 LTS 4.15.0-1052-gcp docker://19.3.5
v1-16-worker-3 Ready <none> 20d v1.16.4 10.132.0.51 xxxx Ubuntu 16.04.6 LTS 4.15.0-1052-gcp docker://19.3.5
root@v1-16-master:~# curl 10.132.0.50 2>/dev/null | grep title
<title>Welcome to nginx!</title>
root@v1-16-master:~# kubectl delete po nginx
pod "nginx" deleted
root@v1-16-master:~# curl 10.132.0.50
curl: (7) Failed to connect to 10.132.0.50 port 80: Connection refused
And of course it also works if I go to the public IP on my browser.
Update:
I didn't see the edit part of the question when I was writing this answer. It doesn't make sense given the additional info provided; please disregard.
Original:
Apparently the cluster you are using now has its ingress controller set up behind a NodePort-type service instead of a LoadBalancer. In order to get the desired behavior, you need to change the configuration of the ingress controller. Refer to the nginx ingress controller documentation for MetalLB cases on how to do this.
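For reference, if an ingress controller really were exposed through a NodePort service, switching it to LoadBalancer could look like this (a sketch; the name and namespace are taken from the EDIT above):
kubectl -n ingress-nginx patch svc nginx-ingress-controller \
  -p '{"spec":{"type":"LoadBalancer"}}'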
I'm trying to access my deployment but can't reach the NodePort.
curl 10.99.12.214:30991
curl: (7) Failed connect to 10.99.12.214:30991; No route to host
kubectl get ep
NAME ENDPOINTS AGE
dark-room-dep 172.17.0.10:8085,172.17.0.9:8085 19h
kubernetes 10.66.222.223:6443 8d
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dark-room-dep NodePort 10.99.12.214 <none> 8085:30991/TCP 19h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
kubectl cluster-info
Kubernetes master is running at https://10.66.222.223:6443
Heapster is running at https://10.66.222.223:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://10.66.222.223:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
monitoring-grafana is running at https://10.66.222.223:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at https://10.66.222.223:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy
kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
dark-room-dep 2 2 2 2 20h
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default dark-room-dep-577bf64bb8-9n5p7 1/1 Running 0 20h
default dark-room-dep-577bf64bb8-jmppg 1/1 Running 0 20h
kube-system etcd-localhost.localdomain 1/1 Running 6 8d
kube-system heapster-69b5d4974d-qvtrj 1/1 Running 0 1d
kube-system kube-apiserver-localhost.localdomain 1/1 Running 5 8d
kube-system kube-controller-manager-localhost.localdomain 1/1 Running 4 8d
kube-system kube-dns-86f4d74b45-njzj9 3/3 Running 0 1d
kube-system kube-flannel-ds-h9c2m 1/1 Running 3 6d
kube-system kube-flannel-ds-tcbd7 1/1 Running 5 8d
kube-system kube-proxy-7v6mf 1/1 Running 3 6d
kube-system kube-proxy-hwbwl 1/1 Running 4 8d
kube-system kube-scheduler-localhost.localdomain 1/1 Running 6 8d
kube-system kubernetes-dashboard-7d5dcdb6d9-q42q5 1/1 Running 0 1d
kube-system monitoring-grafana-69df66f668-zf2kc 1/1 Running 0 1d
kube-system monitoring-influxdb-78d4c6f5b6-nhdbx 1/1 Running 0 1d
route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.66.222.1 0.0.0.0 UG 100 0 0 ens192
10.66.222.0 0.0.0.0 255.255.254.0 U 100 0 0 ens192
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.25.1.0 172.25.1.0 255.255.255.0 UG 0 0 0 flannel.1
kubectl get nodes --all-namespaces
NAME STATUS ROLES AGE VERSION
k8s-01 Ready <none> 6d v1.10.2
localhost.localdomain Ready master 8d v1.10.2
from k8s-master:
curl 10.66.222.223:30991
curl: (7) Failed connect to 10.66.222.223:30991; No route to host
From a random PC:
PS C:\Users\XXX> curl 10.66.222.223:30991
curl : cannot connect to the remote host
At line:1 char:1
+ curl 10.66.222.223:30991
kubectl describe svc dark-room
Name: dark-room-dep
Namespace: default
Labels: app=dark-room
Annotations: <none>
Selector: app=dark-room
Type: NodePort
IP: 10.99.12.214
Port: <unset> 8085/TCP
TargetPort: 8085/TCP
NodePort: <unset> 30991/TCP
Endpoints: 172.17.0.10:8085,172.17.0.9:8085
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
cat dark-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: dark-room
  namespace: default
  labels:
    run: dark-room
    app: dark-room-svc
spec:
  externalIPs:
  - 10.66.222.223
  type: ClusterIP
  ports:
  - name: http
    port: 8085
    nodePort: 8086
    targetPort: http
    protocol: TCP
  selector:
    run: dark-room
    app: dark-room
NodePort will bind the external port to the node IP.
Try
curl <node external IP>:<external port>
curl 10.66.222.223:30991
or
curl <service internal IP>:<internal port>
curl 10.99.12.214:8085
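To find the node IPs quickly (a sketch):
kubectl get nodes -o wide    # the INTERNAL-IP / EXTERNAL-IP columns list each node's addresses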
The nodePort range is 30000-32767. Try replacing targetPort: http with targetPort: 80 – gavinlin
Thank you.
It works when I force port 80:
kubectl expose deployment dark-room-dep --type=NodePort --port=80 --name=dark-svc
But I don't understand why it doesn't work on any other port I try (I have no firewall, and setenforce is 0).