I tried an example that runs a specific image on my Kubernetes cluster. I ran this command:
kubectl run my-nginx --image=nginx --replicas=2 --port=80
then:
kubectl expose rc my-nginx --port=80 --type=LoadBalancer
and when I run:
kubectl get service
I get:
NAME LABELS SELECTOR IP(S) PORT(S)
kubernetes component=apiserver,provider=kubernetes <none> 10.0.0.1 443/TCP
my-nginx run=my-nginx run=my-nginx 10.0.100.19 80/TCP
Now I have a running cluster that I created with Kubernetes, and I want to put something in my browser and see the nginx landing page...
I tried my master machine's IP with port 80 at the end and it didn't work. What should I do?
thanks!!
Output of kubectl describe svc:
Name: my-nginx
Namespace: default
Labels: run=my-nginx
Selector: run=my-nginx
Type: LoadBalancer
IP: x.x.xxx.xx
LoadBalancer Ingress: dasfasdgfgaasok23o4j34ij4ofa69da-1772099277.us-west-2.elb.amazonaws.com
Port: <unnamed> 80/TCP
NodePort: <unnamed> 31331/TCP
Endpoints: x.x.xxx.x:80,xx.xxx.x.x:80
Session Affinity: None
No events.
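Based on the describe output above, I would also expect one of these to work; the ELB hostname is copied from LoadBalancer Ingress, and <node-ip> is a placeholder for one of my nodes (whose security group would need to allow the NodePort):
curl http://dasfasdgfgaasok23o4j34ij4ofa69da-1772099277.us-west-2.elb.amazonaws.com
curl http://<node-ip>:31331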
I'm following the quickstart guide https://kubernetes.github.io/ingress-nginx/deploy/#aws to install it on an AWS EKS cluster. The cluster runs in a private subnet and will receive traffic via a Cloudflare Argo tunnel.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/aws/deploy.yaml
When I then check the service I can see that it is pending:
kubectl get svc --namespace=ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.100.64.86 <pending> 80:31323/TCP,443:31143/TCP 2d5h
The service generated seems ok, with valid annotations:
kubectl describe svc ingress-nginx-controller --namespace=ingress-nginx
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/version=1.1.0
helm.sh/chart=ingress-nginx-4.0.10
Annotations: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: true
service.beta.kubernetes.io/aws-load-balancer-type: nlb
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.64.86
IPs: 10.100.64.86
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 31323/TCP
Endpoints: 192.168.193.149:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 31143/TCP
Endpoints: 192.168.193.149:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 30785
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 2m23s (x646 over 2d5h) service-controller Ensuring load balancer
Not sure how to troubleshoot or fix this.
What worked for me was to download the installation file (https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/aws/deploy.yaml), add an annotation to the controller Service, and then reapply the installation:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
Not sure why it doesn't work as-is.
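For reference, a sketch of what the edited controller Service in the downloaded deploy.yaml might look like after adding the annotation (only the relevant fields are shown; everything else stays as generated upstream):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # added: request an internal NLB, since the cluster runs in a private subnet
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  # ports, selector etc. unchanged from the upstream manifest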
I would suggest trying to apply this change:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/aws/deploy.yaml
The annotation that you have is for an internal load balancer, which won't give you a publicly exposed IP or load balancer:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
You can follow this guide, which will create the NLB load balancer for you and also covers the ingress tutorial.
Read more at: https://aws.amazon.com/blogs/opensource/network-load-balancer-nginx-ingress-controller-eks/
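Either way, a quick check that the controller Service actually got an address; once AWS finishes provisioning, EXTERNAL-IP should flip from <pending> to the load balancer's DNS name:
kubectl get svc ingress-nginx-controller --namespace=ingress-nginx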
I have a corporate network (10.22..) which hosts a Kubernetes cluster (10.225.0.1). How can I access a VM that is in the same network, but outside the cluster, from within a pod in the cluster?
For example, I have a VM at 10.22.0.1:30000 which I need to access from a Pod in the Kubernetes cluster. I tried to create a Service like this:
apiVersion: v1
kind: Service
metadata:
  name: vm-ip
spec:
  selector:
    app: vm-ip
  ports:
  - name: vm
    protocol: TCP
    port: 30000
    targetPort: 30000
  externalIPs:
  - 10.22.0.1
But when I do "curl http://vm-ip:30000" from a Pod (kubectl exec -it), it returns a "connection refused" error, yet it works with "google.com". What are the ways of accessing external IPs?
You can create an Endpoints object for that.
Let's go through an example:
In this example, I have a http server on my network with IP 10.128.15.209 and I want it to be accessible from my pods inside my Kubernetes Cluster.
The first thing is to create an Endpoints object. This is going to let me create a service pointing to this endpoint, which will redirect the traffic to my external http server.
My Endpoints manifest looks like this:
apiVersion: v1
kind: Endpoints
metadata:
  name: http-server
subsets:
- addresses:
  - ip: 10.128.15.209
  ports:
  - port: 80
$ kubectl apply -f http-server-endpoint.yaml
endpoints/http-server configured
Let's create our service (note that it has no selector, so Kubernetes will use the Endpoints object with the same name):
apiVersion: v1
kind: Service
metadata:
  name: http-server
spec:
  ports:
  - port: 80
    targetPort: 80
$ kubectl apply -f http-server-service.yaml
service/http-server created
Checking that our service exists, and saving its clusterIP for later use:
user@minikube-server:~$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-server ClusterIP 10.96.228.220 <none> 80/TCP 30m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 10d
Now it's time to verify if we can access our service from a pod:
$ kubectl run ubuntu -it --rm=true --restart=Never --image=ubuntu -- bash
This command will create and open a bash session inside an ubuntu pod.
In my case I'll install curl to be able to check if I can access my http server. You may need to install a different client (e.g. mysql) for your use case:
root@ubuntu:/# apt update; apt install -y curl
Checking connectivity with my service using clusterIP:
root@ubuntu:/# curl 10.96.228.220:80
Hello World!
And finally using the service name (DNS):
root@ubuntu:/# curl http-server
Hello World!
So, in your specific case, you have to create something like this (again, the Endpoints and the Service must share the same name, and the Service must have no selector):
apiVersion: v1
kind: Endpoints
metadata:
  name: vm-server
subsets:
- addresses:
  - ip: 10.22.0.1
  ports:
  - port: 30000
---
apiVersion: v1
kind: Service
metadata:
  name: vm-server
spec:
  ports:
  - port: 30000
    targetPort: 30000
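Assuming that manifest is saved as vm-server.yaml (the file name and the test pod below are just for illustration), a quick test from inside the cluster would be:
kubectl apply -f vm-server.yaml
kubectl run test --rm -it --restart=Never --image=busybox -- wget -qO- http://vm-server:30000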
I'm trying to expose the Kubernetes dashboard publicly via an ingress on a single-master bare-metal cluster. The issue is that the LoadBalancer (nginx ingress controller) service I'm using is not opening ports 80/443, which I would expect it to open/use. Instead it takes some random ports from the 30000-32767 range. I know I can set this range with --service-node-port-range, but I'm quite certain I didn't have to do this a year ago on another server. Am I missing something here?
Currently this is my stack/setup (clean install of Ubuntu 16.04):
Nginx Ingress Controller (installed via helm)
MetalLB
Kubernetes Dashboard
Kubernetes Dashboard Ingress to deploy it publicly on <domain>
Cert-Manager (installed via helm)
k8s-dashboard-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    # add an annotation indicating the issuer to use.
    cert-manager.io/cluster-issuer: letsencrypt-staging
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: kubernetes-dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: <domain>
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
        path: /
  tls:
  - hosts:
    - <domain>
    secretName: kubernetes-dashboard-staging-cert
This is what my kubectl get svc -A looks like:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cert-manager cert-manager ClusterIP 10.101.142.87 <none> 9402/TCP 23h
cert-manager cert-manager-webhook ClusterIP 10.104.104.232 <none> 443/TCP 23h
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d6h
ingress-nginx nginx-ingress-controller LoadBalancer 10.100.64.210 10.65.106.240 80:31122/TCP,443:32697/TCP 16m
ingress-nginx nginx-ingress-default-backend ClusterIP 10.111.73.136 <none> 80/TCP 16m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 6d6h
kubernetes-dashboard cm-acme-http-solver-kw8zn NodePort 10.107.15.18 <none> 8089:30074/TCP 140m
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.96.228.215 <none> 8000/TCP 5d18h
kubernetes-dashboard kubernetes-dashboard ClusterIP 10.99.250.49 <none> 443/TCP 4d6h
Here are some more examples of what's happening:
1. curl -D- http://<public_ip>:31122 -H 'Host: <domain>'
   returns 308, as the protocol is http, not https. This is expected.
2. curl -D- http://<public_ip> -H 'Host: <domain>'
   curl: (7) Failed to connect to <public_ip> port 80: Connection refused
   Port 80 is closed.
3. curl -D- --insecure https://10.65.106.240 -H "Host: <domain>"
   Reaching the dashboard through an internal IP obviously works, and I get the correct k8s-dashboard HTML.
   (--insecure is because Let's Encrypt isn't working yet, since the ACME challenge on port 80 is unreachable.)
So to recap: how do I get case 2 working, i.e. reaching the service through ports 80/443?
EDIT: Nginx Ingress Controller .yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-02-12T20:20:45Z"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.30.1
    component: controller
    heritage: Helm
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: ingress-nginx
  resourceVersion: "1785264"
  selfLink: /api/v1/namespaces/ingress-nginx/services/nginx-ingress-controller
  uid: b3ce0ff2-ad3e-46f7-bb02-4dc45c1e3a62
spec:
  clusterIP: 10.100.64.210
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 31122
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 32697
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 10.65.106.240
EDIT 2: metallb configmap yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.65.106.240-10.65.106.250
So, to solve the 2nd question: as I suggested, you can use the hostNetwork: true parameter to map a container port to the host it is running on. Note that this is not a recommended practice, and you should avoid doing this unless you have a good reason.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  hostNetwork: true
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 80 # this parameter is optional, but recommended when using host network
      name: nginx
When I deploy this yaml, I can check where the pod is running and curl that host's port 80.
root@v1-16-master:~# kubectl get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 105s 10.132.0.50 v1-16-worker-2 <none> <none>
Note: now I know the pod is running on worker node 2. I just need its IP address.
root@v1-16-master:~# kubectl get no -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
v1-16-master Ready master 52d v1.16.4 10.132.0.48 xxxx Ubuntu 16.04.6 LTS 4.15.0-1052-gcp docker://19.3.5
v1-16-worker-1 Ready <none> 52d v1.16.4 10.132.0.49 xxxx Ubuntu 16.04.6 LTS 4.15.0-1052-gcp docker://19.3.5
v1-16-worker-2 Ready <none> 52d v1.16.4 10.132.0.50 xxxx Ubuntu 16.04.6 LTS 4.15.0-1052-gcp docker://19.3.5
v1-16-worker-3 Ready <none> 20d v1.16.4 10.132.0.51 xxxx Ubuntu 16.04.6 LTS 4.15.0-1052-gcp docker://19.3.5
root@v1-16-master:~# curl 10.132.0.50 2>/dev/null | grep title
<title>Welcome to nginx!</title>
root@v1-16-master:~# kubectl delete po nginx
pod "nginx" deleted
root@v1-16-master:~# curl 10.132.0.50
curl: (7) Failed to connect to 10.132.0.50 port 80: Connection refused
And of course it also works if I go to the public IP on my browser.
Update: I didn't see the edit part of the question when I was writing this answer. It doesn't make sense given the additional info provided, so please disregard it.
Original:
Apparently the cluster you are using has its ingress controller set up behind a NodePort-type service instead of a LoadBalancer. To get the desired behavior, you need to change the configuration of the ingress controller; refer to the NGINX ingress controller documentation for MetalLB setups on how to do this.
My my-app service exposes multiple ports:
/Mugen$ kubectl get endpoints
NAME ENDPOINTS AGE
my-app 172.17.0.7:80,172.17.0.7:8003,172.17.0.7:8001 + 3 more... 7m
kubernetes 192.168.99.100:8443 10h
mysql-server 172.17.0.5:3306 10h
When executing minikube service my-app -n default --url, I get each port forwarded by minikube, but I can't tell which is which without querying them. Is there a simple way to print the mapping, or to set the port forwarding myself?
/Mugen$ minikube service my-app -n default --url
http://192.168.99.100:30426
http://192.168.99.100:30467
http://192.168.99.100:31922
http://192.168.99.100:32008
http://192.168.99.100:30895
http://192.168.99.100:31602
You can easily check the port and targetPort mapping of a Kubernetes service using:
kubectl describe svc my-app
Name: my-app
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=MyApp
Type: NodePort
IP: 10.152.183.56
Port: http 80/TCP
TargetPort: 9376/TCP
NodePort: http 30696/TCP
Endpoints: <none>
Port: https 443/TCP
TargetPort: 9377/TCP
NodePort: https 32715/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
This way you can find the port, targetPort and endpoints mapping.
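If you'd rather print the mapping directly instead of reading the describe output, a jsonpath query along these lines should work too (it only reads standard Service fields):
kubectl get svc my-app -o jsonpath='{range .spec.ports[*]}{.name}{": "}{.port}{" -> nodePort "}{.nodePort}{"\n"}{end}'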
I am fairly new to Kubernetes, and I have recently exposed a service in minikube using the NodePort type. I want to test that my application is running, but I don't see any external IP, only the port. Here is the output of my commands:
$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.0.0.1 <none> 443/TCP 1h
kubernetes-bootcamp 10.0.0.253 <nodes> 8080:31180/TCP 20m
$ kubectl describe services/kubernetes-bootcamp
Name: kubernetes-bootcamp
Namespace: default
Labels: run=kubernetes-bootcamp
Annotations: <none>
Selector: run=kubernetes-bootcamp
Type: NodePort
IP: 10.0.0.253
Port: <unset> 8080/TCP
NodePort: <unset> 31180/TCP
Endpoints: 172.17.0.2:8080
Session Affinity: None
Events: <none>
What is the external IP in this case, so that I can use curl to get the output of my exposed app? I followed this tutorial while working on my laptop: https://kubernetes.io/docs/tutorials/kubernetes-basics/expose-interactive/.
P.S.: What does <nodes> mean under EXTERNAL-IP in the output of the get service command?
As you are using minikube, the command minikube ip will return the IP you are looking for.
In case you are not using minikube, kubectl get nodes -o yaml will show you, amongst other data, the IP address of the node.
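Putting it together with the NodePort from your output (31180 for kubernetes-bootcamp), the check would be:
curl http://$(minikube ip):31180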