kubernetes: a service is not accessible outside host - networking

I am following the guide at http://kubernetes.io/docs/getting-started-guides/ubuntu/ to create a Kubernetes cluster. Once the cluster is up, I can create pods and services using kubectl. Basically, I do the following:
kubectl run nginx --image=nginx --port=80
kubectl expose deployment/nginx
I see a pod and service running
# kubectl get services
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   192.168.3.1     <none>        443/TCP   2d
nginx        192.168.3.208   <none>        80/TCP    2d
When I try to access the service from the machine where the pod is running, I get back the nginx hello-world page. But if I try it from another machine in the Kubernetes cluster, I get a timeout.
I thought all services are accessible from anywhere in the cluster. Why isn't it working that way?
Thanks

Yes, services should be accessible anywhere in the cluster. Is your "another machine" listed in the output of kubectl get nodes? Is the node Ready? Maybe the machine wasn't configured correctly.

If you want to reach the service from anywhere in the cluster, you must use a network plug-in such as Flannel or Open vSwitch.
http://kubernetes.io/docs/admin/networking/#flannel
https://github.com/coreos/flannel#flannel

Found out my error by comparing it with another installation where it worked. This installation was missing an iptables rule that forced everything going to the containers onto the flannel interface, so the traffic was reaching the target host on eth0, making it discard the packet. I do not know why the proxy didn't add that rule. Once I manually added it, it worked.
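For reference, I no longer have the exact rule, but it was of the kind flannel normally installs when IP masquerading is enabled. A sketch (the pod CIDR 10.244.0.0/16 is an assumption; substitute your own):

# Hypothetical example: masquerade pod-network traffic leaving the overlay,
# so it egresses via the flannel interface instead of raw eth0.
iptables -t nat -A POSTROUTING -s 10.244.0.0/16 ! -d 10.244.0.0/16 -j MASQUERADE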

Related

Kubernetes Ingress nginx on Minikube fails

minikube v1.13.0 on Ubuntu 18.04 with Kubernetes v1.19.0 on Docker 19.03.8. Using helm/helmfile ("v3.3.4"). The Ubuntu VM is on VM-Workstation running on Win10, networking set to NAT, everything in my home Wi-Fi network.
I am trying to use stable/nginx-ingress 1.36.0 as the ingress backend. I do have the nginx-ingress-1.36.0.tgz in the ingress/charts folder, and I have enabled the ingress addon (minikube addons enable ingress).
Before I had enabled ingress on minikube, everything would deploy successfully (no errors), but the service/LB stayed pending:
ClusterIP 10.101.41.156 <none> 8080/TCP
ingress-controller-nginx-ingress-controller LoadBalancer 10.98.157.222 <pending> 80:30050/TCP,443:32294/TCP
After I enabled ingress on minikube, I now get this connection refused error:
STDERR:
Error: UPGRADE FAILED: cannot patch "ingress-service" with kind Ingress:
Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.kube-system.svc:443/extensions/v1beta1/ingresses?timeout=30s":
dial tcp 10.105.131.220:443: connect: connection refused
COMBINED OUTPUT: (same as STDERR above)
I don't know what this IP 10.105.131.220 is; it looks like a private IP. It is not my minikube IP, my VM IP, or my laptop IP, and I can't ping it.
Everything still deploys fine, but the LoadBalancer still shows pending.
Update
I had missed one of the steps in the documentation:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
I stopped/deleted minikube and redid everything; now the error is gone, but the LoadBalancer is still <pending>.
By default, local solutions like minikube do not provide a LoadBalancer. Cloud solutions like EKS, Google Cloud, and Azure do it for you automatically by spinning up a separate LB in the background. That's why you see the Pending status.
Solutions:
use MetalLB on minikube
MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type LoadBalancer in clusters that don’t run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers.
Installation:
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml
namespace/metallb-system created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
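Note that after applying the manifests, MetalLB still needs an address pool before it will assign any IPs. A minimal layer2 ConfigMap sketch (the address range is an assumption; pick addresses that are unused on your network):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250  # assumption: free IPs on your LAN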
use minikube tunnel
Services of type LoadBalancer can be exposed via the minikube tunnel
command. It must be run in a separate terminal window to keep the
LoadBalancer running. Ctrl-C in the terminal can be used to terminate
the process at which time the network routes will be cleaned up.
minikube tunnel runs as a process, creating a network route on the host to the service CIDR of the cluster using the cluster’s IP
address as a gateway. The tunnel command exposes the external IP
directly to any program running on the host operating system.
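For illustration, a typical session might look like this (the service name is taken from the question; IPs will differ):

# terminal 1: keep this running
minikube tunnel

# terminal 2: EXTERNAL-IP should no longer be <pending>
kubectl get svc ingress-controller-nginx-ingress-controller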

How to fix Kubernetes Ingress Controller cutting off nodes from cluster

I'm having some trouble installing an Ingress Controller in my on-prem cluster (created with Kubespray, running MetalLB to create LoadBalancers).
I tried using nginx, traefik and kong, but all got the same results.
I'm installing the nginx helm chart using the following values.yaml:
controller:
  kind: DaemonSet
  nodeSelector:
    node-role.kubernetes.io/master: ""
  image:
    tag: 0.23.0
rbac:
  create: true
With command:
helm install --name nginx stable/nginx-ingress --values values.yaml --namespace ingress-nginx
When I deploy the ingress controller in the cluster, a service is created (e.g. nginx-ingress-controller for nginx). This service is of the type LoadBalancer and gets an external IP.
When this external IP is assigned, the node that's linked to this external IP is lost (status Not Ready). However, when I check this node, it's still running; it's just cut off from the other nodes, and it can't even ping them (no route found). When I remove the service (but not the rest of the nginx helm chart), everything works and the Ingress works. I also tried installing nginx/traefik/kong without a LoadBalancer, using NodePorts or external IPs on the service, but I get the same result.
Does anyone recognize this behaviour?
Why does the ingress still work, even when I remove the nginx-ingress-controller service?
After a long search, we finally found a working solution for this problem.
As mentioned by @A_Suh, the pool of IPs that MetalLB uses should contain IPs that are currently not used by any of the nodes in the cluster. By adding a new IP range that's also configured in the DHCP server, MetalLB can use ARP to link one of the IPs to one of the nodes.
For example, in my 5-node cluster (kube11-15): when MetalLB gets the range 10.4.5.200/31 and allocates 10.4.5.200 for my nginx-ingress-controller, 10.4.5.200 is linked to kube12. On ARP requests for 10.4.5.200, all 5 nodes respond with kube12 and traffic will be routed to this node.

how to access local kubernetes minikube dashboard remotely

Kubernetes newbie (or rather basic networking) question:
Installed single-node minikube (0.23 release) on an Ubuntu box running in my LAN (on IP address 192.168.0.20) with VirtualBox.
The minikube start command completes successfully:
minikube start
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
minikube dashboard also comes up successfully (running on 192.168.99.100:30000).
What I want to do is access the minikube dashboard from my MacBook (on 192.168.0.11) in the same LAN.
I also want to access the same minikube dashboard from the internet.
For LAN Access:
Now, from what I understand, since I am using VirtualBox (the default VM option), I can change the networking type (to NAT with port forwarding) using the VBoxManage command
VBoxManage modifyvm "VM name" --natpf1 "guestssh,tcp,,2222,,22"
as listed here
In my case it will be something like this
VBoxManage modifyvm "VM name" --natpf1 "guesthttp,http,,30000,,8080"
Am i thinking along the right lines here?
Also, for remotely accessing the same minikube dashboard, I can set up a no-ip.com-like service. They ask to install their utility on the Linux box and also to set up port forwarding in the router settings, which will forward from a host port to a guest port. Is that about right? Am I missing something here?
I was able to get running with something as simple as:
kubectl proxy --address='0.0.0.0' --disable-filter=true
@Jeff provided the perfect answer; here are more hints for newbies.
Start a proxy using @Jeff's command; by default it will open a proxy on 0.0.0.0:8001.
kubectl proxy --address='0.0.0.0' --disable-filter=true
Visit the dashboard via the link below:
curl http://your_api_server_ip:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/
For more details, please refer to the official docs.
I reached this url with search keywords: minikube dashboard remote.
In my case, minikube (and its dashboard) were running remotely and I wanted to access it securely from my laptop.
[my laptop] --ssh--> [remote server with minikube]
Following gmiretti's answer, my solution was a local-forwarding SSH tunnel:
On the minikube remote server, I ran these:
minikube dashboard
kubectl proxy
And on my laptop, I ran this (keep localhost as is):
ssh -L 12345:localhost:8001 myLogin@myRemoteServer
The dashboard was then available at this url on my laptop:
http://localhost:12345/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
The ssh way
Assuming that you have ssh on your Ubuntu box.
First run kubectl proxy & to expose the dashboard on http://localhost:8001
Then expose the dashboard using ssh's port forwarding, executing:
ssh -R 30000:127.0.0.1:8001 $USER@192.168.0.20
Now you should be able to access the dashboard from your MacBook in your LAN by pointing the browser to http://192.168.0.20:30000
To expose it from outside, just expose the port 30000 using no-ip.com, maybe change it to some standard port, like 80.
Note that this isn't the simplest solution, but in some places it will work without superuser rights ;) You can automate the login after restarts of the Ubuntu box using an init script and setting up a public key for the connection.
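As a sketch of that automation (assuming key-based auth to 192.168.0.20 is already set up; user and ports are placeholders):

# e.g. launched from /etc/rc.local or a cron @reboot entry
while true; do
  # -N: no remote command; exit (and retry) if the forwarding dies
  ssh -N -o ExitOnForwardFailure=yes -R 30000:127.0.0.1:8001 $USER@192.168.0.20
  sleep 10
done &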
I had the same problem recently and solved it as follows:
Get your minikube VM onto the LAN by adding another network adapter in bridge network mode. For me, this was done through modifying the minikube VM in the VirtualBox UI and required VM stop/start. Not sure how this would work if you're using hyperkit. Don't muck with the default network adapters configured by minikube: minikube depends on these. https://github.com/kubernetes/minikube/issues/1471
If you haven't already, install kubectl on your mac: https://kubernetes.io/docs/tasks/tools/install-kubectl/
Add a cluster and associated config to the ~/.kube/config as below, modifying the server IP address to match your newly exposed VM IP. Names can also be modified if desired. Note that the insecure-skip-tls-verify: true is needed because the https certificate generated by minikube is only valid for the internal IP addresses of the VM.
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://192.168.0.101:8443
  name: mykubevm
contexts:
- context:
    cluster: mykubevm
    user: kubeuser
  name: mykubevm
users:
- name: kubeuser
  user:
    client-certificate: /Users/myname/.minikube/client.crt
    client-key: /Users/myname/.minikube/client.key
Copy the ~/.minikube/client.* files referenced in the config from your linux minikube host. These are the security key files required for access.
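For example, something along these lines (username and host are placeholders):

# copy the minikube client certs from the Linux host to your mac
scp myname@192.168.0.20:~/.minikube/client.crt ~/.minikube/
scp myname@192.168.0.20:~/.minikube/client.key ~/.minikube/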
Set your kubectl context: kubectl config use-context mykubevm. At this point, your minikube cluster should be accessible (try kubectl cluster-info).
Run kubectl proxy to create a local proxy for access to the dashboard (on http://localhost:8001 by default). Navigate to that address in your browser.
It's also possible to ssh to the minikube VM. Copy the ssh key pair from ~/.minikube/machines/minikube/id_rsa* to your .ssh directory (renaming to avoid blowing away other keys, e.g. mykubevm & mykubevm.pub). Then ssh -i ~/.ssh/mykubevm docker@<kubevm-IP>
Thanks for your valuable answers. If you use the kubectl proxy command, you are unable to view the dashboard permanently; using the "Service" object in the YAML file below, you are able to view it remotely until you stop it. Create a new YAML file minikube-dashboard.yaml and write the code manually; I don't recommend copying and pasting it.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard-test
  namespace: kube-system
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9090
    nodePort: 30000
  selector:
    app: kubernetes-dashboard
  type: NodePort
Execute the command,
$ sudo kubectl apply -f minikube-dashboard.yaml
Finally, open the URL:
http://your-public-ip-address:30000/#!/persistentvolume?namespace=default
Slight variation on the approach above.
I have an http web service with NodePort 30003. I make it available on port 80 externally by running:
sudo ssh -v -i ~/.ssh/id_rsa -N -L 0.0.0.0:80:localhost:30003 ${USER}#$(hostname)
Jeff Prouty added a useful answer:
I was able to get running with something as simple as:
kubectl proxy --address='0.0.0.0' --disable-filter=true
But for me it didn't work initially.
I ran this command on the CentOS 7 machine running kubectl (local IP: 192.168.0.20).
When I tried to access the dashboard from another computer (which was in the LAN, obviously):
http://192.168.0.20:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
all I got in my web browser was a timeout.
The solution for my case is that in CentOS 7 (and probably other distros) you need to open port 8001 in your OS firewall.
So in my case I need to run in CentOS 7 terminal:
sudo firewall-cmd --zone=public --add-port=8001/tcp --permanent
sudo firewall-cmd --reload
And after that, it works! :)
Of course, you need to be aware that this is not a safe solution, because anybody has access to your dashboard now. But I think that for local lab testing it will be sufficient.
In other Linux distros, the command for opening ports in the firewall may differ; please search for the specifics.
Wanted to link this answer by iamnat.
https://stackoverflow.com/a/40773822
Use minikube ip to get your minikube IP on the host machine.
Create the NodePort service.
You should be able to access the configured NodePort via <minikube-ip>:<nodeport>.
This should work on the LAN as well as long as firewalls are open, if I'm not mistaken.
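A quick check along those lines (the NodePort 30000 is an assumption; use whatever port your service was allocated):

minikube ip
curl http://$(minikube ip):30000/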
Just for my learning purposes, I solved this issue using nginx proxy_pass. For example, the dashboard had been bound to a port, let's say 43587. So my local URL to that dashboard was:
http://127.0.0.1:43587/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
Then I installed nginx and went to the out-of-the-box config:
sudo nano /etc/nginx/sites-available/default
and edited the location directive to look like this:
location / {
    proxy_set_header Host "localhost";
    proxy_pass http://127.0.0.1:43587;
}
then I did
sudo service nginx restart
then the dashboard was available from outside at:
http://my_server_ip/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/#/cronjob?namespace=default

Kubernetes service external ip pending

I am trying to deploy nginx on Kubernetes; the Kubernetes version is v1.5.2.
I have deployed nginx with 3 replicas; the YAML file is below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 3
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
And now I want to expose its port 80 on port 30062 of the node, for which I created the service below:
kind: Service
apiVersion: v1
metadata:
  name: nginx-ils-service
spec:
  ports:
  - name: http
    port: 80
    nodePort: 30062
  selector:
    app: nginx
  type: LoadBalancer
This service is working as it should, but the external IP is shown as pending, not only on the Kubernetes dashboard but also in the terminal.
It looks like you are using a custom Kubernetes Cluster (using minikube, kubeadm or the like). In this case, there is no LoadBalancer integrated (unlike AWS or Google Cloud). With this default setup, you can only use NodePort or an Ingress Controller.
With an Ingress Controller you can set up a domain name which maps to your pod; you don't need to give your Service the LoadBalancer type if you use an Ingress Controller.
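A minimal Ingress sketch for that setup (the host is an assumption; extensions/v1beta1 matches the cluster version in the question, newer clusters use networking.k8s.io/v1):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-example  # hypothetical name
spec:
  rules:
  - host: nginx.example.com    # assumption: a domain you control
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-ils-service
          servicePort: 80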
If you are using Minikube, there is a magic command!
$ minikube tunnel
Hopefully someone can save a few minutes with this.
Reference link
https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel
If you are not using GCE or EKS (you used kubeadm) you can add an externalIPs spec to your service YAML. You can use the IP associated with your node's primary interface such as eth0. You can then access the service externally, using the external IP of the node.
...
spec:
  type: LoadBalancer
  externalIPs:
  - 192.168.0.10
I created a single-node k8s cluster using kubeadm. When I tried port-forward and kubectl proxy, it showed the external IP as pending.
$ kubectl get svc -n argocd argocd-server
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
argocd-server   LoadBalancer   10.107.37.153   <pending>     80:30047/TCP,443:31307/TCP   110s
In my case I've patched the service like this:
kubectl patch svc <svc-name> -n <namespace> -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.31.71.218"]}}'
After this, it started serving over the public IP
$ kubectl get svc argo-ui -n argo
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
argo-ui   LoadBalancer   10.103.219.8   172.31.71.218   80:30981/TCP   7m50s
To access a service on minikube, you need to run the following command:
minikube service [-n NAMESPACE] [--url] NAME
More information here : Minikube GitHub
When using Minikube, you can get the IP and port through which you
can access the service by running:
minikube service [service name]
E.g.:
minikube service kubia-http
If it is your private k8s cluster, MetalLB would be a better fit. Below are the steps.
Step 1: Install MetalLB in your cluster
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
Step 2: Configure it by using a configmap
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.42.42.100-172.42.42.105 # Update this with your nodes' IP range
Step 3: Create your service to get an external IP (would be a private IP though).
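For instance, a Service of type LoadBalancer like this sketch (name and selector are assumptions) would then be handed one of the pool IPs:

apiVersion: v1
kind: Service
metadata:
  name: my-service   # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: my-app      # assumption: matches your pod labels
  ports:
  - port: 80
    targetPort: 8080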
FYR: (screenshots of the service before and after the MetalLB installation omitted.)
If running on minikube, don't forget to mention the namespace if you are not using default:
minikube service <service_name> --url --namespace=<namespace_name>
Following @Javier's answer, I have decided to go with "patching up the external IP" for my load balancer.
$ kubectl patch service my-loadbalancer-service-name \
-n lb-service-namespace \
-p '{"spec": {"type": "LoadBalancer", "externalIPs":["192.168.39.25"]}}'
This will replace the 'pending' with a new patched-up IP address you can use for your cluster.
For more on this, please see karthik's post on LoadBalancer support with Minikube for Kubernetes.
It's not the cleanest way to do it; I needed a temporary solution. Hope this helps somebody.
If you are using minikube, then run the commands below from the terminal:
$ minikube ip
172.17.0.2
$ curl http://172.17.0.2:31245
or simply
$ curl http://$(minikube ip):31245
In case someone is using MicroK8s: You need a network load balancer.
MicroK8s comes with MetalLB; you can enable it like this:
microk8s enable metallb
<pending> should turn into an actual IP address then.
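The enable command also accepts the address range MetalLB may hand out; for example (the range is an assumption, pick free IPs from your network):

microk8s enable metallb:10.64.140.43-10.64.140.49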
A Service is the general way to expose an application running on a set of Pods as a network service in Kubernetes. There are four types of Service in Kubernetes.
ClusterIP
The Service is only reachable from within the cluster.
NodePort
You'll be able to reach the Service from outside the cluster using NodeIP:NodePort. The default node port range is 30000-32767; this range can be changed by defining --service-node-port-range at cluster creation time.
LoadBalancer
Exposes the Service externally using a cloud provider's load balancer.
ExternalName
Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
Only LoadBalancer provides a value for the External-IP column, and it only works if the Kubernetes cluster is able to assign an IP address for that particular service. You can use the MetalLB load balancer to provision IPs for your LoadBalancer services.
I hope this clears up your doubt.
You can patch in the IP of the node where the pods are hosted (the private IP of the node); this is the easy workaround.
Taking reference from the above posts, the following worked for me:
kubectl patch service my-loadbalancer-service-name \
-n lb-service-namespace \
-p '{"spec": {"type": "LoadBalancer", "externalIPs":["xxx.xxx.xxx.xxx Private IP of Physical Server - Node - where deployment is done "]}}'
Adding a solution for those who encountered this error while running on amazon-eks.
First of all run:
kubectl describe svc <service-name>
And then review the events field in the example output below:
Name: some-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"some-service","namespace":"default"},"spec":{"ports":[{"port":80,...
Selector: app=some
Type: LoadBalancer
IP: 10.100.91.19
Port: <unset> 80/TCP
TargetPort: 5000/TCP
NodePort: <unset> 31022/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 68s service-controller Ensuring load balancer
Warning SyncLoadBalancerFailed 67s service-controller Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB
Review the error message:
Failed to ensure load balancer: could not find any suitable subnets for creating the ELB
In my case, the reasons that no suitable subnets were provided for creating the ELB were:
1: The EKS cluster was deployed on the wrong subnet group - internal subnets instead of public-facing ones.
(By default, services of type LoadBalancer create public-facing load balancers if no service.beta.kubernetes.io/aws-load-balancer-internal: "true" annotation is provided.)
2: The Subnets weren't tagged according to the requirements mentioned here.
Tagging VPC with:
Key: kubernetes.io/cluster/yourEKSClusterName
Value: shared
Tagging public subnets with:
Key: kubernetes.io/role/elb
Value: 1
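If you need to add the tags manually, something like the following should work (the subnet ID and cluster name are placeholders):

# tag the public subnets so EKS can place the ELB
aws ec2 create-tags --resources subnet-0abc123 \
  --tags Key=kubernetes.io/cluster/yourEKSClusterName,Value=shared
aws ec2 create-tags --resources subnet-0abc123 \
  --tags Key=kubernetes.io/role/elb,Value=1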
If you are using bare metal, you need the NodePort type:
https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
LoadBalancer works by default with cloud providers like DigitalOcean, AWS, etc.
k edit service ingress-nginx-controller
type: NodePort
spec:
  externalIPs:
  - xxx.xxx.xxx.xx  # using the public IP
Use NodePort:
$ kubectl run user-login --replicas=2 --labels="run=user-login" --image=kingslayerr/teamproject:version2 --port=5000
$ kubectl expose deployment user-login --type=NodePort --name=user-login-service
$ kubectl describe services user-login-service
(Note down the port)
$ kubectl cluster-info
(Get the IP where the master is running.)
Your service is accessible at (IP):(port)
The LoadBalancer ServiceType will only work if the underlying infrastructure supports the automatic creation of load balancers and has the respective support in Kubernetes, as is the case with the Google Cloud Platform and AWS. If no such feature is configured, the LoadBalancer IP address field is not populated and remains in pending status, and the Service will work the same way as a NodePort-type Service.
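In that case you can still reach the service through the node port it was allocated; a quick check (service name and port are placeholders):

kubectl get svc my-service
# PORT(S) shows e.g. 80:30047/TCP; use the second number with any node's IP
curl http://<node-ip>:30047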
minikube tunnel
The below solution works in my case.
First of all, try this command:
minikube tunnel
If it's not working for you, follow the steps below.
I restarted the minikube container:
minikube stop
then
minikube start
After that, re-run the dashboard:
minikube dashboard
After it finishes, execute:
minikube tunnel
I had the same problem.
Windows 10 Desktop + Docker Desktop 4.7.1 (77678) + Minikube v1.25.2
Following the official docs on my side, I resolved it with:
PS C:\WINDOWS\system32> kubectl expose deployment sito-php --type=LoadBalancer --port=8080 --name=servizio-php
service/servizio-php exposed
PS C:\WINDOWS\system32> minikube tunnel
* Tunnel successfully started
* NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...
* Starting tunnel for service servizio-php.
PS E:\docker\apache-php> kubectl get service
NAME           TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP      10.96.0.1      <none>        443/TCP          33h
servizio-php   LoadBalancer   10.98.218.86   127.0.0.1     8080:30270/TCP   4m39s
Then open the browser on http://127.0.0.1:8080/.
same issue:
os>kubectl get svc right-sabertooth-wordpress
NAME                         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)
right-sabertooth-wordpress   LoadBalancer   10.97.130.7   <pending>     80:30454/TCP,443:30427/TCP
os>minikube service list
|-------------|----------------------------|--------------------------------|
| NAMESPACE | NAME | URL |
|-------------|----------------------------|--------------------------------|
| default | kubernetes | No node port |
| default | right-sabertooth-mariadb | No node port |
| default | right-sabertooth-wordpress | http://192.168.99.100:30454 |
| | | http://192.168.99.100:30427 |
| kube-system | kube-dns | No node port |
| kube-system | tiller-deploy | No node port |
|-------------|----------------------------|--------------------------------|
It is, however, accessible via http://192.168.99.100:30454.
There are three ways of exposing your service:
NodePort
ClusterIP
LoadBalancer
When we use a LoadBalancer, we basically ask our cloud provider to give us a DNS name which can be accessed online.
Note: not a domain name, but a DNS name.
So the LoadBalancer type does not work in our local minikube environment.
For those who are using minikube and trying to access a service of kind NodePort or LoadBalancer:
We don't get an external IP to access the service on the local system, so a good option is to use the minikube IP.
Use the command below to get the service URL once your service is exposed:
minikube service service-name --url
Now use that URL to serve your purpose.
Check the kube-controller logs. I was able to solve this issue by setting the clusterID tags on the EC2 instance I deployed the cluster on.
If you are not on a supported cloud (AWS, Azure, GCloud, etc.), you can't use LoadBalancer without MetalLB (https://metallb.universe.tf/), but it's still in beta.
Deleting the existing service and creating the same service anew solved my problem. The problem was that the load-balancing IP I had defined was already in use, so the external endpoint stayed pending. Changing to a new load-balancing IP still didn't work.
Finally, deleting the existing service and creating a new one solved my problem.
For your use case, the best option is to use a NodePort service instead of the LoadBalancer type, because a load balancer is not available.
I was getting this error on Docker Desktop. I just exited and turned it on again (Docker Desktop). It took a few seconds, then it worked fine.
Deleting all older services and creating new ones resolved my issue; the IP was bound to an older service. Just try kubectl get svc and then delete each old service with kubectl delete svc <svc-name>.
Maybe the subnet in which you are deploying your service does not have enough IPs.
If you are trying to do this in your on-prem cloud, you need an L4LB service to create the LB instances.
Otherwise you end up with the endless "pending" message you described. It is visible in a video here: https://www.youtube.com/watch?v=p6FYtNpsT1M
You can use open source tools to solve this problem, the video provides some guidance on how the automation process should work.

Dev machine as part of Minikube's network?

Is it possible to have my development machine to be part of Minikube's network?
Ideally, it should work both ways:
While developing an application in my IDE, I can access k8s resources inside Minikube using the same addressing that pods would use.
Pods running in Minikube can access my application running in the IDE, for example via HTTP requests.
This sounds like the first part is feasible on GCE using network routes, so I wonder if it's doable locally using Minikube.
There is an issue open upstream (kubernetes/minikube#38) in order to discuss that particular use case.
kube-proxy already adds the IPtables rules needed for IP forwarding inside the minikube VM (this is not specific to minikube), so all you have to do is add a static route to the container network via the IP of minikube's eth1 interface on your local machine:
ip route add 10.0.0.0/24 via 192.168.42.58 (Linux)
route -n add 10.0.0.0/24 192.168.42.58 (macOS)
Where 10.0.0.0/24 is the container network CIDR and 192.168.42.58 is the IP of your minikube VM (obtained with the minikube ip command).
You can then reach Kubernetes services from your local environment using their cluster IP. Example:
❯ kubectl get svc -n kube-system kubernetes-dashboard
NAME                   CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   10.0.0.56    <nodes>       80:30000/TCP   35s
This also allows you to resolve names in the cluster.local domain via the cluster DNS (kube-dns addon):
❯ nslookup kubernetes-dashboard.kube-system.svc.cluster.local 10.0.0.10
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: kubernetes-dashboard.kube-system.svc.cluster.local
Address: 10.0.0.56
If you also happen to have a local dnsmasq running on your local machine, you can easily take advantage of this and forward all DNS requests for the cluster.local domain to kube-dns:
server=/cluster.local/10.0.0.10
