Is it possible to have my development machine be part of Minikube's network?
Ideally, it should work both ways:
While developing an application in my IDE, I can access k8s resources inside Minikube using the same addressing that pods would use.
Pods running in Minikube can access my application running in the IDE, for example via HTTP requests.
This sounds like the first part is feasible on GCE using network routes, so I wonder if it's doable locally using Minikube.
There is an open issue upstream (kubernetes/minikube#38) that discusses this particular use case.
kube-proxy already adds the iptables rules needed for IP forwarding inside the minikube VM (this is not specific to minikube), so all you have to do on your local machine is add a static route to the container network via the IP of minikube's eth1 interface:
ip route add 10.0.0.0/24 via 192.168.42.58 (Linux)
route -n add 10.0.0.0/24 192.168.42.58 (macOS)
Where 10.0.0.0/24 is the container network CIDR and 192.168.42.58 is the IP of your minikube VM (obtained with the minikube ip command).
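If you prefer not to hard-code the VM address, the same route can be added in one step (a Linux sketch; substitute your actual container/service CIDR if it differs from 10.0.0.0/24):
sudo ip route add 10.0.0.0/24 via $(minikube ip)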
You can then reach Kubernetes services from your local environment using their cluster IP. Example:
❯ kubectl get svc -n kube-system kubernetes-dashboard
NAME                   CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   10.0.0.56    <nodes>       80:30000/TCP   35s
This also allows you to resolve names in the cluster.local domain via the cluster DNS (kube-dns addon):
❯ nslookup kubernetes-dashboard.kube-system.svc.cluster.local 10.0.0.10
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: kubernetes-dashboard.kube-system.svc.cluster.local
Address: 10.0.0.56
If you also happen to have a local dnsmasq running on your local machine, you can easily take advantage of this and forward all DNS requests for the cluster.local domain to kube-dns:
server=/cluster.local/10.0.0.10
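For example, assuming your dnsmasq picks up drop-in files from /etc/dnsmasq.d/ (a common but not universal setup), you could wire this up like so:
echo 'server=/cluster.local/10.0.0.10' | sudo tee /etc/dnsmasq.d/minikube.conf
sudo systemctl restart dnsmasq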
Related
minikube v1.13.0 on Ubuntu 18.04 with Kubernetes v1.19.0 on Docker 19.03.8. Using helm/helmfile ("v3.3.4"). The Ubuntu VM is on VM-Workstation running on Win10, networking set as NAT, everything in my home wifi network.
I am trying to use ingress-backend stable/nginx-ingress 1.36.0. I do have nginx-ingress-1.36.0.tgz in the ingress/charts folder, and I have enabled ingress with minikube addons enable ingress.
Before I enabled ingress on minikube, everything would deploy successfully (no errors), but the service/LB stayed pending:
ClusterIP 10.101.41.156 <none> 8080/TCP
ingress-controller-nginx-ingress-controller LoadBalancer 10.98.157.222 <pending> 80:30050/TCP,443:32294/TCP
After I enabled ingress on minikube, I now get this connection refused error:
STDERR:
Error: UPGRADE FAILED: cannot patch "ingress-service" with kind Ingress:
Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.kube-system.svc:443/extensions/v1beta1/ingresses?timeout=30s":
dial tcp 10.105.131.220:443: connect: connection refused
COMBINED OUTPUT:
Error: UPGRADE FAILED: cannot patch "ingress-service" with kind Ingress:
Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.kube-system.svc:443/extensions/v1beta1/ingresses?timeout=30s":
dial tcp 10.105.131.220:443: connect: connection refused
I don't know what this IP 10.105.131.220 is - it looks like a private IP. It is not my minikube IP, my VM IP, or my laptop IP, and I can't ping it.
It all still deploys fine, but the LoadBalancer still shows pending.
Update
I had missed one of the steps in the documentation:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
I stopped/deleted minikube and redid everything; now the error is gone, but the LoadBalancer is still <pending>.
By default, local solutions like minikube do not provide a LoadBalancer implementation. Cloud offerings like EKS, Google Cloud, and Azure do it for you automatically by spinning up a separate LB in the background. That's why you see the Pending status.
Solutions:
use MetalLB on minikube
MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type LoadBalancer in clusters that don’t run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers.
Installation:
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml
namespace/metallb-system created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
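After installation, MetalLB still needs to be told which addresses it may hand out. A minimal layer2 ConfigMap sketch for the pre-CRD 0.8.x release (the address range here is an assumption; pick a free range on your minikube host-only network):
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.99.105-192.168.99.120
EOF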
use minikube tunnel
Services of type LoadBalancer can be exposed via the minikube tunnel
command. It must be run in a separate terminal window to keep the
LoadBalancer running. Ctrl-C in the terminal can be used to terminate
the process at which time the network routes will be cleaned up.
minikube tunnel runs as a process, creating a network route on the host to the service CIDR of the cluster using the cluster’s IP
address as a gateway. The tunnel command exposes the external IP
directly to any program running on the host operating system.
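A minimal usage sketch, reusing the ingress controller service name from above (the name comes from that example and may differ in your cluster):
minikube tunnel
# in a second terminal:
kubectl get svc ingress-controller-nginx-ingress-controller
# EXTERNAL-IP should now show an address instead of <pending>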
We have an original network with a number of Linux VMs running Docker Swarm containers. We're migrating them to AKS, but this has to be done in steps. Currently we have set up an ingress on a private subnet with an internal ingress as below, following this example: https://learn.microsoft.com/en-us/azure/aks/ingress-internal-ip
The issue: when connected to the original v-net we can't connect to 10.3.0.4 from the other subnet, i.e. 10.0.0.12:80 -> 10.3.0.4:80. From a test VM on the same subnet as the AKS cluster we can connect to 10.3.0.4 just fine.
The strange thing is that we can connect from the original v-net to a service endpoint on the NIC, i.e. 10.0.0.12:80 -> 10.3.3.134:80.
This is not a solution though, as we could have multiple replicas.
Any ideas why the load balancer is not visible to the original v-net?
Connected Devices in V-Net
original subnet
//10.0.0.0/24
10.0.0.0 - 10.0.0.255 (256 addresses)
//10.0.1.0/24
10.0.1.0 - 10.0.1.255 (256 addresses)
which have peering set to each other
aks subnet
10.3.0.0/16
10.3.0.0 - 10.3.255.255 (65536 addresses)
az aks create \
--resource-group AKS_Workshop \
--name workshop-aks-cluster \
--network-plugin azure \
--vnet-subnet-id "/subscriptions/../resourceGroups/AKS_Workshop/providers/Microsoft.Network/virtualNetworks/aksvnet/subnets/default" \
--docker-bridge-address 172.17.0.1/16 \
--dns-service-ip 10.4.0.2 \
--service-cidr 10.4.0.0/16 \
--generate-ssh-keys \
--max-pods 96 \
--node-vm-size Standard_DS2_v2
controller:
  service:
    loadBalancerIP: 10.3.0.4
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
helm install stable/nginx-ingress \
  --name backend-ingress \
  --namespace ingress -f ingress-basic.yaml \
  --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
  --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux
NAME                                            TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
aks-helloworld                                  ClusterIP      10.4.121.75   <none>        80/TCP                       25h
backend-ingress-nginx-ingress-controller       LoadBalancer   10.4.233.18   10.3.0.4      80:30904/TCP,443:32023/TCP   23h
backend-ingress-nginx-ingress-default-backend  ClusterIP      10.4.198.93   <none>        80/TCP                       23h
ingress-demo                                    ClusterIP      10.4.53.2     <none>        80/TCP                       25h
Name: ingress-demo
Namespace: ingress
Labels: <none>
Annotations: <none>
Selector: app=acs-helloworld-saucy-antelope
Type: ClusterIP
IP: 10.4.53.2
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.3.3.134:80
Session Affinity: None
Events: <none>
Turns out the issue is related to the two v-nets being in different regions. Once I created a v-net in the same region I had no issues hitting the internal load balancer endpoint. As the documentation below says, you can still reach the individual pod endpoint directly over global peering, but if you run several replicas behind the balancer you can only hit one of them, which renders that pointless.
VNet peering
What is VNet peering?
VNet peering (or virtual network peering) enables you to connect virtual networks. A VNet peering connection between virtual networks enables you to route traffic between them privately through IPv4 addresses. Virtual machines in the peered VNets can communicate with each other as if they are within the same network. These virtual networks can be in the same region or in different regions (also known as Global VNet Peering). VNet peering connections can also be created across Azure subscriptions.
Can I create a peering connection to a VNet in a different region?
Yes. Global VNet peering enables you to peer VNets in different regions. Global VNet peering is available in all Azure public regions, China cloud regions, and Government cloud regions. You cannot globally peer from Azure public regions to national cloud regions.
What are the constraints related to Global VNet Peering and Load Balancers?
If the two virtual networks in two different regions are peered over Global VNet Peering, you cannot connect to resources that are behind a Basic Load Balancer through the Front End IP of the Load Balancer. This restriction does not exist for a Standard Load Balancer. The following resources can use Basic Load Balancers which means you cannot reach them through the Load Balancer's Front End IP over Global VNet Peering. You can however use Global VNet peering to reach the resources directly through their private VNet IPs, if permitted.
VMs behind Basic Load Balancers
Virtual machine scale sets with Basic Load Balancers
Redis Cache
Application Gateway (v1) SKU
Service Fabric
SQL MI
API Management
Active Directory Domain Service (ADDS)
Logic Apps
HDInsight
Azure Batch
App Service Environment
You can connect to these resources via ExpressRoute or VNet-to-VNet through VNet Gateways.
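If recreating the AKS cluster is an option, one way to avoid the Basic Load Balancer limitation is to request the Standard SKU at creation time; <node-resource-group> below is a placeholder for the cluster's managed resource group. A sketch, not verified against your environment:
# same flags as the az aks create command above, plus:
az aks create ... --load-balancer-sku standard
# check the SKU of the existing load balancer:
az network lb list --resource-group <node-resource-group> --query "[].{name:name,sku:sku.name}" -o table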
Kubernetes newbie (or rather basic networking) question:
Installed single-node minikube (0.23 release) on an Ubuntu box running in my LAN (on IP address 192.168.0.20) with VirtualBox.
The minikube start command completes successfully as well:
minikube start
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
minikube dashboard also comes up successfully (running on 192.168.99.100:30000).
What I want to do is access the minikube dashboard from my MacBook (at 192.168.0.11) on the same LAN.
Also I want to access the same minikube dashboard from the internet.
For LAN Access:
Now, from what I understand, since I am using VirtualBox (the default VM option), I can change the networking type (to NAT with port forwarding) using a VBoxManage command
VBoxManage modifyvm "VM name" --natpf1 "guestssh,tcp,,2222,,22"
as listed here
In my case it will be something like this
VBoxManage modifyvm "VM name" --natpf1 "guesthttp,http,,30000,,8080"
Am I thinking along the right lines here?
Also, for remotely accessing the same minikube dashboard, I can set up a no-ip.com-like service. They ask you to install their utility on the Linux box and also set up port forwarding in the router settings, which forwards from a host port to a guest port. Is that about right? Am I missing something here?
I was able to get running with something as simple as:
kubectl proxy --address='0.0.0.0' --disable-filter=true
@Jeff provided the perfect answer; here are some more hints for newbies.
Start a proxy using @Jeff's command; by default it will open a proxy on 0.0.0.0:8001.
kubectl proxy --address='0.0.0.0' --disable-filter=true
Visit the dashboard via the link below:
curl http://your_api_server_ip:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/
For more details, please refer to the official docs.
I reached this URL with the search keywords: minikube dashboard remote.
In my case, minikube (and its dashboard) were running remotely and I wanted to access it securely from my laptop.
[my laptop] --ssh--> [remote server with minikube]
Following gmiretti's answer, my solution was local forwarding ssh tunnel:
On minikube remote server, ran these:
minikube dashboard
kubectl proxy
And on my laptop, ran these (keep localhost as is):
ssh -L 12345:localhost:8001 myLogin@myRemoteServer
The dashboard was then available at this url on my laptop:
http://localhost:12345/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
The ssh way
Assuming that you have ssh on your ubuntu box.
First run kubectl proxy & to expose the dashboard on http://localhost:8001
Then expose the dashboard using ssh's port forwarding, executing:
ssh -R 30000:127.0.0.1:8001 $USER@192.168.0.20
Now you should be able to access the dashboard from your MacBook on your LAN by pointing the browser to http://192.168.0.20:30000
To expose it from outside, just expose the port 30000 using no-ip.com, maybe change it to some standard port, like 80.
Note that this isn't the simplest solution, but in some places it works without superuser rights ;) You can automate the login after restarts of the Ubuntu box using an init script and setting a public key for the connection.
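One caveat worth hedging: by default sshd binds remote-forwarded (-R) ports to the loopback interface only, so other LAN machines may not reach port 30000 until GatewayPorts is enabled. A sketch, assuming you can edit sshd_config on the Ubuntu box:
# on 192.168.0.20: set "GatewayPorts yes" in /etc/ssh/sshd_config, then
sudo systemctl restart sshd
ssh -R 0.0.0.0:30000:127.0.0.1:8001 $USER@192.168.0.20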
I had the same problem recently and solved it as follows:
Get your minikube VM onto the LAN by adding another network adapter in bridge network mode. For me, this was done through modifying the minikube VM in the VirtualBox UI and required VM stop/start. Not sure how this would work if you're using hyperkit. Don't muck with the default network adapters configured by minikube: minikube depends on these. https://github.com/kubernetes/minikube/issues/1471
If you haven't already, install kubectl on your mac: https://kubernetes.io/docs/tasks/tools/install-kubectl/
Add a cluster and associated config to the ~/.kube/config as below, modifying the server IP address to match your newly exposed VM IP. Names can also be modified if desired. Note that the insecure-skip-tls-verify: true is needed because the https certificate generated by minikube is only valid for the internal IP addresses of the VM.
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://192.168.0.101:8443
  name: mykubevm
contexts:
- context:
    cluster: mykubevm
    user: kubeuser
  name: mykubevm
users:
- name: kubeuser
  user:
    client-certificate: /Users/myname/.minikube/client.crt
    client-key: /Users/myname/.minikube/client.key
Copy the ~/.minikube/client.* files referenced in the config from your linux minikube host. These are the security key files required for access.
Switch to your new kubectl context: kubectl config use-context mykubevm. At this point, your minikube cluster should be accessible (try kubectl cluster-info).
Run kubectl proxy to create a local proxy (http://localhost:8001 by default) for access to the dashboard. Navigate to that address in your browser.
It's also possible to ssh to the minikube VM. Copy the ssh key pair from ~/.minikube/machines/minikube/id_rsa* to your .ssh directory (renaming to avoid blowing away other keys, e.g. mykubevm & mykubevm.pub). Then ssh -i ~/.ssh/mykubevm docker@<kubevm-IP>
Thanks for the valuable answers. With the kubectl proxy command you are unable to view the dashboard permanently; using the "Service" object below in a YAML file you are able to view it remotely until you stop it. Create a new YAML file minikube-dashboard.yaml and write the code manually; I don't recommend copy-pasting it.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard-test
  namespace: kube-system
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9090
    nodePort: 30000
  selector:
    app: kubernetes-dashboard
  type: NodePort
Execute the command,
$ sudo kubectl apply -f minikube-dashboard.yaml
Finally, open the URL:
http://your-public-ip-address:30000/#!/persistentvolume?namespace=default
Slight variation on the approach above.
I have an http web service with NodePort 30003. I make it available on port 80 externally by running:
sudo ssh -v -i ~/.ssh/id_rsa -N -L 0.0.0.0:80:localhost:30003 ${USER}#$(hostname)
Jeff Prouty added a useful answer:
I was able to get running with something as simple as:
kubectl proxy --address='0.0.0.0' --disable-filter=true
But for me it didn't work initially.
I ran this command on the CentOS 7 machine running kubectl (local IP: 192.168.0.20).
When I tried to access the dashboard from another computer (which was on the LAN, obviously):
http://192.168.0.20:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
all I got was a timeout in my web browser.
The solution in my case is that in CentOS 7 (and probably other distros) you need to open port 8001 in the OS firewall.
So in my case I needed to run in a CentOS 7 terminal:
sudo firewall-cmd --zone=public --add-port=8001/tcp --permanent
sudo firewall-cmd --reload
And after that, it works! :)
Of course you need to be aware that this is not a safe solution, because anybody has access to your dashboard now. But I think that for local lab testing it will be sufficient.
In other Linux distros the command for opening firewall ports can be different (an Ubuntu example is sketched below).
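For example, on Ubuntu with ufw (an assumption about your distro and firewall), the equivalent would be roughly:
sudo ufw allow 8001/tcp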
Wanted to link this answer by iamnat.
https://stackoverflow.com/a/40773822
Use minikube ip to get your minikube ip on the host machine
Create the NodePort service
You should be able to access the configured NodePort via <minikube ip>:<nodeport>
This should work on the LAN as well as long as firewalls are open, if I'm not mistaken.
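A quick check, with a made-up IP and nodePort (yours will come from minikube ip and your service definition):
minikube ip          # e.g. 192.168.99.100
curl http://192.168.99.100:30000/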
Just for my own learning purposes, I solved this issue using nginx proxy_pass. For example, the dashboard was bound to a port, let's say 43587, so my local URL to that dashboard was
http://127.0.0.1:43587/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
Then I installed nginx and opened the out-of-the-box config
sudo nano /etc/nginx/sites-available/default
and edited the location directive to look like this:
location / {
    proxy_set_header Host "localhost";
    proxy_pass http://127.0.0.1:43587;
}
then I did
sudo service nginx restart
then the dashboard was available from outside at:
http://my_server_ip/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/#/cronjob?namespace=default
I am following the guide at http://kubernetes.io/docs/getting-started-guides/ubuntu/ to create a kubernetes cluster. Once the cluster is up, I can create pods and services using kubectl. Basically, I do the following:
kubectl run nginx --image=nginx --port=80
kubectl expose deployment/nginx
I see a pod and service running
# kubectl get services
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   192.168.3.1     <none>        443/TCP   2d
nginx        192.168.3.208   <none>        80/TCP    2d
When I try to access the service from the machine where the pod is running, I get back the nginx hello-world page. But if I try it from another machine in the kubernetes cluster, I get a timeout.
I thought all the services are accessible anywhere in the cluster. Why could it not be working that way?
Thanks
Yes, services should be accessible anywhere in the cluster. Is your "another machine" listed in the output of kubectl get nodes? Is the node Ready? Maybe the machine wasn't configured correctly.
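A quick way to check both things at once (standard kubectl, nothing cluster-specific):
kubectl get nodes -o wide   # every node should show STATUS Ready; -o wide also prints node IPs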
If you want to reach the service from anywhere in the cluster, you must use a network plug-in, such as Flannel or Open vSwitch.
http://kubernetes.io/docs/admin/networking/#flannel
https://github.com/coreos/flannel#flannel
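To verify whether a pod-network plug-in such as flannel is actually running, a hedged check (namespace and labels vary by install method):
kubectl get pods --all-namespaces -o wide | grep -i flannel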
I found out my error by comparing it with another installation where it worked. This installation was missing an iptables rule that forced everything going to the containers onto the flannel interface, so the traffic was reaching the target host on eth0, which made it discard the packet. I don't know why the proxy didn't add that rule. Once I manually added it, it worked.
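If you hit the same symptom, a hedged way to do that comparison yourself (the exact missing rule will differ per setup):
sudo iptables-save | grep -i -e flannel -e FORWARD   # compare a working node against the broken one
ip route                                             # the container CIDR should point at the flannel interface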
My SSH stopped working after I successfully installed Docker (following the official site instructions https://docs.docker.com/engine/installation/) on an Ubuntu machine A. Now my laptop cannot SSH to A, but it works for other machines, say B, sitting in the same network environment as A. A can SSH to B and B can also SSH to A. What could be the problem? Can anyone suggest how I can diagnose this?
If you are using a VPN service you might be encountering an IP conflict between the docker0 interface and your VPN service.
to resolve this:
stop docker service:
sudo service docker stop
remove old docker0 interface created by docker
ip link del docker0
configure the docker0 bridge (in my case I only had to define the "bip" option; a sketch follows these steps)
start the docker service:
sudo service docker start
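As a concrete illustration of the bridge configuration step, a minimal /etc/docker/daemon.json sketch (the 172.26.0.0/24 range is only an example; pick one that overlaps neither your VPN nor your LAN):
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "bip": "172.26.0.1/24"
}
EOF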
Most probably there is an IP conflict between the docker0 interface and your VPN service. As already answered, the way to fix it is to stop the docker service, remove the docker0 interface, and configure the daemon.json file. I added the following lines to my daemon.json:
{
  "default-address-pools": [
    {"base": "10.10.0.0/16", "size": 24}
  ]
}
My VPN was providing me an IP like 192.168.x.x, so I chose a base IP that does not fall in that range. Note that the daemon.json file does not exist by default, so you have to create it in /etc/docker/.
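After creating or editing daemon.json, restart Docker and confirm the new range took effect (standard commands, assuming systemd):
sudo systemctl restart docker
ip addr show docker0
docker network inspect bridge | grep Subnet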