Kubernetes newbie (or rather basic networking) question:
Installed single-node minikube (0.23 release) on an Ubuntu box running in my LAN (on IP address 192.168.0.20) with VirtualBox. The minikube start command completes successfully:
minikube start
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
minikube dashboard also comes up successfully (running on 192.168.99.100:30000).
What I want to do is access the minikube dashboard from my MacBook (192.168.0.11) on the same LAN.
I also want to access the same minikube dashboard from the internet.
For LAN Access:
Now, from what I understand, since I am using VirtualBox (the default VM option), I can change the networking type (to NAT with port forwarding) using the VBoxManage command
VBoxManage modifyvm "VM name" --natpf1 "guestssh,tcp,,2222,,22"
as listed here
In my case it will be something like this (note that the protocol field must be tcp or udp, and the rule lists the host port before the guest port, so to reach the dashboard's NodePort 30000 I forward host port 30000):
VBoxManage modifyvm "VM name" --natpf1 "guesthttp,tcp,,30000,,30000"
Am I thinking along the right lines here?
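For reference, a runnable sketch of that rule applied to a live VM, plus a quick test from the LAN (the VM name "minikube" is an assumption; modifyvm requires the VM to be stopped, while controlvm applies the rule while it runs):
# rule format: name,protocol,hostIP,hostPort,guestIP,guestPort
VBoxManage controlvm "minikube" natpf1 "guesthttp,tcp,,30000,,30000"
# then, from the MacBook on the same LAN:
curl http://192.168.0.20:30000/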
Also, for remotely accessing the same minikube dashboard, I can set up a no-ip.com-like service. They ask you to install their utility on the Linux box and also set up port forwarding in the router settings, which forwards from an external router port to the host; the VirtualBox rule above then forwards on to the guest. Is that about right? Am I missing something here?
I was able to get running with something as simple as:
kubectl proxy --address='0.0.0.0' --disable-filter=true
@Jeff provided the perfect answer; here are a few more hints for newbies.
Start a proxy using @Jeff's command; by default it will open a proxy on 0.0.0.0:8001.
kubectl proxy --address='0.0.0.0' --disable-filter=true
Visit the dashboard via the link below:
curl http://your_api_server_ip:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/
For more details, please refer to the official docs.
I reached this URL with the search keywords: minikube dashboard remote.
In my case, minikube (and its dashboard) were running remotely and I wanted to access it securely from my laptop.
[my laptop] --ssh--> [remote server with minikube]
Following gmiretti's answer, my solution was a local-forwarding SSH tunnel:
On the minikube remote server, I ran these:
minikube dashboard
kubectl proxy
And on my laptop, I ran this (keep localhost as is):
ssh -L 12345:localhost:8001 myLogin@myRemoteServer
The dashboard was then available at this URL on my laptop:
http://localhost:12345/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
The ssh way
Assuming that you have sshd running on your Ubuntu box.
First run kubectl proxy & to expose the dashboard on http://localhost:8001
Then expose the dashboard using ssh's port forwarding, executing:
ssh -R 30000:127.0.0.1:8001 $USER@192.168.0.20
Note that for the forwarded port to be reachable from other machines (not just from the Ubuntu box itself), sshd must allow it: set GatewayPorts yes in /etc/ssh/sshd_config, or bind explicitly with -R 0.0.0.0:30000:127.0.0.1:8001.
Now you should be able to access the dashboard from your MacBook in your LAN by pointing the browser to http://192.168.0.20:30000
To expose it from outside, just expose port 30000 using no-ip.com, maybe changing it to some standard port, like 80.
Note that this isn't the simplest solution, but in some places it works without superuser rights ;) You can automate the login after restarts of the Ubuntu box using an init script and setting a public key for the connection (see the sketch below).
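A minimal sketch of that automation as a systemd unit (the unit name, user, and host are assumptions; public-key authentication to 192.168.0.20 must already work for that user):
sudo tee /etc/systemd/system/dashboard-tunnel.service > /dev/null <<'EOF'
[Unit]
Description=Expose the local kubectl proxy on port 30000 via ssh -R
After=network-online.target

[Service]
# Run as the account whose key is authorized on 192.168.0.20 (assumption).
User=youruser
# -N: no remote command; ExitOnForwardFailure makes a failed forward fatal,
# so Restart=always re-establishes the tunnel.
ExecStart=/usr/bin/ssh -N -o ExitOnForwardFailure=yes -R 0.0.0.0:30000:127.0.0.1:8001 youruser@192.168.0.20
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable --now dashboard-tunnel.service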
I had the same problem recently and solved it as follows:
Get your minikube VM onto the LAN by adding another network adapter in bridge network mode. For me, this was done through modifying the minikube VM in the VirtualBox UI and required VM stop/start. Not sure how this would work if you're using hyperkit. Don't muck with the default network adapters configured by minikube: minikube depends on these. https://github.com/kubernetes/minikube/issues/1471
If you haven't already, install kubectl on your mac: https://kubernetes.io/docs/tasks/tools/install-kubectl/
Add a cluster and associated config to the ~/.kube/config as below, modifying the server IP address to match your newly exposed VM IP. Names can also be modified if desired. Note that the insecure-skip-tls-verify: true is needed because the https certificate generated by minikube is only valid for the internal IP addresses of the VM.
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://192.168.0.101:8443
  name: mykubevm
contexts:
- context:
    cluster: mykubevm
    user: kubeuser
  name: mykubevm
users:
- name: kubeuser
  user:
    client-certificate: /Users/myname/.minikube/client.crt
    client-key: /Users/myname/.minikube/client.key
Copy the ~/.minikube/client.* files referenced in the config from your linux minikube host. These are the security key files required for access.
Switch your kubectl context: kubectl config use-context mykubevm. At this point, your minikube cluster should be accessible (try kubectl cluster-info).
Run kubectl proxy --port=8000 to create a local proxy at http://localhost:8000 for access to the dashboard. Navigate to that address in your browser.
It's also possible to ssh to the minikube VM. Copy the ssh key pair from ~/.minikube/machines/minikube/id_rsa* to your .ssh directory (renaming to avoid blowing away other keys, e.g. mykubevm & mykubevm.pub). Then ssh -i ~/.ssh/mykubevm docker@<kubevm-IP>
Thanks for your valuable answers. If you don't want to keep a kubectl proxy command running permanently, the "Service" object in the YAML file below makes the dashboard viewable remotely until you delete it. Create a new YAML file minikube-dashboard.yaml and type the content in manually; I don't recommend copy-pasting it, since YAML is indentation-sensitive.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard-test
  namespace: kube-system
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9090
    nodePort: 30000
  selector:
    app: kubernetes-dashboard
  type: NodePort
Execute the command:
$ sudo kubectl apply -f minikube-dashboard.yaml
Finally, open the URL:
http://your-public-ip-address:30000/#!/persistentvolume?namespace=default
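To verify the Service took effect before trying it remotely, a quick sketch (the service name matches the YAML above; with a VM driver the NodePort answers on the minikube VM's IP rather than a public address):
kubectl -n kube-system get svc kubernetes-dashboard-test   # should show 80:30000/TCP
curl http://$(minikube ip):30000/                          # dashboard HTML expected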
Slight variation on the approach above.
I have an HTTP web service with NodePort 30003. I make it available on port 80 externally (sudo because binding port 80 requires root) by running:
sudo ssh -v -i ~/.ssh/id_rsa -N -L 0.0.0.0:80:localhost:30003 ${USER}@$(hostname)
Jeff Prouty added a useful answer:
I was able to get running with something as simple as:
kubectl proxy --address='0.0.0.0' --disable-filter=true
But for me it didn't work initially.
I ran this command on a CentOS 7 machine with kubectl running (local IP: 192.168.0.20).
When I tried to access the dashboard from another computer (which was on the LAN, obviously):
http://192.168.0.20:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
all I got in my web browser was a timeout.
The solution for my case is that in CentOS 7 (and probably other distros) you need to open port 8001 in your OS firewall.
So in my case I needed to run, in a CentOS 7 terminal:
sudo firewall-cmd --zone=public --add-port=8001/tcp --permanent
sudo firewall-cmd --reload
And after that it works! :)
Of course you need to be aware that this is not a safe solution, because anybody has access to your dashboard now. But I think that for local lab testing it will be sufficient.
In other Linux distros, the command for opening ports in the firewall can be different (one common alternative is sketched below).
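For example, on Ubuntu-family distros where ufw is the active firewall, the equivalent would be (a sketch; adjust if you use something else):
sudo ufw allow 8001/tcp   # open the kubectl proxy port
sudo ufw reload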
Wanted to link this answer by iamnat.
https://stackoverflow.com/a/40773822
Use minikube ip to get your minikube IP on the host machine
Create the NodePort service
You should be able to access the configured NodePort via <minikube-ip>:<nodeport>
This should work on the LAN as well, as long as firewalls are open, if I'm not mistaken (a sketch of the second step follows).
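One way to create that NodePort service without writing YAML, as a sketch (the deployment name and namespace are assumptions; on older minikube versions the dashboard add-on may be a ReplicationController instead of a Deployment):
kubectl -n kube-system expose deployment kubernetes-dashboard --name=dashboard-nodeport --type=NodePort --port=80 --target-port=9090
kubectl -n kube-system get svc dashboard-nodeport   # note the assigned NodePort
# then browse to http://<minikube-ip>:<nodeport>/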
Just for my own learning purposes, I solved this issue using nginx proxy_pass. For example, suppose the dashboard has been bound to a port, let's say 43587. My local URL to that dashboard was then
http://127.0.0.1:43587/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
Then I installed nginx and went to the out-of-the-box config
sudo nano /etc/nginx/sites-available/default
and edited the location directive to look like this:
location / {
    proxy_set_header Host "localhost";
    proxy_pass http://127.0.0.1:43587;
}
Then I ran
sudo service nginx restart
The dashboard was then available from outside at:
http://my_server_ip/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/#/cronjob?namespace=default
I am running Artifactory Pro (5.3.1), and was trying to use the docker registry functionality.
I created a Docker repository and gave it port 5001 in the "Registry Port" config.
However, there's nothing running on port 5001 ("telnet localhost 5001" refuses to connect), and the logs show this:
[http-nio-8081-exec-7] [ERROR] (o.a.s.s.SshAuthServiceImpl:210) - Failed to start SSH server
java.net.SocketException: Permission denied
at sun.nio.ch.Net.bind0(Native Method) ~[na:1.8.0_72-internal]
at sun.nio.ch.Net.bind(Net.java:433) ~[na:1.8.0_72-internal]
at sun.nio.ch.Net.bind(Net.java:425) ~[na:1.8.0_72-internal]
at sun.nio.ch.AsynchronousServerSocketChannelImpl.bind(AsynchronousServerSocketChannelImpl.java:162) ~[na:1.8.0_72-internal]
at org.apache.sshd.common.io.nio2.Nio2Acceptor.bind(Nio2Acceptor.java:66) ~[sshd-core-0.14.0.jar:0.14.0]
Any idea what could cause a "permission denied"? There's nothing running on that port (same error for any other port). It's on Ubuntu 14.04.
I had a misunderstanding of how the Docker registry works with Artifactory.
The Artifactory service doesn't actually open the port assigned to the repo (5001 in this case); instead, the reverse proxy listens on it and forwards requests (with the right X-Forwarded-Port header) to the "normal" Artifactory service port (e.g. 8081).
After setting up the reverse proxy for it, it worked fine (see the sketch below).
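To illustrate the port relationship, a stripped-down nginx sketch (Artifactory can generate a complete reverse proxy config for you; the repo key docker-local and the paths here are assumptions):
sudo tee /etc/nginx/conf.d/artifactory-docker.conf > /dev/null <<'EOF'
server {
    # Listen on the repo's "Registry Port"; Artifactory itself never binds it.
    listen 5001;
    location / {
        # Rewrite Docker registry API calls to the repo's API path (repo key
        # assumed), then forward to the normal Artifactory service port.
        rewrite ^/v2/(.*)$ /artifactory/api/docker/docker-local/v2/$1 break;
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Port 5001;
    }
}
EOF
sudo nginx -s reload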
I am attempting to run my ASP.NET Core 1.1 web API in a Docker container, but I cannot connect to the web API from a browser or curl. To troubleshoot, I have also brought up standard nginx and Apache httpd containers and cannot connect to these either, so I believe this is a Docker/Docker Toolbox/configuration issue rather than a problem with my application.
I'll focus on what I have done with nginx and Apache:
I am running Docker Toolbox on Windows 7 Professional, and everything seems to work as I would expect.
Docker commands all work as expected
I can access the underlying Windows filesystem
I can get the expected results from curl http://localhost (if I start the default IIS website on Windows 7)
So now I shut down IIS and run nginx in a container:
$ docker run -d -p 80:80 nginx
45bb1f373c11b820d8431de3eb3bf222d57d412de53e8625f461b62c4279e644
Docker now shows nginx running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
45bb1f373c11 nginx "nginx -g 'daemon off" 47 seconds ago Up 48 seconds 0.0.0.0:80->80/tcp, 443/tcp admiring_pike
But I cannot connect with either curl (within the Docker Toolbox command prompt) or a web browser in Windows:
$ curl http://localhost
curl: (7) Failed to connect to localhost port 80: Connection refused
I get exactly the same results if I run an Apache 2.4 (httpd) container.
Any ideas? Thanks! Peter
I have found the answer in another question here.
Because Docker Toolbox is running on a lightweight Linux VM, it has its own IP address. One needs either to point at the VM using DOCKER_HOST or to access the VM via its IP address, found using the command:
docker-machine ip default
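So, continuing the nginx example from the question (a sketch; 192.168.99.100 is the usual Docker Toolbox default, yours may differ):
DOCKER_IP=$(docker-machine ip default)   # e.g. 192.168.99.100
curl http://${DOCKER_IP}:80/             # should now return the nginx welcome page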
As you are running in a VM, you need to follow the Docker documentation from here.
After that, run the following command to check the IP address of your VM:
docker-machine ip default
Start nginx and hit [default ip address]:port in the browser. It works!
I have Docker running on a CentOS VM with a bridged network. Running
ifconfig
shows that my VM gets a valid IP address. Now I'm running some software within a Docker container/image (which works within other Docker/networking configurations). Some of my code running in the Docker container uses an SSL connection (Java) to connect to itself. In all other run configurations this works perfectly. But when running in bridged mode with the CentOS VM and docker-compose, I get an SSL connect exception, error: Host unreachable. I can ping to and SSH into the VM with the same IP address and this all works fine. I'm sorry that I can't post the actual setup/code and scripts, as it's too much to post and it's also proprietary.
I'm baffled by this - why am I getting Host Unreachable in the aforementioned configuration?
FYI, I resolved the problem on CentOS by using the default "bridge" containers provided by Docker, but adding the following to my firewalld configuration:
firewall-cmd --permanent --zone=trusted --add-interface=docker0
firewall-cmd --reload
service firewalld restart
You might also need to open up a port to allow external communication, like so:
firewall-cmd --zone=public --add-port=8080/tcp --permanent
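Note that a --permanent rule only takes effect after a reload, so (assuming the same firewalld setup as above):
sudo firewall-cmd --reload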
My solution was to switch to an Ubuntu VM, because switching my docker-compose setup to the default bridge network broke my aliases, which I really needed.
The only remaining question here is why, after configuring firewalld, a user-configured docker-compose network cannot access the external IP, forcing the switch to the default bridge network.
I am installing Cloudera Manager on a local machine.
When trying to add a new host, I get the following error:
Installation failed. Failed to receive heartbeat from agent.
Ensure that the host's hostname is configured properly.
Ensure that port 7182 is accessible on the Cloudera Manager server
(check firewall rules).
Ensure that ports 9000 and 9001 are free on the host being added.
Check agent logs in /var/log/cloudera-scm-agent/ on the host being
added
(some of the logs can be found in the installation details).
I checked the logs; they show something like "hostname differs from canonical name".
So I also changed the hostname in /etc/resolv.conf.
But I am still getting the same error.
I had the same error due to a simple mistake in the file /etc/hosts:
Have you checked that you have DNS and reverse DNS?
Then, to check whether port 7182 is open, you should do a telnet IP 7182 (replace IP with the host of the Cloudera Manager Server).
If there are still some problems, maybe you have forgotten to deactivate the firewall (iptables).
Regards, K.
To resolve this issue, first check all open ports on your server and the services listening on them, with the command: sudo netstat -lpten
Check if anything is running on 9000 or 9001; usually a Java service required for the setup runs on port 9000, and the cloudera-scm-agent listener also defaults to port 9000. To overcome this issue you can reconfigure the port in /etc/cloudera-scm-agent/config.ini by changing it as below:
--------------------------------------------------
## It should not normally be necessary to modify these.
# Port that the CM agent should listen on.
listening_port=9001
-------------------------------------------------
Then restart the cloudera-scm-agent service with the command:
service cloudera-scm-agent restart
To verify the port is not already taken by another service (such as sshd), check the Port entries in /etc/ssh/sshd_config (a couple of quick checks are sketched below).
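Quick checks for the ports mentioned above (a sketch; replace cm-server-host with your Cloudera Manager server's hostname):
sudo netstat -lpten | grep -E ':(9000|9001) '   # who, if anyone, holds 9000/9001
telnet cm-server-host 7182                      # confirm 7182 is reachable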
I hope this resolution will work for others too.
Cheers,
Ankit Gupta