I've created a new network namespace for a USB-ethernet and a wireless interface.
I run a dnsmasq DHCP server on one interface with
sudo dnsmasq --port 5353 --interface wlp2s0 -F 123.12.1.101,123.12.1.200,24h
Which works like a charm.
I then want to set up another dnsmasq DHCP server on the other interface with:
sudo dnsmasq --port 5454 --interface enx65ad574sa -F 123.12.1.101,123.12.1.200,24h
But this just reports
dnsmasq: failed to bind DHCP server socket: Address already in use.
I am able to set up multiple dnsmasq DHCP servers if I run dnsmasq outside the namespace, but inside a namespace I can only have it running once.
If I create a configuration file:
interface=wlp2s0
dhcp-range=wlp2s0,123.12.1.101,123.12.1.200,255.255.255.0,24h
interface=enx65ad574saaw
dhcp-range=enx65ad574saaw,192.168.0.101,192.168.0.200,255.255.255.0,24h
listen-address=::1,127.0.0.1,192.168.0.1,123.12.1.1
It also works just fine on both interfaces inside the namespace... So what's the difference? I need to run this dynamically from the command line, so I can't use the configuration file.
I ended up creating two separate dnsmasq configuration files, both with DHCP serving set up on the same interface, and then ran the configurations concurrently, each in its own network namespace. That means that when the wireless interface is added to the network namespace, dnsmasq is already serving DHCP on the interface.
Apparently, setting up and starting two dnsmasq instances, each with its own configuration but naming the same interface, does not make them interfere with or break each other, and it works quite seamlessly (at least when they are started in separate network namespaces).
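For what it's worth, a likely explanation of the difference (this is my reading of the dnsmasq documentation, not something stated in the question): the --port option changes only the DNS port, while the DHCP socket is always bound on UDP port 67, and without bind-interfaces dnsmasq binds the wildcard address for DHCP, so a second instance in the same namespace collides. A single instance reading one configuration file serves both interfaces from one socket, which is probably why the config-file variant works. A sketch of the two-config workaround described above (the namespace names and file paths are hypothetical; interface names and ranges are the ones from the question):

```shell
# Create the two per-namespace dnsmasq configuration files in a temp dir.
CONF_DIR=$(mktemp -d)

# Config for the wireless interface:
cat > "$CONF_DIR/dnsmasq-wlp2s0.conf" <<'EOF'
interface=wlp2s0
dhcp-range=wlp2s0,123.12.1.101,123.12.1.200,255.255.255.0,24h
EOF

# Config for the USB-ethernet interface:
cat > "$CONF_DIR/dnsmasq-enx.conf" <<'EOF'
interface=enx65ad574sa
dhcp-range=enx65ad574sa,192.168.0.101,192.168.0.200,255.255.255.0,24h
EOF

# Start one instance per namespace (requires root and existing namespaces):
#   sudo ip netns exec ns-wifi dnsmasq --conf-file="$CONF_DIR/dnsmasq-wlp2s0.conf"
#   sudo ip netns exec ns-usb  dnsmasq --conf-file="$CONF_DIR/dnsmasq-enx.conf"
```

If both instances must run in the same namespace, adding --bind-interfaces to each command line is the documented way to avoid the DHCP socket clash, though I haven't verified it for this exact setup.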
Following the official documentation, I'm trying to deploy Devstack on an Ubuntu 18.04 Server OS in a virtual machine. The devstack node has only one network card (ens160), connected to a network with the CIDR 10.20.30.0/24. I need my instances to be publicly accessible on this network (from 10.20.30.240 to 10.20.30.250). So, again following the official floating-IP documentation, I came up with this local.conf file:
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
PUBLIC_INTERFACE=ens160
HOST_IP=10.20.30.40
FLOATING_RANGE=10.20.30.40/24
PUBLIC_NETWORK_GATEWAY=10.20.30.1
Q_FLOATING_ALLOCATION_POOL=start=10.20.30.240,end=10.20.30.250
This leads to a br-ex being formed with the global IP address 10.20.30.40 and a secondary IP address 10.20.30.1. (The gateway already exists on the network; isn't the PUBLIC_NETWORK_GATEWAY parameter supposed to refer to the real gateway on the network?)
Now, after a successful deployment, disabling ufw (according to this), creating a cirros instance with a proper security group for ping and ssh, and attaching a floating IP, I can only access my instance from the devstack node, not from the rest of the network! Also, from within the cirros instance I cannot access the outside world (even though I can access the outside world from the devstack node).
Afterwards, following this video, I modified the local.conf file like this:
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
FLAT_INTERFACE=ens160
HOST_IP=10.20.30.40
FLOATING_RANGE=10.20.30.240/28
After a successful deployment and instance setup, I still can access my instance only from the devstack node and not from outside! But the good news is that I can now access the outside world from within the cirros instance.
Any help would be appreciated!
Update
On the second configuration, while checking packets with tcpdump and pinging the instance's floating IP, I observed that the who-has ARP broadcast for the instance's floating IP reaches the devstack node from the network router; however, no is-at reply is generated, so the ICMP packets are never routed to the devstack node and the instance.
So, with some tricks I crafted the response myself and everything worked fine afterwards; but this certainly isn't a solution. I imagine devstack should work out of the box without any tweaking, so this is probably a devstack misconfiguration.
After 5 days of tests, research and reading, I found this: Openstack VM is not accessible on LAN
Enter the following commands on devstack node:
echo 1 > /proc/sys/net/ipv4/conf/ens160/proxy_arp
iptables -t nat -A POSTROUTING -o ens160 -j MASQUERADE
That'll do the trick!
Cheers!
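One caveat about the proxy_arp/MASQUERADE fix above: both settings are lost on reboot. A hedged sketch for persisting them on Ubuntu (the drop-in file name and the iptables-persistent package are conventional choices, not from the original answer):

```shell
# Persist the proxy ARP setting via a sysctl drop-in (file name is arbitrary)
echo 'net.ipv4.conf.ens160.proxy_arp = 1' | sudo tee /etc/sysctl.d/99-devstack-proxy-arp.conf
sudo sysctl --system

# Persist the NAT rule with the iptables-persistent package (Ubuntu/Debian)
sudo apt-get install -y iptables-persistent
sudo netfilter-persistent save
```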
Kubernetes newbie (or rather basic networking) question:
Installed a single-node minikube (0.23 release) with VirtualBox on an Ubuntu box running in my LAN (on IP address 192.168.0.20).
The minikube start command completes successfully as well:
minikube start
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
minikube dashboard also comes up successfully (running on 192.168.99.100:30000).
What I want to do is access the minikube dashboard from my MacBook (192.168.0.11) on the same LAN.
I also want to access the same minikube dashboard from the internet.
For LAN Access:
Now, from what I understand, since I am using VirtualBox (the default VM option), I can change the networking type (to NAT with port forwarding) using the VBoxManage command
VBoxManage modifyvm "VM name" --natpf1 "guestssh,tcp,,2222,,22"
as listed here
In my case it will be something like this
VBoxManage modifyvm "VM name" --natpf1 "guesthttp,tcp,,30000,,8080"
Am I thinking along the right lines here?
Also, for remotely accessing the same minikube dashboard, I can set up a service like no-ip.com. They ask you to install their utility on the linux box and also to set up port forwarding in the router settings, which forwards from a host port to a guest port. Is that about right? Am I missing something here?
I was able to get running with something as simple as:
kubectl proxy --address='0.0.0.0' --disable-filter=true
@Jeff provided the perfect answer; here are more hints for newbies.
Start a proxy using @Jeff's command; by default it will open a proxy on 0.0.0.0:8001.
kubectl proxy --address='0.0.0.0' --disable-filter=true
Visit the dashboard via the link below:
curl http://your_api_server_ip:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/
For more details, please refer to the official docs.
I reached this url with search keywords: minikube dashboard remote.
In my case, minikube (and its dashboard) were running remotely and I wanted to access it securely from my laptop.
[my laptop] --ssh--> [remote server with minikube]
Following gmiretti's answer, my solution was a local-forwarding SSH tunnel:
On the minikube remote server, I ran these:
minikube dashboard
kubectl proxy
And on my laptop, I ran this (keep localhost as is):
ssh -L 12345:localhost:8001 myLogin@myRemoteServer
The dashboard was then available at this url on my laptop:
http://localhost:12345/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
The ssh way
Assuming that you have ssh on your ubuntu box.
First run kubectl proxy & to expose the dashboard on http://localhost:8001
Then expose the dashboard using ssh's port forwarding, executing:
ssh -R 30000:127.0.0.1:8001 $USER@192.168.0.20
Now you should be able to access the dashboard from your macbook on your LAN by pointing the browser to http://192.168.0.20:30000
To expose it from outside, just expose port 30000 using no-ip.com, and maybe change it to some standard port, like 80.
Note that this isn't the simplest solution, but in some places it works without superuser rights ;) You can automate the login after restarts of the ubuntu box using an init script and setting up a public key for the connection.
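One caveat worth adding to the ssh -R approach (this is standard sshd behavior, not something stated in the answer above): by default sshd binds remote-forwarded ports to the loopback interface only, so port 30000 would be reachable on the ubuntu box itself but not from the rest of the LAN. To bind it on all interfaces, GatewayPorts must be enabled on the ubuntu box:

```shell
# In /etc/ssh/sshd_config on the ubuntu box (192.168.0.20):
#   GatewayPorts yes          # or "clientspecified"
# then restart sshd:
#   sudo service ssh restart
# With "clientspecified", request the wildcard bind explicitly from the client:
#   ssh -R 0.0.0.0:30000:127.0.0.1:8001 $USER@192.168.0.20
```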
I had the same problem recently and solved it as follows:
Get your minikube VM onto the LAN by adding another network adapter in bridge network mode. For me, this was done through modifying the minikube VM in the VirtualBox UI and required VM stop/start. Not sure how this would work if you're using hyperkit. Don't muck with the default network adapters configured by minikube: minikube depends on these. https://github.com/kubernetes/minikube/issues/1471
If you haven't already, install kubectl on your mac: https://kubernetes.io/docs/tasks/tools/install-kubectl/
Add a cluster and associated config to the ~/.kube/config as below, modifying the server IP address to match your newly exposed VM IP. Names can also be modified if desired. Note that the insecure-skip-tls-verify: true is needed because the https certificate generated by minikube is only valid for the internal IP addresses of the VM.
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://192.168.0.101:8443
  name: mykubevm
contexts:
- context:
    cluster: mykubevm
    user: kubeuser
  name: mykubevm
users:
- name: kubeuser
  user:
    client-certificate: /Users/myname/.minikube/client.crt
    client-key: /Users/myname/.minikube/client.key
Copy the ~/.minikube/client.* files referenced in the config from your linux minikube host. These are the security key files required for access.
Switch your kubectl context: kubectl config use-context mykubevm. At this point, your minikube cluster should be accessible (try kubectl cluster-info).
Run kubectl proxy to create a local proxy for access to the dashboard (it listens on http://localhost:8001 by default). Navigate to that address in your browser.
It's also possible to ssh to the minikube VM. Copy the ssh key pair from ~/.minikube/machines/minikube/id_rsa* to your .ssh directory (renaming to avoid blowing away other keys, e.g. mykubevm & mykubevm.pub). Then ssh -i ~/.ssh/mykubevm docker@<kubevm-IP>
Thanks for the valuable answers. If you don't want to keep a kubectl proxy command running, the following Service object lets you view the dashboard remotely until you delete it. Create a new YAML file minikube-dashboard.yaml and type the code in manually; I don't recommend copy-pasting it.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard-test
  namespace: kube-system
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9090
    nodePort: 30000
  selector:
    app: kubernetes-dashboard
  type: NodePort
Execute the command,
$ sudo kubectl apply -f minikube-dashboard.yaml
Finally, open the URL:
http://your-public-ip-address:30000/#!/persistentvolume?namespace=default
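A quick sanity check, assuming kubectl access to the same cluster (the service name and namespace come from the YAML above):

```shell
# Confirm the Service exists and shows the expected port mapping (80:30000/TCP)
kubectl -n kube-system get svc kubernetes-dashboard-test
# Then, from any machine on the network:
#   curl -I http://<node-ip>:30000/
```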
Slight variation on the approach above.
I have an http web service with NodePort 30003. I make it available on port 80 externally by running:
sudo ssh -v -i ~/.ssh/id_rsa -N -L 0.0.0.0:80:localhost:30003 ${USER}#$(hostname)
Jeff Prouty added a useful answer:
I was able to get running with something as simple as:
kubectl proxy --address='0.0.0.0' --disable-filter=true
But for me it didn't work initially.
I ran this command on a CentOS 7 machine with kubectl running (local IP: 192.168.0.20).
When I tried to access the dashboard from another computer (on the same LAN, obviously):
http://192.168.0.20:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
all I got in my web browser was a timeout.
The solution in my case is that on CentOS 7 (and probably other distros) you need to open port 8001 in your OS firewall.
So in my case I needed to run in the CentOS 7 terminal:
sudo firewall-cmd --zone=public --add-port=8001/tcp --permanent
sudo firewall-cmd --reload
And after that, it works! :)
Of course, you need to be aware that this is not a safe solution, because anybody has access to your dashboard now. But I think that for local lab testing it is sufficient.
In other linux distros, the command for opening ports in the firewall can be different. Please use google for that.
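For example, on Ubuntu with ufw the equivalent (standard ufw syntax, though I haven't tested this exact setup) would be:

```shell
# Open port 8001 for the kubectl proxy in Ubuntu's default firewall
sudo ufw allow 8001/tcp
```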
Wanted to link this answer by iamnat.
https://stackoverflow.com/a/40773822
Use minikube ip to get your minikube ip on the host machine
Create the NodePort service
You should be able to access the configured NodePort service via <minikubeip>:<nodeport>
This should work on the LAN as well as long as firewalls are open, if I'm not mistaken.
Just for my own learning purposes, I solved this issue using nginx proxy_pass. For example, the dashboard was bound to a port, let's say 43587, so my local url to that dashboard was
http://127.0.0.1:43587/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
Then I installed nginx and opened the out-of-the-box config
sudo nano /etc/nginx/sites-available/default
and edited the location directive to look like this:
location / {
    proxy_set_header Host "localhost";
    proxy_pass http://127.0.0.1:43587;
}
then I did
sudo service nginx restart
then the dashboard was available from outside at:
http://my_server_ip/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/#/cronjob?namespace=default
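A small safeguard for the nginx steps above: validate the edited config before restarting, so a typo doesn't take the proxy down:

```shell
# nginx -t checks the configuration syntax; restart only if it passes
sudo nginx -t && sudo service nginx restart
```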
I have Docker running on a CentOS VM with bridged networking. Running
ifconfig
shows that my VM gets a valid IP address. Now I'm running some software within a docker container/image (which works in other docker/networking configurations). Some of my code running in the docker container uses an SSL connection (Java) to connect back to itself. In all other run configurations this works perfectly, but when running in bridged mode with the CentOS VM and docker-compose, I get an SSL connect exception: Host unreachable. I can ping and ssh into the VM on the same IP address and that all works fine. I'm sorry that I can't post the actual setup/code and scripts, as it's too much to post and it's also proprietary.
I'm baffled by this - why am I getting Host Unreachable in the aforementioned configuration?
FYI, I resolved the problem on CentOS by using the default "bridged" containers provided by Docker, but adding the following to my firewalld configuration:
firewall-cmd --permanent --zone=trusted --add-interface=docker0
firewall-cmd --reload
service firewalld restart
You might also need to open up a port to allow external communication, like so:
firewall-cmd --zone=public --add-port=8080/tcp --permanent
My solution was to switch to an Ubuntu VM, because switching my docker-compose setup to the default "bridged" network broke my aliases, which I really needed.
The only remaining question here is why, even after configuring firewalld, a user-configured network in docker-compose cannot access the external IP, forcing the switch to the default bridged network.
My ssh stopped working after I successfully installed Docker (following the official instructions, https://docs.docker.com/engine/installation/) on an ubuntu machine, A. Now my laptop cannot ssh to A, but it can reach other machines, say B, sitting in the same network environment as A. A can ssh to B and B can also ssh to A. What could be the problem? Can anyone suggest how I can diagnose this?
If you are using a VPN service, you might be encountering an IP conflict between the docker0 interface and your VPN.
To resolve this:
Stop the docker service:
sudo service docker stop
Remove the old docker0 interface created by docker:
sudo ip link del docker0
Configure the docker0 bridge (in my case I only had to define the "bip" option).
Start the docker service:
sudo service docker start
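For the docker0 bridge configuration step, here is a hedged sketch of a daemon.json using the "bip" option. The 172.26.0.1/16 range is a hypothetical choice; pick any private range your VPN does not hand out:

```shell
# Write the daemon.json locally first, then install it as root
cat > daemon.json <<'EOF'
{
  "bip": "172.26.0.1/16"
}
EOF
# Install and restart (requires root):
#   sudo mkdir -p /etc/docker
#   sudo cp daemon.json /etc/docker/daemon.json
#   sudo service docker start
```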
Most probably there is an IP conflict between the docker0 interface and your VPN service. As already answered, the way to fix it is to stop the docker service, remove the docker0 interface, and configure the daemon.json file. I added the following lines to my daemon.json:
{
  "default-address-pools": [
    {"base": "10.10.0.0/16", "size": 24}
  ]
}
My VPN was providing me an IP like 192.168.x.x, so I chose a base IP that does not fall in that range. Note that the daemon.json file does not exist by default, so you have to create it in /etc/docker/.
Since Docker 1.10 (and libnetwork update) we can manually give an IP to a container inside a user-defined network, and that's cool!
I want to give a container an IP address in my LAN (like we can do with Virtual Machines in "bridge" mode). My LAN is 192.168.1.0/24, all my computers have IP addresses inside it. And I want my containers having IPs in this range, in order to reach them from anywhere in my LAN (without NAT/PAT/etc...).
I obviously read Jessie Frazelle's blog post and a lot of other posts, here and everywhere, like:
How to set a docker container's iP?
How to assign specific IP to container and make that accessible outside of VM host?
and so much more, but nothing worked; my containers still have IP addresses "inside" my docker host, and are not reachable by other computers on my LAN.
Reading Jessie Frazelle's blog post, I thought that (since she uses public IPs) what I want to do should be possible.
Edit: Indeed, if I do something like:
docker network create --subnet 192.168.1.0/24 --gateway 192.168.1.1 homenet
docker run --rm -it --net homenet --ip 192.168.1.100 nginx
The new interface on the docker host (br-[a-z0-9]+) takes the --gateway IP, which is my router's IP. And having the same IP on two computers on the network... BOOM
Thanks in advance.
EDIT: This solution is now obsolete. Since version 1.12, Docker provides two network drivers: macvlan and ipvlan. They allow assigning a static IP from the LAN network. See the answer below.
After looking for people who had the same problem, we arrived at a workaround.
To sum up:
(V)LAN is 192.168.1.0/24
Default Gateway (= router) is 192.168.1.1
Multiple Docker Hosts
Note: we have two NICs: eth0 and eth1 (which is dedicated to Docker)
What do we want:
We want to have containers with IPs in the 192.168.1.0/24 network (like the computers), without any NAT/PAT/translation/port-forwarding/etc...
Problem
When doing this:
docker network create --subnet 192.168.1.0/24 --gateway 192.168.1.1 homenet
we are able to give containers the IPs we want, but the bridge created by docker (br-[a-z0-9]+) will have the IP 192.168.1.1, which is our router.
Solution
1. Setup the Docker Network
Use the DefaultGatewayIPv4 parameter:
docker network create --subnet 192.168.1.0/24 --aux-address "DefaultGatewayIPv4=192.168.1.1" homenet
By default, Docker will give the bridge interface (br-[a-z0-9]+) the first IP, which might already be taken by another machine. The solution is to use the --gateway parameter to tell docker to assign an arbitrary IP (which is available):
docker network create --subnet 192.168.1.0/24 --aux-address "DefaultGatewayIPv4=192.168.1.1" --gateway=192.168.1.200 homenet
We can specify the bridge name by adding -o com.docker.network.bridge.name=br-home-net to the previous command.
2. Bridge the bridge !
Now we have a bridge (br-[a-z0-9]+) created by Docker. We need to bridge it to a physical interface (in my case I have two NICs, so I'm using eth1 for that):
brctl addif br-home-net eth1
3. Delete the bridge IP
We can now delete the IP address from the bridge, since we don't need it:
ip a del 192.168.1.200/24 dev br-home-net
The IP 192.168.1.200 can be used as the bridge IP on multiple docker hosts, since we don't actually use it, and we remove it.
Docker now supports Macvlan and IPvlan network drivers. The Docker documentation for both network drivers can be found here.
With both drivers you can implement your desired scenario (configure a container to behave like a virtual machine in bridge mode):
Macvlan: Allows a single physical network interface (master device) to have an arbitrary number of slave devices, each with its own MAC address.
Requires Linux kernel v3.9–3.19 or 4.0+.
IPvlan: Allows you to create an arbitrary number of slave devices for your master device which all share the same MAC address.
Requires Linux kernel v4.2+ (support for earlier kernels exists but is buggy).
See the kernel.org IPVLAN Driver HOWTO for further information.
Container connectivity is achieved by putting one of the slave devices into the network namespace of the container to be configured. The master device remains on the host operating system (default namespace).
As a rule of thumb, you should use the IPvlan driver if the Linux host that is connected to the external switch / router has a policy configured that allows only one MAC address per port. That's often the case in VMware ESXi environments!
Another important thing to remember (Macvlan and IPvlan): Traffic to and from the master device cannot be sent to and from slave devices. If you need to enable master to slave communication see section "Communication with the host (default-ns)" in the "IPVLAN – The beginning" paper published by one of the IPvlan authors (Mahesh Bandewar).
Use the official Docker driver:
As of Docker v1.12.0-rc2, the new MACVLAN driver is now available in an official Docker release:
MacVlan driver is out of experimental #23524
These new drivers have been well documented by the author(s), with usage examples.
At the end of the day it should provide similar functionality, be easier to set up, and have fewer bugs / other quirks.
Seeing Containers on the Docker host:
The only caveat with the new official macvlan driver is that the docker host machine cannot see / communicate with its own containers. That might be desirable or not, depending on your specific situation.
This issue can be worked around if you have more than one NIC on your docker host machine, with both NICs connected to your LAN. Then you can either A) dedicate one of your docker host's two NICs to docker exclusively, using the remaining NIC for the host to access the LAN; or B) add specific routes to only those containers you need to access via the 2nd NIC. For example:
sudo route add -host $container_ip gw $lan_router_ip $if_device_nic2
Method A) is useful if you want to access all your containers from the docker host and you have multiple hardwired links.
Whereas method B) is useful if you only require access to a few specific containers from the docker host, or if your 2nd NIC is a wifi card and would be much slower at handling all of your LAN traffic, for example on a laptop computer.
Installation:
If you cannot see the pre-release -rc2 candidate on ubuntu 16.04, temporarily add or modify this line in your /etc/apt/sources.list to say:
deb https://apt.dockerproject.org/repo ubuntu-xenial testing
instead of main (which is stable releases).
I no longer recommend this solution, so it's been removed. It was using the bridge driver and brctl.
There is a better and official driver now. See other answer on this page: https://stackoverflow.com/a/36470828/287510
Here is an example of using macvlan. It starts a web server at http://10.0.2.1/.
These commands and Docker Compose file work on QNAP and QNAP's Container Station. Notice that QNAP's network interface is qvs0.
Commands:
The blog post "Using Docker macvlan networks" by Lars Kellogg-Stedman explains what the commands mean.
docker network create -d macvlan -o parent=qvs0 --subnet 10.0.0.0/8 --gateway 10.0.0.1 --ip-range 10.0.2.0/24 --aux-address "host=10.0.2.254" macvlan0
ip link del macvlan0-shim link qvs0 type macvlan mode bridge
ip link add macvlan0-shim link qvs0 type macvlan mode bridge
ip addr add 10.0.2.254/32 dev macvlan0-shim
ip link set macvlan0-shim up
ip route add 10.0.2.0/24 dev macvlan0-shim
docker run --network="macvlan0" --ip=10.0.2.1 -p 80:80 nginx
Docker Compose
Use version 2 because version 3 does not support the other network configs, such as gateway, ip_range, and aux_address.
version: "2.3"
services:
  HTTPd:
    image: nginx:latest
    ports:
      - "80:80/tcp"
      - "80:80/udp"
    networks:
      macvlan0:
        ipv4_address: "10.0.2.1"
networks:
  macvlan0:
    driver: macvlan
    driver_opts:
      parent: qvs0
    ipam:
      config:
        - subnet: "10.0.0.0/8"
          gateway: "10.0.0.1"
          ip_range: "10.0.2.0/24"
          aux_address: "host=10.0.2.254"
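Assuming the file above is saved as docker-compose.yml, a usage sketch:

```shell
# Bring the stack up in the background
docker-compose up -d
# The nginx container should then answer on http://10.0.2.1/ from other hosts
# on the LAN (but not from the Docker host itself, unless a shim interface
# like macvlan0-shim above is configured).
```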
It's possible to map a physical interface into a container via pipework.
Connect a container to a local physical interface
pipework eth2 $(docker run -d hipache /usr/sbin/hipache) 50.19.169.157/24
pipework eth3 $(docker run -d hipache /usr/sbin/hipache) 107.22.140.5/24
There may be a native way now but I haven't looked into that for the 1.10 release.