Vagrant with VPN connection over host computer

I'm trying to get my Vagrant CentOS box connected to a VPN through my host computer. I followed this: https://gist.github.com/mitchellh/1277049
but I still can't connect to the VPN-only hosts.
I'm on Vagrant version 1.3.5 and CentOS release 6.4.
Vagrant config: config.vm.network :public_network and, as noted by the link above, I have
vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
With this setup I don't get any errors; it just doesn't seem to be working. I can reach the VPN hosts from my host machine but not through my VM. When the VM is booting I choose my 2) en0: Ethernet 1 connection.

As Terry Wang answered, remove public_network and then follow this answer: https://superuser.com/questions/542709/vagrant-share-host-vpn-with-guest
Newer versions of Vagrant will only use the host for DNS resolution with this:
config.vm.provider :virtualbox do |vb|
  vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
end

The Gist 1277049 is using default NAT networking for the Vagrant box.
However, you are using Public Network (Bridged) with your en0. That's why it is NOT working.
NOTE: I don't think you can bridge to a VPN connection (it's a virtual adapter with no driver to bridge to). By using NAT, you are able to access the systems on the other side of the VPN connection.
To fix, just comment out the config.vm.network :public_network line. By default it'll use NAT and the box should be able to access whatever the host is capable of.
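For reference, here is a minimal Vagrantfile sketch of the working setup (the box name is a placeholder; the rest follows the Gist and the advice above):

Vagrant.configure("2") do |config|
  config.vm.box = "centos-6.4"   # placeholder box name
  # No config.vm.network :public_network line: the default NAT interface
  # routes through the host, so the host's VPN routes are usable.
  config.vm.provider :virtualbox do |vb|
    # Resolve DNS via the host so VPN-only hostnames resolve inside the box
    vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
  end
end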

Vagrant multiple networking

I have installed Magento 2 in a Vagrant box inside a Docker machine. This machine uses port forwarding; I set up a private network with NAT and host-only adapters, and right now Magento 2 is only accessible from the host machine.
I also need to access it from other machines connected to the local network, so I tried changing the private network to a public (bridged) network.
Vagrant File:
Vagrant.configure("2") do |config|
  config.vm.box = "machine"
  config.ssh.username = "vagrant"
  config.vm.hostname = "www.myhost.net"
  config.ssh.forward_agent = "true"
  config.vm.network "public_network", ip: "192.168.56.40"
  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", "2048"]
  end
  if Vagrant::Util::Platform.windows?
    config.vm.synced_folder ".", "/vagrant", :mount_options => ["dmode=777", "fmode=777"]
  else
    config.vm.synced_folder ".", "/vagrant", :nfs => { :mount_options => ["dmode=777", "fmode=777"] }
  end
end
But it throws:
NFS requires a host-only network to be created.
Please add a host-only network to the machine (with either DHCP or a
static IP) for NFS to work.
I need to add multiple networks to Vagrant:
NAT
host-only (for NFS)
bridge (to access the remote machine)
How can I resolve this?
You need to change your public_network to a private_network for NFS to work:
If you are using the VirtualBox provider, you will also need to make sure you have a private network set up. This is due to a limitation of VirtualBox's built-in networking. With VMware, you do not need this.
So:
you can switch to VMware (but that involves additional fees)
you can stop using NFS
you can set up another network interface as a bridge and use that interface when you need to connect to the remote machine; you should be able to ping out (ping -I ethX mylocalmachine), but I am not sure how to get inbound connections working (see the sketch below)
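A rough sketch of such a multi-network Vagrantfile (the private-network IP and the host bridge interface name are assumptions; adjust them to your environment):

Vagrant.configure("2") do |config|
  config.vm.box = "machine"
  # NAT is always present as the first interface
  # host-only/private network: VirtualBox needs this for NFS synced folders
  config.vm.network "private_network", ip: "192.168.56.40"
  # bridged/public network: makes the box reachable from other LAN machines
  config.vm.network "public_network", bridge: "eth0"   # assumed host interface name
  config.vm.synced_folder ".", "/vagrant", :nfs => { :mount_options => ["dmode=777", "fmode=777"] }
end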

Docker 1.10 container's IP in LAN

Since Docker 1.10 (and libnetwork update) we can manually give an IP to a container inside a user-defined network, and that's cool!
I want to give a container an IP address in my LAN (like we can do with Virtual Machines in "bridge" mode). My LAN is 192.168.1.0/24, all my computers have IP addresses inside it. And I want my containers having IPs in this range, in order to reach them from anywhere in my LAN (without NAT/PAT/etc...).
I obviously read Jessie Frazelle's blog post and a lot of other posts here and elsewhere, like:
How to set a docker container's IP?
How to assign specific IP to container and make that accessible outside of VM host?
and so much more, but nothing came of it; my containers still have IP addresses "inside" my docker host, and are not reachable by other computers on my LAN.
Reading Jessie Frazelle's blog post, I thought (since she uses public IPs) that what I want to do should be possible.
Edit: Indeed, if I do something like:
docker network create --subnet 192.168.1.0/24 --gateway 192.168.1.1 homenet
docker run --rm -it --net homenet --ip 192.168.1.100 nginx
the new interface on the docker host (br-[a-z0-9]+) takes the --gateway IP, which is my router's IP. And the same IP on two computers on the network... BOOM.
Thanks in advance.
EDIT: This solution is now obsolete. Since version 1.12, Docker provides two network drivers: macvlan and ipvlan. They allow assigning a static IP from the LAN network. See the answer below.
After looking at how other people with the same problem approached it, we arrived at a workaround:
Summary:
(V)LAN is 192.168.1.0/24
Default Gateway (= router) is 192.168.1.1
Multiple Docker hosts
Note: we have two NICs: eth0 and eth1 (which is dedicated to Docker)
What we want:
We want containers with IPs in the 192.168.1.0/24 network (just like regular computers), without any NAT/PAT/translation/port-forwarding/etc.
Problem
When doing this:
docker network create --subnet 192.168.1.0/24 --gateway 192.168.1.1 homenet
we are able to give containers the IPs we want, but the bridge created by Docker (br-[a-z0-9]+) will take the IP 192.168.1.1, which is our router's.
Solution
1. Setup the Docker Network
Use the DefaultGatewayIPv4 parameter:
docker network create --subnet 192.168.1.0/24 --aux-address "DefaultGatewayIPv4=192.168.1.1" homenet
By default, Docker will give the bridge interface (br-[a-z0-9]+) the first IP, which might already be taken by another machine. The solution is to use the --gateway parameter to tell Docker to assign an arbitrary IP (one that is available):
docker network create --subnet 192.168.1.0/24 --aux-address "DefaultGatewayIPv4=192.168.1.1" --gateway=192.168.1.200 homenet
We can specify the bridge name by adding -o com.docker.network.bridge.name=br-home-net to the previous command.
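Putting the pieces together, the full command for this step looks like:

docker network create --subnet 192.168.1.0/24 \
  --aux-address "DefaultGatewayIPv4=192.168.1.1" \
  --gateway=192.168.1.200 \
  -o com.docker.network.bridge.name=br-home-net homenet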
2. Bridge the bridge!
Now we have a bridge (br-[a-z0-9]+) created by Docker. We need to bridge it to a physical interface (in my case I have two NICs, so I'm using eth1 for that):
brctl addif br-home-net eth1
3. Delete the bridge IP
We can now delete the IP address from the bridge, since we don't need one:
ip a del 192.168.1.200/24 dev br-home-net
The IP 192.168.1.200 can be reused as the bridge IP on multiple Docker hosts, because we don't actually use it and we remove it afterwards.
Docker now supports Macvlan and IPvlan network drivers. The Docker documentation for both network drivers can be found here.
With both drivers you can implement your desired scenario (configure a container to behave like a virtual machine in bridge mode):
Macvlan: Allows a single physical network interface (master device) to have an arbitrary number of slave devices, each with its own MAC address.
Requires Linux kernel v3.9–3.19 or 4.0+.
IPvlan: Allows you to create an arbitrary number of slave devices for your master device which all share the same MAC address.
Requires Linux kernel v4.2+ (support for earlier kernels exists but is buggy).
See the kernel.org IPVLAN Driver HOWTO for further information.
Container connectivity is achieved by putting one of the slave devices into the network namespace of the container to be configured. The master device remains on the host operating system (default namespace).
As a rule of thumb you should use the IPvlan driver if the Linux host that is connected to the external switch / router has a policy configured that allows only one MAC per port. That's often the case in VMWare ESXi environments!
Another important thing to remember (for both Macvlan and IPvlan): traffic to and from the master device cannot be sent to and from slave devices. If you need to enable master-to-slave communication, see the section "Communication with the host (default-ns)" in the "IPVLAN – The beginning" paper published by one of the IPvlan authors (Mahesh Bandewar).
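As a minimal, hypothetical macvlan example (the parent interface eth0, the network name, and the addresses are assumptions; adapt them to your LAN):

# Create a macvlan network attached to the host's eth0
docker network create -d macvlan \
  --subnet 192.168.1.0/24 --gateway 192.168.1.1 \
  -o parent=eth0 lan-net
# The container gets a first-class address on the LAN
docker run --rm -it --net lan-net --ip 192.168.1.100 nginx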
Use the official Docker driver:
As of Docker v1.12.0-rc2, the new MACVLAN driver is now available in an official Docker release:
MacVlan driver is out of experimental #23524
These new drivers have been well documented by the author(s), with usage examples.
At the end of the day it should provide similar functionality, be easier to set up, and have fewer bugs / other quirks.
Seeing Containers on the Docker host:
The only caveat with the new official macvlan driver is that the Docker host machine cannot see / communicate with its own containers. That might be desirable or not, depending on your specific situation.
This issue can be worked around if you have more than one NIC on your Docker host machine and both NICs are connected to your LAN. Then you can either A) dedicate one of your Docker host's two NICs to be for Docker exclusively, and use the remaining NIC for the host to access the LAN.
Or B) add specific routes to only those containers you need to access via the 2nd NIC. For example:
sudo route add -host $container_ip gw $lan_router_ip $if_device_nic2
Method A) is useful if you want to access all your containers from the Docker host and you have multiple hardwired links.
Whereas method B) is useful if you only require access to a few specific containers from the Docker host, or if your 2nd NIC is a Wi-Fi card that would be much slower for handling all of your LAN traffic, for example on a laptop.
Installation:
If you cannot see the pre-release -rc2 candidate on Ubuntu 16.04, temporarily add or modify this line in your /etc/apt/sources.list to say:
deb https://apt.dockerproject.org/repo ubuntu-xenial testing
instead of main (which is stable releases).
I no longer recommend this solution, so it's been removed. It was using the bridge driver and brctl.
There is a better, official driver now. See the other answer on this page: https://stackoverflow.com/a/36470828/287510
Here is an example of using macvlan. It starts a web server at http://10.0.2.1/.
These commands and Docker Compose file work on QNAP and QNAP's Container Station. Notice that QNAP's network interface is qvs0.
Commands:
The blog post "Using Docker macvlan networks" by Lars Kellogg-Stedman explains what the commands mean.
# Create the macvlan network on QNAP's qvs0 interface, reserving 10.0.2.254 for the host
docker network create -d macvlan -o parent=qvs0 --subnet 10.0.0.0/8 --gateway 10.0.0.1 --ip-range 10.0.2.0/24 --aux-address "host=10.0.2.254" macvlan0
# Recreate a macvlan "shim" interface so the host itself can reach the containers
ip link del macvlan0-shim link qvs0 type macvlan mode bridge
ip link add macvlan0-shim link qvs0 type macvlan mode bridge
ip addr add 10.0.2.254/32 dev macvlan0-shim
ip link set macvlan0-shim up
# Route container traffic through the shim
ip route add 10.0.2.0/24 dev macvlan0-shim
# Start the web server with a fixed LAN IP
docker run --network="macvlan0" --ip=10.0.2.1 -p 80:80 nginx
Docker Compose
Use version 2, because version 3 does not support the other network configs, such as gateway, ip_range, and aux_addresses.
version: "2.3"
services:
HTTPd:
image: nginx:latest
ports:
- "80:80/tcp"
- "80:80/udp"
networks:
macvlan0:
ipv4_address: "10.0.2.1"
networks:
macvlan0:
driver: macvlan
driver_opts:
parent: qvs0
ipam:
config:
- subnet: "10.0.0.0/8"
gateway: "10.0.0.1"
ip_range: "10.0.2.0/24"
aux_address: "host=10.0.2.254"
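To start it and check reachability from another machine on the LAN (a hypothetical verification step):

docker-compose up -d
# from a different machine in the 10.0.0.0/8 network:
curl http://10.0.2.1/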
It's possible to map a physical interface into a container via pipework.
Connect a container to a local physical interface
pipework eth2 $(docker run -d hipache /usr/sbin/hipache) 50.19.169.157/24
pipework eth3 $(docker run -d hipache /usr/sbin/hipache) 107.22.140.5/24
There may be a native way now but I haven't looked into that for the 1.10 release.

Vagrant cannot forward the specified ports on this VM

I have Vagrant in use for one box profile. Now I want to use Vagrant for another box (b2), but it says that the bioiq instance is consuming the forwarded port 2222 (which it is).
Now, if I configure b2 with the below, Vagrant still tries to use 2222.
Vagrant.configure("2") do |config|
  config.vm.box = 'precise32'
  config.vm.box_url = 'http://files.vagrantup.com/precise32.box'
  config.vm.network :forwarded_port, guest: 22, host: 2323
  # Neither of these fix my problem
  # config.vm.network :private_network, type: :dhcp
  # config.vm.network :private_network, ip: "10.0.0.200"
end
I've tried various ways from other SO questions to set the :forwarded_port (see here and here). I also tried this Google Group post, to no avail. I keep getting this message.
Vagrant cannot forward the specified ports on this VM, since they
would collide with some other application that is already listening
on these ports. The forwarded port to 2222 is already in use
on the host machine.
To fix this, modify your current project's Vagrantfile to use another
port. Example, where '1234' would be replaced by a unique host port:
config.vm.network :forwarded_port, guest: 22, host: 1234
Sometimes, Vagrant will attempt to auto-correct this for you. In this
case, Vagrant was unable to. This is usually because the guest machine
is in a state which doesn't allow modifying port forwarding.
I don't know why Vagrant consistently ignores my directives. The posted configuration doesn't work. Has anyone overcome this?
In the case of the ssh port, Vagrant solves port collisions by itself:
==> ubuntu64: Fixed port collision for 22 => 2222. Now on port 2200.
However, you can still create an unavoidable collision by:
Creating a first vagrant env (it will get port 2222 for ssh)
Suspending that env (vagrant suspend)
Creating a second vagrant env (it will again get port 2222, since it is now unused)
Trying to bring the first environment up again with vagrant up
You will get the error message you are getting now.
The solution is to use vagrant reload, which lets Vagrant discard the virtual machine state (meaning it will shut the VM down the hard way, so be careful if you have any unsaved work there) and start the environment again, resolving any ssh port collisions on the way by itself, as sketched below.
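A sketch of the collision sequence and the fix (the directory names are hypothetical):

cd ~/env1 && vagrant up       # gets port 2222 for ssh
vagrant suspend               # port 2222 is released while suspended
cd ~/env2 && vagrant up       # grabs the now-unused port 2222
cd ~/env1 && vagrant up       # collision: 2222 is taken and the saved state can't be modified
vagrant reload                # discards the saved state and picks a free port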
I've just run into a problem on current versions of Mac OS X (10.9.4) and VirtualBox (4.3.14) where the default ssh port 2222 is both unused and unbound by vagrant up. It was causing the sanity-check ssh connection to time out indefinitely.
This isn't the exact same problem, but an explicit forward resolved it:
config.vm.network :forwarded_port, guest: 22, host: 2201, id: "ssh", auto_correct: true
This suggestion comes from a comment on the Vagrant GitHub issue 1740.
It's not clear whether the port forwarded to 22 is being detected or if the ID is used, but it's working for me.
My computer runs Windows 10, and I solved this problem by freeing up port 8080, because the message said "the forwarded port 8080 is already in use on the host machine."
So I edited the Vagrantfile and commented out the port 8080 forward.
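The offending line in the Vagrantfile looked something like this before commenting it out (the guest port is a guess for illustration):

# config.vm.network "forwarded_port", guest: 80, host: 8080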

Setting subnet mask In vagrant for public network

I need to run some services in Vagrant so that they are accessible in a browser. By setting the network type to public_network in the Vagrantfile, I get a Vagrant IP (10.251.70.201).
Using this Vagrant IP, I can reach these services from other devices' browsers (as long as they are in the same network: 10.251.70.*). Now I need to expand the visibility of the Vagrant IP to other networks (like 10.251.*.*). How can I achieve this?
I assume you are using the VirtualBox provider. As an example:
config.vm.network "public_network", :netmask => "255.255.0.0"
This is an example using the VirtualBox provider with a Vagrant version 2 config file:
config.vm.network "public_network", bridge: "eth0", ip:"192.168.1.20", netmask:"255.255.0.0"

browser in host can not see vagrant box, portforward does not work

I have installed Vagrant on my Windows XP machine, and in my Vagrantfile I have:
Vagrant::Config.run do |config|
  # Setup the box
  config.vm.box = "lucid32"
  config.vm.forward_port 80, 8080
  config.vm.network :hostonly, "192.168.10.200"
end
But I see no sign of my Vagrant box when I type "http://192.168.10.200:8080" into the browser.
The IP address of the virtual box is correct, because from within the box I have:
vagrant#lucid32:~$ ifconfig
....
eth1 Link encap:Ethernet HWaddr 08:00:27:79:c5:4b
inet addr:192.168.10.200 Bcast:192.168.10.255 Mask:255.255.255.0
There seems to be no firewall problem, because if I type
vagrant#lucid32:~$ curl 'http://google.com'
it works fine.
I have read Vagrant's port forwarding not working
and tried:
vagrant#lucid32:~$ curl 'http://localhost:80'
curl: (7) couldn't connect to host
and also
vagrant#lucid32:~$ curl 'http://localhost:8080'
curl: (7) couldn't connect to host
So it looks like port forwarding is not working...
If you know what I can do to access my box from the host's browser, can you help me?
Thanks in advance
If you just started a Vagrant box with this Vagrantfile, there is nothing more than an empty Ubuntu Lucid, which does not run any services yet. So nothing is served on port 80; this is why there is nothing to see, either from inside the box on port 80 or from the host machine on port 8080.
For your Vagrant machine to provide some services (such as a web server on port 80), you have to do some provisioning. You can do it manually, or using Chef or Puppet, which are hooked into Vagrant's up process.
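For example, a minimal shell provisioner (in current Vagrant syntax) that installs a web server so something actually listens on port 80; the package choice is illustrative:

config.vm.provision :shell, inline: <<-SHELL
  apt-get update
  apt-get install -y apache2   # any service listening on port 80 will do
SHELL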
I had a similar problem. Sometimes using port forwarding for ports below 2000 is a problem. What worked for me was choosing ports above 2000. My Vagrantfile now looks like:
config.vm.network :forwarded_port, host: 4500, guest: 9000
Typing localhost:4500 on my host machine now works fine. It seems like you are on an older version of Vagrant than mine, so you can edit your Vagrantfile to something like
config.vm.forward_port 9000, 4500
Now typing localhost:4500 on your host machine should work fine.
Good luck,
