I need some information about routing public IP addresses assigned to the hypervisor into a VM.
I have installed the Xen hypervisor on CentOS 6.5. I have one NIC with IP 80.86.84.34 and mask 255.255.255.0, plus an additional IP 85.25.14.195 with mask 255.255.255.255.
Dom0 has eth0 and virbr0 with a virtual DHCP server; the VM has address 192.168.122.4 with mask 255.255.255.0 and a working outbound internet connection.
How do I correctly set dom0 to route connections for 85.25.14.195 into the VM?
Many thanks for your help, and apologies if this is a basic question that has been answered before; if so, please point me in the right direction.
First EDIT
I have managed to route the public IP by adding the route below in Dom0. DomU now correctly responds to packets received by Dom0 for the public IP and forwarded over virbr0.
route add -net 85.25.14.195 gw 192.168.122.1 netmask 255.255.255.255
My follow-up question is: what rule is required in iptables to allow this traffic? Currently it is blocked when the firewall is running.
Second EDIT
OK, so I figured out the iptables part: I had to remove the REJECT line for virbr0, and I also had to add the following rule so that outbound traffic leaves Dom0 with the correct source IP:
-A POSTROUTING -s 192.168.122.2 -p tcp -j SNAT --to 85.25.14.195
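For reference, instead of deleting the REJECT line entirely, it should also be possible to insert an explicit ACCEPT for the forwarded public IP above it (a sketch, assuming the stock libvirt FORWARD rules):
iptables -I FORWARD -o virbr0 -d 85.25.14.195 -j ACCEPT   # allow inbound forwarded traffic for the public IP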
You should be able to assign 85.25.14.195 as an alias IP on virbr0 (e.g. virbr0:1) and do simple NAT or forwarding. You need to run sysctl -w net.ipv4.ip_forward=1 to be able to forward traffic arriving on the public IP to the internal private IP.
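For example, a minimal sketch of that approach, assuming the VM keeps its private address 192.168.122.4 from the question:
ip addr add 85.25.14.195/32 dev virbr0       # alias the public IP on the bridge
sysctl -w net.ipv4.ip_forward=1              # enable forwarding
iptables -t nat -A PREROUTING -d 85.25.14.195 -j DNAT --to-destination 192.168.122.4
iptables -t nat -A POSTROUTING -s 192.168.122.4 -j SNAT --to-source 85.25.14.195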
Related
I have installed Docker with default settings on 3 physical machines. Docker created the interface docker0 with the default IP 172.17.0.1 in bridge mode.
I expected that this network would be private. The problem is that I cannot ping 172.17.0.1, but I can arping 172.17.0.1. Why is this so? I want this network to be completely private.
➜ ~ arping -I eno1 172.17.0.1
ARPING 172.17.0.1 from 172.19.20.35 eno1
Unicast reply from 172.17.0.1 [00:19:99:16:3E:24] 0.678ms
Unicast reply from 172.17.0.1 [00:19:99:16:3E:70] 0.685ms
Unicast reply from 172.17.0.1 [70:4D:7B:3D:83:33] 0.687ms
Is it safe to run this on a corporate network, or should I get permission from the sysadmins?
The created bridge is private to the host it is running on. Can you verify that the IP 172.17.0.1 is not part of your private/corporate network? It's possible that some other host in your network is responding.
If this is the problem, you should probably use another IP and CIDR for the docker0 bridge. Having clashes between docker internal CIDRs and private/corporate network CIDRs will result in strange and hard to debug behavior inside your containers.
Please read https://docs.docker.com/engine/userguide/networking/default_network/custom-docker0/ for details on how to customize these settings.
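For example, one way to move docker0 onto a range that does not clash is the daemon's bip setting (depending on your Docker version this goes in /etc/docker/daemon.json or as a --bip daemon flag); a sketch assuming 10.200.0.1/24 is unused in your corporate network:
echo '{ "bip": "10.200.0.1/24" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker      # docker0 now comes up as 10.200.0.1/24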
I am new to OpenStack, and I followed the Icehouse installation guide for Ubuntu 12.04/14.04.
I chose the 3-node architecture: controller, nova, neutron.
The 3 nodes are installed in VMs using nested KVM. KVM is supported inside the VMs, so nova will use virt_type=kvm. On the controller I created 2 NICs: eth0 is a NAT interface with IP 203.0.113.94 and eth1 a host-only interface with IP 10.0.0.11.
On the nova node there are 3 NICs: eth0 NAT 203.0.113.23, eth1 host-only 10.0.0.31, and eth2 another host-only 10.0.1.31.
On the neutron node there are 3 NICs: eth0 NAT 203.0.113.234, eth1 host-only 10.0.0.21, and eth2 another host-only 10.0.1.21. Following the installation guide, on the neutron node I created a br-ex (with a port to eth0), which took over the settings of eth0; eth0 is now configured as:
auto eth0
iface eth0 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    up ip link set $IFACE promisc on
    down ip link set $IFACE promisc off
    down ifconfig $IFACE down
Everything seemed fine: I can create networks, routers, etc., and boot instances, but I have this problem.
When I launch an instance it gets a fixed IP, but when I log into the instance (CirrOS) I can't ping anything, and ifconfig shows no IP.
I noticed that in the demo-net (tenant network) properties, under the subnet, the ports field shows 3 ports:
172.16.1.1   network:router_interface   active
172.16.1.3   network:dhcp               active
172.16.1.6   compute:nova               down
I searched for solutions over the net but couldn't find anything!
Any help?
Ask me if you want specific logs because I don't know which ones to post!
Thanks anyway!
It looks like you are using the fixed IP to ping. If so, please assign a floating IP to your instance and then try to ping.
If you have already assigned a floating IP and you are pinging using that IP, please upload the log of your instance.
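For example, with the Icehouse CLI, allocating and attaching a floating IP looks roughly like this (the external network name ext-net and the instance name demo-instance1 are just examples; use your own names and the address that floatingip-create returns):
neutron floatingip-create ext-net                        # allocate a floating IP from the external network
nova floating-ip-associate demo-instance1 203.0.113.102  # associate the returned address with the instance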
I am facing an issue with accessing OpenStack VMs on the LAN.
I have set up single-machine (192.168.2.15) OpenStack using devstack, so all VMs are running inside this machine.
My machine (192.168.2.15) has one network card (eth0), and I am using nova networking; I have not installed neutron.
I have assigned static IPs on eth0 of all the LAN machines (such as 192.168.2.15 and 192.168.2.16) in the /etc/network/interfaces file.
System information of the Openstack Machine is as below:
Memory usage: 19% IP address for virbr0: 192.168.122.1
Swap usage: 0% IP address for br100: 10.0.0.1
The below works fine:
I can access the internet from VM1 (10.0.0.2, which is an auto-assigned IP).
I can ping the LAN machine (192.168.2.16) from VM1.
The OpenStack machine (192.168.2.15) can ping VM1 (10.0.0.2).
VM1 (10.0.0.2) can ping VM2 (10.0.0.3).
But the LAN machine 192.168.2.16 is not able to ping VM1 (10.0.0.2).
So please suggest how this can be achieved. Please consider me very new to OpenStack and networking.
Thanks !!!
You need to assign a floating IP to the VMs you create if you want a host from outside the OpenStack network to connect to them. The internal IPs are only accessible from inside the OpenStack network.
See how to assign a floating IP to a VM here: http://docs.openstack.org/user-guide/content/floating_ip_allocate.html
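With nova-network the commands are typically along these lines (the instance name and addresses here are just examples):
nova floating-ip-create                   # allocate a floating IP from the default pool
nova add-floating-ip VM1 192.168.2.100    # associate it with the instance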
To access the VM's floating IP from another host (that is not the devstack host) you should make sure that the devstack host is configured to forward packets. You can do this with:
sudo bash
echo 1 > /proc/sys/net/ipv4/ip_forward            # enable IP forwarding
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp   # answer ARP requests for the floating IPs on eth0
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE   # NAT traffic leaving through eth0
See more details here:
http://barakme.tumblr.com/post/70895539608/openstack-in-a-box-setting-up-devstack-havana-on-your
Adding a route on the client machine to the OpenStack VM subnet helped me.
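For example (a sketch using the addresses from the question; adjust to your setup):
# on the 192.168.2.16 LAN machine, route the VM subnet via the devstack host
sudo ip route add 10.0.0.0/24 via 192.168.2.15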
I have a problem at the moment and really don't know where the mistake is. I got a root server from my ISP. This root server already has one IP included, and today I booked two more IP addresses. What I want to do now is map these two new IP addresses to two virtual machines while keeping the included IP for the root server itself. How do I realize this?
I thought something like:
br0 - holds the original IP of the root server
br0:0 - holds first IP of first virtual Machine
br0:1 - holds second IP of second virtual Machine
But this doesn't work. Any ideas? I'm really frustrated; I worked the whole day on it with no solution.
I was also struggling with a similar scenario. I've got a server, and I got to the point where setting up a bridge cut me off and I had to restart to be able to reach it again. Anyway, I managed to handle it with iptables.
# create an alias for your second (public) IP address
# (let's say it is 111.222.333.2 and the VM's local address is 192.168.1.2)
ifconfig eth0:1 111.222.333.2
# add the proper netmask if you have a subnet
# now you should be able to ping this second address from the outside world - try it
# (that is, if you have not set up a firewall that blocks pings; flush the iptables rules if you are not sure)
# set up the NAT rules (network address translation: outside IP -> local IP, and back, local IP -> outside IP)
# assumes your virtual machine lives at 192.168.1.2
iptables -t nat -A PREROUTING -d 111.222.333.2 -j DNAT --to-destination 192.168.1.2
iptables -t nat -A POSTROUTING -s 192.168.1.2 -j SNAT --to-source 111.222.333.2
This helped me with a server which has multiple IP addresses and KVM virtual machines
that were originally running in the default network (forward mode=nat), so at first they only had internet access through NAT and an internal IP; this setup also gives them an outside-world public IP address.
You can also do these redirects on a port-by-port basis by adjusting the iptables rules to match a port (e.g. -d 111.222.333.2 -p tcp --dport 80) and adding the port to the local address in --to-destination (e.g. 192.168.1.2:80), as in the sketch below.
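A port-by-port example, using the same example addresses as above:
iptables -t nat -A PREROUTING -d 111.222.333.2 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.2:80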
You may also need to turn on IP forwarding. You can check it with, for example, sysctl -a | grep forward (where you should see it enabled for your eth0 device), and adjust it if necessary with the proper sysctl command, like:
sysctl -w net.ipv4.ip_forward=1
Map br0 into VM1 and VM2 as a TAP device; inside VM1 and VM2 you will see it as an eth device.
Assign IP1 and IP2 to VM1 and VM2 respectively. With this configuration you can ping from VM1 to VM2 and from the host machine to any guest machine (VM1 or VM2).
The following link will help you set up a TAP device for a VM via the bridge; see the qemu-ifup script specified there and understand it well.
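A minimal sketch of what such a qemu-ifup script typically does (the tap device name tap0 is just an example, and the bridge is assumed to be br0):
ip tuntap add dev tap0 mode tap     # create the tap device
ip link set tap0 up
brctl addif br0 tap0                # attach it to the bridge (or: ip link set tap0 master br0)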
I recently installed a virtual machine under Ubuntu 11.10. Right now, I assume, it is using NAT, and its internal address is 192.168.122.88.
I have set up a web server in my virtual machine and I want to be able to access it when I go to 192.168.122.88. However, right now it times out. When I log in to the virtual machine and try to access localhost, it works.
So, for some reason, my iptables is blocking traffic from the host to the virtual machine (But not the other way around).
How can I allow traffic to flow from my host to my vm so I can see the webserver from the host?
I used Ubuntu Virtual Machine Manager w/KVM and libvirt.
I tried doing something like this:
iptables -t nat -A PREROUTING -d 192.168.0.10 -p tcp --dport 80 -j DNAT --to-destination 192.168.122.88:80
to no avail. Apparently it says there is no route to host??
'No route to host' means that the host machine doesn't have an IP address or route that matches the network you are trying to reach (or you don't even have a default route); make sure you have both networks on the host.
For example:
$ ip route show
default via 192.168.1.254 dev p3p1 src 192.168.1.103
default via 172.16.128.1 dev p3p1
169.254.0.0/16 dev p3p1 scope link metric 1003
172.16.128.0/17 dev p3p1 proto kernel scope link src 172.16.128.2
192.168.1.0/24 dev p3p1 proto kernel scope link src 192.168.1.103
On KVM host machines, I attach the virtual interfaces to some bridge. For example:
<interface type='bridge'>
<mac address='01:02:03:04:05:06'/>
<source bridge='br4'/>
<target dev='vnet4'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
Then I assign an IP address to the bridge on the host and bring it up:
ip address add 192.168.0.1/24 dev br4
ip link set up dev br4
On my virtual machine, I assign some IP address on the same subnet, like 192.168.0.2; then pinging between them should work:
ping 192.168.0.1
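If the bridge does not exist yet on the host, it can be created first with iproute2 (a sketch assuming the bridge name br4 from the XML above; attaching the physical NIC is optional and only needed if the VMs must reach the LAN):
ip link add name br4 type bridge    # create the bridge
ip link set br4 up
# ip link set eth0 master br4       # optional: attach the physical NIC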
Maybe you need to allow forwarded connections to the virtual machines. Try this:
iptables -I FORWARD -d 192.168.122.0/24 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
Hope this helps.