OpenStack networking: can't ping/SSH from/to VMs

I've installed multi-node OpenStack using the DevStack script. I can run VMs, but there is a problem with networking: I can't SSH or ping from one VM to another. I can SSH to a VM only from the host (control1, computeX) where it is running, not from the other hosts. Any suggestions?
nova-compute control1 nova enabled :-)
nova-cert control1 nova enabled :-)
nova-network control1 nova enabled :-)
nova-scheduler control1 nova enabled :-)
nova-consoleauth control1 nova enabled :-)
nova-compute compute1 nova enabled :-)
nova-volume compute1 nova enabled :-)
nova-network compute1 nova enabled :-)
nova-compute compute2 nova enabled :-)
nova-volume compute2 nova enabled :-)
nova-network compute2 nova enabled :-)
control1 /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet static
address 172.16.0.1
#address 172.16.0.101
netmask 255.255.255.0
network 172.16.0.0
broadcast 172.16.0.255
gateway 172.16.0.254
dns-nameservers 8.8.8.8
auto eth1
iface eth1 inet static
address 11.0.0.4
netmask 255.255.255.0
network 11.0.0.0
broadcast 11.0.0.255
compute1 /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet static
address 172.16.0.2
netmask 255.255.255.0
network 172.16.0.0
broadcast 172.16.0.255
gateway 172.16.0.254
dns-nameservers 8.8.8.8
auto eth1
iface eth1 inet static
address 11.0.0.5
netmask 255.255.255.0
network 11.0.0.0
broadcast 11.0.0.255
control1 /etc/nova/nova.conf
[DEFAULT]
verbose=True
auth_strategy=keystone
allow_resize_to_same_host=True
root_helper=sudo /usr/local/bin/nova-rootwrap /etc/nova/rootwrap.conf
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
dhcpbridge_flagfile=/etc/nova/nova.conf
fixed_range=10.1.0.0/16
s3_host=172.16.0.1
s3_port=3333
network_manager=nova.network.manager.FlatDHCPManager
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
my_ip=172.16.0.1
public_interface=eth0
vlan_interface=eth0
flat_network_bridge=br100
flat_interface=eth1
sql_connection=mysql://root:supersecret@172.16.0.1/nova?charset=utf8
libvirt_type=qemu
libvirt_cpu_mode=none
instance_name_template=instance-%08x
novncproxy_base_url=http://172.16.0.1:6080/vnc_auto.html
xvpvncproxy_base_url=http://172.16.0.1:6081/console
vncserver_listen=127.0.0.1
vncserver_proxyclient_address=127.0.0.1
api_paste_config=/etc/nova/api-paste.ini
image_service=nova.image.glance.GlanceImageService
ec2_dmz_host=172.16.0.1
rabbit_host=172.16.0.1
rabbit_password=supersecret
glance_api_servers=172.16.0.1:9292
force_dhcp_release=True
multi_host=True
send_arp_for_ha=True
use_syslog=True
logging_context_format_string=%(asctime)s %(levelname)s %(name)s [%(request_id)s %(user_name)s %(project_name)s] %(instance)s%(message)s
volume_api_class=nova.volume.cinder.API
compute_driver=libvirt.LibvirtDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
enabled_apis=ec2,osapi_compute,metadata

You may need to add rules to the default OpenStack security group to enable ping and SSH:
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
The first rule enables the Internet Control Message Protocol (ICMP) for VM instances (the ping command). The second rule enables TCP connections on port 22, which is used by SSH.
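To double-check that the rules actually landed in the default group, the legacy nova client can list them (assuming nova-network security groups, as in this setup):
nova secgroup-list-rules default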

Try changing network_manager=nova.network.manager.FlatDHCPManager to network_manager=nova.network.manager.FlatManager, and also try other values for your network_manager setting. The docs say FlatManager should work here: http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-flat-networking.html and it is similar to FlatDHCPManager, so I'm not quite sure what the problem is, as it seems you are bound to a physical Ethernet card.
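For reference, with FlatManager the relevant nova.conf lines would look something like this (a sketch reusing the bridge and interface names already in your config; untested):
# /etc/nova/nova.conf -- flat (non-DHCP) networking variant
network_manager=nova.network.manager.FlatManager
flat_network_bridge=br100
flat_interface=eth1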

Try adding the following option to nova.conf, which controls whether the firewall (iptables) will allow traffic between instances:
allow_same_net_traffic=true
It should be on by default, so that's probably not your problem, but it's the first thing I would try.
This option is documented in the networking options table ("Description of nova.conf file configuration options") in the OpenStack Compute Admin Guide.
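If you do add or change it, keep in mind nova only reads nova.conf at startup, so the networking pieces need a restart afterwards. On a package-based install that would be roughly the following; under DevStack, restart the corresponding n-net/n-cpu processes in screen instead:
# restart the services that build the iptables rules (package-based install)
sudo service nova-network restart
sudo service nova-compute restart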

Related

Need to configure second NIC to bridge LXC

I installed Ubuntu 16.04 Server on a machine with 4 network cards. Interfaces eth0 and eth1 are connected to the same switch. eth0 is meant for the remote SSH connection used to manage the server. I want eth1 to be bridged by br0, and I want to use this bridge for LXC containers. This setup did not cause me any problems in a DHCP environment; the challenge is that the network this server sits in is fully static. I received an IP range for this server, all with the same subnet mask and gateway.
Setting up eth0 was no problem:
auto eth0
iface eth0 inet static
address 195.x.x.2
network 195.x.x.0
netmask 255.255.255.0
gateway 195.x.x.1
broadcast 195.x.x.255
dns-nameservers 150.x.x.105 150.x.x.106
The problem comes with the second interface, eth1: because it has the same gateway as eth0, Ubuntu warns that only one default gateway can be set (which is logical). Therefore I set up eth1 as follows:
auto eth1
iface eth1 inet static
address 195.x.x.3
network 195.x.x.0
netmask 255.255.255.0
broadcast 195.x.x.255
The problem with this setup is that I can ping eth0 at 195.x.x.2 from outside, but eth1 cannot be pinged or reached via SSH. I managed to make it work with a lot of routing trickery, but many articles warn that this approach is a hole that only gets deeper once you add a static bridge and containers on top of it.
My question is: does anyone have a straightforward approach for my issue? How should I configure eth0 and eth1 so the containers are bridged normally onto eth1 with static IP addresses?
OK, I solved it in the following manner, still proceeding with the policy-routing approach described in the question. Maybe people with the same issue can use this approach as well, or if somebody knows a better solution, feel free to comment.
On the host:
I enabled ARP filtering:
sysctl -w net.ipv4.conf.all.arp_filter=1
echo "net.ipv4.conf.all.arp_filter = 1" >> /etc/sysctl.conf
Configured the /etc/network/interfaces:
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet static
address 195.x.x.2
network 195.x.x.0
netmask 255.255.255.0
gateway 195.x.x.1
broadcast 195.x.x.255
up ip route add 195.x.x.0/24 dev eth0 src 195.x.x.2 table eth0table
up ip route add default via 195.x.x.1 dev eth0 table eth0table
up ip rule add from 195.x.x.2 table eth0table
up ip route add 195.x.x.0/24 dev eth0 src 195.x.x.2
dns-nameservers 150.x.x.105 150.x.x.106
# The secondary network interface
auto eth1
iface eth1 inet manual
# LXC bridge interface
auto br0
iface br0 inet static
address 195.x.x.3
network 195.x.x.0
netmask 255.255.255.0
bridge_ports eth1
bridge_stp off
bridge_fd 0
bridge_maxwait 0
up ip route add 195.x.x.0/24 dev br0 src 195.x.x.3 table br0table
up ip route add default via 195.x.x.1 dev br0 table br0table
up ip rule add from 195.x.x.3 table br0table
up ip route add 195.x.x.0/24 dev br0 src 195.x.x.3
Added the following lines to /etc/iproute2/rt_tables:
...
10 eth0table
20 br0table
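To confirm the extra tables are actually consulted once the interfaces are up, a quick sanity check (not part of the original write-up) is:
# the custom rules and the per-table routes should show up here
ip rule show
ip route show table eth0table
ip route show table br0table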
At the container config file (/var/lib/lxc/[container name]/config):
...
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.hwaddr = [auto-created when bringing up the container]
lxc.network.ipv4 = 195.x.x.4/24
lxc.network.ipv4.gateway = 195.x.x.1
lxc.network.veth.pair = [readable name for the host-side interface] (as shown by ifconfig)
lxc.start.auto = 0 (set to 1 if you want the container to autostart)
lxc.start.delay = 0 (seconds the container should wait before starting)
I tested it by enabling apache2 in the container and accessing the web page from outside the network. Hope this helps anybody who bumps into the same challenge I did.
PS: If you choose to have the container's config file assign the IP, do not forget to disable it in the container's own interfaces file:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual

Debian Network-Configuration for KVM - Provider OVH

I need some help configuring the network for my KVM host. My hosting provider is OVH, and since they do things a bit differently, I'm in need of help.
My old Network-Interfaces File:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 94.23.209.170
netmask 255.255.255.0
network 94.23.209.0
broadcast 94.23.209.255
gateway 94.23.209.254
auto br0
iface br0 inet static
address 91.134.173.185
netmask 255.255.255.0
broadcast 91.134.173.185
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
dns-nameservers 8.8.8.8
iface eth0 inet6 static
address 2001:41d0:0002:54aa::
netmask 64
dns-nameservers 2001:41d0:3:163::1
post-up /sbin/ip -family inet6 route add 2001:41d0:0002:54ff:ff:ff:ff:ff dev eth0
post-up /sbin/ip -family inet6 route add default via 2001:41d0:0002:54ff:ff:ff:ff:ff
pre-down /sbin/ip -family inet6 route del default via 2001:41d0:0002:54ff:ff:ff:ff:ff
pre-down /sbin/ip -family inet6 route del 2001:41d0:0002:54ff:ff:ff:ff:ff dev eth0
I had to go into rescue mode and remove the bridge, otherwise my machine wouldn't come up again. Can someone help me and tell me what I did wrong?
Thanks, and have a good day/night! :)
I had a similar problem. I just moved to OVH from PhoenixNAP. I like the control panel better, but their networking is a little weird. I have an IP on a /24, and I ordered a /29 for WHM/cPanel and some other virtual machines.
My config to get the host functional:
auto eth0
iface eth0 inet manual
address 111.222.333.145
netmask 255.255.255.0
network 111.222.333.0
broadcast 111.222.333.255
gateway 111.222.333.254
auto br0
iface br0 inet static
address 111.222.333.145
netmask 255.255.255.0
network 111.222.333.0
broadcast 111.222.333.255
gateway 111.222.333.254
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
dns-nameservers 213.186.33.99
NOTE: 111.222.333 stands in for your first three octets; obviously change them. The .145 was an arbitrary example of a host address assigned to you.
Then restart the networking service.
service networking restart
Now I had to get a CentOS container for WHM/cPanel going, plus a few Debian containers.
I'm assuming you bought a block of IPs and need to get one of them into a VM. Log into the OVH control panel and select IP. Expand the IP block; to the right you will see a gear you can click on. Create an OVH Virtual MAC. Take note of that MAC!
For CentOS the guide is correct.
In Debian a little something was missing.
You want to edit /etc/libvirt/qemu/autostart/YOUR_VM_NAME.xml:
...
<interface type='bridge'>
<mac address='YO:UR:VI:RT:MA:CA'/>
...
After saving, restart the libvirtd service, then restart your Debian guest so it picks up the new MAC, and you should be good.
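On a typical Debian/Ubuntu host that would be roughly the following (a sketch; the domain name is a placeholder, and on older releases the service may be called libvirt-bin instead):
# reload libvirt, then power-cycle the guest so it picks up the new MAC
sudo service libvirtd restart
virsh shutdown YOUR_VM_NAME
virsh start YOUR_VM_NAME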
When installing, I could not set an IP outside the range of my network. After getting virt-manager up, I logged in, blew away the gateway, and modified the interfaces file according to the guide.
You don't need to change your host network config.
You need a Failover IP (created in the OVH panel). Then assign a Virtual MAC to it.
In your dedicated server:
virsh net-edit default
Change this way:
<network>
<name>default</name>
<uuid>...</uuid>
<bridge name='virbr0' stp='off' delay='0'/>
<mac address='...'/>
</network>
Now edit the VM:
virsh edit myvmname
and set (change "eno1" to your network card name, like "eth0" or "ens0p0" etc):
<interface type='direct'>
<mac address='--VIRTUAL MAC CREATED IN OVH PANEL--'/>
<source dev='eno1' mode='bridge'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
Now edit your VM's network config (in my example a Debian /etc/network/interfaces; change the interface name as well):
auto eno1
iface eno1 inet static
address -FAILOVER IP-
netmask 255.255.255.255
gateway -HOST GATEWAY-
broadcast -FAILOVER IP-
So the VM will have the failover IP and use the same gateway as the host. At OVH the gateway ends in .254 (or check with ip r on the host).
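If you're unsure what that gateway is, running this on the host (not the VM) shows the default route OVH handed out:
# the 'via' address in this output is the gateway the VM should reuse
ip route | grep '^default'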

Docker public network configuration

I have 1 host with ip 10.120.194.214/24
I also have a range routed from my router to my host IP; the range is 10.120.187.0/24 and its gateway is 10.120.187.1.
I'm trying to create a docker network with this range
docker network create --driver=bridge --subnet=10.120.187.0/24 --ip-range=10.120.187.128/25 --gateway=10.120.187.254 -o "com.docker.network.bridge.enable_icc=true" -o "com.docker.network.bridge.host_binding_ipv4"="10.120.187.1" mypublicnet
If I try to ping 10.120.187.254 from the LAN, I get no reply.
the host configuration is this
iface eth0 inet manual
auto vmbr0
iface vmbr0 inet static
address 10.120.194.214
netmask 255.255.255.0
gateway 10.120.194.1
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
dns-nameservers 10.120.194.1 10.120.194.10
The idea is that I can run containers with IPs accessible from the LAN, and every container must have a different IP.
Contrary to what you might think, a Docker bridge network is not bridged to the physical interface; it is NATed.
To achieve what you are asking for in production, use Pipework or, if you are on the cutting edge, try the Docker macvlan driver, which is, for now, experimental.
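For illustration, a macvlan network with the addressing from the question would look roughly like this (a sketch; it assumes the LAN-facing parent interface is vmbr0, that your Docker version ships the macvlan driver, and that the gateway should be the real router, 10.120.187.1):
# create a macvlan network attached to the LAN-facing interface
docker network create -d macvlan \
  --subnet=10.120.187.0/24 --ip-range=10.120.187.128/25 \
  --gateway=10.120.187.1 -o parent=vmbr0 mypublicnet
# run a container with an address from that range, reachable from the LAN
docker run --rm -it --network mypublicnet --ip 10.120.187.130 alpine sh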

Xen dom0 cannot connect to the internet

I configured a Linux Mint 17 host O/S to install Xen as per the following guide
Xen Project Beginner's Guide
Now, after configuring the network interfaces as instructed, I rebooted the machine. I can see that the bridge has an IP assigned to it via DHCP, but I cannot connect to the internet.
I can even ping the gateway successfully, but not any other address.
What am I doing wrong?
This is my /etc/network/interfaces file
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
auto xenbr0
iface xenbr0 inet dhcp
bridge_ports eth0
In my case, I had only added the eth0 and eth1 interfaces to the bridge:
bridge name bridge id STP enabled interfaces
br0 8000.00259e1c426c no eth0
eth1
But there was an interface called vif3.0 which also had to be included in the bridge. So I did this:
brctl addif br0 vif3.0
Everything works fine.
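A quick way to confirm the vif really landed in the bridge (just a sanity check, not part of the original fix):
# vif3.0 should now appear in the interfaces column for br0
brctl show br0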

openstack instance getting ip and not getting ip

I am new to OpenStack, and I followed the Icehouse installation guide for Ubuntu 12.04/14.04.
I chose the three-node architecture: Controller, Nova, Neutron.
The three nodes are installed in VMs using nested KVM; KVM is supported inside the VMs, so nova uses virt_type=kvm. On the controller I created 2 NICs: eth0 is a NAT interface with IP 203.0.113.94, and eth1 is a host-only interface with IP 10.0.0.11.
On the nova node there are 3 NICs: eth0 NAT 203.0.113.23, eth1 host-only 10.0.0.31, and eth2 another host-only 10.0.1.31.
On the neutron node there are 3 NICs: eth0 NAT 203.0.113.234, eth1 host-only 10.0.0.21, and eth2 another host-only 10.0.1.21. (Following the installation guide, on the neutron node I created a br-ex bridge with a port to eth0; br-ex took over eth0's settings, and eth0 is now configured as follows.)
auto eth0
iface eth0 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down
Everything seemed fine: I can create networks, routers, etc., and boot instances, but I have this problem.
When I launch an instance it gets a fixed IP, but when I log into the instance (CirrOS) I can't ping anything, and ifconfig shows no IP.
I noticed that in the demo-net (tenant network) properties, under the subnet, the ports field shows 3 ports:
172.16.1.1 network:router_interface active
172.16.1.3 network:dhcp active
172.16.1.6 compute:nova down
I searched for solutions over the net but couldn't find anything!
Any help?
Ask me if you want specific logs because I don't know which ones to post!
Thanks anyway!
It looks like you are using the fixed IP to ping. If so, please assign a floating IP to your instance and then try to ping.
If you have already assigned a floating IP and you are pinging using that IP, please upload the log of your instance.
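For reference, with the Icehouse guide's naming that would look roughly like this (a sketch; ext-net, the instance name, and the IP are placeholders taken from that guide):
# allocate a floating IP from the external network pool
nova floating-ip-create ext-net
# associate it with the instance (use the IP returned by the previous command)
nova floating-ip-associate demo-instance1 203.0.113.102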
