neutron openvswitch br-int can't reach external network br-ex through patch

I installed OpenStack Liberty in a 2-node configuration (1 controller, 1 compute), each node having 1 public NIC and 1 private NIC, following this DVR scenario: http://docs.openstack.org/networking-guide/scenario_dvr_ovs.html
On the controller node I created br-ex, which holds eth0's IP (eth0 is the public NIC), and installed the l3-agent (in dvr_snat mode), ovs-agent and dhcp-agent services.
Using the admin account I created the ext-net and attached my subnet to it.
Using the demo tenant I then created a demo-net, a demo-subnet and a demo-router, and set the gateway with neutron router-gateway-set demo-router ext-net.
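For reference, a rough sketch of the Liberty-era neutron CLI commands this setup corresponds to (the provider physical network name, CIDRs and gateway below are placeholders/assumptions, not taken from the post above):
# admin: external network plus its subnet
neutron net-create ext-net --router:external --provider:network_type flat --provider:physical_network external
neutron subnet-create ext-net <EXT_CIDR> --name ext-subnet --disable-dhcp --gateway <EXT_GATEWAY>
# demo tenant: private network, subnet and router, then the gateway
neutron net-create demo-net
neutron subnet-create demo-net <DEMO_CIDR> --name demo-subnet
neutron router-create demo-router
neutron router-interface-add demo-router demo-subnet
neutron router-gateway-set demo-router ext-net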
So my ovs-vsctl show output looks like the following:
Bridge br-int
fail_mode: secure
Port "sg-ef30b544-a4"
tag: 4095
Interface "sg-ef30b544-a4"
type: internal
Port "qr-a4b8653c-78"
tag: 4095
Interface "qr-a4b8653c-78"
type: internal
Port "qg-d33db11d-60"
tag: 1
Interface "qg-d33db11d-60"
type: internal
Port br-int
Interface br-int
type: internal
Port "tap9f36ccde-1e"
tag: 4095
Interface "tap9f36ccde-1e"
type: internal
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Bridge br-ex
Port "eth0"
Interface "eth0"
Port br-ex
Interface br-ex
type: internal
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Bridge br-tun
Port br-tun
Interface br-tun
type: internal
ovs_version: "2.4.0"
and the network namespaces:
root@controller:~# ip netns
qdhcp-3e662de0-9a85-4d7d-bb85-b9d4568ceaec
snat-f3f6213c-384c-4ec5-914c-e98aba89936f
qrouter-f3f6213c-384c-4ec5-914c-e98aba89936f
My problem is that the l3-agent fails to set up the SNAT namespace, because the network seems to be unreachable:
ERROR neutron.agent.l3.agent Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'snat-f3f6213c-384c-4ec5-914c-e98aba89936f', 'ip', '-4', 'route', 'replace', 'default', 'via', '149.XXX.YYY.ZZZ', 'dev', 'qg-d33db11d-60']
ERROR neutron.agent.l3.agent Exit code: 2
ERROR neutron.agent.l3.agent Stdin:
ERROR neutron.agent.l3.agent Stdout:
ERROR neutron.agent.l3.agent Stderr: RTNETLINK answers: Network is unreachable
ping -I br-ex 8.8.8.8 works.
ping -I br-int 8.8.8.8 says network unreachable.
As you can see, there is a patch between br-int and br-ex, so it should work, but it doesn't.
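A hedged diagnostic sketch (not part of the original post): with DVR the gateway port lives in the SNAT namespace, so checking whether qg-d33db11d-60 actually received an address there, and whether the agent's bridge mapping points at br-ex, helps explain why the default route cannot be installed:
# does the gateway port have an IP inside the SNAT namespace?
sudo ip netns exec snat-f3f6213c-384c-4ec5-914c-e98aba89936f ip addr show qg-d33db11d-60
sudo ip netns exec snat-f3f6213c-384c-4ec5-914c-e98aba89936f ip route
# is br-ex mapped for the OVS agent? (exact file path may differ per distribution)
grep -r bridge_mappings /etc/neutron/plugins/ml2/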

Related

Cannot ping each other on the same LAN on OpenWrt with a virtual port and a physical port

My openwrt-x86 has been running for a while inside an ESXi virtual environment (it is a VM; eth0 and eth1 are virtual NICs provided by ESXi). One day I tried to add a passthrough port (eth2, a physical NIC) to this OpenWrt as a LAN port, so that I could reach the LAN managed by this OpenWrt by physically plugging a wire into eth2. I found that I could get an IP address via DHCP normally, but could not reach any other IP address on the same LAN except the OpenWrt itself and the WAN network.
My OpenWrt network config file was:
root@OpenWrt:/etc/config# cat network
config interface 'loopback'
option device 'lo'
option proto 'static'
option ipaddr '127.0.0.1'
option netmask '255.0.0.0'
config globals 'globals'
option ula_prefix 'fdc8:982a:611a::/48'
config device
option name 'br-lan'
option type 'bridge'
list ports 'eth0'
list ports 'eth2'
option ipv6 '0'
config interface 'lan'
option device 'br-lan'
option proto 'static'
option ip6assign '60'
option ipaddr '10.0.0.1'
option netmask '255.255.0.0'
config interface 'wan'
option device 'eth1'
option proto 'dhcp'
option metric '5'
config interface 'wan6'
option device 'eth1'
option proto 'dhcpv6'
For example, I got the DHCP address 10.0.0.10 by physically connecting to eth2. My WAN connectivity was still fine (I could reach Google), but when I tried to ping 10.0.0.151 (a VM on the OpenWrt's LAN) I got ICMP unreachable:
[root@master1 ~]# ping 10.0.0.151
PING 10.0.0.151 (10.0.0.151) 56(84) bytes of data.
From 10.0.0.10 icmp_seq=1 Destination Host Unreachable
From 10.0.0.10 icmp_seq=2 Destination Host Unreachable
From 10.0.0.10 icmp_seq=3 Destination Host Unreachable
From 10.0.0.10 icmp_seq=4 Destination Host Unreachable
From 10.0.0.10 icmp_seq=5 Destination Host Unreachable
From 10.0.0.10 icmp_seq=6 Destination Host Unreachable
and the route table on 10.0.0.10 seems fine:
[root@master1 ~]# ip route
default via 10.0.0.1 dev ens192 proto dhcp src 10.0.0.10 metric 100
10.0.0.0/16 dev ens192 proto kernel scope link src 10.0.0.10 metric 100
Solved: by default ESXi sets the internal vSwitch security options
Promiscuous Mode = false
Forged Transmits = false
so VMs on the virtual LAN never receive the ARP responses being delivered. Enabling both options makes it work.
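For anyone hitting the same symptom, a hedged way to confirm that ARP is the problem (assuming tcpdump can be installed on the OpenWrt box via opkg):
# on the OpenWrt box: watch ARP on the LAN bridge while pinging from the wired client
opkg update && opkg install tcpdump
tcpdump -ni br-lan arp
# if requests for 10.0.0.151 show up but its replies never reach the wired client,
# the frames are being filtered by the hypervisor's vSwitch security policy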

OpenStack multi-node network configuration on Ubuntu

I am attempting to get a simple 2-node deployment set up via devstack. I have followed both the multi-node lab and the "Using devstack with Neutron" guide. I made the most progress with the latter; however, I still cannot seem to communicate with instances running on my compute-only node. Instances that run on the controller/compute node seem fine: I can ping/ssh to them from any machine in my office.
My environment: 2 Ubuntu 18.04 bare-metal servers on a private network with a router handing out DHCP addresses (I have a small range of addresses set aside). I disabled Ubuntu NetworkManager and configured the interface via ifupdown in /etc/network/interfaces:
auto enp0s31f6
iface enp0s31f6 inet static
address 192.168.7.170
netmask 255.255.255.0
gateway 192.168.7.254
multicast 192.168.7.255
dns-nameservers 8.8.8.8 8.8.4.4
The controller/compute node's local.conf is configured according to the guide:
[[local|localrc]]
HOST_IP=192.168.7.170
SERVICE_HOST=192.168.7.170
MYSQL_HOST=192.168.7.170
RABBIT_HOST=192.168.7.170
GLANCE_HOSTPORT=192.168.7.170:9292
DATABASE_PASSWORD=Passw0rd
RABBIT_PASSWORD=Passw0rd
SERVICE_PASSWORD=Passw0rd
ADMIN_PASSWORD=Passw0rd
LOGFILE=/opt/stack/logs/stack.sh.log
## Neutron options
Q_USE_SECGROUP=True
FLOATING_RANGE="192.168.7.0/24"
IPV4_ADDRS_SAFE_TO_USE="10.0.0.0/22"
Q_FLOATING_ALLOCATION_POOL=start=192.168.7.249,end=192.168.7.253
PUBLIC_NETWORK_GATEWAY="192.168.7.254"
PUBLIC_INTERFACE=enp0s31f6
# Open vSwitch provider networking configuration
Q_USE_PROVIDERNET_FOR_PUBLIC=True
Q_ASSIGN_GATEWAY_TO_PUBLIC_BRIDGE=False
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex
The one difference is Q_ASSIGN_GATEWAY_TO_PUBLIC_BRIDGE. I found that if I did not set this, I saw a lot of packet loss on the server. I don't understand why the gateway would be added to the vSwitch as a secondary address.
Another oddity I noticed is that once the OVS bridge was set up and my public interface added as a port, the network gateway no longer worked as a DNS server. If I use Google's, it's fine.
On the compute-only node I have this local.conf:
[[local|localrc]]
HOST_IP=192.168.7.172
LOGFILE=/opt/stack/logs/stack.sh.log
SERVICE_HOST=192.168.7.170
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
ADMIN_PASSWORD=Passw0rd
DATABASE_PASSWORD=Passw0rd
RABBIT_PASSWORD=Passw0rd
SERVICE_PASSWORD=Passw0rd
PUBLIC_INTERFACE=enp0s31f6
ENABLED_SERVICES=n-cpu,rabbit,q-agt,placement-client
I run stack.sh on the controller/compute node, then on the compute-only node. The installation looks good. I can set up the security group, ssh keypair etc. and launch instances. I allocate floating IPs for each and associate them; the addresses come from the pool as expected. I can see the tunnels set up on each node with OVS:
controller$ sudo ovs-vsctl show
1cc8a95d-660d-453f-9772-02393adc2031
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-tun
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port "vxlan-c0a807ac"
Interface "vxlan-c0a807ac"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="192.168.7.170", out_key=flow, remote_ip="192.168.7.172"}
Bridge br-ex
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port br-ex
Interface br-ex
type: internal
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "enp0s31f6"
Interface "enp0s31f6"
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "qg-7db4efa8-8f"
tag: 2
Interface "qg-7db4efa8-8f"
type: internal
Port "tap88eb8a36-86"
tag: 1
Interface "tap88eb8a36-86"
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port br-int
Interface br-int
type: internal
Port "qr-e0e43871-2d"
tag: 1
Interface "qr-e0e43871-2d"
type: internal
Port "qvo5a54876d-0c"
tag: 1
Interface "qvo5a54876d-0c"
Port "qr-9452dacf-82"
tag: 1
Interface "qr-9452dacf-82"
type: internal
ovs_version: "2.8.1"
and on the compute-only node:
compute$ sudo ovs-vsctl show
c817878d-7127-4d17-9a69-4ff296adc157
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "qvo1b56a018-10"
tag: 1
Interface "qvo1b56a018-10"
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port br-int
Interface br-int
type: internal
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Bridge br-ex
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port br-ex
Interface br-ex
type: internal
Bridge br-tun
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "vxlan-c0a807aa"
Interface "vxlan-c0a807aa"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="192.168.7.172", out_key=flow, remote_ip="192.168.7.170"}
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
ovs_version: "2.8.1"
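For reference, a hedged sanity check (not part of the original post): after stacking both nodes, the compute node's OVS agent and nova-compute service should be visible from the controller, e.g.:
# run on the controller with admin credentials sourced
source ~/devstack/openrc admin admin
openstack network agent list
openstack compute service list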
Any ideas on what might be wrong in my setup? Any suggestions for what I might try?
It turned out that Ubuntu NetworkManager was reasserting itself after I had stopped its services. I took the more drastic step of disabling and purging it from my servers: systemctl disable NetworkManager.service and then apt-get purge network-manager.
Once it was gone for good, things started working as advertised. I started with the local.conf above and was able to spin up instances on both servers and connect to them, and they had no trouble connecting to each other etc. I then added more pieces to my stack (heat, magnum, barbican, lbaasv2) and things continue to be reliable.
The moral of the story is: Ubuntu NetworkManager and the devstack OVS config do not play well together. To get the latter working, you must remove the former (as near as I can tell).
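A minimal sketch of the steps described above (unit and package names as on Ubuntu 18.04; the final ifup assumes the ifupdown config shown earlier):
sudo systemctl stop NetworkManager.service
sudo systemctl disable NetworkManager.service
sudo apt-get purge network-manager
# bring the interface back up via ifupdown
sudo ifup enp0s31f6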
Also, prior to all this trouble with OVS, I had to apply a proposed fix to devstack's lib/etcd3 script on my compute-only node. It's a small, but required, change in the stable/queens branch as of 27-Sept-2018; see https://github.com/openstack-dev/devstack/commit/19279b0f8. Without this, stack.sh fails on the compute node trying to bind to an address on the controller node.

Cannot access provider network (OpenStack Packstack OpenDaylight integration)

I am trying to integrate OpenStack, built with Packstack (CentOS), with OpenDaylight.
This is my topology:
Openstack Controller : 10.210.210.10 & 10.211.211.10
- eth1 : 10.211.211.10/24
- eth0 : 10.210.210.10/24
Openstack Compute : 10.210.210.20 & 10.211.211.20
- eth1 : 10.211.211.20/24
- eth0 : 10.210.210.20/24
OpenDayLight : 10.210.210.30
- eth1 : 10.210.210.30/24
Provider Network : 10.211.211.0/24
Tenant Network : 10.210.210.0/24
Openstack Version : Newton
OpenDayLight Version : Nitrogen SR1
These are my Packstack configuration changes:
CONFIG_HEAT_INSTALL=y
CONFIG_NEUTRON_FWAAS=y
CONFIG_NEUTRON_VPNAAS=y
CONFIG_LBAAS_INSTALL=y
CONFIG_CINDER_INSTALL=n
CONFIG_SWIFT_INSTALL=n
CONFIG_CEILOMETER_INSTALL=n
CONFIG_AODH_INSTALL=n
CONFIG_GNOCCHI_INSTALL=n
CONFIG_NAGIOS_INSTALL=n
CONFIG_PROVISION_DEMO=n
CONFIG_COMPUTE_HOSTS=10.X0.X0.20
CONFIG_USE_EPEL=y
CONFIG_KEYSTONE_ADMIN_PW=rahasia
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,gre,vlan,flat,local
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=external
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=external:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth1
CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ex
I tried to follow this tutorial: http://docs.opendaylight.org/en/stable-nitrogen/submodules/netvirt/docs/openstack-guide/openstack-with-netvirt.html
The instance gets a DHCP address on the tenant network and can ping the tenant router's gateway IP, but I cannot ping anything on the provider network.
This is all of my configuration for integrating with OpenDaylight.
OPENDAYLIGHT
** Set ACL
mkdir -p etc/opendaylight/datastore/initial/config/
cp system/org/opendaylight/netvirt/aclservice-impl/0.5.1/aclservice-impl-0.5.1-config.xml etc/opendaylight/datastore/initial/config/netvirt-aclservice-config.xml
sed -i s/stateful/transparent/ etc/opendaylight/datastore/initial/config/netvirt-aclservice-config.xml
export JAVA_HOME=/usr/java/jdk1.8.0_162/jre
./bin/karaf
** Install Feature
feature:install odl-dluxapps-nodes odl-dlux-core odl-dluxapps-topology odl-dluxapps-applications odl-netvirt-openstack odl-netvirt-ui odl-mdsal-apidocs
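A hedged check, from the same Karaf shell, that the NetVirt features actually installed (standard Karaf syntax, not from the original post):
feature:list -i | grep netvirt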
OPENSTACK CONTROLLER NODE
systemctl stop neutron-server
systemctl stop neutron-openvswitch-agent
systemctl disable neutron-openvswitch-agent
systemctl stop neutron-l3-agent
systemctl disable neutron-l3-agent
systemctl stop openvswitch
rm -rf /var/log/openvswitch/*
rm -rf /etc/openvswitch/conf.db
systemctl start openvswitch
ovs-vsctl set-manager tcp:10.210.210.30:6640
ovs-vsctl del-port br-int eth1
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1
ovs-vsctl set-controller br-ex tcp:10.210.210.30:6653
ovs-vsctl set Open_vSwitch . other_config:local_ip=10.210.210.10
ovs-vsctl get Open_vSwitch . other_config
yum -y install python-networking-odl
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers opendaylight
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
cat <<EOT>> /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_odl]
password = admin
username = admin
url = http://10.210.210.30:8080/controller/nb/v2/neutron
EOT
crudini --set /etc/neutron/plugins/neutron.conf DEFAULT service_plugins odl-router
crudini --set /etc/neutron/plugins/dhcp_agent.ini OVS ovsdb_interface vsctl
mysql -e "DROP DATABASE IF EXISTS neutron;"
mysql -e "CREATE DATABASE neutron CHARACTER SET utf8;"
neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
systemctl start neutron-server
sudo ovs-vsctl set Open_vSwitch . other_config:provider_mappings=external:eth1
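A hedged way to verify that neutron-server is actually reaching OpenDaylight after these steps, reusing the northbound URL and credentials configured in ml2_conf.ini above:
# should return the Neutron networks as seen by ODL (an empty list if none exist yet)
curl -u admin:admin http://10.210.210.30:8080/controller/nb/v2/neutron/networks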
OPENSTACK COMPUTE NODE
systemctl stop neutron-openvswitch-agent
systemctl disable neutron-openvswitch-agent
systemctl stop neutron-l3-agent
systemctl disable neutron-l3-agent
systemctl stop openvswitch
rm -rf /var/log/openvswitch/*
rm -rf /etc/openvswitch/conf.db
systemctl start openvswitch
ovs-vsctl set-manager tcp:10.210.210.30:6640
ovs-vsctl set-manager tcp:10.210.210.30:6640
ovs-vsctl del-port br-int eth1
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1
ovs-vsctl set-controller br-ex tcp:10.210.210.30:6653
ovs-vsctl set Open_vSwitch . other_config:local_ip=10.210.210.20
ovs-vsctl get Open_vSwitch . other_config
yum -y install python-networking-odl
sudo ovs-vsctl set Open_vSwitch . other_config:provider_mappings=external:eth1
I tried mapping to eth1 and to br-ex, but it's the same: I cannot ping anything on the provider network (only the gateway 10.211.211.1, and only from the controller or compute node). Thanks :)
I have successfully deployed L3 routing with OpenStack and OpenDaylight.
I wrote a blog about it at https://communities.cisco.com/community/developer/openstack/blog/2017/02/01/how-to-deploy-openstack-newton-with-opendaylight-boron-and-open-vswitch.
The reference configurations are at https://github.com/vhosakot/Cisco-Live-Workshop/tree/master/openstack_ODL. Please keep in mind that some configurations may have changed in the newer releases.
Use the networking-odl project at https://github.com/openstack/networking-odl which automates the installation of OpenStack with OpenDaylight.
There is also another sample/example configuration file at https://github.com/openstack/networking-odl/blob/master/devstack/local.conf.example.
REPORT
OVS-VSCTL SHOW
CONTROLLER
[root@pod21-controller ~]# ovs-vsctl show
525fbe7c-e60c-4135-b0a5-178d76c04529
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-tun
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "gre-0ad2d214"
Interface "gre-0ad2d214"
type: gre
options: {df_default="true", in_key=flow, local_ip="10.210.210.10", out_key=flow, remote_ip="10.210.210.20"}
Port br-tun
Interface br-tun
type: internal
Port "vxlan-0ad2d214"
Interface "vxlan-0ad2d214"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="10.210.210.10", out_key=flow, remote_ip="10.210.210.20"}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Bridge br-ex
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "eth1"
Interface "eth1"
Port br-ex
Interface br-ex
type: internal
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
ovs_version: "2.6.1"
COMPUTE
[root@pod21-compute ~]# ovs-vsctl show
f4466d5a-c1f5-4c5c-91c3-636944cd0f97
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-ex
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port br-ex
Interface br-ex
type: internal
Port "eth1"
Interface "eth1"
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Bridge br-tun
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port "gre-0ad2d20a"
Interface "gre-0ad2d20a"
type: gre
options: {df_default="true", in_key=flow, local_ip="10.210.210.20", out_key=flow, remote_ip="10.210.210.10"}
Port br-tun
Interface br-tun
type: internal
Port "vxlan-0ad2d20a"
Interface "vxlan-0ad2d20a"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="10.210.210.20", out_key=flow, remote_ip="10.210.210.10"}
ovs_version: "2.6.1"
OVS-VSCTL AFTER CONFIG
CONTROLLER
[root@pod21-controller ~]# ovs-vsctl show
71b22ef2-fbea-4cd4-ba6a-883b3df9c5f1
Manager "tcp:10.210.210.30:6640"
is_connected: true
Bridge br-int
Controller "tcp:10.210.210.30:6653"
is_connected: true
fail_mode: secure
Port br-int
Interface br-int
type: internal
Bridge br-ex
Controller "tcp:10.210.210.30:6653"
is_connected: true
Port br-ex
Interface br-ex
type: internal
Port "eth1"
Interface "eth1"
ovs_version: "2.6.1"
COMPUTE
[root@pod21-compute ~]# ovs-vsctl show
3bede8e2-eb29-4dbb-97f0-4cbadb2c0195
Manager "tcp:10.210.210.30:6640"
is_connected: true
Bridge br-ex
Controller "tcp:10.210.210.30:6653"
is_connected: true
Port br-ex
Interface br-ex
type: internal
Port "eth1"
Interface "eth1"
Bridge br-int
Controller "tcp:10.210.210.30:6653"
is_connected: true
fail_mode: secure
Port br-int
Interface br-int
type: internal
ovs_version: "2.6.1"
AFTER ADDING INSTANCE
CONTROLLER
[root@pod21-controller ~(keystone_admin)]# ovs-vsctl show
71b22ef2-fbea-4cd4-ba6a-883b3df9c5f1
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Manager "tcp:10.210.210.30:6640"
is_connected: true
Bridge br-int
Controller "tcp:10.210.210.30:6653"
is_connected: true
fail_mode: secure
Port "tapab981c1e-4b"
Interface "tapab981c1e-4b"
type: internal
Port "qr-cba77b1d-73"
Interface "qr-cba77b1d-73"
type: internal
Port br-int
Interface br-int
type: internal
Port "tun7314cbc7b3e"
Interface "tun7314cbc7b3e"
type: vxlan
options: {key=flow, local_ip="10.210.210.10", remote_ip="10.210.210.20"}
Bridge br-ex
Controller "tcp:10.210.210.30:6653"
is_connected: true
Port "qg-1ba8c01a-15"
Interface "qg-1ba8c01a-15"
type: internal
Port br-ex
Interface br-ex
type: internal
Port "eth1"
Interface "eth1"
ovs_version: "2.6.1"
COMPUTE
[root@pod21-compute ~]# ovs-vsctl show
3bede8e2-eb29-4dbb-97f0-4cbadb2c0195
Manager "tcp:10.210.210.30:6640"
is_connected: true
Bridge br-ex
Controller "tcp:10.210.210.30:6653"
is_connected: true
Port br-ex
Interface br-ex
type: internal
Port "eth1"
Interface "eth1"
Bridge br-int
Controller "tcp:10.210.210.30:6653"
is_connected: true
fail_mode: secure
Port "tun51bba5158fe"
Interface "tun51bba5158fe"
type: vxlan
options: {key=flow, local_ip="10.210.210.20", remote_ip="10.210.210.10"}
Port "tap1e71587f-32"
Interface "tap1e71587f-32"
Port "tap5c0a404b-75"
Interface "tap5c0a404b-75"
Port br-int
Interface br-int
type: internal
ovs_version: "2.6.1"
Did you mean you cannot ping 10.211.211.10? It seems that because you have added eth1 onto br-ex, you cannot ping eth1 directly. You can try this:
ifconfig eth1 0
ifconfig br-ex 10.211.211.10
Or you can just delete the port eth1 from br-ex:
ovs-vsctl del-port br-ex eth1
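If moving the address onto br-ex works, here is a hedged sketch of making it persistent on CentOS (assuming the openvswitch network-scripts integration shipped with RDO; the exact file contents are an assumption, adjust to your environment):
cat <<EOT > /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.211.211.10
NETMASK=255.255.255.0
EOT
cat <<EOT > /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes
BOOTPROTO=none
EOT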

Ping fails to the second IP of an OpenStack instance

I have an RDO OpenStack environment on a machine for testing. RDO was installed with the packstack --allinone command. Using a HOT template I have created two instances: one with a CirrOS image and another with Fedora. The Fedora instance has two interfaces connected to the same network, while CirrOS has only one interface, connected to the same network. The template looks like this:
heat_template_version: 2015-10-15
description: Simple template to deploy two compute instances
resources:
  local_net:
    type: OS::Neutron::Net
  local_signalling_subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: local_net }
      cidr: "50.0.0.0/24"
      ip_version: 4
  fed:
    type: OS::Nova::Server
    properties:
      image: fedora
      flavor: m1.small
      key_name: heat_key
      networks:
        - network: local_net
      networks:
        - port: { get_resource: fed_port1 }
        - port: { get_resource: fed_port2 }
  fed_port1:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: local_net }
  fed_port2:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: local_net }
  cirr:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.tiny
      key_name: heat_key
      networks:
        - network: local_net
      networks:
        - port: { get_resource: cirr_port }
  cirr_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: local_net }
The Fedora instance got two IPs (50.0.0.3 and 50.0.0.4). CirrOS got IP 50.0.0.5. I can ping 50.0.0.3 from the CirrOS instance but not 50.0.0.4. Only if I manually bring down the interface with IP 50.0.0.3 on the Fedora instance can I ping 50.0.0.4 from the CirrOS instance. Is there a restriction in the Neutron configuration that prohibits pinging both IPs of the Fedora instance at the same time? Please help.
This happens because of the default firewalling (anti-spoofing) done by OpenStack networking (Neutron): it simply drops any packet sent from a port if the packet's source address does not match the IP address assigned to that port.
When the CirrOS instance sends a ping packet to 50.0.0.4, the Fedora instance receives it on the interface with IP address 50.0.0.4. However, when it responds back to CirrOS's IP address 50.0.0.5, the Linux networking stack on the Fedora machine has two interfaces to choose from for sending the response (because both interfaces are connected to the same network). In your case, Fedora chose to respond on the interface with 50.0.0.3. The source IP address of the response is still 50.0.0.4, though, so the OpenStack networking layer simply drops it.
The general recommendation is not to have multiple interfaces on the same network. If you want multiple IP addresses from the same network for your VM, you can use the "fixed_ips" option in your heat template:
fed_port1:
  type: OS::Neutron::Port
  properties:
    network_id: { get_resource: local_net }
    fixed_ips:
      - ip_address: "50.0.0.4"
      - ip_address: "50.0.0.3"
Since the DHCP server will offer only one IP address, Fedora will be configured with only one IP. You can add the other IP to the interface using the "ip addr add" command (see http://www.unixwerk.eu/linux/redhat/ipalias.html):
ip addr add 50.0.0.3/24 brd + dev eth0 label eth0:0
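As an aside (not part of the answer above), a hedged alternative when two ports on the same network are really needed: Neutron's allowed-address-pairs extension can whitelist the "other" address on each port so the anti-spoofing rules stop dropping the replies. A sketch with the classic neutron CLI and placeholder port IDs:
# let each Fedora port also source traffic from the other port's address
neutron port-update <FED_PORT1_ID> --allowed-address-pairs type=dict list=true ip_address=50.0.0.4
neutron port-update <FED_PORT2_ID> --allowed-address-pairs type=dict list=true ip_address=50.0.0.3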

Minimal devstack with Nova and floating IPs

I'm trying to set up a minimal devstack that can launch Nova instances, some of which will have public addresses, and some of which will need to open connections to the public network. I'd like to be able to assign floating IPs to the instances, and have traffic originating from the instances with public addresses reach the public network.
Addressing
Devstack will be running on a single Ubuntu 14.04 box with two physical interfaces. The first interface eth0 is on 10.48.4.0/22, on which I own the address 10.48.6.232; this is the management connection to the box. The second interface eth1 is on 10.48.8.0/22 and owns the addresses 10.48.11.6 and 10.48.11.57-10.48.11.59. eth1 is configured to use the 10.48.11.6 address, leaving a small pool of addresses for the floating range.
auto eth1
iface eth1 inet static
address 10.48.11.6
netmask 255.255.252.0
I'd like to use the range 10.48.11.57-10.48.11.59 as the floating IP pool. This makes up the start of my local.conf:
[[local|localrc]]
# Devstack host IP eth1 address
HOST_IP=10.48.11.6
# Private network
FIXED_RANGE=10.90.100.0/24
NETWORK_GATEWAY=10.90.100.1
# Public network
Q_FLOATING_ALLOCATION_POOL=start=10.48.11.57,end=10.48.11.59
FLOATING_RANGE=10.48.8.0/22
PUBLIC_NETWORK_GATEWAY=10.48.8.1
# Public network is eth1
PUBLIC_INTERFACE=eth1
ML2
The remainder of the relevant part of my local.conf is configuring neutron and ovs to use the public network. I've followed the instructions in the comments in neutron-legacy.
# Neutron
# -------
PUBLIC_BRIDGE=br-ex
Q_USE_PROVIDERNET_FOR_PUBLIC=True
PUBLIC_PHYSICAL_NETWORK=public
OVS_BRIDGE_MAPPINGS=public:br-ex
# Neutron Provider Network
ENABLE_TENANT_TUNNELS=True
PHYSICAL_NETWORK=public
OVS_PHYSICAL_BRIDGE=br-ex
# Use ml2 and openvswitch
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,logger
Q_AGENT=openvswitch
enable_service q-agt
# ml2 vxlan
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=1001:2000)
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_types=vxlan vxlan_udp_port=8472)
Q_USE_NAMESPACE=True
Q_USE_SECGROUP=True
Resulting network
I changed the default security policy for the demo project to be permissive.
The resulting network routes traffic between the devstack host and the private subnet, but not between the devstack host and 10.48.8.0/22, between instances and the physical 10.48.8.0/22 network, or between the physical 10.48.8.0/22 network and the public 10.48.8.0/22 subnet.
source \ destination       gateway      devstack     router1      private
                           10.48.8.1    10.48.11.6   10.48.11.57  10.90.100.0/24
physical 10.48.8.0/22      pings        X            X            n/a
devstack 10.48.11.6        X            pings        pings        pings
private 10.90.100.0/24     X            pings        pings        pings
Traffic leaving the public network should reach the physical network. Traffic leaving the private network should be NATed onto the public network. Traffic entering from the physical network should reach the public network.
The resulting OVS bridges are:
$ sudo ovs-vsctl show
33ab25b5-f5d9-4f9f-b30e-20452d099f2c
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "eth1"
Interface "eth1"
Port br-ex
Interface br-ex
type: internal
Bridge br-int
fail_mode: secure
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port "tapc5733ec7-e7"
tag: 1
Interface "tapc5733ec7-e7"
type: internal
Port "qvo280f2d3e-14"
tag: 1
Interface "qvo280f2d3e-14"
Port br-int
Interface br-int
type: internal
Port "qr-9a91aae3-7c"
tag: 1
Interface "qr-9a91aae3-7c"
type: internal
Port "qr-54611e0f-77"
tag: 1
Interface "qr-54611e0f-77"
type: internal
Port "qg-9a39ed65-f0"
tag: 2
Interface "qg-9a39ed65-f0"
type: internal
Bridge br-tun
fail_mode: secure
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
ovs_version: "2.0.2"
The routing table on the devstack box is:
$ ip route
default via 10.48.4.1 dev eth0
10.48.4.0/22 dev eth0 proto kernel scope link src 10.48.6.232
10.48.8.0/22 dev br-ex proto kernel scope link src 10.48.11.6
10.90.100.0/24 via 10.48.11.57 dev br-ex
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
The routing table of router1 is:
$ sudo ip netns exec qrouter-cf0137a4-49cc-45f9-bad8-5d71340b5462 ip route
default via 10.48.8.1 dev qg-9a39ed65-f0
10.48.8.0/22 dev qg-9a39ed65-f0 proto kernel scope link src 10.48.11.57
10.90.100.0/24 dev qr-9a91aae3-7c proto kernel scope link src 10.90.100.1
What's wrong? How can I set up a simple devstack that can host both public and private interfaces for nova instances?
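A hedged set of checks (not part of the original post) that usually narrows this down: confirm that eth1 is only a port on br-ex and carries no address of its own, that br-ex holds 10.48.11.6, and look at the NAT rules the l3 agent installed inside the router namespace:
ip addr show eth1
ip addr show br-ex
sudo ip netns exec qrouter-cf0137a4-49cc-45f9-bad8-5d71340b5462 iptables -t nat -S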
