I'm trying to set up a minimal devstack that can launch nova instances, some of which will have public addresses and some of which will need to open connections to the public network. I'd like to be able to assign floating IPs to the instances and have traffic originating from the instances with public addresses reach the public network.
Addressing
Devstack will be running on a single Ubuntu 14.04 box with two physical interfaces. The first interface eth0 is on 10.48.4.0/22, on which I own the address 10.48.6.232; this is the management connection to the box. The second interface eth1 is on 10.48.8.0/22 and owns the addresses 10.48.11.6 and 10.48.11.57-10.48.11.59. eth1 is configured to use the 10.48.11.6 address, leaving a small pool of addresses for the floating range.
auto eth1
iface eth1 inet static
address 10.48.11.6
netmask 255.255.252.0
I'd like to use the range 10.48.11.57-10.48.11.59 as the floating IP pool. This makes up the start of my local.conf:
[[local|localrc]]
# Devstack host IP eth1 address
HOST_IP=10.48.11.6
# Private network
FIXED_RANGE=10.90.100.0/24
NETWORK_GATEWAY=10.90.100.1
# Public network
Q_FLOATING_ALLOCATION_POOL=start=10.48.11.57,end=10.48.11.59
FLOATING_RANGE=10.48.8.0/22
PUBLIC_NETWORK_GATEWAY=10.48.8.1
# Public network is eth1
PUBLIC_INTERFACE=eth1
ML2
The remainder of the relevant part of my local.conf configures neutron and OVS to use the public network. I've followed the instructions in the comments in neutron-legacy.
# Neutron
# -------
PUBLIC_BRIDGE=br-ex
Q_USE_PROVIDERNET_FOR_PUBLIC=True
PUBLIC_PHYSICAL_NETWORK=public
OVS_BRIDGE_MAPPINGS=public:br-ex
# Neutron Provider Network
ENABLE_TENANT_TUNNELS=True
PHYSICAL_NETWORK=public
OVS_PHYSICAL_BRIDGE=br-ex
# Use ml2 and openvswitch
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,logger
Q_AGENT=openvswitch
enable_service q-agt
# ml2 vxlan
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=1001:2000)
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_types=vxlan vxlan_udp_port=8472)
Q_USE_NAMESPACE=True
Q_USE_SECGROUP=True
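After stack.sh completes, the resulting resources can be listed to confirm the floating pool and router were created as intended. A minimal sketch using the neutron CLI of that era (router1 is the devstack default router name):
$ source openrc admin admin
$ neutron net-list
$ neutron subnet-list
$ neutron router-show router1
$ neutron floatingip-list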
Resulting network
I changed the default security policy for the demo project to be permissive.
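By "permissive" I mean rules roughly along these lines, added as the demo user (a sketch with the era's neutron CLI; the exact rules and port ranges are illustrative, not the precise set I applied):
$ source openrc demo demo
# allow all inbound ICMP and TCP to the default security group (illustrative)
$ neutron security-group-rule-create --direction ingress --protocol icmp default
$ neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 1 --port-range-max 65535 default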
The resulting network routes traffic between the devstack host and the private subnet, but not between the devstack host and 10.48.8.0/22, between instances and the physical 10.48.8.0/22 network, or between the physical 10.48.8.0/22 network and the public 10.48.8.0/22 subnet.
source \ destination     gateway      devstack      router1       private
                         10.48.8.1    10.48.11.6    10.48.11.57   10.90.100.0/24
physical 10.48.8.0/22    pings        X             X             n/a
devstack 10.48.11.6      X            pings         pings         pings
private 10.90.100.0/24   X            pings         pings         pings
Traffic leaving the public network should reach the physical network. Traffic leaving the private network should be NATed onto the public network. Traffic entering from the physical network should reach the public network.
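One way to check the NAT side of this is to inspect the router namespace directly (a sketch; the namespace and qg- interface names are the ones that appear in the output below):
$ sudo ip netns exec qrouter-cf0137a4-49cc-45f9-bad8-5d71340b5462 iptables -t nat -S
$ sudo ip netns exec qrouter-cf0137a4-49cc-45f9-bad8-5d71340b5462 ip addr show qg-9a39ed65-f0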
The resulting ovs bridges are
$sudo ovs-vsctl show
33ab25b5-f5d9-4f9f-b30e-20452d099f2c
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "eth1"
Interface "eth1"
Port br-ex
Interface br-ex
type: internal
Bridge br-int
fail_mode: secure
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port "tapc5733ec7-e7"
tag: 1
Interface "tapc5733ec7-e7"
type: internal
Port "qvo280f2d3e-14"
tag: 1
Interface "qvo280f2d3e-14"
Port br-int
Interface br-int
type: internal
Port "qr-9a91aae3-7c"
tag: 1
Interface "qr-9a91aae3-7c"
type: internal
Port "qr-54611e0f-77"
tag: 1
Interface "qr-54611e0f-77"
type: internal
Port "qg-9a39ed65-f0"
tag: 2
Interface "qg-9a39ed65-f0"
type: internal
Bridge br-tun
fail_mode: secure
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
ovs_version: "2.0.2"
The routing table on the devstack box is
$ip route
default via 10.48.4.1 dev eth0
10.48.4.0/22 dev eth0 proto kernel scope link src 10.48.6.232
10.48.8.0/22 dev br-ex proto kernel scope link src 10.48.11.6
10.90.100.0/24 via 10.48.11.57 dev br-ex
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
The routing table of router1 is
$sudo ip netns exec qrouter-cf0137a4-49cc-45f9-bad8-5d71340b5462 ip route
default via 10.48.8.1 dev qg-9a39ed65-f0
10.48.8.0/22 dev qg-9a39ed65-f0 proto kernel scope link src 10.48.11.57
10.90.100.0/24 dev qr-9a91aae3-7c proto kernel scope link src 10.90.100.1
What's wrong? How can I set up a simple devstack that can host both public and private interfaces for nova instances?
Related
My openwrt-x86 has been running for a while inside an ESXi virtual environment (it's a VM; eth0 and eth1 are virtual NICs from ESXi). One day I tried to add a passthrough port (eth2, physical) to this OpenWrt as a LAN port, so I could access the LAN managed by this OpenWrt by physically connecting a wire to eth2. I found that I could get an IP address over DHCP normally, but could not connect to any other IP address on the same LAN except the OpenWrt itself and the WAN network.
My OpenWrt network config file was:
root@OpenWrt:/etc/config# cat network
config interface 'loopback'
option device 'lo'
option proto 'static'
option ipaddr '127.0.0.1'
option netmask '255.0.0.0'
config globals 'globals'
option ula_prefix 'fdc8:982a:611a::/48'
config device
option name 'br-lan'
option type 'bridge'
list ports 'eth0'
list ports 'eth2'
option ipv6 '0'
config interface 'lan'
option device 'br-lan'
option proto 'static'
option ip6assign '60'
option ipaddr '10.0.0.1'
option netmask '255.255.0.0'
config interface 'wan'
option device 'eth1'
option proto 'dhcp'
option metric '5'
config interface 'wan6'
option device 'eth1'
option proto 'dhcpv6'
For example, I got the DHCP address 10.0.0.10 by physically connecting to eth2. My WAN access was still fine (I could reach Google), but when I tried to ping 10.0.0.151 (a VM on the OpenWrt's LAN) I got ICMP not reachable:
[root@master1 ~]# ping 10.0.0.151
PING 10.0.0.151 (10.0.0.151) 56(84) bytes of data.
From 10.0.0.10 icmp_seq=1 Destination Host Unreachable
From 10.0.0.10 icmp_seq=2 Destination Host Unreachable
From 10.0.0.10 icmp_seq=3 Destination Host Unreachable
From 10.0.0.10 icmp_seq=4 Destination Host Unreachable
From 10.0.0.10 icmp_seq=5 Destination Host Unreachable
From 10.0.0.10 icmp_seq=6 Destination Host Unreachable
and the route table on 10.0.0.10 seems fine
[root@master1 ~]# ip route
default via 10.0.0.1 dev ens192 proto dhcp src 10.0.0.10 metric 100
10.0.0.0/16 dev ens192 proto kernel scope link src 10.0.0.10 metric 100
Solved. By default ESXi sets the internal vSwitch security options
Promiscuous Mode = false
Forged Transmits = false
so VMs on the virtual LAN cannot receive the ARP responses being delivered. Enabling both options makes it work.
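For anyone hitting the same thing, both settings can also be flipped from the ESXi host CLI (a sketch assuming a standard vSwitch named vSwitch0; adjust the switch name, or apply the equivalent settings on the specific port group, to match your environment):
# vSwitch0 is an assumption; substitute your vSwitch name
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=true --allow-forged-transmits=true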
My device is running the Debian OS, stretch version (not desktop).
I am not an IT person but a programmer. I need to know how to configure the network on Debian so that both the PPP cellular modem and the ethernet interface can access the internet.
There are 3 network interfaces:
1. Ethernet interface enp1s0: DHCP client (gets an IP from the DHCP server and has access to the internet)
2. Ethernet interface enp2s0: static IP
3. Modem PPP: wvdial gets access to the internet using the modem
/etc/network/interfaces file:
auto lo
iface lo inet loopback
allow-hotplug enp1s0
iface enp1s0 inet dhcp
auto enp2s0
iface enp2s0 inet static
address 10.0.13.1
netmask 255.0.0.0
manual ppp0
iface ppp0 inet wvdial
ip route
default via 10.0.0.100 dev enp1s0
10.0.0.0/24 dev enp1s0 proto kernel scope link src 10.0.0.11
10.0.0.0/8 dev enp2s0 proto kernel scope link src 10.0.13.1
/etc/resolv.conf file:
domain mydomain.local
search mydomain.local
nameserver 10.0.0.3
/etc/wvdial.conf file:
[Dialer Defaults]
Init1 = ATZ
Init2 = ATQ0 V1 E1 S0=0
Init3 = AT+CGDCONT=1,"IP","internetg"
Init4 = AT+CGATT=1
Phone = *99***1#
Modem Type = USB Modem
Baud = 460800
New PPPD = yes
Modem = /dev/ttyACM2
ISDN = 0
Password = ''
Username = ''
Auto DNS = Off
/etc/ppp/peers/wvdial file:
noauth
name wvdial
usepeerdns
Problem:
1. My device is running and enp1s0 is connected to the internet. (modem is down)
2. I then run the command to dial up the PPP: ifup ppp0
3. As a result, the device ppp0 appears in 'ip a', but the ethernet interface enp1s0 is no longer connected to the internet, and the modem is not connected either even though it has an IP, which suggests a problem with the routing table and/or DNS.
After dialup, the ip route table does not have any default route or rule for the PPP.
ip route:
default via 10.0.0.100 dev enp1s0
10.0.0.0/24 dev enp1s0 proto kernel scope link src 10.0.0.11
10.0.0.0/8 dev enp2s0 proto kernel scope link src 10.0.13.1
After dialup I noticed that the /etc/resolv.conf file changed: the DNS entries of the ethernet interface were deleted and the PPP DNS entries now appear:
/etc/resolv.conf
nameserver 194.90.0.11
nameserver 212.143.0.11
domain mydomain.local
search mydomain.local
The network should behave as follows:
1. If both the PPP and the ethernet interface are up, then both should have access to the internet at the same time
2. If only one of the devices is up (PPP or ethernet interface), then it should work
3. Dialup/Dialdown should not affect the ethernet connection to the internet
What are the exact commands and file configurations needed for the PPP and the ethernet interface enp1s0 to work at the same time?
- ip routing table
- dns
- wvdial
For the default route, add the defaultroute and replacedefaultroute options to the /etc/ppp/peers/wvdial file.
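A minimal sketch of the resulting /etc/ppp/peers/wvdial (only the last two lines are additions; the rest is the file from the question):
noauth
name wvdial
usepeerdns
# added for default route handling (replacedefaultroute is a Debian-patched pppd option)
defaultroute
replacedefaultroute
defaultroute asks pppd to install a default route via the peer, and replacedefaultroute lets it replace the existing enp1s0 default route while the link is up.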
I am attempting to get a simple 2-node deployment set up via devstack. I have followed both the multi-node lab and the "Using devstack with Neutron" guide; I made the most progress with the latter. However, I still cannot seem to communicate with instances running on my compute-only node. Instances that run on the controller/compute node seem fine: I can ping/ssh to them from any machine in my office.
My environment: 2 Ubuntu 18.04 bare-metal servers on a private network with a router handing out DHCP addresses (I have a small range of addresses set aside). I disabled Ubuntu NetworkManager and configured the interface via ifupdown in /etc/network/interfaces:
auto enp0s31f6
iface enp0s31f6 inet static
address 192.168.7.170
netmask 255.255.255.0
gateway 192.168.7.254
multicast 192.168.7.255
dns-nameservers 8.8.8.8 8.8.4.4
The controller/compute node local.conf is configured according to the guide:
[[local|localrc]]
HOST_IP=192.168.7.170
SERVICE_HOST=192.168.7.170
MYSQL_HOST=192.168.7.170
RABBIT_HOST=192.168.7.170
GLANCE_HOSTPORT=192.168.7.170:9292
DATABASE_PASSWORD=Passw0rd
RABBIT_PASSWORD=Passw0rd
SERVICE_PASSWORD=Passw0rd
ADMIN_PASSWORD=Passw0rd
LOGFILE=/opt/stack/logs/stack.sh.log
## Neutron options
Q_USE_SECGROUP=True
FLOATING_RANGE="192.168.7.0/24"
IPV4_ADDRS_SAFE_TO_USE="10.0.0.0/22"
Q_FLOATING_ALLOCATION_POOL=start=192.168.7.249,end=192.168.7.253
PUBLIC_NETWORK_GATEWAY="192.168.7.254"
PUBLIC_INTERFACE=enp0s31f6
# Open vSwitch provider networking configuration
Q_USE_PROVIDERNET_FOR_PUBLIC=True
Q_ASSIGN_GATEWAY_TO_PUBLIC_BRIDGE=False
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex
The one difference is Q_ASSIGN_GATEWAY_TO_PUBLIC_BRIDGE. I found that if I did not set this, I saw a lot of packet loss on the server. I don't understand why the gateway would be added to the vSwitch as a secondary address.
Another oddity I noticed is that once the OVS bridge was set up and my public interface added as a port, the network gateway no longer worked as a DNS server. If I use Google's, it's fine.
On the compute-only node I have this local.conf:
[[local|localrc]]
HOST_IP=192.168.7.172
LOGFILE=/opt/stack/logs/stack.sh.log
SERVICE_HOST=192.168.7.170
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
ADMIN_PASSWORD=Passw0rd
DATABASE_PASSWORD=Passw0rd
RABBIT_PASSWORD=Passw0rd
SERVICE_PASSWORD=Passw0rd
PUBLIC_INTERFACE=enp0s31f6
ENABLED_SERVICES=n-cpu,rabbit,q-agt,placement-client
I run stack.sh on the controller/compute node, then on the compute-only node. The installation looks good. I can set up the security group, ssh keypair, etc., and launch instances. I allocate floating IPs for each and associate them; the addresses come from the pool as expected. I can see the tunnels set up on each node with OVS:
controller$ sudo ovs-vsctl show
1cc8a95d-660d-453f-9772-02393adc2031
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-tun
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port "vxlan-c0a807ac"
Interface "vxlan-c0a807ac"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="192.168.7.170", out_key=flow, remote_ip="192.168.7.172"}
Bridge br-ex
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port br-ex
Interface br-ex
type: internal
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "enp0s31f6"
Interface "enp0s31f6"
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "qg-7db4efa8-8f"
tag: 2
Interface "qg-7db4efa8-8f"
type: internal
Port "tap88eb8a36-86"
tag: 1
Interface "tap88eb8a36-86"
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port br-int
Interface br-int
type: internal
Port "qr-e0e43871-2d"
tag: 1
Interface "qr-e0e43871-2d"
type: internal
Port "qvo5a54876d-0c"
tag: 1
Interface "qvo5a54876d-0c"
Port "qr-9452dacf-82"
tag: 1
Interface "qr-9452dacf-82"
type: internal
ovs_version: "2.8.1"
and on the compute-only node:
compute$ sudo ovs-vsctl show
c817878d-7127-4d17-9a69-4ff296adc157
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "qvo1b56a018-10"
tag: 1
Interface "qvo1b56a018-10"
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port br-int
Interface br-int
type: internal
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Bridge br-ex
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port br-ex
Interface br-ex
type: internal
Bridge br-tun
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "vxlan-c0a807aa"
Interface "vxlan-c0a807aa"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="192.168.7.172", out_key=flow, remote_ip="192.168.7.170"}
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
ovs_version: "2.8.1"
Any ideas on what might be wrong in my setup? Any suggestions for what I might try?
It turned out that Ubuntu NetworkManager was reasserting itself after I had stopped the services. I took the more drastic step of disabling and purging it from my servers: systemctl disable NetworkManager.service and then apt-get purge network-manager.
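For reference, the steps were roughly as follows (a sketch; run as root):
systemctl stop NetworkManager.service
systemctl disable NetworkManager.service
apt-get purge network-manager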
Once it was gone for good, things started working as advertised. I started with the local.conf above and was able to spin up instances on both servers and connect to them, and they had no trouble connecting to each other, etc. I then added more pieces to my stack (heat, magnum, barbican, lbaasv2) and things continue to be reliable.
The moral of the story is: Ubuntu NetworkManager and the devstack OVS config do not play well together. To get the latter working, you must remove the former (as near as I can tell).
Also, prior to all this trouble with OVS, I had to apply a proposed fix to devstack's lib/etcd3 script on my compute-only node. It's a small but required change in the stable/queens branch as of 27-Sept-2018; see https://github.com/openstack-dev/devstack/commit/19279b0f8. Without it, stack.sh fails on the compute node trying to bind to an address on the controller node.
I installed OpenStack Liberty in a 2-node configuration (1 controller, 1 compute), each node having 1 public NIC and 1 private NIC, following this DVR scenario: http://docs.openstack.org/networking-guide/scenario_dvr_ovs.html
On the controller node I created br-ex, which has eth0's IP (this is the public NIC), and installed the l3-agent (dvr_snat mode), ovs-agent, dhcp-agent and related services.
Using the admin account I created the ext-net and attached my subnet to it.
Using the demo tenant, I then created a demo-net, a demo-subnet and a demo-router, and set the gateway with neutron router-gateway-set demo-net ext-net.
So my ovs-vsctl show looks like the following:
Bridge br-int
fail_mode: secure
Port "sg-ef30b544-a4"
tag: 4095
Interface "sg-ef30b544-a4"
type: internal
Port "qr-a4b8653c-78"
tag: 4095
Interface "qr-a4b8653c-78"
type: internal
Port "qg-d33db11d-60"
tag: 1
Interface "qg-d33db11d-60"
type: internal
Port br-int
Interface br-int
type: internal
Port "tap9f36ccde-1e"
tag: 4095
Interface "tap9f36ccde-1e"
type: internal
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Bridge br-ex
Port "eth0"
Interface "eth0"
Port br-ex
Interface br-ex
type: internal
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Bridge br-tun
Port br-tun
Interface br-tun
type: internal
ovs_version: "2.4.0"
and the network namespaces
root@controller:~# ip netns
qdhcp-3e662de0-9a85-4d7d-bb85-b9d4568ceaec
snat-f3f6213c-384c-4ec5-914c-e98aba89936f
qrouter-f3f6213c-384c-4ec5-914c-e98aba89936f
My problem is that the l3-agent fails to set up the SNAT network, because it seems that the network is unreachable.
ERROR neutron.agent.l3.agent Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'snat-f3f6213c-384c-4ec5-914c-e98aba89936f', 'ip', '-4', 'route', 'replace', 'default', 'via', '149.XXX.YYY.ZZZ', 'dev', 'qg-d33db11d-60']
ERROR neutron.agent.l3.agent Exit code: 2
ERROR neutron.agent.l3.agent Stdin:
ERROR neutron.agent.l3.agent Stdout:
ERROR neutron.agent.l3.agent Stderr: RTNETLINK answers: Network is unreachable
ping -I br-ex 8.8.8.8 works.
ping -I br-int 8.8.8.8 says network unreachable.
As you can see there is a patch between br-int and br-ex, so it should work, but it doesn't.
I am new to OpenStack and I followed the Icehouse installation guide for Ubuntu 12.04/14.04.
I chose the 3-node architecture: Controller, Nova, Neutron.
The 3 nodes are installed in VMs; I used nested KVM. Inside the VMs KVM is supported, so nova will use virt_type=kvm. On the controller I created 2 NICs: eth0 is a NAT interface with IP 203.0.113.94 and eth1 a host-only interface with IP 10.0.0.11.
On the nova node there are 3 NICs: eth0 NAT 203.0.113.23, eth1 host-only 10.0.0.31 and eth2 another host-only 10.0.1.31.
On the neutron node, 3 NICs: eth0 NAT 203.0.113.234, eth1 host-only 10.0.0.21 and eth2 another host-only 10.0.1.21. During the installation guide, on the neutron node I created a br-ex (and a port to eth0) which took over the settings of eth0; the eth0 settings are:
auto eth0
iface eth0 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down
Everything seemed fine: I can create networks, routers, etc., and boot instances, but I have this problem.
When I launch an instance it gets a fixed IP, but when I log in to the instance (cirros) it can't ping anything; ifconfig shows no IP.
I noticed that in the demo-net (tenant network) properties, under the subnet, the ports field lists 3 ports:
172.16.1.1 network:router_interface active
172.16.1.3 network:dhcp active
172.16.1.6 compute:nova down
I searched for solutions over the net but couldn't find anything!
Any help?
Ask me if you want specific logs because I don't know which ones to post!
Thanks anyway!
It looks like you are using the fixed IP to ping. If so, please assign a floating IP to your instance and then try to ping.
If you have already assigned a floating IP and are pinging using that IP, please upload the log of your instance.
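If the floating IP still needs to be allocated and attached, a sketch with the Icehouse-era clients (ext-net and the instance name are placeholders for your own setup):
# <instance-name> and <floating-ip-address> are placeholders
$ neutron floatingip-create ext-net
$ nova add-floating-ip <instance-name> <floating-ip-address>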