Ping fails to the second IP of an OpenStack instance

I have an RDO OpenStack environment on a machine for testing. RDO was installed with the packstack --allinone command. Using HOT I have created two instances, one with a CirrOS image and another with Fedora. The Fedora instance has two interfaces connected to the same network, while the CirrOS instance has a single interface on that same network. The template looks like this:
heat_template_version: 2015-10-15
description: Simple template to deploy two compute instances
resources:
  local_net:
    type: OS::Neutron::Net
  local_signalling_subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: local_net }
      cidr: "50.0.0.0/24"
      ip_version: 4
  fed:
    type: OS::Nova::Server
    properties:
      image: fedora
      flavor: m1.small
      key_name: heat_key
      networks:
        - port: { get_resource: fed_port1 }
        - port: { get_resource: fed_port2 }
  fed_port1:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: local_net }
  fed_port2:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: local_net }
  cirr:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.tiny
      key_name: heat_key
      networks:
        - port: { get_resource: cirr_port }
  cirr_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: local_net }
The Fedora instance got two IPs (50.0.0.3 and 50.0.0.4); CirrOS got 50.0.0.5. I can ping 50.0.0.3 from the CirrOS instance but not 50.0.0.4. Only if I manually bring down the interface with IP 50.0.0.3 on the Fedora instance can I ping 50.0.0.4 from the CirrOS instance. Is there a restriction in the Neutron configuration that prevents pinging both IPs of the Fedora instance at the same time? Please help.

This happens because of the default firewalling done by OpenStack networking (Neutron): it simply drops any packet coming from a port if the packet's source address does not match the IP address assigned to that port.
When the CirrOS instance sends a ping packet to 50.0.0.4, the Fedora instance receives it on the interface with IP address 50.0.0.4. However, when it responds to the CirrOS instance's address 50.0.0.5, the Linux networking stack on the Fedora machine has two interfaces to choose from for sending the response (because both interfaces are connected to the same network). In your case, Fedora chose to respond via the interface with 50.0.0.3. The source IP address of the packet is still 50.0.0.4, though, so the OpenStack networking layer simply drops it.
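If you do want to keep both interfaces on one network, Neutron can be told to accept the extra source address instead of dropping it. A sketch using the standard OpenStack CLI (the port IDs are placeholders; this assumes the port-security and allowed-address-pairs extensions are enabled, which is the usual default):

# let packets sourced from 50.0.0.4 leave the 50.0.0.3 port, and vice versa
openstack port set --allowed-address ip-address=50.0.0.4 <fed_port1-id>
openstack port set --allowed-address ip-address=50.0.0.3 <fed_port2-id>

# or turn the per-port firewall off entirely (security groups must be cleared first)
openstack port set --no-security-group --disable-port-security <fed_port1-id>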
The general recommendation is not to have multiple interfaces on the same network. If you want multiple IP addresses from the same network for your VM, you can use the "fixed_ips" option in your heat template:
fed_port1:
  type: OS::Neutron::Port
  properties:
    network_id: { get_resource: local_net }
    fixed_ips:
      - ip_address: "50.0.0.4"
      - ip_address: "50.0.0.3"
Since the DHCP server will offer only one IP address, Fedora will be configured with only one IP. You can add the other IP to the interface using the "ip addr add" command (see http://www.unixwerk.eu/linux/redhat/ipalias.html):
ip addr add 50.0.0.3/24 brd + dev eth0 label eth0:0
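A quick check inside the Fedora guest that both addresses are now active (the interface name eth0 is assumed, as above; note that an alias added this way does not survive a reboot unless you also persist it in the interface configuration):

ip -4 addr show dev eth0   # should now list both 50.0.0.4 and 50.0.0.3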

Related

Can we rename ports in a heat template? Do they appear in the VM in the same order as listed in the template?

I want to create a VM with 3 SR-IOV ports (heat template pasted below). I would like each port to appear with a specific name in the VM; is that possible? Is there a guarantee that the ports will appear in the VM in the order specified in the heat template?
For example:
resources:
  vm1_server_0:
    type: OS::Nova::Server
    properties:
      name: {get_param: [vm1_names, 0]}
      image: {get_param: vm1_image_name}
      flavor: {get_param: vm1_flavor_name}
      availability_zone: {get_param: availability_zone_0}
      networks:
        - port: {get_resource: vm1_0_direct_port_0}
        - port: {get_resource: vm1_0_direct_port_1}
        - port: {get_resource: vm1_0_direct_port_2}
Can I rename vm1_0_direct_port_0 to "eth0", vm1_0_direct_port_1 to "10Geth0", and vm1_0_direct_port_2 to "10Geth1" in the heat template itself?
If the above is not possible, I need to be sure of the order in which they appear in lspci | grep "Virtual Function" (if those are SR-IOV ports), i.e. vm1_0_direct_port_0 appearing as 0000:04:01.00, vm1_0_direct_port_1 as 0000:04:01.01, and vm1_0_direct_port_2 as 0000:04:01.02, so that I can rename them using udev rules in the VM, as sketched below.
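For the udev approach, a minimal sketch of such rules, keyed on the PCI bus address rather than on discovery order (the addresses and names are the hypothetical ones from the question; whether a given port reliably lands at a given PCI address depends on the hypervisor, which is exactly the guarantee being asked about):

# /etc/udev/rules.d/70-sriov-names.rules  (hypothetical addresses and names)
SUBSYSTEM=="net", ACTION=="add", KERNELS=="0000:04:01.0", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", KERNELS=="0000:04:01.1", NAME="10Geth0"
SUBSYSTEM=="net", ACTION=="add", KERNELS=="0000:04:01.2", NAME="10Geth1"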

Vagrant Setup with two boxes connected via a third box simulating a switch/bridge

I would like a setup in which two VMs can only talk to each other via a third box simulating a switch, or just a bridge for starters. I care about the host network or outside connectivity only to the extent that I want to ssh into each box.
I tried to build on the tutorial for multiple machines as follows:
Vagrant.configure("2") do |config|
  config.vm.define "switch" do |switch|
    switch.vm.box = "hashicorp/bionic64"
    switch.vm.network "public_network", ip: "192.168.50.6"
  end
  config.vm.define "interlocking" do |interlocking|
    interlocking.vm.box = "hashicorp/bionic64"
    interlocking.vm.network "public_network", ip: "192.168.50.5", bridge: "192.168.50.6"
  end
  config.vm.define "point" do |point|
    point.vm.box = "hashicorp/bionic64"
    point.vm.network "public_network", ip: "192.168.50.4", bridge: "192.168.50.6"
  end
end
But I don't know how to stop the two VMs from just finding each other in the network right away without using the bridge. Can somebody point me in the right direction?
A good way to do this outside of vagrant would also be fine.
I ended up using Open vSwitch, with this configuration in Ansible:
- hosts: all
  become: true
  tasks:
    - name: install wireshark
      apt:
        name: wireshark
    - name: install tshark
      apt:
        name: tshark
    - name: install Open vSwitch
      apt:
        name: openvswitch-switch
    - name: create bridge interface br0
      openvswitch_bridge:
        bridge: br0
    - name: bridging ethNode1
      openvswitch_port:
        bridge: br0
        port: eth1
    - name: bridging ethNode2
      openvswitch_port:
        bridge: br0
        port: eth2
    - name: bridging ethNode3
      openvswitch_port:
        bridge: br0
        port: eth3
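After the playbook runs on the switch VM, a quick sanity check (standard Open vSwitch and iproute2 commands; br0 and the eth ports are the names from the playbook):

sudo ovs-vsctl show              # br0 should list eth1, eth2 and eth3 as ports
sudo ip link set br0 up          # the bridge interface itself must be up
sudo ovs-ofctl dump-flows br0    # the default single NORMAL flow = a plain learning switch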

Can't access VMs across different zones

I have 2 VM instances using the same network (default) and the same subnet (default), but in 2 different zones. I connected to one VM and pinged the other, but got no response. What do I have to do to make them communicate? Below is the information about the system:
Network:
- Name: default
Subnet:
- Name: default
- Network: default
- Ip range: 10.148.0.0/20
- Region: asia-southeast1
VM1:
- Subnet: default
- IP: 10.148.0.54
- Zone: asia-southeast1-c
VM2:
- Subnet: default
- IP: 10.148.0.56
- Zone: asia-southeast1-b
Please help me! thank you!
First, check whether ARP resolves for the remote VM you want to ping.
Also check whether a firewall rule on the default network is blocking communication between the VMs.
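A minimal way to inspect and fix the firewall side, assuming this is Google Cloud as the zone names suggest (the gcloud commands are standard; the rule name allow-internal is just an example):

# list the rules that apply to the default network
gcloud compute firewall-rules list --filter="network:default"

# if nothing allows internal traffic, add a rule covering the subnet range
gcloud compute firewall-rules create allow-internal \
    --network=default --allow=icmp,tcp,udp \
    --source-ranges=10.148.0.0/20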

OpenStack multi-node network configuration on Ubuntu

I am attempting to get a simple 2-node deployment set up via devstack. I have followed both the multi-node lab and the "Using devstack with Neutron" guide; I made the most progress with the latter. However, I still cannot seem to communicate with instances running on my compute-only node. Instances that run on the controller/compute node are fine: I can ping/ssh to them from any machine in my office.
My environment: two Ubuntu 18.04 bare-metal servers on a private network, with a router handing out DHCP addresses (I have a small range of addresses set aside). I disabled Ubuntu NetworkManager and configured the interface via ifupdown in /etc/network/interfaces:
auto enp0s31f6
iface enp0s31f6 inet static
address 192.168.7.170
netmask 255.255.255.0
gateway 192.168.7.254
broadcast 192.168.7.255
dns-nameservers 8.8.8.8 8.8.4.4
The controller/compute node's local.conf is configured according to the guide:
[[local|localrc]]
HOST_IP=192.168.7.170
SERVICE_HOST=192.168.7.170
MYSQL_HOST=192.168.7.170
RABBIT_HOST=192.168.7.170
GLANCE_HOSTPORT=192.168.7.170:9292
DATABASE_PASSWORD=Passw0rd
RABBIT_PASSWORD=Passw0rd
SERVICE_PASSWORD=Passw0rd
ADMIN_PASSWORD=Passw0rd
LOGFILE=/opt/stack/logs/stack.sh.log
## Neutron options
Q_USE_SECGROUP=True
FLOATING_RANGE="192.168.7.0/24"
IPV4_ADDRS_SAFE_TO_USE="10.0.0.0/22"
Q_FLOATING_ALLOCATION_POOL=start=192.168.7.249,end=192.168.7.253
PUBLIC_NETWORK_GATEWAY="192.168.7.254"
PUBLIC_INTERFACE=enp0s31f6
# Open vSwitch provider networking configuration
Q_USE_PROVIDERNET_FOR_PUBLIC=True
Q_ASSIGN_GATEWAY_TO_PUBLIC_BRIDGE=False
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex
The one difference is Q_ASSIGN_GATEWAY_TO_PUBLIC_BRIDGE. I found that if I did not set this, I saw a lot of packet loss on the server. I don't understand why the gateway would be added to the vSwitch as a secondary address.
Another oddity I noticed: once the OVS bridge was set up and my public interface added as a port, the network gateway no longer worked as a DNS server. If I use Google's, it's fine.
On the compute-only node I have this local.conf:
[[local|localrc]]
HOST_IP=192.168.7.172
LOGFILE=/opt/stack/logs/stack.sh.log
SERVICE_HOST=192.168.7.170
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
ADMIN_PASSWORD=Passw0rd
DATABASE_PASSWORD=Passw0rd
RABBIT_PASSWORD=Passw0rd
SERVICE_PASSWORD=Passw0rd
PUBLIC_INTERFACE=enp0s31f6
ENABLED_SERVICES=n-cpu,rabbit,q-agt,placement-client
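One step worth double-checking in a layout like this (a standard cells v2 step for multinode devstack, not something from the original post): after stack.sh finishes on the compute node, the controller has to discover the new host:

# run on the controller once the compute node has stacked
nova-manage cell_v2 discover_hosts --verbose
openstack compute service list    # nova-compute should now be listed for both hosts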
I run stack.sh on the controller/compute node, then on the compute-only node. The installation looks good: I can set up the security group, ssh keypair, etc. and launch instances. I allocate floating IPs for each and associate them; the addresses come from the pool as expected. I can see the tunnels set up on each node with OVS:
controller$ sudo ovs-vsctl show
1cc8a95d-660d-453f-9772-02393adc2031
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-c0a807ac"
            Interface "vxlan-c0a807ac"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.7.170", out_key=flow, remote_ip="192.168.7.172"}
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "enp0s31f6"
            Interface "enp0s31f6"
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "qg-7db4efa8-8f"
            tag: 2
            Interface "qg-7db4efa8-8f"
                type: internal
        Port "tap88eb8a36-86"
            tag: 1
            Interface "tap88eb8a36-86"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
        Port "qr-e0e43871-2d"
            tag: 1
            Interface "qr-e0e43871-2d"
                type: internal
        Port "qvo5a54876d-0c"
            tag: 1
            Interface "qvo5a54876d-0c"
        Port "qr-9452dacf-82"
            tag: 1
            Interface "qr-9452dacf-82"
                type: internal
    ovs_version: "2.8.1"
and on the compute-only node:
compute$ sudo ovs-vsctl show
c817878d-7127-4d17-9a69-4ff296adc157
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "qvo1b56a018-10"
            tag: 1
            Interface "qvo1b56a018-10"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "vxlan-c0a807aa"
            Interface "vxlan-c0a807aa"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.7.172", out_key=flow, remote_ip="192.168.7.170"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.8.1"
Any ideas on what might be wrong in my setup? Any suggestions for what I might try?
It turned out that Ubuntu NetworkManager was reasserting itself after I had stopped the services. I took the more drastic step of disabling it and purging it from my servers: systemctl disable NetworkManager.service and then apt-get purge network-manager.
Once it was gone for good, things started working as advertised. I started with the local.conf above and was able to spin up instances on both servers and connect to them, and they had no trouble connecting to each other, etc. I then added more pieces to my stack (heat, magnum, barbican, lbaasv2) and things continue to be reliable.
The moral of the story is: Ubuntu NetworkManager and the devstack OVS configuration do not play well together. To get the latter working, you must remove the former (as near as I can tell).
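A quick way to confirm NetworkManager is really out of the picture before re-running stack.sh (just a sanity check; once the package is purged the second command should fail outright):

systemctl status NetworkManager.service   # expect "not-found" or "inactive"
nmcli general status                      # should no longer exist after the purge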
Also, prior to all this trouble with OVS, I had to apply a proposed fix to devstack's lib/etcd3 script on my compute-only node. It is a small, but required, change in the stable/queens branch as of 27-Sept-2018; see https://github.com/openstack-dev/devstack/commit/19279b0f8. Without it, stack.sh fails on the compute node trying to bind to an address on the controller node.
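One way to pull that fix into a local checkout, assuming devstack is cloned at /opt/stack/devstack (the path is an assumption; the commit hash is the one linked above):

cd /opt/stack/devstack
git fetch origin
git cherry-pick 19279b0f8   # the lib/etcd3 fix referenced above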

neutron openvswitch br-int can't reach external network br-ex through patch

I installed OpenStack Liberty in a 2-node configuration (1 controller, 1 compute), each node having one public NIC and one private NIC, following this DVR scenario: http://docs.openstack.org/networking-guide/scenario_dvr_ovs.html
On the controller node I created br-ex, which has eth0's IP (this is the public NIC), and installed the l3-agent (dvr_snat mode), ovs-agent, dhcp-agent and related services.
Using the admin account I created the ext-net and attached my subnet to it.
Using the demo tenant, I then created a demo-net, a demo-subnet and a demo-router, and set the gateway: neutron router-gateway-set demo-router ext-net
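As a quick sanity check at this point, the neutron client of that era can confirm the gateway actually attached (demo-router and ext-net are the names from above):

neutron router-show demo-router       # external_gateway_info should reference ext-net
neutron router-port-list demo-router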
So my ovs-vsctl show output looks like the following:
Bridge br-int
    fail_mode: secure
    Port "sg-ef30b544-a4"
        tag: 4095
        Interface "sg-ef30b544-a4"
            type: internal
    Port "qr-a4b8653c-78"
        tag: 4095
        Interface "qr-a4b8653c-78"
            type: internal
    Port "qg-d33db11d-60"
        tag: 1
        Interface "qg-d33db11d-60"
            type: internal
    Port br-int
        Interface br-int
            type: internal
    Port "tap9f36ccde-1e"
        tag: 4095
        Interface "tap9f36ccde-1e"
            type: internal
    Port int-br-ex
        Interface int-br-ex
            type: patch
            options: {peer=phy-br-ex}
Bridge br-ex
    Port "eth0"
        Interface "eth0"
    Port br-ex
        Interface br-ex
            type: internal
    Port phy-br-ex
        Interface phy-br-ex
            type: patch
            options: {peer=int-br-ex}
Bridge br-tun
    Port br-tun
        Interface br-tun
            type: internal
ovs_version: "2.4.0"
and the network namespaces:
root@controller:~# ip netns
qdhcp-3e662de0-9a85-4d7d-bb85-b9d4568ceaec
snat-f3f6213c-384c-4ec5-914c-e98aba89936f
qrouter-f3f6213c-384c-4ec5-914c-e98aba89936f
My problem is that the l3-agent fails to set up the SNAT network, because it seems that the network is unreachable:
ERROR neutron.agent.l3.agent Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'snat-f3f6213c-384c-4ec5-914c-e98aba89936f', 'ip', '-4', 'route', 'replace', 'default', 'via', '149.XXX.YYY.ZZZ', 'dev', 'qg-d33db11d-60']
ERROR neutron.agent.l3.agent Exit code: 2
ERROR neutron.agent.l3.agent Stdin:
ERROR neutron.agent.l3.agent Stdout:
ERROR neutron.agent.l3.agent Stderr: RTNETLINK answers: Network is unreachable
ping -I br-ex 8.8.8.8 works.
ping -I br-int 8.8.8.8 says network unreachable.
As you can see there is a patch between br-int and br-ex, so it should work, but it doesn't.
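Worth noting when debugging this: br-int runs with fail_mode: secure, meaning it only forwards what the agent's OpenFlow rules explicitly allow, so ping -I br-int is not a meaningful test of the patch link. A couple of standard diagnostics instead (stock OVS and iproute2 commands; the namespace name is the one from the ip netns output above):

sudo ovs-ofctl dump-flows br-int    # the flows the agent actually installed
sudo ip netns exec snat-f3f6213c-384c-4ec5-914c-e98aba89936f ip addr
sudo ip netns exec snat-f3f6213c-384c-4ec5-914c-e98aba89936f ip route

Also, tag: 4095 on the qr-/sg- ports in the output above is Neutron's "dead VLAN", which generally means the agent never finished wiring those ports; that is usually the first thing to chase in the l3-agent and ovs-agent logs.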
