I am testing OpenStack on 4 computers: 1 deploy node and 3 hosts.
The 3 hosts have 2 NICs each: one for connectivity with the LAN, the other dedicated to OpenStack. For example, enp2s0 is on the LAN DHCP network (172.16.0.1/12) and enp3s0 is OpenStack-only (see the configuration for each machine below). It is configured in the single-NIC style on enp3s0.
The deploy computer has one NIC. No specific configuration has been added to it.
According to the setup manual, the first thing I have to do is run openstack-ansible setup-hosts.yml, which completes without any problems. Then I execute openstack-ansible setup-infrastructure.yml, which crashes at the task "Get list of repo packages".
Ansible gives this reason: fatal: [infra1_utility_container-4c9c698c]: FAILED! => {"changed": false, "content": "", "elapsed": 0, "msg": "Status code was -1 and not [200]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "http://172.29.236.11:8181/constraints/upper_constraints_cached.txt"}.
I don't understand why connectivity to the utility container disappears. I went onto the infra computer: the container is started and iptables is on ACCEPT by default. I have no idea what is wrong.
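For reference, these are roughly the checks I ran on the infra host (the port and URL come from the error above; exact container names may differ):
lxc-ls -f | grep -E 'utility|repo'    # confirm the utility and repo containers are running
ss -lntp | grep 8181                  # is anything listening on the repo/load-balancer port?
curl -v http://172.29.236.11:8181/constraints/upper_constraints_cached.txt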
Here is the user configuration used:
---
cidr_networks:
container: 172.29.236.0/22
tunnel: 172.29.240.0/22
storage: 172.29.244.0/22
used_ips:
- "172.29.236.1,172.29.236.50"
- "172.29.240.1,172.29.240.50"
- "172.29.244.1,172.29.244.50"
- "172.29.248.1,172.29.248.50"
global_overrides:
# The internal and external VIP should be different IPs, however they
# do not need to be on separate networks.
external_lb_vip_address: 172.29.236.11
internal_lb_vip_address: 172.29.236.11
management_bridge: "br-mgmt"
provider_networks:
- network:
container_bridge: "br-mgmt"
container_type: "veth"
container_interface: "eth1"
ip_from_q: "container"
type: "raw"
group_binds:
- all_containers
- hosts
is_container_address: true
- network:
container_bridge: "br-vxlan"
container_type: "veth"
container_interface: "eth10"
ip_from_q: "tunnel"
type: "vxlan"
range: "1:1000"
net_name: "vxlan"
group_binds:
- neutron_linuxbridge_agent
- network:
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth12"
host_bind_override: "eth12"
type: "flat"
net_name: "flat"
group_binds:
- neutron_linuxbridge_agent
- network:
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth11"
type: "vlan"
range: "101:200,301:400"
net_name: "vlan"
group_binds:
- neutron_linuxbridge_agent
- network:
container_bridge: "br-storage"
container_type: "veth"
container_interface: "eth2"
ip_from_q: "storage"
type: "raw"
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute
###
### Infrastructure
###
# galera, memcache, rabbitmq, utility
shared-infra_hosts:
infra1:
ip: 172.29.236.11
# repository (apt cache, python packages, etc)
repo-infra_hosts:
infra1:
ip: 172.29.236.11
os-infra_hosts:
infra1:
ip: 172.29.236.11
# load balancer
# haproxy_hosts:
# infra1:
# ip: 172.29.236.11
###
### OpenStack
###
# keystone
identity_hosts:
infra1:
ip: 172.29.236.11
# cinder api services
storage-infra_hosts:
infra1:
ip: 172.29.236.11
# glance
image_hosts:
infra1:
ip: 172.29.236.11
# placement
placement-infra_hosts:
infra1:
ip: 172.29.236.11
# nova api, conductor, etc services
compute-infra_hosts:
infra1:
ip: 172.29.236.11
# heat
orchestration_hosts:
infra1:
ip: 172.29.236.11
# horizon
dashboard_hosts:
infra1:
ip: 172.29.236.11
# neutron server, agents (L3, etc)
network_hosts:
infra1:
ip: 172.29.236.11
# nova hypervisors
compute_hosts:
compute1:
ip: 172.29.236.12
# cinder storage host (LVM-backed)
storage_hosts:
storage1:
ip: 172.29.244.18
container_vars:
cinder_backends:
limit_container_types: cinder_volume
lvm:
volume_group: cinder-volumes
volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name: LVM_iSCSI
iscsi_ip_address: "172.29.244.18"
The infra computer network configuration:
auto enp3s0
iface enp3s0 inet manual
# Container/Host management VLAN interface
auto enp3s0.10
iface enp3s0.10 inet manual
vlan-raw-device enp3s0
# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
auto enp3s0.30
iface enp3s0.30 inet manual
vlan-raw-device enp3s0
# Storage network VLAN interface (optional)
auto enp3s0.20
iface enp3s0.20 inet manual
vlan-raw-device enp3s0
# Container/Host management bridge
auto br-mgmt
iface br-mgmt inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports enp3s0.10
address 172.29.236.11
netmask 255.255.252.0
# gateway 172.29.236.1
dns-nameservers 8.8.8.8 8.8.4.4
# OpenStack Networking VXLAN (tunnel/overlay) bridge
#
# Nodes hosting Neutron agents must have an IP address on this interface,
# including COMPUTE, NETWORK, and collapsed INFRA/NETWORK nodes.
#
auto br-vxlan
iface br-vxlan inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports enp3s0.30
address 172.29.240.16
netmask 255.255.252.0
# OpenStack Networking VLAN bridge
#
# The "br-vlan" bridge is no longer necessary for deployments unless Neutron
# agents are deployed in a container. Instead, a direct interface such as
# enp3s0 can be specified via the "host_bind_override" override when defining
# provider networks.
#
#auto br-vlan
#iface br-vlan inet manual
# bridge_stp off
# bridge_waitport 0
# bridge_fd 0
# bridge_ports enp3s0
# compute1 Network VLAN bridge
#auto br-vlan
#iface br-vlan inet manual
# bridge_stp off
# bridge_waitport 0
# bridge_fd 0
#
# Storage bridge (optional)
#
# Only the COMPUTE and STORAGE nodes must have an IP address
# on this bridge. When used by infrastructure nodes, the
# IP addresses are assigned to containers which use this
# bridge.
#
auto br-storage
iface br-storage inet manual
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports enp3s0.20
# compute1 Storage bridge
#auto br-storage
#iface br-storage inet static
# bridge_stp off
# bridge_waitport 0
# bridge_fd 0
# bridge_ports enp3s0.20
# address 172.29.244.16
# netmask 255.255.252.0
The compute computer network configuration:
auto enp3s0
iface enp3s0 inet manual
# Container/Host management VLAN interface
auto enp3s0.10
iface enp3s0.10 inet manual
vlan-raw-device enp3s0
# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
auto enp3s0.30
iface enp3s0.30 inet manual
vlan-raw-device enp3s0
# Storage network VLAN interface (optional)
auto enp3s0.20
iface enp3s0.20 inet manual
vlan-raw-device enp3s0
# Container/Host management bridge
auto br-mgmt
iface br-mgmt inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports enp3s0.10
address 172.29.236.12
netmask 255.255.252.0
# gateway 172.29.236.1
dns-nameservers 8.8.8.8 8.8.4.4
# OpenStack Networking VXLAN (tunnel/overlay) bridge
#
# Nodes hosting Neutron agents must have an IP address on this interface,
# including COMPUTE, NETWORK, and collapsed INFRA/NETWORK nodes.
#
auto br-vxlan
iface br-vxlan inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports enp3s0.30
address 172.29.240.17
netmask 255.255.252.0
# OpenStack Networking VLAN bridge
#
# The "br-vlan" bridge is no longer necessary for deployments unless Neutron
# agents are deployed in a container. Instead, a direct interface such as
# bond0 can be specified via the "host_bind_override" override when defining
# provider networks.
#
#auto br-vlan
#iface br-vlan inet manual
# bridge_stp off
# bridge_waitport 0
# bridge_fd 0
# bridge_ports bond0
# compute1 Network VLAN bridge
#auto br-vlan
#iface br-vlan inet manual
# bridge_stp off
# bridge_waitport 0
# bridge_fd 0
#
# Storage bridge (optional)
#
# Only the COMPUTE and STORAGE nodes must have an IP address
# on this bridge. When used by infrastructure nodes, the
# IP addresses are assigned to containers which use this
# bridge.
#
#auto br-storage
#iface br-storage inet manual
# bridge_stp off
# bridge_waitport 0
# bridge_fd 0
# bridge_ports bond0.20
# compute1 Storage bridge
auto br-storage
iface br-storage inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports enp3s0.20
address 172.29.244.17
netmask 255.255.252.0
The storage computer network configuration:
auto enp2s0
iface enp2s0 inet manual
# Container/Host management VLAN interface
auto enp2s0.10
iface enp2s0.10 inet manual
vlan-raw-device enp2s0
# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
auto enp2s0.30
iface enp2s0.30 inet manual
vlan-raw-device enp2s0
# Storage network VLAN interface (optional)
auto enp2s0.20
iface enp2s0.20 inet manual
vlan-raw-device enp2s0
# Container/Host management bridge
auto br-mgmt
iface br-mgmt inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports enp2s0.10
address 172.29.236.13
netmask 255.255.252.0
# gateway 172.16.0.1
dns-nameservers 8.8.8.8 8.8.4.4
# OpenStack Networking VXLAN (tunnel/overlay) bridge
#
# Nodes hosting Neutron agents must have an IP address on this interface,
# including COMPUTE, NETWORK, and collapsed INFRA/NETWORK nodes.
#
auto br-vxlan
iface br-vxlan inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports enp2s0.30
address 172.29.240.18
netmask 255.255.252.0
# OpenStack Networking VLAN bridge
#
# The "br-vlan" bridge is no longer necessary for deployments unless Neutron
# agents are deployed in a container. Instead, a direct interface such as
# enp2s0 can be specified via the "host_bind_override" override when defining
# provider networks.
#
#auto br-vlan
#iface br-vlan inet manual
# bridge_stp off
# bridge_waitport 0
# bridge_fd 0
# bridge_ports enp2s0
# compute1 Network VLAN bridge
#auto br-vlan
#iface br-vlan inet manual
# bridge_stp off
# bridge_waitport 0
# bridge_fd 0
#
# Storage bridge (optional)
#
# Only the COMPUTE and STORAGE nodes must have an IP address
# on this bridge. When used by infrastructure nodes, the
# IP addresses are assigned to containers which use this
# bridge.
#
#auto br-storage
#iface br-storage inet manual
# bridge_stp off
# bridge_waitport 0
# bridge_fd 0
# bridge_ports enp2s0.20
# compute1 Storage bridge
auto br-storage
iface br-storage inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports enp2s0.20
address 172.29.244.18
netmask 255.255.252.0
The error means the repo server is unreachable at 172.29.236.11:8181 even though the utility container is running.
I had this same error, and it turned out that some of my netmasks were incorrect. Please double-check your CIDR settings in /etc/openstack_deploy/user_variables.yml and /etc/openstack_deploy/openstack_user_config.yml! Some of the defaults in online examples are pretty wild.
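If it helps, this is roughly how I verified things after fixing the netmasks (the IP, port, and bridge name are taken from the question; adjust them to your setup):
ip -4 addr show br-mgmt    # check the address and prefix length actually applied on the host
curl -I http://172.29.236.11:8181/constraints/upper_constraints_cached.txt    # should return 200 once the repo is reachable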
Related
So, I've managed to get my subnet to work; however, I cannot get a single additional IP to work.
I'm new to this, so I'm not sure of the best way to do things, or whether what I've done is wrong.
Here's my interfaces file:
auto lo
iface lo inet loopback
auto eno1
iface eno1 inet static
address [main-ip]
gateway [gateway]
pointopoint [gateway]
iface eno1 inet6 static
address [ipv6-addr]/128
gateway [ipv6-gateway]
up sysctl -p
auto vmbr0
iface vmbr0 inet static
address [subnet-ip]/28
bridge-ports none
bridge-stp off
bridge-fd 0
#Subnet
iface vmbr0 inet6 static
address [ipv6-addr]/64
up ip -6 route add [ipv6-subnet]/64 dev vmbr0
auto vmbr1
iface vmbr1 inet static
address 10.0.0.1/24
bridge-ports none
bridge-stp off
bridge-fd 0
#LAN
auto vmbr2
iface vmbr2 inet static
address [additional-ip]/24
bridge-ports none
bridge-stp off
bridge-fd 0
# Additional IP
In your vmbrX interfaces you should map bridge-ports to a physical interface and set the static IP address on the vmbrX interface itself.
For example:
iface eno1 inet manual
auto vmbr0
iface vmbr0 inet static
address 192.168.10.2/24
gateway 192.168.10.1
bridge-ports eno1
bridge-stp off
bridge-fd 0
https://pve.proxmox.com/wiki/Network_Configuration#_default_configuration_using_a_bridge
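Once the file is updated, something like this should apply the change (ifreload comes with ifupdown2, which recent Proxmox releases use; otherwise restart networking or reboot):
ifreload -a
# or, on older setups:
systemctl restart networking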
I need some help configuring the network for my KVM host. My hosting provider is OVH, and since they do things a bit differently, I'm stuck.
My old network interfaces file:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 94.23.209.170
netmask 255.255.255.0
network 94.23.209.0
broadcast 94.23.209.255
gateway 94.23.209.254
auto br0
iface br0 inet static
address 91.134.173.185
netmask 255.255.255.0
broadcast 91.134.173.185
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
dns-nameservers 8.8.8.8
iface eth0 inet6 static
address 2001:41d0:0002:54aa::
netmask 64
dns-nameservers 2001:41d0:3:163::1
post-up /sbin/ip -family inet6 route add 2001:41d0:0002:54ff:ff:ff:ff:ff dev eth0
post-up /sbin/ip -family inet6 route add default via 2001:41d0:0002:54ff:ff:ff:ff:ff
pre-down /sbin/ip -family inet6 route del default via 2001:41d0:0002:54ff:ff:ff:ff:ff
pre-down /sbin/ip -family inet6 route del 2001:41d0:0002:54ff:ff:ff:ff:ff dev eth0
I had to go into rescue mode and remove the bridge, otherwise my machine wouldn't come up again. Can someone maybe help me and tell me what I did wrong?
Thanks, and have a good day/night! :)
I had a similar problem. I just moved to OVH from phoenixNAP. I like the control panel better, but their networking is a little weird. I have an IP on a /24 and I ordered a /29 for WHM/cPanel and some other virtual machines.
My config to get the host functional:
auto eth0
iface eth0 inet manual
address 111.222.333.145
netmask 255.255.255.0
network 111.222.333.0
broadcast 111.222.333.255
gateway 111.222.333.254
auto br0
iface br0 inet static
address 111.222.333.145
netmask 255.255.255.0
network 111.222.333.0
broadcast 111.222.333.255
gateway 111.222.333.254
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
dns-nameservers 213.186.33.99
NOTE: 111.222.333 stands for your first three octets; obviously change them. The .145 was arbitrary, to illustrate a host address assigned to you.
Then restart the networking service.
service networking restart
Now I had to get a CentOS container for WHM/cPanel going, plus a few Debian containers.
I'm assuming you bought a block of IPs and need to get one of those IPs into a VM. Log into the OVH control panel and select IP. Expand the IP block. To the right you will see a gear icon you can click on. Create an OVH virtual MAC. Take note of that MAC!
For CentOS the guide is correct.
In Debian a little something was missing.
You want to edit /etc/libvirt/qemu/autostart/YOUR_VM_NAME.xml:
...
<interface type='bridge'>
<mac address='YO:UR:VI:RT:MA:CA'/>
...
After saving, restart the libvirtd service. Restart your Debian container to pick up the new MAC and you should be good.
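For the restart steps, roughly the following (the VM name is just an example; use yours):
systemctl restart libvirtd
virsh shutdown my-debian-vm    # wait for it to power off
virsh start my-debian-vm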
When installing, I could not set an IP outside the range of my network. After getting virt-manager up, I logged in, blew out the gateway, and modified the interfaces file according to the guide.
You don't need to change your host network config.
You need a failover IP (created in the OVH panel). Then assign a virtual MAC to it.
On your dedicated server, run:
virsh net-edit default
Change it like this:
<network>
<name>default</name>
<uuid>...</uuid>
<bridge name='virbr0' stp='off' delay='0'/>
<mac address='...'/>
</network>
Now edit the VM:
virsh edit myvmname
and set the following (change "eno1" to your network card name, like "eth0" or "ens0p0"):
<interface type='direct'>
<mac address='--VIRTUAL MAC CREATED IN OVH PANEL--'/>
<source dev='eno1' mode='bridge'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
Now edit your VM's network config (in my example, a Debian /etc/network/interfaces; change the interface name as well):
auto eno1
iface eno1 inet static
address -FAILOVER IP-
netmask 255.255.255.255
gateway -HOST GATEWAY-
broadcast -FAILOVER IP-
So the VM will have the failover IP and use the same gateway as the host. At OVH the gateway ends in .254 (or use ip r on the host to find it).
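A quick sanity check from inside the VM once it is up (the interface name and addresses are placeholders, as in the example above):
ip addr show eno1    # should show the failover IP with the /32 netmask
ip route             # the default route should point at the host's .254 gateway
ping -c 3 8.8.8.8    # basic outbound connectivity test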
I have 1 host with IP 10.120.194.214/24.
And I have a range routed from my router to my host IP; the range is 10.120.187.0/24 and its gateway is 10.120.187.1.
I'm trying to create a Docker network with this range:
docker network create --driver=bridge --subnet=10.120.187.0/24 --ip-range=10.120.187.128/25 --gateway=10.120.187.254 -o "com.docker.network.bridge.enable_icc=true" -o "com.docker.network.bridge.host_binding_ipv4"="10.120.187.1" mypublicnet
If I try to ping 10.120.187.254 from the LAN, I don't receive a reply.
The host configuration is this:
iface eth0 inet manual
auto vmbr0
iface vmbr0 inet static
address 10.120.194.214
netmask 255.255.255.0
gateway 10.120.194.1
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
dns-nameservers 10.120.194.1 10.120.194.10
The idea is that I can run containers with IPs accessible from the LAN, and every container must have a different IP.
Contrary to what you might think, the Docker bridge network is not bridged to the physical interface; it is NATed.
To achieve what you are asking for in production, use Pipework or, if you are on the cutting edge, you can try the Docker macvlan driver, which is, for now, experimental.
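As a sketch of the macvlan option (the parent interface and the container IP below are assumptions; also note that with macvlan the host itself cannot reach the containers directly without an extra macvlan shim interface):
docker network create -d macvlan \
  --subnet=10.120.187.0/24 \
  --gateway=10.120.187.1 \
  -o parent=eth0 \
  mypublicnet
docker run --rm -it --network mypublicnet --ip 10.120.187.130 alpine ping -c 3 10.120.187.1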
I have a dedicated server that I'd like to run some VMs on using KVM.
I'm trying to set up bridge networking so the VMs can be accessed from the outside with dedicated IPs.
I tried doing this using this article, but once I bring up br0 I lose connectivity to my server over SSH (and everything else, for that matter).
Here is my /etc/network/interfaces:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
allow-hotplug eth0
iface eth0 inet static
address 66.147.230.23
netmask 255.255.255.0
network 66.147.230.0
broadcast 66.147.230.255
gateway 66.147.230.1
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 208.67.222.222 208.67.220.220
dns-search samgwydir.com
# bridge
auto br0
iface br0 inet static
# address 216.120.250.44
# netmask 255.255.255.0
# network 216.120.250.0
# broadcast 216.120.250.255
# gateway 216.120.250.1
address 192.168.1.1
network 192.168.1.0
netmask 255.255.255.0
broadcast 192.168.1.255
gateway 192.168.1.254
bridge_ports eth0
bridge_stp off
bridge_fd 0
I have commented out a failed setup that had br0 use a dedicated IP; instead I tried a local IP, to no avail.
Don't configure an IP on eth0; it is only a port of the bridge, and the bridge device br0 carries the IP (192.168.1.1):
auto lo
iface lo inet loopback
# The primary network interface
allow-hotplug eth0
iface eth0 inet manual
# bridge
auto br0
iface br0 inet static
address 192.168.1.1
network 192.168.1.0
netmask 255.255.255.0
broadcast 192.168.1.255
gateway 192.168.1.254
bridge_ports eth0
bridge_stp off
bridge_fd 0
You might be able to assign multiple IP addresses to br0 if you want your host to be multihomed.
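For the multihoming idea, a minimal sketch of a second address on br0 in /etc/network/interfaces (the extra address below is made up for illustration):
# add to the existing "iface br0 inet static" stanza
up   ip addr add 66.147.230.24/24 dev br0
down ip addr del 66.147.230.24/24 dev br0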
My ISP gave me two IP configs:
10.0.1.5 / 255.255.255.0 / gw 10.0.1.1
10.0.9.8 / 255.255.255.0 / gw 10.0.9.1
I've set up Dom0 as 10.0.1.5 with a bridge.
Here is the config:
auto lo
iface lo inet loopback
auto eth0
auto br0
iface br0 inet static
address 10.0.1.5
netmask 255.255.255.0
gateway 10.0.1.1
bridge_ports eth0
bridge_stp no
VM config:
...
vif = [ 'type=ioemu, bridge=br0' ]
...
So, when I launch my VM and configure it as 10.0.9.8, the network is unreachable from the VM.
I know that 10.0.9.1 is connected directly via the switch to my 10.0.1.5.
Any ideas?
Would the following work:
ip route add default via 10.0.1.5
like was done in: https://github.com/mcclurmc/devstack/blob/xcp-toolstack/tools/xcp-toolstack/build_domU.sh
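Expanding on that idea, something along these lines inside the VM might be worth trying (a sketch only: eth0 is assumed, the addresses come from the question, and Dom0 would likely also need IP forwarding and/or proxy ARP enabled for traffic to actually flow):
# inside the VM
ip addr add 10.0.9.8/24 dev eth0
ip route add 10.0.1.5/32 dev eth0    # make the Dom0 address reachable on-link first
ip route add default via 10.0.1.5    # then use it as the default gateway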