OpenStack instance is assigned a fixed IP but gets no IP inside the guest

I am new to OpenStack and I followed the Icehouse installation guide for Ubuntu 12.04/14.04.
I chose the 3-node architecture: controller, nova (compute), and neutron (network).
The 3 nodes are installed in VMs using nested KVM; KVM is supported inside the VMs, so nova will use virt_type=kvm. On the controller I created 2 NICs: eth0 is a NAT interface with IP 203.0.113.94 and eth1 a host-only interface with IP 10.0.0.11.
The nova node has 3 NICs: eth0 NAT 203.0.113.23, eth1 host-only 10.0.0.31, and eth2 another host-only 10.0.1.31.
The neutron node has 3 NICs: eth0 NAT 203.0.113.234, eth1 host-only 10.0.0.21, and eth2 another host-only 10.0.1.21. Following the guide, on the neutron node I created a br-ex bridge (with a port to eth0), which took over the settings of eth0; the eth0 stanza is now:
auto eth0
iface eth0 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down
Everything seemed fine: I can create networks, routers, etc., and boot instances, but I have this error:
when I launch an instance it gets a fixed IP, but when I log in to the instance (CirrOS) I can't ping anything, and ifconfig shows no IP.
I noticed that in demo-net (the tenant network), under the subnet, the ports field lists 3 ports:
172.16.1.1 network:router_interface ACTIVE
172.16.1.3 network:dhcp ACTIVE
172.16.1.6 compute:nova DOWN
I searched for solutions on the net but couldn't find anything.
Any help?
Ask me if you need specific logs, because I don't know which ones to post!
Thanks anyway!

It looks like you are pinging the fixed IP. If so, please assign a floating IP to your instance and then try to ping that.
If you have already assigned a floating IP and are pinging that IP, please upload your instance's console log.
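For reference, a rough Icehouse-era CLI sketch of assigning a floating IP; "ext-net" and "demo-instance1" are placeholder names from a typical install guide, so substitute your own:
# allocate a floating IP on the external network
neutron floatingip-create ext-net
# associate the returned address with the instance
nova floating-ip-associate demo-instance1 203.0.113.102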

Related

Need to configure second NIC to bridge LXC

I installed Ubuntu 16.04 Server on a machine with 4 network cards. Interfaces eth0 and eth1 are connected to the same switch. The interface eth0 is meant for the remote SSH connection to manage the server. I want eth1 to be bridged by br0, and I want to use this bridge for the LXC containers. In a DHCP environment this setup did not cause me any problems. The challenge is that the network this server is installed in is fully static: I received an IP range for this server, with the same subnet mask and gateway.
Setting up eth0 was no problem:
auto eth0
iface eth0 inet static
address 195.x.x.2
network 195.x.x.0
netmask 255.255.255.0
gateway 195.x.x.1
broadcast 195.x.x.255
dns-nameservers 150.x.x.105 150.x.x.106
The problem comes with the second interface, eth1. Because it has the same gateway as eth0, Ubuntu warns that only one default gateway can be set (which is logical). Therefore I set up eth1 as follows:
auto eth1
iface eth1 inet static
address 195.x.x.3
network 195.x.x.0
netmask 255.255.255.0
broadcast 195.x.x.255
The problem with this setup is that I can ping eth0 externally at 195.x.x.2, but eth1 cannot be pinged or accessed via SSH. I managed to make it work with a lot of routing trickery, but many articles on this subject write that this way lies a hole which only gets deeper once you add a static bridge and containers on top of it.
My question is: does anyone have a straightforward approach to my issue? How should I configure eth0 and eth1 to normally bridge the containers to eth1 with static IPs?
OK, I solved it in the following manner, still proceeding with the gateway-routing solution described in the question. Maybe people with the same issue can use this approach as well, and if somebody knows a better solution, feel free to comment.
On the host:
I enabled ARP filtering:
sysctl -w net.ipv4.conf.all.arp_filter=1
echo "net.ipv4.conf.all.arp_filter = 1" >> /etc/sysctl.conf
Configured the /etc/network/interfaces:
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet static
address 195.x.x.2
network 195.x.x.0
netmask 255.255.255.0
gateway 195.x.x.1
broadcast 195.x.x.255
up ip route add 195.x.x.0/24 dev eth0 src 195.x.x.2 table eth0table
up ip route add default via 195.x.x.1 dev eth0 table eth0table
up ip rule add from 195.x.x.2 table eth0table
up ip route add 195.x.x.0/24 dev eth0 src 195.x.x.2
dns-nameservers 150.x.x.105 150.x.x.106
# The secondary network interface
auto eth1
iface eth1 inet manual
# LXC bridge interface
auto br0
iface br0 inet static
address 195.x.x.3
network 195.x.x.0
netmask 255.255.255.0
bridge_ports eth1
bridge_stp off
bridge_fd 0
bridge_maxwait 0
up ip route add 195.x.x.0/24 dev br0 src 195.x.x.3 table br0table
up ip route add default via 195.x.x.1 dev br0 table br0table
up ip rule add from 195.x.x.3 table br0table
up ip route add 195.x.x.0/24 dev br0 src 195.x.x.3
Added the following lines to /etc/iproute2/rt_tables:
...
10 eth0table
20 br0table
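As a quick sanity check, the rules and per-table routes can be inspected with iproute2 (table names as defined above):
ip rule show
ip route show table eth0table
ip route show table br0table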
At the container config file (/var/lib/lxc/[container name]/config):
...
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.hwaddr = [auto-generated when bringing up the container]
lxc.network.ipv4 = 195.x.x.4/24
lxc.network.ipv4.gateway = 195.x.x.1
lxc.network.veth.pair = [readable interface name, as shown in ifconfig]
lxc.start.auto = 0 (1 if you want the container to autostart)
lxc.start.delay = 0 (seconds you want the container to wait before starting)
I tested it by enabling apache2 in the container and accessing the web page from outside the network. I hope this helps anybody who bumps into the same challenge I did.
PS: If you choose to have the container's config file assign the IP, do not forget to disable it in the interfaces file of the container itself:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual

Raspberry Pi is unable to reach its gateway

We recently bought a Raspberry Pi 3B. In the beginning we accessed the internet over an Ethernet cable and it connected properly, but now the Raspberry Pi is not able to reach the gateway at all and falls back to a link-local address, i.e. 169.xxx.xxx.xx.
What could be the issue? We tried reinstalling the operating system, but hit the same issue again: it worked for one day, and after that the same problem. So please help me solve the issue.
Finally, I was able to figure it out by trial and error. I had missed "auto eth0" before the iface stanza, i.e.:
auto eth0
iface eth0 inet static
address xxx.xxx.xxx.xxx
netmask 255.255.255.0
gateway xxx.xxx.xxx.xxx
dns-nameservers 8.8.8.8
Assuming that you have a windows computer available, open cmd and run the following command:
ipconfig
note down the values that display. Now on your pi, enter the command
sudo nano /etc/network/interfaces
This will open the network interfaces file. Look for the line similar to 'iface eth0 inet manual'. Then remove this line and everything else to do with the eth0 interface, since we are going to start over.
in the interfaces file, add the following section:
auto eth0
iface eth0 inet static
address xxx.xxx.xxx.xxx
netmask 255.255.255.0
gateway xxx.xxx.xxx.xxx
dns-nameservers 8.8.8.8
Replace the x's in address with the first 3 groups of the value taken from the Windows system. For example, if the IP address on the Windows system was 192.168.0.221, enter 192.168.0.xxx.
The last group of xxx in address should be something unique among everything else on your network.
'gateway' should be whatever the gateway value in Windows was (assuming these machines are on the same network).
Press [Ctrl]+[X] and save the changes.
reboot via
sudo reboot
Once the system has rebooted,
ifconfig eth0
should list the new settings. Test them by pinging the address below (Google's public DNS):
ping 8.8.8.8

Docker public network configuration

I have 1 host with IP 10.120.194.214/24.
I also have a range routed from my router to my host IP; the range is 10.120.187.0/24 and its gateway is 10.120.187.1.
I'm trying to create a Docker network with this range:
docker network create --driver=bridge --subnet=10.120.187.0/24 --ip-range=10.120.187.128/25 --gateway=10.120.187.254 -o "com.docker.network.bridge.enable_icc=true" -o "com.docker.network.bridge.host_binding_ipv4"="10.120.187.1" mypublicnet
If I try to ping 10.120.187.254 from the LAN, I don't get a reply.
the host configuration is this
iface eth0 inet manual
auto vmbr0
iface vmbr0 inet static
address 10.120.194.214
netmask 255.255.255.0
gateway 10.120.194.1
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
dns-nameservers 10.120.194.1 10.120.194.10
The idea is that I can run containers with IPs accessible from the LAN; every container must have a different IP.
Contrary to what you might think, a Docker bridge network is not bridged to the physical interface; it is NATed.
To achieve what you are asking for in production, use Pipework or, if you are on the cutting edge, you can try the Docker macvlan driver, which is for now experimental.
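As a rough sketch of the macvlan approach with the addresses from the question; the parent interface name (eth0 here, which sits behind vmbr0 in your host config) is an assumption to adapt:
# create a macvlan network attached directly to the LAN segment
docker network create -d macvlan \
  --subnet=10.120.187.0/24 \
  --ip-range=10.120.187.128/25 \
  --gateway=10.120.187.1 \
  -o parent=eth0 mypublicnet
# containers on this network get LAN-reachable addresses
docker run --rm -it --network=mypublicnet alpine ip addr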

Configure LXC to use wireless hosted network

Most of the configuration I found is for giving the container a static or private network. But I want it to act as a different machine, so that it gets a separate IP address from the DHCP server, and I want to do it through nmcli.
Thanks in advance.
If you are using Docker as tagged, rather than LXC, use pipework to map the wlan interface from the host into the container:
pipework eth2 $CONTAINERID 10.10.9.9/24
or alternatively let the container do the DHCP negotiation for you:
pipework eth1 $CONTAINERID dhclient
This setup is based on a macvlan interface, so the same concept should work with LXC; you just won't get the easy front end.
I'm confused whether this is a Docker question or an LXC question.
EDIT: as per the comments, wlan interface support in a bridge depends on the wlan driver/vendor. It may work, or it may not work at all.
In any case, you should be able to create a bridge, add your wlan0 interface to the bridge, and then have your LXC container connect to this bridge directly. Then, when you run your DHCP client in the container, it will get its lease via the wlan0 interface.
Configure bridge (manually for now)
# ifconfig wlan0 up
# brctl addbr br0
# brctl addif br0 wlan0
# ifconfig br0 up
# dhclient br0
Configure LXC configuration
If using a traditional privileged LXC container, edit the container's config file at /var/lib/lxc/$NAME/config,
and update this value to point to your new bridge.
lxc.network.link = br0
Run DHCP in container
# lxc-attach -n $NAME
# dhclient eth0
# ip a
If the output of ip a shows the desired IP, you're all set!
If you want to make the configuration persistent, you'll have to add the bridge to your /etc/network/interfaces file.
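A minimal sketch of such a stanza, assuming wlan0 is the bridge's only member (and keeping in mind the wlan bridging caveat above):
auto br0
iface br0 inet dhcp
bridge_ports wlan0
bridge_stp off
bridge_fd 0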
IEEE 802.11 doesn't like multiple MAC addresses behind a single client, so bridges and macvlans are not the right solution here.
Use ipvlan in L2 mode.
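A sketch of the corresponding container config, assuming LXC 3.1 or newer (which introduced the ipvlan network type; the keys below do not exist on older releases):
lxc.net.0.type = ipvlan
lxc.net.0.ipvlan.mode = l2
lxc.net.0.link = wlan0
lxc.net.0.flags = up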

Openstack VM is not accessible on LAN

I am facing an issue with accessing OpenStack VMs on the LAN.
I have set up a single-machine (192.168.2.15) OpenStack using devstack, so all VMs are running inside this machine.
My machine (192.168.2.15) has one network card (eth0), and I am using nova networking; I have not installed neutron.
I have assigned static IPs on eth0 of all the LAN machines (such as 192.168.2.15 and 192.168.2.16) in the /etc/network/interfaces file.
System information of the Openstack Machine is as below:
Memory usage: 19%
Swap usage: 0%
IP address for virbr0: 192.168.122.1
IP address for br100: 10.0.0.1
The following works fine:
I can access the internet from VM1 (10.0.0.2, an auto-assigned IP).
I can ping the LAN machine (192.168.2.16) from VM1.
The OpenStack machine (192.168.2.15) can ping VM1 (10.0.0.2).
VM1 (10.0.0.2) can ping VM2 (10.0.0.3).
But the LAN machine 192.168.2.16 is not able to ping VM1 (10.0.0.2).
So please suggest how this can be achieved. And please consider me very new to OpenStack and networking.
Thanks!
You need to assign a floating IP to the VMs you create if you want a host from outside the OpenStack network to connect to them. The internal IPs are only accessible from inside the OpenStack network.
See how to assign a floating IP to a VM here: http://docs.openstack.org/user-guide/content/floating_ip_allocate.html
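With nova networking, a rough CLI sketch; the "public" pool name and the address are placeholders to replace with your own:
# allocate a floating IP from a pool
nova floating-ip-create public
# attach the returned address to the instance (VM1 from the question)
nova add-floating-ip VM1 192.168.2.230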
To access the VM's floating IP from another host (one that is not the devstack host), you should make sure that the devstack host is configured to forward packets. You can do this with:
sudo bash
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
See more details here:
http://barakme.tumblr.com/post/70895539608/openstack-in-a-box-setting-up-devstack-havana-on-your
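Note that these settings do not survive a reboot. To persist the sysctl part, the equivalent keys can go into /etc/sysctl.conf (the iptables MASQUERADE rule must be restored at boot separately, e.g. with iptables-persistent):
net.ipv4.ip_forward = 1
net.ipv4.conf.eth0.proxy_arp = 1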
Adding a route from the client machine to the OpenStack VM network helped me.
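For example, on the client (192.168.2.16), a route for the fixed-IP network via the devstack host; 10.0.0.0/24 matches the br100 network shown above:
sudo ip route add 10.0.0.0/24 via 192.168.2.15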
