OpenStack Cirros instance cannot ping or resolve Internet hosts

I have deployed RDO OpenStack Xena on VirtualBox. There were no errors during installation. I created an external network and another network named blue, and attached both to a router. I defined 8.8.8.8 as the DNS server. Everything looks fine, but when I create a Cirros instance, it cannot reach the Internet. A floating IP has been assigned. A second Cirros instance has the same issue as well.
Any help is much appreciated.

Most probably, you need to revise the security group assigned to the instance.
Ensure you have an egress rule that allows outbound traffic, like the one sketched below.
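A sketch of checking and recreating such rules with the CLI (the security group name default is an assumption; substitute the group actually attached to your instance):
# List the rules currently applied to the instance's security group
openstack security group rule list default
# Recreate the catch-all egress rules if they were deleted
# (a freshly created security group has these by default)
openstack security group rule create --egress --ethertype IPv4 default
openstack security group rule create --egress --ethertype IPv6 default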
If the two Cirros instances can ping each other on their internal IPs (not the floating ones) but cannot ping public (floating) IPs, then the problem is that you need a router between the internal network and the public network.

I'm happy you brought this issue to the light of day.
This is something that still intrigues me to date.
My OpenStack proof of concept is an OSA (openstack-ansible) deployment using three nodes, an infra, a compute, and a storage node, all VMs on top of my Proxmox host, as per the OSA deployment documentation.
The deployment goes on without a hitch, though it's a bit lengthy, I must admit: roughly 1 to 2 hours, give or take.
I created a public (external) network as a flat provider network using the same NIC the node uses to get an external IP from my ISP router.
Then, to cut it short, I assigned the new Cirros instance this same external network, which has a limited DHCP range that doesn't conflict with my main router's DHCP, to avoid messing things up.
My Cirros instance deploys successfully and I can even SSH to it from my external network. All works fine, but I somehow can't make it connect to the external world; it's as if DNS is lost somewhere.
root@infra2:~# ssh cirros@10.171.101.28
The authenticity of host '10.171.101.28 (10.171.101.28)' can't be established.
ECDSA key fingerprint is SHA256:IGTpW0rXV44lIMJVmT+hRyxUqTuj0DZU8rqMe2Te3rU.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.171.101.28' (ECDSA) to the list of known hosts.
cirros@10.171.101.28's password:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:01:dc:8d brd ff:ff:ff:ff:ff:ff
    inet 10.171.101.28/24 brd 10.171.101.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe01:dc8d/64 scope link
       valid_lft forever preferred_lft forever
$ netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         10.171.101.1    0.0.0.0         UG        0 0          0 eth0
10.171.101.0    0.0.0.0         255.255.255.0   U         0 0          0 eth0
169.254.169.254 10.171.101.10   255.255.255.255 UGH       0 0          0 eth0
$ nslookup www.google.com
;; connection timed out; no servers could be reached
And yes, my Cirros instance, which got an IP from the same subnet as my ISP router, can ping the router's main gateway:
$ ping 10.171.101.1
PING 10.171.101.1 (10.171.101.1): 56 data bytes
64 bytes from 10.171.101.1: seq=0 ttl=64 time=1.601 ms
64 bytes from 10.171.101.1: seq=1 ttl=64 time=0.748 ms
64 bytes from 10.171.101.1: seq=2 ttl=64 time=0.869 ms
64 bytes from 10.171.101.1: seq=3 ttl=64 time=1.549 ms
64 bytes from 10.171.101.1: seq=4 ttl=64 time=0.953 ms
[Screenshot: OpenStack network topology]
And I tweaked the Neutron network agent to make sure the flat provider network is using the same NIC the node uses to receive an IP from my ISP router.
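For reference, that mapping is what shows up as interface_mappings in the agent output below; in the Linux bridge agent's config it would look something like this (the file path and exact layout are assumptions on my part for an OSA deployment):
# /etc/neutron/plugins/ml2/linuxbridge_agent.ini (path assumed)
[linux_bridge]
physical_interface_mappings = vlan:br-vlan,physnet1:ens18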
root@infra1-utility-container-f9cbd806:~# openstack network agent list
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
| 33455545-6dc5-4c08-b169-71f13aa3abb1 | L3 agent | infra2 | nova | :-) | UP | neutron-l3-agent |
| 6cdb7d41-f14e-49c4-b302-4878d82ff9cc | Metering agent | infra2 | None | :-) | UP | neutron-metering-agent |
| 78dfafac-d347-48b6-a0e1-83b55890f989 | DHCP agent | infra2 | nova | :-) | UP | neutron-dhcp-agent |
| d1ca6f9c-aee8-4ab3-8b51-942c8c9df05b | Linux bridge agent | infra2 | None | :-) | UP | neutron-linuxbridge-agent |
| e912fbc7-f066-4fc7-838c-77013ad30239 | Metadata agent | infra2 | None | :-) | UP | neutron-metadata-agent |
| ed12f36e-8f59-43b0-9786-b6f5d77880b0 | Linux bridge agent | compute2 | None | :-) | UP | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
root@infra1-utility-container-f9cbd806:~# openstack network agent show d1ca6f9c-aee8-4ab3-8b51-942c8c9df05b
+-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| agent_type | Linux bridge agent |
| alive | :-) |
| availability_zone | None |
| binary | neutron-linuxbridge-agent |
| configuration | {'bridge_mappings': {}, 'devices': 2, 'extensions': [], 'interface_mappings': {'vlan': 'br-vlan', 'physnet1': 'ens18'}, 'l2_population': False, 'tunnel_types': ['vxlan'], 'tunneling_ip': '172.29.240.11'} |
| created_at | 2022-06-30 21:31:57 |
| description | None |
| ha_state | None |
| host | infra2 |
| id | d1ca6f9c-aee8-4ab3-8b51-942c8c9df05b |
| last_heartbeat_at | 2022-07-01 23:34:39 |
| name | None |
| resources_synced | None |
| started_at | 2022-07-01 22:34:40 |
| topic | N/A |
+-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
root@infra1-utility-container-f9cbd806:~# openstack network agent show ed12f36e-8f59-43b0-9786-b6f5d77880b0
+-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| agent_type | Linux bridge agent |
| alive | :-) |
| availability_zone | None |
| binary | neutron-linuxbridge-agent |
| configuration | {'bridge_mappings': {}, 'devices': 1, 'extensions': [], 'interface_mappings': {'vlan': 'br-vlan', 'physnet1': 'ens18'}, 'l2_population': False, 'tunnel_types': ['vxlan'], 'tunneling_ip': '172.29.240.12'} |
| created_at | 2022-06-30 21:33:58 |
| description | None |
| ha_state | None |
| host | compute2 |
| id | ed12f36e-8f59-43b0-9786-b6f5d77880b0 |
| last_heartbeat_at | 2022-07-01 23:35:19 |
| name | None |
| resources_synced | None |
| started_at | 2022-07-01 22:35:19 |
| topic | N/A |
+-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
And despite being really close to getting it to work, the Cirros instance still somehow can't reach my router's DNS to resolve names.
$ nslookup www.google.com
;; connection timed out; no servers could be reached
If anyone has solved this issue, please chime in; I'd really appreciate hearing how you solved it!
As I zero in on this issue, it seems to be related to the Linux bridge somehow.
As we can see here, the Neutron Linux bridge brq270399cc-48 took over my ens18 physical interface:
root@infra2:~# netstat -nr
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 10.171.101.1 0.0.0.0 UG 0 0 0 brq270399cc-48
10.0.3.0 0.0.0.0 255.255.255.0 U 0 0 0 lxcbr0
10.171.101.0 0.0.0.0 255.255.255.0 U 0 0 0 brq270399cc-48
172.29.236.0 0.0.0.0 255.255.252.0 U 0 0 0 br-mgmt
172.29.240.0 0.0.0.0 255.255.252.0 U 0 0 0 br-vxlan
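One quick way to confirm that enslavement (a hedged aside, not from the original post) is to list the links attached to the bridge:
# Show the interfaces enslaved to the Neutron bridge; ens18 should appear here
ip link show master brq270399cc-48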
And DNS doesn't work from my Neutron router namespace either:
root@infra2:~# ip netns list
qrouter-1d2b49cf-87db-4a13-8dce-ab29817974a7 (id: 17)
qdhcp-270399cc-4830-45a7-97df-e9e0c0929706 (id: 16)
root@infra2:~# ip netns exec qrouter-1d2b49cf-87db-4a13-8dce-ab29817974a7 ip route s
default via 10.171.101.1 dev qg-f9fd9de9-98 proto static
10.171.101.0/24 dev qg-f9fd9de9-98 proto kernel scope link src 10.171.101.22
root@infra2:~# ip netns exec qrouter-1d2b49cf-87db-4a13-8dce-ab29817974a7 dig www.google.com +short
;; communications error to 127.0.0.53#53: connection refused
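Note what that error is telling us: dig is using the host's /etc/resolv.conf, which points at 127.0.0.53 (the systemd-resolved stub), and that stub does not exist inside the namespace. A sanity check worth running (my suggestion, not from the original post) is to query an upstream resolver explicitly:
# Bypass the stub resolver and ask an external server directly from the router namespace
ip netns exec qrouter-1d2b49cf-87db-4a13-8dce-ab29817974a7 dig @8.8.8.8 www.google.com +short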
And it is definitely a DNS issue, because I can ping Google's public IP address from my Cirros instance:
$ ping 142.251.32.68
PING 142.251.32.68 (142.251.32.68): 56 data bytes
64 bytes from 142.251.32.68: seq=0 ttl=116 time=11.621 ms
64 bytes from 142.251.32.68: seq=1 ttl=116 time=11.090 ms
64 bytes from 142.251.32.68: seq=2 ttl=116 time=25.061 ms
I tried to change /etc/resolv.conf, but still no luck:
$ cat /etc/resolv.conf
search openstacklocal
nameserver 8.8.8.8
#nameserver 10.171.101.1
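Since editing resolv.conf inside the guest didn't help, another angle worth trying (an assumption on my part, not something from the original thread) is to set the nameserver on the Neutron subnet so the DHCP agent hands it out:
# Make the subnet advertise 8.8.8.8 via DHCP; the subnet name is a placeholder
openstack subnet set --dns-nameserver 8.8.8.8 <subnet-name>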

Related

OpenStack nova: cannot reach virtual machine

Kolla Ansible was installed in the all-in-one configuration, and a provisioned Nova VM is not reachable via either ping or SSH. The default security group rules are added to allow ingress on port 22 and ICMP from all remote IPs (0.0.0.0/0). There is only one interface on the controller node, so two veth pairs are created: one is supplied to network_interface: kolla_i, and the other to neutron_external_interface: neutron_i and ironic_dnsmasq_interface: neutron_i in globals.yml. The two veth pairs are kolla_i/kolla_b and neutron_i/neutron_b (a sketch of their creation follows below). Testing the interfaces by assigning them IP addresses on the same network, each can ping the other, and both are reachable from other physical machines on the network. The VM is being launched on the OpenStack controller node.
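For concreteness, the veth pairs described above could be created roughly like this (a sketch; the interface names are taken from the question, and the bridge wiring is omitted):
# Create the two veth pairs and bring all four ends up
ip link add kolla_i type veth peer name kolla_b
ip link add neutron_i type veth peer name neutron_b
ip link set kolla_i up; ip link set kolla_b up
ip link set neutron_i up; ip link set neutron_b up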
A network is created on physical interface e2 named n1
(venv) [admin@controller]# openstack network create --share --provider-network-type flat --provider-physical-network physnet1 --external n1
(venv) [admin@controller]# openstack subnet create --network n1 --allocation-pool start=10.0.2.6,end=10.0.2.230 --dns-nameserver 8.8.8.8 --gateway 10.0.3.1 --subnet-range 10.0.0.0/16 n1-subnet
Provisioning bare metal works and those nodes can be reached, but VMs are not reachable. The VMs are created successfully, though:
(venv) [admin@controller]# openstack server create --flavor m1.small --image centos8-dev --nic net-id=403a56b9-5ac2-4ec0-9b59-831dfa7fed37 --security-group default --key-name mykey vm01
(venv) [admin@controller]# svrls
+--------------------------------------+---------------------------+--------+--------------------------+----------------------+----------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+---------------------------+--------+--------------------------+----------------------+----------+
| f05e9708-91e8-40c4-9a06-16d7ab9f387c | vm01 | ACTIVE | validation=10.0.2.131 | centos8-dev | m1.small |
+--------------------------------------+---------------------------+--------+--------------------------+----------------------+----------+
(venv) [root@r20s04 kolla-dev]# openstack port list
+--------------------------------------+-----------------------------------------------------------------------------------------+-------------------+----------------------------------------------------------------------------+--------+
| ID | Name | MAC Address | Fixed IP Addresses | Status |
+--------------------------------------+-----------------------------------------------------------------------------------------+-------------------+----------------------------------------------------------------------------+--------+
| 17af7b4f-c290-45ef-8421-781e17df8b46 | | fa:16:3e:b3:2a:45 | ip_address='10.0.2.131', subnet_id='afd6221b-26d1-4469-b9af-478756fdd661' | ACTIVE |
+--------------------------------------+-----------------------------------------------------------------------------------------+-------------------+----------------------------------------------------------------------------+--------+
It seems as if Open vSwitch is not doing its job correctly:
+-------+
| e2 |
+---+---+
|
+------------+ +-----------+ +------------+ +---+---+ +-------+ +--------+
| ovssystem +------+ neutron_i +------+ neutron_b +------+ e2_br +------+kolla_b+---------+kolla_i |
+----+-------+ +-----------+ +------------+ +-------+ +-------+ +--------+ openstack services
| ironic_dnsmasq |10.0.0.4|
| +--------+
+------+----------+
| vm networking |
+-----------------+
In globals.yml:
network_interface: "kolla_i"
neutron_external_interface: "neutron_i"
ironic_dnsmasq_interface: "neutron_i"
One possible fix is to change ironic_dnsmasq_interface to kolla_i instead of neutron_i, but I'm not sure whether this will resolve the issue of VMs not being reachable on the network.
Using the correct image (not the bare-metal one) and making sure the security group allows ingress on port 22 and ICMP solved the issue.
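For reference, ingress rules of that shape could be added like this (a sketch; the group name default is an assumption):
# Allow SSH and ICMP ingress from anywhere on the default security group
openstack security group rule create --ingress --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0 default
openstack security group rule create --ingress --protocol icmp --remote-ip 0.0.0.0/0 default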

Using SNMP to retrieve IP and MAC addresses of machines directly connected to an SNMP device

How can I get the IP and MAC addresses of machines directly connected to an SNMP device?
The ARP cache is not giving correct details.
Here is an example using Linux shell commands (there was no tag for other languages or Windows at the time of writing).
Provided that the machine you want to query runs an SNMP daemon (generally snmpd from Net-SNMP under Linux) and that you know how, and are allowed, to speak to it (version 1, 2c, or 3, with various community names or usernames/passwords/encodings for v3), you may issue the following SNMP requests.
For the test I started snmpd on a CentOS 7 virtual machine whose main address was 192.168.174.128.
I chose port 1610 over the traditional 161 so as not to need sudo or setcap for snmpd. The contents of the snmpd.conf file are outside the scope of this question.
The first request, for IPs:
snmptable -v 2c -c private 192.168.174.128:1610 ipAddrTable
SNMP table: IP-MIB::ipAddrTable
ipAdEntAddr ipAdEntIfIndex ipAdEntNetMask ipAdEntBcastAddr ipAdEntReasmMaxSize
127.0.0.1 1 255.0.0.0 0 ?
192.168.122.1 3 255.255.255.0 1 ?
192.168.174.128 2 255.255.255.0 1 ?
The second command (printing only 3 columns), for MACs:
snmptable -v 2c -c private 192.168.174.128:1610 ifTable | awk -c '{print $1 "\t" $2 "\t\t" $6}'
SNMP table:
ifIndex ifDescr ifPhysAddress
1 lo up
2 ens33 0:c:29:53:aa:c6
3 virbr0 52:54:0:e6:6b:2f
4 virbr0-nic 52:54:0:e6:6b:2f
When we check under CentOS 7, we get:
ifconfig
ens33: ... mtu 1500
inet 192.168.174.128 netmask 255.255.255.0 broadcast 192.168.174.255
inet6 ...
ether 00:0c:29:53:aa:c6 netmask 255.0.0.0
...
lo: ... mtu 65536
inet 127.0.0.1
...
virbr0: ... mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:e6:6b:2f ...
...
Bonus shell commands:
snmptranslate -Oaf IF-MIB::ifTable
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable
and
snmptranslate -Oaf IP-MIB::ipAddrTable
.iso.org.dod.internet.mgmt.mib-2.ip.ipAddrTable
I do not know why there is no single table holding both pieces of information, or whether one exists.
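One candidate worth checking (an assumption on my part, not verified against this setup) is the neighbour/ARP table from IP-MIB, which does pair IP addresses with physical addresses in a single table:
# ipNetToMediaTable maps IP addresses to MAC addresses (the device's ARP view)
snmptable -v 2c -c private 192.168.174.128:1610 ipNetToMediaTable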

Can't ping/SSH OpenStack VM internal instance from controller

I have a working single-node CentOS OpenStack installation which works nicely in most regards, except for one problem which has me tearing my hair out.
The problem is this: when I create new VM instances, I am unable to ping/SSH them from the controller machine. So, when the VM comes up on the 10.0.1.x network, I cannot access it directly from the controller. I can access the machine from the Horizon console app, which baffles me, since Horizon runs on the controller. If I add a floating IP to the machine, I can access it with no problem, both from the controller and from any system on the LAN.
I have already confirmed that security groups are properly set up and opened to allow access to both ssh and icmp. Here's the security group settings:
ALLOW IPv6 to ::/0
ALLOW IPv4 to 0.0.0.0/0
ALLOW IPv4 from default
ALLOW IPv6 from default
ALLOW IPv4 22/tcp from 0.0.0.0/0
ALLOW IPv4 icmp from 0.0.0.0/0
And here's various other settings which might help identify the problem:
br-ex: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.200 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::dacb:8aff:fea4:471 prefixlen 64 scopeid 0x20<link>
ether d8:cb:8a:a4:04:71 txqueuelen 0 (Ethernet)
RX packets 418591 bytes 66744430 (63.6 MiB)
RX errors 0 dropped 51 overruns 0 frame 0
TX packets 217891 bytes 165063129 (157.4 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[liz@openstack-controlle ~(keystone_admin)]$ neutron net-list
+--------------------------------------+---------------+-----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+---------------+-----------------------------------------------------+
| a1ab7093-2884-4032-8511-003e89fcb81e | external | c184b9ef-f16d-4aad-9c7b-5d2f5e49ce58 192.168.1.0/24 |
| bb3da742-1223-4859-83f1-d03bda84ff2d | intenal-saidi | 160b1f41-de0a-40e0-9d3d-9a6630347e0e 10.0.1.0/24 |
+--------------------------------------+---------------+-----------------------------------------------------+
[liz@openstack-controlle ~(keystone_admin)]$ neutron router-list
+--------------------------------------+--------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| id | name | external_gateway_info | distributed | ha |
+--------------------------------------+--------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| 9cd5d292-a7bb-4dcf-969d-f174e397b949 | router-saidi | {"network_id": "a1ab7093-2884-4032-8511-003e89fcb81e", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "c184b9ef-f16d-4aad-9c7b-5d2f5e49ce58", "ip_address": "192.168.1.201"}]} | False | False |
+--------------------------------------+--------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+

How can I configure OpenStack Packstack Juno to work with an external network on CentOS 7

I first disabled both NetworkManager and SELinux on a CentOS 7 x86_64 minimal install.
I have followed the Red Hat instructions for deploying Openstack with Packstack here:
https://openstack.redhat.com/Running_an_instance_with_Neutron
After spinning up a Cirros instance, my floating IP is one matching the DHCP pool I set up; however, it's not assigned to eth0 by default.
I logged into the VM and configured eth0 to match the floating IP, but it is still unreachable, even when I set the default gateway with route.
The security group has ingress rules for TCP and ICMP on 0.0.0.0/0, so it's my understanding that I should be able to access it, if it were configured.
I have launched a CentOS 7 image as well, but I suspect it's having the same issue, because I can't connect.
Can someone please let me know how I might debug this? I am using Neutron on this server and followed the instructions to the letter.
My network is 192.168.1.0/24
# neutron net-show public
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | cfe5a8cc-1ece-4d63-85ea-6bd8803f2997 |
| name | public |
| provider:network_type | vxlan |
| provider:physical_network | |
| provider:segmentation_id | 10 |
| router:external | True |
| shared | False |
| status | ACTIVE |
| subnets | 9b14aa61-eea9-43e0-b03c-7767adc4cd62 |
| tenant_id | 75505125ed474a3a8e904f6ea8638cf0 |
+---------------------------+--------------------------------------+
# neutron subnet-show public_subnet
+-------------------+----------------------------------------------------+
| Field | Value |
+-------------------+----------------------------------------------------+
| allocation_pools | {"start": "192.168.1.100", "end": "192.168.1.220"} |
| cidr | 192.168.1.0/24 |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 192.168.1.1 |
| host_routes | |
| id | 9b14aa61-eea9-43e0-b03c-7767adc4cd62 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | public_subnet |
| network_id | cfe5a8cc-1ece-4d63-85ea-6bd8803f2997 |
| tenant_id | 75505125ed474a3a8e904f6ea8638cf0 |
+-------------------+----------------------------------------------------+
# neutron router-show router1
+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | True |
| distributed | False |
| external_gateway_info | {"network_id": "cfe5a8cc-1ece-4d63-85ea-6bd8803f2997", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "9b14aa61-eea9-43e0-b03c-7767adc4cd62", "ip_address": "192.168.1.100"}]} |
| ha | False |
| id | ce896a71-3d7a-4849-bf67-0e61f96740d9 |
| name | router1 |
| routes | |
| status | ACTIVE |
| tenant_id | |
+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
# neutron port-list
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| 5dddaf6c-7aa3-4b59-943c-65c7f05f8597 | | fa:16:3e:b0:8b:29 | {"subnet_id": "9b14aa61-eea9-43e0-b03c-7767adc4cd62", "ip_address": "192.168.1.101"} |
| 6ce2580c-4967-488b-a803-a0f9289fe096 | | fa:16:3e:50:2f:de | {"subnet_id": "9b14aa61-eea9-43e0-b03c-7767adc4cd62", "ip_address": "192.168.1.100"} |
| 920a7b64-76c0-48a0-a682-5a0051271252 | | fa:16:3e:85:33:9a | {"subnet_id": "9b14aa61-eea9-43e0-b03c-7767adc4cd62", "ip_address": "192.168.1.102"} |
| 9636c04a-c3b0-4dde-936d-4a9470c9fd53 | | fa:16:3e:8b:f2:0b | {"subnet_id": "892beeef-0e1c-4b61-94ac-e8e94943d485", "ip_address": "10.0.0.2"} |
| 982f6394-c188-4eab-87ea-954345ede0a3 | | fa:16:3e:de:7e:dd | {"subnet_id": "9b14aa61-eea9-43e0-b03c-7767adc4cd62", "ip_address": "192.168.1.103"} |
| d88af8b3-bf39-4304-aeae-59cc39589ed9 | | fa:16:3e:23:b8:c5 | {"subnet_id": "892beeef-0e1c-4b61-94ac-e8e94943d485", "ip_address": "10.0.0.1"} |
I can ping the gateway created by Neutron from my local network here:
# ping 192.168.1.100
PING 192.168.1.100 (192.168.1.100) 56(84) bytes of data.
64 bytes from 192.168.1.100: icmp_seq=1 ttl=64 time=0.437 ms
64 bytes from 192.168.1.100: icmp_seq=2 ttl=64 time=0.068 ms
64 bytes from 192.168.1.100: icmp_seq=3 ttl=64 time=0.063 ms
However, I can't ping this gateway when I configure it within a guest VM.
Using ovs-vsctl, I see that the bridge is there and has its external port set correctly on my second NIC:
[root@server neutron]# ovs-vsctl list-br
br-ex
br-int
br-tun
[root@server neutron]# ovs-vsctl list-ports br-ex
enp6s0f1
First, you may have a misconception about how this is supposed to work:
After spinning up a Cirros instance, my floating ip is one matching the DHCP pool I setup, however its not assigned to eth0 by default.
Floating IP addresses are never assigned directly to your instances. Your instance receives an address from an internal network (one that you have created with neutron net-create, neutron subnet-create, etc.).
When you associate a floating ip with an instance:
nova floating-ip-create <external_network_name>
nova floating-ip-associate <instance_name_or_id> <ip_address>
This address is configured on a Neutron router, and Neutron creates NAT rules that map this address to the internal address of your instance.
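If you want to see those NAT rules for yourself, something like this should work (a sketch; the namespace name reuses the example router UUID shown later in this answer):
# Dump the DNAT/SNAT rules Neutron programmed inside the router namespace
ip netns exec qrouter-92a5e69a-8dcf-400a-a2c2-46c775aee06b iptables -t nat -S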
With this in mind:
A default packstack-mediated setup will result in two networks configured in Neutron, named public and private. Instances should be attached to the private network; if you are using credentials for the admin user, you will need to make this explicit by passing --nic net-id=nnnnn to the nova boot command (an example follows below).
If you launch an instance as the demo user, this will happen automatically, because the private network is owned by the demo tenant and is the only non-external network visible to that tenant.
Your instance should receive an ip address from the private network, which for a default packstack configuration will be the 10.0.0.0/24 network.
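For instance, an explicit boot command of that shape might look like this (a sketch; the flavor, image name, and net-id are placeholders, not from the original answer):
# Attach the instance explicitly to the private network by UUID
nova boot --flavor m1.tiny --image cirros --nic net-id=nnnnn test0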
If your instance DOES NOT receive an ip address
There is a configuration problem somewhere that is preventing DHCP requests originating from your instance from reaching the DHCP server for your private network, which runs on your controller in a network namespace named qdhcp-nnnn, where nnnn is the UUID of the private network. Applying tcpdump at various points along the path from the instance to the DHCP namespace is a good way to diagnose things at this point.
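As a starting point, you might watch for the DHCP traffic inside that namespace (a sketch; substitute your private network's real UUID for nnnn):
# Listen for DHCP requests and replies in the private network's dhcp namespace
ip netns exec qdhcp-nnnn tcpdump -n -i any port 67 or port 68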
This article (disclaimer: I am the author) goes into detail about how the various components are connected in a Neutron environment. It's a little long in the tooth (e.g., it does not cover newer features like DVR or HA routers), but it's still a good overview of what connects to what.
If your instance DOES receive an ip address
If your instance does receive an ip address from the private network, then you'll need to focus your attention on the configuration of your neutron router and your external network.
A neutron router is realized as a network namespace named qrouter-nnnn, where nnnn is the UUID of the associated neutron router. You can inspect this namespace by using the ip netns command. For example, given:
$ neutron router-list
+--------------------------------------+------------+...
| id | name |...
+--------------------------------------+------------+...
| 92a5e69a-8dcf-400a-a2c2-46c775aee06b | router-nat |...
+--------------------------------------+------------+...
You can run:
# ip netns exec qrouter-92a5e69a-8dcf-400a-a2c2-46c775aee06b ip addr
And see the interface configuration for the router:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
10: qr-416ca0b2-c8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether fa:16:3e:54:51:50 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-416ca0b2-c8
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe54:5150/64 scope link
valid_lft forever preferred_lft forever
13: qg-2cad0370-bb: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether fa:16:3e:f8:f4:c4 brd ff:ff:ff:ff:ff:ff
inet 192.168.200.10/24 brd 192.168.200.255 scope global qg-2cad0370-bb
valid_lft forever preferred_lft forever
inet 192.168.200.2/32 brd 192.168.200.2 scope global qg-2cad0370-bb
valid_lft forever preferred_lft forever
inet 192.168.200.202/32 brd 192.168.200.202 scope global qg-2cad0370-bb
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fef8:f4c4/64 scope link
valid_lft forever preferred_lft forever
You can also use ip netns to do things like ping from inside the router namespace to verify connectivity to external addresses. This is a good place to start, actually -- ensure that you have functional outbound connectivity from within the router namespace before you start trying to test things from your Nova instances.
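Concretely, that check might look like this (a sketch reusing the example router UUID from above):
# Ping an external address from inside the router namespace
ip netns exec qrouter-92a5e69a-8dcf-400a-a2c2-46c775aee06b ping -c 3 8.8.8.8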
You should see one or more addresses on the qg-nnnn interface that are within the CIDR range of your floating ip network. If you run ip route inside the namespace:
# ip netns exec qrouter-92a5e69a-8dcf-400a-a2c2-46c775aee06b ip route
default via 192.168.200.1 dev qg-2cad0370-bb
10.0.0.0/24 dev qr-416ca0b2-c8 proto kernel scope link src 10.0.0.1
192.168.200.0/24 dev qg-2cad0370-bb proto kernel scope link src 192.168.200.10
You should see a default route using the gateway address appropriate for your floating ip network.
I'm going to pause here. If you run through some of these diagnostics and spot problems or have questions, please let me know and I'll try to update this appropriately.

SmartOS configuring zone networking issue - no connectivity

I am experimenting a bit with SmartOS on a spare dedicated server.
I have 2 IP addresses on the server,
for example 1.1.1.1 and 2.2.2.2 (they are not in the same range).
The global zone was configured to use the IP 1.1.1.1.
Here is the configuration of my global zone:
[root@global ~]# dladm show-link
LINK CLASS MTU STATE BRIDGE OVER
igb0 phys 1500 up -- --
igb1 phys 1500 up -- --
net0 vnic 1500 ? -- igb0
[root@global ~]# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
igb0 Ethernet up 1000 full igb0
igb1 Ethernet up 1000 full igb1
[root@global ~]# ifconfig
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
igb0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 1500 index 2
inet 1.1.1.1 netmask ffffff00 broadcast 1.1.1.255
ether c:c4:7a:2:xx:xx
igb1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 0.0.0.0 netmask 0
ether c:c4:7a:2:xx:xx
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
inet6 ::1/128
I configured my zone the following way:
[root@global ~]# vmadm get 84c201d4-806c-4677-97f9-bc6da7ad9375 | json nics
[
{
"interface": "net0",
"mac": "02:00:00:78:xx:xx",
"nic_tag": "admin",
"gateway": "2.2.2.254",
"ip": "2.2.2.2",
"netmask": "255.255.255.0",
"primary": true
}
]
[root@84c201d4-806c-4677-97f9-bc6da7ad9375 ~]# ifconfig
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
net0: flags=40001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,L3PROTECT> mtu 1500 index 2
inet 2.2.2.2 netmask ffffff00 broadcast 2.2.2.255
ether 2:0:0:78:xx:xx
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
inet6 ::1/128
[root@84c201d4-806c-4677-97f9-bc6da7ad9375 ~]# dladm show-link
LINK CLASS MTU STATE BRIDGE OVER
net0 vnic 1500 up -- ?
[root@84c201d4-806c-4677-97f9-bc6da7ad9375 ~]# netstat -rn
Routing Table: IPv4
Destination Gateway Flags Ref Use Interface
-------------------- -------------------- ----- ----- ---------- ---------
default 87.98.252.254 UG 2 47 net0
87.98.252.0 87.98.252.162 U 4 23 net0
127.0.0.1 127.0.0.1 UH 2 0 lo0
However, I have no connectivity to the Internet in my zone.
Is there anything misconfigured?
I suspect you want to pass your second real IP through to the guest zone.
According to the wiki, you should configure a NIC tag for the second NIC (igb1) and use that tag in your guest zone.
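A rough sketch of what that could look like (hedged; check nictagadm and the SmartOS wiki for the exact syntax on your platform version, and note the MAC/UUID values are copied from the question's output):
# Create a nic tag on igb1, identified here by the MAC shown in the ifconfig output
nictagadm add external c:c4:7a:2:xx:xx
# Point the zone's NIC at the new tag (zone UUID and NIC MAC from the question)
echo '{ "update_nics": [ { "mac": "02:00:00:78:xx:xx", "nic_tag": "external" } ] }' | vmadm update 84c201d4-806c-4677-97f9-bc6da7ad9375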
