I am using OpenStack Packstack Train. I installed it on my server, but I have external servers in my lab that I want to reach from the VMs that are on OpenStack. These VMs already have floating IPs.
I want to know if there is something like a virtual switch to link them to the external servers.
When I run brctl show, I get
bridge name bridge id STP enabled interfaces
When I do an ip a, I see
br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether 32:6c:e3:45:a2:41 brd ff:ff:ff:ff:ff:ff
inet 172.24.4.1/24 scope global br-ex
valid_lft forever preferred_lft forever
inet6 fe80::306c:e3ff:fe45:a241/64 scope link
valid_lft forever preferred_lft forever
but when I do cat /etc/sysconfig/network-scripts/ifcfg-br-ex, I get this message:
cat: ifcfg-br-ex: No such file or directory.
By default, Packstack creates an "external" network that is not external at all. It is isolated on the Packstack server. You can see that from br-ex's IP address 172.24.4.1, and from the floating IPs of your VMs.
Luckily, the RDO project has instructions for connecting your cloud to your network.
It might also be possible to add a provider network to the current cloud, but you would have to reconfigure your bridging and Neutron manually, and judging from the question, this is probably beyond your skills (personally, I would have to experiment even after working with OpenStack for seven years). I suggest you just reinstall.
If your VMs contain important data, you could create snapshots, copy them to a safe place, and use them to launch the VMs on the newly deployed cloud.
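A rough sequence with the openstack CLI could look like this (the server, image, flavor, and network names here are hypothetical):
# snapshot the running VM to a Glance image
openstack server image create --name myvm-snap myvm
# download the image to a file you can keep in a safe place
openstack image save --file myvm-snap.qcow2 myvm-snap
# on the newly deployed cloud: upload the image and boot from it
openstack image create --disk-format qcow2 --file myvm-snap.qcow2 myvm-snap
openstack server create --image myvm-snap --flavor m1.small --network demo-net myvm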
By the way, brctl reports nothing because it only works with Linux bridges, not Open vSwitch bridges (and br-ex is the latter). There is no ifcfg file because Packstack doesn't bother persisting br-ex's configuration, which will cause you grief when you reboot the Packstack server.
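Open vSwitch bridges can be listed with ovs-vsctl instead:
# shows br-ex, br-int etc. together with their ports
sudo ovs-vsctl show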
I have a Wireshark PCAP file, and I want to find the MAC addresses and the private IP (local network) address of each of the devices in the network. When I look at the Ethernet tab under Conversations, I can see the corresponding MAC addresses. I also see multiple IP addresses (multiple v4 and multiple v6) in the IP tabs corresponding to a MAC. Is it possible to have more than one local IP address per MAC? I understand there can be multiple IP addresses associated with a MAC, but I was wondering how to find only the ones in the local network.
Is it possible to have more than 1 local IP address per MAC?
Yes, and it is more common than you think; however, I think the PCAP may be misleading you a bit.
For unicast IP addresses (multicast has different rules), there are two main reasons you would see two different IP addresses go to/from a certain MAC:
The associated interface has multiple IP addresses. Below is an example of an interface in Linux having multiple addresses:
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:43:c6:d5 brd ff:ff:ff:ff:ff:ff
inet 192.168.110.106/24 brd 192.168.110.255 scope global ens160
valid_lft forever preferred_lft forever
inet 192.168.110.102/24 brd 192.168.110.255 scope global secondary ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe43:c6d5/64 scope link
valid_lft forever preferred_lft forever
So both 192.168.110.106 and 192.168.110.102 map to the MAC address 00:0c:29:43:c6:d5.
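A secondary address like that is typically added with something like:
# the address then shows up flagged as "secondary" in ip a
sudo ip addr add 192.168.110.102/24 dev ens160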
The associated MAC address is a router/gateway. In the above example, let's assume 192.168.110.1 is the gateway and has a MAC address of 00:0c:29:43:c6:d6. If 192.168.110.10 wants to send a packet to 8.8.8.8 (or any IP not in 192.168.110.0/24), it will send the packets to MAC address 00:0c:29:43:c6:d6 but keep the destination IP address 8.8.8.8. So in your PCAP you will see 00:0c:29:43:c6:d6 associated with 8.8.8.8 even though the MAC address technically belongs to 192.168.110.1.
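If you want to extract the MAC-to-IP pairs from the capture and keep only local (RFC 1918) addresses, here is a rough sketch with tshark (the capture file name is hypothetical):
# list unique source MAC / source IPv4 pairs seen in the capture
tshark -r capture.pcap -T fields -e eth.src -e ip.src | sort -u
# keep only private (RFC 1918) source addresses
tshark -r capture.pcap -T fields -e eth.src -e ip.src | sort -u \
  | awk '$2 ~ /^(10\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.)/'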
I used DevStack (victoria branch) to quick-deploy OpenStack all-in-one on my Ubuntu 20.04 system. This machine has a public IP address 222.XXX.XXX.XXX on interface eno1, and the DevStack script has automatically added the br-ex and virbr0 interfaces on this machine. Here is my config.
#ifconfig
br-ex: inet 172.24.4.1 netmask 255.255.255.0 broadcast 0.0.0.0
eno1: inet 222.XXX.XXX.XXX netmask 255.255.255.128 broadcast 222.XXX.XXX.XXX
virbr0: inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
Now I created a VM instance from the cirros image. On my OpenStack dashboard, I created a private network demo-net of type vxlan; it has a subnet demo-subnet with the CIDR 10.56.1.0/24 and gateway 10.56.1.1. The DHCP option is on.
Meanwhile, DevStack has already created a public net with CIDR 172.24.4.0/24 (bound to br-ex) and gateway 172.24.4.1.
There is a router connecting the demo-net and public net.
I allocated a floating IP 172.24.4.124 from the public net's pool to this instance. I can ping this IP from this machine, and vice versa. But the problem is, when I ping 172.24.4.124 from another machine, it fails. I want to access the VM instance from outside the host, so what should I do to fix this?
Any help will be greatly appreciated! Thank you.
By default, Devstack creates an isolated "external" network which it calls public. You can only connect to this network, and all virtual networks that are attached to it, from the Devstack host. You could try to configure port forwarding (iptables command) on the Devstack host, but the real solution is below.
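For completeness, such port forwarding might look roughly like this on the Devstack host (the public address placeholder and port are hypothetical); it forwards SSH traffic arriving on the host to the instance's floating IP:
# forward TCP port 2222 on the host's public address to the instance's SSH port
sudo iptables -t nat -A PREROUTING -d 222.XXX.XXX.XXX -p tcp --dport 2222 \
  -j DNAT --to-destination 172.24.4.124:22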
You need to configure Devstack so that it uses your external network 222.XXX.XXX.XXX. The way this is done is documented at https://docs.openstack.org/devstack/latest/networking.html#shared-guest-interface (assuming your Devstack host has a single NIC eno1). In your case, you need to put this in local.conf:
PUBLIC_INTERFACE=eno1
HOST_IP=222.x.x.x
FLOATING_RANGE=222.x.y.z/PREFIX
PUBLIC_NETWORK_GATEWAY=your router, probably 222.something
Q_FLOATING_ALLOCATION_POOL=start=222.a.b.c,end=222.d.e.f
FLOATING_RANGE is the CIDR of the subnet to which eno1 is connected, and PREFIX is the prefix length used by eno1. Q_FLOATING_ALLOCATION_POOL is the range of IP addresses in the 222.x.x.x network that you want to use for floating IPs.
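As a concrete, entirely hypothetical example: if eno1 sat on 222.100.50.0/25 (netmask 255.255.255.128, matching yours) with the router at 222.100.50.1, the lines could look like:
PUBLIC_INTERFACE=eno1
HOST_IP=222.100.50.10
FLOATING_RANGE=222.100.50.0/25
PUBLIC_NETWORK_GATEWAY=222.100.50.1
Q_FLOATING_ALLOCATION_POOL=start=222.100.50.100,end=222.100.50.120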
You will have to recreate your Devstack cloud (although it might be possible to change the configuration of the current cloud, I would not know how). Before you do that, I would also strongly recommend reinstalling Ubuntu, to ensure no unwanted configuration from your current setup remains.
I tried to install Google Anthos on my bare-metal server, but I am stuck finding the IP addresses needed for the YAML configuration. I found an article, https://cloud.google.com/blog/topics/developers-practitioners/hands-anthos-bare-metal, which states:
The CIDR range for my local network is 192.168.86.0/24. Furthermore, I have my Intel NUCs all on the same switch, so they are all on the same L2 network.
What is this CIDR range the writer is talking about? How can we check the CIDR range of our local network in a terminal? (I am using an Ubuntu 18 Linux machine.)
Posting this answer as a community wiki as the question was addressed in the comments by @John Hanley.
Feel free to edit/expand it.
The CIDR range is determined by your network. If you look at another machine on the same network running Windows, Linux or macOS, it is fairly easy to determine. Run a network utility such as ipconfig, ifconfig, ip, etc. Look for netmask or Subnet Mask. Common values are 255.255.255.0 which is CIDR /24 or 255.255.0.0 which is CIDR /16.
There are tools on the Internet to translate netmasks to CIDR prefixes. In simple terms, the CIDR prefix is the number of most significant consecutive ones in the netmask. Converting 255 to binary gives 8 ones, so 255.255.255.0 has 24 consecutive ones, i.e. /24.
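You can also read the CIDR straight off the machine, or convert a netmask locally (the second command assumes the Debian/Ubuntu ipcalc package, the same tool linked further below):
# the /NN after each address is the CIDR prefix length
ip -o -4 addr show
# prints, among other things, a "Network:" line such as 192.168.86.0/24
ipcalc 192.168.86.10 255.255.255.0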
Note that a lot of networks are not set up correctly for client machines. It is generally best to speak to someone who controls your network. The router or network switch will have the correct netmask value; use that value if available. It is also important to know whether IP addresses are static or allocated by a DHCP server, and what the DNS servers are.
Example:
ip a (10.211.55.4/24)
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
<-- OMITTED -->
2: enp0s5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:1c:42:1a:1e:57 brd ff:ff:ff:ff:ff:ff
inet --> 10.211.55.4/24 <-- brd 10.211.55.255 scope global dynamic enp0s5
<-- OMITTED -->
A side note: with that example network configuration, the CIDR range is 10.211.55.0/24.
Jodies.de: Ipcalc
Adding to that, there is quite extensive documentation about Anthos:
Cloud.google.com: Anthos: Docs
The networking part which the question is connected with can be found here:
Cloud.google.com: Anthos: Clusters: Docs: Bare metal: 1.6: Concepts: Network requirements
Additional resources:
En.wikipedia.org: Wiki: Classless inter domain routing
I am facing a problem with setting up the network correctly while using Ubuntu 17.10 in VirtualBox. I have a problem pinging my instances from the host PC and even from the guest VM. Same problem in the instances: they can't ping the VMs or the host PC. In VirtualBox I am using 3 network adapters (NAT for internet access, and 2x host-only, paravirtualized networks: one for communication between nodes, the other meant to be the public interface for the instances).
/etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback
# VirtualBox NAT -- for Internet access to VM
auto enp0s3
iface enp0s3 inet dhcp
auto enp0s8
iface enp0s8 inet static
address 172.18.161.6
netmask 255.255.255.0
auto enp0s9
iface enp0s9 inet manual
up ip link set dev $iface up
down ip link set dev $iface down
And devstack local.conf was from this page (tried all of them):
https://docs.openstack.org/devstack/latest/guides/neutron.html
I don't know what your configuration files look like, but for these kinds of issues, I can suggest trying to debug step by step.
1: From the instance, ping the default GW, i.e. the virtual router connecting the internal network with the external network. If it succeeds, go to step 2. If it fails, you have found your culprit.
2: From the virtual router, ping the host endpoint (one way to do this is sketched after these steps). If it succeeds, try the other way round. If it fails, you have found your culprit.
If everything works fine, check the configuration files, default GW, routing rules, etc.
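On a DevStack/Neutron node, the virtual router lives in a network namespace, so step 2 can be done roughly like this (the router UUID and target address here are examples):
# find the router's namespace
ip netns list | grep qrouter
# ping the host from inside the router namespace
sudo ip netns exec qrouter-3fd51abe-1234-5678-9abc-def012345678 ping -c 3 172.24.4.1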
Do let me know if it works or not!
After successfully installing Devstack, if you want to grant access to and from instances, you need to configure a bunch of settings:
In Security Groups, add ingress rules for ICMP, SSH, HTTP, HTTPS, etc.;
In the private network, edit private-subnet to add DNS name servers (8.8.8.8, 1.1.1.1, etc.);
Allocate some floating IPs;
Launch some instances;
Associate a floating IP with each instance;
Set proxy_arp and iptables (on the Devstack host); a sketch of some of these steps follows below.
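A rough CLI sketch (the server name, interface, and addresses are hypothetical):
# allow ICMP and SSH in the default security group
openstack security group rule create --protocol icmp default
openstack security group rule create --protocol tcp --dst-port 22 default
# allocate a floating IP and associate it with an instance
openstack floating ip create public
openstack server add floating ip myvm 172.24.4.124
# on the Devstack host: enable proxy_arp and NAT the floating range out
sudo sysctl -w net.ipv4.conf.eno1.proxy_arp=1
sudo iptables -t nat -A POSTROUTING -s 172.24.4.0/24 -o eno1 -j MASQUERADE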
Try to follow this:
How to expose the Devstack floating ip to the external world?
I'm currently trying to automate our BeagleBone flashing, and as part of that we have to change the IP address.
I created a script which basically adds something like:
# The primary network interface
auto eth0
iface eth0 inet static
address theip
netmask 255.255.255.0
gateway gateway
to /etc/network/interfaces
After adding this I restart networking via:
service networking restart
It returns "ok", but ifconfig doesn't show "theip"; it seems like the changes are just ignored and DHCP is still used.
When I reboot the system, the IP is changed and everything works as expected, but I don't want to restart the system. So how do I correctly restart the networking?
Thanks in advance,
Lukas
Do ip addr flush dev eth0 first and then restart the networking service.
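In full, assuming eth0 is the interface being reconfigured:
# drop the address dhclient assigned, so ifupdown starts from a clean state
sudo ip addr flush dev eth0
# now ifupdown can bring the interface up with the static address
sudo service networking restart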
Explanation
The /etc/network/interfaces file is used by the ifupdown system. This is different from the graphical NetworkManager system that manages the network by default.
When you add lines to control eth0 in the /etc/network/interfaces file, most graphical network managers assume you are now using the ifupdown system for that interface and drop the option to manage it.
The ifupdown system is less complicated and less sophisticated. Since eth0 is new to the ifupdown system, it assumes that the interface is unconfigured and tries to "add" the specified address using the ip command. Since the interface already has an IP address assigned by dhclient for that network, I suspect that command is erroring out. You then need to put the interface into a known state for ifupdown to be able to start managing it, that is, with no address assigned to the interface via the ip command.