OpenStack: Assigning IPs manually - networking

I am deploying OpenStack Havana over Ubuntu Server 12.04 LTS following the official documentation (http://docs.openstack.org/havana/install-guide/install/apt/content/index.html). I'm using a single-node installation, so one physical machine is acting as controller node and compute node at the same time.
Right now I have everything working except for the network. I should remark that I am not using Neutron, just Nova Network. Also, I should say I'm far from being a networking expert.
The problem is the following: in my enterprise, as far as I know, every device has a public IP. That is, there are no IPs such as 192.168.X.X or 10.0.X.X. Rather, all IPs are located in a public subnet, say A.B.0.0/16. In particular, my department has the subnet A.B.C.0/24 assigned, so all our devices should be given an IP in that range. The gateway has the IP A.B.C.2.
So far, I have not been able to configure the network correctly. What I would like to do is the following:
Using nova network-create, create a new network which is the same one the physical machine is on:
nova network-create vmnet --fixed-range-v4=A.B.C.0/24 --gateway=A.B.C.2 --dns1=8.8.8.8 --dns2=4.4.4.4
Then, assign IPs manually to each virtual machine. If IPs were assigned automatically in that subnet, they could collide with the IPs of existing computers. So what I would like is to do pretty much what I can do with VirtualBox when I set the adapter to "Bridged Adapter", i.e., assign an IP manually in the guest OS.
Is that even possible?
Thanks a lot.

Use Neutron networking, and specifically the OVS plugin, because the instructions I am giving below will only work with it.
You have to set up the OVS plugin with the following configuration in '/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini':
[OVS]
tenant_network_type = gre
network_vlan_ranges = EXTNet
enable_tunneling = True
tunnel_type = gre
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
bridge_mappings = EXTNet:br-ex
local_ip = <your machine IP here>
Note the bridge_mappings entry. It maps EXTNet to br-ex. Later you will use this EXTNet as the provider physical network when creating your network in OpenStack. For now, you have to add one of your host's interfaces that is connected to your enterprise network to br-ex, as sketched below. After adding it you may not be able to access your host through that interface, so always use a secondary interface for this.
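As a rough sketch of that step (assuming a spare interface named eth1; substitute your own secondary NIC), using the standard Open vSwitch CLI:
# Create the external bridge referenced by bridge_mappings and attach the physical interface to it
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1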
Once you are done with the setup do the following.
quantum net-create EXTNet --provider:physical_network EXTNet --provider:network_type flat
quantum net-update EXTNet --router:external True
quantum net-update EXTNet --shared True
quantum subnet-create --name EXTSubnet --gateway <external network gateway> EXTNet <external network CIDR> --enable_dhcp False
There may be other ways of doing this, but I have tested this approach and hence recommend it.
Once you have successfully created the subnet, just launch instances in it.
One thing to note here: since you have disabled DHCP on your subnet, OpenStack will not run dnsmasq on it, and hence you will have to provide your own DHCP server or configure addresses statically in the guests.
Second, since the network_type is flat, there won't be any VLAN tagging. The packets from your instances will flow as-is onto your external network, which is what you want.
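If you want to pin a specific address to a VM from the controller side (rather than only inside the guest), a sketch that should work with the Havana-era clients is to pre-create a port carrying the fixed IP and boot against it; the IDs and the A.B.C.50 address below are placeholders:
# Reserve a specific address on the flat network
quantum port-create EXTNet --fixed-ip subnet_id=<EXTSubnet-id>,ip_address=A.B.C.50
# Boot the instance attached to that pre-created port
nova boot myvm --image <image-id> --flavor m1.small --nic port-id=<port-id-from-above>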

Related

Please Example Kubernetes External Address vs Internal Addresses

In a VMware environment, should the external address become populated with the VM's (or host's) IP address?
I have three clusters, and have found that only those using a "cloud provider" have external addresses when I run kubectl get nodes -o wide. It is my understanding that the "cloud provider" plugin (GCP, AWS, Vmware, etc) is what assigns the public ip address to the node.
KOPS deployed to GCP = external address is the real public IP addresses of the nodes.
Kubeadm deployed to vmware, using the vmware cloud provider = external address is the same as the internal address (a private range).
Kubeadm deployed, NO cloud provider = no external ip.
I ask because I have a tool that scrapes /api/v1/nodes and then interacts with each host that it finds, using the "external ip". This only works with my first two clusters.
My tool runs on the local network of the clusters; should it be targeting the "internal ip" instead? In other words, is the internal ip ALWAYS the IP address of the VM or physical host (when installed on bare metal)?
Thank you
Bare metal will not have an "external-IP" for the nodes, and the "internal-ip" will be the IP address of the nodes. You are running your command from inside the same network as your local cluster, so you should be able to use this internal IP address to access the nodes as required.
When using k8s on bare metal, the external IP and load balancer functions don't natively exist. If you want to expose an "External IP" (quotes because in most cases it would still be a 10.X.X.X address) from your bare-metal cluster, you would need to install something like MetalLB.
https://github.com/google/metallb
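To see which address types your nodes actually report (a quick check against the standard Node API fields; this works for any of the clusters described above):
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\t"}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'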

What does this Vagrant error mean and how do you fix it? For 'public_network' and 'private_network' together

I have this in my Vagrantfile
Vagrant.configure("2") do |config|
config.vm.network "public_network"
config.vm.network "private_network", type: "dhcp"
It's giving me this error when I try vagrant up
==> default: Clearing any previously set network interfaces...
A host only network interface you're attempting to configure via DHCP
already has a conflicting host only adapter with DHCP enabled. The
DHCP on this adapter is incompatible with the DHCP settings. Two
host only network interfaces are not allowed to overlap, and each
host only network interface can have only one DHCP server. Please
reconfigure your host only network or remove the virtual machine
using the other host only network.
It uses a lot of words, but I still don't understand it. All of my virtual machines are powered off. Why can't there be more than one DHCP client on the network anyways? There are often multiple DHCP clients on the same network! All of my machines are using NAT adapters except for one using Bridged Adapter.
VirtualBox 5.2.4
Vagrant 2.0.1
In Vagrant, a public network is like a private one (in a pure networking sense) with DHCP, except that it is bridged to your host so it can be accessed from outside your machine; the naming is a bit ambiguous, as the documentation states.
So you are trying to create two networks using DHCP on the same hypervisor for the same machine. This cannot work with VirtualBox, as VirtualBox can only assign one IP to a machine over DHCP.
If you don't need "the outside world" to access your machine, the public network is useless; just use a private network over DHCP.
Or try using a public network together with a private one that has a static IP.
I had the same error, and in my case I did the following:
1) Enabled a new 'host-only' adapter in VirtualBox: just select your box, click 'Settings', click 'Network' and enable a different adapter than your other boxes have.
2) Check the IP of the adapter you created by running 'ipconfig' in PowerShell or the command line in Windows.
3) Finally, in your Vagrant configuration file, specify an IP within the adapter's network: config.vm.network "private_network", ip: "place_ip_here".
If your adapter's IPv4 address is '172.28.128.1', for example, with subnet mask '255.255.255.0', then the first three numbers of the IP stay the same: '172.28.128.another_number_here'.
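If you prefer the command line to 'ipconfig', the same information is available from VirtualBox itself (assuming VBoxManage is on your PATH):
# Lists each host-only adapter with its IPv4 address and netmask
VBoxManage list hostonlyifs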
This issue may happen for different reasons. In my case, when Vagrant boots the VM it brings up another network adapter, vboxnet2, while I had already created and attached an adapter, vboxnet1. Thus, those two adapters were overlapping.
Another thread helped me to solve my issue.
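A sketch of how to inspect and clean up the overlapping DHCP servers from the host; the adapter name vboxnet2 is just the example from this answer:
# Show the DHCP servers VirtualBox has configured per host-only network
VBoxManage list dhcpservers
# Remove the DHCP server attached to the conflicting adapter
VBoxManage dhcpserver remove --ifname vboxnet2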

Proxmox with OPNsense as pci-passthrough setup used as Firewall/Router/IPsec/PrivateLAN/MultipleExtIPs

This setup is based on a Proxmox host sitting behind an OPNsense VM hosted on the Proxmox itself. The OPNsense VM protects Proxmox, offers a firewall, a private LAN and DHCP/DNS to the VMs, and provides an IPsec connection into the LAN to access all VMs/Proxmox hosts that are not NATed.
The server is the typical Hetzner server, so there is only one NIC, but multiple IPs and/or subnets on that NIC.
Proxmox Server with 1 NIC(eth0)
3 public IPs; IP2/IP3 are routed by MAC in the datacenter (to eth0)
eth0 is PCI-Passthroughed to the OPNsense KVM
A private network on vmbr30, 10.1.7.0/24
An IPsec mobile client connect (172.16.0.0/24) to LAN
To better outline the setup, I created this drawing (not sure it's perfect, tell me what to improve):
Questions:
How to setup such a scenario using PCI-Passthrough instead of the Bridged Mode.
Follow ups
I) Why can I not access PROXMOX.2 but can access VMEXT.11 (ARP?)
II) Why do I need an IPsec chain rule from * to * to get IPsec running? That is most probably a very OPNsense-related question.
III) I tried to handle the 2 additional external IPs by adding virtual IPs in OPNsense, adding a 1:1 NAT to the internal LAN IP and opening the firewall for the ports needed (for each private LAN IP), but I could not get it running. The question is: should each private IP have a separate MAC or not? What specifically is needed to get a multi-IP setup on WAN?
General high level perspective
Adding the pci-passthrough
A bit out of scope, but what you will need is:
a serial console/LARA to the Proxmox host.
a working LAN connection from OPNsense (in my case vmbr30) to the Proxmox private address (10.1.7.2) and vice versa. You will need this when you only have the tty console and need to reconfigure the OPNsense interfaces to add em0 as the new WAN device.
You should have a working IPsec connection beforehand, or WAN ssh/GUI opened, for further configuration of OPNsense after the passthrough.
In general it's this guide, in short:
vi /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
update-grub
vi /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
Then reboot and ensure you have an IOMMU table:
find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
Now find your network card
lspci -nn
in my case
00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-LM [8086:15b7] (rev 31)
After this command, you detach eth0 from Proxmox and lose the network connection. Ensure you have a tty! Replace "8086 15b7" and 00:1f.6 with your PCI slot (see above).
echo "8086 15b7" > /sys/bus/pci/drivers/pci-stub/new_id && echo 0000:00:1f.6 > /sys/bus/pci/devices/0000:00:1f.6/driver/unbind && echo 0000:00:1f.6 > /sys/bus/pci/drivers/pci-stub/bind
Now edit your VM and add the PCI network card:
vim /etc/pve/qemu-server/100.conf
and add ( replace 00:1f.6)
machine: q35
hostpci0: 00:1f.6
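To double-check that the passthrough entry was picked up (a small sketch; 100 is the VM ID used throughout this answer):
# Print the VM configuration back, including the hostpci0 line
qm config 100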
Boot OPNsense, connect using ssh root@10.1.7.1 from your Proxmox host tty, edit the interfaces, add em0 as your WAN interface and set it to DHCP. Reboot your OPNsense instance and it should be up again.
add a serial console to your opnsense
In case you need fast disaster recovery or your OPNsense instance is borked, a CLI-based serial console is very handy, especially if you connect using LARA/iLO or the like.
To get this done, run
vim /etc/pve/qemu-server/100.conf
and add
serial0: socket
Now in your opnsense instance
vim /conf/config.xml
and add / change this
<secondaryconsole>serial</secondaryconsole>
<serialspeed>9600</serialspeed>
Be sure you replace the current serialspeed with 9600. Now reboot your OPNsense VM and then run:
qm terminal 100
Press Enter again and you should see the login prompt
Hint: you can also set your primaryconsole to serial; this helps you get into boot prompts and more, and debug from there.
more on this under https://pve.proxmox.com/wiki/Serial_Terminal
Network interfaces on Proxmox
auto vmbr30
iface vmbr30 inet static
    # the host's private address on this bridge; OPNsense's LAN side is 10.1.7.1
    address 10.1.7.2
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    pre-up sleep 2
    metric 1
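After editing /etc/network/interfaces, a minimal sketch to bring the bridge up and verify its address (assuming classic ifupdown, as used above):
ifup vmbr30
ip addr show vmbr30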
OPNsense
WAN is External-IP1, attached em0 (eth0 pci-passthrough), DHCP
LAN is 10.1.7.1, attached to vmbr30
Multi IP Setup
So far I only cover the extra-IP part, not the extra-subnet part. To be able to use the extra IPs, you have to disable separate MACs for each IP in the Hetzner Robot, so all extra IPs share the same MAC (IP1, IP2, IP3).
Then, in OPNsense, for each external IP you add a Virtual IP under Firewall -> Virtual IPs (for every extra IP, not the main IP you bound WAN to). Give each Virtual IP a good description, since it will appear in the select box later.
Now you can go to Firewall -> NAT -> Port Forward and, for each port, set:
Destination: the external IP you want to forward from (IP2/IP3)
Destination port range: the ports to forward, like ssh
Redirect target IP: the LAN VM/IP to map to, like 10.1.7.52
Redirect target port: like ssh
Now you have two options; the first is considered better, but could mean more maintenance.
For every domain you access the IP2/IP3 services with, you should define local DNS "overrides" mapping to the actual private IP. This ensures that you can reach your services from the inside and avoids the issues you would otherwise have because of the NATing.
Alternatively, you need to take care of NAT reflection; otherwise your LAN boxes will not be able to access the external IP2/IP3, which can lead to issues in web applications at least. In that case, do this setup and activate outbound rules and NAT reflection.
What is working:
OPN can route/access the internet and has the right IP on WAN
OPN can access any client in the LAN (VMPRIV.151 and VMEXT.11 and PROXMOX.2)
I can connect with an IPsec mobile client to OPNsense, getting access to the LAN (10.1.7.0/24) from a virtual IP range 172.16.0.0/24
I can access 10.1.7.1 (OPNsense) while connected via IPsec
I can access VMEXT using the IPsec client
I can forward ports or 1:1 NAT from the extra IP2/IP3 to specific private VMs
Bottom Line
This setup works out a lot better than the alternative with the bridged mode I described. There is no more asymmetric routing, no need for a Shorewall on Proxmox, no need for a complex bridge setup on Proxmox, and it performs a lot better since we can use checksum offloading again.
Downsides
Disaster recovery
For disaster recovery, you need some more skills and tools. You need a LARA/iLO serial console to the Proxmox hypervisor (since you have no internet connection), and you will need to configure your OPNsense instance to allow serial consoles as mentioned here, so you can access OPNsense while you have no VNC connection at all and no SSH connection either (even from the local LAN, since the network could be broken). It works fairly well, but it needs to be practiced once to be as fast as the alternatives.
Cluster
As far as I can see, this setup cannot be used in a clustered Proxmox environment. You can set up a cluster initially; I did so by using a tinc-switch setup locally on the Proxmox hypervisors as a separate cluster network. Setting up the first node is easy, with no interruption. The second join already requires LARA/iLO access, since you need to shut down and remove the VMs for the join (so the gateway will be down). You can do so by temporarily using the eth0 NIC for internet. But after you have joined and moved your VMs back in, you will not be able to start the VMs (and thus the gateway will not be started). You cannot start the VMs because you have no quorum, and you have no quorum because you have no internet to join the cluster. So it is ultimately a chicken-and-egg issue I cannot see a way around. If it can be handled at all, it would only be with a gateway KVM that is not part of the Proxmox VMs but rather a standalone QEMU instance, which is not what I want right now.

How can I get fixed IP address on my vagrant even when I move to other network?

I'm using vagrant as Linux machine.
I'm a student and I'm coding in like everywhere such as home, classroom, univ, cafe, library, etc.
The problem is that every time I move to another place, I have to halt the Vagrant machine and re-up it again because the network has changed.
For example, I do some coding in cafe, where the private network IP address is 192.168.1.x. Now, I move to other place, say classroom, where the IP address this time is 192.168.99.x.
Since the IP has changed, I have to reboot the Vagrant machine. Although it takes only a couple of minutes, it is still quite bothersome.
I want to keep programming in my Vagrant environment even if the network environment changes. Need your help, thanks.
You can have a static IP whether you're using a private or public network, just by specifying which IP you want to use.
for public network:
config.vm.network "public_network", ip: "192.168.0.17"
for private network:
config.vm.network "private_network", ip: "192.168.50.4"
While the answer provided by Frédéric Henri is accurate, it may not actually be helpful. The problem with setting a static IP in the Vagrantfile is when you change networks (or subnets) as you described, the network device in charge of handing out IPs might not be willing to give that IP back to you - it may already be in use, or on another subnet or network.
Assuming you're trying to regain network connectivity from the Guest, you can just reboot the adapter or network interface you need in the Guest by doing the following (from the Guest):
ifdown eth0
ifup eth0
Where eth0 is the name of the network adapter you need to restart. You can verify this by running ifconfig on your Guest and determining which network interface is being used to get the IP you want to renew.
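On guests that manage leases with dhclient, an equivalent sketch (assuming the interface is eth0 and a Debian/Ubuntu-style guest) is to release and renew the lease directly:
sudo dhclient -r eth0   # release the current lease
sudo dhclient eth0      # request a new lease on the current network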
See this similar question for more information.

How does open stack assign ip to virtual machines?

I want to know how OpenStack assigns IPs to virtual machines, and how to find out the ports and IPs used by a VM. Is it possible to find out the IP and ports being used by an application running inside the VM?
To assign an IP to your VM you can use this command:
openstack floating ip create public
To associate your VM and the IP use the command below:
openstack server add floating ip your-vm-name your-ip-number
To list all the ports used by applications, ssh to your instance and run:
sudo lsof -i
Assuming you know the VM name
do the following:
On controller run
nova interface-list VM-NAME
It will give you port-id, IP-address and mac address of VM interface.
You can log in to the VM and run
netstat -tlnp to see which IPs and ports are being used by applications running inside the VM.
As to how a VM gets an IP, it depends on your deployment. In a basic OpenStack deployment, when you create a network and create a subnet under that network, you will see a DHCP namespace getting created on the network node (run ip netns on the network node). The namespace name will be qdhcp-<network-id>. The dnsmasq process running inside the DHCP namespace allots IPs to the VMs. This is just one of the many ways in which a VM gets an IP.
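A quick sketch of how to confirm this on the network node (the network ID placeholder is whatever your network listing shows):
# DHCP namespaces are named qdhcp-<network-id>
ip netns
# Look inside one namespace to see the dnsmasq-served interface and its address
ip netns exec qdhcp-<network-id> ip addr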
This particular End User page of the official documentation could be a good start:
"Each instance can have a private, or fixed, IP address and a public, or floating, one.
Private IP addresses are used for communication between instances, and public ones are used for communication with the outside world.
When you launch an instance, it is automatically assigned a private IP address that stays the same until you explicitly terminate the instance. Rebooting an instance has no effect on the private IP address.
A pool of floating IPs, configured by the cloud operator, is available in OpenStack Compute.
You can allocate a certain number of these to a project: The maximum number of floating IP addresses per project is defined by the quota.
You can add a floating IP address from this set to an instance of the project. Floating IP addresses can be dynamically disassociated and associated with other instances of the same project at any time.
Before you can assign a floating IP address to an instance, you first must allocate floating IPs to a project. After floating IP addresses have been allocated to the current project, you can assign them to running instances.
You can assign a floating IP address to one instance at a time."
There are of course deeper layers to look at in this section of the Admin Guide
Regarding how to find out about ports and IPs, you have two options: command line interface or API.
For example, if you are using Neutron* and want to find out the IPs or networks in use with the API:
GET v2.0/networks
And using the CLI:
$ neutron net-list
You can use similar commands for ports and subnets, however I haven't personally tested if you can get information about the application running in the VM this way.
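For example (a sketch using the same Neutron CLI), the port and subnet equivalents are:
$ neutron port-list
$ neutron subnet-list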
*Check out which OpenStack release you're running. If it's an old one, chances are it's using the Compute node (Nova) for networking.
