How are Neutron namespaced networks connected to the physical interface? - networking

OpenStack uses network namespaces to isolate each network created by 'neutron net-create'.
Since namespaces are isolated from each other, but also from the main non-namespaced area, how do they end up being connected to the physical interfaces, which reside in this "non-namespaced" main area?
Which Linux techniques are used for that?

From what I have seen in the environments I have worked on, these interfaces are attached to a virtual bridge on your machine, which is also linked to your physical interfaces.
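A minimal sketch of the underlying Linux technique (not Neutron's exact plumbing; the namespace, interface and bridge names below are made up for illustration): a veth pair crosses the namespace boundary, and a bridge joins its host-side end to the physical NIC.

# Create a namespace and a veth pair; move one end into the namespace
ip netns add demo-ns
ip link add veth-host type veth peer name veth-ns
ip link set veth-ns netns demo-ns

# Bridge the host-side end together with the physical interface (eth0 here)
ip link add name br-demo type bridge
ip link set eth0 master br-demo
ip link set veth-host master br-demo
ip link set br-demo up && ip link set veth-host up && ip link set eth0 up

# Bring up the namespaced end and give it an address
ip netns exec demo-ns ip link set veth-ns up
ip netns exec demo-ns ip addr add 192.0.2.50/24 dev veth-ns

Neutron typically uses Open vSwitch or Linux bridges plus veth pairs in the same spirit, and its router and DHCP namespaces (qrouter-*, qdhcp-*) are visible with ip netns list.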

libvirt creates the interface, and its name should be visible in the KVM/QEMU extended process name. It should also be queryable via libvirt.
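For example (a hedged sketch; the domain name is a placeholder), you can list the interfaces libvirt created for a guest and look for them in the qemu process arguments:

virsh domiflist instance-00000001     # shows each vNIC, its type and the bridge it is plugged into
ps -ef | grep qemu | grep netdev      # the interface details appear among the qemu -netdev arguments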

Related

Do all servers have one base OS, like in the Red Hat OpenStack architecture?

I'm a newcomer learning OpenStack, and the resources are all over the place, to be honest. I came across this image and would like to know one thing.
Suppose I have 100 TB of storage, 10 server-grade processors, and 1 TB of RAM. Do all these resources run under only one base OS, Red Hat Enterprise Linux? In other words, is all the equipment connected so that one single OS can manage it all?
And on top of this, we deploy an OpenStack architecture so clients can use the resources as needed? Do we need that many physical NICs, or are the NICs virtual?
How to scale?
As you say, you just add a server. Install RHEL or another supported Linux distro (it's best to install the same distro and version on all servers), then install and configure OpenStack. The new server will register with the OpenStack controllers and can be used for launching virtual machines immediately.
The process is a bit more involved when you run a cloud with baremetal instances (i.e. you don't launch VMs but provision physical systems), but in principle it's the same.
by definition (at consumer scale, like one laptop) we need a network interface card for one IP
This is incorrect. You can configure multiple IP addresses on a single interface, even on your PC at home, even if that PC runs Windows.
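For example (a hedged sketch; the interface name and addresses are placeholders), on Linux you can stack addresses on one NIC with iproute2:

ip addr add 192.0.2.10/24 dev eth0
ip addr add 192.0.2.11/24 dev eth0
ip addr show dev eth0    # both addresses are now configured on the same interface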
An enterprise cloud requires connecting nodes to several networks. Usually, servers have several physical NICs, which are bonded together, and VLANs or other multiplexing technologies are used to implement the networks. See this blog (five years old, but the principles still apply today, and it's well-written) for a good example of a real-world OpenStack network architecture.
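A minimal sketch of that bond-plus-VLANs idea (not a production config; the interface names, bonding mode and VLAN IDs are examples):

# Aggregate two physical NICs into one bond
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up

# Carry separate networks (e.g. management and tenant traffic) over the bond as VLANs
ip link add link bond0 name bond0.100 type vlan id 100
ip link add link bond0 name bond0.200 type vlan id 200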
Openstack uses one big special NIC
OpenStack can be deployed in many ways. It is not a shrink-wrapped solution. It can be used on servers with single NICs, bonded NICs, VLANs, normal networks, etc. Your statement is almost correct if you think of a typical deployment and a bond interface as a "big special NIC".
If you are interested in trying this out at home, see the OpenStack installation tutorial. You will learn a lot.

Migrate from legacy network in GCE

Long story short - I need to use networking between projects to have separate billing for them.
I'd like to reach all the VMs in different projects from a single point that I will use for provisioning systems (let's call it coordinator node).
It looks like VPC network peering is a perfect solution to this. But unfortunately, one of the existing networks is "legacy". Here's what the Google docs state about legacy networks.
About legacy networks
Note: Legacy networks are not recommended. Many newer GCP features are not supported in legacy networks.
OK, naturally the question arises: how do you migrate out of legacy network? Documentation does not address this topic. Is it not possible?
I have a bunch of VMs, and I'd be able to shut them down one by one:
shutdown
change something
restart
Unfortunately, it does not seem possible to change the network even when the VM is down?
EDIT:
It has been suggested to recreate the VMs while keeping the same disks. I would still need a way to bridge the legacy network with the new VPC network to make the migration smooth. Any thoughts on how to do that using the GCE toolset?
One possible solution - for each VM in the legacy network:
Get VM parameters (API get method)
Delete VM without deleting PD (persistent disk)
Create VM in the new VPC network using parameters from step 1 (and existing persistent disk)
This way stop-change-start is not so different from delete-recreate-with-changes. It's possible to write a script to fully automate this (migration of a whole network). I wouldn't be surprised if someone already did that.
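A hedged gcloud sketch of those three steps for a single VM (the instance name, zone, network, subnet and disk name are placeholders; a full migration script would handle more details such as multiple disks and reserved IPs):

gcloud compute instances describe my-vm --zone us-central1-a > my-vm.yaml     # step 1: record parameters
gcloud compute instances delete my-vm --zone us-central1-a --keep-disks=all   # step 2: delete the VM, keep its PD
gcloud compute instances create my-vm --zone us-central1-a --network my-vpc --subnet my-subnet --disk name=my-vm,boot=yes   # step 3: recreate in the VPC with the old disk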
UPDATE
The https://github.com/googleinterns/vm-network-migration tool automates the above process, and it also supports migration of a whole Instance Group, Load Balancer, etc. Check it out.

Assigning NICs to a XenProject VM

I wish to install a VM on my Xen Project machine that will run a Zentyal firewall. My machine has three network cards: one integrated, and two discrete, similar cards (they have the same Realtek chip, but are from different manufacturers). For the firewall to work optimally, I want to assign and dedicate the two discrete NICs to my firewall VM, and use the integrated card for Dom0 and the other VMs. I have been able to do similar things with other virtualisation software in the past, but have not been able to find a way to do it with Xen Project.
This page provides many useful configurations, but I don't think any of them match what I want to do. Is this at all possible, or must I give up hope of virtualising my firewall computer?
I think the best way to solve this would be using PCI passthrough in Xen. This means you can leave one of your NICs attached to dom0 (which can then be bridged to allow the other VMs to connect through the same interface; look at one of the Xen articles on network configuration for examples of how to set this up, as it will be the same as if you only had a single NIC) and give the firewall VM full control over the other two NICs.
The process for this is somewhat involved and can vary by distribution, so I would advise you to check the first article I linked, but I will describe the basic process.
Check the PCI addresses of the two network cards you want to pass through using lspci. The lines of output for your cards will look something like the following (although the details will be different, the structure will be the same):
00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (rev 04)
00:19.1 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (rev 04)
Make a note of the first column (00:19.0 and 00:19.1 in this example). Add this to the config for your firewall VM in the following format:
pci=['00:19.0','00:19.1']
On its own this will cause the VM to fail to boot, as it will be unable to pass through the devices. In order for the devices to be passed through, they need to be bound to the pciback driver on dom0 with commands like:
xl pci-assignable-add 00:19.0
xl pci-assignable-add 00:19.1
This may not be possible in all situations, but there are other methods if it is not. I strongly advise you to read the article I mentioned before to fully understand the best way to do this in your case.
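As a quick sanity check (a hedged sketch; the PCI addresses are the example ones from above), you can confirm the binding took effect before booting the guest:

xl pci-assignable-list    # both 00:19.0 and 00:19.1 should now be listed as assignable

To make the binding persistent across reboots, many distributions let you hide the devices from dom0 on the kernel command line, e.g. xen-pciback.hide=(00:19.0)(00:19.1), but check your distribution's documentation for the exact mechanism.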

docker containers share unix abstract socket or dbus

Can Docker containers share a Unix abstract socket, like the ones used for DBus?
If it can be done, how do you do it?
If it cannot, or cannot yet, is there a way to share a DBus connection between the host and containers, or between containers?
Here is the answer from another site:
DBus uses abstract sockets, which are network-namespace specific. So the only real way to fix this is to not use a network namespace (i.e. docker run --net=host). Alternatively, you can run a process on the host which proxies access to the socket. I think that's what xdg-app does basically (also for security reasons, to act as a filter). There might be some other way, but that's all I can think of offhand.
http://ask.projectatomic.io/en/question/3647/how-to-connect-to-session-dbus-from-a-docker-container/
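A hedged example of the first approach (--net=host): sharing the host's network namespace makes the host's abstract sockets reachable from the container. The image name is a placeholder and must have the DBus client tools installed; session-bus authentication may additionally require running as the same UID as the host user.

docker run --net=host --user "$(id -u)" \
    -e DBUS_SESSION_BUS_ADDRESS="$DBUS_SESSION_BUS_ADDRESS" \
    my-dbus-image dbus-send --session --print-reply \
    --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListNames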

Is Riak a viable choice for dynamic network environments?

We are considering Riak for use in an embedded device context (embedded Linux) where devices are dynamically addressed (DHCP).
Is this a viable choice?
We can assume that appropriate auto-discovery protocols are in place to enable devices to discover each other. Upon joining the network, a device would obviously need to do a riak-admin cluster join <other device>. Other than this, would Riak be capable of handling devices leaving and re-joining the network on a fairly infrequent basis? Or does it play much more nicely in a statically-addressed environment?
DHCP doesn't necessarily mean the device has to join when it boots. If the node names are resolvable via DNS or hosts file, and the listeners are configured to 0.0.0.0, the Riak nodes should communicate quite happily even if their IPs change on reboot.
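A minimal riak.conf sketch of that setup (a hedged example; the hostname and ports are placeholders, parameter names as in Riak 2.x):

nodename = riak@node1.example.com         # a DNS/hosts-resolvable name, not a raw DHCP address
listener.http.internal = 0.0.0.0:8098     # bind to all addresses so an IP change on reboot doesn't matter
listener.protobuf.internal = 0.0.0.0:8087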
