instance gets multiple fixed IPs - openstack

I am running a nova-network node with FlatDHCPManager and 4 nova-compute nodes (one of them is also the nova-network node).
I have multiple networks on the nodes via VLANs (eth1.101, eth1.102, etc.), which I also created inside OpenStack with this command:
nova network-create --fixed-range-v4 10.10.3.0/24 --vlan 102 --bridge br102 --bridge-interface eth1.102 --project-id 6e497c3118da4b8297c67f7f8e2d99d8 net102
All networks are created like the above example, with --project-id included.
Then when I create an instance inside a project, it gets fixed IPs from all of the created networks.
Shouldn't it get an IP only from the network associated with its project?
Is this a bug or am I missing something?
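(As a workaround I can presumably pin an instance to a single network at boot time with --nic; the UUID below is a placeholder for the net102 network:

nova boot --flavor m1.small --image <image-id> --nic net-id=<net102-uuid> testvm

but I'd still expect the project association to restrict the networks automatically.)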

Related

Heat template which shares resources with another Heat template

I have created a Heat template which creates networks (flat and VLAN), a subnet, a trunk, and ports, and then launches a VM. Now I want to create another VM which will share some of the flat network/subnet already created, though the VLAN network created on the same network will be different.
Actually it is an all-in-one setup (single machine), and I would also like to know if there is some concept like a namespace where multiple VMs can live without any conflicts with other VMs.
Another thing: I have to change all the resource names as well, since otherwise it says the resource already exists. Is there any way for all resource names to be prefixed/suffixed automatically by some provided variable (see the sketch below)?
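To illustrate the prefixing I'm after, here is a minimal sketch (the parameter and resource names are made up, not from my actual template), using the list_join intrinsic together with a stack parameter:

heat_template_version: 2015-04-30

parameters:
  prefix:
    type: string
    description: prepended to every resource name to avoid clashes

resources:
  private_net:
    type: OS::Neutron::Net
    properties:
      # e.g. prefix=stack2 names the network "stack2-net"
      name:
        list_join: ['-', [{ get_param: prefix }, 'net']]

Each stack would then pass its own value, e.g. openstack stack create -t template.yaml --parameter prefix=stack2 stack2.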

Setting up an OpenStack compute node with a fake hypervisor

I'm trying to set up OpenStack compute nodes that mimic real nodes but never actually spawn VMs on a physical host.
In the OpenStack tests, the fake drivers (defined in nova/virt/fake.py) are used through a complex system of testing classes.
I want to get such a node up and running outside of a test (meaning, I don't want to use these classes to spawn the compute node) on an actual VM/container; however, I cannot figure out how to get a compute process to run with this fake hypervisor (or, more specifically, with one that I define myself).
How do I inject this fake driver instead of the real driver in a compute node?
(Also, I'm installing OpenStack using DevStack (latest).)
For more clarification, my goal is to stress-test OpenStack by running multiple fake compute nodes, not in an all-in-one configuration. Using DevStack to set up the controller node simply streamlines the process, but the system should be:
A controller node, running the core services (Nova, Glance, Keystone, etc.).
Multiple compute nodes, using fake hypervisors on different machines.
When installing a new compute node, a configuration file, nova-compute.conf, is created automatically.
It seems that in /etc/nova/nova-compute.conf there is an option:
compute_driver = libvirt.LibvirtDriver
That uses libvirt as the hypervisor for the compute node. According to the nova configuration documentation, in addition to hyperv, vmwareapi and xenapi, one can also choose the fake driver by changing this option to:
compute_driver = fake.FakeDriver
To have the fake driver run our own implementation, we can replace the driver defined in fake.py with something else.
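As a sketch, the relevant part of /etc/nova/nova-compute.conf would then look like this (the value is resolved under the nova.virt package, so fake.FakeDriver maps to nova/virt/fake.py):

[DEFAULT]
# report a fake in-memory hypervisor instead of driving libvirt
compute_driver = fake.FakeDriver

After changing the option, restart the nova-compute service so it loads the new driver.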

How do you define an OpenStack node?

So I've read several articles and looked through the OpenStack docs for the definition of a node.
Node
A Node is a logical object managed by the Senlin service. A node can be a member of at most one cluster at any time. A node can be an orphan node, which means it doesn't belong to any cluster.
Node types
According to the Oracle docs, there are different node types (controller node, compute node, etc.). What I'm confused about is whether a single node is a single physical host. Does that mean I can still deploy multiple nodes of different types on the same host?
Node Cluster
I read that a cluster is a group of nodes. What could the cluster for the controller node look like?
CONTROLLER NODE
The controller node is the control plane for the OpenStack environment. The control plane handles identity (keystone), dashboard (Horizon), telemetry (ceilometer), orchestration (heat) and the network server service (neutron).
In this architecture, I have different OpenStack services (Horizon, Glance, etc.) running on one node. Can I conclude from this picture whether it's part of a cluster?
OK, so a node in the context of the OpenStack documentation is synonymous with host:
The example architecture requires at least two nodes (hosts)
from the sentence on the page: https://docs.openstack.org/newton/install-guide-ubuntu/overview.html
You already found out what a node is in the context of Senlin.
Node types: the nodes referred to here are the physical hosts, as in the rest of the OpenStack documentation. The node type is determined by the services running on the host. Usually you can run several services on one host.
In OpenStack, the word cluster is only used to refer to a collection of nodes managed by Senlin. So usually no, these services need not form a cluster.

What is the replacement for `--net=container` in new docker networking?

In the pre-Docker-1.9 days I used to have a VPN provider container which I could use as the network gateway for my other containers by passing the option --net=container:[container-name].
This was very simple but had a major limitation in that the provider container had to exist prior to starting the consumers and it could not be restarted.
The new Docker networking stack seems to have dropped this provision in favour of creating networks, which does sound better, but I'm struggling to get equivalent behaviour.
Right now I have created an internal network (docker network create isolated --internal --subnet=172.32.0.0/16) and brought up two containers, one attached only to the internal network and one attached to both the default bridge and the internal network.
Now I need to route all network traffic from the isolated container through the connected one. I've messed around with some iptables rules, but to be honest this is not my strongest area.
So my questions are simply: Is my approach along the right lines? What rules need to be in place in the two containers to make this work like --net=container did?
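For reference, what I have been trying is along these lines (the addresses and interface names are my own assumptions, and both containers are started with --cap-add=NET_ADMIN so the commands are allowed):

# on the gateway container (attached to both networks; 172.32.0.2 is assumed
# to be its address on the internal network, eth0 its bridge-side interface)
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 172.32.0.0/16 -o eth0 -j MASQUERADE

# on the isolated container: send all traffic via the gateway container
ip route replace default via 172.32.0.2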

OpenStack: relation between controller & compute nodes

I just started playing with OpenStack, and there are many things I still don't understand. As I see it, to start a VM instance, we normally execute some commands on the controller, e.g.
glance image-create
nova boot
But how does the controller know:
1) on which compute node to start the VM
2) how many compute nodes it has
Where does it get this information?
The controller determines where to launch the instance based on the information provided by nova-scheduler:
http://docs.openstack.org/juno/config-reference/content/section_compute-scheduler.html
As for how many compute nodes are recognized, this is determined when you register a compute node with nova on the controller. Here is a reference for how compute is installed and configured on RHEL/CentOS/Fedora:
http://docs.openstack.org/juno/install-guide/install/yum/content/ch_nova.html
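For example, once compute nodes are registered, you can list what the controller sees with the standard nova client commands:

# every nova service the controller knows about, including each nova-compute
nova service-list
# or just the registered hypervisors
nova hypervisor-list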
I'd suggest learning the OpenStack software architecture for such questions; for example, look at this page: http://docs.openstack.org/openstack-ops/content/example_architecture.html.
Simply speaking, OpenStack saves all the configuration in a database, which is MySQL by default, so the controller knows all this information. A Nova component named nova-scheduler, running as a controller service, decides where to place a VM among all available hosts.
A good starting point is to deploy a multi-node environment. You will see how OpenStack works as you go through the deployment procedure.
