OpenStack: Ephemeral disk on local disk

Currently our OpenStack environment uses GlusterFS to store and run instances and images; /var/lib/nova/instances is a GlusterFS mount point. Everything works fine, but we need the ephemeral disks to live on the compute nodes' local hard drives rather than on the GlusterFS share where the instances run.
I don't see any option in nova.conf that lets you say explicitly where ephemeral disks should be stored.
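For what it's worth, nova.conf does not seem to expose a path for ephemeral disks alone: the instances_path option controls where all of an instance's files (the root disk, the ephemeral disk.local, console.log) are created, with $state_path/instances as the default. A minimal sketch, assuming you are willing to keep everything local (the directory name is a made-up example):

    [DEFAULT]
    # All instance files, ephemeral disks included, are created under
    # this path; the directory below is a hypothetical local mount.
    instances_path = /var/lib/nova/instances-local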

Related

What happens if a compute node restarts or stops working?

I wonder what happens when a machine running a compute node with active VMs shuts down due to a hardware malfunction or power outage, and then comes back after some time. Does OpenStack somehow manage "to move" the VMs that were assigned to that node so they run on another node? What happens to networking between VMs on other nodes trying to reach VMs that were running on the shut-down node?
Does OpenStack somehow manage "to move" the VMs that were assigned to that node so they run on another node?
Not automatically.
If your OpenStack infrastructure has been configured with a common storage system for the compute nodes, then an instance that was running on the failed node can be migrated to another node and then booted.
What happens to networking between VMs on other nodes trying to reach VMs that were running on the shut-down node?
Once the instance from the failed node has been restarted on a new node, other VMs will be able to talk to it ... using the instance's IP address.
Of course, network connections won't survive the failure. (If a compute node fails, that brings down all instances that were running on it ...)
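If shared storage is in place, the "move" can be done by hand with nova's evacuate command. A minimal sketch using the legacy nova CLI; the instance name web01 and target host compute2 are hypothetical:

    # Rebuild an instance from the failed node on another compute node,
    # reusing its disk from shared storage (names are made up):
    nova evacuate --on-shared-storage web01 compute2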

Network settings in Openstack with single OpenVPN connection

I'm trying to set up an OpenStack environment with two Kubernetes clusters, one production and one testing. My idea was to separate them with two networks in OpenStack and then put a VPN in front to limit the exposure through floating IPs (for this I would have a proxy that routes requests to the correct internal addresses).
However, issues arise when I try to tunnel requests to both networks while connected to the VPN. Whether I run the VPN in its own network or in one of the two, I don't seem to be able to make requests across network boundaries.
Is there a better way to configure the networking in OpenStack or OpenVPN, so that I can keep the clusters separated and still reach all resources through a single OpenVPN installation?
Is it better to run everything in the same OpenStack network and separate the clusters with subnets? Can I still have the production and test clusters expose different IP addresses externally? Are they still separated enough to limit the risk of them accessing each other?
Side note: I use Terraform to deploy the infrastructure and Ansible to install resources, in case someone has suggestions along the lines of already-prepared scripts.
Thanks,
The solution I went for was to give each environment its own network and CIDR, and then attach both networks to the VPN instance so it has access to them. From there I just tunnel everything.
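With that layout, the OpenVPN server mainly needs to push routes for both networks to connecting clients. A sketch of a server.conf fragment; the CIDRs are assumptions for illustration:

    # Push routes for both cluster networks to VPN clients.
    # The CIDRs below are made-up examples.
    push "route 10.0.1.0 255.255.255.0"   # production network
    push "route 10.0.2.0 255.255.255.0"   # testing network

The VPN instance itself also needs IP forwarding enabled (net.ipv4.ip_forward=1) so it can route between the tunnel and the two networks.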

How do networking and load balancer work in docker swarm mode?

I am new to Docker and containers. I was going through the Docker tutorials and came across this information.
https://docs.docker.com/get-started/part3/#docker-composeyml
    # inside the "web" service definition:
        networks:
          - webnet
    # at the top level of docker-compose.yml:
    networks:
      webnet:
What is webnet? The document says
Instruct web’s containers to share port 80 via a load-balanced network called webnet. (Internally, the containers themselves will publish to web’s port 80 at an ephemeral port.)
So, by default, the overlay network is load-balanced in a Docker cluster? What load-balancing algorithm is used?
Actually, it is not clear to me why we have load balancing on the overlay network.
Not sure I can be clearer than the docs, but maybe rephrasing will help.
First, the doc you're following here uses what is called the swarm mode of Docker.
What is swarm mode?
A swarm is a cluster of Docker engines, or nodes, where you deploy services. The Docker Engine CLI and API include commands to manage swarm nodes (e.g., add or remove nodes), and deploy and orchestrate services across the swarm.
From SO Documentation:
A swarm is a number of Docker Engines (or nodes) that deploy services collectively. Swarm is used to distribute processing across many physical, virtual or cloud machines.
So, with swarm mode you have a multi-host cluster (VMs and/or physical machines) whose members communicate with each other through their Docker engines.
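As a rough sketch of what joining such a cluster looks like (the manager address is a made-up example; the join token is printed by swarm init):

    # On the manager node:
    docker swarm init --advertise-addr 192.168.99.100
    # On each worker, using the token printed by "swarm init":
    docker swarm join --token <token> 192.168.99.100:2377
    # Back on the manager, list the nodes in the swarm:
    docker node ls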
Q1. What is webnet?
webnet is the name of an overlay network that is created when your stack is launched.
Overlay networks manage communications among the Docker daemons participating in the swarm
In your cluster of machines, a virtual network is created in which each service gets an IP address mapped to an internal DNS entry (the service name), allowing Docker to route incoming packets to the right container anywhere in the swarm (cluster).
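You can see that service discovery in action once the stack is deployed. A sketch, assuming the stack is named mystack and the service image is glibc-based so getent is available:

    # Deploy the tutorial's compose file, then resolve the service name
    # from inside one of the "web" tasks; it returns the virtual IP of
    # the service on webnet:
    docker stack deploy -c docker-compose.yml mystack
    docker exec $(docker ps -q -f name=mystack_web | head -n1) getent hosts web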
Q2. So, by default, the overlay network is load-balanced in a Docker cluster?
Yes, if you use the overlay network; but you could also remove the service's networks configuration to bypass that. Then you would have to publish the port of the service you want to expose.
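For the publish-the-port route, here is a compose fragment sketch (version 3.2+ long syntax; the port numbers are made-up examples). With mode: host the port is bound on whichever node runs each task, bypassing swarm's ingress load balancing:

    ports:
      - target: 80        # port inside the container
        published: 8080   # port bound on the host
        mode: host        # skip the routing mesh, bind per-node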
Q3. What load-balancing algorithm is used?
From this SO question answered by swarm master bmitch ;):
The algorithm is currently round-robin and I've seen no indication that it's pluginable yet. A higher level load balancer would allow swarm nodes to be taken down for maintenance, but any sticky sessions or other routing features will be undone by the round-robin algorithm in swarm mode.
Q4. Actually, it is not clear to me why we have load balancing on the overlay network
The purpose of Docker swarm mode / services is to allow orchestration of replicated services, meaning that we can scale the containers deployed in the swarm up and down.
From the docs again:
Swarm mode has an internal DNS component that automatically assigns each service in the swarm a DNS entry. The swarm manager uses internal load balancing to distribute requests among services within the cluster based upon the DNS name of the service.
So you can deploy, say, 10 instances of the exact same container (for example nginx serving your app's HTML/JS) without dealing with private-network DNS entries, port configuration, and so on. Any incoming request will be automatically load-balanced across the hosts participating in the swarm.
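Scaling is then a one-liner. A sketch, reusing the assumed stack name mystack from above:

    # Run 10 replicas of the tutorial's "web" service; requests keep
    # being spread round-robin across all of them:
    docker service scale mystack_web=10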
Hope this helps!

PCI passthrough strategy in Docker or oVirt

We have to deploy a test system where a Docker container or a VM (oVirt 3.5) shares up to four 10GbE network cards with other containers/VMs.
So far we have been using just oVirt for this purpose, but we would like to shift to a Dockerized system to save some resources on the machines.
Does anybody have some experience or suggestion?
Docker containers are really just processes; Docker can run each of them in a separate network namespace (the default) or let them use the host's network directly (--net=host).
If they run in a separate network namespace, they won't have any direct access to the host's network cards; in the default configuration (--net=bridge) they are NAT-networked via a Linux bridge. If that matches your requirements, you're all set.
See the Docker documentation on networking.
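To illustrate the difference, a quick sketch: with host networking the container sees the host's NICs (your 10GbE cards included) directly, with no bridge or NAT in between:

    # Default bridge networking: the container has its own namespace.
    docker run --rm alpine ip addr show
    # Host networking: the same command now lists the host's interfaces.
    docker run --rm --net=host alpine ip addr show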

Configuring OpenStack for an in-house test cloud

We're currently looking to migrate an old and buggy Eucalyptus cloud to OpenStack. We have ~15 machines that are all on the same office-internal network. The instances get their network configuration from an external (non-Eucalyptus) DHCP server. We run both Linux and Windows images. The cloud is used exclusively for platform testing from Jenkins.
Looking into OpenStack, it seems that none of the three supported networking modes really fits our environment. What we are looking for is something like an "unmanaged mode" where OpenStack launches an instance that is hooked up to the eth0 interface of its compute node and receives its network configuration from the external DHCP server on boot. That is, the VMs, guest hosts, and clients (Jenkins) are all on the same network, managed by an external DHCP server.
Is a scenario like this possible to set up in OpenStack?
It's not commonly used, but the networking setup that best fits your needs is FlatNetworking (not FlatDHCPNetworking). There isn't stellar documentation on configuring that setup for your environment, and some pieces (like the nova-metadata service) may be a bit tricky to manage, but it should let you run an OpenStack cloud with an external DHCP provider.
I wrote up the wiki page http://wiki.openstack.org/UnderstandingFlatNetworking some time ago to explain the setup of the various networks and how they operate with regard to the NICs on the hosting systems. FlatNetworking is effectively the same as FlatDHCPNetworking, except that OpenStack doesn't try to run the DHCP service for you.
Note that with this mode, all the VM instances will be on the same network as your OpenStack infrastructure - there's no separation of networks at all.
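For reference, a nova.conf fragment sketch for the legacy nova-network flat mode; the interface and bridge names are assumptions for illustration:

    [DEFAULT]
    # One flat network, and OpenStack runs no DHCP service of its own.
    network_manager = nova.network.manager.FlatManager
    # Physical NIC bridged onto the office network (assumed name).
    flat_interface = eth0
    # Linux bridge the VMs' virtual NICs attach to (assumed name).
    flat_network_bridge = br100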
