Choose a specific network among multiple networks - OpenStack

I am using Cloudify 2.7 with OpenStack Icehouse.
I would like to attach the Cloudify Management VM to two private networks:
private-net-1 and private-net-2
The Cloudify Shell, however, is attached only to private-net-1.
So, how should I configure the cloud driver so that the bootstrap-cloud process will work?

The Cloudify shell just needs access to the cloud API; it does not matter whether it is connected to one network or several, and it is unrelated to how the Cloudify Manager is set up.
The Cloudify compute template configuration allows you to specify static networks that the compute machine will connect to. See example here:
http://getcloudify.org/guide/2.7/clouddrivers/network.html
Note this section:
// Optional. Use existing networks.
computeNetwork {
    networks (["SOME_INTERNAL_NETWORK"])
}
So you can specify multiple networks here:
// Optional. Use existing networks.
computeNetwork {
    networks (["SOME_INTERNAL_NETWORK1", "SOME_INTERNAL_NETWORK2"])
}
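So for the setup in the question, the management machine's compute template would carry a network block like this (a sketch that simply plugs the question's network names into the documented block):

// In the Cloudify Management machine's compute template:
computeNetwork {
    networks (["private-net-1", "private-net-2"])
}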

Related

Setting up an OpenStack compute node with a fake hypervisor

I'm trying to set up OpenStack compute nodes that mimic real nodes but never actually spawn VMs on a physical host.
The OpenStack tests use fake drivers (defined in nova/virt/fake.py) through a complex system of testing classes.
I want to get such a node up and running outside a test (that is, without using those classes to spawn the compute node), on an actual VM or container. However, I cannot figure out how to run a compute process with this fake hypervisor (or, more specifically, with one defined by me).
How do I inject this fake driver instead of the real driver in a compute node?
(Also, I'm installing OpenStack using the latest devstack.)
For more clarification: my goal is to stress-test OpenStack by running multiple fake compute nodes, not an all-in-one configuration. Devstack is used to set up the controller node only to simplify the process; the system should be:
A controller node, running the core services (Nova, Glance, Keystone etc.).
Multiple compute nodes, using fake hypervisors on different machines.
When a new compute node is installed, a configuration file, nova-compute.conf, is created automatically.
It seems that /etc/nova/nova-compute.conf contains the option:
compute_driver = libvirt.LibvirtDriver
which uses libvirt as the default hypervisor for a compute node. According to the nova configuration documentation, besides hyperv, vmwareapi, and xenapi, one can also choose the fake driver by changing this option to:
compute_driver = fake.FakeDriver
To point the fake driver at our own implementation, we can replace the driver defined in fake.py with our own class.
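As a minimal sketch (the module and class names are hypothetical, and how driver paths are resolved varies between nova releases), a custom driver could subclass nova's FakeDriver and then be referenced from nova-compute.conf:

# hypothetical file: nova/virt/my_fake.py
from nova.virt import fake

class MyFakeDriver(fake.FakeDriver):
    """A fake hypervisor reporting made-up resources; override
    spawn(), destroy(), or the resource reporting as needed."""

# /etc/nova/nova-compute.conf
[DEFAULT]
compute_driver = my_fake.MyFakeDriver

Values of compute_driver such as libvirt.LibvirtDriver and fake.FakeDriver are resolved relative to the nova.virt package, which is why my_fake.MyFakeDriver above would be expected to live in nova/virt/my_fake.py.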

How to set up a multi-node Corda network in a lab

I followed the documentation on docs.corda.net to set up a 3-node dev Corda network on a single machine.
My goal is to set up a multi-node, production-level Corda network spanning multiple physical machines. Can someone please explain how I can achieve this?
I want to learn about Corda network capabilities, the different configuration modes, and so on.
I've already set up the 3-node dev Corda network on a single machine.
There are two approaches you can use to achieve this:
Using the Network Bootstrapper; see https://docs.corda.net/network-bootstrapper.html
Using the Network Map Service
For a production-level network it is preferable to use the Network Map Service, since it lets you manage nodes dynamically. This is not possible with the Network Bootstrapper: node information is distributed among the nodes during bootstrapping and cannot be changed afterwards.
For a Network Map Service implementation, you can refer to the Cordite Network Map Service.
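For illustration (the hostnames, ports, and directory layout here are examples): the bootstrapper is run once over a directory of node.conf files, while joining a network map service is done by pointing each node's node.conf at the service's URL.

# One-off bootstrapping from a directory containing the node.conf files
java -jar network-bootstrapper.jar --dir ./nodes

// node.conf for a node that joins a network map service instead
myLegalName = "O=NodeA, L=London, C=GB"
p2pAddress = "node-a.example.com:10002"
compatibilityZoneURL = "https://netmap.example.com"
devMode = false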

How to provide support for a Persistent Volume provisioner within a Kubernetes cluster?

OK, so this might be a basic question, but I'm new to Kubernetes and tried to install WordPress on it using Helm and the stable/wordpress chart. I keep getting the error "pod has unbound immediate PersistentVolumeClaims (repeated 2 times)". Is this because of the requirement listed at https://github.com/helm/charts/tree/master/stable/wordpress, "PV provisioner support in the underlying infrastructure"? If so, how do I enable this in my infrastructure? I have set up my cluster across three nodes on DigitalOcean and have searched for tutorials on this with no luck so far. Please let me know what I'm missing, thanks.
PersistentVolume types are implemented as plugins. Kubernetes currently supports the following plugins:
GCEPersistentDisk
AWSElasticBlockStore
AzureFile
AzureDisk
FC (Fibre Channel)
Flexvolume
Flocker
NFS
iSCSI
RBD (Ceph Block Device)
CephFS
Cinder (OpenStack block storage)
Glusterfs
VsphereVolume
Quobyte Volumes
HostPath (Single node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)
Portworx Volumes
ScaleIO Volumes
StorageOS
You can enable support for PVs or dynamic PVs using those plugins; see the Kubernetes persistent volumes documentation for details.
On DigitalOcean you can use block storage for volumes.
Kubernetes can be set up for Dynamic Volume Provisioning. This would allow the chart to run to completion with its default configuration, since the PVs would be provisioned on demand.
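As a sketch of what dynamic provisioning looks like on DigitalOcean (this assumes the DigitalOcean CSI driver is deployed in the cluster; the driver normally creates an equivalent StorageClass named do-block-storage for you):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: do-block-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: dobs.csi.digitalocean.com

With a default StorageClass like this in place, the chart's PersistentVolumeClaims bind automatically and the "unbound immediate PersistentVolumeClaims" error goes away.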

Migrate from legacy network in GCE

Long story short - I need to use networking between projects to have separate billing for them.
I'd like to reach all the VMs in different projects from a single point that I will use for provisioning systems (let's call it coordinator node).
It looks like VPC Network Peering is a perfect solution for this. But unfortunately, one of the existing networks is "legacy". Here's what the Google docs state about legacy networks.
About legacy networks
Note: Legacy networks are not recommended. Many newer GCP features are not supported in legacy networks.
OK, naturally the question arises: how do you migrate out of a legacy network? The documentation does not address this topic. Is it not possible?
I have a bunch of VMs, and I'd be able to shut them down one by one:
shutdown
change something
restart
Unfortunately, it does not seem possible to change the network even while the VM is down?
EDIT:
it has been suggested to recreate the VMs while keeping the same disks. I would still need a way to bridge the legacy network with the new VPC network to make the migration smooth. Any thoughts on how to do that using the GCE toolset?
One possible solution - for each VM in the legacy network:
Get VM parameters (API get method)
Delete VM without deleting PD (persistent disk)
Create VM in the new VPC network using parameters from step 1 (and existing persistent disk)
This way stop-change-start is not so different from delete-recreate-with-changes. It's possible to write a script to fully automate this (migration of a whole network). I wouldn't be surprised if someone already did that.
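As a sketch of steps 1 to 3 with gcloud (the instance, zone, and network names are placeholders):

# 1. Capture the instance's current configuration
gcloud compute instances describe my-vm --zone=us-central1-a

# 2. Delete the instance but keep all of its disks
gcloud compute instances delete my-vm --zone=us-central1-a --keep-disks=all

# 3. Recreate it in the new VPC network, reattaching the existing boot disk
gcloud compute instances create my-vm --zone=us-central1-a \
    --network=my-vpc --subnet=my-subnet \
    --disk=name=my-vm,boot=yes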
UPDATE
https://github.com/googleinterns/vm-network-migration tool automates the above process, plus it supports migration of a whole Instance Group or Load Balancer, etc. Check it out.

Can the Docker driver on OpenStack coexist with libvirt.LibvirtDriver?

The following documentation link indicates that the docker driver needs to be configured on all compute nodes
from
compute_driver = libvirt.LibvirtDriver
to
compute_driver = docker.DockerDriver
Does this mean there will no longer be an option to instantiate a normal VM? Will the Horizon UI allow selecting which type of virtualization (Docker vs. KVM) to use?
In OpenStack you cannot have hybrid compute drivers unless they are separated by availability zones (AZs), so it's either one or the other.
Of course, the hackish workaround would be to spin up an OpenStack compute instance inside the Docker/LXC environment and join it to a new AZ as a libvirt node...
A bit of inception there, though, and it makes your scheduler basically worthless.
With basic OpenStack you can't, but you can write and add a scheduler filter that makes it possible: just write a class with a host_passes method and add your new filter to the nova scheduler filters.
I did it and it works.
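A minimal sketch of such a filter (the class name is hypothetical, and the exact host_passes signature has changed across nova releases; recent releases pass a RequestSpec object):

# hypervisor_type_filter.py -- a hypothetical custom nova scheduler filter
from nova.scheduler import filters

class HypervisorTypeFilter(filters.BaseHostFilter):
    """Pass only hosts whose hypervisor matches the type the image asks for."""

    def host_passes(self, host_state, spec_obj):
        # img_hv_type is populated from the image's hypervisor_type
        # property, e.g. 'docker' or 'qemu'
        requested = spec_obj.image.properties.get('img_hv_type')
        if not requested:
            return True  # no preference, so any host is acceptable
        return host_state.hypervisor_type == requested

The filter then has to be enabled in the scheduler's filter list in nova's configuration (the option is named enabled_filters in recent releases) so that it runs alongside the default filters.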
