Communication between OpenStack VMs - MPI

How do I make two VMs communicate with each other? I have to split a task between two VMs, so I think MPI has to be used. If so, are there any useful resources I can use to get started? Any help would be appreciated.
P.S.: I have installed DevStack Juno.

Your question is not really clear.
OpenStack is just a virtualization technology. There is almost no difference between having two hardware servers and two VMs. For example, if two servers belong to the same network segment, they normally have access to each other's open ports. OpenStack works the same way: if you assign the same network to both VMs, this will also work.
However, if you want two VMs that consume tasks from a shared list and process them in parallel, I recommend reading about Enterprise Integration Patterns (e.g. here). Technically this is implemented with messaging middleware, for example an ActiveMQ broker or the brokerless ZeroMQ library.
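If you go the ZeroMQ route, a minimal PUSH/PULL sketch in Python (using the pyzmq binding; the port and address below are placeholders, and the VMs' network/security groups must allow the traffic) could look like this:

```python
# Minimal ZeroMQ PUSH/PULL sketch for spreading tasks across two VMs.
# Assumes the pyzmq package is installed and port 5557 is reachable
# between the VMs (placeholder address and port, adjust to your setup).
import zmq

def producer():
    # Runs on VM 1: pushes task payloads to any connected worker.
    ctx = zmq.Context()
    sock = ctx.socket(zmq.PUSH)
    sock.bind("tcp://*:5557")
    for task_id in range(10):
        sock.send_json({"task": task_id})

def worker(producer_ip):
    # Runs on VM 2: pulls tasks and processes them.
    ctx = zmq.Context()
    sock = ctx.socket(zmq.PULL)
    sock.connect(f"tcp://{producer_ip}:5557")
    while True:
        task = sock.recv_json()
        print("processing", task["task"])
```

The producer runs on one VM and the worker on the other; ZeroMQ handles reconnects, so the start order of the two processes does not matter.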

Related

Do all servers have one base OS, like in the Red Hat OpenStack architecture?

I'm a beginner learning OpenStack, and the resources are all over the place, to be honest. I came across this image and would like to know one thing.
Suppose I have 100 TB of storage, 10 server-grade processors, and 1 TB of RAM. Are all these resources managed by only one base OS, namely Red Hat Enterprise Linux? In other words, is the hardware wired together so that a single OS can be installed that sees all of it?
And on top of this, we deploy an OpenStack architecture so clients can use the resources as needed? Do we need that many physical NICs, or are the NICs virtual?
How to scale?
As you say, you just add a server. Install RHEL or another supported Linux distro (it's best to use the same distro and version on all servers), then install and configure OpenStack. The new server will register with the OpenStack controllers and can be used for launching virtual machines immediately.
The process is a bit more involved when you run a cloud with baremetal instances (i.e. you don't launch VMs but provision physical systems), but in principle it's the same.
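As a quick sanity check after adding a node, a short sketch with the openstacksdk library (assuming a clouds.yaml entry named "mycloud", which is a placeholder) lists the compute services the controllers know about; the new node should appear as a nova-compute service:

```python
# Sketch: list the compute services known to the controllers after
# adding a node. Assumes the openstacksdk package and a clouds.yaml
# entry named "mycloud" (placeholder name).
import openstack

conn = openstack.connect(cloud="mycloud")
for svc in conn.compute.services():
    print(svc.host, svc.binary, svc.status, svc.state)
```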
By definition (at consumer scale, like one laptop), we need a network interface card for one IP.
This is incorrect. You can configure multiple IP addresses on a single interface, even on your PC at home, even if that PC runs Windows.
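To see this for yourself, here is a small sketch (assuming the third-party psutil package) that lists interfaces carrying more than one IPv4 address on the machine you run it on:

```python
# Sketch: show that a single interface can carry several IP addresses.
# Assumes the psutil package is installed; works on Linux and Windows.
import socket
import psutil

for ifname, addrs in psutil.net_if_addrs().items():
    ips = [a.address for a in addrs if a.family == socket.AF_INET]
    if len(ips) > 1:
        print(f"{ifname} carries multiple IPv4 addresses: {ips}")
```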
An enterprise cloud requires connecting nodes to several networks. Usually, servers have several physical NICs, bond them together, and use VLANs or other multiplexing technologies to implement the networks. See this blog (five years old, but the principles still apply today, and it's well-written) for a good example of a real-world OpenStack network architecture.
OpenStack uses one big special NIC
OpenStack can be deployed in many ways. It is not a shrink-wrapped solution. It can be used on servers with single NICs, bonded NICs, VLANs, normal networks, etc. Your statement is almost correct if you think of a typical deployment and a bond interface as a "big special NIC".
If you are interested in trying this out at home, see the OpenStack installation tutorial. You will learn a lot.

Is it possible to have isolated networks in a Kubernetes cluster?

I'm trying to set up a general architecture for a system that I'm moving to Kubernetes (self-hosted, probably on VSphere).
I'm not very well versed in networking, and I have the following problem that I cannot seem to solve conceptually:
I have many microservices which were split out of a monolith, but the monolith is still significant. All of it is moving to K8s. It's a clustered application and does a lot of all-to-all networking under high load, which I would like to separate from all the other services in the Kubernetes cluster.
Before moving to K8s, we provided a way to specify a network device to be used only for the cluster communication, so that this traffic could be strictly separated from everything else and could even run over separate networking hardware.
So this is where I would request your input: is it possible to have completely separate networking for this application-level cluster inside the Kubernetes cluster? The ideal solution would allow me to continue using our existing logic, i.e. to have a separate network (and network adapter) for the chatty bits but it's not a hard requirement to keep it that way. I have looked at Calico, Flannel, and Istio, but haven't been able to come up with a sound concept.
Use Kubernetes NetworkPolicies. By applying these policies you can allow/deny traffic to pods based on label selectors. You can try Weave Net or Calico; both are good and support NetworkPolicies.
It is good to use the Calico network plugin, because Flannel doesn't support network policies. You can create NetworkPolicy resources to allow/deny the traffic.
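For illustration, here is a hedged sketch of such a policy created with the official kubernetes Python client; the namespace "cluster-app" and the label app=chatty-monolith are made-up placeholders, and the same object is more commonly written as YAML and applied with kubectl:

```python
# Sketch: a NetworkPolicy that only lets pods labelled app=chatty-monolith
# talk to each other inside the "cluster-app" namespace.
# Names, labels, and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="isolate-chatty-monolith"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(
            match_labels={"app": "chatty-monolith"}
        ),
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(
                            match_labels={"app": "chatty-monolith"}
                        )
                    )
                ]
            )
        ],
        policy_types=["Ingress"],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="cluster-app", body=policy
)
```

Note that NetworkPolicies isolate traffic logically; they do not give you a separate physical network adapter for the chatty traffic.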
On OpenShift, you can have an isolated network per project (Kubernetes namespace). See https://docs.openshift.com/container-platform/3.5/admin_guide/managing_networking.html#isolating-project-networks

Migrate from legacy network in GCE

Long story short - I need to use networking between projects to have separate billing for them.
I'd like to reach all the VMs in different projects from a single point that I will use for provisioning systems (let's call it coordinator node).
It looks like VPC network peering is a perfect solution to this, but unfortunately one of the existing networks is "legacy". Here's what the Google docs state about legacy networks:
About legacy networks
Note: Legacy networks are not recommended. Many newer GCP features are not supported in legacy networks.
OK, naturally the question arises: how do you migrate out of a legacy network? The documentation does not address this topic. Is it not possible?
I have a bunch of VMs, and I'd be able to shut them down one by one:
shutdown
change something
restart
Unfortunately, it does not seem possible to change the network even when the VM is down?
EDIT:
It has been suggested to recreate the VMs while keeping the same disks. I would still need a way to bridge the legacy network with the new VPC network to make the migration smooth. Any thoughts on how to do that with the GCE toolset?
One possible solution - for each VM in the legacy network:
Get VM parameters (API get method)
Delete VM without deleting PD (persistent disk)
Create VM in the new VPC network using parameters from step 1 (and existing persistent disk)
This way stop-change-start is not so different from delete-recreate-with-changes. It's possible to write a script to fully automate this (migration of a whole network). I wouldn't be surprised if someone already did that.
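A rough sketch of that loop with google-api-python-client (the project, zone, and subnetwork names are placeholders; a real script also has to wait for each operation to finish and carry over fields like metadata, tags, and service accounts):

```python
# Sketch: recreate a VM in a new VPC network while keeping its disks.
# Assumes google-api-python-client with application default credentials;
# PROJECT, ZONE, and NEW_SUBNET are placeholders.
from googleapiclient import discovery

PROJECT, ZONE = "my-project", "us-central1-a"
NEW_SUBNET = f"projects/{PROJECT}/regions/us-central1/subnetworks/my-subnet"

compute = discovery.build("compute", "v1")

def migrate(instance_name):
    # 1. Save the current configuration.
    inst = compute.instances().get(
        project=PROJECT, zone=ZONE, instance=instance_name).execute()

    # 2. Delete the VM; the disks survive because autoDelete is turned off.
    for disk in inst["disks"]:
        compute.instances().setDiskAutoDelete(
            project=PROJECT, zone=ZONE, instance=instance_name,
            deviceName=disk["deviceName"], autoDelete=False).execute()
    compute.instances().delete(
        project=PROJECT, zone=ZONE, instance=instance_name).execute()
    # (wait for the delete operation to complete before continuing)

    # 3. Recreate it with the old disks, attached to the new subnetwork.
    body = {
        "name": inst["name"],
        "machineType": inst["machineType"],
        "disks": [{"source": d["source"], "boot": d.get("boot", False)}
                  for d in inst["disks"]],
        "networkInterfaces": [{"subnetwork": NEW_SUBNET}],
    }
    compute.instances().insert(project=PROJECT, zone=ZONE, body=body).execute()
```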
UPDATE:
The https://github.com/googleinterns/vm-network-migration tool automates the process above; it also supports migration of a whole instance group, load balancer, etc. Check it out.

How to start multiple virtual machines simultaneously in CloudStack

Is there a way to start multiple virtual machines (instances) simultaneously in CloudStack?
Apparently this can't be done using the HTTP user interface. Also, the HTTP API request accepts only one id for the target virtual machine.
All I can think of to solve this problem is to fire an individual start request for each instance and then poll each of the jobs for the result. Is there a better way?
CloudStack is an API-driven system; if there is no API call that lets you specify multiple VMs to be started (and I don't think there is), then it is not possible.
If you do need to start multiple machines (nearly) simultaneously, the only option I see is to fire multiple API calls, as you already mentioned.
See this answer on another question for a list of tools that make interfacing with CloudStack easier.
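As a rough sketch of that fire-then-poll approach with the third-party cs Python library (the endpoint, keys, and UUIDs below are placeholders):

```python
# Sketch: start several CloudStack VMs "at once" by firing the async
# startVirtualMachine call for each and then polling the resulting jobs.
# Assumes the third-party "cs" package; endpoint/keys/UUIDs are placeholders.
import time
from cs import CloudStack

api = CloudStack(endpoint="https://cloud.example.com/client/api",
                 key="API_KEY", secret="API_SECRET")

vm_ids = ["uuid-1", "uuid-2", "uuid-3"]

# Fire all start requests first; each returns an async job id immediately.
jobs = {vm: api.startVirtualMachine(id=vm)["jobid"] for vm in vm_ids}

# Poll until every job has finished (0 = pending, 1 = done, 2 = failed).
while jobs:
    for vm, jobid in list(jobs.items()):
        status = api.queryAsyncJobResult(jobid=jobid)["jobstatus"]
        if status != 0:
            print(vm, "succeeded" if status == 1 else "failed")
            del jobs[vm]
    time.sleep(2)
```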
To start VMs on CloudStack one after another (in series rather than truly simultaneously), I used CloudMonkey and created a bash script that starts a known group of VM UUIDs. See here for my experience:
https://sites.google.com/site/cloudfyp/tutorial/cloudmonkey/commands-on-cloudmonkey

Rerouting Application Network Traffic at the Data Link Layer

Consider the following situation:
You have an application you are testing, but in order to test the networking functionality of said program, you are required to run multiple instances of it and have them communicate with one another.
Possible solutions are:
- Run software on individual machines connected by WAN or LAN.
- Run the software on virtual machines, all on the same computer.
I do not want to use either of these methods (the reasoning is irrelevant). I want to know if there is a way to reroute network transmissions from the test application (ideally in any programming language) so that I can run multiple instances of the same software on one computer and have each behave as if it were the only instance running on that computer.
In other words, I want to be able to code the application so that each instance listens on the same "listening" port (since only one instance will be running on each computer in production). Then, I want to know if I can reroute the network requests at a lower level than the application so that they do not interfere with each other (clash over the same port number).
Essentially, I want to build a virtual environment which only redirects the network calls (whereas a virtual machine takes far more resources and involves much more). Is this possible, and how might I approach this problem?
Thank you!
UPDATE: This is a more accurate idea of what I want to accomplish:
Basically, I want to program another application which TRANSPARENTLY redirects bind requests to available ports and manages which applications are bound where... So from the application's perspective, all the instances are bound to port 1000, but in reality this other application is automatically managing which instance is bound where and avoiding potential conflicts. I feel like this could be accomplished with Windows hooks, but I'm not sure how to implement it.
As far as I know, there is no sane way to multiplex the same port on the same network device. At the very minimum, you will need to choose one of the following:
Run each instance of your program on a different port
Create multiple virtual network interfaces
The first choice is easy and is probably the one I would choose. The second one is closer to what you are looking for, but it would be a true PITA to set up - you can look into VirtualBox and its host-only networks for inspiration. If you are writing things on Linux, you might look into pipes and chrooting, but you'll be spending more time setting up this environment than writing your software.
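If you settle for the first option, one common trick is to bind to port 0 and let the OS pick a free port for each instance; a minimal Python sketch (the coordinator that records which instance got which port, as described in the update, is not shown):

```python
# Sketch: let the OS assign a free port to each instance instead of
# hard-coding the "production" port. Each instance then reports its
# actual port to whatever coordinator/registry you build around this.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))      # port 0 = "pick any free port"
srv.listen()
host, port = srv.getsockname()
print(f"instance listening on {host}:{port}")
```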
