How do you define an OpenStack node?

So I've read several articles and looked through the OpenStack docs for the definition of a node.
Node
A Node is a logical object managed by the Senlin service. A node can
be a member of at most one cluster at any time. A node can be an
orphan node which means it doesn’t belong to any clusters.
Node types
According to the Oracle docs, there are different node types (controller node, compute node, etc.). What I'm confused about is whether a single node is a single physical host. Does that mean I can still deploy multiple nodes of different types on the same host?
Node Cluster
I read that a cluster is a group of nodes. What could the cluster for the controller node look like?
CONTROLLER NODE
The controller node is the control plane for the OpenStack
environment. The control plane handles identity (keystone), dashboard
(Horizon), telemetry (ceilometer), orchestration (heat) and the
networking service (neutron).
In this architecture, I have different OpenStack services (Horizon, Glance, etc.) running on one node. Can I conclude from this picture whether it's part of a cluster?

OK, so a node in the context of the OpenStack documentation is synonymous with host:
The example architecture requires at least two nodes (hosts)
from the sentence on the page: https://docs.openstack.org/newton/install-guide-ubuntu/overview.html
You already found out what a node is in the context of Senlin.
Node types: the nodes referred to here are the physical hosts, as in the rest of the OpenStack documentation. The node type is determined by the services running on the host. Usually you can run several services on one host.
In OpenStack, the word cluster usually refers to a collection of nodes managed by Senlin. So no, these services do not need to form a cluster.
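For illustration, this is roughly how Senlin's nodes and clusters look on the command line, assuming the Senlin plugin for the openstack client is installed (the spec file and all names here are made up):
# A profile describes how each Senlin node (e.g. a Nova server) should be built.
openstack cluster profile create --spec-file web_server.yaml web_profile
# A cluster groups such nodes; each member is a Senlin node, not a physical host.
openstack cluster create --profile web_profile --desired-capacity 2 web_cluster
# A standalone (orphan) node that belongs to no cluster.
openstack cluster node create --profile web_profile lonely_node
# List the nodes and the cluster (if any) each one belongs to.
openstack cluster node list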

Related

How to set up a multi-node Corda network in a lab

I followed the documentation from docs.corda.net to set up a 3-node dev Corda network on a single machine.
My goal is to set up a multi-node, production-level Corda network that involves multiple physical machines. Can someone please help me achieve this?
I want to learn about the Corda network's capabilities, its different configuration modes, etc.
I've already set up the 3-node dev Corda network on a single machine.
There are two approaches with which you can achieve this:
Using the network bootstrapper (a sketch of the invocation follows below); see https://docs.corda.net/network-bootstrapper.html
Using the Network Map Service
For a production-level network it is preferable to use the Network Map Service, as it lets you manage the nodes dynamically. This is not possible with the network bootstrapper, because node information is shared between the nodes during bootstrapping and cannot be changed afterwards.
For a Network Map Service, you can refer to the Cordite Network Map Service.
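As a rough sketch of the bootstrapper approach (the exact jar name and options depend on the Corda version you download, so check the page linked above; the directory layout here is hypothetical):
# Place one node.conf per node under ./nodes, then run the bootstrapper over that
# directory; it generates the node directories and distributes the signed node-info files.
java -jar corda-network-bootstrapper.jar --dir ./nodes
# Copy each generated node directory to its target machine and start the node there:
java -jar corda.jar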

Akka.NET scaling in an Azure ASP.NET website

I have set up Akka.NET actors running inside an ASP.NET application to handle some asynchronous and lightweight procedures. I'm wondering how Akka scales when I scale out the website on Azure. Let's say that in code I have a single actor to process messages of type FooBar. When I have two instances of the website, is there still a single actor or are there now two actors?
By default, whenever you call the ActorOf method, you request the creation of a new actor instance. If you call it in two actor systems, you end up with two separate actors that have the same relative paths inside their systems, but different global addresses.
There are several ways to share information about actors between actor systems:
When using Akka.Remote you can call actors living in another actor system, given their addresses or IActorRefs. Requirements:
You must know the path to a given actor.
You must know the actual address (URL or IP) of the actor system on which that actor lives.
Both actor systems must be able to communicate with each other via TCP (i.e. open the relevant ports on the firewall).
When using Akka.Cluster, actor systems (also known as nodes) can form a cluster. They will exchange information about their location in the network, track incoming nodes and eventually detect dead or unreachable ones. On top of this, you can use higher-level components, e.g. cluster routers. Requirements (a minimal config sketch follows at the end of this answer):
Every node must be able to open a TCP channel to every other node (so again, firewalls etc.).
A new incoming node must know at least one node that is already part of the cluster. This is easily achievable with the pattern known as a lighthouse, or via plugins and third-party services like Consul.
All nodes must use the same actor system name.
Finally, when using the cluster configuration you can make use of Akka.Cluster.Sharding - it's essentially a higher-level abstraction over an actor's location inside the cluster. When using it, you don't need to explicitly say where to find or when to create an actor. Instead, all you need is a unique actor identifier. When you send a message to such an actor, it will be created ad hoc somewhere in the cluster if it didn't exist before, and rebalanced to spread the workload equally across the cluster. This plugin also handles all the logic associated with routing the message to that actor.
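To make the cluster requirements concrete, a minimal HOCON sketch for each web instance might look like the following (the system name, addresses and ports are made up; every instance uses the same actor system name and points at the same seed nodes, e.g. a Lighthouse instance):
akka {
  actor.provider = cluster
  remote.dot-netty.tcp {
    hostname = "10.0.0.4"   # this instance's reachable address
    port = 4053
  }
  cluster {
    # at least one node that is already part of the cluster
    seed-nodes = ["akka.tcp://websystem@10.0.0.10:4053"]
  }
}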

Can I use two controllers on two different machines?

In my scenario, the transaction is between two nodes on two different machines. Currently I am using a controller on machine A which acts as a notary as well. Can I use two controllers, one on each machine?
As discussed here: Corda Controller Node, Corda has no concept of a "controller" node.
Up until Corda 2, each network had a single network map node, no matter how many machines were involved. Each node's configuration file would point to this network map node, using its IP address and port number.
In Corda 3, the network map node was replaced with a server distributing network map files. Details about how to deploy a network across machines in Corda 3 can be found here: https://docs.corda.net/tutorial-cordapp.html#running-nodes-across-machines.
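A rough sketch of that procedure, assuming a standard CorDapp project built with the Cordform gradle task (party names and paths are hypothetical):
# Build the node directories locally:
./gradlew deployNodes
# Copy each generated node directory to its own machine:
scp -r build/nodes/PartyA user@machine-a:/opt/corda/
scp -r build/nodes/PartyB user@machine-b:/opt/corda/
# On each machine, edit node.conf so that p2pAddress advertises that machine's
# externally reachable host and port, then start the node:
java -jar corda.jar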
Yes, you can set up your case. Use NotaryChangeFlow (an initiating flow), which should be used to change a state's notary.

How to get the proxy nodes in an OpenStack Swift cluster?

I know the command swift-ring-builder /etc/swift/object.builder can list all the storage nodes in a Swift cluster. Now I want to know whether there are any commands like it to get the proxy nodes in the cluster.
Every controller node itself acts as a proxy server first. The requests hit the proxy-server code on the controller node, which resolves the functions and methods to be called and acts on them.
The list of storage nodes MUST be accessible to all nodes in the cluster.
However, Swift is agnostic about the list of proxies it has, so there is no such command.
One suggestion, if you really need this information, would be to look at the storage nodes' logs and find out which IPs are making the requests. This way you can discover some or all of the proxies; however, this method is imprecise.
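For example, a crude way to harvest candidate proxy IPs on a storage node (the log path and format vary with the distribution and syslog configuration, so treat this as an assumption):
# Pull every IPv4-looking token out of the object-server log lines and de-duplicate;
# the proxies' addresses will be among them.
grep 'object-server' /var/log/syslog | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | sort -u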

OpenStack: relation between controller and compute nodes

I just started playing with OpenStack, and there are many things I still don't understand. As I see it, to start a VM instance, we normally execute some commands on the controller, e.g.
glance image-create
nova boot
But how does the controller know:
1) on which compute node to start the VM
2) how many compute nodes it has
Where does it get this information?
The controller determines where to launch the instance based on the information provided by nova-scheduler:
http://docs.openstack.org/juno/config-reference/content/section_compute-scheduler.html
As for how many compute nodes are recognized, this is determined when you register a compute node's nova-compute service with the controller. Here is a reference for how compute is installed and configured on RHEL/CentOS/Fedora:
http://docs.openstack.org/juno/install-guide/install/yum/content/ch_nova.html
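For example, on a Juno-era cloud you can ask the controller which compute nodes it currently knows about with the legacy clients (newer clouds expose the same information via openstack compute service list and openstack hypervisor list):
# Compute services that have registered with the controller, including every nova-compute:
nova service-list
# The hypervisors (compute nodes) the scheduler can place instances on:
nova hypervisor-list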
I'd suggest learning the OpenStack software architecture for such questions; for example, look at this page: http://docs.openstack.org/openstack-ops/content/example_architecture.html.
Simply speaking, OpenStack saves all the configuration in a database (MySQL by default), so the controller knows all the information. A Nova component named nova-scheduler, running as a controller service, decides where to place the VM among all the available hosts.
A good starting point is to deploy a multi-node environment. You will learn how OpenStack works during the deployment procedure.
