I'm totally new to OpenStack, and after seeing tutorials running it on both a single node and multiple nodes (at least 3, consisting of 1 controller, 1 compute, and 1 network node), I was wondering: what's the difference, and are there any advantages to a multi-node setup over a single-node one?
An OpenStack system consists of a lot of services. If you run all of these on a single node, you will run into resource scarcity unless you have a machine with very high CPU, RAM, etc. Another advantage of a multi-node configuration is failover: in a 3-node configuration, if one node goes down you can continue with the other 2 (provided you have service replication). It is better to go for at least a 3-node configuration, which is what OpenStack recommends.
With a multi-node configuration, you can achieve a scale-out storage solution by adding more storage nodes as your needs grow. Likewise, more compute nodes can be added to increase computation capacity.
I am using AWS ElastiCache for Redis as the caching solution for my spring-boot application. I am using spring-boot-starter-data-redis and jedis client to connect with my cache.
Imagine that my cache is cluster-mode-enabled and has 3 shards with 2 nodes in each. I understand that the best way to connect is to use the configuration endpoint. Alternatively, I could list the endpoints of all the nodes, and that would also get the job done.
However, even if I use a single node's endpoint from one of the shards, my caching solution still works. That doesn't look right to me; even if it works, I feel it might cause problems in the cluster in the long run, since there are altogether 6 nodes partitioned into 3 shards but only one node's endpoint is being used. I have the following questions.
Does using one node's endpoint create an imbalance in the cluster?
or
Is that handled automatically by AWS ElastiCache for Redis?
If I use only one node's endpoint, does that mean the other nodes will never be used?
Thank you!
To answer your questions:
Does using one node's endpoint create an imbalance in the cluster?
NO
Is that handled automatically by AWS ElastiCache for Redis?
Somewhat
If I use only one node's endpoint, does that mean the other nodes will never be used?
No. All nodes are being used.
This is how Cluster Mode Enabled works. In your case, you have 3 shards, meaning all your slots (where key-value data is stored) are divided into 3 sub-clusters, i.e. shards.
This was explained in this answer as well - https://stackoverflow.com/a/72058580/6024431
So, essentially, your nodes are smart enough to redirect your requests to the node that holds the key slot where your data needs to be stored. So, no imbalances: Redis handles the redirection for you.
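To see that redirection in action, here is a minimal sketch, assuming the jedis client from your question and a hypothetical node endpoint, of what happens when a plain single-node connection is asked for a key whose slot lives on a different shard:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.exceptions.JedisMovedDataException;

public class MovedDemo {
    public static void main(String[] args) {
        // Hypothetical endpoint of a single node in a cluster-mode-enabled cache.
        try (Jedis jedis = new Jedis("node-1p.xxxxxx.cache.amazonaws.com", 6379)) {
            jedis.set("user:42", "alice");
        } catch (JedisMovedDataException e) {
            // This node does not own the hash slot for "user:42"; Redis replies
            // "MOVED <slot> <host:port>". Cluster-aware clients such as
            // JedisCluster follow that redirect transparently.
            System.out.println("Slot lives on " + e.getTargetNode());
        }
    }
}
```

A cluster-aware client performs this hop for you, which is why even a single seed endpoint appears to just work.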
Now, while using node endpoints, you're going to face other problems.
ElastiCache runs in the cloud (essentially on AWS hardware), and all hardware faces issues eventually. Say you have 3 primaries (1p, 2p, 3p) and 3 replicas (1r, 2r, 3r).
If a primary goes down due to a hardware issue (let's say 1p), its replica (1r) gets promoted to become the new primary for that shard.
The problem now is that your application is connected directly to 1p, which has been demoted to a replica, so all WRITE operations will fail.
You would have to change the application configuration manually whenever this happens.
Alternatively, if you were using the configuration endpoint (or another cluster-level endpoint) instead of node endpoints, this issue would be at most a blip to your application, perhaps for 1-2 seconds.
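For completeness, here is a minimal sketch of connecting through the configuration endpoint with a cluster-aware client; the endpoint name is hypothetical, and with spring-boot-starter-data-redis the equivalent would be pointing spring.redis.cluster.nodes at that endpoint:

```java
import java.util.Collections;
import java.util.Set;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class ClusterEndpointDemo {
    public static void main(String[] args) {
        // One seed address is enough: the client discovers the full topology
        // (all 3 shards plus replicas) from the cluster itself, and refreshes
        // it when a replica is promoted after a failover.
        Set<HostAndPort> seed = Collections.singleton(new HostAndPort(
                "my-cache.xxxxxx.clustercfg.use1.cache.amazonaws.com", 6379));
        try (JedisCluster cluster = new JedisCluster(seed)) {
            cluster.set("user:42", "alice"); // routed to whichever shard owns the slot
            System.out.println(cluster.get("user:42"));
        }
    }
}
```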
Cheers!
I am running nginx on a Kubernetes cluster that has 3 nodes.
I am wondering if there is any benefit to having, for example, 4 pods with cpu/mem limited to approx. 1/4 of a node's capacity, vs. running 1 pod per node with cpu/mem limits that let the pod use the resources of the whole node (for the sake of simplicity, we leave Kubernetes services out of the equation).
My feeling is that fewer pods means less overhead, so going for 1 pod per node should give the best performance?
Thanks in advance
With more than 1 pod, you get a degree of high availability. Your pod will die at some point, and if it is behind a controller (which it must be), it will be re-created, but with a single replica you will have a small downtime.
Now, take into consideration that if you deploy more than one replica of your app, even though you give each 1/n of the resources, the base image and dependencies are going to be replicated.
As an example, let's imagine an app that runs on Ubuntu, and has 5 dependencies:
If you run 1 replica of this app, you are deploying 1 Ubuntu + 5 dependencies + the app itself.
If you run 4 replicas of this app, you are running 4 Ubuntus + 4*5 dependencies + 4 copies of the app.
My point is: if your base image is big and your dependencies are heavy, resource usage does not increase linearly with the number of replicas.
Performance-wise, I don't think there is much difference. One of your nodes will be heavily loaded, as all your requests will end up there, but if your nodes can handle it, there should be no problem.
What you are referring to is the difference between horizontal and vertical scaling. With vertical scaling, you increase the resources of your application as you see fit. With horizontal scaling, you instead increase the number of replicas of your application.
Choosing one or the other depends on the features your application may or may not have. In the case of nginx, scaling horizontally splits traffic per pod and also per node, which results in better throughput for what is most likely a reverse proxy.
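As a concrete illustration of the horizontal option, here is a minimal sketch using the fabric8 kubernetes-client library (my choice for illustration, not something from your setup); the image tag and resource figures are assumptions for a node with roughly 4 CPUs and 8Gi of memory:

```java
import io.fabric8.kubernetes.api.model.Quantity;
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class NginxScaleOut {
    public static void main(String[] args) {
        // 4 replicas, each capped at roughly 1/4 of an assumed 4-CPU / 8Gi node.
        Deployment nginx = new DeploymentBuilder()
            .withNewMetadata().withName("nginx").endMetadata()
            .withNewSpec()
                .withReplicas(4)
                .withNewSelector().addToMatchLabels("app", "nginx").endSelector()
                .withNewTemplate()
                    .withNewMetadata().addToLabels("app", "nginx").endMetadata()
                    .withNewSpec()
                        .addNewContainer()
                            .withName("nginx")
                            .withImage("nginx:1.25")
                            .withNewResources()
                                .addToRequests("cpu", new Quantity("1"))
                                .addToRequests("memory", new Quantity("2Gi"))
                                .addToLimits("cpu", new Quantity("1"))
                                .addToLimits("memory", new Quantity("2Gi"))
                            .endResources()
                        .endContainer()
                    .endSpec()
                .endTemplate()
            .endSpec()
            .build();

        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            client.apps().deployments().inNamespace("default").createOrReplace(nginx);
        }
    }
}
```

The vertical alternative would keep 1 pod per node (3 replicas) and raise the cpu/memory values instead; the Deployment shape stays identical, only the numbers change.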
So I've read several articles and looked through the OpenStack docs for the definition of a node.
Node
A Node is a logical object managed by the Senlin service. A node can
be a member of at most one cluster at any time. A node can be an
orphan node which means it doesn’t belong to any clusters.
Node types
According to the Oracle docs, there are different node types (controller node, compute node, etc.). What I'm confused about is whether a single node is a single physical computer host. Does that mean I can still deploy multiple nodes of different node types on the same host?
Node Cluster
I read that a cluster is a group of nodes. What could the cluster for the controller node look like?
CONTROLLER NODE
The controller node is the control plane for the OpenStack
environment. The control plane handles identity (keystone), dashboard
(Horizon), telemetry (ceilometer), orchestration (heat) and network
server service (neutron).
In this architecture, I have different OpenStack services (Horizon, Glance, etc.) running on one node. Can I conclude from this picture whether it's part of a cluster?
OK, so a node in the context of the OpenStack documentation is synonymous with host:
The example architecture requires at least two nodes (hosts)
from the sentence on the page: https://docs.openstack.org/newton/install-guide-ubuntu/overview.html
You already found out what a node is in the context of Senlin.
Node types: the nodes referred to here are the physical hosts, as in the rest of the OpenStack documentation. The node type is determined by the services running on the host. Usually you can run several services on one host.
In OpenStack, the word cluster is only used to refer to a collection of services managed by Senlin. So usually no, these services need not form a cluster.
Can one node be in 2 different networks simultaneously, or do I need to start another node for this? And if it's impossible, how do I merge networks? I found only a few words about this in the technical whitepaper.
As of Corda 3.1, a node can only be part of one network. There is no functionality to allow a node to connect to two databases, have two sets of configs, etc.
I have deployed 2 identical compute nodes in an OpenStack environment (Mitaka).
Each compute node has 2 physical CPUs with 12 cores each.
I would like to create a single VM with as many processors as possible.
I don't want to oversubscribe pCPU to vCPU, i.e. I would keep the physical-to-virtual ratio at 1:1.
However, it seems I am only allowed to create a VM with at most 24 vCPUs, even though I have 48 vCPUs in my resource pool (summed over the 2 compute nodes, each contributing 24 vCPUs).
Does anyone have an idea how to get more vCPUs in a single VM in my case?
You cannot create an instance that spans multiple compute nodes with OpenStack ... or with any open-source virtualization platform that I am aware of.
The proprietary vSMP product (vendor ScaleMP) can do this and there may be other products.
The other approach that you could take is to build a cluster consisting of multiple instances, and use a batch scheduler and/or some kind of message-passing framework to perform computations spanning the cluster.
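Purely as an illustration of that second approach, here is a minimal sketch, with hypothetical host names and a toy protocol (a real deployment would more likely use MPI or a batch scheduler such as SLURM), of a coordinator instance fanning work out to two worker instances and combining their partial results:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

public class WorkFanOut {
    public static void main(String[] args) throws IOException {
        // Hypothetical worker instances, each listening on port 9090.
        String[] workers = {"instance-1.example.com", "instance-2.example.com"};
        long total = 0;
        for (int i = 0; i < workers.length; i++) {
            try (Socket s = new Socket(workers[i], 9090);
                 DataOutputStream out = new DataOutputStream(s.getOutputStream());
                 DataInputStream in = new DataInputStream(s.getInputStream())) {
                out.writeInt(i);        // tell the worker which slice of the job is its
                out.flush();
                total += in.readLong(); // collect the worker's partial result
            }
        }
        System.out.println("combined result = " + total);
    }
}
```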