openstack: how to run multiple glance instances?

I'm new to OpenStack, and I have followed the installation guide to set up OpenStack on a single host server. Now I have a question. On the single node, I registered a glance service and its endpoints in keystone. If I want to run glance on multiple host servers, do I need to register two glance services in keystone, or do I still need just one glance service with additional glance endpoints?

You would probably want to find a way to load balance the glance API endpoints, then place the load-balanced address into the keystone catalog as the PublicURL.
The trick is finding a way to load balance glance. The big issue is the size of glance requests in both time and data throughput: image uploads and downloads are long-running, high-volume transfers, so it is not exactly an easy service to load balance.
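As a minimal sketch of the catalog side (assuming a Havana-era keystone CLI; the VIP address 192.0.2.10, the region name, and the description are placeholders for your own values):

    # Register a single glance service; the catalog entry points at the
    # load balancer VIP rather than at any individual glance host.
    keystone service-create --name=glance --type=image \
        --description="Glance Image Service"

    # Use the service id returned above for SERVICE_ID.
    keystone endpoint-create --region RegionOne \
        --service-id=SERVICE_ID \
        --publicurl=http://192.0.2.10:9292 \
        --internalurl=http://192.0.2.10:9292 \
        --adminurl=http://192.0.2.10:9292

The individual glance-api hosts then sit behind the VIP and never appear in the catalog themselves.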

Related

WSO2 clustering in a distributed deployment

I am trying to understand the clustering concept of WSO2. My basic understanding of a cluster is that there are two or more servers with the same function, with a VIP or load balancer in front. So I would like to know which of the WSO2 components can be clustered. I am trying to achieve the configuration shown in this diagram:
[Image: diagram of the target cluster configuration]
1. Is this configuration achievable or not?
2. Can we cluster 2 Publisher nodes and 2 Store nodes or not?
3. How do we cluster the Key Manager? Do we use the same settings as for the Identity Manager?
4. Should we use a port offset when running 2 components on the same server? And if yes, how do we make sure that the components are using the ports specified by the offset?
5. Should we create a separate external database for each CarbonDB datasource entry in the master-datasources.xml file, or can we keep using the local H2 database for this? I have created the following databases; let me know if I am correct in doing this or not. [Image: list of the WSO2 databases I created]
6. I made several copies of the WSO2 binary files (as shown in the image) and copied them to the servers where I want to run 2 components on the same server. Is this the correct way of running 2 components on one server?
7. For load balancing, which components should we load balance, and which ports should be used?
That configuration is achievable, but the Analytics servers are best run on separate servers, as they use a lot of resources.
Yes, you can.
Yes, you need a port offset. If you're on Linux, you can use the netstat -pln command and filter by the server's PID (see the sketch after this list of answers).
Every server needs a local database; the other databases are shared, as described in https://docs.wso2.com/display/CLUSTER44x/Clustering+API+Manager+2.0.0
Having copies is one way of doing it. Another way is letting a single server act as multiple components; for example, you can run the Publisher and Store components together. You can see the recommended patterns in https://docs.wso2.com/display/AM210/Deployment+Patterns.
Except for the Traffic Manager, you can load balance every other component. For the Traffic Manager, use fail-over instead. Here are the ports you need to load balance:
Servlet port - 9443 (https) / 9763 (http) (for the admin console and admin services)
NIO port - 8243 (https) / 8280 (http) (for API calls at the gateway)
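As a rough sketch of the port-offset check (assuming a Linux host and an API Manager install under /opt/wso2am, which is a placeholder path; an offset of 1 shifts every default port up by one):

    # Start the server with a port offset of 1 (9443 -> 9444, 9763 -> 9764, ...)
    sh /opt/wso2am/bin/wso2server.sh -DportOffset=1

    # From another shell, confirm the offset ports are the ones listening:
    netstat -plnt | grep -E ':(9444|9764) '

The -DportOffset system property is an alternative to editing the Offset element in repository/conf/carbon.xml.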

Kubernetes statefulsets in a GCE multiple zone deployment

I'm working on a project to run a Kubernetes cluster on GCE. My goal is to run a cluster containing a WordPress site in multiple zones. I've been reading a lot of documentation, but I can't seem to find anything that is direct and to the point on persistent volumes and statefulsets in a multiple zone scenario. Is this not a supported configuration? I can get the cluster up and the statefulsets deployed, but I'm not getting the state replicated throughout the cluster. Any suggestions?
Thanks,
Darryl
Reading the docs, I see that the recommended configuration would be to create a MySQL cluster with replication: https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/. This way, the data is properly replicated between the instances of your cluster (if you are in a multi-zone deployment, you may have to create an external endpoint).
Regarding the WordPress data, my advice would be to go for an immutable deployment: https://engineering.bitnami.com/articles/why-your-next-web-service-should-be-immutable.html. This way, if you need to add a plugin or perform upgrades, you would create a new container image and re-deploy it. Regarding the media library assets and immutability, I think the best option would be to use an external storage service like S3: https://wordpress.org/plugins/amazon-s3-and-cloudfront/
So, to answer the original question: I think that statefulset synchronization is not available in K8s (at the moment). Maybe using a volume provider that allows the ReadWriteMany access mode could fit your needs (https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes), though I am quite unsure about its stability.
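As a minimal sketch of what such a shared claim would look like (assuming a storage backend that actually supports ReadWriteMany, e.g. an NFS provisioner; the claim name and size are placeholders):

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: wordpress-shared
    spec:
      accessModes:
        - ReadWriteMany   # every pod, in any zone, mounts the same volume read-write
      resources:
        requests:
          storage: 10Gi
    EOF

Note that GCE persistent disks only support ReadWriteOnce (and ReadOnlyMany), so this claim will not bind without a different backend such as NFS.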

Machine's uptime in OpenStack

I would like to know (and retrieve via REST API) the uptime of individual VMs running in OpenStack.
I was quite surprised that the OpenStack web UI has a column called "Uptime", but it actually shows the time since the VM was created. If I stop the VM, the UI shows Status=Shutoff, Power State=Shutdown, but the Uptime is still being incremented...
Is there a "real" uptime (I mean for a machine that is UP)?
Can I retrieve it somehow via the OpenStack's REST API?
I saw the comment at "How can I get VM instance running time in openstack via python API?", but the page with the extension mentioned there no longer exists, and it looks to me like that extension will not be available in every OpenStack environment. I would like a standard way to retrieve the uptime.
Thanks.
(Version Havana)
I haven't seen any documentation saying this is the reason, but the nova scheduler doesn't differentiate between a running and a powered-off instance: a stopped instance still counts against capacity, so your cloud can't be over-allocated or leave an instance in a position where it could not be powered back on. I would like to see a metric of actual system runtime as well, but at the moment the only way to gather that would be through ceilometer or via Rackspace's StackTach.
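For what the stock compute API does expose, here is a rough sketch (assuming a Havana-era nova endpoint with the os-server-usage and extended-status extensions loaded; controller, $TOKEN, $TENANT_ID, and $SERVER_ID are placeholders):

    # Fetch one server and pull out its power state and last-launch timestamp.
    curl -s -H "X-Auth-Token: $TOKEN" \
      http://controller:8774/v2/$TENANT_ID/servers/$SERVER_ID \
      | python -m json.tool \
      | grep -E '"OS-EXT-STS:power_state"|"OS-SRV-USG:launched_at"'

OS-SRV-USG:launched_at is still only a launch timestamp, not true guest uptime, so treat this as an approximation at best.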

devstack multi node installation

I have 3 nodes which I am using for a multi-node setup. I am thinking of following the structure below:
Controller: keystone, horizon, g-reg, g-api, n-api, n-crt, n-sch, n-cond, n-cauth, n-obj, n-novnc, n-xvnc, c-api, c-sch (this node will have mysql and rabbitmq as well)
Network: q-svc, q-agt, q-dhcp, q-l3, q-meta, quantum
Compute: n-cpu, c-vol
I have a few questions:
1. On the compute node, do I need to keep n-api? Also, what else is needed apart from n-api and c-vol? Is q-agt needed on compute?
2. Will I need c-api along with c-vol? Does the compute node need RabbitMQ installed?
Q1)
You generally don't want nova-api on the compute nodes; it's better kept on the controller.
nova-api makes use of hard-coded system credentials in its paste file, and you don't want that paste file exposed on any node that a user might compromise via a hypervisor escape.
nova-compute and nova-volume are probably all you need. They do communicate with the scheduler over RabbitMQ, so make sure that's working =P
Q2)
You don't NEED cinder to run an OpenStack cloud, though I see no reason not to include it.
I don't know what impact disabling cinder has on the devstack stack.sh script; I've never tried it.
As for RabbitMQ, see the answer above.
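As a minimal sketch of the compute node's side of this (a devstack localrc fragment; the controller address 192.0.2.1 is a placeholder, and the exact service list depends on whether you keep quantum and cinder):

    # localrc on the compute node: run only the compute-side services
    ENABLED_SERVICES=n-cpu,c-vol,q-agt

    # Point everything else at the controller
    SERVICE_HOST=192.0.2.1
    MYSQL_HOST=$SERVICE_HOST
    RABBIT_HOST=$SERVICE_HOST
    GLANCE_HOSTPORT=$SERVICE_HOST:9292

With this layout there is no n-api, c-api, or RabbitMQ on the compute node at all; the agents just talk to the controller's RabbitMQ.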

Should nova-api run on different compute nodes?

I am dealing with OpenStack (Folsom) and I want to deploy OpenStack across multiple compute nodes. Is it necessary to run the nova-api service on every node?
It seems that every compute node needs a nova-api service for my requirement, but I think that does not make sense.
In my understanding, only one nova-api service is required in the whole cloud system:
Request -> nova-api -> nova-scheduler to determine which node to use.
Yes, I think so. According to the official OpenStack guide, "Installing Additional Compute Nodes", only the dependencies and the nova-* components, or just the nova-compute package, should be installed on the additional compute node.
In general, you only need one nova-api service running.
However, if your networking is configured for multi-host, then you will need to run a metadata service on each compute node. In this scenario, you need to run nova-api-metadata service on each compute node.
It is not necessary to run the nova-api service on every compute node. But if you are using any of the available images with a cloud-init script that looks for metadata from the nova API, then you need to install it on every compute node.
If you can build your own VM image without cloud-init scripts, then it will not be required.
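For the multi-host case mentioned above, a rough sketch of the per-compute-node setup (assuming a Folsom-era nova-network deployment on Ubuntu; the package name and flag names are assumptions to verify against your release):

    # On each compute node, install just the metadata API:
    apt-get install nova-api-metadata

    # and enable multi-host networking in /etc/nova/nova.conf:
    cat >> /etc/nova/nova.conf <<'EOF'
    multi_host=True
    enabled_apis=metadata
    EOF

This keeps the full nova-api (and its credentials) on the controller while still letting each compute node answer its own guests' metadata requests.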
