Does someone have a simple autoscaling example (e.g. for two CirrOS instances with a CPU/memory trigger) for OpenStack Ocata? I'm looking for a simple Heat YAML file that demonstrates this.
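For reference, an untested sketch of what such a template might look like, based on the classic Heat autoscaling pattern (an OS::Heat::AutoScalingGroup plus scaling policies driven by alarms; in Ocata the alarm evaluation is handled by Aodh behind the OS::Ceilometer::Alarm resource). The image, flavor, and network names are placeholders you would need to adapt:

```yaml
heat_template_version: 2016-10-14

description: Minimal CPU-based autoscaling sketch (placeholders, untested)

resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: OS::Nova::Server
        properties:
          image: cirros            # placeholder image name
          flavor: m1.tiny          # placeholder flavor
          networks:
            - network: private     # placeholder network
          metadata: {"metering.server_group": {get_param: "OS::stack_id"}}

  scale_out_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: asg}
      cooldown: 60
      scaling_adjustment: 1

  scale_in_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: asg}
      cooldown: 60
      scaling_adjustment: -1

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 80
      comparison_operator: gt
      alarm_actions:
        - {get_attr: [scale_out_policy, alarm_url]}
      matching_metadata: {'metadata.user_metadata.server_group': {get_param: "OS::stack_id"}}

  cpu_alarm_low:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 10
      comparison_operator: lt
      alarm_actions:
        - {get_attr: [scale_in_policy, alarm_url]}
      matching_metadata: {'metadata.user_metadata.server_group': {get_param: "OS::stack_id"}}
```

The metadata/matching_metadata pair is how the alarm is scoped to only the instances in this stack; whether cpu_util samples actually arrive depends on your Ceilometer polling configuration.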
I want to set up HA for Airflow (2.3.1) on CentOS 7, with RabbitMQ as the messaging queue and Postgres as the metadata DB. Does anybody know how to set this up?
Your question is very broad, because high availability has multiple levels and definitions:
Airflow availability: multiple schedulers, multiple workers, autoscaling to avoid pressure, high storage volume, ...
The databases: an HA cluster for RabbitMQ and an HA cluster for Postgres
Even if you have the first two levels, how many nodes do you want to use? You cannot put everything on the same node; you need to run one service replica per node.
Suppose you did that, and you now have 3 different nodes running in the same data center. What if there is a fire in the data center? Then you need to use multiple nodes in different regions.
And after doing all of the above, is there still a risk of network problems? Of course there is.
If you just want to run Airflow in HA mode, you have multiple options to do that on any OS:
Docker Compose: usually used for development, but you can use it in production too. You can create multiple scheduler instances with multiple workers, which helps improve the availability of your service.
Docker Swarm: similar to Docker Compose with additional features (scaling, multiple nodes, ...). You will not find many resources on installing Airflow this way, but you can reuse the Compose files with a few changes.
Kubernetes: the best solution. K8s helps you ensure the availability of your services, and it is easy to install Airflow with Helm.
Running the different services directly on your hosts: not recommended, because of the manual work involved and the complexity of applying HA yourself.
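As an illustration of the Docker Compose option above, here is a heavily trimmed sketch (untested; credentials, image tag, and service names are illustrative, and a real deployment also needs healthchecks, volumes, an init job, and secrets). It leans on the fact that Airflow >= 2.0 supports running several schedulers concurrently against the same metadata DB:

```yaml
version: "3.8"

x-airflow-common: &airflow-common
  image: apache/airflow:2.3.1
  environment:
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__BROKER_URL: amqp://airflow:airflow@rabbitmq:5672/
    AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow

services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: airflow
      POSTGRES_PASSWORD: airflow
      POSTGRES_DB: airflow

  rabbitmq:
    image: rabbitmq:3-management

  # Two scheduler replicas for HA at the scheduler level
  scheduler-1:
    <<: *airflow-common
    command: scheduler
  scheduler-2:
    <<: *airflow-common
    command: scheduler

  # Multiple Celery workers pulling from RabbitMQ
  worker-1:
    <<: *airflow-common
    command: celery worker
  worker-2:
    <<: *airflow-common
    command: celery worker

  webserver:
    <<: *airflow-common
    command: webserver
    ports:
      - "8080:8080"
```

Note that this only addresses the Airflow layer; the Postgres and RabbitMQ services here are single instances, so the database level discussed above would still be a single point of failure.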
This website shows that the nova tool could create an instance with multiple ephemeral disks, but how can I achieve that through the openstack command or the openstacksdk?
I couldn't find any clue in openstack flavor create -h: it only supports a single Ephemeral Disk GB option, and I can't figure out how to add multiple ephemeral disks.
Unless you already found the answer: this was added in python-openstackclient 5.5.0. References: 1, 2, 3
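If my reading of that release is right, the CLI gained a repeatable --ephemeral option on server create, one occurrence per ephemeral disk. A hedged sketch (requires python-openstackclient >= 5.5.0, a flavor with enough ephemeral space, and a live cloud; all names below are placeholders):

```shell
# Each --ephemeral occurrence requests one ephemeral disk;
# the total size must fit within the flavor's ephemeral allowance.
openstack server create \
  --image cirros \
  --flavor m1.ephemeral \
  --network private \
  --ephemeral size=1 \
  --ephemeral size=2,format=ext4 \
  my-server
```

Check `openstack server create -h` in your installed client version to confirm the exact option syntax before relying on this.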
OK, so this might be a basic question, but I'm new to Kubernetes and tried to install WordPress on it using Helm and the stable/wordpress chart, but I keep getting the error "pod has unbound immediate PersistentVolumeClaims (repeated 2 times)". Is this because of the requirement listed here, https://github.com/helm/charts/tree/master/stable/wordpress, "PV provisioner support in the underlying infrastructure"? How do I enable this in my infrastructure? I have set up my cluster across three nodes on DigitalOcean, and I've tried searching for tutorials on this with no luck so far. Please let me know what I'm missing, thanks.
PersistentVolume types are implemented as plugins. Kubernetes currently supports the following plugins:
GCEPersistentDisk
AWSElasticBlockStore
AzureFile
AzureDisk
FC (Fibre Channel)
Flexvolume
Flocker
NFS
iSCSI
RBD (Ceph Block Device)
CephFS
Cinder (OpenStack block storage)
Glusterfs
VsphereVolume
Quobyte Volumes
HostPath (Single node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)
Portworx Volumes
ScaleIO Volumes
StorageOS
You can enable support for PVs or dynamic PVs using those plugins.
Detailed reference
On DigitalOcean you can use block storage for volumes.
details
Kubernetes can be set up for Dynamic Volume Provisioning. This would allow the chart to run to completion with the default configuration, as the PVs would be provisioned on demand.
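To make this concrete, here is a hedged sketch of a claim that triggers dynamic provisioning on a DigitalOcean cluster. It assumes the DigitalOcean CSI driver is installed and registers its usual storage class name, "do-block-storage"; verify with `kubectl get storageclass` before using it:

```yaml
# PVC that should be dynamically bound to a DO block-storage volume.
# The storageClassName is the name the DO CSI driver typically
# registers -- confirm it on your cluster first.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 10Gi
```

If the chart exposes a storage-class value (the stable/wordpress chart has persistence settings, though the exact value name may differ by chart version), pointing it at this class should let the chart's own PVCs bind the same way.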
I am curious about how Horizon interacts with Neutron in OpenStack.
For example, when I upload a YAML file in Orchestration, the resulting topology is shown in Network Topology.
I am customizing a map that shows the details of each node, so I have to understand how Neutron and Horizon interact. I do not know how the Horizon level gets the information from the Neutron level (specifically, how I can build the topology from the data in Neutron).
Could you please explain why and how launching the stack leads to building the network topology? Thanks.
Horizon is a Django app which uses Python clients to interact with various OpenStack components (keystone, nova, neutron, etc).
Heat is OpenStack's orchestration engine which creates OpenStack resources from a template.
You will have to go through the Horizon code to understand how the Horizon-Neutron interaction works.
Horizon network topology (GitHub)
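Conceptually, Horizon's topology view merges the lists that the Neutron API returns (networks, subnets, ports, routers) into one graph. A minimal self-contained sketch of that idea, using hard-coded sample data in place of real Neutron API responses (in Horizon the data would come via the Python Neutron client; the function and field selection here are illustrative):

```python
# Sketch: grouping neutron-style port records under the network they
# attach to -- the core of what a topology view renders.

def build_topology(networks, ports):
    """Return {network_id: {"name": ..., "attached": [...]}}."""
    topo = {net["id"]: {"name": net["name"], "attached": []} for net in networks}
    for port in ports:
        net = topo.get(port["network_id"])
        if net is not None:
            # device_owner tells us whether the port belongs to an
            # instance (compute:nova) or a router interface, etc.
            net["attached"].append(
                {"device": port["device_owner"], "ip": port["fixed_ip"]}
            )
    return topo

# Sample data shaped like (simplified) Neutron API results
networks = [{"id": "net-1", "name": "private"}]
ports = [
    {"network_id": "net-1", "device_owner": "compute:nova",
     "fixed_ip": "10.0.0.5"},
    {"network_id": "net-1", "device_owner": "network:router_interface",
     "fixed_ip": "10.0.0.1"},
]

topology = build_topology(networks, ports)
print(topology["net-1"]["name"])           # private
print(len(topology["net-1"]["attached"]))  # 2
```

Real Neutron records carry many more fields (fixed_ips is a list of dicts, for instance), but the grouping step is the same.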
I'm working on a project to run a Kubernetes cluster on GCE. My goal is to run a cluster containing a WordPress site in multiple zones. I've been reading a lot of documentation, but I can't seem to find anything that is direct and to the point on persistent volumes and statefulsets in a multiple zone scenario. Is this not a supported configuration? I can get the cluster up and the statefulsets deployed, but I'm not getting the state replicated throughout the cluster. Any suggestions?
Thanks,
Darryl
Reading the docs, I see that the recommended configuration would be to create a MySQL cluster with replication: https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/. This way, you would have the data properly replicated between the instances of your cluster (if you are in a multi-zone deployment you may have to create an external endpoint).
Regarding the WordPress data, my advice would be to go for an immutable deployment: https://engineering.bitnami.com/articles/why-your-next-web-service-should-be-immutable.html. This way, if you need to add a plugin or perform upgrades, you would create a new container image and redeploy it. Regarding the media library assets and immutability, I think the best option would be to use an external storage service like S3: https://wordpress.org/plugins/amazon-s3-and-cloudfront/
So, to answer the original question: I think that StatefulSet synchronization is not available in K8s at the moment. Maybe using a volume provisioner that allows the ReadWriteMany access mode could fit your needs (https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes), though I am quite unsure about its stability.
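For completeness, a hedged sketch of the ReadWriteMany route, using NFS (one of the volume types that supports that access mode) so that several WordPress pods across zones could mount the same wp-content. The server address and export path are placeholders, and this assumes you run or rent an NFS server reachable from all zones:

```yaml
# Pre-created NFS-backed PV with ReadWriteMany, plus a claim bound
# to it. storageClassName: "" disables dynamic provisioning so the
# claim binds to this specific PV.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-shared
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10        # placeholder NFS server
    path: /exports/wordpress # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-shared
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
```

Whether this performs acceptably for a busy WordPress site is a separate question; NFS adds latency, which is part of why the immutable-image plus external-media approach above is often preferred.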