How to install Ceilometer and Gnocchi on different servers?

I have an OpenStack environment with three controller servers in HA with Pacemaker.
I'm installing Ceilometer on one controller, but because of Gnocchi's high load I want to run Gnocchi on a separate server.
First, is there a way to install Gnocchi on an independent server, and how do I connect it to Ceilometer?
Second, what is the recommended way to deploy Gnocchi alongside three controller servers that are configured for HA with Pacemaker?
Any suggestion would be appreciated.
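For context, Ceilometer does not reach Gnocchi by a hard-coded hostname; it looks the Gnocchi API up in the Keystone service catalog, so running Gnocchi on its own server largely comes down to registering that server's endpoint. A minimal sketch, assuming an older (Pike-era) Ceilometer where the Gnocchi dispatcher options still exist; the host name and archive policy are placeholders:

    # /etc/ceilometer/ceilometer.conf on each controller
    [DEFAULT]
    meter_dispatchers = gnocchi
    event_dispatchers = gnocchi

    [dispatcher_gnocchi]
    archive_policy = low

    # Register the standalone Gnocchi server in Keystone so Ceilometer can find it:
    #   openstack service create --name gnocchi --description "Metric Service" metric
    #   openstack endpoint create --region RegionOne metric public http://gnocchi-host:8041

Newer releases drop the dispatcher options and publish through the pipeline instead (a gnocchi:// publisher in pipeline.yaml), but the Keystone endpoint registration works the same either way.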

Related

JFrog Artifactory high availability and maintenance

We are using a self-hosted JFrog Artifactory instance with a license for our project, and many customers use it for their package and binary management.
Since it is hosted in our private self-hosted environment on a Linux platform, we regularly need a maintenance window, at least twice a month, to apply patches to our servers. So we are considering high availability for our currently running JFrog instance, which should eliminate this downtime during maintenance. We are also looking for guidance on some management scenarios, listed below, and couldn't find anything helpful in the docs.
How can the JFrog server instance's service status be monitored, with an automatic restart if the service is in a failed state after a server reboot?
Is there any way to set and display a notification message to customers about scheduled maintenance?
How can we enable high availability for JFrog Artifactory and Xray?
Here are some workarounds you can follow to mitigate the situation.
To monitor the health of the JFrog services you can use the below REST API:

    curl -u <user>:<password> -XGET \
      'http://<Art_IP>:8046/router/api/v1/topology/health' \
      -H 'Content-Type: application/json'

If you are looking for a more lightweight check, you can use:

    curl -u <user>:<password> -XGET \
      'http://<Art_IP>:8081/artifactory/api/system/ping'
By default, the systemd service scripts check for the availability of the services and restart them when they see a failure. The same applies to a system restart as well.
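If the bundled unit does not restart on failure in a given install, a systemd drop-in override is one way to force it. This is a generic sketch; the unit name artifactory.service is an assumption and may differ per installation:

    # /etc/systemd/system/artifactory.service.d/override.conf
    # (hypothetical unit name; check `systemctl list-units | grep -i artifactory`)
    [Service]
    Restart=on-failure
    RestartSec=10s

Run systemctl daemon-reload after adding the file so the override takes effect.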
There is no option for a pop-up message; however, you can set a custom message as a banner in Artifactory. Navigate to Administration -> General settings -> Customer message (see the JFrog wiki for details).
When you add another node to the mix, Artifactory/Xray becomes a cluster that balances the load (or acts as a failover); however, it is the responsibility of the load balancer/reverse proxy to manage the traffic between the cluster nodes according to the availability of the backend nodes.
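To make that last point concrete, here is a hypothetical nginx fragment for two Artifactory nodes; the hostnames and ports are placeholders, and nginx itself is just one choice of reverse proxy:

    # nginx.conf fragment -- route traffic to whichever Artifactory node is up
    upstream artifactory_cluster {
        server art-node1.example.com:8082 max_fails=3 fail_timeout=30s;
        server art-node2.example.com:8082 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://artifactory_cluster;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }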

OpenStack and Open Source MANO: Instantiating a Kubernetes cluster on the OpenStack server for OSM

I am trying to deploy a 5G network using open-source software. I ran into an issue that seems to have no documentation anywhere. When I try to instantiate a Network Service, it says that it cannot find a K8s cluster that meets the following requirements: {}.
This is how the error actually looks:

    No k8scluster with requirements='{}' at vim_account=34510160-24e6-4c6a-93ec-787d00a2518a found for member_vnf_index=oai_cn5g_amf

The VIM account is simply the OpenStack server. Does anyone know how to link a Kubernetes cluster in OpenStack?
Thanks,
Taylor
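For what it's worth, OSM only considers K8s clusters that have been registered against the VIM account, so an unregistered cluster produces exactly this empty-requirements error. A hedged sketch of that registration with the OSM client, where the cluster name, kubeconfig path, version, and network mapping are all placeholders:

    # Register an existing Kubernetes cluster with OSM and tie it to the VIM
    osm k8scluster-add my-cluster \
        --creds ~/.kube/config \
        --version "1.23" \
        --vim openstack-vim \
        --k8s-nets '{"k8s_net1": "public"}' \
        --description "K8s cluster running on OpenStack VMs"

    # Verify it shows up before instantiating the Network Service
    osm k8scluster-list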

How to send Airflow metrics to Datadog

We have a requirement to send Airflow metrics to Datadog. I tried to follow the steps mentioned here:
https://docs.datadoghq.com/integrations/airflow/?tab=host
Accordingly, I included StatsD in the Airflow installation and updated the Airflow configuration file (steps 1 and 2).
After this point, I am not able to figure out how to send my metrics to Datadog. Do I follow the host configuration or the containerized configuration? For the host configuration, we have to update the datadog.yaml file, which is not in our repo; and for the containerized version, they have only specified how to do it for Kubernetes, but we don't use Kubernetes.
We are running Airflow as a Docker build on Amazon ECS. We also have a Datadog Agent running in parallel in the same task (not part of our repo). However, I am not able to figure out what configuration I need so that the StatsD metrics reach Datadog. Please let me know if anyone has an answer.
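For context, the usual wiring on ECS looks roughly like the sketch below: Airflow emits StatsD packets to the agent sidecar, and the agent must be told to listen for them. This is an assumption-laden sketch (it presumes awsvpc networking, where containers in one task share localhost; the config section is [metrics] in Airflow 2.x, [scheduler] in 1.10):

    # airflow.cfg -- point Airflow's StatsD client at the agent sidecar
    [metrics]
    statsd_on = True
    statsd_host = 127.0.0.1
    statsd_port = 8125
    statsd_prefix = airflow

and, on the Datadog Agent container in the same task definition, DogStatsD has to be enabled, e.g. via environment variables:

    DD_USE_DOGSTATSD=true
    DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true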

Mounting Google Cloud network locally

We have a Google Cloud project with several VM instances and also a Kubernetes cluster.
I am able to easily access the Kubernetes services with kubefwd, and I can ping them and also curl them. The problem is that kubefwd works only for Kubernetes, not for the other VM instances.
Is there a way to mount the network locally, so I could ping and curl any instance without it having a public IP, and with the same DNS as inside the cluster?
I would highly recommend rolling out a VPN server like OpenVPN. You can also run this inside of the Kubernetes cluster.
I have a make-install-ready repo for you to check out at https://github.com/mateothegreat/k8-byexamples-openvpn.
Basically, OpenVPN is running inside of a container (inside of a pod), and you can set the routes that you want the client(s) to be able to see, as sketched below.
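As a hypothetical example of such routes (the CIDRs and DNS address are placeholders for your VPC subnet, cluster service range, and kube-dns IP, not values taken from the repo):

    # server.conf -- push VPC and cluster routes plus cluster DNS to clients
    push "route 10.128.0.0 255.255.240.0"    # GCE VM subnet
    push "route 10.96.0.0 255.240.0.0"       # cluster service CIDR
    push "dhcp-option DNS 10.96.0.10"        # kube-dns, so cluster names resolve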
I would not rely on kubefwd, as it isn't production grade and will give you issues with persistent connections.
Hope this helps you out. If you still have questions/concerns, please reach out.

Running Kubernetes on vCenter

So Kubernetes has a pretty novel network model, that I believe is based on what it perceives to be a shortcoming with default Docker networking. While I'm still struggling to understand: (1) what it perceives the actual shortcoming(s) to be, and (2) what Kubernetes' general solution is, I'm now reaching a point where I'd like to just implement the solution and perhaps that will clue me in a little better.
Whereas the rest of the Kubernetes documentation is very mature and well-written, the instructions for configuring the network are sparse, largely incoherent, and span many disparate articles, instead of being located in one particular place.
I'm hoping someone who has set up a Kubernetes cluster before (from scratch) can help walk me through the basic procedures. I'm not interested in running on GCE or AWS, and for now I'm not interested in using any kind of overlay network like flannel.
My basic understanding is:

1. Carve out a /16 subnet for all your pods. This will limit you to some 65K pods, which should be sufficient for most normal applications. All IPs in this subnet must be "public" and not inside of some traditionally-private (classful) range.
2. Create a cbr0 bridge somewhere and make sure it is persistent (but on what machine?).
3. Remove/disable the MASQUERADE rule installed by Docker.
4. Somehow configure iptables routes (again, where?) so that each pod spun up by Kubernetes receives one of those public IPs (a sketch of steps 2-4 follows this list).
5. Some other setup is required to make use of load-balanced Services and dynamic DNS.
6. Provision 5 VMs: 1 master, 4 minions.
7. Install/configure Docker on all 5 VMs.
8. Install/configure kubectl, controller-manager, apiserver and etcd on the master, and run them as services/daemons.
9. Install/configure kubelet and kube-proxy on each minion and run them as services/daemons.
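For concreteness, a sketch of what steps 2-4 might look like on a single minion; the per-node /24 slice of the /16, the 10.244.x addressing, and the Debian-style DOCKER_OPTS location are all assumptions:

    # Create the cbr0 bridge holding this node's slice of the pod subnet
    brctl addbr cbr0
    ip addr add 10.244.1.1/24 dev cbr0
    ip link set dev cbr0 up

    # Point Docker at cbr0 and stop it from managing NAT, e.g. in
    # /etc/default/docker on Debian-like systems:
    #   DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false"

    # Remove the MASQUERADE rule Docker may already have installed
    iptables -t nat -D POSTROUTING -s 10.244.1.0/24 ! -o cbr0 -j MASQUERADE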
This is the best I could collect from 2 full days of research, and these steps are likely wrong (or misdirected), out of order, and utterly incomplete.
I have unbridled access to create VMs in an on-premise vCenter cluster. If changes need to be made to VLAN/Switches/etc. I can get infrastructure involved.
How many VMs should I set up for Kubernetes (for a small-to-medium sized cluster), and why? What exact corrections do I need to make to my vague instructions above, so as to get networking totally configured?
I'm good with installing/configuring all the binaries. Just totally choking on the network side of the setup.
For a general introduction to Kubernetes networking, I found http://www.slideshare.net/enakai/architecture-overview-kubernetes-with-red-hat-enterprise-linux-71 pretty helpful.
On your items (1) and (2): IMHO they are nicely described in https://github.com/kubernetes/kubernetes/blob/master/docs/admin/networking.md#docker-model.
From my experience, the problem with the Docker NAT type of approach is this: sometimes you need to configure all the endpoints of all nodes into your software (172.168.10.1:8080, 172.168.10.2:8080, etc.). In Kubernetes you can simply configure the pods' IPs into each other's pods; Docker complicates this with NAT indirection.
See also Setting up the network for Kubernetes for a nice answer.
Comments on your other points:

1. "All IPs in this subnet must be 'public' and not inside of some traditionally-private (classful) range."

The "internal network" of Kubernetes normally uses private IPs; see also the slides above, which use 10.x.x.x as an example. I guess the confusion comes from some Kubernetes texts that refer to "public" as "visible outside of the node", but they do not mean "Internet public IP address range".
For anyone who is interested in doing the same, here is my current plan.
I found the kube-up.sh script which installs a production-ish quality Kubernetes cluster on your AWS account. Essentially it creates 1 Kubernetes master EC2 instance and 4 minion instances.
On the master it installs etcd, apiserver, controller manager, and the scheduler. On the minions it installs kubelet and kube-proxy. It also creates an auto-scaling group for the minions (nice), and creates a whole slew of security- and networking-centric things on AWS for you. If you run the script and it fails creating the AWS S3 bucket, create a bucket of the same exact name manually and then re-run the script.
When the script is finished you will have Kubernetes up and running and ready for near-production usage (I keep saying "near" and "production-ish" because I'm too new to Kubernetes to know what actually constitutes a real deal productionalized cluster). You will need the AWS CLI installed and configured with a user that has full admin access to your AWS account (it goes ahead and creates IAM roles, etc.).
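For anyone following along, invoking the script looked roughly like this at the time (these environment variables were the script's historical knobs and may have changed since):

    # From a checkout of the Kubernetes release tree
    export KUBERNETES_PROVIDER=aws   # tell kube-up.sh which cloud to target
    export NUM_MINIONS=4             # minion count, matching the description above
    cluster/kube-up.sh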
My game plan will be to:
Get comfortable working with Kubernetes on AWS
Keep hounding the Kubernetes team on Slack to help me understand how Kubernetes works under the hood
Reverse engineer the kube-up.sh script so that I can get Kubernetes running on premise (vCenter)
Blog about this process
Update this answer with a link to said blog.
Give me some time and I'll follow through.
