OpenStack Monitoring

I deployed OpenStack using the openstack-ansible community playbooks and integrated it with Nagios (a separate Nagios server) for service monitoring. All my services run inside containers with private IPs, so I'm stuck on how to monitor the services running inside the containers.

You can use Ceilometer to monitor services if your goal is to create alarms and auto-scale your stacks.
If you want a dashboard covering some of the applications, that is user-side monitoring; in that case there are many tools, such as Datadog.
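A common way to cover services that only listen on private container IPs is to let Nagios relay its checks through the physical host, which can reach the containers. A minimal sketch of such a service definition, assuming NRPE is installed on the host; the names (`compute01`, `check_rabbitmq`) are illustrative, not taken from the deployment above:

```
define service {
    use                 generic-service       ; standard Nagios service template
    host_name           compute01             ; the physical host (reachable by the Nagios server)
    service_description rabbitmq-in-container
    ; check_nrpe asks the NRPE daemon on compute01 to run its locally
    ; defined check_rabbitmq command, which can target the container's
    ; private IP from inside the host's network.
    check_command       check_nrpe!check_rabbitmq
}
```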

Related

Airflow stored in the cloud?

I would like to know if I can make the Airflow UI accessible, like a web page, to everyone who has a user account. For that I would have to connect it to a server, right? Which server do you recommend? I was looking around and some people were using Amazon EC2.
If your goal is just making the Airflow UI publicly visible, there are many solutions; you could even do it from your local computer (though that is not a good idea).
Before choosing the cloud provider and the service, you need to think about the requirements:
Does your team have the skills and the time to manage the server? If not, you need a managed service like GCP Cloud Composer or AWS MWAA.
Which executor do you want to use? KubernetesExecutor? CeleryExecutor on K8S? If so, you need a K8S service and not just a VM.
Do you have a heavy load? Do you need an HA mode? What about scalability?
After defining the requirements, you can choose between the options:
A small server with LocalExecutor or CeleryExecutor on a VM -> AWS EC2 with a static IP and Route 53 for the DNS name
A scalable server in HA mode on a K8S cluster -> AWS EKS or Google GKE
A managed service, focusing only on the development part -> Google Cloud Composer
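For the first option, the setup on an EC2 instance can be sketched roughly as follows. This is a minimal illustration assuming Airflow 2.x installed with pip, not a production recipe (no HTTPS, no reverse proxy), and the credentials are placeholders:

```shell
# On the EC2 instance (the security group must allow inbound traffic on 8080):
pip install apache-airflow

airflow db init                      # initialise the metadata database
airflow users create \
  --username admin --password admin \
  --firstname Ad --lastname Min \
  --role Admin --email admin@example.com

# Start the UI; reachable at http://<static-ip>:8080,
# or via the DNS name configured in Route 53.
airflow webserver --port 8080
```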

Communication between asp.net website and Kubernetes/docker microservices in Azure

We're planning an architecture where ASP.NET websites must communicate with microservices.
The plan is to host this in Azure, with the website being an Azure ASP.NET website and the microservices in Kubernetes/Docker containers.
I was thinking kubenet was the way to go, so that a number of microservice instances could be spawned on demand without the website needing to know about it, but it seems like VM-to-Kubernetes connectivity is not supported unless initiated by the pod, or am I misunderstanding something?
https://learn.microsoft.com/en-us/azure/aks/concepts-network#azure-virtual-networks
You can place the VM in the same virtual network as the Kubernetes cluster, and give the Kubernetes services private IPs using the annotation service.beta.kubernetes.io/azure-load-balancer-internal: "true". The VM and the services can then communicate with each other.
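The annotation goes on the Service object. A minimal sketch, where the service name, ports and selector labels are placeholder assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders-microservice        # placeholder name
  annotations:
    # Ask AKS for an internal load balancer with a private IP from the
    # cluster's virtual network instead of a public one.
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: orders                    # placeholder label
```

Because the VM sits in the same virtual network, it can call the private IP that Azure assigns to this load balancer.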

Create Kubernetes Pod Network Map

I am looking to map out the various network connections between pods in a namespace to understand which pod is talking to which other pods. Is there a way to query etcd to get this information?
There are many tools to visualize k8s topology.
In order of GitHub stars:
Cockpit (Cockpit Project): Cockpit makes GNU/Linux discoverable. See your server in a web browser and perform system tasks with a mouse. It's easy to start containers, administer storage, configure networks, and inspect logs.
Weave Scope (GitHub: weaveworks/scope): a troubleshooting and monitoring tool for Docker and Kubernetes clusters. It can automatically generate application and infrastructure topologies, which can help you identify application performance bottlenecks easily. You can deploy Weave Scope as a standalone application on your local server/laptop, or choose the Weave Scope Software as a Service (SaaS) solution on Weave Cloud. With Weave Scope, you can easily group, filter, or search containers using names, labels, and/or resource consumption.
SPEKT8 (GitHub: spekt8/spekt8): visualize your Kubernetes cluster in real time. SPEKT8 automatically builds logical topologies of your application and infrastructure, which enables your SRE and Ops teams to intuitively understand, monitor, and control your containerized, microservices-based applications. Simply deploy the containerized application directly into your Kubernetes cluster.
KubeView (GitHub: benc-uk/kubeview): displays what is happening inside a Kubernetes cluster. It maps out the API objects and how they are interconnected, with data fetched in real time from the Kubernetes API. The status of some objects (Pods, ReplicaSets, Deployments) is colour-coded red/green to represent their status and health.
Kubernetes Topology Graph: provides a simple force-directed topology graph for Kubernetes items.
You can try Weave Scope to make a graphical map of your Kubernetes cluster.
It generates a map of your processes, containers and hosts in real time. You can also get logs from containers and run some diagnostic commands via the web UI.
To install it on Kubernetes you can run:
kubectl apply -f "https://cloud.weave.works/k8s/scope.yaml?k8s-version=$(kubectl version | base64 | tr -d '\n')"
After launch you don't need to configure anything: Scope will watch your pods and network and build the map for you.
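Once the Scope pods are running, the UI can be reached by port-forwarding the app service; the `weave` namespace and `weave-scope-app` service name below are assumed from the default install manifest:

```shell
kubectl port-forward -n weave svc/weave-scope-app 4040:80
# then open http://localhost:4040 in a browser
```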

Consul in a Docker-based Microservices architecture

We are working on switching over from a monolithic application to microservices.
Each microservice is going to be running on Docker through Amazon ECS.
We've decided to use Consul for service discovery. We have 3 servers running on EC2 instances inside the VPC.
My question is as follows:
How/where do I start the Consul agent for each microservice? Do I run another container on each instance (through Docker Compose) with Consul inside? Or do I somehow run a Consul agent inside the already existing Docker container for each microservice?
Attached is a rough representation of my situation.
Should the Consul client (in yellow) be in its own Docker container or inside the Node.js container?
Consul is another service, and I wouldn't deploy it inside the container of my microservice. In a large-scale scenario, I'd deploy several Consul containers: some would run the agent in Server mode (think of them as masters), and some would run it in Client mode (think of them as slaves).
I wouldn't deploy the agents running in client mode as part of my application's containers, because:
Isolating them means they can be stopped individually. Putting them together means that whenever I stopped my application's container due to a version upgrade or a failure, I'd needlessly stop the Consul agent running inside it. The same goes the other way around: stopping the Consul agent would stop my running application. This unneeded coupling isn't beneficial.
Isolating them means they can be scaled separately. I may need to scale my microservice and deploy more instances of it. If the container also contained a Consul client agent, then scaling my microservice would end up scaling Consul as well. Or the other way around: I may need to scale Consul without scaling my microservice.
Isolating them is easier in terms of Docker container images. I can keep using the official Consul image and upgrade without much hassle. Putting Consul and my microservice together would mean that upgrading Consul requires me to modify the container image myself.
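A sketch of the suggested layout in Docker Compose terms, with the Consul agent as its own container next to the microservice. The image tags, ports, join address and service names are illustrative assumptions, not a tested setup:

```yaml
version: "3"
services:
  consul-agent:
    image: consul                  # official image; runs the agent in client mode
    # -retry-join points at one of the three Consul servers in the VPC
    command: agent -retry-join=10.0.0.10 -bind=0.0.0.0 -client=0.0.0.0
    ports:
      - "8500:8500"                # HTTP API the app uses for discovery

  orders-service:                  # placeholder microservice
    image: myorg/orders:latest
    environment:
      CONSUL_HTTP_ADDR: consul-agent:8500
    depends_on:
      - consul-agent
```

This keeps each concern in its own container, so either side can be upgraded, restarted or scaled independently, as argued above.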

Publishing hystrix metrics to API

I have a web service running as multiple Docker containers. The Docker hosts are not under my control. I use Hystrix to record metrics. I thought of using Turbine to monitor the metrics, but I do not have access to the real hostnames and ports of my web app instances to give to Turbine. So I am thinking of a push model where the individual instances of my web app publish the metrics to another API, on which I can run dashboard tools. I looked at Servo, but it also does not suit my needs, as it publishes to JMX. Can I use a custom publisher for this? Are there examples for this use case?
