I have a Google Kubernetes Engine cluster running on the Google Cloud Platform.
I would now like to know how many HTTP requests are received by our Kubernetes cluster. I'd like that to be displayed in Google Stackdriver.
Unfortunately I can't find any appropriate metric in the documentation at https://cloud.google.com/monitoring/api/metrics_kubernetes.
Is there a way to get a chart with the count of all incoming HTTP requests to a GKE cluster in Stackdriver?
Unfortunately, that's a limitation of Kubernetes itself - it doesn't expose HTTP-level metrics for the services running on it. You'll need to either install Istio and configure the Stackdriver adapter (see my post on this) or use something like OpenCensus in your app to create a custom metric. Another option would be to create a log-based metric to count the requests.
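For the log-based route, here is a minimal sketch of creating a counter metric with gcloud, assuming your request logs already land in Stackdriver Logging via an nginx ingress controller (the metric name and the log filter are illustrative, not something GKE provides out of the box):

    # Assumes ingress access logs are collected as k8s_container logs;
    # adjust the filter to match how your ingress actually logs requests.
    gcloud logging metrics create http_request_count \
      --description="Count of incoming HTTP requests" \
      --log-filter='resource.type="k8s_container" AND resource.labels.container_name="nginx-ingress-controller"'

Once created, the metric appears under logging/user/http_request_count and can be charted in Stackdriver like any built-in metric.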
We currently have a requirement to pass data from a 3rd-party API into Kafka topics. First we tried the HTTP bridge, but it does not appear to support the HTTPS protocol. We have since been advised to create a Kafka connector to solve this, but we are confused about whether Kafka Connect can actually solve it, and if so, how. I am looking for suggestions for solving this problem.
Kafka Connect can send records to an HTTP(S) endpoint, or read from one and write into Kafka. It will not act as a proxy through which external users can read Kafka records.
The Strimzi Bridge and the Confluent REST Proxy should both support HTTPS, even if that means you need to place either behind a TLS-terminating reverse proxy such as Nginx, Caddy, etc.
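If you go the reverse-proxy route, a minimal nginx sketch could look like the following (the hostname, the certificate paths, and the assumption that the Strimzi Bridge listens on plain HTTP at localhost:8080 are all placeholders):

    # Terminate TLS here and forward plain HTTP to the bridge.
    server {
        listen 443 ssl;
        server_name kafka-bridge.example.com;

        ssl_certificate     /etc/nginx/tls/bridge.crt;
        ssl_certificate_key /etc/nginx/tls/bridge.key;

        location / {
            proxy_pass http://localhost:8080;
            proxy_set_header Host $host;
        }
    }

Clients then POST to https://kafka-bridge.example.com/topics/<topic> and the bridge produces the records into Kafka.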
I currently have services running on the Google App Engine platform which use the X-Appengine-Inbound-Appid header to limit HTTP requests to our apps only.
I recently found out that some of my services require a static IP, and therefore I would like to move some of the services to the Kubernetes Engine.
Is there a way for Kubernetes Engine to secure requests using a similar header approach? The requests should only be allowed from our own Firebase apps.
Ideally I would keep things as simple as possible for the clients using the services.
Possibly I could generate a specific API key for each user which can be blacklisted on abuse, but that already adds quite a bit of complexity.
You can use the nginx ingress controller as an entry point for your cluster, and add whatever nginx rules you need.
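As a sketch, here is an Ingress that rejects requests missing an expected header value (all names and the header value are made up; note that, unlike X-Appengine-Inbound-Appid, which Google injects and strips from untrusted traffic, a plain shared-secret header can be forged by anyone who learns it):

    # Sketch: nginx ingress with a configuration-snippet that checks a header.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-service
      annotations:
        nginx.ingress.kubernetes.io/configuration-snippet: |
          if ($http_x_client_app_id != "my-shared-secret") {
            return 403;
          }
    spec:
      rules:
      - host: api.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80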
I have a Kubernetes cluster with running pods. In order to monitor and troubleshoot the infrastructure, I want to implement a centralized logging solution so that all incoming and outgoing HTTP requests are logged in one place.
For the incoming requests this is not a problem at all: I can use the nginx logs from the ingress controller and present them.
I also understand that I can log outgoing requests inside the applications I run in pods, but the problem is that applications from outside developers are also used, and they may not implement such logging.
As for the outgoing requests, there is no solution provided by default, if I understand it right. I have explored k8s logging and k8s audit, but neither provides such a feature.
Probably I need some network sniffer, but as I see it, that is quite a low-level solution for such a problem. So, the question is: is there any out-of-the-box implementation for this demand?
Thanks!
Take a look at a service mesh solution like Istio or Linkerd, as well as tracing solutions like Jaeger or Zipkin. With these you can get full observability of how information flows into, out of, and through your kube cluster.
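As a concrete sketch, recent Istio versions let you turn on Envoy access logs mesh-wide through the Telemetry API, which captures both inbound and outbound HTTP requests at every sidecar (this assumes Istio is installed with its default envoy log provider):

    # Enables Envoy access logging for the whole mesh; each sidecar then logs
    # every HTTP request it proxies, in and out, to the container's stdout.
    apiVersion: telemetry.istio.io/v1alpha1
    kind: Telemetry
    metadata:
      name: mesh-default
      namespace: istio-system
    spec:
      accessLogging:
      - providers:
        - name: envoy

Because the logs land in each pod's stdout, the log collection pipeline you already use for the ingress controller can pick them up centrally.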
I am new to the Graphite monitoring tool and have one question about this setup. I have two servers. The first is treated as the hosted server (with Graphite, collectd, StatsD, and Grafana installed), and Grafana displays all the metrics there. On the second server I have installed Graphite and collectd. Now I need to send the second server's collectd information to the first (hosted) server and display those metrics on the web using Grafana.
Could you please suggest a plugin or another way to set up this configuration?
Thanks.
You don't actually need Graphite on the second host; you can just configure collectd on that host to write to Graphite (actually the Carbon ingest API) on the first host.
https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_write_graphite
If you do want to have Graphite on both servers for some reason, you can use multiple Node entries in your collectd config to have it send metrics to both Graphite instances.
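A minimal collectd.conf sketch for the second server (the hostname is a placeholder; add a second <Node> block if you really want to feed both Graphite instances):

    LoadPlugin write_graphite
    <Plugin write_graphite>
      <Node "primary">
        Host "graphite.example.com"   # the first (hosted) server
        Port "2003"                   # Carbon plaintext ingest port
        Protocol "tcp"
        Prefix "collectd."
      </Node>
    </Plugin>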
I have a web service running as multiple docker containers. The docker hosts are not under my control. I use Hystrix to record metrics, and I thought of using Turbine to monitor them. But I do not have access to the real hostnames and ports of my web app instances to give to Turbine. So I am thinking of a push model, where the individual instances of my web app publish their metrics to another API on which I can run dashboard tools. I looked at Servo, but it also does not suit my needs, as it publishes to JMX. Can I use a custom publisher for this? Are there examples for this use case?