Configure prometheus-operator to scrape static ICMP targets from blackbox-exporter

How do I configure a monitoring.coreos.com/v1 Probe so that prometheus-operator collects ICMP metrics from a defined list of targets?

This assumes that you have installed both the kube-prometheus-stack and the blackbox-exporter Helm charts as follows:
helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring --create-namespace
helm install prometheus-blackbox-exporter prometheus-community/prometheus-blackbox-exporter -n monitoring
First you need to enable the icmp module of the blackbox-exporter in a values file, e.g. blackbox-exporter.yml:
config:
  modules:
    icmp:
      prober: icmp
      icmp:
        preferred_ip_protocol: ip4
allowIcmp: true
apply the config:
helm upgrade prometheus-blackbox-exporter prometheus-community/prometheus-blackbox-exporter -f blackbox-exporter.yml
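Optionally, to verify that the icmp module is active, you can port-forward the exporter and hit its /probe endpoint directly (a sketch; the service name follows from the Helm release name used above):

```shell
kubectl -n monitoring port-forward svc/prometheus-blackbox-exporter 9115:9115 &
# probe_success 1 in the output indicates the ping succeeded
curl "http://localhost:9115/probe?module=icmp&target=8.8.8.8"
```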
then you can define the Probe like this:
apiVersion: monitoring.coreos.com/v1
kind: Probe
metadata:
  name: blackbox-probe-icmp
  namespace: monitoring
  labels:
    release: prometheus
spec:
  jobName: icmp
  interval: 1m
  scrapeTimeout: 5s
  module: icmp
  prober:
    url: prometheus-blackbox-exporter:9115
  targets:
    staticConfig:
      static:
        - 192.168.1.1
        - 8.8.8.8
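If you also want to alert on unreachable targets, a PrometheusRule can be layered on top of the Probe. A minimal sketch (the rule name is illustrative; it assumes the kube-prometheus-stack defaults, where rules are selected via the release: prometheus label, and that the Probe's jobName of icmp ends up as the job label):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: blackbox-icmp-alerts
  namespace: monitoring
  labels:
    release: prometheus
spec:
  groups:
    - name: icmp
      rules:
        - alert: IcmpTargetDown
          expr: probe_success{job="icmp"} == 0
          for: 5m
          annotations:
            summary: "ICMP probe failing for {{ $labels.instance }}"
```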

Related

Expose both TCP and UDP from the same ingress

In kubernetes, I have the following service:
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: default
spec:
  ports:
    - name: tcp
      protocol: TCP
      port: 5555
      targetPort: 5555
    - name: udp
      protocol: UDP
      port: 5556
      targetPort: 5556
  selector:
    tt: test
Which exposes two ports, 5555 for TCP and 5556 for UDP. How can I expose these ports externally using the same ingress? I tried using nginx to do something like the following, but it doesn't work: it complains that mixed ports are not supported.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "5555": "default/test-service:5555"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  "5556": "default/test-service:5556"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
    - name: tcp
      port: 5555
      targetPort: 5555
      protocol: TCP
    - name: udp
      port: 5556
      targetPort: 5556
      protocol: UDP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
Is there a way to do this?
Most cloud providers do not support UDP load balancing or mixed protocols in a single load balancer, and may have cloud-specific methods to work around this.
The DigitalOcean CPI does not support mixed protocols in the Service definition; it accepts TCP only for load balancer Services. It is possible to ask for HTTP(S) and HTTP/2 ports with Service annotations.
Summary: the DO CPI is the current bottleneck with its TCP-only limitation. As long as it is there, the implementation of this feature will have no effect for DO users.
See more: do-mixed-protocols.
A simple solution that may solve your problem is to create a reverse proxy on a standalone server, using Nginx to route UDP and TCP traffic directly to your Kubernetes service.
Follow these two steps:
1. Create a NodePort service for the application
2. Create a small server instance and run Nginx with an LB config on it
Use the NodePort type of service, which exposes your application on your cluster nodes and makes it accessible through a node IP on a static port. This type supports multi-protocol services. Read more about services here.
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: default
spec:
  type: NodePort
  ports:
    - name: tcp
      protocol: TCP
      port: 5555
      targetPort: 5555
      nodePort: 30010
    - name: udp
      protocol: UDP
      port: 5556
      targetPort: 5556
      nodePort: 30011
  selector:
    tt: test
For example, this service exposes the test pods' port 5555 through nodeIP:30010 with the TCP protocol and port 5556 through nodeIP:30011 with UDP. Please adjust the ports according to your needs; this is just an example.
Then create a small server instance and run Nginx with LB config.
For this step, you can get a small server from any cloud provider.
Once you have the server, ssh inside and run the following to install Nginx:
$ sudo yum install nginx
In the next step, you will need your node IP addresses, which you can get by running:
$ kubectl get nodes -o wide
Note: if you have a private cluster without external access to your nodes, you will have to set up a point of entry for this (for example, a NAT gateway).
Then you have to add the following to your nginx.conf (run command $ sudo vi /etc/nginx/nginx.conf):
worker_processes 1;

events {
    worker_connections 1024;
}

stream {
    upstream tcp_backend {
        server <node ip 1>:30010;
        server <node ip 2>:30010;
        server <node ip 3>:30010;
        ...
    }

    upstream udp_backend {
        server <node ip 1>:30011;
        server <node ip 2>:30011;
        server <node ip 3>:30011;
        ...
    }

    server {
        listen 5555;
        proxy_pass tcp_backend;
        proxy_timeout 1s;
    }

    server {
        listen 5556 udp;
        proxy_pass udp_backend;
        proxy_timeout 1s;
    }
}
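Before starting Nginx, it is worth validating the config file; the stream block requires the stream module, which most distribution packages include:

```shell
# syntax-check /etc/nginx/nginx.conf without starting the server
sudo nginx -t
```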
Now you can start your Nginx server using command:
$ sudo /etc/init.d/nginx start
If you had already started your Nginx server before applying changes to your config file, you have to restart it - execute the commands below:
$ sudo netstat -tulpn # Get the PID of nginx.conf program
$ sudo kill -2 <PID of nginx.conf>
$ sudo service nginx restart
And now you have a UDP/TCP LoadBalancer which you can access through <server IP>:5555 (TCP) and <server IP>:5556 (UDP).
See more: tcp-udp-loadbalancer.
You can enable the MixedProtocolLBService feature gate. For instructions on how to enable feature gates, see below.
How do you enable Feature Gates in K8s?
Restart (delete and re-create) the Ingress controller after enabling it for the settings to take effect.
MixedProtocolLBService was introduced as an alpha feature in Kubernetes 1.20; it has since graduated (beta in 1.24, stable in 1.26).
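With the feature gate enabled (and a cloud provider that supports it), the two ports from the question can in principle live in one LoadBalancer Service. A sketch (the Service name is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-service-lb
  namespace: default
spec:
  type: LoadBalancer
  ports:
    - name: tcp
      protocol: TCP
      port: 5555
      targetPort: 5555
    - name: udp
      protocol: UDP
      port: 5556
      targetPort: 5556
  selector:
    tt: test
```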

Kubernetes NodePort not listening

I'm doing some tutorials using k3d (k3s in docker) and my yml looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
With the resulting node port being 31747:
:~$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 18m
nginx NodePort 10.43.254.138 <none> 80:31747/TCP 17m
:~$ kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 172.18.0.2:6443 22m
nginx 10.42.0.8:80 21m
However wget does not work:
:~$ wget localhost:31747
Connecting to localhost:31747 ([::1]:31747)
wget: can't connect to remote host: Connection refused
:~$
What have I missed? I've ensured that my labels all say app: nginx and my containerPort, port and targetPort are all 80
The question is whether the NodePort range is mapped from the host to the Docker container acting as the node. The command docker ps will show you; for more details you can docker inspect $container_id and look at the Ports attribute under NetworkSettings. I don't have k3d around, but here is an example from kind.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1d2225b83a73 kindest/node:v1.17.0 "/usr/local/bin/entr…" 18 hours ago Up 18 hours 127.0.0.1:32769->6443/tcp kind-control-plane
$ docker inspect kind-control-plane
[
{
# [...]
"NetworkSettings": {
# [...]
"Ports": {
"6443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32769"
}
]
},
# [...]
}
]
If it is not, working with kubectl port-forward as suggested in the comment is probably the easiest approach. Alternatively, start looking into Ingress. Ingress is the preferred method to expose workloads outside of a cluster, and in the case of kind, they have support for Ingress. It seems k3d also has a way to map the ingress port to the host.
Turns out I didn't expose the ports when creating the cluster
https://k3d.io/usage/guides/exposing_services/
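For reference, with recent k3d versions the port mapping is supplied at cluster creation time; a sketch (the cluster name is illustrative, and the flag syntax varies between k3d versions):

```shell
# publish NodePort 31747 of the server node on the host at cluster creation
k3d cluster create mycluster -p "31747:31747@server:0"
```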
Maybe your pod is running on another worker node, not localhost. You should use the correct node IP.

Kubernetes, access IP outside the cluster

I have a corporate network (10.22..) which hosts a Kubernetes cluster (10.225.0.1). How can I access a VM in the same network but outside the cluster from within a pod in the cluster?
For example, I have a VM at 10.22.0.1 listening on port 30000, which I need to access from a Pod in the Kubernetes cluster. I tried to create a Service like this
apiVersion: v1
kind: Service
metadata:
  name: vm-ip
spec:
  selector:
    app: vm-ip
  ports:
    - name: vm
      protocol: TCP
      port: 30000
      targetPort: 30000
  externalIPs:
    - 10.22.0.1
But when I do "curl http://vm-ip:30000" from a Pod (kubectl exec -it), it returns a "connection refused" error, while "google.com" works. What are the ways of accessing external IPs?
You can create an endpoint for that.
Let's go through an example:
In this example, I have a http server on my network with IP 10.128.15.209 and I want it to be accessible from my pods inside my Kubernetes Cluster.
First thing is to create an endpoint. This is going to let me create a service pointing to this endpoint that will redirect the traffic to my external http server.
My endpoint manifest is looking like this:
apiVersion: v1
kind: Endpoints
metadata:
  name: http-server
subsets:
  - addresses:
      - ip: 10.128.15.209
    ports:
      - port: 80
$ kubectl apply -f http-server-endpoint.yaml
endpoints/http-server configured
Let's create our service:
apiVersion: v1
kind: Service
metadata:
  name: http-server
spec:
  ports:
    - port: 80
      targetPort: 80
$ kubectl apply -f http-server-service.yaml
service/http-server created
Checking that our service exists and saving its clusterIP for later use:
user@minikube-server:~$ kubectl get service
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
http-server   ClusterIP   10.96.228.220   <none>        80/TCP    30m
kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP   10d
Now it's time to verify if we can access our service from a pod:
$ kubectl run ubuntu -it --rm=true --restart=Never --image=ubuntu bash
This command will create and open a bash session inside a ubuntu pod.
In my case I'll install curl to be able to check if I can access my http server; you may need to install a different client (e.g. mysql) depending on your service:
root@ubuntu:/# apt update; apt install -y curl
Checking connectivity with my service using its clusterIP:
root@ubuntu:/# curl 10.96.228.220:80
Hello World!
And finally using the service name (DNS):
root@ubuntu:/# curl http-server
Hello World!
So, in your specific case you have to create this:
apiVersion: v1
kind: Endpoints
metadata:
  name: vm-server
subsets:
  - addresses:
      - ip: 10.22.0.1
    ports:
      - port: 30000
---
apiVersion: v1
kind: Service
metadata:
  name: vm-server
spec:
  ports:
    - port: 30000
      targetPort: 30000
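Note that this Service deliberately has no selector, so Kubernetes uses the manually created Endpoints object instead; the two must share the same name (vm-server). As a quick check from inside the cluster, a sketch using a throwaway pod:

```shell
# should print whatever the VM serves on port 30000
kubectl run tmp -it --rm --restart=Never --image=busybox -- wget -qO- http://vm-server:30000
```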

How to set external IP for nginx-ingress controller in private cloud kubernetes cluster

I am setting up a Kubernetes cluster to run Hyperledger Fabric apps. My cluster is on a private cloud, hence I don't have a load balancer. How do I set an IP address for my nginx-ingress-controller (pending) to expose my services? I think it is interfering with my creation of pods, since when I run kubectl get pods, I see very many evicted pods. I am using cert-manager, which I think also needs IPs.
CA_POD=$(kubectl get pods -n cas -l "app=hlf-ca,release=ca" -o jsonpath="{.items[0].metadata.name}")
This does not create any pods.
nginx-ingress-controller-5bb5cd56fb-lckmm 1/1 Running
nginx-ingress-default-backend-dc47d79c-8kqbp 1/1 Running
The rest take the form
nginx-ingress-controller-5bb5cd56fb-d48sj 0/1 Evicted
ca-hlf-ca-5c5854bd66-nkcst 0/1 Pending 0 0s
ca-postgresql-0 0/1 Pending 0 0s
I would like to create pods from which I can run exec commands like
kubectl exec -n cas $CA_POD -- cat /var/hyperledger/fabric-ca/msp/signcerts/cert.pem
You are not exposing the nginx-controller IP address, but nginx's service via NodePort. For example:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-controller
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: http
  selector:
    app: nginx
In this case you'd be able to reach your service like
curl -v <NODE_EXTERNAL_IP>:30080
As to the question why your pods are in Pending state, please describe the misbehaving pods:
kubectl describe pod nginx-ingress-controller-5bb5cd56fb-d48sj
The best approach is to use Helm. Note that the old stable chart repository is deprecated; the chart now lives in the ingress-nginx repository:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install nginx-ingress ingress-nginx/ingress-nginx

kubectl port-forward connection refused [ socat ]

I am running pyspark against a Spark master on Kubernetes and am trying to port-forward its port to my local machine. I am getting this error while executing my python file.
Forwarding from 127.0.0.1:7077 -> 7077
Forwarding from [::1]:7077 -> 7077
Handling connection for 7077
E0401 01:08:11.964798 20399 portforward.go:400] an error occurred forwarding 7077 -> 7077: error forwarding port 7077 to pod 68ced395bd081247d1ee6b431776ac2bd3fbfda4d516da156959b6271c2ad90c, uid : exit status 1: 2019/03/31 19:38:11 socat[1748104] E connect(5, AF=2 127.0.0.1:7077, 16): Connection refused
These are a few lines of my python file. I am getting the error on the line where conf is defined.
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext
conf = SparkConf().setMaster("spark://localhost:7077").setAppName("Stand Alone Python Script")
I already tried installing socat on the Kubernetes nodes. I am using Spark version 2.4.0 locally. I even tried exposing port 7077 in the YAML file. It did not work.
This is the YAML file used for deployment.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  creationTimestamp: 2018-10-07T15:23:35Z
  generation: 16
  labels:
    chart: spark-0.2.1
    component: m3-zeppelin
    heritage: Tiller
    release: m3
  name: m3-zeppelin
  namespace: default
  resourceVersion: "55461362"
  selfLink: /apis/apps/v1beta1/namespaces/default/statefulsets/m3-zeppelin
  uid: f56e86fa-ca44-11e8-af6c-42010a8a00f2
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      component: m3-zeppelin
  serviceName: m3-zeppelin
  template:
    metadata:
      creationTimestamp: null
      labels:
        chart: spark-0.2.1
        component: m3-zeppelin
        heritage: Tiller
        release: m3
    spec:
      containers:
      - args:
        - bash
        - -c
        - wget -qO- https://archive.apache.org/dist/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz
          | tar xz; mv spark-2.2.2-bin-hadoop2.7 spark; curl -sSLO https://storage.googleapis.com/hadoop-lib/gcs/gcs-connector-latest-hadoop2.jar;
          mv gcs-connector-latest-hadoop2.jar lib; ./bin/zeppelin.sh
        env:
        - name: SPARK_MASTER
          value: spark://m3-master:7077
        image: apache/zeppelin:0.8.0
        imagePullPolicy: IfNotPresent
        name: m3-zeppelin
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        resources:
          requests:
            cpu: 100m
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /zeppelin/conf
          name: m3-zeppelin-config
        - mountPath: /zeppelin/notebook
          name: m3-zeppelin-notebook
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      creationTimestamp: null
      name: m3-zeppelin-config
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10G
      storageClassName: standard
    status:
      phase: Pending
  - metadata:
      creationTimestamp: null
      name: m3-zeppelin-notebook
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10G
      storageClassName: standard
    status:
      phase: Pending
status:
  collisionCount: 0
  currentReplicas: 1
  currentRevision: m3-zeppelin-5779b84d99
  observedGeneration: 16
  readyReplicas: 1
  replicas: 1
  updateRevision: m3-zeppelin-5779b84d99
  updatedReplicas: 1
Focusing specifically on the error from the Kubernetes perspective, it could be related to:
A mismatch between the port the request is sent to and the receiving end (for example sending a request to an NGINX instance on port 1234)
The Pod not listening on the desired port.
I've managed to reproduce this error with a Kubernetes cluster created with kubespray.
Assuming that you've run following steps:
$ kubectl create deployment nginx --image=nginx
$ kubectl port-forward deployment/nginx 8080:80
Everything should be correct and the NGINX welcome page should appear when running: $ curl localhost:8080.
If we made a change to the port-forward command like below (notice the 1234 port):
$ kubectl port-forward deployment/nginx 8080:1234
You will get following error:
Forwarding from 127.0.0.1:8080 -> 1234
Forwarding from [::1]:8080 -> 1234
Handling connection for 8080
E0303 22:37:30.698827 625081 portforward.go:400] an error occurred forwarding 8080 -> 1234: error forwarding port 1234 to pod e535674b2c8fbf66252692b083f89e40f22e48b7a29dbb98495d8a15326cd4c4, uid : exit status 1: 2021/03/23 11:44:38 socat[674028] E connect(5, AF=2 127.0.0.1:1234, 16): Connection refused
The same error will also occur with a Pod whose application hasn't bound to the port and/or is not listening.
A side note!
You can simulate it by running an Ubuntu Pod where you try to curl its port 80. It will fail as nothing listens on its port. Try to exec into it and run $ apt update && apt install -y nginx and try to curl again (with kubectl port-forward configured). It will work and won't produce the error mentioned (socat).
Addressing the part of the question:
I even tried exposing port 7077 in YAML file. Did not work out.
If you mean that you've included the - containerPort: 8080 field: it is purely informational and does not carry any configuration to be made. You can read more about it here:
Stackoverflow.com: Answer: Why do we need a port/containerPort in a Kuberntes deployment/container definition?
(Besides that, I consider it incorrect, as you are using port 7077.)
As for the $ kubectl port-forward --address 0.0.0.0. It's a way to expose your port-forward so that it would listen on all interfaces (on a host machine). It could allow for an access to your port-forward from LAN:
$ kubectl port-forward --help (part):
# Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod
kubectl port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000
Additional resources:
Kubernetes.io: Docs: Tasks: Access application cluster: Port forward access to application cluster
Maybe you should use the command:
kubectl get pods
to check whether your pods are running.
In my case, I started minikube without a network connection, so I met the same issue when using port-forward to forward a pod's port to my local machine's port.
