how to configure a service for prometheus instead of executing the port-forward command every time - prometheus-operator

Every time I want to bring up the Prometheus dashboard, I execute this command:
kubectl port-forward -n monitoring prometheus-prometheus-operator-prometheus-0 9090
Is there any way to keep the service reachable all the time without executing the command?
Note: the Prometheus operator is deployed in a Kubernetes cluster.

You can expose your Prometheus service as a LoadBalancer, which will give you an IP to access the service directly without running any command.
You can also expose your service as a NodePort if that fits your needs better.
You can read more about service types and how to change them here: https://kubernetes.io/docs/concepts/services-networking/service/
If you are running the NGINX ingress controller inside Kubernetes, you can also expose your service through an Ingress and use that domain to access Prometheus.
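As a minimal sketch of both options (assuming the chart created the service prometheus-operator-prometheus in the monitoring namespace; check the actual name with kubectl get svc -n monitoring), the service type can be changed in place:

kubectl -n monitoring patch svc prometheus-operator-prometheus \
  -p '{"spec": {"type": "NodePort"}}'   # or "LoadBalancer"

For the Ingress route, a sketch along these lines (the host is a placeholder):

apiVersion: networking.k8s.io/v1beta1   # extensions/v1beta1 on older clusters
kind: Ingress
metadata:
  name: prometheus
  namespace: monitoring
spec:
  rules:
  - host: prometheus.example.com        # placeholder domain
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus-operator-prometheus
          servicePort: 9090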

Related

Make a direct call to Kubernetes pod

I have several pods running inside the Kubernetes cluster.
How can I make an HTTP call to a specific Pod without going through a LoadBalancer service?
There are several ways. One is kubectl port-forward <pod name> 8080:80; then open another terminal and you can do curl localhost:8080, which will forward your request to the pod.
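Another of those ways, as a sketch (pod names are placeholders): every pod gets its own cluster-internal IP, so from inside the cluster you can hit the pod directly without any Service in front of it:

# look up the pod's cluster-internal IP (shown in the IP column)
kubectl get pod <pod name> -o wide

# call that IP from any other pod in the cluster
kubectl exec -it <some other pod> -- curl http://<pod IP>:80

This only works from within the cluster network (or from the nodes), since pod IPs are not routable from outside by default.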

How does the nginx resolver behave when pointed at a Kubernetes Service?

I'm exploring the nginx container as a load balancer. I'm using a Kubernetes Service in the upstream module.
Here is a snippet of my nginx.conf...
upstream backend {
    hash $remote_addr consistent;
    server mgmt-1;   # <-- these entries are Kubernetes Services
    server mgmt-2;
    server mgmt-3;
}
Right now each service maps to only ONE pod, which of course is not ideal.
So, the resolver directive for NGINX Plus is said to consistently update the endpoint IPs in the event that the Service is changed/moved/deleted/ephemeralized, etc.
Question: if I wanted to scale up my pods, resulting in multiple endpoints maintained by one Service, would the nginx resolver extract and load the entire set of endpoints? Or would it not know what to do? Thanks
You can test this for yourself; it's quite fun. Spin up a pod running Ubuntu, for example, then kubectl exec -it podname /bin/sh into the pod, and from there you can run nslookup mgmt-1 (if they're in the same namespace).
What you will find is that the service resolves to a single IP, unique to the service. When you send traffic to that service IP on a port it is listening on, it will make it to the endpoints.
To answer your question: I do not believe nginx will be able to tell that mgmt-1 has three endpoints (for example) while mgmt-2 has one endpoint (for example). Nginx will only see the service IP.
To see the service IP for yourself, you can run kubectl get svc/servicename -n namespacename
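One way to get per-endpoint DNS records, as a sketch that goes beyond the answer above: make the Service headless by setting clusterIP: None. The cluster DNS then returns one A record per ready pod instead of a single service IP, which is what a resolver-based setup needs:

# headless variant of one of the upstream services
apiVersion: v1
kind: Service
metadata:
  name: mgmt-1
spec:
  clusterIP: None        # headless: DNS returns the pod IPs
  selector:
    app: mgmt-1          # hypothetical label
  ports:
  - port: 80

# corresponding nginx.conf fragment (the 'resolve' parameter is NGINX Plus only)
resolver kube-dns.kube-system.svc.cluster.local valid=10s;
upstream backend {
    zone backend 64k;                                  # shared memory zone, required for 'resolve'
    server mgmt-1.default.svc.cluster.local resolve;   # assumes the 'default' namespace
}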

kubernetes (rancher) ingress understanding

What I know / have running:
I have a running Rancher HA setup (2.4.2) on vSphere with an L4 NGINX LB in front of it. Accessing the UI and provisioning new clusters (vSphere node driver) works great. I know I'm not in the cloud and cannot use an L7 LB (apart from nip.io or MetalLB, maybe), and deploying workloads and exposing them via NodePort works great (so the workloads are available on the specified port on each node where a corresponding pod is running).
My question:
Is it possible to expose (maybe via Ingress) applications on any of my running clusters under the domain/address where I can access the Rancher UI (in my case: https://rancher-things.local)? For example, if I deployed a Harbor registry, could I somehow expose it externally (local network, not public) as https://rancherthings.local/harbor? Or, if that would not work, is it possible to deploy an L4 load balancer for accessing applications on, or in front of, a specific cluster?
Thank you.
There should already be an Ingress resource which exposes the Rancher UI. You can edit that Ingress and add a path /harbor to route traffic to the Harbor service:
paths:
- path: /harbor
  backend:
    serviceName: harbor
    servicePort: 80
@arghya-sadhu, the LB is pointing to the HA cluster (a.k.a. the upstream/management/RKE/HA cluster) running Rancher, not Harbor. It's not recommended to create any other Ingresses in this HA cluster. Also, I think the Harbor workload is running in a downstream cluster, and there is no LB pointing to the nodes of that cluster.
Patrick, you can create a Service exposing your application port via HTTP and use Rancher's proxy mechanism to access the UI of your app via the Rancher URL. If you have monitoring enabled in your setup, you can look at how the Grafana UI is exposed via this mechanism.
After creating the service, you can find the URL info using the following command:
kubectl -n <your_app_namespace> cluster-info
# or
kubectl cluster-info -A
The downside of this approach is that you don't have a dedicated LoadBalancer handling the traffic, but for a smaller-scale setup this should be OK.
Example URL for Grafana:
https://<rancher-fqdn>/k8s/clusters/<cluster-id>/api/v1/namespaces/cattle-prometheus/services/http:access-grafana:80
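As a minimal sketch of such a Service (the app name, namespace, and ports are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: harbor                 # hypothetical app name
  namespace: harbor-system     # hypothetical namespace
spec:
  selector:
    app: harbor
  ports:
  - name: http                 # becomes the scheme prefix in the proxy URL
    port: 80
    targetPort: 8080

The URL would then follow the same pattern as the Grafana example above, i.e. with namespace harbor-system and service http:harbor:80 in the path.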

K8S baremetal nginx-ingress-controller

OS: RHEL7 | k8s version: 1.12/13 | kubespray | baremetal
I have a standard kubespray bare-metal cluster deployed, and I am trying to understand the simplest recommended way to deploy the nginx-ingress-controller so that I can deploy simple services. There is no load balancer provided. I want my MASTER public IP as the endpoint for my services.
The GitHub k8s ingress-nginx docs suggest a NodePort service as a "mandatory" step, which does not seem to be enough to make it work along with kubespray's ingress_controller.
I was able to make it work by forcing the LoadBalancer service type and setting the externalIP value to the MASTER public IP on the nginx-ingress-controller via kubectl edit svc, but that does not seem to be a correct solution due to the lack of an actual load balancer.
Similar results using helm chart:
helm install -n ingress-nginx stable/nginx-ingress --set controller.service.externalIPs[0]="MASTER PUBLIC IP"
Correct, that is not what LoadBalancer is intended for. It's intended for provisioning load balancers with cloud providers like AWS, GCP, or Azure, or with a load balancer that has some sort of API the kube-controller-manager can interface with. If you look at your kube-controller-manager logs, you should see some errors. The way you made it work is obviously a hack, but I suppose it works.
The standard way to implement this is to use a NodePort service and have whatever proxy/load balancer you like (e.g. nginx or haproxy) on your master send traffic to the NodePorts, as sketched below. Note that I don't recommend having the master front your services either, since it already handles critical Kubernetes pods like the kube-controller-manager, kube-apiserver, kube-scheduler, etc.
The only exception is MetalLB, which you can use with a LoadBalancer service type. Keep in mind that, as of this writing, the project is in its early stages.
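A minimal sketch of that NodePort-plus-proxy setup (the node IPs and the 30080 NodePort are placeholders; use whatever kubectl get svc reports for your ingress controller), using the nginx stream module on the proxy host:

# /etc/nginx/nginx.conf fragment on the proxy host
stream {
    upstream ingress_http {
        server 10.0.0.11:30080;   # worker node IP : ingress HTTP NodePort
        server 10.0.0.12:30080;
    }
    server {
        listen 80;
        proxy_pass ingress_http;
    }
}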

How can I deploy an ingress controller for my Kubernetes cluster

So I built my Kubernetes cluster on AWS using KOPS.
I then deployed SocketCluster on my K8s cluster using Baasil, which deploys 7 YAML files.
My problem is that the scc-ingress isn't getting any IP or endpoint, as I have not deployed any ingress controller.
According to the ingress controller docs, it is recommended to deploy an nginx ingress controller.
I need easy, clearly explained steps to deploy the nginx ingress controller for my specific cluster.
To view the current status of my cluster in a nice GUI, see the screenshots of the Deployments, Ingress, Pods, Replica Sets, and Services. [screenshots omitted]
The answer is here: https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-nginx/v1.4.0.yaml
But obviously the scc-ingress file needed to be changed to have a host such as foo.bar.com.
Also, I needed to generate a self-signed SSL certificate using OpenSSL, as per this link: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/tls
Finally, I had to add a CNAME record in Route53 pointing foo.bar.com to the DNS name of the ELB that was created.
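A minimal sketch of the certificate and host steps (the secret name tls-secret is illustrative, and the spec fragment is not the full Baasil-generated scc-ingress file):

# self-signed certificate and the TLS secret the Ingress references
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=foo.bar.com"
kubectl create secret tls tls-secret --key tls.key --cert tls.crt

# relevant fragment of the scc-ingress spec
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: tls-secret
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: scc-ingress-backend   # hypothetical service name
          servicePort: 80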
