Does anyone have an idea how to make Grafana detect changes to a Prometheus target? (Grafana uses Prometheus as a data source.)
For example, if I change my target in Prometheus from localhost:9090 to localhost:8080, Grafana must detect the change and adapt the graphs accordingly.
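To be concrete, this is the kind of change in prometheus.yml I mean (the job name here is made up for illustration):

scrape_configs:
  - job_name: 'my-service'            # hypothetical job name
    static_configs:
      - targets: ['localhost:8080']   # changed from 'localhost:9090'

Note that the target address becomes the instance label on the scraped metrics, so panels whose queries hard-code instance="localhost:9090" will go blank, while queries that select by job should pick up the new target automatically.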
I am looking to configure NiFi UI access via HTTP. I've set the necessary values (or so I thought) in nifi.properties.
properties set:
nifi.web.http.host=192.168.1.99
nifi.web.http.port=8080
I know NiFi doesn't allow HTTP and HTTPS to be used simultaneously, so I removed the default values below and left them unset:
nifi.web.https.host=
nifi.web.https.port=
Once I saved the file, I restarted the service with systemctl restart nifi.service so it would read the new configuration. I then ran netstat -plnt to see if the port was open, to no avail.
Did you set the HTTP values? If not, you're not providing a port for NiFi to listen on.
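For what it's worth, a quick way to verify the binding after a restart (host, port, and service name are taken from the question; /nifi is the default UI path):

sudo systemctl restart nifi.service
# give NiFi a minute to start, then check that something is listening on the HTTP port
sudo netstat -plnt | grep 8080
# or request the UI directly
curl -I http://192.168.1.99:8080/nifi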
I'd like to analyze the traffic by placing mitmproxy between the load balancer (Traefik or nginx) and the service, but I can't really understand how to do that. I don't want to set mitmproxy up as an ordinary proxy (that works like a charm); I'd like to understand how the load balancers modify the requests.
I read the documentation on the available modes of operation, but I couldn't tell which one fits my situation. I tend to rule out transparent mode (which I have used on firewalls), and I don't really understand what --mode reverse:http://... does: I thought it was a way to forward everything to the given address, so I tried setting:
mitmweb:
  image: mitmproxy/mitmproxy
  tty: true
  ports:
    - "8080:8080"  # proxy
    - "8081:8081"  # web interface
  command: mitmweb --web-host 0.0.0.0 --no-web-open-browser -p 8080 --mode reverse:http://django:8000/
...
but mitmproxy complains that
403: To protect against DNS rebinding, mitmweb can only be accessed by
IP at the moment. (https://github.com/mitmproxy/mitmproxy/issues/3234)
Is this possible at all, and if so, how?
I misinterpreted the message: mitmproxy was not refusing to proxy. It was just hinting that I needed to access the web interface via an IP address rather than a DNS name.
The result is awesome. You get the traffic and can introspect the request/response cycle in a very useful way.
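To actually sit between the load balancer and the service, the only change needed on the balancer side is to point its upstream at mitmproxy instead of the service itself. A minimal nginx sketch, assuming the compose service names above (mitmweb in reverse mode forwards everything on to django:8000):

upstream app {
    server mitmweb:8080;    # was: server django:8000;
}
server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}

The web interface on port 8081 then shows every request the balancer sends upstream.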
We are upgrading our AKS cluster in order to use Standard SKU load balancers, which went GA recently. See this Microsoft Update Notification. Previously only Basic SKU load balancers were available, and they would not allow us to send a TCP reset when connections went stale. This led to a lot of creative workarounds to deal with stale connections in connection pools, for example.
During creation of an ingress I can configure the load balancer using annotations. For example, I can set the type to internal and the timeout settings via annotations. However, setting the TCP reset flag to true via annotations does not seem possible. With some digging I found a list of the annotations on this Go Walker page.
I have managed to create an ingress controller using the following YAML. Note the annotations.
controller:
  service:
    loadBalancerIP: 172.15.23.100
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout: "15"
I ran the following command:
helm install stable/nginx-ingress --namespace ingress -f dev-ingress.yaml --name dev-ingress --set controller.replicaCount=3
After a minute or so I can see the internal load balancer getting the specified IP address, and I can also see it from the command line; see below:
kubectl -n ingress get svc dev-ingress-nginx-ingress-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dev-ingress-nginx-ingress-controller LoadBalancer 172.15.24.11 172.15.23.100 80:30962/TCP,443:30364/TCP 24m app=nginx-ingress,component=controller,release=dev-ingress
However, the load-balancing rules are created with TCP reset set to false, which requires me to log into the console and change it manually. See the screenshot below:
I would really like to script this into the creation, as doing things via interfaces leads to snowflake deployments.
Something like the YAML below:
controller:
  service:
    loadBalancerIP: 172.15.23.100
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout: "15"
      service.beta.kubernetes.io/azure-load-balancer-tcp-reset: "true"
Does anyone know how I can configure this during service/ingress creation?
Update:
Based on the limitations documented in the TCP Reset setting for load balancers document, it appears that this is not supported from kubectl. However, that document also says the portal is not supported.
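In the meantime, the flag can at least be scripted with the Azure CLI instead of being clicked through the portal. A sketch with placeholder resource names (I believe az network lb rule update gained an --enable-tcp-reset flag along with the Standard SKU; verify with az network lb rule update --help):

# <rg>, <lb-name> and <rule-name> are placeholders for the node resource group,
# the load balancer, and the rule created for the ingress service
az network lb rule update \
  --resource-group <rg> \
  --lb-name <lb-name> \
  --name <rule-name> \
  --enable-tcp-reset true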
You can take a look at Cloud Provider for Azure. It provides an annotation to set TCP reset on the load-balancing rules, but it's only available for version 1.16 or later, and the latest version available on AKS is 1.15.
If you really want this, you can use aks-engine to achieve your purpose; aks-engine already supports Kubernetes 1.16. Remember to create the aks-engine cluster with the standard load balancer.
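For reference, once a cluster is on 1.16+, the values file would presumably grow one more annotation. A sketch based on the disable-tcp-reset annotation documented by Cloud Provider for Azure (there, TCP reset is enabled by default on Standard SKU rules and the annotation exists to turn it off; verify the name against your provider version):

controller:
  service:
    loadBalancerIP: 172.15.23.100
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout: "15"
      # "false" keeps TCP reset enabled; "true" would disable it
      service.beta.kubernetes.io/azure-load-balancer-disable-tcp-reset: "false"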
Seeing that this file does not have such an annotation, I would conclude this is not yet possible with annotations. You'd have to find some other way, or create a pull request against Kubernetes to support such an annotation.
I changed the Kibana port from 5601 to 80 in the kibana.yml file.
I restarted Kibana after the change, but it didn't work.
Any ideas?
Ports 0-1023 are the "well-known ports" and are reserved, so you need elevated privileges to bind to one of them.
Are you sure port 80 is vacant and not being used by another process?
Does Elasticsearch still work after the change you made to the Kibana yml? Kibana needs Elasticsearch up and running, so make sure you're pointing to the correct domain.
If you're getting any errors on the Kibana console, post them here so they can help resolve this.
Source: setting up kibana
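A quick way to check both points on a Linux host (the Kibana node path is an assumption for a package install; adjust to your setup):

# check whether anything is already listening on port 80
sudo ss -plnt 'sport = :80'
# let Kibana's bundled node binary bind privileged ports without running as root
sudo setcap 'cap_net_bind_service=+ep' /usr/share/kibana/node/bin/node

Alternatively, keep Kibana on 5601 and put a reverse proxy listening on port 80 in front of it.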
As Kibana is the web UI for Elasticsearch, it is better to make it highly available. After reading the docs and building a demo, I could not find a way to set up two Kibana instances simultaneously for a single Elasticsearch cluster.
After digging deeper into Kibana, I finally found that Kibana stores its data and its dashboard and search configuration in the backend ES. In this way Kibana acts just like a proxy, and ES serves as its database.
So the answer is yes: Kibana supports high availability through ES.
You could simply change the server.port value to another free port (e.g. 6602) in your kibana.yml, since 5601 is the default. Both instances then point to the same ES cluster, and you have two Kibana instances: one running on the default port and the other on port 6602.
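A sketch of the second instance's kibana.yml, assuming a local ES node (on older Kibana versions the last key is elasticsearch.url instead of elasticsearch.hosts):

server.port: 6602
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]   # same ES cluster as the first instance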