How to send metrics to InfluxDB OSS 2 using Jolokia in the Telegraf config?

After running telegraf -debug with this Jolokia config:
[[inputs.jolokia2_agent]]
  urls = ["http://<other ip>:8080/jolokia-war-unsecured-1.6.2/"]

  [[inputs.jolokia2_agent.metric]]
    name  = "jr"
    mbean = "java.lang:type=Runtime"
    paths = ["Uptime"]
I get these errors:
[agent] Initializing plugins
2022-07-02T12:51:57Z D! [agent] Connecting outputs
2022-07-02T12:51:57Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2022-07-02T12:51:57Z D! [agent] Successfully connected to outputs.influxdb_v2
2022-07-02T12:51:57Z D! [agent] Starting service inputs
2022-07-02T12:52:07Z E! [outputs.influxdb_v2] When writing to [https://MYIP:8086]: Post "https://MYIP:8086/api/v2/write?bucket=monitoringdb&org=myorg": http: server gave HTTP response to HTTPS client
2022-07-02T12:52:07Z D! [outputs.influxdb_v2] Buffer fullness: 81 / 10000 metrics
2022-07-02T12:52:07Z E! [agent] Error writing to outputs.influxdb_v2: failed to send metrics to any configured server(s)

2022-07-02T12:52:07Z E! [outputs.influxdb_v2] When writing to [https://MYIP:8086]: Post "https://MYIP:8086/api/v2/write?bucket=monitoringdb&org=myorg": http: server gave HTTP response to HTTPS client
This error is coming from your InfluxDB output, not from Jolokia. Your client is speaking HTTPS, but the server answered with plain HTTP. In your config you probably specified a URL with https://, while the server is only serving http://.
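A sketch of a matching output section, assuming the server really is plain HTTP (the host, bucket, and org are taken from the log above; the token is a placeholder):

```toml
[[outputs.influxdb_v2]]
  # http://, not https://, to match what the server actually speaks
  urls = ["http://MYIP:8086"]
  token = "$INFLUX_TOKEN"
  organization = "myorg"
  bucket = "monitoringdb"
```

Alternatively, enable TLS on the InfluxDB server itself and keep the https:// URL.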

Related

GCP deployment with nginx - uwsgi - flask fails

I have a very simple Flask app deployed on GKE and exposed via a Google external load balancer, and I am getting random 502 responses from the backend service. (I added custom headers on both the backend service and nginx to identify the source; I can see the backend service's header but not nginx's.)
The setup is;
LB -> backend-service -> neg -> pod (nginx -> uwsgi) where pod is the application built using flask and deployed via uwsgi and nginx.
The scenario is to handle image uploads in a simple, secured way. The sender includes a token with the upload request.
My Flask app:
receives the request and validates the token against another service using "requests";
if the token is valid, proceeds to handle the image and returns 200;
if the token is not valid, stops and sends back a 401 response.
First, I got suspicious about the mix of 200s and 401s, so I reverted all responses to 200. After some of the expected responses, the server started responding 502 and kept doing so (some of the requests at the very beginning succeeded).
The nginx error log contains the lines below:
2023/02/08 18:22:29 [error] 10#10: *145 readv() failed (104: Connection reset by peer) while reading upstream, client: 35.191.17.139, server: _, request: "POST /api/v1/imageUpload/image HTTP/1.1", upstream: "uwsgi://127.0.0.1:21270", host: "example-host.com"
My uwsgi.ini file is as below:
[uwsgi]
socket = 127.0.0.1:21270
master
processes = 8
threads = 1
buffer-size = 32768
stats = 127.0.0.1:21290
log-maxsize = 104857600
logdate
log-reopen
log-x-forwarded-for
uid = image_processor
gid = image_processor
need-app
chdir = /server/
wsgi-file = image_processor_application.py
callable = app
py-auto-reload = 1
pidfile = /tmp/uwsgi-imgproc-py.pid
My nginx.conf is as below:
location ~ ^/api/ {
client_max_body_size 15M;
include uwsgi_params;
uwsgi_pass 127.0.0.1:21270;
}
Lastly, my app has a healthcheck method with a simple JSON response. It does no extra work and simply returns; as noted above, it never fails.
Edit: my nginx access logs in the pod show the response as 401 while the client receives 502.
For those who face the same issue: the problem was reading the POST data (or rather, not reading it).
nginx expects the proxied app (uwsgi in our case) to consume the POST data, but in some cases my logic returned a response without reading it.
Setting uwsgi's post-buffering solved the issue.
post-buffering = %(16 * 1024 * 1024)
This answer led me to the solution:
https://stackoverflow.com/a/26765936/631965
Nginx uwsgi (104: Connection reset by peer) while reading response header from upstream
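The failure mode the answer describes can be sketched as a bare WSGI app (hypothetical names, not the poster's actual code): an early-exit response that never reads wsgi.input leaves unread POST bytes on the socket, which nginx surfaces as "Connection reset by peer". Draining the body before returning avoids it:

```python
import io


def upload_app(environ, start_response):
    """Hypothetical upload handler illustrating the fix: always drain the
    request body, even when rejecting the request early with a 401."""
    token = environ.get("HTTP_X_UPLOAD_TOKEN", "")

    # Drain the POST body unconditionally. Skipping this on the 401 path
    # is what leaves unread bytes behind and upsets the nginx <-> uwsgi link.
    body = environ["wsgi.input"].read()

    if token != "expected-token":
        start_response("401 Unauthorized", [("Content-Type", "text/plain")])
        return [b"invalid token"]

    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"uploaded %d bytes" % len(body)]
```

uwsgi's post-buffering achieves the same effect at the server level by buffering the body before the app runs, which is why the setting above fixed the issue without code changes.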

Traefik as a simple Http Reverse Proxy not working

I am using Traefik as an HTTP reverse proxy. I have two Spring Boot servers, both working properly on ports 8081 and 8082.
The Traefik web UI is visible on port 8080.
What I want is to proxy http://localhost:7070/ to http://localhost:8081/ or http://localhost:8082/.
traefik.toml config file
loglevel = "INFO"
defaultEntryPoints = ["http"]

[entryPoints]
  [entryPoints.http]
  address = ":7070"

[file]

[frontends]
  [frontends.frontend1]
  backend = "backend1"
    [frontends.frontend1.routes.test_1]
    rule = "Host: localhost"

[backends]
  [backends.backend1]
    [backends.backend1.LoadBalancer]
    method = "drr"
    [backends.backend1.healthcheck]
    path = "/app/health"
    interval = "60s"
    [backends.backend1.servers.server1]
    url = "http://127.0.0.1:8081"
    weight = 1
    [backends.backend1.servers.server2]
    url = "http://127.0.0.1:8082"
    weight = 1

[api]
[ping]
[docker]
console output
INFO[2018-03-20T18:38:58+05:30] Using TOML configuration file /home/kasun/apps/temp/traefik.toml
INFO[2018-03-20T18:38:58+05:30] Traefik version v1.5.4 built on 2018-03-15_01:33:52PM
INFO[2018-03-20T18:38:58+05:30] Stats collection is disabled.
Help us improve Traefik by turning this feature on :)
More details on https://docs.traefik.io/basics/#collected-data
INFO[2018-03-20T18:38:58+05:30] Preparing server http &{Network: Address::7070 TLS:<nil> Redirect:<nil> Auth:<nil> WhitelistSourceRange:[] Compress:false ProxyProtocol:<nil> ForwardedHeaders:0xc4202a4520} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s
INFO[2018-03-20T18:38:58+05:30] Preparing server traefik &{Network: Address::8080 TLS:<nil> Redirect:<nil> Auth:<nil> WhitelistSourceRange:[] Compress:false ProxyProtocol:<nil> ForwardedHeaders:0xc4202a4540} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s
INFO[2018-03-20T18:38:58+05:30] Starting server on :7070
INFO[2018-03-20T18:38:58+05:30] Starting provider *docker.Provider {"Watch":true,"Filename":"","Constraints":null,"Trace":false,"DebugLogGeneratedTemplate":false,"Endpoint":"unix:///var/run/docker.sock","Domain":"","TLS":null,"ExposedByDefault":true,"UseBindPortIP":false,"SwarmMode":false}
INFO[2018-03-20T18:38:58+05:30] Starting server on :8080
INFO[2018-03-20T18:38:58+05:30] Starting provider *file.Provider {"Watch":true,"Filename":"/home/kasun/apps/temp/traefik.toml","Constraints":null,"Trace":false,"DebugLogGeneratedTemplate":false,"Directory":""}
INFO[2018-03-20T18:38:58+05:30] Server configuration reloaded on :7070
INFO[2018-03-20T18:38:58+05:30] Server configuration reloaded on :8080
INFO[2018-03-20T18:38:58+05:30] Server configuration reloaded on :7070
INFO[2018-03-20T18:38:58+05:30] Server configuration reloaded on :8080
WARN[2018-03-20T18:38:58+05:30] HealthCheck has failed [http://127.0.0.1:8081]: Remove from server list
WARN[2018-03-20T18:38:58+05:30] HealthCheck has failed [http://127.0.0.1:8082]: Remove from server list
WARN[2018-03-20T18:38:58+05:30] HealthCheck has failed [http://127.0.0.1:8082]: Remove from server list
WARN[2018-03-20T18:38:58+05:30] HealthCheck has failed [http://127.0.0.1:8081]: Remove from server list
When I load http://localhost:7070/ from the browser it gives
Service Unavailable
When I go to the Traefik health dashboard it displays the failures (screenshot omitted).
Can anybody tell me what I am doing wrong here? I went through a few articles but was unable to find the correct answer.
I suppose you are running Træfik in a container. Inside the container, 127.0.0.1 refers to the container itself, not to your local machine, so the health checks against http://127.0.0.1:8081 and http://127.0.0.1:8082 fail and both servers get removed from the pool.
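A sketch of the fix under that assumption: point the backend servers at an address reachable from inside the container, e.g. the Docker host gateway (172.17.0.1 on the default bridge network, used here as an illustrative value; host.docker.internal works on Docker for Mac/Windows) or the host's LAN IP:

```toml
    [backends.backend1.servers.server1]
    # host gateway instead of 127.0.0.1 (adjust to your network)
    url = "http://172.17.0.1:8081"
    weight = 1
    [backends.backend1.servers.server2]
    url = "http://172.17.0.1:8082"
    weight = 1
```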

WSO2 API Manager Custom Domain error

I have configured my WSO2 instance with a custom host name by setting:
<HostName>secu.helomyl.in</HostName>
<!--
Host name to be used for the Carbon management console
-->
<MgtHostName>secu.helomyl.in</MgtHostName>
It starts, and I can access the URL and reach WSO2, but the error below appears in the logs. Can you please help?
[2017-02-17 14:46:32,513] INFO - QpidServiceComponent Successfully connected to AMQP server on port 5673
[2017-02-17 14:46:32,514] WARN - QpidServiceComponent MQTT Transport is disabled as per configuration.
[2017-02-17 14:46:32,514] INFO - QpidServiceComponent WSO2 Message Broker is started.
[2017-02-17 14:46:32,533] WARN - PropertiesFileInitialContextFactory Unable to create factory:Illegal character in query between indicies 66 and 1
amqp://admin:admin#clientid/carbon?brokerlist='tcp://15.100.133.77 :5673'
^
[2017-02-17 14:46:33,044] INFO - PassThroughHttpSSLListener Starting Pass-through HTTPS Listener...
[2017-02-17 14:46:33,047] INFO - PassThroughListeningIOReactorManager Pass-through HTTPS Listener started on 0.0.0.0
Check api-manager.xml in the wso2am-2.0.0/repository/conf location. There is a space in the configuration below, and that causes the issue:
tcp://15.100.133.77 :5673
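For reference, the corrected broker list entry would have no space between the IP and the port (the credentials and IP are the ones from the question's log; the log renders the @ separator as #):

```
amqp://admin:admin@clientid/carbon?brokerlist='tcp://15.100.133.77:5673'
```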

Kibana - socket hang up error

I am running Kibana behind an IIS reverse proxy server and getting the following error:
Courier Fetch Error: unhandled courier request error: socket hang up
I am on Version: 4.2.2, Build: 9177.
I get this error only when I use the proxy server, which I need in order to restrict access to Kibana. I am not sure what is causing this or how to fix it.
Error: unhandled courier request error: socket hang up
at handleError (http://kibana-server/bundles/kibana.bundle.js:70047:23)
at DocRequest.AbstractReqProvider.AbstractReq.handleFailure (http://kibana-server/bundles/kibana.bundle.js:69967:15)
at http://kibana-server/bundles/kibana.bundle.js:69861:18
at Array.forEach (native)
at http://kibana-server/bundles/kibana.bundle.js:69859:19
at wrappedErrback (http://kibana-server/bundles/commons.bundle.js:39286:79)
at http://kibana-server/bundles/commons.bundle.js:39419:77
at Scope.$eval (http://kibana-server/bundles/commons.bundle.js:40406:29)
at Scope.$digest (http://kibana-server/bundles/commons.bundle.js:40218:32)
at Scope.$apply (http://kibana-server/bundles/commons.bundle.js:40510:25)
If you have enabled Integrated Windows Authentication in your IIS, the Kibana server cannot process the request because the HTTP Authorization header is too large (group memberships are stored in the PAC field of the Kerberos ticket).
We had the same problem with an Apache reverse proxy in front of Kibana. The solution is to unset the Authorization header after Kerberos/NTLM authentication is done and before sending the proxy request to Kibana.
Configuration for Apache:
RequestHeader unset Authorization
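A possible IIS-side equivalent, assuming the URL Rewrite and ARR modules are installed (the rule name and Kibana port are illustrative; HTTP_AUTHORIZATION must first be added to URL Rewrite's allowed server variables):

```xml
<!-- Sketch of an inbound rewrite rule that blanks the Authorization
     header before proxying to Kibana -->
<rule name="Proxy to Kibana" stopProcessing="true">
  <match url="(.*)" />
  <serverVariables>
    <set name="HTTP_AUTHORIZATION" value="" />
  </serverVariables>
  <action type="Rewrite" url="http://localhost:5601/{R:1}" />
</rule>
```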
Try removing http.cors and http.compression as noted in https://github.com/elastic/kibana/issues/6719

GeoServer times out (504 Gateway Time-Out) when accessed from OpenLayers via nginx webserver

I have developed an OpenLayers web app that uses GeoServer. I am using nginx as my webserver with proxy_pass set up for GeoServer. Everything works as expected when I use "localhost", but when I switch to my IP address I get a 504 Gateway Time-Out error for
http://98.153.141.207/geoserver/cite/wfs.
I can access GeoServer at
http://98.153.141.207/geoserver/web
via a browser without problem so it would appear the proxy continues to work as expected.
The GeoServer log shows this when the problem occurs:
Request: describeFeatureType
service = WFS
version = 1.1.0
baseUrl = http://98.153.141.207:80/geoserver/
typeName[0] = {http://www.opengeospatial.net/cite}MyLayer
outputFormat = text/xml; subtype=gml/3.1.1
Then after a minute, I get the 504 Gateway Time-Out in my JavaScript console and this shows up in the GeoServer log:
09 May 06:02:15 WARN [geotools.xml] - Error parsing: http://98.153.141.207/geoserver/wfs/DescribeFeatureType?version=1.1.0&typename=cite:MyLayer
I have tried this supposed problem URL in a browser and it works fine.
The nginx error log contains this:
2013/05/09 06:02:15 [error] 420#3844: *54 upstream timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while reading response header from upstream, client: 98.153.141.207, server: localhost, request: "GET /geoserver/wfs/DescribeFeatureType?version=1.1.0&typename=cite:MyLayer HTTP/1.1", upstream: "http://127.0.0.1:8080/geoserver/wfs/DescribeFeatureType?version=1.1.0&typename=cite:MyLayer", host: "98.153.141.207"
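If the upstream request legitimately needs more than nginx's default 60-second proxy timeout, the limit can be raised. A sketch of the relevant directives (the location path and timeout values are illustrative, not taken from the question, and this addresses only the timeout symptom, not its root cause):

```nginx
location /geoserver {
    proxy_pass http://127.0.0.1:8080/geoserver;
    # raise the per-request timeouts beyond the 60s default
    proxy_connect_timeout 60s;
    proxy_read_timeout    300s;
    proxy_send_timeout    300s;
}
```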
Further investigation reveals that this problem seems to be restrict to WFS layers only. The WMS layers work fine. Here is the declaration of my WFS layer that fails:
myLayer = new OpenLayers.Layer.Vector("MyLayer", {
    strategies: [new OpenLayers.Strategy.BBOX(), saveStrategy],
    projection: "EPSG:2276",
    protocol: new OpenLayers.Protocol.WFS({
        version: "1.1.0",
        url: "http://" + hostip + "/geoserver/cite/wfs",
        featureNS: "http://www.opengeospatial.net/cite",
        srsName: "EPSG:2276",
        featureType: "MyLayer",
        geometryName: "Poly",
        schema: "http://" + hostip + "/geoserver/wfs/DescribeFeatureType?version=1.1.0&typename=cite:MyLayer"
    })
});
Any help would be appreciated. Thanks.
I managed to get this working by removing the "schema" property from the OpenLayers.Protocol.WFS of my layer. Can anyone explain why this would be the problem?