When the user selects the ‘All’ filter on our dashboards, most queries fail with a 502 Bad Gateway error in Grafana. If we refresh the page, the errors disappear and the dashboards work. We use nginx as a reverse proxy and suspect the problem is related to URI size or headers. One attempt was to increase the buffers: large_client_header_buffers 32 1024k. A second attempt was to change the InfluxDB datasource method from GET to POST. The errors have diminished, but they still happen constantly. Our stack is nginx + Grafana + InfluxDB.
When using ‘All’ nodes as the filter on our dashboards (the maximum amount of information), most of the queries return a failure (502 - Bad Gateway) in Grafana. We have Keycloak for authentication and nginx working as a reverse proxy in front of our Grafana server, and the problem seems to be linked to it: when accessing the Grafana server directly, through an SSH tunnel for example, we do not experience the failure.
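For reference, a rough sketch of the relevant nginx configuration; the large_client_header_buffers value is the one we tried, while the server block, upstream address and remaining buffer directives are illustrative assumptions rather than our exact config:

server {
    listen 443 ssl;
    server_name <my_domain>;

    # attempt to accommodate the very long query strings / headers
    # produced by the 'All' template variable
    large_client_header_buffers 32 1024k;

    location /grafana/ {
        proxy_pass       http://127.0.0.1:3000/;
        proxy_set_header Host $host;

        # buffers for large upstream responses from Grafana/InfluxDB
        proxy_buffer_size       64k;
        proxy_buffers           8 64k;
        proxy_busy_buffers_size 64k;
    }
}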
nginx log error example:
<my_ip> - - [22/Dec/2021:14:35:27 -0300] "POST /grafana/api/datasources/proxy/1/query?db=telegraf&epoch=ms HTTP/1.1" 502 3701 "https://<my_domain>/grafana/d/gQzec6oZk/compute-nodes-administrative-dashboard?orgId=1&refresh=1m" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" "-"
Below are screenshots of the error in Grafana and of the template variables we use across the dashboards.
I'm trying to trigger a new DAG run via the Airflow 2.0 REST API. If I am logged in to the Airflow webserver on the remote machine and I go to the Swagger documentation page to test the API, the call is successful. If I log out, or if the API call is sent through Postman or curl, then I get a 403 Forbidden message. The same 403 error is returned in curl or Postman whether I provide the webserver username and password or not.
curl -X POST --user "admin:blabla" "http://10.0.0.3:7863/api/v1/dags/tutorial_taskflow_api_etl/dagRuns" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"conf\":{},\"dag_run_id\":\"string5\"}"
{
  "detail": null,
  "status": 403,
  "title": "Forbidden",
  "type": "https://airflow.apache.org/docs/2.0.0/stable-rest-api-ref.html#section/Errors/PermissionDenied"
}
The API security has been changed to default instead of deny_all (auth_backend = airflow.api.auth.backend.default). Airflow was installed with pip on Ubuntu 18.04 (Bionic). DAGs run fine when triggered manually or on a schedule. The database backend is Postgres.
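For context, the relevant airflow.cfg section looks roughly like this (the file lives under $AIRFLOW_HOME; path assumed):

[api]
auth_backend = airflow.api.auth.backend.default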
I also tried copying the cookie details from Chrome into Postman to get past this issue, but it did not work.
Here is the log on the web server for the two calls mentioned above.
airflowWebserver_container | 10.0.0.4 - - [05/Jan/2021:06:35:33 +0000] "POST /api/v1/dags/tutorial_taskflow_api_etl/dagRuns HTTP/1.1" 403 170 "http://10.0.0.3:7863/api/v1/ui/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"
airflowWebserver_container | 10.0.0.4 - - [05/Jan/2021:06:35:07 +0000] "POST /api/v1/dags/tutorial_taskflow_api_etl/dagRuns HTTP/1.1" 409 251 "http://10.0.0.3:7863/api/v1/ui/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"
I am using basic_auth for Airflow v2.0. The AIRFLOW__API__AUTH_BACKEND environment variable should be set to airflow.api.auth.backend.basic_auth. You will have to restart the webserver container. Then you should be able to access all stable API endpoints using cURL commands with the --user option.
In Airflow 2.0 there seems to be a bug.
If you set this auth configuration in airflow.cfg, it doesn't work:
auth_backend = airflow.api.auth.backend.basic_auth
But setting this as an environment variable works
AIRFLOW__API__AUTH_BACKEND: "airflow.api.auth.backend.basic_auth"
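A minimal sketch of what that looks like for a plain (non-container) webserver, reusing the host, port and DAG from the question; adapt the paths and the restart step to your deployment:

export AIRFLOW__API__AUTH_BACKEND="airflow.api.auth.backend.basic_auth"
airflow webserver --port 7863
# the stable API then accepts HTTP basic auth:
curl -X POST --user "admin:blabla" \
  "http://10.0.0.3:7863/api/v1/dags/tutorial_taskflow_api_etl/dagRuns" \
  -H "Content-Type: application/json" -d '{"conf": {}, "dag_run_id": "example_run_1"}'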
@AmitSingh was correct. Setting security to default only works with the experimental API. I changed the relevant configuration in Airflow, restarted, and added 'experimental' to the API path. Please see https://airflow.apache.org/docs/apache-airflow/stable/rest-api-ref.html
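For reference, a sketch of the experimental call that works once auth_backend is set to default (the endpoint shape is taken from the experimental REST API docs; host, port and DAG are reused from the question):

curl -X POST "http://10.0.0.3:7863/api/experimental/dags/tutorial_taskflow_api_etl/dag_runs" \
  -H "Content-Type: application/json" -d '{"conf": {}}'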
Maybe also good to know:
You can only disable authentication for the experimental API, not for the stable REST API.
See: https://airflow.apache.org/docs/apache-airflow/stable/security/api.html#disable-authentication
A WordPress site I maintain keeps getting infected with .ico-extension PHP scripts and the links that invoke them. I periodically remove them, and I have now written a cron job that finds and removes them every minute (see the sketch below). I am trying to find the source of this hack. I have closed all the back doors I know of (FTP, DB users, etc.).
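A sketch of such a cron entry; the web root path is an assumption and the real job may differ:

# every minute: delete .ico files under the web root that actually contain PHP code
* * * * * find /var/www/html -type f -name '*.ico' -exec grep -q '<?php' {} \; -delete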
After reading similar questions and looking at https://perishablepress.com/protect-post-requests/, I now think this could be caused by malicious POST requests. Monitoring the access log, I see plenty of POST requests that fail with a 40x response, but I also see requests succeed that should not. In the example below, the first request fails, and a similar POST request succeeds with a 200 response a few hours later.
I tried duplicating a similar request from https://www.askapache.com/online-tools/http-headers-tool/, but that fails with a 40x response. Help me understand this behavior. Thanks.
The POST fails as expected:
146.185.253.165 - - [08/Dec/2019:04:49:13 -0700] "POST / HTTP/1.1" 403 134 "http://website.com/" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/534.24 (KHTML, like Gecko) RockMelt/0.9.58.494 Chrome/11.0.696.71 Safari/534.24" website.com
A few hours later the same POST succeeds:
146.185.253.165 - - [08/Dec/2019:08:55:39 -0700] "POST / HTTP/1.1" 200 33827 "http://website.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.861.0 Safari/535.2" website.com
146.185.253.167 - - [08/Dec/2019:08:55:42 -0700] "POST / HTTP/1.1" 200 33827 "http://website.com/" "Mozilla/5.0 (Windows NT 5.1)
I am running an application in Elixir Plug. When I run this API app on port 80, it drops some packets and responds with 400 Bad Request directly from Cowboy; nothing is even logged. When we debugged it, we found that some header values were being dropped by the time the request reached the Cowboy request handler.
We are running behind an AWS load balancer. When we run both on port 8080 everything is fine, but when we put the app on port 80 packets start dropping. Does anyone know a workaround for this?
We made a first request:
"POST /ver2/user/update_token HTTP/1.1\r\nhost: int.oktalk.com\r\nAccept: /\r\nAccept-Encoding: gzip, deflate\r\nAccept-Language: en-GB,en;q=0.8,en-US;q=0.6,it;q=0.4\r\nCache-Control: no-cache\r\nContent-Type: application/json\r\nOrigin: chrome-extension://fhbjgbiflinjbdggehcddcbncdddomop\r\nPostman-Token: 05f463a4-db55-6025-5cc1-f62b83db7c93\r\ntoken: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoxNH0.Ind--phmd5saXMjBVjgRKNcCEL60qZoCbHggu-iAqY8\r\nUser-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36\r\nX-Forwarded-For: 27.34.245.42\r\nX-Forwarded-Port: 80\r\nX-Forwarded-Proto: http\r\nContent-Length: 103\r\nConnection: keep-alive\r\n\r\n"
Response for the first request: 200 OK
We made the same API call again as a second request. What we saw is that the Content-Length of the previous request was 103, and the first 103 bytes are missing from the next request. I guess the system thinks those first 103 bytes belong to the previous request's body.
"e\r\nAccept-Language: en-GB,en;q=0.8,en-US;q=0.6,it;q=0.4\r\nCache-Control: no-cache\r\nContent-Type: application/json\r\nOrigin: chrome-extension://fhbjgbiflinjbdggehcddcbncdddomop\r\nPostman-Token: 0e52f1b6-120a-c321-2ba4-d6d20d5eb479\r\ntoken: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoxNH0.Ind--phmd5saXMjBVjgRKNcCEL60qZoCbHggu-iAqY8\r\nUser-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36\r\nX-Forwarded-For: 27.34.245.42\r\nX-Forwarded-Port: 80\r\nX-Forwarded-Proto: http\r\nContent-Length: 103\r\nConnection: keep-alive\r\n\r\n"
Response to this: 400 Bad Request, which I see because the first few bytes are missing.
We are using Elixir Plug and Cowboy.
For others who find this question (like me): make sure you're not ignoring any of the conn structs returned by the Plug.Conn functions.
This snag is outlined fully in this issue, along with a GIF illustrating how it goes wrong.
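A minimal sketch of the kind of mistake this describes (the module and handler names are invented, not the asker's code): every Plug.Conn function returns a new conn, and discarding the one returned by read_body/1 can leave the body unconsumed on a keep-alive socket, which matches the shifted-bytes 400 seen above.

defmodule MyApp.TokenPlug do
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    # Buggy: the updated conn returned by read_body/1 is thrown away.
    {:ok, _body, _ignored_conn} = read_body(conn)
    send_resp(conn, 200, "ok")

    # Correct: rebind conn so the consumed-body state is carried forward.
    # {:ok, _body, conn} = read_body(conn)
    # send_resp(conn, 200, "ok")
  end
end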
I frequently (10 per sec) receive requests to my WordPress website.
See my apache access log:
www.mydomain.de:80 dedicated.server - - [16/Oct/2016:21:56:26 +0200] "POST /xmlrpc.php HTTP/1.0" 403 477 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)"
How do I figure out which ip is trying to access my apache webserver?
And how do I block it?
Normally I see an IP address but this log only shows "dedicated.server".
Based on the mod_log_config docs, for the %h format string:
You might have the HostnameLookups directive set to On.
You might be defining them by name somewhere else.
I'd recommend using the %a format string to log the client IP address.
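A minimal sketch of what that could look like, assuming the stock "combined" LogFormat and a Debian-style APACHE_LOG_DIR variable (adjust to your own format and paths):

HostnameLookups Off
LogFormat "%a %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
CustomLog ${APACHE_LOG_DIR}/access.log combined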
To check whether a request comes from the CLI or from HTTP, the PHP function php_sapi_name() can be used; take a look here. I am trying to replicate that in the Apache conf file. The underlying idea is: if the request is coming from the CLI, serve a 'minimal info' page; if the request comes from a browser, redirect the user to a different location. Is this possible?
MY PSEUDO CODE:
IF (REQUEST_COMING_FROM_CLI) {
    ProxyPass / http://${IP_ADDR}:5000/
    ProxyPassReverse / http://${IP_ADDR}:5000/
} ELSE IF (REQUEST_COMING_FROM_WEB_BROWSERS) {
    ProxyPass / http://${IP_ADDR}:8585/welcome/
    ProxyPassReverse / http://${IP_ADDR}:8585/welcome/
}
Addition: cURL supports a host of different protocols, including HTTP, FTP and Telnet. Can Apache figure out whether the request is from the CLI or from a browser?
As far as I know, there is no way to detect the difference using Apache.
If a request from the command line is set up properly, Apache cannot tell the difference between the command line and a browser.
When you check it in PHP (using php_sapi_name(), as you suggested), it only tells you how PHP itself was invoked (CLI, Apache module, etc.), not where the HTTP request came from.
Using telnet on the command line, you can connect to Apache, set the required HTTP headers and send the request exactly as if you were using a browser (only, the browser sets the headers for you); see the sketch below.
So I do not think Apache can differentiate between the console and a browser.
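For illustration, a hand-typed request over telnet (the host name is a placeholder); to Apache this is indistinguishable from a browser request, because the User-Agent and other headers are whatever the sender chooses. Note the blank line that terminates the headers:

telnet www.example.com 80
GET / HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36
Accept: text/html
Connection: close
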
The only way to do this is to test the User-Agent sent in the header of the request, but this information can be easily changed.
By default, every PHP HTTP request looks like this to the Apache server:
192.168.1.15 - - [01/Oct/2008:21:52:43 +1300] "GET / HTTP/1.0" 200 5194 "-" "-"
This information can easily be changed to look like a browser, for example by using this:
ini_set('user_agent',
'Mozilla/5.0 (Windows; U; Windows NT 6.0; en-GB; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3');
The HTTP request will then look like this:
192.168.1.15 - - [01/Oct/2008:21:54:29 +1300] "GET / HTTP/1.0" 200 5193
"-" "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-GB; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3"
At this point Apache will think that the connection comes from Firefox 3.0.3 on Windows.
So there is no exact way to get this information.
You can use a BrowserMatch directive if the CLI requests are not spoofing a real browser in the User-Agent header. Otherwise, as everyone else has said, there is no way to tell the difference.
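A sketch of that idea expressed with mod_rewrite, so the User-Agent test can drive the proxy choice (the curl/Wget pattern, the ${IP_ADDR} variable and the backend ports are taken from the pseudo code above; this only works for clients that do not spoof a browser User-Agent):

RewriteEngine On
# CLI-style clients (curl, wget) -> backend on :5000
RewriteCond %{HTTP_USER_AGENT} ^(curl|Wget) [NC]
RewriteRule ^/(.*)$ http://${IP_ADDR}:5000/$1 [P,L]
# everything else -> the browser-facing backend on :8585
RewriteRule ^/(.*)$ http://${IP_ADDR}:8585/welcome/$1 [P,L]
ProxyPassReverse / http://${IP_ADDR}:5000/
ProxyPassReverse / http://${IP_ADDR}:8585/welcome/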