Cowboy drops some packets on port 80 - HTTP

I am running an application in Elixir Plug, and when I run this API app on port 80, it drops some packets and responds with 400 Bad Request directly from Cowboy; it is not even logging anything. When we debugged it, we found that some of the header values are being dropped by the time the request reaches the Cowboy request handler.
We are running under an AWS load balancer. When we run both on 8080, everything is perfect, but when we put it on 80, packets start dropping. Does anyone know a workaround for this?
We made a first request:
"POST /ver2/user/update_token HTTP/1.1\r\nhost: int.oktalk.com\r\nAccept: */*\r\nAccept-Encoding: gzip, deflate\r\nAccept-Language: en-GB,en;q=0.8,en-US;q=0.6,it;q=0.4\r\nCache-Control: no-cache\r\nContent-Type: application/json\r\nOrigin: chrome-extension://fhbjgbiflinjbdggehcddcbncdddomop\r\nPostman-Token: 05f463a4-db55-6025-5cc1-f62b83db7c93\r\ntoken: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoxNH0.Ind--phmd5saXMjBVjgRKNcCEL60qZoCbHggu-iAqY8\r\nUser-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36\r\nX-Forwarded-For: 27.34.245.42\r\nX-Forwarded-Port: 80\r\nX-Forwarded-Proto: http\r\nContent-Length: 103\r\nConnection: keep-alive\r\n\r\n"
Response for the first request: 200 OK
We made the same API call again as a second request. What we saw is that the Content-Length of the previous packet is 103 and the first 103 bytes are not seen in the next packet. I guess the system thinks the first 103 bytes belong to the previous packet itself.
"e\r\nAccept-Language: en-GB,en;q=0.8,en-US;q=0.6,it;q=0.4\r\nCache-Control: no-cache\r\nContent-Type: application/json\r\nOrigin: chrome-extension://fhbjgbiflinjbdggehcddcbncdddomop\r\nPostman-Token: 0e52f1b6-120a-c321-2ba4-d6d20d5eb479\r\ntoken: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoxNH0.Ind--phmd5saXMjBVjgRKNcCEL60qZoCbHggu-iAqY8\r\nUser-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36\r\nX-Forwarded-For: 27.34.245.42\r\nX-Forwarded-Port: 80\r\nX-Forwarded-Proto: http\r\nContent-Length: 103\r\nConnection: keep-alive\r\n\r\n"
Response to this: 400 Bad Request, which I see because the first few bytes are missing.
We are using Elixir.Plug and Cowboy.

For others that find this question (like me), make sure you're not ignoring any returned conn structs from the Plug.Conn functions.
This snag is outlined fully in this issue, along with a gif illustrating how this goes wrong.
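
An added example, not taken from the answer or from the Plug project, of the mistake the answer describes and its correction: Plug.Conn structs are immutable, so the record that a request body has been read exists only in the conn that read_body/2 returns, and replying on the old struct is consistent with the 103 body bytes being skipped a second time from the next request on the keep-alive connection. The module name, the size cap and the response texts are assumptions.

```elixir
defmodule Example.TokenBody do
  # Hypothetical module and names; added illustration, not the asker's code.
  import Plug.Conn

  # Assumed cap on how much body one call to read_body/2 may return.
  @max_body 1_000_000

  # BROKEN: read_body/2 returns an updated conn recording that the body was
  # consumed. Discarding it and replying on the original `conn` hands the
  # adapter a struct that still says "body unread".
  def broken(conn) do
    {:ok, body, _updated_conn} = read_body(conn)
    send_resp(conn, 200, "read #{byte_size(body)} bytes")
  end

  # FIXED: every Plug.Conn call rebinds `conn`, so the reply goes out on the
  # struct whose body state matches what was really taken off the socket.
  def fixed(conn) do
    case read_body(conn, length: @max_body) do
      {:ok, body, conn} ->
        conn
        |> put_resp_content_type("text/plain")
        |> send_resp(200, "read #{byte_size(body)} bytes")

      # Body is larger than the cap: refuse it instead of buffering more.
      {:more, _partial, conn} ->
        send_resp(conn, 413, "body too large")

      # Timeout or closed socket: no new conn is returned in this case.
      {:error, _reason} ->
        send_resp(conn, 400, "could not read body")
    end
  end
end
```

Behind a load balancer that reuses backend connections, the bytes swallowed this way can belong to another client's request, so the same review applies to every Plug.Conn call whose result is a conn, not only read_body/2.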

Related

nginx errors with very large headers

When the user selects the ‘All’ filter on our dashboards, most queries fail and we get this error: 502 - Bad Gateway in Grafana. If they refresh the page, the errors disappear and the dashboards work. We use nginx as a reverse proxy and imagine that the problem is linked to URI size or headers. We made an attempt to increase the buffers: large_client_header_buffers 32 1024k. A second attempt was to change the InfluxDB method from GET to POST. Errors have diminished, but they still happen constantly. Our configuration uses nginx + Grafana + InfluxDB.
When using All nodes as the filter on our dashboards (the maximum of possible information), most of the queries return a failure (502 - Bad Gateway) in Grafana. We have Keycloak for authentication and nginx working as a reverse proxy in front of our Grafana server, and somehow the problem is linked to it: when accessing the Grafana server directly, through an SSH tunnel for example, we do not experience the failure.
nginx log error example:
<my_ip> - - [22/Dec/2021:14:35:27 -0300] "POST /grafana/api/datasources/proxy/1/query?db=telegraf&epoch=ms HTTP/1.1" 502 3701 "https://<my_domain>/grafana/d/gQzec6oZk/compute-nodes-administrative-dashboard?orgId=1&refresh=1m" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" "-"
Below are screenshots of the error in Grafana and of the configuration variables:
[Image: variables we use in them as a whole]
[Image: error in Grafana]

WordPress site gets infected with malware, random POST requests from hackers return 200 results, trying to understand how this happens

A WordPress site I maintain gets infected with .ico extension PHP scripts and their invocation links. I periodically remove them. Now I have written a cron job to find and remove them every minute. I am trying to find the source of this hack. I have closed all the back doors as far as I know (FTP, DB users, etc.).
After reading similar questions and looking at https://perishablepress.com/protect-post-requests/, I now think this could be because of malware POST requests. Monitoring the access log, I see plenty of POST requests that fail with a 40X response. But I also see requests that succeed which should not. In the example below, the first request fails, and a similar POST request succeeds with a 200 response a few hours later.
I tried duplicating a similar request from https://www.askapache.com/online-tools/http-headers-tool/, but that fails with 40X response. Help me understand this behavior. Thanks.
POST fails as expected
146.185.253.165 - - [08/Dec/2019:04:49:13 -0700] "POST / HTTP/1.1" 403 134 "http://website.com/" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/534.24 (KHTML, like Gecko) RockMelt/0.9.58.494 Chrome/11.0.696.71 Safari/534.24" website.com
A few hours later the same POST succeeds
146.185.253.165 - - [08/Dec/2019:08:55:39 -0700] "POST / HTTP/1.1" 200 33827 "http://website.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.861.0 Safari/535.2" website.com
146.185.253.167 - - [08/Dec/2019:08:55:42 -0700] "POST / HTTP/1.1" 200 33827 "http://website.com/" "Mozilla/5.0 (Windows NT 5.1)

REST API - works in Chrome but curl did not work

I am using a web service API.
http://www.douban.com/j/app/radio/people?app_name=radio_desktop_win&version=100&user_id=&expire=&token=&sid=&h=&channel=1&type=n
Typing that address into Chrome returns the expected result (a JSON file containing song information), but when using curl it failed. (In both cases the response code is OK, but the response body is not correct in the latter case.)
Here are the request info dumped using the Chrome developer tool:
Request URL:http://www.douban.com/j/app/radio/people?app_name=radio_desktop_win&version=100&user_id=&expire=&token=&sid=&h=&channel=7&type=n
Request Method:GET
Status Code:200 OK
Request Headers
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding:gzip,deflate,sdch
Accept-Language:zh-CN,zh;q=0.8
Connection:keep-alive
Cookie:bid="lwaJyClu5Zg"
Host:www.douban.com
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36
Query String Parameters
app_name:radio_desktop_win
version:100
user_id:
expire:
token:
sid:
h:
channel:7
type:n
However, using that API with curl, i.e. curl http://www.douban.com/j/app/radio/people?app_name=radio_desktop_win&version=100&user_id=&expire=&token=&sid=&h=&channel=7&type=n will not return the expected result.
Even specifying exactly the same headers as those dumped from Chrome still failed.
curl -v -H "Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8" -H "Accept-Encoding:gzip,deflat,sdcn" -H "Accept-Language:zh-CN,zh;q=0.8" -H "Cache-Control:max-age=0" -H "Connection:keep-alive" -H "Host:www.douban.com" -A "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36" http://www.douban.com/j/app/radio/people?app_name=radio_desktop_win&version=100&user_id=&expire=&token=&sid=&h=&channel=7&type=n
Below is what is printed with -v from curl. It seems everything was identical to the request made by Chrome, but still the response body is not correct.
GET /j/app/radio/people?app_name=radio_desktop_win HTTP/1.1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding:gzip,deflat,sdcn
Accept-Language:zh-CN,zh;q=0.8
Cache-Control:max-age=0
Connection:keep-alive
Host:www.douban.com
Why did this happen? I appreciate your help.
You need to put quotes around that url in the shell. Otherwise the &s are going to cause trouble.
Another common problem: you may be using an HTTP proxy with Chrome. If so, you need to tell curl about this proxy as well. You can do so by setting the environment variable http_proxy.

Cannot upload image to wordpress (nginx+varnish+apache)

I'm running two servers.
One is a gateway running nginx for dispatching requests for different domains to different servers.
The other one is the server for my WordPress installation.
I'm using Varnish in front of Apache to do caching (only caching, no load balancing). I've turned off KeepAlive and set Timeout to 20 seconds for Apache.
Now I'm uploading an image of size 160KB and it fails, while my server configuration allows a maximum size of 20MB. After I submit the upload form in WordPress, I can see from the status line of my browser that the file is uploaded several times (mostly 2 or 3). When I use the asynch uploading plugin of WordPress, I can also see the progress bar growing from 0% to 100% and over and over again, until it fails.
When it fails, it gets stuck at the path /wp-admin/media-upload.php?inline=&upload-page-form= and Chrome says "Error 101 (net::ERR_CONNECTION_RESET): The connection was reset." I've tried Firefox, exactly the same.
I cannot see anything relevant in the error logs of Varnish and Apache. However, I do see multiple lines of the following log in the access log of nginx:
220.255.1.18 - - [01/Jan/2013:12:16:36 +0800] "POST /wp-admin/media-upload.php?inline=&upload-page-form= HTTP/1.1" 400 0 "http://MY-DOMAIN/wp-admin/media-new.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.101 Safari/537.11"
220.255.1.29 - - [01/Jan/2013:12:16:41 +0800] "POST /wp-admin/media-upload.php?inline=&upload-page-form= HTTP/1.1" 400 0 "http://MY-DOMAIN/wp-admin/media-new.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.101 Safari/537.11"
220.255.1.23 - - [01/Jan/2013:12:16:51 +0800] "POST /wp-admin/media-upload.php?inline=&upload-page-form= HTTP/1.1" 400 0 "http://MY-DOMAIN/wp-admin/media-new.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.101 Safari/537.11"
220.255.1.26 - - [01/Jan/2013:12:17:03 +0800] "POST /wp-admin/media-upload.php?inline=&upload-page-form= HTTP/1.1" 400 0 "http://MY-DOMAIN/wp-admin/media-new.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.101 Safari/537.11"
So what's the problem? How can I fix it?

CSS3 - Multiple backgrounds sometimes causes 404 errors

I'm using the CSS3 ability to apply multiple background images to an element. Currently, I have this code in my stylesheet:
body{background:url("images/emblem.png") top center no-repeat, url("images/background.png");background-color:#EAE6D9}
The code works in all browsers that support it. And those that don't default down to the background-color.
However, watching the access log files for the site, I'm noticing 404 errors pop up for, what looks to be, a malformed request based on this CSS initiative. The funny thing is, they are coming from someone using Firefox 5. I'm using Firefox 5 and I cannot get an error to show up in the log for my IP.
Here's the error line from the log:
10.21.7.246 - - [28/Jun/2011:12:02:01 -0500] "GET /templates/images/emblem.png%22),%20url(%22http://ulabs.illinoisstate.edu/templates/images/background.png HTTP/1.1" 404 1005 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:5.0) Gecko/20100101 Firefox/5.0"
I have a feeling the problem is coming from the fact that the " and the space are being URL encoded, but I'm definitely not doing that. And it doesn't happen all the time. Looking at requests from my IP address, the request is properly split up.
10.1.8.129 - - [28/Jun/2011:12:29:33 -0500] "GET /templates/images/background.png HTTP/1.1" 304 - "http://ulabs.illinoisstate.edu/templates/style.1308848695.php" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:5.0) Gecko/20100101 Firefox/5.0"
10.1.8.129 - - [28/Jun/2011:12:29:33 -0500] "GET /templates/images/emblem.png HTTP/1.1" 304 - "http://ulabs.illinoisstate.edu/templates/style.1308848695.php" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:5.0) Gecko/20100101 Firefox/5.0"
Has anyone experienced this behavior before? Or have any ideas on what I might try to resolve the issue?
We've discovered it's YSlow causing the error to be generated. When running YSlow, the error would appear in the log immediately for that IP address. Since this isn't really a problem, luckily there's nothing we need to fix on our end.
