In my environment, I use Perlbal to redirect requests to nginx. If verify_backend is on, Perlbal sends an "OPTIONS *" request to nginx, but nginx treats it as a bad request.
According to RFC2616:
If the Request-URI is an asterisk ("*"), the OPTIONS request is intended to apply to the server in general rather than to a specific resource. Since a server's communication options typically depend on the resource, the "*" request is only useful as a "ping" or "no-op" type of method; it does nothing beyond allowing the client to test the capabilities of the server. For example, this can be used to test a proxy for HTTP/1.1 compliance (or lack thereof).
I think perlbal is trying to send this kind of request, but nginx can't handle this by default.
When I send an "OPTIONS * HTTP/1.0" request, I always get an HTTP 400 Bad Request:
127.0.0.1 - - [18/Feb/2013:03:55:47 +0000] "OPTIONS * HTTP/1.0" 400 172 "-" "-" "-"
but a plain "OPTIONS / HTTP/1.0" request, without the asterisk, works:
127.0.0.1 - - [18/Feb/2013:04:03:56 +0000] "OPTIONS / HTTP/1.0" 200 0 "-" "-" "-"
How can I configure nginx to respond with HTTP 200 rather than HTTP 400?
I know it's overkill, but one solution is to put HAProxy in front of nginx just to capture that OPTIONS request and build your own response in HAProxy. In nginx itself you can catch OPTIONS requests like this:
location / {
    if ($request_method = OPTIONS) {
        add_header Content-Length 0;
        add_header Content-Type text/plain;
        return 200;
    }
}
The only way I found to change the behaviour in this case was to remap 400 responses in general:
error_page 400 =200 /empty_reply.html;
You could just send empty responses to everything you cannot handle.
For whoever wants to try solving this another way, you can simulate these requests with:
curl -X OPTIONS $yourserverip --request-target "*" --http1.1
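If your curl is too old for --request-target (it was added in curl 7.55.0), the request can also be composed by hand. A sketch using a raw socket (the host and port here are placeholders, not anything from the setup above):

```python
import socket

def build_options_request(host):
    """Compose the raw "OPTIONS * HTTP/1.0" ping request by hand."""
    return (
        "OPTIONS * HTTP/1.0\r\n"
        f"Host: {host}\r\n"
        "\r\n"
    ).encode("ascii")

def send_raw(host, port, request, timeout=5):
    """Send the raw request bytes and return the status line of the reply."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(request)
        reply = sock.recv(4096)
    return reply.split(b"\r\n", 1)[0].decode("ascii", "replace")

# send_raw("127.0.0.1", 80, build_options_request("127.0.0.1"))
# should reproduce the 400 logged above when pointed at a stock nginx.
```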
Related
I've been facing some issues with nginx and PUT redirects:
Let's say I have an HTTP service sitting behind an nginx server (assume HTTP 1.1)
The client does a PUT /my/api with Expect: 100-continue.
My service does not send a 100 Continue, but instead replies with a 307 redirect to another endpoint (in this case, S3).
However, for some unknown reason nginx sends a 100 Continue before serving the redirect, so the client proceeds to upload the whole body to nginx before the redirect is served. This causes the client to effectively transfer the body twice, which isn't great for multi-gigabyte uploads.
I am wondering if there is a way to:
Prevent nginx from sending a 100 Continue unless the service actually sends one.
Allow requests with arbitrarily large Content-Length without having to set client_max_body_size to a huge value (to avoid 413 Request Entity Too Large).
Since my service only ever sends redirects and never a 100 Continue, the request body should never have to reach nginx. Having to set client_max_body_size and wait for nginx to buffer the whole body just to serve a redirect is quite suboptimal.
I've been able to do this with Apache, but not with nginx. Apache used to have the same behavior before it was fixed: https://bz.apache.org/bugzilla/show_bug.cgi?id=60330 - I'm wondering if nginx has the same issue.
Any pointers appreciated :)
EDIT 1: Here's a sample setup to reproduce the issue:
An nginx listening on port 80, forwarding to localhost on port 9999
A simple HTTP server listening on port 9999, that always returns redirects on PUTs
nginx.conf
worker_rlimit_nofile 261120;
worker_shutdown_timeout 10s;

events {
    multi_accept on;
    worker_connections 16384;
    use epoll;
}

http {
    server {
        listen 80;
        server_name frontend;
        keepalive_timeout 75s;
        keepalive_requests 100;
        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://127.0.0.1:9999/;
        }
    }
}
I'm running the above with
docker run --rm --name nginx --net=host -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro nginx:1.21.1
A simple Python 3 HTTP server:
#!/usr/bin/env python3
from http.server import HTTPServer, BaseHTTPRequestHandler

class Redirect(BaseHTTPRequestHandler):
    def do_PUT(self):
        self.send_response(307)
        self.send_header('Location', 'https://s3.amazonaws.com/test')
        self.end_headers()

HTTPServer(("", 9999), Redirect).serve_forever()
Test results:
Uploading directly to the python server works as expected. The python server does not send a 100-continue on PUTs - it will directly send a 307 redirect before seeing the body.
$ curl -sv -L -X PUT -T /some/very/large/file 127.0.0.1:9999/test
> PUT /test HTTP/1.1
> Host: 127.0.0.1:9999
> User-Agent: curl/7.74.0
> Accept: */*
> Content-Length: 531202949
> Expect: 100-continue
>
* Mark bundle as not supporting multiuse
* HTTP 1.0, assume close after body
< HTTP/1.0 307 Temporary Redirect
< Server: BaseHTTP/0.6 Python/3.9.2
< Date: Thu, 15 Jul 2021 10:16:44 GMT
< Location: https://s3.amazonaws.com/test
<
* Closing connection 0
* Issue another request to this URL: 'https://s3.amazonaws.com/test'
* Trying 52.216.129.157:443...
* Connected to s3.amazonaws.com (52.216.129.157) port 443 (#1)
> PUT /test HTTP/1.0
> Host: s3.amazonaws.com
> User-Agent: curl/7.74.0
> Accept: */*
> Content-Length: 531202949
>
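The direct-to-backend behaviour can also be scripted without curl. A sketch that starts the Redirect handler above on an ephemeral port and sends only the PUT headers, never the body (the ephemeral port and raw-socket client are just for the demo):

```python
import socket
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

class Redirect(BaseHTTPRequestHandler):
    def do_PUT(self):
        # Same handler as above: redirect before reading any body byte.
        self.send_response(307)
        self.send_header('Location', 'https://s3.amazonaws.com/test')
        self.end_headers()

server = HTTPServer(("127.0.0.1", 0), Redirect)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Send only the request line and headers, then wait for the reply.
with socket.create_connection(("127.0.0.1", port)) as sock:
    sock.sendall((
        "PUT /test HTTP/1.1\r\n"
        f"Host: 127.0.0.1:{port}\r\n"
        "Content-Length: 531202949\r\n"
        "Expect: 100-continue\r\n"
        "\r\n"
    ).encode("ascii"))
    status = sock.recv(4096).split(b"\r\n", 1)[0]

server.shutdown()
# status holds the very first reply line: the 307 redirect arrives
# without any interim 100 Continue, even though no body was ever sent.
print(status.decode("ascii"))
```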
Doing the same thing through nginx fails with 413 Request Entity Too Large - even though the body should never have to go through nginx.
After adding client_max_body_size 1G; to the config the result is different, but nginx still tries to buffer the whole body:
$ curl -sv -L -X PUT -T /some/very/large/file 127.0.0.1:80/test
* Trying 127.0.0.1:80...
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> PUT /test HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.74.0
> Accept: */*
> Content-Length: 531202949
> Expect: 100-continue
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 100 Continue
} [65536 bytes data]
* We are completely uploaded and fine
* Mark bundle as not supporting multiuse
< HTTP/1.1 502 Bad Gateway
< Server: nginx/1.21.1
< Date: Thu, 15 Jul 2021 10:22:08 GMT
< Content-Type: text/html
< Content-Length: 157
< Connection: keep-alive
<
{ [157 bytes data]
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.21.1</center>
</body>
</html>
Notice how nginx sends an HTTP/1.1 100 Continue of its own, even though the upstream never did.
With this simple python server, the request subsequently fails because the python server closes the connection right after serving the redirect, which causes nginx to serve the 502 due to a broken pipe:
127.0.0.1 - - [15/Jul/2021:10:22:08 +0000] "PUT /test HTTP/1.1" 502 182 "-" "curl/7.74.0"
2021/07/15 10:22:08 [error] 31#31: *1 writev() failed (32: Broken pipe) while sending request to upstream, client: 127.0.0.1, server: frontend, request: "PUT /test HTTP/1.1", upstream: "http://127.0.0.1:9999/test", host: "127.0.0.1"
So as far as I can see, this looks exactly like the following Apache issue: https://bz.apache.org/bugzilla/show_bug.cgi?id=60330 (which is addressed in newer versions). I am not sure how to work around it in nginx.
I have an Angular 2 app that is using ng2-file-upload to upload files to a server running Nginx. Nginx is definitely sending a 413 when the file size is too large but the browsers (Chrome and Safari) don't seem to be catching it / interpreting it.
Chrome console error:
XMLHttpRequest cannot load <url>. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin '<url>' is therefore not allowed access. The response had HTTP status code 413.
Safari console error
XMLHttpRequest cannot load <url>. Origin <url> is not allowed by Access-Control-Allow-Origin.
Nginx config
server {
    listen 80;
    server_name <url>;
    access_log /var/log/nginx/access.log main;
    client_max_body_size 4m;
    location / {
        proxy_pass http://<ip address>:3009;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Nginx access log
<ip address> - - [11/Oct/2016:17:28:26 +0100] "OPTIONS /properties/57fbab6087f787a80407c3b4/floors HTTP/1.1" 200 4 "<url>" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36" "-"
<ip address> - - [11/Oct/2016:17:28:36 +0100] "POST /properties/57fbab6087f787a80407c3b4/floors HTTP/1.1" 413 601 "<url>" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36" "-"
Nginx error log
2016/10/11 17:28:26 [error] 30847#0: *1489 client intended to send too large body: 34865919 bytes, client: <ip address>, server: <server>, request: "POST /properties/57fbab6087f787a80407c3b4/floors HTTP/1.1", host: "<host>", referrer: "<url>"
When calling the ng2-file-upload error handling method the response code is 0 and headers are an empty object.
Any help would be much appreciated!
Seems like an old question, but even sadder that in 2019 it's still the case that Firefox and Chrome don't handle the 413 as you would expect: they continue to process the upload despite nginx sending a 413.
Good old Safari (words I don't get to utter often) appears to be the only modern browser that does what you would expect, and if you send a custom 413 error page it handles it.
In regard to this question, you can use Angular to get the size of the file and have a simple endpoint that verifies whether it's too big before you send the actual file.
Doing that check in JS is the best option.
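A minimal sketch of such a pre-check endpoint, using Python's stdlib http.server (the X-File-Size header, the /check-size path, and the 4 MB limit mirroring the nginx config above are all made-up names for illustration; a real app would use its own framework):

```python
from http.server import HTTPServer, BaseHTTPRequestHandler

MAX_BYTES = 4 * 1024 * 1024   # mirror nginx's client_max_body_size 4m

class SizeCheck(BaseHTTPRequestHandler):
    def do_POST(self):
        # The client reports the intended upload size up front,
        # before transferring the actual file.
        try:
            size = int(self.headers.get("X-File-Size", "-1"))
        except ValueError:
            size = -1
        # 200: go ahead and upload; 413: it would be rejected anyway.
        self.send_response(200 if 0 <= size <= MAX_BYTES else 413)
        self.send_header("Content-Length", "0")
        self.end_headers()

# To serve it: HTTPServer(("", 8081), SizeCheck).serve_forever()
```

The JS side then does one cheap POST with the file size and only starts the real upload on a 200.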
A similar question was also posted here.
As pointed out by the accepted answer:
The problem is that most HTTP clients don't read the response until
they've sent the entire request body. If you're dealing with web browsers
you're probably out of luck here.
I recently tried with the latest version of Safari (v15.6 - 17613.3.9.1.5) and it does handle the error as I expected: it aborts the upload right after receiving the 413 code from the server.
In this case, I agree with @AppHandwerker's answer that we should validate the file size on the client side before starting the upload.
In nginx, to drop a connection I can return 444. However, there is a problem with that, IMO: it seems that 444 doesn't silently drop the connection but actually closes it gracefully, and as a result the tools all these spammers use will rapidly retry the request:
149.56.28.239 - - [22/Sep/2016:20:33:18 +0200] "PROPFIND /webdav/ HTTP/1.1" 444 0 "-" "WEBDAV Client"
149.56.28.239 - - [22/Sep/2016:20:33:18 +0200] "PROPFIND /webdav/ HTTP/1.1" 444 0 "-" "WEBDAV Client"
Is there a way to abort the TCP connection ungracefully (as if my server were suddenly unplugged from the net) so that on the requester's end it would keep waiting? Are there any drawbacks or problems with that, and is it possible with nginx?
To drop requests without a Host header in nginx, you can use the following config:
server {
listen 80;
return 444;
}
Is there a way to handle only some of these requests, for example when the requested URL matches some regex?
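On the first half of the question - aborting the connection instead of closing it - what you describe corresponds to sending a TCP RST instead of a FIN. I'm not aware of an nginx directive that does this for 444 (reset_timedout_connection only covers timed-out connections), but at the socket level it is done by closing with SO_LINGER set to a zero timeout; a sketch in Python:

```python
import socket
import struct

def close_with_rst(sock):
    """Abort the connection: closing with SO_LINGER {on, 0s} sends a
    TCP RST instead of the normal FIN handshake."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))   # l_onoff=1, l_linger=0
    sock.close()

# Demo on a loopback pair: the peer of an aborted connection sees a
# connection reset instead of an orderly end-of-stream.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
client = socket.create_connection(server.getsockname())
conn, _ = server.accept()
close_with_rst(conn)                  # RST goes out here
try:
    client.recv(1024)
    outcome = "orderly close"         # an empty read would mean FIN
except ConnectionResetError:
    outcome = "reset"
client.close()
server.close()
print(outcome)
```

Note that even an RST won't make the requester "continue waiting"; for that the packets would have to be silently dropped, which is a firewall job (e.g. an iptables DROP rule), not something nginx can do by itself.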
I'm trying to specify a custom 404 page and preserve the URL, however using the below gives me the error nginx: [emerg] "proxy_pass" cannot have URI part in location given by regular expression, or inside named location, or inside "if" statement, or inside "limit_except" block in...
location / {
    error_page 404 @404;
}
location @404 {
    proxy_pass https://example.com/404.html;
}
Does anyone know a solution to this?
I have tested your config on nginx 1.6.2. The error message complains about the URL in proxy_pass having path specified. Specifying only https://server would be OK, but doesn't solve your issue.
Named locations (like @fallback) were introduced partly to avoid having to use non-existing locations (which could later come into existence and cause issues). But we can also use a location which should never exist on the server itself, for example:
location / { error_page 404 /xx-404-xx; }
location /xx-404-xx { proxy_pass https://example.com/404.html; }
Redirecting to a relative URL causes only an internal nginx redirect, which does not change the URL the browser sees.
EDIT: Here is the test result:
kenny:/etc/nginx/sites-enabled:# wget -S server.com/abc/abc
HTTP request sent, awaiting response...
HTTP/1.1 404 Not Found
Server: nginx/1.6.2
Date: Thu, 16 Jul 2015 07:01:38 GMT
Content-Type: text/html
Content-Length: 5
Connection: keep-alive
Last-Modified: Thu, 16 Jul 2015 07:00:59 GMT
ETag: "5037762-5-51af8a241434b"
Accept-Ranges: bytes
2015-07-16 09:01:38 ERROR 404: Not Found.
From apache access log:
1.2.3.4 - - [16/Jul/2015:09:01:38 +0200] "GET /test.html HTTP/1.0" 200 5 "-" "Wget/1.13.4 (linux-gnu)"
In nginx I have a proxy_pass to an html file on an Apache webserver containing just "test\n". As you can see, nginx fetched it, passed along the headers from Apache (Last-Modified, ETag) as well as Content-Length: 5, so it did receive the html file from Apache; wget simply doesn't save the content of 404 errors. Also, many browsers don't display 404 pages smaller than 1 kB by default (they show their own error page instead). So either make your 404 page bigger, or configure nginx to serve it as normal html with a "200" result code (error_page 404 =200 /xx).
I have no idea why you are getting external redirects with the same config. Try it with wget to see exactly which headers nginx sends. Mixing http and https for the proxy should also be no issue. Try removing everything else from your config and testing only this error page; maybe some other directive is causing this (e.g. another location being matched instead).
We're using HAProxy as a load balancer at the moment, and it regularly makes requests to the downstream boxes to make sure they're alive using an OPTIONS request:
OPTIONS /index.html HTTP/1.0
I'm working with getting nginx set up as a reverse proxy with caching (using ncache). For some reason, nginx is returning a 405 when an OPTIONS request comes in:
192.168.1.10 - - [22/Oct/2008:16:36:21 -0700] "OPTIONS /index.html HTTP/1.0" 405 325 "-" "-" 192.168.1.10
When hitting the downstream webserver directly, I get a proper 200 response. My question is: how do you make nginx pass that response along to HAProxy, or how can I set the response in nginx.conf?
I'm probably late, but I had the same problem, and found two solutions to it.
First is tricking nginx into treating the 405 status as a 200 OK and then proxy_pass-ing it to your HAProxy like this:
error_page 405 =200 @405;
location @405 {
    root /;
    proxy_pass http://yourproxy:8080;
}
The second solution is just to catch the OPTIONS request and build a response for those requests:
location / {
    if ($request_method = OPTIONS) {
        add_header Content-Length 0;
        add_header Content-Type text/plain;
        return 200;
    }
}
Just choose which one suits you better.
I wrote this in a blog post where you can find more details.
In the httpchk option, you can specify the HTTP method and URI like this:
option httpchk GET http://example.com/check.php
You can also use POST, or a plain URI like /. I have it check PHP, since PHP runs externally to Nginx.