nginx: ignore some requests without proper Host header

In nginx, to drop a connection I can return 444. However, there is a problem with that, IMO: it seems that 444 doesn't silently drop the connection but actually closes it gracefully, and as a result the tools all these spammers use will rapidly retry the request:
149.56.28.239 - - [22/Sep/2016:20:33:18 +0200] "PROPFIND /webdav/ HTTP/1.1" 444 0 "-" "WEBDAV Client"
149.56.28.239 - - [22/Sep/2016:20:33:18 +0200] "PROPFIND /webdav/ HTTP/1.1" 444 0 "-" "WEBDAV Client"
Is there a way to abort the TCP connection (not gracefully, as if my server were suddenly unplugged from the net) so that on the requester's end it would keep waiting? Are there any drawbacks/problems with that, and is it possible with nginx?
To drop requests without a Host header in nginx, you use the following config:
server {
    listen 80;
    return 444;
}
Is there a way to handle some of these requests for example if requested url matches some regex?
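One way to do that (a sketch, not from the original thread; the /webdav/ pattern and log path are assumptions for illustration) is to keep the catch-all server and add a regex location for the requests you want to single out; server_name "" restricts the block to requests with an empty or missing Host header:
server {
    listen 80 default_server;
    server_name "";                           # matches requests with an empty or missing Host header

    location ~* ^/webdav/ {                   # example regex: handle these probes specially
        access_log /var/log/nginx/probes.log; # hypothetical log path
        return 444;
    }

    location / {
        return 444;                           # everything else: close without a response, as before
    }
}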

Related

Application under Nginx switching IPs, how do I always make it the same?

I am running an application under nginx with this configuration:
upstream myup {
    server localhost:8833;
    server localhost:8844;
}
server {
    listen 80;
    server_name localhost;
    location / {
        proxy_pass http://myup;
    }
}
This configuration works for me, but when I watch the IP my app sees in its logs, I see the following:
127.0.0.1 - - - [11/JAN] "GET /info HTTP/1.0" 200
0:0:0:0:0:0:0:1 - - - [11/JAN] "GET /image.css HTTP/1.0" 200
127.0.0.1 - - - [11/JAN] "GET /script.js HTTP/1.0" 200
0:0:0:0:0:0:0:1 - - - [11/JAN] "GET /logo.svg HTTP/1.0" 200
Every second request the IP alternates between 127.0.0.1 and 0:0:0:0:0:0:0:1.
Logs from nginx always show the IP 127.0.0.1.
Logs from my app without nginx always show the IP 0:0:0:0:0:0:0:1.
How do I manage to work consistently with the same IP, as my application depends on it?
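No answer is recorded here, but a likely explanation (an assumption based on the alternating pattern above) is that localhost resolves to both 127.0.0.1 and the IPv6 loopback 0:0:0:0:0:0:0:1, and nginx round-robins across all addresses a server name resolves to. Pinning the upstream entries to explicit IPv4 addresses would avoid the alternation:
upstream myup {
    server 127.0.0.1:8833;   # explicit IPv4 instead of localhost, which also resolves to ::1
    server 127.0.0.1:8844;
}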

nginx forward proxy config is causing "upstream server temporarily disabled while connecting to upstream" error

I want to set up nginx as a forward proxy, much like Squid works.
This is my server block:
server {
    listen 3128;
    server_name localhost;
    location / {
        resolver 8.8.8.8;
        proxy_pass http://$http_host$uri$is_args$args;
    }
}
This is the curl command I use to test, and it works the first time, maybe even the second time.
curl -s -D - -o /dev/null -x "http://localhost:3128" http://storage.googleapis.com/my.appspot.com/test.jpeg
The corresponding nginx access log is
172.23.0.1 - - [26/Feb/2021:12:38:59 +0000] "GET http://storage.googleapis.com/my.appspot.com/test.jpeg HTTP/1.1" 200 2296040 "-" "curl/7.64.1" "-"
However, on repeated requests (after, say, the 2nd or 3rd attempt), I start getting these errors in my nginx logs:
2021/02/26 12:39:49 [crit] 31#31: *4 connect() to [2c0f:fb50:4002:804::2010]:80 failed (99: Address not available) while connecting to upstream, client: 172.23.0.1, server: localhost, request: "GET http://storage.googleapis.com/my.appspot.com/test.jpeg HTTP/1.1", upstream: "http://[2c0f:fb50:4002:804::2010]:80/my.appspot.com/test.jpeg", host: "storage.googleapis.com"
2021/02/26 12:39:49 [warn] 31#31: *4 upstream server temporarily disabled while connecting to upstream, client: 172.23.0.1, server: localhost, request: "GET http://storage.googleapis.com/my.appspot.com/test.jpeg HTTP/1.1", upstream: "http://[2c0f:fb50:4002:804::2010]:80/my.appspot.com/test.jpeg", host: "storage.googleapis.com"
What might be causing these issues after just a handful of requests? (curl still fetches the URL fine)
The DNS resolver was returning both IPv4 and IPv6 addresses, and the IPv6 addresses were causing the "Address not available" failures when connecting to the upstream servers.
Switching IPv6 resolution off made those errors disappear:
resolver 8.8.8.8 ipv6=off;
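Applied to the server block above, only the resolver line changes (a sketch of the resulting config):
server {
    listen 3128;
    server_name localhost;
    location / {
        resolver 8.8.8.8 ipv6=off;   # only ask for A records; the AAAA answers produced unreachable upstreams
        proxy_pass http://$http_host$uri$is_args$args;
    }
}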

Nginx Reverse Proxy is directing requests to a .255 address after several days

I have nginx configured to perform two functions:
1 - To serve a set of html and javascript pages. The javascript pages iteratively access an API through the Nginx Proxy (see function 2).
2 - In order to get around CORS restrictions from the client/browser, nginx acts as a proxy to the remote api.
Everything works perfectly when nginx is first started and will run for several days to a couple of weeks. At some point, the client is no longer able to get data from two of the API endpoints. The ones that continue to work are retrieved using a GET. The ones that stop working use a POST method.
I looked in the nginx access.log and found:
192.168.100.7 - - [08/Dec/2020:23:01:24 +0000] "POST /example/developer_api/v1/companies/search HTTP/1.1" 499 0 "http://192.168.100.71/example-wdc/ExampleCompanies.html" "Mozilla/5.0 (Windows NT 6.2; Win64; x64) AppleWebKit/538.1 (KHTML, like Gecko) tabprotosrv Safari/538.1"
An HTTP error 499: client closed request. This occurred 30 seconds after the previous successful GET request. I believe this is the originating client closing the connection before nginx has received and returned data from the API.
I used wireshark on the nginx server to capture the traffic.
I found the following suspect packet:
104 6.716880257 192.168.100.71 XXX.XXX.XXX.255 TCP 66 42920 → 443 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 WS=128
I think it is weird that the nginx proxy is sending a TCP SYN request to a broadcast address. The TCP SYN retried several times without any response. This explains the 499 from the originating client since Nginx hasn't had a response within 30 seconds.
I had a theory that the IP address had changed on the remote API server, which then confused nginx about where to forward the requests. I added a resolver with a timeout to nginx. This hasn't improved the situation.
So, I am stumped as to where to look next - any ideas, rabbit holes or weird theories will be appreciated.
I have included the nginx config below.
server {
    charset UTF-8;
    listen 80;
    root /var/www/tableau-web-data-connectors/webroot/;
    location /copper/ {
        add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT,DELETE,PATCH';
        add_header 'Access-Control-Allow-Origin' '192.168.100.71';
        add_header 'Access-Control-Allow-Credentials' 'true';
        add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control';
        resolver 192.168.100.10 valid=720s;
        proxy_pass https://api.example.com/;
        proxy_ssl_session_reuse off;
        proxy_set_header Host api.example.com;
        proxy_redirect off;
    }
}
Nginx uses "resolver" just once on start (or receiving HUP signal) for resolving api.example.com to ip-address (or any other domain names in configuration). Dynamic resolving works only for commercial Nginx+.
There is "POST /example/developer_api/v1/companies/search" with referer "http://192.168.100.71/example-wdc/ExampleCompanies.html".
The client opened "/example-wdc/ExampleCompanies.html", clicked the search form, but didn't wait for the result and closed the page. That's how the 499 appeared in the access_log. This is a common situation.
Perhaps api.example.com changed its IP at some point and stopped serving on the old address; since nginx only resolves the name at startup, it kept connecting to the stale address until the next restart.
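A common workaround on open-source nginx (the same trick used in the next answer below) is to put the hostname into a variable, which forces nginx to resolve it per request through the resolver directive. A sketch against the config above; the rewrite replicates the URI mapping that the trailing slash in the original proxy_pass performed:
location /copper/ {
    resolver 192.168.100.10 valid=720s;
    set $api_backend api.example.com;    # a variable target forces per-request resolution via "resolver"
    rewrite ^/copper/(.*)$ /$1 break;    # strip the /copper/ prefix, as the original trailing slash did
    proxy_pass https://$api_backend;
    proxy_set_header Host api.example.com;
}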
An IP ending in .255 is not always a broadcast address; that depends on the subnet mask. In a /16 network, for example, x.x.1.255 is an ordinary host address.

NGINX: show cached IPs for hostnames in config files?

[SHORT VERSION] I understand that when NGINX reads a config file, it does DNS lookups on the hostnames in it, then stores the results (the IP addresses those hostnames resolve to) and uses them until the next time it reads the config (which, to my understanding, is not until the next restart by default). Is there a way to see the hostname-to-IP mapping that my currently-running NGINX service is using? I am aware there are ways to configure NGINX to account for changes in a hostname's IPs; I wish to see what NGINX currently thinks it should resolve my hostname to.
[Elaborated] I'm using the DNS name of an AWS ELB (classic) as the hostname for a proxy_pass. Since both the public and private IPs of an AWS ELB can change (without notice), whatever IP(s) NGINX mapped for that hostname at the start of its service become outdated upon such a change. I believe this IP change just happened to me, as my NGINX service is forwarding traffic to a different cluster than what is specified in its config. Restarting the NGINX service fixes the problem. But, again, I'm looking to SEE where NGINX currently thinks it should send the traffic, not how to fix or prevent it (there are plenty of resources online for working with dynamic upstreams, which I evidently should have consumed prior to deploying my NGINX services...).
Thank you in advance!
All you need is the resolver directive.
http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver
With this directive nginx will pick up DNS changes without restarting, but only for proxy_pass targets that use a variable. It won't work if you are using an upstream block; DNS re-resolution of upstream servers is supported only in NGINX Plus.
If you want to know the IP of the upstream server, there are a few ways:
- in the Plus version you can use the status or upstream_conf modules, but the Plus version is not free
- some 3rd-party status modules
- write the IP to the log with each request: just add the $upstream_addr variable to a custom access log. $upstream_addr contains the IP address of the backend server used for the current request. Example config:
log_format upstreamlog '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $bytes_sent $upstream_addr';
server {
    ...
    access_log /tmp/test_access_log upstreamlog;
    resolver ip.of.local.resolver;
    location / {
        set $pass dns_name.of.backend;
        proxy_pass http://$pass;
    }
}
Note: always use a variable in proxy_pass; the resolver is only consulted in that case. Example log:
127.0.0.1 - - [10/Jan/2017:02:12:15 +0300] "GET / HTTP/1.1" 200 503 213.180.193.3:80
127.0.0.1 - - [10/Jan/2017:02:12:25 +0300] "GET / HTTP/1.1" 200 503 213.180.193.3:80
.... IP address changed, nginx wasn't restarted ...
127.0.0.1 - - [10/Jan/2017:02:13:55 +0300] "GET / HTTP/1.1" 200 503 93.158.134.3:80
127.0.0.1 - - [10/Jan/2017:02:13:59 +0300] "GET / HTTP/1.1" 200 503 93.158.134.3:80

How to handle "OPTIONS *" request in nginx?

In my environment, I use Perlbal to redirect requests to nginx. If verify_backend is on, Perlbal will send an "OPTIONS *" request to nginx, but nginx answers it with a bad request.
According to RFC 2616:
If the Request-URI is an asterisk ("*"), the OPTIONS request is intended to apply to the server in general rather than to a specific resource. Since a server's communication options typically depend on the resource, the "*" request is only useful as a "ping" or "no-op" type of method; it does nothing beyond allowing the client to test the capabilities of the server. For example, this can be used to test a proxy for HTTP/1.1 compliance (or lack thereof).
I think Perlbal is trying to send this kind of request, but nginx can't handle it by default.
When I try to send a request "OPTIONS * HTTP/1.0", I always get "HTTP 400 bad request":
127.0.0.1 - - [18/Feb/2013:03:55:47 +0000] "OPTIONS * HTTP/1.0" 400 172 "-" "-" "-"
but it works with "OPTIONS / HTTP/1.0", i.e. without the asterisk:
127.0.0.1 - - [18/Feb/2013:04:03:56 +0000] "OPTIONS / HTTP/1.0" 200 0 "-" "-" "-"
How can I configure nginx to respond with HTTP 200 rather than HTTP 400?
I know it's overkill, but one solution is to put HAProxy in front of nginx to capture that OPTIONS request and build your own response in HAProxy. Within nginx itself you can only catch OPTIONS requests that carry a normal URI, for example:
location / {
    if ($request_method = OPTIONS) {
        add_header Content-Length 0;
        add_header Content-Type text/plain;
        return 200;
    }
}
The only way I found to modify the behaviour for "OPTIONS *" itself was to remap 400 responses in general:
error_page 400 =200 /empty_reply.html;
You could just send empty responses to everything you cannot handle.
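A minimal sketch of that remap in context (serving the empty body straight from nginx is an assumption, to avoid needing an empty_reply.html file on disk):
server {
    listen 80;
    error_page 400 =200 /empty_reply.html;
    location = /empty_reply.html {
        internal;        # reachable only via the error_page internal redirect
        return 200 "";   # empty body, status rewritten to 200
    }
}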
For whoever wants to try to solve this another way, you can simulate this request with:
curl -X OPTIONS $yourserverip --request-target "*" --http1.1
