Nginx map $status always gives default value - nginx

nginx.conf - http block:
map $status $loggable {
    ~^[23]  0;
    default 1;
}
Can somebody tell me why this is not working for me?
I always get the default value 1, even when the status code is 200 or 301.
For example, the following map works fine:
map $remote_addr $islocal {
    "127.0.0.1"       0;
    "192.168.178.100" 0;
    default           1;
}
The log file shows that $status contains the correct status code.
nginx.conf - http block:
log_format main '$loggable $status - [$time_local] - $remote_addr - "$request"';
webpage.conf - server block:
access_log /share/NGinX/var/log/access.log main if=$loggable;
Log file:
1 200 - [20/Jan/2019:12:49:38 +0100] - ...
1 301 - [20/Jan/2019:13:04:43 +0100] - ...
1 500 - [20/Jan/2019:13:11:44 +0100] - ...
1 301 - [20/Jan/2019:13:48:05 +0100] - ...
1 500 - [20/Jan/2019:13:48:06 +0100] - ...
1 200 - [20/Jan/2019:13:59:55 +0100] - ...
1 200 - [20/Jan/2019:13:59:58 +0100] - ...
1 404 - [20/Jan/2019:14:28:03 +0100] - ...
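For what it's worth, this map is essentially the conditional-logging example from the nginx access_log documentation, so the regex itself is fine. A diagnostic sketch (not a confirmed fix) is to temporarily swap the regex for a literal status value:

map $status $loggable {
    200     0;
    default 1;
}

If a literal 200 still logs as 1, the map being consulted at log time is probably not this one, for example because another map defining $loggable in an included file silently overrides it.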

Related

Upstream http and https protocol under one upstream

I have been trying to add two targets under one upstream, one of which is on HTTP and the other on HTTPS. I am not sure how to achieve that; I tried adding a target like https:10.32.9.123:443, but that didn't work.
It seems there is a limitation that the targets can either all be on HTTP or all on HTTPS. Is there a workaround for this? My Kong config file looks like below:
_format_version: "2.1"
_transform: true
services:
- name: test-server-public
  protocol: http
  host: test-endpoint-upstream
  port: 8000
  retries: 3
  connect_timeout: 5000
  routes:
  - name: test-route
    paths:
    - /test
upstreams:
- name: test-endpoint-upstream
  targets:
  - target: target-url:8080
    weight: 999
  - target: target-https-url:443
    weight: 1
  healthchecks:
    active:
      concurrency: 2
      http_path: /
      type: http
      healthy:
        interval: 0
        successes: 1
        http_statuses:
        - 200
        - 302
      unhealthy:
        http_failures: 3
        interval: 10
        tcp_failures: 3
        timeouts: 3
        http_statuses:
        - 429
        - 404
        - 500
        - 501
        - 502
        - 503
        - 504
        - 505
    passive:
      type: http
      healthy:
        successes: 1
        http_statuses:
        - 200
        - 201
        - 202
        - 203
        - 204
        - 205
        - 206
        - 207
        - 208
        - 226
        - 300
        - 301
        - 302
        - 303
        - 304
        - 305
        - 306
        - 307
        - 308
      unhealthy:
        http_failures: 1
        tcp_failures: 1
        timeouts: 1
        http_statuses:
        - 429
        - 500
        - 503
  slots: 1000
There is no workaround for that.
You should configure your web server to redirect HTTP to HTTPS.
The http-to-https-redirect plugin will be helpful in your case.
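For reference, if the backend web server is nginx, the redirect the answer describes is short; a minimal sketch, with example.com as a placeholder server name:

server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}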

change nginx response code from 413

Is there any way to change the response code nginx sends? When the server receives a file that exceeds its client_max_body_size as defined in the config, can I have it return a 403 code instead of a 413 code?
The following works fine for me:
events {
    worker_connections 1024;
}
http {
    server {
        listen 80;
        location @change_upload_error {
            return 403 "File uploaded too large";
        }
        location /post {
            client_max_body_size 10K;
            error_page 413 = @change_upload_error;
            echo "you reached here";
        }
    }
}
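Note that echo is not a stock nginx directive; it is provided by the echo module bundled with OpenResty (which matches the Server: openresty header in the response below). On stock nginx, a sketch of the same test location could use return instead:

location /post {
    client_max_body_size 10K;
    error_page 413 = @change_upload_error;
    return 200 "you reached here";
}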
Results for posting a 50KB file
$ curl -vX POST -F file=@test.txt vm/post
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 192.168.33.100...
* TCP_NODELAY set
* Connected to vm (192.168.33.100) port 80 (#0)
> POST /post HTTP/1.1
> Host: vm
> User-Agent: curl/7.54.0
> Accept: */*
> Content-Length: 51337
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=------------------------67df5f3ef06561a5
>
< HTTP/1.1 403 Forbidden
< Server: openresty/1.11.2.2
< Date: Mon, 11 Sep 2017 17:58:55 GMT
< Content-Type: text/plain
< Content-Length: 23
< Connection: close
<
* Closing connection 0
File uploaded too large%
and nginx logs
web_1 | 2017/09/11 17:58:55 [error] 5#5: *1 client intended to send too large body: 51337 bytes, client: 192.168.33.1, server: , request: "POST /post HTTP/1.1", host: "vm"
web_1 | 192.168.33.1 - - [11/Sep/2017:17:58:55 +0000] "POST /post HTTP/1.1" 403 23 "-" "curl/7.54.0"

Nginx doesn't block bot by user-agent

I'm trying to ban an annoying bot by user agent. I put this into the server section of my nginx config:
server {
    listen 80 default_server;
    ....
    if ($http_user_agent ~* (AhrefsBot)) {
        return 444;
    }
Checking with curl:
[root@vm85559 site_avaliable]# curl -I -H 'User-agent: Mozilla/5.0 (compatible; AhrefsBot/5.2; +http://ahrefs.com/robot/)' localhost/
curl: (52) Empty reply from server
So I checked /var/log/nginx/access.log and saw that some connections get 444, but other connections get 200!
51.255.65.78 - - [25/Jun/2017:15:47:36 +0300 - -] "GET /product/kovriki-avtomobilnie/volkswagen/?PAGEN_1=10 HTTP/1.1" 444 0 "-" "Mozilla/5.0 (compatible; AhrefsBot/5.2; +http://ahrefs.com/robot/)" 1498394856.155
217.182.132.60 - - [25/Jun/2017:15:47:50 +0300 - 2.301] "GET /product/bryzgoviki/toyota/ HTTP/1.1" 200 14500 "-" "Mozilla/5.0 (compatible; AhrefsBot/5.2; +http://ahrefs.com/robot/)" 1498394870.955
How is that possible?
OK, got it!
I added $server_name and $server_addr to the nginx log format and saw that the cunning bot connects by IP, without a server name:
51.255.65.40 - _ *myip* - [25/Jun/2017:16:22:27 +0300 - 2.449] "GET /product/soyuz_96_2/mitsubishi/l200/ HTTP/1.1" 200 9974 "-" "Mozilla/5.0 (compatible; AhrefsBot/5.2; +http://ahrefs.com/robot/)" 1498396947.308
So I added this block, and the bot can't connect anymore:
server {
    listen *myip*:80;
    server_name _;
    return 403;
}
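For context, nginx selects the server block by the listen address before server_name is even considered, and a listen with an explicit IP wins over a wildcard one, which is likely why requests arriving on *myip*:80 never reached the if in the default server. To keep the user-agent check in one place and apply it in any server block, a common sketch is an http-level map (the $bad_bot variable name is just for illustration):

map $http_user_agent $bad_bot {
    ~*AhrefsBot 1;
    default     0;
}
server {
    listen 80 default_server;
    if ($bad_bot) {
        return 444;
    }
}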

nginx as a docker upstream proxy with maps

Trying to use Docker to set up a bunch of apps behind a proxy, using the nginx map option for ease of configuration with a large number of backend applications.
The trouble I'm running into is that the container won't resolve the addresses I've given it with links.
I've tried using dnsmasq, but that was troublesome and didn't give me working resolution.
Any suggestions?
nginx.conf:
events {
    worker_connections 1024;
}
http {
    map $hostname $destination {
        hostnames;
        default          host1:81;
        host1.test.local host1:81;
        host2.test.local host2:82;
        host3.test.local host3:83;
    }
    server {
        location / {
            proxy_pass http://$destination/;
        }
    }
}
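As an aside, $hostname in nginx evaluates to the machine's own host name, not the Host header of the request, so this map compares the same local value on every request. To route by the requested host name (which is what the hostnames parameter is meant for), the map would key on $host; a minimal sketch:

map $host $destination {
    hostnames;
    default          host1:81;
    host1.test.local host1:81;
    host2.test.local host2:82;
    host3.test.local host3:83;
}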
docker-compose.yml:
webproxy:
  build: nginx:latest
  ports:
    - "80:80"
  volumes:
    - nginx.conf:/etc/nginx/nginx.conf
  links:
    - "host1:host1"
    - "host2:host2"
    - "host3:host3"
host1:
  image: nginx:latest
  ports:
    - "81:80"
  volumes:
    - host1/index.html:/usr/share/nginx/html/index.html
host2:
  image: nginx:latest
  ports:
    - "82:80"
  volumes:
    - host2/index.html:/usr/share/nginx/html/index.html
host3:
  image: nginx:latest
  ports:
    - "83:80"
  volumes:
    - host3/index.html:/usr/share/nginx/html/index.html
The error I constantly get:
webproxy_1 | 2015/07/14 16:44:11 [error] 5#0: *1 no resolver defined to resolve host1, client: 10.0.2.2, server: , request: "GET / HTTP/1.1", host: "host2.test.local:8281"
webproxy_1 | 10.0.2.2 - - [14/Jul/2015:16:44:11 +0000] "GET / HTTP/1.1" 502 181 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:39.0) Gecko/20100101 Firefox/39.0"
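The "no resolver defined" part of the error is the key: when proxy_pass contains a variable, nginx resolves the target at request time through its own resolver directive and does not consult /etc/hosts, which is exactly where Docker links write their entries. A sketch of the usual workaround, assuming the containers are attached to a user-defined Docker network so the embedded DNS server at 127.0.0.11 can answer for the service names:

http {
    resolver 127.0.0.11 valid=10s;  # Docker's embedded DNS on user-defined networks
    map $host $destination {
        hostnames;
        default host1:81;
    }
    server {
        location / {
            proxy_pass http://$destination/;
        }
    }
}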

nginx = / location pattern not working

I am trying to configure nginx to serve a static html page on the root domain, and proxy everything else to uwsgi. As a quick test I tried to divert to two different static pages:
server {
    server_name *.example.dev;
    index index.html index.htm;
    listen 80;
    charset utf-8;
    location = / {
        root /www/src/;
    }
    location / {
        root /www/test/;
    }
}
This seems to be what http://nginx.org/en/docs/http/ngx_http_core_module.html#location says you can do, but I always get sent to the test site, even for the / request when visiting http://www.example.dev in my browser.
Curl output:
$ curl http://www.example.dev -v
* Rebuilt URL to: http://www.example.dev/
* Hostname was NOT found in DNS cache
* Trying 192.168.50.51...
* Connected to www.example.dev (192.168.50.51) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.37.1
> Host: www.example.dev
> Accept: */*
>
< HTTP/1.1 200 OK
* Server nginx/1.8.0 is not blacklisted
< Server: nginx/1.8.0
< Date: Tue, 19 May 2015 01:11:10 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 415
< Last-Modified: Wed, 15 Apr 2015 02:53:27 GMT
< Connection: keep-alive
< ETag: "552dd2a7-19f"
< Accept-Ranges: bytes
<
<!DOCTYPE html>
<html>
...
And the output from the nginx access log:
192.168.50.1 - - [19/May/2015:01:17:05 +0000] "GET / HTTP/1.1" 200 415 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36" "-"
So I commented out the test location, leaving only the location = / { ... } block. Nginx now returns a 404 and logs the following error:
2015/05/19 01:24:12 [error] 3116#0: *6 open() "/etc/nginx/html/index.html" failed (2: No such file or directory), client: 192.168.50.1, server: *.example.dev, request: "GET / HTTP/1.1", host: "www.example.dev"
That is the default root from the original nginx conf file. I guess this confirms my location = / pattern is not matching.
I added $uri to the access log and see that it shows /index.html, which I guess means the first location pattern is matching, but the request then goes into the second location block? So now I just need to figure out how to serve my index.html from the / block, or just add another block like location = /index.html.
As @Alexey Ten commented, the ngx_http_index doc says:
It should be noted that using an index file causes an internal redirect, and the request can be processed in a different location. For example, with the following configuration:
location = / {
    index index.html;
}
location / {
    ...
}
a “/” request will actually be processed in the second location as “/index.html”.
In your case, a request for "/" will not get /www/src/index.html, but /www/test/index.html.
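Following the hint at the end of the question, a minimal sketch of a fix is to give the internally redirected URI its own exact-match block pointing at the same root (paths taken from the question):

location = / {
    root /www/src/;
}
location = /index.html {
    root /www/src/;
}
location / {
    root /www/test/;
}

The location = /index.html block catches the internal redirect issued by the index directive, so /www/src/index.html is served instead of falling through to the prefix location.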