I've set up nginx/1.18.0 (Ubuntu) with the following parameters at the http level:
http {
    fastcgi_buffers 8 16k;
    fastcgi_buffer_size 32k;
    client_max_body_size 24M;
    client_body_buffer_size 128k;
    client_header_buffer_size 5120k;
    large_client_header_buffers 16 5120k;
}
If I remove these parameters, I get Error 414 (Request-URI Too Large) instead.
I've tried putting the parameters at the server level, I've removed them from every server block and left them only at the http level, and I've also checked that there is no default server, but nothing works. It's always the same error: 400. Debug logs:
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 http process request line
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 http alloc large header buffer
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 malloc: 0000557BF591C880:65536
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 http large header alloc: 0000557BF591C880 65536
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 http large header copy: 1024
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 SSL_read: 15360
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 SSL_read: -1
...
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 chain writer buf fl:1 s:18856
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 chain writer in: 0000557BF5910DF0
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 writev: 18856 of 18856
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 chain writer out: 0000000000000000
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 event timer del: 27: 1394198951
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 event timer add: 27: 1800000:1395997951
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 http upstream request: "/api/reports/get-data?data=U2FsdGVkX184fkBl4wAhRkbfDL...
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 http upstream process header
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 malloc: 0000557BF5911310:4096
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 recv: eof:1, avail:-1
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 recv: fd:27 28 of 4096
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 http proxy status 400 "400 Bad Request"
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 http proxy header done
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 xslt filter header
2022/09/22 14:06:38 [debug] 1083638#1083638: *2 HTTP/1.1 400 Bad Request
Server: nginx/1.18.0 (Ubuntu)
I've reviewed the proxied application, which is a Node.js app; it does not receive the request.
I've even tried with
client_header_buffer_size 800M;
large_client_header_buffers 16 800M;
And the result is the same. The same configuration, with the same Node.js app and exactly the same data, was working properly on nginx/1.10.3 (Ubuntu).
Any help would be really appreciated.
Related
I've tried the following SO answers without success: this one, this one, and a few others. I've also looked at the docs, but I can't figure out what I'm doing wrong. When I hit /, I get the nginx home page (fine). When I try to hit /alpha, I get a 404. Curling 127.0.0.1:5001 directly gives me what I expect from the host. Can anyone tell me what I've missed?
Config:
http {
    server {
        listen 80;
        location /prealpha/ {
            proxy_pass http://127.0.0.1:5000/;
        }
        location /alpha/ {
            proxy_pass http://127.0.0.1:5001/;
        }
        location /beta/ {
            proxy_pass http://127.0.0.1:5002/;
        }
        location /gamma/ {
            proxy_pass http://127.0.0.1:5003/;
        }
    }
Update
The following lines from nginx's debug output seem to indicate that keepalive isn't working properly and the connection then gets closed. Do I need to use an upstream or something to make the connection stay open? (A sketch of that kind of setup follows the log below.)
2019/09/18 12:11:03 [debug] 8688#8688: *2 http upstream request: "/alpha/?"
2019/09/18 12:11:03 [debug] 8688#8688: *2 http upstream process header
2019/09/18 12:11:03 [debug] 8688#8688: *2 malloc: 000055B88E563AE0:4096
2019/09/18 12:11:03 [debug] 8688#8688: *2 recv: eof:0, avail:1
2019/09/18 12:11:03 [debug] 8688#8688: *2 recv: fd:9 484 of 4096
2019/09/18 12:11:03 [debug] 8688#8688: *2 http proxy status 404 "404 NOT FOUND"
2019/09/18 12:11:03 [debug] 8688#8688: *2 http proxy header: "Server: gunicorn/19.7.1"
2019/09/18 12:11:03 [debug] 8688#8688: *2 http proxy header: "Date: Wed, 18 Sep 2019 12:11:03 GMT"
2019/09/18 12:11:03 [debug] 8688#8688: *2 http proxy header: "Connection: close"
2019/09/18 12:11:03 [debug] 8688#8688: *2 http proxy header: "Content-Type: text/html"
2019/09/18 12:11:03 [debug] 8688#8688: *2 http proxy header: "Content-Length: 232"
2019/09/18 12:11:03 [debug] 8688#8688: *2 http proxy header: "Set-Cookie: oidc_id_token=; Expires=Thu, 01-Jan-1970 00:00:00 GMT; HttpOnly; Path=/"
2019/09/18 12:11:03 [debug] 8688#8688: *2 http proxy header: "Vary: Cookie"
2019/09/18 12:11:03 [debug] 8688#8688: *2 http proxy header done
2019/09/18 12:11:03 [debug] 8688#8688: *2 xslt filter header
2019/09/18 12:11:03 [debug] 8688#8688: *2 posix_memalign: 000055B88E538C60:4096 #16
2019/09/18 12:11:03 [debug] 8688#8688: *2 HTTP/1.1 404 NOT FOUND
Server: nginx/1.14.0 (Ubuntu)
Date: Wed, 18 Sep 2019 12:11:03 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive
Set-Cookie: oidc_id_token=; Expires=Thu, 01-Jan-1970 00:00:00 GMT; HttpOnly; Path=/
Vary: Cookie
Content-Encoding: gzip
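For reference, the kind of upstream-with-keepalive setup the update above asks about would look roughly like the sketch below (hypothetical names; as the major update further down shows, keepalive turned out not to be the real issue here):

# Hypothetical upstream definition with connection keepalive to gunicorn.
upstream alpha_backend {
    server 127.0.0.1:5001;
    keepalive 16;          # keep up to 16 idle connections open
}

server {
    listen 80;

    location /alpha/ {
        proxy_pass http://alpha_backend/;
        # Upstream keepalive requires HTTP/1.1 and an empty Connection
        # header towards the backend.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}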
MAJOR UPDATE
It turns out that nginx is passing the location prefix (not sure what this is actually called, but in the config at the top it's the bit that says "/prealpha/", for example) on to the proxy_pass URL. I need it not to do that: it should simply pass everything after that prefix to the proxy. How do I get it to do that?
Eventually, https://serverfault.com/a/379679/356031 fixed it.
So in short: there was nothing in the error log, the request was clearly being passed through (once I'd enabled debug logging in nginx), and enabling debug logging in gunicorn then showed that the wrong URL was being passed to the backend.
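For anyone else hitting this: the behaviour depends on whether proxy_pass is given with a URI part (even just a trailing "/"). A short illustration of the documented rule, using the /alpha/ location from above (two alternatives shown for comparison; this is a sketch of the general behaviour, not necessarily what went wrong in this particular setup):

# With a URI part ("/"): the matched prefix is replaced,
# so /alpha/foo is requested from the backend as /foo.
location /alpha/ {
    proxy_pass http://127.0.0.1:5001/;
}

# Without a URI part: the original URI is passed unchanged,
# so /alpha/foo reaches the backend as /alpha/foo.
location /alpha/ {
    proxy_pass http://127.0.0.1:5001;
}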
I'm working on a server that is receiving requests from IoT devices. They perform a HEAD request on boot. Unfortunately, it seems there's something wrong with the headers.
NGINX access log
[11/Sep/2018:13:41:11 +0000] "HEAD / HTTP/1.1" 408 0 "-" "-" --- "-" "-"
The log format is as follows
log_format custom '[$time_local] "$request" $status $body_bytes_sent '
                  '"$http_referer" "$http_user_agent" ---'
                  '"$content_type" "$content_length"';
NGINX error log
2018/09/11 13:40:11 [debug] 31368#31368: *1 accept: SRCIP:33930 fd:32
2018/09/11 13:40:11 [debug] 31368#31368: *1 event timer add: 32: 60000:1536673271229
2018/09/11 13:40:11 [debug] 31368#31368: *1 reusable connection: 1
2018/09/11 13:40:11 [debug] 31368#31368: *1 epoll add event: fd:32 op:1 ev:80002001
2018/09/11 13:40:11 [debug] 31368#31368: *1 post event 000056011DBAA2C0
2018/09/11 13:40:11 [debug] 31368#31368: *1 delete posted event 000056011DBAA2C0
2018/09/11 13:40:11 [debug] 31368#31368: *1 http wait request handler
2018/09/11 13:40:11 [debug] 31368#31368: *1 malloc: 000056011DAAB650:1024
2018/09/11 13:40:11 [debug] 31368#31368: *1 recv: fd:32 71 of 1024
2018/09/11 13:40:11 [debug] 31368#31368: *1 reusable connection: 0
2018/09/11 13:40:11 [debug] 31368#31368: *1 posix_memalign: 000056011DB0DCD0:4096 #16
2018/09/11 13:40:11 [debug] 31368#31368: *1 http process request line
2018/09/11 13:40:11 [debug] 31368#31368: *1 http request line: "HEAD / HTTP/1.1"
2018/09/11 13:40:11 [debug] 31368#31368: *1 http uri: "/"
2018/09/11 13:40:11 [debug] 31368#31368: *1 http args: ""
2018/09/11 13:40:11 [debug] 31368#31368: *1 http exten: ""
2018/09/11 13:40:11 [debug] 31368#31368: *1 posix_memalign: 000056011DB690C0:4096 #16
2018/09/11 13:40:11 [debug] 31368#31368: *1 http process request header line
2018/09/11 13:40:11 [debug] 31368#31368: *1 http header: "Host: MYHOST"
2018/09/11 13:40:11 [debug] 31368#31368: *1 recv: fd:32 -1 of 953
2018/09/11 13:40:11 [debug] 31368#31368: *1 recv() not ready (11: Resource temporarily unavailable)
2018/09/11 13:41:11 [debug] 31368#31368: *1 event timer del: 32: 1536673271229
2018/09/11 13:41:11 [debug] 31368#31368: *1 http process request header line
2018/09/11 13:41:11 [info] 31368#31368: *1 client timed out (110: Connection timed out) while reading client request headers, client: SRCIP, server: MYHOST, request: "HEAD / HTTP/1.1", host: "MYHOST"
2018/09/11 13:41:11 [debug] 31368#31368: *1 http request count:1 blk:0
2018/09/11 13:41:11 [debug] 31368#31368: *1 http close request
2018/09/11 13:41:11 [debug] 31368#31368: *1 http log handler
2018/09/11 13:41:11 [debug] 31368#31368: *1 free: 000056011DB0DCD0, unused: 707
2018/09/11 13:41:11 [debug] 31368#31368: *1 free: 000056011DB690C0, unused: 3104
2018/09/11 13:41:11 [debug] 31368#31368: *1 close http connection: 32
2018/09/11 13:41:11 [debug] 31368#31368: *1 reusable connection: 0
2018/09/11 13:41:11 [debug] 31368#31368: *1 free: 000056011DAAB650
2018/09/11 13:41:11 [debug] 31368#31368: *1 free: 000056011DAFF960, unused: 128
tcpdump (sudo tcpdump -n -S -s 0 -A 'src SRCIP and port 80') shows:
13:55:32.846408 IP SRCIP.39761 > DSTIP.80: Flags [S], seq 1846787, win 2920, options [mss 1460], length 0
E..,....p. *E......h.Q.P........`..h\;........
13:55:33.153456 IP SRCIP.39761 > DSTIP.80: Flags [.], ack 3538300854, win 2920, length 0
E..(....p..^E......h.Q.P....../.P..hqK........
13:55:33.314206 IP SRCIP.39761 > DSTIP.80: Flags [P.], seq 1846788:1846859, ack 3538300854, win 2920, length 71: HTTP: HEAD / HTTP/1.1
E..o&...p..CE......h.Q.P....../.P..hg...HEAD / HTTP/1.1
Host: MYHOST
Content-Length:
13:56:33.363048 IP SRCIP.39761 > DSTIP.80: Flags [F.], seq 1846859, ack 3538300855, win 2919, length 0
E..(....p...E......h.Q.P...K../.P..gq.........
I cannot change the firmware in the devices so I'm looking for a workaround on the NGINX side. Please let me know if I can provide more info to help with the answer.
EDIT: I'm not adding the server config because I've tried too many and I'm not sure what to paste here.
EDIT 2: tcpdump at first logs
13:55:32.846408 IP SRCIP.39761 > DSTIP.80: Flags [S], seq 1846787, win 2920, options [mss 1460], length 0
E..,....p. *E......h.Q.P........`..h\;........
13:55:33.153456 IP SRCIP.39761 > DSTIP.80: Flags [.], ack 3538300854, win 2920, length 0
E..(....p..^E......h.Q.P....../.P..hqK........
13:55:33.314206 IP SRCIP.39761 > DSTIP.80: Flags [P.], seq 1846788:1846859, ack 3538300854, win 2920, length 71: HTTP: HEAD / HTTP/1.1
E..o&...p..CE......h.Q.P....../.P..hg...HEAD / HTTP/1.1
Host: MYHOST
Content-Length:
And then the rest after some time. I assume it's after NGINX times out.
EDIT 3: I finally understand what's going on. This has been really confusing because there is an Apache server in production with which the devices work properly; it was only when I tried to switch to NGINX that things stopped working.
As I've said above, the IoT devices perform a HEAD request on boot. They expect a response with a Date: header so that they can parse it.
Currently, the devices work fine with Apache because, when a timeout is triggered while waiting for headers from the client, Apache returns a 408 response to the client, including the Date: header.
This directive can set various timeouts for receiving the request headers and the request body from the client. If the client fails to send headers or body within the configured time, a 408 REQUEST TIME OUT error is sent.
(https://httpd.apache.org/docs/2.4/mod/mod_reqtimeout.html)
NGINX, on the other hand, when a timeout is triggered while waiting for headers from the client, just closes the connection without returning anything to the client, even though it logs a 408 in the access log.
Defines a timeout for reading client request header. If a client does not transmit the entire header within this time, the request is terminated with the 408 (Request Time-out) error.
(http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout)
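For reference, the directive quoted above lives at the http or server level; a minimal sketch (60s is nginx's default value):

http {
    # How long nginx waits for the complete request header. When it
    # expires, nginx logs a 408 in the access log but, as described
    # above, closes the connection without sending a response.
    client_header_timeout 60s;
}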
There has already been a discussion of this behaviour: https://trac.nginx.org/nginx/ticket/1005.
In other words, the HEAD request from the IoT devices has always been wrong. It just happens to work with Apache because a 408 response with Date: is sent back whenever a timeout is triggered while waiting for headers.
As I said above, unfortunately there's no way for me to change how the devices work, so I need a workaround in NGINX. The only way I've found is to change the source and compile it myself.
This is what I came up with by copy/pasting from the internet. Unfortunately, I haven't had time to understand the code and probably never will. It would be really great if somebody helped me understand how bad this code is and what a better way of writing it would be.
The version of NGINX is 1.14.0.
diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
index 2db7a62..086701b 100644
--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -1236,7 +1236,7 @@ ngx_http_process_request_headers(ngx_event_t *rev)
     if (rev->timedout) {
         ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT, "client timed out");
         c->timedout = 1;
-        ngx_http_close_request(r, NGX_HTTP_REQUEST_TIME_OUT);
+        ngx_http_finalize_request(r, ngx_http_special_response_handler(r, NGX_HTTP_REQUEST_TIME_OUT));
         return;
     }
To validate that the code works, I used telnet.
This is what NGINX normally would do
Request
$ telnet HOST 80
Trying IP...
Connected to HOST.
Escape character is '^]'.
HEAD / HTTP/1.1
Content-Length:
Response
Connection closed by foreign host.
This is what NGINX does with the modified code
Request
$ telnet HOST 80
Trying IP...
Connected to HOST.
Escape character is '^]'.
HEAD / HTTP/1.1
Content-Length:
Response
HTTP/1.1 408 Request Time-out
Server: nginx
Date: Tue, 25 Sep 2018 08:18:41 GMT
Content-Type: text/html
Content-Length: 176
Connection: close
Notice that I could use another header in place of Content-Length: (e.g. Accept:) and the result would be the same. If you are trying to reproduce this, just remember to press Enter once (and only once) after the empty header (Content-Length: in the example).
I ended up using Apache. In fact, it seems there is no way to configure NGINX to send the 408 back to the client (rather than just closing the connection), and patching the source is risky and makes updating the server painful.
I was running into this with a proxy_pass. Originally I had the following configuration:
location / {
    proxy_read_timeout 300s;
    proxy_connect_timeout 75s;
    proxy_pass https://internal:3210/;
    add_header Access-Control-Allow-Origin *;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_buffering off;
}
Then I commented out the proxy settings and it started to work for me:
location / {
    proxy_read_timeout 300s;
    proxy_connect_timeout 75s;
    proxy_pass https://internal:3210/;
    # add_header Access-Control-Allow-Origin *;
    # proxy_http_version 1.1;
    # proxy_set_header Upgrade $http_upgrade;
    # proxy_set_header Connection "upgrade";
    # proxy_buffering off;
}
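If the WebSocket-style headers are actually needed, a common pattern (not part of the original answer; internal:3210 is taken from the config above) is to send the Upgrade/Connection pair only when the client actually requests an upgrade:

# At the http level: derive the Connection header from the client's
# Upgrade header, so plain requests are not forced into an upgrade.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Inside the existing server block, the location stays the same otherwise:
location / {
    proxy_read_timeout 300s;
    proxy_connect_timeout 75s;
    proxy_pass https://internal:3210/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}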
I'm trying to use nginx as a reverse proxy for multiple docker containers running WordPress. The nginx instance and docker are running on an Ubuntu 16.04.3 server. I have been testing this out in my local dev environment with Vagrant, using one Ubuntu box for nginx and another for docker. The Vagrant configuration works as expected, but when I try to make a similar configuration on a single physical Ubuntu server, the route to the WordPress docker container hangs and eventually returns an HTTP 301.
Note: using a similar nginx reverse proxy configuration for other docker containers listening on different ports works. For example, running Jenkins in docker and using a reverse proxy to that container works successfully.
Here are the configurations I am using with Vagrant and then on my physical Ubuntu server:
Working solution with Vagrant and Two separate Ubuntu boxes
Vagrant Configuration
Nginx running in a separate Ubuntu Box
Setting the local hosts file (/etc/hosts):
10.10.45.10    wp.dev
nginx configuration
server {
    listen 80;
    listen [::]:80;
    server_name wp.dev;
    error_log /var/log/nginx/wp_dev_error.log debug;

    location / {
        proxy_pass http://10.10.45.11:8080;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Wordpress docker configuration
Docker-compose.yml file:
version: "2"
services:
  my-wpdb:
    image: mariadb
    ports:
      - "8081:3306"
    environment:
      MYSQL_ROOT_PASSWORD: <some_password>
  my-wp:
    image: wordpress
    volumes:
      - ./:/var/www/html
    ports:
      - "8080:80"
    links:
      - my-wpdb:mysql
    environment:
      WORDPRESS_DB_PASSWORD: <some_password>
Run the docker containers
docker-compose up -d
Route
wp.dev (10.10.45.10) → docker_wp (10.10.45.11 port 8080)
Curl test: curl wp.dev -- SUCCESS
10.10.45.1 - - [18/Aug/2017:21:38:37 +0000] "GET / HTTP/1.1" 200 51638 "-" "curl/7.54.0"
Broken Configuration
/etc/nginx/sites-available/sub1.mydomain.com.conf
server {
    listen 80;
    listen [::]:80;
    server_name sub1.mydomain.com;
    error_log /var/log/nginx/mydomain_nonssl_error.log debug;

    location / {
        proxy_pass http://localhost:8080;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Docker-compose file for wordpress docker container is the same as the vagrant configuration above.
Testing configuration and results
Curl test: curl <my_test_domain>
159.203.127.57 - - [18/Aug/2017:15:37:52 -0600] "GET / HTTP/1.1" 301 0 "-" "curl/7.47.0"
The page hangs, and in the response headers I see an HTTP 301.
curl -v http://<my_test_domain>
* Rebuilt URL to: http://<my_testdomain>/
* Trying xx.xx.xx.91...
* Connected to sub1.mydomain.com (xx.xx.xx.91) port 80 (#0)
> GET / HTTP/1.1
> Host: sub1.mydomain.com
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.10.3 (Ubuntu)
< Date: Sat, 19 Aug 2017 15:05:38 GMT
< Content-Type: text/html; charset=UTF-8
< Content-Length: 0
< Connection: keep-alive
< X-Powered-By: PHP/5.6.31
**< Location: http://<my_test_domain>:8080/**
<
* Connection #0 to host <my_test_domain> left intact
Note that the Location header still includes the destination port in the URL. I don't see this in my Vagrant configuration; this may be the problem.
I've tried different nginx configurations to hide the destination port, but nothing seems to work.
Here are some specific questions that may help troubleshoot this problem:
How can I enable more debug information in nginx? I'm using the following error_log setting, but I'd like more verbose logging to see how the routing works:
error_log /var/log/nginx/mydomain_nonssl_error.log debug;
Why does the destination port still show up in the URL when I run the nginx reverse proxy on the same machine, while a similar configuration in Vagrant with separate boxes hides the port and uses the original URL in the request?
Could the problem be in the Docker WordPress/Apache container, which is producing the 301?
I’ve been working on this problem for several days and have not been able to resolve the issue. Thanks in advance for your help.
**Nginx log file of the reverse proxy**
2017/08/28 09:53:14 [debug] 11853#11853: *1 http script copy: "X-Real-IP: "
2017/08/28 09:53:14 [debug] 11853#11853: *1 http script var: "168.179.61.161"
2017/08/28 09:53:14 [debug] 11853#11853: *1 http script copy: "
"
2017/08/28 09:53:14 [debug] 11853#11853: *1 http script copy: "X-Forwarded-For: "
2017/08/28 09:53:14 [debug] 11853#11853: *1 http script var: "168.179.61.161"
2017/08/28 09:53:14 [debug] 11853#11853: *1 http script copy: "
"
2017/08/28 09:53:14 [debug] 11853#11853: *1 http script copy: "Connection: close
"
2017/08/28 09:53:14 [debug] 11853#11853: *1 http script copy: ""
2017/08/28 09:53:14 [debug] 11853#11853: *1 http script copy: ""
2017/08/28 09:53:14 [debug] 11853#11853: *1 http script copy: ""
2017/08/28 09:53:14 [debug] 11853#11853: *1 http script copy: ""
2017/08/28 09:53:14 [debug] 11853#11853: *1 http proxy header: "user-agent: curl/7.53.0"
2017/08/28 09:53:14 [debug] 11853#11853: *1 http proxy header: "accept: */*"
2017/08/28 09:53:14 [debug] 11853#11853: *1 http proxy header:
"GET / HTTP/1.0
Host: <mydevsite>
X-Real-IP: 168.179.61.161
X-Forwarded-For: 168.179.61.161
Connection: close
user-agent: curl/7.53.0
accept: */*
… snip ….
2017/08/28 09:53:15 [debug] 11853#11853: *1 http upstream request: "/?"
2017/08/28 09:53:15 [debug] 11853#11853: *1 http upstream process header
2017/08/28 09:53:15 [debug] 11853#11853: *1 malloc: 000055D755D248F0:4096
2017/08/28 09:53:15 [debug] 11853#11853: *1 recv: fd:29 246 of 4096
2017/08/28 09:53:15 [debug] 11853#11853: *1 http proxy status 301 "301 Moved Permanently"
2017/08/28 09:53:15 [debug] 11853#11853: *1 posix_memalign: 000055D755D168A0:4096 #16
2017/08/28 09:53:15 [debug] 11853#11853: *1 http proxy header: "Date: Mon, 28 Aug 2017 15:53:14 GMT"
2017/08/28 09:53:15 [debug] 11853#11853: *1 http proxy header: "Server: Apache/2.4.10 (Debian)"
2017/08/28 09:53:15 [debug] 11853#11853: *1 http proxy header: "X-Powered-By: PHP/5.6.31"
2017/08/28 09:53:15 [debug] 11853#11853: *1 http proxy header: "Location: http://<mydevsite>:8080/"
2017/08/28 09:53:15 [debug] 11853#11853: *1 http proxy header: "Content-Length: 0"
2017/08/28 09:53:15 [debug] 11853#11853: *1 http proxy header: "Connection: close"
2017/08/28 09:53:15 [debug] 11853#11853: *1 http proxy header: "Content-Type: text/html; charset=UTF-8"
2017/08/28 09:53:15 [debug] 11853#11853: *1 http proxy header done
2017/08/28 09:53:15 [debug] 11853#11853: *1 xslt filter header
2017/08/28 09:53:15 [debug] 11853#11853: *1 http2 header filter
2017/08/28 09:53:15 [debug] 11853#11853: *1 http2 output header: ":status: 301"
2017/08/28 09:53:15 [debug] 11853#11853: *1 http2 output header: "server: nginx/1.10.3 (Ubuntu)"
2017/08/28 09:53:15 [debug] 11853#11853: *1 http2 output header: "date: Mon, 28 Aug 2017 15:53:15 GMT"
2017/08/28 09:53:15 [debug] 11853#11853: *1 http2 output header: "content-type: text/html; charset=UTF-8"
2017/08/28 09:53:15 [debug] 11853#11853: *1 http2 output header: "content-length: 0"
2017/08/28 09:53:15 [debug] 11853#11853: *1 http2 output header: "location: http://<mydevsite>:8080/"
2017/08/28 09:53:15 [debug] 11853#11853: *1 http2 output header: "x-powered-by: PHP/5.6.31"
2017/08/28 09:53:15 [debug] 11853#11853: *1 http2 output header: "strict-transport-security: max-age=63072000; includeSubdomains"
2017/08/28 09:53:15 [debug] 11853#11853: *1 http2 output header: "x-frame-options: DENY"
2017/08/28 09:53:15 [debug] 11853#11853: *1 http2 output header: "x-content-type-options: nosniff"
2017/08/28 09:53:15 [debug] 11853#11853: *1 http2:1 create HEADERS frame 000055D755D16B78: len:200
2017/08/28 09:53:15 [debug] 11853#11853: *1 http cleanup add: 000055D755D16C60
2017/08/28 09:53:15 [debug] 11853#11853: *1 http2 frame out: 000055D755D16B78 sid:1 bl:1 len:200
2017/08/28 09:53:15 [debug] 11853#11853: *1 SSL buf copy: 9
2017/08/28 09:53:15 [debug] 11853#11853: *1 SSL buf copy: 200
2017/08/28 09:53:15 [debug] 11853#11853: *1 http2:1 HEADERS frame 000055D755D16B78 was sent
2017/08/28 09:53:15 [debug] 11853#11853: *1 http2 frame sent: 000055D755D16B78 sid:1 bl:1 len:200
2017/08/28 09:53:15 [debug] 11853#11853: *1 http cacheable: 0
2017/08/28 09:53:15 [debug] 11853#11853: *1 http proxy filter init s:301 h:0 c:0 l:0
2017/08/28 09:53:15 [debug] 11853#11853: *1 http upstream process upstream
2017/08/28 09:53:15 [debug] 11853#11853: *1 pipe read upstream: 1
2017/08/28 09:53:15 [debug] 11853#11853: *1 pipe preread: 0
2017/08/28 09:53:15 [debug] 11853#11853: *1 readv: 1, last:3850
2017/08/28 09:53:15 [debug] 11853#11853: *1 pipe recv chain: 0
2017/08/28 09:53:15 [debug] 11853#11853: *1 pipe buf free s:0 t:1 f:0 000055D755D248F0, pos 000055D755D249E6, size: 0 file: 0, size: 0
Before debugging on the nginx side, I would suggest checking whether the upstream is actually reachable for nginx. Could you post the output of a request to http://localhost:8080 made from the nginx host?
--
Mohammed Azfar
Change
proxy_redirect off
to
proxy_redirect http://localhost:8080/ http://$host/
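In the context of the configuration above, that would look roughly like this (a sketch; only the proxy_redirect line changes):

location / {
    proxy_pass http://localhost:8080;
    # Rewrite Location/Refresh headers that the backend builds with its
    # own host:port back to the public host name.
    proxy_redirect http://localhost:8080/ http://$host/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}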
This is what I got as debug-level info from the error log:
"GET /api/account/logout HTTP/1.0
Host: http://SERVER_IP/
Connection: close
Cache-Control: max-age=0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36
Upgrade-Insecure-Requests: 1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.8,en-US;q=0.6,en;q=0.4,zh-TW;q=0.2
"
2017/08/22 19:38:42 [debug] 22939#0: *62 http cleanup add: 00007F557D385FD0
2017/08/22 19:38:42 [debug] 22939#0: *62 get rr peer, try: 1
2017/08/22 19:38:42 [debug] 22939#0: *62 stream socket 8
2017/08/22 19:38:42 [debug] 22939#0: *62 epoll add connection: fd:8 ev:80002005
2017/08/22 19:38:42 [debug] 22939#0: *62 connect to 10.14.6.4:80, fd:8 #63
2017/08/22 19:38:42 [debug] 22939#0: *62 http upstream connect: -2
2017/08/22 19:38:42 [debug] 22939#0: *62 posix_memalign: 00007F557D2E7DE0:128 #16
2017/08/22 19:38:42 [debug] 22939#0: *62 event timer add: 8: 60000:1503401982521
2017/08/22 19:38:42 [debug] 22939#0: *62 http finalize request: -4, "/admin_api/account/logout?" a:1, c:2
2017/08/22 19:38:42 [debug] 22939#0: *62 http request count:2 blk:0
2017/08/22 19:38:42 [debug] 22939#0: *62 post event 00007F557D41A6E0
2017/08/22 19:38:42 [debug] 22939#0: *62 delete posted event 00007F557D41A6E0
2017/08/22 19:38:42 [debug] 22939#0: *62 http run request: "/admin_api/account/logout?"
2017/08/22 19:38:42 [debug] 22939#0: *62 http upstream check client, write event:1, "/admin_api/account/logout"
2017/08/22 19:38:42 [debug] 22939#0: *62 http upstream recv(): -1 (11: Resource temporarily unavailable)
2017/08/22 19:38:42 [debug] 22939#0: *62 post event 00007F557D41A740
2017/08/22 19:38:42 [debug] 22939#0: *62 delete posted event 00007F557D41A740
2017/08/22 19:38:42 [debug] 22939#0: *62 http upstream request: "/admin_api/account/logout?"
2017/08/22 19:38:42 [debug] 22939#0: *62 http upstream send request handler
2017/08/22 19:38:42 [debug] 22939#0: *62 http upstream send request
2017/08/22 19:38:42 [debug] 22939#0: *62 http upstream send request body
2017/08/22 19:38:42 [debug] 22939#0: *62 chain writer buf fl:1 s:467
2017/08/22 19:38:42 [debug] 22939#0: *62 chain writer in: 00007F557D386008
2017/08/22 19:38:42 [debug] 22939#0: *62 writev: 467 of 467
2017/08/22 19:38:42 [debug] 22939#0: *62 chain writer out: 0000000000000000
2017/08/22 19:38:42 [debug] 22939#0: *62 event timer del: 8: 1503401982521
2017/08/22 19:38:42 [debug] 22939#0: *62 event timer add: 8: 60000:1503401982522
2017/08/22 19:38:42 [debug] 22939#0: *62 post event 00007F557D402730
2017/08/22 19:38:42 [debug] 22939#0: *62 post event 00007F557D41A740
2017/08/22 19:38:42 [debug] 22939#0: *62 delete posted event 00007F557D402730
2017/08/22 19:38:42 [debug] 22939#0: *62 http upstream request: "/admin_api/account/logout?"
2017/08/22 19:38:42 [debug] 22939#0: *62 http upstream process header
2017/08/22 19:38:42 [debug] 22939#0: *62 malloc: 00007F557D35A4D0:4096
2017/08/22 19:38:42 [debug] 22939#0: *62 recv: fd:8 325 of 4096
2017/08/22 19:38:42 [debug] 22939#0: *62 http proxy status 400 "400 Bad Request"
2017/08/22 19:38:42 [debug] 22939#0: *62 http proxy header: "Server: nginx/1.10.2"
2017/08/22 19:38:42 [debug] 22939#0: *62 http proxy header: "Date: Tue, 22 Aug 2017 11:38:42 GMT"
2017/08/22 19:38:42 [debug] 22939#0: *62 http proxy header: "Content-Type: text/html"
2017/08/22 19:38:42 [debug] 22939#0: *62 http proxy header: "Content-Length: 173"
2017/08/22 19:38:42 [debug] 22939#0: *62 http proxy header: "Connection: close"
2017/08/22 19:38:42 [debug] 22939#0: *62 http proxy header done
2017/08/22 19:38:42 [debug] 22939#0: *62 xslt filter header
2017/08/22 19:38:42 [debug] 22939#0: *62 HTTP/1.1 400 Bad Request
I'm forwarding requests to an internal server. The requests I receive have an /admin_api prefix, and they should be forwarded to the internal server with an /api prefix instead. Here is my nginx config:
server {
    listen 8006;
    server_name THIS_SERVER_IP;
    root /usr/share/nginx;
    error_log /var/log/nginx/xxx-error.log debug;

    location /admin_api {
        proxy_pass http://INTERNAL_SERVER_IP/api;
        proxy_set_header Host http://INTERNAL_SERVER_IP/;
        proxy_pass_request_headers On;
    }

    location / {
        try_files $uri /index.html;
    }
}
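For comparison, a Host header normally carries only a host name (optionally with a port), never a scheme or path, and the prefix swap is usually written with matching trailing slashes. A sketch of what such a block might look like (an assumption based on the config above, not a verified fix):

location /admin_api/ {
    # /admin_api/account/logout is sent upstream as /api/account/logout
    # because proxy_pass carries a URI part.
    proxy_pass http://INTERNAL_SERVER_IP/api/;
    # Host must be a bare host name, not a URL.
    proxy_set_header Host INTERNAL_SERVER_IP;
}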
I want to define an exact match for the '/' request by using a 'location = /' rule and specify 'index.html' as the response for this request. But why doesn't my setting work?
I have defined two locations as below (update: I've also posted the whole content of my nginx.conf at the bottom):
location = / {
    root /opt/www/static/;
    index index.html;
}

location / {
    root /opt/www/resource/;
}
The files in my /opt/www directory are as below (the comments after "# The content is:" describe each file's content).
/opt/www
├── resource
│ ├── hello.html # The content is: The hello.html from /resource
│ └── index.html # The content is: The index.html from /resource
└── static
└── index.html # The content is: The index.html from /static
But when I access the following URLs, the outputs are:
1. http://localhost or http://localhost/ - Response is: The index.html from /resource.
2. http://localhost/index.html - Response is: The index.html from /resource.
3. http://localhost/hello.html - Response is: The hello.html from /resource.
I think the results of #2 and #3 are correct, but for #1, why does it return resource/index.html as the response instead of static/index.html? According to the definition of location, I think the response should come from the static/index.html file:
using the “=” modifier it is possible to define an exact match of URI and location. If an exact match is found, the search terminates. For example, if a “/” request happens frequently, defining “location = /” will speed up the processing of these requests, as search terminates right after the first comparison.
Another question: how do I change my conf file so that static/index.html is served as the response for http://localhost or http://localhost/ using an exact match?
Updated
I found the trick after turning on the nginx debug log with error_log logs/error.log debug;. According to the log, for #1 the request is exactly matched by the first rule, location = /, and /opt/www/static/index.html is opened. But then the request is internally redirected to /index.html, the second rule is matched, and as a result /opt/www/resource/index.html is used.
But my question is: why does it redirect the request to /index.html when the exact rule (the first one) has already matched and /opt/www/static/index.html has been found? Can I stop the internal redirect with some configuration or another directive?
The nginx log is (my nginx version is 1.4.6):
2015/05/01 11:59:46 [debug] 112241#0: *1 http process request line
2015/05/01 11:59:46 [debug] 112241#0: *1 http request line: "GET / HTTP/1.1"
2015/05/01 11:59:46 [debug] 112241#0: *1 http uri: "/"
2015/05/01 11:59:46 [debug] 112241#0: *1 http args: ""
2015/05/01 11:59:46 [debug] 112241#0: *1 http exten: ""
*** omit some logs to process request header and others ***
2015/05/01 11:59:46 [debug] 112241#0: *1 event timer del: 3: 1430452846698
2015/05/01 11:59:46 [debug] 112241#0: *1 generic phase: 0
2015/05/01 11:59:46 [debug] 112241#0: *1 rewrite phase: 1
2015/05/01 11:59:46 [debug] 112241#0: *1 test location: "/"
2015/05/01 11:59:46 [debug] 112241#0: *1 using configuration "=/"
2015/05/01 11:59:46 [debug] 112241#0: *1 http cl:-1 max:1048576
2015/05/01 11:59:46 [debug] 112241#0: *1 rewrite phase: 3
2015/05/01 11:59:46 [debug] 112241#0: *1 post rewrite phase: 4
2015/05/01 11:59:46 [debug] 112241#0: *1 generic phase: 5
2015/05/01 11:59:46 [debug] 112241#0: *1 generic phase: 6
2015/05/01 11:59:46 [debug] 112241#0: *1 generic phase: 7
2015/05/01 11:59:46 [debug] 112241#0: *1 access phase: 8
2015/05/01 11:59:46 [debug] 112241#0: *1 access phase: 9
2015/05/01 11:59:46 [debug] 112241#0: *1 post access phase: 10
2015/05/01 11:59:46 [debug] 112241#0: *1 content phase: 11
2015/05/01 11:59:46 [debug] 112241#0: *1 open index "/opt/www/static/index.html"
2015/05/01 11:59:46 [debug] 112241#0: *1 internal redirect: "/index.html?"
2015/05/01 11:59:46 [debug] 112241#0: *1 rewrite phase: 1
2015/05/01 11:59:46 [debug] 112241#0: *1 test location: "/"
2015/05/01 11:59:46 [debug] 112241#0: *1 using configuration "/"
2015/05/01 11:59:46 [debug] 112241#0: *1 http cl:-1 max:1048576
2015/05/01 11:59:46 [debug] 112241#0: *1 rewrite phase: 3
2015/05/01 11:59:46 [debug] 112241#0: *1 post rewrite phase: 4
2015/05/01 11:59:46 [debug] 112241#0: *1 generic phase: 5
2015/05/01 11:59:46 [debug] 112241#0: *1 generic phase: 6
2015/05/01 11:59:46 [debug] 112241#0: *1 generic phase: 7
2015/05/01 11:59:46 [debug] 112241#0: *1 access phase: 8
2015/05/01 11:59:46 [debug] 112241#0: *1 access phase: 9
2015/05/01 11:59:46 [debug] 112241#0: *1 post access phase: 10
2015/05/01 11:59:46 [debug] 112241#0: *1 content phase: 11
2015/05/01 11:59:46 [debug] 112241#0: *1 content phase: 12
2015/05/01 11:59:46 [debug] 112241#0: *1 content phase: 13
2015/05/01 11:59:46 [debug] 112241#0: *1 content phase: 14
2015/05/01 11:59:46 [debug] 112241#0: *1 content phase: 15
2015/05/01 11:59:46 [debug] 112241#0: *1 http filename: "/opt/www/resource/index.html"
Updated again to post the whole content of my nginx.conf.
worker_processes 1;
error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name localhost;

        location = / {
            root /opt/www/static/;
            index index.html;
        }

        location / {
            root /opt/www/resource/;
        }
    }
}
After doing some research and googling, I found a solution to this question myself :-). Replacing the index directive with try_files does not trigger an internal redirect when static/index.html is found. The final definitions are:
location = / {
    root /opt/www/static/;
    #index index.html;
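    # Unlike index, try_files serves /opt/www/static/index.html directly
    # within this location, so no internal redirect to /index.html is
    # triggered and the prefix location below is never re-evaluated.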
    try_files /index.html =404;
}

location / {
    root /opt/www/resource/;
}