I'm trying to configure a location directive on my nginx web server (Ubuntu).
I can access:
http://127.0.0.1/app1/
But when I try to access it without the trailing slash, like:
http://127.0.0.1/app1
I get an HTTP/1.1 301 Moved Permanently response.
I have the following nginx config:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Looks like everything is OK.
And the following default.conf:
server {
listen 80 default_server;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name _;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
location /app1/ {
root /var/www/html/;
index index.html;
try_files $uri $uri/ /app1/index.html;
}
}
Curl output
http://127.0.0.1/app1/
root@ubuntu-test:/etc/nginx/sites-available# curl 127.0.0.1/app1/ -Iv
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> HEAD /app1/ HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Server: nginx/1.14.0 (Ubuntu)
Server: nginx/1.14.0 (Ubuntu)
< Date: Thu, 20 Feb 2020 09:14:12 GMT
Date: Thu, 20 Feb 2020 09:14:12 GMT
< Content-Type: text/html
Content-Type: text/html
< Content-Length: 5
Content-Length: 5
< Last-Modified: Tue, 18 Feb 2020 10:49:53 GMT
Last-Modified: Tue, 18 Feb 2020 10:49:53 GMT
< Connection: keep-alive
Connection: keep-alive
< ETag: "5e4bc151-5"
ETag: "5e4bc151-5"
< Accept-Ranges: bytes
Accept-Ranges: bytes
<
* Connection #0 to host 127.0.0.1 left intact
http://127.0.0.1/app1
root@ubuntu-test:/etc/nginx/sites-available# curl 127.0.0.1/app1 -Iv
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> HEAD /app1 HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
HTTP/1.1 301 Moved Permanently
< Server: nginx/1.14.0 (Ubuntu)
Server: nginx/1.14.0 (Ubuntu)
< Date: Thu, 20 Feb 2020 09:19:31 GMT
Date: Thu, 20 Feb 2020 09:19:31 GMT
< Content-Type: text/html
Content-Type: text/html
< Content-Length: 194
Content-Length: 194
< Location: http://127.0.0.1/app1/
Location: http://127.0.0.1/app1/
< Connection: keep-alive
Connection: keep-alive
Why does this happen?
Nginx selects the location / block to process the URI /app1, as no other location is a better match. See how Nginx processes a request.
The $uri/ term of the try_files statement informs Nginx to append a / to any URI that matches a directory. The directory /var/www/html/app1 matches this requirement, so a 301 redirection is generated to append a / to the URI. See this document for details.
In addition, the default behaviour for a URI which ends with a / and points to a directory is to search that directory for a file that matches the index directive. See this document for details.
It is possible to deviate from this default behaviour, but you will need to make a number of changes to your configuration. The location /app1/ needs to lose the trailing / if you want it to match /app1. Your try_files directives need to lose the $uri/ term, if you want to avoid the 301 redirect. You will also lose default index processing, so the index directive will be useless.
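For illustration, here is a minimal sketch of what those changes could look like, reusing the paths from your own config (untested; the fallback to /app1/index.html assumes that file is your app's entry page, as in your original try_files):
location /app1 {
root /var/www/html;
# no $uri/ term here, so the directory redirect is never triggered
try_files $uri /app1/index.html;
}
With this, a request for /app1 fails the $uri file test (it maps to a directory, not a regular file) and falls through to the /app1/index.html fallback via an internal redirect, so no 301 is generated.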
Related
I'm trying to set up an HTTPS proxy on nginx. When I test it against different sites, I always get two HTTP codes: a redirect to the https scheme (301/302) and a 400 after CONNECT.
proxy config
server {
error_log /var/log/nginx/nginx.err;
access_log /var/log/nginx/nginx.acc;
resolver 127.0.0.53;
listen 80; #default_server;
listen 443 ssl default_server;
server_name proxy;
ssl_certificate /etc/letsencrypt/live/proxy/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/proxy/privkey.pem;
ssl_trusted_certificate /etc/letsencrypt/live/proxy/chain.pem;
proxy_ssl_certificate /etc/letsencrypt/live/proxy/fullchain.pem;
proxy_ssl_certificate_key /etc/letsencrypt/live/proxy/privkey.pem;
proxy_ssl_trusted_certificate /etc/letsencrypt/live/proxy/chain.pem;
large_client_header_buffers 1 128k;
proxy_ssl_verify on;
proxy_ssl_session_reuse off;
ssl_verify_client off;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_ciphers RC4:HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location / {
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header HOST $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Access-Control-Allow-Origin *;
proxy_buffering on;
proxy_buffers 8 16k;
proxy_buffer_size 16k;
proxy_pass http://$host$request_uri;
proxy_read_timeout 1800;
}
}
Output of curl -x localhost:80 goo.gl -I -L (goo.gl is just an example; I get this issue with every site):
HTTP/1.1 301 Moved Permanently
Server: nginx/1.18.0 (Ubuntu)
Date: Fri, 10 Sep 2021 12:32:42 GMT
Content-Type: application/binary
Content-Length: 0
Connection: keep-alive
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: Mon, 01 Jan 1990 00:00:00 GMT
Location: https://goo.gl/
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
HTTP/1.1 400 Bad Request
Server: nginx/1.18.0 (Ubuntu)
Date: Fri, 10 Sep 2021 12:32:42 GMT
Content-Type: text/html
Content-Length: 166
Connection: close
The same curl request with -v:
* Trying ::1:80...
* TCP_NODELAY set
* connect to ::1 port 80 failed: Connection refused
* Trying 127.0.0.1:80...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 80 (#0)
> HEAD http://goo.gl/ HTTP/1.1
> Host: goo.gl
> User-Agent: curl/7.68.0
> Accept: */*
> Proxy-Connection: Keep-Alive
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 301 Moved Permanently
HTTP/1.1 301 Moved Permanently
< Server: nginx/1.18.0 (Ubuntu)
Server: nginx/1.18.0 (Ubuntu)
< Date: Fri, 10 Sep 2021 12:34:02 GMT
Date: Fri, 10 Sep 2021 12:34:02 GMT
< Content-Type: application/binary
Content-Type: application/binary
< Content-Length: 0
Content-Length: 0
< Connection: keep-alive
Connection: keep-alive
< Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
< Pragma: no-cache
Pragma: no-cache
< Expires: Mon, 01 Jan 1990 00:00:00 GMT
Expires: Mon, 01 Jan 1990 00:00:00 GMT
< Location: https://goo.gl/
Location: https://goo.gl/
< X-XSS-Protection: 0
X-XSS-Protection: 0
< X-Frame-Options: SAMEORIGIN
X-Frame-Options: SAMEORIGIN
< X-Content-Type-Options: nosniff
X-Content-Type-Options: nosniff
<
* Connection #0 to host localhost left intact
* Issue another request to this URL: 'https://goo.gl/'
* Hostname localhost was found in DNS cache
* Trying ::1:80...
* TCP_NODELAY set
* connect to ::1 port 80 failed: Connection refused
* Trying 127.0.0.1:80...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 80 (#1)
* allocate connect buffer!
* Establish HTTP proxy tunnel to goo.gl:443
> CONNECT goo.gl:443 HTTP/1.1
> Host: goo.gl:443
> User-Agent: curl/7.68.0
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 400 Bad Request
HTTP/1.1 400 Bad Request
< Server: nginx/1.18.0 (Ubuntu)
Server: nginx/1.18.0 (Ubuntu)
< Date: Fri, 10 Sep 2021 12:34:02 GMT
Date: Fri, 10 Sep 2021 12:34:02 GMT
< Content-Type: text/html
Content-Type: text/html
< Content-Length: 166
Content-Length: 166
< Connection: close
Connection: close
<
* Received HTTP code 400 from proxy after CONNECT
* CONNECT phase completed!
* Closing connection 1
curl: (56) Received HTTP code 400 from proxy after CONNECT
If I run curl without the proxy, the output contains messages showing successful TLS handshakes.
I receive a 404 error when calling the URL http://10.240.0.133/swagger. Below is a snippet of my nginx.conf file. I need to append index.html to the end of the URI, so I added a rewrite rule.
server {
listen 80;
listen [::]:80;
server_name localhost;
server_name 10.240.0.133;
server_name 127.0.0.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
access_log /var/log/nginx/resources-reverse-access.log;
error_log /var/log/nginx/resources-reverse-error.log;
location /swagger {
rewrite ^/swagger/index.html break;
proxy_pass http://52.177.131.103:8082/;
}
}
When I visit the URL with curl -v http://10.240.0.133/swagger, a 404 is thrown:
* Trying 10.240.0.133...
* TCP_NODELAY set
* Connected to 10.240.0.133 (10.240.0.133) port 80 (#0)
> GET /swagger HTTP/1.1
> Host: 10.240.0.133
> User-Agent: curl/7.55.1
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: nginx/1.14.0 (Ubuntu)
< Date: Wed, 18 Mar 2020 14:41:50 GMT
< Content-Length: 0
< Connection: keep-alive
<
* Connection #0 to host 10.240.0.133 left intact
I believe your rewrite rule is incorrect. It should look more like this:
location /swagger {
rewrite ^\/swagger\/?.*?$ /swagger/index.html break;
proxy_pass http://52.177.131.103:8082/;
}
But I believe this is still not correct, since you have not set a root directive for this server.
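If the Swagger UI also needs to load its assets (CSS/JS) under /swagger/, a narrower rewrite that only targets the bare /swagger URI may be closer to what you want. This is only a sketch, assuming the upstream at 52.177.131.103:8082 expects the /swagger/... paths unchanged:
location /swagger {
# only /swagger or /swagger/ is rewritten; deeper asset paths pass through untouched
rewrite ^/swagger/?$ /swagger/index.html break;
proxy_pass http://52.177.131.103:8082;
}
Because rewrite ... break changes the URI before proxying, the rewritten URI (or the original one, for deeper paths) is what gets sent to the upstream.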
I ran into a problem while testing an nginx server patched with the Quiche implementation of HTTP/3 using curl: when I send multiple consecutive requests for a small HTML page (~1 KB), nginx responds correctly:
root@cUrlClient:~# ./curl/src/curl https://192.168.19.128?[1-5] -Ik --http3
[1/5]: https://192.168.19.128?1 --> <stdout>
--_curl_--https://192.168.19.128?1
HTTP/3 200
server: nginx/1.16.1
date: Mon, 25 Nov 2019 13:44:21 GMT
content-type: text/html
content-length: 924
last-modified: Mon, 25 Nov 2019 12:07:59 GMT
etag: "5ddbc41f-39c"
alt-svc: h3-23=":443"; ma=86400
accept-ranges: bytes
[2/5]: https://192.168.19.128?2 --> <stdout>
--_curl_--https://192.168.19.128?2
HTTP/3 200
server: nginx/1.16.1
date: Mon, 25 Nov 2019 13:44:21 GMT
content-type: text/html
content-length: 924
last-modified: Mon, 25 Nov 2019 12:07:59 GMT
etag: "5ddbc41f-39c"
alt-svc: h3-23=":443"; ma=86400
accept-ranges: bytes
[3/5]: https://192.168.19.128?3 --> <stdout>
--_curl_--https://192.168.19.128?3
HTTP/3 200
server: nginx/1.16.1
date: Mon, 25 Nov 2019 13:44:21 GMT
content-type: text/html
content-length: 924
last-modified: Mon, 25 Nov 2019 12:07:59 GMT
etag: "5ddbc41f-39c"
alt-svc: h3-23=":443"; ma=86400
accept-ranges: bytes
[4/5]: https://192.168.19.128?4 --> <stdout>
--_curl_--https://192.168.19.128?4
HTTP/3 200
server: nginx/1.16.1
date: Mon, 25 Nov 2019 13:44:21 GMT
content-type: text/html
content-length: 924
last-modified: Mon, 25 Nov 2019 12:07:59 GMT
etag: "5ddbc41f-39c"
alt-svc: h3-23=":443"; ma=86400
accept-ranges: bytes
[5/5]: https://192.168.19.128?5 --> <stdout>
--_curl_--https://192.168.19.128?5
HTTP/3 200
server: nginx/1.16.1
date: Mon, 25 Nov 2019 13:44:21 GMT
content-type: text/html
content-length: 924
last-modified: Mon, 25 Nov 2019 12:07:59 GMT
etag: "5ddbc41f-39c"
alt-svc: h3-23=":443"; ma=86400
accept-ranges: bytes
If I make a single request for a medium/large HTML file, nginx again responds correctly, but when I make multiple consecutive requests for a medium/large HTML page (>=30 KB), nginx stops responding after an arbitrary number of requests (normally 2-5). Here's an example of 10 requests to the https://cloudflare-quic.com HTML page (which I downloaded onto my server):
root@cUrlClient:~# ./curl/src/curl -Ik https://192.168.19.128/cloudflare.html?[1-10] --http3 -v
[1/10]: https://192.168.19.128/cloudflare.html?1 --> <stdout>
--_curl_--https://192.168.19.128/cloudflare.html?1
* Trying 192.168.19.128:443...
* Sent QUIC client Initial, ALPN: h3-23
* h3 [:method: HEAD]
* h3 [:path: /cloudflare.html?1]
* h3 [:scheme: https]
* h3 [:authority: 192.168.19.128]
* h3 [user-agent: curl/7.67.0-DEV]
* h3 [accept: */*]
* Using HTTP/3 Stream ID: 0 (easy handle 0x5614ee569460)
> HEAD /cloudflare.html?1 HTTP/3
> Host: 192.168.19.128
> user-agent: curl/7.67.0-DEV
> accept: */*
>
< HTTP/3 200
HTTP/3 200
< server: nginx/1.16.1
server: nginx/1.16.1
< date: Mon, 25 Nov 2019 13:53:43 GMT
date: Mon, 25 Nov 2019 13:53:43 GMT
< content-type: text/html
content-type: text/html
< content-length: 106072
content-length: 106072
< vary: Accept-Encoding
vary: Accept-Encoding
< etag: "5ddbdc21-19e58"
etag: "5ddbdc21-19e58"
< alt-svc: h3-23=":443"; ma=86400
alt-svc: h3-23=":443"; ma=86400
< accept-ranges: bytes
accept-ranges: bytes
<
* Excess found: excess = 27523 url = /cloudflare.html (zero-length body)
* Connection #0 to host 192.168.19.128 left intact
[2/10]: https://192.168.19.128/cloudflare.html?2 --> <stdout>
--_curl_--https://192.168.19.128/cloudflare.html?2
* Found bundle for host 192.168.19.128: 0x5614ee56db00 [can multiplex]
* Re-using existing connection! (#0) with host 192.168.19.128
* Connected to 192.168.19.128 (192.168.19.128) port 443 (#0)
* h3 [:method: HEAD]
* h3 [:path: /cloudflare.html?2]
* h3 [:scheme: https]
* h3 [:authority: 192.168.19.128]
* h3 [user-agent: curl/7.67.0-DEV]
* h3 [accept: */*]
* Using HTTP/3 Stream ID: 4 (easy handle 0x5614ee56b2b0)
> HEAD /cloudflare.html?2 HTTP/3
> Host: 192.168.19.128
> user-agent: curl/7.67.0-DEV
> accept: */*
>
* Got h3 for stream 0, expects 4
* Got h3 for stream 0, expects 4
* Got h3 for stream 0, expects 4
* Got h3 for stream 0, expects 4
[...]
It gets stuck on this screen, repeating "Got h3 for stream 0, expects 4". I also noticed, when testing with smaller pages, that the smaller the file, the more requests are fulfilled before the server stops responding and the error "Got h3 for stream x, expects y" starts printing, with the relation y = x + 4.
The access.log and error.log are also clean, which makes me think some kind of parameter might be missing in the server configuration, but I'm not sure about it.
Does anyone have an idea of what the problem could be?
My config
nginx version:
nginx version: nginx/1.16.1
built by gcc 9.2.1 20191008 (Ubuntu 9.2.1-9ubuntu2)
built with OpenSSL 1.1.0 (compatible; BoringSSL) (running with BoringSSL)
TLS SNI support enabled
configure arguments:
--prefix=/root/nginx-1.16.1
--with-http_ssl_module
--with-http_v2_module
--with-http_v3_module
--with-openssl=../quiche/deps/boringssl
--with-quiche=../quiche
nginx.conf:
user root;
# you must set worker processes based on your CPU cores, nginx does not benefit from setting more than that
worker_processes auto; #some last versions calculate it automatically
# number of file descriptors used for nginx
# the limit for the maximum FDs on the server is usually set by the OS.
# if you don't set FD's then OS settings will be used which is by default 2000
worker_rlimit_nofile 100000;
# only log critical errors
error_log logs/error.log crit;
error_log logs/error.log debug;
error_log logs/error.log notice;
error_log logs/error.log info;
# provides the configuration file context in which the directives that affect connection processing are specified.
events {
# determines how much clients will be served per worker
# max clients = worker_connections * worker_processes
# max clients is also limited by the number of socket connections available on the system (~64k)
worker_connections 4000;
# optimized to serve many clients with each thread, essential for linux -- for testing environment
use epoll;
# accept as many connections as possible, may flood worker connections if set too low -- for testing environment
multi_accept on;
}
http {
# cache informations about FDs, frequently accessed files
# can boost performance, but you need to test those values
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
# to boost I/O on HDD we can disable access logs
access_log on;
# copies data between one FD and other from within the kernel
# faster than read() + write()
sendfile on;
# send headers in one piece, it is better than sending them one by one
tcp_nopush on;
# don't buffer data sent, good for small data bursts in real time
tcp_nodelay on;
# reduce the data that needs to be sent over network -- for testing environment
gzip on;
# gzip_static on;
gzip_min_length 10240;
gzip_comp_level 1;
gzip_vary on;
gzip_disable msie6;
gzip_proxied expired no-cache no-store private auth;
gzip_types
# text/html is always compressed by HttpGzipModule
text/css
text/javascript
text/xml
text/plain
text/x-component
application/javascript
application/x-javascript
application/json
application/xml
application/rss+xml
application/atom+xml
font/truetype
font/opentype
application/vnd.ms-fontobject
image/svg+xml;
# allow the server to close connection on non responding client, this will free up memory
reset_timedout_connection on;
# request timed out -- default 60
client_body_timeout 10;
# if client stop responding, free up memory -- default 60
send_timeout 2;
# server will close connection after this time -- default 75
keepalive_timeout 30;
# number of requests client can make over keep-alive -- for testing environment
keepalive_requests 100000;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
########################################################
########################################################
server {
access_log logs/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
gzip on;
# Enable QUIC and HTTP/3.
listen 443 quic reuseport;
# Enable HTTP/2 (optional).
listen 443 ssl http2;
ssl_certificate certificate.pem;
ssl_certificate_key key.pem;
# Enable all TLS versions (TLSv1.3 is required for QUIC).
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
# Add Alt-Svc header to negotiate HTTP/3.
add_header alt-svc 'h3-23=":443"; ma=86400';
listen 80;
server_name localhost;
location / {
root html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
###Limits the maximum number of concurrent HTTP/3 streams in a connection.
http3_max_concurrent_streams 256;
###Limits the maximum number of requests that can be served on a single HTTP/3 connection,
###after which the next client request will lead to connection closing and the need of establishing a new connection.
http3_max_requests 20000;
###Limits the maximum size of the entire request header list after QPACK decompression.
http3_max_header_size 100000k;
###Sets the per-connection incoming flow control limit.
http3_initial_max_data 2000000m;
###Sets the per-stream incoming flow control limit.
http3_initial_max_stream_data 1000000m;
###Sets the timeout of inactivity after which the connection is closed.
http3_idle_timeout 1500000m;
}
########################################################
########################################################
}
Curl version
curl 7.67.0-DEV (x86_64-pc-linux-gnu) libcurl/7.67.0-DEV BoringSSL zlib/1.2.11 nghttp2/1.39.2 quiche/0.1.0
Release-Date: [unreleased]
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS HTTP2 HTTP3 HTTPS-proxy IPv6 Largefile libz NTLM NTLM_WB SSL UnixSockets
EDIT
We discussed this issue on the Cloudflare quiche repo and found that it's a known curl problem: GitHub Issue
I am trying to add a header "X-Body" to the response and set it to the body of the request, in my nginx conf.
pid logs/nginx.pid.test;
error_log logs/error.log.test debug;
worker_rlimit_core 500M;
worker_processes 1;
master_process off;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/json;
sendfile on;
keepalive_timeout 65;
variables_hash_max_size 2048;
server {
listen 65311;
server_name test;
access_log logs/test;
location / {
echo "Nginx response";
proxy_pass_request_headers on;
default_type application/json;
echo_read_request_body;
add_header X-Body $request_body;
return 200;
}
}
}
I expect the X-Body header in the response to the following curl request, but it isn't there.
curl -vk "localhost:65311" -d '{"key":"value"}' -H "Content-Type: application/json"
* Rebuilt URL to: localhost:65311/
* Trying ::1...
* TCP_NODELAY set
* connect to ::1 port 65311 failed: Connection refused
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 65311 (#0)
> POST / HTTP/1.1
> Host: localhost:65311
> User-Agent: curl/7.52.1
> Accept: */*
> Content-Type: application/json
> Content-Length: 15
>
* upload completely sent off: 15 out of 15 bytes
< HTTP/1.1 200 OK
< Server: Test
< Date: Wed, 09 Oct 2019 19:28:14 GMT
< Content-Type: application/json
< Content-Length: 0
< Connection: keep-alive
<
* Curl_http_done: called premature == 0
* Connection #0 to host localhost left intact
Also, nginx is built with the echo module. How can I add the posted JSON as a response header?
nginx does not add empty headers, and $request_body is empty here, which is why you don't see it. As per the docs, $request_body is only populated under certain conditions, specifically when the request is proxied.
Here's a config that works:
http {
log_format postdata escape=json '"$request_body"';
server {
listen 65311;
server_name test;
location /success {
return 200;
}
location / {
proxy_redirect off;
proxy_pass_request_body on;
proxy_pass $scheme://127.0.0.1:$server_port/success;
add_header X-Body $request_body;
access_log logs/test.log postdata;
}
}
}
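One detail worth noting: add_header only emits a header for a fixed set of success and redirect status codes by default. If the internal /success hop could ever return a different status and you still want the header, the always parameter (available since nginx 1.7.5) can be appended, e.g.:
add_header X-Body $request_body always;
With the config above, re-running the curl command from the question should show the posted JSON echoed back in the X-Body response header.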
I have a PHP-FPM + Nginx setup. One of my PHP applications sets an invalid Content-Length header, so I'm trying to strip it using fastcgi_hide_header, but it doesn't work. It works for headers other than Content-Length, so I assume there is a problem with that header in particular.
What is the correct way of doing this? I cannot modify the PHP application to fix the source of the problem.
server {
listen 8000 default_server;
root /var/www;
index index.php index.html index.htm;
rewrite_log on;
# Make site accessible from http://localhost/
server_name localhost;
location / {
try_files $uri $uri/ /index.php;
}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ [^/]\.php(/|$) {
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
# NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
# With php5-cgi alone:
fastcgi_hide_header X-Fake-Header;
fastcgi_hide_header Content-Length;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
}
Output if I remove the code in PHP that sets the headers (this is the desired output):
< HTTP/1.1 200 OK
* Server nginx/1.4.1 (Ubuntu) is not blacklisted
< Server: nginx/1.4.1 (Ubuntu)
< Date: Thu, 13 Feb 2014 01:58:07 GMT
< Content-Type: text/html
< Transfer-Encoding: chunked
< Connection: keep-alive
< X-Powered-By: PHP/5.5.3-1ubuntu2.1
If I leave the code in, but use the above nginx config, I get this:
< HTTP/1.1 200 OK
* Server nginx/1.4.1 (Ubuntu) is not blacklisted
< Server: nginx/1.4.1 (Ubuntu)
< Date: Thu, 13 Feb 2014 01:59:09 GMT
< Content-Type: text/html
< Content-Length: 6
< Connection: keep-alive
< X-Powered-By: PHP/5.5.3-1ubuntu2.1
I ended up having to use the HttpHeadersMore module in Nginx (on Ubuntu, this is included with nginx-extras but not nginx-full).
With the module installed, I just added the following to my Nginx configuration:
more_clear_headers Content-Length;
This worked as expected.
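For context, a sketch of where the directive can go, using the PHP location block from the question (headers-more directives are valid at http, server, and location level):
location ~ [^/]\.php(/|$) {
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
# provided by the headers-more module; strips the bogus Content-Length from the FastCGI response
more_clear_headers Content-Length;
}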