I'm running WordPress on nginx+fpm (port 80) behind another nginx with SSL set up and such.
I have some background in doing these things right, and WordPress is working fine: it delivers the site, lets me into wp-admin, etc. However, I've stumbled on the fact that WordPress's index.php does not honor HTTPS=on mode for some reason.
Test setup:
WordPress is set to deliver pages at the https://example.com/info/ URL from IP 10.130.0.4 behind the proxy.
1.
# curl -I -H "Host: example.com" -H "X-Forwarded-Proto: https" http://10.130.0.4/wp-login.php
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 27 Jan 2021 18:29:27 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Expires: Wed, 11 Jan 1984 05:00:00 GMT
Cache-Control: no-cache, must-revalidate, max-age=0
Set-Cookie: wordpress_test_cookie=WP+Cookie+check; path=/info/; secure
Set-Cookie: wordpress_test_cookie=WP+Cookie+check; path=/; secure
X-Frame-Options: SAMEORIGIN
2.
# curl -I -H "Host: example.com" -H "X-Forwarded-Proto: https" http://10.130.0.4/info/
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 27 Jan 2021 18:32:13 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Link: <https://example.com/info/wp-json/>; rel="https://api.w.org/"
3.
# curl -I -H "Host: example.com" -H "X-Forwarded-Proto: https" http://10.130.0.4/
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 27 Jan 2021 18:35:52 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Link: <https://example.com/info/wp-json/>; rel="https://api.w.org/"
And number 4, the one that makes me almost literally bang my head against the table:
# curl -I -H "Host: example.com" -H "X-Forwarded-Proto: https" http://10.130.0.4/index.php
HTTP/1.1 301 Moved Permanently
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 27 Jan 2021 18:39:21 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
X-Redirect-By: WordPress
Location: https://example.com/
Nginx is set up using the most widespread gist for it:
location / {
    # This is cool because no PHP is touched for static content.
    # Include the "?$args" part so non-default permalinks don't break when using a query string.
    try_files $uri $uri/ /index.php?$args;
}

location ~ \.php$ {
    # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
    include fastcgi_params;
    fastcgi_intercept_errors on;
    fastcgi_pass php;
    # The following parameters can also be included in the fastcgi_params file
    fastcgi_param REQUEST_SCHEME https;
    fastcgi_param HTTPS on;
    fastcgi_param HTTP_X_FORWARDED_PROTO https;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}

location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    expires max;
    log_not_found off;
}
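For completeness, the SSL-terminating front proxy passes requests through roughly like this (a reconstructed sketch rather than a verbatim copy of my config; only the upstream IP is taken from the tests above, and the certificate paths are illustrative):

server {
    listen 443 ssl;
    server_name example.com;

    # Illustrative certificate paths
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://10.130.0.4;
        proxy_set_header Host $host;
        # Pass the original scheme along, exactly as the curl tests simulate.
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
    }
}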
If I am missing anything obvious here, please point me to it.
Again, WP otherwise works wholly fine, but the AIOSEO plugin tries to deliver robots.txt via 'http://10.130.0.4/index.php?aioseo_robots_path=root' and gets redirected to https://example.com, which is a different (static) site.
The Setup
I am aiming for a minimalistic configuration, mostly built on defaults
The goal is to serve 10-15 videos, each 1-3 seconds long and mostly 2-3 MB in size
I have a Raspberry Pi running the official nginx Docker image
My Assumptions
nginx is a really powerful tool that provides all sorts of optimisation capabilities, but if I simply want to serve videos like the above, it should work more or less out of the box
The Issue
The videos do not play at all
When accessing the videos directly, there are two scenarios I encounter:
a) HTTP 200 followed by one or more HTTP 206 Partials, and the video does not play, OR
b) HTTP 200 followed by a cancelled request, and the video obviously does not play here either
Furthermore
Multiple videos have been tested (default mobile output, VLC-converted, HandBrake web-optimized)
nginx (Default Configs provided by the official image)
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    gzip on;

    # SSL Settings
    # Logging Settings
}
In the mime.types, I do have video/mp4.
Serving static files
The videos are located in a folder, which is mounted as /usr/share/x
server {
    ...

    location / {
        # Default nginx files
    }

    location ~ \.mp4$ {
        # When I try to use this block, all video requests end up being 404s
    }

    location /x/ {
        root /usr/share/;
    }
}
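A side note on the unused regex block: as far as I understand (an assumption on my part, not verified), a server-level regex location beats the /x/ prefix location but does not inherit its root, so those requests fall back to the default document root and 404. A sketch of nesting it instead, which keeps the root:

location /x/ {
    root /usr/share/;

    location ~ \.mp4$ {
        # Inherits root /usr/share/ from the enclosing location.
        # The mp4 directive assumes nginx is built with ngx_http_mp4_module
        # (the official image includes it); it enables mp4 pseudo-streaming.
        mp4;
    }
}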
Given that this is a micro app, there are obviously other files being served, and they work fine. There is no issue with the locations and routing, only with the videos.
Initial Request
GET #### HTTP/1.1
Host: ####
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
sec-ch-ua: ####
sec-ch-ua-mobile: ?0
DNT: 1
Upgrade-Insecure-Requests: 1
User-Agent: ####
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Sec-Fetch-Site: none
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9,hu;q=0.8,sk;q=0.7
sec-gpc: 1
Initial Response
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Thu, 14 Jan 2021 19:50:01 GMT
Content-Type: video/mp4
Content-Length: 1620690
Last-Modified: Thu, 14 Jan 2021 19:05:25 GMT
Connection: keep-alive
ETag: "600095f5-18bad2"
Accept-Ranges: bytes
Content-Security-Policy: upgrade-insecure-requests
Second Request (Leading to the HTTP 206)
GET #### HTTP/1.1
Host: ####
Connection: keep-alive
sec-ch-ua: ####
DNT: 1
Accept-Encoding: identity;q=1, *;q=0
sec-ch-ua-mobile: ?0
User-Agent: ####
Accept: */*
Sec-Fetch-Site: same-origin
Sec-Fetch-Mode: no-cors
Sec-Fetch-Dest: video
Referer: ####
Accept-Language: en-US,en;q=0.9,hu;q=0.8,sk;q=0.7
sec-gpc: 1
Range: bytes=0-
The (sometimes cancelled) Partial Content
HTTP/1.1 206 Partial Content
Server: nginx/1.14.2
Date: Thu, 14 Jan 2021 20:03:20 GMT
Content-Type: video/mp4
Last-Modified: Thu, 14 Jan 2021 19:05:25 GMT
Connection: keep-alive
ETag: "600095f5-18bad2"
Content-Range: bytes 0-1620689/1620690
Content-Length: 1620690
Content-Security-Policy: upgrade-insecure-requests
Final Thoughts and Questions
I'm a senior front-end developer. I'm far from having advanced back-end or DevOps knowledge, but I think I do well for myself. However, I have spent the better part of the past 2-3 days trying to serve small videos from my Raspberry Pi. Unsuccessfully.
Is this really an nginx configuration issue?
If so, what am I missing? How do I make this work?
If this is not nginx, what else could it be?
UPDATE (1): cURL
The file that I have chosen to test is 1620720 bytes. I tried to cURL it to see if I get back the same, working video.
curl https://domain.tld/x/nope.mp4 --output ~/retrieved.mp4
The retrieved video is 1620690 bytes, 30 bytes less than the original (gzip?), and it appears to be corrupted. I cannot play the video on my machine.
Checking the video in Firefox, the request and response headers seem right.
So: a hackathon-like approach where you skip certain configuration steps is not really beneficial. Even when you want to do things quick and dirty because time is of the essence, you should still set .mp4 files to be treated as binary in git (even better, use LFS). It was never an nginx issue: git had mangled the videos as text, hence the missing 30 bytes.
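For anyone landing here, a minimal .gitattributes sketch of that fix (the LFS line is what git lfs track '*.mp4' would generate, assuming LFS is installed):

# .gitattributes
# Never apply text/eol conversion to videos.
*.mp4 binary
# Or, with Git LFS:
# *.mp4 filter=lfs diff=lfs merge=lfs -text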
I use Kong as my API gateway, running in a Docker container. By executing the following command from the Docker host, I get the correct answer.
root@prod-s-swarm01:~# curl -i -X GET --url http://prod-s-swarm:8000 --header 'Host: example.com' --header 'apikey: auth-key-maks'
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Date: Thu, 24 Oct 2019 11:16:10 GMT
Server: Apache/2.4.7 (Ubuntu)
Vary: Accept-Encoding
X-RateLimit-Remaining-hour: 4
X-RateLimit-Limit-second: 2
X-RateLimit-Remaining-second: 1
X-RateLimit-Limit-hour: 5
X-Kong-Upstream-Latency: 25
X-Kong-Proxy-Latency: 139
Via: kong/1.3.0
<!DOCTYPE html>
<html lang="ru">
<head>
.......
But the same request through my nginx proxy returns the wrong answer:
root@prod-s-swarm01:~# curl -i -X GET --url https://kong.myserver.com --header 'Host: example.com' --header 'apikey: auth-key-maks'
HTTP/1.1 200 OK
Server: nginx
Date: Thu, 24 Oct 2019 11:14:33 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 97
Connection: keep-alive
X-Powered-By: Express
ETag: W/"61-Mn0BCF+92vC7dF087oyDAFsiE"
{"Status":"ERROR","Error":"Bad authorize","ErrorDesc":"Не верная авторизация"}
(the ErrorDesc translates to "Invalid authorization")
My nginx proxy config:
server {
    listen 443 ssl;
    server_name kong.myserver.com;

    ssl_certificate /etc/letsencrypt/live/appgw/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/appgw/privkey.pem;

    location / {
        proxy_pass http://prod-s-swarm:8000;
        proxy_set_header Host $host;
    }
}
I also tried using $http_host instead; this did not work either.
Requests with other Host headers fall into the default_server on nginx. Alternatively, you need to list all the domains of the Kong API in server_name, as sketched below.
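A hedged sketch of that second option (example.com here stands in for whichever Hosts your Kong routes match on; everything else is copied from the question):

server {
    listen 443 ssl;
    # List every Host the Kong API expects, so a request carrying
    # "Host: example.com" matches this block instead of the default_server.
    server_name kong.myserver.com example.com;

    ssl_certificate /etc/letsencrypt/live/appgw/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/appgw/privkey.pem;

    location / {
        proxy_pass http://prod-s-swarm:8000;
        # Forward the client's original Host header to Kong unchanged.
        proxy_set_header Host $host;
    }
}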
The problem: nginx is missing the content type for woff2.
curl -s -I -X GET https://.../Montserrat-Medium.woff2
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 10 Oct 2018 10:30:54 GMT
Content-Length: 118676
Connection: keep-alive
Keep-Alive: timeout=60
Last-Modified: Wed, 10 Oct 2018 10:27:24 GMT
ETag: "1cf94-577dd4cdf1e25"
Accept-Ranges: bytes
What I've tried:
1. Added application/woff2 woff2; to /etc/nginx/mime.types (also application/x-font-woff2, etc.)
2. Added this block to the server section, and it works (so the location does match):
location ~* ^.+.woff2$ {
    return 403;
}
3. Changed the block above to this, and still had no success:
location ~* ^.+\.woff2$ {
    proxy_pass https://82.202.226.111:8443;
    add_header Content-type application/woff2;
    root /var/web/public_shtml;
    access_log off;
    expires 7d;
    try_files $uri @fallback;
}
I've also reviewed the nginx -T configuration output to be sure there are no other conditions for woff2.
(3) is almost right. Also add the following to remove the upstream's Content-Type:
proxy_hide_header Content-Type;
Changes to the mime.types file are not necessary in this case.
But Richard Smith is right: it's the upstream that returns the wrong content type.
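Putting it together, a hedged sketch of the corrected block (upstream address from the question; I'd use font/woff2, the MIME type registered in RFC 8081, instead of application/woff2; root and try_files are dropped since proxy_pass handles the request here):

location ~* \.woff2$ {
    proxy_pass https://82.202.226.111:8443;
    # Hide the upstream's wrong Content-Type, then set the correct one.
    proxy_hide_header Content-Type;
    add_header Content-Type font/woff2;
    access_log off;
    expires 7d;
}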
My task is to implement a microcaching strategy using nginx, that is, to cache the responses of some POST endpoints for a few seconds.
In the http section of nginx.conf I have the following:
proxy_cache_path /tmp/cache keys_zone=cache:10m levels=1:2 inactive=600s max_size=100m;
Then I have this location in the server block:
location /my-url/ {
    root dir;
    client_max_body_size 50k;

    proxy_cache cache;
    proxy_cache_valid 10s;
    proxy_cache_methods POST;
    proxy_cache_key "$request_uri|$request_body";
    proxy_ignore_headers Vary;

    add_header X-Cached $upstream_cache_status;

    proxy_pass http://my-upstream;
}
The application located at my-upstream outputs Cache-Control: max-age=10 which, if I understand correctly, should make the responses cacheable.
But when I make repeated requests using curl within a short time (less than 10 seconds),
curl -v --data "a=b&c=d" https://my-host/my-url/1573
all of them reach the backend (according to the backend logs). Also, X-Cached is always MISS.
Request and response follow:
> POST /my-url/1573 HTTP/1.1
> Host: my-host
> User-Agent: curl/7.47.0
> Accept: */*
> Content-Length: 113
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 113 out of 113 bytes
< HTTP/1.1 200 OK
< Server: nginx
< Date: Tue, 08 May 2018 07:16:10 GMT
< Content-Type: text/html;charset=utf-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< Keep-Alive: timeout=60
< Vary: Accept-Encoding
< X-XSS-Protection: 1
< X-Content-Type-Options: nosniff
< Strict-Transport-Security: max-age=31536000
< Cache-Control: max-age=10
< Content-Language: en-US
< X-Cached: MISS
So the caching does not work.
What am I doing wrong here?
Is there any logging facility in nginx that would let me see why it chooses not to cache a response?
It turned out that the following directive (which was defined globally) prevented caching from working:
proxy_buffering off;
When I overrode it in the location config with proxy_buffering on;, caching started working.
So, to make caching work with POST requests, we have to do the following:
Output a Cache-Control: public, max-age=10 header on the server
Add the proxy_cache_path config and the location config in nginx (examples are given in the question text)
Make sure that proxy_buffering is on for the location where we want caching enabled (see the sketch below)
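A hedged sketch of the resulting location, i.e. the question's config plus this fix (names unchanged from the question; the unused root directive omitted):

location /my-url/ {
    client_max_body_size 50k;

    # Caching requires buffering; the global proxy_buffering off
    # silently disabled the cache, so turn it back on here.
    proxy_buffering on;

    proxy_cache cache;
    proxy_cache_valid 10s;
    proxy_cache_methods POST;
    proxy_cache_key "$request_uri|$request_body";
    proxy_ignore_headers Vary;

    # Handy for watching HIT/MISS from the client side.
    add_header X-Cached $upstream_cache_status;

    proxy_pass http://my-upstream;
}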
To elaborate on @Roman Puchkovskiy's answer above: my origin server was returning the following headers:
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
I configured my server to return this instead:
Cache-Control: max-age=3600, public
And now Nginx behaves as expected ✅
I first tried adding this directive to my nginx.conf:
...
location /blah {
...
proxy_ignore_headers Cache-Control;
}
But it looks like that directive doesn't work the way I thought it would.
Note that I wasn't required to add proxy_buffering on to my nginx.conf, so it seems I wasn't affected by that issue.
Hi, I am new to nginx and am looking for some help redirecting my http requests to https.
I have two configurations on a load balancer, for ports 80 and 443, at Linode.
If a request comes in over https, the load balancer terminates the SSL and sends the request on to my serving Tomcat.
If a request comes in over http, the load balancer sends it to my nginx server, which redirects the request to https.
Whenever I start my nginx server, I see continuous logs of the redirect URL in my Tomcat server, even though no one is hitting my http URL. I have the following complete nginx.conf file:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name example.com;

        #return 301 https://$server_name$request_uri;
        rewrite ^ https://$server_name/$request_uri permanent;

        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location = / {
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
The same configuration works perfectly if I put the IP address in place of the actual domain name.
Following are the curl results for each location. I can see that after the redirect to https, the Location header shows https://example.com/login, which is correct:
# curl -i http://example.com
HTTP/1.1 301 Moved Permanently
Server: nginx/1.6.3
Date: Fri, 29 Jan 2016 07:43:54 GMT
Content-Type: text/html
Content-Length: 184
Connection: close
Location: https://example.com/
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.6.3</center>
</body>
</html>
# curl -i https://example.com
HTTP/1.1 302 Found
Server: Apache-Coyote/1.1
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
X-XSS-Protection: 1; mode=block
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Set-Cookie: JSESSIONID=C3B65BD4E015F05705B585F5F8D70074; Path=/; Secure; HttpOnly
Location: https://example.com/login
Content-Length: 0
Date: Fri, 29 Jan 2016 07:44:03 GMT
Connection: close
# curl -i https://example.com/login
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
X-XSS-Protection: 1; mode=block
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Set-Cookie: JSESSIONID=6B30D6D70672A99F13B2F441B2F2150E; Path=/; Secure; HttpOnly
Content-Type: text/html;charset=ISO-8859-1
Content-Language: en-US
Transfer-Encoding: chunked
Date: Fri, 29 Jan 2016 07:44:18 GMT
Connection: close
<HTML content of login page>
Please suggest what I am missing here.
To simply redirect all requests that arrive on port 80 to https, use the following configuration. No further lines are required; anything more might defeat the purpose of this server block:
server {
    listen 80 default_server;
    server_name _;
    rewrite ^ https://$host$request_uri permanent;
}
This way, whichever host (or even the IP address) was requested will be forwarded to its https counterpart. If you are sure there will be only one destination host, you may use it instead of the $host variable (do not add a / after it, since $request_uri already starts with one):
rewrite ^ https://example.com$request_uri permanent;
It'd be even better if you use return:
return 301 https://example.com$request_uri;
# or
# return 301 https://$host$request_uri;
Since this is the only purpose of this server block, remove all other directives, like root, location, error_page and include.
Beware of additional files at /etc/nginx/conf.d/*.conf or /etc/nginx/sites-enabled/*.conf; they may override these settings.
Reload the nginx configuration and test. I suggest using cURL; here's the expected result:
$ curl -i http://example.com
HTTP/1.1 301 Moved Permanently
Server: nginx/1.8.0
Date: Wed, 27 Jan 2016 17:33:45 GMT
Content-Type: text/html
Content-Length: 184
Connection: keep-alive
Location: https://example.com/
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.8.0</center>
</body>
</html>
Copy the Location: header content, then test again using cURL (use -k if you use a self-signed certificate):
curl -i https://example.com
The result should come from your Tomcat application, and NOT be another redirect to the same page. If the result is the same (save for the date), then your LB is probably sending the https requests back to nginx, causing a loop.
Please note that the Tomcat application may also be redirecting to https if it doesn't understand that it's behind a proxy (the LB). In this case, you'll need to set up the application config to properly understand this (let me know if this is the case).
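For reference, if nginx (rather than the LB) were the proxy in front of Tomcat, the usual way to make the application proxy-aware is to pass the original scheme and client address along; a generic sketch with a hypothetical upstream address:

location / {
    proxy_pass http://tomcat-backend:8080;  # hypothetical upstream address
    proxy_set_header Host $host;
    # Let the application see the client's original scheme and IP.
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}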
I found that my nginx server was exposed on a public IP on port 80, and it was receiving traffic from unwanted hosts.
Since I had a default configuration in the nginx server block that redirected all incoming http (port 80) traffic from any host to my Tomcat on https (port 443), that is why I saw tons of logs in my Tomcat server.
I had to add the configuration below to my /etc/nginx.conf to redirect port 80 traffic to https only if the request comes from my domain.
if ($host ~ ^(example.com|www.example.com)$) {
    return 301 https://$server_name$request_uri;
}
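For reference, an if-less sketch of the same idea, which I'd consider the more conventional form (an assumption on my part; 444 is nginx's special "close the connection without a response" status):

# Redirect only our own domain to https.
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

# Silently drop everything else arriving on port 80.
server {
    listen 80 default_server;
    server_name _;
    return 444;
}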