I'm running a reverse proxy using the proxy_pass directive from ngx_http_proxy_module. I want to forbid access to certain backend IP address ranges (like 172.0.0.0/24). I've tried
if ($upstream_addr ~* "^172.*") {
return 403;
}
add_header X-mine "$upstream_addr";
in both server and location context, but it doesn't work, i.e. Nginx still returns 200:
$ curl localhost -I
HTTP/1.1 200 OK
Server: nginx/1.17.0
Date: Thu, 13 Feb 2020 12:58:36 GMT
Content-Type: text/html
Content-Length: 612
Connection: keep-alive
Last-Modified: Tue, 24 Sep 2019 14:49:10 GMT
ETag: "5d8a2ce6-264"
Accept-Ranges: bytes
X-mine: 172.20.0.2:80
What am I missing? (Note that I added the content of the $upstream_addr variable to the X-mine header for debugging.)
My understanding is that the if directive is evaluated before the upstream request is sent, while the $upstream_addr variable is only set after the upstream request has completed. I have tried and failed to find definitive documentation that explains the precise process, but the nginx documentation seems to be missing a number of things that one might wish for.
See this answer, and also If is evil, for a little more guidance. I'm not actually sure quite what you're trying to achieve, so I can't offer any hope about whether or not it's possible.
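If the intent is simply to guarantee that requests never reach 172.x backends, one workaround is to restrict proxy_pass to an explicitly listed upstream instead of inspecting $upstream_addr after the fact. A minimal sketch, assuming the allowed backend address is known up front (the name "backend" and the address 10.0.0.5 are hypothetical):
upstream backend {
    # Only addresses listed here can ever be proxied to,
    # so no runtime check on $upstream_addr is needed.
    server 10.0.0.5:80;
}

server {
    location / {
        proxy_pass http://backend;
    }
}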
I have an Nginx web server running my website. I enabled browser caching for all static assets for a limited time to reduce resource usage and improve performance
curl -I https://www.betafox.net/wp-content/uploads/2022/02/Dungeon-Reset-193x278-1-175x238.webp
HTTP/2 200
server: nginx/1.18.0 (Ubuntu)
date: Mon, 21 Mar 2022 17:16:31 GMT
content-type: image/webp
content-length: 13368
last-modified: Wed, 23 Feb 2022 10:14:46 GMT
etag: "62160916-3438"
expires: Thu, 31 Dec 2037 23:55:55 GMT
cache-control: max-age=315360000
accept-ranges: bytes
While some assets don't have a long lifetime, caching them still significantly reduces the server's workload. Here's my simple server caching config:
# Expires map
map $sent_http_content_type $expires {
default off;
text/html epoch;
text/css max;
application/javascript max;
~image/ max;
~font/ max;
}
...
server {
...
expires $expires;
...
}
...
However, caching only works for local files, i.e. files that are stored on my server's disk. I want the S3 objects (images) that are accessed remotely through my website, for example the ones in https://www.betafox.net/webtoon/dungeon-reset/chapter-1/, to be cached like the other assets, to reduce GET requests and their costs. I know they won't be cached for outside access, but I think it'd partially solve the issue without having to rely on complex solutions...
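One way to approach this is to proxy the S3 objects through nginx and cache them on the server's disk, so the same expiry rules apply to the remote images too. A sketch under stated assumptions: the bucket name betafox-assets and the /s3/ path are hypothetical, and the $expires map from the config above is assumed to be in scope:
proxy_cache_path /var/cache/nginx/s3 levels=1:2 keys_zone=s3cache:10m
                 max_size=1g inactive=7d use_temp_path=off;

server {
    ...
    location /s3/ {
        proxy_pass https://betafox-assets.s3.amazonaws.com/;
        proxy_cache s3cache;
        proxy_cache_valid 200 7d;        # keep good responses for a week
        proxy_ignore_headers Set-Cookie; # cookies would make responses uncacheable
        expires $expires;                # reuse the existing map
    }
}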
The Setup
I am aiming for a minimalistic configuration, mostly built on defaults
The goal is to serve 10-15 videos, each 1-3 seconds long and mostly 2-3 MB in size
I have a Raspberry Pi running the official nginx Docker image
My Assumptions
nginx is a really powerful tool and provides all sorts of optimisation capabilities, but if I simply want to serve videos like the above, it should work kind of "out-of-the-box"
The Issue
The videos do not play at all
When accessing the videos directly, there are two scenarios I encounter
a) HTTP 200 followed by one or more HTTP 206 Partials, and the video does not play, OR
b) HTTP 200 followed by Cancelled request, and the video obviously does not play here either
Furthermore
Multiple videos have been tested (default mobile output, VLC converted, HandBrake web optimized)
nginx (Default Configs provided by the official image)
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
gzip on;
#SSL Settings
#Logging Settings
}
In the mime.types, I do have video/mp4.
Serving static files
The videos are located in a folder, which is mounted as /usr/share/x
server {
...
location / {
# Default nginx files
}
location ~ \.mp4$ {
# When I try to use this block, all video requests end up being 404s
}
location /x/ {
root /usr/share/;
}
}
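A side note on the 404s from the regex block: in nginx, a matching regex location takes precedence over the /x/ prefix location but does not inherit its root, so the videos get looked up under the default document root and are not found. A sketch of one way around this, nesting the regex inside the prefix location (the mp4 directive assumes ngx_http_mp4_module is compiled in, which I believe it is in the official image):
location /x/ {
    root /usr/share/;
    location ~ \.mp4$ {
        # inherits root /usr/share/ from the enclosing location
        mp4;    # optional pseudo-streaming support for .mp4
    }
}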
Given that this is a micro app, there are obviously other files being served, and they work fine. There is no issue with the locations and routing, only with the videos.
Initial Request
GET #### HTTP/1.1
Host: ####
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
sec-ch-ua: ####
sec-ch-ua-mobile: ?0
DNT: 1
Upgrade-Insecure-Requests: 1
User-Agent: ####
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Sec-Fetch-Site: none
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9,hu;q=0.8,sk;q=0.7
sec-gpc: 1
Initial Response
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Thu, 14 Jan 2021 19:50:01 GMT
Content-Type: video/mp4
Content-Length: 1620690
Last-Modified: Thu, 14 Jan 2021 19:05:25 GMT
Connection: keep-alive
ETag: "600095f5-18bad2"
Accept-Ranges: bytes
Content-Security-Policy: upgrade-insecure-requests
Second Request (Leading to the HTTP 206)
GET #### HTTP/1.1
Host: ####
Connection: keep-alive
sec-ch-ua: ####
DNT: 1
Accept-Encoding: identity;q=1, *;q=0
sec-ch-ua-mobile: ?0
User-Agent: ####
Accept: */*
Sec-Fetch-Site: same-origin
Sec-Fetch-Mode: no-cors
Sec-Fetch-Dest: video
Referer: ####
Accept-Language: en-US,en;q=0.9,hu;q=0.8,sk;q=0.7
sec-gpc: 1
Range: bytes=0-
The (sometimes cancelled) Partial Content
HTTP/1.1 206 Partial Content
Server: nginx/1.14.2
Date: Thu, 14 Jan 2021 20:03:20 GMT
Content-Type: video/mp4
Last-Modified: Thu, 14 Jan 2021 19:05:25 GMT
Connection: keep-alive
ETag: "600095f5-18bad2"
Content-Range: bytes 0-1620689/1620690
Content-Length: 1620690
Content-Security-Policy: upgrade-insecure-requests
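As a quick sanity check independent of the browser, range handling can be tested directly with curl (the URL is the one from the cURL update below; the expected lines are what nginx should send for a healthy file):
curl -sD - -o /dev/null -H "Range: bytes=0-99" https://domain.tld/x/nope.mp4
# Expected: HTTP/1.1 206 Partial Content
#           Content-Range: bytes 0-99/1620690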
Final Thoughts and Questions
I'm a senior front-end developer, far from having advanced back-end or DevOps knowledge, but I think I do well for myself. However, I have spent the better part of the past 2-3 days trying to serve small videos from my Raspberry Pi. Unsuccessfully.
Is this really an nginx configuration issue?
If so, what am I missing? How do I make this work?
If this is not nginx, what else could it be?
UPDATE (1): cURL
The file that I have chosen to test is 1620720 bytes. I tried to cURL it to see if I get back the same, working video.
curl https://domain.tld/x/nope.mp4 --output ~/retrieved.mp4
This new video is 1620690 bytes, 30 less than the original (gzip?), and it appears to be corrupted. I cannot play the video on my machine.
Checking the video in Firefox, they seem to get it right:
So, apparently, a hackathon-like approach where you skip certain configuration steps is not really beneficial. Even when you want to do things quick and dirty because time is of the essence, you should still set .mp4s to be treated as binaries in git. (Even better, use Git LFS.)
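In other words, the fix was on the git side, not in nginx: the 30 missing bytes are consistent with git treating the file as text and converting its line endings. A minimal sketch of the .gitattributes entry that prevents this:
# .gitattributes
*.mp4 binary
With Git LFS, running git lfs track "*.mp4" writes the equivalent LFS filter attributes into .gitattributes for you.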
I am trying to put some theoretical study of HTTP into practice. So I made a HEAD request (I also tried GET, but I prefer HEAD since I am interested in the actual object) and it went as follows:
~$ telnet youtube.com 80
Trying 216.58.211.110...
Connected to youtube.com.
Escape character is '^]'.
HEAD /watch?v=GJvGf_ifiKw HTTP/1.1
Host: youtube.com
HTTP/1.1 301 Moved Permanently
Content-Length: 0
Location: https://youtube.com/watch?v=GJvGf_ifiKw
Date: Thu, 12 Dec 2019 15:48:41 GMT
Content-Type: text/html
Server: YouTube Frontend Proxy
X-XSS-Protection: 0
As you can see, I am requesting the object located at /watch?v=GJvGf_ifiKw on the host youtube.com, and this should add up to youtube.com/watch?v=GJvGf_ifiKw, which is the URL in the Location header field. What's going on here? Why does it say it has moved to the identical location?
If you look closely at the output, you will find that you've been redirected to HTTPS. Your initial request was made via telnet on port 80, which is the default HTTP port, and since they enforce redirection to HTTPS, you are redirected to an otherwise identical location, but over secured HTTP, which is HTTPS. The only difference between the two URLs is the scheme: http:// versus https://.
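The same thing is quicker to see with curl (the Location value matches the telnet session above):
curl -sI http://youtube.com/watch?v=GJvGf_ifiKw | grep -i '^location'
# location: https://youtube.com/watch?v=GJvGf_ifiKw   <- note the https://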
I have a web application running on a CentOS EC2 instance behind an Nginx reverse proxy with an SSL certificate (Let's Encrypt).
I have a javascript file located at the URL for example https://example.com/static/src/js/allEnd.js
I used CloudFront to deliver the static file, with the origin server being the HTTP EC2 instance (not using an S3 bucket).
My origin server is mapped to the domain name https://example.com. Here is the configuration I have made so far:
1. www.example.com is redirected to example.com in Nginx
2. The CloudFront URL is an alias with my custom domain ie cdn.example.com
3. SSL for example.com is done on Nginx, whereas SSL for cdn.example.com is done on AWS.
My understanding so far is that the first time, CloudFront will serve the static content by getting the file from my EC2 server, and from then on it will serve it from its own cache. But in my case, CloudFront redirects to the origin server every time to get the static file instead of serving it itself.
Here are the headers for both the origin server and the CloudFront server.
1. Origin server (https://example.com)
get https://example.com/static/src/js/allEnd.js
HTTP/2 200
server: nginx/1.12.2
date: Sun, 12 May 2019 12:27:50 GMT
content-type: application/javascript
content-length: 168435
etag: "wzsdm-1557567525-168435-283837276"
cache-control: max-age=604800, public
expires: Sun, 19 May 2019 12:27:50 GMT
strict-transport-security: max-age=15768000; includeSubdomains; preload
x-frame-options: SAMEORIGIN
x-content-type-options: nosniff
strict-transport-security: max-age=15768000
2. CloudFront with origin as https://example.com (https://cdn.example.com)
get https://cdn.example.com/static/src/js/allEnd.js
HTTP/2 301
content-type: text/html
content-length: 185
location: https://example.com/static/src/js/allEnd.js
server: nginx/1.12.2
date: Sun, 12 May 2019 09:17:40 GMT
strict-transport-security: max-age=15768000; includeSubdomains; preload
x-frame-options: SAMEORIGIN
x-content-type-options: nosniff
strict-transport-security: max-age=15768000
age: 17
x-cache: Hit from cloudfront
via: 1.1 76d9b50884e58e2463b175e34e790838.cloudfront.net (CloudFront)
x-amz-cf-id: HBbfJXbFgKQz4eYlQSpLPQAk8pxZRKuGsb6YjFu8-AJL0JwPfs8FZw==
As you can see from the response headers, cdn.example.com (CloudFront) redirects to the origin (example.com).
Also, I am confused by the content-type: text/html, which should be content-type: application/javascript
What are the possibilities that I may have misconfigured?
If there is anything more you want to know, please feel free to ask. Thanks.
P.S.: I am new to Nginx and AWS configuration and, most importantly, cache control.
You will need to examine the origin server configuration and log files to understand why this happened.
Check the origin server logs and you will find that the origin server originally generated that 301 redirect response -- not CloudFront.
Notice that the response headers from CloudFront include server: nginx/1.12.2. CloudFront did not add this.
The Content-Type is set as it is, because your origin server also returned HTML saying something like "object has moved" or "you are being redirected." Modern browsers typically don't display that message, they just follow the redirect.
In any event, if the origin server returns a redirect, CloudFront does not follow the redirect returned by the origin server -- it simply returns the redirect to the browser.
One possible explanation for the behavior you see is that you set the Origin Protocol Policy in CloudFront to "HTTP Only" instead of "HTTPS Only" or "Match Viewer," so the origin is trying to redirect the connection to use HTTPS since it sees the incoming connection (from CloudFront) as HTTP instead of HTTPS.
Another possibility is that you configured CloudFront's Cache Behavior settings to whitelist the Host header for forwarding to the origin.
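To illustrate the first possibility: a typical Let's Encrypt-style nginx setup includes a catch-all HTTP server block like the sketch below (the actual origin config is unknown, so this is an assumption), and it would bounce CloudFront's plain-HTTP fetches exactly as observed:
server {
    listen 80;
    server_name example.com www.example.com;
    # Any plain-HTTP request, including one from CloudFront when the
    # Origin Protocol Policy is "HTTP Only", gets this redirect.
    return 301 https://example.com$request_uri;
}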
I'm caching static resources using the following code in my nginx.conf:
http {
...
gzip on;
gzip_types *;
gzip_vary on;
...
server {
...
location /static {
alias /opt/static_root;
expires max;
}
}
}
This is sufficient to set the following http headers:
$ curl -I example.com/static/css/bootstrap.min.css
Content-Length: 97874
Last-Modified: Mon, 21 Nov 2016 18:30:33 GMT
ETag: "58333d49-17e52"
Expires: Thu, 31 Dec 2037 23:55:55 GMT
Cache-Control: max-age=315360000
However, even though the Last-Modified date is later than the cached version the browser has, I'm still seeing the old version of the file (tested on Firefox 50.0 and Chrome 54.0.2840.98).
How can I invalidate the ETag so that whenever I deploy diffs to my static files, the browser understands to reload them?
I have tried nginx -s reload, to no avail.
ETags are used when a client makes a conditional request to revalidate an expired resource. But in your case the resource won't expire until 2037! The browser will continue to serve the resource from its cache until then without ever checking with the server. That's what you told it to do with your Expires header.
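To see what revalidation would look like if it did happen (the ETag value is taken from the curl output above):
curl -I -H 'If-None-Match: "58333d49-17e52"' example.com/static/css/bootstrap.min.css
# If the file is unchanged, nginx answers: HTTP/1.1 304 Not Modified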
Typically, if you're going to use far-future Expires like that, you have to version the resource by changing its name. Or you can change the Expires to something shorter, in which case the ETag will be used when the client tries to revalidate.
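A sketch of the second option, keeping the original alias but shortening the lifetime so clients start revalidating (the one-hour value is an arbitrary assumption):
location /static {
    alias /opt/static_root;
    expires 1h;   # after an hour, clients revalidate with If-None-Match
}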