nginx fails to load static .mp4

The Setup
I am aiming for a minimalistic configuration, mostly built on defaults.
The goal is to serve 10-15 videos, each 1-3 seconds long and mostly 2-3 MB in size.
I have a Raspberry Pi running the official nginx Docker image.
My Assumptions
nginx is a really powerful tool and provides all sorts of optimisation capabilities, but if I simply want to serve videos like the ones above, it should work more or less "out of the box".
The Issue
The videos do not play at all
When accessing the videos directly, there are two scenarios I encounter
a) HTTP 200 followed by one or more HTTP 206 Partials, and the video does not play, OR
b) HTTP 200 followed by a cancelled request, and the video obviously does not play here either
Furthermore
Multiple videos have been tested (default mobile output, VLC converted, HandBrake web optimized)
nginx (Default Configs provided by the official image)
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    gzip on;
    # SSL Settings
    # Logging Settings
}
In the mime.types, I do have video/mp4.
Serving the static files
The videos are located in a folder, which is mounted as /usr/share/x
server {
    ...
    location / {
        # Default nginx files
    }
    location ~ \.mp4$ {
        # When I try to use this block, all video requests end up being 404s (see the note after the config)
    }
    location /x/ {
        root /usr/share/;
    }
}
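(Note on the regex block above: as far as I can tell, a "location ~ \.mp4$" block does not inherit the root from the "/x/" prefix location, so without its own root directive nginx looks for the file under the default document root, which would explain the 404s. A sketch of how it could be written instead, assuming the mount described above; this is not my actual config:)

location ~ ^/x/.+\.mp4$ {
    # regex locations need their own root; they do not inherit it from "location /x/"
    root /usr/share/;
}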
Given that this is a micro app, there are obviously other files being served, and they work fine. There is no issue with the locations and routing, only with the videos.
Initial Request
GET #### HTTP/1.1
Host: ####
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
sec-ch-ua: ####
sec-ch-ua-mobile: ?0
DNT: 1
Upgrade-Insecure-Requests: 1
User-Agent: ####
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Sec-Fetch-Site: none
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9,hu;q=0.8,sk;q=0.7
sec-gpc: 1
Initial Response
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Thu, 14 Jan 2021 19:50:01 GMT
Content-Type: video/mp4
Content-Length: 1620690
Last-Modified: Thu, 14 Jan 2021 19:05:25 GMT
Connection: keep-alive
ETag: "600095f5-18bad2"
Accept-Ranges: bytes
Content-Security-Policy: upgrade-insecure-requests
Second Request (Leading to the HTTP 206)
GET #### HTTP/1.1
Host: ####
Connection: keep-alive
sec-ch-ua: ####
DNT: 1
Accept-Encoding: identity;q=1, *;q=0
sec-ch-ua-mobile: ?0
User-Agent: ####
Accept: */*
Sec-Fetch-Site: same-origin
Sec-Fetch-Mode: no-cors
Sec-Fetch-Dest: video
Referer: ####
Accept-Language: en-US,en;q=0.9,hu;q=0.8,sk;q=0.7
sec-gpc: 1
Range: bytes=0-
The (sometimes cancelled) Partial Content
HTTP/1.1 206 Partial Content
Server: nginx/1.14.2
Date: Thu, 14 Jan 2021 20:03:20 GMT
Content-Type: video/mp4
Last-Modified: Thu, 14 Jan 2021 19:05:25 GMT
Connection: keep-alive
ETag: "600095f5-18bad2"
Content-Range: bytes 0-1620689/1620690
Content-Length: 1620690
Content-Security-Policy: upgrade-insecure-requests
Final Thoughts and Questions
I'm a senior front-end developer, far from having advanced back-end or DevOps knowledge, but I think I do well for myself. However, I have spent the better part of the past 2-3 days trying to serve small videos from my Raspberry Pi. Unsuccessfully.
Is this really an nginx configuration issue?
If so, what am I missing? How do I make this work?
If this is not nginx, what else could it be?
UPDATE (1): cURL
The file that I have chosen to test is 1620720 bytes. I tried to cURL it to see if I get back the same, working video.
curl https://domain.tld/x/nope.mp4 --output ~/retrieved.mp4
This new video is 1620690 bytes, 30 bytes fewer than the original (gzip?), and it appears to be corrupted. I cannot play the video on my machine.
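To narrow down where those 30 bytes go, a quick check is to hash the file as nginx sees it inside the container and compare it with the source file and the downloaded copy (container and file names below are placeholders):

# on the Pi: hash the file that nginx actually serves
docker exec <nginx-container> md5sum /usr/share/x/nope.mp4
# on my machine: hash the original source file and the curl'd copy
md5sum original.mp4 ~/retrieved.mp4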
Checking the video in Firefox, they seem to get it right:

So. Apparently, a more hackathon-like approach, where you skip certain configuration steps, is not really beneficial. And when you want to do things quick and dirty, because time is of the essence, you should still set .mp4s to be treated as binaries in git, so that text conversion does not corrupt them. (Even better, use LFS.)
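For anyone landing here with the same problem, a minimal .gitattributes sketch for keeping git from converting the videos (the LFS line assumes Git LFS is installed):

# .gitattributes
*.mp4 binary
# or, if Git LFS is available:
*.mp4 filter=lfs diff=lfs merge=lfs -text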

Related

Why are browsers not caching these static files

My question may look like a duplicate of this one, but my case is different:
When I refresh a page with F5, the images are not fetched from the cache; instead, a request goes to the server and the server responds with a 304 (Not Modified) status code.
But if I type the URL in the address bar, or navigate to the page with the browser's back/forward buttons, the images do come from the cache.
My doubt is: why is a request made to the origin server for cached images on F5 (page refresh)?
Nginx configuration
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    expires 2d;
    proxy_pass http://localhost:3001;
    break;
}
Request header
===================================
GET /assets/first_banner.png HTTP/1.1
Host: localhost:3000
Connection: keep-alive
Cache-Control: no-cache
Pragma: no-cache
Accept: image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.65 Safari/537.36
Referer: http://localhost:3000/
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
===================================
Response header:
===================================
HTTP/1.1 200 OK
Server: nginx/1.1.19
Date: Sun, 08 Dec 2013 20:31:06 GMT
Content-Type: image/png
Content-Length: 141498
Connection: keep-alive
Cache-Control: max-age=172800
Last-Modified: Wed, 23 Oct 2013 05:34:11 GMT
Etag: "0fc96d0218a47398d37dacca76916727"
X-Ua-Compatible: IE=Edge
X-Request-Id: 48d1ec3a24e2c0f13250ea74101f6753
X-Runtime: 0.021479
Expires: Tue, 10 Dec 2013 20:31:06 GMT
===================================
When you hit F5, you tell the browser to check with the web server whether the content, cached locally or not, is still valid.
If the object has changed on the web server, the browser fetches the asset again. If it is still valid, the local browser-cached content is used, which is what the 304 (Not Modified) response indicates.
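If you also want to skip that revalidation round trip on a plain reload, newer browsers honour the "immutable" extension to Cache-Control. A rough sketch on top of the location from the question (only appropriate if the asset URLs are versioned, since the browser will then not revalidate them at all):

location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    proxy_hide_header Cache-Control;   # avoid a duplicate header from the upstream
    # replaces "expires 2d;" so that only one Cache-Control header is emitted
    add_header Cache-Control "public, max-age=172800, immutable";
    proxy_pass http://localhost:3001;
}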

Cross-domain chunked uploads using CORS

I have user-submitted files that I'm trying to upload in 10 MB chunks. I'm currently using raw XMLHttpRequest (and XDomainRequest) to push each individual slice (File.prototype.slice) on the front end. The back end is Nginx using the upload module.
Just for reference, here's the synopsis of how I'm using slice:
element.files[0].slice(...)
I understand the cross-browser prefixed methods webkitSlice and mozSlice and all that.
The problem I have is with actually making the cross-domain request. I'm uploading from server.local to upload.server.local. In Firefox, the options request goes through fine and then the actual post fails. In Chrome and Opera, the options request fails with
OPTIONS https://URL Resource failed to load
Here are the headers from Firefox:
Request Headers
OPTIONS /path/to/asset HTTP/1.1
Host: upload.server.local:8443
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Origin: https://server.local:8443
Access-Control-Request-Method: POST
Access-Control-Request-Headers: content-disposition,content-type,x-content-range,x-session-id
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Response Headers
HTTP/1.1 204 No Content
Server: nginx/1.2.6
Date: Wed, 13 Feb 2013 03:27:44 GMT
Connection: keep-alive
access-control-allow-origin: https://server.local:8443
Access-Control-Allow-Methods: POST, OPTIONS
Access-Control-Allow-Headers: x-content-range, origin, content-disposition, x-session-id, content-type, cache-control, pragma, referrer, host
access-control-allow-credentials: true
Access-Control-Max-Age: 10000
The actual post request never leaves the browser. Nginx access logs never see the post. The browser halts it for some reason. How do I unravel why this post is being blocked?
Chromium 24
Firefox 18
Opera 12.14
I've verified that all of these browsers support CORS properly.
By pointing my uploads to https://cors-test.appspot.com/test, I have confirmed that the problem is definitely with the server-side headers.
The POST won't leave the browser if the preflight check does not return sufficient permissions and thus the POST request is not fully authorized. The request/response included in the question does look sufficient to me.
Are you sure you are setting withCredentials = true in your XMLHttpRequest?
Are you sure that you have valid (not self-signed) SSL certificates on your servers? The HTTPS might fail the CORS check even if you have added an exception for browsing the site with an invalid certificate.
Have you tried emptying your cache? You have Access-Control-Max-Age: 10000 set in your response headers. That's close to 3 hours. I know you've been working on this longer than that but while testing especially, set that header to zero instead so you don't go crazy with browser caching of old access permissions.
In general I'd start with going as permissive as possible with the CORS headers and slowly ratcheting up the security to see where it fails. However, this is not completely straightforward. For example, according to the MDN documentation on CORS,
When responding to a credentialed request, server must specify a domain, and cannot use wild carding. The above example would fail if the header was wildcarded as: Access-Control-Allow-Origin: *
When I send the request part of your question to https://cors-test.appspot.com/test, I get back the following:
HTTP/1.1 200 OK
Cache-Control: no-cache
Access-Control-Allow-Origin: https://server.local:8443
Access-Control-Allow-Headers: content-disposition,content-type,x-content-range,x-session-id
Access-Control-Allow-Methods: POST
Access-Control-Max-Age: 0
Access-Control-Allow-Credentials: true
Cache-Control: no-cache
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Content-Type: application/json
Content-Encoding: gzip
Content-Length: 35
Vary: Accept-Encoding
Date: Thu, 23 May 2013 06:37:34 GMT
Server: Google Frontend
So you can start from there and add more and more security until it breaks to figure out what is the culprit.
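Since the upload endpoint is behind nginx anyway, one way to run that experiment is to answer the preflight directly in nginx and then tighten the headers step by step. A rough sketch, with the header values copied from the question and the location path as a placeholder:

location /path/to/asset {
    if ($request_method = OPTIONS) {
        add_header Access-Control-Allow-Origin "https://server.local:8443";
        add_header Access-Control-Allow-Credentials "true";
        add_header Access-Control-Allow-Methods "POST, OPTIONS";
        add_header Access-Control-Allow-Headers "content-disposition,content-type,x-content-range,x-session-id";
        add_header Access-Control-Max-Age 0;    # no caching of permissions while testing
        return 204;
    }
    # ... existing upload module configuration for the POST ...
}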

LiveHttpHeaders: which cache-control info is right

Using LiveHttpHeaders for Firefox 6, I was trying to see whether my CSS and JS files are being cached via Apache's Headers module, configured in .htaccess. But I'm confused: there are two values for the 'Cache-Control' data:
GET /proz/css/global.css HTTP/1.1
Host: localhost
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:5.0) Gecko/20100101 Firefox/5.0
Accept: text/css,*/*;q=0.1
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Connection: keep-alive
Referer: http://localhost/proz/
Cookie: PHPSESSID=el34de37pe3bnp4rdtbst1kd43
If-Modified-Since: Fri, 16 Sep 2011 21:15:32 GMT
If-None-Match: "400000000b06a-2999-4ad157e5b4583"
Cache-Control: max-age=0
HTTP/1.1 304 Not Modified
Date: Sat, 17 Sep 2011 03:04:50 GMT
Server: Apache/2.2.17 (Win32) PHP/5.2.8
Connection: Keep-Alive
Keep-Alive: timeout=5, max=99
Etag: "400000000b06a-2999-4ad157e5b4583"
Cache-Control: max-age=604800, public
Vary: Accept-Encoding
Which one is the true data, the first Cache-Control value (max-age=0) or the latter one?
I would also like to know how to make sure that my JS, CSS, and HTML files are being compressed after I enable the deflate module in .htaccess. And yes, both the headers and deflate modules are turned on.
There are two parts in this listing:
The part before the blank line is the request, sent by your browser
The part after the blank line is the response, sent by the server
The Cache-Control: max-age=0 sent by the client (your browser) tells the server (or any proxy in the middle) to send the freshest version of the file. The browser usually sends this when you hit the refresh button.
The Cache-Control: max-age=604800, public sent by the server tells the client (your browser or a proxy) that the file is valid for 604800 seconds and can be cached for that time. (The browser won't even attempt to ask the server whether a newer version exists, unless you hit refresh, as you did in this case.)
The server replied 304 Not Modified; this means that your browser already has the most recent version and does not need to download it again (and it did not download it again).
The Vary: Accept-Encoding header indicates that the server made some decisions based on the client's Accept-Encoding header. This suggests that, if the server had not replied 304 Not Modified, it would have compressed the file.
To verify this last point, clear your cache, request the file again, and look at the Content-Encoding response header (it should be gzip or deflate if the data is compressed).
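One way to do that check without the browser cache getting in the way is a plain curl request that asks for compression and dumps only the response headers (URL taken from the request above):

curl -s -D - -o /dev/null -H "Accept-Encoding: gzip, deflate" http://localhost/proz/css/global.css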

IIS6 not doing gzip compression when including Via header in request

I have some static content going through a CDN. I am using IIS6's built in compression (gzip & deflate) for static content and this is working fine when I request it. However, when the CDN makes the initial request for the content, it is not being returned compressed. They therefore don't have compressed content to forward to people requesting it. (Yes this raises the issue of people requesting [the zipped] content from the CDN with a browser that can't handle the compression. - We'll put that to one side for now though)
Here's an example of requesting without the 'Via' header:
HEAD /flash/swfobject.js HTTP/1.1
User-Agent: curl/7.19.7 (i386-pc-win32)
Host: localhost:9120
Accept: */*
Connection: Keep-Alive
accept-encoding: gzip
And it returns a compressed response:
HTTP/1.1 200 OK
Content-Length: 4357
Content-Type: application/x-javascript
Content-Encoding: gzip
Expires: Wed, 01 Jan 2020 00:00:00 GMT
Last-Modified: Wed, 18 Nov 2009 15:36:52 GMT
Accept-Ranges: bytes
Vary: Accept-Encoding
Server: Microsoft-IIS/6.0
Date: Thu, 19 Nov 2009 10:27:50 GMT
However, if I include a 'Via' header in the request (as the CDN does) then the result comes back uncompressed:
Request:
HEAD /flash/swfobject.js HTTP/1.1
User-Agent: curl/7.19.7 (i386-pc-win32)
Host: localhost:9120
Accept: */*
Connection: Keep-Alive
Via: 1.1 204.160.105.17:80 (Footprint 4.5/FPMCP)
accept-encoding: gzip
Response:
HTTP/1.1 200 OK
Content-Length: 14602
Content-Type: application/x-javascript
Expires: Wed, 01 Jan 2020 00:00:00 GMT
Last-Modified: Wed, 18 Nov 2009 15:36:54 GMT
Accept-Ranges: bytes
Server: Microsoft-IIS/6.0
Date: Thu, 19 Nov 2009 10:29:52 GMT
Yes these demos use 'localhost' in the request. I get the same result using the actual domain name from various machines on various networks though.
Two questions then:
Could this be IIS not applying the compression because of the extra header? And if so, what can I do about it?
How can I tell if the proxy is decompressing the content before returning it?
Bonus question 3 - What can I do to investigate this problem further?
I am aware of question 332049, but that has the header in the response, not the request.
I stumbled across your question while researching this myself. I found an article on MSDN, and the short answer is that the Via header indicates a proxy, and by default IIS does not compress responses to requests that come through a proxy. You can either remove the header, or you can change the setting in the IIS metabase (HcNoCompressionForProxies="FALSE"). I had success with both options.
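If you go the metabase route, the change can be scripted with adsutil.vbs; a sketch from memory (double-check the property path against the MSDN article for your setup), followed by restarting IIS:

cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set W3SVC/Filters/Compression/Parameters/HcNoCompressionForProxies FALSE
iisreset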

HTTP Expires header not respected by browser?

I have a situation where my (embedded) web server is sending Expires header, but the browser does not seem to respect the header setting, i.e., if I refresh the page, the browser requests the resources that are supposed to be cached. Following are the headers that are getting exchanged:
https://192.168.1.180/scgi-bin/ajax/ajax.cgi
GET /scgi-bin/ajax/ajax.cgi HTTP/1.1
Host: 192.168.1.180
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.11) Gecko/2009060215 Firefox/3.0.11 (.NET CLR 3.5.30729)
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Cache-Control: max-age=0
HTTP/1.x 200 OK
Date: Wed, 24 Jun 2009 20:26:47 GMT
Server: Embedded HTTP Server.
Connection: close
Content-Type: text/html
----------------------------------------------------------
https://192.168.1.180/scgi-bin/ajax/static.cgi?fn=images/logo.jpg&ts=20090624201057
GET /scgi-bin/ajax/static.cgi?fn=images/logo.jpg&ts=20090624201057 HTTP/1.1
Host: 192.168.1.180
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.11) Gecko/2009060215 Firefox/3.0.11 (.NET CLR 3.5.30729)
Accept: image/png,image/*;q=0.8,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: https://192.168.1.180/scgi-bin/ajax/ajax.cgi
Cache-Control: max-age=0
HTTP/1.x 200 OK
Date: Wed, 24 Jun 2009 20:26:47 GMT
Server: Embedded HTTP Server.
Connection: close
Expires: Wed, 1 Jun 2011 20:00:00 GMT
Content-Type: image/jpg
----------------------------------------------------------
The ajax.cgi returns an html page with a logo graphic (via the static.cgi script), which I'd like cached, but the browser is asking for the logo on every refresh.
The browser ignores the Expires header if you refresh the page. It always checks whether the cache entry is still valid by contacting the web server. Ideally, it will use the If-Modified-Since request header so that the server can return '304 Not modified' if the cache entry is still valid.
You're not setting the Last-Modified header, so the browser has to perform an unconditional GET of the content to ensure that it is up to date.
Some rules of thumb for setting Expires and Last-Modified are described in this blog post:
http://blog.httpwatch.com/2007/12/10/two-simple-rules-for-http-caching/
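As a concrete illustration of those rules, a response from static.cgi could look something like this (dates are illustrative only):

HTTP/1.1 200 OK
Date: Wed, 24 Jun 2009 20:26:47 GMT
Last-Modified: Wed, 10 Jun 2009 12:00:00 GMT
Expires: Wed, 01 Jun 2011 20:00:00 GMT
Content-Type: image/jpeg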
What are you doing in your browser? It looks like you are clicking the reload button, or even something like Shift+Reload. Normally, the browser wouldn't send a Cache-Control: max-age=0 header. That header means the browser has thrown away the cached image and wants to get it again.
If you just navigate to another page and then back again, the browser should respect your Expires header.
Additionally, you could add a Cache-Control: public header to your response. That explicitly allows proxies and the browser to cache the image.
Any errors in your https certificate will cause the browser to not respect your headers.
Try it without https and see if it works over plain http.
See this answer https://stackoverflow.com/a/17716911
The CGI script looks like it has a timestamp parameter...this isn't changing, is it? The browser should be treating each unique URL as a different object in the cache, so if that is updating with every request, it won't match with the cached image.
Additionally, the Expires field is not exactly in RFC 1123 format, because you need two digits for the date. This may or may not be an issue, but it's something to check. The browser is including Cache-Control: max-age=0, which indicates that it believes its cache to be potentially out of date.
Once the server gets this validation request, it can return 304 (Not Modified), or 200 (OK), as it is doing currently.
