Optimizing nginx for video streaming

I'm new to nginx, so please bear with me...
I have a Tomcat/JSP application whose primary purpose is to serve video via JWPlayer. The application is currently hosted on my laptop, but it will be migrating to an Amazon AWS EC2 instance.
On that same EC2 Instance, I have an nginx server set up to deliver said video content. Right now I have about 15 videos that I am serving as a single playlist. These videos range in size, with the smallest one being ~40 MB. The problem is that buffering is taking too long, and I'm trying to figure out how to optimize this.
I have tried to increase my buffer size in the nginx.conf file as follows:
location /videofiles {
    # activate flv/mp4 parsing for pseudostreaming
    flv;
    mp4;
    mp4_buffer_size     24M;
    mp4_max_buffer_size 48M;
}
I have also tried increasing the chunk_size (though I can't find any doc on this property):
chunk_size 16384;
Here is some other potentially useful config info from nginx.conf:
worker_processes 10;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    client_max_body_size 100000M;
    keepalive_timeout 65;
}
For perspective, I have FiOS and my upload/download is 15 Mbps, but the content is coming over at about 15 Kbps (according to the Charles app). I would have expected it to stream much faster...
Are there other properties I can set to help optimize this?
Thanks!

Related

NGINX caching upstream server when it shouldn't be

I want to set up an NGINX server which provides the following functionality:
1. When a request is made to NGINX for the page at /path/to/page, it fetches that page from the upstream server.
2. If the upstream server is down or NGINX can't connect to it for some reason, NGINX returns a cached version of the page if it has one.
3. If the cached file is over 6 hours old, don't use it; just return a 502.
4. If the upstream server is available, never use the cache.
I have an NGINX config here which I think should work based on my understanding of the docs, but it doesn't and I can't see why. The problem is with point (4): this NGINX server returns the cached version of the file even if the upstream server is online.
daemon off;
error_log /dev/stdout info;

events {
}

http {
    proxy_cache_path
        "/home/jack/Code/NGINX Caching/Codebase/cache"  # Cache path
        keys_zone=cache:10m    # Name of the cache zone; 10 MB for keys
        levels=1:2             # Don't store all cached files in a single directory
        max_size=500m          # Max size of the cache
        inactive=6h;           # Cached file deleted if not used within six hours

    proxy_cache_valid 6h;
    proxy_cache_key "$request_method$request_uri";
    access_log /dev/stdout;

    server {
        listen 8080;

        location ~ ^/(.+)$ {
            proxy_pass http://0.0.0.0:8000/$1;
            proxy_cache cache;
            proxy_cache_valid 6h;
            proxy_buffering on;
            proxy_cache_use_stale error timeout;
        }
    }
}
Replace the proxy_cache_path with a path to a directory on your machine, and run another webserver on your machine on port 8000. When I modify a file served by the server on port 8000, NGINX doesn't see the change until I erase the cache. The issue is with NGINX and not my client (Firefox): even if I turn off caching in the browser, NGINX returns a 200 with the old file contents.
Can you please check if these two directives might help you:
proxy_cache_revalidate:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_revalidate
and
proxy_cache_use_stale: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_use_stale
There is a video from nginx.conf '17 online describing all the cool things you can achieve with caching: https://www.youtube.com/watch?v=xZrOjmAkFC8. Maybe this is also of interest to you.
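As a rough sketch of how the two might be combined, reusing the "cache" zone and upstream from the config in the question (treat this as an illustration, not a drop-in fix):

location ~ ^/(.+)$ {
    proxy_pass http://0.0.0.0:8000/$1;
    proxy_cache cache;
    # revalidate stale entries with conditional requests instead of serving them blindly
    proxy_cache_revalidate on;
    # fall back to a stale cached copy only when the upstream errors out or times out
    proxy_cache_use_stale error timeout;
}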
So, it seems I misunderstood the NGINX proxy cache directives. The docs are quite confusing on this subject, so I'll lay it out point by point.
This official help page gives a decent overview of the various directives; however, it makes no mention of something that turns out to be a very important conceptual building block in understanding how NGINX caching works: the notion of a cached file being stale.
NGINX's default behavior is to always use the cache if an entry is there, rather than querying the upstream server. With a minimal caching config like the one below, NGINX will query the upstream server the first time a page is accessed, and then use the cached version forever after that:
events {
}

http {
    proxy_cache_path
        /path/to/cache
        keys_zone=my_cache:10m;

    proxy_cache_key "$request_method$request_uri";

    server {
        listen 8080;

        location ~ ^/(.+)$ {
            proxy_pass http://0.0.0.0:8000/$1;
            proxy_cache my_cache;
        }
    }
}
You can use the proxy_cache_valid directive to tell NGINX when a cached file should be considered "stale". For example, if we set proxy_cache_valid 5m, then 5 minutes after a cache file is created NGINX will stop serving it and will query the upstream server again on the next request. If the upstream is down, NGINX will return a 502. However, during those five minutes, NGINX will still use the cache even if the upstream server is available, so this is still not what we want.
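As a minimal illustration of that point, reusing the zone and upstream from the config above (assuming 5 minutes is the freshness window you want):

location ~ ^/(.+)$ {
    proxy_pass http://0.0.0.0:8000/$1;
    proxy_cache my_cache;
    # entries become stale 5 minutes after being stored; the next request re-queries the upstream
    proxy_cache_valid 5m;
}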
NGINX has another directive, proxy_cache_use_stale, which gives NGINX conditions under which it may use cached files even if they're stale. We can combine these to get a server which caches pages, makes them stale immediately (or almost immediately), and then only uses them if the upstream is down:
events {
}

http {
    proxy_cache_path
        /path/to/cache
        keys_zone=my_cache:10m;

    proxy_cache_key "$request_method$request_uri";

    server {
        listen 8080;

        location ~ ^/(.+)$ {
            proxy_pass http://0.0.0.0:8000/$1;
            proxy_cache my_cache;
            proxy_cache_valid 1s;
            proxy_cache_use_stale error timeout;
        }
    }
}
This config has almost the behavior we want, except that if the upstream server goes down for an extended period of time, NGINX will continue to use the cache indefinitely. As far as I know there is no way to tell NGINX to totally invalidate/clear a cached file after a given amount of time. Normally that's what proxy_cache_valid is for, but we're already using that for a different purpose, to make files stale after 1 second so they're only used when the upstream is down. We would need some next level after "stale" that means the file is completely invalidated, but I don't think that exists in NGINX.
So the simplest solution is to just clear the cache manually. It's sufficient to delete all files in the cache directory (or its subdirectories) which were last modified more than 6 hours ago, or whatever you want the expiry time to be. On a Linux system, you can run this command every 5 minutes, for example from cron:
find /path/to/cache -type f -mmin +360 -delete

NGINX: Force nginx to use all workers for load balance

I am working with nginx and I need to test whether all workers are sharing data correctly, but at the moment only one worker is handling all requests.
My worker_processes is set to 4, but still only one worker is serving all requests.
I can force it to include the other workers by lowering worker_connections to a really low value and spamming nginx with curl commands in a while loop. Maybe max_conns could work, but I am using the free version of nginx.
Is there a more practical way to force nginx to use different workers?
My current setup is this. Any help would be appreciated.
worker_processes 4;

events {
    worker_connections 5;
    multi_accept off;
    use epoll;
}

http {
    server {
        listen 8081;
        server_name *.localhost;
        # max_conns = 3;

        set $upstreamserver "127.0.0.7:8080";

        location = /worker {
            content_by_lua '
                ngx.say(ngx.worker.id())
                ngx.say(ngx.var.pid)
                ngx.say(ngx.worker.count())
            ';
        }

        location /basic_status {
            stub_status;
        }

        location / {
            proxy_buffering off;
            proxy_redirect off;
            proxy_pass http://$upstreamserver;
        }
    }
}
It looks like you hit a limitation of epoll, not nginx or Lua. This article explains in detail what is going on.
If you really want to distribute the load more evenly across workers, the above article suggests using the reuseport option on the listen directive (docs).
server {
    listen 8081 reuseport;
    ...
}
This is not the greatest option in every situation, though, as latency might increase in some extreme cases.

Best way to save nginx request as a file?

I am looking for a solution to save data sent via HTTP (e.g. as a POST) as quickly as possible (with the lowest overhead) via nginx (v1.2.9). I tried the following nginx configuration, but am not seeing any files written in the directory:
server {
    listen 9199;

    location /saveme {
        client_body_in_file_only on;
        client_body_temp_path /tmp/bodies;
    }
}
What am I doing wrong? And/or is there a better way to accomplish this? (The data that is written should ideally be one file per request, and it does not matter if it is fairly "raw" in nature. Post-processing of the files will be done by a separate process via a queue.)
This question has already been answered here:
Basically, you need to combine log_format and fastcgi_pass. You can then use the access_log directive, for example, to specify where the captured request body should be dumped.
log_format postdata $request_body;   # log_format must be declared in the http context

location = /saveme {
    access_log /var/log/nginx/postdata.log postdata;
    fastcgi_pass php_cgi;
}
It could also work with your method, but I think you're missing client_body_buffer_size and client_max_body_size.
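For what it's worth, here is a rough sketch of how those directives could round out the original approach; the /done location, the header name, and the buffer/size values are made up for illustration, and the body only gets read (and therefore written to disk) because the request is passed on to something:

server {
    listen 9199;

    location /saveme {
        client_body_temp_path    /tmp/bodies;
        client_body_in_file_only on;        # keep the temp file instead of deleting it
        client_body_buffer_size  128k;
        client_max_body_size     50m;

        # don't forward the body itself, just the path of the file nginx saved it to
        proxy_pass_request_body  off;
        proxy_set_header         X-Body-File $request_body_file;
        proxy_pass               http://127.0.0.1:9199/done;
    }

    location /done {
        return 200;
    }
}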
Do you mean you want to save/cache the HTTP POST data when someone accesses and requests a file, storing it on disk rather than in memory?
I would suggest using proxy_cache_path and proxy_cache. The proxy_cache_path directive sets the path and configuration of the cache, and the proxy_cache directive activates it.
proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
                 inactive=60m use_temp_path=off;

server {
    ...

    location / {
        proxy_cache my_cache;
        proxy_pass http://my_upstream;
    }
}
The local disk directory for the cache is called /path/to/cache.
levels sets up a two-level directory hierarchy under /path/to/cache/.
keys_zone sets up a shared memory zone for storing the cache keys and metadata such as usage timers.
max_size sets the upper limit of the size of the cache.
inactive specifies how long an item can remain in the cache without being accessed.
The proxy_cache directive activates caching of all content that matches the URL of the parent location block (in the example, /). You can also include the proxy_cache directive in a server block; it applies to all location blocks for the server that don't have their own proxy_cache directive.
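For instance, a hedged sketch of that last point, reusing the zone and upstream names from the example above (the /nocache/ path is just for illustration):

server {
    # applies to every location below that doesn't set its own proxy_cache
    proxy_cache my_cache;

    location / {
        proxy_pass http://my_upstream;
    }

    location /nocache/ {
        proxy_cache off;     # this location opts out of the inherited cache
        proxy_pass http://my_upstream;
    }
}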

ember.js application does not update hashtag part of URI with NGINX server

I have an ember.js application I developed on my local machine. I use a restify/node.js server to make it available locally.
When I navigate in my application, the address bar changes like this:
Example 1
1. http://dev.server:3000/application/index.html#about
2. http://dev.server:3000/application/index.html#/items
3. http://dev.server:3000/application/index.html#/items/1
4. http://dev.server:3000/application/index.html#/items/2
I try now to deploy it on a remote test server which runs nginx.
Although everything works well locally, on the test server the part of the URI after the hashtag is not updated as I navigate in my web application.
In any browser: http://test.server/application/index.html is always displayed in my address bar. For the same sequence of clicks as in Example 1, I always have:
1. http://web.redirection/application/index.html
2. http://web.redirection/application/index.html
3. http://web.redirection/application/index.html
4. http://web.redirection/application/index.html
Moreover, if I directly enter a complete URI http://web.redirection/application/index.html#/items/1 the browser will only display the content that is at http://test.server/application/index.html (which is definitely not the expected behaviour).
I suppose this comes from my NGINX configuration, since the application works perfectly on a local restify server.
NGINX configuration for this server is:
test.server.conf (which is symlinked into /etc/nginx/sites-enabled/test.server.conf)
server {
    server_name test.server web.redirection;
    root /usr/share/nginx/test;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location ~ \.csv$ {
        alias /usr/share/nginx/test/$uri;
    }
}
nginx.conf
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log debug;

    gzip on;
    gzip_disable "msie6";

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
EDIT:
Just to be sure that there were no missing files on my test server: I ran a restify/node server (like on my dev machine) and everything works fine when I connect to this server (!). Both the nginx and restify servers point to the same files.
EDIT 2
I discovered that my problem happens when I use a web redirection.
If I use an address like http://test.server/application/index.html everything works fine
If I use http://web.redirection/application/index.html it does not work.
So it is my nginx conf that is not correctly redirecting the web.redirection URI to test.server, or something like that.
Does someone have an idea? What am I missing? What should I change to make this work?
EDIT 3 and solution
The web redirection I used was an A type DNS record. This does not work. Using a CNAME type DNS record solves the issue.
No, this has nothing to do with nginx. Anything past the # is never sent to the server; JavaScript code should handle this. I would suggest using Firebug or any inspector to make sure that all your js files are being loaded and nothing fails with a 404 error; also check for errors in the inspector console.
The problem came from the DNS redirection from web.redirection to test.server.
It was an A-type record: this does not work.
Using a CNAME-type record that points directly to test.server works.

nginx upload client_max_body_size issue

I'm running nginx/ruby-on-rails and I have a simple multipart form to upload files.
Everything works fine until I decide to restrict the maximum size of files I want uploaded.
To do that, I set the nginx client_max_body_size to 1m (1 MB) and expect an HTTP 413 (Request Entity Too Large) status in response when that rule breaks.
The problem is that when I upload a 1.2 MB file, instead of displaying the HTTP 413 error page, the browser hangs a bit and then dies with a "Connection was reset while the page was loading" message.
I've tried just about every option there is that nginx offers, nothing seems to work. Does anyone have any ideas about this?
Here's my nginx.conf:
worker_processes 1;
timer_resolution 1000ms;

events {
    worker_connections 1024;
}

http {
    passenger_root /the_passenger_root;
    passenger_ruby /the_ruby;

    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name www.x.com;

        client_max_body_size 1M;
        passenger_use_global_queue on;
        root /the_root;
        passenger_enabled on;

        error_page 404 /404.html;
        error_page 413 /413.html;
    }
}
Thanks.
Edit:
Environment/UA: Windows XP/Firefox 3.6.13
nginx "fails fast" when the client informs it that it's going to send a body larger than the client_max_body_size by sending a 413 response and closing the connection.
Most clients don't read responses until the entire request body is sent. Because nginx closes the connection, the client sends data to the closed socket, causing a TCP RST.
If your HTTP client supports it, the best way to handle this is to send an Expect: 100-Continue header. Nginx supports this correctly as of 1.2.7, and will reply with a 413 Request Entity Too Large response rather than 100 Continue if Content-Length exceeds the maximum body size.
Does your upload die at the very end, say at 99%, before crashing? Client body and buffer settings are key, because nginx must buffer incoming data. The client body directives specify how nginx handles the bulk flow of binary data from multipart-form clients into your app's logic.
The clean setting keeps memory use within limits by instructing nginx to store the incoming body in a temporary file and then delete that file from disk once it has been processed.
Set client_body_in_file_only to clean and adjust the buffers to match client_max_body_size. The original question's config already had sendfile on; increase the timeouts too. I use the settings below to fix this; they can go in your location, server, and http contexts as appropriate.
client_body_in_file_only clean;
client_body_buffer_size 32K;
client_max_body_size 300M;
sendfile on;
send_timeout 300s;
From the documentation:
It is necessary to keep in mind that the browsers do not know how to correctly show this error.
I suspect this is what's happening. If you inspect the HTTP to-and-fro using tools such as Firebug or Live HTTP Headers (both Firefox extensions), you'll be able to see what's really going on.
