Blank POST with nginx upload module and chunked upload - nginx

I am using the nginx upload module to accept large uploads for a PHP application. I have configured nginx following this blog post (modified for my needs).
Here is (the applicable portion of) my nginx configuration:
server {
    # [ ... ]

    location /upload {
        set $upload_field_name "file";

        upload_pass /index.php;
        upload_store /home/example/websites/example.com/storage/uploads 1;
        upload_resumable on;
        upload_max_file_size 0;
        upload_set_form_field $upload_field_name[filename] "$upload_file_name";
        upload_set_form_field $upload_field_name[path] "$upload_tmp_path";
        upload_set_form_field $upload_field_name[content_type] "$upload_content_type";
        upload_aggregate_form_field $upload_field_name[size] "$upload_file_size";
        upload_pass_args on;
        upload_cleanup 400-599;
        client_max_body_size 200M;
    }
}
In the client-side JavaScript, I am using 8MB chunks.
With this configuration, I am able to upload any file that is one chunk or smaller. However, when I try to upload any file that is more than one chunk, the response I get from the server for each intermediate chunk is blank, and the final chunk triggers the call to the PHP application without any incoming POST data.
What am I missing?

It turns out that @Brandan's blog post actually leaves out one important directive:
upload_state_store /tmp;
I added that and now everything works as expected.
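For reference, the only change needed in the location block above is one extra line next to upload_resumable:
location /upload {
    # [ ... same directives as above ... ]
    upload_resumable on;
    upload_state_store /tmp; # where the module keeps state for partial (resumable) uploads
    # [ ... ]
}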

Related

NGINX pass image requests to dynamic server

NGINX sits in front of a NodeJS server. The NodeJS application generates dynamic content (attached files such as PNGs) and then calls the Twilio SMS API (MMS messages), which is given a URL to the attachment. How do I pass these URL requests through to the NodeJS server, given that they are not static content in NGINX?
Example: a PNG image is generated by NodeJS and must be immediately accessible to the Twilio API via a URL that comes in through NGINX in front of NodeJS.
If NodeJS generates the PNG URLs and those URLs share at least part of their path, you can match them with a regexp location in nginx. But it has to come before the location that handles regular PNGs.
For example, if your static PNGs are served by this location:
...
location ~ ^/.+\.(png|jpg|txt|css|js|ttf)$ {
    root /var/www/html;
}
...
and your NodeJS generates PNGs on paths like "/twilio/abcabc/123.png", then you can insert this location before the static one:
...
location ~ ^/twilio/.+\.png$ {
    proxy_pass http://nodejs;
}
location ~ ^/.+\.(png|jpg|txt|css|js|ttf)$ {
    root /var/www/html;
}
...
The nginx documentation describes the order in which a request is matched against locations:
"Then regular expressions are checked, in the order of their appearance in the configuration file."

Upload file to Nginx with cURL

All,
I'm trying to upload a local file to my remote Nginx server via cURL. I have built Nginx from source with the upload module and the DAV module. At the bottom of the Nginx page, there is an example form to upload a file. I'm not sure how I would implement the form, and (several) Google searches have returned little helpful information about uploading directly to Nginx via cURL.
Current tech stack:
Nginx
Green Unicorn
Flask
Of all the different avenues I've tried, the following is the one that seems the most appropriate for the task.
curl -X POST -F "image=@example.gif" http://54.226.64.199/upload
However, the response is underwhelming.
I've tried --upload-file as well; the response is a 405. From what I've read, the upload location only accepts a POST, not a PUT, which is why I get the 405.
I don't need a full solution (though that would be great!), just a pointer in the right direction.
Any help is appreciated. Thanks
EDIT: sorry, I wanted to include part of my .conf:
location /upload {
    upload_store /tmp;
    #upload_pass @none;
    upload_store_access all:rw;
    upload_cleanup 400 404 499 500-505;
}
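For what it's worth, if you stay with the upload module, the commented-out upload_pass is what tells it where to hand the resulting POST; without it the location never passes anything to your application. A minimal sketch, assuming the Flask app listens on 127.0.0.1:8000 (the address and the @backend name are hypothetical):
location /upload {
    upload_store /tmp;
    upload_store_access all:rw;
    upload_pass @backend; # hand the rewritten POST (file metadata fields) to the app
    upload_cleanup 400 404 499 500-505;
}

location @backend {
    proxy_pass http://127.0.0.1:8000; # hypothetical gunicorn/Flask address
}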
You can do this by specifying the filename in the URL, without using any external module:
location ~ "/upload/([0-9a-zA-Z-.]*)$" {
    alias /storage/www/upload/$1;
    client_body_temp_path /tmp/upload_tmp;
    dav_methods PUT DELETE MKCOL COPY MOVE;
    create_full_put_path on;
    dav_access group:rw all:r;
}
And use:
curl -T example.gif http://54.226.64.199/upload/example.gif

Nginx holding page without rewriting images

I am trying to set up a holding page for a WordPress site on Nginx. The holding page already existed but the site used to be on an Apache server before we moved it over.
I can get it to redirect to the holding page if maintenance mode is enabled, but the holding.png image does not display. Here is my Nginx code:
server {
    # Set to on to enable holding page
    set $maintenance off;

    if ($maintenance = on) {
        rewrite ^(.*)$ /holding.html break;
    }
}
The holding image is located in the root of the project, just like the holding page HTML file.
First of all, I would make Nginx send a 503 (Service Unavailable) HTTP response code while in maintenance mode. This prevents search engines from indexing your maintenance page while the site is offline.
Second, I would put all of the static assets for the maintenance page in a separate directory (e.g. maintenance_files) so that they are nicely coupled together and can be easily excluded from the rewrite.
Also, I would want a way to exclude some people so that they are able to do their maintenance tasks. This can be done by IP or by checking for a cookie. You could then set/unset the cookie using a piece of JavaScript that you paste into your browser inspector.
server {
    error_page 503 @maintenance;

    # Set to 1 to enable holding page
    set $maintenance 0;

    # Always allow access for clients that have the 'maintainer' cookie set
    if ($http_cookie ~* "maintainer") {
        set $maintenance 0;
    }

    location / {
        if ($maintenance) {
            return 503;
        }
    }

    location @maintenance {
        # You might want to use a different document root for the maintenance
        # files so you can freely move and shuffle in the default document root
        # during maintenance.
        #root /var/www/maintenance/;
        if ($uri !~ ^/maintenance_files/) {
            rewrite ^(.*)$ /holding.html break;
        }
    }
}
In order to get maintenance access, paste this in your browser inspector:
document.cookie="maintainer=1";
To remove that cookie:
document.cookie = "maintainer=; expires=Thu, 01 Jan 1970 00:00:00 UTC";
Alternatively, instead of having a variable $maintenance that you set to 1, you could check for a file being present on disk. That way you wouldn't have to reload your Nginx configuration to put your site in maintenance mode. However, it would add an additional disk lookup on each request, which might be a reason to stick with the variable. In that case, run the following command after changing the $maintenance value:
sudo service nginx force-reload
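A minimal sketch of the file-check variant, assuming /var/www/maintenance.flag as the flag file (the path is arbitrary):
# Enable maintenance mode with: touch /var/www/maintenance.flag
# Disable it by deleting the file; no nginx reload required.
set $maintenance 0;
if (-f /var/www/maintenance.flag) {
    set $maintenance 1;
}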

Message "X-Accel-Mapping header missing" in Nginx error log

I am running a Rails 3 site on Ubuntu 8.04 with Nginx 1.0.0 and Passenger 3.0.7.
In my Nginx error.log I started seeing the message X-Accel-Mapping header missing quite a lot. Googling led me to the docs of Rack::Sendfile and to the Nginx docs.
Now, my app can be accessed through several domains and I am using send_file in my app to deliver some files specific to the domain they are requested from; e.g., if you come to domain1.com/favicon.ico, I look up the favicon at public/websites/domain1/favicon.ico.
This works fine and I don't think I need/want to get Nginx involved and create some private area where I store those files, as the samples in the Rack::Sendfile docs suggest.
How can I get rid of the error message?
This message means that Rack::Sendfile has disabled X-Accel-Redirect for you because the configuration for it is missing from your nginx.conf...
I'm using Nginx + Passenger 3 + Rails 3.1.
Gathering information from these pages, I figured it out:
http://wiki.nginx.org/X-accel
http://greenlegos.wordpress.com/2011/09/12/sending-files-with-nginx-x-accel-redirect
http://code.google.com/p/substruct/source/browse/trunk/gems/rack-1.1.0/lib/rack/sendfile.rb?r=355
Serving Large Files Through Nginx via Rails 2.3 Using x-sendfile
I have a controller which maps /download/1 requests to storage files that have their own directory structure, like this: storage/00/00/1, storage/01/0f/15, etc. So I need to pass this through Rails, but then I need to use the send_file method, which will use X-Accel-Redirect to send the final file to the browser through nginx directly.
Within the code I have this:
send_file(
    '/var/www/shared/storage/00/00/01', # an absolute path to the file you want to send
    :disposition => :inline,
    :filename => @file.name
)
I replaced the filename for the purposes of this example.
Now I had to add these lines to my nginx.conf:
server {
    # ...
    passenger_set_cgi_param HTTP_X_ACCEL_MAPPING /var/www/shared/storage/=/storage/;
    passenger_pass_header X-Accel-Redirect;

    location /storage {
        root /var/www/shared;
        internal;
    }
    # ...
}
The path /storage is not visible from the outside world; it is internal only.
Rack::Sendfile gets the header X-Accel-Mapping, extracts the path from it and replaces /var/www/shared/storage with /storage.... Then it spits out the modified header:
X-Accel-Redirect: /storage/00/00/01
which is then processed by nginx.
I can see this works correctly as the file is downloaded 100x faster than before and no error is shown in the logs.
Hope this helps.
We used a similar technique to the one NoICE described, but I replaced the "hard-coded" directory containing all the files with a regular expression describing the folder containing the folders that contain the files.
Sounds hard, yeah? Just take a look at these (/etc/nginx/sites-available/my.web.site):
location ~ /assets/(.+-[a-z0-9]+\.\w+) {
    alias /home/user/my.web.site/public/assets/$1;
    internal;
}

location ~ /images/(.+)(\?.*)? {
    alias /home/user/my.web.site/public/images/$1;
    internal;
}
This should be used with this check:
location / {
    # ...
    if (-f $request_filename) {
        expires max;
        break;
    }
    # ...
}
to prevent the static files from being processed by Rails.
I did it by following this guide:
https://mattbrictson.com/accelerated-rails-downloads
My server sends the file path /private_upload/file/123/myfile.txt; the file is at /data/myapp-data/private_upload/file/123/myfile.txt.
# Allow NGINX to serve any file in /data/myapp-data/private_upload
# via a special internal-only location.
location /private_upload {
    internal;
    alias /data/myapp-data/private_upload;
}

# ---------- BACKEND ----------
location @backend {
    limit_req zone=backend_req_limit_per_ip burst=20 nodelay;
    proxy_pass http://backend;
    proxy_set_header X-Sendfile-Type X-Accel-Redirect;
    proxy_set_header X-Accel-Mapping /=/; # required so Rack::Sendfile enables X-Accel-Redirect; the /=/ mapping itself rewrites nothing
    include /etc/nginx/templates/myapp_proxy.conf;
}
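To make the flow concrete: for the example path above, the backend responds with the header
X-Accel-Redirect: /private_upload/file/123/myfile.txt
and nginx, via the internal location and its alias, serves /data/myapp-data/private_upload/file/123/myfile.txt directly.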

nginx + ssi + remote uri access does not work

I have a setup with nginx in front and Apache+PHP behind.
My PHP application caches some pages in memcached, which nginx serves directly, except for some dynamic parts that are built using SSI in nginx.
The first problem I had was that nginx didn't try to use memcached for the SSI URI:
<!--# include virtual="/myuser" -->
So I figured that if I forced it to use a full URL, it would:
<!--# include virtual="http://www.example.com/myuser" -->
But in the log files (both nginx and Apache) I can see that a slash has been added at the beginning of the URL:
http ssi filter "/http://www.example.com/myuser"
In the source code of the SSI module I see a PREFIX that seems to be added, but I can't really tell if I can disable it.
Has anybody hit this issue?
Nginx version: 0.7.62 on Ubuntu Karmic 64-bit
Thanks a lot
You can configure nginx to include remote URLs even though you cannot refer to them directly in SSI instructions. In the site config, create a location with a local path and a named remote location that points where you want. For example:
server {
    ....

    location /remote {
        try_files /dev/null @long_haul; # proxy_pass cannot point at a named location, but try_files can hand off to one
    }

    location @long_haul {
        proxy_pass http://porno.com;
    }

    ....
}
and in the served HTML use an include directive that refers to the /remote path:
<!--# include virtual="/remote/rest-of-url&and=parameters" -->
Note that you may customize the URL that is passed upstream using variables and a regexp. For example:
location ~ /remote(.+) {
    proxy_pass http://porno.com$1$is_args$args;
}
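With that regexp variant, an include such as
<!--# include virtual="/remote/rest-of-url?and=parameters" -->
is proxied to http://porno.com/rest-of-url?and=parameters.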
This has nothing to do with nginx; you just can't do that. SSI doesn't accept a remote URI; you can only specify a local file path.
See
http://en.wikipedia.org/wiki/Server_Side_Includes
