Nginx holding page without rewriting images - nginx

I am trying to set up a holding page for a WordPress site on Nginx. The holding page already existed, but the site was previously on an Apache server before we moved it over.
I can get it to redirect to the holding page when maintenance mode is enabled, but the holding.png image does not display. Here is my Nginx config:
server {
    # Set to on to enable holding page
    set $maintenance off;

    if ($maintenance = on) {
        rewrite ^(.*)$ /holding.html break;
    }
}
The holding image is located in the root of the project, just like the holding page HTML file.

First of all, I would make Nginx send a 503 (Service Unavailable) HTTP response code while in maintenance mode. This prevents search engines from indexing your maintenance page while the site is offline.
Second, I would put all of the static assets for the maintenance page in a separate directory (e.g. maintenance_files) so that they are nicely coupled together and can be easily excluded from the rewrite.
Also, I would want a way to exclude some people so that they are able to do their maintenance tasks. This can be done by IP or by checking for a cookie (a sketch of the IP variant follows the cookie snippets below). You could then set/unset the cookie using a piece of JavaScript that you paste into your browser's developer console.
server {
    error_page 503 @maintenance;

    # Set to 1 to enable holding page
    set $maintenance 0;

    # Always allow access for clients that have the 'maintainer' cookie set
    if ($http_cookie ~* "maintainer") {
        set $maintenance 0;
    }

    location / {
        if ($maintenance) {
            return 503;
        }
    }

    location @maintenance {
        # You might want to use a different document root for the maintenance
        # files so you can freely move and shuffle in the default document root
        # during maintenance.
        #root /var/www/maintenance/;

        if ($uri !~ ^/maintenance_files/) {
            rewrite ^(.*)$ /holding.html break;
        }
    }
}
In order to get maintenance access, paste this in your browser's developer console:
document.cookie="maintainer=1";
To remove that cookie:
document.cookie = "maintainer=; expires=Thu, 01 Jan 1970 00:00:00 UTC";
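The IP-based exclusion mentioned above can be sketched like this; 203.0.113.10 is a placeholder for your own address, and the block would sit next to the cookie check inside the server block:

# Always allow access from a known maintainer IP (placeholder address)
if ($remote_addr = 203.0.113.10) {
    set $maintenance 0;
}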
Alternatively, instead of having a variable $maintenance that you set to 1, you could check for a file being present on disk (a sketch follows below). That way you wouldn't have to reload your Nginx configuration to put your site into maintenance mode. However, it adds an additional disk lookup on each request, which might be a reason to stick with the variable approach. In that case, run the following command after changing the $maintenance value:
sudo service nginx force-reload
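A rough sketch of the flag-file variant, assuming a path like /var/www/maintenance.flag (adjust to your layout); touch the file to enable maintenance mode and delete it to disable it, no reload required:

# Maintenance mode is on whenever the flag file exists on disk
if (-f /var/www/maintenance.flag) {
    set $maintenance 1;
}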

Related

How to rewrite URL in NGINX with location parameter?

I have an application (OpenProject) on a web server.
It is reachable under http://10.0.0.1:8000/
Between my users and the web server sits an Nginx reverse proxy, on which I need to publish it under a specific URL: https://ngrp.com/openproject
So I made the following change in my Nginx configuration (multiple websites are published on this Nginx instance using "location" blocks):
location /openproject/ {
    proxy_pass http://10.0.0.1:8000/;
    include /etc/nginx/conf.d/proxy.conf;
}
But when I open the page through the reverse proxy, the browser displays only a white page.
In the browser debugger I can see that some paths are wrong, so the browser couldn't load them. Example:
https://ngrp.com/assets/frontend/styles.c3a5e7705d6c5db9cfc1.css
(/openproject/ is missing in the URL)
Correct would be:
https://ngrp.com/openproject/assets/frontend/styles.c3a5e7705d6c5db9cfc1.css
So can somebody please tell me which configuration is needed so that I can reach OpenProject under the URL https://ngrp.com/openproject/ successfully?
Thank you very much.
When you proxy_pass, you proxy the entire HTTP request, meaning that you are actually requesting GET /openproject on http://10.0.0.1:8000/ rather than just GET /.
You can add this line before the proxy_pass to fix this and remove the /openproject prefix:
rewrite /openproject/(.*) /$1 break;
This changes the requested URL from /openproject/... to /...
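Putting that together with the location block from the question, the proxy section would look roughly like this:

location /openproject/ {
    # Strip the /openproject prefix before passing the request upstream
    rewrite /openproject/(.*) /$1 break;
    proxy_pass http://10.0.0.1:8000/;
    include /etc/nginx/conf.d/proxy.conf;
}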

Redirect whatever.net to whatever.com without explicitly writing "whatever" in nginx.conf

I have a web app that will be running on many different domains (and possibly subdomains). Each domain/subdomain will be available as both .net and .com. I want to redirect every .net request to .com.
Example:
www.whatever.net -> www.whatever.com
www.sub.whatever.net -> www.sub.whatever.com
whatever.net -> whatever.com
sub.whatever.net -> sub.whatever.com
somethingelse.net -> somethingelse.com
...
For various reasons I'd like to have only one nginx.conf file that works for every installation, so I can't write something like:
server {
    server_name .net;
    return 301 $scheme://whatever.com$request_uri;
}
Because this works just for the installation that's under the whatever.net/whatever.com domains. So I tried:
server {
    server_name "~^(?<name>.+)\.net$";
    return 301 $scheme://$name.com$request_uri;
}
But this does not work: it captures every request, not only those coming from the .net domain, and (in Chrome at least) the result is that the content of the address bar becomes .com/thequerypart.
I'm new to nginx, what am I doing wrong?
EDIT:
The rest of nginx.conf is another server block that starts with:
server {
    server_name .com;
    ...
}
It works as intended without the other one.
I... didn't know browsers cache responses that are "Moved Permanently" (makes sense). The setup I've shown in the question worked, but a previous, different attempt did not, and got cached. So, for anyone who may get here in the future, the answer is:
server {
    server_name "~^(?<name>.+)\.net$";
    return 301 $scheme://$name.com$request_uri;
}
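For completeness, the working setup is that catch-all .net server block sitting next to the regular .com block in the same nginx.conf, roughly:

server {
    server_name "~^(?<name>.+)\.net$";
    return 301 $scheme://$name.com$request_uri;
}

server {
    server_name .com;
    # ... the actual site configuration ...
}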

Blank POST with nginx upload module and chunked upload

I am using the nginx upload module to accept large uploads for a PHP application. I have configured nginx following this blog post (modified for my needs).
Here is (the applicable portion of) my nginx configuration:
server {
    # [ ... ]
    location /upload {
        set $upload_field_name "file";
        upload_pass /index.php;
        upload_store /home/example/websites/example.com/storage/uploads 1;
        upload_resumable on;
        upload_max_file_size 0;
        upload_set_form_field $upload_field_name[filename] "$upload_file_name";
        upload_set_form_field $upload_field_name[path] "$upload_tmp_path";
        upload_set_form_field $upload_field_name[content_type] "$upload_content_type";
        upload_aggregate_form_field $upload_field_name[size] "$upload_file_size";
        upload_pass_args on;
        upload_cleanup 400-599;
        client_max_body_size 200M;
    }
}
In the client-side JavaScript, I am using 8 MB chunks.
With this configuration, I am able to upload any file that is one chunk or smaller. However, when I try to upload any file that is more than one chunk, the response I get from the server for each intermediate chunk is blank, and the final chunk triggers the call to the PHP application without any incoming POST data.
What am I missing?
It turns out that @Brandan's blog post actually leaves out one important directive:
upload_state_store /tmp;
I added that and now everything works as expected.
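For context, the directive sits alongside the other upload_* directives inside the /upload location; a minimal sketch of the relevant part, using the paths from the configuration above:

location /upload {
    upload_pass /index.php;
    upload_store /home/example/websites/example.com/storage/uploads 1;
    upload_state_store /tmp; # where the module keeps state for resumable/chunked uploads
    upload_resumable on;
    # ... remaining upload_* directives as above ...
}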

Message "X-Accel-Mapping header missing" in Nginx error log

I am running a Rails 3 site on Ubuntu 8.04 with Nginx 1.0.0 and Passenger 3.0.7.
In my Nginx error.log I started seeing the message X-Accel-Mapping header missing quite a lot. Googling led me to the docs of Rack::Sendfile and to the Nginx docs.
Now, my app can be accessed through several domains and I am using send_file in my app to deliver some files specific to the domain they are requested from, e.g., if you come to domain1.com/favicon.ico I look up the favicon at public/websites/domain1/favicon.ico.
This works fine and I don't think I need/want to get Nginx involved and create some private area where I store those files, as the samples in the Rack::Sendfile docs suggest.
How can I get rid of the error message?
This message means that Rack::Sendfile disabled X-Accel-Redirect for you because the configuration for it is missing in nginx.conf...
I'm using Nginx + Passenger 3 + Rails 3.1.
Gathering information from these pages, I figured it out:
http://wiki.nginx.org/X-accel
http://greenlegos.wordpress.com/2011/09/12/sending-files-with-nginx-x-accel-redirect
http://code.google.com/p/substruct/source/browse/trunk/gems/rack-1.1.0/lib/rack/sendfile.rb?r=355
Serving Large Files Through Nginx via Rails 2.3 Using x-sendfile
I have a controller which maps /download/1 requests to storage files that have their own directory structure, like this: storage/00/00/1, storage/01/0f/15, etc. So I need to pass this through Rails, but then I need to use the send_file method, which will use X-Accel-Redirect to send the final file to the browser through nginx directly.
Within the code I have this:
send_file(
    '/var/www/shared/storage/00/00/01', # an absolute path to the file which you want to send
    :disposition => :inline,
    :filename => @file.name
)
I replaced the filename for the purposes of this example.
Now I had to add these lines to my nginx.conf:
server {
    # ...
    passenger_set_cgi_param HTTP_X_ACCEL_MAPPING /var/www/shared/storage/=/storage/;
    passenger_pass_header X-Accel-Redirect;

    location /storage {
        root /var/www/shared;
        internal;
    }
    # ...
}
The path /storage is not visible from the outside world; it is internal only.
Rack::Sendfile gets the X-Accel-Mapping header, extracts the path from it, and replaces /var/www/shared/storage with /storage.... Then it spits out the modified header:
X-Accel-Redirect: /storage/00/00/01
which is then processed by nginx.
I can see this works correctly as the file is downloaded 100x faster than before and no error is shown in the logs.
Hope this helps.
We used a similar technique to the one NoICE described, but I replaced the "hard-coded" directory containing all the files with a regular expression describing the folder containing the folders containing the files.
Sounds hard, yeah? Just take a look at these (/etc/nginx/sites-available/my.web.site):
location ~ /assets/(.+-[a-z0-9]+\.\w+) {
    root /home/user/my.web.site/public/assets/$1;
    internal;
}

location ~ /images/(.+)(\?.*)? {
    root /home/user/my.web.site/public/images/$1;
    internal;
}
This should be used with this check:
location / {
    # ...
    if (-f $request_filename) {
        expires max;
        break;
    }
    # ...
}
to prevent static files from being processed by Rails.
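On recent nginx versions, try_files is the more common idiom for the same static-file check; a rough sketch, where @app is an assumed named location pointing at your Rails/Passenger backend:

location / {
    # Serve the file directly if it exists on disk, otherwise hand the request to Rails
    try_files $uri @app;
}

location @app {
    # ... your Passenger / Rails configuration ...
}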
I did it following this guide:
https://mattbrictson.com/accelerated-rails-downloads
My server sends the file path /private_upload/file/123/myfile.txt; the actual file is at /data/myapp-data/private_upload/file/123/myfile.txt
# Allow NGINX to serve any file in /data/myapp-data/private_upload
# via a special internal-only location.
location /private_upload {
    internal;
    alias /data/myapp-data/private_upload;
}

# ---------- BACKEND ----------
location @backend {
    limit_req zone=backend_req_limit_per_ip burst=20 nodelay;
    proxy_pass http://backend;
    proxy_set_header X-Sendfile-Type X-Accel-Redirect;
    proxy_set_header X-Accel-Mapping /=/; # this header is required, it does nothing
    include /etc/nginx/templates/myapp_proxy.conf;
}
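Note that the limit_req zone referenced above must be defined separately in the http {} context; a sketch with placeholder size and rate:

# In the http {} block (zone size and rate are placeholders)
limit_req_zone $binary_remote_addr zone=backend_req_limit_per_ip:10m rate=10r/s;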

nginx + ssi + remote uri access does not work

I have a setup where my nginx is in front with apache+PHP behind.
My PHP application caches some pages in memcache, and those are accessed by nginx directly, except for some dynamic parts which are built using SSI in Nginx.
The first problem I had was that nginx didn't try to use memcache for the SSI URI.
<!--# include virtual="/myuser" -->
So I figured that if I force it to use a full URL, it would do it.
<!--# include virtual="http://www.example.com/myuser" -->
But in the log files (both nginx and apache) I can see that a slash has been added at the beginning of the URL:
http ssi filter "/http://www.example.com/myuser"
In the source code of the SSI module I see a PREFIX that seems to be added, but I can't really tell if I can disable it.
Anybody got this issue?
Nginx version : 0.7.62 on Ubuntu Karmic 64bits
Thanks a lot
You can configure nginx to include remote URLs even though you cannot refer to them directly in SSI instructions. In the site config, create a location with a local path and a named remote location that points where you want. For example:
server {
    ....
    location /remote {
        proxy_pass @long_haul; # or use "try_files" to provide fallback
    }

    location @long_haul {
        proxy_pass http://porno.com;
    }
    ....
}
and in the served HTML use an include directive that refers to the /remote path:
<!--# include virtual="/remote/rest-of-url&and=parameters" -->
Note that you may customize the URL that is passed on using variables and regexes. For example:
location ~ /remote(.+) {
    proxy_pass @long_haul$1?$args;
}
This has nothing to do with nginx; you just can't do that. SSI doesn't accept remote URIs; you can only specify a local file path.
See
http://en.wikipedia.org/wiki/Server_Side_Includes
