Rewrite nginx location rule - nginx

I'm working on load balancing for a cluster. It works great, but I realized that I want to be able to request a certain node by specifying it in the URL, e.g. domain.com/nodeX/request_uri/, where nodeX is the actual node I want to send the request to. The reason I want this is so I can easily know which node I'm on when I'm doing work on one of them and need to sync the actual file(s) on that node to the other nodes when files change.
Right now only Nextcloud is running on the server, in the folder /nextcloud/, with a data directory that is shared via GlusterFS, so it's not those files I need to replicate, but the core files of Nextcloud, or really any files in the www directory that change.
This is (in general) the setup on the master node, `/etc/nginx/sites-available/default`:
upstream cluster {
    ip_hash;
    server node1;
    server node2;
    [...]
    server nodeX;
}
server {
    listen 443 ssl http2 default_server;
    [...]more unrelated configuration[...]
    # This works as expected
    location / {
        proxy_pass http://cluster/;
    }
    # But this is where I need help:
    # If the location starts with /nodeX, where X is a number
    location ^~ /node([0-9]+) {
        # If the location is the master node (node0)
        location /node0 {
            # Include the Nextcloud configuration
            include snippets/nextcloud.conf;
        }
        # Otherwise pass it on to the requested node
        proxy_pass http://«node[0-9]+»/;
    }
}
Every slave node (nodeX, X > 0) loads the same configuration; this is a summary of it:
server {
    listen 80 default_server; # Yep, no need for SSL in the local network
    [...]
    include snippets/nextcloud.conf;
}
I have removed unrelated directives (such as add_header, root, etc.) to keep things clear. Every node (including the master) shares the same snippets folder, which is distributed through GlusterFS. This file, snippets/nextcloud.conf, is the one I need help with. Nextcloud will automatically redirect to domain.com/nextcloud/ if I enter domain.com/node0/nextcloud/, so I need a way to trick the server into believing it's running on /nextcloud/ even when it's actually running in a subdirectory of /nodeX/.
This is what I have so far, which does redirect me:
location ~ /(node[0-9]/?)nextcloud {
    # set max upload size
    client_max_body_size 512M;
    fastcgi_buffers 64 4K;
    # This is where I should be able to trick the
    # server to think it's running on /nextcloud/ even
    # when the request is /nodeX/nextcloud
    location ~ /(node[0-9]/?)nextcloud {
        rewrite ^ /nextcloud/index.php$uri;
    }
    location ~ ^/(node[0-9]/?)nextcloud/(?:build|tests|config|lib|3rdparty|templates|data)/ {
        deny all;
    }
    location ~ ^/(node[0-9]/?)nextcloud/(?:\.|autotest|occ|issue|indie|db_|console) {
        deny all;
    }
    location ~ ^/(node[0-9]/?)nextcloud/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+|core/templates/40[34])\.php(?:$|/) {
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        # Avoid sending the security headers twice
        fastcgi_param modHeadersAvailable true;
        fastcgi_param front_controller_active true;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }
    location ~ ^/(node[0-9]/?)nextcloud/(?:updater|ocs-provider)(?:$|/) {
        try_files $uri/ =404;
        index index.php;
    }
    # Adding the cache control header for js and css files
    # Make sure it is BELOW the PHP block
    location ~* \.(?:css|js|woff|svg|gif)$ {
        try_files $uri /nextcloud/index.php$uri$is_args$args;
        add_header Cache-Control "public, max-age=7200";
        add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
        add_header X-Content-Type-Options nosniff;
        add_header X-Frame-Options "SAMEORIGIN";
        add_header X-XSS-Protection "1; mode=block";
        add_header X-Robots-Tag none;
        add_header X-Download-Options noopen;
        add_header X-Permitted-Cross-Domain-Policies none;
        # Optional: Don't log access to assets
        access_log off;
    }
    location ~* \.(?:png|html|ttf|ico|jpg|jpeg)$ {
        try_files $uri /nextcloud/index.php$uri$is_args$args;
        # Optional: Don't log access to other assets
        access_log off;
    }
}
So my question, in general, is: is it possible to remove the "/nodeX/" part of the URI, or is there some other way to do this? :)
Note: the /nodeX/ part may also be missing entirely, in which case the load balancer is supposed to handle the actual balancing.
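One direction that may be worth sketching: strip the /nodeX prefix with a rewrite before anything else sees the URI, so the Nextcloud snippet only ever deals with /nextcloud/... paths. This is untested, and it assumes node0 is the master itself and that each slave is reachable by its hostname:

```nginx
# Hypothetical: serve master-node requests locally with the prefix removed,
# so snippets/nextcloud.conf only ever sees /nextcloud/... URIs.
location ~ ^/node0(/.*)$ {
    rewrite ^/node0(/.*)$ $1 last;  # /node0/nextcloud/... -> /nextcloud/...
}
# Hypothetical: forward slave-node requests with the prefix removed.
# Note: a variable in proxy_pass needs a resolver directive unless
# "nodeN" matches a defined upstream block.
location ~ ^/node(?<num>[1-9][0-9]*)(?<rest>/.*)$ {
    proxy_pass http://node$num$rest$is_args$args;
}
```

Whether this plays well with the existing `location ^~ /node([0-9]+)` block would need testing; the two approaches should not be combined as-is.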

Related

Public resources not accessible

I'm trying to access the bundles directory (which is located in /usr/src/app/public/bundles) of my Symfony project.
But none of the files in the public dir can be accessed with my browser. E.g.:
Request URL: http://localhost:8080/bundles/easyadmin/app.css
Request Method: GET
Status Code: 404 Not Found
The file exists...
This is my nginx config:
server {
    server_name ~.*;
    location / {
        root /usr/src/app;
        try_files $uri /index.php$is_args$args;
    }
    client_max_body_size 100m;
    location ~ ^/index\.php(/|$) {
        if ($request_method ~* "(GET|POST|PATCH|DELETE)") {
            add_header "Access-Control-Allow-Origin" '*' always;
        }
        # Preflighted requests
        if ($request_method = OPTIONS) {
            add_header "Access-Control-Allow-Origin" '*' always;
            add_header "Access-Control-Allow-Methods" "GET, POST, PATCH, DELETE, OPTIONS, HEAD";
            add_header "Access-Control-Allow-Headers" "Authorization, Origin, X-Requested-With, Content-Type, Accept";
            return 200;
        }
        fastcgi_pass php:9000;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        fastcgi_read_timeout 600;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /usr/src/app/public/index.php;
    }
    # return 404 for all other php files not matching the front controller
    # this prevents access to other php files you don't want to be accessible.
    location ~ \.php$ {
        return 404;
    }
    error_log /var/log/nginx/project_error.log;
    access_log /var/log/nginx/project_access.log;
}
I don't know what is misconfigured...
Without seeing any error log I can only guess at the issue, but let's check the directories you are using.
First of all, try to avoid root inside location blocks.
Putting root inside a location block will work, and it's perfectly valid. The problem starts when you add more location blocks: if every location block has its own root, then any location that isn't matched will have no root at all. Therefore, it is important that a root directive occur prior to your location blocks, which can then override it if they need to.
The configuration should look like this:
root /usr/src/app/public/;
location / {
    try_files $uri /index.php$is_args$args;
}
Your location is missing the public directory. So why do you see the 404?
For the request http://localhost:8080/bundles/easyadmin/app.css, with your configuration NGINX will look in /usr/src/app/ for /bundles/easyadmin/app.css, and it will not be able to find it. But /usr/src/app/public/bundles/easyadmin/app.css is a valid path and should result in 200 OK.
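Putting both pieces of advice together, a hedged sketch of the corrected server block (reusing the paths and PHP upstream from the question; not a tested drop-in config):

```nginx
server {
    server_name ~.*;
    # root at server level: every location now inherits /usr/src/app/public
    root /usr/src/app/public;
    client_max_body_size 100m;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        fastcgi_pass php:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /usr/src/app/public/index.php;
    }

    # all other .php files stay inaccessible
    location ~ \.php$ {
        return 404;
    }
}
```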

Restricting access by IP address to wp-admin and wp-login.php on Azure Wordpress on Linux

Using Azure Wordpress for Linux Web App, I'm trying to modify the nginx conf.d file to restrict access to wp-login.php and wp-admin directory by IP address. The directives that I'm trying to use either seem to completely allow access or completely deny access, it does not seem to respect allow x.x.x.x; at all.
Here is the code that I've placed in my server block:
location ~ ^(wp-admin|wp-login.php) {
    try_files $uri $uri/ /index.php?$args;
    index index.html index.htm index.php;
    allow x.x.x.x;
    deny all;
}
If I only have the deny all; directive, everything returns a 403 forbidden error. If I put in the allow directive, I can access it from any IP address and it never seems to throw an error.
I've noticed in my logs that this is showing up:
278#278: *25 access forbidden by rule, client: 172.19.0.1, server: _, request: "GET /wp-admin HTTP/1.1", host: "myhostname.com"
and that this precedes my server block in the default.conf file:
upstream php {
    #server unix:/tmp/php-cgi.socket;
    server 127.0.0.1:9000;
}
Is there something going on that basically makes all my inbound traffic appear to nginx as coming from the same IP address? Is there any way to "pass that down"?
Here is the default.conf file:
upstream php {
    server unix:/var/run/php/php7.0-fpm.sock;
    #server 127.0.0.1:9000;
}
server {
    listen 80;
    ## Your website name goes here.
    server_name _;
    ## Your only path reference.
    root /home/site/wwwroot;
    ## This should be in your http block and if it is, it's not needed here.
    index index.php index.html;
    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }
    # Add locations of phpmyadmin here.
    # Disable sendfile as per https://docs.vagrantup.com/v2/synced-folders/virtual
    sendfile off;
    set $skip_cache 0;
    # POST requests and urls with a query string should always go to PHP
    if ($request_method = POST) {
        set $skip_cache 1;
    }
    if ($query_string != "") {
        set $skip_cache 1;
    }
    # Don't cache uris containing the following segments
    if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") {
        set $skip_cache 1;
    }
    # Don't use the cache for logged in users or recent commenters
    if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
        set $skip_cache 1;
    }
    # Don't cache WooCommerce URLs
    # Cart widgets are still a problem: https://github.com/emcniece/docker-wordpress/issues/3
    if ($request_uri ~* "/(cart|checkout|my-account)/*$") {
        set $skip_cache 1;
    }
    location / {
        # This is cool because no php is touched for static content.
        # include the "?$args" part so non-default permalinks doesn't break when using query string
        try_files $uri $uri/ /index.php?$args;
    }
    location ~ \.php$ {
        #NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
        include fastcgi.conf;
        fastcgi_intercept_errors on;
        fastcgi_pass php;
        fastcgi_read_timeout 300;
        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache $skip_cache;
        fastcgi_cache WORDPRESS;
        fastcgi_cache_valid 60m;
    }
    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
    }
}
I started noticing today that I can see the IP address I want in the PHP variable $_SERVER['HTTP_X_CLIENT_IP']. Is there a way to test that with the allow/deny directives, or to override the value that allow/deny looks at so it uses this other value? For example:
if ($http_x_client_ip != x.x.x.x) {
    return 403;
}
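On the "pass that down" question: if the Azure front end really is the only client nginx ever sees, the ngx_http_realip_module can replace the address that allow/deny evaluates with one taken from a trusted header. This is only a sketch; the header name (X-Client-IP) and the trusted range below are assumptions that must match the actual front proxy:

```nginx
# Assumption: the front proxy on 172.19.0.0/16 sets X-Client-IP
# to the real client address (as seen in $_SERVER['HTTP_X_CLIENT_IP']).
set_real_ip_from 172.19.0.0/16;
real_ip_header X-Client-IP;

# allow/deny now see the rewritten client address.
location ~ ^/(wp-admin|wp-login\.php) {
    allow x.x.x.x;  # your client IP
    deny all;
    try_files $uri $uri/ /index.php?$args;
}
```

Trusting a header this way is only safe when clients cannot reach nginx directly and the proxy strips any client-supplied X-Client-IP.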

NGINX not caching or saving static files

I have Ubuntu 14 on AWS, and nginx is pointed at a website. I have tried everything, but it does not serve up the static images when I try to cache them. I have tried every combination of this, but every time I check, there are no files.
location ~* \.(css|js|jpeg|jpg|gif|png|ico|xml)$ {
    access_log off;
    expires 30d;
}
When I go to the directory, there are no files in the root path. Any ideas?
Here is the official server block usage example: https://www.nginx.com/resources/wiki/start/topics/examples/server_blocks/#
server {
    # Replace this port with the right one for your requirements
    listen 80 default_server; # could also be 1.2.3.4:80
    # Multiple hostnames separated by spaces. Replace these as well.
    server_name star.yourdomain.com *.yourdomain.com; # Alternately: _
    root /PATH/TO/WEBROOT;
    error_page 404 errors/404.html;
    access_log logs/star.yourdomain.com.access.log;
    index index.php index.html index.htm;
    # static file 404's aren't logged and expires header is set to maximum age
    location ~* \.(jpg|jpeg|gif|css|png|js|ico|html)$ {
        access_log off;
        expires max;
    }
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_intercept_errors on;
        # By all means use a different server for the fcgi processes if you need to
        fastcgi_pass 127.0.0.1:YOURFCGIPORTHERE;
    }
    location ~ /\.ht {
        deny all;
    }
}
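Relating this back to the question: a location that only sets expires and access_log does not change where files come from; static assets are still served from the inherited root, so the key is that root is defined at the server level. A minimal hedged sketch (the path is a placeholder):

```nginx
server {
    listen 80;
    root /var/www/example;  # assumed path: must point at your actual web root

    # This location only adds caching behaviour; nginx still resolves
    # the file as $root + $uri.
    location ~* \.(css|js|jpeg|jpg|gif|png|ico|xml)$ {
        access_log off;
        expires 30d;
    }
}
```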

Using nginx, how to conditionally hide multiple set-cookie response headers coming from backend?

I have an nginx config using fastcgi_pass. I'm sure it would work the same with proxy_pass.
For anonymous/guest users, I wish to hide the set-cookie response header set by php's session_start() (I also hide the cache-control, expires and pragma headers), but for logged-in users (and when an anonymous user is logging in), I wish to pass all the set-cookie headers sent by the backend.
I made the app set a special header (X-SPECIAL) I can inspect in nginx to decide which kind of response it is - for guests or not.
When the backend sends a single set-cookie header, I successfully pass it only when needed using $upstream_http_set_cookie. But when the backend sets multiple cookies using multiple set-cookie response headers, $upstream_http_set_cookie contains only one value, so the client only sees one cookie. This results in login not working.
How to get all values in $upstream_http_set_cookie?
Or alternatively, how can I conditionally set fastcgi_hide_header? I know that it does not support variables, and it seems it cannot be set differently per-request.
Do I have to use openresty and lua?
My config:
fastcgi_cache_path /var/nginx/cache levels=1:2 keys_zone=my_cache:10m;
map $upstream_http_x_special $not_store_in_cache {
    default "1";
    guest   "0";
}
map $upstream_http_x_special $maybe_set_cookie {
    default $upstream_http_set_cookie;
    guest   "";
}
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    root /usr/share/nginx/html;
    index index.php index.html index.htm;
    server_name localhost;
    location / {
        try_files $uri $uri/ =404;
    }
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
    location ~ \.php$ {
        try_files $uri =404;
        set $not_from_cache "";
        if ($request_method !~ ^(GET|HEAD)$) {
            set $not_from_cache "1";
        }
        if ($cookie_phpsessid != '') {
            set $not_from_cache "1";
        }
        # conditionally don't serve request from cache
        fastcgi_cache_bypass $not_from_cache;
        # conditionally don't store response into cache
        fastcgi_no_cache $not_from_cache$not_store_in_cache;
        fastcgi_cache my_cache;
        fastcgi_cache_key $scheme$host$request_uri;
        fastcgi_cache_valid any 1s;
        # serve from cache while generating next response
        fastcgi_cache_use_stale error timeout updating http_500 http_503;
        # single request to backend, no matter how many requests for same resource.
        fastcgi_cache_lock on;
        # store in cache regardless of response headers
        fastcgi_ignore_headers expires cache-control set-cookie;
        # remove these headers from response before passing back to client.
        fastcgi_hide_header pragma;
        fastcgi_hide_header expires;
        fastcgi_hide_header cache-control;
        fastcgi_hide_header x-special;
        # Fake conditional hide response header by always hiding and
        # conditionally adding it back.
        # Works only when there is a single "set-cookie" header,
        # fails when backend sets multiple cookies.
        fastcgi_hide_header set-cookie;
        add_header Set-Cookie $maybe_set_cookie;
        # standard code for php backend
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
Here's a way to set $not_from_cache that avoids the problematic if. It's not a complete answer, but it may lead to one with some more help:
map $request_method,$cookie_phpsessid,$uri $not_from_cache {
    default 0;
    ~*^POST,[^,]*,.*\.php$ 1; # match php POST request
    ~*,[^,]+,.*\.php$      1; # match php request with $cookie_phpsessid set
}
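As for the last question in the post: yes, lua-nginx-module (or OpenResty) is the usual way out, because a Lua header filter can clear every value of a multi-valued response header at once, which a map over $upstream_http_set_cookie cannot. A hedged sketch, assuming the backend marks guest responses with X-Special: guest as described in the question:

```nginx
location ~ \.php$ {
    # ... fastcgi_pass and cache settings as above ...
    header_filter_by_lua_block {
        -- Assigning nil removes ALL values of a multi-valued header
        if ngx.header["X-Special"] == "guest" then
            ngx.header["Set-Cookie"] = nil
        end
        ngx.header["X-Special"] = nil  -- don't leak the marker to clients
    }
}
```

With this approach the fastcgi_hide_header set-cookie / add_header workaround is no longer needed.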

Nginx and auth_basic with cgit

On an Arch Linux server running Nginx, I set up cgit correctly. I want to protect cgit with a basic authentication password, except for one directory, /pub/. Following the documentation, I thought about putting the authentication in the server context and making an exception with a location block for the /pub/ directory. I tried this link to get the path correctly.
Here is the relevant part of the nginx configuration file:
server {
    listen 80;
    server_name git.nicosphere.net;
    index cgit.cgi;
    gzip off;
    auth_basic "Restricted";
    auth_basic_user_file /srv/gitosis/.htpasswd;
    location / {
        root /usr/share/webapps/cgit/;
    }
    location ^~ /pub/ {
        auth_basic off;
    }
    if (!-f $request_filename) {
        rewrite ^/([^?/]+/[^?]*)?(?:\?(.*))?$ /cgit.cgi?url=$1&$2 last;
    }
    location ~ \.cgi$ {
        gzip off;
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9001;
        fastcgi_index cgit.cgi;
        fastcgi_param SCRIPT_FILENAME /usr/share/webapps/cgit/cgit.cgi;
        fastcgi_param DOCUMENT_ROOT /usr/share/webapps/cgit/;
    }
}
This asks me for authentication no matter what the URL is. For an easier test, I tried leaving the root without authentication and protecting only /pub/. In that case, it doesn't ask for a password at all. So far, I've managed to protect either everything or nothing.
Thanks for your help, and my apologies for my approximate English.
I think you want something like this:
server {
    listen 80;
    server_name git.nicosphere.net;
    index cgit.cgi;
    gzip off;
    root /usr/share/webapps/cgit;
    # $document_root is now set properly, and you don't need to override it
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/cgit.cgi;
    location / {
        try_files $uri @cgit;
    }
    # Require auth for requests sent to cgit that originated in location /
    location @cgit {
        auth_basic "Restricted";
        auth_basic_user_file /srv/gitosis/.htpasswd;
        gzip off;
        # rewrites in nginx don't match the query string
        rewrite ^/([^/]+/.*)?$ /cgit.cgi?url=$1 break;
        fastcgi_pass 127.0.0.1:9001;
    }
    location ^~ /pub/ {
        gzip off;
        rewrite ^/([^/]+/.*)?$ /cgit.cgi?url=$1 break;
        fastcgi_pass 127.0.0.1:9001;
    }
}