Nginx and auth_basic with cgit

On an Arch Linux server running Nginx, I have set up cgit correctly. I want to protect cgit with basic authentication, except for one directory, /pub/. Based on the documentation, my idea was to enable authentication in the server context and add an exception via a location block for the /pub/ directory. I followed this link to get the path right.
Here is the relevant part of the nginx configuration file:
server {
    listen 80;
    server_name git.nicosphere.net;
    index cgit.cgi;
    gzip off;

    auth_basic "Restricted";
    auth_basic_user_file /srv/gitosis/.htpasswd;

    location / {
        root /usr/share/webapps/cgit/;
    }

    location ^~ /pub/ {
        auth_basic off;
    }

    if (!-f $request_filename) {
        rewrite ^/([^?/]+/[^?]*)?(?:\?(.*))?$ /cgit.cgi?url=$1&$2 last;
    }

    location ~ \.cgi$ {
        gzip off;
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9001;
        fastcgi_index cgit.cgi;
        fastcgi_param SCRIPT_FILENAME /usr/share/webapps/cgit/cgit.cgi;
        fastcgi_param DOCUMENT_ROOT /usr/share/webapps/cgit/;
    }
}
This asks me for authentication no matter what the URL is. For an easier test, I tried leaving the root without authentication and protecting only /pub/. In that case, it doesn't ask for a password at all. So far, I have managed to protect either everything or nothing.
Thanks for your help, and my apologies for my approximate English.

I think you want something like this:
server {
    listen 80;
    server_name git.nicosphere.net;
    index cgit.cgi;
    gzip off;
    root /usr/share/webapps/cgit;

    # $document_root is now set properly, and you don't need to override it
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/cgit.cgi;

    location / {
        try_files $uri @cgit;
    }

    # Require auth for requests sent to cgit that originated in location /
    location @cgit {
        auth_basic "Restricted";
        auth_basic_user_file /srv/gitosis/.htpasswd;
        gzip off;
        # rewrites in nginx don't match the query string
        rewrite ^/([^/]+/.*)?$ /cgit.cgi?url=$1 break;
        fastcgi_pass 127.0.0.1:9001;
    }

    location ^~ /pub/ {
        gzip off;
        rewrite ^/([^/]+/.*)?$ /cgit.cgi?url=$1 break;
        fastcgi_pass 127.0.0.1:9001;
    }
}
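For what it's worth, the original configuration prompts for a password everywhere because the server-level if/rewrite sends every request, including those for /pub/..., to /cgit.cgi, which then matches the location ~ \.cgi$ block, and that block inherits the server-level auth_basic. If you prefer to keep auth_basic in the server context as in the original attempt, the /pub/ exception needs both auth_basic off and its own content handler. A minimal sketch, building on the config above (untested):
location ^~ /pub/ {
    auth_basic off;    # override the server-level auth
    gzip off;
    # hand the request to cgit here instead of falling through
    # to the server-level rewrite
    rewrite ^/([^/]+/.*)?$ /cgit.cgi?url=$1 break;
    fastcgi_pass 127.0.0.1:9001;
}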

Related

Rewrite URLs depending on the host

I have an Nginx server which serves a Symfony app.
But this app may receive requests from different hosts (for now simulated in /etc/hosts), and for each host there is a kind of cache directory located in the public directory, with its own name:
|-src
|-var
|-...
|-public
| |-host1.com
| | |-file1
| | |-file2
| |-host2.com
| | |-file1
| | |-...
URIs can be of the following form (please note the absence of the subdirectory name):
https://host1.com/file1
In this case, I want Nginx to check whether public/host1.com/file1 exists. So I need to set up a kind of rewrite rule from /file1 to /host1.com/file1.
If the file exists, Nginx has to serve it. But if not (e.g. https://host1.com/file53), I want Nginx to redirect to the Symfony app, so that it can generate the missing file and serve it.
How can I do this with Nginx?
Here is my attempt. Without the three lines below the comment, it works as a classic server.
server {
    listen 80;
    root /var/www/project/public;

    location / {
        ##############################################
        # Here is my try, but Nginx crashes with this:
        ##############################################
        if ($host = "host1.com") {
            try_files /host1.com$uri /index.php$is_args$args;
        }
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        fastcgi_pass myapp:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
    }

    location ~* \.(jpg|jpeg|gif|css|png|js|ico|html|eof|woff|ttf)$ {
        if (-f $request_filename) {
            expires 30d;
            access_log off;
        }
    }

    location ~ \.php$ {
        return 404;
    }

    rewrite_log on;
    error_log /var/log/nginx/project_error.log notice;
    access_log /var/log/nginx/project_access.log;
}
As suggested in the comments, using different server blocks was the solution.
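For reference, a minimal sketch of that approach, one server block per host (host names and paths taken from the question, untested):
server {
    listen 80;
    server_name host1.com;
    root /var/www/project/public/host1.com;

    location / {
        # serve the host-specific file if it exists,
        # otherwise fall back to the shared front controller
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        # index.php lives in the shared public directory, one level up
        root /var/www/project/public;
        fastcgi_pass myapp:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
# ...and an equivalent server block for host2.com, and so on.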
I also had problems with the following part, which was blocking all image requests without logging them, and that caused some confusion in my debugging process:
location ~* \.(jpg|jpeg|gif|css|png|js|ico|html|eof|woff|ttf)$ {
    if (-f $request_filename) {
        expires 30d;
        access_log off;
    }
}
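A likely reason is that this location matches image URIs but defines no fallback, so a missing file produces a plain 404 instead of reaching the front controller (and "eof" in the extension list is presumably a typo for the "eot" font extension). A sketch that avoids the if and keeps the caching behaviour might look like this (untested):
location ~* \.(jpg|jpeg|gif|css|png|js|ico|html|eot|woff|ttf)$ {
    expires 30d;
    access_log off;
    # serve the file if it exists, otherwise hand off to the app
    try_files $uri /index.php$is_args$args;
}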

Set nginx root to a public folder while keeping the parent directory name in the URL

In our web project we have a directory called public. We set the root in the nginx config to this public folder so that only the files in the public folder are accessible through the URL.
Our config looks somewhat like this:
server {
    listen 80;
    server_name example.com;
    root /srv/nginx/example.com/v1/public;
    index index.html index.php;

    location / {
        try_files $uri $uri/ /index.php;
        add_header Access-Control-Allow-Origin *;
    }

    location ~ \.php$ {
        fastcgi_intercept_errors on;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php-fpm;
    }
}
So now we can access /srv/nginx/example.com/v1/public through the URL example.com. Great.
But how can we set our URLs to example.com/v1 with the root at /srv/nginx/example.com/v1/public? Also, if we upload a new version, it should be available through the URL example.com/v2 with the root at /srv/nginx/example.com/v2/public, without changing config files.
One way I could achieve this is by adding a server block each time we upload a new version. But as I said, I don't want to change the nginx config on every upload and risk getting something wrong.
What other ways are there? And how would I use them?
Use a regular expression location block to split the URI into two components and use an alias directive to construct the path to the target file (which is represented by the $request_filename variable).
For example:
server {
    listen 80;
    server_name example.com;
    root /var/empty;
    index index.html index.php;
    add_header Access-Control-Allow-Origin *;

    location ~ ^/(?<prefix>[^/]+)/(?<suffix>.*)$ {
        alias /srv/nginx/example.com/$prefix/public/$suffix;

        if (!-e $request_filename) { rewrite ^ /$prefix/index.php last; }

        location ~ \.php$ {
            if (!-f $request_filename) { return 404; }
            fastcgi_intercept_errors on;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            fastcgi_pass php-fpm;
        }
    }
}
Avoid the use of try_files with alias due to this issue. See this caution on the use of if.
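To illustrate how the capture groups resolve (request paths assumed for illustration):
# GET /v2/css/app.css
#   prefix = "v2", suffix = "css/app.css"
#   alias  -> /srv/nginx/example.com/v2/public/css/app.css
# GET /v2/some/route  (no such file)
#   rewritten to /v2/index.php and executed with
#   SCRIPT_FILENAME = /srv/nginx/example.com/v2/public/index.php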

Redirect localhost to https nginx magento

I am running a Magento website on my localhost and want to redirect it to HTTPS so that service workers can get registered. My conf file is:
upstream php-handler {
    server unix:/var/run/php5-fpm.sock;
}

server {
    listen 80;
    listen *:443 ssl;
    server_name mytestsite.com;
    ssl_certificate /etc/nginx/ssl/wildcard.chained.crt;
    ssl_certificate_key /etc/nginx/ssl/somekey.key;
    return 301 https://$server_name$request_uri;

    # Path to the root of your installation
    root /home/webstack/magento;
    index index.php;
    error_page 403 /core/templates/403.php;
    error_page 404 /core/templates/404.php;

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location ~ ^/(?:\.htaccess|data|config|db_structure\.xml|README) {
        #deny all;
    }

    location / {
        # The following 2 rules are only needed with webfinger
        rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
        rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
        rewrite ^/.well-known/carddav /remote.php/carddav/ redirect;
        rewrite ^/.well-known/caldav /remote.php/caldav/ redirect;
        rewrite ^(/core/doc/[^\/]+/)$ $1/index.html;
        #try_files $uri $uri/ index.php;
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php(?:$|/) {
        try_files $uri $uri/ /index.php;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        #fastcgi_param HTTPS on;
        fastcgi_pass php-handler;
    }

    # Optional: set long EXPIRES header on static assets
    location ~* \.(?:jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
        expires 30d;
        # Optional: Don't log access to assets
        access_log off;
    }
}
When I restart the nginx server and type the address https://mytestsite.com, it says:
The mytestsite.com page isn’t working
mytestsite.com redirected you too many times.
I've tried clearing the cache and cookies, but it's still the same.
Can anyone tell me what is wrong with the conf file?
Thanks in advance.
Delete this line:
return 301 https://$server_name$request_uri;
and set the unsecure and secure links in the Magento admin panel (System > Configuration > Web):
Base URL = https://mytestsite.com
Base Link URL = https://mytestsite.com
Base Skin URL = https://mytestsite.com
Base Media URL = https://mytestsite.com
Base JavaScript URL = https://mytestsite.com
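The loop happens because the same server block listens on both port 80 and port 443, so the return 301 fires even for requests that already arrived over HTTPS. If you want to keep a redirect, one common pattern is to split the configuration into two server blocks, sketched below (untested, certificate paths as in the question):
server {
    listen 80;
    server_name mytestsite.com;
    # plain HTTP only: send everything to HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name mytestsite.com;
    ssl_certificate /etc/nginx/ssl/wildcard.chained.crt;
    ssl_certificate_key /etc/nginx/ssl/somekey.key;
    # ...the rest of the original configuration (root, locations, etc.)
}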

Nginx reverse proxy to Wordpress on a URI

I have a Symfony 2.5.X app running on an nginx server. I will call it domain.com.
The /news URI within that server is configured as a reverse proxy to a remote machine, where I run a Wordpress blog, again on an nginx server. I will call it blog.domain.com.
domain.com's configuration looks like this:
server {
    listen 80;
    server_name domain.com;
    set $project_path /home/webserver/prod.domain.com;
    root $project_path/web;
    error_log /home/webserver/prod.domain.com/app/logs/nginx_error.log;
    access_log /home/webserver/prod.domain.com/app/logs/nginx_access.log;
    charset utf-8;
    client_max_body_size 65m;

    # Some extra speed
    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # Reverse-proxy all /news calls to remote machine
    location ~ /news?(.*) {
        access_log off;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_set_header Host blog.domain.com; # without it it doesn't work
        #proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        proxy_set_header X-Custom-Secret 6ffe3dba7213c678324a101827aa3cf22c;
        proxy_redirect off;
        proxy_buffering off;
        #proxy_intercept_errors on;
        proxy_pass http://blog.domain.com:80;
        break;
    }

    # Default URLs
    location / {
        try_files $uri /app.php$is_args$args;
    }

    # Error pages (static)
    #error_page 403 /errorpages/403.html;
    error_page 404 /errorpages/404.html;
    #error_page 405 /errorpages/405.html;
    error_page 500 501 502 503 504 /errorpages/5xx.html;

    # Don't log garbage, add some browser caching
    location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
        access_log off;
        log_not_found off;
        expires max;
        add_header Pragma "public";
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        try_files $uri /app.php?$query_string;
    }

    location ~* ^.+\.(css|js)$ {
        expires modified +1m;
        add_header Pragma "private";
        add_header Cache-Control "private";
        etag on;
        try_files $uri /app.php?$query_string;
    }

    location = /robots.txt {
        allow all;
        access_log off;
        log_not_found off;
    }

    # Disallow .htaccess, .htpasswd and .git
    location ~ /\.(ht|git) {
        deny all;
    }

    # Parse PHP
    location ~ ^/(app|app_dev|config)\.php(/|$) {
        include fastcgi_params;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS off;
        fastcgi_pass php;
    }
}
blog.domain.com's configuration looks like this:
server {
    listen 80;
    server_name blog.domain.com;
    root /home/webserver-blog/news;
    access_log /home/webserver-blog/logs/http_access.log;
    error_log /home/webserver-blog/logs/http_error.log;
    charset utf-8;
    client_max_body_size 65m;

    # Some extra speed
    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # Default URLs
    location / {
        # This never gets parsed as / is reserved for our main server
    }

    location ~* ^/news/(wp-content|wp-admin) { # without this directive I didn't have any static files
        root /home/webserver-topblog/;
    }

    location ~* ^/news {
        try_files $uri $uri/ /index.php?args;
    }

    # Don't log garbage
    location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
        access_log off;
        log_not_found off;
        expires max;
    }

    location = /robots.txt {
        allow all;
        access_log off;
        log_not_found off;
    }

    # Disallow .htaccess or .htpasswd
    location ~ /\.ht {
        deny all;
    }

    # Disallow logs
    location ~ ^/logs/.*\.(log|txt)$ {
        deny all;
    }

    # Parse PHP
    location ~ \.php$ {
        #if (!-e $request_filename) { rewrite / /index.php last; }
        try_files $uri =404;
        include fastcgi_params;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php;
    }
}
As you can see, my Wordpress resides in /home/webserver-blog/news/. I have a slightly modified index.php file in Wordpress that checks for the X-Custom-Secret header, and if it's not present (or invalid), it forces a 301 redirect to domain.com/news/.
I have tried several different approaches to get this running properly.
The first (and most obvious) was pointing blog.domain.com's root to /home/webserver-blog/ and letting nginx naturally pass the request URI to the /news subdirectory. This worked quite well, yet it didn't let me use Wordpress' permalinks; it only worked with query strings. It also produced strange behaviour: it exposed blog.domain.com in the HTTP redirect if you called /news without the trailing slash. Those redirects were quickly handled by my custom index.php, but I still want to avoid exposing blog.domain.com completely.
The second (and pretty much current) approach was pointing blog.domain.com's root directly to Wordpress' directory, /home/webserver-blog/news/, and cheating on requests for static files with the location ~* ^/news/(wp-content|wp-admin) directive, which points its root one level up. This works for both permalinks and static files, but again: /news/wp-login.php gives me infinite redirects to itself, and /news/wp-admin/ actually downloads the index.php file instead of executing it (it is sent as application/octet-stream).
I am completely out of ideas. Any help would be much appreciated.
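The application/octet-stream download is likely a location-ordering effect: nginx uses the first matching regex location, so /news/wp-admin/index.php matches ^/news/(wp-content|wp-admin) before ~ \.php$ and is served as a static file with nginx's default MIME type. A hedged fix would be to nest a PHP handler inside that location (untested sketch, paths and upstream name from the question):
location ~* ^/news/(wp-content|wp-admin) {
    root /home/webserver-topblog/;

    # hand .php files under wp-admin to PHP instead of serving them raw
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php;
    }
}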
I think I managed to come up with a so-so solution. Far from perfect or clean, but... well, it works.
blog.domain.com's config:
server {
    listen 80;
    server_name blog.domain.com;
    root /home/webserver-blog;
    access_log /home/webserver-blog/logs/http_access.log;
    error_log /home/webserver-blog/logs/http_error.log;
    charset utf-8;
    client_max_body_size 65m;

    # Some extra speed
    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # Default URLs
    location ~* ^/news$ {
        rewrite ^ $scheme://domain.com/news/ permanent; # ** HARDCODED production url
        break;
    }

    location / {
        try_files $uri $uri/ @redir;
    }

    location @redir {
        rewrite ^/news/(.*)$ /news/index.php?$1 last;
    }

    # Don't log garbage
    location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
        access_log off;
        log_not_found off;
        expires max;
    }

    location = /robots.txt {
        allow all;
        access_log off;
        log_not_found off;
    }

    # Disallow .htaccess or .htpasswd
    location ~ /\.ht {
        deny all;
    }

    # Disallow logs
    location ~ ^/logs/.*\.(log|txt)$ {
        deny all;
    }

    # Parse PHP
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS off;
        fastcgi_pass php;
    }
}
So the trick is that I'm still operating on filesystem directories rather than on fancy all-the-way-around rewrites and redirects. news/ remains a physical directory in the filesystem that gets read by nginx's location / directive. The previous issue of exposing blog.domain.com when accessing /news without a trailing slash seems to be native nginx behaviour: it sees a directory, it appends a slash, and since its server_name is set to blog.domain.com, there we go. Hardcoding the production URL and putting that rule on top pretty much fixed the problem.
The @redir location again enabled Wordpress' permalinks nicely.
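To illustrate the permalink handling (URL shape assumed for illustration):
# GET /news/2015/05/some-post/   (no matching file or directory)
#   try_files falls through to @redir, which rewrites it to
#   /news/index.php?2015/05/some-post/
#   and the PHP location then runs Wordpress' index.php.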
One more thing I added to the whole setup to prevent people from going directly to http://blog.domain.com/ is another index.php file, stored directly in /home/webserver-blog/:
<?php
/*
 * domain.com redirector
 */
$production = 'http://domain.com/news/';

// Redirect nicely
if (isset($_SERVER['REQUEST_URI']) and $_SERVER['REQUEST_URI'] !== '/') {
    $target = sprintf('%s%s', $production, preg_replace('/^\//', '', $_SERVER['REQUEST_URI']));
    header('Location: ' . $target);
}
else header('Location: ' . $production);
...and, as mentioned before, a few lines on top of Wordpress' original index.php:
<?php
/*
 * wordpress loader
 */
$production = 'http://domain.com/news/';

// Allow only reverse-proxied requests
if (!isset($_SERVER['HTTP_X_CUSTOM_SECRET']) or $_SERVER['HTTP_X_CUSTOM_SECRET'] !== md5('your-md5encoded-text-in-proxy_set_header-X-Custom-Secret')) {
    die(header('Location: ' . $production));
}

require_once dirname(__FILE__) . '/index-wp-org.php';
Ugly... but works.
I'd still be happy to hear nicer solutions. :)

Location route not matching

I have a location which for some reason simply isn't triggering. I've tried the routes in all sorts of different orders, and still it doesn't work. When a user requests /_hostmanager/ it should trigger, but instead they get the index.php page from the root.
The server config file is:
server {
    index index.php index.html;
    root /var/www/html;
    server_name _;
    listen 80;

    # Logging
    error_log /var/log/httpd/elasticbeanstalk-error_log;

    # Route standard requests
    location / {
        try_files $uri $uri/ /index.php;
    }

    # Proxy Hostmanager
    location /_hostmanager/ {
        proxy_pass http://127.0.0.1:8999/;
    }

    # Include PHP
    location ~ \.php {
        # CGI Configuration
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include /etc/nginx/fastcgi_params;
        fastcgi_index index.php;

        # Zero-day exploit defense
        try_files $uri $uri/ /index.php =404;

        # Use socket for connection
        fastcgi_pass unix:/tmp/php5-fpm.sock;
    }

    # Cache control
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        log_not_found off;
        expires 360d;
    }

    # Disable hidden files
    location ~ /\. {
        deny all;
    }
}
Can anyone spot what my (probably stupid!) error is?
Thanks in advance! :)
Never mind, I worked it out! For some reason the reload command on nginx wasn't taking effect. I stopped and started it instead, and voila!
