I have an Nginx server that serves a Symfony app.
This app may receive requests from different hosts (for now simulated in /etc/hosts), and each host has its own cache directory, named after the host, inside the public directory:
|-src
|-var
|-...
|-public
| |-host1.com
| | |-file1
| | |-file2
| |-host2.com
| | |-file1
| | |-...
URIs can be of the following form (note the absence of the subdirectory name):
https://host1.com/file1
In this case, I want Nginx to check whether public/host1.com/file1 exists, so I need to set up a kind of rewrite rule from /file1 to /host1.com/file1.
If the file exists, Nginx has to serve it. If it does not (e.g. https://host1.com/file53), I want Nginx to hand the request over to the Symfony app, so that it can generate the missing file and serve it.
How can I do this with Nginx?
Here is my attempt. Without the three lines below the comment, it works as a classic server.
server {
listen 80;
root /var/www/project/public;
location / {
##############################################
# Here is my try, but Nginx crashes with this:
##############################################
if ($host = "host1.com") {
try_files /host1.com$uri /index.php$is_args$args;
}
try_files $uri /index.php$is_args$args;
}
location ~ ^/index\.php(/|$) {
fastcgi_pass myapp:9000;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
}
location ~* \.(jpg|jpeg|gif|css|png|js|ico|html|eof|woff|ttf)$ {
if (-f $request_filename) {
expires 30d;
access_log off;
}
}
location ~ \.php$ {
return 404;
}
rewrite_log on;
error_log /var/log/nginx/project_error.log notice;
access_log /var/log/nginx/project_access.log;
}
As suggested in the comments, using different server blocks was the solution.
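For reference, here is a minimal sketch of that approach for host1.com, reusing the fastcgi settings from the config above (try_files is not among the directives allowed inside an if block, which is why nginx refused to start with the original attempt):
server {
    listen 80;
    server_name host1.com;
    root /var/www/project/public;
    location / {
        # look for the pre-generated file in the per-host subdirectory first,
        # then fall back to the Symfony front controller
        try_files /host1.com$uri /index.php$is_args$args;
    }
    location ~ ^/index\.php(/|$) {
        fastcgi_pass myapp:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
    }
}
# repeat (or template) a similar server block for host2.com, etc.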
I also had problems with the following part, which blocked all image requests without logging them, and that caused some confusion during my debugging:
location ~* \.(jpg|jpeg|gif|css|png|js|ico|html|eof|woff|ttf)$ {
if (-f $request_filename) {
expires 30d;
access_log off;
}
}
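That block captures every matching request but never falls back to the front controller, so images that only exist under the per-host subdirectory (or that still need to be generated) end up as 404s. Dropping the if wrapper and adding an explicit fallback avoids that. A sketch for the host1.com server block ('eot' is assumed instead of 'eof', and the per-host prefix has to be adjusted in each block):
location ~* \.(jpg|jpeg|gif|css|png|js|ico|html|eot|woff|ttf)$ {
    # serve the pre-generated asset if it exists, otherwise let Symfony generate it
    try_files /host1.com$uri /index.php$is_args$args;
    expires 30d;
    access_log off;
}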
Related
I have two requirements that I can't get to work together:
https://www.test-boutique.vm/store.css to be routed to the PHP application because the file content is streamed by the app;
https://www.test-boutique.vm/static/css/basic.css to be served from the filesystem because it exists on it;
My vhost is:
server {
listen 443;
server_name www.test-boutique.vm;
root /srv/app/public/front;
index index.php;
location / {
# try to serve file directly, fallback to index.php
try_files $uri $uri/ /index.php$is_args$args;
}
# css are for the files generated by the application (store.css)
location ~ \.(php|htm|css)$ {
try_files $uri $uri/ /index.php$is_args$args;
fastcgi_pass unix:/var/run/php-fpm.app.sock;
fastcgi_index index.php;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
fastcgi_param HTTPS on;
fastcgi_param APP_ENV dev;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
location ~* \.(js|css|bmp|png|jpg|jpeg|gif|swf|ico)$ {
try_files $uri =404;
log_not_found off;
access_log off;
expires 7d;
add_header Cache-Control public;
add_header Cache-Control must-revalidate;
}
rewrite ^/media/(.*)$ https://test.cloud/$http_host/media/$1 permanent;
rewrite ^/img/(.*)$ https://test.cloud/$http_host/img/$1 permanent;
access_log /var/log/nginx/fov4_access_log;
error_log /var/log/nginx/fov4_error_log;
}
With this version:
✅ /store.css works fine (generated by the PHP application)
❌ /static/css/basic.css is handled by the PHP application instead of being served directly from the filesystem (the file definitely exists at that path)
When I remove css from that location block, i.e. change location ~ \.(php|htm|css)$ { to location ~ \.(php|htm)$ {:
❌ /store.css is treated as a static asset and ends up as a 404 (the request is never passed to the application)
✅ /static/css/basic.css is served correctly
What am I missing, please?
Instead of matching all css files, as you do here: location ~ \.(php|htm|css)$ {, try matching only the one css file that needs to be generated by PHP:
location ~ \.(php|htm)$ {
# your php-fpm config here
}
location ~* \.(js|css|bmp|png|jpg|jpeg|gif|swf|ico)$ {
# your static files config here
}
location = /store.css {
# your php-fpm config here
}
Since an exact-match location takes precedence over regex locations regardless of order, /store.css will always be passed to PHP, while every other .css file is served as a static asset.
I have a React frontend and a Symfony backend that I'm trying to serve on the same domain. The React frontend needs to serve assets if they exist and otherwise fall back to index.html.
I'd like to serve the PHP Symfony app when the request URI starts with /api. Similar to the React app, all of its requests need to go to the index.php front controller.
The frontend is served correctly, but the API is not: I get a 404 from nginx when I hit /api in the browser.
I feel like I'm close, but for some reason nginx doesn't have the correct $document_root. I'm adding a header (X-script) to check the variables, and I'm getting the following:
X-script: /usr/share/nginx/html/index.php
Here's my nginx config.
server {
listen 80 default_server;
index index.html index.htm;
access_log /var/log/nginx/my-site.com.log;
error_log /var/log/nginx/my-site.com-error.log error;
charset utf-8;
location /api {
root /var/www/my-site.com/backend;
try_files $uri $uri/ /index.php$is_args$args;
}
location / {
root /var/www/my-site.com/frontend;
try_files $uri /index.html;
}
location ~* \.php$ {
add_header X-script "$document_root$fastcgi_script_name" always;
try_files $fastcgi_script_name =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS off;
}
}
Any help would be much appreciated.
The web root of a Symfony 4 project must include the public subfolder. I am not using NGINX myself, but I think this is the correct configuration:
location /api {
root /var/www/my-site.com/backend/public;
In the following vhost, the most important changes I made are:
Commented out the index directive in the server block: it is handled directly in the location blocks.
Added a trailing slash to location /api/ and removed the unnecessary $uri/ in that same location block.
Moved the php-fpm logic to the index.php location block: you want all requests to be passed to the front controller in a Symfony app.
For the same reason, moved the 404 logic to a general php block, which handles any other php file request.
server {
listen 80 default_server;
access_log /var/log/nginx/my-site.com.log;
error_log /var/log/nginx/my-site.com-error.log error;
charset utf-8;
location /api/ {
root /var/www/my-site.com/backend;
try_files $uri /index.php$is_args$args;
}
location / {
root /var/www/my-site.com/frontend;
try_files $uri /index.html;
}
location ~ ^/index\.php(/|$) {
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
fastcgi_param HTTPS off;
internal;
}
# return 404 for all other php files not matching the front controller
# this prevents access to other php files you don't want to be accessible.
location ~ \.php$ {
return 404;
}
}
Last, I bet you'll have to add the Symfony public folder to the root directive of the /api/ location block.
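Something like this (a sketch; the path is taken from the question):
location /api/ {
    root /var/www/my-site.com/backend/public;
    try_files $uri /index.php$is_args$args;
}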
This vhost tested fine on my localhost with the following tree:
api_test
- backend
- index.php
- frontend
- index.html
I can successfully access:
backend/index.php at api_test.dv/api
frontend/index.html at api_test.dv/
Kojo's answer is excellent, but to make it fully functional a root directive needs to be added at the server level; otherwise a "primary script unknown" error will be observed. That error is almost always caused by a wrongly set SCRIPT_FILENAME in the nginx fastcgi_param directive.
# Nginx configuration
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name ${NGINX_HOST};
root /var/www/html/backend/public;
access_log /var/log/nginx/access_log.log;
error_log /var/log/nginx/error.log error;
charset utf-8;
location / {
root /var/www/html/frontend/build;
try_files $uri /index.html;
}
location /api {
alias /var/www/html/backend/public;
try_files $uri /index.php$is_args$args;
}
location ~ ^/index\.php(/|$) {
fastcgi_pass php-fpm:9000;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
internal;
}
# return 404 for all other php files not matching the front controller
# this prevents access to other php files you don't want to be accessible.
location ~ \.php$ {
return 404;
}
}
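One detail worth noting about the /api block above (an illustration): alias strips the location prefix when building the filesystem path, which root would not do:
# with alias /var/www/html/backend/public;
#   /api/foo -> /var/www/html/backend/public/foo
# with root /var/www/html/backend/public; instead, the same request would map to
#   /api/foo -> /var/www/html/backend/public/api/foo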
In our web project we have a directory called public. We set the root in the nginx config to this public folder so that only the files inside it are accessible through the URL.
Our config looks somewhat like this:
server {
listen 80;
server_name example.com;
root /srv/nginx/example.com/v1/public;
index index.html index.php;
location / {
try_files $uri $uri/ /index.php;
add_header Access-Control-Allow-Origin *;
}
location ~ \.php$ {
fastcgi_intercept_errors on;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass php-fpm;
}
}
So now we can access /srv/nginx/example.com/v1/public through the URL example.com. Great.
But how can we make the URL example.com/v1 map to the root /srv/nginx/example.com/v1/public? And when we upload a new version, it should become available at example.com/v2, with the root at /srv/nginx/example.com/v2/public, without changing any config files.
One way I can think of is adding a new server block each time we upload a new version. But as I said, I don't want to touch the nginx config on every release and risk getting something wrong.
What other ways are there, and how can I use them?
Use a regular expression location block to split the URI into two components and use an alias directive to construct the path to the target file (which is represented by the $request_filename variable).
For example:
server {
listen 80;
server_name example.com;
root /var/empty;
index index.html index.php;
add_header Access-Control-Allow-Origin *;
location ~ ^/(?<prefix>[^/]+)/(?<suffix>.*)$ {
alias /srv/nginx/example.com/$prefix/public/$suffix;
if (!-e $request_filename) { rewrite ^ /$prefix/index.php last; }
location ~ \.php$ {
if (!-f $request_filename) { return 404; }
fastcgi_intercept_errors on;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $request_filename;
fastcgi_pass php-fpm;
}
}
}
Avoid the use of try_files with alias because of a long-standing nginx issue, and keep in mind the usual caution on the use of if.
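To illustrate how the capture groups map requests (an illustration, assuming a v2 deployment exists on disk):
# request: /v2/css/app.css
#   $prefix = "v2", $suffix = "css/app.css"
#   alias resolves to /srv/nginx/example.com/v2/public/css/app.css, served if it exists
# request: /v2/some/route (no matching file on disk)
#   rewritten to /v2/index.php and handled by the nested PHP location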
I'm trying to restrict access to my site to a few specific IPs, and I've run into the following problem: when I access www.example.com, deny works perfectly, but when I access www.example.com/index.php I get the "Access denied" page AND the php file is downloaded raw by the browser, without being processed.
I want to deny access to every file on the website for all IPs but mine. How should I do that?
Here's the config I have:
server {
listen 80;
server_name example.com;
root /var/www/example;
location / {
index index.html index.php; ## Allow a static html file to be shown first
try_files $uri $uri/ @handler; ## If missing, pass the URI to the front handler
expires 30d; ## Assume all files are cachable
allow my.public.ip;
deny all;
}
location @handler { ## Common front handler
rewrite / /index.php;
}
location ~ \.php/ { ## Forward paths like /js/index.php/x.js to relevant handler
rewrite ^(.*\.php)/ $1 last;
}
location ~ \.php$ { ## Execute PHP scripts
if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss
expires off; ## Do not cache dynamic content
fastcgi_pass 127.0.0.1:9001;
fastcgi_param HTTPS $fastcgi_https;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params; ## See /etc/nginx/fastcgi_params
}
}
That is because your deny/allow rule applies to just one location.
Remove that and try:
server {
listen 80;
server_name example.com;
root /var/www/example;
if ($remote_addr != "YOUR.PUBLIC.IP") {return 403;}
...
}
As the test is outside any specific location block, it applies in all cases.
Note also that if is not evil here, since it only returns.
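An equivalent alternative (a sketch) is to use the access module at the server level instead of if; server-level rules are inherited by every location that does not define its own allow/deny, so the per-location rules inside location / should then be removed:
server {
    listen 80;
    server_name example.com;
    root /var/www/example;

    allow YOUR.PUBLIC.IP;  # replace with the real address
    deny all;
    ...
}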
OK, so I've found the solution. Nginx picks the matching regex location, which in this case is the one for php files, over the prefix location, so the allow/deny rules in location / never applied to those requests. To make the config work, all the other locations must be defined inside the / location, except for @handler, which cannot be nested and must stay at the server level.
server {
listen 80;
server_name example.com;
root /var/www/example;
location / {
index index.html index.php; ## Allow a static html file to be shown first
try_files $uri $uri/ @handler; ## If missing, pass the URI to the front handler
expires 30d; ## Assume all files are cachable
allow my.public.ip;
deny all;
location ~ \.php/ { ## Forward paths like /js/index.php/x.js to relevant handler
rewrite ^(.*\.php)/ $1 last;
}
location ~ \.php$ { ## Execute PHP scripts
if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss
expires off; ## Do not cache dynamic content
fastcgi_pass 127.0.0.1:9001;
fastcgi_param HTTPS $fastcgi_https;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params; ## See /etc/nginx/fastcgi_params
}
}
location @handler { ## Common front handler
rewrite / /index.php;
}
}
On an Arch Linux server running Nginx, I have set cgit up correctly. I want to protect cgit with basic authentication, except for one directory, /pub/. Based on the documentation, I thought about putting the authentication in the server context and adding an exception with a location block for the /pub/ directory. I also followed a linked example to get the path matching right.
Here is the relevant part of the nginx configuration.
server {
listen 80;
server_name git.nicosphere.net;
index cgit.cgi;
gzip off;
auth_basic "Restricted";
auth_basic_user_file /srv/gitosis/.htpasswd;
location / {
root /usr/share/webapps/cgit/;
}
location ^~ /pub/ {
auth_basic off;
}
if (!-f $request_filename) {
rewrite ^/([^?/]+/[^?]*)?(?:\?(.*))?$ /cgit.cgi?url=$1&$2 last;
}
location ~ \.cgi$ {
gzip off;
include fastcgi_params;
fastcgi_pass 127.0.0.1:9001;
fastcgi_index cgit.cgi;
fastcgi_param SCRIPT_FILENAME /usr/share/webapps/cgit/cgit.cgi;
fastcgi_param DOCUMENT_ROOT /usr/share/webapps/cgit/;
}
}
This asks me for authentication no matter what the URL is. To simplify testing, I also tried leaving the root unauthenticated and protecting only /pub/; in that case it doesn't ask for a password at all. So far I have managed to protect either everything or nothing.
Thanks for your help, and my apologies for my approximate English.
I think you want something like this:
server {
listen 80;
server_name git.nicosphere.net;
index cgit.cgi;
gzip off;
root /usr/share/webapps/cgit;
# $document_root is now set properly, and you don't need to override it
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/cgit.cgi;
location / {
try_files $uri @cgit;
}
# Require auth for requests sent to cgit that originated in location /
location @cgit {
auth_basic "Restricted";
auth_basic_user_file /srv/gitosis/.htpasswd;
gzip off;
# rewrites in nginx don't match the query string
rewrite ^/([^/]+/.*)?$ /cgit.cgi?url=$1 break;
fastcgi_pass 127.0.0.1:9001;
}
location ^~ /pub/ {
gzip off;
rewrite ^/([^/]+/.*)?$ /cgit.cgi?url=$1 break;
fastcgi_pass 127.0.0.1:9001;
}
}
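To trace how requests flow through this configuration (an illustration with hypothetical repository names; it assumes cgit's static assets such as cgit.css live in /usr/share/webapps/cgit):
# /cgit.css           -> location /        -> file exists, served directly, no auth
# /pub/somerepo/      -> location ^~ /pub/ -> rewritten to /cgit.cgi?url=pub/somerepo/ -> fastcgi, no auth
# /private-repo/log/  -> location /        -> no such file -> @cgit -> auth required -> /cgit.cgi?url=private-repo/log/ -> fastcgi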