I’d like to separate my main app and XHR requests on a Symfony2 project.
Here are my regular Sf2 front controllers:
https://www.example.com/* => /web/app.php/*
https://www.example.com/app_dev.php/* => /web/app_dev.php/*
But I’d like to add XHR requests like this (on the same domain, to avoid CORS issues):
https://www.example.com/xhr/* => /web/xhr.php/* (no dev environment required)
I’m trying to create an nginx vhost, but I can’t find the right way to do it:
server {
    listen 443 default_server ssl;
    server_name www.example.com;
    root /var/www/example.com/current/web;
    include ssl_config.conf;

    location /xhr {
        try_files $uri /xhr.php$is_args$args;
    }

    location / {
        try_files $uri /app.php$is_args$args;
    }

    location ~ ^/(app|app_dev|xhr)\.php(/|$) {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS on;
    }
}
Your configuration will strip pathinfo during the redirect. Perhaps you should use:
try_files $uri /xhr.php$uri$is_args$args;
or simply:
try_files $uri /xhr.php$request_uri;
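For example, the /xhr location from your vhost with the second variant dropped in would look like this (a sketch; whether the /xhr prefix should stay in the path info depends on how your Symfony routes are declared):

location /xhr {
    # pass the full original URI (path and query string) to the XHR front controller
    try_files $uri /xhr.php$request_uri;
}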
I'm having trouble meeting my requirements:
I want two things:
https://www.test-boutique.vm/store.css to be routed to the PHP application because the file content is streamed by the app;
https://www.test-boutique.vm/static/css/basic.css to be served from the filesystem because it exists on it;
My vhost is:
server {
    listen 443;
    server_name www.test-boutique.vm;
    root /srv/app/public/front;
    index index.php;

    location / {
        # try to serve file directly, fallback to index.php
        try_files $uri $uri/ /index.php$is_args$args;
    }

    # css are for the files generated by the application (store.css)
    location ~ \.(php|htm|css)$ {
        try_files $uri $uri/ /index.php$is_args$args;
        fastcgi_pass unix:/var/run/php-fpm.app.sock;
        fastcgi_index index.php;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_param HTTPS on;
        fastcgi_param APP_ENV dev;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~* \.(js|css|bmp|png|jpg|jpeg|gif|swf|ico)$ {
        try_files $uri =404;
        log_not_found off;
        access_log off;
        expires 7d;
        add_header Cache-Control public;
        add_header Cache-Control must-revalidate;
    }

    rewrite ^/media/(.*)$ https://test.cloud/$http_host/media/$1 permanent;
    rewrite ^/img/(.*)$ https://test.cloud/$http_host/img/$1 permanent;

    access_log /var/log/nginx/fov4_access_log;
    error_log /var/log/nginx/fov4_error_log;
}
With this version:
✅ /store.css file works well (generated by the PHP application)
❌ /static/css/basic.css is passed to the PHP application instead of being served directly from the filesystem (the file definitely exists at this path)
When I remove css from that location block, i.e. change location ~ \.(php|htm|css)$ { to location ~ \.(php|htm)$ {:
❌ /store.css is treated as a static asset and ends up as a 404 (the request is never passed to the application)
✅ /static/css/basic.css is served correctly
What am I missing, please?
Instead of matching all css files as you do here (location ~ \.(php|htm|css)$ {), try matching only the one css file that you need to have generated by PHP:
location ~ \.(php|htm)$ {
    # your php-fpm config here
}

location ~* \.(js|css|bmp|png|jpg|jpeg|gif|swf|ico)$ {
    # your static files config here
}

location = /store.css {
    # your php-fpm config here
}
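One way to fill in those placeholders, reusing the try_files fallback and the php-fpm directives already in your vhost (a sketch, not tested):

location ~ \.(php|htm)$ {
    try_files $uri $uri/ /index.php$is_args$args;
    fastcgi_pass unix:/var/run/php-fpm.app.sock;
    fastcgi_index index.php;
    fastcgi_split_path_info ^(.+\.php)(/.*)$;
    fastcgi_param HTTPS on;
    fastcgi_param APP_ENV dev;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}

location ~* \.(js|css|bmp|png|jpg|jpeg|gif|swf|ico)$ {
    # your existing static-file handling (expires, cache headers, etc.)
    try_files $uri =404;
}

location = /store.css {
    # store.css does not exist on disk, so this falls through to the front
    # controller, which is then executed by the php location above
    try_files $uri /index.php$is_args$args;
}

Note that an exact-match location (=) is selected before any regular-expression location, regardless of where it appears in the file, so the static-asset block never sees /store.css.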
I have a React frontend and a Symfony backend that I'm trying to serve on the same domain. The React frontend needs to serve assets if they exist and otherwise fall back to serving index.html.
I'd like to serve the PHP Symfony app when /api is in the request URI. Similar to the React app, I need all of those requests to go to the index.php file.
The frontend is being served correctly, but not the API: I get a 404 from nginx when I hit /api in the browser.
I feel like I'm close, but for some reason nginx doesn't have the correct $document_root. I'm adding a header (X-script) to check what the variables are, and I'm getting the following:
X-script: /usr/share/nginx/html/index.php
Here's my nginx config.
server {
    listen 80 default_server;
    index index.html index.htm;

    access_log /var/log/nginx/my-site.com.log;
    error_log /var/log/nginx/my-site.com-error.log error;
    charset utf-8;

    location /api {
        root /var/www/my-site.com/backend;
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location / {
        root /var/www/my-site.com/frontend;
        try_files $uri /index.html;
    }

    location ~* \.php$ {
        add_header X-script "$document_root$fastcgi_script_name" always;
        try_files $fastcgi_script_name =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }
}
Any help would be much appreciated.
The web root of a Symfony 4 project must include the public subfolder. I am not using NGINX, but I think this is the correct configuration:
location /api {
    root /var/www/my-site.com/backend/public;
    try_files $uri $uri/ /index.php$is_args$args;
}
In the following vhost, the most important changes I made are:
- commented out the index directive in the server block: it is handled directly in the location blocks
- added a trailing slash to location /api/ and removed the unnecessary $uri/ in that same api location block
- moved the php-fpm logic to the index.php location block: you want all requests to be passed to the front controller in a Symfony app
- for the same reason, moved the 404 logic to a general php block, which will handle requests for any other php file
server {
    listen 80 default_server;

    access_log /var/log/nginx/my-site.com.log;
    error_log /var/log/nginx/my-site.com-error.log error;
    charset utf-8;

    location /api/ {
        root /var/www/my-site.com/backend;
        try_files $uri /index.php$is_args$args;
    }

    location / {
        root /var/www/my-site.com/frontend;
        try_files $uri /index.html;
    }

    location ~ ^/index\.php(/|$) {
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        fastcgi_param HTTPS off;
        internal;
    }

    # return 404 for all other php files not matching the front controller
    # this prevents access to other php files you don't want to be accessible.
    location ~ \.php$ {
        return 404;
    }
}
Lastly, I bet you'll have to add the Symfony public folder to the root directive in the api location block.
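In practice that would presumably mean something like the following (a sketch using the paths from the question, untested):

location /api/ {
    root /var/www/my-site.com/backend/public;
    try_files $uri /index.php$is_args$args;
}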
This vhost tested fine on my localhost with the following tree:
api_test
- backend
  - index.php
- frontend
  - index.html
I can successfully access:
- backend/index.php from api_test.dv/api
- frontend/index.html from api_test.dv/
Kojo's answer is excellent, but to make it completely functional a root directive needs to be added under server, or a “primary script unknown” error will be observed. This is almost always related to a wrongly set SCRIPT_FILENAME in the nginx fastcgi_param directive.
# Nginx configuration
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name ${NGINX_HOST};
    root /var/www/html/backend/public;

    access_log /var/log/nginx/access_log.log;
    error_log /var/log/nginx/error.log error;
    charset utf-8;

    location / {
        root /var/www/html/frontend/build;
        try_files $uri /index.html;
    }

    location /api {
        alias /var/www/html/backend/public;
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        fastcgi_pass php-fpm:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        internal;
    }

    # return 404 for all other php files not matching the front controller
    # this prevents access to other php files you don't want to be accessible.
    location ~ \.php$ {
        return 404;
    }
}
I have my Laravel app running on the root domain (domain.com/) and a WordPress site at domain.com/wordpress/.
Root folder for Laravel app = /var/www/laravel-application/
Root folder for WordPress = /var/www/wp/
Everything works fine with both Laravel and WordPress until I switch on pretty permalinks in WordPress; then I get the Laravel 'NotFoundHttpException' error page. It seems Laravel is intercepting the WordPress rewrites.
This is my Nginx config:
server {
    listen 80;
    listen [::]:80 ipv6only=on;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;

    if ($scheme != "https") {
        return 301 https://$host$request_uri;
    }

    root /var/www/laravel-application/public;
    index index.php index.html index.htm;
    server_name domain.com www.domain.com;

    location /wordpress {
        alias /var/www/wp;

        location /wordpress {
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            include fastcgi_params;
            include snippets/fastcgi-php.conf;
            fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $request_filename;
        }
    }

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ /index.php?$query_string;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /.well-known {
        allow all;
    }
}
I've tried practically all the suggestions from other topics; none worked. There are no plugins/themes, just a basic WordPress install. So far I have been able to get WordPress working in the alias folder with plain URLs.
I can't figure out why the Laravel routing still picks up requests for rewritten URLs within the alias folder.
What am I doing wrong here?
I think you should try adding this code to your config:
location / {
    try_files $uri $uri/ /index.php?$args;
}
I added this to my nginx config and it is working.
I fixed it myself using the config from this coding magician.
In our web project we have a directory called public. We set the root in the nginx config to this public folder so that only the files in the public folder are accessible through the URL.
Our config looks somewhat like this:
server {
    listen 80;
    server_name example.com;
    root /srv/nginx/example.com/v1/public;
    index index.html index.php;

    location / {
        try_files $uri $uri/ /index.php;
        add_header Access-Control-Allow-Origin *;
    }

    location ~ \.php$ {
        fastcgi_intercept_errors on;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php-fpm;
    }
}
So now we can access /srv/nginx/example.com/v1/public through the URL example.com. Great.
But how can we make example.com/v1 serve from the root /srv/nginx/example.com/v1/public? Also, if we upload a new version, it should be available at example.com/v2 with the root at /srv/nginx/example.com/v2/public, without changing any config files.
One way I think I could achieve this is by adding another server block each time we upload a new version. But, as I said, I don't want to touch the nginx config on every upload and risk getting something wrong.
What other ways are there? And how can I use these?
Use a regular expression location block to split the URI into two components and use an alias directive to construct the path to the target file (which is represented by the $request_filename variable).
For example:
server {
    listen 80;
    server_name example.com;
    root /var/empty;
    index index.html index.php;

    add_header Access-Control-Allow-Origin *;

    location ~ ^/(?<prefix>[^/]+)/(?<suffix>.*)$ {
        alias /srv/nginx/example.com/$prefix/public/$suffix;

        if (!-e $request_filename) { rewrite ^ /$prefix/index.php last; }

        location ~ \.php$ {
            if (!-f $request_filename) { return 404; }
            fastcgi_intercept_errors on;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            fastcgi_pass php-fpm;
        }
    }
}
Avoid the use of try_files with alias due to this issue. See this caution on the use of if.
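To illustrate the mapping this produces (example request paths only, assuming the directory layout from the question):

/v1/css/app.css   ->  served directly from /srv/nginx/example.com/v1/public/css/app.css
/v2/some/route    ->  no such file, so it is rewritten to /v2/index.php and handled by
                      /srv/nginx/example.com/v2/public/index.php via php-fpm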
I've been struggling with an NGINX configuration. I've set up a development environment (local laptop) with a configuration supporting search engine friendly (SEF) URLs, but the same configuration doesn't seem to work on my test server.
Local configuration:
server {
    server_name example;
    root /home/arciitek/git/example/public;
    client_max_body_size 500M;

    location /collection/ {
        try_files $uri $uri/ /collection/index.php$args;
        index index.php;
    }

    location / {
        index index.html index.htm index.php;
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    location ~ \.php$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /home/arciitek/git/example/public/$fastcgi_script_name;
    }
}
This works fine. Now on the test environment it looks like this:
server {
    server_name dev.example.com;
    access_log /srv/www/dev.example.com/access.log;
    error_log /srv/www/dev.example.com/error.log debug;
    root /srv/www/dev.example.com/public;

    location /collection/ {
        try_files $uri $uri/ /collection/index.php$args;
        index index.php;
    }

    location / {
        index index.html index.htm index.php;
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    location ~ \.php$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/run/php-fastcgi/php-fastcgi.socket;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /srv/www/dev.example.com/public$fastcgi_script_name;
    }
}
On my development environment everything is fine. But on my test environment, when I call a URL in my browser with the prettiness added (collection/[brand]/[product]), I get the "No input file specified" error. Mind you, if I call a URL ending with collection/, everything works as well.
Can anyone help me with this, please? If more info is needed, please let me know.
Kind regards,
Erik
After being frustrated for a long time, I noticed that it was not the configuration that gave me trouble, but the symlink in sites-enabled...
There you go... *pats himself on the back*
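For anyone hitting the same thing: on Debian-style installs the main nginx.conf usually only loads vhosts that are linked into sites-enabled, through an include roughly like the one below (exact paths may differ), so a vhost file that exists only in sites-available is silently ignored.

http {
    # ... other http-level settings ...
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}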