I have a Symfony 2.2.1 project running on nginx 1.2.6 (Ubuntu 13.04 in VirtualBox).
Assets render fine when they are hard links.
With symlinks, it only works on the first initialisation.
When I update a symlink's source, the browser renders my modifications as ����� characters. There are no errors in the browser, and the part without modifications is not affected.
Example of the end of my CSS file after modification:
[...]
div.form-actions {
    text-align: center;
}
�����
For now I am using hard links as a workaround. I did not have this problem with Apache2... :/
Do you have any idea?
Thanks
Nginx site conf:
server {
    listen 80;

    root /media/sf_NetBeansProjects/XXXX/web;
    index app.php;
    server_name XXXX.lo;

    location / {
        # try to serve file directly, fallback to rewrite
        try_files $uri @rewriteapp;
    }

    location @rewriteapp {
        # rewrite all to app.php
        rewrite ^(.*)$ /app.php/$1 last;
    }

    location ~ ^/(app|app_dev)\.php(/|$) {
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }

    error_log /media/sf_NetBeansProjects/XXXX/app/logs/nginx_errors.log;
    access_log /media/sf_NetBeansProjects/XXXX/app/logs/nginx_access.log;
}
The subtlety is that /media/sf_NetBeansProjects is a VirtualBox shared folder with my Windows 8 host, but as I said previously, Apache2 was always fine with that.
Try restarting php5-fpm after creating the symlink:
sudo service php5-fpm reload
Also check the disable_symlinks option: http://nginx.org/en/docs/http/ngx_http_core_module.html#disable_symlinks
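For reference, a minimal sketch of that directive; off is already the default, so this is mostly a sanity check that nothing has tightened it:
    # valid in http, server and location context; "off" is the default
    disable_symlinks off;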
This article helped:
https://coderwall.com/p/ztskha
"Simply spoken, sendfile() uses kernel calls to copy files directly from disc to tcp. If you are using remote filesystems (like nfs or the VirtualBox Guest Additions stuff), this method isn't reliable."
Essentially, turn off sendfile for NGINX if you are trying to serve files on your guest VM that exist on your host.
"To turn off sendfile() in Apache, you can use the EnableSendfile off directive, for nginx use sendfile off."
OK, well, there's one thing that comes to mind: maybe you're viewing the binary data of the image file because the browser isn't identifying it as an image, perhaps because nginx isn't sending the Content-Type header, or for some other reason. But I have one suggestion: add this to your default location /:
location / {
    try_files ..... ;
    types {
        image/jpeg jpg jpeg;
    }
}
Alternatively, you can include mime.types inside the server block:
server {
    #bla bla bla
    include mime.types;
    location / {
        #bla bla
    }
}
I'm not sure if this will work or not, but it's worth a try.
Try clearing your browser cache; sometimes nginx serves the file raw, with no MIME type set.
Also try changing the HTTP headers: set the expiration and Cache-Control per file type to a minimum while your project is still in development, so that the file pushed by the server is always up to date and is not cached by the browser.
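For example, a sketch for stylesheets, assuming they are served under a /css/ location (the path is hypothetical; adjust it to your layout):
    location /css/ {
        # "epoch" sends an Expires header in the past plus "Cache-Control: no-cache"
        expires epoch;
    }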
I had the same problem, using the same setup.
You need to disable sendfile in nginx in order to properly serve these static files through symbolic links.
location / {
    sendfile off; # do this before try_files
    # try to serve file directly, fallback to rewrite
    try_files $uri @rewriteapp;
}
I'm trying to figure out the best way of securing access to my MariaDB database. I have a root non-WordPress site with two WordPress sites in subdirectories (/blog and /shop), each with its own database, and phpMyAdmin as a database viewer (accessible at /phpmyadmin). I want to tighten security so that it can't be hacked so easily. However, I can't seem to implement any of the recommended security measures.
Creating a .htaccess in /usr/share/phpmyadmin and adding the following to whitelist IPs and block all other IPs has no effect:
Order Deny,Allow
Deny from All
Allow from 12.34.56.78
Changing the phpMyAdmin url via the config file (so it’s not accessible at /phpmyadmin) also seems to have no effect.
I'm assuming that it's because Apache is not running (I use nginx to serve my main domain and the two WordPress sites). I can't run Apache and nginx simultaneously (presumably because they're both fighting for port 80), but what I don't get is: when nginx is running and Apache is supposedly not running, how is the /phpmyadmin link still accessible?
Here’s my .conf file in /etc/nginx/sites-available (also symlinked to sites-enabled):
upstream wp-php-handler-four {
    server unix:/var/run/php/php7.4-fpm.sock;
}

server {
    listen 1234 default_server;
    listen [::]:1234 default_server;

    root /var/www/site;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html index.php;

    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    location /blog {
        try_files $uri $uri/ /blog/index.php?$args;
    }

    location /shop {
        try_files $uri $uri/ /shop/index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass wp-php-handler-four;
    }
}
I followed a tutorial to set this up (maybe I'm misunderstanding how it's fully set up), but is this not actually using Apache to access /phpmyadmin, or is it using some web socket? How can I make the above security attempts work?
Note: the /usr/share/phpmyadmin/ dir is symlinked to /var/www/site/
Creating a .htaccess in /usr/share/phpmyadmin and adding the following to whitelist IPs and block all other IPs has no effect:
Order Deny,Allow
Deny from All
Allow from 12.34.56.78
Of course it won't have any effect, since this file is processed only by Apache.
I can’t run apache and Nginx simultaneously (presumably because they’re both fighting for port 80)
In the early days of nginx there was a technique of using nginx for static files and Apache to process PHP scripts. Apache would run on some other port (for example, 8080), listening only on the local IP (127.0.0.1). The nginx configuration for that looked like:
upstream apache {
    server 127.0.0.1:8080;
}

server {
    ...

    location ~ \.php$ {
        proxy_pass http://apache;
    }
}
Nowadays this is rarely used, since PHP-FPM is more flexible and has less server overhead. However, it can still be useful when you have a complex .htaccess configuration and don't want to rewrite it for nginx/PHP-FPM.
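For comparison, a sketch of the PHP-FPM equivalent of that location block, using the socket path from your own configuration:
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
}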
but what I don’t get is that when Nginx is running and apache is supposedly not running, how is the /phpmyadmin link still accessible?
...
Is this not actually using apache to access /phpmyadmin or is it using some web socket?
This configuration uses the UNIX socket /var/run/php/php7.4-fpm.sock, where the PHP-FPM daemon listens for requests (you can read the introduction to this article for some additional details).
How can I make the above security attempts work?
One of many possible solutions is:
Unlink /usr/share/phpmyadmin/ from /var/www/site/.
Use the following location block (put it before the location ~ \.php$ { ... } one):
location ~ ^/phpmyadmin(?<subpath>/.*)? {
    allow 12.34.56.78;
    # add other IPs here
    deny all;

    alias /usr/share/phpmyadmin/;
    index index.php;
    try_files $subpath $subpath/ =404;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$subpath;
        fastcgi_pass wp-php-handler-four;
    }
}
To add to the otherwise quite thorough answer:
Since nginx doesn't use .htaccess files or the same syntax as Apache, you aren't being restricted the way Apache would restrict you. You may wish to find some other solution, or you could use what's built into phpMyAdmin: there is allow/deny functionality that you can learn about in the documentation: https://docs.phpmyadmin.net/en/latest/config.html#cfg_Servers_AllowDeny_order (and https://docs.phpmyadmin.net/en/latest/config.html#cfg_Servers_AllowDeny_rules); this will let you restrict access based on username and IP address.
I have a problem with my nginx configuration. I have two servers: one with nginx and one with my web app in Symfony 3.
Here is my configuration:
location /portal/mysite/ {
    set $frontRoot /srv/data/apps/mysite-portal-stag/current/web;
    set $sfApp app.php; # change to app.php for prod or app_dev.php for dev
    root /srv/data/apps/mysite-portal-stag/current/web;
    rewrite ^/portal/mysite/(.*)$ /$1 break;
    try_files $uri @sfFront;
}

location @sfFront {
    root /srv/data/apps/mysite-portal-stag/current/web;
    fastcgi_pass myserver:myport;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $frontRoot/$sfApp;
    fastcgi_param SCRIPT_NAME /portal/mysite/$sfApp;
}
The website works for all the PHP scripts, but all the assets (static files) come back as broken files. I don't understand nginx well enough to tell it which requests are static files and that they aren't scripts.
The try_files directive tries to find static files and serve them as static before giving up and letting the request be served as a script.
http://nginx.org/r/try_files
Checks the existence of files in the specified order and uses the first found file for request processing; the processing is performed in the current context. The path to a file is constructed from the file parameter according to the root and alias directives. It is possible to check directory’s existence by specifying a slash at the end of a name, e.g. “$uri/”. If none of the files were found, an internal redirect to the uri specified in the last parameter is made.
Note that although you're already using try_files, it appears that perhaps your path handling isn't up to spec.
As for your own answer with a temporary solution, there's nothing wrong with using a rewrite or two, but that said, it looks like you'd benefit from the alias directive.
http://nginx.org/r/alias
Defines a replacement for the specified location.
However, you've never explained why you're serving stuff out of /tmp. Note that /tmp is often automatically cleared by some cron scripts, e.g., on OpenBSD, the /etc/daily script would automatically find and remove files older than about 7 days (on a daily basis, as the name suggests).
In summary, you should first figure out what is the appropriate mapping between the web view of the filesystem and your filesystem.
Subsequently, if a prefix is found, just use a separate location for the assets, together with alias.
Otherwise, figure out the paths for try_files to work as intended.
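For instance, a minimal sketch of the separate-location-plus-alias approach, assuming the assets live under web/asset/ on the nginx host (the path is hypothetical; adjust it to your layout):
location /portal/mysite/asset/ {
    alias /srv/data/apps/mysite-portal-stag/current/web/asset/;
    expires 30d;
}
With alias, the part of the URI matching the location prefix is replaced by the alias path, so no rewrite is needed.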
I have found a very ugly solution until someone finds a better one; here is what I have done:
I copied the whole assets directory over to the proxy server where nginx runs.
Here is my new config:
location /portal/mysite/ {
    set $frontRoot /srv/data/apps/mysite-portal-stag/current/web;
    set $sfApp app.php;
    root /srv/data/apps/mysite-portal-stag/current/web;
    rewrite ^/portal/mysite/(.*)$ /$1 break;
    try_files $uri @sfFront;
}

location /portal/mysite/asset {
    root /tmp/mysite/asset;
    rewrite ^/portal/mysite/asset/(.*)$ /$1 break;
}

location @sfFront {
    set $frontRootWeb /srv/data/apps/mysite-portal-stag/current/web;
    root /srv/data/apps/mysite-portal-stag/current/web;
    fastcgi_pass myAdressWeb:myPort;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $frontRoot/$sfApp;
    fastcgi_param SCRIPT_NAME /portal/mysite/$sfApp;
}
And now it's working: all the JS/CSS and pictures are found.
If anyone can think of a "cleaner" answer, they are more than welcome to post it.
I set up my domain on my server using nginx. So far so good: my homepage works. But now I want to add some locations for later programming tests. My plan is to reach different projects at URLs like mydomain.com/php/myprogramm.php.
So I added some folders in /var/www/mydomain.com/php (my site index is in /var/www/mydomain.com/html).
Entering www.mydomain.com/php/ leads to a 403 error, and mydomain.com/php/myprogramm.php says "File not found"...
This is my nginx file:
server {
    listen 80 default_server;
    #listen [::]:80 default_server ipv6only=on;

    # Make site accessible from http://localhost/
    server_name mydomain.com www.mydomain.com;

    location / {
        root /var/www/mydomain.com/html;
        index index.html index.htm;
    }

    location /php/ {
        root /var/www/mydomain.com;
    }

    location /js/ {
        root /var/www/mydomain.com;
    }

    location /node/ {
        root /var/www/mydomain.com;
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
        #
        # # With php5-cgi alone:
        # fastcgi_pass 127.0.0.1:9000;
        # # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
Of course, when I set up my domain I also ran sudo chown -R www-data:www-data /var/www/mydomain.com/html and sudo chmod 755 /var/www.
Any ideas, someone? :/
Problem analysis
The first golden rule is:
nginx always serves a request from a single location only. (Re-)read http://nginx.org/en/docs/http/request_processing.html.
Based on your configuration:
Requests to (www.)mydomain.com/php/<whatever> for files not ending with .php will be served by location /php/ from /var/www/mydomain.com/php/<whatever>
Requests to (www.)mydomain.com/<whatever>.php will be served by location ~\.php$ from <default root ('html' by default)>/<whatever>.php
The first problem here is that you are not serving .php files from where you think you are. Learn from the location documentation how the location block that serves a request is chosen.
You will note that the "File not found" error was not an nginx error, but a message generated by PHP. That helps to narrow down where the problem comes from (frontend or backend).
Now, about that 403: it seems nginx has trouble accessing the location it is supposed to serve content from. Check the permissions on /var/www/mydomain.com/php/ (the directory and its contents).
Proposed advice
Your configuration looks suboptimal.
If you use the same root in lots of location blocks, why not move it one level up so it becomes the default (which you can override in specific locations where needed)?
You can use nested locations, e.g. to solve your PHP file serving problem. Note that it is always a good idea to enclose regex locations inside prefix locations (what is the difference? read the location documentation). The reason is that regex locations are order-sensitive, which is bad for maintenance; prefix locations are not, since only the longest match against the request URI is chosen.
Here is a proposed updated version of part of your configuration:
root /var/www/mydomain.com;

location / {
    root /var/www/mydomain.com/html;
    index index.html index.htm;
}

location /php/ {
    location ~ \.php$ {
        # Useless without use of $fastcgi_script_name and $fastcgi_path_info.
        # Moreover, requests ending up here always end with .php...
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        # You seem to have copy-pasted this section without understanding it.
        # Good understanding of what happens here is mandatory for security.
    }
}
I suggest you read the documentation about fastcgi_split_path_info, $fastcgi_script_name and $fastcgi_path_info.
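As a sketch of how those three pieces typically fit together (for a request to /index.php/foo, the split sets $fastcgi_script_name to /index.php and $fastcgi_path_info to /foo):
location ~ ^.+\.php(/|$) {
    fastcgi_split_path_info ^(.+\.php)(/.*)$;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}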
For my testing right now I fixed the issue quite simply:
I forgot to check my php.ini; I changed cgi.fix_pathinfo to 0.
I also changed the group of my folders (they still belonged to root) to www-data.
Finally, I updated my configuration: I set root /var/www/mydomain.com; in my server block (server {}).
That's all I did.
But I will keep your advice in mind for later issues.
Thanks for your help, I appreciate it.
I'm trying to set up a subdomain with nginx on Ubuntu 13.04. I was actually able to do this before with:
server {
    root /home/ajcrites/projects/site-a/public/;
    server_name sub.localhost;

    location / {
        try_files $uri /index.php;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        #fastcgi_pass 127.0.0.1:9000;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
Then http://sub.localhost would work perfectly.
Now I've copied the same config and just changed root (to a path that exists) and server_name to some other value like subber.localhost. No matter what I do, I can't get subber.localhost to work, and likewise I can't get sub.localhost to stop working while nginx is running. What's weird is that nginx will actually complain about errors in the config files, but even when there are none it makes no difference: only sub.localhost works.
I've tried doing all of the following, repeatedly and with various combinations:
Create a separate file in sites-available. nginx seems to think it's there, since it complains about errors in it when I try to reload.
service nginx stop && service nginx start
nginx stop; nginx
nginx -s reload
When I stop it, pgrep does not show any nginx process, so it seems to be stopping correctly. This does not appear to be a cache issue either, as a fresh browser instance behaves the same way.
I have had similar problems; each time it was a simple issue (that took me a long time to find, though). For instance (sorry if some points are obvious, but sometimes we focus on the difficult and ignore the obvious; at least that's my case):
ensure the symlink is present in sites-enabled
the root path has read access (and at least x on directories) all the way from / for the nginx user
is subber.localhost well defined in /etc/hosts for local tests (or in local DNS)?
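For that last point, /etc/hosts just needs a line mapping the name to the loopback address, e.g.:
127.0.0.1   subber.localhost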
Maybe you could also try to force the IP on which listen listens. E.g. instead of
listen 80;
you could try
listen 127.0.0.1:80; # or
listen 192.168.0.1:80; # your local address (or the remote one, if that's the case)
I'm currently building a multi-domain CMS in Rails. Since the content stays the same until the next change, I'd like to cache it via static files.
The public directory with some cached pages of foo.com and baz.com (/ and /asdf in both cases):
public/
  assets/
    cms.css
  sites/
    foo.com/
      assets/
        screen-some-hash.min.css
      index.html
      asdf/
        index.html
    baz.com/
      assets/
        screen-some-hash.min.css
      index.html
      asdf/
        index.html
What I want to do is the following:
redirect www to non-www (works)
If the request contains a subdomain (cms, admin, whatever):
If the path contains /assets, serve the file from public/assets and set the expires headers to 30d or so. No problem here, since /assets = public/assets and public/ is the Passenger root.
Everything else: handle it via Rails, no special caching or anything required.
For all other requests (meaning no subdomain):
If the path contains /assets, serve the file from public/sites/$host$request_uri and set the expires headers to 30d or so. Everything else: check for public/sites/$host$request_uri or fall back to the Rails app.
I have never worked with nginx conditionals beyond the www/non-www redirects and don't really know what I have to do for the conditions above. If at all possible, I don't want to use redirects for the cached stuff (i.e. redirecting to /sites/foo.com/asdf); instead, I'd like nginx to serve the file directly when going to http://foo.com/asdf.
Further: I don't want to hardcode the hostnames, as I'd like to handle an unknown number of domains. I also don't want to use more than a single Rails application for this.
Got something that works, not 100% but good enough for now.
server {
    listen 80;
    server_name *IP*;

    if ($host ~* www\.(.*)) {
        set $host_without_www $1;
        rewrite ^(.*)$ http://$host_without_www$1 permanent;
    }

    location ~ ^/(assets)/ {
        try_files /sites/$host$uri $uri @passenger;
        root /home/cms/app/current/public;
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    location / {
        try_files /sites/$host$uri/index.html /sites/$host$uri $uri @passenger;
        root /home/cms/app/current/public;
    }

    location @passenger {
        access_log /home/cms/app/shared/log/access.log;
        error_log /home/cms/app/shared/log/error.log;
        root /home/cms/app/current/public;
        passenger_enabled on;
    }
}
For subdomains, this should do the trick:
server {
    server_name ~^(?<subdomain>.+)\.example\.com$;
    access_log /var/log/nginx/$subdomain/access.log;

    location /assets {
        expires max;
    }

    location / {
        proxy_pass http://your_rails_app;
    }
}
Not really sure about the proxy_pass setting, as my only experience with Ruby apps is GitLab, which I'm running this way. I hope this helps at least a little.
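For proxy_pass to resolve, your_rails_app needs to be defined somewhere, typically as an upstream block. A sketch, assuming the Rails app listens locally on port 3000 (adjust to your setup):
upstream your_rails_app {
    # hypothetical backend address; point this at your Rails app server
    server 127.0.0.1:3000;
}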
server {
    server_name example.com;

    location /assets {
        # use $host (the requested hostname); root is suffixed with the URI automatically
        root /public/sites/$host;
        expires max;
    }
}
You'll have to add your own settings and play with it a little as I don't have a chance to actually test it now. But it should show you the way.