nginx - Failing to load images only, loading CSS and JS

So I set up nginx and uWSGI using this tutorial: http://uwsgi-docs.readthedocs.org/en/latest/tutorials/Django_and_nginx.html
I finished the tutorial completely, but for some reason only my images are not being loaded on the page when I run the command...
uwsgi --ini exchange_uwsgi.ini
where exchange_uwsgi.ini is my initialization file specifying which socket I run on, where my project is, where my virtualenv is, etc.
Just to reiterate: the only things not showing up are my images, and my images and CSS are all stored in one folder.
Any reason why this might happen?
Thanks

I fixed the problem.
Make sure to check the permissions on all of your static files. Only 2 images of mine were not loading and they were the only ones with incorrect permissions.
On Linux, first go to the folder containing all your static files in a terminal, then type "ls -l" (list in long format) so you can view each file's permissions.
I set the permissions on each file to -rw-rw-r--
Edit: in order to change permissions, look into the command "chmod"
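For anyone in the same spot, a minimal sketch of the check and the fix (the image file names here are hypothetical):

cd /path/to/static                     # the folder with your static files
ls -l                                  # the long listing shows each file's mode
chmod 664 logo.png banner.png          # 664 is the octal form of -rw-rw-r--
find . -type f -exec chmod 664 {} +    # or fix every regular file at once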

Related

Nginx - Custom configuration files location

I use Nginx with many domains. Some of these domains have custom configurations. I'm including these files inside the server blocks in the Nginx configurations.
For example:
server {
    # ... some configuration directives here ...
    include /var/somewhere/custom.conf;
    # etc.
}
The configuration files of Nginx are inside: /etc/nginx
To keep everything in one place and avoid having my custom configuration files scattered around, I would like to put them inside /etc/nginx/some_directory.
Can I create a sub directory inside /etc/nginx without it causing any issues with Nginx itself? I want to create /etc/nginx/some_directory/ and place my many custom configuration files inside it and include them.
I'm specifically asking this question because I don't want to break something on my production server.
If nginx doesn't know about a directory, it won't touch it. You can verify that by grepping for such a pattern in nginx's codebase.
However, messing with a foreign folder structure might cause problems with the permissions and ownership of the files. Therefore, either just use the pre-defined folders nginx prepared for you (/etc/nginx/sites-enabled and /etc/nginx/sites-available), which are meant to be used with symlinks, as nginx itself does:
# ls /etc/nginx/sites-enabled
default -> /etc/nginx/sites-available/default
# ls /etc/nginx/sites-available
default
Otherwise you're getting into what C/C++ programmers call undefined behavior: there's no guarantee that what works now will keep working in the future, since nginx may change, and the distro maintainers might alter the folder structure and permissions for the package in the distro's package manager.
Example:
Nginx might verify the permissions and owners of the full /etc/nginx tree; if your folders/files don't match, it could emit a warning or even crash. If nginx was installed by a package manager, a foreign directory might also cause issues when removing the package, e.g. if the package manager removes only a known list of folders and afterwards deletes the parent /etc/nginx with rmdir or similar. These are situations you don't want to have to debug when you can use the allowed folders, symlinks, or your own folders that aren't bound to an application's behavior except the one you define.
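In concrete terms, a sketch of the custom-directory approach from the question (treat the paths as the asker's hypothetical layout):

sudo mkdir /etc/nginx/some_directory
sudo mv /var/somewhere/custom.conf /etc/nginx/some_directory/
# inside the relevant server block, include one file or the whole directory:
#     include /etc/nginx/some_directory/custom.conf;
#     include /etc/nginx/some_directory/*.conf;
sudo nginx -t && sudo nginx -s reload   # validate the config before reloading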

Laravel Homestead set up but does not sync hidden files

Hi, I've set up Homestead correctly, but when I SSH into my Vagrant instance I can see all my files, just not the hidden ones (.env, .git, .gitignore are missing). I'm trying to run webpack-dev-server within my instance and it needs my .env file to run. Is it normal that hidden files are not synced?
My .yaml file:
Hidden files don't show when using the ls command (which I guess is what you're doing). They are still there, however. You can try something like nano .env and you'll see that you can edit your .env from the console :)
Another command you could use is ls -a, which should show all files regardless of whether they're hidden or not.
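For example, inside the Vagrant box:

ls      # hidden files such as .env and .gitignore are omitted
ls -a   # -a lists all entries, including dotfiles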

Exec with PHP-FPM on nginx (under chroot) returns nothing

I've created an nginx server in a chroot at /srv/http with php-fpm. Both services use the http user and it works fine. The problem comes when I try to run an exec command such as
echo shell_exec('/usr/bin/ls');
There is no output at all on the web page or in the error logs. I've also tried
error_log(shell_exec('/usr/bin/ls'));
and still nothing.
Things I've Tried or Know:
safe mode off
exec enabled
user is http (using phpinfo())
display_errors = on
error_reporting = E_ALL
sudo /usr/bin/chroot --userspec=http:http /srv/http ls works fine
Can create a file and read from it using file_put_contents and fopen/fread
tried shell_exec, exec, system, and passthru - nothing worked
tried appending 2>&1 to the end of the command and nothing
I've copied all the executables and libraries necessary over
all libraries, binaries, and everything under /srv/http/www (where the webpages are) have executable and read permissions
doc_root is www
As far as I know, everything works in the chroot, except shell commands through php-fpm. Anyone have any idea where I went wrong and how to fix it?
This may sound stupid, but you must copy /bin/sh (not /bin/bash!) into your chroot.
For example see this question: How do I change the shell for php's exec()
If you chroot to some directory, then this directory becomes the root for all your PHP scripts. That means that if you execute /usr/bin/ls from within PHP, it will try to execute /srv/http/usr/bin/ls instead.
You can copy the executable to that directory - but be aware of the security implications. If you copy critical system executables into the chrooted directory you basically bypass the positive effects of chroot.
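As a rough sketch of what copying the executables and their libraries over involves (library paths vary by distro, so treat these as assumptions):

mkdir -p /srv/http/bin /srv/http/usr/bin
cp /bin/sh /srv/http/bin/            # shell_exec() invokes commands via /bin/sh
cp /usr/bin/ls /srv/http/usr/bin/
# copy the shared libraries each binary needs, as reported by ldd
ldd /bin/sh /usr/bin/ls | awk '/=> \// {print $3}' | sort -u |
    xargs -I{} cp --parents {} /srv/http/
# the dynamic loader has no "=>" in ldd's output, so copy it separately
cp --parents /lib64/ld-linux-x86-64.so.2 /srv/http/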
I get no output for
echo shell_exec('/usr/bin/ls');
either. Presumably because on my system ls doesn't actually live at /usr/bin/ls, while a bare ls gets resolved through the shell's PATH. Running:
echo shell_exec('ls');
outputs:
css demos favicon.ico images js path.php robots.txt routing.php test
which is the list of files in my root directory for the site.

Unix file permissions, WARNING: can't access

I'm trying to change the permissions of a few files that are used with a webpage I'm uploading to my site. I'm using the Unix command line to do it.
I've tried two commands:
chmod 755 index.html
chmod 644 index.html
But I get the message
chmod: WARNING: can't access index.html
after using these commands, and I have no idea why... initially I thought it might be because I had the file open in a couple of programs (text editor and web browser), but I've closed these down and I'm still getting the same problem... any idea why, and how I can set the permissions correctly so that the file will be viewable by anyone on the web, but only editable by me?
Cheers!
Here's a link that looks similar to your problem but it's on Solaris:
http://www.unix.com/solaris/45229-unable-chmod-file-directory.html
The solution is on page 2 of that thread, but the Cliffs Notes version is that the person found something else was mounted over that directory. It showed up when they ran
df -k /their_dir_location
Hope this helps.
Another possible issue: if you are using Solaris zones, the directory may be visible in more than one zone, but only one zone has write access.
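A quick way to check both possibilities before reaching for chmod again (a sketch; adjust the path to your file):

df -k .               # is another filesystem mounted over this directory?
ls -ld index.html     # who owns the file, and what is its current mode?
chmod 644 index.html  # rw for you, read-only for everyone else (web-viewable)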

Change the path of a symlinked directory in NGINX

Let's say that I have a directory /var/www/assets/, and within that directory I have a symlink which points to a folder containing all the latest asset files for my website.
/var/www/assets/assets -> /var/www/website/releases/xxxxxxx/public/assets
If I configure NGINX to serve asset files from /var/www/assets on the domain assetfilesdomain.com, with asset URLs prefixed with /asset/, then when that symlink is changed, the update is not reflected in NGINX. The way I see it, NGINX resolves the symlink's target for that asset folder when it starts.
Is there any way to get around this?
Reloading nginx (sending a HUP signal to the master process) seems to solve this issue, probably because it starts new workers and gracefully shuts down the old ones.
It seems like you're using Capistrano. You can override deploy:restart and put the nginx reload there.
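For example, a deploy step along these lines (the release path is copied from the question; the commands are a sketch):

# -n treats an existing symlink to a directory as a file, so the link itself is replaced
ln -sfn /var/www/website/releases/xxxxxxx/public/assets /var/www/assets/assets
sudo nginx -s reload   # equivalent to sending HUP to the master process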
