I'm trying the uWSGI + nginx + Django tutorial and got stuck here (the link points directly to the section where I got stuck):
https://uwsgi-docs.readthedocs.org/en/latest/tutorials/Django_and_nginx.html#if-that-doesn-t-work
nginx error.log says:
2015/03/09 13:44:51 [crit] 11642#0: *16 connect() to unix:///home/gaucan/temp/my_app/mysite.sock failed (13: Permission denied) while connecting to upstream, client: 127.0.0.1, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "uwsgi://unix:///home/gaucan/temp/my_app/mysite.sock:", host: "localhost:8000"
The tutorial says that to fix it, I should do the following:
You may also have to add your user to nginx’s group (which is probably www-data), or vice-versa, so that nginx can read and write to your socket properly.
But I'm a Linux noob and I don't know how to do this, or how to find out whether that group is www-data or not... I already made a mess earlier by changing the owner of some folders to the user "gaucan".
I also skipped this step in the tutorial:
You will need the uwsgi_params file, which is available in the nginx directory of the uWSGI distribution, or from https://github.com/nginx/nginx/blob/master/conf/uwsgi_params
since I don't know which directory is the nginx directory of the uWSGI distribution...
By the way, I'm on Fedora, if that's any help...
To find the permissions of a file (drop the unix:// prefix from the log; ls only understands plain paths):
ls -al /home/gaucan/temp/my_app/mysite.sock
That will print some columns of information, the first of which will be the unix permissions:
-rw-rw-r-- 1 user group 1234 Mar 9 2015 name of file
The first dash is special; just ignore it. The next three characters are the read, write, and execute permissions of the user who owns the file, the next three are for the group that owns the file, and the last three apply to everyone else.
The nginx process needs to be able to write to that socket file.
You don't have to add your user to the same group as nginx, but you do need to allow proper permissions on the socket. Those instructions as written don't make 100% sense to me.
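For example, a minimal sketch of that idea (the socket path comes from the question; the group name "nginx" is an assumption, it is often www-data on Debian/Ubuntu; also note uWSGI recreates the socket on every start, so a longer-term fix is uWSGI's own chmod-socket/chown-socket options):
# See which user and group the nginx workers run as
ps -o user,group -C nginx
# Give that group read/write access to the socket
sudo chgrp nginx /home/gaucan/temp/my_app/mysite.sock
sudo chmod g+rw /home/gaucan/temp/my_app/mysite.sock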
-- instructions to add your user to a group anyway --
To find the user of a process:
ps aux | grep nginx
The first output column shows the user that each nginx process is running as.
To find out what groups you belong to:
groups
That will print a space separated list of unix groups that you belong to.
To set a brand new list of groups that you belong to:
sudo usermod -G group1,group2 username
Note that the groups are comma-separated and this will replace your existing supplementary groups, so you need to re-type every existing group you want to keep into that command, separated by commas.
Alternatively, use the --append flag of usermod:
sudo usermod --append -G www-data username
You must completely log out and log back in for the new groups to take effect. (There may be a shortcut to reload your groups, but I'm not aware of one.)
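Putting those together with the names from the question (the group name "nginx" is an assumption; on Debian-based systems it is usually www-data):
sudo usermod --append -G nginx gaucan
groups gaucan   # check that the membership was recorded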
I'm not sure exactly what I did wrong, but after spending a whole day on it I retried the tutorial and this time wrote all the configuration directly into /etc/nginx/nginx.conf, and then it worked. So the problematic steps in the tutorial were creating a separate mysite_nginx.conf and then symlinking it into sites-enabled, which doesn't even exist in this newer version of nginx...
I also placed the socket at /tmp/sock.sock; maybe that helped too.
I solved the "Permission denied" problem by changing the socket location to "/tmp/sock.sock". My socket was previously somewhere under /root and I kept getting "Permission denied" even after I did "chmod 777".
I have a VM instance on Oracle Cloud (Ubuntu 22.04) set up with ZeroTier to act as a web server for some services that should work with my local Synology NAS.
For some of those services I also need to mount three SMB shares from my NAS over the ZeroTier tunnel, but I can't make it work.
I've used mount and mount.cifs plenty of times, including with automounting, but this time it's acting very strangely:
running the mount command seems to succeed from the console, but /var/log/syslog reads
CIFS: VFS: \\XXX.XXX.XXX.XXX has not responded in 180 seconds.
Reconnecting...
If trying to access one of the shares (with ls, lsof, cd, or any other command), it succeeds for only one of the shares (always the same one), but only the first time any command is given:
$ ls /temp
folder1 folder2 folder3
Any other following command just "hangs" as if the system is working on something, but it stays like that indefinitely most of the time:
$ ls /temp
█
Just a few times it spits out this error
lsof: WARNING: can't stat() cifs file system /temp
Output information may be incomplete.
ls 1475 ubuntu 3r DIR 0,44 0 123207681 /temp
findmnt reads:
└─/temp //XXX.XXX.XXX.XXX/Downloads cifs rw,relatime,vers=2.0,cache=strict, username=[redacted],uid=1005,noforceuid,gid=0,noforcegid,addr=XXX.XXX.XXX.XXX,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=65536,wsize=65536,bsize=1048576,echo_interval=60,actimeo=1
For the remaining two "mounted" shares, neither responds to any command, not even the very first one; they just hang like the one share that at least lets me browse once;
umount and umount -l take at least 2-3 minutes to successfully unmount the shares.
Same behavior when using smbclient and also with NFS shares from the same NAS.
What I have already tried:
update kernel and all packages;
remove, purge and reinstall cifs-utils, smbclient and so on...
tried mounting the same shares on another client/node within the ZeroTier network and it works just fine; also, browsing from Windows and Android file manager apps, with and without ZeroTier, works flawlessly;
tried all SMB versions including SMBv3 and SMBv1 (CIFS);
tried different browsing or mounting methods / commands including mount, mount.cifs, autofs, smbclient;
tried to debug what happens behind the console, but didn't find anything that seems related to this in the logs, htop, or anywhere else. During the "hanging" sessions there is no spike in CPU, RAM, or network usage on either the Oracle VM or the Synology NAS;
checked, reset, and reconfigured all permissions on my NAS for shares, folders, and files recursively, and reconfigured user and group permissions.
What I haven't tried yet (I'll try as soon as possible):
reproduce this on another Oracle VM configured the same as the faulty one and another with a different base image (maybe Oracle Linux?);
It seems to me that mount.cifs doesn't really succeed in mounting the shares correctly, as nothing reports it as such anywhere. It also seems like an issue not related to folder/file permissions, but rather something related to networking.
A note on something that may or may not be related: ZeroTier on my Synology NAS does not seem to work with IPv4 only; the node stays OFFLINE and goes ONLINE only when IPv6 is enabled. I must say this is the only node in my ZT network that shows an IPv6 address as its public IP in the ZT web GUI; the other nodes show public IPv4 addresses.
If anyone has any clue about this, I'll be happy to follow up and try out any advice. Thank you!
I'm using Tailscale, but I presume it will work the same.
You need to add port 445 to /etc/iptables/rules.v4, just under the SSH rule, like below:
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 445 -j ACCEPT (like this)
Then you need to edit the interfaces in /etc/samba/smb.conf to:
interfaces = lo tailscale0 100.0.0.0/24
Obviously, my interface is tailscale0, but yours will be different; use ip link show to find yours. You may also need to change the IP range to suit ZeroTier's, such as 100.0.0.0/24, which is what Tailscale uses.
Then reboot!
I couldn't get it working without doing this.
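If it helps, here is a rough way I'd check things after the reboot (the tunnel IP 100.0.0.10 and the username are placeholders):
# Confirm the firewall rule is loaded and port 445 answers over the tunnel
sudo iptables -L INPUT -n | grep 445
nc -vz 100.0.0.10 445
# List the shares the server exposes
smbclient -L //100.0.0.10 -U yourusername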
I have NGINX working as a cache engine and can confirm that pages are being cached and served from the cache, but the error logs are filling up with this error:
2018/01/19 15:47:19 [crit] 107040#107040: *26 chmod()
"/etc/nginx/cache/nginx3/c0/1d/61/ddd044c02503927401358a6d72611dc0.0000000007"
failed (1: Operation not permitted) while reading upstream, client:
xx.xx.xx.xx, server: *.---.com, request: "GET /support/applications/
HTTP/1.1", upstream: "http://xx.xx.xx.xx:80/support/applications/",
host: "---.com"
I'm not really sure what the source of this error could be, since NGINX is working. Can these errors be safely ignored?
It looks like you are using nginx proxy caching, but nginx isn't able to manipulate the files in its cache directory. You will need to get the ownership and permissions right on the cache directory.
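A minimal sketch of what that usually looks like (the cache path comes from the error message; the worker user "nginx" is an assumption, it is often www-data on Debian/Ubuntu, so check ps first):
# Find the user the nginx worker processes run as
ps -eo user,comm | grep '[n]ginx'
# Give that user ownership of the cache tree and sane permissions
sudo chown -R nginx:nginx /etc/nginx/cache
sudo chmod -R u+rwX /etc/nginx/cache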
Not explained in the original question is that the mounted storage is an Azure file share, so in fstab I had to include the uid= and gid= of the desired owner. That removed the need for chown, and chmod also became unnecessary. It got rid of the chmod() error but introduced another.
Then I was getting rename() errors for lack of permission. At that point I scrapped what I was doing, moved to a different type of Azure storage (specifically a disk attached to the VM), and all of these problems went away.
So I'm offering this as an answer, but realistically the problem was not solved.
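For reference, the uid=/gid= mount options mentioned above translate to something like this on the command line (storage account, share name, credentials file, and the "nginx" user are placeholders, not my real values; the same options go in the fstab entry):
sudo mount -t cifs //mystorageacct.file.core.windows.net/nginxcache /etc/nginx/cache \
  -o credentials=/etc/smbcredentials/mystorageacct.cred,vers=3.0,uid=nginx,gid=nginx,dir_mode=0700,file_mode=0600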
We noticed the same problem. Following the guide from Microsoft at https://learn.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv#create-a-storage-class seems to have fixed it.
In our case the nginx process was using a different user for the worker threads, so we needed to find that user's uid and gid and use those in the StorageClass definition.
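For anyone doing the same, once ps tells you which user the workers run as, the numeric values to put in the StorageClass mountOptions (uid=..., gid=...) are just (the user name "nginx" here is an example):
id -u nginx
id -g nginx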
I assume I simply have to insert an entry into an nginx.conf file to resolve the error that is plaguing me (see below), but so far I haven’t had any luck figuring out the syntax. Any help would be appreciated.
I want to run nginx as a regular user, after having installed it with Homebrew as a user with administrative privileges. nginx is trying to write to the error.log file at /usr/local/var/log/nginx/error.log, which it cannot do because my regular user lacks write permission there.
Another wrinkle comes from the fact that there are two nginx.conf files, a global one and a local one, and as far as I can tell both are being read. They are in the default Homebrew location /usr/local/etc/nginx/nginx.conf and in my local project directory $BASE_DIR/nginx.conf.
Here is the error that is generated as nginx attempts to start up:
[WARN] No ENV file found
10:08:18 PM web.1 | DOCUMENT_ROOT changed to 'public/'
10:08:18 PM web.1 | Using Nginx server-level configuration include 'nginx.conf'
10:08:18 PM web.1 | 4 processes at 128MB memory limit.
10:08:18 PM web.1 | Starting php-fpm...
10:08:20 PM web.1 | Starting nginx...
10:08:20 PM web.1 | Application ready for connections on port 5000.
10:08:20 PM web.1 | nginx: [alert] could not open error log file: open() "/usr/local/var/log/nginx/error.log" failed (13: Permission denied)
10:08:20 PM web.1 | 2017/03/04 22:08:20 [emerg] 19557#0: "http" directive is duplicate in /usr/local/etc/nginx/nginx.conf:17
10:08:20 PM web.1 | Process exited unexpectedly: nginx
10:08:20 PM web.1 | Going down, terminating child processes...
[DONE] Killing all processes with signal null
10:08:20 PM web.1 Exited with exit code 1
Any help figuring out how to get nginx up and running, so I can get back to the development side of this project, will be much appreciated.
The whole problem stems from trying to run nginx as my ordinary user even though nginx was installed by that same user using administrative privileges. I was able to resolve both of the errors shown here with the following commands, executed as a user with administrative privileges:
sudo chmod a+w /usr/local/var/log/nginx/*.log
sudo chmod a+w /usr/local/var
sudo chmod a+w /usr/local/var/run
Note that the /usr/local/var directory appears to have been created by Homebrew when it installed nginx, and since this machine is my laptop I can't see any reason not to open it up. You might have greater security concerns in other scenarios.
I admit that when I wrote this question I thought it was about moving the error.log file to another directory. Now I see that that is not a full solution, so instead the solution I present here is about giving ordinary users write privileges in the necessary directories.
The reason I changed my mind is that nginx can (and in this case does) generate errors before (or while) reading the nginx.conf files and needs to be able to report those errors to a log file. Modifying the nginx.conf file was never going to solve my problem. What woke me up to this issue was reading this post: How to turn off or specify the nginx error log location?
Did you try setting the log path to a custom location by editing nginx.conf?
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
Change the path to somewhere your user has write privileges.
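A concrete sketch of that suggestion (paths are placeholders, and note the caveat above: nginx may still try to open its compiled-in log path before it has read the config):
# Create a log directory your own user can write to
mkdir -p "$HOME/nginx/logs"
# then point the directives there in nginx.conf:
#   error_log  /Users/you/nginx/logs/error.log;
#   access_log /Users/you/nginx/logs/access.log;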
I'm currently setting up an nginx reverse proxy that load-balances a wide variety of domain names.
The nginx configuration files are programmatically generated and might change very often (i.e. adding or deleting http/https servers).
I am using:
nginx -s reload
To tell nginx to re-read the configuration.
The main nginx.conf file contains an include of all the generated configuration files, like so:
http {
    include /volumes/config/*/domain.conf;
}
An included configuration file might look like this:
server {
    listen 80;
    listen [::]:80;
    server_name mydomain.com;
    location / {
        try_files $uri /404.html /404.htm =404;
        root /volumes/sites/mydomain;
    }
}
My question:
Is it healthy or considered harmful to run:
nginx -s reload
multiple times per minute to tell nginx to take the configuration changes into account?
What kind of performance hit would that imply?
EDIT: I'd like to reformulate the question: how can we change the nginx configuration dynamically and very often without a big performance hit?
I would use inotifywatch with a timeout on the directory containing the generated conf files, and reload nginx only if something was modified, created, or deleted in that directory during that time:
-t <seconds>, --timeout <seconds>
    Listen only for the specified amount of seconds. If not specified, inotifywatch will gather statistics until receiving an interrupt signal by (for example) pressing CONTROL-C at the console.
while true; do
    if [[ "$(inotifywatch -e modify,create,delete -t 30 /volumes/config/ 2>&1)" =~ filename ]]; then
        service nginx reload
    fi
done
This way you set up a minimum timer after which the reloads will take place and you don't lose any watches between calls to inotifywait.
If you:
1. Use a script similar to what's provided in this answer; let's call it check_nginx_confs.sh
2. Change your ExecStart directive in nginx.service so /etc/nginx/ is /dev/shm/nginx/ (see the sketch after this list)
3. Add a script to /etc/init.d/ to copy the conf files to your temp dir:
   mkdir /dev/shm/nginx && cp /etc/nginx/* /dev/shm/nginx
4. Use rsync (or another sync tool) to sync /dev/shm/nginx back to /etc/nginx, so you don't lose config files created in /dev/shm/nginx on reboot. Or simply make both locations in-app, for atomic checks as desired.
5. Set a cron job to run check_nginx_confs.sh as often as files "turn old" in check_nginx_confs.sh, so you know whether a change happened within the last time window but only check once.
6. Only systemctl reload nginx if check_nginx_confs.sh finds a new file, once per time period defined by $OLDTIME.
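A sketch of step 2 (the nginx binary path and config file name are assumptions; adjust them to whatever your existing unit file uses):
# Open an editable copy of the unit file
sudo systemctl edit --full nginx
# change the ExecStart line to point at the RAM copy of the config, e.g.:
#   ExecStart=/usr/sbin/nginx -c /dev/shm/nginx/nginx.conf
# then pick up the change
sudo systemctl daemon-reload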
The rest:
Now nginx will load those configs much, much faster, from RAM. It will only reload once every $OLDTIME seconds, and only if it needs to. Short of routing requests to a dynamic handler of your own, this is probably the fastest way to get nginx to reload frequently.
It's a good idea to reserve a certain disk quota for the temp directory you use, to ensure you don't run out of memory; there are various ways of accomplishing that. You can also add a symlink to an empty on-disk directory in case you have to spill over, but that would be a lot of confs.
Script from other answer:
#!/bin/sh
# Directory to check
TESTDIR=/dev/shm/nginx
# How many seconds before the dir is deemed "old"
# (75 adds a little grace period on top of a 60-second cron interval; optional)
OLDTIME=75
# Get the current time and the directory's modification time
CURTIME=$(date +%s)
FILETIME=$(date -r $TESTDIR +%s)
TIMEDIFF=$(expr $CURTIME - $FILETIME)
# Reload only if the dir was updated within the last $OLDTIME seconds
if [ $OLDTIME -gt $TIMEDIFF ]; then
    systemctl reload nginx
fi
# Run me every 1 minute with cron
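To run it every minute as that last comment says, a crontab entry along these lines should do (the script path is an assumption):
sudo crontab -e
# and add:
#   * * * * * /usr/local/bin/check_nginx_confs.sh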
Optionally, if you're feeling up to it, you can put the copy and sync commands in nginx.service's ExecStart with some && magic so they always happen together. You can also && in a sort of 'destructor function' that does a final sync and frees /dev/shm/nginx on ExecStop. This would replace steps (3) and (4).
As an alternative to cron, you can have a script running a loop in the background with a wait duration. If you do this, you can pass LastUpdateTime back and forth between the two scripts for greater accuracy, as LastUpdateTime + GracePeriod is more reliable. Even then, I would still use cron to periodically make sure the loop is still running.
For reference, on my CentOS 7 images, nginx.service is at
/usr/lib/systemd/system/nginx.service
Rather than reloading nginx several times a minute, I would suggest watching the config file and executing the reload only when changes are saved; you can use inotifywait (available through the inotify-tools package) with the following command:
while inotifywait -e close_write /etc/nginx/sites-enabled/default; do service nginx reload; done
I'm using a named pipe as the access_log file for my nginx, and I want to know what happens internally in nginx when I delete and recreate the pipe. What I've noticed is that nginx keeps working but stops logging.
Even if I don't recreate the pipe, nginx doesn't try to create a regular file for logging.
I don't want to lose my logs, but apparently the only option is to restart nginx. Can I force nginx to check for the log file again?
The error log only says this, whether the pipe doesn't exist or has been recreated:
2012/02/27 22:45:13 [alert] 24537#0: *1097 write() to "/tmp/access.log.fifo" failed (32: Broken pipe) while logging request, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8000/", host: "localhost:8002"
Thanks.
AFAIK, you need to send nginx a USR1 signal to instruct it to reopen its log files. Basically nginx will keep trying to write to the file descriptor of the old file (that's why you are seeing the Broken pipe error). More info here:
http://wiki.nginx.org/LogRotation (also click through the other links at the bottom of that page).
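For example (a sketch; the pid file location is the common default and may differ on your build):
# Recreate the FIFO, then tell the running nginx master to reopen its logs
mkfifo /tmp/access.log.fifo
sudo kill -USR1 "$(cat /var/run/nginx.pid)"
# equivalently:
sudo nginx -s reopen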
hth