I edited my /etc/passwd and /etc/group files without backups. I found the user www-data, thought it was useless, and deleted it from both files.
When I tried to start my Nginx service, it failed, and the log shows nginx using the www-data user to spawn its workers!
Was the user created by Nginx when I installed it, or has it been there since the big bang (I mean, is it a built-in user)?
Should I recreate the user and give it some permissions (although I have no idea what permissions it needs), or just reinstall Nginx?
Thank you!
What OS are you using? On most Linux distributions the www-data user is available by default with UID/GID 33. In /etc/passwd on Ubuntu and Debian you should have:
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
and in /etc/group:
www-data:x:33:
Adding these two entries should be enough.
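If you would rather recreate the account with the usual tools instead of editing the files by hand, something along these lines should work (a sketch, run as root, assuming a Debian/Ubuntu system where UID/GID 33 is still free):
# recreate the group and the system user with the standard Debian/Ubuntu values
groupadd --system --gid 33 www-data
useradd --system --uid 33 --gid 33 --home-dir /var/www --no-create-home --shell /usr/sbin/nologin www-data
No special permissions are normally needed beyond read access to your web root; the nginx master runs as root and only drops the workers to www-data.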
But of course you can run nginx as another user; just edit the nginx.conf file. This is sometimes useful when you run nginx for just one website.
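If you do that, the relevant directive sits in the main context of nginx.conf (the account name below is just a placeholder):
# /etc/nginx/nginx.conf - run the worker processes as a different, existing account
user someuser;
Whatever account you pick has to exist and needs read access to the sites you serve.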
I'm working through debugging backup/restore for a backup script, between:
macStudio M1 / macOS Monterey <-> Synology DS920+
On the Mac, I've installed Homebrew rsync 3.2.4.
On the Synology, I'm running what it shipped with: rsync 3.1.2.
For debugging, I used /Volumes/Recovery, which has files with owner set to root and group set to wheel.
src="/Volumes/Recovery/"
dest="$userID#$remoteIP::NetBackup/MacStudio1/Volumes/Recovery/
restore="/tmp/RestoreBackup/"
userID has admin privileges on the NAS.
rsync services are enabled on the NAS.
user directories are enabled on the NAS.
Backup:
rsync -ahX --delete -M--fake-super $src $dest
Restore:
rsync -ahX --delete -M--fake-super $dest $restore
It all seems to work without error. Files are restored as expected, except that I'm seeing the files have their owner set to my ID.
For example, ls -laR shows (abridged):
/Volumes/Recovery/E4A28DF2-7007-4ED8-A427-320FCCA8AC36/usr/standalone/firmware:
-rw-rw-rw- 1 root wheel 1821914899 Jun 4 11:42 arm64eBaseSystem.dmg
/tmp/RestoreBackup//E4A28DF2-7007-4ED8-A427-320FCCA8AC36/usr/standalone/firmware:
-rw-rw-rw- 1 myID wheel 1821914899 Jun 4 11:42 arm64eBaseSystem.dmg
I've looked at the rsync man page (more than once) and I see wording like "To affect the remote side of a remote-shell connection...".
However, I'm not sure how to apply that to a backup or a restore.
Do I want to affect the remote side on the backup?
Do I want to affect the remote side on the restore?
Any guidance on what I should have set the options to?
So it looks like I'm not getting any responses. I guess I'll wrap this up with my observations.
In the testing I've done on a user directory (with test data files), rsync is working to save and restore files with extended attributes (I verified they got set and that they matched on restore). So I think the overall switches on the rsync commands are correct.
Backing up and restoring the "Recovery" volume has the following issues:
(1) All regular files have the wrong owner. The groups look correct.
(2) The one linked directory has the wrong owner and the wrong group.
I believe problem (1) is caused by my needing to use sudo rsync on the restore. I'm guessing that the backed-up files have the correct owner/group in the metadata, but the restore doesn't have the authority to set the owner to 'root'. I tried using sudo briefly and it died with some errors I didn't quite understand; I believe I need to set up the /etc/sudoers file with some information. Problem (2) may partially go away if I fix (1), or it may need some additional rsync flags to do with linked files and directories.
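In other words, the restore would probably need to look something like this (same variables as above; untested):
# run the local, receiving rsync as root so it has the authority to chown
# restored files back to root:wheel; -M--fake-super still applies --fake-super
# on the NAS side, which reads the ownership stored there in xattrs
sudo rsync -ahX --delete -M--fake-super $dest $restore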
Overall, my backup script is working, but I'm now starting to question whether I know enough about what to back up on macOS. A rather lengthy article by the CCC folks seems to explain this, but it leaves me feeling I don't know enough about macOS data structures, and it seems some of this may change over time when new versions are released. I had started with the idea of just backing up everything under /* (Macintosh HD), and perhaps this would work, though there are at least some things that need to be excluded (like /Volumes/* and perhaps /tmp/*). I also noticed that there is a /System tree that doesn't show up with ls /* that the CCC folks say to leave alone. So I don't exactly have a good feeling that I understand what I need to know.
So for the moment I'm going to sideline this effort. I've got Time Machine running to my NAS, and I need to get the NAS backed up to a cloud first. My fallback positions are either (1) to depend on Time Machine only, (2) to buy and use CCC as a secondary backup, or (3) to create a backup of just my user directories as a secondary backup - which will require reinstalling any 3rd party software in the event that I can't recover with Time Machine.
Sorry if this is answered elsewhere, or requires a trick.
I have installed OpenCPU on an Ubuntu xenial 16.04 instance. I'd like to lengthen the timelimit.post value by editing the /etc/opencpu/server.conf file, as instructed. Trouble is, I can't find it.
ubuntu@ip-x-x-x-x:/usr/lib/opencpu$ ls -a
. .. library rapache scripts
Please check again whether /etc/opencpu/server.conf really isn't in the expected directory - in the output of ls -la /etc/opencpu/, the server.conf file is listed as present. Note, though, that the owner is root, so take that into account when you try to open and edit the file.
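For example, something along these lines should confirm that the file is there and let you edit it (the editor is just an example):
# list the config directory as root, since the file is owned by root
sudo ls -la /etc/opencpu/
# then open it with elevated privileges
sudo nano /etc/opencpu/server.conf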
I am writing a very basic RPM that does nothing more than drop a simple GUI onto a system. It requires nginx, drops some code into its html directory, and drops a conf file into its conf.d directory. Most of the time this will likely be run on a VM or a fresh box with little else installed.
While testing my RPM I noticed that the nginx it installs fails out of the box. The problem is that its default.conf uses an IPv6 address instead of IPv4, and the machine does not have an IPv6 address set; I guarantee none of the machines this code is installed on will ever have IPv6 set.
The fix is very simple, but my question is about good protocol. I'm guessing it would usually be considered wrong to have my RPM modify nginx's default.conf to fix the line causing the failure, but at the same time, if I don't, my RPM will not function out of the box without someone manually tweaking the configuration files. How 'wrong' is it to overwrite the default files if I'm mostly confident that I'll be installed on machines that don't have IPv6 addresses?
I'd check if you can drop something in conf.d to override the bad settings.
Otherwise...
Your %post can modify it with something like sed. Then leave a flag indicating you did, so your %postun can try to clean up afterwards - roughly as sketched below.
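A rough sketch of that idea (the path, the sed pattern, and the #MYPKG# marker are all assumptions; adjust them to the default.conf your nginx package actually ships):
%post
# comment out the IPv6-only listen line, tagging it so %postun can undo it
sed -i 's|^\(\s*listen\s*\[::\]:80.*\)|#MYPKG# \1|' /etc/nginx/conf.d/default.conf || :

%postun
# on a full uninstall (not an upgrade), restore any line this package commented out
if [ "$1" -eq 0 ]; then
    sed -i 's|^#MYPKG# ||' /etc/nginx/conf.d/default.conf || :
fi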
I have created a folder for the default server at /var/www/default and everything works as expected.
Inside that folder I made a symlink to ~/WebstormProjects/my-project, using the common ln -s.
It worked for a while, but since the last time I updated using apt-get, nginx no longer follows the symbolic link: I get a 404 error, and it doesn't even list the symlinks as it used to.
I tried using the disable_symlinks directive, setting it to off, and nothing happened. I also followed the steps in this link, and still nothing. I also added myself to the www-data group; nothing.
But if I edit nginx.conf, changing the user directive to my own user, and restart, the server does work. I know that's very bad practice, though, and some day in the future it will not allow PHP-FPM to work.
So, what can I do to make nginx follow symlinks without changing the owner of my source directories? BTW, I'm using Ubuntu 14.04.3 and nginx 1.4.6, installed via the package manager.
It was just a problem with permissions:
chmod 755 /home
chmod 755 /home/user
Got previous commands from this answer.
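For anyone hitting the same thing: the nginx workers run as www-data, so every directory leading to the symlink target needs the execute (search) bit for that user, which is what the chmod above grants. A couple of quick checks (paths as in the question):
# show the owner and mode of every component along the path
namei -om ~/WebstormProjects/my-project
# functional test: can the nginx user actually read the project?
sudo -u www-data ls ~/WebstormProjects/my-project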
SOLVED: See my answer below
I'm experiencing the same issue that Austin Hyde experienced in this question. I have an SQLite database that I can read, but not write.
Specifically, I'm getting General error: 8 attempt to write a readonly database in /var/www/html/green/database.php on line 34
My issue diverges from his as follows:
- As recommended in the answers to his question, I've made the database world-writable, as well as the folder in which the database resides, with no luck. I've also set the owner of the database to "apache" as well as "nobody", without success.
- I've set the entire path to 777, beginning at /var (which I hate to do); no joy.
- I've messed about with SELinux (I'm running Fedora 12) to let httpd do whatever it wants; nothing.
I feel that I'm almost certainly missing something simple here, but I'm out of ideas.
What permissions need to be on an SQLite file in order to allow PHP / Apache to read and write to it via PDO?
Edit: Another related question, adding weight to the hypothesis that I've got a write permissions conflict somewhere.
For those who cannot afford to disable SELinux entirely, here is the way to go.
To make a directory (say rw_data) and all its contents writable by any process running in the httpd_t domain type, i.e. web-server processes, use the following command as root:
chcon -R -t httpd_sys_content_rw_t "/var/www/html/mysite/rw_data/"
You can check the SELinux context labels with the following command:
ls -Z /var/www/html/mysite | grep httpd_sys_content_rw_t
This works on Fedora 16 and should work on other SELinux-enabled distros too.
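Note that chcon relabels the files directly, and the change can be lost on a filesystem relabel. To make it persistent, the usual approach is to record a file-context rule and then relabel (a sketch, assuming the semanage tool from policycoreutils is installed):
# record a persistent rule for the directory and everything under it
semanage fcontext -a -t httpd_sys_content_rw_t "/var/www/html/mysite/rw_data(/.*)?"
# apply the recorded contexts
restorecon -Rv /var/www/html/mysite/rw_data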