Linux Alpine: restart nginx

I have Alpine Linux 3.15.0 on the server, with nginx 1.21.6 installed. I have run apk update.
The nginx -t command responds successfully with
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
When I type nginx -s reload, the server responds with
2023/02/03 10:58:00 [notice] 54#54: signal process started
but nothing actually happens. It's like the process started and that's all.
What am I missing?

According to the nginx documentation, the nginx -s reload command actually sends a signal to the nginx master process, and
once the master process receives the signal to reload configuration,
it checks the syntax validity of the new configuration file and tries
to apply the configuration provided in it. If this is a success, the
master process starts new worker processes and sends messages to old
worker processes, requesting them to shut down.
Thus, we can consider that nginx is restarted (if we disregard the fact that the master process itself continues to run).
At the same time, if you want to fully restart nginx, you can stop it with the nginx -s quit command and then start it again. Better yet, use your system's service manager. If I'm not mistaken, Alpine uses OpenRC, so the command would be rc-service nginx restart.
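A quick sketch of both options, assuming nginx was installed via apk with its bundled OpenRC service script:
# graceful in-place reload: the master keeps running and replaces its workers
nginx -s reload
# full restart via OpenRC (the service-manager route)
rc-service nginx restart
# or the manual equivalent; note that quit returns immediately, so the old
# master may still be shutting down when the new one tries to bind its ports
nginx -s quit && nginx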

Related

Using systemd to start vpnc results in 'unknown host'?

I'm trying to connect to vpnc using a systemd service file. The service file runs a script, myscript.sh, which, among other things, runs:
sudo vpnc myhost
Upon booting the device, the other commands execute correctly, but the VPN does not connect, and vpnc gives me the error message:
vpnc: unknown host `myhost.com'
However, if I run the service file manually using
systemctl start myservice.service
then the vpn is successfully started.
My service file looks like this:
[Unit]
Description=VPN Start
Wants=network-online.target
After=network.target network-online.target
[Service]
Environment=DISPLAY=:0.0
Environment=XAUTHORITY=/home/pi/.Xauthority
Type=forking
ExecStart=/bin/bash /home/pi/myscript.sh
Restart=on-abort
User=pi
Group=pi
[Install]
WantedBy=multi-user.target
systemctl status myservice.service
includes this message:
pi: TTY=unknown ; PWD=/home/pi ; USER=root ; COMMAND=/usr/sbin/vpnc myhost
I have already done:
systemctl enable systemd-networkd-wait-online
and that hasn't appeared to help.
It might be too late, but maybe someone else will stumble upon this.
I had the same issue, although I had configured the VPN through the GUI.
I eventually found out that my /etc/resolv.conf was a symlink to a configuration file installed by some third-party VPN software I sometimes use. Although the entries in that file looked fine, it only worked once I deleted the /etc/resolv.conf symlink and created a new regular file.
TL;DR:
Back up /etc/resolv.conf
Remove the symlink
Create new /etc/resolv.conf and populate it with your preferred configuration
After that my VPN worked like a charm.
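A minimal sketch of those steps; the nameserver values are placeholders, so substitute your preferred resolvers:
# back up the current file (cp follows the symlink, preserving the contents)
sudo cp /etc/resolv.conf /etc/resolv.conf.bak
# remove the symlink and create a plain file in its place
sudo rm /etc/resolv.conf
printf 'nameserver 1.1.1.1\nnameserver 9.9.9.9\n' | sudo tee /etc/resolv.conf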

Homebrew Nginx not running but says it is in brew services

I have OS X El Capitan. I installed Nginx-Full via Homebrew. I am supposed to be able to start and stop services with
brew services Nginx-Full Start
I run that command and it seems to start no problem. I check the running services with
brew services list
That indicates that the Nginx-Full service is running. When I run
htop
to look at everything that is running Nginx does not show up and the server is not handling requests.
nginx is failing to launch because of an error, but brew-services is not communicating that to you.
Running it with sudo, as other users have suggested, is just masking the problem. If you just run nginx directly, you may see that there is actually a configuration or permissions issue that is causing nginx to abort. In my case, it was because it couldn't write to the error log:
nginx: [alert] could not open error log file: open() "/usr/local/var/log/nginx/error.log" failed (13: Permission denied)
2020/04/02 13:11:53 [warn] 19989#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/etc/nginx/nginx.conf:2
2020/04/02 13:11:53 [emerg] 19989#0: open() "/usr/local/var/log/nginx/error.log" failed (13: Permission denied)
The last error is causing nginx to fail to launch. You can make yourself the owner of the logs with:
sudo chown -R $(whoami) /usr/local/var/log/nginx/
This should cause subsequent config errors to be written to the error log, even if homebrew services is not reporting them in stderr/stdout for now.
I've opened an issue about this: https://github.com/Homebrew/homebrew-services/issues/215
The log path may not be the same for everyone. You can check the path to the log file in the config file /usr/local/etc/nginx/nginx.conf, where you will find a line like:
error_log /Users/myusername/somepath/nginx.log;
Change the chown command above accordingly. If even this doesn't solve the problem, you may have to do the same for any other log files specified in the server blocks of your nginx configuration.
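A quick way to find every log path referenced in your configuration, assuming the default Homebrew (Intel) prefix used above:
# list every error_log/access_log directive so you can chown each path
grep -RE '(error_log|access_log)' /usr/local/etc/nginx/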
Try launching it with "sudo", even though the formula says:
The default port has been set in /usr/local/etc/nginx/nginx.conf to 8080 so that
nginx can run without sudo.
sudo brew services start nginx-full
This worked for me:
sudo brew services start nginx
Running sudo nginx worked for me. It initially gave an error stating that a certain file in a certain directory was missing; after creating that file, and then another file it asked for, nginx ran properly.
I had a similar problem: brew services start nginx would show nginx as running,
but brew services list would show an error.
Running sudo nginx solved my issue.

Ensure nginx master process stays running

I am currently trying to set up a Docker container using ubuntu:14.04 as my base image, with nginx and gunicorn/django/celery running inside. I am using supervisor to start all of the processes, and have tested to make sure gunicorn is relaunched when it goes down. However, I can't figure out how to do the same with nginx.
My supervisord.conf for nginx is as follows:
[program:nginx]
command=nginx
autorestart=false
I have autorestart set to false because, from what I can tell, the nginx command simply starts the master process and worker processes, and then exits with status code 0. If I have autorestart set to true, it simply keeps trying to restart that nginx command, which will fail for subsequent retries because the master/worker processes are already running and bound to the port.
On the surface, this seems okay, because if I try to kill a worker process, the master will start another worker to take its place. But how do I ensure the master process stays running as well?
You need to append daemon off; to your nginx.conf configuration, instructing nginx to run in the foreground.
Then modify your supervisor stanza to be:
[program:nginx]
command=nginx
autorestart=true
It will still spawn master/worker processes/subprocesses and can be used this way in production setups just fine. In this case it's supervisor that runs the process in the background and controls and supervises it.
See this FAQ entry
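A common alternative, if you'd rather not edit nginx.conf, is to inject the directive on the command line with nginx's -g flag; a sketch:
[program:nginx]
; -g passes a global directive, keeping nginx in the foreground for supervisor
command=nginx -g 'daemon off;'
autorestart=true
This keeps the container's nginx.conf identical to the one used outside the container, which is often handy in Docker setups.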

Nginx restart silently failing

I'm having a bit of an issue with nginx silently failing on restart on CentOS 6.
If nginx is running, service nginx restart will stop it without actually restarting it.
Yet it outputs:
Checking configuration of nginx: [ OK ]
Stopping nginx: [ OK ]
Starting nginx: [ OK ]
Again, it says nginx started successfully, yet a status check shows that it isn't running. If nginx is already stopped, though, service nginx restart will work. Consequently, a double restart will restart it.
The error.log for accompanying the output above only has:
2015/03/03 21:53:17 [notice] 14093#0: signal process started
Any thoughts as to why this is happening?
reload does not have this same problem. Is it swappable with restart when config changes are made?
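For configuration-only changes, reload is generally sufficient; a minimal sketch of the usual workflow on CentOS 6 with the stock init script:
# validate the new configuration before touching the running server
nginx -t
# sends SIGHUP to the master process; workers are replaced gracefully
service nginx reload
A full restart is generally only needed when upgrading the nginx binary itself.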

Nginx Tornado and Flask - What's a good start/stop script and keep-alive method

I've set up a Flask application to run on a tornado server backed by nginx. I've written a couple of bash scripts to reload server configuration when a new version is deployed, but I am unhappy with them. Basically what I have is:
to start the server (assuming in project root)
# this starts the tornado-flask wrapper
python myapp.py --port=8000 # .. some more misc settings
# this starts nginx
nginx
to stop it
pkill -f 'myapp.py'
nginx -s stop
to restart
cd $APP_ROOT
./script/stop && ./script/start
Many times these don't work smoothly and I need to run the commands manually. Also, I'm looking for a way to verify the service is alive and start it up if it's down. Thoughts? Thanks.
Supervisor is what you are looking for.
It's what I use to manage my Tornado apps along with some other processing daemons.
It will daemonize, handle logging, pid files... Pretty much everything you need.
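A minimal supervisord stanza for the Tornado wrapper above; the program name and paths here are assumptions, so adjust them to your project:
[program:myapp]
; run the tornado-flask wrapper in the foreground under supervisor
command=python /path/to/project/myapp.py --port=8000
directory=/path/to/project
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/myapp.log
With this in place, supervisorctl restart myapp replaces the pkill/start dance, and supervisor automatically relaunches the process if it dies.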
