I'm looking to reload, not restart, nginx with monit. The docs say that valid service methods are start, stop and restart but not reload.
Does anyone have a workaround for how to reload nginx rather than restart it?
Edit - I should have pointed out that I still require the ability to restart nginx but I also need, under certain conditions, to reload nginx only.
An example might be that if nginx goes down it needs to be restarted but if it has an uptime > 3 days (for example) it should be reloaded.
I'm trying to achieve this: https://mmonit.com/monit/documentation/monit.html#UPTIME-TESTING
...but with nginx reloading, not restarting.
Thanks.
I solved this issue using the exec command when my conditions are met. For example:
check system localhost
    if memory > 95% for 4 cycles
        then exec "/etc/init.d/nginx reload"
I've found that nginx memory issues can be resolved in the short term by reloading rather than restarting.
You can pass the reload signal, which should do the job:
nginx -s reload
"Use the docs. Luke!"
According to the documentation, sending the HUP signal will cause nginx to re-read its configuration file(s), check them, and apply the new configuration.
See for details: http://nginx.org/en/docs/control.html#reconfiguration
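For completeness, sending the signal by hand is equivalent to nginx -s reload (the pidfile path here is just an example; yours may differ):

kill -HUP $(cat /usr/local/var/run/nginx.pid)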
Here's a config that will achieve what you wanted:
check process nginx with pidfile /usr/local/var/run/nginx.pid
    start program = "/usr/local/bin/nginx"
    stop program = "/usr/local/bin/nginx -s stop"
    if uptime > 3 days then exec "/usr/local/bin/nginx -s reload"
I've tried this on my configuration. The one problem I'm seeing is that Monit treats an uptime check like this as an error condition. As far as I can tell on my machine, the nginx -s reload command does not reset the process's uptime back to 0. Monit considers uptime > 3 days to be an error that should be remedied by the command you give it in the config, but since that command never brings the uptime back under 3 days, Monit will report Uptime failed as the status of the process, and you'll see this in the logs:
error : 'nginx' uptime test failed for /usr/local/var/run/nginx.pid -- current uptime is 792808 seconds
You'll see hundreds of these, actually (my config has Monit run every 30 seconds, so I get one of these every 30 seconds).
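One more note: Monit runs an exec action once, when the condition first becomes true. If you wanted the reload to fire again while the condition persists, the monit docs describe a repeat option on exec; an untested sketch, using the same paths as the config above:

if uptime > 3 days then exec "/usr/local/bin/nginx -s reload" repeat every 10 cycles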
One question: I'm not sure what reloading nginx after a long time, like 3 days, will do for it - is it helpful to do that for nginx? If you have a link to info on why that would be good for nginx to do, that might help other readers getting to this page via search. Maybe you accepted the answer you did because you saw that it would only make sense to do this when there is an issue, like memory usage being high?
(old post, I know, but I got here via Google and saw that the accepted answer was incomplete, and also don't fully understand the OP's intent).
EDIT: ah, I see you accepted your own answer. My mistake. So it seems that you did in fact see that it was pointless to do what you initially asked, and instead opted for a memory check. I'll leave my post up to give this clarity to any other readers with the same confusion.
I have the following configuration:
worker_processes 4;
But I noticed that it always hits only 1 worker.
I am testing on a local CentOS VM. I am making curl HTTP calls to a specific port; I put 1000 curl requests in a file and ran them from multiple terminal windows.
But I see that all of them hit only 1 worker. Is there a way I can get more than 1 worker handling requests? Can someone please share their knowledge on this?
https://blog.cloudflare.com/the-sad-state-of-linux-socket-balancing/
From that article: "In the epoll-and-accept the load balancing algorithm differs: Linux seems to choose the last added process, a LIFO-like behavior. The process added to the waiting queue most recently will get the new connection. This behavior causes the busiest process, the one that only just went back to event loop, to receive the majority of the new connections. Therefore, the busiest worker is likely to get most of the load."
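One common mitigation, if you want connections spread more evenly: since nginx 1.9.1 the listen directive accepts a reuseport parameter, which gives every worker its own listening socket and lets the kernel balance new connections across them. A minimal sketch (the port is a stand-in for whatever you're testing):

http {
    server {
        listen 9000 reuseport;  # per-worker listening sockets; the kernel spreads accepts across workers
    }
}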
Is it possible to spawn a new worker process and gracefully shut down an existing one dynamically using Lua scripting in OpenResty?
Yes but No
OpenResty itself doesn't really offer this kind of functionality directly, but it does give you the necessary building blocks:
nginx workers can be terminated by sending a signal to them
OpenResty allows you to read the PID of the current worker process
LuaJIT's FFI allows you to use the kill() system call, or
using os.execute, you can just call kill directly.
Combining those, you should be able to achieve what you want :D
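From a shell, the signal part looks roughly like this (a sketch; inside OpenResty you could assemble the same kill command with ngx.worker.pid() and run it via os.execute):

# list the nginx master and worker processes
ps -o pid=,args= -C nginx
# gracefully stop one worker: it finishes in-flight requests and exits, and the master starts a replacement
kill -QUIT $(pgrep -f 'nginx: worker process' | head -n 1)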
Note: After reading the question again, I noticed that I really only answered the second part.
nginx uses a set number of worker processes, so you can only shut down running workers which the master process will then restart, but the number will stay the same.
If you just want to change the number of worker processes, note that nginx -s reload -g 'worker_processes 4;' doesn't do it (I tried it, and it didn't spawn any additional workers); the -g directives only apply to the nginx process you launch, and with -s reload that process merely signals the running master and then exits. Editing worker_processes in nginx.conf and doing a normal reload should apply the new count, though (see the sketch below).
However, I can't see a good reason why you'd ever need to do that dynamically from Lua. If you need additional threads, there's a separate API for that.
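A minimal sketch of the config-edit-plus-reload route (the config path is an assumption; adjust it for your install):

# bump the worker count in the main config
sudo sed -i 's/^worker_processes .*/worker_processes 8;/' /etc/nginx/nginx.conf
# validate the config, then gracefully reload; the master starts workers per the new setting
sudo nginx -t && sudo nginx -s reload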
I am using Docker version 17.06.2-ce, build cec0b72 on CentOS 7.2.1511.
I am going through the docker getting started tutorial. I have played around with docker a bit beyond this, but not much.
I have built the friendlyhello image by copy-pasting from the website. When running with
docker run -d -p 8080:80 friendlyhello
I can curl localhost:8080 and get a response in ~20ms. However, when I run
docker run -p 8080:80 friendlyhello
i.e., without detaching from the container, a curl to localhost:8080 takes over 50 seconds. This makes no sense to me.
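For reference, I'm timing the requests like this (nothing project-specific here):

# -s/-o silence the response body; -w prints the total time in seconds
curl -s -o /dev/null -w '%{time_total}\n' http://localhost:8080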
EDIT: it seems that killing containers repeatedly may have something to do with this. Either that, or it's random whether a given container can serve quickly or not. After stopping and starting a bunch of identical containers with the -d flag as the only change, I have only seen quick responses from detached containers, but detached containers can also be slow to respond. I also think it's worth mentioning that 95%+ of the slow response times have been either 56s or 61s.
Trying to research this error gives me plenty of responses about curl being slower when run inside a container, but that is about all I can find.
If it matters, I'm working on a VM, have no access to the host, am always root, and am behind a network firewall and proxy, but I don't think this should matter when only dealing with localhost.
I'm dumb.
The getting started tutorial says that it can take a long time for http responses with this app because of an unmet dependency that they have you add further in the tutorial. Unfortunately, they say this on the next page, so if you're on part 2 and a beginner, it's not clear why this problem occurs until you give up and go on to part 3. They claim the response may take "up to 30 seconds"; mine is double that, but it's clear that this is the root cause.
I'm reading the debugging section of NGINX and it says to turn on debugging, you have to compile or start nginx a certain way and then change a config option. I don't understand why this is a two step process and I'm inferring that it means, "you don't want to run nginx in debug mode for long, even if you're not logging debug messages because it's bad".
Since the config option (error_log) already sets the logging level, couldn't I just always compile/run in debug mode and change the config when I want to see the debug level logs? What are the downsides to this? Will nginx run slower if I compile/start it in debug mode even if I'm not logging debug messages?
First off, to run nginx in debug mode you need to run the nginx-debug binary, not the normal nginx, as described in the nginx docs. If you don't do that, it won't matter whether you set the error_log to debug; it won't work.
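As a quick check (assuming nginx is on your PATH), you can see whether a given binary was even built with debug support:

# prints the configure arguments; debug support shows up as --with-debug
nginx -V 2>&1 | grep -- --with-debug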
As for WHY it is a two-step process, I can't tell you exactly why that decision was made.
Debug mode spits out a lot of output: logs, fd info, and much more, so yes, it can slow your system down, since it has to write all of those logs. On a dev server that is fine; on a production server with hundreds or thousands of requests, you can see how the disk I/O generated by that logging can slow the server down and leave other services stuck waiting on free disk I/O. Disk space can run out quickly, too.
Also, what would be the reason to always run in debug mode? Is there anything special you are looking for in those logs? I guess I'm trying to figure out why you would want it.
And it's maybe worth mentioning that if you do want to run debug in production, at least use the debug_connection directive and log debug output only for certain IPs, as sketched below.
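A minimal sketch of that directive (the IP and log path are placeholders; debug_connection lives in the events block and still requires a --with-debug binary):

error_log /var/log/nginx/error.log info;   # everyone else logs at the normal level
events {
    debug_connection 192.0.2.10;           # debug-level logging only for this client IP
}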
I have a webserver (nginx) running on Debian, and php5-fpm randomly seems to crash: it replies with 504 Gateway Time-out when I call PHP files.
When it is in a crashed state and I do sudo /etc/init.d/php5-fpm status, it says that it is running, but it still gives 504s until I do sudo /etc/init.d/php5-fpm restart.
I'm thinking it may have to do with one of my PHP files, which sits in an infinite loop until a certain event occurs (a change in the MySQL database) or until it gets timed out. I don't know whether that is generally a good idea, or if I should make the loop quit itself before a timeout occurs.
Thanks in advance!
First, look at the nginx error.log for the actual error. I don't think PHP crashed; more likely your loop is using up all the available php-fpm processes, so there is none free to serve the next request from nginx. That should produce a timeout error in the log (nginx waits for some time for an available php-fpm process).
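A quick way to check both of those (the log path and process name are assumptions for a Debian php5 setup like yours):

# look for "upstream timed out" entries pointing at the php-fpm socket
tail -n 50 /var/log/nginx/error.log
# see how many pool workers exist and how long each has been running
ps -C php5-fpm -o pid=,etime=,args=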
Regarding your second question: you should not use infinite loops for this. And if you do, insert a sleep() call inside the loop; otherwise you will overload your CPU with that loop, and the database with queries.
Also, I'd say it is enough to have one PHP process in that loop waiting for the event. In that case, use some type of semaphore (a file, or info in the db) to let the other processes know that one is already waiting for that event; see the sketch below. Otherwise you will keep eating up all the available PHP processes.
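A minimal file-semaphore sketch using flock(1) (wait_for_event.php and the lock path are hypothetical names for illustration):

# run the waiter only if no other process holds the lock; -n fails immediately instead of queueing
flock -n /tmp/event-waiter.lock php wait_for_event.php || echo "another process is already waiting"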