ZEO deadlocks on uWSGI in master mode

Good day!
I am migrating to a uWSGI deployment. The project is partly built on Zope 3 and uses ZODB with ZEO for concurrent access. If I start the uwsgi daemon like this:
uwsgi_python27 --http :9090 --wsgi-file /path/to/file
Everything runs OK. This is single-process mode, with no blocking or locking.
When I start the app like this:
uwsgi_python27 --http :9090 --wsgi-file /path/to/file -p 3
Everything runs. This is preforking mode, and results are mostly good, but some requests block. I suspect that the app blocks one request whenever a new instance starts; I see 2-3 locked requests while all the others work fine.
But when I start like this:
uwsgi_python27 --http :9090 --wsgi-file /path/to/file --master
The app launches, but no requests are served. When I run curl localhost:9090/some_page, nothing ever loads. There is no CPU or disk activity; it just locks up.
Does anyone know of any ZEO-specific behavior that could result in this? If I use plain FileStorage, everything runs normally without any deadlocks.
Any details about the behavior of uWSGI's master mode would also be appreciated.

OK, so I managed to get the damn thing running. I suspect that ZEO's RPC layer does not cope well with Linux forking, so the app must be started only in the forked worker process, not before forking.
See the lazy and lazy-apps configuration options for uWSGI; an example follows below.
ref: http://uwsgi-docs.readthedocs.org/en/latest/ThingsToKnow.html
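
For example, keeping the same placeholder paths as above, adding --lazy-apps makes uWSGI load the application in each worker after the fork rather than once in the master:

uwsgi_python27 --http :9090 --wsgi-file /path/to/file --master -p 3 --lazy-apps

If you prefer to keep app loading in the master, another option is to open only the ZEO connection after the fork using uWSGI's postfork hook. A minimal sketch, assuming ZEO listens on localhost:8100 (the address and variable names are illustrative, not taken from the project above):

# Open the ZEO client connection per worker, after uWSGI forks,
# so the ClientStorage RPC machinery is never shared across the fork.
from uwsgidecorators import postfork
from ZEO.ClientStorage import ClientStorage
from ZODB import DB

db = None  # set in each worker once it has forked

@postfork
def connect_zeo():
    global db
    storage = ClientStorage(('localhost', 8100))  # illustrative ZEO server address
    db = DB(storage)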

Related

Openresty dynamic spawning of worker process

Is it possible to spawn a new worker process and gracefully shut down an existing one dynamically using Lua scripting in OpenResty?
Yes, but no.
OpenResty itself doesn't really offer this kind of functionality directly, but it does give you the necessary building blocks:
nginx workers can be terminated by sending a signal to them
OpenResty lets you read the PID of the current worker process
LuaJIT's FFI allows you to use the kill() system call, or
using os.execute you can just call kill directly.
Combining those, you should be able to achieve what you want (a rough sketch follows below).
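
Just to illustrate the idea (a sketch only, with no error handling; ngx.worker.pid() is part of the lua-nginx-module API that OpenResty bundles):

-- Inside some *_by_lua block: ask the current worker to shut down gracefully.
-- The nginx master will respawn a replacement, keeping the worker count constant.
local pid = ngx.worker.pid()          -- PID of this nginx worker process
os.execute("kill -QUIT " .. pid)      -- SIGQUIT = graceful shutdown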
Note: After reading the question again, I noticed that I really only answered the second part.
nginx uses a set number of worker processes, so you can only shut down running workers, which the master process will then restart; the total number stays the same.
If you just want to change the number of worker processes, you would have to restart the nginx instance completely (I just tried nginx -s reload -g 'worker_processes 4;' and it didn't actually spawn any additional workers).
However, I can't see a good reason why you'd ever do that. If you need additional threads, there's a separate API for that; otherwise, you'll probably just have to live with a hard restart.

Is it bad to always run nginx-debug?

I'm reading the debugging section of the NGINX docs and it says that to turn on debugging, you have to compile or start nginx a certain way and then change a config option. I don't understand why this is a two-step process, and I'm inferring that it means, "you don't want to run nginx in debug mode for long, even if you're not logging debug messages, because it's bad".
Since the config option (error_log) already sets the logging level, couldn't I just always compile/run in debug mode and change the config when I want to see the debug-level logs? What are the downsides to this? Will nginx run slower if I compile/start it in debug mode even if I'm not logging debug messages?
First off, to run nginx in debug mode you need to run the nginx-debug binary, not the normal nginx, as described in the nginx docs. If you don't do that, it won't matter that you set error_log to debug, as it won't work.
As for WHY it is a two-step process, I can't tell you exactly why that decision was made.
Debug mode spits out a lot of logs, file-descriptor info and much more, so yes, it can slow your system down simply because it has to write all of that. On a dev server that is fine; on a production server handling hundreds or thousands of requests, the disk I/O generated by that logging can slow the server down and leave other services stuck waiting for free disk I/O. Disk space can also run out quickly.
Also, what would be the reason to always run in debug mode? Is there anything specific you are looking for in those logs? I'm trying to figure out why you would want it.
It's also worth mentioning that if you do want to run debug in production, at least use the debug_connection directive so that only certain IPs are logged at the debug level (see the example below).
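
For example, something along these lines (the address is just a placeholder) keeps everyone else at the normal log level while one client gets full debug output:

error_log /var/log/nginx/error.log info;
events {
    debug_connection 203.0.113.7;
}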

How to fork properly within an nginx/php-fpm environment?

I have a very simple PHP socket server that runs on my machine. I created a convenience class with simple methods like "restart", "stop", etc. to control the server once it is already running.
The restart function sends the server the command to stop, then forks the process and starts a new instance of the socket server in the child process, while the parent process returns to the caller.
This works great on the command line. However, I am trying to build an admin web page that restarts the socket server, and the forking is causing problems in php-fpm. What appears to be happening is that the "page load" does not end when the parent process finishes, and nginx/php-fpm do not reassign the worker to new connections until the forked process also ends.
In my case, I never want the forked process to end, so I end up with a completely crippled server. (In my test environment I have the worker pool set to just 1 for simplicity; in a production environment this will be higher, but the issue would still leave one worker slot permanently occupied.)
I have tried a few things, including calling fastcgi_finish_request() just prior to forking, however this had no effect.
How can I restart my service from a php-fpm worker process without locking up an assignment in the nginx connection pool?
The solution was simple and elementary.
My child processes were not redirecting STDOUT and STDERR to /dev/null. So even though the parent process finished, the child still held active file descriptors, which caused php-fpm to consider that connection in its pool still active; it was therefore never reassigned to new connections, because the child process runs continually.
Redirecting STDERR and STDOUT to /dev/null caused php-fpm to correctly reassign connections while simultaneously allowing the child process (the socket server) to run forever. Case closed.
./some/command >/dev/null 2>&1
Should have seen this one a mile off...
(I solved this problem months ago, but haven't signed into this account in a long time ... it didn't take me 7 months to figure out I need to direct output to /dev/null!)
Sounds like you have your protocol design wrong. The server should be capable of restarting itself; there are quite a few examples of how to do that on the internet.
The client (your web page) should not need to do any forking.
The server should also not run inside php-fpm, but be a console application that uses a daemon(3)-type interface to detach itself from the console.
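
For reference, the daemon(3)-style detach looks roughly like this; the sketch is in Python purely for illustration (a PHP daemon would do the same fork/setsid/redirect dance):

import os, sys

def daemonize():
    # First fork: the parent returns to the console / caller and exits.
    if os.fork() > 0:
        sys.exit(0)
    os.setsid()  # become session leader, detach from the controlling terminal
    # Second fork: make sure the daemon can never reacquire a terminal.
    if os.fork() > 0:
        sys.exit(0)
    # Redirect stdin/stdout/stderr to /dev/null so no descriptor keeps
    # pointing back at whatever started us (the same fix as in the accepted answer).
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)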

How to cleanup sockets after mono-process crashes?

I am creating a chat server in Mono that should be able to have many sockets open. Before deciding on the architecture, I am doing a load test with Mono. Just as a test, I created a small Mono server and a Mono client that opens 100,000 sockets/connections, and it works pretty well.
I tried to push it to the limit, and at some point the process crashes (of course).
But what worries me is that if I try to restart the process, it directly gives "Unhandled Exception: System.Net.Sockets.SocketException: Too many open files".
So I guess that somehow the file descriptors (sockets) are kept open even after my process ends. Even several hours later it still gives this error; the only way I can deal with it is to reboot my computer. We cannot afford to run into this kind of problem in production without knowing how to handle it.
My question:
Is there anything in Mono that keeps running globally regardless of which Mono application is started, some kind of service I could restart without rebooting my computer?
Or is this not a Mono problem but a Unix problem, one we would run into even if we wrote it in Java/C++?
I checked the following, but there are no Mono processes alive, no sockets open and no files:
localhost:~ root# ps -ax | grep mono
1536 ttys002 0:00.00 grep mono
-
localhost:~ root# lsof | grep mono
(nothing)
-
localhost:~ root# netstat -a
Active Internet connections (including servers)
(no unusual ports are open)
For development I run under OSX 10.7.5. For production we can decide which platform to use.
It sounds like you need to set (or unset) the Linger option on the socket (using Socket.SetSocketOption). Depending on the actual API you're using, there might be better alternatives (TcpClient has a LingerState property, for instance).
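
For illustration only, here is what toggling linger looks like at the plain-socket level, sketched in Python; in Mono/C# the equivalent is Socket.SetSocketOption with SocketOptionName.Linger (or a LingerOption object):

import socket
import struct

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Enable SO_LINGER with a zero timeout: close() then resets the connection
# immediately instead of leaving it lingering in the kernel.
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))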

php5-fpm crashes

I have a web server (nginx) running on Debian, and php5-fpm randomly seems to crash: it replies with 504 Bad Gateway whenever I request PHP files.
When it is in this crashed state and I run sudo /etc/init.d/php5-fpm, it says that it is running, but it keeps giving 504 Bad Gateway until I restart it via sudo /etc/init.d/php5-fpm.
I'm thinking it may have to do with one of my PHP files, which sits in an infinite loop until a certain event occurs (a change in the MySQL database) or until it times out. I don't know whether that is generally a good idea, or whether I should make the loop exit on its own before a timeout occurs.
Thanks in advance!
First, look at the nginx error.log for the actual error. I don't think PHP crashed; your loop is simply using up all the available php-fpm processes, so there is none free to serve the next request from nginx. That should produce a timeout error in the logs (nginx will wait for some time for an available php-fpm process).
Regarding your second question: you should not use infinite loops for this. And if you do, insert a sleep() call inside the loop - otherwise you will overload your CPU with that loop, and also the database with queries.
Also, I'd guess it is enough to have one PHP process in that loop waiting for the event. In that case, use some type of semaphore (a lock file or a flag in the database) to let other processes know that one is already waiting for that event; otherwise you will always eat up all the available PHP processes. A sketch of the lock-file approach follows below.
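
A rough sketch of the single-waiter idea, written in Python for brevity (PHP's flock() behaves the same way; the lock path and the event_occurred() check are made up for the example):

import fcntl
import time

lock = open('/tmp/event-waiter.lock', 'w')
try:
    # Non-blocking exclusive lock: only one process gets to wait for the event.
    fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    print('another process is already waiting for the event')
else:
    while not event_occurred():   # hypothetical check against the database
        time.sleep(1)             # sleep so the loop does not hammer the CPU or DB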
