nginx kills my threads

I developed an nginx module that connects to ZooKeeper, but I found that the threads created during ZooKeeper init were killed by nginx. When I debug with gdb using 'info threads', it shows only one thread, but there should be three. Why is this, and how can I solve it?

Related

How do web servers stay alive?

I am wondering how web servers such as Nginx, Flask, and Django stay alive and wait for requests, and how I can write my own program that stays alive and waits for a request before launching an action.
The short answer, for the overwhelming majority of cases involving nginx, is a systemd service. When you install nginx, it sets itself up as a systemd service that is configured to start nginx on boot and keep it running.
You can adapt systemd to load and keep your own services (like Flask, etc.) alive and waiting for requests as well. Here is an article that explains the basics.
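As a rough sketch, a minimal systemd unit for keeping a uWSGI-served Flask app running might look like the following (the unit name, paths, and user here are hypothetical placeholders, not taken from the question):

[Unit]
Description=Example Flask app served by uWSGI
After=network.target

[Service]
# hypothetical paths: adjust to your own virtualenv and project layout
ExecStart=/var/www/myapp/venv/bin/uwsgi --ini /etc/uwsgi/myapp.ini
Restart=always
User=www-data
Group=www-data

[Install]
WantedBy=multi-user.target

Saved as /etc/systemd/system/myapp.service, it can be started with systemctl enable --now myapp.service, after which systemd starts it on boot and restarts it if it dies.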
An alternative to systemd (which is built into most of the Linux systems you would be using on a server) is supervisord. Like systemd, supervisord can be configured to monitor, start, and keep your service running in the background, waiting for a request.
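A roughly equivalent supervisord program entry, under the same hypothetical paths, would be:

; hypothetical program name and paths
[program:myapp]
command=/var/www/myapp/venv/bin/uwsgi --ini /etc/uwsgi/myapp.ini
directory=/var/www/myapp
user=www-data
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/myapp/uwsgi.log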

What is standard number of workers and threads to use for uWSGI server?

I have an Nginx web server with a uWSGI app server installed on a single-CPU Ubuntu 14.04 image.
This uWSGI app server successfully handles requests for a Flask app.
The problem I am facing is that sometimes requests from a single client will time out for an extended period of time (1-2 hours).
This was happening without specifying workers or threads in my uwsgi.conf file.
Is there an ideal amount of workers/threads to be used per CPU?
I am using emperor service to start the uWSGI app server.
This is what my uwsgi.conf looks like:
description "uWSGI"
start on runlevel [2345]
stop on runlevel [06]
respawn
env UWSGI=/var/www/parachute_server/venv/bin/uwsgi
env LOGTO=/var/log/uwsgi/emperor.log
exec $UWSGI --master --workers 2 --threads 2 --emperor /etc/uwsgi/vassals --die-on-term --uid www-data --gid www-data --logto $LOGTO --stats 127.0.0.1:9191
Could this be a performance problem related to nginx / uWSGI, or is it more probable that these timeouts are occurring because I am only using a single CPU?
Any help is much appreciated!
Interesting issue you have...
Generally, you'd specify at least 2 * #CPUs + 1 workers. This is because while one worker is busy reading from or writing to a socket, another worker can still accept requests. The threads flag is also useful if your workers are synchronous, because they can notify the master that they are still busy working, which prevents a timeout.
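For the single-CPU machine in the question, that rule of thumb would translate into something like the following in the vassal's ini file (these values only illustrate the formula; they are a starting point, not a tuned configuration):

[uwsgi]
master = true
# 2 workers per CPU plus one: on a single-CPU box, 3 workers
workers = 3
# a couple of threads per worker for synchronous apps
threads = 2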
I think having only one worker was the reason for your timeouts (it blocked all other requests), but you should also inspect the responses from your app. If they take a long time (say, reading from a db), you'll want to raise the uwsgi_read_timeout directive in Nginx to give uWSGI time to process the request.
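On the nginx side, the time nginx waits for a response from uWSGI is set per location with uwsgi_read_timeout; a sketch might be (the socket path is a guess based on the project path in the question):

location / {
    include uwsgi_params;
    # hypothetical socket path for the vassal
    uwsgi_pass unix:/var/www/parachute_server/app.sock;
    # allow slow upstream responses (e.g. long db reads) up to 300 seconds
    uwsgi_read_timeout 300;
}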
I hope this helps.

Spring Boot embedded Tomcat Acceptor thread missing

We are using Spring Boot 1.3.3.RELEASE with embedded Tomcat.
The service is running on Red Hat Linux (kernel 2.6.32, 64-bit), using Java 1.8.0_45.
In our load environment we noticed that the server is up (the Java VM is still running) and responding to non-HTTP requests, but HTTP requests exposed via Spring MVC REST do not work; we get timeouts.
After comparing thread dumps between a healthy and an unhealthy system, we noticed that on the unhealthy system the http-nio-{port}-Acceptor thread is missing.
In particular, the thread shown below (from a healthy machine) is missing from the bad one.
"http-nio-8080-Acceptor-0" #45 daemon prio=5 os_prio=0
tid=0x00007f13fb9ef800 nid=0x896b runnable [0x00007f146f3f4000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
- locked <0x00000005cd5d0558> (a java.lang.Object)
at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:682)
at java.lang.Thread.run(Thread.java:745)
What could be reasons for the Acceptor thread to be missing?
Is it possible to restart the thread without restarting the whole application?
Is that the reason we are not able to service HTTP requests?

Is there a way to make nginx start a uwsgi process at the first request?

I was wondering if there is a way to make nginx start a uWSGI process at the first request, so I can save a lot of memory with idle sites.
Does anyone know how to do this?
Thanks!
Nginx doesn't start uWSGI processes at all; that's the uWSGI server's job.
Probably, you're looking for "cheap" mode:
http://projects.unbit.it/uwsgi/wiki/Doc#cheap
Nginx (by design) cannot spawn new processes (this is why you do not have CGI support in nginx).
You can use the cheap+idle modes of uWSGI to start with only the master and tear down workers after a specified period of inactivity (set by --idle).
If even starting only the master is too much for you (I suppose you want minimal memory usage), you can look at the old-school inetd/xinetd, or the newer upstart socket bridge and systemd socket activation, to activate uWSGI only on specific connections.
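As an illustration, a vassal ini combining the cheap and idle options might look like this (the socket path is a placeholder; exact behavior depends on your uWSGI version):

[uwsgi]
master = true
# cheap: do not spawn workers until the first request arrives
cheap = true
# idle: tear workers down again after 60 seconds of inactivity
idle = 60
# placeholder socket path
socket = /tmp/mysite.sock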

Does stopping a BizTalk host instance also stop the Applications that run under it?

Does stopping a BizTalk host instance also stop the Applications that run under it?
Or, what is the difference between stopping a host instance and stopping the applications under it?
No, host instances and applications are completely independent. You can stop a host instance and the application will remain in the started state. You can stop an application and the host instance will remain in the running state.
To understand the difference between stopping a host instance and stopping an application you first need to understand what these things are.
Basically, you need to think of your application as a set of assemblies plus some runtime configuration, and a set of logical subscriptions.
When you "start" an application up there are actually two steps which happen.
First, the parts of your application which need to receive messages (orchestrations and send ports) are enlisted. This ensures that an internal queue exists to receive the messages. Note that the application is not yet started, but it can receive and queue messages for processing later.
Then, when you start the application, the various parts of your application are able to process the messages.
The host instance is basically a Windows service.
When you stop the host instance, all you are really doing is stopping the underlying Windows service which runs it. This means that all the assemblies which contain your application artifacts are unloaded, and the application will obviously stop processing, even though the application is still in the started state.
When you start the host instance again, it loads your application assemblies back into memory and will be able to continue processing new messages. Messages which were being processed when the host instance was stopped may be in a suspended state, but if they are, they can be manually resumed.
Hope this helps.
Yes, if your application runs only on that host instance (meaning the application will stop processing messages). However, the internals of why it stops processing are quite different. See the explanation below and in hugh jadick's answer.
Stopping a host instance for a specified host type will stop execution of all artifacts (adapter handlers, receive locations, pipelines, orchestrations, etc.) that run on that host. An application is a logical group of artifacts which can run on a single host instance or on several. Multiple applications can run on a single host instance, and vice versa. So, stopping an app is just shutting down the execution of its artifacts, while stopping a host instance is shutting down the physical instance where app artifacts are executed.
