Multiple server processes using nginx and uWSGI

I've noticed that you can start multiple processes within one uWSGI instance behind nginx:
uwsgi --processes 4 --socket /tmp/uwsgi.sock
Or you can start multiple uWSGI instances on different sockets and load balance between them using nginx:
upstream my_servers {
server unix:///tmp/uwsgi1.sock;
server unix:///tmp/uwsgi2.sock;
#...
}
What is the difference between these 2 strategies and is one preferred over the other?
How does load balancing done by nginx (in the first case) differ from load balancing done by uWSGI (in the second case)?
nginx can front servers on multiple hosts. Can uWSGI do this within a single instance? Do certain uWSGI features only work within a single uWSGI process (i.e. shared memory/cache)? If so, it might be difficult to scale from the first approach to the second one.

The difference is that in the case of uWSGI there is no "real" load balancing. The first free process will always respond, so this approach is much better than having nginx load balance between multiple instances (this is obviously true only for local instances). What you need to take into account is the "thundering herd problem"; its implications are explained here: http://uwsgi-docs.readthedocs.org/en/latest/articles/SerializingAccept.html.
Finally, all of the uWSGI features are multithread/multiprocess (and green-thread) aware, so the cache (for example) is shared by all processes.
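For illustration, a single instance with several processes, a shared cache and serialized accept() might look something like this (the socket path, process count and cache name are placeholders; --thunder-lock is the option discussed in the article linked above):
uwsgi --socket /tmp/uwsgi.sock --processes 4 --thunder-lock --cache2 name=mycache,items=1000
All four workers accept from the same socket, and the cache named "mycache" is visible to every one of them.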

Related

Can we Scale gRPC like worker processes in the same server

Our application is expected to receive thousands of requests every second, and we are considering gRPC since one of our main services is in a different language.
My queries are
Can we use something like supervisor to spawn multiple workers (one gRPC server per service) listening on the same port, or are gRPC servers limited to one per server/port?
How would I go about performance testing to determine the maximum requests per gRPC server?
Thanks in advance
While you can certainly use supervisord to spawn multiple gRPC server processes, port sharing would be a problem. However, this is a POSIX limitation, not a gRPC limitation. By default, multiple processes cannot listen on the same port. (To be clear, multiple processes can bind to the same port with SO_REUSEPORT, but this would not result in the behavior you presumably want.)
So you have two options in order to get traffic routed to the proper service on a single port. The first option is to run all of the gRPC services in the same process and attach them to the same server.
If having only a single server process won't work for you, then you'll have to start looking at load balancing. You'd front all of your services with any HTTP/2-capable load balancer (e.g. Envoy, Nginx) and have it listen on your single desired port and route to each gRPC server process as appropriate.
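As a rough sketch of the nginx variant, nginx 1.13.10 or later can proxy gRPC with grpc_pass. The service names, ports and upstream names below are placeholders, and this uses cleartext HTTP/2 (h2c); add ssl and certificates for TLS:
upstream grpc_service_a { server 127.0.0.1:50051; }
upstream grpc_service_b { server 127.0.0.1:50052; }
server {
    listen 80 http2;   # gRPC requires HTTP/2
    # gRPC request paths look like /package.Service/Method, so prefix
    # locations can route each service to its own backend process.
    location /my.package.ServiceA/ { grpc_pass grpc://grpc_service_a; }
    location /my.package.ServiceB/ { grpc_pass grpc://grpc_service_b; }
}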
This is a very broad question. The answer is "the way you'd benchmark any non-gRPC server." This site is a great resource for some principles behind benchmarking.

Is it OK to run logrotate inside k8s pods on a shared log file, or will something go wrong?

I am using Kubernetes to deploy nginx in several pods. Each pod mounts the access.log file to a hostPath so that Filebeat can read it and ship it to another output.
If I run log rotation at the same cron time in every pod, with all pods sharing the common access.log file, it works.
I have only tested this with a small amount of data in a simple cluster. With large volumes of data in production, is this a good plan, or will something go wrong given how logrotate is designed?
This will not usually work well, since logrotate can't see the other nginx processes to SIGHUP them. If nginx in particular can detect a rotation without a HUP or other external poke then maybe, but most software cannot.
In general container logs should go to stdout or stderr and be handled by your container layer, which generally handles rotation itself or will include logrotate at the system level.
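For nginx specifically, a common approach (and what the official nginx image does by symlinking the log files) is to point the logs at stdout/stderr so the container runtime takes care of collection and rotation, for example:
access_log /dev/stdout;
error_log  /dev/stderr;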

Load Balancer HA

What happens when a load balancer (HAProxy/Nginx) goes down? This single point of failure can make the whole system unavailable. What is the best strategy to recover in this case, and how do we prevent the service from becoming unavailable?
Do we also need replication for the load balancer to prevent data loss?
The common solution is to run one or several servers with one or more VIPs (virtual IP addresses), where keepalived handles the VIP and HAProxy handles the load.
This is one of many examples of how to create such a setup: Setting Up A High-Availability Load Balancer (With Failover And Session Support) With HAProxy/Keepalived On Debian Lenny.
Regarding "replication", you should answer these questions for yourself:
what do you want to replicate?
how many replications do you want?
In HAProxy you can use peers to replicate several things (stick-table entries, for example).
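For the VIP part, a minimal keepalived sketch could look like the following (interface name, virtual_router_id, priority and the VIP itself are placeholders; the second node runs the same block with state BACKUP and a lower priority):
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24
    }
}
Clients always connect to the VIP; if the MASTER node fails, keepalived moves the VIP to the BACKUP node and the HAProxy there keeps serving traffic.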

Use separate pools for sites with HHVM

I am using HHVM 3.0.1 (rel) with nginx over a unix socket. I would like to set up pooling as in php-fpm and use different pools for different sites to allocate resources very precisely. Is it possible?
Currently, no. It's in the backlog of things to add, or you could work on adding it yourself.
The current workaround is to have multiple instances of HHVM running on different ports and manually set up pools that way.
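For example, something like this on the nginx side (ports, hostnames and paths are placeholders):
server {
    listen 80;
    server_name site-a.example.com;
    root /var/www/site-a;
    location ~ \.(hh|php)$ {
        fastcgi_pass   127.0.0.1:9001;   # HHVM instance dedicated to site A
        fastcgi_index  index.php;
        include        fastcgi_params;
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
# A second server block for site-b.example.com would be identical except for
# root /var/www/site-b and fastcgi_pass 127.0.0.1:9002 (the HHVM instance for site B).
Each HHVM instance can then be configured with its own resource limits, which approximates per-site pools.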

Is there a way to make nginx start a uwsgi process at the first request?

I was wondering whether there is a way to make nginx start a uWSGI process at the first request, so I can save a lot of memory with idle sites.
Does anyone know how to do this?
Thanks!
Nginx doesn't start uWSGI processes at all. That's the uWSGI server's job.
Probably, you're looking for "cheap" mode:
http://projects.unbit.it/uwsgi/wiki/Doc#cheap
Nginx (by design) cannot spawn new processes (this is why there is no CGI support in nginx).
You can use the cheap+idle modes of uWSGI to start with only the master and reap workers after a specified time (set by --idle) of inactivity, as in the sketch below.
If even starting only the master is too much for you (I suppose you want minimal memory usage), you can look at old-school inetd/xinetd, or the newer upstart socket bridge and systemd socket activation, to activate uWSGI only on specific connections.
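A minimal sketch of the cheap+idle setup, mirroring the command from the question (socket path, process count and idle timeout are placeholders):
uwsgi --socket /tmp/uwsgi.sock --processes 4 --cheap --idle 600
The master starts with no workers; workers are spawned on the first request and the instance drops back to cheap mode after 600 seconds without traffic.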
