Use separate pools for sites with HHVM - nginx

I am using HHVM 3.0.1 (rel) with nginx over a unix socket. I would like to set up pooling as in php-fpm, with different pools for different sites, so I can allocate resources precisely. Is that possible?

Currently, no. It's in the backlog of things to add, or you could work on adding it yourself.
The current workaround is to have multiple instances of HHVM running on different ports and manually set up pools that way.
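For illustration, a minimal sketch of that workaround (the ports, site names, paths, and HHVM flags below are invented, not taken from the question): run one HHVM daemon per site on its own FastCGI port and point each nginx server block at its own backend.

# hypothetical layout: one HHVM instance per site, e.g. started with
#   hhvm --mode daemon -vServer.Type=fastcgi -vServer.Port=9001   (site-a)
#   hhvm --mode daemon -vServer.Type=fastcgi -vServer.Port=9002   (site-b)
server {
    listen 80;
    server_name site-a.example.com;
    root /var/www/site-a;
    location ~ \.(hh|php)$ {
        # site-a talks only to its own HHVM instance, so its
        # resources are isolated from site-b's
        fastcgi_pass 127.0.0.1:9001;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
server {
    listen 80;
    server_name site-b.example.com;
    root /var/www/site-b;
    location ~ \.(hh|php)$ {
        fastcgi_pass 127.0.0.1:9002;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

Each HHVM process can then be tuned separately (thread count, memory limits, and so on), which approximates per-site pools.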

Related

Nginx multiple instances

I have an EC2 instance on AWS and I have installed nginx and created multiple server blocks to serve multiple applications.
However, if nginx goes down, all the applications go down as well.
Is there any way to set up a separate nginx instance for each application, so that if one nginx instance goes down it won't affect the other applications?
Yes, it's technically possible to install 2 nginx instances on the same server, but I would do it another way.
1 - You could just create multiple EC2 instances. The downside of this approach is that it may become harder to maintain depending on how many instances you want.
2 - You could use Docker or any of its alternatives to create containers and solve this problem. You can create as many containers as you need, each with a totally separate nginx instance. Docker is simple to learn and you can start using it in no time, but the downside of this approach is that you need to put in a little effort to learn it, and your main EC2 instance needs enough resources to share between the containers.
I hope it helps!
If it's possible, use an ELB instead of nginx; this will be more convenient. But if an ELB doesn't work for you, nginx already supports a High Availability mode to avoid the problem you mentioned of having a single point of failure.
It's documented officially here:
https://www.nginx.com/products/nginx/high-availability/
It's better than having one nginx machine for every application, and it guarantees more availability.
The type of redundancy that you're looking for is usually provided by a load balancer or reverse proxy in practice. There are a lot of ways this can be achieved architecturally, but generally speaking it looks like this:
Run multiple nginx instances with the same server definitions, and a balancer like haproxy in front. This allows the balancer to check which nginx instances are online and send requests to each in turn. Then if an instance goes down, or the orchestrator is bringing up a new one, requests only get sent to the online ones.
If requests need to be distributed more heavily, you could have nginx instances for each server, with a reverse proxy directed at each instance or node.
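As a rough illustration of the first variant, here is a sketch that uses nginx itself as the front balancer rather than haproxy (the ports below are invented): the front instance proxies to an upstream group of backend nginx instances and skips any member that stops responding.

# front balancer, a sketch only
upstream backend_nginx {
    # two identical backend nginx instances on illustrative ports
    server 127.0.0.1:8081 max_fails=2 fail_timeout=10s;
    server 127.0.0.1:8082 max_fails=2 fail_timeout=10s;
}
server {
    listen 80;
    location / {
        # on an error or timeout the request is retried against the
        # next backend, so one dead instance does not take all
        # applications offline
        proxy_pass http://backend_nginx;
        proxy_next_upstream error timeout;
    }
}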
There may be some overhead for nginx if you do it that way, and your setup may become difficult to maintain later because there are so many nginx instances. For example, if you need to update nginx or add some modules, it will be harder.
How about using an EC2 Auto Scaling group, even with a minimum of 1 and a desired capacity of 1? That way a new instance is launched automatically if the current one goes down.
If you need to preserve some settings, like the Elastic IP of your EC2 instance, look into EC2 instance recovery; unlike the Auto Scaling group, it will restore your existing setup.
But it would be better to use a load balancer like an ALB with at least 2 instances. Using an ALB will also make your setup more secure. You may also want to read about ALB target groups; they will give you more options for solving your current problem.

Achieve a "more than 10 connections, pass to next server" setup with NGINX or other

Idea
Gradually use a few small-scale dedicated servers in combination with an expensive cloud platform, where, under light traffic, the dedicated servers are filled up first before the cloud kicks in. This hedges against occasional traffic spikes.
nginx
Is there an easy way (without nginx plus) to achieve a "waterfall-like" setup, where the small servers are used first, up to a maximum number of concurrent connections (or better, a current-bandwidth limit), before the cloud platform sees any traffic?
Nginx Config, Libraries, Tools?
Thanks
You can use the nginx upstream module for this.
If you want total control, mark your cloud servers with the backup parameter, so that they won't be used until your primary servers fail. Then use custom monitoring scripts to determine when those cloud servers should kick in, change the nginx config, and remove the backup keyword from them. Also monitor the conditions under which you want to stop using the cloud servers, and alter the nginx config again.
A simpler solution (though without fine-grained control, such as avoiding spikes) is to use the max_conns=number parameter. Nginx should start to use the backup server once all the other servers have already reached their maximum number of connections (I didn't test it).
NOTE: the max_conns parameter was only available in paid nginx between v1.5.9 and v1.11.5, so with those versions the only solution is your own monitoring plus reloading the nginx config whenever the upstream servers need to change. Thanks to Mickaël Le Baillif's comment for pointing out that this parameter is now available to all.
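A minimal sketch of that max_conns + backup idea (untested, as noted above; hostnames and limits are invented, and max_conns in the open-source build requires a recent nginx per the note):

upstream app_pool {
    # small dedicated servers, each capped at 10 concurrent connections
    server dedi1.example.com:80 max_conns=10;
    server dedi2.example.com:80 max_conns=10;
    # the cloud platform only sees traffic when the servers above
    # are saturated or unavailable
    server cloud.example.com:80 backup;
}
server {
    listen 80;
    location / {
        proxy_pass http://app_pool;
    }
}

Keep in mind that without a shared memory zone in the upstream block the connection count is tracked per worker process, so the effective limit can be higher than the number you set.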

Efficiently using multiple docker containers in a single host

I have a physical server running Nginx and MySQL and serving my PHP website. The server has a multi-core processor with 16 GB of RAM and can handle a certain amount of web traffic.
Now, instead of this single server, if I run multiple docker containers holding individual instances of Nginx (app server) and MySQL (DB server) and load balance between the application and database containers, will the setup handle the same amount of traffic as the single server did, or less (performance-wise)?
How will the performance compare if I use a virtual server like EC2 or a DigitalOcean Droplet with the same hardware configuration instead of a physical server?
Since all processes run on the native host (you can run ps aux on the host, outside the container, and see them), there should be very little overhead. The network bridging and iptables entries needed to forward packets to the virtual host will add some CPU overhead, but I can't imagine that being too onerous.
If the question is several nginx instances plus one mysql versus several containers each running its own nginx + mysql, then performance-wise it would probably be better not to use containers, mainly because of mysql and how much memory a single instance can use compared to multiple separate instances. You can still put the nginx instances in separate containers but use one central mysql for all sites, whether in a container or not.

Multiple server processes using nginx and uWSGI

I've noticed that you can start multiple processes within one uWSGI instance behind nginx:
uwsgi --processes 4 --socket /tmp/uwsgi.sock
Or you can start multiple uWSGI instances on different sockets and load balance between them using nginx:
upstream my_servers {
server unix:///tmp/uwsgi1.sock;
server unix:///tmp/uwsgi2.sock;
#...
}
What is the difference between these 2 strategies and is one preferred over the other?
How does load balancing done by nginx (in the first case) differ from load balancing done by uWSGI (in the second case)?
nginx can front servers on multiple hosts. Can uWSGI do this within a single instance? Do certain uWSGI features only work within a single uWSGI process (i.e. shared memory/cache)? If so, it might be difficult to scale from the first approach to the second one...
The difference is that with uWSGI there is no "real" load balancing. The first free process will always respond, so this approach is much better than having nginx load balance between multiple instances (this is obviously true only for local instances). What you need to take into account is the "thundering herd problem"; its implications are described here: http://uwsgi-docs.readthedocs.org/en/latest/articles/SerializingAccept.html.
Finally, all of the uWSGI features are multithread/multiprocess (and green-thread) aware, so the cache (for example) is shared by all processes.
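For reference, the nginx side of the first (single-socket) strategy needs no upstream block at all; a rough sketch, reusing the socket path from the question:

# one uWSGI instance with several workers behind a single socket,
# started e.g. with: uwsgi --processes 4 --socket /tmp/uwsgi.sock
server {
    listen 80;
    location / {
        include uwsgi_params;
        # nginx hands the request to the socket; whichever uWSGI worker
        # is free accepts it, so there is no nginx-side balancing
        uwsgi_pass unix:/tmp/uwsgi.sock;
    }
}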

Is there a way to make nginx start a uwsgi process at the first request?

I was wondering whether there is a way to make nginx start a uWSGI process at the first request, so I can save a lot of memory on idle sites.
Does anyone know how to do this?
Thanks!
Nginx doesn't start uWSGI processes at all; that's the uWSGI server's job.
You're probably looking for "cheap" mode:
http://projects.unbit.it/uwsgi/wiki/Doc#cheap
Nginx (by design) cannot spawn new processes (this is why you don't have CGI support in nginx).
You can use the cheap+idle modes of uWSGI to start with only the master process and reap workers after a specified period of inactivity (set by --idle).
If even starting only the master is too much for you (I suppose you want minimal memory usage), you can look at the old-school inetd/xinetd, or the newer Upstart socket bridge and systemd socket activation, to start uWSGI only when a connection arrives.
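A minimal example of the cheap + idle combination (the socket path, worker count, timeout, and module name are placeholders, not taken from the question):

# start with only the master process; workers are spawned when the first
# request arrives and reaped again after 60 seconds of inactivity
uwsgi --master --processes 4 --socket /tmp/uwsgi.sock --cheap --idle 60 --module myapp:app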

Resources