Modifying nginx config directly in memory? - nginx

This might be a very silly question but I'll still ask it.
Nginx reads the nginx.conf file and keeps that information in memory until you do an 'nginx -s reload'.
Is there a way where I can modify the nginx configuration directly in memory? I need to reload multiple times per minute and the config file can be huge.
Basically, the problem I'm trying to solve is that I have multiple Docker containers coming up and down dynamically on a set of host machines. Every time a container comes up, it has a different IP and open port (application design constraint), and I'm thinking of using Nginx as a reverse proxy. What should I do to solve this, considering that the final product might have 3000-5000 containers running on a cluster of hosts? The rate at which containers are launched/destroyed might be around 100 per second. I need a fast way to make sure routing happens properly.

Hmmm, probably not. Nginx loads its config into multiple worker processes, so trying to change it on the fly does not look like a good idea.
What is your goal? You seem to need some dynamic routing or another sort of per-request treatment. You should instead look at:
nginx directives and modules such as eval
Lua scripting
nginx module development (in C/C++)
These would allow you to do more or less whatever you want: you can read some config from a database like Redis and change the behavior of your code according to the value stored there.
For example, you could do a lot just by reading a value from Redis and then using the if directive in your nginx config file. You can use "How can I get the value from Redis and put it in a variable in NGiNX?" to get a Redis value into nginx with the eval module.
UPDATE:
For dynamic IPs in nginx, you should look at "Dynamic proxy_pass to $var with nginx 1.0".
So I would suggest that you:
have a process that writes the IP addresses of your Docker containers into Redis
read them in nginx with the eval and Redis modules
use that value to proxy the request (a rough sketch follows)
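As a minimal sketch of that pattern, here is what it could look like using Lua scripting instead of the eval module, assuming OpenResty (lua-nginx-module plus lua-resty-redis); the addresses, port, and Redis key scheme are made up for illustration:

    server {
        listen 80;

        location / {
            set $backend "";                 # filled in per request by the Lua block below

            access_by_lua_block {
                local redis = require "resty.redis"
                local red = redis:new()
                red:set_timeout(100)         -- milliseconds
                local ok, err = red:connect("127.0.0.1", 6379)
                if not ok then
                    ngx.log(ngx.ERR, "redis connect failed: ", err)
                    return ngx.exit(502)
                end
                -- hypothetical key scheme: backend:<hostname> -> "ip:port"
                local addr, err = red:get("backend:" .. ngx.var.host)
                if not addr or addr == ngx.null then
                    return ngx.exit(502)
                end
                ngx.var.backend = addr
                red:set_keepalive(10000, 100)
            }

            # because the target is a variable, nginx picks the backend at request time
            proxy_pass http://$backend;
        }
    }

The process that registers containers would then just SET backend:<hostname> to the container's "ip:port" whenever a container comes up, with no nginx reload needed.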

Related

Nginx multiple instances

I have an EC2 instance on AWS, and I have installed nginx and created multiple server blocks to serve multiple applications.
However, if nginx goes down, all the applications go down as well.
Is there any way to set up a separate nginx instance for each application, so that if one nginx instance goes down it won't affect the others?
Yes, it's technically possible to install two nginx instances on the same server, but I would do it another way.
1 - You could just create multiple EC2 instances. The downside of this approach is that it may get harder to maintain depending on how many instances you want.
2 - You could use Docker or any of its alternatives to create containers and solve this problem. You can create as many containers as you need, each with a totally separate nginx instance. Docker is simple enough to learn and start using in no time, but the downsides are that you need to put in a little effort to learn it and that your main EC2 instance needs enough resources to share between the containers.
I hope it helps!
If it's possible, use ELB instead of nginx; this will be more convenient. But if ELB doesn't work for you, nginx already supports a High Availability mode to avoid the problem you mentioned of having a single point of failure.
It's documented officially here:
https://www.nginx.com/products/nginx/high-availability/
It's better than having one nginx machine for every application and guarantees more availability.
The type of redundancy that you're looking for is usually provided by a load balancer or reverse proxy in practice. There are a lot of ways this can be achieved architecturally, but generally speaking it looks like this:
Run multiple nginx instances with the same server definitions, and a balancer like haproxy in front of them. This allows the balancer to check which nginx instances are online and send requests to each in turn. Then if an instance goes down, or the orchestrator is bringing up a new one, requests only get sent to the online ones.
If requests need to be distributed more heavily, you could have nginx instances for each server, with a reverse proxy directed at each instance or node.
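As a minimal sketch of that balancer layer, kept in nginx config for illustration (the answer suggests haproxy, but a plain nginx upstream shows the same idea; addresses and ports are made up):

    upstream web_backends {
        # two nginx instances serving the same server definitions;
        # max_fails/fail_timeout give passive failure detection, so a backend
        # that stops responding is temporarily taken out of rotation
        server 10.0.0.11:80 max_fails=3 fail_timeout=10s;
        server 10.0.0.12:80 max_fails=3 fail_timeout=10s;
    }

    server {
        listen 80;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://web_backends;
        }
    }

haproxy (or NGINX Plus) would additionally give you active health checks rather than only failure-based ones.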
There may be some overhead for nginx if you do it that way, and it may become difficult to maintain later because there are many nginx instances. For example, if you need to update or add some modules it will be harder.
How about trying an EC2 autoscaling group, even with a minimum of 1 and desired capacity of 1? That way it will automatically launch a new instance if the current one goes down.
If you need to preserve some settings like the Elastic IP of your EC2 instance, you can look into EC2 instance recovery, which will restore your setup, unlike the autoscaling group.
But it would be better to use a load balancer like ALB with at least 2 instances. Using an ALB will also make you more secure. You may also want to read about ALB target groups; they will give you more options for solving your current problem.

Automatically append docker container to upstream config of nginx load balancer

I'm running Docker Compose (v2) and have a Node service (website) and a Python-based API deployed with nginx sitting in front of them.
One thing I would like to do is be able to scale the services by adding more containers. If I know ahead of time how many containers I will have, I can hardcode the nginx upstream config with the IPs of the containers which Docker makes available. However, the problem is that I want the upstream nginx config to be dynamic, e.g. if I add another Docker container, it simply appends the address of the container to the list of IPs in the upstream block.
My idea was to create a script which will automatically append the upstream servers using env variables when the containers change but I'm unsure where to start and can't find a good example.
There are a couple ways to achieve this. What you are referring to is usually called service discovery and comes in many forms. I'll describe two of them that I have used before.
The first and simplest one (which works fine for single servers or only discovering containers locally on one server) is a local proxy which makes use of the Docker socket or API. https://github.com/jwilder/nginx-proxy is one of the popular ones and should work well for prototyping scalable services in Compose.
Another way (which is more multi-host friendly but more complicated) would be registering services in a registry (such as etcd or Consul) and then dynamically writing out the configuration. To do this, you can use a registration system (such as https://github.com/gliderlabs/registrator) to register the containers and their ports. Then your proxy or application can consume a configuration file written out using a template system like https://github.com/kelseyhightower/confd.
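In either case, the end result is that the tooling regenerates an upstream block like the following whenever containers change and then reloads nginx (names and addresses here are made up for illustration):

    upstream api {
        # one entry per running API container; this block is rewritten by the
        # discovery tooling and nginx is reloaded afterwards
        server 172.18.0.4:5000;
        server 172.18.0.5:5000;
    }

    server {
        listen 80;

        location /api/ {
            proxy_pass http://api;
        }
    }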

Looking up a container's address via its hostname dynamically in Nginx

I'm currently trying to run two containers on a single host, one being an application (Ruby on Rails) and the other Nginx acting as a reverse proxy and cache. The app is running on TCP port 80. What I want to be able to do is bring down my application container, remove it and then bring it up again without having to restart nginx. The problem is that Nginx only seems to look up the IP of the container once, so if the container goes down and comes back up at a different address, Nginx will just complain that there's nothing there.
I've tried a few things:
Using resolver 127.0.0.11 valid=5 to use Docker's DNS
Using an upstream block
Using a variable to try to get nginx to resolve at runtime.
I'm not sure where else to look, but none of these options work if the application is brought up on a different IP address. Is there something I'm missing that makes this impossible?
Thanks.
I ended up reading through the 12 factor app, which inspired me to remove the Nginx-to-Rails upstream proxying altogether and instead use Nginx as a proxy cache whose upstream is the external DNS name.
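A rough sketch of that setup, combining the resolver and variable tricks from the list above so the name is re-resolved at request time; the external name, cache path, and zone name are made up:

    # goes in the http block
    proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m max_size=1g inactive=60m;

    server {
        listen 80;

        resolver 127.0.0.11 valid=5s;        # Docker's embedded DNS, answers cached for 5s
        set $upstream_host app.example.com;  # hypothetical external DNS name

        location / {
            proxy_cache app_cache;
            proxy_set_header Host $upstream_host;
            # because the target is a variable, nginx resolves it per request
            # instead of once at startup
            proxy_pass http://$upstream_host;
        }
    }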

Can I configure nginx in any way other than through the normal nginx.conf file

Is there any way I can configure nginx other than through the normal nginx.conf file?
Like an XML configuration, or memcache, or any other way?
My objective is to add/remove upstreams in the configuration dynamically. Nginx doesn't seem to have a direct solution for this, so I was planning to play with the configuration file, but I am finding it very difficult and error-prone to modify the file through scripts/programs.
Any suggestions?
No, you can't. The only way to "dynamically" reconfigure nginx is to process the config files with external software and then reload the server. Nor can you "program" the config like in Apache. The nginx config is mostly a static thing, which is part of why it is praised for its performance.
Source: I needed this too and did some research.
Edit: I have a "supervising" tool installed on my hosts that monitors load, clusters and such. I've ended up implementing the upstream scaling through it. Whenever a new upstream is ready, it notifies my "supervisor" on all web servers. The "supervisors" then query the new upstream for the "virtual hosts" it serves and add all of them to their context on the nginx host. Then it just runs nginx -t && nginx -s reload. This is for nginx passing FastCGI requests to php-fpm backends.
Edit 2: I have many server blocks for different server_names (sites); each has an upstream associated with it on one or more other hosts. In the server block I have an include /path/to/where/my/upstream/configs/are/us-<unique_site_id>.conf line. The us-<unique_site_id>.conf file is generated when the server block is created and is populated with the existing upstream(s) information. When there are changes in the upstream pool or the site configuration, the file is rewritten to reflect them.
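A rough sketch of that layout, with made-up paths, site ids, and addresses:

    # /etc/nginx/sites-enabled/site-42.conf  (hypothetical path and site id)
    server {
        server_name site-42.example.com;

        # pulls in the generated upstream definition for this site
        include /etc/nginx/upstreams/us-42.conf;

        location ~ \.php$ {
            fastcgi_pass us_42;
            include fastcgi_params;
        }
    }

    # /etc/nginx/upstreams/us-42.conf  (rewritten whenever the pool changes)
    upstream us_42 {
        server 10.0.0.11:9000;
        server 10.0.0.12:9000;
    }

After rewriting the upstream file, the supervisor runs nginx -t && nginx -s reload as described above.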

Geo IP Module for nginx

On my nginx server, I am going to use more than one of the GeoIP databases (one for country+city and another one for ISP or organization). I could not find a module for nginx and/or PECL that gets more than one of these databases to run.
The database provider is not going to publish a single DB with all the data in one file, so it looks like I am lost.
http://wiki.processmaker.com/index.php/Nginx_and_PHP-FPM_Installation seems to work with one DB only.
It's possible with the standard built-in GeoIP nginx module:
http://nginx.org/en/docs/http/ngx_http_geoip_module.html
geoip_country CountryCity.dat;
geoip_city CountryCity.dat;
geoip_org Organization.dat;
