Automatically append docker container to upstream config of nginx load balancer

I'm running Docker Compose (v2) and have a Node service (website) and a Python-based API deployed, with nginx sitting in front of them.
One thing I would like to do is scale the services by adding more containers. If I know ahead of time how many containers I will have, I can hardcode the nginx upstream config with references to the IPs of the containers, which Docker makes available. However, I want the upstream nginx config to be dynamic: if I add another Docker container, it should simply append the container's address to the list of IPs in the upstream block.
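For reference, the hardcoded version being described might look like the sketch below (the addresses and ports are illustrative):

```nginx
# nginx.conf -- static upstream; has to be edited and reloaded
# by hand every time a container is added or removed
upstream website {
    server 172.17.0.2:3000;   # node container 1
    server 172.17.0.3:3000;   # node container 2
}

server {
    listen 80;

    location / {
        proxy_pass http://website;
    }
}
```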
My idea was to create a script which automatically appends the upstream servers using environment variables when the containers change, but I'm unsure where to start and can't find a good example.

There are a couple of ways to achieve this. What you are referring to is usually called service discovery, and it comes in many forms. I'll describe two that I have used before.
The first and simplest one (which works fine for single servers or only discovering containers locally on one server) is a local proxy which makes use of the Docker socket or API. https://github.com/jwilder/nginx-proxy is one of the popular ones and should work well for prototyping scalable services in Compose.
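As a minimal sketch of that approach, a Compose file using jwilder/nginx-proxy could look roughly like this; the proxy watches the Docker socket and regenerates its upstream config as containers carrying a VIRTUAL_HOST variable come and go (the image names and hostnames for the app services are placeholders):

```yaml
# docker-compose.yml -- sketch, not a production config
version: "2"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # lets the proxy watch containers starting and stopping
      - /var/run/docker.sock:/tmp/docker.sock:ro

  website:
    image: my-node-website        # hypothetical image
    environment:
      - VIRTUAL_HOST=www.example.com

  api:
    image: my-python-api          # hypothetical image
    environment:
      - VIRTUAL_HOST=api.example.com
```

Scaling then becomes `docker-compose scale website=3`; the proxy picks up the new containers without any manual config edits.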
Another way (which is more multi-host friendly but more complicated) would be registering services in a registry (such as etcd or Consul) and then dynamically writing out the configuration. To do this, you can use a registration system (such as https://github.com/gliderlabs/registrator) to register the containers and their ports. Then your proxy or application can consume a configuration file written out using a template system like https://github.com/kelseyhightower/confd.
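As a rough sketch of the confd side, a template that renders registered backends from etcd into an nginx upstream block could look like this; the /services/website key layout and file paths are assumptions (they depend on how registrator is configured):

```
# /etc/confd/templates/upstream.tmpl
upstream website {
{{range getvs "/services/website/*"}}    server {{.}};
{{end}}}
```

```toml
# /etc/confd/conf.d/upstream.toml
[template]
src = "upstream.tmpl"
dest = "/etc/nginx/conf.d/upstream.conf"
keys = ["/services/website"]
reload_cmd = "nginx -s reload"
```

confd watches the keys, rewrites the destination file when they change, and runs the reload command, so nginx picks up new containers shortly after they register.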

Related

Do I need a service for exposing every app running in a pod?

I'm planning to build a website to host static files. Users will upload their files, and I deploy a bunch of nginx Deployments to serve them on a Kubernetes node. My main goal is that, at some point, users will deploy their apps to a subdomain like my-blog-app.mysite.com. After some time, users can use custom domains.
I understand that when I deploy an nginx image in a pod, I have to create a Service to expose port 80 (or 443) to the internet via a load balancer.
I also read about Ingress, which looks like what I need, but I don't think I fully understand the concept.
My question is: if I have, for example, 500 nginx pods running (each a different website), do I need a Service for every pod on that node (in this case, 500 Services)?
You are looking for https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting.
With this type of Ingress, you route the traffic to the different nginx instances, based on the Host header, which perfectly matches your use-case.
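A name-based Ingress for two of those sites could look roughly like this (the hostnames and Service names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user-sites
spec:
  rules:
    - host: my-blog-app.mysite.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-blog-app     # Service in front of that site's nginx pods
                port:
                  number: 80
    - host: other-app.mysite.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: other-app
                port:
                  number: 80
```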
In any case, yes: with your current architecture you need a Service for each pod. Have you considered a different approach, such as a general listener (nginx instances) that serves the correct content based on authorization or something similar?

Nginx multiple instances

I have an EC2 instance on AWS, and I have installed nginx and created multiple server blocks to serve multiple applications.
However, if nginx goes down, all the applications go down as well.
Is there any way to set up a separate nginx instance for each application, so that if one nginx instance goes down, it won't affect the other instances?
Yes, it's technically possible to install two nginx instances on the same server, but I would do it another way.
1 - You could just create multiple EC2 instances. The downside of this approach is that it may get harder to maintain, depending on how many instances you want.
2 - You could use Docker (or any of its alternatives) to create containers and solve this problem. You can create as many containers as you need, with totally separate nginx instances, as sketched below. Docker is simple enough to learn and start using in no time; the downside is that you need to put in a little effort to learn it, and your main EC2 instance needs enough resources to share between the containers.
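A minimal sketch of option 2, with one nginx container per application (the config file names and host ports are placeholders):

```yaml
# docker-compose.yml -- one isolated nginx per application, sketch only
version: "2"
services:
  app1-nginx:
    image: nginx
    ports:
      - "8080:80"
    volumes:
      - ./app1.conf:/etc/nginx/conf.d/default.conf:ro

  app2-nginx:
    image: nginx
    ports:
      - "8081:80"
    volumes:
      - ./app2.conf:/etc/nginx/conf.d/default.conf:ro
```

If app1-nginx crashes, app2-nginx keeps serving its application unaffected.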
I hope it helps!
If it's possible, use an ELB instead of nginx; this will be more convenient. But if an ELB doesn't work for you, nginx already supports a high-availability mode to avoid the single point of failure you mentioned.
It's documented officially here:
https://www.nginx.com/products/nginx/high-availability/
It's better than having one nginx machine for every application and guarantees more availability.
The type of redundancy that you're looking for is usually provided by a load balancer or reverse proxy in practice. There are a lot of ways this can be achieved architecturally, but generally speaking it looks like this:
Run multiple nginx instances with the same server definitions, and a balancer like haproxy in front. This allows the balancer to check which nginx instances are online and send requests to each in turn. Then, if an instance goes down, or the orchestrator is bringing up a new one, requests only get sent to the online ones.
If requests need to be distributed more heavily, you could have nginx instances for each server, with a reverse proxy directed at each instance or node.
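As a rough sketch of the first option, an haproxy front end health-checking two nginx instances might look like this (the backend addresses are placeholders):

```
# haproxy.cfg -- sketch; assumes two nginx instances at 10.0.0.11/12
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend nginx_pool

backend nginx_pool
    balance roundrobin
    option httpchk GET /
    server nginx1 10.0.0.11:80 check   # 'check' enables health probes
    server nginx2 10.0.0.12:80 check
```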
There may be some overhead with nginx if you do it that way, and your setup may become difficult to maintain later because there are many nginx instances: for example, if you need to update nginx or add some modules, it will be harder.
How about trying an EC2 Auto Scaling group, even with a minimum of 1 and a desired capacity of 1? That way a new instance is launched automatically if the current one goes down.
If you need to preserve some settings, like the Elastic IP of your EC2 instance, look into EC2 instance recovery: it will restore your setup, unlike the Auto Scaling group.
But it would be better to use a load balancer like an ALB with at least 2 instances. Using an ALB can also improve your security posture. You may also want to read about ALB target groups; they give you more options for solving your current problem.

Receiving and serving static files in kubernetes

In the pre-k8s, pre-container world, I have a cloud VM that runs nginx and lets an authorized user scp new content into the webroot.
I'd like to build a similar setup in a k8s cluster to host static files, with the goal that:
An authorized user can scp new files in
These files are statically served on the web
These files are kept in a persistent volume so they don't disappear when things restart
I can't seem to figure out a viable combination of storage class + containers to make this work. I'd definitely appreciate any advice!
Update
What I didn't realize is that two containers running in the same pod can both have the same gcePersistentDisk mounted as read/write. So my solution in the end looks like one nginx container running in the same pod as an sshd container that can write to the nginx webroot. It's been working great so far.
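For reference, the resulting pod could look roughly like this; the sshd image and disk name are placeholders, and the nginx mount path is the stock image's default webroot:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-site
spec:
  volumes:
    - name: webroot
      gcePersistentDisk:
        pdName: static-content-disk    # hypothetical pre-created GCE disk
        fsType: ext4
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: webroot
          mountPath: /usr/share/nginx/html
    - name: sshd
      image: my-sshd-image             # hypothetical image running an ssh daemon
      ports:
        - containerPort: 22
      volumeMounts:
        - name: webroot
          mountPath: /home/uploader/webroot   # scp target, same disk as the webroot
```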
I think you're trying to fit a square peg into a round hole here.
Essentially, you're building an FTP server (albeit with scp rather than FTP).
Kubernetes is designed to orchestrate containers.
The two don't really overlap at all.
Now, if you're really intent on doing this, you could hack something together by creating a Docker container running an ssh daemon plus nginx under supervisor. The layer you need to concentrate on is replicating your existing VM setup in a Docker container. You can then run it on Kubernetes and attach a persistent volume.
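A sketch of such a container, with both sshd and nginx kept in the foreground by supervisord (the package names follow Debian; the supervisord config is illustrative):

```dockerfile
# Dockerfile -- sketch: sshd + nginx under supervisor
FROM debian:stable
RUN apt-get update && \
    apt-get install -y nginx openssh-server supervisor && \
    mkdir -p /var/run/sshd
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22 80
# -n keeps supervisord in the foreground as PID 1
CMD ["/usr/bin/supervisord", "-n"]
```

with a supervisord.conf along these lines:

```
[program:sshd]
command=/usr/sbin/sshd -D

[program:nginx]
command=nginx -g "daemon off;"
```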

How to organize architecture of an isomorphic app using docker?

I am developing an isomorphic app. The key point here is that the JS code on the frontend server and on the client is the same.
Suppose we have the following services:
frontend
backend
comments
database
Of course, each of these services lives in its own Docker container.
And there is a need to access the backend and comments services from the client side (as api.app.com and comments.app.com respectively).
It seems pretty reasonable to use nginx as a reverse proxy here. So these are the new containers to be added (see the Compose sketch after this list):
nginx
consul
consul-template
registrator
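A Compose sketch of that discovery stack might look roughly like this; the images are the commonly used ones for these tools, and the consul-template wiring is simplified to a comment:

```yaml
# docker-compose.yml -- service-discovery stack, sketch only
version: "2"
services:
  consul:
    image: consul
    command: agent -server -bootstrap -client 0.0.0.0

  registrator:
    image: gliderlabs/registrator
    command: consul://consul:8500
    volumes:
      # registrator watches the Docker socket and registers containers in Consul
      - /var/run/docker.sock:/tmp/docker.sock
    depends_on:
      - consul

  nginx:
    image: nginx
    ports:
      - "80:80"
    # consul-template would run alongside nginx (or in the same container),
    # rendering upstream/server blocks from the Consul catalog and reloading nginx
```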
And the last problem is resolving *.app.com to nginx. How can I achieve this without buying the app.com domain? One solution, of course, is to add DNS to each container and to the dev host, but what Docker container should I use as a DNS server?
Or maybe there is a better architecture?

Modifying nginx config directly in memory?

This might be a very silly question but I'll still ask it.
Nginx reads the nginx.conf file and keeps the information in memory until you do an 'nginx -s reload'.
Is there a way I can modify the nginx configuration directly in memory? I need to reload multiple times per minute, and the config file can be huge.
Basically, the problem I'm trying to solve is that I have multiple Docker containers coming up and down dynamically on a set of host machines. Every time a container comes up, it has a different IP and open port (an application design constraint), and I'm thinking of using nginx as a reverse proxy. What should I do to solve this problem, considering that the final product might have 3,000-5,000 containers running on a cluster of hosts, with containers launched/destroyed at a rate of around 100 per second? I need a fast way to make sure routing happens properly.
Hmm, probably not: nginx loads its config into multiple worker processes, so trying to change it on the fly in memory does not look like a good idea.
What is your goal? You seem to need dynamic routing or some other sort of per-request treatment. You should instead look at:
nginx directives and modules such as eval
Lua scripting
nginx module dev (in C/C++)
These would allow you to do more or less whatever you want: you can read some config from a DB like Redis and change the behavior of your code according to the value stored there.
For example, you could do a lot just by reading a value from Redis and then using an if directive in your nginx config file. See "How can I get the value from Redis and put it in a variable in NGiNX?" for getting a Redis value into nginx with the eval module.
UPDATE:
For dynamic IPs in nginx, you should look at "Dynamic proxy_pass to $var with nginx 1.0".
So I would suggest that you:
have a process that writes the IP addresses of your containers to Redis
read them in nginx with the eval and redis modules
use the value to proxy
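The dynamic proxy_pass trick referenced above boils down to the fragment below: when proxy_pass contains a variable, nginx resolves the target per request instead of once at config load time (the resolver address and backend hostname are placeholders):

```nginx
# fragment of nginx.conf (http context) -- sketch only
resolver 10.0.0.2 valid=5s;        # placeholder: DNS that tracks your containers

server {
    listen 80;

    location / {
        set $backend "app.service.internal";   # placeholder hostname
        # the variable forces a runtime lookup via the resolver above,
        # so backends can change without reloading nginx
        proxy_pass http://$backend:8080;
    }
}
```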
