How to organize the architecture of an isomorphic app using Docker and nginx?

I am developing an isomorphic app. The key point here is that the JS code on the frontend server and on the client is the same.
Suppose we have the following services:
frontend
backend
comments
database
Of course, each of these services lives in its own Docker container.
The backend and comments services need to be accessible from the client side (as api.app.com and comments.app.com respectively).
It seems pretty reasonable to use nginx as a reverse proxy here. So these are the new containers to be added:
nginx
consul
consul-template
registrator
The last problem is resolving *.app.com to nginx. How can I achieve this without buying the app.com domain? Of course, one solution is to add DNS entries to each container and to the dev host. But which Docker container should I use as a DNS server?
Or maybe there is a better architecture?
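
For the wildcard DNS part, one common approach is a small dnsmasq container that both the dev host and the other containers use as their nameserver. A minimal sketch, assuming a hypothetical fixed IP of 172.20.0.10 for the nginx container on the Docker network:

# dnsmasq.conf — resolve every *.app.com name to the nginx container
# (172.20.0.10 is a made-up static IP assigned to nginx on the compose network)
address=/app.com/172.20.0.10

With the dev host's resolver pointed at the dnsmasq container, any subdomain of app.com (api.app.com, comments.app.com, ...) lands on nginx, which can then route by Host header.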

Related

ECS Nginx network setup

I have 3 containers on ECS: web, api and nginx. Basically nginx is proxying traffic to web and api containers:
upstream web {
    server web-container:3000;
}

upstream api {
    server api-container:3001;
}
But every time I redeploy web or api, they change their IPs, so I need to redeploy nginx afterwards to make it pick up the new IPs.
Is there a way to avoid this, so that I could update, say, the api service and the nginx service would automatically proxy to the correct IP address?
I assume these containers belong to 3 different task definitions and ultimately 3 different tasks (or better, 3 different services).
If that is the setup, then you want to use service discovery for this. It only works with ECS services, and the idea is that you create 3 distinct services, each with 1+ tasks in it. You give each service a name (e.g. nginx, web, api) and each container in them will be able to resolve the other containers by pointing to the FQDN (e.g. api.local). When your container in the nginx service tries to connect to api.local, service discovery will resolve that name to the IP of one of the tasks in the ECS service api.
If you want to see an example of how this is set up, you can look at this demo app and particularly at this CloudFormation template.
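
As a rough illustration of how nginx can follow those service-discovery names without a redeploy (assuming Cloud Map names web.local/api.local and the VPC DNS endpoint 169.254.169.253, both of which depend on your setup):

# Re-resolve names at request time instead of caching the IPs nginx saw at startup
resolver 169.254.169.253 valid=10s;

server {
    listen 80;

    location /api/ {
        # Using a variable in proxy_pass forces per-request DNS resolution
        set $api_upstream http://api.local:3001;
        proxy_pass $api_upstream;
    }

    location / {
        set $web_upstream http://web.local:3000;
        proxy_pass $web_upstream;
    }
}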

Stacking nginx on top of the JHipster gateway

We are creating an application following a microservice architecture using JHipster, and now someone suggested putting nginx in front of the JHipster gateway so that user access goes through nginx instead of directly through the gateway. My question: is there any benefit in doing this? From my perspective we are just proxying the requests twice, nothing else. Or am I missing something?
It could be useful for:
load balancing multiple instances of your gateway (see the sketch after this list)
restricting external access to some URLs if you have internal access to your gateway
blue/green deployments
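
A minimal load-balancing sketch for the first point, assuming two gateway instances on the default JHipster port 8080 (the gateway-1/gateway-2 hostnames are placeholders):

# Round-robin across two JHipster gateway instances
upstream jhipster_gateway {
    server gateway-1:8080;
    server gateway-2:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://jhipster_gateway;
        # Preserve the original host and client IP for the gateway
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}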

Do I need a service for exposing every app running in a pod?

I'm planning to build a website to host static files. Users will upload their files, and I will deploy a bunch of Deployments with nginx images to a Kubernetes node. My main goal is that at some point users will be able to deploy their apps to a subdomain like my-blog-app.mysite.com. Later, users will be able to use custom domains.
I understand that when I deploy an nginx image in a pod, I have to create a Service to expose port 80 (or 443) to the internet via a load balancer.
I also read about Ingress; it looks like what I need, but I don't think I fully understand the concept.
My question is: if I have, for example, 500 nginx pods running (each a different website), do I need a Service for every pod on that node (in this case, 500 Services)?
You are looking for https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting.
With this type of Ingress, you route traffic to the different nginx instances based on the Host header, which perfectly matches your use case.
In any case, yes, with your current architecture you need a Service for each pod, though a plain ClusterIP Service is enough since only the Ingress controller has to be exposed to the internet. Have you considered a different approach, like having a general listener (a pool of nginx instances) that serves the correct content based on authorization or something similar?
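
For illustration, a name-based virtual hosting Ingress could look roughly like this (hosts and Service names are hypothetical):

# One Ingress routes each subdomain to its own ClusterIP Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user-sites
spec:
  rules:
  - host: my-blog-app.mysite.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-blog-app    # Service in front of that user's nginx pod
            port:
              number: 80
  - host: another-site.mysite.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: another-site
            port:
              number: 80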

Automatically append docker container to upstream config of nginx load balancer

I'm running Docker Compose (v2) and have a Node service (website) and a Python-based API deployed with nginx sitting in front of them.
One thing I would like to do is to be able to scale the services by adding more containers. If I know ahead of time how many containers I will have, I can hardcode the nginx upstream config with references to the IPs of the containers, which Docker makes available. However, I want the nginx upstream config to be dynamic: if I add another container, it should simply append the container's address to the list of IPs in the upstream block.
My idea was to create a script which automatically appends the upstream servers using env variables when the containers change, but I'm unsure where to start and can't find a good example.
There are a couple of ways to achieve this. What you are referring to is usually called service discovery, and it comes in many forms. I'll describe two of them that I have used before.
The first and simplest one (which works fine for single servers or only discovering containers locally on one server) is a local proxy which makes use of the Docker socket or API. https://github.com/jwilder/nginx-proxy is one of the popular ones and should work well for prototyping scalable services in Compose.
Another way (which is more multi-host friendly but more complicated) would be registering services in a registry (such as etcd or Consul) and then dynamically writing out the configuration. To do this, you can use a registration system (such as https://github.com/gliderlabs/registrator) to register the containers and their ports. Then your proxy or application can consume a configuration file written out using a template system like https://github.com/kelseyhightower/confd.
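
A minimal Compose sketch of the first approach with jwilder/nginx-proxy (image names for the app services are placeholders):

# docker-compose.yml
version: "2"

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # The proxy watches the Docker socket and rewrites its upstream
      # blocks whenever containers with a VIRTUAL_HOST variable start or stop
      - /var/run/docker.sock:/tmp/docker.sock:ro

  website:
    image: my-node-site          # placeholder for your Node image
    environment:
      - VIRTUAL_HOST=www.example.test

  api:
    image: my-python-api         # placeholder for your Python image
    environment:
      - VIRTUAL_HOST=api.example.test

Scaling then becomes docker-compose scale api=3; the new containers share the same VIRTUAL_HOST, and nginx-proxy load-balances across them automatically.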

Configuring an nginx web server with multiple app servers on an AWS stack

I am a DevOps guy, and presently I am running my Ruby on Rails application on an Ubuntu EC2 instance where the app and the web server reside in the same box, while we use a MySQL RDS cluster. I can see a lot of spikes when traffic to the site increases, so I am planning to change the setup: the nginx web server on one instance and the web app on a separate instance. But this needs a load balancer, which should reside in the nginx box, and once traffic goes up the nginx instance can be configured to auto-scale. What about the app server instance? It can be configured to auto-scale too, but it needs to attach itself to the web server, and the web server needs to discover the newly created app server. How can I achieve this? Kindly help me out.
Since you are using one single web server at the moment, a transition to nginx as a static web server and proxy for a backend web server on another instance really makes sense and will give you a performance boost.
However, I am not sure you really need autoscaling. Autoscaling mostly makes sense if you want to react to fast traffic spikes. If you have a more or less continuous workload that grows over time, it should be easier to manually launch another backend server and add it to the nginx config. If this does not work for you, you can still look at Amazon's Elastic Load Balancing and Auto Scaling afterwards.
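
To make the split concrete, the nginx side could look roughly like this (the private IPs and asset path are made up; scaling out manually means adding one more server line):

upstream rails_app {
    server 10.0.1.10:3000;       # app server instance 1
    server 10.0.1.11:3000;       # app server instance 2, added when scaling out
}

server {
    listen 80;

    # Serve precompiled assets straight from the nginx box
    location /assets/ {
        root /var/www/app/public;
        expires max;
    }

    # Everything else goes to the Rails app servers
    location / {
        proxy_pass http://rails_app;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}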
