Consul in a Docker-based Microservices architecture - networking

We are working on switching over from a monolithic application to microservices.
Each microservice is going to be running on Docker through Amazon ECS.
We've decided to use Consul for service discovery. We have 3 Consul servers running on EC2 instances inside the VPC.
My question is as follows:
How/where do I start the Consul agent for each microservice? Do I run another container on each instance (through Docker Compose) with Consul inside? Or do I somehow run a Consul agent inside the already existing Docker container for each microservice?
Attached is a rough representation of my situation.
Should the Consul Client (in yellow) be in its own Docker Container or inside the Node.js container?

Consul is another service, and I wouldn't deploy it inside the container of my microservice. In a large-scale scenario, I'd deploy several Consul containers: some would run the agent under Server mode (think of them as Masters), and some would run it under Client mode (think of them as Slaves).
I wouldn't deploy the agents running under client mode as part of my application's containers, because:
Isolating them means they can be stopped individually. Putting them together means that whenever I stop my application's container due to a version upgrade or a failure, I'd be needlessly stopping the Consul agent running inside it. The same goes the other way around: stopping the Consul agent would stop my running application. This unneeded coupling isn't beneficial.
Isolating them means they can be scaled separately. I may need to scale my microservice and deploy more instances of it. If the container also contains Consul client agents, then scaling my microservice would end up scaling Consul as well. Or the other way around: I may need to scale Consul without scaling my microservice.
Isolating them is easier in terms of Docker container images. I can keep using the official Consul image and upgrade without much hassle. Putting Consul and my microservice together would mean that upgrading Consul requires me to modify the container image by myself.
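For example, a minimal sketch of that layout on a single ECS container instance might look like this (image tags, container names, and IP addresses below are placeholders, assuming the official consul image, host networking for the agent, and three Consul servers at 10.0.0.10-12):

# one Consul agent in client mode per ECS instance, in its own container
docker run -d --name=consul-client --net=host consul agent \
  -bind=10.0.1.15 -client=0.0.0.0 \
  -retry-join=10.0.0.10 -retry-join=10.0.0.11 -retry-join=10.0.0.12

# the Node.js microservice stays in its own container and only talks to the local agent
docker run -d --name=my-service -e CONSUL_HTTP_ADDR=10.0.1.15:8500 my-service-image

The microservice container only needs the local agent's address (here passed via an environment variable the app is assumed to read); it never runs a Consul process itself.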

Related

Migrating to ECS Fargate from EKS

I'm currently in the process of migrating 3 applications from Elastic Kubernetes Service (EKS) to ECS Fargate. Each application is built with Node.js. The current setup has only 1 load balancer in front of one application, and the other two applications are accessed through that one load balancer. This is currently how all three applications are accessed:
first_app.example.com
first_app.example.com/second_app
first_app.example.com/third_app
The front end of each application is powered by an nginx proxy in EKS. I'm not entirely sure I need nginx in ECS Fargate, because the Application Load Balancer I'm planning to use will have an SSL cert integrated with it for HTTP-to-HTTPS redirects. I'm a little unclear on how to approach moving these applications to Fargate. Additionally, the third app has 3 additional functions:
Apollo GraphQL (abstraction layer between the front end & back end)
CSV
File Manager
This functionality also needs to be implemented on the Fargate side.
Currently I have set up one ECS Fargate cluster, one ECS service, and one task definition. The task definition currently has the following 7 ECR images:
app_one_front_end
app_two_front_end
app_three_front_end
app_three_csv_job
app_three_file_manager_job
app_three_graphql
nginx ??
All of these images are stored in ECR. However, I don't believe I need nginx in this Fargate cluster.
I'm a little unsure how to approach the architecture for this set of applications. It seems I can only have one task definition running in a service, which is why all containers were put into one task definition. The service can then be associated with an Application Load Balancer where I set path-based routing to access each application.
Any advice on how to approach this migration would be appreciated.
Thanks!
Each Kubernetes Replica Set should be converted to an ECS Service. Each Kubernetes Pod would be converted to an ECS Task.
Kubernetes Replica Set == ECS Service
Kubernetes Pod == ECS Task
If you had multiple Replica Sets in Kubernetes so that you could scale your pods independently, then to get the same scalability in ECS you would configure them as separate services with independent scaling configurations.
You are correct in that you probably don't need the Nginx container in ECS.
It seems I can only have one task definition running in a service, which is why all containers were put into one task definition.
Services can communicate with each other. You would enable ECS Service Discovery to facilitate that. However it is fine to have them all in the same Task/Service if they don't need to be scaled out independently.
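As a rough sketch of what enabling Service Discovery looks like with the AWS CLI (all names, IDs, and ARNs below are placeholders, and the commands are abbreviated, e.g. the Fargate network configuration is omitted):

# create a private DNS namespace and a discovery service for one app
aws servicediscovery create-private-dns-namespace --name local --vpc vpc-0123456789abcdef0
aws servicediscovery create-service --name app-three-graphql \
  --dns-config 'NamespaceId="ns-xxxxxxxx",DnsRecords=[{Type="A",TTL="60"}]' \
  --health-check-custom-config FailureThreshold=1

# attach the registry when creating the ECS service, so tasks register as app-three-graphql.local
aws ecs create-service --cluster my-cluster --service-name app-three-graphql \
  --task-definition app-three-graphql --desired-count 2 \
  --service-registries 'registryArn=arn:aws:servicediscovery:region:account:service/srv-xxxxxxxx'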
Also, multiple services can be associated with a single Application Load Balancer by creating different listener rules in the load balancer that map to different Target Groups, if that is something you need. You might need to have multiple Target Groups even if you only have a single ECS Service, because you will need to map different load balancer listeners to different containers in your task. That basically allows the Application Load Balancer to perform the job that Nginx was doing in Kubernetes.
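A hedged sketch of the path-based routing piece (the listener and target group ARNs, priorities, and paths are placeholders):

# one listener rule per app, each forwarding to its own target group
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:region:account:listener/app/my-alb/xxx/yyy \
  --priority 10 \
  --conditions Field=path-pattern,Values='/second_app*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:account:targetgroup/app-two/zzz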

Airflow and Docker Containers

I am running Airflow in containers on AWS ECS: 1 scheduler, 2 web servers, and multiple Celery workers.
From what I have seen, the only thing that is affected when running them in containers is that the web servers are unable to reach the workers on port 8793 to retrieve their logs.
Is that the only thing that is affected when running these in containers?
Yes, because you can only map one port from the Docker container to the host instance. I use a similar setup, and logs are the only major issue I have faced. There are different ways to solve this, though: you can use other logging services in the container that push the logs to CloudWatch, Fluentd, etc.
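For example, a hedged sketch of shipping task logs to S3 instead of serving them on port 8793 (assuming Airflow 1.10-style config keys; the bucket, connection id, and image name are placeholders):

docker run -d \
  -e AIRFLOW__CORE__REMOTE_LOGGING=True \
  -e AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER=s3://my-airflow-logs/ \
  -e AIRFLOW__CORE__REMOTE_LOG_CONN_ID=aws_default \
  my-airflow-worker-image

With remote logging enabled on both the workers and the web servers, the web servers read task logs from S3 (or CloudWatch) rather than pulling them from the worker containers.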

Configure the network interfaces of the host a docker container is running on

I have a web service (a web page) that allows the user to configure the network interfaces of the host (it is basically a web page used to configure the host NICs). Now we are thinking of moving this service into a Docker container. That means the software running inside the container should be able to modify the configuration of the network interfaces of the host the container is running on.
I tried starting a Docker container with --network=host and used the ip command to modify the interface configuration, but all I can (obviously?!?) get is permission denied.
This probably makes sense as it might be an issue from a security point of view, not to mention you are changing the network configuration seen by other potentially running containers, but I'm wondering if there is any Docker configuration/setting that might allow me to perform the task entirely inside the Docker container (at my own risk).
By that I mean I can think of at least one workaround: have a service running on the host (outside the Docker container) and have the container and the service talk to each other with some IPC mechanism.
This is a solution, but not an optimal one, as it breaks the Docker paradigm of having all your stuff running inside the container. Moreover, it would mean that when we upgrade the container with a new version of the software, we might also need to upgrade the module outside the container.
Try running your container in privileged mode to remove the container restrictions:
docker run --net=host --privileged ...
If that solves your issue, you can likely replace the --privileged with --cap-add and various kernel capabilities. The first privilege that comes to mind is NET_ADMIN, which you could try with:
docker run --net=host --cap-add NET_ADMIN ...
See this section of the docker run docs for more details on configuring privileges.
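As a quick check that the capability is enough for your use case (the interface name and MTU below are just examples):

docker run --rm --net=host --cap-add NET_ADMIN alpine ip link set eth0 mtu 1400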

Automatically append docker container to upstream config of nginx load balancer

I'm running Docker Compose (v2) and have a Node service (website) and a Python-based API deployed with nginx sitting in front of them.
One thing I would like to do is be able to scale the services by adding more containers. If I know ahead of time how many containers I will have, I can hardcode the nginx upstream config with references to the IPs of the containers that Docker makes available. However, the problem is that I want the nginx upstream config to be dynamic, e.g. if I add another Docker container, it simply appends the container's address to the list of IPs in the upstream block.
My idea was to create a script that automatically appends the upstream servers using env variables when the containers change, but I'm unsure where to start and can't find a good example.
There are a couple ways to achieve this. What you are referring to is usually called service discovery and comes in many forms. I'll describe two of them that I have used before.
The first and simplest one (which works fine for single servers or only discovering containers locally on one server) is a local proxy which makes use of the Docker socket or API. https://github.com/jwilder/nginx-proxy is one of the popular ones and should work well for prototyping scalable services in Compose.
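A hedged sketch of the nginx-proxy pattern (host names and application image names are placeholders):

# the proxy watches the Docker socket and regenerates its upstream config automatically
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
# each scaled container just advertises the host it should be routed under
docker run -d -e VIRTUAL_HOST=api.example.com my-python-api-image
docker run -d -e VIRTUAL_HOST=www.example.com my-node-website-image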
Another way (which is more multi-host friendly but more complicated) would be registering services in a registry (such as etcd or Consul) and then dynamically writing out the configuration. To do this, you can use a registration system (such as https://github.com/gliderlabs/registrator) to register the containers and their ports. Then your proxy or application can consume a configuration file written out using a template system like https://github.com/kelseyhightower/confd.
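For the registration side, a rough sketch of running registrator against an existing Consul agent (the Consul address is a placeholder):

docker run -d --name=registrator --net=host \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator:latest consul://10.0.0.10:8500

consul-template (or confd) then watches the registry and rewrites the nginx upstream block whenever containers come and go.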

How to organize architecture of an isomorphic app using docker?

I am developing an isomorphic app. The key point here is that the JS code on the frontend server and on the client is the same.
Suppose we have the following services:
frontend
backend
comments
database
Of course each of these services lives in its own Docker container.
And there is a need to access backend and comments services from client side (as api.app.com and comments.app.com respectively).
It seems pretty reasonable to use nginx as reverse proxy here. So these are new containers to be added:
nginx
consul
consul-template
registrator
And the last problem is resolving *.app.com to nginx. How can I achieve this without buying the app.com domain? Of course one solution is to add a DNS server for each container and for the dev host. But which Docker container should I use as a DNS server?
Or maybe there is better architecture?
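For example, the kind of thing I'm imagining is a small dnsmasq container that answers *.app.com with the nginx container's IP (a rough sketch; the image choice and the IP are just placeholders):

docker run -d --name dns --cap-add=NET_ADMIN -p 53:53/udp -p 53:53/tcp \
  andyshinn/dnsmasq --address=/app.com/172.18.0.2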
