I have the following situation:
My application consists of a single web service that calls an external API (say, some SaaS service, Elasticsearch, or similar). For testing beyond unit tests, we want to control the external service and later also inject faults. The application and the "mocked" API are dockerized, and now I want to use docker-compose to spin up all the containers.
Because the application has several addresses hardcoded (e.g. the hostnames of external services), I cannot change them and need to work around that.
The service container makes a call to http://external-service.com/getsomestuff.
My idea was to use Docker features to reroute all outgoing traffic for http://external-service.com/getsomestuff to the mock container without changing the URL.
My docker-compose.yaml looks like:
version: '2'
services:
  service:
    build: ./service
    container_name: my-service1
    ports:
      - "5000:5000"
    command: /bin/sh -c "python3 app.py"

  api:
    build: ./api-mock
    container_name: my-api-mock
    ports:
      - "5001:5000"
    command: /bin/sh -c "python3 app.py"
Finally, I have a driver that just does the following:
curl -XGET localhost:5000/
curl -XPUT localhost:5001/configure?delay=10
curl -XGET localhost:5000/
where the second curl just sets the delay in the mock to 10 seconds.
There are several options I have considered:
Using iptables-fu (would require modifying Dockerfiles to install it)
Using docker networks (this is really unclear to me)
Is there any simple option to achieve what I want?
Edit:
For clarity, here is the relevant part of the service code:
import requests

@app.route('/')
def do_stuff():
    # hardcoded hostname that needs to be redirected to the mock
    r = requests.get('http://external-service.com/getsomestuff')
    return process_api_response(r.text)
Docker runs an internal DNS server for user-defined networks. Any unknown host lookups are forwarded to your normal DNS servers.
Version 2+ compose files automatically create a network for the project, so there are a number of ways to control the hostnames it resolves.
The simplest way is to name your container with the hostname:
version: "2"
services:
external-service.com:
image: busybox
command: sleep 100
ping:
image: busybox
command: ping external-service.com
depends_on:
- external-service.com
If you want to keep the container names, you can use links:
version: "2"
services:
api:
image: busybox
command: sleep 100
ping:
image: busybox
links:
- api:external-service.com
command: ping external-service.com
depends_on:
- api
Or network aliases
version: "2"
services:
api:
image: busybox
command: sleep 100
networks:
pingnet:
aliases:
- external-service.com
ping:
image: busybox
command: ping external-service.com
depends_on:
- api
networks:
- pingnet
networks:
pingnet:
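Applied to the compose file from the question, that could look roughly like this (a sketch; mocknet is an assumed network name, and the mock still has to listen on the port the hardcoded URL implies, port 80 here, because the alias only changes name resolution, not ports):

version: '2'
services:
  service:
    build: ./service
    ports:
      - "5000:5000"
    command: /bin/sh -c "python3 app.py"
    networks:
      - mocknet

  api:
    build: ./api-mock
    command: /bin/sh -c "python3 app.py"
    networks:
      mocknet:
        aliases:
          # the service container resolves this hostname to the mock
          - external-service.com

networks:
  mocknet: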
I'm not entirely clear on what problem you're trying to solve, but if you're trying to make external-service.com inside the container direct traffic to your "mock" service, I think you should be able to do that using the extra_hosts directive in your docker-compose.yml file. For example, if I have this:
version: "2"
services:
example:
image: myimage
extra_hosts:
- google.com:172.23.254.1
That will result in /etc/hosts in the container containing:
172.23.254.1 google.com
And attempts to access http://google.com will hit my web server at 172.23.254.1.
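To verify the override from inside the container, something along these lines should work (using the example service above; this assumes ping is available in the image):

# show the container's /etc/hosts; the extra_hosts entry should be listed
docker-compose exec example cat /etc/hosts
# check that google.com now resolves to the overridden address
docker-compose exec example ping -c 1 google.com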
I was able to solve this with links; is there a way to do it with networks in docker-compose?
version: '3'
services:
  MOCK:
    image: api-mock:latest
    container_name: api-mock-container
    ports:
      - "8081:80"

  api:
    image: my-service1:latest
    links:
      - MOCK:external-service.com
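To answer the networks question: the same setup can be expressed with a network alias instead of links, along the lines of the aliases example above (a sketch; mocknet is an assumed network name):

version: '3'
services:
  MOCK:
    image: api-mock:latest
    ports:
      - "8081:80"
    networks:
      mocknet:
        aliases:
          # api resolves this hostname to the MOCK container
          - external-service.com

  api:
    image: my-service1:latest
    networks:
      - mocknet

networks:
  mocknet: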
Related
I have two separate folders, one for the backend and one for the frontend services:
backend/docker-compose.yml
frontend/docker-compose.yml
The backend is a headless WordPress installation on nginx, intended to serve the frontend as an API. The frontend runs on Next.js. Here are the two docker-compose.yml files:
backend/docker-compose.yml
version: '3.9'
services:
  nginx:
    image: nginx:latest
    container_name: my-app-nginx
    ports:
      - '80:80'
      - '443:443'
      - '8080:8080'
    ...
    networks:
      - internal-network

  mysql:
    ...
    networks:
      - internal-network

  wordpress:
    ...
    networks:
      - internal-network

networks:
  internal-network:
    external: true
frontend/docker-compose.yml
version: '3.9'
services:
  nextjs:
    build:
      ...
    container_name: my-app-nextjs
    restart: always
    ports:
      - 3000:3000
    networks:
      - internal-network

networks:
  internal-network:
    driver: bridge
    name: internal-network
In the frontend I use the fetch API in Next.js as follows:
fetch('http://my-app-nginx/wp-json/v1/enpoint', ...)
I also tried ports 80 and 8080, without success.
The sequence of commands I run is:
docker network create internal-network
in the backend/ folder, docker-compose up -d (all backend containers run fine; I can fetch data from the WordPress API with Postman)
in the frontend/ folder, docker-compose up -d fails with the error Error: getaddrinfo EAI_AGAIN my-app-nginx
I am not a very experienced Docker user, so I might be missing something here, but I understand there might be a networking issue between the containers. I have read many answers on this topic but could not figure it out.
Any recommendations?
Just to add a proper answer:
Generally you should not need to run multiple docker-compose up -d commands.
If you want to combine two separate docker-compose configs and run them as one (slightly more preferable), you can use the extends keyword as described in the docs (a minimal sketch follows after this list).
However, I would suggest that you treat it as a single docker-compose project which can itself have multiple nested git repositories:
Example SO answer - Git repository setup for a Docker application consisting of multiple repositories
You can keep your code in a mono-repo or multiple repos, up to you
A real working example that backs up this approach:
headless-wordpress-nextjs-starter-kit and its docker-compose.yml
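For reference, a minimal sketch of the extends approach mentioned above (file paths and service names are assumptions, and extends support depends on your Compose version):

# frontend/docker-compose.yml (sketch)
version: '3.9'
services:
  nextjs:
    build: .
    networks:
      - internal-network

  # reuse the backend's nginx definition so both services run in one
  # Compose project and share one Compose-managed network
  nginx:
    extends:
      file: ../backend/docker-compose.yml
      service: nginx
    networks:
      - internal-network

networks:
  internal-network: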
I found this thread:
Communication between multiple docker-compose projects
Looking at the most upvoted answers, I wonder if it is related to the network prefix?
It seems like internal-network would be prefixed with frontend_. Alternatively, you can try referencing the network by name in backend/docker-compose.yml:
networks:
  internal-network:
    external:
      name: internal-network
The issue is that external networks need the network name specified (because docker compose prefixes resource names by default). Your backend docker compose networks section should look like this:
networks:
  internal-network:
    name: internal-network
    external: true
You are creating the network in your frontend docker compose, so you should omit the docker network create ... command (you just need to bring the frontend up first). Or instead treat the network as external in both files and keep the command; in that case, use the named external network in your frontend docker compose as well, as sketched below.
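For the variant where both files treat the network as external, the frontend's networks section would mirror the backend's (a sketch, assuming the network is created up front with docker network create internal-network):

networks:
  internal-network:
    name: internal-network
    external: true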
I am running Airflow 2 with docker-compose (works great), but I cannot make it accessible behind an nginx proxy using a combination of nginxproxy/nginx-proxy and nginxproxy/acme-companion.
Other projects work fine with that combination (meaning, the combination itself works), but it seems I need to change some Airflow configuration to make this one work.
The airflow docker-compose includes the following:
x-airflow-common:
  &airflow-common
  build: ./airflow-docker/
  environment:
    AIRFLOW__WEBSERVER__BASE_URL: 'http://abc.def.com'
    AIRFLOW__WEBSERVER__ENABLE_PROXY_FIX: 'true'
    [...]

services:
  [...]
  airflow-webserver:
    <<: *airflow-common
    command: webserver
    expose:
      - "8080"
    environment:
      - VIRTUAL_HOST=abc.def.com
      - LETSENCRYPT_HOST=abc.def.com
      - LETSENCRYPT_EMAIL=some.email#def.com
    networks:
      - proxy_default # proxy_default is the docker network the nginx-proxy container runs in
      - default
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
    [...]
  [...]

[...]

networks:
  proxy_default:
    external: true
Airflow can be reached under the (successfully encrypted) address, but opening that URL results in the "Ooops! Something bad has happened." Airflow error page, more specifically a "sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: session" error, even though everything works fine when not behind the proxy.
What am I missing?
I have two Docker containers (Linux containers on a Windows 10 host) that are built from the microsoft/aspnetcore base image. Both containers run fine when I start them individually. I am trying to use Docker Compose to start both containers (one is an identity provider using IdentityServer4, and the other is an API resource protected by IdentityServer). I have the following docker-compose.yml file:
version: '3'
services:
  identityserver:
    image: eventloom/identityserver
    build:
      context: ../Eventloom.Web.IdentityProvider/Eventloom.Web.IdentityProvider
      dockerfile: DockerFile
    ports:
      - 8888:80

  eventsite:
    image: eventloom/eventsite
    build:
      context: ./Eventloom.Web.Eventsite
      dockerfile: Dockerfile
    ports:
      - 8080:80
    links:
      - identityserver
    depends_on:
      - identityserver
    environment:
      IdentityServer: "http://identityserver"
The startup class for the "eventsite" container uses IdentityModel to query the discovery endpoint of "identityserver". For some reason, the startup is never able to successfully get the discovery information, even though I can log into the eventsite container and get ping responses from identityserver. Is there something else I need to do to allow eventsite to communicate with identityserver over port 80?
It turns out that the HTTP communication was working fine and using the internal DNS properly. The issue was that my IdentityModel.DiscoveryClient object was not configured to allow plain HTTP. I had to attach the VS debugger while the app was starting inside the container to figure it out. Thanks.
I have a Docker-supported ASP.NET Core app.
The docker-compose file looks like this:
version: '3'
services:
  test:
    image: test
    build:
      context: ./Test
      dockerfile: Dockerfile
    networks:
      test_nw:
        aliases:
          - test_alias

  oracledb:
    image: sath89/oracle-12c
    ports:
      - "1521:1521"
    networks:
      test_nw:
        aliases:
          - oracledb_alias

networks:
  test_nw:
But after starting the app, I looked inside the ASP.NET Core app's container (docker exec -it ... bash) and checked the /etc/hosts file, and the DB alias oracledb_alias does not appear in it. So the app does not find the DB when using oracledb_alias as the host name in the connection string.
What did I do wrong? How do I solve this problem?
You did nothing wrong. Earlier Docker versions used /etc/hosts for resolving hostnames and links; now Docker uses an internal DNS server for this.
So you won't see that information in /etc/hosts. The only thing you can do is run a command to test whether the name resolves and is reachable:
$ dig oracledb_alias
$ ping oracledb_alias
$ telnet oracledb_alias 1521
See the link below for more details:
https://docs.docker.com/engine/userguide/networking/configure-dns/
I am developing an app with Symfony3 and have some questions about deploying it.
Currently I use docker-compose:
version: '2'
services:
  nginx:
    build: ./docker/nginx/
    ports:
      - 8081:80
    volumes:
      - .:/home/docker:ro
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - default

  php:
    build: ./docker/php/
    volumes:
      - .:/home/docker:rw
      - ./docker/php/php.ini:/usr/local/etc/php/conf.d/custom.ini:ro
    working_dir: /home/docker
    networks:
      - default
    dns_search:
      - php

  db:
    image: mariadb:latest
    ports:
      - 3307:3306
    environment:
      - MYSQL_ROOT_PASSWORD=collectionManager
      - MYSQL_USER=collectionManager
      - MYSQL_PASSWORD=collectionManager
      - MYSQL_DATABASE=collectionManager
    volumes:
      - mariadb_data:/var/lib/mysql
    networks:
      - default
    dns_search:
      - db

  search:
    build: ./docker/search/
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - default
    dns_search:
      - search

volumes:
  mariadb_data:
    driver: local
  elasticsearch_data:
    driver: local

networks:
  default:
nginx is clear, the PHP engine is PHP-FPM with some extensions and Composer, db is MariaDB, and search is Elasticsearch with some plugins.
Before Docker, I deployed the web app with Magallanes or Deployer.
With Docker I can use the docker-compose file to recreate the images and containers on the server; I can also save my containers as images, export them as tar archives, and load them on the server. That's fine for nginx and PHP-FPM, but what about Elasticsearch and the db? I need to keep their data across future code updates. Also, when I deploy the code I need to run a Doctrine migration and maybe some other commands, and Deployer does that perfectly along with some other interesting things. So how do I deploy the code with Docker? Can we use both, Deployer for the code and Docker for the services?
Thanks a lot for your help.
First of all, please try using user-defined networks; they have additional features compared to legacy linking, such as an embedded DNS server. This means your applications can reach other containers on the same network by name. Containers on one user-defined network are isolated from containers on another user-defined network.
To create a user-defined network:
docker network create --driver bridge <networkname>
Example docker-compose service definition using the user-defined network:
search:
  restart: unless-stopped
  build: ./docker/search/
  ports:
    - "9200:9200"
    - "9300:9300"
  networks:
    - <networkname>
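Since the network is created outside of Compose with docker network create, the compose file also needs a top-level networks section declaring it as external, roughly like this (keeping the placeholder name):

networks:
  <networkname>:
    external: true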
Second: I noticed you didn't use data volumes for your DB and Elasticsearch.
You need to mount volumes at certain points to keep your persistent data.
Third: When you export your containers, the export won't contain the mounted volumes. You need to back up the volume data and migrate it manually.
To back up volume data:
docker run --rm --volumes-from db -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
The above command creates a transient container from the ubuntu image, mounts the volumes from the db container, mounts the current directory on the Docker host into the container as /backup, and uses tar to archive /dbdata from the container (change this to your actual DB data directory) into the /backup directory mounted from the host. After the operation completes, the container is removed automatically (because of the --rm switch).
To restore:
Copy the tar archive to the remote location and create your container with an empty mounted volume, then extract the tar archive into that volume using the following command (dbstore2 being the new container that holds the empty volume):
docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"