HTTP Communication between 2 Docker containers with Docker Compose

I have two Docker containers (Linux containers on a Windows 10 host) that are built from the microsoft/aspnetcore base image. Both containers run fine when I start them individually. I am trying to use Docker Compose to start both containers (one is an identity provider using IdentityServer4, the other is an API resource protected by IdentityServer). I have the following docker-compose.yml file:
version: '3'
services:
  identityserver:
    image: eventloom/identityserver
    build:
      context: ../Eventloom.Web.IdentityProvider/Eventloom.Web.IdentityProvider
      dockerfile: DockerFile
    ports:
      - 8888:80
  eventsite:
    image: eventloom/eventsite
    build:
      context: ./Eventloom.Web.Eventsite
      dockerfile: Dockerfile
    ports:
      - 8080:80
    links:
      - identityserver
    depends_on:
      - identityserver
    environment:
      IdentityServer: "http://identityserver"
The startup class for the "eventsite" container uses IdentityModel to ping the discovery endpoint of "identityserver". For some reason, startup is never able to successfully get the discovery information, even though I can log into the eventsite container and get ping responses from identityserver. Is there something else I need to do to allow eventsite to communicate over port 80 with identityserver?

It turns out that the HTTP communication was working fine and the internal DNS was being used properly. The issue was with my IdentityModel.DiscoveryClient object: I had not configured it to allow plain HTTP. I had to attach the VS debugger while the app was starting inside the container to figure it out. Thanks.
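For reference, a minimal sketch of that fix, assuming the IdentityModel 2.x DiscoveryClient API and that the authority comes from the IdentityServer environment variable set in the compose file above:
// Sketch only – relax the HTTPS requirement so discovery works over container-to-container HTTP
var authority = Configuration["IdentityServer"]; // e.g. http://identityserver (assumed wiring)
var discoveryClient = new DiscoveryClient(authority);
discoveryClient.Policy.RequireHttps = false;      // discovery rejects plain HTTP by default
var disco = await discoveryClient.GetAsync();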

Related

Can't reach server endpoints when running Docker Compose Python Interpreter

I have a small setup with a few services inside a docker-compose.yaml. For brevity, here is the service that is intended as the main API consuming the other services:
services:
  fprint-api:
    container_name: fprint-api-v2
    image: "fprint-api:v0.0.1"
    depends_on:
      - fprint-svc
    extra_hosts:
      - "host.docker.internal:host-gateway"
    ports:
      - "8000:8000"
    build:
      context: ../.
      dockerfile: docker/Dockerfile.fprint-api
  # ...
fprint-api has a simple health-check endpoint like so:
@app.get("/health")
def health():
    return "API OK"
If I just run docker-compose up on this, or use the Docker Compose run-configuration in PyCharm, everything works and I am able to make a GET request to http://localhost:8000.
However, if I use a remote Python interpreter on said docker-compose.yaml and the fprint-api service, I can't reach this endpoint anymore. The system spins up, but the endpoint is not accessible, and as such I am unable to debug my endpoints.
I am not sure what I'm missing here exactly.
[Screenshots: remote interpreter configuration and run configuration for the fprint-api service]
Okay, that's an easy one.
uvicorn binds to 127.0.0.1 by default, so the published port never reaches the server; the --host flag needs to be set to 0.0.0.0 for the endpoint to be reachable via http://localhost:8000 on the host.
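For example, a sketch of the container's start command (the main:app module path is an assumption; the Dockerfile location comes from the compose file above):
# docker/Dockerfile.fprint-api (sketch) – bind uvicorn to all interfaces, not just 127.0.0.1
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]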

Multiple docker-compose: Error: getaddrinfo EAI_AGAIN from frontend to backend

I have two separate folders, one for the backend and one for the frontend services:
backend/docker-compose.yml
frontend/docker-compose.yml
The backend is a headless WordPress installation on nginx, intended to serve the frontend as an API. The frontend runs on Next.js. Here are the two docker-compose.yml files:
backend/docker-compose.yml
version: '3.9'
services:
  nginx:
    image: nginx:latest
    container_name: my-app-nginx
    ports:
      - '80:80'
      - '443:443'
      - '8080:8080'
    ...
    networks:
      - internal-network
  mysql:
    ...
    networks:
      - internal-network
  wordpress:
    ...
    networks:
      - internal-network
networks:
  internal-network:
    external: true
frontend/docker-compose.yml
version: '3.9'
services:
  nextjs:
    build:
      ...
    container_name: my-app-nextjs
    restart: always
    ports:
      - 3000:3000
    networks:
      - internal-network
networks:
  internal-network:
    driver: bridge
    name: internal-network
In the frontend I use the Fetch API in Next.js as follows:
fetch('http://my-app-nginx/wp-json/v1/enpoint', ...)
I also tried ports 80 and 8080, without success.
The sequence of commands I run is:
docker network create internal-network
in the backend/ folder, docker-compose up -d (all backend containers run fine; I can fetch data from the WordPress API with Postman)
in the frontend/ folder, docker-compose up -d fails with the error Error: getaddrinfo EAI_AGAIN my-app-nginx
I am not an expert Docker user, so I might be missing something here, but I understand there may be a networking issue between the containers. I have read many answers on this topic but couldn't figure it out.
Any recommendations?
Just to add a proper answer:
Generally you should not need to run multiple docker-compose up -d commands.
If you want to combine two separate docker-compose configs and run them as one (slightly preferable), you can use the extends keyword as described in the docs (a sketch follows below).
However, I would suggest that you treat it as a single docker-compose project, which can itself contain multiple nested git repositories:
Example SO answer - Git repository setup for a Docker application consisting of multiple repositories
You can keep your code in a mono-repo or in multiple repos, up to you.
A real working example that validates this approach:
headless-wordpress-nextjs-starter-kit and its docker-compose.yml
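As a sketch of the extends approach mentioned above (extends is supported in Compose v2 files and the current Compose specification; the file and service names are assumptions based on the question):
# frontend/docker-compose.yml (sketch) – pull a service definition in from the backend file
services:
  nginx:
    extends:
      file: ../backend/docker-compose.yml
      service: nginx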
I have found this thread:
Communication between multiple docker-compose projects
Looking at the most upvoted answers there, I wonder if it is related to the network name prefix?
It seems like the internal-network would be prefixed with frontend_. On the other hand, you can also try to locate the network by name in backend/docker-compose.yml:
networks:
  internal-network:
    external:
      name: internal-network
The issue is that external networks need the network name specified (because Docker Compose prefixes resources by default). Your backend docker compose networks section should look like this:
networks:
  internal-network:
    name: internal-network
    external: true
You are creating the network in your frontend docker compose, so you should omit the docker network create ... command (you just need to bring the frontend up first). Or instead treat the network as external in both files and keep the command; in that case use the named external network in your frontend docker compose as well, as in the sketch below.
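A sketch of that second variant, assuming the docker network create internal-network command is kept:
# frontend/docker-compose.yml (sketch) – reference the pre-created network instead of creating it
networks:
  internal-network:
    name: internal-network
    external: true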

I can't connect my ASP.NET app in a Docker container to my host machine's database using "host.docker.internal:some-port"

I lost a lot of time trying to connect my app container to my database, the Azure Cosmos DB Emulator. I am using logger objects to see where my app breaks, and I found that the problem is in the connection from the container to the outside world. Instead of the well-known host.docker.internal address for reaching my host, I was trying to use my container name (Docker's internal DNS name).
Here is my appsettings.Development configuration:
"DocumentDb": {
"TenantKey": "Default",
"Endpoint": "https://project-cosmos-container:8081",
"AuthorizationKey": "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw=="
},
Here is my Dockerfile-Cosmos (it copies the app .dll files that I created earlier with dotnet build and dotnet publish):
FROM microsoft/dotnet:2.2-aspnetcore-runtime
# We create the folder inside the container
WORKDIR /local-project
# We copy all the project executables that we created with dotnet build and dotnet publish
COPY ./bin/Release/netcoreapp2.2/publish/* ./
EXPOSE 8000
EXPOSE 8081
# We tell dotnet to run the project's executable
ENTRYPOINT ["dotnet", "Local.Proyect.Core.dll"]
And finally, my docker-compose file where I run the app:
version: '3.1'
services:
  local-Proyect:
    image: project-cosmos-image
    container_name: project-cosmos-container
    ports:
      - 127.0.0.1:7000:8000
    environment:
      ASPNETCORE_ENVIRONMENT: Development
      ASPNETCORE_URLS: http://+:8000
Maybe the problem is in the ports, I don't know. You can see that I am trying to use port 7000 on my host machine to reach the container, and port 8081 (the Azure Cosmos port).
Using the following configuration, with host.docker.internal:8081, it works.
"DocumentDb": {
"TenantKey": "Default",
"Endpoint": "https://host.docker.internal:8081",
"AuthorizationKey": "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw=="
},
So using the container name as the DNS address does not work. I also have to do this only for the Development environment, because in other environments host.docker.internal does not work...
version: '3.1'
services:
  local-Proyect:
    image: project-cosmos-image
    container_name: project-cosmos-container
    ports:
      - 127.0.0.1:7000:433
      - 127.0.0.1:7001:80
    environment:
      ASPNETCORE_ENVIRONMENT: Development
      ASPNETCORE_URLS: http://+:433;http://+:80
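As a side note, host.docker.internal only resolves out of the box with Docker Desktop (Windows/Mac); on a plain Linux engine (20.10+) it can be mapped explicitly via extra_hosts. A sketch, reusing the service above:
services:
  local-Proyect:
    extra_hosts:
      - "host.docker.internal:host-gateway"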

Docker-compose container using host DNS server

I'm running several containers on my "Ubuntu 16.10 Server" in a custom bridge network with Docker Compose 1.9 (compose file version 2.1). Most of my containers use the same ports internally, so there is no way for me to use the "host" network driver.
My containers are all linked together using the dedicated links attribute.
But, I also need to access services exposed outside of my containers. These services have dedicated URL with names registered in my company's DNS server.
While I have no problem to use public DNS and reach any public service from within my containers, I just can't reach my private DNS.
Do you know a working solution to use private DNS from a container? Or even better, use host's network DNS configuration?
PS: Of course, I can link to my company's services using the extra_hosts attribute on my services in my docker-compose.yml file. But that's definitely not the point of having a DNS. I don't want to register all my services in my YML file, and I don't want to update it each time a service's IP changes in my company.
Context :
Host: Ubuntu 16.10 server
Docker Engine: 1.12.6
Docker Compose: 1.9.0
docker-compose.yml: 2.1
Network: Own bridge.
docker-compose.yml file (extract):
version: '2.1'
services:
  nexus:
    image: sonatype/nexus3:$NEXUS_VERSION
    container_name: nexus
    restart: always
    hostname: nexus.$URL
    ports:
      - "$NEXUS_81:8081"
      - "$NEXUS_443:8443"
    extra_hosts:
      - "repos.private.network:192.168.200.200"
    dns:
      - 192.168.3.7
      - 192.168.111.1
      - 192.168.10.5
      - 192.168.10.15
    volumes_from:
      - nexus-data
    networks:
      - pic
networks:
  pic:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.18.0.0/16
          gateway: 172.18.0.1
I tried with and without the ipam configuration for the pic network, without any luck.
Tests & Results:
docker exec -ti nexus curl repos.private.network
properly returns the HTML page served by this service
docker exec -ti nexus curl another-service.private.network
Returns curl: (6) Could not resolve host: another-service.private.network; Name or service not known
While curl another-service.private.network from the host returns the appropriate HTML page.
And "of course" another-service.private.network is known in my 4 DNS servers (192.168.3.7, 192.168.111.1, 192.168.10.5, 192.168.10.15).
You don't specify which environment you're running docker-compose in, e.g. Mac, Windows or Linux, so what changes are needed will depend a little on that. You also don't specify whether you're using Docker's default bridge network or a user-created bridge network.
In either case, by default, Docker should try and map DNS resolution from the Docker Host into your containers. So if your Docker Host can resolve the private DNS addresses, then in theory your containers should be able to as well.
I'd recommend reading the official Docker DNS documentation as it is pretty reasonable: here for the default Docker bridge network, and here for user-created bridge networks.
A slight gotcha: if you're running Docker for Mac, Docker Machine or Docker for Windows, remember that your Docker host is actually the VM running on your machine and not the physical box itself, so you need to ensure that the VM has the correct DNS resolution options set. You will need to restart your containers for DNS resolution changes to be picked up by them.
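A quick way to verify what a container actually receives is to look at its resolv.conf, e.g. for the nexus container defined above:
docker exec -ti nexus cat /etc/resolv.conf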
You can of course override all the default settings using docker-compose. It has full options for explicitly setting DNS servers, DNS search options etc. As an example:
version: 2
services:
  application:
    dns:
      - 8.8.8.8
      - 4.4.4.4
      - 192.168.9.45
You'll find the documentation for those features here.
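If every container should inherit the company DNS servers without repeating the dns option per service, the Docker daemon itself can also be configured. A sketch of /etc/docker/daemon.json using the addresses from the question (the daemon must be restarted for this to take effect):
{
  "dns": ["192.168.3.7", "192.168.111.1", "192.168.10.5", "192.168.10.15"]
}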

Use host networking and additional networks in docker compose

I'm trying to set up a dev environment for my project.
I have a container (ms1) which should be put in its own network ("services" in my case), and a container (apigateway) which should access that network while exposing an HTTP port to the host's network.
Ideally my docker compose file would look like this:
version: '2'
services:
  ms1:
    expose:
      - "13010"
    networks:
      services:
        aliases:
          - ms1
  apigateway:
    networks:
      services:
        aliases:
          - api
    network_mode: "host"
networks:
  services:
docker-compose doesn't allow using network_mode and networks at the same time.
Do I have other alternatives?
At the moment I'm using this:
apigateway:
  networks:
    services:
      aliases:
        - api
  ports:
    - "127.0.0.1:10000:13010"
and then the apigateway container listens on 0.0.0.0:13010. It works, but it is slow and it freezes if the host's internet connection goes down.
Also, I'm planning on using Vagrant on top of Docker in the future; does that allow solving this in a clean way?
expose in docker-compose does not publish the port on the host. Since you probably don't need service linking anymore (you should rely on Docker networks instead, as you already do), the option has limited value in general and seems to provide no value at all in your scenario.
I suspect you've come to using it by mistake and after realizing that it didn't seem to have any effect by itself, stumbled upon the fact that using the host network driver would "make it work". This had nothing to do with the expose property, mind you. It's just that the host network driver lets contained processes bind to the host network interface directly. Thanks to this, you could reach the API gateway process from the outside. You could remove the expose property and it would still work.
If this is the only reason why you picked the host network driver, then you've fallen victim to the X-Y problem:
(tl;dr)
You should never need to use the host network driver in normal situations, the default bridge network driver works just fine. What you're looking for is the ports property, not expose. This sets up the appropriate port forwarding behind the scenes.
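Putting that together, a rough sketch of the setup without network_mode: "host" (names and ports are taken from the question; the published port mapping is just an example):
version: '2'
services:
  ms1:
    networks:
      services:
        aliases:
          - ms1
  apigateway:
    networks:
      services:
        aliases:
          - api
    ports:
      - "10000:13010"   # publish instead of relying on host networking
networks:
  services: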
In docker 1.13 you should be able to create a service to bridge between the two networks. I'm using something similar to fix another problem and I think this could also help here:
docker service create \
  --name proxy \
  --network proxy \
  --publish mode=host,target=80,published=80 \
  --publish mode=host,target=443,published=443 \
  --constraint 'node.hostname == myproxynode' \
  --replicas 1 \
  letsnginx
I would try this:
1/ Find the host network:
docker network ls
2/ Use this docker-compose file:
services:
  ms1:
    ports:
      - "13010"
    networks:
      - service
  apigateway:
    networks:
      - front
      - service
networks:
  front:
  service:
    external:
      name: "<ID of the network>"
