Hostname resolution fails when running docker build from a docker container - networking

We are running a Jenkins CI server from a Docker container, started with docker-compose. The Jenkins server runs jobs which pull projects from git and build Docker images the standard way, by executing docker build . on them. To be able to use Docker inside the Jenkins container, we mount the host's /var/run/docker.sock into the Jenkins container via docker-compose.
Some of the Dockerfiles we are trying to build there download files from our fileserver (3rd-party installation images, for example). Such a Dockerfile command looks like RUN curl -o xx.zip http://fileserver/xx-1.2.3.zip.
The fileserver hostname is resolved through the /etc/hosts file to the public IP of the host that runs the Jenkins CI server. The docker-compose config for the Jenkins container also includes the extra_hosts parameter pointing fileserver to the host's public IP.
The problem is that building the Docker image with Jenkins running in its own container fails with a plain Unknown host: fileserver message. If I enter the Jenkins container via docker exec -it <id>, I can execute the same curl command and it resolves the host, but if I try to run docker build . there, which runs the same curl command, it fails to resolve the host.
Our host runs RHEL, and I failed to reproduce the problem on my desktop Arch Linux, so I suspect it is some Red Hat-specific issue (again).
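A rough sketch of the relevant part of the compose file (the image name and IP are placeholders for our actual values):
services:
  jenkins:
    image: jenkins/jenkins:lts                       # placeholder image
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock    # gives the container access to the host's Docker Engine
    extra_hosts:
      - "fileserver:203.0.113.10"                    # placeholder for the host's public IP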

Add --network=host so that the build environment uses the host machine's name resolution:
docker build --network=host -t foo/bar:latest .
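If host networking is not an option, docker build also accepts --add-host to inject a single hosts entry into the intermediate build containers (the IP here is a placeholder for the host's public address):
docker build --add-host fileserver:203.0.113.10 -t foo/bar:latest .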

Docker builds don't happen on the machine issuing the command (your Jenkins container, in this case); they happen on the machine running the Docker Engine. This means that your Jenkins container tars up the source directory and ships it to the parent machine (the host) for the build to happen. So, check whether the curl command works from the parent machine, not just from the Jenkins container.

Related

Why does aspnet core start on port 80 from within Docker?

TL;DR: Why does an aspnet core app run on port 80 from within a Docker image, but on port 5000 outside of one?
Elaborate
I went through the aspnet core / docker tutorial found here:
https://learn.microsoft.com/en-us/dotnet/core/docker/building-net-docker-images
Halfway through the page, I start the application with the following, as prescribed:
dotnet run
Among other things, this prints this:
Now Listening on: http://localhost:5000
Great. That is what I expected. The next thing in the tutorial is to start the exact same application from within a Docker image.
docker build -t aspnetapp .
docker run -it --rm -p 5000:80 --name aspnetcore_sample aspnetapp
This results in
Now listening on: http://[::]:80
Wait. Wat? Why is the aspnet core app running on port 80? It was running on port 5000 when I ran it directly from the machine. There were no configuration file changes.
I suspect that it has something to do with the base docker images, but am not yet skilled enough in docker to track this down.
The microsoft/aspnetcore-build container builds on top of the
microsoft/aspnetcore container. The dockerhub page for that says:
A note on ports
This image sets the ASPNETCORE_URLS environment variable to http://+:80 which means that if you have not explicitly set a URL in your application, via app.UseUrl in your Program.cs for example, then your application will be listening on port 80 inside the container.
So this is the container actively setting the port to 80. You can override it, if you want, by doing this in your Dockerfile:
ENV ASPNETCORE_URLS=http://+:5000
Also, it is worth noting that because of the docker command you are using, you will still be able to access the application at http://localhost:5000 whether you are running the application directly or in a container.
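If you do override the container port this way, remember to adjust the -p mapping to target the new container port (a small sketch, with everything else from the tutorial unchanged):
# in the Dockerfile: make Kestrel listen on 5000 inside the container
ENV ASPNETCORE_URLS=http://+:5000
# then map host port 5000 to container port 5000 instead of 80
docker run -it --rm -p 5000:5000 --name aspnetcore_sample aspnetapp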
Without touching the Dockerfile, you can set the ports from outside the container (.NET Core 3.1, .NET 5, .NET 6, .NET 7+) with docker run arguments (serving HTTPS inside the container also needs a dev certificate; see the links below):
docker run -it --rm -p 5000:5000 -p 5001:5001 -e ASPNETCORE_URLS="http://+:5000;https://+:5001" -e ASPNETCORE_HTTPS_PORT=5001 --name aspnetcore_sample aspnetapp
more details:
https://github.com/dotnet/dotnet-docker/blob/17c1eec582e84ba9cbea5641cd9cc13fe1a41c39/samples/run-aspnetcore-https-development.md?plain=1#L85
https://github.com/dotnet/dotnet-docker/blob/5926a01d44bd47b6202ba71e30f9faa08fad1aec/samples/run-in-sdk-container.md?plain=1#L109
If you are using .NET Core 2.2 or higher, then you should use another image: mcr.microsoft.com/dotnet/core/aspnet:2.2. In that case specifying ENV ASPNETCORE_URLS=http://+:5000 does not help. You can still force the app to listen on port 5000 by using UseUrls("http://*:5000") in the Program.cs file.
Some links in other answers are for older versions, or no longer exist. The below applies to v6.
All the mcr.microsoft.com/dotnet/aspnet images are here. Suppose you are using the alpine version.
The aspnet image is based on the runtime image, as shown here.
The runtime image is based on the runtime-deps image, as shown here.
The runtime-deps image is based on the amd64/alpine image, as shown here (an older version, but with the same structure). And it sets ENV ASPNETCORE_URLS=http://+:80, as shown here, which means the container is listening on port 80.
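You can confirm this from the image metadata; the variable shows up in the Env list (a quick check, assuming the 6.0-alpine tag):
docker image inspect mcr.microsoft.com/dotnet/aspnet:6.0-alpine --format '{{.Config.Env}}'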
Windows networking stack limitations hit Windows Docker containers hard. (Reference video)
docker run -it --rm -p ${host_computer_port}:${container_port} --name ${container_name} ${image_name}
Example of the command:
docker run -it --rm -p 5000:8090 --name dockerwebapp9172020c dockerwebapp9172020
What does the above command mean?
Your machine's port (5000) is mapped to the container's port (8090). That alone does not mean the application running in the container is listening on port 8090. See the Dockerfile below for how to make the application inside the container listen on that port.
# base runtime image; the container port 8090 is documented via EXPOSE
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-nanoserver-1903 AS base
WORKDIR /app
EXPOSE 8090

# build stage
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-nanoserver-1903 AS build
WORKDIR /src
COPY ["DockerWebApp/DockerWebApp.csproj", "DockerWebApp/"]
RUN dotnet restore "DockerWebApp/DockerWebApp.csproj"
COPY . .
WORKDIR "/src/DockerWebApp"
RUN dotnet build "DockerWebApp.csproj" -c Release -o /app/build

# publish stage
FROM build AS publish
RUN dotnet publish "DockerWebApp.csproj" -c Release -o /app/publish

# final image: tell Kestrel to listen on 8090, matching the EXPOSE above
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENV ASPNETCORE_URLS http://*:8090
ENTRYPOINT ["dotnet", "DockerWebApp.dll"]
Testing
The Windows networking stack limitation will not let you reach the following URL directly from the host:
http://localhost:5000
Let's go through the workarounds.
Workaround 1: Hit the container directly (run the command below in PowerShell or a command prompt)
PS C:\> docker inspect f31e8add55af
Find the container's IP address under the "Networks" node at the very end of the output, then open this URL in a browser:
http://{container IP}:8090
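If you prefer not to scan the whole inspect output, a Go-template format string prints just the IP (same container ID as in the example above):
PS C:\> docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' f31e8add55af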
Workaround 2: From the Windows host
First, find your machine's IP address:
c:\>ipconfig
Once you have your machine's IP, open the following URL:
http://{Your Machine IP}:5000
This works because host port 5000 is mapped to container port 8090, and the ASP.NET Core application is listening on port 8090 inside the container.
Reference: Windows Containers and Docker: 101

My docker container isn't starting on localhost (0.0.0.0) on Docker for Windows (Native using Hyper-V)

I'm following Digital Ocean's tutorial on how to start an nginx docker container (currently on Step 4). This is their output:
$ docker run --name docker-nginx -p 80:80 -d nginx
d3ccb73a91985651ec61231bca9f9c716f0dec807e354a29eeef2144f883a01c
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b91f3ce26553 nginx "nginx -g 'daemon off" About a minute ago Up About a minute 0.0.0.0:80->80/tcp, 443/tcp docker-nginx
But when I run it, this is my output (notice the different IP of the container):
C:\>docker run --name docker-nginx -p 80:80 -d nginx
d3ccb73a91985651ec61231bca9f9c716f0dec807e354a29eeef2144f883a01c
C:\>docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3ccb73a9198 nginx "nginx -g 'daemon off" 14 hours ago Up 2 seconds 10.0.75.2:80->80/tcp, 443/tcp docker-nginx
Why does this happen? And how can I get the same results as Digital Ocean's? (Getting the server to start on localhost)
Edit: I'm using Docker for Windows (recently released), which apparently runs natively using Hyper-V. My output for docker-machine ls is this:
C:\>docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
C:\>
But when I run it, this is my output (notice the different IP of the container)
Since this is a Windows machine, I assume that you're using Docker for Windows. 10.0.75.2 is the IP of the virtual machine that Docker runs your containers in (the equivalent of the boot2docker VM on Docker Toolbox).
If you are using Windows or Mac OS, you will need some form of virtualization in order to run Docker. The IP you just saw is the IP of that lightweight virtual machine.
And how can I get the same results as Digital Ocean's? (Getting the server to start on localhost)
Use a Linux distribution! Alternatively, you can enable "Expose container ports on localhost" in the Docker for Windows settings.
Although you created the containers on your local machine, they are actually running on a different machine (a virtual machine).
First, check the IP of your docker machine (the virtual machine):
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default * virtualbox Running tcp://192.168.99.100
Then run a curl command (or open a browser) to view the default site served by the nginx web server inside the container:
curl http://192.168.99.100:80
If you are using a virtual machine on Windows:
docker-machine ip default
https://docs.docker.com/machine/concepts/
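From a bash shell (for example the Docker Quickstart Terminal), you can feed that IP straight into curl (assuming the machine is named default):
curl http://$(docker-machine ip default)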
When I ran this command for the first time: docker run -d -p 80:80 --name docker-tutorial docker101tutorial
I got this error:
docker: Error response from daemon: Conflict. The container name
"/docker-tutorial" is already in use by container "LONG_CONTAINER_ID".
You have to remove (or rename) that container to be able to reuse that
name.
so, I tried to remove this container using: docker rm -f LONG_CONTAINER_ID
then I did: docker run -d -p 3080:80 --name docker-tutorial docker101tutorial
note 3080:80 instead of 80:80. Had I run this from the Docker Desktop UI, I would have been offered this default port option there.
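As a side note, you can also remove the conflicting container by name instead of looking up its ID (a small sketch reusing the names from above):
docker ps -a --filter name=docker-tutorial   # find the conflicting container
docker rm -f docker-tutorial                 # remove it by name
docker run -d -p 3080:80 --name docker-tutorial docker101tutorial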

Docker "/bin/bash" could not be invoked when mounting an NFS file with -v on openstack

I'm running an Ubuntu 14.04 instance on OpenStack that has Docker installed. I'm trying to mount a volume into a docker container. I'm doing this with:
docker run -t -i -v /mnt/data/dir:/mnt/test ubuntu
Where /mnt/data/dir is an NFS shared directory. Doing this gets me:
docker:
Error response from daemon: Container command '/bin/bash' could not be invoked..
However, using a local directory instead of a mounted directory works exactly as expected.
I understand that docker doesn't natively support an NFS-mounted file system; however, the errors I found while googling are usually not of the form I've mentioned above.
Any clue on how to proceed?
Edit: I forgot to mention that its not just limited to /bin/bash could not be invoked. I tried running a tomcat server and that gave me the exact same error.
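One workaround sometimes used for this kind of setup (a hedged sketch, not taken from this thread) is to let Docker's local volume driver mount the NFS export directly instead of bind-mounting the host path; the server address and export path below are placeholders:
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=203.0.113.20,rw \
  --opt device=:/exported/data/dir \
  nfsdata
docker run -t -i -v nfsdata:/mnt/test ubuntu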

Docker run results in "host not found in upstream" error

I have a frontend-only web application hosted in Docker. The backend already exists, but it has a "custom IP" address, so I had to update my local /etc/hosts file to access it. So, from my local machine I am able to access the backend API without a problem.
But the problem is that Docker somehow cannot resolve this "custom IP", even though the host is written in the container's (image's?) /etc/hosts file.
When the Docker container starts up I see this error
$ docker run media-saturn:dev
2016/05/11 07:26:46 [emerg] 1#1: host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
nginx: [emerg] host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
I update the /etc/hosts file via a command in the Dockerfile, like this:
# install wget
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
# The trick is to add the hostname on the same line as you use it, otherwise the hosts file will get reset, since every RUN command starts a new intermediate container
# it has to be https, otherwise authentication is required
RUN echo "123.45.123.45 my-server-address.com" >> /etc/hosts && wget https://my-server-address.com
When I ssh into the machine to check the current content of /etc/hosts, the line "123.45.123.45 my-server-address.com" is indeed there.
Can anyone help me out with this? I am a Docker newbie.
I have solved this. There are two things at play.
One is how it works locally and the other is how it works in Docker Cloud.
Local workflow
cd into the root of the project, where the Dockerfile is located
build the image: docker build -t media-saturn:dev .
run the built image: docker run -it --add-host="my-server-address.com:123.45.123.45" -p 80:80 media-saturn:dev
Docker Cloud workflow
Add the extra_hosts directive to your Stackfile, like this:
extra_hosts:
  - 'my-server-address.com:123.45.123.45'
Then click Redeploy in Docker Cloud, so that the changes take effect.
Optimization tip
Ignore as many folders as possible to speed up sending the build context to the Docker daemon:
add a .dockerignore file
typically you want to ignore folders like node_modules, bower_components and tmp
in my case the tmp folder contained about 1.3 GB of small files, so ignoring it sped up the process significantly
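A minimal .dockerignore along those lines (the folder names are examples; adjust them to your project):
node_modules
bower_components
tmp
.git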

Docker run start services

I need nginx-openresty and Redis in a single Docker container. I have written a Dockerfile and it is working fine, but I need to start my Redis service after logging into the container's bash. To automate this I have written a .sh file which contains instructions to start and stop the Redis server and nginx, and set ENTRYPOINT ["./startup.sh"]
and the .sh file is:
cd /etc/redis-installation/utils
echo -n | ./install_server.sh
service redis_6379 stop
cd /
cp ./dump.rdb /var/lib/redis/6379/
service redis_6379 start
openresty
My problem is that the docker container starts and exits when the shell script finishes. How can I keep the container running with nginx and Redis in a running state?
Try using docker-compose with a link between your app container and your Redis container. I suggest using the official Redis image.
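A minimal sketch of such a docker-compose.yml (the openresty/openresty image, port, and volume path are assumptions; swap in your own image and config):
version: '2'
services:
  web:
    image: openresty/openresty       # assumed image; replace with your own nginx-openresty build
    ports:
      - "80:80"
    links:
      - redis                        # reachable from web as hostname "redis" on port 6379
  redis:
    image: redis                     # the official Redis image
    volumes:
      - ./dump.rdb:/data/dump.rdb    # preload the dump, if needed
With this, the openresty config can point at redis:6379 instead of localhost, and compose keeps both services running together.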

Resources