ASP.NET on Docker Not Serving Web App to Browser

I can't get my ASP.NET web application to get served to my browser when the web app is containerized in Docker.
I'm running a Mac, and I've used Visual Studio Code to create an ASP.NET web application. It's a simple, out-of-the-box demo based on the yo aspnet "Empty Application." When run "natively" (outside of Docker), this application serves a "Hello World!" to http://localhost:5000 just fine. In other words, running dnx web starts the web server (Kestrel) and yields:
Hosting environment: Production
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.
This is good. Now enter Docker. I seem to have successfully built a Docker image containing the web application, and when I run the container in Docker, I get the same output from Kestrel. Also good, but I can no longer load the "Hello World!" page in my browser at http://localhost:5000. Instead, I get ERR_CONNECTION_REFUSED. This is fairly obviously because, due to the Docker "indirection," nothing is serving directly on port 5000 anymore. In other words, either there's an incorrect forwarding configuration, or I am misunderstanding the addressing.
I believe that port forwarding is involved in this process. In my Dockerfile, I am using EXPOSE 5000, which I thought would allow me to map my local port 5000 to the Docker container's port 5000 using a run command like this:
docker run -i -t -p 5000:5000 container_name
But that's not the case with http://localhost:5000 (ERR_CONNECTION_REFUSED). So it occurred to me that Docker is almost certainly not at localhost. I had noticed when Docker loads, it says:
docker is configured to use the default machine with IP 192.168.99.100
So, I thought I'd try http://192.168.99.100:5000, but again (confusingly?) ERR_CONNECTION_REFUSED. Next, I read an interesting article here, and from the suggested command
docker inspect container_name | grep IPAddress
I was able to determine that the container is assigned "IPAddress": "172.17.0.2".
So, I thought I'd try http://172.17.0.2:5000. And now we might actually be getting somewhere, because instead of ERR_CONNECTION_REFUSED, I get a spinning hourglass and eventually a timeout. But still no "Hello World!"
What might I be missing?

It turns out that the web application is available at the IP address of the virtual machine 192.168.99.100 as suspected. 172.17.0.2 was clearly some sort of red herring.
The real kicker seems to be that, inside the container, the web server needs to listen on 0.0.0.0 (all interfaces) rather than only on localhost.
Following the excellent advice of this posting, I edited the Dockerfile and specified the following:
ENTRYPOINT ["dnx", "web", "--server.urls", "http://0.0.0.0:5000"]
Because...
This will allow our web application to serve requests that come in from the port forwarding provided by Docker, which defaults to 0.0.0.0.
The port mapping is crucial for linking the host's port to the container's, but the EXPOSE command is apparently redundant (it only documents which port the container listens on; it doesn't publish it). Now, when I run
docker run -i -t -p 80:5000 container_name
I can simply browse to http://192.168.99.100 (port 80 is implicit)
And voilà! There's my "Hello World!"
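Putting it all together, here is a minimal sketch of the relevant Dockerfile (the microsoft/aspnet base image and the dnu restore step are assumptions based on a typical DNX-era setup; use whatever base image the project already builds from):
# assumed DNX-era base image; substitute the one the project actually uses
FROM microsoft/aspnet
COPY . /app
WORKDIR /app
# restore packages at build time
RUN ["dnu", "restore"]
# bind Kestrel to all interfaces so traffic forwarded by Docker reaches it
ENTRYPOINT ["dnx", "web", "--server.urls", "http://0.0.0.0:5000"]
Built with docker build and run with -p 80:5000 as above, the app answers at the docker-machine VM's address.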

Apart from using http://0.0.0.0:5000 you can use http://*:5000
ENTRYPOINT ["dnx", "web", "--server.urls", "http://*:5000"]
or you can include this in the commands section of your project.json:
"commands": {
"kestrel": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.Kestrel --server.urls http://*:5004"
},
"web": ......
and the ENTRYPOINT in the Dockerfile can be
ENTRYPOINT ["dnx","-p","project.json","kestrel"]

Related

HTTP 403 Forbidden URL error when accessing an IIS server inside a Docker container

(Image: successfully calling the Swagger endpoint from inside the Docker container)
I was able to start the server in the Docker container.
Using a simple Dockerfile, I install the dotnet SDK and the hosting bundle, and configure IIS from the command line.
I can start the API server with IIS and verify it with curl from inside the Docker container (the attached image shows this).
But when I try to call it from outside the Docker container, for example from my laptop, the only response I get is an HTTP 403 Forbidden URL error.
(Image: HTTP 403 Forbidden URL error)
I tried comparing with my local IIS settings, but every setting is exactly the same: No Managed Code, and the Advanced Settings match.
What's the problem?
This is the Dockerfile I use.
# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019
SHELL ["powershell", "-Command"]
RUN Install-WindowsFeature Web-ASP
ADD https://download.microsoft.com/download/1/2/8/128E2E22-C1B9-44A4-BE2A-5859ED1D4592/rewrite_amd64_en-US.msi rewrite_amd64_en-US.msi
RUN Write-Host 'Installing URL Rewrite' ; Start-Process msiexec.exe -ArgumentList '/i', 'rewrite_amd64_en-US.msi', '/quiet', '/norestart' -NoNewWindow -Wait;
WORKDIR /app
COPY ./ /app
RUN mkdir C:/inetpub/wwwroot/api
COPY ./api C:/inetpub/wwwroot/api
EXPOSE 8080
and I install dotnet-sdk 3.1 and dotnet-hosting 6.0.4.
Please give me some advice.
Thanks in advance
I'm trying to containerize my server, built on Windows with dotnet 3.1, but I run into a problem when I try to access the exposed port.
By default, when you create or run a container using docker create or docker run, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container's network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host, exposing it to the outside world.
For more information about container networking, please refer to this manual: https://docs.docker.com/config/containers/container-networking/
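For example, assuming the image is tagged iis-api (a hypothetical tag) and the IIS site inside the container listens on port 8080, publishing the port would look roughly like this:
# publish container port 8080 on host port 8080 (adjust to whatever port the IIS site actually binds)
docker run -d -p 8080:8080 --name iis-api iis-api
After that, the API should be reachable from the laptop at http://<docker-host>:8080 rather than at the container's internal address.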

How do I connect to a container hosted in Docker Toolbox?

I am attempting to run my ASP.NET Core 1.1 web API in a Docker container, but I cannot connect to the web API from a browser or curl. To troubleshoot, I have also brought up standard nginx and Apache httpd containers and cannot connect to these either, so I believe this is a Docker/Docker Toolbox/configuration issue rather than a problem with my application.
I'll focus on what I have done with nginx and Apache:
I am running Docker Toolbox on Windows 7 Professional, and everything seems to work as I would expect.
Docker commands all work as expected
I can access the underlying Windows filesystem
I can get the expected results from curl http://localhost (if I start the default IIS website on Windows 7)
So now I shut down IIS and run nginx in a container:
$ docker run -d -p 80:80 nginx
45bb1f373c11b820d8431de3eb3bf222d57d412de53e8625f461b62c4279e644
Docker now shows nginx running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
45bb1f373c11 nginx "nginx -g 'daemon off" 47 seconds ago Up 48 seconds 0.0.0.0:80->80/tcp, 443/tcp admiring_pike
But I cannot connect with either curl (within the Docker Toolbox command prompt) or a web browser in Windows:
$ curl http://localhost
curl: (7) Failed to connect to localhost port 80: Connection refused
I get exactly the same results if I run an Apache 2.4 (httpd) container.
Any ideas? Thanks! Peter
I have found the answer in another question here.
Because Docker Toolbox is running on a lightweight Linux VM, it has its own IP address. One needs either to map localhost to the VM using DOCKER_HOST, or to access the VM via its IP address, found using the command:
docker-machine ip default
As you are running in a VM, you need to follow the Docker documentation linked here.
After that, run the following command to check the IP address of your VM.
docker-machine ip default
Start nginx and hit [default machine IP]:port in the browser. It works!
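As a concrete sketch (default is the standard Docker Toolbox machine name, and 192.168.99.100 is the usual Toolbox address):
# find the Toolbox VM's address
$ docker-machine ip default
192.168.99.100
# hit the published port on that address instead of localhost
$ curl http://192.168.99.100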

Docker : Unable to run Docker commands

I have installed Docker Engine v1.12.3 on Ubuntu 14.04 LTS, and after making the following changes to enable the Remote API, I'm not able to pull or run any Docker images:
Added DOCKER_OPTS="-H tcp://127.0.0.1:2375" in /etc/default/docker.
/etc/init.d/docker start.
This is the error I receive:
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Note: I have added the logged-in user to the docker group.
If you configure the docker daemon to listen to a TCP socket (as you do), you should use the -H command line option with the docker command to point it to that socket instead of the default Unix socket.
@mustaccio is correct. The docker command defaults to using a Unix socket, normally at /var/run/docker.sock. You can either make your options setup like
DOCKER_OPTS="-H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock"
and restart, or always use docker -H tcp://127.0.0.1:2375 whenever you interact with the host from the command line.
The only good scenario I've seen for removing the Unix socket is user security: if your Docker host is TLS-enabled, you can ensure that only people with signed certificates are accessing the host, not just anyone with access to the system.
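One alternative worth noting is the DOCKER_HOST environment variable, which saves typing -H on every command (a small sketch, assuming the TCP socket configured above):
# point the docker CLI at the daemon's TCP socket for this shell session
export DOCKER_HOST=tcp://127.0.0.1:2375
docker ps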

Can not access nginx container on a local windows machine

I'm running an nginx container on a windows 10 machine. I've stripped it down to a bare minimum - an nginx image provided in the Docker hub. I'm running it using:
docker run --name ng -d -P nginx
This is the output of docker ps:
b5411ff47ca6 nginx "nginx -g 'daemon off" 22 seconds ago Up 21 seconds 0.0.0.0:32771->80/tcp, 0.0.0.0:32770->443/tcp ng
And this is the IP I'm getting when doing docker inspect ng: "IPAddress": "172.17.0.2"
So, the next thing I'm trying to do is access the Nginx server from the host machine by opening http://172.17.0.2:32771 in a browser on the host machine. This is not working (host not found, etc.).
Please advise
On Windows, you are using Docker Toolbox, and the IP you need is 192.168.99.100 (the IP of the Docker Toolbox VM). The IP you got is the IP of the container inside the VM, which is not accessible directly from Windows.
Follow this article... https://docs.docker.com/get-started/part2/#run-the-app
And make sure your application is running, not just Docker.
docker run -d -p 4000:80 friendlyhello
After this, on the Windows 10 host machine:
Worked: http://192.168.99.100:4000/
Not working: http://localhost:4000/
I used the following command to map the internal port 80 on the running container to port 82 on localhost:
docker run --name webserver2 -d -p 82:80 nginx
Accessing the nginx image at localhost:82 works great.
The port you want to use from your local web browser is the first number, before the :80, which is the port nginx listens on inside the container.
There is a lot of confusion out there on this issue -- it's a simple port mapping between the host machine (the Windows box you are running) and the container running on Docker.
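When ports are published with -P, as in the original docker run --name ng -d -P nginx, a quick way to see which host port was picked is docker port; this sketch uses the container name from the question and the mapping already shown in its docker ps output:
# print the host binding for container port 80
$ docker port ng 80
0.0.0.0:32771
With Docker Toolbox, that mapping is then reachable at http://192.168.99.100:32771 rather than at the container's 172.17.0.2 address.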

Restarting Containers When Using Docker and Nginx proxy_pass

I have an nginx Docker container and a webapp container successfully running and talking to each other.
The nginx container listens on port 80, and uses proxy_pass to direct traffic to the IP of the webapp container.
upstream app_humansio {
    server humansio:8080 max_fails=3 fail_timeout=30s;
}
"humansio" is set in the /etc/hosts file by docker because I've started nginx with --link humansio:humansio. The webapp container (humansio) is always exposing 8080.
The problem is, when I reload the webapp container, the link from the nginx container breaks and I need to restart nginx as well. Is there any way I can do this differently so I don't need to restart the nginx container when the webapp container reloads?
--
I've tried something like connecting them manually by using a common port (8001 on both), but since the first container actually reserves that port, the second container cannot use it as well.
Thanks!
I prefer to run the proxy (nginx or haproxy) directly on the host for this reason.
But an option is to "Link via an Ambassador Container" https://docs.docker.com/articles/ambassador_pattern_linking/
https://www.digitalocean.com/community/tutorials/how-to-use-the-ambassador-pattern-to-dynamically-configure-services-on-coreos
If you don't want to restart your proxy container whenever you have to restart one of the proxied ones (e.g. with fig), you could take a look at the auto-updated proxy configuration approach: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
If you use a reasonably modern version of Docker, the entries in the nginx container that point to your web service probably do get updated (you can check with docker exec -ti nginx bash and then cat /etc/hosts). The problem is that nginx doesn't consult /etc/hosts on every request - it caches the IP, and when the IP changes, nginx gets lost. docker kill -s HUP nginx, which makes nginx reload its config without a restart, helps too.
I have the same problem. I used to start my services with systemd unit files, and when you make one service (nginx) dependent on another (the webapp) and then restart the webapp, systemd is smart enough to restart nginx as well. Now I'm trying my luck with docker-compose, and restarting the webapp container confuses nginx.
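As a quick reference for the reload trick mentioned above (assuming the proxy container is named nginx, as in these answers):
# reload the nginx configuration without restarting the container
docker exec nginx nginx -s reload
# equivalently, send SIGHUP to the container's main process
docker kill -s HUP nginx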
