Docker with multiple exposed ports - networking

I have a container with, say, 3 exposed ports: 1000 (nodejs-express), 1001 (python-flask) and 1002 (angular2-client). When I use
docker run --name test -d -p 1000:1000 -p 1001:1001 -p 1002:1002 docker_image
only the Express server is reachable from the host computer. However, when I log into the container and curl, all three servers respond just fine.
Any ideas what is going on with multiple port bindings with docker/host?

Once you do the following:
EXPOSE the ports in the Dockerfile
set the -p flag for each port you want to publish externally
you just need to make sure that your services allow external connections.
e.g. for Python Flask (see http://dixu.me/2015/10/26/How_to_Allow_Remote_Connections_to_Flask_Web_Service/): the default is to listen on localhost only. Make sure each service is listening on 0.0.0.0.
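For example, a minimal sketch of starting each server so it binds to all interfaces (port numbers taken from the question; the exact commands depend on how each app is launched inside the container):
# Express binds to all interfaces by default (app.listen(1000)), which is why it already works.
# Flask: bind explicitly to 0.0.0.0 instead of the default 127.0.0.1 (assumes FLASK_APP is set)
flask run --host=0.0.0.0 --port=1001
# Angular CLI dev server: same idea
ng serve --host 0.0.0.0 --port 1002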

Related

Can't expose port with podman

I am trying to walk through a tutorial that brings up an application in a docker/podman container instance.
I have attempted to use -p port:port and --expose port but neither seems to work.
I've ensured that I see the port in a listen state with ss -an.
I've made sure there isn't anything else trying to bind to that port.
No matter what I do, I can never hit localhost:port or ip_address:port.
I feel like I am fundamentally missing something but don't know where to look next.
Any suggestions for things to try or documentation to review would be appreciated.
Thanks,
Shawn
Expose (from the Podman reference):
Expose tells Podman that the container requests that port be open, but
does not forward it. All exposed ports will be forwarded to random
ports on the host if and only if --publish-all is also specified
As per the Red Hat documentation for Containerfile:
EXPOSE indicates that the container listens on the specified network
port at runtime. The EXPOSE instruction defines metadata only; it does
not make ports accessible from the host. The -p option in the podman
run command exposes container ports from the host.
To publish a specific port number, use the -p option with podman run.
Example:
podman run -d -p 8080:80 --name httpd-basic quay.io/httpd-parent:2.4
In the above example, port 80 is the port the container listens on (and exposes), and we can access it from outside the container via host port 8080.
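As a quick sketch of the difference between -p and --publish-all (image name reused from the example above; the random host ports will vary):
# -p publishes one specific mapping: host 8080 -> container 80
podman run -d -p 8080:80 --name httpd-basic quay.io/httpd-parent:2.4
# -P / --publish-all publishes every EXPOSEd port to a random host port
podman run -d -P --name httpd-random quay.io/httpd-parent:2.4
# show what actually got published on the host for each container
podman port httpd-basic
podman port httpd-random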

firebase serve in docker container not visible to host os

I'm running in a Docker container with port 9005 published to the host OS. When I run
firebase serve -p 9005
and then try to access it from the host OS (Windows) using http://localhost:9005, I get an empty response.
To make firebase serve visible you have to force it to bind to the address 0.0.0.0; otherwise it binds to localhost by default.
So you need to run
firebase serve -p 9005 -o 0.0.0.0
Also make sure that port 9005 is exposed and published using the docker command line option -p.
For your host, localhost is e.g. 127.0.0.1; for the Docker container, localhost may be 127.0.0.1 too. But these are not the same: they are two different loopback interfaces.
You have to configure a process running in a Docker container to listen on all interfaces, which is what the address 0.0.0.0 means; it is not localhost.
firebase serve -p 9005 -o 0.0.0.0
Then you have to publish the port, 9005 in the example above; exposing it alone is not enough to reach it from the host. See https://docs.docker.com/engine/reference/commandline/run/#publish-or-expose-port--p---expose
docker run -p 9005:9005 $CONTAINER $PARAMS
You can also declare the port in the Dockerfile (metadata only) with something like this:
EXPOSE 9005/tcp
EXPOSE 9005/udp
See here: https://docs.docker.com/engine/reference/builder/#expose
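Putting it together, a minimal sketch (the image name my-firebase-image is hypothetical and assumes firebase-tools is installed in it):
# publish container port 9005 to the host and bind firebase serve to all interfaces
docker run -d -p 9005:9005 --name firebase-dev my-firebase-image \
  firebase serve -p 9005 -o 0.0.0.0
# then, from the host:
curl http://localhost:9005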

Docker Nginx disable default exposed port 80

Is there a way to disable the default EXPOSE 80 443 instruction in the nginx docker file without creating my own image?
I'm using Docker Nginx image and trying to expose only port 443 in the following way:
docker run -itd --name=nginx-test --publish=443:443 nginx
But I can see using docker ps -a that the container exposes port 80 as well:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ddc0bca08acc nginx "nginx -g 'daemon off" 17 seconds ago Up 16 seconds 80/tcp, 0.0.0.0:443->443/tcp nginx-test
How can I disable it?
The EXPOSE instruction is in the Dockerfile the image is built from.
You need to create your own customized image for that.
To get the job done:
First, locate the Dockerfile for the official nginx (library) image.
Then edit the Dockerfile's EXPOSE instruction to list 443 only.
Now build your own modified image from the customized Dockerfile.
To answer your edited question:
Docker uses iptables. While you could manually update the firewall rules to make the service unavailable on a certain port, you would not be able to unbind the Docker proxy, so port 80 would still be consumed on the Docker host by the docker-proxy.
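As a sketch of checking what is actually bound on the host versus merely exposed (assumes a Linux host where Docker manages iptables):
# only published mappings show up here (443 in this example), not the EXPOSEd port 80
docker port nginx-test
# the NAT rules Docker maintains for published ports
sudo iptables -t nat -L DOCKER -n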
According to the nginx Docker image configuration, you can set this before the container starts by passing an environment variable:
docker run -itd -e NGINX_PORT=443 --name=nginx-test nginx
See:
using environment variables in nginx configuration
Then in your nginx config you can set:
listen ${NGINX_PORT};
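A minimal sketch of wiring that up with the official image's envsubst template support (available in nginx images since 1.19; the template path and its contents here are assumptions based on the image documentation):
# ./templates/default.conf.template would contain, for example:
#   server {
#       listen ${NGINX_PORT};
#       ...
#   }
docker run -d --name nginx-test \
  -e NGINX_PORT=443 \
  -p 443:443 \
  -v "$(pwd)/templates:/etc/nginx/templates:ro" \
  nginx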
There is a workaround to free the port (but not to unexpose it). I tried not publishing the port, but that didn't work and I got errors about the port being already in use anyway, until I found that the trick is to publish the exposed port but map it to a different one.
Let me explain with an example.
This will still try to use port 80:
docker run -p 443:443 nginx
But this will use 443 plus some other free port you pick:
docker run -p 443:443 -p <some free port>:80 nginx
You can do this in your commands, docker-compose files or Ansible playbooks to be able to start more than one instance on the same machine (e.g. nginx, which exposes port 80 by default).
I do this from docker-compose and ansible too.
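For example, a sketch of running two nginx containers side by side by remapping the exposed port 80 to different free host ports (the host ports 8081, 8082 and 8443 are arbitrary choices):
# first instance keeps 443 on the host and maps the exposed 80 away
docker run -d --name nginx-a -p 443:443 -p 8081:80 nginx
# second instance uses different host ports for both
docker run -d --name nginx-b -p 8443:443 -p 8082:80 nginx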

Connecting Docker Containers

Hello Helpful Developers,
I'm having issues connecting docker containers. I have built a subversion docker container and a mongo docker container.
docker run -d -p 3343:3343 -p 4434:4434 -p 18080:18080 --name svn-server mamohr/subversion-edge
docker run -p 27017:27017 --name my-mongo -d mongo
I'm able to hit http://x.x.x.x:18080/ from a browser, but unable to curl from the my-mongo instance. I can talk to each container from my development machine, but unable to talk from container to container.
I see things like --net=bridge, host, ????, but I'm getting confused.
Please help.....
Borrowing this schema from SDN Hub, imagine that C1 is your SVN container and C2 is your Mongo container: both containers are connected to the docker0 bridge and NATed to the external 192.168.50.16 network.
To connect from your Mongo container, check the bridge0 IP address of the SVN container:
# docker inspect <svn-container-name>
"Networks": {
"bridge0": {
"IPAddress": "172.17.0.19",
}
then curl directly to its bridge0 IP address:
curl http://172.17.0.19:18080/
To get going immediately, you can start your containers with --net=host, and then both containers and the host will be able to communicate.
Or you can use a link (--link) from the mongo container to the other container, as sketched below.
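A minimal --link sketch (note that --link is a legacy feature; the alias mongo and the single published port are illustrative, other ports from the question omitted for brevity):
docker run -d -p 27017:27017 --name my-mongo mongo
docker run -d -p 18080:18080 --link my-mongo:mongo --name svn-server mamohr/subversion-edge
# inside svn-server, the hostname "mongo" now resolves to the my-mongo container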
There is a lot to explain about Docker networking, and the Docker documentation is a good place to start.
Read the documentation at https://docs.docker.com/engine/userguide/networking/dockernetworks/
I would advise you to take a look at Docker Compose. I think it's the best way to manage a system that is composed of many containers.
Here is the official guide: https://docs.docker.com/compose/
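A minimal sketch of what that could look like for the two containers from the question (the single published port is illustrative; other ports omitted for brevity):
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  svn-server:
    image: mamohr/subversion-edge
    ports:
      - "18080:18080"
  my-mongo:
    image: mongo
EOF
docker-compose up -d
# on the network Compose creates, each service can reach the other by its service
# name, e.g. http://svn-server:18080 from inside the my-mongo container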
Docker containers by default start attached to a bridge network called bridge. You can do docker network ls to see the networks you have available. You can also create networks with different attributes, etc.
So in your case, both your containers are being started on the same default network, which means they should be able to communicate with each other just fine. In fact, if you only want your SVN server to be able to talk to Mongo (and don't need to connect to Mongo from your host), you don't even need to publish ports on the Mongo container. Containers on the same network can communicate with each other just fine without ports being published; publishing ports is for host-to-container connectivity.
So, what hostname/port are you using when you try to curl from the Mongo instance to your SVN instance? You should be using svn-server, which resolves to the SVN container via Docker's built-in DNS. Note that name resolution by container name only works on user-defined networks, not the default bridge, so you may need to create one as shown in the next answer.
Direct container to container networking via container name can be achieved with a user defined network.
docker network create mynet
docker run -d --net=mynet --name svn-server mamohr/subversion-edge
docker run -d --net=mynet --name my-mongo mongo
docker exec <svn-id> ping my-mongo
docker exec <mongo-id> ping svn-server
You should always be able to connect to mapped ports though, even in your current setup. The host runs a process that listens on that port, so any host IP should do.
$ docker run -d -p 8080:80 --net=mynet --name sleep busybox nc -lp 80 -e echo here!
63115ef88664f1186ea012e41138747725790383c741c12ca8675c3058383e68
$ ss -lntp | grep 8080
LISTEN 0 128 :::8080 :::* users:(("exe",pid=6287,fd=4))
$ docker run busybox nc <any_host_ip> 8080
here!
Please remember, a container is not available to the outside world by default.
When you ran the svn-server container, you published the container's port 18080 and mapped it to the host's port 18080, so you can access it at http://your_host_IP:18080.
From your two docker run commands, both the svn-server container and the my-mongo container are on the default bridge network. These two containers are connected through docker0, so they can reach each other directly by container IP address. Note that localhost inside a container refers to that container's own loopback, not to the other container or to the host.
If you try to access http://your_host_IP:18080 from within your my-mongo container, the request has to leave the bridge and come back in through the published port, and depending on your host's firewall rules it may be dropped.
So try curl http://<svn-server_container_IP>:18080 from the my-mongo container to reach the svn-server container (get the IP with docker inspect svn-server), or put both containers on a user-defined network and use the container name.

Can not access nginx container on a local windows machine

I'm running an nginx container on a Windows 10 machine. I've stripped it down to a bare minimum: the nginx image provided on Docker Hub. I'm running it using:
docker run --name ng -d -P nginx
This is the output of docker ps:
b5411ff47ca6 nginx "nginx -g 'daemon off" 22 seconds ago Up 21 seconds 0.0.0.0:32771->80/tcp, 0.0.0.0:32770->443/tcp ng
And this is the IP I'm getting when doing docker inspect ng: "IPAddress": "172.17.0.2"
So, the next thing I'm trying to do is access the nginx server from the host machine by opening http://172.17.0.2:32771 in a browser on the host machine. This is not working (host not found etc.).
Please advise
On Windows, if you are using Docker Toolbox, the IP you need is 192.168.99.100 (the IP of the Docker Toolbox VM). The IP you got is the IP of the container inside the VM, which is not directly accessible from Windows.
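You can confirm the VM's IP from the host (a sketch, assuming the default Docker Toolbox machine name):
docker-machine ip default   # typically prints 192.168.99.100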
Follow this article... https://docs.docker.com/get-started/part2/#run-the-app
And make sure your application itself is running, not just Docker.
docker run -d -p 4000:80 friendlyhello
After this, on the Windows 10 host machine:
Working: http://192.168.99.100:4000/
Not working: http://localhost:4000/
I used the following command to map the internal port 80 of the running container to port 82 on localhost:
docker run --name webserver2 -d -p 82:80 nginx
Accessing the nginx image at localhost:82 works great.
The port you access from your local web browser is the first number (82); the :80 after it is the port nginx listens on inside the container.
There is a lot of miscommunication out there on this issue; it's a simple port mapping between the host machine (the Windows machine you are running) and the container running in Docker.
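Putting the answers together, a minimal check might look like this (192.168.99.100 is the default Toolbox VM IP and may differ; with Docker Desktop or a native Linux host use localhost):
docker run --name webserver2 -d -p 82:80 nginx
curl http://localhost:82          # Docker Desktop / native Linux
curl http://192.168.99.100:82     # Docker Toolbox (VM IP)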
