Can't expose port with podman - networking

I am trying to walk through a tutorial that brings up an application in a docker/podman container instance.
I have attempted to use -p port:port and --expose port but neither seems to work.
I've ensured that I see the port in a listen state with ss -an.
I've made sure there isn't anything else trying to bind to that port.
No matter what I do, I can never hit localhost:port or ip_address:port.
I feel like I am fundamentally missing something but don't know where to look next.
Any suggestions for things to try or documentation to review would be appreciated.
Thanks,
Shawn

From the Podman reference for --expose:
Expose tells Podman that the container requests that port be open, but
does not forward it. All exposed ports will be forwarded to random
ports on the host if and only if --publish-all is also specified.
Per the Red Hat documentation for Containerfile:
EXPOSE indicates that the container listens on the specified network
port at runtime. The EXPOSE instruction defines metadata only; it does
not make ports accessible from the host. The -p option in the podman
run command exposes container ports from the host.
To publish a specific port, pass -p host_port:container_port to podman run.
Example:
podman run -d -p 8080:80 --name httpd-basic quay.io/httpd-parent:2.4
In the above example, port 80 is the port the container listens on, and we can access it from outside the container via port 8080 on the host.
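A quick way to verify the mapping from the host (assuming the httpd-basic container above is up and serving on its port 80):
podman port httpd-basic
# 80/tcp -> 0.0.0.0:8080
curl http://localhost:8080/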

Related

Docker Nginx disable default exposed port 80

Is there a way to disable the default EXPOSE 80 443 instruction in the nginx docker file without creating my own image?
I'm using Docker Nginx image and trying to expose only port 443 in the following way:
docker run -itd --name=nginx-test --publish=443:443 nginx
But I can see using docker ps -a that the container exposes port 80 as well:
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS          PORTS                          NAMES
ddc0bca08acc   nginx   "nginx -g 'daemon off"   17 seconds ago   Up 16 seconds   80/tcp, 0.0.0.0:443->443/tcp   nginx-test
How can I disable it?
The EXPOSE instruction is in the Dockerfile the image is built from.
You need to create your own customized image to change it.
To get the job done:
First, locate the Dockerfile for the official nginx (library) image.
Then edit the Dockerfile's EXPOSE instruction so it lists 443 only.
Now build your own modified image from the customized Dockerfile.
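A rough sketch of those steps (the repository URL is the official nginx image source; &lt;variant-directory&gt; is a placeholder, since the exact Dockerfile path varies by version and variant):
git clone https://github.com/nginxinc/docker-nginx.git
# edit the EXPOSE instruction in the variant's Dockerfile so it lists only 443
docker build -t my-nginx-443 docker-nginx/&lt;variant-directory&gt;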
To answer your edited question:
Docker uses iptables. While you could manually update the firewall rules to make the service unavailable at a certain port, you would not be able to unbind the Docker proxy, so port 80 would still be held by the docker-proxy on the host.
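You can see both of these for yourself on the Docker host (illustrative commands; exact output varies by setup):
sudo iptables -t nat -L DOCKER -n     # the DNAT rules Docker programs
sudo ss -lntp | grep docker-proxy     # the userland proxy holding the port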
According to the nginx Docker image configuration, you can set this before the container starts by passing an environment variable:
docker run -itd -e NGINX_PORT=443 --name=nginx-test nginx
See:
using environment variables in nginx configuration
Then in your nginx config you can set:
listen ${NGINX_PORT};
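Note that stock nginx does not expand environment variables in its config by itself; newer official images (1.19+) run envsubst over templates found in /etc/nginx/templates at startup. A minimal sketch, assuming that template mechanism:
mkdir -p templates
cat > templates/default.conf.template <<'EOF'
server {
    listen ${NGINX_PORT};
    location / {
        root /usr/share/nginx/html;
    }
}
EOF
docker run -itd -e NGINX_PORT=443 --publish=443:443 \
  -v "$(pwd)/templates:/etc/nginx/templates:ro" \
  --name=nginx-test nginx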
There is a workaround to free the port (but not to unexpose it). I tried not publishing the port, but that didn't work and I got errors about the port being already in use anyway. Eventually I found that the trick is to publish the exposed port, but mapped to a different one.
Let me explain with an example.
This will still try to use port 80:
docker run -p 443:443
But this will use 443 and some other free port you pick:
docker run -p 443:443 -p <some free port>:80
You can do this in your commands, docker-compose, or Ansible playbooks to be able to start more than one instance on the same machine (e.g. nginx, which exposes port 80 by default).
I do this from docker-compose and ansible too.
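For example, to run two nginx containers side by side (container names and host ports here are arbitrary; any free ports will do):
docker run -d --name web-a -p 443:443 -p 8081:80 nginx
docker run -d --name web-b -p 8443:443 -p 8082:80 nginx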

Docker publishing ports to multiple IPs

If I have a host with two IPs, say 192.168.0.2 and 192.168.0.3 and I run a container like this:
docker run -p 192.168.0.3:80:80 some_container
and then I run another container like this:
docker run -p 80:80 some_other_container
Then what happens?
A) Second command fails with "address already in use" OR
B) some_other_container has its port 80 exposed on 192.168.0.2 while some_container has its port 80 exposed on 192.168.0.3 ?
If it's A) then how can I make this work in such a way that "some_container" always has its port 80 exposed on 192.168.0.3 and "some_other_container" which is started with "-p" (cannot specify IP) always exposes its ports on 192.168.0.2 ?
The first question is easy enough to answer with a quick test:
$ docker run -itd -p 127.0.0.1:80:80 nginx
acdf03bd196d2241d4f776ff701eab6222cc80bfb1b4dd06bc65af0a3625e602
$ docker run -itd -p 80:80 nginx
b75938101d9c8a28b0d7d220b0046a4f8884fb82e9bc337c65d48a214bc3e54f
docker: Error response from daemon: driver failed programming external connectivity on endpoint lonely_kirch (c144b82f83c7ab1c527c25d9a6807d37069a7382181f9bf98bb1b1cd93976313): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use.
Unless you want to rewrite the Linux network stack (not recommended), I believe your options are to either pass the IP to your second run command, pass a default IP to the Docker daemon (dockerd --ip 192.168.0.2), or pick a different port.
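For the setup in the question, the first option would look like this, binding each container explicitly to one address:
docker run -d -p 192.168.0.3:80:80 some_container
docker run -d -p 192.168.0.2:80:80 some_other_container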

Docker with multiple exposed ports

I have a container with, say, 3 ports exposed: 1000 (nodejs-express), 1001 (python-flask) and 1002 (angular2-client). When I use
docker run --name test -d -p 1000:1000 -p 1001:1001 -p 1002:1002 docker_image
Only the Express server is working fine on the host computer. However, when I log into the container and do curl, all three servers are responding just fine.
Any ideas what is going on with multiple port bindings with docker/host?
Once you do the following:
EXPOSE the ports in the Dockerfile
set the -p flag for each port to publish externally
you just need to make sure that your services allow external connections.
E.g. for Python Flask (see http://dixu.me/2015/10/26/How_to_Allow_Remote_Connections_to_Flask_Web_Service/): the default listen address is localhost. Make sure it's listening on 0.0.0.0, as in the sketch below.
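A minimal illustration of that fix, using the port number from the question (Flask's built-in CLI supports binding to all interfaces):
# run inside the container where the flask app lives
flask run --host=0.0.0.0 --port=1001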

Connecting Docker Containers

Hello Helpful Developers,
I'm having issues connecting docker containers. I have built a subversion docker container and a mongo docker container.
docker run -d -p 3343:3343 -p 4434:4434 -p 18080:18080 --name svn-server mamohr/subversion-edge
docker run -p 27017:27017 --name my-mongo -d mongo
I'm able to hit http://x.x.x.x:18080/ from a browser, but unable to curl from the my-mongo instance. I can talk to each container from my development machine, but unable to talk from container to container.
I see things like --net=bridge, host, ????, but I'm getting confused.
Please help.....
Borrowing this schema from SDN hub, imagine that C1 is your SVN container and C2 is your Mongo container:
Both containers are connected to docker0 bridge and NATed to external 192.168.50.16 network.
To connect from your Mongo container, check the bridge0 IP address of the SVN container:
# docker inspect <svn-container-name>
"Networks": {
    "bridge0": {
        "IPAddress": "172.17.0.19",
    }
}
then curl directly to its bridge0 IP address:
curl http://172.17.0.19:18080/
To get you immediately going, you can start your containers with --net=host, and then both containers and the host will be able to communicate.
Or you can use a link (--link) from the mongo container to the other container.
There is a lot to explain about Docker networking, and the Docker documentation is a good place to start.
Read the documentation at https://docs.docker.com/engine/userguide/networking/dockernetworks/
I would advise you to take a look at Docker Compose. I think it's the best way to manage a system that is composed of many containers.
Here is the official guide: https://docs.docker.com/compose/
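A minimal sketch for the two containers in this question (the file contents are an assumption; with Compose, services can reach each other by service name):
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  svn-server:
    image: mamohr/subversion-edge
    ports:
      - "18080:18080"
  my-mongo:
    image: mongo
EOF
docker-compose up -d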
Docker containers by default start attached to the default bridge network. You can do docker network ls and see the networks you have available. You can also create networks with different attributes, etc.
So in your case, both your containers are being started on the same default network, which means they should be able to communicate with each other just fine. In fact, if you only want your SVN server to be able to talk to Mongo (and don't need to connect to Mongo from your host) you don't even need to expose ports on the Mongo container. Containers on the same network can communicate with each other just fine without ports being exposed. Exposing ports is to allow host > container connectivity.
So, what hostname/port are you using when you try to curl from the mongo instance to your SVN instance? You should be using svn-server as the hostname; note that Docker's built-in DNS resolution by container name only works on user-defined networks (see the next answer), not on the default bridge.
Direct container to container networking via container name can be achieved with a user defined network.
docker network create mynet
docker run -d --net=mynet --name svn-server mamohr/subversion-edge
docker run -d --net=mynet --name my-mongo mongo
docker exec <svn-id> ping my-mongo
docker exec <mongo-id> ping svn-server
You should always be able to connect to mapped ports, though, even in your current setup. The host runs a process that listens on that port, so any host IP should do.
$ docker run -d -p 8080:80 --net=mynet --name sleep busybox nc -lp 80 -e echo here!
63115ef88664f1186ea012e41138747725790383c741c12ca8675c3058383e68
$ ss -lntp | grep 8080
LISTEN 0 128 :::8080 :::* users:(("exe",pid=6287,fd=4))
$ docker run busybox nc <any_host_ip> 8080
here!
Please remember, a container is not reachable from the outside world by default.
When you ran the svn-server container, you published the container's 18080 port and mapped it to the host's 18080 port, so you can access it at http://your_host_IP:18080.
From your two docker run commands, both the svn-server container and the my-mongo container are on the default bridge network. These two containers are connected by docker0, so they can communicate with each other directly via their bridge IP addresses.
But if you tried to access http://your_host_IP:18080 from within your my-mongo container, your request would first be sent to docker0, and docker0 would drop it because you're trying to access the host, not the svn-server container.
So try curl http://svn-server_IP:18080 from the my-mongo container to access the svn-server container.
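To avoid hard-coding the address, you can look up the IP and test in one step (this assumes curl is available inside the my-mongo image):
SVN_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' svn-server)
docker exec my-mongo curl -s "http://$SVN_IP:18080/"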

Assigning vhosts to Docker ports

I have a wildcard DNS set up so that all web requests to a custom domain (*.foo) map to the IP address of the Docker host. If I have multiple containers running Apache (or Nginx) instances, each container maps the Apache port (80) to some external inbound port.
What I would like to do is make a request to container-1.foo, which is already mapped to the correct IP address (of the Docker host) via my custom DNS server, but proxy the default port 80 request to the correct Docker external port such that the correct Apache instance from the specified container is able to respond based on the custom domain. Likewise, container-2.foo would proxy to a second container's apache, and so on.
Is there a pre-built solution for this? Is my best bet to run an Nginx proxy on the Docker host, or should I write up a node.js proxy with the potential to manage Docker containers (start/stop/rebuild via the web), or...? What options do I have that would make using the Docker containers more like a natural event and not something with extraneous ports and container juggling?
This answer might be a bit late, but what you need is an automatic reverse proxy. I have used two solutions for that:
jwilder/nginx-proxy
Traefik
Over time, my preference has shifted to Traefik, mostly because it is well documented and maintained, and comes with more features (load balancing with different strategies and priorities, healthchecks, circuit breakers, automatic SSL certificates with ACME/Let's Encrypt, ...).
Using jwilder/nginx-proxy
When running Jason Wilder's nginx-proxy Docker image, you get an nginx server set up as a reverse proxy for your other containers, with no config to maintain.
Just run your other containers with the VIRTUAL_HOST environment variable and nginx-proxy will discover their ip:port and update the nginx config for you.
Let's say your DNS is set up so that *.test.local maps to the IP address of your Docker host; then just start the following containers to get a quick demo running:
# start the reverse proxy
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy
# start a first container for http://tutum.test.local
docker run -d -e "VIRTUAL_HOST=tutum.test.local" tutum/hello-world
# start a second container for http://deis.test.local
docker run -d -e "VIRTUAL_HOST=deis.test.local" deis/helloworld
Using Traefik
When running a Traefik container, you get a reverse proxy server set up which will reconfigure its forwarding rules based on the Docker labels found on your containers.
Let's say your DNS is set up so that *.test.local maps to the IP address of your Docker host; then just start the following containers to get a quick demo running:
# start the reverse proxy
docker run --rm -it -p 80:80 -v /var/run/docker.sock:/var/run/docker.sock traefik:1.7 --docker
# start a first container for http://tutum.test.local
docker run -d -l "traefik.frontend.rule=Host:tutum.test.local" tutum/hello-world
# start a second container for http://deis.test.local
docker run -d -l "traefik.frontend.rule=Host:deis.test.local" deis/helloworld
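With either demo, you can verify the routing before the wildcard DNS is in place by supplying the virtual host name in the Host header (run on the Docker host):
curl -H "Host: tutum.test.local" http://localhost/
curl -H "Host: deis.test.local" http://localhost/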
Here are two possible answers: (1) set up ports directly with Docker and use Nginx/Apache to proxy the vhosts, or (2) use Dokku to manage ports and vhosts for you (which is how I learned to do Method 1).
Method 1a (directly assign ports with docker)
Step 1: Set up nginx.conf or Apache on the host, with the desired port number assignments. This web server, running on the host, will do the vhost proxying. There's nothing special about this with regard to Docker; it is normal vhost hosting. The special part comes next, in Step 2, to make Docker use the correct host port number.
Step 2: Force port number assignments in Docker with "-p" to set Docker's port mappings, and "-e" to set custom environment variables within Docker, as follows:
port=12345 # <-- the vhost port setting used in nginx/apache
IMAGE=myapps/container-1
id=$(docker run -d -p $port:$port -e PORT=$port $IMAGE)
# -p $port:$port will establish a mapping of 12345->12345 from outside docker to
# inside of docker.
# Then, the application must observe the PORT environment variable
# to launch itself on that port; This is set by -e PORT=$port.
# Additional goodies:
echo $id # <-- the running id of your container
echo $id > /app/files/CONTAINER # <-- remember Docker id for this instance
docker ps # <-- check that the app is running
docker logs $id # <-- look at the output of the running instance
docker kill $id # <-- to kill the app
Method 1b Hard-coded application port
...if your application uses a hardcoded port, for example port 5000 (i.e. it cannot be configured via the PORT environment variable, as in Method 1a), then it can be hardcoded through Docker like this:
publicPort=12345
id=$(docker run -d -p $publicPort:5000 $IMAGE)
# -p $publicPort:5000 will map port 12345 outside of Docker to port 5000 inside
# of Docker. Therefore, nginx/apache must be configured to vhost proxy to 12345,
# and the application within Docker must be listening on 5000.
Method 2 (let Dokku figure out the ports)
At the moment, a pretty good option for managing Docker vhosts is Dokku. An upcoming option may be to use Flynn, but as of right now Flynn is just getting started and not quite ready. Therefore we go with Dokku for now: After following the Dokku install instructions, for a single domain, enable vhosts by creating the "VHOST" file:
echo yourdomain.com > /home/git/VHOST
# in your case: echo foo > /home/git/VHOST
Now, when an app is pushed via SSH to Dokku (see Dokku docs for how to do this), Dokku will look at the VHOST file and for the particular app pushed (let's say you pushed "container-1"), it will generate the following file:
/home/git/container-1/nginx.conf
And it will have the following contents:
upstream container-1 { server 127.0.0.1:49162; }
server {
    listen 80;
    server_name container-1.yourdomain.com;
    location / {
        proxy_pass http://container-1;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
When the server is rebooted, Dokku will ensure that Docker starts the application with the port mapped to its initially deployed port (49162 here), rather than being randomly assigned another port. To achieve this deterministic assignment, Dokku saves the initially assigned port into /home/git/container-1/PORT; on the next launch it sets the PORT environment variable to this value, and also maps Docker's port assignment to this port on both the host side and the app side. This is opposed to the first launch, when Dokku sets PORT=5000 and then figures out whatever random port Docker maps on the VPS side to 5000 on the app side. It's roundabout (and might even change in the future), but it works!
The way VHOST works, under the hood, is this: upon a git push of the app via SSH, Dokku executes hooks that live in /var/lib/dokku/plugins/nginx-vhosts. These hooks, which are also in the Dokku source code, are responsible for writing the nginx.conf files with the correct vhost settings. If you don't have this directory under /var/lib/dokku, try running dokku plugins-install.
With Docker, you want the internal ports to remain normal (e.g. 80) and figure out how to wire up the random host ports.
One way to handle them is with a reverse proxy like Hipache. Point your DNS at it, and then you can reconfigure the proxy as your containers come up and down. Take a look at http://txt.fliglio.com/2013/09/protyping-web-stuff-with-docker/ to see how this could work.
If you're looking for something more robust, you may want to take a look at "service discovery." (a look at service discovery with docker: http://txt.fliglio.com/2013/12/service-discovery-with-docker-docker-links-and-beyond/)
