Is there a way to disable the default EXPOSE 80 443 instruction in the nginx docker file without creating my own image?
I'm using the Docker nginx image and trying to expose only port 443, in the following way:
docker run -itd --name=nginx-test --publish=443:443 nginx
But I can see using docker ps -a that the container exposes port 80 as well:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ddc0bca08acc nginx "nginx -g 'daemon off" 17 seconds ago Up 16 seconds 80/tcp, 0.0.0.0:443->443/tcp nginx-test
How can I disable it?
The EXPOSE instruction is in the Dockerfile the image is built from.
You need to create your own customized image for that.
To get the job done:
First, locate the Dockerfile for the official nginx image (the library repository).
Then edit the Dockerfile's EXPOSE instruction to 443 only.
Now build your own modified image from the customized Dockerfile.
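A minimal sketch of those steps (the repository URL is where the official image's Dockerfiles live; the exact directory layout and the my-nginx-443 tag are assumptions for illustration):
git clone https://github.com/nginxinc/docker-nginx.git
cd docker-nginx/mainline/debian
# change the "EXPOSE 80" line in the Dockerfile to "EXPOSE 443", then:
docker build -t my-nginx-443 .
docker run -itd --name=nginx-test --publish=443:443 my-nginx-443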
To answer your edited question:
Docker uses iptables. While you could manually update the firewall rules to make the service unavailable at a certain port, you would not be able to unbind the Docker proxy. So port 80 will still be consumed on the Docker host by the docker-proxy.
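A sketch of the manual firewall approach, assuming a Docker version that provides the DOCKER-USER iptables chain (packets reach it after Docker's DNAT, so --dport refers to the container port):
iptables -I DOCKER-USER -p tcp --dport 80 -j DROP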
According to the nginx Docker image configuration, you can set this before the container starts by passing an environment variable:
docker run -itd -e NGINX_PORT=443 --name=nginx-test nginx
See:
Using environment variables in nginx configuration
Then in your nginx config you can set:
listen ${NGINX_PORT};
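A sketch of how that fits together: newer nginx images run envsubst at startup over files in /etc/nginx/templates, so mounting a template there (the filename default.conf.template is just an example) lets the listen directive pick up NGINX_PORT:
cat > default.conf.template <<'EOF'
server {
    listen ${NGINX_PORT};
    location / {
        root /usr/share/nginx/html;
    }
}
EOF
docker run -itd -e NGINX_PORT=443 \
  -v "$PWD/default.conf.template:/etc/nginx/templates/default.conf.template:ro" \
  --name=nginx-test -p 443:443 nginx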
There is a workaround to free the port (but not to unexpose it). I tried simply not publishing the port, but that didn't work: I got errors about the port being already in use anyway. Then I found that the trick is to publish the exposed port, but mapped to a different one.
Let me explain with an example.
This will still try to use port 80:
docker run -p 443:443 <image>
But this will use 443 plus some other free port you pick for 80:
docker run -p 443:443 -p <some free port>:80 <image>
You can do this in your commands, docker-compose files, or Ansible playbooks to be able to start more than one instance on the same machine (e.g. nginx, which exposes port 80 by default).
I do this from docker-compose and Ansible too.
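A concrete sketch with the nginx image, where host port 8080 is an arbitrary free port standing in for <some free port>:
docker run -d -p 443:443 -p 8080:80 nginx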
Related
I am trying to walk through a tutorial that brings up an application in a docker/podman container instance.
I have attempted to use -p port:port and --expose port but neither seems to work.
I've ensured that I see the port in a listen state with ss -an.
I've made sure there isn't anything else trying to bind to that port.
No matter what I do, I can never hit localhost:port or ip_address:port.
I feel like I am fundamentally missing something but don't know where to look next.
Any suggestions for things to try or documentation to review would be appreciated.
Thanks,
Shawn
Expose: (Reference)
Expose tells Podman that the container requests that port be open, but
does not forward it. All exposed ports will be forwarded to random
ports on the host if and only if --publish-all is also specified
As per the Red Hat documentation for Containerfile,
EXPOSE indicates that the container listens on the specified network
port at runtime. The EXPOSE instruction defines metadata only; it does
not make ports accessible from the host. The -p option in the podman
run command exposes container ports from the host.
To specify the port number:
The -p option in the podman run command exposes container ports from
the host.
Example:
podman run -d -p 8080:80 --name httpd-basic quay.io/httpd-parent:2.4
In the example above, port 80 is the port the container listens on (exposes), and we can access it from outside the container via port 8080 on the host.
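A quick check from the host once the container is up:
curl http://localhost:8080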
I'm running an nginx container on a windows 10 machine. I've stripped it down to a bare minimum - an nginx image provided in the Docker hub. I'm running it using:
docker run --name ng -d -P nginx
This is the output of docker ps:
b5411ff47ca6 nginx "nginx -g 'daemon off" 22 seconds ago Up 21 seconds 0.0.0.0:32771->80/tcp, 0.0.0.0:32770->443/tcp ng
And this is the IP I'm getting when doing docker inspect ng: "IPAddress": "172.17.0.2"
So, the next thing I'm trying to do is access the Nginx server from the host machine by opening http://172.17.0.2:32771 in browser of the host machine. This is not working (host not found etc).
Please advise
On Windows, you are using Docker Toolbox, and the IP you need is 192.168.99.100 (the IP of the Docker Toolbox VM). The IP you got is the IP of the container inside the VM, which is not directly accessible from Windows.
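So, with the docker ps output above, something like this from the Windows host should work (the IP is the Docker Toolbox default):
curl http://192.168.99.100:32771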
Follow this article: https://docs.docker.com/get-started/part2/#run-the-app
And make sure your application is running, not just Docker.
docker run -d -p 4000:80 friendlyhello
After this, on the Windows 10 host machine:
Worked: http://192.168.99.100:4000/
Not working: http://localhost:4000/
I used the following command to map the internal port 80 of the running container to port 82 on localhost:
docker run --name webserver2 -d -p 82:80 nginx
Accessing the nginx image on localhost:82 works great.
The port you access from your local web browser is the first number, before the :80, which is the port nginx listens on inside the container.
There is a lot of misinformation out there on this issue; it's a simple port mapping between the host machine (the Windows machine you are running) and the container running on Docker.
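A quick sanity check of that mapping from the host, assuming the webserver2 container above is running:
curl http://localhost:82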
I have a running nginx container: # docker run --name mynginx1 -P -d nginx;
And got its PORT info by docker ps: 0.0.0.0:32769->80/tcp, 0.0.0.0:32768->443/tcp
Then I could get a response from within the container (id: c30991a04b2f):
docker exec -i -t c3099 bash
curl http://localhost => this returns the default index.html page content; it works.
However, when I run curl http://localhost:32769 outside of the container, I get this:
curl: (7) failed to connect to localhost port 32769: Connection refused
I am running on a Mac with Docker version 1.9.0, nginx latest.
Does anyone know what causes this? Any help? Thank you.
If you are on OSX, you are probably using a VirtualBox VM for your docker environment.
Make sure you have forwarded your port 32769 to your actual host (the mac), in order for that port to be visible from localhost.
This is valid for the old boot2docker, or the new docker machine.
VBoxManage controlvm "boot2docker-vm" natpf1 "tcp-port32769,tcp,,32769,,32769"
VBoxManage controlvm "boot2docker-vm" natpf1 "udp-port32769,udp,,32769,,32769"
(natpf1 with controlvm if the VM is running, --natpf1 with modifyvm if the VM is stopped)
(replace "boot2docker-vm" by the name of your VM: see docker-machine ls)
I would recommend not using -P, but rather a static port mapping -p xxx:80 -p yyy:443.
That way, you can do that port forwarding once, using fixed values.
Of course, you can access the VM directly through docker-machine ip vmname
curl http://$(docker-machine ip vmname):32769
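For example, with a fixed mapping chosen up front (the host ports 8080/8443 here are arbitrary choices):
docker run -d --name mynginx1 -p 8080:80 -p 8443:443 nginx
VBoxManage controlvm "boot2docker-vm" natpf1 "tcp-port8080,tcp,,8080,,8080"
curl http://localhost:8080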
Solved. I misunderstood how Docker port mapping works.
Since I'm using a Mac, the host for the nginx container is a VM; 0.0.0.0:32769->80/tcp maps port 80 of the container to port 32769 of the VM.
solution:
docker-machine ip vm-name => 192.168.99.xx
curl http://192.168.99.xx:32769
Not exactly an answer to your question, but I spent some time trying to figure out a similar thing in the context of "why is my docker container not connecting to elasticsearch on localhost:9200", and this was the first S.O. question that popped up, so I hope it helps some other googling person.
If you are linking containers together (e.g. docker run --rm --name web2 --link db:db training/webapp env)
... then Docker adds environment variables:
DB_NAME=/web2/db
DB_PORT=tcp://172.17.0.5:5432
DB_PORT_5432_TCP=tcp://172.17.0.5:5432
DB_PORT_5432_TCP_PROTO=tcp
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
... and also updates your /etc/hosts
# /etc/hosts
#...
172.17.0.9 db
so you can technically connect to it with ping db
https://docs.docker.com/v1.8/userguide/dockerlinks/
so for Elasticsearch it is
# /etc/hosts
# ...
172.17.0.28 elasticsearch f9db83d0dfb5 ecs-awseb-qa-3Pobblecom-env-f7yq6jhmpm-10-elasticsearch-fcbfe5e2b685d0984a00
so wget elasticsearch:9200 will work
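A sketch of the same lookup from a freshly linked container (assuming a running container named elasticsearch):
docker run --rm --link elasticsearch:elasticsearch busybox wget -qO- http://elasticsearch:9200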
I have an nginx docker container and a webapp container successfully running and talking to each other.
The nginx container listens on port 80, and uses proxy_pass to direct traffic to the IP of the webapp container.
upstream app_humansio {
server humansio:8080 max_fails=3 fail_timeout=30s;
}
"humansio" is set in the /etc/hosts file by docker because I've started nginx with --link humansio:humansio. The webapp container (humansio) is always exposing 8080.
The problem is, when I reload the webapp container, the link to the nginx container breaks and I need to restart that as well. Is there any way I can do this differently so I don't need to restart the nginx container when the webapp container reloads?
--
I've tried connecting them manually by using a common port (8001 on both), but since each container actually reserves that port, the second container cannot use it as well.
Thanks!
I prefer to run the proxy (nginx or haproxy) directly on the host for this reason.
But an option is to "Link via an Ambassador Container" https://docs.docker.com/articles/ambassador_pattern_linking/
https://www.digitalocean.com/community/tutorials/how-to-use-the-ambassador-pattern-to-dynamically-configure-services-on-coreos
If you don't want to restart your proxy container whenever you have to restart one of the proxied ones (e.g. fig), you could take a look at the autoupdated proxy configuration approach: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
If you use a modern version of Docker, the links in the nginx container to your web service probably get updated (you can check with docker exec -ti nginx bash, then cat /etc/hosts). The problem is that nginx doesn't consult /etc/hosts every time: it caches the IP, and when the IP changes it gets lost. docker kill -s HUP nginx, which makes nginx reload its config without a restart, helps too.
I have the same problem. I used to start my services with systemd unit files, and when you make one service (nginx) dependent on another (the webapp) and then restart the webapp, systemd is smart enough to restart nginx as well. Now I'm trying my luck with docker-compose, and restarting the webapp container confuses nginx.
I have a wildcard DNS set up so that all web requests to a custom domain (*.foo) map to the IP address of the Docker host. If I have multiple containers running Apache (or Nginx) instances, each container maps the Apache port (80) to some external inbound port.
What I would like to do is make a request to container-1.foo, which is already mapped to the correct IP address (of the Docker host) via my custom DNS server, but proxy the default port 80 request to the correct Docker external port such that the correct Apache instance from the specified container is able to respond based on the custom domain. Likewise, container-2.foo would proxy to a second container's apache, and so on.
Is there a pre-built solution for this? Is my best bet to run an Nginx proxy on the Docker host, or should I write up a node.js proxy with the potential to manage Docker containers (start/stop/rebuild via the web), or...? What options do I have that would make using the Docker containers more like a natural event and not something with extraneous ports and container juggling?
This answer might be a bit late, but what you need is an automatic reverse proxy. I have used two solutions for that:
jwilder/nginx-proxy
Traefik
With time, my preference is to use Traefik. Mostly because it is well documented and maintained, and comes with more features (load balancing with different strategies and priorities, healthchecks, circuit breakers, automatic SSL certificates with ACME/Let's Encrypt, ...).
Using jwilder/nginx-proxy
When running Jason Wilder's nginx-proxy Docker image, you get an nginx server set up as a reverse proxy for your other containers, with no config to maintain.
Just run your other containers with the VIRTUAL_HOST environment variable set, and nginx-proxy will discover their ip:port and update the nginx config for you.
Let's say your DNS is set up so that *.test.local maps to the IP address of your Docker host; then just start the following containers to get a quick demo running:
# start the reverse proxy
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy
# start a first container for http://tutum.test.local
docker run -d -e "VIRTUAL_HOST=tutum.test.local" tutum/hello-world
# start a second container for http://deis.test.local
docker run -d -e "VIRTUAL_HOST=deis.test.local" deis/helloworld
Using Traefik
When running a Traefik container, you get a reverse proxy server set up that will reconfigure its forwarding rules based on the Docker labels found on your containers.
Let's say your DNS is set up so that *.test.local maps to the IP address of your Docker host; then just start the following containers to get a quick demo running:
# start the reverse proxy
docker run --rm -it -p 80:80 -v /var/run/docker.sock:/var/run/docker.sock traefik:1.7 --docker
# start a first container for http://tutum.test.local
docker run -d -l "traefik.frontend.rule=Host:tutum.test.local" tutum/hello-world
# start a second container for http://deis.test.local
docker run -d -l "traefik.frontend.rule=Host:deis.test.local" deis/helloworld
Here are two possible answers: (1) set up ports directly with Docker and use Nginx/Apache to proxy the vhosts, or (2) use Dokku to manage ports and vhosts for you (which is how I learned to do Method 1).
Method 1a (directly assign ports with docker)
Step 1: Set up nginx.conf or Apache on the host, with the desired port number assignments. This web server, running on the host, will do the vhost proxying. There's nothing special about this with regard to Docker - it is normal vhost hosting. The special part comes next, in Step 2, to make Docker use the correct host port number.
Step 2: Force port number assignments in Docker with "-p" to set Docker's port mappings, and "-e" to set custom environment variables within Docker, as follows:
port=12345 # <-- the vhost port setting used in nginx/apache
IMAGE=myapps/container-1
id=$(docker run -d -p $port:$port -e PORT=$port $IMAGE)
# -p $port:$port will establish a mapping of 12345->12345 from outside docker to
# inside of docker.
# Then, the application must observe the PORT environment variable
# to launch itself on that port; this is set by -e PORT=$port.
# Additional goodies:
echo $id # <-- the running id of your container
echo $id > /app/files/CONTAINER # <-- remember Docker id for this instance
docker ps # <-- check that the app is running
docker logs $id # <-- look at the output of the running instance
docker kill $id # <-- to kill the app
Method 1b Hard-coded application port
...if your application uses a hardcoded port, for example port 5000 (i.e. it cannot be configured via the PORT environment variable, as in Method 1a), then it can be hardcoded through Docker like this:
publicPort=12345
id=$(docker run -d -p $publicPort:5000 $IMAGE)
# -p $publicPort:5000 will map port 12345 outside of Docker to port 5000 inside
# of Docker. Therefore, nginx/apache must be configured to vhost proxy to 12345,
# and the application within Docker must be listening on 5000.
Method 2 (let Dokku figure out the ports)
At the moment, a pretty good option for managing Docker vhosts is Dokku. An upcoming option may be to use Flynn, but as of right now Flynn is just getting started and not quite ready. Therefore we go with Dokku for now. After following the Dokku install instructions, for a single domain, enable vhosts by creating the "VHOST" file:
echo yourdomain.com > /home/git/VHOST
# in your case: echo foo > /home/git/VHOST
Now, when an app is pushed via SSH to Dokku (see Dokku docs for how to do this), Dokku will look at the VHOST file and for the particular app pushed (let's say you pushed "container-1"), it will generate the following file:
/home/git/container-1/nginx.conf
And it will have the following contents:
upstream container-1 { server 127.0.0.1:49162; }
server {
  listen 80;
  server_name container-1.yourdomain.com;
  location / {
    proxy_pass http://container-1;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $remote_addr;
  }
}
When the server is rebooted, Dokku will ensure that Docker starts the application with the port mapped to its initially deployed port (49162 here), rather than being randomly assigned another port. To achieve this deterministic assignment, Dokku saves the initially assigned port into /home/git/container-1/PORT; on the next launch it sets the PORT environment variable to this value, and also maps Docker's port assignments to this port on both the host side and the app side. This is opposed to the first launch, when Dokku sets PORT=5000 and then figures out whatever random port Docker maps on the VPS side to 5000 on the app side. It's roundabout (and might even change in the future), but it works!
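A sketch of that deterministic relaunch, reusing the hypothetical myapps/container-1 image from Method 1 (this illustrates the mechanism described above, not Dokku's actual code):
port=$(cat /home/git/container-1/PORT)   # e.g. 49162, saved at first deploy
id=$(docker run -d -p $port:$port -e PORT=$port myapps/container-1)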
The way VHOST works, under the hood, is: upon doing a git push of the app via SSH, Dokku will execute hooks that live in /var/lib/dokku/plugins/nginx-vhosts. These hooks are also located in the Dokku source code here and are responsible for writing the nginx.conf files with the correct vhost settings. If you don't have this directory under /var/lib/dokku, then try running dokku plugins-install.
With Docker, you want the internal ports to remain normal (e.g. 80) and figure out how to wire up the randomly assigned external ports.
One way to handle them is with a reverse proxy like Hipache. Point your DNS at it, and then you can reconfigure the proxy as your containers come up and down. Take a look at http://txt.fliglio.com/2013/09/protyping-web-stuff-with-docker/ to see how this could work.
If you're looking for something more robust, you may want to take a look at "service discovery." (a look at service discovery with docker: http://txt.fliglio.com/2013/12/service-discovery-with-docker-docker-links-and-beyond/)