I've just started working with docker containers and I'm not sure whether this is possible yet. Is it possible to publish a docker container based on the URL or on a specific host header? For example, two containers both reachable at IP 192.168.1.2 on port 80, where the first container serves the website abc.com and the second serves xyz.com.
Can we use a reverse proxy server, e.g. NGINX (or any other you suggest), to direct web requests to the respective docker container?
No, you can't have two containers running at IP 192.168.1.2 and port 80, but you can have a reverse proxy running at IP 192.168.1.2 and port 80 that routes to containers running at different IP+port combinations.
Yes, you can do that: run an nginx container (or nginx on the host) and it will proxy each request to the right container based on the requested server name.
Map port 80 of the nginx container to the host, connect the other containers to it, and then configure nginx to do the proxying.
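For example, here is a minimal sketch of such an nginx configuration, assuming the two containers are reachable from nginx under the hypothetical names abc_site and xyz_site on port 80:

server {
    listen 80;
    server_name abc.com;
    location / {
        # Requests whose Host header is abc.com go to the first container
        proxy_pass http://abc_site:80;
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name xyz.com;
    location / {
        # Requests whose Host header is xyz.com go to the second container
        proxy_pass http://xyz_site:80;
        proxy_set_header Host $host;
    }
}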
Here is a post about how to do it:
http://www.yannmoisan.com/docker.html
If you want to generate the nginx configuration dynamically as you start/stop docker containers, consider the jwilder/nginx-proxy project. It gives you more flexibility in how you assign your domains.
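The basic nginx-proxy pattern looks like this (the web image names are placeholders); the proxy watches the docker socket and regenerates its configuration whenever a container with a VIRTUAL_HOST environment variable starts or stops:

docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
docker run -d -e VIRTUAL_HOST=abc.com abc-image
docker run -d -e VIRTUAL_HOST=xyz.com xyz-image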
I need two containers in one task definition: one for WordPress and another for nginx, where traffic routes from nginx to WordPress. This needs to be done using AWS Fargate.
How do I connect the two containers so that nginx sends traffic to the WordPress container?
In AWS Fargate, all containers in the same task can access each other at 127.0.0.1 or localhost over their respective ports.
Let's say you have Nginx configured to listen on port 80 and WordPress configured to listen on port 9000. To set up Nginx and WordPress as you describe, you would have your Application Load Balancer forward traffic to the Nginx container on port 80, and you would configure Nginx to forward traffic to WordPress at 127.0.0.1:9000.
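A minimal sketch of the nginx side, assuming WordPress answers plain HTTP on port 9000 (if it instead runs as php-fpm on 9000, you would use fastcgi_pass rather than proxy_pass):

server {
    listen 80;
    location / {
        # Sibling containers in the same Fargate task share localhost
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
    }
}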
We have a dockerized app that we imported onto a Compute Engine instance running Ubuntu 16.04.
It contains an nginx reverse proxy running on port 80, and in /etc/hosts we've added 127.0.0.1 mydockerizedapp.
The GCE instance has an external IP address.
How can I set things up so that when I browse to this external IP, I see the files served by the nginx container?
You have to expose your container's ports on the host machine by mapping them.
If you use the CLI: --port-mappings=80:80:TCP
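If you run the container with plain docker instead, the equivalent mapping would be (the image name is a placeholder):

docker run -d -p 80:80 my-dockerized-app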
I've created a docker swarm with a website inside the swarm, publishing port 8080 externally. I want to consume that port using Nginx running outside the swarm on port 80, which will perform server-name resolution and host static files.
The problem is that swarm automatically publishes port 8080 to the internet using iptables, and I don't know whether it is possible to allow only the local nginx instance to use it. Currently users can access the site on both port 80 and port 8080, and the second one is broken (no images).
I've tried playing with ufw, but it's not working. Manually changing iptables would also be a nightmare, as I would have to do it on every swarm node after every update. Any solutions?
EDIT: I can't use the same network for the swarm and for nginx outside the swarm, because an overlay network is incompatible with normal single-host containers. In theory I could put nginx in the swarm, but I prefer to keep it separate, on the same host that holds the static files.
No, right now you are not able to bind a published port to a specific IP (not even to 127.0.0.1) or to an interface (like the loopback interface lo). But there are two issues dealing with this problem:
github.com - moby/moby - Assigning service published ports to IP
github.com - moby/moby - docker swarm mode: ports on 127.0.0.1 are exposed to 0.0.0.0
So you could subscribe to them and/or participate in the discussion.
Further reading:
How to bind the published port to specific eth[x] in docker swarm mode
Yes, if the containers are in the same network you don't need to publish ports for containers to access each other.
In your case you can publish port 80 from the nginx container and not publish any ports from the website container. Nginx can still reach the website container on port 8080 as long as both containers are in the same Docker network.
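As a sketch with placeholder names, assuming the website container listens on 8080, it could look like this; inside the nginx configuration you would then use proxy_pass http://website:8080, since Docker's embedded DNS resolves container names on a user-defined network:

docker network create webnet
docker run -d --name website --network webnet website-image   # no -p: not reachable from outside
docker run -d --name proxy --network webnet -p 80:80 nginx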
"Temp" solution that I am using is leaning on alpine/socat image.
Idea:
run an additional lightweight container outside of the swarm with a port-forwarding tool (socat is used here)
attach that container to the same network as the swarm service we want to expose only to localhost
publish the helper container's port only on localhost (127.0.0.1:HOST_PORT:INTERNAL_PORT)
use socat in that container to forward traffic to the swarm service
Command:
docker run --name socat-elasticsearch -p 127.0.0.1:9200:9200 --network elasticsearch --rm -it alpine/socat tcp-listen:9200,reuseaddr,fork tcp:elasticsearch:9200
The -it flags can be removed once you've confirmed everything works for you; add -d to run it daemonized.
Daemon command:
docker run --name socat-elasticsearch -d -p 127.0.0.1:9200:9200 --network elasticsearch --rm alpine/socat tcp-listen:9200,reuseaddr,fork tcp:elasticsearch:9200
My use case:
Sometimes I need to access ES directly, so this approach is just fine for me.
Would like to see some docker's native solution, though.
P.S. Docker's auto-restart feature can be used if this needs to stay up and running after a host machine restart.
See restart policy docs here:
https://docs.docker.com/engine/reference/commandline/run/#restart-policies---restart
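For example, a restart policy could be added to the command above like this; note that --rm has to be dropped, because docker rejects combining --rm with a restart policy:

docker run --name socat-elasticsearch -d --restart unless-stopped -p 127.0.0.1:9200:9200 --network elasticsearch alpine/socat tcp-listen:9200,reuseaddr,fork tcp:elasticsearch:9200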
I'm trying to deploy my Spring Boot app on DigitalOcean. I built a docker image and ran it on the server, and everything is fine (docker run -p 8080:8080 hub_user/docker_image). I have my own domain and IP address (the access URL for my application is myapp.com:8080). But how can I hide the port number in the URL? How can I use my domain without port 8080?
If you are using HTTP, which I suppose you are, the default port is 80, so myapp.com is equivalent to myapp.com:80. Map host port 80 to container port 8080:
docker run -p 80:8080 hub_user/docker_image
This isn't really a docker question per se. As AxelWass says, port 80 is the default port for HTTP (browsers automatically connect to it when you visit myapp.com). Your application is actually running inside the container on port 8080, so if you map 8080:8080, docker forwards traffic arriving at the host on port 8080 (the first number) to port 8080 inside your container (the second number).
Now, if you want traffic coming to the server on port 80 (which all web traffic does by default) to be forwarded to your container, you need to map it as 80:8080.
I have a docker container running on a CentOS host with a host-port:container-port mapping. The docker container runs a web application.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a2f8ce62bb69 image1 "/bin/bash" 16 hours ago Up 16 hours 22/tcp, 0.0.0.0:7001->7001/tcp nostalgic_elion
I can access the application over HTTP using the host IP address and the mapped host port. However, if I replace the host IP with the container IP, I get a "site cannot be reached" error with ERR_CONNECTION_TIMED_OUT.
Is it possible to access the application over HTTP using the container IP and exposed port? Unfortunately I don't have much networking background.
By default Docker containers can make connections to the outside world, but the outside world cannot connect to containers. (https://docs.docker.com/v1.7/articles/networking/)
The docs, however, say it is possible to have the outside world talk to containers with some extra run options: -P or --publish-all=true|false. See the options on the same docker networking page.
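A quick sketch using the image name from your docker ps output (the container name is a placeholder):

docker run -d -P --name webapp image1   # -P publishes every EXPOSEd port to a random high host port
docker port webapp                      # shows which host ports were assigned

You would then reach the application via the host IP and the assigned host port, not via the container IP.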
If your only need is to share different IP addresses with your teams, update your hosts file with the docker containers' IP addresses:
My /etc/hosts file:
container-ip localhost
container-ip localhost
container-ip localhost