If I have a host with two IPs, say 192.168.0.2 and 192.168.0.3 and I run a container like this:
docker run -p 192.168.0.3:80:80 some_container
and then I run another container like this:
docker run -p 80:80 some_other_container
Then what happens?
A) Second command fails with "address already in use" OR
B) some_other_container has its port 80 exposed on 192.168.0.2 while some_container has its port 80 exposed on 192.168.0.3 ?
If it's A), then how can I make this work so that "some_container" always has its port 80 exposed on 192.168.0.3, while "some_other_container", which is started with a plain "-p" (no IP specified), always exposes its ports on 192.168.0.2?
The first question is easy enough to answer with a quick test:
$ docker run -itd -p 127.0.0.1:80:80 nginx
acdf03bd196d2241d4f776ff701eab6222cc80bfb1b4dd06bc65af0a3625e602
$ docker run -itd -p 80:80 nginx
b75938101d9c8a28b0d7d220b0046a4f8884fb82e9bc337c65d48a214bc3e54f
docker: Error response from daemon: driver failed programming external connectivity on endpoint lonely_kirch (c144b82f83c7ab1c527c25d9a6807d37069a7382181f9bf98bb1b1cd93976313): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use.
Unless you want to rewrite the Linux network stack (not recommended), I believe your options are to either pass the IP to your second run command, pass a default bind IP to the Docker daemon (dockerd --ip 192.168.0.2), or pick a different port.
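A minimal sketch of the first two options (the IPs are the ones from the question; adjust for your host):
$ docker run -d -p 192.168.0.3:80:80 some_container
$ docker run -d -p 192.168.0.2:80:80 some_other_container
If some_other_container really must keep a bare -p 80:80, start the daemon with a default bind IP instead:
$ dockerd --ip 192.168.0.2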
I am trying to walk through a tutorial that brings up an application in a docker/podman container instance.
I have attempted to use -p port:port and --expose port but neither seems to work.
I've ensured that I see the port in a listen state with ss -an.
I've made sure there isn't anything else trying to bind to that port.
No matter what I do, I can never hit localhost:port or ip_address:port.
I feel like I am fundamentally missing something but don't know where to look next.
Any suggestions for things to try or documentation to review would be appreciated.
Thanks,
Shawn
Expose: (Reference)
Expose tells Podman that the container requests that port be open, but
does not forward it. All exposed ports will be forwarded to random
ports on the host if and only if --publish-all is also specified
As per the Red Hat documentation for Containerfile,
EXPOSE indicates that the container listens on the specified network
port at runtime. The EXPOSE instruction defines metadata only; it does
not make ports accessible from the host. The -p option in the podman
run command exposes container ports from the host.
To publish a container port on a specific host port, use the -p option in the podman run command.
Example:
podman run -d -p 8080:80 --name httpd-basic quay.io/httpd-parent:2.4
In the example above, port 80 is the port the container listens on, and we can reach it from outside the container via port 8080 on the host.
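To see the --expose / --publish-all behaviour from the quote above in action, a small sketch (the container name is made up; the host port is whatever Podman picks at random):
podman run -d --expose 80 -P --name httpd-exposed quay.io/httpd-parent:2.4
podman port httpd-exposed
The second command prints the randomly assigned host port, e.g. 80/tcp -> 0.0.0.0:<some high port>.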
I have a wordpress official container with a dock port 80 mapped to 32795 external... when I go to administration area of wordpress I get this error:
Important: HTTP Loopback Connections are not enabled on this server. If you need to contact your web host, tell them that when PHP tries to connect back to the site at the URL http://localhost:32795/wp-admin/admin-ajax.php and it gets the error cURL error 7: Failed to connect to localhost port 32795: Connection refused. There may be a problem with the server configuration (eg local DNS problems, mod_security, etc) preventing connections from working properly.
I think the problem is that the site inside the container tries to communicate with port 32795 instead of 80, but it cannot, because that port is only visible from outside the container...
I created a script inside the site with phpinfo, and I checked the loopback connections are on...
Is there a solution for this? I have Docker on Windows with Kitematic
thanks
I had a similar problem running WordPress with Nginx on Docker Desktop for Windows. I needed to add an entry to the container's hosts file that pointed my local.example.com domain at my ingress-nginx controller so that WordPress' loopback requests would work. Although my setup might be slightly different, this might help you.
Open /Windows/System32/drivers/etc/hosts and copy the IP address that's next to host.docker.internal. Then add an entry to the container's hosts file on startup that ties the domain to the host's IP by doing one of the following, where IP is the address you copied next to host.docker.internal (a combined docker run example follows the three options):
Docker argument:
--add-host="local.example.com:IP"
Docker compose:
extra_hosts:
- "local.example.com:IP"
Kubernetes:
hostAliases:
- ip: "IP"
hostnames:
- "local.example.com"
The problem is that inside the container the open port is 80, while Docker exposes 32795 for external connections.
The WordPress configuration is pointing to port 32795. You could publish port 80 instead by running docker run -p 80:80 and change the WordPress configuration to use port 80.
If you can't use port 80, a slightly more complicated solution is to use iptables port forwarding inside the container.
Example
➜ ~ docker run -d --cap-add=NET_ADMIN --cap-add=NET_RAW -p 5000:80 nginx
835b039cc92bd9f32b960181bf370d39869c88f5a757423966b467fe01ac219e
➜ ~ docker exec -it 835b039cc92bd9 bash
root@835b039cc92b:/# apt update -qqq ; apt install iptables -yqqq
root@835b039cc92b:/# iptables -t nat -A OUTPUT -o lo -p tcp --dport 5000 -j REDIRECT --to-port 80
root@835b039cc92b:/# apt install telnet -yqqq
root@835b039cc92b:/# telnet localhost 5000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
root@835b039cc92b:/# exit
# from outside the container
➜ ~ telnet localhost 5000
Trying ::1...
Connected to localhost.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
I have a container with, say, 3 ports exposed: 1000 (nodejs-express), 1001 (python-flask) and 1002 (angular2-client). When I use
docker run --name test -d -p 1000:1000 -p 1001:1001 -p 1002:1002 docker_image
Only the Express server is working fine on the host computer. However, when I log into the container and do curl, all three servers are responding just fine.
Any ideas what is going on with multiple port bindings with docker/host?
Once you do the following:
EXPOSE the ports in the Dockerfile
set the -p flag for each port to publish externally
you just need to make sure that your services allow external connections.
E.g. for Python Flask (http://dixu.me/2015/10/26/How_to_Allow_Remote_Connections_to_Flask_Web_Service/), the default is to listen only on localhost. Make sure it's listening on 0.0.0.0.
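A minimal sketch of what that might look like for the Flask service on port 1001, assuming it is launched via the Flask CLI (adjust for however your app is actually started):
flask run --host=0.0.0.0 --port=1001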
I have a running nginx container: # docker run --name mynginx1 -P -d nginx;
And got its PORT info by docker ps: 0.0.0.0:32769->80/tcp, 0.0.0.0:32768->443/tcp
Then I could get response from within the container(id: c30991a04b2f):
docker exec -i -t c3099 bash
curl http://localhost => which return the default index.html page content, it works
However, when I run curl http://localhost:32769 outside of the container, I get this:
curl: (7) failed to connect to localhost port 32769: Connection refused
I am running on a mac with docker version 1.9.0; nginx latest
Does anyone know what causes this? Any help? Thank you
If you are on OSX, you are probably using a VirtualBox VM for your docker environment.
Make sure you have forwarded your port 32769 to your actual host (the mac), in order for that port to be visible from localhost.
This is valid for the old boot2docker, or the new docker machine.
VBoxManage controlvm "boot2docker-vm" --natpf1 "tcp-port32769,tcp,,32769,,32769"
VBoxManage controlvm "boot2docker-vm" --natpf1 "udp-port32769,udp,,32769,,32769"
(controlvm if the VM is running, modifyvm if the VM is stopped)
(replace "boot2docker-vm" by the name of your VM: see docker-machine ls)
I would recommend not using -P, but a static port mapping -p xxx:80 -p yyy:443.
That way, you can do that port forwarding once, using fixed values.
Of course, you can access the VM directly through docker-machine ip vmname
curl http://$(docker-machine ip vmname):32769
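For example, a sketch with fixed host ports (8080 and 8443 are arbitrary picks here), so the VirtualBox forwarding rules only ever need to be created once:
docker run --name mynginx1 -d -p 8080:80 -p 8443:443 nginx
VBoxManage modifyvm "boot2docker-vm" --natpf1 "tcp-port8080,tcp,,8080,,8080"
VBoxManage modifyvm "boot2docker-vm" --natpf1 "tcp-port8443,tcp,,8443,,8443"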
Solved. I misunderstood how docker port mapping works.
Since I'm using a Mac, the host for the nginx container is a VM; 0.0.0.0:32769->80/tcp maps port 80 of the container to port 32769 of the VM.
solution:
docker-machine ip vm-name => 192.168.99.xx
curl http://192.168.99.xx:32769
This is not exactly an answer to your question, but I spent some time trying to figure out a similar thing in the context of "why is my docker container not connecting to elasticsearch on localhost:9200", and this was the first S.O. question that popped up, so I hope it helps some other googling person.
If you are linking containers together (e.g. docker run --rm --name web2 --link db:db training/webapp env)
... then Docker adds environment variables:
DB_NAME=/web2/db
DB_PORT=tcp://172.17.0.5:5432
DB_PORT_5432_TCP=tcp://172.17.0.5:5432
DB_PORT_5432_TCP_PROTO=tcp
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
... and also updates your /etc/hosts
# /etc/hosts
#...
172.17.0.9 db
so you can technically connect to it, e.g. ping db
https://docs.docker.com/v1.8/userguide/dockerlinks/
so for Elasticsearch it is
# /etc/hosts
# ...
172.17.0.28 elasticsearch f9db83d0dfb5 ecs-awseb-qa-3Pobblecom-env-f7yq6jhmpm-10-elasticsearch-fcbfe5e2b685d0984a00
so wget elasticsearch:9200 will work
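A quick sketch of the same idea end to end (the image tag and app image name are only illustrative, and it assumes wget is available in the app image):
docker run -d --name elasticsearch elasticsearch:1.7
docker run --rm --link elasticsearch:elasticsearch my_app_image wget -qO- http://elasticsearch:9200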
I am using the latest version of boot2docker version 1.3.2, 495c19a on a windows 7 (SP1) 64 bit machine.
My docker container is running a Celery process which attempts to connect to a RabbitMQ service running on the same machine that boot2docker is running on.
The Celery process running within the docker container cannot connect to RabbitMQ and reports the following:
[2014-12-02 10:28:41,141: ERROR/MainProcess] consumer: Cannot connect
to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
Trying again in 2.00 seconds...
I have reason to believe this is a network-related issue, associated with routing from the container to the VirtualBox host, and from the host to the RabbitMQ service running on the local machine; I do not know how to configure this and I was wondering if anyone could advise me on how to proceed?
I tried setting up port 5672 in port forwarding but it didn't work (but I believe this is for incoming traffic to the VM, like boot2docker ssh).
I am running the container as docker run -i -t tagname
I am not specifying a host with -h when I run the container.
I'm sorry if this question appears rather clueless or if the answer appears obvious ... I appreciate any help!
Some additional information :
The routing table of the host VM is what boot2docker configured during installation as follows:
docker0 IP Address is 172.17.42.1
eth0 IP Address is 10.0.2.15
eth1 IP Address is 192.168.59.103
eth0 is attached to NAT (Adapter 1) in the VirtualBox VM network configuration.
Adapter 1 has port forwarding set up for ssh; default setting of host IP 127.0.0.1, host port 2022, guest port 22.
eth1 is attached to Host-only adapter (Adapter 2).
Both adapters are set to promiscuous mode (allow all).
The IP Address of the docker container is 172.17.0.33.
[2014-12-02 10:28:41,141: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused. Trying again in 2.00 seconds...
127.0.0.1 is a special IP address that means "me", and inside the container it means "me the container", so this is why it is not connecting to the outer host. So the first thing to do is change the IP address where you are trying to connect to Rabbit to that of the outer host where it is running.
Then you probably have to do something about routing, but let's take one step at a time.
As your RabbitMQ server is running on your Windows host, you need to tell your container that it should talk to that IP - which would probably be 192.168.59.3.
Most importantly, your container's 127.0.0.1 is only a loopback device for that container's own services - it doesn't even reach the boot2docker VM's ports.
You could set up an ambassador container that has --expose=80 and uses something like socat to forward all traffic from that container to your host (see svendowideit/ambassador). Then you'd --link that ambassador container to your current image.
But personally, I'd avoid that initially, and just configure your containerised app to talk to the real host's IP.
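A sketch of that last suggestion, assuming the app reads its broker URL from an environment variable such as CELERY_BROKER_URL (both the variable name and the 192.168.59.3 host-only IP are assumptions; use whatever your app and network actually use):
docker run -i -t -e CELERY_BROKER_URL="amqp://guest:guest@192.168.59.3:5672//" tagname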
You have to specify the ports for port redirection explicitly, and separately for boot2docker and Docker.
Please try this:
c:\>boot2docker init
c:\>boot2docker up
c:\>boot2docker ssh -L 0.0.0.0:5672:localhost:5672
docker@boot2docker:~$ docker run -it -p 5672:5672 tagname