We have a dockerized app that we imported onto a Compute Engine instance running Ubuntu 16.04.
It contains an nginx reverse proxy listening on port 80, and in /etc/hosts we've added 127.0.0.1 mydockerizedapp.
The GCE instance has an external IP address.
How can I set this up so that when I go to this external IP from a browser, I see the files served by the container's nginx?
You have to expose the container's port on the host machine by mapping it.
If you use the CLI: --port-mappings=80:80:TCP
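If you start the container yourself with plain Docker instead, the equivalent is a -p mapping; a minimal sketch (the image and container names are placeholders):

docker run -d --name my-app -p 80:80 my-nginx-image
# maps host port 80 to container port 80, so the container's nginx answers on the instance's external IP,
# provided the GCP firewall allows HTTP (e.g. the "Allow HTTP traffic" option on the instance)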
Related
When I run a local cluster using Docker for Windows' built-in Kubernetes server and install the Nginx ingress, the server is accessible on the entire local network. How can I bind the server to the loopback address (127.0.0.1) only, so that it is not accessible from other machines?
I tried setting the Nginx LoadBalancer service's loadBalancerIP to 127.0.0.1, but that didn't work.
I worked around this by switching to Kind (instead of Docker Desktop's K8s). It allows specifying listenAddress=127.0.0.1 when creating a cluster.
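For reference, a minimal sketch of a kind cluster config that publishes the ingress ports on the loopback address only (the port numbers and node layout are assumptions; adjust them to your ingress setup):

cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "127.0.0.1"  # publish on loopback only
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    listenAddress: "127.0.0.1"
    protocol: TCP
EOF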
I have a VM running on GCP with Docker installed on it. I have an NGINX web server running on it with a static reserved external/public IP address, and I can access this site by the public IP address. Now I have Artifactory running on this VM as a Docker container, and the whole idea is to access this Docker container (Artifactory, to be precise) using the same public IP address with a specific port, say 8081. I have configured a reverse proxy in the NGINX web server to pass requests to the internal IP address of my Artifactory Docker container, but the requests are not reaching it and I cannot access Artifactory.
The Docker container is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a4119d923hd8 docker.bintray.io/jfrog/artifactory-pro:latest "/entrypoint-artifac…" 57 minutes ago Up 57 minutes 0.0.0.0:8081->8081/tcp my-app-dev-artifactory-pro
Here are my reverse proxy settings:
server {
    listen 81;
    listen [::]:81;
    server_name [My External Public IP Address];

    location / {
        proxy_pass https://localhost:8081;
    }
}
Since you are using GCP to run this, I think that your issue is very simple. First, you do not need Nginx at all in order to reach Artifactory inside a Docker container. You should be able to reach it very easily using the IP and port (for example XX.XX.XX.XX:8081), and I can see that in the Nginx configuration you are listening on port 81, which is not the port Artifactory uses. I think the issue here is that either you did not allow HTTP communication to your GCP instance in the instance configuration, or you did not map the port in the "docker run" command.
You can see if the port is mapped by running "docker ps" and checking whether the "PORTS" section shows mapped ports. If not, you will need to map the port (8081 to 8081) and make sure your GCP instance allows HTTP traffic; then you will be able to reach Artifactory with IP:PORT.
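As a concrete sketch (the firewall rule name is an assumption; the image and container names are taken from the "docker ps" output above):

# publish Artifactory's port 8081 on the host
docker run -d --name my-app-dev-artifactory-pro -p 8081:8081 docker.bintray.io/jfrog/artifactory-pro:latest

# open port 8081 in the GCP firewall (the default "Allow HTTP" option only opens port 80)
gcloud compute firewall-rules create allow-artifactory-8081 --direction=INGRESS --allow=tcp:8081 --source-ranges=0.0.0.0/0

# Artifactory should then answer at http://XX.XX.XX.XX:8081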
Hopefully this is straightforward. I know how to bind a published port to the host's loopback interface only with
-p 127.0.0.1:$HOSTPORT:$CONTAINERPORT
The issue I'm encountering is that doing this prevents me from accessing the mapped host port over an SSH tunnel to the Docker host.
Is there a way to do this without having to block the port upstream from the Docker host somewhere?
Just make the target of your SSH tunnel localhost or 127.0.0.1.
ssh -L local-port:127.0.0.1:host-port docker-host
This forwards your local-port to 127.0.0.1:host-port on docker-host, i.e. the port you published with -p 127.0.0.1:$HOSTPORT:$CONTAINERPORT. There is no need to expose the container port to the external network.
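A concrete example, assuming the container's port 8080 is published as 127.0.0.1:8080 on the Docker host (image, user, and host names are placeholders):

# on the Docker host: publish only on the loopback interface
docker run -d -p 127.0.0.1:8080:8080 my-image

# from your workstation: tunnel a local port to the Docker host's loopback
ssh -L 9000:127.0.0.1:8080 user@docker-host

# the service is now reachable locally at http://localhost:9000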
I have a Docker container running on a CentOS host, with a host-port-to-container-port mapping. The container runs a web application.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a2f8ce62bb69 image1 "/bin/bash" 16 hours ago Up 16 hours 22/tcp, 0.0.0.0:7001->7001/tcp nostalgic_elion
I can access the application over HTTP using the host IP address and the mapped host port. However, if I replace the host IP with the container IP, I get "site cannot be reached" (ERR_CONNECTION_TIMED_OUT).
Is it possible to access the application over HTTP using the container IP and the exposed port? Unfortunately I do not have much background in networking.
By default Docker containers can make connections to the outside world, but the outside world cannot connect to containers. (https://docs.docker.com/v1.7/articles/networking/)
The docs, however, say it is possible to have the outside world talk to containers with some extra run options: -P or --publish-all=true|false. Refer to the options on the same Docker networking page.
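As a quick illustration of those options (the container name web1 is a placeholder; image1 is taken from the "docker ps" output above), -P publishes every port the image EXPOSEs on a random high host port:

# publish all EXPOSEd ports on random host ports
docker run -d --name web1 -P image1

# show which host ports were chosen
docker port web1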
If your only need is to share a different IP address with your teams, update your hosts file with the Docker container's IP address.
My /etc/hosts file:
container-ip localhost
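To find the container IP to put there, something like this works (the container name is taken from the earlier "docker ps" output; substitute your own):

# print the container's IP address(es) on its Docker network(s)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' nostalgic_elion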
I want to expose the Docker host IP to an outside host, like in this post.
There is a container running a web service on port 3000, which is mapped to its host (a VirtualBox docker-machine VM). How do I map this virtual network to my physical host, other than by using Nginx?
(P.S. my network is PPPoE)