Just for better understanding:
I have a Windows 7 machine with VirtualBox. The Docker machine is running and I deployed an nginx container to it. So far so good.
Now I just wonder...
All the documentation says that I just have to use:
docker run -d -p 8080:80 nginxImg
And then I can reach nginx at
localhost:8080
But in my environment localhost can't reach the container.
I have to use the docker-machine IP,
e.g.
192.168.99.100:8080
and that reaches the nginx container.
It's OK for me, but I want to know why it works differently on my machine than explained in all the docs. Did I miss something or make a mistake?
Kind regards
Gregor
When they say localhost, they mean the host machine where the Docker server is running. In your case that is the virtual machine, which has the IP 192.168.99.100.
If this VM has a GUI installed and you can launch a browser inside it, then you will be able to browse localhost:8080 inside the VM.
But from your Windows machine the VM is as good as a remote server, so you need its IP to reach it.
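For example, from the Windows host (a sketch, assuming the Docker Toolbox machine is named "default"):
docker-machine ip default          # prints the VM's address, e.g. 192.168.99.100
curl http://192.168.99.100:8080    # reaches the nginx container published with -p 8080:80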
Related
I have a dockerized shinyproxy app and I can reach it through http://localhost:8080, but I can't reach it from outside the host computer.
I tried http://my-ip-address:8080.
I allowed port 8080 through the Windows firewall.
I read comments here about port forwarding through the VM, but I am using Ubuntu 20.04 on Windows 10 and there is no VM running, so I don't know how to do port forwarding.
I tried to run the container with
docker run -itdp 0.0.0.0:8080:8080 shinyproxy-example
But none of them worked.
At the end of the day I want to get an address, put it in an iframe, and then eventually a website. Am I missing a step or is there another way to do this?
I am new to Docker and web development, so any help is appreciated.
I'm new to Docker (have been working with KVM earlier). The first problem I ran into was how to configure a bridged network in Docker. I would like to have a configuration similar to a KVM bridged network. Does anyone know if this is possible?
If you want the container to share the host's network stack, which is the closest thing to a KVM bridge, use:
docker run --network=host
But if what you want is to access your container from outside, use the port-mapping option:
docker run -p 80:80
You will then access your container using the host IP and the port you specified.
On Linux, Docker internally uses iptables to redirect the traffic from your host to the container.
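A minimal sketch of both options (my-image is a placeholder image name):
docker run -d --network=host my-image    # shares the host's network stack; no -p needed
docker run -d -p 80:80 my-image          # bridge mode: host port 80 is forwarded to container port 80
sudo iptables -t nat -L DOCKER -n        # on Linux, shows the NAT rules Docker added for -p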
Regards
I've created docker swarm with a website inside swarm, publishing port 8080 outside. I want to consume that port using Nginx running outside swarm on port 80, which will perform server name resolution and host static files.
The problem is that swarm automatically publishes port 8080 to the internet using iptables, and I don't know if it is possible to allow only the local nginx instance to use it. Currently users can access the site on both ports 80 and 8080, and the second one is broken (without images).
I tried playing with ufw, but it's not working. Manually changing iptables would also be a nightmare, as I would have to do it on every swarm node after every update. Any solutions?
EDIT: I can't use same network for swarm and nginx outside swarm, because overlay network is incompatible with normal, single-host containers. Theoretically I could put nginx to the swarm, but I prefer to keep it separate, on the same host that contains static files.
No, right now you are not able to bind a published port to an IP (not even to 127.0.0.1) or to an interface (like the loopback interface lo). But there are two issues dealing with this problem:
github.com - moby/moby - Assigning service published ports to IP
github.com - moby/moby - docker swarm mode: ports on 127.0.0.1 are exposed to 0.0.0.0
So you could subscribe to them and/or participate in the discussion.
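To illustrate the limitation (my-website-image is a placeholder):
docker service create --name web --publish 8080:8080 my-website-image
# the port is bound to 0.0.0.0:8080 on every swarm node; an IP prefix such as 127.0.0.1: is not honored for services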
Further reading:
How to bind the published port to specific eth[x] in docker swarm mode
Yes, if the containers are in the same network you don't need to publish ports for containers to access each other.
In your case you can publish port 80 from the nginx container and not publish any ports from the website container. Nginx can still reach the website container on port 8080 as long as both containers are in the same Docker network.
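A minimal sketch of that setup with a user-defined network (webnet and my-website-image are placeholder names):
docker network create webnet
docker run -d --name website --network webnet my-website-image   # no -p: port 8080 stays internal to webnet
docker run -d --name nginx --network webnet -p 80:80 nginx
# in nginx.conf, proxy_pass http://website:8080; works because Docker's DNS resolves container names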
"Temp" solution that I am using is leaning on alpine/socat image.
Idea:
use an additional lightweight container running outside of the swarm, with some port-forwarding tool (socat is used here)
attach that container to the same network as the swarm service we want to expose only to localhost
publish this helper container's port as 127.0.0.1:HOST_PORT:INTERNAL_PORT
use socat in this container to forward traffic to the swarm service
Command:
docker run --name socat-elasticsearch -p 127.0.0.1:9200:9200 --network elasticsearch --rm -it alpine/socat tcp-listen:9200,reuseaddr,fork tcp:elasticsearch:9200
The -it flags can be removed once you have confirmed everything is working fine for you. Add -d to run it daemonized.
Daemon command:
docker run --name socat-elasticsearch -d -p 127.0.0.1:9200:9200 --network elasticsearch --rm alpine/socat tcp-listen:9200,reuseaddr,fork tcp:elasticsearch:9200
My use case:
Sometimes I need to access ES directly, so this approach is just fine for me.
Would like to see some docker's native solution, though.
P.S. Docker's auto-restart feature could be used if this needs to be up and running after a host machine restart.
See restart policy docs here:
https://docs.docker.com/engine/reference/commandline/run/#restart-policies---restart
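For example, the daemon command above could use a restart policy instead of --rm (a sketch; --rm has to be dropped because it conflicts with --restart):
docker run --name socat-elasticsearch -d --restart unless-stopped -p 127.0.0.1:9200:9200 --network elasticsearch alpine/socat tcp-listen:9200,reuseaddr,fork tcp:elasticsearch:9200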
Basically my question is: How do I connect to a docker host on the network?
Background:
We have a Windows Server 2012 machine that I would like to run a docker engine from.
I've managed to get it running with docker-machine and the hyperv driver. I've also successfully gotten a docker host to work on my computer locally using VirtualBox, and have been using it.
To ease access to docker for other people on the network on a perpetual set-up, I'd like to use the docker host instance on the server with Hyper-V.
In my search for answers, I've not been able to find any mention of provisioning hosts on the network, only locally and in the cloud.
I'd like to know which commands I have to use to connect my local docker-machine to the server's docker host and use it as the active docker host.
There's a blog post explaining how to add a docker engine by IP using the generic driver, as well as some extra steps you need to go through.
ADDING AN EXISTING DOCKER HOST TO DOCKER MACHINE : A FEW TIPS
SSH Keys
The bottom section on certs explains how to get things working on the remote docker engine after connecting with the create command.
Old answer
To create/connect successfully, the local machine must be able to SSH into the remote docker engine, not just the server hosting the docker engine. This means a key pair was generated (using puttygen or ssh-keygen) on the local machine and the OpenSSH RSA public key was added to the list of authorized keys in ~/.ssh/authorized_keys on the remote docker engine.
An example of an OpenSSH RSA public key (because I get confused by these formats):
ssh-rsa AAAAB3NzaC1kc3MAAACBAJ3hB5SAF6mBXPlZlRoJEZi0KSIN+NU2iGiaXZXi9CDrgVxTp6/sc56UcYCp4qjfrZ2G3+6PWbxYso4P4YyUC+61RU5KPy4EcTJske3O+aNvec/20cW7PT3TvH1+sxwGrymD50kTiXDgo5nXdqFvibgM61WW2DGTKlEUsZys0njRAAAAFQDs7ukaTGJlZdeznwFUAttTH9LrwwAAAIAMm4sLCdvvBx9WPkvWDX0OIXSteCYckiQxesOfPvz26FfYxuTG/2dljDlalC+kYG05C1NEcmZWSNESGBGfccSYSfI3Y5ahSVUhOC2LMO3JNjVyYUnOM/iyhzrnRfQoWO9GFMaugq0jBMlhZA4UO26yJqJ+BtXIyItaEEJdc/ghIwAAAIBFeCZynstlbBjP648+mDKIvzNSS+JYr5klGxS3q8A56NPcYhDMxGn7h1DKbb2AV4pO6y+6hDrWo3UT4dLVuzK01trwpPYp6JXTSZZ12ZaXNPz7sX9/z6pzMqhX4UEfjVsLcuF+ZS6aQCPO0ZZEa1z+EEIZSD/ykLQsDwPxGjPBqw= rsa-key-20160224
Not having this key on the remote docker engine gave me an exit status 255 when I attempted to docker-machine ssh into it. At that point, only a regular ssh docker@192.168.1.165 worked. Be prepared to repeat the above process.
The article also mentions sudo, but the boot2docker image used by the Hyper-V driver already allows password-less sudo so that part is already done.
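A minimal sketch of that key setup from the local machine (assuming an OpenSSH client is available; docker is the default boot2docker user):
rem generate a key pair
ssh-keygen -t rsa -b 4096 -f %USERPROFILE%\.ssh\id_rsa
rem append the public key to the remote engine's authorized_keys
type %USERPROFILE%\.ssh\id_rsa.pub | ssh docker@192.168.1.165 "cat >> ~/.ssh/authorized_keys"
rem verify a plain ssh login works before running docker-machine create
ssh docker@192.168.1.165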
Ports
Make sure connections to TCP port 2376 on the remote docker engine are allowed through the server's firewall rules, any physical firewall, etc.
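On Windows Server this could look like the following (the rule name is arbitrary):
> netsh advfirewall firewall add rule name="Docker engine" dir=in action=allow protocol=TCP localport=2376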
The Command to Run
Then this command connects the remote engine to docker-machine:
> docker-machine create --driver generic --generic-ip-address 192.168.1.165 --generic-ssh-user %USERNAME% vm
> docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Running tcp://192.168.99.101:2376 v1.10.1
vm - generic Running tcp://192.168.1.165:2376 Unknown
vm is the newly added docker engine from the network, and 192.168.1.165 is the IP of the docker engine on the server.
Certs
If this works, just copying over the certs (ca.pem, ca-key.pem, cert.pem, key.pem) from the remote server directory %USERPROFILE%\.docker\machine\machines\<server's local docker engine name> to the same location on the local machine should keep it connected. Do not use docker-machine regenerate-certs since this disables any connections that other computers might have to that docker engine, including the server itself.
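A sketch of that copy, assuming the server's drive is reachable as an administrative share (server, admin and the machine directory names are placeholders):
> xcopy "\\server\c$\Users\admin\.docker\machine\machines\<server's engine name>\*.pem" "%USERPROFILE%\.docker\machine\machines\vm\" /Y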
Active
Then finally making the engine active completes the connection.
> FOR /F "tokens=*" %G IN ('docker-machine env vm') DO %G
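For reference, docker-machine env vm prints SET statements like these (values are illustrative), which the FOR loop then executes:
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.1.165:2376
SET DOCKER_CERT_PATH=%USERPROFILE%\.docker\machine\machines\vm
SET DOCKER_MACHINE_NAME=vm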
Note: This issue points out that the command docker-machine create --driver none --url=tcp://192.168.1.165:2376 <name> should add a remote machine's docker engine as well, should the "none" driver be working in a future version.
Is it possible to run Symfony dev on a virtual machine?
config.php checks the requesting IP; if it is not 127.0.0.1 it errors out and says that configuration can only be done from localhost.
How do I get around this?
There is a good reason for that: config.php and app_dev.php are private resources and should only be run from localhost.
Since you want to run your dev from VM Host (rather than VM Guest) you should just add your interface address to both config.php and app_dev.php.
For example, I have set up VMware, which allocated the address 192.168.78.10 to my VM guest. My VM host is at 192.168.78.1 (the first address in that network). So you need to add 192.168.78.1 to both of those files.
Alternatively, you can comment out the lines that check the IP (127.0.0.1) in app_dev.php.
Sometimes it is worth doing so, especially when everything is fine in the local environment but fails in the actual production environment.