Connect to Flask from VirtualBox guest using NAT - networking

I'm making a client-server app: the Flask server runs on my host and the client in a VirtualBox machine.
I've started the Flask server like this:
app.run(debug=True, host='0.0.0.0')
But when I try to connect from Firefox on the VirtualBox guest to 0.0.0.0:5000 or 127.0.0.1:5000, I get an "Unable to connect" message.
I'm using a NAT network, and I've forwarded port 5000 HOST ==> 5000 Guest.
I still have the same problem.
My host machine: macOS
My guest machine: Ubuntu 16.04
PS:
The server works fine on the host machine itself.
I've tried a bridged network and it works (I connect using my host's IP instead of 0.0.0.0), but I want to do it with NAT because I have many ports to forward and because it's reliable for my project.
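For reference, a minimal sketch of the two traffic directions under a plain VirtualBox NAT adapter (the VM name "client-vm" is a hypothetical placeholder). A natpf rule only covers host-to-guest traffic; in the other direction the guest can usually reach services on the host via the default NAT alias 10.0.2.2, with no rule needed:
# Host -> guest: forward host port 5000 into the guest (what the question describes)
VBoxManage modifyvm "client-vm" --natpf1 "flask,tcp,,5000,,5000"
# Guest -> host: from inside the guest, talk to the host's Flask server directly
curl http://10.0.2.2:5000/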

Related

Docker for Windows application not accessible in the browser when connected via VPN

I have Docker for Windows running a Windows container. The machine is connected to the corporate network via Cisco AnyConnect VPN. We have been having this issue for some time with no solution. To illustrate the problem, here is an example. Take the Docker image at https://hub.docker.com/_/microsoft-dotnet-samples and run the commands below in Command Prompt / PowerShell, in sequence:
docker pull mcr.microsoft.com/dotnet/samples:aspnetapp
docker run -it --rm -p 8000:80 --name aspnetcore_sample mcr.microsoft.com/dotnet/samples:aspnetapp
Replace port 8000 with something else if you get an error along the lines of the HNS port already being in use. Then go to the browser and open http://localhost:8000, assuming it's running on port 8000. It doesn't connect for me. Instead of localhost, I also tried the command below to find the IP address of the container running the image, and then replaced localhost with that IP address, but got the same "unable to connect" response.
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
(Screenshots were attached here: one of the network adapter Docker sets up by default, and one of the NAT network adapter.)
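Before blaming the VPN, a quick hedged check that the container itself is serving, using the container name from the run command above (Test-NetConnection is a standard PowerShell cmdlet):
# Confirm the sample app actually started inside the container
docker logs aspnetcore_sample
# Probe the published port from the host without a browser
Test-NetConnection -ComputerName localhost -Port 8000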

SSH forward port to local host name

I have the following setup:
Local host - my work PC
Project VM - Vagrant box with the project files, running on my work PC
Remote host - a remote PC, from which I need to access hosts on the Project VM
Project VM setup (/etc/hosts on Local host):
192.168.100.102 host1.vm.private
192.168.100.102 sub1.host1.vm.private
192.168.100.102 sub2.host1.vm.private
"host1" subdomains resolved by application router and served by nginx (config for "host1.vm.private" on Project VM):
server {
    listen 80;
    server_name ~^(.+\.)?host1\.vm\.private$;
    ...
}
I need to make "sub(1|2|N).host1.vm.private" reachable from the remote host. How can this be done?
So, I found the solution: Trouble SSH Tunneling to remote server
The main issue was that an invalid HTTP Host header was sent, so nginx couldn't resolve the virtual host.
On the local PC, run ssh -R 8888:192.168.100.102:80 <remote_pc_credentials>. Or run the inverse command with the ssh -L flag on the remote PC (see the sketch after these steps).
Add "sub1.host1.vm.private" to /etc/hosts on remote PC: 127.0.0.1 sub1.host1.vm.private
OR
Send "Host" header with each request: curl -H "Host: sub1.host1.vm.private" "http://localhost:8888/some/path"

CentOS VM with Docker getting host unreachable when trying to connect to itself

I have Docker running on a CentOS VM with a bridged network. Running
ifconfig
shows that my VM gets a valid IP address. Now I'm running some software within a Docker container/image (which works within other Docker/networking configurations). Some of my code running in the Docker container uses an SSL connection (Java) to connect back to the same VM. In all other run configurations, this works perfectly. But when running in bridged mode with the CentOS VM and docker-compose, I get an SSL connect exception: Host unreachable. I can ping and SSH into the VM at the same IP address, and this all works fine. I'm sorry that I can't post the actual setup/code and scripts, as it's too much to post and it's also proprietary.
I'm baffled by this - why am I getting Host Unreachable in the aforementioned configuration?
FYI, I resolved the problem on CentOS by using the default "bridge" networking provided by Docker, but adding the following to my firewalld configuration:
firewall-cmd --permanent --zone=trusted --add-interface=docker0
firewall-cmd --reload
service firewalld restart
You might also need to open up a port to allow external communication, like so:
firewall-cmd --zone=public --add-port=8080/tcp --permanent
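After a firewall-cmd --reload, you can verify that the rules took effect by listing what each zone now holds:
firewall-cmd --zone=trusted --list-interfaces
firewall-cmd --zone=public --list-ports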
My solution was to switch to an Ubuntu VM, because switching my Docker Compose setup to the default "bridge" network broke my aliases, which I really needed.
The only remaining question here is why, after configuring firewalld, a user-defined network in docker-compose still cannot reach the external IP, forcing us to switch to the default bridge network.
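One hedged explanation: a user-defined Compose network gets its own Linux bridge, named br-<first 12 characters of the network ID> rather than docker0, so the trusted-zone rule above never covers it. A sketch of extending the same fix (the network name myproject_default is a guess at Compose's default naming; substitute your own):
# Find the bridge interface behind the user-defined network and trust it too
BRIDGE="br-$(docker network inspect -f '{{.Id}}' myproject_default | cut -c1-12)"
firewall-cmd --permanent --zone=trusted --add-interface="$BRIDGE"
firewall-cmd --reload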

How do I connect a Docker container running in boot2docker to a network service running on another host?

I am using the latest version of boot2docker, version 1.3.2, 495c19a, on a Windows 7 (SP1) 64-bit machine.
My Docker container is running a Celery process which attempts to connect to a RabbitMQ service running on the same machine that boot2docker is running on.
The Celery process running within the docker container cannot connect to RabbitMQ and reports the following :
[2014-12-02 10:28:41,141: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused. Trying again in 2.00 seconds...
I have reason to believe this is a network-related issue, associated with routing from the container to the VirtualBox host, and from the host to the RabbitMQ service running on the local machine; I do not know how to configure this, and I was wondering if anyone can advise me on how to proceed?
I tried setting up port 5672 in port forwarding but it didn't work (but I believe this is for incoming traffic to the VM, like boot2docker ssh).
I am running the container as docker run -i -t tagname
I am not specifying a host with -h when I run the container.
I'm sorry if this question appears rather clueless or if the answer appears obvious ... I appreciate any help!
Some additional information :
The routing table of the host VM is what boot2docker configured during installation, as follows:
docker0 IP Address is 172.17.42.1
eth0 IP Address is 10.0.2.15
eth1 IP Address is 192.168.59.103
eth0 is attached to NAT (Adapter 1) in the VirtualBox VM network configuration.
Adapter 1 has port forwarding set up for ssh; the default setting of host IP 127.0.0.1, host port 2022, guest port 22.
eth1 is attached to Host-only adapter (Adapter 2).
Both adapters are set to promiscuous mode (allow all).
The IP Address of the docker container is 172.17.0.33.
[2014-12-02 10:28:41,141: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused. Trying again in 2.00 seconds...
127.0.0.1 is a special IP address that means "me", and inside the container it means "me the container", so this is why it is not connecting to the outer host. So the first thing to do is change the IP address where you are trying to connect to Rabbit to that of the outer host where it is running.
Then you probably have to do something about routing, but let's take one step at a time.
As your RabbitMQ server is running on your Windows host, you need to tell your container that it should talk to that IP - which would probably be 192.168.59.3.
Most importantly, your container's 127.0.0.1 is only a loopback device for that container's own services - it doesn't even reach the boot2docker VM's ports.
You could set up an ambassador container that has --expose=80 and uses something like socat to forward all traffic from that container to your host (see svendowideit/ambassador). Then you'd --link that ambassador container to your current image.
But personally, I'd avoid that initially, and just configure your containerised app to talk to the real host's IP.
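A hedged sketch of that last suggestion, assuming the containerised app reads its broker URL from an environment variable (CELERY_BROKER_URL is illustrative here, not something the image is known to honour; use whatever setting your app actually reads):
# Point Celery at the host's host-only IP instead of the container's 127.0.0.1
docker run -i -t -e CELERY_BROKER_URL="amqp://guest:guest@192.168.59.3:5672//" tagname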
You have to explicitly specify ports for port redirection, separately for boot2docker and Docker.
Please try this:
c:\>boot2docker init
c:\>boot2docker up
c:\>boot2docker ssh -L 0.0.0.0:5672:localhost:5672
docker@boot2docker:~$ docker run -it -p 5672:5672 tagname
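Before starting Celery, it may be worth probing whether the host's RabbitMQ port is reachable from the VM at all (a hedged sketch: it assumes nc is available in boot2docker and that 192.168.59.3 is the Windows host's host-only IP, per the answer above):
docker@boot2docker:~$ nc -w 2 192.168.59.3 5672
A connection that stays open means the port is reachable; "Connection refused" suggests RabbitMQ is bound only to 127.0.0.1 on the host, or that a firewall is in the way.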

Connect to nginx (VirtualBox, Fedora) from Windows host

Fedora in VirtualBox running the Django dev server (bound to 0.0.0.0:8000) and nginx (listening on port 90).
I have a NAT connection set up for the VM and port forwarding 8000 -> 8000, 8001 -> 90.
I can see Django at 127.0.0.1:8000.
But there is no response from 127.0.0.1:8001.
Any ideas?
Dumb question: can the Fedora guest connect OK to nginx running locally?
Not-so-dumb question: have you used tcpdump/Wireshark/SmartSniff or a similar tool to see if the traffic is making it through host -> guest at all? Perhaps the Fedora firewall is blocking non-local connections to port 90?
Also, why not just add a "Host Only" second network adapter to the Fedora guest and forget about fiddling with the NAT settings?
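If the firewall theory pans out, a minimal hedged sketch of opening nginx's port on the Fedora guest (assuming firewalld, the Fedora default):
firewall-cmd --add-port=90/tcp --permanent
firewall-cmd --reload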
