Connect to nginx (VirtualBox, Fedora) from Windows host

Fedora in VirtualBox, running the Django dev server (bound to 0.0.0.0:8000) and nginx (listening on port 90).
I have a NAT connection set up for the VM, with port forwarding 8000 -> 8000 and 8001 -> 90.
I can reach Django at 127.0.0.1:8000,
but there is no response from 127.0.0.1:8001.
Any ideas?

Dumb question: Can the Fedora guest connect OK to nginx running locally?
Not so dumb question: Have you used tcpdump/wireshark/smartsniff or a similar tool to see whether the traffic is making it through from host to guest at all? Perhaps the Fedora firewall is blocking non-local connections to port 90?
Also, why not just add a "Host Only" second network adapter to the Fedora guest and forget about fiddling with the NAT settings?
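If it does turn out to be the guest firewall, a minimal check on the Fedora side could look like this (a sketch assuming firewalld is in use; "Fedora" as the VM name in the VBoxManage rule is just a placeholder):
# on the guest: confirm nginx answers locally, then open port 90
curl -I http://127.0.0.1:90/
sudo firewall-cmd --permanent --add-port=90/tcp
sudo firewall-cmd --reload
# on the Windows host (with the VM powered off): recreate the 8001 -> 90 NAT rule
VBoxManage modifyvm "Fedora" --natpf1 "nginx,tcp,127.0.0.1,8001,,90"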

Related

Set GITLAB to be accessible on LAN

After much research I have not found anything...
I installed GitLab on a CentOS VM. The CentOS IP address is 192.168.100.1.
In the file /etc/gitlab/gitlab.rb, I modified the line:
external_url 'http://192.168.100.1:1234'
I executed the command 'gitlab-ctl reconfigure' and no errors appeared.
Using Firefox, I can access my GitLab on all of the CentOS interfaces:
192.168.100.1:1234
127.0.0.1:1234
That is expected, because when I execute 'netstat -ntlp' I can see:
tcp 0 0 0.0.0.0:1234 0.0.0.0:* LISTEN 22222/nginx: master
So what is the problem?
I cannot access GitLab from outside the VM, even on the same network 192.168.100.1/24.
From another VM on the same network (192.168.100.2), I can ping 192.168.100.1 and open an SSH connection, but if I run:
curl 192.168.100.1:1234
the result is "Time out".
Thanks,
Vincent
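For reference, the setup described above written out as commands; the firewalld lines at the end are an assumption on my part, since an nginx that listens on 0.0.0.0 but times out for other hosts often points at the host firewall rather than at GitLab itself:
# /etc/gitlab/gitlab.rb
external_url 'http://192.168.100.1:1234'
# apply and verify
sudo gitlab-ctl reconfigure
sudo netstat -ntlp | grep 1234
# assumption: check whether firewalld is blocking the port
sudo firewall-cmd --list-ports
sudo firewall-cmd --permanent --add-port=1234/tcp
sudo firewall-cmd --reload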

CentOS VM with Docker getting host unreachable when trying to connect to itself

I have Docker running on a CentOS VM with a bridged network. Running
ifconfig
shows that my VM gets a valid IP address. Now I'm running some software within a Docker container/image (which works within other Docker/networking configurations). Some of my code running in the Docker container uses an SSL connection (Java) to connect back to itself. In all other run configurations this works perfectly, but when running in bridged mode with the CentOS VM and docker-compose, I get an SSL connect exception with the error: Host unreachable. I can ping and SSH into the VM at the same IP address and that all works fine. I'm sorry that I can't post the actual setup/code and scripts, as it's too much to post and it's also proprietary.
I'm baffled by this - why am I getting Host Unreachable in the aforementioned configuration?
FYI, I resolved the problem on CentOS by using the default "bridged" containers provided by Docker, but adding the following to my firewalld configuration:
firewall-cmd --permanent --zone=trusted --add-interface=docker0
firewall-cmd --reload
service firewalld restart
You might also need to open up a port to allow external communication, like so:
firewall-cmd --zone=public --add-port=8080/tcp --permanent
My solution was to switch to an Ubuntu VM, because switching my docker-compose setup to the default "bridged" network broke my aliases, which I really needed.
The only remaining question here is why, after configuring firewalld, a user-configured network in docker-compose cannot access the external IP, forcing us to switch to the default bridged network.
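For anyone weighing the same trade-off, the difference in docker-compose terms looks roughly like this (a sketch; the service name, image and alias are made up):
version: "2"
services:
  app:
    image: myapp:latest
    networks:
      backend:
        aliases:
          - app.internal
networks:
  backend:
    driver: bridge
Switching the service to the default bridge instead (network_mode: bridge under the service, with the networks sections removed) is what works with the docker0 firewalld rule above, but the default bridge does not support these aliases, which is the limitation described above. One possible reason for the remaining question is that a user-defined compose network gets its own bridge interface (named br-<id> rather than docker0), so the trusted-zone rule added for docker0 would not cover it - a guess, not something confirmed in this thread.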

How do I connect a Docker container running in boot2docker to a network service running on another host?

I am using the latest version of boot2docker, version 1.3.2, 495c19a, on a Windows 7 (SP1) 64-bit machine.
My Docker container is running a Celery process which attempts to connect to a RabbitMQ service running on the same machine that boot2docker is running on.
The Celery process running within the Docker container cannot connect to RabbitMQ and reports the following:
[2014-12-02 10:28:41,141: ERROR/MainProcess] consumer: Cannot connect
to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
Trying again in 2.00 seconds...
I have reason to believe this is a network-related issue, to do with routing from the container to the VirtualBox host, and from the host to the RabbitMQ service running on the local machine. I do not know how to configure this, so I was wondering if anyone can advise me how to proceed?
I tried setting up port 5672 in port forwarding but it didn't work (but I believe this is for incoming traffic to the VM, like boot2docker ssh).
I am running the container as docker run -i -t tagname
I am not specifying a host with -h when I run the container.
I'm sorry if this question appears rather clueless or if the answer appears obvious ... I appreciate any help!
Some additional information:
The routing table of the host VM is what boot2docker configured during installation, as follows:
docker0 IP Address is 172.17.42.1
eth0 IP Address is 10.0.2.15
eth1 IP Address is 192.168.59.103
eth0 is attached to NAT (Adapter 1) in the VirtualBox VM network configuration.
Adapter 1 has port forwarding setup for ssh; default setting of host IP 127.0.0.1, host port 2022, guest port 22.
eth1 is attached to Host-only adapter (Adapter 2).
Both adapters are set to promiscuous mode (allow all).
The IP Address of the docker container is 172.17.0.33.
[2014-12-02 10:28:41,141: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused. Trying again in 2.00 seconds...
127.0.0.1 is a special IP address that means "me", and inside the container it means "me the container", so this is why it is not connecting to the outer host. So the first thing to do is change the IP address where you are trying to connect to Rabbit to that of the outer host where it is running.
Then you probably have to do something about routing, but let's take one step at a time.
As your RabbitMQ server is running on your Windows host, you need to tell your container that it should talk to that IP - which would probably be 192.168.59.3.
Most importantly, your container's 127.0.0.1 is only a loopback device for that container's own services - not even the boot2docker VM's ports.
You could set up an ambassador container that has --expose=80 and uses something like socat to forward all traffic from that container to your host (see svendowideit/ambassador). Then you'd --link that ambassador container to your current image.
But personally, I'd avoid that initially, and just configure your containerised app to talk to the real host's IP.
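For example, if the broker URL is passed into the container through an environment variable (the variable name and the guest/guest credentials here are just placeholders for however the app is actually configured), that could look like:
docker run -i -t -e BROKER_URL=amqp://guest:guest@192.168.59.3:5672// tagname
You may also need RabbitMQ on the Windows host to listen on that host-only interface, and the Windows firewall to allow inbound connections on 5672.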
You have to specify the ports for port redirection explicitly, and separately for boot2docker and Docker.
Please try this:
c:\>boot2docker init
c:\>boot2docker up
c:\>boot2docker ssh -L 0.0.0.0:5672:localhost:5672
docker@boot2docker:~$ docker run -it -p 5672:5672 tagname

Vagrant forward port 8080 to 80

So I have an nginx server listening on port 8080 with uwsgi on a Vagrant box. The config.vm.forward_port 8080, 80 is not working for me. I know that it's recommended to forward to ports higher than 2000, but I need 80. Is there any issue with that?
I'm using vagrant for development, but I need to make some tests from outside using my domain name on port 80.
Thanks for your help.
When trying to forward to ports lower than 1025, Vagrant gives the following message, which you might have missed:
You are trying to forward to privileged ports (ports <= 1024). Most
operating systems restrict this to only privileged process (typically
processes running as an administrative user). This is a warning in case
the port forwarding doesn't work. If any problems occur, please try a
port higher than 1024.
I was using port forwarding to the same port with the following configuration:
config.vm.forward_port 80, 80
I then ran vagrant up, but when trying curl localhost it wasn't able to connect to the host. However, when running Vagrant as a sudo user (sudo vagrant up), I was able to access the port from my host.
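Putting the two pieces together for the original question, a sketch using the same Vagrant 1.0-style syntax as the question:
# in the Vagrantfile: guest port 8080 (nginx/uwsgi) -> host port 80
config.vm.forward_port 8080, 80
# host port 80 is privileged, so bring the box up as root
sudo vagrant up
curl -I http://localhost/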
Is port 80 available, i.e. if you run netstat -an | grep 80, does it show in the list as already being used by another process? Is uwsgi added to the module list of nginx (and did you run make/make install on it)? Have you tried checking whether you need higher privileges (perhaps try running as sudo)?

Browser on host cannot see Vagrant box, port forwarding does not work

I have installed Vagrant on my Windows XP machine, and in my Vagrantfile I have:
Vagrant::Config.run do |config|
  # Setup the box
  config.vm.box = "lucid32"
  config.vm.forward_port 80, 8080
  config.vm.network :hostonly, "192.168.10.200"
end
But I see no sign of my Vagrant box when I type "http://192.168.10.200:8080" in the browser.
The IP address of the virtual box is correct, because from within the vbox I have:
vagrant#lucid32:~$ ifconfig
....
eth1 Link encap:Ethernet HWaddr 08:00:27:79:c5:4b
inet addr:192.168.10.200 Bcast:192.168.10.255 Mask:255.255.255.0
There seems to be no firewall problem, because if I type
vagrant#lucid32:~$ curl 'http://google.com'
it works fine.
I have read Vagrant's port forwarding not working
and tried:
vagrant#lucid32:~$ curl 'http://localhost:80'
curl: (7) couldn't connect to host
and also
vagrant#lucid32:~$ curl 'http://localhost:8080'
curl: (7) couldn't connect to host
So, it looks like port forwarding is not working...
If you know what I can do so that I can access my vbox from the host browser, can you help me?
Thanks in advance.
If you just started a Vagrant box with this Vagrantfile, there is nothing more than an empty Ubuntu Lucid, which does not run any service yet. So there is nothing served on port 80, which is why there is nothing to see, either from inside the box on port 80 or from the host machine on 8080.
For your Vagrant machine to provide some services (such as a web server on port 80), you have to do some provisioning. You can do it manually or using Chef or Puppet, which are hooked into Vagrant's up process.
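One quick way to confirm that the problem is the missing service rather than the forwarding itself (my own check, not from the thread) is to start a throwaway web server inside the box and hit it from the host:
vagrant ssh
vagrant@lucid32:~$ sudo python -m SimpleHTTPServer 80
While that is running, http://localhost:8080 and http://192.168.10.200 from the host should both show a directory listing.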
I had a similar problem. Sometimes using port forwarding for ports below 2000 is a problem. What worked for me was choosing ports above 2000. So my Vagrantfile now looks like:
config.vm.network :forwarded_port, host: 4500, guest: 9000
Typing localhost:4500 on my host machine now just works fine. It seems like you are on an older version of Vagrant than mine, so you can edit your Vagrantfile to something like:
config.vm.forward_port 9000, 4500
Now typing localhost:4500 on your host machine should work fine.
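Keep in mind that Vagrantfile changes only take effect after the box is restarted, for example:
vagrant reload
curl http://localhost:4500/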
Good luck,
