Docker - curl returned "connection reset" - networking

I have a Docker host running in a virtual machine; the host is boot2docker 1.10-rc1, and the container is built from a CentOS 7.2 image.
I tried to run some applications inside the container. I started the two applications and checked the network status:
[root@564f3e59142b logs]# netstat -lnput
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address   Foreign Address   State    PID/Program name
tcp        0      0 0.0.0.0:41656   0.0.0.0:*         LISTEN   11995/BmtMDProvider
tcp6       0      0 :::44027        :::*              LISTEN   4405/java
Both applications provide an HTTP service.
When I curl both applications (from inside the same container), the response from the Java one is OK:
[root@564f3e59142b logs]# curl 127.0.0.1:44027
curl: (52) Empty reply from server
But from BmtMDProvider I get "connection reset by peer" instantly. This is an HTTP service URL, so it shouldn't return a "connection reset".
[root@564f3e59142b logs]# curl 127.0.0.1:41656
curl: (56) Recv failure: Connection reset by peer
BmtMDProvider is a third-party application (I can't modify it), and it works normally on a "real" machine.
Could I have some suggestions, guidance, or diagnostic steps to find out where the "connection reset" comes from? Thanks.
Edit:
BmtMDProvider is a process spawned by the Java application and it listens on a random port; there may be multiple instances of BmtMDProvider. The Java application accesses BmtMDProvider over HTTP (they are in the same Docker container, and Java gets "connection reset", just like curl).

Try running your container with an explicit IPv4 port binding, meaning if you are currently running it with
$ docker run -p 41656:41656 BmtMDProvider
run it as
$ docker run -p 127.0.0.1:41656:41656 BmtMDProvider
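If the reset persists, a next step is to confirm which side is sending the TCP RST. A minimal diagnostic sketch from inside the container, assuming tcpdump and strace are installed there (the PID 11995 comes from the netstat output above):
tcpdump -i lo -nn 'tcp port 41656'    # capture loopback traffic while reproducing the failure
curl -v 127.0.0.1:41656               # in a second shell, reproduce the error
strace -f -p 11995 -e trace=network   # optionally, watch how BmtMDProvider handles the connection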

Related

Unable to reach Google Compute over port 9000

I have a Google Compute Engine instance running CentOS 7, and I wrote up a quick test to try to communicate with it over port 9000 (from my home PC), but I'm unexpectedly getting network errors.
This happens both with my test script (which attempts to send a payload) and with plink.exe (which I'm just using to check port availability).
>plink.exe -v -raw -P 9000 <external_IP>
Connecting to <external_IP> port 9000
Failed to connect to <external_IP>: Network error: Connection refused
Network error: Connection refused
FATAL ERROR: Network error: Connection refused
I've added my external IP to Google's firewall (https://console.cloud.google.com/networking/firewalls) and set it to allow ingress traffic over port 9000 (it's the lowest priority, at 1000).
I also updated firewalld in CentOS to allow TCP traffic over the port:
Redirecting to /bin/systemctl start firewalld.service
[foo@bar ~]$ sudo firewall-cmd --zone=public --add-port=9000/tcp --permanent
success
[foo@bar ~]$ sudo firewall-cmd --reload
success
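(For completeness, a quick way to double-check both layers from the command line; the gcloud command assumes the Cloud SDK is installed and only lists the existing rules:)
sudo firewall-cmd --zone=public --list-ports   # should include 9000/tcp
gcloud compute firewall-rules list             # should show the ingress rule allowing tcp:9000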
I've confirmed my listener is running on port 9000
[foo@bar ~]$ netstat -npae | grep 9000
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 1000 18381 1201/python3
By default, CentOS 7 doesn't use iptables (just to be sure, I confirmed it wasn't running)
Am I missing something?
NOTE: Actual external IP replaced with <external_IP> placeholder
Update:
If I nmap my listener on port 9000 from the CentOS 7 compute instance against a local IP, like 127.0.0.1, I get results. Interestingly, if I make the same nmap call against the server's external IP -- nothing. So this has to be a firewall, right?
External call
[foo@bar ~]$ nmap <external_IP> -Pn
Starting Nmap 6.40 ( http://nmap.org ) at 2020-05-25 00:33 UTC
Nmap scan report for <external_IP>.bc.googleusercontent.com (<external_IP>)
Host is up (0.00043s latency).
Not shown: 998 filtered ports
PORT STATE SERVICE
22/tcp open ssh
3389/tcp closed ms-wbt-server
Nmap done: 1 IP address (1 host up) scanned in 4.87 seconds
Internal call
[foo@bar ~]$ nmap 127.0.0.1 -Pn
Starting Nmap 6.40 ( http://nmap.org ) at 2020-05-25 04:36 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.010s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
9000/tcp open cslistener
Nmap done: 1 IP address (1 host up) scanned in 0.10 seconds
In this case the software running on the backend VM must be listening on any IP (0.0.0.0 or ::); yours is listening on "127.0.0.1:9000" and it should be "0.0.0.0:9000".
The way to fix that is to change the service configuration to listen on 0.0.0.0 instead of 127.0.0.1.
Cheers.
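As a concrete illustration of the difference, using Python's built-in HTTP server only as a stand-in for the actual service behind port 9000:
python3 -m http.server 9000 --bind 127.0.0.1   # loopback only: reachable from the VM itself, invisible externally
python3 -m http.server 9000 --bind 0.0.0.0     # all interfaces: reachable through the GCP firewall rule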

reverse tunnel with ssh: channel 0: connection failed: Connection refused

I am trying to set up a reverse ssh tunnel between a local machine behind a router and a machine on the Internet, so that the Internet machine can tunnel back and mount a disk on the local machine.
On the local machine, I type
/usr/bin/ssh -N -f -R *:2222:127.0.0.1:2222 root@ip_of_remote_machine
This causes the remote machine to listen on port 2222. But when I try to mount the sshfs disk on the remote machine, I get "connection refused" on the local machine. Interestingly, port 2222 doesn't show up on the local machine as being bound. However, I'm definitely talking to ssh on the local machine since it complains
debug1: channel 0: connection failed: Connection refused
I have GatewayPort set to Yes on both machines. I also have AllowTcpForwarding yes on both machines as well.
First, the line needs to be
/usr/bin/ssh -N -f -R *:2222:127.0.0.1:22 root@ip_of_remote_machine
where port 22 is the SSH server on the local machine.
Second, since I am using sshfs, the following line needs to be in the local machine's sshd_config:
Subsystem sftp /usr/lib64/misc/sftp-server
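With the tunnel up and the sftp subsystem enabled, the mount from the Internet machine would look something like this (just a sketch; the user name and paths are placeholders):
sshfs -p 2222 localuser@127.0.0.1:/path/to/share /mnt/localdisk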

Docker publishing ports to multiple IPs

If I have a host with two IPs, say 192.168.0.2 and 192.168.0.3 and I run a container like this:
docker run -p 192.168.0.3:80:80 some_container
and then I run another container like this:
docker run -p 80:80 some_other_container
Then what happens?
A) Second command fails with "address already in use" OR
B) some_other_container has its port 80 exposed on 192.168.0.2 while some_container has its port 80 exposed on 192.168.0.3 ?
If it's A), then how can I make this work in such a way that "some_container" always has its port 80 exposed on 192.168.0.3, and "some_other_container", which is started with a plain "-p" (no IP specified), always exposes its ports on 192.168.0.2?
The first question is easy enough to answer with a quick test:
$ docker run -itd -p 127.0.0.1:80:80 nginx
acdf03bd196d2241d4f776ff701eab6222cc80bfb1b4dd06bc65af0a3625e602
$ docker run -itd -p 80:80 nginx
b75938101d9c8a28b0d7d220b0046a4f8884fb82e9bc337c65d48a214bc3e54f
docker: Error response from daemon: driver failed programming external connectivity on endpoint lonely_kirch (c144b82f83c7ab1c527c25d9a6807d37069a7382181f9bf98bb1b1cd93976313): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use.
Unless you want to rewrite the Linux network stack (not recommended), I believe your options are to either pass the IP to your second run command, pass a default IP to the Docker daemon (dockerd --ip 192.168.0.2), or pick a different port.
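A sketch of the first two options, using the IPs from the question (the --ip flag sets the daemon's default bind address for published ports):
docker run -p 192.168.0.2:80:80 some_other_container   # option 1: bind the second container to the other IP explicitly
dockerd --ip 192.168.0.2                               # option 2: make a plain "-p 80:80" publish on 192.168.0.2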

nginx not accessible outside of Docker container

This has to be a simple problem. I'm using boot2docker. If I ssh into boot2docker:
docker@boot2docker:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
89ec1492d7c7 mycontainer:latest "/bin/sh -c nginx" 35 seconds ago Up 35 seconds 80/tcp, 0.0.0.0:80->1000/tcp desperate_mestorf
Then curl:
docker@boot2docker:~$ curl localhost:1000
curl: (7) Failed connect to localhost:1000; Connection refused
docker@boot2docker:~$ curl localhost:80
curl: (56) Recv failure: Connection reset by peer
I curl'd port 80 just to make sure I'm not going crazy. Then I connect to my container's bash:
docker@boot2docker:~$ docker exec -i -t 89ec1492d7c7 bash
root@89ec1492d7c7:/srv/www# curl localhost
<!DOCTYPE html><html><head><link rel="stylesheet" href="/main.css"></head><body><h1>Welcome to Harp.</h1><h3>This is yours to own. Enjoy.</h3></body></html>root
Boom! It works; I even tried this while leaving the default port 80. What's really weird is that I have other containers on my box that I can get to, even from outside my boot2docker VM (which I'm only using to take one more thing out of the equation). This must be simple, right?
This is essentially the same question as Unable to connect to Docker Nginx build.
Here is the way to connect to the nginx Docker container's service:
docker ps # confirm nginx is running, which you have done.
docker port desperate_mestorf # get the ports, for example: 80/tcp, 0.0.0.0:80->1000/tcp
boot2docker ip # get the IP address, for example: 192.168.59.103
So now, you should be fine to connect to:
http://192.168.59.103:1000
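Put together, the check from the machine hosting the boot2docker VM would be something like this (IP and port taken from the example output above; substitute your own):
curl http://192.168.59.103:1000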

Varnish Cache - Connection Refused

I have Nginx running on 8080, while Varnish runs on port 80. I can do
wget localhost:8080
in shell and get a response, but if I run
wget localhost
I get connection refused. For reference, I'm trying to access it externally but get the same problem. Hopefully I can solve access from localhost first!
Thanks in advance!
netstat -tulnp shows you every listening port and the service bound to it.
iptables -L shows you whether the port is open or blocked.
Cheers.
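A minimal sketch of those checks for this setup (ports taken from the question):
netstat -tulnp | grep -E ':80|:8080'   # confirm varnishd is bound to :80 and nginx to :8080
iptables -L -n                         # look for any rule blocking port 80
curl -v http://localhost/              # retry varnish once it is listening on :80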
