HTTP request to VM

I have a Jetty server running on port 8080 on a VM. The VM, in turn, runs on a remote server and is reachable via port 10000. Is it legitimate to address it as http://someremote.org:10000:8080/request, or should I use SSH somehow?

What I was looking for is called SSH tunneling. You make a tunnel from a port on your machine to a port on the remote machine like this:
ssh -p 10000 -L 18080:localhost:8080 user@remote.host.org
Here 18080 is the port you use on your local machine in order to reach the remote machine's port 8080.
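With the tunnel open, the Jetty server can then be reached through the local end of the tunnel, for example:
curl http://localhost:18080/request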

Related

Unable to reach Google Compute over port 9000

I have a Google Compute Engine instance running CentOS 7, and I wrote up a quick test to try to communicate with it over port 9000 (from my home PC), but I'm unexpectedly getting network errors.
This happens both with my test script (which attempts to send a payload) and with plink.exe (which I'm just using to check port availability).
>plink.exe -v -raw -P 9000 <external_IP>
Connecting to <external_IP> port 9000
Failed to connect to <external_IP>: Network error: Connection refused
Network error: Connection refused
FATAL ERROR: Network error: Connection refused
I've added my external IP to Google's firewall (https://console.cloud.google.com/networking/firewalls) and set it to allow ingress traffic over port 9000 (it's the lowest priority, at 1000).
I also updated firewalld in CentOS to allow TCP traffic over the port:
Redirecting to /bin/systemctl start firewalld.service
[foo@bar ~]$ sudo firewall-cmd --zone=public --add-port=9000/tcp --permanent
success
[foo@bar ~]$ sudo firewall-cmd --reload
success
I've confirmed my listener is running on port 9000
[foo@bar ~]$ netstat -npae | grep 9000
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 1000 18381 1201/python3
By default, CentOS 7 doesn't use iptables (just to be sure, I confirmed it wasn't running)
Am I missing something?
NOTE: Actual external IP replaced with <external_IP> placeholder
Update:
If I nmap my listener on port 9000 from the CentOS 7 compute instance against a local IP, like 127.0.0.1, I get some results. Interestingly, if I make the same nmap call against the server's external IP, I get nothing. So this has to be a firewall, right?
External call
[foo@bar ~]$ nmap <external_IP> -Pn
Starting Nmap 6.40 ( http://nmap.org ) at 2020-05-25 00:33 UTC
Nmap scan report for <external_IP>.bc.googleusercontent.com (<external_IP>)
Host is up (0.00043s latency).
Not shown: 998 filtered ports
PORT STATE SERVICE
22/tcp open ssh
3389/tcp closed ms-wbt-server
Nmap done: 1 IP address (1 host up) scanned in 4.87 seconds
Internal Call
[foo@bar ~]$ nmap 127.0.0.1 -Pn
Starting Nmap 6.40 ( http://nmap.org ) at 2020-05-25 04:36 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.010s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
9000/tcp open cslistener
Nmap done: 1 IP address (1 host up) scanned in 0.10 seconds
In this case the software running on the backend VM must be listening on all IPs (0.0.0.0 or ::); yours is listening on 127.0.0.1:9000 and it should be 0.0.0.0:9000.
The way to fix that is to change the service config to listen on 0.0.0.0 instead of 127.0.0.1.
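The difference is easy to reproduce with Python's built-in HTTP server (used here only as a hypothetical stand-in for the actual python3 service):
# bound to loopback only: reachable from the instance itself, refused externally
python3 -m http.server 9000 --bind 127.0.0.1
# bound to all interfaces: reachable through the GCP firewall rule
python3 -m http.server 9000 --bind 0.0.0.0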
Cheers.

reverse tunnel with ssh: channel 0: connection failed: Connection refused

I am trying to set up a reverse ssh tunnel between a local machine behind a router and a machine on the Internet, so that the Internet machine can tunnel back and mount a disk on the local machine.
On the local machine, I type
/usr/bin/ssh -N -f -R *:2222:127.0.0.1:2222 root@ip_of_remote_machine
This causes the remote machine to listen on port 2222. But when I try to mount the sshfs disk on the remote machine, I get "connection refused" on the local machine. Interestingly, port 2222 doesn't show up on the local machine as being bound. However, I'm definitely talking to ssh on the local machine since it complains
debug1: channel 0: connection failed: Connection refused
I have GatewayPorts set to yes on both machines. I also have AllowTcpForwarding yes on both machines as well.
First, the line needs to be
/usr/bin/ssh -N -f -R *:2222:127.0.0.1:22 root@ip_of_remote_machine
where port 22 is the SSH server of the local machine.
Second, since I am using sshfs, the following line needs to be in its sshd_config
Subsystem sftp /usr/lib64/misc/sftp-server
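With the corrected tunnel up, the remote machine can then mount the local machine's disk back through it; a rough sketch (user name and paths are hypothetical):
# run on the remote (Internet) machine; 2222 is the reverse-tunnel port
sshfs -p 2222 user@127.0.0.1:/home/user /mnt/localdisk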

How to port forward/tunnel TCP on nginx

I am using nginx
nginx version: nginx/1.4.6 (Ubuntu)
I have an app listening on a TCP IPv4 port other than 80.
How can I proxy/forward from the domain on TCP port 80 to this port?
What keywords or nginx configuration directives should I look for?
Thanks
I think what you need is a reverse proxy.
Here is a great tutorial on how to forward connections from nginx to Apache.
The tutorial shows how to forward connections from nginx on port 80 to Apache on port 8080.
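As a minimal sketch, a reverse-proxy server block could look like this, assuming the app listens on port 8080 on the same host (the config path and server_name are hypothetical):
sudo tee /etc/nginx/conf.d/myapp.conf >/dev/null <<'EOF'
server {
    listen 80;
    server_name example.com;              # hypothetical domain
    location / {
        proxy_pass http://127.0.0.1:8080; # the app's TCP port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF
sudo nginx -t && sudo service nginx reload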
There are some options:
You can use SSH forwarding:
plink <ssh user>@<server_ip> -pw <ssh pass> -L 0.0.0.0:<external port>:<target ip in internal network>:<target port in internal network>
Or create a VPN, with OpenVPN for example.
Check here - https://unix.stackexchange.com/questions/290223/how-to-configure-nginx-as-a-reverse-proxy-for-different-port-numbers
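For reference, the OpenSSH equivalent of the plink command above would look something like this (all hosts and ports are placeholders):
ssh -N -L 0.0.0.0:8080:192.168.1.10:80 user@server_ip   # listen externally on 8080, forward to an internal host's port 80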

How do I connect a Docker container running in boot2docker to a network service running on another host?

I am using the latest version of boot2docker, version 1.3.2, 495c19a, on a Windows 7 (SP1) 64-bit machine.
My Docker container is running a Celery process which attempts to connect to a RabbitMQ service running on the same machine that boot2docker is running on.
The Celery process running within the docker container cannot connect to RabbitMQ and reports the following :
[2014-12-02 10:28:41,141: ERROR/MainProcess] consumer: Cannot connect
to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
Trying again in 2.00 seconds...
I have reason to believe this is a network-related issue, associated with routing from the container to the VirtualBox host, and from the host to the RabbitMQ service running on the local machine; I do not know how to configure this, and I was wondering if anyone can advise me how to proceed.
I tried setting up port 5672 in port forwarding but it didn't work (but I believe this is for incoming traffic to the VM, like boot2docker ssh).
I am running the container as docker run -i -t tagname
I am not specifying a host with -h when I run the container.
I'm sorry if this question appears rather clueless or if the answer appears obvious ... I appreciate any help!
Some additional information :
The network configuration of the host VM is what boot2docker set up during installation, as follows:
docker0 IP Address is 172.17.42.1
eth0 IP Address is 10.0.2.15
eth1 IP Address is 192.168.59.103
eth0 is attached to NAT (Adapter 1) in the VirtualBox VM network configuration.
Adapter 1 has port forwarding setup for ssh; default setting of host IP 127.0.0.1, host port 2022, guest port 22.
eth1 is attached to Host-only adapter (Adapter 2).
Both adapters are set to promiscuous mode (allow all).
The IP Address of the docker container is 172.17.0.33.
[2014-12-02 10:28:41,141: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused. Trying again in 2.00 seconds...
127.0.0.1 is a special IP address that means "me", and inside the container it means "me the container", so this is why it is not connecting to the outer host. So the first thing to do is change the IP address where you are trying to connect to Rabbit to that of the outer host where it is running.
Then you probably have to do something about routing, but let's take one step at a time.
As your RabbitMQ server is running on your Windows host, you need to tell your container that it should talk to that IP, which would probably be 192.168.59.3.
Most importantly, your container's 127.0.0.1 is only a loopback device for that container's own services, not even the boot2docker VM's ports.
You could set up an ambassador container that has --expose=80 and uses something like socat to forward all traffic from that container to your host (see svendowideit/ambassador). Then you'd --link that ambassador container to your current image.
But personally, I'd avoid that initially and just configure your containerised app to talk to the real host's IP.
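For example, if the Celery app read its broker URL from an environment variable (the variable name here is hypothetical, just to illustrate the idea), the host's IP could be passed in at run time:
docker run -i -t -e BROKER_URL=amqp://guest:guest@192.168.59.3:5672// tagname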
You have to specify the ports for port redirection explicitly, and separately for boot2docker and for docker.
Please try this:
c:\>boot2docker init
c:\>boot2docker up
c:\>boot2docker ssh -L 0.0.0.0:5672:localhost:5672
docker@boot2docker:~$ docker run -it -p 5672:5672 tagname

How To Chain SSH Tunnels

I am trying to set up a simple chain of SSH tunnels.
I have the following machines:
local machine, at 10.0.0.1.
remote machine, at 10.0.0.2.
I have the following programs:
client.py:
import socket
CLIENT_HOST = [...]
CLIENT_PORT = [...]
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.connect((CLIENT_HOST, CLIENT_PORT))
sock.send('test')
sock.close()
server.py:
import socket
SERVER_HOST = [...]
SERVER_PORT = [...]
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((SERVER_HOST, SERVER_PORT))
server.listen(1)
client = server.accept()[0]
print client.recv(1024)
client.close()
server.close()
Now:
I run client.py (CLIENT_HOST='127.0.0.1', CLIENT_PORT=8000) and server.py (SERVER_HOST='', SERVER_PORT=8000) on the same machine, and it works as expected.
I run client.py (CLIENT_HOST='127.0.0.1', CLIENT_PORT=8000) on the local machine, and server.py (SERVER_HOST='', SERVER_PORT=8001) on the remote machine. I then run PuTTY and add a local SSH tunnel with the source port 8000 and the destination 10.0.0.2:8001, and it works as expected.
I run client.py (CLIENT_HOST='127.0.0.1', CLIENT_PORT=8001) on the remote machine, and server.py (SERVER_HOST='', SERVER_PORT=8002) on the local machine. I then run PuTTY and add a remote SSH tunnel with the source port 8001 and the destination 127.0.0.1:8002, and it works as expected.
However, when I run client.py (CLIENT_HOST='127.0.0.1', CLIENT_PORT=8000) and server.py (SERVER_HOST='', SERVER_PORT=8002) on the local machine, and run two PuTTYs, one with a local SSH tunnel from source port 8000 to destination 10.0.0.2:8001, and one with a remote SSH tunnel from source port 8001 to destination 127.0.0.1:8002, nothing happens.
As I see it, the message from client.py should be sent to the local machine's port 8000, where PuTTY listens and should redirect it via SSH to the remote machine's port 8001, where a PuTTY listens and should redirect it via SSH to the local machine's port 8002, where it should reach server.py.
What is wrong, and how do I fix it?
Thanks.
You probably need to tick 'Local ports accept connections from other hosts' and 'Remote ports do the same'.
By the way, netcat is a more useful standard utility for trying this kind of thing out, if it's available on your OS.
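For instance, the two scripts above can be replaced with netcat while debugging the chain (a sketch, assuming netcat is available on both machines; some variants need "nc -l -p 8002" instead):
# on the local machine, a stand-in for server.py listening on port 8002
nc -l 8002
# in a second terminal on the local machine, a stand-in for client.py,
# sending through the chained tunnels via local port 8000
echo test | nc 127.0.0.1 8000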
