How to set up an ngrok-like server for TCP connections? - nginx

I would like to set up an ngrok-like self-hosted server, but I am having trouble with TCP connections. It works well over HTTPS with the Nginx config below (it forwards my local web server via this ssh command):
ssh -R 8888:localhost:5000 abc.xyz
upstream tunnel {
    server 127.0.0.1:8888;
}

server {
    server_name abc.xyz;
    access_log /var/log/nginx/$host;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
        proxy_pass http://localhost:8888/;
    }

    error_page 502 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Then I moved on to TCP connections, forwarding my VNC server on port 5900 with the config below:
stream {
    log_format dns '$remote_addr - - [$time_local] $protocol $status $bytes_sent $bytes_received $session_time "$upstream_addr"';
    access_log /var/log/nginx/access.log dns;
    error_log /var/log/nginx/error.log;

    upstream stream_backend {
        server 127.0.0.1:5902;
    }

    server {
        listen 5903;
        # TCP traffic will be forwarded to the "stream_backend" upstream group
        proxy_pass stream_backend;
    }
}
I expected it to forward my local VNC server to the internet, as ngrok does, using this ssh command:
ssh -L 5902:127.0.0.1:5900 root@ip
Is there anything wrong with these configs?
Here are the access log and error log from my server after trying to connect on port 5903:
Error Log:
2022/02/19 09:32:54 [notice] 35807#35807: signal process started
2022/02/19 09:33:09 [error] 35808#35808: *9 connect() failed (111: Unknown error) while connecting to upstream, client: 14.186.105.235, server: 0.0.0.0:5903, upstream: "127.0.0.1:5902", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/02/19 09:34:05 [error] 35808#35808: *11 connect() failed (111: Unknown error) while connecting to upstream, client: 14.186.105.235, server: 0.0.0.0:5903, upstream: "127.0.0.1:5902", bytes from/to client:0/0, bytes from/to upstream:0/0
Access Log:
14.186.105.235 - - [19/Feb/2022:09:33:09 +0000] TCP 502 0 0 0.000 "127.0.0.1:5902"
14.186.105.235 - - [19/Feb/2022:09:34:05 +0000] TCP 502 0 0 0.000 "127.0.0.1:5902"
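A hedged note on the logs above: "connect() failed (111)" on 127.0.0.1:5902 means nothing is listening on that port on the server. `ssh -L` opens the forwarded port on the client machine, not on the remote host; for Nginx on the server to reach 127.0.0.1:5902 locally, a remote forward (`-R`) is needed, just as in the working HTTPS example. A sketch, assuming the same hosts and ports as above:

```shell
# Open port 5902 on the *server* (remote forward) and tunnel it
# back to the local VNC server listening on 127.0.0.1:5900.
ssh -R 5902:127.0.0.1:5900 root@abc.xyz
```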

Related

Redirect Https request to local Http application server

I have the following task:
I need to use the Google Chrome browser to navigate to:
https://mytestserver.com/users/list
and have it redirected to my local Java server, which listens for HTTP requests on port 8080.
I'm running on Mac OS X. To achieve that I did the following:
Added 127.0.0.1 mytestserver.com to the /etc/hosts file.
Installed an Nginx server in a Docker container with the following config:
events {
    worker_connections 1024;
}

http {
    server {
        listen 443 ssl;
        server_name mytestserver.com;

        ssl_certificate /etc/nginx/certs/star_com.crt;
        ssl_certificate_key /etc/nginx/certs/star_com.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;

        proxy_set_header X-Forwarded-Proto $scheme;

        location / {
            proxy_pass http://127.0.0.1:8080/;
        }
    }
}
Then I run my local application server, listening for incoming HTTP requests on 8080,
and finally I open https://mytestserver.com/users/list and get a 502 error.
In the Nginx logs I can see this error:
2021/07/06 20:37:47 [error] 23#23: *3 connect() failed (111: Connection refused)
while connecting to upstream, client: 172.17.0.1, server: mytestserver.com, request:
"GET /users/list HTTP/1.1", upstream: "http://127.0.0.1:8080/users/list", host: "mytestserver.com"
What am I missing here?
What worked for me was using host.docker.internal as the upstream address: inside the Docker container, 127.0.0.1 refers to the container itself, not to the Mac host where the Java server is listening.
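A minimal sketch of what that change might look like in the config above (host.docker.internal is a name provided by Docker Desktop; this assumes the Java server still listens on 8080 on the host):

```nginx
location / {
    proxy_pass http://host.docker.internal:8080/;
}
```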

NGINX proxy to anycable websocket server causing "111: Connection refused"

This is my NGINX config:
upstream app {
    server 127.0.0.1:3000;
}

upstream websockets {
    server 127.0.0.1:3001;
}

server {
    listen 80 default_server deferred;
    root /home/malcom/dev/scrutiny/public;
    server_name localhost 127.0.0.1;

    try_files $uri @app;

    location @app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app;
    }

    location /cable {
        proxy_pass http://websockets/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
"app" is a puma server serving a Rails app, and "websockets" points to an anycable-go process as the backend for CableReady.
The Rails app is working fine, apart from the websockets.
The browser says:
WebSocket connection to 'ws://127.0.0.1/cable' failed:
And the NGINX error_log the following:
2021/07/14 13:47:59 [error] 16057#16057: *14 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /cable HTTP/1.1", upstream: "http://127.0.0.1:3001/", host: "127.0.0.1"
The websocket setup per se is working, since everything's fine if I point the ActionCable config directly to 127.0.0.1:3001. It's trying to pass it through NGINX that's giving me headaches.
All the documentation and advice I've found so far makes me believe that this config should do the trick, but it's really not.
Thanks in advance!
The problem turned out to be the trailing slash in
proxy_pass http://websockets/;
Removing it got everything working.
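For reference, a sketch of the corrected block: with the trailing slash, nginx replaces the matched /cable prefix with / before proxying, so anycable-go receives a request for / instead of /cable; dropping the slash passes the original URI through unchanged.

```nginx
location /cable {
    proxy_pass http://websockets;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}
```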

Flask-SocketIO 502 Error on AWS EC2 with [CRITICAL] Worker Timeouts

I'm setting up a reverse-proxy NGINX EC2 deployment of a flask app on AWS by following this guide. More specifically, I'm using a proxy pass to a gunicorn server (see config info below).
Things have been running smoothly, and the flask portion of the setup works great. The only thing is that, when attempting to access pages that rely on Flask-SocketIO, the client throws a 502 (Bad Gateway) and some 400 (Bad Request) errors. This happens after successfully talking a bit with the server, but then the next message(s) (e.g. https://example.com/socket.io/?EIO=3&transport=polling&t=1565901966787-3&sid=c4109ab0c4c74981b3fc0e3785fb6558) sit(s) at pending, and after 30 seconds the gunicorn worker throws a [CRITICAL] WORKER TIMEOUT error and reboots.
A potentially important detail: I'm using eventlet, and I've applied monkey patching.
I've tried changing around ports, using 0.0.0.0 instead of 127.0.0.1, and a few other minor alterations. I haven't been able to locate any resources online that deal with these exact issues.
The tasks asked of the server are very light, so I'm really not sure why it's hanging like this.
NGINX config:
server {
    # listen on port 80 (http)
    listen 80;
    server_name _;

    location ~ /.well-known {
        root /home/ubuntu/letsencrypt;
    }

    location / {
        # redirect any requests to the same URL but on https
        return 301 https://$host$request_uri;
    }
}

server {
    # listen on port 443 (https)
    listen 443 ssl;
    server_name _;
    ...

    location / {
        # forward application requests to the gunicorn server
        proxy_pass http://127.0.0.1:5000;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /socket.io {
        include proxy_params;
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass http://127.0.0.1:5000/socket.io;
    }
    ...
}
Launching the gunicorn server:
gunicorn -b 127.0.0.1:5000 -w 1 "app:create_app()"
Client socket declaration:
var protocol = window.location.protocol
var socket = io.connect(protocol + '//' + document.domain + ':' + location.port);
Requirements.txt:
Flask_SQLAlchemy==2.4.0
SQLAlchemy==1.3.4
Flask_Login==0.4.1
Flask_SocketIO==4.1.0
eventlet==0.25.0
Flask==1.0.3
Flask_Migrate==2.5.2
Sample client-side error messages:
POST https://example.com/socket.io/?EIO=3&transport=polling&t=1565902131372-4&sid=17c5c83a59e04ee58fe893cd598f6df1 400 (BAD REQUEST)
socket.io.min.js:1 GET https://example.com/socket.io/?EIO=3&transport=polling&t=1565902131270-3&sid=17c5c83a59e04ee58fe893cd598f6df1 400 (BAD REQUEST)
socket.io.min.js:1 GET https://example.com/socket.io/?EIO=3&transport=polling&t=1565902165300-7&sid=4d64d2cfc94f43b1bf6d47ea53f9d7bd 502 (Bad Gateway)
socket.io.min.js:2 WebSocket connection to 'wss://example.com/socket.io/?EIO=3&transport=websocket&sid=4d64d2cfc94f43b1bf6d47ea53f9d7bd' failed: WebSocket is closed before the connection is established
Sample gunicorn error messages (note: first line is the result of a print statement)
Client has joined his/her room successfully
[2019-08-15 20:54:18 +0000] [7195] [CRITICAL] WORKER TIMEOUT (pid:7298)
[2019-08-15 20:54:18 +0000] [7298] [INFO] Worker exiting (pid: 7298)
[2019-08-15 20:54:19 +0000] [7300] [INFO] Booting worker with pid: 7300
You need to use the eventlet worker with Gunicorn; the default sync worker cannot keep Socket.IO's long-polling and WebSocket connections open, which is why the worker hits the timeout and restarts:
gunicorn -b 127.0.0.1:5000 -w 1 -k eventlet "app:create_app()"

nginx SSL connection fail

I am having an issue trying to authenticate my account with the GitHub plugin behind an nginx reverse proxy. This is my configuration:
Gerrit version: 2.14.8
$ cat /etc/nginx/sites-available/my_config_file
server {
    listen 443;
    server_name my_server_hostname;

    ssl on;
    ssl_certificate conf.d/certificate.crt;
    ssl_certificate_key conf.d/certificate.key;

    location ^~ / {
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }
}
$ cat /var/log/nginx/error.log
2018/08/15 08:49:47 [error] 3247#3247: *1 SSL_do_handshake() failed (SSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol) while SSL handshaking to upstream, client: IP_ADDRESS, server: my_server_hostname, request: "GET / HTTP/1.1", upstream: "https://127.0.0.1:8081/", host: "my_server_hostname"
The Gerrit log does not show any errors.
I think this is a very simple issue, but I cannot figure out how to fix it. Please help me with this.
Thanks
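No answer is recorded here, but the error log offers a hint: nginx reports "SSL handshaking to upstream" against "https://127.0.0.1:8081/", and the "unknown protocol" failure is what happens when the backend answers with plain HTTP. The config shown above already says http://, so the config actually loaded may differ. A sketch of the likely fix, assuming Gerrit serves plain HTTP on port 8081:

```nginx
location ^~ / {
    # Gerrit listens for plain HTTP on 8081, so the scheme must be http://
    proxy_pass http://127.0.0.1:8081;
}
```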

nginx 502 Bad Gateway with NodeBB

I get a 502 Bad Gateway when connecting to my NodeBB installation using my domain.
NodeBB is running on the default port (4567).
My nginx seems to be configured properly (when connecting using the IP): http://puu.sh/mLI7U/0e03691d4c.png
My NodeBB seems to be configured properly (when connecting using the IP):
http://puu.sh/mLI95/5fdafcaed9.png
My A record pointing the domain to my VPS's IP is configured properly.
Here is my /etc/nginx/conf.d/example.com.conf:
server {
    listen 80;
    server_name sporklounge.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://127.0.0.1:4567/;
        proxy_redirect off;

        # Socket.IO Support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
My NodeBB config.json
{
    "url": "http://localhost:4567",
    "secret": "25d0d6a2-0444-49dc-af0c-bd693f5829d8",
    "database": "redis",
    "redis": {
        "host": "127.0.0.1",
        "port": "6379",
        "password": "",
        "database": "0"
    }
}
Here is my /var/log/nginx/error.log:
2016/01/27 12:04:42 [error] 22026#0: *4062 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 50.186.224.26, server: sporklounge.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:80/favicon.ico", host: "sporklounge.com", referrer: "http://sporklounge.com/"
2016/01/27 12:21:06 [crit] 974#0: *1 connect() to 127.0.0.1:4567 failed (13: Permission denied) while connecting to upstream, client: 50.186.224.26, server: sporklounge.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:4567/", host: "sporklounge.com"
2016/01/27 12:21:07 [crit] 974#0: *1 connect() to 127.0.0.1:4567 failed (13: Permission denied) while connecting to upstream, client: 50.186.224.26, server: sporklounge.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:4567/favicon.ico", host: "sporklounge.com", referrer: "http://sporklounge.com/"
All help is greatly appreciated and I will answer all questions that i can to help get a solution, thank you!
The one thing I see is that, according to the docs, your url config value should be the full web-accessible address that points to your NodeBB. That would be http://sporklounge.com, not the current value.
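A sketch of that change in config.json (only the url key differs; the secret and redis settings from the file above stay as they are):

```json
{
    "url": "http://sporklounge.com"
}
```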
It could also be that the backend is just sometimes responding slowly. Try a very high value for the read timeout in Nginx to see if the backend eventually responds:
# For testing, allow very long response times.
proxy_read_timeout 5m;
Also, use netstat to confirm the backend is running on port 4567:
sudo netstat -nlp | grep ':4567'
Wait, the answer may be right in your logs, which give you the reason for the connection failure:
(13: Permission denied) while connecting to upstream
See the related question:
(13: Permission denied) while connecting to upstream:[nginx]
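On SELinux-enforcing systems (e.g. CentOS/RHEL), that "Permission denied" error is commonly raised because SELinux forbids nginx from making outbound network connections. A hedged sketch of the usual fix, assuming SELinux is in fact the cause (check with `getenforce` first):

```shell
# Allow processes in the httpd_t domain (including nginx) to
# open outbound connections to upstream ports; -P persists it.
sudo setsebool -P httpd_can_network_connect 1
```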