Mercure 0.14 configuration behind Nginx reverse proxy with supervisor - server not responding

I need help configuring Mercure 0.14 to run on port 3000 on a production server, with nginx proxying to it under the path /mercure/.
To install Mercure, I did the following:
Grabbed the Mercure 0.14 Linux x86_64 binary from the releases page on GitHub: https://github.com/dunglas/mercure/releases
Unpacked the tarball and moved the mercure binary to /usr/bin/mercure (roughly as sketched below).
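For reference, the unpack-and-install step was roughly the following (the tarball name here is an assumption; it varies by release):
# extract the release tarball and install the binary system-wide
tar -xzf mercure_0.14.*_Linux_x86_64.tar.gz
sudo mv mercure /usr/bin/mercure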
I set up supervisor with the following configuration (with sensitive info redacted):
# file: /etc/supervisor/conf.d/mercure.conf
[program:mercure]
command=/usr/bin/mercure run
process_name=%(program_name)s_%(process_num)s
numprocs=1
environment=HOME="/home/[redacted]",USER="[redacted]",JWT_KEY=[redacted],SUBSCRIPTIONS="1",CORS_ALLOWED_ORIGINS="[redacted]",USE_FORWARDED_HEADERS="1",PUBLISH_ALLOWED_ORIGINS="http://localhost:3000",directory=/tmp,SERVER_NAME=:3000
autostart=true
autorestart=true
startsecs=5
startretries=10
user=[redacted]
redirect_stderr=false
stdout_capture_maxbytes=1MB
stderr_capture_maxbytes=1MB
stdout_logfile=/var/log/mercure.out.log
stderr_logfile=/var/log/mercure.error.log
Note that I set SERVER_NAME=:3000 in order to run it on port 3000.
Supervisor was able to start the process and shows it as running.
$ sudo supervisorctl status
mercure:mercure_0 RUNNING pid 1410254, uptime 0:09:30
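Since the process is running, one way to verify that supervisor is passing the variables as intended (commas and quoting in the environment= line are easy to get wrong) is to dump the process's environment; a minimal sketch, assuming pgrep matches only the mercure process:
# print the environment of the running mercure process, one variable per line
sudo cat /proc/$(pgrep -f 'mercure run')/environ | tr '\0' '\n' | grep -E 'SERVER_NAME|SUBSCRIPTIONS'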
However, I'm not able to curl the server...
$ curl localhost:3000/.well-known/mercure
curl: (7) Failed to connect to localhost port 3000: Connection refused
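Before blaming nginx, it may be worth confirming whether the process is listening on any TCP port at all; a quick check with ss (assuming iproute2 is installed, as on most modern distros):
# list listening TCP sockets for the mercure process; no output means it isn't listening anywhere
sudo ss -ltnp | grep mercure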
Finally, I have Nginx set up to proxy requests on port 80 to the mercure server on port 3000. This is my Nginx configuration:
location /mercure/ {
    proxy_pass http://127.0.0.1:3000/;
    proxy_read_timeout 24h;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}
However, going to (servername)/mercure/.well-known/mercure results in a 502.
Mercure's logs only show:
{"level":"info","ts":1675706204.7100313,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"info","ts":1675706204.7100613,"msg":"serving initial configuration"}
But there is no error message in the logs.
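These lines come from Caddy (the Mercure binary embeds the Caddy web server), and nothing in them indicates an HTTP server was started on :3000. Since the admin endpoint is listening on localhost:2019, Caddy's admin API can dump the configuration that was actually loaded; a hedged diagnostic, assuming the admin API is reachable as the log suggests:
# print the configuration Caddy/Mercure is currently serving
curl -s localhost:2019/config/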
I think the Nginx part is correct, but for some reason the server is just not running/responding on port 3000. Can anyone help?

Related

cannot visit minio server dashboard from ubuntu

I am working with nginx and MinIO on Ubuntu.
The MinIO server is started with
nohup /usr/local/bin/minio server /data/tenant1 --address :9001 > /opt/logs/minio.log 2>&1 &
and it is working.
Then I started nginx and configured the server with the following configuration:
nano /etc/nginx/sites-available/default
server {
    listen 9000;
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://127.0.0.1:9001;
    }
}
sudo systemctl restart nginx
From the open ports, it is clear that MinIO is listening on port 9001 and nginx on port 9000:
minio 9349 root 12u IPv6 35833021 0t0 TCP *:9001 (LISTEN)
nginx 12416 www-data 8u IPv4 36153228 0t0 TCP *:9000 (LISTEN)
Finally, the firewall is inactive according to ufw status, and my server's security group also allows port 9000.
However, when I try to visit the MinIO server dashboard at http://IP:9000/minio, it does not work.
Is there any problem with my configuration?
The MinIO console dashboard listens on a separate port - you need to specify it with the option --console-address :10000. If you do not specify it, a random port is chosen for you and displayed in MinIO's logs in your /opt/logs/minio.log file. Port 9001 in your setup serves the S3 storage API only.
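For example, the start command from the question would become something like this (port 10000 is just an illustration; any free port works):
# serve the S3 API on :9001 and pin the web console to :10000
nohup /usr/local/bin/minio server /data/tenant1 --address :9001 --console-address :10000 > /opt/logs/minio.log 2>&1 &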

gunicorn reverse proxy accessible from internet

I configured nginx and gunicorn to serve a Flask app, and I started gunicorn with this command:
gunicorn --bind 0.0.0.0:5000 wsgi:app
My website is accessible from my IP address on port 80. However, it is accessible on port 5000 as well. It seems my reverse proxy works as it should, but the gunicorn server can be reached directly too.
I'm planning to disable port 5000, but I'm not sure this is the correct, secure way to solve such a problem.
This is my nginx conf file:
server {
    server_name <my_ip_address>;
    access_log /var/log/nginx/domain-access.log;
    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 10;
        proxy_read_timeout 10;
        # This line is important as it tells nginx to channel all requests to port 5000.
        # We will later run our wsgi application on this port using gunicorn.
        proxy_pass http://127.0.0.1:5000/;
    }
}
You're binding gunicorn to 0.0.0.0, hence it's available on the external interfaces. Assuming this is just one box, instead:
gunicorn --bind 127.0.0.1:5000 wsgi:app
This no longer listens for requests from external interfaces, meaning all requests must come through nginx.
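To confirm the new binding, a quick check (assuming ss is available):
# should now show 127.0.0.1:5000 instead of 0.0.0.0:5000
ss -ltn | grep ':5000'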
Of course if you did bind gunicorn to 0.0.0.0 you could make a firewall rule with iptables to DROP traffic to that port from external interfaces.
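A minimal sketch of such a rule, assuming the port 5000 setup above:
# drop TCP traffic to port 5000 unless it arrives via the loopback interface
sudo iptables -A INPUT -p tcp --dport 5000 ! -i lo -j DROP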
If you are using a cloud provider they may implement this firewall functionality natively on their platform - for example Security Groups on AWS EC2 would allow you to create a 'webserver' group which only allows traffic through for ports 80 & 443.

Nginx Proxy Pass to External APIs- 502 Bad Gateway

Issue: I have an nginx reverse proxy installed on an Ubuntu server with a private IP only. The purpose of this reverse proxy is to route incoming requests to various third-party WebSocket and REST APIs. Furthermore, to distribute the load, I have an HTTP load balancer sitting in front of the nginx proxy server.
So this is how it looks technically:
IncomingRequest --> InternalLoadBalancer(Port:80) --> NginxReverseProxyServer(80) --> ThirdPartyAPIs(Port:443) & WebSockets(443)
The problem I have is that nginx does not proxy correctly to the REST APIs and gives a 502 error, but it does work successfully for WebSockets.
Below is my /etc/nginx/sites-available/default config file: (No changes done elsewhere)
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
server {
    listen 80;
    location /binance-ws/ {
        # WebSocket connection
        ####################### THIS WORKS FINE
        proxy_pass https://stream.binance.com:9443/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }
    location /binance-api/ {
        # REST API connection
        ##################### THIS FAILS WITH 502 ERROR
        proxy_pass https://api.binance.com/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }
}
I have even tried adding https://api.binance.com:443/ but no luck.
The websocket connection works fine:
wscat -c ws://LOADBALANCER-DNS/binance-ws/ws/btcusdt#aggTrade
However, the below one fails:
curl http://LOADBALANCER-DNS/binance-api/api/v3/time
When I check the nginx logs for the 502 error, this is what I see:
[error] 14339#14339: *20 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 10.5.2.187, server: , request: "GET /binance-api/api/v3/time HTTP/1.1", upstream: "https://52.84.150.34:443/api/v3/time", host: "internal-prod-nginx-proxy-xxxxxx.xxxxx.elb.amazonaws.com"
This is the actual REST API call that I am trying to replicate through nginx:
curl https://api.binance.com/api/v3/time
I have gone through many similar posts but have been unable to find where I am going wrong. Appreciate your help!
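Given that error, two things may be worth checking: nginx does not send SNI to HTTPS upstreams by default (proxy_ssl_server_name defaults to off), and Host $host forwards the load balancer's hostname to api.binance.com rather than the upstream's own name. A sketch of the directives to try in the failing location block, offered as a guess rather than a confirmed fix:
location /binance-api/ {
    proxy_pass https://api.binance.com/;
    proxy_ssl_server_name on;               # send SNI during the TLS handshake with the upstream
    proxy_set_header Host api.binance.com;  # present the upstream's hostname, not the LB's
}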

Flask-SocketIO 502 Error on AWS EC2 with [CRITICAL] Worker Timeouts

I'm setting up a reverse-proxy NGINX EC2 deployment of a flask app on AWS by following this guide. More specifically, I'm using a proxy pass to a gunicorn server (see config info below).
Things have been running smoothly, and the flask portion of the setup works great. The only thing is that, when attempting to access pages that rely on Flask-SocketIO, the client throws a 502 (Bad Gateway) and some 400 (Bad Request) errors. This happens after successfully talking a bit with the server, but then the next message(s) (e.g. https://example.com/socket.io/?EIO=3&transport=polling&t=1565901966787-3&sid=c4109ab0c4c74981b3fc0e3785fb6558) sit(s) at pending, and after 30 seconds the gunicorn worker throws a [CRITICAL] WORKER TIMEOUT error and reboots.
A potentially important detail: I'm using eventlet, and I've applied monkey patching.
I've tried changing around ports, using 0.0.0.0 instead of 127.0.0.1, and a few other minor alterations. I haven't been able to locate any resources online that deal with these exact issues.
The tasks asked of the server are very light, so I'm really not sure why it's hanging like this.
NGINX config:
server {
    # listen on port 80 (http)
    listen 80;
    server_name _;
    location ~ /.well-known {
        root /home/ubuntu/letsencrypt;
    }
    location / {
        # redirect any requests to the same URL but on https
        return 301 https://$host$request_uri;
    }
}
server {
    # listen on port 443 (https)
    listen 443 ssl;
    server_name _;
    ...
    location / {
        # forward application requests to the gunicorn server
        proxy_pass http://127.0.0.1:5000;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    location /socket.io {
        include proxy_params;
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass http://127.0.0.1:5000/socket.io;
    }
    ...
}
Launching the gunicorn server:
gunicorn -b 127.0.0.1:5000 -w 1 "app:create_app()"
Client socket declaration:
var protocol = window.location.protocol
var socket = io.connect(protocol + '//' + document.domain + ':' + location.port);
Requirements.txt:
Flask_SQLAlchemy==2.4.0
SQLAlchemy==1.3.4
Flask_Login==0.4.1
Flask_SocketIO==4.1.0
eventlet==0.25.0
Flask==1.0.3
Flask_Migrate==2.5.2
Sample client-side error messages:
POST https://example.com/socket.io/?EIO=3&transport=polling&t=1565902131372-4&sid=17c5c83a59e04ee58fe893cd598f6df1 400 (BAD REQUEST)
socket.io.min.js:1 GET https://example.com/socket.io/?EIO=3&transport=polling&t=1565902131270-3&sid=17c5c83a59e04ee58fe893cd598f6df1 400 (BAD REQUEST)
socket.io.min.js:1 GET https://example.com/socket.io/?EIO=3&transport=polling&t=1565902165300-7&sid=4d64d2cfc94f43b1bf6d47ea53f9d7bd 502 (Bad Gateway)
socket.io.min.js:2 WebSocket connection to 'wss://example.com/socket.io/?EIO=3&transport=websocket&sid=4d64d2cfc94f43b1bf6d47ea53f9d7bd' failed: WebSocket is closed before the connection is established
Sample gunicorn error messages (note: first line is the result of a print statement)
Client has joined his/her room successfully
[2019-08-15 20:54:18 +0000] [7195] [CRITICAL] WORKER TIMEOUT (pid:7298)
[2019-08-15 20:54:18 +0000] [7298] [INFO] Worker exiting (pid: 7298)
[2019-08-15 20:54:19 +0000] [7300] [INFO] Booting worker with pid: 7300
You need to use the eventlet worker with Gunicorn; the default sync worker cannot service Socket.IO's long-lived connections, which is why the worker hits its 30-second timeout and is restarted:
gunicorn -b 127.0.0.1:5000 -w 1 -k eventlet "app:create_app()"

nginx run npm with non-default port?

I have my Angular app running at the root URL on port 80, and I want to access the API running on port 8090. But when I tried to change the port nginx listens on to 8090, it reported a conflict (since npm is already running on 8090), so I switched to 8100 instead. But when I try to hit that port it doesn't connect. My goal is to be able to go to http://174.131.183.112:8100 for my API.
server {
    listen 8100;
    server_name api._;
    location / {
        proxy_pass http://174.131.183.112:8090;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Note: the weird thing is that if I don't have anything else running in nginx, it still uses the default port 80 to connect to my API, as if the 8100 weren't there at all.
I realized that I had set up ufw to block all ports, so I had to run sudo ufw allow 8100 to open it, and now it works.
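For anyone hitting the same thing, the check and the fix look roughly like this:
# see which ports the firewall currently allows
sudo ufw status verbose
# open the port nginx is listening on
sudo ufw allow 8100/tcp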