private_pub/faye and nginx tcp -- 502 Bad Gateway

So I got the tcp module for nginx all set up and am trying to use it with private_pub (Faye) for WebSockets. Right now I'm getting very slow loading from Faye and 502 Bad Gateway errors. Everyone points towards a configuration like the following, which I have in my nginx.conf:
tcp {
    timeout 1d;
    websocket_read_timeout 1d;
    websocket_send_timeout 1d;

    upstream websockets {
        server 199.36.105.34:9292;
        check interval=300 rise=2 fall=5 timeout=1000;
    }

    server {
        listen 9200;
        server_name 2u.fm;
        timeout 43200000;
        websocket_connect_timeout 43200000;
        proxy_connect_timeout 43200000;
        so_keepalive on;
        tcp_nodelay on;
        websocket_pass websockets;
    }
}
I've tried every variation of that on the web. I want to be able to hit it from my domain "2u.fm/faye" but the only way I can get that to work is to do a proxy inside my http block:
location /faye {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://127.0.0.1:9200;
    break;
}
Adding that makes it work at 2u.fm/faye, but now I'm back at square one: still getting super slow responses and 502 Bad Gateways. I think that makes sense, as it's still routing through http and not directly to tcp. I've tried hitting 199.36.105.34:9200 directly but I get no response.
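For what it's worth, nginx 1.3.13 and later can proxy WebSockets in the ordinary http block, so the tcp module is not needed at all. A minimal sketch, assuming Faye is still listening on 127.0.0.1:9292:

```nginx
# Inside the http block; the tcp module is not required on nginx >= 1.3.13.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    server_name 2u.fm;

    location /faye {
        proxy_pass http://127.0.0.1:9292;
        proxy_http_version 1.1;
        # Pass the WebSocket handshake headers through to Faye.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        # Keep long-lived connections from being closed after the 60s default.
        proxy_read_timeout 1d;
    }
}
```

This keeps everything on the normal http/80 path, so 2u.fm/faye works without the extra 9200 hop.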

Related

Client IP behind nginx reverse proxy

I'm running an nginx reverse proxy so I can run multiple servers behind my firewall. I noticed that on my mail server the error log is filled with "failed login from < local ip of nginx >". How can I get the remote IP of the person/bot that is trying to log in, so I can use that information to auto-block those addresses (for example)?
This is my current config:
server {
    listen 8443 ssl http2;
    server_name mail.domain.com;

    location / {
        proxy_set_header Host $host;
        proxy_pass https://<internal ip>/;
        client_max_body_size 0;
        proxy_connect_timeout 3600;
        proxy_send_timeout 3600;
        proxy_read_timeout 3600;
        send_timeout 3600;
    }
}
I think you're looking for one of these:
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
You can add these to the http, server or location block and read the header in your app to filter requests.
Just found out my mail server (Kerio) does nothing with the information forwarded by the reverse proxy, so the only thing I can do is hope for an update that does.
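If the backend did honor forwarded headers (or is itself an nginx instance), the ngx_http_realip_module can restore the original client address from X-Real-IP. A sketch for an nginx backend, assuming the reverse proxy sits at 192.168.1.10:

```nginx
# On the backend server (requires ngx_http_realip_module).
server {
    listen 443 ssl;

    # Trust X-Real-IP only when the request comes from the reverse proxy.
    set_real_ip_from 192.168.1.10;
    real_ip_header X-Real-IP;

    # $remote_addr, access logs, and allow/deny rules now see the
    # original client IP instead of the proxy's local IP.
}
```

The set_real_ip_from restriction matters: without it, any client could spoof its address simply by sending an X-Real-IP header.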

Nginx returns 502 to browsers but works fine with curl

I have a MediaWiki running in a Kubernetes cluster. The cluster is behind an nginx proxy with the following config:
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 1024;
}

http {
    upstream rancher {
        server 192.168.122.90:80;
    }

    map $http_upgrade $connection_upgrade {
        default Upgrade;
        ''      close;
    }

    server {
        listen 443 ssl http2;
        server_name .domain;
        ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
        ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://rancher;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            # Allows the execute shell window to remain open for up to
            # 15 minutes. Without this parameter, the default is 1 minute
            # and it will automatically close.
            proxy_read_timeout 900s;
            proxy_connect_timeout 75s;
        }
    }

    server {
        listen 80 default_server;
        server_name _;
        return 301 https://$host$request_uri;
    }
}
I can get to the main page of the wiki, but have to log in before using it. When I click to log in using OAuth2, I get a 502 status from the nginx proxy server (nginx reports that the upstream ended the connection prematurely). If I make the same request with curl, I get a 302 with the location of the authorization endpoint, as expected. I really don't understand why. Accessing the cluster directly (from the VM host) without the proxy works normally, but that isn't what I want.
So the issue was related to neither nginx nor Kubernetes. It was an issue with MediaWiki, where compression had some funny behaviour. See more here, if anyone encounters anything similar :)
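One way to rule out (or work around) upstream compression quirks like this is to stop nginx from advertising compression support to the upstream, so the upstream always answers uncompressed. A minimal sketch against the config above:

```nginx
location / {
    proxy_pass http://rancher;
    # Clear the Accept-Encoding header sent to the upstream, so it
    # responds uncompressed; nginx can still gzip toward the browser
    # with "gzip on" if desired.
    proxy_set_header Accept-Encoding "";
}
```

This also explains why curl "worked": by default curl sends no Accept-Encoding header, so the upstream never compressed those responses.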

Nginx as a reverse proxy to hide HLS origin servers

I'm no nginx professional, or amateur for that matter; I just know how to google a lot, and I need help finishing off a reverse proxy.
What I currently have is a main server that handles connections and then hands the client off to one of 8 backend servers, which deliver the stream in either TS or HLS. I want to put a proxy at the front that acts as the main server but also delivers the stream (like an edge server, I guess, but with no caching) so that the origin servers are hidden.
I have got it working with TS, but I can't for the life of me work out how to get it working with HLS, no matter how much I packet capture. It pulls the manifest fine, but unlike with TS it isn't pulling the segments from the origin servers.
Here is the config I have so far (it could probably be cleaner, but this was all done with google):
server {
    listen 80;
    server_name proxy_IP_here;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_buffering off;

    location ~ \.(m3u8|mpd)$ {
        proxy_pass backend_IP_for_Main;
        proxy_intercept_errors on;
        error_page 301 302 307 = @handle_redirects;
    }

    location / {
        proxy_pass backend_IP_for_Main;
        sub_filter 'dns_i_have_it_fildering_here' 'proxy_IP_here';
        sub_filter_once off;
        sub_filter_types text/javascript application/json;
        proxy_intercept_errors on;
        error_page 301 302 307 = @handle_redirects;
    }

    location @handle_redirects {
        set $saved_redirect_location '$upstream_http_location';
        proxy_pass $saved_redirect_location;
    }
}
If I remove
proxy_intercept_errors on;
error_page 301 302 307 = @handle_redirects;
from the m3u8 location block, HLS will work, but the stream is delivered directly by the origin server to the end client and not through the proxy.
Any help greatly appreciated.
Thanks in advance.
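One detail that often bites with this redirect-following pattern: when proxy_pass takes a variable, nginx resolves any hostname in it at request time, which requires an explicit resolver directive; without one, requests to a redirect whose Location contains a hostname fail. A sketch of the named location with that added (8.8.8.8 is just a placeholder for any reachable DNS server):

```nginx
location @handle_redirects {
    # Needed because proxy_pass uses a variable: nginx must be able
    # to resolve hostnames in the saved Location header at runtime.
    resolver 8.8.8.8;

    set $saved_redirect_location '$upstream_http_location';
    proxy_pass $saved_redirect_location;
}
```

If the origin's redirects point at IP addresses rather than hostnames, the resolver is not strictly required, but it is harmless to keep.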

Nginx returns HTTP Status 200 instead 302 on a proxy_pass configuration

I have the following configuration on an nginx server acting as a reverse proxy to my Docker machine, located at 192.168.99.100:3150.
Basically, I need to hit http://localhost:8150 and have the content displayed be the content from inside the Docker container.
The configuration below is doing its job.
The point here is that when hitting localhost:8150 I'm getting HTTP status code 302, and I would like to get status code 200.
Does anyone know if this can be done in nginx, or some other way to do it?
server {
    listen 8150;

    location / {
        proxy_pass http://192.168.99.100:3150;
    }
}
Response from a request to http://localhost:8150/products
HTTP Requests
-------------
GET /projects 302 Found
I have found the solution.
It looks like a simple proxy_pass doesn't work well with ngrok.
I'm now using proxy_pass with an upstream and it's working fine.
Below is my configuration.
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    upstream rorweb {
        server 192.168.99.100:3150 fail_timeout=0;
    }

    server {
        listen 8150;
        server_name git.example.com;
        server_tokens off;
        root /dev/null;
        client_max_body_size 20m;

        location / {
            proxy_read_timeout 300;
            proxy_connect_timeout 300;
            proxy_redirect off;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Frame-Options SAMEORIGIN;
            proxy_pass http://rorweb;
        }
    }

    include servers/*;
}
My environment is like this:
Docker (running a rails project on port 3150)
Nginx (as a reverse proxy exposing the port 8150)
Ngrok (exposing my localhost/nginx)
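A note on the original 302 question: if the redirect comes from the Rails app itself, nginx cannot simply turn it into a 200, but the proxy_redirect directive can rewrite the Location header so the browser stays on the proxy's address instead of being sent to the Docker IP. A minimal sketch, assuming the proxy is reached as localhost:8150:

```nginx
server {
    listen 8150;

    location / {
        proxy_pass http://192.168.99.100:3150;
        # Rewrite Location headers in upstream redirects so the client
        # is sent back through the proxy rather than to the Docker IP.
        proxy_redirect http://192.168.99.100:3150/ http://localhost:8150/;
    }
}
```

With proxy_redirect off (as in the accepted config above), nginx passes the upstream's Location header through untouched, which is fine as long as the app generates redirects relative to the proxied Host header.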

nginx and proxying WebSockets

I'm trying to proxy WebSocket + HTTP traffic with nginx.
I have read this: http://nginx.org/en/docs/http/websocket.html
My config looks like:
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    gzip on;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 80;
        server_name ourapp.com;

        location / {
            proxy_pass http://127.0.0.1:100;
            proxy_http_version 1.1;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}
I have 2 problems:
1) The connection closes once a minute.
2) I want to run both HTTP and WS on the same port. The application works fine locally, but if I try to put HTTP and WS on the same port and set this nginx proxy, I get this:
WebSocket connection to 'ws://ourapp.com/ws' failed: Unexpected response code: 200
Loading the app (HTTP) seems to work fine, but WebSocket connection fails.
Problem 1: As for the connection dying once a minute, I realized it's an nginx timeout. I can either make our app ping once in a while or increase the timeout. I'm not sure I should set it to 0, so I decided to just ping once a minute and set the timeout to 90 seconds (keepalive_timeout).
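The once-a-minute drop matches nginx's proxy_read_timeout, which defaults to 60 seconds; for a proxied WebSocket that is the directive that closes an idle connection (keepalive_timeout governs client keep-alive between plain HTTP requests). A sketch of the location block with the timeout raised:

```nginx
location / {
    proxy_pass http://127.0.0.1:100;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    # A proxied connection with no traffic for longer than this is
    # closed; the 60s default explains a disconnect once a minute.
    proxy_read_timeout 90s;
}
```

Combined with an application-level ping every minute, this keeps the socket open indefinitely without disabling the timeout entirely.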
Problem 2: Connectivity issues arose when I used CloudFlare CDN. Disabling CloudFlare acceleration solved the problem.
Alternatively I could create a subdomain and set it as "unaccelerated" and use that for WS.