Configure Nginx reverse proxy for MQTT

I'm trying to set up a reverse proxy that forwards localhost:8081 to a broker installed on another machine.
My Nginx config file is:
worker_processes 1;

events {
    worker_connections 1024;
}

server {
    listen 8081;
    server_name localhost;

    location / {
        proxy_pass tcp://192.168.1.177:1883;
    }
}
But when I try to connect to the broker (from the machine where I'm configuring Nginx) with the command
mosquitto_sub -h localhost -p 8081 -t "stat/tasmota_8231A8/POWER1"
I get the error Connection refused.
Edit:
Mosquitto broker config:
persistence true
persistence_location /var/lib/mosquitto/
include_dir /etc/mosquitto/conf.d
listener 1883
allow_anonymous true
Edit
I tried this config file for nginx:
worker_processes 1;

events {
    worker_connections 1024;
}

stream {
    listen 8081;
    proxy_pass 192.168.1.77:1883;
}

This won't work for native MQTT.
What you have configured is an HTTP proxy, but MQTT != HTTP.
You need to configure nginx as a stream proxy, e.g.:
stream {
    server {
        listen 8081;
        proxy_pass 192.168.1.77:1883;
    }
}
https://docs.nginx.com/nginx/admin-guide/tcp-udp-load-balancer/
Or configure mosquitto to support MQTT over WebSockets (assuming the client supports this as well). Then you can use HTTP-based proxying, as WebSockets bootstrap via HTTP.
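A minimal sketch of that WebSockets route (not from the thread; it assumes a mosquitto build with websockets support, and the port numbers are illustrative):

```nginx
# mosquitto.conf – add a WebSockets listener alongside the native one
listener 1883
listener 9001
protocol websockets

# nginx.conf – plain HTTP proxying works because WebSockets bootstrap via HTTP
http {
    server {
        listen 8081;

        location / {
            proxy_pass http://192.168.1.177:9001;
            # forward the WebSocket upgrade handshake
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
```

The client would then have to connect over WebSockets (e.g. ws://localhost:8081) rather than with plain mosquitto_sub.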

Related

Secure access redirection

I would like to use nginx to serve files from
https://example.com/documents/*
but to redirect all other requests to localhost:4443. I need to use stream to pass the client certificate through to the listening application on 4443; otherwise it seems that nginx conducts the TLS handshake itself and it fails.
I have tried this, but nginx complains (of course) that port 443 is already in use:
http {
    server {
        listen 443;
        location /documents/ {
            sendfile on;
        }
    }
}

stream {
    server {
        listen 443;
        proxy_pass localhost:4443;
    }
}
I cannot use another port; it must be the same port.
How can I make this work?

NGinx and Proxy Protocol forwarding

I'm trying to create an NGinx configuration in which NGinx receives the proxy protocol header on the incoming connection and passes it on the outgoing connection, essentially to propagate the real IP address to the final target. I'm using the following configuration:
stream {
    upstream some_backend {
        server some_host:8090;
    }

    server {
        listen 8090 proxy_protocol;
        proxy_pass some_backend;
        proxy_protocol on;
    }
}
However, the proxy protocol header I receive on 'some_backend' contains NGinx's IP address and not the source IP address.
Is something wrong with the configuration I'm using?
Can this at all be done?
Oops, I did it again...
It seems that the missing part is adding the set_real_ip_from directive with the IP range you use to access NGinx:
http://nginx.org/en/docs/stream/ngx_stream_realip_module.html
stream {
    upstream some_backend {
        server some_host:8090;
    }

    server {
        listen 8090 proxy_protocol;
        proxy_pass some_backend;
        proxy_protocol on;
        set_real_ip_from 172.17.0.0/24;
    }
}
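For the forwarded header to be useful, the service on some_host:8090 must also be configured to parse the PROXY protocol; otherwise the header bytes look like garbage at the start of the stream. If that backend happens to be nginx too (an assumption; the thread doesn't say what it is), the receiving side could be sketched as:

```nginx
stream {
    server {
        # accept and strip the PROXY protocol header sent by the fronting nginx
        listen 8090 proxy_protocol;
        # only trust PROXY headers coming from the fronting proxy's range
        set_real_ip_from 172.17.0.0/24;
        # hypothetical final service
        proxy_pass 127.0.0.1:9000;
    }
}
```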

docker swarm mode nginx between 2 swarm clusters

I'm trying to create a proxy in front of a swarm cluster.
This proxy is inside of another swarm cluster, to provide HA.
This is the current structure:
Proxy cluster (ip range 192.168.98.100 ~ 192.168.98.102)
proxy-manager1;
proxy-worker1;
proxy-worker2;
App cluster (ip range 192.168.99.100 ~ 192.168.99.107)
app-manager1;
app-manager2;
app-manager3;
app-worker1;
app-worker2;
app-worker3;
app-worker4;
app-worker5;
When I configure nginx with the app-managers' IP addresses, the proxy redirection works well.
But when I configure nginx with the app-managers' hostnames or DNS names, the proxy service does not find the servers to redirect to.
This is the working config file:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    upstream app {
        server 192.168.99.100:8080;
        server 192.168.99.101:8080;
        server 192.168.99.102:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app;
        }
    }
}
Is this good practice?
Or am I doing something wrong?
You have to make sure that nginx can resolve the hostnames to ip addresses. Check this out for more information: https://www.nginx.com/blog/dns-service-discovery-nginx-plus/
Check how nginx is resolving the hostnames, or check the hosts files on the nginx servers.
On a side note, I sometimes run djbdns (https://cr.yp.to/djbdns.html) as a local DNS service and make sure that all the nginx servers use it. It is easier to configure djbdns than to keep all the hosts files updated.
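Once a resolver is reachable, one common workaround with open source nginx (a sketch; the resolver address and hostname are illustrative) is to declare it with the resolver directive and put the backend name in a variable, which forces nginx to re-resolve it at request time instead of only once at startup:

```nginx
http {
    # re-resolve names at runtime instead of caching them forever at startup
    resolver 10.0.0.53 valid=30s;  # hypothetical internal DNS server

    server {
        listen 80;

        location / {
            # using a variable makes nginx resolve the name per request
            set $app_backend app-manager1.example.local:8080;
            proxy_pass http://$app_backend;
        }
    }
}
```

Note that with a variable you give up the upstream {} block and its load balancing; multi-server upstreams with runtime DNS re-resolution are an NGINX Plus feature.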

Nginx TCP forwarding based on domain name

I am trying to use an nginx proxy in front of two different servers:
example.com, example1.com >> nginx (10.0.0.1) >> 10.0.0.2, 10.0.0.3
stream {
    server {
        listen 1935;
        proxy_pass 10.0.0.2:1936;
        proxy_protocol on;
    }

    server {
        listen 1935;
        proxy_pass 10.0.0.3:1936;
        proxy_protocol on;
    }
}
I have checked the TCP load balancing guide, but I could not find how to make this work.
Although there is no server_name in the TCP/UDP protocols, you can forward the traffic to different upstreams based on $server_addr. My example is here: https://stackoverflow.com/a/44821204/5085270
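That idea can be sketched as follows (the second IP is hypothetical: it assumes the nginx host has one local address per domain, and nginx 1.11.3+ so that proxy_pass in a stream block accepts a variable):

```nginx
stream {
    # pick an upstream based on which local address the client connected to
    map $server_addr $backend {
        10.0.0.1 10.0.0.2:1936;  # address example.com resolves to
        10.0.0.5 10.0.0.3:1936;  # address example1.com resolves to
    }

    server {
        listen 1935;
        proxy_pass $backend;
    }
}
```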
Following the examples on nginx's TCP load balancing page, try this:
stream {
    upstream rtmp_servers {
        least_conn;
        server 10.0.0.2:1935;
        server 10.0.0.3:1935;
    }

    server {
        listen 1935;
        proxy_pass rtmp_servers;
    }
}
P.S. Put it outside the http {} block: edit /etc/nginx/nginx.conf and add it after the closing } (at the end of the file).
I don't think it's possible to do this using nginx. However, it can be done easily with HAProxy, which can pass through encrypted traffic based on the SNI (Server Name Indication), an extension of the TLS protocol.
./haproxy/haproxy.cfg
defaults
    maxconn 1000
    mode http
    log global
    option dontlognull
    timeout http-request 5s
    timeout connect 5000
    timeout client 2000000  # ddos protection
    timeout server 2000000  # stick-table type ip size 100k expire 30s store conn_cur

frontend https
    bind *:443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend app1-servers if { req.ssl_sni -i example1.com }  # <--- specify domain name here
    use_backend app2-servers if { req.ssl_sni -i example2.com }

backend app1-servers
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server server1 10.0.0.2:443  # <--- specify IP here

backend app2-servers
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server server1 10.0.0.3:443
We are using TCP forwarding to a back-end Docker Swarm cluster with the simple haproxy.cfg configuration below:
global
    log 127.0.0.1 local0 debug

defaults
    log global

listen l1
    bind 0.0.0.0:443
    mode tcp
    timeout connect 4000
    timeout client 180000
    timeout server 180000
    server swarm_node1 x.x.1.167:443
    server swarm_node2 x.x.1.168:443
    server swarm_node3 x.x.1.169:443
Use the server_name directive to determine which server block is used for a given request.
server {
    listen 1935;
    server_name example.com;

    location / {
        proxy_pass 10.0.0.1:1936;
        # the usual proxy_* stuff
    }
}

server {
    listen 1935;
    server_name example1.com;

    location / {
        proxy_pass 10.0.0.2:1936;
        # the usual proxy_* stuff
    }
}
Source: http://nginx.org/en/docs/http/server_names.html

nginx best practices for reloading servers

I have a nginx config which has:
events {
    worker_connections 1024;
}

http {
    upstream myservers {
        server server1.com:9000;
        server server2.com:9000;
        server server3.com:9000;
    }

    server {
        access_log /var/log/nginx/access.log combined;
        listen 9080;
        location / {
            proxy_pass http://myservers;
        }
    }
}
I need to reload the servers and the method I am using is to bring up the new servers on port 9001 and then do nginx -s reload with the following modification to the config:
upstream myservers {
    server server1.com:9000 down;
    server server2.com:9000 down;
    server server3.com:9000 down;
    server server1.com:9001;
    server server2.com:9001;
    server server3.com:9001;
}
Then I bring down the old servers. However, before I bring down the old servers, I need to make sure all workers that were handling requests to these old servers are done. How do I check this? Also, is this the best way to reload backend servers with the free version of nginx?
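On the checking part: `nginx -s reload` starts new workers and lets the old ones drain their in-flight connections; while draining, a worker's process title changes to "nginx: worker process is shutting down". A rough polling sketch (not from the thread; assumes a Linux-style ps):

```shell
#!/bin/sh
# Wait until no nginx worker is still draining connections from before the
# reload. The [w] in the pattern stops grep from matching itself.
while ps -eo args | grep -q '[w]orker process is shutting down'; do
    echo "old nginx workers still draining, waiting..."
    sleep 1
done
echo "no draining workers left; safe to stop the old backends"
```

Once the loop exits, every request that was routed to the :9000 upstreams has completed, so the old servers can be stopped.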
