NGINX Load Balancing a Turn Server

I am attempting to put a load balancer in front of a Turn Server for use with WebRTC. I am using one Turn Server in my examples below until I get the load balancer working. The Turn Server requires multiple ports, including one UDP port, as listed below:
TCP 80
TCP 443
TCP 3478
TCP 3479
UDP 3478
I have attempted to place an Amazon Elastic Load Balancer (AWS ELB) in front of the Turn Server, but it does not support the UDP port. So I am now running Ubuntu on an EC2 Instance with all these ports open and I have installed NGINX.
I've edited the /etc/nginx/nginx.conf file and added a "stream" section to it with both upstream and servers for each port. However, it does not appear to be passing the traffic correctly.
stream {
# IPv4 Section
upstream turn_tcp_3478 {
server 192.168.1.100:3478;
}
upstream turn_tcp_3479 {
server 192.168.1.100:3479;
}
upstream turn_udp_3478 {
server 192.168.1.100:3478;
}
# IPv6 Section
upstream turn_tcp_ipv6_3478 {
server [2600:myaw:esom:e:ipv6:addr:eswo:ooot]:3478;
}
upstream turn_tcp_ipv6_3479 {
server [2600:myaw:esom:e:ipv6:addr:eswo:ooot]:3479;
}
upstream turn_udp_ipv6_3478 {
server [2600:myaw:esom:e:ipv6:addr:eswo:ooot]:3478;
}
server {
listen 3478; # tcp
proxy_pass turn_tcp_3478;
}
server {
listen 3479; # tcp
proxy_pass turn_tcp_3479;
}
server {
listen 3478 udp;
proxy_pass turn_udp_3478;
}
server {
listen [::]:3478;
proxy_pass turn_tcp_ipv6_3478;
}
server {
listen [::]:3479;
proxy_pass turn_tcp_ipv6_3479;
}
server {
listen [::]:3478 udp;
proxy_pass turn_udp_ipv6_3478;
}
}
I have also created a custom load balancer configuration file at /etc/nginx/conf.d/load-balancer.conf and placed the following in it.
upstream turn_http {
server 192.168.1.100;
}
upstream turn_https {
server 192.168.1.100:443;
}
upstream turn_status {
server 192.168.1.100:8080;
}
upstream turn_ipv6_http {
server [2600:myaw:esom:e:ipv6:addr:eswo:ooot]:80;
}
upstream turn_ipv6_https {
server [2600:myaw:esom:e:ipv6:addr:eswo:ooot]:443;
}
server {
listen 80;
location / {
proxy_pass http://turn_http;
}
}
server {
listen 443 ssl;
server_name turn.awesomedomain.com;
ssl_certificate /etc/ssl/private/nginx.ca-bundle;
ssl_certificate_key /etc/ssl/private/nginx.key;
location / {
proxy_pass https://turn_https;
}
}
server {
listen 8080;
location / {
proxy_pass http://turn_status;
}
}
server {
listen [::]:80;
location / {
proxy_pass http://turn_ipv6_http;
}
}
server {
listen [::]:443 ssl;
server_name turn.awesomedomain.com;
ssl_certificate /etc/ssl/private/nginx.ca-bundle;
ssl_certificate_key /etc/ssl/private/nginx.key;
location / {
proxy_pass https://turn_ipv6_https;
}
}
The http and https traffic appear to be working fine based on the custom load-balancer.conf file.
I am unsure why the TCP/UDP ports I have configured in the nginx.conf file are not working as intended.

Your configuration of the NGINX Load Balancer is fine.
I suggest verifying the following:
The security groups on your Amazon EC2 Turn Server instance should have inbound rules matching the ports in your load balancer configuration.
Check the configuration files on your Turn Server and verify that the ports it listens on match the ports you are forwarding on your load balancer. For example, your NGINX config forwards TCP 3479; make sure the Turn Server is actually listening on that port.
Lastly, you may also need iptables rules similar to what you have on your Turn Server. Review the Turn Server's configuration and see whether any iptables or ip6tables rules are needed on the load balancer as well.
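For example, if the Turn Server is coturn, its /etc/turnserver.conf should expose the same ports the stream block forwards. This is an illustrative sketch; the values are assumptions to adapt to your setup:

```ini
# /etc/turnserver.conf (coturn) -- illustrative values
listening-port=3478          # matches "listen 3478" on the load balancer
alt-listening-port=3479      # matches "listen 3479"
listening-ip=192.168.1.100   # the address the nginx upstreams point at
```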


Related

Secure access redirection

I would like to use nginx to serve files from
https://example.com/documents/*
but to redirect all other requests to localhost:4443. I need to use stream to pass the client certificate through to the listening application on 4443, since otherwise nginx conducts the TLS handshake itself and it fails.
I have tried this, but nginx claims (of course) that the 443 port is in use:
http {
server {
listen 443;
location /documents/ {
sendfile on;
}
}
}
stream {
server {
listen 443;
proxy_pass localhost:4443;
}
}
I cannot use another port, it should be the same port.
How can I make this work?
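nginx cannot see the request path before the TLS handshake completes, so a pure stream pass-through cannot split traffic on /documents/. One workaround (a sketch, and it assumes the application on 4443 can be changed to read the client certificate from a header instead of from the handshake) is to terminate TLS once in nginx and forward the certificate explicitly:

```nginx
server {
    listen 443 ssl;
    ssl_certificate        /etc/ssl/example.com.crt;   # hypothetical paths
    ssl_certificate_key    /etc/ssl/example.com.key;
    ssl_client_certificate /etc/ssl/client-ca.crt;
    ssl_verify_client optional;

    location /documents/ {
        root /var/www;  # serves /var/www/documents/*
    }
    location / {
        proxy_pass https://localhost:4443;
        # PEM client certificate, URL-escaped (available since nginx 1.13.5)
        proxy_set_header X-Client-Cert $ssl_client_escaped_cert;
    }
}
```

If the backend absolutely must perform the TLS handshake itself, the remaining option is to route by SNI (i.e. different hostnames) with ngx_stream_ssl_preread rather than by path.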

Nginx Reverse Proxy upstream not working

I'm having trouble figuring out load balancing on Nginx. I'm using:
- Ubuntu 16.04 and
- Nginx 1.10.0.
In short, when I pass my ip address directly into "proxy_pass", the proxy works:
server {
location / {
proxy_pass http://01.02.03.04;
}
}
When I visit my proxy computer, I can see the content from the proxy ip...
but when I use an upstream directive, it doesn't:
upstream backend {
server 01.02.03.04;
}
server {
location / {
proxy_pass http://backend;
}
}
When I visit my proxy computer, I am greeted with the default Nginx server page and not the content from the upstream ip address.
Any further assistance would be appreciated. I've done a ton of research but can't figure out why "upstream" is not working. I don't get any errors. It just doesn't proxy.
Okay, looks like I found the answer. Two things about the backend servers, at least for the above scenario when using IP addresses:
- a port must be specified
- the port cannot be :80 (according to #karliwsn the port can be 80, it's just that the upstream servers cannot listen on the same port as the reverse proxy; I haven't tested it yet, but it's good to note)
backend server block(s) should be configured as following:
server {
# for your reverse_proxy, *do not* listen to port 80
listen 8080;
listen [::]:8080;
server_name 01.02.03.04;
# your other statements below
...
}
and your reverse proxy server block should be configured like below:
upstream backend {
server 01.02.03.04:8080;
}
server {
location / {
proxy_pass http://backend;
}
}
It looks as if, when a backend server is listening on :80, the reverse proxy server doesn't render its content. I guess that makes sense, since the server is in fact using the default port 80 for the general public.
Thanks #karliwson for nudging me to reconsider the port.
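A likely explanation (my assumption, not confirmed in this thread): with proxy_pass http://backend;, nginx sends Host: backend upstream by default (the upstream block's name), which does not match the backend's server_name, so the backend answers with its default server block. Forwarding the original Host header avoids having to move the backend off port 80:

```nginx
upstream backend {
    server 01.02.03.04;  # port 80 is fine once Host matches
}
server {
    listen 80;
    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;  # pass the client's Host instead of "backend"
    }
}
```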
The following example works. One thing to mention: if the server IP is used as the server_name, then the IP should be used to access the site, i.e. in the browser you type http://yyy.yyy.yyy.yyy (or http://yyy.yyy.yyy.yyy:80); if you use a domain name as the server_name, then access the proxy server using that domain name (e.g. http://www.yourdomain.com):
upstream backend {
server xxx.xxx.xxx.xxx:8080;
}
server {
listen 80;
server_name yyy.yyy.yyy.yyy;
location / {
proxy_pass http://backend;
}
}

Nginx TCP forwarding based on domain name

I am trying to use an nginx proxy in front of 2 different servers:
example.com , example1.com >> nginx 10.0.0.1 >>>> 10.0.0.2 , 10.0.0.3
stream {
server {
listen 1935;
proxy_pass 10.0.0.2:1936;
proxy_protocol on;
}
server {
listen 1935;
proxy_pass 10.0.0.3:1936;
proxy_protocol on;
}
}
I have checked the TCP load balancing guide, but I could not find how to make it work.
Although there is no server_name in TCP/UDP protocol, you can forward the traffic to different upstream based on $server_addr. My example is here: https://stackoverflow.com/a/44821204/5085270
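A minimal sketch of that idea, assuming the load balancer has one local address per domain (10.0.0.11 and 10.0.0.12 here are hypothetical, with each domain's DNS pointing at one of them):

```nginx
stream {
    # route by the local address the client connected to
    map $server_addr $rtmp_backend {
        10.0.0.11 10.0.0.2:1936;
        10.0.0.12 10.0.0.3:1936;
    }
    server {
        listen 1935;
        proxy_pass $rtmp_backend;
    }
}
```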
Following the examples on nginx's TCP load balancing page, try this:
stream {
upstream rtmp_servers {
least_conn;
server 10.0.0.2:1935;
server 10.0.0.3:1935;
}
server {
listen 1935;
proxy_pass rtmp_servers;
}
}
P.S. Put it outside of the http {} block: edit /etc/nginx/nginx.conf and add it after the closing } (at the end of the file).
I don't think it's possible to do this using nginx. However, this can be done easily with HAProxy. HAProxy can pass through encrypted traffic based on the SNI (Server Name Indication), which is an extension of the TLS protocol.
./haproxy/haproxy.cfg
defaults
maxconn 1000
mode http
log global
option dontlognull
timeout http-request 5s
timeout connect 5000
timeout client 2000000 # ddos protection
timeout server 2000000 # stick-table type ip size 100k expire 30s store conn_cur
frontend https
bind *:443
mode tcp
option tcplog
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
use_backend app1-servers if { req.ssl_sni -i example1.com } # <--- specify domain name here
use_backend app2-servers if { req.ssl_sni -i example2.com }
backend app1-servers
mode tcp
balance roundrobin
option ssl-hello-chk
server server1 10.0.0.2:443 # <--- specify IP here
backend app2-servers
mode tcp
balance roundrobin
option ssl-hello-chk
server server1 10.0.0.3:443
We are forwarding TCP to a back-end Docker Swarm cluster with HAProxy, using the simple configuration below in haproxy.cfg:
global
log 127.0.0.1 local0 debug
defaults
log global
listen l1
bind 0.0.0.0:443
mode tcp
timeout connect 4000
timeout client 180000
timeout server 180000
server swarm_node1 x.x.1.167:443
server swarm_node2 x.x.1.168:443
server swarm_node3 x.x.1.169:443
Use the server_name directive to determine which server block is used for a given request.
server {
listen 1935;
server_name example.com;
location / {
proxy_pass http://10.0.0.1:1936;
# the usual proxy_* stuff
}
}
server {
listen 1935;
server_name example1.com;
location / {
proxy_pass http://10.0.0.2:1936;
# the usual proxy_* stuff
}
}
Source: http://nginx.org/en/docs/http/server_names.html

Nginx load balancer keeps changing original URL to load-balanced URL

I have run into an annoying issue with the Nginx load balancer; please see the following configuration:
http {
server {
listen 3333;
server_name localhost;
location / {
proxy_pass http://node;
proxy_redirect off;
}
}
server {
listen 7777;
server_name localhost;
location / {
proxy_pass http://auth;
proxy_redirect off;
}
}
upstream node {
server localhost:3000;
server localhost:3001;
}
upstream auth {
server localhost:8079;
server localhost:8080;
}
}
So what I want is two load balancers: one sends port 3333 to internal ports 3000 and 3001, and the second sends requests on 7777 to internal ports 8079 and 8080.
When I test this setting, I noticed all the requests to http://localhost:3333 work great, and the URL in the address bar always stays the same; but when I visit http://localhost:7777, all the requests are redirected to internal URLs, http://localhost:8080 or http://localhost:8079.
I don't know why there are two different effects for load balancing, I just want to have all the visitors to see only http://localhost:3333 or http://localhost:7777, they should never see internal port 8080 or 8079.
But why do the node servers on ports 3000 and 3001 work fine, while the Java server on ports 8079 and 8080 does not rewrite URLs but only redirects?
If you see the configuration, they are exactly the same.
Thanks.
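One plausible cause (an assumption, since the thread shows no accepted fix): the Java server issues absolute redirects such as Location: http://localhost:8080/..., and proxy_redirect off tells nginx not to rewrite them, so the browser follows the internal URL. Letting nginx map those Location headers back keeps clients on port 7777, e.g.:

```nginx
location / {
    proxy_pass http://auth;
    # rewrite absolute redirects from the backends back to this listener
    proxy_redirect http://localhost:8079/ http://$host:7777/;
    proxy_redirect http://localhost:8080/ http://$host:7777/;
}
```

The node app presumably issues relative redirects, which would explain why port 3333 never leaks its backend ports.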

What does upstream mean in nginx?

upstream app_front_static {
server 192.168.206.105:80;
}
Never seen it before, anyone knows, what it means?
It's used for proxying requests to other servers.
An example from http://wiki.nginx.org/LoadBalanceExample is:
http {
upstream myproject {
server 127.0.0.1:8000 weight=3;
server 127.0.0.1:8001;
server 127.0.0.1:8002;
server 127.0.0.1:8003;
}
server {
listen 80;
server_name www.domain.com;
location / {
proxy_pass http://myproject;
}
}
}
This means all requests for / go to any of the servers listed under the upstream block, with a preference (weight=3) for 127.0.0.1:8000.
upstream defines a cluster that you can proxy requests to. It's commonly used for defining either a web server cluster for load balancing, or an app server cluster for routing / load balancing.
If we have a single server we can directly include it in the proxy_pass directive. For example:
server {
...
location / {
proxy_pass http://192.168.206.105:80;
...
}
}
But if we have many servers, we use upstream to maintain the list of servers; nginx will load-balance the incoming traffic across them, as shown in the answer above.
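For illustration, upstream accepts balancing parameters beyond a bare address (the addresses and values below are hypothetical):

```nginx
upstream app_servers {
    least_conn;                          # pick the server with the fewest active connections
    server 192.168.206.105:80 weight=3;  # receives roughly 3x the traffic
    server 192.168.206.106:80;
    server 192.168.206.107:80 backup;    # used only when the others are unavailable
}
```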
