I am trying to use an nginx proxy in front of 2 different servers:
example.com, example1.com >> nginx 10.0.0.1 >>>> 10.0.0.2, 10.0.0.3
stream {
    server {
        listen 1935;
        proxy_pass 10.0.0.2:1936;
        proxy_protocol on;
    }
    server {
        listen 1935;
        proxy_pass 10.0.0.3:1936;
        proxy_protocol on;
    }
}
I have checked the TCP load balancing guide, but I could not find how to make it work.
Although there is no server_name in the TCP/UDP protocol, you can forward the traffic to different upstreams based on $server_addr. My example is here: https://stackoverflow.com/a/44821204/5085270
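For instance, if the nginx host has a separate local IP address per domain, a minimal sketch could look like the following (the listen addresses 10.0.0.11 and 10.0.0.12 are hypothetical; map in the stream context requires nginx 1.11.2+):

```nginx
stream {
    # hypothetical: each domain's DNS record points at a different local IP
    map $server_addr $rtmp_backend {
        10.0.0.11  10.0.0.2:1936;  # example.com
        10.0.0.12  10.0.0.3:1936;  # example1.com
    }
    server {
        listen 1935;
        proxy_pass $rtmp_backend;
    }
}
```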
According to the examples in the TCP load balancing page of nginx, try this example:
stream {
    upstream rtmp_servers {
        least_conn;
        server 10.0.0.2:1935;
        server 10.0.0.3:1935;
    }
    server {
        listen 1935;
        proxy_pass rtmp_servers;
    }
}
P.S. Put it outside of the http {} block: edit /etc/nginx/nginx.conf and add it after the closing } (at the end of the file).
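The resulting file layout would roughly look like this (a sketch; your existing http {} contents stay as they are):

```nginx
# /etc/nginx/nginx.conf
events { }

http {
    # existing http configuration stays here
}

# the stream {} block is a sibling of http {}, not nested inside it
stream {
    upstream rtmp_servers {
        least_conn;
        server 10.0.0.2:1935;
        server 10.0.0.3:1935;
    }
    server {
        listen 1935;
        proxy_pass rtmp_servers;
    }
}
```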
I don't think it's possible to do this using nginx. However, it can be done easily with HAProxy, which can pass encrypted traffic through based on the SNI (Server Name Indication), an extension of the TLS protocol.
./haproxy/haproxy.cfg

defaults
    maxconn 1000
    mode http
    log global
    option dontlognull
    timeout http-request 5s
    timeout connect 5000
    timeout client 2000000   # ddos protection
    timeout server 2000000   # stick-table type ip size 100k expire 30s store conn_cur

frontend https
    bind *:443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend app1-servers if { req.ssl_sni -i example1.com }   # <--- specify domain name here
    use_backend app2-servers if { req.ssl_sni -i example2.com }

backend app1-servers
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server server1 10.0.0.2:443   # <--- specify IP here

backend app2-servers
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server server1 10.0.0.3:443
We forward TCP to a back-end Docker Swarm cluster using the simple configuration below in haproxy.cfg:
global
    log 127.0.0.1 local0 debug

defaults
    log global

listen l1
    bind 0.0.0.0:443
    mode tcp
    timeout connect 4000
    timeout client 180000
    timeout server 180000
    server swarm_node1 x.x.1.167:443
    server swarm_node2 x.x.1.168:443
    server swarm_node3 x.x.1.169:443
Use the server_name directive to determine which server block is used for a given request.
server {
    listen 1935;
    server_name example.com;

    location / {
        proxy_pass http://10.0.0.2:1936;
        # the usual proxy_* stuff
    }
}

server {
    listen 1935;
    server_name example1.com;

    location / {
        proxy_pass http://10.0.0.3:1936;
        # the usual proxy_* stuff
    }
}
Source: http://nginx.org/en/docs/http/server_names.html
I use Nginx (not Nginx Plus) and FluentBit in one scenario.
Requests are sent to a UDP port on Nginx, and Nginx forwards them to FluentBit round-robin, so all requests are proxied to the fluentbit_upstreams servers.
FluentBit does not return anything by default, so Nginx cannot notice that any of the servers are down.
I used fail_timeout and max_fails, but they didn't help.
upstream fluentbit_upstreams {
    server fluentBitA.dev:5555 weight=1 fail_timeout=30s max_fails=1;
    server fluentBitB.dev:6666 weight=1 fail_timeout=30s max_fails=1;
}

server {
    listen 13149 udp;
    proxy_pass fluentbit_upstreams;
    proxy_responses 1;
    error_log /var/log/nginx/udp.log;
}
How can this problem be solved? How can Nginx notice that one of the servers is down?
Environment: Nginx 1.14.0 (see the dockerfile for more details).
To limit the number of concurrent connections for a specific location in a server, one can use two methods: limit_conn (the third example there covers all IPs) and upstream max_conns.
Is there a difference in the way the two methods work?
Can someone explain, or point to an explanation?
example of limiting using upstream max_conns:
http {
    upstream foo {
        zone upstream_foo 32m;
        server some-ip:8080 max_conns=100;
    }

    server {
        listen 80;
        server_name localhost;

        location /some_path {
            proxy_pass http://foo/some_path;
            return 429;
        }
    }
}
limiting using limit_conn:
http {
    limit_conn_zone $server_name zone=perserver:32m;

    server {
        listen 80;
        server_name localhost;

        location /some_path {
            proxy_pass http://some-ip:8080/some_path;
            limit_conn perserver 100;
            limit_conn_status 429;
        }
    }
}
upstream max_conns limits the number of connections from the nginx server to an upstream server; it is there to make sure backend servers do not get overloaded. Say you have an upstream of 5 servers that nginx can send requests to, and one of them is underpowered: you limit the total number of connections to it to keep from overloading it.
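A sketch of that situation (the addresses and the max_conns value are made up):

```nginx
upstream backend {
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
    server 10.0.0.4:8080 max_conns=50;  # underpowered box: cap its connections
}
```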
limit_conn limits the number of connections from a client to the nginx server, and is there to limit abuse. For example, you can say that for a given location an IP can only have 10 open connections before maxing out.
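A per-IP sketch of that (zone name and size are arbitrary, and http://backend is assumed to be defined elsewhere):

```nginx
http {
    # key on the client address; 10m of shared memory for the counters
    limit_conn_zone $binary_remote_addr zone=peraddr:10m;

    server {
        location /api/ {
            limit_conn peraddr 10;   # at most 10 open connections per client IP
            proxy_pass http://backend;
        }
    }
}
```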
Also note that, if the max_conns limit has been reached, the request can be placed in a queue for further processing, provided that the queue directive (NGINX Plus only) is also included to set the maximum number of requests that can be held in the queue simultaneously:
upstream backend {
    server backend1.example.com max_conns=3;
    server backend2.example.com;
    queue 100 timeout=70;
}
If the queue fills up with requests, or the upstream server cannot be selected within the time set by the optional timeout parameter, or the queue parameter is omitted, the client receives an error (502).
I am attempting to put a Load Balancer in front of a Turn Server for use with WebRTC. I am using one turn server in my examples below until I get the load balancer working. The turn server requires multiple ports, including one UDP port, as listed below:
TCP 80
TCP 443
TCP 3478
TCP 3479
UDP 3478
I have attempted to place an Amazon Elastic Load Balancer (AWS ELB) in front of the Turn Server, but it does not support the UDP port. So I am now running Ubuntu on an EC2 Instance with all these ports open and I have installed NGINX.
I've edited the /etc/nginx/nginx.conf file and added a "stream" section to it, with an upstream and a server for each port. However, it does not appear to be passing the traffic correctly.
stream {
    # IPv4 Section
    upstream turn_tcp_3478 {
        server 192.168.1.100:3478;
    }
    upstream turn_tcp_3479 {
        server 192.168.1.100:3479;
    }
    upstream turn_udp_3478 {
        server 192.168.1.100:3478;
    }

    # IPv6 Section
    upstream turn_tcp_ipv6_3478 {
        server [2600:myaw:esom:e:ipv6:addr:eswo:ooot]:3478;
    }
    upstream turn_tcp_ipv6_3479 {
        server [2600:myaw:esom:e:ipv6:addr:eswo:ooot]:3479;
    }
    upstream turn_udp_ipv6_3478 {
        server [2600:myaw:esom:e:ipv6:addr:eswo:ooot]:3478;
    }

    server {
        listen 3478;   # tcp
        proxy_pass turn_tcp_3478;
    }
    server {
        listen 3479;   # tcp
        proxy_pass turn_tcp_3479;
    }
    server {
        listen 3478 udp;
        proxy_pass turn_udp_3478;
    }
    server {
        listen [::]:3478;
        proxy_pass turn_tcp_ipv6_3478;
    }
    server {
        listen [::]:3479;
        proxy_pass turn_tcp_ipv6_3479;
    }
    server {
        listen [::]:3478 udp;
        proxy_pass turn_udp_ipv6_3478;
    }
}
I have also created a custom load balancer configuration file at /etc/nginx/conf.d/load-balancer.conf and placed the following in it.
upstream turn_http {
    server 192.168.1.100;
}
upstream turn_https {
    server 192.168.1.100:443;
}
upstream turn_status {
    server 192.168.1.100:8080;
}
upstream turn_ipv6_http {
    server [2600:myaw:esom:e:ipv6:addr:eswo:ooot]:80;
}
upstream turn_ipv6_https {
    server [2600:myaw:esom:e:ipv6:addr:eswo:ooot]:443;
}

server {
    listen 80;
    location / {
        proxy_pass http://turn_http;
    }
}
server {
    listen 443 ssl;
    server_name turn.awesomedomain.com;
    ssl_certificate /etc/ssl/private/nginx.ca-bundle;
    ssl_certificate_key /etc/ssl/private/nginx.key;
    location / {
        proxy_pass https://turn_https;
    }
}
server {
    listen 8080;
    location / {
        proxy_pass http://turn_status;
    }
}
server {
    listen [::]:80;
    location / {
        proxy_pass http://turn_ipv6_http;
    }
}
server {
    listen [::]:443 ssl;
    server_name turn.awesomedomain.com;
    ssl_certificate /etc/ssl/private/nginx.ca-bundle;
    ssl_certificate_key /etc/ssl/private/nginx.key;
    location / {
        proxy_pass https://turn_ipv6_https;
    }
}
The HTTP and HTTPS traffic appear to be working fine based on the custom load-balancer.conf file.
I am unsure why the TCP/UDP ports I have configured in the nginx.conf file are not working as intended.
Your configuration of the NGINX Load Balancer is fine.
I suggest verifying the following:
The security groups in your Amazon EC2 Turn Server instance should have matching inbound ports with your Load Balancer configuration.
Check the configuration files on your turn server and verify that the ports it is listening to are the same ports as you are forwarding on your load balancer. For example, you have TCP 3479 being forwarded on your NGINX config. You need to make sure that the turn server is listening to that port.
Lastly, you may also need to set up iptables rules similar to those on your Turn Server. Review your Turn Server's configuration and see if you need any iptables or ip6tables configuration on the Load Balancer.
Take a look at this config method link
I'm trying to create an NGinx configuration in which NGinx receives the proxy protocol header on the incoming connection and passes it on to the outgoing connection, essentially to propagate the real IP address to the final target. I'm using the following configuration:
stream {
    upstream some_backend {
        server some_host:8090;
    }

    server {
        listen 8090 proxy_protocol;
        proxy_pass some_backend;
        proxy_protocol on;
    }
}
However, the proxy protocol header I receive on 'some_backend' contains NGinx's IP address, not the source IP address.
Is something wrong with the configuration I'm using?
Can this be done at all?
Oops, I did it again...
It seems that the missing part was adding the set_real_ip_from directive with the IP range you use to access NGinx:
http://nginx.org/en/docs/stream/ngx_stream_realip_module.html
stream {
    upstream some_backend {
        server some_host:8090;
    }

    server {
        listen 8090 proxy_protocol;
        proxy_pass some_backend;
        proxy_protocol on;
        set_real_ip_from 172.17.0.0/24;
    }
}
I'm having trouble figuring out load balancing on Nginx. I'm using:
- Ubuntu 16.04 and
- Nginx 1.10.0.
In short, when I pass my IP address directly into proxy_pass, the proxy works:
server {
    location / {
        proxy_pass http://01.02.03.04;
    }
}
When I visit my proxy computer, I can see the content from the proxied IP...
but when I use an upstream directive, it doesn't work:
upstream backend {
    server 01.02.03.04;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
When I visit my proxy computer, I am greeted with the default Nginx server page and not the content from the upstream ip address.
Any further assistance would be appreciated. I've done a ton of research but can't figure out why "upstream" is not working. I don't get any errors; it just doesn't proxy.
Okay, looks like I found the answer...
Two things about the backend servers, at least for the above scenario when using IP addresses:
a port must be specified
the port cannot be :80 (according to #karliwsn, the port can be 80; it's just that the upstream servers cannot listen on the same port as the reverse proxy. I haven't tested it yet, but it's good to note).
The backend server block(s) should be configured as follows:

server {
    # for your reverse_proxy, *do not* listen to port 80
    listen 8080;
    listen [::]:8080;
    server_name 01.02.03.04;
    # your other statements below
    ...
}
and your reverse proxy server block should be configured like below:

upstream backend {
    server 01.02.03.04:8080;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
It looks as if, when a backend server listens on :80, the reverse proxy doesn't render its content. I guess that makes sense, since the server is in fact using the default port 80 for the general public.
Thanks #karliwson for nudging me to reconsider the port.
The following example works.
The only thing to mention: if the server IP is used as the "server_name", then the IP should be used to access the site, i.e. in the browser you type the URL as http://yyy.yyy.yyy.yyy (or http://yyy.yyy.yyy.yyy:80); if you use a domain name as the "server_name", then access the proxy server using the domain name (e.g. http://www.yourdomain.com).
upstream backend {
    server xxx.xxx.xxx.xxx:8080;
}

server {
    listen 80;
    server_name yyy.yyy.yyy.yyy;

    location / {
        proxy_pass http://backend;
    }
}