I've installed Nginx on one of my servers to use it as a load balancer for my Rancher application.
I based my configuration on the one found here: https://rancher.com/docs/rancher/v2.x/en/installation/ha/create-nodes-lb/nginx/
And so my config is:
load_module /usr/lib/nginx/modules/ngx_stream_module.so;

worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

stream {
    upstream rancher_servers_http {
        least_conn;
        server <ipnode1>:80 max_fails=3 fail_timeout=5s;
        server <ipnode2>:80 max_fails=3 fail_timeout=5s;
        server <ipnode3>:80 max_fails=3 fail_timeout=5s;
    }

    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }

    upstream rancher_servers_https {
        least_conn;
        server <ipnode1>:443 max_fails=3 fail_timeout=5s;
        server <ipnode2>:443 max_fails=3 fail_timeout=5s;
        server <ipnode3>:443 max_fails=3 fail_timeout=5s;
    }

    server {
        listen 443;
        proxy_pass rancher_servers_https;
    }
}
My configuration is working as expected, but I've recently installed Nextcloud on my cluster, which is giving me the following errors:
Your web server is not properly set up to resolve “/.well-known/caldav”. Further information can be found in the documentation.
Your web server is not properly set up to resolve “/.well-known/carddav”. Further information can be found in the documentation.
So I would like to add a "location" directive, but I'm not able to do it.
I tried to update my config as follows:
...
stream {
    upstream rancher_servers_http {
        ...
    }

    server {
        listen 80;
        proxy_pass rancher_servers_http;

        location /.well-known/carddav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
    }

    upstream rancher_servers_https {
        ...
    }

    server {
        listen 443;
        proxy_pass rancher_servers_https;

        location /.well-known/carddav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
    }
}
But it's telling me
"location" directive is not allowed here in /etc/nginx/nginx.conf:21
Assuming the location directive is not allowed in a stream configuration, I tried to add an http block like this:
...
stream {
    ...
}

http {
    server {
        listen 443;

        location /.well-known/carddav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
    }

    server {
        listen 80;

        location /.well-known/carddav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
    }
}
But then I got this message:
bind() to 0.0.0.0:443 failed (98: Address already in use)
(and the same for port 80).
Can someone help me with this? How can I add the location directive without affecting my current configuration?
Thank you for reading.
Edit
Well, it seems that the stream directive prevents me from adding other standard directives. I tried to add client_max_body_size inside server, but I'm getting the same issue:
directive is not allowed here
Right now your setup uses nginx as a TCP proxy. Such a configuration passes traffic through without analysing it - it can be ssh, rdp, or anything else, and it will work regardless of the protocol, because nginx does not inspect the stream content.
That is why the location directive does not work in the stream context - it is an HTTP-level feature.
To take advantage of higher-level protocol analysis, nginx needs to be aware of the protocol passing through it, i.e. it has to be configured as an HTTP reverse proxy.
For that to work, the server blocks should be placed in the http scope instead of the stream scope.
http {
    server {
        listen 0.0.0.0:443 ssl;
        include /etc/nginx/snippets/letsencrypt.conf;
        root /var/www/html;
        server_name XXXX;

        location / {
            proxy_pass http://rancher_servers_http;
        }

        # Nextcloud service discovery redirects, as in your original attempt
        location /.well-known/carddav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
    }

    server {
        listen 80 default_server;
        listen [::]:80 default_server;

        location ^~ /.well-known/acme-challenge/ {
            default_type "text/plain";
            root /var/www/letsencrypt;
        }

        root /var/www/html;
        server_name xxxx;

        location / {
            proxy_pass http://rancher_servers_http;
        }
    }
}
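Note that for the proxy_pass http://rancher_servers_http lines above to resolve, the upstream group from your stream configuration also has to be declared inside the http context. A minimal sketch, reusing the node addresses from your question:

http {
    upstream rancher_servers_http {
        least_conn;
        server <ipnode1>:80 max_fails=3 fail_timeout=5s;
        server <ipnode2>:80 max_fails=3 fail_timeout=5s;
        server <ipnode3>:80 max_fails=3 fail_timeout=5s;
    }

    # ... the two server blocks shown above ...
}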
The drawback of this approach for you is that you would need to reconfigure your certificate management.
But you would offload SSL termination to nginx and gain intelligent balancing based on the HTTP requests.
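For completeness, the letsencrypt.conf snippet included above would hold the certificate directives. A hypothetical example (the paths and domain are assumptions and depend on how the certificate is issued):

# /etc/nginx/snippets/letsencrypt.conf (hypothetical paths)
ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_protocols       TLSv1.2 TLSv1.3;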
Related
I have a very simple load balancing configuration, set up for PoC purposes. My app server1 and my load balancer are the same machine. Below is my load balancer conf file content. Please help me - is this correct?
At the moment all my requests go to IP1. I expect it to route traffic to IP2 as well whenever I hit IP1 - please correct me if this understanding is wrong.
upstream myapp1 {
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}

server {
    listen 80;

    location / {
        proxy_pass http://myapp1;
    }
}
Your configuration is correct. Sending multiple requests to your NGINX proxy on port 80 will load-balance the traffic to one of your backend (upstream) servers using the default LB algorithm, round-robin.
Check this out:
https://www.nginx.com/resources/wiki/start/topics/examples/loadbalanceexample/
http {
    upstream myproject {
        server 127.0.0.1:8080;
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
    }

    server {
        listen 80;
        server_name www.domain.com;

        location / {
            proxy_pass http://myproject;
        }
    }
}
You can try this from any Linux command line:
for ((i=1;i<=10;i++)); do curl -v "http://localhost"; sleep 1; done
This should print AppServer1, AppServer2, AppServer3 and start again from 1.
A demo backend could look like this:
server {
    listen 8080;
    location / {
        return 200 "AppServer1\n";
    }
}

server {
    listen 8081;
    location / {
        return 200 "AppServer2\n";
    }
}

server {
    listen 8082;
    location / {
        return 200 "AppServer3\n";
    }
}
I have just tested this in a fresh nginx Docker container without any problem.
We are trying to figure out how to add the chosen upstream server to the response headers.
For now we use $upstream_addr to get the IP address and port, and it works, but is there a way to get the server hostname instead (just as declared in the 'upstream' block)?
Here is our (simplified) nginx configuration:
upstream my_upstream {
    ip_hash;
    server production001 max_fails=2 fail_timeout=15s;
    server production002 max_fails=2 fail_timeout=15s;
}

server {
    listen 80;
    server_name domain.com;

    location / {
        proxy_pass http://my_upstream;
        add_header X-Upstream $upstream_addr always;
    }
}
This produces the following header in the response: "x-upstream: XX.XX.XX.XX:XXXX"
What we would like to get is: "x-upstream: production001"
If you know the IP addresses of the upstream servers, then you could use a map:
upstream my_upstream {
    ip_hash;
    server prod1:80;
    server prod2:80;
    server prod3:80;
}

# map the address that answered to a friendly name
map $upstream_addr $upstream_name {
    ~192\.168\.1\.1:80$ production1;
    ~192\.168\.1\.2:80$ production2;
    ~192\.168\.1\.3:80$ production3;
    default $upstream_addr;
}

server {
    listen 80;
    server_name domain.com;

    location / {
        proxy_pass http://my_upstream;
        add_header X-Upstream $upstream_name always;
    }
}
You need the regex in the map because when NGINX tries more than one server, $upstream_addr contains a list of addresses, and the last one in the list is the server that actually responded.
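For illustration (hypothetical addresses and a hypothetical retry): if the first server fails and NGINX retries the next one, the raw variable contains both addresses separated by a comma, e.g.

    $upstream_addr = "192.168.1.1:80, 192.168.1.2:80"

Anchoring each pattern on the end of the string with $ therefore makes the map resolve to the server that actually produced the response - production2 in this example.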
I'm a bit new to using nginx, so I'm likely missing something obvious. I'm trying to create an nginx server that will reverse proxy to a set of web servers that use https.
I've been able to get it to work with one server like this:
server {
    listen $PORT;
    server_name <nginx server>.herokuapp.com;

    location / {
        proxy_pass https://<server1>.herokuapp.com;
    }
}
However, as soon as I try to add in the 'upstream' configuration element, it no longer works.
upstream backend {
    server <server1>.herokuapp.com;
}

server {
    listen $PORT;
    server_name <nginx server>.herokuapp.com;

    location / {
        proxy_pass https://backend;
    }
}
I've tried adding port 443, but that also fails.
upstream backend {
    server <server1>.herokuapp.com:443;
}

server {
    listen $PORT;
    server_name <nginx server>.herokuapp.com;

    location / {
        proxy_pass https://backend;
    }
}
Any ideas what I'm doing wrong here?
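For what it's worth, a hedged sketch of the adjustments that often matter when the upstream is an HTTPS backend behind a shared router such as Heroku (whether this applies in your case is an assumption; the placeholders are the ones from the question):

upstream backend {
    server <server1>.herokuapp.com:443;
}

server {
    listen $PORT;
    server_name <nginx server>.herokuapp.com;

    location / {
        proxy_pass https://backend;
        # Heroku routes on the Host header and on SNI, so both usually need to
        # name the backend app rather than the upstream group
        proxy_set_header Host <server1>.herokuapp.com;
        proxy_ssl_server_name on;
        proxy_ssl_name <server1>.herokuapp.com;
    }
}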
I have 2 node servers and I use another server as a load balancer in front of them.
My config:
worker_processes 40;

events {
    worker_connections 2000;
}

http {
    upstream backend {
        server 192.168.1.44:80;
    }

    server {
        listen *:80;
        server_name 5.9.XX.XX;

        location / {
            proxy_pass http://backend;
        }
    }
}
My problem is that when I try it, I don't get any data, but when I use:
proxy_pass http://192.168.1.44:80;
it works fine.
I am confused. Where is the problem?
It's fixed when I use port 8080 for the upstream - that works, but then I have a problem where some URLs return 404 because they redirect to port 8080 instead of 80.
Is there a way in Nginx to return a different page 20% of the time for a given URL and User-Agent header (for A/B testing purposes)?
You should check the following module:
http://nginx.org/en/docs/http/ngx_http_split_clients_module.html
It was created exactly for A/B testing.
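For reference, a minimal sketch of how split_clients could cover the 20% case, hashing on the client address and User-Agent header (the location, variant file names and document root here are assumptions):

http {
    # 20% of clients consistently get the B variant, the rest get A
    split_clients "${remote_addr}${http_user_agent}" $ab_variant {
        20% "/variant_b.html";   # assumed page name
        *   "/variant_a.html";   # assumed page name
    }

    server {
        listen 80;
        root /var/www/ab;        # assumed document root

        location = /landing {
            # serve whichever variant this client was assigned
            try_files $ab_variant =404;
        }
    }
}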
Using the load-balancing feature with weights (weight=4 against the default weight of 1 gives an 80/20 split):
http {
    upstream myproject {
        server 127.0.0.1:8000 weight=4;
        server 127.0.0.1:8001;
    }

    server {
        listen 80;
        server_name www.domain.com;

        location / {
            proxy_pass http://myproject;
        }
    }

    server {
        listen 8000;
        location / {
            root /var/www/A;
        }
    }

    server {
        listen 8001;
        location / {
            root /var/www/B;
        }
    }
}
Not so pretty, but maybe works :)
You can use the split_clients module with a configuration like this:
http {
    upstream myproject1 {
        server 127.0.0.1:8000;
    }

    upstream myproject2 {
        server 127.0.0.1:8001;
    }

    split_clients $remote_addr $upstream {
        25% myproject2;
        *   myproject1;
    }

    server {
        listen 80;
        server_name www.domain.com;

        location / {
            proxy_pass http://$upstream;
        }
    }
}