I am trying to use nginx for load balancing. I have to use ip_hash because I work with WebSockets. Following is the configuration:
#user nobody;
worker_processes 3;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;

    upstream my_http_servers {
        ip_hash;
        server 127.0.0.1:3001;
        server 127.0.0.1:3004;
        server 127.0.0.1:3003;
    }

    server {
        listen 3000;
        server_name localhost;

        #charset koi8-r;
        #access_log logs/host.access.log main;

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_pass http://my_http_servers;

            # enable WebSockets
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
All three backend servers and nginx are running locally on machine 1 (IP: 192.168.10.2).
I also have a frontend application that calls this backend; the frontend runs on 192.168.10.2:4200.
When I open http://192.168.10.2:4200 from machine 1, the backend request goes to, say, server 1.
From machine 2 (IP: 192.168.10.23), which is connected to the same Wi-Fi, I open http://192.168.10.2:4200, but the request still goes to server 1.
So ip_hash is not correctly balancing the load, and I am not sure what I am doing wrong. I understand that ip_hash makes connections sticky, so all requests from machine 1 should go to server 1, but shouldn't requests from machine 2 go to some other server?
Edit:
I even tried hash $remote_addr; instead of ip_hash, but all requests still go to the same single server. This is my configuration using hash:
worker_processes 3;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    keepalive_timeout 65;

    upstream my_http_servers {
        hash $remote_addr;
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
        server 127.0.0.1:3003;
    }

    server {
        listen 3000;
        server_name localhost;

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_pass http://my_http_servers;

            # enable WebSockets
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
According to the docs:
The first three octets of the client IPv4 address, or the entire IPv6 address, are used as a hashing key.
So, as an example, all addresses of the form 192.168.1.* will be mapped to the same server. In your case, 192.168.10.2 and 192.168.10.23 share the same first three octets, so ip_hash maps them to the same server.
If your server is running in your office network and both machines you tested are connected to that same network, ip_hash probably won't distribute them: office networks are usually configured so that all devices get IP addresses sharing the same first three octets.
If both machines are on the same office network, they probably also share the same external IP, so from outside they will likewise be mapped to the same server.
Even if the two machines have genuinely different external IPs whose first three octets differ, with three upstream servers there is still roughly a 33% chance that hashing two different IPs sends both to the same server.
But if you use the hash directive instead of ip_hash, you can combine several request variables into the hash key. Example:
hash '$remote_addr $cookie_zzz $http_user_agent';
When you use the client IP address with the hash directive, it is treated as an ordinary variable, so the full address (not just the first three octets) is hashed:
hash '$remote_addr';
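Applied to the question's upstream block, a sketch combining several variables as the hash key might look like this (the cookie name zzz is purely illustrative, as in the example above):

upstream my_http_servers {
    # full client IP plus per-client request data as the key, so even
    # clients sharing the same /24 network produce different hashes
    hash '$remote_addr $cookie_zzz $http_user_agent';
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}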
Related
Within my AWS environment, I have an nginx server whose http > upstream backends configuration points at an AWS Application Load Balancer's DNS hostname. All had been working just fine, but recently the IP address of the AWS ALB changed (although its DNS hostname is immutable), causing my application to fail.
Digging through the nginx log files and checking dig results, it appears that nginx retains the IP address of the backend host rather than resolving it each time a request comes in. Once I restart the nginx service, everything starts working again.
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
}

http {
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    client_max_body_size 50M;

    upstream backends {
        server internal-private-aws-alb-hostname.elb.amazonaws.com:443;
    }

    server {
        listen 443 ssl;
        server_name my.servername.com;

        ssl_certificate_key /path/to/key.pem;
        ssl_certificate /path/to/cert.pem;
        ssl_protocols TLSv1.2;

        location / {
            proxy_pass https://backends;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $host:$server_port;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
Has anyone experienced this behaviour before and, if so, are you aware of any configuration changes that would make nginx more resilient here? I wondered whether there were caching configs I should be focusing on, but other than ssl_session_cache shared:SSL:10m; in the http section, everything else is rather vanilla.
proxy_pass does not resolve DNS on every request; the hostname is only looked up at startup or on configuration reload.
You can use a variable in proxy_pass to force DNS resolution, something like this:
resolver 127.0.0.1;
set $backend "foo.example.com";
proxy_pass http://$backend;
https://forum.nginx.org/read.php?2,215830,215832#msg-215832
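Applied to the configuration above, a sketch might look like this (the resolver address and the valid= window are assumptions for your VPC; note also that once proxy_pass contains a variable, the upstream block is bypassed):

resolver 172.31.0.2 valid=30s;   # assumed VPC resolver; re-resolve at most every 30s

location / {
    set $backend "internal-private-aws-alb-hostname.elb.amazonaws.com";
    proxy_pass https://$backend;
    proxy_set_header Host $host;
}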
Not really. Even if your NGINX host never cached DNS queries, the resolvers between you and the authoritative DNS servers certainly do. What's the TTL of the ALB's A record? (That's a rhetorical question.)
Also, the ALB's IP address shouldn't change throughout its lifetime. What's the ALB's creation timestamp? It sounds like the ALB was inadvertently deleted and recreated. Enable deletion protection to prevent this from happening again.
I am a newbie and I installed JupyterHub with an nginx reverse proxy on my Ubuntu 18.04 server. I built my own root CA and a self-signed certificate with openssl. HTTPS connections work very well when my root CA is installed on my other computers. I want to block access for the computers that don't have my root CA.
The file /etc/nginx/nginx.conf is untouched, and my config file /etc/nginx/sites-available/jupyter.conf is:
# top-level http config for websocket headers:
# if Upgrade is defined, Connection = upgrade; if Upgrade is empty, Connection = close
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

# HTTP server to redirect all port 80 traffic to SSL/HTTPS
server {
    listen 80;
    server_name 192.168.4.70 mlserver.net localhost;

    # Tell all requests to port 80 to be 302 redirected to HTTPS
    return 302 https://$host$request_uri;
}

# HTTPS server to handle JupyterHub
server {
    listen 443 ssl;
    server_name 192.168.4.70 mlserver.net localhost;

    ssl_certificate /etc/ssl/certs/mlserver.net.crt;
    ssl_certificate_key /etc/ssl/private/mlserver.net.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    #ssl_stapling on;

    # Managing literal requests to the JupyterHub front end
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # websocket headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Scheme $scheme;

        proxy_buffering off;
    }
}
How can I edit this file to block access for computers that don't have the certificate? Which nginx directive should I add?
Thanks.
I want to block access for the computers who don't have my rootCA.
This is not possible. The server has no way to know whether a client successfully validated the server certificate (i.e. a client that has the root CA) or simply skipped certificate validation (a client that doesn't have the root CA).
One could try to add an HSTS header so that browsers will not simply allow users to click through certificate problems. But this, too, can be bypassed on the client side without the server noticing; it just makes it a bit harder.
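For reference, that header is a single directive in the server block:

add_header Strict-Transport-Security "max-age=63072000" always;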
If you want to control who can access the notebook, you need proper authentication of the clients instead. Knowledge of the root CA is not client authentication.
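What does work is TLS client-certificate authentication (mutual TLS), where nginx itself verifies a certificate presented by each client. A minimal sketch, assuming you issue client certificates from your own CA (file paths are illustrative):

server {
    listen 443 ssl;
    server_name 192.168.4.70 mlserver.net localhost;
    ssl_certificate /etc/ssl/certs/mlserver.net.crt;
    ssl_certificate_key /etc/ssl/private/mlserver.net.key;

    # CA used to verify certificates presented by clients
    ssl_client_certificate /etc/ssl/certs/myRootCA.crt;   # assumed path
    ssl_verify_client on;   # reject any client without a valid client certificate

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}

Note this requires installing a client certificate (not just the root CA) on every allowed machine.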
I have a MediaWiki running in a kubernetes cluster. The kubernetes cluster is behind an nginx proxy with the following config:
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 1024;
}

http {
    upstream rancher {
        server 192.168.122.90:80;
    }

    map $http_upgrade $connection_upgrade {
        default Upgrade;
        '' close;
    }

    server {
        listen 443 ssl http2;
        server_name .domain;

        ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
        ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://rancher;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;

            # Allows the "execute shell" window to remain open for up to 15 minutes.
            # Without this parameter the default is 1 minute, after which it closes automatically.
            proxy_read_timeout 900s;
            proxy_connect_timeout 75s;
        }
    }

    server {
        listen 80 default_server;
        server_name _;
        return 301 https://$host$request_uri;
    }
}
I can get to the main page of the wiki but have to log in before using it. When I click to log in using OAuth2, I get a 502 from the nginx proxy server (nginx reports that the upstream ended the connection prematurely). If I make the same request with curl, I get a 302 with the location of the authorization endpoint, as expected. I really don't understand why. Not using the proxy and accessing the cluster directly (from the VM host) works normally, but that isn't what I want.
It turned out the issue was related neither to nginx nor to Kubernetes. It was an issue with MediaWiki, where compression had some funny behaviour. See more here if anyone encounters anything similar. :)
Server: Ubuntu 18.04.4
Hosting Server: NGINX 1.16.1
A school needs to publish their learning management system to the Internet so teachers/students can learn from home.
I am a Network Engineer and I have very little experience with NGINX and reverse proxy servers in general, apart from setting firewall rules.
So, I have this almost working. My first config seems to pass the traffic: I get the login prompt, but when I enter valid credentials I get an authentication error.
I found some suggestions that this is due to NTLM authentication.
I found further information suggesting that I needed to use streams. I tried this and don't get any authentication prompt at all.
So I contacted NGINX to ask whether I needed NGINX Plus, but they said I should post here first to see if someone knows how to make this work.
The first config attempt is below:
server {
    listen 80;
    listen [::]:80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name URL.URL.URL/daymap;

    ssl_certificate /etc/ssl/certs/CERTNAME.crt;
    ssl_certificate_key /etc/ssl/private2/KEYNAME.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
    ssl_session_tickets off;

    # curl https://ssl-config.mozilla.org/ffdhe2048.txt > /path/to/dhparam
    # ssl_dhparam /path/to/dhparam;

    # intermediate configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers on;

    # HSTS (ngx_http_headers_module is required) (63072000 seconds)
    add_header Strict-Transport-Security "max-age=63072000" always;

    resolver 172.31.4.10;

    #proxy_set_header Host $http_host;
    #proxy_set_header X-Real-IP $remote_addr;
    #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    location / {
        proxy_pass_header Authorization;
        proxy_pass http://URL.URL.URL.URL.URL/daymap/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
        client_max_body_size 0;
        proxy_read_timeout 36000s;
        proxy_redirect off;
    }
}
The Stream config is below:
stream {
    upstream backend {
        hash $remote_addr consistent;
        server URL.URL.URL:443 weight=5;
        server IP.IP.IP.IP:443 max_fails=3 fail_timeout=30s;
    }

    upstream dns {
        server 172.31.4.10:53;
        server 172.31.4.11:53;
    }

    server {
        listen 443;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass backend;
    }

    server {
        listen 127.0.0.1:53 udp reuseport;
        proxy_timeout 20s;
        proxy_pass dns;
    }

    resolver 172.31.4.10;
}
I'd appreciate any insight. Hopefully someone can see what I am doing wrong.
Regards,
Jason.
This began working all by itself about four days after I posted. The streams config is the one that is working.
Hope that might help someone else.
Proxying NTLM is not easy. NTLM authentication requires a single connection established between the client and the server; the usual keepalive connection pooling will not work in this case.
For more details about how to authenticate using NTLM: https://learn.microsoft.com/en-us/openspecs/office_protocols/ms-grvhenc/b9e676e7-e787-4020-9840-7cfe7c76044a
In NGINX Plus (commercial subscription) there is a special directive to tell the proxy server that NTLM is in place:
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ntlm
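Per the linked documentation, the directive goes in the upstream block, roughly like this (the hostname is illustrative):

upstream lms_backend {
    server backend.school.internal:443;
    ntlm;   # NGINX Plus only: binds each client connection to its own upstream connection
}

server {
    ...
    location / {
        proxy_pass https://lms_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # upstream keepalive is required for NTLM
    }
}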
This directive is only available in NGINX Plus. With NGINX OSS you can use the stream module instead of http to work around the keepalive handling of backend connections.
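A minimal sketch of that approach, stripped down from the stream config already posted in the question (hostname again illustrative):

stream {
    server {
        listen 443;
        proxy_pass backend.school.internal:443;   # raw TCP pass-through; TLS terminates at the backend
        proxy_timeout 10m;   # keep long NTLM-authenticated sessions open
    }
}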
Let me know if you need more help on this.
I'm trying to proxy WebSocket + HTTP traffic with nginx.
I have read this: http://nginx.org/en/docs/http/websocket.html
My config looks like:
http {
    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    keepalive_timeout 65;
    gzip on;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    server {
        listen 80;
        server_name ourapp.com;

        location / {
            proxy_pass http://127.0.0.1:100;
            proxy_http_version 1.1;
            proxy_redirect off;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}
I have 2 problems:
1) The connection closes once a minute.
2) I want to run both HTTP and WS on the same port. The application works fine locally, but when I put HTTP and WS on the same port behind this nginx proxy, I get:
WebSocket connection to 'ws://ourapp.com/ws' failed: Unexpected response code: 200
Loading the app (HTTP) seems to work fine, but WebSocket connection fails.
Problem 1: The connection dying once a minute turned out to be an nginx timeout. I can either have our app ping once in a while or increase the timeout. For proxied connections the relevant directive is proxy_read_timeout (default 60 seconds), not keepalive_timeout, which only covers idle client keep-alive connections. I wasn't sure I should set it to 0, so I decided to ping once a minute and raise the timeout to 90 seconds.
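A sketch of that change in the location block above:

location / {
    proxy_pass http://127.0.0.1:100;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_read_timeout 90s;   # default is 60s; the app additionally pings once a minute
}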
Problem 2: The connectivity issues arose because I was using the CloudFlare CDN. Disabling CloudFlare acceleration solved the problem.
Alternatively, I could create a subdomain, mark it as "unaccelerated", and use that for WS.