Within my AWS environment, I have an nginx server configured to point at an AWS Application Load Balancer's DNS hostname in the upstream backends block of its http configuration. All has been working just fine, but recently it appears that the IP address of the AWS ALB changed (although its DNS hostname is immutable), causing my application to fail.
Digging through the nginx log files and checking dig results, it appears that nginx retains the IP address of the backend host rather than resolving the hostname every time a request comes in. Once I restart the nginx service, everything starts working again.
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
}

http {
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    client_max_body_size 50M;

    upstream backends {
        server internal-private-aws-alb-hostname.elb.amazonaws.com:443;
    }

    server {
        listen 443 ssl;
        server_name my.servername.com;

        ssl_certificate_key /path/to/key.pem;
        ssl_certificate /path/to/cert.pem;
        ssl_protocols TLSv1.2;

        location / {
            proxy_pass https://backends;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $host:$server_port;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
Has anyone experienced this behavior before, and if so, do you know of any configuration changes that would make nginx more resilient here? I wondered whether there were caching settings I should be focusing on, but other than ssl_session_cache shared:SSL:10m; in the http section, everything else is rather vanilla.
proxy_pass does not resolve DNS for every request; the hostname is only looked up at startup or on configuration reload.
You can use a variable in proxy_pass to force nginx to re-resolve the DNS name, something like this:
resolver 127.0.0.1;
set $backend "foo.example.com";
proxy_pass http://$backend;
https://forum.nginx.org/read.php?2,215830,215832#msg-215832
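Applied to the config in the question, a minimal sketch might look like this (assuming the Amazon-provided VPC resolver at 169.254.169.253; substitute whatever DNS server your host actually uses). Note that once proxy_pass contains a variable, the upstream block is bypassed, so the ALB hostname moves into the variable:

server {
    listen 443 ssl;
    server_name my.servername.com;

    # Re-resolve the ALB hostname per its DNS TTL instead of pinning the
    # IP at startup. The resolver address is an assumption; use your own.
    resolver 169.254.169.253;

    location / {
        set $alb_backend internal-private-aws-alb-hostname.elb.amazonaws.com;
        # With a variable and no URI part, nginx passes the original
        # request URI through unchanged.
        proxy_pass https://$alb_backend:443;
        proxy_set_header Host $host;
    }
}

Requests then go to whatever IP the name currently resolves to, and nginx refreshes the answer when the TTL expires, so an ALB IP change no longer requires a restart.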
Not really. Even if your NGINX host never cached DNS queries, the resolvers between it and the authoritative DNS servers certainly are caching them. What's the TTL of the ALB's A record? (That's a rhetorical question.)
Also, the ALB's IP address shouldn't change throughout its lifetime. What's the ALB's created timestamp? It sounds like the ALB was inadvertently deleted and recreated. Enable deletion protection to prevent this from happening again.
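If you do end up adding a resolver as suggested above, note that nginx honors the record's TTL by default; the valid= parameter lets you cap it so an IP change is picked up quickly. The values here are illustrative:

# Cache DNS answers for at most 10s regardless of the record's own TTL,
# and time lookups out after 5s.
resolver 169.254.169.253 valid=10s;
resolver_timeout 5s;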
Related
I have tried a lot of searching and trial and error to set up a reverse proxy for Organizr, but it does not seem to work. I need help from you experts.
My setup (all on Docker):
nginx: Docker instance set up to serve my multiple WordPress domains.
organizr: separate Docker instance for my media, with its config files in a separate folder. I can access it locally without any issues. I intend to put it behind one of my domains as a subdomain, e.g. organizr.domain.com.
With the config below, I always get upstream timed out (110: Connection timed out). I have followed many tutorials but may have mixed up the config by now. I would appreciate it if someone could give clear instructions to set this up.
The conf file in nginx's folder:
# Organizr subdomain
upstream organizr_backend {
    server 192.168.X.X:9999;
}

server {
    listen 443 ssl http2;
    server_name organizr.domain.co.uk;

    ssl_certificate /etc/letsencrypt/live/domain.co.uk/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.co.uk/privkey.pem;
    ssl_prefer_server_ciphers on;

    error_log /var/log/nginx/organizr.domain.co.uk.error.log error;
    access_log /var/log/nginx/organizr.domain.co.uk.access.log main;

    location / {
        proxy_pass http://organizr_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
I have searched Google for many combinations, but I am unable to get the desired results.
I am deploying InvenioRDM locally. Here is a gist of the constraints:
InvenioRDM runs as a local instance for prototyping
The application is strictly bound to an IP address and port
The aim is to link the IP to a URL in a seamless manner
The work so far: the InvenioRDM local instance exposes only the application frontend.
Approaches:
i) Mimic production: the Nginx configuration was initially set up to mirror production. The production environment is purely containers. Very complex, so I decided to try a simpler approach.
ii) Transparent proxy: use Nginx to pass everything through, replacing the URLs at ingress (proxy_pass) and egress (proxy_redirect). The benefit is a simpler web server configuration, since the application itself handles the HTTP requests.
My default.conf is as follows.
# HTTP server
server {
    # Redirects all requests to https. This is in addition to HAProxy, which
    # already redirects http to https. This redirect is needed in case you
    # access the server directly (e.g. useful for debugging).
    listen 80; # IPv4
    server_name server.name;
    return 301 https://$host$request_uri;
}

# HTTPS server
server {
    listen 443 ssl;
    server_name server.name;
    charset utf-8;
    keepalive_timeout 5;

    ssl_certificate /etc/ssl/test.crt;
    ssl_certificate_key /etc/ssl/test.key;
    ssl_session_cache builtin:1000 shared:SSL:50m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AE$
    #ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/access.log;

    proxy_request_buffering off;
    proxy_http_version 1.1;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass https://127.0.0.1:5000;
        proxy_read_timeout 90;
        proxy_redirect https://127.0.0.1:5000 https://server.name;
    }
}
My issue is that when accessing server.name (hidden for obvious reasons) publicly, it comes back with the internal Class A IP address (10.X.X.X) of the machine, which is of course not accessible publicly. What am I missing here?
I am new to this, and I am at my wits' end.
Server: Ubuntu 18.04.4
Hosting server: NGINX 1.16.1
A school needs to publish their learning management system to the Internet so teachers/students can learn from home.
I am a Network Engineer and I have very little experience with NGINX and reverse proxy servers in general, apart from setting firewall rules.
So, I have had this almost working. My first config seems to pass the traffic, and I get the login prompt, but when I enter valid credentials I get an authentication error.
I found some suggestions that this is due to NTLM authentication.
I found further information suggesting I needed to use streams. I tried this, and I don't even get an authentication prompt at all.
So I contacted NGINX to ask whether I needed NGINX Plus, but they said I should post here first to see if someone knows how to make this work.
My first config attempt is below:
server {
    listen 80;
    listen [::]:80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name URL.URL.URL/daymap;

    ssl_certificate /etc/ssl/certs/CERTNAME.crt;
    ssl_certificate_key /etc/ssl/private2/KEYNAME.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
    ssl_session_tickets off;

    # curl https://ssl-config.mozilla.org/ffdhe2048.txt > /path/to/dhparam
    # ssl_dhparam /path/to/dhparam;

    # intermediate configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers on;

    # HSTS (ngx_http_headers_module is required) (63072000 seconds)
    add_header Strict-Transport-Security "max-age=63072000" always;

    resolver 172.31.4.10;

    #proxy_set_header Host $http_host;
    #proxy_set_header X-Real-IP $remote_addr;
    #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    location / {
        proxy_pass_header Authorisation;
        proxy_pass http://URL.URL.URL.URL.URL/daymap/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
        client_max_body_size 0;
        proxy_read_timeout 36000s;
        proxy_redirect off;
    }
}
The Stream config is below:
stream {
    upstream backend {
        hash $remote_addr consistent;
        server URL.URL.URL:443 weight=5;
        server IP.IP.IP.IP:443 max_fails=3 fail_timeout=30s;
    }

    upstream dns {
        server 172.31.4.10:53;
        server 172.31.4.11:53;
    }

    server {
        listen 443;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass backend;
    }

    server {
        listen 127.0.0.1:53 udp reuseport;
        proxy_timeout 20s;
        proxy_pass dns;
    }

    resolver 172.31.4.10;
}
I'd appreciate any insight. Hopefully someone can see what I am doing wrong.
Regards,
Jason.
This began working all by itself about four days after I posted. The streams config is the one that is working.
Hope that might help someone else.
Proxying NTLM is not easy. NTLM authentication requires a single connection established between the client and the server; the usual keepalive connection handling will not work in this case.
For more details about how NTLM authentication works: https://learn.microsoft.com/en-us/openspecs/office_protocols/ms-grvhenc/b9e676e7-e787-4020-9840-7cfe7c76044a
In NGINX Plus (commercial subscription) there is a special directive to tell the proxy server that NTLM is in place:
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ntlm
This directive is only available in NGINX Plus. With NGINX OSS you can use the stream module instead of http to work around the keepalive handling for the backend connections.
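For reference, the Plus-only setup from the linked docs would look roughly like this (a sketch; the upstream name and backend host are illustrative, not taken from the question):

upstream lms_backend {
    # NGINX Plus only: keep one upstream connection per client so the
    # connection-bound NTLM handshake is not broken by connection pooling.
    ntlm;
    server daymap.internal.example:443;
}

server {
    listen 443 ssl;

    location / {
        proxy_pass https://lms_backend;
        # Required alongside ntlm: HTTP/1.1 with a cleared Connection
        # header so nginx can hold the upstream connection open.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}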
Let me know if you need more help on this.
I am trying to install Apache OpenMeetings. However, I want to use Nginx as the reverse proxy so the application runs on port 443 with a free Let's Encrypt SSL certificate.
If I load the application on port 5080, I successfully get the interface, but when I try the domain name on port 443 over HTTPS, it does not load the resources (see the attached image with the errors).
Here's my nginx virtual host file.
upstream openmeetings {
    server 127.0.0.1:5080;
}

server {
    listen 80;
    server_name openmeetings.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name openmeetings.example.com;

    ssl_certificate /etc/letsencrypt/live/openmeetings.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/openmeetings.example.com/privkey.pem;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/openmeetings.access.log;

    location / {
        proxy_pass http://openmeetings;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_redirect off;
    }
}
I faced the same problem (with OpenMeetings 5.0.0-M4) and found the following: OpenMeetings uses Ajax over WebSocket. Adding

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

to the http section, and

proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;

to the location, solves the status 400 problem.
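Assembled, the relevant pieces look roughly like this (a sketch against the config from the question; the proxy_http_version 1.1 line is my addition, since WebSocket proxying needs HTTP/1.1):

http {
    # Derive the Connection header from the client's Upgrade header so
    # WebSocket upgrades and ordinary requests both proxy correctly.
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        location / {
            proxy_pass http://openmeetings;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}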
Then I met a problem with the Content Security Policy. It seems the connect-src policy is configured automatically on the first connection to the server, so after changing the domain in use I had to restart OpenMeetings.
There was also a problem with media stream playback: "Check setup recording" produces a long browser console message ending with

onaddstream is deprecated! Use peerConnection.ontrack instead.
...
Remote ICE candidate received

It looks like an incompatibility with the old Firefox 54.0 on Linux; on the latest Firefox 75.0 on Windows it works!
It is also necessary to rewrite server.xml as described in "nginx managed SSL with Tomcat 7":

<Valve className="org.apache.catalina.valves.RemoteIpValve"
       remoteIpHeader="x-forwarded-for"
       remoteIpProxiesHeader="x-forwarded-by"
       protocolHeader="x-forwarded-proto" />
I'm trying to build a Kubernetes cluster with the following services inside:
Docker registry (which will contain my Django Docker image)
Nginx listening on both port 80 and 443
PostgreSQL
Several Django applications served with gunicorn
A letsencrypt container to generate and automatically renew signed SSL certificates
My problem is a chicken-and-egg problem that occurs during the creation of the cluster:
My SSL certificates are stored in a secret volume that is generated by the letsencrypt container. To be able to generate a certificate, we need to prove we own the domain name, and this is done by making a file accessible under the server name (basically this consists of Nginx serving a static file over port 80).
So here my first problem occurs: to serve the static file needed by letsencrypt, I need to have nginx started, but the SSL part of nginx can't start if the secret hasn't been mounted, and the secret is generated only once letsencrypt succeeds...
So a simple solution could be to have two Nginx containers: one listening only on port 80 that is started first, then letsencrypt, then a second Nginx container listening on port 443. This looks like a waste of resources in my opinion, but why not.
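For the port-80 container, a minimal sketch of its server block shows why it can start with no certificates at all (assuming certbot-style webroot validation; /var/www/certbot is an assumed shared volume):

server {
    listen 80;
    server_name docker.thedivernetwork.net example.com;

    # Serve the Let's Encrypt HTTP-01 challenge files from the shared volume.
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    # Everything else redirects to the HTTPS container once it exists.
    location / {
        return 301 https://$host$request_uri;
    }
}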
Now, assuming I have two nginx containers, I want my Docker registry to be accessible over HTTPS, so my nginx configuration will have a docker-registry.conf file looking like this:
upstream docker-registry {
    server registry:5000;
}

server {
    listen 443;
    server_name docker.thedivernetwork.net;

    # SSL
    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;

    # disable any limits to avoid HTTP 413 for large image uploads
    client_max_body_size 0;

    # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
    chunked_transfer_encoding on;

    location /v2/ {
        # Do not allow connections from docker 1.5 and earlier
        # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
        if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
            return 404;
        }

        # To add basic authentication to v2 use auth_basic setting plus add_header
        auth_basic "registry.localhost";
        auth_basic_user_file /etc/nginx/conf.d/registry.password;
        add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

        proxy_pass http://docker-registry;
        proxy_set_header Host $http_host; # required for docker client's sake
        proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 900;
    }
}
The important part is the proxy_pass directive that forwards requests to the registry container.
The problem I'm facing is that my Django gunicorn server also has its configuration file, django.conf, in the same folder:
upstream django {
    server django:5000;
}

server {
    listen 443 ssl;
    server_name example.com;
    charset utf-8;

    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    client_max_body_size 20M;

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_django;
    }

    location @proxy_to_django {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        #proxy_pass_header Server;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 65;
        proxy_read_timeout 65;
        proxy_pass http://django;
    }
}
So nginx will successfully start only under three conditions:
the secret is mounted (this could be addressed by splitting Nginx into two separate containers)
the registry service is started
the django service is started
The problem is that the django image is pulled from the registry service, so we are in a deadlock situation again.
I didn't mention it, but registry and django have different server names, so nginx is able to serve them both.
The solution I thought about (but it's quite dirty!) would be to reload nginx several times with more and more configuration:
I start the docker registry service
I start Nginx with only registry.conf
I create my django rc and service
I reload nginx with both registry.conf and django.conf
If there were a way to make nginx start while ignoring failing configuration, that would probably solve my issues as well.
How can I cleanly achieve this setup?
Thanks for your help
Thibault
Are you using Kubernetes Services for your applications?
With a Service in front of each of your Pods, you have a proxy for the Pods: even if a Pod is not started, as long as the Service exists nginx will find it when looking it up, because the Service has an IP assigned.
So you start the Services, then start nginx, and then whatever Pods you want, in the order you want.
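Concretely, the upstream blocks can keep pointing at the Service names (or their fully qualified forms), since those names resolve through cluster DNS as soon as the Service objects exist, whether or not the backing Pods are running. A sketch, assuming everything lives in the default namespace:

upstream docker-registry {
    # Resolves to the Service's stable cluster IP even before any Pod is ready.
    server registry.default.svc.cluster.local:5000;
}

upstream django {
    server django.default.svc.cluster.local:5000;
}

nginx still resolves these names once at startup, which is why the ordering above matters: create the Services first, then start nginx, then bring the Pods up in any order.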