I just set up nginx + gunicorn to serve a Pyramid web application. My application relies on getting the subdomain, which is different for every client. I didn't know this before, but when going through gunicorn, it seems that the only host I can get from the environment is what I have in the INI file used to configure Gunicorn - localhost.
I'm wondering if there is a way to get the actual, full domain name where the request originated? It can't be anything hard-coded, since the subdomain can be different for each request. Does anyone have any ideas on how to make this happen?
UPDATE
I made the change requested by Fuero, changing the value I had for
proxy_set_header Host $http_host;
to
proxy_set_header Host $host;
Unfortunately, that didn't do it. I'm still seeing 127.0.0.1:6500 in the environment as the remote address, host, etc. The only thing that shows me the actual client request domain is the referrer. I'm including my config file below, hoping something still stands out.
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    # sendfile on;
    gzip on;
    gzip_http_version 1.1;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_proxied any;
    gzip_types text/plain text/html text/css
               application/json application/x-javascript
               text/xml application/xml application/xml+rss
               text/javascript application/javascript text/x-js;
    gzip_buffers 16 8k;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    upstream myapp-site {
        server 127.0.0.1:6500;
    }

    server {
        access_log /var/www/tmsenv/logs/access.log;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 60s;
            proxy_send_timeout 90s;
            proxy_read_timeout 90s;
            proxy_buffering off;
            proxy_temp_file_write_size 64k;

            proxy_pass http://myapp-site;
            proxy_redirect off;
        }
    }
}
Since you're using WSGI, you can find the hostname in environ['HTTP_HOST']. See PEP 333 for more details and for other information you can retrieve.
Add this to your config:
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
This preserves the server name that was entered in the browser in the Host: header and adds the client's IP address in X-Forwarded-For:.
Access them via environ['HTTP_HOST'] and environ['HTTP_X_FORWARDED_FOR']. Your WSGI server might be smart enough to respect X-Forwarded-For: when setting REMOTE_ADDR, though.
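For a quick check on the application side, here is a minimal WSGI sketch of reading those values; the environ keys are the standard ones, but the function name and fallback logic are just illustrative:
def request_origin(environ):
    """Return (host, client_ip) as seen by a WSGI app behind the proxy."""
    # Host: header as forwarded by nginx (proxy_set_header Host $host;)
    host = environ.get('HTTP_HOST') or environ.get('SERVER_NAME', '')
    # Original client IP as forwarded by nginx (X-Forwarded-For); fall back
    # to the direct peer address (the proxy itself) if the header is absent.
    forwarded = environ.get('HTTP_X_FORWARDED_FOR', '')
    client_ip = forwarded.split(',')[0].strip() if forwarded else environ.get('REMOTE_ADDR', '')
    return host, client_ip
In Pyramid itself, request.host and request.domain are built from the same HTTP_HOST value, so the subdomain can be derived from either without touching environ directly.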
I finally got this working, and the fix was something stupid, as usual. While digging for answers I noticed my config didn't have the root or static directory location, which is the reason I'm using nginx in the first place. I asked the person who set this up, and they pointed out that it was in another config file, which is consumed via the include.
include /etc/nginx/sites-enabled/*;
I went to that file and added the suggested headers and it works.
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
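For anyone with a similar layout, a minimal sketch of what that included sites-enabled server block might look like with the headers in place; the server_name and root path are illustrative, and the upstream name matches the one defined in the main config above:
server {
    listen 80;
    server_name *.example.com;      # illustrative: one server for all client subdomains
    root /var/www/tmsenv/static;    # illustrative static/root location

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://myapp-site;
    }
}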
I have an Angular app (using Universal for server-side rendering) running on a production server under Node Express at localhost:4000, and I configured an Nginx reverse proxy for the app. The production server uses HTTPS for its domain name.
Here is the nginx config in /etc/nginx/sites-enabled:
location ~ (/landing/home|/landing/company|/landing/contact|/landing/support) {
    proxy_pass http://127.0.0.1:4000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_cache_bypass $http_upgrade;
    add_header X-Cache-Status $upstream_cache_status;
}

location / {
    proxy_pass http://127.0.0.1:5000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
Here is nginx.conf
user ubuntu;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 2048;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    server_names_hash_bucket_size 64;
    server_name_in_redirect off;
    # include /etc/nginx/mime.types;
    # default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
In Chrome Dev Tools - Network, here is a sample request & response for an image (an SVG file). The image that nginx sent is an older version that has since been updated (file name unchanged) on the Angular side. Please note that this file is just a sample; the issue I'm facing is not limited to this one file, but affects all static files (including CSS and JS files).
request header
response header
To verify, I ran curl both on a client and on the server, against the same image file. Here are the results of curl:
curl result from a client browser, result was from nginx
curl result on the server, comparing between curl to localhost:4000 and curl to the actual public url
We can see that the response from localhost:4000 is the latest version of the image, while the response from the public URL is an older version of the same image.
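For reference, the comparison described above can be reproduced with something like the following; the hostname and file path are illustrative:
# On the server: ask the Universal app directly, bypassing nginx
curl -sI http://localhost:4000/assets/sample.svg

# From anywhere: ask the public URL, which goes through nginx
curl -sI https://www.example.com/assets/sample.svg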
I checked in /etc/nginx and there is no cache folder in there. I thought about clearing nginx's cache, but I couldn't find it there.
I have tried adding many things in config, including:
add_header Last-Modified $date_gmt;
add_header Cache-Control 'no-store, no-cache';
if_modified_since off;
expires off;
etag off;
and
proxy_cache off;
Somehow even X-Cache-Status doesn't show up in the response headers either, but comparing the curl results from localhost and from the public URL, it is clear to me that it must be something to do with nginx.
Does anyone have a suggestion on how to make nginx send the response from the actual output of localhost:4000, instead of from a cache? Thank you.
UPDATE #1:
Sorry, I only included a partial nginx conf. I have found the root cause: I actually have two Node Express servers running on the same domain, one on port 4000 (Angular Universal) and the other on port 5000 (non-Universal). I have updated the nginx conf excerpt above to include the other location directive for the one on port 5000. Please see my answer below for a further explanation of what I did wrong to cause this problem.
I found out the root cause of the problem.
I actually have two Node Express servers running on the same server and the same domain. One is on port 4000 (uses Angular Universal), and the other is on port 5000 (non-Universal). I have edited my question to include the second location directive in the nginx conf.
The way I had my nginx conf made it look like the whole page came as a response from localhost:4000, but some parts within the page (images, style sheets, etc.) were actually responses from localhost:5000, because the URLs of those requests did not match the pattern in the nginx conf for localhost:4000. So localhost:5000 got to respond to the request, and the files that localhost:5000 had were an older version (not all of them, but the one I tested with curl happened to be an older version).
I only realized this situation when I disabled the second location directive in the nginx conf, effectively stopping localhost:5000 from responding to any request, and then I saw many 404 errors because of that.
To solve this problem, meaning to have both localhost:4000 and localhost:5000 active and still get the correct responses, I had to make some adjustments to the routing in my Angular code.
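To illustrate the mismatch: a request such as /assets/sample.svg does not match the ~ (/landing/...) pattern, so it falls through to location / and is answered by the app on port 5000. An explicit location for such paths would also have worked (the path here is purely illustrative; I adjusted the Angular routing instead):
# Route the Universal app's own static assets to :4000 as well,
# instead of letting them fall through to the catch-all on :5000.
location /assets/ {
    proxy_pass http://127.0.0.1:4000;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
}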
I'm trying to set up end-to-end HTTP/2 connections for an Amazon Elastic Beanstalk application. I'm using Node.js and Fastify with HTTP/2 support (it works great on my local machine). By default, the nginx reverse proxy that EB creates on the EC2 instance where the code gets deployed uses HTTP/1.1, so I need to change that.
I have read here how to do it (see the reverse proxy configuration section). The problem becomes apparent if you look at the nginx.conf file:
#Elastic Beanstalk Nginx Configuration File
user nginx;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 66982;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    include conf.d/*.conf;

    map $http_upgrade $connection_upgrade {
        default "upgrade";
    }

    server {
        listen 80 default_server;
        access_log /var/log/nginx/access.log main;

        client_header_timeout 60;
        client_body_timeout 60;
        keepalive_timeout 60;
        gzip off;
        gzip_comp_level 4;
        gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;

        # Include the Elastic Beanstalk generated locations
        include conf.d/elasticbeanstalk/*.conf;
    }
}
Via the include conf.d/elasticbeanstalk/*.conf; line at the end, a file 00_application.conf gets included. That file contains the following:
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;

    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
And there you can see the proxy_http_version parameter I need to change to 2.0.
Any idea how I can achieve that? I can add configuration files to conf.d or replace the entire nginx.conf file, but I do not really know how to change that value from there.
Create a file with the .config extension inside the .ebextensions folder, for example 01-mynginx.config.
Inside that config file, use the files key to create files on the instance and the container_commands key to run system commands after the application and web server have been set up:
files:
  "/etc/nginx/conf.d/01-mynginx.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      keepalive_timeout 120s;
      proxy_connect_timeout 120s;
      proxy_send_timeout 120s;
      proxy_read_timeout 120s;
      fastcgi_send_timeout 120s;
      fastcgi_read_timeout 120s;

container_commands:
  nginx_reload:
    command: "sudo service nginx reload"
Elastic Beanstalk then creates the 01-mynginx.conf file inside the /etc/nginx/conf.d folder, where it is picked up by the include conf.d/*.conf; line of the main Elastic Beanstalk nginx configuration.
I have been battling this issue for some days now. I found a temporary solution but just can't wrap my head around what exactly is happening.
What happens is that one request is handled immediately. If I send the same request right after, it hangs on 'waiting' for 60 seconds. If I cancel the request and send a new one, it is handled correctly again. If I send a request after that one, it hangs again. This cycle repeats.
It sounds like a load-balancing issue, but I didn't set one up. Does nginx have some sort of default load balancing for connections to the upstream server?
The error received is upstream timed out (110: Connection timed out).
I found out that by changing this proxy parameter, it only hangs for 3 seconds, and every subsequent request is then handled fine (after the one that waited), presumably because of a working keep-alive connection.
proxy_connect_timeout 3s;
It looks like setting up a connection to the upstream times out, and after the timeout it tries again and succeeds. Also, in the "(cancelled) request - ok request - (cancelled) request" cycle described above, no keep-alive connection gets set up; that only happens if I wait for the request to complete, which takes 60 seconds without the setting above and is unacceptable.
It happens for both domains.
NGINX conf:
worker_processes 1;

events
{
    worker_connections 1024;
}

http
{
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    gzip on;

    # Timeouts
    client_body_timeout 12;
    client_header_timeout 12;
    send_timeout 10;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    include /etc/nginx/conf.d/*.conf;

    server
    {
        server_name domain.com www.domain.com;
        root /usr/share/nginx/html;
        index index.html index.htm;

        location /api/
        {
            proxy_redirect off;
            proxy_pass http://localhost:3001/;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;

            #TEMP fix
            proxy_connect_timeout 3s;
        }
    }
}
DOMAIN2 conf:
server {
    server_name domain2.com www.domain2.com;

    location /api/
    {
        proxy_redirect off;
        proxy_pass http://localhost:5000/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;

        #TEMP fix
        proxy_connect_timeout 3s;
    }
}
I found the answer. However, I still don't fully understand why and how. I suspect setting up the keep-alive wasn't working as it should. I read the documentation and found the answer there: https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
For both configuration files I added an 'upstream' block, e.g. for DOMAIN2.CONF (a combined sketch follows the checklist below):
upstream backend
{
    server 127.0.0.1:5000;
    keepalive 16;
}

location /api/
{
    proxy_redirect off;
    proxy_pass http://backend/;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    ...
    # REMOVED THE TEMP FIX
}
Make sure to:
Clear the Connection header
Use 127.0.0.1 instead of localhost in upstream block
Set http version to 1.1
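Putting the checklist together, the DOMAIN2 config ends up looking roughly like this (only the directives relevant to the fix are shown; the remaining headers from the original block stay as they were):
upstream backend
{
    server 127.0.0.1:5000;   # IP instead of localhost
    keepalive 16;            # pool of idle keep-alive connections to the upstream
}

server {
    server_name domain2.com www.domain2.com;

    location /api/
    {
        proxy_pass http://backend/;
        proxy_http_version 1.1;          # HTTP/1.1 is required for upstream keep-alive
        proxy_set_header Connection "";  # clear Connection so keep-alive is not closed
        proxy_set_header Host $host;
    }
}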
Are there any performance benefits, or is there performance degradation, in using both Varnish and the nginx proxy cache together? I have a Magento 2 site running with the nginx cache, Redis for session storage and backend cache, and Varnish in front, all on the same CentOS machine. Any input or advice, please? Below is the currently used nginx configuration file.
# Server globals
user nginx;
worker_processes auto;
worker_rlimit_nofile 65535;
pid /var/run/nginx.pid;

# Worker config
events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    # Main settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    client_header_timeout 1m;
    client_body_timeout 1m;
    client_header_buffer_size 2k;
    client_body_buffer_size 256k;
    client_max_body_size 256m;
    large_client_header_buffers 4 8k;
    send_timeout 30;
    keepalive_timeout 60 60;
    reset_timedout_connection on;
    server_tokens off;
    server_name_in_redirect off;
    server_names_hash_max_size 512;
    server_names_hash_bucket_size 512;

    # Proxy settings
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass_header Set-Cookie;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffers 32 4k;

    # SSL PCI Compliance
    ssl_session_cache shared:SSL:40m;
    ssl_buffer_size 4k;
    ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;

    # Error pages
    error_page 403 /error/403.html;
    error_page 404 /error/404.html;
    error_page 502 503 504 /error/50x.html;

    # Cache settings
    proxy_cache_path /var/cache/nginx levels=2 keys_zone=cache:10m inactive=60m max_size=1024m;
    proxy_cache_key "$host$request_uri $cookie_user";
    proxy_temp_path /var/cache/nginx/temp;
    proxy_ignore_headers Expires Cache-Control;
    proxy_cache_use_stale error timeout invalid_header http_502;
    proxy_cache_valid any 1d;

    # Cache bypass
    map $http_cookie $no_cache {
        default 0;
        ~SESS 1;
        ~wordpress_logged_in 1;
    }

    # File cache settings
    open_file_cache max=10000 inactive=30s;
    open_file_cache_valid 60s;
    open_file_cache_min_uses 2;
    open_file_cache_errors off;

    # Wildcard include
    include /etc/nginx/conf.d/*.conf;
}
It would be simply undesirable.
Magento and Varnish work tightly together. The key to efficient caching is having your app (Magento) able to invalidate a specific page's cache when the content for it has changed.
E.g. you update the price of a product: Magento talks to Varnish and sends a purge request for specific cache tags, which include the product ID.
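For example, Magento's full-cache flush is just such a purge; assuming Varnish runs with the VCL that Magento generates, it can also be triggered by hand with something like the following (the hostname is illustrative):
# Ask Varnish to invalidate everything matching the given cache-tag pattern
curl -X PURGE -H "X-Magento-Tags-Pattern: .*" http://varnish.example.com/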
There is simply no such integration between Magento and NGINX, so you risk, at minimum, having:
stale pages / old product data being displayed
users seeing each other's accounts (with your config above as it stands), unless you configure the nginx cache to bypass on Magento-specific cookies (see the sketch below)
The only benefit of having a cache in NGINX (on the TLS side) is saving on the absolutely negligible proxy buffering overhead. It's definitely not worth the trouble, so you should be using only the cache in Varnish.
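That said, if you do keep the nginx proxy cache in front for some reason, the very least you need is to wire the existing $no_cache map into the proxied location so logged-in/session traffic bypasses it; a rough sketch, with the backend address being illustrative:
# Illustrative only: skip the nginx cache whenever a Magento session cookie is present.
# "cache" is the keys_zone already defined by proxy_cache_path in your config.
location / {
    proxy_pass http://127.0.0.1:8080;     # illustrative Varnish/backend address
    proxy_cache cache;
    proxy_cache_bypass $no_cache;         # do not answer these requests from the cache
    proxy_no_cache $no_cache;             # and do not store their responses either
}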
I'm trying to figure out how to set up Nginx as a reverse proxy from a single domain to multiple backend sites based on the location.
Nginx Config:
server {
    listen 80;
    underscores_in_headers on;
    server_name test.example.com;

    gzip on;
    gzip_min_length 1100;
    gzip_buffers 4 32k;
    gzip_types text/plain application/x-javascript text/xml text/css application/javascript;
    gzip_vary on;
    gzip_proxied any;

    proxy_http_version 1.1;

    location /page1/ {
        proxy_pass http://www.siteone.com/pageone;
        proxy_set_header Host www.siteone.com;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /page2/ {
        proxy_pass http://www.sitetwo.com/pagetwo;
        proxy_set_header Host www.sitetwo.com;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
The problem is that static files (images, css, etc.) are all broken. The initial request returns fine, but subsequent GET requests all go to the proxy subdomain (ex: test.example.com/css/style.css), and return 404 or 500 errors.
I tried to work around this with a static files location, or a catch all (e.g., "location /" or "location ~* ^(.*).(css|js|etc..)"), but I can't do that for both proxied sites. As a workaround I also tried catching the referer URL and setting the catch-all's proxy_pass based on that, but it didn't work for everything and seemed kind of prone to failure.
I know this isn't a common setup, but unfortunately for our use case we can't use the more common method of a subdomain & server block for each proxied request. Our requirement is for a single subdomain proxying to two or more backends based on the path (e.g., test.example.com/this-path -> backend.domain.com/can-be-anything).
We're using this proxy as a caching server, so I'd also be open to doing this with Varnish + Nginx for SSL termination if it better supports the use case.
Open to any suggestions from the community, and thanks!