Normal HTTP is working fine for me with nginx and mongrel; however, when I attempt to use HTTPS I am directed to the "Welcome to nginx" page.
http {
    # passenger_root /opt/passenger-2.2.11;
    # passenger_ruby /usr/bin/ruby1.8;

    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    upstream mongrel {
        server 00.000.000.000:8000;
        server 00.000.000.000:8001;
    }

    server {
        listen 443;
        server_name domain.com;

        ssl on;
        ssl_certificate /etc/ssl/localcerts/domain_combined.crt;
        ssl_certificate_key /etc/ssl/localcerts/www.domain.com.key;

        # ssl_session_timeout 5m;
        # ssl_protocols SSLv2 SSLv3 TLSv1;
        # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
        # ssl_prefer_server_ciphers on;

        location / {
            root /current/public/;
            index index.html index.htm;

            proxy_set_header X_FORWARDED_PROTO https;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://mongrel;
        }
    }
}
Do you have an explicit server entry for port 80? It could be that an nginx default directive is intercepting the regular HTTP traffic.
Add another server block just to be sure:
server {
    listen 80;
    server_name domain.com www.domain.com;
    rewrite ^(.*) https://domain.com$1 permanent;
}
This will redirect all traffic for your app to HTTPS. Even if that isn't what you ultimately want, it at least tells you whether the missing non-HTTPS block was the problem, and you can then replace it with the directives you actually need.
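If you do eventually want to keep serving the app over plain HTTP as well, the port-80 block can proxy to the same upstream instead of redirecting. A rough sketch, reusing the upstream and proxy settings from your HTTPS block (not tested against your setup):

server {
    listen 80;
    server_name domain.com www.domain.com;

    location / {
        # Same document root and proxy settings as the HTTPS block above
        root /current/public/;
        index index.html index.htm;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://mongrel;
    }
}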
Related
I have OpenResty running on my AWS instance (Instance A), and the server's IP address is already tied to the domain name myapp.john.com.
My app is running on another AWS instance (Instance B) within the same private network. It has a private IP address of 192.42.56.87 and the app is running on port 80.
I want to set up my OpenResty/nginx so that when visiting prod.myapp.john.com, nginx directs me to 192.42.56.87:80, and when visiting test.myapp.john.com, nginx directs me to another instance (Instance C) running the test version of my app, say on 192.xx.xx.xx:80.
Below is the configuration on Instance A:
Main config file /usr/local/openresty/nginx/conf/nginx.conf is defined as:
# Main NGINX Config File
#user www-data;
worker_processes auto;
pid logs/nginx.pid;

error_log logs/error.log info;
error_log logs/error.log notice;
error_log logs/error.log debug;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    keepalive_requests 100000;

    resolver 8.8.8.8 valid=30s ipv6=off;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    gzip on;

    # Include all the sites for the domain
    include /usr/local/openresty/nginx/sites/*;
}
/usr/local/openresty/nginx/sites/prod.myapp.john.com is defined as:
server {
    listen 80;
    listen [::]:80;
    server_name prod.myapp.john.com;  # this does not work; but "myapp.john.com" works
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name prod.myapp.john.com;  # this does not work; but "myapp.john.com" works

    ssl_certificate /etc/letsencrypt/live/myapp.john.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.john.com/privkey.pem;

    location / {
        proxy_pass http://192.42.56.87:80/;
        expires 0;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}
Now, in the Chrome browser, when I visit prod.myapp.john.com there is no response at all, since the request never gets to my Instance A.
However, if I change
server_name prod.myapp.john.com
to
server_name myapp.john.com
it works and the web page gets rendered.
Why?
How can I include more site files in /usr/local/openresty/nginx/sites/ and set the server blocks in config correctly to provide more subdomains on my site?
I am very new to NGINX.
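For reference, the kind of second site file I have in mind for the test subdomain would follow the same pattern as the prod file above, assuming I point DNS for test.myapp.john.com at Instance A and the certificate covers that name; the Instance C address is still the placeholder from above:

server {
    listen 80;
    listen [::]:80;
    server_name test.myapp.john.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name test.myapp.john.com;

    ssl_certificate /etc/letsencrypt/live/myapp.john.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.john.com/privkey.pem;

    location / {
        # Placeholder from above; would be Instance C's private IP
        proxy_pass http://192.xx.xx.xx:80/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}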
In my project, a server is defined in both etc/nginx/nginx.conf and etc/nginx/conf.d/proxy.conf, and etc/nginx/conf.d/proxy.conf is included by nginx.conf.
I don't understand the relationship between the server settings in these two files. For example, in nginx.conf the server's setting is listen 80; listen [::]:80; while in proxy.conf the server's setting is listen 80 proxy_protocol;.
In the above example, which setting will be used in real communication?
Does the server setting in proxy.conf overwrite the server setting in nginx.conf,
or will the server setting in proxy.conf be merged into the server setting in nginx.conf?
Please find the full conf files below:
etc/nginx/conf.d/proxy.conf
client_max_body_size 500M;
server_names_hash_bucket_size 128;

upstream backend {
    server unix:///var/run/puma/my_app.sock;
}

server {
    listen 80 proxy_protocol;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    large_client_header_buffers 8 32k;

    set_real_ip_from 10.0.0.0/8;
    real_ip_header proxy_protocol;

    location / {
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $proxy_protocol_addr;
        proxy_set_header X-Forwarded-For $proxy_protocol_addr;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_buffers 8 32k;
        proxy_buffer_size 64k;
        proxy_pass http://backend;
        proxy_redirect off;

        # Enables WebSocket support
        location /v1/cable {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade "websocket";
            proxy_set_header Connection "Upgrade";
            proxy_set_header X-Real-IP $proxy_protocol_addr;
            proxy_set_header X-Forwarded-For $proxy_protocol_addr;
        }
    }
}
etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        listen [::]:80;
        server_name localhost;
        root /usr/share/nginx/html;

        location / {
        }
    }
}
Nginx selects a server block to process a request based on the values of the listen and server_name directives.
If a matching server name cannot be found, the default server for that port will be used.
In the configuration in your question, the server block in proxy.conf is encountered first, so it becomes the de facto default server for port 80.
The server block in nginx.conf will only match requests that use the correct host name, i.e. http://localhost.
See this document for details.
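If you don't want the choice of default server to depend on which file happens to be included first, you can mark one listen directive explicitly. A minimal sketch (not taken from your configuration):

server {
    # Explicitly make this the default server for port 80,
    # regardless of the order in which config files are included.
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    # Close the connection for requests that match no other server_name.
    return 444;
}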
I'm using NGINX to serve static files, reverse proxy to Django, and verify client certificates.
I don't want the certificate to be requested at the URL root, so I created another server in nginx.conf that asks for the certificate on port 8443. This server is intended only to ask for the certificate and redirect the client back to port 443, where the reverse proxy to Django occurs.
This is my nginx.conf:
events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    sendfile on;
    keepalive_timeout 65;
    gzip on;

    upstream app {
        server django:8000;
    }

    # Redirect from HTTP to HTTPS
    server {
        listen *:80;
        server_name localhost;
        return 301 https://localhost$request_uri;
    }

    server {
        listen *:443 ssl;
        server_name localhost;

        ssl_certificate /etc/nginx/ssl/certificate.pem;
        ssl_certificate_key /etc/nginx/ssl/certificate.key;
        ssl_password_file /etc/nginx/ssl/certificate.pass;
        ssl_verify_client off;
        ssl_client_certificate /etc/nginx/ssl/chain.cer;
        ssl_verify_depth 3;
        ssl_protocols TLSv1.1 TLSv1.2;
        ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH;
        ssl_prefer_server_ciphers on;

        server_tokens off;
        underscores_in_headers on;

        location /static/ {
            autoindex off;
            alias /static_files/;
        }

        location / {
            try_files $uri $uri/ @app_web;
        }

        location @app_web {
            proxy_pass http://app;
            proxy_pass_request_headers on;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header CERTINFO $http_certinfo;
            proxy_redirect off;
        }
    }

    server {
        listen *:8443 ssl;
        server_name localhost;

        ssl_certificate /etc/nginx/ssl/certificate.pem;
        ssl_certificate_key /etc/nginx/ssl/certificate.key;
        ssl_password_file /etc/nginx/ssl/certificate.pass;
        ssl_verify_client on;
        ssl_client_certificate /etc/nginx/ssl/certificate.cer;
        ssl_verify_depth 3;
        ssl_protocols TLSv1.1 TLSv1.2;
        ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH;
        ssl_prefer_server_ciphers on;

        add_header CERTINFO $ssl_client_s_dn;
        return 301 https://$host$request_uri;
    }
}
When the client authenticates on port 8443 and I redirect them back to port 443, I need to forward their certificate information, the $ssl_client_s_dn to be more specific. To do so, I'm trying to use add_header (CERTINFO) in the port 8443 server and proxy_set_header with $http_certinfo in the port 443 server. But this solution is not working: the header is not forwarded from the port 8443 server to the port 443 server.
My question is: is there a way to do that? Can I set some kind of "global" variable in the http block, change its value on port 8443 and then use the updated value on port 443 to forward it to Django?
Thank you so much!
I think this is somewhat of an interesting issue, and being new to nginx I'm not sure what configuration I need to look at to resolve it. I am hosting two web applications behind an nginx reverse proxy: if a request comes in for myapp1.com it is routed to my app at 127.0.0.1:3000, and if it is for myapp2.com it is routed to my app at 127.0.0.1:3001. The problem is when I try to hit both of them in a relatively short amount of time (e.g. open http://myapp1.com and http://myapp2.com in the browser): it will usually serve the first one, and then the request for the other will take too long and fail. What's going wrong with nginx? Below are my configuration files.
Edit:
After connecting to a VPN this issue does not happen. Is this an issue because I am on the same LAN connection with the host? Is there any way to get around that?
/etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    log_format main_ext '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for" '
                        '"$host" sn="$server_name" rt=$request_time '
                        'ua="$upstream_addr" us="$upstream_status" '
                        'ut="$upstream_response_time" ul="$upstream_response_length" '
                        'cs=$upstream_cache_status';

    access_log /var/log/nginx/access.log main_ext;
    error_log /var/log/nginx/error.log warn;

    ##
    # Gzip Settings
    ##

    gzip on;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
/etc/nginx/sites-enabled/default
server {
    listen 80;
    server_name myapp1.com www.myapp1.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/myapp1.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/myapp1.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    listen 80;
    server_name myapp2.com www.myapp2.com;

    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
I am trying to make nginx do two things, like Fiddler does:
1. Redirect requests for data.abc.com to 127.0.0.1:9000
2. Pass all other requests through to their original servers
My nginx.conf is:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen 8008;
        server_name data.abc.com;
        root /usr/share/nginx/html;

        include /etc/nginx/default.d/*.conf;

        location / {
            proxy_pass https://127.0.0.1:9000/;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
However, right now every request on port 8008 is forwarded to 127.0.0.1:9000; it seems like server_name doesn't take effect. How can I make other requests go to their original servers?
Your configuration sends every request on port 8008 to https://127.0.0.1:9000, because that server block is the only one for the port and therefore acts as its default server.
Add two different server blocks, like the following:
1) Redirect data.abc.com to https://127.0.0.1:9000:
server {
    listen 8008;
    server_name data.abc.com;
    return 301 https://127.0.0.1:9000$request_uri;
}
2) Serve requests for any other site:
server {
    listen 8008 default_server;
    root /usr/share/nginx/html;
    include /etc/nginx/default.d/*.conf;
}
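Alternatively, if you want to keep proxying data.abc.com to the local service (as your original location block does) rather than sending the browser a redirect to 127.0.0.1, a sketch along these lines should behave the same way for other hosts; it simply reuses your existing proxy settings and is not tested against your setup:

# Only requests whose Host header is data.abc.com are proxied.
server {
    listen 8008;
    server_name data.abc.com;

    location / {
        proxy_pass https://127.0.0.1:9000/;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

# Everything else on port 8008 falls through to this default server.
server {
    listen 8008 default_server;
    root /usr/share/nginx/html;
    include /etc/nginx/default.d/*.conf;
}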