I am running an Nginx reverse proxy, but when I run the following curl command I get an error:
curl https://test-website.com:444
"SSL routines:ssl3_get_record:wrong version number"
Here is my default.conf
server {
listen 444 ssl;
server_name test-website.com;
# Path for SSL config/key/certificate
ssl_certificate /etc/ssl/certs/nginx/site1.crt;
ssl_certificate_key /etc/ssl/certs/nginx/site1.key;
include /etc/nginx/includes/ssl.conf;
location / {
include /etc/nginx/includes/proxy.conf;
proxy_pass https://IdentityApi:5501;
}
access_log off;
error_log /var/log/nginx/error.log error;
}
# Default
server {
listen 80 default_server;
server_name _;
root /var/www/html;
charset UTF-8;
error_page 404 /backend-not-found.html;
location = /backend-not-found.html {
allow all;
}
location / {
return 404;
}
access_log off;
log_not_found off;
error_log /var/log/nginx/error.log error;
}
ssl.conf
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
ssl_prefer_server_ciphers on;
proxy.conf
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
proxy_request_buffering off;
proxy_http_version 1.1;
proxy_intercept_errors on;
Is there any issue with the conf file?
This issue helped me with the same error, although under different circumstances:
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
I had to change the URL scheme from https to http.
While I don't think this solves your exact problem, it may help to think about where a protocol mismatch (a listener speaking plain HTTP where the client expects TLS, or vice versa) could occur in your setup.
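For what it's worth, a quick way to confirm whether port 444 is actually speaking TLS at all (a diagnostic sketch, not a fix):
curl -vk https://test-website.com:444
openssl s_client -connect test-website.com:444 -servername test-website.com
If openssl reports the same "wrong version number" error immediately, the listener on 444 is answering in plain HTTP (for example, a container port mapped to the wrong internal port) rather than TLS, and the problem is most likely outside default.conf itself.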
Related
I can't get my dotnet mvc app to be hosted correctly over ssl (https). It only works over http. The following are my relevant nginx files (with "example.org" used instead of my domain):
/etc/nginx/sites-enabled/default
# Default server configuration
#
server {
server_name example.org *.example.org;
location / {
proxy_pass http://127.0.0.1:5000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# SSL configuration
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/example.org/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
listen 80 default_server;
# listen [::]:80 default_server deferred;
return 444;
}
server {
if ($host = example.org) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
server_name example.org *.example.org;
return 404; # managed by Certbot
}
/etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
/etc/nginx/proxy.conf
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;
This makes my site work over "http://example.org" but not over "https://example.org", and I don't know why it won't work over https. I tried altering my /etc/nginx/nginx.conf file to match Microsoft's recommended documentation for hosting ASP.NET apps behind nginx. Here's my new /etc/nginx/nginx.conf file.
/etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
include /etc/nginx/proxy.conf;
limit_req_zone $binary_remote_addr zone=one:10m rate=5r/s;
server_tokens off;
sendfile on;
# Adjust keepalive_timeout to the lowest possible value that makes sense
# for your use case.
keepalive_timeout 29;
client_body_timeout 10; client_header_timeout 10; send_timeout 10;
upstream my-app {
server 127.0.0.1:5000;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name example.org *.example.org;
ssl_certificate /etc/letsencrypt/live/example.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;
ssl_session_timeout 1d;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_session_cache shared:le_nginx_SSL:10m;
ssl_session_tickets off;
ssl_stapling off;
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
#Redirects all traffic
location / {
proxy_pass http://my-app;
limit_req zone=one burst=10 nodelay;
}
}
}
When I change my /etc/nginx/nginx.conf file to the above, both "http://example.org" and "https://example.org" fail. So how do I get this app to work over https?
The problem was actually my ufw firewall. When I was setting up the droplet I did the commands:
sudo ufw enable
sudo ufw allow OpenSSH
sudo ufw default deny incoming
sudo ufw allow 'Nginx HTTP'
The problem with the above is that I was supposed to run sudo ufw allow 'Nginx Full' and then sudo reboot. After doing this, my original nginx configuration worked!
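A minimal sketch of the corrected sequence, assuming the standard ufw application profiles that ship with the Ubuntu nginx package:
sudo ufw allow OpenSSH
sudo ufw allow 'Nginx Full'      # opens both 80/tcp and 443/tcp; 'Nginx HTTP' only opens 80
sudo ufw default deny incoming
sudo ufw enable
sudo ufw status verbose          # confirm 80 and 443 are allowed
Because 'Nginx HTTP' only opens port 80, https requests were never reaching nginx in the first place.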
Try the config below (don't forget to disable https redirection in your app: remove app.UseHttpsRedirection(); from your Program.cs (or Startup.cs) and remove the https applicationUrl reference "https://localhost:5001" from launchSettings.json):
worker_processes 1;
events { worker_connections 1024; }
http {
sendfile on;
access_log /var/log/nginx/access.log combined;
error_log /var/log/nginx/error.log;
upstream web-api {
server 127.0.0.1:5000;
}
server {
listen 80;
server_name yourdomain.com;
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name yourdomain.com;
ssl_certificate /etc/ssl/certs/yourdomain.com.crt;
ssl_certificate_key /etc/ssl/private/yourdomain.com.key;
location / {
proxy_pass http://web-api;
proxy_redirect off;
proxy_http_version 1.1;
proxy_cache_bypass $http_upgrade;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $server_name;
}
}
}
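Once this is in place, a quick way to verify the chain end to end (a sketch, assuming systemd and the Kestrel app listening on 127.0.0.1:5000 as configured above):
sudo nginx -t && sudo systemctl reload nginx   # validate the config, then reload
curl -I http://yourdomain.com                  # should answer with a 301 to https
curl -kI https://yourdomain.com                # -k skips certificate verification for self-signed certs
The first curl should return the redirect from the port-80 block, and the second should be answered by the app through the proxy.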
I have the following conf
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
listen 8443 ssl;
server_name unifi.bob.net;
ssl on;
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
ssl_certificate /var/lib/docker/volumes/letsencrypt/_data/live/unifi.bob.net/fullchain.pem;
ssl_certificate_key /var/lib/docker/volumes/letsencrypt/_data/live/unifi.bob.net/privkey.pem;
location /wss/ {
proxy_pass https://192.168.1.3:8443;
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_read_timeout 86400;
}
location / {
proxy_pass https://192.168.1.3:8443/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
server {
listen 8443 ssl;
server_name nas.bob.net;
ssl on;
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
ssl_certificate /var/lib/docker/volumes/letsencrypt/_data/live/nas.bob.net/fullchain.pem;
ssl_certificate_key /var/lib/docker/volumes/letsencrypt/_data/live/nas.bob.net/privkey.pem;
location / {
proxy_pass http://192.168.1.254:8080/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
server {
listen 880;
server_name unifi.bob.net;
return 301 https://unifi.bob.net$request_uri;
}
server {
listen 880;
server_name nas.bob.net;
return 301 https://nas.bob.net$request_uri;
}
}
This all works fine: if I hit http://nas.bob.net I get redirected to https://nas.bob.net and on to the internal resource, and the same works for unifi.bob.net.
However, if I try my external IP or an A record that isn't listed above, I get sent to the unifi resource instead.
Should it not just do nothing, or am I missing something from the config?
Thanks
Found the answer: I had no default_server set in any config. Now that I've added one, it's all working as expected.
Thanks
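For reference, a minimal sketch of such a catch-all block (which certificate to reuse on the default listener is an assumption here, since an ssl listener still needs one; return 444 makes nginx drop the connection without a response):
server {
listen 8443 ssl default_server;
server_name _;
ssl_certificate /var/lib/docker/volumes/letsencrypt/_data/live/nas.bob.net/fullchain.pem;
ssl_certificate_key /var/lib/docker/volumes/letsencrypt/_data/live/nas.bob.net/privkey.pem;
return 444;
}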
I have a RoR app running in Nginx. I deploy the application to server using capistrano and puma. It works well under this nginx configuration:
upstream puma {
server unix:///home/kiui/apps/kiui/shared/tmp/sockets/kiui-puma.sock;
}
server {
listen 80;
keepalive_timeout 70;
server_name kiuiapp.com;
root /home/kiui/apps/kiui/current/public;
access_log /home/kiui/apps/kiui/current/log/nginx.access.log;
error_log /home/kiui/apps/kiui/current/log/nginx.error.log info;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
proxy_pass http://puma;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 10M;
}
But I need to run the Rails app over https to use a Facebook app in it. I created a self-signed ssl certificate following this tutorial (create autosigned ssl certificate) and changed the nginx configuration to this:
upstream puma {
server unix:///home/kiui/apps/kiui/shared/tmp/sockets/kiui-puma.sock;
}
server {
listen 443 ssl;
keepalive_timeout 70;
server_name kiuiapp.com;
ssl on;
ssl_certificate /etc/ssl/kiui.crt;
ssl_certificate_key /etc/ssl/kiui.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers AES128-SHA:AES256-SHA:RC4-SHA:DES-CBC3-SHA:RC4-MD5;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
root /home/kiui/apps/kiui/current/public;
access_log /home/kiui/apps/kiui/current/log/nginx.access.log;
error_log /home/kiui/apps/kiui/current/log/nginx.error.log info;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
proxy_pass http://puma;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 10M;
}
It does not work! The browser gives me an ERR_CONNECTION_TIMED_OUT error. Could someone help me?
SOLUTION:
upstream puma {
server unix:///home/kiui/apps/kiui/shared/tmp/sockets/kiui-puma.sock;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
keepalive_timeout 70;
server_name kiuiapp.com;
ssl on;
ssl_certificate /root/kiuiapp.com.chain.cer;
ssl_certificate_key /root/kiuiapp.com.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers AES128-SHA:AES256-SHA:RC4-SHA:DES-CBC3-SHA:RC4-MD5;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
root /home/kiui/apps/kiui/current/public;
access_log /home/kiui/apps/kiui/current/log/nginx.access.log;
error_log /home/kiui/apps/kiui/current/log/nginx.error.log info;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://puma;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 10M;
}
I think the problem was the ssl certificate chain; it was not built correctly.
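For what it's worth, a sketch of how such a chain file can be assembled and checked (the file names kiuiapp.com.cer and intermediate.cer are placeholders, not from the original setup):
cat kiuiapp.com.cer intermediate.cer > kiuiapp.com.chain.cer   # server certificate first, then the intermediates
openssl verify -untrusted intermediate.cer kiuiapp.com.cer     # check the chain locally
openssl s_client -connect kiuiapp.com:443 -servername kiuiapp.com </dev/null   # check what nginx actually serves
nginx expects the server certificate to come first in the ssl_certificate file, followed by the intermediates, and s_client will show the chain that is actually being sent to clients.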
I have a Bitnami nginx container deployed in OpenShift that serves my application. The issue I am facing is that the forwarding is not working: in the logs, there is no indication that requests are caught by either proxy_pass location block.
So, the idea is that a request to app.com/backend1/api/something should be forwarded to service1.com/backend1/api/something, and the same goes for service2.
worker_processes 1;
events {
worker_connections 1024;
}
http {
upstream service1 {
server service1.com;
}
upstream service2 {
server service2.com;
}
server {
listen 8443 ssl;
listen [::]:8443 http2 ssl;
server_name app.com;
error_log /opt/bitnami/nginx/error.log debug;
ssl on;
ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
keepalive_timeout 70;
include /opt/bitnami/nginx/conf/mime.types;
root /opt/bitnami/nginx/html;
location ~ ^/backend1/api/(.*)$ {
proxy_pass https://service1/backend1/api/$1;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~ ^/backend2/api/(.*)$ {
proxy_pass https://service2/backend2/api/$1;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location / {
try_files $uri /index.html;
}
}
}
I have also tried changing the order of the location blocks, as well as moving the root directive, but without success.
Any ideas on how to resolve this issue?
I have an nginx config for the current and the legacy application, where the only difference between the two server blocks is the DNS-specific entries and the root path. How can I put the shared parts of the config in a variable or something similar and then reference it in both server blocks?
server {
listen 0.0.0.0:443 ssl;
server_name mysite.com;
ssl_certificate /etc/ssl/server.crt;
ssl_certificate_key /etc/ssl/server.key;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers RC4:HIGH:!aNULL:!MD5;
ssl_session_cache shared:SSL:15m;
ssl_session_timeout 15m;
root /home/deployer/apps/myapp/current/public;
if ($request_method !~ ^(GET|HEAD|POST)$ ) {
return 444;
}
if ($http_user_agent ~* LWP::Simple|BBBike|wget) {
return 403;
}
if ($http_user_agent ~* (spider|AcoiRobot|msnbot|scrapbot|catall|wget) ) {
return 403;
}
location ^~ /assets/ {
gzip_static on;
gzip_vary on;
expires max;
add_header Cache-Control public;
}
location ~ \.(gif|png|jpe?g|JPE?G|GIF|PNG)$ {
valid_referers none blocked mysite.com *.mysite.com;
if ($invalid_referer) {
return 403;
}
}
location /evil/ {
valid_referers none blocked mysite.com *.mysite.com;
if ($invalid_referer) {
return 403;
}
}
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://puma;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
How can I DRY up everything below the root line?
Time has proven Alexey Ten's comment about using include to be the right way to go.
We use this in production:
File structure in /etc/nginx
nginx.conf
sites-enabled/app_config
modules/shared_serve
modules/shared_ssl_settings
In /etc/nginx/sites-enabled/app_config:
upstream puma {
server unix:/tmp/puma.socket fail_timeout=1;
}
server {
server_name example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
include modules/shared_ssl_settings;
include modules/shared_serve;
}
In /etc/nginx/modules/shared_ssl_settings:
listen 443 ssl;
listen [::]:443;
ssl on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers On;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:30m;
ssl_stapling on;
ssl_stapling_verify on;
add_header Strict-Transport-Security max-age=15768000;
In /etc/nginx/modules/shared_serve:
location ~ \.(php|aspx|asp|myadmin)$ { return 444; log_not_found off; }
root /home/deployer/apps/example_app/current/public;
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://puma;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
The only gotcha is that your deploy script has to ensure the file structure in /etc/nginx. Naturally, you can name your module directory anything else. You might even keep the includable files right in /etc/nginx without a subdirectory.
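As an illustration, a deploy step along these lines can keep that structure in place (the config/nginx/* source paths are hypothetical; the targets are the paths used above):
mkdir -p /etc/nginx/modules                                              # make sure the include directory exists
cp config/nginx/shared_ssl_settings config/nginx/shared_serve /etc/nginx/modules/
cp config/nginx/app_config /etc/nginx/sites-enabled/app_config
nginx -t && systemctl reload nginx                                       # validate before reloading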
You could use a map to define which app root to use based on $host:
map $host $app_root {
default /home/deployer/apps/myapp/current/public;
legacy.mysite.lv /home/deployer/apps/myapp/legacy/public;
}
Add your legacy hostname to the server_name directive (using the same name as in the map), then use the variable in your root directive:
root $app_root;
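Putting it together, a minimal sketch (legacy.mysite.lv is the example hostname from the map above; everything else mirrors the existing server block):
map $host $app_root {
default /home/deployer/apps/myapp/current/public;
legacy.mysite.lv /home/deployer/apps/myapp/legacy/public;
}
server {
listen 0.0.0.0:443 ssl;
server_name mysite.com legacy.mysite.lv;
root $app_root;
# ...shared ssl_*, location, and proxy settings exactly as in the original block...
}
Note that the map block lives at the http level, outside any server block.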