Golang + nginx + https

I have a Go server listening on both HTTP and HTTPS, and nginx configured to handle incoming HTTP and HTTPS requests. The certificates are in order.
Queried directly, each listener works perfectly over HTTPS. However, when nginx proxies the HTTPS traffic, I get no response and the Go server logs:
http: TLS handshake error from 127.0.0.1:54037: tls: first record does not look like a TLS handshake
What could be the problem?
Go server:
package main

import (
    "log"
    "net/http"
)

func HelloSSLServer(w http.ResponseWriter, req *http.Request) {
    w.Header().Set("Content-Type", "text/plain")
    w.Write([]byte("This is an example server.\n"))
    // fmt.Fprintf(w, "This is an example server.\n")
    // io.WriteString(w, "This is an example server.\n")
}

func main() {
    http.HandleFunc("/", HelloSSLServer)
    go http.ListenAndServe("192.168.1.2:80", nil)
    err := http.ListenAndServeTLS("localhost:9007", "/etc/letsencrypt/live/somedomain/fullchain.pem", "/etc/letsencrypt/live/somedomain/privkey.pem", nil)
    if err != nil {
        log.Fatal("ListenAndServe: ", err)
    }
}
Nginx config:
server {
    listen 192.168.1.2:80;
    server_name somedomain;
    rewrite ^ https://$host$request_uri? permanent;
}

server {
    listen 192.168.1.2:443 ssl;
    server_name somedomain;

    access_log /var/log/nginx/dom_access.log;
    error_log /var/log/nginx/dom_error.log;

    ssl_certificate /stuff/ssl/domain.cert;
    ssl_certificate_key /stuff/ssl/private.cert;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;

    location / {
        proxy_pass http://localhost:9007;
        # proxy_redirect http://localhost:1500 http://site1;
        proxy_cookie_domain localhost somedomain;
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Client-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}

Use https:// with proxy_pass, since the Go backend on port 9007 is a TLS listener. The handshake error appears because nginx was sending it plain HTTP:
location /
{
    proxy_pass https://localhost:9007;
    ...
}
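Alternatively, since nginx already terminates TLS on port 443, the Go backend can drop its own TLS listener and serve plain HTTP on the loopback interface, leaving proxy_pass http://localhost:9007 as it was. A minimal sketch of that variant (an illustration, not the asker's exact code):

package main

import (
    "log"
    "net/http"
)

func HelloServer(w http.ResponseWriter, req *http.Request) {
    w.Header().Set("Content-Type", "text/plain")
    w.Write([]byte("This is an example server.\n"))
}

func main() {
    http.HandleFunc("/", HelloServer)
    // Plain HTTP, bound to loopback only; nginx terminates TLS on 443
    // and proxies the decrypted requests to this port.
    log.Fatal(http.ListenAndServe("127.0.0.1:9007", nil))
}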

The nginx config file should look like this:
server {
    listen 443 ssl http2;
    listen 80;
    server_name www.mojotv.cn;

    ssl_certificate /home/go/src/my_go_web/ssl/**.pem;
    ssl_certificate_key /home/go/src/my_go_web/ssl/**.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers AESGCM:ALL:!DH:!EXPORT:!RC4:+HIGH:!MEDIUM:!LOW:!aNULL:!eNULL;
    ssl_prefer_server_ciphers on;

    location ~ ^/(css|js|fonts|img)/ {
        access_log off;
        expires 1d;
        root "/home/go/src/my_go_web/static";
        try_files $uri @backend;
    }

    location / {
        try_files /_not_exists_ @backend;
    }

    location @backend {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:********;
    }

    access_log /home/wwwroot/www.mojotv.cn.log;  # nginx log path
}
This serves the Go web app behind nginx, with HTTP/2 and TLS handled by nginx.
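On the Go side, the application should read the headers this config sets in order to recover the original client address. A rough sketch (the handler and the listen port are illustrative; the real port is the one elided in proxy_pass above):

package main

import (
    "fmt"
    "log"
    "net/http"
)

func index(w http.ResponseWriter, r *http.Request) {
    // Behind the proxy, r.RemoteAddr is nginx's own address; the original
    // client IP arrives in the X-Forwarded-For header set above.
    fmt.Fprintf(w, "Host: %s, client: %s\n", r.Host, r.Header.Get("X-Forwarded-For"))
}

func main() {
    http.HandleFunc("/", index)
    // Placeholder port: it must match the port used in proxy_pass.
    log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}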

I get a similar error message to the OP's with a more basic Go server that has no extra config:
tls: first record does not look like a TLS handshake
My temporary fix was simply to make sure the test URL includes both "https://" and the port number:
didn't work - ipaddress
didn't work - https://ipaddress
worked - https://ipaddress:8081
It'll do for testing until a more advanced setup is in place. Just posting this to help others with troubleshooting.
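To reproduce that check from Go rather than a browser, a test-only client that spells out both the scheme and the port might look like this (InsecureSkipVerify is used only because the certificate will not match a bare IP address; never do that outside testing):

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    // Test-only client: skip certificate verification, since the cert
    // is issued for a domain name rather than the raw IP address.
    client := &http.Client{
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        },
    }

    // Scheme and port are both explicit in the URL.
    resp, err := client.Get("https://192.168.1.2:8081/")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.Status)
    fmt.Println(string(body))
}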

The redirect from http to https does not work in nginx

I am trying to redirect all the http traffic to https and my nginx conf looks like this:
upstream upstreamServer {
    server upstream_serv:80;
}

server {
    listen 80;
    server_name ~^(([a-zA-Z0-9]+)|)test\.xy\.abc\.io$;
    access_log /var/log/nginx/access.log backend;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name ~^(([a-zA-Z0-9]+)|)test\.xy\.abc\.io$;

    ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_certificate /path/to/cert_chain.pem;
    ssl_certificate_key /path/to/cert_key.pem;
    ssl_trusted_certificate /path/to/cert_chain.pem;

    access_log /var/log/nginx/access.log backend;

    # Redirect all traffic in /.well-known/ to lets encrypt
    location /.well-known/acme-challenge/ {
        root /var/tmp;
        index index.html index.htm;
    }

    location / {
        proxy_pass http://upstreamServer;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_buffering off;

        if ($uri ~* ".(js|png|jpg|jpeg|svg|gif|avi|mp3|mp4)$" ){
            expires 1d;
            add_header Cache-Control public;
        }

        proxy_pass_request_headers on;
    }
}
But for some reason it doesn't work. I read about how nginx chooses the server block and location block, and the setup looks correct to me, but the site still loads over http when I hit http://test.xy.abc.io instead of redirecting me to https.
I also tried using only
return 301 https://$host$request_uri;
instead of
location / {
    return 301 https://$host$request_uri;
}
but it doesn't work either.
Did I get it right that your page is still loading the unencrypted http version? Did you reload the service to pick up the changed config file? (Sorry to ask such a basic question.)
nginx -t && nginx -s reload
I personally use something like this in all the nginx instances I maintain:
server {
    listen 80 default_server;
    # no server_name means all

    # For let's encrypt domains: .well-known/acme-challenge
    location '/.well-known/acme-challenge' {
        default_type "text/plain";
        root /var/www/certbot;
    }

    # Redirect http -> https.
    location / {
        return 301 https://$host$request_uri$is_args$args;
    }
}
The problem was that there is a GCP load balancer in front of my nginx proxy, which was forwarding all requests to nginx over https no matter whether the original request was http or https. After searching the internet I found that the load balancer cannot force https on clients, so this is what I had to do in my nginx location block:
if ($http_x_forwarded_proto = http) {
    return 301 https://$host$request_uri;
}
and the complete solution looks like this:
server {
    listen 80;
    listen 443 ssl;
    server_name ~^(([a-zA-Z0-9]+)|)test\.xy\.abc\.io$;

    ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_certificate /path/to/cert_chain.pem;
    ssl_certificate_key /path/to/cert_key.pem;
    ssl_trusted_certificate /path/to/cert_chain.pem;

    access_log /var/log/nginx/access.log backend;

    # Redirect all traffic in /.well-known/ to lets encrypt
    location /.well-known/acme-challenge/ {
        root /var/tmp;
        index index.html index.htm;
    }

    location / {
        if ($http_x_forwarded_proto = http) {
            return 301 https://$host$request_uri;
        }

        proxy_pass http://upstreamServer;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_buffering off;

        if ($uri ~* ".(js|png|jpg|jpeg|svg|gif|avi|mp3|mp4)$" ){
            expires 1d;
            add_header Cache-Control public;
        }

        proxy_pass_request_headers on;
    }
}
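If it is more convenient to keep this logic in the application, the same X-Forwarded-Proto check can also be done in a Go backend instead of the nginx if block. A rough sketch under that assumption (the handler name and the port are illustrative):

package main

import (
    "log"
    "net/http"
)

// forceHTTPS redirects any request that the load balancer / proxy marked
// as plain HTTP via the X-Forwarded-Proto header.
func forceHTTPS(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if r.Header.Get("X-Forwarded-Proto") == "http" {
            http.Redirect(w, r, "https://"+r.Host+r.URL.RequestURI(), http.StatusMovedPermanently)
            return
        }
        next.ServeHTTP(w, r)
    })
}

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("hello over https\n"))
    })
    log.Fatal(http.ListenAndServe("127.0.0.1:8080", forceHTTPS(mux)))
}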

Can I use WordPress blog as a subfolder of my main domain with a https NGINX?

I am developing a platform on a node/meteorjs stack and I want to add a WordPress blog for our website as well.
https://www.XXXXXX.com --> go to meteor app
https://www.XXXXXX.com/blog --> go to blog
I've got an NGINX front end with an https certificate.
My NGINX config is:
server {
    listen 80;
    server_name XXXX.ovh;
    return 301 https://XXXX.ovh$request_uri;
}

upstream meteorapp {
    server 127.0.0.1:3000;
}

upstream blog {
    server 52.16.157.100;
}

server {
    listen 80;
    server_name www.XXXX.ovh;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name XXXX.ovh;
    return 301 https://www.XXXX.ovh$request_uri;
}

server {
    listen 443 ssl default_server;
    root /var/www/html;
    server_name www.XXXX.ovh;

    ssl_certificate /etc/letsencrypt/live/XXXX.ovh/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/XXXX.ovh/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;

    add_header Strict-Transport-Security max-age=15768000;

    location /blog {
        proxy_pass http://blog;
        proxy_set_header Host $host;
    }

    location /wp-content {
        proxy_pass http://blog;
        proxy_set_header Host $host;
    }

    location /wp-admin {
        proxy_pass http://blog;
        proxy_set_header Host $host;
    }

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        # try_files $uri $uri/ =404;
        proxy_pass http://meteorapp;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forward-Proto http;
        proxy_set_header X-Nginx-Proxy true;
        proxy_redirect off;
    }

    location ~ /.well-known {
        allow all;
    }
}
My blog is hosted on another server and my meteor app is in a docker container.
With this configuration, the css and images of my blog don't work (it tries to access the http resources...), so I get errors like:
Mixed Content: The page at 'https://www.cdispo.ovh/blog' was loaded over HTTPS, but requested an insecure image 'http://www.XXXX.ovh/wp-content/themes/twentyseventeen/assets/images/header.jpg'. This content should also be served over HTTPS.
How can I fix this?
You should instead use a subdomain, e.g. "blog.myapp.com". Otherwise, if the Meteor app controls the root, i.e. "myapp.com", you will need to redirect all requests coming in to "myapp.com/blog" in your router.

How do I fix this Nginx configuration to properly proxy WebSocket requests instead of returning a 301?

Nginx noob. Trying to configure Nginx to act as an SSL proxy server in front of another web server running at http://localhost:8082. That is, I want all requests to http://localhost to be redirected to https://localhost. That part is working just fine.
Problem is, the app on port 8082 also uses WebSocket connections at ws://localhost:8082/public-api/repossession-requests-socket. I'm trying to redirect any connections to ws://localhost/public-api/repossession-requests-socket to wss://localhost/public-api/repossession-requests-socket and have Nginx proxy those WebSocket requests to ws://localhost:8082/public-api/repossession-requests-socket.
Instead, the WebSocket connections are failing because Nginx is returning a 301 for both ws://localhost/public-api/repossession-requests-socket & wss://localhost/public-api/repossession-requests-socket. My configuration is below; I'm using the Docker image nginx:alpine in my tests ($PWD is mapped to /app).
How do I need to change this so that I no longer see 301s?
events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443;
        server_name localhost;

        ssl_certificate /app/docker/public.pem;
        ssl_certificate_key /app/docker/private.pem;
        ssl on;
        ssl_session_cache builtin:1000 shared:SSL:10m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
        ssl_prefer_server_ciphers on;

        access_log /app/access-443.log;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://localhost:8082;
            proxy_read_timeout 90;
            proxy_redirect http://localhost:8082 https://localhost;
        }

        location /public-api/repossession-requests-socket/ {
            proxy_pass http://localhost:8082;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
Found the problem: the trailing slash at the end of the location stanza.
location /public-api/repossession-requests-socket/ should have been location /public-api/repossession-requests-socket. With the trailing slash, a request for the exact path /public-api/repossession-requests-socket does not match that location and falls through to location / instead.

Nginx reverse proxy, only allow connection from hostname not ip

Is it possible to allow only users typing in xxxxxx.com (fictive), so they have to do a DNS lookup and connect by name, and to block users who use my public IP to connect?
Configuration:
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name xxxxxxx.com;

    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/jenkins.access.log;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass http://10.0.11.32:80;
        proxy_read_timeout 360;
        proxy_redirect http://10.0.11.32:80 https://xxxxxxx.com;
    }
}
The $http_host variable is set to the value of the Host request header. nginx uses that value to select a server block. If a server block is not found, the default server is used, which is either marked as default_server or is the first server block encountered. See this documentation.
To force nginx to only accept named requests, use a catch all server block to reject anything else, for example:
server {
    listen 80 default_server;
    return 403;
}

server {
    listen 80;
    server_name www.example.com;
    ...
}
With SSL, it depends on whether or not you have SNI enabled. If you are not using SNI, then all SSL requests pass through the same server block, in which case you will need to use an if directive to test the value of $http_host. See this and this for details.
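If the requests are ultimately handled by a Go backend, the same policy can also be enforced at the application layer as a fallback. A hypothetical middleware sketch (the expected host name and port are just examples):

package main

import (
    "log"
    "net/http"
    "strings"
)

// allowHost rejects any request whose Host header is not the expected
// domain, e.g. a client connecting via the bare public IP address.
func allowHost(expected string, next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        host := r.Host
        if i := strings.IndexByte(host, ':'); i != -1 {
            host = host[:i] // strip an explicit port, if any
        }
        if !strings.EqualFold(host, expected) {
            http.Error(w, "Forbidden", http.StatusForbidden)
            return
        }
        next.ServeHTTP(w, r)
    })
}

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("ok\n"))
    })
    log.Fatal(http.ListenAndServe("127.0.0.1:8080", allowHost("www.example.com", mux)))
}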
