TLS pass-through via nginx giving SSL certificate problem - nginx

I have a service that's currently fronted by AWS API Gateway. API Gateway does not offer static ("elastic", in AWS parlance) IPs.
A client requires the ability to hit the API while using an IP allowlist, so I've been attempting to configure a (dockerized) nginx proxy on an instance with an elastic IP. I'm able to get a response from API Gateway via the proxy, but it's complaining about SSL.
From a browser, addressing the instance's IP: "ERR_SSL_VERSION_OR_CIPHER_MISMATCH"
From a shell on the instance itself:
# curl -v https://localhost:443
* Trying 127.0.0.1:443...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
My nginx.conf is very basic:
events {}

stream {
    upstream test_api {
        server my.domain.placeholder.com:443;
    }

    server {
        listen 443;
        proxy_pass test_api;
    }
}
I've been using the docs at https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/# but have not had any success.
Can anyone offer any insight on using nginx streams for this use case? I'd prefer not to terminate SSL on nginx itself and just use it as a TLS pass-through if possible.

Update/resolution:
To test this problem, I configured haproxy as a passthrough proxy instead of nginx, and had the exact same result. Apache/httpd too! So, I started to suspect my testing methods and found them to be the source of the failure. See https://stackoverflow.com/a/46355026/3620843
TL;DR: my curl invocation was failing because I was asking the server for "localhost" (the name sent in SNI and the Host header), which of course does not match the backend's certificate. Based on that, it stands to reason that requesting the frontend server's IP in a browser would fail the same way.
What's needed is curl's --resolve option.
This works:
curl -v --resolve example.com:443:127.0.0.1 https://example.com
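Similarly, for testing from a browser, one option is to point the real hostname at the proxy's elastic IP in the hosts file so the browser sends that hostname in SNI and the Host header; the hostname and IP below are placeholders:

# /etc/hosts on the test machine -- placeholder hostname and IP
203.0.113.10    my.domain.placeholder.com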

Related

Traefik not finding TLS secret

I have a self-signed site.dev certificate with CN=site.dev and a v3 SAN DNS entry including site.dev, stored as a secret in k8s in the default namespace, with type: kubernetes.io/tls and keys tls.crt and tls.key. Since it's self-signed it does not contain intermediate certs (it can't).
Traefik is running with args:
- --configfile=/config/traefik.toml
- --defaultentrypoints=https,http
- --entrypoints=Name:https Address::443 TLS
- --entrypoints=Name:http Address::80
And when the ingress starts, Traefik logs:
{"level":"error","msg":"Error configuring TLS for ingress default/site-dev: secret default/site-dev-tls does not exist","time":"2019-04-20T21:09:02Z"}
The ingress has
  tls:
  - secretName: site-dev-tls
And this is the output of curl:
curl https://site.dev:443/ -v
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to site.dev (127.0.0.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
* stopped the pause stream!
* Closing connection 0
curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
$ curl http://site.dev:443/
404 page not found
$ kubectl auth can-i get secrets/site-dev-tls --namespace default --as system:serviceaccount:kube-system:traefik-ingress-controller
yes
I'm not sure what I'm doing wrong... Any help appreciated.
A 404 means that there is no service or deployment behind the ingress.
The ingress works perfectly fine, according to the curl output.
Just deploy something and point the ingress to it. That should solve the issue.
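For example, something along these lines (the deployment name and image are only illustrative) gives the ingress a backend to point at:

kubectl create deployment hello --image=nginxdemos/hello    # hypothetical test backend
kubectl expose deployment hello --port=80
# then reference the "hello" service from the backend of the default/site-dev ingress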

Verify if nginx is working correctly with Proxy Protocol locally

Environment
I have set up Proxy Protocol support on an AWS classic load balancer as shown here which redirects traffic to backend nginx (configured with ModSecurity) instances.
Everything works great and I can hit my websites from the open internet.
Now, since my nginx configuration is done in AWS User Data, I want to do some checks before the instance starts serving traffic which is achievable through AWS Lifecycle hooks.
Problem
Before enabling Proxy Protocol, I used to check whether my nginx instance was healthy and ModSecurity was working by checking for a 403 response from this command:
$ curl -ks "https://localhost/foo?username=1'%20or%20'1'%20=%20'"
After enabling Proxy Protocol, I can't do this anymore, as the command fails with the error below, which is expected as per this link.
# curl -k https://localhost -v
* About to connect() to localhost port 443 (#0)
* Trying ::1...
* Connected to localhost (::1) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* NSS error -5938 (PR_END_OF_FILE_ERROR)
* Encountered end of file
* Closing connection 0
curl: (35) Encountered end of file
# cat /var/logs/nginx/error.log
2017/10/26 07:53:08 [error] 45#45: *5348 broken header: "���4"�U�8ۭ򫂱�u��%d�z��mRN�[e��<�,�
�+̩� �0��/̨��98k�̪32g�5=�/<
" while reading PROXY protocol, client: 172.17.0.1, server: 0.0.0.0:443
What other options do I have to programmatically check nginx apart from curl? Maybe something in some other language?
You can use the --haproxy-protocol curl option, which adds the extra proxy protocol info to the request.
curl --haproxy-protocol localhost
So:
curl -ks --haproxy-protocol "https://localhost/foo?username=1'%20or%20'1'%20=%20'"
Proxy Protocol appends a plain text line before anything else is streamed:
PROXY TCP4 127.0.0.1 127.0.0.1 0 8080
The above is an example, but it is the very first thing sent. Now, if I have an nginx listening on both SSL and plain HTTP with proxy_protocol, then it expects to see this line first, before anything else.
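A minimal sketch of the kind of configuration being described, using the ports from the examples below (81 for plain HTTP, 8080 for TLS); everything else, including the certificate paths, is assumed:

# inside the http {} context
server {
    # plain HTTP listener that expects the PROXY protocol line before the request
    listen 81 proxy_protocol;
    location / { return 200 "ok\n"; }
}

server {
    # TLS listener that also expects the PROXY protocol line before the handshake
    listen 8080 ssl proxy_protocol;
    ssl_certificate     /etc/nginx/certs/example.crt;  # placeholder
    ssl_certificate_key /etc/nginx/certs/example.key;  # placeholder
    location / { return 200 "ok\n"; }
}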
So if I do
$ curl localhost:81
curl: (52) Empty reply from server
And in nginx logs
web_1 | 2017/10/27 06:35:15 [error] 5#5: *2 broken header: "GET / HTTP/1.1
If I do
$ printf "PROXY TCP4 127.0.0.1 127.0.0.1 0 80\r\nGET /test/abc\r\n\r\n" | nc localhost 81
You can reach API /test/abc and args_given = ,
It works, because I am able to send the proxy protocol line first, so nginx accepts the request.
Now, in the case of SSL, if I use the below
printf "PROXY TCP4 127.0.0.1 127.0.0.1 0 8080\r\nGET /test/abc\r\n\r\n" | openssl s_client -connect localhost:8080
it still errors out:
web_1 | 2017/10/27 06:37:27 [error] 5#5: *1 broken header: ",(�� #_5���_'���/��ߗ
That is because the client tries to do the TLS handshake first, instead of sending the proxy protocol line first and then the handshake.
So your possible solutions are:
1. Terminate SSL on the LB and then handle plain HTTP on nginx with proxy_protocol, and use the nc command option I posted.
2. Add a listen 127.0.0.1:<randomlargeport> and run your test against that. This is still safe, as you are listening on localhost only (see the sketch below).
3. Add another SSL port and use listen 127.0.0.1:443 ssl and listen <private_ipv4>:443 ssl proxy_protocol.
All solutions are in priority order as per my choice; you can make your own choice.
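A minimal sketch of option 2, assuming the existing server block is the one using ssl and proxy_protocol; the extra port number and certificate paths are placeholders:

server {
    listen <private_ipv4>:443 ssl proxy_protocol;   # existing public listener behind the LB
    listen 127.0.0.1:8081;                          # extra localhost-only plain listener, no proxy_protocol
    ssl_certificate     /etc/nginx/certs/example.crt;  # placeholder
    ssl_certificate_key /etc/nginx/certs/example.key;  # placeholder
}

The health check can then hit http://localhost:8081/ with plain curl, no PROXY header needed.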
Thanks Tarun for the detailed explanation. I discussed it within the team and we ended up creating another nginx virtual host on port 80 and using that to check ModSecurity, as below.
curl "http://localhost/foo?username=1'%20or%20'1'%20=%20'"
Unfortunately the bash version didn't work in my case, so I wrote this python3 code:
#!/usr/bin/env python3
import socket
import sys


def check_status(host, port):
    '''Check app status, return True if ok'''
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(3)
        s.connect((host, port))
        s.sendall(b'GET /status HTTP/1.1\r\nHost: api.example.com\r\nUser-Agent: curl7.0\r\nAccept: */*\r\n\r\n')
        data = s.recv(1024)
        if data.decode().endswith('OK'):
            return True
        else:
            return False


try:
    status = check_status('127.0.0.1', 80)
except:
    status = False

if status:
    sys.exit(0)
else:
    sys.exit(1)

Confused about nginx proxy and sni

Server info
I have a server with nginx 1.12.1 (SNI enabled) and resin 3.1.6 on JDK 1.6.0.
The nginx here uses port 80 to proxy to resin's 8080, and port 443 to proxy to resin's 8443.
First issue
When nginx is running, I can get access over port 80. But when I access port 443, I get a 502 error, and in the error log I see dh key too small.
Second issue
Then I compiled nginx with openssl-0.9.8f and SNI disabled. This time everything worked. But when I compile nginx with openssl-0.9.8f and enable SNI, I get a 502 again, and the error log shows SSL: error:140773F2:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert unexpected message
What I'm confused about
So, does the first issue about dh key too small come down to weak Diffie-Hellman? Is the reason that my JDK is too old?
And is the second issue because nginx sends SNI information to resin, but resin doesn't support SNI, so the SSL handshake can't complete?
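In case it helps: when nginx proxies over HTTPS at the HTTP layer (proxy_pass https://...), whether it sends SNI to the upstream is controlled by proxy_ssl_server_name, which defaults to off. A minimal sketch, with resin's 8443 taken from the question and everything else (paths, listen options) assumed:

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/example.crt;  # placeholder
    ssl_certificate_key /etc/nginx/certs/example.key;  # placeholder

    location / {
        proxy_pass https://127.0.0.1:8443;
        # off (the default) means nginx does not send SNI to the backend;
        # switch to "on" to forward the server name to resin
        proxy_ssl_server_name off;
    }
}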

curl (60) unable to get local issuer certificate

I'm trying to connect to the OneDrive API (https://api.onedrive.com) using curl, but it throws me
curl: (60) SSL certificate problem: unable to get local issuer certificate
The page opens in Firefox without a certificate warning. I checked the certificate info, put all three certificates (Baltimore CyberTrust Root, Microsoft IT SSL SHA2 and storage.live.com) into /usr/share/ca-certificates and installed them using
dpkg-reconfigure ca-certificates
However, I keep getting this error. Any ideas?
me#host:~$ curl -vi https://api.onedrive.com
* Rebuilt URL to: https://api.onedrive.com/
* Hostname was NOT found in DNS cache
* Trying 134.170.109.152...
* Connected to api.onedrive.com (134.170.109.152) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs/
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS alert, Server hello (2):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
me#host:~$ curl -V
curl 7.38.0 (x86_64-pc-linux-gnu) libcurl/7.38.0 OpenSSL/1.0.1k zlib/1.2.8 libidn/1.29 libssh2/1.4.3 librtmp/2.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS IDN IPv6 Largefile GSS-API SPNEGO NTLM NTLM_WB SSL libz TLS-SRP
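One way to see which certificate chain the server actually presents (and whether the intermediate is included) is openssl s_client; the hostname is taken from the question, the rest is standard usage:

openssl s_client -connect api.onedrive.com:443 -servername api.onedrive.com -showcerts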

RabbitMQ connection through Nginx

I am trying to set up rabbitmq so that it can be accessed externally (not just from localhost) through nginx.
nginx-rabbitmq.conf:
server {
    listen 5672;
    server_name x.x.x.x;

    location / {
        proxy_pass http://localhost:55672/;
    }
}
rabbitmq.conf:
[
  {rabbit,
    [
      {tcp_listeners, [{"127.0.0.1", 55672}]}
    ]
  }
]
By default guest user can only interact from localhost, so we need to create another user with required permissions, like so:
sudo rabbitmqctl add_user my_user my_password
sudo rabbitmqctl set_permissions my_user ".*" ".*" ".*"
However, when I attempt a connection to rabbitmq through pika I get ConnectionClosed exception
import pika

credentials = pika.credentials.PlainCredentials('my_username', 'my_password')
pika.BlockingConnection(
    pika.ConnectionParameters(host=ip_address, port=55672, credentials=credentials)
)
--[raises ConnectionClosed exception]--
If I use the same parameters but change the host to localhost and the port to 55672 (i.e. hitting rabbitmq directly rather than going through nginx), then I connect OK:
pika.ConnectionParameters(host='localhost', port=55672, credentials=credentials)
I have opened port 5672 on the GCE web console, and communication through nginx is happening: nginx access.log file shows
[30/Apr/2014:22:59:41 +0000] "AMQP\x00\x00\x09\x01" 400 172 "-" "-" "-"
Which shows a 400 status code response (bad request).
So by the looks of it, the request fails when going through nginx, but works when we request rabbitmq directly.
Has anyone else had similar problems/got rabbitmq working for external users through nginx? Is there a rabbitmq log file where I can see each request and help further troubleshooting?
Since nginx 1.9 there is a stream module for TCP and UDP (not compiled in by default).
I configured my nginx (1.13.3) with an SSL stream:
stream {
    upstream rabbitmq_backend {
        server rabbitmq.server:5672;
    }

    server {
        listen 5671 ssl;

        ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
        ssl_ciphers RC4:HIGH:!aNULL:!MD5;
        ssl_handshake_timeout 30s;
        ssl_certificate /path/to.crt;
        ssl_certificate_key /path/to.key;

        proxy_connect_timeout 1s;
        proxy_pass rabbitmq_backend;
    }
}
https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-tcp/
You have configured nginx as an HTTP reverse proxy; however, rabbitmq is configured to use the AMQP protocol (see the description of tcp_listeners at https://www.rabbitmq.com/configure.html).
In order for nginx to do anything meaningful you will need to reconfigure rabbitmq to use HTTP - for example http://www.rabbitmq.com/web-stomp.html.
Of course, this may have a ripple effect because any clients that are accessing rabbitmq via AMQP must be reconfigured/redesigned to use HTTP.
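If you go the Web STOMP route, enabling the plugin is a one-liner, and nginx can then reverse-proxy its HTTP/WebSocket endpoint (port 15674 and path /ws by default); the nginx block is only a sketch:

rabbitmq-plugins enable rabbitmq_web_stomp

# nginx (HTTP) side, sketched:
location /ws {
    proxy_pass http://localhost:15674/ws;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}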
You can try proxying at the TCP level by installing a tcp-proxy module for nginx so it can work with AMQP.
https://github.com/yaoweibin/nginx_tcp_proxy_module
Give it a go.
Nginx was originally only an HTTP server. I also suggest looking into the tcp proxy module referred to above, but if you would like a proven load balancer that is a general TCP reverse proxy (not just HTTP, it can handle any protocol), you might consider using HAProxy.
Since AMQP operates at the TCP level, you need to configure nginx for TCP/UDP connections:
https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer
I might be late to the party, but I am quite sure this article will help a lot of people.
In the article I explain how to install a Let's Encrypt certificate for the RabbitMQ management GUI, with NGINX as a reverse proxy on port 15672, which runs over the HTTP protocol.
I have also used the same SSL certificates to power the RabbitMQ server that runs on the AMQP protocol.
Kindly go through the following article for a detailed description:
https://stackcoder.in/posts/install-letsencrypt-ssl-certificate-for-rabbitmq-server-and-rabbitmq-management-tool
NOTE: Don't configure the RabbitMQ server running on port 5672 behind a plain HTTP reverse proxy. If you do proxy it, then kindly use NGINX streams. But I highly recommend sticking with adding the certificate paths to the rabbitmq.conf file, as RabbitMQ works over TCP/UDP.
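For reference, a minimal sketch of what those certificate paths in a new-style rabbitmq.conf might look like; the file locations are placeholders:

# rabbitmq.conf -- placeholder paths
listeners.ssl.default  = 5671
ssl_options.cacertfile = /etc/rabbitmq/certs/fullchain.pem
ssl_options.certfile   = /etc/rabbitmq/certs/cert.pem
ssl_options.keyfile    = /etc/rabbitmq/certs/privkey.pem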
