Need Certificate chain (on the incoming interface) from Nginx - nginx

I am using a setup wherein a certificate chain (Root CA cert -> Intermediate CA cert -> Client cert) is sent to Nginx. I need to configure Nginx so that it forwards the entire certificate chain to the middleware. Right now it is sending only the leaf certificate, i.e. the client certificate.
I found the following options in Nginx's documentation (http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_client_certificate):
1- $ssl_client_escaped_cert
2- $ssl_client_cert
None of the above returns the full certificate chain.
Is anyone aware if such an option is available?

This seems to be impossible by design - see https://serverfault.com/questions/576965/nginx-proxy-pass-with-a-backend-requesting-client-certificates
The usage of $ssl_client_escaped_cert (as explained in https://clairekeum.wordpress.com/2018/12/05/passing-client-cert-through-nginx-to-the-backend/) seems to be your only option.
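For reference, a minimal sketch of that approach (the X-Client-Cert header name and the middleware upstream are made up for illustration). Note that $ssl_client_escaped_cert carries the URL-encoded PEM of the leaf certificate only, so the middleware would still have to reconstruct or look up the chain itself:
location / {
    # URL-encoded PEM of the client (leaf) certificate; available since nginx 1.13.5
    proxy_set_header X-Client-Cert $ssl_client_escaped_cert;
    proxy_pass http://middleware;
}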

This may not be a complete answer, but I thought I'd post some resources that may give you a couple of ideas.
If you want the client cert details downstream, then one option is to avoid terminating Mutual TLS in nginx by using the stream module. Here is an example:
Mutual TLS Secured API Blog Post
NGINX Config
In this setup there are two Mutual TLS connections being routed via nginx:
To authenticate with an Authorization Server - where Mutual TLS is not handled by nginx
To call an API with a certificate bound access token - where nginx terminates TLS
Note that this uses a Lua plugin and the ssl_client_raw_cert property to do the extra work of calculating a SHA256 thumbprint, which NGINX itself does not support.
Generally, though, it makes sense to externalise Mutual TLS plumbing from application-level components, as in the above example. E.g. you can forward $ssl_client_escaped_cert to your middleware, but perhaps nginx should do the more detailed work of checking issuers.
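A rough sketch of the passthrough idea, assuming a hypothetical backend.internal upstream: with the stream module, nginx relays raw TCP/TLS bytes, so the backend terminates Mutual TLS itself and receives whatever chain the client presents.
stream {
    server {
        listen 443;
        # no TLS termination here: the encrypted bytes are relayed untouched,
        # so the backend performs the mutual TLS handshake itself
        proxy_pass backend.internal:8443;
    }
}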

Related

Nginx config issue - couldn't connect to S3-compatible storage from NodeJS test program

This is my first question on StackOverflow.
I have a requirement to provision a load balancer and a proxy layer in a DMZ for clients to reach a backend S3-compatible storage and read buckets. I am using multiple instances of Nginx for this: one instance as the LB (node 1) and two instances (nodes 2, 3) as reverse proxies. The LB (node 1) listens on HTTPS 443, has a CA-signed cert, and is visible on the internet. Nodes 2 and 3 listen on HTTP 80 and forward requests to the backend S3-compatible storage, which listens on HTTPS with self-signed certs.
When I use a test NodeJS program from within the DMZ to connect directly to the S3-compatible storage, I can read and list buckets using the AWS client with an accessKeyId and secretAccessKey.
But when I use the same test NodeJS program from the internet, with the same accessKeyId and secretAccessKey, connecting to node 1 (to eventually reach the backend S3-compatible storage), I get the following error:
{"message":"The request signature we calculated does not match the signature you provided.
Check your AWS Secret Access Key and signing method.
For more information, see REST Authentication and SOAP Authentication for details.",
"code":"SignatureDoesNotMatch",
"region":null,
"time":"2018-12-18T12:34:28.313Z",
"requestId":"2899219037",
"statusCode":403,"retryable":false,
"retryDelay":14.04655267301651}
I tried multiple ways to understand and solve this. It looks like my Nginx config is not passing HTTP headers correctly. But I didn't explicitly configure anything to hide headers, and my understanding is that all headers pass through unless we explicitly block them.
Apart from reaching the S3-compatible backend storage, the calls do go through Nginx: I have even tested reaching another host (with a self-signed cert) instead of the S3-compatible storage, and it worked well.
Please suggest a solution, and let me know what information I should add to this question.
Resolved.
In my case, what fixed the issue was setting the Host header explicitly.
location /something {
...
proxy_set_header Host $http_host;
...
}
My understanding is that Host is used as part of signature generation/verification. By default Nginx rewrites it (replacing it with the host from proxy_pass), and setting it explicitly resolved the issue.
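For context, a slightly fuller version of that block might look like this (the upstream name is hypothetical). AWS-style signatures (SigV4) include the Host header in the string that gets signed, so it has to reach the backend exactly as the client sent it:
location / {
    # preserve the Host the client signed; by default nginx would replace it
    # with the host from proxy_pass, breaking signature verification
    proxy_set_header Host $http_host;
    proxy_pass https://s3-backend.internal;
}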

How is an HTTPS request different from an HTTP request?

I understand that HTTPS is secured and requires an SSL certificate issued by a CA to make the application secure. But what I do not understand is how, in depth, it differs from HTTP.
My question: as a user, if I make a request to an application over HTTP, or make the same request over HTTPS, what is the actual difference? The traffic is the same in both cases. Is there any traffic filtering happening if I use HTTPS?
HTTPS, as an application protocol, is just HTTP over TLS, so there are very few differences: the "s" in the URL and some consequences for proxies, that is all.
Now, regarding traffic and filtering: here there is a big difference, because using TLS adds confidentiality and integrity. Passive listeners will see nothing of the HTTP data exchanged, including headers. The only thing visible will be the hostname (taken from the https:// URL), as it is needed at the TLS level before HTTP even happens, through a mechanism called SNI (Server Name Indication). SNI is now used everywhere to allow multiple TLS services under different names to share a single IP address.
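Incidentally, SNI is what lets a server such as nginx present different certificates on a single IP: it selects the server block by the name the client sends. A minimal sketch, with placeholder names and paths:
server {
    listen 443 ssl;
    server_name a.example.com;
    ssl_certificate     /etc/nginx/a.crt;   # cert/key served when SNI = a.example.com
    ssl_certificate_key /etc/nginx/a.key;
}
server {
    listen 443 ssl;
    server_name b.example.com;
    ssl_certificate     /etc/nginx/b.crt;   # different cert, same IP and port
    ssl_certificate_key /etc/nginx/b.key;
}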

Is it possible to have client certificates with HTTP (not HTTPS)?

I have an application set up like this:
There is a server with a reverse proxy/load balancer that acts as the HTTPS termination (this is the one that has a server certificate), and several applications behind it (*).
However, some applications require authentication of the client with a certificate, and this authentication cannot happen in the reverse proxy. Will the application be able to see the user certificate, or will it be jettisoned by the HTTPS->HTTP transition?
(*) OK, so this is a Kubernetes ingress, and containers/pods.
It will be lost. I think you need to extract it in the reverse proxy (i.e. Nginx) and pass it in as an HTTP header if you really must. See for example https://serverfault.com/questions/788895/nginx-reverse-proxy-pass-through-client-certificate. Not very secure, as the cert is passed in the clear!
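A minimal sketch of that extract-and-forward approach (the header name, paths, and backend address are made up):
server {
    listen 443 ssl;
    ssl_certificate        /etc/nginx/server.crt;
    ssl_certificate_key    /etc/nginx/server.key;
    ssl_client_certificate /etc/nginx/client-ca.crt;  # CA used to verify client certs
    ssl_verify_client on;

    location / {
        # forward the URL-encoded client cert so the application can inspect it
        proxy_set_header X-SSL-Client-Cert $ssl_client_escaped_cert;
        proxy_pass http://app.internal:8080;
    }
}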
I don't know if we have that level of control over the ingress; personally, I'm using a normal Nginx server for incoming traffic instead.

Canonical handling of HTTPS request when SSL not supported

If a client requests a domain that does not have a valid CA-signed certificate, and the server does not intend to support HTTPS but does support HTTP for this domain, what is the best way to handle this in the web server? Note that the server does handle SSL (HTTPS) requests for other domains, so it is listening on 443.
An example where this would apply is dynamically created sub-domains, which make it extremely difficult to register CA-signed certificates.
I've seen people try to respond with HTTP error codes, but these seem moot, as the client (browser) will first verify the certificate and present the hard warning to the user before processing any HTTP. The client will therefore only see the error code if they "proceed" past the cert warning.
Is there a canonical way of handling this scenario?
There is no canonical way for this scenario. Clients don't automatically downgrade to HTTP if HTTPS is broken, and it would be a very bad idea to change clients in this regard: all an attacker would need to do to attack HTTPS would be to interfere with the HTTPS traffic and make the client downgrade to unprotected HTTP.
Thus, you need to make sure that the client either does not attempt to access URLs which do not work properly (i.e. don't publish such URLs), or that you have a working certificate for these subdomains, i.e. adapt the process for creating subdomains so that they get not only an IP address but also a valid certificate (maybe use wildcard certificates).
Considering these websites don't have to work with SSL, the web server should close all SSL connections for them in a proper way.
There is no canonical way for this, but RFC 5246 implicitly suggests interrupting the handshake on the server side using the user_canceled + close_notify alerts. How to achieve this is another question; it will be a configuration of the default SSL virtual host.
user_canceled
This handshake is being canceled for some reason unrelated to a protocol failure. If the user cancels an operation after the handshake is complete, just closing the connection by sending a close_notify is more appropriate. This alert should be followed by a close_notify. This message is generally a warning.
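If the server in question is nginx, recent versions (1.19.4+) offer a clean way to do exactly this in the default SSL virtual host; a sketch:
server {
    listen 443 ssl default_server;
    # abort the TLS handshake for any name that has no server block of its own
    ssl_reject_handshake on;
}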
If you are dealing with subdomains, you can probably use a wildcard certificate for all of your subdomains.
Adding the CA certificate to your client will remove the warning (that's what companies do, no worries).
When hosting with Apache, for example, you can use VirtualDocumentRoot to add domains without editing your configuration. Have a look at the solution provided here: Virtual Hosting in SSL with VirtualDocumentRoot

Nginx catch "broken header" when listening to proxy_protocol

I need to use HTTP health checks on an Elastic Beanstalk application with proxy protocol turned on. That is currently not possible, and the health check fails with an error --> *58 broken header while reading PROXY protocol
I figured I have two options:
1- Perform the health check on another port, and set up nginx to listen for HTTP requests on that port and proxy them to my app.
2- If it is possible to catch the broken-header errors, or to detect regular HTTP requests in the proxy_protocol server block, redirect those requests to a port that listens for plain HTTP.
I would prefer the latter (#2), if possible. So is there any way to do this?
Ideally, I would prefer not to have to do any of this. A feature request to fix this has been submitted to AWS, but it has no ETA.
The proxy protocol specification says:
The receiver MUST be configured to only receive the protocol described in this specification and MUST not try to guess whether the protocol header is present or not. This means that the protocol explicitly prevents port sharing between public and private access. Otherwise it would open a major security breach by allowing untrusted parties to spoof their connection addresses.
I think this means that option 2 is a sufficiently bad idea that it's not even supported by conforming implementations of the proxy protocol.
Option 1, on the other hand, seems pretty reasonable. You can set up a security group so that only legitimate health checks can come in on the port without proxy protocol enabled.
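A sketch of what option 1 could look like in the nginx config (the ports, app address, and health path are placeholders):
server {
    listen 80 proxy_protocol;          # real traffic: PROXY header required
    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
server {
    listen 8081;                       # plain HTTP, reachable only by health checks
    location /health {
        return 200 "OK\n";
    }
}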
Another couple of options spring to mind too:
Simply point your health checks at the thing that's adding the header (i.e. the ELB?), rather than directly at your Nginx instance. Not sure if this is possible with Elastic Beanstalk; it's not a service I use.
Use something else to add the proxy protocol header before forwarding the health-check traffic on to your Nginx, which would avoid having to duplicate your Nginx config. For instance a HAProxy running on the same machine as your Nginx could do this. Again, use security groups to ensure that only legitimate traffic gets through.
