How can I dynamically verify a client certificate in OpenResty?

For some strange reason I have to deal with client apps that can be configured to use either JWT or mTLS. If the server finds an Authorization header, I will use the token in there. On the other hand, if the token is not found but a client cert is given, I can use that instead. However, if the Authorization header is present I don't want to require the client cert, and I will ignore it if passed, even if it is wrong.
I know I can set ssl_verify_client optional; which means ngx.var.ssl_client_cert is populated and I can refer to it in the Lua script specified by access_by_lua_file. I also know how to load the cert as an x509: local x509cert = x509.new(ngx.var.ssl_client_raw_cert, "PEM"), and that loads fine.
The question is: how do I verify the client cert once I have loaded it? If there is a key in nginx.conf like this: ssl_certificate_key tls.key; should I verify the client cert against that? If so, how do I read it?
Or is there an easier alternative to implementing optional MTLS verification?
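To make the flow concrete, here is a rough sketch of what I'm aiming for in access_by_lua_file (assuming lua-resty-openssl; the names and the 401 handling are illustrative, and the verification step is exactly the part I'm unsure about):

local x509 = require("resty.openssl.x509")

-- JWT path: if an Authorization header is present, use the token and ignore
-- any client cert entirely (handled by our existing JWT code, not shown here)
if ngx.var.http_authorization then
    return
end

-- mTLS path: no Authorization header, so fall back to the client cert.
-- One option seems to be letting nginx verify it (ssl_client_certificate
-- pointing at the client CA plus ssl_verify_client optional) and reading the
-- result from ngx.var.ssl_client_verify, but I'm not sure that's the intended way.
if ngx.var.ssl_client_verify ~= "SUCCESS" then
    return ngx.exit(ngx.HTTP_UNAUTHORIZED)
end

local x509cert, err = x509.new(ngx.var.ssl_client_raw_cert, "PEM")
if not x509cert then
    ngx.log(ngx.ERR, "failed to parse client cert: ", err)
    return ngx.exit(ngx.HTTP_UNAUTHORIZED)
end
-- if nginx already checked the cert against the CA, is parsing it here even
-- needed for anything beyond reading the subject / SANs?

(I'm also aware there is ssl_verify_client optional_no_ca for the case where nginx shouldn't reject a bad cert at the handshake and verification happens elsewhere; I'm not sure which fits better here.)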

Related

Does Pact.Net support https verification?

I want to verify my pact against an API that has an https endpoint.
My request is timing out when I run the pact.
Does Pact.Net support https verification or am I missing something?
Yes, it should be able to do this.
I'm going to guess that the https target is using a self-signed certificate. To work around that, you can set the following environment variables:
To connect to a Pact Broker that uses custom SSL certificates, set the environment variable $SSL_CERT_FILE or $SSL_CERT_DIR to a path that contains the appropriate certificate.
(see also https://github.com/pact-foundation/pact-ruby-standalone/releases)
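For example, assuming a Unix-like shell and a hypothetical certificate path, set the variable for whatever process runs the verification:
SSL_CERT_FILE=/path/to/broker-ca.pem dotnet test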
You could also enable debug logging to see what the process is doing; consult the docs on how to do that.

Is it possible to verify the sender origin of an http request using TLS

I have created an API endpoint, and I have a user of that endpoint requesting from servers at stackoverflow.com. I want to verify that the request was made from stackoverflow.com servers. One way I could verify it came from stackoverflow.com is to ask the developer to sign the request with their Let's Encrypt domain private key. I can then use their public key to decrypt the message.
I'm not totally sure I can decrypt the privately encrypted message with their public key but even if I could, I would like to avoid having the developer do any special type of encryption. Could I use TLS to verify the origin domain?
TLS supports client authentication, also called '2-way' or 'mutual' authentication. (SSL3 also did, but you should not be using SSL3.) See e.g. TLS 1.2 (as updated for ECC) and TLS 1.3.
How to use this depends on the software (typically library or middleware) being used for TLS, which you didn't indicate; it is even possible some TLS stack doesn't support it at all, though I've never heard of any. Some stacks or use-cases allow client auth to be invoked without any code change, and others with only minimal or localized code change.
Some details that may or may not matter:
this does not sign the request. It authenticates the TLS connection (to be exact, it normally signs a transcript of the handshake) and then the data transferred over the connection is MACed (as well as encrypted) using keys created (and thus authenticated) by the handshake. This provides authentication but not nonrepudiation for the data; you the receiver can reliably determine it came from the sender, but you can't reliably prove this to a third party. For the closely related case of 'proving' the server, see the numerous crossdupes linked at https://security.stackexchange.com/questions/205074/is-it-possible-to-save-a-verifiable-log-of-a-tls-session .
this authenticates that the data was sent by the identified client; it says nothing about the origin, which, as Sam Jason points out, is often different.
the client is not necessarily identified by a domain name; it can be a person, organization, or something else. However, many CAs issue a single cert for both TLS server auth and client auth (look at the ExtendedKeyUsage extension in your own or any sample cert(s) to see) and in that case with few exceptions the subject is identified by a domain name or name(s) or at least wildcard(s).
I'm pretty sure you should be using some sort of API key, or maybe something similar to how Twilio signs its requests.
One reason for these patterns is that it's common for HTTP requests to be proxied, with static requests handled by something other than the code/application server. Therefore the TLS connection would have been terminated at the proxy server, and the actual application code wouldn't be able to easily see anything about the TLS connection used by the remote server.
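For example, a hypothetical HMAC-style signature check (not Twilio's actual scheme; the header name and secret handling are made up) could look like this on the receiving side in OpenResty/Lua:

-- Hypothetical HMAC request-signature check in an OpenResty access phase.
-- The client computes HMAC-SHA1(shared_secret, method .. uri .. body) and
-- sends it base64-encoded in a made-up X-Request-Signature header.
local shared_secret = "change-me"          -- illustrative only

ngx.req.read_body()
local body = ngx.req.get_body_data() or ""
local signed = ngx.req.get_method() .. ngx.var.request_uri .. body

local expected = ngx.encode_base64(ngx.hmac_sha1(shared_secret, signed))
local provided = ngx.req.get_headers()["X-Request-Signature"]

-- note: in practice a constant-time comparison would be preferable here
if provided ~= expected then
    return ngx.exit(ngx.HTTP_FORBIDDEN)
end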

How to access Firebase from http://localhost:3000

I'm developing a React app, and I get this error when trying to sign in to Firebase using their email auth provider.
Failed to load https://www.googleapis.com/identitytoolkit/v3/relyingparty/verifyPassword?key=.....:
Response to preflight request doesn't pass access control check: The
'Access-Control-Allow-Origin' header has a value 'https://localhost:3000'
that is not equal to the supplied origin. Origin 'http://localhost:3000'
is therefore not allowed access.
(Notice the https on line 3 versus http on line 4)
It looks like they changed Access-Control-Allow-Origin from * to the https version of whatever domain you're calling from?
Does this mean I now need to configure my React app to run as https://localhost:3000?
You want to create a .env file in the root of your project and set HTTPS=true. This will launch your app using a self-signed certificate.
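For example, a .env file containing just the line below is enough; npm start should then serve the app at https://localhost:3000 (with the usual browser warning for a self-signed certificate):
HTTPS=true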
Take a look at the advanced configuration options of create-react-app here
https://github.com/facebook/create-react-app/blob/master/packages/react-scripts/template/README.md#advanced-configuration
If you need more control over the certificate and do not want to eject, take a look at react-app-rewired (https://github.com/timarney/react-app-rewired). You can configure the devServer to use a custom certificate using the Extended Configuration Options here (https://github.com/timarney/react-app-rewired#extended-configuration-options).

Detect and rewrite HTTP Basic user/password headers into custom headers with Nginx/Lua

I am working with a historic API which grants access via a key/secret combo, which the original API designer specified should be passed as the user name & password in an HTTP Basic auth header, e.g.:
curl -u api_key:api_secret http://api.example.com/....
Now that our API client base is going to be growing, we're looking to use 3scale to handle authentication, rate limiting, and other functions. As per 3scale's instructions and advice, we'll be using an Nginx proxy in front of our API server, which authenticates against 3scale's services to handle all the access control systems.
We'll be exporting our existing clients' keys and secrets into 3scale and keeping the two systems in sync. We need our existing app to continue to receive the key & secret in the existing manner, as some of the returned data is client-specific. However, I need to find a way of converting that HTTP basic auth request, which 3scale doesn't natively support as an authentication method, into rewritten custom headers which they do.
I've been able to set up the proxy using the Nginx and Lua configs that 3scale configures for you. This allows the -u key:secret to be passed through to our server, and correctly processed. At the moment, though, I need to additionally add the same authentication information either as query params or custom headers, so that 3scale can manage the access.
I want my Nginx proxy to handle that for me, so that users provide one set of auth details, in the pre-existing manner, and 3scale can also pick it up.
In a language I know, e.g., Ruby, I can decode the HTTP_AUTHORIZATION header, pick out the Base64-encoded portion, and decode it to find the key & secret components that have been supplied. But I'm an Nginx newbie, and don't know how to achieve the same within Nginx (I also don't know if 3scale's supplied Lua script can/will be part of a solution)...
Reusing the HTTP Authorization header for the 3scale keys can be supported with a small tweak to your Nginx configuration files. As you rightly point out, the Lua script that you download is the place to do this.
However, I would suggest a slightly different approach regarding the keys that you import to 3scale. Instead of using the app_id/app_key authentication pattern, you could use the user_key mode (which is a single key). Then what you would import to 3scale for each application would be the base64 string of api_key+api_secret combined.
This way the changes you will need to do to the configuration files will be fewer and simpler.
The steps you will need to follow are:
in your 3scale admin portal, set the authentication mode to API key (https://support.3scale.net/howtos/api-configuration/authentication-patterns)
go to the proxy configuration screen (where you set your API backend, mappings and where you download the Nginx files).
under "Authentication Settings", set the location of the credentials to HTTP headers.
download the Nginx config files and open the Lua script
find the following line (should be towards the end of the file):
local parameters = get_auth_params("headers", string.split(ngx.var.request, " ")[1] )
replace it with:
local parameters = get_auth_params("basicauth", string.split(ngx.var.request, " ")[1] )
finally, within the same file, replace the entire function named "get_auth_params" with the one in this gist: https://gist.github.com/vdel26/9050170
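For reference, the core of what that "basicauth" mode has to do is just decode the header; a minimal, hypothetical sketch (not the actual contents of the gist) looks like this:

-- Decode "Authorization: Basic base64(key:secret)" into its two parts.
local header = ngx.var.http_authorization
if header then
    local encoded = header:match("^%s*Basic%s+(.+)$")
    if encoded then
        local decoded = ngx.decode_base64(encoded)
        if decoded then
            local api_key, api_secret = decoded:match("^([^:]+):(.*)$")
            -- api_key / api_secret can now be forwarded as custom headers,
            -- e.g. via ngx.req.set_header("X-App-Id", api_key)
        end
    end
end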
I hope this approach suits your needs. You can also contact support@3scale.net if you need more help.

Why are client SSL certificates filtered out by default in IIS?

It turns out that if the client sends a request signed with its certificate, that certificate is ignored by IIS and not passed to managed code. This is because the <system.webServer><security><access sslFlags> property is set to "None" by default, which means "ignore the certificate".
Changing that value is not allowed by default because the <security> section is locked, so it must first be unlocked. Clearly someone made an effort to disallow certificates from passing to managed code by default.
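For concreteness, once the section is unlocked the change boils down to something like this in web.config (SslNegotiateCert is just one example value; None, Ssl, SslNegotiateCert and SslRequireCert are the ones I'm aware of):
<system.webServer>
  <security>
    <access sslFlags="SslNegotiateCert" />
  </security>
</system.webServer>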
Why is this the default? Why not just let the certificate through and not touch it?
