Mutual TLS for webhook, using nginx

I'm using DocuSign's eSignature API. Rails app, server is nginx. I'm trying to get Mutual TLS working, with no luck so far. I used the instructions here.
I'm currently in DocuSign's sandbox - is there any reason Mutual TLS wouldn't work in the sandbox? I'm not seeing $ssl_client_fingerprint or $ssl_client_s_dn in my access_log.
Edit: I'm not getting any errors from nginx. Webhooks are working, I just don't see the client fingerprint, or $ssl_client_s_dn in my nginx logs. My lone question is: does mutual TLS with nginx work when webhook POSTs come from DocuSign's sandbox?
Edit 2: I figured out my issue. The nginx configuration was fine. I didn't have verify_ssl_host set to true when creating the DocuSign API client.
configuration = DocuSign_eSign::Configuration.new
configuration.host = base_path
configuration.verify_ssl_host = true # I was missing this
api_client = DocuSign_eSign::ApiClient.new(configuration)
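For anyone else wiring this up, here is a minimal sketch of the nginx side (server name, file paths, and the log format name are illustrative, not from my actual config):

```nginx
http {
    # $ssl_client_fingerprint / $ssl_client_s_dn stay empty unless the
    # client actually presented a certificate.
    log_format mtls '$remote_addr "$request" $status '
                    'fp=$ssl_client_fingerprint dn=$ssl_client_s_dn';

    server {
        listen 443 ssl;
        server_name example.com;                       # illustrative

        ssl_certificate     /etc/nginx/ssl/server.crt;
        ssl_certificate_key /etc/nginx/ssl/server.key;

        # Request a client certificate but don't reject requests that
        # omit one; the verification result lands in $ssl_client_verify.
        ssl_client_certificate /etc/nginx/ssl/docusign_ca.pem;
        ssl_verify_client optional;

        access_log /var/log/nginx/webhook.log mtls;
    }
}
```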

Mutual TLS can work in the Developer Environment (also known as "Sandbox" or demo).
https://www.docusign.com/blog/dsdev-mutual-tls-stuff-know has more information about it.
If you need help with that, please provide more information about your issue.

Related

Configuring Keycloak OIDC with an nginx (OpenResty) reverse-proxy

I am experimenting with a two-service docker-compose recipe, largely based on
the following GitHub project:
https://github.com/rongfengliang/keycloak-openresty-openidc
After streamlining, my configuration looks something like the following fork
commit:
https://github.com/Tythos/keycloak-openresty-openidc
My current issue is that the authorization endpoint ("../openid-connect/auth") uses
the internal origin ("http://keycloak-svc:"). Obviously, if users are
redirected to this URL, their browsers will need to use the external origin
("http://localhost:"). I thought the PROXY_ADDRESS_FORWARDING variable for the
Keycloak service would fix this, but I'm wondering if I need to do something
like an on-the-fly rewrite in the nginx/openresty configuration.
To replicate, from the project root:
docker-compose build
docker-compose up --force-recreate --remove-orphans
Then browse to "http://localhost:8090" to start the OIDC flow. Once you
encounter the aforementioned origin issue, you can circumvent it by replacing
"keycloak-svc" with "localhost", which will forward you to the correct login
interface. Once there, though, you will need to add a user
to proceed. To add a user, browse to "http://localhost:8080" in a separate tab
and follow these steps before returning to the original tab and entering the
credentials:
Under Users > Add user:
username = "testuser"
email = "{{whatever}}"
email verified = ON
Groups > add "restybox-group"
After user created:
Go to "Credentials" tab
Set to "mypassword"
Temporary = OFF
Authorization Servers such as Keycloak have a base / internet URL when running behind a reverse proxy. You don't need to do anything dynamic in the reverse proxy - have a look at the frontend URL configuration.
Out of interest, I just answered a similar question here, which may help you to understand the general pattern. Aim for good URLs (not localhost) and a discovery endpoint that returns internet URLs rather than internal URLs.
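On the WildFly-based Keycloak images, the frontend URL can typically be set through an environment variable. A hedged sketch of the compose fragment (the image tag, port, and `/auth` path are assumptions about your setup, not taken from your fork):

```yaml
# docker-compose.yml (fragment)
services:
  keycloak-svc:
    image: jboss/keycloak            # assumed image
    environment:
      # Make Keycloak build redirect and discovery URLs from the
      # external origin browsers actually use, rather than the
      # internal service name.
      KEYCLOAK_FRONTEND_URL: "http://localhost:8080/auth"
      PROXY_ADDRESS_FORWARDING: "true"
```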

Does Pact.Net support https verification?

I want to verify my pact against an API that has an https endpoint.
My request is timing out when I run the pact.
Does Pact.Net support https verification, or am I missing something?
Yes, it should be able to do this.
I'm going to guess that the https target is using a self-signed certificate. You can work around that with the following environment variables:
To connect to a Pact Broker that uses custom SSL certificates, set the environment variable $SSL_CERT_FILE or $SSL_CERT_DIR to a path that contains the appropriate certificate.
(see also https://github.com/pact-foundation/pact-ruby-standalone/releases)
You could enable debug logging to see what the process is doing, consult the docs on how to do that.
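For example, something along these lines in the environment that runs the verification (the path is illustrative):

```shell
# Point the Pact tooling at the CA bundle that signed the API's
# certificate (a directory of certs via SSL_CERT_DIR also works).
export SSL_CERT_FILE=/path/to/ca.pem
echo "Using CA bundle: $SSL_CERT_FILE"
```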

Cannot use secure cookies with Nginx/Node API

Let me start by saying that I have thoroughly looked over all the information on stackoverflow and the net regarding this issue, and I have tried several different things to try and get this working.
I am using the package "cookie-session", but when I set secure to true, my cookies are sent with the request (I can see them when making a login request with Postman), but the session does not seem to work in the browser. When secure is set to false, everything works as expected.
Let me explain my setup, I have two servers:
First Server
Serverside rendered React App running Node JS
Running Nginx server
HTTPS is setup here "www.example.com"
Proxying any requests made to "www.example.com/api" to second server
Second Server
Node/express app handling API requests
Running Nginx server
From my understanding, a secure cookie can only be sent if the request is made through HTTPS. Which I believe it is (setup above).
On the second server I have tried using the trust proxy, still no luck:
app.set('trust proxy', 1);
app.use(cookieSession({
  maxAge: 7 * 24 * 60 * 60 * 1000, // one week, in milliseconds
  keys: [env.SESSION.COOKIE_KEY],
  secure: true
}));
I also figured that this may have something to do with the headers sent by Nginx; I have tried many different headers on both servers, e.g. (proxy_set_header X-Forwarded-Proto $scheme), still with no luck.
For the life of me I am not sure what to do from here to get secure cookies working.
Also to mention again that everything works fine with secure set to false. When secure is set to true, I can make a request through Postman to my login and receive my cookies in the request, but it appears the session is not applied on the client.
Could this have anything to do with not having an HTTPS cert installed on the second server? If so, how would I add one anyway, given that both servers run on the same domain "www.example.com" and proxy /api requests to "www.example.com/api"? Thanks for your help.
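For what it's worth, the piece that usually matters on the nginx side is making sure the scheme the browser used survives both hops, so that Express (with trust proxy enabled) sees HTTPS. A hedged sketch, with upstream names and ports being illustrative rather than your actual config:

```nginx
# First server (TLS terminates here): forward the original scheme.
location /api {
    proxy_pass http://second-server;        # illustrative upstream
    proxy_set_header Host $host;
    # $scheme is "https" on this hop; cookie-session needs to see it
    # (via app.set('trust proxy', 1)) before it will send Secure cookies.
    proxy_set_header X-Forwarded-Proto $scheme;
}

# Second server: pass the forwarded scheme through untouched to Node.
location / {
    proxy_pass http://127.0.0.1:3000;       # illustrative port
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
}
```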

Decode JWT token on nginx server and log it

We are using an nginx server for reverse proxying a few micro-services. Every request has an Authorization header containing a JWT token. What we need to do is extract user details from the JWT token and log them on the nginx server. Is there any way to decode and log the JWT? I looked into a few Lua scripts for authenticating requests using JWT, but that is not what we need. Also, we are trying to avoid installing Lua on the nginx server.
Any help would be greatly appreciated.
EDIT: We are fine with Lua based solution as well.
Relating to your problem, and following the comments, you could use the official NGINX Plus module, which is the most direct approach to the task:
Authenticating API Clients with JWT and NGINX Plus
But this obviously cost money and in case you want something open-source you should check this project:
TeslaGov /ngx-http-auth-jwt-module
The above module is still maintained nowadays; it's not as easy to use as the NGINX Plus module, but it's open source.
Finally, regarding your edit, here is a Lua solution:
ubergarm / openresty-nginx-jwt
I am not very familiar with Lua, and the project seems to be outdated since it hasn't received an update since 2018, but I'm sharing the link in case you can find something useful in it.
I hope this helps to solve your problem, regards.
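Since Lua is now an option for you, here is a hedged OpenResty sketch of the decode-and-log idea. It does no signature verification (it only extracts a claim for logging), and the `sub` claim name plus the upstream name are assumptions about your setup:

```nginx
# nginx.conf fragment (OpenResty)
http {
    log_format with_user '$remote_addr "$request" $status user=$jwt_sub';

    server {
        listen 80;

        location / {
            set $jwt_sub "-";
            access_by_lua_block {
                local cjson = require "cjson.safe"
                local auth = ngx.var.http_authorization or ""
                -- Grab the payload (the second dot-separated segment).
                local payload = auth:match("^Bearer [^.]+%.([^.]+)%.")
                if payload then
                    -- JWTs use base64url; translate to standard base64
                    -- and re-pad before decoding.
                    payload = payload:gsub("-", "+"):gsub("_", "/")
                    local rem = #payload % 4
                    if rem > 0 then
                        payload = payload .. string.rep("=", 4 - rem)
                    end
                    local decoded = ngx.decode_base64(payload)
                    local claims = decoded and cjson.decode(decoded)
                    if claims and claims.sub then
                        ngx.var.jwt_sub = claims.sub
                    end
                end
            }
            proxy_pass http://upstream_app;   # illustrative upstream
            access_log /var/log/nginx/access.log with_user;
        }
    }
}
```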

How to handle missing Host header in Rails 3.1

I'm seeing several exceptions a day on a very low traffic site. The exceptions look like this:
Missing host to link to! Please provide the :host parameter,
set default_url_options[:host], or set :only_path to true
actionpack (3.1.1) lib/action_dispatch/http/url.rb:25:in `url_for'
-------------------------------
Request:
-------------------------------
* URL : http:///
This is abridged for clarity, but there are no other significant identifying details. There is no user agent or referer for instance. What appears to be going on is that these are HTTP/1.0 requests lacking the Host header. Now it's strange to me that this exception even occurs, because the domain name in question is canonicalized by nginx using 301s, therefore it's impossible to even reach the Rails app without using the correct domain.
I don't understand why Rails would depend on that header anyway, since it seems Nginx should be passing through the more reliable canonical domain, however I am not familiar with Rack internals. If anyone has any guidance for how to best solve this I would appreciate it.
Is there a good reason Rails/Rack is depending on this header?
Is there potentially a Rack bug here?
Should I inject the header with a middleware?
Should I hack something in Rails to suppress it?
Should I configure Nginx to reject HTTP/1.0 requests?
It may be impossible to reach the application without the client using the correct domain, but that's not the issue here. The issue is the server knowing the correct domain. Without a Host header and without a fully-qualified URL, how can the server know what host the client requested?
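If you decide to handle it at the nginx layer, one option is a catch-all server that drops hostless requests before they reach Rails; a sketch, with names being illustrative:

```nginx
# Requests whose Host header matches no server_name - which includes
# HTTP/1.0 requests with no Host header at all - land here.
server {
    listen 80 default_server;
    server_name _;
    return 444;    # close the connection without a response
}

server {
    listen 80;
    server_name www.example.com;    # illustrative

    location / {
        proxy_pass http://rails_app;
        # Guarantee Rails always sees a Host header.
        proxy_set_header Host $host;
    }
}
```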
