Redirecting Requests to https breaks Stripe webhook - nginx

I recently modified my nginx server to redirect all www.mysite requests to https://mysite.
The problem is that since I did that, the Stripe webhook I had set up is failing with a 301 redirect error. How do I alter my nginx server so that only requests coming from my domain are redirected? (Or at least I think that's the solution; I'm a front-end guy.)
Here's my server block:
server {
    listen 443 ssl;
    server_name mysite.com;
    root /var/www/mysite.com/app/mysite;

    ssl_certificate /etc/nginx/ssl/cert.crt;
    ssl_certificate_key /etc/nginx/ssl/mykey.key;

    # Enable only TLS; SSLv2 and SSLv3 are broken and should no longer be used.
    ssl_protocols TLSv1.2 TLSv1.3;

    # Disable all weak ciphers
    ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM;

    location / {
        proxy_pass http://127.0.0.1:3000/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
server {
    listen 80;
    server_name www.mysite.com;
    return 301 https://mysite.com$request_uri;
}

As mpcabd mentioned, Stripe webhooks will not follow redirects for security reasons. As he also mentioned, while you can filter by IP, it's a never-ending battle (and Stripe has stated that they intend to eventually stop publishing an IP list).
The even easier, set-it-and-forget-it solution:
In the Stripe dashboard, reconfigure your webhooks to use HTTPS.
Bam. Done.

What you can do is exclude Stripe from the redirect. I think their hook doesn't follow redirects for security reasons, which is fair, so find out what IPs they use and make sure you don't redirect if $http_x_real_ip or $remote_addr is in Stripe's IP list.
But as clearly stated here by Stripe:
... because we occasionally have to adjust this list of IP
addresses without any advance notice, we strongly recommend against
using IP-based access control to protect your webhook endpoints.
Instead, we recommend serving your webhook endpoint over SSL,
embedding a secret identifier in the webhook URL that is only known to
you and Stripe, and/or retrieving the event by ID from our API ...
So my answer would be: check whether the requested location is the Stripe webhook and serve it without a redirect; otherwise redirect the request.
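A minimal sketch of that idea, assuming (hypothetically) the webhook endpoint is /stripe/webhook and the app listens on 127.0.0.1:3000 as in the question; substitute your real path. The HTTP server block proxies that one location straight through and redirects everything else:

```nginx
server {
    listen 80;
    server_name www.mysite.com mysite.com;

    # Hypothetical webhook path: serve it directly, with no redirect,
    # so Stripe's POST never sees a 301.
    location = /stripe/webhook {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Everything else still gets redirected to HTTPS.
    location / {
        return 301 https://mysite.com$request_uri;
    }
}
```

That said, simply pointing the webhook at the HTTPS URL in the Stripe dashboard, as the other answer suggests, is the simpler fix.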

Please concatenate the intermediate certificate with your signed SSL certificate.
Ref:
https://futurestud.io/tutorials/how-to-configure-nginx-ssl-certifcate-chain

Related

Reverse proxy server configuration with a frontend server, API server, and nginx server

I'm configuring a reverse proxy server with nginx.
My nginx.conf file maps location / to the front server address and location /api to the API server address.
The front server originally fetched from http://${api_addr}/api (before setting up nginx), but I changed the API URL to http://${nginx_addr}/api to build the reverse proxy. I am wondering whether it is correct to send requests directly from the front end to the API address, or to send them to the nginx address.
reverse proxy server structure
So you're configuring a website and you want it to direct traffic to your frontend (HTML etc.) and have an /api route going to your API, if I'm reading that correctly?
You'd do it similar to this:
server {
    listen 80;
    server_name yourdomain.com;

    # proxy_pass with a variable resolves hostnames at runtime,
    # so a resolver is required (8.8.8.8 is just an example).
    resolver 8.8.8.8;

    set $frontend "frontend-stuff.com";
    set $backend "backend.com";

    location /api {
        ## If your API backend starts at / rather than /api, rewrite away the /api prefix:
        # rewrite /api/(.*) /$1 break;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $backend;
        proxy_pass http://$backend;
        break;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $frontend;
        proxy_pass http://$frontend;
        break;
    }
}
The variables stop nginx from hitting an 'emerg' (fatal error) if a host goes down in the background between reloads; they can also help with services where the frontend has a large IP range, like CloudFront.
In the case of your frontend, if you're calling something like CloudFront you'd need to force TLS 1.2:
proxy_ssl_protocols TLSv1.2;
proxy_ssl_server_name on;
X-Forwarded-Proto https is needed if the backend app builds absolute paths (.NET apps use it to generate https URLs, etc.)
I am wondering if it is correct to send the request directly from the front to the api address or if it is correct to send the request to the nginx address?
It's best to proxy all requests for an application through the same site config for multiple reasons, such as:
Combined logging (easier to debug)
Simpler to secure (set CSP and unified security headers across the site)
Easier CORS handling for any frontend-related activity (AJAX/XHR)
If you provide a bit more info I can probably pad this out.
It is best practice to always query the nginx endpoint and not the specific API port. By querying the API port directly, you completely bypass the server's routing and could accidentally overload your API endpoint if you're not careful.
By routing everything through the nginx server, you ensure that your API service remains healthy and works as expected.
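As a sketch of the "unified security headers" point above: with everything behind one server block, the headers can be set once and apply to responses from both proxied routes. The header values here are purely illustrative, not recommendations:

```nginx
server {
    listen 80;
    server_name yourdomain.com;

    # Security headers set once at the server level, applied to
    # responses from both the frontend and /api proxies.
    add_header X-Content-Type-Options nosniff always;
    add_header X-Frame-Options SAMEORIGIN always;
    add_header Content-Security-Policy "default-src 'self'" always;

    location / {
        proxy_pass http://frontend-stuff.com;
    }

    location /api {
        proxy_pass http://backend.com;
    }
}
```

One design caveat: add_header directives are inherited by a location only if that location defines no add_header of its own, so keep them at a single level to stay unified.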

ASP.NET Owin OAuth callback URL with reverse proxy

I need your help to solve an issue I have with OAuth on my MVC5 application. In my development environment everything's fine: I set up the Twitter/Google/Facebook/Microsoft providers and it works like a charm for now.
My issue is in a test environment. I'm using nginx as a front server to hold the certificates and serve some static content through a subdirectory of the domain.
The proxy part is configured as follows:
location / {
    proxy_pass http://localhost:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
The problem is that all my configured callback URLs for the external providers use the HTTPS scheme, but because the application itself runs over HTTP, the callback URL it builds uses the HTTP protocol (for example, the authorized callback URL is https://example.com/signin-facebook but the effective callback URL sent to the provider is http://example.com/signin-facebook).
I saw in other posts that there is an ASP.NET Core solution with UseForwardedHeaders, but as I'm still on regular ASP.NET, it's not an option.
As a dirty workaround, I temporarily allowed HTTP callback URLs for Twitter/Facebook/Google, but Microsoft is strict and only allows HTTPS. (This workaround works because my nginx is configured to 301-redirect incoming HTTP requests to the same request over HTTPS.)
Does anyone have a solution to change the scheme of the base URL used to build the callback URL?

Nginx https proxy, do not terminate ssl

I want to proxy https requests to a certain domain to another address:
server {
    server_name site1;
    listen 443;
    ssl on;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://172.17.0.1:44110;
    }
}
Nginx complains with:
nginx: [emerg] no "ssl_certificate" is defined for the "ssl" directive in /etc/nginx/nginx.conf:33
The point is that the certificate is actually on the proxied server.
How can I tell nginx to not terminate the ssl layer, and simply proxy it to the configured url?
What I am looking for is something similar to this, but with server_name support.
You probably need to add:
proxy_ssl_session_reuse off;
See here.
What you want to do is not possible in NGINX so far as I know. I actually had written out an answer that turned out to have duplicated the link you provided to another StackOverflow answer. If you consider what you are asking, it is in effect for NGINX to be able to Man-in-the-Middle the communication between the client browser and your origin. I don't think you really want this to be possible as it would make SSL/TLS quite useless.
You will either need to do what the linked StackOverflow answer does with the stream module, or you will need to move the certificate to be hosted by NGINX.
Cloudflare has created "Keyless" SSL which allows for the private material to be hosted elsewhere, but only the origin side of it is open source. You would have to modify NGINX to be able to implement the proxy side of the protocol, though perhaps someone else has done that as well. This is likely overkill for your needs.
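For reference, here is a sketch of the stream-module approach from the linked answer, using ssl_preread to route on the SNI server name without terminating TLS (the server names and backend addresses are placeholders). This gives the server_name-style routing the question asks for while the certificate stays on the proxied server; it requires nginx 1.11.5+ built with ngx_stream_ssl_preread_module:

```nginx
# This goes at the top level of nginx.conf, outside any http {} block.
stream {
    # Route on the SNI name from the TLS ClientHello without decrypting anything.
    map $ssl_preread_server_name $upstream {
        site1   172.17.0.1:44110;
        default 127.0.0.1:8443;
    }

    server {
        listen 443;
        ssl_preread on;     # inspect the ClientHello, but do not terminate TLS
        proxy_pass $upstream;
    }
}
```

The trade-off is that a stream block is pure TCP passthrough: no HTTP-level features (locations, headers, caching) are available for that traffic.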

Nginx redirect HTTPS to HTTP with URL rewrite

I'm trying to set up an nginx reverse proxy that will receive HTTPS connections and forward them via HTTP to our backend server.
I've got the basics correct and the reverse proxy works fine, but I also need to rewrite links on the backend webpage so that the frontend hostname and port are displayed in the browser.
My config is as follows -
server {
    listen 1493 ssl;
    server_name ngx.example.com;
    ssl_certificate /etc/nginx/ssl/public.cert;
    ssl_certificate_key /etc/nginx/ssl/private.rsa;

    location / {
        proxy_pass http://ws1.example.com:1483;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
I can browse to the page and the frontend hostname:port is displayed, but clicking on any link (which also shows the frontend hostname:port) returns "400 Bad Request. The plain HTTP request was sent to HTTPS port". My backend is HTTP-only and the error originates from nginx.
How do I make this work? It was much easier in IIS...
Thanks!
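A common cause of that exact 400 error is the backend building absolute http:// links that point at the frontend's HTTPS port (1493), so the browser then sends plain HTTP to the SSL listener. A sketch of the usual mitigation, assuming the backend honors forwarded headers, added to the config from the question:

```nginx
server {
    listen 1493 ssl;
    server_name ngx.example.com;
    ssl_certificate /etc/nginx/ssl/public.cert;
    ssl_certificate_key /etc/nginx/ssl/private.rsa;

    location / {
        proxy_pass http://ws1.example.com:1483;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-For $remote_addr;
        # Tell the backend the original request was HTTPS, so that
        # (if it honors this header) it generates https:// links.
        proxy_set_header X-Forwarded-Proto $scheme;
        # Rewrite Location headers on backend redirects back to the
        # frontend scheme, host, and port.
        proxy_redirect http://ws1.example.com:1483/ https://$host:$server_port/;
    }
}
```

Note the limits: proxy_redirect only fixes Location headers on redirects, not links inside page bodies, and X-Forwarded-Proto only helps if the backend application actually reads it.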

Linode NodeBalancer Vs Nginx

I have a NodeBalancer created to route my requests to a Tomcat server via HTTP. The NodeBalancer is doing fine, but now I have to install an Nginx server to serve static content and also act as a reverse proxy that redirects my HTTP traffic to HTTPS.
I have the scenario below:
User ---via HTTP---> NodeBalancer(http:80) ---> Nginx ---> redirect to HTTPS ---> NodeBalancer(https:443) ---> Tomcat on HTTP:8080
Below is the sample flow:
1) User sends a request over HTTP:80
2) NodeBalancer receives the request on HTTP:80 and forwards it to Nginx
3) Nginx redirects the request to HTTPS
4) NodeBalancer receives the request on HTTPS:443, terminates SSL, and forwards it to the serving Tomcat on HTTP:8080
Now, if I need to serve all static content (images/|img/|javascript/|js/|css/|stylesheets/), then before forwarding HTTPS requests via the NodeBalancer to the serving Tomcat I need to route them through Nginx to serve the static content.
I can do that by pointing the NodeBalancer at Nginx, but then what about Tomcat clustering? The NodeBalancer will always forward all HTTPS requests to Nginx, so I would have to maintain session stickiness in Nginx, which is pretty much load balancing via Nginx. It seems everything can be done by the Nginx server itself; instead of terminating all user requests at the NodeBalancer, I could use Nginx directly.
I tried some scenarios by installing Nginx, redirecting HTTP to HTTPS, and independently serving static content, but I'm stuck with the provided NodeBalancer for my purposes. I'm planning to drop the Linode NodeBalancer and use Nginx as the load balancer as well as to serve static content.
Looking for some expert advice/comments on this, or tell me if my approach is wrong.
Serving the static content and the redirect to HTTPS are two different issues. Your general approach sounds fine. I would personally do everything using Nginx and lose the NodeBalancer, but that's for a personal website; if this is for business then you need to consider monitoring etc., and NodeBalancer might provide some features you want to keep.
Send all traffic from the NodeBalancer to Nginx and use Nginx both as the load balancer and to terminate all SSL traffic. Here's a simple example that terminates SSL and serves images. In this case we route all traffic to the tomcat upstream on port 80, load balanced with IP hash so you get sticky sessions. You would be adding your load balancing here.
upstream tomcat {
    ip_hash;
    server 192.168.1.1:80;
    server 192.168.1.2:80;
    server 192.168.1.3:80;
}

server {
    listen 443 ssl;
    server_name www.example.org;

    ssl_certificate /etc/letsencrypt/live/example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;

    location / {
        proxy_cache example_cache;  # requires a matching proxy_cache_path zone in the http block
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header Host www.example.org;
        proxy_pass_request_headers on;
        proxy_pass http://tomcat;
    }

    location /images/ {
        # alias maps /images/foo.png to /var/www/images/foo.png;
        # "root /var/www/images/" here would look in /var/www/images/images/
        alias /var/www/images/;
        autoindex off;
    }
}
To achieve sticky sessions you have several options that you need to read up on. IP-hash load balancing is probably the simplest to set up.
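If IP hash is too coarse (for instance, many clients behind one NAT share an IP), open-source nginx can instead hash on a request attribute such as Tomcat's session cookie. A sketch, assuming the cookie uses Tomcat's default JSESSIONID name:

```nginx
upstream tomcat {
    # Pin each session to one backend by hashing the session cookie.
    # Until the cookie is set, the hash key is empty and requests land
    # on a single backend, so this is a sketch, not a complete solution.
    hash $cookie_JSESSIONID consistent;
    server 192.168.1.1:80;
    server 192.168.1.2:80;
    server 192.168.1.3:80;
}
```

The consistent flag uses ketama consistent hashing, so adding or removing a backend remaps only a fraction of sessions rather than all of them.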
