Nginx HTTPS proxy, do not terminate SSL

I want to proxy https requests to a certain domain to another address:
server {
    server_name site1;
    listen 443;
    ssl on;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://172.17.0.1:44110;
    }
}
Nginx complains with:
nginx: [emerg] no "ssl_certificate" is defined for the "ssl" directive in /etc/nginx/nginx.conf:33
The point is that the certificate is actually on the proxied server.
How can I tell nginx not to terminate the SSL layer and simply proxy the traffic to the configured URL?
What I am looking for is something similar to this, but with server_name support.

You probably need to add:
proxy_ssl_session_reuse off;
see here

What you want to do is not possible in NGINX as far as I know. I had actually written out an answer, then realized it duplicated the link you provided to another StackOverflow answer. If you consider what you are asking, it is in effect asking NGINX to man-in-the-middle the communication between the client browser and your origin. I don't think you really want that to be possible, as it would make SSL/TLS quite useless.
You will either need to do what the linked StackOverflow answer does with the stream module, or you will need to move the certificate to be hosted by NGINX.
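For what it's worth, here is a minimal sketch of that stream-module approach, assuming nginx is built with ngx_stream_ssl_preread_module (the fallback address is a placeholder). It routes on the SNI hostname without decrypting anything, which gives you server_name-style selection while the certificate stays on the proxied server:
# This block lives at the top level of nginx.conf, outside any http {} block.
stream {
    map $ssl_preread_server_name $upstream {
        site1   172.17.0.1:44110;   # the backend from the question
        default 127.0.0.1:8443;     # placeholder fallback
    }

    server {
        listen 443;
        ssl_preread on;             # read the SNI from the ClientHello without terminating TLS
        proxy_pass $upstream;
    }
}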
Cloudflare has created "Keyless" SSL which allows for the private material to be hosted elsewhere, but only the origin side of it is open source. You would have to modify NGINX to be able to implement the proxy side of the protocol, though perhaps someone else has done that as well. This is likely overkill for your needs.

Related

Reverse proxy configuration with a frontend server, API server, and nginx

I'm configuring a reverse proxy server with nginx.
My nginx.conf maps location / to the frontend server address and location /api to the API server address.
The frontend originally fetched from http://${api_addr}/api (before setting up nginx), but I have now changed the API URL to http://${nginx_addr}/api to build the reverse proxy. Is it correct for the frontend to send requests directly to the API address, or should it send them to the nginx address?
(diagram: reverse proxy server structure)
So you're configuring a website and you want it to direct traffic to your frontend (HTML etc.) and have an /api route going to your API, if I'm reading that correctly?
You'd do it something similar to this:
server {
    listen 80;
    server_name yourdomain.com;

    # proxy_pass with a variable requires a resolver so nginx can look up
    # the hostname at request time, e.g.:
    # resolver 8.8.8.8;
    set $frontend "frontend-stuff.com";
    set $backend "backend.com";

    location /api {
        ## if your api backend starts at / rather than /api you'd rewrite away the /api path
        # rewrite /api/(.*) /$1 break;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $backend;
        proxy_pass http://$backend;
        break;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $frontend;
        proxy_pass http://$frontend;
        break;
    }
}
The variables stop nginx from hitting an 'emerg' (fatal error) if a host goes down in the background between reloads; they can also be helpful with services where the frontend has a large IP range, like CloudFront.
In the case of your frontend, if you're calling something like CloudFront, you'd need to force TLS 1.2:
proxy_ssl_protocols TLSv1.2;
proxy_ssl_server_name on;
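As a rough sketch of how those directives fit together with an HTTPS upstream (reusing the $frontend variable and resolver note from the config above; illustrative, not a drop-in config):
location / {
    proxy_ssl_protocols   TLSv1.2;       # force TLS 1.2 toward the origin
    proxy_ssl_server_name on;            # send SNI so the origin picks the right certificate
    proxy_set_header Host $frontend;
    proxy_pass https://$frontend;        # note https:// here, unlike the plain-http example above
}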
X-Forwarded-Proto https is needed if the backend app is returning paths (.NET apps use this to generate https paths, etc.).
I am wondering if it is correct to send the request directly from the front to the api address or if it is correct to send the request to the nginx address?
It's best to proxy all your requests for an application via the same site config for multiple reasons, such as...
Combined logging (easier to debug)
Simpler to secure (set CSP and unified security headers across the site)
Easier to handle CORS for any frontend-related activities (AJAX/XHR)
If you provide a bit more info I can probably pad this out.
It is best practice to always query the Nginx endpoint and not the specific port. By directly querying the specific api port, you are completely bypassing the server routing service and could therefore accidentally overload your api endpoint if not careful.
By routing everything through the Nginx server, you ensure that your api service remains healthy and works as expected.

Nginx Reverse proxy - top-level domain not working - DNS error

I am trying to setup an nginx reverse proxy for my domain and a few of its subdomains. The subdomains work perfectly, but I keep getting ERR_NAME_NOT_RESOLVED on the top-level domain.
Except for the server_name and the proxy_pass port, the nginx config is identical between the top-level domain and its subdomains.
nginx config:
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://localhost:5500;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
DNS settings: (screenshot of the DNS records not reproduced here)
This is more likely to be a DNS issue than an Nginx one, but I don't understand why the subdomains work and the top-level one doesn't.
@AlexeyTen's comment about restarting my browser gave me an idea which ended up fixing my issue.
Basically, I use Acrylic DNS Proxy on my development computer to handle .local domains for development. Most people use the hosts file for adding local domains, but I find that process tedious, as I have worked with hundreds of local domains over the years, so I ended up using this proxy, which accepts wildcard domains and means I never have to touch the hosts file again.
However, in this instance, my local DNS proxy seemed to have a corrupt cache entry for my top-level domain. I purged the cache, restarted the proxy, and that fixed everything. I don't know exactly why this happened, but it's good to know that it can, so it would be the first place I'd look if something similar happens in the future.
Thank you to @AlexeyTen for making me think outside the box. While it wasn't the browser's DNS cache, that comment made me realize that perhaps there was nothing wrong with my DNS settings on the server, but rather something wrong with my local computer.

Redirecting Requests to https breaks Stripe webhook

I recently modified my nginx server to redirect all www.mysite requests to https://mysite.
The problem is that the Stripe webhook I had set up is now failing with a 301 redirect error. How do I alter my nginx config so that only requests coming from my domain are redirected? (Or at least I think that's the solution; I'm a front-end guy.)
Here's my server.
server {
    listen 443;
    server_name mysite.com;
    root /var/www/mysite.com/app/mysite;

    ssl on;
    ssl_certificate /etc/nginx/ssl/cert.crt;
    ssl_certificate_key /etc/nginx/ssl/mykey.key;

    # enables SSLv3/TLSv1, but not SSLv2 which is weak and should no longer be used.
    ssl_protocols SSLv3 TLSv1;

    # Disables all weak ciphers
    ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM;

    location / {
        proxy_pass http://127.0.0.1:3000/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name www.mysite.com;
    return 301 https://mysite.com$request_uri;
}
As mpcabd mentioned, Stripe webhooks will not follow redirects for security reasons. As he also mentioned, while you can filter by IP, it's a never-ending battle (and Stripe has previously stated they do intend to eventually stop publishing an IP list).
The even easier and better set-it-and-forget-it solution:
In the Stripe dashboard, reconfigure your webhooks to use HTTPS.
Bam. Done.
What you can do is exclude Stripe from the redirect. I think their hook doesn't follow redirects for security reasons, which is fair, so try to see what IPs they use and make sure you don't redirect if the $http_x_real_ip or $remote_addr is in Stripe's IP list.
But as clearly stated here by Stripe:
... because we occasionally have to adjust this list of IP addresses without any advance notice, we strongly recommend against using IP-based access control to protect your webhook endpoints. Instead, we recommend serving your webhook endpoint over SSL, embedding a secret identifier in the webhook URL that is only known to you and Stripe, and/or retrieving the event by ID from our API ...
So my answer would be to check if the location requested is the Stripe webhook, then serve it without a redirect, else redirect the request.
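A rough sketch of that idea, assuming the webhook was registered against the plain-HTTP hostname (the /stripe-webhook path is a placeholder for whatever URL you configured in the Stripe dashboard):
server {
    listen 80;
    server_name www.mysite.com mysite.com;

    # Serve the webhook without redirecting so Stripe doesn't see a 301.
    location /stripe-webhook {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:3000;
    }

    # Everything else still gets pushed to HTTPS.
    location / {
        return 301 https://mysite.com$request_uri;
    }
}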
Please concatenate the intermediate certificate with your signed SSL certificate.
Ref :
https://futurestud.io/tutorials/how-to-configure-nginx-ssl-certifcate-chain
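If a missing chain is the problem, the idea is roughly this (file names are placeholders): concatenate the leaf certificate and the intermediate into one file, leaf first, and point ssl_certificate at the bundle.
# e.g. cat cert.crt intermediate.crt > /etc/nginx/ssl/cert-chained.crt
server {
    listen 443 ssl;
    server_name mysite.com;
    ssl_certificate     /etc/nginx/ssl/cert-chained.crt;   # leaf + intermediate(s)
    ssl_certificate_key /etc/nginx/ssl/mykey.key;
}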

nginx reverse proxy and ports

I had a question regarding proxy_pass. A lot of tutorials show a configuration like this in some form, with a port identified:
location / {
    proxy_pass http://x.x.x.100:80;
    proxy_set_header X-Real-IP $remote_addr;
}
Can someone explain to me why the port needs to be used? Does it need to be a specific number, or is it even necessary?
The explicitly specified port is:
not necessary IF you're reverse proxying to something on the default HTTP (80) or HTTPS (443) port
necessary if you're reverse proxying to something running on any non-default port (common when your application server and web server are on the same host)
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass shows examples without the port number.
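To make that concrete, a small illustrative sketch (the /app/ location and port 3000 are made-up examples):
location / {
    # same as proxy_pass http://x.x.x.100:80; -- http:// implies port 80
    proxy_pass http://x.x.x.100;
    proxy_set_header X-Real-IP $remote_addr;
}

location /app/ {
    # explicit port required: the app server listens on a non-default port
    proxy_pass http://127.0.0.1:3000;
}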

How do I use nginx to reverse-proxy an IP camera's mjpeg stream?

I'm using nginx on OpenWRT to reverse-proxy a motion-jpeg feed from an IP camera, but I'm experiencing lag of up to 10-15 seconds, even at quite low frame sizes and rates. With the OpenWRT device removed from the path, the camera can be accessed with no lag at all.
Because of the length of the delay (and the fact that it grows with time), this looks like some kind of buffering/caching issue. I have already set proxy_buffering off, but is there something else I should be watching out for?
Thanks.
I installed mjpg-streamer on an Arduino Yun, then set up port forwarding in my router's settings, whitelisted to my web server only.
Here is my nginx config, which lives in the sites-enabled directory.
server {
    listen 80;
    server_name cam.example.com;
    error_log /var/log/nginx/error.cam.log;
    access_log /var/log/nginx/access.cam.log;

    location / {
        set $pp_d http://99.99.99.99:9999/stream_simple.html;
        if ( $args = 'action=stream' ) {
            set $pp_d http://99.99.99.99:9999/$is_args$args;
        }
        if ( $args = 'action=snapshot' ) {
            set $pp_d http://99.99.99.99:9999/$is_args$args;
        }
        proxy_pass $pp_d;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Request-Start $msec;
    }
}
I never got this working to my satisfaction with nginx. Depending on your specific needs, two solutions which may be adequate:
if you can tolerate the stream being on a different port, pass it through using the port forwarding feature of OpenWRT's built-in firewall.
use the reverse-proxy capabilities of tinyproxy. The default package has the reverse-proxy capabilities disabled by a flag, so you need to be comfortable checking out and building it yourself. This method is definitely more fiddly, but does also work.
I'd still be interested to hear of anyone who gets this working with nginx.
I have nginx on OpenWrt BB (WNDR3800) reverse-proxying to a D-Link 932LB1 IP cam, and it's working nicely. No significant lag, even before I disabled proxy_buffering. If I have a lot of stuff going over the network, the video can get choppy, but no more than it does with a straight-to-camera link from the browser (or from any of my IP cam apps). So... it is possible.
Nginx was the way to go for me. I tried tinyproxy & lighttpd for the reverse proxying, but each has missing features on OpenWrt. Both tinyproxy and lighttpd require custom compilation for the full reverse proxy features, and (AFAIK) lighttpd will not accept FQDNs in the proxy directive.
Here's what I have going:
Basic or digest auth on the public-facing nginx provides site-wide access control.
I proxy my CGI scripts (shell, haserl, etc.) to OpenWrt's uhttpd.
Tightly controlled reverse proxy to the camera's mjpeg & jpeg API; no other camera functions are exposed to the public.
Camera basic-auth handled by nginx (proxy_set_header), so no backend authorization code is exposed to the public (see the sketch after this list).
Relatively small footprint (no perl, apache, ruby, etc.).
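A hedged sketch of that basic-auth item (camera address, path, and credentials are placeholders; dXNlcjpwYXNz is just base64 of user:pass):
location /cam/ {
    auth_basic           "restricted";              # site-wide access control at the edge
    auth_basic_user_file /etc/nginx/htpasswd;
    # nginx injects the camera's own credentials, so they never reach the client
    proxy_set_header     Authorization "Basic dXNlcjpwYXNz";
    proxy_pass           http://192.168.1.50/;      # camera on the LAN (placeholder address)
    proxy_buffering      off;                       # keep mjpeg frames flowing
}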
I would include my nginx.conf here, except there's nothing unusual about it... just the bare bones proxy stuff. You might try tcpdump or wireshark to see what's cluttering your LAN, if traffic is indeed your culprit.
But it sounds like something about your router is the cause of the delay. Maybe the hardware just can't handle the CPU/traffic load, or there could be something else in your OpenWrt setup that is hogging the highway. Is your video smooth and just delayed, or are you seeing seriously choppy video? The lengthening delay you mention does sound like a buffer/cache thing... but I don't know what would be doing that.
