Serving Polymer PWA with nginx reverse proxy

I'm trying to serve my Polymer PWA with an HTTP/2 reverse proxy using nginx, but I cannot get it to work properly. The PWA is served unbundled with prpl-server at 127.0.0.1:38765, which works fine. My prpl-server looks like this:
const express = require('express')
const prpl = require('prpl-server')
const config = require('./build/polymer.json')
const app = express()
const port = 38765
app.get('*', prpl.makeHandler('./build/', config))
app.listen(port)
and my nginx config at /etc/nginx/sites-available/default looks like this:
upstream app {
    server 127.0.0.1:38765;
    keepalive 64;
}

server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;

    server_name app; # or full domain? tried both, doesn't work

    location / {
        proxy_pass http://app$request_uri;
        proxy_redirect off;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Proto-Version $http2;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header Connection "";

        # Cache controls
        # This section sets response expiration, which prevents 304 Not Modified responses
        expires 0;
        add_header Pragma public;
        add_header Cache-Control "public";
        access_log off;

        # Security patches
        # If the client overrides these values, the server re-enables
        # them and enforces its own rules
        add_header X-XSS-Protection "1; mode=block";
        add_header X-Frame-Options "deny";
        add_header X-Content-Type-Options "nosniff";
    }

    ssl on;
    ssl_session_cache shared:SSL:5m;
    ssl_session_timeout 1h;

    ssl_certificate /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
}
When I go to the page, all dependencies seem to be downloaded over h2 except for ma-app.html (the app shell), which gives me a 502 error. All other files download with a 200 status and have the same size (minus some compression) as when I go to port 38765 directly, but the page is blank.
Am I missing something? Why doesn't the shell download properly? All files' request URLs are exactly the same for the nginx reverse proxy as for the prpl-server except for the port number.
Screenshots (not reproduced here) show the app working when I access the prpl-server directly, failing when I go through the nginx reverse proxy, and some details of the failed request.

The problem turned out to be the proxy buffer size being too small, as mentioned here: https://github.com/Polymer/prpl-server-node/issues/50#issuecomment-333270848.
I added
proxy_buffer_size 128k;
proxy_buffers 32 32k;
proxy_busy_buffers_size 128k;
to the location block of the nginx config, and now it works.
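For context, here is a minimal sketch of how the location block from the config above could look with the buffer directives in place (same upstream name and headers as in the question; only the three buffer lines are new):

location / {
    proxy_pass http://app$request_uri;
    proxy_redirect off;

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Connection "";

    # Larger proxy buffers: prpl-server responses can carry sizable headers
    # (e.g. Link preload headers for HTTP/2 push), which can overflow
    # nginx's defaults and surface as a 502 on the app shell
    proxy_buffer_size 128k;
    proxy_buffers 32 32k;
    proxy_busy_buffers_size 128k;
}

proxy_buffer_size covers the first part of the upstream response (the headers), which is the part prpl-server inflates with its preload links; proxy_buffers and proxy_busy_buffers_size size the buffers used for the response body.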

Related

Nginx mp4 module start/end with MinIO (S3) storage

I have MinIO storage with a video folder inside the bucket where all the videos are uploaded.
I have configured Nginx proxy_pass to this folder to access the videos, and I use the ngx_http_mp4_module (http://nginx.org/en/docs/http/ngx_http_mp4_module.html). Everything works fine, I mean the videos are available through the link (http://localhost:8888/videos/1.mp4), and playback also works.
The problem is that I want to request a specific range of the video, but the start/end parameters don't work (for example http://localhost:8888/videos/1.mp4?start=100&end=220); every time it gives me the complete video. Is it possible to use them in my case, or is there any workaround?
My Nginx configuration:
server {
    listen 8888;
    server_name localhost;

    # To allow special characters in headers
    ignore_invalid_headers off;
    # Allow any size file to be uploaded.
    # Set to a value such as 1000m; to restrict file size to a specific value
    client_max_body_size 0;
    # To disable buffering
    proxy_buffering off;

    #charset koi8-r;
    #access_log logs/host.access.log main;

    # Proxy requests to the bucket "photos" to MinIO server running on port 9000
    location /videos {
        mp4;
        mp4_buffer_size 5m;
        mp4_max_buffer_size 10m;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        #add_header Accept-Ranges;
        add_header Accept-Ranges bytes;

        proxy_connect_timeout 300;
        # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        chunked_transfer_encoding off;

        proxy_pass http://127.0.0.1:9000/mybucket/videos;
    }
}

Nginx 301 when accessing a proxied domain locally

I have Nginx running as a reverse proxy for a domain, let's call it "testdomain.com". The proxy itself is working, and I can access the website from almost anywhere I want, except locally.
To clarify it better, here's my architecture:
I have an ESXi server with a pfSense VM; the pfSense VM port forwards all requests destined for port 80 to port 80 of another VM. That VM passes them to port 80 of a Docker container running nginx, which then proxies the HTTP request to an external server where the application (WordPress) is hosted. As I said earlier, it works fine; however, if I run curl locally (i.e. within my first VM or the nginx container) against my address, it returns the following:
curl testdomain.com
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx</center>
</body>
</html>
It seems that Nginx can't find the vhost. Here's how my .conf for the website looks:
server {
    listen 80;

    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;

    server_name testdomain.com;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        add_header Cache-Control public;
        add_header Pragma public;
        add_header Vary Accept-Encoding;
        expires 60M;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://myexternalserver.com:80;
    }
}

server {
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;

    listen 443 ssl http2;
    server_name testdomain.com;

    access_log /var/log/nginx/access.log;

    ssl_certificate /etc/nginx/ssl/nginx-selfsigned.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx-selfsigned.key;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;

    add_header Strict-Transport-Security "max-age=31536000" always;

    ssl_session_cache shared:SSL:40m;
    ssl_session_timeout 4h;
    ssl_session_tickets on;

    location / {
        add_header Cache-Control public;
        add_header Pragma public;
        add_header Vary Accept-Encoding;
        expires 60M;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://myexternalserver.com:443;
    }
}
I apologize if I missed any relevant info.
Thank you!

Nginx reverse proxy on Synology DSM stopped working after update to DSM 6.2.2 Update 3

I have a DS415+ with a custom setup for reverse proxy for several services running in Docker containers following this post on Reddit. Everything worked perfectly until I updated to DSM 6.2.2 Update 3. Since then, trying to access these services results in timeouts, although curl-ing localhost:port or DiskStation_LAN_address:port works fine.
I tried renewing the certificates from Let's Encrypt, taking out some of the options one at a time, and clearing the Connection header via:
proxy_set_header Connection "";
Nothing worked...
This is my custom server.conf file:
server {
    listen 80;
    listen [::]:80;

    server_name XXXXXXX.XXXXXXXX.XXX;

    # Include this if you want to get a letsencrypt certificate for the domain you're using
    location ^~ /.well-known/acme-challenge/ {
        auth_basic off;
        root /var/lib/letsencrypt;
        default_type "text/plain";
    }

    # Include this if you want to automatically redirect to HTTPS
    location / {
        return 301 https://XXXXXXX.XXXXXXXX.XXX$request_uri;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name XXXXXXX.XXXXXXXX.XXX;

    large_client_header_buffers 4 32k;

    # Include these if you want to use a specific certificate,
    # you'll need to find the location of the letsencrypt after you get it...
    # so this might need to be updated afterwards
    ssl_certificate /usr/syno/etc/certificate/_archive/XXXXXX/fullchain.pem;
    ssl_certificate_key /usr/syno/etc/certificate/_archive/XXXXXX/privkey.pem;

    # add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload" always;

    # Include this if you want basic authentication required
    # auth_basic "Restricted";
    # auth_basic_user_file /etc/nginx/.htpasswd;

    # Sonarr, requires Sonarr update webhome configuration to match
    location /sonarr {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Connection "";

        proxy_intercept_errors on;
        proxy_http_version 1.1;
        proxy_pass http://localhost:8989;
        proxy_redirect default;
    }
}
Does anyone have any suggestions for diagnosing why the timeout occurs, and hopefully a solution? As I said, the services are running and can be accessed using the NAS address + port, but can't be accessed from outside. nginx is version 1.15.7. Many thanks in advance!
I feel so stupid... turns out that, for whatever reason, the port forwarding rules on my router had reset. Once I restored them, everything works perfectly well.

Bad Gateway with NGINX as reverse proxy

I've been trying to redirect traffic from https://server:443 internally to http://server:8088 using NGINX as a reverse proxy. I can see my service on 8088 is running, since I can access it directly, but when I try to access it over HTTPS on port 443 it gives me a 502 Bad Gateway error. The service I'm running is Apache Superset.
I have already created my cert.pem and key.pem files, and I've tried several combinations in the location section of /etc/nginx/conf.d/default.conf, but no luck so far.
server {
    listen 443 http2 ssl;
    server_name localhost;

    ssl_certificate /etc/ssl/cert.pem;
    ssl_certificate_key /etc/ssl/key.pem;

    location / {
        add_header Front-End-Https on;
        add_header Cache-Control "public, must-revalidate";
        add_header Strict-Transport-Security "max-age=2592000; includeSubdomains";

        proxy_pass_header Authorization;
        proxy_pass http://localhost:8088;
        proxy_redirect off;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Connection "";
    }
}
I'd expect to hit https://server:443 and have it display my service, which is running at http://server:8088.

Is it bad practice to have multiple Unicorn App Servers with one Nginx reverse proxy server?

I have a Linux box running Ubuntu 14.04 with about 50gb of memory.
I've got 5 or 6 Ruby-on-Rails web applications, each with a Unicorn app server, all served by an Nginx reverse proxy server.
Each app is hosted in a sub-directory.
eg:
www.webserver.com/app1
www.webserver.com/app2
Each app gets maybe 50-100 requests per day. They are all little apps to facilitate business processes at my firm.
My Nginx config file looks something like this:
upstream app1 {
    # path to Unicorn SOCK file;
}
upstream app2 {
    # path to Unicorn SOCK file;
}
upstream app3 {
    # path to Unicorn SOCK file;
}
# ...several more apps

server {
    listen 443 ssl;

    access_log #path;
    error_log #path;

    ssl_certificate #path;
    ssl_certificate_key #path;

    add_header X-UA-Compatible "IE=Edge,chrome=1";

    root /srv/apps/app1/public;

    location /app1 {
        proxy_pass http://app1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }

    location /app2 {
        proxy_pass http://app2;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }

    location /app3 {
        proxy_pass http://app3;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }

    # ...several more apps
}
This setup has worked without issue for a year or so, but I have this nagging feeling I'm doing this all wrong....
Am I going to run into problems if I keep adding apps? Is there a better way to do this?
Update:
By "problems," I mean:
static resource path collisions?
memory issues? namely, using more memory than I need to accomplish the same behavior?
And by "a better way to do this," I mean:
other than sending requests to the relevant unicorn server by parsing out the name of the sub-directory in the URL
should I be using a single Nginx reverse proxy to serve multiple apps?
For configuration shared across different apps, you can use the include directive.
For example, create a file named /etc/nginx/global_proxy.conf with this content:
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
And in nginx.conf, for each /appX section:
location /appX {
    proxy_pass http://appX;
    include /etc/nginx/global_proxy.conf;
}
And to increase your security, I recommend adding dhparam and adding this to the SSL configuration:
# SSL
# Drop SSLv3 (POODLE vulnerability)
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
# Recommended ciphers
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
# Diffie-Hellman key exchange (D-H)
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
# Config to enable HSTS (HTTP Strict Transport Security)
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
# Force timeouts if one of the backends is down
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
To generate the dhparam.pem file:
openssl dhparam -out dhparam.pem 4096
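As a rough sketch of how the pieces could fit together, assuming the SSL block above is saved in a hypothetical /etc/nginx/global_ssl.conf alongside global_proxy.conf:

server {
    listen 443 ssl;

    ssl_certificate #path;
    ssl_certificate_key #path;

    # Shared SSL hardening from above (protocols, ciphers, dhparam, HSTS)
    # -- hypothetical include file name
    include /etc/nginx/global_ssl.conf;

    location /app1 {
        proxy_pass http://app1;
        # Shared proxy headers (X-Forwarded-For, Host, proxy_redirect off)
        include /etc/nginx/global_proxy.conf;
    }

    # ...repeat the location block for /app2, /app3, etc.
}

This keeps each per-app location block down to the proxy_pass line plus one include.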
