How to make BrowserSync work with an nginx proxy server?

(If needed, please see my last question for some more background info.)
I'm developing an app that uses a decoupled front- and backend:
The backend is a Rails app (served on localhost:3000) that primarily provides a REST API.
The frontend is an AngularJS app, which I'm building with Gulp and serving locally (using BrowserSync) on localhost:3001.
To get the two ends to talk to each other, while honoring the same-origin policy, I configured nginx to act as a proxy between the two, available on localhost:3002. Here's my nginx.conf:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 3002;
        root /;

        # Rails
        location ~ \.(json)$ {
            proxy_pass http://localhost:3000;
        }

        # AngularJS
        location / {
            proxy_pass http://localhost:3001;
        }
    }
}
Basically, any request for a .json file is sent to the Rails server, and any other request (e.g., for static assets) is sent to the BrowserSync server.
The BrowserSync task from my gulpfile.coffee:
gulp.task 'browser-sync', ->
  browserSync
    server:
      baseDir: './dist'
      directory: true
    port: 3001
    browser: 'google chrome'
    startPath: './index.html#/foo'
This all basically works, but with a couple of caveats that I'm trying to solve:
When I run the gulp task, based on the configuration above, BrowserSync loads a Chrome tab at http://localhost:3001/index.html#/foo. Since I'm using the nginx proxy, though, I need the port to be 3002. Is there a way to tell BrowserSync, "run on port 3001, but start on port 3002"? I tried using an absolute path for startPath, but it only expects a relative path.
I get a (seemingly benign) JavaScript error in the console every time BrowserSync starts: WebSocket connection to 'ws://localhost:3002/browser-sync/socket.io/?EIO=3&transport=websocket&sid=m-JFr6algNjpVre3AACY' failed: Error during WebSocket handshake: Unexpected response code: 400. Not sure what this means exactly, but my assumption is that BrowserSync is somehow confused by the nginx proxy.
How can I fix these issues to get this running seamlessly?
Thanks for any input!

To get more control over how the page is opened, use opn instead of BrowserSync's built-in mechanism. Something like this (in JS; sorry, my CoffeeScript is a bit rusty):
browserSync({
  server: {
    // ...
  },
  open: false,
  port: 3001
}, function (err, bs) {
  // bs.options.urls.local contains the original URL, so
  // replace the port with the correct one:
  var url = bs.options.urls.local.replace(':3001', ':3002');
  require('opn')(url);
  console.log('Started browserSync on ' + url);
});
I'm unfamiliar with Nginx, but according to this page, the solution to the second problem might look something like this:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    # ...

    # BrowserSync websocket
    location /browser-sync/socket.io/ {
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
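As written, the map block above is never actually referenced; the standard nginx websocket-proxying idiom uses the mapped $connection_upgrade variable so the Connection header only says "upgrade" when the client asked for it. A sketch of that variant (not part of the original answer):
# BrowserSync websocket, using the map defined above
location /browser-sync/socket.io/ {
    proxy_pass http://localhost:3001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    # $connection_upgrade is "upgrade" for websocket requests and "close" otherwise
    proxy_set_header Connection $connection_upgrade;
}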

I only succeeded by appending /browser-sync/socket.io to the proxy_pass URL.
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    # ...

    # BrowserSync websocket
    location /browser-sync/socket.io/ {
        proxy_pass http://localhost:3001/browser-sync/socket.io/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}

You can also do this from the gulp/browsersync side very simply by using its proxy option:
gulp.task('browser-sync', function() {
  browserSync({
    // ...
    proxy: 'localhost:3002'
  });
});
This means your browser connects to BrowserSync directly, as it normally does via gulp, except that BrowserSync now proxies nginx. As long as your front end isn't hard-coding hosts or ports in URLs, requests to Rails will go through the proxy and share the same origin, so you can still POST and so on. This can be desirable because the development-only change lives in the development tooling (gulp + BrowserSync) rather than in a conditionalized nginx config that also runs in production.
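One caveat not covered above: BrowserSync's server and proxy options are alternatives, so in proxy mode BrowserSync no longer serves ./dist itself. That means nginx's location / should serve the built files directly rather than proxy back to port 3001, or requests would loop. A rough sketch of the adjusted server block, with a hypothetical path for the build output:
server {
    listen 3002;
    # Serve the built frontend directly; BrowserSync now sits in front of nginx.
    root /path/to/project/dist;   # hypothetical build output path

    # Rails API
    location ~ \.(json)$ {
        proxy_pass http://localhost:3000;
    }

    # Everything else is a static asset or the Angular entry point
    location / {
        try_files $uri $uri/ /index.html;
    }
}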

Setup for BrowserSync to work with a Python (Django) app that runs on uWSGI, via websocket. The Django app is mounted under the /app prefix, so its URLs look like http://example.com/app/admin/.
server {
    listen 80;
    server_name example.com;
    charset utf-8;
    root /var/www/example/htdocs/static;
    index index.html index.htm;
    try_files $uri $uri/ /index.html?$args;

    location /app {
        ## uWSGI setup
        include /etc/nginx/uwsgi_params;
        uwsgi_pass unix:///var/run/example/uwsgi.sock;
        uwsgi_param SCRIPT_NAME /app;
        uwsgi_modifier1 30;
    }

    location /media {
        alias /var/www/example/htdocs/storage;
    }

    location /static {
        alias /var/www/example/htdocs/static;
    }
}
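The config above only covers the Django app and its static files; to get BrowserSync's websocket through the same server, a location like the one from the earlier answers would need to be added inside the server block. A sketch, assuming BrowserSync listens on port 3001:
# BrowserSync websocket (assumes BrowserSync is running on port 3001)
location /browser-sync/socket.io/ {
    proxy_pass http://localhost:3001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}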

Related

NGINX proxy_pass to defined upstream instead of https url directly

I have an nginx config that looks similar to this (simplified):
http {
    server {
        listen 80 default_server;

        location /api {
            proxy_pass https://my-bff.azurewebsites.net;
            proxy_ssl_server_name on;
        }
    }
}
Essentially, I have a reverse proxy to an API endpoint that uses https.
Now, I would like to convert this to an upstream group to gain access to keepalive and other features. So I tried this:
http {
    upstream bff-app {
        server my-bff.azurewebsites.net:443;
    }

    server {
        listen 80 default_server;

        location /api {
            proxy_pass https://bff-app;
            proxy_ssl_server_name on;
        }
    }
}
Yet it doesn't work. Clearly I'm missing something.
In summary, how do I correctly do this "conversion" i.e. from url to defined upstream?
I have tried switching between http and https in the proxy_pass directive, but that didn't work either.
I was honestly expecting this to be a simple replacement, one upstream for another, but it seems I'm doing something wrong.
Richard Smith pointed me in the right direction.
Essentially, the issue was that the host header was being set to "bff-app" instead of "my-bff.azurewebsites.net" and this caused the remote server to close the connection.
Fixed by specifying header manually like below:
http {
    upstream bff-app {
        server my-bff.azurewebsites.net:443;
    }

    server {
        listen 80 default_server;

        location /api {
            proxy_pass https://bff-app;
            proxy_ssl_server_name on;
            # Manually set the Host header to "my-bff.azurewebsites.net";
            # otherwise it will default to "bff-app".
            proxy_set_header Host my-bff.azurewebsites.net;
        }
    }
}
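If the TLS handshake to the backend still fails after this, note that the SNI name has the same default problem as the Host header: with proxy_ssl_server_name on, nginx sends the value of proxy_ssl_name, which defaults to the host in proxy_pass, i.e. "bff-app". A hedged sketch of that extra directive:
location /api {
    proxy_pass https://bff-app;
    proxy_ssl_server_name on;
    # SNI also defaults to the upstream name; point it at the real hostname.
    proxy_ssl_name my-bff.azurewebsites.net;
    proxy_set_header Host my-bff.azurewebsites.net;
}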

nginx upstream proxy_pass not working for heroku?

The nginx config below works fine if I hardcode my Heroku app (backend API) in the proxy_pass section:
http {
    server {
        listen 8080;

        location / {
            proxy_pass http://my-app.herokuapp.com;
        }
    }
}

events { }
However, if I try to move this into an upstream directive, it goes to a 404 page. I want to use an upstream because I have other Heroku apps as well, across which I want to load balance my requests.
This is the config which is not working:
http {
    upstream backend {
        server my-app.herokuapp.com;
    }

    server {
        listen 8080;

        location / {
            proxy_pass http://backend;
        }
    }
}

events { }
These are all the things I tried after checking other SO answers:
Adding a Host header while proxying: proxy_set_header Host $host;
Adding an extra slash at the end of backend.
In the upstream directive, using server my-app.herokuapp.com:80 instead of just server my-app.herokuapp.com.
In the upstream directive, using server my-app.herokuapp.com:443 instead of just server my-app.herokuapp.com. This gives a timeout, probably because Heroku doesn't allow 443 (or maybe I didn't configure it).
Found the issue: I was adding the wrong host. For Heroku, you need to set the Host header to exactly your app's hostname.
If your Heroku app's hostname is my-app.herokuapp.com, then you need to add this line:
proxy_set_header Host my-app.herokuapp.com;
Full working config below:
http {
    upstream backend {
        server my-app.herokuapp.com;
    }

    server {
        listen 8080;

        location / {
            proxy_pass http://backend;
            proxy_set_header Host my-app.herokuapp.com;
        }
    }
}

events { }

Nginx Proxy Pass not passing port

I'm working locally for the moment.
I have an NGINX configuration for nuxtwoo.example.com.
Whenever I visit nuxtwoo.example.com, I need it to proxy localhost:3000, which is working fine; however, I also need it to pass the port :3000.
location / {
    proxy_pass http://localhost:3000;
}
What I need,
http://nuxtwoo.example.com -> proxy_pass to localhost:3000 -> URL in browser: nuxtwoo.example.com:3000.
This also needs to work for other paths; for example, nuxtwoo.example.com/blog should proxy_pass to localhost:3000/blog, and the browser URL should be nuxtwoo.example.com:3000/blog.
Can't seem to figure this one out.
You need to use an upstream
upstream http_backend {
    server 127.0.0.1:8080;
    keepalive 16;
}

server {
    ...

    location /http/ {
        proxy_pass http://http_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}
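Adapted to the setup in the question, this would look something like the sketch below (the upstream name and keepalive value are illustrative; it assumes the Nuxt app really is listening on 127.0.0.1:3000):
upstream nuxt_backend {
    server 127.0.0.1:3000;
    keepalive 16;
}

server {
    server_name nuxtwoo.example.com;

    location / {
        proxy_pass http://nuxt_backend;
        proxy_http_version 1.1;
        # Required for keepalive connections to the upstream
        proxy_set_header Connection "";
    }
}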

Nginx supervisord configuration

I have a supervisord server running on localhost:9001.
I am trying to serve it at localhost/supervisord.
The nginx config is like this:
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /tmp/nginx.pid;
#daemon off;

events {
    worker_connections 1024;
}

http {
    # MIME / Charset
    default_type application/octet-stream;
    charset utf-8;

    # Logging
    access_log /var/log/nginx/access.log;

    # Other params
    server_tokens off;
    tcp_nopush on;
    tcp_nodelay off;
    sendfile on;

    upstream supervisord {
        server localhost:9001;
    }

    server {
        listen 80;
        client_max_body_size 4G;
        keepalive_timeout 5;

        location ^~ /stylesheets {
            alias /Users/ocervell/.virtualenvs/ndc-v3.3/lib/python2.7/site-packages/supervisor/ui/stylesheets;
            access_log off;
        }

        location ^~ /images {
            alias /Users/ocervell/.virtualenvs/ndc-v3.3/lib/python2.7/site-packages/supervisor/ui/images;
            access_log off;
        }

        location /supervisord {
            # Set client IP / Proxy IP
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;

            # Set host header
            proxy_set_header Host $http_host;
            proxy_redirect off;

            proxy_pass http://supervisord/;
        }
    }
}
Before adding the ^~ /images and ^~ /stylesheets locations, the page was returning a 502 Bad Gateway.
With the above config I am able to access localhost/supervisord but the CSS is missing on the page.
I can see the CSS / images are loaded correctly in the browser. But I see an error message in the browser console, and it seems to be the culprit:
The MIME type shown in the browser for localhost/stylesheets/supervisor.css is octet-stream instead of text/css.
The MIME type shown in the browser for localhost:9001/stylesheets/supervisor.css is the correct text/css.
How can I fix this error?
I thought about dynamically rewriting the MIME type for static files, but I am not an expert in nginx and have no idea how to do that from the nginx config.
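One likely cause, judging from the config above: the http block sets default_type but never includes mime.types, so files nginx serves itself (the aliased stylesheets and images) fall back to application/octet-stream. A minimal sketch of the fix, assuming the stock mime.types file shipped with nginx:
http {
    # Map file extensions to MIME types so .css is served as text/css
    # instead of falling back to default_type.
    include mime.types;
    default_type application/octet-stream;
    charset utf-8;
    # ... rest of the existing configuration ...
}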
It's really interesting that such an obvious task, putting web interfaces behind a transparent reverse proxy, is not as straightforward to configure as it should be.
Anyway, this is what I do to get a reverse proxy working with applications whose URL root can't be modified, like supervisor:
location /supervisor {
    proxy_pass http://127.0.0.1:9001/;
}

location / {
    if ($http_referer ~ "^.*/supervisor") {
        return 301 /supervisor/$request_uri;
    }
}
Application-side requests will hit the main endpoint, but nginx will then redirect them to the /supervisor endpoint.
This works in most cases, but not always. The following supervisor web functions will fail:
Action confirmations: you can start / stop services, but the result page will fail to load; just go back to the /supervisor endpoint to check the result.
Live tail does not work; however, there is a log display with manual refresh that works under the link with the program name.
Anyway, this partial support is good enough for me, and you may find it useful too.
I was able to get it working simply with this:
upstream supervisor {
    server 127.0.0.1:9001;
}

server {
    # ...

    location /supervisor/ {
        proxy_pass http://supervisor/;
    }
}
It even worked in the browser with and without a trailing slash in the URL (i.e., both http://example.com/supervisor and http://example.com/supervisor/ worked), which was a must for me!

How to expose server port in upstream block of Nginx configuration?

We have a handful of homogeneous applications loosely following an SOA pattern. Because of that homogeneity, we have been able to define a neat pattern in nginx to proxy all of our SOA apps through one configuration. The following nginx configuration works wonders in conjunction with dnsmasq: it resolves anything.yourdomain.devel (e.g., a.stackoverflow.devel, b.stackoverflow.devel) and routes it to the appropriate app server under your project folder, via ports designated in maps.
worker_processes 2;

events {
    worker_connections 1024;
}

http {
    map $host $static_content_root {
        hostnames;
        default /path/to/project/folder;
        # For typical standalone apps living in your project directory
        # *.myapp.local.devel -> /path/to/project/myapp/public
        ~^([^\.]+\.)*(?<app>[^\.]+)\.devel$ /path/to/project/folder/$app/public; # rails pattern
    }

    map $app $devel_proxy_port1 {
        default 3000;
        domain1 3000;
        domain2 4000;
    }

    map $app $devel_proxy_port2 {
        default 3001;
        domain1 3001;
        domain2 4001;
    }

    server {
        listen 127.0.0.1;
        server_name ~^([^\.]+\.)*(?<app>[^\.]+)\.[^\.]+.devel$;

        location / {
            root $static_content_root; # Using the map we defined earlier
            try_files $uri $uri/index.html @dynamic;
        }

        location @dynamic {
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forward-Proto http;
            proxy_set_header X-Nginx-Proxy true;
            proxy_redirect off;
            proxy_pass http://127.0.0.1:$devel_proxy_port1;
        }
    }
}
Now, in order to simulate multiple servers behind an nginx load balancer, I thought of the following proxy configuration, which points to an upstream rather than directly at one server:port pair.
proxy_pass http://backend;

upstream backend {
    server http://127.0.0.1:$devel_proxy_port1;
    server http://127.0.0.1:$devel_proxy_port2;
}
I thought the above would work, but it always emits the following error, hinting that variables from map blocks are not available inside the upstream context.
[emerg] 69478#0: invalid host in upstream "http://127.0.0.1:$devel_proxy_port1" in /usr/local/etc/nginx/nginx.conf:57
Is this an expected behavior?
Yes, variables cannot be used inside upstream. You can create a few upstream blocks with different names (upstream backend, upstream backend_domain1, etc.), resolve the upstream name through a map, and pass that variable to proxy_pass:
upstream backend {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

upstream backend_domain1 {
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

upstream backend_domain2 {
    server 127.0.0.1:3004;
    server 127.0.0.1:3005;
}

...

upstream backend_domain30 {
    server 127.0.0.1:3060;
    server 127.0.0.1:3061;
}
map $app $devel_proxy {
    default backend;
    domain1 backend_domain1;
    domain2 backend_domain2;
    ...
    domain30 backend_domain30;
}
...
proxy_pass http://$devel_proxy;
...
In some cases you can skip the map block by using $app inside proxy_pass (proxy_pass http://backend_$app;), but you then need additional checks on the $app values. Also, a map allows you to map different "domains" to the same application.
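A sketch of that shortcut, under the assumption that an upstream named backend_<app> is defined for every app the server_name regex can capture:
# Sketch only: rely on the backend_<app> naming convention instead of a map.
# If $app does not match a defined upstream, nginx falls back to a DNS lookup
# (which requires a resolver) and the request will otherwise fail.
location @dynamic {
    proxy_pass http://backend_$app;
}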
