The situation is that I have a dev box running a bunch of different applications, like a Minecraft server, a CouchDB server, and a basic WordPress blog, behind nginx, which handles the forwarding.
Now they all have their own way of handling logins, but what I'd like to set up is some kind of authentication proxy.
In a sense, intercept all the HTTP requests coming to the server and check whether they are authenticated: if not, return a login page; if they are, let the request through to WordPress or CouchDB. I could keep a list of users on the server so my friends could get in with a single login.
I've tried googling with many different keywords but haven't found out how this could be done with, for example, nginx. I'm a bit of a newbie when it comes to networking, so please help!
Here's a rough example of doing HTTP auth in nginx and then proxying the connection through to the backend:
location /couchdb {
    auth_basic "Restricted";
    auth_basic_user_file htpasswd;
    rewrite /couchdb/(.*) /$1 break;   # Optional, depends on what you're proxying to
    proxy_pass http://localhost:5984;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
The above is from the couchdb docs, but the general idea is applicable to any HTTP-based app you want to protect. You'd need to repeat this config block for every distinct app.
Note that the rewrite in the above setup lets you work with apps that don't expect to live anywhere but the URL root; it is not required.
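For example, protecting the WordPress blog behind the same credentials might look roughly like this; the port is only a guess for wherever WordPress actually listens, and no rewrite is needed if WordPress is already configured to live under /blog:

location /blog {
    auth_basic "Restricted";
    auth_basic_user_file htpasswd;
    proxy_pass http://localhost:8080;   # hypothetical WordPress backend
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

The htpasswd file itself can be generated with the htpasswd utility from apache2-utils (for example, htpasswd -c /etc/nginx/htpasswd alice) or any similar generator.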
Also note that if you want a single page where users log in once and then have that login shared across all your apps, that is much more complicated. That is commonly referred to as single sign-on (SSO) and requires specific configuration for each app you intend to integrate.
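If you do head in that direction, nginx's auth_request module (it has to be compiled in with --with-http_auth_request_module) is one common building block. A very rough sketch, nowhere near a complete SSO setup, with all endpoints and ports made up for illustration:

location / {
    auth_request /auth;                # subrequest must return 2xx for the request to continue
    error_page 401 = @login;           # otherwise send the user to a login page
    proxy_pass http://localhost:8080;  # hypothetical protected app
}

location = /auth {
    internal;
    proxy_pass http://localhost:9000/validate;   # hypothetical auth service that checks a session cookie
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}

location @login {
    return 302 /login;
}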
Related
Good evening everyone,
I have developed a web app for our school that is loaded into a kiosk app on ChromeOS and Windows so that students can take digital tests and exams in a protected environment.
The web app also lets students consult sources; these sources are links to, for example, a news site, Wikipedia, you name it.
Unfortunately, many external links do not work because they are loaded into the web app via an iframe, and nowadays many websites disallow this by sending headers such as X-Frame-Options.
I had hopes for https://github.com/niutech/x-frame-bypass, but unfortunately it no longer works.
I have also come to the conclusion that a reverse proxy could offer a solution here, but I have no experience with this, and my research has not made it any clearer. Or are there even better/other solutions?
As a test, I was able to get google.be to load within an iframe using the configuration below; however, I run into two problems that I hope can be solved this way.
Issue 1: Images and CSS not loading
The page's links to images and CSS now point at the proxy server, and of course that content does not exist on the reverse proxy server.
Issue 2: Every teacher can create exams/tests with their own external sources; it is impossible to add all those external URLs to the reverse proxy every time.
That's why I thought of deriving the URL for proxy_pass from the URL requested on the reverse proxy.
Reverse proxy URL: http://sitekiosk.xyz/bypass/google.be
google.be gets used in the proxy_pass
Reverse proxy URL: http://sitekiosk.xyz/bypass/wikipedia.be
wikipedia.be gets used in the proxy_pass
And so on...
location /bypass {
    proxy_set_header Host www.google.be;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass https://google.be/;
    proxy_hide_header Content-Security-Policy;
    proxy_hide_header X-Frame-Options;
    add_header X-Frame-Options "ALLOWALL";
}
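What I have in mind for issue 2, completely untested on my side, is capturing the target host from the request path, roughly like this (the resolver address and the capture names are just placeholders):

location ~ ^/bypass/(?<target>[^/]+)(?<rest>.*)$ {
    resolver 8.8.8.8;                  # needed because the upstream is now a variable
    proxy_set_header Host $target;
    proxy_ssl_server_name on;          # my assumption: send SNI for the captured host
    proxy_pass https://$target$rest$is_args$args;
    proxy_hide_header Content-Security-Policy;
    proxy_hide_header X-Frame-Options;
}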
Is this technically possible? Can someone help me with this?
Thank you in advance for taking the time to go through this!
My apologies for my Google Translate English :-)
I have a django-cms application running behind an nginx server. I am using proxy_pass to send traffic to the CMS application. I am using location /django-cms, so when I go to https://nginxserver/django-cms it actually works and sends the traffic to the CMS server; however, the CMS application sends back a 302 response containing Location: en/, so the browser tries to hit https://nginxserver/en/ instead of https://nginxserver/django-cms/en. This obviously results in a 404 error. How can I make sure that everything meant for the CMS server hits https://nginxserver/django-cms/?
Here is the relevant section from the nginx.conf file.
location /django-cms {
    auth_request /request_validate;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://10.0.2.29:8000;
}
location /django-cms {
    proxy_pass http://10.0.2.29:8000;
    proxy_redirect ~^/(.*) $scheme://$http_host/django-cms/$1;
}
The proxy_redirect directive may help you; can you try it? It prepends /django-cms to any redirect the backend (CMS) gives you.
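For example, as I read the documentation, a backend redirect of Location: /en/ should reach the browser as Location: https://nginxserver/django-cms/en/. Note that the pattern above expects a leading slash, so a bare en/ like in your case might need the regex loosened.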
It's my first time using it, but it looks like that's how it's used in the nginx documentation.
(Found another question that has kind of the same problem as yours):
Intercepting backend 301/302 redirects (proxy_pass) and rewriting to another location block possible?
Just in case you also want to check it :D
I have a problem with a particular nginx setup. The scenario is like this: applications need to access a CouchDB service via an nginx proxy. nginx needs to set an Authorization header in order to get access to the backend. The problem is that the backend service endpoint's DNS changes sometimes, and that causes my services to stop working until I reload nginx.
I'm trying to set up the upstream as a variable, but when I do that, authorization stops working and the backend returns 403. When I just use the upstream directive, it works just fine. The upstream variable has the correct value, and there are no errors in the logs.
The config snippet below:
set $backend url.to.backend;

location / {
    proxy_pass https://$backend/api;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header Host url.to.backend;
    proxy_set_header Authorization "Basic <authorization_gibberish>";
    proxy_temp_path /mnt/nginx_proxy;
}
Any help will be appreciated.
Unless you have the commercial version, nginx caches the resolution of an upstream (proxy_pass is basically a "one server upstream"), so the only way to re-resolve it is to perform a restart or reload of the configuration. This is assuming the changing DNS is the issue.
From the upstream module documentation:
Additionally, the following parameters are available as part of our commercial subscription:
...
resolve - monitors changes of the IP addresses that correspond to a domain name of the server, and automatically modifies the upstream configuration without the need of restarting nginx (1.5.12)
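If you are on the open-source version, one workaround that is often suggested is to keep the variable form of proxy_pass but add a resolver directive, which makes nginx re-resolve the name at request time rather than once at startup. A sketch along the lines of your config (the resolver address is just an example; use whatever DNS server you trust):

resolver 8.8.8.8 valid=30s;   # valid= caps how long the answer is cached

set $backend url.to.backend;

location / {
    proxy_pass https://$backend/api;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header Host url.to.backend;
    proxy_set_header Authorization "Basic <authorization_gibberish>";
}

One caveat that may also explain your 403: when proxy_pass contains variables and a URI part, nginx replaces the entire request URI with that part (/api here) instead of appending to it, so the backend sees different paths than it does with the plain form.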
I deployed a Meteor app to a DigitalOcean droplet and mapped that to a domain. I'm pretty new to server management, so I followed a guide to set up a reverse proxy with nginx to point to the correct port (the Meteor app is served on port 3000).
I created a file called trackburnr.com in /etc/nginx/sites-available with this content:
server {
    listen 80;
    server_name trackburnr.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
And then started/restarted the nginx service.
Now, here's the catch. If I navigate to trackburnr.com:3000, it always works. So I'm confident my droplet and DNS record on the domain works fine.
If I navigate to trackburnr.com, it seems like it's working fine, but if I refresh the page after a few minutes or navigate to it with another browser, it returns the "page not found" page from my internet provider.
If I restart the service, it usually works fine for another few minutes and then stops working again.
There are several guides about this as it's a popular setup for deploying meteor apps, but they all use this same approach.
Following another answer on here, I tried setting the proxy_pass target as a variable beforehand and passing it, but with no success.
Has anyone encountered similar issues?
I think I figured it out. My domain provider had a DNS redirect set up which redirected trackburnr.com to www.trackburnr.com. Obviously that subdomain wasn't mapped in nginx.
I reversed the redirect so that www redirected to the non-www version, and that seemed to do the trick.
After that I was running into 400 Bad Request errors. I attribute this to the Google Analytics code in my header, which made the cookies too big. I fixed this by adding large_client_header_buffers 4 16k; to my server block in the nginx conf file. More info about that here
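For anyone else hitting this, the directive just sits inside the server block alongside the existing proxy configuration, roughly like this:

server {
    listen 80;
    server_name trackburnr.com;

    # larger header buffers so big cookies (e.g. analytics) don't trigger 400 Bad Request
    large_client_header_buffers 4 16k;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}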
I'm using nginx on OpenWRT to reverse-proxy a motion-jpeg feed from an IP camera, but I'm experiencing lag of up to 10-15 seconds, even at quite low frame sizes and rates. With the OpenWRT device removed from the path, the camera can be accessed with no lag at all.
Because of the length of the delay (and the fact that it grows with time), this looks like some kind of buffering/caching issue. I have already set proxy_buffering off, but is there something else I should be watching out for?
Thanks.
I installed mjpg-streamer on an Arduino Yun, and then in my router's settings set up port forwarding, whitelisted to my web server only.
Here is my Nginx config which lives in the sites-enabled directory.
server {
    listen 80;
    server_name cam.example.com;

    error_log /var/log/nginx/error.cam.log;
    access_log /var/log/nginx/access.cam.log;

    location / {
        set $pp_d http://99.99.99.99:9999/stream_simple.html;

        if ( $args = 'action=stream' ) {
            set $pp_d http://99.99.99.99:9999/$is_args$args;
        }
        if ( $args = 'action=snapshot' ) {
            set $pp_d http://99.99.99.99:9999/$is_args$args;
        }

        proxy_pass $pp_d;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Request-Start $msec;
    }
}
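As a side note, the initial set and the two if blocks could probably be replaced by a single map in the http context; an untested sketch with the same placeholder address:

map $args $pp_d {
    default             http://99.99.99.99:9999/stream_simple.html;
    "action=stream"     "http://99.99.99.99:9999/?action=stream";
    "action=snapshot"   "http://99.99.99.99:9999/?action=snapshot";
}

That avoids the usual caveats around if inside location blocks.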
I never got this working to my satisfaction with nginx. Depending on your specific needs, two solutions which may be adequate:
if you can tolerate the stream being on a different port, pass it through using the port forwarding feature of OpenWRT's built-in firewall.
use the reverse-proxy capabilities of tinyproxy. The default package has the reverse-proxy capabilities disabled by a flag, so you need to be comfortable checking out and building it yourself. This method is definitely more fiddly, but does also work.
I'd still be interested to hear of anyone who gets this working with nginx.
I have Nginx on Openwrt BB (wndr3800) reverse-proxying to a dlink 932LB1 ip cam, and it's working nicely. No significant lag, even before I disabled proxy_buffering. If I have a lot of stuff going over the network, the video can get choppy, but no more than it does with a straight-to-camera link from the browser (or from any of my ip cam apps). So... it is possible.
Nginx was the way to go for me. I tried tinyproxy & lighttpd for the reverse proxying, but each has missing features on OpenWrt. Both tinyproxy and lighttpd require custom compilation for the full reverse proxy features, and (AFAIK) lighttpd will not accept FQDNs in the proxy directive.
Here's what I have going:
Basic or digest auth on public facing Nginx provides site-wide access control.
I proxy my CGI scripts (shell, haserl, etc) to Openwrt's uhttpd.
Tightly controlled reverse-proxy to the camera mjpeg & jpeg API; no other camera functions are exposed to the public.
Camera basic-auth handled by Nginx (proxy_set_header), so no backend authorization code is exposed to the public (a small illustration follows this list).
Relatively small footprint (no perl, apache, ruby, etc).
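To illustrate that camera-auth point, the idea is roughly the following; the address and the base64 string are placeholders, not my real values:

location /cam/ {
    proxy_pass http://192.168.1.50/;                       # placeholder camera address
    proxy_set_header Authorization "Basic dXNlcjpwYXNz";   # "user:pass" base64-encoded, placeholder
    proxy_buffering off;
}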
I would include my nginx.conf here, except there's nothing unusual about it... just the bare bones proxy stuff. You might try tcpdump or wireshark to see what's cluttering your LAN, if traffic is indeed your culprit.
But it sounds like something about your router is the cause of the delay. Maybe the hardware just can't handle the cpu/traffic load, or there could be something else on your Openwrt setup that is hogging the highway. Is your video smooth and just delayed? Or are you seeing seriously choppy video? The lengthening delay you mention does sound like a buffer/cache thing... but I don't know what would be doing that.