Reverse proxy to bypass X-Frame-Options - nginx

Good evening everyone,
I have developed a web app for our school that is loaded into a kiosk app on ChromeOS and Windows so that students can take digital tests and exams in a protected environment.
The web app also lets students consult sources; these sources are links to, for example, a news site, Wikipedia, you name it.
Unfortunately, many external links do not work because they are loaded into the web app via an iframe, and nowadays many websites forbid this through response headers such as X-Frame-Options.
I had high hopes for https://github.com/niutech/x-frame-bypass, but unfortunately it no longer works.
I have come to the conclusion that a reverse proxy could offer a solution here, but I have no experience with one, and my research has not made things any clearer. Or are there better/other solutions?
As a test, the configuration below let me load google.be within an iframe; however, I ran into two problems that I hope to find a solution for here.
Issue 1: Images and CSS not loading
The page's images and CSS are requested from the proxy server, and of course that content does not exist on the reverse proxy server.
Issue 2: Every teacher can create exams/tests with their own external sources, so it is impossible to add all those external URLs to the reverse proxy configuration by hand every time.
That's why I thought of deriving the proxy_pass target from the URL requested on the reverse proxy:
Reverse proxy URL: http://sitekiosk.xyz/bypass/google.be
google.be gets used in the proxy_pass
Reverse proxy URL: http://sitekiosk.xyz/bypass/wikipedia.be
wikipedia.be gets used in the proxy_pass
And so on...
location /bypass {
    proxy_set_header Host www.google.be;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass https://google.be/;
    proxy_hide_header Content-Security-Policy;
    proxy_hide_header X-Frame-Options;       # hiding this header is what actually allows framing
    add_header X-Frame-Options "ALLOWALL";   # note: ALLOWALL is not a standard value; browsers ignore it
}
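Extending that test, I imagine the dynamic variant for issue 2 would look something like this (an untested sketch I pieced together from the docs: nginx apparently needs a resolver as soon as proxy_pass contains variables, and issue 1 would still need the proxied HTML rewritten, e.g. with sub_filter):
location ~ ^/bypass/(?<target>[^/]+)/?(?<rest>.*)$ {
    resolver 1.1.1.1;                        # required once proxy_pass contains variables
    proxy_ssl_server_name on;                # send SNI so HTTPS upstreams accept the connection
    proxy_set_header Host $target;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass https://$target/$rest$is_args$args;
    proxy_hide_header Content-Security-Policy;
    proxy_hide_header X-Frame-Options;
}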
Is this technically possible? Can someone help me with this?
Thank you in advance for taking the time to go through this!
My apologies for my Google Translate English :-)

Related

How to add nginx location back to 302 redirect response location

I have a django-cms application running behind an nginx server. I am using proxy_pass to send traffic to the CMS application. I am using location /django-cms, so when I go to https://nginxserver/django-cms it works and sends the traffic to the CMS server; however, the CMS application sends back a 302 response whose Location header contains en/, so the browser tries to hit https://nginxserver/en/ instead of https://nginxserver/django-cms/en. This obviously results in a 404 error. How can I make sure that everything meant for the CMS server hits https://nginxserver/django-cms/?
Here is the relevant section from the nginx.conf file.
location /django-cms {
    auth_request /request_validate;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://10.0.2.29:8000;
}
location /django-cms {
    proxy_pass http://10.0.2.29:8000;
    proxy_redirect ~^/(.*) $scheme://$http_host/django-cms/$1;   # $scheme, not a literal "scheme"
}
The proxy_redirect directive may help you; can you try it? It adds /django-cms to any redirect the backend (CMS) gives you.
It's my first time using it, but that looks like how it's used in the nginx documentation.
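To make the effect concrete, if the backend replies with a root-relative Location header, the directive above should rewrite it on the way out (a sketch of the expected behaviour, not verified against this exact setup):
# upstream responds with:   Location: /en/
# the client then receives: Location: https://nginxserver/django-cms/en/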
(I found another question that has more or less the same problem as yours):
Intercepting backend 301/302 redirects (proxy_pass) and rewriting to another location block possible?
Just in case you also want to check it :D

Nginx Reverse proxy - top-level domain not working - DNS error

I am trying to set up an nginx reverse proxy for my domain and a few of its subdomains. The subdomains work perfectly, but I keep getting ERR_NAME_NOT_RESOLVED on the top-level domain.
Except for the server_name and the proxy_pass port, the nginx config is identical between the top-level domain and its subdomains.
nginx config:
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://localhost:5500;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
DNS settings: [screenshot omitted]
This is more likely to be a DNS issue than an Nginx one, but I don't understand why the subdomains work and the top-level one doesn't.
@AlexeyTen's comment about restarting my browser gave me an idea which ended up fixing my issue.
Basically, I use the Acrylic DNS proxy on my development computer to handle .local domains for development. Most people use the hosts file for adding local domains, but I find that process tedious, as I have worked with hundreds of local domains over the years, so I ended up using this proxy, which accepts wildcard domains and means I never have to touch the hosts file again.
However, in this instance, my local DNS proxy seems to have had a corrupt cache entry for my top-level domain. I purged the cache and restarted the proxy, and that fixed everything. I don't know exactly why this happened, but it's good to know that it can happen, so it would be the first place for me to look if something similar happens in the future.
Thank you to @AlexeyTen for making me think outside the box. While it wasn't the browser's DNS cache, that comment made me realize that perhaps there was nothing wrong with my DNS settings on the server and instead something wrong with my local computer.

Website/webserver fault tolerance - the best practices

For example, I have two servers in the same network, with identical code/software. If the primary server goes down, I want the second one to become primary.
I heard about the following approaches:
Run a proxy server (nginx/haproxy/etc.) in front of these two.
Run CARP - Common Address Redundancy Protocol.
Round-robin DNS.
What are the pros and cons of the above approaches? And what are the best practices to achieve this?
I'm not too familiar with CARP, but I can try to help with the remaining two options.
Round-robin DNS gives you load balancing, but if a server fails it will still receive requests (which will fail too).
I.e.: the DNS record www.example.com points to both x.x.x.1 and x.x.x.2; if x.x.x.2 dies, the name still resolves to x.x.x.2 and clients still try to request from it, which brings your fail rate to half your requests for the duration of the downtime (not good).
Even if you change the DNS to point only to x.x.x.1 during the downtime, DNS propagation takes a long time and you will still lose requests.
In my honest opinion, placing a load balancer (proxy server) in front of your stack is the only way to go.
I'm really fond of HAProxy, but it's by no means the only solution (find what works for you).
Proxy servers give you a lot more control over your application stack in the form of High Availability (HA):
you can load balance between 2 to N backend servers and lose any number of them and still be running;
you can schedule downtime at any time of day for maintenance or deployments without affecting your clients;
built-in health checks poll the backend servers, take them out of the pool as needed, and put them back once they have recovered.
The main con of HA load balancing is usually the number of rules that have to be set up to keep sessions correct or to route special cases; yes, it can get complex, but there is A LOT of support in the community and it's easily learnable.
Another con is that the proxy server itself becomes a single point of failure, but this can be overcome easily with heartbeat (or keepalived) and a second proxy server, as sketched below.
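As a concrete illustration of that pattern, here is a minimal keepalived sketch (keepalived implements VRRP, the same floating-IP idea as heartbeat; every value below is a placeholder):
vrrp_instance VI_1 {
    state MASTER                 # the standby proxy uses: state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100                 # give the standby a lower priority, e.g. 90
    virtual_ipaddress {
        192.0.2.100              # the floating IP that clients connect to
    }
}
If the MASTER proxy dies, the BACKUP takes over the floating IP within seconds, so clients never need to change the address they connect to.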
Hope this answers some of your questions
A good way to make your apps fault tolerant is to use nginx as your load balancer. You can create a config like:
upstream some_name {
    server server_ip;
    server server_ip2;
}
server {
    listen 80;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://some_name;
    }
}
Plus, the servers in this nginx upstream block take further parameters, such as max_fails=10 fail_timeout=20s, and nginx is smart enough to know that if one server goes down it should switch to the next server that's online, and much more besides.
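For instance (a sketch; the IP addresses are placeholders):
upstream some_name {
    server 10.0.0.1:80 max_fails=10 fail_timeout=20s;   # taken out of rotation after 10 failed attempts
    server 10.0.0.2:80 max_fails=10 fail_timeout=20s;   # surviving servers absorb the traffic automatically
}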
Please check the official nginx documentation for more information about it.

Setting up HTTP authentication in a dev box

The situation is that I have a dev box full of different applications, like a Minecraft server, a CouchDB server, and a basic WordPress blog, behind nginx, which handles the forwarding.
They all have their own way of handling logins, but what I'd like to set up is some kind of authentication proxy.
In a sense: intercept all the HTTP requests coming to the server and check whether they are authenticated; if not, return a login page, and if they are, let the request through to WordPress or CouchDB. I could keep a list of users on the server so my friends can log in with a single login.
I've tried googling many different keywords but haven't found out how this could be done with, for example, nginx. I'm a bit of a newbie when it comes to networking, so please help!
Here's a rough example to do HTTP auth in nginx and then proxy the connection to the original source:
location /couchdb {
    auth_basic "Restricted";
    auth_basic_user_file htpasswd;
    rewrite /couchdb/(.*) /$1 break;   # optional, depends on what you're proxying to
    proxy_pass http://localhost:5984;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
The above is from the CouchDB docs, but the general idea is applicable to any HTTP-based app you want to protect. You'd need to repeat this config block for every distinct app.
Note the rewrite in the above setup lets you work with apps that don't expect to live anywhere but the URL root. It is not required.
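For instance (an illustrative CouchDB request):
# client requests:   /couchdb/_all_dbs
# CouchDB receives:  /_all_dbs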
Also note that if you want to have a single page where users log in and then have that login be shared across all your apps, that is much more complicated. That is commonly referred to as Single Sign On and requires a specific configuration for each app that you'd intend to integrate.
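One simplification worth knowing: auth_basic is inherited by nested locations, so a single server-level declaration can gate every proxied app at once (a sketch; the ports and paths are illustrative):
server {
    listen 80;
    auth_basic "Restricted";                  # inherited by every location below
    auth_basic_user_file /etc/nginx/htpasswd;
    location /couchdb/ {
        rewrite /couchdb/(.*) /$1 break;
        proxy_pass http://localhost:5984;
    }
    location / {
        proxy_pass http://localhost:8080;     # e.g. the wordpress blog
    }
}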

How do I use nginx to reverse-proxy an IP camera's mjpeg stream?

I'm using nginx on OpenWRT to reverse-proxy a motion-jpeg feed from an IP camera, but I'm experiencing lag of up to 10-15 seconds, even at quite low frame sizes and rates. With the OpenWRT device removed from the path, the camera can be accessed with no lag at all.
Because of the length of the delay (and the fact that it grows with time), this looks like some kind of buffering/caching issue. I have already set proxy_buffering off, but is there something else I should be watching out for?
Thanks.
I installed mjpg-streamer on an Arduino Yun, and then in my router's settings set up port forwarding, whitelisted to my web server only.
Here is my Nginx config, which lives in the sites-enabled directory.
server {
    listen 80;
    server_name cam.example.com;
    error_log /var/log/nginx/error.cam.log;
    access_log /var/log/nginx/access.cam.log;
    location / {
        set $pp_d http://99.99.99.99:9999/stream_simple.html;
        if ( $args = 'action=stream' ) {
            set $pp_d http://99.99.99.99:9999/$is_args$args;
        }
        if ( $args = 'action=snapshot' ) {
            set $pp_d http://99.99.99.99:9999/$is_args$args;
        }
        proxy_pass $pp_d;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Request-Start $msec;
    }
}
I never got this working to my satisfaction with nginx. Depending on your specific needs, two solutions that may be adequate:
If you can tolerate the stream being on a different port, pass it through using the port-forwarding feature of OpenWRT's built-in firewall.
Use the reverse-proxy capabilities of tinyproxy. The default package has the reverse-proxy capability disabled by a flag, so you need to be comfortable checking out and building it yourself. This method is definitely more fiddly, but it does work.
I'd still be interested to hear of anyone who gets this working with nginx.
I have Nginx on OpenWrt BB (a WNDR3800) reverse-proxying to a D-Link 932LB1 IP cam, and it's working nicely. There is no significant lag, even before I disabled proxy_buffering. If I have a lot of stuff going over the network, the video can get choppy, but no more than it does with a straight-to-camera link from the browser (or from any of my IP cam apps). So... it is possible.
Nginx was the way to go for me. I tried tinyproxy and lighttpd for the reverse proxying, but each has missing features on OpenWrt: both require custom compilation for the full reverse-proxy feature set, and (AFAIK) lighttpd will not accept FQDNs in the proxy directive.
Here's what I have going:
Basic or digest auth on the public-facing Nginx provides site-wide access control.
I proxy my CGI scripts (shell, haserl, etc.) to OpenWrt's uhttpd.
A tightly controlled reverse proxy exposes only the camera's mjpeg & jpeg API; no other camera functions are exposed to the public.
Camera basic-auth is handled by Nginx (proxy_set_header), so no backend authorization code is exposed to the public.
Relatively small footprint (no perl, apache, ruby, etc.).
I would include my nginx.conf here, except there's nothing unusual about it... just the bare bones proxy stuff. You might try tcpdump or wireshark to see what's cluttering your LAN, if traffic is indeed your culprit.
But it sounds like something about your router is the cause of the delay. Maybe the hardware just can't handle the CPU/traffic load, or there could be something else in your OpenWrt setup that is hogging the highway. Is your video smooth and just delayed, or are you seeing seriously choppy video? The lengthening delay you mention does sound like a buffer/cache thing... but I don't know what would be doing that.
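For reference, the "bare bones proxy stuff" mentioned above might look roughly like this (a sketch assembled from the bullet points; the camera address, path, and credentials are all placeholders):
location /cam/ {
    proxy_pass http://192.168.1.20/;                      # camera's LAN address
    proxy_buffering off;                                  # keep the mjpeg stream from backing up
    proxy_set_header Authorization "Basic dXNlcjpwYXNz";  # base64 of "user:pass"
}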
