Trying to set up nginx as a reverse proxy for Facebook, but getting a Facebook error - nginx

My father's job requires him to use a VPN to access anything work-related (email, websites, etc.) outside his office, and the VPN blocks Facebook. Currently, whenever he wants to use Facebook, he has to log off the VPN first. He asked me if I could set up something to get around that, so I am attempting to set up NGINX on Debian 9 to act as a reverse proxy; however, I have very little experience with NGINX. I have found that if I include proxy_set_header Host $host; then I can get to Facebook, but I see
"Sorry, something went wrong.
We're working on getting this fixed as soon as we can."
But if I don't include it, the VPN still blocks Facebook.
Any advice?
nginx config
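The actual config isn't shown, but a minimal sketch of the kind of setup being described might look like the following; the server_name, certificate paths, and file location are assumptions for illustration, not the asker's real values:

    # /etc/nginx/sites-enabled/fb-proxy  (hypothetical file and names)
    server {
        listen 443 ssl;
        server_name fb.example.home;            # assumed internal hostname

        ssl_certificate     /etc/nginx/fb.crt;  # assumed self-signed pair
        ssl_certificate_key /etc/nginx/fb.key;

        location / {
            proxy_pass https://www.facebook.com;
            # The line from the question: forward the client's own Host
            # header, which Facebook rejects because it isn't a hostname
            # it expects to serve.
            proxy_set_header Host $host;
        }
    }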

You're not going to be able to reverse proxy Facebook, for a few reasons:
facebook.com isn't going to load from an alternative hostname, such as kyles-facebook-proxy-clone.com. The browser sends a request header, Host, and Facebook's servers won't serve a hostname they aren't expecting (see the curl sketch after this list).
Undoubtedly there's client-side JavaScript hardcoded to other hostnames you aren't proxying (for API access, CDNs for images/video, etc.) that will break, unless you also rewrote the pages in your proxy (which isn't reasonably possible due to obfuscation).
You can't serve traffic for facebook.com without having a properly signed certificate for HTTPS. HTTPS is required for facebook.com due to HSTS.
Even if you managed to get a certificate, it isn't going to work due to key pinning.
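The Host-header point is easy to see from a shell. A hedged illustration (the exact response varies, but it won't be the normal page):

    # A hostname Facebook's servers expect: returns a normal response
    curl -sI https://www.facebook.com/ | head -n 1

    # A hostname they don't expect: the request is refused or errors out
    curl -sI https://www.facebook.com/ \
         -H "Host: kyles-facebook-proxy-clone.com" | head -n 1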
What can you do?
Use a proper proxy server (a sketch follows this list).
Use Tor.
Ask for Facebook to be let through on the VPN.
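On the first suggestion, one common lightweight form of a "proper" proxy is an SSH-based SOCKS tunnel to a machine outside the VPN. The hostname below is hypothetical, and whether this is acceptable under the employer's VPN policy is a separate question:

    # Open a local SOCKS5 proxy on port 1080, tunnelled through a home
    # server that is not subject to the VPN's filtering (assumed host):
    ssh -N -D 1080 user@home-server.example.com
    # Then point the browser's SOCKS proxy setting at localhost:1080.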

Related

When implementing a web proxy, how should the server report lower-level protocol errors?

I'm implementing an HTTP proxy. Sometimes when a browser makes a request via my proxy, I get an error such as ECONNRESET, Address not found, and the like. These indicate errors below the HTTP level. I'm not talking about bugs in my program, but about how other servers behave when I send them an HTTP request.
Some servers might simply not exist, others close the socket, and still others not answer at all.
What is the best way to report these errors to the caller? Is there a standard method that, if I use it, browsers will convert my HTTP message to an appropriate error message? (i.e. they get a reply from the proxy that tells them ECONNRESET, and they act as though they received the ECONNRESET themselves).
If not, how should it be handled?
Motivations
I really want my proxy to be totally transparent, so that the browser or other client works exactly as if it weren't connected through it. That means I want to replicate the organic behavior of errors such as ECONNRESET, instead of sending an HTTP message with an error code, which would be totally different behavior.
I kind of thought that was the intention when writing an HTTP proxy.
There are several things to keep in mind.
Firstly, if the client is configured to use the proxy (which I'd actually recommend), then it will fundamentally behave differently than if it were connecting directly out over the Internet. This is mostly invisible to the user, but affects things like:
FTP URLs
some caching differences
authentication to the proxy if required
reporting of connection errors, etc. <= your question
In the case of reporting errors, a browser will show a connectivity error if it can't connect to the proxy or open a tunnel via the proxy. For upstream errors, the proxy provides the error page (depending on the error; e.g., if a response has already been sent, the proxy can't do much except close the connection). This page won't look anything like your browser's own error page.
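For upstream failures, the conventional mapping is 502 Bad Gateway (or 504 Gateway Timeout for timeouts). A minimal sketch of the kind of response a proxy might synthesize; the wording of the body is an assumption:

    HTTP/1.1 502 Bad Gateway
    Content-Type: text/html
    Connection: close

    <html><body><h1>502 Bad Gateway</h1>
    <p>The upstream connection failed (e.g. ECONNRESET).</p></body></html>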
If the browser is NOT configured to use a proxy, then you would need to divert or intercept the connection to the proxy. This can cause problems if you decide you want to authenticate your users against the proxy (to identify them / implement user-specific rules etc).
Secondly, HTTPS can be a real pain in the neck, and the problem is growing as more and more sites move to HTTPS only. There are several issues:
browsers configured to use a proxy will, for HTTPS URLs, first open a tunnel via the proxy using the CONNECT method. If your proxy wants to block this, any information it provides in the block response is ignored by the browser; instead you get the generic browser connectivity error page.
if you want to provide any of the other benefits one normally wants from a proxy (e.g. caching, scanning, etc.), you need to implement a MITM (man-in-the-middle) and spoof server SSL certificates. In fact, you need to do this even if you just want to send back a block page to deny things.
There is a way for a browser to act a bit more like it was directly connected while still going via a proxy, and that's SOCKS. SOCKS has a way to return an error code if there's an upstream connection error; it's not the actual socket error code, however.
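For reference, the SOCKS5 reply field (RFC 1928, section 6) only offers coarse-grained codes, which is why the client never sees the precise socket error. The error mappings in parentheses are a common-sense convention, not part of the RFC:

    REP 0x00  succeeded
    REP 0x01  general SOCKS server failure
    REP 0x02  connection not allowed by ruleset
    REP 0x03  network unreachable   (e.g. ENETUNREACH)
    REP 0x04  host unreachable      (e.g. EHOSTUNREACH, failed DNS)
    REP 0x05  connection refused    (e.g. ECONNREFUSED)
    REP 0x06  TTL expired           (often used for timeouts)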
These are all reasons why we wrote the WinGate Internet Client, an LSP-based component of our product WinGate; client applications then see the actual upstream error codes. It's not a favoured approach nowadays though, as it requires installing software on the client computer.
I wouldn't give them too much info. Report what you need through internal logs in case you have to troubleshoot the problem. Return a 400, 403, or 418. Why? Perhaps they're just hacking.

Canonical handling of HTTPS request when SSL not supported

If a client requests a domain that does not have a valid CA-signed certificate, and the server does not intend to support HTTPS for this domain but does support HTTP, what is the best way to handle this in the web server? Note that the server does handle SSL (HTTPS) requests for other domains, so it is listening on 443.
An example of where this would apply is multiple subdomains that are created dynamically, which makes it extremely difficult to register CA-signed certificates for each one.
I've seen people try to respond with HTTP error codes, but these seem moot, since the client (browser) will verify the certificate first and present the hard warning to the user before processing any HTTP. The client will therefore only see the error code if they "proceed" past the cert warning.
Is there a canonical way of handling this scenario?
There is no canonical way to handle this scenario. Clients don't automatically downgrade to HTTP if HTTPS is broken, and it would be a very bad idea to change clients in this regard: all an attacker would need to do to attack HTTPS would be to interfere with the HTTPS traffic and make the client downgrade to unprotected HTTP traffic.
Thus, you need to make sure that clients either do not attempt to access URLs which do not work properly (i.e. don't publish such URLs), or that you have a working certificate for these subdomains, i.e. adapt the subdomain-creation process so that each subdomain gets not only an IP address but also a valid certificate (maybe use a wildcard certificate).
Considering these websites don't have to work over SSL, the web server should close their SSL connections in a proper way.
There is no canonical way to do this, but RFC 5246 implicitly suggests interrupting the handshake on the server side using the user_canceled + close_notify alerts. How to achieve this is another question; it will be a configuration of the default SSL virtual host.
user_canceled: This handshake is being canceled for some reason unrelated to a protocol failure. If the user cancels an operation after the handshake is complete, just closing the connection by sending a close_notify is more appropriate. This alert should be followed by a close_notify. This message is generally a warning.
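As one concrete way to configure that default SSL virtual host: newer nginx versions (1.19.4+) can refuse the handshake outright for names you have no certificate for. This sends an unrecognized_name alert rather than the user_canceled alert quoted above, but achieves the same clean interruption; a minimal sketch:

    # Catch-all for TLS connections to (sub)domains without a certificate.
    server {
        listen 443 ssl default_server;
        # Abort the TLS handshake with an alert instead of serving a
        # mismatched certificate (requires nginx 1.19.4 or later).
        ssl_reject_handshake on;
    }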
If you are dealing with subdomains, you probably can use a wildcard certificate for all of your subdomains.
Adding the CA certificate to your client will remove the warning (that's what companies do, no worry).
When hosting with Apache, for example, you can use VirtualDocumentRoot to add domains without editing your configuration. Have a look at the solution provided here: Virtual Hosting in SSL with VirtualDocumentRoot
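A minimal sketch of that combination, assuming a wildcard certificate and mod_vhost_alias; all paths are placeholders:

    <VirtualHost *:443>
        SSLEngine on
        # Wildcard certificate covering *.example.com (assumed)
        SSLCertificateFile    /etc/ssl/certs/wildcard.example.com.crt
        SSLCertificateKeyFile /etc/ssl/private/wildcard.example.com.key

        # Serve each subdomain from a directory named after the full
        # requested hostname, with no per-domain <VirtualHost> blocks.
        UseCanonicalName Off
        VirtualDocumentRoot /var/www/%0/htdocs
    </VirtualHost>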

Google index https instead of http

I have a WordPress website using OpenShift's free hosting. When I search Google for my website's name, I get a result that contains an HTTPS link. But when I click this link, Google Chrome shows:
Attackers might be trying to steal your information from phamquan.com (for example, passwords, messages, or credit cards). NET::ERR_CERT_COMMON_NAME_INVALID
This server could not prove that it is phamquan.com; its security certificate is from *.rhcloud.com. This may be caused by a misconfiguration or an attacker intercepting your connection.
That's because my website doesn't have an SSL certificate. How can I stop Google from indexing all of my website's links as HTTPS, and allow only the HTTP links?
The only way to prevent Google from indexing the HTTPS version of the site is to stop listening on HTTPS. The main problem here is that your web server is currently accepting HTTPS requests, even though your website is not configured to deliver a valid certificate.
If you can't access the server configuration, another approach described here and here is to use the canonical link tag to point to the HTTP version of the site, as a hint that the correct version is HTTP and not HTTPS.
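A sketch of that hint, placed in the <head> of each page; every page should point at its own HTTP URL (the path here is a placeholder):

    <link rel="canonical" href="http://phamquan.com/some-page/" />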

Nginx - Allow requests from IP range with no header set

I'm trying to use nginx behind a Compute Engine HTTP load balancer. I would like to allow health-check requests through one port unauthenticated, and to require basic auth for all other requests.
The health-check requests come from the IP block 130.211.0.0/22. If I see requests coming from this IP block with no X-Forwarded-For header, then it is a health check from the load balancer.
I'm confused about how to set this up with nginx.
Have you tried using Nginx header modules? Googling around I found these:
HttpHeadersMoreModule
Headers
There's also a similar question here.
Alternative: in the past I worked with a piece of software (RT) that had anticipated this possibility in the software itself, providing a subdirectory for unauthorized access (/noauth/). Maybe your software has the same, and you could configure the GCE health check to point to something like /noauth/mycheck.html.
Please remember that headers can easily be forged, so an attacker who knows your setup could access your server without auth.
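Since the source IP is harder to forge than a header, another option is to skip the header check entirely and let nginx accept either a whitelisted IP or valid credentials, via satisfy. A minimal sketch; the realm and htpasswd path are assumptions:

    location / {
        satisfy any;             # pass if EITHER rule below succeeds

        allow 130.211.0.0/22;    # GCE health-check range from the question
        deny  all;

        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;   # assumed path
    }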

How to configure squid to be a Transparent proxy?

I am working with the Squid proxy server; I have also used Cyberoam, SonicWall, and ClearOS.
I want to set up my own proxy like those products, i.e. authentication in a transparent proxy.
I did set up a transparent proxy, but HTTPS sites stopped working. Then I configured an iptables rule that redirects all HTTP and HTTPS traffic to port 3128 (the Squid port) only; now I can access all HTTPS websites, but I can't block them.
My requirement is that when I access any website for the first time, it asks me to authenticate, and only then can I access the internet. The log reports should also show the username, and one more thing: it should also be possible on thin clients (terminal services).
Can anybody help me sort out this problem?
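For reference, the kind of iptables redirect described in the question typically looks like the following (interface and subnet are assumptions). The HTTPS line is exactly what breaks: Squid receives raw TLS bytes on a port where it expects plain HTTP:

    # Redirect outbound web traffic from the LAN to Squid on port 3128
    iptables -t nat -A PREROUTING -i eth1 -s 192.168.1.0/24 \
             -p tcp --dport 80  -j REDIRECT --to-port 3128
    # Doing the same for HTTPS hands Squid a TLS handshake, not HTTP:
    iptables -t nat -A PREROUTING -i eth1 -s 192.168.1.0/24 \
             -p tcp --dport 443 -j REDIRECT --to-port 3128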
Proxy authentication doesn't work in transparent proxy setups. The browser has to have the proxy explicitly configured in order to recognize the authentication request from the proxy and prompt the user for credentials.
Another thing is that you can't create a transparent proxy for HTTPS. Why? Because when the browser connects, it's connected to the proxy, not the real server, and it will try to negotiate SSL, which is a thing Squid won't support. There are tricks to do this, but you'll break the SSL security, the browser will complain, etc. There is one tool that I used to get this working: u2nl, but it's a hack that tunnels HTTPS through the proxy.
The best option is to use a non-transparent proxy. If you want to avoid configuring browsers by hand, have a look at WPAD.
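A minimal sketch of the explicit-proxy-with-authentication setup being recommended; the helper path varies by distribution, and the password file location is an assumption:

    # /etc/squid/squid.conf (fragment): explicit proxy with basic auth
    auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
    auth_param basic realm Proxy
    acl authenticated proxy_auth REQUIRED
    http_access allow authenticated
    http_access deny all
    http_port 3128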
As said before, you can't really block HTTPS sites with Squid, and you can't really use authentication with the proxy running in its transparent mode.
As far as I could use and configure it, you can use an external ACL to force a kind of login, but the login requests will not be handled by the proxy itself; you can work around that with some Perl.
And about the HTTPS thing: you could make it work with some hacks, but it is a very sensitive matter, because server performance will be punished by this kind of use and you could be flagged as a fraudulent service, which isn't cool... Believe me.
