How to protect a web server FROM a reverse proxy server - asp.net

I have a website "www.website.com".
Recently I found out that somebody has set up a reverse proxy with an almost identical url "www.website1.com" in front of my website.
I'm concerned about the users who reach my website through that reverse proxy. Their usernames and passwords might be logged when they log in.
Is there a way for me to have my web server refuse requests that arrive through the reverse proxy?
For example, I've set up a reverse proxy using Squid with the URL "www.fakestackoverflow.com" in front of "www.stackoverflow.com". So whenever I type "www.fakestackoverflow.com" into my browser's address bar, the reverse proxy forwards me to "www.stackoverflow.com". Then I notice the URL in my address bar changes to "www.stackoverflow.com", indicating that I'm no longer going through the reverse proxy.
"www.stackoverflow.com" must have detected that I came to the website from another URL and then redirected me to the website through the actual URL.
How do I do something like that in ASP.NET web application?
Also asked on Server Fault.

First, use JavaScript to sniff document.location.href and match it against your domain:
// Canonical host name; send the browser there if the page was
// served from any other address (e.g. via a reverse proxy).
var MyHostName = "www.mydomain.com";

if (document.location.href.indexOf("https://") === 0) {
    MyHostName = "https://" + MyHostName + "/";
    if (document.location.href.indexOf(MyHostName) !== 0) {
        // Swap whatever host served this page for the real one.
        var new_location = document.location.href.replace(/https:\/\/[^\/]+\//, MyHostName);
        if (new_location !== document.location.href)
            document.location.replace(new_location);
    }
} else {
    MyHostName = "http://" + MyHostName + "/";
    if (document.location.href.indexOf(MyHostName) !== 0) {
        var new_location = document.location.href.replace(/http:\/\/[^\/]+\//, MyHostName);
        if (new_location !== document.location.href)
            document.location.replace(new_location);
    }
}
Second: add an init check to all your ASP.NET pages that compares the remote user's IP address with the reverse proxy's address. If it matches, redirect to a tinyurl link which redirects back to your real domain. Using tinyurl or another redirection service defeats the reverse proxy's URL rewriting. (A sketch of this check follows after the next step.)
Third: write a scheduled task that does a DNS lookup on the fake domain and updates the configuration file which your init check from step 2 reads. Note: do not do the DNS lookup inside your ASP.NET code, because DNS lookups can stall for five seconds; that opens a door to a DoS attack against your site. Also, don't block solely based on IP address, because the proxy can easily be relocated.
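Here is a minimal sketch of the step-2 check as a Global.asax handler. The "BlockedProxyIps" appSettings key and the tinyurl link are hypothetical names; the scheduled task from step 3 would keep the key's value current.

// Global.asax.cs - a sketch of the step-2 init check.
using System;
using System.Configuration;   // ConfigurationManager
using System.Linq;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Comma-separated proxy IPs; "BlockedProxyIps" is a hypothetical
        // appSettings key refreshed by the scheduled task from step 3.
        string[] blocked = (ConfigurationManager.AppSettings["BlockedProxyIps"] ?? "")
            .Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);

        if (blocked.Contains(Request.UserHostAddress))
        {
            // Bounce through a redirection service so the proxy's
            // URL-rewriting rules don't catch your real domain.
            Response.Redirect("http://tinyurl.com/your-real-site", true);
        }
    }
}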
Edit: If you're concerned about the proxy operator stealing usernames and passwords, you should log every user whose pages were served to the proxy's IP address and disable their accounts. Then email them explaining that they were victims of a phishing attack via a misspelled domain name, and ask them to change their passwords.

After days of searching and experimenting, I think I've found an explanation for my question. In my question I used stackoverflow.com as an example, but now I'm going to use whatismyipaddress.com instead, since both exhibit the same URL-rewriting behaviour and whatismyipaddress.com can also tell me my IP address.
First, to reproduce the behaviour, I visited whatismyipaddress.com and got my IP address, say 111.111.111.111. Then I visited www.whatismyipaddress.com (note the additional www. prefix) and the URL in my browser's address bar changed back to whatismyipaddress.com, discarding the prefix. After reading the comments from Josh Stodola, it struck me to test this point.
Next, I set up a reverse proxy with the URL www.myreverseproxy.com and IP address 222.222.222.222 and had it perform the two scenarios below:
I had the reverse proxy point to whatismyipaddress.com (without the www. prefix). Then I typed www.myreverseproxy.com into my browser's address bar. The reverse proxy relayed me to whatismyipaddress.com and the URL in my address bar didn't change (still showing www.myreverseproxy.com). I confirmed this by checking the IP address shown on the page, which was 222.222.222.222 (the IP address of the reverse proxy). This means I was still viewing the page through the reverse proxy, not connected directly to whatismyipaddress.com.
Then I had the reverse proxy point to www.whatismyipaddress.com (with the www. prefix this time). I visited www.myreverseproxy.com, and this time the URL in my address bar changed from www.myreverseproxy.com to whatismyipaddress.com. The page showed my IP address as 111.111.111.111 (the real IP address of my PC). This means I was no longer viewing the page through the reverse proxy and had been redirected straight to whatismyipaddress.com.
I think this is the kind of URL-rewriting trick Josh Stodola pointed out, and I'm going to read more about it. As for how to protect a server from a reverse proxy, the best bet is to use SSL. Encrypted information passing through a proxy is of no use to it, since it can't be read in plain sight; this prevents eavesdropping and the man-in-the-middle attack, which is exactly what a reverse proxy performs here.
Safeguarding with JavaScript, though, is trivial to defeat, since the script can easily be stripped out by a reverse proxy, and it also prevents other legitimate online services, like Google Translate, from accessing your website.

If you do authentication over SSL using https://, you can bypass the proxy in most cases.
You can also look for the X-Forwarded-For header in the incoming request and match it against the suspicious proxy.
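In ASP.NET that check might look like the sketch below (the proxy address is a placeholder). Bear in mind that a malicious proxy can simply omit or forge X-Forwarded-For, so treat this as a heuristic, not a guarantee:

// A single reverse proxy typically shows up two ways: REMOTE_ADDR
// is the proxy machine itself, and an X-Forwarded-For header
// carries the real client address it is relaying.
string remoteAddr   = Request.ServerVariables["REMOTE_ADDR"];
string forwardedFor = Request.Headers["X-Forwarded-For"];

if (forwardedFor != null && remoteAddr == "222.222.222.222") // placeholder proxy IP
{
    Response.StatusCode = 403;   // refuse to serve through the proxy
    Response.End();
}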

As I see it, your fundamental issue here is that whatever application layer defence measures you put in place to mitigate this attack can be worked around by the attacker, assuming this really is a malicious attack made by a competent adversary.
In my view, you should definitely be using HTTPS, which in principle would allow the user to confirm for sure whether they're talking to the right server, but this relies on the user knowing to check for this. Some browsers these days display extra information in the URL bar about which legal entity owns the SSL certificate, which would help, as it's unlikely an attacker would be able to persuade a legitimate certificate authority to issue a certificate in your name.
Some of the other comments here said that HTTPS can be intercepted by intermediate proxy servers, which is not actually true. With HTTPS, the client issues a CONNECT request to the proxy server, which then tunnels all subsequent traffic directly to the origin server without being able to read any of it. If we assume that this proxy server is entirely bespoke and malicious, then it can terminate the SSL session and intercept the traffic, but it can only do that with its own SSL certificate, not with yours. That certificate will either be self-signed (in which case clients will get lots of warning messages) or a genuine certificate issued by a certificate authority, in which case it will carry the wrong legal entity name. You should then be able to go back to the certificate authority, have the certificate revoked, and potentially ask the police to take action against the owner of the certificate, if you have reasonable suspicion that they are phishing.
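For illustration, the tunnelling step described above looks roughly like this on the wire when a browser talks to an explicit (forward) proxy; once the tunnel is established, the proxy just relays encrypted bytes it cannot read:

CONNECT www.website.com:443 HTTP/1.1
Host: www.website.com:443

HTTP/1.1 200 Connection Established
(TLS handshake and encrypted traffic follow, opaque to the proxy)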
The other thing I can think of which would mitigate this threat to some extent would be to implement one-time password functionality, either using a hardware/software token or using (my personal favorite) an SMS sent to the user's phone when they log in. This wouldn't prevent the attacker from getting access to the session once, but should prevent them from logging in in the future. You could further protect the users by requiring another one-time password before allowing them to see/edit particularly sensitive details.

There's very little you can do to prevent this without also causing legitimate proxies (translation services, Google's cache, etc.) to fail. If you don't care whether people use such services, then simply set your web app to always redirect if the base URL is not correct (a sketch follows below).
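A sketch of that always-redirect rule in Global.asax, using the question's www.website.com as the canonical host. Note that a malicious proxy can rewrite the Location response header too, which is why an earlier answer suggested bouncing through a redirection service:

// Global.asax.cs - permanently redirect any request that arrives
// under a host name other than the canonical one.
protected void Application_BeginRequest(object sender, EventArgs e)
{
    const string canonicalHost = "www.website.com";   // host from the question

    if (!Request.Url.Host.Equals(canonicalHost, StringComparison.OrdinalIgnoreCase))
    {
        UriBuilder canonical = new UriBuilder(Request.Url) { Host = canonicalHost };
        Response.StatusCode = 301;                        // Moved Permanently
        Response.RedirectLocation = canonical.Uri.AbsoluteUri;
        Response.End();
    }
}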
There are some steps you can take if you are aware of the proxies, and can find out their IP addresses, but that can change and you would have to stay on top of it. #jmz's answer is quite good in that regard.

I have come up with an idea, and I think a solution.
First of all, you do not need to do this on every page, because that way you would block other proxies and other services (like Google's automatic translation).
So let's say that you want to be absolutely sure about the login page.
So what you do: when a user lands on the login.aspx page, you redirect them to login.aspx again using the full, absolute URL of your site.
// Redirect to the absolute login URL, unless we are already there
// (check the requested host to avoid a redirect loop).
if (!Request.Url.Host.Equals("www.mysite.com", StringComparison.OrdinalIgnoreCase))
    Response.Redirect("https://www.mysite.com/login.aspx");
This way, I do not think a transparent proxy can alter the redirect target in the response header without breaking the page.
Also, you can log any proxies and/or unusually large numbers of requests from particular IPs and review them. When you find a phishing site like the one you describe, you can also report it:
http://www.antiphishing.org/report_phishing.html
https://submit.symantec.com/antifraud/phish.cgi
http://www.google.com/safebrowsing/report_phish/

Maybe create a blacklist of URLs and compare each request's Request.UrlReferrer against it; if the referring site is on that list, kill the request or issue a redirect of your own.
The black-list is obviously something you would have to manually update.
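A sketch of that blacklist check in a page's code-behind; the host names in the list are hypothetical. Request.UrlReferrer is null when the browser sent no Referer header, so the check has to allow for that:

// Hypothetical blacklist entries; maintain by hand as noted above.
private static readonly string[] BlacklistedHosts =
{
    "www.website1.com",
    "www.myreverseproxy.com"
};

protected void Page_Load(object sender, EventArgs e)
{
    Uri referrer = Request.UrlReferrer;   // null if no Referer header was sent

    if (referrer != null)
    {
        foreach (string banned in BlacklistedHosts)
        {
            if (string.Equals(referrer.Host, banned, StringComparison.OrdinalIgnoreCase))
            {
                // Kill the request, or redirect to your own site instead.
                Response.StatusCode = 403;
                Response.End();
            }
        }
    }
}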

OK, I went through a similar situation, but I managed to overcome it by using another forwarded domain that points to my original one permanently, then checking in code whether the client is the reverse proxy server; if it is, I redirect them to my second domain, which leads back to the original.
Check out more info from here: http://alphablog.xyz/your-website-is-being-mirrored-by-someone-else-without-your-knowledge/

The simplest way would probably be to put some JavaScript code on your page that examines window.location to see if the top-level domain (TLD) matches what you expect and, if not, replaces it with your correct domain (causing the browser to reload the proper site instead).

Related

What settings are required to put AWS CloudFront CDN in front of a squarespace website?

I had trouble getting AWS CloudFront to work with Squarespace: forms were not submitting, and the site said "website expired". What settings are needed to get CloudFront working with a Squarespace site?
This is definitely doable, considering I just set this up. Let me share the settings I used on Cloudfront, Squarespace, and Route53 to make it work. If you want to use a different DNS provider than AWS Route53, you should be able to adapt these settings. Keep in mind that this is not an e-commerce site, but a standard site with a blog, static pages, and forms. You can likely adapt these instructions for other issues as/if they come up.
Cloudfront (CDN)
To make this work, you need to create a Cloudfront Distribution for Web.
Origin Settings
Origin Domain Name should be set to ext-cust.squarespace.com. This is Squarespace's entry point for external domain names.
Origin Path can be left blank.
Origin ID is just the unique ID for this distribution and should auto-populate if you're on the distribution creation screen, or be fixed if you're editing Origin Settings later.
Origin Custom Headers do not need to be set.
Default Cache Behavior Settings / Behaviors
Path Patterns should be left at Default.
I have Viewer Protocol Policy set to Redirect HTTP to HTTPS. This dictates whether your site can use one or both of HTTP or HTTPS. I prefer to have all traffic routed securely, so I redirect all HTTP traffic to HTTPS. Note that you cannot do the reverse and redirect HTTPS to HTTP, as this will cause authentication issues (your browser doesn't want to expose what you thought was a secure connection).
Allowed HTTP Methods needs to be GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE. This is because forms (and other things such as comments, probably) use the POST HTTP method to work.
Cached HTTP Methods I left to just GET, HEAD. No need for anything else here.
Forward Headers needs to be set to All or Whitelist. Squarespace's entry point, mentioned earlier, needs to know what domain you're coming from in order to serve your site, so the Host header must be whitelisted, or allowed along with everything else if set to All.
Object Caching, Minimum TTL, Maximum TTL, and Default TTL can all be left at their defaults.
Forward Cookies is the missing component needed to get forms working. You can set this either to All or to Whitelist. There are certain session variables that Squarespace uses for validation, security, and other utilities. I have added the following values to Whitelist Cookies: JSESSIONID, SS_MID, crumb, ss_cid, ss_cpvisit, ss_cvisit, test. Make sure to put each value on a separate line, without commas.
Forward Query Strings is set to True, as some Squarespace API calls use query strings, so these must be passed along.
Smooth Streaming, Restrict Viewer Access, and Compress Objects Automatically can all be left at their default values, or chosen as required if you know you need them to be set differently.
Distribution Settings / General
Price Class and AWS WAF Web ACL can be left alone.
Alternate Domain Names should list your domain, and your domain with the www subdomain attached, e.g. example.com, www.example.com.
For SSL Certificate, please follow the tutorial here to upload your certificate to IAM if you haven't already, then refresh your certificates (there is a control next to the dropdown for this), select Custom SSL Certificate and select the one you've provisioned. This ensures that browsers recognize your SSL over HTTPS as valid. This is not necessary if you're not using HTTPS at all.
All following settings can be left at default, or chosen to meet your own specific requirements.
Route 53 (DNS)
You need to have a Hosted Zone set up for your domain (this is specific to Route 53 setup).
You need to set an A record to point to your Cloudfront distribution.
You should set a CNAME record for the www subdomain pointing to your Cloudfront distribution, even if you don't plan on using it (later we'll go through setting Squarespace to use only the root domain by redirecting the www subdomain).
Squarespace
On your Squarespace site, you simply need to go to Settings->Domains->Connect a Third-Party Domain. Once there, enter your domain and continue. Under the domain's settings, you can uncheck Use WWW Prefix if you'd like people accessing your site from www.example.com to redirect to the root, example.com. I prefer this, but it's up to you. Under DNS Settings, the only value you need is CNAME that points to verify.squarespace.com. Add this CNAME record to your DNS settings on Route 53, or other DNS provider. It won't ever say that your connection has been fully completed since we're using a custom way of deploying, but that won't matter.
Your site should now be operating through Cloudfront pointing to your Squarespace deployment! Please note that DNS propagation takes time, so if you're unable to access the site, give it some time (up to several hours) to propagate.
Notes
I can't say exactly whether each and every one of the values set under Whitelist Cookies is necessary, but these are taken from using the Chrome Inspector to determine what cookies were present under the Cookie header in the request. Initially I tried to tell Cloudfront to whitelist the Cookie header itself, but it does not allow that (presumably because it wants you to use the cookie-specific whitelist). If your deployment is not working, see if there are more cookies being transmitted in your requests (under the Cookie header, the values you're looking for should look like my_cookie=somevalue;other_cookie=othervalue—my_cookie and other_cookie in my example are what you'd add to the whitelist).
The same procedure can be used to forward other headers entirely that may be needed via the Forward Headers whitelist. Simply inspect and see if there's something that looks like it might need to go through.
Remember, if you're not whitelisting a header or cookie, it's not getting to Squarespace. If you don't want to bother, or everything is effed (pardon my language), you can always set to allow all headers/cookies, although this adversely affects caching performance. So be conservative if you can.
Hope this helps!
Here are the settings to get CloudFront working with Squarespace!
Behaviours:
Allowed HTTP Methods: Ensure that you select GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE. Otherwise forms will not work.
Forward Headers: Select whitelist and choose 'Host'. Otherwise Squarespace will not know which website it needs to load, and you get the message 'Website has expired' or similar.
Origins:
Origin Domain Name set as: ext-cust.squarespace.com
Origin Protocol Policy: Select HTTPS so that traffic between the CDN and the origin is secure too.
General
Alternate Domain Names (CNAMEs): Put both your www and non-www addresses here and let Squarespace decide whether to direct www to root or vice versa (e.g. example.com, www.example.com).
You can now configure SSL on CloudFront
HTTPS: You can now enforce HTTPS using a certificate for your site here rather than in Squarespace.
Setting I'm unsure about still:
Forward Query Strings: Leaving this off is recommended for caching reasons, but I think turning it off could break things...
Route53
Create A records for www and root (e.g. example.com, www.example.com) and set them as aliases to your CloudFront distribution.

iis cname to subdomain, get subdomain from the request

What I'd really like to do is set up an Azure site called site.com, then have hundreds of subdomains such as foo.site.com, bar.site.com, baz.site.com, etc. My ASP.NET MVC application will pull out the subdomain, as this will be used as an identifier.
Next I'd like to have other domains CNAMEd to the subdomains, such as hello.othersite.com -> foo.site.com, so that the browser still shows hello.othersite.com but I'd be able to get the foo subdomain out of the request.
I don't want to have to configure any of this because there are going to be lots of subdomains, essentially one per account.
Is this actually possible?
I've tried a few tests but I'm not 100% sure how to proceed. Would I just:
Set up site.com to accept *.site.com
CNAME hello.othersite.com to foo.site.com (do I want masking, forwarding etc?)
Does the incoming HTTP request contain any information about the subdomain (foo) that it's CNAMEd to?
I hope this isn't too vague and hand wavey but some confirmation of its plausibility would be a great help.
It is not just Azure, but the whole web.
I don't think it is possible to tell from the HTTP request for hello.othersite.com that it actually maps in DNS to foo.site.com, because your actual HTTP request will look something like:
GET /index.html HTTP/1.1
(other headers)
Host: hello.othersite.com
Your web server, whatever it is, has no idea that hello.othersite.com is mapped via CNAME to foo.site.com. The request you receive is for hello.othersite.com.
If you do forwarding, the users will never stay on hello.othersite.com but will be redirected to foo.site.com. I guess this is not what you want.
Direct domain masking is usually done via an iframe, which I would also not recommend.
I would do the following, as nothing else comes to mind at the moment:
Set up the site to accept *.site.com and *.othersite.com
Add a wildcard CNAME mapping to my Azure cloud service, i.e. *.site.com -> CNAME -> my.cloudapp.net. The same goes for both custom domains I want to have.
Perform the necessary checks in my app to figure out domain mappings <-> user accounts (see the sketch below).
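A sketch of that step-3 check in Global.asax. Account, FindAccountBySubdomain, and FindAccountByCustomDomain are hypothetical names; the point is that only the browser-sent host is available, so both mappings must live in your own data:

protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Whatever host the browser sent, e.g. "foo.site.com" or
    // "hello.othersite.com"; the CNAME target is invisible here.
    string host = Request.Url.Host;

    Account account;   // hypothetical account type
    if (host.EndsWith(".site.com", StringComparison.OrdinalIgnoreCase))
    {
        // Direct subdomain: the left-most label is the account key.
        string subdomain = host.Split('.')[0];
        account = FindAccountBySubdomain(subdomain);      // hypothetical lookup
    }
    else
    {
        // Custom domain CNAMEd to a subdomain: look up the mapping
        // you stored when the user registered their domain.
        account = FindAccountByCustomDomain(host);        // hypothetical lookup
    }

    // Stash it for the MVC layer to pick up.
    HttpContext.Current.Items["Account"] = account;
}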

Creating subdomain in URL alaising

I am creating a social networking site and one of the requirements is to have the subdomain like URL for each user. For example, for the user1 his profile page will be user1.mysitename.com and for the user2 profile page will be user2.mysitename.com.
Can it be done using URL aliasing? Basically, user1.mysitename.com should serve www.mysitename.com/profile.aspx?username=user1.
I will be hosting this on Windows 2003 (IIS6); any help is highly appreciated.
You can either respond to each GET request of user1.mysitename.com with the same contents as www.mysitename.com/profile.aspx?username=user1 or you can answer using a redirection (HTTP 302 response) from the first url to the second url.
However, you first have to make sure the DNS server that is authoritative for mysitename.com is aware of all these domains and responds with the answer you need (either the IP of the server, or a CNAME to a domain that is linked to an IP).
EDIT:
When someone tries to browse to user1.mysitename.com, they will first resolve user1.mysitename.com to get its IP; here you need the DNS server to tell them the IP of the domain user1.mysitename.com.
After the user has the IP of the domain, they will request the page using an HTTP GET request. You need to respond to it somehow. One way is to redirect them to a different URL (www.mysitename.com/profile.aspx?username=user1). Another way is to simply respond to the GET request and give them the page they're looking for.
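A sketch of the second option (serving the same contents) for IIS6/ASP.NET, using a rewrite in Global.asax. It assumes a wildcard DNS record and an IIS site bound to accept those host headers:

protected void Application_BeginRequest(object sender, EventArgs e)
{
    string host = Request.Url.Host;            // e.g. "user1.mysitename.com"
    const string suffix = ".mysitename.com";

    if (host.EndsWith(suffix, StringComparison.OrdinalIgnoreCase) &&
        !host.StartsWith("www.", StringComparison.OrdinalIgnoreCase))
    {
        string username = host.Substring(0, host.Length - suffix.Length);
        // Serve profile.aspx internally; the browser's address bar
        // keeps showing user1.mysitename.com.
        Context.RewritePath("/profile.aspx", null, "username=" + username);
    }
}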

Hide webpage file with random folder and SSL?

Does an SSL request show the page being requested or just the domain?
I am trying to hide files in a directory on a webserver by using random folder names.
i.e. https://www.mydomain.com/DKSLW3020SLK43J9S0935KJSLK350S9/MyFile.pdf
The random folder name provides the security instead of a password. The risk is that any third party intercepting the router hops could see the page request, and then the hidden folder is not so hidden.
If I access it using SSL, is the SSL connection made to mydomain.com first and then the page requested, or will a snooper see the entire:
/DKSLW3020SLK43J9S0935KJSLK350S9/MyFile.pdf
ending of the request made?
Thanks for your help!
A normal man-in-the-middle attack will not see the full URL. However, your end user might choose to pass around the URL. Is that a concern?
If you mean HTTPS, then the snooper will see nothing at all; just the address of the site to which the user is connecting.
Try it yourself using an HTTP sniffer like Fiddler. (Fiddler has an option to decrypt HTTPS, but this is not possible for a snooper.)
Yes, SSL will protect the URL from third parties.

Can I ban an IP address (or a range of addresses) in an ASP.NET application?

What would be the easiest way to ban a specific IP (or a range of addresses) from being able to access my publicly available web site?
Is it possible to do so using ASP.NET only, without resorting to modifying any IIS settings?
It is easy and fast in ASP.NET using an HttpModule; just take a look at Hanselman's post:
http://www.hanselman.com/blog/AnIPAddressBlockingHttpModuleForASPNETIn9Minutes.aspx
You can check the Request.ServerVariables["REMOTE_ADDR"] value, and if the address is banned, redirect the user to Yahoo or something.
Indeed, Spencer Ruport's suggestion is the right way to go about it. (Not sure I would redirect to Yahoo, however; a page informing the user they have been banned would be better, with some option for contacting the web admin if the client feels they were inadvertently banned.)
I would add that it would be wise to check the HTTP_X_FORWARDED_FOR server variable (representing the IP forwarded by a proxy, or null if there is none) first, in order to avoid banning the IP address of a proxy (and thus potentially many other users) along with the offender.
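A minimal sketch of such an HttpModule, in the spirit of the Hanselman post linked above. The banned address is a placeholder; a real module would load its list from configuration or a database, and X-Forwarded-For can be a comma-separated chain, which a production version should split:

using System;
using System.Web;

public class IpBlockModule : IHttpModule
{
    // Placeholder entry; load from config or a database in practice.
    private static readonly string[] Banned = { "203.0.113.7" };

    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            HttpContext ctx = ((HttpApplication)sender).Context;

            // Prefer the proxy-forwarded client IP when present, per the
            // advice above, so you ban the client rather than the proxy.
            string ip = ctx.Request.ServerVariables["HTTP_X_FORWARDED_FOR"]
                        ?? ctx.Request.ServerVariables["REMOTE_ADDR"];

            if (Array.IndexOf(Banned, ip) >= 0)
            {
                ctx.Response.StatusCode = 403;   // refuse banned clients
                ctx.Response.End();
            }
        };
    }

    public void Dispose() { }
}

Register the module under <httpModules> in web.config for the classic IIS6 pipeline.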
