I've heard of sites using other sites to redirect users, either to drive traffic to their own site or to hide behind another site. In my code I redirect in a few places, such as after posting a comment (it's easier to use a return URL than to figure out the page from the data given).
How do I check that the return URL is my own URL? I think I use absolute paths, so I could easily check whether the first character is '/', but then I lose relative flexibility. It also stops me from using http://mysite.com/blah as a redirect URL. I could patch the URL by prepending mysite, but then I'd need to figure out whether the string is a relative URL or already a mysite.com URL.
What's the easiest way to ensure I am only redirecting to my own site?
How about: if redirectUrl contains "://" (which covers http://, https://, ftp://, etc.), it must also start with "http://mysite.com"; if it does not contain "://", it is relative and should not be a problem. Something like this:
if (!redirectUrl.Contains("://") || redirectUrl.StartsWith("http://mysite.com/"))
{
    Response.Redirect(redirectUrl);
}
(The trailing slash matters: a bare prefix check would also accept http://mysite.com.evil.com.)
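A string prefix test like the one above is fragile: "http://mysite.com.evil.com".StartsWith("http://mysite.com") is true. A sketch of a parse-based check instead (Python for illustration; mysite.com stands in for your actual host):

```python
from urllib.parse import urlsplit

ALLOWED_HOST = "mysite.com"  # assumption: your canonical host name

def is_safe_redirect(url: str) -> bool:
    """Allow plain relative paths and absolute URLs on our own host."""
    parts = urlsplit(url)
    if parts.scheme and parts.scheme not in ("http", "https"):
        return False                         # e.g. javascript: URLs
    if parts.scheme and not parts.netloc:
        return False                         # malformed, e.g. http:/evil
    if parts.netloc:                         # absolute or scheme-relative URL
        return parts.netloc == ALLOWED_HOST  # exact match, not a prefix
    return True                              # plain relative path
```

The exact host comparison is what defeats lookalike hosts such as mysite.com.evil.com, which a prefix check would accept.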
I hadn't thought of this before, but how about using an encrypted version of the URL in the query string parameter?
Alternatively, you could keep a list of the actual URLs in some persistent store (persistent for a couple of hours, maybe), and in the query string just include the index into that server-side store of URLs. Since you'd be the only code manipulating the store, the worst a malicious user could do would be to redirect to a different valid URL.
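That persistent-store idea can be sketched like this (an in-memory dict with expiry stands in for the real store; all names here are illustrative):

```python
import secrets
import time

REDIRECT_TTL = 2 * 60 * 60  # keep entries around for a couple of hours

# token -> (url, expiry timestamp); a real app would use a shared store
_store: dict = {}

def register_redirect(url: str) -> str:
    """Store a server-chosen URL; return an opaque token for the query string."""
    token = secrets.token_urlsafe(16)
    _store[token] = (url, time.time() + REDIRECT_TTL)
    return token

def resolve_redirect(token: str):
    """Look up a token; unknown or expired tokens yield None."""
    entry = _store.get(token)
    if entry is None or entry[1] < time.time():
        _store.pop(token, None)
        return None
    return entry[0]
```

Since the client only ever sees the opaque token, tampering with the query string can at worst select a different URL that the server itself stored.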
This seems to be an odd question, and it should not be a concern if you are in full control of the redirect process. If for some reason you are allowing user input to be actively involved in a redirect (as in the code below)
Response.Redirect(someUserInput);
Then, yes, a user could have your code send them off to who knows where. But if all you are ever doing is
Response.Redirect("/somepage.aspx")
Then those redirects will always be on your site.
Like I said, it seems to be an odd question. The more prominent concerns in terms of user input are typically SQL Injection attacks and cross-site scripting. I've not really heard about "malicious redirects."
Related
We're reviewing some security findings and I'm trying to understand a finding about open redirects. Essentially the question is: Can a redirect to a hard-coded known-local path with user-supplied parameters in the query component be an open redirect?
Say you have the following in page1.aspx:
Response.Redirect("/page2.aspx?parm=" + Request["user-supplied-value"]);
Can page1.aspx, in and of itself, be an open redirect? Is there any value that can be supplied in the request to page1 that would cause a browser to ignore the "page2.aspx" path of the redirect and instead redirect to the user-supplied-value in the query part of the redirect url?
Please note, I am NOT referring to the situation where page2.aspx then takes the value of the parm and mindlessly redirects to it, that is an open redirect. I'm referring more to something like how a semicolon in a SQL Injection might terminate leading text and allow you to inject a new statement inline. I cannot think of a value for a parm in the query part that would essentially override the path part.
To be explicit, if you think the answer is "Yes, open redirects happen when you redirect to user-supplied inputs" go read the question again. I am not referring to the situation where page2 blindly redirects to the value supplied in the parameter. Rather can the redirect in page1 where the path component is hard-coded but has user-supplied parms be an open redirect?
It is not possible to make a modern browser go to a different page than page2.aspx just by manipulating the parameter. Therefore I wouldn't call this an open redirect, but it still might be a weakness, depending on what exactly page2.aspx does. It might become an actual open redirect as well in somewhat unexpected ways, like for example if there is header injection in page2.aspx and the attacker can inject a whole new Location header. Or this might be used to circumvent weak referrer-based csrf protection. Or it might just provide a way for an attacker to perform phishing by providing a link in a trusted application screen that does something the user didn't want to do.
So in short this might just be a building block in a more complex attack - but it might be an important one, or not even a weakness, it all depends on page2.aspx.
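One concrete mitigation for the pattern in the question is to percent-encode the user-supplied value before embedding it, so it cannot introduce new URL structure (or, on very old servers, CRLF header injection). A sketch reusing the question's page2.aspx and parm names, in Python for illustration:

```python
from urllib.parse import quote, urlsplit

def build_redirect(user_value: str) -> str:
    """Build the Location value with the user input percent-encoded."""
    return "/page2.aspx?parm=" + quote(user_value, safe="")

url = build_redirect("http://evil.example/&other=1")
# However hostile the input, the path component stays the hard-coded one.
assert urlsplit(url).path == "/page2.aspx"
```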
So I have an Nginx server set up which is supposed to redirect all HTTP to HTTPS (and non-www to www) using four server blocks.
The issue is that any 404 or non-existent HTTP URL first gets a 301 redirect to what would be its HTTPS version if it hypothetically existed (hence creating an extra URL and redirect).
See example:
1) http://example.com/thisurldoesntexit
301 Redirect
2) https://example.com/thisurldoesntexit
404
3) https://example.com/notfound
Is there a way to redirect the user directly to an HTTPS 404 (URL 3)?
First of all, as has already been pointed out, doing a 301 redirect from a non-existent page to a single /notfound moniker is really bad practice, and is likely against the RFCs.
What if the user simply mistyped a single character of a long URL? Modern browsers make it non-trivial to go back to what has been typed in order to correct it. The user would have to decide whether your site is worth a retyping from scratch, or whether your competitor might have thought of a better experience.
What if the user simply followed a broken link, which is broken in a very obvious way, and could be easily fixed? E.g., http://www.example.org/www.example.com/page, where an absolute URL was mistyped by the creator to be a relative one, or maybe a URI like /page.html., with an extra dot in the end. Likewise, you'll be totally confusing the user with what's going on, and offering a terrible user experience, where if left alone, the URL could easily have been corrected promptly.
But, more importantly, what real problem are you actually trying to solve?!
For better or worse, it's a pretty common practice to indiscriminately redirect from the http to the https scheme, without regard to whether a given page exists. In fact, if you employ HSTS, content served over http effectively becomes meaningless; a browser with the policy will never again request anything over http.
Undoubtedly, in order to know whether a given page exists, you must consult the backend. As such, you might as well do the http-to-https redirect from within your backend; but that will likely tie up valuable server resources for little to no extra benefit.
Moreover, the presence or absence of the page may be dictated by the contents of the cookies. As such, if you require that your backend must discern whether a page does or does not exist for an http request, then you'll effectively be leaking private information that was meant to be protected by https in the first place. (In turn, if your site has no such private information, then maybe you shouldn't be using https in the first place.)
So, overall, the whole approach is just a REALLY, REALLY bad idea!
Consider instead:
Do NOT do a 301 redirect from all non-existent pages to a single /notfound page. Very bad practice, very bad UX.
It is totally OK to do an indiscriminate redirect from http to https without accounting for whether the page exists. In fact, it's not only okay, it's the way God intended: an adversary should not be capable of discerning whether a given page exists on an https site, so if you do find and implement a solution for your "problem", you'll effectively create a security vulnerability and a data leak.
Use the https://www.drupal.org/project/fast_404 module to serve 404 pages directly without much overhead.
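For reference, the blanket scheme-and-host redirect endorsed above needs no knowledge of whether a page exists; a minimal Nginx sketch (example.com is a placeholder, certificate directives omitted):

```nginx
# Send all HTTP traffic, any host, to the canonical HTTPS host.
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
}

# Canonical host: non-existent pages simply get a 404 here, no redirect.
server {
    listen 443 ssl;
    server_name www.example.com;
    # ssl_certificate and ssl_certificate_key go here
    root /var/www/html;
}
```

A third block can 301 https://example.com to the www host the same way, giving the question's non-www-to-www behavior.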
I'd suggest redirecting to a 404 page is a poor choice, and you should instead serve the 404 on the incorrect URL.
My reasons for stating this are:
By redirecting away from the page, you are issuing headers that implicitly say "the content does not exist at this URL, but it does over here". I'm not sure how the various search engines would react to being redirected to a 404.
I can speak from my own experience as a user when I say that having the URL change on me when I've mis-typed by a single character can be very frustrating. I then need to spend the time to type out the entire URL again.
You can avoid having logic in your .htaccess file (or equivalent) to judge a page a 404. This greatly simplifies your initial logic (which, by the by, gets evaluated on every single page load) and removes far more redirects than just the odd chain of http://badurl to https://badurl to https://404.
Background: In my ASP.NET application, I occasionally need to pass the user through an intermediate page, which then must relink to the original requested page. I would like to maintain as many GET parameters as is feasible.
For example, if the user lands on:
store.aspx?storeId=34&myHours=12
But I now realize the user needs to go to the intermediate page, so the user is redirected:
siteAd.aspx?returnTo=store.aspx&params=storeId%3D34%26myHours%3D12
siteAd.aspx.vb will then have code to return the user to the original page (pseudocode):
Dim sReturnTo = Request.QueryString("returnTo")
' <insert code to protect against open-redirection attack on sReturnTo>
lnkContinue.HRef = sReturnTo & "?" & Request.QueryString("params")
Question: Are there any security concerns with the above scenario, as long as I make sure to protect against an open-redirection vulnerability in the returnTo parameter?
The solution I came up with was to URL-encode the entire string (per @SamGreenhalgh), then use the built-in .NET function Uri.IsWellFormedUriString, like so:
Private Shared Function isLocalUrl(ByVal sUrl As String) As Boolean
    If String.IsNullOrWhiteSpace(sUrl) Then
        Return False
    End If
    sUrl = sUrl.Trim()
    ' Caveat: protocol-relative URLs such as //evil.com can pass a
    ' UriKind.Relative check, so reject them explicitly.
    If sUrl.StartsWith("//") OrElse sUrl.StartsWith("/\") Then
        Return False
    End If
    Return Uri.IsWellFormedUriString(sUrl, UriKind.Relative)
End Function
As @Sam Greenhalgh said, why not pass the whole path and query string as one parameter rather than splitting them? As long as it is correctly URL-encoded, this will be fine.
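Encoding the entire path and query as a single parameter round-trips cleanly; a small sketch (Python for illustration, reusing the question's example values):

```python
from urllib.parse import quote, unquote, urlsplit

# The page the user should be returned to, path and query together.
original = "/store.aspx?storeId=34&myHours=12"

# Encode the whole thing as one returnTo value...
redirect = "/siteAd.aspx?returnTo=" + quote(original, safe="")

# ...and decode it again on the intermediate page.
return_to = unquote(urlsplit(redirect).query.split("=", 1)[1])
assert return_to == original
```

Because the inner "?" and "&" are encoded as %3F and %26, the outer query string stays unambiguous, and the decoded value can be validated as a single local URL before use.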
As well as the open-redirect vulnerability, you should secure your code against XSS by checking that lnkContinue.HRef can only be set to an HTTP or HTTPS URL. This prevents someone creating a link to your site where returnTo contains JavaScript, e.g. javascript:alert('xss')
The Uri object can be used to validate that returnTo isn't an open redirect or an XSS attack. The scheme can be checked to make sure it is HTTP or HTTPS, and the URL itself can be checked to see whether it is a relative URL or an absolute URL pointing to your domain. It is better to use the Uri class than to do manual string comparisons: many open-redirect checks can be fooled by an attacker registering a domain such as www.example.com.evil.com to defeat a prefix check for www.example.com, or by including www.example.com elsewhere in the URL (e.g. http://www.evil.com/www.example.com/Page.aspx).
URL encoding is not a security measure and is trivial to defeat. The best way to protect against man-in-the-middle attacks is to use HTTPS with a reputable SSL certificate and to switch from GET to POST. Anything else is tiptoeing around the issue (i.e. security).
Also see https://stackoverflow.com/a/1008550/374075
I'm replacing an old web application with a new one, with different structure.
I cannot change the virtual directory path for the new app, as I have users who have bookmarked various links into the old app.
Let's say I have a user who has this bookmark:
http://server/webapp/oldpage.aspx?data=somedata
My new app will reside in the same virtual directory, replacing the old one, but it no longer has oldpage.aspx; it has a different layout, but it still needs the parameter from the old URL.
So I have set 404 errors to redirect to redirectfrombookmark.aspx, where I decide how to process the request.
The problem is, that the only parameter I receive is "aspxerrorpath=/webapp/oldpage.aspx", but not the "data" parameter, and I need it to correctly process the request.
Any idea how I can get the full "original" url in the 404 handler?
EDIT: reading the answers, looks like I did not make the question clear enough:
The users have bookmarked many different pages (oldpage1, oldpage2, etc.) and I should handle them equally.
The parameters for each old page are almost the same, and I need a specific ones only.
I want to re-use the "old" virtual directory name for the "new" application.
The search bots, etc., are not a concern, this is internal application with dynamic content, which expires very often.
The question is: can I do this without creating a bunch of empty pages in my "new" application with the old names and a Response.Redirect in their OnLoad? I.e., can this be done using the 404 mechanism, some event handling in Global.asax, etc.?
For the purposes of SEO, you should never redirect on a 404 error. A 404 should be a dead-end, with some helpful information of how to locate the page you're looking for, such a site map.
You should be using a 301, Moved Permanently. This allows the search bots to update their index without losing the page rank assigned to the original page.
See: http://www.webconfs.com/how-to-redirect-a-webpage.php on how to code this type of response.
You could look into the UrlRewritingNet component.
You should also look into using some of the events in your Global.asax file to check for errors and redirect intelligently. The Application_Error event is what you want to work with. You will have the variables from the request at that point (under the HttpContext object), and you can have your code work there instead of in a 404 page. If you go this route, be sure you still return a proper 404 for anything other than the old pages.
I am sorry I don't have any explicit examples or information right now, hopefully this will point you in the right direction.
POST and GET parameters are only available per request. If you already know the name of the old page (OldPage.aspx), why not just add a custom redirect for it?
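Whatever mechanism ends up seeing the full original URL (query string included), the old-to-new mapping itself can live in one table instead of a page per old URL. A sketch in Python, with made-up new-page names:

```python
from urllib.parse import urlsplit, parse_qs, urlencode

# old path -> (new path, names of the parameters worth carrying over)
LEGACY_MAP = {
    "/webapp/oldpage.aspx": ("/webapp/new.aspx", ["data"]),
    "/webapp/oldpage2.aspx": ("/webapp/new.aspx", ["id"]),
}

def rewrite(original_url: str):
    """Return the new URL for a bookmarked legacy URL, or None for a real 404."""
    parts = urlsplit(original_url)
    target = LEGACY_MAP.get(parts.path)
    if target is None:
        return None
    new_path, keep = target
    query = parse_qs(parts.query)
    kept = {k: query[k][0] for k in keep if k in query}
    return new_path + ("?" + urlencode(kept) if kept else "")
```

The same lookup works from a 404 handler or an error event, as long as that handler can see the original query string rather than only the aspxerrorpath value.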
I implemented OpenID support for an ASP.Net 2.0 web application and everything seems to be working fine on my local machine.
I am using the DotNetOpenId library. Before I redirect to the third-party website, I store the original OpenID in the session to use when the user is authenticated (standard practice, I believe).
However, I have a habit of not typing www when entering a URL into the address bar. When I was testing the login on the live server, I was getting problems where the session was cleared. My return URL was hard-coded as www.mysite.com.
Is it possible that switching from mysite.com to www.mysite.com caused the session to switch?
Another issue is that www.mysite.com is not under the realm of mysite.com.
What is the standard solution to these problems? Should the website automatically redirect to www.mysite.com? I could just make the link to the login page an absolute URL containing www. Or are these fixes just hiding another problem?
Solving the realm problem you mentioned is easy: just set the realm to *.mysite.com instead of mysite.com. If you're using one of the ASP.NET controls included in the library, you just set a property on the control to set the realm. If you're doing it programmatically, you set the property on the IAuthenticationRequest object before calling RedirectToProvider().
As far as the session/cookie problem goes with hopping between the www and non-www host name, you have two options:
Rather than storing the original identifier in the session, which is a bad idea anyway for a few reasons, use the IAuthenticationRequest.AddCallbackArguments(name, value) method to store the user's entered data and then use IAuthenticationResponse.GetCallbackArgument(name) to recall the data when the user has authenticated.
Forget it. There's a reason the dotnetopenid library doesn't automatically store this information for you. Directed identity is just one scenario: If the user types 'yahoo.com', you probably don't want to say to them 'Welcome, yahoo.com!' but rather 'Welcome, id.yahoo.com/andrewarnott'! The only way you're going to get the right behavior consistently is to use the IAuthenticationResponse.FriendlyIdentifierForDisplay property to decide what to display to the user as his logged in identifier. It gives more accurate information, and is easier than storing a value in the callback and getting it back. :)
I dunno how OpenID works, but LiveID gives you a token based on the combination of user and domain. I just would have forwarded www to mysite.com.
The cookies and sessions and everything else get lost between www.site.com and site.com. I don't have patience enough to thoroughly read all the specs, but http://www.w3.org/Protocols/rfc2109/rfc2109 states that
A is a FQDN string and has the form NB, where N is a non-empty name string, B has the form .B', and B' is a FQDN string. (So, x.y.com domain-matches .y.com but not y.com.) Note that domain-match is not a commutative operation: a.b.c.com domain-matches .c.com, but not the reverse.
I think that means yes, you do need to forward to www. I have always added domain correction code to my sites when cookies and sessions are being used.
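The domain-matching rule quoted from RFC 2109 can be sketched as a small function, which makes its examples easy to check:

```python
def domain_match(host: str, pattern: str) -> bool:
    """RFC 2109 domain-match: a host matches a pattern that starts with a
    dot when the pattern is a suffix of the host; a pattern with no
    leading dot only matches itself."""
    if host == pattern:
        return True
    return pattern.startswith(".") and host.endswith(pattern)

assert domain_match("x.y.com", ".y.com")        # the RFC's example
assert not domain_match("x.y.com", "y.com")     # no leading dot: no match
assert not domain_match(".c.com", "a.b.c.com")  # not commutative
```

This is why a session cookie scoped to www.mysite.com is not sent to mysite.com, and why redirecting everything to one canonical host is the usual fix.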