I just added a feature on a website to allow users to log in with Facebook. As part of the authentication workflow Facebook forwards the user to a callback URL on my site such as the one below.
https://127.0.0.1/facebook-login-callback?code=.....#_=_
Note the trailing #_=_, which is not part of the authentication data; Facebook appears to add it for no clear reason.
Upon receiving the request in the backend I validate the credentials, create a session for the user, then forward them to the main site using a Location header in the HTTP response.
I've inspected the HTTP response via my browser developer tools and confirmed I have set the Location header as:
Location: https://127.0.0.1/
The issue is that the URL that appears in the browser address bar after forwarding is https://127.0.0.1/#_=_
I don't want the user to see this trailing string. How can I ensure it is removed when redirecting the user to a new URL?
The issue happens in every browser I have tested: Chrome, Firefox, Safari and a few others.
I know a similar question has been answered in other threads; however, unlike those threads, there is no jQuery or JavaScript in this workflow. All the processing of the login callback happens exclusively in backend code.
EDIT
Adding a bounty. This has been driving me up the wall for some time. I have no explanation and not even a guess as to what's going on, so I'm going to share some of my hard-earned Stackbux with whoever can help me.
Just to be clear on a few points:
There is no JavaScript in this authentication workflow whatsoever.
I have implemented my own Facebook login workflow without using their JavaScript libraries or other third-party tools; it interacts directly with the Facebook REST API using my own Python code in the backend exclusively.
Below are excerpts from the raw HTTP requests as obtained from Firefox inspect console.
1 User connects to mike.local/facebook-login and is forwarded to Facebook's authentication page
HTTP/1.1 302 Found
Server: nginx/1.19.0
Date: Sun, 28 Nov 2021 10:44:30 GMT
Content-Type: text/plain; charset="UTF-8"
Content-Length: 0
Connection: keep-alive
Location: https://www.facebook.com/v12.0/dialog/oauth?client_id=XXX&redirect_uri=https%3A%2F%2Fmike.local%2Ffacebook-login-callback&state=XXXX
2 User accepts and Facebook redirects them to mike.local/facebook-login-callback
HTTP/3 302 Found
location: https://mike.local/facebook-login-callback?code=XXX&state=XXX#_=_
...
Request truncated here. Note the offending #_=_ at the tail of the Location header.
3 Backend processes the tokens Facebook provides via the callback redirect, creates a session for the user, then forwards them to mike.local. I do not add #_=_ to the Location HTTP header, as seen below.
HTTP/1.1 302 Found
Server: nginx/1.19.0
Date: Sun, 28 Nov 2021 10:44:31 GMT
Content-Type: text/plain; charset="UTF-8"
Content-Length: 0
Connection: keep-alive
Location: https://mike.local/
Set-Cookie: s=XXX; Path=/; Expires=Fri, 01 Jan 2038 00:00:00 GMT; SameSite=None; Secure;
Set-Cookie: p=XXX; Path=/; Expires=Fri, 01 Jan 2038 00:00:00 GMT; SameSite=Strict; Secure;
4 User arrives at mike.local and sees a trailing #_=_ in the URL. I have observed this in Firefox, Chrome, Safari and Edge.
I have confirmed via the Firefox inspect console that there are no other HTTP requests being sent. In other words, I can confirm step 3 is the final HTTP response sent to the user from my site.
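For illustration, here is a minimal sketch of the kind of callback handler described in step 3, assuming a Flask backend; the route, the cookie name and the create_session helper are illustrative, not my actual code:

from flask import Flask, redirect, request

app = Flask(__name__)

def create_session(code):
    # Hypothetical helper: exchange the code for tokens against the Facebook
    # REST API and persist a server-side session, returning its id.
    return "XXX"

@app.route("/facebook-login-callback")
def facebook_login_callback():
    session_id = create_session(request.args.get("code"))

    # Plain 302 to the site root; note that no fragment is added here.
    # The #_=_ seen afterwards is re-attached by the browser itself.
    response = redirect("https://mike.local/", code=302)
    response.set_cookie("s", session_id, path="/", secure=True, samesite="None")
    return response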
According to RFC 7231 §7.1.2:
If the Location value provided in a 3xx (Redirection) response does
not have a fragment component, a user agent MUST process the
redirection as if the value inherits the fragment component of the
URI reference used to generate the request target (i.e., the
redirection inherits the original reference's fragment, if any).
If you get redirected by Facebook to a URI with a fragment identifier, that fragment identifier will be attached to the target of your redirect. (Not a design I agree with much; it would make sense semantically for HTTP 303 redirects, which is what would logically fit this workflow better, to ignore the fragment identifier of the originator. It is what it is, though.)
The best you can do is clean up the fragment identifier with a JavaScript snippet on the target page:
<script async defer type="text/javascript">
  if (location.hash === '#_=_') {
    if (typeof history !== 'undefined' &&
        history.replaceState &&
        typeof URL !== 'undefined') {
      var u = new URL(location);
      u.hash = '';
      history.replaceState(null, '', u);
    } else {
      location.hash = '';
    }
  }
</script>
Alternatively, you can use meta refresh/the Refresh HTTP header, as that method of redirecting does not preserve the fragment identifier:
<meta http-equiv="Refresh" content="0; url=/">
Presumably you should also include a manual link to the target page, for the sake of clients that do not implement Refresh.
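If you went down that road, the callback would return a small HTML page instead of a Location header. A minimal sketch, again assuming a hypothetical Flask backend:

from flask import Flask

app = Flask(__name__)

# A Refresh-based page drops the fragment, since only Location-based 3xx
# redirects inherit it; the manual link covers clients that ignore Refresh.
REFRESH_PAGE = """<!DOCTYPE html>
<html>
  <head><meta http-equiv="Refresh" content="0; url=/"></head>
  <body><a href="/">Continue to the site</a></body>
</html>"""

@app.route("/facebook-login-callback")
def facebook_login_callback():
    # ... validate the code and create the session as before ...
    return REFRESH_PAGE, 200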
But if you ask me what I’d personally do: leave the poor thing alone. A useless fragment identifier is pretty harmless anyway, and this kind of silly micromanagement is not worth turning the whole slate of Internet standards upside-down (using a more fragile, non-standard method of redirection; shoving yet another piece of superfluous JavaScript the user’s way) just for the sake of pretty minor address bar aesthetics. Like The House of God says: ‘The delivery of good medical care is to do as much nothing as possible’.
Not a complete answer but a couple of wider architectural points for future reference, to add to the above answer which I upvoted.
AUTHORIZATION SERVER
If you used an authorization server (AS) to manage the connection to Facebook for you, your apps would not need to deal with this problem.
An AS can absorb many of the deeper authentication concerns and externalize that complexity from your apps.
SPAs
An SPA would have better control over processing login responses, as in this code of mine which uses history.replaceState.
SECURITY
An SPA can be just as secure as a website with the correct architecture design - see this article.
Related
We've noticed that some users of our website have a problem: if they follow links to the website from an external source (specifically Outlook and MS Word), they arrive at the website in such a way that User.IsAuthenticated is false, even though they are still logged in in other tabs.
After hours of diagnosis, it appears to be because the FormsAuthentication cookie is sometimes not sent when the external link is clicked. If we examine the traffic in Fiddler, we see different headers for links clicked within the website versus the headers that result from clicking a link in a Word document or email. There doesn't appear to be anything wrong with the cookie (it has "/" as its path, no domain, and a future expiration date).
Here is the cookie being set:
Set-Cookie: DRYXADMINAUTH2014=<hexdata>; expires=Wed, 01-Jul-2015 23:30:37 GMT; path=/
Here is a request sent from an internal link:
GET http://domain.com/searchresults/media/?sk=creative HTTP/1.1
Host: domain.com
Cookie: Diary_SessionID=r4krwqqhaoqvt1q0vcdzj5md; DRYXADMINAUTH2014=<hexdata>;
Here is a request sent from an external (Word) link:
GET http://domain.com/searchresults/media/?sk=creative HTTP/1.1
Host: domain.com
Cookie: Diary_SessionID=cpnriieepi4rzdbjtenfpvdb
Note that the .NET FormsAuthentication token is missing from the second request. The problem doesn't seem to be affected by which browser is set as default and happens in both Chrome and Firefox.
Is this normal/expected behaviour, or is there a way we can fix this?
Turns out this is a known issue with Microsoft Word, Outlook and other MS Office products. <sigh>
See: Why are cookies unrecognized when a link is clicked from an external source (i.e. Excel, Word, etc...)
Summary: Word tries to open the URL itself (in case it's an Office document) but gets redirected, as it doesn't have the authentication cookie. Due to a bug in Word, it then incorrectly tries to open the redirected URL in the OS's default browser instead of the original URL. If you monitor the "process" column in Fiddler, it's easy to see the exact behaviour from the linked article occurring.
Background: ETag tracking is well explained here and also mentioned on Wikipedia.
An answer I wrote in response to "How can I prevent tracking by ETags?" has driven me to write this question.
I have a browser-side solution which prevents ETag tracking. It works without modifying the current HTTP protocol. Is this a viable solution to ETag tracking?
Instead of telling the server our ETag, we ASK the server about its ETag and compare it to the one we already have.
Pseudo code:
If (file_not_in_cache)
{
    page = http_get_request();
    page.display();
    page.put_in_cache();
}
else
{
    page = load_from_cache();
    client_etag = page.extract_etag();
    server_etag = http_HEAD_request().extract_etag();
    // Instead of saying "my etag is xyz",
    // the client says: "what is YOUR etag, server?"
    if (server_etag == client_etag)
    {
        page.display();
    }
    else
    {
        page.remove_from_cache();
        page = http_get_request();
        page.display();
        page.put_in_cache();
    }
}
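A minimal sketch of the same flow in Python, assuming the requests library and a simple in-memory cache (the names are illustrative):

import requests

cache = {}  # url -> (etag, body)

def fetch_with_head_check(url):
    """HEAD-then-GET: ask the server for its ETag instead of sending ours."""
    if url not in cache:
        resp = requests.get(url)
        cache[url] = (resp.headers.get("ETag"), resp.content)
        return resp.content

    cached_etag, cached_body = cache[url]
    server_etag = requests.head(url).headers.get("ETag")

    if server_etag == cached_etag:
        return cached_body              # serve from cache; our ETag is never sent

    resp = requests.get(url)            # cache is stale: refetch and replace
    cache[url] = (resp.headers.get("ETag"), resp.content)
    return resp.content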
HTTP conversation example with my solution:
Client:
HEAD /posts/46328 HTTP/1.1
Host: security.stackexchange.com
Server:
HTTP/1.1 200 OK
Date: Mon, 23 May 2005 22:38:34 GMT
Server: Apache/1.3.3.7 (Unix) (Red-Hat/Linux)
Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
ETag: "EVIl_UNIQUE_TRACKING_ETAG"
Content-Type: text/html
Content-Length: 131
Case 1, Client has an identical ETag:
Connection closes, client loads page from cache.
Case 2, client has a mismatching ETag:
GET...... //and a normal http conversation begins.
Extras that do require modifying the HTTP specification
Think of the following as theoretical material; the HTTP spec probably won't change any time soon.
1. Removing HEAD overhead
It is worth noting that there is minor overhead: the server has to send the HTTP headers twice, once in response to the HEAD and once in response to the GET. One theoretical workaround is modifying the HTTP protocol and adding a new method which requests header-less content. The client would then request the headers only, and after that the content only, if the ETags mismatch.
2. Preventing cache-based tracking (or at least making it a lot harder)
Although the workaround suggested by Sneftel is not an ETag tracking technique, it does track people even when they're using the "HEAD, GET" sequence I suggested. The solution would be restricting the possible values of ETags: instead of being an arbitrary string, the ETag has to be a checksum of the content. The client checks this, and if the checksum it computes for the content does not match the value sent by the server, the cache is not used.
Side note: fix 2 would also eliminate the following Evercookie tracking techniques: pngData, etagData, cacheData. Combining that with Chrome's "Keep local data only until I quit my browser" eliminates all evercookie tracking techniques except Flash and Silverlight cookies.
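A minimal sketch of the checksum check from fix 2, assuming SHA-256 ETags (the exact checksum scheme is my assumption, not part of any spec):

import hashlib
import requests

def verify_content_etag(url):
    """Fetch a resource and only trust its ETag if it is a checksum of the body."""
    resp = requests.get(url)
    etag = resp.headers.get("ETag", "").strip('"')
    computed = hashlib.sha256(resp.content).hexdigest()

    if etag != computed:
        # The server-chosen ETag is not a checksum of the content:
        # treat it as a potential tracking value and do not cache.
        return resp.content, False
    return resp.content, True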
It sounds reasonable, but workarounds exist. Suppose the front page was always given the same etag (so that returning visitors would always load it from cache), but the page itself referenced a differently-named image each time it was loaded. Your GET or HEAD request for this image would then uniquely identify you. Arguably this isn't an etag-based attack, but it still uses your cache to identify you.
As long as any caching is used there's a potential exploit, even with the HTTP changes. Suppose the main page includes 100 images, each one randomly drawn from a potential pool of 2 images.
When a user returns to the site, her browser reloads the page (since the checksum doesn't match). Each image slot has a 1/2 chance of drawing the same image as last time, so on average 50 of the 100 images will already be cached from before. This combination of cached and re-requested images can (almost certainly) be used to individually fingerprint the user.
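To make the arithmetic concrete, here is a tiny, purely illustrative simulation of that cached-image fingerprint:

import random

POOL = 2      # images per slot
SLOTS = 100   # images on the page

first_visit  = [random.randrange(POOL) for _ in range(SLOTS)]   # what got cached
second_visit = [random.randrange(POOL) for _ in range(SLOTS)]   # what the page asks for now

# The server sees which slots are NOT re-requested (already cached): a bit pattern
# it can correlate with the combination it served on the earlier visit.
cached = [a == b for a, b in zip(first_visit, second_visit)]
print(sum(cached), "of", SLOTS, "images served from cache (about half, on average)")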
Interestingly, this is almost exactly how DNA paternity testing works.
The server could detect that for a number of resources you do a HEAD request which is not followed by a GET for the same resource. That's a tell if you were playing poker.
Just by having some resources cached, you are storing information. That information can be deduced by the server any time you do not re-request a resource named on the page.
Protecting your privacy in this manner comes at the cost of having to download every resource on the page with every visit. If you ever cache anything then you are storing information that can be inferred from your requests to the server.
Especially on mobile, where your bandwidth is more expensive and often slower, downloading all page resources on every visit could be impractical. I think at some level you have to accept that there are patterns in your interaction with the website which could be detected and profiled to identify you.
I'm trying to make a POC to show that it is possible to have a website that uses both HTTP and HTTPS. I have a control in my master page that needs to know whether the user is authenticated or not; for this I want to use HttpContext.Current.User.Identity.IsAuthenticated. If the user is authenticated it shows info for authenticated users; if not, the login control appears.
To authenticate, the control makes an AJAX POST request to the Login action, which has the [RequireHttps] attribute. The URL used in the AJAX request is:
$.ajax({
    type: 'POST',
    url: '@Url.Action("ModalLogIn", "Authentication", null, "https", Request.Url.Host + ":44300")',
By the way, I'm using VS2013 IIS Express with SSL enabled.
As you can see, in my AJAX request I'm using HTTPS in the action URL.
The request is made to the server using SSL and the response comes back successfully.
The problem is that in subsequent requests the ASPXAUTH cookie is not passed in the request headers, so the server does not get the user's authentication info. The subsequent requests are made without SSL; they are plain HTTP requests.
I know that in security terms the authentication is still insecure because I'm expecting to pass the ASPXAUTH cookie over HTTP, but like I said this is a POC and I want to see if it is possible to make a single authentication request over HTTPS and all the others over HTTP.
As requested, these are the response headers:
Access-Control-Allow-Orig... *
Cache-Control private
Content-Length 15
Content-Type application/json; charset=utf-8
Date Sat, 26 Oct 2013 18:57:55 GMT
Server Microsoft-IIS/8.0
Set-Cookie ASP.NET_SessionId=j2a53htev0fjp1qq4bnoeo0l; path=/; HttpOnly
ASP.NET_SessionId=j2a53htev0fjp1qq4bnoeo0l; path=/; HttpOnly
IAC.CurrentLanguage=en; expires=Sun, 26-Oct-2014 19:57:55 GMT; path=/
.ASPXAUTH=730DEDBFD2DF873A5F2BD581AA0E25B685CAD12C26AEA63AD82484C932E26B617687A05BB403216CC5EFCF799970810059F9CA2CF829F953580AF81FF48102003C0129AB04424F0D011A733CAAF1DE00688E5A4C93DEA97338DD2B5E7EE752F3761A470D52449BEBCA74098912DE37AA8C1E293B1C5D44EB1F9E9384DAAEF289; path=/; HttpOnly
X-AspNet-Version 4.0.30319
X-AspNetMvc-Version 3.0
X-Powered-By ASP.NET
X-SourceFiles =?UTF-8?B?QzpcTXkgRGF0YVxCaXRidWNrZXRcaWFjLXdlYnNpdGVcaW1wbGVtZW50YXRpb25cZG90bmV0XElBQy5XZWJcQXV0aGVudGljYXRpb25cTW9kYWxMb2dJbg==?=
It might be that when you set the auth cookie, it is marked as "Secure".
Using the Chrome Developer Tools, click on 'Resources', then 'Cookies'. Under the 'Secure' column, check whether the cookie is marked. If it is, then this means the browser will not send the auth cookie over a non-secure connection.
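For illustration, a minimal sketch of what the Secure attribute looks like on a Set-Cookie header, built with Python's http.cookies purely as an illustration (the cookie value is shortened):

from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie[".ASPXAUTH"] = "730DEDBF..."      # value shortened for the example
cookie[".ASPXAUTH"]["path"] = "/"
cookie[".ASPXAUTH"]["httponly"] = True
cookie[".ASPXAUTH"]["secure"] = True     # with this flag, browsers withhold the
                                         # cookie on plain-HTTP requests

print(cookie.output())
# e.g. Set-Cookie: .ASPXAUTH=730DEDBF...; HttpOnly; Path=/; Secure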
Just a shot in the dark, but try setting the ASPXAUTH cookie with an expiration date.
It's possible that the browser, upon receiving a session cookie, will only present the session cookie on connections using the same protocol (https) as when it was set. I know for sure that persistent cookies do not have this limitation.
Also, investigate whether port could be the issue. If your AJAX goes over 44300 and your web goes over 80 or 443, it's possible the cookie is lost because the browser considers secure cookies to be port-specific. The W3C spec doesn't say whether cookies are private with respect to port; browsers vary.
Everything works fine with that AJAX request being made over HTTPS from JS, and the related response works correctly too. But it seems that you have not prepared the Login page for SSL as well! What I mean is:
[RequireHttps]
public ActionResult Login()
{
    return View();
}
Then send the request to the HttpPost-enabled action. I believe that will work correctly, unless you are missing some requirements such as MicrosoftMvcAjax.js and MicrosoftAjax.js in situations where you are using the formal Microsoft AJAX form with your view engine (perhaps Razor). I think studying this article can help you more.
Good Luck.
I need to test some client application code I've written, to verify its handling of various status codes returned in an HTTP response from a web server.
I have Fiddler 2 (Web Debugging Proxy) installed and I believe there's a way to modify responses using this application, but I'm struggling to find out how. This would be the most convenient way, as it would allow me to leave both client and server code unmodified.
Can anyone assist as I'd like to intercept the HTTP response being sent from server to client and modify the status code before it reaches the client?
Any advice would be much appreciated.
Ok, so I assume that you're already able to monitor your client/server traffic. What you want to do is set a breakpoint on the response then fiddle with it before sending it on to the client.
Here are a couple of different ways to do that:
Rules > Automatic Breakpoints > After Responses
In the quickexec box (the black box at the bottom) type "bpafter yourpage.svc". Now Fiddler will stop at a breakpoint before all requests to any URL that contains "yourpage.svc". Type "bpafter" with no parameters to clear the breakpoint.
Programmatically tamper with the response using FiddlerScript. The best documentation for FiddlerScript is on the official site: http://www.fiddler2.com/Fiddler/dev/
Once you've got a response stopped at the breakpoint, just double click it to open it in the inspectors. You've got a couple of options now:
Right next to the green Run to Completion button (which you click to send the response) there's a dropdown that lets you choose some default response types.
Or, on the Headers inspector, change the response code & message in the textbox at the top.
Or, click the "Raw" inspector and mess with the raw response to do arbitrary things to it. Also a good way to see what your client does when it gets a malformed response, which you'll probably test accidentally :)
Another alternative is to use Fiddler's AutoResponder tab (on the right-hand panel). This allows you to catch a request to any URI that matches a string and serve a "canned" response from a file. The file can contain both headers and payload. The advantage of this approach is that you don't have to write FiddlerScript and you don't have to handle each request manually via a breakpoint.
You would set the rule up in Fiddler as shown below (ensure you enable unmatched request passthrough, otherwise all other HTTP requests will fail).
In this example, any request whose URI includes "fooBar" will get the canned response. The format of the file will vary depending on your APIs (you can use your browser to intercept a "real" response and base it on that) but mine looked like the following:
HTTP/1.1 409 Conflict
Server: Apache-Coyote/1.1
X-Powered-By: Servlet 2.5; JBoss-5.0/JBossWeb-2.1
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, DELETE, PUT, PATCH, OPTIONS
Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept, Authorization
Access-Control-Max-Age: 86400
Content-Type: application/vnd.api+json
Content-Length: 149
Date: Tue, 28 Mar 2017 10:03:29 GMT
{"errors":[{"code":"OutOfStock","detail":"Item not in stock","source":{"lineId":{"type":"Order line Number","id":"1"}},"meta":{"availableStock":0}}]}
I found that it needed a carriage return at the end of the last line (i.e. after the json), and that the Content-Length header had to match the number of characters in the json, otherwise the webapp would hang. Your mileage may vary.
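If you want to double-check that, here is a throwaway Python check of the Content-Length for the canned body above:

body = '{"errors":[{"code":"OutOfStock","detail":"Item not in stock","source":{"lineId":{"type":"Order line Number","id":"1"}},"meta":{"availableStock":0}}]}'
print(len(body.encode("utf-8")))   # 149, matching the Content-Length header above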
Create a FiddlerScript rule. Here's what I used in order to generate a local copy of a website that was intentionally using 403 on every page to thwart HTTrack/WGET.
https://gist.github.com/JamoCA/22db8d68a9a2fb20cb04a85360185333
/* 20180615 Fiddler rule to ignore all 403 HTTP Status errors so WGET or HTTrack can generate a local copy of a remote website.
   SCENARIO: Changing the user agent or setting a delay isn't enough and the entire remote server is configured to respond w/403.
   CONFIGURE: Add the rule below to the FiddlerScript OnBeforeResponse() section. Configure HTTrack/WGET/CRON to use proxy 127.0.0.1:8888 */
static function OnBeforeResponse(oSession: Session) {
    if (oSession.HostnameIs("TARGETHOSTNAME_FILTER.com") && oSession.responseCode == 403) {
        oSession.responseCode = 200;
        oSession.oResponse.headers.HTTPResponseCode = 200;
        oSession.oResponse.headers.HTTPResponseStatus = "200 OK";
    }
}
I would like to kindly ask you for a suggestion regarding browser cache invalidation.
Let's assume we've got an index page that is returned to the client with http headers:
Cache-Control: public, max-age=31534761
Expires: Fri, 17 Feb 2012 18:22:04 GMT
Last-Modified: Thu, 17 Feb 2011 18:22:04 GMT
Vary: Accept-Encoding
Should the user try to hit that index page again, it is very likely that the browser won't even send a request to the server - it will just present the user with the cached version of the page.
My question is: is it possible to create a web resource (for instance at uri /invalidateIndex) such that when a user hits that resource he is redirected to the index page in a way that forces the browser to invalidate its cache and ask the server for fresh content?
I'm having similar problems with a project of my own, so I have a few suggestions, if you haven't already found some solution...
1. I've seen this as the way jQuery forces AJAX requests not to be cached: it adds an HTTP parameter to the URL with a random value or name, so that each new request has an essentially different URL and the browser then never uses the cache. You could actually have the /invalidateIndex URI redirect to such a URL. The problem, of course, is that the browser never actually invalidates the original index URL, and that the browser will always re-request your index.
2. You could of course change the Cache-Control HTTP header to a smaller max-age, say an hour, so that the cache expires every hour or so.
3. And also, you could use ETags, wherein the cached data carry a tag that is sent with each request, essentially asking the server whether the index has changed or not.
Suggestions 2 and 3 can even be combined, I think...
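A minimal sketch of suggestions 2 and 3 combined, assuming a Flask-style handler (the route and helper names are illustrative):

import hashlib
from flask import Flask, request, Response

app = Flask(__name__)

def render_index():
    # Placeholder for whatever produces the index page.
    return "<html><body>index</body></html>"

@app.route("/")
def index():
    body = render_index()
    etag = hashlib.sha256(body.encode("utf-8")).hexdigest()

    # Revalidate after an hour instead of a year (suggestion 2).
    headers = {"Cache-Control": "public, max-age=3600", "ETag": f'"{etag}"'}

    # Conditional request support (suggestion 3): if the client's cached copy
    # is still current, answer 304 and skip the body.
    if request.headers.get("If-None-Match") == f'"{etag}"':
        return Response(status=304, headers=headers)
    return Response(body, status=200, headers=headers)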
There is no direct way of asking a browser to purge its cache of a particular file, but if you have only a few systems like this and plenty of bandwidth, you could try returning large objects on the same protocol, host, and port so that the cache starts evicting old objects. See https://bugzilla.mozilla.org/show_bug.cgi?id=81640 for example.