We've noticed that some users of our website have a problem: if they follow links to the website from an external source (specifically Outlook and MS Word), they arrive at the website in such a way that User.IsAuthenticated is false, even though they are still logged in in other tabs.
After hours of diagnosis, it appears to be because the FormsAuthentication cookie is sometimes not sent when the external link is clicked. If we examine the traffic in Fiddler, we see different headers for links clicked within the website versus the headers that result from clicking a link in a Word document or email. There doesn't appear to be anything wrong with the cookie (it has "/" as its path, no domain, and a future expiration date).
Here is the cookie being set:
Set-Cookie: DRYXADMINAUTH2014=<hexdata>; expires=Wed, 01-Jul-2015 23:30:37 GMT; path=/
Here is a request sent from an internal link:
GET http://domain.com/searchresults/media/?sk=creative HTTP/1.1
Host: domain.com
Cookie: Diary_SessionID=r4krwqqhaoqvt1q0vcdzj5md; DRYXADMINAUTH2014=<hexdata>;
Here is a request sent from an external (Word) link:
GET http://domain.com/searchresults/media/?sk=creative HTTP/1.1
Host: domain.com
Cookie: Diary_SessionID=cpnriieepi4rzdbjtenfpvdb
Note that the .NET FormsAuthentication token is missing from the second request. The problem doesn't seem to be affected by which browser is set as default and happens in both Chrome and Firefox.
Is this normal/expected behaviour, or is there a way we can fix this?
Turns out this is a known issue with Microsoft Word, Outlook and other MS Office products: <sigh>
See: Why are cookies unrecognized when a link is clicked from an external source (i.e. Excel, Word, etc...)
Summary: Word tries to open the URL itself (in case it's an Office document) but gets redirected because it doesn't have the authentication cookie. Due to a bug in Word, it then incorrectly tries to open the redirected URL in the OS's default browser instead of the original URL. If you monitor the "process" column in Fiddler, it's easy to see the exact behaviour from the linked article occurring.
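One mitigation that is sometimes used (a hedged sketch only, not necessarily what the linked article recommends; the user-agent substrings are assumptions) is to detect Office's probing requests by their User-Agent in Global.asax and answer them with a trivial page instead of the login redirect, so the navigation is retried by the user's real browser, which does send the FormsAuthentication cookie:
protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Office/Word probes typically identify themselves with user agents
    // containing strings such as "ms-office" or "Microsoft Office" (assumption).
    string ua = Request.UserAgent ?? string.Empty;
    if (ua.IndexOf("ms-office", StringComparison.OrdinalIgnoreCase) >= 0 ||
        ua.IndexOf("Microsoft Office", StringComparison.OrdinalIgnoreCase) >= 0)
    {
        // Hand Word a harmless 200 so it never follows the login redirect;
        // Word then opens the original URL in the default browser, which sends
        // the auth cookie as usual. The refresh is only a safety net in case a
        // real browser ever receives this page.
        Response.ContentType = "text/html";
        Response.Write("<html><head><meta http-equiv=\"refresh\" content=\"0\" /></head>" +
                       "<body>Opening page in your browser...</body></html>");
        Response.End();
    }
}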
I just added a feature to a website to allow users to log in with Facebook. As part of the authentication workflow, Facebook forwards the user to a callback URL on my site such as the one below.
https://127.0.0.1?facebook-login-callback?code.....#_=_
Note the trailing #_=_, which is not part of the authentication data; Facebook appears to add it for no clear reason.
Upon receiving the request in the backend I validate the credentials, create a session for the user, then forward them to the main site using a Location header in the HTTP response.
I've inspected the HTTP response via my browser's developer tools and confirmed I have set the Location header as:
Location: https://127.0.0.1/
The issue is that the URL that appears in the browser address bar after forwarding is https://127.0.0.1/#_=_
I don't want the user to see this trailing string. How can I ensure it is removed when redirecting the user to a new URL?
The issue happens in all browsers I have tested: Chrome, Firefox, Safari and a few others.
I know a similar question has been answered in other threads; however, there is no jQuery or JavaScript in this workflow as there is in the other threads. All the processing of the login callback happens in backend code exclusively.
EDIT
Adding a bounty. This has been driving me up the wall for some time. I have no explanation and don't even have a guess as to what's going on, so I'm going to share some of my hard-earned Stackbux with whoever can help me.
Just to be clear on a few points:
There is no JavaScript in this authentication workflow whatsoever.
I have implemented my own Facebook login workflow without using their JavaScript libraries or other third-party tools; it interacts directly with the Facebook REST API using my own Python code in the backend exclusively.
Below are excerpts from the raw HTTP requests as obtained from Firefox inspect console.
1 User connects to mike.local/facebook-login and is forwarded to Facebook's authentication page
HTTP/1.1 302 Found
Server: nginx/1.19.0
Date: Sun, 28 Nov 2021 10:44:30 GMT
Content-Type: text/plain; charset="UTF-8"
Content-Length: 0
Connection: keep-alive
Location: https://www.facebook.com/v12.0/dialog/oauth?client_id=XXX&redirect_uri=https%3A%2F%2Fmike.local%2Ffacebook-login-callback&state=XXXX
2 User accepts and Facebook redirects them to mike.local/facebook-login-callback
HTTP/3 302 Found
location: https://mike.local/facebook-login-callback?code=XXX&state=XXX#_=_
...
Response truncated here. Note the offending #_=_ in the tail of the Location header.
3 The backend processes the tokens Facebook provides via the callback, creates a session for the user, then forwards them to mike.local. I do not add #_=_ to the Location HTTP header, as seen below.
HTTP/1.1 302 Found
Server: nginx/1.19.0
Date: Sun, 28 Nov 2021 10:44:31 GMT
Content-Type: text/plain; charset="UTF-8"
Content-Length: 0
Connection: keep-alive
Location: https://mike.local/
Set-Cookie: s=XXX; Path=/; Expires=Fri, 01 Jan 2038 00:00:00 GMT; SameSite=None; Secure;
Set-Cookie: p=XXX; Path=/; Expires=Fri, 01 Jan 2038 00:00:00 GMT; SameSite=Strict; Secure;
4 User arrives at mike.local and sees a trailing #_=_ in the URL. I have observed this in Firefox, Chrome, Safari and Edge.
I have confirmed via the Firefox inspect console that there are no other HTTP requests being sent. In other words, I can confirm that step 3 is the final HTTP response sent to the user from my site.
According to RFC 7231 §7.1.2:
If the Location value provided in a 3xx (Redirection) response does
not have a fragment component, a user agent MUST process the
redirection as if the value inherits the fragment component of the
URI reference used to generate the request target (i.e., the
redirection inherits the original reference's fragment, if any).
If you get redirected by Facebook to an URI with a fragment identifier, that fragment identifier will be attached to the target of your redirect. (Not a design I agree with much; it would make sense semantically for HTTP 303 redirects, which is what would logically fit in this workflow better, to ignore the fragment identifier of the originator. It is what it is, though.)
The best you can do is clean up the fragment identifier with a JavaScript snippet on the target page:
<script async defer type="text/javascript">
if (location.hash === '#_=_') {
if (typeof history !== 'undefined' &&
history.replaceState &&
typeof URL !== 'undefined')
{
var u = new URL(location);
u.hash = '';
history.replaceState(null, '', u);
} else {
location.hash = '';
}
}
</script>
Alternatively, you can use meta refresh/the Refresh HTTP header, as that method of redirecting does not preserve the fragment identifier:
<meta http-equiv="Refresh" content="0; url=/">
Presumably you should also include a manual link to the target page, for the sake of clients that do not implement Refresh.
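For completeness, the header form of that redirect (the non-standard but widely implemented Refresh response header) would look something like the following; the target URL is illustrative and matches the Location used above:
HTTP/1.1 200 OK
Refresh: 0; url=https://mike.local/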
But if you ask me what I’d personally do: leave the poor thing alone. A useless fragment identifier is pretty harmless anyway, and this kind of silly micromanagement is not worth turning the whole slate of Internet standards upside-down (using a more fragile, non-standard method of redirection; shoving yet another piece of superfluous JavaScript the user’s way) just for the sake of pretty minor address bar aesthetics. Like The House of God says: ‘The delivery of good medical care is to do as much nothing as possible’.
Not a complete answer but a couple of wider architectural points for future reference, to add to the above answer which I upvoted.
AUTHORIZATION SERVER
If you enabled an AS to manage the connection to Facebook for you, your apps would not need to deal with this problem.
An AS can deal with many deep authentication concerns to externalize complexity from apps.
SPAs
An SPA would have better control over processing login responses, as in this code of mine which uses history.replaceState.
SECURITY
An SPA can be just as secure as a website with the correct architecture design - see this article.
When I ask the user for HTTP Basic Auth at some URL, the browser sends the Authorization header only for that URL and certain other URLs.
Testcase script written in PHP:
http://testauth.veadev.tk/
There are three URLs that ask for credentials (you can use any random credentials).
A Logout link (drops the current credentials after pressing the "Cancel" button in the browser's auth form; not working in IE).
Links to the root URL and some deeper test URLs.
Questions:
Why does the browser not send the Authorization header at the / URL if HTTP/1.0 401 Unauthorized was sent at /system/dev?
To repeat: open a clean http://testauth.veadev.tk/, click Auth2, and enter any credentials; you'll be forwarded to / after that. You'll see Auth: null, which means no credentials header was sent by the browser.
Why does the browser send the Authorization header at / if HTTP/1.0 401 Unauthorized was sent at /dev?
To repeat: open a clean http://testauth.veadev.tk/, click Auth1, and enter any credentials; you'll be forwarded to / after that. You'll see something like Auth: string 'Basic dHQ6dHQ=' (length=14), which means the credentials header was sent by the browser.
If you repeat the first case and then click Auth1, you'll have credentials at Root and all other pages. Why?
If you click Auth3 (/some/deep/and/long/url), you'll have credentials at Page3 (/some/deep/and/long/3) and nowhere else. Why?
To clear the credential state between tests, either restart your browser or click Logout, press Cancel in the auth form, and click Root to return (Firefox, Google Chrome).
What are the rules for sending the Authorization header?
RFC 2617, section 2 states:
A client SHOULD assume that all paths at or deeper than the depth of
the last symbolic element in the path field of the Request-URI also
are within the protection space specified by the Basic realm value of
the current challenge. A client MAY preemptively send the
corresponding Authorization header with requests for resources in that
space without receipt of another challenge from the server.
If you are using a Digest challenge, section 3.2 states that you may specify a domain in the WWW-Authenticate header to indicate what the protection space will be. I would try setting that to something like domain=/. I am not sure whether this will work with Basic authorization, but it wouldn't hurt to try; if not, Digest authorization is not much more difficult to work with and is a bit more secure.
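For illustration, a Digest challenge that declares the whole site as the protection space could look like this (the realm, nonce and opaque values are placeholders): once the client has authenticated, it may preemptively send the Authorization header for any URI under the listed domain.
WWW-Authenticate: Digest realm="testauth", domain="/", qop="auth", nonce="dcd98b7102dd2f0e8b11d0f600bfb0c0", opaque="5ccc069c403ebaf9f0171e9517f40e41"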
I have embedded a windows media player in a web page, using the usual <object> and <embed> tags. The video is served by an ashx (http handler). When I try to play the video, I usually (but not always) get an error message telling me that the file extension (ashx) does not match the file format.
This happens in IE (9 & 10) and also in Firefox (latest) with the WMP plugin.
I know that the tags (with classid, etc) are correct because the media player displays and allows me to click the 'play' button.
The ashx returns the correct mime type (video/x-ms-wmv) and a valid file name (somevideo.wmv) in the response headers. I have tried content-disposition attachment and inline.
I have tried URLs using 'http://', 'https://', and '//' (which I prefer).
If I put the url (including the .ashx) of the video file in the browser address bar directly, the video downloads and plays.
If I modify the object tag to use a direct path to the video file (/somewhere/somevideo.wmv), it works - but I can't use this as a solution.
The same ashx serves up video and audio in various other formats without any fuss; it just seems that the embedded Windows Media Player doesn't like it.
This has been working for several years - I think this is some new behavior, though I can't identify what has changed, other than browser updates.
EDIT: a more careful study in Fiddler showed something I missed before. If I access the video directly (by entering my ashx url in the browser address bar), the video plays in the standalone media player. The content type and disposition headers are correct.
However, when using the embedded player, I usually (not always) get OPTIONS and PROPFIND requests from user agent "Microsoft-WebDAV-MiniRedir/6.1.7601". I do not have WebDAV enabled, and I do not respond to options and propfind requests. The embedded player does not request the actual video file.
Correction: I do actually respond to the OPTIONS request. Here is the request and response info from Fiddler:
OPTIONS http://mydomain.com/myhandler.ashx HTTP/1.1
User-Agent: Microsoft-WebDAV-MiniRedir/6.1.7601
translate: f
Connection: Keep-Alive
Host: mydomain.com
HTTP/1.1 200 OK
Allow: OPTIONS, TRACE, GET, HEAD, POST
Server: Microsoft-IIS/7.5
Public: OPTIONS, TRACE, GET, HEAD, POST
X-Powered-By: ASP.NET
Date: Tue, 24 Dec 2013 16:03:49 GMT
Content-Length: 0
This is followed by four identical requests using PROPFIND instead of OPTIONS. The response is 404.
To successfully play the file you need to specify the Content-Disposition and Content-Type headers correctly.
In your ashx, make sure you add the following lines:
Response.AddHeader("Content-Disposition", "attachment; filename=\"a.wmv\"");
Response.ContentType = "video/x-ms-wmv";
Identify the correct file name and content type based on the type of file you have, and substitute them in the code above.
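For context, a minimal sketch of how those headers might sit inside the handler; the path lookup and file name are illustrative assumptions, not your actual code:
using System.Web;

public class VideoHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Resolve the requested file however your handler already does it (illustrative path).
        string path = context.Server.MapPath("~/media/somevideo.wmv");

        context.Response.ContentType = "video/x-ms-wmv";
        context.Response.AddHeader("Content-Disposition", "attachment; filename=\"somevideo.wmv\"");
        context.Response.TransmitFile(path);
    }

    public bool IsReusable
    {
        get { return false; }
    }
}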
Looks like a Cross-Origin Resource Sharing issue; make sure you are returning the correct CORS headers for the other domain, along the lines of the following response headers:
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: X-Requested-With, Accept, Content-Type, Origin
Access-Control-Max-Age: 1728000
Replace * with the domain of the page that hosts and embeds your media player.
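If you want to try this from the ashx itself, a minimal sketch (the origin value is a placeholder for the domain of the embedding page):
// Inside ProcessRequest, before writing the response body:
context.Response.AddHeader("Access-Control-Allow-Origin", "http://www.example.com"); // placeholder origin
context.Response.AddHeader("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
context.Response.AddHeader("Access-Control-Allow-Headers", "X-Requested-With, Accept, Content-Type, Origin");
context.Response.AddHeader("Access-Control-Max-Age", "1728000");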
Have you tried appending the file type to the end of the URL? For example:
http://www.mywebsite.com/MyVideoHandler.ashx?videofile=123245&.wmv
This example assumes that the file is a .wmv file.
My goal is to log into a site with some Java code and do some work once logged in. (In order to write some Java cookie handling, I first need to understand how this all actually works.)
The problem is that I can't quite figure out how to manage the cookie session.
Here's what I've observed when using Chrome's dev tools:
1) On the first request for the site's URL I send no cookies and I get some in response:
Set-Cookie:PHPSESSID=la2ek8vq9bbu0rjngl2o67aop6; path=/; domain=.mtel.bg; HttpOnly
Set-Cookie:PHPSESSID=d32cj0v5j4mj4nt43jbhb9hbc5; path=/; domain=.mtel.bg; HttpOnly
2) On moving to the login page I now send (on an HTTP GET):
Cookie:PHPSESSID=d32cj0v5j4mj4nt43jbhb9hbc5;
__utma=209782857.1075318979.1349352741.1349352741.1349352741.1;
__utmb=209782857.1.10.1349352741;
__utmc=209782857;
__utmz=209782857.1349352741.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none);
__atuvc=1%7C40
And indeed, when I check my stored cookies (after my first GET request for the site's main URL) in Chrome's cookie tab, they exist in the form in which they're passed on the next GET/POST request.
Can you explain what's really happening after my first cookies are received, and why such name/value pairs are stored?
There's JavaScript from Google Analytics on the page which sets some cookies. Only PHPSESSID comes from the server's Set-Cookie headers; the __utma/__utmb/__utmc/__utmz cookies are created client-side by the Google Analytics script (and __atuvc by another on-page script), so an HTTP client that doesn't execute JavaScript will never create them. To keep the session from Java, you only need to store and resend the PHPSESSID value.
Are there any issues with sending back a cookie during a 302 redirect? For example, if I create a return-to-url cookie and redirect the user in the same response will any (modern) browser ignore the cookie?
According to this blog post: http://blog.dubbelboer.com/2012/11/25/302-cookie.html all major browsers, IE (6, 7, 8, 9, 10), FF (17), Safari (6.0.2) and Opera (12.11), on both Windows and Mac, set cookies on redirects. This is true for both 301 and 302 redirects.
As @Benni noted:
https://www.chromium.org/administrators/policy-list-3/cookie-legacy-samesite-policies
The SameSite attribute of a cookie specifies whether the cookie should be restricted to a first-party or same-site context. Several values of SameSite are allowed:
A cookie with "SameSite=Strict" will only be sent with a same-site request.
A cookie with "SameSite=Lax" will be sent with a same-site request, or a cross-site top-level navigation with a "safe" HTTP method.
A cookie with "SameSite=None" will be sent with both same-site and cross-site requests.
One note (to save a developer's life):
IE and Edge ignore Set-Cookie in a redirect response when the cookie's domain is localhost.
Solution:
Use 127.0.0.1 instead of localhost.
Most browsers accept cookies on 302 redirects. I was quite sure of that, but I did a little searching; it's not all modern browsers.
Internet Archive link from a now-removed Microsoft Connect Q&A: Silverlight Client HTTP Stack ignores Set-Cookie on 302 Redirect Responses (2010).
I think we now have a replacement for IE6 and it's Windows Mobile browsers...
Here is the Chromium bug for this issue (Set-cookie ignored for HTTP response with status 302).
I just ran into this problem with both Firefox and Safari, but not Chrome. From my testing, this only happens when the domain changes during the redirect. This is typical in an OAuth2 flow:
OAuth2 id provider (GitHub, Twitter, Google) redirects browser back to your app
Your app's callback URL verifies the authorization and sets login cookies, then redirects again to the destination URL
Your destination URL loads without any cookies set.
For reasons I haven't figured out yet, some cookies from request 2 are ignored while others are not. However, if request 2 returns a HTTP 200 with a Refresh header (the "meta refresh" redirect), cookies are set properly by request 3.
We hit this issue recently (Mar 2022): both Firefox and Chrome didn't set the cookies immediately on an HTTP 302 redirect.
Details:
We sent an HTTP 302 redirect with a Set-Cookie header using the "SameSite=Strict" policy and a Location pointing at a different path on the same domain.
However, the browser didn't send the cookie in the subsequent GET request (to the redirect's Location), even though it was indeed on the same domain (a first-party request).
We could see the cookie in the browser's storage inspector, but not in the request immediately following the 302 response.
When we refreshed the page (or hit enter in the address bar), everything worked again, as the Cookie was sent properly in all following requests.
We think this might be a bug / undocumented behaviour. It's like the browser stored the cookie "a little too late".
We had to work around this by serving HTTP 200 with a client-side redirect instead:
<!DOCTYPE html>
<html>
<head><meta http-equiv="refresh" content="0; url='REDIRECT_URL'"></head>
<body></body>
</html>
This is a really frowned-upon approach, but if you really want to avoid relying on 30x Set-Cookie browser behavior, you can use an HTML meta http-equiv="refresh" "redirect" when setting the cookie. For example, in PHP:
<?php
...
setcookie("cookie", "value", ...);
$url = "page.php";
?>
<html>
<head><meta http-equiv="refresh" content="1;url=<?= $url ?>"></head>
<body><a href="<?= $url ?>">Continue...</a></body>
</html>
The server will send Set-Cookie with a 200 instead of a proper 30x redirect, so the browser will store the cookie and then perform the "redirect". The <a> link is a fallback in case the browser does not perform the meta refresh.
Encountered this issue while using OpenIdConnect / IdentityServer on .Net, where a separate API (different hostname) handles authentication and redirects back to the main site.
First (for development on localhost), you need to set the CookieSecure option to SameAsRequest or Never to deal with http://localhost/ not being secure. See Michael Freidgeim's answer.
Second, you need to set the CookieSameSite attribute to Lax, otherwise the cookies do not get saved at all. Strict does not work here!
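A hedged sketch of what that configuration might look like; I'm assuming the OWIN/Katana cookie authentication middleware here (Microsoft.Owin.Security.Cookies 4.1+, where these options are exposed as CookieSecure and CookieSameSite), so adjust to your own stack:
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationType = "Cookies",
    // Development on http://localhost/ (see the note above): don't insist on Secure.
    CookieSecure = CookieSecureOption.SameAsRequest,
    // Lax is needed so the cookie is sent on the top-level redirect back from the auth API;
    // Strict prevents that, and the cookies appear not to be saved at all.
    CookieSameSite = Microsoft.Owin.SameSiteMode.Lax
});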
In my case I set CookieOptions.Secure = true but tested it on http://localhost, and the browser hid the cookies according to that setting.
To avoid this problem, you can make the cookie's Secure option match the request protocol via Request.IsHttps, e.g.:
new CookieOptions()
{
Path = "/",
HttpOnly = true,
Secure = Request.IsHttps,
Expires = expires
}
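For completeness, a short usage sketch in ASP.NET Core (the cookie name, value and lifetime are placeholders):
var options = new CookieOptions
{
    Path = "/",
    HttpOnly = true,
    Secure = Request.IsHttps,                    // only mark Secure when the request came over HTTPS
    Expires = DateTimeOffset.UtcNow.AddDays(7)   // placeholder lifetime
};
Response.Cookies.Append("session", "value", options); // placeholder name/value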