I need some help resolving a strange behavior I came across while using Thinktecture's EmbeddedSTS locally in my ASP.NET MVC application. I don't see this issue on the server, which uses ADFS.
The issue is:
After I sign in to the application, most of the HTTP calls from then on are made twice.
The first HTTP request goes out without the FedAuth cookie, the server responds with a 302 (redirect), and another request to the same URL is made, this time with the FedAuth cookie. I'm trying to understand what causes the browser to send the first request without the FedAuth cookie, and why the server redirects to the same URL.
I also need help understanding how the EmbeddedSTS URL gets resolved. I went through the code on GitHub, but it is not clear to me how that URL is resolved.
Any help is appreciated.
I was able to figure out the issue on my own.
The issue is related to cookie paths being case-sensitive. My virtual directory on localhost was configured as ATSWeb, but when making AJAX calls I was constructing the full URL with a different case for the virtual directory (atsweb).
Since the ADFS cookie was set with the path /ATSWeb, the browser was not sending the FedAuth cookie with AJAX calls to /atsweb, which led to all sorts of issues.
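As an illustration of the fix (a hedged sketch, not my original code; the endpoint name is made up), deriving the virtual directory from the browser's own location keeps the AJAX URL casing identical to the cookie path:

// Sketch: build AJAX URLs from the application's base path as the browser sees it,
// instead of hardcoding the virtual directory name, so the casing always matches
// the Path attribute (/ATSWeb) the FedAuth cookie was issued with.
const basePath = window.location.pathname.split("/")[1]; // e.g. "ATSWeb"
fetch(`/${basePath}/api/orders`) // "/api/orders" is a made-up endpoint for illustration
  .then((response) => response.json())
  .then((data) => console.log(data));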
You can read more about cookie paths at the links below.
http://www.allbacktomine.com/blog/2009/02/04/BrowserCookiesThePathIsCaseSensitive.aspx
Why are cookie paths case sensitive?
I'm working on a WordPress website: https://samarazakaz.ru/
The client discovered a strange bug: after newly opening a browser, the first login always fails and the second one succeeds.
I tracked the issue down to a strange cookie named RCPC that is set when the login form is submitted. If the cookie is missing, the login fails regardless of correct credentials.
I searched high and wide for any information about this cookie but could not find anything useful. The only thing remotely resembling my case was a discussion on https://codeforces.com/, but nothing there mentioned anything related to WordPress.
The site has a bare-bones setup with Elementor and my own plugin, and nothing in my code messes with cookies or the login process. I downloaded all the website files and searched them for "RCPC" but found nothing.
The site is behind an Nginx proxy, but I could not find any connection between this cookie and Nginx either.
I noticed that the value of this cookie is constant, so as a workaround I jerry-rigged my plugin to set this cookie whenever it is not set. But, of course, I'm not very happy with that solution because I don't know if it will just stop working one day.
Update:
I verified that this is coming from the hosting. I renamed the /wp-login.php file and made a request to it, and it didn't return a 404 error but a 200 page with the same redirect code and the header to set the cookie. The hosting is reg.ru.
As far as I can figure, this is a countermeasure against automated password guessing. Any request (POST, GET, etc.) to /wp-login.php gets the redirect script with the cookie-setting header; only requests containing the correct RCPC cookie get forwarded.
Upon further testing I found that the value of the RCPC cookie is some kind of hash generated from the request's IP address: all of our computers got the same value, but from other locations it is different.
This does not cause any problem if the standard WordPress login form is used, because that form lives at the /wp-login.php address, so the first GET request already generates the cookie. However, we had a custom login page which didn't touch /wp-login.php until the form was submitted.
Based on these findings I made a workaround: a one-line JS snippet on the login page that makes a fetch request to /wp-login.php and simply discards the result. This is enough to set the cookie in the browser so that the form works on the first try.
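The workaround boils down to something like this (a sketch of the idea rather than the exact line from the plugin):

// Prime the anti-bot RCPC cookie before the custom login form is submitted;
// the response body is irrelevant and simply discarded.
fetch("/wp-login.php").catch(() => { /* ignore errors; the Set-Cookie header is all we need */ });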
You need to ask the hosting provider to disable the test cookie module on their end.
I have two Spring Web applications that work together. I'm running the first application from the IDE on localhost, while the second one is running in Docker on app.127.0.0.1.nip.io.
The two applications interact indirectly through the user's browser by redirecting and POSTing between the two apps. This is slightly similar to how an SP and an IdP work together in SAML2.
In my case, the first application on localhost sends a 302 to the second application. After doing some work, the second application sends back an HTML page with a form and JS code to auto-submit it, targeting my first application on localhost. The HTML looks similar to this:
<form method="POST" action="http://localhost:8080/some/path">
...
</form>
My first application is using Spring Session with a session cookie, and this works just fine. However, when the second application makes the browser POST the form, the browser does not send the session cookie with the POST request.
When both applications are running in Docker under .127.0.0.1.nip.io, the cookie is sent.
I've tried to find any hint as to whether this behaviour is expected, and what headers or other mechanisms the applications could use to influence it.
At this point this is mostly an annoyance while debugging, but I'm concerned that once the two applications run on different FQDNs and/or different domains, browsers will also block the cookie from being sent.
I've tested this with current versions of Chrome and Firefox.
The problem is the new(ish) SameSite cookie policy that covers exactly this case: another application is POSTing to a host via HTTP. The default now is SameSite: lax, which does not allow sending the first-party cookie values on this request.
The solution is to allow the session cookie to be sent by specifying SameSite: none. Be aware however that this might create security vulnerabilities. For my application, this is not an issue, so I can allow the cookie to always be sent, and especially when I run my application in the debugger.
For the production deployment, I will be able to tighten this, since both applications will run under the same domain (a.example.com and b.example.com), and both will use TLS, so I can set the session cookie to SameSite: lax.
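To make the attribute concrete, here is a minimal sketch of issuing a session cookie with SameSite=None (Node/Express in TypeScript purely as an illustration, not the Spring setup in question; the cookie name and value are placeholders):

import express from "express";

const app = express();

app.get("/login", (_req, res) => {
  // SameSite=None lets the browser attach this cookie to cross-site requests,
  // including an auto-submitted POST coming from another origin. Browsers only
  // accept SameSite=None together with Secure, so HTTPS is effectively required.
  res.cookie("SESSION", "opaque-session-id", {
    httpOnly: true,
    secure: true,
    sameSite: "none",
  });
  res.send("logged in");
});

app.listen(8080);

In Spring Session the same attribute can be set on the session cookie through the cookie serializer (DefaultCookieSerializer exposes a setSameSite setter).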
Here's a decent explanation: https://web.dev/samesite-cookies-explained/
I've been using Postman in my app development for some time and never had any issues. I typically use it with Google Chrome while I debug my ASP.NET API code.
About a month or so ago, I started having problems where Postman doesn't seem to send the cookie my site issued.
Through Fiddler, I inspect the call I'm making to my API and see that Postman is NOT sending the cookie issued by my API app. It's sending other cookies, but not the one it is supposed to send.
Under "Cookies" in Postman, I do see the cookie I issue, i.e. .AspNetCore.mysite_cookie.
Any idea why this might be happening?
P.S. I think this issue started after I made some changes to my code to name my cookies. My API app uses social authentication, and I decided to name both cookies, i.e. the one I receive from Facebook/Google/LinkedIn once the user is authenticated and the one I issue to authenticated users. I call the cookie I get from the social sites social_auth_cookie, and the one I issue is named mysite_cookie. I think this has something to do with the issue I'm having.
The cookie in question cannot legally be sent over an HTTP connection because its Secure attribute is set.
For some reason, mysite_cookie has its Secure attribute set differently from social_auth_cookie, either because you are setting it in code...
var cookie = new HttpCookie("mysite_cookie", cookieValue);
cookie.Secure = true; // the browser will only send this cookie over HTTPS
...or because the service is configured to automatically set it, e.g. with something like this in web.config:
<httpCookies httpOnlyCookies="true" requireSSL="true"/>
The flag could also potentially be set by a network device (e.g. an SSL offloading appliance) in a production environment, but that's not very likely in your dev environment.
I suggest you try the same code base but over an HTTPS connection. If you are working on code that affects authentication mechanisms, you really ought to set up your development environment with SSL anyway; otherwise you are going to miss a lot of bugs, and you won't be able to perform any meaningful pen testing or app scanning for potential threats.
You don't need to worry about cookies if you already have them in your browser.
You can use your browser's cookies by installing the Postman Interceptor extension (to the left of the "In Sync" button).
I have been running into this issue recently with ASP.NET Core 2.0. ASP.NET Core 1.1, however, seems to work just fine, and the cookies get set in Postman.
From what you have described, it seems like Postman is not picking up the cookie you want, either because it doesn't recognize the cookie's name or because it is still pointing at the old cookie.
Things you can try:
Undo all the name changes and see if it works (just to get to the root of the issue).
Rename one cookie and see if it still works, then proceed with the other.
I hope debugging in this way takes you to the root cause of the issue.
If I make an HTTP request to get index.html on http://www.example.com but that URL has a 302 redirect in place that points to http://www.foo.com/index.html, what happens if the redirect target (http://www.foo.com/index.html) isn't available? Will the user agent try the original URL (http://www.example.com/index.html) or just return an error?
Background to the question: I manage a legacy site that supports a few existing customers but doesn't allow new sign-ups. Pretty much all the pages are redirected (using 302s rather than 301s, for some unknown reason...) to a newer site. This includes the sign-up page. On one of the pages that isn't redirected there is still a link to the sign-up page, which itself links through to a third-party payment page (i.e. on another domain). Last week our current site went down for a couple of hours, and in that period someone successfully signed up through the old site. The only way I can imagine this happened is that if a 302 doesn't find its intended URL, some (all?) user agents bypass the redirect and go to the originally requested URL.
By the way, I'm aware there are many better ways to handle the particular situation we're in with the two sites. We're on it! This is just one of those weird situations I want to get to the bottom of.
You should receive a 404 Not Found status code.
Since HTTP is a stateless protocol, there is no real connection between two requests of a user agent. The redirection status codes are just a way for servers to politely tell their clients that the resource they were looking for is somewhere else now. The clients, however, are in no way obliged to actually request the resource from that other URL.
Oh, the signup page is at that URL now? Well then I don't want it anymore... I'll go and look at some kittens instead.
Moreover, even if the client does decide to request the new URL (which it usually does ^^), this can be considered a completely new communication between server and client. Neither server nor client should remember that there was a previous request which resulted in a redirection status code. Instead, the current request should be treated as if it were the first (and only) request. And what happens when you request a URL that cannot be found? You get a 404 Not Found status code.
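You can see this directly with a quick sketch (fetch syntax, using the placeholder URLs from the question): the client follows the 302 and reports whatever the target answers; there is no fallback to the original URL.

// Follow the 302 from example.com; if foo.com's page is missing, the result is
// its 404 Not Found, not a retry of the originally requested URL.
async function checkRedirectTarget(): Promise<void> {
  const response = await fetch("http://www.example.com/index.html", { redirect: "follow" });
  console.log(response.status); // e.g. 404 when http://www.foo.com/index.html is unavailable
  console.log(response.url);    // the final URL after following redirects
}

checkRedirectTarget();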
I have an ASP.NET MVC 3 website with a REST API service.
When a user passes in an invalid API key, or they have been blacklisted, I wish to ignore the request entirely.
I know I could send back a 404 or a 503, but if someone keeps polling me I would ideally like to send no response at all, causing a time-out on their end and thus slowing down the hammering my server gets.
Is this possible within ASP.NET MVC 3? If so, any help would be most appreciated.
Thank you
For what you want, you still need to parse the request, so it will always consume server resources, especially if you have an annoying user sending a query every 500 ms...
In these situations you would block the IP / headers of the request for a period of, for example, 10 minutes. It would be a very good idea to block it on your load balancer and prevent the request from even reaching your application; this is easily accomplished if you're using Amazon Web Services to run your service, but other cloud providers support this as well, if you are on cloud hosting.
If you can only act within your web application (and this is an untested solution), you could add an ignored route to your routing mechanism, like:
routes.IgnoreRoute("{*allignore}", new { allignore = @".*\.ignore(/.*)?" });
and upon checking that the IP is banned, simply redirect, using for example Response.Redirect(), to a .ignore path on your site... or, why not redirect that request to google.com just for the fun of it?