I have a protected directory for downloads.
A protected file in /downloads has this header:
WWW-Authenticate: Basic realm="Downloads"
I'm protecting it with .htaccess:
AuthType Basic
AuthBasicProvider ldap
AuthName "Downloads"
<Limit GET>
Order Deny,Allow
Deny from all
Require valid-user
Allow from 192.168.1.2
Satisfy any
</Limit>
If I'm not coming from 192.168.1.2, I should have to authenticate via LDAP or, upon auth failure, receive a 401 Unauthorized response.
Sometimes, when not on 192.168.1.2, I receive the 401 error without any prompt for credentials when trying to download a file for the first time in a browser.
A reload then brings up the auth prompt. It's as if the server thinks the browser has already tried and failed to authenticate.
But I'm certain that's not the case, as it's the first visit since opening the browser. It doesn't happen every time.
When I notice the issue, I have usually logged in via SSO to a different site beforehand, e.g., foo.bar.com, a sub-domain of the site with the downloads. The download site, www.bar.com, does not share SSO with foo.bar.com.
Additionally, people reach the download via a plain HTTP link, but the download site redirects to HTTPS (and any credential request happens over HTTPS); the site also uses HSTS to enforce HTTPS on later visits. I'm not sure if this could be related.
Is there some way I can always force the prompt (assuming I'm not at 192.168.1.2)? Can I do something with a rewrite and an auth header?
Any idea why I'm not always prompted for credentials?
I'm building a web application that uses cookies to track the user session. These cookies work flawlessly in development on localhost but they aren't working correctly in production. I suspect this is because I have some cookie settings misconfigured but I'm not sure which.
One thing to note is that the webapp runs at app.goldsky.com and the api runs at api.goldsky.io (note the different TLDs).
The application I'm building uses a tool called WorkOS for user authentication.
The authentication flow is as follows:
1. User visits the website, enters their email, and presses the login button.
2. A request is sent to the backend (api.goldsky.io).
3. The backend generates an authentication URL using the WorkOS SDK (of the form api.workos/...) and sends it to the frontend.
4. The frontend navigates to this WorkOS authentication URL and proceeds through the auth flow.
5. If successful, WorkOS redirects the user to my backend (api.goldsky.io/auth/workos/callback).
6. My backend generates a session token, sets a secure, httpOnly, path=/ cookie with the session token (goldsky_session=...), and redirects the user back to the webapp (app.goldsky.com).
On localhost this all works flawlessly. However, in production the cookie doesn't persist after step 6 completes.
In production, the response at step 5 contains the Set-Cookie header;
however, after the redirect back to the webapp, the cookie seems to disappear: the request to app.goldsky.com (the redirect from step 6) doesn't include a Cookie header,
and, for completeness, the browser's cookie store for app.goldsky.com is empty.
By comparison, the final redirect on localhost does contain the cookie.
How come my cookie does not persist after redirecting from api.goldsky.io to app.goldsky.com? Do I need to set the Domain attribute for the cookie? If so, what should I set it to? Maybe this is a SameSite problem?
It turns out I had an nginx misconfiguration that was rejecting requests to specific paths. Nginx was only allowing requests to /auth and a few other paths. My login logic was under /auth, but the user query was at /user, which nginx was rejecting.
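For illustration, here is a minimal sketch of the kind of allow-list that causes this (the `backend` upstream name is made up); the fix was to add a location for the missing path:

```nginx
server {
    listen 443 ssl;
    server_name api.goldsky.io;

    # Only allow-listed paths were proxied; anything else was rejected.
    location /auth {
        proxy_pass http://backend;
    }

    # The missing block: requests to /user failed until this was added.
    location /user {
        proxy_pass http://backend;
    }
}
```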
I have followed this tutorial (https://techexpert.tips/nginx/nginx-kerberos-authentication/) to create a "special page" /test on my NGINX server that requires successful Kerberos authentication to access. That much works. The problem is that I want the application to know who actually just authenticated to the site, so I can show them their specific information. So, for example, instead of popping up a page that says this to everyone:
Nginx authentication test
I want to have a page that will add the authenticated Kerberos username to the output or issue a cookie with the authenticated username in it. This needs to support multiple, simultaneous users accessing the website so each will have access to their own information.
I should add that when I tail -f /var/log/nginx/access.log, I see a line with two dashes and what appears to be browser (user-agent) information from the browser accessing the site. Then, after successful Kerberos authentication, another line appears in that log with what looks like the username filled in for the second dash. So it seems like this information is available somewhere in nginx, if I could only get access to it. I don't really want to grab it from the access.log file. ;-)
All Nginx auth modules (whether Basic, PAM, or SPNEGO for Kerberos) behave the same way: Nginx puts the username in $remote_user, which you can then propagate to the backend's REMOTE_USER FCGI or WSGI variable using 'fastcgi_param' or 'uwsgi_param' in nginx.conf.
So if you're using PHP via FastCGI, I'd expect to find it in $_SERVER["REMOTE_USER"], while Python using Flask would have request.environ["REMOTE_USER"], and so on.
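As an illustration of the Python side, here's a minimal plain-WSGI sketch (no framework; the greeting and fallback value are made up) that reads REMOTE_USER the same way Flask's request.environ would:

```python
# Minimal WSGI app that echoes the user set by the server layer
# (e.g. via Nginx's `uwsgi_param REMOTE_USER $remote_user;`).
def app(environ, start_response):
    # REMOTE_USER is populated by the FastCGI/uWSGI layer, not by the app.
    user = environ.get("REMOTE_USER", "anonymous")
    body = f"Hello, {user}".encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

Run it under uwsgi or gunicorn behind Nginx; the server layer fills in REMOTE_USER from the upstream configuration.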
On the other hand, if Nginx talks to the webapp via HTTP (using 'proxy_pass'), you should be able to use 'proxy_set_header' to send $remote_user through (for example) X-Remote-User or another custom HTTP header – although you'd better make very sure that nobody but Nginx is allowed to talk to the webapp backend, otherwise they could spoof the header and walk right in.
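A sketch of both variants side by side (the socket path, backend address, and header name are assumptions, not anything the tutorial prescribes):

```nginx
# FastCGI backend: expose the authenticated user as REMOTE_USER.
location /app {
    include fastcgi_params;
    fastcgi_param REMOTE_USER $remote_user;
    fastcgi_pass unix:/run/php-fpm.sock;
}

# HTTP backend: forward the user in a custom header.
# Ensure only Nginx can reach 127.0.0.1:8000, or the header can be spoofed.
location /api {
    proxy_set_header X-Remote-User $remote_user;
    proxy_pass http://127.0.0.1:8000;
}
```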
I have limited access to my wordpress login page by adding the following .htaccess file to the wp-admin directory:
## due to brute force attacks, limiting access to specific ips
order deny,allow
deny from all
allow from 24.xxx.xxx.xxx 66.xxx.xxx.xxx
I have had this in place for a day or three now and thought it was working. But today I got a notice from our security plugin that this site has had several failed login attempts. The failed login attempts were from an IP similar to this:
200.199.xxx.xxx
I am x-ing out the IPs as a security measure, but wanted to give you an idea of the IP ranges I am allowing vs. those I'm seeing attempting to log in.
So how would it be possible for a bot or person to be able to even arrive at the login page with this type of blocking in place?
So how would it be possible for a bot or person to be able to even arrive at the login page with this type of blocking in place?
No one needs to go to any “pages” to send requests to your server; they don’t even need to use a “browser.”
Any client that speaks HTTP can send whatever requests it wants to your site.
But today I got a notice from our security plugin that this site has had several failed login attempts
The login form sends the data directly to /wp-login.php - that is not even in the /wp-admin/ folder that you blocked access to.
/wp-login.php handles the complete login process, and only redirects to /wp-admin/ afterwards.
The failed login attempts you see in your logs come from /wp-login.php.
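A sketch of extending the same IP restriction to the login script itself, via a <Files> block in the WordPress root's .htaccess (placeholder IPs, and the same Apache 2.2-style Order/Deny syntax as the original snippet):

```apache
# In the WordPress root .htaccess, not in wp-admin/
<Files wp-login.php>
    order deny,allow
    deny from all
    allow from 24.xxx.xxx.xxx 66.xxx.xxx.xxx
</Files>
```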
I have a virtual folder containing an administration application, like
https://www.mysite.com/alpha
which requires SSL. In the IIS manager properties for the folder, under "Authentication and access control", Anonymous Access is disabled and "Authenticated Access" is set to "Integrated Windows authentication."
When I connect to the site in Chrome, I receive two login boxes. The first is from mysite.com/alpha:443, and the second is from mysite.com/alpha:80. Firefox appears to re-send my credentials for the second box so it is never shown.
Any ideas why I'd be required to log in twice?
If you require SSL for authenticated users on your website (for any reason), then the best solution is to always serve your "Login" page over https://. That way, when users log in, they are instantly secure. The reason for this is the native design of SSL: it separates/secures itself from the non-secure version by not passing authentication state between http and https.
You will also have to write some logic to redirect returning authenticated visitors to the secure page (i.e., visitors who return authenticated via a cookie).
EDIT:
Since you're using Windows authentication, it's probably easiest to simply redirect ALL incoming http traffic to https. This means your entire site will be served over SSL and will be inaccessible via http (other than the redirect to https).
I wrote a Blog Post on forcing a website to use WWW in the web address, but it can also be ported to forcing https.
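As a rough sketch of the https variant: on IIS 7+ with the URL Rewrite module installed, a rule like this in web.config redirects all http traffic to https (the rule name is arbitrary):

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Redirect any request that did not arrive over HTTPS -->
        <rule name="ForceHttps" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTPS}" pattern="off" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```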
Yep - one prompt is for the SSL site, the other for the non-SSL site.
The credential cache from a secure session cannot be shared with an unsecure session.
If you require SSL, redirect the users directly to the SSL website.
I have a site that has a mix of http and https pages. Under the root of the site, one folder has all the http pages and another has all the https pages. Login is over https and sends the user to the other pages. When a session expires the forms authentication redirects to the Login page but the browser uses http and the user gets a 403 error.
Is there any way to override the session timeout to send it to https?
One way is to configure IIS to redirect http traffic to https:
http://support.microsoft.com/kb/839357
One thing to consider with a mixed-mode setup like that:
there is a common attack on SSL pages in which an attacker makes a plain http request (to an https resource) in order to obtain the unencrypted session cookie. This is why you want to mark your session cookie as secure (so it is never sent over http). I am guessing that your http and https pages share a session, which means you can't set this flag, leaving your site vulnerable to this attack - but it's good to be aware of it.
http://anubhavg.wordpress.com/2008/02/05/how-to-mark-session-cookie-secure/
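For reference, in ASP.NET the cookies can be marked secure in web.config (a sketch; only applicable if all authenticated pages are served over https, and the login URL shown is a placeholder):

```xml
<configuration>
  <system.web>
    <!-- Send cookies only over SSL connections -->
    <httpCookies requireSSL="true" />
    <!-- Restrict the forms auth cookie to https as well -->
    <authentication mode="Forms">
      <forms loginUrl="~/Login.aspx" requireSSL="true" />
    </authentication>
  </system.web>
</configuration>
```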
Another article you may find helpful:
http://www.west-wind.com/Weblog/posts/4057.aspx