Is Request.IsLocal secure or can it be spoofed? - asp.net

I have a webpage which checks for an encrypted cookie on page load to determine user identity. However, when I'm testing the page locally on my development box, I don't have access to that cookie.
Previously I used an appSetting to tell the page whether it was in development mode or not, and when in dev mode it would load a fixed user identity. Then I discovered Request.IsLocal.
I can simply check like this:
if (Request.IsLocal) {
    FormsAuthentication.SetAuthCookie("testUser", false);
} else {
    FormsAuthentication.SetAuthCookie(/*EncryptedCookieValue*/, false);
}
Is this secure? Is there any way a malicious user could spoof IsLocal?

I think your actual question is: how do you have development-only functionality?
You could use Environment.UserInteractive:
http://msdn.microsoft.com/en-us/library/system.environment.userinteractive.aspx
It returns false when running in IIS or a Windows service, and true when there is a user interface, i.e. in Visual Studio while you're developing.
I think this is better than a DEBUG pre-processor variable because the behaviour is more consistent; you could accidentally upload a DEBUG build of your dll to your live environment unless you have a very tight build/release process.
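You could then do something like the following (a minimal sketch only, reusing the placeholder names from the question):

using System;
using System.Web.Security;

public static class DevIdentity
{
    // Sketch: pick the identity source based on Environment.UserInteractive.
    // userNameFromEncryptedCookie stands in for whatever your existing cookie
    // decryption yields; "testUser" is the placeholder from the question.
    public static void IssueAuthCookie(string userNameFromEncryptedCookie)
    {
        if (Environment.UserInteractive)
        {
            // Interactive process, e.g. the Visual Studio development web server
            FormsAuthentication.SetAuthCookie("testUser", false);
        }
        else
        {
            // Hosted in IIS or a Windows service
            FormsAuthentication.SetAuthCookie(userNameFromEncryptedCookie, false);
        }
    }
}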
As a rule of thumb it's not a good idea to trust anything from the client.
I'd also be pragmatic: what are you protecting, and how much effort would someone put in to hack it?
The below SO post goes into some of the reasons why you shouldn't trust it:
Can I fool HttpRequest.Current.Request.IsLocal?
Reference
You can view the source at http://referencesource.microsoft.com
public bool IsLocal {
    get {
        String remoteAddress = UserHostAddress;

        // if unknown, assume not local
        if (String.IsNullOrEmpty(remoteAddress))
            return false;

        // check if localhost
        if (remoteAddress == "127.0.0.1" || remoteAddress == "::1")
            return true;

        // compare with local address
        if (remoteAddress == LocalAddress)
            return true;

        return false;
    }
}

The code for IsLocal appears to be robust - I can't see any flaws in its logic so for your purposes it should be fine.
However, you should be aware that if your application (or any other application on the same server) makes HTTP requests whose destination can be influenced by the end user, then you should add an extra layer of security, such as a secret/expiring key or token on the request (see the sketch further below), or lock down the outgoing HTTP request so that it cannot target a local resource.
e.g. Say your website has an endpoint such as http://www.example.com/DeleteAllUsers, and in the code that handles this request you check IsLocal to make sure that users can only be deleted by a local, trusted request.
Now let's say you have a feature on your website, "Enter a web address to view its headers", and the user enters http://www.example.com/DeleteAllUsers in that text box. Your application requests DeleteAllUsers and satisfies the IsLocal security check, because the HTTP request is made from your own app. This is how IsLocal can be exploited. I realise it is a contrived example to prove the point, but lots of websites do similar things, such as grabbing a preview image of a URL to display. If nothing on your server can be made to issue a local HTTP request, you should be good to go.
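One way to harden an endpoint like that is to require a shared secret in addition to IsLocal, so that a request merely originating from the server cannot trigger it. A minimal sketch, assuming an appSettings entry and a custom header whose names are purely illustrative:

using System.Configuration;
using System.Web;

public static class LocalRequestGuard
{
    // Accept the request only if it is local *and* carries a secret that only
    // trusted server-side callers know. "X-Internal-Token" and the
    // "InternalApiToken" app setting are illustrative names, not part of any API.
    public static bool IsTrustedLocalRequest(HttpRequest request)
    {
        if (!request.IsLocal)
            return false;

        string expected = ConfigurationManager.AppSettings["InternalApiToken"];
        string supplied = request.Headers["X-Internal-Token"];

        return !string.IsNullOrEmpty(expected) && supplied == expected;
    }
}

A forwarding feature like the header viewer above would not know the secret, so it could no longer satisfy the check on DeleteAllUsers.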

You should not put this code on a production server, for the reasons mentioned in the other answers.
However, you could do
#if DEBUG
if (Request.IsLocal)
{
FormsAuthentication.SetAuthCookie("testUser", false);
}
else
{
#endif
FormsAuthentication.SetAuthCookie(/*EncryptedCookieValue*/, false);
#if DEBUG
}
#endif
On your development box, run a Debug build. In production, deploy a Release build.

Determining the remote IP is tricky and depends on configuring the server correctly.
For example, a misconfigured server might take the IP from X-Forwarded-For, a header that can be set to anything by the client. But behind a reverse proxy, the socket address is the proxy's own IP, and the X-Forwarded-For value the proxy sets is the correct way to determine the client IP.
Using the IP from the socket can be wrong as well: consider a reverse proxy running on the same machine as the webserver, which makes every request look local.
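To make that concrete, here is a hedged sketch of resolving the client IP only behind a known reverse proxy (the proxy address and helper name are illustrative assumptions):

using System.Collections.Generic;
using System.Web;

public static class ClientIpResolver
{
    // Assumption: a single reverse proxy at a known address rewrites/appends
    // X-Forwarded-For. Adjust to your topology.
    private static readonly HashSet<string> TrustedProxies =
        new HashSet<string> { "10.0.0.5" };

    public static string GetClientIp(HttpRequest request)
    {
        string socketAddress = request.UserHostAddress;            // IP from the TCP connection
        string forwardedFor = request.Headers["X-Forwarded-For"];  // client-controlled unless a trusted proxy overwrites it

        if (!string.IsNullOrEmpty(forwardedFor) && TrustedProxies.Contains(socketAddress))
        {
            // Use the last entry, i.e. the one appended by the trusted proxy;
            // earlier entries can be forged by the client.
            string[] hops = forwardedFor.Split(',');
            return hops[hops.Length - 1].Trim();
        }

        return socketAddress;
    }
}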
=> If possible use a different authentication mechanism

Related

Access to folders on a file share based on the authenticated user

I have an ASP.NET web application with forms authentication, and users' credentials are checked against Active Directory; the username is actually the samAccountName attribute from AD.
Now I need to enable users to get access to some files which are located on file share, where each user has his own folder.
First proof of concept works like this:
appPool in IIS is configured to run under some domain user, and this user was given R/W access to file share and all user folders
when the user logs into web app only content of the folder on the path "\\myFileServer\username" is visible to him. And same when uploading files they get stored to "\\myFileServer\username".
While this works, it doesn't seem secure at all. The first issue is that the user the application pool runs under has access to the folders of all users. An even bigger concern is that the username alone determines which folder you have access to.
So my question is: what is the correct/better way of doing this? I was reading about impersonating the user, but if I understood correctly this is not advised anymore? And I can't use Windows authentication since the web application must be accessible from the internet.
I recommend not running the application under a user account, but creating an application-specific account under which it runs with the proper R/W rights, and separating the person who grants these rights from the development team.
Within the application's authentication: after you receive a GET/POST request, you can work out the path to which the current user wants to read/write data and cross-reference it with the path the user is authorized to read/write from. If these don't match, return a 401 Unauthorized response; otherwise, carry out the operation as you do now (see the sketch below).
If your endpoints are protected properly, and the application runs under its own account, I don't see any harm in the setup itself. It still, however, gives the developers a way, through the application, to indirectly access other users' files. Based on how tight these checks must be, you could add additional controls (like only allowing the application to connect from the production server, and only allowing server transport in a controlled way).
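As a sketch of the check described above (the share root and per-user folder layout are taken from the question; the helper name is made up):

using System;
using System.IO;
using System.Web;

public static class UserFolderAuthorization
{
    private const string ShareRoot = @"\\myFileServer";

    // True only if the requested path resolves to somewhere inside the folder
    // belonging to the currently authenticated user.
    public static bool IsPathAllowed(HttpContext context, string requestedPath)
    {
        string userName = context.User.Identity.Name; // samAccountName per the question
        string userRoot = Path.GetFullPath(Path.Combine(ShareRoot, userName));

        // GetFullPath collapses "..\" segments, so a traversal attempt such as
        // \\myFileServer\alice\..\bob is caught by the prefix check.
        string resolved = Path.GetFullPath(requestedPath);

        return resolved.Equals(userRoot, StringComparison.OrdinalIgnoreCase)
            || resolved.StartsWith(userRoot + Path.DirectorySeparatorChar,
                                   StringComparison.OrdinalIgnoreCase);
    }
}

The endpoint can then return the 401 mentioned above whenever IsPathAllowed returns false.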
From the description of your problem, I think custom HttpHandlers are the right choice for you. You didn't mention what type of files will be present in your folder; for brevity I will assume it contains PDF files.
Since you mentioned that your application will have different users, you need to use the .NET built-in authentication manager and role provider. With a simple security framework set up, we'll place a PDF file in the web application, behind a web.config-protected folder, then create a custom HTTP handler to restrict access to the static document to only those users who should be allowed to view it.
A sample HTTP Handler:
public class FileProtectionHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        switch (context.Request.HttpMethod)
        {
            case "GET":
            {
                // Is the user logged in?
                if (!context.User.Identity.IsAuthenticated)
                {
                    FormsAuthentication.RedirectToLoginPage();
                    return;
                }

                string requestedFile =
                    context.Server.MapPath(context.Request.FilePath);

                // Verify the user has access to the User role.
                if (context.User.IsInRole("User"))
                {
                    SendContentTypeAndFile(context, requestedFile);
                }
                else
                {
                    // Deny access, redirect to an error page or back to the login page.
                    context.Response.Redirect("~/User/AccessDenied.aspx");
                }
                break;
            }
        }
    }
Method SendContentTypeAndFile (these members remain inside the same handler class):
    private HttpContext SendContentTypeAndFile(HttpContext context, String strFile)
    {
        context.Response.ContentType = GetContentType(strFile);
        context.Response.TransmitFile(strFile);
        context.Response.End();
        return context;
    }

    private string GetContentType(string filename)
    {
        // used to set the content type of the response stream
        string res = null;
        FileInfo fileinfo = new FileInfo(filename);
        if (fileinfo.Exists)
        {
            switch (fileinfo.Extension.Remove(0, 1).ToLower())
            {
                case "pdf":
                {
                    res = "application/pdf";
                    break;
                }
            }
            return res;
        }
        return null;
    }

    // Required by IHttpHandler
    public bool IsReusable
    {
        get { return false; }
    }
}
The last step is to register this HTTP handler in web.config, and you can see more info here.
Here is the complete source code.
Your architecture (and assumptions) seem fine for a low/mid security level, but if the nature of your data is very sensitive (medical, etc.) my biggest security concern would be controlling user sessions.
If you're using forms authentication then you're storing the authenticated identity in a cookie or in a token (or, if you're using sticky sessions, you're sending the session id, but for this purpose it's the same). The problem arises if user B has physical access to the machine where user A works. If user A leaves his workplace (for a while or forever) and doesn't explicitly close his session in your web app, his identity is left lying around, at least until his cookie/token expires, and user B can use it, since the ASP.NET identity system hasn't performed a sign-out.
The problem is even worse if you use tokens for authorization, because in all the infamous Microsoft implementations of the identity system you're responsible for providing a way to invalidate such tokens (and make them disappear from the client machine) when the user signs out; otherwise they stay valid until they expire. This can be addressed (though not completely, and therefore not very satisfactorily for high security requirements) by issuing short-lived refresh tokens, but that's another story, and I don't know if it's your case. If you go with cookies, then when user A signs out his cookie is invalidated and removed from the request/response cycle, so the problem is mitigated. In any case you should ensure that your users close their sessions in your web app, and/or configure the cookies with short lifetimes or short sliding expirations (a minimal sign-out sketch follows at the end of this answer).
Other security concerns may be related to CSRF, which you can prevent using the anti-forgery token infrastructure of ASP.NET, but that kind of attack is very far from the typical user (I don't know anything about the nature of your users, or whether your app is exposed publicly on the internet or only accessible on an intranet). If you do worry about such specialised attacks and have such sensitive data, maybe you should go with something more complex than forms authentication (two-factor, biometrics, etc.).
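As a small illustration of the explicit sign-out recommended above (standard Forms Authentication calls only, nothing specific to your application):

using System;
using System.Web;
using System.Web.Security;

public static class Logout
{
    // Explicitly end the session and drop the forms-auth cookie so that a
    // shared machine is not left holding a usable identity.
    public static void SignOut(HttpContext context)
    {
        FormsAuthentication.SignOut();   // removes the authentication ticket cookie
        context.Session.Abandon();       // discards server-side session state

        // Belt and braces: overwrite the cookie with an already expired one.
        var expired = new HttpCookie(FormsAuthentication.FormsCookieName, string.Empty)
        {
            Expires = DateTime.Now.AddYears(-1)
        };
        context.Response.Cookies.Add(expired);
    }
}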

IIS Conditional authentication with the Windows Authentication module

I have an unusual situation in which authentication isn't necessary but where learning the user-id via windows authentication under certain conditions would be useful.
To give some context, I want to be able to require windows authentication when the user-agent matches certain conditions, but not require authentication in other conditions. With only some familiarity with asp.net and iis I suspect I am missing an easy way of accomplishing this. So far I've looked into writing a module that checks the user-agent and then adds the WindowsAuthenticationModule if the conditions are met - but I can't figure out how to do this.
Any suggestions on the best way to require or skip authentication based on the value of the user-agent?
If you set up IIS to use Windows authentication, you should be able to do something like the following code snippet.
However, as you may guess from the comments in the code, I would advise against it. The User Agent is easily spoofed - so any authentication checks you do based on it can also easily be bypassed. The same holds true for pretty much anything that comes across in an http header (e.g., basing authentication on http referrer is also a bad idea).
string windowsUserName = null;
var currentContext = System.Web.HttpContext.Current;

// NOT SECURE - easily spoofed!
if (currentContext.Request.UserAgent == "Some special user agent")
{
    if (!currentContext.User.Identity.IsAuthenticated
        || currentContext.User.Identity.AuthenticationType != "Windows")
    {
        throw new SecurityException(@"You are not authorized, but you can easily
hack this application by modifying the user agent that you send to the server.");
    }
    windowsUserName = currentContext.User.Identity.Name;
}
So in short, even if the above works, don't do it. You really need to completely rethink how you are authenticating your application.
If, as you seem to indicate in the first sentence of your question, this is purely informational, then it may be ok (e.g. if it is just for debugging purposes). However, it would not be suitable e.g. for auditing or restricting access to any resources, and you must be extremely careful that this code doesn't get reused in any real security context.

How to get HTTP 100 Continue to work for WebDAV on embedded Grizzly?

I am using the Milton WebDAV server (1.6.8) with an embedded Grizzly servlet container (2.1.7), and in their default configuration, PUT requests (at least as issued by Cyberduck) do not work. I have tracked the issue down to a problem with how HTTP 100 Continue is handled (it apparently also affects Jetty), a message on the Milton mailing list and bug tracker says it is the fault of the servlet container, which tries to be clever with "transparent expect/continue handling".
Yes, containers which transparently handle expect/continue effectively break HTTP security for WebDAV. HTTP uses a challenge/response security model and many clients rely on that. I.e. when doing a PUT they will simply do an unauthenticated PUT and rely on Expect: Continue to ensure that the challenge is issued before the file is uploaded.
But with transparent handling of Expect: Continue the entire file gets uploaded before the Milton API is able to check whether the current user is authenticated and authorised to perform the action.
Depending on your supported clients and your use cases this can either be wholly unacceptable, a nuisance, or not an issue at all.
But, generally, I think you should try to find out if Grizzly's transparent handling can be disabled, and then re-enable support in Milton.
What can I do to disable Grizzly's transparent expect/continue handling, and is this really correct approach? The alternative would be to turn off expect/continue handling in Milton, but that seems to break WebDAV authentication.
Update: I also tried Jetty now (8.1.0.RC1), and it exhibits the same behaviour as Grizzly: only with expect/continue handling turned off can I PUT files, with the default settings it does not work.
Regarding Grizzly 2.x, you need to override the sendAcknowledgment method in your ServletHandler, like the following:
class MyServletHandler extends ServletHandler
{
    @Override
    protected boolean sendAcknowledgment(final Request request,
                                         final Response response)
            throws IOException
    {
        // Only acknowledge the Expect: 100-continue once the client has
        // authenticated; otherwise fail the expectation.
        if (authClient(request, response))
        {
            return super.sendAcknowledgment(request, response);
        }
        else
        {
            response.setStatus(HttpStatus.EXPECTATION_FAILED_417);
            return false;
        }
    }
}
Hope it will help.
Note that whether or not the transparent expect/continue handling is a problem depends on whether your targeted client applications use expect/continue authentication or not.
I haven't researched this in too much detail yet, so I can't say with certainty which containers do transparent handling and whether or not it can be disabled, or what client applications require it.
Might be good if someone from Grizzly or Tomcat could comment on options for disabling the container handling.

File permissions with FileSystemObject - CScript.exe says one thing, Classic ASP says another

I have a classic ASP page - written in JScript - that's using Scripting.FileSystemObject to save files to a network share - and it's not working. ("Permission denied")
The ASP page is running under IIS using Windows authentication, with impersonation enabled.
If I run the following block of code locally via CScript.exe:
var objNet = new ActiveXObject("WScript.Network");
WScript.Echo(objNet.ComputerName);
WScript.Echo(objNet.UserName);
WScript.Echo(objNet.UserDomain);
var fso = new ActiveXObject("Scripting.FileSystemObject");
var path = "\\\\myserver\\my_share\\some_path";
if (fso.FolderExists(path)) {
WScript.Echo("Yes");
} else {
WScript.Echo("No");
}
I get the (expected) output:
MY_COMPUTER
dylan.beattie
MYDOMAIN
Yes
If I run the same code as part of a .ASP page, substituting Response.Write for WScript.Echo I get this output:
MY_COMPUTER
dylan.beattie
MYDOMAIN
No
Now - my understanding is that the WScript.Network object will retrieve the current security credentials of the thread that's actually running the code. If this is correct - then why is the same user, on the same domain, getting different results from CScript.exe vs ASP? If my ASP code is running as dylan.beattie, then why can't I see the network share? And if it's not running as dylan.beattie, why does WScript.Network think it is?
Your problem is clear. In the current implementation you have only impersonation of users and no delegation. I don't want to repeat information already written by Stephen Martin; I only want to add at least three solutions. The classical way of delegation which Stephen Martin suggests is only one way. You can read about some more ways here: http://msdn.microsoft.com/en-us/library/ff647404.aspx#paght000023_delegation. I see three practical ways of solving your problem:
1. Convert the impersonation token of the user to a token with the delegation level of impersonation, or to a new primary token. You can do this with DuplicateToken or DuplicateTokenEx.
2. Use S4U2Self (see http://msdn.microsoft.com/en-us/magazine/cc188757.aspx and http://msdn.microsoft.com/en-us/library/ms998355.aspx) to receive a new token from the old one with one simple .NET statement: WindowsIdentity wi = new WindowsIdentity(identity);
3. Access the other server under one fixed account. It can be a computer account, an account of the IIS application pool, or another fixed account defined solely for access to the file system.
It is important to know which version of Windows Server you have on the machine where IIS runs, and which domain functional level you have in Active Directory for your domain (you can see this in the "Active Directory Domains and Trusts" tool if you select your domain and choose "Raise Domain Functional Level"). It is also interesting to know under which account the IIS application pool runs.
The first and the third ways will always work. The third way can be bad for your environment and for the current permissions in the file system. The second one is very elegant: it allows you to control which servers (file servers) are accessed from IIS; a small sketch of it follows below. This way has some restrictions and needs some work to be done in Active Directory.
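For illustration, the second way (S4U2Self) boils down to something like this inside a small .NET component (a sketch only: the UPN is hypothetical, and the resulting token is usable against the file server only if protocol transition/constrained delegation has been configured in Active Directory):

using System;
using System.Security.Principal;

public static class S4UHelper
{
    // Obtain a token for the named user via S4U2Self and impersonate it
    // while touching the file share.
    public static void RunAsUser(string userPrincipalName, Action fileWork)
    {
        // e.g. "dylan.beattie@mydomain.local" (illustrative UPN)
        WindowsIdentity identity = new WindowsIdentity(userPrincipalName);
        using (WindowsImpersonationContext ctx = identity.Impersonate())
        {
            fileWork(); // access \\myserver\my_share here
        } // reverts to the previous identity when the context is disposed
    }
}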
Because you use classic ASP, a small scriptable software component must be created to support your implementation.
Which way do you prefer?
UPDATED based on the question from the comment: Because you use classic ASP you cannot use the Win32 API directly, but you can write a small COM component in VB6 or in .NET which uses the APIs that you need. As an example you can use the code from http://support.microsoft.com/kb/248187/en, but you should do some other things inside. So I'll explain now which Win32 APIs can help you do everything you need with tokens and impersonation.
First of all, a small explanation of impersonation. It works very simply. There is always one primary token under which the process runs. Another token (a thread token) can be assigned to any thread. To do this one needs a token of a user, hUserToken, and a call to ImpersonateLoggedOnUser(hUserToken);.
To go back to the original process token (for the current thread only) you can call the RevertToSelf() function. The user's token has already been obtained and impersonated for you by IIS, because that is how you configured your web site. To go back to the original process token you should implement a call to RevertToSelf() in your custom COM component. If you need to do nothing more in the ASP page, that is probably enough, but I recommend you be more careful and save the current user's token in a variable before the file operations. Then you perform all operations with the file system and at the end reassign the user's token back to the current thread. You can assign an impersonation token to a thread with SetThreadToken(NULL, hUserToken);. To get (save) the current thread token (the user token in your case) you can use the OpenThreadToken API. It must work.
UPDATED 2: Probably calling the RevertToSelf() function at the end of one ASP page would already be OK for you. The corresponding C# code could look like this:
Create a new project in C# of the type "Class Library" with the name LoginAdmin. Paste the following code inside:
using System;
using System.Runtime.InteropServices;

namespace LoginAdmin {

    [InterfaceTypeAttribute (ComInterfaceType.InterfaceIsDual)]
    public interface IUserImpersonate {
        [DispId (1)]
        bool RevertToSelf ();
    }

    internal static class NativeMethods {
        [DllImport ("advapi32.dll", SetLastError = true)]
        internal static extern bool RevertToSelf ();
    }

    [ClassInterface (ClassInterfaceType.AutoDual)]
    public class UserImpersonate : IUserImpersonate {
        public UserImpersonate () { }

        public bool RevertToSelf () {
            return NativeMethods.RevertToSelf ();
        }
    }
}
Check "Register for COM interop" in the "Build" section of the project properties. In the "Signing" section, check "Sign the assembly", and under "Choose a strong name key file" choose <New...>, then type any filename and password (or clear "Protect my key file with a password"). Finally, you should modify a line in AssemblyInfo.cs in the Properties part of the project:
[assembly: ComVisible (true)]
After compiling this project you get two files, LoginAdmin.dll and LoginAdmin.tlb. The DLL is already registered on the current computer. To register it on another computer use RegAsm.exe.
To test this COM DLL in an ASP page you can do the following:
<%@ Language="javascript" %>
<html><body>
<% var objNet = Server.CreateObject("WScript.Network");
Response.Write("Current user: ");Response.Write(objNet.UserName);Response.Write("<br/>");
Response.Write("Current user's domain: ");Response.Write(objNet.UserDomain);Response.Write("<br/>");
var objLoginAdmin = Server.CreateObject("LoginAdmin.UserImpersonate");
var isOK = objLoginAdmin.RevertToSelf();
if (isOK)
Response.Write("RevertToSelf return true<br/>");
else
Response.Write("RevertToSelf return false<br/>");
Response.Write("One more time after RevertToSelf()<br/>");
Response.Write("Current user: ");Response.Write(objNet.UserName);Response.Write("<br/>");
Response.Write("Current user's domain: ");Response.Write(objNet.UserDomain);Response.Write("<br/>");
var fso = Server.CreateObject("Scripting.FileSystemObject");
var path = "\\\\mk01\\C\\Oleg";
if (fso.FolderExists(path)) {
Response.Write("Yes");
} else {
Response.Write("No");
}%>
</body></html>
If the account used to run the IIS application pool has access to the corresponding network share, the output will look like the following:
Current user: Oleg
Current user's domain: WORKGROUP
RevertToSelf return true
One more time after RevertToSelf()
Current user: DefaultAppPool
Current user's domain: WORKGROUP
Yes
Under impersonation you can only access securable resources on the local computer; you cannot access anything over the network.
On Windows when you are running as an impersonated user you are running under what is called a Network token. This token has the user's credentials for local computer access but has no credentials for remote access. So when you access the network share you are actually accessing it as the Anonymous user.
When you are running a process on your desktop (like CScript.exe) then you are running under an Interactive User token. This token has full credentials for both local and remote access, so you are able to access the network share.
In order to access remote resources while impersonating a Windows user you must use delegation rather than impersonation. This will involve some changes to your Active Directory to allow delegation for the computer and/or the users in your domain. This can be a security risk, so it should be reviewed carefully.

Is there an HTTP cache suited for developing against API-limited web services?

At some point I'm going to want to run my application against something like the real web service. The web service has an API call limit that I could see hitting. I considered serializing out some JSON files manually, but it seems like this would basically be caching the hard way.
Is there an HTTP cache I could run on my local machine which would aggressively (until I manually reset it) cache requests to a certain site?
You say "cache" but I think you really mean "filter" or "proxy". The first solution that comes to mind is the iptables system which can be used, with -limit and -hitcount rules to drop packets to the webserver after some threshold. I won't even pretend to be competent at iptables configuration.
The second course might be a web proxy like Squid using its delay pool mechanism. Expect a learning curve there as well.
I've built a proxy server that handles development requests and ensures that API calls aren't hammered during testing. This is how I do it with my ASP.NET MVC proxy:
public ActionResult ProxyRequest(string url, string request)
{
    string cachedResponse = (string)Cache[url];
    if (cachedResponse != null)
    {
        return Content(cachedResponse, "application/json");
    }
    else
    {
        // make the HTTP request here (WebClient used just as an example)
        string response = new System.Net.WebClient().DownloadString(url);
        Cache[url] = response;
        return Content(response, "application/json");
    }
}
I'm not on my development box right now so I'm doing this off the top of my head, but the concept is the same. Instead of Cache[url] = response I use the Cache.Insert method, but that has a lot of parameters I couldn't remember, so I got lazy and built a wrapper class around it (the exact call is sketched below).
This setup proxies all of my JSON requests. I have a var isDevelopment = true (|| false) flag in my JS code and use it to decide whether to proxy the request or hit the server directly.
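For reference, the Cache.Insert overload alluded to above takes a dependency plus absolute and sliding expirations; a hedged example of pinning an entry until it is manually removed (the wrapper name is made up) might look like this:

using System.Web;
using System.Web.Caching;

public static class ProxyCache
{
    // Keep the cached API response until it is explicitly removed (no absolute
    // or sliding expiration), matching the "cache until I manually reset it"
    // requirement from the question.
    public static void Store(string url, string json)
    {
        HttpRuntime.Cache.Insert(
            url,                           // key
            json,                          // cached response body
            null,                          // no CacheDependency
            Cache.NoAbsoluteExpiration,
            Cache.NoSlidingExpiration);
    }

    public static string TryGet(string url)
    {
        return HttpRuntime.Cache[url] as string;
    }
}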
