web.config in remote virtual directory - iis-7

Based on this SO answer: https://stackoverflow.com/a/2066040/458354 I have a web.config, placed in my static-file subdirectory, that defines a custom HttpHandler for all files in that subdirectory. This works perfectly when the directory, even when configured as virtual, is on the same server as IIS. However, if the virtual directory points to a shared folder on another server, I receive this error when accessing a static resource: "An error occurred loading a configuration file: Invalid file name for file monitoring: '\\remoteserver\remotedir\web.config'."
I've even granted the web server's IIS_IUSRS group permissions on the remote directory. I suspect the problem is that the config path is a UNC share. Any thoughts on a way to allow the virtual dir web.config to be read by IIS?
Alternatively, is there a way to configure the handler path in the site's normal web.config to cover an entire subdirectory?
Config line in question:
<add name="MyRequestHandler" type="MyApp.StaticFileRequestHandler, MyApp"
     path="*" resourceType="Either" verb="GET,HEAD" requireAccess="Read" />
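One way to scope the same handler to a subdirectory from the site's root web.config, with no child web.config at all, is a <location> element. A sketch, with "staticfiles" as a placeholder for the subdirectory name:
<location path="staticfiles">
  <system.webServer>
    <handlers>
      <add name="MyRequestHandler" type="MyApp.StaticFileRequestHandler, MyApp"
           path="*" resourceType="Either" verb="GET,HEAD" requireAccess="Read" />
    </handlers>
  </system.webServer>
</location>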

See "401 Unauthorized when custom IHttpHandler tries to read from virtual directory". Set the ApplicationPool identity to the same identity that will be used to connect to the remote virtual directory (a service account, for example).
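If you go that route, the pool identity can be set from the command line as well; a sketch with placeholder pool and account names:
appcmd set apppool "MyAppPool" /processModel.identityType:SpecificUser /processModel.userName:DOMAIN\svcAccount /processModel.password:secret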
Also, with regard to alternate ways to assign a Handler to a subdirectory: I ended up changing the application design to use ASP.NET routing (not MVC) and having the URL request specify the desired static file by an identifier (which may not always be desirable, but was already an existing practice elsewhere in our application landscape). This eliminates the need for a child web.config in the virtual directory, as the Handler can then be assigned programmatically:
public static void RegisterRoutes(RouteCollection routes)
{
    // fileId must be all digits, with at least one digit
    var routeConstraints = new RouteValueDictionary { { "fileId", @"\d{1,}" } };
    var getProtectedFileRoute = new Route("resources/files/{fileId}", new MyFileRequestRouteHandler())
    {
        Constraints = routeConstraints
    };
    routes.Add(getProtectedFileRoute);
}
The identifier is no longer a wildcard, but a RouteData parameter. (Not sure if this would work if the identifier were a filename with a 'dot' and extension...).
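MyFileRequestRouteHandler isn't shown above; a minimal sketch of such an IRouteHandler, assuming it simply hands off to the StaticFileRequestHandler from the question, could look like this:
using System.Web;
using System.Web.Routing;

public class MyFileRequestRouteHandler : IRouteHandler
{
    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        // the matched identifier is available via requestContext.RouteData.Values["fileId"]
        return new MyApp.StaticFileRequestHandler();
    }
}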

Related

Letsencrypt acme-challenge on wordpress or asp.net mvc

I have been trying without success to generate security certificates for my company using Let's Encrypt. My company uses WordPress 3.9.7 for its main website, and I am not allowed to upgrade to a newer version since that is handled by a third-party company.
The website is running on top of Internet Information Services 7.5 on Windows Server 2008 R2.
My question is: How can I make wordpress handle http://www.company.com/.well-known/acme-challenge/mftvrU2brecAXB76BsLEqW_SL_srdG3oqTQTzR5KHeA
?
I have already created a new empty page and a new template that returns exactly what Let's Encrypt is expecting, but WordPress keeps returning a 404 for that page. My guess is that the problem arises from the dot (.) at the beginning of the route (".well-known"), but I don't know how to solve that in WordPress.
I am also able to use an ASP.NET MVC website and make IIS point to that website for a while. Not a good idea though, since clients may not be able to reach our website for a few minutes, but still an option. Then the question is: how can I create a controller or a route with a dot (".") at the beginning of the name?
Help will be really appreciated.
For ASP.Net MVC or Web Forms, with certain Routing configs, you'll end up treating this URL as something for the Routing Engine to hand off to the MVC/Forms Handler, not a static file to return. The result will be a 404 or a 503. The solution is surprisingly simple:
If you haven't already, place the Challenge file:
Create the necessary dirs. .well-known is tricky mostly because Microsoft is lazy, but you can either create it from the cmdline, or name the folder .well-known. (with a trailing period) and Windows Explorer will notice the workaround and remove the trailing period for you.
Inside \.well-known\acme-challenge place the challenge file with the proper name and contents. You can go about this part any way you like; I happen to use Git Bash like echo "oo0acontents" > abcdefilename
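(If you prefer plain cmd.exe for both steps: md has no problem with the leading dot, and echo redirection works the same way. The names below reuse the placeholders above.)
md .well-known\acme-challenge
echo oo0acontents> .well-known\acme-challenge\abcdefilename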
Then make a Web.Config file in the acme-challenge dir with these contents:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <staticContent>
      <clear />
      <mimeMap fileExtension=".*" mimeType="text/json" />
    </staticContent>
    <handlers>
      <clear />
      <add name="StaticFile" path="*" verb="*" modules="StaticFileModule,DefaultDocumentModule"
           resourceType="Either" requireAccess="Read" />
    </handlers>
  </system.webServer>
</configuration>
Source: https://github.com/Lone-Coder/letsencrypt-win-simple/issues/37
Done. The file will start being returned instead of a 404/503, allowing the Challenge to complete - you can now Submit and get your domain validated.
Aside: The above code snippet sets the content-type to JSON, a historical requirement that is no longer relevant to LetsEncrypt. The current requirement is that there is no requirement - you can send a content-type of pantsless/elephants and it'll still work.
More for Asp.Net
I like to redirect all HTTP requests back to HTTPS to ensure users end up on a secure connection even if they didn't know to ask. There are a lot of easy ways to do that, until you're using LetsEncrypt - because you're going to break requests for .well-known. You can set up a static method in a class, like this:
public static class HttpsHelper
{
    public static bool AppLevelUseHttps =
#if DEBUG
        false;
#else
        true;
#endif

    public static bool Application_BeginRequest(HttpRequest Request, HttpResponse Response)
    {
        if (!AppLevelUseHttps)
            return false;

        switch (Request.Url.Scheme)
        {
            case "https":
                return false;
#if !DEBUG
            case "http":
                var reqUrl = Request.Url;
                var pathAndQuery = reqUrl.PathAndQuery;
                // Let's Encrypt exception
                if (pathAndQuery.StartsWith("/.well-known"))
                    return false;
                // http://stackoverflow.com/a/21226409/176877
                var url = "https://" + reqUrl.Host + pathAndQuery;
                Response.Redirect(url, true);
                return true;
#endif
        }
        return false;
    }
}
Now that can do a great job of redirecting to HTTPS except when LetsEncrypt comes knocking. Tie it in, in Global.asax.cs:
protected void Application_BeginRequest(object sender, EventArgs ev)
{
    HttpsHelper.Application_BeginRequest(Request, Response);
}
Notice that the bool returned is discarded here. You can use it, if you like, to decide whether to end the request/response immediately; true means end it.
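If you do want to honor it, a minimal sketch (CompleteRequest is a standard HttpApplication method):
protected void Application_BeginRequest(object sender, EventArgs ev)
{
    // if the helper issued the redirect, skip the rest of the pipeline for this request
    if (HttpsHelper.Application_BeginRequest(Request, Response))
        CompleteRequest();
}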
Finally, if you like, you can use the AppLevelUseHttps variable to turn this behavior off if need be, for example to test whether things work without HTTPS. For example, you can set it from a Web.Config value.
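For instance, a sketch that reads a hypothetical appSettings key named "UseHttps" (requires a reference to System.Configuration):
// Web.config: <appSettings><add key="UseHttps" value="true" /></appSettings>
public static bool AppLevelUseHttps =
    String.Equals(ConfigurationManager.AppSettings["UseHttps"], "true",
                  StringComparison.OrdinalIgnoreCase);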

ASP.net Identity can't login

I'm using Asp.net Identity and I've deployed my web role to Azure, and changed my connection string in Web.config so it looks like this:
<connectionStrings>
  <add name="DefaultConnection" connectionString="Server=SERVERNAME,1433;Database=DATABASE;User ID=USER;Password=PASSWORD;Trusted_Connection=False;Encrypt=True;Connection Timeout=30;MultipleActiveResultSets=True;" providerName="System.Data.SqlClient" />
</connectionStrings>
I haven't changed the default Account controller, but when I try to log in, nothing happens except that the URL changes to "/Account/Login?ReturnUrl=%2FRoutines"; the redirect to /Routines that should follow a successful login never happens (and no errors are shown).
Why is this happening? (And how can I fix it?)
EDIT
Here is the code which configures ASP.NET Identity:
public class DatabaseContext : IdentityDbContext<User>
{
    public DatabaseContext()
        : base("DefaultConnection")
    {
        Configuration.LazyLoadingEnabled = true;
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);

        modelBuilder.Entity<IdentityUser>()
            .ToTable("Users");
        modelBuilder.Entity<User>()
            .ToTable("Users");

        var usermapping = new UserMapping();
        usermapping.MapUser(modelBuilder);
    }
}
I've noticed that now I can't log in even when I'm using LocalDb, and I don't know why; I haven't changed my code. The only changes I made to the project were adding the Azure web service and an ASP.NET Web API 2 project (and when I tested locally I didn't run the Web API project). Before this, everything worked fine.
Are you sure your connection string is really the one used when you deploy (check your Web.config transform, i.e. Web.Release.config)?
In case you are, try testing the website locally (with the Azure SQL connection string) and stepping through the code for the POST version of Login. There you should be able to see what exactly is going on. You will probably need to enable access to SQL from your IP address, which is easy to do: just go to the Azure Portal, click on your SQL database, and down below select Manage, which will automatically ask for permission to allow your IP address.
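For reference, the stock Identity 1.x template's POST action looks roughly like this (the names come from the default template and may differ in your project); breakpoints at the commented lines show whether the credential lookup or the sign-in step is failing:
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Login(LoginViewModel model, string returnUrl)
{
    if (ModelState.IsValid)
    {
        var user = await UserManager.FindAsync(model.UserName, model.Password); // null means the lookup failed
        if (user != null)
        {
            await SignInAsync(user, isPersistent: false); // confirms the auth cookie gets issued
            return RedirectToLocal(returnUrl);
        }
        ModelState.AddModelError("", "Invalid username or password.");
    }
    return View(model);
}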
I don't know what exactly happened, but I fetched an older version of the solution from TFS and that helped (even though the code was the same).

How to define error-handling logic at the server level

For a website/vpath, it's possible to handle the Application_Error event to catch errors before they get sent back to the browser. Is it possible to do this at the server level somehow? That is, define a method at the root level that will execute if an error occurs in a website, but that website fails to handle the error for whatever reason.
I know you can use the web.config at the root level to define custom error messages per HTTP status code. However, this isn't ideal for my case, because I want to return different types of content (i.e., HTML or something else) depending on the application logic.
A custom HTTP module can be registered in applicationHost.config. The module is then used by all IIS applications on the target machine.
1) Create a signed class library project with an HTTP module:
public class ErrorHandlingModule : IHttpModule
{
    public void Dispose() { }

    public void Init(HttpApplication context)
    {
        context.Error += new EventHandler(context_Error);
    }

    void context_Error(object sender, EventArgs e)
    {
        // handle error
    }
}
2) Install the class library into the GAC, so it can be shared by all IIS applications.
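For example, from a Visual Studio command prompt (the assembly name matches the config fragment in step 3):
gacutil /i GlobalErrorHandler.dll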
3) Install the HTTP module into the applicationHost.config file. This file usually resides in C:\Windows\System32\inetsrv\config. Files in this folder can be accessed only by 64-bit processes (there is no such issue on 32-bit OSes); VS2010 cannot see them, but Explorer can. The applicationHost.config fragment could look like this:
<location path="" overrideMode="Allow">
<system.webServer>
<modules>
<add name="MyModule" preCondition="managedHandler" type="GlobalErrorHandler.ErrorHandlingModule, GlobalErrorHandler, Version=1.0.0.0, Culture=neutral, PublicKeyToken=bfd166351ed997df" />
I am not clear what your question is, but as per my understanding: inside Application_Error you can use Server.GetLastError() to get the last error that occurred at the server level.
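A minimal Global.asax.cs sketch of that approach:
protected void Application_Error(object sender, EventArgs e)
{
    // the unhandled exception that bubbled up to the application level
    Exception ex = Server.GetLastError();
    // log or inspect ex here; ClearError() suppresses the default error page
    Server.ClearError();
}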

Weird "The file '/DefaultWsdlHelpGenerator.aspx' does not exist" error when remapping WebService HttpHandler

I have a dynamic CMS-driven site (custom rolled, I know, wheels, etc., but not my decision!), which uses an HttpModule to direct content. I found that .asmx resources were not working. After investigation, I figured out that this was because I had essentially overridden the handler by taking the request out of the overall pipeline.
So I am now detecting whether the resource exists and is an .asmx file, and handling it accordingly - which, I think, means creating a handler via WebServiceHandlerFactory and then remapping to it.
This works fine with a ?wsdl querystring, but ask for the URI itself and you get (at the point indicated by the asterisks):
System.InvalidOperationException was unhandled by user code
  Message=Failed to handle request.
  [snip]
  InnerException: System.InvalidOperationException
    Message=Unable to handle request.
    Source=System.Web.Services
    InnerException: System.Web.HttpException
      Message=The file '/DefaultWsdlHelpGenerator.aspx' does not exist.
Note the final InnerException. This thread appears to suggest a corrupt .NET Framework install, but the file is present in the 4.0 Config folder. I suspect a mistake on my part. Am I remapping incorrectly?
public class xxxVirtualContentHttpModule : xxxHttpModule
{
    protected override void OnBeginRequest(IxxxContextProvider cmsContext, HttpContext httpContext)
    {
        string resolvePath = httpContext.Request.Url.AbsolutePath;

        // is path a physical file?
        IRootPathResolver rootPathResolver = new HttpServerRootPathResolver(httpContext.Server);
        string serverPath = rootPathResolver.ResolveRoot("~" + resolvePath);
        if (File.Exists(serverPath))
        {
            if (Path.GetExtension(serverPath).Equals(".asmx", StringComparison.CurrentCultureIgnoreCase))
            {
                WebServiceHandlerFactory webServiceHandlerFactory = new WebServiceHandlerFactory();
                IHttpHandler webServiceHttpHandler =
                    webServiceHandlerFactory.GetHandler(httpContext, "Get", resolvePath, serverPath); // *****
                httpContext.RemapHandler(webServiceHttpHandler);
            }
        }
    }
}
Update
I have removed all references to the HttpModules and this issue still occurs, meaning it has nothing to do with the CMS portion of the system.
Solved it.
There seems to be a new configuration added to web.config:
<system.web>
  <webServices>
    <wsdlHelpGenerator href="DefaultWsdlHelpGenerator.aspx" />
  </webServices>
</system.web>
Removed this, and it all works.

Stopping cookies being set from a domain (aka "cookieless domain") to increase site performance

I was reading in Google's documentation about improving site speed. One of their recommendations is serving static content (images, css, js, etc.) from a "cookieless domain":
Static content, such as images, JS and CSS files, don't need to be accompanied by cookies, as there is no user interaction with these resources. You can decrease request latency by serving static resources from a domain that doesn't serve cookies.
Google then says that the best way to do this is to buy a new domain and set it to point to your current one:
To reserve a cookieless domain for serving static content, register a new domain name and configure your DNS database with a CNAME record that points the new domain to your existing domain A record. Configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain. In your web pages, reference the domain name in the URLs for the static resources.
This is pretty straightforward stuff, except for the bit where it says to "configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain". From what I've read, there's no setting in IIS that lets you say "serve static resources", so how do I prevent ASP.NET from setting cookies on this new domain?
At present, even if I'm just requesting a .jpg from the new domain, it sets a cookie in my browser, even though our application's cookies are scoped to our old domain. For example, ASP.NET sets an ".ASPXANONYMOUS" cookie that (as far as I'm aware) we're not telling it to set.
Apologies if this is a real newb question, I'm new at this!
Thanks.
This is how I've done it on my website:
1) Set up a website on IIS with an ASP.NET application pool, and set the binding host to your.domain.com. Note: you cannot use domain.com, or else the sub-domain will not be cookieless.
2) Create a folder on the website called Static.
3) Set up another website and point it to the Static folder created earlier. Set the binding host to static.domain.com, use an application pool with unmanaged code ("No Managed Code"), and in the Session State settings check "Not enabled".
Now you have a static website. To set it up, open the web.config file under the Static folder and replace it with this one:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<system.web>
<sessionState mode="Off" />
<pages enableSessionState="false" validateRequest="false" />
<roleManager>
<providers>
<remove name="AspNetWindowsTokenRoleProvider" />
</providers>
</roleManager>
</system.web>
<system.webServer>
<staticContent>
<clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="30.00:00:00" />
</staticContent>
<httpProtocol>
<customHeaders>
<remove name="X-Powered-By" />
</customHeaders>
</httpProtocol>
</system.webServer>
</configuration>
This is going to cache the files for 30 days, remove a RoleManager provider (I don't know whether it changes anything, but I removed all I could find), and remove an item from the response headers.
But here is a problem: your content will be cached even when a new version is deployed. To avoid this, I made a helper method for MVC. Basically you have to append some query string that changes every time you change these files:
default.css?v=1 ?v=2 ...
My MVC method gets the last write date and appends it to the file URL:
public static string GetContent(this UrlHelper url, string link)
{
    link = link.ToLower();

    // last write date ticks to hex
    var cacheBreaker = Convert.ToString(
        File.GetLastWriteTimeUtc(url.RequestContext.HttpContext.Request.MapPath(link)).Ticks, 16);

    // the static folder lives inside the website folders, but instead of
    // www.domain.com/static/default.css I convert it to
    // static.domain.com/default.css
    if (link.StartsWith("~/static", StringComparison.InvariantCultureIgnoreCase))
    {
        var host = url.RequestContext.HttpContext.Request.Url.Host;
        host = String.Format("static.{0}", host.Substring(host.IndexOf('.') + 1));
        link = String.Format("http://{0}/{1}", host, link.Substring(9)); // strip "~/static/"
        // returns the file URL in the static domain
        return String.Format("{0}?v={1}", link, cacheBreaker);
    }

    // returns the file URL in the normal domain
    return String.Format("{0}?v={1}", url.Content(link), cacheBreaker);
}
And to use it (MVC3 Razor):
<link href="@Url.GetContent("~/static/default.css")" rel="stylesheet" type="text/css" />
If you are using another kind of application, you can do the same: make a method that appends an HtmlLink to the page.
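For example, a Web Forms sketch of the same idea; AddStaticCss is a hypothetical helper called from a Page, and HtmlLink is System.Web.UI.HtmlControls.HtmlLink:
public static void AddStaticCss(Page page, string relativePath)
{
    // same cache-breaker trick: last write time, in hex, appended as a query string
    string ticks = Convert.ToString(File.GetLastWriteTimeUtc(page.MapPath(relativePath)).Ticks, 16);
    var link = new HtmlLink { Href = relativePath + "?v=" + ticks };
    link.Attributes["rel"] = "stylesheet";
    link.Attributes["type"] = "text/css";
    page.Header.Controls.Add(link);
}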
If you don't write cookies from a domain, the domain will be cookieless.
When the domain is set to host only resource content like scripts, images, etc., those are requested by plain HTTP GET requests from browsers, and the content should be served as-is. This is what makes your domain cookieless. It isn't something a web-server setting can enforce: HTTP is completely stateless, and cookies are written or sent to clients via server-side scripts. The best you can do is disable ASP.NET, classic ASP, or PHP script capabilities on the IIS application.
The way we do it: we have a sub-domain set up to serve cookieless resources, so we host all our images and scripts on the sub-domain and, from the primary application, we just reference each resource by its URL. We make sure the sub-domain remains cookie-free by not serving any dynamic script on that domain and by not creating any ASP.NET or PHP sessions.
http://cf.mydomain.com/resources/images/*.images
http://cf.mydomain.com/resources/scripts/*.scripts
http://cf.mydomain.com/resources/styles/*.styles
From the primary domain we just refer to a resource as follows:
<img src="http://cf.mydomain.com/resources/images/logo.png" />
Serving resources from cookieless domains is a great technique. If you have more than five combined images/stylesheets/javascript files, its benefit is noticeable, and it's a net gain even with the extra DNS lookup. It's also very easy to implement :). Here's how you can easily set it up in web.config [system.web] and have a completely cookieless subdomain (unless it gets cookie-infested by Google Analytics, but that's easily curable as well) :)
<!-- anonymousIdentification configuration:
     enabled="[true|false]"                              Feature is enabled?
     cookieName=".ASPXANONYMOUS"                         Cookie name
     cookieTimeout="100000"                              Cookie timeout in minutes
     cookiePath="/"                                      Cookie path
     cookieRequireSSL="[true|false]"                     Set Secure bit in cookie
     cookieSlidingExpiration="[true|false]"              Reissue expiring cookies?
     cookieProtection="[None|Validation|Encryption|All]" How to protect cookies from being read/tampered with
     domain="[domain]"                                   Enables output of the "domain" cookie attribute set to the specified value
-->
To give you an example:
<anonymousIdentification enabled="true" cookieName=".ASPXANONYMOUS" cookieTimeout="100000" cookiePath="/" cookieRequireSSL="false" cookieSlidingExpiration="true" cookieProtection="None" domain="www.domain." />
This will set the .ASPXANONYMOUS cookie only on www.domain.anyTLD but not on myStatic.domain.anyTLD ... no need to create new pools and stuff :).
If you aren't using that cookie in any way, you could just disable session state in IIS 6:
http://support.microsoft.com/kb/244465
In IIS, go to the Home Directory tab, then click the "Configuration" button.
Next, go to the Options tab and un-check "Enable session state". The cookie will go away, and you can leave your files where they are, with no need for an extra domain or sub-domain.
Plus, by using additional domains you increase DNS lookups, which partially defeats the intent of the overall optimization.
