I have been trying without success to generate security certificates for my company using Let's Encrypt. My company uses WordPress 3.9.7 for its main website, and I am not allowed to upgrade to a newer version since that is handled by a third-party company.
The website is running on top of Internet Information Services 7.5 on Windows Server 2008 R2.
My question is: how can I make WordPress handle http://www.company.com/.well-known/acme-challenge/mftvrU2brecAXB76BsLEqW_SL_srdG3oqTQTzR5KHeA?
I have already created a new empty page and a new template that returns exactly what Let's Encrypt is expecting, but WordPress keeps returning a 404 for that page. My guess is that the problem arises from the dot (.) at the beginning of the route (".well-known"), but I don't know how to solve that in WordPress.
I am also able to use an ASP.NET MVC website and make IIS point to that website for a while. Not a good idea, though, since clients may not be able to reach our website for a few minutes, but still an option. Then the question is: how can I create a controller or a route with a dot (".") at the beginning of the name?
Any help would be really appreciated.
For ASP.NET MVC or Web Forms, with certain routing configs, you'll end up treating this URL as something for the routing engine to hand off to the MVC/Forms handler, not a static file to return. The result will be a 404 or a 503. The solution is surprisingly simple:
If you haven't already, place the Challenge file:
Create the necessary dirs. .well-known is tricky mostly because Microsoft is lazy - you can either create it from the command line, or name the folder .well-known. (with a trailing period) and Windows Explorer will notice the workaround and remove the trailing period for you.
Inside \.well-known\acme-challenge, place the challenge file with the proper name and contents. You can go about this part any way you like; I happen to use Git Bash, like echo "oo0acontents" > abcdefilename
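For example, the whole thing from a plain cmd prompt might look like this (the site root path, token filename, and contents here are placeholders - use the values your ACME client gives you):

cd /d C:\inetpub\wwwroot
mkdir .well-known\acme-challenge
REM no space before > so the file doesn't pick up a trailing space
echo oo0acontents> .well-known\acme-challenge\abcdefilename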
Then make a Web.Config file in the acme-challenge dir with these contents:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <staticContent>
      <clear />
      <mimeMap fileExtension=".*" mimeType="text/json" />
    </staticContent>
    <handlers>
      <clear />
      <add name="StaticFile" path="*" verb="*" modules="StaticFileModule,DefaultDocumentModule"
           resourceType="Either" requireAccess="Read" />
    </handlers>
  </system.webServer>
</configuration>
Source: https://github.com/Lone-Coder/letsencrypt-win-simple/issues/37
Done. The file will start being returned instead of a 404/503, allowing the challenge to complete - you can now submit and get your domain validated.
Aside: the above snippet sets the content type to JSON, a historical requirement that is no longer relevant to Let's Encrypt. The current requirement is that there is no requirement - you can send a content type of pantsless/elephants and it'll still work.
More for ASP.NET
I like to redirect all HTTP requests back to HTTPS to ensure users end up on a secure connection even if they didn't know to ask. There are a lot of easy ways to do that - until you're using Let's Encrypt, because you're going to break requests for .well-known. You can set up a static method in a class, like this:
public static class HttpsHelper
{
    public static bool AppLevelUseHttps =
#if DEBUG
        false;
#else
        true;
#endif

    public static bool Application_BeginRequest(HttpRequest Request, HttpResponse Response)
    {
        if (!AppLevelUseHttps)
            return false;

        switch (Request.Url.Scheme)
        {
            case "https":
                return false;
#if !DEBUG
            case "http":
                var reqUrl = Request.Url;
                var pathAndQuery = reqUrl.PathAndQuery;
                // Let's Encrypt exception
                if (pathAndQuery.StartsWith("/.well-known"))
                    return false;
                // http://stackoverflow.com/a/21226409/176877
                var url = "https://" + reqUrl.Host + pathAndQuery;
                Response.Redirect(url, true);
                return true;
#endif
        }
        return false;
    }
}
Now that does a great job of redirecting to HTTPS, except when Let's Encrypt comes knocking. Tie it in in Global.asax.cs:
protected void Application_BeginRequest(object sender, EventArgs ev)
{
    HttpsHelper.Application_BeginRequest(Request, Response);
}
Notice that the returned bool is discarded here. You can use it, if you like, to decide whether to end the request/response immediately - true meaning end it.
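If you do want to act on it, a minimal sketch (CompleteRequest is the standard HttpApplication method for ending the pipeline without the ThreadAbortException that Response.End throws):

protected void Application_BeginRequest(object sender, EventArgs ev)
{
    // If the helper redirected, skip the rest of the pipeline for this request
    if (HttpsHelper.Application_BeginRequest(Request, Response))
        CompleteRequest();
}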
Finally, if you like, you can use the AppLevelUseHttps variable to turn this behavior off if need be, for example to test whether things are working without HTTPS. You could, for instance, set it to the value of a Web.Config variable.
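A minimal sketch of that, assuming a "UseHttps" key you add to <appSettings> yourself (the key name is made up here):

// Somewhere at startup, e.g. Application_Start.
// Defaults to true when the key is missing or unparseable.
bool useHttps;
var raw = System.Configuration.ConfigurationManager.AppSettings["UseHttps"];
HttpsHelper.AppLevelUseHttps = bool.TryParse(raw, out useHttps) ? useHttps : true;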
Related
We just had an external pen test, and all of our sites are coming back with a low warning stating that we allow cross-site scripting.
I don't think this is actually the case, since we had to specifically allow it on one page of one specific site for that one to work.
The report shows that when calling our URLs, an Access-Control-Allow-Origin header is set to *.
Using Postman I can get that same result.
This is returning the same result from both ASP.NET Web Forms applications and new .NET 6 Razor Pages apps.
Is there any way to have this header removed?
Maybe something in IIS?
To get rid of it, you have to list all the origins that are allowed to send requests to your endpoint. If you are running an ASP.NET Core application, configure the CORS middleware like this:
// Startup.ConfigureServices() method
// Example values only - put these in appsettings.json so they can be overridden if you need to
var corsAllowAnyOrigin = false;
var corsAllowOrigins = new string[] { "https://*.contoso.com", "https://api.contoso.com" };

// Configuring the CORS module
services.AddCors(options =>
{
    options.AddDefaultPolicy(
        builder =>
        {
            if (corsAllowAnyOrigin)
            {
                builder.AllowAnyOrigin();
            }
            else
            {
                builder.WithOrigins(corsAllowOrigins);
            }
            builder.AllowAnyHeader();
            builder.AllowAnyMethod();
        });
});
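Note that registering the policy alone doesn't apply it; the middleware also has to be added to the pipeline in Startup.Configure(), with UseCors after UseRouting and before UseAuthorization/UseEndpoints:

app.UseRouting();
app.UseCors();          // applies the default policy registered above
app.UseAuthorization();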
For your Web Forms application, you can install the IIS CORS module and configure it in the web.config file like this:
<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <cors enabled="true">
      <add origin="*" allowed="false" />
      <add origin="https://*.contoso.com" allowCredentials="false" />
      <add origin="https://api.contoso.com" allowCredentials="true" />
    </cors>
  </system.webServer>
</configuration>
I have access to a subdirectory on a server, like http://root/_pp. My job is to create a generic HTTP handler in .NET and drop it in the _pp directory. The handler should accept POST and OPTIONS requests from external sources; custom code in .NET and Java would be calling the handler using OPTIONS and POST requests.
I have uploaded quite a simple handler to the subdirectory. The code looks like the following:
public void ProcessRequest(HttpContext context)
{
    context.Response.ClearHeaders();
    //string origin = context.Request.Headers["Origin"];
    context.Response.AppendHeader("Access-Control-Allow-Origin", "*");
    //string requestHeaders = context.Request.Headers["Access-Control-Request-Headers"];
    context.Response.AppendHeader("Access-Control-Allow-Headers", "*");
    context.Response.AppendHeader("Access-Control-Allow-Methods", "POST, OPTIONS");
    context.Response.ContentType = "text/plain";
    context.Response.Write("Hello World again again");
}
When I upload this handler to the _pp subdirectory and send a POST or OPTIONS request from Fiddler, it returns 500 Internal Server Error.
Please note: I have no control over configuring IIS on the server, nor can I access anything in the root directory of the server.
Is it possible to achieve what I want with the given constraints? Please help.
EDIT 1: handler registration in the web.config of _pp
<system.webServer>
  <handlers>
    <add name="IPN" verb="*" path="IPN.ashx" type="System.Web.UI.SimpleHandlerFactory" />
  </handlers>
</system.webServer>
I'm running an Azure Website. Whenever I deploy, everyone gets logged out because the machineKey changes.
I specified the machineKey in the web.config but this didn't solve the issue. I believe this is because Azure automatically overwrites the machineKey [1].
I've found a couple of similar questions here, but the answers point to dead links.
So, what's the solution? Surely there's a way to keep users logged in regardless of deployments on Azure.
Try to reset the machine-key configuration section upon Application_Start:
// Requires: using System.Configuration; using System.Reflection; using System.Web.Configuration;
protected void Application_Start()
{
    // ...

    var mksType = typeof(MachineKeySection);
    var mksSection = ConfigurationManager.GetSection("system.web/machineKey") as MachineKeySection;
    var resetMethod = mksType.GetMethod("Reset", BindingFlags.NonPublic | BindingFlags.Instance);

    // Clone the current section, swapping in our own key material
    var newConfig = new MachineKeySection();
    newConfig.ApplicationName = mksSection.ApplicationName;
    newConfig.CompatibilityMode = mksSection.CompatibilityMode;
    newConfig.DataProtectorType = mksSection.DataProtectorType;
    newConfig.Validation = mksSection.Validation;

    newConfig.ValidationKey = ConfigurationManager.AppSettings["MK_ValidationKey"];
    newConfig.DecryptionKey = ConfigurationManager.AppSettings["MK_DecryptionKey"];
    newConfig.Decryption = ConfigurationManager.AppSettings["MK_Decryption"]; // default: AES
    newConfig.ValidationAlgorithm = ConfigurationManager.AppSettings["MK_ValidationAlgorithm"]; // default: SHA1

    // Replace the runtime machine-key settings via the non-public Reset method
    resetMethod.Invoke(mksSection, new object[] { newConfig });
}
The above assumes you set the appropriate values in the <appSettings> section:
<appSettings>
  <add key="MK_ValidationKey" value="...08EB13BEC0E42B3F0F06B2C319B..." />
  <add key="MK_DecryptionKey" value="...BB72FCE34A7B913DFC414E86BB5..." />
  <add key="MK_Decryption" value="AES" />
  <add key="MK_ValidationAlgorithm" value="SHA1" />
</appSettings>
But you can load your actual values from any configuration source you like.
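If you need fresh key material for those settings, here's a quick one-off sketch for generating it - 64 random bytes, hex-encoded, suits the SHA1 validation key, and 32 the AES decryption key:

using System;
using System.Security.Cryptography;

static string NewKey(int byteCount)
{
    var data = new byte[byteCount];
    using (var rng = new RNGCryptoServiceProvider())
        rng.GetBytes(data);
    return BitConverter.ToString(data).Replace("-", ""); // hex string
}

// NewKey(64) -> MK_ValidationKey, NewKey(32) -> MK_DecryptionKey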
If Azure is rewriting your machineKey, you can't do much about it, as it is part of their infrastructure. However, there are other methods.
Override FormsAuthentication
This should not be difficult, as you can easily look up the source code of FormsAuthentication, create your own logic, and replace MachineKey with your own key stored in web.config or in your database.
Custom Authentication Filter
The simplest way would be to create a filter that verifies and decrypts cookies. You need to do this in the OnAuthorization method: create a new instance of IPrincipal and set IsAuthenticated to true if decryption was successful.
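A rough sketch of that idea for MVC - the "auth" cookie name and the TryDecrypt helper are placeholders for your own scheme; the point is that a named GenericIdentity reports IsAuthenticated as true:

using System.Security.Principal;
using System.Web.Mvc;

public class CustomCookieAuthAttribute : AuthorizeAttribute
{
    public override void OnAuthorization(AuthorizationContext filterContext)
    {
        var cookie = filterContext.HttpContext.Request.Cookies["auth"]; // your cookie name
        var userName = cookie == null ? null : TryDecrypt(cookie.Value);
        if (userName != null)
        {
            // A named GenericIdentity => IsAuthenticated == true
            filterContext.HttpContext.User =
                new GenericPrincipal(new GenericIdentity(userName), new string[0]);
        }
        else
        {
            filterContext.Result = new HttpUnauthorizedResult();
        }
    }

    // Placeholder - decrypt with your own key, not the machine key
    private static string TryDecrypt(string cookieValue)
    {
        return null; // return the user name on success, null on failure
    }
}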
OAuth
Enable OAuth and create an OAuthProvider. However, you will need to host the OAuthProvider on a server that is under your control, as it will need a working machineKey.
Or enable third-party OAuth: if you enable OAuth with Google, Facebook, etc., it will be easy, as the user is redirected to the OAuth provider, continues to log in automatically, and a new session is established.
I had the same issue, and in my case I was using the Web Deploy to Azure wizard in VS13. I thought I was going crazy: I would set the machineKey in the web.config, and then it would be changed to autogenerate in the deployed web.config. It is something in the Web Deploy script/settings. My solution was to open the live Azure site from within VS13 using the Server Explorer, then edit the web.config and save the changes. This preserved my settings with my supplied keys, and all works fine.
While I'd love to get rid of requiring FrontPage Extensions on a heavy-traffic site I host, the client requires them to administer the site. Having just implemented Wildcard Application Mapping in IIS 6 on this site in order to provide integrated Forms Authentication security between ASP and ASP.NET resources, this breaks FrontPage Extensions. Everything works like a charm, including encrypting and caching roles that are now available even to ASP - except for the loss of FrontPage. Specifically, you cannot even log in to FrontPage administration (incorrect credentials).
Has anyone gotten FrontPage to work with Wildcard Application Mapping routing through the ASP.NET 2.0 aspnet_isapi.dll?
UPDATE: I've marked Chris Hynes' answer even though I have not had the time to test it (and the current configuration is working for the client). It makes sense and goes along with what I thought was occurring, and possibly how to deal with it, but I did not know where to route the request at that point (fpadmdll.dll). Much thanks!
The issue here sounds like the wildcard mapping is taking precedence over the FrontPage Extensions ISAPI handler and/or messing up the request/response for it. I'd try creating a handler that does nothing and mapping it to fpadmdll.dll.
Something like this:
namespace YourNamespace
{
    public class IgnoreRequestHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        { }
    }
}
Then map it up in the web.config:
<httpHandlers>
  <add verb="*" path="fpadmdll.dll" type="YourNamespace.IgnoreRequestHandler, YourDll" />
</httpHandlers>
I generate an XML/Google sitemap on the fly using an HTTP handler, so that I don't need to maintain an XML file manually.
I have mapped my HTTP handler to "sitemap.xml" in my web.config like this:
<httpHandlers>
  <add verb="*" path="sitemap.xml" type="My.Name.Space, MyAssembly" />
</httpHandlers>
It works nicely. Now, www.mywebsite.com/sitemap.xml sets my HTTP handler into action and does exactly what I want. However, this URL will do the same: www.mywebsite.com/some/folder/sitemap.xml, and I don't really want that - i.e. I just want to map my handler to the root of my application.
I have tried changing the "path" of my handler in my web.config to "/sitemap.xml" and "~/sitemap.xml", but neither works.
Am I missing something here?
Try adding the following to your web.config:
<urlMappings enabled="true">
  <add url="~/SiteMap.xml" mappedUrl="~/MyHandler.ashx" />
</urlMappings>
This uses a little-known feature of ASP.NET 2.0 called 'Url Mapping'.
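For placement, the urlMappings element sits directly under system.web, so the full picture would be something like:

<configuration>
  <system.web>
    <urlMappings enabled="true">
      <add url="~/SiteMap.xml" mappedUrl="~/MyHandler.ashx" />
    </urlMappings>
  </system.web>
</configuration>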
Following on from Kirtan's suggested solution #1, you can do a workaround as follows:
public void ProcessRequest(HttpContext context)
{
    // Ensure that the sitemap.xml request is to the root of the application
    if (!context.Request.PhysicalPath.Equals(context.Server.MapPath("~/sitemap.xml")))
    {
        // Invoke the default handler for this request
        context.RemapHandler(null);
        return;
    }

    // Generate the sitemap
}
You might need to play with this a bit - I'm not sure if invoking the default handler will just cause IIS to re-invoke your handler again. It's probably worth testing in debug mode from VS. If it does just re-invoke, then you'll need to try invoking some static file handler instead, or you could just issue an HTTP 404 yourself, e.g.:
// Issue an HTTP 404
context.Response.Clear();
context.Response.StatusCode = (int)System.Net.HttpStatusCode.NotFound;
return;
See the MSDN documentation on HttpContext.RemapHandler for more info: http://msdn.microsoft.com/en-us/library/system.web.httpcontext.remaphandler.aspx
There are 2 solutions to this:
Soln #1:
You can check the request path using the Request.Url property; if the request is for the root path, you can generate the XML, else don't do anything.
Soln #2:
Put a web.config file with a setting along the following lines in every folder in which you don't want to handle the request for the sitemap.xml file.
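Something like this should do it (a sketch that removes the root-level mapping, assuming the httpHandlers registration from the question):

<?xml version="1.0"?>
<configuration>
  <system.web>
    <httpHandlers>
      <remove verb="*" path="sitemap.xml" />
    </httpHandlers>
  </system.web>
</configuration>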
You can, alternatively, run a check in the global.asax, verify the request, and finally re-assign a new handler through the context.RemapHandler method.
The only thing is that you would have to implement a factory for that matter.
I would suggest you inherit from HttpApplication and implement the factory there, but that's your call.
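A minimal sketch of that idea - MySitemapHandler is a placeholder for whatever IHttpHandler builds the sitemap, and the remap happens in PostResolveRequestCache because RemapHandler must be called before the handler-mapping stage:

protected void Application_PostResolveRequestCache(object sender, EventArgs e)
{
    var context = ((HttpApplication)sender).Context;

    // Only remap when sitemap.xml is requested at the application root
    if (context.Request.AppRelativeCurrentExecutionFilePath
            .Equals("~/sitemap.xml", StringComparison.OrdinalIgnoreCase))
    {
        context.RemapHandler(new MySitemapHandler());
    }
}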