I would like to encrypt the connection string in my web.config. I found a nice example of how to do this, implemented it, and on my development machine it runs fine.
However, when I upload it to my hosting provider, it fails with the following error:
[SecurityException: Request failed.]
System.Configuration.DpapiProtectedConfigurationProvider.Encrypt(XmlNode node)
In this blog I read that this is because the web application probably runs in medium trust, so WebConfigurationManager.OpenWebConfiguration cannot be used; WebConfigurationManager.GetSection should be used instead. However, if I get the section as proposed, the call to ProtectSection fails with the following error message:
System.InvalidOperationException: This operation does not apply at runtime
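For reference, this is roughly the code I tried (a sketch; the cast and provider name follow the examples I found):

// Medium-trust attempt: get the runtime section instead of opening the configuration.
ConnectionStringsSection section =
    (ConnectionStringsSection)WebConfigurationManager.GetSection("connectionStrings");

// Throws InvalidOperationException ("This operation does not apply at runtime"):
// the section comes from the runtime configuration, which is read-only.
section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");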
Can anyone lead me to a solution for how to encrypt (and decrypt) the connection string in the web.config file at runtime?
Update
Not a real answer to the question, but the hoster granted full trust to the web application and now everything works fine. I'll leave the question open; maybe someone will post a solution to the original question and help people who have the same problem but cannot get full trust.
From http://msdn.microsoft.com/en-us/library/89211k9b%28v=vs.80%29.aspx
static void ToggleWebEncrypt()
{
    // Open the Web.config file.
    Configuration config = WebConfigurationManager.OpenWebConfiguration("~");

    // Get the connectionStrings section.
    ConnectionStringsSection section =
        config.GetSection("connectionStrings") as ConnectionStringsSection;

    // Toggle encryption.
    if (section.SectionInformation.IsProtected)
    {
        section.SectionInformation.UnprotectSection();
    }
    else
    {
        section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
    }

    // Save changes to the Web.config file.
    config.Save();
}
UPDATE
Also, ensure that your service account has write permissions to the Web.config. Be aware that granting your service account write access to the Web.config somewhat increases the security footprint of your application; only do so if you understand and accept the risks.
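For example, granting modify rights to the application pool identity with icacls might look like this (the path and pool name are illustrative):

icacls "C:\inetpub\wwwroot\MySite\web.config" /grant "IIS AppPool\MyAppPool":(M)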
Related
So I managed to encrypt my connection strings on my localhost, and everything was fine; they could be read without any problems.
Now that I have published my project to my web host, the story is quite different.
I get following error:
Key not valid for use in specified state. (Exception from HRESULT: 0x8009000B)
I'm wondering if I should decrypt it locally and then encrypt it after it has been published to my web host? I have seen another thread where people suggest that a machineKey should be added, but where would I place it, and where would I find it?
the whole stacktrace can be seen here
I used the following cmd to encrypt:
aspnet_regiis -pef "connectionStrings" "PATH" -prov "DataProtectionConfigurationProvider"
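For reference, the matching decrypt command uses the standard -pdf switch:

aspnet_regiis -pdf "connectionStrings" "PATH"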
So I figured that this error is raised when the machine key does not correspond to the key that was used to encrypt the string in the first place (on my localhost).
Therefore I wrote the following method:
private void ProtectSection(string sectionName, string provider)
{
    Configuration config = WebConfigurationManager.OpenWebConfiguration("~/");
    ConfigurationSection section = config.GetSection(sectionName);

    if (section != null && !section.SectionInformation.IsProtected)
    {
        section.SectionInformation.ProtectSection(provider);
        config.Save();
    }
}
And I call it in my Global.asax file, as sketched below.
By doing so, I first uploaded my web.config unprotected and then let my web host encrypt the connection string when the website ran for the first time. That way it used its own machine key, so nothing conflicted.
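A minimal sketch of that Global.asax call (using the same provider name as the method above):

protected void Application_Start(object sender, EventArgs e)
{
    // Encrypt on first run; later runs see IsProtected == true and do nothing.
    ProtectSection("connectionStrings", "DataProtectionConfigurationProvider");
}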
I'm using Azure Web API 2. My clients are getting some 500 errors, and I'm trying to figure out why. I've turned on tracing in the Azure portal, and when the file isn't truncated, I see some great info like:
235. - GENERAL_RESPONSE_ENTITY_BUFFER
{"Message":...,"ExceptionMessage"...,"StackTrace":".....
The problem is my log files are getting truncated at 1MB. (The amount of posted JSON data can be large, which eats up log space.)
I see some potentially nice .htm files in LogFiles/DetailedErrors, but they are generic pages without any details or trace info.
In Web.Config I set <customErrors mode="Off" />. This added detail to trace files, but not to the DetailedErrors htm files.
Questions:
1) Can I increase the max size of the trace file? (I tried unsuccessfully using maxLogFileSizeKB, but didn't know where to put it, presumably in Web.Config.)
2) Any other way to see stack trace information on server errors from the LogFiles directory on the server, or otherwise?
I think your problem might be that you're logging to the wrong place. There are three different places to store the logs, but the Preview Portal makes this less clear than the old Azure Portal. The documentation for logging still directs you to the old portal for setup, and you can log to Blob Storage or to Table Storage.
https://azure.microsoft.com/en-us/documentation/articles/web-sites-enable-diagnostic-log/. Logging to tables might be less limiting.
While I was not able to increase the log size, I was able to get the trace information with IExceptionLogger. I don't need any special error handling, so just being notified is good enough for me. This is for API (2) controllers.
1) In App_Start/WebApiConfig.cs, I added the following line
config.Services.Add(typeof(IExceptionLogger), new ApiErrorLogger());
2) Create my ApiErrorLogger class
public class ApiErrorLogger : ExceptionLogger
{
    public override void Log(ExceptionLoggerContext context)
    {
        addLogError(context.Request.RequestUri.ToString(),
                    context.Exception.Message,
                    context.Exception.StackTrace);
    }

    public static void addLogError(string uri, string message, string stackTrace)
    {
        // Store data in Azure table
    }
}
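For what it's worth, here is a hedged sketch of what addLogError could look like with the classic WindowsAzure.Storage SDK; the table name, connection-string name, and entity shape are my own assumptions, not part of the original answer:

using System;
using System.Configuration;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class ApiErrorEntity : TableEntity
{
    public string Uri { get; set; }
    public string Message { get; set; }
    public string StackTrace { get; set; }
}

public static void addLogError(string uri, string message, string stackTrace)
{
    // "StorageConnection" is a placeholder connection-string name.
    var account = CloudStorageAccount.Parse(
        ConfigurationManager.ConnectionStrings["StorageConnection"].ConnectionString);
    var table = account.CreateCloudTableClient().GetTableReference("ApiErrors");
    table.CreateIfNotExists();

    table.Execute(TableOperation.Insert(new ApiErrorEntity
    {
        PartitionKey = DateTime.UtcNow.ToString("yyyyMMdd"), // partition by day
        RowKey = Guid.NewGuid().ToString(),
        Uri = uri,
        Message = message,
        StackTrace = stackTrace
    }));
}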
I don't have to use <customErrors mode="Off" />, which is good, and I can turn tracing off (which is resource expensive) in Azure portal.
I am new to the encryption process and have tried unsuccessfully to install an encrypted web.config file onto a hosting company's server. I am using Microsoft Visual Web Developer 2010 Express.
I have followed the steps in Walkthrough: Encrypting Configuration Information Using Protected Configuration several times.
Please note regarding the walkthrough: I do not have any machineKeys in my web.config file, so I skipped that encryption step.
When I ran aspnet_regiis -pef connectionStrings "c:\Users......\mywebsite.com", the return was:
Encrypting configuration section ...
Succeeded!
2) I then FTP'd my web.config file, and the site gets the error below. (Note: Line 8 is highlighted.)
Server Error in '/' Application.
Configuration Error
Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.
Parser Error Message: Failed to decrypt using provider 'RsaProtectedConfigurationProvider'. Error message from the provider: Bad Data.
Source Error: (the error page lists web.config lines 6-10 here, but the encrypted XML did not survive posting)
Source File: C:\HostingSpaces\username\mywebsite.com\wwwroot\web.config Line: 8
Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.1
I know there must be some piece missing, but I have searched and have not found anything. I emailed the hosting company to find out whether they need to do anything regarding encrypted web sites, and they have not responded yet.
What I would expect is that there is a key residing elsewhere that takes the encrypted value and decrypts it using an algorithm. If so, where would I get that key and where would it go?
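From what I've read so far, that key would be an RSA key container that can be moved between machines with aspnet_regiis; the container name below is apparently the .NET default, and the account name is just an example, so I'm not sure this applies to my hosting setup:

aspnet_regiis -px "NetFrameworkConfigurationKey" keys.xml -pri   (export, including the private key)
aspnet_regiis -pi "NetFrameworkConfigurationKey" keys.xml        (import on the target machine)
aspnet_regiis -pa "NetFrameworkConfigurationKey" "IIS AppPool\MyAppPool"   (grant the app pool access)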
Any help is greatly appreciated and somewhat surprised I cannot find any issues similar to this on the web.
Thanks Much.
I don't have a direct answer to your question, but here's a simple technique to encrypt web.config. It may not be the best way, but it might be enough to get you started. This technique encrypts web.config during application start-up.
VERY IMPORTANT: make sure this code only runs in production. If you run it during development, you'll encrypt your source web.config and you won't be able to get it back.
private static void EncryptConfig()
{
    System.Configuration.Configuration config =
        WebConfigurationManager.OpenWebConfiguration(HostingEnvironment.ApplicationVirtualPath);

    foreach (string sectionName in new[] { "connectionStrings", "appSettings" })
    {
        ConfigurationSection section = config.GetSection(sectionName);
        if (!section.SectionInformation.IsProtected)
        {
            section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
        }
    }

    config.Save();
}
You can then call this method in Application_Start()
protected void Application_Start()
{
    if (IsProduction)
    {
        EncryptConfig();
    }
}
This solution isn't perfect, because when you deploy your web.config to your production server, it won't be encrypted; since the encryption happens at runtime, it will only be encrypted once your application starts. When the first request comes in, web.config will be encrypted. When the second request comes in, your app will restart, because ASP.NET detects that web.config changed. From that point on, your app will operate normally with an encrypted web.config. The benefit of this technique is that the encryption happens automatically: whenever you deploy a new web.config file, it will be encrypted during start-up.
Important: Make sure that EncryptConfig() only runs in production so that you don't encrypt your source web.config.
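IsProduction is left to the reader; one possible sketch, mirroring the debugger check used in the Global.asax example further down:

// Assumption: treat "no debugger attached" as production, as the example below does.
private static bool IsProduction
{
    get { return !System.Diagnostics.Debugger.IsAttached; }
}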
Jonny O - Thanks. This worked so easily. CP
I added the Global.asax file, and here are the code snippets that went into it (global.asax.cs).
Granted, much of this duplicates the answer above, but it is my entire solution. Thanks again.
using System.Web.Configuration;
using System.Configuration;
using System.Web.Hosting;

protected void Application_Start(object sender, EventArgs e)
{
    // Test whether this app is being started on the development machine (e.g. in the debugger).
    // This code will encrypt web.config the first time the program runs.
    // Therefore, it is important to keep a backup copy of the non-encrypted web.config, as the
    // code below will encrypt it, which is what we want to happen on the production server.
    if (!System.Diagnostics.Debugger.IsAttached)
    {
        EncryptConfig(); // See below
    }
}
/// <summary>
/// This technique of encrypting the web.config file was learned from this forum post:
/// http://stackoverflow.com/questions/5602630/encrypting-web-config-and-installing
/// </summary>
private static void EncryptConfig()
{
    System.Configuration.Configuration config =
        WebConfigurationManager.OpenWebConfiguration(HostingEnvironment.ApplicationVirtualPath);

    foreach (string sectionName in new[] { "connectionStrings", "appSettings" })
    {
        ConfigurationSection section = config.GetSection(sectionName);
        if (!section.SectionInformation.IsProtected)
        {
            section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
        }
    }

    config.Save();
}
I have a problem with an Azure project with one WebRole but multiple instances, using cookieless sessions. The application doesn't need session storage, so it isn't using any session storage provider, but I need to track the SessionID. Supposedly the SessionID should be the same across the WebRole instances, but it changes suddenly without explanation. We are using the SessionID to track some data, so it's very important.
In order to reproduce the issue:
Create a Cloud Project.
Add an ASP.NET Web Role. The code already in it will do.
Open Default.aspx
Add a control to see the current SessionID and a button to cause a postback
<p><%= Session.SessionID %></p>
<asp:Button ID="Button1" runat="server" Text="PostBack" onclick="Button1_Click" />
Add an event handler for the button that delays the response a bit:
protected void Button1_Click(object sender, EventArgs e)
{
    System.Threading.Thread.Sleep(150);
}
Open Web.Config
Enable cookieless sessions:
<system.web>
    <sessionState cookieless="true" />
</system.web>
Run the project and hit the "PostBack" button fast and repeatedly for a while, paying attention to the session ID in the address bar. Nothing happens; the session ID is always the same :). Stop it.
Open ServiceConfiguration.cscfg
Enable four instances:
<Instances count="4" />
Ensure that in the Web.config there is a line related to the machineKey that has been added automatically by Visual Studio (at the end of system.web).
Rerun the project, hit the "PostBack" button fast and repeatedly for a while, and pay attention to the session ID in the address bar. You'll see the SessionID change after a while.
Why is this happening? As far as I know, if all machines share the machineKey, the session should be the same across them. With cookies there are no problems; the issue apparently arises only when cookieless sessions are used.
My best guess is that something goes wrong when there are several instances: when the SessionID generated in one WebRole instance reaches another, it is rejected and regenerated. That doesn't make sense, as all the WebRole instances have the same machineKey.
In order to find the problem and see it more clearly, I created my own SessionIDManager:
public class MySessionIDManager : SessionIDManager
{
    public override string CreateSessionID(HttpContext context)
    {
        if (context.Items.Contains("AspCookielessSession"))
        {
            String formerSessionID = context.Items["AspCookielessSession"].ToString();

            // if (!String.IsNullOrWhiteSpace(formerSessionID) && formerSessionID != base.CreateSessionID(context))
            //     Debugger.Break();

            return formerSessionID;
        }
        else
        {
            return base.CreateSessionID(context);
        }
    }
}
And to use it, change this line in the Web.config:
<sessionState cookieless="true" sessionIDManagerType="WebRole1.MySessionIDManager" />
Now you can see that the SessionID doesn't change, no matter how fast and for how long you hit. If you uncomment those two lines, you will see how ASP.NET is creating a new sessionID even when there is already one.
In order to force ASP.NET to create a new session, just redirect to an absolute URL on your site:
Response.Redirect(Request.Url.AbsoluteUri.Replace(Request.Url.AbsolutePath, String.Empty));
Why is this thing happening with cookieless sessions?
How reliable is my solution in MySessionIDManager ?
Kind regards.
UPDATE:
I've tried the workaround from "User-Specified Machine Keys Overwritten by Site-Level Auto Configuration", but the problem still stands.
// Requires Microsoft.Web.Administration (ServerManager) and Microsoft.WindowsAzure.ServiceRuntime.
public override bool OnStart()
{
    // For information on handling configuration changes
    // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.
    using (var server = new ServerManager())
    {
        try
        {
            // Get the site's web configuration.
            var siteNameFromServiceModel = "Web"; // Update this site name for your site.
            var siteName =
                string.Format("{0}_{1}", RoleEnvironment.CurrentRoleInstance.Id, siteNameFromServiceModel);
            var siteConfig = server.Sites[siteName].GetWebConfiguration();

            // Get the appSettings section.
            var appSettings = siteConfig.GetSection("appSettings").GetCollection()
                .ToDictionary(e => (string)e["key"], e => (string)e["value"]);

            // Reconfigure the machine key.
            var machineKeySection = siteConfig.GetSection("system.web/machineKey");
            machineKeySection.SetAttributeValue("validationKey", appSettings["validationKey"]);
            machineKeySection.SetAttributeValue("validation", appSettings["validation"]);
            machineKeySection.SetAttributeValue("decryptionKey", appSettings["decryptionKey"]);
            machineKeySection.SetAttributeValue("decryption", appSettings["decryption"]);
            server.CommitChanges();

            _init = true;
        }
        catch
        {
        }
    }

    return base.OnStart();
}
I've also tried the suggestion about adding a Session_Start handler that puts some data into the session, but no luck.
void Session_Start(object sender, EventArgs e)
{
    Session.Add("dummyObject", "dummy");
}
Bounty up!
In short, unless you use cookies or a session provider there is no way for the session id to pass from one web role instance to the other. The post you mention says that the SessionID does NOT stay the same across web roles if you don't use cookies or session storage.
Check this previous question for ways to handle state storage in Azure, e.g. using Table Storage
The machineKey has nothing to do with sessions or the application domain; it is the key used to encrypt, decrypt, and validate authentication and viewstate data. To verify this, open SessionIDManager.CreateSessionID with Reflector. You will see that the ID value is just a random 16-byte value encoded as a string.
The AspCookielessSession value is already checked by SessionIDManager in the GetSessionID method, not CreateSessionID, so the check has already finished before your code gets executed. Since the default sessionState mode is InProc, it makes sense that separate web roles cannot validate the session key, so they create a new one.
In fact, a role may migrate to a different physical machine at any time, in which case its state will be lost. This post from the SQL Azure Team describes a way to use SQL Azure to store state for exactly this reason.
EDIT I finally got TableStorageSessionStateProvider to work in cookieless mode!
While TableStorageSessionStateProvider does support cookieless mode by overriding SessionStateStoreProviderBase.CreateUninitializedItem, it fails to handle empty sessions properly in private SessionStateStoreData GetSession(HttpContext context, string id, out bool locked, out TimeSpan lockAge, out object lockId, out SessionStateActions actions, bool exclusive). The solution is to return an empty SessionStateStoreData if no data is found in the underlying blob storage.
The method is 145 lines long so I won't paste it here. Search for the following code block
if (actions == SessionStateActions.InitializeItem)
{
    // Return an empty SessionStateStoreData
    result = new SessionStateStoreData(new SessionStateItemCollection(),
        /* remaining constructor arguments truncated in the original post */);
}
This block returns an empty session data object when a new session is created. Unfortunately the empty data object is not stored to the blob storage.
Replace the first line with the following line to make it return an empty object if the blob is empty:
if (actions == SessionStateActions.InitializeItem || stream.Length == 0)
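For completeness, wiring the sample provider up for cookieless sessions looks roughly like this in web.config (the type name comes from the AspProviders sample; treat it and the omitted assembly details as assumptions):

<sessionState mode="Custom" cookieless="true"
              customProvider="TableStorageSessionStateProvider">
    <providers>
        <add name="TableStorageSessionStateProvider"
             type="Microsoft.Samples.ServiceHosting.AspProviders.TableStorageSessionStateProvider" />
    </providers>
</sessionState>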
Long story short, cookieless session state works as long as the provider supports it. You'll have to decide whether using cookieless state justifies using a sample provider, though. Perhaps vtortola should check the AppFabric Caching CTP: it includes out-of-the-box ASP.NET providers, is a lot faster, and definitely has better support than the sample providers. There is even a step-by-step tutorial on how to set up session state with it.
Sounds tricky.
I have one suggestion/question for you. Don't know if it will help - but you sound like you're ready to try anything!
It sounds like the session manager on the new machine checks the central session storage provider and, when it finds the session storage empty, issues a new session key.
I think a solution may come from:
- using Session_Start as you have above in order to insert something into Session storage
- plus inserting a persistent session storage provider of some description into the web.config - e.g. some of the oldest Azure samples provide a table-based provider, and some of the newer samples provide an AppFabric caching solution.
I know your design is not using session storage, but maybe you need to put something in (a bit like your Session_Start), plus you need to define something other than in-process session management.
Alternatively, you need to redesign your app around something other than ASP.NET sessions.
Hope that helps - good luck!
I experienced the same problem, and after much research and debugging I found that the issue occurred because the "virtual servers" in the Azure SDK map the websites to different paths in the IIS metabase. (You can see this through Request.ServerVariables["APPL_MD_PATH"].)
I just found this out now, but wanted to post it so people can start testing it. My theory is that this problem may go away once the site is published out to Azure proper. I'll update with any results I find.
We'd like to restrict the maximum upload file size on our web site, and we've already set the appropriate limits in our web.config. The problem we're encountering is that if a really large file (1 GB, for example) is uploaded, the entire file is transferred before a server-side error is generated, and the type of error differs depending on how large the file is.
Is there a way to detect the size of a pending file upload before the actual upload takes place?
Here are my relevant web.config settings, which restrict requests to 12 MB (maxRequestLength is in kilobytes; maxAllowedContentLength is in bytes):
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.web>
        <httpRuntime maxRequestLength="12288"/>
    </system.web>
    <system.webServer>
        <security>
            <requestFiltering>
                <requestLimits maxAllowedContentLength="12582912"/>
            </requestFiltering>
        </security>
    </system.webServer>
</configuration>
I've tried creating an HTTP module so I could intercept a request early in the request lifecycle, but the uploads seem to take place even before the BeginRequest event of HttpApplication:
public class UploadModule : IHttpModule
{
    private const int MaxUploadSize = 12582912;

    public void Init(HttpApplication context)
    {
        context.BeginRequest += handleBeginRequest;
    }

    public void Dispose()
    {
    }

    private void handleBeginRequest(object sender, EventArgs e)
    {
        // The upload takes place before this method gets called.
        var app = sender as HttpApplication;

        if (app.Request.Files.OfType<HttpPostedFile>()
               .Any(f => f.ContentLength > MaxUploadSize))
        {
            app.Response.StatusCode = 413;
            app.Response.StatusDescription = "Request Entity Too Large";
            app.Response.End();
            app.CompleteRequest();
        }
    }
}
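One variation worth trying (a sketch only; IIS may still buffer the body in some configurations) is to check the Content-Length request header in BeginRequest instead of touching Request.Files, since accessing Files forces the whole body to be read first:

private void handleBeginRequest(object sender, EventArgs e)
{
    var app = (HttpApplication)sender;

    // Request.ContentLength is taken from the request headers, so this check
    // does not require the body to have been received yet.
    if (app.Request.ContentLength > MaxUploadSize)
    {
        app.Response.StatusCode = 413;
        app.Response.StatusDescription = "Request Entity Too Large";
        app.CompleteRequest();
    }
}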
Update:
I know that client-side technologies like Flash can detect file sizes before upload, but we need a server-side workaround because we're targeting platforms that have no Flash/Java/ActiveX/Silverlight support. I believe that IIS or ASP.NET has a bug that allows large files to be uploaded despite the limits, so I've filed a bug here.
Would an ISAPI extension give me more control over request processing than HTTP modules and handlers, such as allowing me to abort an upload if the Content-Length header is seen to be larger than the allowed limit?
Update 2:
Sigh. Microsoft has closed the bug I filed as a duplicate but has provided no additional information. Hopefully they didn't just drop the ball on this.
Update 3:
Hooray! According to Microsoft:
This bug is being resolved as it has been ported over to the IIS product team. The IIS team has since fixed the bug, which will be included in future release of Windows.
The problem is that the upload happens all at once in the HTTP POST request, so you can only detect it after it's done.
If you want more control over this, you should try Flash-based upload widgets, which have this and more. Check out this link: http://www.ajaxline.com/10-most-interesting-upload-widgets
Microsoft has responded on their Microsoft Connect site with the following:
This bug is being resolved as it has been ported over to the IIS product team. The IIS team has since fixed the bug, which will be included in future release of Windows.
If you are requesting a fix for the current OS, a QFE request must be opened. Please let me know if this is the route that you want to take. Please note that opening a QFE request does not necessarily mean that it would be approved.
So I guess we have to wait for the next version of IIS for the fix (unless a QFE request is fulfilled, whatever that is).
"Is there a way to detect the size of a pending file upload before the actual upload takes place?"
No. That would require access to the file size on the client. Allowing a web server direct access to files on the client would be a bit dangerous.
Your best bet is to place a line of text stating the maximum allowed file size.
OR you could create some sort of ActiveX control, Java applet, etc., so that you're not dependent on browser restrictions. Then you have to convince your users to install it. Probably not the best solution.
Well... depends how low-level you want to get.
Create a service app that acts as a proxy for IIS (all incoming port-80 socket requests go to the service). Have the service pass everything it receives to IIS (with the website listening on a different port or IP), but monitor the total request size as it's received.
When the size from a given connection exceeds your desired limit, close the connection. Return a redirect to an error page if you want to be polite.
Silly, but it'll let you monitor data in transit without waiting for IIS to hand over the request.
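A rough sketch of that proxy idea (ports, the byte limit, and the per-connection accounting are all illustrative; keep-alive connections would need more care):

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

// Accept connections on port 80, forward bytes to the real IIS site (assumed
// to listen on port 8080), and drop the connection once a client has sent
// more than the allowed number of bytes.
class SizeLimitingProxy
{
    private const long MaxRequestBytes = 12582912; // 12 MB, matching the web.config limit

    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 80);
        listener.Start();
        while (true)
        {
            var client = listener.AcceptTcpClient();
            Task.Run(() => Handle(client));
        }
    }

    static void Handle(TcpClient client)
    {
        using (client)
        using (var backend = new TcpClient("localhost", 8080)) // the real IIS site
        {
            var up = client.GetStream();
            var down = backend.GetStream();

            // Pump responses back to the client in the background.
            var responses = down.CopyToAsync(up);

            var buffer = new byte[8192];
            long total = 0;
            int read;
            while ((read = up.Read(buffer, 0, buffer.Length)) > 0)
            {
                total += read;
                if (total > MaxRequestBytes)
                    return; // disposing both TcpClients closes the connections

                down.Write(buffer, 0, read);
            }

            responses.Wait();
        }
    }
}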