Error at System.Web.Handlers.ScriptResourceHandler.ProcessRequest(HttpContext context)
   at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
   at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
I am getting the above error on a production server. We have six production servers, and the error occurs on only three of them; the remaining three are working fine.
We have the following machineKey setting on all six servers, in these files:
C:\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG\web.config.comments
C:\Windows\Microsoft.NET\Framework64\v2.0.50727\CONFIG\web.config.comments
Old setting:

<machineKey validationKey="AutoGenerate,IsolateApps"
            decryptionKey="AutoGenerate,IsolateApps"
            validation="SHA1"
            decryption="Auto"
            compatibilityMode="Framework20SP1" />
New setting (changed to specific keys):

<machineKey validationKey="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
            decryptionKey="XXXXXXXXXXXXXXXXXXXXXXXX"
            validation="SHA1"
            decryption="AES"
            compatibilityMode="Framework20SP1" />
But we are still getting the same exception. Can anyone tell me what the exact problem could be?
This can be caused by search engines crawling through your pages. Sometimes they hit the ScriptResource.axd file and generate the error you are seeing.
If you can log the IPs that are causing this error, look them up and see where/who they are.
Of course, if you are only getting this from 3 of 6 servers in your web farm, something else could be wrong.
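For example, a minimal sketch of logging those IPs from Global.asax (the log file path here is just an illustration):

protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    HttpRequest request = HttpContext.Current.Request;

    // Record the client IP and requested URL so crawlers can be identified later.
    string line = string.Format("{0}\t{1}\t{2}\t{3}{4}",
        DateTime.UtcNow, request.UserHostAddress, request.RawUrl,
        ex == null ? "(no exception)" : ex.Message, Environment.NewLine);
    System.IO.File.AppendAllText(@"C:\Logs\script-resource-errors.log", line);
}

You can then look up the logged addresses to see whether they belong to crawlers.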
Here is my problem.
I have one server on the other side of the world with IP 1.2.3.4
If I put this in web.config:
<compilation defaultLanguage="c#" debug="true" />
everyone sees debug output. I want to set something like:
<compilation defaultLanguage="c#" debug="true" IP="4.3.2.1" />
so that only IP 4.3.2.1 sees debug output for that site, and all other IPs see the site as if
<compilation defaultLanguage="c#" debug="false" />
were set.
The setting determines how the whole page is compiled; it is then served in that form to all clients. If you want this, you can run two sites - one normal and one debug - and redirect requests coming from that one particular IP to the debug version.
What do you intend to do? You are fiddling with the compilation element, that is, you are trying to modify how your code is compiled. Code is not compiled per-request or per-user.
If you want to reveal/hide stack traces you may want to use this instead:
<customErrors mode="RemoteOnly" />
However, this does not allow filtering by IP, except for the loopback address. IP addresses are generally not a very secure way to identify a person or to prevent an unauthorized person from retrieving the stack trace.
If you have remote access to the web server, you can log in and browse the site via http://localhost. With RemoteOnly active, you will then see the stack trace of your error.
If you still want to go for an IP-based approach, you might find something at Rich Custom Error Handling with ASP.NET. The section "Rich Custom Error Pages" mentions "Logic to display detailed information only to certain IP addresses may be included here."
(I found the article by googling for "asp.net reveal stacktrace to certain ip only")
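As a rough illustration of that IP-based idea (the allowed address and the detailsLabel control are hypothetical), a custom error page could decide what to reveal like this:

protected void Page_Load(object sender, EventArgs e)
{
    // Hypothetical allow-list: only this client gets the details.
    bool trusted = Request.UserHostAddress == "4.3.2.1";

    Exception ex = Server.GetLastError();
    if (trusted && ex != null)
    {
        detailsLabel.Text = Server.HtmlEncode(ex.ToString()); // full stack trace
    }
    else
    {
        detailsLabel.Text = "An error occurred.";
    }
}

Note that Server.GetLastError() can return null once you have been redirected to a custom error page, so you may need to stash the exception (for example in session state) in Application_Error first.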
This is quite literally driving me bananas - I'm on holiday as of tomorrow, but if I can't get this working today then it's under threat - so any help is much appreciated!
Firstly, my website has a manually defined <machineKey /> element so that both web servers in the web farm are kept in sync. I have verified this with IIS manager (hence why I'm asking, despite the similarity with so many other questions). It looks like this (keys elided - but they are the correct length):
<machineKey validationKey="[512-bit hex]"
decryptionKey="[256-bit hex]"
validation="SHA1"
decryption="AES" />
The website runs ASP.NET MVC 3 and uses Forms Authentication in 'normal' mode (i.e. not 2.0 compatibility mode). I create the authentication ticket with code like the following:
FormsAuthentication.SetAuthCookie(userName, false);
My Forms Auth config is very simple; there are no IIS or server-wide settings in place that override the documented defaults:
<authentication mode="Forms">
<forms defaultUrl="~/Unauthorised"
loginUrl="~/Unauthorised"
ticketCompatibilityMode="Framework40" />
</authentication>
And then I've hijacked the cookie-reading functionality as per this MSDN topic so that I can create the principal and identity that I want.
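(For reference, the cookie-reading hijack follows the usual Application_AuthenticateRequest pattern, roughly like the sketch below; the principal I actually build is a custom type, but GenericPrincipal stands in for it here.)

// In Global.asax; needs System.Web.Security and System.Security.Principal.
protected void Application_AuthenticateRequest(object sender, EventArgs e)
{
    HttpCookie cookie = Request.Cookies[FormsAuthentication.FormsCookieName];
    if (cookie == null) return;

    // This is the call that blows up on whichever server didn't issue the cookie.
    FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookie.Value);
    if (ticket == null || ticket.Expired) return;

    HttpContext.Current.User = new GenericPrincipal(
        new FormsIdentity(ticket), new string[0]);
}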
The problem is that only one half of the web farm is able to decrypt the authentication cookie; the other half (i.e. whichever server didn't authenticate the user) just gives:
System.Security.Cryptography.CryptographicException: Length of the data to decrypt is invalid.
With this as the top part of the stack trace:
[CryptographicException: Length of the data to decrypt is invalid.]
System.Security.Cryptography.RijndaelManagedTransform.TransformFinalBlock(Byte[] inputBuffer, Int32 inputOffset, Int32 inputCount) +12521039
System.Security.Cryptography.CryptoStream.FlushFinalBlock() +53
System.Web.Configuration.MachineKeySection.EncryptOrDecryptData(Boolean fEncrypt, Byte[] buf, Byte[] modifier, Int32 start, Int32 length, Boolean useValidationSymAlgo, Boolean useLegacyMode, IVType ivType) +331
System.Web.Security.FormsAuthentication.Decrypt(String encryptedTicket) +293
We are using a slightly ancient (but very reliable) load balancer which does not modify the HTTP traffic, and the traffic in this case is HTTPS; by using Fiddler to decrypt and inspect the traffic, I've been able to verify that it's always one server and not both.
This clearly points to out-of-sync machineKeys - but they are in sync - so any idea what's going on?!
Thanks in advance!
When different servers have different patch levels, encryption and decryption behavior can differ between them, which can cause this runtime exception:
System.Security.Cryptography.CryptographicException: Length of the data to decrypt is invalid.
More information regarding this issue can be found here: http://blog.evonet.com.au/post/SystemSecurityCryptographyCryptographicException-Length-of-the-data-to-decrypt-is-invalid.aspx
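One quick way to confirm whether the two halves of the farm are crypto-compatible (a diagnostic sketch, not production code; pastedValue is whatever string you copied over):

// In a test page on server A: create and encrypt a test ticket.
var ticket = new FormsAuthenticationTicket("test-user", false, 30);
Response.Write(FormsAuthentication.Encrypt(ticket));

// In a test page on server B: paste the emitted string and decrypt it.
// If the machineKeys and crypto patch levels really match, this must succeed.
FormsAuthenticationTicket t = FormsAuthentication.Decrypt(pastedValue);
Response.Write(t.Name); // "test-user"

If server B throws the same CryptographicException here, the servers disagree about the encryption format even though the configured keys match.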
I have a file upload control on my web page. The maximum request length is set to 8 MB (maxRequestLength = 8192). I also have server-side validation that throws an error if the file is larger than 4 MB. The config allows 8 MB to give the user some leeway, and also so that the application can be tested.
If I upload a 9 MB file, an exception is thrown ("Maximum request length exceeded."), which is fine and works as expected. But when I try to upload a 1 GB file, I get HTTP 404 - File not found instead. Can someone please explain why this is happening, and how I can get it to throw the maxRequestLength exception?
I'm using IIS6.
I experienced this condition today (HTTP 404 on a large file upload with IIS 7) even though I thought I had made all the correct configuration settings. I wanted to allow uploads of files up to 300 MB, so I made the following web.config settings in a sub-folder of the application:
<configuration>
  <system.web>
    <httpRuntime maxRequestLength="307200" />
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering>
        <requestLimits maxAllowedContentLength="314572800" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
This configuration worked in test, but when I copied the updated files (including the web.config) to the production server, I received the HTTP 404 error when uploading a 90 MB file. Smaller files under the application-wide limit of 30 MB were working fine, so I knew it was a request-size problem of some sort.
I figured there was a chance IIS had cached some application settings and just hadn't updated them, so I recycled the application pool, after which everything worked as expected.
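If you'd rather recycle from the command line than from IIS Manager, appcmd can do it (substitute your actual application pool name for "MyAppPool"):

%windir%\system32\inetsrv\appcmd recycle apppool /apppool.name:"MyAppPool"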
I feel none of the answers here explain why you get a 404; they just tell you the usual stuff about how to fix the problem.
The 404 is not due to misconfiguration, it is intentional and documented behaviour:
When request filtering blocks an HTTP request because an HTTP request exceeds the request limits, IIS 7 will return an HTTP 404 error to the client and log one of the following HTTP statuses with a unique substatus that identifies the reason that the request was denied:
HTTP Substatus Description
404.13 Content Length Too Large
404.14 URL Too Long
404.15 Query String Too Long
These substatuses allow Web administrators to analyze their IIS logs and identify potential threats.
In addition, when an HTTP request exceeds the header limits that are defined in the <headerLimits> element, IIS 7 will return an HTTP 404 error to the client with the following substatus:
HTTP Substatus Description
404.10 Request Header Too Long
This is a bit of an old thread, but I thought I should add my experiences with this.
I faced the same problem with large file uploads and the Web API. A 404.13 is thrown before the request reaches a controller at all, so I had to find out where to jump in and handle this case.
My solution was the following web.config entries:
I handle the 404.13 by redirecting it to an MVC controller (it could just as well be a WebForms page), while regular 404 errors hit my 404 route. It's critical that responseMode="Redirect" is used for the 404.13 entry.
<httpErrors errorMode="Custom">
  <remove statusCode="404" subStatusCode="-1" />
  <error statusCode="404" subStatusCode="13" path="/errors/filesize" responseMode="Redirect" />
  <error statusCode="404" path="/errors/notfound" responseMode="ExecuteURL" />
</httpErrors>
Then, in my Errors controller, I have the following:
public ActionResult FileSize()
{
    // Rewrite the response as a 500 with a meaningful description,
    // instead of the bare 404.13 produced by request filtering.
    Response.StatusCode = 500;
    Response.StatusDescription = "Maximum file size exceeded.";
    Response.End();
    return null; // unreachable; Response.End() terminates the request
}
Again, this could be a regular webforms page.
To my knowledge, there is no way to gracefully handle exceeding the maxRequestLength setting. It can't even display a custom error page (since there is no corresponding HTTP code to respond with). The only way around this is to set maxRequestLength to some absurdly high number of kilobytes, for example 51200 (50 MB), and then check the ContentLength after the file has been uploaded (assuming the request didn't time out before the 90 seconds were up). At that point I can validate that the file is <= 5 MB and display a friendly error.
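A minimal sketch of that server-side check for a WebForms FileUpload control (the control names and the 5 MB limit are illustrative):

protected void UploadButton_Click(object sender, EventArgs e)
{
    const int maxBytes = 5 * 1024 * 1024; // 5 MB business limit

    if (!fileUpload.HasFile || fileUpload.PostedFile.ContentLength > maxBytes)
    {
        errorLabel.Text = "Please choose a file no larger than 5 MB.";
        return;
    }

    // System.IO.Path.GetFileName strips any client-side directory information.
    string safeName = System.IO.Path.GetFileName(fileUpload.FileName);
    fileUpload.SaveAs(Server.MapPath("~/App_Data/" + safeName));
}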
You can also try this link.
You could also try something like this:
// In Global.asax; Response.SubStatusCode requires the IIS7 integrated pipeline.
private void Application_EndRequest(object sender, EventArgs e)
{
    HttpRequest request = HttpContext.Current.Request;
    HttpResponse response = HttpContext.Current.Response;

    if ((request.HttpMethod == "POST") &&
        (response.StatusCode == 404 && response.SubStatusCode == 13))
    {
        // Clear the response headers but not the errors, and transfer
        // back to the requesting page so it can handle the error itself.
        response.ClearHeaders();
        HttpContext.Current.Server.Transfer(request.AppRelativeCurrentExecutionFilePath);
    }
}
I have found that this problem can also be caused on IIS7 (and presumably IIS6) when the URLScan tool is installed and running on the site.
When uploading a file to a website, I was receiving the message "File or directory not found. The resource you are looking for might have been removed, had its name changed, or is temporarily unavailable."
If the problem is being caused by URLScan, then trying to upload the large file while browsing the site on the hosting server itself will serve you a full ASP.NET error message that mentions URLScan, instead of a 404.
You can also check whether URLScan is running on your site in IIS7 by viewing the ISAPI Filters for the website in IIS; URLScan will be listed if it is in use.
This can be fixed by editing URLScan's ini file, located at "%WINDIR%\System32\Inetsrv\URLscan", and changing the MaxAllowedContentLength value, which is specified in bytes.
This may require an IIS restart to take effect, though it did not when I tried it myself on IIS7.
http://www.iis.net/learn/extensions/working-with-urlscan/urlscan-overview
http://www.iis.net/learn/extensions/working-with-urlscan/common-urlscan-scenarios
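For example, to allow uploads up to 300 MB, the relevant part of URLScan.ini would look something like this (assuming URLScan 3.x, where the request-limit keys live in a [RequestLimits] section; the value is in bytes):

[RequestLimits]
MaxAllowedContentLength=314572800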
You could configure the default error page in IIS itself.
The request limit is a setting in IIS. Open the Request Filtering section for your site in IIS and select Edit Feature Settings. For me it was that simple.
A more detailed How To from Microsoft.
https://learn.microsoft.com/en-us/iis/configuration/system.webserver/security/requestfiltering/#how-to-edit-the-request-filtering-feature-settings-and-request-limits
I just hit the same problem and made a change similar to pseudocoder's answer, but with a difference (I don't think mine was caused by the cache):
edit your Web.config --> maxRequestLength
<system.web>
  <httpRuntime maxRequestLength="1073741824" executionTimeout="3600" />
</system.web>
and edit this section:
<security>
  <requestFiltering>
    <requestLimits maxAllowedContentLength="1073741824" />
  </requestFiltering>
</security>
Set it up just like this and try it. (Note that maxRequestLength is measured in kilobytes, while maxAllowedContentLength is in bytes.)
The problem with 1 GB uploads is more browser-related. I have had heaps of trouble with it and tried a lot of solutions, but really the question to ask here is: what are the chances of this happening in the real world for your business needs? It may be enough to record it as a known issue in the business rules or non-functional requirements document.
We have a web.config in a physical subdirectory of a virtual directory that's under an application in an IIS site. Something like this:
Site
  App
    Web.config
    Virtual Dir
      Subdir
        Web.config
In the Web.config we put this configuration in system.web:
<webServices>
  <protocols>
    <add name="HttpPost" />
    <add name="HttpGet" />
  </protocols>
</webServices>
We enable both protocols for an ASMX in that subdirectory.
It all works fine for a while, and then those protocols just stop working. If we restart IIS, it starts working again.
As a workaround we added that configuration to the application-level Web.config, and then it works fine. But we would like to avoid changing the application Web.config and instead make the subdirectory Web.config work.
Any ideas why ASP.Net would just stop considering the subdirectory Web.config after a while?
We're hosting on Windows Server 2003, IIS 6, ASP.Net 2.0.
HTTP POST requests to the ASMX stop working. The error we're getting is System.InvalidOperationException with this message:
Request format is unrecognized for URL unexpectedly ending in '/blah'.
The stack trace is:
at System.Web.Services.Protocols.WebServiceHandlerFactory.CoreGetHandler(Type type, HttpContext context, HttpRequest request, HttpResponse response)
at System.Web.Services.Protocols.WebServiceHandlerFactory.GetHandler(HttpContext context, String verb, String url, String filePath)
at System.Web.HttpApplication.MapHttpHandler(HttpContext context, String requestType, VirtualPath path, String pathTranslated, Boolean useAppConfig)
at System.Web.HttpApplication.MapHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
It's unlikely that GET and POST are really being ignored intermittently.
What is your ASMX doing? If you're allocating and not freeing a resource - like a connection to another service or a WCF object - or entering a long-running task, IIS can stop responding to requests when those resources are exhausted. That would explain why restarting IIS fixes the problem.
Are you getting no response at all, an error 500, or what? Anything in the event log?
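To illustrate the kind of leak meant here (a hypothetical web method; the query and connectionString are illustrative), using blocks guarantee the resource is freed even when an exception is thrown:

// Needs System.Data.SqlClient; connectionString is defined elsewhere.
[WebMethod]
public string GetData(int id)
{
    // Without the using blocks, a failure below leaks the connection;
    // once the pool is exhausted, new requests hang until IIS is restarted.
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("SELECT Name FROM Items WHERE Id = @id", connection))
    {
        command.Parameters.AddWithValue("@id", id);
        connection.Open();
        return (string)command.ExecuteScalar();
    }
}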
I have several websites which get approximately 3000 pageviews in total per day, and I get this viewstate error roughly 5-10 times per day, caught in global.asax:
System.Web.HttpException: Unable to validate data.
   at System.Web.Configuration.MachineKeySection.GetDecodedData(Byte[] buf, Byte[] modifier, Int32 start, Int32 length, Int32& dataLength)
   at System.Web.UI.ObjectStateFormatter.Deserialize(String inputString)
I have tried:
hard-coding the machine key in web.config for all websites
hard-coding the machine key in machine.config
adding items to the pages section of the web.config for all websites.
Machine key looks like:
<machineKey validationKey="key goes here" decryptionKey="key goes here" validation="SHA1" decryption="AES" />
Pages section looks like:
<pages renderAllHiddenFieldsAtTopOfForm="true" validateRequest="false" enableEventValidation="false" viewStateEncryptionMode="Never" />
As best I can tell, the errors are not related to application pool recycling, as the pool is set to recycle only every 100,000 requests. I am not running a web farm or web garden. Quite often I get two or three of these errors in a row, as if a user is getting an error, going back, and then clicking the link again.
Anyone have any ideas?
I have seen "random" ViewState errors before, caused by slow internet connections. A slow connection could cause the page to be visibly rendered to the user even though it hadn't completely loaded; the user would then take action on the form, and thus "random" issues would occur.
See if you can correlate the exception timestamps to specific pages being requested in the IIS logs. You could then try to re-create the low bandwidth scenario with something like Firefox Throttle.
The similar question below confirms my experience and is probably what you are seeing as well:
ASP.NET: Unable to validate data