ASP.NET HTTP 404 - File not found instead of MaxRequestLength exception

I have a file upload control on my web page. The maximum request length is set to 8 MB (maxRequestLength = 8192), and I also have server-side validation that throws an error if the file is larger than 4 MB. The limit in the config is 8 MB to give the user some leeway and to make the application easier to test.
If I upload a file that's 9 MB, an exception is thrown ("Maximum request length exceeded."), which is fine and works as expected. But when I try to upload a file that's 1 GB, I get an HTTP 404 - File not found instead. Can someone please explain why this happens and how I can get it to throw the maxRequestLength exception instead?
I'm using IIS 6.

I experienced this condition today (HTTP 404 on a large file upload with IIS 7), even though I thought I had made all the correct configuration settings. I wanted to allow uploads of up to 300 MB, so I made the following web.config settings in a sub-folder of the application:
<configuration>
  <system.web>
    <httpRuntime maxRequestLength="307200" />
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering>
        <requestLimits maxAllowedContentLength="314572800" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
This configuration worked in test, but when I copied the updated files (including the web.config) to the production server, I received the HTTP 404 error when uploading a 90 MB file. Smaller files under the application-wide limit of 30 MB were working fine, so I knew it was a request-size problem of some sort.
I figured there was a chance IIS had cached some application settings and just hadn't picked up the change, so I recycled the Application Pool, after which everything worked as expected.
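If you prefer to script that recycle rather than doing it in IIS Manager, here is a minimal, untested sketch using Microsoft.Web.Administration (the application pool name is hypothetical); it needs to run elevated:

using System;
using Microsoft.Web.Administration; // reference Microsoft.Web.Administration.dll

class RecyclePool
{
    static void Main()
    {
        // Recycle a single application pool so it picks up configuration changes.
        using (var serverManager = new ServerManager())
        {
            ApplicationPool pool = serverManager.ApplicationPools["MyAppPool"]; // hypothetical name
            if (pool != null)
            {
                pool.Recycle();
            }
        }
    }
}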

I feel none of the answers here explain why you get a 404; they just tell you the usual ways to fix the problem.
The 404 is not due to misconfiguration; it is intentional and documented behaviour:
When request filtering blocks an HTTP request because an HTTP request exceeds the request limits, IIS 7 will return an HTTP 404 error to the client and log one of the following HTTP statuses with a unique substatus that identifies the reason that the request was denied:
404.13 - Content Length Too Large
404.14 - URL Too Long
404.15 - Query String Too Long
These substatuses allow Web administrators to analyze their IIS logs and identify potential threats.
In addition, when an HTTP request exceeds the header limits that are defined in the <headerLimits> element, IIS 7 will return an HTTP 404 error to the client with the following substatus:
404.10 - Request Header Too Long

This is a bit of an old thread, but I thought I should add my experience with this.
I faced the same problem with large file uploads and Web API. A 404.13 is thrown before the request ever reaches a controller, so I had to find out where to jump in and handle this case.
My solution was the following web.config entries.
I handle the 404.13 by redirecting it to an MVC controller (it could just as well be a WebForms page), and regular 404 errors hit my 404 route. It's critical that responseMode="Redirect" is used for the 404.13 entry:
<httpErrors errorMode="Custom">
  <remove statusCode="404" subStatusCode="-1" />
  <error statusCode="404" subStatusCode="13" path="/errors/filesize" responseMode="Redirect" />
  <error statusCode="404" path="/errors/notfound" responseMode="ExecuteURL" />
</httpErrors>
Then, in my Errors controller, I have the following:
public ActionResult FileSize()
{
    // The redirected 404.13 lands here; turn it into a meaningful response for the client.
    Response.StatusCode = 500;
    Response.StatusDescription = "Maximum file size exceeded.";
    Response.End();
    return null;
}
Again, this could be a regular WebForms page.

To my knowledge, there is no way to gracefully handle exceeding IIS's maxRequestLength setting. It can't even display a custom error page (since there is no corresponding HTTP code to respond to). The only way around this is to set maxRequestLength to some absurdly high number of kilobytes, for example 51200 (50 MB), and then check the ContentLength after the file has been uploaded (assuming the request didn't time out before 90 seconds). At that point I can validate that the file is <= 5 MB and display a friendly error.
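As a rough illustration of that approach (WebForms, with hypothetical control IDs and a 5 MB limit), the post-upload check might look like this:

protected void btnUpload_Click(object sender, EventArgs e)
{
    const int maxBytes = 5 * 1024 * 1024; // the friendly limit enforced in code

    // maxRequestLength in web.config is set much higher, so the request actually
    // reaches this handler and we can fail gracefully ourselves.
    if (fileUpload.HasFile && fileUpload.PostedFile.ContentLength > maxBytes)
    {
        lblError.Text = "The file is too large. Please upload a file of 5 MB or less.";
        return;
    }

    // Within the limit: save it (assumes an ~/uploads folder exists).
    fileUpload.SaveAs(Server.MapPath("~/uploads/" +
        System.IO.Path.GetFileName(fileUpload.FileName)));
}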
You could also try something like this:
private void application_EndRequest(object sender, EventArgs e)
{
    HttpRequest request = HttpContext.Current.Request;
    HttpResponse response = HttpContext.Current.Response;

    if ((request.HttpMethod == "POST") &&
        (response.StatusCode == 404 && response.SubStatusCode == 13))
    {
        // Clear the response header but do not clear errors and transfer
        // back to the requesting page to handle the error
        response.ClearHeaders();
        HttpContext.Current.Server.Transfer(request.AppRelativeCurrentExecutionFilePath);
    }
}

I have found that this problem can also be caused on IIS 7 (and presumably IIS 6) when the URLScan tool is installed and running on the site.
When uploading the file to a website I was receiving the message "File or directory not found. The resource you are looking for might have been removed, had its name changed, or is temporarily unavailable."
If the problem is being caused by URLScan, then trying to upload the large file while browsing the site on the hosting server itself will give you a full ASP.NET error message that mentions URLScan, instead of the generic 404.
You can also check whether URLScan is running on your site in IIS 7 by viewing the ISAPI Filters for the website in IIS; URLScan will be listed if it is in use.
This can be fixed by editing the URLScan ini file, located at "%WINDIR%\System32\Inetsrv\URLscan", and changing the MaxAllowedContentLength setting.
MaxAllowedContentLength is specified in bytes.
This may require an IIS restart to take effect, though it did not when I tried it myself with IIS 7.
http://www.iis.net/learn/extensions/working-with-urlscan/urlscan-overview
http://www.iis.net/learn/extensions/working-with-urlscan/common-urlscan-scenarios

You could configure the default error page in IIS itself.

The request limit is a setting in IIS. Open the Request Filtering section for your site in IIS and select Edit Request Settings. For me it was that simple.
A more detailed how-to from Microsoft:
https://learn.microsoft.com/en-us/iis/configuration/system.webserver/security/requestfiltering/#how-to-edit-the-request-filtering-feature-settings-and-request-limits
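The same limit can also be set programmatically. Here is a rough, untested sketch using Microsoft.Web.Administration (the site name and the 300 MB limit are just examples); it needs administrative rights:

using System;
using Microsoft.Web.Administration; // reference Microsoft.Web.Administration.dll

class SetRequestLimit
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            // Open the web.config of the target site ("Default Web Site" is an example).
            Configuration config = serverManager.GetWebConfiguration("Default Web Site");

            ConfigurationSection requestFiltering =
                config.GetSection("system.webServer/security/requestFiltering");
            ConfigurationElement requestLimits =
                requestFiltering.GetChildElement("requestLimits");

            // maxAllowedContentLength is a uint, specified in bytes (here: 300 MB).
            requestLimits["maxAllowedContentLength"] = (uint)314572800;

            serverManager.CommitChanges();
        }
    }
}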

I just ran into the same problem. I made changes similar to pseudocoder's answer, but with a different result (I don't think it was the cache in my case):
Edit your web.config's maxRequestLength:
<system.web>
  <httpRuntime maxRequestLength="1073741824" executionTimeout="3600" />
</system.web>
and also edit this:
<security>
  <requestFiltering>
    <requestLimits maxAllowedContentLength="1073741824" />
  </requestFiltering>
</security>
Just like this, and try it.

The problem with 1 GB uploads is more browser-related. I have had heaps of trouble with it and tried a lot of solutions, but really the question to ask here is: what are the chances of this happening in the real world for your business needs? Maybe it should simply be recorded as a known issue in the business rules or non-functional requirements document.

Related

Deleting media using the WordPress REST API

I'm trying to delete media from the WordPress Library using the REST API with cookie authentication. I can create a file (POST) and retrieve (GET) the file contents, but DELETE does not work. I'm using IIS version 10.0.
Note: this code is run on the website's own domain, not from another domain.
Things I've tried:
Enabling WebDAV on the server
Used Basic WordPress authentication plugin
Here is the XMLHttpRequest that I'm using:
var apiCall = new XMLHttpRequest();
apiCall.onreadystatechange = function() {
...
};
apiCall.open("DELETE", wpApiSettings.root + "wp/v2/media/");
apiCall.setRequestHeader("X-WP-Nonce", wpApiSettings.nonce);
apiCall.send("2000");
The error I get back:
HTTP Error 401.0 - Unauthorized. You do not have permission to view this directory or page.
This error never occurs with GET or POST, only when doing the DELETE, which makes me think about the authentication within IIS. Maybe it's not even reaching the WordPress engine and IIS is intercepting the request and denying it, which I thought enabling WebDAV would fix, but sadly it did not.
First, a 401 error typically indicates the request is not authenticated. You have to supply credentials that match the authentication mode configured in IIS. If it requires basic credentials, you need to set an HTTP header like the one below:
xhr.setRequestHeader('Authorization', 'Basic ZWx1c3VhcmlvOnlsYWNsYXZl');
How to send a correct authorization header for basic authentication
In addition, to support the DELETE HTTP verb, add the code below to your web.config file:
<system.webServer>
  <validation validateIntegratedModeConfiguration="false" />
  <modules runAllManagedModulesForAllRequests="true">
    <remove name="WebDAVModule" /> <!-- ADD THIS -->
  </modules>
</system.webServer>
Here is a related discussion.
WebAPI Delete not working - 405 Method Not Allowed

SignalR connect and start cause 404 error

I could not find a specific answer to this when searching, so I'm posting my solution here.
We had been working fine with SignalR for a long time, then some users started getting 404 errors on the start and connect calls from AngularJS/jQuery to the server. Negotiate would still work fine, though, and return a 200 code.
It turns out that by default the server side (ASP.NET 4.6.1) imposes a limit on the size of a URL, and instead of returning a 414 (Request-URI Too Long) like you might expect, it appears to just truncate the URL at a length around 2,083 characters and still try to process it. I'm assuming the 404 is the SignalR hub responding, because the truncated string now has query string parameters that don't match what is expected. We were seeing it because we were passing a couple of custom values plus the security access token via query string parameters, which increased the size. The access token was passed via the query string because we wanted to use WebSockets, and headers aren't available for WebSockets. As you add more attributes and roles to your token it grows, which is what put us over the default size limit.
The fix is easy: just update your server's web.config to allow longer URLs.
<system.web>
  <httpRuntime targetFramework="4.5" maxRequestLength="30000000" maxUrlLength="40960" maxQueryStringLength="2097151" />
</system.web>
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="30000000" maxUrl="40960" maxQueryString="2097151" />
    </requestFiltering>
  </security>
</system.webServer>

IIS/ASP.NET: Issue with sending (POST) 1M bytes length requests

I encountered a weird case while debugging a Web API controller for saving files.
My controller has a method for posting data:
[HttpPost]
[HttpPut]
public async Task<HttpResponseMessage> Upload()
{
    if (!Request.Content.IsMimeMultipartContent())
    {
        throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
    }

    // Stream the multipart body to disk (App_Data here).
    var provider = new MultipartFormDataStreamProvider(
        System.Web.Hosting.HostingEnvironment.MapPath("~/App_Data"));
    await Request.Content.ReadAsMultipartAsync(provider);

    MultipartFileData file = provider.FileData.FirstOrDefault();
    // this is the uploaded file: file.LocalFileName

    return Request.CreateResponse(HttpStatusCode.OK);
}
Controller logic doesn't matter much here. The key point is that the controller expects multipart content.
Now, from the client, where I'm using the jquery.fileupload library, I send files via the POST method.
I set a breakpoint on the first line of the controller's Upload method and send a file from the client. The breakpoint is hit, and it is hit before the whole file has been uploaded to the server. That's expected behavior and everything is good.
Now the problem: I send a large file and the breakpoint is never hit. The file is being uploaded (POSTed), but the controller isn't being called!
Even worse, after the file is finally uploaded I get a 404 Not Found on the client.
My site is on localhost under IIS with ASP.NET Web API 5.1.2 (.NET 4.5).
I started varying the file size to find the point at which things work. It turns out to be 1,000,000,000 bytes. If a request's length (data plus headers) is at most 1,000,000,000 bytes, everything works: the breakpoint is hit and the file is uploaded. If the request is even one byte larger, nothing works: the breakpoint isn't hit, the file seems to be uploading (i.e. the data is being sent), but it's unclear where it ends up, as the controller is never called.
As my site is on localhost, I can't believe this is caused by a proxy server.
So my question is: what is going on? What can be wrong with POSTing requests of about 1,000,000,000 bytes?
P.S. I'm aware of the IIS and ASP.NET hard limits for request lengths (4 GB and 2 GB); they are not exceeded.
In IIS Manager, click on your site and go to Configuration Editor; you will need to alter maxRequestLength to a value that works for your requirements.
Edit:
The second place request limits are set is here:
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="1000000000" /> <!-- change this -->
    </requestFiltering>
  </security>
</system.webServer>
So I completely forgot about the settings in web.config. It turns out that 1,000,000,000 bytes was exactly the limit set there :(
<system.web>
  <httpRuntime maxRequestLength="1000000000" />
</system.web>
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="1000000000" />
    </requestFiltering>
  </security>
</system.webServer>
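If you raise these limits but still want to reject oversized uploads gracefully, one option (a sketch, with an example threshold; it only helps once the IIS/ASP.NET limits are high enough for the request to reach the controller at all) is to check the declared Content-Length before reading the multipart body:

[HttpPost]
public HttpResponseMessage Upload()
{
    const long maxBytes = 1000000000L; // example limit; keep it below maxAllowedContentLength

    // The Content-Length declared by the client (null if the header is missing).
    long? declaredLength = Request.Content.Headers.ContentLength;
    if (declaredLength.HasValue && declaredLength.Value > maxBytes)
    {
        return Request.CreateErrorResponse(
            HttpStatusCode.RequestEntityTooLarge,
            "The uploaded file exceeds the maximum allowed size.");
    }

    // ...read the multipart content as usual...
    return Request.CreateResponse(HttpStatusCode.OK);
}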

Intermittent ASP.net IIS8.5 uncatchable 500 internal-server-error on Azure cloud service

Let's start with a little background information. I am running a very simple ASP.NET MVC Azure cloud service (a web role, Windows Server 2012 R2 with IIS 8.5). This service receives statistics from a Flash client, which posts data roughly every 10 seconds (for a potentially very large number of clients), and from JavaScript. All the service contains is a single controller with two simple actions that take a bunch of parameters (representing the individual statistics, which are sent in various combinations). All the service does is set the CORS and cookie responses (the clients/JavaScript can be embedded on arbitrary domains), verify the integrity of the received data, and then store it in an Azure Table storage account.
To ensure our service operates optimally we use New Relic to track service performance, and to ensure that our data is accurate (i.e. we successfully record all received messages) we implemented a custom error-handling solution so we can fix any problems/bugs that might arise.
We load tested our service using JMeter and encountered no problems, but now that we have deployed to a live environment and our service is being used, we are starting to encounter occasional 500 Internal Server Errors (approx. 5% of requests). The big problem is that our own error-handling code is not detecting these errors; New Relic does report certain requests generating a 500 Internal Server Error, but with no further information such as a stack trace (sometimes with, sometimes without reported parameters).
Our custom error handling consists of an HTTP module which registers for both the AppDomain.CurrentDomain.UnhandledException and the context.Error events. In theory this should catch (and then log) any exceptions which are not already being caught (and logged) inside our own code. The relevant web.config sections are configured as follows:
<customErrors mode="On" redirectMode="ResponseRewrite" defaultRedirect="~/500.aspx">
  <error statusCode="404" redirect="~/404.aspx" />
  <error statusCode="500" redirect="~/500.aspx" />
</customErrors>
and
<httpErrors existingResponse="Replace">
  <clear />
  <error statusCode="404" path="404.html" responseMode="File" />
  <error statusCode="500" path="500.html" responseMode="File" />
</httpErrors>
<modules>
  <add type="namespace.UnhandledExceptionModule" name="UnhandledExceptionModule" preCondition="managedHandler" />
</modules>
However, this is not the case. I have tried turning on all kinds of logging, but the IIS logs are useless (they only show that a 500 response was returned, with no other useful information). The only useful information I have been able to gather is from the failed request traces, but I have not been able to determine the actual problem from that information (googling the error code or exception leads to nothing concrete). A screenshot of the relevant section of a failed trace can be found here:
http://i57.tinypic.com/20acrip.jpg
I also uploaded the complete trace here:
http://pastebin.com/fDt3thvr
Each failed request generates exactly the same log, so the errors we are seeing are consistently caused by the same problem. However, I am not able to determine what this problem is, let alone find a way of fixing it. Even though I have an error code and message, googling them only returns very old topics about issues that were fixed years ago.
It is pretty important for our business that these messages are recorded with a high degree of accuracy, but as it stands I have no further ideas on how to get better information about what is happening on these servers. We are also not able to replicate this behavior in a controlled environment.
Also, our error logging itself does work properly: "normal" errors are logged as expected and we have verified that the HTTP module actually works.
Edit:
The controller pseudo code is as follows:
[HttpPost]
public ActionResult Method(...)
{
    // Set cookie and CORS response, check for early out.
    if (earlyOut)
        return 404;

    // Store received values.
    azuretable.ExecuteAsync(TableOperation.InsertOrMerge(...));

    return 200;
}
Edit 2:
I have spent some time analyzing failed request traces, and they mostly seem to be generated by users on IE9. I actually managed to reproduce the error twice by quickly leaving the page while it was loading; the problem seems to be caused by aborted Ajax calls (which we make a lot of during page load). Why would an aborted call cause a 500 error, though, instead of being handled neatly?
Do the cookies exceed 4 KB? The same thing happened to us on IIS: requests sometimes ended up with a 500 Internal Server Error, and the errors were virtually untraceable.
I reproduced the issue by simply inflating a cookie over the 4093-byte limit.
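As a quick way to check whether oversized cookies are in play, a purely diagnostic sketch is to log the size of the raw Cookie header in Global.asax:

protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Flag requests whose raw Cookie header approaches the ~4 KB limit.
    // (Length is in characters, which matches bytes for ASCII cookie values.)
    string cookieHeader = Request.Headers["Cookie"];
    if (cookieHeader != null && cookieHeader.Length > 4000)
    {
        System.Diagnostics.Trace.TraceWarning(
            "Large cookie header ({0} chars) on {1}",
            cookieHeader.Length,
            Request.RawUrl);
    }
}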
I think it is because you are not awaiting your async method call, or you are not returning an awaitable response. I had exactly this issue when I forgot to do that.
await azuretable.ExecuteAsync(TableOperation.InsertOrMerge(...));
Then you should be good. I think you'll find that the async call is finishing after your action has already returned to the caller.
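Applied to the pseudo code in the question, the action would become async and await the table operation. A sketch, keeping the question's placeholders (earlyOut, azuretable, and the entity are not real names here) and assuming MVC 4 or later:

[HttpPost]
public async Task<ActionResult> Method(/* ...statistics parameters... */)
{
    // Set cookie and CORS response, check for early out (as in the pseudo code).
    if (earlyOut)
        return new HttpStatusCodeResult(404);

    // Awaiting keeps the request alive until the storage call completes, so any
    // exception surfaces here, where the error-logging module can still see it.
    await azuretable.ExecuteAsync(TableOperation.InsertOrMerge(entity));

    return new HttpStatusCodeResult(200);
}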

IIS7 Overrides customErrors when setting Response.StatusCode?

Having a weird problem here. Everybody knows that if you use web.config's customErrors section to make a custom error page, that you should set your Response.StatusCode to whatever is appropriate. For example, if I make a custom 404 page and name it 404.aspx, I could put <% Response.StatusCode = 404 %> in the contents in order to make it have a true 404 status header.
Follow me so far? Good. Now try to do this on IIS7. I cannot get it to work, period. If Response.StatusCode is set in the custom error page, IIS7 seems to override the custom error page completely and shows its own status page (if you have one configured).
Has anyone else seen this behavior and also maybe know how to work around it? It was working under IIS6, so I don't know why things changed.
Note: This is not the same as the issue in ASP.NET Custom 404 Returning 200 OK Instead of 404 Not Found
Set existingResponse to PassThrough in the system.webServer/httpErrors section:
<system.webServer>
  <httpErrors existingResponse="PassThrough" />
</system.webServer>
The default value of the existingResponse property is Auto:
Auto tells custom error module to do the right thing. Actual error text seen by clients will be affected depending on value of fTrySkipCustomErrors returned in IHttpResponse::GetStatus call. When fTrySkipCustomErrors is set to true, custom error module will let the response pass through but if it is set to false, custom errors module replaces text with its own text.
More information: What to expect from IIS7 custom error module
The easiest way to make the behavior consistent is to clear the error and set Response.TrySkipIisCustomErrors to true. This will override the IIS global error page handling from within your page or from the global error handler in Application_Error.
Server.ClearError();
Response.TrySkipIisCustomErrors = true;
Typically you should do this in your Application_Error handler that handles all errors that your application error handlers are not catching.
More detailed info can be found in this blog post:
http://www.west-wind.com/weblog/posts/745738.aspx
Solved: It turns out that "Detailed Errors" needs to be on in order for IIS7 to "passthrough" any error page you might have. See http://forums.iis.net/t/1146653.aspx
I'm not sure if this is similar in nature or not, but I solved an issue that sounds similar on the surface and here's how I handled it.
First of all, the default value for existingResponse (Auto) was the correct answer in my case, since I have a custom 404, 400 and 500 (I could create others, but these three will suffice for what I'm doing). Here are the relevant sections that helped me.
From web.config:
<customErrors mode="Off" />
And
<httpErrors errorMode="Custom" existingResponse="Auto" defaultResponseMode="ExecuteURL">
  <clear />
  <error statusCode="404" path="/errors/404.aspx" responseMode="ExecuteURL" />
  <error statusCode="500" path="/errors/500.aspx" responseMode="ExecuteURL" />
  <error statusCode="400" path="/errors/400.aspx" responseMode="ExecuteURL" />
</httpErrors>
From there, I added this into Application_Error on global.asax:
Response.TrySkipIisCustomErrors = True
On each of my custom error pages I had to include the correct response status code. In my case, I'm using a custom 404 to send users to different sections of my site, so I don't want a 404 status code returned unless it actually is a dead page.
Anyway, that's how I did it. Hope that helps someone.
This issue has been a major headache. None of the suggestions previously mentioned alone solved it for me, so I'm including my solution. For the record, our environment/platform uses:
.NET Framework 4
MVC 3
IIS8 (workstation) and IIS7 (web server)
Specifically, I was trying to get an HTTP 404 response that would redirect the user to our custom 404 page (via the Web.config settings).
First, my code had to throw an HttpException. Returning a NotFoundResult from the controller did not achieve the results I was after.
throw new HttpException(404, "There is no class with that subject");
Then I had to configure both the customErrors and httpErrors nodes in the Web.config.
<customErrors mode="On" defaultRedirect="/classes/Error.aspx">
  <error statusCode="404" redirect="/classes/404.html" />
</customErrors>
...
<httpErrors errorMode="Custom" existingResponse="Auto" defaultResponseMode="ExecuteURL">
  <clear />
  <error statusCode="404" path="/classes/404.aspx" responseMode="ExecuteURL" />
</httpErrors>
Note that I left existingResponse as Auto, which is different from the solution #sefl provided.
The customErrors settings appeared to be necessary for handling my explicitly thrown HttpException, while the httpErrors node handled URLs that fell outside of the route patterns specified in Global.asax.cs.
P.S. With these settings I did not need to set Response.TrySkipIisCustomErrors
TrySkipIisCustomErrors is only part of the puzzle. If you use custom error pages but also want to deliver some RESTful content based on 4xx statuses, you have a problem. Setting web.config's httpErrors.existingResponse to "Auto" does not work, because .NET seems to always deliver some page content to IIS, so "Auto" causes all (or at least some) custom error pages not to be used. Using "Replace" won't work either, because the response will contain your HTTP status code but its content will be empty or filled with the custom error page. And "PassThrough" effectively turns custom error pages off altogether, so it can't be used.
So if you want to bypass the custom error pages in some cases (by bypassing I mean returning a 4xx status with some content), you need one additional step: clear the error:
void Application_Error(object sender, EventArgs e)
{
    var httpException = Context.Server.GetLastError() as HttpException;
    var statusCode = httpException != null
        ? httpException.GetHttpCode()
        : (int)HttpStatusCode.InternalServerError;

    Context.Server.ClearError();
    Context.Response.StatusCode = statusCode;
}
So if you want to use a REST response (e.g. 400 - Bad Request) and send some content with it, you just need to set TrySkipIisCustomErrors somewhere in the action and set existingResponse to "Auto" in the httpErrors section of web.config. Now:
when there's no error (the action returns 4xx or 5xx) and some content is returned, the custom error page is not used and the content is passed to the client;
when there's an error (an exception is thrown), the content returned by the error handlers is removed, so the custom error page is used.
If you want to return a status with empty content from your action, it will be treated as an empty response and the custom error page will be shown, so there's some room to improve this code.
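For illustration, a minimal sketch of such an action (MVC; the action and field names are made up) could look like this:

public ActionResult Validate(string input)
{
    if (string.IsNullOrEmpty(input))
    {
        // Bypass the IIS custom error page for this response only, so the
        // JSON body actually reaches the client along with the 400 status.
        Response.TrySkipIisCustomErrors = true;
        Response.StatusCode = 400;
        return Json(new { error = "input is required" }, JsonRequestBehavior.AllowGet);
    }

    return Json(new { ok = true }, JsonRequestBehavior.AllowGet);
}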
By default IIS 7 uses detailed custom error messages so I would assume that Response.StatusCode will equal 404.XX rather than just 404.
You can configure IIS7 to use the simpler error message codes, or modify your code to handle the more detailed error messages that IIS7 offers.
More info available here:
http://blogs.iis.net/rakkimk/archive/2008/10/03/iis7-enabling-custom-error-pages.aspx
Further investigation revealed I had it the wrong way around: detailed messages aren't on by default, but perhaps they've been turned on on your box if you're seeing the different error messages you've mentioned.
