Increase Http Runtime MaxRequestLength from C# code - asp.net

How can I increase the HttpRuntime MaxRequestLength from my C# code? I can't do this in Web.config; my application is a tool that deploys web applications to IIS.

Take a look at http://bytes.com/topic/asp-net/answers/346534-how-i-can-get-httpruntime-section-page
It shows how to get access to an instance of HttpRuntimeSection; you can then modify its MaxRequestLength property.
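A minimal sketch of that approach, assuming the application pool identity has write permission on web.config (saving the change rewrites web.config and recycles the app domain); the 102400 value is just an example:
using System.Configuration;
using System.Web.Configuration;

Configuration config = WebConfigurationManager.OpenWebConfiguration("~");
HttpRuntimeSection section = (HttpRuntimeSection)config.GetSection("system.web/httpRuntime");
section.MaxRequestLength = 102400; // value is in kilobytes, so this is roughly 100 MB
config.Save();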

An alternative to increasing the max request length is to create an IHttpModule implementation. In the BeginRequest handler, grab the HttpWorkerRequest to process it entirely in your own code, rather than letting the default implementation handle it.
Here is a basic implementation that will handle any request posted to any file called "dropbox.aspx" (in any directory, whether it exists or not):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace Example
{
    public class FileUploadModule : IHttpModule
    {
        #region IHttpModule Members

        public void Dispose() { }

        public void Init(HttpApplication context)
        {
            context.BeginRequest += new EventHandler(context_BeginRequest);
        }

        #endregion

        void context_BeginRequest(object sender, EventArgs e)
        {
            HttpApplication application = (HttpApplication)sender;
            HttpContext context = application.Context;
            string filePath = context.Request.FilePath;
            string fileName = VirtualPathUtility.GetFileName(filePath);
            string fileExtension = VirtualPathUtility.GetExtension(filePath);
            if (fileName == "dropbox.aspx")
            {
                IServiceProvider provider = (IServiceProvider)context;
                HttpWorkerRequest wr = (HttpWorkerRequest)provider.GetService(typeof(HttpWorkerRequest));
                //HANDLE REQUEST HERE
                //Grab data from the HttpWorkerRequest instance, as reflected in the HttpRequest.GetEntireRawContent method.
                application.CompleteRequest(); //bypasses all other modules and ends the request immediately
            }
        }
    }
}
You could use something like that, for example, if you're implementing a file uploader, and you want to process the multi-part content stream as it's received, so you can perform authentication based on posted form fields and, more importantly, cancel the request on the server-side before you even receive any file data. That can save a lot of time if you can determine early on in the stream that the upload is not authorized or the file will be too big or exceed the user's disk quota for the dropbox.
This is impossible to do with the default implementation, because trying to access the Form property of the HttpRequest will cause it to try to receive the entire request stream, complete with MaxRequestLength checks. The HttpRequest object has a method called "GetEntireRawContent" which is called as soon as access to the content is needed. That method starts with the following code:
HttpRuntimeSection httpRuntime = RuntimeConfig.GetConfig(this._context).HttpRuntime;
int maxRequestLengthBytes = httpRuntime.MaxRequestLengthBytes;
if (this.ContentLength > maxRequestLengthBytes)
{
    if (!(this._wr is IIS7WorkerRequest))
    {
        this.Response.CloseConnectionAfterError();
    }
    throw new HttpException(SR.GetString("Max_request_length_exceeded"), null, 0xbbc);
}
The point is that you'll be skipping that code and implementing your own custom content length check instead. If you use Reflector to look at the rest of "GetEntireRawContent" to use it as a model implementation, you'll see that it basically does the following: calls GetPreloadedEntityBody, checks if there's more to load by calling IsEntireEntityBodyIsPreloaded, and finally loops through calls to ReadEntityBody to get the rest of the data. The data read by GetPreloadedEntityBody and ReadEntityBody are dumped into a specialized stream, which automatically uses a temporary file as a backing store once it crosses a size threshold.
A basic implementation would look like this:
MemoryStream request_content = new MemoryStream();
int bytesRemaining = wr.GetTotalEntityBodyLength() - wr.GetPreloadedEntityBodyLength();
byte[] preloaded_data = wr.GetPreloadedEntityBody();
if (preloaded_data != null)
    request_content.Write(preloaded_data, 0, preloaded_data.Length);
if (!wr.IsEntireEntityBodyIsPreloaded()) //not a typo, the framework really does use "Is" twice in the method name
{
    const int BUFFER_SIZE = 0x2000; //8K buffer or whatever
    byte[] buffer = new byte[BUFFER_SIZE];
    while (bytesRemaining > 0)
    {
        int bytesRead = wr.ReadEntityBody(buffer, Math.Min(bytesRemaining, BUFFER_SIZE)); //read another chunk
        if (bytesRead == 0) //failure to read or nothing left to read
            break;
        bytesRemaining -= bytesRead; //update the bytes remaining
        request_content.Write(buffer, 0, bytesRead); //write the chunk to the backing store (memory stream or whatever you want)
    }
}
At that point, you'll have your entire request in a MemoryStream. However, rather than downloading the entire request like that, what I've done is offload that "bytesRemaining" loop into a class with a "ReadEnough( int max_index )" method that is called on demand from a specialized MemoryStream that "loads enough" into the stream to cover whichever byte is being accessed.
Ultimately, that architecture allows me to send the request directly to a parser that reads from the memory stream, and the memory stream automatically loads more data from the worker request as needed. I've also implemented events so that as each element of the multi-part content stream is parsed, it fires events when each new part is identified and when each part is completely received.
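The "load on demand" wrapper might be sketched like this (this is not the author's actual class, just an illustration of the idea; a parser would call ReadEnough before reading past what has already been buffered):
class OnDemandRequestStream : MemoryStream
{
    private readonly HttpWorkerRequest _wr;
    private int _bytesRemaining;

    public OnDemandRequestStream(HttpWorkerRequest wr)
    {
        _wr = wr;
        _bytesRemaining = wr.GetTotalEntityBodyLength() - wr.GetPreloadedEntityBodyLength();
        byte[] preloaded = wr.GetPreloadedEntityBody();
        if (preloaded != null)
            Write(preloaded, 0, preloaded.Length);
        Position = 0;
    }

    // Pull chunks from the worker request until the buffer covers max_index
    // (or the request body is exhausted).
    public void ReadEnough(int max_index)
    {
        byte[] chunk = new byte[0x2000];
        while (Length <= max_index && _bytesRemaining > 0)
        {
            int read = _wr.ReadEntityBody(chunk, Math.Min(_bytesRemaining, chunk.Length));
            if (read == 0)
                break;
            _bytesRemaining -= read;
            long savedPosition = Position;
            Position = Length;        // append at the end of the buffer
            Write(chunk, 0, read);
            Position = savedPosition; // restore the caller's read position
        }
    }
}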

You can do that in the web.config
<httpRuntime maxRequestLength="11000" />
maxRequestLength is specified in kilobytes, so 11000 is roughly 11 MB.

Related

Uploading multiple HttpPostedFileBase using Parallel.ForEach breaking files

I have a form that uploads multiple files. My model has a List<HttpPostedFileBase> called SchemaFileBases, which is correctly bound. I need to upload these files to S3 and would like to do it in parallel. I'm unable to use async and await because this code is run from both ASP.Net and a queue-based application that currently doesn't have async/await support (working on it).
If I change the foreach below to Parallel.ForEach(this.SchemaFileBases, schemaFileBase => {... then I get some funkiness going on. The two files end up being mashed together: each file contains some of the other file's content after it's uploaded. AwsDocument is used elsewhere in parallel, so I don't think it has to do with that. Each AwsDocument has its own AmazonS3Client.
public override void UploadToS3(IMetadataParser parser)
{
    string hash;
    string key;
    foreach (var schemaFileBase in this.SchemaFileBases)
    {
        AwsDocument aws = new AwsDocument(AwsBucket.Received);
        hash = schemaFileBase.InputStream.Md5Hash().ToByteArray().ToHex();
        key = String.Format("{0}/{1}", this.S3Prefix, schemaFileBase.FileName);
        Stream inputStream = schemaFileBase.InputStream;
        aws.UploadToS3(key, inputStream, hash);
    }
}
My coworker suspects it's something to do with how the InputStream on the HttpPostedFileBase is implemented. Perhaps it is not thread-safe, and the streams are both reading from the original request at the same time? I can't imagine MS would do that, though.
Multi-threaded version:
public override void UploadToS3(IMetadataParser parser)
{
    Parallel.ForEach(this.SchemaFileBases, f =>
    {
        AwsDocument aws = new AwsDocument(AwsBucket.Received);
        string hash = f.InputStream.Md5Hash().ToByteArray().ToHex();
        string key = String.Format("{0}/{1}", this.S3Prefix, f.FileName);
        Stream inputStream = f.InputStream;
        aws.UploadToS3(key, inputStream, hash);
    });
}
The above is my attempt to multi-thread it; it does not work (the files get mixed up).
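If the cause is indeed that every HttpPostedFileBase.InputStream ultimately reads from the same underlying request stream, one workaround is to buffer each file sequentially and only parallelize the uploads. This is just a sketch under that assumption, reusing the question's AwsDocument class and its Md5Hash/ToByteArray/ToHex extension methods:
// requires System.Linq, System.IO and System.Threading.Tasks
public override void UploadToS3(IMetadataParser parser)
{
    // Copy each file into its own buffer first, so only one reader ever
    // touches the shared request stream, and it does so sequentially.
    var buffered = this.SchemaFileBases.Select(f =>
    {
        var ms = new MemoryStream();
        f.InputStream.CopyTo(ms);
        ms.Position = 0;
        return new { f.FileName, Stream = ms };
    }).ToList();

    // The buffers are independent, so uploading them in parallel is safe.
    Parallel.ForEach(buffered, b =>
    {
        AwsDocument aws = new AwsDocument(AwsBucket.Received);
        string hash = b.Stream.Md5Hash().ToByteArray().ToHex(); // question's extension methods
        b.Stream.Position = 0; // rewind after hashing before the upload reads it
        string key = String.Format("{0}/{1}", this.S3Prefix, b.FileName);
        aws.UploadToS3(key, b.Stream, hash);
    });
}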

Returning a filestream - how to know when it's done

I have a controller which has a function that will return a file. The file is generated on the server as a temp file and then streamed via a HttpResponseMessage. What I'd like to do, is delete the file after I've finished sending it (maybe in the future we might keep them for a little while in case the exact same request is made again). I have something like this:
[HttpGet]
public HttpResponseMessage GetReport()
{
    string fileName = //function that creates the file and returns the filename...
    HttpResponseMessage response = new HttpResponseMessage();
    response.Content = new StreamContent(new FileStream(fileName, FileMode.Open, FileAccess.Read));
    response.Content.Headers.ContentDisposition = new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment");
    response.Content.Headers.ContentDisposition.FileName = "test.docx";
    //File.Delete(fileName);
    return response;
}
I can't delete the file at the commented out point above because the file is in use at that point. So is there an event or something that will be fired once the stream has finished being sent so I can handle deleting?
I could, of course, just start a task to wait some (hopefully sufficiently long) period of time and then delete, but that seems a little hit-or-miss.
Because you mentioned keeping the files around for a while (potentially), you will need some kind of expiration architecture. Create a database table that tracks these temporary file system objects along with an expiration timestamp. Then, create a scheduled task using Windows Task Scheduler or a library like Quartz.NET to periodically query for expired objects and delete them.
I do this in my own projects for cleaning up files that were uploaded by the user but aren't necessarily used because the user canceled the encompassing process.
The tricky part is defining what constitutes a successful response. Is the response successful because the client received all the data and acted upon it? If so, then only the client has all the information necessary to determine if the data was received successfully. In this case, the client could perhaps tell the server that it (the client) received and acted upon the data. Then, the server could either delete the file immediately or mark it for expiration in the architecture I mentioned previously.
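As a rough illustration of the sweep itself (the TempFiles table and its Path/ExpiresUtc columns are hypothetical names, and the data access assumes something EF-like):
// Hypothetical cleanup job run by the scheduled task.
var expired = db.TempFiles.Where(t => t.ExpiresUtc < DateTime.UtcNow).ToList();
foreach (var entry in expired)
{
    if (File.Exists(entry.Path))
        File.Delete(entry.Path);
    db.TempFiles.Remove(entry);
}
db.SaveChanges();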
HttpResponseMessage is disposable, so my suggestion is to define your own class derived from HttpResponseMessage and override its Dispose(bool disposing) method to clean up your file.
class FileResponseMessage : HttpResponseMessage
{
    private readonly string fileName;

    public FileResponseMessage(string fileName)
    {
        this.fileName = fileName;
        this.Content = new StreamContent(new FileStream(fileName, FileMode.Open, FileAccess.Read));
        this.Content.Headers.ContentDisposition = new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment");
        this.Content.Headers.ContentDisposition.FileName = "test.docx";
    }

    protected override void Dispose(bool disposing)
    {
        base.Dispose(disposing); // disposes the StreamContent and releases the file handle
        if (disposing)
        {
            //your cleanup, e.g. File.Delete(this.fileName);
        }
    }
}
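Hypothetical usage from the question's controller (CreateReportFile stands in for the question's unnamed file-generating function); the Web API host disposes the response message once the body has been written to the client, which is when the cleanup runs:
[HttpGet]
public HttpResponseMessage GetReport()
{
    string fileName = CreateReportFile(); // hypothetical helper from the question
    return new FileResponseMessage(fileName);
}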

Receive byte array over HTTP in an ASP.NET 2.0 app

I'm trying to transfer a byte array to an ASP.NET 2.0 app in a lightweight manner without using SOAP. I decided to use a generic HTTP handler (.ashx) that interprets the HTTP request body as a Base64 string, decodes it to a byte array, and saves it to disk.
<%@ WebHandler Language="C#" Class="PdfPrintService" %>
using System;
using System.Web;

public class PdfPrintService : IHttpHandler {

    public void ProcessRequest(HttpContext context) {
        string body;
        using (System.IO.StreamReader reader =
            new System.IO.StreamReader(context.Request.InputStream))
        {
            body = reader.ReadToEnd();
        }
        byte[] bytes = System.Convert.FromBase64String(body);
        String filePath = System.IO.Path.GetTempFileName() + ".pdf";
        System.IO.File.WriteAllBytes(filePath, bytes);
        // Print file.
        XyzCompany.Printing.PrintUtility.PrintFile(filePath);
    }

    public bool IsReusable {
        get {
            return false;
        }
    }
}
The client application (an iOS app in my case) will simply have to encode the bytes as Base64 and post them to the URL of this generic handler (ashx).
I imagine there is a better, more orthodox way to do this. Any ideas are appreciated!
The thing that comes to mind is plain POST and GET requests handled through classes like HttpWebResponse: http://msdn.microsoft.com/en-us/library/system.net.httpwebresponse%28v=vs.71%29.aspx
You can have your iOS app POST to the ASP.NET app, which is set up to receive the POST and parse out the byte array you include. More or less, this is how data was sent across the internet before SOAP; all SOAP really is, is a schema for these types of requests.
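For the .NET side of such a raw POST, a sketch (the URL and file name are examples only) could send the bytes directly instead of Base64-encoding them, and the handler would then read them straight from Request.InputStream:
using System.IO;
using System.Net;

byte[] pdfBytes = File.ReadAllBytes("report.pdf");
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://server/PdfPrintService.ashx");
req.Method = "POST";
req.ContentType = "application/octet-stream";
req.ContentLength = pdfBytes.Length;
using (Stream requestStream = req.GetRequestStream())
{
    requestStream.Write(pdfBytes, 0, pdfBytes.Length); // raw body, no Base64
}
using (HttpWebResponse response = (HttpWebResponse)req.GetResponse())
{
    // nothing to read back in this sketch; disposing closes the connection
}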

Sending and receiving binary data in Servlets

I'm attempting to write a Java Servlet to receive binary data requests and reply to them, using HttpServletRequest.getInputStream() and HttpServletResponse.getOutputStream(). This is for a project in which a Silverlight client sends a request that this servlet responds to over an HTTP POST connection. For the time being, to test the servlet I'm implementing the client in Java, which I'm more familiar with than Silverlight.
The problem is that in my test project I send the data from a client servlet as a byte array and expect to receive a byte array of the same length -- only I don't, and instead I get a single byte. I'm therefore posting the relevant code snippets in the hope that you can point out where I'm going wrong and ideally provide relevant references to help me further.
So here goes.
The client servlet handles POST requests from a very simple HTML page with a form, which I use as a front-end. I'm not too worried about using JSP etc.; instead I'm focused on making the inter-servlet communication work.
// client HttpServlet invokes this method from doPost(request,response)
private void process(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    String firstName = (String) request.getParameter("firstname");
    String lastName = (String) request.getParameter("lastname");
    String xmlRequest = "<MyRequest><Person><Name Firstname=\"" + firstName + "\" Lastname=\"" + lastName + "\" /></Person></MyRequest>";
    OutputStream writer = null;
    InputStream reader = null;
    try {
        URL url = new URL("http://localhost:8080/project/Server");
        URLConnection conn = url.openConnection();
        conn.setDoInput(true);
        conn.setDoOutput(true);
        writer = conn.getOutputStream();
        byte[] baXml = xmlRequest.getBytes("UTF-8");
        writer.write(baXml, 0, baXml.length);
        writer.flush();
        // perhaps I should be waiting here? how?
        reader = conn.getInputStream();
        int available = reader.available();
        byte[] data = new byte[available];
        reader.read(data, 0, available);
        String xmlResponse = new String(data, "UTF-8");
        PrintWriter print = response.getWriter();
        print.write("<html><body>Response:<br/><pre>");
        print.write(xmlResponse);
        print.write("</pre></body></html>");
        print.close();
    } finally {
        if (writer != null)
            writer.close();
        if (reader != null)
            reader.close();
    }
}
The server servlet handles HTTP POST requests. For testing purposes it receives requests from the client servlet above, but in the future I intend to use it with clients written in other languages (specifically, Silverlight).
// server HttpServlet invokes this method from doPost(request,response)
private void process(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    ServletInputStream sis = null;
    try {
        sis = request.getInputStream();
        // maybe I should be using a BufferedInputStream
        // instead of the InputStream directly?
        int available = sis.available();
        byte[] input = new byte[available];
        int readBytes = sis.read(input, 0, available);
        if (readBytes != available) {
            throw new ServletException("Oops! readBytes!=availableBytes");
        }
        // I ONLY GET 1 BYTE OF DATA !!!
        // It's the first byte of the client message, a '<'.
        String msg = "Read " + readBytes + " bytes of "
                + available + " available from request InputStream.";
        System.err.println("Server.process(HttpServletRequest,HttpServletResponse): " + msg);
        String xmlReply = "<Reply><Message>" + msg + "</Message></Reply>";
        byte[] data = xmlReply.getBytes("UTF-8");
        ServletOutputStream sos = response.getOutputStream();
        sos.write(data, 0, data.length);
        sos.flush();
        sos.close();
    } finally {
        if (sis != null)
            sis.close();
    }
}
I have been sticking to byte arrays instead of using BufferedInputStreams so far because I haven't decided yet whether I'll use e.g. Base64-encoded strings to transmit the data or send the binary data as-is.
Thank you in advance.
To copy the input stream to the output stream, use the standard approach:
InputStream is = request.getInputStream();
OutputStream os = response.getOutputStream();
byte[] buf = new byte[1000];
for (int nChunk = is.read(buf); nChunk != -1; nChunk = is.read(buf))
{
    os.write(buf, 0, nChunk);
}
The one thing I can think of is that you are reading only request.getInputStream().available() bytes and then deciding that you have read everything. According to the documentation, available() returns the number of bytes that can be read without blocking, but I don't see any mention of this being guaranteed to be the entire content of the input stream, so I'm inclined to assume that no such guarantee is made.
I'm not sure how to best find out when there is no more data (maybe Content-Length in the request can help?) without risking blocking indefinitely at EOF, but I would try looping until having read all the data from the input stream. To test that theory, you could always scan the input for a known pattern that occurs further into the stream, maybe a > matching the initial < that you are getting.

Capturing SOAP requests to an ASP.NET ASMX web service

Consider the requirement to log incoming SOAP requests to an ASP.NET ASMX web service. The task is to capture the raw XML being sent to the web service.
The incoming message needs to be logged for debug inspection. The application already has its own logging library in use, so the ideal usage would be something like this:
//string or XML, it doesn't matter.
string incomingSoapRequest = GetSoapRequest();
Logger.LogMessage(incomingSoapRequest);
Are there any easy solutions to capture the raw XML of the incoming SOAP requests?
Which events would you handle to get access to this object and the relevant properties?
Is there any way IIS can capture the incoming request and push it to a log?
You can also implement this by placing code in Global.asax.cs:
protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Create byte array to hold request bytes
    byte[] inputStream = new byte[HttpContext.Current.Request.ContentLength];
    // Read entire request inputstream
    HttpContext.Current.Request.InputStream.Read(inputStream, 0, inputStream.Length);
    // Set stream back to beginning
    HttpContext.Current.Request.InputStream.Position = 0;
    // Get XML request
    string requestString = ASCIIEncoding.ASCII.GetString(inputStream);
}
I have a utility method in my web service that I use to capture the request when something happens that I'm not expecting, like an unhandled exception.
/// <summary>
/// Captures raw XML request and writes to FailedSubmission folder.
/// </summary>
internal static void CaptureRequest()
{
    const string procName = "CaptureRequest";
    try
    {
        log.WarnFormat("{0} - Writing XML request to FailedSubmission folder", procName);
        byte[] inputStream = new byte[HttpContext.Current.Request.ContentLength];
        //Get current stream position so we can set it back to that after logging
        Int64 currentStreamPosition = HttpContext.Current.Request.InputStream.Position;
        HttpContext.Current.Request.InputStream.Position = 0;
        HttpContext.Current.Request.InputStream.Read(inputStream, 0, HttpContext.Current.Request.ContentLength);
        //Set back stream position to original position
        HttpContext.Current.Request.InputStream.Position = currentStreamPosition;
        string xml = ASCIIEncoding.ASCII.GetString(inputStream);
        string fileName = Guid.NewGuid().ToString() + ".xml";
        log.WarnFormat("{0} - Request being written to filename: {1}", procName, fileName);
        File.WriteAllText(Configuration.FailedSubmissionsFolder + fileName, xml);
    }
    catch
    {
    }
}
Then in web.config I store several appSettings values that define at what level I want to capture requests.
<!-- true/false - If true will write to an XML file the raw request when any unhandled exception occurs -->
<add key="CaptureRequestOnUnhandledException" value="true"/>
<!-- true/false - If true will write to an XML file the raw request when any type of error is returned to the client-->
<add key="CaptureRequestOnAllFailures" value="false"/>
<!-- true/false - If true will write to an XML file the raw request for every request to the web service -->
<add key="CaptureAllRequests" value="false"/>
Then my Application_BeginRequest is modified like so. Note that Configuration is a static class I created to read properties from web.config and other places.
protected void Application_BeginRequest(object sender, EventArgs e)
{
    if (Configuration.CaptureAllRequests)
    {
        Utility.CaptureRequest();
    }
}
One way to capture the raw message is to use SoapExtensions.
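A rough sketch of that approach follows; the overrides are the standard System.Web.Services.Protocols.SoapExtension members (following the pattern of the documented trace-extension sample), Logger.LogMessage is the question's own logging call, and the extension still has to be registered under <soapExtensionTypes> in web.config.
using System;
using System.IO;
using System.Web.Services.Protocols;

public class LoggingExtension : SoapExtension
{
    private Stream _oldStream;
    private Stream _newStream;

    // ASP.NET hands us the original stream and uses whatever we return instead.
    public override Stream ChainStream(Stream stream)
    {
        _oldStream = stream;
        _newStream = new MemoryStream();
        return _newStream;
    }

    public override void ProcessMessage(SoapMessage message)
    {
        switch (message.Stage)
        {
            case SoapMessageStage.BeforeDeserialize:
                // Copy the raw request into our stream, log it, then rewind
                // so normal deserialization can continue.
                Copy(_oldStream, _newStream);
                _newStream.Position = 0;
                Logger.LogMessage(new StreamReader(_newStream).ReadToEnd());
                _newStream.Position = 0;
                break;
            case SoapMessageStage.AfterSerialize:
                // Push the serialized response back onto the network stream.
                _newStream.Position = 0;
                Copy(_newStream, _oldStream);
                break;
        }
    }

    private static void Copy(Stream from, Stream to)
    {
        TextReader reader = new StreamReader(from);
        TextWriter writer = new StreamWriter(to);
        writer.Write(reader.ReadToEnd());
        writer.Flush();
    }

    public override object GetInitializer(Type serviceType) { return null; }
    public override object GetInitializer(LogicalMethodInfo methodInfo, SoapExtensionAttribute attribute) { return null; }
    public override void Initialize(object initializer) { }
}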
An alternative to SoapExtensions is to implement IHttpModule and grab the input stream as it's coming in.
public class LogModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        context.BeginRequest += this.OnBegin;
    }

    private void OnBegin(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        HttpContext context = app.Context;
        byte[] buffer = new byte[context.Request.InputStream.Length];
        context.Request.InputStream.Read(buffer, 0, buffer.Length);
        context.Request.InputStream.Position = 0;
        string soapMessage = Encoding.ASCII.GetString(buffer);
        // Do something with soapMessage
    }

    public void Dispose()
    {
        // nothing to clean up
    }
}
You know that you don't actually need to create an HttpModule, right?
You can also read the contents of the Request.InputStream from within your asmx WebMethod.
Here is an article I wrote on this approach.
Code is as follows:
using System;
using System.Collections.Generic;
using System.Web;
using System.Xml;
using System.IO;
using System.Text;
using System.Web.Services;
using System.Web.Services.Protocols;

namespace SoapRequestEcho
{
    [WebService(
        Namespace = "http://soap.request.echo.com/",
        Name = "SoapRequestEcho")]
    public class EchoWebService : WebService
    {
        [WebMethod(Description = "Echo Soap Request")]
        public XmlDocument EchoSoapRequest(int input)
        {
            // Initialize soap request XML
            XmlDocument xmlSoapRequest = new XmlDocument();

            // Get raw request body
            Stream receiveStream = HttpContext.Current.Request.InputStream;

            // Move to beginning of input stream and read
            receiveStream.Position = 0;
            using (StreamReader readStream = new StreamReader(receiveStream, Encoding.UTF8))
            {
                // Load into XML document
                xmlSoapRequest.Load(readStream);
            }

            // Return
            return xmlSoapRequest;
        }
    }
}
There are no easy ways to do this. You will have to implement a SoapExtension. The example at the previous link shows an extension that can be used to log the data.
If you had been using WCF, then you could simply set the configuration to produce message logs.
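For reference, WCF message logging is enabled purely in configuration, roughly like this (the log path and the service/transport-level switches are examples):
<system.serviceModel>
  <diagnostics>
    <messageLogging logEntireMessage="true"
                    logMalformedMessages="false"
                    logMessagesAtServiceLevel="true"
                    logMessagesAtTransportLevel="false" />
  </diagnostics>
</system.serviceModel>
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel.MessageLogging">
      <listeners>
        <add name="messages"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="c:\logs\messages.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>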
According to Steven de Salas, you can use the Request.InputStream property within the webmethod. I have not tried this, but he says that it works.
I would want to test this with both http and https, and with and without other SoapExtensions running at the same time. These are things that might affect what kind of stream the InputStream is set to. Some streams cannot seek, for instance, which might leave you with a stream positioned after the end of the data, and which you cannot move to the beginning.
