Setting Content-Length header in ASP.NET 5 response

TL;DR: I have an ASP.NET 5 (MVC 6) application and am just trying to set an HTTP Content-Length header in order to avoid a chunked response.
To my surprise, this turns out to be quite a tricky task in ASP.NET 5. Everything runs on Kestrel, which since ASP.NET 5 Beta7 automatically writes chunked responses when no content length is specified for the response.
There is a similar question here, but the difference is that the OP just wants to count the response size, while I need to ensure that the Content-Length header is sent with the response. I have tried many things so far, and the only one that works is writing custom middleware:
public class WebApiMiddleware {
    readonly RequestDelegate _next;

    public WebApiMiddleware(RequestDelegate next) {
        _next = next;
    }

    public async Task Invoke(HttpContext context) {
        using (var buffer = new MemoryStream()) {
            var response = context.Response;
            var bodyStream = response.Body;
            // Swap in a MemoryStream so the entire response gets buffered.
            response.Body = buffer;
            await _next(context);
            // The full body is now known, so its length can be sent as a header.
            response.Headers.Add("Content-Length", new[] { buffer.Length.ToString() });
            buffer.Position = 0;
            await buffer.CopyToAsync(bodyStream);
            // Restore the original stream so later components see the real body.
            response.Body = bodyStream;
        }
    }
}
However, this is highly inefficient, since we use additional memory while processing every request. Using a custom stream that just wraps the context.Response.Body stream and additionally counts bytes doesn't work: the Microsoft team states that once the amount of data they're willing to buffer has been written, it goes out on the wire and they don't store it any more. So, at best, I can add Content-Length to the last chunk, which is not the desired behavior; I want to avoid chunking completely.
Any help is highly appreciated!
P.S. I am aware that chunking is a regular feature of HTTP/1.1, but in my scenario it degrades performance, so I want to avoid it without forcing the clients to send HTTP/1.0 requests.

You need to send the headers before the response body, so you have to buffer the whole response before sending it in order to determine its length.
As you said, the context.Response.Body stream doesn't store much before sending, so the overhead is not that big.
There is middleware that does a similar thing: https://github.com/aspnet/BasicMiddleware/blob/master/src/Microsoft.AspNetCore.Buffering/ResponseBufferingMiddleware.cs It is very similar to what you wrote, but supports additional features such as IHttpBufferingFeature, which allows other middleware to control buffering.
The check for bufferStream.CanSeek is there to make sure nobody has already wrapped the response stream in a buffering stream, to avoid double buffering.
For your second question: middleware is the right place for this.
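For reference, a minimal sketch of wiring that ResponseBufferingMiddleware into the pipeline. This assumes the Microsoft.AspNetCore.Buffering package and its UseResponseBuffering extension method, so treat the exact names as assumptions rather than a verified API:
public class Startup {
    public void Configure(IApplicationBuilder app) {
        // Buffer the response so a Content-Length header can be sent
        // instead of chunked transfer encoding; register it early so it
        // wraps everything downstream.
        app.UseResponseBuffering();
        app.UseMvc();
    }
}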

Related

Downsides of streaming large JSON or HTML content to a browser in ASP.NET MVC

I am working with a legacy ASP.NET Framework MVC application that is experiencing memory issues, such as occasional bursts of OutOfMemoryExceptions across a variety of operations. The application frequently operates on large lists of objects (sometimes tens to hundreds of megabytes) and then serializes them to JSON to return to the client. We are not totally sure of the source of the OutOfMemoryExceptions, but believe a likely candidate is memory fragmentation caused by too many large objects going onto the Large Object Heap (LOH).
We are thinking a quick win is to refactor some of the controller endpoints to serialize their JSON content using a stream writer (as outlined in the JSON.NET Documentation), and to stream the content back to the browser. This won't eliminate the memory load of the data lists prior to serialization, but in theory should cut down the overall amount of data going on to the LOH.
The code is written to send the results in chunks of less than 85 KB, keeping each buffer below the LOH threshold:
public async Task<ActionResult> MyControllerMethod()
{
    var data = GetData();
    // Disable output buffering so content streams to the client as it is written.
    Response.BufferOutput = false;
    Response.ContentType = "application/json";
    var serializer = JsonSerializer.Create();
    // A buffer of 84,999 bytes keeps each chunk just below the 85,000-byte LOH threshold.
    using (var sw = new StreamWriter(Response.OutputStream, Encoding.UTF8, 84999))
    {
        sw.AutoFlush = false;
        serializer.Serialize(sw, data);
        await sw.FlushAsync();
    }
    return new EmptyResult();
}
I am aware of a few downsides with this approach, but don't consider them showstoppers:
More complex to implement a unit test due to the 'EmptyResult' returned by the controller.
I have read there is a small overhead due to a call to PInvoke whenever data is flushed. (In practice I haven't noticed this).
Cannot post-process the content using e.g. an HttpHandler.
Cannot set a Content-Length header, which may be useful to the client in some cases.
What other downsides or potential problems exist with this approach?

ASP.NET Core, overkilling Task.Run()?

Let's say we have an ASP.NET Core application receiving a string as a payload, with a size on the order of a couple of megabytes. First method implementation:
[HttpPost("updateinfos")]
public async Task UpdateInfos()
{
var len = (int)this.Request.ContentLength;
byte[] b = new byte[len];
await this.Request.Body.ReadAsync(b,0,len);
var content = Encoding.UTF8.GetString(b);
.....
}
The body is read with ReadAsync, which is good: it is I/O on a socket, so making it asynchronous is essentially free given the nature of the call. But look at what follows: the GetString() method is purely CPU-bound and blocks with linear complexity, and that affects performance somewhat, since other clients wait while my bytes are converted into a string. I think the way to avoid this is to run GetString() on the thread pool, like this:
[HttpPost("updateinfos")]
public async Task UpdateInfos()
{
var len = (int)this.Request.ContentLength;
byte[] b = new byte[len];
await this.Request.Body.ReadAsync(b,0,len);
var content = await Task.Run(()=>ASCIIEncoding.UTF8.GetString(b));
.....
}
Please don't mind the return type right now; more has to be done in the function.
So the question: is the second approach overkill? If so, where is the boundary between what can be run as blocking code and what has to be moved to another thread?
You are very much abusing Task.Run there. Task.Run is used to off-load work onto a different thread and asynchronously wait for it to complete, so every Task.Run call causes thread context switches. That is usually a very bad idea for things that do not need to run on their own thread.
Things like ASCIIEncoding.UTF8.GetString(b) are really fast. The overhead involved in creating and managing a thread that encapsulates this is much larger than just executing this directly on the same thread.
You should generally use Task.Run only to off-load (primarily synchronous) work that can benefit from running on its own thread. Or in cases, where you have work that would take a bit longer but block the current execution.
In your example, that is simply not the case, so you should just call those methods synchronously.
If you really want to reduce the work for that code, you should look at how to work with streams properly. What your code does is read the request body stream completely and only then operate on it (trying to translate it into a string).
Instead of separating the reading of the binary stream from translating it into a string, you could read the stream as a string directly using a StreamReader. That way you can read the request body directly, and asynchronously. Depending on what you actually do with it afterwards, that may be a lot more efficient.
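A minimal sketch of that StreamReader approach, reusing the action from the question:
[HttpPost("updateinfos")]
public async Task UpdateInfos()
{
    // Read and decode the body in one asynchronous pass;
    // no intermediate byte[] and no Task.Run required.
    using (var reader = new StreamReader(this.Request.Body, Encoding.UTF8))
    {
        var content = await reader.ReadToEndAsync();
        // ... work with content
    }
}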

TCP server: will payload delivery happen in pieces?

I am writing a TCP server app in Dart. When doing similar things in other languages, I've noticed that even if I send a byte buffer of size X, my onData() receive function will probably be called multiple times with smaller buffers that add up to X. If I'm not mistaken, this happens because of Flow Control. So usually my payload's header contains the payload size, and I use that to wait until I've read the full payload before processing it.
Do I have to handle this manually in Dart too? So far, I have not had issues and I've received the entire payload in a single call to onData(), but I'd rather ask.
I didn't have issues either, but you could start processing the data while the response is not yet fully received.
If the response is huge, all of the data would otherwise need to be buffered, so you wouldn't be able to receive data bigger than your available RAM; downloading a movie, for example, wouldn't be possible that way.
Yes. While I'm not sure of the exact size at which the data for a request gets split up, there may be times when you have to completely drain the stream before accessing the data sent via POST or another method (that is, the body of the HTTP request). See the Creating a server section of the Dart Tutorial; in particular, you can see how the stream is drained under Handling POST requests. Normally, rather than writing back the pieces as the example shows, they are added directly to a buffer as follows:
var buff = [];
req.listen(buff.addAll,
    onDone: () {
      print('Received: ${String.fromCharCodes(buff)}');
    });
See the HttpRequest class documentation for more information.
As an alternative, you can use the http_server package, which will automatically drain the stream for you and handle the data load properly depending on the headers passed with the request. It does this by applying a stream transformer to the incoming HttpRequest stream, converting each request to an HttpRequestBody. See below for an example; for more details see the HttpBodyHandler API.
HttpServer.bind(...).then((server) {
  server.transform(new HttpBodyHandler())
      .listen((HttpRequestBody body) {
    // Each request is now an HttpRequestBody,
    // which has already drained the stream.
    print(body.type);
    print(body.body);
    body.request.response
      ..statusCode = HttpStatus.OK
      ..writeln('Got it!')
      ..close();
  });
});

Read Request Body in ASP.NET

How does one read the request body in ASP.NET? I'm using the REST Client add-on for Firefox to form a GET request for a resource on a site I'm hosting locally, and in the Request Body I'm just putting the string "test" to try to read it on the server.
In the server code (which is a very simple MVC action) I have this:
var reader = new StreamReader(Request.InputStream);
var inputString = reader.ReadToEnd();
But when I debug into it, inputString is always empty. I'm not sure how else (such as in Firebug) to confirm that the request body is indeed being sent properly; I guess I'm just assuming that the add-on is doing that correctly. Maybe I'm reading the value incorrectly?
Maybe I'm misremembering my schooling, but I think GET requests don't actually have a body. This page states:
The HTML specifications technically define the difference between "GET" and "POST" so that the former means that form data is to be encoded (by a browser) into a URL, while the latter means that the form data is to appear within a message body.
So maybe you're doing things correctly, but you have to POST data in order to have a message body?
Update
In response to your comment, the most "correct" RESTful way would be to send each of the values as its own parameter:
site.com/MyController/MyAction?id=1&id=2&id=3...
Then your action will auto-bind these if you give it an array parameter by the same name:
public ActionResult MyAction(int[] id) {...}
Or if you're a masochist you can maybe try pulling the values out of Request.QueryString one at a time.
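If you do go that route, a minimal sketch (classic MVC, using System.Linq):
// Pull the repeated query-string values out manually.
string[] raw = Request.QueryString.GetValues("id");
int[] ids = raw == null ? new int[0] : raw.Select(int.Parse).ToArray();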
I was recently reminded of this old question, and wanted to add another answer for completeness based on more recent implementations in my own work.
For reference, I've blogged on the subject recently.
Essentially, the heart of this question was, "How can I pass larger and more complex search criteria to a resource to GET a filtered list of objects?" And it ended up boiling down to two choices:
A bunch of GET query string parameters
A POST with a DTO in the request body
The first option isn't ideal, because implementation is ugly and the URL will likely exceed a maximum length at some point. The second option, while functional, just didn't sit right with me in a "RESTful" sense. After all, I'm GETting data, right?
However, keep in mind that I'm not just GETting data. I'm creating a list of objects. Each object already exists, but the list itself doesn't. It's a brand new thing, created by issuing search/filter criteria to the complete repository of objects on the server. (After all, remember that a collection of objects is still, itself, an object.)
It's a purely semantic difference, but a decidedly important one. Because, at its simplest, it means I can comfortably use POST to issue these search criteria to the server. The response is data which I receive, so I'm "getting" data. But I'm not "GETting" data in the sense that I'm actually performing an act of creation, creating a new instance of a list of objects which happens to be composed of pre-existing elements.
I'll fully admit that the limitation was never technical, it was just semantic. It just never "sat right" with me. A non-technical problem demands a non-technical solution, in this case a semantic one. Looking at the problem from a slightly different semantic viewpoint resulted in a much cleaner solution, which happened to be the solution I ended up using in the first place.
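To make that concrete, here is a minimal sketch of the POST-with-DTO option; the criteria type, action name, and lookup call are all hypothetical:
// Hypothetical DTO carrying the search/filter criteria in the POST body.
public class SearchCriteria
{
    public int[] Ids { get; set; }
    public string NameContains { get; set; }
}

[HttpPost]
public ActionResult Search(SearchCriteria criteria)
{
    // Model binding fills the DTO from the request body; the action
    // "creates" and returns the new list of matching objects.
    var results = FindMatching(criteria); // hypothetical lookup
    return Json(results);
}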
Aside from the GET/POST issue, I did discover that you need to set the Request.InputStream position back to the start, thanks to this answer I found.
Specifically the comment
Request.InputStream // make sure to reset the Position after reading or later reads may fail
Which I translated into
Request.InputStream.Seek(0, SeekOrigin.Begin);
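Putting the rewind and the read together, a minimal sketch for a classic ASP.NET MVC action:
// Rewind the request body before (re)reading it.
Request.InputStream.Seek(0, SeekOrigin.Begin);
using (var reader = new StreamReader(Request.InputStream))
{
    var body = reader.ReadToEnd();
}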
I would try using HttpClient (available via NuGet) for doing this type of thing. It's so much easier than the System.Net objects.
Reading directly from Request.InputStream is dangerous, because a re-read can return nothing even though the data exists. This has been verified in practice.
Reliable reading is performed as follows:
/*Returns a string representing the content of the body
of the HTTP request.*/
public static string GetFromBodyString(this HttpRequestBase request)
{
    string result = string.Empty;
    if (request == null || request.InputStream == null)
        return result;
    request.InputStream.Position = 0;
    /*Copy the body into an in-memory stream so the original
    stream is preserved and the body of the current HTTP request
    can be read as many times as required.*/
    using (MemoryStream memoryStream = new MemoryStream())
    {
        request.InputStream.CopyToMemoryStream(memoryStream);
        // Rewind before reading; otherwise a non-seekable source would
        // leave the position at the end and nothing would be read.
        memoryStream.Position = 0;
        using (StreamReader streamReader = new StreamReader(memoryStream))
        {
            result = streamReader.ReadToEnd();
        }
    }
    return result;
}

/*Copies bytes from the given stream and writes
them to the given MemoryStream.*/
public static void CopyToMemoryStream(this Stream source, MemoryStream destination)
{
    if (source.CanSeek)
    {
        int pos = (int)destination.Position;
        int length = (int)(source.Length - source.Position) + pos;
        destination.SetLength(length);
        // Read directly into the MemoryStream's backing buffer.
        while (pos < length)
            pos += source.Read(destination.GetBuffer(), pos, length - pos);
    }
    else
        source.CopyTo((Stream)destination);
}

ASP.NET web services using reference parameters in webmethod

I have an issue when I try to retrieve info through a webmethod. I am using a proxy to call a web service; in that proxy I have an operation which returns data using 'out' parameters.
The server executes the operation successfully, giving back the parameters properly instanced (I have also checked the SOAP return message with a traffic analyzer, and it is OK), but when I ask the proxy for those parameters, I only obtain null values.
Here is some code info:
// This is the call to the web service using the proxy
// (t is the proxy and get_capabilities is the webmethod).
public trf_capabilities get_capabilities() {
    trf_capabilities trfcap = new trf_capabilities();
    trfcap.protocol_list = t.get_capabilities(0, out trfcap.pause, out trfcap.maxfiles,
        out trfcap.maxsize, out trfcap.encrypt, out trfcap.authenticate,
        out trfcap.integritycheck, out trfcap.hashtype, out trfcap.multipath,
        out trfcap.profile_list);
    return trfcap;
}
// This is the webmethod definition.
[System.Web.Services.Protocols.SoapDocumentMethodAttribute("iTransfer-get_capabilities",
    /*RequestElementName="elementoVacio_",*/ RequestNamespace="",
    ResponseElementName="trf_capabilitiesPar", ResponseNamespace="",
    Use=System.Web.Services.Description.SoapBindingUse.Literal,
    ParameterStyle=System.Web.Services.Protocols.SoapParameterStyle.Wrapped)]
[return: System.Xml.Serialization.XmlElementAttribute("protocol_list", Form=System.Xml.Schema.XmlSchemaForm.Unqualified)]
public protocolType[] get_capabilities(
    [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] int vacio,
    [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] out bool pause,
    [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] out uint maxfiles,
    [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] out uint maxsize,
    [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] out bool encrypt,
    [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] out bool authenticate,
    [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] out bool integritycheck,
    [System.Xml.Serialization.XmlElementAttribute("hash_type", Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] out hash_typeType[] hash_type,
    [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] out bool multipath,
    [System.Xml.Serialization.XmlElementAttribute("profile_list", Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] out profile_listType[] profile_list) {
    object[] results = this.Invoke("get_capabilities", new object[] { vacio });
    pause = ((bool)(results[1]));
    maxfiles = ((uint)(results[2]));
    maxsize = ((uint)(results[3]));
    encrypt = ((bool)(results[4]));
    authenticate = ((bool)(results[5]));
    integritycheck = ((bool)(results[6]));
    hash_type = ((hash_typeType[])(results[7]));
    multipath = ((bool)(results[8]));
    profile_list = ((profile_listType[])(results[9]));
    return ((protocolType[])(results[0]));
}
As you can see, I am using the 'out' keyword in both the call and the handler method, but it seems that is not enough to get the correct behaviour.
And finally, here is the SOAP message intercepted with the traffic analyzer:
Content-Type: text/xml; charset=UTF-8
Server: SOAPStandaloneServer
Content-Length: 584
Connection: close
<E:Envelope xmlns:E="http://schemas.xmlsoap.org/soap/envelope/"
            xmlns:A="http://schemas.xmlsoap.org/soap/encoding/"
            xmlns:s="http://www.w3.org/2001/XMLSchema-instance"
            xmlns:y="http://www.w3.org/2001/XMLSchema">
  <E:Body>
    <ns1:get_capabilitiesResponse xmlns:ns1="">
      <ns1:pause>true</ns1:pause>
      <ns1:maxfiles>5</ns1:maxfiles>
      <ns1:maxsize>0</ns1:maxsize>
      <ns1:encrypt>true</ns1:encrypt>
      <ns1:authenticate>true</ns1:authenticate>
      <ns1:integritycheck>true</ns1:integritycheck>
      <ns1:multipath>true</ns1:multipath>
    </ns1:get_capabilitiesResponse>
  </E:Body>
</E:Envelope>
Any ideas?
I think you're on the right track with the [decoration] and serializing the answers. The arrays in there seem a bit tricky; do you have serialization routines for the elements in them?
Then again, having that many output parameters seems overwhelming. I would probably have created a "ServiceResponse" struct and added all the params as properties on it, as sketched below.
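A minimal sketch of what such a type might look like, reusing the parameter types from the proxy above (the type and property names are illustrative):
// Hypothetical response type replacing the long list of out parameters.
public class ServiceResponse
{
    public protocolType[] ProtocolList { get; set; }
    public bool Pause { get; set; }
    public uint MaxFiles { get; set; }
    public uint MaxSize { get; set; }
    public bool Encrypt { get; set; }
    public bool Authenticate { get; set; }
    public bool IntegrityCheck { get; set; }
    public hash_typeType[] HashType { get; set; }
    public bool Multipath { get; set; }
    public profile_listType[] ProfileList { get; set; }
}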
EDIT: Next step, if the response seems ok but the proxy has problems deserializing it, I would suggest (of course) delving deeper into the proxy. Is the proxy generated or have you written it manually? Try to step through it and see what it tries to do with the parameters it's been given. Often I hack about with web services till my eyes bleed, only to discover that the deserialization spec was obsolete.
I found something interesting about all of this. I have been checking the headers of the two types of web methods I have (the one written in C++ that I have to use, and the test one I developed in C#). I realised that, for out parameters, .NET adds some kind of wrapping.
Here is the MSDN explanation:
The XML portion of the SOAP response encapsulates the out parameters for the Web service method, including the result, inside an element. The name of the encapsulating element, by default, is the name of the Web service method with Response appended to it.
Here is the link
It seems you have to use that wrapper in order to get 'out' reference parameters working.
