Best way to stream files in ASP.NET - asp.net

What's the best way to stream files using ASP.NET?
There appear to be various methods for this, and I'm currently using the Response.TransmitFile() method inside an HTTP handler, which sends the file to the browser directly. This is used for various things, including sending FLVs from outside the webroot to an embedded Flash video player.
However, this doesn't seem like a reliable method. In particular, there's a strange problem with Internet Explorer (7), where the browser just hangs after a video or two have been viewed. Clicking on links and so on has no effect, and the only way to get the site working again is to close the browser and re-open it.
This also occurs in other browsers, but much less frequently. Based on some basic testing, I suspect this has something to do with the way files are being streamed... perhaps the connection isn't being closed properly, or something along those lines.
After trying a few different things, I've found that the following method works for me:
Response.WriteFile(path);
Response.Flush();
Response.Close();
Response.End();
This gets around the problem mentioned above, and viewing videos no longer causes Internet Explorer to hang.
However, my understanding is that Response.WriteFile() loads the file into memory first, and given that some files being streamed could potentially be quite large, this doesn't seem like an ideal solution.
I'm interested in hearing how other developers are streaming large files in ASP.NET, and in particular, streaming FLV video files.

I would take things outside of the "aspx" pipeline. In particular, I would write a raw handler (ashx, or mapped via config) that does the minimum work and simply writes to the response in chunks. The handler would accept input from the query-string/form as normal, locate the object to stream, and stream the data (using a moderately sized local buffer in a loop). A simple (incomplete) example is shown below:
public void ProcessRequest(HttpContext context) {
    // read input etc
    context.Response.Buffer = false;
    context.Response.ContentType = "text/plain";
    string path = @"c:\somefile.txt";
    FileInfo file = new FileInfo(path);
    int len = (int)file.Length, bytes;
    context.Response.AppendHeader("content-length", len.ToString());
    byte[] buffer = new byte[1024];
    Stream outStream = context.Response.OutputStream;
    using (Stream stream = File.OpenRead(path)) {
        while (len > 0 && (bytes = stream.Read(buffer, 0, buffer.Length)) > 0)
        {
            outStream.Write(buffer, 0, bytes);
            len -= bytes;
        }
    }
}

Take a look at the following article, Tracking and Resuming Large File Downloads in ASP.NET, which goes into more depth than just opening a stream and chucking out all the bits.
The HTTP protocol supports ranged byte requests and resumable downloads, and many streaming clients (like video players or Adobe's PDF reader) can and will request content in chunks, saving bandwidth and giving your users a better experience.
Not trivial, but it's time well spent.
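For illustration (this isn't the article's code), a handler can honor a simple "Range: bytes=start-" request by seeking into the file and replying with 206 Partial Content. A minimal sketch, assuming a single open-ended range and skipping all validation:
public void ProcessRequest(HttpContext context)
{
    string path = @"c:\somefile.flv"; // hypothetical file
    long length = new FileInfo(path).Length;
    long start = 0;

    // Naive parsing: only handles "bytes=NNN-"; real code must also cope with
    // "NNN-MMM", suffix ranges, multiple ranges and invalid values.
    string rangeHeader = context.Request.Headers["Range"];
    if (!string.IsNullOrEmpty(rangeHeader) && rangeHeader.StartsWith("bytes="))
    {
        long.TryParse(rangeHeader.Substring(6).Split('-')[0], out start);
        context.Response.StatusCode = 206; // Partial Content
        context.Response.AppendHeader("Content-Range",
            string.Format("bytes {0}-{1}/{2}", start, length - 1, length));
    }

    context.Response.Buffer = false;
    context.Response.AppendHeader("Accept-Ranges", "bytes");
    context.Response.AppendHeader("Content-Length", (length - start).ToString());

    byte[] buffer = new byte[8192];
    int read;
    using (FileStream fs = File.OpenRead(path))
    {
        fs.Seek(start, SeekOrigin.Begin);
        while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            context.Response.OutputStream.Write(buffer, 0, read);
        }
    }
}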

Try opening the file as a stream, then using Response.OutputStream.Write(). For example:
Edit: My bad, I forgot that Write takes a byte buffer. Fixed
byte[] buffer = new byte[1 << 16]; // 64 KB
int bytesRead = 0;
using (var file = File.OpenRead(path))
{
    while ((bytesRead = file.Read(buffer, 0, buffer.Length)) != 0)
    {
        Response.OutputStream.Write(buffer, 0, bytesRead);
    }
}
Response.Flush();
Response.Close();
Response.End();
Edit 2: Did you try this? It should work.

After trying lots of different combinations, including the code posted in the various answers, it seems that setting Response.Buffer = true before calling TransmitFile did the trick, and the web application is now a lot more responsive in Internet Explorer.
In this particular case, the SWF extension is also mapped to ASP.NET, and we're using a custom handler in our web application to read the files from disk and then send them to the browser using Response.TransmitFile(). We've got a flash-based video player to play the video files, which is also an SWF, and I think having all of this activity go through the handler without buffering is what may have been causing strange things to happen in IE.
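For reference, the handler ended up shaped roughly like this (the query-string parameter, file location and content type here are illustrative, not our exact code):
public void ProcessRequest(HttpContext context)
{
    // Hypothetical mapping from the query string to a file outside the webroot.
    string fileName = Path.GetFileName(context.Request.QueryString["file"]);
    string path = Path.Combine(@"D:\media", fileName);

    context.Response.Buffer = true;               // buffering on is what fixed the IE hangs for us
    context.Response.ContentType = "video/x-flv"; // or application/x-shockwave-flash for the SWFs
    context.Response.TransmitFile(path);
    context.Response.End();
}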

Related

Apache commons fileupload timeout only with Firefox

I use the Apache Commons FileUpload 1.4 library in my Java project.
I have an HTML part with a classic form with a file input and some hidden fields.
I have a problem uploading files of roughly more than 500 KB, but only with Firefox >= 52.
It works well with files of 10 MB in Chrome or Internet Explorer.
But with Firefox, I get a timeout after waiting several minutes after submitting the form.
After some debugging, I see that the code responsible for the timeout is:
List<FileItem> items = (new ServletFileUpload(new DiskFileItemFactory())).parseRequest(request);
The part that causes the wait is "parseRequest".
I tried to inspect the content of the request with the debugger in IntelliJ, but there is no way to copy the entire value of this request object in raw format.
It works in these cases:
- Firefox: version <= 52, or file size < 500 KB (approximately; it's not very precise)
- Internet Explorer
- Chrome
There is no file size limit; it seems to depend on the request size, because the request-parsing part is what takes too much time...
I captured the HTTP request with a Firefox extension in two cases.
One generated by uploading a 3 MB file, which doesn't work (the request file is huge, 3x the size of the uploaded file):
https://code.empreintesduweb.com/13561.html
One generated by uploading a 200 KB file, which works (the request file is small):
https://code.empreintesduweb.com/13560.html
In fact, the main difference is that in Chrome or IE, I don't have the raw content of the uploaded file in the request headers:
The part with:
obj
stream
....
endstream
endobj
only appears with Firefox...
You can try setting the maximum file size; maybe the file size exceeds the maximum threshold. According to the documentation:
Uploaded items should be retained in memory as long as they are reasonably small.
Larger items should be written to a temporary file on disk.
Very large upload requests should not be permitted.
The built-in defaults for the maximum size of an item to be retained in memory, the maximum permitted size of an upload
request, and the location of temporary files are acceptable.
Try the following :
protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
try {
// Set factory constraints
DiskFileItemFactory factory = new DiskFileItemFactory();
factory.setSizeThreshold(yourMaxMemorySize);
ServletContext servletContext = this.getServletConfig().getServletContext();
File repository = (File) servletContext.getAttribute("javax.servlet.context.tempdir");
factory.setRepository(repository);
List<FileItem> items = new ServletFileUpload(factory).parseRequest(request);
for (FileItem item : items) {
if (item.isFormField()) {
// Process regular form field (input type="text|radio|checkbox|etc", select, etc).
String fieldName = item.getFieldName();
String fieldValue = item.getString();
// ... (do your job here)
} else {
// Process form file field (input type="file").
String fieldName = item.getFieldName();
String fileName = FilenameUtils.getName(item.getName());
InputStream fileContent = item.getInputStream();
// ... (do your job here)
}
}
} catch (FileUploadException e) {
throw new ServletException("Cannot parse multipart request.", e);
}
// ...
}
Here, we are providing a temp location for the file since the file is large.
A few things that are worth trying here:
Set the encoding explicitly: https://stackoverflow.com/a/10488411/4279120
Decompose your call, iterate over the items, and add a try/catch, e.g.: https://www.programcreek.com/java-api-examples/?api=org.apache.commons.fileupload.FileItemIterator
Take a look at MultipartConfig; it provides attributes such as maxFileSize and maxRequestSize (see: https://www.codejava.net/java-ee/servlet/java-file-upload-example-with-servlet-30-api#maxFileSize%28%29)
Manually define the headers of your request if you can. Adding "X-File-Name" and "X-File-Size" can also help, although this is a little old: AJAX File Upload with XMLHttpRequest
We may also be able to help you better if you provide some more information, such as the versions of Apache / Java / servlet container, and a bit more code (especially where the request is defined).
Some resources that could be helpful:
XMLHttpRequest
Sending_files_using_a_FormData_object
How to set a header for a HTTP GET request, and trigger file download?
Try setting the session timeout using the setMaxInactiveInterval method:
request.getSession().setMaxInactiveInterval(1200);
The parameter specifies the time, in seconds, between client requests before
the servlet container will invalidate this session. An interval value
of zero or less indicates that the session should never time out.
Thanks for all your answers.
Finally, I managed to resolve this issue, but in fact... not really.
I noticed that there were some specific things in my form.
I had two inputs: one standard file input, and another that received the file content encoded in base64 by some weird JS before any upload.
So the request contained the raw content of the file and also the file in base64. Why?! I don't know.
But I deleted all of this and created a new, simple and clean form with a standard file input.
I used the streaming API from ServletFileUpload, and it works, taking only a few seconds even for big files.
So I don't understand everything (why the problem only occurred in some browsers, for example), but I found a solution ;)
Thank you!

Most Space-Efficient Way to Store a Byte Array in a Database Table - ASP.NET

Right now we have a database table (SQL Server 2008 R2) that stores an uploaded file (PDF, DOC, TXT, etc.) in an image type column. A user uploads this file from an ASP.NET application. My project is to get a handle on the size at which this table is growing, and I've come up with a couple of questions along the way.
On the database side, I've discovered the image column type is supposedly somewhat deprecated. Will I gain any benefits by switching over to varbinary(max), or should I say varbinary(5767168) because that is my file size cap, or might I as well just leave it as an image type as far as space-efficiency is concerned?
On the application side, I want to compress the byte array. Microsoft's built-in GZip sometimes made the file bigger instead of smaller. I switched over to SharpZipLib, which is much better, but I still occasionally run into the same problem. Is there a way to find out the average file compression savings before I implement it on a wide scale? I'm having a hard time finding out what the underlying algorithm is that they use.
Would it be worth writing a Huffman code algorithm of my own, or will that present the same problem where there is occasionally a larger compressed file than original file?
For reference, in case it matters, here's the code in my app:
using ICSharpCode.SharpZipLib.GZip;

private static byte[] Compress(byte[] data)
{
    MemoryStream output = new MemoryStream();
    using (GZipOutputStream gzip = new GZipOutputStream(output))
    {
        gzip.IsStreamOwner = false;
        gzip.Write(data, 0, data.Length);
        gzip.Close();
    }
    return output.ToArray();
}

private static byte[] Decompress(byte[] data)
{
    MemoryStream output = new MemoryStream();
    MemoryStream input = new MemoryStream();
    input.Write(data, 0, data.Length);
    input.Position = 0;
    using (GZipInputStream gzip = new GZipInputStream(input))
    {
        byte[] buff = new byte[64];
        int read = gzip.Read(buff, 0, buff.Length);
        while (read > 0)
        {
            output.Write(buff, 0, read);
            read = gzip.Read(buff, 0, buff.Length);
        }
        gzip.Close();
    }
    return output.ToArray();
}
Thanks in advance for any help. :)
That's not a byte array, that's a BLOB. Ten years ago you would have used the IMAGE datatype;
these days it's more efficient to use VARBINARY(MAX).
I really recommend that people use FILESTREAM for VarBinary(Max), as it makes backing up the database (without the blobs) quite easy.
Keep in mind that using the native formats (without compression) will allow full-text searches, which is pretty incredible if you think about it. You have to install an iFilter from Adobe for searching inside PDFs, but it's a killer feature I can't live without.
I hate to be a jerk and answer my own question, but I thought I'd summarize my findings into a complete answer for anyone else looking to space-efficiently store file/image data within a database:
* Using varbinary(MAX) versus Image?
Many reasons for using varbinary(MAX), but top among them is that Image is deprecated and in a future version of SQL Server it will be removed altogether. Not starting any new projects with it just nips a future problem in the bud.
According to the info in this question: SQL Server table structure for storing a large number of images, varbinary(MAX) has more operations available to be used on it.
Varbinary(MAX) is easy to stream from a .NET application by using an SQL Parameter. Negative one is for 'MAX' length. Like so:
SQLCommand1.Parameters.Add("@binaryValue", SqlDbType.VarBinary, -1).Value = compressedBytes;
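Fleshed out a little, and assuming System.Data.SqlClient, the insert looks roughly like this (the Documents table and its columns are made up for the example):
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "INSERT INTO Documents (FileName, FileData) VALUES (@fileName, @binaryValue)", connection))
{
    command.Parameters.Add("@fileName", SqlDbType.NVarChar, 260).Value = fileName;
    // -1 length means varbinary(MAX); the provider sends the whole array without truncating it.
    command.Parameters.Add("@binaryValue", SqlDbType.VarBinary, -1).Value = compressedBytes;
    connection.Open();
    command.ExecuteNonQuery();
}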
* What compression algorithm to use?
I'm really not much closer to a decent answer on this one. I used ICSharpCode.SharpZipLib.GZip and found it had better performance than the built-in zipping functions, simply by running it on a bunch of files and comparing the results.
My results:
I reduced my total file size by about 20%. Unfortunately, a lot of the files I had were PDFs which don't compress that well, but there was still some benefit. Not much luck (obviously) with file types that were already compressed.
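If you want a rough estimate of the savings before rolling it out, one approach (a sketch, not exactly what I ran) is to compress a sample of existing files and compare totals, falling back to the original bytes whenever compression grows the file:
// Compress() is the SharpZipLib method from the question; the sample folder is hypothetical.
long originalTotal = 0, storedTotal = 0;
foreach (string file in Directory.GetFiles(@"C:\UploadSamples"))
{
    byte[] data = File.ReadAllBytes(file);
    byte[] compressed = Compress(data);
    originalTotal += data.Length;
    // Store whichever is smaller, since compression occasionally grows the file.
    storedTotal += Math.Min(data.Length, compressed.Length);
}
Console.WriteLine("Average savings: {0:P1}", 1.0 - (double)storedTotal / originalTotal);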

Streaming a file in Liferay Portlet

I have written downloading a file in a simple manner:
@ResourceMapping(value = "content")
public void download(ResourceRequest request, ResourceResponse response) {
    //...
    SerializableInputStream serializableInputStream = someService.getSerializableInputStream(id_of_some_file);
    response.addProperty(HttpHeaders.CACHE_CONTROL, "max-age=3600, must-revalidate");
    response.setContentType(contentType);
    response.addProperty(HttpHeaders.CONTENT_TYPE, contentType);
    response.addProperty(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename*=UTF-8''"
        + URLEncoder.encode(fileName, "UTF-8"));
    OutputStream outputStream = response.getPortletOutputStream();
    byte[] parcel = new byte[4096];
    int read;
    while ((read = serializableInputStream.read(parcel)) > 0)
        outputStream.write(parcel, 0, read);
    outputStream.flush();
    serializableInputStream.close();
    outputStream.close();
    //...
}
The SerializableInputStream is described here - JavaDocs. It allows an InputStream to be serialized and, for instance, passed over remoting.
I read from the input and write to the output, not all bytes at once. But unfortunately the portlet isn't "streaming" the contents - the file (e.g. an image) is sent to the browser only after the entire input stream has been read - this is what it looks like. I see the file being read from the database (from live logs), but I don't see any "growing" image on the screen.
What am I doing wrong? Is it possible to really stream a file in Liferay 6.0.6 and Spring Portlet MVC?
Where are you doing this? I fear that you're doing this instead of rendering your portlet's HTML (e.g. render phase). Typically the portlet content is embedded in an HTML page, thus you need the resource phase, which (roughly) behaves like a servlet.
Also, the code you give does not match the actual question you ask: You use a comment //read from input stream (file), write file to os and ask what to do differently in order to not have the full content in memory.
As the comment does not have anything in memory and you could loop through reading from the input file while writing to the output stream: What's the underlying question? Do you have problems with implementing download-streaming in a portal environment or difficulties (i.e. using too much memory) reading from a file while writing to a stream?
Edit: Thanks for clarifying. Have you tried to flush the stream earlier? You can do that whenever you want - e.g. every loop (though that might be a bit too much). Also, keep in mind that the browser as well as the file itself must handle it in a way that you expect: If an image is not encoded "incrementally" a browser might not show it that way.
Have you tried this with huge files as well? It might be that the automatic flushing is just not triggered because your files are too small.
Also, I think that filename*=UTF-8'' looks strange. It might be valid encoding, but I've never seen it.

Stream AVI file from memory using IIS/Asp.Net

I have some AVI files on disk but they are encrypted. I'm wondering if there is a way I can decrypt them and stream them to the browser (using MemoryStream or something similar) without having to write any files?
I know there is Windows Media Services but I'm using a Vista machine and Windows Media Services will only install in Windows Server 2003 and 2008.
Is there a way to accomplish this without too much trouble or is Media Services/Windows Server the only way to go? And if there is, would I use something like a custom IHttpHandler (.ashx file)?
Edit:
I have decided to use a custom IHttpHandler. What basic code would I need to have the video play?
I wouldn't want to use a MemoryStream for video. Assuming you can create a CryptoStream (found in the System.Security.Cryptography namespace) over the encrypted AVI file, you should be able to just pump a Read from that to a Write on the Response.OutputStream in an IHttpHandler. Something like:
byte[] buffer = new byte[65536]; // 64 KB; adjust the buffer size as you prefer.
CryptoStream inStream = YourFunctionToDecryptAVI(aviFilePath);
int bytesRead = inStream.Read(buffer, 0, buffer.Length);
while (bytesRead != 0)
{
    context.Response.OutputStream.Write(buffer, 0, bytesRead);
    bytesRead = inStream.Read(buffer, 0, buffer.Length);
    if (!context.Response.IsClientConnected) break;
}
context.Response.Close(); // see edit note.
Make sure you turn off response buffer and specify a content type.
Edit:
Ordinarily I hate calling Close; it seems so draconian. However, whilst chunked encoding shouldn't require it, in the case of streamed video it may be that the client doesn't like it. Also, with large data transfers, closing the connection is not really a big deal.
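YourFunctionToDecryptAVI above is just a placeholder. Assuming the AVIs were encrypted with AES and you hold the key and IV yourself, a minimal sketch of it might look like:
// Hypothetical helper: wraps the encrypted AVI in a decrypting CryptoStream.
private static CryptoStream YourFunctionToDecryptAVI(string path)
{
    SymmetricAlgorithm aes = Aes.Create();
    aes.Key = LoadAviKey();   // hypothetical key retrieval
    aes.IV = LoadAviIV();     // hypothetical IV retrieval

    FileStream encrypted = File.OpenRead(path);
    return new CryptoStream(encrypted, aes.CreateDecryptor(), CryptoStreamMode.Read);
}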

Streaming large file uploads to ASP.NET MVC

For an application I'm working on, I need to allow the user to upload very large files--i.e., potentially many gigabytes--via our website. Unfortunately, ASP.NET MVC appears to load the entire request into RAM before beginning to service it--not exactly ideal for such an application. Notably, trying to circumvent the issue via code such as the following:
if (request.Method == "POST")
{
    request.ContentLength = clientRequest.InputStream.Length;
    var rgbBody = new byte[32768];
    using (var requestStream = request.GetRequestStream())
    {
        int cbRead;
        while ((cbRead = clientRequest.InputStream.Read(rgbBody, 0, rgbBody.Length)) > 0)
        {
            fileStream.Write(rgbBody, 0, cbRead);
        }
    }
}
fails to circumvent the buffer-the-request-into-RAM mentality. Is there an easy way to work around this behavior?
It turns out that my initial code was basically correct; the only change required was to change
request.ContentLength = clientRequest.InputStream.Length;
to
request.ContentLength = clientRequest.ContentLength;
The former streams in the entire request to determine the content length; the latter merely checks the Content-Length header, which only requires that the headers have been sent in full. This allows IIS to begin streaming the request almost immediately, which completely eliminates the original problem.
Sure, you can do this. See RESTful file uploads with HttpWebRequest and IHttpHandler. I have been using this method for a few years and have a site that has been tested with files of at least several gigabytes. Essentially, you want to create your own IHttpHandler, which is easier than it sounds.
In a nutshell, you create a class that implements the IHttpHandler interface, meaning you have to support the IsReusable property and the ProcessRequest method. On top of that, there is a minor change to your web.config, and it works like a charm. At this stage in the life cycle of the request, the entire file being uploaded does not get loaded into memory, so it neatly steps around out-of-memory issues.
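As a rough skeleton (the namespace and class name line up with the web.config snippet below; the body is only illustrative):
using System.Web;

namespace TestUploadService
{
    public class FileUploadHandler : IHttpHandler
    {
        public bool IsReusable
        {
            get { return true; }
        }

        public void ProcessRequest(HttpContext context)
        {
            // Receive the upload here; the snippet further down shows saving
            // context.Request.Files to disk via HttpPostedFile.SaveAs.
            context.Response.ContentType = "text/plain";
            context.Response.Write("Upload complete.");
        }
    }
}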
Note that in the web.config,
<httpHandlers>
<add verb="*" path="DocumentUploadService.upl" validate="false" type="TestUploadService.FileUploadHandler, TestUploadService"/>
</httpHandlers>
the file referenced, DocumentUploadService.upl, doesn't actually exist. That is just there to give an alternate extension so that the request is not intercepted by the standard handler. You point your file upload form to that path, but then your FileUploadHandler class kicks in and actually receives the file.
Update: Actually, the code I use is different from that article, and I think I stumbled on the reason it works. I use the HttpPostedFile class, in which "Files are uploaded in MIME multipart/form-data format. By default, all requests, including form fields and uploaded files, larger than 256 KB are buffered to disk, rather than held in server memory."
if (context.Request.Files.Count > 0)
{
    string tempFile = context.Request.PhysicalApplicationPath;
    for (int i = 0; i < context.Request.Files.Count; i++)
    {
        HttpPostedFile uploadFile = context.Request.Files[i];
        if (uploadFile.ContentLength > 0)
        {
            uploadFile.SaveAs(string.Format("{0}{1}{2}",
                tempFile, "Upload\\", uploadFile.FileName));
        }
    }
}
