Stream AVI file from memory using IIS/ASP.NET

I have some AVI files on disk but they are encrypted. I'm wondering if there is a way I can decrypt them and stream them to the browser (using MemoryStream or something similar) without having to write any files?
I know there is Windows Media Services, but I'm using a Vista machine and Windows Media Services will only install on Windows Server 2003 and 2008.
Is there a way to accomplish this without too much trouble or is Media Services/Windows Server the only way to go? And if there is, would I use something like a custom IHttpHandler (.ashx file)?
Edit:
I have decided to use a custom IHttpHandler. What basic code would I need to have the video play?

I wouldn't want to use a MemoryStream for video. Assuming you can create a CryptoStream (found in the System.Security.Cryptography namespace) over the encrypted AVI file, you should be able to just pump a Read from that into a Write on the Response.OutputStream in an IHttpHandler. Something like:
byte[] buffer = new byte[65536]; // adjust the buffer size as you prefer
CryptoStream inStream = YourFunctionToDecryptAVI(aviFilePath);
int bytesRead = inStream.Read(buffer, 0, buffer.Length);
while (bytesRead != 0)
{
    context.Response.OutputStream.Write(buffer, 0, bytesRead);
    if (!context.Response.IsClientConnected) break;
    bytesRead = inStream.Read(buffer, 0, buffer.Length);
}
context.Response.Close(); // see edit note
Make sure you turn off response buffering and specify a content type.
Edit:
Ordinarily I hate calling Close; it seems so draconian. However, whilst chunked encoding shouldn't require it, in the case of streamed video the client may not cope without it. Also, with large data transfers, closing the connection is not really a big deal.
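For the edit ("What basic code would I need?"), here is a minimal sketch of how those pieces fit together in a handler. The content type, file path, and the DecryptAvi helper (standing in for YourFunctionToDecryptAVI above, assuming AES with a key/IV kept in appSettings) are illustrative assumptions, not part of the original answer:
using System;
using System.Configuration;
using System.IO;
using System.Security.Cryptography;
using System.Web;

public class AviStreamHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.Buffer = false;                  // stream; don't buffer the whole file
        context.Response.ContentType = "video/x-msvideo"; // AVI

        // Illustrative path; a real handler would resolve it from the request.
        string aviFilePath = context.Server.MapPath("~/App_Data/video.avi");

        byte[] buffer = new byte[65536];
        using (CryptoStream inStream = DecryptAvi(aviFilePath))
        {
            int bytesRead;
            while ((bytesRead = inStream.Read(buffer, 0, buffer.Length)) > 0)
            {
                context.Response.OutputStream.Write(buffer, 0, bytesRead);
                if (!context.Response.IsClientConnected) break;
            }
        }
        context.Response.Close();
    }

    public bool IsReusable { get { return false; } }

    // Hypothetical decryption helper; swap in whatever scheme your files actually use.
    private static CryptoStream DecryptAvi(string path)
    {
        var aes = Aes.Create();
        aes.Key = Convert.FromBase64String(ConfigurationManager.AppSettings["AviKey"]);
        aes.IV = Convert.FromBase64String(ConfigurationManager.AppSettings["AviIV"]);
        return new CryptoStream(File.OpenRead(path), aes.CreateDecryptor(), CryptoStreamMode.Read);
    }
}
Register the handler for a path in web.config (httpHandlers) and point the player at that URL.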

Related

ASP.NET: Insert data in a ZIP file without having to re-write the entire ZIP file?

My question is a bit similar to this one but it is with ASP.NET and my requirements are slightly different: Android append files to a zip file without having to re-write the entire zip file?
I need to insert data into a zip file downloaded by users (not much, 1 KB of data at most; this is data for AdWords offline conversion, actually). The zip file is downloaded through an ASP.NET website. Because the zip file is already large (tens of MB), I need to insert this data without re-compressing everything, to avoid overloading the server. I can think of two ways to do this.
Way A: Find a zip technology that lets me embed a particular file in the ZIP file uncompressed. Assuming there is no checksum, it would then be easy to just overwrite the bytes of this uncompressed file with my specific data, in the zip file itself. Ideally this would be supported by all unzip tools (Windows integrated zip, WinRAR, 7-Zip...).
Way B: Append an extra file to the original ZIP file without having to recompress it! This extra file would have to be stored in an embedded folder in the ZIP file.
I looked a bit at SevenZipSharp, which has an enumeration SevenZip.CompressionMode with values Create and Append; that leads me to think Way B could be implemented. DotNetZip also seems to work pretty well with streams, according to its FAQ.
But if Way A is possible I'd much prefer it, since no extra zip library would be needed on the server side!
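A note on Way A: the zip format does store a CRC-32 for every entry, so overwriting bytes in place would invalidate that checksum unless it is recomputed as well. Storing the placeholder entry uncompressed is easy enough, though; a minimal sketch with DotNetZip (paths and placeholder content are illustrative):
using Ionic.Zip;
using Ionic.Zlib;

class StoredEntryDemo
{
    static void Main()
    {
        using (var zip = new ZipFile(@"C:\temp\MylargeZipFile.zip"))
        {
            // CompressionLevel.None makes DotNetZip "store" the entry, so its
            // bytes appear verbatim in the archive, padded here to a fixed size.
            zip.CompressionLevel = CompressionLevel.None;
            zip.UpdateEntry("Path/Placeholder.txt", new string(' ', 1024));
            zip.Save();
        }
    }
}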
Ok, thanks to DotNetZip I am able to do what I want in a very resource-efficient way:
using System.IO;
using Ionic.Zip;

class Program
{
    static void Main(string[] args)
    {
        byte[] buffer;
        using (var memoryStream = new MemoryStream())
        {
            using (var zip = new ZipFile(@"C:\temp\MylargeZipFile.zip"))
            {
                // The file whose content is overridden in MylargeZipFile.zip
                // has the path "Path\FileToUpdate.txt"
                zip.UpdateEntry(@"Path\FileToUpdate.txt", "Hello My New Content");
                zip.Save(memoryStream);
            }
            buffer = memoryStream.ToArray();
        }

        // Here the buffer will be sent to httpResponse:
        // httpResponse.Clear();
        // httpResponse.AddHeader("Content-Disposition", "attachment; filename=MylargeZipFile.zip");
        // httpResponse.ContentType = "application/octet-stream";
        // httpResponse.BinaryWrite(buffer);
        // httpResponse.BufferOutput = true;

        // Just to check it worked!
        File.WriteAllBytes(@"C:\temp\Result.zip", buffer);
    }
}

Most Space-Efficient Way to Store a Byte Array in a Database Table - ASP.NET

Right now we have a database table (SQL Server 2008 R2) that stores an uploaded file (PDF, DOC, TXT, etc.) in an image type column. A user uploads this file from an ASP.NET application. My project is to get a handle on the size at which this table is growing, and I've come up with a couple of questions along the way.
On the database side, I've discovered the image column type is supposedly deprecated. Will I gain any benefits from switching over to varbinary(max), or should I say varbinary(5767168) because that is my file size cap, or might I as well just leave it as an image type as far as space efficiency is concerned?
On the application side, I want to compress the byte array. Microsoft's built-in GZip sometimes made the file bigger instead of smaller. I switched over to SharpZipLib, which is much better, but I still occasionally run into the same problem. Is there a way to find out the average compression savings before I implement it on a wide scale? I'm having a hard time finding out what underlying algorithm they use.
Would it be worth writing a Huffman code algorithm of my own, or will that present the same problem where there is occasionally a larger compressed file than original file?
For reference, in case it matters, here's the code in my app:
using ICSharpCode.SharpZipLib.GZip;

private static byte[] Compress(byte[] data)
{
    MemoryStream output = new MemoryStream();
    using (GZipOutputStream gzip = new GZipOutputStream(output))
    {
        gzip.IsStreamOwner = false; // leave `output` open so it can be read afterwards
        gzip.Write(data, 0, data.Length);
    }
    return output.ToArray();
}

private static byte[] Decompress(byte[] data)
{
    MemoryStream output = new MemoryStream();
    using (GZipInputStream gzip = new GZipInputStream(new MemoryStream(data)))
    {
        byte[] buff = new byte[4096]; // was 64 bytes; a larger buffer means far fewer reads
        int read;
        while ((read = gzip.Read(buff, 0, buff.Length)) > 0)
        {
            output.Write(buff, 0, read);
        }
    }
    return output.ToArray();
}
Thanks in advance for any help. :)
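On the "compressed output is sometimes bigger" problem: that is expected for inputs that are already compressed (PDF internal streams, JPEGs, zips), and no lossless algorithm can avoid it in general, a hand-rolled Huffman coder included. A pragmatic workaround, sketched here as a suggestion rather than anything from the original post, is to keep whichever representation is smaller and store a flag with the row:
// Keep the smaller representation; persist `isCompressed` alongside the
// blob so Decompress is applied only when appropriate.
private static byte[] CompressIfSmaller(byte[] data, out bool isCompressed)
{
    byte[] compressed = Compress(data);
    isCompressed = compressed.Length < data.Length;
    return isCompressed ? compressed : data;
}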
That's not a byte array, that's a BLOB. Ten years ago, you would have used the IMAGE datatype; these days, it's more efficient to use VARBINARY(MAX).
I really recommend that people use FILESTREAM for VarBinary(Max), as it makes backing up the database (without the blobs) quite easy.
Keep in mind that storing the native formats (without compression) will allow full-text search, which is pretty incredible if you think about it. You have to install an iFilter from Adobe to search inside PDFs, but it's a killer feature; I can't live without it.
I hate to be a jerk and answer my own question, but I thought I'd summarize my findings into a complete answer for anyone else looking to space-efficiently store file/image data within a database:
* Using varbinary(MAX) versus Image?
There are many reasons for using varbinary(MAX), but top among them is that image is deprecated and will be removed altogether in a future version of SQL Server. Not starting any new projects with it just nips a future problem in the bud.
According to the info in this question: SQL Server table structure for storing a large number of images, varbinary(MAX) has more operations available to be used on it.
Varbinary(MAX) is easy to stream from a .NET application by using a SqlParameter. Negative one is for 'MAX' length. Like so:
SQLCommand1.Parameters.Add("@binaryValue", SqlDbType.VarBinary, -1).Value = compressedBytes;
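In fuller context, a sketch of the insert (the table name, column, and connection handling are illustrative, not from the original post):
using System.Data;
using System.Data.SqlClient;

// Hypothetical table: Files (Id INT IDENTITY, Data VARBINARY(MAX))
static void SaveFile(string connectionString, byte[] compressedBytes)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("INSERT INTO Files (Data) VALUES (@binaryValue)", conn))
    {
        // -1 tells the provider this parameter is VARBINARY(MAX)
        cmd.Parameters.Add("@binaryValue", SqlDbType.VarBinary, -1).Value = compressedBytes;
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}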
* What compression algorithm to use?
I'm really not much closer to a decent answer on this one. I used ICSharpCode.SharpZipLib.GZip and found it compressed better than the built-in zipping functions, simply by running it on a bunch of files and comparing the results.
My results:
I reduced my total file size by about 20%. Unfortunately, a lot of the files I had were PDFs, which don't compress that well, but there was still some benefit. Not much luck (obviously) with file types that were already compressed.

Streaming a file in Liferay Portlet

I have written downloading a file in a simple manner:
@ResourceMapping(value = "content")
public void download(ResourceRequest request, ResourceResponse response) throws IOException {
    //...
    SerializableInputStream serializableInputStream =
            someService.getSerializableInputStream(id_of_some_file);

    response.addProperty(HttpHeaders.CACHE_CONTROL, "max-age=3600, must-revalidate");
    response.setContentType(contentType);
    response.addProperty(HttpHeaders.CONTENT_TYPE, contentType);
    response.addProperty(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename*=UTF-8''"
            + URLEncoder.encode(fileName, "UTF-8"));

    OutputStream outputStream = response.getPortletOutputStream();
    byte[] parcel = new byte[4096];
    int read;
    while ((read = serializableInputStream.read(parcel)) != -1) {
        // write only the bytes actually read, not the whole buffer
        outputStream.write(parcel, 0, read);
    }
    outputStream.flush();
    serializableInputStream.close();
    outputStream.close();
    //...
}
The SerializableInputStream is described here - JavaDocs. It allows an InputStream to be serialized and, for instance, passed over remoting.
I read from the input and write to the output, not all bytes at once. But unfortunately the portlet isn't "streaming" the contents: the file (e.g. an image) is sent to the browser only after the entire input stream has been read. I can see the file being read from the database in the live logs, but I don't see any "growing" image on the screen.
What am I doing wrong? Is it possible to really stream a file in Liferay 6.0.6 and Spring Portlet MVC?
Where are you doing this? I fear that you're doing it instead of rendering your portlet's HTML (i.e. in the render phase). Typically the portlet content is embedded in an HTML page, thus you need the resource phase, which (roughly) behaves like a servlet.
Also, the code you give does not match the actual question you ask: you use a comment //read from input stream (file), write file to os and ask what to do differently in order not to have the full content in memory.
As the commented version does not hold anything in memory, and you could loop through reading from the input file while writing to the output stream: what's the underlying question? Do you have problems implementing download streaming in a portal environment, or difficulties (i.e. using too much memory) reading from a file while writing to a stream?
Edit: Thanks for clarifying. Have you tried flushing the stream earlier? You can do that whenever you want, e.g. on every loop iteration (though that might be a bit too much). Also, keep in mind that the browser as well as the file format must handle it the way you expect: if an image is not encoded "incrementally", a browser might not show it incrementally.
Have you tried this with huge files as well? It might be that automatic flushing is just not triggered because your files are too small.
Also, I think that filename*=UTF-8'' looks strange. It might be valid encoding (it is, in fact, the RFC 5987 syntax for encoded header parameters), but I've never seen it before.

System.Drawing.Image as source for asp.net image container

I created an image from a byte array:
System.Drawing.Image newImage;
using (MemoryStream ms = new MemoryStream(imageBytes))
{
    // No need to Write the bytes again; the constructor fills the stream
    // and leaves Position at 0, which is what FromStream expects.
    // NB: GDI+ expects the stream to remain open for the lifetime of the Image.
    newImage = System.Drawing.Image.FromStream(ms, true);
}
and now I need to have this image as a source for asp:Image (System.Web.UI.WebControls.Image). Is this possible as I know that conversion is impossible?
Use the following code:
System.IO.MemoryStream ms = new System.IO.MemoryStream();
image.Save(ms, System.Drawing.Imaging.ImageFormat.Gif);
Response.ClearContent();
Response.ContentType = "image/gif";
Response.BinaryWrite(ms.ToArray());
<asp:Image ID="Image1" runat="server" ImageUrl="~/pic.aspx"/>
If you're starting with a byte array, just send that. No need to parse and re-encode it unless you need to change the image format.
If you do need to perform image manipulations, you might consider an existing library that doesn't have memory leaks, and is optimized for the server. GDI calls are dangerous on the server - you have to know exactly what you're doing.
For the best performance, use an HttpModule to answer the request. An .aspx page, as suggested by MUG4N, will add significant overhead to the request. An .ashx would be better than an .aspx file, but wouldn't be MVC-compatible, and doesn't permit disk caching as can be accomplished via an HttpModule and a RewritePath call.
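A rough sketch of that HttpModule approach (the module wiring is the real IHttpModule API, but the URL prefix, content type, and byte-lookup helper are illustrative assumptions):
using System;
using System.Web;

public class ImageModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += delegate(object sender, EventArgs e)
        {
            HttpContext context = ((HttpApplication)sender).Context;
            if (!context.Request.Path.StartsWith("/images/dynamic/", StringComparison.OrdinalIgnoreCase))
                return; // not ours; let the pipeline continue

            byte[] imageBytes = LoadImageBytes(context.Request.Path); // hypothetical lookup
            context.Response.ContentType = "image/gif";
            context.Response.BinaryWrite(imageBytes);
            context.Response.End(); // short-circuit the rest of the pipeline
        };
    }

    public void Dispose() { }

    private static byte[] LoadImageBytes(string path)
    {
        // Placeholder: fetch the stored byte array (database, cache, etc.).
        throw new NotImplementedException();
    }
}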

Best way to stream files in ASP.NET

What's the best way to stream files using ASP.NET?
There appear to be various methods for this, and I'm currently using the Response.TransmitFile() method inside an HTTP handler, which sends the file to the browser directly. This is used for various things, including sending FLVs from outside the webroot to an embedded Flash video player.
However, this doesn't seem like a reliable method. In particular, there's a strange problem with Internet Explorer (7), where the browser just hangs after a video or two are viewed. Clicking on any links etc. has no effect, and the only way to get the site working again is to close the browser and re-open it.
This also occurs in other browsers, but much less frequently. Based on some basic testing, I suspect it is something to do with the way files are being streamed: perhaps the connection isn't being closed properly, or something along those lines.
After trying a few different things, I've found that the following method works for me:
Response.WriteFile(path);
Response.Flush();
Response.Close();
Response.End();
This gets around the problem mentioned above, and viewing videos no longer causes Internet Explorer to hang.
However, my understanding is that Response.WriteFile() loads the file into memory first, and given that some files being streamed could potentially be quite large, this doesn't seem like an ideal solution.
I'm interested in hearing how other developers are streaming large files in ASP.NET, and in particular, streaming FLV video files.
I would take things outside of the "aspx" pipeline. In particular, I would write a raw handler (.ashx, or mapped via config) that does the minimum work and simply writes to the response in chunks. The handler would accept input from the query string/form as normal, locate the object to stream, and stream the data (using a moderately sized local buffer in a loop). A simple (incomplete) example is shown below:
public void ProcessRequest(HttpContext context)
{
    // read input etc.
    context.Response.Buffer = false;
    context.Response.ContentType = "text/plain";

    string path = @"c:\somefile.txt";
    FileInfo file = new FileInfo(path);
    int len = (int)file.Length, bytes;
    context.Response.AppendHeader("content-length", len.ToString());

    byte[] buffer = new byte[1024];
    Stream outStream = context.Response.OutputStream;
    using (Stream stream = File.OpenRead(path))
    {
        while (len > 0 && (bytes = stream.Read(buffer, 0, buffer.Length)) > 0)
        {
            outStream.Write(buffer, 0, bytes);
            len -= bytes;
        }
    }
}
Take a look at the article Tracking and Resuming Large File Downloads in ASP.NET, which goes into more depth than just opening a stream and chucking out all the bits.
The HTTP protocol supports ranged byte requests and resumable downloads, and many streaming clients (like video players, or Adobe Reader for PDFs) can and will chunk requests up, saving bandwidth and giving your users a better experience.
Not trivial, but it's time well spent.
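To make the range idea concrete, here is a minimal sketch of honoring a single bytes=start-end request inside an IHttpHandler's ProcessRequest; this is an illustration, not the article's code, and a production version needs more (ETag/If-Range checks, suffix and multi-range support, validation):
public void ProcessRequest(HttpContext context)
{
    string path = context.Server.MapPath("~/App_Data/video.flv"); // illustrative path
    long length = new FileInfo(path).Length, start = 0, end = length - 1;

    string range = context.Request.Headers["Range"];
    if (!string.IsNullOrEmpty(range) && range.StartsWith("bytes=") && !range.Contains(","))
    {
        // Handles "bytes=start-" and "bytes=start-end" only.
        string[] parts = range.Substring(6).Split('-');
        if (parts[0].Length > 0)
        {
            start = long.Parse(parts[0]);
            if (parts.Length > 1 && parts[1].Length > 0) end = long.Parse(parts[1]);
            context.Response.StatusCode = 206; // Partial Content
            context.Response.AppendHeader("Content-Range",
                string.Format("bytes {0}-{1}/{2}", start, end, length));
        }
    }

    context.Response.AppendHeader("Accept-Ranges", "bytes");
    context.Response.AppendHeader("Content-Length", (end - start + 1).ToString());
    context.Response.TransmitFile(path, start, end - start + 1);
}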
Try opening the file as a stream, then using Response.OutputStream.Write(). For example:
Edit: My bad, I forgot that Write takes a byte buffer. Fixed.
byte[] buffer = new byte[1 << 16]; // 64 KB
int bytesRead = 0;
using (var file = File.OpenRead(path))
{
    while ((bytesRead = file.Read(buffer, 0, buffer.Length)) != 0)
    {
        Response.OutputStream.Write(buffer, 0, bytesRead);
    }
}
Response.Flush();
Response.Close();
Response.End();
Edit 2: Did you try this? It should work.
After trying lots of different combinations, including the code posted in the various answers, it seems that setting Response.Buffer = true before calling TransmitFile did the trick, and the web application is now a lot more responsive in Internet Explorer.
In this particular case, the SWF extension is also mapped to ASP.NET, and we're using a custom handler in our web application to read the files from disk and then send them to the browser using Response.TransmitFile(). We've got a Flash-based video player to play video files, which are also SWFs, and I think having all of this activity go through the handler without buffering may have been causing strange things to happen in IE.
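In other words, the fix boiled down to something like this (a sketch; the path variable and content type are illustrative):
Response.Buffer = true;               // buffering back on resolved the IE hangs
Response.ContentType = "video/x-flv"; // match the actual file type being served
Response.TransmitFile(path);
Response.End();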
