HttpHandler versus ASPX code-behind writing an image file - asp.net

We have encountered a difference in file creation when using an HttpHandler versus a code-behind ASPX page.
We read a saved jpg/png picture as a byte array from an 'Image' field in a SQL Server database and create a physical file on the server.
Both the ASPX page and the HttpHandler use the same code, pasted below.
//Begin
int docID = Convert.ToInt32(Request.QueryString["DocID"]);
var docRow = documentDB.GetDocument(docID);
// Retrieve the physical directory path for the Uploads subdirectory
string destDir = Server.MapPath("../../Uploads") + "\\";
string strFileName = destDir + DateTime.Now.ToFileTime() + "_" + docRow.DocName;
// Write the stored bytes out as a physical file
using (FileStream fs = new FileStream(strFileName, FileMode.CreateNew, FileAccess.Write))
{
    fs.Write(docRow.DocData, 0, docRow.DocData.Length);
    fs.Flush();
}
// End
After the file is created, it is viewable as a jpg/png image only when written by the ASPX code-behind. In the case of the HttpHandler it is not a valid image.
Any ideas on this behavior, the missing link, or resolution steps would be helpful.
Thank you.

After isolating the individual steps, the problem was identified as the data being stored in the database table.
The way to eliminate the issue was, during upload, to create a physical file on the server's local file system, read that file into a byte array, and store the array in the database table. (It could be an encoding issue.)
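A minimal sketch of that workaround (assuming System.IO is imported, an ASP.NET FileUpload control named fileUpload, and a hypothetical documentDB.SaveDocument helper that writes the bytes to the Image column):
// Sketch of the workaround: write the upload to disk first, then read it back.
// fileUpload is an ASP.NET FileUpload control; documentDB.SaveDocument is a
// hypothetical helper that stores the bytes in the Image column.
string tempPath = Path.Combine(Server.MapPath("~/App_Data"),
    Path.GetFileName(fileUpload.FileName));
fileUpload.SaveAs(tempPath);                           // 1. physical file on the server
byte[] docData = File.ReadAllBytes(tempPath);          // 2. read it back as raw bytes
documentDB.SaveDocument(fileUpload.FileName, docData); // 3. store in the table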

Related

Save file to App_Data folder error

I am trying to save a PDF file generated using Rotativa to the App_Data folder of my web app, but I get this error:
System.IO.DirectoryNotFoundException: Could not find a part of the path 'E:\www\tsp13amp\website.com\wwwroot\App_Data\Documents\Corps_Profile_userID.pdf'.
My controller code:
var PDF = new Rotativa.ViewAsPdf("ProfilePrint", model) { FileName = "Corps_Profile_" + User.Identity.GetUserId() + "_" + DateTime.Now.ToString("dd-MM-yyyy") };
var fileName = PDF.FileName;
byte[] pdfByteArray = PDF.BuildPdf(ControllerContext); // build the PDF once
var fullPath = Path.Combine(Server.MapPath("~/App_Data/Documents/"), fileName + ".pdf");
System.IO.File.WriteAllBytes(fullPath, pdfByteArray);
What I am trying to do is store the file in the App_Data folder and keep a reference to it in the database, which will then be used as a link to the file later on, like:
Download
Thanks for any help.
Just to make this official - don't store anything in App_Data unless you specifically have a way it will be used. This folder is not mapped by default to return content, so you'd need an HttpHandler or something similar to serve files from it. Even if you map the files to names in the database, you'd still need a way to return them.
You are much better served creating another folder to use for this.
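A minimal sketch of that, assuming a hypothetical ~/Downloads/Documents folder and the fileName and pdfByteArray variables from the question; creating the directory first also avoids the DirectoryNotFoundException above:
// Hypothetical sketch: write to a ~/Downloads/Documents folder instead of App_Data.
// Creating the directory first avoids the DirectoryNotFoundException thrown
// when the target folder does not yet exist.
var downloadDir = Server.MapPath("~/Downloads/Documents/");
System.IO.Directory.CreateDirectory(downloadDir); // no-op if it already exists
var fullPath = System.IO.Path.Combine(downloadDir, fileName + ".pdf");
System.IO.File.WriteAllBytes(fullPath, pdfByteArray);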

SQLFileStream with a chunked file

I'm a little stuck trying to upload files into our SQL DB using FILESTREAM. I've followed this example http://www.codeproject.com/Articles/128657/How-Do-I-Use-SQL-File-Stream but the difference is that we upload the file in 10 MB chunks.
On the first chunk a record is created in the DB with empty content (so that a file is created), and then OnUploadChunk is called for each chunk.
The file uploads OK, but when I check, a new file has been created for each chunk; for a 20 MB file, for example, I have one file which is 0 KB, another which is 10 MB, and the final one which is 20 MB. I was expecting one file of 20 MB.
I'm guessing this has to do with getting the transaction context, or with incorrectly using TransactionScope, which I don't quite fully grasp yet. I presume the context may be different for each chunk as it goes back and forth between client and server.
Here is the method which is called every time a chunk is sent from the client (using Plupload, if that's of any relevance).
protected override bool OnUploadChunk(Stream chunkStream, string DocID)
{
    BinaryReader b = new BinaryReader(chunkStream);
    byte[] binData = b.ReadBytes((int)chunkStream.Length);
    using (TransactionScope transactionScope = new TransactionScope())
    {
        // Folder path the file is sitting in
        string FilePath = GetFilePath(DocID);
        // Gets size of file that has been uploaded so far
        long currentFileSize = GetCurrentFileSize(DocID);
        // Essentially this is just SELECT GET_FILESTREAM_TRANSACTION_CONTEXT()
        byte[] transactionContext = GetTransactionContext();
        using (SqlFileStream filestream = new SqlFileStream(FilePath, transactionContext, FileAccess.ReadWrite))
        {
            filestream.Seek(currentFileSize, SeekOrigin.Begin);
            filestream.Write(binData, 0, binData.Length);
        }
        transactionScope.Complete();
    }
    return true;
}
UPDATE:
I've done a little research and I believe the issue is around this:
FILESTREAM does not currently support in-place updates. Therefore an update to a column with the FILESTREAM attribute is implemented by creating a new zero-byte file, which then has the entire new data value written to it. When the update is committed, the file pointer is then changed to point to the new file, leaving the old file to be deleted at garbage collection time. This happens at a checkpoint for simple recovery, and at a backup or log backup.
So have I just got to wait for the garbage collector to remove the chunked files? Or should I perhaps be uploading the file somewhere on the file system first and then copying it across?
Yes, you will have to wait for SQL to clean up the files for you.
Unless you have other system constraints, you should be able to stream the entire file all at once. This will give you a single file on the SQL side.
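A minimal sketch of that single-pass approach, in a hypothetical UploadWholeFile method that reuses the question's GetFilePath and GetTransactionContext helpers and keeps one transaction open for the whole upload:
// Hypothetical sketch: write the whole upload in one transaction, so only
// one FILESTREAM file is ever created on the SQL side.
protected bool UploadWholeFile(Stream uploadStream, string DocID)
{
    using (TransactionScope transactionScope = new TransactionScope())
    {
        string filePath = GetFilePath(DocID);            // same helper as in the question
        byte[] transactionContext = GetTransactionContext();
        using (SqlFileStream sqlStream = new SqlFileStream(filePath, transactionContext, FileAccess.Write))
        {
            byte[] buffer = new byte[81920];
            int read;
            while ((read = uploadStream.Read(buffer, 0, buffer.Length)) > 0)
            {
                sqlStream.Write(buffer, 0, read);        // stream straight through
            }
        }
        transactionScope.Complete();                     // commit once, after the last byte
    }
    return true;
}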

The process cannot access the file because it is being used by another process in SharePoint 2010 C# code

I am creating a custom timer job service in SharePoint 2010 using ASP.NET 3.5 and C#. In this service the business logic is that I have to create a zip file containing a list of applications as an Excel report for each client. For this I am using the Ionic.Zip third-party DLL and its ZipFile class to create the zip file and store it on the hard disk under some path. The scenario is that my code contains two foreach loops, the outer over the list of clients and the inner over the list of applications; each client may have a number of applications. I add these applications to the zip file, store it on the hard disk, and attach the file to a mail sent to the client.
My problem is that when I try to delete the zip file before moving on to the next client (so that no files are left on the hard disk), I get the error "The process cannot access the file because it is being used by another process". I have also tried to attach the Excel report's output stream directly as a mail attachment, but then I get zero bytes in the attachment. How should I overcome this error?
I am giving the simplified code below:
foreach (client in list of clients) // may have a no. of clients
{
    string zipFileDownloadPath = String.Empty;
    foreach (application in list of applications) // may have a no. of applications
    {
        HttpWebResponse resp = (HttpWebResponse)httpReq.GetResponse();
        Stream excelReport = resp.GetResponseStream();
        zipFile.AddEntry(appName, excelReport);
    }
    zipFileDownloadPath = clientFolder + @"\" + client["client_name"] + "_" + reportDate + ".zip";
    zipFile.Save(zipFileDownloadPath);
    mail.Attachments.Add(new Attachment(zipFileDownloadPath));
    smtp.Send(mail); // mail has body, subject etc.
    // here I am deleting the files
    if (Directory.Exists(clientFolder))
    {
        Directory.Delete(clientFolder, true); // here I am getting the error
    }
}
In the above code I have also tried to save the zip file to an output stream, so that there is no need to store files on the hard disk at all, and to attach this stream to the mail attachment. The problem is that I get the proper bytes in the output stream, but when the mail is sent I get zero bytes in the attachment.
// here is the code for attaching the output stream to the mail
foreach (client in list of clients) // may have a no. of clients
{
    foreach (application in list of applications) // may have a no. of applications
    {
        HttpWebResponse resp = (HttpWebResponse)httpReq.GetResponse();
        Stream excelReport = resp.GetResponseStream();
        zipFile.AddEntry(appName, excelReport);
    }
    Stream outputStream = new MemoryStream();
    zipFile.Save(outputStream);
    mail.Attachments.Add(new Attachment(outputStream, "ZipFileName", MediaTypeNames.Application.Zip));
    smtp.Send(mail); // mail has body, subject etc.
}
Try moving the position of the stream to its beginning before adding it to the attachment:
outputStream.Seek(0, SeekOrigin.Begin);
Also, before deleting your file, make sure you dispose of the zipFile object:
zipFile.Dispose();
Or alternatively (better), wrap it in a using statement.
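For example, a minimal sketch of that pattern (DotNetZip's ZipFile implements IDisposable):
// Minimal sketch: the using block calls Dispose automatically, releasing
// the file handle before Directory.Delete runs.
using (ZipFile zipFile = new ZipFile())
{
    // ... AddEntry calls as above ...
    zipFile.Save(zipFileDownloadPath);
} // handle released here, so deleting the folder can succeed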
Also, unless I am missing something, if you are using streams, why do you need to save the files to the hard drive at all? Just use the streams, something along the lines of:
var ms = new MemoryStream();
zipFile.Save(ms);
ms.Seek(0, SeekOrigin.Begin);
mail.Attachments.Add(new Attachment(ms, "ZipFileName", MediaTypeNames.Application.Zip));
zipFile.Dispose();
Special thanks to Luis. Luis has solved my problem.
Hi everyone, finally I have solved my problem. The problem was that I was saving the zip file to the output stream, so the stream had been read through to its last position, and I was attaching that same stream to the attachment; that's why I was getting zero bytes in the mail attachment. The solution is to seek the output stream back to the beginning after saving the zip file to it and before attaching it to the mail. Please refer to the following code for reference.
Stream outputStream = new MemoryStream();
zipFile.Save(outputStream);
outputStream.Seek(0, SeekOrigin.Begin);
mail.Attachments.Add(new Attachment(outputStream, "ZipFileName", MediaTypeNames.Application.Zip));

Can I test the validity of an image file before uploading it in ASP.NET?

I have an ASP.NET web application that allows the user to upload a file from his PC to a SQL Server database (which is later used to generate an image for an <img> tag). Is there an "easy" way to test the image within .NET to validate that it does not contain anything malicious before saving it?
Right now, I use this:
MemoryStream F = new MemoryStream();
// Loading the file as a Bitmap throws if it is not a valid image
Bitmap TestBitmap = new Bitmap(Filename);
// Re-encode as PNG into the MemoryStream, dropping any appended bytes
TestBitmap.Save(F, System.Drawing.Imaging.ImageFormat.Png);
int PhotoSize = (int)F.Length;
Photo = new byte[PhotoSize];
F.Seek(0, SeekOrigin.Begin);
// Copy the re-encoded image into the array that is saved to the database
int BytesRead = F.Read(Photo, 0, PhotoSize);
F.Close();
Creating TestBitmap fails if the file is not an image (e.g. if Filename is the name of a text file), but apparently this doesn't stop a file that is an image with malicious code appended to it from loading as an image. Re-saving it to a MemoryStream and then writing the stream to a byte array (which is later saved in the database) supposedly fixes this.
To prevent people from smuggling programs and other data through your site's photo uploads, you can take two main steps:
1. Read the image and save it again with your own code, to strip out anything else (a sketch follows this list).
2. Limit the size of each image to a sensible number.
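A minimal sketch of step 1, assuming System.Drawing (GDI+) is available; the Bitmap load and PNG re-encode mirror the question's own code:
// Minimal sketch of re-encoding an upload with System.Drawing.
// Returns the re-encoded PNG bytes, or throws if the file is not an image.
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

static byte[] ReEncodeAsPng(Stream uploadedFile)
{
    using (var bitmap = new Bitmap(uploadedFile))  // throws on non-image input
    using (var buffer = new MemoryStream())
    {
        bitmap.Save(buffer, ImageFormat.Png);      // writes only the pixel data
        return buffer.ToArray();
    }
}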
To prevent someone from uploading bad code and running it on your server, keep uploads in an isolated folder without permission to execute anything. More information about that:
I've been hacked. Evil aspx file uploaded called AspxSpy. They're still trying. Help me trap them‼
And a general topic on the same subject: Preparing an ASP.Net website for penetration testing

storing the files in a web application in asp.net

I have a set of components that I wish to let users download from my web application.
Now the question is where I should place the files: in App_Data, in a separate folder in the ASP.NET web application as shown here, or is there some other, more suitable solution?
For what I mean by components, you can take a look at this! So what is the best way to store the components?
Right now what I'm doing is storing the files in an external folder outside the application, more specifically in the Documents folder of my C drive, and storing the path to each component as a data element of a table. Whenever the user clicks a particular row's button (in the grid view), I get the title of the clicked row and query the database table for the file path of that component's title, using these lines of code:
String filePath = dr1[0].ToString(); // gets the file path from the database
HttpContext.Current.Response.ContentType = "APPLICATION/OCTET-STREAM";
String disHeader = "Attachment; Filename=\"" + filePath + "\"";
HttpContext.Current.Response.AppendHeader("Content-Disposition", disHeader);
System.IO.FileInfo fileToDownload = new System.IO.FileInfo(filePath);
HttpContext.Current.Response.Flush();
HttpContext.Current.Response.WriteFile(fileToDownload.FullName);
Am i doing it properly ? Is there a better/optimal way to do it ?
A user simply needs read access to download a file, so you can simply create a directory called "Downloads" and place them in there.
You can ensure that people can't "browse" that directory by disabling directory browsing and not placing any default documents in there (index.html or default.aspx, for example).
What you are currently doing looks like a fairly standard way of providing downloads off your site.
I can't think of anything more "optimal".
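That said, one small refinement is worth sketching, under the same setup as the question's code: send only the file name (not the full server path) in the Content-Disposition header, and let TransmitFile stream the file without buffering it in memory.
// Sketch of the same download pattern with two small refinements:
// expose only the file name (not the server path) in the header,
// and use TransmitFile to stream without buffering in memory.
String filePath = dr1[0].ToString(); // file path from the database
var response = HttpContext.Current.Response;
response.ContentType = "application/octet-stream";
response.AppendHeader("Content-Disposition",
    "Attachment; Filename=\"" + System.IO.Path.GetFileName(filePath) + "\"");
response.TransmitFile(filePath);
response.End();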
