Error while trying to open PDF in ASP.NET - asp.net

In my ASP.NET application, when I try to open a PDF file using the code below, I am getting an error.
Code used to show the PDF file:
FileStream MyFileStream = new FileStream(filePath, FileMode.Open);
long FileSize = MyFileStream.Length;
byte[] Buffer = new byte[(int)FileSize + 1];
MyFileStream.Read(Buffer, 0, (int)MyFileStream.Length);
MyFileStream.Close();
Response.ContentType = "application/pdf";
Response.AddHeader("content-disposition", "attachment; filename="+filePath);
Response.BinaryWrite(Buffer);
Error I am getting:
"There was an error opening this document. The file is damaged and could not be opened."

Sounds like you're using an aspx file to output the PDF. Have you considered using an ashx file, which is an HttpHandler? It bypasses all the typical aspx overhead and is more efficient for serving up raw data.
Here is an example of the ashx using your code:
<%@ WebHandler Language="C#" Class="ViewPDF" %>

using System.IO;
using System.Web;

public class ViewPDF : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // filePath is assumed to be resolved the same way it is in your page.
        using (FileStream fileStream = new FileStream(filePath, FileMode.Open, FileAccess.Read))
        {
            byte[] buffer = new byte[fileStream.Length];
            fileStream.Read(buffer, 0, buffer.Length); // see the note below about Stream.Read
            context.Response.ContentType = "application/pdf";
            // Send only the file name, not the full server path.
            context.Response.AddHeader("content-disposition",
                "attachment; filename=" + Path.GetFileName(filePath));
            context.Response.BinaryWrite(buffer);
        }
    }

    public bool IsReusable
    {
        get { return false; }
    }
}
If you still want to use the aspx page, make sure you are doing the following:
// At the beginning before you do any response stuff do:
Response.Clear();
// When you are done all your response stuff do:
Response.End();
That should solve your problem.

You must flush the response; otherwise it gets partially transmitted.
Response.Flush();
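Putting the Clear/End and Flush advice together, a minimal sketch of the aspx code path could look like the following (an illustrative sketch only, assuming filePath points to a readable local file; Path.GetFileName keeps the full server path out of the header):
// Sketch: serve an existing PDF from an aspx code-behind (Path is in System.IO).
Response.Clear();
Response.ContentType = "application/pdf";
Response.AddHeader("content-disposition",
    "attachment; filename=" + Path.GetFileName(filePath));
Response.TransmitFile(filePath); // streams the file without loading it all into memory
Response.Flush();
Response.End();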

In addition to ocedcio's reply, you need to be aware that Stream.Read() does not necessarily read all of the bytes requested. You should examine the return value of Stream.Read() and continue reading if fewer bytes were read than requested.
See this question & answer for the details: Creating a byte array from a stream
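For completeness, here is a minimal sketch of such a loop (it keeps calling Read until the stream reports 0 bytes, so short reads are handled; filePath is the same variable as in the question):
// Sketch: read an entire file stream into a byte array, tolerating short reads.
using (FileStream fs = new FileStream(filePath, FileMode.Open, FileAccess.Read))
using (MemoryStream ms = new MemoryStream())
{
    byte[] chunk = new byte[8192];
    int read;
    while ((read = fs.Read(chunk, 0, chunk.Length)) > 0)
    {
        ms.Write(chunk, 0, read); // write only the bytes actually read
    }
    Response.BinaryWrite(ms.ToArray());
}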

Related

Retrieving binary/BLOB files from Microsoft Dynamics NAV with ASP.NET

I am working with a Microsoft Dynamics NAV database that has a file attachment table. The files are stored in MS SQL. I am able to pull the files to my desktop with a custom ASP.NET application that I have built, but when I open the files, they are corrupted. These are PDF files stored in the "image"-type column of the database, and I have tried downloading over 20 files. They all vary in size and seem to download successfully.
The reason I suspect these are PDF files is that the column right next to the binary column gives me the file name with a PDF extension. I have also tried renaming the downloaded file to different formats, but without any luck opening it. This is not my first project retrieving binary files from an MS SQL database. If anyone has worked on getting files off a NAV database before, please help me. I wrote the sample code below to retrieve a file using LINQ to SQL when I give it a specific ID in the browser. Please advise me if you know of any sort of compression or encryption in the binary data itself and how to retrieve the file so it can be read. Thanks.
protected void getFileFromID(string queryid)
{
string Filename = string.Empty;
byte[] bytes;
try
{
DataClassesFilesDataContext dcontext = new DataClassesFilesDataContext();
var myfile = (from file in dcontext.Comptroller_File_Attachments
where file.No_ == queryid
select file).First();
if (myfile.Table_ID.ToString().Length > 0 && myfile.Attachment != null)
{
Filename = myfile.FileName.ToString();
bytes = myfile.Attachment.ToArray();
Response.Clear();
Response.ContentType = "application/octet-stream";
Response.AddHeader("Content-Disposition", "attachment; filename=" + Filename);
Response.BinaryWrite(bytes);
Response.End();
}
else
{
Response.Write("no file exist");
}
}
catch (Exception e)
{
Response.Write(e);
}
}
Well, I figured it out. I read on a blog that 4 bytes was the "magic number" to get rid of. So all you have to do is strip the first 4 bytes from the BLOB byte array and then decompress the rest with DeflateStream. In the example code below, the first function uses LINQ to SQL to fetch the attachment for the given queryid, skips the first 4 bytes, and returns the bytes together with the file name as a tuple; the second function decompresses those bytes and writes them to the response. I am sure the code can be improved for efficiency. For those who are having trouble with this, just give it a try.
//get bytes and remove first 4 bytes from bytes array
protected Tuple<byte[], string> getBytesfromFile(string queryID)
{
byte[] MyFilebytes = null;
string filename = string.Empty;
try
{
DataClassesFilesDataContext dcontext = new DataClassesFilesDataContext();
var myfile = (from file in dcontext.Comptroller_File_Attachments
where file.No_ == queryID
select file).First();
if (myfile.Table_ID.ToString().Length > 0 && myfile.Attachment != null)
{
MyFilebytes = myfile.Attachment.ToArray().Skip(4).ToArray();
filename = myfile.FileName.ToString();
}
else
Response.Write("no byte to return");
}
catch
{
Response.Write("no byte");
}
return Tuple.Create(MyFilebytes, filename);
}
//after getting the remaining bytes (the first 4 removed), decompress them with DeflateStream through a memory stream and send the result back in the response.
protected void getFile()
{
try
{
string Filename = string.Empty;
byte[] myfile = getBytesfromFile(getQueryID()).Item1;
byte[] result;
using (Stream input = new DeflateStream(new MemoryStream(myfile),
CompressionMode.Decompress))
{
using (MemoryStream output = new MemoryStream())
{
input.CopyTo(output);
result = output.ToArray();
}
}
Filename = getBytesfromFile(getQueryID()).Item2;
Response.Clear();
Response.ContentType = "application/octet-stream";
Response.AddHeader("Content-Disposition", "attachment; filename=" + Filename);
Response.BinaryWrite(result);
Response.End();
}
catch (Exception e)
{
Response.Write(e);
}
}
//pass in file id
protected string getQueryID()
{
QueryID.QueryStringID = Request.QueryString["fileid"];
return QueryID.QueryStringID;
}
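As a follow-up, the two steps described above (skip the first 4 bytes, then inflate) could also be wrapped in one small helper. This is only a sketch with a hypothetical name, assuming the same raw byte[] pulled from the Attachment column and the System.IO / System.IO.Compression namespaces already used above:
// Sketch: decompress a Dynamics NAV BLOB (4-byte magic number followed by a deflate-compressed payload).
private static byte[] DecompressNavBlob(byte[] blob)
{
    using (MemoryStream input = new MemoryStream(blob, 4, blob.Length - 4))
    using (DeflateStream inflater = new DeflateStream(input, CompressionMode.Decompress))
    using (MemoryStream output = new MemoryStream())
    {
        inflater.CopyTo(output);
        return output.ToArray();
    }
}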

ASP.NET - Response.OutputStream.Write either writes 16k and then all 0's, or writes everything but inserts a char every 64k

I have the following code...
public partial class DownloadFile : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
string FilePath = "[FTPPath]";
Download downloadFile = new Download();
Server.ScriptTimeout = 54000;
try
{
long size = downloadFile.GetFileSize(FilePath);
using (FtpWebResponse ftpResponse = downloadFile.BrowserDownload(FilePath))
using (Stream streamResponse = ftpResponse.GetResponseStream())
{
string fileName = FilePath.Substring(FilePath.LastIndexOf("/") + 1);
int bufferSize = 65536;
byte[] buffer = new byte[bufferSize];
int readCount;
readCount = streamResponse.Read(buffer, 0, bufferSize);
// Read file into buffer
//streamResponse.Read(buffer, 0, (int)size);
Response.Clear();
Response.Buffer = false;
Response.BufferOutput = false;
//Apparently this line helps with old versions of IE that like to cache stuff no matter how much you tell them not to!
Response.AddHeader("Pragma", "public");
//Expires: 0 forces the browser to always think the page is "stale", so it never caches the page and always re-downloads it when viewed. That way there are no nasty surprises if we change the authentication details.
Response.Expires = 0;
//Again this line forces the browser not to cache the page.
Response.AddHeader("Cache-Control", "no-cache, must-revalidate");
Response.AddHeader("Cache-Control", "public");
Response.AddHeader("Content-Description", "File Transfer");
Response.ContentType = "application/zip";
Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
Response.AddHeader("Content-Transfer-Encoding", "binary");
Response.AddHeader("Content-Length", size.ToString());
// writes buffer to OutputStream
while (readCount > 0)
{
Response.OutputStream.Write(buffer, 0, bufferSize);
readCount = streamResponse.Read(buffer, 0, bufferSize);
Response.Flush();
}
Response.End();
Server.ScriptTimeout = 90;
}
}
catch (Exception ex)
{
Response.Write("<p>" + ex.Message + "</p>");
Server.ScriptTimeout = 90;
}
}
}
To download .zip files from an FTP server (please ignore the header rubbish about preventing caching unless it is related to the issue).
So downloadFile is a class I have written using FtpWebRequest/FtpWebResponse with SSL enabled that can do two things: one is to return the size of a file on our FTP server (GetFileSize), and the other is to set FtpWebRequest.Method = WebRequestMethods.Ftp.DownloadFile to allow the download of a file.
Now the code appears to work perfectly: you get a nice zip downloaded of exactly the same size as the one on the FTP server. However, this is where the quirks begin.
The zip files are always corrupted, no matter how small. In theory, very small files should be okay, but you'll see why they are not in a moment. Because of this, I decided to compare the files in binary.
If I set bufferSize to anything other than the size of the file
(i.e. 1024, 2048, 65536), the first 16k (16384 bytes) downloads
perfectly, and then the stream just writes zeros to the end of the
file.
If I set bufferSize = size (the file size), the stream appears to download the full file, until you look more closely. The file is an exact replica up to the first 64k, and then an extra character appears in the downloaded file (this character never seems to be the same).
After this extra byte, the files are exactly the same again. An extra byte appears to get added every 64k, meaning that by the end of a 65MB file the two files are massively out of sync. Because the download length is limited to the size of the file on the server, the end of the file gets truncated in the downloaded copy. The archive won't allow access to it, as all the CRC checks fail.
Any help would be much appreciated. Cheers.
I have now changed my code somewhat to use WebRequest and WebResponse to grab a zip over HTTP from the web server itself. Here is the code...
public partial class DownloadFile : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
string FilePath = [http path];
Server.ScriptTimeout = 54000;
try
{
WebRequest HWR = WebRequest.Create(FilePath);
HWR.Method = WebRequestMethods.File.DownloadFile;
using (WebResponse FWR = HWR.GetResponse())
using (BinaryReader streamResponse = new BinaryReader(FWR.GetResponseStream()))
{
string fileName = FilePath.Substring(FilePath.LastIndexOf("/") + 1);
int bufferSize = 2048;
byte[] buffer = new byte[bufferSize];
int readCount;
readCount = streamResponse.Read(buffer, 0, bufferSize);
Response.Clear();
Response.Buffer = false;
Response.BufferOutput = false;
//Apparently this line helps with old versions of IE that like to cache stuff no matter how much you tell them not to!
Response.AddHeader("Pragma", "public");
//Expires: 0 forces the browser to always think the page is "stale", so it never caches the page and always re-downloads it when viewed. That way there are no nasty surprises if we change the authentication details.
Response.Expires = 0;
//Again this line forces the browser not to cache the page.
Response.AddHeader("Cache-Control", "no-cache, must-revalidate");
Response.AddHeader("Cache-Control", "public");
Response.AddHeader("Content-Description", "File Transfer");
Response.ContentType = "application/zip";
Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
Response.AddHeader("Content-Transfer-Encoding", "binary");
// writes buffer to OutputStream
while (readCount > 0)
{
Response.OutputStream.Write(buffer, 0, bufferSize);
Response.Flush();
readCount = streamResponse.Read(buffer, 0, bufferSize);
}
//Response.Write(testString);
Response.End();
Server.ScriptTimeout = 90;
}
}
catch (Exception ex)
{
Response.Write("<p>" + ex.Message + "</p>");
Server.ScriptTimeout = 90;
}
}
}
This code is simpler, but it is still corrupting the data. I'm sure there's something very simple I'm doing wrong, but I just can't spot it or find a test that shows me where I'm going wrong. Please help :)
On your line
Response.OutputStream.Write(buffer, 0, bufferSize);
change bufferSize to readCount so that you only write the number of bytes you actually read.
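In other words, the write must use the count returned by Read rather than the buffer size. A sketch of the corrected loop, using the names from the question's code:
// Sketch: write only the bytes that were actually read on each pass.
int readCount = streamResponse.Read(buffer, 0, bufferSize);
while (readCount > 0)
{
    Response.OutputStream.Write(buffer, 0, readCount);
    Response.Flush();
    readCount = streamResponse.Read(buffer, 0, bufferSize);
}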

Responding with a CSV file in ASP.NET

I am trying to make a csv file from a textbox and then send it to the user. This is my code so far:
Response.Clear();
Response.ContentType = "text/csv";
Response.AppendHeader("Content-Disposition",
string.Format("attachment; filename={0}", DateTime.Now));
Response.Write(TextBox_Data.Text);
Context.Response.End();
What is sent is an empty XML file. I have never tried responding with a file before, and I'm wondering why it does this.
I have also tried the following which did not work:
var writer = File.CreateText("C:\\file.csv");
writer.WriteLine(TextBox_Data.Text);
Context.Response.Clear();
Context.Response.AppendHeader("content-disposition", "attachment; filename=" + DateTime.Now + ".csv");
Context.Response.ContentType = "text/csv";
Context.Response.Write("C:\\file.csv");
Context.Response.Flush();
Context.Response.End();
Let me know if you have the answer :)
The following code worked for me. You may just be missing a file extension.
Response.Clear();
Response.ContentType = "text/csv";
Response.AppendHeader("Content-Disposition",
string.Format("attachment; filename={0}.csv", DateTime.Now));
Response.Write(TextBox_Data.Text);
Context.Response.End();
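One extra point to watch (my observation, not part of the original answer): the default string form of DateTime.Now contains characters such as '/' and ':' that are not valid in file names, so some browsers will mangle or rename the download. Formatting the timestamp avoids that, for example:
// Sketch: build a file name without characters that are invalid on disk.
string fileName = string.Format("export_{0:yyyyMMdd_HHmmss}.csv", DateTime.Now);
Response.AppendHeader("Content-Disposition", "attachment; filename=" + fileName);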
Just a complement to joshb's answer regarding the use of Response.End():
MSDN does not recommend using Response.End() for non-error cases, and in some cases it can actually cause the client to lose some data.
In my case the downloaded CSV would sometimes lose the last bytes of the last line, so I removed the Response.End() and used
HttpContext.Current.ApplicationInstance.CompleteRequest()
instead, and I had to override the page's Render(HtmlTextWriter writer) method so it does not write anything to the response, since the CSV was already written.
public class PageBase : Page
{
private bool performRegularPageRender = true;
protected override void Render(HtmlTextWriter writer)
{
if (performRegularPageRender)
base.Render(writer);
}
public void SkipRegularPageRendering()
{
performRegularPageRender = false;
}
}
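A usage sketch of this approach (assuming the page inherits from PageBase and uses the same TextBox as in the question): write the CSV, skip the normal render, and complete the request instead of calling Response.End():
// Sketch: export the CSV without Response.End().
Response.Clear();
Response.ContentType = "text/csv";
Response.AppendHeader("Content-Disposition",
    string.Format("attachment; filename={0:yyyyMMdd_HHmmss}.csv", DateTime.Now));
Response.Write(TextBox_Data.Text);
SkipRegularPageRendering();                                // defined in PageBase above
HttpContext.Current.ApplicationInstance.CompleteRequest(); // ends the request gracefully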
More info / credits:
msdn blog; Is Response.End() considered harmful?; PostBack and Render Solutions? Overrides

Remote file download via ASP.NET - corrupted file

I am using the code below, which I found on one of the forums, to download a file from a remote server. It seems to be working; however, the downloaded file is corrupted and I cannot unzip it.
Do you have any idea why this is? Or, if my approach is wrong, could you suggest a better way?
protected void Page_Load(object sender, EventArgs e)
{
string url = "http://server/scripts/isynch.dll?panel=AttachmentDownload&NoteSystem=SyncNotes&NoteType=Ticket&NoteId=1&Field=supp&File=DisplayList%2etxt";
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
req.Credentials = new NetworkCredential("user", "pass");
HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
////Initialize the output stream
Response.Clear();
Response.ContentType = "application/octet-stream";
Response.AppendHeader("Content-Disposition:", "attachment; filename=" + "DisplayList.txt");
Response.AppendHeader("Content-Length", resp.ContentLength.ToString());
////Populate the output stream
byte[] ByteBuffer = new byte[resp.ContentLength];
Stream rs = req.GetResponse().GetResponseStream();
rs.Read(ByteBuffer, 0, ByteBuffer.Length);
Response.BinaryWrite(ByteBuffer);
Response.Flush();
///Cleanup
Response.End();
rs.Dispose();
}
First of all, use application/octet-stream, as it is the standard content type for downloads.
new byte[resp.ContentLength + 1] will define a buffer which is one byte larger than the content length. I believe this is the reason for the corruption. Use new byte[resp.ContentLength].
I actually recommend re-writing it and removing the MemoryStream:
const int BufferLength = 4096;
byte[] byteBuffer = new byte[BufferLength];
Stream rs = req.GetResponse().GetResponseStream();
int len = 0;
while ( (len = rs.Read(byteBuffer,0,byteBuffer.Length))>0)
{
if (len < BufferLength)
{
Response.BinaryWrite(byteBuffer.Take(len).ToArray());
}
else
{
Response.BinaryWrite(byteBuffer);
}
Response.Flush();
}
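On .NET 4 or later the same idea can be expressed with Stream.CopyTo, which already writes only the bytes it reads (a sketch, assuming the headers from the question's code have already been set):
// Sketch: copy the remote response stream straight to the output stream.
using (Stream rs = req.GetResponse().GetResponseStream())
{
    rs.CopyTo(Response.OutputStream);
}
Response.Flush();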
The article at http://support.microsoft.com/default.aspx?scid=kb;en-us;812406 solved my problem. Many thanks to @Aliostad for his effort to help me.

Returning plain text or other arbitrary file in ASP.NET

If I were to respond to an http request with a plain text in PHP, I would do something like:
<?php
header('Content-Type: text/plain');
echo "This is plain text";
?>
How would I do the equivalent in ASP.NET?
If you only want to return plain text like that, I would use an ashx file (a Generic Handler in VS). Then just add the text you want to return in the ProcessRequest method.
public void ProcessRequest(HttpContext context)
{
context.Response.ContentType = "text/plain";
context.Response.Write("This is plain text");
}
This removes the added overhead of a normal aspx page.
You should use Response property of Page class:
Response.Clear();
Response.ClearHeaders();
Response.AddHeader("Content-Type", "text/plain");
Response.Write("This is plain text");
Response.End();
Example in C# (for VB.NET just remove the semicolon at the end of each line):
Response.ContentType = "text/plain";
Response.Write("This is plain text");
You may want to call Response.Clear beforehand in order to ensure there are no headers or content in the buffer already.
Response.ContentType = "text/plain";
Response.Write("This is plain text");
And if you migrate to ASP.NET Core / Blazor:
string str = "text response";
byte[] bytes = Encoding.ASCII.GetBytes(str);
context.Response.ContentType = "text/plain";
await context.Response.Body.WriteAsync(bytes, 0, bytes.Length);
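If the Microsoft.AspNetCore.Http namespace is available, the WriteAsync extension on HttpResponse handles the string-to-bytes encoding for you (UTF-8 by default), so the sketch above can shrink to:
// Sketch (ASP.NET Core): WriteAsync encodes the string for you.
context.Response.ContentType = "text/plain";
await context.Response.WriteAsync("This is plain text");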
