I have an ASP.NET 4.0 web application that renders a PDF, created on the fly with Crystal Reports, inside an iFrame in the client's browser via a binary write. The app works well on my dev station (Visual Studio 2012 Web Developer), but when I tested it on our production server (Win2003, IIS 6), the page where the user clicks a button to stream the PDF shows only part of the document (just the page header logo; the rest of the PDF is blank).
As a test, instead of streaming the binary data to the client's browser, I saved the PDF to a local directory on the server's virtual path. That file's integrity was fine: all the data appeared in the saved document.
I've been Googling for three days and have tried all kinds of code snippets, IIS configurations (the log files don't show any errors), server permissions, HTML headers, web.config tags, etc., with no luck. I suspect it has something to do with IIS, but I can't find the solution. Here is the code that renders the PDF in the client's browser:
' reportObj is a Crystal Reports document object (.rpt)
Dim s As System.IO.MemoryStream = reportObj.ExportToStream(ExportFormatType.PortableDocFormat)
s.Seek(0, SeekOrigin.Begin)
With HttpContext.Current.Response
    .ClearContent()
    .ClearHeaders()
    .ContentType = "application/pdf"
    .AddHeader("Content-Disposition", "inline; filename=" & fileName)
    .BinaryWrite(s.ToArray())
    .Flush()
    .End()
End With
I also tried these, all worked on my dev machine, but not on the server:
With HttpContext.Current.Response
    .Clear()
    .ClearHeaders()
    .Buffer = True
    .AddHeader("Content-Disposition", "inline; filename=doc.pdf")
    .AddHeader("Content-Length", s.Length.ToString())
    .AddHeader("Content-Transfer-Encoding", "8bit")
    .AddHeader("Cache-Control", "private")
    .AddHeader("Pragma", "cache")
    .AddHeader("Expires", "0")
    .AddHeader("X-UA-Compatible", "IE=EmulateIE7")
    .ContentType = "application/pdf"
    .BinaryWrite(s.ToArray())
    .Flush()
    .Close()
    HttpContext.Current.ApplicationInstance.CompleteRequest()
End With
And also these lines:
.AppendHeader("Accept-Ranges", "none")
.OutputStream.Write(s.ToArray(), 0, Convert.ToInt32(s.Length))
Nothing seems to help.
In the end I had to implement another approach, described in this thread:
Response.TransmitFile and delete it after transmission
I hope it helps someone.
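For reference, the TransmitFile-then-delete approach from that thread looks roughly like this (a C# sketch, not the author's exact code; `reportObj` and `fileName` are the names from the question, and the temp-file path is illustrative):

```csharp
// Sketch: export the report to a temp file, stream it with TransmitFile,
// then delete it. Assumes this runs inside an ASP.NET page.
string tempPath = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString("N") + ".pdf");
reportObj.ExportToDisk(ExportFormatType.PortableDocFormat, tempPath);

Response.ClearContent();
Response.ClearHeaders();
Response.ContentType = "application/pdf";
Response.AddHeader("Content-Disposition", "inline; filename=" + fileName);
Response.TransmitFile(tempPath);   // IIS streams the file without buffering it all in managed memory
Response.Flush();                  // push the bytes to the client before deleting the temp file
File.Delete(tempPath);
HttpContext.Current.ApplicationInstance.CompleteRequest();
```

TransmitFile hands the file off to IIS rather than copying it through a MemoryStream, which sidesteps whatever was truncating the buffered response on IIS 6.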
Related
I am working on an ASP.NET/VB.NET web application in which a file is generated and sent to the client when a button on the page is clicked. I have the following code to do this:
Dim text_file_name As String = WriteOutputFile() ' Generate output file
Response.ClearContent()
Response.Clear()
Response.ContentType = "text/plain"
Response.AddHeader("Content-Disposition", "attachment; filename=" + text_file_name + ";")
Response.Flush()
HttpContext.Current.ApplicationInstance.CompleteRequest()
File.Delete(text_file_name)
This appears to complete, and a file is duly downloaded, but on opening it I find it contains the web page's HTML rather than the intended file text. I note, though, that the file (extension .csv) is opened in Excel, so the browser is getting at least that part of the message.
I have verified that the file is created as intended by leaving out the File.Delete and watching the files accumulate in the server's directory.
In a previous attempt I had
Response.End()
in place of the complete request; this also generated a .csv file, but one containing the details of a thread exception.
Does anyone know what I am doing wrong?
You're sending the intended file name to the browser as a header hint, but you aren't actually sending the file itself. To do that, use Response.WriteFile().
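The corrected sequence might look like this (a C# sketch of the idea; `textFileName` stands in for the VB variable in the question, and the delete at the end relies on response buffering being on, which is the default):

```csharp
// Sketch: WriteFile actually puts the file's bytes into the response body;
// the Content-Disposition header alone only names the download.
Response.ClearContent();
Response.ContentType = "text/plain";
Response.AddHeader("Content-Disposition", "attachment; filename=" + Path.GetFileName(textFileName));
Response.WriteFile(textFileName);                           // send the file content
Response.Flush();
HttpContext.Current.ApplicationInstance.CompleteRequest();  // finish without a ThreadAbortException
File.Delete(textFileName);                                  // the bytes are already buffered/sent
```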
I am currently using Visual Studio 2008 for my ASP.NET application. I am trying to serve an Excel file via the Response object. The problem is that I cannot seem to set the title of the file to Japanese: if I set a Japanese file name, it comes back as garbage characters. I am using a Japanese IE browser on a Japanese WinXP.
Response.AppendHeader("Content-Type", "application/vnd.ms-excel");
Response.AddHeader("Content-Disposition", String.Format("attachment; filename=\"{0}\"", "日本語.xls"));
OR
Response.AddHeader("Content-Disposition", String.Format("attachment; filename=\"{0}\"", Server.HtmlEncode("日本語.xls")));
I already tried changing the encoding to Shift-JIS:
Response.Charset = "Shift_JIS";
or
Response.Charset = "sjis";
Any ideas? Btw, I had the same problem with Visual Studio 2005 too.
I'm not an ASP.NET expert, but have you tried re-encoding the filename using UrlEncode?
Response.AddHeader("Content-Disposition",
System.Web.HttpUtility.UrlEncode(String.Format("attachment; filename=\"{0}\"", "日本語.xls")));
Response.Charset only concerns the body of the HTTP response. According to the HTTP spec, headers are implicitly encoded as ISO-8859-1; characters outside that encoding have to be MIME-encoded.
This is only logical: after all, the body encoding set by Response.Charset is itself specified in a header.
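As an illustration of "MIME-encoded" in practice, one common workaround (a sketch, not from this thread; note that the RFC 5987 `filename*` form is understood by modern browsers but postdates the IE6/IE7 era discussed here) is to percent-encode the UTF-8 bytes of the name so the header itself stays ASCII:

```csharp
// Sketch: keep the Content-Disposition header ASCII-only by percent-encoding
// the UTF-8 bytes of the filename (RFC 5987 style; illustrative only).
string name = "日本語.xls";
string encoded = Uri.EscapeDataString(name);  // e.g. "%E6%97%A5%E6%9C%AC%E8%AA%9E.xls"
Response.AddHeader("Content-Disposition",
    "attachment; filename=\"" + encoded + "\"; filename*=UTF-8''" + encoded);
```

Browsers that understand `filename*` decode the real Japanese name; older ones fall back to the percent-encoded `filename=` value.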
I got it working, finally... :)
Using System.Web.HttpUtility.UrlPathEncode solves the garbage problem, but when you open the file it shows the percent-encoded name instead of the actual Japanese characters. (This is an issue in IE7 and IE6; UrlPathEncode works fine with IE8.)
So, instead of using System.Web.HttpUtility.UrlPathEncode, you should decode the filename using the encoding used for the response header.
In .NET, the encoding of the response header is utf-8 by default; change it to iso-8859-1.
Modify web.config accordingly, as shown below:
<globalization responseHeaderEncoding="iso-8859-1" .../>
And code would be,
' Apply the response header's encoding (i.e. iso-8859-1) to the filename.
Dim fileName As String = "在庫あり全商品を24時間以内に出荷.doc"
Dim enc As Encoding = Encoding.GetEncoding("shift_jis")
Dim dec As Encoding = Encoding.GetEncoding("iso-8859-1")
fileName = dec.GetString(enc.GetBytes(fileName))

' Show the download dialog box and write the file to the client.
Response.ClearHeaders()
Response.ContentType = corspBean.ContentType
Response.AppendHeader("Content-Disposition", "attachment; filename=""" & fileName & """")
Response.BinaryWrite(bytes)
Response.End()
Now, one more important thing that I wasted a lot of time on: this doesn't work on the ASP.NET Development Server, i.e. the server you use to test/debug web apps on your local machine. So deploy the solution to IIS and test from there; it works perfectly fine on IIS. (And IIS is the destiny of every ASP.NET app anyway, so it doesn't matter whether it works on the ASP.NET Development Server or not.)
I build a file path (path, below) in a string (I am aware of Path in System.IO, but I am using someone else's code and do not have the opportunity to refactor it to use Path), then use a FileStream to deliver the file to the user:
FileStream myStream = new FileStream(path, FileMode.Open, FileAccess.Read);
long fileSize = myStream.Length;
byte[] Buffer = new byte[(int)fileSize + 1];
myStream.Read(Buffer, 0, (int)myStream.Length);
myStream.Close();
Response.ContentType = "application/csv";
Response.AddHeader("content-disposition", "attachment; filename=" + filename);
Response.BinaryWrite(Buffer);
Response.Flush();
Response.End();
I have seen, in ASP.NET How To Stream File To User, reasons to avoid the use of Response.End() and Response.Close().
I have also seen several articles about different ways to transmit files and have diagnosed and found a solution to the problem (https and http headers) with a colleague.
However, the error message that was being displayed was not about access to the file at path, but the aspx file.
Edit: Error message is:
Internet Explorer cannot download MyPage.aspx from server.domain.tld
Internet Explorer was not able to open this Internet site. The requested site is either unavailable or cannot be found. Please try again later.
(page name and address anonymised)
Why is this? Is it due to the contents of the file coming from the HTTP response .Flush() method rather than a file being accessed at its address?
Even though you are sending a file, it is the "page" that contains the header information describing the file you are sending. The browser still has to download that page; it then sees the "attachment; filename=" header and gives you the file instead.
So if there is an error, it is the page that is shown as the problem. It's a bit like getting a corrupted email with an attachment: you see the problem in the email, not in the attachment itself.
Don't call Response.End();
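The usual alternative to Response.End() looks like this (a sketch; `Buffer` is the byte array from the question's code):

```csharp
// Sketch: flush the response and mark the request complete instead of
// calling Response.End(), which internally aborts the thread.
Response.BinaryWrite(Buffer);
Response.Flush();
HttpContext.Current.ApplicationInstance.CompleteRequest();  // no ThreadAbortException
```

CompleteRequest() skips the remaining pipeline events without tearing down the thread, which is why it avoids the exception that End() raises.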
I have a website that has several PDF files. I need to have quite a few of them locked down with the standard ASP.NET authentication (in a folder with web.config that denies anonymous users).
I set PDF files to get handled by the ASP.NET worker process and added:
<add type="System.Web.StaticFileHandler" path="*.pdf" verb="*" />
to my web.config, but for some reason they hang when downloading. I've seen this issue before on an old server, and for the life of me I can't remember what I did to solve it. Does anyone have any idea?
Thanks.
I would open Fiddler and see what requests are actually being sent to the server and what the responses are. It may also depend on how you are requesting the file (i.e. the path) and how you are serving the PDF. I demonstrate how to return password-protected PDF documents in one of my WROX eBooks and in the handlers-and-modules presentation I do at user groups: http://professionalaspnet.com/archive/2008/08/10/CodeStock-Rocked_2100_.aspx (link to download the code on the page).
Here is the code I use to return a PDF using a Handler, it might help you out.
'First run the supplied filename through a URL decoder to change any URL
'characters back to their normal selves.
sDocument = HttpUtility.UrlDecode( _
    Path.GetFileName(context.Request.Url.ToString()))

'Now build a full physical path to the file on the server.
sDocument = Path.Combine(Path.Combine( _
    context.Request.PhysicalApplicationPath, "PDF"), sDocument)

'Verify we actually have the file to serve.
If Not File.Exists(sDocument) Then
    context.Server.Transfer("badpdf.aspx")
End If

'The next two sections are from Scott Hanselman's blog, but seem to be out of date,
'since it was posted in 2003:
'http://www.hanselman.com/blog/InternetExplorerAndTheMagicOfMicrosoftKBArticleQ293792.aspx
'This is done to support older browsers that were not as efficient
'at managing document requests.
If InStr(1, context.Request.UserAgent, "contype") > 0 Then
    'Just send the MIME type.
    context.Response.ContentType = "application/pdf"
    context.Response.End()
    Exit Sub
End If

Dim Language As String = context.Request.ServerVariables("HTTP_ACCEPT_LANGUAGE")
If String.IsNullOrEmpty(Language) Then
    context.Response.Clear()
    context.Response.ContentType = "application/pdf"
    context.Response.AddHeader("Last-Modified", "Mon, 01 Sep 1997 01:03:33 GMT")
    context.Response.Status = "304 Not Modified"
    context.Response.End()
    Exit Sub
End If

'Set the cacheability.
context.Response.Cache.SetCacheability(HttpCacheability.Public)
context.Response.Cache.SetExpires(DateTime.MinValue)
context.Response.ContentType = "application/pdf"

'This opens the Open/Save dialog.
context.Response.AddHeader("Content-Disposition", "attachment; filename=" & _
    Path.GetFileName(sDocument))

'This bypasses the Open/Save dialog:
'context.Response.AddHeader("Content-Disposition", "inline; filename=" & _
'    Path.GetFileName(sDocument))

If File.Exists(sDocument) Then
    'Write the file to the response output stream without buffering.
    'TransmitFile is new in ASP.NET 2.0; prior to that, use WriteFile.
    context.Response.TransmitFile(sDocument)
    context.Response.End()
End If
I think I finally figured out what I was missing: I needed to set the MIME type for PDF files to application/octet-stream instead of application/pdf.
I am looking to stream a file housed in a SharePoint 2003 document library down to the browser. Basically, the idea is to open the file as a stream and then "write" the file stream to the response, specifying the content-type and content-disposition headers. Content-disposition is used to preserve the file name; content-type, of course, clues the browser in about which app to open to view the file.
This works fine in the development and UAT environments. However, in the production environment things do not always work as expected, though only with IE6/IE7; Firefox works great in all environments.
Note that in the production environment SSL is enabled and generally used. (When SSL is not used in the production environment, the file streams down, is named as expected, and displays properly.)
Here is a code snippet:
System.IO.FileStream fs = new System.IO.FileStream(Server.MapPath(".") + "\\" + "test.doc", System.IO.FileMode.Open);
long byteNum = fs.Length;
byte[] pdfBytes = new byte[byteNum];
fs.Read(pdfBytes, 0, (int)byteNum);
Response.AppendHeader("Content-disposition", "filename=Testme.doc");
Response.CacheControl = "no-cache";
Response.ContentType = "application/msword; charset=utf-8";
Response.Expires = -1;
Response.OutputStream.Write(pdfBytes, 0, pdfBytes.Length);
Response.Flush();
Response.Close();
fs.Close();
Like I said, this code snippet works fine on the dev machine and in the UAT environment: a dialog box opens and asks to save, view, or cancel Testme.doc. But in production, and only when using SSL, IE6 & IE7 don't use the name of the attachment. Instead they use the name of the page that is sending the stream, testheader.aspx, and then an error is thrown.
IE does provide an advanced setting, "Do not save encrypted pages to disk".
I suspect this is part of the problem: the server tells the browser not to cache the file, while IE has "Do not save encrypted pages to disk" enabled.
Yes, I am aware that for larger files the code snippet above will be a major drag on memory, and this implementation will be problematic. So the real final solution will not read the entire file into a single byte array, but rather will open the file as a stream and send it down to the client in bite-size chunks (e.g. roughly 10 KB in size).
Anyone else have similar experience "streaming" binary files over ssl? Any suggestions or recommendations?
It might be something really simple. Believe it or not, I coded exactly the same thing today. I think the issue might be that the content disposition doesn't tell the browser it's an attachment, and therefore that it can be saved:
Response.AddHeader("Content-Disposition", "attachment;filename=myfile.doc");
Failing that, I've included my code below, as I know it works over https://
private void ReadFile(string URL)
{
    try
    {
        string uristring = URL;
        WebRequest myReq = WebRequest.Create(uristring);
        NetworkCredential netCredential = new NetworkCredential(
            ConfigurationManager.AppSettings["Username"].ToString(),
            ConfigurationManager.AppSettings["Password"].ToString(),
            ConfigurationManager.AppSettings["Domain"].ToString());
        myReq.Credentials = netCredential;
        StringBuilder strSource = new StringBuilder("");

        // Get the stream of data.
        string contentType = "";
        MemoryStream ms;

        // Send a request to download the document, then get the response.
        using (HttpWebResponse response = (HttpWebResponse)myReq.GetResponse())
        {
            contentType = response.ContentType;
            // Get the stream from the server.
            using (Stream stream = response.GetResponseStream())
            {
                // Use the ReadFully method from the link above:
                byte[] data = ReadFully(stream, response.ContentLength);
                // Wrap the bytes in a memory stream.
                ms = new MemoryStream(data);
            }
        }

        Response.Clear();
        Response.ContentType = contentType;
        Response.AddHeader("Content-Disposition", "attachment;");
        // Write the memory stream containing the file directly to the
        // Response object that gets sent to the client.
        ms.WriteTo(Response.OutputStream);
    }
    catch (Exception ex)
    {
        throw new Exception("Error in ReadFile", ex);
    }
}
OK, I resolved the problem; several factors were at play here.
Firstly, this Microsoft support article was beneficial:
Internet Explorer is unable to open Office documents from an SSL Web site.
In order for Internet Explorer to open documents in Office (or any out-of-process, ActiveX document server), Internet Explorer must save the file to the local cache directory and ask the associated application to load the file by using IPersistFile::Load. If the file is not stored to disk, this operation fails.
When Internet Explorer communicates with a secure Web site through SSL, Internet Explorer enforces any no-cache request. If the header or headers are present, Internet Explorer does not cache the file. Consequently, Office cannot open the file.
Secondly, something earlier in the page processing was causing the "no-cache" header to be written, so Response.ClearHeaders needed to be added; this cleared out the no-cache header, and the output of the page needs to allow caching.
Thirdly, for good measure, I also added Response.End, so that no other processing further on in the request lifetime attempts to clear the headers I've set and re-add the no-cache header.
Fourthly, I discovered that content expiration had been enabled in IIS. I've left it enabled at the web-site level, but since this one aspx page serves as a gateway for downloading the files, I disabled it at the download-page level.
So here is the code snippet that works (there are a couple of other minor changes, which I believe are inconsequential):
System.IO.FileStream fs = new System.IO.FileStream(Server.MapPath(".") + "\\" + "TestMe.doc", System.IO.FileMode.Open);
long byteNum = fs.Length;
byte[] fileBytes = new byte[byteNum];
fs.Read(fileBytes, 0, (int)byteNum);
Response.ClearContent();
Response.ClearHeaders();
Response.AppendHeader("Content-disposition", "attachment; filename=Testme.doc");
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.ContentType = "application/octet-stream";
Response.OutputStream.Write(fileBytes, 0, fileBytes.Length);
Response.Flush();
Response.Close();
fs.Close();
Response.End();
Keep in mind, too, that this is just for illustration. The real production code will include exception handling and will likely read the file a chunk at a time (perhaps 10 KB).
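That chunked variant might look like this (a sketch, not the production code; the 10 KB buffer size and the `path` variable are illustrative):

```csharp
// Sketch: stream the file in 10 KB chunks instead of one big byte array.
const int ChunkSize = 10 * 1024;
using (FileStream fs = File.OpenRead(path))
{
    byte[] chunk = new byte[ChunkSize];
    int read;
    while ((read = fs.Read(chunk, 0, chunk.Length)) > 0 && Response.IsClientConnected)
    {
        Response.OutputStream.Write(chunk, 0, read);
        Response.Flush();  // push each chunk to the client as it is read
    }
}
```

Checking Response.IsClientConnected on each pass lets the loop bail out early if the user cancels the download.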
Mauro, thanks for catching a detail that was missing from the code as well.