Chrome showing "canceled" on successful file download (200 status) - asp.net

Not sure if this is an actual problem or not, but I'm writing a file out in ASP.NET, and even though the file always successfully goes through, in Chrome's developer tools, network tab, I always see the line in red, marked "Canceled".
I've tried lots of ways of doing this - for simplicity, I'm trying this with a simple text file, but it's true for PDF and other file types as well.
WebForms: I've tried it with lots of combinations of the following:
Response.Clear();
// and/or/neither
Response.ClearHeaders();
// with and without this
Response.Buffer = true;
Response.Charset = "";
// or/neither
Response.Charset = "utf-8";
// application/pdf for PDF, also tried application/octet-stream
Response.ContentType = "text/plain";
// with and without this
Response.AddHeader("Content-Length", bytes.Length.ToString());
Response.AddHeader("Content-Disposition", "attachment; filename=1.txt");
// bytes is the UTF8 bytes for a string or the PDF contents
new MemoryStream(bytes).WriteTo(Response.OutputStream);
// or
Response.Write("12345");
// any combination of the following 3, or none at all
Response.Flush();
Response.Close();
Response.End();
MVC (2 and 3, haven't tried 4):
byte[] fileContents = Encoding.UTF8.GetBytes("12345");
return File(fileContents, "text/plain", "1.txt");
// or
return File(@"C:\temp\1.txt", "text/plain", "1.txt");
It's always the same - the file goes through just fine, but dev tools still shows the request in red, marked "(canceled)".
I'm thinking of just ignoring it and moving on with life, but the red just bothers me. Any idea how I can deal with that?

This just means that Chrome didn't navigate away from the page. The behavior is by design. Don't worry about it.

Related

Failed - network error when getting a Zip file in Chrome

I am creating a zip with Ionic.Zip.dll like this (ASP.NET, C#):
zip.AddEntry("Document.jpeg", File.ReadAllBytes("Path"));
I want to download it like this:
Response.Clear();
Response.BufferOutput = false;
Response.ContentType = "application/zip";
Response.AddHeader("content-disposition", "filename=SuppliersDocuments.zip";
zip.Save(Response.OutputStream);
Response.Close();
I tested this code on localhost with Firefox and Chrome and it worked properly. But when I run it on the host, I get this error:
Failed - network error
Is my code wrong?
I ran into a similar issue with relaying an SSRS report. Taking @Arvin's suggestion, I did the following:
private void CreateReport(string ReportFormat)
{
ReportViewer rview = new ReportViewer();
// (setup report viewer object)
// Run report
byte[] bytes = rview.ServerReport.Render(ReportFormat, deviceInfo, out mimeType, out encoding, out extension, out streamids, out warnings);
// Manually create a response
Response.Clear();
Response.ContentType = mimeType;
Response.AddHeader("Content-disposition", string.Format("attachment; filename={0}.{1}", fileName, extension));
// Ensure the content size is set correctly
Response.AddHeader("Content-Length", bytes.Length.ToString()); // <- important
// Write to the response body
Response.OutputStream.Write(bytes, 0, bytes.Length);
// (cleanup streams)
}
The fix was adding the Content-Length header and setting it to the size of the byte array from reporting services.
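Applied to the zip question above, one way to know the length up front (a sketch on my part, not something the original poster confirmed) is to save the archive to a MemoryStream first and send the bytes yourself:
Response.Clear();
Response.ContentType = "application/zip";
Response.AddHeader("Content-Disposition", "attachment; filename=SuppliersDocuments.zip");
using (var ms = new MemoryStream())
{
    zip.Save(ms);                          // DotNetZip writes the whole archive into memory
    byte[] bytes = ms.ToArray();           // just the bytes that were written
    Response.AddHeader("Content-Length", bytes.Length.ToString());
    Response.OutputStream.Write(bytes, 0, bytes.Length);
}
Response.Flush();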

Getting a corrupted XLSX file when writing it to Response.OutputStream

In ASP.NET, I'm using NPOI to save to an Excel doc, and I've just moved to version 2+. It worked fine writing to xls, but switching to xlsx is a little more challenging. My new and improved code is adding lots of NUL characters to the output file.
The result is that Excel complains that there is "a problem with some content" and asks whether I want it to try to recover what it can.
Viewing the created xlsx file in a hex editor, those 00s go on for several pages. I literally deleted them in the editor until the file opened without an error.
Why does this code add so many NULs to this file??
using (var exportData = new MemoryStream())
{
workbook.Write(exportData);
byte[] buf = exportData.GetBuffer();
string saveAsFileName = sFileName + ".xlsx";
Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
Response.AddHeader("Content-Disposition", string.Format("attachment;filename={0}; size={1}", saveAsFileName, buf.Length.ToString()));
Response.Clear();
Response.OutputStream.Write(buf, 0, buf.Length);
exportData.Close();
Response.BufferOutput = true;
Response.Flush();
Response.Close();
}
(I've already tried BinaryWrite in place of OutputStream.Write, Response.End in place of Response.Close, and setting Content-Length with the length of the buffer. Also, none of this was an issue writing to xls.)
The reason you are getting a bunch of null bytes is because you are using GetBuffer on the MemoryStream. This will return the entire allocated internal buffer array, which will include unused bytes that are beyond the end of the data if the buffer is not completely full. If you want to get just the data in the buffer (which you definitely do), then you should use ToArray instead.
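A small illustration of the difference (the 256 is simply what the default buffer growth happens to allocate here):
using (var ms = new MemoryStream())
{
    ms.Write(new byte[10], 0, 10);            // write 10 bytes of real data
    int allocated = ms.GetBuffer().Length;    // 256 - the whole internal buffer, zero-padded past the data
    int written = ms.ToArray().Length;        // 10 - only the bytes actually written
}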
That being said, why are you writing to a MemoryStream at all? You already have a stream to write to: the OutputStream. Just write the workbook directly to that.
Try it like this:
Response.Clear();
Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
Response.AddHeader("Content-Disposition", string.Format("attachment; filename={0}", saveAsFileName));
workbook.Write(Response.OutputStream);
Response.Flush();
Response.Close();

How to download, then delete file in ASP.NET properly?

I've stumbled upon this problem recently: in my app, I'm providing users with the option of downloading multiple files from a list by putting them into a .zip archive and then downloading that. I naturally want that .zip file to be deleted once the download is finished. Here's my approach:
try {
Response.Clear();
Response.ContentType = "application/zip";
Response.AddHeader("Content-Disposition", "attachment; filename=filename.zip");
Response.TransmitFile(archive);
Response.Flush();
success = success && true;
return success;
} catch {
return false;
} finally {
System.IO.File.Delete(archive);
Response.End();
}
Now, the booleans are just there to indicate whether or not the download was successful, and I don't think they are important right now.
I thought this works as follows: the program first tries to transmit the file to the client; if no errors occur, it skips the catch block, and only after the file has been downloaded does it delete the archive.
However, I often get an error saying The process cannot access the file because it is being used by another process. on the line System.IO.File.Delete(archive);. This doesn't happen every time; as far as I can see, it's rather random.
Could anyone give me a hint toward the solution?
// Write the file into the response, flush it to the client,
// then delete it from disk before ending the response.
Response.ContentType = ContentType;
Response.AppendHeader("Content-Disposition",
    "attachment; filename=myFile.txt");
Response.WriteFile(Server.MapPath("~/uploads/myFile.txt"));
Response.Flush();
System.IO.File.Delete(Server.MapPath("~/uploads/myFile.txt"));
Response.End();
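If the lock persists even with that ordering, a workaround I'd sketch (an alternative of mine, not part of the answer above) is to read the archive into memory first, so the file handle is already released before anything is written to the response:
byte[] bytes = System.IO.File.ReadAllBytes(archive);   // file handle is closed when this returns
System.IO.File.Delete(archive);                        // safe to delete now
Response.Clear();
Response.ContentType = "application/zip";
Response.AddHeader("Content-Disposition", "attachment; filename=filename.zip");
Response.AddHeader("Content-Length", bytes.Length.ToString());
Response.BinaryWrite(bytes);
Response.Flush();
Response.End();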

ASP.Net Transmit File

I am writing a web application in ASP.NET.
I am trying to make a file dialog box appear for downloading something off the server.
I have the appropriate file data stored in a variable called file.
File has fields:
FileType - The MIMEType of the file
FilePath - The server-side file path
Here's the code so far:
Response.Clear();
Response.ContentType = file.FileType;
Response.AppendHeader("Content-Disposition", "attachment; filename=" + GetFileName(file));
Response.TransmitFile(file.FilePath) ;
Response.End();
GetFileName is a function that gets me the filename from an attachment object. I only store the path.
The above code is in a function called "Download_Clicked", an event handler that fires on click. The handler is wired up to a LinkButton.
The problem is that when I run the above code, nothing happens. The standard dialog box does not appear.
I have attempted the standard troubleshooting, such as making sure the file exists and ensuring the path is correct. They are both dead on the mark.
My guess is that because my machine is also the server, it may not be processing properly somehow.
Thanks in advance.
Edit 1: Attempted putting control onto another page, works fine.
Edit 2: Resolved issue by removing control from AJAX Update Panel.
I've found another way to do this without removing the update panel. Place the code below in your page load and you'll then be able to use that button to trigger a download.
ScriptManager.GetCurrent(this.Page).RegisterPostBackControl(Button);
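For context, a minimal sketch of where that call might live (btnDownload is a hypothetical LinkButton ID, not from the original post):
protected void Page_Load(object sender, EventArgs e)
{
    // Register the button for a full postback so the file response
    // isn't swallowed by the UpdatePanel's partial-rendering pipeline.
    ScriptManager.GetCurrent(this.Page).RegisterPostBackControl(btnDownload);
}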
Use Response.WriteFile() instead.
Also, don't use Response.End()! This aborts the thread. Use Response.Flush(); Response.Close();
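A sketch of the handler rewritten along those lines, reusing the file object and GetFileName helper from the question (everything else is my assumption):
protected void Download_Clicked(object sender, EventArgs e)
{
    Response.Clear();
    Response.ContentType = file.FileType;
    Response.AppendHeader("Content-Disposition", "attachment; filename=" + GetFileName(file));
    Response.WriteFile(file.FilePath);   // buffer the file into the response
    Response.Flush();                    // push it to the client
    Response.Close();                    // no ThreadAbortException, unlike Response.End()
}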
Try changing
Response.AppendHeader("Content-Disposition", "attachment; filename=" + GetFileName(file));
To
Response.AppendHeader("Content-Disposition", "attachment; filename=" + Path.GetFileName(GetFileName(file)));
If that doesn't work, you can always use Response.BinaryWrite or Response.Write to stream the file to the web browser.
Here is how to transmit the file using Response.Write or Response.BinaryWrite. Put these functions in a library somewhere, then call them as needed:
public void SendFileToBrowser(String FileName, String MIMEType, String FileData)
{
Response.Clear();
Response.AddHeader("Content-Disposition", "attachment; filename=" + FileName);
Response.ContentType = MIMEType;
Response.Buffer = true;
Response.Write(FileData);
Response.End();
}
public void SendFileToBrowser(String FileName, String MIMEType, Byte[] FileData)
{
Response.Clear();
Response.AddHeader("Content-Disposition", "attachment; filename=" + FileName);
Response.ContentType = MIMEType;
Response.Buffer = true;
Response.BinaryWrite(FileData);
Response.End();
}
Then somewhere you call these functions like so
SendFileToBrowser("FileName.txt", "text/plain", "Don't try this from an Update Panel. MSAjax does not like it when you mess with the response stream.");
See edit on initial post.
Removing the AJAX Update Panel resolved the error. The panel was stopping the postback to the server.
For more info, see Cris Valenzuela's comment.

Why doesn't Content-Disposition header work in IE 8?

I'm trying to stream a text file (CSV) to the response, and the following code works perfectly in Firefox 3, but when I use IE, it looks like it wants to download the actual .aspx page, and complains that the file contents don't match the file extension or type. If I then choose to download the file anyway, it correctly downloads the CSV data and opens it in Excel. What am I doing wrong?
DataTable dt = ExtensionsProvider.ListPrivateCallCostsForCsv(reportFilter.BusinessUnit, reportFilter.StartDate,
reportFilter.EndDate);
Response.Clear();
Response.Buffer = true;
Response.ContentType = "text/csv";
Response.AddHeader("Content-Disposition", "filename=" + GetExportFileName());
DataTableHelper.WriteCsv(dt, Response.Output, false);
Response.End();
Response.AddHeader("Content-Disposition", "filename=" + GetExportFileName());
Should be:
Response.AddHeader("Content-Disposition", "attachment;filename=" + GetExportFileName());
Without a main Content-Disposition value, IE will just use the trailing part of the URL — something.aspx — as a filename.
(The above assumes GetExportFileName() returns a sanitised filename stripped of most punctuation. What can go in a header parameter as token or quoted-string in IE is a matter of some annoyance; see this question for details)
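For completeness, a sketch of the question's code with that one-line fix applied (DataTableHelper and GetExportFileName are the question's own helpers):
Response.Clear();
Response.Buffer = true;
Response.ContentType = "text/csv";
// "attachment" is the disposition type; filename is only a parameter of it
Response.AddHeader("Content-Disposition", "attachment;filename=" + GetExportFileName());
DataTableHelper.WriteCsv(dt, Response.Output, false);
Response.End();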
It does not work with inline either. Of course, it works in all other browsers:
HttpServletResponse response = aCtx.getResponse();
response.setContentType("text/plain");
response.addHeader("Content-Disposition", "inline;filename=log.txt");
You have to give the value for the Content-Disposition header, in addition to the filename parameter.
You may have more luck with the "inline" value than the "attachment" value:
Response.AddHeader("Content-Disposition", "inline;filename=" + GetExportFileName());
