I have an ASP.NET application that allows users to download a file after entering a password. I use the code below to send the file to the user:
Context.Response.Clear();
Context.Response.ContentType = "application/pdf";
Context.Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
Context.Response.BinaryWrite(File.ReadAllBytes(fileName));
Context.Response.Flush();
Context.Response.Close();
The problem is that downloads become very slow when files are larger than 1 MB or when many users are downloading at the same time. Is it possible to optimize this code for better performance?
You might use Response.TransmitFile(/* your file */); instead of Response.BinaryWrite(/* your file */);
The TransmitFile() method writes the file to the HTTP output stream directly from disk, without buffering it in server memory.
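For the question's code, the swap looks roughly like this (a sketch, keeping the fileName variable from the question):
Context.Response.Clear();
Context.Response.ContentType = "application/pdf";
Context.Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
// TransmitFile streams the file from disk to the client without loading it into memory
Context.Response.TransmitFile(fileName);
Context.Response.Flush();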
Why are you managing the downloads manually? Why not just put a link to the appropriate PDF file on the page shown after a successful login? This frees up the ASP.NET threads so you don't use them to manage the file download. IIS will still have to serve the files, but I think it would reduce your overhead significantly.
Are you worried about the file name being exposed? If so, reply - there are a few other options you can explore.
Related
I am working on an ASP.NET/VB.NET web application in which a file is to be generated and sent to the client when a button on the page is clicked. I have the following code to do this:
Dim text_file_name As String = WriteOutputFile() ' Generate output file
Response.ClearContent()
Response.Clear()
Response.ContentType = "text/plain"
Response.AddHeader("Content-Disposition", "attachment; filename=" + text_file_name + ";")
Response.Flush()
HttpContext.Current.ApplicationInstance.CompleteRequest()
File.Delete(text_file_name)
This appears to complete, and a file is duly downloaded, but on opening it I find it contains the web page's HTML rather than the intended file text. I observe, though, that the file (extension .csv) is opened in Excel, so the browser is getting at least that part of the message.
I have verified that the file is created as intended by leaving out the File.Delete and watching the files accumulate in the server's directory.
In a previous attempt I had
Response.End()
in place of the CompleteRequest call; this also generated a .csv file, but one containing the details of a thread exception.
Does anyone know what I am doing wrong?
You're sending the intended file name to the browser as a header hint, but you aren't actually sending the file itself. To do that, call Response.WriteFile() before flushing.
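In other words, the question's code with that one call added (a sketch, untested):
Dim text_file_name As String = WriteOutputFile() ' Generate output file
Response.Clear()
Response.ContentType = "text/plain"
Response.AddHeader("Content-Disposition", "attachment; filename=" + text_file_name)
Response.WriteFile(text_file_name) ' actually write the file's bytes to the response
Response.Flush()
HttpContext.Current.ApplicationInstance.CompleteRequest()
File.Delete(text_file_name)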
Currently I am generating an XML file for download, from posted fields, with the following code:
string attachment = "attachment; filename=" + FileName + ".xml";
Response.ClearContent();
Response.ContentType = "application/xml";
Response.AddHeader("content-disposition", attachment);
Response.Write(Session["FileForDownload"]);
Response.End();
This is working fine.
However, I want to upload the generated file via SFTP to a specified directory on a server.
I have had success connecting with SSH.NET and have been able to create a new directory, etc.
My question is: how can I generate the file and then SFTP it using SSH.NET?
I've tried using a file stream with no success. I'm guessing the file needs to be temporarily stored and then retrieved for upload.
This is my current code segment for the specified problem:
SftpClient sftp = new SftpClient("host", "user", "pwd");
sftp.Connect();
sftp.ChangeDirectory("directory/");
Stream fs = File.OpenRead(Server.MapPath(@"filetobeuploaded"));
sftp.UploadFile(fs, Session["FileName"].ToString());
sftp.Disconnect();
I recognize that there won't be a file already on the server to upload.
Any help would be much appreciated as this is the final piece of the puzzle in my application.
Cheers
Fixed: I found a solution by generating a temporary XML file on the server, uploading it, and then deleting it. Thanks for your reply anyway.
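For anyone landing here with the same problem, a minimal sketch of that temp-file approach, reusing the SftpClient calls and session keys from the question's code:
// Write the generated XML to a temporary file on the server
string tempPath = Path.Combine(Path.GetTempPath(), Session["FileName"].ToString());
File.WriteAllText(tempPath, Session["FileForDownload"].ToString());
using (SftpClient sftp = new SftpClient("host", "user", "pwd"))
{
    sftp.Connect();
    sftp.ChangeDirectory("directory/");
    using (Stream fs = File.OpenRead(tempPath))
    {
        sftp.UploadFile(fs, Session["FileName"].ToString());
    }
    sftp.Disconnect();
}
// Remove the temporary file once the upload has completed
File.Delete(tempPath);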
Environment:
- SharePoint 2010 Foundation
- Claims-based authentication
- Execution timeout in web.config set to 3600
Overview:
We have an Excel export feature where we connect to AD and SQL databases to fetch users and their related data for a particular Organization Unit (OU) in Active Directory.
We have one OU in AD with around 1,400 users in it. We are using the Open XML/ClosedXML libraries to generate the Excel file, which works fine and takes about 11-14 minutes to produce a file on the server at the following path:
C:\inetpub\wwwroot\wss\VirtualDirectories\VirtualDirectoryName\Excel\FileName.xlsx
Immediately after generating the file, we have the following piece of code, which reads the file from the server, dumps it to the output stream, and presents an open/save dialog in the browser to the end user.
Problem Description:
When an Organization Unit has fewer users and it does not take more than 5-6 minutes to generate the file on the server, the following piece of code successfully downloads the file in the browser. But for the OU mentioned above, where we have 1,400 users, the Response.WriteFile call fails, and in the browser we see 'Browser cannot display this web page' (with Fiddler on, we found it returns an HTTP 504 error). Surprisingly, if we perform this export from the server itself (i.e. browse the web site on the server), it downloads without issue.
protected void lnkbtnDownloadFile_Click(object sender, EventArgs e)
{
String fileName = @"C:\inetpub\wwwroot\wss\VirtualDirectories\VirtualDirectoryName\Excel\540KBFileWhichFails.xlsx";
//File size is only ~500 KB
//Wait for around 12 minutes, to mimic the file-generation delay seen on the staging and production environments.
System.Threading.Thread.Sleep(720000);
try
{
if (fileName != "")
{
var file = new FileInfo(fileName);
if (file.Exists)
{
Response.Clear();
Response.AddHeader("Content-Disposition", "attachment; filename=" + file.Name);
Response.AddHeader("Content-Length", file.Length.ToString());
Response.ContentType = "application/octet-stream";
Response.WriteFile(file.FullName);
Response.End();
}
else
Response.Write("This file does not exist.");
}
}
catch (Exception ex)
{
//This would usually be a ThreadAbortException, but that's expected.
}
}
We don't see any errors in the ULS logs or event logs specific to this behavior.
Please note that Response.TransmitFile gives the same behavior.
Any ideas?
What I suspect here is that you have run into the session lock: the download, the file generation, and all these calls use the session, and the session locks every other request until it finishes.
To solve this issue, do two things:
When you generate the file, do it either on a separate thread or in a handler that does not require the session.
Download the file from a handler that does not use the session, not from the page post back.
For example, you make a handler, e.g. download.ashx, and link to it from your page as download.ashx?thisfileId=7723423&SecurityID=82jkj1288123. Inside the handler you read these parameters and send the file. If you serve the file from the page instead, one way is to disable session state for that page (assuming you don't use the session) by setting EnableSessionState="false" in the page's @ Page directive.
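A bare-bones sketch of such a handler; the query-string validation and the id-to-path lookup (MapFileIdToPath) are placeholders to replace with your own logic:
// download.ashx code-behind. Note: it does NOT implement IRequiresSessionState,
// so the request never takes the session lock.
public class Download : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string fileId = context.Request.QueryString["thisfileId"];
        string securityId = context.Request.QueryString["SecurityID"];
        // placeholder: validate securityId and map fileId to a physical path
        string path = MapFileIdToPath(fileId, securityId);
        context.Response.ContentType = "application/octet-stream";
        context.Response.AddHeader("Content-Disposition",
            "attachment; filename=" + Path.GetFileName(path));
        // stream from disk without buffering the whole file in memory
        context.Response.TransmitFile(path);
    }
    public bool IsReusable { get { return true; } }
}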
Some similar questions with session-related answers:
call aspx page to return an image randomly slow
Replacing ASP.Net's session entirely
How to deliver big files in ASP.NET Response?
I figured out the issue: it was the idle timeout in the hardware load balancer we were using. The default value in the load balancer was 0, which meant 11 minutes, and my file generation was taking longer than that, which caused the 504. Increasing the load balancer's idle timeout seems to be the solution.
I'm using the DotNetZip library to create a zip file of about 100 MB. I'm saving the zip file directly to the Response.OutputStream:
Response.Clear();
// no buffering - allows large zip files to download as they are zipped
Response.BufferOutput = false;
String ReadmeText= "Dynamic content for a readme file...\n" +
DateTime.Now.ToString("G");
string archiveName= String.Format("archive-{0}.zip",
DateTime.Now.ToString("yyyy-MMM-dd-HHmmss"));
Response.ContentType = "application/zip";
Response.AddHeader("content-disposition", "attachment; filename=" + archiveName);
using (ZipFile zip = new ZipFile())
{
// add a file entry into the zip, using content from a string
zip.AddFileFromString("Readme.txt", "", ReadmeText);
// add the set of files to the zip
zip.AddFiles(filesToInclude, "files");
// compress and write the output to OutputStream
zip.Save(Response.OutputStream);
}
Response.Close();
What I need is to split this 100 MB file into sections of about 20 MB each and provide a download facility to the user. How can I achieve this?
Your question is sort of independent of the ZIP aspect. Basically you want to make a large file of 100 MB or more available for download, and you want to do it in parts. Some options:
Save it to a regular file, then transmit it in parts. The client would have to make a distinct download request for each of the N parts, selecting the appropriate section of the file via the HTTP Range header. The server would have to be set up to serve ZIP files with the appropriate MIME type, etc.
Save it to a split (spanned) zip file, which implies N different files (see the sketch after this list). The client would then make an HTTP GET for each of the distinct files. The server would have to be set up to serve .zip, .z00, .z01, etc. I'm not sure whether built-in OS tools handle split zip files appropriately.
Save the file as one large blob and have the client use BITS or some other restartable download facility.
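If you go the split-zip route, DotNetZip can produce the segments itself; a sketch, assuming you save to a folder IIS serves statically (segmented archives can't be written straight to Response.OutputStream, since saving segments needs a seekable, file-based target):
using (ZipFile zip = new ZipFile())
{
    zip.AddFileFromString("Readme.txt", "", ReadmeText);
    zip.AddFiles(filesToInclude, "files");
    // emit segments of at most 20 MB: archive.z01, archive.z02, ..., archive.zip
    zip.MaxOutputSegmentSize = 20 * 1024 * 1024;
    zip.Save(Server.MapPath("~/downloads/" + archiveName));
}
The client then fetches each segment with an ordinary GET and recombines them with a zip tool that understands spanned archives.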
In our application, we allow users to upload documents, which can be PDF, DOC, XLS, or TXT. Uploaded documents are saved on the web server. We need to display a link for each document the user uploaded, and when the user clicks that link, it should open the relevant document. The user is expected to have the software required to open it.
To upload a document, we use the SaveAs method of the FileUpload control, and it works absolutely fine.
Now, how to view it?
I believe I need to copy/download the file to the user's local machine and open it using Process.Start.
For that I need to find the user's local temp directory, but if I use Path.GetTempPath(), it gives me a directory on the web server and copies the file there:
File.Copy(
sPath + dataReader["url"].ToString(),
Path.GetTempPath() + dataReader["url"].ToString(),
true);
Please advise.
You can't write to the user's drive from the webserver.
What you can do is just provide a link that will download the file to the client.
Set the Content-Disposition header to "attachment" to bring up a "save as" dialog, or to "inline" to let the browser display the file using the program registered on the client.
You can create a LinkButton with a server-side handler that contains code like this:
byte[] data = ...; // get the data from database or whatever
Response.Clear(); // no contents of the aspx file needed
Response.CacheControl = "private";
Response.ContentType = "application/pdf"; // or whatever the mimetype of your file is
Response.AppendHeader("Content-Disposition", "attachment;filename=statistic.pdf");
Response.AppendHeader("Content-Length", data.Length.ToString(CultureInfo.InvariantCulture));
Response.BinaryWrite(data);
Response.End(); // no further processing of the page needed
Can you not just put a link on the page pointing to the directory the files are in?
e.g.
<a href="downloadedfiles/filename.pdf">click here</a>
Once you've provided the link, your job is done. Mostly. The client's browser will handle loading the file when the link is clicked, if it can handle the file type based on the file extension.
I prefer to use an HTTP handler for referencing file links on a web page. This will be important on the day you need to implement security for uploaded-file access; otherwise, any user could access any file.
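As a rough illustration of that idea, a sketch of a handler that gates file access; the authentication check and the ~/uploads/ folder are assumptions to replace with whatever rules fit your application:
public class SecureFileHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // strip any directory component so users can't escape the uploads folder
        string fileName = Path.GetFileName(context.Request.QueryString["file"]);
        // placeholder check: require an authenticated user
        if (!context.User.Identity.IsAuthenticated)
        {
            context.Response.StatusCode = 403;
            return;
        }
        string path = context.Server.MapPath("~/uploads/" + fileName);
        context.Response.ContentType = "application/octet-stream";
        context.Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
        context.Response.TransmitFile(path);
    }
    public bool IsReusable { get { return true; } }
}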
You don't have to download the file to the user's machine.
// For pdf documents
Response.Clear();
string filePath = "File path on the web server";
Response.ContentType = "application/pdf"; // for pdf
Response.WriteFile(filePath);
// For word documents
Response.Clear();
string filePath = "File path on the web server";
Response.ContentType = "application/msword";
Response.WriteFile(filePath);
// similarly other file types
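Rather than one copy-pasted block per file type, a small lookup table keeps this manageable; a sketch, with the extension list as an assumption to extend as needed:
// map file extensions to content types; extend for the types you support
private static readonly Dictionary<string, string> MimeTypes =
    new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
    {
        { ".pdf", "application/pdf" },
        { ".doc", "application/msword" },
        { ".xls", "application/vnd.ms-excel" },
        { ".txt", "text/plain" },
    };

Response.Clear();
string filePath = "File path on the web server";
string contentType;
Response.ContentType = MimeTypes.TryGetValue(Path.GetExtension(filePath), out contentType)
    ? contentType
    : "application/octet-stream"; // safe fallback for unknown types
Response.WriteFile(filePath);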