I have an application with an ASP.NET (VB.NET) front end and an Oracle back end. In Oracle I have a procedure that generates files on two file servers (File Server A and File Server B). I have two servers: a development server and a client server. My application has a web page, 'GenerateReport.aspx', which is used to generate reports; based on the dates supplied, the back-end procedure creates files on File Servers A and B. When I host the application on the development server and download a generated file, it downloads completely. When I host the application on the client server and download the generated files, only part of the file is downloaded (56 KB of a 97 MB file). The code I use to download the file is given below.
Private Sub DownloadFileClient(ByVal RemoteFilePath As String)
    Try
        Dim File As System.IO.FileInfo
        File = New System.IO.FileInfo(RemoteFilePath)
        If File.Exists Then
            Response.Clear()
            Response.AddHeader("Content-Disposition", "attachment; filename=" & File.Name)
            Response.AddHeader("Content-Length", File.Length.ToString())
            Response.ContentType = "application/octet-stream"
            Response.TransmitFile(File.FullName)
            Response.End()
        Else
            lblErrorMsg.Text = "Unable to Download"
        End If
    Catch ex As Exception
        lblErrorMsg.Text = "Unable to Download, check file path"
    End Try
End Sub
Flush the response stream before you call Response.End().
Actually, unless there's other stuff you left out, you should Flush(), but don't bother calling Response.End().
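If buffering turns out to be the issue, another common approach for large files is to stream the file in chunks, flushing after each write and checking `IsClientConnected` so the loop stops if the client drops. This is a sketch only, not tested against the poster's environment; the 64 KB chunk size is an arbitrary choice:

```vb
Private Sub StreamFileInChunks(ByVal path As String)
    Dim file As New System.IO.FileInfo(path)
    Response.Clear()
    Response.ContentType = "application/octet-stream"
    Response.AddHeader("Content-Disposition", "attachment; filename=" & file.Name)
    Response.AddHeader("Content-Length", file.Length.ToString())
    Response.BufferOutput = False   ' don't hold the whole file in server memory

    Using fs As New System.IO.FileStream(path, IO.FileMode.Open, IO.FileAccess.Read)
        Dim buffer(65535) As Byte   ' 64 KB chunks (arbitrary size)
        Dim bytesRead As Integer = fs.Read(buffer, 0, buffer.Length)
        While bytesRead > 0 AndAlso Response.IsClientConnected
            Response.OutputStream.Write(buffer, 0, bytesRead)
            Response.Flush()
            bytesRead = fs.Read(buffer, 0, buffer.Length)
        End While
    End Using
    HttpContext.Current.ApplicationInstance.CompleteRequest()
End Sub
```

Ending with `CompleteRequest()` rather than `Response.End()` avoids the ThreadAbortException that `End()` raises internally.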
I am working on an ASP.NET/VB.NET web application in which a file is to be generated and sent to the client when a button on the page is clicked. I have the following code to do this:
Dim text_file_name As String = WriteOutputFile() ' Generate output file
Response.ClearContent()
Response.Clear()
Response.ContentType = "text/plain"
Response.AddHeader("Content-Disposition", "attachment; filename=" + text_file_name + ";")
Response.Flush()
HttpContext.Current.ApplicationInstance.CompleteRequest()
File.Delete(text_file_name)
This appears to complete, and a file is duly downloaded, but on opening it I find it contains the web page's HTML rather than the intended file text. I notice, though, that the file (extension .csv) is opened in Excel, so the browser is getting at least that part of the message.
I have verified that the file is created as intended by leaving out the File.Delete and watching the files accumulate in the server's directory.
In a previous attempt I had
Response.End()
in place of the complete request; this also generated a .csv file, but one containing the details of a thread exception.
Does anyone know what I am doing wrong?
You're sending the intended file name to the browser as a header hint, but you aren't actually sending the file itself. To do that, use Response.WriteFile().
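With that one addition, the question's own code becomes (a sketch; the WriteFile call is what actually puts the file's bytes in the response body the headers promised):

```vb
Dim text_file_name As String = WriteOutputFile() ' Generate output file
Response.ClearContent()
Response.Clear()
Response.ContentType = "text/plain"
Response.AddHeader("Content-Disposition", "attachment; filename=" + text_file_name + ";")
Response.WriteFile(text_file_name)              ' actually send the file contents
Response.Flush()
HttpContext.Current.ApplicationInstance.CompleteRequest()
File.Delete(text_file_name)                     ' safe once the response has been flushed
```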
I have an ASP.NET 4.0 web application that renders a PDF, created on the fly with Crystal Reports, in the client's browser inside an iframe via a binary write. The web app works well on my dev station (Visual Studio 2012 Web Developer), but when I tested it on our production server (Win2003, IIS 6), the step where the user clicks a button to stream the PDF shows only part of the document (just the page-header logo; the rest of the PDF is blank).

I ran a test: instead of streaming the binary data to the client's browser, I saved the PDF to a local directory on the server's virtual path. I could open that PDF and its integrity was fine; all data appeared in the saved document. I've been Googling for three days and have tried all kinds of code snippets, IIS configurations (the log files don't show any errors), server permissions, HTTP headers, web.config tags, etc., with no luck. I suspect it has something to do with IIS, but I can't find the solution. Here is the code that renders the PDF in the client's browser:
'reportObj is a Crystal Reports document object (.rpt)
Dim s As System.IO.MemoryStream = reportObj.ExportToStream(ExportFormatType.PortableDocFormat)
s.Seek(0, SeekOrigin.Begin)
With HttpContext.Current.Response
    .ClearContent()
    .ClearHeaders()
    .ContentType = "application/pdf"
    .AddHeader("Content-Disposition", "inline; filename=" & fileName)
    .BinaryWrite(s.ToArray)
    .Flush()
    .End()
End With
I also tried these, all worked on my dev machine, but not on the server:
With HttpContext.Current.Response
    .Clear()
    .ClearHeaders()
    .Buffer = True
    .AddHeader("Content-Disposition", "inline; filename=doc.pdf")
    .AddHeader("Content-Length", s.Length.ToString())
    .AddHeader("Content-transfer-encoding", "8bit")
    .AddHeader("Cache-Control", "private")
    .AddHeader("Pragma", "cache")
    .AddHeader("Expires", "0")
    .AddHeader("X-UA-Compatible", "IE=EmulateIE7")
    .ContentType = "application/pdf"
    .BinaryWrite(s.ToArray)
    .Flush()
    .Close()
    HttpContext.Current.ApplicationInstance.CompleteRequest()
End With
And also these lines:
.AppendHeader("Accept-Ranges", "none")
.OutputStream.Write(s.ToArray(), 0, Convert.ToInt32(s.Length))
Nothing seems to help.
I had to implement another approach, described in this thread:
Response.TransmitFile and delete it after transmission
I hope it helps someone.
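For reference, the pattern from that thread looks roughly like this. This is a sketch only; `tempPdfPath` is a hypothetical path where the report was first saved to disk, and the Flush before the delete matters because TransmitFile may not have sent anything until the response is flushed:

```vb
' Save the report to a temp file on the server first, then transmit it.
Response.Clear()
Response.ContentType = "application/pdf"
Response.AddHeader("Content-Disposition", "inline; filename=" & fileName)
Response.TransmitFile(tempPdfPath)   ' tempPdfPath: hypothetical server-side temp file
Response.Flush()
System.IO.File.Delete(tempPdfPath)   ' safe only after the flush completes
HttpContext.Current.ApplicationInstance.CompleteRequest()
```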
I've been working on a site that allows clients to SFTP data files to us (using SharpSSH.dll). My solution works fine on my development machine, but as soon as I move it to the production server I am unable to upload.
I know the problem is that my code is trying to pick up a file on the server when it obviously doesn't exist there, so I need some pointers on the best way to resolve this, i.e. how do I adjust my code to allow an SFTP upload from the client machine?
The plan is basically to use my web server as a go-between (and I'm not entirely sure I'm going about this the correct way), so the client logs on and SFTPs a file to another server. Advice and pointers are very welcome. Please see the code below:
transfer = New SecureFileTransfer("IP", "PORT", "NAME", "PASSWORD")
If transfer.putFile(FileUpload.PostedFile.FileName, company & "/" & filename) = True Then
    lblMsg.Text = "File upload complete!"
    'write data file details to table
    writeAudit()
    'check which account manager to alert and send email notification
    emailNotify()
Else
    lblMsg.Text = "File upload has failed - please try again..."
    Exit Sub
End If
Public Sub New(ByVal hostname As String, ByVal port As Integer, ByVal username As String, ByVal password As String)
    Me._hostname = hostname
    Me._port = port
    Me._username = username
    Me._password = password
End Sub
Public Function putFile(ByVal localFile As String, ByVal remotePath As String) As Boolean
    Try
        transfer = New Sftp(Me._hostname, Me._username, Me._password)
        transfer.Connect(Me._port)
        transfer.Put(localFile, remotePath)
        transfer.Close()
        Return True
    Catch ex As Exception
        Dim objWriter As New System.IO.StreamWriter("C:\logfile.txt")
        objWriter.Write(ex.Message)
        objWriter.Close()
        Return False
    End Try
End Function
I have checked my SFTP credentials and made sure that access from my web server to the SFTP server is valid and working. My log file gives me the following exception message:
Could not find file 'c:\windows\system32\inetsrv\x.txt'.
I think I may be misunderstanding how the FileUpload control works here, which may well be the crux of the problem.
I'm not sure what the exact problem is, but I would say that if you have tested your code and are happy that it works, then the code is not the first place to look. I would start with the configuration of the FTP server.
Hope this helps!
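One thing worth checking: FileUpload.PostedFile.FileName is the file name as seen on the client's machine, so passing it straight to putFile makes the production server look for a file it never had. That would explain the "c:\windows\system32\inetsrv\x.txt" error, which is just the worker process's current directory plus the client-side file name. A sketch of saving the upload to the server first and sending that copy (the temp path and cleanup are my assumptions, not the poster's code):

```vb
' Save the posted file to a server-side temp location first
Dim localPath As String = System.IO.Path.Combine( _
    System.IO.Path.GetTempPath(), _
    System.IO.Path.GetFileName(FileUpload.PostedFile.FileName))
FileUpload.SaveAs(localPath)

' Now putFile reads a file that actually exists on the web server
If transfer.putFile(localPath, company & "/" & filename) Then
    lblMsg.Text = "File upload complete!"
End If
System.IO.File.Delete(localPath)    ' clean up the temp copy
```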
I have a website which allows secure (SSL) file uploads and downloads. The site runs on a Windows 2003 server with IIS 6.0 and ASP.NET 2.0.
When using this code:
protected void StartDownLoad(string filename)
{
    Response.Clear();
    if (filename.EndsWith("zip"))
        Response.ContentType = "application/zip";
    else
        Response.ContentType = "application/msword";
    string path = "C:\\Inetpub\\sites\\testsite\\secureDocs\\" + filename;
    Response.WriteFile(path);
    string headDesc = "inline;filename=" + filename;
    Response.AddHeader("Content-Disposition", headDesc);
    Response.End();
}
In my tests a 62 MB file downloads without any problem, but a 65 MB file appears to start the download and then immediately stops. The HTTP error logs have four entries, each showing "Connection_Dropped". If I remove permissions from the folder and access the file directly through an https URL, I am able to download files that exceed 65 MB, so it doesn't seem to be an IIS issue. Is there an ASP.NET setting that restricts the response write? Is it an IIS issue? Has anyone run into this before? Any solutions?
You can try using
Response.TransmitFile(path)
instead of
Response.WriteFile(path)
TransmitFile() sends the file without buffering it in server memory, which avoids the problem WriteFile() can have with large files.
Bye.
I have a website that has several PDF files. I need to have quite a few of them locked down with the standard ASP.NET authentication (in a folder with web.config that denies anonymous users).
I set PDF files to get handled by the ASP.NET worker process and added:
<add type="System.Web.StaticFileHandler" path="*.pdf" verb="*" />
to my web.config, but for some reason the downloads hang. I've seen this issue before on an old server, and for the life of me I can't remember what I did to solve it. Does anyone have any idea?
Thanks.
I would open Fiddler and see what requests are actually being sent to the server and what the responses are. It may also depend on how you are requesting the file (i.e. the path) and how you are serving the PDF. I demonstrate how to return password-protected PDF documents in one of my WROX eBooks, and in the handlers-and-modules presentation I give at user groups: http://professionalaspnet.com/archive/2008/08/10/CodeStock-Rocked_2100_.aspx (there is a link to download the code on that page).
Here is the code I use to return a PDF using a Handler, it might help you out.
'First run the supplied filename through a URL decoder to change any URL
'characters back to their normal selves.
sDocument = HttpUtility.UrlDecode( _
    Path.GetFileName(context.Request.Url.ToString))

'Now build a full physical path to the file on the server.
sDocument = Path.Combine(Path.Combine( _
    context.Request.PhysicalApplicationPath, "PDF"), sDocument)

'Verify we actually have the file to serve.
If File.Exists(sDocument) = False Then
    context.Server.Transfer("badpdf.aspx")
End If

'The next two sections are from Scott Hanselman's blog, but seem to be out of date,
'since it was posted in 2003:
'http://www.hanselman.com/blog/InternetExplorerAndTheMagicOfMicrosoftKBArticleQ293792.aspx
'This is done to support older browsers that were not as efficient
'at managing document requests.
If InStr(1, context.Request.UserAgent, "contype") > 0 Then
    'Just send the mime/type
    Response.ContentType = "application/pdf"
    Response.End()
    Exit Sub
End If

Dim Language As String = context.Request.ServerVariables("HTTP_ACCEPT_LANGUAGE")
If String.IsNullOrEmpty(Language) Then
    Response.Clear()
    Response.ContentType = "application/pdf"
    Response.AddHeader("Last-modified", "Mon, 01 Sep 1997 01:03:33 GMT")
    Response.Status = "304 Not Modified"
    Response.End()
    Exit Sub
End If

'Set the Cacheability
Response.Cache.SetCacheability(HttpCacheability.Public)
Response.Cache.SetExpires(DateTime.MinValue)
Response.ContentType = "application/pdf"

'This opens the Open/Save dialog
Response.AddHeader("Content-Disposition", "attachment; filename=" & _
    Path.GetFileName(sDocument))
'This bypasses the Open/Save dialog
'Response.AddHeader("Content-Disposition", "inline; filename=" & _
'    Path.GetFileName(sDocument))

If File.Exists(sDocument) Then
    'Write the file to the Response output stream without buffering.
    'This is new in ASP.NET 2.0. Prior to that, use WriteFile.
    Response.TransmitFile(sDocument)
    Response.End()
End If
I think I finally figured out what I was missing. I needed to set the MIME type for PDF files to application/octet-stream instead of application/pdf.
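On IIS 7 and later, that MIME mapping can also be expressed in web.config rather than in IIS Manager. A sketch (on the IIS 6 servers mentioned earlier in this thread, the MIME type has to be set in IIS Manager instead):

```xml
<system.webServer>
  <staticContent>
    <remove fileExtension=".pdf" />
    <mimeMap fileExtension=".pdf" mimeType="application/octet-stream" />
  </staticContent>
</system.webServer>
```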