Download large files from website - asp.net

I know this question has been asked in many places, but I would like to know the best solution, one that won't add overhead on the server.
My scenario is that a user should be able to click a link on the website, and that link should fetch the correct file from the server and send it to the user.
I have seen solutions like the one below,
string filename1 = "newfile.txt";
string filename = @"E:\myfolder\test.txt";
Response.ContentType = "application/octet-stream";
Response.AddHeader("Content-Disposition", "attachment; filename=\"" + filename1 + "\"");
Response.TransmitFile(filename);
and like the one in this post,
Download/Stream file from URL - asp.net

The difference I spotted between the two methods you mention is that:
1. One uses Response.TransmitFile
2. The other uses Response.WriteFile
In order to compare the two methods you should look at these links:
TransmitFile
Writes the specified file directly to an HTTP response output stream, without buffering it in memory.
WriteFile
Writes the specified file directly to an HTTP response output stream.
Clearly, Response.TransmitFile() sends the file to the client machine without loading it into the application's memory on the server, whereas Response.WriteFile() loads the file being downloaded into the server's application memory.
I would say use Response.TransmitFile() for larger files based on this alone.
However, you will need to look into how this impacts other parts of your application before you make a final decision.
This has been pretty comprehensively debated on various forums. Look for it.
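For reference, a minimal sketch contrasting the two calls, reusing the path and download name from the question:
string path = @"E:\myfolder\test.txt";
Response.ContentType = "application/octet-stream";
Response.AddHeader("Content-Disposition", "attachment; filename=\"newfile.txt\"");
// Streams the file to the client without buffering it in server memory - the better choice for large files.
Response.TransmitFile(path);
// Alternative: loads the whole file into the application's memory before writing it out,
// which can put pressure on the server for large files.
// Response.WriteFile(path);
Response.End();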

Related

File upload and store with lighttpd

I am running lighttpd in Linux on an embedded platform.
Now I want to make it possible to transfer a file to the system, with an upload web page containing a file selector and an "Upload" button (an HTML form with a file input). The selected file is transferred in an HTTP POST request as multipart/form-data. The file should then simply be stored as a regular file in the file system.
I already have a CGI interface, a bash script which receives the request and passes it to the backend C++ application. And because it is an embedded platform, I would like to avoid using PHP, Python, etc. only for this case.
As far as I can see, lighttpd is not able to save the received files directly from a multipart-encoded request body to plain files, correct?
To decode the body I found the 'munpack' tool from the mpack package, which writes the encoded body parts to files on disk, but it is intended for MIME-encoded emails. Nevertheless, I can call it in the CGI bash script, and it works almost as expected, except that it can't handle the terminating boundary (the boundary ID given in 'Content-Type' followed by two dashes), so the last file still contains the final boundary. Update: this munpack behaviour came from a faulty script, but it still doesn't work: munpack produces wrong files when the body contains CRLF line endings; only LF produces the correct result.
Is there any other direct request-to-file-on-disk approach? Or do I really have to filter out the terminating boundary manually in the script, or write a multipart-message parser in my C++ application?
To make the use case clear: a user should be able to upload a firmware file to my system. So he connects to my system with a web browser and receives an upload page where he can select the file and send it with an "Upload" button. This transferred file should then simply be stored on my system. The CGI script for receiving the request already exists (as well as a C++ backend where I could handle the request, too); the only problem is converting the multipart/form-data encoded file to a plain file on disk.
Now I want to make it possible to transfer a file to the system, through an HTTP POST request. The file should simply be stored as a regular file in the file system.
That sounds more like it should be an HTTP PUT rather than an HTTP POST.
As far as I can see, lighttpd is not able to save the received files directly from a multipart-encoded request body to plain files, correct?
Do you mean application/x-www-form-urlencoded with the POST?
Why multipart-encoded? Are there multiple files being uploaded?
lighttpd mod_webdav supports PUT. Otherwise, you need your own program to handle the request body, be it a shell script or a compiled program. You can use libfcgi with your C++, or you can look at the C programs that lighttpd uses for testing, which implement FastCGI and SCGI in < 300 lines of C each.
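If the PUT route is acceptable, a minimal lighttpd.conf sketch along these lines should do; the upload URL used here is a placeholder:
server.modules += ( "mod_webdav" )
$HTTP["url"] =~ "^/firmware($|/)" {
    webdav.activate = "enable"
    # allow PUT and other write methods on this path
    webdav.is-readonly = "disable"
}
The client then uploads with a plain HTTP PUT, and lighttpd writes the body to a file under the document root without any multipart decoding.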

400 Bad request for ASP .NET Web API filestream characters

I have a file upload API running in IIS. I am calling it from a WPF Application. I have file names like "Test-12345–Coverage–TestFile-2018Jul23".
If you look at this file name, it has both a '-' and a '–'. Even though they look alike, they are different characters. Our clients get these filenames generated by an external system, but when I try to upload these files through my Web API, I get a 400 Bad Request error. It looks like '–' is treated as an invalid character in the HTTP request, making it a bad request. If I change '–' to a '-', it works fine.
Is there a list of restricted characters in HTTP request stream objects? If so, I would like to know it and share it with my clients. There is a debate about whether the application should handle these characters, since the Windows file system allows naming files with them. But if IIS rejects them, I don't know what to do. Please advise.
Stream stream = System.IO.File.OpenRead(@"C:\Desktop\Test- NoticeInfo – funding- July 26, 2018.pdf");
StreamContent content = new StreamContent(stream, Convert.ToInt32(buffer));
_httpClient.PostAsync("http://myurl/api/v1/UploadDocument", content); // The code breaks here
Notice the dashes in the file name; there are two occurrences. The first one is a standard keyboard '-'. The second one is a different character (an en dash). These file names are generated by another system, and we use them when uploading into our system. But for some reason IIS does not like the second '–'; it treats the whole request as a bad request.
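If the filename is being put into a Content-Disposition header on the outgoing request (that part is not shown in the snippet above), one way to get non-ASCII characters such as the en dash through is the RFC 5987 encoded form; a minimal sketch, assuming .NET 4.5+ and System.Net.Http:
using System.Net.Http;
using System.Net.Http.Headers;

var content = new StreamContent(stream);
// FileNameStar emits a filename* parameter with the value percent-encoded (RFC 5987),
// so the en dash never appears as a raw non-ASCII byte in the header value.
content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
{
    FileNameStar = "Test-12345–Coverage–TestFile-2018Jul23.pdf"
};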

ASP.NET/IIS6 - Disable chunked encoding when using dynamically compressed content?

I'm running ASP.NET on an IIS6 server. Right now the server is set up to compress dynamically generated content, mainly to reduce the page size of ASPX files that are being retrieved.
One of the ASPX files has the following bit of code, used to fetch a file from the database and send it to the user:
Response.Clear();
Response.Buffer = true;
Response.ContentType = Document.MimeType;
Response.AddHeader("content-disposition", "attachment;filename=\"" + Document.Filename + Document.Extension + "\"");
Response.AddHeader("content-length", Document.FileSizeBytes.ToString());
byte[] docBinary = Document.GetBinary();
Response.BinaryWrite(docBinary);
The download itself works perfectly. However, the person downloading the file doesn't get a progress bar, which is incredibly annoying.
From the research I've been doing, it seems that IIS sets the transfer-encoding to chunked when compressing dynamic content, and that this removes the content-length header, because sending both would violate the HTTP/1.1 standard.
What's the best way to get around this without turning dynamic compression off at the server level? Is there a way through ASP.NET to programmatically turn off compression for this response? Is there a better way to go about doing things?
You can turn on/off compression at the site or folder level by modifying the metabase. For more information see:
Enabling HTTP Compression (IIS 6.0)
Scroll down to: "To enable HTTP Compression for Individual Sites and Site Elements"
To do this you need elevated rights (Administrator at least).
You might have to place the download page in its own folder and turn off compression at that level, so as not to affect other parts of the site.
I have to admit I've not tried this but it's what I'd attempt first.
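Following the linked article, the per-directory switch is the DoDynamicCompression metabase property, set with adsutil.vbs; a rough sketch, where the site ID "1" and the "Downloads" folder are placeholders:
cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set W3SVC/1/ROOT/Downloads/DoDynamicCompression false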

How to download a file and immediately serve it to the client in ASP.NET?

Basically I need to serve files from a location that requires Windows authentication. Instead of having my clients deal with it directly, I would like to implement a process so that they can simply download the files as if they were on my server, after they have logged in to my system, of course. Here is what I have so far, which doesn't seem to work correctly:
// Create the request
WebRequest request = HttpWebRequest.Create(button.CommandArgument);
request.Credentials = new NetworkCredential(_username,_password);
// Get the response
WebResponse response = request.GetResponse();
StreamReader responseStream = new StreamReader( response.GetResponseStream());
// Send the response directly to output
Response.ContentEncoding = responseStream.CurrentEncoding;
Response.ContentType = request.ContentType;
Response.Write(responseStream.ReadToEnd());
Response.End();
When I try this I am able to view the file, but something is wrong with the encoding or the content type and, for example, a PDF will contain 16 blank pages (instead of 16 pages of text).
Any idea what am I missing?
Feel free to change the title of this question if there is a better way of phrasing this question
Update:
Tried the two responses below but with no luck. I now think that the content type and encoding are OK, but maybe the authentication is failing? The content-length is a lot smaller than it actually should be... Am I using the wrong method for Windows Authentication?
Depending on how/what you have, I would do a few things.
First of all, call Response.Clear() to remove anything that might already have been rendered.
I would then add a content-disposition header and send the file down as an actual attachment, rather than just writing it out to the user.
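A minimal sketch of that idea; the download name is a placeholder, and the remote response is copied as raw bytes (Stream.CopyTo, .NET 4+) rather than read through a StreamReader:
WebRequest request = WebRequest.Create(button.CommandArgument);
request.Credentials = new NetworkCredential(_username, _password);
using (WebResponse remote = request.GetResponse())
{
    Response.Clear();
    Response.ContentType = remote.ContentType;
    Response.AddHeader("content-disposition", "attachment; filename=\"report.pdf\"");
    // Copy the bytes straight through instead of decoding them as text
    remote.GetResponseStream().CopyTo(Response.OutputStream);
}
Response.End();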
It looks like you're sending the wrong content type in your last code block. You're sending the type of the user's original request instead of the content type of the file that you've retrieved. Change this:
Response.ContentType = request.ContentType;
to:
Response.ContentType = response.ContentType;
If your problem is related to network credentials, you may want to try a different approach. If you grant HTTP access to the identity that the web site's application pool is using, you can avoid having to specify the username/password credentials in the request. This also gives you the added benefit of not needing to store the password somewhere.
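In code, that would look something like this, assuming the application pool identity has been granted access on the remote site:
// Run the request as the identity the application pool uses, instead of explicit credentials
WebRequest request = WebRequest.Create(button.CommandArgument);
request.UseDefaultCredentials = true;  // or: request.Credentials = CredentialCache.DefaultCredentials;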

How can I prevent an XSS vulnerability when using Flex and ASP.NET to save a file?

I've implemented a PDF generation function in my flex app using alivePDF, and I'm wondering if the process I've used to get the file to the user creates an XSS vulnerability.
This is the process I'm currently using:
Create the PDF in the flex application.
Send the binary PDF file to the server using a POST, along with the filename to deliver it as.
An ASP.NET script on the server checks the filename to make sure it's valid, and then sends it back to the user as an HTTP attachment.
Given that, what steps should I take to prevent XSS?
Are there any other GET or POST parameters other than the filename?
In preventing XSS, there are three main strategies: validation, escaping, and filtering.
Validation: Upon detecting invalid characters, reject the POST request (and issue an error to the user).
Escaping: Likely not applicable when saving the file, as your OS will have restrictions on valid file names.
Filtering: Automatically strip the POST filename parameter of any invalid characters. This is what I'd recommend for your situation.
Within the ASP.NET script, immediately grab the POST string and remove the following characters:
< > & ' " ? % # ; +
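A rough sketch of that filtering step inside the ASP.NET handler; the form field name is a placeholder:
// Strip the characters listed above from the posted filename before using it
string rawName = Request.Form["filename"];
char[] disallowed = { '<', '>', '&', '\'', '"', '?', '%', '#', ';', '+' };
string safeName = string.Concat(rawName.Split(disallowed));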
How is this going to be XSS exploitable? You aren't outputting anything directly to the user. The filesystem will just reject strange characters, and when putting the file on the output stream, neither the name nor the content matters.
