OK, so here's the problem: I'm reading the stream from a FileUpload control in chunks of n bytes, writing each chunk out in a loop until I reach the end of the stream.
The reason I do this is that I need to check several things while the upload is still going on (rather than calling Save(), which does the whole thing in one go). When doing this from the local machine, I can see the file just fine as it's uploading, and its size increases (I had to add a Sleep() call in the loop to actually get to see the file being written).
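For reference, the loop is essentially this (a minimal sketch; the buffer size, destination path, and the Sleep are illustrative):

// Minimal sketch of the loop described above, inside a page with a
// FileUpload control named FileUpload1. Path and buffer size are illustrative.
Stream input = FileUpload1.PostedFile.InputStream;
byte[] buffer = new byte[8192];
using (FileStream output = File.Create(@"C:\uploads\" +
    Path.GetFileName(FileUpload1.PostedFile.FileName)))
{
    int read;
    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, read);
        output.Flush();                       // make the partial file visible on disk
        // ...run whatever mid-upload checks are needed here...
        System.Threading.Thread.Sleep(100);   // only so the growing file can be observed
    }
}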
However, when I upload the file from a remote machine, I don't get to see it until the file has completed uploading. I've also added a call that writes the progress to a text file while the upload is going on, and I get the same behavior. Local: the file updates as the upload goes on. Remote: the token file only appears after the upload is done (which is somewhat useless, since I need it while the upload is still happening).
Is there some sort of security setting in IIS (or ASP.NET) that saves files to a temporary location for remote machines, as opposed to the local machine, and then moves them to the specified destination? I would liken this to how ASP.NET displays detailed error messages when browsing from the local machine (even on the public hostname), as opposed to the generic compilation/exception error page shown to remote machines (when customErrors is not set to Off).
Any clues on this?
Thanks in advance.
The FileUpload control renders as an <input type="file"> HTML element, which means your browser opens the file, reads ALL of its content, encodes it, and sends it in a single request.
Your ASP.NET request only starts after IIS has received all of the browser's data.
So if you want progress while the transfer is happening, you'll need a client-side component (Flash, a Java applet, Silverlight) that sends the file in small chunks, plus server-side code that rebuilds the file from those chunks.
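The server side of such a scheme can be as simple as a handler that appends each posted chunk. This is only a sketch: the field names ("fileName", "chunk") and the target path are assumptions about what the client component would send, not part of any standard.

using System.IO;
using System.Web;

// Hypothetical chunk-receiving handler: the client component posts the file
// in small pieces, and each piece is appended to the file on the server.
public class ChunkUploadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // "fileName" and "chunk" are assumed field names used by the client.
        string fileName = Path.GetFileName(context.Request["fileName"]);
        HttpPostedFile chunk = context.Request.Files["chunk"];

        string target = Path.Combine(@"C:\uploads", fileName);
        using (FileStream fs = new FileStream(target, FileMode.Append, FileAccess.Write))
        {
            chunk.InputStream.CopyTo(fs); // .NET 4+; use a Read/Write loop on older versions
        }
    }

    public bool IsReusable { get { return false; } }
}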
EDIT: Some information on MSDN:
To control whether the file to upload is temporarily stored in memory or on the server while the request is being processed, set the requestLengthDiskThreshold attribute of the httpRuntime element. This attribute enables you to manage the size of the input stream buffer. The default is 256 bytes. The value that you specify should not exceed the value that you specify for the maxRequestLength attribute.
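In web.config that looks roughly like this (example values only; both attributes are specified in kilobytes):

<!-- Example values only: both attributes are in kilobytes. -->
<system.web>
  <httpRuntime maxRequestLength="102400"
               requestLengthDiskThreshold="8192" />
</system.web>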
I understand that you want to check the file that is being uploaded for its content.
If that is your requirement, then why not add a TextBox and populate it while you are reading the file from HttpPostedFile?
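Something along these lines (a rough sketch; the control names FileUpload1 and txtInfo are made up for illustration):

// Rough sketch: surface the posted file's details in a TextBox while handling it.
// Control names (FileUpload1, txtInfo) are made up for illustration.
HttpPostedFile file = FileUpload1.PostedFile;
txtInfo.Text = string.Format("Name: {0}, Size: {1} bytes, Type: {2}",
    Path.GetFileName(file.FileName), file.ContentLength, file.ContentType);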
This may be a naive question, but...
When we use the HTML file input control to upload a file, the browser masks the full path of the file for security reasons, e.g.: C:\fakepath\XXXXXX.txt.
Why does this have to be enforced? Since the client is the one uploading the file, they obviously know its location, so why can't client script just see the full path?
But how does the server get the stream of bytes from the client?
Can somebody explain what is happening behind the scenes?
OS: Windows; browsers: all.
The server does not need to know the local path; the browser just sends it a stream of bytes. The local path is displayed for the user's benefit, nothing else.
If you're asking how the BROWSER knows where the file is, that is a good question, but you didn't say what your OS is.
You should know that the server is completely separated from the client.
The client application sends the server a message which contains the content of the file and a file name (just the name of the file, not the directory). The change of the actual path to C:\fakepath\... is made only to prevent scripts on the client's side from learning anything about the original location, which may contain sensitive information you don't want to publish.
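For illustration, the message the browser actually sends looks roughly like this (multipart/form-data; the boundary string, field name, and headers here are illustrative, and note that only the bare file name appears):

POST /upload.aspx HTTP/1.1
Host: example.com
Content-Type: multipart/form-data; boundary=----boundary123
Content-Length: ...

------boundary123
Content-Disposition: form-data; name="file1"; filename="XXXXXX.txt"
Content-Type: text/plain

...the raw bytes of the file...
------boundary123--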
I'm working on a task where I need to create a file server-side and move this file to a USB key.
Is it possible to copy a file from a web server to a USB key?
(Are there any security issues?)
Furthermore, the user needs to indicate the path the file should be saved to. Is there a control like the ASP.NET upload control where the user can browse to the right directory, or is the simple solution to use a textbox where the user can type e.g. "E:\mygeneratedfiles"?
The USB key is on the user's local machine.
From the ASP.NET perspective, you can return the file in the HTTP response, but once the file is sent to the client's web browser, you're pretty much out of luck.
There might be something you can do with JavaScript to streamline the saving process (not my area of expertise), but accessing the client's filesystem directly, especially writing to it, is out of the question. If you want to do that, you'll have to write an ActiveX control or a similar type of plugin.
Edit:
For returning the file in the HTTP response, load your file into a one-dimensional byte array and use the following code pattern:
context.Response.Clear()
' Generic content type here; set the real MIME type if you know it.
context.Response.ContentType = "application/octet-stream"
context.Response.AddHeader("Content-Disposition", "attachment; filename=" & objFile.FileName)
context.Response.BinaryWrite(objFile.FileImage)
context.Response.End()
In this example objFile.FileName is the file name string and objFile.FileImage is a Byte array containing the file. context is the current HttpContext.
See the code samples on the FileUpload control documentation page.
My ASP.NET application has a FileUpload control. My server doesn't have any antivirus program. If I add a byte to the binary content of the file before saving it, can my server still be affected by a virus? When serving the file back, I will remove the extra byte from the content.
Thanks for the replies.
A virus will only cause you problems if it is run on the server (i.e. the file is opened). You can get around this by renaming all uploaded files with a .resources extension. All requests for this type of file are sent by IIS to ASP.NET, which rejects them. So effectively, the files store the data but can't be opened/run at all. Then you can still serve them back by reading their content in an ASP.NET page/module, and returning the data as a file with the correct extension.
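Serving one of these files back might look like this (a sketch; the stored path, original name, and content type are hypothetical and would come from wherever you keep the upload's metadata):

// Sketch: stream a stored ".resources" upload back under its real name/type.
// The stored path, original name, and MIME type are hypothetical; in practice
// they would be looked up from a database or similar.
string storedPath = Server.MapPath("~/uploads/abc123.resources");
string originalName = "photo.jpg";
Response.Clear();
Response.ContentType = "image/jpeg";
Response.AddHeader("Content-Disposition", "attachment; filename=" + originalName);
Response.WriteFile(storedPath);
Response.End();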
Transforming the data as you suggest will also provide a level of protection, though I'd probably do more than add a byte to the end. Perhaps run the whole stream through a reversible algorithm (e.g. a fast encryption or something).
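For example, a simple XOR pass is trivially reversible (running it a second time restores the original); it isn't real encryption, just enough to ensure the stored bytes are never the original file. A sketch:

// Sketch: XOR every byte with a fixed key so the stored file is never
// byte-for-byte the original. Applying it again restores the data,
// since XOR is its own inverse. The key value is arbitrary.
static void XorTransform(byte[] data, byte key)
{
    for (int i = 0; i < data.Length; i++)
        data[i] ^= key;
}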
Of course, this doesn't protect the client from any virus.
I'd like to understand what happens under the hood when you do a web upload.
I'm guessing it's one of these:
The file is loaded into memory by the browser, sent to the web server's buffer memory, and then the app is notified to collect it.
The file is read by the browser and sent to the web server at the same time, so the server can start saving the bytes progressively.
I've tried uploading a very large file and putting a breakpoint on the first line of the method receiving the upload. I saw the browser take a long time loading... the breakpoint was not hit until a while later, once the upload had finished.
I want to understand this because, in the worst case, if I allow big uploads they could blow up the server's memory at some point.
What happens if I upload a 2 GB file (assuming the web server/app accepts that length)? Would it take up 2 GB of server memory?
Cheers.
The documentation for the HttpPostedFile class (which represents a file uploaded to the server in ASP.NET) specifies:
Files are uploaded in MIME multipart/form-data format. By default, all requests, including form fields and uploaded files, larger than 256 KB are buffered to disk, rather than held in server memory.
I am writing an upload handler (ASP.NET) to handle image uploads.
The aim is to check the image type and content size before the entire file is uploaded, so I cannot use the Request object directly, since touching it loads the entire file input stream. I therefore use HttpWorkerRequest instead.
However, I keep getting "The connection to the server was reset while the page was loading".
After quite a bit of investigation, it has become apparent that when posting the file, the request only completes properly if the entire input stream is read.
This, of course, is exactly what I do not want to do :)
Can someone please tell me how I can close off the request without causing the "connection reset" issue and having the browser process the response?
There is no way to do this, as this is simply how HTTP works. The best you can do is slurp the data from the client (i.e. read it in chunks) and immediately discard it. This should prevent your memory requirements from being hammered, though it will still cost you the bandwidth.
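A sketch of that drain-and-discard approach via HttpWorkerRequest (the service-lookup cast is the usual way to get at the worker request; the buffer size is arbitrary):

// Sketch: read the rest of the request body in chunks and throw it away,
// so the client can finish sending and then receive a normal response.
HttpWorkerRequest worker = (HttpWorkerRequest)
    ((IServiceProvider)HttpContext.Current).GetService(typeof(HttpWorkerRequest));
byte[] buffer = new byte[8192];
if (!worker.IsEntireEntityBodyIsPreloaded())
{
    int read;
    while ((read = worker.ReadEntityBody(buffer, buffer.Length)) > 0)
    {
        // intentionally discard the data
    }
}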