I am using Amazon S3 to store files. While storing, I encrypt the stream on the fly; on download, I decrypt the stream on the fly. This setup is working very well, but occasionally I get the following exception:
javax.crypto.IllegalBlockSizeException: Input length must be multiple of 16 when decrypting with padded cipher
What could be the possible reasons for this error? Is corruption of the data during upload/download one of the possibilities? If so, will this happen only when the padding bytes are corrupted, or when any byte in the file is corrupted?
[EDIT] The strange thing is that the file size stored in S3 is correct; it's not as if only half of the file got stored.
Yes, it is. It's most likely that you are receiving partial files. You should be able to check whether the connection was aborted before completion. To be sure you get the full, unchanged file, add an (H)MAC or use a cipher mode with built-in integrity validation (e.g. GCM).
[EDIT]: No, this particular decryption exception should only happen when the full file is not available, not when the file itself is corrupted. Better check the file handling upon receiving (e.g. forgetting to close a stream or to delete partial files).
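To make that concrete: the question is Java-side, but the behavior is the same on any platform. Below is a minimal C#/.NET sketch (key and nonce handling simplified for illustration) showing how an authenticated mode like GCM turns a corrupted or truncated download into an explicit error instead of garbage output:

using System;
using System.Security.Cryptography;
using System.Text;

class GcmIntegrityDemo
{
    static void Main()
    {
        // Illustration only: in practice the key comes from a key store, and
        // the nonce must be unique per encryption and stored with the file.
        byte[] key = RandomNumberGenerator.GetBytes(32);   // AES-256 key
        byte[] nonce = RandomNumberGenerator.GetBytes(12); // 96-bit GCM nonce
        byte[] plaintext = Encoding.UTF8.GetBytes("file contents");
        byte[] ciphertext = new byte[plaintext.Length];
        byte[] tag = new byte[16];

        // .NET 8+ constructor; on older runtimes use new AesGcm(key).
        using var gcm = new AesGcm(key, tagSizeInBytes: 16);
        gcm.Encrypt(nonce, plaintext, ciphertext, tag);

        // Simulate corruption (or truncation) during upload/download.
        ciphertext[0] ^= 0x01;

        byte[] decrypted = new byte[ciphertext.Length];
        try
        {
            gcm.Decrypt(nonce, ciphertext, tag, decrypted);
        }
        catch (CryptographicException)
        {
            // Unlike CBC with padding, GCM flags any modified or missing byte.
            Console.WriteLine("Integrity check failed: file is corrupt or incomplete.");
        }
    }
}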
According to the File Parsing section on the MIP SDK FAQs and known issues page, applying a sensitivity label results in a protected copy of the original being made:
Any labeled files will result in a copy of the input file with the label actions applied.
This raises a few questions:
Does the labeled copy ever touch the filesystem in an unprotected state? For example, does the SDK only begin applying label actions after making a full, unprotected copy of the original?
Does the MIP SDK allocate the disk space required to store an encrypted copy ahead of time (e.g. using fallocate(2) on Linux)?
Is there any risk of leaving behind partially encrypted or corrupted copies if the protecting process is suddenly terminated?
The version release history page also makes mention of a Timeout category of NetworkError:
Category=Timeout: Failed to establish HTTP connection after timeout (retry will probably succeed)
What is the HTTP connection timeout, and is it configurable?
In one of my Chef recipes, I am using encrypted data bags to hide the download path for a remote_file resource that I have defined.
However, when converging on a node, if the download fails for whatever reason, I can see all my secrets in the log.
Since I'm planning to deploy this on a CI server, I really don't want to have it displayed.
Is there any way to keep the data encrypted even on error?
You can try setting the sensitive attribute on the resource. This suppresses a lot of log data for some resources. For example, template resources will not log their contents when the sensitive attribute is set to true. I doubt it will suppress the URL of a remote_file, but it's worth a shot.
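A minimal sketch of that (the bag/item names and the URL key are placeholders, and the encrypted-bag lookup is simplified):

# Placeholder names throughout; sensitive true suppresses the resource's
# detail in the converge output for resources that honour it.
secret = data_bag_item('secrets', 'artifact')

remote_file '/tmp/artifact.tar.gz' do
  source secret['url']
  sensitive true
end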
My ASP.NET application has a FileUpload control. My server doesn't have any antivirus program. If I add a byte to the binary content of the file before saving it, can my server still be affected by a virus? When serving the file back, I will remove the extra byte from the content.
Thanks for the replies.
A virus will only cause you problems if it is run on the server (i.e. the file is opened). You can get around this by renaming all uploaded files with a .resources extension. All requests for this type of file are sent by IIS to ASP.NET, which rejects them. So effectively, the files store the data but can't be opened/run at all. Then you can still serve them back by reading their content in an ASP.NET page/module, and returning the data as a file with the correct extension.
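A sketch of the serve-back side (the handler name, storage path, query-string parameter, and restored filename are all assumptions, not a prescribed layout):

using System.Web;

// Hypothetical handler (e.g. Download.ashx) that serves a stored upload
// back under a usable name, even though it sits on disk as .resources.
public class DownloadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Assumed scheme: uploads stored as App_Data/uploads/<id>.resources.
        // Real code must validate id (e.g. reject "..") before building the path.
        string id = context.Request.QueryString["id"];
        string path = context.Server.MapPath("~/App_Data/uploads/" + id + ".resources");

        context.Response.ContentType = "application/octet-stream";
        context.Response.AddHeader("Content-Disposition",
            "attachment; filename=\"" + id + ".dat\"");
        context.Response.TransmitFile(path);
    }

    public bool IsReusable
    {
        get { return true; }
    }
}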
Transforming the data as you suggest will also provide a level of protection, though I'd probably do more than add a single byte. Perhaps run the whole stream through a reversible algorithm (e.g. fast encryption).
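For the reversible-transform idea, something along these lines (key/IV management is elided; they would need to be stored per file):

using System.IO;
using System.Security.Cryptography;

static class UploadStore
{
    // Sketch: pass the upload through AES on its way to disk, so the stored
    // bytes never match the original file. Serve it back by running the file
    // through CreateDecryptor with the same key/IV.
    public static void StoreTransformed(Stream upload, string destination,
                                        byte[] key, byte[] iv)
    {
        using (var aes = Aes.Create())
        using (var output = File.Create(destination))
        using (var crypto = new CryptoStream(output, aes.CreateEncryptor(key, iv),
                                             CryptoStreamMode.Write))
        {
            upload.CopyTo(crypto);
        }
    }
}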
Of course, this doesn't protect the client from any virus.
I am writing an upload handler (asp.net) to handle image uploads.
The aim is to check the image type and content size before the entire file is uploaded. So I cannot use the Request object directly as doing so loads the entire file input stream. I therefore use the HttpWorkerRequest.
However, I keep getting "The connection to the server was reset while the page was loading".
After quite a bit of investigation, it has become apparent that when posting the file, the call only works if the entire input stream is read.
This, of course, is exactly what I do not want to do :)
Can someone please tell me how I can close off the request without causing the "connection reset" issue and having the browser process the response?
There is no way to do this, as this is how HTTP functions. The best you can do is slurp the data from the client (i.e. read it in chunks) and immediately forget about it. This should prevent your memory requirements from being hammered, though it will hurt your bandwidth.
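A sketch of that slurp-and-discard approach, using the (unsupported but well-known) IServiceProvider trick to get at the HttpWorkerRequest:

using System;
using System.Web;

static class RequestDrain
{
    // Read the remaining request body in chunks and throw it away, so the
    // client sees a normal response instead of a connection reset.
    public static void DrainBody(HttpContext context)
    {
        var worker = (HttpWorkerRequest)
            ((IServiceProvider)context).GetService(typeof(HttpWorkerRequest));

        if (worker == null || worker.IsEntireEntityBodyIsPreloaded())
            return; // nothing left on the wire

        var buffer = new byte[8192];
        while (worker.ReadEntityBody(buffer, buffer.Length) > 0)
        {
            // Intentionally discard: memory stays flat, bandwidth is still spent.
        }
    }
}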
Ok, so here's the problem: I'm reading the stream from a FileUpload control, reading in chunks of n bytes and writing the array in a loop until I reach the stream's end.
Now, the reason I do this is that I need to check several things while the upload is still going on (rather than calling Save(), which does the whole thing in one go). Here's the odd part: when doing this from the local machine, I can see the file just fine as it's uploading, and its size increases (I had to add a Sleep() call in the loop to actually see the file being written).
However, when I upload the file from a remote machine, I don't get to see it until the file has completed uploading. I've also added another call that writes the progress to a text file as the upload goes on, and I get the same thing. Local: the file updates as the upload goes on; remote: the token file only appears after the upload is done (which is somewhat useless, since I need it while the upload is still happening).
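For concreteness, the loop described above presumably looks something like this (FileUpload1, savePath, and progressPath are stand-ins, not names from the original code):

using System;
using System.IO;

using (Stream input = FileUpload1.PostedFile.InputStream)
using (FileStream output = File.Create(savePath))
{
    byte[] buffer = new byte[4096];
    int read;
    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, read);
        output.Flush();                      // make progress visible on disk
        File.AppendAllText(progressPath,     // the "token file" mentioned above
            output.Length + Environment.NewLine);
        System.Threading.Thread.Sleep(100);  // the Sleep() from the question
    }
}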
Is there some sort of security setting in IIS (or ASP.NET) that saves files to a temporary location for remote machines, as opposed to the local machine, and then moves them to the specified destination? I would liken this to the way ASP.NET displays detailed error messages when browsing from the local machine (even on the public hostname), as opposed to the generic compilation error page/generic exception page shown when browsing from a remote machine (when customErrors is not set to Off).
Any clues on this?
Thanks in advance.
The FileUpload control renders as an <input type="file"> HTML element; your browser opens that file, reads ALL of its content, encodes it, and sends it.
Your ASP.NET request only starts after IIS has received all of the browser's data.
So you'll need to code a client component (Flash, Java applet, Silverlight) to send the file in small chunks and rebuild it server-side.
EDIT: Some information on MSDN:
To control whether the file to upload is temporarily stored in memory or on the server while the request is being processed, set the requestLengthDiskThreshold attribute of the httpRuntime element. This attribute enables you to manage the size of the input stream buffer. The default is 256 bytes. The value that you specify should not exceed the value that you specify for the maxRequestLength attribute.
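For illustration, a hedged web.config sketch; per the httpRuntime documentation these attributes are interpreted in kilobytes, and the numbers here are arbitrary:

<!-- Buffer request bodies larger than ~8 MB to disk rather than in memory;
     requestLengthDiskThreshold must not exceed maxRequestLength. -->
<system.web>
  <httpRuntime requestLengthDiskThreshold="8192" maxRequestLength="102400" />
</system.web>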
I understand that you want to check the content of the file while it is being uploaded.
If this is your requirement, then why not add a textbox and populate it while you are reading the file from HttpPostedFile?