Efficiently encrypt/decrypt a large file with CryptoJS

I want to encrypt a large string (200 MB).
The string comes from a data URL (base64) corresponding to a file.
I'm doing the encryption in the browser.
My issue is that at the moment I chunk the string into small parts in an array, then encrypt those chunks.
Encrypting the string this way fills up the memory.
Here is how I'm doing it:
var encryptChunk = function(chunk, index) {
    encryptedChunks.push(aesEncryptor.process(chunk));
    sendUpdateMessage("encryption", index + 1, numberOfChunks);
};

chunkedString.forEach(encryptChunk);
encryptedChunks.push(aesEncryptor.finalize());
I assume there must be a better way of doing this, but I can't find a memory-efficient one.

I am doing something similar. To directly answer your question of "is there a more memory-efficient way?": I use a web worker to handle the progressive ciphering, which seems to work.
// pass in what you need here
var worker = new Worker("path/to/worker.js");
worker.postMessage({
    key: getKeyAndIvSomehow(),
    file: file,
    chunkSize: MY_CHUNK_SIZE
});

worker.addEventListener('message', function (e) {
    // create the blob from e.data.encrypted
});
You will need to import the CryptoJS script into your worker: importScripts('cryptoJS.all.min.js')
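The answer doesn't show the worker itself, so here is a minimal sketch of what worker.js might look like. It assumes the key and IV arrive in the message as CryptoJS WordArrays (the original only passes key via getKeyAndIvSomehow()), that the build includes the lib-typedarrays component so WordArray.create accepts an ArrayBuffer, and that FileReaderSync is available, as it is in dedicated workers:
// worker.js - a sketch, not the poster's actual worker
importScripts('cryptoJS.all.min.js');

self.addEventListener('message', function (e) {
    var file = e.data.file;
    var chunkSize = e.data.chunkSize;
    // progressive cipher: process() each chunk, finalize() once at the end
    var aes = CryptoJS.algo.AES.createEncryptor(e.data.key, { iv: e.data.iv });
    var reader = new FileReaderSync(); // synchronous reads are fine in a worker
    var encrypted = [];

    for (var offset = 0; offset < file.size; offset += chunkSize) {
        var buffer = reader.readAsArrayBuffer(file.slice(offset, offset + chunkSize));
        // requires the lib-typedarrays component to accept an ArrayBuffer
        var words = CryptoJS.lib.WordArray.create(buffer);
        encrypted.push(aes.process(words).toString());
    }
    encrypted.push(aes.finalize().toString());

    self.postMessage({ encrypted: encrypted.join('') });
});
Because the worker only ever decodes one chunk of plaintext at a time, it never holds the whole 200 MB string in memory at once.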

What are you doing with the encrypted chunks? If you're, say, uploading them over the network, you don't need to store them in an array first. Instead, you can upload the encrypted file chunk by chunk, either writing your own chunked upload implementation (it's not terribly hard) or by using an existing library.
Ditto for the input: you can encrypt it as you read it. You can use the JS File API to read the file in chunks, using the .slice() method.
Other than that, your code looks just like the recommended way to progressively encrypt a file.
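To illustrate that advice, here is a rough sketch of an encrypt-as-you-read loop using File.slice() and FileReader. uploadChunk() is a hypothetical helper standing in for whatever chunked-upload implementation or library you use, and this again assumes CryptoJS with the lib-typedarrays component:
// sketch: encrypt each slice as it is read, upload it, then read the next one
function encryptAndUpload(file, key, iv, chunkSize) {
    var aes = CryptoJS.algo.AES.createEncryptor(key, { iv: iv });
    var offset = 0;

    function readNext() {
        var reader = new FileReader();
        reader.onload = function () {
            var words = CryptoJS.lib.WordArray.create(reader.result);
            uploadChunk(aes.process(words).toString()); // hypothetical chunked-upload helper
            offset += chunkSize;
            if (offset < file.size) {
                readNext();
            } else {
                uploadChunk(aes.finalize().toString()); // flush the final cipher block
            }
        };
        reader.readAsArrayBuffer(file.slice(offset, offset + chunkSize));
    }

    readNext();
}
Nothing accumulates in an array: each chunk is read, encrypted, and handed off before the next one is touched.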

Related

How to load a binary file into Blob?

The Larger Context: I am attaching a file to a Confluence page. This is done by POSTing a multi-part request, containing the file, to the Confluence RESTful API.
I'm looking for a simple means of loading a complete binary file (a PNG) into a Blob so that I can compose the FormData object. The file is small (less than a meg), so I am content to load it all into memory.
I can compose the Blob from byte literals, but cannot yet see how to load file data into it.
The answer came to me shortly after.
const fileBytes = await Deno.readFile(filename);
const fileBlob = new Blob([fileBytes], {type: 'image/png'});
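From there, composing the multipart request is straightforward. A sketch of the upload step, in which confluenceBaseUrl, pageId, and token are placeholders of mine rather than values from the answer:
// sketch: attach the blob to a Confluence page via its REST API
const form = new FormData();
form.append('file', fileBlob, filename);

await fetch(`${confluenceBaseUrl}/rest/api/content/${pageId}/child/attachment`, {
    method: 'POST',
    headers: {
        'Authorization': `Bearer ${token}`,
        'X-Atlassian-Token': 'nocheck' // commonly required by Confluence for attachment uploads
    },
    body: form
});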

Other ways to transfer PDF bytes as an HttpResponseMessage?

I have a function that retrieves PDF bytes from another web service. What I want to do is make the PDF bytes available to others by creating an API call that returns an HttpResponseMessage.
My problem is that I don't think passing it through JSON is possible, because that converts the PDF bytes into a string.
Is there any other practical way of passing the PDF, or making the PDF visible to the requestors?
(Note: saving the PDF file in a specific folder and then returning the URL is prohibited in this specific situation)
I just solved it. There is a parameter, responseType: 'arraybuffer', which addresses this problem. Sample:
$http.post('/api/konto/setReport/pdf', $scope.konta, { responseType: 'arraybuffer' })
See my question and answer on SO: How to display a server side generated PDF stream in javascript sent via HttpMessageResponse Content
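To actually show the PDF once the buffer arrives, one option is to wrap it in a Blob and open an object URL; a small sketch building on the sample above:
// sketch: wrap the arraybuffer response in a Blob and open it in the browser
$http.post('/api/konto/setReport/pdf', $scope.konta, { responseType: 'arraybuffer' })
    .then(function (response) {
        var pdfBlob = new Blob([response.data], { type: 'application/pdf' });
        var url = URL.createObjectURL(pdfBlob);
        window.open(url); // or set it as the src of an <embed>/<iframe>
    });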

Streaming a file in Liferay Portlet

I have implemented downloading a file in a simple manner:
@ResourceMapping(value = "content")
public void download(ResourceRequest request, ResourceResponse response) {
    //...
    SerializableInputStream serializableInputStream = someService.getSerializableInputStream(id_of_some_file);

    response.addProperty(HttpHeaders.CACHE_CONTROL, "max-age=3600, must-revalidate");
    response.setContentType(contentType);
    response.addProperty(HttpHeaders.CONTENT_TYPE, contentType);
    response.addProperty(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename*=UTF-8''"
            + URLEncoder.encode(fileName, "UTF-8"));

    OutputStream outputStream = response.getPortletOutputStream();
    byte[] parcel = new byte[4096];
    int read;
    // write only the bytes actually read, not the whole buffer each time
    while ((read = serializableInputStream.read(parcel)) > 0) {
        outputStream.write(parcel, 0, read);
    }
    outputStream.flush();
    serializableInputStream.close();
    outputStream.close();
    //...
}
The SerializableInputStream is described here - JavaDocs. It allows an InputStream to be serialized and, for instance, passed over remoting.
I read from the input and write to the output, not all bytes at once. But unfortunately the portlet isn't "streaming" the contents - the file (e.g. an image) is sent to the browser only after the entire input stream has been read. I can see the file being read from the database (in the live logs), but I don't see any "growing" image on the screen.
What am I doing wrong? Is it possible to really stream a file in Liferay 6.0.6 and Spring Portlet MVC?
Where are you doing this? I fear that you're doing it instead of rendering your portlet's HTML (i.e. in the render phase). Typically the portlet content is embedded in an HTML page, so you need the resource phase, which (roughly) behaves like a servlet.
Also, the code you give does not match the actual question you ask: you use a comment (//read from input stream (file), write file to os) and ask what to do differently in order not to have the full content in memory.
As the comment does not keep anything in memory, and you could loop through reading from the input file while writing to the output stream: what's the underlying question? Do you have problems implementing download-streaming in a portal environment, or difficulties (i.e. using too much memory) reading from a file while writing to a stream?
Edit: Thanks for clarifying. Have you tried to flush the stream earlier? You can do that whenever you want - e.g. every loop (though that might be a bit too much). Also, keep in mind that the browser as well as the file itself must handle it in a way that you expect: If an image is not encoded "incrementally" a browser might not show it that way.
Have you tried this with huge files as well? It might be that the automatic flushing is just not triggered because your files are too small to trigger it...
Also, I think that filename*=UTF-8'' looks strange. It might be valid encoding, but I've never seen it.

Placing image bytes into a String is not working?

I tried this on Flex 3 and am facing an issue with uploading JPG/PNG images: tracing readUTFBytes returns the correct byte length, but tmpFileContent is truncated; it appears to upload just 3 characters of data to the server through the PHP script, which makes the image unusable. I have no issue with non-image formats. What is wrong here?
var tmpFileContent:String = fileRef.data.readUTFBytes(fileRef.data.length);
Is String capable of handling bytes?
I'm not sure what you're looking to do with the image, but you might want to read this:
http://livedocs.adobe.com/flex/3/html/help.html?content=Filesystem_15.html
You may also need an image encoder such as the JPEGEncoder: http://help.adobe.com/en_US/FlashPlatform/beta/reference/actionscript/3/mx/graphics/codec/JPEGEncoder.html
You could always encode using base64:
var enc:Base64Encoder = new Base64Encoder();
enc.encodeBytes(fileRef.data);
var base64data:String = enc.drain();
The method used in the tutorial is not going to work safely for anything but text files. An arbitrary binary format is likely to contain zeros. A zero (a byte whose value is 0) is generally considered a string terminator in many languages / platforms. This is also the case in Actionscript as this code shows:
var str:String = "abc\x00def";
trace(str);
The string will be truncated to "abc", since 0x00 is considered to mark the end of a string.
I think your best bet is to encode the content to base 64 as maclema suggested. On the PHP side, decode it back before writing the file with something like:
file_put_contents($myFilePath, base64_decode($fileData["filedata"]));
Also, I can't remember if file_put_contents is binary safe (I think it's not). If that's the case, you should use fopen('your_path', "wb"), fwrite() and fclose() to write the file. Notice the "b" in "wb", which stands for binary. If you don't pass that flag you'll probably have problems with some characters (newline and carriage return, for example).
Added:
Perhaps, following davr's suggestion, you could try sending the data as a ByteArray to see if AMFPHP handles it correctly.
PHP does allow embedded NULs in strings, as this code shows:
$str = "a\x00b";
var_dump(ord($str[0])); // 97
var_dump(ord($str[1])); // 0
var_dump(ord($str[2])); // 98
So, if AMFPHP converts the bytearray to a string and does not mangle it in the process, this could actually work.
// method saves files on the server
function uploadFiles($fileData) {
    // new file path and name
    // to not overwrite files, we prepend the microtime to the file name
    $myFilePath = '../../_uploads/'.
        preg_replace("/[^0-9]+/", "_", microtime()).'_'.$fileData["filename"];
    // write to disk
    $fp = fopen($myFilePath, "wb");
    if ($fp) {
        fwrite($fp, $fileData["filedata"]);
        fclose($fp);
    }
    // returned response - it is not used anywhere
    return true;
}
Otherwise, try var_dump($fileData['filedata']) to see what type AMFPHP converts the data to. Perhaps it uses an array, I'm not sure; but given how strings work in PHP (much like a buffer of single-byte characters), I think it could be just using strings.

A way to generate a signature or a hash of an image in ASP.NET for duplicate detection?

I run a rather large site where my members add thousands of images every day. Obviously there is a lot of duplication, and I was wondering if, during the upload of an image, I can somehow generate a signature or a hash of it to store. Then, every time someone uploads a picture, I would simply check whether this signature already exists and raise an error stating that the image already exists. I'm not sure if this kind of technology already exists for ASP.NET, but I am aware of tineye.com, which sort of does it already.
If you think you can help, I would appreciate your input.
Kris
A keyword that might be of interest is perceptual hashing.
You use any derived HashAlgorithm to generate a hash from the byte array of the file. Usually MD5 is used, but you could substitute it with any of those provided in the System.Security.Cryptography namespace. This works for any binary, not just images.
Lots of sites provide MD5 hashes when you download files so you can verify that you've downloaded them properly. For instance, an ISO CD/DVD image may have bytes missing by the time you've received the whole thing. Once you've downloaded the file, you generate its hash and make sure it's the same as the site says it should be. If the two match, you've got an exact copy.
I would probably use something similar to this:
public static class Helpers
{
    // If you're running .NET 2.0 or lower, remove the 'this' keyword from the
    // method signature, as 2.0 doesn't support extension methods.
    // 'public' is needed so the extension method is visible to callers.
    public static string GetHashString(this byte[] bytes, HashAlgorithm cryptoProvider)
    {
        byte[] hash = cryptoProvider.ComputeHash(bytes);
        return Convert.ToBase64String(hash);
    }
}
Requires:
using System.Security.Cryptography;
Call using:
byte[] bytes = File.ReadAllBytes("FilePath");
string filehash = bytes.GetHashString(new MD5CryptoServiceProvider());
or if you're running in .NET 2.0 or lower:
string filehash = Helpers.GetHashString(File.ReadAllBytes("FilePath"), new MD5CryptoServiceProvider());
If you were to decide to go with a different hashing method instead of MD5, to reduce the already minuscule probability of collisions:
string filehash = bytes.GetHashString(new SHA1CryptoServiceProvider());
This way your hash method isn't crypto-provider specific, and if you decide to change which crypto provider you're using, you just inject a different one into the cryptoProvider parameter.
You can use any of the other hashing classes just by changing the service provider you pass in:
string md5Hash = bytes.GetHashString(new MD5CryptoServiceProvider());
string sha1Hash = bytes.GetHashString(new SHA1CryptoServiceProvider());
string sha256Hash = bytes.GetHashString(new SHA256CryptoServiceProvider());
string sha384Hash = bytes.GetHashString(new SHA384CryptoServiceProvider());
string sha512Hash = bytes.GetHashString(new SHA512CryptoServiceProvider());
Typically you'd just use MD5 or similar to create a hash. This isn't guaranteed to be unique though, so I'd recommend you use the hash as a starting point. Identify if the image matches any known hashes you stored, then individually load the ones that it does match and do a full byte comparison on the potential collisions to be sure.
Another, simpler technique is to pick a smallish number of bits and read the first part of the image... store that number of starting bits as if they were a hash. This still gives you a small number of potential collisions to check, but has much less overhead.
Look in the System.Security.Cryptography namespace. You have your choice of several hashing algorithms/implementations. Here's an example using MD5, but since you have a lot of these you might want something bigger, like SHA1:
public byte[] HashImage(Stream imageData)
{
    return new MD5CryptoServiceProvider().ComputeHash(imageData);
}
I don't know if it already exists or not, but I can't think of a reason you couldn't do this yourself. Something similar to this will get you a hash of the file:
var fileStream = Request.Files[0].InputStream; // the uploaded file
// note: the parameterless HMACMD5 constructor generates a random key, so its
// hashes aren't stable across instances; plain MD5 is better for duplicate detection
var hasher = new System.Security.Cryptography.HMACMD5();
var theHash = hasher.ComputeHash(fileStream);
