How to load a binary file into a Blob? - Deno

The Larger Context: I am attaching a file to a Confluence page. This is done by POSTing a multi-part request, containing the file, to the Confluence RESTful API.
I'm looking for a simple means of loading a complete binary file (a PNG) into a Blob, so that I can compose the FormData object. The file is small (less than a megabyte), so I am content to load it all into memory.
I can compose the Blob from byte literals, but cannot see yet how I can load file data into it.

The answer came to me shortly after:
const fileBytes = await Deno.readFile(filename);
const fileBlob = new Blob([fileBytes], {type: 'image/png'});
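
For completeness, here is a sketch of how the Blob can then be used to compose the FormData and POST it to the Confluence attachment endpoint. The base URL, page ID, filename, and token handling below are placeholders of mine, not values from the original setup:

const fileBytes = await Deno.readFile("diagram.png");
const fileBlob = new Blob([fileBytes], { type: "image/png" });

const form = new FormData();
form.append("file", fileBlob, "diagram.png");

// POST to the Confluence attachment endpoint (page ID 12345 is a placeholder)
const response = await fetch(
  "https://example.atlassian.net/wiki/rest/api/content/12345/child/attachment",
  {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${Deno.env.get("CONFLUENCE_TOKEN")}`,
      // Confluence typically requires this header for attachment uploads
      "X-Atlassian-Token": "nocheck",
    },
    body: form, // fetch sets the multipart boundary itself
  },
);
console.log(response.status);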

Related

ASP.NET: Insert data in a ZIP file without having to re-write the entire ZIP file?

My question is a bit similar to this one, but it is with ASP.NET and my requirements are slightly different: Android append files to a zip file without having to re-write the entire zip file?
I need to insert data into a zip file downloaded by users (not much, 1 KB of data at most; this is data for AdWords offline conversion, actually). The zip file is downloaded through an ASP.NET website. Because the zip file is already fairly large (tens of MB), I need to insert this data without re-compressing everything, to avoid overloading the server. I can think of two ways to do this.
Way A: Find a zip technology that lets me embed a particular file in the ZIP file uncompressed. Assuming there is no checksum, it would then be easy to just overwrite the bytes of this uncompressed file with my specific data, in the zip file itself. If possible, this would have to be supported by all unzip tools (Windows integrated zip, WinRAR, 7-Zip...).
Way B: Append an extra file to the original ZIP file without having to recompress it! This extra file would have to be stored in an embedded folder in the ZIP file.
I looked a bit at SevenZipSharp, which has an enumeration SevenZip.CompressionMode with values Create and Append that leads me to think Way B could be implemented. DotNetZip also seems to work pretty well with Stream, according to its FAQ.
But if Way A is possible, I'd much prefer it, since no extra zip library would be needed on the server side!
OK, thanks to DotNetZip I am able to do what I want in a very resource-efficient way:
using System.IO;
using Ionic.Zip;

class Program {
    static void Main(string[] args) {
        byte[] buffer;
        using (var memoryStream = new MemoryStream()) {
            using (var zip = new ZipFile(@"C:\temp\MylargeZipFile.zip")) {
                // The file whose content is overridden in MylargeZipFile.zip
                // has the path "Path\FileToUpdate.txt"
                zip.UpdateEntry(@"Path\FileToUpdate.txt", "Hello My New Content");
                zip.Save(memoryStream);
            }
            buffer = memoryStream.ToArray();
        }
        // Here the buffer will be sent to httpResponse
        // httpResponse.Clear();
        // httpResponse.AddHeader("Content-Disposition", "attachment; filename=MylargeZipFile.zip");
        // httpResponse.ContentType = "application/octet-stream";
        // httpResponse.BinaryWrite(buffer);
        // httpResponse.BufferOutput = true;
        // Just to check it worked!
        File.WriteAllBytes(@"C:\temp\Result.zip", buffer);
    }
}

Other ways to transfer PDF bytes as an HttpResponseMessage?

I have a function that retrieves PDF bytes from another web service. What I wanted to do is make the PDF bytes available to others as well, by creating an API call that returns an HttpResponseMessage.
Now, my problem is that I don't think passing it through JSON is possible, because that converts the PDF bytes into a string.
Is there any other practical way of passing the PDF, or making the PDF visible to the requestors?
(Note: saving the PDF file in a specific folder and then returning the URL is prohibited in this specific situation)
I just solved it. There is a parameter responseType: 'arraybuffer' which addresses this problem. Sample: $http.post('/api/konto/setReport/pdf', $scope.konta, { responseType: 'arraybuffer' }). See my question and answer on SO: How to display a server side generated PDF stream in javascript sent via HttpMessageResponse Content
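
For anyone consuming such an endpoint without AngularJS, here is a minimal sketch of the same idea using the Fetch API; the URL matches the sample above, while the payload and the way the PDF is displayed are assumptions of mine:

const response = await fetch("/api/konto/setReport/pdf", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(konta), // hypothetical payload, mirroring $scope.konta
});
const pdfBytes = await response.arrayBuffer(); // keep the body binary, no JSON/string round trip
const pdfBlob = new Blob([pdfBytes], { type: "application/pdf" });
const pdfUrl = URL.createObjectURL(pdfBlob);
window.open(pdfUrl); // or assign it to the src of an <iframe>/<embed>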

SQLFileStream with a chunked file

I'm a little stuck trying to upload files into our SQL DB using FILESTREAM. I've followed this example http://www.codeproject.com/Articles/128657/How-Do-I-Use-SQL-File-Stream but the difference is that we upload the file in 10 MB chunks.
On the first chunk a record is created in the DB with empty content (so that a file is created) and then OnUploadChunk is called for each chunk.
The file is uploading OK, but when I check, a new file has been created for each chunk; so for a 20 MB file, for example, I have one that is 0 KB, another that is 10 MB, and the final one that is 20 MB. I'm expecting one file of 20 MB.
I'm guessing this is perhaps to do with getting the transaction context, or with incorrectly using TransactionScope, which I don't quite fully grasp yet. I presume this may be different for each chunk, with it going back and forth between client and server.
Here is the method that is called every time a chunk is sent from the client (using Plupload, if that's of any relevance).
protected override bool OnUploadChunk(Stream chunkStream, string DocID)
{
    BinaryReader b = new BinaryReader(chunkStream);
    byte[] binData = b.ReadBytes((int)chunkStream.Length);
    using (TransactionScope transactionScope = new TransactionScope())
    {
        // Folder path the file is sitting in
        string FilePath = GetFilePath(DocID);
        // Gets size of file that has been uploaded so far
        long currentFileSize = GetCurrentFileSize(DocID);
        // Essentially this is just SELECT GET_FILESTREAM_TRANSACTION_CONTEXT()
        byte[] transactionContext = GetTransactionContext();
        SqlFileStream filestream = new SqlFileStream(FilePath, transactionContext, FileAccess.ReadWrite);
        filestream.Seek(currentFileSize, SeekOrigin.Begin);
        filestream.Write(binData, 0, (int)chunkStream.Length);
        filestream.Close();
        transactionScope.Complete();
    }
    return true;
}
UPDATE:
I've done a little research and I believe the issue is around this:
FILESTREAM does not currently support in-place updates. Therefore an update to a column with the FILESTREAM attribute is implemented by creating a new zero-byte file, which then has the entire new data value written to it. When the update is committed, the file pointer is then changed to point to the new file, leaving the old file to be deleted at garbage collection time. This happens at a checkpoint for simple recovery, and at a backup or log backup.
So have I just got to wait for the garbage collector to remove the chunked files? Or should I perhaps be uploading the file somewhere on the file system first and then copying it across?
Yes, you will have to wait for SQL Server to clean up the files for you.
Unless you have other system constraints, you should be able to stream the entire file all at once. This will give you a single file on the SQL side.

Efficiently encrypt/decrypt large file with cryptojs

I want to encrypt a large string (200 MB).
The string comes from a data URL (base64) corresponding to a file.
I'm doing the encryption in the browser.
My issue is that at the moment I chunk the string into small parts in an array.
Then I encrypt these chunks.
At the moment, encrypting the string fills up the memory.
Here is how I'm doing it:
var encryptChunk = function(chunk, index){
    encryptedChunks.push( aesEncryptor.process( chunk ));
    sendUpdateMessage( "encryption", index+1, numberOfChunks );
}
chunkedString.forEach(encryptChunk);
encryptedChunks.push( aesEncryptor.finalize() );
I assume there should be a better way of doing this, but I can't find a memory-efficient way.
I am doing something similar to you. To directly answer your question of "is there a more memory-efficient way?": well, I use a web worker to process the progressive ciphering, which seems to work.
// pass in what you need here
var worker = new Worker("path/to/worker.js");
worker.postMessage({
    key: getKeyAndIvSomehow(),
    file: file,
    chunkSize: MY_CHUNK_SIZE
});
worker.addEventListener('message', function (e) {
    // create the blob from e.data.encrypted
});
You will need to import the cryptoJS script into your worker: importScripts('cryptoJS.all.min.js')
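For reference, here is a rough sketch of what the worker side could look like. The message shape (key, file, chunkSize) mirrors the snippet above; the assumption that the key and IV arrive as hex strings, the use of FileReaderSync, and the hex output format are mine, not the answerer's:

// worker.js - a sketch, not the answerer's actual worker
importScripts('cryptoJS.all.min.js');

self.addEventListener('message', function (e) {
  // Assumption: getKeyAndIvSomehow() produced { key: <hex string>, iv: <hex string> }
  var key = CryptoJS.enc.Hex.parse(e.data.key.key);
  var iv = CryptoJS.enc.Hex.parse(e.data.key.iv);
  var file = e.data.file;
  var chunkSize = e.data.chunkSize;

  // Progressive cipher: process() each chunk, finalize() once at the end
  var aesEncryptor = CryptoJS.algo.AES.createEncryptor(key, { iv: iv });
  var encryptedParts = [];
  var reader = new FileReaderSync(); // synchronous reads are allowed inside workers

  for (var start = 0; start < file.size; start += chunkSize) {
    var slice = file.slice(start, Math.min(start + chunkSize, file.size));
    var words = CryptoJS.lib.WordArray.create(new Uint8Array(reader.readAsArrayBuffer(slice)));
    encryptedParts.push(aesEncryptor.process(words).toString()); // hex by default
  }
  encryptedParts.push(aesEncryptor.finalize().toString());

  self.postMessage({ encrypted: encryptedParts.join('') });
});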
What are you doing with the encrypted chunks? If you're, say, uploading them over the network, you don't need to store them in an array first. Instead, you can upload the encrypted file chunk by chunk, either writing your own chunked upload implementation (it's not terribly hard) or by using an existing library.
Ditto for the input: you can encrypt it as you read it. You can use the JS File API to read the file in chunks, using the .slice() method.
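As an illustration of that .slice() approach, here is a minimal sketch of reading a File chunk by chunk with FileReader; the chunk size and callback names are made up:

function readInChunks(file, chunkSize, onChunk, onDone) {
  var offset = 0;
  var reader = new FileReader();

  reader.onload = function () {
    onChunk(reader.result, offset); // e.g. feed this chunk to aesEncryptor.process()
    offset += chunkSize;
    if (offset < file.size) {
      readNext();
    } else {
      onDone();
    }
  };

  function readNext() {
    reader.readAsArrayBuffer(file.slice(offset, offset + chunkSize));
  }

  readNext();
}

// Usage (hypothetical helpers): readInChunks(file, 1024 * 1024, encryptAndUploadChunk, finishUpload);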
Other than that, your code looks just like the recommended way to progressively encrypt a file.

Streaming a file in Liferay Portlet

I have implemented downloading a file in a simple manner:
@ResourceMapping(value = "content")
public void download(ResourceRequest request, ResourceResponse response) throws IOException {
    //...
    SerializableInputStream serializableInputStream = someService.getSerializableInputStream(id_of_some_file);

    response.addProperty(HttpHeaders.CACHE_CONTROL, "max-age=3600, must-revalidate");
    response.setContentType(contentType);
    response.addProperty(HttpHeaders.CONTENT_TYPE, contentType);
    response.addProperty(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename*=UTF-8''"
            + URLEncoder.encode(fileName, "UTF-8"));

    OutputStream outputStream = response.getPortletOutputStream();
    byte[] parcel = new byte[4096];
    int bytesRead;
    while ((bytesRead = serializableInputStream.read(parcel)) != -1) {
        outputStream.write(parcel, 0, bytesRead);
    }
    outputStream.flush();
    serializableInputStream.close();
    outputStream.close();
    //...
}
The SerializableInputStream is described here - JavaDocs. It allows an InputStream to be serialized and, for instance, passed over remoting.
I read from the input and write to the output, not all bytes at once. But unfortunately the portlet isn't "streaming" the contents - the file (e.g. an image) is sent to the browser only after the entire input stream has been read; that is how it behaves. I can see the file being read from the database (in live logs), but I don't see any "growing" image on the screen.
What am I doing wrong? Is it possible to really stream a file in Liferay 6.0.6 and Spring Portlet MVC?
Where are you doing this? I fear that you're doing this instead of rendering your portlet's HTML (e.g. render phase). Typically the portlet content is embedded in an HTML page, thus you need the resource phase, which (roughly) behaves like a servlet.
Also, the code you give does not match the actual question you ask: You use a comment //read from input stream (file), write file to os and ask what to do differently in order to not have the full content in memory.
As the comment does not have anything in memory and you could loop through reading from the input file while writing to the output stream: What's the underlying question? Do you have problems with implementing download-streaming in a portal environment or difficulties (i.e. using too much memory) reading from a file while writing to a stream?
Edit: Thanks for clarifying. Have you tried to flush the stream earlier? You can do that whenever you want - e.g. every loop (though that might be a bit too much). Also, keep in mind that the browser as well as the file itself must handle it in a way that you expect: If an image is not encoded "incrementally" a browser might not show it that way.
Have you tried this with huge files as well? It might be that the automatic flushing is just not triggered because your files are too small for it to be triggered...
Also, I think that filename*=UTF-8'' looks strange. It might be valid encoding, but I've never seen it before.
