Placing image bytes into a String is not working? - apache-flex

I tried this on Flex 3 and am facing an issue uploading JPG/PNG images: tracing readUTFBytes returns the correct byte length, but tmpFileContent is truncated. Only about 3 characters of data appear to reach the server through the PHP script, which makes the image unusable. I have no issues with non-image formats. What is wrong here?
var tmpFileContent:String = fileRef.data.readUTFBytes(fileRef.data.length);
Is String capable of handling raw bytes?

I'm not sure what you're looking to do with the image, but you might want to read this:
http://livedocs.adobe.com/flex/3/html/help.html?content=Filesystem_15.html
You may also need an image encoder such as the JPEGEncoder: http://help.adobe.com/en_US/FlashPlatform/beta/reference/actionscript/3/mx/graphics/codec/JPEGEncoder.html
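If you do need to re-encode, a minimal sketch of turning a bitmap into JPEG bytes before upload (assuming a BitmapData named bitmapData already holds the image; the quality value is arbitrary):
import flash.utils.ByteArray;
import mx.graphics.codec.JPEGEncoder;

var encoder:JPEGEncoder = new JPEGEncoder(85); // quality 0-100
var jpgBytes:ByteArray = encoder.encode(bitmapData); // bitmapData: flash.display.BitmapData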

You could always encode using base64:
var enc:Base64Encoder = new Base64Encoder();
enc.encodeBytes(fileRef.data);
var base64data:String = enc.drain();

The method used in the tutorial is not going to work safely for anything but text files. An arbitrary binary format is likely to contain zeros, and a zero byte (a byte whose value is 0) is treated as a string terminator in many languages/platforms. This is also the case in ActionScript, as this code shows:
var str:String = "abc\x00def";
trace(str);
The string will be truncated to "abc", since 0x00 is considered to mark the end of a string.
I think your best bet is to encode the content to base64, as maclema suggested. On the PHP side, decode it back before writing the file with something like:
file_put_contents($myFilePath, base64_decode($fileData["filedata"]));
Also, file_put_contents should be binary-safe, since PHP strings are just byte buffers; but if you run into trouble, you can use fopen($your_path, "wb"), fwrite(), and fclose() to write the file. Notice the "b" in "wb", which stands for binary. If you don't pass that flag, on Windows some bytes (newline and carriage return, for example) get translated and will corrupt binary data.
Added:
Perhaps, following davr's suggestion, you could try sending the data as a ByteArray to see if AMFPHP handles it correctly.
PHP does allow embedded NUL bytes in strings, as this code shows:
$str = "a\x00b";
var_dump(ord($str[0])); // 97
var_dump(ord($str[1])); // 0
var_dump(ord($str[2])); // 98
So, if AMFPHP converts the ByteArray to a string and does not mangle it in the process, this could actually work.
// method saves files on the server
function uploadFiles($fileData) {
    // new file path and name
    // to avoid overwriting files, we prepend the microtime to the file name
    $myFilePath = '../../_uploads/'.
        preg_replace("/[^0-9]+/", "_", microtime()).'_'.$fileData["filename"];
    // write to disk
    $fp = fopen($myFilePath, "wb");
    if ($fp) {
        fwrite($fp, $fileData["filedata"]);
        fclose($fp);
    }
    // return a response - not used anywhere
    return true;
}
Otherwise, try var_dump($fileData['filedata']) to see what type AMFPHP actually converts the data to. Perhaps it uses an array (not sure), though given how strings work in PHP (much like a buffer of single-byte characters), I'd guess it just uses strings.

Related

Efficiently encrypt/decrypt large file with cryptojs

I want to encrypt a large string (200 MB). The string comes from a data URL (base64) corresponding to a file, and I'm doing the encryption in the browser.
At the moment I chunk the string into small parts in an array and then encrypt the chunks one by one, but encrypting the whole string this way fills up memory.
Here is how I'm doing it:
var encryptChunk = function(chunk, index) {
    encryptedChunks.push(aesEncryptor.process(chunk));
    sendUpdateMessage("encryption", index + 1, numberOfChunks);
};
chunkedString.forEach(encryptChunk);
encryptedChunks.push(aesEncryptor.finalize());
I assume there must be a better way of doing this, but I can't find a memory-efficient one.
I am doing something similar. To directly answer your question of "is there a more memory-efficient way?": I use a web worker to do the progressive ciphering, which seems to work.
// pass in what you need here
var worker = new Worker("path/to/worker.js");
worker.postMessage({
    key: getKeyAndIvSomehow(),
    file: file,
    chunkSize: MY_CHUNK_SIZE
});
worker.addEventListener('message', function (e) {
    // create the blob from e.data.encrypted
});
You will need to import the cryptoJS script into your worker: importScripts('cryptoJS.all.min.js')
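The original answer doesn't show the worker itself; a minimal sketch of what worker.js might look like (the message fields, the iv, and the contents of the CryptoJS rollup are assumptions, not part of the original answer):
// worker.js - hypothetical receiving side of the postMessage call above
importScripts('cryptoJS.all.min.js'); // assumed to include the typed-array helpers

self.addEventListener('message', function (e) {
    var file = e.data.file;
    var chunkSize = e.data.chunkSize;
    // progressive cipher: process() each chunk, finalize() once at the end
    var aes = CryptoJS.algo.AES.createEncryptor(e.data.key, { iv: e.data.iv });
    var parts = [];

    // FileReaderSync exists only inside workers, so slices can be read
    // synchronously here without blocking the page
    var reader = new FileReaderSync();
    for (var offset = 0; offset < file.size; offset += chunkSize) {
        var buf = reader.readAsArrayBuffer(file.slice(offset, offset + chunkSize));
        parts.push(aes.process(CryptoJS.lib.WordArray.create(new Uint8Array(buf))).toString());
    }
    parts.push(aes.finalize().toString());

    self.postMessage({ encrypted: parts.join('') });
});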
What are you doing with the encrypted chunks? If you're, say, uploading them over the network, you don't need to store them in an array first. Instead, you can upload the encrypted file chunk by chunk, either writing your own chunked upload implementation (it's not terribly hard) or by using an existing library.
Ditto for the input: you can encrypt it as you read it. You can use the JS File API to read the file in chunks, using the .slice() method.
Other than that, your code looks just like the recommended way to progressively encrypt a file.
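Putting those two suggestions together, a rough sketch of uploading each encrypted chunk as soon as it is produced (the /upload endpoint and its index parameter are hypothetical):
// Upload one encrypted chunk; the endpoint and query parameter are made up.
function uploadChunk(hexChunk, index, done) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/upload?index=' + index, true);
    xhr.onload = done;
    xhr.send(hexChunk);
}

// Encrypt and upload chunk by chunk instead of pushing into encryptedChunks,
// so at most one encrypted chunk is held in memory at a time.
var index = 0;
(function next() {
    if (index === chunkedString.length) {
        uploadChunk(aesEncryptor.finalize().toString(), index, function () {});
        return;
    }
    uploadChunk(aesEncryptor.process(chunkedString[index]).toString(), index, function () {
        index++;
        next(); // start the next chunk only after the previous one finished
    });
})();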

invalid pixel in Firefox because of content charset setting in Netty server

I am developing an HTTP server with Netty. On some occasions, the server must answer with a 1x1 transparent pixel. So I hard-coded a transparent GIF pixel in base64, and returned it with the following code:
String pixel_string= new String (Base64.decodeBase64("R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw=="));
HttpResponse response = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
response.setContent(ChannelBuffers.copiedBuffer(pixel_string, CharsetUtil.UTF_8));
EDIT: I also set the content-type:
response.setHeader(HttpHeaders.Names.CONTENT_TYPE, "image/gif");
In Chrome, everything is fine. However, Firefox tells me that it cannot display the pixel (which is pretty bad for my app), as the pixel data is invalid.
After many investigations, I finally figured out a fix: changing the charset to ISO-8859-1.
response.setContent(ChannelBuffers.copiedBuffer(
responseBuilder.pixel_string, CharsetUtil.ISO_8859_1));
I don't understand why this works, which makes me think I may run into trouble in some cases. I tried changing the Firefox preferences (to have UTF-8 as the default), but it doesn't change much.
Why does Firefox accept the ISO-8859-1 encoding and not UTF-8? Can I change that? Would someone have a clue about the origin of the issue, and how can I be sure it will work whatever the user's settings?
Thanks
It's not Firefox that's accepting the encoding or not. It's your server.
When you do your base64 decode you produce a string that contains some characters... but what you really produced was bytes that you're then thinking of as characters somehow. Since a Java String is a container that holds a UTF-16 string, in practice what you're doing is taking each byte, treating it as a 16-bit integer, and constructing the UTF-16 "string" made up of those code units.
But when you want to put all this on the network, you have to convert your string to bytes, and the argument to copiedBuffer says how to do that. If converting to UTF-8, any character that came from a byte with the high bit set will end up encoded as a two-byte UTF-8 sequence. On the other hand, if converting to ISO-8859-1, the conversion just drops the high byte of each UTF-16 code unit (which in your case is always zero anyway).
So the conversion to ISO-8859-1 produces the actual byte array you got out of base64-decoding, while the conversion to UTF-8 produces... something else, which may or may not make sense depending on the exact byte values.
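A small standalone demo of that asymmetry (not from the original answer; the byte values are arbitrary):
import java.nio.charset.StandardCharsets;

public class CharsetRoundTrip {
    public static void main(String[] args) {
        // 'G', 'I', 'F', then a byte with the high bit set
        byte[] original = { 0x47, 0x49, 0x46, (byte) 0x89 };

        // ISO-8859-1 maps bytes 1:1 onto the first 256 code points,
        // so this round trip is lossless...
        String s = new String(original, StandardCharsets.ISO_8859_1);
        byte[] iso = s.getBytes(StandardCharsets.ISO_8859_1); // 4 bytes, identical

        // ...while UTF-8 expands U+0089 into a two-byte sequence
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8); // 5 bytes, corrupted

        System.out.println(iso.length + " vs " + utf8.length); // prints "4 vs 5"
    }
}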
The copiedBuffer overload you call is not appropriate for the type of data (binary) you are using. According to the JavaDoc of the Netty API, the one you are calling:
Creates a new big-endian buffer whose content is the specified string
encoded in the specified charset.
This means that your binary data is being "converted" to UTF-8, which is meaningless here. If you save the generated file and look at it with a hex editor, you'll probably see that it is corrupted.
Try with something like this (untested code):
static byte[] pixel_data = Base64.decodeBase64("R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==");
HttpResponse response = ...
response.setHeader(HttpHeaders.Names.CONTENT_TYPE, "image/gif");
response.setContent(ChannelBuffers.copiedBuffer(pixel_data));

Streaming a file in Liferay Portlet

I have implemented file downloading in a simple manner:
@ResourceMapping(value = "content")
public void download(ResourceRequest request, ResourceResponse response) {
    //...
    SerializableInputStream serializableInputStream = someService.getSerializableInputStream(id_of_some_file);
    response.addProperty(HttpHeaders.CACHE_CONTROL, "max-age=3600, must-revalidate");
    response.setContentType(contentType);
    response.addProperty(HttpHeaders.CONTENT_TYPE, contentType);
    response.addProperty(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename*=UTF-8''"
            + URLEncoder.encode(fileName, "UTF-8"));
    OutputStream outputStream = response.getPortletOutputStream();
    byte[] parcel = new byte[4096];
    int length;
    while ((length = serializableInputStream.read(parcel)) > 0)
        outputStream.write(parcel, 0, length); // write only the bytes actually read
    outputStream.flush();
    serializableInputStream.close();
    outputStream.close();
    //...
}
The SerializableInputStream is described here - JavaDocs. It allows an InputStream to be serialized and, for instance, passed over remoting.
I read from the input and write to the output, not all bytes at once. But unfortunately the portlet isn't "streaming" the contents: the file (e.g. an image) is sent to the browser only after the entire input stream has been read. I can see the file being read from the database (in the live logs), but I don't see any "growing" image on the screen.
What am I doing wrong? Is it possible to really stream a file in Liferay 6.0.6 and Spring Portlet MVC?
Where are you doing this? I fear that you're doing it instead of rendering your portlet's HTML (i.e. in the render phase). Typically the portlet content is embedded in an HTML page, thus you need the resource phase, which (roughly) behaves like a servlet.
Also, the code you give does not match the actual question you ask: you use a comment //read from input stream (file), write file to os and ask what to do differently in order to not have the full content in memory.
As the comment does not imply having anything in memory (you could loop, reading from the input file while writing to the output stream): what's the underlying question? Do you have problems implementing download streaming in a portal environment, or difficulties (i.e. using too much memory) reading from a file while writing to a stream?
Edit: Thanks for clarifying. Have you tried flushing the stream earlier? You can do that whenever you want, e.g. on every loop iteration (though that might be a bit too much; see the sketch below). Also, keep in mind that the browser as well as the file format itself must handle it the way you expect: if an image is not encoded "incrementally", a browser might not show it progressively.
Have you tried this with huge files as well? It might be that the automatic flushing just isn't triggered because your files are too small.
Also, filename*=UTF-8'' looks strange. It might be valid encoding, but I've never seen it before.
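To illustrate the flushing suggestion, a sketch of the copy loop with a per-chunk flush (the helper name is hypothetical):
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Copy a stream while flushing after every chunk, so the container can
// push bytes to the browser before the whole copy has finished.
static void copyWithFlush(InputStream in, OutputStream out) throws IOException {
    byte[] parcel = new byte[4096];
    int length;
    while ((length = in.read(parcel)) > 0) {
        out.write(parcel, 0, length);
        out.flush(); // flushing every 4 KB may be excessive; every N chunks also works
    }
}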

How do I insert an HTML file that has Hebrew text and read it back in SQL Server 2008 using the FILESTREAM option?

I am new to the FILESTREAM option in SQL Server 2008, but I already understand how to enable it and how to create a table that lets you save files.
Let's say my table contains:
id, name, filecontent
I tried to insert an HTML file (that has Hebrew chars/text in it) into this table.
I'm writing in ASP.NET (C#), using Visual Studio 2008.
But when I tried to read the content back, the Hebrew chars became '?'.
The actions I took were:
1. I read the file like this:
// open the stream reader
System.IO.StreamReader aFile = new StreamReader(FileName, System.Text.UTF8Encoding.UTF8);
// reads the file to the end
stream = aFile.ReadToEnd();
// closes the file
aFile.Close();
return stream; // returns the stream
2. I inserted 'stream' into the filecontent column as binary data.
3. I did a 'select' on this column, and the data did come back (after I converted it to a string), but the Hebrew chars became '?'.
How do I solve this problem? What should I pay attention to?
Thanks,
gadym
I managed to solve this problem.
I was wrong: the problem wasn't in SQL Server but in my code, where I converted the data from binary to string and vice versa.
When you need to convert a string (that has Hebrew chars) to binary, you can write the following lines:
System.Text.UTF8Encoding encoding = new System.Text.UTF8Encoding();
// HtmlFile = the file I read as a string; now I want to convert it to a byte array
byte[] ConvertTextToBytesArray = encoding.GetBytes(HtmlFile);
and vice versa :
string str;
System.Text.UTF8Encoding enc = new System.Text.UTF8Encoding();
// result = the binary data I want to convert back to a string
str = enc.GetString(result);
The problem was that I had, for some reason, used System.Text.ASCIIEncoding instead of System.Text.UTF8Encoding.
Thank you, Meff!
It looks like the UTF-8 encoding may not work with the Hebrew?
See here for an older discussion: http://social.msdn.microsoft.com/Forums/en-US/csharplanguage/thread/73c81574-b434-461f-b766-fb9d0e4353c7
sr = new StreamReader(fs, Encoding.GetEncoding("windows-1255"));
Alternatively, are you sure the file is encoded in UTF-8?
Also, FILESTREAM may actually perform worse if the BLOB is under 1 MB, and I would expect HTML files to fit that description. Have you considered NVARCHAR(MAX) instead?
http://blogs.msdn.com/manisblog/archive/2007/10/21/filestream-data-type-sql-server-2008.aspx

How to check the content of an uploaded file without relying on its extension?

How do you go about reliably verifying the type of an uploaded file without using the extension? I'm guessing that you have to examine the header / read some of the bytes, but I really have no idea how to go about it. I'm using C# and ASP.NET.
Thanks for any advice.
OK, so from the links in the answers I now know that I am looking for 'FF D8 FF E0' to positively identify a .jpg file, for example.
In my code I can read the first twenty bytes no problem:
FileStream fs = File.Open(filePath, FileMode.Open);
Byte[] b = new byte[20];
fs.Read(b, 0, 20);
so (and please excuse my total inexperience here), how do I check whether the byte array contains 'FF D8 FF E0'?
Here's a quick-and-dirty response to the followup question you posted:
byte[] jpg = new byte[] { 0xFF, 0xD8, 0xFF, 0xE0 };
bool match = true;
for (int i = 0; i < jpg.Length; i++)
{
    if (jpg[i] != b[i])
    {
        match = false;
        break;
    }
}
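With LINQ (available from .NET 3.5 on) the same comparison can be written in one line; a sketch, assuming b holds at least the first four bytes of the file:
using System.Linq;

// true if the bytes read into b start with the JPEG/JFIF signature
bool match = b.Take(jpg.Length).SequenceEqual(jpg);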
That is indeed what the Unix file program does, with greater or lesser degrees of reliability. In part, it depends on whether the program whose files you are trying to detect emits a file header; the tar program is notorious for not doing so. It also depends on how many file types you plan to recognize, but it might well be simplest to use an implementation of file: it recognizes many file types, and modern versions are extensible via a file of extra type definitions that can handle a multitude of scenarios.
Wotsit is a good resource for finding out the magic numbers for various file types.
Edit: link is broken. Here’s a better resource that is still being updated
https://www.garykessler.net/library/file_sigs.html
The first few bytes of a file will often tell you the file type. See, for example,
http://www.garykessler.net/library/file_sigs.html
http://www.astro.keele.ac.uk/oldusers/rno/Computing/File_magic.html
Use System.IO to read the bytes as binary after the upload.
I'm curious, though: why can't you rely on the ContentType header?
Reading the contents of the file is the foolproof way. Since you are building it in .NET, you could probably check the MIME type of the uploaded file.
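For instance, a sketch of checking the browser-supplied MIME type in ASP.NET (the form field name is hypothetical, and as the answer below points out, this header is derived from the extension, so it should not be trusted on its own):
// The ContentType the browser sent with the upload; convenient but not
// reliable, since browsers usually derive it from the file extension.
HttpPostedFile upload = Request.Files["upload"]; // hypothetical field name
if (upload != null && upload.ContentType == "image/jpeg")
{
    // plausibly a JPEG; verify the magic bytes before trusting it
}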
You can DllImport urlmon.dll to help. Please refer to this post:
http://coding-passion.blogspot.com/2008/11/validating-file-type.html
And to clarify regarding Content-Type: it is invariably driven by the extension of the file. So even if a .zip file has its extension renamed to .txt, the content type will still say text.
