how to view encrypted picture - encryption

How do people view encrypted pictures like the ones on this wiki page? Is there a special program to do it, or did someone just do some silly XOR to make a point about ECB? I'm not a graphics person, so if there are programs to view encrypted pictures, what are they?

Encryption works on a stream of bytes. That is, it takes an array of bytes and outputs another array of bytes. Images are also just an array of bytes. We assign the "r" component of the top-left pixel to the first byte, the "g" component to the second byte, the "b" component to the third byte. The "r" component of the pixel next to that is the fourth byte and so on.
So to "encrypt" an image, you just take a byte array of the pixels in the first image, encrypt it (encryption usually doesn't change the number of bytes - apart from padding) and use those encrypted bytes as the pixel data for the second image.
Note that this is different from encrypting an entire image file. Usually an image file has a specific header (e.g. the JPEG header). If you encrypted the whole file then the header would also be included and you wouldn't be able to "display" the image without decrypting the whole thing.
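As a rough illustration (not from the original wiki page), here's a minimal Python sketch of that idea, assuming the Pillow and PyCryptodome libraries; the filenames and key are placeholders. Only the pixel bytes get encrypted, and the image library writes a valid header when saving:

    from PIL import Image
    from Crypto.Cipher import AES

    img = Image.open("input.png").convert("RGB")
    pixels = img.tobytes()  # r, g, b, r, g, b, ... as one byte array

    cipher = AES.new(b"0123456789abcdef", AES.MODE_ECB)  # toy 128-bit key
    padded = pixels + b"\x00" * (-len(pixels) % 16)      # pad to 16-byte blocks
    encrypted = cipher.encrypt(padded)[:len(pixels)]     # drop the pad, keep size

    # Reinterpret the ciphertext as pixel data of the same dimensions.
    Image.frombytes("RGB", img.size, encrypted).save("encrypted.png")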

To view an encrypted image, the image has to be in an uncompressed format, for example BMP.
PNG, JPEG and so on are compressed formats, so you won't be able to display those. The image header also has to be left intact (not encrypted).
If you want to encrypt pictures like this, just convert the image to an uncompressed format, open it with a hex editor, and save the image header. After that you can encrypt the image with AES/ECB.
Finally, insert the original image header back in front of the encrypted data. Now you should be able to view the encrypted image.
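If you'd rather script the hex-editor steps, here's a rough sketch in Python, assuming PyCryptodome and a BMP input; the filenames and key are placeholders. The BMP header stores the pixel-data offset at bytes 10-13, so we can split the file there, encrypt only the pixel data, and keep the header intact:

    import struct
    from Crypto.Cipher import AES

    data = open("input.bmp", "rb").read()
    pixel_offset = struct.unpack_from("<I", data, 10)[0]  # the bfOffBits field
    header, body = data[:pixel_offset], data[pixel_offset:]

    cipher = AES.new(b"0123456789abcdef", AES.MODE_ECB)
    padded = body + b"\x00" * (-len(body) % 16)           # pad to the block size
    encrypted = cipher.encrypt(padded)[:len(body)]        # keep the file size

    open("encrypted.bmp", "wb").write(header + encrypted)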

It's not just a silly XOR (real ciphers use XOR internally too), but yes, it's just there to emphasize that any scheme which converts the same input to the same output every time makes it easy to spot patterns that were present in the input. The image is there to show how easily we can spot Tux in the "encrypted" output. The author could have used any kind of data, but used an image because the human eye is very good at spotting patterns, so it makes a good example.
As the article says, better schemes use the output of the previous block to "randomize" the next block, so you can't see patterns in the output (like the image on the right).
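You can see the block-level effect directly with a few lines of Python (PyCryptodome assumed; the key and IV are placeholders): under ECB, two identical plaintext blocks produce identical ciphertext blocks, while CBC mixes in the previous block and hides the repetition.

    from Crypto.Cipher import AES

    key = b"0123456789abcdef"
    plaintext = b"same 16b block!!" * 2  # two identical 16-byte blocks

    ecb = AES.new(key, AES.MODE_ECB).encrypt(plaintext)
    print(ecb[:16] == ecb[16:])   # True: the repetition survives encryption

    cbc = AES.new(key, AES.MODE_CBC, iv=b"\x00" * 16).encrypt(plaintext)
    print(cbc[:16] == cbc[16:])   # False: chaining breaks the pattern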

Related

.FON Font Format - Finding Start of .FNT

I have a .FON file I'd like to use, vgafix.FON, "use" in the sense that I'd like to be able to generate an image from individual characters of the font. I have seen this question regarding the .FON format, but am having trouble interpreting the answers.
From the various links in that question, I understand that the .FON format is just a .EXE wrapped around a .FNT, but I cannot tell where the .EXE ends and the .FNT begins. .FONs should be NE executables, and the extended header contains an offset to the resource table, where I'd expect the .FNT data to be contained. In vgafix.FON, the extended header starts at 0x80; offset 0x24 should contain the resource table offset, and offset 0x34 the number of resource table entries.
However, the resource table offset corresponds to an address that doesn't start with a null byte, which .FNT files are supposed to. Additionally, 0xB4 contains 0, so there are zero resource table entries anyway? I am unsure whether offsets are relative to the beginning of the header or to the position of the value within the header, but the above is true for both. I can see the copyright information, which I believe is part of the .FNT file(s), but that info is not exactly 60 characters, so I'm unsure where that begins or ends either.
What about these file formats am I misunderstanding, and how can I tell where the .FON container data ends and the proper .FNT data begins?
I'm just finishing up code that does just this. It's still in a somewhat rough state; email me to get the complete class, as it's a bit bulky to post here. Its purpose is to use .FON files to make big letters in terminal windows; you can discard that part. The main thing is that it parses the complete FON file to identify resources, fonts in particular.
Here are the main points (a rough Python sketch follows below):
Traverse the MZ header
Traverse the NE header
Locate the Resource Table in the NE header
Parse the Resource Table to find the fonts (there can be many!)
Parse each font.
Do your stuff.
There is no logic to handle vector fonts, just raster/bitmapped ones.
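To give an idea of those steps before the full class arrives by email, here is a rough Python sketch (not the class itself; offsets follow the MZ/NE specs, error handling omitted):

    import struct

    data = open("vgafix.FON", "rb").read()

    # MZ header: the 32-bit offset of the NE header is stored at 0x3C.
    ne = struct.unpack_from("<I", data, 0x3C)[0]

    # NE header: the word at NE+0x24 is the resource table offset,
    # relative to the start of the NE header itself.
    res_table = ne + struct.unpack_from("<H", data, ne + 0x24)[0]

    # Resource table: an alignment shift count, then TYPEINFO records.
    shift = struct.unpack_from("<H", data, res_table)[0]
    pos = res_table + 2
    while True:
        type_id, count = struct.unpack_from("<HH", data, pos)
        if type_id == 0:            # a type ID of zero ends the table
            break
        pos += 8                    # skip rtTypeID, rtResourceCount, rtReserved
        for _ in range(count):      # one 12-byte NAMEINFO entry per resource
            offset, length = struct.unpack_from("<HH", data, pos)
            if type_id == 0x8008:   # RT_FONT (8) with the integer-ID bit set
                start = offset << shift
                fnt = data[start:start + (length << shift)]
                print("FNT resource at", hex(start), "length", len(fnt))
            pos += 12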
regards
Kári Poulsen
kpo#kvf.fo

Need AES encryption in GNU radio

I'm trying to make a simple program in GNU Radio to help understand (and test) the encryption blocks. I have attached a screenshot of my program. Basically, it takes a picture of a cat from a .png file and sends it to another .png file. I sent it three ways to see how it behaved: one way went straight from file to file, one went through only encryption, and one went through encryption and decryption. With the lower half of the program (the encryption and decryption) disabled, it works on the first route. But when I enable the lower half in an attempt to do all 3 paths simultaneously, the first path only sends the top half of the cat image, and the other two don't send any data to the files at all. The image of my program can be found in the link above this post. I'm new to this, so my apologies if this was a bad post, but thanks in advance for any help.

Download Only A Part of a JPG with HTTP request

With the HTTP Range header, clients can request only a certain range of bytes from a server.
GET /myfile.jpg HTTP/1.1
Host: myhost
Range: bytes=1000-1199
If the server supports this feature, and maybe even advertises it with an Accept-Ranges header, the above request will return only the 200 bytes from byte 1000 through byte 1199.
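For a quick test of the mechanism (hypothetical URL; the Python requests library assumed), a 206 status means the server honored the range:

    import requests

    r = requests.get("http://myhost/myfile.jpg",
                     headers={"Range": "bytes=1000-1199"})
    print(r.status_code)                    # 206 Partial Content if supported
    print(len(r.content))                   # 200 bytes
    print(r.headers.get("Content-Range"))   # e.g. "bytes 1000-1199/88073"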
Is it possible to get usable parts from a JPG image with this method? Say the actual JPG measures 800x1197 pixels. What would have to be done in order to request only a sub-image between the pixels (200,200) and (400,400)?
To me it looks like it's only possible to receive horizontally cut slices of the image. But this would already be better than getting the full image file. So in the example above, I'd say one could try to download the slice from 200 to 400 on the y-axis and then crop the result on the client side accordingly.
Assume we already know the content-length of the file as well as its actual image size, which may have been determined by a preceding HTTP request:
content length in bytes: 88073
jpg size: 800x1197
Which byte range would I have to request for this image? I assume that JPG has some metadata, which has to be taken into account as well. Or does the compression of JPG render this attempt impossible? It would be OK if the final cut-out does not contain any metadata from the original.
Still, it might be necessary to have an initial request that takes some bytes from the beginning, hoping to fetch the metadata, and based on this the actual byte range might be determined.
It would be very nice if someone could give me a hint on how to approach this.
JPEG encodes compressed data in one or more scans. The scans do not indicate their length; you have to actually decode the data to get to the end of a scan. The scans span the entire image.
If the JPEG stream is progressively encoded, you can read the stream a block at a time, decode the scans, update the output image, and get successively refined views of the image.
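As a sketch of that approach (requests and Pillow assumed; the URL and byte count are placeholders): fetch only an initial range and let the decoder render what it received. With a progressive JPEG you get a coarse version of the whole image; with a baseline JPEG you only get the top slice.

    from io import BytesIO
    import requests
    from PIL import Image, ImageFile

    ImageFile.LOAD_TRUNCATED_IMAGES = True  # tolerate the missing tail

    r = requests.get("http://myhost/myfile.jpg",
                     headers={"Range": "bytes=0-19999"})
    img = Image.open(BytesIO(r.content))
    img.load()                              # decodes as much as was received
    img.save("partial.png")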

Break XOR type encryption with whole Known text from virus

I was hit by a ransomware infection that encrypts the first 512 bytes at the top of the file and puts them at the bottom. Looking at the encrypted text, it seems to be some type of XOR cipher. I know the whole plaintext of one of the files that was encrypted, so I figured that in theory I should be able to XOR it to get the key to decrypt the rest of my files. Well, I am having a very hard time with this because I don't really understand how the creator XOR'ed it. I'm thinking he would use a BinaryReader to read the first 512 bytes into an array, XOR it, and replace it. But does that mean he XOR'ed it in hex? Or decimal? I'm quite confused at this point, but I believe I am simply missing something.
I have tried xortool with Python, and everything it attempts to crack looks like nonsense. I also tried a Python script called unxor that you give the known plaintext to, but the dump file it outputs is always blank.
Good Header file dump:
Good-Header.bin
Encrypted Header file dump:
Enc-Header.bin
This may not be the best file example to see the XOR pattern, but it's the only file I have where I also have 100% of the original header from before encryption. In other headers, where there are more changes, the encrypted header changes with them.
Any advice on a method I should try, or an application I should use, to try and take this further? Thanks so much for your help!
P.S. Stack Overflow yelled at me when I tried to post 4 links because I'm so new, so if you would rather see the hex dumps on Pastebin than download the header files, please let me know. The files are in no way malicious; they are only the extracted 512 bytes and not a whole file.
To recover the keystream, XOR the plaintext bytes with the ciphertext bytes. Do this with two different files so you can see whether the ransomware is using the same keystream or a different keystream for each file.
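In Python, that first step is just a few lines (using the two dumps from the question). Note that XOR operates on the raw byte values; hex versus decimal is only how those bytes are displayed:

    plain = open("Good-Header.bin", "rb").read()
    cipher = open("Enc-Header.bin", "rb").read()

    keystream = bytes(p ^ c for p, c in zip(plain, cipher))
    print(keystream[:32].hex())  # repeat with a second file pair and compare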
If it is using the same keystream (unlikely) then your problem is solved. If the keystreams are different, then your easiest solution is to restore the affected files from backups. You did keep backups, didn't you? Alternatively, research the particular infection you have and see if anyone else has broken that particular variant, so you can derive the key(s) they used and hence regenerate the required keystreams.
If you have a lot of money then a data recovery firm might be able to help you, but they will certainly charge.
A rule of thumb to tell a decent cipher from a toy cipher is to encrypt a highly compressible file and then try to compress it in its encrypted form. A dumb cipher will produce a file with a level of entropy similar to that of the original, so the encrypted file will compress about as well as the original did. On the other hand, a good cipher (even without an initialization vector) will produce a file that looks like random garbage and thus will not compress at all.
When I compressed your Enc-Header.bin of 512 bytes with PKZIP, the output was also 512 bytes, so the cipher is not as dumb as you expected; bad luck. (But that does not mean the malware has no weak spots at all.)
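The same test is easy to reproduce in Python with zlib; if the compressed size is close to the original size, the ciphertext looks random:

    import zlib

    data = open("Enc-Header.bin", "rb").read()
    compressed = zlib.compress(data, 9)   # maximum compression effort
    print(len(data), "->", len(compressed))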

Please help me trace how charsets are handled every step of the way

We all know how easy character sets are on the web, yet every time you think you got it right, a foreign charset bites you in the butt. So I'd like to trace the steps of what happens in a fictional scenario I will describe below. I'm going to try and put down my understanding as well as possible but my question is for you folks to correct any mistakes I make and fill in any BLANKs.
When reading this scenario, imagine that this is being done on a Mac by John, and on Windows by Jane, and add comments if one behaves differently than the other in any particular situation.
Our hero (John/Jane) starts by writing a paragraph in Microsoft Word. Word's charset is BLANK1 (CP1252?).
S/he copies the paragraph, including smart quotes (e.g. “ ”). The act of copying is done by the BLANK2 (Operating system...Windows/Mac?) which BLANK3 (detects what charset the application is using and inherits the charset?). S/he then pastes the paragraph in a text box at StackOverflow.
Let's assume StackOverflow is running on Apache/PHP and that their set up in httpd.conf does not specify AddDefaultCharset utf-8 and their php.ini sets the default_charset to ISO-8859-1.
Yet neither charset above matters, because Stack Overflow's header contains this statement: META http-equiv="Content-Type" content="text/html; charset=UTF-8". So even though, when you clicked on "Ask Question", you might have seen a *RESPONSE header in Firebug of "Content-type: text/html;" ... in fact Firefox/IE/Opera/other browsers BLANK4 (completely 100% ignore the server header and override it with the Meta Content-type declaration in the header? Although the browser must read part of the file before knowing the Content-type, it doesn't have to do anything with the encoding until it displays the body, so this makes no difference to the browser?).
Since the Meta Content-type of the page is UTF-8, the input form will convert any characters you type into the box, into UTF-8 characters. BLANK5 (If someone can go into excruciating detail about what the browser does in this step, it would be very helpful...here's my understanding...since the operating system controls the clipboard and display of the character in the form, it inserts the character in whatever charset it was copied from. And displays it in the form as that charset...OVERRIDING the UTF-8 in this example).
Let's assume the form method=GET rather than POST so we can play with the URL browser input.... Continuing our story, the form is submitted as UTF-8. The smart quotes, which are decimal codes 147 & 148 (in CP1252), get transformed into BLANK6 characters when the browser converts them to UTF-8.
Let's assume that after submission, Stack Overflow found an error in the form, so rather than displaying the resulting question, it pops back up the input box with your question inside the form. In the PHP, the form variables are escaped with htmlspecialchars($var) in order for the data to be properly displayed, since this time it's the BLANK7 (browser controlling the display, rather than the operating system...therefore the quotes need to be represented as their UTF-8 equivalents or else you'd get the dreaded funny-looking � question mark?)
However, if you take the smart quotes, and insert them directly in the URL bar and hit enter....the htmlspecialchars will do BLANK8, messing up the form display and inserting question marks �� since querying a URL directly will just use the encoding in the url...or even a BLANK9 (mix of encodings?) if you have more than one in there...
When the REQUEST is sent out, the browser lists acceptable charsets to the server. The list of charsets comes from BLANK10.
Now you might think our story ends there, but it doesn't. Because Stack Overflow needs to save this data to a database. Fortunately, the people running this joint are smart. So when their MySQL client connects to the database, it makes sure the client and server are talking UTF-8 to each other by issuing the SET NAMES utf8 command as soon as the connection is initiated. Additionally, the default character set for MySQL is set to UTF-8, and each field is set the same way.
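A sketch of that connection setup in Python, with PyMySQL as an assumed driver; the charset="utf8mb4" argument makes the driver issue the equivalent of SET NAMES when it connects:

    import pymysql

    conn = pymysql.connect(host="localhost", user="app", password="secret",
                           database="site", charset="utf8mb4")
    with conn.cursor() as cur:
        cur.execute("SELECT @@character_set_client, @@character_set_connection")
        print(cur.fetchone())  # ('utf8mb4', 'utf8mb4')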
Therefore, Stack Overflow has completely secured their website from SQL injection, CSRF and XSS issues...or at least those borne from charset game-playing.
*Note, this is an example, not the actual response by that page.
I don't know if this "answers" your "question", but I can at least help you with what I think may be a critical misunderstanding.
You say, "Since the Meta Content-type of the page is UTF-8, the input form will convert any characters you type into the box, into UTF-8 characters." There is no such thing as a "UTF-8 character", and it isn't true or even meaningful to think of the form "converting" anything into anything when you paste it. Characters are a completely abstract concept, and there's no way of knowing (without reading the source) how a given program, including your web browser, decides to implement them. Since most important applications these days are Unicode-savvy, they probably have some internal abstraction to represent text as Unicode characters--note, that's Unicode and not UTF-8.
A piece of text, in Unicode (or in any other character set), is represented as a series of code points, integers that are uniquely assigned to characters, which are named entities in a large database, each of which has any number of properties (such as whether it's a combining mark, whether it goes right-to-left, etc.). Here's the part where the rubber meets the road: in order to represent text in a real computer, by saving it to a file, or sending it over the wire to some other computer, it has to be encoded as a series of bytes. UTF-8 is an encoding (or a "transformation format" in Unicode-speak), that represents each integer code point as a unique sequence of bytes. There are several interesting and good properties of UTF-8 in particular, but they're not relevant to understanding, in general, what's going on.
In the scenario you describe, the content-type metadata tells the browser how to interpret the bytes being sent as a sequence of characters (which are, remember, completely abstract entities, having no relationship to bytes or anything). It also tells the browser to please encode the textual values entered by the user into a form as UTF-8 on the way back to the server.
All of these remarks apply all the way up and down the chain. When a computer program is processing "text", it is doing operations on a sequence of "characters", which are abstractions representing the smallest components of written language. But when it wants to save text to a file or transmit it somewhere else, it must turn that text into a sequence of bytes.
We use Unicode because its character set is universal, and because the byte sequences it uses in its encodings (UTF-8, the UTF-16s, and UTF-32) are unambiguous.
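A small Python illustration of that distinction: one sequence of abstract characters, three different byte sequences depending on the encoding chosen.

    s = "\u201Chi\u201D"      # “hi” with smart quotes (U+201C / U+201D)
    print(s.encode("utf-8"))   # b'\xe2\x80\x9chi\xe2\x80\x9d'
    print(s.encode("cp1252"))  # b'\x93hi\x94' (the decimal 147/148 from earlier)
    print(s.encode("utf-16"))  # a BOM plus two bytes per code unit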
P.S. When you see �, there are two possible causes.
1) A program was asked to write some characters using some character set (say, ISO-8859-1) that does not contain a particular character that appears in the text. So if text is represented internally as a sequence of Unicode code points, and the text editor is asked to save as ISO-8859-1, and the text contains some Japanese character, it will have to either refuse to do it, or spit out some arbitrary ISO-8859-1 byte sequence to mean "no puedo".
2) A program received a sequence of bytes that perhaps does represent text in some encoding, but it interprets those bytes using a different encoding. Some byte sequences are meaningless in that encoding, so it can either refuse to do it, or just choose some character (such as �) to represent each unintelligible byte sequence.
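Both failure modes are easy to reproduce in Python (the character and encodings here are arbitrary examples):

    # 1) Encoding into a charset that lacks the character:
    print("\u30b5".encode("iso-8859-1", errors="replace"))  # b'?' (no サ in Latin-1)

    # 2) Decoding bytes with the wrong charset:
    print(b"\xe3\x82\xb5".decode("utf-8"))                # 'サ' (the right encoding)
    print(b"\xe3\x82\xb5".decode("iso-8859-1"))           # mojibake: 'ã', U+0082, 'µ'
    print(b"\xe3\x82".decode("utf-8", errors="replace"))  # '�' (truncated sequence)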
P.P.S. These encode/decode dances happen between applications and the clipboard in your OS of choice. Imagine the possibilities.
In answer to your comments:
It's not true that "Word uses CP1252 encoding"; it uses Unicode to represent text internally. You can verify this, trivially, by pasting some Katakana character such as サ into Word. Windows-1252 cannot represent such a character.
When you "copy" something, from any application, it's entirely up to the application to decide what to put on the clipboard. For example, when I do a copy operation in Word, I see 17 different pieces of data, each having a different format, placed into the clipboard. One of them has type CF_UNICODETEXT, which happens to be UTF-16.
Now, as for URLs... Details are found here. Before sending an HTTP request, the browser must turn an IRI (which can contain any text at all) into a URI. You convert an IRI to a URI by first encoding the text as UTF-8, then representing the UTF-8 bytes outside the ASCII printable range by their percent-escaped forms. So, for example, the correct encoding for http://foo.com/dir1/引き割り.html is http://foo.com/dir1/%E5%BC%95%E3%81%8D%E5%89%B2%E3%82%8A.html . (Host names follow different rules, but it's all in the linked-to resource.)
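The same conversion in Python, using the path from that example:

    from urllib.parse import quote

    path = "/dir1/引き割り.html"
    print("http://foo.com" + quote(path))
    # http://foo.com/dir1/%E5%BC%95%E3%81%8D%E5%89%B2%E3%82%8A.html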
Now, in my opinion, the browser ought to show plain old text in the location bar, and do all of the encoding behind the scenes. But some browsers make stupid choices, and they show you the IRI form, or some chimera of a URL and an IRI.
