I have a .FON file, vgafix.FON, that I'd like to use, "use" meaning I'd like to generate an image from individual characters of the font. I have seen this question regarding the .FON format, but I am having trouble interpreting the answers.
From the various links in that question, I understand that the .FON format is just an .EXE wrapped around a .FNT, but I cannot tell where the .EXE ends and the .FNT begins. .FON files should be NE executables, and the extended header contains an offset to the resource table, which is where I'd expect the .FNT data to be. In vgafix.FON, the extended header starts at 0x80; offset 0x24 within it should contain the resource table offset, and offset 0x34 the number of resource table entries.
However, the resource table offset points to an address that doesn't start with a null byte, which .FNT files are supposed to. Additionally, 0xB4 contains 0, so are there zero resource table entries? I am unsure whether offsets are relative to the beginning of the header or to the position of the value within the header, but the above holds either way. I can see the copyright information, which I believe is part of the .FNT file(s), but that text is not exactly 60 characters, so I'm unsure where it begins or ends as well.
What am I misunderstanding about these file formats, and how can I tell where the .FON container data ends and the proper .FNT data begins?
I'm just finishing up code that does exactly this. It's still in a somewhat rough state; email me to get the complete class, as it's a bit bulky to post here. Its purpose is to use .FON files to make big letters in terminal windows; you can discard that part. The main thing is that it parses the complete .FON file to identify resources, fonts in particular.
Here are the main points:
Traverse the MZ header
Traverse the NE header
Locate the Resource Table in the NE header
Parse the Resource Table to find the fonts (there can be many!)
Parse each font.
Do your stuff.
There is no logic to handle vector fonts, just raster/bitmapped fonts.
regards
Kári Poulsen
kpo#kvf.fo
In describing how Find (and Find and Replace) work, the Atom Flight Manual refers to the buffer as one scope of search, with the entire project being another. What is the buffer? It seems like it would be the current file, but I suspect it is more than that.
From the Atom Flight Manual:
A buffer is the text content of a file in Atom. It's basically the same as a file for most descriptions, but it's the version Atom has in memory. For instance, you can change the text of a buffer and it isn't written to its associated file until you save it.
Also came across this from The Craft of Text Editing by Craig Finseth, Chapter 6:
A buffer is the basic unit of text being edited. It can be any size, from zero characters to the largest item that can be manipulated on the computer system. This limit on size is usually set by such factors as address space, amount of real and/or virtual memory, and mass storage capacity. A buffer can exist by itself, or it can be associated with at most one file. When associated with a file, the buffer is a copy of the contents of the file at a specific time. A file, on the other hand, can be associated with any number of buffers, each one being a copy of that file's contents at the same or at different times.
In a Mach-O executable, I am trying to increase the size of the __LLVM segment that precedes the __LINKEDIT segment (with a home-grown tool). I am considering two strategies: (a) move the __LLVM segment to after the __LINKEDIT segment, producing a file that is not what ld would create (now with a gap and section addresses out of order), and (b) move the __LINKEDIT segment to allow resizing of the __LLVM segment that precedes it. I need the result to be accepted for downstream processing, e.g. generating an .ipa file or sending to the App Store.
This question is about my assumptions and the viability of these approaches. Specifically, what are the potential pitfalls of each that might lead them to fail?
I implemented the first approach (a). The result is understood by segedit's -extract option, but its -replace option complains that the segments are out of order. I append a new segment to the file and update the address and length values in the corresponding load command to refer to this new segment data (both the file offsets and the destination memory addresses). This might be fine, as long as the other downstream processing accepts the result (still to be checked; e.g. any local signature is likely invalidated).
The second approach (b) would seem cleaner, as long as there are no references into the __LINKEDIT segment, which I guess contains linking information (symbol tables etc., rather than code). I have not tried this yet, though it seems to be a foregone conclusion that segedit will be happy with the result, which may suggest other processing might also be happier. Are there likely to be any references that are invalidated due to simply moving this segment? I am guessing that I will have to update further load commands (they seem to reference into the __LINKEDIT segment), which I have not examined, but this should be fairly straightforward.
EDIT: Replaced my confused use of "section" with "segment" (mentioned in answer).
ADDED: Context is that I have no control over generating the original executable. I need to post-process it, essentially performing a 'segedit -replace' operation, wherein a section in the segment is to be replaced with one larger than the space previously allocated for the segment.
FOLLOW-UP clarifying question: It seems from the answer that moving the __LINKEDIT segment will break it. Can this be fixed by adjusting load commands only (e.g. LC_DYLD_INFO_ONLY, LC_LOAD_DYLINKER, LC_LOAD_DYLIB), not data in any segments? I am not yet familiar with these load commands, and would like to know whether to pursue this.
So basically the segments and sections describe how the physical file maps onto virtual memory.
As I mentioned in a previous iteration of this answer, there are limitations on segment order:
The __TEXT segment must start at physical file offset 0 of the executable
The __LINKEDIT segment must not start at physical file offset 0
__LINKEDIT's file offset + file size should equal the physical executable size (this implies __LINKEDIT being the last segment); otherwise code signing won't work.
LC_DYLD_INFO_ONLY contains file offsets to the dyld loading bind opcodes for:
rebase
bind at load
weak bind
lazy bind
export
For each kind there is a file offset and size entry in LC_DYLD_INFO_ONLY describing data that lies within __LINKEDIT (in a "regular" ld-linked executable). LC_DYLD_INFO_ONLY does not use any segment & section information from __LINKEDIT directly; the file offsets and sizes are enough.
EDIT: As also mentioned in @kirelagin's answer here:
"Apparently, the new version of dyld from 10.12 Sierra performs a check that previous versions did not perform: it makes sure that the LC_SYMTAB symbols table is entirely within the __LINKEDIT segment."
I assume that since you want to inflate the size of the preceding __LLVM segment, you also want some extra data in the file itself. Typically the data described by __LINKEDIT (i.e. not the segment & sections themselves, but the actual data) won't use 100% of its space, so it could be modified to start "later" and occupy less space.
A tool called jtool by Jonathan Levin could probably do it for you.
I know this is an old question, but I solved this problem while solving another problem.
Define the slide amount; it must be page-aligned, so I chose 0x4000.
Add the slide amount to the relevant load commands; this includes, but is not limited to:
__LINKEDIT segment (duh)
dyld_info_command
symtab_command
dysymtab_command
linkedit_data_commands
Physically move the __LINKEDIT data in the file.
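As an illustration of the second step, here is a hedged Python sketch that patches just the LC_SYMTAB offsets in a 64-bit little-endian image held in a byte buffer. A real tool must do the same for the __LINKEDIT segment command, LC_DYSYMTAB, LC_DYLD_INFO(_ONLY) and the linkedit_data commands, and then actually move the bytes.

```python
import struct

LC_SYMTAB = 0x2  # load command number for the symbol table command

def slide_symtab(macho: bytearray, slide: int) -> None:
    """Add `slide` to the symoff/stroff file offsets in LC_SYMTAB.

    Minimal sketch for a 64-bit, little-endian, thin (non-fat) Mach-O;
    moving __LINKEDIT also requires patching the other offset-bearing
    load commands and memmoving the segment data itself.
    """
    ncmds = struct.unpack_from("<I", macho, 0x10)[0]   # mach_header_64.ncmds
    pos = 0x20                                         # sizeof(mach_header_64)
    for _ in range(ncmds):
        cmd, cmdsize = struct.unpack_from("<II", macho, pos)
        if cmd == LC_SYMTAB:
            symoff, nsyms, stroff, strsize = struct.unpack_from(
                "<IIII", macho, pos + 8)
            struct.pack_into("<IIII", macho, pos + 8,
                             symoff + slide, nsyms, stroff + slide, strsize)
        pos += cmdsize                                 # next load command
```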
Is there a simple and quick way to detect encrypted files? I have heard about entropy calculation, but if I calculate it for every file on a drive, it will take days to detect encryption.
Is it possible to, say, calculate some value for only the first 100 or 1024 bytes and then decide? Does anyone have sources for that?
I would use an entropy calculation. Calculate the entropy value over X bytes of known encrypted data (normalized, it should be near 1, regardless of the type of encryption); you may want to avoid file headers and footers, as these may contain non-encrypted metadata.
Calculate the entropy for a file; if it's close to 1, then it's either encrypted or /dev/random. If it's quite far from 1, then it's likely not encrypted. I'm sure you could apply significance tests to this to get a baseline.
It's about 10 lines of Perl; I can't remember which library is used (although this may be useful: http://dingo.sbs.arizona.edu/~hammond/ling696f-sp03/addonecross.txt)
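If Perl isn't handy, the same idea is a few lines of Python. The 0.95 threshold and the 512-byte header skip below are arbitrary choices, and remember that compressed data will also score high:

```python
import math
from collections import Counter

def normalized_entropy(chunk: bytes) -> float:
    """Shannon entropy of a byte string, scaled to 0..1 (1 = 8 bits/byte)."""
    if not chunk:
        return 0.0
    n = len(chunk)
    return -sum(c / n * math.log2(c / n)
                for c in Counter(chunk).values()) / 8.0

def looks_encrypted(path: str, sample: int = 1024,
                    threshold: float = 0.95) -> bool:
    """Sample one small chunk past a possible header and threshold it.

    Both `sample` and `threshold` are tuning knobs, not magic values.
    """
    with open(path, "rb") as f:
        f.seek(512)                   # skip a plausible header region
        chunk = f.read(sample)
        if len(chunk) < sample:       # small file: fall back to the start
            f.seek(0)
            chunk = f.read(sample)
    return normalized_entropy(chunk) >= threshold
```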
You could just make a system that recognizes particular common forms of encrypted files (ex: recognize encrypted zip, rar, vim, gpg, ssl, ecryptfs, and truecrypt). Any attempt to determine encryption based on the raw data will quickly run into a steganography discussion.
One of the advantages of good encryption is that you can design it so that it can't be detected - see the Wikipedia article on deniable encryption for example.
Every statistical approach to detecting encryption will give you "false alarms", such as compressed data or random-looking data in general.
Imagine I wrote a program that outputs two files: file1 contains 1024 bits of π and file2 is an encrypted version of file1. If you don't know anything about the contents of file1 or file2, there's no way to distinguish them. In fact, it's quite likely that π contains the contents of file2 somewhere!
EDIT:
By the way, it doesn't even work the other way round (detecting unencrypted files). You could write a program that transforms encrypted data into readable English text by assigning words or whole sentences to its bits/bytes.
How do people view encrypted pictures like the ones on this wiki page? Is there a special program to do it, or did someone just apply a silly XOR to make a point about ECB? I'm not a graphics person, so if there are programs to view encrypted pictures, what are they?
Encryption works on a stream of bytes. That is, it takes an array of bytes and outputs another array of bytes. Images are also just an array of bytes. We assign the "r" component of the top-left pixel to the first byte, the "g" component to the second byte, the "b" component to the third byte. The "r" component of the pixel next to that is the fourth byte and so on.
So to "encrypt" an image, you just take a byte array of the pixels in the first image, encrypt it (encryption usually doesn't change the number of bytes - apart from padding) and use those encrypted bytes as the pixel data for the second image.
Note that this is different from encrypting an entire image file. Usually an image file has a specific header (e.g. the JPEG header, etc). If you encrypted the whole file then the header would also be included and you wouldn't be able to "display" the image without decrypting the whole thing.
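To see why ECB mode makes the picture recognizable, it's enough to model any block cipher in ECB mode: each block is encrypted independently, so equal plaintext blocks produce equal ciphertext blocks. The sketch below uses a keyed hash as a stand-in per-block permutation purely for illustration; it is not real encryption, where AES would be used instead.

```python
import hashlib

BLOCK = 16  # AES block size in bytes

def toy_ecb_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy 'ECB mode': encrypt each 16-byte block independently.

    The keyed SHA-256 here is only a stand-in for a real block cipher;
    the point is the per-block independence, not the primitive.
    """
    assert len(data) % BLOCK == 0
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        out += hashlib.sha256(key + block).digest()[:BLOCK]
    return bytes(out)
```

Identical 16-byte runs of pixels (sky, flat background) therefore map to identical ciphertext runs, which is exactly the pattern your eye picks out in the Tux image.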
To view an encrypted image, the image has to be in an uncompressed image format, for example BMP.
PNG, JPEG and so on are compressed formats, so you won't be able to display those. The image header also has to remain uncompressed.
If you want to encrypt pictures like this, just convert the picture to an uncompressed format, open it with a hex editor, and save the image header. After that you can encrypt the image data with AES/ECB.
Finally, re-insert the original image header. Now you should be able to view the encrypted image.
It's not just a silly XOR (block ciphers may use XOR internally), but yes, it's just there to emphasize that any scheme which converts the same input to the same output every time makes it easy to spot patterns that were present in the input. The image is there to show how easily we can spot Tux in the "encrypted" output. The author could have used any kind of data, but used an image because the human eye is very good at spotting patterns, so it makes a good example.
As the article says, better schemes use the output of the previous block to "randomize" the next block, so you can't see patterns in the output (a la the image on the right).
We all know how "easy" character sets are on the web: every time you think you've got it right, a foreign charset bites you in the butt. So I'd like to trace the steps of what happens in the fictional scenario I describe below. I'm going to try to put down my understanding as well as possible, but my question is for you folks to correct any mistakes I make and fill in any BLANKs.
When reading this scenario, imagine that this is being done on a Mac by John, and on Windows by Jane, and add comments if one behaves differently than the other in any particular situation.
Our hero (John/Jane) starts by writing a paragraph in Microsoft Word. Word's charset is BLANK1 (CP1252?).
S/he copies the paragraph, including smart quotes (e.g. “ ”). The act of copying is done by the BLANK2 (Operating system...Windows/Mac?) which BLANK3 (detects what charset the application is using and inherits the charset?). S/he then pastes the paragraph in a text box at StackOverflow.
Let's assume StackOverflow is running on Apache/PHP and that their set up in httpd.conf does not specify AddDefaultCharset utf-8 and their php.ini sets the default_charset to ISO-8859-1.
Yet neither charset above matters, because Stack Overflow's header contains this statement: META http-equiv="Content-Type" content="text/html; charset=UTF-8". So even though, when you clicked on "Ask Question", you might have seen a *RESPONSE header in Firebug of "Content-type: text/html;" ... in fact Firefox/IE/Opera/other browsers BLANK4 (completely 100% ignore the server header and override it with the meta Content-Type declaration in the header? Although it must read the file before knowing the Content-Type, since it doesn't have to do anything with the encoding until it displays the body, this makes no difference to the browser?).
Since the Meta Content-type of the page is UTF-8, the input form will convert any characters you type into the box, into UTF-8 characters. BLANK5 (If someone can go into excruciating detail about what the browser does in this step, it would be very helpful...here's my understanding...since the operating system controls the clipboard and display of the character in the form, it inserts the character in whatever charset it was copied from. And displays it in the form as that charset...OVERRIDING the UTF-8 in this example).
Let's assume the form method=GET rather than POST, so we can play with the URL browser input. Continuing our story, the form is submitted as UTF-8. The smart quotes, which represent decimal codes 147 & 148, get transformed into BLANK6 characters when the browser converts them to UTF-8.
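For reference, the round trip for those two codes is easy to verify: bytes 147 and 148 (0x93/0x94) are smart quotes only under CP1252, where they decode to the code points U+201C and U+201D, and UTF-8 then encodes each of those code points as three bytes:

```python
# CP1252 bytes 0x93/0x94 (decimal 147/148) are the curly quotes.
left = b"\x93".decode("cp1252")
right = b"\x94".decode("cp1252")
assert left == "\u201c" and right == "\u201d"   # LEFT/RIGHT DOUBLE QUOTATION MARK
# Their UTF-8 encodings are three bytes each:
assert left.encode("utf-8") == b"\xe2\x80\x9c"
assert right.encode("utf-8") == b"\xe2\x80\x9d"
```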
Let's assume that after submission, Stack Overflow found an error in the form, so rather than displaying the resulting question, it pops back up the input box with your question inside the form. In the php, the form variables are escaped with htmlspecialchars($var) in order for the data to be properly displayed, since this time it's the BLANK7 (browser controlling the display, rather than the operating system...therefore the quotes need to be represented as its UTF-8 equivalent or else you'd get the dreaded funny looking � question mark?)
However, if you take the smart quotes, and insert them directly in the URL bar and hit enter....the htmlspecialchars will do BLANK8, messing up the form display and inserting question marks �� since querying a URL directly will just use the encoding in the url...or even a BLANK9 (mix of encodings?) if you have more than one in there...
When the REQUEST is sent out, the browser lists acceptable charsets to the server. The list of charsets comes from BLANK10.
Now you might think our story ends there, but it doesn't, because Stack Overflow needs to save this data to a database. Fortunately, the people running this joint are smart. When their MySQL client connects to the database, it makes sure the client and server talk UTF-8 to each other by issuing the SET NAMES utf8 command as soon as the connection is initiated. Additionally, MySQL's default character set is set to UTF-8, and each field is set the same way.
Therefore, Stack Overflow has completely secured its website from SQL injection, CSRF forgery, and XSS issues... or at least those born of charset games.
*Note, this is an example, not the actual response by that page.
I don't know if this "answers" your "question", but I can at least help you with what I think may be a critical misunderstanding.
You say, "Since the Meta Content-type of the page is UTF-8, the input form will convert any characters you type into the box, into UTF-8 characters." There is no such thing as a "UTF-8 character", and it isn't true or even meaningful to think of the form "converting" anything into anything when you paste it. Characters are a completely abstract concept, and there's no way of knowing (without reading the source) how a given program, including your web browser, decides to implement them. Since most important applications these days are Unicode-savvy, they probably have some internal abstraction to represent text as Unicode characters--note, that's Unicode and not UTF-8.
A piece of text, in Unicode (or in any other character set), is represented as a series of code points, integers that are uniquely assigned to characters, which are named entities in a large database, each of which has any number of properties (such as whether it's a combining mark, whether it goes right-to-left, etc.). Here's the part where the rubber meets the road: in order to represent text in a real computer, by saving it to a file, or sending it over the wire to some other computer, it has to be encoded as a series of bytes. UTF-8 is an encoding (or a "transformation format" in Unicode-speak), that represents each integer code point as a unique sequence of bytes. There are several interesting and good properties of UTF-8 in particular, but they're not relevant to understanding, in general, what's going on.
In the scenario you describe, the content-type metadata tells the browser how to interpret the bytes being sent as a sequence of characters (which are, remember, completely abstract entities, having no relationship to bytes or anything). It also tells the browser to please encode the textual values entered by the user into a form as UTF-8 on the way back to the server.
All of these remarks apply all the way up and down the chain. When a computer program is processing "text", it is doing operations on a sequence of "characters", which are abstractions representing the smallest components of written language. But when it wants to save text to a file or transmit it somewhere else, it must turn that text into a sequence of bytes.
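In Python terms, the character/byte split looks like this (the sample strings are just examples):

```python
# A Python str is a sequence of abstract characters (code points);
# bytes only appear once you pick an encoding.
text = "naïve café"
raw = text.encode("utf-8")              # concrete bytes for disk or wire
assert isinstance(raw, bytes)
assert raw.decode("utf-8") == text      # decoding recovers the characters
# Same character, different encodings, different byte sequences:
assert "é".encode("utf-8") == b"\xc3\xa9"
assert "é".encode("iso-8859-1") == b"\xe9"
```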
We use Unicode because its character set is universal, and because the byte sequences it uses in its encodings (UTF-8, the UTF-16s, and UTF-32) are unambiguous.
P.S. When you see �, there are two possible causes.
1) A program was asked to write some characters using some character set (say, ISO-8859-1) that does not contain a particular character that appears in the text. So if text is represented internally as a sequence of Unicode code points, and the text editor is asked to save as ISO-8859-1, and the text contains some Japanese character, it will have to either refuse to do it, or spit out some arbitrary ISO-8859-1 byte sequence to mean "no puedo".
2) A program received a sequence of bytes that perhaps does represent text in some encoding, but it interprets those bytes using a different encoding. Some byte sequences are meaningless in that encoding, so it can either refuse to do it, or just choose some character (such as �) to represent each unintelligible byte sequence.
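Both failure modes are easy to reproduce (the sample characters are arbitrary):

```python
# Case 1: the target charset lacks the character -> refuse or substitute.
try:
    "サ".encode("iso-8859-1")           # Katakana isn't in Latin-1
except UnicodeEncodeError:
    pass                                # the encoder refuses outright
assert "サ".encode("iso-8859-1", errors="replace") == b"?"

# Case 2: bytes decoded with the wrong charset -> mojibake or U+FFFD.
utf8_bytes = "é".encode("utf-8")        # b'\xc3\xa9'
assert utf8_bytes.decode("iso-8859-1") == "Ã©"   # classic mojibake
assert b"\xff\xfe".decode("utf-8", errors="replace") == "\ufffd\ufffd"
```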
P.P.S. These encode/decode dances happen between applications and the clipboard in your OS of choice. Imagine the possibilities.
In answer to your comments:
It's not true that "Word uses CP1252 encoding"; it uses Unicode to represent text internally. You can verify this, trivially, by pasting some Katakana character such as サ into Word. Windows-1252 cannot represent such a character.
When you "copy" something, from any application, it's entirely up to the application to decide what to put on the clipboard. For example, when I do a copy operation in Word, I see 17 different pieces of data, each having a different format, placed into the clipboard. One of them has type CF_UNICODETEXT, which happens to be UTF-16.
Now, as for URLs... details are found here. Before sending an HTTP request, the browser must turn an IRI (which can contain any text at all) into a URI. You convert an IRI to a URI by first encoding it as UTF-8, then representing the UTF-8 bytes outside the printable ASCII range by their percent-escaped forms. So, for example, the correct encoding for http://foo.com/dir1/引き割り.html is http://foo.com/dir1/%E5%BC%95%E3%81%8D%E5%89%B2%E3%82%8A.html . (Host names follow different rules, but it's all in the linked-to resource.)
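That percent-escaping step is exactly what `urllib.parse.quote` does in Python, shown here with the example path from above:

```python
from urllib.parse import quote, unquote

path = "/dir1/引き割り.html"
# quote() encodes the text as UTF-8, then percent-escapes every byte
# outside the safe ASCII set ('/' is kept safe by default).
assert quote(path) == "/dir1/%E5%BC%95%E3%81%8D%E5%89%B2%E3%82%8A.html"
# unquote() reverses both steps:
assert unquote(quote(path)) == path
```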
Now, in my opinion, the browser ought to show plain old text in the location bar, and do all of the encoding behind the scenes. But some browsers make stupid choices, and they show you the IRI form, or some chimera of a URL and an IRI.