When processing EPUB files, I've run into the issue that in some EPUB books the paths of the XHTML files are written into the content.opf URL-encoded.
For example, the path "abcá.xhtml" is written into the content.opf as href="abc%C3%A1.xhtml" (%C3%A1 being the URL-encoded representation of the character 'á').
I could not find any information about this anywhere. Is this in the EPUB standard? The EPUB file in question was generated with Adobe InDesign.
UPDATE: I tested the epub with the Calibre E-book viewer, with the following results:
Special character in file name, URL-encoded path in content.opf (abcá.xhtml and href="abc%C3%A1.xhtml"): Calibre opens the epub with no problem.
Special character in file name, special character is directly written into path in content.opf with UTF-8 (abcá.xhtml and href="abcá.xhtml"): Calibre opens the epub with no problem.
File name contains a string which happens to be URL-decodable and the same string is written into the content.opf (abc%C3%A1.xhtml and href="abc%C3%A1.xhtml"): Calibre cannot open the epub and displays an error message.
So I guess that Calibre URL-decodes every path in the content.opf before it tries to open the files, which can lead to weird edge cases like the last one.
However, this seems to be quite a rare case, so I think I am going to process the paths the same way, by URL-decoding them.
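For completeness, this is roughly what that decoding step looks like (a minimal C# sketch; the helper name is mine and the namespace assumes an OPF 2.0 package, so treat it as illustrative rather than anything the spec prescribes):

using System;
using System.IO;
using System.Linq;
using System.Xml.Linq;

static class OpfPaths
{
    // Resolve each manifest href in a content.opf to a file-system path,
    // URL-decoding first so "abc%C3%A1.xhtml" becomes "abcá.xhtml".
    public static string[] ResolveManifestHrefs(string opfPath)
    {
        XNamespace opf = "http://www.idpf.org/2007/opf"; // OPF 2.0 namespace
        string opfDir = Path.GetDirectoryName(opfPath) ?? "";

        return XDocument.Load(opfPath)
            .Descendants(opf + "item")
            .Select(item => (string)item.Attribute("href"))
            .Where(href => href != null)
            .Select(href => Path.Combine(opfDir, Uri.UnescapeDataString(href)))
            .ToArray();
    }
}

Note that this inherits the same edge case Calibre hits: a file whose name literally contains a decodable sequence like %C3%A1 will be decoded anyway.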
It looks like it's probably a bad thing done by InDesign. Two relevant passages from the OPF spec:
From section 1.3.4: Relationship to Unicode
Reading Systems must parse all UTF-8 and UTF-16 characters properly (as required by XML). Reading Systems may decline to display some characters, but must be capable of signaling in some fashion that undisplayable characters are present. Reading Systems must not display Unicode characters merely as if they were 8-bit characters.
And section 1.4 Conformance
1.4.1.1: Package Conformance
Each conformant OPF Package Document must meet these necessary conditions:
it is a well-formed XML document (as defined in XML 1.0); and
it is encoded in UTF-8 or UTF-16; and
...
My reading of that is that a reading system needs to be capable of parsing href="abcá.xhtml", and so that's what InDesign should put in the .opf file.
Related
Trying to create custom types, aspects and properties for Alfresco, I followed the Alfresco Developer Series guide. When I reached the localization section I found out that Alfresco does not handle UTF-8 encoding in the .properties files that you create. Greek characters are not displayed correctly in Share.
Checking out other built-in .properties files (/opt/alfresco-4.0.e/tomcat/webapps/alfresco/WEB-INF/classes/alfresco/messages) I noticed that in Japanese, for example, the characters are in this notation: \u3059\u3079\u3066\u306e...
So, the question is: do I have to convert the Greek words into the above-mentioned notation for Share to display them correctly, or is there another, more elegant, way to do it?
The \u#### form is the Java form of a Unicode escape sequence, and is used to reference Unicode characters without having to worry about the encoding of the file that stores them.
This question has some information on how to create and decode them.
Another way, which is what Alfresco developers tend to use, is the native2ascii tool that ships with Java itself. With it, you can initially write your strings in a UTF-8 (for example) file, then use the tool to turn them into their escaped form.
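If you ever need to produce the escapes yourself rather than going through native2ascii, the transformation is mechanical. A purely illustrative sketch (in C#, not Alfresco tooling; the method name is made up):

using System.Text;

static class UnicodeEscapes
{
    // Turns "Καλημέρα" into "\u039a\u03b1\u03bb\u03b7\u03bc\u03ad\u03c1\u03b1",
    // roughly the same form native2ascii emits for .properties files.
    public static string Escape(string s)
    {
        var sb = new StringBuilder();
        foreach (char c in s)
            sb.Append(c < 128 ? c.ToString() : $"\\u{(int)c:x4}");
        return sb.ToString();
    }
}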
I'm developing an application that needs to be able to create & manipulate SQLite databases in user-defined paths, and I'm running into a problem I don't really understand. I'm testing my stuff against some really gross sample data with huge, unwieldy Unicode paths; for most of them there isn't a problem, but for one there is.
An example of a working connection string is:
Data Source="c:\test6\意外な高価で売れるかも? 出品は手順を覚えれば後はかんたん!\11オークションストアの出品は対象外とさせていただきます。\test.db";Version=3;
While one that fails is
Data Source="c:\test6\意外な高価で売れるかも? 出品は手順を覚えれば後はかんたん!\22今やPCライフに欠かせないのがセキュリティソフト。そのため、現在何種類も発売されているが、それぞれ似\test.db";Version=3;
I'm using System.Data.SQLite v1.0.66.0 due to reasons outside of my control, but I quickly tested with the latest, v1.0.77.0 and had the same problems.
Whether I'm attempting to newly create the test.db file or to open one I've manually put there, SQLiteConnection.Open throws an exception saying only "Unable to open the database file", and the stack trace shows that it's actually System.Data.SQLite.SQLite3.Open that throws.
Is there any way I can get System.Data.SQLite to play nicely with these paths? A workaround could be to create and manipulate my databases in a temporary location and then just move them to the actual locations for storage, since I can create and manipulate files normally otherwise. That's kind of a last resort though.
Thank you.
I am guessing you are on a Japanese-locale machine where the default system encoding (ANSI code page) is cp932 Japanese (≈Shift-JIS).
The second path contains:
ソ
which encodes to the byte sequence:
0x83 0x5C
Shift-JIS is a multibyte encoding that has the unfortunate property of sometimes re-using ASCII code units in the trail byte. In this case it has used byte 0x5C which corresponds to the backslash \. (Though this typically displays as a yen sign in Japanese fonts, for historical reasons.)
So if this pathname is passed into a byte-based API, it will get encoded in the ANSI code page, and you won't be able to tell the difference between a backslash meant as a directory separator and one that is a side effect of the multi-byte encoding. Consequently, any path containing one of the following characters will fail when accessed with a byte-based IO method:
―ソЫⅨ噂浬欺圭構蚕十申曾箪貼能表暴予禄兔喀媾彌拿杤歃畚秉綵臀藹觸軆鐔饅鷭偆砡纊犾
(Also any pathname that contains a Unicode character not present in cp932 will naturally fail.)
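You can see the collision directly from .NET; a small check (assuming code page 932 is available, which it is on .NET Framework on Windows):

using System;
using System.Linq;
using System.Text;

class Cp932Check
{
    static void Main()
    {
        // 'ソ' (U+30BD) encodes to 0x83 0x5C in code page 932;
        // the trail byte 0x5C is the same code unit as '\'.
        byte[] bytes = Encoding.GetEncoding(932).GetBytes("ソ");
        Console.WriteLine(string.Join(" ", bytes.Select(b => b.ToString("X2")))); // 83 5C
        Console.WriteLine((char)0x5C);                                            // \
    }
}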
It would appear that behind the scenes SQLite is using a byte-based IO method to open the filename it is given. This is unfortunate, but extremely common in cross-platform code, because the POSIX C standard library is defined to use byte-based filenames for operations like file open().
Consequently, using the C stdlib functions it is impossible to reliably access files with non-ASCII names. This sad situation is inherited by all sorts of cross-platform libraries and languages written on top of the stdlib; only tools written with specific support for Win32 Unicode filenames (e.g. Python) can reliably access all files under Windows.
Your options, then, are:
avoid using non-ASCII characters in the path name for your db, as per the move/rename suggestion;
continue to rely on the system locale being Japanese (ANSI code page=932), and just rename files to avoid any of the characters listed above;
get the short (8.3) filename of the file in question and use that instead of the real one: something like c:\test6\85D0~1\22PC~1\test.db. You can use dir /x to see the short filenames. They are always pure ASCII, avoiding the encoding problem;
add some code to get the short filename from the real one, using GetShortPathName (see the sketch after this list). This is a Win32 API, so you need a little help to call it from .NET. Note also that short filenames will still fail on a machine with short-filename generation disabled;
persuade SQLite to add support for Windows Unicode filenames;
persuade Microsoft to fix this problem once and for all by making the default encoding for byte interfaces UTF-8, like it is on all other modern operating systems.
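For the GetShortPathName option, a rough P/Invoke sketch (the path must already exist, and the helper name is mine):

using System.Runtime.InteropServices;
using System.Text;

static class ShortPath
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern uint GetShortPathName(string longPath, StringBuilder shortPath, uint bufferSize);

    // Returns the 8.3 form of an existing path (e.g. c:\test6\85D0~1\22PC~1\test.db
    // from the example above), or null if short-name generation is disabled or the call fails.
    public static string TryGetShortPath(string longPath)
    {
        var buffer = new StringBuilder(260);
        uint length = GetShortPathName(longPath, buffer, (uint)buffer.Capacity);
        return (length == 0 || length > buffer.Capacity) ? null : buffer.ToString();
    }
}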
I'm downloading a vCard to the browser using Response.Write to output .NET strings with special accented characters. The MIME type is text/x-vcard, and
French characters are appearing wrong in Outlook: for example, the .NET string Montréal;Québec shows up as Montréal Québec in the browser.
Apparently vCard default format is ASCII. .NET strings are Unicode UTF-16.
I'm using this vCard generator code from CodeProject.com
I've played with the System.Text.Encoding sample code at the bottom of this linked MSDN page to convert the Unicode string into bytes and then write the ASCII bytes, but then I get Montr?al Qu?bec (progress, but not a win). I've also tried setting the content type of the response to both us-ascii and utf-8.
If I open the downloaded vCard in Windows Notepad and save it as ANSI text (instead of the default Unicode format) and then open it in Outlook, it's okay. So my assumption is that I need to make the download use the ANSI charset, but I'm unsure whether I'm doing it wrong or misunderstanding where to start.
Update: Looking at the raw HTTP, it appears my French characters are being downloaded in an unexpected format, so it looks like I need to do some work on the server side...
Screenshot of the raw HTTP response (full size): http://img444.imageshack.us/img444/8533/charsd.png
é is what é looks like when it's encoded as UTF-8 and mistakenly decoded as ISO-8859-1 or windows-1252 (or "ANSI", as Microsoft apps like to call it). When you open the file in Notepad, it automatically detects the encoding as UTF-8. Then you change the encoding by saving it as "ANSI", which works because é is supported by that encoding as well.
When you view the page in Outlook, what does it say the encoding is? That HTTP dump looks like well-formed UTF-8 to me, but Outlook seems to be reading it as ISO-8859-1 or windows-1252. I don't use Outlook and I don't know its quirks; are you sure you got the headers right?
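You can reproduce that exact mojibake in a couple of lines; this is just an illustration, nothing Outlook-specific:

using System;
using System.Text;

class MojibakeDemo
{
    static void Main()
    {
        // Encode 'é' as UTF-8, then decode those bytes as windows-1252 ("ANSI").
        byte[] utf8 = Encoding.UTF8.GetBytes("é");                 // C3 A9
        string wrong = Encoding.GetEncoding(1252).GetString(utf8);
        Console.WriteLine(wrong);                                  // é
    }
}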
You don't need to convert anything! Just specify in the HTTP response headers on the text/x-vcard document that the response is UTF-8 encoded (Response.CharSet or Response.ContentEncoding or similar - not sure what your specific situation is).
Also, you could try emitting a UTF-8 byte order mark to help the client determine the encoding.
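In classic ASP.NET (System.Web) that could look something like the following; the file name, the vCardText variable, and whether you want the optional BOM are assumptions on my part:

// Inside an ASP.NET page or handler; Response is the usual HttpResponse.
Response.ContentType = "text/x-vcard";
Response.Charset = "utf-8";                                    // adds charset=utf-8 to Content-Type
Response.ContentEncoding = System.Text.Encoding.UTF8;
Response.AddHeader("Content-Disposition", "attachment; filename=contact.vcf");
Response.BinaryWrite(System.Text.Encoding.UTF8.GetPreamble()); // optional UTF-8 BOM
Response.Write(vCardText);                                     // vCardText: your generated vCard string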
I created a simple test page on my website www.xaisoft.com and it had no errors, but it came back with the following warning and I am not sure what it means.
The Unicode Byte-Order Mark (BOM) in UTF-8 encoded files is known to cause problems for some text editors and older browsers. You may want to consider avoiding its use until it is better supported.
To find out what the BOM is, you can take a look at the Unicode FAQ (quoting) :
Q: What is a BOM?
A: A byte order mark (BOM) consists of the character code U+FEFF at the beginning of a data stream, where it can be used as a signature defining the byte order and encoding form, primarily of unmarked plaintext files. Under some higher level protocols, use of a BOM may be mandatory (or prohibited) in the Unicode data stream defined in that protocol.
Depending on your editor, you might find an option in the preferences to indicate it should save unicode documents without a BOM... or change editor ^^
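If the file is produced by code rather than an editor, the same idea applies. In .NET, for example, a UTF8Encoding constructed with false writes no BOM (just a sketch; the file name and content are made up):

using System.IO;
using System.Text;

class NoBomWriter
{
    static void Main()
    {
        // new UTF8Encoding(false) -> UTF-8 without a byte order mark.
        var utf8NoBom = new UTF8Encoding(encoderShouldEmitUTF8Identifier: false);
        File.WriteAllText("page.html", "<html><body>hello</body></html>", utf8NoBom);
    }
}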
Some text editors - notably Notepad - put an extra character at the front of the text file to indicate that it's Unicode and what byte-order it is in. You don't expect Notepad to do this sort of thing, and you don't see it when you edit with Notepad. You need to open the file and explicitly resave it as ANSI. If you're using fancy characters like smart quotes, trademark symbols, circle-r, or that sort of thing, don't. Use the HTML entities instead.
Has anyone had experience generating files that have filenames containing non-ascii international language characters?
Is doing this an easy thing to achieve, or is it fraught with danger?
Is this functionality expected from Japanese/Chinese speaking web users?
Should file extensions also be international language characters?
Info: We currently support multilanguage on our site, but our filenames are always ASCII. We are using ASP.NET on the .NET framework. This would be used in a scenario where international users could choose a common format and name for their files.
Is this functionality expected from Japanese/Chinese speaking web users?
Yes.
Is doing this an easy thing to achieve, or is it fraught with danger?
There are issues. If you are serving files directly, or otherwise have the filename in the URL (eg.: http://www.example.com/files/こんにちは.txt -> http://www.example.com/files/%E3%81%93%E3%82%93%E3%81%AB%E3%81%A1%E3%81%AF.txt), you're generally OK.
But if you're serving files with the filename generated by the script, you can have problems. The issue is with the header:
Content-Disposition: attachment;filename="こんにちは.txt"
How do we encode those characters into the filename parameter? Well, it would be nice if we could just dump it in as UTF-8, and that will work in some browsers. But not IE, which uses the system codepage to decode characters from HTTP headers. On Windows, the system codepage might be cp1252 (Latin-1) for Western users, or cp932 (Shift-JIS) for Japanese users, or something else completely, but it will never be UTF-8, and you can't really guess what it's going to be in advance of sending the header.
Tedious aside: what does the standard say should happen? Well, it doesn't really. The HTTP standard, RFC2616, says that bytes in HTTP headers are ISO-8859-1, which wouldn't allow us to use Japanese. It goes on to say that non-Latin-1 characters can be embedded in a header by the rules of RFC2047, but RFC2047 explicitly denies that its encoded-words can fit in a quoted-string. Normally in RFC822-family headers you would use RFC2231 rules to embed Unicode characters in a parameter of a Content-Disposition (RFC2183) header, and RFC2616 does defer to RFC2183 for definition of that header. But HTTP is not actually an RFC822-family protocol and its header syntax is not completely compatible with the 822 family anyway. In summary, the standard is a bloody mess and no-one knows what to do, certainly not the browser manufacturers who pay no attention to it whatsoever. Hell, they can't even get the ‘quoted-string’ format of ‘filename="..."’ right, never mind character encodings.
So if you want to serve a file dynamically with non-ASCII characters in the name, the trick is to avoid sending the ‘filename’ parameter and instead dump the filename you want in a trailing part of the URL.
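In ASP.NET terms that might look something like this (the URL layout and variable names are mine, just to show the shape):

// Inside an ASP.NET page or handler.
// Put the Unicode file name in the last URL segment instead of in a
// Content-Disposition filename parameter; the browser takes the name from the URL.
string fileName = "こんにちは.txt";
string url = "/files/" + Uri.EscapeDataString(fileName);
// url == "/files/%E3%81%93%E3%82%93%E3%81%AB%E3%81%A1%E3%81%AF.txt"

// When serving the bytes for that URL, send attachment without a filename parameter:
Response.AddHeader("Content-Disposition", "attachment");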
Should file extensions also be international language characters?
In principle yes, file extensions are just part of the filename and can contain any character.
In practice on Windows I know of no application that has ever used a non-ASCII file extension.
One final thing to look out for on systems for East Asian users: you will find them typing weird, non-ASCII versions of Latin characters sometimes. These are known as the full-width and half-width forms, and are designed to allow Asians to type Latin characters that line up with the square grid used by their ideographic (Han etc.) characters.
That's all very well in free text, but for fields you expect to parse as Latin text or numbers, receiving an unexpected ‘４２’ integer or ‘．ｔｘｔ’ file extension can trip you up. To convert these ‘compatibility characters’ down to plain Latin, normalise your strings to ‘Unicode Normal Form NFKC’ before doing anything with them.
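In .NET that normalisation is a one-liner; a quick illustration using the full-width forms mentioned above:

using System;
using System.Text;

class NfkcDemo
{
    static void Main()
    {
        // Full-width "４２" and "ｆｉｌｅ．ｔｘｔ" fold down to plain ASCII under NFKC.
        Console.WriteLine("４２".Normalize(NormalizationForm.FormKC));             // 42
        Console.WriteLine("ｆｉｌｅ．ｔｘｔ".Normalize(NormalizationForm.FormKC)); // file.txt
    }
}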
Refer to this overview of file name limitations on Wikipedia.
You will have to consider where your files will travel, and stay within the most restrictive set of rules.
From my experience in Japan, filenames are typically saved in Japanese with the standard English extension. Apply the same to any other language.
The only problem you will run into is that in an unsupported environment for that character set, people will usually just see a whole bunch of squares with an extension. Obviously this won't be a problem for your target users.
I have been playing around with Unicode and Indian languages for a while now. Here are my views on your questions:
It's easy. You will need two things: enable Unicode (UTF-8/16/32) support in your OS so that you can type those characters, and get Unicode-compatible editors/tools so that your tools understand those characters.
Also, since you are looking at a localised web application, you have to ensure, or at least inform your visitor, that he/she needs a browser which uses the relevant encoding.
Your file extensions need not be i18n-ed.
My two cents:
The key thing with international file names is to make URLs like bobince suggested:
www.example.com/files/%E3%81%93%E3%82%93%E3.txt
I had to make a special routine for IE7, since it crops the filename if it's longer than 30 characters. So instead of "Your very long file name.txt" the file will appear as "%d4y long file name.txt". Interestingly, though, IE7 actually understands the header attachment;filename=%E3%81%93%E3%82%93%E3.txt correctly.