As part of localizing our WordPress website we need to support Chinese in simplified script. Not knowing anything about Chinese, we did some research and used zh-CN (also known as zh-Hans).
Our client states this is incorrect and that we should use:
ch-cn (also known as ch-hans). We checked several language plugins and did some research on the internet, but the code ch-cn seems pretty rare. Is my client wrong? He wants simplified script, Chinese (PRC).
Many thanks!
The client is wrong, and zh-CN and zh-Hans are both correct (their meanings differ in principle: zh-CN means "Chinese as used in mainland China" and zh-Hans means "Chinese written in simplified script", but in practice they boil down to the same written form). In HTML, on the Internet, and in most other contexts imaginable here, language tags follow BCP 47, where a two-letter code as the first part of a language tag must be an ISO 639-1 code. The authoritative registry assigns “ch” to the Chamorro language, whereas “zh” is assigned to Chinese.
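For instance, this is how the tag appears in an HTML template (a minimal sketch; how many subtags you include beyond "zh" is a matter of how precise you want to be):

    <!-- "zh" = Chinese (ISO 639-1), "Hans" = simplified script, "CN" = mainland China.
         "ch" would declare Chamorro, not Chinese. -->
    <html lang="zh-Hans-CN">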
I'm struggling with the following:
I have a lot of messy, informally written comments. The comments come from India, and they mix languages (though not within a single comment). In addition, some samples are transliterated (romanized), and I was wondering whether I can detect the language in this kind of mess.
With the Google Translate UI, though, I found that it copes with this.
As can be seen, the Google Translate UI detects the language (probably correctly), suggests how the text should be written in the detected language's script, and finally translates it.
In contrast, Google Translate API does not give such a translation, but detects the language yet. Here's the response for the same input string:
translations {
  translated_text: "Theek kiya"
  detected_language_code: "hi"
}
So I'm just wondering whether the UI does additional preprocessing before it runs the translation, like spelling correction or transliteration.
I don't see what I've missed with the API to make it actually translate the text.
Maybe someone faced the same problem and can help me?
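For reference, this is roughly what the call looks like with the Google.Cloud.Translation.V2 C# client (a sketch assuming that client; the question doesn't say which library produced the response above):

    using System;
    using Google.Cloud.Translation.V2;

    class DetectDemo
    {
        static void Main()
        {
            TranslationClient client = TranslationClient.Create();

            // No source language supplied, so the API detects one first.
            TranslationResult result =
                client.TranslateText("Theek kiya", LanguageCodes.English);

            Console.WriteLine(result.DetectedSourceLanguage); // "hi"
            // Romanized Hindi is detected as "hi", but unlike the UI the API
            // does not transliterate it to Devanagari first, so the output
            // can come back essentially unchanged.
            Console.WriteLine(result.TranslatedText);
        }
    }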
I'm fairly new to Poedit, but I'm trying to make a translation of a WordPress theme. I think I'm slowly beginning to understand the whole i18n, l10n and po, pot and mo thing.
I've purchased a Poedit license and am now trying to create a new pot file. When I choose the folder that I'm trying to translate, Poedit automatically sets the language I'm translating from to English (I haven't chosen it). It's an old theme that I've modified and used, so some of the words in this screenshot are English, but the theme is actually in Danish.
So where do I change the language that Poedit thinks I'm translating from?
Since this question was answered, it has become possible to tell Poedit the source language using a custom header field in the PO header (since Poedit 1.8.10):
X-Source-Language: de
See: Specify the source language of the PO file
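In context, the field sits among the other headers at the top of the PO file (a sketch; the surrounding header values are placeholders):

    msgid ""
    msgstr ""
    "Project-Id-Version: my-theme 1.0\n"
    "Language: en_GB\n"
    "X-Source-Language: de\n"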
The gettext translation system assumes that the source text is always in English. There are good reasons for it, including the fact that it is the lingua franca of computing and that designing plural forms handling for any source languages would be insane. The absolutely best thing you can do is to develop your code in English.
Consequently, the PO file format (which isn't Poedit's, but a standard thing) has no way to encode the source language — there's no need, it is known to be English.
Some developers insist on using non-English source texts anyway, against their better judgement (this is relatively common in Germany for some reason). To accommodate this, Poedit autodetects the language since 1.8 using CLD2 (i.e. the same thing Chrome uses). There's no way to override this, because... well, see above.
In other words, in your case, Poedit sees the source as English because it is primarily English. If it were in Danish, Poedit would detect that. Case in point: your entire screenshot (save for maybe 1 word) is in English and not Danish as you say the file is.
There are no negative consequences to misdetected source language, except for suboptimal suggestions. Don't worry about it.
I'm pretty much after people's opinions/best practices and nuggets of experience here.
I need to produce a new website in ASP.NET C# which has the requirement of changing its language based on the user's profile.
I've done a couple of simple samples before, but I'm curious about a slightly lower level. I'm really after resources which I can read and review.
What design patterns are in place for things like translating grids of data into different cultures?
If I'm going to store currency info, is it standard practice to store the exchange rates also?
If I go down the route of a standard ASP.NET web application, can I use URL routing to help pick the culture to use? For instance www.mynewsite.com/en-GB/default.aspx.
Wisdom/Thoughts welcome.
Thanks for looking and thanks more for answering,
Mike
A couple of things that I've learned:
Absolutely and brutally minimize the number of images you have that contain text. Doing so will make your life a billion percent easier since you won't have to get a new set of images for every friggin' language.
Be very wary of css positioning that relies on things always remaining the same size. If those things contain text, they will not remain the same size, and you will then need to go back and fix your designs.
If you use character types in your SQL tables, make sure that any that might receive international input use the Unicode types (nchar, nvarchar, ntext). For that matter, I would just standardize on the Unicode versions.
If you're building SQL queries dynamically, make sure that you include the N prefix before any quoted text if there's any chance that text might be Unicode. If you end up putting garbage in a SQL table, check whether that prefix is missing. (See the first sketch after this list.)
Make sure that all your web pages definitively state that they are in a Unicode encoding such as UTF-8. See Joel's article, linked further down this page.
You're going to be using resource files a lot for this project. That's good - ASP.NET 2.0 has great support for such. You'll want to look into the App_LocalResources and App_GlobalResources folders as well as GetLocalResourceObject, GetGlobalResourceObject, and the concept of meta:resourceKey (see the second sketch after this list). Chapter 30 of Professional ASP.NET 2.0 has some great content on that. The 3.5 version of the book may well have good content there as well, but I don't own it.
Think about fonts. Many of the standard fonts you might want to use aren't Unicode-capable. I've always had luck with Arial Unicode MS, MS Gothic, and MS Mincho, but I'm not sure how cross-platform they are. Also, note that not all fonts cover the full Unicode character repertoire. Again, test, test, test.
Start thinking now about how you're going to get translations into this system. Go talk to your translation vendor about how they want data passed back and forth for translation. Bear in mind that, through your local resource files, you will likely repeat some commonly used strings throughout the system. Do you normalize those into global resource files, or do you have some sort of database layer where only one copy of each text is generated? In our recent project, we used resource files generated from a database table that contained all the translations and the original English version of the resource files.
Test. Generally speaking, I will test in German, Polish, Hebrew or Arabic, and an Asian language (Japanese, Chinese, Korean). German and Polish are wordy and nearly guaranteed to stretch text areas, Asian languages use an entirely different set of characters which tests your Unicode support, and Hebrew and Arabic are both right-to-left languages.
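On the N-prefix point above: without it, a quoted literal is treated as varchar and anything outside the current code page is silently mangled. A minimal C# sketch (the Comments table and its columns are made up for illustration; a typed parameter sidesteps the issue entirely):

    using System.Data;
    using System.Data.SqlClient;

    class CommentStore
    {
        // Hypothetical schema: Comments(Id int identity, Body nvarchar(max))
        public void Insert(SqlConnection conn, string body)
        {
            // Bad:  "INSERT INTO Comments (Body) VALUES ('" + body + "')"
            //       -- the literal is varchar, so e.g. Japanese becomes '??'.
            // Okay: "... VALUES (N'" + escaped + "')" -- the N keeps it Unicode.
            // Best: a parameter typed as NVarChar, which stays Unicode end to
            //       end and avoids SQL injection as a bonus.
            using (var cmd = new SqlCommand(
                "INSERT INTO Comments (Body) VALUES (@body)", conn))
            {
                cmd.Parameters.Add("@body", SqlDbType.NVarChar, -1).Value = body;
                cmd.ExecuteNonQuery();
            }
        }
    }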
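And on the resource-file point, the lookup side is roughly this (a sketch; "MyStrings", "WelcomeText", and "PageTitle" are hypothetical resource names):

    using System.Web.UI;

    public partial class _Default : Page
    {
        protected void Page_Load(object sender, System.EventArgs e)
        {
            // Global resources live in App_GlobalResources/MyStrings.resx
            // (plus MyStrings.da.resx, etc.); ASP.NET picks the file that
            // matches the current UI culture at runtime.
            string welcome = (string)GetGlobalResourceObject("MyStrings", "WelcomeText");
            Response.Write(welcome);

            // Page-specific strings live in App_LocalResources/Default.aspx.resx.
            Page.Title = (string)GetLocalResourceObject("PageTitle");

            // In markup, controls can bind to local resources declaratively:
            //   <asp:Label runat="server" meta:resourcekey="WelcomeLabel" />
        }
    }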
How should I sanitize URLs so people don't put 漢字 or other things in them?
EDIT: I'm using Java. The URL will be generated from a question the user asks on a form. It seems Stack Overflow just removes the offending characters, but it also turns an á into an a.
Is there a standard convention for doing this? Or does each developer just write their own version?
The process you're describing is called slugify. There's no fixed mechanism for doing it; every framework handles it in its own way.
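As an illustration, here is a minimal sketch of the usual approach, in C# (you mentioned Java, where java.text.Normalizer.normalize(s, Form.NFD) plays the role of Normalize below): decompose accented letters, drop the accents, and collapse everything else to hyphens, which matches the á-to-a and character-stripping behavior you saw on Stack Overflow.

    using System.Globalization;
    using System.Text;

    static class Slug
    {
        public static string Slugify(string input)
        {
            // "á" decomposes into "a" + a combining accent; we keep the "a"
            // and skip the accent (NonSpacingMark). Characters that don't
            // decompose to ASCII, like 漢字, simply get dropped.
            string decomposed = input.Normalize(NormalizationForm.FormD);
            var sb = new StringBuilder();
            bool lastWasHyphen = true; // suppress a leading hyphen
            foreach (char c in decomposed)
            {
                if (CharUnicodeInfo.GetUnicodeCategory(c) == UnicodeCategory.NonSpacingMark)
                    continue;
                if ((c >= 'a' && c <= 'z') || (c >= '0' && c <= '9'))
                {
                    sb.Append(c);
                    lastWasHyphen = false;
                }
                else if (c >= 'A' && c <= 'Z')
                {
                    sb.Append(char.ToLowerInvariant(c));
                    lastWasHyphen = false;
                }
                else if (!lastWasHyphen)
                {
                    sb.Append('-'); // collapse runs of everything else
                    lastWasHyphen = true;
                }
            }
            return sb.ToString().TrimEnd('-');
        }
    }

With this, Slug.Slugify("¿Qué será? 漢字") comes out as "que-sera".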
Yes, I would sanitize/remove them. Otherwise the URL will either be inconsistent or look ugly once encoded.
Since you're using Java, see the URLEncoder API docs.
Be careful! If you are removing elements such as odd chars, then two distinct inputs could yield the same stripped URL when they don't mean to.
The specification for URLs (RFC 1738, Dec. '94) poses a problem, in that it limits the use of allowed characters in URLs to only a limited subset of the US-ASCII character set.
This means anything else will get percent-encoded, and URLs should be readable. Standards tend to be English-biased (what's the word for that? Langist? Languagist?).
I'm not sure what the convention is in other countries, but if I saw tons of encoding in a URL sent to me, I would think it was stupid or suspicious...
Unless the link is displayed properly, encoded by the browser and decoded at the other end... but do you want to take that risk?
StackOverflow seems to just remove those characters from the URL altogether :)
StackOverflow can afford to remove the characters because it includes the question ID in the URL. The slug containing the question title is for convenience, and isn't actually used by the site, AFAIK. For example, you can remove the slug and the link will still work fine: the question ID is what matters and is a simple mechanism for making links unique, even if two different question titles generate the same slug. Actually, you can verify this by trying to go to stackoverflow.com/questions/2106942/… and it will just take you back to this page.
Thanks, Mike Spross.
Which language are you talking about?
In PHP, I think this is the easiest and would take care of everything:
http://us2.php.net/manual/en/function.urlencode.php
I'm writing an ASP.NET application that will need to be localized for several regions other than North America. What do I need to do to prepare for this globalization? What are your top one or two resources for learning how to write a world-ready application?
Learn about the System.Globalization namespace.
Also, a good book is .NET Internationalization: The Developer's Guide to Building Global Windows and Web Applications.
It would be good to refresh a bit on Unicode if you are targeting other cultures and languages.
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
This is a hard problem. I live in Canada, so multilingualism is a big issue. In all my years of doing software development, I've never seen a solution that I liked. I've seen a lot of solutions that worked and got the job done, but they've always felt like a big kludge. I would go with @harriyott and make sure that none of your strings are actually in code. A resource file works well for desktop applications; in ASP.NET, however, I'd recommend using the database. @John Christensen also has some good pointers.
Make sure you're compiling with Code Analysis turned on, and pay attention to the Globalization warnings that it gives you. Keep data in an invariant format (CultureInfo.InvariantCulture) until you display it to the user (then use CultureInfo.CurrentCulture).
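In code, that advice looks something like this (a minimal sketch; de-DE is just an example display culture):

    using System;
    using System.Globalization;

    class PriceDemo
    {
        static void Main()
        {
            decimal price = 1234.56m;

            // Store and ship the value in the invariant culture so it
            // round-trips regardless of the server's or user's locale.
            string stored = price.ToString(CultureInfo.InvariantCulture); // "1234.56"
            decimal parsed = decimal.Parse(stored, CultureInfo.InvariantCulture);

            // Format for the user's culture only at display time.
            var de = CultureInfo.GetCultureInfo("de-DE");
            Console.WriteLine(parsed.ToString("N2", de)); // "1.234,56"
        }
    }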
I would seriously consider reading the following CodeProject article:
Globalization and localization demystified in ASP.NET 2.0
It covers everything: cultures and locales, setting the thread's current culture, resource files, encodings, you name it!
And of course it's loaded with pretty pictures and examples :-). Good luck!
I would suggest:
Put all strings in either the database or resource files.
Allow extra space for translated text, as some languages (e.g. German) are wordier.