Rainbow delimiters for Brackets - adobe-brackets

Is there an extension for Brackets that provides Emacs' Rainbow Delimiters functionality? I've had a look within the extensions repository and online.

Related

Any publicly available word dictionary in a text file?

I am looking for a word dictionary for different languages (English, Spanish, ...). However, almost all of the dictionaries I can find are provided either through a program or on a website.
I want this word dictionary as a text file, and the file should be publicly available. Are there any publicly available word dictionaries in text files?
You can download Wiktionary for offline processing; instructions are here in the FAQ:
https://en.wiktionary.org/wiki/Help:FAQ#Downloading_Wiktionary
If you're a Linux user, you can use the word lists under one of the following directories (depending on your distribution):
/usr/share/dict/
/var/lib/dict/
The GNU aspell spell checker makes use of these dictionaries.
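These word lists are plain text, one word per line, so they are easy to process programmatically. A minimal sketch in Java, using a small in-memory sample in place of the real file (in practice you would read the lines from `/usr/share/dict/words`; the exact path varies by distribution):

```java
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

public class DictDemo {
    public static void main(String[] args) {
        // Stand-in for Files.readAllLines(Path.of("/usr/share/dict/words")).
        List<String> words = List.of("apple", "Banana", "cherry", "date", "elderberry");

        // Example task: keep only five-letter words, lower-cased.
        List<String> fiveLetter = words.stream()
                .map(w -> w.toLowerCase(Locale.ROOT))
                .filter(w -> w.length() == 5)
                .collect(Collectors.toList());

        System.out.println(fiveLetter);  // [apple]
    }
}
```

The same one-word-per-line format makes these files directly usable with tools like `grep` as well.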

Can you use wildcards in a token with ParseKit?

I'm trying to add a symbol token using ParseKit below:
[t.symbolState add:@"<p style=\"margin-left: 20px;\">"];
I'm wondering if ParseKit allows for wildcards when adding a symbol, such as:
[t.symbolState add:@"<p style=\"margin-left: ##px;\">"];
I want to be able to then extract the wildcard from the token during the parsing procedure.
Is such a thing possible with ParseKit?
Developer of ParseKit here.
I think using ParseKit in this way is not a good idea.
ParseKit (and its successor PEGKit) excel at tokenizing input and then parsing at the token level.
There are several natural tokens in the example input you've provided, but what you are trying to do here is ignore those natural tokens, combine them into a blob of input, and then do fancy sub-token matching using patterns.
There is a popular, powerful tool for fancy sub-token matching using patterns: Regular Expressions. They will be a much better solution for that kind of thing than PEGKit.
However, I still don't think Regular Expressions are the tool you want to use here (or, at least not the only tool).
It looks like you want to parse XML input. Don't use Regex or PEGKit for that. Use an XML parser. Always use an XML parser for parsing XML input.
You may choose to use another XML API layered on top of the XML parser (SAX, StAX, DOM, XSLT, XQuery, etc.), but underneath it all you should be parsing with an XML parser (and, of course, all of the tools I listed do).
See here for more info.
Then, once you have the style attribute string value you are looking for, use Regex to do fancy pattern matching.
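That two-step approach (XML parser first, then a regex on the attribute value) can be sketched in Java with the standard DOM parser; the input string and the `margin-left` pattern here are just illustrations of the example markup above:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StyleExtract {
    public static void main(String[] args) throws Exception {
        String xml = "<p style=\"margin-left: 20px;\">hello</p>";

        // Step 1: parse the markup with a real XML parser.
        Element p = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)))
                .getDocumentElement();
        String style = p.getAttribute("style");

        // Step 2: only then use a regex, on the attribute value alone.
        Matcher m = Pattern.compile("margin-left:\\s*(\\d+)px").matcher(style);
        if (m.find()) {
            System.out.println(m.group(1));  // 20
        }
    }
}
```

The parser handles quoting, attribute order, and whitespace; the regex only ever sees the small string it is actually suited to match.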

Alfresco localization encoding

Trying to create custom types, aspects, and properties for Alfresco, I followed the Alfresco Developer Series guide. When I reached the localization section, I found that Alfresco does not handle UTF-8 encoding in the .properties files you create: Greek characters are not displayed correctly in Share.
Checking other built-in .properties files (/opt/alfresco-4.0.e/tomcat/webapps/alfresco/WEB-INF/classes/alfresco/messages) I noticed that in Japanese, for example, the characters are written in this notation: \u3059\u3079\u3066\u306e...
So, the question is: do I have to convert the Greek words into the above-mentioned notation for Share to display them correctly, or is there another, more elegant, way to do it?
The \u#### form is the Java Unicode escape sequence, and is used to reference Unicode characters without having to worry about the encoding of the file storing them.
This question has some information on how to create and decode them.
Another way, which is what Alfresco developers tend to use, is the native2ascii tool that ships with the JDK. With it, you can initially write your strings in a UTF-8 (for example) file, then use the tool to turn them into their escaped form.
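The conversion native2ascii performs is simple enough to sketch in a few lines of Java; the `escape` helper below is a hypothetical illustration, not the tool's actual implementation:

```java
public class EscapeDemo {
    // Escape every non-ASCII character as \uXXXX, the way native2ascii does;
    // ASCII characters pass through unchanged.
    static String escape(String s) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (c < 128) {
                out.append(c);
            } else {
                out.append(String.format("\\u%04x", (int) c));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // A Greek example word ("Name"):
        System.out.println(escape("Όνομα"));
    }
}
```

The escaped output is pure ASCII, so it survives any file encoding, which is exactly why the built-in Alfresco message bundles use this notation.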

Extract localizable strings from source code, aspx, xaml to resource files

As part of internationalizing our application, which is based on ASP.NET, C#, Silverlight, and XBAP, I'm evaluating approaches to start with. I have to choose between GNU gettext (PO files) and Microsoft's resource-based (resx) approach. So at this juncture, I'm trying to understand the best way to automatically extract localizable strings from .cs, .aspx, .ascx, and .xaml (Silverlight) files into resource files (.resx), if I go the MS way.
I have the following options in mind:
Resource Refactoring Tool, but it extracts all strings (whether they need translation or not), such as page headers, and we cannot mark or exclude particular strings; otherwise we have to select each string manually and extract it (right-click and choose Extract).
ReSharper's localization assistance, but here I don't see automatic extraction; I would have to extract string by string manually.
I know there has to be some manual intervention, but any advice would help in choosing the right direction between gettext (GNU gettext for C#, or FairlyLocal) and the MS localization approach.
Both approaches have pros and cons; let's discuss.
FairlyLocal
(GNU gettext). First, some initial setup is required:
download library & tools and dump at some place relative to your project
modify the base page object of your site (manual intervention)
add a post-build step to your web project that will run xgettext and update your .po files
Second, string extraction is handled by FairlyLocal itself.
Third, translation of strings can be done in-house or outsourced, since PO files are widely known by linguists.
Fourth, rendering of some UTF-8 characters (if any) depends on webfonts: EOT (Trident), SVG (WebKit, Gecko, Presto).
Fifth, the locale needs to be maintained (like pa-IN, languageCode-countryCode).
Sixth, several converters are available for PO files.
Seventh, the default logic falls back on the default-locale (en-US) resources for the value.
One issue: the .po files that the build script generates won't be UTF-8 by default. You'll need to open them in Poedit (or similar) and explicitly change the encoding the first time you edit them if you want your translated text to show special characters correctly.
MS localization
First, extraction of strings is pretty easy using the Resource Refactoring Tool.
Second, the resgen.exe command-line tool can be used to make .resx files linguist-friendly:
resgen /compile examplestrings.xx.resx,examplestrings.xx.txt
Third, localization within .NET (not specific to ASP.NET proper or ASP.NET MVC) implements a standard fallback mechanism.
Fourth, there is no dependency on the GNU gettext utilities.
Fifth, you can localize everything from strings to dates and currency using CurrentUICulture and CurrentCulture.
Sixth, webfonts are recommended here too.

Asp.net Regular expression to support multiple languages

I have an ASP.NET RegularExpressionValidator:
ValidationExpression="^[a-zA-Z\?*.\?!\##\%\&\~`\$\^_\,()\//]{1,30}$" />
It accepts alphanumeric characters and the listed symbols, but rejects script tags. Right now it doesn't support any language other than English.
I want to modify this regular expression to also support Arabic characters.
Please help me modify this expression.
Thanks in advance.
You essentially need to change your regex from a whitelist to a blacklist, i.e. check for the characters that you don't want to allow. You can achieve this by putting a ^ just inside the opening bracket of the character class. So
ValidationExpression="[^\?*.\?!\##\%\&\~`\$\^_\,()\//]"
will pass any string that does not contain the characters in the expression.
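The blacklist idea can be sketched in Java's regex engine; the forbidden set here is a simplified stand-in for the validator's full character list. A string is rejected as soon as any forbidden character is found in it:

```java
import java.util.regex.Pattern;

public class BlacklistDemo {
    public static void main(String[] args) {
        // A blacklist: match any single character we want to forbid
        // (simplified set for the demo).
        Pattern forbidden = Pattern.compile("[<>&]");

        System.out.println(forbidden.matcher("hello world").find()); // false -> allowed
        System.out.println(forbidden.matcher("<script>").find());    // true  -> rejected
    }
}
```

Note the inverted logic: with a blacklist, a match means the input is invalid, whereas with the original whitelist a match means the input is valid.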
You can add Arabic characters to regular expressions; they match themselves. One complication is that Arabic digits, punctuation, and ornaments are scattered across several Unicode blocks, so you may have to add the specific symbols you are looking for:
ValidationExpression="^[a-zA-Z\?*.\?!\##\%\&\~`\$\^_\,()\//\u0621-\u063F\u066E-\u06D3]{1,30}$"
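A quick way to check such ranges is to try them against sample input. The sketch below uses Java's regex engine with a simplified character class (just the ASCII letters plus the two Arabic ranges from the answer above); note those sample ranges are illustrative and omit some Arabic letters (e.g. U+0641–U+064A, feh through yeh), so a real validator may need wider ranges:

```java
import java.util.regex.Pattern;

public class ArabicRegexDemo {
    public static void main(String[] args) {
        // Simplified version of the validator's class, extended with
        // the two Arabic ranges \u0621-\u063F and \u066E-\u06D3.
        Pattern p = Pattern.compile("[a-zA-Z\u0621-\u063F\u066E-\u06D3]{1,30}");

        System.out.println(p.matcher("hello").matches()); // true
        System.out.println(p.matcher("حب").matches());    // true  (both letters in U+0621-U+063F)
        System.out.println(p.matcher("<p>").matches());   // false (punctuation not in the class)
    }
}
```

`matches()` requires the whole input to fit the pattern, which mirrors how RegularExpressionValidator implicitly anchors its expression.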
