I use a keyboard app which suggests correct spellings for most Bangla words. Is there any way to extract its dictionary file as some sort of text file, so that I can use it on Windows for Bangla spell checking?
Google provides instructions for how to subset a font, and even specifies:
This feature also works for international fonts, allowing you to specify UTF-8 characters. For example, ¡Hola! is represented as: https://fonts.googleapis.com/css?family=Inconsolata&text=%c2%a1Hola!.
Each symbol in the Google Material Icons/Symbols fonts has a codepoint, as specified in this SO question.
Is there a way to pass the codepoint values as the value of the &text URL parameter, so that I can download a subset of the Material Symbols font with all the icons that I want to use in my app? For example, the search icon has a codepoint of e8b6. I tried several incantations, including &text=e8b6 and &text=%e8%b6, but they didn't download the search icon. Then I found a converter from codepoint to UTF-8, which converted e8b6 to some non-printable characters ^H^F. I then passed &text=^H^F and it downloaded the search icon, but it didn't subset the font, i.e. it downloaded the whole font file rather than a subset of it.
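For reference, this is roughly the conversion I was attempting, sketched in Python; the e8b6 value is the documented codepoint of the search icon, but the css2 family name and the idea that &text wants the percent-encoded UTF-8 bytes of that character are my assumptions:

from urllib.parse import quote

codepoint = 0xE8B6                        # documented codepoint of the "search" icon
char = chr(codepoint)                     # the corresponding private-use-area character
text_param = quote(char.encode("utf-8"))  # percent-encoded UTF-8 bytes, e.g. "%EE%A2%B6"
url = ("https://fonts.googleapis.com/css2"
       "?family=Material+Symbols+Outlined"
       "&text=" + text_param)
print(url)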
I am working with Mitsubishi PLC files that were originally commented in Japanese but were then opened on English-only computers, which converted the Japanese characters into incomprehensible Latin keyboard symbol combinations such as ‰^“]€”õONŠm”F(‘€ì”Õ1).
Being able to understand these comments would greatly enhance my ability to analyze and modify these files, as I am required to do for my work. If I could convert them back to Japanese characters (I do have the Japanese language pack installed on my Windows laptop), I could then run them through Google Translate, which I know is not perfect, but is a lot better than ##$$##&^.
Does anyone have any ideas how this could be done? I figure that Windows must have interpreted the original characters somehow, and there may be a way to interpret them back to the original symbols.
I am thinking of trying to do some kind of character translation using a script in Python or PowerShell or VBA (maybe I can create a map in Excel...).
Any ideas?
I can export these comments into CSV files, so they are easy to get to and manipulate if I can figure out how...
This is an ongoing problem for me so I am willing to put some time into a solution.
I tried re-opening the oldest version of the files on my computer with the Japanese language pack installed, with no luck.
You can run your text through an ASCII-to-hex converter and then back through a hex-to-ASCII converter in order to change the encoding without your system settings getting in the way.
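If you can script it, the same re-interpretation can be done directly. The sketch below (Python) assumes the comments were originally Shift-JIS (CP932) text that got decoded as Windows-1252 somewhere along the way; bytes that Windows-1252 leaves undefined were already dropped, so the recovery may be partial:

# Hypothetical recovery sketch: assumes the comments are Shift-JIS (CP932) text
# that was mis-decoded as Windows-1252 at some point.
garbled = "‰^“]€”õONŠm”F(‘€ì”Õ1)"                 # sample from the question
raw = garbled.encode("cp1252", errors="replace")   # back to the original byte values
recovered = raw.decode("cp932", errors="replace")  # re-interpret those bytes as Shift-JIS
print(recovered)

The errors="replace" arguments keep the script running when a character or byte has no mapping; those positions show up as replacement characters rather than Japanese.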
I need to study a corpus of various .txt files in R with the tm package. My files have to be stored in a single folder called "english" since their text is in English, but the file names are sometimes in Chinese, Korean, Russian or Arabic.
I have tried using the code below to create my corpus.
setwd("C:/Users/etomilina/Documents/M2/mails/english")
data <- Corpus(DirSource(encoding="UTF-8"))
However, I get the following error messages: "non-existent or non-readable files: ./??.txt, ./?????.txt" (and so on, with question marks instead of the file names, which should be 陳國.txt or 정극지생명과학연구부.txt, for example).
Currently Sys.getlocale() returns "LC_COLLATE=French_France.1252;LC_CTYPE=French_France.1252;LC_MONETARY=French_France.1252;LC_NUMERIC=C;LC_TIME=French_France.1252", but even after calling Sys.setlocale(locale = "English") the error persists.
If I only had Chinese filenames I would simply switch my system locale to Chinese, but here I have some in Korean, Arabic and Russian, as well as some English ones.
Is there any way to handle all these languages at once?
I am using RStudio 1.4.1106 and my R version is 4.2.1.
I am looking for online encoders or decoders which can convert the text below, arriving over SCTP, to Japanese characters.
?ISO-2022-JP?B?GyRCJUYlOSVIJWEhPCVrLWItIxsoQg==?=
Let me know if any Java library is available to test with as well.
I have tried a few online decoders, but they do not seem to handle the ISO-2022-JP format.
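For context, my understanding so far is that the token is an RFC 2047 "encoded-word" with its leading "=" missing, so the middle part should be Base64-encoded ISO-2022-JP bytes. A minimal Python sketch of that decoding (the Java library question still stands):

import base64

# Assumption: this is an RFC 2047 encoded-word whose leading "=" was lost,
# i.e. =?ISO-2022-JP?B?...?=  The part between "?B?" and "?=" is Base64-encoded
# ISO-2022-JP bytes.
token = "?ISO-2022-JP?B?GyRCJUYlOSVIJWEhPCVrLWItIxsoQg==?="
payload = token.split("?")[3]                      # the Base64 section
raw = base64.b64decode(payload)                    # ISO-2022-JP byte sequence
print(raw.decode("iso2022_jp", errors="replace"))  # replace anything this codec cannot map

(On the Java side I believe JavaMail's MimeUtility.decodeText() handles RFC 2047 encoded-words, but I have not verified it against this exact string.)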
I am currently scraping some text data from several PDFs using the readPDF() function in the tm package. This all works very well, and in most cases the encoding seems to be "latin1" - in some, however, it is not. Is there a good way in R to check character encodings? I found the functions is.utf8() and is.locale() in the tau package, but that obviously only gets me so far.
Thanks.
The PDF specification defines the following encodings for simple fonts (each of which can include a maximum of 256 character shapes) for Latin text, all of which should be predefined in any conforming reader:
/StandardEncoding
(for Type 1 Latin text fonts, but not for TrueType fonts)
/MacRomanEncoding
(the Mac OS standard encoding, for both TrueType and Type1 fonts)
/PDFDocEncoding
(only used for text strings outside of the document's content streams; normally not used to show text from fonts)
/WinAnsiEncoding
(Windows code page 1252 encoding, for both TrueType and Type1 fonts)
/MacExpertEncoding
(the name is misleading -- the encoding is not platform-specific; however, only a few fonts have a character set appropriate for this encoding)
Then there are two specific encodings for symbol fonts:
Symbol Encoding
ZapfDingbats Encoding
Also, fonts can have built-in encodings, which may deviate from a standard encoding in any way their creator wanted (this is, for example, also used for /Differences encoding when embedded standard fonts are subsetted).
So in order to correctly interpret a PDF file, you'll have to look up each of the font encodings of the fonts used, and you must also take into account any /Encoding that uses a /Differences array.
However, the overall task is still quite simple for simple fonts. The PDF viewer program just needs to map, 1:1, "each one of a sequence of bytes I see that's meant to represent a text string" to "exactly one glyph for me to draw, which I can look up in the encoding table".
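If it helps to picture that lookup, here is a toy sketch in Python; the three-entry table is an invented fragment in the spirit of WinAnsiEncoding, whereas a real viewer reads the complete 256-entry table from the font or the PDF:

# Toy illustration of the 1:1 byte-to-glyph lookup for a simple font.
encoding_table = {0x41: "A", 0x42: "B", 0xE9: "eacute"}   # invented fragment only

def glyphs_for(raw: bytes):
    # each byte selects exactly one glyph name
    return [encoding_table.get(b, ".notdef") for b in raw]

print(glyphs_for(b"\x41\x42\xe9"))   # ['A', 'B', 'eacute']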
For composite, CID-keyed fonts (which may contain many thousands of character shapes), the lookup/mapping for the viewer program from "this is the sequence of bytes I see that I'm supposed to draw as text" to "this is the sequence of glyph shapes to draw" is no longer 1:1. Here, a sequence of one or more bytes needs to be decoded to select each glyph from the CIDFont.
And to help with this CIDFont decoding, there need to be CMap structures around. CMaps define mappings from character codes (including Unicode-based encodings) to the character collections used by CID fonts. The PDF specification defines at least five dozen CMaps -- and their standard names -- for Chinese, Japanese and Korean language fonts. These predefined CMaps need not be embedded in the PDF (but a conforming PDF reader needs to know how to handle them correctly). But there are (of course) also custom CMaps, which may have been generated 'on the fly' when the PDF-creating application wrote out the PDF. In that case the CMap needs to be embedded in the PDF file.
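Again as a toy sketch only (Python): the fixed 2-byte code width, the codespace range and the CID offset below are all invented, whereas real values come from the predefined or embedded CMap, but it shows how the byte-to-glyph mapping stops being 1:1:

# Toy sketch of the multi-byte lookup a viewer does for a composite, CID-keyed font.
def decode_with_cmap(raw: bytes):
    cids = []
    for i in range(0, len(raw), 2):                # this toy CMap uses 2-byte codes only
        code = int.from_bytes(raw[i:i + 2], "big")
        if 0x2121 <= code <= 0x7E7E:               # hypothetical codespace range
            cids.append(code - 0x2121 + 633)       # hypothetical mapping to CIDs
        else:
            cids.append(0)                         # CID 0 = .notdef
    return cids

print(decode_with_cmap(b"\x25\x46\x25\x39"))       # two 2-byte codes select two CIDs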
All the details about these complexities are laid down in the official PDF-1.7 specification.
I don't know much about R. But I have now poked a bit at CRAN, to see what the mentioned tm and tau packages are.
So tm is for text mining, and for PDF reading it requires and relies on the pdftotext utility from Poppler. At first I had the (obviously wrong) impression that your mentioned readPDF() function was doing some low-level, library-based access to PDF objects directly in the PDF file... How wrong I was! It turns out it 'only' looks at the text output of the pdftotext command-line tool.
Now this explains why you'll probably not succeed in reading any of the PDFs which use font encodings more complex than the 'simple' latin1.
I'm afraid the reason for your problem is that Poppler and pdftotext are currently simply not yet able to handle these.
Maybe you're better off filing a feature request with the tm maintainers :-) asking them to try to add support to their tm package for a more capable third-party PDF text extraction tool, such as PDFlib.com's TET (English version), which for sure is the best text extraction utility on the planet (better than Adobe's own tools, BTW).