How to check whether an English word is meaningful in Julia?

In Julia, how can I check whether an English word is meaningful? Suppose I want to know whether "Hello" is a real word. In Python, one can use the enchant or nltk packages (examples: [1], [2]). Is it possible to do this in Julia as well?
What I need is a function like this:
is_english("Hello")
>>>true
is_english("Hlo")
>>>false
# Because it has no meaning! There is no such word in English!
is_english("explicit")
>>>true
is_english("eeplicit")
>>>false
Here is what I've tried so far:
I have a dataset that contains frequent 5-character English words (link to Google Drive). So I decided to add it to my question for clarity. Although this dataset is not adequate (it contains only frequent 5-character words, not all meaningful English words of any length), it is suitable for showing what I want:
using CSV
using DataFrames

df = CSV.read("frequent_5_char_words.csv", DataFrame, skipto=2)
df = [lowercase(item) for item in df[:, "0"]]

function is_english(word::String)::Bool
    return lowercase(word) in df
end
Then when I try these:
julia>is_english("Helo")
false
julia>is_english("Hello")
true
But I don't have a comprehensive dataset, so this isn't enough. I'm curious whether there are any packages in Julia like the ones I mentioned above.

(not enough rep to post a comment!)
You can still use NLTK in Julia via PyCall. Or, since it seems you don't need an NLP tool but just a dictionary, you can use Wiktionary to do lookups or to build the dataset.
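A minimal sketch of the PyCall route, assuming the Python environment PyCall uses has nltk installed and the "words" corpus downloaded (via nltk.download("words")):
using PyCall
# Load NLTK's word list once and keep it in a Set for fast membership checks
const nltk_corpus = pyimport("nltk.corpus")
const ENGLISH_WORDS = Set(lowercase(w) for w in nltk_corpus.words.words())
is_english(word::AbstractString) = lowercase(word) in ENGLISH_WORDS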

There is a relatively new package named LanguageDetect.jl. It does not return true/false, but a list of language probabilities. You could define something like:
using LanguageDetect: detect

function is_english(text, threshold=0.8)
    langs = detect(text)
    for lang in langs
        if lang.language == "en"
            return lang.probability >= threshold
        end
    end
    return false
end
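Note that statistical language detection is more reliable on longer text than on a single word, so results for very short inputs may vary:
is_english("Hello, how are you today?")  # expected: true
is_english("Hlo")                        # expected: false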

Related

Rename a column with R

I'm trying to rename a specific column in my R script using the colnames function, but with no success so far.
I'm fairly new to programming, so it may be something simple to solve.
Basically, I'm trying to rename a column called Reviewer Overall Notes and name it Nota Final in a data frame called notas with the codes:
colnames(notas$`Reviewer Overall Notes`) <- `Nota Final`
and it returns to me:
> colnames(notas$`Reviewer Overall Notes`) <- `Nota Final`
Error: object 'Nota Final' not found
I also found in [this post][1] a code that goes:
colnames(notas) [13] <- `Nota Final`
But it also return the same message.
What am I doing wrong?
PS: Sorry for any misspelling, English is not my primary language.
You probably want
colnames(notas)[colnames(notas) == "Reviewer Overall Notes"] <- "Nota Final"
(@Whatif's answer shows how you can do this with the numeric index, but it's probably better practice to do it this way; working with strings rather than column indices makes your code both easier to read [you can see what you're renaming] and more robust [in case the order of columns changes in the future].)
Alternatively,
notas <- notas %>% dplyr::rename(`Nota Final` = `Reviewer Overall Notes`)
Here you do use back-ticks, because tidyverse (of which dplyr is a part) prefers its arguments to be passed as symbols rather than strings.
Why use backticks? Use normal quotation marks:
colnames(notas)[13] <- 'Nota Final'
This seems to matter:
df <- data.frame(a = 1:4)
colnames(df)[1] <- `b`
Error: object 'b' not found
You should not use single or double quotes in naming:
I have learned that we should avoid spaces in names. A name with spaces does work, but it is called a non-syntactic name, and according to Hadley Wickham's description in the Advanced R book, quoting on the left-hand side of an assignment is allowed only for historical reasons:
"You can also create non-syntactic bindings using single or double quotes (e.g. "_abc" <- 1) instead of backticks, but you shouldn’t, because you’ll have to use a different syntax to retrieve the values. The ability to use strings on the left hand side of the assignment arrow is an historical artefact, used before R supported backticks."
To get an overview of what syntactic names are, use ?make.names:
make.names("Nota Final")
[1] "Nota.Final"

I am using R code to count occurrences of a specific word in a string. How can I update it to also count when the word's synonyms are used?

I'm using the following code to find out whether the word "assist" is used in a string variable.
string <- c("assist")
assist <- (1:nrow(df) %in% c(sapply(string, grep, df$textvariable, fixed = TRUE))) + 0
sum(assist)
If I also wanted to check whether synonyms such as "help" and "support" are used in the string, how could I update the code? If any of these synonyms is used, I want to code it as 1; if none of them is used, I want to code it as 0. It doesn't matter whether all of the words appear in the string or how many times they are used.
I tried changing it to
string<- c("assist", "help", "support")
But it looks like it then searches for strings in which all of these words are used?
I'd appreciate your help!
Thank you
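A minimal sketch of one common approach: combine the synonyms into a single regular expression with alternation, so grepl() flags a string if any of the words occurs (df$textvariable is taken from the question; drop the \b word boundaries if partial matches such as "assisted" should count too):
words <- c("assist", "help", "support")
# one pattern that matches any of the synonyms as whole words
pattern <- paste0("\\b(", paste(words, collapse = "|"), ")\\b")
any_synonym <- as.integer(grepl(pattern, df$textvariable, ignore.case = TRUE))
sum(any_synonym)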

Autocoding using RQDA

I am trying to use RQDA for qualitative text analysis. I want to automatically code text passages that contain the same characters.
Let's say I have the category dog, and I marked "dog" in the first sentence and "dogfood" in the fourth. I want RQDA to also mark "dog" in the second sentence and "dogfood" in the fifth.
In MAXQDA, for example, this is done automatically if I enable the feature. Is there a function to do this?
If I understand correctly, you want to do automatic coding using RQDA. The function for this would be codingBySearch:
codingBySearch(pattern, fid = getFileIds(), cid, seperator,
               concatenate = FALSE)
But this function only handles a single pattern at a time. If you want to apply a list of patterns, a loop will sort it out:
X <- c("pattern1", "pattern2", "pattern3", "pattern4", "pattern5", "pattern6")
for (i in X) {
  codingBySearch(i, fid = getFileIds(), cid = cid_number,
                 seperator = "[.!?]", ignore.case = TRUE)
}
Here cid is the number of the code you created in the GUI. You can also adapt the separators as you see fit.

What does str in ls.str( ) stand for?

In the R language, what is the meaning of "str" in "ls.str()"? I understand that ls.str() gives you a detailed description of the objects in active memory, but I am still confused about what str stands for.
From ?ls.str:
‘ls.str’ and ‘lsf.str’ are variations of ‘ls’ applying ‘str()’
From ?str:
Compactly display the internal *str*ucture of an R object ...
so the answer is "list structures".
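A small illustration (output abridged, shown as comments):
x <- 1:5
f <- function(a) a + 1
ls.str()
# f : function (a)
# x :  int [1:5] 1 2 3 4 5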
Taken from the full reference manual:
str: Compactly Display the Structure of an Arbitrary R Object
Description: Compactly display the internal structure of an R object, a diagnostic function and an alternative to summary (and to some extent, dput). Ideally, only one line for each 'basic' structure is displayed. It is especially well suited to compactly display the (abbreviated) contents of (possibly nested) lists. The idea is to give reasonable output for any R object. It calls args for (non-primitive) function objects. strOptions() is a convenience function for setting options(str = .), see the examples.
The abbreviation "str" is taken from the first 3 letters of the word structure.
The same is highlighted in the official R documentation:
https://www.rdocumentation.org/packages/utils/versions/3.5.1/topics/str
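A quick illustration of the compact display (output shown as comments):
str(list(a = 1:3, b = "x"))
# List of 2
#  $ a: int [1:3] 1 2 3
#  $ b: chr "x"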

Is there a way to check the spelling of words in a character vector?

The text to be checked is in Greek, but I would like to know whether it can be done for English words too. My initial idea is described here, and I have already found a way to do it using VBA. But I wonder whether there is a way to do it using R. If there isn't one in R, can you think of something better than Excel VBA?
Alternatively, OpenOffice ships with a dictionary whose entries are stored in a text file. You can read that file and remove the word definitions to create your word list.
This was tested on v3.0; the file location may have shifted, and the filename will change depending on which dictionary you want.
library(stringr)
# Read the raw thesaurus data file that ships with OpenOffice
dict <- readLines("C:/Program Files/OpenOffice.org 3/share/uno_packages/cache/uno_packages/174.tmp_/dict-en.oxt/th_en_US_v2.dat")
# Entry lines start with the word itself; definition lines start with "("
is_word <- str_detect(dict, "^[^(]")
# Each entry line is "word|<sense count>"; keep only the word part
words <- str_split_fixed(dict[is_word], "\\|", 2)
words <- words[, 1]
This list contains some multi-word phrases. You may prefer to split on the first space, and take unique values. You probably also want to write words to file, to save repeating yourself.
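For example, a minimal sketch of that cleanup:
# keep only the first word of each phrase, then de-duplicate
words <- unique(str_split_fixed(words, " ", 2)[, 1])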
Once this is done, checking a word is as easy as
c("persnickety", "sqwrzib") %in% words # TRUE FALSE
There is an open-source GNU spell checker called Aspell with support for various languages. It is a command-line program which I basically use for scanning bunches of text files at once (the output is then just printed to the console).
There is also a C API and, perhaps more interesting for you, a pipe mode which accepts streams of text and writes to standard output.
Hope this helps.
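A minimal sketch of driving Aspell's pipe mode from R, assuming the aspell binary is installed and on the PATH ("aspell list" reads text on stdin and prints only the words it does not recognize):
to_check <- c("persnickety", "sqwrzib")
misspelled <- system2("aspell", args = c("list", "--lang=en"),
                      input = to_check, stdout = TRUE)
!to_check %in% misspelled  # TRUE FALSE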
