textcat misclassification: English reported as Scottish - r

I tried the textcat package and function, which generally gave satisfactory results, but there are certain anomalies that I hope will be addressed.
For example, the string "a good thing", regardless of the casing of the letters, returns "scots" rather than "english".
The same misclassification happens with these calls:
textcat("The human species learned long, long ago that sticking together is a good thing.")
[1] "scots"
textcat("A good thing.")
[1] "scots"
I tried other packages as well, such as cld2, cld3 and franc, and possibly a few others.
detect_language("long ago that sticking together is a good thing")
[1] "en"
The cld2 package provided a correct classification, i.e. "en", but I have not tried it more thoroughly with my training and test data sets.
Package cld3's return value is the same as cld2's.
library("cld3", lib.loc="~/R/win-library/3.3")
detect_language("long ago that sticking together is a good thing")
[1] "en"
The franc package returned "sco", which is consistent with textcat.
franc("The human species learned long, long ago that sticking together is a good thing.")
[1] "sco"

Got a solution from the package developer: dropping Scots is one option. Scots here refers to Lowland Scots, which is still a Germanic language closely related to English. Well, I suspected as much (",)...
R> textcat::textcat("The human species learned long, long ago that sticking together is a good thing.", textcat::TC_char_profiles[-43])
[1] "english"
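If you'd rather not hard-code the index 43 (which could change between package versions), you can drop the profile by name instead. A minimal sketch, assuming the built-in profile set is named by language, which the [-43] indexing above suggests:
library(textcat)
# drop the "scots" profile by name rather than by position
profiles <- TC_char_profiles[names(TC_char_profiles) != "scots"]
textcat("A good thing.", profiles)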

How do I change numeric data that is reading in as a character in R?

I am trying to read in a CSV file (exported from Survey Monkey).
I have tried survey <- read.csv("Survey Item Evaluation2.csv", header=TRUE, stringsAsFactors = FALSE)
I ran skim(survey), which shows it is reading in as characters.
str(survey) output: 'data.frame': 623 obs. of 68 variables. G1 (which is a survey item) reads in as chr "1" "3" "4" "1" ...
How do I change those survey item variables to numeric?
The correct answer to your question is given in the first two comments by two well-respected people with a combined reputation of over 600k. I'll post their very similar answer here:
as.numeric(survey$G1)
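The same one-liner extends to a whole block of items at once; a small sketch, assuming (as your str() output hints) that the survey items share the G prefix:
item_cols <- grep("^G", names(survey), value = TRUE)  # hypothetical: items named G1, G2, ...
survey[item_cols] <- lapply(survey[item_cols], as.numeric)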
However, that is not very good advice in my opinion. Your question should really have been:
"Why am I getting character data when I'm sure this variable should be numeric?"
To which the answer would be: "Either you're not reading the data correctly (does the data actually start at row 3?), or there is non-numeric (garbage) data mixed in with the numeric data (for example, NA entered as . or some other character), or certain respondents entered a , instead of a . as the decimal point (as nationals of Indonesia and some European countries do), or they entered a thin space as a thousands separator, or there is some other unknown cause that needs further investigation. Maybe a certain group of people entered text instead of numbers for their age (fifty instead of 50), or they put a . at the end of the value, for example 62.5. instead of 62.5 (older folks were taught to always end a sentence with a period!). In these last two cases, a particular group (the elderly) will have missing data, and your data is then missing not at random (MNAR), a big bias in your analysis."
I see this all too often, and I worry that new users of R are making terrible mistakes due to being given poor advice, or because they didn't learn the basics. Importing data is the first step of analysis. It can be difficult because data files come in all shapes and sizes; there is no global standard. Data is also often entered without any quality-control mechanisms. I'm glad that you added the stringsAsFactors = FALSE argument in your command to import the data; someone gave you good advice there. But that person forgot to advise you not to trust your data, especially if it was given to you by someone else to analyse. Always check every variable carefully before the analysis. This can take time, but it can be worth the investment.
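As a concrete version of "check every variable", here is a small sketch that lists the entries of one column (G1 from the question) that would silently become NA under blind conversion:
# values that are present but would not survive as.numeric()
bad <- survey$G1[!is.na(survey$G1) & is.na(suppressWarnings(as.numeric(survey$G1)))]
unique(bad)  # inspect the offenders before deciding how to clean them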
Hope that helps at least someone out there.

fread issue with integer64

I am trying to read some larger data files into R with fread using integer64 = "numeric", but for some reason the conversion no longer works (it used to work in the past). Some of my outcome data comes back as integer, some as integer64 and some as numeric, which is probably not intended. The problem seems to be known: https://github.com/Rdatatable/data.table/issues/2607
My question is: What is the best current workaround to deal with this? If someone has an idea how to post sample data to illustrate the issue more clearly, please feel free to contribute to this post.
I guess this affects a lot of people who are working with numbers of absolute value >= 2^31. Also see the documentation of fread in this regard: '"integer64" (default) reads columns detected as containing integers larger than 2^31 as type bit64::integer64. Alternatively, "double"|"numeric" reads as base::read.csv does; i.e., possibly with loss of precision and if so silently. Or, "character".'
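One workaround, offered as a sketch rather than a definitive fix, is to accept whatever column types fread returns and coerce any stray integer64 columns afterwards:
library(data.table)
library(bit64)
DT <- fread("big.csv", integer64 = "numeric")  # "big.csv" is a placeholder file name
# coerce any columns that still came back as integer64
i64_cols <- names(DT)[vapply(DT, bit64::is.integer64, logical(1))]
if (length(i64_cols)) DT[, (i64_cols) := lapply(.SD, as.numeric), .SDcols = i64_cols]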

Text mining on sentences with the tm.package in R

I'm working with the tm package in R.
I have several .txt files in a folder and a list of 30 sentences.
Now I have to check whether my files contain these sentences.
How can I write code that matches whole sentences rather than single words?
Below is a potential approach. Also you may want to look into the readtext package for quickly reading in an entire directory of files as text in one function call.
library(tidytext)
library(dplyr)    # for tibble() and %>%
library(stringr)
sample_text <- "Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal. Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this. But, in a larger sense, we can not dedicate—we can not consecrate—we can not hallow—this ground. The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us—that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion—that we here highly resolve that these dead shall not have died in vain—that this nation, under God, shall have a new birth of freedom—and that government of the people, by the people, for the people, shall not perish from the earth."
# this must be lower-case because tidytext will tokenize to lower-case by default
sentence_to_match <- "we are met on a great battle-field of that war."
sentences_df <- tibble(text = sample_text) %>%
  unnest_tokens(sentence, text, token = "sentences") %>%
  mutate(sentence_match = str_detect(sentence, sentence_to_match))
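Two follow-up notes. str_detect() treats its pattern as a regular expression, so for literal sentence matching it is safer to wrap the target in stringr::fixed(). And since the question involves a folder of files and 30 sentences, here is a sketch of how the same idea scales up (the folder name and sentences are placeholders):
files <- list.files("my_folder", pattern = "\\.txt$", full.names = TRUE)
texts <- vapply(files, function(f) paste(readLines(f), collapse = " "), character(1))
sentences <- c("we are met on a great battle-field of that war.",
               "it is altogether fitting and proper that we should do this.")
# one row per file, one column per sentence: does the file contain it?
hits <- sapply(sentences, function(s) str_detect(tolower(texts), fixed(s)))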

R Text Mining: Counting the number of times a specific word appears in a corpus?

I have seen this question answered in other languages but not in R.
[Specifically for R text mining] I have a set of frequent phrases that is obtained from a Corpus. Now I would like to search for the number of times these phrases have appeared in another corpus.
Is there a way to do this in TM package? (Or another related package)
For example, say I have an array of phrases, "tags", obtained from CorpusA, and another corpus, CorpusB, of a couple thousand sub-texts. I want to find out how many times each phrase in tags has appeared in CorpusB.
As always, I appreciate all your help!
Ain't perfect but this should get you started.
#User Defined Functions
Trim <- function(x) gsub("^\\s+|\\s+$", "", x)  # Trim comes from qdap; defined here so the snippet is self-contained
strip <- function(x, digit.remove = TRUE, apostrophe.remove = FALSE){
    strp <- function(x, digit.remove, apostrophe.remove){
        x2 <- Trim(tolower(gsub(".*?($|'|[^[:punct:]]).*?", "\\1", as.character(x))))
        x2 <- if (apostrophe.remove) gsub("'", "", x2) else x2
        ifelse(digit.remove, gsub("[[:digit:]]", "", x2), x2)
    }
    unlist(lapply(x, function(x) Trim(strp(x = x, digit.remove = digit.remove,
                                           apostrophe.remove = apostrophe.remove))))
}
#==================================================================
#Create 2 'corpus' documents (you'd actually do all this in tm)
corpus1 <- 'I have seen this question answered in other languages but not in R.
[Specifically for R text mining] I have a set of frequent phrases that is obtained from a Corpus.
Now I would like to search for the number of times these phrases have appeared in another corpus.
Is there a way to do this in TM package? (Or another related package)
For example, say I have an array of phrases, "tags" obtained from CorpusA. And another Corpus, CorpusB, of
couple thousand sub texts. I want to find out how many times each phrase in tags have appeared in CorpusB.
As always, I appreciate all your help!'
corpus2 <- "What have you tried? If you have seen it answered in another language, why don't you try translating that
language into R? – Eric Strom 2 hours ago
I am not a coder, otherwise would do. I just do not know a way to do this. – appletree 1 hour ago
Could you provide some example? or show what you have in mind for input and output? or a pseudo code?
As it is I find the question a bit too general. As it sounds I think you could use regular expressions
with grep to find your 'tags'. – AndresT 15 mins ago"
#=======================================================
#Clean up the text
corpus1 <- gsub("\\s+", " ", gsub("\n|\t", " ", corpus1))
corpus2 <- gsub("\\s+", " ", gsub("\n|\t", " ", corpus2))
corpus1.wrds <- as.vector(unlist(strsplit(strip(corpus1), " ")))
corpus2.wrds <- as.vector(unlist(strsplit(strip(corpus2), " ")))
#create frequency tables for each corpus
corpus1.Freq <- data.frame(table(corpus1.wrds))
corpus1.Freq$corpus1.wrds <- as.character(corpus1.Freq$corpus1.wrds)
corpus1.Freq <- corpus1.Freq[order(-corpus1.Freq$Freq), ]
rownames(corpus1.Freq) <- 1:nrow(corpus1.Freq)
key.terms <- corpus1.Freq[corpus1.Freq$Freq>2, 'corpus1.wrds'] #key words to match on corpus 2
corpus2.Freq <- data.frame(table(corpus2.wrds))
corpus2.Freq$corpus2.wrds <- as.character(corpus2.Freq$corpus2.wrds)
corpus2.Freq <- corpus2.Freq[order(-corpus2.Freq$Freq), ]
rownames(corpus2.Freq) <- 1:nrow(corpus2.Freq)
#Match key words to the words in corpus 2
corpus2.Freq[corpus2.Freq$corpus2.wrds %in% key.terms, ]
If I understand correctly, here's how the tm package could be used for this:
Some reproducible data...
examp1 <- "When discussing performance with colleagues, teaching, sending a bug report or searching for guidance on mailing lists and here on SO, a reproducible example is often asked and always helpful. What are your tips for creating an excellent example? How do you paste data structures from r in a text format? What other information should you include? Are there other tricks in addition to using dput(), dump() or structure()? When should you include library() or require() statements? Which reserved words should one avoid, in addition to c, df, data, etc? How does one make a great r reproducible example?"
examp2 <- "Sometimes the problem really isn't reproducible with a smaller piece of data, no matter how hard you try, and doesn't happen with synthetic data (although it's useful to show how you produced synthetic data sets that did not reproduce the problem, because it rules out some hypotheses). Posting the data to the web somewhere and providing a URL may be necessary. If the data can't be released to the public at large but could be shared at all, then you may be able to offer to e-mail it to interested parties (although this will cut down the number of people who will bother to work on it). I haven't actually seen this done, because people who can't release their data are sensitive about releasing it any form, but it would seem plausible that in some cases one could still post data if it were sufficiently anonymized/scrambled/corrupted slightly in some way. If you can't do either of these then you probably need to hire a consultant to solve your problem"
examp3 <- "You are most likely to get good help with your R problem if you provide a reproducible example. A reproducible example allows someone else to recreate your problem by just copying and pasting R code. There are four things you need to include to make your example reproducible: required packages, data, code, and a description of your R environment. Packages should be loaded at the top of the script, so it's easy to see which ones the example needs. The easiest way to include data in an email is to use dput() to generate the R code to recreate it. For example, to recreate the mtcars dataset in R, I'd perform the following steps: Run dput(mtcars) in R Copy the output In my reproducible script, type mtcars <- then paste. Spend a little bit of time ensuring that your code is easy for others to read: make sure you've used spaces and your variable names are concise, but informative, use comments to indicate where your problem lies, do your best to remove everything that is not related to the problem. The shorter your code is, the easier it is to understand. Include the output of sessionInfo() as a comment. This summarises your R environment and makes it easy to check if you're using an out-of-date package. You can check you have actually made a reproducible example by starting up a fresh R session and pasting your script in. Before putting all of your code in an email, consider putting it on http://gist.github.com/. It will give your code nice syntax highlighting, and you don't have to worry about anything getting mangled by the email system."
examp4 <- "Do your homework before posting: If it is clear that you have done basic background research, you are far more likely to get an informative response. See also Further Resources further down this page. Do help.search(keyword) and apropos(keyword) with different keywords (type this at the R prompt). Do RSiteSearch(keyword) with different keywords (at the R prompt) to search R functions, contributed packages and R-Help postings. See ?RSiteSearch for further options and to restrict searches. Read the online help for relevant functions (type ?functionname, e.g., ?prod, at the R prompt) If something seems to have changed in R, look in the latest NEWS file on CRAN for information about it. Search the R-faq and the R-windows-faq if it might be relevant (http://cran.r-project.org/faqs.html) Read at least the relevant section in An Introduction to R If the function is from a package accompanying a book, e.g., the MASS package, consult the book before posting. The R Wiki has a section on finding functions and documentation"
examp5 <- "Before asking a technical question by e-mail, or in a newsgroup, or on a website chat board, do the following: Try to find an answer by searching the archives of the forum you plan to post to. Try to find an answer by searching the Web. Try to find an answer by reading the manual. Try to find an answer by reading a FAQ. Try to find an answer by inspection or experimentation. Try to find an answer by asking a skilled friend. If you're a programmer, try to find an answer by reading the source code. When you ask your question, display the fact that you have done these things first; this will help establish that you're not being a lazy sponge and wasting people's time. Better yet, display what you have learned from doing these things. We like answering questions for people who have demonstrated they can learn from the answers. Use tactics like doing a Google search on the text of whatever error message you get (searching Google groups as well as Web pages). This might well take you straight to fix documentation or a mailing list thread answering your question. Even if it doesn't, saying “I googled on the following phrase but didn't get anything that looked promising” is a good thing to do in e-mail or news postings requesting help, if only because it records what searches won't help. It will also help to direct other people with similar problems to your thread by linking the search terms to what will hopefully be your problem and resolution thread. Take your time. Do not expect to be able to solve a complicated problem with a few seconds of Googling. Read and understand the FAQs, sit back, relax and give the problem some thought before approaching experts. Trust us, they will be able to tell from your questions how much reading and thinking you did, and will be more willing to help if you come prepared. Don't instantly fire your whole arsenal of questions just because your first search turned up no answers (or too many). Prepare your question. Think it through. Hasty-sounding questions get hasty answers, or none at all. The more you do to demonstrate that having put thought and effort into solving your problem before seeking help, the more likely you are to actually get help. Beware of asking the wrong question. If you ask one that is based on faulty assumptions, J. Random Hacker is quite likely to reply with a uselessly literal answer while thinking Stupid question..., and hoping the experience of getting what you asked for rather than what you needed will teach you a lesson."
library(tm)
list_examps <- mget(paste0("examp", 1:5))  # gather the five example texts into a list
list_corpora <- lapply(list_examps, function(x) Corpus(VectorSource(x)))
Now remove stopwords, numbers, punctuation, etc.
skipWords <- function(x) removeWords(x, stopwords("english"))
funcs <- list(tolower, removePunctuation, removeNumbers, stripWhitespace, skipWords)
list_corpora1 <- lapply(1:length(list_corpora), function(i) tm_map(list_corpora[[i]], FUN = tm_reduce, tmFuns = funcs))
Convert processed corpora to term document matrix:
list_dtms <- lapply(1:length(list_corpora1), function(i) TermDocumentMatrix(list_corpora1[[i]], control = list(wordLengths = c(3,10))))
Get the most frequently occurring words in the first corpus:
tags <- findFreqTerms(list_dtms[[1]], 2)
Here are the key lines that should do the trick. Find out how many times those tags occur in the other TDMs:
list_mats <- lapply(1:length(list_dtms), function(i) as.matrix(list_dtms[[i]]))
# two equivalent ways of subsetting each matrix to the shared terms
list_common <- lapply(1:length(list_mats), function(i) list_mats[[i]][intersect(rownames(list_mats[[i]]), tags), ])
list_common <- lapply(1:length(list_mats), function(i) list_mats[[i]][rownames(list_mats[[i]]) %in% tags, ])
This is how I'd approach the problem now:
library(tm)
library(qdap)
## Create a MWE like you should have done:
corpus1 <- 'I have seen this question answered in other languages but not in R.
[Specifically for R text mining] I have a set of frequent phrases that is obtained from a Corpus.
Now I would like to search for the number of times these phrases have appeared in another corpus.
Is there a way to do this in TM package? (Or another related package)
For example, say I have an array of phrases, "tags" obtained from CorpusA. And another Corpus, CorpusB, of
couple thousand sub texts. I want to find out how many times each phrase in tags have appeared in CorpusB.
As always, I appreciate all your help!'
corpus2 <- "What have you tried? If you have seen it answered in another language, why don't you try translating that
language into R? – Eric Strom 2 hours ago
I am not a coder, otherwise would do. I just do not know a way to do this. – appletree 1 hour ago
Could you provide some example? or show what you have in mind for input and output? or a pseudo code?
As it is I find the question a bit too general. As it sounds I think you could use regular expressions
with grep to find your 'tags'. – AndresT 15 mins ago"
## Now the code:
## create the corpus and extract frequent terms (top7)
corp1 <- Corpus(VectorSource(corpus1))
(terms <- apply_as_df(corp1, freq_terms, top=7, stopwords=tm::stopwords("en")))
## WORD FREQ
## 1 corpus 3
## 2 phrases 3
## 3 another 2
## 4 appeared 2
## 5 corpusb 2
## 6 obtained 2
## 7 tags 2
## 8 times 2
## Use termco to search for these top 7 terms in a new corpus
corp2 <- Corpus(VectorSource(corpus2))
apply_as_df(corp2, termco, match.list=terms[, 1])
## docs word.count corpus phrases another appeared corpusb obtained tags times
## 1 1 96 0 0 1(1.04%) 0 0 0 1(1.04%) 0
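One caveat on both answers: they end up counting individual words, while the question asks about multi-word phrases. For literal phrase counts, a plain stringr call is a simple alternative (a sketch; the example tags are made up):
library(stringr)
tags <- c("a way to do this", "regular expressions")  # hypothetical phrases
setNames(str_count(tolower(corpus2), fixed(tags)), tags)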

What are fortunes?

In R, one sometimes sees people making references to fortunes. For example:
fortune(108)
What does this mean? Where does this originate? Where can I get the code?
Edit. The sharp-eyed reader would have noticed that this question marks the 5,000th question with the [r] tag. Forgive the frivolity, but such a milestone should be marked with a bit of humour. For an extra bit of fun, you can provide an answer with your favourite fortune cookie.
It refers to the fortunes package, which contains a whole set of humorous quotes and comments from the help lists, conferences, fora and even StackOverflow.
It is actually a database, or small data frame, you can browse through.
library(fortunes)
fortune()
This gives a random one. Or look for a specific one, e.g.:
> fortune("stackoverflow")
datayoda: Bing is my friend...I found the cumsum() function.
Dirk Eddelbuettel: If bing is your friend, then rseek.org is bound
to be your uncle.
-- datayoda and Dirk Eddelbuettel (after searching for a function that
computes cumulative sums)
stackoverflow.com (October 2010)
If you want to get all of them in a dataframe, just do
MyFortunes <- read.fortunes()
The numbers sometimes referred to are the row numbers of this data frame. To find everything on StackOverflow:
> grep("(?i)stackoverflow",MyFortunes$source)
[1] 273 275
> fortune(275)
I used a heuristic... pulled from my posterior. That makes it Bayesian, right?
-- JD Long (in a not too serious chat about modeling strategies)
Stackoverflow (November 2010)
And for the record, 108 is this one:
R> library(fortunes)
R> fortune(108)
Actually, I see it as part of my job to inflict R on people who are
perfectly happy to have never heard of it. Happiness doesn't equal
proficient and efficient. In some cases the proficiency of a person
serves a greater good than their momentary happiness.
-- Patrick Burns
R-help (April 2005)
R>
They're humorous (sometimes snarky) comments collected from the R lists.
install.packages("fortunes")
Or more generally
install.packages("sos")
library("sos")
findFn("fortune")
A quick search on CRAN turns up the fortunes package, which basically just prints random witty quotes related to R. The concept is based on the fortune program from Unix.
