Combine character vectors in Rcpp

I am new to Rcpp and all I want to do is to create a vector by combining two character vectors in Rcpp in the style of:
v1 <- c("Cat", "Dog", "Fox)
v2 <- c("Fish", "Python))
res <- c(v1, v2)
I know this is basic, but I have already spent over a day going down rabbit holes: pure C++ vector creation, the Rcpp CharacterVector style, the endless differences between concatenating numerics and characters in R vs. C++, and a bunch of other, similar things, and I just cannot solve my issue. I have already read through Hadley Wickham's Advanced R guide for Rcpp (http://adv-r.had.co.nz/Rcpp.html) and this fairly extensive markdown tutorial (https://teuder.github.io/rcpp4everyone_en/), so please no snarky replies of 'you should really do your research'. Simple, clean examples and any other tips, links, or guides would be greatly appreciated.
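For reference, a minimal sketch of one common approach: allocate an output CharacterVector and copy both inputs into it. This is not an accepted answer, and the function name combine_chr is illustrative; it compiles inline with Rcpp::cppFunction().
library(Rcpp)
cppFunction('
CharacterVector combine_chr(CharacterVector a, CharacterVector b) {
  // allocate the result, then copy a and b into it back to back
  CharacterVector out(a.size() + b.size());
  std::copy(a.begin(), a.end(), out.begin());
  std::copy(b.begin(), b.end(), out.begin() + a.size());
  return out;
}')
v1 <- c("Cat", "Dog", "Fox")
v2 <- c("Fish", "Python")
combine_chr(v1, v2)
## [1] "Cat"    "Dog"    "Fox"    "Fish"   "Python"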

Related

Concrete use cases of .BY and .EACHI special symbols used in data.table?

I have seen the following two SO questions here and here, and neither clearly explains the purpose of the two reserved symbols .BY and .EACHI.
I understand the symbols .N, .SD, and .SDcols very well and use them in my code all the time, but I can't get my head around the real use of .BY and .EACHI. I have scanned all the data.table documentation on CRAN, but these two symbols are missing from most of it, and at best it gives a superficial explanation without concrete examples.
I went through this excellent tutorial on data.tables on CRAN here, and again it fails to explain the symbols .BY and .EACHI. Can someone please explain these two symbols with real use cases, preferably using the same airlines example code given in the data.table vignette here?
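In the meantime, a minimal illustration with a toy data.table (not the airlines data): .BY is a named list holding the current group's values of the by= columns, and by = .EACHI in a join evaluates j once per row of i rather than once over the whole join result.
library(data.table)
dt <- data.table(carrier = c("AA", "AA", "DL"), dep_delay = c(10, 2, 5))
# .BY: inside j, .BY$carrier is the value for the group being computed
dt[, .(label = paste("carrier:", .BY$carrier), mean_delay = mean(dep_delay)),
   by = carrier]
# .EACHI: count the matching rows separately for each row of i
dt[.(c("AA", "DL")), .(n = .N), on = "carrier", by = .EACHI]
##    carrier n
## 1:      AA 2
## 2:      DL 1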

In R, Create Summary Data Frame from Multiple Objects

I'm trying to create a "summary" data frame that holds some high-level stats about a few objects in my R project. I'm having trouble even accomplishing this simple task, and I've tried using for loops and apply functions with no luck.
After searching (a lot) on SO, I'm seeing that for loops might not be the best-performing option, so I'm open to any solution that gets the job done.
I have three objects: text1, text2, and text3 of class "Large Character (vectors)" (imagine I might be exploring these objects and will create an NLP predictive model from them). Each is > 250 MB in size (upwards of 1 million "rows" each) once loaded into R.
My goal: Store the results of object.size(), length(), and max(nchar()) in a table for my 3 objects.
Method 1: Use an Apply() Function
Issue: I haven't successfully applied multiple functions to a single object. I understand how to do simple applies like lapply(x, mean), but I'm falling short here.
Method 2: Bind Rows Using a For loop
I'm liking this solution because I almost know how to implement it. A lot of SO users say this is a bad approach, but I'm lacking other ideas.
sources <- c("text1", "text2", "text3")
text.summary <- data.frame()
for (i in sources) {
  text.summary[i, ] <- rbind(i, object.size(get(i)), length(get(i)),
                             max(nchar(get(i))))
}
Issue: This returns the error "data length exceeds size of matrix". I know I could define the structure of my data frame (on line 2), but I've seen too much feedback on other questions that advises against doing this.
Thanks for helping me understand the proper way to accomplish this. I know I'm going to have trouble doing NLP if I can't even figure out this simple problem, but R is my first foray into programming. Oof!
Just try for example:
do.call(rbind, lapply(list(text1, text2, text3),
        function(x) c(objectSize = c(object.size(x)), length = length(x),
                      max = max(nchar(x)))))
You'll obtain a matrix. You can coerce it to a data.frame later if you need.
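For instance, with tiny stand-in vectors in place of the real 250 MB objects (the names and values here are illustrative, and object.size() results depend on your platform):
text1 <- c("apple", "banana"); text2 <- "kiwi"; text3 <- c("fig", "plum", "pear")
res <- do.call(rbind, lapply(list(text1, text2, text3),
        function(x) c(objectSize = c(object.size(x)), length = length(x),
                      max = max(nchar(x)))))
# coerce to data.frame and add a column naming each source object
data.frame(source = c("text1", "text2", "text3"), res)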

replacing several string occurrences in r

I want to replace several strings with one. I've researched and found that gsub can replace elements, but only one at a time.
If I do this, I get a warning saying that only the first one was used:
data$EVTYPE <- gsub(c("x", "y"), "xy", data$EVTYPE)
I am trying now with sapply:
data$EVTYPE <- sapply(data$EVTYPE, gsub, c("x", "y"), "xy")
but it's been more than 5 minutes already and it's still processing. I will get a stack overflow message any time now. :-/ Is there an elegant, short solution for this? Is there a package I can use? It needs to be small, because I need to do this for several cases where I have duplicate names.
Thanks for your useful comments. It was done as Frank suggested: gsub("x|y", "xy", data$EVTYPE), instead of using a vector.
For the cold temperature case, you could use gsub("COLD TEMPERATURES?", "COLD", data$EVTYPE). It's worth spending a little time getting one's head around the basics of regular expressions; there are lots of tutorials, including this one.
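A short sketch of both ideas on made-up data (evtype and repl are illustrative names): the alternation "x|y" handles one replacement in a single pass, and a loop over a named vector handles many pattern/replacement pairs.
evtype <- c("x", "y", "COLD TEMPERATURE", "COLD TEMPERATURES")
gsub("x|y", "xy", evtype)  # replaces every "x" or "y" in one pass
# many pattern -> replacement pairs: loop over a named vector
repl <- c("x|y" = "xy", "COLD TEMPERATURES?" = "COLD")
for (p in names(repl)) evtype <- gsub(p, repl[[p]], evtype)
evtype
## [1] "xy"   "xy"   "COLD" "COLD"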

Learning R for someone used to MATLAB, and confusion with R data types

Are there concise (yet fairly thorough) tutorials to get someone used to working in MATLAB up to speed with writing R code?
Here is one particular issue I have in mind: from my limited experience with the R documentation and tutorials, I am left with a lot of confusion regarding data types in R and how to manipulate them. For example, what is a vector, matrix, list, data frame, etc., and how do they relate? I haven't found a source that explains the basic data types clearly, to the point that I am wondering if the language is ambiguous by design.
It's always difficult when you are primarily familiar with only one programming language and you try to learn another that works differently: you expect to think through a problem the way you're used to, and those incorrect expectations cause problems. It would be very difficult to write an introductory guide appropriate for students coming from each of the other languages ('you're going to think you should do X, but in R, you should do Y'). However, I can assure you that R was not designed to be ambiguous.
Mostly, you are simply going to have to get an introductory guide and plod through it. At first, it will be a lot of work, and frustrating, but that's the only way. In the end, it will get easier. Perhaps I can tell you a couple of things to jumpstart the process:
a list is just an ordered set of elements. This can be of any length, and contain any old type of thing. For example, x <- list(5, "word", TRUE).
a vector is also an ordered set of elements. Although it can be of any length, the elements must all be of the same type. For example, x <- c(3,5,4), x <- c("letter", "word", "a phrase"), x <- c(TRUE, FALSE, FALSE, TRUE).
a matrix is a vector of vectors, where all component vectors are of the same length and type. For example, x <- matrix(c("a", "b", "c", "d"), ncol=2).
a data.frame is a list of vectors, where all component vectors are of the same length, but do NOT have to be of the same type. For example, x <- data.frame(category=c("blue", "green"), amount=c(5, 30), condition.met=c(TRUE, FALSE)).
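Pasting the four examples above into one snippet and running str() on each makes the relationships easy to see:
x_list <- list(5, "word", TRUE)                    # mixed types allowed
x_vec  <- c(3, 5, 4)                               # one type only
x_mat  <- matrix(c("a", "b", "c", "d"), ncol = 2)  # a vector with dimensions
x_df   <- data.frame(category = c("blue", "green"),
                     amount = c(5, 30),
                     condition.met = c(TRUE, FALSE))
str(x_list)  # List of 3
str(x_vec)   # num [1:3]
str(x_mat)   # chr [1:2, 1:2]
str(x_df)    # 'data.frame': 2 obs. of 3 variables, each its own type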
(response to comments:)
The function ?c is for concatenation; c(c("a", "b"), c("c", "d")) will not create a matrix, but a longer vector from the two shorter vectors. The functions ?cbind (to bind columns together) and ?rbind (to bind rows together) will create a matrix.
I don't know of a single function that will output the type of any object. The closest thing is probably ?class, but this will sometimes give, e.g., "integer" where I think you want "vector". There are also mode() and typeof(), which are related but aren't quite what you're looking for. Find out more about the distinctions among these here and here. To check whether an object is of a specific type, you can use is.<specific type>(), e.g., ?is.vector.
To coerce (i.e., 'cast') an object to a specific type, you can use as.vector() and friends, but this will only work if the conditions (e.g., those noted above) are met.
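A quick sketch of those checks and coercions at the console:
x <- 1:3
class(x)         # "integer", not "vector"
is.vector(x)     # TRUE
as.character(x)  # coerced to a character vector: "1" "2" "3"
m <- matrix(1:4, ncol = 2)
class(m)         # "matrix" "array" (in R >= 4.0)
as.vector(m)     # 1 2 3 4: the dim attribute is dropped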

R Text Mining: Counting the number of times a specific word appears in a corpus?

I have seen this question answered in other languages but not in R.
[Specifically for R text mining] I have a set of frequent phrases that was obtained from a corpus. Now I would like to search for the number of times these phrases have appeared in another corpus.
Is there a way to do this in the tm package? (Or another related package?)
For example, say I have an array of phrases, "tags", obtained from CorpusA, and another corpus, CorpusB, of a couple thousand sub-texts. I want to find out how many times each phrase in tags has appeared in CorpusB.
As always, I appreciate all your help!
Ain't perfect, but this should get you started.
#User Defined Function. Trim (strip leading/trailing whitespace) is defined
#here so the snippet is self-contained; qdap::Trim does the same thing.
Trim <- function(x) gsub("^\\s+|\\s+$", "", x)

strip <- function(x, digit.remove = TRUE, apostrophe.remove = FALSE) {
  strp <- function(x, digit.remove, apostrophe.remove) {
    # lower-case, drop punctuation (keeping apostrophes), and trim
    x2 <- Trim(tolower(gsub(".*?($|'|[^[:punct:]]).*?", "\\1", as.character(x))))
    x2 <- if (apostrophe.remove) gsub("'", "", x2) else x2
    ifelse(digit.remove, gsub("[[:digit:]]", "", x2), x2)
  }
  unlist(lapply(x, function(x) Trim(strp(x = x, digit.remove = digit.remove,
                                         apostrophe.remove = apostrophe.remove))))
}
#==================================================================
#Create 2 'corpus' documents (you'd have to actually do all this in tm)
corpus1 <- 'I have seen this question answered in other languages but not in R.
[Specifically for R text mining] I have a set of frequent phrases that is obtained from a Corpus.
Now I would like to search for the number of times these phrases have appeared in another corpus.
Is there a way to do this in TM package? (Or another related package)
For example, say I have an array of phrases, "tags" obtained from CorpusA. And another Corpus, CorpusB, of
couple thousand sub texts. I want to find out how many times each phrase in tags have appeared in CorpusB.
As always, I appreciate all your help!'
corpus2 <- "What have you tried? If you have seen it answered in another language, why don't you try translating that
language into R? – Eric Strom 2 hours ago
I am not a coder, otherwise would do. I just do not know a way to do this. – appletree 1 hour ago
Could you provide some example? or show what you have in mind for input and output? or a pseudo code?
As it is I find the question a bit too general. As it sounds I think you could use regular expressions
with grep to find your 'tags'. – AndresT 15 mins ago"
#=======================================================
#Clean up the text
corpus1 <- gsub("\\s+", " ", gsub("\n|\t", " ", corpus1))
corpus2 <- gsub("\\s+", " ", gsub("\n|\t", " ", corpus2))
corpus1.wrds <- as.vector(unlist(strsplit(strip(corpus1), " ")))
corpus2.wrds <- as.vector(unlist(strsplit(strip(corpus2), " ")))
#create frequency tables for each corpus
corpus1.Freq <- data.frame(table(corpus1.wrds))
corpus1.Freq$corpus1.wrds <- as.character(corpus1.Freq$corpus1.wrds)
corpus1.Freq <- corpus1.Freq[order(-corpus1.Freq$Freq), ]
rownames(corpus1.Freq) <- 1:nrow(corpus1.Freq)
key.terms <- corpus1.Freq[corpus1.Freq$Freq>2, 'corpus1.wrds'] #key words to match on corpus 2
corpus2.Freq <- data.frame(table(corpus2.wrds))
corpus2.Freq$corpus2.wrds <- as.character(corpus2.Freq$corpus2.wrds)
corpus2.Freq <- corpus2.Freq[order(-corpus2.Freq$Freq), ]
rownames(corpus2.Freq) <- 1:nrow(corpus2.Freq)
#Match key words to the words in corpus 2
corpus2.Freq[corpus2.Freq$corpus2.wrds %in% key.terms, ]
If I understand correctly, here's how the tm package could be used for this:
Some reproducible data...
examp1 <- "When discussing performance with colleagues, teaching, sending a bug report or searching for guidance on mailing lists and here on SO, a reproducible example is often asked and always helpful. What are your tips for creating an excellent example? How do you paste data structures from r in a text format? What other information should you include? Are there other tricks in addition to using dput(), dump() or structure()? When should you include library() or require() statements? Which reserved words should one avoid, in addition to c, df, data, etc? How does one make a great r reproducible example?"
examp2 <- "Sometimes the problem really isn't reproducible with a smaller piece of data, no matter how hard you try, and doesn't happen with synthetic data (although it's useful to show how you produced synthetic data sets that did not reproduce the problem, because it rules out some hypotheses). Posting the data to the web somewhere and providing a URL may be necessary. If the data can't be released to the public at large but could be shared at all, then you may be able to offer to e-mail it to interested parties (although this will cut down the number of people who will bother to work on it). I haven't actually seen this done, because people who can't release their data are sensitive about releasing it any form, but it would seem plausible that in some cases one could still post data if it were sufficiently anonymized/scrambled/corrupted slightly in some way. If you can't do either of these then you probably need to hire a consultant to solve your problem"
examp3 <- "You are most likely to get good help with your R problem if you provide a reproducible example. A reproducible example allows someone else to recreate your problem by just copying and pasting R code. There are four things you need to include to make your example reproducible: required packages, data, code, and a description of your R environment. Packages should be loaded at the top of the script, so it's easy to see which ones the example needs. The easiest way to include data in an email is to use dput() to generate the R code to recreate it. For example, to recreate the mtcars dataset in R, I'd perform the following steps: Run dput(mtcars) in R Copy the output In my reproducible script, type mtcars <- then paste. Spend a little bit of time ensuring that your code is easy for others to read: make sure you've used spaces and your variable names are concise, but informative, use comments to indicate where your problem lies, do your best to remove everything that is not related to the problem. The shorter your code is, the easier it is to understand. Include the output of sessionInfo() as a comment. This summarises your R environment and makes it easy to check if you're using an out-of-date package. You can check you have actually made a reproducible example by starting up a fresh R session and pasting your script in. Before putting all of your code in an email, consider putting it on http://gist.github.com/. It will give your code nice syntax highlighting, and you don't have to worry about anything getting mangled by the email system."
examp4 <- "Do your homework before posting: If it is clear that you have done basic background research, you are far more likely to get an informative response. See also Further Resources further down this page. Do help.search(keyword) and apropos(keyword) with different keywords (type this at the R prompt). Do RSiteSearch(keyword) with different keywords (at the R prompt) to search R functions, contributed packages and R-Help postings. See ?RSiteSearch for further options and to restrict searches. Read the online help for relevant functions (type ?functionname, e.g., ?prod, at the R prompt) If something seems to have changed in R, look in the latest NEWS file on CRAN for information about it. Search the R-faq and the R-windows-faq if it might be relevant (http://cran.r-project.org/faqs.html) Read at least the relevant section in An Introduction to R If the function is from a package accompanying a book, e.g., the MASS package, consult the book before posting. The R Wiki has a section on finding functions and documentation"
examp5 <- "Before asking a technical question by e-mail, or in a newsgroup, or on a website chat board, do the following: Try to find an answer by searching the archives of the forum you plan to post to. Try to find an answer by searching the Web. Try to find an answer by reading the manual. Try to find an answer by reading a FAQ. Try to find an answer by inspection or experimentation. Try to find an answer by asking a skilled friend. If you're a programmer, try to find an answer by reading the source code. When you ask your question, display the fact that you have done these things first; this will help establish that you're not being a lazy sponge and wasting people's time. Better yet, display what you have learned from doing these things. We like answering questions for people who have demonstrated they can learn from the answers. Use tactics like doing a Google search on the text of whatever error message you get (searching Google groups as well as Web pages). This might well take you straight to fix documentation or a mailing list thread answering your question. Even if it doesn't, saying “I googled on the following phrase but didn't get anything that looked promising” is a good thing to do in e-mail or news postings requesting help, if only because it records what searches won't help. It will also help to direct other people with similar problems to your thread by linking the search terms to what will hopefully be your problem and resolution thread. Take your time. Do not expect to be able to solve a complicated problem with a few seconds of Googling. Read and understand the FAQs, sit back, relax and give the problem some thought before approaching experts. Trust us, they will be able to tell from your questions how much reading and thinking you did, and will be more willing to help if you come prepared. Don't instantly fire your whole arsenal of questions just because your first search turned up no answers (or too many). Prepare your question. Think it through. Hasty-sounding questions get hasty answers, or none at all. The more you do to demonstrate that having put thought and effort into solving your problem before seeking help, the more likely you are to actually get help. Beware of asking the wrong question. If you ask one that is based on faulty assumptions, J. Random Hacker is quite likely to reply with a uselessly literal answer while thinking Stupid question..., and hoping the experience of getting what you asked for rather than what you needed will teach you a lesson."
library(tm)
list_examps <- lapply(1:5, function(i) eval(parse(text=paste0("examp",i))))
list_corpora <- lapply(1:length(list_examps), function(i) Corpus(VectorSource(list_examps[[i]])))
Now remove stopwords, numbers, punctuation, etc.:
skipWords <- function(x) removeWords(x, stopwords("english"))
funcs <- list(tolower, removePunctuation, removeNumbers, stripWhitespace, skipWords)
list_corpora1 <- lapply(1:length(list_corpora), function(i) tm_map(list_corpora[[i]], FUN = tm_reduce, tmFuns = funcs))
Convert the processed corpora to term-document matrices:
list_dtms <- lapply(1:length(list_corpora1), function(i) TermDocumentMatrix(list_corpora1[[i]], control = list(wordLengths = c(3,10))))
Get the most frequently occurring words in the first corpus:
tags <- findFreqTerms(list_dtms[[1]], 2)
Here are the key lines that should do the trick. Find out how many times those tags occur in the other TDMs:
list_mats <- lapply(1:length(list_dtms), function(i) as.matrix(list_dtms[[i]]))
library(plyr)
# two methods of doing the same thing here:
list_common <- lapply(1:length(list_mats), function(i) list_mats[[i]][intersect(rownames(list_mats[[i]]), tags), ])
list_common <- lapply(1:length(list_mats), function(i) list_mats[[i]][rownames(list_mats[[i]]) %in% tags, ])
This is how I'd approach the problem now:
library(tm)
library(qdap)
## Create a MWE like you should have done:
corpus1 <- 'I have seen this question answered in other languages but not in R.
[Specifically for R text mining] I have a set of frequent phrases that is obtained from a Corpus.
Now I would like to search for the number of times these phrases have appeared in another corpus.
Is there a way to do this in TM package? (Or another related package)
For example, say I have an array of phrases, "tags" obtained from CorpusA. And another Corpus, CorpusB, of
couple thousand sub texts. I want to find out how many times each phrase in tags have appeared in CorpusB.
As always, I appreciate all your help!'
corpus2 <- "What have you tried? If you have seen it answered in another language, why don't you try translating that
language into R? – Eric Strom 2 hours ago
I am not a coder, otherwise would do. I just do not know a way to do this. – appletree 1 hour ago
Could you provide some example? or show what you have in mind for input and output? or a pseudo code?
As it is I find the question a bit too general. As it sounds I think you could use regular expressions
with grep to find your 'tags'. – AndresT 15 mins ago"
## Now the code:
## create the corpus and extract frequent terms (top7)
corp1 <- Corpus(VectorSource(corpus1))
(terms <- apply_as_df(corp1, freq_terms, top=7, stopwords=tm::stopwords("en")))
## WORD FREQ
## 1 corpus 3
## 2 phrases 3
## 3 another 2
## 4 appeared 2
## 5 corpusb 2
## 6 obtained 2
## 7 tags 2
## 8 times 2
## Use termco to search for these top 7 terms in a new corpus
corp2 <- Corpus(VectorSource(corpus2))
apply_as_df(corp2, termco, match.list=terms[, 1])
## docs word.count corpus phrases another appeared corpusb obtained tags times
## 1 1 96 0 0 1(1.04%) 0 0 0 1(1.04%) 0
