For loop for specific rows - r

How would I remove certain job titles from the data frame (like the one below), but only from a specific LOB? E.g. I want to keep Technology Manager in LOB4 but drop technology sales from LOB2. When I execute the code below it removes the titles from the entire data frame.
Is there any way to do this?
LOB   Title
LOB1  sales rep
LOB2  technology sales
LOB2  receptionist
LOB3  Web Designer
LOB4  Technology Manager
for (i in c("(?=.*technology)", "(?=.*designer)")) {
  del <- grepl(i, data[data$LOB == "LOB1" | data$LOB == "LOB2", 2], perl = T, ignore.case = T)
  data <- data[!del, ]
}

This likely isn't working because the grepl statement returns a vector of length three, which is then used to subset a data.frame with five rows. A for loop is also probably not needed; any of the following will drop technology sales in LOB2:
data[!grepl("(?=.*(technology|designer))", data$Title, perl = TRUE), ]
data[!data$Title == "technology sales", ]
data[!data$Title %in% c("technology sales", "job2 to drop"), ]
data[-2, ]
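If the intent is to drop a title only within a particular LOB (so Technology Manager in LOB4 is kept), a minimal sketch on the example data combines the Title test with an LOB test:
# drop rows that are both in LOB2 and have a "technology" title
drop <- data$LOB == "LOB2" & grepl("technology", data$Title, ignore.case = TRUE)
data <- data[!drop, ]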

Related

In R, How do you extract multiple matched terms as string and match if TRUE with Regex or Grep?

I'm still a beginner in R. I need help with some code that searches a vector for terms in a list and returns TRUE. If TRUE, return a string of matched terms.
I have it set to tell me if terms match and return the first matched term but I'm not sure how to get the rest of the matched terms.
In the attached code, I have my Desired_Output and the imperfect Final_Output.
#create dataset of 2 columns/vectors. 1st column is "Job Title", 2nd column is "Work Experience"
'Work Experience' <- c("cooked food; cleaned house; made beds", "analyzed data; identified gaps; used sql, python, and r", "used tableau to make dashboards for clients; applied advanced macro excel functions", "financial planning and strategy; consulted with leaders and clients")
'Job Title' <- c("dad", "research analyst", "business intelligence consultant", "finance consultant")
Job_Hist <- data.frame(`Job Title`, `Work Experience`)
#create list of terms to search for in Job_Hist
Term_List <- c("python", " r", "sql", "tableau", "excel")
#use grepl to search the Work Experience vector for terms in Term_List THEN return TRUE or FALSE
Term_TF <- grepl(paste(Term_List, collapse = '|'), Job_Hist$Work.Experience)
#add a new column to our final output dataframe that shows if the job experience matched our terms
Final_Output<-Job_Hist
Final_Output$Term_Test <- Term_TF
#Let's see what terms caused the TRUE flag in the Final_Output
m<-regexpr(paste(Term_List, collapse = '|'),
Job_Hist$Work.Experience, perl=TRUE)
T_Match <- regmatches(Job_Hist$Work.Experience,m)
#Compare Final_Output to my Desired_Output and please help me :)
Desired_T_Match <- c("NA", "sql, python, r", "tableau, excel", "NA")
Desired_Output <- data.frame(`Job Title`, `Work Experience`, Term_TF, Desired_T_Match)
#I need 2 things.
#1) a way to tie T_Match back to Final_Output... something like if, TRUE then match
#2) a way to return every term matched in a comma-delimited string. Example: research analyst analyzed data... TRUE sql, python
You can use stringr::str_extract_all to get a list of matches from each row:
library(stringr)
library(tidyverse)
Job_Hist$matches <- str_extract_all(Job_Hist$Work.Experience,
paste(Term_List, collapse = '|'), simplify = TRUE)
  Work.Experience                                                                        Term   matches.1  matches.2  matches.3
1 cooked food; cleaned house; made beds                                                  FALSE
2 analyzed data; identified gaps; used sql, python, and r                                TRUE   sql        python     r
3 used tableau to make dashboards for clients; applied advanced macro excel functions    TRUE   tableau    excel
4 financial planning and strategy; consulted with leaders and clients                    FALSE
Edit: if you'd rather have matches in one column as a comma separated string, you can use:
str_extract_all(Job_Hist$Work.Experience, paste(Term_List, collapse = '|')) %>%
sapply(., paste, collapse = ", ")
matches
1
2 sql, python, r
3 tableau, excel
4
Note that if you use the default argument simplify = FALSE in str_extract_all, your matches column will look correct, like the result we get with sapply above. However, if you inspect it with str() you'd see each element is actually its own list, which will cause problems for some types of analysis.
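If you want both pieces in Final_Output at once (the TRUE/FALSE flag plus the comma-separated matches), here is a small sketch along those lines, reusing Term_List and the Work.Experience column from above:
library(stringr)
pattern <- paste(Term_List, collapse = '|')
matches <- str_extract_all(Job_Hist$Work.Experience, pattern)
Final_Output <- Job_Hist
Final_Output$Term_Test <- lengths(matches) > 0
Final_Output$T_Match <- sapply(matches, paste, collapse = ", ")
# rows with no match get an empty string; swap in NA if preferred
Final_Output$T_Match[!Final_Output$Term_Test] <- NA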

R: Read text files with blanks and unequal number of columns

I am trying to read many text files into R using read.table. Most of the time we have clean text files which have defined columns.
The data that I am trying to read comes from ftp://ftp.cmegroup.com/delivery_reports/live_cattle_delivery/102317_livecattle.txt
You can see that the blanks and length of text files varies by report.
ftp://ftp.cmegroup.com/delivery_reports/live_cattle_delivery/102317_livecattle.txt
ftp://ftp.cmegroup.com/delivery_reports/live_cattle_delivery/100917_livecattle.txt
My objective is to read many of these text files and combine them into a dataset.
If I can read one of them then compiling should not be an issue. However, I am running into several issues because of the format of the text file:
1) The number of FIRMS varies from report to report. For example, sometimes there will be 3 rows of data to import (i.e. 3 firms that did business on that date) and sometimes there may be 10.
2) Blanks are not being handled properly. For example, under the FIRM section there should be a column for Deliveries (DEL) and one for Receipts (REC). When the data is read in, THIS section should look like:
df <- data.frame("FIRM_#" = c(407, 685, 800, 905),
                 "FIRM_NAME" = c("STRAITS FIN LLC", "R.J.O'BRIEN ASSOC", "ROSENTHAL COLLINS LL", "ADM INVESTOR SERVICE"),
                 "DEL" = c(1, 1, 15, 1), "REC" = c(NA, 18, NA, NA))
However, when I read this in, the formatting is all messed up and NA is not inserted for the blank values.
3) The above issues apply for "YARDS" and "FUTURE DELIVERIES SCHEDULED" section of the text file.
I have tried to read in sections of the text file and then format it accordingly but since the the number of firms change day to day the code does not generalize.
Any help would greatly be appreciated.
Here is an answer which starts from scratch, using rvest to download the data, and includes lots of formatting. The general idea is to identify fixed widths that may be used to separate columns - I used a little help from SO for this purpose (link).
You could then use read.fwf() in combination with cat() and tempfile(). In my first attempt this did not work, due to some formatting issues, so I added some additional lines to get the final table format.
Maybe there are some more elegant options and shortcuts I have overlooked, but at least my answer should get you started. Of course, you will have to adapt the selection of lines and the identification of widths for splitting tables depending on what parts of the data you need. Once this is settled, you may loop through all the reports to gather data. I hope this helps...
library(rvest)
library(dplyr)

page <- read_html("ftp://ftp.cmegroup.com/delivery_reports/live_cattle_delivery/102317_livecattle.txt")

table <- page %>%
  html_text("pre") %>%
  # reformat by splitting on line breaks
  { unlist(strsplit(., "\n")) } %>%
  # select range based on strings in specific lines
  "["(., (grep("FIRM #", .):(grep(" DELIVERIES SCHEDULED", .) - 1))) %>%
  # exclude empty rows
  "["(., !grepl("^\\s+$", .)) %>%
  # fix width of table to the right
  { substring(., 1, nchar(gsub("\\s+$", "", .[1]))) } %>%
  # strip white space on the left
  { gsub("^\\s+", "", .) }

headline <- unlist(strsplit(table[1], "\\s{2,}"))

get_split_position <- function(substring, string) {
  nchar(string) - nchar(gsub(paste0("(^.*)(?=", substring, ")"), "", string, perl = TRUE))
}

# exclude first element, no split before this element
split_positions <- sapply(headline[-1], function(x) {
  get_split_position(x, table[1])
})

# exclude headline from split
table <- lapply(table[-1], function(x) {
  substring(x, c(1, split_positions + 1), c(split_positions, nchar(x)))
})

table <- do.call(rbind, table)
colnames(table) <- headline

# strip whitespace
table <- gsub("\\s+", "", table)
table <- as.data.frame(table, stringsAsFactors = FALSE)

# assign NA values
table[table == ""] <- NA

# change column type
table[, c("FIRM #", "DEL", "REC")] <- apply(table[, c("FIRM #", "DEL", "REC")], 2, as.numeric)
table
# FIRM # FIRM NAME DEL REC
# 1 407 STRAITSFINLLC 1 NA
# 2 685 R.J.O'BRIENASSOC 1 18
# 3 800 ROSENTHALCOLLINSLL 15 NA
# 4 905 ADMINVESTORSERVICE 1 NA
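To build the combined dataset the question asks for, the steps above can be wrapped in a function and applied over the report URLs. A rough sketch, where read_firm_table() is assumed to be such a wrapper returning the cleaned table for one report (the two URLs are the examples from the question):
# read_firm_table() is a hypothetical wrapper around the parsing steps shown above
urls <- c("ftp://ftp.cmegroup.com/delivery_reports/live_cattle_delivery/102317_livecattle.txt",
          "ftp://ftp.cmegroup.com/delivery_reports/live_cattle_delivery/100917_livecattle.txt")
reports <- lapply(urls, read_firm_table)
combined <- do.call(rbind, reports)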

R Optimizing double loop that uses stri_extract

I have been working on some text scraping/analysis. One thing I did was pull out the top words from documents to compare and learn about different metrics. This was fast and easy. There was an issue, though, with defining which separators to use, and pulling out individual words rather than phrases removed information from the analysis. For example, .Net Developer becomes net and developer after the transformation. I already had a list of set phrases/words from an old project someone else gave up on. The next step was pulling out specific keywords from multiple rows for multiple documents.
I have been looking into several techniques, including vectorization, parallel processing, using C++ code within R, and others. Moving forward I will experiment with all of these techniques and try to speed up my process, as well as giving me these tools for future projects. In the meantime (without experimentation) I'm wondering which adjustments are obvious and will significantly decrease the time taken, e.g. moving parts of the code outside the loop, using better packages, etc.
I also have a progress bar, but I can remove it if it's slowing down my loop significantly.
Here is my code:
words <- read.csv("keyphrases.csv")
df <- data.frame(x = (list.files("sec/new/")))
total <- length(df$x)
pb <- txtProgressBar(title = "Progress Bar", min = 0, max = total, width = 300, style = 3)
for (i in df$x) {
  s <- read.csv(paste0("sec/new/", i))
  u <- do.call(rbind, pblapply(words$words, function(x) {
    t <- data.frame(ref = s[, 2], words = stri_extract(s[, 3], coll = x))
    t <- na.omit(t)
  }))
  write.csv(u, paste0("sec/new_results/new/", i), row.names = F)
  setTxtProgressBar(pb, i, title = paste(round(which(df$x == i) / total * 100, 2), "% done"))
}
So words has 60,000 rows of words/short phrases - no more than 30 characters each. Length i is around 4000 where each i has between 100 and 5000 rows with each row having between 1 and 5000 characters. Any random characters/strings can be used if my question needs to be reproducible.
I only used lapply because combining it with rbind and do.call worked really well, having a loop within a loop may be slowing down the process significantly too.
So off the bat, there are some things I can do, right? Swapping data.frame for data.table, or using vectors instead. Do the reading and writing outside the loop somehow? Perhaps write it such that one of the loops isn't nested?
Thanks in advance
EDIT
The key element that needs speeding up is the extract. Whether I use lapply above or cut it down to:
for(x in words$words){t<-data.table(words=stri_extract(s[,3], coll=x))}
This still takes the most time by a long way. skills and t are data.tables in this case.
EDIT2
Attempting to create reproducible data:
set.seed(42)
words <- data.frame(words=rnorm(1:60000))
words$wwords <- as.String(words$words)
set.seed(42)
file1 <- data.frame(x=rnorm(1:5000))
file1$x<-as.String(file1$x)
pblapply(words$words, function(x){
t <- data.frame(words = stri_extract(file1$x, coll=x))
})
First things first. Yes, I would definitely switch from data.frame to data.table. Not only is it faster and easier to use, but when you start merging data sets, data.table will do reasonable things where data.frame will give you unexpected and unintended results.
Secondly, is there an advantage to using R to take care of your separators? You mentioned a number of different techniques you are considering using. If separators are just noise for the purposes of your analysis, why not split the work into two tools and use a tool that is much better at handling separators and continuation lines and so on? For me, Python is a natural choice to do things like parsing a bunch of text into keywords--including stripping off separators and other "noise" words you do not care about in your analysis. Feed the results of the Python parsing into R, and use R for its strengths.
There are a few different ways to get the output of Python into R. I would suggest starting off with something simple: CSV files. They are what you are starting with, they are easy to read and write in Python and easy to read in R. Later you can deal with a direct pipe between Python and R, but it does not give you much advantage until you have a working prototype and it is a lot more work at first. Make Python read in your raw data and turn out a CSV file that R can drop straight into a data.table without further processing.
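The R side of that hand-off is then very small; a minimal sketch, assuming the Python step has written a file named keywords.csv (a hypothetical name) with one row per document/keyword hit:
library(data.table)
# keywords.csv is a hypothetical output of the Python parsing step
hits <- fread("keywords.csv")
str(hits)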
As for stri_extract, it is really not the tool you need this time. You certainly can match on a bunch of different words, but it is not really what it is optimized for. I agree with @Chris that using merge() on data.tables is a much more efficient (and faster) way to search for a number of key words.
Single Word Version
When you have single words in each lookup, this is easily accomplished with merging:
library(data.table)

# Word List
set.seed(42)
WordList <- data.table(ID = 1:60000, words = sapply(1:60000, function(x) paste(sample(letters, 5), collapse = '')))

# A list of dictionaries
set.seed(42)
Dicts <- list(
  Dict1 = sapply(1:15000, function(x) {
    paste(sample(letters, 5), collapse = '')
  }),
  Dict2 = sapply(1:15000, function(x) {
    paste(sample(letters, 5), collapse = '')
  }),
  Dict3 = sapply(1:15000, function(x) {
    paste(sample(letters, 5), collapse = '')
  })
)

# Create Dictionary Data.table and add Identifier
Dicts <- rbindlist(lapply(Dicts, function(x) {data.table(ref = x)}), use.names = T, idcol = T)

# set key for joining
setkey(WordList, "words")
setkey(Dicts, "ref")
Now we have a data.table with all dictionary words, and a data.table with all words in our word list. Now we can just merge:
merge(WordList, Dicts, by.x = "words", by.y = "ref", all.x = T, allow.cartesian = T)
words ID .id
1: abcli 30174 Dict3
2: abcrg 26210 Dict2
3: abcsj 8487 Dict1
4: abczg 24311 Dict2
5: abdgl 1326 Dict1
---
60260: zyxeb 52194 NA
60261: zyxfg 57359 NA
60262: zyxjw 19337 Dict2
60263: zyxoq 5771 Dict1
60264: zyxqa 24544 Dict2
So we can see abcli appears in Dict3, while zyxeb does not appear in any of the dictionaries. There look to be 264 duplicates (words that appear in >1 dictionary), as the resultant data.table is larger than our word list (60264 > 60000). This is shown as follows:
merge(WordList, Dicts, by.x = "words", by.y = "ref", all.x = T, allow.cartesian = T)[words == "ahlpk"]
words ID .id
1: ahlpk 7344 Dict1
2: ahlpk 7344 Dict2
3: ahlpk 28487 Dict1
4: ahlpk 28487 Dict2
We also see here that duplicated words in our word list are going to create multiple resultant rows.
This is very very quick to run
Phrases + Sentences
In the case where you are searching for phrases within sentences, you will need to perform a string match instead. However, you will still need to make n(Phrases) * n(Sentences) comparisons, which will quickly hit memory limits in most R data structures. Fortunately, this is an embarrassingly parallel operation:
Same setup:
library(data.table)
library(foreach)
library(doParallel)

# Sentence List
set.seed(42)
Sentences <- data.table(ID = 1:60000, Sentence = sapply(1:60000, function(x) paste(sample(letters, 10), collapse = '')))

# A list of phrases
set.seed(42)
Phrases <- list(
  Phrases1 = sapply(1:15000, function(x) {
    paste(sample(letters, 5), collapse = '')
  }),
  Phrases2 = sapply(1:15000, function(x) {
    paste(sample(letters, 5), collapse = '')
  }),
  Phrases3 = sapply(1:15000, function(x) {
    paste(sample(letters, 5), collapse = '')
  })
)

# Create Phrase Data.table and add Identifier
Phrases <- rbindlist(lapply(Phrases, function(x) {data.table(Phrase = x)}), use.names = T, idcol = T)

# Full Outer Join
Sentences[, JA := 1]
Phrases[, JA := 1]

# set key for joining
setkey(Sentences, "JA")
setkey(Phrases, "JA")
We now want to break up our Phrases table into manageable batches
cl <- makeCluster(4)
registerDoParallel(cl)

nPhrases <- as.numeric(nrow(Phrases))
nSentences <- as.numeric(nrow(Sentences))

# Max data.table allocation is 2^31. Lower this if you are hitting memory allocation limits
batch_size <- ceiling(nPhrases * nSentences / 2^30)
seq_s <- seq(1, nrow(Phrases), by = floor(nrow(Phrases) / batch_size))
ln_s <- length(seq_s)
if (ln_s > 1) {
  str_seq <- paste0(seq_s, ":", c(seq_s[2:ln_s], nrow(Phrases) + 1) - 1)
} else {
  str_seq <- paste0(seq_s, ":", nrow(Phrases))
}
We are now ready to send our job out. The grepl line below is doing the work - testing which phrases match each sentence. We then filter out any non-matches.
ls <- foreach(i = 1:ln_s) %dopar% {
  library(data.table)
  # subset the i-th batch of phrases via str_seq[i]
  TEMP_DT <- merge(Sentences, Phrases[eval(parse(text = str_seq[i]))], by = "JA", allow.cartesian = T)
  TEMP_DT <- TEMP_DT[, match_test := grepl(Phrase, Sentence), by = .(Phrase, Sentence)][match_test == 1]
  return(TEMP_DT)
}
stopCluster(cl)
DT_OUT <- unique(do.call(rbind,ls))
DT_OUT now summarizes the sentences that match, along with the Phrase + Phrase list that it is found in.
This will still take some time (as there is a lot of processing necessary), but nowhere near a year.

Flexibly creating logical statements from Vectors with text elements

I'm probably going about this in a silly way, but bear with me. I'm using selectizeInput in shiny so users can select multiple categories. Those selections are used to subset a dataframe. After trying match, pmatch, and %in% to look for patterns in text, I decided to use grep. It's able to find the proper row when a project has multiple categories. However, the pattern parameter can't have a length greater than 1. The workaround? Add | (the or operator) between elements in the pattern. I need help coming up with a process for inserting | between elements of a vector so I can use grep to subset dataframes when length(input$Category) > 1.
Example
df <- data.frame(title = 1:5, category = c("ab", "bcd", "efg","ab,bcd","efg"))
selected category
cate <- c("bcd")
df[grep(cate,x = df$category),]
Works great!
But, if someone were to select more than one category, grep only uses the first element in the pattern:
cate <- c("bcd","efg")
df[grep(cate,x = df$category),]
failure
UNLESS we add an | between the categories selected
cate <- c("bcd|efg")
df[grep(cate,x = df$category),]
success
I cannot figure out how to programmatically add the | between elements of cate without making a giant mess with if statements.
if (length(cate) == 1) {
  df[grep(cate, x = df$category), ]
} else {
  if (length(cate) == 2) {
    cate2 <- paste(cate[[1]], "|", cate[[2]], sep = "")
    df[grep(cate2, x = df$category), ]
  } else {...
There must be some way to generate the pattern:
paste(cate[[1]],"|",cate[[2]],...,"|",cate[[n]], sep = "")
I'm not 100% sure on what you're doing and thus can't point you to the "better" solution, but for your purposes I'm pretty sure something along the lines of this will do:
cate <- character()
cate[1] <- c("ab")
cate[2] <- c("efg")
cate[3] <- c("ab")
do.call(what = paste, c(list(cate), collapse = "|"))
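Since paste() already accepts a vector, the same pattern can also be built directly with collapse and fed straight to grep, for example:
cate <- c("bcd", "efg")
pattern <- paste(cate, collapse = "|")   # "bcd|efg"
df[grep(pattern, x = df$category), ]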

Using paste to create logical expression for data frame subset

I have two dataframes, remove and dat (the actual dataframe). remove specifies various combinations of the factor variables found in dat, and how many to sample (remove$cases).
Reproducible example:
set.seed(83)
dat <- data.frame(RateeGender = sample(c("Male", "Female"), size = 1500, replace = TRUE),
                  RateeAgeGroup = sample(c("18-39", "40-49", "50+"), size = 1500, replace = TRUE),
                  Relationship = sample(c("Direct", "Manager", "Work Peer", "Friend/Family"), size = 1500, replace = TRUE),
                  X = rnorm(n = 1500, mean = 0, sd = 1),
                  y = rnorm(n = 1500, mean = 0, sd = 1),
                  z = rnorm(n = 1500, mean = 0, sd = 1))
What I am trying to accomplish is to read in a row from remove and use it to subset dat. My current approach looks like:
remove <- expand.grid(RateeGender = c("Male", "Female"),
                      RateeAgeGroup = c("18-39", "40-49", "50+"),
                      Relationship = c("Direct", "Manager", "Work Peer", "Friend/Family"))
remove$cases <- c(36, 34, 72, 58, 47, 38, 18, 18, 15, 22, 17, 10, 24, 28, 11, 27, 15, 25, 72, 70, 52, 43, 21, 27)
# For each row of remove (combination of factor levels):
for (i in 1:nrow(remove)) {
  selection <- character()
  # For each column of remove (particular selection):
  for (j in 1:(ncol(remove) - 1)) {
    add <- paste0("dat$", names(remove)[j], ' == "', remove[i, j], '" & ')
    selection <- paste0(selection, add)
  }
  selection <- sub(' & $', '', selection)  # Remove trailing ampersand
  cat(selection, sep = "\n")               # What does selection string look like?
  tmp <- sample(dat[selection, ], size = remove$cases[i], replace = TRUE)
}
The output from cat() while the loop runs looks right, for example: dat$RateeGender == "Male" & dat$RateeAgeGroup == "18-39" & dat$Relationship == "Direct", and if I paste that into dat[dat$RateeGender == "Male" & dat$RateeAgeGroup == "18-39" & dat$Relationship == "Direct", ], I get the right subset.
However, if I run the loop as written with dat[selection, ], each subset only returns NAs. I get the same outcome if I use subset(). Note, I have replace = TRUE in the above solely because of the random sampling. In the actual application, there will always be more cases per combination than required.
I know I can dynamically construct formulas for lm() and other functions using paste() in this way, but am obviously missing something in translating this into working with [,].
Any advice would be really appreciated!
You cannot use character expressions as you describe to subset either with [ or subset. If you wanted to do that you would have to construct the entire expression, and then use eval. That said, there is a better solution using merge. For example, let's get all the entries in dat that match the first two rows from remove:
merge(dat, remove[1:2,])
If we want all the rows that don't match those two, then:
subset(merge(dat, remove[1:2,], all.x=TRUE), is.na(cases))
This is assuming you want to join on the columns with the same names across the two tables. If you have a lot of data you should consider using data.table as it is very fast for this type of operation.
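For completeness, a rough sketch of the eval route mentioned above (the merge approach is still the better option), reusing the selection string the question's loop builds:
# selection is e.g. 'dat$RateeGender == "Male" & dat$RateeAgeGroup == "18-39" & dat$Relationship == "Direct"'
matched <- dat[eval(parse(text = selection)), ]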
I upvoted BrodieG's answer before I realized it doesn't do what you wanted in situations where the size of the category is smaller than the number of samples desired. (In fact his method doesn't really do sampling at all, but I think it is an elegant solution to a different question, so I'm not reversing my vote. And you could use a similar split strategy, as illustrated below, with that data.frame as the input.)
sub <- lapply(split(dat, with(dat, paste(RateeGender,   # split vector
                                         RateeAgeGroup,
                                         Relationship, sep = "_"))),
              function(d) {
                n <- with(remove, remove[RateeGender == d$RateeGender[1] &
                                           RateeAgeGroup == d$RateeAgeGroup[1] &
                                           Relationship == d$Relationship[1],
                                         "cases"])
                cat(n)
                # sample n rows from this subgroup (sample() on a data frame would sample columns)
                d[sample(nrow(d), n, replace = TRUE), ]
              })
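To get a single data frame back from that list of per-group samples, the pieces can then be bound together:
sampled <- do.call(rbind, sub)
rownames(sampled) <- NULL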
