Smarter way to include Row Names into multiple formulas - r

I have multiple data sets to call on, and present in a few places. The code works fine, but it's a laborious task to copy/paste each variable and mutation to get the result. There must be an easier way to incorporate row names + variable to create some sort of loop!
I am simply copy/pasting the same variables and changing the name to the next row.
row1Mapped <- sum(cnx$row1Connect =="Mapped", na.rm = TRUE)
row2Mapped <- sum(cnx$row2Connect =="Mapped", na.rm = TRUE)
row3Mapped <- sum(cnx$row3Connect =="Mapped", na.rm = TRUE)
row4Mapped <- sum(cnx$row4Connect =="Mapped", na.rm = TRUE)
main <- main %>%
  mutate(Mapped = ifelse(Bank == "row1", row1Mapped,
                  ifelse(Bank == "row2", row2Mapped,
                  ifelse(Bank == "row3", row3Mapped, NA))))
Everything works; I would, however, like to be more efficient!
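For reference, here is a minimal sketch of one way to avoid the copy/paste (an illustration, not from the original post), assuming cnx has columns row1Connect through row4Connect and main has a Bank column holding "row1", "row2", etc.:
library(dplyr)

rows <- paste0("row", 1:4)                      # "row1" ... "row4"
mapped_counts <- sapply(rows, function(r) {
  sum(cnx[[paste0(r, "Connect")]] == "Mapped", na.rm = TRUE)
})                                              # named vector of counts, one per row

main <- main %>%
  mutate(Mapped = unname(mapped_counts[as.character(Bank)]))  # NA for any Bank not in rows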

Related

How to run a function (many times) that changes a variable (tibble) in the global env

I'm a newbie in R, so please have some patience and... tips are most welcome.
My goal is to create a tibble that holds a "Full Name" (of a person, who may have 2 to 4 names) and his/her gender. I must start from a tibble that contains typical Male and Female names.
Below I present a minimum working example.
My problem: I can call get_name() multiple times (in a 10,000-iteration for loop!) and get the right answer. But I was looking for a more 'elegant' way of doing it. replicate() unfortunately returns a vector... which makes it unusable.
My doubts: I know I have some (very few... right!!) issues, like the if statement, which is evaluated every time (which is redundant), but I can't find another way to do it. Any suggestions?
Any other suggestions about code structure are also welcome.
Thank you very much in advance for your help.
# Dummy name list
unit_names <- tribble(
~Women, ~Man,
"fem1", "male1",
"fem2", "male2",
"fem3", "male3",
"fem4", "male4",
"fem5", "male5",
"fem6", NA,
"fem7", NA
)
set.seed(12345) # seed for test
# Create a tibble with the full names
full_name <- tibble("Full Name" = character(), "Gender" = character() )
get_name <- function() {
  # Get the Number of 'Unit-names' to compose a 'Full-name'
  nbr_names <- sample(2:4, 1, replace = TRUE)
  # Randomize the Gender
  gender <- sample(c("Women", "Man"), 1, replace = TRUE)
  if (gender == "Women") {
    lim_names <- sum(!is.na(unit_names$"Women"))
  } else {
    lim_names <- sum(!is.na(unit_names$"Man"))
  }
  # Sample the Fem/Man List names (may have duplicate)
  sample(unlist(unit_names[1:lim_names, gender]), nbr_names, replace = TRUE) %>%
    # Form a Full-name
    paste(., collapse = " ") %>%
    # Add it to the tibble (INCLUDE the Gender)
    add_row(full_name, "Full Name" = ., "Gender" = gender)
}
# How can I make 10k of this?
full_name <- get_name()
If you pass a number larger than 1 to sample, this problem becomes easier to vectorise.
One thing that currently makes your problem much harder is the layout of your unit_names table: you are effectively treating male and female names as individually paired, but they clearly aren’t: hence they shouldn’t be in columns of the same table. Use a list of two vectors, for instance:
unit_names = list(
Women = c("fem1", "fem2", "fem3", "fem4", "fem5", "fem6", "fem7"),
Men = c("male1", "male2", "male3", "male4", "male5")
)
Then you can generate random names to your heart’s delight:
generate_names = function (n, unit_names) {
  name_length = sample(2 : 4, n, replace = TRUE)
  genders = sample(c('Women', 'Men'), n, replace = TRUE)
  names = Map(sample, unit_names[genders], name_length, replace = TRUE) %>%
    lapply(paste, collapse = ' ') %>%
    unlist()
  tibble(`Full name` = names, Gender = genders)
}
A note on style: unlike your function, the above doesn't use any global variables. Furthermore, don't "quote" variable names (you do this in unit_names$"Women" and for the arguments of add_row). R allows this, but it's arguably a mistake in the language specification: these are not strings, they're variable names, and making them look like strings is misleading. You don't quote your other variable names, after all. You do need to backtick-quote the `Full name` column name, since it contains a space. However, the use of backticks, rather than quotes, signifies that this is a variable name.
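To make that concrete, a tiny illustration (mine, not from the original answer; assumes the tibble package is available):
library(tibble)
unit_names <- list(Women = c("fem1", "fem2"))

unit_names$Women        # unquoted: idiomatic, this is a variable name, not a string
unit_names$"Women"      # quoted: allowed by R, but misleadingly looks like a string
tbl <- tibble(`Full name` = "fem1 fem2")   # backticks only because the name contains a space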
I am not 100% sure of what you are trying to get, but if I got it right... did you try mutate from dplyr? For example:
result = mutate(data.frame,
                concated_column = paste(column1, column2, column3, column4, sep = '_'))
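For instance, with made-up data (df and its columns below are placeholders of mine, not from the question):
library(dplyr)

df <- data.frame(first = c("fem1", "male1"), last = c("fem2", "male2"))  # made-up example data
result <- mutate(df, concated_column = paste(first, last, sep = '_'))
result   # concated_column is "fem1_fem2", "male1_male2"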
With a LITTLE help from Konrad Rudolph, here is the elegant (and vectorized... and fast) solution I was looking for. map2 does the necessary trick.
Here is the full working example if someone needs it:
(Just a side note: I kept the initial conversion from tibble to list because the data arrives to me as a tibble...)
Once again thanks to Konrad.
library(tidyverse)  # for tribble/tibble, map2 (purrr) and %>%

# Dummy name list
unit_names <- tribble(
~Women, ~Men,
"fem1", "male1",
"fem2", "male2",
"fem3", "male3",
"fem4", "male4",
"fem5", "male5",
"fem6", NA,
"fem7", NA
)
name_list <- list(
Women = unit_names$Women[!is.na(unit_names$Women)],
Men = unit_names$Men[!is.na(unit_names$Men)]
)
generate_names = function (n, name_list) {
  name_length = sample(2 : 4, n, replace = TRUE)
  genders = sample(c('Women', 'Men'), n, replace = TRUE)
  #names = lapply(name_list[genders], sample, name_length) %>%
  names = map2(name_list[genders], name_length, sample) %>%
    lapply(paste, collapse = ' ') %>%
    unlist()
  tibble(`Full name` = names, Gender = genders)
}
full_name <- generate_names(10000, name_list)

if/else grepl "argument is of length zero"

I want to perform a set of operations (in R) on a number of data frames located within a list. In particular, for each one I create a "library" column, which is then used to determine which kind of filtering operation to perform. This is the actual code:
sampleList <- list(RNA1 = "data/not_processed/dedup.Bp1R4T2_S2.txt",
RNA2 = "data/not_processed/dedup.Bp1R4T3_S4.txt",
RNA3 = "data/not_processed/dedup.Bp1R5T2_S1.txt",
RNA4 = "data/not_processed/dedup.Bp1R5T3_S2.txt",
RNA5 = "data/not_processed/dedup.Bp1R14T5_S1.txt",
RNA6 = "data/not_processed/dedup.Bp1R14T6_S1.txt",
RNA7 = "data/not_processed/dedup.Bp1R14T6_S2.txt",
RNA8 = "data/not_processed/dedup.Bp1R14T7_S2.txt",
RNA9 = "data/not_processed/dedup.Bp1R14T8_S3.txt",
RNA10 = "data/not_processed/dedup.Bp1R14T9_S3.txt",
RNA11 = "data/not_processed/dedup.Bp1R14T9_S4.txt",
DNA1 = "data/not_processed/dedup.dna10_1_S4.txt",
DNA2 = "data/not_processed/dedup.dna10_2_S5.txt",
DNA3 = "data/not_processed/dedup.dna10_3_S6.txt",
DNA4 = "data/not_processed/dedup.dna50_1_S1.txt",
DNA5 = "data/not_processed/dedup.dna50_2_S2.txt",
DNA6 = "data/not_processed/dedup.dna50_3_S3.txt",
DNA7 = "data/not_processed/dedup.dna50_pcrcocktail_S7.txt")
batch <- lapply(names(sampleList), function(mysample){
  aux <- read.table(sampleList[[mysample]], col.names=c(column1, column2, ..., ID, library, column4, etc...))
  aux %>% mutate(library = mysample, R = Fw_ref + Rv_ref, A = Fw_alt + Rv_alt) %>% distinct(ID, .keep_all=T)
  if (grepl("DNA", aux$library)){
    aux %>% filter(aux$R>1 & aux$A>1)
  } else {
    aux %>% filter((aux$R+aux$A)>7 & aux$Fw_ref>=1 & aux$Rv_ref>=1 & aux$Fw_alt>=1 & aux$Rv_alt>=1)
  }
  aux
})
batch_file <- do.call(rbind, batch)
write.table(batch_file, "data/batch_file.txt", col.names = T, sep = "\t")
The possible values of the library column are DNA1 to DNA7 and RNA1 to RNA11. I also tried with "char" %in%, but it gives the same problem:
Error in if (grepl("DNA", aux$library)) { : argument is of length zero
It seems like the if condition is not able to identify the value in library. However, when I tried to apply the if/else condition to the batch_file (not filtered, basically obtained with this code without the if/else part), it worked perfectly.
Many thanks in advance.
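For what it's worth, a hedged reading of the error and a sketch of one possible fix (not from the original post): the piped mutate() result is never assigned back to aux, so aux$library does not exist; grepl() on a missing column returns a zero-length logical, and if() then fails with "argument is of length zero". Assigning the result back and testing the loop variable mysample (always length one) would avoid both problems. my_cols below is a placeholder for the column-name vector elided in the question.
batch <- lapply(names(sampleList), function(mysample){
  aux <- read.table(sampleList[[mysample]], col.names = my_cols)   # my_cols = the column names from the question (placeholder)
  aux <- aux %>%                                                   # assign the mutated result back
    mutate(library = mysample, R = Fw_ref + Rv_ref, A = Fw_alt + Rv_alt) %>%
    distinct(ID, .keep_all = TRUE)
  if (grepl("DNA", mysample)) {                                    # length-one test on the sample name
    aux <- aux %>% filter(R > 1 & A > 1)
  } else {
    aux <- aux %>% filter((R + A) > 7 & Fw_ref >= 1 & Rv_ref >= 1 & Fw_alt >= 1 & Rv_alt >= 1)
  }
  aux
})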

Formatting multiple files at once in R

I'm quite new to R, so I hope this question will still be interesting. I created a for loop which produced 11 csv files. Here's the code I used for that, in case it helps clarify the question:
for (i in seq(0, 1, by = 0.1)) {
  collar$results2 <- mutate(collar,
                            results2 = case_when((probability > i & results1 == "POSITIVE") |
                                                 (probability < i & results1 == "NEGATIVE") ~ TRUE,
                                                 TRUE ~ FALSE))
  as.character(collar$results2)
  collaraccuracy1 = paste('collar41361_41365', i, 'csv', sep = '.')
  write.csv(collar, collaraccuracy1)
}
As you can see, all files created have the format collar41361_41365.i.csv, with i ranging from 0 to 1 in steps of 0.1, like so:
[1] "collar41361_41365.0.csv"
[1] "collar41361_41365.0.1.csv"
[1] "collar41361_41365.0.2.csv"
[1] "collar41361_41365.0.3.csv"
[1] "collar41361_41365.0.4.csv"
[1] "collar41361_41365.0.5.csv"
[1] "collar41361_41365.0.6.csv"
[1] "collar41361_41365.0.7.csv"
[1] "collar41361_41365.0.8.csv"
[1] "collar41361_41365.0.9.csv"
[1] "collar41361_41365.1.csv"
Now, I'd like to format all the files at once, since they have the same structure (10 columns, 240 rows and the same column headers) and the same name format.
See below the code with the actions I've been trying to take on these 11 files. I've used Sys.glob, as this was mentioned to be the best way to perform the task in another post. I've previously coded these actions for a single file and it worked. I now want to apply the code to all 11 files at once:
#1) Reading multiple files at once. Now, this will only work for the files with a decimal value of i in their name -which is fine-. If I were reading files with i=0 or i=1, then we'd have the pattern "collar41361_41365.*.csv". Am I right?
collaraccuracy<-lapply(Sys.glob("collar41361_41365.***.csv"), read.csv)
#2) Select only the columns with header "observed","predicted","probability","results1","results2.results2"
collaraccuracy<-fread("collar41361_41365.***.csv",select=c("observed","predicted","probability","results1","results2.results2"),stringsAsFactors = F)
#3) Rename column "results2.results2" to "results2"
colnames(collaraccuracy)<-c("observed","predicted","probability","results1","results2")
#4) Create 6th column "results" by merging columns "results1" and "results2"
collaraccuracy$results <- paste(collaraccuracy$results2,
collaraccuracy$results1,sep="_")
#5) End of the formatting. Write new formated csv files with the pattern "collar41361_by_41365.i.csv"
collaraccuracy2=paste('collar41361_by_41365', i, 'csv', sep = '.')
write.csv(collaraccuracy,collaraccuracy2)
As you can see, I have 5 different actions to compute, and I should eventually end up with 9 files, for i values of (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9).
I'm especially concerned with the syntax in actions 1) and 2), but this is the best I can do so far.
Any tips to formulate this? Any help is appreciated!
P.S. Update: I've tried to create a function and apply it to the rest of the files with lapply:
collarcolumns <- function(collaraccuracy1) {
  collaraccuracy1 <- fread(("collar41361_41365.1.csv"), select=c("observed","predicted","probability","results1","results2.results2"), stringsAsFactors = F)
  colnames(collaraccuracy1) <- c("observed","predicted","probability","results1","results2")
  collaraccuracy1$results <- paste(collaraccuracy1$results2, collaraccuracy1$results1, sep="_")
  collaraccuracy2 = paste('collar41361_by_41365', i, 'csv', sep = '.')
  write.csv(collaraccuracy1, collaraccuracy2)
}
lapply(Sys.glob("collar41361_41365.*.csv"), collarcolumns)
And got 11 "NULL"s printed by R.. Was I on the right track?
Taking a step back, it sounds like you want to do the following for each i:
Add a column results2 that checks whether the predicted value matches the observed value with probability i.
Add a column results that concatenates results1 and results2.
The reason you're seeing strange column names like results2.results2 is that the original for loop is redundant; you don't need both an assignment statement (collar$results2 <- ...) and mutate. We can strip the whole thing down to one loop, like this:
for(i in seq(0, 1, by = 0.1)) {
  collar.temp = collar %>%
    mutate(results2 = case_when((probability > i & results1 == "POSITIVE") |
                                (probability < i & results1 == "NEGATIVE") ~ T,
                                T ~ F)) %>%
    mutate(results = paste(results1, results2, sep = "_"))
  collaraccuracy1 = paste('collar41361_41365', i, 'csv', sep = '.')
  write.csv(collar.temp, collaraccuracy1)
}
Taking yet a further step back, are you sure you want 11 separate tables? It looks to me like you're effectively checking how accurate the predictions are at various "confidence" cutoffs. One way to put the data into a tidy format would be like this, where cutoff is its own column:
collar.tidy = do.call(
  "bind_rows",
  lapply(
    seq(0, 1, by = 0.1),
    function(x) {
      collar %>%
        mutate(cutoff = x,
               results2 = case_when((probability > x & results1 == "POSITIVE") |
                                    (probability < x & results1 == "NEGATIVE") ~ T,
                                    T ~ F)) %>%
        mutate(results = paste(results1, results2, sep = "_"))
    }
  )
)
See here for a great introduction to tidy data. You may think of other ways to tidy up this dataset; for example, it's not clear to me whether the results column that concatenates two other columns is strictly necessary.
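As a possible follow-up (an assumption about the intent, not part of the original answer), the tidy layout makes a per-cutoff summary a one-liner:
library(dplyr)

collar.accuracy <- collar.tidy %>%
  group_by(cutoff) %>%
  summarise(accuracy = mean(results2))   # proportion of rows flagged TRUE at each cutoff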

Find and replace with sample from a list

I have a dataframe called 'out' with the following strings in it.
out<-data.frame(c("Normal","Normal","Abnormal","Normal","Abnormal","Abnormal","Normal","Abnormal"))
I want to replace the "Normal" with a string sampled from a list as follows
mychoices<-(x="Really bad",x="so so", x="Actually OK")
I have tried:
str_replace_all(out[,1],"Normal", as.character(sample(mychoices,1,replace=F)))
but it only replaces with one element of the list throughout. I tried wrapping it in a function as well:
out2 <- apply(out, 1, function(x) {
  if (stringr::str_detect(x, "Normal")) {
    return(str_replace_all(out[,1], "Normal", as.character(sample(mychoices, 1, replace=F))))
  }
})
But it returns lists within a dataframe.
This is one way of doing what I think you want. I changed your data structure a little to make it easier to work with (gave the column a name, and set stringsAsFactors = FALSE)
out <- data.frame(abornorm = c("Normal","Normal","Abnormal","Normal","Abnormal","Abnormal","Normal","Abnormal"), stringsAsFactors = FALSE)
out$abornorm[out$abornorm == "Normal"] <- sample(c("Really bad", "so so", "Actually OK"), sum(out$abornorm == "Normal", na.rm = TRUE), replace = TRUE)
This takes advantage of the ability to assign to a set of indices of a vector, provided your source and target are of the same length.

R + match values at scale (using apply?)

Is there a way to make matching values at scale more programmatic? Basically, what I want to do is add a bunch of columns for value lookups onto a dataframe, but I don't want to write the match() lookup every time. It seems like this would be a use case for mapply, but I can't quite figure out how to use it here. Any suggestions?
Here's the data:
data <- data.frame(
region = sample(c("northeast","midwest","west"), 50, replace = T),
climate = sample(c("dry","cold","arid"), 50, replace = T),
industry = sample(c("tech","energy","manuf"), 50, replace = T))
And the corresponding lookup tables:
lookups <- data.frame(
orig_val = c("northeast","midwest","west","dry","cold","arid","tech","energy","manuf"),
look_val = c("dir1","dir2","dir3","temp1","temp2","temp3","job1","job2","job3")
)
So now what I want to do is: first, add a column to "data" called "reg_lookups" that matches the region to its appropriate value in "lookups". Then do the same for "climate_lookups" and so on.
Right now, I've got this mess:
data$reg_lookup <- lookups$look_val[match(data$region, lookups$orig_val)]
data$clim_lookup <- lookups$look_val[match(data$climate, lookups$orig_val)]
data$indus_lookup <- lookups$look_val[match(data$industry, lookups$orig_val)]
I've tried using a function to do this, but the function doesn't seem to work, so then applying that to mapply is a no-go (plus I'm confused about how the mapply syntax would work here):
match_fun <- function(df, newval, df_look, lookup_val, var, ref_val) {
  df$newval <- df_look$lookup_val[match(df$var, df_look$ref_val)]
  return(df)
}
data2 <- match_fun(data, reg_2, lookups, look_val, region, orig_val)
I think you're just trying to do this:
data <- merge(data,lookups[1:3,],by.x = "region",by.y = "orig_val",all.x = TRUE)
data <- merge(data,lookups[4:6,],by.x = "climate",by.y = "orig_val",all.x = TRUE)
data <- merge(data,lookups[7:9,],by.x = "industry",by.y = "orig_val",all.x = TRUE)
But it would be much better to store the lookups in separate data frames. That way you can control the names of the new columns more easily. It would also allow you to do something like this:
lookups1 <- split(lookups,rep(1:3,each = 3))
colnames(lookups1[[1]]) <- c('region','reg_lookup')
colnames(lookups1[[2]]) <- c('climate','clim_lookup')
colnames(lookups1[[3]]) <- c('industry','indus_lookup')
do.call(cbind, mapply(merge,
                      x = list(data[,1,drop = FALSE], data[,2,drop = FALSE], data[,3,drop = FALSE]),
                      y = lookups1,
                      MoreArgs = list(all.x = TRUE),
                      SIMPLIFY = FALSE))
and you should be able to wrap that do.call bit in a function.
I used data[,1,drop = FALSE] in order to preserve them as one-column data frames.
The way you structure mapply calls is to pass the named arguments as lists (the x = and y = parts). I wanted to be sure to preserve all the rows from data, so I passed all.x = TRUE via MoreArgs, so that it gets passed each time merge is called. Finally, I need to stitch them all together myself, so I turned off SIMPLIFY.
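And for completeness, a minimal sketch of the more programmatic match() version the question asked about (illustrative only; the new_cols mapping below is made up here, not from the post):
# Map each source column to the name of the new lookup column, then loop once.
new_cols <- c(region = "reg_lookup", climate = "clim_lookup", industry = "indus_lookup")

for (col in names(new_cols)) {
  data[[new_cols[[col]]]] <- lookups$look_val[match(data[[col]], lookups$orig_val)]
}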
