For loop to replace NA values in R

I want to write a for loop in R that replaces the NA values in one column of my dataframe with the mean of that column, computed only from observations that share the same year and the same group.
I wrote the following code, but I am struggling to write the conditions.
missing <- which(is.na(df$price))
for (i in 1:36){
  x <- df[missing, ]$group
  y <- df[missing, ]$year
  selection <- df[conditions??, ]$price
  df[missing, ]$price <- mean(selection, na.rm = TRUE)
}
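A minimal sketch of how the conditions could be written (assuming the columns are named price, group and year as in the question): for every missing row, average the prices of the rows that share its group and year.
missing <- which(is.na(df$price))
for (i in missing) {
  same <- df$group == df$group[i] & df$year == df$year[i]   # rows in the same group and year
  df$price[i] <- mean(df$price[same], na.rm = TRUE)
}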

You don't need a for loop: you can replace all the NAs at once, using mean(, na.rm = TRUE) to compute the mean of the column while ignoring the NAs. For the general (ungrouped) case:
df[is.na(df$price),]$price <- mean(df$price, na.rm = TRUE)
Using tidyverse you can achieve what you want:
library(tidyverse)
df %>%
  group_by(group, year) %>%
  mutate(price = ifelse(is.na(price), mean(price, na.rm = TRUE), price))
Using data.table
dt <- data.table(df)
dt[,price:=fifelse(is.na(price), mean(price, na.rm=T), price), by=.(group,year)][]

A base R solution using by, which splits a data frame by the groups listed in its second argument and applies the function given as its third:
result <- by(df,
             list(df[["group"]], df[["year"]]),
             function(x) {
               x[is.na(x$price), "price"] <- mean(x[["price"]], na.rm = TRUE)
               x
             },
             simplify = TRUE)
do.call(rbind, result)

Related

Calculate z-score across multiple dataframes in R

I have ten dataframes with equal number of rows and columns. They look like this:
df1 <- data.frame(geneID = c("AKT1", "AKT2", "AKT3", "ALK", "APC"),
                  CDKN2A = c(3490, 9447, 4368, 908, 204),
                  INPP4B = c(NA, 9459, 4395, 1030, NA),
                  BCL2 = c(NA, 9480, 4441, 1209, NA),
                  IRS2 = c(NA, NA, 4639, 1807, NA),
                  HRAS = c(3887, 9600, 4691, 1936, 1723))
df2 <- data.frame(geneID = c("AKT1", "AKT2", "AKT3", "ALK", "APC"),
                  CDKN2A = c(10892, 17829, 7156, 1325, 387),
                  INPP4B = c(NA, 17840, 7185, 1474, NA),
                  BCL2 = c(NA, 17845, 7196, 1526, NA),
                  IRS2 = c(NA, NA, 12426, 10244, NA),
                  HRAS = c(11152, 17988, 7545, 2734, 2423))
df3 <- data.frame(geneID = c("AKT1", "AKT2", "AKT3", "ALK", "APC"),
                  CDKN2A = c(11376, 17103, 8580, 780, 178),
                  INPP4B = c(NA, 17318, 9001, 2829, NA),
                  BCL2 = c(NA, 17124, 8621, 1141, NA),
                  IRS2 = c(NA, NA, 8658, 1397, NA),
                  HRAS = c(11454, 17155, 8683, 1545, 1345))
I would like to calculate the z-score for each data frame, based on the mean and variance across the multiple dataframes. The z-score should be calculated as: z-score = (x - mean(x)) / sd(x).
I found that the ddply function from plyr can do this job, but that solution was for a single dataframe, while I have multiple dataframes in separate files, each with 18214 rows and 269 columns.
I would appreciate any suggestions.
Thank you very much for your help!
Olha
Here is one option where we bind the datasets together with bind_rows (from dplyr), then group by the grouping column and return the z-score-transformed numeric columns:
library(dplyr)
bind_rows(df1, df2, df3, .id = 'grp') %>%
  group_by(geneID) %>%
  mutate(across(where(is.numeric),
                ~ (. - mean(., na.rm = TRUE)) / sd(., na.rm = TRUE),
                .names = '{col}_zscore'))
NOTE: if we don't need new columns, then remove the .names part.
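As a quick sanity check of the pooled grouping, take one cell of the example data, say gene AKT1 in column CDKN2A; after bind_rows and group_by(geneID), its three values from df1-df3 are scaled together:
x <- c(3490, 10892, 11376)  # AKT1 / CDKN2A in df1, df2, df3
(x - mean(x)) / sd(x)
# roughly -1.15  0.52  0.63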
If we need to do this in a loop, without binding into a single data.frame, we can loop over the list:
library(purrr)
list(df1, df2, df3) %>%  # or build the list automatically with mget(ls(pattern = '^df\\d+$'))
  map(~ .x %>%
        mutate(across(where(is.numeric),
                      ~ (. - mean(., na.rm = TRUE)) / sd(., na.rm = TRUE),
                      .names = '{col}_zscore')))
Here is a base R solution with function scale.
df_list <- list(df1, df2, df3)
df_list2 <- lapply(df_list, function(DF){
  i <- sapply(DF, is.numeric)
  DF[i] <- lapply(DF[i], scale)
  DF
})
S3 methods
Considering that scale is generic and that methods can be written for it, here is a data.frame method, then applied to the same list df_list.
scale.data.frame <- function(x, center = TRUE, scale = TRUE){
  i <- sapply(x, is.numeric)
  x[i] <- lapply(x[i], scale, center = center, scale = scale)
  x
}
df_list3 <- lapply(df_list, scale)
identical(df_list2, df_list3)
#[1] TRUE
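With the method defined, scale() can also be called directly on a single data frame; it dispatches to scale.data.frame and scales only the numeric columns:
scale(df1)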

Loop to fill NA's with means not working properly

I'm trying to fill all the NA's in my fields with the mean of each column.
The code I've been using is:
var1 <- colnames(DF)
for (i in 1:length(var1)) {
  v <- paste0("`", var1[i], "`")
  DF <- DF %>%
    mutate(v = ifelse(is.na(v), mean(v, na.rm = TRUE), v))
}
After running this piece of code, nothing happens to the DF.
I already tried running for an individual column, and the code works:
DF <- DF %>%
  mutate(col1 = ifelse(is.na(col1), mean(col1, na.rm = TRUE), col1))
I'm using the ` in the paste part because some of the columns can have spaces between words and I cannot change this. I have the feeling that this part is where the mistake resides.
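The loop above does not change the existing columns because mutate(v = ...) creates a column literally named v, and the v on the right-hand side evaluates to the character string built by paste0(), not to the column it names. A minimal sketch of a string-based loop using .data[[ ]] and := (one possible fix, assuming a recent dplyr):
library(dplyr)
var1 <- colnames(DF)
for (nm in var1) {
  DF <- DF %>%
    mutate(!!nm := ifelse(is.na(.data[[nm]]),
                          mean(.data[[nm]], na.rm = TRUE),
                          .data[[nm]]))
}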
For multiple columns use mutate_at (for all columns - mutate_all)
DF %>%
  mutate_all(funs(ifelse(is.na(.), mean(., na.rm = TRUE), .)))
It can be made compact with na.aggregate from zoo (it replaces the NAs with the mean of each column; by default, FUN = mean):
library(zoo)
na.aggregate(DF)
If we are using a for loop, then there is no need for a package. Just update the NA elements of each column with the mean of that column:
for(nm in var1) DF[[nm]][is.na(DF[[nm]])] <- mean(DF[[nm]], na.rm = TRUE)
Or with lapply
DF[] <- lapply(DF, function(x) replace(x, is.na(x), mean(x, na.rm = TRUE)))
Or using colMeans
DF[is.na(DF)] <- colMeans(DF, na.rm = TRUE)[col(DF)][is.na(DF)]
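The [col(DF)] indexing expands the vector of column means to the full size of the data frame, so each mean lines up with its own column; a small illustration:
m <- matrix(c(1, NA, 3, NA), nrow = 2)
colMeans(m, na.rm = TRUE)          # 1 3
colMeans(m, na.rm = TRUE)[col(m)]  # 1 1 3 3 (each mean repeated down its column)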
data
set.seed(24)
DF <- as.data.frame(matrix(sample(c(NA, 0:5), 20 *5, replace = TRUE), 20, 5))

Summarize each category of rows in one column using R

I'm wondering if this is something possible in R:
I have 2 columns. Column A (primaryhistory2.DEPT) has a bunch of categorical data, column B (primaryhistry2.ACT.ENROLL) has numbers and NAs.
I want to get a summary of column B for each category in column A.
Something like: for "NUT" in column A, I want to see the min, max, mean, median, number of NAs, etc. And I would like to see this for every category, like when you use the summary() command.
Not sure if this is possible.. Thank you all in advance!
@Moody_Mudskipper
The results are what I'm looking for, but without column names it's hard to read.
And for the base R version, it's not counting the NAs, of which I see a lot in my file.
Very possible using the dplyr library:
library(dplyr)
most.of.the.answer = df %>%
  group_by(primaryhistory2.DEPT) %>%
  summarise(min = min(primaryhistry2.ACT.ENROLL, na.rm = TRUE),
            max = max(primaryhistry2.ACT.ENROLL, na.rm = TRUE),
            mean = mean(primaryhistry2.ACT.ENROLL, na.rm = TRUE),
            median = median(primaryhistry2.ACT.ENROLL, na.rm = TRUE))
(assuming your dataframe is called df)
For counting NA's, try dplyr's filter feature:
count.NAs = df %>%
  filter(is.na(primaryhistry2.ACT.ENROLL)) %>%
  group_by(primaryhistory2.DEPT) %>%
  summarise(count.NA = n())
I'll leave it to you to merge the two dataframes.
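For completeness, one possible way to combine the two results (a sketch, assuming the two objects created above):
library(dplyr)
final.summary <- most.of.the.answer %>%
  left_join(count.NAs, by = "primaryhistory2.DEPT")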
With base R you can do this:
temp <- aggregate(primaryhistory2..ACT.ENROLL ~ primaryhistory2.DEPT, df,
                  function(x) c(mean = mean(x, na.rm = TRUE),
                                median = median(x, na.rm = TRUE),
                                min = min(x, na.rm = TRUE),
                                max = max(x, na.rm = TRUE),
                                nas = sum(is.na(x))))
res <- cbind(temp[1], temp[[2]])
If you want to use summary:
summary1 <- sapply(unique(df$primaryhistory2.DEPT),
                   function(x) summary(subset(df, primaryhistory2.DEPT == x)$primaryhistory2..ACT.ENROLL))
colnames(summary1) <- unique(df$primaryhistory2.DEPT)

R: optimize finding max values of function on a data frame and then trim the rest

First of all, my data comes from Temperature.xls, which can be downloaded from this link: RBook
My code is this:
library(plyr)

temp = read.table("Temperature.txt", header = TRUE)
length(unique(temp$Year)) # number of unique values in the Year vector
res = ddply(temp, c("Year", "Month"), summarise, Mean = mean(Temperature, na.rm = TRUE))
res1 = ddply(temp, .(Year, Month), summarise,
             SD = sd(Temperature, na.rm = TRUE),
             N = sum(!is.na(Temperature)))
# ordering res1 by year and SD:
res1 = res1[order(res1$Year, res1$SD), ]
# finding the maximum SD per year and displaying those rows in a separate data frame
res1_maxsd = ddply(res1, .(Year), summarise, MaxSD = max(SD, na.rm = TRUE)) # find the max SD in each Year
res1_max = merge(res1_maxsd, res1, all = FALSE)    # merge it with the original to see the other variables at the max rows
res1_m = res1_max[res1_max$MaxSD == res1_max$SD, ] # find which rows correspond to the max value
res1_mm = res1_m[complete.cases(res1_m), ]         # trim all others (which are NAs)
I know that I can cut the last 4 lines down to fewer lines. Can I somehow execute the last 2 lines in one command? I have stumbled across:
res1_m = res1_max[complete.cases(res1_max$MaxSD==res1_max$SD),]
But this does not give me what I want, which is a smaller data frame containing only the rows (with all the variables) that hold the maximum SD.
Rather than fixing the last 2 lines, why not start from res1? Reversing the sort order of SD and taking the first row per year gives you an equivalent final data set:
res1 <- res1[order(res1$Year,-res1$SD),]
res_final <- res1[!duplicated(res1$Year),]
The last four lines can be cut down if you use the dplyr package. Since you want to keep some information from the original data set, you probably don't want to use summarise, because it only returns the summarized information and you would have to merge it back with the original data set; mutate and filter are a better choice:
library(dplyr)
res1_mm1 <- res1 %>% group_by(Year) %>% filter(SD == max(SD, na.rm = T))
You can also use mutate to create the new column MaxSD, which in your case equals SD in the resulting data frame:
res1_mm1 <- res1 %>%
  group_by(Year) %>%
  mutate(MaxSD = max(SD, na.rm = T)) %>%
  filter(SD == MaxSD)

Calculate min, maximum and mean in R

I have a data set with 130 rows and two columns.
I want to calculate the mean, minimum and maximum of every 5 rows of the second column using R. By using colMeans and the following command
rep(colMeans(matrix(data$Pb, nrow=5), na.rm=TRUE), each=5)
I was able to compute the mean for every 5 rows. However, I am not able to compute the max and min, since there is no built-in function for those. I tried the approach suggested here for 5 rows instead of 2, but I get the error that dim(X) must have a positive length. Can someone please help me understand what I should do to compute these quantities? My end goal is to plot the min, mean and max for every 5 rows.
Thanks in advance.
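Extending the matrix trick from the question, a minimal base-R sketch that works for min and max as well (assuming the column is data$Pb and the number of rows is a multiple of 5):
block <- matrix(data$Pb, nrow = 5)              # each column holds one block of 5 rows
blk_min  <- apply(block, 2, min,  na.rm = TRUE)
blk_max  <- apply(block, 2, max,  na.rm = TRUE)
blk_mean <- colMeans(block, na.rm = TRUE)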
If we are looking for functions to find the max and min of each column of a matrix, colMaxs and colMins from matrixStats can be used:
library(matrixStats)
colMaxs(mat)
#[1] 7 8 20
colMins(mat)
#[1] 3 1 7
But if the goal is to do this for every 5 rows of the dataset's columns, use gl to create a grouping index for each block of 5 rows, and then, with the help of by, apply colMaxs, colMins or colMeans:
by(data, list(gr = as.numeric(gl(nrow(data), 5, nrow(data)))),
   FUN = function(x) colMaxs(as.matrix(x)))
The same way, we can find the colMins or colMeans:
by(data, list(gr = as.numeric(gl(nrow(data), 5, nrow(data)))),
   FUN = function(x) colMins(as.matrix(x)))
by(data, list(gr = as.numeric(gl(nrow(data), 5, nrow(data)))),
   FUN = function(x) colMeans(as.matrix(x)))
The above can be done in a compact way with dplyr
library(dplyr)
data %>%
  group_by(gr = as.numeric(gl(nrow(.), 5, nrow(.)))) %>%
  summarise_each(funs(min, max, mean))
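In current dplyr versions summarise_each() and funs() are deprecated; an equivalent with across() would look roughly like this (a sketch; the column order of the output differs slightly):
data %>%
  group_by(gr = as.numeric(gl(nrow(.), 5, nrow(.)))) %>%
  summarise(across(everything(), list(min = min, max = max, mean = mean)))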
To do the plotting, we can extend this with ggplot:
library(ggplot2)
library(tidyr)
data %>%
  group_by(gr = as.numeric(gl(nrow(.), 5, nrow(.)))) %>%
  summarise_each(funs(min, max, mean)) %>%
  gather(Var, Val, -gr) %>%
  separate(Var, into = c("Var1", "Var2")) %>%
  ggplot(., aes(x = factor(gr), y = Val, fill = Var2)) +
    geom_bar(stat = "identity") +
    facet_wrap(~Var1)
data
mat <- matrix(c(3,1,20,5,4,12,6,2,9,7,8,7), byrow=T, ncol=3)
set.seed(24)
data <- data.frame(Pb = sample(1:9, 42, replace=TRUE), Ps = rnorm(42))
A nice approach for this would be the base by function combined with apply. Below is an example where you first make an index of the groups for your function:
m <- matrix(runif(130 * 2), 130, 2)
group <- rep(seq(nrow(m)), each = 5, length.out = nrow(m))
res <- by(m, INDICES = group, FUN = function(x) apply(x, MARGIN = 2, FUN = max))
class(res)          # "by" class
do.call(rbind, res) # matrix
