Remove elements of a list based on a condition during a "loop" in R

I'm dealing with a very large list of large data frames (~2 GB). To save space and reduce file size, I want to remove the elements of the list whose values are all NA. As part of the operation I need to gather each element and then bind everything into a single data.frame.
Here's an example:
library(tidyr)
library(dplyr)
a <- data.frame(x=rep(1,3), y1=1:3, y2=1:3)
b <- data.frame(x=rep(2,3), y1=NA, y2=NA)
c <- data.frame(x=rep(3,3), y1=1:3, y2=NA)
l <- list(a,b,c)
t <- lapply(l, function(x) {
  gather(x, key = "type", value = "value", -x) # %>%
  # remove list element here %>%
  # do other operations like mutate here
}) %>%
  bind_rows()
The result of this includes some data.frames whose y values are all NA.
I would like to remove those elements from the list completely. If I simply remove all rows with NA, I am still left with an empty list element, which then crashes further calculations with mutate or other operations.
I'm trying to take care of this within the first call to lapply, because I find that filtering afterwards requires a lot of memory (often crashing after maxing out the 16GB I have on this computer). In the title, when I say "loop" I'm referring to this apply statement.
In this example the result should look like:
> t[-(7:12),]
x type value
1 1 y1 1
2 1 y1 2
3 1 y1 3
4 1 y2 1
5 1 y2 2
6 1 y2 3
13 3 y1 1
14 3 y1 2
15 3 y1 3
16 3 y2 NA
17 3 y2 NA
18 3 y2 NA

So, I'm not 100% sure I understood the question, but assuming I did, a possible answer would be:
t <- lapply(l, function(x) {
  gather(x, key = "type", value = "value", -x) %>%
    subset(sum(!is.na(value)) != 0)
}) %>%
  bind_rows()
t
x type value
1 1 y1 1
2 1 y1 2
3 1 y1 3
4 1 y2 1
5 1 y2 2
6 1 y2 3
7 3 y1 1
8 3 y1 2
9 3 y1 3
10 3 y2 NA
11 3 y2 NA
12 3 y2 NA
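If memory is the main constraint, another option is to return NULL for the all-NA elements inside the same lapply call: bind_rows() silently skips NULL list elements, so those pieces never reach the combined data frame. A minimal sketch along the lines of the code above (same l, gather, and bind_rows):
library(dplyr)
library(tidyr)
t <- lapply(l, function(x) {
  long <- gather(x, key = "type", value = "value", -x)
  # Drop the whole element by returning NULL when every value is NA;
  # bind_rows() ignores NULL entries in the list it receives.
  if (all(is.na(long$value))) return(NULL)
  # other per-element operations (e.g. mutate) could go here
  long
}) %>%
  bind_rows()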

Related

How to change NA into 0 based on other variable / how many times it was recorded

I am still new to R and need help. I want to change the NA values in variables x1, x2, x3 to 0 based on the value of count. Count specifies the number of observations, and x1, x2, x3 stand for the visits to the site (i.e. replications). The value in each 'x' variable is the number of species found. However, not all sites were visited 3 times; the variable count tells us how many times each site was actually visited. I want to distinguish the actual NAs from the real 0s (which mean no species were found): I want to change an NA into 0 if the site was actually visited on that occasion, and keep it NA if it was not. For example, from the dummy data, the 'zhask' site was visited 2 times, so the NA in x1 of zhask needs to be replaced with 0.
This is the dummy data:
site x1 x2 x3 count
1 miya 1 2 1 3
2 zhask NA 1 NA 2
3 balmond 3 NA 2 3
4 layla NA 1 NA 2
5 angela NA 3 NA 2
So, the table needs to be changed into:
site x1 x2 x3 count
1 miya 1 2 1 3
2 zhask 0 1 NA 2
3 balmond 3 0 2 3
4 layla 0 1 NA 2
5 angela 0 3 NA 2
I've tried many things and tried to write my own function; however, it is not working:
for (i in 1:nrow(df)) {
  if (is.na(df$x1[i]) && (i < df$count[i])) {
    df$x1[i] = 0
  } else {
    df$x1[i] = df$x1[i]
  }
}
This is the script for the dummy data frame:
x1 = c(1, NA, 3, NA, NA)
x2 = c(2, 1, NA, 1, 3)
x3 = c(1, NA, 2, NA, NA)
count = c(3, 2, 3, 2, 2)
site = c("miya", "zhask", "balmond", "layla", "angela")
df = data.frame(site, x1, x2, x3, count)
Any help will be very much appreciated!
One way would be to apply a function over each of your x columns. Here's a way to do that.
cols <- c("x1", "x2", "x3")
df[, cols] <- mapply(function(col, idx, count) {
ifelse(idx <=count & is.na(col), 0, col)
}, df[,cols], seq_along(cols), MoreArgs=list(count=df$count))
# site x1 x2 x3 count
# 1 miya 1 2 1 3
# 2 zhask 0 1 NA 2
# 3 balmond 3 0 2 3
# 4 layla 0 1 NA 2
# 5 angela 0 3 NA 2
We use mapply to iterate over the columns and the index of the column. We also pass in the count value each time (since it's the same for all columns, it goes in the MoreArgs= parameter). This mapply will return a list and we can use that to replace the columns with the updated values.
If you wanted to use dplyr, that might look more like
library(dplyr)
cols <- c("x1"=1, "x2"=2, "x3"=3)
df %>%
mutate(across(starts_with("x"), ~if_else(cols[cur_column()]<count & is.na(.x), 0, .x)))
I used the cols vector to get the index of the column which doesn't seem to be otherwise available when using across().
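If you would rather derive the visit number from the column name itself instead of maintaining a lookup vector, readr::parse_number() can pull it out of cur_column(). A small sketch (assuming the visit columns are literally named x1, x2, x3 and that readr is installed):
df %>%
  mutate(across(starts_with("x"),
                ~ if_else(readr::parse_number(cur_column()) <= count & is.na(.x),
                          0, .x)))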
But a more "tidy" way to tackle this problem would be to pivot your data first to a "tidy" format. Then you can clean the data more easily and pivot back if necessary
library(dplyr)
library(tidyr)
df %>%
  pivot_longer(cols = starts_with("x")) %>%
  mutate(index = readr::parse_number(name)) %>%
  mutate(value = if_else(index <= count & is.na(value), 0, value)) %>%
  select(-index) %>%
  pivot_wider(names_from = name, values_from = value)
# site count x1 x2 x3
# <chr> <dbl> <dbl> <dbl> <dbl>
# 1 miya 3 1 2 1
# 2 zhask 2 0 1 NA
# 3 balmond 3 3 0 2
# 4 layla 2 0 1 NA
# 5 angela 2 0 3 NA
Via some indexing of the columns:
vars <- c("x1","x2","x3")
df[vars][is.na(df[vars]) & (col(df[vars]) <= df$count)] <- 0
# site x1 x2 x3 count
#1 miya 1 2 1 3
#2 zhask 0 1 NA 2
#3 balmond 3 0 2 3
#4 layla 0 1 NA 2
#5 angela 0 3 NA 2
Essentially this is:
- selecting the variables/columns and storing them in vars
- flagging the NA cells within those variables with is.na(df[vars])
- col(df[vars]) returns a column number for every cell, which can be checked for being less than or equal to df$count in each corresponding row
- the values meeting both of the above criteria are overwritten (<-) with 0
This could be yet another solution using purrr::pmap:
purrr::pmap is used for row-wise operations when applied to a data frame. It enables us to iterate over multiple arguments at the same time. So here c(...) refers to all corresponding elements of the selected variables (all except site) in each row.
I think the rest of the solution is pretty clear but please let me know if I need to explain more about this.
library(dplyr)
library(purrr)
library(tidyr)
df %>%
  mutate(output = pmap(df[-1], ~ {
    x <- head(c(...), -1)
    inds <- which(is.na(x))
    req <- tail(c(...), 1) - sum(!is.na(x))
    x[inds[seq_len(req)]] <- 0
    x
  })) %>%
  select(site, output, count) %>%
  unnest_wider(output)
# A tibble: 5 x 5
site x1 x2 x3 count
<chr> <dbl> <dbl> <dbl> <dbl>
1 miya 1 2 1 3
2 zhask 0 1 NA 2
3 balmond 3 0 2 3
4 layla 0 1 NA 2
5 angela 0 3 NA 2

create new column (with outcome min or NA) from multiple selected columns

My data has many columns and subjects, but to illustrate it more simply, let's say I have 7 subjects with 3 variables/columns called x1, x2 and x3 (values range from 1 to 3, plus NAs). In the analysis that I want, it is important that I actually name the columns I want to use (I cannot just use the whole data frame in my analysis, because there are more variables/columns in it).
> data <- data.frame('id'=c(1,2,3,4,5,6,7), 'x1'=c(1,2,2,NA,3,3,1), 'x2'=c(NA,3,1,NA,2,3,2), 'x3'=c(NA,2,NA,NA,3,NA,1))
id x1 x2 x3
1 1 NA NA
2 2 3 2
3 2 1 NA
4 NA NA NA
5 3 2 3
6 3 3 NA
7 1 2 1
The class of x1 x2 and x3 are numeric.
Out of that, I want to create a variable/column called 'x4' that:
- gives me the lowest value in that row across x1, x2 and x3.
- If there is an NA in a row of x1, x2, x3, the NA shall be ignored.
- If they are however ALL NAs, I want the outcome to be NA (NOT Inf, which is what my code produces now).
- If the two lowest numbers in a row are the same, just display either one of them. So like this:
> data <- data.frame('id'=c(1,2,3,4,5,6,7), 'x1'=c(1,2,2,NA,3,3,1), 'x2'=c(NA,3,1,NA,2,3,2), 'x3'=c(NA,2,NA,NA,3,NA,1), 'x4'=c(1,2,1,NA,2,3,1))
id x1 x2 x3 x4
1 1 NA NA 1
2 2 3 2 2
3 2 1 NA 1
4 NA NA NA NA
5 3 2 3 2
6 3 3 NA 3
7 1 2 1 1
I managed to find a very similar question, and I can mostly make it work: min for each row with dataframe in R
data$x4 <- apply(data[, c("x1","x2","x3")],1, FUN=min, na.rm = TRUE)
The problem I have now is that in the case of all NAs (so id number 4), my outcome is not NA but Inf.
Question 1: How can I make it become NA instead of Inf? I can of course do that afterwards like this:
is.na(data$x4) <- sapply(data$x4, is.infinite)
But I wonder if there is a nice way to do that already with/inside the previous code?
Also, rather than using apply with FUN = min inside, I would like to try to make it work with code like the one below. Question 2: is using this other code below possible?
data$x4 <- min(data[, c("x1","x2","x3")],1 , na.rm = TRUE)
With this, x4 gets the outcome '1' every time. I guess it just shows the lowest number (1) of the whole selection? I don't understand why; I am already using ', 1' but it doesn't help.
I hope somebody can help me (R and Stack Overflow newbie) out, thanks!
You are looking for the pmin function, which returns the (regular or parallel) minima of the input values. Below are two approaches using pmin:
data$minIget <- do.call(pmin, c(data[,-1], na.rm = TRUE)) # Approach 1: using do.call
data %>% rowwise() %>% mutate(minIget = pmin(x1, x2, x3, na.rm = TRUE)) # Approach 2: using tidyverse
output:
A tibble: 7 x 5
# Rowwise:
id x1 x2 x3 minIget
<dbl> <dbl> <dbl> <dbl> <dbl>
1 1 1 NA NA 1
2 2 2 3 2 2
3 3 2 1 NA 1
4 4 NA NA NA NA
5 5 3 2 3 2
6 6 3 3 NA 3
7 7 1 2 1 1
You can test whether all values are NA before you call min, like this:
apply(data[, c("x1","x2","x3")], 1, function(x)
  if (all(is.na(x))) NA else min(x, na.rm = TRUE))
#[1] 1 2 1 NA 2 3 1
min(data[, c("x1","x2","x3")], 1, na.rm = TRUE) gives you the minimum of 1 and all the values in data[, c("x1","x2","x3")] taken together, because min() collapses everything it is given into a single number.
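To make the difference concrete, here is a small illustration (the vectors are made up just for demonstration): min() returns one number for all of its inputs, while pmin() compares its arguments element-wise.
min(c(3, 1, NA), c(2, 5, 4), na.rm = TRUE)   # 1      -> a single minimum over everything
pmin(c(3, 1, NA), c(2, 5, 4), na.rm = TRUE)  # 2 1 4  -> element-wise minima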

Drop data frame rows if NA for certain variables referred to by name in dplyr

I would like to drop entire rows from a data frame if they are all NA, but only for a certain subset of columns (which are named in a sequence and all start with "X").
This is different from other SO answers I found, as far as I can tell, since I cannot refer to each column manually by name (too many variables) and I do not want to drop rows only when they are completely NA (rather, when a specific set of variables is all NA).
So turn sample data:
data1 <- as.data.frame(rbind(c(1,2,3), c(1, NA, 4), c(4,6,7), c(1, NA, NA), c(4, 8, NA)))
colnames(data1) <- c("Z","X1","X2")
data1
Z X1 X2
1 1 2 3
2 1 NA 4
3 4 6 7
4 1 NA NA
5 4 8 NA
into:
Z X1 X2
1 1 2 3
2 1 NA 4
3 4 6 7
4 4 8 NA
I.e. drop the row if both X1 and X2 (all of the X sequence) are NA.
In this example there are only two such variables (X1:X2) for ease, but in reality I have closer to 100 in this sequence, plus many other important variables that may or may not be NA. I would prefer to do this in dplyr with filter, but other solutions would be appreciated as well.
I feel like:
data1 %>% filter(!is.na(all(X1:X2)))
or something similar is close, but R does not like the sequence reference X1:X2 within filter.
You can use rowSums + select + starts_with + filter:
data1 %>%
  filter(rowSums(!is.na(select(., starts_with("X")))) != 0)
# Z X1 X2
#1 1 2 3
#2 1 NA 4
#3 4 6 7
#4 4 8 NA
A base R solution using apply would be:
drop <- which(apply(data1[,startsWith(colnames(data1), "X")], 1, function(x) all(is.na(x))))
data1[-drop,]
# Z X1 X2
#1 1 2 3
#2 1 NA 4
#3 4 6 7
#5 4 8 NA
Another option using rowSums:
drop <- which(rowSums(is.na(data1[, c("X1","X2")])) >= 2)
data1[-drop, ]
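With a recent dplyr (1.0 or later), the same idea can also be written with if_all(), which avoids the rowSums() bookkeeping. A minimal sketch, assuming the columns of interest all start with "X":
library(dplyr)
data1 %>%
  filter(!if_all(starts_with("X"), is.na))
# a row is kept unless every X column in it is NA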

How to reshape a data frame from wide to long format in R?

I am new to R. I am trying to read data from Excel in the following format:
x1 x2 x3 y1 y2 y3 Result
1 2 3 7 8 9
4 5 6 10 11 12
and the data frame in R should take the data in this format for the 1st row:
x y
1 7
2 8
3 9
Then I want to use lm() and export the result to the Result column.
I want to automate this for n rows, i.e. once the result for the 1st row is exported to Excel, I want to import the data for the second row.
Please Help.
library(gdata)
# this spreadsheet is exactly as in your question
df.original <- read.xls("test.xlsx", sheet="Sheet1", perl="C:/strawberry/perl/bin/perl.exe")
#
#
> df.original
x1 x2 x3 y1 y2 y3
1 1 2 3 7 8 9
2 4 5 6 10 11 12
#
# for the above code you'll just need to change the 'perl' argument to
# the path of your own perl executable
#
# now the example for the first row
#
library(reshape2)
df <- melt(df.original[1,])
df$variable <- substr(df$variable, 1, 1)
df <- as.data.frame(lapply(split(df, df$variable), `[[`, 2))
> df
x y
1 1 7
2 2 8
3 3 9
Now, at this stage we have automated the import/transformation process (for one row).
First question: how do you want the data to look once every row has been treated?
Second question: what exactly do you want to put in Result? Residuals, fitted values? What do you need from lm()?
EDIT:
ok @kapil, tell me if the final shape of df is what you had in mind:
library(reshape2)
library(plyr)
df <- adply(df.original, 1, melt, .expand = FALSE)
names(df)[1] <- "rowID"
df$variable <- substr(df$variable, 1, 1)
rows <- df$rowID[df$variable == "x"] # with y it would be the same (they have the same length)
df <- as.data.frame(lapply(split(df, df$variable), `[[`, "value"))
df$rowID <- rows
df <- df[c("rowID", "x", "y")]
> df
rowID x y
1 1 1 7
2 1 2 8
3 1 3 9
4 2 4 10
5 2 5 11
6 2 6 12
Regarding the coefficients, you can calculate them for each rowID (which refers to the actual row in the xls file) in this way:
model <- dlply(df, .(rowID), function(z) lm(y ~ x, data = z))
> sapply(model, `[`, "coefficients")
$`1.coefficients`
(Intercept) x
6 1
$`2.coefficients`
(Intercept) x
6 1
So, for each group (i.e. each row in the original spreadsheet) you have, as expected, two coefficients, intercept and slope; therefore I can't figure out how you want the coefficients to fit inside the data.frame (especially in the 'long' shape it has just above). But if you want the data.frame to stay in 'wide' mode, you can try this:
# obtained the object model, you can put the coeff in the df.original data.frame
#
> ldply(model, `[[`, "coefficients")
rowID (Intercept) x
1 1 6 1
2 2 6 1
df.modified <- cbind(df.original, ldply(model, `[[`, "coefficients"))
> df.modified
x1 x2 x3 y1 y2 y3 rowID (Intercept) x
1 1 2 3 7 8 9 1 6 1
2 4 5 6 10 11 12 2 6 1
# of course, if you don't like it, you can remove rowID with df.modified$rowID <- NULL
Hope this helps, and let me know if you wanted the 'long' version of df.
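For readers doing this today, the same reshape-and-fit workflow can be sketched with current tidyr/dplyr verbs instead of reshape2/plyr. This is only a sketch, and it assumes df.original contains just the x1..x3 and y1..y3 columns shown above:
library(dplyr)
library(tidyr)
# one row per (spreadsheet row, observation), with x and y side by side
long <- df.original %>%
  mutate(rowID = row_number()) %>%
  pivot_longer(-rowID,
               names_to = c(".value", "obs"),
               names_pattern = "([xy])(\\d+)")
# fit one lm() per original spreadsheet row and keep intercept and slope
coefs <- long %>%
  group_by(rowID) %>%
  summarise(intercept = coef(lm(y ~ x))[[1]],
            slope     = coef(lm(y ~ x))[[2]],
            .groups   = "drop")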

Variable Length Core Name Identification

I have a data set with the following row-naming scheme:
a.X.V
where:
a is a fixed-length core ID
X is a variable-length string that subsets a, which means I should keep X
V is a variable-length ID which specifies the individual elements of a.X to be averaged
. is one of {-,_}
What I am trying to do is take column averages of all the a.X's. A sample:
sampleList <- list("a.12.1"=c(1,2,3,4,5), "b.1.23"=c(3,4,1,4,5), "a.12.21"=c(5,7,2,8,9), "b.1.555"=c(6,8,9,0,6))
sampleList
$a.12.1
[1] 1 2 3 4 5
$b.1.23
[1] 3 4 1 4 5
$a.12.21
[1] 5 7 2 8 9
$b.1.555
[1] 6 8 9 0 6
Currently I am manually gsubbing out the .Vs to get a vector of the general a.X names:
sampleList <- t(as.data.frame(sampleList))
y <- rownames(sampleList)
y <- gsub("(\\w\\.\\d+)\\.\\d+", "\\1", y)
Is there a faster way to do this?
This is one half of 2 issues I've encountered in a workflow. The other half was answered here.
You can use a vector of patterns to find the locations of the list elements you want to group. I included a pattern I knew wouldn't match anything in order to show that the solution is robust to that situation.
# A *named* vector of patterns you want to group by
patterns <- c(a.12 = "^a.12", b.1 = "^b.1", c.12 = "^c.12")
# Find the locations of those patterns in your list
inds <- lapply(patterns, grep, x=names(sampleList))
# Calculate the mean of each list element that matches the pattern
out <- lapply(inds, function(i)
  if (l <- length(i)) Reduce("+", sampleList[i]) / l else NULL)
# Set the names of the output
names(out) <- names(patterns)
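For the sample data, the a.12 entry of out then holds the element-wise mean of a.12.1 and a.12.21 shown above:
out$a.12
# [1] 3.0 4.5 2.5 6.0 7.0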
Perhaps you could consider messing with your data structure to make it easier to apply some standard tools:
sampleList <- list("a.12.1"=c(1,2,3,4,5),
"b.1.23"=c(3,4,1,4,5), "a.12.21"=c(5,7,2,8,9),
"b.1.555"=c(6,8,9,0,6))
library(reshape2)
m1 <- melt(do.call(cbind,sampleList))
m2 <- cbind(m1,colsplit(m1$Var2,"\\.",c("coreID","val1","val2")))
The result looks like this:
head(m2)
Var1 Var2 value coreID val1 val2
1 1 a.12.1 1 a 12 1
2 2 a.12.1 2 a 12 1
3 3 a.12.1 3 a 12 1
Then you can more easily do something like this:
aggregate(value~val1,mean,data=subset(m2,coreID=="a"))
R is well suited to this sort of thing if you just move to data.frames instead of lists. Make your 'a', 'X', and 'V' into their own columns. Then you can use ave, by, aggregate, subset, etc.
data.frame(do.call(rbind, sampleList),
           do.call(rbind, strsplit(names(sampleList), '\\.')))
# X1 X2 X3 X4 X5 X1.1 X2.1 X3.1
# a.12.1 1 2 3 4 5 a 12 1
# b.1.23 3 4 1 4 5 b 1 23
# a.12.21 5 7 2 8 9 a 12 21
# b.1.555 6 8 9 0 6 b 1 555
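To finish the thought, here is a small sketch of the averaging step once the data is in that shape (X1..X5 are the value columns and X1.1/X2.1 the core-ID and subset columns, i.e. the default names produced by the construction above):
wide <- data.frame(do.call(rbind, sampleList),
                   do.call(rbind, strsplit(names(sampleList), '\\.')))
# mean of each value column within each a.X group (core ID + subset string)
aggregate(cbind(X1, X2, X3, X4, X5) ~ X1.1 + X2.1, data = wide, FUN = mean)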
