My data looks something like this:
db <- as.data.frame(matrix(ncol=10, nrow=3,
c(3,NA,NA,4,5,NA,7,NA,NA,NA,NA,NA,7,NA,8,9,NA,NA,4,6,NA,NA,7,8,11,5,10,NA,NA,NA), byrow = TRUE))
db
V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
1 3 NA NA 4 5 NA 7 NA NA NA
2 NA NA 7 NA 8 9 NA NA 4 6
3 NA NA 7 8 11 5 10 NA NA NA
For each row, I'm trying to count the number of NAs that appear between the first and last non-NA elements (my data contains both numbers and characters).
The output should be something like this:
db$na.tot <- c(3, 3, 0)
db
V1 V2 V3 V4 V5 V6 V7 V8 V9 V10 na.tot
1 3 NA NA 4 5 NA 7 NA NA NA 3
2 NA NA 7 NA 8 9 NA NA 4 6 3
3 NA NA 7 8 11 5 10 NA NA NA 0
Where na.tot represents the number of NAs observed between the first and last non-NA elements by row (between 3 and 7, between 7 and 6, and between 7 and 10 in rows 1, 2 and 3 respectively).
Does anyone have a simple solution?
Thanks!
Try this:
library(data.table)
# row/column indices of every non-NA cell
z <- as.data.table(which(!is.na(db), arr.ind = TRUE))
setkey(z, row, col)
# span width (last - first + 1) minus the non-NA count (.N) gives the NAs inside
z[, list(NAs = last(col) - first(col) - .N + 1), by = row]
# row NAs
#1: 1 3
#2: 2 3
#3: 3 0
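For comparison, here is a base-R sketch of the same count, using apply over the rows (assuming the db built in the question, with at least one non-NA value per row as shown):

```r
db <- as.data.frame(matrix(ncol = 10, nrow = 3,
  c(3,NA,NA,4,5,NA,7,NA,NA,NA,NA,NA,7,NA,8,9,NA,NA,4,6,NA,NA,7,8,11,5,10,NA,NA,NA),
  byrow = TRUE))

# For each row: the span from the first to the last non-NA index has
# diff(range(idx)) + 1 cells; subtracting the non-NA count leaves the NAs inside.
db$na.tot <- apply(db, 1, function(x) {
  idx <- which(!is.na(x))
  diff(range(idx)) + 1 - length(idx)
})
db$na.tot
# 3 3 0
```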
I am trying to lag a data.frame that looks like this:
V1 V2 V3 V4 V5 V6
1 1 1 1 1 1
2 2 2 2 2 NA
3 3 3 3 NA NA
4 4 4 NA NA NA
5 5 NA NA NA NA
6 NA NA NA NA NA
To something that looks like this:
V1 V2 V3 V4 V5 V6
1 NA NA NA NA NA
2 1 NA NA NA NA
3 2 1 NA NA NA
4 3 2 1 NA NA
5 4 3 2 1 NA
6 5 4 3 2 1
So far, I have used a function that counts the number of NAs, and have tried to lag each column in my data.frame by the corresponding number of NAs in that column.
V1 <- c(1,2,3,4,5,6)
V2 <- c(1,2,3,4,5,NA)
V3 <- c(1,2,3,4,NA,NA)
V4 <- c(1,2,3,NA,NA,NA)
V5 <- c(1,2,NA,NA,NA,NA)
V6 <- c(1,NA,NA,NA,NA,NA)
mydata <- cbind(V1,V2,V3,V4,V5,V6)
na.count <- colSums(is.na(mydata))
lag.by <- function(mydata, na.count){lag(mydata, k = na.count)}
lagged.df <- apply(mydata, 2, lag.by)
But this code just lags the entire data.frame by one...
One option would be to loop through the columns with apply, appending the NA elements first (subsetting with is.na) and then the non-NA elements (negating the logical vector with !is.na):
apply(mydata, 2, function(x) c(x[is.na(x)], x[!is.na(x)]))
# V1 V2 V3 V4 V5 V6
#[1,] 1 NA NA NA NA NA
#[2,] 2 1 NA NA NA NA
#[3,] 3 2 1 NA NA NA
#[4,] 4 3 2 1 NA NA
#[5,] 5 4 3 2 1 NA
#[6,] 6 5 4 3 2 1
You could use the sort function with option na.last = FALSE like this:
edit:
Akrun's comment is a valid one. If the values need to stay in the order they have in the data.frame, then Akrun's answer is the best. sort will put everything in order from low to high with the NAs in front.
library(purrr)
# mydata was created with cbind(), so it is a matrix; convert it to a
# data.frame so that map_df iterates over columns
map_df(as.data.frame(mydata), sort, na.last = FALSE)
# A tibble: 6 x 6
V1 V2 V3 V4 V5 V6
<int> <int> <int> <int> <int> <int>
1 1 NA NA NA NA NA
2 2 1 NA NA NA NA
3 3 2 1 NA NA NA
4 4 3 2 1 NA NA
5 5 4 3 2 1 NA
6 6 5 4 3 2 1
Or apply:
apply(mydata, 2, sort , na.last = FALSE)
V1 V2 V3 V4 V5 V6
[1,] 1 NA NA NA NA NA
[2,] 2 1 NA NA NA NA
[3,] 3 2 1 NA NA NA
[4,] 4 3 2 1 NA NA
[5,] 5 4 3 2 1 NA
[6,] 6 5 4 3 2 1
edit2:
As nicolo commented, order can preserve the original order of the values:
mydata[, 3] <- c(4, 3, 1, 2, NA, NA)
# order(!is.na(x)) moves the NA positions to the front while keeping
# the relative order of the remaining values (order() is stable)
map_df(as.data.frame(mydata), function(x) x[order(!is.na(x))])
# A tibble: 6 x 6
V1 V2 V3 V4 V5 V6
<int> <int> <dbl> <int> <int> <int>
1 1 NA NA NA NA NA
2 2 1 NA NA NA NA
3 3 2 4 NA NA NA
4 4 3 3 1 NA NA
5 5 4 1 2 1 NA
6 6 5 2 3 2 1
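The same order-based idea works in base R without purrr. A sketch, rebuilding the data (including the unsorted third column from the edit) as a data.frame:

```r
mydata <- data.frame(V1 = 1:6,
                     V2 = c(1:5, NA),
                     V3 = c(4, 3, 1, 2, NA, NA),
                     V4 = c(1:3, NA, NA, NA),
                     V5 = c(1:2, NA, NA, NA, NA),
                     V6 = c(1, NA, NA, NA, NA, NA))

# order(!is.na(x)) sorts FALSE (the NA positions) before TRUE and is
# stable, so the non-NA values keep their original relative order.
shifted <- as.data.frame(lapply(mydata, function(x) x[order(!is.na(x))]))
shifted$V3
# NA NA  4  3  1  2
```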
I have a problem I could not find a solution to yet.
I have a dataframe in R which looks like that:
p v1 v2 v3 v4 v5 v6 v7 v8 v9 <- Header
V1 1 2 3 NA NA NA NA NA NA
V2 1 2 3 NA NA NA NA NA NA
V3 1 2 3 NA NA NA NA NA NA
V1 NA NA NA 4 5 6 NA NA NA
V2 NA NA NA 4 5 6 NA NA NA
V3 NA NA NA 4 5 6 NA NA NA
V1 NA NA NA NA NA NA 7 8 9
V2 NA NA NA NA NA NA 7 8 9
V3 NA NA NA NA NA NA 7 8 9
How can I merge all the rows depending on the first column to get the following output:
V1 1 2 3 4 5 6 7 8 9
V2 1 2 3 4 5 6 7 8 9
V3 1 2 3 4 5 6 7 8 9
Thank you very much!
We can group by the first column and then get the sum:
library(dplyr)
df1 %>%
group_by(p) %>%
summarise_all(sum, na.rm = TRUE)
# A tibble: 3 x 10
# p v1 v2 v3 v4 v5 v6 v7 v8 v9
# <chr> <int> <int> <int> <int> <int> <int> <int> <int> <int>
#1 V1 1 2 3 4 5 6 7 8 9
#2 V2 1 2 3 4 5 6 7 8 9
#3 V3 1 2 3 4 5 6 7 8 9
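In base R, aggregate can do the same grouping. A minimal sketch on a cut-down version of the data (three value columns instead of nine; this df1 construction is an assumption, since the question only showed printed output):

```r
df1 <- data.frame(p  = rep(c("V1", "V2", "V3"), 3),
                  v1 = c(1, 1, 1, rep(NA, 6)),
                  v4 = c(rep(NA, 3), 4, 4, 4, rep(NA, 3)),
                  v7 = c(rep(NA, 6), 7, 7, 7))

# na.action = na.pass keeps rows containing NAs; na.rm = TRUE is passed
# through to sum() so the NAs are dropped inside each group
res <- aggregate(. ~ p, data = df1, FUN = sum,
                 na.rm = TRUE, na.action = na.pass)
res
#    p v1 v4 v7
# 1 V1  1  4  7
# 2 V2  1  4  7
# 3 V3  1  4  7
```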
I have a dataframe with some missing values, displayed as NA.
For example:
V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
1 4 3 6 7 2 1 2 3 4 1
2 5 5 4 3 2 1 3 7 6 7
3 6 6 NA NA NA NA NA NA NA NA
4 5 2 2 1 7 NA NA NA NA NA
5 7 NA NA NA NA NA NA NA NA NA
I would like to remove rows that contain at least 80% missing data. In this example these are clearly rows 3 and 5. I know how to remove rows manually, but I would like some help with the code, because my original dataframe contains 480 variables and more than 1000 rows, so code for automatically identifying and removing rows with >80% NA data would be extremely useful.
Thank you in advance.
You could use rowMeans:
df = read.table(text=' V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
1 4 3 6 7 2 1 2 3 4 1
2 5 5 4 3 2 1 3 7 6 7
3 6 6 NA NA NA NA NA NA NA NA
4 5 2 2 1 7 NA NA NA NA NA
5 7 NA NA NA NA NA NA NA NA NA')
df[rowMeans(is.na(df))<.8,]
Output:
V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
1 4 3 6 7 2 1 2 3 4 1
2 5 5 4 3 2 1 3 7 6 7
4 5 2 2 1 7 NA NA NA NA NA
Hope this helps!
We can use rowSums on the logical matrix
df1[rowSums(is.na(df1))/ncol(df1) < 0.8,]
# V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
#1 4 3 6 7 2 1 2 3 4 1
#2 5 5 4 3 2 1 3 7 6 7
#4 5 2 2 1 7 NA NA NA NA NA
I have a data.frame (population1) which consists of 11 million rows (observations) and 11 columns (individuals). The first few rows of my dataframe look like this:
> head(population1)
V1 V2 V3 V4 V5 V6 V7 V8 V9 V10 V11
1 7 3 NA NA 10 NA NA NA NA NA NA
2 14 11 7 NA 12 3 4 5 14 3 6
3 13 11 7 NA 11 4 NA 4 13 3 4
4 3 NA 4 5 4 NA NA 6 17 NA 7
5 3 NA 5 5 4 NA NA 7 20 NA 8
6 6 NA 3 6 NA NA NA 5 16 NA 10
For each individual, I want to estimate the proportion of observations with values more than 5. Is there any easy solution to do it in R?
Here is a solution that uses sapply to apply a function to each column. The function counts how many observations are larger than 5 and then divides by the length of x.
sapply(dt, function(x) sum(x > 5, na.rm = TRUE)/length(x))
V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
0.6666667 0.3333333 0.3333333 0.1666667 0.5000000 0.0000000 0.0000000 0.3333333 0.8333333 0.0000000
V11
0.6666667
DATA
dt <- read.table(text = " V1 V2 V3 V4 V5 V6 V7 V8 V9 V10 V11
1 7 3 NA NA 10 NA NA NA NA NA NA
2 14 11 7 NA 12 3 4 5 14 3 6
3 13 11 7 NA 11 4 NA 4 13 3 4
4 3 NA 4 5 4 NA NA 6 17 NA 7
5 3 NA 5 5 4 NA NA 7 20 NA 8
6 6 NA 3 6 NA NA NA 5 16 NA 10",
header = TRUE)
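The per-column loop can also be vectorised with colSums on the logical matrix. A sketch using the same dt; note the denominator is nrow(dt), so NA cells count against the proportion, matching the sapply answer above:

```r
dt <- read.table(text = " V1 V2 V3 V4 V5 V6 V7 V8 V9 V10 V11
1  7  3 NA NA 10 NA NA NA NA NA NA
2 14 11  7 NA 12  3  4  5 14  3  6
3 13 11  7 NA 11  4 NA  4 13  3  4
4  3 NA  4  5  4 NA NA  6 17 NA  7
5  3 NA  5  5  4 NA NA  7 20 NA  8
6  6 NA  3  6 NA NA NA  5 16 NA 10",
header = TRUE)

# dt > 5 yields a logical matrix containing NAs;
# colSums(..., na.rm = TRUE) counts the TRUEs per column in one pass
colSums(dt > 5, na.rm = TRUE) / nrow(dt)
```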
Here is an option using tidyverse
library(dplyr)
pop1 %>%
summarise_all(funs(sum(.>5, na.rm = TRUE)/n()))
# V1 V2 V3 V4 V5 V6 V7 V8 V9 V10 V11
#1 0.6666667 0.3333333 0.3333333 0.1666667 0.5 0 0 0.3333333 0.8333333 0 0.6666667
If we need it as a vector, then unlist it:
pop1 %>%
summarise_all(funs(sum(.>5, na.rm = TRUE)/n())) %>%
unlist(., use.names = FALSE)
I have the following dataframe dat, which presents a row-specific number of NAs at the beginning of some of its rows:
dat <- as.data.frame(rbind(c(NA,NA,1,3,5,NA,NA,NA), c(NA,1:3,6:8,NA), c(1:7,NA)))
dat
# V1 V2 V3 V4 V5 V6 V7 V8
# NA NA 1 3 5 NA NA NA
# NA 1 2 3 6 7 8 NA
# 1 NA 2 3 4 5 6 NA
My aim is to delete all the NAs at the beginning of each row and to left shift the row values (adding NAs at the end of the shifted rows accordingly, in order to keep their length constant).
The following code works as expected:
for (i in 1:nrow(dat)) {
if (is.na(dat[i,1])==TRUE) {
dat1 <- dat[i, min(which(!is.na(dat[i,]))):length(dat[i,])]
dat[i,] <- data.frame( dat1, t(rep(NA, ncol(dat)-length(dat1))) )
}
}
dat
returning:
# V1 V2 V3 V4 V5 V6 V7 V8
# 1 3 5 NA NA NA NA NA
# 1 2 3 6 7 8 NA NA
# 1 NA 2 3 4 5 6 NA
I was wondering whether there is a more direct way to do this without using a for-loop, perhaps by using the tail function.
With respect to this last point, using min(which(!is.na(dat[1,]))) the result is 3, as expected. But if I then type tail(dat[1,], min(which(!is.na(dat[1,])))) the result is the same initial row, and I don't understand why.
Thank you very much for any suggestion.
If you just want all NAs pushed to the end, you could try:
dat <- as.data.frame(rbind(c(NA,NA,1,3,5,NA,NA,NA), c(NA,1:3,6:8,NA), c(1:7,NA)))
dat[3,2] <- NA
> dat
V1 V2 V3 V4 V5 V6 V7 V8
1 NA NA 1 3 5 NA NA NA
2 NA 1 2 3 6 7 8 NA
3 1 NA 3 4 5 6 7 NA
# for each row, order(is.na(...)) moves the NA positions to the end
# (order() is stable), then the shifted rows are bound back together
dat.new <- do.call(rbind, lapply(1:nrow(dat), function(x)
  t(matrix(dat[x, order(is.na(dat[x, ]))]))))
colnames(dat.new) <- colnames(dat)
> dat.new
V1 V2 V3 V4 V5 V6 V7 V8
[1,] 1 3 5 NA NA NA NA NA
[2,] 1 2 3 6 7 8 NA NA
[3,] 1 3 4 5 6 7 NA NA
I don't think you can do this without a loop.
dat <- as.data.frame(rbind(c(NA,NA,1,3,5,NA,NA,NA), c(NA,1:3,6:8,NA), c(1:7,NA)))
dat[3,2] <- NA
# V1 V2 V3 V4 V5 V6 V7 V8
# 1 NA NA 1 3 5 NA NA NA
# 2 NA 1 2 3 6 7 8 NA
# 3 1 NA 3 4 5 6 7 NA
t(apply(dat, 1, function(x) {
if (is.na(x[1])) {
y <- x[-seq_len(which.min(is.na(x))-1)]
length(y) <- length(x)
y
} else x
}))
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
#[1,] 1 3 5 NA NA NA NA NA
#[2,] 1 2 3 6 7 8 NA NA
#[3,] 1 NA 3 4 5 6 7 NA
Then turn the matrix into a data.frame if you must.
Here is the answer using the tail function:
dat <- as.data.frame(rbind(c(NA,NA,1,3,5,NA,NA,NA), c(NA,1:3,6:8,NA), c(1:7,NA)))
dat
for (i in 1:nrow(dat)) {
if (is.na(dat[i,1])==TRUE) {
# drops initial NAs of the row (if the sequence starts with NAs)
dat1 <- tail(as.integer(dat[i,]), -min(which(!is.na(dat[i,]))-1))
# adds final NAs to keep the row length constant (i.e. conformable with 'dat')
length(dat1) <- ncol(dat)
dat[i,] <- dat1
}
}
dat
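On the tail puzzle from the question: tail() on a data.frame drops rows, not elements, so tail(dat[1,], n) on a one-row data.frame just returns that row. Coercing the row to a plain vector first (as the loop above does with as.integer) makes tail() work element-wise. A minimal sketch:

```r
dat <- as.data.frame(rbind(c(NA, NA, 1, 3, 5, NA, NA, NA),
                           c(NA, 1:3, 6:8, NA),
                           c(1:7, NA)))

tail(dat[1, ], 3)          # still the whole 1-row data.frame: tail() drops ROWS
v <- as.integer(dat[1, ])  # coerce the row (a 1-row data.frame) to a vector
tail(v, -2)                # drops the first two elements: 1 3 5 NA NA NA
```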