RowSums NA + NA gives 0 [duplicate]

This question already has answers here:
There is pmin and pmax each taking na.rm, why no psum?
(3 answers)
Closed 6 years ago.
I'm trying to understand a (for me) weird behavior of the function rowSums. Imagine I have this super simple dataframe:
a = c(NA, NA,3)
b = c(2,NA,2)
df = data.frame(a,b)
df
a b
1 NA 2
2 NA NA
3 3 2
and now I want a third column that is the sum of the other two. I cannot simply use + because of the NAs:
df$c <- df$a + df$b
df
a b c
1 NA 2 NA
2 NA NA NA
3 3 2 5
but if I use rowSums, rows that are all NA come out as 0, while rows with only one NA work fine:
df$d <- rowSums(df, na.rm=T)
df
a b c d
1 NA 2 NA 2
2 NA NA NA 0
3 3 2 5 10
Am I missing something?
Thanks to all

One option with rowSums is to compute the row sums with na.rm=TRUE and then multiply by a term that turns all-NA rows into NA: take the rowSums of the negated (!) logical matrix of NA positions, negate (!) that count, and raise NA to it (NA^), which gives NA for all-NA rows and 1 otherwise.
rowSums(df, na.rm=TRUE) * NA^!rowSums(!is.na(df))
#[1] 2 NA 10
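To see why this works, here is how the pieces evaluate for the example df with columns a, b and c (i.e. before the d column is added); this is just a step-by-step illustration of the same expression:
rowSums(!is.na(df))      # number of non-NA values per row: 1 0 3
!rowSums(!is.na(df))     # TRUE only where a row is all NA: FALSE TRUE FALSE
NA^!rowSums(!is.na(df))  # NA^TRUE is NA, NA^FALSE is 1:    1 NA 1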

Because
sum(numeric(0))
# 0
Once you use na.rm = TRUE in rowSums, every value in the second row is dropped, leaving numeric(0). Its sum is therefore 0.
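A quick illustration with the all-NA second row (this only shows the equivalence, not rowSums internals):
x <- c(NA, NA, NA)
sum(x[!is.na(x)])  # same as sum(numeric(0))
# [1] 0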
If you want to get NA back for all-NA rows, it takes two steps. I recommend writing a small function for this purpose:
my_rowSums <- function(x) {
  if (is.data.frame(x)) x <- as.matrix(x)
  z <- base::rowSums(x, na.rm = TRUE)
  z[!base::rowSums(!is.na(x))] <- NA
  z
}
my_rowSums(df)
# [1] 2 NA 10
This is particularly useful if the input x is a data frame (as in your case). base::rowSums first checks whether the input is a matrix; if it gets a data frame, it converts it to a matrix first, and that type conversion is actually more costly than the row sum computation itself. Note that we call base::rowSums twice, so to pay the conversion cost only once we make sure x is a matrix beforehand.
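A rough way to see that conversion overhead (timings vary by machine; big_df and big_mat are just made-up test objects for this sketch):
big_df  <- as.data.frame(matrix(rnorm(1e6), ncol = 10))
big_mat <- as.matrix(big_df)
system.time(for (i in 1:100) rowSums(big_df))   # pays the data.frame -> matrix conversion 100 times
system.time(for (i in 1:100) rowSums(big_mat))  # no conversion needed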
For @akrun's "hacking" answer above, I suggest the same treatment:
akrun_rowSums <- function(x) {
  if (is.data.frame(x)) x <- as.matrix(x)
  rowSums(x, na.rm = TRUE) * NA^!rowSums(!is.na(x))
}
akrun_rowSums(df)
# [1] 2 NA 10

Related

Fast way to insert values in column of data frame in R

I am currently trying to find unique elements between two columns of a data frame and write these to a new final data frame.
This is my code, which works perfectly fine and produces a result that matches my expectation.
set.seed(42)
df <- data.frame(a = sample(1:15, 10),
                 b = sample(1:15, 10))
unique_to_a <- df$a[!(df$a %in% df$b)]
unique_to_b <- df$b[!(df$b %in% df$a)]
n <- max(c(unique_to_a, unique_to_b))
out <- data.frame(A=rep(NA,n), B=rep(NA,n))
for (element in unique_to_a) {
  out[element, "A"] <- element
}
for (element in unique_to_b) {
  out[element, "B"] <- element
}
out
The problem is that it is very slow, because the real data contains 100,000s of rows. I am quite sure it is because of the repeated indexing in the for loops, and I am sure there is a quicker, vectorized way, but I don't see it...
Any ideas on how to speed up the operation are much appreciated.
Cheers!
Didn't compare the speed but at least this is more concise:
elements <- with(df, list(setdiff(a, b), setdiff(b, a)))
data.frame(sapply(elements, \(x) replace(rep(NA, max(unlist(elements))), x, x)))
# X1 X2
# 1 NA NA
# 2 NA NA
# 3 NA 3
# 4 NA NA
# 5 NA NA
# 6 NA NA
# 7 NA NA
# 8 NA NA
# 9 NA NA
# 10 NA NA
# 11 11 NA
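Regarding speed, here is a hedged sketch of a fully vectorized version of the original loops (reusing unique_to_a and unique_to_b from the question): one indexed assignment per column instead of one assignment per element, which avoids the repeated data frame indexing.
n <- max(unique_to_a, unique_to_b)
out <- data.frame(A = rep(NA_integer_, n), B = rep(NA_integer_, n))
out$A[unique_to_a] <- unique_to_a  # each unique value lands at its own row index, as in the loop
out$B[unique_to_b] <- unique_to_b
out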

Removing NA for a calculation

I have a problem with NA values: my dataset imported from Excel does not have the same number of values in every column, so the shorter columns are filled with NA. When I compute the similarity index with the Psicalc function of the RInSp package, every row containing an NA value gets deleted.
B F
4 7
5 6
6 8
7 5
NA 4
NA 3
NA 2
Do you know how to handle or remove the NAs without deleting the whole rows, or without affecting the package? Besides, when I run import.RInSp I get this message:
In if (class(filename) == "character") { :
the condition has length > 1 and only the first element will be used
Thank you so much
Many R functions (specifically in base R) have an na.rm argument, which is FALSE by default. That means that if you omit this argument and your data contain NA, your "calculation" will result in NA. To drop the NAs from the calculation, include the na.rm argument and set it to TRUE.
Example:
x <- c(4,5,6,7,NA,NA)
mean(x) # Oops!
[1] NA
mean(x, na.rm=TRUE)
[1] 5.5

Subtract rows with numeric values and ignore NAs

I have several data frames containing 18 columns with approx. 50000 rows. Each row entry represents a measurement at a specific site (= column), and the data contain NA values.
I need to subtract the consecutive rows per column (e.g. row(i+1)-row(i)) to detect threshold values, but I need to ignore (and retain) the NAs, so that only the entries with numeric values are subtracted from each other.
I found very helpful posts with data.table solutions for a single column Iterate over a column ignoring but retaining NA values in R, and for multiple column operations (e.g. Summarizing multiple columns with dplyr?).
However, I haven't managed to combine the approaches suggested on SO (i.e. apply diff over multiple columns and ignore the NAs).
Here's an example df for illustration and a solution I tried:
library(data.table)
df <- data.frame(x=c(1:3,NA,NA,9:7),y=c(NA,4:6, NA,15:13), z=c(6,2,7,14,20, NA, NA, 2))
That's how it works for a single column (after converting df with setDT):
diff_x <- setDT(df)[!is.na(x), lag_diff := x - shift(x)] # actually what I want, but for more columns at once
And that's how I apply a diff function over several columns with lapply:
diff_all <- setDT(df)[,lapply(.SD, diff)] # not exactly what I want because NAs are not ignored and the difference between numeric values is not calculated
I'd very much appreciate any suggestion (base, data.table, dplyr, ... solutions) on how to work a valid !is.na or similar statement into this second line of code.
Defining a helper function makes things a bit cleaner:
lag_diff <- function(x) {
  which_nna <- which(!is.na(x))
  out <- rep(NA_integer_, length(x))
  out[which_nna] <- x[which_nna] - shift(x[which_nna])
  out
}
cols <- c("x", "y", "z")
setDT(df)
df[, paste0("lag_diff_", cols) := lapply(.SD, lag_diff), .SDcols = cols]
Result:
# x y z lag_diff_x lag_diff_y lag_diff_z
# 1: 1 NA 6 NA NA NA
# 2: 2 4 2 1 NA -4
# 3: 3 5 7 1 1 5
# 4: NA 6 14 NA 1 7
# 5: NA NA 20 NA NA 6
# 6: 9 15 NA 6 9 NA
# 7: 8 14 NA -1 -1 NA
# 8: 7 13 2 -1 -1 -18
So you are looking for:
library("data.table")
df <- data.frame(x=c(1:3,NA,NA,9:7),y=c(NA,4:6, NA,15:13), z=c(6,2,7,14,20, NA, NA, 2))
setDT(df)
# diff_x <- df[!is.na(x), lag_diff := x - shift(x)] # actually what I want, but
lag_d <- function(x) { y <- x[!is.na(x)]; x[!is.na(x)] <- y - shift(y); x }
df[, lapply(.SD, lag_d)]
or
library("data.table")
df <- data.frame(x=c(1:3,NA,NA,9:7),y=c(NA,4:6, NA,15:13), z=c(6,2,7,14,20, NA, NA, 2))
lag_d <- function(x) { y <- x[!is.na(x)]; x[!is.na(x)] <- y - shift(y); x }
as.data.frame(lapply(df, lag_d))
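Since the question also asks about dplyr, here is a minimal self-contained sketch of the same per-column idea with across() (assumes dplyr >= 1.0; lag_diff_na is a made-up helper name, and its logic mirrors the helpers above):
library(dplyr)
df <- data.frame(x = c(1:3, NA, NA, 9:7), y = c(NA, 4:6, NA, 15:13), z = c(6, 2, 7, 14, 20, NA, NA, 2))
# difference to the previous non-NA value within each column, NAs kept in place
lag_diff_na <- function(x) {
  out <- rep(NA_real_, length(x))
  nna <- which(!is.na(x))
  out[nna] <- x[nna] - dplyr::lag(x[nna])
  out
}
df %>% mutate(across(c(x, y, z), lag_diff_na, .names = "lag_diff_{.col}"))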

Sum 2 columns, ignore NA, except when both are NA [duplicate]

This question already has answers here:
Using `:=` in data.table to sum the values of two columns in R, ignoring NAs
(2 answers)
Closed 3 years ago.
I'm trying to sum 2 columns that contain some NAs. There are a lot of forum questions like my first one (how to sum and ignore NA), but now I want it to return NA when both columns are NA in a specific row. This is an example:
library(data.table)
df <- data.table(x = c(1, 2, NA),
                 y = c(1, NA, NA))
> df
x y
1 1
2 NA
NA NA
and I want this:
x y final
1 1 2
2 NA 2
NA NA NA
I've tried the following:
df$sum <- rowSums(df[, c("x", "y")], na.rm = TRUE)
df$final <- ifelse(is.na(df$x) && is.na(df$y), NA,
                   ifelse(is.na(df$x) | is.na(df$y), df$sum,
                          ifelse(!is.na(df$x) && !is.na(df$y), df$sum)))
But this doesn't return what I want. Could someone help me?
NOTE: Some have said this is a duplicate because I ask for the NAs to be ignored, but those questions do not answer my main question: how should 2 x NA give me NA and not 0?
I used the following. It gives sums even when there are NAs, but returns NA when all summed elements are NA.
rowSums(df, na.rm = TRUE) * NA ^ (rowSums(!is.na(df)) == 0)
Here are two more options:
ifelse(rowSums(is.na(df)) != ncol(df), rowSums(df, na.rm = TRUE), NA)
#[1] 2 2 NA
and
vals <- rowSums(df, na.rm = TRUE)
NA^(vals == 0) * vals
#[1] 2 2 NA
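As a side note on why the original attempt misbehaves: && and || in R expect length-one conditions (older R silently used only the first element; recent versions throw an error), while & and | are vectorized. Swapping in & is enough to make the original idea work; a sketch reusing df from the question:
df$final <- ifelse(is.na(df$x) & is.na(df$y), NA,
                   rowSums(df[, c("x", "y")], na.rm = TRUE))
df$final
# [1]  2  2 NA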

Identify duplicate values and remove them

I have a vector:
vec <- c(2,3,5,5,5,5,6,1,9,4,4,4)
I want to check if a particular value is repeated consecutively and if yes, keep the first two values and assign NA to the rest of the values.
For example, in the above vector, 5 is repeated 4 times, therefore I will keep the first two 5's and make the second two 5's NA.
Similarly, 4 is repeated three times, so I will keep the first two 4's and remove the third one.
In the end my vector should look like:
2,3,5,5,NA,NA,6,1,9,4,4,NA
I did this:
bad.values <- vec - binhf::shift(vec, 1, dir="right")
bad.repeat <- bad.values == 0
vec[bad.repeat] <- NA
[1] 2 3 5 NA NA NA 6 1 9 4 NA NA
I can only get it to keep the first 5 and the first 4 (rather than the first two 5's and the first two 4's).
Any solutions?
Another option with just base R functions:
rl <- rle(vec)
i <- unlist(lapply(rl$lengths, function(l) if (l > 2) c(FALSE,FALSE,rep(TRUE, l - 2)) else rep(FALSE, l)))
vec * NA^i
which gives:
[1] 2 3 5 5 NA NA 6 1 9 4 4 NA
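A related compact base R variant of the same rle idea (just a sketch): number each position within its run with sequence() on the run lengths, then blank out anything past the second position.
runs <- rle(vec)
pos_in_run <- sequence(runs$lengths)  # 1 1 1 2 3 4 1 1 1 1 2 3
replace(vec, pos_in_run > 2, NA)
# [1]  2  3  5  5 NA NA  6  1  9  4  4 NA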
I figured it out. I just had to change the shift argument to 2 in binhf::shift:
vec <- c(2,3,5,5,5,5,6,1,9,4,4,4)
bad.values <- vec - binhf::shift(vec, 2, dir="right")
bad.repeat <- bad.values == 0
vec[bad.repeat] <- NA
[1] 2 3 5 5 NA NA 6 1 9 4 4 NA
I think this might work, if I got your problem right:
vec <- c(2,3,5,5,5,5,6,1,9,4,4,4)
diffs1 <- vec - binhf::shift(vec, 1, dir = "right")
diffs2 <- vec - binhf::shift(vec, 2, dir = "right")
get_zeros <- abs(diffs1) + abs(diffs2)
vec[which(get_zeros == 0)] <- NA
I hope this helps!
This question may refer to a problem you encountered in a data frame, not a vector. In any case, here's a tidyverse solution to both.
library(dplyr)
tibble(x = vec) %>%
  group_by(x) %>%
  mutate(mycol = ifelse(row_number() > 2, NA, x)) %>%
  pull(mycol)
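One caveat on this sketch: group_by(x) groups all equal values, not consecutive runs, so it only matches the expected output when repeats sit next to each other (as they do in vec). A run-aware variant can group on run ids instead (data.table::rleid() is borrowed here just to build the run id):
library(dplyr)
library(data.table)
tibble(x = vec) %>%
  group_by(run = rleid(x)) %>%
  mutate(mycol = ifelse(row_number() > 2, NA, x)) %>%
  pull(mycol)
# [1]  2  3  5  5 NA NA  6  1  9  4  4 NA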
