I have a few moderately large data frames and need to do a calculation across different columns in the data; for example, I want to compare column i in one data frame with column i - 1 in another. I currently use a for loop. The calculation involves element-wise comparison of each pair of values, so it is somewhat slow: for example, I take each column of data, turn it into a matrix and compare it with the transpose of itself (with some additional complications). In my application (in which the data have about 100 columns and 3000 rows) this currently takes about 95 seconds. I am looking for ways to make this more efficient. If I were comparing the SAME column of each data frame I would try using mapply, but because I need to make comparisons across different columns I don't see how that could work. The current code is something like this:
d1 <- as.data.frame(matrix(rnorm(100000), nrow=1000))
d2 <- as.data.frame(matrix(rnorm(100000), nrow=1000))
r <- list()
ptm2 <- proc.time()
for(i in 2:100){
t <- matrix(0 + d1[,i] > 0,1000,1000)
u <- matrix(d1[,i],1000,1000)*t(matrix(d2[,i-1],1000,1000))
r[[i]] <- t * u
}
proc.time() - ptm2
This takes about 3 seconds on my computer; as mentioned the actual calculation is a bit more complicated than this MWE suggests. Obviously one could also improve efficiency in the calculation itself but I am looking for a solution to the 'compare column i to column i-1' issue.
Based on your example, if you align the d1 and d2 matrices ahead of time according to which columns you are comparing, here is how you could use mapply. It appears to be only marginally faster, so parallel computing would be a better way to achieve speed gains.
d1 <- as.data.frame(matrix(rnorm(100000), nrow=1000))
d2 <- as.data.frame(matrix(rnorm(100000), nrow=1000))
r <- list()
ptm2 <- proc.time()
for(i in 2:100){
t <- matrix(0 + d1[,i] > 0,1000,1000)
u <- matrix(d1[,i],1000,1000)*t(matrix(d2[,i-1],1000,1000))
r[[i]] <- t * u
}
proc.time() - ptm2
#user system elapsed
#0.90 0.87 1.79
#select last 99 columns of d1 and first 99 columns of d2 based on your calcs
d1_99 <- as.data.frame(d1[,2:100]) #have to convert to data.frame for mapply to loop across columns; a data.frame is simply a list of vectors of equal length
d2_99 <- as.data.frame(d2[,1:99])
ptm3 <- proc.time()
r_test <- mapply(function(x, y) {
t <- matrix(x > 0, 1000, 1000) #didn't understand why you were adding 0 in your example
u <- matrix(x,1000,1000)*t(matrix(y,1000,1000))
t * u
}, x=d1_99, y=d2_99, SIMPLIFY = FALSE)
proc.time() - ptm3
#user system elapsed
#0.91 0.83 1.75
class(r_test)
#[1] "list"
length(r_test)
#[1] 99
#test for equality
all.equal(r[[2]], r_test[[1]])
#[1] TRUE
all.equal(r[[100]], r_test[[99]])
#[1] TRUE
I made a matrix-based population model; however, I would like to run more than one simultaneously in order to represent different groups of animals, so that dispersing individuals can move between matrices. I originally just repeated everything to get a second matrix, but then I realised that because I run the model using a for loop and break under certain conditions (when that specific matrix should stop running, i.e. that group has died out), it is, understandably, stopping the whole model rather than just that single matrix.
I was wondering if anyone had any suggestions on the best way to code the model so that instead of breaking, and stopping the whole for loop, it just stops running that specific matrix. I'm a little stumped. I have included a single run of one matrix below.
Also, if anyone has a more efficient way of creating and running 9 matrices than writing everything out 9 times, advice would be much appreciated.
n.steps <- 100
mats <- array(0,c(85,85,n.steps))
ns <- array(0,c(85,n.steps))
ns[1,1]<-0
ns[12,1]<-rpois(1,3)
ns[24,1]<-rpois(1,3)
ns[85,1] <- 1
birth<-4
nextbreed<-12
for (i in 2:n.steps){
# set up an empty matrix;
mat <- matrix(0,nrow=85,ncol=85)
surv.age.1 <- 0.95
x <- 2:10
diag(mat[x,(x-1)]) <- surv.age.1
surv.age.a <- 0.97
disp <- 1:74
disp <- disp*-0.001
disp1<-0.13
disp<-1-(disp+disp1)
survdisp<-surv.age.a*disp
x <- 11:84
diag(mat[x,(x-1)])<-survdisp
if (i == nextbreed) {
pb <- 1
} else {
pb <- 0
}
if (pb == 1) {
(nextbreed <- nextbreed+12)
}
mat[1,85] <- pb*birth
mat[85,85]<-1
death<-sample(c(replicate(1000,
sample(c(1,0), prob=c(0.985, 1-0.985), size = 1))),1)
if (death == 0) {
break()}
mats[,,i]<- mat
ns[,i] <- mat%*%ns[,i-1]
}
group.size <- apply(ns[1:85,],2,sum)
plot(group.size)
View(mat)
View(ns)
As somebody else suggested on Twitter, one solution might be to simply turn the matrix into all 0s whenever death happens. It looks to me like death is the probability that a local population disappears? In which case it seems to make good biological sense to just turn the entire population matrix into 0s.
A few other small changes: I made a list of replicate simulations so I could summarize them easily.
If I understand correctly,
death<-sample(c(replicate(1000,sample(c(1,0), prob=c(0.985, 1-0.985), size =1))),1)
says " a local population dies completely with probability 1.5% ". In which case, I think you could replace it with rbinom(). I did that below and my plots look similar to those I made with your code.
Hope that helps!
lots <- replicate(100, simplify = FALSE, expr = {
for (i in 2:n.steps){
# set up an empty matrix;
mat <- matrix(0,nrow=85,ncol=85)
surv.age.1 <- 0.95
x <- 2:10
diag(mat[x,(x-1)]) <- surv.age.1
surv.age.a <- 0.97
disp <- 1:74
disp <- disp*-0.001
disp1<-0.13
disp<-1-(disp+disp1)
survdisp<-surv.age.a*disp
x <- 11:84
diag(mat[x,(x-1)])<-survdisp
if (i == nextbreed) {
pb <- 1
} else {
pb <- 0
}
if (pb == 1) {
(nextbreed <- nextbreed+12)
}
mat[1,85] <- pb*birth
mat[85,85]<-1
death<-rbinom(1, size = 1, prob = 0.6)
if (death == 0) {
mat <- 0
}
mats[,,i]<- mat
ns[,i] <- mat%*%ns[,i-1]
}
ns
})
lapply(lots, FUN = function(x) apply(x[1:85,],2,sum))
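Going one step further, here is a hedged sketch (not in the original answer) of one way to summarise the replicates stored in lots, e.g. plotting the mean group size per time step:
sizes <- sapply(lots, function(x) colSums(x[1:85, ]))   # matrix: n.steps rows x 100 replicate columns
plot(rowMeans(sizes), type = "l",
     xlab = "time step", ylab = "mean group size across replicates")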
I have looked around StackOverflow, but I cannot find a solution specific to my problem, which involves appending rows to an R data frame.
I am initializing an empty 2-column data frame, as follows.
df = data.frame(x = numeric(), y = character())
Then, my goal is to iterate through a list of values and, in each iteration, append a value to the end of the data frame. I started with the following code.
for (i in 1:10) {
df$x = rbind(df$x, i)
df$y = rbind(df$y, toString(i))
}
I also attempted the functions c, append, and merge without success. Please let me know if you have any suggestions.
Update from comment:
I don't presume to know how R was meant to be used, but I wanted to avoid the additional line of code that would be required to update the indices on every iteration, and I cannot easily preallocate the size of the data frame because I don't know how many rows it will ultimately take. Remember that the above is merely a toy example meant to be reproducible. Either way, thanks for your suggestion!
Update
Not knowing what you are trying to do, I'll share one more suggestion: Preallocate vectors of the type you want for each column, insert values into those vectors, and then, at the end, create your data.frame.
Continuing with Julian's f3 (a preallocated data.frame) as the fastest option so far, defined as:
# pre-allocate space
f3 <- function(n){
df <- data.frame(x = numeric(n), y = character(n), stringsAsFactors = FALSE)
for(i in 1:n){
df$x[i] <- i
df$y[i] <- toString(i)
}
df
}
Here's a similar approach, but one where the data.frame is created as the last step.
# Use preallocated vectors
f4 <- function(n) {
x <- numeric(n)
y <- character(n)
for (i in 1:n) {
x[i] <- i
y[i] <- i
}
data.frame(x, y, stringsAsFactors=FALSE)
}
microbenchmark from the "microbenchmark" package will give us more comprehensive insight than system.time:
library(microbenchmark)
microbenchmark(f1(1000), f3(1000), f4(1000), times = 5)
# Unit: milliseconds
# expr min lq median uq max neval
# f1(1000) 1024.539618 1029.693877 1045.972666 1055.25931 1112.769176 5
# f3(1000) 149.417636 150.529011 150.827393 151.02230 160.637845 5
# f4(1000) 7.872647 7.892395 7.901151 7.95077 8.049581 5
f1() (the approach below) is incredibly inefficient because of how often it calls data.frame and because growing objects that way is generally slow in R. f3() is much improved due to preallocation, but the data.frame structure itself might be part of the bottleneck here. f4() tries to bypass that bottleneck without compromising the approach you want to take.
Original answer
This is really not a good idea, but if you wanted to do it this way, I guess you can try:
for (i in 1:10) {
df <- rbind(df, data.frame(x = i, y = toString(i)))
}
Note that in your code, there is one other problem:
You should set stringsAsFactors = FALSE if you don't want the character values converted to factors. Use: df = data.frame(x = numeric(), y = character(), stringsAsFactors = FALSE)
Let's benchmark the three solutions proposed:
# use rbind
f1 <- function(n){
df <- data.frame(x = numeric(), y = character())
for(i in 1:n){
df <- rbind(df, data.frame(x = i, y = toString(i)))
}
df
}
# use list
f2 <- function(n){
df <- data.frame(x = numeric(), y = character(), stringsAsFactors = FALSE)
for(i in 1:n){
df[i,] <- list(i, toString(i))
}
df
}
# pre-allocate space
f3 <- function(n){
df <- data.frame(x = numeric(n), y = character(n), stringsAsFactors = FALSE)
for(i in 1:n){
df$x[i] <- i
df$y[i] <- toString(i)
}
df
}
system.time(f1(1000))
# user system elapsed
# 1.33 0.00 1.32
system.time(f2(1000))
# user system elapsed
# 0.19 0.00 0.19
system.time(f3(1000))
# user system elapsed
# 0.14 0.00 0.14
The best solution is to pre-allocate space (as intended in R). The next-best solution is to use list, and the worst solution (at least based on these timing results) appears to be rbind.
Suppose you simply don't know the size of the data.frame in advance. It could well be a few rows or a few million. You need some sort of container that grows dynamically. Taking into consideration my experience and all the related answers on SO, I came up with 4 distinct solutions:
rbindlist to the data.frame
Use data.table's fast set operation and couple it with manually doubling the table when needed.
Use RSQLite and append to the table held in memory.
data.frame's own ability to grow, combined with a custom environment (which has reference semantics) to store the data.frame so it is not copied on return.
Here is a test of all the methods for both small and large number of appended rows. Each method has 3 functions associated with it:
create(first_element) that returns the appropriate backing object with first_element put in.
append(object, element) that appends the element to the end of the table (represented by object).
access(object) gets the data.frame with all the inserted elements.
rbindlist to the data.frame
That is quite easy and straightforward (these helpers assume the data.table package is loaded):
create.1<-function(elems)
{
return(as.data.table(elems))
}
append.1<-function(dt, elems)
{
return(rbindlist(list(dt, elems),use.names = TRUE))
}
access.1<-function(dt)
{
return(dt)
}
data.table::set + manually doubling the table when needed.
I will store the true length of the table in a rowcount attribute.
create.2<-function(elems)
{
return(as.data.table(elems))
}
append.2<-function(dt, elems)
{
n<-attr(dt, 'rowcount')
if (is.null(n))
n<-nrow(dt)
if (n==nrow(dt))
{
tmp<-elems[1]
tmp[[1]]<-rep(NA,n)
dt<-rbindlist(list(dt, tmp), fill=TRUE, use.names=TRUE)
setattr(dt,'rowcount', n)
}
pos<-as.integer(match(names(elems), colnames(dt)))
for (j in seq_along(pos))
{
set(dt, i=as.integer(n+1), pos[[j]], elems[[j]])
}
setattr(dt,'rowcount',n+1)
return(dt)
}
access.2<-function(elems)
{
n<-attr(elems, 'rowcount')
return(as.data.table(elems[1:n,]))
}
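For illustration (not part of the original answer), the three helpers chain together like this; data.table must be attached for rbindlist(), set() and setattr():
library(data.table)
dt <- create.2(list(a = 1, b = 2))
dt <- append.2(dt, list(a = 3, b = 4))
access.2(dt)   # a data.table with two rows: (1, 2) and (3, 4)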
SQL should be optimized for fast record insertion, so I initially had high hopes for the RSQLite solution.
This is basically a copy & paste of Karsten W.'s answer on a similar thread.
create.3<-function(elems)
{
con <- RSQLite::dbConnect(RSQLite::SQLite(), ":memory:")
RSQLite::dbWriteTable(con, 't', as.data.frame(elems))
return(con)
}
append.3<-function(con, elems)
{
RSQLite::dbWriteTable(con, 't', as.data.frame(elems), append=TRUE)
return(con)
}
access.3<-function(con)
{
return(RSQLite::dbReadTable(con, "t", row.names=NULL))
}
data.frame's own row-appending + custom environment.
create.4<-function(elems)
{
env<-new.env()
env$dt<-as.data.frame(elems)
return(env)
}
append.4<-function(env, elems)
{
env$dt[nrow(env$dt)+1,]<-elems
return(env)
}
access.4<-function(env)
{
return(env$dt)
}
The test suite:
For convenience, I will use one test function to cover them all via indirect calling. (I checked: using do.call instead of calling the functions directly doesn't make the code run measurably longer.)
test<-function(id, n=1000)
{
n<-n-1
el<-list(a=1,b=2,c=3,d=4)
o<-do.call(paste0('create.',id),list(el))
s<-paste0('append.',id)
for (i in 1:n)
{
o<-do.call(s,list(o,el))
}
return(do.call(paste0('access.', id), list(o)))
}
Let's see the performance for n=10 insertions.
I also added a 'placebo' function (with suffix 0) that doesn't do anything - just to measure the overhead of the test setup.
library(microbenchmark)
library(ggplot2)   # provides the autoplot() generic used below
r<-microbenchmark(test(0,n=10), test(1,n=10),test(2,n=10),test(3,n=10), test(4,n=10))
autoplot(r)
For 1E5 rows (measurements done on an Intel(R) Core(TM) i7-4710HQ CPU @ 2.50GHz):
nr function time
4 data.frame 228.251
3 sqlite 133.716
2 data.table 3.059
1 rbindlist 169.998
0 placebo 0.202
It looks like the SQLite-based solution, although it regains some speed on large data, is nowhere near data.table + manual exponential growth. The difference is almost two orders of magnitude!
Summary
If you know that you will append a rather small number of rows (n <= 100), go ahead and use the simplest possible solution: just assign the rows to the data.frame using bracket notation and ignore the fact that the data.frame is not pre-populated (see the sketch just below).
For everything else, use data.table::set and grow the data.table exponentially (e.g. using my code).
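That simplest approach is essentially f2() from the earlier benchmark; a minimal sketch using the same toy columns as the question:
df <- data.frame(x = numeric(), y = character(), stringsAsFactors = FALSE)
for (i in 1:10) {
df[nrow(df) + 1, ] <- list(i, toString(i))   # grow row by row via bracket notation
}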
Update with purrr, tidyr & dplyr
As the question is already dated (6 years), the answers are missing a solution with the newer tidyverse packages. So for people working with these packages, I want to add a solution to the previous answers - all quite interesting.
The biggest advantage of purrr and tidyr is better readability, IMHO.
purrr replaces lapply with the more flexible map() family,
the tibble package offers the super-intuitive add_row() - it just does what it says :)
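For completeness, the snippet below assumes the relevant packages are attached and that df is the empty data frame from the question; a minimal setup sketch:
library(dplyr)
library(purrr)
library(tibble)   # add_row() comes from tibble
df <- data.frame(x = numeric(), y = character(), stringsAsFactors = FALSE)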
map_df(1:1000, function(x) { df %>% add_row(x = x, y = toString(x)) })
This solution is short and intuitive to read, and it's relatively fast:
system.time(
map_df(1:1000, function(x) { df %>% add_row(x = x, y = toString(x)) })
)
user system elapsed
0.756 0.006 0.766
It scales almost linearly, so for 1e5 rows, the performance is:
system.time(
map_df(1:100000, function(x) { df %>% add_row(x = x, y = toString(x)) })
)
user system elapsed
76.035 0.259 76.489
which would make it rank second, right after data.table (if you ignore the placebo), in the benchmark by @Adam Ryczkowski:
nr function time
4 data.frame 228.251
3 sqlite 133.716
2 data.table 3.059
1 rbindlist 169.998
0 placebo 0.202
A more generic solution might be the following.
extendDf <- function (df, n) {
withFactors <- sum(sapply (df, function(X) (is.factor(X)) )) > 0
nr <- nrow (df)
colNames <- names(df)
for (c in 1:length(colNames)) {
if (is.factor(df[,c])) {
col <- vector (mode='character', length = nr+n)
col[1:nr] <- as.character(df[,c])
col[(nr+1):(n+nr)]<- rep(col[1], n) # to avoid extra levels
col <- as.factor(col)
} else {
col <- vector (mode=mode(df[1,c]), length = nr+n)
class(col) <- class (df[1,c])
col[1:nr] <- df[,c]
}
if (c==1) {
newDf <- data.frame (col ,stringsAsFactors=withFactors)
} else {
newDf[,c] <- col
}
}
names(newDf) <- colNames
newDf
}
The function extendDf() extends a data frame with n rows.
As an example:
aDf <- data.frame (l=TRUE, i=1L, n=1, c='a', t=Sys.time(), stringsAsFactors = TRUE)
extendDf (aDf, 2)
# l i n c t
# 1 TRUE 1 1 a 2016-07-06 17:12:30
# 2 FALSE 0 0 a 1970-01-01 01:00:00
# 3 FALSE 0 0 a 1970-01-01 01:00:00
system.time (eDf <- extendDf (aDf, 100000))
# user system elapsed
# 0.009 0.002 0.010
system.time (eDf <- extendDf (eDf, 100000))
# user system elapsed
# 0.068 0.002 0.070
Let's take a vector 'point' which contains the numbers 1 to 5:
point = c(1,2,3,4,5)
If we want to append the number 6 anywhere inside the vector, then the commands below may come in handy:
i) Vectors
new_var = append(point, 6, after = length(point))
ii) Columns of a table
new_var = append(point, 6, after = length(mtcars$mpg))
The append command takes three arguments (see the example after this list):
the vector/column to be modified.
value to be included in the modified vector.
a subscript, after which the values are to be appended.
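For example (an illustration, not part of the original answer), to insert 6 in the middle of point rather than at the end:
new_var = append(point, 6, after = 2)
new_var
# [1] 1 2 6 3 4 5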
Simple!
My solution is almost the same as the original answer, but it didn't work for me. So I gave names to the columns, and now it works:
painel <- rbind(painel, data.frame("col1" = xtweets$created_at,
"col2" = xtweets$text))
For a given column in a data frame, I want to construct a new vector which, for each point, consists of the average of the points on either side. However, for the last observation it will instead be the second-to-last value, and for the first observation it will be the second value. I wrote the R code below to solve the issue; however, I am calling it repeatedly and it is extremely slow. Can someone give some tips on how to do it more efficiently? Thanks.
x1 <- c(rep('a',100),rep('b',100),rep('c',100))
x2 <- rnorm(300)
x <- data.frame(x1,x2)
names(x) <- c('col1','data1')
a.linear.interpolation <- function(x) {
require(zoo)
require(data.table)
a.dattab <- data.table(x)
setkey(a.dattab,col1)
#replace any NA values using LOCF / NOCB
a.dattab[,data1:=na.locf(data1,na.rm=FALSE),by=list(col1)]
a.dattab[,data1:=na.locf(data1,na.rm=FALSE,fromLast=TRUE),by=list(col1)]
#Adding a within group sequence number and a size of group field to facilitate
#row by row processing
a.dattab[,grpseq:=seq_len(.N),by=list(col1)]
a.dattab[,grpseq_max:=.N,by=list(col1)]
#convert back to data.frame
#data.frame seems faster than data.table for this row by row type processing
a.df <- data.frame(a.dattab)
new.col <- vector(length=nrow(a.df))
for(i in seq(nrow(a.df))){
if(a.df[i,"grpseq"]==1){
new.col[i] <- a.df[i+1,"data1"]
}
else if(a.df[i,"grpseq"]==a.df[i,"grpseq_max"]){
new.col[i] <- a.df[i-1,"data1"]
}
else {
new.col[i] <- (a.df[i-1,"data1"]+a.df[i+1,"data1"])/2
}
}
return(new.col)
}
Apart from using zoo's rollmean, the base R filter function can do this sort of thing as well. E.g.:
linint <- function(vec) {
c(vec[2], filter(vec, c(0.5, 0, 0.5))[-c(1, length(vec))], vec[length(vec) - 1])
}
x <- c(1,3,6,10,1)
linint(x)
#[1] 3.0 3.5 6.5 3.5 10.0
And it's pretty quick, chewing through 10M cases in less than a second:
x <- rnorm(1e7)
system.time(linint(x))
#user system elapsed
#0.57 0.18 0.75
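To apply this within each group of the question's data frame (columns col1 and data1), one option — a sketch, not part of the original answer — is base ave(); dfx below just recreates the question's example data under a new name:
dfx <- data.frame(col1 = rep(c('a','b','c'), each = 100), data1 = rnorm(300))
dfx$interp <- ave(dfx$data1, dfx$col1, FUN = linint)   # runs linint() separately per level of col1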
I'm looking for something similar to na.locf() in the zoo package, but instead of always using the previous non-NA value I'd like to use the nearest non-NA value. Some example data:
dat <- c(1, 3, NA, NA, 5, 7)
Replacing NA with na.locf (3 is carried forward):
library(zoo)
na.locf(dat)
# 1 3 3 3 5 7
and na.locf with fromLast set to TRUE (5 is carried backwards):
na.locf(dat, fromLast = TRUE)
# 1 3 5 5 5 7
But I wish the nearest non-NA value to be used. In my example this means that the 3 should be carried forward to the first NA, and the 5 should be carried backwards to the second NA:
1 3 3 5 5 7
I have a solution coded up, but wanted to make sure that I wasn't reinventing the wheel. Is there something already floating around?
FYI, my current code is as follows. Perhaps if nothing else, someone can suggest how to make it more efficient. I feel like I'm missing an obvious way to improve this:
na.pos <- which(is.na(dat))
if (length(na.pos) == length(dat)) {
return(dat)
}
non.na.pos <- setdiff(seq_along(dat), na.pos)
nearest.non.na.pos <- sapply(na.pos, function(x) {
return(which.min(abs(non.na.pos - x)))
})
dat[na.pos] <- dat[non.na.pos[nearest.non.na.pos]]
To answer smci's questions below:
No, any entry can be NA
If all are NA, leave them as is
No. My current solution defaults to the lefthand nearest value, but it doesn't matter
These rows are a few hundred thousand elements typically, so in theory the upper bound would be a few hundred thousand. In reality it'd be no more than a few here & there, typically a single one.
Update: So it turns out that we're going in a different direction altogether, but this was still an interesting discussion. Thanks all!
Here is a very fast one. It uses findInterval to find what two positions should be considered for each NA in your original data:
f1 <- function(dat) {
N <- length(dat)
na.pos <- which(is.na(dat))
if (length(na.pos) %in% c(0, N)) {
return(dat)
}
non.na.pos <- which(!is.na(dat))
intervals <- findInterval(na.pos, non.na.pos,
all.inside = TRUE)
left.pos <- non.na.pos[pmax(1, intervals)]
right.pos <- non.na.pos[pmin(N, intervals+1)]
left.dist <- na.pos - left.pos
right.dist <- right.pos - na.pos
dat[na.pos] <- ifelse(left.dist <= right.dist,
dat[left.pos], dat[right.pos])
return(dat)
}
And here I test it:
# sample data, suggested by @JeffAllen
dat <- as.integer(runif(50000, min=0, max=10))
dat[dat==0] <- NA
# computation times
system.time(r0 <- f0(dat)) # f0() = your function (the code from the question wrapped in a function)
# user system elapsed
# 5.52 0.00 5.52
system.time(r1 <- f1(dat)) # this function
# user system elapsed
# 0.01 0.00 0.03
identical(r0, r1)
# [1] TRUE
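And as a quick check on the question's original example vector (not part of the original answer):
f1(c(1, 3, NA, NA, 5, 7))
# [1] 1 3 3 5 5 7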
Code below. The initial question was not totally well-defined, I had asked for these clarifications:
Is it guaranteed that at least the first and/or last entries are non-NA? [No]
What to do if all entries in a row are NA? [Leave as-is]
Do you care how ties are split i.e. how to treat the middle NA in 1 3 NA NA NA 5 7? [Don't-care/ left]
Do you have an upper-bound (S) on the longest contiguous span of NAs in a row? (I'm thinking a recursive solution if S is small. Or a dataframe solution with ifelse if S is large and number of rows and cols is large.) [worst-case S could be pathologically large, hence recursion should not be used]
geoffjentry, re your solution: your bottlenecks will be the serial calculation of nearest.non.na.pos and the serial assignment dat[na.pos] <- dat[non.na.pos[nearest.non.na.pos]].
For a large gap of length G all we really need to compute is that the first (G/2, round up) items fill-from-left, the rest from right. (I could post an answer using ifelse but it would look similar.)
Are your criteria runtime, big-O efficiency, temp memory usage, or code legibility?
Coupla possible tweaks:
only need to compute N <- length(dat) once
common-case speed enhancement: if (length(na.pos) == 0), skip the row, since it has no NAs
if (length(na.pos) == length(dat)-1), handle the (rare) case where there is only one non-NA entry, hence we fill the entire row with it
Outline solution:
Sadly, na.locf does not work on an entire data frame; you must use sapply, row-wise:
na.fill_from_nn <- function(x) {
row.na <- is.na(x)
fillFromLeft <- na.locf(x, na.rm=FALSE)
fillFromRight <- na.locf(x, fromLast=TRUE, na.rm=FALSE)
disagree <- rle(fillFromLeft!=fillFromRight)
for (loc in (disagree)) { ... resolve conflicts, row-wise }
}
sapply(dat, na.fill_from_nn)
Alternatively, since as you say contiguous NAs are rare, do a fast-and-dumb ifelse to fill isolated NAs from the left. This will operate data-frame-wise => it makes the common case fast. Then handle all the other cases with a row-wise for loop. (This will affect the tie-break on middle elements in a long span of NAs, but you say you don't care.)
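To make that fast-and-dumb pass concrete, a minimal sketch on a single vector dat, filling isolated NAs from the left (illustrative only, not from the original answer):
left <- c(NA, dat[-length(dat)])                          # each position's left-hand neighbour
dat.filled <- ifelse(is.na(dat) & !is.na(left), left, dat)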
I can't think of an obvious simple solution, but, having looked at the suggestions (particularly smci's suggestion of using rle) I came up with a complicated function that appears to be more efficient.
This is the code, I'll explain below:
# Your function
your.func = function(dat) {
na.pos <- which(is.na(dat))
if (length(na.pos) == length(dat)) {
return(dat)
}
non.na.pos <- setdiff(seq_along(dat), na.pos)
nearest.non.na.pos <- sapply(na.pos, function(x) which.min(abs(non.na.pos - x)))
dat[na.pos] <- dat[non.na.pos[nearest.non.na.pos]]
dat
}
# My function
my.func = function(dat) {
nas=is.na(dat)
if (!any(!nas)) return (dat)
t=rle(nas)
f=sapply(t$lengths[t$values],seq)
a=unlist(f)
b=unlist(lapply(f,rev))
x=which(nas)
l=length(dat)
dat[nas]=ifelse(a>b,dat[ ifelse((x+b)>l,x-a,x+b) ],dat[ifelse((x-a)<1,x+b,x-a)])
dat
}
# Test
n = 100000
test.vec = 1:n
set.seed(1)
test.vec[sample(test.vec,n/4)]=NA
system.time(t1<-my.func(test.vec))
system.time(t2<-your.func(test.vec)) # 10 times speed improvement on my machine
# Verify
any(t1!=t2)
My function relies on rle. I read the comments above, but it looks to me like rle works just fine with NA. It is easiest to explain with a small example.
If I start with a vector:
dat=c(1,2,3,4,NA,NA,NA,8,NA,10,11,12,NA,NA,NA,NA,NA,18)
I then get the positions of all the NAs:
x=c(5,6,7,9,13,14,15,16,17)
Then, for every "run" of NAs I create a sequence from 1 to the length of the run:
a=c(1,2,3,1,1,2,3,4,5)
Then I do it again, but I reverse the sequence:
b=c(3,2,1,1,5,4,3,2,1)
Now, I can just compare vectors a and b: If a<=b then look back and grab the value at x-a. If a>b then look ahead and grab the value at x+b. The rest is just handling the corner cases when you have all NAs or NA runs at the end or the start of the vector.
There is probably a better, simpler, solution, but I hope this gets you started.
I like all the rigorous solutions. Though it is not directly what was asked, I found this post while looking for a solution that fills NA values with an interpolation. After reviewing this post, I discovered zoo's na.fill, which works on a zoo object (vector, factor, or matrix):
z <- c(1,2,3,4,5,6,NA,NA,NA,2,3,4,5,6,NA,NA,4,6,7,NA)
z1 <- zoo::na.fill(z, "extend")
Note the smooth transition across the NA values
round(z1, 0)
#> [1] 1 2 3 4 5 6 5 4 3 2 3 4 5 6 5 5 4 6 7 7
Perhaps this could help
Here's my stab at it. I never like to see a for loop in R, but in the case of a sparsely-NA vector, it looks like it will actually be more efficient (performance metrics below). The gist of the code is below.
#get the index of all NA values
nas <- which(is.na(dat))
#get the Boolean map of which are NAs, used later to determine which values can be used as a replacement, and which are just filled-in NA values
namask <- is.na(dat)
#calculate the maximum size of a run of NAs
length <- getLengthNAs(dat);
#the furthest away an NA value could be is half of the length of the maximum NA run
windowSize <- ceiling(length/2)
#loop through all NAs
for (thisIndex in nas){
#extract the neighborhood of this NA
neighborhood <- dat[(thisIndex-windowSize):(thisIndex+windowSize)]
#any already-filled-in values which were NA can be replaced with NAs
neighborhood[namask[(thisIndex-windowSize):(thisIndex+windowSize)]] <- NA
#the center of this neighborhood
center <- windowSize + 1
#compute the difference within this neighborhood to find the nearest non-NA value
delta <- center - which(!is.na(neighborhood))
#find the closest replacement
replacement <- delta[abs(delta) == min(abs(delta))]
#in case length > 1, just pick the first
replacement <- replacement[1]
#replace with the nearest non-NA value.
dat[thisIndex] <- dat[(thisIndex - (replacement))]
}
I liked the code you proposed, but I noticed that we were calculating the delta between every NA value and every other non-NA index in the vector. I think this was the biggest performance hog. Instead, I just extract the minimum-sized neighborhood or window around each NA and find the nearest non-NA value within that window.
So the performance scales linearly with the number of NAs and the window size -- where the window size is (the ceiling of) half the length of the maximum run of NAs. To calculate the length of the maximum run of NAs, you can use the following function:
getLengthNAs <- function(dat){
nas <- which(is.na(dat))
spacing <- diff(nas)
length <- 1;
while (any(spacing == 1)){
length <- length + 1;
spacing <- diff(which(spacing == 1))
}
length
}
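As an aside (not part of the original answer), an rle-based equivalent would be the following, assuming dat contains at least one NA:
runs <- rle(is.na(dat))
max(runs$lengths[runs$values])   # length of the longest run of NAs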
Performance Comparison
#create a test vector with 10% NAs and length 50,000.
dat <- as.integer(runif(50000, min=0, max=10))
dat[dat==0] <- NA
#the a() function is the code posted in the question
a <- function(dat){
na.pos <- which(is.na(dat))
if (length(na.pos) == length(dat)) {
return(dat)
}
non.na.pos <- setdiff(seq_along(dat), na.pos)
nearest.non.na.pos <- sapply(na.pos, function(x) {
return(which.min(abs(non.na.pos - x)))
})
dat[na.pos] <- dat[non.na.pos[nearest.non.na.pos]]
dat
}
#my code
b <- function(dat){
#the same code posted above, but with some additional helper code to sanitize the input
if(is.null(dat)){
return(NULL);
}
if (all(is.na(dat))){
stop("Can't impute NAs if there are no non-NA values.")
}
if (!any(is.na(dat))){
return(dat);
}
#starts with an NA (or multiple), handle these
if (is.na(dat[1])){
firstNonNA <- which(!is.na(dat))[1]
dat[1:(firstNonNA-1)] <- dat[firstNonNA]
}
#ends with an NA (or multiple), handle these
if (is.na(dat[length(dat)])){
lastNonNA <- which(!is.na(dat))
lastNonNA <- lastNonNA[length(lastNonNA)]
dat[(lastNonNA+1):length(dat)] <- dat[lastNonNA]
}
#get the index of all NA values
nas <- which(is.na(dat))
#get the Boolean map of which are NAs, used later to determine which values can be used as a replacement, and which are just filled-in NA values
namask <- is.na(dat)
#calculate the maximum size of a run of NAs
length <- getLengthNAs(dat);
#the furthest away an NA value could be is half of the length of the maximum NA run
#if there's a run at the beginning or end, then the nearest non-NA value could possibly be `length` away, so we need to keep the window large for that case.
windowSize <- ceiling(length/2)
#loop through all NAs
for (thisIndex in nas){
#extract the neighborhood of this NA
neighborhood <- dat[(thisIndex-windowSize):(thisIndex+windowSize)]
#any already-filled-in values which were NA can be replaced with NAs
neighborhood[namask[(thisIndex-windowSize):(thisIndex+windowSize)]] <- NA
#the center of this neighborhood
center <- windowSize + 1
#compute the difference within this neighborhood to find the nearest non-NA value
delta <- center - which(!is.na(neighborhood))
#find the closest replacement
replacement <- delta[abs(delta) == min(abs(delta))]
#in case length > 1, just pick the first
replacement <- replacement[1]
#replace with the nearest non-NA value.
dat[thisIndex] <- dat[(thisIndex - (replacement))]
}
dat
}
#nograpes' answer on this question
c <- function(dat){
nas=is.na(dat)
if (!any(!nas)) return (dat)
t=rle(nas)
f=sapply(t$lengths[t$values],seq)
a=unlist(f)
b=unlist(lapply(f,rev))
x=which(nas)
l=length(dat)
dat[nas]=ifelse(a>b,dat[ ifelse((x+b)>l,x-a,x+b) ],dat[ifelse((x-a)<1,x+b,x-a)])
dat
}
#run 10 times each to get average performance.
sum <- 0; for (i in 1:10){ sum <- sum + system.time(a(dat))["elapsed"];}; cat ("A: ", sum/10)
A: 5.059
sum <- 0; for (i in 1:10){ sum <- sum + system.time(b(dat))["elapsed"];}; cat ("B: ", sum/10)
B: 0.126
sum <- 0; for (i in 1:10){ sum <- sum + system.time(c(dat))["elapsed"];}; cat ("C: ", sum/10)
C: 0.287
So it looks like this code (at least under these conditions) offers about a 40X speedup over the original code posted in the question, and a 2.2X speedup over @nograpes' answer above (though I imagine an rle solution would certainly be faster in some situations -- including a more NA-rich vector).
This is about 3-4x slower than the chosen answer, but it's pretty simple. It's a rare while loop, too.
f2 <- function(x){
# check if all are NA to skip loop
if(!all(is.na(x))){
# replace NA's until they are gone
while(anyNA(x)){
# replace from the left
x[is.na(x)] <- c(NA,x[1:(length(x)-1)])[is.na(x)]
# replace from the right
x[is.na(x)] <- c(x[-1],NA)[is.na(x)]
}
}
# return original or fixed x
x
}
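For reference (not in the original answer), applying it to the question's example vector gives the desired result:
f2(c(1, 3, NA, NA, 5, 7))
# [1] 1 3 3 5 5 7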