R data.table: update with shift() does not work as expected

I'm trying to fill missing values in a data.table column with the preceding value using shift(), but I can only get it to work if I first create a temporary variable. Is this the expected behavior? MWE:
library(data.table)
dt <- data.table(x=c(1, NA))
dt[is.na(x), x:=shift(x)]
# Fails
dt <- data.table(x=c(1, NA))
dt <- dt[, x.lag:=shift(x)]
dt[is.na(x), x:=x.lag]
# Works
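For what it's worth, the reason the first form fails is that j is evaluated only on the rows selected by i, so shift(x) there sees nothing but the NA rows and returns NA. A minimal sketch of a one-step alternative, assuming a single lag is enough and that your data.table version has fcoalesce() (1.12.4+):
library(data.table)
dt <- data.table(x = c(1, NA))
# shift(x) is computed on the whole column here, not just the NA rows,
# so the NA in row 2 picks up the value from row 1
dt[, x := fcoalesce(x, shift(x))]
dt
#>    x
#> 1: 1
#> 2: 1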

I'm a little new to data.table, but I think the rolling join might be what you're after here. Presumably you want to be able to impute a data point when there are multiple missing values in sequence, in which case your shift method will just fill NA.
Your example is a little too minimal to really see what you're doing, but I'll expand it a little to include a record column, with various x values missing:
library(data.table)
dt <- data.table(record=1:10, x=c(1, NA, NA, 4, 5, 6, NA, NA, NA, 10))
> dt
record x
1: 1 1
2: 2 NA
3: 3 NA
4: 4 4
5: 5 5
6: 6 6
7: 7 NA
8: 8 NA
9: 9 NA
10: 10 10
Then create a copy with only the non-missing rows, and set its key to the record column:
dtNA <- dt[!is.na(x)]
setkey(dtNA, record)
> dtNA
record x
1: 1 1
2: 4 4
3: 5 5
4: 6 6
5: 10 10
Then do a rolling join (whereby if a value is missing, the previous record is rolled forwards) on the full list of records
dtNA[data.table(record=dt$record, key="record"), roll=TRUE]
record x
1: 1 1
2: 2 1
3: 3 1
4: 4 4
5: 5 5
6: 6 6
7: 7 6
8: 8 6
9: 9 6
10: 10 10
Compare that to your method, which produces the following (x still contains NAs):
dt[, x.lag:=shift(x)]
dt[is.na(x), x:=x.lag]
> dt
record x x.lag
1: 1 1 NA
2: 2 1 1
3: 3 NA NA
4: 4 4 NA
5: 5 5 4
6: 6 6 5
7: 7 6 6
8: 8 NA NA
9: 9 NA NA
10: 10 10 NA
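As a side note: if plain last-observation-carried-forward is all that is needed, newer data.table versions (1.12.4+) also ship nafill(), which reproduces the filled values of the rolling join above without building a keyed copy. A minimal sketch:
library(data.table)
dt <- data.table(record = 1:10, x = c(1, NA, NA, 4, 5, 6, NA, NA, NA, 10))
dt[, x_filled := nafill(x, type = "locf")]  # carry the last non-NA value forward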


Impute missing variables but not at the beginning and the end?

Consider the following working example:
library(data.table)
library(imputeTS)
DT <- data.table(
time = c(1:10),
var1 = c(1:5, NA, NA, 8:10),
var2 = c(NA, NA, 1:4, NA, 6, 7, 8),
var3 = c(1:6, rep(NA, 4))
)
time var1 var2 var3
1: 1 1 NA 1
2: 2 2 NA 2
3: 3 3 1 3
4: 4 4 2 4
5: 5 5 3 5
6: 6 NA 4 6
7: 7 NA NA NA
8: 8 8 6 NA
9: 9 9 7 NA
10: 10 10 8 NA
I want to impute the missing values at different points within the time series using na_interpolation from the imputeTS package. However, I do not want to impute missing values at the beginning or the end of a series, where the runs of NAs can be of varying length (in my application, replacing those values would not make sense).
When I run the following code to impute the series, however, all the NAs get replaced:
cols_to_impute_example <- c("var1", "var2", "var3")
DT[, (cols_to_impute_example) := lapply(.SD, na_interpolation), .SDcols = cols_to_impute_example]
> DT
time var1 var2 var3
1: 1 1 1 1
2: 2 2 1 2
3: 3 3 1 3
4: 4 4 2 4
5: 5 5 3 5
6: 6 6 4 6
7: 7 7 5 6
8: 8 8 6 6
9: 9 9 7 6
10: 10 10 8 6
What I want to achieve is:
time var1 var2 var3
1: 1 1 NA 1
2: 2 2 NA 2
3: 3 3 1 3
4: 4 4 2 4
5: 5 5 3 5
6: 6 6 4 6
7: 7 7 5 NA
8: 8 8 6 NA
9: 9 9 7 NA
10: 10 10 8 NA
A dplyr implementation:
We select the middle part of the data frame, interpolate the NAs there, and then bind the pieces back together.
library(imputeTS)
library(dplyr)
DT <- tibble(
time = c(1:10),
var1 = c(1:5, NA, NA, 8:10),
var2 = c(NA, NA, 1:4, NA, 6, 7, 8),
var3 = c(1:6, rep(NA, 4))
)
na_inter_middle <- function(row_start, row_end) {
  # first part of the data frame, where no NAs need to be replaced
  start <- DT[1:row_start, ]
  # middle part, where the NA values are interpolated
  middle <- DT[(row_start + 1):(nrow(DT) - row_end), ]
  # end part, also left untouched
  end <- DT[(nrow(DT) - (row_end - 1)):nrow(DT), ]
  start %>%
    bind_rows(
      middle %>%
        mutate_all(na_interpolation)
    ) %>%
    bind_rows(end)
}
na_inter_middle(2,3)
# A tibble: 10 x 4
time var1 var2 var3
<int> <dbl> <dbl> <dbl>
1 1 1 NA 1
2 2 2 NA 2
3 3 3 1 3
4 4 4 2 4
5 5 5 3 5
6 6 5 4 6
7 7 5 4 6
8 8 8 6 NA
9 9 9 7 NA
10 10 10 8 NA
Perhaps not so well known: you can also pass additional parameters of approx() through to the na_interpolation() function of imputeTS.
This one could be solved with:
library(imputeTS)
DT[,(2:4) := lapply(.SD, na_interpolation, yleft = NA , yright = NA), .SDcols = 2:4]
Here, yleft and yright specify what happens with the leading / trailing NAs.
Which leads to the desired output:
time var1 var2 var3
1: 1 1 NA 1
2: 2 2 NA 2
3: 3 3 1 3
4: 4 4 2 4
5: 5 5 3 5
6: 6 6 4 6
7: 7 7 5 NA
8: 8 8 6 NA
9: 9 9 7 NA
10: 10 10 8 NA
Nearly all of the parameters listed in the approx() documentation can also be given to na_interpolation() as additional arguments for fine-tuning.
The zoo library offers an interpolation function, na.approx(), that allows more customization:
library(zoo)
DT[,(2:4) := lapply(.SD, na.approx, x = time, na.rm = FALSE), .SDcols = 2:4]
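A tiny illustration of why the na.rm = FALSE part matters here: na.approx() only interpolates interior gaps by default, and na.rm = FALSE keeps the leading/trailing NAs in place instead of dropping them (which would otherwise change the length of the result):
library(zoo)
na.approx(c(NA, 1, NA, 3, NA), na.rm = FALSE)
#> [1] NA  1  2  3 NA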

Shifting the last non-NA value by id

I have a data table that looks like this:
DT<-data.table(day=c(1,2,3,4,5,6,7,8),Consumption=c(5,9,10,2,NA,NA,NA,NA),id=c(1,2,3,1,1,2,2,1))
day Consumption id
1: 1 5 1
2: 2 9 2
3: 3 10 3
4: 4 2 1
5: 5 NA 1
6: 6 NA 2
7: 7 NA 2
8: 8 NA 1
I want to create two columns that show the last non-NA consumption value before the observation, and the day difference between those observations, within each id group. So far, I have tried this:
DT[, j := day-shift(day, fill = NA,n=1), by = id]
DT[, yj := shift(Consumption, fill = NA,n=1), by = id]
day Consumption id j yj
1: 1 5 1 NA NA
2: 2 9 2 NA NA
3: 3 10 3 NA NA
4: 4 2 1 3 5
5: 5 NA 1 1 2
6: 6 NA 2 4 9
7: 7 NA 2 1 NA
8: 8 NA 1 3 NA
However, I want the lagged consumption values (with n = 1) to come from rows that have non-NA consumption values. For example, in the 7th row the yj value is NA because it comes from the 6th row, which has NA consumption; I want it to come from the 2nd row instead. Therefore, I would like to end up with this data table:
day Consumption id j yj
1: 1 5 1 NA NA
2: 2 9 2 NA NA
3: 3 10 3 NA NA
4: 4 2 1 3 5
5: 5 NA 1 1 2
6: 6 NA 2 4 9
7: 7 NA 2 5 9
8: 8 NA 1 4 2
Note: the reason for specifically using the n parameter of the shift function is that I will also need the 2nd last non-NA consumption values in the next step.
Thank You
Here's a data.table solution with an assist from zoo:
library(data.table)
library(zoo)
DT[, `:=`(day_shift = shift(day),
yj = shift(Consumption)),
by = id]
# where yj is NA, also set day_shift to NA
DT[is.na(yj), day_shift := NA_real_]
#fill the DT with the last non-NA value
DT[,
`:=`(day_shift = na.locf(day_shift, na.rm = F),
yj = zoo::na.locf(yj, na.rm = F)),
by = id]
# finally calculate j
DT[, j:= day - day_shift]
# you can clean up the ordering or remove columns later
DT
day Consumption id day_shift yj j
1: 1 5 1 NA NA NA
2: 2 9 2 NA NA NA
3: 3 10 3 NA NA NA
4: 4 2 1 1 5 3
5: 5 NA 1 4 2 1
6: 6 NA 2 2 9 4
7: 7 NA 2 2 9 5
8: 8 NA 1 4 2 4
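If you would rather avoid the zoo dependency, I believe the two na.locf() calls above can be swapped for data.table's own nafill() (available from 1.12.4 for numeric columns); a sketch under that assumption:
# same last-observation-carried-forward fill, but with data.table's nafill()
DT[,
   `:=`(day_shift = nafill(day_shift, type = "locf"),
        yj = nafill(yj, type = "locf")),
   by = id]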

R Data Table - join but filter with update

I'm trying to figure out how to join 2 data tables and update the first but with a filter applied.
DT<-data.table(a=rep(1:3,3),b=seq(1:9))
DT
a b
1: 1 1
2: 2 2
3: 3 3
4: 1 4
5: 2 5
6: 3 6
7: 1 7
8: 2 8
9: 3 9
DT2 <- data.table(b=seq(1:9), c=rep(10,9))
> DT2
b c
1: 1 10
2: 2 10
3: 3 10
4: 4 10
5: 5 10
6: 6 10
7: 7 10
8: 8 10
9: 9 10
I can do a basic equijoin like so
DT[DT2, on=c(b="b")]
But what I'd like to do logically is this
DT[a==3,DT2, on=c(b="b")]
but I get the following error
Error in `[.data.table`(DT, a == 3, DT2, on = c(b = "b")) :
logical error. i is not a data.table, but 'on' argument is provided.
I can reverse the order of the join and apply the filter...
DT2[DT[a==3,], on=c(b="b")]
b a
1: 3 3
2: 6 3
3: 9 3
That gives the correct rows, but the column order is incorrect. That aside, what I'd really like is to update DT with c, but only for the rows I've filtered in DT that also satisfy the join.
If this was SQL I would use an update with a subquery like so:
UPDATE DT
SET c = (SELECT c FROM DT2 WHERE DT2.b = DT.b)
WHERE DT.a = 3
I seem to be going in circles with the Data table syntax - can anyone point me in the right direction?
Cheers
David
Another option without having to make a dummy variable is:
DT[a==3, c := DT2[DT[a==3], c, on = c(b="b")]]
DT
# a b c
#1: 1 1 NA
#2: 2 2 NA
#3: 3 3 10
#4: 1 4 NA
#5: 2 5 NA
#6: 3 6 10
#7: 1 7 NA
#8: 2 8 NA
#9: 3 9 10
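A small variation on the same idea, sketched here, avoids typing the DT[a==3] subset twice by joining DT2 onto .SD (the subset that j is already working on):
# same result as above; .SD holds the a == 3 rows of DT, so DT2 is joined onto just those
DT[a == 3, c := DT2[.SD, c, on = "b"]]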
You can create a dummy variable a in DT2, join on both columns a and b, and then update:
DT[DT2[, c(a = 3, .SD)], c := i.c, on = c("a", "b")]
DT
# a b c
#1: 1 1 NA
#2: 2 2 NA
#3: 3 3 10
#4: 1 4 NA
#5: 2 5 NA
#6: 3 6 10
#7: 1 7 NA
#8: 2 8 NA
#9: 3 9 10

Index the first and the last rows with NA in a dataframe

I have a large dataset, which contains many NAs. For each column, I want to find the row where the leading run of NAs ends and the row where the trailing run of NAs begins. For example, for column A, I want the output to be the second row (the last NA before a number) and the fifth row (the first NA after a number). My code, shown below, does not work very well.
nonnaindex <- which(!is.na(df))
firstnonna <- apply(nonnaindex, 2, min)
Data:
ID A B C
1 NA NA 3
2 NA 2 2
3 3 3 1
4 4 5 NA
5 NA 6 NA
I believe this function might be what you are looking for:
library(data.table)

first_and_last_na_row <- function(DT, col) {
  data.table(DT)[, grp := rleid(is.na(get(col)))][
    , rbind(last(.SD[is.na(get(col)) & grp == min(grp)]),
            first(.SD[is.na(get(col)) & grp == max(grp)]))][
    !is.na(ID)][, grp := NULL][]
}
which returns
first_and_last_na_row(DT, "A")
ID A B C
1: 2 NA 2 2
2: 5 NA 6 NA
first_and_last_na_row(DT, "B")
ID A B C
1: 1 NA NA 3
first_and_last_na_row(DT, "C")
ID A B C
1: 4 4 5 NA
first_and_last_na_row(DT, "D")
Empty data.table (0 rows) of 4 cols: ID,A,B,C
in case of
DT
ID A B C
1: 1 NA NA 3
2: 2 NA 2 2
3: 3 3 3 1
4: 4 4 5 NA
5: 5 NA 6 NA
or
first_and_last_na_row(DT2, "D")
ID A B C D
1: 1 NA NA 3 NA
in case of Akrun's (simplified) example
DT2
ID A B C D
1: 1 NA NA 3 NA
2: 2 NA 2 2 2
3: 3 3 3 1 NA
4: 4 4 5 NA NA
5: 5 NA 6 NA 4
Edit: Faster version using melt()
The OP has commented that his production data set consists of 4000 columns and 192 rows and that he needs the indices to clean another data set. He tried a for loop across all columns which is very slow.
Therefore, I suggest to reshape the data set from wide to long format and to use data.table's efficient grouping mechanism:
# reshape from wide to long format
long <- setDT(DT2)[, melt(.SD, id = "ID")][
# add grouping variable to distinguish continuous streaks of NA/non-NA values
# for each variable
, grp := rleid(variable, is.na(value))][
# set sort order just for convenience, not essential
, setorder(.SD, variable, ID)]
long
ID variable value grp
1: 1 A NA 1
2: 2 A NA 1
3: 3 A 3 2
4: 4 A 4 2
5: 5 A NA 3
6: 1 B NA 4
7: 2 B 2 5
8: 3 B 3 5
9: 4 B 5 5
10: 5 B 6 5
11: 1 C 3 6
12: 2 C 2 6
13: 3 C 1 6
14: 4 C NA 7
15: 5 C NA 7
16: 1 D NA 8
17: 2 D 2 9
18: 3 D NA 10
19: 4 D NA 10
20: 5 D 4 11
Now we get the indices of the starting or ending NA sequence, respectively, for each variable (if any):
# starting NA sequence
long[, .(ID = which(is.na(value) & grp == min(grp))), by = variable]
variable ID
1: A 1
2: A 2
3: B 1
4: D 1
# ending NA sequence
long[, .(ID = which(is.na(value) & grp == max(grp))), by = variable]
variable ID
1: A 5
2: C 4
3: C 5
Note that this returns all indices of the starting or ending NA sequences, which might be more convenient for subsequent cleaning of another data set. If only the last and first indices are required, this can be achieved by:
long[long[, is.na(value) & grp == min(grp), by =variable]$V1, .(ID = max(ID)), by = variable]
variable ID
1: A 2
2: B 1
3: D 1
long[long[, is.na(value) & grp == max(grp), by =variable]$V1, .(ID = min(ID)), by = variable]
variable ID
1: A 5
2: C 4
I have tested this approach using a dummy data set of 192 rows times 4000 columns. The whole operation needed less than one second.
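For completeness, here is a minimal base-R sketch of the same boundary rows for the small example data, assuming the data frame is called df (as in the question) and that its first column is ID: rle() over is.na() identifies the runs, so the last row of a leading NA run and the first row of a trailing NA run fall out directly.
# last row of the leading NA run and first row of the trailing NA run, per column
boundary_na_rows <- function(x) {
  r <- rle(is.na(x))
  n <- length(r$values)
  c(last_leading_na   = if (r$values[1]) r$lengths[1] else NA,
    first_trailing_na = if (r$values[n]) length(x) - r$lengths[n] + 1 else NA)
}
sapply(df[-1], boundary_na_rows)
#>                    A  B  C
#> last_leading_na    2  1 NA
#> first_trailing_na  5 NA  4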

calculating sum of previous 3 rows in R data.table (by grid-square)

I would like to calculate the rainfall that has fallen over the last three days for each grid square, and add this as a new column in my data.table. To be clear, I want to sum up the current and previous two (2) days of rainfall, for each meteorological grid square.
library(zoo)
library(data.table)
# making the data.table
rain <- c(NA, NA, NA, 0, 0, 5, 1, 0, 3, 10) # rainfall values to work with
square <- c(1,1,1,1,1,1,1,1,1,2) # the geographic grid square for the rainfall measurement
desired_result <- c(NA, NA, NA, NA, NA, 5, 6, 6, 4, NA ) # this is the result I'm looking for (the last NA as we are now on to the first day of the second grid square)
weather <- data.table(rain, square, desired_result) # making the data.table
My attempt at an answer: this line used to work, but no longer does:
weather[, rain_3 := filter(rain, rep(1, 2), sides = 1), by = list(square)]
So here I am trying another method:
# this next line gets the numbers right, but sums the following values, not the preceding ones.
weather$rain_3 <- rollapply(zoo(weather$rain), list(seq(-2,0)), sum)
# here I add in the by weather$ square, but still no success
weather$rain_3 <- rollapply(zoo(weather$rain), list(seq(-2,0)), sum, by= list(weather$square))
I would greatly appreciate any insights or suggestions you may have.
Many thanks!
Here's a quick and efficient solution using the latest data.table version (v 1.9.6+)
weather[, rain_3 := Reduce(`+`, shift(rain, 0:2)), by = square]
weather
# rain square desired_result rain_3
# 1: NA 1 NA NA
# 2: NA 1 NA NA
# 3: NA 1 NA NA
# 4: 0 1 NA NA
# 5: 0 1 NA NA
# 6: 5 1 5 5
# 7: 1 1 6 6
# 8: 0 1 6 6
# 9: 3 1 4 4
# 10: 10 2 NA NA
The basic idea here is to shift the rain column by one and two positions and then add the three vectors (including the unshifted one) together.
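To see what Reduce() is adding up, here is the intermediate result for square 1; shift(rain, 0:2) returns a list of the column lagged by 0, 1 and 2 positions, and any window that still contains an NA sums to NA, which is what produces the leading NAs:
weather[square == 1, shift(rain, 0:2)]
#>    V1 V2 V3
#> 1: NA NA NA
#> 2: NA NA NA
#> 3: NA NA NA
#> 4:  0 NA NA
#> 5:  0  0 NA
#> 6:  5  0  0
#> 7:  1  5  0
#> 8:  0  1  5
#> 9:  3  0  1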
The rollapply solution would be done like this:
weather[, rain_3 := rollapplyr(rain, 3, sum, fill = NA_real_), by = square]
giving:
rain square desired_result rain_3
1: NA 1 NA NA
2: NA 1 NA NA
3: NA 1 NA NA
4: 0 1 NA NA
5: 0 1 NA NA
6: 5 1 5 5
7: 1 1 6 6
8: 0 1 6 6
9: 3 1 4 4
10: 10 2 NA NA
Update: this has been simplified based on a version of zoo that came out since the question was originally asked.
weather[, rain_3 := filter(rain, rep(1, 3), sides = 1), by = list(square)]
#Error in filter(rain, rep(1, 3), sides = 1) :
# 'filter' is longer than time series
weather[, rain_3 := if(.N > 2) filter(rain, rep(1, 3), sides = 1) else NA_real_,
by = square]
# rain square desired_result rain_3
# 1: NA 1 NA NA
# 2: NA 1 NA NA
# 3: NA 1 NA NA
# 4: 0 1 NA NA
# 5: 0 1 NA NA
# 6: 5 1 5 5
# 7: 1 1 6 6
# 8: 0 1 6 6
# 9: 3 1 4 4
#10: 10 2 NA NA
Take care that dplyr is not loaded because it masks filter. If you need dplyr, you can call stats::filter explicitly.
You have almost got the answer yourself. rollsum() (or rollapply() in your case) gives you a vector of length N-2, so you just have to pad the leading cells with NAs. That can be done simply like this: roll <- c(NA, NA, rollsum(yourvector, k = 3))
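Applied to the example table, that padding idea might look like the sketch below, using rollapply() (as the answer suggests for this case) plus a per-group guard for square 2, which has fewer rows than the window:
library(zoo)
# pad the first two positions of each group, then sum over 3-row windows
weather[, rain_3 := if (.N > 2) c(NA, NA, rollapply(rain, 3, sum)) else NA_real_, by = square]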
Here is how I do it. I am using roll_sum from the {RcppRoll} package because it is much faster and handles NAs more easily. data.table's simple by argument lets you group the result by square.
library(RcppRoll)
weather[, rain_3 := if (.N > 2) c(NA, NA, roll_sum(rain, n = 3)) else NA, by = square]
weather
rain square desired_result rain_3
1: NA 1 NA NA
2: NA 1 NA NA
3: NA 1 NA NA
4: 0 1 NA NA
5: 0 1 NA NA
6: 5 1 5 5
7: 1 1 6 6
8: 0 1 6 6
9: 3 1 4 4
10: 10 2 NA NA
Late to the party, but a more recent version of the data.table package (1.12.8 for me) has the frollsum() function, which accomplishes this a bit more cleanly than the earlier (but very much valid) answers:
library (data.table)
# making the data.table
rain <- c(NA, NA, NA, 0, 0, 5, 1, 0, 3, 10) # rainfall values to work with
square <- c(1,1,1,1,1,1,1,1,1,2) # the geographic grid square for the rainfall measurement
desired_result <- c(NA, NA, NA, NA, NA, 5, 6, 6, 4, NA ) # this is the result I'm looking for (the last NA as we are now on to the first day of the second grid square)
weather <- data.table(rain, square, desired_result) # making the data.table
# using `frollsum`
weather[, rain3 := frollsum(rain, n = 3), by = square][]
#> rain square desired_result rain3
#> 1: NA 1 NA NA
#> 2: NA 1 NA NA
#> 3: NA 1 NA NA
#> 4: 0 1 NA NA
#> 5: 0 1 NA NA
#> 6: 5 1 5 5
#> 7: 1 1 6 6
#> 8: 0 1 6 6
#> 9: 3 1 4 4
#> 10: 10 2 NA NA
Created on 2020-07-09 by the reprex package (v0.3.0)
A dplyr solution:
library(dplyr)
weather %>%
group_by(square) %>%
mutate(rain_3 = rain + lag(rain) + lag(rain, n = 2L))
Result:
Source: local data table [10 x 4]
rain square desired_result rain_3
(dbl) (dbl) (dbl) (dbl)
1 NA 1 NA NA
2 NA 1 NA NA
3 NA 1 NA NA
4 0 1 NA NA
5 0 1 NA NA
6 5 1 5 5
7 1 1 6 6
8 0 1 6 6
9 3 1 4 4
10 10 2 NA NA
If you want to assign rain_3 back to your dataset, you can use the %<>% operator from magrittr in your pipe:
library(magrittr)
weather %<>%
group_by......
