Create an equal number of rows for observations in data.tables

I have several hundred data sets covering several hundred variables for the period from 1875 to 2020. However, the number of entries per year varies, and some years have no entries at all, so I would like to adjust the data sets.
Specifically, I would like each year to have the same number of rows, with the added rows for each year containing only NAs. If the year with the most entries has 5 rows in the data set, then all years should have 5 rows. If a year is not yet included in the data set, it has to be added with the corresponding number of rows and NAs for all variables.
Due to the size of the data sets I would like to work with data.tables, but I have no idea how to solve this problem efficiently with data.table code. My previous attempts were mainly loop combinations, which made processing extremely slow. For orientation, here is a minimal example of the data set structure. Any kind of help is deeply appreciated.
library(data.table)
First <- 1875; Last <- 2020
Year <- c(1979,1979,1979,1982,1987,1987,1987,1988,1989,1990,1993,1995,1997,1997,1998,1999,2000)
Sums <- c(0.30,1.47,4.05,1.30,1.42,1.86,1.29,0.97,1.54,0.46,0.67,0.98,1.73,0.74,1.70,0.95,0.90)
Days <- c(3,4,3,5,3,3,3,3,7,3,8,10,3,3,3,3,3)
Data <- data.table(Year = Year, Sums = Sums, Days = Days)
Ideally, the procedure would output a data.table with a pattern like the one below. For readability the output is shown from 1979 onwards rather than from 1875, and the all-NA blocks for intermediate years are abbreviated.
      Year Sums Days
   1: 1979 0.30    3 # 1979 has the maximum of 3 observations (tied with 1987)
   2: 1979 1.47    4
   3: 1979 4.05    3
  ...                # 1980-1981: 3 all-NA rows each; years weren't included previously
  10: 1982 1.30    5
  11: 1982   NA   NA # New observation
  12: 1982   NA   NA # New observation
  ...                # 1983-1986: 3 all-NA rows each; years weren't included previously
  25: 1987 1.42    3
  26: 1987 1.86    3
  27: 1987 1.29    3
  28: 1988 0.97    3
  29: 1988   NA   NA # New observation
  30: 1988   NA   NA # New observation
  31: 1989 1.54    7
  32: 1989   NA   NA # New observation
  33: 1989   NA   NA # New observation
  34: 1990 0.46    3
  35: 1990   NA   NA # New observation
  36: 1990   NA   NA # New observation
  37: 1991   NA   NA # New observation; year wasn't included previously
  38: 1991   NA   NA # New observation; year wasn't included previously
  39: 1991   NA   NA # New observation; year wasn't included previously
  40: 1992   NA   NA # New observation; year wasn't included previously
  41: 1992   NA   NA # New observation; year wasn't included previously
  42: 1992   NA   NA # New observation; year wasn't included previously
  43: 1993 0.67    8
  44: 1993   NA   NA # New observation
  45: 1993   NA   NA # New observation
  46: 1994   NA   NA # New observation; year wasn't included previously
  47: 1994   NA   NA # New observation; year wasn't included previously
  48: 1994   NA   NA # New observation; year wasn't included previously
  49: 1995 0.98   10
  50: 1995   NA   NA # New observation
  51: 1995   NA   NA # New observation
 ..................

Another data.table option: rowid(Year) numbers the observations within each year, CJ() builds every combination of year and within-year index, and the join pads the missing combinations with NA:
Data[, ri := rowid(Year)][
  CJ(Year = seq(min(Year), max(Year), by = 1L), ri = seq.int(max(ri))), on = .NATURAL]
Or for a specific range (First to Last):
Data[, ri := rowid(Year)][
  CJ(Year = First:Last, ri = seq.int(max(ri))), on = .NATURAL]
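A quick sanity check for either variant (a sketch; it assumes the join result is assigned to Data2): every year in First:Last should end up with the same number of rows, and the ri helper column can be dropped afterwards.
Data2 <- Data[, ri := rowid(Year)][
  CJ(Year = First:Last, ri = seq.int(max(ri))), on = .NATURAL]
Data2[, .N, by = Year][, unique(N)]  # a single value (3 here): every year has 3 rows
Data2[, ri := NULL]                  # drop the helper column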

A keyed-join alternative: n is the largest number of observations in any year, and .SD[1:n] pads each year's subset with NA rows up to length n:
n <- max(table(Data$Year))
setkey(Data, Year)
Data2 <- Data[J(First:Last), .SD[1:n], by = .EACHI]
Or without setting a key (thanks to chinsoon12):
Data2 <- Data[J(Year = First:Last), on = .NATURAL, .SD[1:n], by = .EACHI]
Example output:
Data2[Year %between% c(1996L, 1999L)]
# Year Sums Days
# 1: 1996 NA NA
# 2: 1996 NA NA
# 3: 1996 NA NA
# 4: 1997 1.73 3
# 5: 1997 0.74 3
# 6: 1997 NA NA
# 7: 1998 1.70 3
# 8: 1998 NA NA
# 9: 1998 NA NA
# 10: 1999 0.95 3
# 11: 1999 NA NA
# 12: 1999 NA NA

We can find the maximum number of rows for any single year using the table function. We can then use complete to fill in the missing observations from First to Last, so that each year ends up with n rows.
library(dplyr)
library(tidyr)
n <- max(table(Data$Year))
Data %>%
  group_by(Year) %>%
  mutate(row = row_number()) %>%
  ungroup() %>%
  complete(Year = First:Last, row = 1:n, fill = list(Sums = 0, Days = 0))
# A tibble: 438 x 4
# Year row Sums Days
# <dbl> <int> <dbl> <dbl>
# 1 1875 1 0 0
# 2 1875 2 0 0
# 3 1875 3 0 0
# 4 1876 1 0 0
# 5 1876 2 0 0
# 6 1876 3 0 0
# 7 1877 1 0 0
# 8 1877 2 0 0
# 9 1877 3 0 0
#10 1878 1 0 0
# … with 428 more rows
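If you want the padding rows to contain NA (as the question asks) rather than 0, omit the fill argument, since complete() fills with NA by default:
Data %>%
  group_by(Year) %>%
  mutate(row = row_number()) %>%
  ungroup() %>%
  complete(Year = First:Last, row = 1:n)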


frequency of a time series with NA (R software)

I would like to work with time series in R, but I am stuck at the beginning because I have a problem with the frequency. I have monthly data for 30 years (start date = 1988, end date = 2018), but with occasional holes: some months or years have no data.
I would prefer not to interpolate or average to fill the holes, but simply omit the empty months/years using na.omit. The problem is that I would then have some years with 12 months and others with fewer.
My question is: how can I determine and work with my frequency now?
Here an example of two years:
YEAR MONTH Temp Salt
1988 1 NA NA
1988 2 NA NA
1988 3 NA NA
1988 4 NA NA
1988 5 NA NA
1988 6 NA NA
1988 7 16.45388889 37.4064537
1988 8 17.48898148 37.89002778
1988 9 NA NA
1988 10 NA NA
1988 11 15.8050463 38.08833333
1988 12 NA NA
1989 1 NA NA
1989 2 10.74912037 38.2787037
1989 3 NA NA
1989 4 NA NA
1989 5 NA NA
1989 6 14.52092593 37.71060185
1989 7 16.84342593 37.32300926
1989 8 17.97930556 37.82277778
1989 9 NA NA
1989 10 NA NA
1989 11 16.23837963 38.00009259
1989 12 13.6325463 37.97509259
Any advice would be useful!
Thank you very much!
The zooreg class in the zoo package is intended for this type of situation, where there is an underlying regularity but some values may be missing.
Assuming the input in the Note at the end, this produces a zooreg series with frequency 12, i.e. a time series that has frequency 12 but not all values present. Replace text = Lines with your filename, e.g. "myfile.dat", to read it in from a file. Note that the yearmon class stores the time as the year plus 0 for January, 1/12 for February, 2/12 for March, and so on.
library(zoo)
to_ym <- function(y, m) as.yearmon(paste(y, m, sep = "-"))
z <- read.zoo(text = Lines, header = TRUE, index = 1:2, FUN = to_ym, regular = TRUE)
z <- na.omit(z)
frequency(z)
## [1] 12
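As a quick illustration of that encoding, using the to_ym helper defined above:
to_ym(1988, 7)               # Jul 1988
as.numeric(to_ym(1988, 7))   # 1988.5, i.e. 1988 + (7 - 1)/12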
The question is not clear on exactly what you have (a file? a data frame?), but if what you have is a data frame DF, read the zoo object from that instead (to_ym is from above):
DF <- read.table(text = Lines, header = TRUE)
z <- read.zoo(DF, index = 1:2, FUN = to_ym, regular = TRUE)
z <- na.omit(z)
To restore the NAs, convert it to a ts series (which fills the gaps with NA) and back:
z_na <- as.zooreg(as.ts(z))
Also, if you do decide to fill in the NAs, there are several routines available, including na.spline, na.approx, na.StructTS (Kalman filter) and na.locf.
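For instance (a sketch reusing the z_na series from above; each call returns a copy with the NAs filled by that method — na.approx and na.locf drop leading NAs they cannot fill, while na.spline extrapolates):
na.approx(z_na)   # linear interpolation
na.spline(z_na)   # cubic spline interpolation
na.locf(z_na)     # last observation carried forward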
We can now work with z like this:
as.integer(time(z)) # year
cycle(time(z)) # month (1 = Jan, 2 = Feb, ...)
start(z) # starting time of series
end(z) # ending time of series
plot(z)
plot(scale(z), screen = 1, col = 1:2)
legend("bottomleft", leg = names(z), col = 1:2, lty = 1)
library(ggplot2)
autoplot(z)
autoplot(z) + facet_free()
autoplot(z, facet = NULL)
autoplot(scale(z), facet = NULL)
Note
Lines <- "
YEAR MONTH Temp Salt
1988 1 NA NA
1988 2 NA NA
1988 3 NA NA
1988 4 NA NA
1988 5 NA NA
1988 6 NA NA
1988 7 16.45388889 37.4064537
1988 8 17.48898148 37.89002778
1988 9 NA NA
1988 10 NA NA
1988 11 15.8050463 38.08833333
1988 12 NA NA
1989 1 NA NA
1989 2 10.74912037 38.2787037
1989 3 NA NA
1989 4 NA NA
1989 5 NA NA
1989 6 14.52092593 37.71060185
1989 7 16.84342593 37.32300926
1989 8 17.97930556 37.82277778
1989 9 NA NA
1989 10 NA NA
1989 11 16.23837963 38.00009259
1989 12 13.6325463 37.97509259"

Summarizing a dataframe by date and group

I am trying to summarize a data set by a few different factors. Below is an example of my data:
household<-c("household1","household1","household1","household2","household2","household2","household3","household3","household3")
date<-c(sample(seq(as.Date('1999/01/01'), as.Date('2000/01/01'), by="day"), 9))
value<-c(1:9)
type<-c("income","water","energy","income","water","energy","income","water","energy")
df<-data.frame(household,date,value,type)
household date value type
1 household1 1999-05-10 100 income
2 household1 1999-05-25 200 water
3 household1 1999-10-12 300 energy
4 household2 1999-02-02 400 income
5 household2 1999-08-20 500 water
6 household2 1999-02-19 600 energy
7 household3 1999-07-01 700 income
8 household3 1999-10-13 800 water
9 household3 1999-01-01 900 energy
I want to summarize the data by month. Ideally the resulting data set would have 12 rows per household (one for each month) and a column for each category of expenditure (water, energy, income) that is a sum of that month's total.
I started by adding a column with a short date; I was then going to filter for each type, create a separate data frame of summed values per transaction type, and merge those data frames into the summarized df. I attempted to summarize with ddply, but it aggregated too much and I couldn't keep the household-level info.
ddply(df,.(shortdate),summarize,mean_value=mean(value))
shortdate mean_value
1 14/07 15.88235
2 14/09 5.00000
3 14/10 5.00000
4 14/11 21.81818
5 14/12 20.00000
6 15/01 10.00000
7 15/02 12.50000
8 15/04 5.00000
Any help would be much appreciated!
It sounds like what you are looking for is a pivot table. I like to use reshape::cast for these types of tables. If more than one value is returned for a given expenditure type for a given household/year/month combination, this will sum those values; if there is only one value, it returns the value. The sum argument is not strictly required, it is only there to handle such exceptions. If your data is clean you shouldn't need it.
hh <- c("hh1", "hh1", "hh1", "hh2", "hh2", "hh2", "hh3", "hh3", "hh3")
date <- c(sample(seq(as.Date('1999/01/01'), as.Date('2000/01/01'), by="day"), 9))
value <- c(1:9)
type <- c("income", "water", "energy", "income", "water", "energy", "income", "water", "energy")
df <- data.frame(hh, date, value, type)
# Load lubridate, add month and year columns
library(lubridate)
df$month <- month(df$date)
df$year <- year(df$date)
# Load reshape library, run cast from reshape, creates pivot table
library(reshape)
dfNew <- cast(df, hh+year+month~type, value = "value", sum)
> dfNew
hh year month energy income water
1 hh1 1999 4 3 0 0
2 hh1 1999 10 0 1 0
3 hh1 1999 11 0 0 2
4 hh2 1999 2 0 4 0
5 hh2 1999 3 6 0 0
6 hh2 1999 6 0 0 5
7 hh3 1999 1 9 0 0
8 hh3 1999 4 0 7 0
9 hh3 1999 8 0 0 8
Try this:
df$ym <- zoo::as.yearmon(as.Date(df$date), "%y/%m")
library(dplyr)
dfr <- df %>% group_by(ym, type) %>%
  summarise(mean_value = mean(value))
dfr
Source: local data frame [9 x 3]
Groups: ym [?]
ym type mean_value
<S3: yearmon> <fctr> <dbl>
1 jan 1999 income 1
2 jun 1999 energy 3
3 jul 1999 energy 6
4 jul 1999 water 2
5 ago 1999 income 4
6 set 1999 energy 9
7 set 1999 income 7
8 nov 1999 water 5
9 dez 1999 water 8
Edit: the wide format (the month abbreviations in this output reflect a Portuguese locale):
reshape2::dcast(dfr, ym ~ type, value.var = "mean_value")
ym energy income water
1 jan 1999 NA 1 NA
2 jun 1999 3 NA NA
3 jul 1999 6 NA 2
4 ago 1999 NA 4 NA
5 set 1999 9 7 NA
6 nov 1999 NA NA 5
7 dez 1999 NA NA 8
If I understood your requirement correctly (from the description in the question), this is what you are looking for:
library(dplyr)
library(tidyr)
df %>%
  mutate(date = lubridate::month(date)) %>%
  complete(household, date = 1:12) %>%
  spread(type, value) %>%
  group_by(household, date) %>%
  mutate(Total = sum(energy, income, water, na.rm = TRUE)) %>%
  select(household, Month = date, energy:water, Total)
#Source: local data frame [36 x 6]
#Groups: household, Month [36]
#
# household Month energy income water Total
# <fctr> <dbl> <dbl> <dbl> <dbl> <dbl>
#1 household1 1 NA NA NA 0
#2 household1 2 NA NA NA 0
#3 household1 3 NA NA 200 200
#4 household1 4 NA NA NA 0
#5 household1 5 NA NA NA 0
#6 household1 6 NA NA NA 0
#7 household1 7 NA NA NA 0
#8 household1 8 NA NA NA 0
#9 household1 9 300 NA NA 300
#10 household1 10 NA NA NA 0
# ... with 26 more rows
Note: I used the same df you provided in the question. The only change I made was the value column. Instead of 1:9, I used seq(100, 900, 100)
If I got it wrong, please let me know and I will delete my answer. I will add an explanation of what's going on if this is correct.
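For reference, the modified input described above (a hypothetical reconstruction of the answerer's df) would simply be:
df <- data.frame(household, date, value = seq(100, 900, 100), type)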

How can I drop observations within a group following the occurrence of NA?

I am trying to clean my data. One of the criteria is that I need an uninterrupted sequence of the variable "assets", but I have some NAs. However, I cannot simply delete the NA observations; I need to delete all subsequent observations within the group following the NA event.
Here is an example:
productreference<-c(1,1,1,1,2,2,2,3,3,3,3,4,4,4,5,5,5,5)
Year<-c(2000,2001,2002,2003,1999,2000,2001,2005,2006,2007,2008,1998,1999,2000,2000,2001,2002,2003)
assets<-c(2,3,NA,2,34,NA,45,1,23,34,56,56,67,23,23,NA,14,NA)
mydf<-data.frame(productreference,Year,assets)
mydf
# productreference Year assets
# 1 1 2000 2
# 2 1 2001 3
# 3 1 2002 NA
# 4 1 2003 2
# 5 2 1999 34
# 6 2 2000 NA
# 7 2 2001 45
# 8 3 2005 1
# 9 3 2006 23
# 10 3 2007 34
# 11 3 2008 56
# 12 4 1998 56
# 13 4 1999 67
# 14 4 2000 23
# 15 5 2000 23
# 16 5 2001 NA
# 17 5 2002 14
# 18 5 2003 NA
I have already seen that there is a way to carry out functions by group using plyr, and I have also been able to create a 0-1 column, where 0 indicates that assets has a valid entry and 1 marks a missing (NA) value.
mydf$missing <- ifelse(mydf$assets >= 0, 0, 1)
mydf[c("missing")][is.na(mydf[c("missing")])] <- 1
I have a very large data set so cannot manually delete the rows and would greatly appreciate your help!
I believe this is what you want:
library(dplyr)
group_by(mydf, productreference) %>%
  filter(cumsum(is.na(assets)) == 0)
# Source: local data frame [11 x 3]
# Groups: productreference [5]
#
# productreference Year assets
# (dbl) (dbl) (dbl)
# 1 1 2000 2
# 2 1 2001 3
# 3 2 1999 34
# 4 3 2005 1
# 5 3 2006 23
# 6 3 2007 34
# 7 3 2008 56
# 8 4 1998 56
# 9 4 1999 67
# 10 4 2000 23
# 11 5 2000 23
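The trick here, which the data.table answer below uses as well: cumsum(is.na(x)) stays at 0 up to the first NA and turns positive from then on, so filtering on == 0 keeps exactly the rows before a group's first NA. A minimal illustration:
cumsum(is.na(c(2, 3, NA, 2)))
# [1] 0 0 1 1   -> only the first two positions pass the == 0 filter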
Here is the same approach using data.table:
library(data.table)
dt <- as.data.table(mydf)
dt[, nas := cumsum(is.na(assets)), by = "productreference"][nas == 0]
# productreference Year assets nas
# 1: 1 2000 2 0
# 2: 1 2001 3 0
# 3: 2 1999 34 0
# 4: 3 2005 1 0
# 5: 3 2006 23 0
# 6: 3 2007 34 0
# 7: 3 2008 56 0
# 8: 4 1998 56 0
# 9: 4 1999 67 0
#10: 4 2000 23 0
#11: 5 2000 23 0
Here is a base R option
mydf[unsplit(lapply(split(mydf, mydf$productreference),
                    function(x) cumsum(is.na(x$assets)) == 0),
             mydf$productreference), ]
# productreference Year assets
#1 1 2000 2
#2 1 2001 3
#5 2 1999 34
#8 3 2005 1
#9 3 2006 23
#10 3 2007 34
#11 3 2008 56
#12 4 1998 56
#13 4 1999 67
#14 4 2000 23
#15 5 2000 23
Or an option with data.table
library(data.table)
setDT(mydf)[, if (any(is.na(assets))) .SD[seq(which(is.na(assets))[1] - 1)]
            else .SD,
            by = productreference]
You can also do it using base R and a for loop. This code is a bit longer than the other answers. In the loop we subset mydf by productreference, and for every subset we look for the first NA in assets and exclude that row and all following rows.
mydf2 <- NULL
for (i in 1:max(mydf$productreference)) {
  s1 <- mydf[mydf$productreference == i, ]
  # keep rows up to (but not including) the first NA, or all rows if there is none
  s2 <- s1[1:ifelse(all(!is.na(s1$assets)), NROW(s1),
                    min(which(is.na(s1$assets))) - 1), ]
  mydf2 <- rbind(mydf2, s2)
  # guards the edge case where a group's first row is already NA
  mydf2 <- mydf2[!is.na(mydf2$assets), ]
}
mydf2
mydf2

aggregate + mean returns wrong result

Using R, I am trying to calculate groupwise means with aggregate(..., mean). However, the returned mean is wrong.
testdata <-read.table(text="
a b c d year
2 10 1 NA 1998
1 7 NA NA 1998
4 6 NA NA 1998
2 2 NA NA 1998
4 3 2 1 1998
2 6 NA NA 1998
3 NA NA NA 1998
2 7 NA 3 1998
1 8 NA 4 1998
2 7 2 5 1998
1 NA NA 4 1998
2 5 NA 6 1998
2 4 NA NA 1998
3 11 2 7 1998
1 18 4 10 1998
3 12 7 5 1998
2 17 NA NA 1998
2 11 4 5 1998
1 3 1 1 1998
3 5 1 3 1998
",header=TRUE,sep="")
aggregate(. ~ year, testdata,
          function(x) c(mean = round(mean(x, na.rm = TRUE), 2)))
colMeans(subset(testdata, year=="1998", select=d), na.rm=TRUE)
aggregate says the mean of d for group 1998 is 4.62, but it is 4.5.
Reducing the data to one column only, aggregate gets it right:
aggregate(. ~ year, testdata[4:5],
          function(x) c(mean = round(mean(x, na.rm = TRUE), 2)))
What's wrong with my aggregate() + mean() function?
aggregate removes your rows containing NAs in any column before passing the data to the mean function. Try running your aggregate call without na.rm = TRUE: it will still work.
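You can verify this against testdata: with the formula method's default na.action = na.omit, only complete rows survive, so na.rm has nothing left to remove.
aggregate(. ~ year, testdata, mean)  # runs fine even without na.rm = TRUE
nrow(na.omit(testdata))              # 8 -- only the complete rows are used
mean(na.omit(testdata)$d)            # 4.625, the "wrong" 4.62 from above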
To fix this, you need to change the default na.action in aggregate to na.pass:
aggregate(. ~ year, testdata,
          function(x) c(mean = round(mean(x, na.rm = TRUE), 2)), na.action = na.pass)
  year    a    b    c   d
1 1998 2.15 7.89 2.67 4.5

Removing rows of data frame if number of NA in a column is larger than 3

I have a data frame (panel data): the Ctry column indicates the country. If the number of NAs in any column (for example, Carx) is larger than 3 for a country, I want to drop that country from my data frame. For example,
Country A has 2 NAs
Country B has 4 NAs
Country C has 3 NAs
I want to drop country B from my data frame. I have a data frame like this (this is for illustration; my actual data frame is very large):
Ctry year Carx
A 2000 23
A 2001 18
A 2002 20
A 2003 NA
A 2004 24
A 2005 18
B 2000 NA
B 2001 NA
B 2002 NA
B 2003 NA
B 2004 18
B 2005 16
C 2000 NA
C 2001 NA
C 2002 24
C 2003 21
C 2004 NA
C 2005 24
I want to create a data frame like this:
Ctry year Carx
A 2000 23
A 2001 18
A 2002 20
A 2003 NA
A 2004 24
A 2005 18
C 2000 NA
C 2001 NA
C 2002 24
C 2003 21
C 2004 NA
C 2005 24
A fairly straightforward way in base R is to use sum(is.na(.)) along with ave to do the counting, like this:
with(mydf, ave(Carx, Ctry, FUN = function(x) sum(is.na(x))))
# [1] 1 1 1 1 1 1 4 4 4 4 4 4 3 3 3 3 3 3
Once you have that, subsetting is easy:
mydf[with(mydf, ave(Carx, Ctry, FUN = function(x) sum(is.na(x)))) <= 3, ]
# Ctry year Carx
# 1 A 2000 23
# 2 A 2001 18
# 3 A 2002 20
# 4 A 2003 NA
# 5 A 2004 24
# 6 A 2005 18
# 13 C 2000 NA
# 14 C 2001 NA
# 15 C 2002 24
# 16 C 2003 21
# 17 C 2004 NA
# 18 C 2005 24
You can use the by() function to group by Ctry and count the NAs of each group:
DF <- read.csv(
text='Ctry,year,Carx
A,2000,23
A,2001,18
A,2002,20
A,2003,NA
A,2004,24
A,2005,18
B,2000,NA
B,2001,NA
B,2002,NA
B,2003,NA
B,2004,18
B,2005,16
C,2000,NA
C,2001,NA
C,2002,24
C,2003,21
C,2004,NA
C,2005,24',
stringsAsFactors=F)
res <- by(data = DF$Carx, INDICES = DF$Ctry, FUN = function(x) sum(is.na(x)))
validCtry <- names(res)[res <= 3]
DF[DF$Ctry %in% validCtry, ]
# Ctry year Carx
#1 A 2000 23
#2 A 2001 18
#3 A 2002 20
#4 A 2003 NA
#5 A 2004 24
#6 A 2005 18
#13 C 2000 NA
#14 C 2001 NA
#15 C 2002 24
#16 C 2003 21
#17 C 2004 NA
#18 C 2005 24
EDIT: if you have more columns to check, you could adapt the previous code as follows:
res <- by(data = DF, INDICES = DF$Ctry,
          FUN = function(x) {
            sum(is.na(x$Carx)) <= 3 &&
            sum(is.na(x$Barx)) <= 3 &&
            sum(is.na(x$Tarx)) <= 3
          })
validCtry <- names(res)[res]
DF[DF$Ctry %in% validCtry, ]
where, of course, you may change the condition in FUN according to your needs.
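A more general variant (a sketch, assuming the same <= 3 rule should apply to every column except Ctry and year), so the column names need not be hard-coded:
check_cols <- setdiff(names(DF), c("Ctry", "year"))
res <- by(DF[check_cols], DF$Ctry,
          function(x) all(colSums(is.na(x)) <= 3))
DF[DF$Ctry %in% names(res)[res], ]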
Since you mention that your data is "very huge" (whatever that means exactly), you could try a solution with dplyr and see if it is perhaps faster than the base R solutions. If the other solutions are fast enough, just ignore this one.
library(dplyr)
newdf <- DF %>% group_by(Ctry) %>% filter(sum(is.na(Carx)) <= 3)
