Add multiple columns lagged by one year in R

I need to add a 1-year-lagged version of multiple columns from my dataframe. Here's my data:
data <- data.frame(Year = c("2011","2011","2011","2012","2012","2012","2013","2013","2013"),
                   Country = c("America","China","India","America","China","India","America","China","India"),
                   Value1 = c(234,443,754,334,117,112,987,903,476),
                   Value2 = c(2,4,5,6,7,8,1,2,2))
And I want to add two columns that contain Value1 and Value2 at t-1, i.e. the previous year's values for the same country.
How can I do this? What would be the correct way to lag my variables by year?
Thanks in advance!

Using data.table:
library(data.table)
setDT(data)
cols <- grep("^Value", colnames(data), value = TRUE)
data[, paste0(cols, "_lag") := lapply(.SD, shift), .SDcols = cols, by = Country]
# Year Country Value1 Value2 Value1_lag Value2_lag
# 1: 2011 America 234 2 NA NA
# 2: 2011 China 443 4 NA NA
# 3: 2011 India 754 5 NA NA
# 4: 2012 America 334 6 234 2
# 5: 2012 China 117 7 443 4
# 6: 2012 India 112 8 754 5
# 7: 2013 America 987 1 334 6
# 8: 2013 China 903 2 117 7
# 9: 2013 India 476 2 112 8
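Note that shift() simply takes the previous row within each Country group, so this assumes the rows are already sorted by Year within Country (as they are in the example data). If that ordering is not guaranteed, a small precaution is to sort first:
setorder(data, Country, Year)
data[, paste0(cols, "_lag") := lapply(.SD, shift), .SDcols = cols, by = Country]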

In dplyr, use lag by group:
library(dplyr) # version >= 1.1.0, for the .by argument
data %>%
  mutate(across(contains("Value"), lag, .names = "{col}_lagged"), .by = Country)
Year Country Value1 Value2 Value1_lagged Value2_lagged
1 2011 America 234 2 NA NA
2 2011 China 443 4 NA NA
3 2011 India 754 5 NA NA
4 2012 America 334 6 234 2
5 2012 China 117 7 443 4
6 2012 India 112 8 754 5
7 2013 America 987 1 334 6
8 2013 China 903 2 117 7
9 2013 India 476 2 112 8
For dplyr versions below 1.1.0 (which lack the .by argument), group explicitly:
data %>%
  group_by(Country) %>%
  mutate(across(contains("Value"), lag, .names = "{col}_lagged")) %>%
  ungroup()

Another way using dplyr to get the job done:
library(dplyr)
data_lagged <- data %>%
  group_by(Country) %>%
  mutate(Value1_Lagged = lag(Value1),
         Value2_Lagged = lag(Value2)) %>%
  ungroup()
data_final <- cbind(data, data_lagged[, c("Value1_Lagged", "Value2_Lagged")])
data_final
Output:
Year Country Value1 Value2 Value1_Lagged Value2_Lagged
1 2011 America 234 2 NA NA
2 2011 China 443 4 NA NA
3 2011 India 754 5 NA NA
4 2012 America 334 6 234 2
5 2012 China 117 7 443 4
6 2012 India 112 8 754 5
7 2013 America 987 1 334 6
8 2013 China 903 2 117 7
9 2013 India 476 2 112 8

Related

R: Pivot_Wider/spread by obtaining average sorted by year

I have the following dataset:
Pet Shop  Year  Item    Price
A         2021  dog     300
A         2021  dog     250
A         2021  fish    20
A         2020  turtle  50
A         2020  dog     250
A         2020  cat     280
A         2019  rabbit  180
A         2019  cat     165
A         2019  cat     270
B         2021  dog     350
B         2021  fish    80
B         2021  fish    70
B         2020  cat     220
B         2020  turtle  90
B         2020  turtle  80
B         2020  fish    55
B         2019  fish    75
C         2021  dog     280
C         2020  cat     260
C         2020  cat     270
C         2019  fish    65
C         2019  cat     270
The code for the data is as follows:
Pet_Shop = c(rep("A",9), rep("B",8), rep("C",5))
Year = c(2021,2021,2021,2020,2020,2020,2019,2019,2019,2021,2021,2021,2020,2020,2020,2020,2019,2021,2020,2020,2019,2019)
Item = c("dog","dog","fish","turtle","dog","cat","rabbit","cat","cat","dog","fish","fish","cat","turtle","turtle","fish","fish","dog","cat","cat","fish","cat")
Price = c(300,250,20,50,250,280,180,165,270,350,80,70,220,90,80,55,75,280,260,270,65,270)
Data = data.frame(Pet_Shop, Year, Item, Price)
Does anyone here know how I can use pivot_wider or spread (or any other method) to achieve the following table? It groups the shop by year and takes the average of the same item within the same shop for that year. I have issues incorporating the year.
Pet Shop  Year  dog                     fish  turtle  cat    rabbit
A         2021  Average(300,250) = 275  20    NA      NA     NA
A         2020  250                     NA    50      280    NA
A         2019  NA                      NA    NA      217.5  NA
B         2021  350                     75    NA      NA     NA
B         2020  NA                      55    85      220    NA
B         2019  NA                      75    NA      NA     NA
C         2021  280                     NA    NA      NA     NA
C         2020  NA                      NA    NA      265    NA
C         2019  NA                      60    NA      270    NA
In pivot_wider you can pass a summary function via values_fn; it is applied to the Price values within each Pet_Shop/Year/Item combination.
result <- tidyr::pivot_wider(Data, names_from = Item,
                             values_from = Price, values_fn = mean)
result
# Pet_Shop Year dog fish turtle cat rabbit
# <chr> <int> <dbl> <dbl> <dbl> <dbl> <dbl>
#1 A 2021 275 20 NA NA NA
#2 A 2020 250 NA 50 280 NA
#3 A 2019 NA NA NA 218. 180
#4 B 2021 350 75 NA NA NA
#5 B 2020 NA 55 85 220 NA
#6 B 2019 NA 75 NA NA NA
#7 C 2021 280 NA NA NA NA
#8 C 2020 NA NA NA 265 NA
#9 C 2019 NA 65 NA 270 NA
The same can also be done with data.table's dcast:
library(data.table)
dcast(setDT(Data), Pet_Shop + Year ~ Item,
      value.var = "Price", fun.aggregate = mean)

Make time-period observations into annual observations in R

I have a dataset (df1) on hundreds of national crises, where each observation is a crisis event at the country level with a start and an end date. I also have the date when the crisis was announced (yyyy-mm-dd format), and a bunch of other crisis characteristics.
df1 <- data.frame(cbind(eventID=c(1,2,3,4), country=c("ALB","ALB","ARG","ARG"), start=c(1994, 1998, 1998, 1991), end=c(1996,1999,1999,1993), announcement=c("1994-11-01","1998-03-01","1998-07-01","1992-01-01"), x1=c(6,2,8,7), x2=c("a","q","k","b")))
eventID country start end announcement x1 x2
1 ALB 1994 1996 1994-11-01 6 a
2 ALB 1998 1999 1998-03-01 2 q
3 ARG 1998 1999 1998-07-01 8 k
4 ARG 1991 1993 1992-01-01 7 b
I need to make df2, a panel of countries with annual observations from the earliest "start" year to the latest "end" year. I want to have a dummy variable, "crisis", that equals 1 for the years between "start" and "end" in df1, and 0 otherwise. I want "announcement" to contain the announcement date in df1 for the year with an announcement, and "NA" otherwise. I would like the extra crisis characteristics, x1 and x2, to show up for crisis years to which they correspond, and "NA" otherwise.
I also need observations for each country for years in which no country has a crisis (in df2: 1997).
df2 <- data.frame(cbind(year=c(1991,1992,1993,1994,1995,1996,1997,1998,1999,1991,1992,1993,1994,1995,1996,1997,1998,1999), country=c("ALB","ALB","ALB","ALB","ALB","ALB","ALB","ALB","ALB","ARG","ARG","ARG","ARG","ARG","ARG","ARG","ARG","ARG"),crisis=c(0,0,0,1,1,1,0,1,1,1,1,1,0,0,0,0,1,1), announcement=c(NA,NA,NA,"1994-11-01",NA,NA,NA,"1998-03-01",NA,NA,"1992-01-01",NA,NA,NA,NA,NA,"1998-07-01",NA), x1=c(NA,NA,NA,6,6,6,NA,2,2,8,8,8,NA,NA,NA,NA,7,7), x2=c(NA,NA,NA,"a","a","a",NA,"q","q","k","k","k",NA,NA,NA,NA,"b","b")))
year country crisis announcement x1 x2
1991 ALB 0 NA NA NA
1992 ALB 0 NA NA NA
1993 ALB 0 NA NA NA
1994 ALB 1 1994-11-01 6 a
1995 ALB 1 NA 6 a
1996 ALB 1 NA 6 a
1997 ALB 0 NA NA NA
1998 ALB 1 1998-03-01 2 q
1999 ALB 1 NA 2 q
1991 ARG 1 NA 8 k
1992 ARG 1 1992-01-01 8 k
1993 ARG 1 NA 8 k
1994 ARG 0 NA NA NA
1995 ARG 0 NA NA NA
1996 ARG 0 NA NA NA
1997 ARG 0 NA NA NA
1998 ARG 1 1998-07-01 7 b
1999 ARG 1 NA 7 b
I would love any suggestions! I'm stumped as to how to replicate the observations for each year but only include the x1 and x2 values when my new "crisis" dummy = 1.
Thanks!
Making use of dplyr and tidyr this could be achieved like so:
library(dplyr)
library(tidyr)
df1 <- data.frame(cbind(eventID=c(1,2,3,4), country=c("ALB","ALB","ARG","ARG"), start=c(1994, 1998, 1998, 1991), end=c(1996,1999,1999,1993), announcement=c("1994-11-01","1998-03-01","1998-07-01","1992-01-01"), x1=c(6,2,8,7), x2=c("a","q","k","b")))
df1 %>%
  mutate(year = factor(start, levels = min(start):max(end))) %>%
  complete(year, country) %>%
  mutate(year = as.numeric(as.character(year))) %>%
  arrange(country, year) %>%
  group_by(country) %>%
  fill(eventID, end, x1, x2) %>%
  ungroup() %>%
  mutate(across(c(eventID, end, x1, x2), ~ ifelse(end < year, NA, .)),
         crisis = as.numeric(!is.na(eventID)))
#> # A tibble: 18 x 9
#> year country eventID start end announcement x1 x2 crisis
#> <dbl> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <dbl>
#> 1 1991 ALB <NA> <NA> <NA> <NA> <NA> <NA> 0
#> 2 1992 ALB <NA> <NA> <NA> <NA> <NA> <NA> 0
#> 3 1993 ALB <NA> <NA> <NA> <NA> <NA> <NA> 0
#> 4 1994 ALB 1 1994 1996 1994-11-01 6 a 1
#> 5 1995 ALB 1 <NA> 1996 <NA> 6 a 1
#> 6 1996 ALB 1 <NA> 1996 <NA> 6 a 1
#> 7 1997 ALB <NA> <NA> <NA> <NA> <NA> <NA> 0
#> 8 1998 ALB 2 1998 1999 1998-03-01 2 q 1
#> 9 1999 ALB 2 <NA> 1999 <NA> 2 q 1
#> 10 1991 ARG 4 1991 1993 1992-01-01 7 b 1
#> 11 1992 ARG 4 <NA> 1993 <NA> 7 b 1
#> 12 1993 ARG 4 <NA> 1993 <NA> 7 b 1
#> 13 1994 ARG <NA> <NA> <NA> <NA> <NA> <NA> 0
#> 14 1995 ARG <NA> <NA> <NA> <NA> <NA> <NA> 0
#> 15 1996 ARG <NA> <NA> <NA> <NA> <NA> <NA> 0
#> 16 1997 ARG <NA> <NA> <NA> <NA> <NA> <NA> 0
#> 17 1998 ARG 3 1998 1999 1998-07-01 8 k 1
#> 18 1999 ARG 3 <NA> 1999 <NA> 8 k 1
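Note that because df1 was built with data.frame(cbind(...)), every column (including start, end and x1) is stored as character, which is why the result above prints them as <chr>. If you would rather work with numeric columns in the final panel, one option, shown here only as a base R sketch, is to convert those columns before running the pipe:
df1[c("eventID", "start", "end", "x1")] <- lapply(df1[c("eventID", "start", "end", "x1")], as.numeric)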

How to remove subjects with missing yearly observations in R?

num Name year age X
1 1 A 2011 68 116292
2 1 A 2012 69 46132
3 1 A 2013 70 7042
4 1 A 2014 71 -100425
5 1 A 2015 72 6493
6 2 B 2011 20 -8484
7 3 C 2015 23 -120836
8 4 D 2011 3 -26523
9 4 D 2012 4 9923
10 4 D 2013 5 82432
I have data on various subjects over 5 years. I need to remove all subjects that are missing any of the years from 2011 to 2015. How can I accomplish this, so that in the given data only subject A is left?
Using data.table:
A data.table solution might look something like this:
library(data.table)
dt <- as.data.table(df)
dt[, keep := identical(unique(year), 2011:2015), by = Name ][keep == T, ][,keep := NULL]
# num Name year age X
#1: 1 A 2011 68 116292
#2: 1 A 2012 69 46132
#3: 1 A 2013 70 7042
#4: 1 A 2014 71 -100425
#5: 1 A 2015 72 6493
This is more strict in that it requires that the unique years be exactly equal to 2011:2015. If there is a 2016, for example, that person would be excluded.
A less restrictive solution would be to check that 2011:2015 is in your unique years. This should work:
dt[, keep := all(2011:2015 %in% unique(year)), by = Name ][keep == T, ][,keep := NULL]
Thus, if for example, A had a 2016 year and a 2010 year it would still keep all of A. But if anyone is missing a year in 2011:2015 this would exclude them.
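Equivalently, if you would rather not create and then drop the helper column, the same membership check can be written with .SD (a small variation on the above; note that Name then comes first in the result):
dt[, if (all(2011:2015 %in% year)) .SD, by = Name]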
Using base R & aggregate:
Same option, but using aggregate from base R:
agg <- aggregate(df$year, by = list(df$Name), FUN = function(x) all(2011:2015 %in% unique(x)))
df[df$Name %in% agg[agg$x == T, 1] ,]
Here is a slightly more straightforward tidyverse solution.
First, expand the dataframe to include all combinations of Name + year:
library(dplyr)
library(tidyr)
df %>% complete(Name, year)
# A tibble: 20 x 5
Name year num age X
<fctr> <int> <int> <int> <int>
1 A 2011 1 68 116292
2 A 2012 1 69 46132
3 A 2013 1 70 7042
4 A 2014 1 71 -100425
5 A 2015 1 72 6493
6 B 2011 2 20 -8484
7 B 2012 NA NA NA
8 B 2013 NA NA NA
9 B 2014 NA NA NA
10 B 2015 NA NA NA
...
Then extend the pipe to group by "Name", and filter to keep only those with 0 NA values:
df %>% complete(Name, year) %>%
group_by(Name) %>%
filter(sum(is.na(age)) == 0)
# A tibble: 5 x 5
# Groups: Name [1]
Name year num age X
<fctr> <int> <int> <int> <int>
1 A 2011 1 68 116292
2 A 2012 1 69 46132
3 A 2013 1 70 7042
4 A 2014 1 71 -100425
5 A 2015 1 72 6493
Just check which names have the right number of entries.
## Reproduce your data
df = read.table(text=" num Name year age X
1 1 A 2011 68 116292
2 1 A 2012 69 46132
3 1 A 2013 70 7042
4 1 A 2014 71 -100425
5 1 A 2015 72 6493
6 2 B 2011 20 -8484
7 3 C 2015 23 -120836
8 4 D 2011 3 -26523
9 4 D 2012 4 9923
10 4 D 2013 5 82432",
header=TRUE)
Tab = table(df$Name)
Keepers = names(Tab)[which(Tab == 5)]
df[df$Name %in% Keepers,]
num Name year age X
1 1 A 2011 68 116292
2 1 A 2012 69 46132
3 1 A 2013 70 7042
4 1 A 2014 71 -100425
5 1 A 2015 72 6493
Here is a somewhat different approach using tidyverse packages:
library(tidyverse)
df <- read.table(text = " num Name year age X
1 1 A 2011 68 116292
2 1 A 2012 69 46132
3 1 A 2013 70 7042
4 1 A 2014 71 -100425
5 1 A 2015 72 6493
6 2 B 2011 20 -8484
7 3 C 2015 23 -120836
8 4 D 2011 3 -26523
9 4 D 2012 4 9923
10 4 D 2013 5 82432")
df2 <- spread(data = df, key = Name, value = year)
x <- colSums(df2[, 4:7], na.rm = TRUE) > 10000
df3 <- select(df2, num, age, X, c(4:7)[x])
df4 <- na.omit(df3)
All steps can of course be constructed as one single pipe with the %>% operator.
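For example, that single pipe might look something like the sketch below, which simply chains the same four steps using the magrittr dot (the column positions 4:7 are assumed to be the spread Name columns, as above):
df %>%
  spread(key = Name, value = year) %>%
  { select(., num, age, X, c(4:7)[colSums(.[, 4:7], na.rm = TRUE) > 10000]) } %>%
  na.omit()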

calculate proportion in grouped data frame

I have the following data frame:
year owngun N
1 2000 Yes 603
2 2000 No 1231
3 2000 Refused 23
4 2012 Yes 440
5 2012 No 841
6 2012 Refused 24
How can I add a column with the proportion of N for each level of owngun within each year?
Assuming your N's are already your aggregated counts, you could get proportions using data.table:
library(data.table)
setDT(df)[,prop:=N/sum(N),by=year]
df
year owngun N prop
1: 2000 Yes 603 0.32471729
2: 2000 No 1231 0.66289715
3: 2000 Refused 23 0.01238557
4: 2012 Yes 440 0.33716475
5: 2012 No 841 0.64444444
6: 2012 Refused 24 0.01839080
Same approach using plyr:
library(plyr)
df2<-ddply(df,.(year),transform,prop=N/sum(N))
We can use ave from base R
df1$prop <- with(df1, N/ave(N, year, FUN = sum))
df1$prop
#[1] 0.32471729 0.66289715 0.01238557 0.33716475 0.64444444 0.01839080
Or another option with tapply
with(df1, prop.table(tapply(N, list(year, owngun), FUN = sum), 1))
>df
year owngun N
1 2000 Yes 603
2 2000 No 1231
3 2000 Refused 23
4 2012 Yes 440
5 2012 No 841
6 2012 Refused 24
>library(dplyr)
> df %>% group_by(year) %>% mutate(Proportion=N/sum(N))
year owngun N Proportion
(int) (fctr) (int) (dbl)
1 2000 Yes 603 0.32471729
2 2000 No 1231 0.66289715
3 2000 Refused 23 0.01238557
4 2012 Yes 440 0.33716475
5 2012 No 841 0.64444444
6 2012 Refused 24 0.01839080
prop.table and xtabs can be handy tools here:
library(magrittr)
xtabs(N ~., df) %>% prop.table(1) %>% round(2)
# owngun
#year No Refused Yes
# 2000 0.66 0.01 0.32
# 2012 0.64 0.02 0.34
Is that what you want?
ndf<-reshape2::dcast(dfr[,-1], owngun ~ year)
ndf$p2000=ndf$`2000`/rowSums(ndf[,-1])
ndf$p2012=ndf$`2012`/rowSums(ndf[,-1])
ndf[c(3,1,2),]
Proportion of year by level of owngun
owngun 2000 2012 p2000 p2012
3 Yes 603 440 0.5781400 0.4216263
1 No 1231 841 0.5941120 0.4057717
2 Refused 23 24 0.4893617 0.5053763
Proportion of owngun by year
ndf<-reshape2::dcast(dfr[,-1], year ~ owngun)
cbind(year=ndf$year,(100*ndf[,-1]/apply(ndf[,-1], 1, sum))[,c(3,1,2)])
year Yes No Refused
1 2000 32.47173 66.28971 1.238557
2 2012 33.71648 64.44444 1.839080

R ifelse condition: frequency of consecutive NAs

I'm new to R. I was looking for similar questions but was not able to find one that fixes my problem; any help would be appreciated.
I have a data frame M:
date value
1 182-2002-01-01 23.95
2 182-2002-01-02 17.47
3 182-2002-01-03 NA
4 183-2002-01-01 NA
5 183-2002-01-02 5.50
6 183-2002-01-03 17.02
What I need to do is: if there are fewer than 5 NAs in a row, I will just repeat the previous number (17.47), and if there are more than 5 NAs in a row, I will need to delete the whole month.
I tried the rle function many times, but it didn't work. Many thanks for your help.
I'm going to adjust your question a little bit for the purposes of demonstration.
I'm going to use a dataset similar to yours, but with 2 NAs in a row instead of 5; this generalises to 5 very easily, don't worry. I'm also going to use a data set that better demonstrates the solution.
So first, how to get your data to look like what I'm going to use:
library(reshape)
M2 <- data.frame(colsplit(M$date, "-", c("ID", "year", "month", "day")),
                 value = M$value)
Now that's out of the road, this is the data I'm going to work with:
set.seed(1234)
M2<-expand.grid(ID=182, year=2002:2004, month=1:2, day=1:3, KEEP.OUT.ATTRS=FALSE)
M2 <- M2[with(M2, order(year, month, day, ID)),] #sort the data
M2$value <- sample(c(NA, rnorm(100)), nrow(M2),
prob=c(0.5, rep(0.5/100, 100)), replace=TRUE)
M2
ID year month day value
1 182 2002 1 1 -0.5012581
7 182 2002 1 2 1.1022975
13 182 2002 1 3 NA
4 182 2002 2 1 -0.1623095
10 182 2002 2 2 1.1022975
16 182 2002 2 3 -1.2519859
2 182 2003 1 1 NA
8 182 2003 1 2 NA
14 182 2003 1 3 NA
5 182 2003 2 1 0.9729168
11 182 2003 2 2 0.9594941
17 182 2003 2 3 NA
3 182 2004 1 1 NA
9 182 2004 1 2 -1.1088896
15 182 2004 1 3 0.9594941
6 182 2004 2 1 -0.4027320
12 182 2004 2 2 -0.0151383
18 182 2004 2 3 -1.0686427
First, we're going to remove all cases where, within a month, there are 2 or more NAs in a row:
NA_run <- function(x, maxlen){
  runs <- rle(is.na(x$value))
  if(any(runs$lengths[runs$values] >= maxlen)) NULL else x
}
library(plyr)
rem <- ddply(M2, .(ID, year, month), NA_run, 2)
rem
ID year month day value
1 182 2002 1 1 -0.5012581
2 182 2002 1 2 1.1022975
3 182 2002 1 3 NA
4 182 2002 2 1 -0.1623095
5 182 2002 2 2 1.1022975
6 182 2002 2 3 -1.2519859
7 182 2003 2 1 0.9729168
8 182 2003 2 2 0.9594941
9 182 2003 2 3 NA
10 182 2004 1 1 NA
11 182 2004 1 2 -1.1088896
12 182 2004 1 3 0.9594941
13 182 2004 2 1 -0.4027320
14 182 2004 2 2 -0.0151383
15 182 2004 2 3 -1.0686427
You can see that the two in a row NAs have been removed. The one remaining is there because it belongs to two different months. Now we're going to fill in the remaining NAs. The na.rm=FALSE argument is there to keep the NAs if they're right at the beginning (which is what you want, I think).
library(zoo)
rem$value <- na.locf(rem$value, na.rm=FALSE)
rem
ID year month day value
1 182 2002 1 1 -0.5012581
2 182 2002 1 2 1.1022975
3 182 2002 1 3 1.1022975
4 182 2002 2 1 -0.1623095
5 182 2002 2 2 1.1022975
6 182 2002 2 3 -1.2519859
7 182 2003 2 1 0.9729168
8 182 2003 2 2 0.9594941
9 182 2003 2 3 0.9594941
10 182 2004 1 1 0.9594941
11 182 2004 1 2 -1.1088896
12 182 2004 1 3 0.9594941
13 182 2004 2 1 -0.4027320
14 182 2004 2 2 -0.0151383
15 182 2004 2 3 -1.0686427
Now all you need to do to make this 5 or more with your data is to change the value of the maxlen argument in NA_run to 5.
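Concretely, that would be the same two calls as above, just with maxlen set to 5:
rem <- ddply(M2, .(ID, year, month), NA_run, 5)
rem$value <- na.locf(rem$value, na.rm = FALSE)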
EDIT: Alternatively, if you don't want values to copy over from previous months:
library(zoo)
rem$value <- ddply(rem, .(ID, year, month), summarise,
value=na.locf(value, na.rm=FALSE))$value
rem
ID year month day value
1 182 2002 1 1 -0.5012581
2 182 2002 1 2 1.1022975
3 182 2002 1 3 1.1022975
4 182 2002 2 1 -0.1623095
5 182 2002 2 2 1.1022975
6 182 2002 2 3 -1.2519859
7 182 2003 2 1 0.9729168
8 182 2003 2 2 0.9594941
9 182 2003 2 3 0.9594941
10 182 2004 1 1 NA
11 182 2004 1 2 -1.1088896
12 182 2004 1 3 0.9594941
13 182 2004 2 1 -0.4027320
14 182 2004 2 2 -0.0151383
15 182 2004 2 3 -1.0686427
I'd do this in two steps:
1. An rle, rollapply, or shift-based strategy to fill in the small gaps (fewer than 5 NAs in a row).
2. A by, aggregate, or ddply-based strategy to take any month with NAs remaining after step 1 and make the whole month NA. A rough sketch follows below.
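For what it's worth, a rough sketch of those two steps, assuming the data has already been split into ID/year/month/day/value columns (M2) as in the other answer; this simple version fills across month boundaries:
library(zoo)   # na.locf
library(plyr)  # ddply

# Step 1 (rle-based): carry the last observation forward, but only across
# gaps of fewer than 5 consecutive NAs; longer runs are restored to NA
fill_short_gaps <- function(x, maxlen) {
  filled <- zoo::na.locf(x, na.rm = FALSE)
  r <- rle(is.na(x))
  long <- rep(r$values & r$lengths >= maxlen, r$lengths)
  filled[long] <- NA
  filled
}
M2$value <- fill_short_gaps(M2$value, 5)

# Step 2: any month that still contains an NA after step 1 has its whole
# value column set to NA (drop those rows instead if you prefer deletion)
na_month <- function(d) {
  if (anyNA(d$value)) d$value <- NA_real_
  d
}
M3 <- ddply(M2, .(ID, year, month), na_month)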
