This may be a very easy question, but I am a complete beginner in R.
I have a data.table with many rows, two columns of which can be set as the key. I want to reshape the table by that key.
For example, take the simple data below. In this case the key is ID and Act, and we get a total of 4 groups.
ID ValueDate Act Volume
1 2015-01-01 EUR 21
1 2015-02-01 EUR 22
1 2015-01-01 MAD 12
1 2015-02-01 MAD 11
2 2015-01-01 EUR 5
2 2015-02-01 EUR 7
3 2015-01-01 EUR 4
3 2015-02-01 EUR 2
3 2015-03-01 EUR 6
Here is code to generate the test data:
dd <- data.table(ID = c(1,1,1,1,2,2,3,3,3),
ValueDate = c("2015-01-01", "2015-02-01", "2015-01-01","2015-02-01", "2015-01-01","2015-02-01","2015-01-01","2015-02-01","2015-03-01"),
Act = c("EUR","EUR","MAD","MAD","EUR","EUR","EUR","EUR","EUR"),
Volume=c(21,22,12,11,5,7,4,2,6))
After the change, each column should represent a specific group defined by the key (ID and Act).
Below is the result:
ValueDate ID1_EUR ID1_MAD ID2_EUR ID3_EUR
2015-01-01 21 12 5 4
2015-02-01 22 11 7 2
2015-03-01 NA NA NA 6
Thanks a lot !
What you are trying to do is not recreating the data.table, but reshaping it from a long format to a wide format. You can use dcast for this:
dcast(dd, ValueDate ~ ID + Act, value.var = "Volume")
which gives:
ValueDate 1_EUR 1_MAD 2_EUR 3_EUR
1: 2015-01-01 21 12 5 4
2: 2015-02-01 22 11 7 2
3: 2015-03-01 NA NA NA 6
If you want the numbers in the resulting column names to be preceded by "ID", then you can use:
dcast(dd, ValueDate ~ paste0("ID",ID) + Act, value.var = "Volume")
which gives:
ValueDate ID1_EUR ID1_MAD ID2_EUR ID3_EUR
1: 2015-01-01 21 12 5 4
2: 2015-02-01 22 11 7 2
3: 2015-03-01 NA NA NA 6
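The reverse operation, going from the wide result back to the long format, is melt(). Below is a minimal self-contained sketch (rebuilding dd from the question); the column name ID_Act and the na.rm = TRUE choice are my own additions:

```r
library(data.table)

# recreate the example data from the question
dd <- data.table(ID = c(1,1,1,1,2,2,3,3,3),
                 ValueDate = c("2015-01-01","2015-02-01","2015-01-01","2015-02-01",
                               "2015-01-01","2015-02-01","2015-01-01","2015-02-01","2015-03-01"),
                 Act = c("EUR","EUR","MAD","MAD","EUR","EUR","EUR","EUR","EUR"),
                 Volume = c(21,22,12,11,5,7,4,2,6))

# long -> wide: one column per ID/Act combination
wide <- dcast(dd, ValueDate ~ ID + Act, value.var = "Volume")

# wide -> long: melt() undoes dcast(); na.rm = TRUE drops the filled-in NAs
long <- melt(wide, id.vars = "ValueDate",
             variable.name = "ID_Act", value.name = "Volume", na.rm = TRUE)
```

Dropping the NAs with na.rm = TRUE recovers the original nine observations.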
Using the data below, I want to create a new unique customer id based on the contact date.
Rule: a customer keeps the same id as long as each successive contact date falls within two days of the previous one; whenever the gap exceeds two days, the customer gets a new unique id.
I couldn't get any further than calculating the date differences.
The original dataset I work with is bigger; therefore, I would prefer a data.table solution if possible.
library(data.table)
treshold <- 2
dt <- structure(list(customer_id = c('10','20','20','20','20','20','30','30','30','30','30','40','50','50'),
contact_date = as.Date(c("2019-01-05","2019-01-01","2019-01-01","2019-01-02",
"2019-01-08","2019-01-09","2019-02-02","2019-02-05",
"2019-02-05","2019-02-09","2019-02-12","2019-02-01",
"2019-02-01","2019-02-05")),
desired_output = c(1,2,2,2,3,3,4,5,5,6,7,8,9,10)),
class = "data.frame",
row.names = 1:14)
setDT(dt)
setorder(dt, customer_id, contact_date)
dt[, date_diff_in_days := contact_date - shift(contact_date, type = "lag"), by = customer_id]
dt[, date_diff_in_days := as.numeric(date_diff_in_days)]
dt
customer_id contact_date desired_output date_diff_in_days
1: 10 2019-01-05 1 NA
2: 20 2019-01-01 2 NA
3: 20 2019-01-01 2 0
4: 20 2019-01-02 2 1
5: 20 2019-01-08 3 6
6: 20 2019-01-09 3 1
7: 30 2019-02-02 4 NA
8: 30 2019-02-05 5 3
9: 30 2019-02-05 5 0
10: 30 2019-02-09 6 4
11: 30 2019-02-12 7 3
12: 40 2019-02-01 8 NA
13: 50 2019-02-01 9 NA
14: 50 2019-02-05 10 4
Rule: a customer keeps the same id as long as each successive contact date falls within two days of the previous one; whenever the gap exceeds two days, the customer gets a new unique id.
When creating a new ID, if you set up the by= vectors correctly to capture the rule, the auto-counter .GRP can be used:
thresh <- 2
dt[, g := .GRP, by=.(
customer_id,
cumsum(contact_date - shift(contact_date, fill=first(contact_date)) > thresh)
)]
dt[, any(g != desired_output)]
# [1] FALSE
I think the code above is correct since it works on the example, but you might want to check it on your actual data (comparing against results from, e.g., Gregor's approach) to be sure.
We use cumsum to increment whenever date_diff_in_days is NA or when the threshold is exceeded.
dt[, result := cumsum(is.na(date_diff_in_days) | date_diff_in_days > treshold)]
# customer_id contact_date desired_output date_diff_in_days result
# 1: 10 2019-01-05 1 NA 1
# 2: 20 2019-01-01 2 NA 2
# 3: 20 2019-01-01 2 0 2
# 4: 20 2019-01-02 2 1 2
# 5: 20 2019-01-08 3 6 3
# 6: 20 2019-01-09 3 1 3
# 7: 30 2019-02-02 4 NA 4
# 8: 30 2019-02-05 5 3 5
# 9: 30 2019-02-05 5 0 5
# 10: 30 2019-02-09 6 4 6
# 11: 30 2019-02-12 7 3 7
# 12: 40 2019-02-01 8 NA 8
# 13: 50 2019-02-01 9 NA 9
# 14: 50 2019-02-05 10 4 10
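For completeness, the cumsum() approach can be run end-to-end as a self-contained script; this recreates dt from the question (with treshold respelled as threshold, my change) so the result can be checked against desired_output:

```r
library(data.table)

threshold <- 2

dt <- data.table(
  customer_id  = c('10','20','20','20','20','20','30','30','30','30','30','40','50','50'),
  contact_date = as.Date(c("2019-01-05","2019-01-01","2019-01-01","2019-01-02",
                           "2019-01-08","2019-01-09","2019-02-02","2019-02-05",
                           "2019-02-05","2019-02-09","2019-02-12","2019-02-01",
                           "2019-02-01","2019-02-05")),
  desired_output = c(1,2,2,2,3,3,4,5,5,6,7,8,9,10))

setorder(dt, customer_id, contact_date)

# gap to the previous contact of the same customer (NA for the first contact)
dt[, date_diff_in_days := as.numeric(contact_date - shift(contact_date)), by = customer_id]

# a new id starts at every first contact (NA) or whenever the gap exceeds the threshold
dt[, result := cumsum(is.na(date_diff_in_days) | date_diff_in_days > threshold)]
```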
I have a list my.list of 100+ time-series data frames, with daily observations for each product in its own data frame. Some dates have no record at all. I would like to update each data frame in the list so that every date appears, with the value set to NA where there is no record for that date.
Dates:
start = as.Date('2016/04/08')
full <- seq(start, by='1 days', length=10)
Sample Time Series Data:
d1 <- data.frame(Date = seq(start, by ='2 days',length=5), Sales = c(5,10,15,20,25))
d2 <- data.frame(Date = seq(start, by= '1 day', length=10),Sales = c(1, 2, 3,4,5,6,7,8,9,10))
my.list <- list(d1, d2)
I want to merge all full date values into each data frame, and if no match exists then sales is NA:
my.list
[[d1]]
Date Sales
2016-04-08 5
2016-04-09 NA
2016-04-10 10
2016-04-11 NA
2016-04-12 15
2016-04-13 NA
2016-04-14 20
2016-04-15 NA
2016-04-16 25
2016-04-17 NA
[[d2]]
Date Sales
2016-04-08 1
2016-04-09 2
2016-04-10 3
2016-04-11 4
2016-04-12 5
2016-04-13 6
2016-04-14 7
2016-04-15 8
2016-04-16 9
2016-04-17 10
If I understand correctly, the OP wants to update each of the dataframes in my.list to contain one row for each date given in the vector of dates full.
Base R
In base R, merge() can be used, as already mentioned by Hack-R. However, the answer below expands this to work on all dataframes in the list:
# create a data frame from the vector of full dates
full.df <- data.frame(Date = full)
# apply merge on each dataframe in the list
lapply(my.list, merge, y = full.df, all.y = TRUE)
[[1]]
Date Sales
1 2016-04-08 5
2 2016-04-09 NA
3 2016-04-10 10
4 2016-04-11 NA
5 2016-04-12 15
6 2016-04-13 NA
7 2016-04-14 20
8 2016-04-15 NA
9 2016-04-16 25
10 2016-04-17 NA
[[2]]
Date Sales
1 2016-04-08 1
2 2016-04-09 2
3 2016-04-10 3
4 2016-04-11 4
5 2016-04-12 5
6 2016-04-13 6
7 2016-04-14 7
8 2016-04-15 8
9 2016-04-16 9
10 2016-04-17 10
Caveat
The answer assumes that full covers the overall range of Date of all dataframes in the list.
In order to avoid any mishaps, the overall range of Date can be retrieved from the available data in my.list:
overall_date_range <- Reduce(range, lapply(my.list, function(x) range(x$Date)))
full <- seq(overall_date_range[1], overall_date_range[2], by = "1 days")
Using rbindlist()
Alternatively, the dataframes in the list, which are identical in structure, can be stored in one large dataframe. An additional column indicates which product each row belongs to. The homogeneous structure simplifies subsequent operations.
The code below uses the rbindlist() function from the data.table package to create a large data.table. CJ() (cross join) creates all combinations of dates and product ids, which are then joined to fill in the missing dates:
library(data.table)
all_products <- rbindlist(my.list, idcol = "product.id")[
CJ(product.id = unique(product.id), Date = seq(min(Date), max(Date), by = "1 day")),
on = .(Date, product.id)]
all_products
product.id Date Sales
1: 1 2016-04-08 5
2: 1 2016-04-09 NA
3: 1 2016-04-10 10
4: 1 2016-04-11 NA
5: 1 2016-04-12 15
6: 1 2016-04-13 NA
7: 1 2016-04-14 20
8: 1 2016-04-15 NA
9: 1 2016-04-16 25
10: 1 2016-04-17 NA
11: 2 2016-04-08 1
12: 2 2016-04-09 2
13: 2 2016-04-10 3
14: 2 2016-04-11 4
15: 2 2016-04-12 5
16: 2 2016-04-13 6
17: 2 2016-04-14 7
18: 2 2016-04-15 8
19: 2 2016-04-16 9
20: 2 2016-04-17 10
Subsequent operations can be grouped by product.id, e.g., to determine the number of valid sales data for each product:
all_products[!is.na(Sales), .(valid.sales.data = .N), by = product.id]
product.id valid.sales.data
1: 1 5
2: 2 10
Or the total sales per product:
all_products[, .(total.sales = sum(Sales, na.rm = TRUE)), by = product.id]
product.id total.sales
1: 1 75
2: 2 55
If required, the result can be converted back to a list by
split(all_products, by = "product.id")
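The whole rbindlist()/CJ() pipeline can also be run end-to-end; here is a self-contained sketch using the sample data from the question:

```r
library(data.table)

start <- as.Date("2016/04/08")
d1 <- data.frame(Date = seq(start, by = "2 days", length.out = 5), Sales = c(5,10,15,20,25))
d2 <- data.frame(Date = seq(start, by = "1 day",  length.out = 10), Sales = 1:10)
my.list <- list(d1, d2)

# stack the dataframes, then right-join all product/date combinations
all_products <- rbindlist(my.list, idcol = "product.id")[
  CJ(product.id = unique(product.id),
     Date = seq(min(Date), max(Date), by = "1 day")),
  on = .(Date, product.id)]

# optionally convert back to a list of per-product data.tables
result <- split(all_products, by = "product.id")
```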
I have two data frames (DF1 and DF2):
(1) DF1 contains information on the individual level, i.e. on 10,000 individuals nested in 30 units across 11 years (2000-2011). It contains four variables:
"individual" (numeric id for each individual; ranging from 1-10,000)
"unit" (numeric id for each unit; ranging from 1-30)
"date1" (a date in date format, i.e. 2000-01-01, etc; ranging from 2000-01-01 to 2010-12-31)
"date2" ("date1" + 1 year)
(2) DF2 contains information on unit-level, i.e. on the same 30 units as in DF1 across the same time period (2000-2011) and further contains a numeric variable ("x"):
"unit" (numeric id for each unit; ranging from 1-30)
"date" (a date in date format, i.e. 2000-01-01, etc; ranging from 2000-01-01 to 2011-12-31)
"x" (a numeric variable, ranging from 0 to 200)
I would like to create a new variable ("newvar") that gives me, for each "individual" per "unit", the sum of "x" (DF2) counting from "date1" (DF1) to "date2" (DF1). This means that I would like to add this new variable to DF1.
For instance, if "individual"=1 in "unit"=1 has "date1"=2000-01-01 and "date2"=2001-01-01, and in DF2 "unit"=1 has three observations in the time period "date1" to "date2" (i.e. 2000-01-01 to 2001-01-01) with "x"=1, "x"=2 and "x"=3, then I would like add a new variable that gives for "individual"=1 in "unit"=1 "newvar"=6.
I assume that I need to use a for loop in R and have been using the following code:
for(i in length(DF1)){
DF1$newvar[i] <-sum(DF2$x[which(DF1$date == DF1$date1[i] &
DF1$date == DF1P$date1[i] &
DF2$unit == DF1P$unit[i]),])
}
but get the error message:
Error in DF2$x[which(DF2$date == : incorrect number of dimensions
Any ideas of how to create this variable would be tremendously appreciated!
Here is a small example as well as the expected output, using one unit for the sake of simplicity:
Assume DF1 looks as follows:
individual unit date1 date2
1 1 2000-01-01 2001-01-01
2 1 2000-02-02 2001-02-02
3 1 2000-03-03 2001-03-03
4 1 2000-04-04 2001-04-04
5 1 2000-12-31 2001-12-31
(...)
996 1 2010-01-01 2011-01-01
997 1 2010-02-15 2011-02-15
998 1 2010-03-05 2011-03-05
999 1 2010-04-10 2011-04-10
1000 1 2010-12-27 2011-12-27
1001 2 2000-01-01 2001-01-01
1002 2 2000-02-02 2001-02-02
1003 2 2000-03-03 2001-03-03
1004 2 2000-04-04 2001-04-04
1005 2 2000-12-31 2001-12-31
(...)
1996 2 2010-01-01 2011-01-01
1997 2 2010-02-15 2011-02-15
1998 2 2010-03-05 2011-03-05
1999 2 2010-04-10 2011-04-10
2000 2 2010-12-27 2011-12-27
(...)
3000 34 2000-02-02 2002-02-02
3001 34 2000-05-05 2001-05-05
3002 34 2000-06-06 2001-06-06
3003 34 2000-07-07 2001-07-07
3004 34 2000-11-11 2001-11-11
(...)
9996 34 2010-02-06 2011-02-06
9997 34 2010-05-05 2011-05-05
9998 34 2010-09-09 2011-09-09
9999 34 2010-09-25 2011-09-25
10000 34 2010-10-15 2011-10-15
Assume DF2 looks as follows:
unit date x
1 2000-01-01 1
1 2000-05-01 2
1 2000-12-01 3
1 2001-01-02 10
1 2001-07-05 20
1 2001-12-31 30
(...)
2 2010-05-05 1
2 2010-07-01 1
2 2010-08-09 1
3 (...)
This is what I would like DF1 to look like after running the code:
individual unit date1 date2 newvar
1 1 2000-01-01 2001-01-01 6
2 1 2000-02-02 2001-02-02 16
3 1 2000-03-03 2001-03-03 15
4 1 2000-04-04 2001-04-04 15
5 1 2000-12-31 2001-12-31 60
(...)
996 1 2010-01-01 2011-01-01 3
997 1 2010-02-15 2011-02-15 2
998 1 2010-03-05 2011-03-05 2
999 1 2010-04-10 2011-04-10 2
1000 1 2010-12-27 2011-12-27 0
(...)
However, I cannot simply aggregate: imagine that in DF1 each "unit" has several hundred individuals for each year between 2000 and 2011, and that DF2 has many observations for each unit across the years 2000-2011.
We can use data.table
library(data.table)
setDT(DF1)
setDT(DF2)
DF1[DF2[, .(newvar = sum(x)), .(unit, individual = cumsum(date %in% DF1$date1))],
newvar := newvar, on = .(individual, unit)]
DF1
# individual unit date1 date2 newvar
#1: 1 1 2000-01-01 2001-01-01 6
#2: 2 1 2001-01-02 2002-01-02 60
Or we can use a non-equi join
DF1[DF2[DF1, sum(x), on = .(unit, date >= date1, date <= date2),
by = .EACHI], newvar := V1, on = .(unit, date1=date)]
DF1
# individual unit date1 date2 newvar
#1: 1 1 2000-01-01 2001-01-01 6
#2: 2 1 2001-01-02 2002-01-02 60
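Since the answer does not show the input it used, here is a self-contained illustration of the non-equi join; the two-row DF1 and the DF2 values are reconstructed (an assumption on my part) from the question's examples so that they reproduce the output above:

```r
library(data.table)

DF1 <- data.table(individual = 1:2, unit = 1,
                  date1 = as.Date(c("2000-01-01", "2001-01-02")),
                  date2 = as.Date(c("2001-01-01", "2002-01-02")))

DF2 <- data.table(unit = 1,
                  date = as.Date(c("2000-01-01","2000-05-01","2000-12-01",
                                   "2001-01-02","2001-07-05","2001-12-31")),
                  x = c(1, 2, 3, 10, 20, 30))

# sum x over each individual's [date1, date2] window, then join the sums back
DF1[DF2[DF1, sum(x), on = .(unit, date >= date1, date <= date2), by = .EACHI],
    newvar := V1, on = .(unit, date1 = date)]
```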
You were almost there; I just slightly modified your for loop, and also made sure that the date variables are treated as dates:
DF1$date1 = as.Date(DF1$date1,"%Y-%m-%d")
DF1$date2 = as.Date(DF1$date2,"%Y-%m-%d")
DF2$date = as.Date(DF2$date,"%Y-%m-%d")
for(i in 1:nrow(DF1)){
DF1$newvar[i] <-sum(DF2$x[which(DF2$unit == DF1$unit[i] &
DF2$date>= DF1$date1[i] &
DF2$date<= DF1$date2[i])])
}
The problem was that you were asking DF2$date to be simultaneously equal to DF1$date1 and DF1$date2.
Also, length(DF1) gives you the number of columns. To get the number of rows, you can use either nrow(DF1) or dim(DF1)[1].
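The length() vs. nrow() point is easy to verify; a quick illustration on a throwaway data frame:

```r
# a data.frame is internally a list of columns
df <- data.frame(a = 1:5, b = letters[1:5], c = runif(5))

length(df)   # 3: the number of columns
nrow(df)     # 5: the number of rows
dim(df)[1]   # 5: also the number of rows
```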
Hi, I would like to reshape my data frame profile_table_long, which holds 24 hourly observations per day for 50 companies over 2 years.
Data - date from 2015-01-01 to 2016-12-31
name - name of firm 1:50
hour - hour 1:24 (with an additional 2a between 2 and 3)
load - variable
x <- NULL
x$Data <- rep(seq(as.Date("2015/1/1"), as.Date("2016/12/31"), "days"), length.out=913750)
x$Name <- rep(rep(1:50, each=731), length.out=913750)
x$hour <- rep(rep(c(1, 2, "2a", 3:24), each=36550),length.out=913750)
x$load <- sample(2000:2500, 913750, replace = TRUE)
x <- data.frame(x)
Data name hour load
1 2015-01-01 1 1 8837.050
2 2015-01-01 1 2 6990.952
3 2015-01-01 1 2a 8394.421
4 2015-01-01 1 3 8267.276
5 2015-01-01 1 4 8324.069
6 2015-01-01 1 5 8644.901
7 2015-01-01 1 6 8720.878
8 2015-01-01 1 7 9213.204
9 2015-01-01 1 8 9601.976
10 2015-01-01 1 9 8549.170
11 2015-01-01 1 10 9379.324
12 2015-01-01 1 11 9370.418
13 2015-01-01 1 12 7159.201
14 2015-01-01 1 13 8497.344
15 2015-01-01 1 14 6419.835
16 2015-01-01 1 15 9354.910
17 2015-01-01 1 16 9320.462
18 2015-01-01 1 17 9263.098
19 2015-01-01 1 18 9167.991
20 2015-01-01 1 19 9004.010
21 2015-01-01 1 20 9134.466
22 2015-01-01 1 21 7631.472
23 2015-01-01 1 22 6492.074
24 2015-01-01 1 23 6888.025
25 2015-01-01 1 24 8821.283
26 2015-01-02 1 1 8902.135
I would like to make it look like that:
data hour name1 name2 .... name49 name50
2015-01-01 1 load load .... load load
2015-01-01 2 load load .... load load
.....
2015-01-01 24 load load .... load load
2015-01-02 1 load load .... load load
.....
2016-12-31 24 load load .... load load
I tried spread() from the tidyr package, profile_table_tidy <- spread(profile_table_long, name, load), but I am getting an error: Error: Duplicate identifiers for rows
This method uses the reshape2 package:
library("reshape2")
profile_table_wide = dcast(data = profile_table_long,
formula = Data + hour ~ name,
value.var = "load")
You might also want to choose a value for fill. Good luck!
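For instance, if missing Data/hour/name combinations should appear as 0 instead of NA, fill can be supplied. A sketch on made-up toy data (not the full profile_table_long):

```r
library(reshape2)

# toy long-format data: company 2 has no reading for the second Data/hour combination
toy <- data.frame(Data = as.Date(c("2015-01-01", "2015-01-01", "2015-01-02")),
                  hour = c(1, 1, 2),
                  name = c(1, 2, 1),
                  load = c(2100, 2200, 2050))

wide <- dcast(data = toy, formula = Data + hour ~ name,
              value.var = "load", fill = 0)
```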
I have a data.table with about 1000 rows, two columns of which are set as the key. I would like to create a new variable named Difference that contains the difference between consecutive numeric rows within each group defined by the key.
For example, in the simple data below, ID and Act are set as the key:
ID ValueDate Act Volume
1 2015-01-01 EUR 21
1 2015-02-01 EUR 22
1 2015-01-01 MAD 12
1 2015-02-01 MAD 11
2 2015-01-01 EUR 5
2 2015-02-01 EUR 7
3 2015-01-01 EUR 4
3 2015-02-01 EUR 2
3 2015-03-01 EUR 6
What I would like is to add a new column that calculates the difference between consecutive rows (ordered by time) within each group; note that for the first row of each group, the value of Difference is 0.
ID ValueDate Act Volume Difference
1 2015-01-01 EUR 21 0
1 2015-02-01 EUR 22 1
1 2015-01-01 MAD 12 0
1 2015-02-01 MAD 11 -1
2 2015-01-01 EUR 5 0
2 2015-02-01 EUR 7 2
3 2015-01-01 EUR 4 0
3 2015-02-01 EUR 2 -2
3 2015-03-01 EUR 6 4
Here is code to generate the test data:
dd <- data.table(ID = c(1,1,1,1,2,2,3,3,3),
ValueDate = c("2015-01-01", "2015-02-01", "2015-01-01","2015-02-01", "2015-01-01","2015-02-01","2015-01-01","2015-02-01","2015-03-01"),
Act = c("EUR","EUR","MAD","MAD","EUR","EUR","EUR","EUR","EUR"),
Volume=c(21,22,12,11,5,7,4,2,6))
set key for the table:
setkey(dd, ID, Act)
to view the data:
> dd
ID ValueDate Act Volume
1 1 2015-01-01 EUR 21
2 1 2015-02-01 EUR 22
3 1 2015-01-01 MAD 12
4 1 2015-02-01 MAD 11
5 2 2015-01-01 EUR 5
6 2 2015-02-01 EUR 7
7 3 2015-01-01 EUR 4
8 3 2015-02-01 EUR 2
9 3 2015-03-01 EUR 6
So, can we use aggregate to calculate the difference? Or the .SD ("subset of data") approach? I don't know how to compute the difference between consecutive rows by group, and note that the number of rows may differ between groups. What I tried before was using for(i in 0:x) to recalculate the difference, but I don't think that is a good method :(
If you want to explicitly use your key, you could pass key(dd) to the by argument:
dd[, Difference := c(0L, diff(Volume)), by = key(dd)]
dd
# ID ValueDate Act Volume Difference
# 1: 1 2015-01-01 EUR 21 0
# 2: 1 2015-02-01 EUR 22 1
# 3: 1 2015-01-01 MAD 12 0
# 4: 1 2015-02-01 MAD 11 -1
# 5: 2 2015-01-01 EUR 5 0
# 6: 2 2015-02-01 EUR 7 2
# 7: 3 2015-01-01 EUR 4 0
# 8: 3 2015-02-01 EUR 2 -2
# 9: 3 2015-03-01 EUR 6 4
Or, using data.table v1.9.6+, you could also utilize the shift function:
dd[, Difference := Volume - shift(Volume, fill = Volume[1L]), by = key(dd)]
We can use dplyr. After grouping by 'ID' and 'Act', we create the 'Difference' column as the difference between 'Volume' and its lag.
library(dplyr)
dd %>%
group_by(ID, Act) %>%
mutate(Difference = Volume-lag(Volume))
EDIT: As mentioned by @DavidArenburg, replacing lag(Volume) with lag(Volume, default = Volume[1L]) will give 0 instead of NA for the first element in each group.
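Putting the dplyr version together as a runnable sketch (recreating dd from the question as a plain data frame):

```r
library(dplyr)

dd <- data.frame(ID = c(1,1,1,1,2,2,3,3,3),
                 ValueDate = c("2015-01-01","2015-02-01","2015-01-01","2015-02-01",
                               "2015-01-01","2015-02-01","2015-01-01","2015-02-01","2015-03-01"),
                 Act = c("EUR","EUR","MAD","MAD","EUR","EUR","EUR","EUR","EUR"),
                 Volume = c(21,22,12,11,5,7,4,2,6))

# default = Volume[1L] makes the first difference in each group 0 instead of NA
res <- dd %>%
  group_by(ID, Act) %>%
  mutate(Difference = Volume - lag(Volume, default = Volume[1L])) %>%
  ungroup()
```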
Or with ave from base R: we can do the diff and concatenate a leading 0 so that the lengths match, since diff returns a vector one element shorter than the original.
with(dd, ave(Volume, ID, Act, FUN = function(x) c(0, diff(x))))
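As a self-contained check of the ave() approach (recreating dd as a plain data frame):

```r
dd <- data.frame(ID = c(1,1,1,1,2,2,3,3,3),
                 ValueDate = c("2015-01-01","2015-02-01","2015-01-01","2015-02-01",
                               "2015-01-01","2015-02-01","2015-01-01","2015-02-01","2015-03-01"),
                 Act = c("EUR","EUR","MAD","MAD","EUR","EUR","EUR","EUR","EUR"),
                 Volume = c(21,22,12,11,5,7,4,2,6))

# ave() applies the function within each ID/Act group and returns a vector
# aligned with the original row order
dd$Difference <- with(dd, ave(Volume, ID, Act, FUN = function(x) c(0, diff(x))))
```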