New to R and I'm having difficulty getting the counts I'm after. I have a dataset that contains several columns of various counts per year. Here is an example:
huc_code_eight  year  count_1  count_2
6010105         1946  4        4
6010105         1947  6        0
6010105         1948  2        0
6010105         1957  4        4
6020001         1957  2        0
8010203         1957  0        0
I want to aggregate these counts based upon consecutive years, grouped by huc_code_eight. The expected output would look like:
huc_code_eight  year         count_1  count_2
6010105         1946 - 1948  12       4
6010105         1957         4        4
6020001         1957         2        0
8010203         1957         0        0
I would like to avoid iterating through the data and summing these manually. I've found many examples of aggregating in R, but I've been unable to refactor them to fit my use case.
Any help would be greatly appreciated!
Here is a data.table approach:
Set as data.table, compute the gap from the previous year (1 if NA), and create a group id that increments whenever the gap is not exactly one year:
dat <- setDT(dat)[, yr := year - shift(year), by = huc_code_eight][
  is.na(yr), yr := 1][, grp := cumsum(yr != 1)]
Create the character year (a range if necessary) and the sums of the counts, by id:
dat[, .(
  year = fifelse(.N > 1, paste0(min(year), "-", max(year)), paste0(year, collapse = "")),
  count_1 = sum(count_1), count_2 = sum(count_2)),
  by = .(grp, huc_code_eight)][, grp := NULL][]
Output:
huc_code_eight year count_1 count_2
1: 6010105 1946-1948 12 4
2: 6010105 1957 4 4
3: 6020001 1957 2 0
4: 8010203 1957 0 0
We can create a grouping column based on the difference of adjacent elements in 'year', group by it along with 'huc_code_eight', and then summarise:
library(dplyr)
library(stringr)
df1 %>%
group_by(huc_code_eight) %>%
mutate(year_grp = cumsum(c(TRUE, diff(year) != 1))) %>%
group_by(year_grp, .add = TRUE) %>%
summarise(year = if(n() > 1)
str_c(range(year), collapse = ' - ') else as.character(year),
across(starts_with('count'), \(x) sum(x, na.rm = TRUE)), .groups = 'drop') %>%
dplyr::select(-year_grp)
Output:
# A tibble: 4 × 4
huc_code_eight year count_1 count_2
<int> <chr> <int> <int>
1 6010105 1946 - 1948 12 4
2 6010105 1957 4 4
3 6020001 1957 2 0
4 8010203 1957 0 0
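Another common idiom for run grouping, sketched here assuming year is sorted within each huc_code_eight: consecutive years minus their row index are constant within a run, so that difference can serve directly as the group key (df1 as defined below).
library(dplyr)
df1 %>%
  group_by(huc_code_eight) %>%
  # year - row_number() is constant within a run of consecutive years
  mutate(year_grp = year - row_number()) %>%
  group_by(year_grp, .add = TRUE) %>%
  summarise(year = if (n() > 1) paste(min(year), max(year), sep = " - ")
            else as.character(year),
            across(starts_with("count"), \(x) sum(x, na.rm = TRUE)),
            .groups = "drop") %>%
  select(-year_grp)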
data
df1 <- structure(list(huc_code_eight = c(6010105L, 6010105L, 6010105L,
6010105L, 6020001L, 8010203L), year = c(1946L, 1947L, 1948L,
1957L, 1957L, 1957L), count_1 = c(4L, 6L, 2L, 4L, 2L, 0L), count_2 = c(4L,
0L, 0L, 4L, 0L, 0L)), class = "data.frame", row.names = c(NA,
-6L))
I am trying to calculate the quarterly growth rate in sales for different stores. I grouped my data several times until it reached the state shown below.
How can I generate the growth rate table based on this equation: (Q3-Q2)/Q2?
The code I have so far is as follows. Thank you.
Store  Quarter  Weekly_Sales
1      Q2       60428109
1      Q3       20253948
2      Q2       74356864
2      Q3       24303355
3      Q2       15459190
3      Q3       5298005
4      Q2       79302989
4      Q3       27796792
5      Q2       12523263
5      Q3       4163791
library("dplyr")
library("lubridate")
Walmart_data_set <- read.csv("Walmart_Store_sales.csv")
Walmart_data_set$Date <- as.Date(Walmart_data_set$Date, "%d-%m-%Y")
Walmart_data_set["Month"] <- month(Walmart_data_set$Date)
Walmart_data_set["Quarter"] <- quarters(Walmart_data_set$Date)
Walmart_data_set["Year"] <- format(Walmart_data_set$Date, format ="%Y")
Q23_2012_Sales <- filter(Walmart_data_set, Year == "2012" & (Quarter == "Q3" | Quarter == "Q2"))
Sales_Store_quarter = Q23_2012_Sales %>% group_by(Store, Quarter) %>%
summarise(Weekly_Sales = sum(Weekly_Sales),
.groups = 'drop')
You can do it like this:
df %>%
arrange(Store, Quarter) %>%
group_by(Store) %>%
mutate(growth = (Weekly_Sales - lag(Weekly_Sales))/lag(Weekly_Sales))
Output:
Store Quarter Weekly_Sales growth
<dbl> <chr> <dbl> <dbl>
1 1 Q2 60428109 NA
2 1 Q3 20253948 -0.665
3 2 Q2 74356864 NA
4 2 Q3 24303355 -0.673
5 3 Q2 15459190 NA
6 3 Q3 5298005 -0.657
7 4 Q2 79302989 NA
8 4 Q3 27796792 -0.649
9 5 Q2 12523263 NA
10 5 Q3 4163791 -0.668
Don't group by Quarter.
library(dplyr)
dat %>%
arrange(Store, Quarter) %>%
group_by(Store) %>%
mutate(Growth = c(NA, diff(Weekly_Sales)) / dplyr::lag(Weekly_Sales)) %>%
ungroup()
# # A tibble: 10 x 4
# Store Quarter Weekly_Sales Growth
# <int> <chr> <int> <dbl>
# 1 1 Q2 60428109 NA
# 2 1 Q3 20253948 -0.665
# 3 2 Q2 74356864 NA
# 4 2 Q3 24303355 -0.673
# 5 3 Q2 15459190 NA
# 6 3 Q3 5298005 -0.657
# 7 4 Q2 79302989 NA
# 8 4 Q3 27796792 -0.649
# 9 5 Q2 12523263 NA
# 10 5 Q3 4163791 -0.668
This method assumes that you always have a Q2 for each Q3. (The alternative would be that you have more history in your data, with some stores perhaps skipping a quarter or two.)
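If a store might be missing a quarter, a hedged alternative is to spread the quarters into columns first, so an absent Q2 surfaces as NA growth instead of pairing the wrong rows. A sketch with tidyr::pivot_wider, using the dat defined below:
library(dplyr)
library(tidyr)
dat %>%
  # one row per store, quarters as columns; a missing Q2 becomes NA
  pivot_wider(names_from = Quarter, values_from = Weekly_Sales) %>%
  mutate(Growth = (Q3 - Q2) / Q2)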
Data
dat <- structure(list(Store = c(1L, 1L, 2L, 2L, 3L, 3L, 4L, 4L, 5L, 5L), Quarter = c("Q2", "Q3", "Q2", "Q3", "Q2", "Q3", "Q2", "Q3", "Q2", "Q3"), Weekly_Sales = c(60428109L, 20253948L, 74356864L, 24303355L, 15459190L, 5298005L, 79302989L, 27796792L, 12523263L, 4163791L)), class = "data.frame", row.names = c(NA, -10L))
This is the library I am using for creating dummies:
install.packages("fastDummies")
library(fastDummies)
This is the dataset
winners <- data.frame(
city = c("SaoPaulito", "NewAmsterdam", "BeatifulCow"),
year = c(1990, 2000, 1990),
crime = 1:3)
Let's then create dummies out of these cities:
dummy_cols(winners, select_columns = c("city"))
The results are:
city year crime city_SaoPaulito city_NewAmsterdam city_BeatifulCow
1 SaoPaulito 1990 1 1 0 0
2 NewAmsterdam 2000 2 0 1 0
3 BeatifulCow 1990 3 0 0 1
So the question is that I want to return to the previous dataset. Any ideas?
Thanks in advance!
We can use dcast
library(data.table)
dcast(setDT(winners), crime ~ city, length)
If we need to get the input, it would be
subset(df1, select = 1:3)
# city year crime
#1 SaoPaulito 1990 1
#2 NewAmsterdam 2000 2
#3 BeatifulCow 1990 3
Or with melt
melt(setDT(df1), measure = patterns("_"))[value == 1, .(city, year, crime)]
# city year crime
#1: SaoPaulito 1990 1
#2: NewAmsterdam 2000 2
#3: BeatifulCow 1990 3
data
df1 <- structure(list(city = c("SaoPaulito", "NewAmsterdam", "BeatifulCow"
), year = c(1990L, 2000L, 1990L), crime = 1:3, city_SaoPaulito = c(1L,
0L, 0L), city_NewAmsterdam = c(0L, 1L, 0L), city_BeatifulCow = c(0L,
0L, 1L)), class = "data.frame", row.names = c("1", "2", "3"))
If you are going to have only one city as 1 in each row, you can just skip the dummy columns:
df[, 1:3]
# city year crime
#1 SaoPaulito 1990 1
#2 NewAmsterdam 2000 2
#3 BeatifulCow 1990 3
If you can have multiple cities, one way using dplyr and tidyr::gather is:
library(dplyr)
df %>%
tidyr::gather(key, value, starts_with("city_")) %>%
filter(value == 1) %>%
select(-value, -key)
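Note that gather() is superseded in current tidyr; a sketch of the same idea with pivot_longer:
library(dplyr)
library(tidyr)
df %>%
  # stack the dummy columns into name/value pairs and keep the 1s
  pivot_longer(starts_with("city_")) %>%
  filter(value == 1) %>%
  select(-name, -value)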
I have a dataset containing variables and a quantity of goods sold: for some days, however, there are no values.
I created a dataset with all 0 values in sales and all NA in the rest. How can I add those lines to the initial dataset?
At the moment, I have this:
sales
day month year employees holiday sales
1 1 2018 14 0 1058
2 1 2018 25 1 2174
4 1 2018 11 0 987
sales.NA
day month year employees holiday sales
1 1 2018 NA NA 0
2 1 2018 NA NA 0
3 1 2018 NA NA 0
4 1 2018 NA NA 0
I would like to create a new dataset, inserting the days where I have no observations, value 0 to sales, and NA on all other variables. Like this
new.data
day month year employees holiday sales
1 1 2018 14 0 1058
2 1 2018 25 1 2174
3 1 2018 NA NA 0
4 1 2018 11 0 987
I tried using something like this:
merge(sales.NA,sales, all.y=T, by = c("day","month","year"))
But it does not work.
Using dplyr, you could use a "right_join". For example:
library(dplyr)
sales <- data.frame(day = c(1,2,4),
month = c(1,1,1),
year = c(2018, 2018, 2018),
employees = c(14, 25, 11),
holiday = c(0,1,0),
sales = c(1058, 2174, 987)
)
sales.NA <- data.frame(day = c(1,2,3,4),
month = c(1,1,1,1),
year = c(2018,2018,2018, 2018)
)
right_join(sales, sales.NA)
This leaves you with
day month year employees holiday sales
1 1 1 2018 14 0 1058
2 2 1 2018 25 1 2174
3 3 1 2018 NA NA NA
4 4 1 2018 11 0 987
This leaves NA in sales where you want 0, but that could be fixed by including the sales data in sales.NA, or you could use "tidyr":
right_join(sales, sales.NA) %>% mutate(sales = tidyr::replace_na(sales, 0))
Here is another data.table solution:
library(data.table)
jvars = c("day","month","year")
merge(sales.NA[, ..jvars], sales, by = jvars, all.x = TRUE)[is.na(sales), sales := 0L][]
day month year employees holiday sales
1: 1 1 2018 14 0 1058
2: 2 1 2018 25 1 2174
3: 3 1 2018 NA NA 0
4: 4 1 2018 11 0 987
Or with some neater syntax:
sales[sales.NA[, ..jvars], on = jvars][is.na(sales), sales := 0][]
Reproducible data:
sales <- structure(list(day = c(1L, 2L, 4L), month = c(1L, 1L, 1L), year = c(2018L,
2018L, 2018L), employees = c(14L, 25L, 11L), holiday = c(0L,
1L, 0L), sales = c(1058L, 2174L, 987L)), row.names = c(NA, -3L
), class = c("data.table", "data.frame"))
sales.NA <- structure(list(day = 1:4, month = c(1L, 1L, 1L, 1L), year = c(2018L,
2018L, 2018L, 2018L), employees = c(NA, NA, NA, NA), holiday = c(NA,
NA, NA, NA), sales = c(0L, 0L, 0L, 0L)), row.names = c(NA, -4L
), class = c("data.table", "data.frame"))
This answer uses the data.table package, since I am more familiar with its syntax, but regular data.frames should work pretty much the same. I would also switch to a proper date format, which will make life easier for you down the line.
Done this way, you would not even need the sales.NA table: the missing days come from the full date sequence and simply show up as NAs after the join.
library(data.table)
dt.dates <- data.table(Date = seq.Date(from = as.Date("2018-01-01"), to = as.Date("2018-12-31"),by = "day" ))
dt.sales <- data.table(day = c(1,2,4)
, month = c(1,1,1)
, year = c(2018,2018,2018)
, employees = c(14, 25, 11)
, holiday = c(0,1,0)
, sales = c(1058, 2174, 987)
)
dt.sales[, Date := as.Date(paste(year,month,day, sep = "-")) ]
merge( x = dt.dates
, y = dt.sales
, by.x = "Date"
, by.y = "Date"
, all.x = TRUE
)
Date day month year employees holiday sales
1: 2018-01-01 1 1 2018 14 0 1058
2: 2018-01-02 2 1 2018 25 1 2174
3: 2018-01-03 NA NA NA NA NA NA
4: 2018-01-04 4 1 2018 11 0 987
...
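If you want 0 instead of NA for sales on the missing days, one option (a sketch, assigning the merged result first) is:
res <- merge(x = dt.dates, y = dt.sales, by = "Date", all.x = TRUE)
res[is.na(sales), sales := 0][]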
I need some help working with consecutive results.
Here is my sample data:
df <- structure(list(idno = c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2,
2, 2, 2), result = structure(c(1L, 2L, 2L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 2L, 1L, 1L, 2L, 2L, 2L), .Label = c("Negative", "Positive"
), class = c("ordered", "factor")), samp_date = structure(c(15909,
15938, 15979, 16007, 16041, 16080, 16182, 16504, 16576, 16645,
16721, 16745, 17105, 17281, 17416, 17429), class = "Date")), class = "data.frame", row.names = c(NA,
-16L))
The 'idno' represents individual people who had a test with 'result' on a given date ('samp_date').
From each individual person, I need to find the earliest consecutive 'Negatives' and return the date of the first 'negative' result. To return this date, the consecutive negatives must span >30 days with no 'positive' results.
The example answer for idno == 1 would be 2013-10-29, and 2015-11-06 for idno == 2.
I have tried using rle(as.character(df$result)) but have struggled to understand how to apply this to grouped data.
I would prefer an approach that uses dplyr or data.table.
Thanks for any help.
Similar to #MKR's answer, you can make a grouping variable and summarize in data.table:
library(data.table)
setDT(df)[, samp_date := as.IDate(samp_date)]
# summarize by grouping var g = rleid(idno, result)
runDT = df[, .(
start = first(samp_date),
end = last(samp_date),
dur = difftime(last(samp_date), first(samp_date), units="days")
), by=.(idno, result, g = rleid(idno, result))]
# idno result g start end dur
# 1: 1 Negative 1 2013-07-23 2013-07-23 0 days
# 2: 1 Positive 2 2013-08-21 2013-10-01 41 days
# 3: 1 Negative 3 2013-10-29 2015-07-29 638 days
# 4: 2 Positive 4 2015-10-13 2015-10-13 0 days
# 5: 2 Negative 5 2015-11-06 2016-10-31 360 days
# 6: 2 Positive 6 2017-04-25 2017-09-20 148 days
# find rows meeting the criterion
w = runDT[.(idno = unique(runDT$idno), result = "Negative", min_dur = 30),
  on = .(idno, result, dur >= min_dur), mult = "first", which = TRUE]
# filter
runDT[w]
# idno result g start end dur
# 1: 1 Negative 3 2013-10-29 2015-07-29 638 days
# 2: 2 Negative 5 2015-11-06 2016-10-31 360 days
A dplyr-based solution can be achieved by creating a group for consecutive occurrences of the result column and then taking the first occurrence that meets the criteria:
library(dplyr)
df %>% mutate(samp_date = as.Date(samp_date)) %>%
group_by(idno) %>%
arrange(samp_date) %>%
mutate(result_grp = cumsum(as.character(result)!=lag(as.character(result),default=""))) %>%
group_by(idno, result_grp) %>%
filter( result == "Negative" & (max(samp_date) - min(samp_date) )>=30) %>%
slice(1) %>%
ungroup() %>%
select(-result_grp)
# # A tibble: 2 x 3
# idno result samp_date
# <dbl> <ord> <date>
# 1 1.00 Negative 2013-10-29
# 2 2.00 Negative 2015-11-06
library(dplyr)
df %>% group_by(idno) %>%
  # time_diff: gap (in days, negative) to the next sample when both are Negative;
  # ConsNegDate: earliest date where consecutive Negatives span more than 30 days
  mutate(time_diff = ifelse(result == "Negative" & lead(result) == "Negative",
                            samp_date - lead(samp_date), 0),
         ConsNegDate = min(samp_date[which(abs(time_diff) > 30)]))
# A tibble: 16 x 5
# Groups: idno [2]
idno result samp_date time_diff ConsNegDate
<dbl> <ord> <date> <dbl> <date>
1 1 Negative 2013-07-23 0 2013-10-29
2 1 Positive 2013-08-21 0 2013-10-29
3 1 Positive 2013-10-01 0 2013-10-29
4 1 Negative 2013-10-29 -34 2013-10-29
5 1 Negative 2013-12-02 -39 2013-10-29
6 1 Negative 2014-01-10 -102 2013-10-29
7 1 Negative 2014-04-22 -322 2013-10-29
8 1 Negative 2015-03-10 -72 2013-10-29
9 1 Negative 2015-05-21 -69 2013-10-29
10 1 Negative 2015-07-29 NA 2013-10-29
11 2 Positive 2015-10-13 0 2015-11-06
12 2 Negative 2015-11-06 -360 2015-11-06
13 2 Negative 2016-10-31 0 2015-11-06
14 2 Positive 2017-04-25 0 2015-11-06
15 2 Positive 2017-09-07 0 2015-11-06
16 2 Positive 2017-09-20 0 2015-11-06
I need to perform some intermediate calculations using R.
Here is the data about some events and their types over several years.
structure(list(year = c(1994, 1995, 1997, 1997, 1998, 1998, 1998,
2000, 2000, 2001, 2001, 2002), N = c(3L, 1L, 1L, 4L, 1L, 1L,
4L, 1L, 2L, 1L, 5L, 1L), type = c("OIL", "LNG", "AGS", "OIL",
"DOCK", "LNG", "OIL", "LNG", "OIL", "LNG", "OIL", "DOCK")), .Names = c("year",
"N", "type"), row.names = c(NA, 12L), class = "data.frame")
> head(mydf3)
year N type
1 1994 3 OIL
2 1995 1 LNG
3 1997 1 AGS
4 1997 4 OIL
5 1998 1 DOCK
6 1998 1 LNG
I need the cumulative sum of N by year and type, the count for the current year, and the cumulative sum across all types for all years up to and including the current year.
So I need to get information like this:
year type cntyear cnt_cumultype cnt_cumulalltypes
1994 OIL 3 3 3
1994 LNG 0 0 3
1994 AGS 0 0 3
1994 DOCK 0 0 3
1995 OIL 0 3 4
1995 LNG 1 1 4
1995 AGS 0 0 4
1995 DOCK 0 0 4
...
Some explanation:
cntyear - the N count for the current year and type.
cnt_cumultype - the cumulative sum for this type up to and including the current year.
cnt_cumulalltypes - the cumulative sum over all types for all years up to and including the current year.
I just wanted to do something like this, but it didn't work right...
mydf3$cnt_cumultype<-tail(cumsum(mydf3[which(mydf3$type==mydf3$type & mydf3$year==mydf3$year),]$N), n=1)
How can I calculate these numbers row by row?
Here is a solution with the data.table package. This is also possible to solve in base R, but one step in particular is shorter with data.table.
# load library
library(data.table)
# cast df as a data.table and change column order
setcolorder(setDT(df), c("year", "type", "N"))
# change column names
setnames(df, names(df), c("year", "type", "cntyear"))
# get all type-year combinations in data.table with `CJ` and join these to original
# then, in second [, replace all observations with missing counts to 0
df2 <- df[CJ("year"=unique(df$year), "type"=unique(df$type)), on=c("year", "type")
][is.na(cntyear), cntyear := 0]
# get cumulative counts for each type
df2[, cnt_cumultype := cumsum(cntyear), by=type]
# get total counts for each year
df2[, cnt_cumulalltypes := cumsum(cntyear)]
This results in
df2
year type cntyear cnt_cumultype cnt_cumulalltypes
1: 1994 AGS 0 0 0
2: 1994 DOCK 0 0 0
3: 1994 LNG 0 0 0
4: 1994 OIL 3 3 3
5: 1995 AGS 0 0 3
6: 1995 DOCK 0 0 3
7: 1995 LNG 1 1 4
8: 1995 OIL 0 3 4
9: 1997 AGS 1 1 5
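For comparison, here is a hedged tidyverse sketch of the same steps, using tidyr::complete() to build the year-type grid (assuming you start from the original df, before the renaming above):
library(dplyr)
library(tidyr)
df %>%
  # expand to all year-type combinations, filling missing counts with 0
  complete(year, type, fill = list(N = 0)) %>%
  arrange(year, type) %>%
  group_by(type) %>%
  mutate(cnt_cumultype = cumsum(N)) %>%       # running total per type
  ungroup() %>%
  mutate(cnt_cumulalltypes = cumsum(N)) %>%   # running total over all rows
  rename(cntyear = N)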
....