Summarise based on number of observations per year in a time-series - r

I've got a long dataframe like this:
year value town
2001 0.15 ny
2002 0.19 ny
2002 0.14 ca
2001 NA ny
2002 0.15 ny
2002 0.12 ca
2001 NA ny
2002 0.13 ny
2002 0.1 ca
I want to calculate a mean value per year and per town, like this:
df %>% group_by(year, town) %>% summarise(mean_year = mean(value, na.rm=T))
However, I only want to summarise those town values which have more than 2 non-NA values. In the example above, I don't want to summarise year 2001 for ny because it only has 1 non-NA value.
So the output would be like this:
town year mean_year
ny 2001 NA
ny 2002 0.157
ca 2002 0.12

try this
df %>%
  group_by(year, town) %>%
  summarise(mean_year = ifelse(sum(!is.na(value)) >= 2, mean(value, na.rm = TRUE), NA))
# A tibble: 3 x 3
# Groups: year [2]
year town mean_year
<int> <chr> <dbl>
1 2001 ny NA
2 2002 ca 0.12
3 2002 ny 0.157
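If you prefer the result column to be numeric from the start, an equivalent sketch (assuming dplyr >= 1.0 for the .groups argument) uses if()/else with NA_real_:
df %>%
  group_by(year, town) %>%
  summarise(
    # return NA_real_ explicitly so every group yields a double
    mean_year = if (sum(!is.na(value)) >= 2) mean(value, na.rm = TRUE) else NA_real_,
    .groups = "drop"
  )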
dput
> dput(df)
structure(list(year = c(2001L, 2002L, 2002L, 2001L, 2002L, 2002L,
2001L, 2002L, 2002L), value = c(0.15, 0.19, 0.14, NA, 0.15, 0.12,
NA, 0.13, 0.1), town = c("ny", "ny", "ca", "ny", "ny", "ca",
"ny", "ny", "ca")), class = "data.frame", row.names = c(NA, -9L
))

Related

Add a row with the sum of some variables for a specific combination of values

I am using the Human Mortality Database.
It has values for each week and country for several variables, such as deaths per age group and the rate.
It also treats England and Wales, Scotland, and Northern Ireland as separate countries; instead, I would like to treat them as a single country, the UK.
How can I compute the sum of these three values for each week and sex?
Can somebody help? Thanks.
This is what I have
# Country Year Week Rate
# GBR_SCO 2000 1 0.01
# GBR_SCO 2000 2 0.02
# GBR_SCO 2000 3 0.03
... ... ... ...
# GBR_SCO 2001 1 0.15
# GBR_SCO 2001 2 0.16
# GBR_WAL 2000 1 0.19
# GBR_WAL 2000 2 0.18
# GBR_WAL 2000 3 0.31
... ... ... ...
# GBR_WAL 2001 1 0.53
# GBR_WAL 2001 2 0.62
This is what I want to obtain
# Country Year Week Rate
# GBR 2000 1 0.20
# GBR 2000 2 0.20
# GBR 2000 3 0.34
# ... ... ... ...
# GBR 2001 1 0.68
# GBR 2001 2 0.78
Of course, the dataset also contains other countries, years, and weeks that I want to keep; this is just an example of what I want to do.
I think you need a different approach than your question title suggests: my guess is that you need a group_by and summarise. The main challenge is getting the grouping value for Country; we use str_extract to take all the characters before the underscore:
library(dplyr)
library(stringr)
df %>%
  mutate(Country = str_extract(Country, "[^_]+")) %>%
  group_by(Country, Year, Week) %>%
  summarise(Rate = sum(Rate))
Country Year Week Rate
<chr> <int> <int> <dbl>
1 GBR 2000 1 0.2
2 GBR 2000 2 0.2
3 GBR 2000 3 0.34
4 GBR 2001 1 0.68
5 GBR 2001 2 0.78
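If you would rather avoid stringr, a sketch of the same idea with base R's sub() (the .groups argument assumes dplyr >= 1.0):
df %>%
  # strip everything from the underscore onwards, e.g. "GBR_SCO" -> "GBR"
  mutate(Country = sub("_.*$", "", Country)) %>%
  group_by(Country, Year, Week) %>%
  summarise(Rate = sum(Rate), .groups = "drop")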
data:
structure(list(Country = c("GBR_SCO", "GBR_SCO", "GBR_SCO", "GBR_SCO",
"GBR_SCO", "GBR_WAL", "GBR_WAL", "GBR_WAL", "GBR_WAL", "GBR_WAL"
), Year = c(2000L, 2000L, 2000L, 2001L, 2001L, 2000L, 2000L,
2000L, 2001L, 2001L), Week = c(1L, 2L, 3L, 1L, 2L, 1L, 2L, 3L,
1L, 2L), Rate = c(0.01, 0.02, 0.03, 0.15, 0.16, 0.19, 0.18, 0.31,
0.53, 0.62)), class = "data.frame", row.names = c(NA, -10L))

Calculating sums of observation in time intervals in a df [duplicate]

I've posted this as another question, but realised I've got my sample data wrong.
I've got two separate datasets. df1 looks like this:
loc_ID year observations
nin212 2002 90
nin212 2003 98
nin212 2004 102
cha670 2001 18
cha670 2002 19
cha670 2003 21
df2 looks like this:
loc_ID start_year end_year
nin212 2002 2003
nin212 2003 2004
cha670 2001 2002
cha670 2002 2003
I want to calculate the number of observations in the time intervals (start_year to end_year) per loc_ID. In the example above, I would like to achieve this final dataset:
loc_ID start_year end_year observations
nin212 2002 2003 188
nin212 2003 2004 200
cha670 2001 2002 37
cha670 2002 2003 40
How could I do this?
We can do a non-equi join
library(data.table)
setDT(df2)[, observations := setDT(df1)[df2, sum(observations),
on = .(loc_ID, year >= start_year, year <= end_year),
by = .EACHI]$V1]
output
df2
# loc_ID start_year end_year observations
#1: nin212 2002 2003 188
#2: nin212 2003 2004 200
#3: cha670 2001 2002 37
#4: cha670 2002 2003 40
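A dplyr sketch of the same non-equi join is also possible with join_by(), assuming dplyr >= 1.1.0 is available:
library(dplyr)
left_join(df2, df1,
          by = join_by(loc_ID, start_year <= year, end_year >= year)) %>%
  group_by(loc_ID, start_year, end_year) %>%
  # sum the matched observations within each interval
  summarise(observations = sum(observations), .groups = "drop")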
data
structure(list(loc_ID = c("nin212", "nin212", "nin212", "cha670",
"cha670", "cha670"), year = c(2002L, 2003L, 2004L, 2001L, 2002L,
2003L), observations = c(90L, 98L, 102L, 18L, 19L, 21L)),
class = "data.frame", row.names = c(NA,
-6L))
> dput(df2)
structure(list(loc_ID = c("nin212", "nin212", "cha670", "cha670"
), start_year = c(2002L, 2003L, 2001L, 2002L), end_year = c(2003L,
2004L, 2002L, 2003L)), class = "data.frame", row.names = c(NA,
-4L))

stacking/melting multiple columns into multiple columns in R

I am trying to melt/stack/gather multiple specific columns of a dataframe into 2 columns, retaining all the others.
I have tried many, many answers on stackoverflow without success (some below). I basically have a situation similar to this post here:
Reshaping multiple sets of measurement columns (wide format) into single columns (long format)
only with many more columns to retain and combine. It is important to mention that my year columns are factors, and I have many, many more columns than the sample listed below, so I want to refer to columns by name, not by position.
>df
ID Code Country year.x value.x year.y value.y year.x.x value.x.x
1 A USA 2000 34.33422 2001 35.35241 2002 42.30042
1 A Spain 2000 34.71842 2001 39.82727 2002 43.22209
3 B USA 2000 35.98180 2001 37.70768 2002 44.40232
3 B Peru 2000 33.00000 2001 37.66468 2002 41.30232
4 C Argentina 2000 37.78005 2001 39.25627 2002 45.72927
4 C Peru 2000 40.52575 2001 40.55918 2002 46.62914
I tried using pivot_longer in tidyr, based on the post above which seemed very similar, but it resulted in various errors depending on what I did:
pivot_longer(df,
             cols = -c(ID, Code, Country),
             names_to = c(".value", "group"),
             names_sep = ".")
I also played with melt in reshape2 in various ways which either melted only the values columns or only the years columns. Such as:
new.df <- reshape2:::melt(df, id.var = c("ID", "Code", "Country"),
                          measure.vars = c("value.x", "value.y", "value.x.x", "value.y.y",
                                           "value.x.x.x", "value.y.y.y"),
                          value.name = "value",
                          variable.vars = c("year.x", "year.y", "year.x.x", "year.y.y",
                                            "year.x.x.x", "year.y.y.y"),
                          variable.name = "year")
I also tried tidyr's gather based on other posts, but I find it extremely difficult to understand the help page and the posts.
To be clear, this is what I am looking to achieve:
ID Code Country year value
1 A USA 2000 34.33422
1 A Spain 2000 34.71842
3 B USA 2000 35.98180
3 B Peru 2000 33.00000
4 C Argentina 2000 37.78005
4 C Peru 2000 40.52575
1 A USA 2001 35.35241
1 A Spain 2001 39.82727
3 B USA 2001 37.70768
3 B Peru 2001 37.66468
4 C Argentina 2001 39.25627
4 C Peru 2001 40.55918
1 A USA 2002 42.30042
etc.
I really appreciate the help here.
We can specify the names_pattern
library(tidyr)
library(dplyr)
df %>%
  pivot_longer(cols = -c(ID, Code, Country),
               names_to = c(".value", "group"),
               names_pattern = "(.*)\\.(.*)")
Or use names_sep with the . escaped. According to ?pivot_longer:
names_sep - names_sep takes the same specification as separate(), and can either be a numeric vector (specifying positions to break on), or a single string (specifying a regular expression to split on).
This implies that the string is treated as a regular expression by default, and in a regex . matches any character, not the literal dot. To match the literal dot, either escape it or place it inside square brackets:
pivot_longer(df,
             cols = -c(ID, Code, Country),
             names_to = c(".value", "group"),
             names_sep = "\\.")
# A tibble: 18 x 6
# ID Code Country group year value
# <int> <chr> <chr> <chr> <int> <dbl>
# 1 1 A USA x 2000 34.3
# 2 1 A USA y 2001 35.4
# 3 1 A USA z 2002 42.3
# 4 1 A Spain x 2000 34.7
# 5 1 A Spain y 2001 39.8
# 6 1 A Spain z 2002 43.2
# 7 3 B USA x 2000 36.0
# 8 3 B USA y 2001 37.7
# 9 3 B USA z 2002 44.4
#10 3 B Peru x 2000 33
#11 3 B Peru y 2001 37.7
#12 3 B Peru z 2002 41.3
#13 4 C Argentina x 2000 37.8
#14 4 C Argentina y 2001 39.3
#15 4 C Argentina z 2002 45.7
#16 4 C Peru x 2000 40.5
#17 4 C Peru y 2001 40.6
#18 4 C Peru z 2002 46.6
Update
For the updated dataset
library(stringr)
df2 %>%
  rename_at(vars(matches("year|value")),
            ~ str_replace(., "^([^.]+\\.[^.]+)\\.([^.]+)$", "\\1\\2")) %>%
  pivot_longer(cols = -c(ID, Code, Country),
               names_to = c(".value", "group"),
               names_pattern = "(.*)\\.(.*)")
Or without the rename, use regex lookaround
df2 %>%
  pivot_longer(cols = -c(ID, Code, Country),
               names_to = c(".value", "group"),
               names_sep = "(?<=year|value)\\.")
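Another equivalent sketch for df2 (a variation, not part of the original answer) anchors on the leading year/value token with names_pattern instead of a lookaround:
df2 %>%
  pivot_longer(cols = -c(ID, Code, Country),
               names_to = c(".value", "group"),
               # capture "year"/"value" as the .value part and the remainder as group
               names_pattern = "^(year|value)\\.(.*)$")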
data
df <- structure(list(ID = c(1L, 1L, 3L, 3L, 4L, 4L), Code = c("A",
"A", "B", "B", "C", "C"), Country = c("USA", "Spain", "USA",
"Peru", "Argentina", "Peru"), year.x = c(2000L, 2000L, 2000L,
2000L, 2000L, 2000L), value.x = c(34.33422, 34.71842, 35.9818,
33, 37.78005, 40.52575), year.y = c(2001L, 2001L, 2001L, 2001L,
2001L, 2001L), value.y = c(35.35241, 39.82727, 37.70768, 37.66468,
39.25627, 40.55918), year.z = c(2002L, 2002L, 2002L, 2002L, 2002L,
2002L), value.z = c(42.30042, 43.22209, 44.40232, 41.30232, 45.72927,
46.62914)), class = "data.frame", row.names = c(NA, -6L))
df2 <- structure(list(ID = c(1L, 1L, 3L, 3L, 4L, 4L), Code = c("A",
"A", "B", "B", "C", "C"), Country = c("USA", "Spain", "USA",
"Peru", "Argentina", "Peru"), year.x = c(2000L, 2000L, 2000L,
2000L, 2000L, 2000L), value.x = c(34.33422, 34.71842, 35.9818,
33, 37.78005, 40.52575), year.y = c(2001L, 2001L, 2001L, 2001L,
2001L, 2001L), value.y = c(35.35241, 39.82727, 37.70768, 37.66468,
39.25627, 40.55918), year.x.x = c(2002L, 2002L, 2002L, 2002L,
2002L, 2002L), value.x.x = c(42.30042, 43.22209, 44.40232, 41.30232,
45.72927, 46.62914)), class = "data.frame", row.names = c(NA,
-6L))

How to calculate percent differences in a table in R

I have a csv file where rows 1-5 represent one state, rows 6-10 another, and so on. I also have a column with the years 1970, 1980, ..., 2010 repeated for each state. In R (although I'm not opposed to a solution in Excel if that is easier), I want to calculate, for each state, the percent difference between each year and 1970, i.e. for Alabama 1990 it would be (AL 1990 - AL 1970)/(AL 1970), and add it as a new column to the table so I can export it to a csv.
State, Year, Num
AL, 1970, 1
AL, 1980, 2
AL, 1990, 3
AL, 2000, 4
AL, 2010, 6
Output would be a column
pct_change
0
1
2
3
5
The dplyr package includes the function first, which provides an easy way to get the first value of a group. So if we arrange by Year so that 1970 comes first within each group, then after we group_by(State) we can use first(Num) to get that first value of Num, which represents the value from 1970:
# Example data with 2 states
df <- structure(list(State = c("AL", "AL", "AL", "AL", "AL", "TX",
"TX", "TX", "TX", "TX"), Year = c(1970L, 1980L, 1990L, 2000L,
2010L, 1970L, 1980L, 1990L, 2000L, 2010L), Num = c(1, 2, 3, 4,
6, 5, 2, 10, 12, 6)), class = "data.frame", row.names = c(NA,
-10L))
library(dplyr)
df %>%
arrange(State, Year) %>%
group_by(State) %>%
mutate(perc_diff = 100 * (Num - first(Num))/first(Num))
# A tibble: 10 x 4
# Groups: State [2]
State Year Num perc_diff
<chr> <int> <dbl> <dbl>
1 AL 1970 1 0
2 AL 1980 2 100
3 AL 1990 3 200
4 AL 2000 4 300
5 AL 2010 6 500
6 TX 1970 5 0
7 TX 1980 2 -60
8 TX 1990 10 100
9 TX 2000 12 140
10 TX 2010 6 20
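A small variation (a sketch, assuming each state has exactly one 1970 row) indexes the 1970 value directly, so the rows can stay in their original order:
df %>%
  group_by(State) %>%
  # Num[Year == 1970] picks the baseline value without relying on sort order
  mutate(perc_diff = 100 * (Num - Num[Year == 1970]) / Num[Year == 1970]) %>%
  ungroup()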
We can use data.table. Convert the 'data.frame' to a 'data.table' (setDT(df)), order by 'State' and 'Year' in the i, group by 'State', compute the difference of 'Num' from the first value of 'Num', and assign (:=) the result to create 'perc_diff':
library(data.table)
setDT(df)[order(State, Year), perc_diff :=
100 * (Num - first(Num))/first(Num), State][]
# State Year Num perc_diff
# 1: AL 1970 1 0
# 2: AL 1980 2 100
# 3: AL 1990 3 200
# 4: AL 2000 4 300
# 5: AL 2010 6 500
# 6: TX 1970 5 0
# 7: TX 1980 2 -60
# 8: TX 1990 10 100
# 9: TX 2000 12 140
#10: TX 2010 6 20
Or using base R
v1 <- with(df, ave(Num, State, FUN = function(x) x[1]))
df$perc_diff <- with(df, 100 * (Num - v1)/v1)
data
df <- structure(list(State = c("AL", "AL", "AL", "AL", "AL", "TX",
"TX", "TX", "TX", "TX"), Year = c(1970L, 1980L, 1990L, 2000L,
2010L, 1970L, 1980L, 1990L, 2000L, 2010L), Num = c(1, 2, 3, 4,
6, 5, 2, 10, 12, 6)), class = "data.frame", row.names = c(NA,
-10L))
Base R solution using tapply
df <- df[with(df, order(State, Year)), ]
df$pct_change <- unlist( tapply(df$Num, df$State, function(x) 100 * (x - x[1]) / x[1]) )
> df
State Year Num pct_change
1 AL 1970 1 0
2 AL 1980 2 100
3 AL 1990 3 200
4 AL 2000 4 300
5 AL 2010 6 500
6 TX 1970 5 0
7 TX 1980 2 -60
8 TX 1990 10 100
9 TX 2000 12 140
10 TX 2010 6 20
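Note that the expected output in the question (0, 1, 2, 3, 5) is the raw ratio rather than a percentage; dropping the factor of 100 in any of the solutions above reproduces it exactly, e.g.:
df %>%
  arrange(State, Year) %>%
  group_by(State) %>%
  # same computation as above, just without the * 100
  mutate(pct_change = (Num - first(Num)) / first(Num)) %>%
  ungroup()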

Summing data frames with different lengths

I have two data sets (one for each country) that look like this:
dfGermany
Country Sales Year Code
Germany 2000 2000 221
Germany 1500 2001 150
Germany 2150 2002 270
dfJapan
Country Sales Year Code
Japan 500 2000 221
Japan 750 2001 221
Japan 800 2001 270
Japan 1000 2002 270
Code here is the "name" of the product. What I want to do is take half of the Japanese sales value and add it to the df for Germany if the code and the year match.
So for instance, half of the sales values for products 221 and 270 in dfJapan (250 € and 500 €) should be added to dfGermany for the years 2000 and 2002. But nothing should happen to the values for 2001, since the codes do not match for that year.
I tried merge, but that did not work since the data frames are of different sizes and I also want to match on both year and code.
We can do a join on 'Year', 'Code' and then update the 'dfGermany' 'Sales' column
library(data.table)
setDT(dfGermany)[dfJapan, Sales := Sales + i.Sales/2, on = .(Year, Code)]
dfGermany
# Country Sales Year Code
#1: Germany 2250 2000 221
#2: Germany 1500 2001 150
#3: Germany 2650 2002 270
data
dfGermany <- structure(list(Country = c("Germany", "Germany", "Germany"),
Sales = c(2000, 1500, 2150), Year = 2000:2002, Code = c(221L,
150L, 270L)), row.names = c(NA, -3L), class = "data.frame")
dfJapan <- structure(list(Country = c("Japan", "Japan", "Japan", "Japan"
), Sales = c(500L, 750L, 800L, 1000L), Year = c(2000L, 2001L,
2001L, 2002L), Code = c(221L, 221L, 270L, 270L)),
class = "data.frame", row.names = c(NA, -4L))
Using dplyr and @akrun's provided data:
library(dplyr)
dfGermany %>%
  left_join(dfJapan %>%
              select(Year, Code, sales_japan = Sales),
            by = c('Year', 'Code')) %>%
  mutate(Sales = Sales + coalesce(sales_japan / 2, 0)) %>%
  select(-sales_japan)
> dfGermany
Country Sales Year Code
1 Germany 2250 2000 221
2 Germany 1500 2001 150
3 Germany 2650 2002 270
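A base R sketch of the same update with merge() (the intermediate object m and the ".jp" suffix are just illustrative names):
m <- merge(dfGermany, dfJapan[, c("Year", "Code", "Sales")],
           by = c("Year", "Code"), all.x = TRUE, suffixes = c("", ".jp"))
# add half of the matched Japanese sales, treating non-matches as zero
m$Sales <- m$Sales + ifelse(is.na(m$Sales.jp), 0, m$Sales.jp / 2)
m$Sales.jp <- NULL
m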
