Assume I have longitudinal data (e.g. 21 countries' GDP over 5 years) with 2 variables on the same scale. The data are sorted as follows:
Country-year   x1    x2
USA-1          50    2
USA-2          47    1.5
...            ...   ...
USA-5          52    1.9
UK-1           63    2
...            ...   ...
UK-5           41    3
...            ...   ...
n <- 21      # number of countries
m <- 5       # years per country
N <- n * m   # total number of country-year rows
set.seed(123)
x <- MASS::mvrnorm(N, rep(0, 2), diag(2))
I want to do two things:
First, calculate the mean country-wise
Second, calculate the mean year-wise
For the first task, I have the following code:
x_barbar <- NULL
k <- 1
for (i in 1:n) {
  x_barbar[i] <- mean(x[k:(k + (m - 1)), ])  # mean over country i's m rows (both columns)
  k <- k + m
}
How can I optimize my code for the first task, and what would the code for the second task look like?
Your help is appreciated.
If you want to calculate the mean of the odd/even rows, you can use recycling of a logical index vector.
mean_odd  <- mean(x[c(TRUE, FALSE), ])  # rows 1, 3, 5, ...
mean_even <- mean(x[c(FALSE, TRUE), ])  # rows 2, 4, 6, ...
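As a quick sanity check (using the simulated x from the question), the recycled logical index picks the same rows as an explicit odd/even sequence:
all.equal(mean(x[c(TRUE, FALSE), ]), mean(x[seq(1, nrow(x), by = 2), ]))
all.equal(mean(x[c(FALSE, TRUE), ]), mean(x[seq(2, nrow(x), by = 2), ]))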
For the updated dataset, we can separate the country and year into their own columns and then calculate the group means.
library(dplyr)
library(tidyr)
data <- df %>% separate(`Country-year`, c('Country', 'Year'), sep = '-')
country_mean <- data %>% group_by(Country) %>% summarise(across(c(x1, x2), mean))
year_mean <- data %>% group_by(Year) %>% summarise(across(c(x1, x2), mean))
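Note that the question only simulates the matrix x, so a df with a Country-year column is not defined anywhere. A hypothetical construction that makes the code above runnable could look like this (the country codes C1, C2, ... are made up purely for illustration):
countries <- paste0("C", 1:n)  # made-up country codes, for illustration only
df <- data.frame(
  `Country-year` = paste(rep(countries, each = m), rep(1:m, times = n), sep = "-"),
  x1 = x[, 1],
  x2 = x[, 2],
  check.names = FALSE
)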
All you have to do is use tapply.
For example, for the first case:
tapply(x, (row(x) + 1) %/% 2, mean)
For the second case:
tapply(x, row(x) %% 2, mean)
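The grouping indices above are written for blocks of two rows; for the question's layout with m = 5 years per country, the same idea generalizes (a sketch using the simulated x, m and the loop result x_barbar from the question):
country_means <- tapply(x, (row(x) - 1) %/% m + 1, mean)  # blocks of m consecutive rows = one country
year_means    <- tapply(x, (row(x) - 1) %% m + 1, mean)   # every m-th row = one year
all.equal(as.vector(country_means), x_barbar)             # should match the loop result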
Related
I have a data frame of 200 * 1000 rows and 6 columns. I want to calculate the correlation between 2 columns, cor(df$y1, df$y2), for every 200 rows, so that I get 1000 different correlation values as a result.
When I wanted to calculate the sums of every 200 rows, I could simply use
rowsum(df, rep(1:1000, each = 200))
but there is no such command in R as rowcor that I could use equivalently for correlations.
We may use a group-by approach:
by(df[c('y1', 'y2')], as.integer(gl(nrow(df), 200, nrow(df))),
   FUN = function(x) cor(x$y1, x$y2))
Or, using the tidyverse:
library(dplyr)
out <- df %>%
  group_by(grp = as.integer(gl(n(), 200, n()))) %>%
  summarise(Cor = cor(y1, y2))
> dim(out)
[1] 1000 2
data
set.seed(24)
df <- as.data.frame(matrix(rnorm(200 * 1000 * 6), ncol = 6))
names(df)[1:2] <- c('y1', 'y2')
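A quick sanity check (assuming the simulated df and the out object above): the first grouped value should equal the correlation computed directly on rows 1-200.
all.equal(out$Cor[1], cor(df$y1[1:200], df$y2[1:200]))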
Let's say I make a dummy data frame with 6 columns and 10 observations:
X <- data.frame(a=1:10, b=11:20, c=21:30, d=31:40, e=41:50, f=51:60)
I need to create a loop that evaluates 3 columns at a time, adding the summed second and third columns and dividing this by the sum of the first column:
(sum(b)+sum(c))/sum(a) ... (sum(e)+sum(f))/sum(d) ...
I then need to construct a final data frame from these values. For example, using the dummy data frame above, it would look like:
     value
1 7.454545
2  2.84507
I imagine I need to use the next function to iterate within the loop, but I'm fairly lost! Thank you for any help.
You can split your data frame into groups of 3 columns by creating a vector with rep where each element repeats 3 times. Then, with this list of sub data frames, (s)apply a function that sums the second and third columns, adds those sums together, and divides by the sum of the first column.
out_vec <- sapply(
  split.default(X, rep(1:ncol(X), each = 3, length.out = ncol(X))),
  function(x) (sum(x[2]) + sum(x[3])) / sum(x[1])
)
data.frame(value = out_vec)
# value
# 1 7.454545
# 2 2.845070
You could also sum all the columns up front before the sapply with colSums, which will be more efficient.
out_vec <- sapply(
  split(colSums(X), rep(1:ncol(X), each = 3, length.out = ncol(X))),
  function(x) (x[2] + x[3]) / x[1]
)
data.frame(value = out_vec, row.names = NULL)
# value
# 1 7.454545
# 2 2.845070
You could use tapply:
tapply(colSums(X), gl(ncol(X)/3, 3), function(x) sum(x[-1]) / x[1])
       1        2
7.454545 2.845070
Here is an option with the tidyverse:
library(dplyr) # 1.0.0
library(tidyr)
X %>%
  summarise(across(.fn = sum)) %>%
  pivot_longer(everything()) %>%
  group_by(grp = as.integer(gl(n(), 3, n()))) %>%
  summarise(value = sum(lead(value) / first(value), na.rm = TRUE)) %>%
  select(value)
# A tibble: 2 x 1
# value
# <dbl>
#1 7.45
#2 2.85
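For what it's worth, a compact base R variant of the colSums idea (a sketch, assuming groups of exactly three columns as in X): reshape the column sums into a 3-row matrix, then divide the summed second and third rows by the first.
cs <- matrix(colSums(X), nrow = 3)
data.frame(value = (cs[2, ] + cs[3, ]) / cs[1, ])
#      value
# 1 7.454545
# 2 2.845070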
I am trying to use tidyverse tools (instead of for loops) on some groups to be evaluated with procedures from the mvabund package.
Basically, for the procedure I need a dataframe with just numeric columns (species abundances) first and then grouping variables for a downstream procedure.
But if I want to do this on multiple groupings, I need to include grouping variables. However, when using group_by these non-numeric variables are still present and the procedure will not run.
How can I use dplyr to pass the numeric variables to a (mvabund) function?
If I were working with just one group, the process would be as follows:
library(tidyverse)
library(mvabund)
df <- data.frame(Genus.species1 = rep(c(0, 1), each = 10),
                 Genus.species2 = rep(c(1, 0), each = 10),
                 Genus.species3 = sample(1:100, 20, replace = TRUE),
                 Genus.species4 = sample(1:100, 20, replace = TRUE),
                 GroupVar1 = rep(c("Site1", "Site2"), each = 2, times = 5),
                 GroupVar2 = rep(c("AA", "BB"), each = 10),
                 GroupVar3 = rep(c("A1", "B1"), times = 10))
df1 <- filter(df, GroupVar2 == "AA" & GroupVar3 == "A1") # get desired subset/group
df2 <- select(df1, -GroupVar1, -GroupVar2, -GroupVar3) # retain numeric variables
MVA.fit <- mvabund(df2) # run procedure
MVA.model <- manyglm(MVA.fit ~ df1$GroupVar1, family="negative binomial") # here I need to bring back GroupVar1 for this procedure
MVA.anova <- anova(MVA.model, nBoot=1000, test="wald", p.uni="adjusted")
MVA.anova$table[2,] # desired result
I have tried using map, do, nest, etc. to no avail.
Without groupings, this works:
df.t <- as_tibble(df)
nest.df <- df.t %>% nest(-GroupVar1, -GroupVar2, -GroupVar3)
mva.tt <- nest.df %>%
mutate(mva.tt = map(data, ~ mvabund(.x)))
but this next step does not
mva.tt %>% mutate(MANY = map(data, ~ manyglm(.x ~ GroupVar1, family="negative binomial")))
Moreover, once I try to remove columns that sum to zero or include groupings, everything fails.
Is there a smart way to do this with dplyr and pipes? Or is a for loop the answer?
Edit:
Originally, I also asked this: "Also, when broken into groups, the dataframe will contain columns that are all zeroes; normally I'd remove these. Can I have dplyr groupings that vary in the number of variables?" But the comments revealed this is not possible given my proposed setup. So I am still interested in the above.
I copied the steps into a function, and added group information in the last line to differentiate the groups.
fun <- function(df) {
  df1 <- select(df, -GroupVar1, -GroupVar2, -GroupVar3)
  df3 <- df1 %>% select_if(~ sum(.) > 0)
  MVA.fit <- mvabund(df3)
  MVA.model <- manyglm(MVA.fit ~ df$GroupVar1, family = "negative binomial")
  MVA.anova <- anova(MVA.model, nBoot = 1000, test = "wald", p.uni = "adjusted")
  cbind(Group2 = df$GroupVar2[1], Group3 = df$GroupVar3[1], MVA.anova$table[2, ])
}
Split the dataframe into groups and apply the function:
library(tidyverse)
library(mvabund)
df %>%
  group_split(GroupVar2, GroupVar3) %>%
  map_dfr(fun)
#Time elapsed: 0 hr 0 min 0 sec
#Time elapsed: 0 hr 0 min 0 sec
#Time elapsed: 0 hr 0 min 0 sec
#Time elapsed: 0 hr 0 min 0 sec
# Group2 Group3 Res.Df Df.diff wald Pr(>wald)
#1 AA A1 3 1 1.028206 0.7432567
#2 AA B1 3 1 2.979169 0.1608392
#3 BB A1 3 1 2.330708 0.2137862
#4 BB B1 3 1 1.952617 0.2567433
I have a dataframe containing multiple (thousands of) unequal-length monthly time series separated by a non-sequential id variable. The data set looks like this:
id1 <- rep(12, 60)
ds1 <- seq(as.Date("2014-01-01"), as.Date("2018-12-31"), by = "month")
value1 <- sample(60)
id2 <- rep(132, 48)
ds2 <- seq(as.Date("2015-01-01"), as.Date("2018-12-31"), by = "month")
value2 <- sample(48)
id3 <- rep(210, 72)
ds3 <- seq(as.Date("2013-01-01"), as.Date("2018-12-31"), by = "month")
value3 <- sample(72)
id <- c(id1, id2, id3)
ds <- c(ds1, ds2, ds3)
y <- c(value1, value2, value3)
df <- data.frame(id, ds, y)
> head(df)
id ds y
1 12 2014-01-01 51
2 12 2014-02-01 22
3 12 2014-03-01 34
4 12 2014-04-01 53
5 12 2014-05-01 26
6 12 2014-06-01 56
I want to run the prophet forecasting model on every time series separated by id and generate a data frame with the one-month-ahead forecast and one or two diagnostic statistics. The rows of that data frame should start with the id variable, i.e. the first column should be id.
For the single-id case, the procedure looks like this:
library(prophet)
set.seed(1234)
id <- rep(23, 60)
ds <- seq(as.Date("2014-01-01"), as.Date("2018-12-31"), by = "month")
y <- sample(60)
df <- data.frame(ds, y)
m <- prophet(df, seasonality.mode = 'multiplicative')
future <- make_future_dataframe(m, periods = 1)
fcst <- predict(m, future)
last_fcst <- fcst[61,]
mse <- mean((df$y - fcst$yhat[c(1:60)])^2)
mae <- mean(abs((df$y - fcst$yhat[c(1:60)])))
final <- cbind(last_fcst, mse, mae)
final
> final
ds trend multiplicative_terms multiplicative_terms_lower multiplicative_terms_upper yearly
61 2018-12-02 27.19465 -0.1401155 -0.1401155 -0.1401155 -0.1401155
yearly_lower yearly_upper additive_terms additive_terms_lower additive_terms_upper yhat_lower yhat_upper
61 -0.1401155 -0.1401155 0 0 0 3.689257 42.66293
trend_lower trend_upper yhat mse mae
61 27.19465 27.19465 23.38425 242.4414 12.80532
I want to repeat this procedure and create a dataset with each one-month forecast and its corresponding row id. Any idea what the best way to do that would be?
As I said in the comments, it's best to split() by id into a list(). This way you can use lapply() (or purrr::map()) to make predictions and calculate the metrics for each id.
library(prophet)
library(dplyr) # for data wrangling
library(purrr) # for map/map2, equivalents are lapply/mapply from baseR
# preparations
l_df <- df %>%
split(.$id)
m_list <- map(l_df, prophet) # prophet call
future_list <- map(m_list, make_future_dataframe, periods = 1) # makes future obs
forecast_list <- map2(m_list, future_list, predict) # map2 because we have two inputs
So, forecast_list will contain the output from predict(), again split by id.
You can "merge" them back into a single data.frame by using bind_rows(forecast_list), as long as each data frame has the same structure.
For the metrics I'd follow the same principle:
# to evaluate the model: create a new list
eval_list <- map2(forecast_list, l_df, function(x, z) {
  # x is the single dataframe of predictions
  # z is the original dataframe with actuals
  x <- x[1:(nrow(x) - 1), ]  # drop the last row (the one-month-ahead forecast) so lengths match the actuals
  x <- x %>% mutate(y_true = z %>% select(y) %>% pull())  # add the column of actual values
})
# metrics evaluation:
eval_list <- map(eval_list, function(x) {
  x <- x %>%
    summarise(mse = mean((y_true - yhat)^2))  # add more scores
})
# $`12`
# mse
# 1 199.1829
#
# $`132`
# mse
# 1 156.6394
#
# $`210`
# mse
# 1 415.9659
You can use map2() like I did for eval_list to bind the true forecast with the metrics if you want.
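A sketch of that last step, assuming the forecast_list and eval_list built above (the columns kept from the forecast are just an illustration):
final_list <- map2(forecast_list, eval_list, function(fc, met) {
  cbind(fc[nrow(fc), c("ds", "yhat")], met)  # last row = the one-month-ahead forecast
})
bind_rows(final_list, .id = "id")  # stack the per-id results, keeping the id as a column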
My current data frame looks like this:
# Create sample data
my_df <- data.frame(seq(1, 100),
                    rep(c("ind_1", "", "", ""), times = 25),
                    rep(c("", "ind_2", "", ""), times = 25),
                    rep(c("", "", "ind_3", ""), times = 25),
                    rep(c("", "", "", "ind_4"), times = 25))
# Rename columns
names(my_df)[names(my_df)=="seq.1..100."] <- "value"
names(my_df)[names(my_df)=="rep.c..ind_1................times...25."] <- "ind_1"
names(my_df)[names(my_df)=="rep.c......ind_2............times...25."] <- "ind_2"
names(my_df)[names(my_df)=="rep.c..........ind_3........times...25."] <- "ind_3"
names(my_df)[names(my_df)=="rep.c..............ind_4....times...25."] <- "ind_4"
# Replace empty elements with NA
my_df[my_df==''] = NA
What I want to script is a rather simple for loop that calculates the sum of the value column for each of the four ind_* columns and prints the result.
So far my very meagre attempt has been:
# Create a vector with all individuals
individuals <- c("ind_1", "ind_2", "ind_3", "ind_4")
# Calculate aggregates for each individual
for (i in individuals){
  ind <- 1
  sum_i <- aggregate(value ~ ind_1, data = my_df, sum)
  print(paste("Individual", i, "possesses an aggregated value of", sum_i$value))
  ind <- ind + 1
}
As you can see, I currently struggle to find the correct command to calculate the sum for one column after another, as the current output naturally only gives the result for ind_1. What needs to be changed in the aggregate command to achieve the desired result? (I'm a total beginner, but I thought of using indices to proceed from one column to the next.)
Assuming you'd want to calculate the sum where the ind column matches an entry in your individuals vector:
individuals <- c("ind_1", "ind_2", "ind_3", "ind_4")
for (i in 1:(ncol(my_df) - 1)){
  print(sum(my_df$value[which(my_df[, individuals[i]] == individuals[i])]))
}
Why do you want to use print() instead of storing the results in a separate vector?
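For instance, a sketch that stores the sums in a named vector instead of printing them (using the individuals vector from above):
sums <- sapply(individuals, function(i) sum(my_df$value[which(my_df[[i]] == i)]))
sums
# ind_1 ind_2 ind_3 ind_4
#  1225  1250  1275  1300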
You can try tidyverse as well:
my_df %>%
gather(key, Inds, -value) %>%
filter(!is.na(Inds)) %>%
group_by(key) %>%
summarise(Sum=sum(value))
# A tibble: 4 x 2
key Sum
<chr> <int>
1 ind_1 1225
2 ind_2 1250
3 ind_3 1275
4 ind_4 1300
The idea is to make the data long using gather, filter the NAs out, then group by the key column and summarise the values.
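Since gather() is superseded in current tidyr, the same pipeline could also be written with pivot_longer (a sketch, assuming dplyr and tidyr are loaded):
my_df %>%
  pivot_longer(-value, names_to = "key", values_to = "Inds") %>%
  filter(!is.na(Inds)) %>%
  group_by(key) %>%
  summarise(Sum = sum(value))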
A solution closer to base R, using reshape2 for the reshaping, would be:
library(reshape2)
my_df_long <- melt(my_df, id.vars = "value", value.name = "ID")
aggregate(value ~ ID, my_df_long, sum, na.rm = TRUE)
ID value
1 ind_1 1225
2 ind_2 1250
3 ind_3 1275
4 ind_4 1300