Aggregate function in R to create time series data

I am working on a dataset of 35 variables. I have derived dummy variables to classify patients into different age-group categories. Now I want to aggregate the total number of cases, and the number of cases in each age group, by the date and location variables. Below is the code I have tried; however, I am not getting the sum of the case counts in each age group. For example, if there are 10 cases in total, those ten cases should be divided among the age groups, but NAs are appearing instead. In some rows a count of 1 or 2 cases appears in a few age groups, which does not add up to the total cases.
df_sa2 <- aggregate(cbind(cases=df_sa1$cases, agecat1=df_sa1$agecat1, agecat2=df_sa1$agecat2,
                          agecat3=df_sa1$agecat3, agecat4=df_sa1$agecat4, agecat5=df_sa1$agecat5),
                    by = list(Date=df_sa1$date, location=df_sa1$location), FUN = sum)
I have checked the data types; they are all numeric.
Please suggest what is wrong with the code. Thank you.

Consider the formula style of aggregate, which can read better, and use the data argument to avoid the numerous df_sa1$ qualifiers.
With formula style, numeric columns are placed to the left of ~ and the categorical grouping variables to the right. Doing so also renders cbind and list unnecessary.
fml <- cases ~ date + location + agecat1 + agecat2 + agecat3 + agecat4 + agecat5
df_sa2 <- aggregate(fml, data=df_sa1, FUN=sum)
# TO ACCOUNT FOR POTENTIAL MISSING VALUES IN df_sa1$cases
df_sa2 <- aggregate(fml, data=df_sa1, FUN=function(x) sum(x, na.rm=TRUE), na.action=na.pass)
If you need individual age category groupings, adjust formula accordingly:
fml <- cases ~ date + location + agecat1
fml <- cases ~ date + location + agecat2
...
fml <- cases ~ date + location + agecat5
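If, instead, the goal is to sum the age-category columns alongside cases (as the original cbind call attempts), cbind can also sit on the left-hand side of the formula. A sketch reusing the na.rm handling from above, assuming the column names from the question:
fml2 <- cbind(cases, agecat1, agecat2, agecat3, agecat4, agecat5) ~ date + location
df_sa2 <- aggregate(fml2, data=df_sa1, FUN=function(x) sum(x, na.rm=TRUE), na.action=na.pass)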

Related

Why doesn't the HTML table give me p-values, and why does it add -9 values? (table1)

I'm trying to make a table with several nominal values and a few logical values against a single nominal value using the "table1" package. I also want each column to include a p-value.
This code:
table1(~ age + obese + low_income + married + HSless + hosp_visits_2years +
         ER_2years + nights_hosp_2years | has_PCP,
       data=Oak, droplevels=F, render=rndr, render.strat=rndr.strat, overall=F)
Gives me this warning:
Warning message:
In table1.formula(~age + obese + low_income + married + HSless + :
Terms to the right of '|' in formula 'x' define table columns and are
expected to be factors with meaningful labels.
The output also gives -9 as a value for the has_PCP nominal values, which I don't want either.
Here is where I got the code from (column of p values).
https://cran.r-project.org/web/packages/table1/vignettes/table1-examples.html
Adding p-values using the table1 function from the package of the same name relies on a hack, and based on the code in the vignette you linked to, it doesn't seem to work here.
For an alternative, try the tableStack function from the epiDisplay package.
library(epiDisplay) # for the function
library(MatchIt) # for the lalonde data
data(lalonde)
labs <- c("No","Yes")
lalonde <- within(lalonde, {
  treat <- factor(treat, labels=c("Control", "Treatment"))
  black <- factor(black, labels=labs)
  married <- factor(married, labels=labs)
  nodegree <- factor(nodegree, labels=labs)
})
attr(lalonde, "var.labels") <- c("Treatment","Age (yrs)","Education","Black","Hispanic","Married",
"No high school diploma", "1974 Income", "1975 Income", "1978 Income")
That's just preparing the variable and value labels.
The following command creates the table for a subset of the variables. You can view the output on the screen,
Table1 <- tableStack(vars=c(age, black, married, nodegree, re78), by=treat,
                     dataFrame=lalonde, iqr=re78, name.test=FALSE)
Table1
or send it to a text file.
write.csv(Table1, file="Table1.csv")
You can open this in Excel and then copy it into Word, with some minor formatting required.
The tests are chosen automatically based on the class of the variables specified in vars. Specifying variables in iqr forces the function to show medians with IQRs for those variables and to perform a rank-sum test. All other numeric variables specified in vars will display means with SDs, and a t-test (with pooled variances) or an ANOVA will be performed depending on the number of groups. A chi-squared or Fisher's exact test will be performed for factors. Unfortunately, the function fails if any variable has a zero cell count, which is true for many of your variables (obese, low_income, married, ...), so it is not ideal for data exploration.

Fama-MacBeth regression in R with pmg

In the past few days I have been trying to work out how to run Fama-MacBeth regressions in R. The usual advice is to use the plm package with pmg, but every attempt returns an error that I have an insufficient number of time periods.
My dataset consists of 2,828,419 observations with 13 variable columns, on which I want to run multiple cross-sectional regressions.
My firms are identified by seriesid, I have a date variable, and I want to run the following Fama-MacBeth regressions:
totret ~ size
totret ~ momentum
totret ~ reversal
totret ~ volatility
totret ~ value + size
totret ~ value + size + momentum
totret ~ value + size + momentum + reversal + volatility
I have been using this command:
fpmg <- pmg(totret ~ momentum, Data, index = c("date", "seriesid"))
Which returns:
Error in pmg(totret ~ mom, Dataset, index = c("seriesid", "datem")) :
  Insufficient number of time periods
I tried it with my dataset as a data.table, a data.frame and a pdata.frame. Switching the index does not help either.
My data contains NAs as well.
Can anyone see how to fix this, or suggest a different way for me to run Fama-MacBeth regressions?
This is almost certainly due to having NAs in the variables in your formula. The error message is not very helpful: it is probably not a case of "too few time periods to estimate" but very likely a case of "there are firm/unit IDs that are not represented across all time periods" because rows with missing data get dropped.
You have two options: impute the missing data, or drop observations with missing data (the latter is a quick way to test that the model runs without missing points before deciding on an approach that is valid for estimation).
If the missingness in your data is truly random, you might be okay just dropping observations with missingness. Otherwise you should probably impute. A common strategy is to impute multiple times (at least 5), estimate the model on each of the resulting data sets, and average the effects together. Amelia and mice are very strong imputation packages. I like Amelia because with one call you can impute n times to get that many resulting data sets, and it is easy to pass in a set of variables not to impute (e.g., the ID variable or time period) with the idvars parameter.
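A minimal sketch of that Amelia workflow (the column names here are placeholders for your own):
library(Amelia)
# impute m = 5 complete data sets; the idvars columns are carried through untouched
a.out <- amelia(Data, m = 5, idvars = c("seriesid", "date"))
# then run pmg() on each element of a.out$imputations and average the estimates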
EDIT: I dug into the source code to see where the error is triggered, and here is the issue: again, it is likely caused by missing data, but it interacts with your degrees of freedom:
...
# part of the code where error is triggered below, here is context:
# X = matrix of the RHS of your model including intercept, so X[,1] is all 1s
# k = number of coefficients used determined by length(coef(plm.model))
# ind = vector of ID values
# so t here is the minimum value from a count of occurrences for each unique ID
t <- min(tapply(X[,1], ind, length))
# then if the minimum number of times a single ID appears across time is
# less than the number of coefficients + 1, you do not have enough time
# points (for that ID/those IDs) to estimate.
if (t < (k + 1))
stop("Insufficient number of time periods")
That is what is triggering your error. So imputation is definitely a solution, but there might be just a single offender in your data; importantly, once this condition is satisfied, your model will run just fine with missing data.
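A quick way to spot the offenders before estimating, mirroring the check above (a sketch assuming a data frame Data with id seriesid and the model totret ~ momentum):
# count, per firm, the rows that survive NA removal for the model variables
ok <- complete.cases(Data[, c("totret", "momentum")])
counts <- table(Data$seriesid[ok])
k <- 2  # number of coefficients: intercept + momentum
names(counts)[counts < k + 1]  # the IDs that trigger the error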
I recently got the Fama-MacBeth regression working in R myself.
Starting from a data table with all of the characteristics in the rows, the following works and lets you either weight the regression or weight equally (remove ", weights = marketcap" for equal weighting). totret is a total-return variable and logmarket is the logarithm of market capitalization.
logmarket <- df %>%
  group_by(date) %>%
  summarise(constant = summary(lm(totret ~ logmarket, weights = marketcap))$coefficients[1],
            rsquared = summary(lm(totret ~ logmarket, weights = marketcap))$r.squared,
            beta = summary(lm(totret ~ logmarket, weights = marketcap))$coefficients[2])
You obtain a data frame with the monthly alphas (constant), betas (beta) and R-squared values (rsquared).
To retrieve the coefficients with t-statistics in a data frame:
library(lmtest)  # for coeftest()
Summarystatistics <- as.data.frame(matrix(data=NA, nrow=6, ncol=1))
names(Summarystatistics) <- "logmarket"
row.names(Summarystatistics) <- c("constant", "t-stat", "beta", "tstat", "R^2", "observations")
Summarystatistics[1,1] <- mean(logmarket$constant)
Summarystatistics[2,1] <- coeftest(lm(logmarket$constant ~ 1))[1,3]
Summarystatistics[3,1] <- mean(logmarket$beta)
Summarystatistics[4,1] <- coeftest(lm(logmarket$beta ~ 1))[1,3]
Summarystatistics[5,1] <- mean(logmarket$rsquared)
Summarystatistics[6,1] <- nrow(subset(df, !is.na(logmarket)))
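The t-statistics above treat the monthly estimates as independent draws; a common refinement (not part of the original recipe) is Newey-West standard errors from the sandwich package, a sketch:
library(sandwich)
fit <- lm(logmarket$constant ~ 1)
coeftest(fit, vcov. = NeweyWest(fit, lag = 12))[1, 3]  # Newey-West t-stat on the mean alpha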
Some "seriesid" values have only one observation, and that is why pmg gives the error. If you do something like this (with the variable names you use), it will stop the error:
try2 <- try2 %>%
  group_by(cusip) %>%
  mutate(flag = (if (length(cusip) == 1) {1} else {0})) %>%
  ungroup() %>%
  filter(flag == 0)

Calculate the average of a subsample with cast in R (reshape2)

I am attempting to calculate the average of a subsample with the function acast. As the subsample, I want to use data within a percentile range, for which I use quantile inside the subset argument. The problem seems to be that the quantiles are calculated before the data are arranged into groups, so the same cutoff values are used everywhere. See the example below:
library(reshape2)
library(plyr)
data(airquality)
aqm <- melt(airquality, id=c("Month", "Day"), na.rm=TRUE)
## here I calculate the length for each group for the whole sample
acast(aqm, variable + Month ~ . , length, value.var = "value")
## here I calculate the length for the range within the quantiles 0.05 - 0.5
acast(aqm, variable + Month ~ . , length, value.var = "value",
      subset = .(value >= quantile(value, 0.05) & value <= quantile(value, 0.5)))
With the subset I should get half of the observations for each group, but instead I get far less than half in some cases and far more in others. It seems that the quantiles are calculated on the whole melted data, so the function applies the same quantile cutoffs to all groups.
Does anyone have an idea how to get the quantiles calculated for each group? Any help would be appreciated. I know this would be possible with a loop over the categories, but I want to see if there is a way to do it all at once.
Thanks,
Sergio René
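One way to have the quantiles computed per group is to summarise group by group instead of going through acast's subset argument, for example with ddply from the plyr package already loaded above; a sketch:
# count, within each group, the values between that group's own 5% and 50% quantiles
ddply(aqm, .(variable, Month), summarise,
      n = sum(value >= quantile(value, 0.05) & value <= quantile(value, 0.5)))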

Linear regression on subsets with dependent variable per column using dlply() in R

I would like to automatically produce linear regressions for a data frame, for each category separately.
My data frame includes one column with time categories, one column (slope$Abs) as the dependent variable, and several columns which should be used as independent variables.
head(slope)
timepoint Abs In1 In2 In3 Out1 Out2 Out3 ...
1: t0 275.0 2.169214 2.169214 2.169214 2.069684 2.069684 2.069684
2: t0 275.5 2.163937 2.163937 2.163937 2.063853 2.063853 2.063853
3: t0 276.0 2.153298 2.158632 2.153298 2.052088 2.052088 2.057988
4: ...
All in all, for each timepoint I have 40 variables, and I want to end up with a linear regression for each combination, such as In1 ~ Abs[t0], In1 ~ Abs[t1], and so on for each column.
Of course I can do this manually, but I guess there must be a more elegant way to do the work.
I did my research and found out that dlply() might be the function I'm looking for. However, my attempt results in an error.
So I somehow tried to combine the answers from previous questions I have found:
On individual variables per column and on subsets per category
I came up with a function like this:
lm.fun <- function(x) {summary(lm(x ~ slope$Abs, data=slope))}
lm.list <- dlply(.data=slope, .variables=slope$timepoint, .fun=lm.fun )
But I get the following error:
Error in eval.quoted(.variables, data) :
envir must be either NULL, a list, or an environment.
Hope someone can help me out.
Thanks a lot in advance!
Based on my research, the dplyr package in R does not do well at accepting formulas of the form y ~ x in its functions, so the alternative is to calculate the pieces manually. First note that slope = cor(x,y)*sd(y)/sd(x) (reference: http://faculty.cas.usf.edu/mbrannick/regression/regbas.html) and that intercept = mean(y) - slope*mean(x). Simple linear regression requires that we use the centroid as our point of reference when finding the intercept, because it is an unbiased estimator; using a single data point would only give you the intercept through that point, not the overall intercept.
For this explanation I will use the mtcars data set. I only want a subset of the data, so I use the variables c('mpg', 'cyl', 'disp', 'hp', 'drat', 'wt', 'qsec') to mimic your dataset. In my example the grouping variable is 'cyl', the equivalent of your 'timepoint' variable, and 'mpg' is the y-variable, equivalent to 'Abs' in your data.
Based on the formulas for slope and intercept above, we clearly need three tables: a table of correlations of your y with each x for every group, a table of standard deviations for each variable and group, and a table of means for each variable and group.
To get the correlation table, group by 'cyl' and calculate the correlation coefficients:
library(dplyr)  # for group_by() and do()
df <- mtcars[c('mpg', 'cyl', 'disp', 'hp', 'drat', 'wt', 'qsec')]
corrs <- data.frame(df %>% group_by(cyl) %>% do(head(data.frame(cor(.[, c(1, 3:7)])), n = 1)))
Because of the way my dataset is structured, the second variable (df[ ,2]) is 'cyl'. For your data you should use
do(head(data.frame(cor(.[, c(2:40)])), n = 1))
since your first column is the grouping variable and it is not numeric; essentially, you want to go across all numeric variables. Not using head would produce a full correlation matrix, but since you are interested in finding each slope independently of the other x-variables, you only need the row where the correlation coefficient with your y-variable equals 1 (r_yy = 1).
To get the standard deviations and means for each group and each variable, use
sds <- data.frame(df %>% group_by(cyl) %>% summarise_each(funs(sd)))
means <- data.frame(df %>% group_by(cyl) %>% summarise_each(funs(mean)))
Your group names will be the first column, so make sure to rename your rows for each dataset corrs, sds, and means and delete column 1.
rownames(corrs) <- rownames(means) <- rownames(sds) <- corrs[ ,1]
corrs <- corrs[ ,-1]; sds <- sds[ ,-1]; means <- means[ ,-1]
Now we need to calculate sd(y)/sd(x). The best way I have done this, and seen it done, is with an apply-family function.
sdst <- data.frame(t(apply(sds, 1, function(X) X[1]/X)))
I use X[1] because the first variable in sds is my y-variable. After you have deleted timepoint, your first variable is Abs, which is your y-variable, so use that.
Now the rest is pretty straightforward. Since everything is saved as a data frame, to find the slopes all you need to do is
slopes <- sdst*corrs
inter <- slopes*means
intercept <- data.frame(t(apply(inter, 1, function(x) x[1]-x)))
Again, since our y-variable is in the first column, we use x[1]. To check that all is well, the slopes for your y-variable should be 1 and the intercepts should be 0.
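With the mtcars example above, that check looks like this (a quick sanity check, not part of the original recipe):
slopes[, "mpg"]     # y-on-y slopes: expect 1 for every cyl group
intercept[, "mpg"]  # y-on-y intercepts: expect 0 for every cyl group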
I have solved the issue with a simpler approach, so I wanted to update the answer.
To make life easier I converted the data frame structure so that all site columns are stacked into rows, using the melt() function of the reshape package.
library(reshape)  # for melt() and colsplit()
library(plyr)     # for dlply() and ldply()
slope <- melt(slope, id = c("Abs", "timepoint"), variable_name = "Sites")
The output's column name is by default "value".
Then create one column that adds both predictors with paste().
slope$FullTreat <- paste(slope$Sites,slope$timepoint, sep="_")
Run a function through the dataset to create separate models for each treatment combination.
models <- dlply(slope, ~ FullTreat, function(df) {
  lm(value ~ Abs, data = df)
})
To extract the coefficients, simply run
coefs <- ldply(models, coef)
Then split the FullTreat column into separate columns again with colsplit(), also from reshape, and bind the intercept and slope columns onto the new data frame:
coefs <- cbind(colsplit(coefs$FullTreat, split="_", c("Sites","Timepoint")),
               coefs[,2:3])
I haven't worked on a function that plots all the regressions from the models, but I guess this is feasible with the ldply() function.

How to express membership in multiple categories in R?

How does one express a linear model where observations can belong to multiple categories and the number of categories is large?
For example, using time dummies as the categories, here is a problem that is easy to set up since the number of categories (time periods) is small and known:
tmp <- "day 1, day 2
0,1
1,0
1,1"
periods <- read.csv(text = tmp)
y <- rnorm(3)
print(lm(y ~ day.1 + day.2 + 0, data=periods))
Now suppose that instead of two days there were 100. Would I need to create a formula like the following?
y ~ day.1 + day.2 + ... + day.100 + 0
Presumably such a formula would have to be created programmatically. This seems inelegant and un-R-like.
What is the right R way to tackle this? For example, aside from the formula problem, is there a better way to create the dummies than creating a matrix of 1s and 0s (as I did above)? For the sake of concreteness, say that the actual data consists (for each observation) of a start and end date (so that tmp would contain a 1 in each column between start and end).
Update:
Based on the answer of @jlhoward, here is a larger example:
num.observations <- 1000
# manually create 100 columns of dummies, which data.frame() names X1, ..., X100
periods <- data.frame(1*matrix(runif(num.observations*100) > 0.5, nrow = num.observations))
y <- rnorm(num.observations)
print(summary(lm(y ~ ., data = periods)))
It illustrates the manual creation of a data frame of dummies (1s and 0s). I would be interested in learning whether there is a more R-like way of dealing with this "multiple dummies per observation" issue.
You can use the . notation to include all variables other than the response in a formula, and -1 to remove the intercept. Also, put everything in your data frame; don't make y a separate vector.
set.seed(1) # for reproducibility
df <- data.frame(y=rnorm(3),read.csv(text=tmp))
fit.1 <- lm(y ~ day.1 + day.2 + 0, df)
fit.2 <- lm(y ~ -1 + ., df)
identical(coef(fit.1),coef(fit.2))
# [1] TRUE
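For the start/end-date setup mentioned in the question, the dummy matrix can also be built in one vectorised step rather than column by column. A sketch, assuming hypothetical integer period columns start and end:
set.seed(2)
obs <- data.frame(start = sample(1:90, 10))
obs$end <- pmin(obs$start + sample(5:20, 10), 100)
# dummies[i, j] is 1 exactly when period j lies in observation i's [start, end]
dummies <- outer(seq_len(nrow(obs)), 1:100,
                 function(i, j) as.integer(obs$start[i] <= j & j <= obs$end[i]))
colnames(dummies) <- paste0("day.", 1:100)
From there, lm(y ~ -1 + ., data = data.frame(y = rnorm(10), dummies)) works just as in fit.2 above.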
