Given two simple sets of data:
head(training_set)
x y
1 1 2.167512
2 2 4.684017
3 3 3.702477
4 4 9.417312
5 5 9.424831
6 6 13.090983
head(test_set)
x y
1 1 2.068663
2 2 4.162103
3 3 5.080583
4 4 8.366680
5 5 8.344651
I want to fit a linear regression to the training data, then use that fitted line (or its coefficients) to compute the "test MSE", i.e. the mean squared error of the residuals when the line is applied to the test data.
model = lm(y~x,data=training_set)
train_MSE = mean(model$residuals^2)
test_MSE = ?
In this case, it is more precise to call it MSPE (mean squared prediction error):
mean((test_set$y - predict.lm(model, test_set)) ^ 2)
This is the more useful measure, since prediction is usually the goal of modelling: we want the model with minimal MSPE.
In practice, if we do have a spare test data set, we can directly compute MSPE as above. However, very often we don't have spare data. In statistics, the leave-one-out cross-validation is an estimate of MSPE from the training dataset.
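For a linear model, the leave-one-out estimate even has a closed form via the hat (leverage) values, so no refitting loop is needed. A minimal sketch (loocv_mse is just an illustrative helper name, applied to the model fit above):
# LOO-CV estimate of MSPE for an lm fit: the i-th leave-one-out
# residual equals residuals(fit)[i] / (1 - h_i), with h_i the leverage.
loocv_mse <- function(fit) {
  mean((residuals(fit) / (1 - hatvalues(fit)))^2)
}
loocv_mse(model)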
There are also several other statistics for assessing prediction error, such as Mallows's Cp statistic and AIC.
I am trying to do topic extraction with STM (structural topic modeling), and I have a question about how to determine the optimal number of topics.
kResult <- searchK(out$documents, out$vocab, K=c(7,8,9,10), prevalence=~rating+s(day), data=meta)
kResult$results
plot(kResult)
The searchK function outputs both numerical values and graphs, but I don't know how to determine the optimal number of topics from them.
> kResult$results
K exclus semcoh heldout residual bound lbound em.its
1 7 8.937433 -52.95924 -7.80857 9.328384 -23391733 -23391725 17
2 8 9.090138 -58.20191 -7.793394 8.950438 -23337625 -23337614 20
3 9 9.168978 -61.09091 -7.781923 8.710382 -23296459 -23296447 25
4 10 9.256421 -61.51863 -7.764806 8.504863 -23247891 -23247876 55
[plot(kResult) output]
I read the paper, but I couldn't understand what the following values represent:
exclus: Exclusivity of each model.
semcoh: Semantic coherence of each model.
heldout: Held-out likelihood for each model.
residual: Residual for each model.
bound: Bound for each model.
lbound: lbound for each model.
em.its: Total number of EM iterations used in fitting the model.
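(For comparing these across K, exclusivity is often plotted against semantic coherence. A minimal sketch of that comparison; in some stm versions the result columns are list-valued, hence the unlist:)
# Plot exclusivity against semantic coherence for each candidate K;
# models toward the upper-right frontier are usually preferred.
res <- as.data.frame(lapply(kResult$results, unlist))
plot(res$semcoh, res$exclus, type = "n",
     xlab = "Semantic coherence", ylab = "Exclusivity")
text(res$semcoh, res$exclus, labels = res$K)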
Also, I don't know what each of the graphs below represents.
[plot(kResult) output]
I would like to run a fixed-effects model using OLS with weighted data.
To avoid confusion: I use "fixed effects" here in the sense economists usually imply, i.e. a "within" model with individual-specific effects. What I actually have is "multilevel" data, i.e. observations of individuals, and I would like to control for their region of origin (and have correspondingly clustered standard errors).
Sample data:
library(multilevel)
data(bhr2000)
weight <- runif(length(bhr2000$GRP),min=1,max=10)
bhr2000 <- data.frame(bhr2000,weight)
head(bhr2000)
GRP AF06 AF07 AP12 AP17 AP33 AP34 AS14 AS15 AS16 AS17 AS28 HRS RELIG weight
1 1 2 2 2 4 3 3 3 3 5 5 3 12 2 6.647987
2 1 3 3 3 1 4 3 3 4 3 3 3 11 1 6.851675
3 1 4 4 4 4 3 4 4 4 2 3 4 12 3 8.202567
4 1 3 4 4 4 3 3 3 3 3 3 4 9 3 1.872407
5 1 3 4 4 4 4 4 3 4 2 4 4 9 3 4.526455
6 1 3 3 3 3 4 4 3 3 3 3 4 8 1 8.236978
The kind of model I would like to estimate is:
AF06_ij = beta_0 + beta_1 * AP34_ij + alpha_1 * (GRP == 1) + alpha_2 * (GRP == 2) + ... + e_ij
where i refers to specific individuals and j refers to the group they belong to.
Moreover, I would like observations to be weighted by weight (sampling weights).
However, I would like to get "clustered standard errors", to reflect possible GRP-specific heteroskedasticity. In other words, E(e_ij)=0 but Var(e_ij)=sigma_j^2 where the sigma_j can be different for each GRP j.
If I understand correctly, nlme and lme4 can only estimate random-effects models (so-called mixed models), not fixed-effects models in the "within" sense.
I tried the plm package, which looked ideal for what I want to do, but it does not allow for weights. Any other ideas?
This is arguably more of a Cross Validated question, but model weights aside: you shouldn't be using OLS for an ordered categorical response variable. This calls for an ordered logistic model, so below I use the data you provided to fit one.
To be clear, we have an ordered categorical response, AF06, and two predictors. The first, AP34, is also an ordered categorical variable; the second, GRP, is your fixed effect. In general you can create a group fixed effect by coercing the variable in question to a factor on the right-hand side. (I'm trying to stay away from statistical theory because this isn't the place for it, so I may be imprecise in places.)
The code below fits an ordered logistic model using polr (proportional odds logistic regression). I've tried to interpret what you were going for in terms of model specification, but at the end of the day OLS is not the right way forward. The coefplot call will have a very crowded y-axis; it is only a rudimentary start at interpretation, and you will want to visualize the fit in a more refined way. For working on the interpretation itself, the best resource I can think of is chapters 5 and 6 of "Data Analysis Using Regression and Multilevel/Hierarchical Models" by Gelman and Hill; it's such a good resource that I'd recommend reading the whole thing if you're interested in this type of analysis going forward.
library(multilevel) # To get the data
library(MASS) # To get the polr modeling function
library(arm) # To get the tools, insight and expertise of Andrew Gelman and his team
# The data
data(bhr2000)  # load the example data from the multilevel package
weight <- runif(length(bhr2000$GRP), min = 1, max = 10)
bhr2000 <- data.frame(bhr2000, weight)
head(bhr2000)
# The model
m <- polr(factor(AF06) ~ AP34 + factor(GRP),weights = weight, data = bhr2000, Hess=TRUE, method = "logistic")
summary(m)
coefplot(m,cex.var=.6) # from the arm package
Check out the lfe package: it does econometrics-style fixed effects, and you can specify clustering.
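A minimal sketch of that approach, reusing the bhr2000 data from the question (this treats AF06 as numeric, which the answer above argues against, but it shows the mechanics of weights plus clustering):
library(lfe)
# Formula parts: outcome ~ covariates | fixed effects | instruments | cluster
fit <- felm(AF06 ~ AP34 | GRP | 0 | GRP,
            data = bhr2000, weights = bhr2000$weight)
summary(fit)  # coefficients with standard errors clustered by GRP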
I have done a SIMPLS regression in R, but I am not sure how to interpret the results. This is my call:
yarn.simpls<-mvr(Pcubes~X1+X2+X3,data=dtj,validation="CV",method="simpls")
and these are the results from
summary(yarn.simpls)
X dimension: 33471 3
Y dimension: 33471 1
Fit method: simpls
Number of components considered: 3
VALIDATION: RMSEP
Cross-validated using 10 random segments.
(Intercept) 1 comps 2 comps 3 comps
CV 0.5729 0.4449 0.4263 0.4175
adjCV 0.5729 0.4449 0.4263 0.4175
TRAINING: % variance explained
1 comps 2 comps 3 comps
X 86.77 97.67 100
Pcubes 39.74 44.72 47
What I would like to know is: what are my coefficients? Are they the adjCV row under VALIDATION: RMSEP? And is the TRAINING: % variance explained something like the significance of the variables? I just want to make sure I interpret the results correctly.
The % variance explained describes how much of the variation in the x variables, and then in the response, each number of components (ncomp) captures, so it can be read as the relative ability of each ncomp to summarize the information in your data.
CV and adjCV are values of the root mean squared error of prediction (RMSEP), estimated by cross-validation (adjCV is a bias-corrected version); they tell you how well each ncomp model predicts the outcome. In your case the error keeps falling as components are added, with by far the largest improvement coming from the first component, and the 3-component model having the lowest cross-validated error.
If you want coefficients for the underlying variables, use coef(yarn.simpls). This will give you what the variable coefficients would be at each ncomp.
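For instance (ncomp = 2 is just an illustrative choice):
# Coefficients of the original x variables for the 2-component model:
coef(yarn.simpls, ncomp = 2)
# RMSEP() reprints the cross-validated prediction error shown above:
RMSEP(yarn.simpls)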
I am trying to run a fixed-effects regression in R. When I run the linear model without applying the fixed-effects factor, the model works just fine. But when I apply the factor, which is a numeric code for user ID, I get the following error:
Error in rep.int(c(1, numeric(n)), n - 1L) : cannot allocate vector of length 1055470143
I am not sure what the error means, but I fear it may be an issue with how the variable is coded in R.
I think this is more of a statistics problem than a programming problem, for two reasons:
First, I am not sure whether you are using cross-sectional or panel data. With cross-sectional data it doesn't make sense to control for 30,000 individuals (of course, they will add to the variation).
Second, if you are using panel data, there are good packages, such as plm, that do exactly this kind of computation.
An example:
set.seed(42)
DF <- data.frame(x=rnorm(1e5),id=factor(sample(seq_len(1e3),1e5,TRUE)))
DF$y <- 100*DF$x + 5 + rnorm(1e5,sd=0.01) + as.numeric(DF$id)^2
fit <- lm(y~x+id,data=DF)
This needs almost 2.5 GB of RAM for the R session (add the RAM needed by the OS and that is more than many PCs have available) and takes some time to finish, and the result is pretty useless.
Even if you don't run into RAM limitations, you can hit the vector-length limit (e.g., with even more factor levels), particularly on older versions of R.
What happens?
One of the first steps in lm is creating the design matrix using the function model.matrix. Here is a smaller example of what happens with factors:
model.matrix(b~a,data=data.frame(a=factor(1:5),b=2))
# (Intercept) a2 a3 a4 a5
# 1 1 0 0 0 0
# 2 1 1 0 0 0
# 3 1 0 1 0 0
# 4 1 0 0 1 0
# 5 1 0 0 0 1
# attr(,"assign")
# [1] 0 1 1 1 1
# attr(,"contrasts")
# attr(,"contrasts")$a
# [1] "contr.treatment"
See how n factor levels result in n-1 dummy variables? If you have many factor levels and many observations, this matrix gets huge.
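To put a rough, back-of-the-envelope number on "huge" for the example above:
# Dense design matrix: 1e5 rows x 1001 columns (intercept, x, 999 dummies)
# of 8-byte doubles, before any of the extra copies lm() makes:
1e5 * 1001 * 8 / 1024^3  # about 0.75 GB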
What should you do?
I'm quite sure you should use a mixed-effects model. Two important packages implement linear mixed-effects models in R: nlme and the newer lme4.
library(lme4)
fit.mixed <- lmer(y~x+(1|id),data=DF)
summary(fit.mixed)
Linear mixed model fit by REML
Formula: y ~ x + (1 | id)
Data: DF
AIC BIC logLik deviance REMLdev
1025277 1025315 -512634 1025282 1025269
Random effects:
Groups Name Variance Std.Dev.
id (Intercept) 8.9057e+08 29842.472
Residual 1.3875e+03 37.249
Number of obs: 100000, groups: id, 1000
Fixed effects:
Estimate Std. Error t value
(Intercept) 3.338e+05 9.437e+02 353.8
x 1.000e+02 1.180e-01 847.3
Correlation of Fixed Effects:
(Intr)
x 0.000
This needs very little RAM, computes quickly, and is a more appropriate model.
See how the random intercept accounts for most of the variance?
So you should study mixed-effects models. There are some nice publications, e.g. Baayen, Davidson & Bates (2008), explaining how to use lme4.
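And if the individual-specific effects were themselves of interest, they are still recoverable from the mixed model as conditional modes:
# Conditional modes (BLUPs) of the per-id random intercepts:
head(ranef(fit.mixed)$id)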