I've been attempting to perform an ANOVA in R recently on the attached data frame.
My question revolves around the setting of contrasts.
My design is a 3x5 within-subjects design.
There are 3 visual conditions under the factor 'Circle1' and 5 audio conditions under 'Beep1'.
Does anyone have any idea how I should set the contrasts? This is something I'm unfamiliar with, as I'm making the transition from point-and-click statistics in SPSS to coded analyses in R.
Thanks for your time
Data file:
Reiterating my answer from another Stack Overflow question that was flagged as similar: since you didn't provide any code, you might start by having a look at the contrast package in R. As the authors note in the documentation:
"The purpose of the contrast package is to provide a standardized interface for testing linear combinations of parameters from common regression models. The syntax mimics the contrast.Design function from the Design library. The contrast class has been extended in this package to linear models produced using the functions lm, glm, gls, lme and geese."
There is also a nice little tutorial here by Dr. William King, which covers factorial between-subjects ANOVA and includes an abundance of R code. This is broader in scope than your question, but it would be a great place to start (just to get context).
Finally, here is another resource that you can refer to which talks about setting up orthogonal contrasts in R.
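To make this concrete, here is a minimal base-R sketch using simulated data and hypothetical column names (subject, Circle1, Beep1, rt) in place of the data frame from the question. It sets sum-to-zero contrasts on the two within-subjects factors and fits the ANOVA with aov():

```r
# simulated stand-in for the real data: 6 subjects crossed with
# 3 visual conditions (Circle1) and 5 audio conditions (Beep1)
set.seed(1)
dat <- expand.grid(subject = factor(1:6),
                   Circle1 = factor(1:3),
                   Beep1   = factor(1:5))
dat$rt <- rnorm(nrow(dat), mean = 500, sd = 50)  # hypothetical response

# sum-to-zero (deviation) contrasts, the usual choice for ANOVA
contrasts(dat$Circle1) <- contr.sum(3)
contrasts(dat$Beep1)   <- contr.sum(5)

# within-subjects ANOVA: the Error() term handles the repeated measures
fit <- aov(rt ~ Circle1 * Beep1 + Error(subject / (Circle1 * Beep1)),
           data = dat)
summary(fit)
```

With contr.sum, each factor's contrast columns sum to zero, which is what most textbook within-subjects decompositions assume; R's default, contr.treatment, instead compares each level against a baseline level.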
I have a dataset containing repeated measures and quite a lot of variables per observation, so I need a way to select explanatory variables intelligently. Regularized regression methods sound like a good way to address this problem.
While looking for a solution, I found out about the glmmLasso package quite recently. However, I'm having difficulty defining a model. I found a demo file online, but since I'm a beginner with R, I had a hard time understanding it.
(demo: https://rdrr.io/cran/glmmLasso/src/demo/glmmLasso-soccer.r)
Since I cannot share the original data, I suggest you use the soccer dataset (the same dataset used in the glmmLasso demo file). The variable team is repeated across observations and should be treated as a random effect.
# sample data
library(glmmLasso)
data("soccer")
I would appreciate it if you could explain the parameters lambda and family, and how to tune them.
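For reference, the kind of call I'm trying to understand looks like this (adapted from the package demo; the choice of predictors and the lambda value here are arbitrary placeholders, not recommendations):

```r
library(glmmLasso)
data("soccer")

# center and scale the metric predictors, as the demo does
soccer[, c(4, 5, 9:16)] <- scale(soccer[, c(4, 5, 9:16)],
                                 center = TRUE, scale = TRUE)

fit <- glmmLasso(points ~ transfer.spendings + ave.unfair.score +
                   ball.possession + tackles,
                 rnd = list(team = ~1),  # random intercept per team
                 data = soccer,
                 lambda = 100,           # L1 penalty: larger values shrink more fixed effects to zero
                 family = gaussian(link = "identity"))  # response distribution, as in glm()
summary(fit)
```

As I understand it, family plays the same role as in glm() (e.g. poisson() for counts), while lambda controls the strength of the lasso penalty; the demo tunes lambda by refitting over a decreasing grid of values and keeping the fit with the lowest BIC.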
Currently I'm working through Applied Linear Models, $5^{th}$ ed., by Kutner et al. A question I'm working on asks me to perform an F-test for lack of fit on my linear model. The model is a simple linear model with one predictor, nothing too troublesome.
To perform the test one has to assess the difference between the full model and the reduced model. At this juncture the authors state to take the full model as $\hat{\mu}_{j} = \bar{Y}_{j}$, i.e., the fitted value at each level of $X$ is the mean of the replicates observed at that level.
The reduced model would be the simple linear model $Y_{ij} = \beta_{0} + \beta_{1} X_{j} + \varepsilon_{ij}$.
I have no problem doing this manually within R, by computing the necessary values where needed, as I've done for other questions. But I'm trying to improve my R skill set, and this is where my problem lies.
I have done some reading of other answers related to this, and model comparison can be done directly with the anova() function. But I'm having trouble stating my full model correctly so that I can leverage anova(). I thought about computing a "vector of means" for the subgroups of data.
But then I'll most likely run into the problem of anova() not computing the degrees of freedom correctly. My data set is very small, and this seems like the sort of situation that would show up all the time; with a huge data set it wouldn't be feasible to compute things manually, so surely there has to be a way to phrase my full model so that the means of the subgroups of replicates are computed for me. But how do I do so? That is the question of the day.
For completeness and posterity, an answer was given on a sister site where I asked this question:
https://stats.stackexchange.com/questions/539958/how-to-specify-a-vector-full-of-means-with-degrees-of-freedom-for-a-lack-of-fit
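In short, the idiom (sketched here with made-up replicate data, not the textbook's) is to state the full model by wrapping the predictor in factor(), so that lm() estimates one mean per distinct $X$ level and anova() gets the degrees of freedom right:

```r
# made-up data with replicates at each X level (not the textbook data)
df <- data.frame(
  x = rep(c(1, 2, 3, 4), each = 3),
  y = c(2.1, 1.9, 2.3, 3.8, 4.1, 4.0, 6.2, 5.9, 6.1, 7.9, 8.2, 8.0)
)

fit_reduced <- lm(y ~ x,         data = df)  # simple linear regression
fit_full    <- lm(y ~ factor(x), data = df)  # one mean per X level: mu_j = Ybar_j

# lack-of-fit F test: numerator df = c - 2, denominator (pure error) df = n - c
anova(fit_reduced, fit_full)
```

With $c$ distinct $X$ levels and $n$ observations, the comparison has $c - 2$ numerator and $n - c$ denominator degrees of freedom, matching the textbook lack-of-fit F-test.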
The mods can delete the question if they deem that it doesn't contribute to the community.
I'm using the caret package in R to create prediction models for maximum energy demand. What I need to use is a neural network multilayer perceptron, but in caret I found there are 2 mlp methods, "mlp" and "mlpML". What is the difference between the two?
I have read the description in a book (Advanced R Statistical Programming and Data Models: Analysis, Machine Learning, and Visualization), but it still doesn't answer my question.
Caret has 238 different models available! However, many of them are just different ways of calling the same basic algorithm.
Besides mlp there are 9 other methods for calling a multi-layer perceptron, one of which is mlpML. The real difference is only in the tuning parameters of the function call; which method you need depends on your use case and what you want to adapt about the basic model.
Chances are, if you don't know what mlpML, mlpWeightDecay, etc. do, you are fine just using the basic mlp.
Looking at the official documentation we can see that:
mlp is tuned via a single parameter, size, while mlpML is tuned via layer1, layer2, and layer3. So with the first method you can only tune the size of the (single) hidden layer, while with the second you can tune each hidden layer individually.
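For illustration, the difference shows up in the tuning grid you pass to train(). This is a hedged sketch on toy data (both methods call the RSNNS package under the hood, so it must be installed); the grid values are arbitrary:

```r
library(caret)

# toy regression data, just to make the two calls concrete
set.seed(1)
d <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
d$y <- d$x1 + 2 * d$x2 + rnorm(100)

# "mlp": one hidden layer, tuned via a single parameter 'size'
fit_mlp <- train(y ~ ., data = d, method = "mlp",
                 tuneGrid = expand.grid(size = c(3, 5, 7)))

# "mlpML": up to three hidden layers, tuned per layer
# (each grid row is one candidate architecture)
fit_mlpML <- train(y ~ ., data = d, method = "mlpML",
                   tuneGrid = expand.grid(layer1 = c(3, 5),
                                          layer2 = c(2, 4),
                                          layer3 = 2))
```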
Looking at the source code here:
https://github.com/topepo/caret/blob/master/models/files/mlp.R
and here:
https://github.com/topepo/caret/blob/master/models/files/mlpML.R
It seems that the difference is that mlpML allows several hidden layers:
modelInfo <- list(label = "Multi-Layer Perceptron, with multiple layers",
while mlp has a single layer of hidden units.
The official documentation also hints at this difference. In my opinion, it is not particularly useful to have many different models that differ only very slightly, and the documentation does not explain those slight differences well.
I would like to fit an LSTM model using MXNet in R for the purpose of predicting a continuous response (i.e., regression) given several continuous predictors. However, the mx.lstm() function seems to be geared toward NLP, as it requires arguments that don't seem applicable to a regression problem (such as those related to embedding).
Is MXNET capable of this sort of modeling and, if not, what is an example of an appropriate tool (preferably in R)? Are there any tutorials relevant to the problem I've described?
LSTMs are used for working with temporal data: text, speech, time series. If you want to predict a continuous response, then I assume you want to do something similar to time series analysis.
If my assumption is correct, then please take a look here. It gives quite a good example of how to use MXNet with R for time series on the CPU. The GPU version is also available here.
I'm working on fitting a generalized linear model in R (using glm()) for some data that has two predictors in a full factorial. I'm confident that the Gamma family is the right error distribution to use, but I'm not sure which link function to use, so I'd like to test all possible link functions against one another. Of course, I can do this manually by making a separate model for each link function and then comparing deviances, but I imagine there is an R function that will do this and compile the results. I have searched CRAN, SO, Cross Validated, and the web; the closest function I found was clm2, but I do not believe I want a cumulative link model, based on my understanding of what CLMs are.
My current model looks like this:
CO2_med_glm_alf_gamma <- glm(flux_median_mod_CO2~PercentH2OGrav+
I(PercentH2OGrav^2)+Min_Dist+
I(Min_Dist^2)+PercentH2OGrav*Min_Dist,
data = NC_alf_DF,
family=Gamma(link="inverse"))
How do I code this model into an R function that will do such a 'goodness-of-link' test?
(As far as the statistical validity of such a test goes, this discussion, as well as a discussion with a stats post-doc, leads me to believe that it is valid to compare AIC or deviances between generalized linear models that are identical except for having different link functions.)
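For concreteness, the manual comparison I'm trying to avoid writing out by hand looks like this (shown on simulated Gamma data with made-up variable names, not my real model):

```r
# simulated gamma-distributed data standing in for the real data
set.seed(42)
sim <- data.frame(x = runif(200, 1, 10))
sim$y <- rgamma(200, shape = 10, rate = 10 / (2 + 0.5 * sim$x))

# fit the same model under each link the Gamma family accepts,
# then compare the fits by AIC
links <- c("inverse", "log", "identity")
fits <- lapply(links, function(l)
  glm(y ~ x, family = Gamma(link = l), data = sim))
setNames(sapply(fits, AIC), links)
```

The lowest AIC would then suggest the preferred link, per the reasoning above.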
This is not "all possible links"; it tests against a specified class of links. But there is a goodness-of-link test by Pregibon that is implemented in the LDdiag package. It's not on CRAN, but you can install it from the archives via
devtools::install_version("LDdiag","0.1")
The example given (not that exciting) is
library(LDdiag)
library(MASS)   # for the quine data
quine$Days <- ifelse(quine$Days == 0, 1, quine$Days)
ex <- glm(Days ~ ., family = Gamma(link = "log"), data = quine)
pregibon(ex)
The pregibon family of link functions is implemented in the glmx package. As pointed out by Achim Zeileis in the comments, the package provides various parametric link functions and supports general estimation and inference based on such parametric links (or, more generally, parametric families). To see a worked example of how this can be employed for a variety of goodness-of-link assessments, see example("WECO", package = "glmx"). This replicates the analyses from two papers by Koenker and Yoon (see below).
This example might be useful too.
Koenker R (2006). “Parametric Links for Binary Response.” R News, 6(4), 32--34; link to page with supplementary materials.
Koenker R, Yoon J (2009). “Parametric Links for Binary Choice Models: A Fisherian-Bayesian Colloquy.” Journal of Econometrics, 152, 120--130; PDF.
I have learned that the dredge() function (MuMIn package) can be used to perform goodness-of-link comparisons on glms, lms, etc. More generally it is a model selection function, but it allows for a good deal of customization. In this case, you can use the varying argument to compare models fit with different link functions. See the Beetle example that the package authors work through for details.