I’ve got the following variables:
Response: number of quota units leased (in and out) by fishers.
Explanatory: number of quota units owned by fishers.
I fitted a GLM (Poisson), but I'm not totally sure it's right, given that the explanatory variable is a count as well. I've found examples of Poisson regression with categorical and continuous explanatory variables, but not with count explanatory variables.
So:
Am I right to use Poisson regression with my data? If not, what alternatives do I have?
The residual variances of my model are not homogeneous. I understand that Poisson regression can handle this problem, but should I still pay attention to this issue and address it (using weights, for example)?
Any help would be much appreciated.
The problem seems like it could be modeled well with Poisson regression. The residual variance should NOT be "homogeneous": the Poisson model assumes that the variance is equal to the mean. You have options if that assumption is violated. The quasi-Poisson and negative binomial models can also be used, and they relax the constraint on the dispersion parameter.
If the number of quota units owned by fishers sets an upper bound on the number used, then I would not use it as an explanatory variable; it might better be entered as offset = log(quota_units). That changes the interpretation of the estimates: they become estimates of the log(usage_rate).
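A minimal sketch of both specifications, assuming a hypothetical data frame fish with columns leased (units leased) and owned (units owned); the names are illustrative, not from the question.

# Hypothetical data frame `fish` with columns `leased` and `owned`.
# Quota units owned entered as an ordinary explanatory variable:
m1 <- glm(leased ~ owned, family = poisson, data = fish)

# Quota units owned entered as an exposure via an offset;
# coefficients are then interpreted on the log(usage rate) scale:
m2 <- glm(leased ~ 1, family = poisson, offset = log(owned), data = fish)

# Quick check for overdispersion (residual deviance vs. residual df):
summary(m2)$deviance / summary(m2)$df.residual

# If overdispersion is substantial, quasi-Poisson (or MASS::glm.nb for
# negative binomial) is a common alternative:
m3 <- glm(leased ~ 1, family = quasipoisson, offset = log(owned), data = fish)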
A simple question here:
I'd like to know if there is a function in R that can fit a logit/probit regression model using the maximum likelihood method.
Currently, I'm using the OLS method via the glm function (I hope it does use OLS)... I read somewhere that a probit/logit model fitted with OLS may have an incidental parameter problem, so I'd like to try the MLE method.
Thank you for your help in advance!
@Maju116's comment is correct. glm() doesn't use ordinary least squares; it uses iteratively reweighted least squares (IRLS). As the linked Wikipedia article says:
IRLS is used to find the maximum likelihood estimates of a generalized linear model
The default link for the binomial family is logit, so either glm(...,family=binomial) or glm(...,family=binomial(link="logit")) will fit logistic (logit) regression. glm(...,family=binomial(link="probit")) will fit probit regression.
If you are currently using glm(...) without an explicit family argument, then you are assuming Gaussian errors, which means you will get the same answers as ordinary least squares (lm()); those are the maximum likelihood estimates when the errors are Gaussian (normally distributed). For clarity and efficiency, it's generally best to use lm() rather than glm() with the default family when you want to do OLS.
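A minimal sketch of the calls discussed above, using simulated data (the data frame dat, response y, and predictor x are invented for illustration).

# Simulated data: binary response y, continuous predictor x.
set.seed(1)
dat <- data.frame(x = rnorm(100))
dat$y <- rbinom(100, 1, plogis(0.5 + dat$x))

# Logistic (logit) regression, fitted by IRLS / maximum likelihood:
fit_logit <- glm(y ~ x, family = binomial(link = "logit"), data = dat)

# Probit regression:
fit_probit <- glm(y ~ x, family = binomial(link = "probit"), data = dat)

# glm() with the default (gaussian) family reproduces OLS, i.e. lm():
coef(glm(y ~ x, data = dat))
coef(lm(y ~ x, data = dat))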
I'm not sure if this is a question for Stack Overflow or Cross Validated.
I'm looking for a way to include covariate measures when calculating the correlation between two measures. For example, let's say I have 100 samples, for which I have two measurements, x and y. Now let's say I also have a third measure, a covariate (say, age). I want to measure the correlation between x and y, but I also want to ignore any of that correlation that comes from the covariate, age.
If I'm fitting a linear model, I could simply add the term to the model:
lm(y~x+age)
I know you can't calculate a correlation from this kind of model formula (using ~) in R.
So I want to know:
Does what I'm asking even make sense to do? I suspect it may not.
If it does, what R packages should I be using?
It sounds like you're asking for a semipartial correlation: the correlation between x and y after partialling the covariate (age) out of one of them. You need to read about partial and semipartial correlations.
The ppcor package in R will then help you with the calculations.
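A minimal sketch with the ppcor package, assuming x, y, and age are numeric vectors of the same length (variable names taken from the question).

# install.packages("ppcor")  # if not already installed
library(ppcor)

# Partial correlation: age is removed from both x and y.
pcor.test(x, y, age)

# Semi-partial (part) correlation: age is removed from only one of the
# two variables; see ?spcor.test for which one, and swap the first two
# arguments if you want it removed from the other.
spcor.test(x, y, age)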
I have two multiple linear regression models built on the same group of subjects and the same variables; the only difference is the time point: one uses baseline data and the other uses data obtained some time later.
I want to test whether there is any statistically significant difference between the two models. I have seen articles saying that AIC may be a better option than p-values when comparing models.
My question is: does it make sense to simply compare the AIC values using extractAIC in R, or should I use anova() on the fitted lm objects?
It is not standard to test for statistical significance between observations recorded at two points in time by estimating two different models.
You may mean that you are testing whether the observations recorded at the second time point are statistically different from those at the first, by including some dummy variables and testing the coefficients on them. That is still estimating a single model.
In that model you will have dummy variables for the second time point: either just an intercept shift, or an intercept shift plus an interaction, e.g. y = b0 + b1*x + gamma1*D + gamma2*(D*x) + e, where D = 1 for observations from the second time point.
Then you should do both: test the p-value significance of one or both gammas in that model, and also look at the AIC. There is no definitive 'better', as the articles you read likely explained.
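A minimal sketch of that single-model approach, assuming the baseline and follow-up observations are stacked into one hypothetical data frame dat with a 0/1 indicator time2 for the second time point.

# Hypothetical stacked data: response y, predictor x, indicator time2
# (0 = baseline, 1 = follow-up).
fit <- lm(y ~ x + time2 + time2:x, data = dat)
summary(fit)      # p-values for the time-2 intercept shift and interaction

# Compare against the model without any time-2 terms:
fit0 <- lm(y ~ x, data = dat)
anova(fit0, fit)  # F-test for the added terms
AIC(fit0, fit)    # lower AIC indicates the preferred model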
I was trying to figure out how to do post-hoc tests for Two Way ANOVAs and found the following 2 approaches:
Do pairwise t-tests (Bonferroni corrected) if one finds significance with the ANOVA.
Link- http://rtutorialseries.blogspot.com/2011/01/r-tutorial-series-two-way-anova-with.html
Do TukeyHSD on an aov model
Link- http://www.r-bloggers.com/post-hoc-pairwise-comparisons-of-two-way-anova/
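For reference, a minimal sketch of the two approaches, assuming a hypothetical data frame dat with a numeric response y and factors Treatment and Age.

# Approach 1: pairwise t-tests with Bonferroni correction, one factor at a time:
pairwise.t.test(dat$y, dat$Treatment, p.adjust.method = "bonferroni")
pairwise.t.test(dat$y, dat$Age, p.adjust.method = "bonferroni")

# Approach 2: Tukey HSD on the fitted two-way aov model:
fit <- aov(y ~ Treatment * Age, data = dat)
TukeyHSD(fit)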
Running the data set given in the first example in SPSS gives significant pairwise differences for both Treatment and Age (Treatment and Age were the independent variables), while following the directions in the first link gave me a significant pairwise difference only for Age, not for Treatment.
I have a few questions:
Is the first method completely incorrect as hinted in the second link?
What is the right way to do Bonferroni corrected post hoc tests for Two Way ANOVA in R?
Does anyone know how post hoc tests for SPSS work in the case of Two Way ANOVAs (Univariate analysis)? Especially for Bonferroni corrected tests.
I am new to R, so please let me know if I made a mistake in framing the question; I will try to elucidate as much as I personally can. Thanks for your help.
I am running simple linear models (Y ~ X) in R where my predictor is a categorical variable (0-10). However, this variable is not normally distributed, and none of the available transformation techniques are helpful (e.g. log, sq, etc.) because the data is not negatively/positively skewed but rather all over the place. I am aware that for lm the outcome variable (Y) has to be normally distributed, but is this also required for predictors? If yes, any suggestions on how to do this would be more than welcome.
Also, as the data I am looking at has two groups, patients vs controls (I am interested in group differences, as you can guess), do I have to look at whether the data is normally distributed within the two groups or overall across the two groups?
Thanks.
See @Roman Luštrik's comment above: it does not matter how your predictors are distributed (except for problems with multicollinearity). What is important is that the residuals be normal (and have homogeneous variances).
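A minimal sketch of checking those residual assumptions, assuming a hypothetical fit of the form lm(Y ~ X).

# Hypothetical fit; replace with your own model and data.
fit <- lm(Y ~ X, data = dat)

# Diagnostic plots, including residuals vs. fitted (homogeneity of
# variance) and a normal Q-Q plot of the residuals:
par(mfrow = c(2, 2))
plot(fit)

# A formal (often over-sensitive) test of residual normality:
shapiro.test(residuals(fit))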