How does R calculate the p-values in logistic regression?

What type of p-values does R calculate in a binomial logistic regression, and where is this documented?
When I read the documentation for ?glm() I find no reference to how the p-values are calculated.

The p-values are calculated by the function summary.glm. See ?summary.glm for a (very brief) bit about how those are calculated.
For more information, look at the source code by typing
summary.glm
at the R command prompt. There you will find the lines of code where an object pvalue is created. Follow the code back to see how the components of the p-value calculation are (conditionally) calculated.
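For the binomial family the dispersion is taken as fixed at 1, so the printed p-values are two-sided normal tail probabilities of Wald z statistics. A minimal sketch (my own example on the built-in mtcars data, not from the help page) that should reproduce the printed column:
fit <- glm(am ~ mpg + wt, data = mtcars, family = binomial)
est <- coef(fit)
se  <- sqrt(diag(vcov(fit)))                  # standard errors from the fitted model
z   <- est / se                               # Wald z statistics
p   <- 2 * pnorm(-abs(z))                     # two-sided normal p-values
cbind(p, coef(summary(fit))[, "Pr(>|z|)"])    # the two columns should agree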

The authors of R wrote the help system with several principles in mind: compactness (don't write more than is needed; it's not a textbook), accuracy, and a curious and well-educated audience. It really was written for other statisticians. The "curious" part of that opening sentence was included to raise the question of why you did not also follow the various links on the ?glm page: to summary.glm, where you would have found one answer to your ambiguous question, or to anova.glm, where you would have found another possible answer. The help authors do expect that you will follow those links, read the whole page, and execute the examples. You will notice that even when you get to summary.glm there is no mention of "binary logistic regression", since they pretty much assume that you are well grounded in statistics and have a copy of McCullagh and Nelder handy, or if not, that you will go read the references.
The other principle: sometimes it is the code itself (given the open-source nature of R) that serves as the documentation. Technically, glm doesn't print anything and print.glm doesn't print p-values. It would be print.summary.glm or print.anova.glm that would be doing any printing. Part of learning R is learning that the results printed to the console have gone through an eval-print loop and that the output can be tailored with object-class-specific functions.
These assumptions are just part of what many people see as a "steep learning curve for R" (although I would have called it a shallow curve if plotted with time/effort on the x-axis).
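To see that dispatch in action (a small illustration of my own, not from the original answer):
fit <- glm(am ~ mpg, data = mtcars, family = binomial)
class(summary(fit))                      # "summary.glm"
getS3method("print", "summary.glm")      # the method that actually formats the coefficient table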

Related

Equivalent to fitcdiscr in R (regarding Coeffs.linear and Coeffs.Const)

I am currently translating some MATLAB scripts to R for Multivariate Data Analysis. Currently I am trying to generate the same data as the Coeffs.Linear and Coeffs.Const part of the fitdiscr function in MATLAB.
The code being used is:
fitcdiscr(data, groups, 'DiscrimType', 'linear');
The data consists of 3 groups.
Unfortunately the R function seems to do the LDA only for two LDs, while MATLAB seems to always compare all groups in all combinations. Does anybody have an idea how I could obtain that data?
I suspect you mean information on the implementation of various MATLAB functions, which would be doc <functionname> (doc fitcdiscr would yield this documentation page on fitcdiscr) to get the documentation, and edit <functionname> to get the implementation, if it is not obscured by The MathWorks. If those two do not give you enough information, I'm afraid you're out of luck, since not all TMW code is available non-obscured.
fitcdiscr is non-obscured, although very brief; it's just a wrapper for some other functions. Keep doing edit <functionname> and doc <functionname> and see how deep the rabbit hole takes you.
NB: there's no built-in function called fitdiscr, but the syntax you describe is that of fitcdiscr (note the c), so I used that in my examples. If the actual function being called is named fitdiscr, it's custom-made and you'll have to sift through its file with edit fitdiscr and hope for the best.
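If the aim is to reproduce those per-pair coefficients in R, one possibility is to build them from the pooled covariance, group means and priors. This is only a sketch, under the assumption that MATLAB's Coeffs(i,j).Linear and Coeffs(i,j).Const are the usual pairwise linear-discriminant boundary terms; check it against MATLAB's output before relying on it (iris stands in for your 3-group data):
data   <- iris[, 1:4]                    # stand-in for your data; 3 groups
groups <- iris$Species
means  <- by(data, groups, colMeans)     # group mean vectors
n      <- table(groups)
S <- matrix(0, ncol(data), ncol(data))   # pooled within-group covariance
for (g in levels(groups)) {
  x <- as.matrix(data[groups == g, ])
  S <- S + crossprod(scale(x, scale = FALSE))
}
S     <- S / (nrow(data) - nlevels(groups))
Sinv  <- solve(S)
prior <- n / sum(n)
for (i in 1:(nlevels(groups) - 1)) for (j in (i + 1):nlevels(groups)) {
  linear <- drop(Sinv %*% (means[[i]] - means[[j]]))          # assumed analogue of Coeffs(i,j).Linear
  const  <- log(prior[[i]] / prior[[j]]) -
            0.5 * sum((means[[i]] + means[[j]]) * linear)     # assumed analogue of Coeffs(i,j).Const
  cat(levels(groups)[i], "vs", levels(groups)[j], ":\n")
  print(linear); print(const)
}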

Is there any Python equivalent of R's biglm?

I have used biglm in R and found it very useful. Now I need the same type of functionality in Python. Any ideas? I have seen that patsy/statsmodels has an incremental mode, but I have not been able to find any samples to copy/adapt. Any pointers would be much appreciated.
From a related answer by Nathaniel Smith on the statsmodels mailing list:
My incremental LS code might be useful here, it's basically the same
problem:
https://github.com/njsmith/pyrerp/blob/master/pyrerp/incremental_ls.py#L330
The new X'X is the sum of the old X'Xs, then you have to re-do the
scaling and inversion to get the new vcov matrix for the estimates.
Should be doable so long as you know how many data points are in each
and the various sums-of-squares. (The code I linked has some extra
complexity because of handling a particular sort of heteroskedasticity
via FGLS, but it can pretty much be ignored.)
statsmodels doesn't have anything in this area yet.
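The accumulation described in that answer is language-agnostic; here is a minimal sketch of the same idea in R (straightforward to port to numpy): keep running X'X and X'y totals over chunks and solve once at the end.
set.seed(1)
X <- cbind(1, matrix(rnorm(1000 * 3), ncol = 3))       # intercept + 3 predictors
y <- drop(X %*% c(2, 1, -1, 0.5) + rnorm(1000))
XtX <- matrix(0, ncol(X), ncol(X))
Xty <- numeric(ncol(X))
for (idx in split(seq_len(nrow(X)), ceiling(seq_len(nrow(X)) / 250))) {   # process in 4 chunks
  XtX <- XtX + crossprod(X[idx, ])                     # running X'X
  Xty <- Xty + crossprod(X[idx, ], y[idx])             # running X'y
}
beta_chunked <- solve(XtX, Xty)
cbind(beta_chunked, coef(lm(y ~ X - 1)))               # should match the full-data fit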
There is an incremental OLS function in statsmodels; however, that was written as a helper function for CUSUM tests (in memory) and hasn't been used or checked for any other purpose:
http://statsmodels.sourceforge.net/devel/generated/statsmodels.stats.diagnostic.recursive_olsresiduals.html

Frailty estimates in coxph object

If one uses obj = coxph(... + frailty(id)), then the fitted object also contains (log) frailty estimates for each individual, which can be extracted with obj$frail.
Does anybody know how these estimates are obtained? Are they empirical Bayes estimates?
Thanks!
Theodor
The default distribution for frailty can be seen in the ?frailty page to be "gamma". If you look at the frailty function (which is not hidden) you see that it simply pastes the name of the distribution onto "frailty." and uses get() to retrieve the proper function. So look at frailty.gamma (also not hidden) to find the answers to your question. Looking back at the help page again, you can see that I should have been able to figure all that out without looking at the code, since it's right up at the top of the page. But there are many routes to knowledge with R. (They are ML, not "empirical Bayes", estimates.)
The help page suggests to me that the author (Therneau) expects you to consult Therneau and Grambsch for further details not obvious from reading the code. If you are doing serious work with survival models in R, that is a very useful book to have. It's very clear and helpful in understanding the underpinnings of the 'survival' package.
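For concreteness, a small sketch using the lung data that ships with 'survival' (the grouping variable and covariate are from that dataset, not from the question):
library(survival)
fit <- coxph(Surv(time, status) ~ age + frailty(inst, distribution = "gamma"), data = lung)
head(fit$frail)     # the per-group (log) frailty estimates the question asks about
frailty.gamma       # the un-hidden source where the gamma-specific pieces are defined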

R script - nls function

Can anyone give me a good explanation for what the parameter "algorithm" does in the nls function in R?
Also, how does the formula work? I know it uses a tilde, but I can't really find a down-to-earth explanation of it.
Also, how important are the start values? Do I need to try multiple start values, or is nls guaranteed to find the correct parameters regardless of the start values I use?
In brief:
nls() is going to vary the parameters to try to minimize the squared error between your model and your data. There are several good methods it can try to find the minimum. Reading the details about "method" in ?optim will provide some good info and references.
In general, for nonlinear models, your results can be sensitive to initial guess. You should try several different guesses to make sure that the outputs are close. If your results are very sensitive to your guess, you can try re-parameterizing, using a different algorithm, or rethinking your model.
As for the formula, I'd echo the previous answer: work through the examples at the bottom of ?nls and then try to ask a more specific question.
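A small self-contained illustration (simulated data and a model of my own choosing): the formula y ~ a * exp(-b * x) reads "model y as a*exp(-b*x)", and a and b are the parameters nls() adjusts, starting from the values given in start. If a starting guess is too far off, nls() may fail to converge, which is exactly the sensitivity described above.
set.seed(42)
x <- seq(0, 10, length.out = 50)
y <- 5 * exp(-0.7 * x) + rnorm(50, sd = 0.2)       # true a = 5, b = 0.7
fit1 <- nls(y ~ a * exp(-b * x), start = list(a = 1, b = 1))
fit2 <- nls(y ~ a * exp(-b * x), start = list(a = 10, b = 0.1))
rbind(coef(fit1), coef(fit2))                      # two different starts; estimates should agree closely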

Standard error of the ARIMA constant

I am trying to manually calculate the standard error of the constant in an ARIMA model, if it is included. I have referred to the Box and Jenkins (1994) text, specifically Section 7.2, but my understanding is that the methods mentioned there calculate the variance-covariance matrix for the ARIMA parameters only, not the constant. I tried searching on the Internet, but couldn't find any theory. Software like Minitab, R, etc. calculates this, so I was wondering how it is done. Can someone provide any pointer(s) on this topic?
Thanks.
arima() will fit a regression model with ARMA errors. The constant is treated as the coefficient of a regression variable consisting only of 1s. So you need the covariance matrix of the regression coefficients, which is usually calculated separately from the covariance matrix of the ARMA coefficients. Look at Section 8.3 of Hamilton's "Time Series Analysis".
One of the nicest things about R is that you can access a lot of the source code of R itself from within the environment. If you simply type arima at the command prompt, you get the high-level source code for the arima() function. I got several pages of code when I tried it.
You do miss out on anything implemented internally within the R executable in native code, but often the high-level code tells you everything you want to know.
Perhaps a shift of perspective can solve this problem.
Rather than seeing the constant as something special, just consider the problem without a constant and with a regression variable that is a vector of ones.
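That view is easy to check numerically in R (a sketch on a simulated series; note that R's arima() labels the constant "intercept", and for a stationary model it is the series mean):
set.seed(1)
y <- arima.sim(list(ar = 0.5), n = 200) + 10
fit1 <- arima(y, order = c(1, 0, 0))                           # constant via include.mean (the default)
fit2 <- arima(y, order = c(1, 0, 0), include.mean = FALSE,
              xreg = rep(1, length(y)))                        # same constant as an explicit column of 1s
sqrt(diag(fit1$var.coef))    # standard errors, including the constant's
sqrt(diag(fit2$var.coef))    # should essentially match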

Resources