What does this short R script do?

I know Python and C++ but have very little experience with R. I'm supposed to figure out what my old coworker's script does; he hasn't been here for several years, but I have his files. He has about 10 Python files that pass data into a temp file and then into the next Python script, which I'm able to track, but he also has one R script that I don't understand because I don't know R.
The input to the R script is temp4.txt:
1.414442 0.0043
1.526109 0.0042
1.600553 0.0046
1.637775 0.0045
...etc
Where column 1 is the x-axis of a growth curve (time units) and column 2 is growth level (units OD600, which is a measure of cell density).
The R script is only 4 lines:
inp1 <- scan('/temp4.txt', list(0,0))
decay <- data.frame(t = inp1[[1]], amp = inp1[[2]])
form <- nls(amp ~ const*(exp(fact*t)), data=decay, start = list(const = 0.01, fact = 0.5))
summary(form)
The R script's output:
Parameters:
Estimate Std. Error t value Pr(>|t|)
const 2.293e-03 9.658e-05 23.74 <2e-16 ***
fact 7.106e-01 8.757e-03 81.14 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.002776 on 104 degrees of freedom
Correlation of Parameter Estimates:
const
fact -0.9905
The "fact" number is what he pulls out in the next Python script as the value to carry forward in the analysis. It's usually positive, e.g. "6.649e-01 6.784e-01 6.936e-01 6.578e-01 6.949e-01 6.546e-01 0.6623768 0.6710339 6.952e-01 6.711e-01 6.721e-01 6.520e-01", but because the temp file gets overwritten each time, I only have this one version, which happens to show the negative value -0.9905; he throws away negative values in the next Python script.
I need to know exactly what he's doing so I can recreate it... I know <- assigns data to an object, so it's the nls() function that's confusing me...
Thanks to anyone who can explain the R for me.

The first line reads the data into R.
The second line restructures the data into a data frame (a table structure commonly used in R); it will be passed as the data to nls in line 3.
This looks like older code; most modern coders would replace lines 1 and 2 with a single call to read.table.
Line 3 fits a non-linear least squares model to the data read in previously, and line 4 prints the summary of the fit, including the estimates of the parameters for the next Python script to read.
The non-linear model that is being fit is an exponential growth curve and the fact parameter is a measure of the rate of growth.
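As a rough sketch of the read.table replacement mentioned above (assuming temp4.txt is whitespace-separated with no header; the column names are chosen to match the rest of the script):
decay <- read.table('/temp4.txt', col.names = c('t', 'amp'))  # replaces lines 1 and 2
form <- nls(amp ~ const * exp(fact * t), data = decay,
            start = list(const = 0.01, fact = 0.5))           # line 3 unchanged
summary(form)                                                 # line 4 unchanged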

GSee's link in the comments is a good description of the NLS function, but in case you're not used to R documentation, here's a quick rundown of nls.
The nls function is a nonlinear least-squares modelling function. It is similar to linear regression, except that at least one of the parameters enters the model non-linearly (inside a sine, a cosine, an exponential, an x^2 term, etc.). A non-linear model can sometimes be transformed into a linear one (e.g. by taking logarithms), but that's not always the case.
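For example, the exponential model in the question can be turned into a linear one by taking logs; here is a rough sketch using the decay data frame from the question's script (this is also a handy way to get starting values for nls):
# log(amp) = log(const) + fact * t, so a linear fit on the log scale
# gives rough estimates of both parameters
lin <- lm(log(amp) ~ t, data = decay)
list(const = exp(coef(lin)[[1]]), fact = coef(lin)[[2]])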
The first argument is the model formula being fitted: amp ~ const*(exp(fact*t)) means that we want to model amp as the dependent variable, with const*e^(fact*t) as our independent (non-linear) term.
The next option just tells us which data object to use (data = decay).
The start argument gives the starting values from which the model fit is built, in this case const = 0.01 and fact = 0.5.
So the first command reads in the data to the object inp1.
The second creates an object that has class data.frame (which is what most analysis in R is done on). This is basically a table with two columns. In this case the columns are given names (t and amp).
The third command creates an object that has class nls. This object basically contains the information that the nls command generates.
The fourth command prints out a summary of the nls-class object - basically all the pertinent details of the analysis.
The output reads as follows:
First, the estimates and standard errors for the two parameters, const and fact, in the nonlinear model. The t value and Pr(>|t|) columns give a statistical test of whether each parameter is significantly different from 0.
Signif. codes is a legend for the stars shown to the right of the parameter estimates, indicating which range the p-value falls in.
Residual standard error is an indicator of how much of the variation in the data is not explained by the model.
This last one I'm not sure about, as I haven't used NLS in a while, but I think it's right.
Correlation of estimates shows how strongly correlated the parameters are. In this case, a -.9905 value is a very strong negative correlation - as fact goes up, const goes down and it's very predictable.
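If the goal is just to recover the number that the next Python script consumes (the fact estimate), a minimal sketch working on the fitted object from the question's script:
coef(summary(form))               # the parameter table that summary() prints, as a matrix
fact_hat <- coef(form)[["fact"]]  # just the 'fact' estimate
cat(fact_hat, "\n")               # e.g. print it for the next script to pick up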

Related

Redirect console output to dataframes in R

What function or functions can you use to redirect console output to a data frame in R? As an example, the following code associated with the mgcv package produces a set of diagnostics used to assist in model selection of GAMs:
gam.check(gamout, type=c("deviance"))
It produces the following output:
Method: GCV Optimizer: magic
Smoothing parameter selection converged after 7 iterations.
The RMS GCV score gradient at convergence was 1.988039e-07 .
The Hessian was positive definite.
Model rank = 10 / 11
Basis dimension (k) checking results. Low p-value (k-index<1) may
indicate that k is too low, especially if edf is close to k'.
k' edf k-index p-value
s(year) 9.00 3.42 1.18 0.79
I'm interested in redirecting this output to a data frame that I can process into a table I can output and actually use, rather than read off the console. I don't need specifics, just functions I might be able to use to start solving the problem. Once I have the function, I can work my way through the specifics.
sink() apparently outputs to a txt file, which... I suppose I could use this function and then re-import the output, but that seems like a pretty stupid solution.
The functions I would start with are class(gam.check(gamout, type="deviance")) and names(gam.check(gamout, type="deviance")). These should help you figure out what the data structure is and then how to extract elements of it.
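If it turns out that gam.check() only prints this table rather than returning it, another rough option (sketched here under that assumption; gamout is your existing fitted gam, and the parsing will depend on the exact output layout of your mgcv version) is to capture the console text and read it back in:
library(mgcv)
out <- capture.output(gam.check(gamout, type = "deviance"))  # printed lines as a character vector
start <- grep("k'", out, fixed = TRUE)[1]                    # locate the table header
tbl <- read.table(text = out[(start + 1):length(out)],
                  col.names = c("term", "k.prime", "edf", "k.index", "p.value"))
tbl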

What does the summary function do to the output of regsubsets?

Let me preface this by saying that I do think this question is a coding question, not a statistics question. It would almost surely be closed over at Stats.SE.
The leaps package in R has a useful function for model selection called regsubsets which, for any given size of a model, finds the variables that produce the minimum residual sum of squares. Now I am reading the book Linear Models with R, 2nd Ed., by Julian Faraway. On pages 154-5, he has an example of using the AIC for model selection. The complete code to reproduce the example runs like this:
data(state)
statedata = data.frame(state.x77, row.names=state.abb)
require(leaps)
b = regsubsets(Life.Exp~.,data=statedata)
rs = summary(b)
rs$which
AIC = 50*log(rs$rss/50) + (2:8)*2
plot(AIC ~ I(1:7), ylab="AIC", xlab="Number of Predictors")
The rs$which command produces the output of the regsubsets function and allows you to select the model once you've plotted the AIC and found the number of parameters that minimizes it. But here's the problem: while the typed-up example works fine, I run into the wrong number of elements in the array when I try to adapt this code to other data. For example:
require(faraway)
data(odor, package='faraway')
b = regsubsets(odor ~ temp + gas + pack +
                 I(temp^2) + I(gas^2) + I(pack^2) +
                 I(temp*gas) + I(temp*pack) + I(gas*pack), data = odor)
rs=summary(b)
rs$which
AIC=50*log(rs$rss/50) + (2:10)*2
produces a warning message:
Warning message:
In 50 * log(rs$rss/50) + (2:10) * 2 :
longer object length is not a multiple of shorter object length
Sure enough, length(rs$rss)=8, but length(2:10)=9. Now what I need to do is model selection, which means I really ought to have an RSS value for each model size. But if I choose b$rss in the AIC formula, it doesn't work with the original example!
So here's my question: what is summary() doing to the output of the regsubsets() function? The number of RSS values is not only not the same, but the values themselves are not the same.
Ok, so you know the help page for regsubsets says
regsubsets returns an object of class "regsubsets" containing no
user-serviceable parts. It is designed to be processed by
summary.regsubsets.
You're about to find out why.
The code in regsubsets calls Alan Miller's Fortran 77 code for subset selection. That is, I didn't write it and it's in Fortran 77. I do understand the algorithm. In 1996 when I wrote leaps (and again in 2017 when I made a significant modification) I spent enough time reading the code to understand what the variables were doing, but regsubsets mostly followed the structure of the Fortran driver program that came with the code.
The rss field of the regsubsets object has that name because it stores a variable called RSS in the Fortran code. This variable is not the residual sum of squares of the best model. RSS is computed in the setup phase, before any subset selection is done, by the subroutine SSLEAPS, which is commented 'Calculates partial residual sums of squares from an orthogonal reduction from AS75.1.' That is, RSS describes the RSS of the models with no selection fitted from left to right in the design matrix: the model with just the leftmost variable, then the leftmost two variables, and so on. There's no reason anyone would need to know this if they're not planning to read the Fortran, so it's not documented.
The code in summary.regsubsets extracts the residual sum of squares in the output from the $ress component of the object, which comes from the RESS variable in the Fortran code. This is an array whose [i,j] element is the residual sum of squares of the j-th best model of size i.
All the model criteria are computed from $ress in the same loop of summary.regsubsets, which can be edited down to this:
for (i in ll$first:min(ll$last, ll$nvmax)) {
    for (j in 1:nshow) {
        vr <- ll$ress[i, j] / ll$nullrss
        rssvec   <- c(rssvec, ll$ress[i, j])
        rsqvec   <- c(rsqvec, 1 - vr)
        adjr2vec <- c(adjr2vec, 1 - vr * n1 / (n1 + ll$intercept - i))
        cpvec    <- c(cpvec, ll$ress[i, j] / sigma2 - (n1 + ll$intercept - 2 * i))
        bicvec   <- c(bicvec, (n1 + ll$intercept) * log(vr) + i * log(n1 + ll$intercept))
    }
}
cpvec gives you the same information as AIC, but if you want AIC it would be straightforward to do the same loop and compute it.
regsubsets has an nvmax argument to control the "maximum size of subsets to examine". By default this is 8. If you increase it to 9 or higher, your code works.
Please note, though, that the 50 in your AIC formula is the sample size (i.e. 50 states in statedata). For your second example this should be nrow(odor), i.e. 15.
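A short sketch putting both fixes together for the odor example (nvmax raised to 9, sample size taken from the data):
library(leaps)
data(odor, package = "faraway")
b <- regsubsets(odor ~ temp + gas + pack +
                  I(temp^2) + I(gas^2) + I(pack^2) +
                  I(temp*gas) + I(temp*pack) + I(gas*pack),
                data = odor, nvmax = 9)
rs <- summary(b)
n <- nrow(odor)                           # 15, not 50
AIC <- n * log(rs$rss / n) + (2:10) * 2   # one value per model size 1 to 9
plot(AIC ~ I(1:9), xlab = "Number of Predictors", ylab = "AIC")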

Non Linear Regression Error (Single Gradient Matrix)

I've seen a few of these previously for very simple functions; however, the function I'm trying to fit is basically a mixture of three functions:
a Gaussian (which dominates at x = 0),
an exponential (which takes over after the Gaussian),
and a constant which rounds out the values.
From the other examples of this error that I have read, it seems that the issue is caused by poor initial guesses, but I have no idea how to correct this, or whether this is even the actual issue given the size of my function.
Here is my code and one sample of the data I'm looking at:
Value<-c(163301.080,269704.110,334570.550,409536.530,433021.260,418962.060,349554.460,253987.570,124461.710,140750.480,52612.790,54286.427,26150.025,14631.210,15780.244,8053.618,4402.581,2251.137,2743.511,1707.508,1246.894)
Height<-c(400,300,200,0,-200,-400,-600,-800,-1000,-1000,-1200,-1220,-1300,-1400,-1400,-1500,-1600,-1700,-1700,-1800,-1900)
Framed<-data.frame(Value,Height)
i<-nls(Value~a*exp(-Height^2/(2*b^2))+ c*exp(-d*abs(Height)) + e,
data=Framed,start = list(a=410000,b=5,c=10000,d=5,e=1200))
plot(Value~Height)
summary(i)
Thanks for your help. Now I have the same problem again. I've used your technique below (R noob here; I was using the Manipulate plot in Mathematica previously) and I think I've got a relatively good fit for the data. Here is a graph of the data I'm also attempting to fit (sorry, I can't upload it, not enough reputation):
http://imgur.com/GtzIzSr
However, I am getting the same issue. Is this to do with my fit, or with the massive amount of variability at low distances?
You're right about this usually being about bad starting values, and that's (part of) your case. Looking at your data and your guesses, it's clear that something is wrong. But before going into that, note that Framed was not created in the correct order. It should be X, Y, i.e.:
Framed <- data.frame(Height, Value)
With that in mind, try the following:
Vals2 <- 410000*exp(-Height^2/(2*5^2)) - 10000*exp(-5*abs(Height)) + 1200
plot(Framed)
lines(Height, Vals2)
You should get a plot of the data with the simulated curve overlaid (plot not reproduced here). This shows how bad your guesses are. Playing around with your function, it can easily be seen that b is far off. Change it to 500, and then:
That's much better, but it still won't fit. And if you change the other parameters (c, d, and e), you'll notice they don't seem to affect the curve much, or at all. That's probably because a is much bigger and you have Height^2 in the first term. If you simplify your function and run:
i<-nls(Value~a*exp(-Height^2/(2*b^2)), start = list(a=410000,b=500))
You'll find a fit. This is probably because non-linear functions get harder to fit as the number of parameters increases, especially if there is covariance between them. Fewer parameters are much easier to fit. You'll have to decide, however, whether you can work with only a and b.
But if you plot that, it still doesn't look good. It's clear that your Value does not have its maximum at Height = 0, as it should according to your description and the simulated curve. There seems to be an error in your data, because if you try Height <- Height + 200 along with the above changes, you'll get:
> summary(i)
Formula: Value ~ a * exp(-Height^2/(2 * b^2))
Parameters:
Estimate Std. Error t value Pr(>|t|)
a 449820.71 10236.43 43.94 <2e-16 ***
b 496.60 12.54 39.59 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 17790 on 19 degrees of freedom
Number of iterations to convergence: 4
Achieved convergence tolerance: 2.164e-06
Now it's up to you to check whether your data is indeed shifted and whether you can simplify the function.
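A rough sketch of the refit described above, reusing the Value and Height vectors from the question (the +200 shift is only an assumption and should be checked against the real measurements):
Height2 <- Height + 200                                   # shift so the peak sits near 0
plot(Height2, Value)
lines(Height2, 410000 * exp(-Height2^2 / (2 * 500^2)))    # starting-value curve with b = 500
i <- nls(Value ~ a * exp(-Height2^2 / (2 * b^2)),
         start = list(a = 410000, b = 500))
summary(i)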

R: Regression with a holdout of certain variables

I'm fitting a multiple linear regression model using lm(); Y is the response variable (e.g. return of interest) and the others are explanatory variables (100+ cases, 30+ variables).
There are certain variables that are considered key variables (concerning investment). When I run lm(), R returns a model with an adjusted R-squared of 97%, but some of the key variables are not significant predictors.
Is there a way to do a regression that keeps all of the key variables in the model (as significant predictors)? It doesn't matter if the adjusted R-squared decreases.
If regression can't do this, is there another methodology?
Thank you!
==========================
the data set is uploaded
https://www.dropbox.com/s/gh61obgn2jr043y/df.csv
==========================
Additional questions:
What if some variables have an impact that carries over from a previous period into the current period?
Example: someone takes a pill in the morning with breakfast, and the effect of the pill might last past lunch (when he/she takes the 2nd pill).
I suppose I need to take data transformation into consideration.
* My first choice is to add a carry-over rate: obs.2_trans = obs.2 + c-o rate * obs.1
* Maybe I also need to consider the decay of the pill effect itself, so an S-curve or an exponential transformation is also necessary.
Take the variable main1, for example: I can use a trial-and-error method to get an ideal carry-over rate and S-curve parameter, starting from 0.5 and testing in steps of 0.05, up to 1 or down to 0, until I get the best model score - say, the lowest AIC or the highest R-squared.
This is already a huge number of combinations to test.
If I need to test more than 3 variables at the same time, how could I manage that in R?
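A rough sketch of one way such a search could be organised in R (the data frame and column names df, Y, main1, main2, main3 are placeholders standing in for the real data, and the carry-over transformation is the simple lagged version described above):
rates <- seq(0, 1, by = 0.05)                         # candidate carry-over rates
grid  <- expand.grid(r1 = rates, r2 = rates, r3 = rates)
carry_over <- function(x, rate) x + rate * c(0, head(x, -1))   # obs.2 + rate * obs.1
grid$AIC <- apply(grid, 1, function(r) {
  d <- transform(df,
                 main1 = carry_over(main1, r["r1"]),
                 main2 = carry_over(main2, r["r2"]),
                 main3 = carry_over(main3, r["r3"]))
  AIC(lm(Y ~ main1 + main2 + main3, data = d))       # score each candidate transformation
})
grid[which.min(grid$AIC), ]                           # best combination by AIC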
Thank you!
First, a note on "significance". For each variable included in a model, the linear modeling packages report the likelihood that the coefficient of this variable is different from zero (actually, they report p = 1 - L). We say that if L is larger (smaller p), then the coefficient is "more significant". So, while it is quite reasonable to talk about one variable being "more significant" than another, there is no absolute standard for asserting "significant" vs. "not significant". In most scientific research, the cutoff is L > 0.95 (p < 0.05), but this is completely arbitrary, and there are many exceptions. Recall that CERN was unwilling to assert the existence of the Higgs boson until they had collected enough data to demonstrate its effect at 6 sigma, which corresponds roughly to p < 1 × 10^-9. At the other extreme, many social science studies assert significance at p < 0.2 (because of the higher inherent variability and usually small number of samples). So excluding a variable from a model because it is "not significant" really has no meaning. On the other hand, you would be hard pressed to include a variable with high p while excluding another variable with lower p.
Second, if your variables are highly correlated (which they are in your case), then it is quite common that removing one variable from a model changes all the p-values greatly. A retained variable that had a high p-value (less significant), might suddenly have low p-value (more significant), just because you removed a completely different variable from the model. Consequently, trying to optimize a fit manually is usually a bad idea.
Fortunately, there are many algorithms that do this for you. One popular approach starts with a model that has all the variables. At each step, the least significant variable is removed and the resulting model is compared to the model at the previous step. If removing this variable significantly degrades the model, based on some metric, the process stops. A commonly used metric is the Akaike information criterion (AIC), and in R we can optimize a model based on the AIC criterion using stepAIC(...) in the MASS package.
Third, the validity of regression models depends on certain assumptions, especially these two: the error variance is constant (does not depend on y), and the distribution of error is approximately normal. If these assumptions are not met, the p-values are completely meaningless!! Once we have fitted a model we can check these assumptions using a residual plot and a Q-Q plot. It is essential that you do this for any candidate model!
Finally, the presence of outliers frequently distorts the model significantly (almost by definition!). This problem is amplified if your variables are highly correlated. So in your case it is very important to look for outliers, and see what happens when you remove them.
The code below rolls this all up.
library(MASS)
url <- "https://dl.dropboxusercontent.com/s/gh61obgn2jr043y/df.csv?dl=1&token_hash=AAGy0mFtfBEnXwRctgPHsLIaqk5temyrVx_Kd97cjZjf8w&expiry=1399567161"
df <- read.csv(url)
initial.fit <- lm(Y~.,df[,2:ncol(df)]) # fit with all variables (excluding PeriodID)
final.fit <- stepAIC(initial.fit) # best fit based on AIC
par(mfrow=c(2,2))
plot(initial.fit) # diagnostic plots for base model
plot(final.fit) # same for best model
summary(final.fit)
# ...
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 11.38360 18.25028 0.624 0.53452
# Main1 911.38514 125.97018 7.235 2.24e-10 ***
# Main3 0.04424 0.02858 1.548 0.12547
# Main5 4.99797 1.94408 2.571 0.01195 *
# Main6 0.24500 0.10882 2.251 0.02703 *
# Sec1 150.21703 34.02206 4.415 3.05e-05 ***
# Third2 -0.11775 0.01700 -6.926 8.92e-10 ***
# Third3 -0.04718 0.01670 -2.826 0.00593 **
# ... (many other variables included)
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# Residual standard error: 22.76 on 82 degrees of freedom
# Multiple R-squared: 0.9824, Adjusted R-squared: 0.9779
# F-statistic: 218 on 21 and 82 DF, p-value: < 2.2e-16
par(mfrow=c(2,2))
plot(initial.fit)
title("Base Model",outer=T,line=-2)
plot(final.fit)
title("Best Model (AIC)",outer=T,line=-2)
So you can see from this that the "best model", based on the AIC metric, does in fact include Main 1, 3, 5, and 6, but not Main 2 and 4. The residuals plot shows no dependence on y (which is good), and the Q-Q plot demonstrates approximate normality of the residuals (also good). On the other hand, the Leverage plot shows a couple of points (rows 33 and 85) with exceptionally high leverage, and the Q-Q plot shows these same points and row 47 as having residuals not really consistent with a normal distribution. So we can re-run the fits excluding these rows as follows.
initial.fit <- lm(Y~.,df[c(-33,-47,-85),2:ncol(df)])
final.fit <- stepAIC(initial.fit,trace=0)
summary(final.fit)
# ...
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 27.11832 20.28556 1.337 0.185320
# Main1 1028.99836 125.25579 8.215 4.65e-12 ***
# Main2 2.04805 1.11804 1.832 0.070949 .
# Main3 0.03849 0.02615 1.472 0.145165
# Main4 -1.87427 0.94597 -1.981 0.051222 .
# Main5 3.54803 1.99372 1.780 0.079192 .
# Main6 0.20462 0.10360 1.975 0.051938 .
# Sec1 129.62384 35.11290 3.692 0.000420 ***
# Third2 -0.11289 0.01716 -6.579 5.66e-09 ***
# Third3 -0.02909 0.01623 -1.793 0.077060 .
# ... (many other variables included)
So excluding these rows results in a fit that has all the "Main" variables with p < 0.2, and all except Main 3 at p < 0.1 (90%). I'd want to look at these three rows and see if there is a legitimate reason to exclude them.
Finally, just because you have a model that fits your existing data well, does not mean that it will perform well as a predictive model. In particular, if you are trying to make predictions outside of the "model space" (equivalent to extrapolation), then your predictive power is likely to be poor.
Significance is determined by the relationships in your data... not by "I want them to be significant".
If the data says they are insignificant, then they are insignificant.
You are going to have a hard time getting any significance with 30 variables and only 100 observations. With only 100+ observations, you should only be using a few variables. With 30 variables, you'd need thousands of observations to get any significance.
Maybe start with the variables you think should be significant and see what happens.

How to get bootstrapped p-values and bootstrapped t-values and how does the function boot() work?

I would like to get the bootstrapped t-value and the bootstrapped p-value of an lm fit.
I have the following code (basically copied from a paper) which works.
# First of all you need the following packages
install.packages("car")
install.packages("MASS")
install.packages("boot")
library("car")
library("MASS")
library("boot")
boot.function <- function(data, indices){
  data <- data[indices,]
  mod <- lm(prestige ~ income + education, data=data) # the linear model
  # the first element of the following vector contains the t-value
  # and the second element is the p-value
  c(summary(mod)[["coefficients"]][2,3], summary(mod)[["coefficients"]][2,4])
}
Now, I compute the bootstrapping model, which gives me the following:
duncan.boot <- boot(Duncan, boot.function, 1999)
duncan.boot
ORDINARY NONPARAMETRIC BOOTSTRAP
Call:
boot(data = Duncan, statistic = boot.function, R = 1999)
Bootstrap Statistics :
original bias std. error
t1* 5.003310e+00 0.288746545 1.71684664
t2* 1.053184e-05 0.002701685 0.01642399
I have two questions:
My understanding is that the bootstrapped value is the original plus the bias, which means that both bootstrapped values (the bootstrapped t-value as well as the bootstrapped p-value) are greater than the original values. That in turn should not be possible, because if the t-value rises (which means more significance), the p-value MUST be lower, right? Therefore I think that I have not yet really understood the output of the boot function (here: duncan.boot). How do I compute the bootstrapped values?
I do not understand how boot() works. If you look at duncan.boot <- boot(Duncan, boot.function, 1999), you see that I have not passed any arguments to the function "boot.function". I suppose that R sets data <- Duncan. But since I have not passed anything for the argument "indices", I do not understand how the following line in "boot.function" works: data <- data[indices,]
I hope the questions make sense!??
The boot function is "expecting" to get a function that has two arguments: the first being a data.frame and the second being an "indices" vector (possibly with duplicate entries and probably not using all the indices) to use in selecting rows. boot() samples row indices with replacement from the original data frame (R times, each with a different "choice set" of duplicates and omissions), passes each index vector to the indices argument of boot.function, and then collects the results of the R function applications.
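As a minimal sketch of a single replicate of that process (assuming Duncan and boot.function as defined in the question):
set.seed(1)
idx <- sample(nrow(Duncan), replace = TRUE)   # one resampled "choice set" of row indices
boot.function(Duncan, idx)                    # the t-value and p-value for that one resample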
Regarding what is reported by the print method for boot objects, take a look at this (done after examining the returned object with str()):
> duncan.boot$t0
[1] 5.003310e+00 1.053184e-05
> apply(duncan.boot$t, 2, mean)
[1] 5.342895220 0.002607943
> apply(duncan.boot$t, 2, mean) - duncan.boot$t0
[1] 0.339585441 0.002597411
It becomes more obvious that the T0 value is from the original data while the bias is the difference between the mean of the boot()-ed values and the T0 values. I don't think it makes a lot of sense to be asking why p-values based on parametric considerations are increasing in association with an increase in estimated t-statistics. You are really in two disparate regions of statistical thought when you do that. I would have interpreted the increase in p-values as an effect of the sampling process, which does not take into account the Normal distribution assumptions. It is simply saying something about the sampling distribution of the p-value (which is really just another sample statistic).
(Comment: The sourcebook used at the time of R's development was Davison and Hinkley's "Bootstrap Methods and their Applications". I'm not claiming any support for my answer above, but I thought to put it in as a reference after Hagen Brenner asked about sampling with two indices in the comments below. There are many unexpected aspects of bootstrapping that arise once one goes beyond simple parametric estimation, and I would first turn to that reference if I were tackling more complex sampling situations.)
