lm fit on chunks of data in R [duplicate]

The standard way of doing a linear regression is something like this:
l <- lm(Sepal.Width ~ Petal.Length + Petal.Width, data=iris)
and then use predict(l, new_data) to make predictions, where new_data is a data frame with columns matching the formula. But lm() returns an lm object, which is a list containing crap-loads of stuff that is irrelevant in most situations. This includes a copy of the original data, and a bunch of named vectors and arrays the length/size of the data:
R> str(l)
List of 12
$ coefficients : Named num [1:3] 3.587 -0.257 0.364
..- attr(*, "names")= chr [1:3] "(Intercept)" "Petal.Length" "Petal.Width"
$ residuals : Named num [1:150] 0.2 -0.3 -0.126 -0.174 0.3 ...
..- attr(*, "names")= chr [1:150] "1" "2" "3" "4" ...
$ effects : Named num [1:150] -37.445 -2.279 -0.914 -0.164 0.313 ...
..- attr(*, "names")= chr [1:150] "(Intercept)" "Petal.Length" "Petal.Width" "" ...
$ rank : int 3
$ fitted.values: Named num [1:150] 3.3 3.3 3.33 3.27 3.3 ...
..- attr(*, "names")= chr [1:150] "1" "2" "3" "4" ...
$ assign : int [1:3] 0 1 2
$ qr :List of 5
..$ qr : num [1:150, 1:3] -12.2474 0.0816 0.0816 0.0816 0.0816 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:150] "1" "2" "3" "4" ...
.. .. ..$ : chr [1:3] "(Intercept)" "Petal.Length" "Petal.Width"
.. ..- attr(*, "assign")= int [1:3] 0 1 2
..$ qraux: num [1:3] 1.08 1.1 1.01
..$ pivot: int [1:3] 1 2 3
..$ tol : num 1e-07
..$ rank : int 3
..- attr(*, "class")= chr "qr"
$ df.residual : int 147
$ xlevels : Named list()
$ call : language lm(formula = Sepal.Width ~ Petal.Length + Petal.Width, data = iris)
$ terms :Classes 'terms', 'formula' length 3 Sepal.Width ~ Petal.Length + Petal.Width
.. ..- attr(*, "variables")= language list(Sepal.Width, Petal.Length, Petal.Width)
.. ..- attr(*, "factors")= int [1:3, 1:2] 0 1 0 0 0 1
.. .. ..- attr(*, "dimnames")=List of 2
.. .. .. ..$ : chr [1:3] "Sepal.Width" "Petal.Length" "Petal.Width"
.. .. .. ..$ : chr [1:2] "Petal.Length" "Petal.Width"
.. ..- attr(*, "term.labels")= chr [1:2] "Petal.Length" "Petal.Width"
.. ..- attr(*, "order")= int [1:2] 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(Sepal.Width, Petal.Length, Petal.Width)
.. ..- attr(*, "dataClasses")= Named chr [1:3] "numeric" "numeric" "numeric"
.. .. ..- attr(*, "names")= chr [1:3] "Sepal.Width" "Petal.Length" "Petal.Width"
$ model :'data.frame': 150 obs. of 3 variables:
..$ Sepal.Width : num [1:150] 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
..$ Petal.Length: num [1:150] 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
..$ Petal.Width : num [1:150] 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
..- attr(*, "terms")=Classes 'terms', 'formula' length 3 Sepal.Width ~ Petal.Length + Petal.Width
.. .. ..- attr(*, "variables")= language list(Sepal.Width, Petal.Length, Petal.Width)
.. .. ..- attr(*, "factors")= int [1:3, 1:2] 0 1 0 0 0 1
.. .. .. ..- attr(*, "dimnames")=List of 2
.. .. .. .. ..$ : chr [1:3] "Sepal.Width" "Petal.Length" "Petal.Width"
.. .. .. .. ..$ : chr [1:2] "Petal.Length" "Petal.Width"
.. .. ..- attr(*, "term.labels")= chr [1:2] "Petal.Length" "Petal.Width"
.. .. ..- attr(*, "order")= int [1:2] 1 1
.. .. ..- attr(*, "intercept")= int 1
.. .. ..- attr(*, "response")= int 1
.. .. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. .. ..- attr(*, "predvars")= language list(Sepal.Width, Petal.Length, Petal.Width)
.. .. ..- attr(*, "dataClasses")= Named chr [1:3] "numeric" "numeric" "numeric"
.. .. .. ..- attr(*, "names")= chr [1:3] "Sepal.Width" "Petal.Length" "Petal.Width"
- attr(*, "class")= chr "lm"
That stuff takes up a lot of space, and the lm object ends up being almost an order of magnitude larger than the original dataset:
R> object.size(iris)
7088 bytes
R> object.size(l)
52704 bytes
This isn't a problem with a dataset as small as that, but it can be really problematic with a 170 MB dataset that produces a 450 MB lm object. Even with all the return options set to FALSE, the lm object is still 5 times the size of the original dataset:
R> ls <- lm(Sepal.Width ~ Petal.Length + Petal.Width, data=iris, model=FALSE, x=FALSE, y=FALSE, qr=FALSE)
R> object.size(ls)
30568 bytes
Is there any way of fitting a model in R, and then being able to predict output values on new input data, without storing crap tonnes of extra unnecessary data? In other words, is there a way to just store the model coefficients, but still be able to use those coefficients to predict on new data?
Edit: I guess, as well as not storing all that excess data, I'm also really interested in a way of using lm so that it doesn't even calculate that data - it's just wasted CPU time...

You can use biglm:
library(biglm)
m <- biglm(Sepal.Length ~ Petal.Length + Petal.Width, iris)
Since biglm does not store the data in the output object, you need to provide your data when making predictions:
p <- predict(m, newdata=iris)
The amount of memory biglm uses depends on the number of parameters, not on the number of rows:
> object.size(m)
6720 bytes
> d <- rbind(iris, iris)
> m <- biglm(Sepal.Width ~ Petal.Length + Petal.Width, data=d)
> object.size(m)
6720 bytes
biglm also allows you to update the model with a new chunk of data using the update method. Using this you can also estimate models when the complete dataset does not fit in memory.
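For example, a minimal sketch of chunked fitting (the three-way split of iris is only for illustration; in practice each chunk would be read from disk or a database):
library(biglm)

# pretend the data arrives in three chunks
chunks <- split(iris, rep(1:3, length.out = nrow(iris)))

m <- biglm(Sepal.Width ~ Petal.Length + Petal.Width, data = chunks[[1]])
for (ch in chunks[-1]) {
  m <- update(m, ch)               # fold each new chunk into the existing fit
}

coef(m)
p <- predict(m, newdata = iris)    # biglm still needs the data at prediction time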

The only components of the lm object that you need to calculate predicted values are terms and coefficients. However, you'll need to roll your own prediction function, as predict.lm complains if you delete the qr component (which is needed to compute term-by-term effects and standard errors). Something like this should do:
m <- lm(Sepal.Length ~ Petal.Length + Petal.Width, iris)

# drop everything that predict0() below does not need
m$effects <- m$fitted.values <- m$residuals <- m$model <- m$qr <-
  m$rank <- m$assign <- NULL

predict0 <- function(object, newdata)
{
  # rebuild the model matrix from the stored terms and multiply by the coefficients
  mm <- model.matrix(terms(object), newdata)
  mm %*% object$coefficients
}

predict0(m, iris[1:10, ])
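Because the stored terms still include the response, predict0() as written needs the response column to be present in newdata. If your new data will not contain it, a small variant (a sketch; it simply strips the response from the terms with delete.response()) avoids that requirement:
# variant that does not need the response column in newdata
predict0_noresp <- function(object, newdata)
{
  tt <- delete.response(terms(object))
  mm <- model.matrix(tt, newdata)
  drop(mm %*% object$coefficients)
}
predict0_noresp(m, iris[1:10, c("Petal.Length", "Petal.Width")])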

I think there are two approaches to deal with this:
Use lm and trim the fat afterwards. For quite nice and instructive discussions, see e.g. here and here. This will not solve the "computation time" issue.
Do not use lm.
If you go for the second option, you could easily write up the matrix operations yourself so that you only get the predicted values. If you prefer a canned routine, you could try other packages that implement least squares, e.g. fastLm from the RcppArmadillo package (or its RcppEigen counterpart, or, as others pointed out, biglm), which store much less information. This approach keeps some conveniences such as a formula interface, and fastLm is also quite fast if computation time is a concern for you.
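As a rough illustration of rolling the matrix algebra yourself, here is a sketch; it solves the normal equations directly, which is simpler but less numerically stable than the QR decomposition lm uses:
# keep only the coefficient vector; rebuild the model matrix at prediction time
X    <- model.matrix(~ Petal.Length + Petal.Width, iris)
beta <- solve(crossprod(X), crossprod(X, iris$Sepal.Width))   # (X'X)^{-1} X'y

newX <- model.matrix(~ Petal.Length + Petal.Width, iris[1:10, ])
drop(newX %*% beta)   # predictions for the first 10 rows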
For comparison, here is a small benchmark:
l <- lm(Sepal.Width ~ Petal.Length + Petal.Width, data=iris)
library(biglm)
m <- biglm(Sepal.Length ~ Petal.Length + Petal.Width, iris)
library(RcppArmadillo)
a <- fastLm(Sepal.Length ~ Petal.Length + Petal.Width, iris)
object.size(l)
# 52704 bytes
object.size(m)
# 6664 bytes
object.size(a)
# 6344 bytes

Related

Extract all model statistics from rms fits?

The rms package contains a wealth of useful statistical functions. However, I cannot find a proper way to extract certain fit statistics from the fitted object. Consider an example:
library(pacman)
p_load(rms, stringr, readr)
#fit
> (fit = rms::ols(Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width + Species, data = iris))

Linear Regression Model

rms::ols(formula = Sepal.Length ~ Sepal.Width + Petal.Length +
    Petal.Width + Species, data = iris)

                 Model Likelihood     Discrimination
                    Ratio Test           Indexes
Obs     150    LR chi2     302.96    R2       0.867
sigma 0.3068   d.f.             5    R2 adj   0.863
d.f.    144    Pr(> chi2)  0.0000    g        0.882

Residuals

      Min        1Q    Median        3Q       Max
-0.794236 -0.218743  0.008987  0.202546  0.731034

                   Coef    S.E.   t     Pr(>|t|)
Intercept           2.1713 0.2798  7.76 <0.0001
Sepal.Width         0.4959 0.0861  5.76 <0.0001
Petal.Length        0.8292 0.0685 12.10 <0.0001
Petal.Width        -0.3152 0.1512 -2.08 0.0389
Species=versicolor -0.7236 0.2402 -3.01 0.0031
Species=virginica  -1.0235 0.3337 -3.07 0.0026
So, the print function for the fit prints a lot of useful stuff including standard errors and adjusted R2. Unfortunately, if we inspect the model fit object, the values don't seem to be present anywhere.
> str(fit)
List of 19
$ coefficients : Named num [1:6] 2.171 0.496 0.829 -0.315 -0.724 ...
..- attr(*, "names")= chr [1:6] "Intercept" "Sepal.Width" "Petal.Length" "Petal.Width" ...
$ residuals : Named num [1:150] 0.0952 0.1432 -0.0731 -0.2894 -0.0544 ...
..- attr(*, "names")= chr [1:150] "1" "2" "3" "4" ...
$ effects : Named num [1:150] -71.5659 -1.1884 9.1884 -1.3724 -0.0587 ...
..- attr(*, "names")= chr [1:150] "Intercept" "Sepal.Width" "Petal.Length" "Petal.Width" ...
$ rank : int 6
$ fitted.values : Named num [1:150] 5 4.76 4.77 4.89 5.05 ...
..- attr(*, "names")= chr [1:150] "1" "2" "3" "4" ...
$ assign :List of 4
..$ Sepal.Width : int 2
..$ Petal.Length: int 3
..$ Petal.Width : int 4
..$ Species : int [1:2] 5 6
$ qr :List of 5
..$ qr : num [1:150, 1:6] -12.2474 0.0816 0.0816 0.0816 0.0816 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:150] "1" "2" "3" "4" ...
.. .. ..$ : chr [1:6] "Intercept" "Sepal.Width" "Petal.Length" "Petal.Width" ...
..$ qraux: num [1:6] 1.08 1.02 1.11 1.02 1.02 ...
..$ pivot: int [1:6] 1 2 3 4 5 6
..$ tol : num 1e-07
..$ rank : int 6
..- attr(*, "class")= chr "qr"
$ df.residual : int 144
$ var : num [1:6, 1:6] 0.07828 -0.02258 -0.00198 0.01589 -0.02837 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:6] "Intercept" "Sepal.Width" "Petal.Length" "Petal.Width" ...
.. ..$ : chr [1:6] "Intercept" "Sepal.Width" "Petal.Length" "Petal.Width" ...
$ stats : Named num [1:6] 150 302.964 5 0.867 0.882 ...
..- attr(*, "names")= chr [1:6] "n" "Model L.R." "d.f." "R2" ...
$ linear.predictors: Named num [1:150] 5 4.76 4.77 4.89 5.05 ...
..- attr(*, "names")= chr [1:150] "1" "2" "3" "4" ...
$ call : language rms::ols(formula = Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width + Species, data = iris)
$ terms :Classes 'terms', 'formula' language Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width + Species
.. ..- attr(*, "variables")= language list(Sepal.Length, Sepal.Width, Petal.Length, Petal.Width, Species)
.. ..- attr(*, "factors")= int [1:5, 1:4] 0 1 0 0 0 0 0 1 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. .. .. ..$ : chr [1:5] "Sepal.Length" "Sepal.Width" "Petal.Length" "Petal.Width" ...
.. .. .. ..$ : chr [1:4] "Sepal.Width" "Petal.Length" "Petal.Width" "Species"
.. ..- attr(*, "term.labels")= chr [1:4] "Sepal.Width" "Petal.Length" "Petal.Width" "Species"
.. ..- attr(*, "order")= int [1:4] 1 1 1 1
.. ..- attr(*, "intercept")= num 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(Sepal.Length, Sepal.Width, Petal.Length, Petal.Width, Species)
.. ..- attr(*, "dataClasses")= Named chr [1:5] "numeric" "numeric" "numeric" "numeric" ...
.. .. ..- attr(*, "names")= chr [1:5] "Sepal.Length" "Sepal.Width" "Petal.Length" "Petal.Width" ...
.. ..- attr(*, "formula")=Class 'formula' language Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width + Species
.. .. .. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
$ Design :List of 12
..$ name : chr [1:4] "Sepal.Width" "Petal.Length" "Petal.Width" "Species"
..$ label : chr [1:4] "Sepal.Width" "Petal.Length" "Petal.Width" "Species"
..$ units : Named chr [1:4] "" "" "" ""
.. ..- attr(*, "names")= chr [1:4] "Sepal.Width" "Petal.Length" "Petal.Width" "Species"
..$ colnames : chr [1:5] "Sepal.Width" "Petal.Length" "Petal.Width" "Species=versicolor" ...
..$ mmcolnames : chr [1:5] "Sepal.Width" "Petal.Length" "Petal.Width" "Speciesversicolor" ...
..$ assume : chr [1:4] "asis" "asis" "asis" "category"
..$ assume.code : int [1:4] 1 1 1 5
..$ parms :List of 1
.. ..$ Species: chr [1:3] "setosa" "versicolor" "virginica"
..$ limits : list()
..$ values : list()
..$ nonlinear :List of 4
.. ..$ Sepal.Width : logi FALSE
.. ..$ Petal.Length: logi FALSE
.. ..$ Petal.Width : logi FALSE
.. ..$ Species : logi [1:2] FALSE FALSE
..$ interactions: NULL
$ non.slopes : num 1
$ na.action : NULL
$ scale.pred : chr "Sepal.Length"
$ fail : logi FALSE
$ sformula :Class 'formula' language Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width + Species
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
- attr(*, "class")= chr [1:3] "ols" "rms" "lm"
There is a seven-year-old question on R-help where the package author explains a way to get these:
On Wed, 11 Aug 2010, david dav wrote:
Hi,
I would like to extract the coefficients of a logistic regression
(estimates and standard error as well) in lrm as in glm with
summary(fit.glm)$coef
Thanks
David
coef(fit)
sqrt(diag(vcov(fit)))
But these will not be very helpful except in the trivial case where
everything is linear, nothing interacts, and factors have two levels.
Frank
And, according to the author, this solution is not optimal. That leaves one wondering how the displayed values are calculated. Tracing it down means a hunt through the undocumented package code (which is on GitHub). We begin with print.ols():
> rms:::print.ols
function (x, digits = 4, long = FALSE, coefs = TRUE, title = "Linear Regression Model",
    ...)
{
    latex <- prType() == "latex"
    k <- 0
    z <- list()
    if (length(zz <- x$na.action)) {
        k <- k + 1
        z[[k]] <- list(type = paste("naprint", class(zz)[1],
            sep = "."), list(zz))
    }
    stats <- x$stats
    ...
Reading further, we find that e.g. adjusted R2 is calculated inside the print function:
rsqa <- 1 - (1 - r2) * (n - 1) / rdf
We also find some standard error calculations, though no p values.
se <- sqrt(diag(x$var))
z[[k]] <- list(type = 'coefmatrix',
               list(coef = x$coefficients,
                    se = se,
                    errordf = rdf))
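Since those ingredients are all present in the fit object, the displayed numbers can in principle be recomputed directly from it (a sketch, assuming a plain ols fit with no rms-specific penalisation or transformations):
n      <- fit$stats["n"]
r2     <- fit$stats["R2"]
rdf    <- fit$df.residual
r2.adj <- 1 - (1 - r2) * (n - 1) / rdf        # same formula as the print method
se     <- sqrt(diag(fit$var))                 # standard errors
tval   <- fit$coefficients / se
pval   <- 2 * pt(-abs(tval), df = rdf)        # two-sided p-values
cbind(coef = fit$coefficients, se = se, t = tval, p = pval)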
All the results are then passed down to prModFit(), where we can find the p-value calculation and so on. Unfortunately, the print command returns NULL, so these values are not available anywhere for programmatic reuse:
> x = print((fit = rms::ols(Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width + Species, data = iris)))
#printed output...
> x
NULL
How does one get all the statistics?
Here is a hack solution where we capture the output of the print command:
# parser
# note: the %>% pipe used below needs magrittr (or a package that attaches it, e.g. dplyr) to be loaded
get_model_stats = function(x, precision = 60) {
  # remember the old number formatting function
  # (which would round and transform p-values into formats like "<0.01")
  old_format_np = rms::formatNP
  # substitute it with a function which prints as many digits as we want
  assignInNamespace("formatNP",
                    function(x, ...) formatC(x, format = "f", digits = precision),
                    "rms")
  # remember the old width setting
  old_width = options('width')$width
  # substitute it with a setting making sure the table will not wrap
  options(width = old_width + 4 * precision)

  # actually print the fit and capture the output
  cap = capture.output(print(x))

  # restore the original settings
  options(width = old_width)
  assignInNamespace("formatNP", old_format_np, "rms")

  # model stats
  stats = c()
  stats$R2.adj = str_match(cap, "R2 adj\\s+ (\\d\\.\\d+)") %>% na.omit() %>% .[, 2] %>% as.numeric()

  # coefficient stats lines
  coef_lines = cap[which(str_detect(cap, "Coef\\s+S\\.E\\.")):(length(cap) - 1)]

  # parse
  coef_lines_table = suppressWarnings(readr::read_table(coef_lines %>% stringr::str_c(collapse = "\n")))
  colnames(coef_lines_table)[1] = "Predictor"

  list(
    stats = stats,
    coefs = coef_lines_table
  )
}
Example:
> get_model_stats(fit)
$stats
$stats$R2.adj
[1] 0.86
$coefs
# A tibble: 6 x 5
Predictor Coef S.E. t `Pr(>|t|)`
<chr> <dbl> <dbl> <dbl> <chr>
1 Intercept 2.17 0.280 7.8 <0.0001
2 Sepal.Width 0.50 0.086 5.8 <0.0001
3 Petal.Length 0.83 0.069 12.1 <0.0001
4 Petal.Width -0.32 0.151 -2.1 0.0389
5 Species=versicolor -0.72 0.240 -3.0 0.0031
6 Species=virginica -1.02 0.334 -3.1 0.0026
This still has issues: e.g. the p-values are not returned as numerics and only have 4 digits, which can cause problems in some situations. The updated code should extract digits up to arbitrary precision.
Be extra careful when using this with long variable names, as those can wrap the table onto multiple rows and introduce missing values (NA) in the output even though the stats are in there!
The broom package is a great way to extract model info.
library(pacman)
library(rms)
library(broom)
fit = ols(Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width + Species,
data = iris)
tidy(summary.lm(fit))
# term estimate std.error statistic p.value
# 1 Intercept 2.1712663 0.27979415 7.760227 1.429502e-12
# 2 Sepal.Width 0.4958889 0.08606992 5.761466 4.867516e-08
# 3 Petal.Length 0.8292439 0.06852765 12.100867 1.073592e-23
# 4 Petal.Width -0.3151552 0.15119575 -2.084418 3.888826e-02
# 5 Species=versicolor -0.7235620 0.24016894 -3.012721 3.059634e-03
# 6 Species=virginica -1.0234978 0.33372630 -3.066878 2.584344e-03
glance(fit)
# r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC df.residual
# 1 0.8673123 0.862705 0.3068261 188.251 2.666942e-61 6 -32.55801 79.11602 100.1905 144
The fit object also contains some easily accessible info that you can get and store in a data frame:
fit$coefficients
# Intercept Sepal.Width Petal.Length Petal.Width Species=versicolor Species=virginica
# 2.1712663 0.4958889 0.8292439 -0.3151552 -0.7235620 -1.0234978
fit$stats
# n Model L.R. d.f. R2 g Sigma
# 150.0000000 302.9635115 5.0000000 0.8673123 0.8820479 0.3068261
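For instance, a minimal sketch of collecting those pieces into data frames (the object names are my own choice, not from the package):
coef_df  <- data.frame(term = names(fit$coefficients),
                       estimate = unname(fit$coefficients))
stats_df <- as.data.frame(t(fit$stats))
coef_df
stats_df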

Refer to variable by part of the variable name

It seems that, in R, I can refer to a variable using only part of its name, but I am confused about why I can do that.
Use the following code as an example:
library(car)
scatterplot(housing ~ total)
house.lm <- lm(housing ~ total)
summary(house.lm)
str(summary(house.lm))
summary(house.lm)$coefficients[2,2]
summary(house.lm)$coe[2,2]
When I print the structure of summary(house.lm), I get the following output:
> str(summary(house.lm))
List of 11
$ call : language lm(formula = housing ~ total)
$ terms :Classes 'terms', 'formula' language housing ~ total
.. ..- attr(*, "variables")= language list(housing, total)
.. ..- attr(*, "factors")= int [1:2, 1] 0 1
.. .. ..- attr(*, "dimnames")=List of 2
.. .. .. ..$ : chr [1:2] "housing" "total"
.. .. .. ..$ : chr "total"
.. ..- attr(*, "term.labels")= chr "total"
.. ..- attr(*, "order")= int 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(housing, total)
.. ..- attr(*, "dataClasses")= Named chr [1:2] "numeric" "numeric"
.. .. ..- attr(*, "names")= chr [1:2] "housing" "total"
$ residuals : Named num [1:162] -8.96 -11.43 3.08 8.45 2.2 ...
..- attr(*, "names")= chr [1:162] "1" "2" "3" "4" ...
$ coefficients : num [1:2, 1:4] 28.4523 0.0488 10.2117 0.0103 2.7862 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:2] "(Intercept)" "total"
.. ..$ : chr [1:4] "Estimate" "Std. Error" "t value" "Pr(>|t|)"
$ aliased : Named logi [1:2] FALSE FALSE
..- attr(*, "names")= chr [1:2] "(Intercept)" "total"
$ sigma : num 53.8
$ df : int [1:3] 2 160 2
$ r.squared : num 0.123
$ adj.r.squared: num 0.118
$ fstatistic : Named num [1:3] 22.5 1 160
..- attr(*, "names")= chr [1:3] "value" "numdf" "dendf"
$ cov.unscaled : num [1:2, 1:2] 3.61e-02 -3.31e-05 -3.31e-05 3.67e-08
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:2] "(Intercept)" "total"
.. ..$ : chr [1:2] "(Intercept)" "total"
- attr(*, "class")= chr "summary.lm"
However, it seems that I can refer to the variable coefficients with all of the following commands:
summary(house.lm)$coe[2,2]
summary(house.lm)$coef[2,2]
summary(house.lm)$coeff[2,2]
summary(house.lm)$coeffi[2,2]
summary(house.lm)$coeffic[2,2]
summary(house.lm)$coeffici[2,2]
summary(house.lm)$coefficie[2,2]
summary(house.lm)$coefficien[2,2]
summary(house.lm)$coefficient[2,2]
summary(house.lm)$coefficients[2,2]
They all give the same result: 0.01029709
Therefore, I was wondering: when can I refer to a variable with only part of its name in R?
You can do it when the partial name is unambiguous. For example:
df <- data.frame(abcd = c(1,2,3), xyz = c(4,5,6), abc = c(5,6,7))
> df$xy
[1] 4 5 6
> df$ab
NULL
> df$x
[1] 4 5 6
df$xy and even df$x give the right data, but df$ab returns NULL because it could refer to either df$abc or df$abcd. It's similar to typing df$xy in RStudio and pressing Ctrl + Space: you get the right variable name, so you can refer to a variable by part of its name.
http://adv-r.had.co.nz/Functions.html#lexical-scoping
When calling a function you can specify arguments by position, by
complete name, or by partial name. Arguments are matched first by
exact name (perfect matching), then by prefix matching, and finally by
position.
When you are doing quick, interactive coding to analyse some data, using partial names is not a problem, but I tend to agree that it's not good practice when writing code. In a package you can't do it at all: R CMD check will flag every occurrence.
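If you want to avoid relying on partial matching in your own scripts, two defensive options are sketched below (reusing house.lm from the question):
s <- summary(house.lm)

# [[ matches names exactly by default, unlike $
s[["coefficients"]][2, 2]   # works
# s[["coef"]]               # NULL: no exact match

# make R warn whenever $ falls back to partial matching
options(warnPartialMatchDollar = TRUE)
s$coef[2, 2]                # same value, but now emits a warning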

R kohonen - Is the input data scaled and centred automatically?

I have been following an online example for R Kohonen self-organising maps (SOM) which suggested that the data should be centred and scaled before computing the SOM.
However, I've noticed the created object seems to have attributes for centre and scale; in that case, am I applying a redundant step by centring and scaling first? Example script is below:
# Load package
require(kohonen)
# Set data
data(iris)
# Scale and centre
dt <- scale(iris[, 1:4],center=TRUE)
# Prepare SOM
set.seed(590507)
som1 <- som(dt,
            somgrid(6, 6, "hexagonal"),
            rlen = 500,
            keep.data = TRUE)
str(som1)
The output from the last line of the script is:
List of 13
$ data :List of 1
..$ : num [1:150, 1:4] -0.898 -1.139 -1.381 -1.501 -1.018 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : NULL
.. .. ..$ : chr [1:4] "Sepal.Length" "Sepal.Width" "Petal.Length" "Petal.Width"
.. ..- attr(*, "scaled:center")= Named num [1:4] 5.84 3.06 3.76 1.2
.. .. ..- attr(*, "names")= chr [1:4] "Sepal.Length" "Sepal.Width" "Petal.Length" "Petal.Width"
.. ..- attr(*, "scaled:scale")= Named num [1:4] 0.828 0.436 1.765 0.762
.. .. ..- attr(*, "names")= chr [1:4] "Sepal.Length" "Sepal.Width" "Petal.Length" "Petal.Width"
$ unit.classif : num [1:150] 3 5 5 5 4 2 4 4 6 5 ...
$ distances : num [1:150] 0.0426 0.0663 0.0768 0.0744 0.1346 ...
$ grid :List of 6
..$ pts : num [1:36, 1:2] 1.5 2.5 3.5 4.5 5.5 6.5 1 2 3 4 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : NULL
.. .. ..$ : chr [1:2] "x" "y"
..$ xdim : num 6
..$ ydim : num 6
..$ topo : chr "hexagonal"
..$ neighbourhood.fct: Factor w/ 2 levels "bubble","gaussian": 1
..$ toroidal : logi FALSE
..- attr(*, "class")= chr "somgrid"
$ codes :List of 1
..$ : num [1:36, 1:4] -0.376 -0.683 -0.734 -1.158 -1.231 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:36] "V1" "V2" "V3" "V4" ...
.. .. ..$ : chr [1:4] "Sepal.Length" "Sepal.Width" "Petal.Length" "Petal.Width"
$ changes : num [1:500, 1] 0.0445 0.0413 0.0347 0.0373 0.0337 ...
$ alpha : num [1:2] 0.05 0.01
$ radius : Named num [1:2] 3.61 0
..- attr(*, "names")= chr [1:2] "66.66667%" ""
$ user.weights : num 1
$ distance.weights: num 1
$ whatmap : int 1
$ maxNA.fraction : int 0
$ dist.fcts : chr "sumofsquares"
- attr(*, "class")= chr "kohonen"
Notice that in lines 7 and 10 of the output there are references to centre and scale. I would appreciate an explanation of what is happening here.
Your scaling step is not redundant: there is no scaling in the som() source code, and the attributes you see in lines 7 and 10 are simply the attributes of the training dataset (they were added by scale()). To check this, run the following chunk and compare the results:
# Load package
require(kohonen)
# Set data
data(iris)
# Scale and centre
dt <- scale(iris[, 1:4],center=TRUE)
#compare train datasets
str(dt)
str(as.matrix(iris[, 1:4]))
# Prepare SOM
set.seed(590507)
som1 <- kohonen::som(dt,
                     kohonen::somgrid(6, 6, "hexagonal"),
                     rlen = 500,
                     keep.data = TRUE)
# Without scaling
som2 <- kohonen::som(as.matrix(iris[, 1:4]),
                     kohonen::somgrid(6, 6, "hexagonal"),
                     rlen = 500,
                     keep.data = TRUE)
# Compare the results of the som function
str(som1)
str(som2)
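You can also see where the centre/scale attributes come from by inspecting the scaled matrix directly (a short sketch):
# the attributes live on the matrix produced by scale(), not on anything som() computes
attr(dt, "scaled:center")
attr(dt, "scaled:scale")
# the raw matrix carries no such attributes
attr(as.matrix(iris[, 1:4]), "scaled:center")   # NULL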

R-package(baseline) application to sample dataset

I am trying to use the R baseline package on a sample dataset of mine, to test and evaluate the baseline algorithm that I currently use.
I wanted to apply the fillPeaks algorithm as a trend line for comparison.
library(baseline)
data(milk)
bc.fillPeaks <- baseline(milk$spectra[1, , drop=FALSE], lambda=6,
                         hwi=50, it=10, int=2000, method="fillPeaks")
plot(bc.fillPeaks)
But my problem is that my sample data does not fit the matrix structure used in the example. When I look at the data.frame used for the example, I don't understand it:
'data.frame': 45 obs. of 2 variables
$ cow : num 0 0.25 0.375 0.875 0.5 0.75 0.5 0.125 0 0.125 ...
$ spectra: num [1:45, 1:21451] 1029 371 606 368 554 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : NULL
.. ..$ : chr "4999.94078628963" "5001.55954267662" "5003.17856106153" "5004.79784144435" ...
- attr(*, "terms")=Classes 'terms', 'formula' length 3 cow ~ spectra
.. ..- attr(*, "variables")= language list(cow, spectra)
.. ..- attr(*, "factors")= int [1:2, 1] 0 1
.. .. ..- attr(*, "dimnames")=List of 2
.. .. .. ..$ : chr [1:2] "cow" "spectra"
.. .. .. ..$ : chr "spectra"
.. ..- attr(*, "term.labels")= chr "spectra"
.. ..- attr(*, "order")= int 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(cow, spectra)
.. ..- attr(*, "dataClasses")= Named chr [1:2] "numeric" "nmatrix.21451"
.. .. ..- attr(*, "names")= chr [1:2] "cow" "spectra"
My question is therefore whether any of you have experience with the baseline package and the milk dataset used in the examples, and have ideas on how I can convert my dataset, which is structured as Date, Visits, Old_baseline_visits, so that I can fit and test the baseline algorithm from the R package.
I have used baseline, and found it slightly confusing at first, particularly the example data. As it says in the help file, baseline expects a matrix with the spectra in rows. Even if you only have one "spectrum", it needs to be in the form of a single-row matrix. Try this:
foo <- data.frame(Date = seq.Date(as.Date("1957-01-01"), by = "day",
                                  length.out = ncol(milk$spectra)),
                  Visits = milk$spectra[1, ],
                  Old_baseline_visits = milk$spectra[1, ],
                  row.names = NULL)
foo.t <- t(foo$Visits)  # Visits as a single-row matrix

bc.fillPeaks <- baseline(foo.t, lambda = 6,
                         hwi = 50, it = 10, int = 2000, method = 'fillPeaks')
plot(bc.fillPeaks)
If you want the baseline and corrected spectra back in your original data frame, try this:
foo$New_baseline <- c(getBaseline(bc.fillPeaks))
foo$New_corrected <- c(getCorrected(bc.fillPeaks))
plot(foo$Date, foo$New_corrected, "l")
Alternatively, if you don't need the baseline object, you can use baseline.fillPeaks(), which returns a list.
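A minimal sketch of that list interface (the $baseline and $corrected component names are what I recall from the package documentation; check str() of the result to confirm):
bl <- baseline.fillPeaks(foo.t, lambda = 6, hwi = 50, it = 10, int = 2000)
foo$New_baseline  <- c(bl$baseline)
foo$New_corrected <- c(bl$corrected)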

Error using stargazer and coxph - Survival Data

I get the following error when trying to use stargazer::stargazer with coxph (from the survival package):
> summary(firm.survcox)
> stargazer(firm.survcox)
> firm.survcox #only partial results are included for brevity
Call: coxph(formula = Surv(Duration, Event, type = "right") ~ Innovation + Avg_Wage)

                         coef exp(coef) se(coef)        z       p
Innovation           -0.87680    0.4161 0.008040 -109.051 0.0e+00
Prior_Experience:Age -0.01297    0.9871 0.004174   -3.107 1.9e-03

Likelihood ratio test=131885 on 23 df, p=0
n= 535416, number of events= 203037 (1060020 observations deleted due to missingness)
Error in .get.standard.errors.1(object.name, user.given) :
subscript out of bounds
Variables in the model are firm-, industry-, and region-level, and there is one interaction term. I tried running the model on only one variable (e.g. Innovation) and I get the same error message. The rownames are NULL.
Updated Question and Response:
Sorry about the confusion. I did not realize I could edit the original question. Below is the model run on one variable (Innovation), with the same error produced.
Call: coxph(formula = Surv(Duration, Event, type = "right") ~ Innovation)

               coef exp(coef) se(coef)        z       p
Innovation -0.87680    0.4161 0.008040 -109.051 0.0e+00

Likelihood ratio test=131885 on 23 df, p=0
n= 535416, number of events= 203037 (1060020 observations deleted due to missingness)
stargazer(firm.survcox)
Error in .get.standard.errors.1(object.name, user.given) :
subscript out of bounds
Structure of the Cox Model
str(firm.survcox)
List of 18
$ coefficients : Named num -0.772
..- attr(*, "names")= chr "Innovation"
$ var : num [1, 1] 3.91e-05
$ loglik : num [1:2] -5583174 -5573640
$ score : num 16015
$ iter : int 4
$ linear.predictors: num [1:1595436] 0.0807 0.0807 0.0807 -0.6915 0.0807 ...
$ residuals : Named num [1:1595436] 0.925 0.976 -0.516 0.888 0.976 ...
..- attr(*, "names")= chr [1:1595436] "1" "2" "3" "4" ...
$ means : Named num 0.104
..- attr(*, "names")= chr "Innovation"
$ concordance : Named num [1:5] 5.20e+10 1.97e+10 3.67e+11 1.14e+10 2.49e+08
..- attr(*, "names")= chr [1:5] "concordant" "discordant" "tied.risk" "tied.time" ...
$ method : chr "efron"
$ n : int 1595436
$ nevent : num 404033
$ terms :Classes 'terms', 'formula' length 3 Surv(Duration, Event, type = "right") ~ Innovation
.. ..- attr(*, "variables")= language list(Surv(Duration, Event, type = "right"), Innovation)
.. ..- attr(*, "factors")= int [1:2, 1] 0 1
.. .. ..- attr(*, "dimnames")=List of 2
.. .. .. ..$ : chr [1:2] "Surv(Duration, Event, type = \"right\")" "Innovation"
.. .. .. ..$ : chr "Innovation"
.. ..- attr(*, "term.labels")= chr "Innovation"
.. ..- attr(*, "specials")=Dotted pair list of 3
.. .. ..$ strata : NULL
.. .. ..$ cluster: NULL
.. .. ..$ tt     : NULL
.. ..- attr(*, "order")= int 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(Surv(Duration, Event, type = "right"), Innovation)
.. ..- attr(*, "dataClasses")= Named chr [1:2] "nmatrix.2" "numeric"
.. .. ..- attr(*, "names")= chr [1:2] "Surv(Duration, Event, type = \"right\")" "Innovation"
$ assign :List of 1
..$ Innovation: num 1
$ wald.test : Named num 15249
..- attr(*, "names")= chr "Innovation"
$ y : Surv [1:1595436, 1:2] 2 1 10+ 5 1 10+ 10+ 6 8+ 8+ ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:1595436] "1" "2" "3" "4" ...
.. ..$ : chr [1:2] "time" "status"
..- attr(*, "type")= chr "right"
$ formula :Class 'formula' length 3 Surv(Duration, Event, type = "right") ~ Innovation
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
$ call : language coxph(formula = Surv(Duration, Event, type = "right") ~ Innovation)
- attr(*, "class")= chr "coxph"
The material posted in your comment (which I added to the question and formatted as a code block, as you should have done) doesn't really make sense. The coefficients listed, "Innovation" and "Prior_Experience:Age", do not match the formula in the call: Surv(Duration, Event, type = "right") ~ Innovation + Avg_Wage. There seems to have been some mangling of the object along the way.