How do I change the confidence interval display format in R

I'm looking to change the format of a 95% CI from (0.1 -- 0.6) to (0.1 to 0.6) or (0.1, 0.6).
I'm using the epikit::unite_ci() function:
https://cran.r-project.org/web/packages/epikit/vignettes/intro.html

In general, you can substitute text strings with gsub:
gsub(" --", ",",x = "(0.1 -- 0.6)")
gsub("to", ",",x = "(0.1 -- 0.6)")
If you want a more appropriate answer, please provide a small reproducible example.

I suggest modifying the fmt_ci function of epikit as follows:
library(epikit)
fmt_ci <- function(e = numeric(), l = numeric(), u = numeric(), digits = 2,
                   percent = TRUE) {
  stopifnot(is.numeric(e), is.numeric(l), is.numeric(u), is.numeric(digits))
  # Below is the modified row: " to " instead of " -- "
  msg <- "%s (CI %.2f to %.2f)"
  msg <- gsub("2", digits, msg)
  fun <- if (percent) match.fun(scales::percent) else match.fun(scales::number)
  e <- fun(e, scale = 1, accuracy = 1 / (10^digits), big.mark = ",")
  sprintf(msg, e, l, u)
}
# Replace fmt_ci in epikit with the above modified function
assignInNamespace("fmt_ci", fmt_ci, pos="package:epikit")
Running the code:
fit <- lm(100/mpg ~ disp + hp + wt + am, data = mtcars)
df <- data.frame(v = names(coef(fit)), e = coef(fit), confint(fit), row.names = NULL)
names(df) <- c("variable", "estimate", "lower", "upper")
print(df)
out <- unite_ci(df, "slope (CI)", estimate, lower, upper, m100 = FALSE, percent = FALSE)
print(out)
Now you get:
variable slope (CI)
1 (Intercept) 0.74 (-0.77 to 2.26)
2 disp 0.00 (-0.00 to 0.01)
3 hp 0.01 (-0.00 to 0.01)
4 wt 1.00 (0.38 to 1.62)
5 am 0.16 (-0.61 to 0.93)
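If you prefer the "(0.1, 0.6)" style instead, only the template string changes. Here is a sketch reusing the pattern above (fmt_ci_comma is a hypothetical name):
fmt_ci_comma <- function(e = numeric(), l = numeric(), u = numeric(), digits = 2,
                         percent = TRUE) {
  stopifnot(is.numeric(e), is.numeric(l), is.numeric(u), is.numeric(digits))
  msg <- "%s (%.2f, %.2f)"  # comma-separated template
  msg <- gsub("2", digits, msg)
  fun <- if (percent) match.fun(scales::percent) else match.fun(scales::number)
  e <- fun(e, scale = 1, accuracy = 1 / (10^digits), big.mark = ",")
  sprintf(msg, e, l, u)
}
assignInNamespace("fmt_ci", fmt_ci_comma, pos = "package:epikit")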

Related

Find value of covariate given a probability in R

Given a fractional polynomial GLM, I am looking to find the value of a covariate that gives me an output of a given probability.
My data is simulated using:
# FUNCTIONS ====================================================================
library(dplyr)    # for %>%, rowwise(), mutate()
library(ggplot2)  # for plot_beta_duration()

logit <- function(p) {
  log(p / (1 - p))
}

sigmoid <- function(x) {
  1 / (1 + exp(-x))
}

beta_duration <- function(D, select) {
  logit(
    switch(select,
           0.05 + 0.9 / (1 + exp(-2 * D + 25)),
           0.9 * exp(-exp(-0.5 * (D - 11))),
           0.9 * exp(-exp(-(D - 11))),
           0.9 * exp(-2 * exp(-(D - 9))),
           sigmoid(0.847 + 0.210 * (D - 10)),
           0.7 + 0.0015 * (D - 10)^2,
           0.7 - 0.0015 * (D - 10)^2 + 0.03 * (D - 10)
    )
  )
}

beta_sex <- function(sex, OR = 1) {
  ifelse(sex == "Female", -0.5 * log(OR), 0.5 * log(OR))
}

plot_beta_duration <- function(select) {
  x <- seq(10, 20, by = 0.01)
  y <- beta_duration(x, select)
  data.frame(x = x, y = y) %>%
    ggplot(aes(x = x, y = y)) +
    geom_line() +
    ylim(0, 1)
}
# DATA SIMULATION ==============================================================
duration <- c(10, 12, 14, 18, 20)
sex <- factor(c("Female", "Male"))

eta <- function(duration, sex, duration_select, sex_OR, noise_sd) {
  beta_sex(sex, sex_OR) + beta_duration(duration, duration_select) +
    rnorm(length(duration), 0, noise_sd)
}

sim_data <- function(durations_type, sex_OR, noise_sd, p_female, n, seed) {
  set.seed(seed)
  data.frame(
    duration = sample(duration, n, TRUE),
    sex = sample(sex, n, TRUE, c(p_female, 1 - p_female))
  ) %>%
    rowwise() %>%
    mutate(eta = eta(duration, sex, durations_type, sex_OR, noise_sd),
           p = sigmoid(eta),
           cured = sample(0:1, 1, prob = c(1 - p, p)))
}
# DATA SIM PARAMETERS
durations_type <- 4  # See beta_duration for functions
sex_OR <- 3          # Odds of cure for male vs female (ref)
noise_sd <- 1
p_female <- 0.7      # Proportion of females in the sample
n <- 500

data <- sim_data(durations_type = 1,  # See beta_duration for functions
                 sex_OR = 3,          # Odds of cure for male vs female (ref)
                 noise_sd = 1,
                 p_female = 0.7,      # Proportion of females in the sample
                 n = 500,
                 seed = 21874564)
And my model is fitted by:
library(mfp)
model1 <- mfp(cured ~ fp(duration) + sex,
              family = binomial(link = "logit"),
              data = data)
summary(model1)
For each level of sex (i.e. "Male" or "Female"), I want to find the value of duration that gives me a probability equal to some value frontier <- 0.8.
So far, I can only think of an approximate solution: evaluating the model over a grid of candidate values:
pred_duration <- seq(10, 20, by = 0.1)
newdata <- expand.grid(duration = pred_duration, sex = sex)
pred <- data.frame(newdata,
                   p = predict(model1, newdata = newdata, type = "response"))
pred[which(pred$p > 0.8), ] %>%
  group_by(sex) %>%
  summarize(min(duration))
But I am really after an exact solution.
The function uniroot finds the point at which a function's output equals 0. If you create a function that takes duration as input, computes the predicted probability for that duration, and subtracts the desired probability, then its output will be 0 at the desired value of duration, and uniroot will find that value for you. Wrapping this process in a little function makes it very easy to use:
find_prob <- function(p) {
  f <- function(v) {
    predict(model1, type = "response",
            newdata = data.frame(duration = v, sex = "Male")) - p
  }
  uniroot(f, interval = range(data$duration), tol = 1e-9)$root
}
So, for example, to find the duration that gives an 80% probability, we just do:
find_prob(0.8)
#> [1] 12.86089
To confirm that this is the correct value, we can feed it directly into predict to see what the predicted probability is for sex = "Male" and duration = 12.86089:
predict(model1, type = 'response',
newdata = data.frame(sex = 'Male', duration = find_prob(0.8)))
#> 1
#> 0.8
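The question asks for both levels of sex, so it may help to pass sex in as an argument as well. A small generalization of the function above (find_prob2 is a hypothetical name; it assumes the model1 fit from the question, and that the target probability is attained within the observed duration range for both sexes):
find_prob2 <- function(p, sex) {
  f <- function(v) {
    predict(model1, type = "response",
            newdata = data.frame(duration = v, sex = sex)) - p
  }
  uniroot(f, interval = range(data$duration), tol = 1e-9)$root
}
# One duration per sex at probability 0.8
sapply(c("Female", "Male"), find_prob2, p = 0.8)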

How to plot the recursive partitioning from the rpart package

I want to plot a partition of a two-dimensional covariate space constructed by recursive binary splitting. To be more precise, I would like to write a function that replicates the following graph (taken from The Elements of Statistical Learning, p. 306):
[Figure: a two-dimensional covariate space partitioned by axis-aligned recursive binary splits]
The figure shows a two-dimensional covariate space and a partition obtained by recursively splitting the space with axis-aligned binary splits (what is also called the CART algorithm). What I want to implement is a function that takes the output of the rpart function and generates such a plot.
Here is some example code:
## Generating data.
set.seed(1975)
n <- 5000
p <- 2
X <- matrix(sample(seq(0, 1, by = 0.01), n * p, replace = TRUE), ncol = p)
Y <- X[, 1] + 2 * X[, 2] + rnorm(n)
## Building tree.
library(rpart)
tree <- rpart(Y ~ ., data = data.frame(Y, X), method = "anova",
              control = rpart.control(cp = 0, maxdepth = 2))
Navigating SO I found this function:
rpart_splits <- function(fit, digits = getOption("digits")) {
  splits <- fit$splits
  if (!is.null(splits)) {
    ff <- fit$frame
    is.leaf <- ff$var == "<leaf>"
    n <- nrow(splits)
    nn <- ff$ncompete + ff$nsurrogate + !is.leaf
    ix <- cumsum(c(1L, nn))
    ix_prim <- unlist(mapply(ix, ix + c(ff$ncompete, 0), FUN = seq, SIMPLIFY = FALSE))
    type <- rep.int("surrogate", n)
    type[ix_prim[ix_prim <= n]] <- "primary"
    type[ix[ix <= n]] <- "main"
    left <- character(nrow(splits))
    side <- splits[, 2L]
    for (i in seq_along(left)) {
      left[i] <- if (side[i] == -1L)
        paste("<", format(signif(splits[i, 4L], digits)))
      else if (side[i] == 1L)
        paste(">=", format(signif(splits[i, 4L], digits)))
      else {
        catside <- fit$csplit[splits[i, 4L], 1:side[i]]
        paste(c("L", "-", "R")[catside], collapse = "")
      }
    }
    cbind(data.frame(var = rownames(splits),
                     type = type,
                     node = rep(as.integer(row.names(ff)), times = nn),
                     ix = rep(seq_len(nrow(ff)), nn),
                     left = left),
          as.data.frame(splits, row.names = FALSE))
  }
}
Using this function, I am able to recover all the splitting variables and points:
splits <- rpart_splits(tree)[rpart_splits(tree)$type == "main", ]
splits
# var type node ix left count ncat improve index adj
# 1 X2 main 1 1 < 0.565 5000 -1 0.18110662 0.565 0
# 3 X2 main 2 2 < 0.265 2814 -1 0.06358597 0.265 0
# 6 X1 main 3 5 < 0.645 2186 -1 0.07645851 0.645 0
The column var tells me the splitting variables for each non-terminal node, and the column left tells the associated splitting points. However, I do not know how to use this information to produce my desired plots.
Of course, if you have any alternative strategy that does not involve the use of rpart_splits, feel free to suggest it.
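For what it's worth, with a depth-2 tree you can already draw the partition by hand from the splits table above; here is a minimal base-graphics sketch, with the split points read off the left column:
plot(NULL, xlim = c(0, 1), ylim = c(0, 1), xlab = "X2", ylab = "X1")
abline(v = 0.565)                 # root split: X2 < 0.565
segments(0.265, 0, 0.265, 1)      # left child: X2 < 0.265
segments(0.565, 0.645, 1, 0.645)  # right child: X1 < 0.645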
You could use the (unpublished) parttree package, which you can install from GitHub via:
remotes::install_github("grantmcdermott/parttree")
This allows:
library(parttree)
library(ggplot2)
ggplot() +
  geom_parttree(data = tree, aes(fill = path)) +
  coord_cartesian(xlim = c(0, 1), ylim = c(0, 1)) +
  scale_fill_brewer(palette = "Pastel1", name = "Partitions") +
  theme_bw(base_size = 16) +
  labs(x = "X2", y = "X1")
Incidentally, this package also contains the function parttree, which returns something very similar to your rpart_splits function:
parttree(tree)
node Y path xmin xmax ymin ymax
1 4 0.7556079 X2 < 0.565 --> X2 < 0.265 -Inf 0.265 -Inf Inf
2 5 1.3087679 X2 < 0.565 --> X2 >= 0.265 0.265 0.565 -Inf Inf
3 6 1.8681143 X2 >= 0.565 --> X1 < 0.645 0.565 Inf -Inf 0.645
4 7 2.4993361 X2 >= 0.565 --> X1 >= 0.645 0.565 Inf 0.645 Inf
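Since parttree() hands you the rectangle coordinates directly, you could also build the plot yourself, e.g. with geom_rect. A sketch, clamping the infinite edges to the unit square first:
pt_df <- parttree(tree)
pt_df$xmin <- pmax(pt_df$xmin, 0)
pt_df$xmax <- pmin(pt_df$xmax, 1)
pt_df$ymin <- pmax(pt_df$ymin, 0)
pt_df$ymax <- pmin(pt_df$ymax, 1)
ggplot(pt_df) +
  geom_rect(aes(xmin = xmin, xmax = xmax, ymin = ymin, ymax = ymax, fill = path),
            colour = "black", alpha = 0.5) +
  labs(x = "X2", y = "X1")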

How to add a column to a lm based flextable

Let's say I make a model with lm such as
library(flextable)
set.seed(123)
mydata <- data.frame(y = runif(100, 1, 100), x1 = runif(100, 1, 100),
                     x2 = runif(100, 1, 100))
model <- lm(y ~ x1 + x2, data = mydata)
as_flextable(model)
This gives me a flextable with the Estimate, Standard Error, t value, and Pr(>|t|). Let's say I want to add a column to the flextable; for instance, if my y is logged, a column that shows exp(model$coefficients) - 1.
Is there a straightforward way to do that or do I have to recreate the table from scratch?
Referencing the source code of flextable's as_flextable.lm function, it's clear there's no built-in way to do it. I made a "new" function by copying from the source.
library(dplyr)  # for %>%, mutate() and the := operator used below

pvalue_format <- function(x) {
  z <- cut(x, breaks = c(-Inf, 0.001, 0.01, 0.05, 0.1, Inf),
           labels = c("***", "**", "*", ".", ""))
  as.character(z)
}

as_flextable_newcol <- function(x, new_cols = NULL) {
  data_t <- broom::tidy(x)
  data_g <- broom::glance(x)
  ## this is my addition
  if (!is.null(new_cols) && is.list(new_cols)) {
    for (i in names(new_cols)) {
      data_t <- data_t %>%
        mutate("{i}" := new_cols[[i]](term, estimate, std.error, p.value))
    }
  }
  ## end of my addition
  ## note: new columns must be listed in col_keys, or flextable will not display them
  ft <- flextable(data_t, col_keys = c("term", "estimate", "std.error",
                                       "statistic", "p.value", "signif",
                                       names(new_cols)))
  ft <- colformat_double(ft, j = c("estimate", "std.error", "statistic"), digits = 3)
  ft <- colformat_double(ft, j = c("p.value"), digits = 4)
  ft <- compose(ft, j = "signif", value = as_paragraph(pvalue_format(p.value)))
  ft <- set_header_labels(ft, term = "", estimate = "Estimate",
                          std.error = "Standard Error", statistic = "t value",
                          p.value = "Pr(>|t|)", signif = "")
  dimpretty <- dim_pretty(ft, part = "all")
  ft <- add_footer_lines(ft, values = c(
    "Signif. codes: 0 <= '***' < 0.001 < '**' < 0.01 < '*' < 0.05 < '.' < 0.1 < '' < 1",
    "",
    sprintf("Residual standard error: %s on %.0f degrees of freedom",
            formatC(data_g$sigma), data_g$df.residual),
    sprintf("Multiple R-squared: %s, Adjusted R-squared: %s",
            formatC(data_g$r.squared), formatC(data_g$adj.r.squared)),
    sprintf("F-statistic: %s on %.0f and %.0f DF, p-value: %.4f",
            formatC(data_g$statistic), data_g$df.residual, data_g$df, data_g$p.value)
  ))
  ft <- align(ft, i = 1, align = "right", part = "footer")
  ft <- italic(ft, i = 1, italic = TRUE, part = "footer")
  ft <- hrule(ft, rule = "auto")
  ft <- autofit(ft, part = c("header", "body"))
  ft
}
The new_cols parameter to this function needs to be a named list of functions, where the name of each list element becomes the new column name. The functions inside the list take term, estimate, std.error, and p.value as input, as those are the column names of the data_t tibble.
For example:
new_cols <- list(
  perc_change = function(term, estimate, std.error, p.value) {
    ifelse(term == "(Intercept)", "", paste0(round(100 * (exp(estimate) - 1), 0), "%"))
  }
)
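Putting it together with the model from the question (a usage sketch, with new_cols as defined above):
as_flextable_newcol(model, new_cols = new_cols)
This should render the table with an extra perc_change column showing exp(estimate) - 1 as a percentage.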

Automatically solve an equation of `pt` for `ncp`

I wonder if it is possible to efficiently change ncp in the code below such that x becomes .025 and .975 (within rounding error).
x <- pt(q = 5, df = 19, ncp = ?)
----------
Clarification
q = 5 and df = 19 (above) are just two hypothetical numbers, so q and df could be any other two numbers. What I expect is a function / routine that takes q and df as input.
What is wrong with uniroot?
f <- function(ncp, alpha) pt(q = 5, df = 19, ncp = ncp) - alpha
par(mfrow = c(1, 2))
curve(f(ncp, 0.025), from = 5, to = 10, xname = "ncp", main = "0.025")
abline(h = 0)
curve(f(ncp, 0.975), from = 0, to = 5, xname = "ncp", main = "0.975")
abline(h = 0)
So for the 0.025 case, the root lies in (7, 8); for the 0.975 case, the root lies in (2, 3).
uniroot(f, c(7, 8), alpha = 0.025)$root
#[1] 7.476482
uniroot(f, c(2, 3), alpha = 0.975)$root
#[1] 2.443316
---------
(After some discussion...)
OK, now I see your ultimate goal: you want to implement this equation solver as a function, with inputs q and df. They are unknown but fixed; they might come out of an experiment.
Ideally, if there were an analytical solution, i.e., if ncp could be written as a formula in terms of q, df and alpha, that would be great. However, this is not possible for the t-distribution.
A numerical solution is the way to go, but uniroot is not a great option for this purpose, as it relies on "plot - view - guess - specify". The answer by loki is also crude, though an improvement: it is a grid search with a fixed step size. Start from a value near 0, say 0.001, increase it, and check the approximation error; stop when this error fails to decrease.
This naturally leads to numerical optimization with Newton or quasi-Newton methods. In the 1-D case, we can use the function optimize. It uses a variable step size in its search, so it converges faster than a fixed step-size search.
Let's define our function as:
ncp_solver <- function(alpha, q, df) {
  ## objective function: we minimize the squared approximation error
  obj_fun <- function(ncp, alpha = alpha, q = q, df = df) {
    (pt(q = q, df = df, ncp = ncp) - alpha)^2
  }
  ## now we call `optimize`
  oo <- optimize(obj_fun, interval = c(-37.62, 37.62), alpha = alpha, q = q, df = df)
  ## post processing
  oo <- unlist(oo, use.names = FALSE)  ## list to numeric vector
  oo[2] <- sqrt(oo[2])                 ## squared error to absolute error
  ## return
  setNames(oo, c("ncp", "abs.error"))
}
Note, -37.62 / 37.62 is chosen as the lower / upper bound for ncp, as that is the maximum supported by the t-distribution in R (see ?dt).
For example, let's try this function with the q = 5 and df = 19 given in your question:
ncp_solver(alpha = 0.025, q = 5, df = 19)
# ncp abs.error
#7.476472e+00 1.251142e-07
The result is a named vector, with ncp and absolute approximation error.
Similarly we can do:
ncp_solver(alpha = 0.975, q = 5, df = 19)
# ncp abs.error
#2.443347e+00 7.221928e-07
----------
Follow-up
Is it possible that, in the function ncp_solver(), alpha takes c(.025, .975) together?
Why not wrap it up for a "vectorization":
sapply(c(0.025, 0.975), ncp_solver, q = 5, df = 19)
# [,1] [,2]
#ncp 7.476472e+00 2.443347e+00
#abs.error 1.251142e-07 7.221928e-07
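Alternatively, base R's Vectorize gives you a version that accepts a vector of alpha values directly (a one-liner on top of the function above; ncp_solver_vec is a hypothetical name):
ncp_solver_vec <- Vectorize(ncp_solver, "alpha")
ncp_solver_vec(c(0.025, 0.975), q = 5, df = 19)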
How come 0.025 gives the upper bound of the confidence interval, while 0.975 gives the lower bound? Should this relationship be reversed?
No surprise. By default, pt computes the lower-tail probability. If you want the "right" relationship, set lower.tail = FALSE in pt:
ncp_solver <- function(alpha, q, df) {
  ## objective function: we minimize the squared approximation error
  obj_fun <- function(ncp, alpha = alpha, q = q, df = df) {
    (pt(q = q, df = df, ncp = ncp, lower.tail = FALSE) - alpha)^2
  }
  ## now we call `optimize`
  oo <- optimize(obj_fun, interval = c(-37.62, 37.62), alpha = alpha, q = q, df = df)
  ## post processing
  oo <- unlist(oo, use.names = FALSE)  ## list to numeric vector
  oo[2] <- sqrt(oo[2])                 ## squared error to absolute error
  ## return
  setNames(oo, c("ncp", "abs.error"))
}
Now you see:
ncp_solver(0.025, 5, 19)[[1]] ## use "[[" not "[" to drop name
#[1] 2.443316
ncp_solver(0.975, 5, 19)[[1]]
#[1] 7.476492
--------
Bug report and fix
It was reported to me that the ncp_solver above is unstable. For example:
ncp_solver(alpha = 0.025, q = 0, df = 98)
# ncp abs.error
#-8.880922 0.025000
But on the other hand, if we double check with uniroot here:
f <- function (ncp, alpha) pt(q = 0, df = 98, ncp = ncp, lower.tail = FALSE) - alpha
curve(f(ncp, 0.025), from = -3, to = 0, xname = "ncp"); abline(h = 0)
uniroot(f, c(-2, -1.5), 0.025)$root
#[1] -1.959961
So there is clearly something wrong with ncp_solver.
It turns out that we cannot use bounds as wide as c(-37.62, 37.62). If we narrow them to c(-35, 35), everything is alright.
Also, to avoid tolerance problems, we can change the objective function from squared error to absolute error:
ncp_solver <- function(alpha, q, df) {
  ## objective function: we minimize the absolute approximation error
  obj_fun <- function(ncp, alpha = alpha, q = q, df = df) {
    abs(pt(q = q, df = df, ncp = ncp, lower.tail = FALSE) - alpha)
  }
  ## now we call `optimize`
  oo <- optimize(obj_fun, interval = c(-35, 35), alpha = alpha, q = q, df = df)
  ## post processing and return
  oo <- unlist(oo, use.names = FALSE)  ## list to numeric vector
  setNames(oo, c("ncp", "abs.error"))
}
ncp_solver(alpha = 0.025, q = 0, df = 98)
# ncp abs.error
#-1.959980e+00 9.190327e-07
Damn, this is a pretty annoying bug. But relax now.
Report on getting warning messages from pt
I have also received reports of annoying warning messages from pt:
ncp_solver(0.025, -5, 19)
# ncp abs.error
#-7.476488e+00 5.760562e-07
#Warning message:
#In pt(q = q, df = df, ncp = ncp, lower.tail = FALSE) :
# full precision may not have been achieved in 'pnt{final}'
I am not too sure what is going on here, but meanwhile I have not observed misleading results. Therefore, I decided to suppress those warnings from pt using suppressWarnings:
ncp_solver <- function(alpha, q, df) {
  ## objective function: we minimize the absolute approximation error
  obj_fun <- function(ncp, alpha = alpha, q = q, df = df) {
    abs(suppressWarnings(pt(q = q, df = df, ncp = ncp, lower.tail = FALSE)) - alpha)
  }
  ## now we call `optimize`
  oo <- optimize(obj_fun, interval = c(-35, 35), alpha = alpha, q = q, df = df)
  ## post processing and return
  oo <- unlist(oo, use.names = FALSE)  ## list to numeric vector
  setNames(oo, c("ncp", "abs.error"))
}
ncp_solver(0.025, -5, 19)
# ncp abs.error
#-7.476488e+00 5.760562e-07
OK, quiet now.
You could use two while loops like this:
i <- 0.001
lowerFound <- FALSE
while (!lowerFound) {
  x <- pt(q = 5, df = 19, ncp = i)
  if (round(x, 3) == 0.025) {
    lowerFound <- TRUE
    print(paste("Lower is", i))
    lower <- i
  } else {
    i <- i + 0.0005
  }
}

i <- 0.001
upperFound <- FALSE
while (!upperFound) {
  x <- pt(q = 5, df = 19, ncp = i)
  if (round(x, 3) == 0.975) {
    upperFound <- TRUE
    print(paste("Upper is", i))
    upper <- i
  } else {
    i <- i + 0.0005
  }
}

c(Lower = lower, Upper = upper)
# Lower Upper
# 7.4655 2.4330
Of course, you can adapt the increment in i <- i + ... or change the check if (round(x, ...) == ...) to fit your specific accuracy needs.
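An equivalent one-shot version of the same fixed-step idea uses a vectorized grid instead of loops (a sketch; accuracy is still limited by the step size, and the Lower/Upper labels mirror the loop output above):
grid <- seq(0.001, 10, by = 5e-4)
p <- pt(q = 5, df = 19, ncp = grid)
c(Lower = grid[which.min(abs(p - 0.025))],
  Upper = grid[which.min(abs(p - 0.975))])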
I know this is an old question, but there is now a one-line solution to this problem using the conf.limits.nct() function in the MBESS package.
install.packages("MBESS")
library(MBESS)
result <- conf.limits.nct(t.value = 5, df = 19)
result
$Lower.Limit
[1] 2.443332
$Prob.Less.Lower
[1] 0.025
$Upper.Limit
[1] 7.476475
$Prob.Greater.Upper
[1] 0.025
$Lower.Limit is the result where pt = 0.975
$Upper.Limit is the result where pt = 0.025
pt(q = 5, df = 19, ncp = result$Lower.Limit)
[1] 0.975
pt(q = 5, df = 19, ncp = result$Upper.Limit)
[1] 0.025
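If you need something other than the default 95% limits, conf.limits.nct also takes a conf.level argument (per the MBESS documentation):
conf.limits.nct(t.value = 5, df = 19, conf.level = 0.90)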

R - Error using summary() from speedglm package

I'm using speedglm to estimate a logistic regression model on some data. I've created a reproducible example which generates the same error that I get using my original data.
library(speedglm)
n <- 10000
dtf <- data.frame(y = sample(c(0, 1), n, replace = TRUE),
                  x1 = as.factor(sample(c("a", "b"), n, replace = TRUE)),
                  x2 = rnorm(n, 30, 10))
m <- speedglm(y ~ x1 + x2, dtf, family=binomial())
summary(m)
The output is the following:
Generalized Linear Model of class 'speedglm':
Call: speedglm(formula = y ~ x1 + x2, data = dtf, family = binomial())
Coefficients:
------------------------------------------------------------------
Error in data.frame(..., check.names = FALSE) :
arguments imply differing number of rows: 3, 0
I've checked the source code of summary.speedglm by executing getS3method("summary", "speedglm") and found the line that generates the error, but that didn't help me solve the problem.
PS: someone with 1500+ rep should create the speedglm tag.
UPDATE
Marco Enea, the maintainer of speedglm, asked me to post the following temporary fix for summary.speedglm and print.summary.speedglm.
summary.speedglm <- function(object, correlation = FALSE, ...) {
  if (!inherits(object, "speedglm"))
    stop("object is not of class speedglm")
  z <- object
  var_res <- as.numeric(z$RSS / z$df)
  dispersion <- if (z$family$family %in% c("poisson", "binomial")) 1 else var_res
  if (z$method == "qr") {
    z$XTX <- z$XTX[z$ok, z$ok]
  }
  inv <- solve(z$XTX, tol = z$tol.solve)
  covmat <- diag(inv)
  se_coef <- rep(NA, length(z$coefficients))
  se_coef[z$ok] <- sqrt(dispersion * covmat)
  if (z$family$family %in% c("binomial", "poisson")) {
    z1 <- z$coefficients / se_coef
    p <- 2 * pnorm(abs(z1), lower.tail = FALSE)
  } else {
    t1 <- z$coefficients / se_coef
    p <- 2 * pt(abs(t1), df = z$df, lower.tail = FALSE)
  }
  ip <- !is.na(p)
  p[ip] <- as.numeric(format(p[ip], digits = 3))
  dn <- c("Estimate", "Std. Error")
  if (z$family$family %in% c("binomial", "poisson")) {
    format.coef <- if (any(na.omit(abs(z$coef)) < 1e-04))
      format(z$coefficients, scientific = TRUE, digits = 4) else
      round(z$coefficients, digits = 7)
    format.se <- if (any(na.omit(se_coef) < 1e-04))
      format(se_coef, scientific = TRUE, digits = 4) else
      round(se_coef, digits = 7)
    format.pv <- if (any(na.omit(p) < 1e-04))
      format(p, scientific = TRUE, digits = 4) else
      round(p, digits = 4)
    param <- data.frame(format.coef, format.se, round(z1, digits = 4), format.pv)
    dimnames(param) <- list(names(z$coefficients), c(dn, "z value", "Pr(>|z|)"))
  } else {
    format.coef <- if (any(abs(na.omit(z$coefficients)) < 1e-04))
      format(z$coefficients, scientific = TRUE, digits = 4) else
      round(z$coefficients, digits = 7)
    format.se <- if (any(na.omit(se_coef) < 1e-04))
      format(se_coef, scientific = TRUE, digits = 4) else
      round(se_coef, digits = 7)
    format.pv <- if (any(na.omit(p) < 1e-04))
      format(p, scientific = TRUE, digits = 4) else
      round(p, digits = 4)
    param <- data.frame(format.coef, format.se, round(t1, digits = 4), format.pv)
    dimnames(param) <- list(names(z$coefficients), c(dn, "t value", "Pr(>|t|)"))
  }
  eps <- 10 * .Machine$double.eps
  if (z$family$family == "binomial") {
    if (any(z$mu > 1 - eps) || any(z$mu < eps))
      warning("fitted probabilities numerically 0 or 1 occurred")
  }
  if (z$family$family == "poisson") {
    if (any(z$mu < eps))
      warning("fitted rates numerically 0 occurred")
  }
  keep <- match(c("call", "terms", "family", "deviance", "aic",
                  "df", "nulldev", "nulldf", "iter", "tol", "n", "convergence",
                  "ngoodobs", "logLik", "RSS", "rank"), names(object), 0)
  ans <- c(object[keep],
           list(coefficients = param, dispersion = dispersion,
                correlation = correlation, cov.unscaled = inv,
                cov.scaled = inv * var_res))
  if (correlation) {
    ans$correl <- (inv * var_res) / outer(na.omit(se_coef), na.omit(se_coef))
  }
  class(ans) <- "summary.speedglm"
  return(ans)
}
print.summary.speedglm <- function(x, digits = max(3, getOption("digits") - 3), ...) {
  cat("Generalized Linear Model of class 'speedglm':\n")
  if (!is.null(x$call))
    cat("\nCall: ", deparse(x$call), "\n\n")
  if (length(x$coef)) {
    cat("Coefficients:\n")
    cat(" ------------------------------------------------------------------", "\n")
    sig <- function(z) {
      if (!is.na(z)) {
        if (z < 0.001) "***"
        else if (z < 0.01) "** "
        else if (z < 0.05) "* "
        else if (z < 0.1) ". "
        else " "
      } else " "
    }
    options(warn = -1)
    sig.1 <- sapply(as.numeric(as.character(x$coefficients[, 4])), sig)
    options(warn = 0)
    est.1 <- cbind(format(x$coefficients, digits = digits), sig.1)
    colnames(est.1)[ncol(est.1)] <- ""
    print(est.1)
    cat("\n")
    cat("-------------------------------------------------------------------", "\n")
    cat("Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1", "\n")
    cat("\n")
  } else cat("No coefficients\n")
  cat("---\n")
  cat("null df: ", x$nulldf, "; null deviance: ", round(x$nulldev, digits = 2),
      ";\n", "residuals df: ", x$df, "; residuals deviance: ",
      round(x$deviance, digits = 2), ";\n", "# obs.: ", x$n,
      "; # non-zero weighted obs.: ", x$ngoodobs, ";\n", "AIC: ", x$aic,
      "; log Likelihood: ", x$logLik, ";\n", "RSS: ", round(x$RSS, digits = 1),
      "; dispersion: ", x$dispersion, "; iterations: ", x$iter, ";\n",
      "rank: ", round(x$rank, digits = 1), "; max tolerance: ",
      format(x$tol, scientific = TRUE, digits = 3), "; convergence: ",
      x$convergence, ".\n", sep = "")
  invisible(x)
  if (x$correlation) {
    cat("---\n")
    cat("Correlation of Coefficients:\n")
    x$correl[upper.tri(x$correl, diag = TRUE)] <- NA
    print(x$correl[-1, -nrow(x$correl)], na.print = "", digits = 2)
  }
}
Following 42-'s suggestion, I would also add the following:
environment(summary.speedglm) <- environment(speedglm)
environment(print.summary.speedglm) <- environment(speedglm)
The print.summary.speedglm function has a tiny bug in it. If you change this line:
sig.1 <- cbind(sapply(as.numeric(as.character(x$coefficients$"Pr(>|t|)")), sig))
To this line:
sig.1 <- cbind(sapply(as.numeric(as.character(x$coefficients$"Pr(>|z|)")), sig))
And also run:
environment(print.summary.speedglm) <- environment(speedglm)
You will not see the error message anymore.
The proper way to report bugs is to contact the maintainer (I'll send him an email):
maintainer('speedglm')
[1] "Marco Enea <emarco76#libero.it>"
It appears that this is a bug; in speedglm:::print.summary.speedglm there is the line:
sig.1 <- sapply(as.numeric(as.character(x$coefficients$"Pr(>|t|)")), sig)
but when you look at the object, you can see:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.0546397 0.0655713 -0.8333 0.405
x1b -0.0618225 0.0400126 -1.5451 0.122
x2 0.0020771 0.0019815 1.0483 0.295
which has a Pr(>|z|) column instead of Pr(>|t|), so the significance stars fail.
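A more defensive one-line patch, in the spirit of the maintainer's fix above, is to index the p-value column by position rather than by name, so it works for both the z and t cases (a sketch of the idea, not an official fix):
sig.1 <- sapply(as.numeric(as.character(x$coefficients[[4]])), sig)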
