R flexsurv and time-dependent covariates

I read that the R flexsurv package can also be used to model time-dependent covariates, according to Christopher Jackson (2016), "flexsurv: A Platform for Parametric Survival Modeling in R", Journal of Statistical Software, 70(8).
However, I was not able to figure out how, even after several adjustments and searches in online forums.
Before turning to the estimation of time-dependent covariates, I tried to create a simple model with only time-independent covariates to test whether I had specified the Surv object correctly. Here is a small example.
library(splitstackshape)
library(flexsurv)
## create sample data
n <- 50
set.seed(2)
t <- rpois(n, 15) + 1   # survival times
x <- rnorm(n, t, 5)     # covariate correlated with t
df <- data.frame(t, x)
df$id <- 1:n
df$rep <- df$t - 1      # number of unit intervals per subject
Which looks like this:
t x id rep
1 12 17.696149 1 11
2 12 20.358094 2 11
3 11 2.058789 3 10
4 16 26.156213 4 15
5 13 9.484278 5 12
6 15 15.790824 6 14
...
And the long data:
long.df <- expandRows(df, "rep")   # one row per unit interval per subject
## start/stop bounds for each unit interval
rep.vec <- c()
for (i in 1:n) {
  rep.vec <- c(rep.vec, 1:(df[i, "t"] - 1))
}
long.df$start <- rep.vec
long.df$stop  <- rep.vec + 1
## event indicator: 1 in the interval ending at the survival time
long.df$censrec <- ifelse(long.df$stop == long.df$t, 1, 0)
Which looks like this:
t x id start stop censrec
1 12 17.69615 1 1 2 0
1.1 12 17.69615 1 2 3 0
1.2 12 17.69615 1 3 4 0
1.3 12 17.69615 1 4 5 0
1.4 12 17.69615 1 5 6 0
1.5 12 17.69615 1 6 7 0
1.6 12 17.69615 1 7 8 0
1.7 12 17.69615 1 8 9 0
1.8 12 17.69615 1 9 10 0
1.9 12 17.69615 1 10 11 0
1.10 12 17.69615 1 11 12 1
2 12 20.35809 2 1 2 0
...
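As an aside, survival::survSplit() (survival is attached by flexsurv) can build an equivalent counting-process layout without the hand-rolled loop. A sketch under the same unit-interval assumption; note it also keeps the (0,1] interval, which the loop above starts after:
## alternative long format via survSplit
df$censrec <- 1   # every subject has the event at time t
long.df2 <- survSplit(Surv(t, censrec) ~ x + id, data = df,
                      cut = 1:max(df$t), start = "start")
head(long.df2)    # columns x, id, start, t (interval end), censrec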
Now I can estimate a simple Cox model to see whether it works (with no status variable, Surv(t) treats every observation as an event):
coxph(Surv(t)~x,data=df)
This yields:
coef exp(coef) se(coef) z p
x -0.0588 0.9429 0.0260 -2.26 0.024
And in the long format:
coxph(Surv(start,stop,censrec)~x,data=long.df)
I get:
coef exp(coef) se(coef) z p
x -0.0588 0.9429 0.0260 -2.26 0.024
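For what it's worth, this agreement is expected: splitting follow-up into adjacent intervals leaves the Cox risk sets, and hence the partial likelihood, unchanged. A quick check (a sketch; the two fits should agree up to numerical tolerance):
all.equal(coef(coxph(Surv(t) ~ x, data = df)),
          coef(coxph(Surv(start, stop, censrec) ~ x, data = long.df)))
## [1] TRUE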
Taken together, I conclude that my transformation into the long format was correct. Now, turning to the flexsurv framework:
flexsurvreg(Surv(time=t)~x,data=df, dist="weibull")
yields:
Estimates:
data mean est L95% U95% se exp(est) L95% U95%
shape NA 5.00086 4.05569 6.16631 0.53452 NA NA NA
scale NA 13.17215 11.27876 15.38338 1.04293 NA NA NA
x 15.13380 0.01522 0.00567 0.02477 0.00487 1.01534 1.00569 1.02508
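As a side note on interpretation: flexsurvreg with dist="weibull" fits an accelerated failure time model, with the covariate acting on the log scale parameter, so the positive coefficient for x is consistent in direction with the negative Cox log-hazard ratio. A rough, hedged consistency check (the models differ, so the numbers need not match exactly):
## for a Weibull AFT model, a covariate effect b on log(scale)
## corresponds to a log-hazard ratio of -shape * b
-5.00086 * 0.01522
## [1] -0.0761  # same sign and similar order as the Cox estimate of -0.0588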
However,
flexsurvreg(Surv(start,stop,censrec) ~ x ,data=long.df, dist="weibull")
causes an error:
Error in flexsurvreg(Surv(start, stop, censrec) ~ x, data = long.df, dist = "weibull") :
Initial value for parameter 1 out of range
Would anyone happen to know the correct syntax for the latter Surv object? If you use the correct syntax, do you get the same estimates?
Thank you very much,
best,
David
===============
EDIT AFTER FEEDBACK FROM 42
===============
library(splitstackshape)
library(flexsurv)
x<-c(8.136527, 7.626712, 9.809122, 12.125973, 12.031536, 11.238394, 4.208863, 8.809854, 9.723636)
t<-c(2, 3, 13, 5, 7, 37 ,37, 9, 4)
df <- data.frame(t,x)
#transform into long format for time-dependent covariates
df$id <- 1:length(df$t)
df$rep <- df$t-1
long.df <- expandRows(df, "rep")
rep.vec <- c()
for (i in 1:length(df$t)) {
  rep.vec <- c(rep.vec, 1:(df[i, "t"] - 1))
}
long.df$start <- rep.vec
long.df$stop  <- rep.vec + 1
long.df$censrec <- ifelse(long.df$stop == long.df$t, 1, 0)
coxph(Surv(t)~x,data=df)
coxph(Surv(start,stop,censrec)~x,data=long.df)
flexsurvreg(Surv(time=t)~x,data=df, dist="weibull")
flexsurvreg(Surv(start,stop,censrec) ~ x ,data=long.df, dist="weibull",inits=c(shape=.1, scale=1))
This yields the same estimates for both coxph models, but
Call:
flexsurvreg(formula = Surv(time = t) ~ x, data = df, dist = "weibull")
Estimates:
data mean est L95% U95% se exp(est) L95% U95%
shape NA 1.0783 0.6608 1.7594 0.2694 NA NA NA
scale NA 27.7731 3.5548 216.9901 29.1309 NA NA NA
x 9.3012 -0.0813 -0.2922 0.1295 0.1076 0.9219 0.7466 1.1383
N = 9, Events: 9, Censored: 0
Total time at risk: 117
Log-likelihood = -31.77307, df = 3
AIC = 69.54614
and
Call:
flexsurvreg(formula = Surv(start, stop, censrec) ~ x, data = long.df,
dist = "weibull", inits = c(shape = 0.1, scale = 1))
Estimates:
data mean est L95% U95% se exp(est) L95% U95%
shape NA 0.8660 0.4054 1.8498 0.3353 NA NA NA
scale NA 24.0596 1.7628 328.3853 32.0840 NA NA NA
x 8.4958 -0.0912 -0.3563 0.1739 0.1353 0.9128 0.7003 1.1899
N = 108, Events: 9, Censored: 99
Total time at risk: 108
Log-likelihood = -30.97986, df = 3
AIC = 67.95973

Reading the error message:
Error in flexsurvreg(Surv(start, stop, censrec) ~ x, data = long.df, dist = "weibull", :
initial values must be a numeric vector
And then reading the help page, ?flexsurvreg, it seemed worth trying to set inits to a named numeric vector:
flexsurvreg(Surv(start,stop,censrec) ~ x ,data=long.df, dist="weibull", inits=c(shape=.1, scale=1))
Call:
flexsurvreg(formula = Surv(start, stop, censrec) ~ x, data = long.df,
dist = "weibull", inits = c(shape = 0.1, scale = 1))
Estimates:
data mean est L95% U95% se exp(est) L95% U95%
shape NA 5.00082 4.05560 6.16633 0.53454 NA NA NA
scale NA 13.17213 11.27871 15.38341 1.04294 NA NA NA
x 15.66145 0.01522 0.00567 0.02477 0.00487 1.01534 1.00569 1.02508
N = 715, Events: 50, Censored: 665
Total time at risk: 715
Log-likelihood = -131.5721, df = 3
AIC = 269.1443
Extremely similar results. My guess was basically a stab in the dark, so if it had not succeeded I would have had no guidance on how to choose, other than to "expand the search."
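If a guess like this had not worked, one less arbitrary option (a sketch, not a documented flexsurv recipe) is to seed flexsurvreg with estimates from survival::survreg, converting between the two parameterisations:
## survreg's Weibull fit gives shape = 1/sr$scale and
## scale = exp(intercept) at covariate value 0
sr <- survreg(Surv(t) ~ x, data = df, dist = "weibull")
inits <- c(shape = 1 / sr$scale, scale = exp(coef(sr)[["(Intercept)"]]))
flexsurvreg(Surv(start, stop, censrec) ~ x, data = long.df,
            dist = "weibull", inits = inits)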

I just want to mention that in flexsurv v1.1.1, running this code:
flexsurvreg(Surv(start,stop,censrec) ~ x ,data=long.df, dist="weibull")
doesn't return any errors, and it gives the same estimates as the non-time-varying call
flexsurvreg(Surv(time=t)~x,data=df, dist="weibull")
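If you are unsure which behaviour your installation has, checking the version is cheap:
packageVersion("flexsurv")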

Related

Inflection point for binomial mixed GLM model

I'd like to explore some possibilities and comparison approaches for inflection point calculation for a binomial mixed GLM model. I found the inflection package, which uses the Extremum Surface Estimator (ESE) and the Extremum Distance Estimator (EDE). My attempt:
library(inflection)
library(dplyr)
library(glmmTMB)
library(DHARMa)
library(ggplot2)
library(ggeffects)
# My binomial data set
binom.ds <- read.csv("https://raw.githubusercontent.com/Leprechault/trash/main/mort_binon.csv")
str(binom.ds)
# 'data.frame': 400 obs. of 4 variables:
# $ temp : num 0 0 0 0 0 0 0 0 0 0 ...
# $ days : int 5 5 5 5 5 5 5 5 5 5 ...
# $ rep : chr "r1" "r2" "r3" "r4" ...
# $ mortality: int 0 1 1 1 1 1 1 1 0 1 ...
# Fit a binomial mixed GLM model
m_F <- glmmTMB(mortality ~ temp + days +
(1 | days ), data = binom.ds,
family = "binomial")
# Check the fitted model using DHARMa
plot(s1 <- simulateResiduals(m_F))
# All looks OK
# Find an inflection point
# for temp
# note: predict() for a binomial glmmTMB is on the logit (link) scale, so
# exp(predict(m_F)) gives odds rather than probabilities; predict(m_F,
# type = "response") may be what is intended here
ds_F <- as.data.frame(cbind(x = binom.ds$temp, y = exp(predict(m_F))))
bb <- bede(ds_F$x, ds_F$y, 0); bb
bb$iplast
# [1] 12.5
# $iters
# n a b EDE
# 1 400 0 25 12.5
# Visualize the inflection point for temp
ggpredict(m_F, terms = "temp [all]") %>% plot(add.data = TRUE) + geom_vline(xintercept = bb$iplast, colour="red", linetype = "longdash")
# for days (same caveat as above about exp() of link-scale predictions)
ds_F <- as.data.frame(cbind(x = binom.ds$days, y = exp(predict(m_F))))
bb2 <- bede(ds_F$x, ds_F$y, 0); bb2
bb2$iplast
# [1] 22.5
# $iters
# n a b EDE
# 1 400 5 30 17.5
# 2 221 5 30 17.5
# 3 181 15 5 10.0
# 4 61 15 30 22.5
# Visualize the inflection point for days
ggpredict(m_F, terms = "days [all]") %>% plot(add.data = TRUE) + geom_vline(xintercept = bb2$iplast, colour="red", linetype = "longdash")
My question: are there other approaches or packages for this calculation?
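One analytic cross-check worth a mention (a sketch, assuming the fitted curve is logistic in one predictor with the other held at its mean): on the probability scale, a logistic curve has its inflection at p = 0.5, i.e. where the linear predictor is zero, so the point can be solved from the fixed effects directly. The coefficient names below are assumed from this particular glmmTMB fit:
## analytic inflection point for temp, holding days at its mean
fe <- fixef(m_F)$cond   # fixed effects (conditional model)
-(fe[["(Intercept)"]] + fe[["days"]] * mean(binom.ds$days)) / fe[["temp"]]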

ANOVA factor warning and no p value

I am doing a one-way ANOVA for a dataset; some rows are shown here:
Number Call Weight
1 X 33.29
2 Y 88.22
3 Y 70.19
4 Y 69.25
5 X 73.26
6 X 56.18
7 Y 16.19
8 Y 20.21
9 Y 50.26
10 X 95.29
I did the ANOVA using:
aov <- aov(data$Weight ~ data$Call)
But it does not give any p value. I am also getting:
Warning messages:
1: In model.response(mf, "numeric") :
using type = "numeric" with a factor response will be ignored
2: In Ops.factor(y, z$residuals) : ‘-’ not meaningful for factors
I have tried your code on these data and it works without issue. Check str() of your data. Most probably the issue is that Weight is a factor in your case, and you need to convert it to numeric with as.numeric(as.character(Weight)); a bare as.numeric() on a factor returns the level codes, not the values.
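A quick illustration of that pitfall, with a hypothetical vector:
w <- factor(c("33.29", "88.22", "70.19"))
as.numeric(w)                 # wrong: returns the level codes 1 3 2
as.numeric(as.character(w))   # right: 33.29 88.22 70.19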
dta <- read.table(text=
"Number, Call, Weight
1, X, 33.29
2, Y, 88.22
3, Y, 70.19
4, Y, 69.25
5, X, 73.26
6, X, 56.18
7, Y, 16.19
8, Y, 20.21
9, Y, 50.26
10, X, 95.29", header=T, sep=",")
summary(aov(dta$Weight ~ dta$Call))
Result
Call:
aov(formula = dta$Weight ~ dta$Call)
Terms:
dta$Call Residuals
Sum of Squares 352.450 6303.466
Deg. of Freedom 1 8
Residual standard error: 28.07015
Estimated effects may be unbalanced
Result of str(dta):
'data.frame': 10 obs. of 3 variables:
$ Number: int 1 2 3 4 5 6 7 8 9 10
$ Call : Factor w/ 2 levels " X"," Y": 1 2 2 2 1 1 2 2 2 1
$ Weight: num 33.3 88.2 70.2 69.2 73.3 ...

Wrong degrees of freedom in lsmeans and SE calculation in R

I have this sample data:
Sample Replication Days
1 1 10
1 1 14
1 1 13
1 1 14
2 1 NA
2 1 5
2 1 18
2 1 20
1 2 16
1 2 NA
1 2 18
1 2 21
2 2 15
2 2 7
2 2 12
2 2 14
I have four observations for each sample, with a total of 64 samples in each of the two replications, i.e. 512 values overall. I also have some missing values, recorded as NA. I performed an ANOVA on the mean value of each Sample within each Rep, generated using
library(tidyverse)
df <- Data %>% group_by(Sample, Rep) %>% summarise(Mean = mean(Days, na.rm = TRUE))
curve.anova <- aov(Mean~Rep+Sample, data=df)
Result of anova is:
> summary(curve.anova)
Df Sum Sq Mean Sq F value Pr(>F)
Rep 1 6.1 6.071 2.951 0.0915 .
Sample 63 1760.5 27.945 13.585 <2e-16 ***
Residuals 54 111.1 2.057
I created a table of mean and SE values:
library(lsmeans)
ANOVA <- lsmeans(curve.anova, ~Sample)
ANOVA<-summary(ANOVA)
write.csv(ANOVA, file="Desktop/ANOVA.csv")
A few lines from the file:
Sample lsmean SE df lower.CL upper.CL
1 24.875 1.014145417 54 22.84176086 26.90823914
2 25.5 1.014145417 54 23.46676086 27.53323914
3 31.32575758 1.440722628 54 28.43728262 34.21423253
4 26.375 1.014145417 54 24.34176086 28.40823914
5 26.42424242 1.440722628 54 23.53576747 29.31271738
6 25.5 1.014145417 54 23.46676086 27.53323914
7 28.375 1.014145417 54 26.34176086 30.40823914
8 24.875 1.014145417 54 22.84176086 26.90823914
9 21.16666667 1.014145417 54 19.13342752 23.19990581
10 23.875 1.014145417 54 21.84176086 25.90823914
The df for all 64 samples is 54, and the error bars in the ggplot are mostly equal across Samples. The SE values are larger than my manually calculated values. Based on the ANOVA results, df = 54 corresponds to the residuals.
I want to double-check that the ANOVA results are correct and that I am generating the lsmeans and SEs correctly, so I can plot a bar graph with confidence-interval error bars in ggplot.
I will appreciate any help. Thank you!
After reading your comments, I think your workflow has an issue: when you apply the ANOVA, you are applying it to the means of the different samples.
So, in your example, when you are doing :
curve.anova <- aov(Mean~Rep+Sample, data=df)
You are comparing these values:
> df
# A tibble: 4 x 3
# Groups: Sample [2]
Sample Replication Mean
<dbl> <dbl> <dbl>
1 1 1 12.8
2 1 2 18.3
3 2 1 14.3
4 2 2 12
So, basically, you are comparing two groups with two values per group.
So, when you tried to remove the Replication group, you got an error, because the output of:
df <- Data %>% group_by(Sample) %>% summarise(Mean = mean(Days, na.rm = TRUE))
is now:
# A tibble: 2 x 2
Sample Mean
<dbl> <dbl>
1 1 15.1
2 2 13
So, applying an ANOVA to that dataset means you are comparing two groups with one value each, and residuals and SE cannot be computed.
Instead, you should run it on the full dataset, without computing the means first:
anova_data <- aov(Days~Sample+Replication, data=Data)
anova_data2 <- aov(Days~Sample, data=Data)
And their output are:
> summary(anova_data)
Df Sum Sq Mean Sq F value Pr(>F)
Sample 1 16.07 16.071 0.713 0.416
Replication 1 9.05 9.054 0.402 0.539
Residuals 11 247.80 22.528
2 observations deleted due to missingness
> summary(anova_data2)
Df Sum Sq Mean Sq F value Pr(>F)
Sample 1 16.07 16.07 0.751 0.403
Residuals 12 256.86 21.41
2 observations deleted due to missingness
Now, you can apply lsmeans:
A_d = summary(lsmeans(anova_data, ~Sample))
A_d2 = summary(lsmeans(anova_data2, ~Sample))
> A_d
Sample lsmean SE df lower.CL upper.CL
1 15.3 1.8 11 11.29 19.2
2 12.9 1.8 11 8.91 16.9
Results are averaged over the levels of: Replication
Confidence level used: 0.95
> A_d2
Sample lsmean SE df lower.CL upper.CL
1 15.1 1.75 12 11.33 19.0
2 13.0 1.75 12 9.19 16.8
Confidence level used: 0.95
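As a quick hand check of those SEs (a sketch): lsmeans bases them on the pooled residual mean square from the ANOVA table, so with 7 non-missing observations per Sample in anova_data2:
## SE of a group lsmean = sqrt(residual MS / group size)
sqrt(21.41 / 7)
## [1] 1.748857  # matches the SE of 1.75 in the A_d2 table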
It does not change the mean and the SE much (which is good: it means your replicates are consistent and there is not too much variability between them), but it does narrow the confidence interval.
So, to plot it, you can:
library(ggplot2)
ggplot(A_d, aes(x=as.factor(Sample), y=lsmean)) +
geom_bar(stat="identity", colour="black") +
geom_errorbar(aes(ymin = lsmean - SE, ymax = lsmean + SE), width = .5)
Based on your initial question, if you want to check that the output of the ANOVA is correct, you can mimic it with fake data like this (rnorm() is unseeded here, so exact numbers will vary between runs):
d2 <- data.frame(Sample = c(rep(1,10), rep(2,10)),
                 Days = c(rnorm(10, mean = 3), rnorm(10, mean = 8)))
Then,
curve.d2 <- aov(Days ~ Sample, data = d2)
ANOVA2 <- lsmeans(curve.d2, ~Sample)
ANOVA2 <- summary(ANOVA2)
And you get the following output:
> summary(curve.d2)
Df Sum Sq Mean Sq F value Pr(>F)
Sample 1 139.32 139.32 167.7 1.47e-10 ***
Residuals 18 14.96 0.83
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> ANOVA2
Sample lsmean SE df lower.CL upper.CL
1 2.62 0.288 18 2.02 3.23
2 7.90 0.288 18 7.29 8.51
Confidence level used: 0.95
And for the plot
ggplot(ANOVA2, aes(x=as.factor(Sample), y=lsmean)) +
geom_bar(stat="identity", colour="black") +
geom_errorbar(aes(ymin = lsmean - SE, ymax = lsmean + SE), width = .5)
As you can see, the lsmeans for d2 come out close to the 3 and 8 we set in the first place, so I think your output is correct. Maybe your data simply do not show significant differences, and the SEs are identical because the spread of the data is the same in each sample. It is what it is.
I hope this answer helps you.
Data
Data <- data.frame(Sample = c(rep(1,4), rep(2,4), rep(1,4), rep(2,4)),
Replication = c(rep(1,8), rep(2,8)),
Days = c(10,14,13,14,NA,5,18,20,16,NA,18,21,15,7,12,14))

Specifying prior distribution on matrix in rstan

I am having trouble getting a Bayesian mixed-effects model to yield stationary, well-mixed chains. I have simulated my own data, so I know what parameters the model should recover. Unfortunately, because the effective sample size is so low and the Rhat so high, the parameter estimates are complete nonsense.
The data is designed so there are 60 subjects, split into three groups (g1, g2, g3) of 20 subjects each. Each subject is exposed to 3 conditions (cond1, cond2, cond3). I designed the data so there is no difference among the groups, but there are differences among the conditions, with cond1 scoring 100 on average, cond2 scoring 75 on average, and cond3 scoring 125.
df <- data.frame(id = factor(rep(1:60, 3)),
group = factor(rep(c("g1", "g2", "g3"), each = 20, length.out = 180)),
condition = factor(rep(c("cond1", "cond2", "cond3"), each = 60)),
score = c(ceiling(rnorm(60, 100, 15)), ceiling(rnorm(60, 75, 15)), ceiling(rnorm(60, 125, 15))))
Here are the descriptives
library(dplyr)
df %>% group_by(group, condition) %>% summarise(m = mean(score), sd = sd(score))
# group condition m sd
# <fct> <fct> <dbl> <dbl>
# 1 g1 cond1 108 12.4
# 2 g1 cond2 79.4 13.1
# 3 g1 cond3 128 11.5
# 4 g2 cond1 105 15.5
# 5 g2 cond2 71.6 10.6
# 6 g2 cond3 127 17.7
# 7 g3 cond1 106 13.3
# 8 g3 cond2 75.8 17.6
# 9 g3 cond3 124 14.5
Everything looks to be correct, the differences between conditions are preserved nicely across groups.
Now for the model. It has a grand mean, a parameter for group, a parameter for condition, a parameter for the group x condition interaction, and a subject parameter.
Here is the data list
##### Step 1: put data into a list
mixList <- list(N = nrow(df),
nSubj = nlevels(df$id),
nGroup = nlevels(df$group),
nCond = nlevels(df$condition),
nGxC = nlevels(df$group)*nlevels(df$condition),
sIndex = as.integer(df$id),
gIndex = as.integer(df$group),
cIndex = as.integer(df$condition),
score = df$score)
Now to build the model in rstan, saving the string as a .stan file using the cat() function
###### Step 2: build model
cat("
data{
int<lower=1> N;
int<lower=1> nSubj;
int<lower=1> nGroup;
int<lower=1> nCond;
int<lower=1,upper=nSubj> sIndex[N];
int<lower=1,upper=nGroup> gIndex[N];
int<lower=1,upper=nCond> cIndex[N];
real score[N];
}
parameters{
real a0;
vector[nGroup] bGroup;
vector[nCond] bCond;
vector[nSubj] bSubj;
matrix[nGroup,nCond] bGxC;
real<lower=0> sigma_s;
real<lower=0> sigma_g;
real<lower=0> sigma_c;
real<lower=0> sigma_gc;
real<lower=0> sigma;
}
model{
vector[N] mu;
bCond ~ normal(100, sigma_c);
bGroup ~ normal(100, sigma_g);
bSubj ~ normal(0, sigma_s);
sigma ~ cauchy(0,2)T[0,];
for (i in 1:N){
mu[i] = a0 + bGroup[gIndex[i]] + bCond[cIndex[i]] + bSubj[sIndex[i]] + bGxC[gIndex[i],cIndex[i]];
}
score ~ normal(mu, sigma);
}
", file = "mix.stan")
Next is to generate the chains in rstan
##### Step 3: generate the chains
mix <- stan(file = "mix.stan",
data = mixList,
iter = 2e3,
warmup = 1e3,
cores = 1,
chains = 1)
And here is the output
###### Step 4: Diagnostics
print(mix, pars = c("a0", "bGroup", "bCond", "bGxC", "sigma"), probs = c(.025,.975))
# mean se_mean sd 2.5% 97.5% n_eff Rhat
# a0 -1917.21 776.69 2222.64 -5305.69 1918.58 8 1.02
# bGroup[1] 2368.36 2083.48 3819.06 -2784.04 9680.78 3 1.54
# bGroup[2] 7994.87 446.06 1506.31 4511.22 10611.46 11 1.00
# bGroup[3] 7020.78 2464.68 4376.83 81.18 14699.90 3 1.91
# bCond[1] -3887.06 906.99 1883.45 -7681.24 -247.48 4 1.60
# bCond[2] 4588.50 676.28 1941.92 -594.56 7266.09 8 1.10
# bCond[3] 73.91 1970.28 3584.74 -5386.96 5585.99 3 2.13
# bGxC[1,1] 3544.02 799.91 1819.18 -1067.27 6327.68 5 1.26
# bGxC[1,2] -4960.08 1942.57 3137.33 -10078.84 317.07 3 2.66
# bGxC[1,3] -396.35 418.34 1276.44 -2865.39 2543.45 9 1.42
# bGxC[2,1] -2085.90 1231.36 2439.58 -5769.81 3689.38 4 1.46
# bGxC[2,2] -10594.89 1206.58 2560.42 -14767.50 -5074.33 5 1.02
# bGxC[2,3] -6024.75 2417.43 4407.09 -12002.87 4651.14 3 1.71
# bGxC[3,1] -1111.81 1273.66 2853.08 -4843.38 5572.87 5 1.48
# bGxC[3,2] -9616.85 2314.56 4020.02 -15775.40 -4262.64 3 2.98
# bGxC[3,3] -5054.27 828.77 2245.68 -8666.01 -321.74 7 1.00
# sigma 13.81 0.14 0.74 12.36 15.17 27 1.00
The low number of effective samples and high Rhats tell me I am doing something terribly wrong here, but what?
Is it not specifying a prior on bGxC?
How does one specify a prior on a matrix?
Matrices are inefficient in Stan (see the Stan manual's discussion of matrices versus arrays of vectors). It's better to use a vector of vectors:
vector[nCond] bGxC[nGroup];
And to set a prior:
for(i in 1:nGroup){
bGxC[i] ~ normal(0, sigma_gc);
}
And:
for (i in 1:N){
mu[i] = a0 + bGroup[gIndex[i]] + bCond[cIndex[i]] + bSubj[sIndex[i]] + bGxC[gIndex[i]][cIndex[i]];
}
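For completeness, here is a sketch of the revised .stan file with those changes folded in. It keeps the original priors otherwise; other possible issues, such as the weak identifiability between a0, bGroup, and bCond, are left untouched:
cat("
data{
int<lower=1> N;
int<lower=1> nSubj;
int<lower=1> nGroup;
int<lower=1> nCond;
int<lower=1,upper=nSubj> sIndex[N];
int<lower=1,upper=nGroup> gIndex[N];
int<lower=1,upper=nCond> cIndex[N];
real score[N];
}
parameters{
real a0;
vector[nGroup] bGroup;
vector[nCond] bCond;
vector[nSubj] bSubj;
vector[nCond] bGxC[nGroup];   // vector of vectors instead of a matrix
real<lower=0> sigma_s;
real<lower=0> sigma_g;
real<lower=0> sigma_c;
real<lower=0> sigma_gc;
real<lower=0> sigma;
}
model{
vector[N] mu;
bCond ~ normal(100, sigma_c);
bGroup ~ normal(100, sigma_g);
bSubj ~ normal(0, sigma_s);
for(i in 1:nGroup){
bGxC[i] ~ normal(0, sigma_gc);   // the previously missing prior
}
sigma ~ cauchy(0,2)T[0,];
for (i in 1:N){
mu[i] = a0 + bGroup[gIndex[i]] + bCond[cIndex[i]] + bSubj[sIndex[i]] + bGxC[gIndex[i]][cIndex[i]];
}
score ~ normal(mu, sigma);
}
", file = "mix.stan")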

Run xgboost model on single test data point

I am trying to run an XGBoost model on a single test data point.
a <- data.frame(satisfaction_level=0.14,
last_evaluation=0.92,
number_project=2,
average_montly_hours=350,
time_spend_company=5,
Work_accident=0,
promotion_last_5years=1,
sales=factor("sales",levels=levels(Bdata$sales)),
salary=factor("medium",levels=levels(Bdata$salary)))
#Converting it into matrix format
str(a)
a <- as.data.frame.model.matrix(a)
I get the error below when I predict using the model:
xgb.preds = predict(xgb.model, a)
Error in xgb.DMatrix(newdata, missing = missing) :
xgb.DMatrix: does not support to construct from list
Created the model using:
xgb.model <- xgboost(param =param, data = xgb.train.data,nrounds = 1500 ,eta = 0.05,subsample = 1 )
and Bdata contains:
head(Bdata)
satisfaction_level last_evaluation number_project average_montly_hours time_spend_company Work_accident left promotion_last_5years sales salary
1 0.38 0.53 2 157 3 0 1 0 sales low
2 0.80 0.86 5 262 6 0 1 0 sales medium
3 0.11 0.88 7 272 4 0 1 0 sales medium
4 0.72 0.87 5 223 5 0 1 0 sales low
5 0.37 0.52 2 159 3 0 1 0 sales low
6 0.41 0.50 2 153 3 0 1 0 sales low
You should not use as.data.frame.model.matrix. Your a object is still a data.frame. You need to use a <- as.matrix(a).
See below for a workable example using the iris dataset.
library(xgboost)
x = as.matrix(iris[, 1:4])
y = as.numeric(factor(iris[, 5]))-1
model <- xgboost(data = x, label = y, nrounds = 10)
new <- data.frame(Sepal.Length = 5.1,
Sepal.Width = 3.5,
Petal.Length = 1.4,
Petal.Width = 0.2)
#error because it is a data.frame
preds <- predict(model, newdata = new)
# Error in xgb.DMatrix(newdata, missing = missing) :
# xgb.DMatrix: does not support to construct from list
# This works because data.frame is turned into a matrix
preds <- predict(model, newdata = as.matrix(new))
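One caveat for the original data: as.matrix() on a data frame with factor columns (like sales and salary in a) coerces everything to character, which xgb.DMatrix also rejects. In that case a numeric design matrix is the usual route; a sketch, assuming the training matrix was built with the same dummy coding and column order:
## expand factors into dummy columns to get a numeric matrix
a_mat <- model.matrix(~ . - 1, data = a)
xgb.preds <- predict(xgb.model, a_mat)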
