Translate SAS code to R (random effect ANOVA)

I'm trying to translate my SAS code for a random effect ANOVA to R. Here is my code:
proc glm data=A;
class group;
model common = group;
random group;
run;
'group' is group membership, and 'common' is the dependent variable. Please translate this code into R.
my data looks like this:
id common group
1 4 A
2 2 A
3 3 A
4 2 B
5 2 B
6 3 C
7 4 C
8 3 C

I think you are looking for lme, and the code can be written as:
library(nlme)
#let's say A (a data frame) holds the sample data
A$group <- as.factor(A$group)
#model: random intercept per group (group enters only as a random effect)
fit <- lme(common ~ 1, random = ~ 1 | group, data = A, na.action = na.omit)
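An alternative sketch with lme4, under the same assumption that A holds the sample data; VarCorr() then reports the variance components:
library(lme4)

# random intercept per group; the fixed part is just the grand mean
fit2 <- lmer(common ~ 1 + (1 | group), data = A)
summary(fit2)
VarCorr(fit2)  # between-group and residual variance components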


Inputs of 2 separate predict() calls return the same set of fitted values

Confession: I attempted to ask this question yesterday, but used a congruent sample dataset that resembles my "real" data, in hopes this would be more convenient for readers here. One issue was resolved, but another remains that appears intractable.
My objective is to create a linear model and two predicted vectors, "yC.hat" and "yT.hat", which are meant to project average effects for unique observed values of pri2000v as a function of the squared average poverty level, I(avgpoverty^2), under control (treatment = 0) and treatment (treatment = 1) conditions.
While I have no issues running the regression itself, the data argument I pass has no effect on predict(); only the model object affects the output. As a result, treatment = 0 and treatment = 1 in the data argument produce the same fitted values. In fact, I can plug ANY value into the data argument and it makes no difference, so I suspect my misunderstanding starts here.
Here is my code:
q6rega <- lm(pri2000v ~ treatment + I(log(pobtot1994)) + I(avgpoverty^2)
             #interactions
             + treatment:avgpoverty + treatment:I(avgpoverty^2), data = pga)
## predicted PRI support under the Treatment condition
q6.yT.hat <- predict(q6rega,
data = data.frame(I(avgpoverty^2) = 9:25, treatment = 1))
## predicted PRI support rate under the Control condition
q6.yC.hat <- predict(q6rega,
data = data.frame(I(avgpoverty^2) = 9:25, treatment = 0))
q6.yC.hat == q6.yT.hat
[1] TRUE TRUE TRUE ...  # all 417 comparisons are TRUE
dput(pga) has been posted on my GitHub, if needed.
EDIT: There were a few things wrong with my code above. The core problem is that predict() takes a newdata argument, not data; my misnamed data = was silently absorbed by ..., so predict() behaved as if newdata had been omitted and simply returned the fitted values. Since I'm fairly new to statistics, I had confused fitted values with the predictions I was actually trying to produce. I would have expected an unexpected input to produce an error instead.
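A minimal sketch of the pitfall on built-in data (mtcars here, not my pga data): predict.lm() has no data argument, so a misnamed data = is swallowed by ... and ignored.
fit <- lm(mpg ~ wt, data = mtcars)
nd <- data.frame(wt = c(2, 3, 4))
identical(predict(fit, data = nd), predict(fit))  # TRUE: data is ignored
predict(fit, newdata = nd)                        # three genuine predictions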
I'm surprised you are able to run a prediction when it is lacking the required variable (pobtot1994) for your model in the new data frame for prediction.
Anyway, you would need to create a new data frame with the three variables in untransformed form used in the model. Since you are interested to compare the fitted values of avgpoverty 3 to 5 for treatment 1 and 0, you need to force the third variable pobtot1994 as a constant. I use the mean of pobtot9994 here for simplicity.
newdat <- expand.grid(avgpoverty=3:5, treatment=factor(c(0,1)), pobtot1994=mean(pga$pobtot1994))
avgpoverty treatment pobtot1994
1 3 0 2037.384
2 4 0 2037.384
3 5 0 2037.384
4 3 1 2037.384
5 4 1 2037.384
6 5 1 2037.384
The prediction will show you the different values for the two conditions.
newdat$fitted <- predict(q6rega, newdata=newdat)
avgpoverty treatment pobtot1994 fitted
1 3 0 2037.384 38.86817
2 4 0 2037.384 50.77476
3 5 0 2037.384 55.67832
4 3 1 2037.384 51.55077
5 4 1 2037.384 49.03148
6 5 1 2037.384 59.73910

Adjusted survival curve based on weighted Cox regression

I'm trying to make an adjusted survival curve based on a weighted Cox regression performed on a case-cohort data set in R, but unfortunately, I can't make it work. I was therefore hoping that some of you may be able to figure out why it isn't working.
In order to illustrate the problem, I have used (and slightly adjusted) the example from the survival package documentation, which means I'm working with:
data("nwtco")
subcoh <- nwtco$in.subcohort
selccoh <- with(nwtco, rel==1|subcoh==1)
ccoh.data <- nwtco[selccoh,]
ccoh.data$subcohort <- subcoh[selccoh]
ccoh.data$age <- ccoh.data$age/12 # Age in years
fit.ccSP <- cch(Surv(edrel, rel) ~ stage + histol + age,
                data = ccoh.data, subcoh = ~subcohort, id = ~seqno,
                cohort.size = 4028, method = "LinYing")
The data set is looking like this:
seqno instit histol stage study rel edrel age in.subcohort subcohort
4 4 2 1 4 3 0 6200 2.333333 TRUE TRUE
7 7 1 1 4 3 1 324 3.750000 FALSE FALSE
11 11 1 2 2 3 0 5570 2.000000 TRUE TRUE
14 14 1 1 2 3 0 5942 1.583333 TRUE TRUE
17 17 1 1 2 3 1 960 7.166667 FALSE FALSE
22 22 1 1 2 3 1 93 2.666667 FALSE FALSE
Then, I'm trying to illustrate the effect of stage in an adjusted survival curve, using the ggadjustedcurves-function from the survminer package:
library(survminer)
ggadjustedcurves(fit.ccSP, variable = ccoh.data$stage, data = ccoh.data)
#Error in survexp(as.formula(paste("~", variable)), data = ndata, ratetable = fit) :
# Invalid rate table
But unfortunately, this is not working. Can anyone figure out why? And can it somehow be fixed or done in another way?
Essentially, I'm looking for a way to graphically illustrate the effect of a continuous variable in a weighted Cox regression performed on a case-cohort data set, so I would generally also be interested in hearing whether there are alternatives to adjusted survival curves.
It is throwing errors for two reasons:
The ggadjustedcurves function is not being given a coxph object, which its help page indicates is the expected first argument.
The specification of the variable argument is incorrect. The correct way to specify a column is a length-1 character vector matching one of the names in the formula; you passed a vector of length 1154 instead.
This code succeeds:
fit.ccSP <- coxph(Surv(edrel, rel) ~ stage + histol + age,
                  data = ccoh.data)
ggadjustedcurves(fit.ccSP, variable = 'stage', data = ccoh.data)
It might not give you everything you want, but it does answer the "why the error" part of your question. You might want to review the methods used by Therneau, Cynthia S. Crowson, and Elizabeth J. Atkinson in their vignette on adjusted curves:
https://cran.r-project.org/web/packages/survival/vignettes/adjcurve.pdf
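If the case-cohort weighting itself matters for your curves, one rough sketch is to refit with a weighted coxph() so that ggadjustedcurves() still receives the object type it expects. The Barlow-style weights below (cases get weight 1, subcohort non-cases get the inverse sampling fraction) are an assumption; check them against your sampling design.
## approximate weighted fit for the case-cohort design
n.cohort <- 4028
n.subcoh <- sum(nwtco$in.subcohort)
ccoh.data$w <- ifelse(ccoh.data$rel == 1, 1, n.cohort / n.subcoh)
fit.w <- coxph(Surv(edrel, rel) ~ stage + histol + age,
               data = ccoh.data, weights = w, robust = TRUE)
ggadjustedcurves(fit.w, variable = "stage", data = ccoh.data)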

How to get CI 95% for coefficients of linear model using simpleboot package

I'm fitting a linear model (a basic linear regression with 4 predictors) with lm(). This all works fine.
What I want to do now is bootstrap the model. After a quick search on Google I found the package simpleboot, which seemed quite easy to understand.
I can easily bootstrap the lm.object using something like this:
boot_mod <- lm.boot(mod,R=100,rows=TRUE)
and afterwards print the object boot_mod.
I can also access the list that contains the coefficients for each bootstrap sample, along with other metrics such as RSS, R², and so on.
Can anyone tell me how I can save all coefficients from the boot list in a list or dataframe?
The result would look like this at best:
boot_coef
sample coef 1 coef 2 coef 3 ...
1      1.1    1.4    ...
2      1.2    1.5    ...
3      1.3    1.6    ...
library(tidyverse)
library(simpleboot)
### Some Dummy-Data in a dataframe
a <- c(3,4,5,6,7,9,13,12)
b <- c(5,9,14,22,12,5,12,18)
c <- c(7,2,8,7,12,5,3,1)
df <- tibble(x1 = a, x2 = b, y = c)
### Linear model
mod <- lm(y~x1+x2,data=df)
### Bootstrap
boot_mod <- lm.boot(mod,R=10,rows = TRUE)
You can also use the function samples() from the same simpleboot package. Given the output from either lm.boot or loess.boot, you can specify what kind of information to extract:
samples(object, name = c("fitted", "coef", "rsquare", "rss"))
It outputs either a vector or a matrix, depending on the entity extracted.
Source:
https://rdrr.io/cran/simpleboot/man/samples.html
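For example, a short sketch of getting percentile 95% CIs this way (I believe samples() returns the coefficients with one column per bootstrap replicate, but the orientation is worth checking):
coef_mat <- samples(boot_mod, name = "coef")  # coefficients per replicate
boot_coef <- as.data.frame(t(coef_mat))       # one row per bootstrap sample
apply(boot_coef, 2, quantile, probs = c(0.025, 0.975))  # percentile 95% CIs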
Here is a tidyverse option to save all coefficients from the boot.list:
library(tidyverse)
as.data.frame(boot_mod$boot.list) %>%
select(ends_with("coef")) %>% # select coefficients
t(.) %>% as.data.frame(.) %>% # model per row
rownames_to_column("Sample") %>% # set sample column
mutate(Sample = parse_number(Sample))
# output
Sample (Intercept) x1 x2
1 1 5.562417 -0.2806786 0.12219191
2 2 8.261905 -0.8333333 0.54761905
3 3 9.406171 -0.5863124 0.07783740
4 4 8.996784 -0.6040479 0.06737891
5 5 10.908036 -0.7249561 -0.03091908
6 6 8.914262 -0.5094340 0.05549390
7 7 7.947724 -0.2501127 -0.08607481
8 8 6.255539 -0.2033771 0.07463971
9 9 5.676581 -0.2668020 0.08236743
10 10 10.118126 -0.4955047 0.01233728

Machine learning in R is slow with decisiontree

I'm trying to predict the type of a vehicle (model) based on the vehicle identification number (VIN). The first 10 positions of the VIN say something about the type, so I use them as variables. See an example of the data below:
positie_1_tm_3 positie_4 positie_5 positie_6 positie_7 positie_8 positie_9 positie_10 MODEL
MBL B 7 5 L 7 A 6 SKODA YETI
JNF A A E 1 1 U 2 NISSAN NOTE
VWZ Z Z 5 Z Z 9 4 VOLKSWAGEN FOX
F1D Z 0 V 0 6 4 2 RENAULT MEGANE
NAK U 8 1 1 C A 5 KIA SORENTO
F1B R 1 J 0 H 4 1 RENAULT CLIO
I used this R code for it:
#make stratified train and test sets:
library(caret)
train.index <- createDataPartition(VIN1$MODEL, p = .6, list = FALSE)
train <- VIN1[ train.index,]
overige_data <- VIN1[-train.index,]
test.index<-createDataPartition(overige_data$MODEL, p = .5, list = FALSE)
test<-overige_data[test.index,]
testset2<-overige_data[-test.index,]
#make decision tree:
library(rpart)
library(rpart.plot)
library(rattle)
library(RColorBrewer)
tree <- rpart(MODEL ~ ., train, method = "class")
But the last step, building the tree, has already been running for more than 2 weeks.
The dataset is around 3 million rows, so the training set is around 1.8 million rows. Is it running so long because that is too much data for rpart, or is there another problem?
No, something is obviously wrong. It may take long, but not 2 weeks.
The question is: how many labels (classes) are there? Decision trees tend to be slow when the number of classes is large (by large I mean more than 50).
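If the class count is indeed the bottleneck, a sketch of rpart settings that usually keep a large fit tractable (the values below are illustrative, not tuned):
library(rpart)
## cheaper controls: require a larger fit gain per split, cap the depth,
## and skip the built-in 10-fold cross-validation
ctrl <- rpart.control(cp = 0.001, maxdepth = 10, xval = 0)
tree <- rpart(MODEL ~ ., data = train, method = "class", control = ctrl)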

How to organize data for a multivariate probit model?

I've conducted a psychometric test on some subjects, and I'm trying to create a multivariate probit model.
The test was conducted as follows:
Subject 1 was given a certain stimulus under 11 different conditions, 10 times per condition. Answers (correct = 1, incorrect = 0) were recorded.
So for subject 1, I have the following results table:
# Subj 1
correct
cnt 1 0
1 0 10
2 0 10
3 1 9
4 5 5
5 7 3
6 10 0
7 10 0
8 10 0
9 9 1
10 10 0
11 10 0
This means that Subj 1 answered incorrectly 10 times under conditions 1 and 2, and answered correctly 10 times under conditions 10 and 11. For the other conditions, the number of correct responses increases from condition 3 to condition 9.
I hope I was clear.
I usually analyze the data using the following code:
prob.glm <- glm(resp.mat1 ~ cnt, family = binomial(link = "probit"))
Here resp.mat1 is the response table, while cnt is the condition vector (1 to 11). So I'm able to draw the sigmoid curve using the predict() function. (Figure: fitted sigmoid for subject 1.)
Now suppose I've conducted the same test on 20 subjects. I have now 20 tables, organized like the first one.
What I want to do is compare subgroups, for example male vs. female or young vs. old. But I want to keep the inter-individual variability, so simply "adding up" the 20 tables would be wrong.
How can I organize the data in order to use the glm() function?
I want to be able to write a command like:
prob.glm <- glm(resp.matTOT ~ cnt + sex, family = binomial(link = "probit"))
And then graph the curve for sex = M and for sex = F.
I tried using rbind() to create a single table, then adding columns for Subj (1 to 20), Sex, and Age. But that seems like a clumsy solution to me, so any alternative will be really appreciated.
It looks like you are using the wrong function for the job. Check the first example of glmer in the lme4 package; it comes quite close to what you want. herd would be replaced by the subject number, but make sure that you do something like
mydata$subject = as.factor(mydata$subject)
when you have numerical subject numbers.
# Stolen from lme4
library(lattice)
library(lme4)
xyplot(incidence/size ~ period | herd, cbpp, type = c('g', 'p', 'l'),
       layout = c(3, 5), index.cond = function(x, y) max(y))
(gm1 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
              data = cbpp, family = binomial))
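Adapted to your setup, a sketch (the column names n_correct, n_incorrect, cnt, sex, and subj in the data frame long are assumptions about how you stack the 20 tables into one long data frame):
## probit GLMM: fixed effects for condition and sex,
## random intercept per subject keeps the inter-individual variability
long$subj <- as.factor(long$subj)
prob.glmm <- glmer(cbind(n_correct, n_incorrect) ~ cnt + sex + (1 | subj),
                   data = long, family = binomial(link = "probit"))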
There's a multivariate probit command in the mlogit library, of all things. You can see an example of the required data structure here:
https://stats.stackexchange.com/questions/28776/multinomial-probit-for-varying-choice-set
