R random forest - training set using target column for prediction

I am learning how to use various random forest packages and coded up the following from example code:
library(party)
library(randomForest)
set.seed(415)
# I'll try to reproduce this with a public data set; in the meantime here's the existing code
data = read.csv(data_location, sep = ',')
test = data[1:65] #basically data w/o the "answers"
m = sample(1:nrow(data), nrow(data)/2, replace=FALSE)
o = sample(1:(nrow(data)),nrow(data)/2,replace=FALSE)
train2 = data[m,]
train3 = data[o,]
#random forest implementation
fit.rf <- randomForest(train2[,66] ~., data=train2, importance=TRUE, ntree=10000)
Prediction.rf <- predict(fit.rf, test) #to see if the predictions are accurate -- but it errors out unless I give it all data[1:66]
#cforest implementation
fit.cf <- cforest(train3[,66]~., data=train3, controls=cforest_unbiased(ntree=10000, mtry=10))
Prediction.cf <- predict(fit.cf, test, OOB=TRUE) #to see if the predictions are accurate -- but it errors out unless I give it all data[1:66]
data[,66] is the target factor I'm trying to predict, but it seems that using "~ ." to solve for it causes the formula to include the target factor in the prediction model itself.
How do I solve for the dimension I want on high-ish dimensionality data, without having to spell out exactly which dimensions to use in the formula (so I don't end up with something like cforest(data[,66] ~ data[,1] + data[,2] + data[,3] + ... etc.)?
EDIT:
On a high level, I believe one basically
loads the full data
breaks it down into several subsets to prevent overfitting
trains via the subset data
generates a fitting formula so one can predict values of the target (in my case data[,66]) given data[1:65].
so my PROBLEM is now: if I give it a new set of test data, say test = data[1:65], it says "Error in eval(expr, envir, enclos) :" where it is expecting data[,66]. I want to basically predict data[,66] given the rest of the data!

I think that if the response is in train3 then it will be used as a feature.
I believe this is more like what you want:
ctrl <- cforest_unbiased(ntree=1000, mtry=3)
mod <- cforest(iris[,5] ~ ., data = iris[,-5], controls = ctrl)
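Applied to the original setup, a hedged sketch (assuming, as in the question, that column 66 of train2 is the target and test holds only columns 1:65): build the formula from the column name, so the response is never also fed in as a predictor.
resp <- names(train2)[66]  # name of the target column
fml <- reformulate(setdiff(names(train2), resp), response = resp)
fit.rf <- randomForest(fml, data = train2, importance = TRUE, ntree = 1000)
Prediction.rf <- predict(fit.rf, newdata = test)  # test needs only the 65 predictor columns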

Related

Error when predicting new fitted values from R gamlss object

I have a gamlss model that I'd like to use to make new y predictions (and confidence intervals) in order to visualize how well the model fits the real data. I'd like to make predictions from a new data set of randomized predictor values (rather than the original data), but I'm running into an error message. Here's some example code:
library(gamlss)
# example data
irr <- c(0,0,0,0,0,0.93,1.4,1.4,2.3,1.5)
lite <- c(0,1,2,2.5)
blck <- 1:8
raw <- data.frame(
  css = abs(rnorm(500, mean=0.5, sd=0.1)),
  nit = abs(rnorm(500, mean=0.72, sd=0.5)),
  irr = sample(irr, 500, replace=TRUE),
  lit = sample(lite, 500, replace=TRUE),
  block = factor(sample(blck, 500, replace=TRUE))
)
# the model
mod <- gamlss(css ~ nit + irr + lit + random(block),
              sigma.fo = ~irr*nit + random(block), data = raw, family = BE)
# new data (predictors) for making css predictions
pred <- data.frame(
  nit = abs(rnorm(500, mean=0.72, sd=0.5)),
  irr = sample(irr, 500, replace=TRUE),
  lit = sample(lite, 500, replace=TRUE),
  block = factor(sample(blck, 500, replace=TRUE))
)
# make predictions
predmu <- predict(mod, newdata=pred, what="mu", type="response")
This gives the following error:
Error in data[match(names(newdata), names(data))] :
object of type 'closure' is not subsettable
When I run this on my real data, it gives this slightly different error:
Error in `[.data.frame`(data, match(names(newdata), names(data))) :
undefined columns selected
When I use predict without newdata, it works fine making predictions on the original data, as in:
predmu <- predict(mod, what="mu", type="response")
Am I using predict wrong? Any suggestions are greatly appreciated! Thank you.
No, you are not wrong. I have experienced the same issue.
The documentation indicates that the implementation of predict is incomplete; this appears to be an example of an incomplete feature/function.
Hedgehog mentioned that predictions based on new data are not possible yet.
BonnieM therefore "moved the model" into lmer().
I would like to further comment on this idea:
BonnieM tried to get predictions based on the object mod
mod <- gamlss(css ~ nit + irr + lit + random(block),
              sigma.fo = ~irr*nit + random(block), data = raw, family = BE)
"Moving into lme()" in this scenario could look as follows:
mod2 <- gamlss(css ~ nit + irr + lit + re(random=~1|block),
               sigma.fo = ~irr*nit + re(random=~1|block),
               data = raw,
               family = BE)
Predictions on new data based on mod2 are implemented within the gamlss2 package.
Furthermore, mod and mod2 should be the same model.
See:
Stasinopoulos, M. D., Rigby, R. A., Heller, G. Z., Voudouris, V., & De Bastiani, F. (2017). Flexible regression and smoothing: using GAMLSS in R. Chapman and Hall/CRC. Chapter 10.9.1
I ran into a lot of problems in this direction, and found one workaround: fit the model using the weights argument, with some extra dummy observations set to weight zero (but carrying the predictor values I was interested in).
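A minimal sketch of that workaround, assuming the raw and pred data frames from the question (the placeholder response value is illustrative; zero-weight rows should not influence the fit but still receive fitted values):
pred$css <- 0.5  # placeholder response, ignored at weight zero
aug <- rbind(raw, pred)
w <- c(rep(1, nrow(raw)), rep(0, nrow(pred)))
mod.w <- gamlss(css ~ nit + irr + lit + random(block),
                sigma.fo = ~irr*nit + random(block),
                weights = w, data = aug, family = BE)
head(fitted(mod.w)[w == 0])  # fitted mu values for the dummy rows are the predictions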
I was able to overcome the undefined columns selected error by ensuring that the data passed to the newdata parameter had exactly the same column structure as the data used when fitting the gamlss model.
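For the example above, that amounts to something like the following sketch (the column names come from the question's data frames):
pred <- pred[, c("nit", "irr", "lit", "block")]  # only the model's predictors, in the same order
predmu <- predict(mod, newdata = pred, what = "mu", type = "response")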

Using randomForest package in R, how to map Random forest prediction?

I am trying to use randomForest to generate a spatial prediction map.
I developed my model using random forest regression, but I hit a little difficulty at the last step: using the best predictors to build the predictive map. I want to create a prediction map.
My code:
library(raster)
library(randomForest)
set.seed(12)
s <- stack("Density.tif", "Aqui.tif", "Rech.tif", "Rainfall.tif", "Land Use.tif",
           "Cond.tif", "Nitrogen.tif", "Regions.tif", "Soil.tif", "Topo.tif",
           "Climatclass.tif", "Depth.tif")
points <- read.table("Coordonnées3.txt", header=TRUE, sep="\t", dec=",", strip.white=TRUE)
d <- extract(s, points)
rf <- randomForest(nitrate ~ ., data=d, importance=TRUE, ntree=500, na.action=na.roughfix)
p <- predict(s, rf)
plot(p)
Sample Data:
> head(points)
LAT LONG
1 -13.057007 27.549580
2 -4.255000 15.233745
3 5.300000 -1.983610
4 7.245675 -4.233336
5 12.096330 15.036016
6 -4.255000 15.233745
The error when I run my short code is:
Error in eval(expr, envir, enclos) : object 'nitrate' not found.
I am guessing the error happens when you fit the model.
Why would there be a variable called nitrate? Given how you create your RasterStack, perhaps there is one called Nitrogen. Either way, you can find out by looking at names(s) and colnames(d).
NOTE that your points are not good! They are in reverse order. The order should be (longitude, latitude).
Based on your comments (please edit your question instead), you should
add nitrate to the points file (as the third column) or something like that. Then do
xy <- points[, 2:1]
nitrate <- points[,3]
Extract points and combine with your observed data
d <- extract(s, xy)
d <- data.frame(nitrate = nitrate, d)  # a data.frame, so the formula interface works
Build model and predict
rf <- randomForest(nitrate ~ ., data = d, importance = TRUE, ntree = 500, na.action = na.roughfix)
p <- predict(s, rf)
It sounds like the error is coming when you are trying to build the forest. It may be most helpful to not use the formula interface. Also, if d is large, then using the formula interface is not advisable. From the help file on randomForest: "For large data sets, especially those with large number of variables, calling randomForest via the formula interface is not advised: There may be too much overhead in handling the formula."
Assuming d$nitrate exists, the solution is
rf <- randomForest(y = d$nitrate, x = subset(d, select = -nitrate),
                   importance = TRUE, ntree = 500, na.action = na.roughfix)

Predict function from the caret package gives an error

I am doing just a regular logistic regression using the caret package in R. I have a binomial response variable coded 1 or 0, called SALE_FLAG, and 140 numeric predictor variables that I transformed to dummy variables using the dummyVars function in R.
data <- dummyVars(~., data = data_2, fullRank = TRUE, sep = "_", levelsOnly = FALSE)
dummies <- predict(data, data_2)
model_data <- as.data.frame(dummies)
This gives me a data frame to work with. All of the variables are numeric. Next I split into training and testing:
trainIndex <- createDataPartition(model_data$SALE_FLAG, p = .80,list = FALSE)
train <- model_data[ trainIndex,]
test <- model_data[-trainIndex,]
Time to train my model using the train function:
model <- train(SALE_FLAG ~ ., data = train, method = "glm")
Everything runs nicely and I get a model. But when I run the predict function it does not give me what I need:
predict(model, newdata =test,type="prob")
and I get an ERROR:
Error in dimnames(out)[[2]] <- modelFit$obsLevels :
length of 'dimnames' [2] not equal to array extent
On the other hand, when I replace "prob" with "raw" for type inside the predict function, I get predictions, but I need probabilities so I can code them into a binary variable given my threshold.
Not sure why this happens. I did the same thing without using the caret package and it worked how it should:
model2 <- glm(SALE_FLAG ~ ., family = binomial(logit), data = train)
predict(model2, newdata =test, type="response")
I spent some time looking at this but am not sure what is going on, and it seems very weird to me. I have tried many variations of the train function, meaning I didn't use the formula interface and used x and y instead. I used method = 'bayesglm' as well to check, and it gave me the same error. I hope someone can help me out. I don't strictly need the train function to get what I need, but caret is a good package with lots of tools and I would like to be able to figure this out.
Show us str(train) and str(test). I suspect the outcome variable is numeric, which makes train think that you are doing regression. That should also be apparent from printing model. Make it a factor if you want to do classification.
Max
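A minimal sketch of that fix, assuming the train/test split above (the level labels are illustrative; caret needs valid R names for the class levels when computing class probabilities):
train$SALE_FLAG <- factor(train$SALE_FLAG, levels = c(0, 1), labels = c("no", "yes"))
test$SALE_FLAG <- factor(test$SALE_FLAG, levels = c(0, 1), labels = c("no", "yes"))
model <- train(SALE_FLAG ~ ., data = train, method = "glm")
probs <- predict(model, newdata = test, type = "prob")
head(ifelse(probs$yes > 0.5, "yes", "no"))  # threshold into a binary prediction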

Predicting with lm object in R - black box paradigm

I have a function that returns an lm object. I want to produce predicted values based on some new data. The new data is a data.frame in exactly the same format as the data passed to the lm function, except that the response has been removed (since we're predicting, not training). I would expect to execute the following, but get an error:
predict( model , newdata )
"Error in eval(expr, envir, enclos) : object 'ModelResponse' not found"
In my case, ModelResponse was the name of the response column in the data I originally trained on. So just for kicks, I tried to insert an NA response:
newdata$ModelResponse = NA
predict( model , newdata )
Error in terms.default(object, data = data) : no terms component nor attribute
Highly frustrating! R's notion of models/regression doesn't match mine: 1. I train a model with some data and get a model object. 2. I can score new data from any environment/function/frame/etc. so long as I feed the model object data that "looks like" the data I trained on (i.e. same column names). This is a standard black-box paradigm.
So here are my questions:
1. What concept(s) am I missing here?
2. How do I get my scenario to work?
3. How can I get the model object to be portable? str(model) shows me that the model object saved the original data it trained on! So the model object is massive. I want my model to be portable to any function/environment/etc. and only contain the data it needs to score.
In the absence of str() on either the model or the data offered to the model, here's my guess regarding this error message:
predict( model , newdata )
"Error in eval(expr, envir, enclos) : object 'ModelResponse' not found"
I guess that you made a model object named "model", that your outcome variable (the left-hand side of the formula in the original call to lm) was named "ModelResponse", and that you then named a column in newdata by the same name. But what you should have done was leave out the "ModelResponse" column (because that is what you are predicting) and put in "Model_Predictor1", "Model_Predictor2", etc., i.e. all the names on the right-hand side of the formula given to lm().
The coef() function will allow you to extract the information needed to make the model portable.
mod.coef <- coef(model)
mod.coef
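A sketch of scoring new data by hand from the coefficient vector alone (this assumes a model with an intercept and purely numeric predictors, i.e. no factors or transformations in the formula):
X <- cbind(1, as.matrix(newdata[, names(mod.coef)[-1]]))  # design matrix with intercept column
preds <- drop(X %*% mod.coef)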
Since you expressed interest in the rms/Hmisc package combo's Function, here it is, using the help-example from ols and comparing the output with an extracted function and the rms Predict method. Note the capitals, since these are designed to work with the package equivalents of lm, glm(..., family="binomial"), and coxph, which in rms become ols, lrm, and cph.
> set.seed(1)
> x1 <- runif(200)
> x2 <- sample(0:3, 200, TRUE)
> distance <- (x1 + x2/3 + rnorm(200))^2
> d <- datadist(x1,x2)
> options(datadist="d") # No d -> no summary, plot without giving all details
>
>
> f <- ols(sqrt(distance) ~ rcs(x1,4) + scored(x2), x=TRUE)
>
> Function(f)
function(x1 = 0.50549065,x2 = 1) {0.50497361+1.0737604* x1-
0.79398383*pmax(x1-0.083887788,0)^3+ 1.4392827*pmax(x1-0.38792825,0)^3-
0.38627901*pmax(x1-0.65115162,0)^3-0.25901986*pmax(x1-0.92736774,0)^3+
0.06374433*x2+ 0.60885222*(x2==2)+0.38971577*(x2==3) }
<environment: 0x11b4568e8>
> ols.fun <- Function(f)
> pred1 <- Predict(f, x1=1, x2=3)
> pred1
x1 x2 yhat lower upper
1 1 3 1.862754 1.386107 2.339401
Response variable (y): sqrt(distance)
Limits are 0.95 confidence limits
# The "yhat" is the same as one produces with the extracted function
> ols.fun(x1=1, x2=3)
[1] 1.862754
(I have learned through experience that the restricted cubic-spline fit functions coming from rms need to have spaces and carriage returns added to improve readability.)
Thinking long-term, you should probably take a look at the caret package. Many or most modeling functions work with data frames and matrices, others have a preference, and there may be other variations of their expectations. It's important to quickly get your head around each, but if you want a single wrapper that will simplify life for you, making the intricacies into a "black box", then caret is as close as you can get.
As a disclaimer: I do not use caret, as I don't think modeling should be a black box. I've had more than a few emails to maintainers of modeling packages resulting from looking into their code and seeing something amiss. Wrapping that in another layer would not serve my interests. So, in the very long run, avoid caret and develop an enjoyment for dissecting what's going into and out of the different modeling functions. :)

Using R and Weka: how can I use meta-algorithms along with an n-fold evaluation method?

Here is an example of my problem
library(RWeka)
iris <- read.arff("iris.arff")
Perform n-fold cross-validation to obtain the proper accuracy of the classifier:
m<-J48(class~., data=iris)
e<-evaluate_Weka_classifier(m,numFolds = 5)
summary(e)
The results provided here are obtained by building the model with one part of the dataset and testing it with another part, and therefore give an accurate estimate of precision.
Now I perform AdaBoost to optimize the parameters of the classifier:
m2 <- AdaBoostM1(class ~ ., data = iris, control = Weka_control(W = list(J48, M = 30)))
summary(m2)
The results provided here are obtained by using the same dataset both for building the model and for evaluating it, so the accuracy is not representative of real-life precision, where we use other instances to be evaluated by the model. Nevertheless, this procedure is helpful for optimizing the model that is built.
The main problem is that I cannot optimize the model being built and, at the same time, test it with data that was not used to build the model, or just use an n-fold validation method to obtain the proper accuracy.
I guess you misinterpret the function of evaluate_Weka_classifier. In both cases, evaluate_Weka_classifier does only the cross-validation based on the training data. It doesn't change the model itself. Compare the confusion matrices of the following code:
m<-J48(Species~., data=iris)
e<-evaluate_Weka_classifier(m,numFolds = 5)
summary(m)
e
m2 <- AdaBoostM1(Species ~ ., data = iris,
                 control = Weka_control(W = list(J48, M = 30)))
e2 <- evaluate_Weka_classifier(m2,numFolds = 5)
summary(m2)
e2
In both cases, the summary gives you the evaluation based on the training data, while the function evaluate_Weka_classifier() gives you the correct cross-validation. Neither for J48 nor for AdaBoostM1 does the model itself get updated based on the cross-validation.
Now regarding the AdaBoost algorithm itself: in fact, it does use some kind of "weighted cross-validation" to come to the final classifier. Wrongly classified items are given more weight in the next building step, but the evaluation is done using equal weight for all observations. So using cross-validation to optimize the result doesn't really fit into the general idea behind the adaptive boosting algorithm.
If you want a true cross-validation using a training set and an evaluation set, you could do the following:
id <- sample(1:length(iris$Species),length(iris$Species)*0.5)
m3 <- AdaBoostM1(Species ~ ., data = iris[id,],
                 control = Weka_control(W = list(J48, M = 5)))
e3 <- evaluate_Weka_classifier(m3,numFolds = 5)
# true crossvalidation
e4 <- evaluate_Weka_classifier(m3,newdata=iris[-id,])
summary(m3)
e3
e4
If you want a model that gets updated based on a cross-validation, you'll have to go to a different algorithm, e.g. randomForest() from the randomForest package, which builds an ensemble of trees on bootstrap samples and evaluates them with the out-of-bag error. It can be used in combination with the RWeka package as well.
Edit: corrected the code for a true cross-validation. Using the subset argument has an effect in evaluate_Weka_classifier() as well.
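For completeness, a minimal sketch of that alternative, using R's built-in iris rather than the arff file:
library(randomForest)
set.seed(1)
rf <- randomForest(Species ~ ., data = iris, ntree = 500)
rf$confusion  # confusion matrix computed from the out-of-bag samples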
