LDA model on weighted data in R

I would like to use a linear discriminant analysis (LDA) model on my weighted data. My data set has a column of observation weights that are not integers, so I can't simply replicate rows. The lda function from the MASS package does not let me pass a vector of observation weights. Do you know how to deal with this? I have also tried the mlr package, but the classif.lda learner still uses the lda implementation from MASS, so I get this error:
Error in checkLearnerBeforeTrain(task, learner, weights) :
Weights vector passed to train, but learner 'classif.lda' does not support that!
Do you know how to solve this problem?
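One workaround is to estimate the LDA quantities by hand, plugging the weights into the class means, the pooled within-class covariance, and the priors. The sketch below is a minimal illustration of that idea, not the MASS implementation; the function and argument names are my own assumptions.
# Minimal sketch of a hand-rolled weighted LDA (not MASS::lda)
weighted_lda <- function(X, y, w) {
  X <- as.matrix(X)
  y <- factor(y)
  classes <- levels(y)
  p <- ncol(X)
  means <- matrix(NA, length(classes), p, dimnames = list(classes, colnames(X)))
  Sw <- matrix(0, p, p)
  priors <- numeric(length(classes))
  for (k in seq_along(classes)) {
    idx <- y == classes[k]
    Xk <- X[idx, , drop = FALSE]
    wk <- w[idx]
    mk <- colSums(Xk * wk) / sum(wk)        # weighted class mean
    means[k, ] <- mk
    Xc <- sweep(Xk, 2, mk)                  # center within the class
    Sw <- Sw + crossprod(Xc * sqrt(wk))     # weighted within-class scatter
    priors[k] <- sum(wk)
  }
  list(means   = means,
       cov     = Sw / (sum(w) - length(classes)),   # pooled weighted covariance
       priors  = priors / sum(priors),              # priors from summed weights
       classes = classes)
}
predict_weighted_lda <- function(fit, newX) {
  newX <- as.matrix(newX)
  Sinv <- solve(fit$cov)
  scores <- sapply(seq_along(fit$classes), function(k) {
    mk <- fit$means[k, ]
    drop(newX %*% Sinv %*% mk) - 0.5 * sum(mk * (Sinv %*% mk)) + log(fit$priors[k])
  })
  scores <- matrix(scores, ncol = length(fit$classes))
  fit$classes[max.col(scores)]              # class with the largest discriminant score
}
With all weights equal this should agree with the group means and pooled covariance that MASS::lda reports, which is a useful sanity check before trusting it on the real weights.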

Related

Subject-specific prediction from a heterogeneous linear mixed effect model (lcmm package)

I am fitting a heterogeneous linear mixed effect model using the lcmm package in R. Currently, I only get the class-specific and the weighted subject-specific predictions from the predictY function, but I want a subject-specific prediction. Is there any way to construct a subject-specific prediction from this package? Any help is appreciated.
I have found the answer. It looks like predictY gives the mean class-specific predictions; adding to them the product of each subject's random effects (ranef(model)) and the design matrix of the random part gives the subject-specific predictions.
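A rough sketch of that recipe, assuming a random intercept and a random slope on a time variable; the objects newdat, id and time are illustrative assumptions, the design matrix Z must match the random part of your own model, and with several latent classes you would add the deviation to the column of predictY(...)$pred for the class you want.
pred_class <- predictY(model, newdata = newdat)$pred   # mean class-specific predictions
b <- ranef(model)                                      # random effects, one row per subject
Z <- cbind(1, newdat$time)                             # assumed random part: intercept + time
pred_subject <- pred_class[, 1] + rowSums(Z * b[newdat$id, ])  # add each subject's deviation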

Is there a function to obtain pooled standardized coefficients from a linear model fitted to a multiply imputed (MI) dataset?

I imputed the missing data using the mice package.
I fitted the linear model with: summary(pool(with(imputed_base_finale, lm(....))))
I tried to obtain standardized coefficients using the lm.beta function, but it doesn't work:
lm.beta(with(imputed_base_finale, lm(...)))
Error in lm.beta(with(imputed_base_finale, lm(...))) :
object has to be of class lm
How can I obtain these standardized coefficients?
Thank you for your help!
lm.beta works on lm objects and adds standardized coefficients. However, it was not built to work on mira objects.
Have you considered using scale on the data before you build the model, effectively getting standardized coefficients?
Instead of standardizing the data before imputation, you could also apply the scaling as a post-processing step during imputation.
I am not sure which of these would be the most robust option.
require(mice)
# non-standardized
imp <- mice(nhanes2)
pool(with(imp,lm(chl ~ bmi)))
# standardized
imp_scale <- mice(scale(nhanes2[,c('bmi','chl')]))
pool(with(imp_scale,lm(chl ~ bmi)))

Error in missing value imputation using MICE package

I have a huge data set (4M x 17) that has missing values. Two columns are categorical; all the rest are numerical. I want to use the mice package for missing-value imputation. This is what I tried:
> testMice <- mice(myData[1:100000,]) # runs fine
> testTot <- predict(testMice, myData)
Error in UseMethod("predict") :
no applicable method for 'predict' applied to an object of class "mids"
Running the imputation on the whole dataset was computationally expensive, so I ran it on only the first 100K observations. Now I am trying to use that output to impute the whole dataset.
Is there anything wrong with my approach? If yes, what should I do to make it correct? If no, then why am I getting this error?
Neither mice nor Hmisc provide the parameter estimates from the imputation process. Both Amelia and imputeMulti do. In both cases, you can extract the parameter estimates and use them to impute your other observations.
Amelia assumes your data are multivariate normal, i.e. X ~ N(μ, Σ).
imputeMulti assumes your data follow a multivariate multinomial distribution; that is, the complete-data cell counts are distributed X ~ M(n, θ), where n is the number of observations.
Fitting can be done as follows, via example data. Examining parameter estimates is shown further below.
library(Amelia)
library(imputeMulti)
data(tract2221, package= "imputeMulti")
test_dat2 <- tract2221[, c("gender", "marital_status","edu_attain", "emp_status")]
# fitting
IM_EM <- multinomial_impute(test_dat2, "EM", conj_prior = "non.informative", verbose = TRUE)
amelia_EM <- amelia(test_dat2, m = 1, noms = c("gender", "marital_status", "edu_attain", "emp_status"))
The parameter estimates from the amelia function are found in amelia_EM$mu and amelia_EM$theta.
The parameter estimates in imputeMulti are found in IM_EM@mle_x_y and can be accessed via the get_parameters method.
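Just to make the extraction concrete, it might look like the lines below; nothing beyond the slots and accessor named above is guaranteed, so treat the names as assumptions.
mu_hat    <- amelia_EM$mu          # Amelia: estimated means of the multivariate normal
theta_hat <- amelia_EM$theta       # Amelia: remaining normal parameter estimates
mle_xy    <- get_parameters(IM_EM) # imputeMulti: multinomial estimates via get_parameters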
imputeMulti has noticeably higher imputation accuracy for categorical data than the other packages mentioned here, though it only accepts multinomial (i.e. factor) data.
All of this information is in the currently unpublished vignette for imputeMulti. The paper has been submitted to JSS and I am awaiting a response before adding the vignette to the package.
You don't use predict() with mice. It's not a model you're fitting per se. Your imputed results are already there for the 100,000 rows.
If you want data for all rows then you have to put all rows in mice. I wouldn't recommend it though, unless you set it up on a large cluster with dozens of CPU cores.
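If you do need all 4M rows imputed, one pragmatic workaround is to impute in chunks in parallel. The sketch below assumes myData from the question, 100K-row chunks, and 4 cores; note that chunking ignores information shared across chunks, so it is a compromise rather than the statistically ideal route.
library(mice)
library(parallel)
chunks <- split(myData, ceiling(seq_len(nrow(myData)) / 1e5))    # 100K-row chunks
imputed_chunks <- mclapply(chunks, function(d) {
  complete(mice(d, m = 1, printFlag = FALSE))                    # one completed data set per chunk
}, mc.cores = 4)
myData_imputed <- do.call(rbind, imputed_chunks)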

Random forest evaluation in R

I am a newbie in R and I am trying to do my best to create my first model. I am working on a two-class random forest project, and so far I have programmed the model as follows:
library(randomForest)
set.seed(2015)
randomforest <- randomForest(as.factor(goodkit) ~ ., data = training1, importance = TRUE, ntree = 2000)
varImpPlot(randomforest)
prediction <- predict(randomforest, test, type = 'prob')
print(prediction)
I am not sure why I don't get the overall performance of my model; I must be missing something in my code. I get the OOB error and the per-case predictions for the test set, but not an overall evaluation of the model.
library(pROC)
auc <-roc(test$goodkit,prediction)
print(auc)
This doesn't work at all.
I have been through the pROC manual but I cannot get to understand everything. It would be very helpful if anyone can help with the code or post a link to a good practical sample.
Using the ROCR package, the following code should work for calculating the AUC:
library(ROCR)
predictedROC <- prediction(prediction[, 2], as.factor(test$goodkit))
as.numeric(performance(predictedROC, "auc")@y.values)
Your problem is that predict on a randomForest object with type='prob' returns two columns of predictions: each column contains the probability of belonging to one of the classes (for binary prediction).
You have to decide which of these columns to use to build the ROC curve. Fortunately, for binary classification they carry the same information (one is just the reverse of the other):
auc1 <- roc(test$goodkit, prediction[, 1])
print(auc1)
auc2 <- roc(test$goodkit, prediction[, 2])
print(auc2)

Random forest in R - other error measures in OOB sample

I am preparing a predictive model using the randomForest package in R. However, I would like the function to report an OOB error measure other than accuracy; in fact, I want to use the Gini coefficient (some call it Powerstat). I know how to calculate Gini, but the problem is implementing it as the error measure.
Thanks
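As far as I know randomForest has no built-in option for this, but since the fitted object stores the OOB class votes, one workaround is to compute the Gini coefficient (2*AUC - 1) from them yourself, reusing the pROC approach from the thread above. A sketch, where fit, training1 and goodkit are assumed names:
library(randomForest)
library(pROC)
fit <- randomForest(as.factor(goodkit) ~ ., data = training1, ntree = 2000)
oob_prob <- fit$votes[, 2]                        # OOB vote fraction for the second class
oob_auc  <- auc(roc(training1$goodkit, oob_prob)) # AUC computed from OOB predictions
oob_gini <- 2 * as.numeric(oob_auc) - 1           # Gini coefficient (Powerstat)
oob_gini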

Resources