I am having trouble understanding what the variables in knn() mean in the context of the R function, as I don't come from a statistics background.
Let's say that I am trying to predict pool race results for each of pools A, B, and C.
I know the height and weight of each racing candidate competing in the race. Assuming that the candidates competing are the same every year, I also know who won for the past 30 years.
How would I predict who is going to win at pool A, B, and C this year?
My guess:
The train argument is a data frame with columns for weight, height, and the pool each competitor is competing in. This covers the past 29 years.
The test argument is a data frame with columns for weight, height, and the pool each competitor is competing in. This covers the most recent year.
The cl argument is a vector of which competitor won the race each year.
Is this how knn() was intended to be used?
Reference:
http://stat.ethz.ch/R-manual/R-patched/library/class/html/knn.html
Not exactly. The train data is used for training and the test data for testing. You can't just train a model and apply it straight away; you need to cross-validate it. The aim of model training is not simply to minimize the error, but to minimize the difference between the in-sample and out-of-sample errors. Otherwise you will overfit: if you push hard enough, your in-sample error can be driven to 0, which will not give any good results for real prediction. In that function, the training set is your in-sample data and the test set is your out-of-sample data.
The actual model is then built, and you can make a prediction (e.g., for the current year), for instance by calling predict() on the fitted model.
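For concreteness, here is a minimal sketch of how class::knn() is typically called (note that it returns a predicted class for each test row directly rather than a fitted model object). The data frames past_years and this_year and the winner column are made-up names for illustration, not from the question:

library(class)

train_x <- past_years[, c("height", "weight")]   # numeric predictors, past seasons
test_x  <- this_year[, c("height", "weight")]    # numeric predictors, current season
train_y <- past_years$winner                     # class label for each training row

# hold part of the historical data out to pick k (a simple validation split)
set.seed(1)
idx <- sample(nrow(train_x), size = floor(0.7 * nrow(train_x)))
val_pred <- knn(train = train_x[idx, ], test = train_x[-idx, ], cl = train_y[idx], k = 3)
mean(val_pred == train_y[-idx])                  # out-of-sample accuracy for this choice of k

# once k is chosen, classify the new season
knn(train = train_x, test = test_x, cl = train_y, k = 3)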
Related
I'm relatively new to glm - so please bear with me.
I have created a glm (logistic regression) to predict whether an individual CONTINUES studies ("0") or does NOTCONTINUE ("1"). I am interested in predicting the latter. The glm uses seven factors in the dataset, the confusion matrices are very good for what I need, and combining seven years of data has also been done. Straightforward.
However, I now need to apply the model to the current year's data, which of course does not have the NOTCONTINUE column in it. Let's say the glm model is "CombinedYears" and the new data is "Data2020".
How can I use the glm model to get predictions of who will ("0") or will NOT ("1") continue their studies? Do I need to insert a NOTCONTINUE column into the latest file? I have tried this:
Predict2020 <- predict(CombinedYears, data.frame(Data2020), type = 'response')
but the output only holds values <0.5.
Any help very gratefully appreciated. Thank you in advance
You mentioned that you already created a prediction model to predict whether a particular student will continue their studies or not. You used the glm function and your model name is CombinedYears.
Now, what you have to know is that your problem is binary classification, and you used logistic regression for it. The output of your model, when you apply it to new data or even to the same data used to fit it, is a set of probabilities: values between zero and one. In the development phase of your model you need to determine a cutoff threshold for these probabilities, which you can then use when you predict on new data. For example, you may take 0.5 as the cutoff, so every probability above it is classified as NOTCONTINUE and everything below it as CONTINUE. However, the best threshold can also be determined from your data by balancing sensitivity and specificity, which is usually done by working with the receiver operating characteristic (ROC) curve and the area under it (AUC). There are many packages that can do this for you, such as the pROC and AUC packages in R, and the same packages can determine the best cutoff as well.
What you have to do is the following:
Determine the cutoff threshold after calculating the AUC
library(pROC)
roc_object = roc(your_fit_data$NOTCONTINUE ~ fitted(CombinedYears))
coords(roc_object, "best", ret = "threshold", transpose = FALSE)
Use your model to predict on your new year's data (as you did):
Predict2020 = predict(CombinedYears, data.frame(Data2020), type = 'response')
Now, the content of Predict2020 is just probabilities for each student. Use the cutoff you obtained from step (1) to classify your students accordingly.
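Putting the two steps together, a minimal sketch (best_cut is just a name I am using here for the threshold returned by coords() above):

best_cut <- coords(roc_object, "best", ret = "threshold", transpose = FALSE)$threshold

# probabilities at or above the cutoff -> NOTCONTINUE ("1"), below -> CONTINUE ("0")
Class2020 <- ifelse(Predict2020 >= best_cut, 1, 0)
table(Class2020)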
I have behavioral data for many groups of birds over 10 days of observation. I wanted to investigate whether there is a temporal pattern in some behaviors (e.g. does mate competition increase over time?), and I was told that I had to account for the autocorrelation in the data, since behavior is unlikely to be independent from day to day.
However I was wondering about two things:
Since I'm not interested in the differences in y among days but rather in the trend of y over days, do I still need to correct for autocorrelation?
If yes, how do I control for the autocorrelation so that I'm left with only the signal (and noise, of course)?
For the second question, keep in mind I will be analyzing the effect of time on behavior using mixed models in R (since there are random effects such as pseudo-replication), but I have not found any straightforward method of correcting for autocorrelation in the data when modeling the responses.
(1) Yes, you should check for/account for autocorrelation.
The first example here shows how to estimate trends in a mixed model while accounting for autocorrelation.
You can fit these models with lme from the nlme package. Here's a mixed model without autocorrelation included:
library(nlme)   # provides lme(), ACF(), and corAR1()

cmod_lme <- lme(GS.NEE ~ cYear,
                data = mc2, method = "REML",
                random = ~ 1 + cYear | Site)
and you can explore the autocorrelation by using plot(ACF(cmod_lme)).
(2) Add the correlation structure to the model, something like this:
cmod_lme_acor <- update(cmod_lme,
                        correlation = corAR1(form = ~ cYear | Site))
@JeffreyGirard notes that to check the ACF after updating the model to include the correlation argument, you will need to use plot(ACF(cmod_lme_acor, resType = "normalized")).
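As a hedged follow-up (not part of the original answer), you can check whether the AR(1) term actually improves the fit and inspect the estimated autocorrelation parameter with standard nlme tools:

anova(cmod_lme, cmod_lme_acor)   # likelihood-based comparison of the two fits
intervals(cmod_lme_acor)         # includes an interval for Phi, the estimated AR(1) parameter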
I am using R to do some evaluations of two different forecasting models. The basic idea of the evaluation is to compare the Pearson correlation and its corresponding p-value using the cor.test() function. The graph below shows the final result of the correlation coefficient and its p-value.
We suggest that the model which has a lower correlation coefficient with a correspondingly lower p-value (less than 0.05) is better (or a higher correlation coefficient but with a fairly high corresponding p-value).
So, in this case, overall, we would say that model1 is better than model2.
But the question here is: is there any other specific statistical method to quantify the comparison?
Thanks a lot !!!
Assuming you're working with time series data since you called out a "forecast". I think what you're really looking for is backtesting of your forecast model. From Ruey S. Tsay's "An Introduction to Analysis of Financial Data with R", you might want to take a look at his backtest.R function.
backtest(m1, rt, orig, h, xre = NULL, fixed = NULL, inc.mean = TRUE)
# m1:       a time-series model object
# rt:       the time series
# orig:     the starting forecast origin
# h:        the forecast horizon
# xre:      the independent variables
# fixed:    parameter constraints
# inc.mean: flag for the constant term of the model
Backtesting allows you to see how well your models perform on past data, and Tsay's backtest.R provides RMSE and mean absolute error, which will give you another perspective beyond correlation. Caution: depending on the size of your data and the complexity of your model, this can be a very slow-running test.
To compare models you'll normally look at RMSE which is essentially the standard deviation of the error of your model. Those two are directly comparable and smaller is better.
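For instance, a minimal sketch of such a comparison, where actual, pred_m1, and pred_m2 are placeholder vectors holding the observed values and each model's forecasts over the same period:

rmse <- function(actual, pred) sqrt(mean((actual - pred)^2))
mae  <- function(actual, pred) mean(abs(actual - pred))

c(model1 = rmse(actual, pred_m1), model2 = rmse(actual, pred_m2))   # smaller is better
c(model1 = mae(actual, pred_m1),  model2 = mae(actual, pred_m2))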
An even better alternative is to set up training, testing, and validation sets before you build your models. If you train two models on the same training / test data you can compare them against your validation set (which has never been seen by your models) to get a more accurate measurement of your model's performance measures.
One final alternative: if you have a "cost" associated with an inaccurate forecast, apply those costs to your predictions and add them up. If one model performs poorly on a more expensive segment of data, you may want to avoid using it.
As a side note, your interpretation of a p-value as "lower is better" isn't quite right.
P-values address only one question: how likely are your data, assuming a true null hypothesis? They do not measure support for the alternative hypothesis.
I am using support vector regression in R to forecast future values of a univariate time series. Splitting the historical data into training and test sets, I fit a model with the svm function in R on the training data and then use the predict() command on the test data, so we can compute prediction errors. I wonder what happens next: we have a model, and by checking it on the test data we see that it performs well. How can I use this model to predict future values beyond the historical data? Generally speaking, we use the predict function in R and give it a forecast horizon (h = 12) to predict 12 future values. From what I have seen, the predict() method for SVM has no such argument and instead needs a dataset to predict on. How should I build such a dataset for predicting future values that are not in our historical data?
Thanks
Just a stab in the dark... SVM is not for prediction but for classification, specifically supervised classification. I am guessing you are trying to predict stock values, no? How about classifying your existing data, using some window size of your choice, say 100 values at a time, into noise (N), up (U), big up (UU), down (D), and big down (DD). In this way, as your data comes in, you slide your classification window and have it tell you whether the upcoming trend is N, U, UU, D, or DD.
What you can do is build a data frame with columns representing the actual stock price and its n lagged values, and use it as a training/test set (the actual value is the output and the lagged values are the explanatory variables). With this method you can forecast one day (or whatever the granularity is) into the future, and then use that prediction to make the next one, and so on, as in the sketch below.
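A minimal sketch of that idea, assuming the e1071 package for svm(); my_series, the number of lags, and the horizon are placeholders, not values from the question:

library(e1071)

n_lags  <- 5
horizon <- 12
prices  <- as.numeric(my_series)                 # the univariate series

# each row: current value y plus its n_lags previous values as predictors
lagged <- as.data.frame(embed(prices, n_lags + 1))
colnames(lagged) <- c("y", paste0("lag", 1:n_lags))

fit <- svm(y ~ ., data = lagged)                 # support vector regression on the lagged frame

# iterated one-step-ahead forecast: feed each prediction back in as a new lag
history <- prices
fc <- numeric(horizon)
for (i in seq_len(horizon)) {
  newx <- as.data.frame(t(rev(tail(history, n_lags))))
  names(newx) <- paste0("lag", 1:n_lags)         # lag1 is the most recent value
  fc[i] <- predict(fit, newx)
  history <- c(history, fc[i])
}
fc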
I want to examine which variable has the most impact on the outcome, which in my data is the stock yield. My data is like below.
And my code is also attached.
library(randomForest)
require(data.table)
data = fread("C:/stockcrazy.csv")
PEratio <- data$offeringPE/data$industryPE
data_update <- data.frame(data,PEratio)
train <- data_update[1:47,]
test <- data_update[48:57,]
For the above train and test subsets, I am not sure whether I need to do cross-validation on this data, and I don't know how to do it.
data.model <- randomForest(yield ~ offerings + offerprice + PEratio + count + bingo
+ purchase , data=train, importance=TRUE)
par(mfrow = c(1, 1))
varImpPlot(data.model, n.var = 6, main = "Random Forests: Top 6 Important Variables")
importance(data.model)
plot(data.model)
model.pred <- predict(data.model, newdata=test)
model.pred
d <- data.frame(test,model.pred)
I am not sure if the result of %IncMSE is good or bad. Can someone interpret this?
Additionally, I found that the predicted values for the test data are not a good prediction of the real data. So how can I improve this?
Let's see. Let's start with %IncMSE:
I found this really good answer on Cross Validated about %IncMSE, which I quote:
if a predictor is important in your current model, then assigning other values for that predictor randomly but 'realistically' (i.e.: permuting this predictor's values over your dataset), should have a negative influence on prediction, i.e.: using the same model to predict from data that is the same except for the one variable, should give worse predictions.

So, you take a predictive measure (MSE) with the original dataset and then with the 'permuted' dataset, and you compare them somehow. One way, particularly since we expect the original MSE to always be smaller, the difference can be taken. Finally, for making the values comparable over variables, these are scaled.
This means that in your case the most important variable is purchase: when the purchase variable was permuted (i.e. the order of its values was randomly changed), the resulting MSE was 12% higher than with the variable in its original order, which is what makes it the most important. Variable importance is just a measure of how important your predictor variables were in the model you used. In your case purchase was the most important and P/E ratio was the least important (of those 6 variables). This is not something you can interpret as good or bad, because it doesn't show you how well the model fits unseen data. I hope this is clear now.
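As a small aside (not in your original code), importance() with type = 1 returns just the permutation-importance column, which you can sort to rank the predictors:

imp <- importance(data.model, type = 1)          # the %IncMSE column only
imp[order(imp[, 1], decreasing = TRUE), , drop = FALSE]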
For the cross-validation:
You do not need to do cross-validation during the training phase because something similar happens automatically: approximately 2/3 of the records are used to build each tree, and the 1/3 that is left out (the out-of-bag data) is used to assess the tree afterwards (the R-squared for the tree is computed using the OOB data).
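If you want to look at those out-of-bag numbers directly, a minimal sketch using the data.model object from your code (mse and rsq are the fields a randomForest regression object stores):

data.model$mse[data.model$ntree]   # OOB mean squared error after the last tree
data.model$rsq[data.model$ntree]   # OOB pseudo R-squared after the last tree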
As for the improvement of the model:
By showing just the first 10 lines of the predicted and actual values of yield, you cannot make a safe decision on whether the model is good or bad. What you need is a measure of fit. The most common one is R-squared. It is simplistic, but for comparing models and getting a first opinion about your model it does its job. This is calculated by the model for every tree that you grow and can be accessed via data.model$rsq. It ranges from 0 to 1, with 1 being a perfect model and 0 showing a really poor fit (it can sometimes even take negative values, which indicates a very bad fit). If your rsq is bad, then you can try the following to improve your model, although it is not certain that you will get the results you wish for:
Calibrate your trees in a different way: change the number of trees grown and prune them by specifying a larger nodesize (here you use the default of 500 trees and a nodesize of 5, which might overfit your model); see the sketch after this list.
Increase the number of variables if possible.
Choose a different model. There are cases where a random forest does not work well.
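As a rough sketch of the first suggestion in the list above (the specific ntree and nodesize values are arbitrary examples, not recommendations):

data.model2 <- randomForest(yield ~ offerings + offerprice + PEratio + count + bingo + purchase,
                            data = train,
                            ntree = 2000,        # grow more trees than the default 500
                            nodesize = 10,       # larger terminal nodes -> shallower trees
                            importance = TRUE)
tail(data.model2$rsq, 1)                         # compare the OOB R-squared with the original fit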