Class Weight Syntax in Kernlab? - r

Hi, I am trying out classification on an imbalanced dataset in R using the kernlab package. As the class distribution is not 1:1, I am using the class.weights option in the ksvm() function call; however, I see no difference in the classification results whether I add the weights or remove them. So the question is: what is the correct syntax for declaring the class weights?
I am using the following function calls:
# with class weights
model = ksvm(dummy[1:466], lab_tr, type='C-svc', kernel=pre, cross=10, C=10, prob.model=F,
             class.weights=c("Negative"=0.7, "Positive"=0.3))
# without class weights
model = ksvm(dummy[1:466], lab_tr, type='C-svc', kernel=pre, cross=10, C=10, prob.model=F)
Can anyone please comment on this? Am I following the right syntax for adding weights? Also, I discovered that if we use the weights with prob.model=T, the ksvm function returns an error!

Your syntax is fine, but the problem of class weighting having no visible effect is fairly common in machine learning; in a way, removing some objects from the bigger class is the only method guaranteed to work, but it may increase the error, and one must be careful to do it in an intelligent way (in an SVM the potential support vectors should have priority; of course, that raises the question of how to locate them).
You may also try boosting the weights beyond the simple ratio of class sizes, say ten-fold, and check whether that helps at least a little, or, with luck, even overshoots the imbalance to the other side.
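As a rough illustration of the syntax only (made-up data and the default RBF kernel, not the question's precomputed kernel), you can compare an inverse-frequency weighting against a deliberately exaggerated one:

library(kernlab)

set.seed(1)
x <- rbind(matrix(rnorm(400), ncol = 2),            # 200 "Negative" rows
           matrix(rnorm(40, mean = 2), ncol = 2))   # 20 "Positive" rows
y <- factor(c(rep("Negative", 200), rep("Positive", 20)))

# class.weights is a named vector keyed by the class labels
w_ratio   <- c(Negative = 1, Positive = 200 / 20)        # inverse class frequency
w_boosted <- c(Negative = 1, Positive = 10 * 200 / 20)   # exaggerated ten-fold

m1 <- ksvm(x, y, type = "C-svc", C = 10, class.weights = w_ratio)
m2 <- ksvm(x, y, type = "C-svc", C = 10, class.weights = w_boosted)

table(truth = y, pred = predict(m1, x))
table(truth = y, pred = predict(m2, x))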

Related

Convergence Criteria in glmmTMB - what are my options?

When using glmmTMB() from the R package {glmmTMB} (see CRAN, with links to the manual & vignettes), I am aware that I have certain options when dealing with the convergence of models. More specifically, there is the control = argument, to which I can pass glmmTMBControl() parameters; these have their own section in the manual.
Furthermore, one of the vignettes, i.e. Troubleshooting with glmmTMB, talks explicitly about dealing with convergence problems. My key point, however, is that to my knowledge, any time glmmTMBControl() is mentioned, it is in one of these two ways:
glmmTMBControl(optCtrl=list(iter.max=1e3,eval.max=1e3)) i.e. increase the number of iterations
glmmTMBControl(optimizer=optim, optArgs=list(method="BFGS")) i.e. try a different optimizer
Regarding the second one, I am left with the impression that I have multiple options besides the one shown there, since "The optimizer argument can be any optimization (minimizing) function [...]".
Yet, I was not able to find out what other options I could actually use as my optimizer=, since it really seems to be only that exact example that is ever presented, and I would be thankful if someone could provide a list.
P.S.: I am trying to play around with glmmTMB's convergence criteria because it seems to often estimate slightly smaller variance components compared to the same model fit via PROC MIXED in SAS.
From ?glmmTMB:
The ‘optimizer’ argument can be any optimization (minimizing) function, provided that:
• the first three arguments, in order, are the starting values, objective function, and gradient function;
• the function also takes a ‘control’ argument;
• the function returns a list with elements (at least) ‘par’, ‘objective’, ‘convergence’ (0 if convergence is successful) and ‘message’ (‘glmmTMB’ automatically handles output from ‘optim()’, by renaming the ‘value’ component to ‘objective’).
The options built into base R that satisfy these requirements (gradient-based minimizers) are nlminb and optim with method="BFGS", "L-BFGS-B", or "CG". You could also look into optimizers provided by optimx or nloptr, but you'd probably have to write a wrapper function to make sure they satisfied the criteria above ...
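For concreteness, here is a small sketch on a dataset shipped with glmmTMB (the asker's own model and data are not shown, so this only illustrates the mechanics of switching among the built-in choices):

library(glmmTMB)

# Default optimizer is nlminb:
m_default <- glmmTMB(count ~ mined + (1 | site), family = poisson, data = Salamanders)

# optim() with a gradient-based method also satisfies the requirements quoted above:
m_bfgs <- glmmTMB(count ~ mined + (1 | site), family = poisson, data = Salamanders,
                  control = glmmTMBControl(optimizer = optim,
                                           optArgs   = list(method = "BFGS")))

m_cg <- glmmTMB(count ~ mined + (1 | site), family = poisson, data = Salamanders,
                control = glmmTMBControl(optimizer = optim,
                                         optArgs   = list(method = "CG")))

# Anything from optimx or nloptr would first need a small wrapper whose arguments
# and return value match the list quoted above.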

R: Finding help for classes created by packages

It often seems to be the case that R packages contain multiple functions that create an object of some class, specified by the package, with generic or non-generic methods that apply to all objects of that class. Although it is generally easy to find out about the functions in a package, I have not found any equally straightforward way to find a precise description of the class itself for S3 classes. I think this is at least partly intentional. Class definitions may be regarded as the sort of internal workings that, on one hand, the user should not have to think about, and on the other, may be changeable by the package creator, who wants people not to rely on them.
However, I find that I sometimes want to create additional objects of the same class that work with the package functions that are methods for that class. And it is not always easy to deduce what features an object must have in order to be usable by package functions that do various things to objects of that class, especially as instances created by different functions may or may not all have exactly the same structure.
The example with which I am currently wrestling is the forecast objects created by various functions of the forecast package. The forecast package provides a large number of functions that take forecast objects as inputs. This blog post by Rob Hyndman describes a function to do cross validation that requires an object of class forecast as an argument. The tsCV function documentation says it takes a "forecastFunction" as an argument, which must return an object of class forecast, take a univariate time series as its first argument (of forecasts, one assumes), and have an argument h giving the horizon. Well, that sounds easy enough. But then in Hyndman's associated textbook, section 3.6, we are told that forecast objects contain information about the forecasting method, the data, the point forecasts, prediction intervals, residuals, and fitted values. That's a lot of things, and I am not sure if they are all mandatory, or if some are optional or required only if you intend to use certain methods. And I don't know anything about the mandatory internal structure of the class.
Finally, I particularly want to know whether the new fable package, intended as a forecast package replacement, uses the same forecast class mechanism and requires the same internal structure, or, if not, how they differ. I have not been able to find, in fpp3 or elsewhere, anything that either describes a change or contains a comparable description of objects of class forecast.
I’m going to be embarrassed if there is some simple function,
you_should_know_this_dummy(package = "forecast", class = "forecast"),
that returns a detailed description of the class. But I have looked for such a function every way I could think of and not found it.
O.K., my bad. I was trying so hard to find a way of locating the help file for the class description (which I don't think exists) that I overlooked the existence of a pretty good description of the class forecast under the function forecast() in the manual for the package forecast. Here it is:
An object of class "forecast" is a list usually containing at least the following elements:
model: A list containing information about the fitted model
method: The name of the forecasting method as a character string
mean: Point forecasts as a time series
lower: Lower limits for prediction intervals
upper: Upper limits for prediction intervals
level: The confidence values associated with the prediction intervals
x: The original time series (either object itself or the time series used to create the model stored as object)
residuals: Residuals from the fitted model. For models with additive errors, the residuals will be x minus the fitted values.
fitted: Fitted values (one-step forecasts)
This still leaves some questions unanswered, like the format of the model-information element model, and of the x element for multivariate models. But I am hoping that these are similar to what is handed to or returned by, e.g., lm(). I think this gives me enough to get started and to hope for informative errors.
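Working from that element list, a rough and purely hypothetical sketch of building such an object by hand might look like the following; which elements the various methods actually require is not documented, so this is only a guess to experiment with:

library(forecast)

# All values below are made up; the point is only the structure and the class.
x_ts    <- ts(rnorm(24), frequency = 12, start = c(2020, 1))
fc_mean <- ts(rep(mean(x_ts), 12), frequency = 12, start = c(2022, 1))

fc <- structure(
  list(
    method    = "hand-rolled mean forecast",
    model     = list(note = "no real model object here"),
    mean      = fc_mean,
    lower     = cbind(`80%` = fc_mean - 1, `95%` = fc_mean - 2),
    upper     = cbind(`80%` = fc_mean + 1, `95%` = fc_mean + 2),
    level     = c(80, 95),
    x         = x_ts,
    fitted    = ts(rep(mean(x_ts), 24), frequency = 12, start = c(2020, 1)),
    residuals = x_ts - mean(x_ts)
  ),
  class = "forecast"
)

print(fc)      # if this prints sensibly, the structure is at least close enough to start from
# autoplot(fc)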
I still don't know if the fable package also uses objects of class forecast. The forecast package documents the forecast() function as a generic. The fable package does not document the generic, though it has a very similar list of functions that look like methods, e.g., forecast.whatever. If I figure out the answer, I'll post it here.
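One way to check that point directly is to fit any small fable model and inspect the class of its forecast output; the data and model below are arbitrary examples from tsibbledata, not from the question:

library(fable)
library(tsibble)
library(tsibbledata)
library(dplyr)

fit <- aus_production %>% model(ets = ETS(Beer))
fc  <- forecast(fit, h = "2 years")

class(fc)                  # which classes does fable attach to its forecasts?
inherits(fc, "forecast")   # does it inherit from the forecast package's class?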
I am also looking at a number of other packages that provide time series forecasts of particular types. I'm hoping that they provide output similar enough that I can use the forecast/fable functions for display, cross-validation, and so forth. We'll see.

Up-sampling in R - randomForest

I have highly imbalanced data and want to up-sample the minority class to improve accuracy (the minority class is the object of interest).
I tried using the "sampsize" option in the "randomForest" function, but it only allows for down-sampling. I read someplace that the "classwt" option can be used, but I am not sure how to use it.
Can anyone suggest a way to run Random Forest in R while up-sampling the minority class (using the "randomForest" library or other such libraries)?
Thanks.
The simplest approach is to just duplicate the minority-class data enough times, but then you lose the OOB estimates.
What you want to do directly does not appear to be implemented; see also this question.
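A minimal sketch of that duplication approach on made-up data (in practice, validate on untouched held-out data, since the duplicated rows make the OOB estimate optimistic):

library(randomForest)

set.seed(42)
dat <- data.frame(x1 = rnorm(1050), x2 = rnorm(1050),
                  y  = factor(c(rep("majority", 1000), rep("minority", 50))))

# Draw the minority rows with replacement until the classes match, then fit as usual.
minority  <- dat[dat$y == "minority", ]
upsampled <- rbind(dat[dat$y == "majority", ],
                   minority[sample(nrow(minority), 1000, replace = TRUE), ])

rf <- randomForest(y ~ ., data = upsampled)
table(truth = dat$y, predicted = predict(rf, dat))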

caret package: Is it possible to implement my own bootstrapping method?

I am using the caret package for R to select variables for my model. When using the rfe command, one should pass an rfeControl object, which has a method parameter. Options for this parameter are boot, cv, LOOCV and LGOCV. Since I am dealing with time series data, I need to use special bootstrapping/cross-validation techniques, as the normal ones do not apply to time series data (otherwise the distributions get corrupted, etc.).
My question is how I would plug in my own implementation of bootstrapping but still use the caret rfe method, which has everything else I need.
There isn't an easy way. If you study the code for rfe.default() you will note that in cases where method = "boot", the createResample() function is used. This is the function that generates the bootstrap samples. Similar functions are used for the other CV methods.
There is a hard way: take over the create*() function that is most appropriate. Say you want to do a block bootstrap or ME bootstrap; then take over the createResample() function and use method = "boot". If you want a special form of CV, use method = "cv" and take over createFolds() instead.
You will need to write your own create*() function and replace the one in the caret NAMESPACE with your version. Not easy, but eminently doable. Say you write your own createResample() function; first note that this function creates n = times bootstrap samples, returning them in a matrix with times columns and as many rows as you have samples. Your custom createResample() needs to return the same kind of object but perform the time series bootstrapping you want to employ.
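Purely as an illustration, a replacement doing a simple moving-block bootstrap might look roughly like this; the signature and the matrix return shape follow the description above, and the block size is an arbitrary assumption, so check args(caret::createResample) in your caret version before relying on it:

createMyResample <- function(y, times = 10, ...) {
  n          <- length(y)
  block_size <- 20                               # arbitrary choice for illustration
  starts     <- seq_len(n - block_size + 1)
  one_sample <- function() {
    picked <- sample(starts, ceiling(n / block_size), replace = TRUE)
    idx    <- unlist(lapply(picked, function(s) s:(s + block_size - 1)))
    idx[seq_len(n)]                              # trim to the original length
  }
  out <- replicate(times, one_sample())          # n rows, 'times' columns of row indices
  colnames(out) <- paste0("Resample", seq_len(times))
  out
}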
Once you have written that function, you need to get it into the caret namespace so that it is used by the functions in the caret package. For this you use assignInNamespace(). Say your new bootstrapping function is called createMyResample() and it is in your workspace; to insert it into the caret namespace do:
assignInNamespace("createResample", createMyResample, ns = "caret")
Sorry I can't be more specific, but you don't say how you want the bootstrap/CV to be performed, nor what R code you want to use to do the resampling. If you provide further details on how you would do the resampling, I will take another look and see if I can help you write your create*() function.
Failing all of this, contact Max Kuhn, the author and maintainer of caret; he may be able to advise further, or at least you can suggest this feature for the wish-list of a future version.

R script - nls function

Can anyone give me a good explanation of what the parameter "algorithm" does in the nls function in R?
Also, how does the formula work? I know it uses a tilde, but I can't really find a down-to-earth explanation of it.
Also, how important are the start values? Do I need to try multiple start values, or is nls guaranteed to find the correct parameters regardless of the start values I use?
In brief:
nls() is going to vary the parameters to try to minimize the squared error between your model and your data. There are several good methods it can use to find the minimum. Reading the details about "method" in ?optim will provide some good info and references.
In general, for nonlinear models, your results can be sensitive to the initial guess. You should try several different guesses to make sure that the outputs are close. If your results are very sensitive to your guess, you can try re-parameterizing, using a different algorithm, or rethinking your model.
As for the formula, I'd echo the previous answer: work through the examples at the bottom of ?nls and then try to ask a more specific question.
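To make the formula and starting-value points concrete, here is a small sketch on made-up data (the exponential model is just an example):

set.seed(1)
x <- seq(0, 10, length.out = 50)
y <- 5 * exp(-0.4 * x) + rnorm(50, sd = 0.2)
dat <- data.frame(x, y)

# In "y ~ a * exp(-b * x)" the tilde separates the response from the model
# expression; anything in that expression that is not a column of 'data'
# (here a and b) is treated as a parameter for nls() to estimate.
fit1 <- nls(y ~ a * exp(-b * x), data = dat, start = list(a = 1,  b = 0.1))
fit2 <- nls(y ~ a * exp(-b * x), data = dat, start = list(a = 10, b = 1))

coef(fit1)
coef(fit2)   # if these differ noticeably, the fit is sensitive to the starting guess

# The algorithm argument switches the fitting method, e.g. the default Gauss-Newton vs "port" or "plinear":
# fit3 <- nls(y ~ a * exp(-b * x), data = dat, start = list(a = 1, b = 0.1), algorithm = "port")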
