standard errors in R psoptim

I am working on estimating a non-linear function. I am using optim() and the Rsolnp package to do so, and I am able to obtain the standard errors of my estimated parameters from the Hessian that these functions return. As my problem appears to be quite non-smooth, I found that I can use psoptim (particle swarm optimization). However, I cannot figure out how to obtain a Hessian from this package. Any help would be much appreciated.
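psoptim() (e.g. from the pso package) does not return a Hessian, but one workaround is to compute a numerical Hessian at the returned optimum with numDeriv and invert it, just as one would with the Hessian from optim(). A minimal sketch, with a toy objective standing in for the actual negative log-likelihood:

```r
library(pso)       # provides psoptim()
library(numDeriv)  # provides hessian()

# toy stand-in for a (smooth) negative log-likelihood, minimum at (1, -2)
nll <- function(p) (p[1] - 1)^2 + (p[2] + 2)^2

# particle swarm optimization over a box; NA starting values mean random init
fit <- psoptim(rep(NA, 2), nll, lower = -5, upper = 5)

# numerical Hessian at the optimum; its inverse estimates the
# variance-covariance matrix when nll is a negative log-likelihood
H  <- numDeriv::hessian(nll, fit$par)
se <- sqrt(diag(solve(H)))
```

Note that if the objective is genuinely non-smooth at the optimum, the numerical Hessian (and the usual asymptotic standard-error theory behind it) may not be meaningful there.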

Related

How can one calculate ROC's AUCs in complex designs with clustering in R?

The AUC-calculating packages I have found so far do not account for sample clustering, which inflates standard errors relative to simple random sampling. I wonder whether the standard errors provided by these packages could be recalculated to allow for clustering.
Thank you.
Your best bet is probably replicate weights, as long as you can get point estimates of AUC that incorporate weights.
If you convert your design into a replicate-weights design object (using survey::as.svrepdesign()), you can then evaluate any R function or expression with the replicate weights via survey::withReplicates() and obtain a standard error.
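A sketch of that recipe on synthetic clustered data. The weighted-AUC helper below is my own illustration (the weighted probability that a case's score exceeds a control's score, with ties counted one half), not a function from any package:

```r
library(survey)

set.seed(1)
# synthetic clustered data: 20 clusters of 10 observations each
dat <- data.frame(cluster = rep(1:20, each = 10),
                  w       = runif(200, 0.5, 1.5))
dat$y     <- rbinom(200, 1, 0.4)
dat$score <- dat$y + rnorm(200)   # a classifier score correlated with y

# illustrative weighted AUC estimator
wauc <- function(wts, data) {
  pos <- data$y == 1; neg <- data$y == 0
  cmp <- outer(data$score[pos], data$score[neg], ">") +
         0.5 * outer(data$score[pos], data$score[neg], "==")
  wp  <- outer(wts[pos], wts[neg])
  sum(wp * cmp) / sum(wp)
}

des     <- svydesign(ids = ~cluster, weights = ~w, data = dat)
rep_des <- as.svrepdesign(des, type = "bootstrap")
est     <- withReplicates(rep_des, wauc)  # point estimate + cluster-aware SE
```

coef(est) then gives the weighted AUC and SE(est) the replicate-based standard error that respects the clustering.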

mlr3 optimized average of ensemble

I am trying to optimize the averaged prediction of two logistic regressions in a classification task using a super learner.
My measure of interest is classif.auc
The mlr3 help file tells me (?mlr_learners_avg)
Predictions are averaged using weights (in order of appearance in the
data) which are optimized using nonlinear optimization from the
package "nloptr" for a measure provided in measure (defaults to
classif.acc for LearnerClassifAvg and regr.mse for LearnerRegrAvg).
Learned weights can be obtained from $model. Using non-linear
optimization is implemented in the SuperLearner R package. For a more
detailed analysis the reader is referred to LeDell (2015).
I have two questions regarding this information:
When I look at the source code, I think LearnerClassifAvg$new() defaults to "classif.ce"; is that true?
I think I could set it to classif.auc with param_set$values <- list(measure = "classif.auc", optimizer = "nloptr", log_level = "warn").
The help file refers to the SuperLearner package and LeDell (2015). If I understand correctly, the "AUC-Maximizing Ensembles through Metalearning" approach proposed in that paper is, however, not implemented in mlr3? Or am I missing something? Could this approach be applied in mlr3? In the mlr3 book I found a paragraph about calling an external optimization function; would that be possible for SuperLearner?
As far as I understand it, LeDell (2015) proposes and evaluates a general strategy that optimizes AUC as a black-box function by learning optimal weights. The paper does not really propose a single best strategy or any concrete defaults, so I looked into the defaults of the SuperLearner package's AUC optimization strategy.
Assuming I understood the paper correctly:
LearnerClassifAvg basically implements what is proposed in LeDell (2015): it optimizes the weights for any metric using non-linear optimization, and LeDell (2015) focuses on the special case of optimizing AUC. As you rightly pointed out, by setting the measure to "classif.auc" you get a meta-learner that optimizes AUC. The default optimization routine differs between mlr3pipelines and the SuperLearner package: we use NLOPT_LN_COBYLA, whereas SuperLearner "... uses the Nelder-Mead method via the optim function to minimize rank loss" (from its documentation).
So in order to get exactly the same behaviour, you would need to implement a Nelder-Mead bbotk::Optimizer (similar to here) that simply wraps stats::optim with method Nelder-Mead, and carefully compare settings and stopping criteria. I am fairly confident that NLOPT_LN_COBYLA delivers somewhat comparable results; LeDell (2015) includes a comparison of the different optimizers for further reference.
Thanks for spotting the error in the documentation. I agree that the description is a little unclear, and I will try to improve it!
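Putting the suggestion from the answer into code, a sketch of switching the ensemble objective to AUC (assuming a recent mlr3pipelines; the exact parameter names may differ across versions):

```r
library(mlr3)
library(mlr3pipelines)

lrn_avg <- LearnerClassifAvg$new()

# switch the ensemble-weight objective from the default (classif.ce)
# to AUC, optimized via nloptr (NLOPT_LN_COBYLA, per the answer above)
lrn_avg$param_set$values <- list(measure   = "classif.auc",
                                 optimizer = "nloptr",
                                 log_level = "warn")
```

After training on a task, the learned ensemble weights can then be inspected via lrn_avg$model, as the help file notes.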

Difference between brglm & logistf?

I am currently fitting a penalized logistic regression model using the package logistf (due to quasi-complete separation).
I chose this package over brglm because I found many more recommendations for logistf. However, brglm seems to integrate better with other functions such as predict() or margins::margins(). The documentation of brglm says:
"Implementations of the bias-reduction method for logistic regressions can also be found in the logistf package. In addition to the obvious advantage of brglm in the range of link functions that can be used ("logit", "probit", "cloglog" and "cauchit"), brglm is also more efficient computationally."
Has anyone experience with those two packages and can tell me whether I am overlooking a weakness in brglm, or can I just use it instead of logistf?
I'd be grateful for any insights!
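For context, both packages fit bias-reduced (Firth-type) logistic regressions and both return finite estimates under complete separation, where plain glm() would not. A small comparison sketch on a toy separated data set; the two fits are expected to be comparable, though exact numerical agreement is not guaranteed:

```r
library(logistf)
library(brglm)

# toy data with complete separation: y switches from 0 to 1 at x = 0,
# so ordinary glm() ML estimates would diverge
d <- data.frame(x = c(-2, -1, -0.5, 0.5, 1, 2),
                y = c( 0,  0,  0,   1,   1, 1))

fit_logistf <- logistf(y ~ x, data = d)
fit_brglm   <- brglm(y ~ x, family = binomial("logit"), data = d)

coef(fit_logistf)
coef(fit_brglm)

# brglm objects also work with predict():
predict(fit_brglm, newdata = data.frame(x = 0), type = "response")
```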

Is it possible to change the default likelihood functions for the distributions in R's glm()?

I am currently trying to create a custom likelihood function for a logistic model fitted with glm(). The most standard approach is to use optim(), mle2(), etc.; however, these are often slow and clunky. Would it be possible to go into the internals of glm() and hard-code my custom likelihood function? Would there be any problems with this? Does this have anything to do with the link specification in glm()? Thank you!
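One observation: glm() is a function in the base stats package rather than a package itself, and its likelihood is determined by the family object, so editing glm internals is fragile. What can be customized without touching glm() is the link: binomial() accepts a user-built "link-glm" object. A sketch (the custom link here simply reimplements the logit, so the fit should match the standard one):

```r
set.seed(42)
d <- data.frame(x = rnorm(100))
d$y <- rbinom(100, 1, plogis(0.5 + d$x))

# a user-defined link object; here it reproduces the logit for illustration
my_link <- structure(list(
  linkfun  = function(mu)  qlogis(mu),   # eta = g(mu)
  linkinv  = function(eta) plogis(eta),  # mu  = g^{-1}(eta)
  mu.eta   = function(eta) dlogis(eta),  # d mu / d eta
  valideta = function(eta) TRUE,
  name     = "my-logit"
), class = "link-glm")

fit_custom <- glm(y ~ x, family = binomial(link = my_link), data = d)
fit_std    <- glm(y ~ x, family = binomial("logit"),        data = d)
```

Changing the likelihood itself (rather than the link) is a different matter; that is what mle2()-style direct optimization is for.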

Programming/Calculating variance covariance matrix for DEoptim() and SCEoptim()

I am looking for an extension to the R functions DEoptim() and SCEoptim() from the DEoptim and SCEoptim packages, respectively.
The extension could be an altered package, or perhaps a guide on how to calculate/program a variance-covariance matrix from the parameter estimates of these global optimisers.
On a side note: I am a little surprised not to find the "standard" outputs of, e.g., optim() in the optimisers mentioned above.
You can also use the subplex package/function, which gives you the Hessian in the output list.
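Neither DEoptim() nor SCEoptim() returns a Hessian, but a common workaround (a sketch, not a feature of either package) is to compute a numerical Hessian at the returned optimum with numDeriv and invert it. Note this yields a valid variance-covariance matrix only when the objective is a negative log-likelihood; the toy quadratic below is a stand-in:

```r
library(DEoptim)
library(numDeriv)

# toy negative log-likelihood stand-in with minimum at (1, 2)
nll <- function(p) (p[1] - 1)^2 + (p[2] - 2)^2

fit  <- DEoptim(nll, lower = c(-5, -5), upper = c(5, 5),
                control = DEoptim.control(trace = FALSE))
best <- fit$optim$bestmem

# numerical Hessian at the optimum; its inverse estimates the
# variance-covariance matrix of the parameters
H  <- numDeriv::hessian(nll, best)
vc <- solve(H)
se <- sqrt(diag(vc))
```

The same recipe should carry over to SCEoptim() output, substituting wherever that function stores its best parameter vector.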
