XGBoost Custom Objective Function - XGBClassifier

This will be a long question. I'm trying to define my own custom objective function.
I want to use the XGBClassifier, so I run
from xgboost import XGBClassifier
The xgboost documentation says:
A custom objective function can be provided for the objective parameter. In this case, it should have the signature
objective(y_true, y_pred) -> grad, hess :
y_true: array_like of shape [n_samples], The target values
y_pred: array_like of shape [n_samples], The predicted values
grad: array_like of shape [n_samples], The value of the gradient for each sample point.
hess: array_like of shape [n_samples], The value of the second derivative for each sample point
Now, I've coded this custom objective:
def guess_averse_loss(y_true, y_pred):
    y_true = y_true.astype(int)
    y_pred = y_pred.astype(int)
    ... stuffs ...
    return grad, hess
Everything is compatible with the documentation above.
If I run:
classifier = XGBClassifier(eval_metric=custom_weighted_accuracy, objective=guess_averse_loss, **params_common_model)
classifier.fit(X_train, y_train)
(where custom_weighted_accuracy is a custom metric I defined following the scikit-learn documentation)
I get the error:
-> first_term = np.multiply(cost_matrix[y_true, y_pred], np.exp(y_pred - y_true))
IndexError: shape mismatch: indexing arrays could not be broadcast together with shapes (4043,) (4043,5)
So, y_pred enters the function as a matrix (n_samples x n_classes) where the element ij is the probability that the sample i belongs to the class j.
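For reference, whatever the loss, the returned grad and hess must contain one entry per sample per class (recent XGBoost versions accept them as (n_samples, n_classes) matrices; older ones expect them flattened). Here is a minimal shape-consistent sketch, with the standard softmax cross-entropy gradient and hessian standing in for my cost-matrix loss; note that, depending on the version, y_pred may be raw margins rather than probabilities:
import numpy as np

def softmax(z):
    # row-wise softmax over the raw score matrix
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def shape_consistent_objective(y_true, y_pred):
    # y_pred has shape (n_samples, n_classes); y_true has shape (n_samples,)
    p = softmax(y_pred)
    onehot = np.zeros_like(p)
    onehot[np.arange(len(y_true)), y_true.astype(int)] = 1.0
    grad = p - onehot                             # shape (n_samples, n_classes)
    hess = np.maximum(2.0 * p * (1.0 - p), 1e-6)  # same shape, kept positive
    return grad, hess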
Then, I modify the line to
first_term = np.multiply(cost_matrix[y_true, np.argmax(y_pred, axis=1)], np.exp(np.argmax(y_pred, axis=1) - y_true))
so that the prediction passes from a matrix to an array.
This leads to the error:
unknown custom metric
so it seems that the problem now is the metric.
I try removing the custom objective function and using the default one, and another error appears:
XGBoostError: Check failed: in_gpair->Size() % ngroup == 0U (3 vs. 0) : must have exactly ngroup * nrow gpairs
WHAT CAN I DO???
You've read what I've tried; I'm hoping for some suggestions to solve these problems.

Related

How can I optimize the expected value of a function in R?

I have derived a survival function for a system of components (ignore the details of how this system is set up) and I am trying to maximize its expected value, or more specifically, to maximize the expected value of the function:
surv_func = function(x,mu) {(exp(-(x/(mu))^(1/3))*((1-exp(-(4/3)*x^(3/2)))+exp(-(-(4/3)*x^(3/2)))))*exp(-(x/(3-mu))^(1/3))}
and I am supposed (since the PDF containing my tasks gives a hint about it) to use the function
optimize()
and the expected value for a function can be computed with
# Computes the expected value of the function "f"
E <- integrate(f, 0, Inf)
but my function depends on x and mu. The expected value could (obviously) be computed if the integral had no mu but instead depended only on x. For those interested, the mu comes from the fact that one of the components has a Weibull distribution with parameters (1/3, mu), and the 3-mu comes from the fact that another component has a Weibull distribution with parameters (1/3, lambda). In the task there is a constraint mu + lambda = 3, so I thought substituting the lambda parameter in the second Weibull distribution with lambda = 3 - mu and maximizing this problem would yield not only mu, but also lambda.
If I try, just for the sake of learning about R, to compute the expected value using the code below (in the console window), it just gives me the following:
> E <- integrate(surv_func,0,Inf)
Error in (function (x, mu) : argument "mu" is missing, with no default
I am new to R and seem to be a little bit "slow" at learning. How can I approach this problem?
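(A sketch of the mechanism, in case it helps: integrate() forwards extra named arguments to the integrand, so the expected value can be wrapped as a function of mu alone and handed to optimize(). The interval c(0, 3) is an assumption from the constraint mu + lambda = 3, and this presumes the integrand is well-behaved enough for integrate() to converge.)
# Expected value of surv_func as a function of mu alone:
# integrate() forwards the named argument mu to the integrand.
expected_surv <- function(mu) {
  integrate(surv_func, lower = 0, upper = Inf, mu = mu)$value
}

# Maximize over the admissible range of mu implied by mu + lambda = 3.
opt <- optimize(expected_surv, interval = c(0, 3), maximum = TRUE)
opt$maximum        # optimal mu
3 - opt$maximum    # implied lambda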

How to weight observations in mxnet?

I am new to neural networks and the mxnet package in R. I want to do a logistic regression on my predictors, since my observations are probabilities varying between 0 and 1. I'd like to weight my observations by a vector obsWeights I have, but I'm not sure where to implement the weights. There seems to be a weight= option in mx.symbol.FullyConnected, but if I try weight=obsWeights I get the following error message:
Error in mx.varg.symbol.FullyConnected(list(...)) :
Cannot find argument 'weight', Possible Arguments:
----------------
num_hidden : int, required
Number of hidden nodes of the output.
no_bias : boolean, optional, default=False
Whether to disable bias parameter.
How should I proceed to weight my observations? Here is my code at the moment.
# Prepare data
train.mm = model.matrix(obs ~ . , data = train_data)
train_label = train_data$obs
# Normalize
train.mm = apply(train.mm, 2, function(x) (x-min(x))/(max(x)-min(x)))
# Create MXDataIter compatible iterator
batch_size = 128
train.iter = mx.io.arrayiter(data=t(train.mm), label=train_label,
                             batch.size=batch_size, shuffle=T)
# Symbolic model definition
data = mx.symbol.Variable('data')
fc1 = mx.symbol.FullyConnected(data=data, num.hidden=128, name='fc1')
act1 = mx.symbol.Activation(data=fc1, act.type='relu', name='act1')
final = mx.symbol.FullyConnected(data=act1, num.hidden=1, name='final')
logistic = mx.symbol.LogisticRegressionOutput(data=final, name='logistic')
# Run model
mxnet_train = mx.model.FeedForward.create(
  symbol = logistic,
  X = train.iter,
  initializer = mx.init.Xavier(rnd_type = 'gaussian', factor_type = 'avg', magnitude = 2),
  num.round = 25)
Assigning the fully connected weight argument is not what you want to do at any rate. That weight is a reference to the parameters of the layer, i.e., what you multiply the inputs by to get output values. These are the parameter values you're trying to learn.
If you want to make some samples matter more than others, you'll need to adjust the loss function. For example, multiply the usual per-sample loss by your weights, so that low-weight samples do not contribute as much to the overall average loss.
I do not believe the standard MXNet loss functions have a spot for assigning weights (that is, LogisticRegressionOutput won't cover this). However, you can make your own cost function that does. This would involve first passing your final layer through a sigmoid activation function to generate the usual logistic regression output value, and then passing that into the loss function you define. You could use squared error, but for logistic regression you'll probably want the cross-entropy function:
-(l * log(y) + (1 - l) * log(1 - y)),
where l is the label and y is the predicted value.
Ideally, you'd write a symbol with an efficient definition of the gradient (MXNet has a cross-entropy function, but it's for softmax input, not a binary output; you could translate your output to two outputs with softmax as an alternative, but that seems less easy to work with in this case), but the easiest path is to let MXNet do its autodiff. Then you multiply that cross-entropy loss by the weights.
I haven't tested this code, but you'd ultimately have something like this (this is what you'd do in Python; it should be similar in R):
label = mx.sym.Variable('label')
out = mx.sym.Activation(data=final, act_type='sigmoid')
# per-sample cross entropy (negative log-likelihood)
ce = -(label * mx.sym.log(out) + (1 - label) * mx.sym.log(1 - out))
weights = mx.sym.Variable('weights')
loss = mx.sym.MakeLoss(weights * ce, normalization='batch')
Then you want to input your weight vector into the weights Variable along with your normal input data and labels.
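I haven't tested this part either, but with the Python module API that wiring might look like the sketch below; data_names/label_names tell the module which inputs to expect, and 'weights' here is the Variable defined above:
# expose 'weights' as an extra data input alongside 'data'
mod = mx.mod.Module(symbol=loss,
                    data_names=('data', 'weights'),
                    label_names=('label',))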
As an added tip, an MXNet network with a custom loss via MakeLoss outputs the loss, not the prediction. You'll probably want both in practice, in which case it's useful to group the loss with a gradient-blocked version of the prediction so that you can get both. You'd do that like this:
pred_loss = mx.sym.Group([mx.sym.BlockGrad(out), loss])

Portfolio Optimization using non-linear optimizer

I have been attempting to optimize a portfolio (max return subject to a target risk) of returns, using a known mu and covariance matrix, subject to box and group constraints. It seems that the best solution is to create my own function using the 'Rdonlp2' package. However, when testing this package under very basic (long-only) constraints, it only produces an equal-weighted portfolio. When I added a box and a group constraint, it produced an equal-weighted portfolio subject to the box constraint but did not register the group constraint.
# If you do not already have the packages:
install.packages("fPortfolio")
install.packages("Rdonlp2", repos="http://R-Forge.R-project.org")
library(fPortfolio)
library(Rdonlp2)
lppData=100*LPP2005.RET[,1:6]
mr1Spec = portfolioSpec()
setTargetRisk(mr1Spec) = 0.1
setSolver(mr1Spec) = "solveRdonlp2"
efficientPortfolio(data=lppData, spec=mr1Spec, constraints="LongOnly")
efficientPortfolio(data=lppData, spec=mr1Spec, constraints= c("maxsum[1:6]=.75", "maxW[1:6]=.1"))
Has anyone had success using this optimizer? Any help getting the portfolio to optimize correctly, or setting up my own function, would be greatly appreciated!
@WaltS
If I use only box constraints and don't specify the optimizer, I get a solution (code below):
library(fPortfolio)
lppData=100*LPP2005.RET[,1:6]
mr1Spec = portfolioSpec()
portfolioFrontier(data=lppData, spec=mr1Spec, constraints=c("minW[1:6]=-1", "maxW[1:6]=2"))
However, if I add group constraints...
portfolioFrontier(data=lppData, spec=mr1Spec, constraints= c("maxsumW[1:6]=.2", "maxsumW[1:6]=.75"))
and run it, it produces the following error...
Error in `colnames<-`(`*tmp*`, value = c("SBI", "SPI", "SII", "LMI", "MPI", :
attempt to set 'colnames' on an object with less than two dimensions
In the box-only case, do you know if it's possible to call the portfolio weights corresponding to the vol closest to .15, which would be a var of about .38?
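(A sketch of one way to do that with fPortfolio's accessors; I'm assuming frontierPoints() returns a matrix with a "targetRisk" column, so treat the column name as unverified.)
frontier <- portfolioFrontier(data = lppData, spec = mr1Spec,
                              constraints = c("minW[1:6]=-1", "maxW[1:6]=2"))
pts <- frontierPoints(frontier)   # risk/return coordinates of each frontier point
w <- getWeights(frontier)         # corresponding weight vectors, one row per point
idx <- which.min(abs(pts[, "targetRisk"] - 0.15))
w[idx, ]                          # weights of the point with risk closest to .15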

Estimate parameters of Frechet distribution using mmedist or fitdist(with mme) error

I'm relatively new to R and I would appreciate it if you could take a look at the following code. I'm trying to estimate the shape parameter of the Frechet distribution (or inverse Weibull) using mmedist (I also tried fitdist, which calls mmedist), but I get the following error:
Error in mmedist(data, distname, start = start, fix.arg = fix.arg, ...) :
the empirical moment function must be defined.
The code that I use is the below:
require(actuar)
library(fitdistrplus)
library(MASS)
# values
n = 100
scale = 1
shape = 3
# simulate a sample
data_fre = rinvweibull(n, shape, scale)
memp = minvweibull(c(1,2), shape=3, rate=1, scale=1)
# estimating the parameters
para_lm = mmedist(data_fre, "invweibull", start=c(shape=3, scale=1), order=c(1,2), memp="memp")
Please note that I tried changing the code many times to see if my mistake was in the syntax, but I always get the same error.
I'm aware of the example in the documentation. I've tried that as well, but with no luck. Please note that in order for the method to work, the order of the moment must be smaller than the shape parameter.
The example is the following:
require(actuar)
# simulate a sample
x4 <- rpareto(1000, 6, 2)
# empirical raw moment
memp <- function(x, order)
  ifelse(order == 1, mean(x), sum(x^order)/length(x))
# fit
mmedist(x4, "pareto", order=c(1, 2), memp="memp",
        start=c(shape=10, scale=10), lower=1, upper=Inf)
Thank you in advance for any help.
You will need to make non-trivial changes to the source of mmedist -- I recommend that you copy out the code, and make your own function foo_mmedist.
The first change you need to make is on line 94 of mmedist:
if (!exists("memp", mode = "function"))
That line checks whether "memp" is a function that exists, as opposed to whether the argument you have actually passed exists as a function. Change it to:
if (!exists(as.character(expression(memp)), mode = "function"))
The second change, as I have already noted, relates to the fact that the optim routine calls funobj, which calls DIFF2, which in turn (see line 112) calls the user-supplied memp function (minvweibull in your case) with two arguments: obs, which resolves to data, and order. Since minvweibull does not take the data as its first argument, this fails.
This is expected, as the help page tells you:
memp: A function implementing empirical moments, raw or centered, but it has to be consistent with the distr argument. This function must have two arguments: as a first one the numeric vector of the data, and as a second the order of the moment returned by the function.
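In other words, a conforming raw-moment function is as simple as the following sketch (mathematically equivalent to the one in the Pareto example above):
# empirical raw moment: data first, order second, per the help page
memp <- function(x, order) mean(x^order)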
How can you fix this? Pass the function moment from the moments package. Here is complete code (assuming that you have made the change above, and created a new function called foo_mmedist):
library(moments)  # provides moment()
# values
n = 100
scale = 1
shape = 3
# simulate a sample
data_fre = rinvweibull(n, shape, scale)
# estimating the parameters
para_lm = foo_mmedist(data_fre, "invweibull",
                      start = c(shape=5, scale=2), order = c(1, 2), memp = moment)
You can check that optimization has occurred as expected:
> para_lm$estimate
shape scale
2.490816 1.004128
Note, however, that this actually reduces to a crude way of doing overdetermined method of moments, and I am not sure that it is theoretically appropriate.

BUGS error: "expected the collection operator c error pos 13018"

I am trying to evaluate hierarchical models from R using the R2OpenBUGS library.
Relevant variables are:
N = 191,
p = 4,
k = 1,
x = N * p matrix (i.e. 191 * 4) of values,
t0 = k * (x' * x),
y = vector of continuous data with length N,
mu0 = vector of 4 zeros (i.e. c(0,0,0,0)),
prob = vector of 4 probabilities at 0.5 (i.e. c(0.5,0.5,0.5,0.5)),
indimodel = vector of 4 parameter groupings (i.e. c(1,2,4,8)).
Initial values for tau and gama are generated using the following function in R:
inits <- function()
{
  list(tau=runif(1,0,10), gama=c(1,1,1,1))
}
Thus, BUGS should just generate initial values for relevant variables missing from the list in inits().
However, when I attempt to run the following BUGS model:
model
{
  for (i in 1:N)
  {
    mu[i] <- inprod(x[i,], nu[])
    y[i] ~ dnorm(mu[i], tau)
  }
  for (i in 1:p)
  {
    gama[i] ~ dbern(prob[i])
    nu[i] <- beta[i]*gama[i]
  }
  for (i in 1:p)
  {
    beta[i] ~ dnorm(mu0[i], t0[i])
  }
  tau ~ dgamma(0.00001, 0.00001)
  model <- inprod(gama[], indimodel[])
  sigma <- sqrt(1/tau)
}
...I get the following error:
"expected the collection operator c error pos 13018"
"variable N is not defined"
...described in the log as:
model is syntactically correct
expected the collection operator c error pos 13018
variable N is not defined
model must have been compiled but not updated to be able to change RN generator
BugsCmds:NoCompileInits
model must be compiled before generating initial values
model must be initialized before updating
model must be initialized before monitors used
model must be initialized before monitors used
model must be initialized before monitors used
model must be initialized before monitors used
model must be initialized before monitors used
model must be initialized before monitors used
model must be initialized before monitors used
model must be initialized before DIC can be monitored
model must be initialized before updating
model must be initialized before monitors used
DIC monitor not set
I have a feeling that this issue stems from a missing declaration for some variable's (or variables') initial value(s).
I found the bug. I mistakenly treated t0 as a vector, indexing it as t0[i], when I should have specified it as a matrix. From R, t0 is defined as a matrix (see the list of variables above), and OpenBUGS throws the collection error because the model's use of t0[i] leads it to expect a vector.
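Concretely, the prior block on beta should draw the whole vector from a multivariate normal with t0 as the p x p precision matrix, instead of looping over univariate normals. A sketch of the corrected lines, assuming the rest of the model is unchanged:
# replaces the for (i in 1:p) { beta[i] ~ dnorm(mu0[i], t0[i]) } loop;
# t0[,] is the p x p precision matrix passed in from R
beta[1:p] ~ dmnorm(mu0[], t0[,])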
