Building a Pyomo model with a piecewise constraint

I am trying to build a Pyomo model which has a constraint of the following form,
where y_t is an auxiliary binary variable and theta_t_in is also a Pyomo variable.
I have tried to build it using the max function, but Pyomo does not allow this type of expression. Any idea how I can build this constraint in Pyomo?
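For reference, one standard workaround, sketched under the assumption that the missing constraint has the generic pairwise form z_t = max(a_t, b_t) (the actual formula is not shown above; z_t, a_t, b_t and M are placeholder names): a max() of variables cannot be posted directly, but it can be linearized with the binary y_t and a sufficiently large constant M:

z_t >= a_t
z_t >= b_t
z_t <= a_t + M * (1 - y_t)
z_t <= b_t + M * y_t

With y_t = 1 the last two constraints force z_t = a_t, and with y_t = 0 they force z_t = b_t, so the solver selects whichever term attains the maximum. If the model only needs z_t >= max(a_t, b_t), the first two inequalities suffice and no binary is required. Each of these is an ordinary linear constraint that Pyomo accepts.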

Related

How to get constraints in matrix format from Gurobi/JuMP?

I have built an LP model in JuMP/Julia using the Gurobi solver. I wish to visualize the constraints to check the overall correctness of my model. In Python, we can define a helper function to visualize the constraints; please follow the link below for the Python solution. How can I get access to the constraint matrix and visualize it in JuMP?
Get constraints in matrix format from gurobipy
Best,
NG

XGBoost Explainer in R - How to enforce constraints on feature contributions?

There are several R functions and packages to estimate feature contributions to the response variable from an XGBoost model.
For example there are the following:
library(xgboostExplainer)
function: buildExplainer
library(DALEX)
function: prediction_breakdown
Each of those functions returns the contributions of each feature in the X matrix to the response variable. This is particularly useful with continuous target variables.
Does anyone know how to constrain those contributions, for example to be positive?
Thanks
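For context, a minimal sketch of the contribution workflow the question describes, with assumptions flagged: mtcars is only a stand-in dataset, and predict_parts() is the newer DALEX entry point that superseded prediction_breakdown(). This reproduces the baseline workflow only; it does not constrain the contributions.

library(xgboost)
library(DALEX)
x <- as.matrix(mtcars[, -1])    # stand-in predictors
y <- mtcars$mpg                 # continuous target
bst <- xgboost(data = x, label = y, nrounds = 50, objective = "reg:squarederror", verbose = 0)
expl <- explain(bst, data = x, y = y, label = "xgb", predict_function = function(m, d) predict(m, as.matrix(d)))
pb <- predict_parts(expl, new_observation = x[1, , drop = FALSE])    # per-feature contributions for one row
pb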

Laplace smoothing for Bayesian Networks in bnlearn

I'm trying to work with Bayesian networks in R, and currently I am using the bnlearn framework.
I'm trying to use score-based structure learning from data, trying out different algorithms and approaches.
I would like to know whether Laplace smoothing is implemented in bnlearn. I could not find any information about it in the documentation. Am I missing something? Does anyone know?
No, it is not. However, this should be no problem, as different priors are available in bnlearn; unless you have some very specific reason to use Laplace smoothing, which is just one particular prior, these should do.
Once you have a structure, you learn parameters with the bn.fit() function. Setting method = "bayes" uses Bayesian estimation and the optional argument iss determines the prior. The definition of iss: "the imaginary sample size used by the bayes method to estimate the conditional probability tables (CPTs) associated with discrete nodes".
As an example, consider a binary root node X in some network. bn.fit() returns (Nx + iss / cptsize) / (N + iss) as the probability of X = x, where N is your number of samples, Nx the number of samples with X = x, and cptsize the cardinality of X; in this case cptsize = 2 because X is binary. Laplace correction would require that iss / cptsize always equal 1. Yet bnlearn uses the same iss value for all CPTs, and iss / cptsize will only be 1 if all variables have the same cardinality. Thus, for binary variables, you could indeed get Laplace correction by setting iss = 2. In the general case, however, it is not possible.
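As a concrete sketch (the hc() call and the learning.test data shipped with bnlearn are illustrative choices; the variables there have 2-3 levels, so iss = 2 matches Laplace smoothing only for the binary node):

library(bnlearn)
data(learning.test)                                               # small all-discrete example data
dag <- hc(learning.test)                                          # score-based structure learning (hill-climbing)
fitted <- bn.fit(dag, learning.test, method = "bayes", iss = 2)   # Bayesian CPT estimation with imaginary sample size 2
fitted$F                                                          # inspect the CPT of the binary node F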
See bnlearn::bn.fit difference and calculation of methods "mle" and "bayes" for some additional details.

R: robust package -- lmRob: how to find the psi function used in the calculations

I am using lmRob.
require(robust)
stack.rob.int <- lmRob(Loss ~.*., data = stack.dat)
This works fine, but I was wondering how I could obtain the psi function that lmRob uses in the actual fitting. Thanks in advance for any help!
If I were to use the lmrob function in robustbase, would it be possible to change the psi function by subtracting a constant from it? I am trying to implement the bootstrap as per Lahiri (Annals of Statistics, 1992), where the way to keep the bootstrap valid is to replace psi() with the original psi() minus the mean of the residuals while fitting the bootstrap for the robust linear model.
So, there is no way to access the psi function directly for robust::lmRob().
Simply put, lmRob() calls lmRob.fit() (or lmRob.wfit() if you supply weights), which subsequently calls lmRob.fit.compute(), which then sets initial values for a Fortran routine depending on whether lmRob.control() is set to "bisquare" or "optimal".
As a result, if you need access to the psi functions, you may wish to use robustbase instead, as it gives easy access to many psi functions (cf. the biweights).
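For instance, a minimal robustbase sketch (using the built-in stackloss data in place of stack.dat; Mpsi() and the lmrob.control() fields are the actual robustbase interfaces, while the scaling of the residuals shown is one conventional choice):

library(robustbase)
ctrl <- lmrob.control(psi = "bisquare")                        # pick the psi family explicitly
fit <- lmrob(stack.loss ~ ., data = stackloss, control = ctrl)
r <- residuals(fit) / fit$scale                                # residuals on the scale psi expects
psi.values <- Mpsi(r, cc = ctrl$tuning.psi, psi = ctrl$psi)    # psi evaluated at the scaled residuals
head(psi.values)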
Edit 1
Regarding:
psi function evaluated at the residuals in lmRob
No. The details of what is available after running lmRob are given in the lmRob object; the documentation is accessible via ?lmRob.object. Regarding residuals, the following are available in the lmRob object:
residuals: the residual vector corresponding to the estimates returned in coefficients.
T.residuals: the residual vector corresponding to the estimates returned in T.coefficients.
M.weights: the robust estimate weights corresponding to the final MM-estimates in coefficients, if applicable.
T.M.weights: the robust estimate weights corresponding to the initial S-estimates in T.coefficients, if applicable.
Regarding
what does "optimal" do in lmRob?
Optimal refers to the following psi function:
psi(x) = sign(x) * ( -(phi'(|x|) + c) / phi(|x|) )
For other traditional psi functions, you may wish to look at robustbase's vignettes or a robust statistics textbook.

R randomForest to PMML class index is wrong

I'm exporting an R randomForest model to PMML. The resulting PMML always has the class as the first element of the DataDictionary element, which is not always correct for my data.
Is there some way to fix this, or at least to augment the PMML with custom Extension elements? That way I could put the class index there.
I've looked in the pmml package documentation, as well as in the pmmlTransformations package, but couldn't find anything there that could help me solve this issue.
By PMML class I assume you mean the model type (classification vs regression) in the PMML model attributes?
If so, it is not true that the model type is determined from the data type of the first element of the DataDictionary; these are completely independent. The model type is determined from the type R thinks the model is: the R random forest object records the type it thinks it is (model$type), and that is the model type exported by the pmml function. If you want your model to be a certain type, just make sure you let R know that. For example, with the iris data set, if your predicted variable is Sepal.Length, R will correctly assume it is a regression model; if you insist on treating it as a classification model, try using as.factor(Sepal.Length) instead.
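A minimal sketch of that difference (saveXML() comes from the XML package that pmml builds on; the file name is arbitrary):

library(randomForest)
library(pmml)
library(XML)
rf_reg <- randomForest(Sepal.Length ~ ., data = iris)    # numeric target -> regression
rf_reg$type                                              # "regression"
iris2 <- transform(iris, Sepal.Length = as.factor(Sepal.Length))
rf_cls <- randomForest(Sepal.Length ~ ., data = iris2)   # factor target -> classification
rf_cls$type                                              # "classification"
saveXML(pmml(rf_reg), "rf_regression.pmml")              # pmml() reads model$type to choose the PMML model kind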
