GRG Non-Linear Least Squares (Optimization) - r

I am trying to convert an Excel spreadsheet that uses the Solver function with GRG Non-Linear to optimize 2 variables so that they return the lowest sum of squared errors. I have 4 known times (B) at 4 known distances (A). I need to create an optimization function to find the combination of values for Vmax and Tau that produces the lowest sum of squared errors. I have looked at the nls function and the nloptr package but can't quite seem to piece them together. The current values for Vmax and Tau below are what the Excel Solver determined; I just need to replicate that in R. Any and all help would be greatly appreciated. Thank you.
A <- c(0,10, 20, 40)
B <- c(0,1.51, 2.51, 4.32)
Measured <- as.data.frame(cbind(A, B))
Corrected <- Measured
Corrected$B <- Corrected$B + .2
colnames(Corrected) <- c("Distance (yds)", "Time (s)")
Corrected$`X (m)` <- Corrected$`Distance (yds)`*.9144
Vmax = 10.460615006988
Tau = 1.03682513806393
Predicted_X <- c(Vmax * (Corrected$`Time (s)`[1] - Tau + Tau*exp(-Corrected$`Time (s)`[1]/Tau)),
                 Vmax * (Corrected$`Time (s)`[2] - Tau + Tau*exp(-Corrected$`Time (s)`[2]/Tau)),
                 Vmax * (Corrected$`Time (s)`[3] - Tau + Tau*exp(-Corrected$`Time (s)`[3]/Tau)),
                 Vmax * (Corrected$`Time (s)`[4] - Tau + Tau*exp(-Corrected$`Time (s)`[4]/Tau)))
Corrected$`Predicted X (m)` <- Predicted_X
Corrected$`Squared Error` <- (Corrected$`X (m)`-Corrected$`Predicted X (m)`)^2
#Sum_Squared_Error <- sum(Corrected$`Squared Error`)

Is your issue still unsolved?
I'm working on a similar problem and I think I could help.
First you have to define a function that returns the sum of squared errors, with Vmax and Tau as its variables.
Then you can call an optimisation algorithm that will change these variables and look for a minimum of your function. optim() might be sufficient for your application, but here is the documentation for nloptr:
https://www.rdocumentation.org/packages/nloptr/versions/1.0.4/topics/nloptr
and here is a list of optimisation packages in R:
https://cran.r-project.org/web/views/Optimization.html
Edit:
I quickly recoded the way I would do it. I'm a beginner, so it's probably not the best way but it still works.
A <- c(0,10, 20, 40)
B <- c(0,1.51, 2.51, 4.32)
Measured <- as.data.frame(cbind(A, B))
Corrected <- Measured
Corrected$B <- Corrected$B + .2
colnames(Corrected) <- c("Distance (yds)", "Time (s)")
Corrected$`X (m)` <- Corrected$`Distance (yds)`*.9144
#initialize values
Vmax0 = 15
Tau0 = 5
x0 = c(Vmax0,Tau0)
#define function to optimise: optim will minimize the output
f <- function(x) {
  y <- 0
  # these variables will be optimised to find the minimum value of f
  Vmax <- x[1]
  Tau  <- x[2]
  Predicted_X <- Vmax * (Corrected$`Time (s)` - Tau + Tau*exp(-Corrected$`Time (s)`/Tau))
  y <- sum((Predicted_X - Corrected$`X (m)`)^2)
  return(y)
}
#call optim: results will be available in variable Y
Y<-optim(x0,f)
If you type Y into the console, you will find that the solver finds the same values as Excel, and convergence is achieved.
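For reference, the fitted parameters and the quality of the fit can be read off the returned list; these are standard components of optim()'s return value:
Y$par          # fitted c(Vmax, Tau)
Y$value        # the minimised sum of squared errors
Y$convergence  # 0 indicates successful convergence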
In R, there is no need to define data frame columns with backtick-quoted names the way you did; plain vectors are simpler to work with. You should probably follow a tutorial about this first.
Also, it is misleading that you set the initial values to the ones that were already optimal; if you do that, optim() will not optimise any further.
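Since you mentioned nls(): the same least-squares fit can also be obtained by handing nls() the model formula directly. This is only a sketch; the start values are rough guesses, not the Excel solution, and if nls() complains about the starting point, better guesses or algorithm = "port" usually help.
fit <- nls(`X (m)` ~ Vmax * (`Time (s)` - Tau + Tau * exp(-`Time (s)`/Tau)),
           data = Corrected,
           start = list(Vmax = 15, Tau = 5))
coef(fit)          # fitted Vmax and Tau
sum(resid(fit)^2)  # minimised sum of squared errors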
Here is the documentation for optim:
https://stat.ethz.ch/R-manual/R-devel/library/stats/html/optim.html
and a tutorial on how to use functions:
https://www.datacamp.com/community/tutorials/functions-in-r-a-tutorial
Cheers

Related

Compute the Bayes factor of an A/B test dataset in r

I am trying to compute the Bayes factor of an A/B test dataset that can be found here. However, I end up with a NaN because the beta function evaluates to zero. In calculating the likelihoods, I am assuming that they follow the binomial distribution. Hence, I am following this formula:
likelihood = choose(n,k) * Beta(k+1,n-k+1)
The code can be found below
data <- read.csv(file="ab_data.csv", header=TRUE, sep=",")
control <- data[which(data$group == "control"),]
treatment <- data[which(data$group == "treatment"),]
#compute bayes factor
n1 = nrow(control)
r1 = sum(control$converted)
n2 = nrow(treatment)
r2 = sum(treatment$converted)
likelihood_control <- choose(n1,r1) * beta(r1+1, n1-r1+1)
likelihood_treatment <- choose(n2,r2) * beta(r2+1, n2-r2+1)
bayes_factor <- likelihood_control/ likelihood_treatment
beta(r1+1, n1-r1+1)
beta(r2+1, n2-r2+1)
bayes_factor
As you observed, the problem is that the beta function is returning 0, but this is not because the likelihood is actually 0, it's just that the likelihood is so small the computer is storing it as 0. The second issue is that choose is returning Inf. Again, this is not because the value is actually infinite, it's just that R can't internally store values that large. The solution is to use logarithms, which grow much more slowly, and then exponentiate at the end. Below should work (I tested the logchoose function, and it seems to work)
logchoose <- function(n, k){
  num   <- sum(log(seq(n - k + 1, n)))
  denom <- sum(log(1:k))
  return(num - denom)
}
likelihood_control <- logchoose(n1,r1) + lbeta(r1+1, n1-r1+1)
likelihood_treatment <- logchoose(n2,r2) + lbeta(r2+1, n2-r2+1)
bayes_factor <- exp(likelihood_control - likelihood_treatment)
bayes_factor
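For what it's worth, base R also has lchoose(), which computes log(choose(n, k)) directly and could replace the hand-rolled logchoose() above:
likelihood_control   <- lchoose(n1, r1) + lbeta(r1+1, n1-r1+1)
likelihood_treatment <- lchoose(n2, r2) + lbeta(r2+1, n2-r2+1)
bayes_factor <- exp(likelihood_control - likelihood_treatment)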

Fast nonnegative quantile and Huber regression in R

I am looking for a fast way to do nonnegative quantile and Huber regression in R (i.e. with the constraint that all coefficients are >0). I tried using the CVXR package for quantile & Huber regression and the quantreg package for quantile regression, but CVXR is very slow and quantreg seems buggy when I use nonnegativity constraints. Does anybody know of a good and fast solution in R, e.g. using the Rcplex package or R gurobi API, thereby using the faster CPLEX or gurobi optimizers?
Note that I need to run a problem of the size below 80 000 times, where I only need to update the y vector in each iteration but keep using the same predictor matrix X. In that sense, it feels inefficient that in CVXR I currently have to rebuild obj <- sum(quant_loss(y - X %*% beta, tau=0.01)); prob <- Problem(Minimize(obj), constraints = list(beta >= 0)) within each iteration, when the problem is in fact staying the same and all I want to update is y. Any thoughts on how to do all this better/faster?
Minimal example:
## Generate problem data
n <- 7 # n predictor vars
m <- 518 # n cases
set.seed(1289)
beta_true <- 5 * matrix(stats::rnorm(n), nrow = n)+20
X <- matrix(stats::rnorm(m * n), nrow = m, ncol = n)
y_true <- X %*% beta_true
eps <- matrix(stats::rnorm(m), nrow = m)
y <- y_true + eps
Nonnegative quantile regression using CVXR:
## Solve nonnegative quantile regression problem using CVX
require(CVXR)
beta <- Variable(n)
quant_loss <- function(u, tau) { 0.5*abs(u) + (tau - 0.5)*u }
obj <- sum(quant_loss(y - X %*% beta, tau=0.01))
prob <- Problem(Minimize(obj), constraints = list(beta >= 0))
system.time(beta_cvx <- pmax(solve(prob, solver="SCS")$getValue(beta), 0)) # estimated coefficients; note that they occasionally can go slightly below 0, so I had to clip them at 0
# 0.47s
cor(beta_true,beta_cvx) # correlation=0.99985, OK but very slow
Syntax for nonnegative Huber regression is the same but would use
M <- 1 ## Huber threshold
obj <- sum(CVXR::huber(y - X %*% beta, M))
Nonnegative quantile regression using the quantreg package:
### Solve nonnegative quantile regression problem using quantreg package with method="fnc"
require(quantreg)
R <- rbind(diag(n),-diag(n))
r <- c(rep(0,n),-rep(1E10,n)) # specify bounds of coefficients, I want them to be nonnegative, and 1E10 should ideally be Inf
system.time(beta_rq <- coef(rq(y~0+X, R=R, r=r, tau=0.5, method="fnc"))) # estimated coefficients
# 0.12s
cor(beta_true,beta_rq) # correlation=-0.477, no good, and even worse with tau=0.01...
To speed up CVXR, you can get the problem data once in the beginning, then modify it within a loop and pass it directly to the solver's R interface. The code for this is
prob_data <- get_problem_data(prob, solver = "SCS")
Then, parse out the arguments and pass them to scs from the scs library. (See Solver.solve in solver.R). You'll have to dig into the details of the canonicalization, but I expect if you're just changing y at each iteration, it should be a straightforward modification.
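An alternative that avoids touching the canonicalized data at all (assuming a reasonably recent CVXR version) is to declare y as a Parameter, build the Problem once, and only update the parameter's value inside the loop. A rough, untested sketch, where y_list stands in for your 80 000 response vectors:
library(CVXR)
beta  <- Variable(n)
y_par <- Parameter(m)                  # y enters as a parameter, not a constant
quant_loss <- function(u, tau) { 0.5*abs(u) + (tau - 0.5)*u }
obj  <- sum(quant_loss(y_par - X %*% beta, tau = 0.01))
prob <- Problem(Minimize(obj), constraints = list(beta >= 0))
for (i in seq_along(y_list)) {
  value(y_par) <- y_list[[i]]          # only the data changes, not the problem
  res <- solve(prob, solver = "SCS")
  beta_hat <- pmax(res$getValue(beta), 0)
}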

Error in Gradient Descent Calculation

I tried to write a function to calculate gradient descent for a linear regression model. However, the answers I was getting do not match the answers I get using the normal equation method.
My sample data is:
df <- data.frame(c(1,5,6),c(3,5,6),c(4,6,8))
with c(4,6,8) being the y values.
lm_gradient_descent <- function(df, learning_rate, y_col=length(df), scale=TRUE){
  n_features <- length(df) # n_features is the number of features in the data set
  # using mean normalization to scale features
  if(scale==TRUE){
    for (i in 1:(n_features)){
      df[,i] <- (df[,i]-mean(df[,i]))/sd(df[,i])
    }
  }
  y_data <- df[,y_col]
  df[,y_col] <- NULL
  par <- rep(1,n_features)
  df <- merge(1,df)
  data_mat <- data.matrix(df)
  # we need a temp_arr to store each iteration of parameter values so that we can do a
  # simultaneous update
  temp_arr <- rep(0,n_features)
  diff <- 1
  while(diff>0.0000001){
    for (i in 1:(n_features)){
      temp_arr[i] <- par[i]-learning_rate*sum((data_mat%*%par-y_data)*df[,i])/length(y_data)
    }
    diff <- par[1]-temp_arr[1]
    print(diff)
    par <- temp_arr
  }
  return(par)
}
Running this function,
lm_gradient_descent(df,0.0001,,0)
the results I got were
c(0.9165891,0.6115482,0.5652970)
whereas when I use the normal equation method, I get
c(2,1,0).
Hope someone can shed some light on where I went wrong in this function.
You used the stopping criterion
old parameters - new parameters <= 0.0000001
First of all I think there's an abs() missing if you want to use this criterion (though my ignorance of R may be at fault).
But even if you use
abs(old parameters - new parameters) <= 0.0000001
this is not a good stopping criterion: it only tells you that progress has slowed down, not that it's already sufficiently accurate. Try instead simply running a fixed number of iterations. Unfortunately, it's not that easy to give a good, generally applicable stopping criterion for gradient descent here.
It seems that you have not implemented a bias term. In a linear model like this, you always want to have an additional additive constant, i.e., your model should be like
w_0 + w_1*x_1 + ... + w_n*x_n.
Without the w_0 term, you usually won't get a good fit.
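In code that just means prepending a constant column of ones to the data matrix so that the first weight plays the role of w_0, for example:
X <- cbind(1, X)  # the weight on this column of ones is the intercept w_0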
I know this is a couple of weeks old at this point, but I'm going to take a stab at it for several reasons, namely
Relatively new to R, so deciphering your code and rewriting it is good practice for me
Working on a different gradient descent problem, so this is all fresh to me
Need the Stack Overflow points, and
As far as I can tell you never got a working answer.
First, regarding your data structures. You start with a data frame, rename a column, strip out a vector, then strip out a matrix. It would be a lot easier to just start with an X matrix (capitalized since its component 'features' are referred to as x_i) and a y solution vector.
X <- cbind(c(1,5,6),c(3,5,6))
y <- c(4,6,8)
We can easily see what the desired solutions are, with and without scaling, by fitting a linear model. (NOTE: we only scale X/features and not y/solutions.)
> lm(y~X)
Call:
lm(formula = y ~ X)
Coefficients:
(Intercept)           X1           X2
         -4           -1            3
> lm(y~scale(X))
Call:
lm(formula = y ~ scale(X))
Coefficients:
(Intercept)    scale(X)1    scale(X)2
      6.000       -2.646        4.583
With regards to your code, one of the beauties of R is that it can perform matrix multiplication which is significantly faster than using loops.
lm_gradient_descent <- function(X, y, learning_rate, scale=TRUE){
  if(scale==TRUE){X <- scale(X)}
  X <- cbind(1, X)
  theta <- rep(0, ncol(X)) # your old temp_arr
  diff <- 1
  old.error <- sum( (X %*% theta - y)^2 ) / (2*length(y))
  while(diff>0.000000001){
    theta <- theta - learning_rate * t(X) %*% (X %*% theta - y) / length(y)
    new.error <- sum( (X %*% theta - y)^2 ) / (2*length(y))
    diff <- abs(old.error - new.error)
    old.error <- new.error
  }
  return(theta)
}
And to show it works...
> lm_gradient_descent(X, y, .01, 0)
[,1]
[1,] -3.9360685
[2,] -0.9851775
[3,] 2.9736566
vs expected of (-4, -1, 3)
For what it's worth, while I agree with @cfh that I would prefer a loop with a defined number of iterations, I'm actually not sure you need the abs function: if diff < 0 then your function is not converging.
Finally, rather than using something like old.error and new.error, I'd suggest using a vector that records all errors. You can then plot that vector to see how quickly your function converges.
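A sketch of that idea, combined with a fixed iteration count as @cfh suggested (the learning rate and number of iterations here are only illustrative):
lm_gd_tracked <- function(X, y, learning_rate, n_iter = 20000, scale = TRUE){
  if(isTRUE(scale)){X <- scale(X)}
  X <- cbind(1, X)
  theta  <- rep(0, ncol(X))
  errors <- numeric(n_iter)                 # cost recorded at every iteration
  for (i in seq_len(n_iter)){
    theta     <- theta - learning_rate * t(X) %*% (X %*% theta - y) / length(y)
    errors[i] <- sum( (X %*% theta - y)^2 ) / (2*length(y))
  }
  plot(errors, type = "l", xlab = "iteration", ylab = "cost")
  theta
}
lm_gd_tracked(X, y, 0.1)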

Solve systems of non-linear equations in R / Black-Scholes-Merton Model

I am writing my masters thesis and I got stuck with this problem in my R code. Mathematically, I want to solve this system of non-linear equations with the R-package “nleqslv”:
fnewton <- function(x){
  y <- numeric(2)
  d1 = (log(x[1]/D1)+(R+x[2]^2/2)*T)/x[2]*sqrt(T)
  d2 = d1-x[2]*sqrt(T)
  y1 <- SO1 - (x[1]*pnorm(d1) - exp(-R*T)*D1*pnorm(d2))
  y2 <- sigmaS*SO1 - pnorm(d1)*x[2]*x[1]
  y
}
xstart <- c(21623379, 0.526177094846878)
nleqslv(xstart, fnewton, control=list(btol=.01), method="Newton")
I have tried several versions of this code and right now I get the error:
Error in pnorm(q, mean, sd, lower.tail, log.p): not numerical.
pnorm is meant to be the cumulative standard normal distribution of d1 and d2, respectively. I really don't know what I am doing wrong, as I based my model on Tetereva's slides (her model code is on slide no. 5), whose presentation is the first result when googling
https://www.google.de/search?q=moodys+KMV+in+R&rlz=1C1SVED_enDE401DE401&aq=f&oq=moodys+KMV+in+R&aqs=chrome.0.57.13309j0&sourceid=chrome&ie=UTF-8#q=distance+to+default+in+R
Like me (though more successfully), she calculates the Distance to Default risk measure via the Black-Scholes-Merton approach. In this model, the value of equity (usually represented by the market capitalization, -> SO1) can be written as a European call option (what I labeled y2 in the above code); however, the equation before it is set to 0!
The other variables are:
x[1] -> the variable I want to derive, value of total assets
x[2] -> the variable I want to derive, volatility of total assets
D1 -> the book value of debt (1998-2009)
R -> a risk-free interest rate
T -> is set to 1 (time)
sigmaS -> estimated (historical) equity volatility
Thanks in advance! I would be glad if anyone could help me.
Caro
I am the author of nleqslv and I'm quite surprised at how you are using it.
As mentioned by others you are not returning anything sensible.
y1 should be y[1] and y2 should be y[2]. If you want us to say sensible things, you will have to provide numerical values for D1, R, T, sigmaS and SO1. I have tried this:
T <- 1; D1 <- 1000; R <- 0.01; sigmaS <- .1; SO1 <- 1000
These have been entered before the function definition. See this
library(nleqslv)
T <- 1
D1 <- 1000
R <- 0.01
sigmaS <- .1
SO1 <- 1000
fnewton <- function(x){
  y <- numeric(2)
  d1 <- (log(x[1]/D1)+(R+x[2]^2/2)*T)/x[2]*sqrt(T)
  d2 <- d1-x[2]*sqrt(T)
  y[1] <- SO1 - (x[1]*pnorm(d1) - exp(-R*T)*D1*pnorm(d2))
  y[2] <- sigmaS*SO1 - pnorm(d1)*x[2]*x[1]
  y
}
xstart <- c(21623379, 0.526177094846878)
nleqslv has no problem finding a solution in this case. The solution found is c(1990.04983, 0.05025). There appears to be no need to set the btol parameter, and you can use method Broyden.
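For reference, the relevant pieces of the returned object are the solution vector and the termination code (standard components of nleqslv()'s return value):
sol <- nleqslv(xstart, fnewton, method="Broyden")
sol$x       # approximately c(1990.04983, 0.05025)
sol$fvec    # function values at the solution, should be close to zero
sol$termcd  # 1 means convergence of the function values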

Easy way of counting precision, recall and F1-score in R

I am using an rpart classifier in R. The question is: I want to test the trained classifier on test data. This is fine - I can use the predict.rpart function.
But I also want to calculate precision, recall and F1 score.
My question is - do I have to write functions for those myself, or is there any function in R or any of CRAN libraries for that?
Using the caret package:
library(caret)
y <- ... # factor of positive / negative cases
predictions <- ... # factor of predictions
precision <- posPredValue(predictions, y, positive="1")
recall <- sensitivity(predictions, y, positive="1")
F1 <- (2 * precision * recall) / (precision + recall)
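For a concrete, made-up binary example (the factors and level coding here are purely illustrative):
y           <- factor(c(1, 1, 0, 1, 0, 0, 1), levels = c("0", "1"))
predictions <- factor(c(1, 0, 0, 1, 0, 1, 1), levels = c("0", "1"))
precision <- posPredValue(predictions, y, positive="1")
recall    <- sensitivity(predictions, y, positive="1")
F1 <- (2 * precision * recall) / (precision + recall)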
A generic function that works for binary and multi-class classification without using any package is:
f1_score <- function(predicted, expected, positive.class="1") {
  predicted <- factor(as.character(predicted), levels=unique(as.character(expected)))
  expected  <- as.factor(expected)
  cm <- as.matrix(table(expected, predicted))
  precision <- diag(cm) / colSums(cm)
  recall <- diag(cm) / rowSums(cm)
  f1 <- ifelse(precision + recall == 0, 0, 2 * precision * recall / (precision + recall))
  # Assuming that F1 is zero when it's not possible to compute it
  f1[is.na(f1)] <- 0
  # Binary F1 or multi-class macro-averaged F1
  ifelse(nlevels(expected) == 2, f1[positive.class], mean(f1))
}
Some comments about the function:
An F1 of NA is assumed to be zero
positive.class is used only for binary F1
for multi-class problems, the macro-averaged F1 is computed
If predicted and expected have different levels, predicted will receive the expected levels
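A quick, made-up multi-class example of how it is called (three classes, so the macro-averaged F1 is returned):
expected  <- c("a", "a", "b", "b", "c", "c")
predicted <- c("a", "b", "b", "b", "c", "a")
f1_score(predicted, expected)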
The ROCR library calculates all these and more (see also http://rocr.bioinf.mpi-sb.mpg.de):
library (ROCR);
...
y <- ... # logical array of positive / negative cases
predictions <- ... # array of predictions
pred <- prediction(predictions, y);
# Recall-Precision curve
RP.perf <- performance(pred, "prec", "rec");
plot (RP.perf);
# ROC curve
ROC.perf <- performance(pred, "tpr", "fpr");
plot (ROC.perf);
# ROC area under the curve
auc.tmp <- performance(pred,"auc");
auc <- as.numeric(auc.tmp@y.values)
...
Just to update this as I came across this thread now: the confusionMatrix function in caret computes all of these things for you automatically.
cm <- confusionMatrix(prediction, reference = test_set$label)
# extract F1 score for all classes
cm[["byClass"]][ , "F1"] #for multiclass classification problems
You can substitute any of the following for "F1" to extract the relevant values as well:
"Sensitivity", "Specificity", "Pos Pred Value", "Neg Pred Value", "Precision", "Recall", "F1", "Prevalence", "Detection", "Rate", "Detection Prevalence", "Balanced Accuracy"
I think this behaves slightly differently when you're only doing a binary classification problem, but in both cases all of these values are computed for you when you look inside the confusionMatrix object, under $byClass.
confusionMatrix() from the caret package can be used along with the optional argument positive, which specifies which factor level should be taken as the positive class.
confusionMatrix(predicted, Funded, mode = "prec_recall", positive="1")
This code will also give additional values such as F-statistic, Accuracy, etc.
I noticed the comment about F1 score being needed for binary classes. I suspect that it usually is. But a while ago I wrote this in which I was doing classification into several groups denoted by number. This may be of use to you...
calcF1Scores = function(act, prd){
  # treats the vectors like classes
  # act and prd must be whole numbers
  df = data.frame(act=act, prd=prd);
  scores = list();
  for(i in seq(min(act), max(act))){
    tp = nrow(df[df$prd==i & df$act==i,]);
    fp = nrow(df[df$prd==i & df$act!=i,]);
    fn = nrow(df[df$prd!=i & df$act==i,]);
    f1 = (2*tp)/(2*tp+fp+fn)
    scores[[i]] = f1;
  }
  print(scores)
  return(scores);
}
print(mean(unlist(calcF1Scores(c(1,1,3,4,5),c(1,2,3,4,5)))))
print(mean(unlist(calcF1Scores(c(1,2,3,4,5),c(1,2,3,4,5)))))
We can simply get the F1 value from caret's confusionMatrix function:
result <- confusionMatrix(Prediction, Label)
# View confusion matrix overall
result
# F1 value
result$byClass[7]
You can also use the confusionMatrix() provided by the caret package. The output includes, among others, Sensitivity (also known as recall) and Pos Pred Value (also known as precision). Then F1 can be easily computed, as stated above, as:
F1 <- (2 * precision * recall) / (precision + recall)
library(caret)
result <- confusionMatrix(Prediction, label)
#This shows all the measures you need including precision, recall and F1
result$byClass
