Solve Constrained Quadratic Programming with R

I really love R but from time to time it really gives me a headache...
I have the following simple quadratic minimization problem which can be formulated and solved in no time in Excel. (The original post showed screenshots of the Excel setup and its solution here.)
The problem itself is pretty straightforward: I want to minimize (w1^2 + w2^2)/2 by finding the best combination of w1, w2 and b, under the constraint that y*(w1*X1 + w2*X2 + b) >= 1 holds for every data point (X1, X2, y).
I know that there is the quadprog package for solving these kinds of problems but I find it so unintuitive that I am not able to specify the problem correctly :-( I hate to say it but Excel seems to be better suited for specifying optimization problems like these :-(((
My question
How can the above problem be formulated so that it can be solved with R (no matter which package), with the program arriving at the correct values for w1, w2 and b (as in the Excel solution)? Please don't just post links, but give actual code that works. It would be great if you could comment your code so it becomes clear why you do the things you do. Thank you!
The necessary data is here:
data <- matrix(c(2.947814,6.626878, 1,
2.530388,7.785050, 1,
3.566991,5.651046, 1,
3.156983,5.467077, 1,
2.582346,4.457777,-1,
2.155826,6.222343,-1,
3.273418,3.520687,-1),ncol=3,byrow=T)
colnames(data) <- c("X1","X2","y")
Addendum
Some people took offense at my request to provide code (and not simply links). I apologized for that and explained my reason: I had not found any good approaches in the answers on SO so far. The deeper reason for that is that the problem is unusual in the sense that b appears only in the constraints and not in the objective function. So I still think that this question is a good fit for SO.

Actually, the problem is a little tricky because b appears only in the inequality constraints but not in the objective function. Therefore the matrix of the quadratic form is only positive semidefinite, not positive definite, while solve.QP requires a strictly positive definite matrix.
My approach is therefore to set the diagonal entry corresponding to b to a very small value, in my case 1e-9, which makes the matrix numerically positive definite. Someone more familiar with such optimization problems might know how to solve it properly...
Calculate solve.QP input
# Constraint rows: y_i * (X1_i, X2_i, 1), so that Amat %*% c(w1,w2,b) equals y*(w1*X1 + w2*X2 + b)
c1 <- data[,"X1"] * data[,"y"]
c2 <- data[,"X2"] * data[,"y"]
# Objective matrix: identity for w1 and w2; I use 1e-9 for the b entry
Dmat <- diag(c(1, 1, 1e-9))
dvec <- rep(0, 3)                  # no linear term in the objective
Amat <- cbind(c1, c2, data[,"y"])
bvec <- rep(1, nrow(Amat))         # each constraint must be >= 1
Solve with solve.QP
library(quadprog)
sol <- solve.QP(Dmat = Dmat, dvec = dvec, Amat = t(Amat), bvec = bvec)$solution
sol
#[1] 2.903910 1.201258 -14.734964
Same result as in Excel.
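A quick sanity check that the solution satisfies every margin constraint, i.e. that each row of Amat %*% sol is at least 1:
# should be TRUE; the small tolerance absorbs numerical error
all(Amat %*% sol >= bvec - 1e-8)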

Related

How to find several solutions of a nonlinear equation using R, e.g. nleqslv?

As far as I understand, R's nonlinear equation solver nleqslv(x, fn) finds only one solution of a nonlinear system.
However (as Bhas commented), the searchZeros function (from the same package) can find several solutions, depending on the starting points.
Question: is there some function in R which can help choose the set of initial points for searchZeros, so that I can find all the solutions?
I am interested in the case of a function of several variables.
I understand that which solution is found depends heavily on the initial approximation. So the brute-force way is to check some reasonable grid of initial approximations (see the sketch below). However, there might be a more intelligent way to get all the solutions?
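For the brute-force route, here is a minimal sketch of feeding searchZeros a grid of starting points. The system dslnex is just the example from the nleqslv documentation; substitute your own function:
library(nleqslv)
# Example system (from the nleqslv docs): two equations in two unknowns
dslnex <- function(x) {
  y <- numeric(2)
  y[1] <- x[1]^2 + x[2]^2 - 2
  y[2] <- exp(x[1] - 1) + x[2]^3 - 2
  y
}
# Brute force: one starting point per row of the matrix
starts <- as.matrix(expand.grid(x1 = seq(-3, 3, by = 1),
                                x2 = seq(-3, 3, by = 1)))
res <- searchZeros(starts, dslnex)
res$x   # the distinct solutions that were found (one per row)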

In R, how can I make a vector Y whose components are drawn from a normal distribution?

I am a novice in R programming.
I would like to ask the experts here a question concerning R code.
First, let a vector x be c(2,5,3,6,5).
I hope to make another vector y whose i-th component is drawn from N(sum(x[1:i]), 1)
(i.e. the i-th component of y follows a normal distribution with variance 1 and mean equal to the sum of x[1] (=2) through x[i], for i = 1,2,3,4,5).
For example, the third component of y follows a normal distribution with mean x[1]+x[2]+x[3] = 2+5+3 = 10 and variance 1.
I want to know R code that makes the vector y described above without using repetition syntax such as for, while, etc.
Since I am a novice at R programming and have a congenitally poor sense of computational statistics, I don't seem to hit on an ingenious piece of R code at all.
Thank you very much in advance for your answers.
You can do
rnorm(length(x), mean = cumsum(x), sd = 1)
rnorm is part of the family of functions associated with the normal distribution *norm. To see how a function with a known name works, use
help("rnorm") # or ?rnorm
cumsum takes the cumulative sum of a vector.
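For the example vector above, the means line up like this:
x <- c(2, 5, 3, 6, 5)
cumsum(x)   # the five means
#[1]  2  7 10 16 21
y <- rnorm(length(x), mean = cumsum(x), sd = 1)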
Finding functionality
In R, it's generally a safe bet that most functionality you can think of has been implemented by someone already. So, for example, in the OP's case, it is not necessary to roll a custom loop.
The same naming convention as *norm is followed for other distributions, e.g., rbinom. You can follow the link at the bottom of ?rnorm to reach ?Distributions, which lists others in base R.
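For example, the four flavours for the normal distribution:
dnorm(0)       # density at 0
pnorm(1.96)    # cumulative probability P(Z <= 1.96)
qnorm(0.975)   # quantile function (the inverse of pnorm)
rnorm(3)       # three random draws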
If you are starting from scratch and don't know the names of any related functions, consider using the built-in search tools, like:
help.search("normal distribution") # or ??"normal distribution"
If this reveals nothing and yet you still think a function must exist, consider installing and loading the sos package, which allows
findFn("{cumulative mean}") # or ???"{cumulative mean}"
findFn("{the pareto distribution}") # or ???"{the pareto distribution}"
Beyond that, other online resources, like Google, can be good. However, a question about functionality on Stack Overflow is a risky proposition, since it will not be received well (downvoted and closed as a "tool request") if the desired functionality is nonexistent or unknown to folks here. Stack Overflow's new "Documentation" subsite will hopefully prove to be a resource for finding R functions as well.

Calculate D-efficiency of an experimental design in R

I have an experimental design. I want to calculate its D-efficiency.
I thought R package AlgDesign could help. I found function optFederov which generates the design and - if the user wants - returns its D efficiency. However, I don't want to use optFederov to generate the design - I already have my design!
I tried eval.design(~.,mydesign). But the only metrics it gives me are: determinant, A, diagonality, and gmean.variances. Maybe there is a way to get from determinant or A to D-efficiency (I am not a mathematician, so I am not sure). Or maybe some other way to calculate D-efficiency "manually" so to say?
Thanks a lot for any hint!
I was working on a similar project. I found the formula Deff = |X'X|^(1/p) / ND in a linked reference, where X is the model matrix, p is the number of betas in your linear model, and ND is the number of runs in your experiment. The following line of code does the trick:
det(t(X) %*% X)^(1/p) / ND
I tested the results using JMP for my project, so I believe this is the correct formula.
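Wrapped up as a small helper function (a sketch following the formula above; note that D-efficiency is often reported on a 0-100 scale, i.e. multiplied by 100):
# D-efficiency of a design from its model matrix X: |X'X|^(1/p) / ND
d_efficiency <- function(X) {
  p  <- ncol(X)                    # number of model parameters (betas)
  ND <- nrow(X)                    # number of runs
  det(crossprod(X))^(1 / p) / ND   # crossprod(X) is t(X) %*% X
}
# Hypothetical usage: build the model matrix from your design first, e.g.
# X <- model.matrix(~ ., data = as.data.frame(mydesign))
# d_efficiency(X)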
Determinant, the first result given by eval.design, is the D-efficiency.

CVX-esque convex optimization in R?

I need to solve (many times, for lots of data, alongside a bunch of other things) what I think boils down to a second order cone program. It can be succinctly expressed in CVX something like this:
cvx_begin
variable X(2000);
expression MX(2000);
MX = M * X;
minimize( norm(A * X - b) + gamma * norm(MX, 1) )
subject to
X >= 0
MX((1:500) * 4 - 3) == MX((1:500) * 4 - 2)
MX((1:500) * 4 - 1) == MX((1:500) * 4)
cvx_end
The data lengths and equality constraint patterns shown are just arbitrary values from some test data, but the general form will be much the same, with two objective terms -- one minimizing error, the other encouraging sparsity -- and a large number of equality constraints on the elements of a transformed version of the optimization variable (itself constrained to be non-negative).
This seems to work pretty nicely, much better than my previous approach, which fudges the constraints something rotten. The trouble is that everything else around this is happening in R, and it would be quite a nuisance to have to port it over to Matlab. So is doing this in R viable, and if so how?
This really boils down to two separate questions:
1) Are there any good R resources for this? As far as I can tell from the CRAN optimization task view, the SOCP package options are CLSOCP and DWD, the latter of which includes an SOCP solver as an adjunct to its classifier. Both have similar but fairly opaque interfaces and are a bit thin on documentation and examples, which brings us to:
2) What's the best way of representing the above problem in the constraint block format used by these packages? The CVX syntax above hides a lot of tedious mucking about with extra variables and such, and I can just see myself spending weeks trying to get this right, so any tips or pointers to nudge me in the right direction would be very welcome...
You might find the R package CVXfromR useful. This lets you pass an optimization problem to CVX from R and returns the solution to R.
OK, so the short answer to this question is: there's really no very satisfactory way to handle this in R. I have ended up doing the relevant parts in Matlab with some awkward fudging between the two systems, and will probably migrate everything to Matlab eventually. (My current approach predates the answer posted by user2439686. In practice my problem would be equally awkward using CVXfromR, but it does look like a useful package in general, so I'm going to accept that answer.)
R resources for this are pretty thin on the ground, but the blog post by Vincent Zoonekynd that he mentioned in the comments is definitely worth reading.
The SOCP solver contained within the R package DWD is ported from the Matlab solver SDPT3 (minus the SDP parts), so the programmatic interface is basically the same. However, at least in my tests, it runs a lot slower and pretty much falls over on problems with a few thousand vars+constraints, whereas SDPT3 solves them in a few seconds. (I haven't done a completely fair comparison on this, because CVX does some nifty transformations on the problem to make it more efficient, while in R I'm using a pretty naive definition, but still.)
Another possible alternative, especially if you're eligible for an academic license, is to use the commercial Mosek solver, which has an R interface package Rmosek. I have yet to try this, but may give it a go at some point.
(As an aside, the other solver bundled with CVX, SeDuMi, fails completely on the same problem; the CVX authors aren't kidding when they suggest trying multiple solvers. Also, in a significant subset of cases, SDPT3 has to switch from Cholesky to LU decomposition, which makes the processing orders of magnitude slower, with only a very marginal improvement in the objective compared to the pre-LU steps. I've found it worth reducing the requested precision to avoid this, but YMMV.)
There is a new alternative: CVXR, which comes from the same people.
There is a website, a paper and a github project.
Disciplined convex programming seems to be growing in popularity, judging by cvxpy (Python) and Convex.jl (Julia), again backed by the same people.
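With CVXR, the original CVX formulation translates almost verbatim. A minimal sketch, assuming the matrices M and A, the vector b, and the scalar gamma from the problem above are already defined in R:
library(CVXR)
X  <- Variable(2000)
MX <- M %*% X                      # transformed version of the variable
objective <- Minimize(norm2(A %*% X - b) + gamma * p_norm(MX, 1))
constraints <- list(
  X >= 0,                                        # non-negativity
  MX[(1:500) * 4 - 3] == MX[(1:500) * 4 - 2],    # pairwise equalities on MX
  MX[(1:500) * 4 - 1] == MX[(1:500) * 4]
)
result <- solve(Problem(objective, constraints))
result$status                      # check that the solver reported "optimal"
x_opt <- result$getValue(X)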

Minimisation using the nlminb R-function

I would like to find the 4-dimensional vector that minimises some function f which depends on 4 variables. The first three variables take on strictly positive values; the fourth one is unconstrained.
To do this, I would like to use R. I have tried to apply the nlminb function with lower=c(0.001, 0.001, 0.001, -Inf) as one of its optional arguments. The procedure does converge, but it turns out that the proposed solution does not satisfy the constraint!
I have an alternative solution that consists of using an exponential transformation (sketched at the end of this exchange). However, I would appreciate help in figuring out why R returns a solution that does not meet my requirements.
Any comment will be appreciated,
Thanks,
Marco
It would be very difficult for me to provide that function here, because it depends on a number of pre-defined objects.
Anyway, I am not sure I understand why this occurs, but I have realized that my function sometimes returns NaN due to very, very large numbers. Actually, I have some doubts about convergence.
On the other hand, I have made some modifications, and the alternative solution seems to work well.
In conclusion, I think that the problem came from my function, not from nlminb.
Best,
Marco
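For reference, here is a minimal sketch of the exponential-transformation workaround mentioned above. f stands for the original objective, which was not posted; the idea is to optimise over unconstrained parameters and map the first three back through exp(), so positivity holds by construction:
# g takes an unconstrained 4-vector theta; the first three components
# are log-scale versions of the strictly positive variables
g <- function(theta) f(c(exp(theta[1:3]), theta[4]))
res <- nlminb(start = c(0, 0, 0, 0), objective = g)
# map back to the original parameterisation
sol <- c(exp(res$par[1:3]), res$par[4])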
