How to use R package quadprog to solve SVM?

I was wondering what the proper way is to use quadprog to solve a quadratic program. I have the following question (found on the internet) and was also looking at http://cbio.ensmp.fr/~thocking/mines-course/2011-04-01-svm/svm-qp.pdf
What would be the proper way to solve this problem? Would the following tutorial be useful for a question like the one above?
http://www.r-bloggers.com/solving-quadratic-progams-with-rs-quadprog-package/

Here is an implementation for the linear C-SVM, based on the primal optimization problem:
min_{beta_0, beta, zeta} 1/2 beta^T beta + C sum_{i = 1}^N zeta_i
subject to:
y_i (beta^T x_i + beta_0) >= 1 - zeta_i, for all i = 1, 2, ..., N
zeta_i >= 0, for all i = 1, 2, ..., N
where N is the number of data points.
Note that using quadprog to solve this is, to some degree, more of a pedagogical exercise: quadprog implements the general-purpose dual active-set method of Goldfarb and Idnani, while in practice a specialized algorithm would be used, such as Platt's SMO, which takes advantage of particular properties of the SVM optimization problem.
In order to use quadprog, given the equations above, it all boils down to setting up the matrices and vectors that specify the optimization problem.
One issue, however, is that quadprog requires the matrix appearing in the quadratic function to be positive definite (see, for example, http://www.r-bloggers.com/more-on-quadratic-progamming-in-r/), while the formulation used here only yields a positive semi-definite matrix, since the intercept beta_0 and the zeta_i do not appear in the quadratic term. To work around this issue, I set the diagonal elements corresponding to these variables to a very small positive value.
To set up the example code, using the spam dataset, a binary classification problem:
library(kernlab) # for the spam data
# Load the input data to be used
data(spam)
# Use only a subset of the data (20%)
spam <- spam[sample(nrow(spam), round(0.2 * nrow(spam)), replace = FALSE), ]
# Retrieve the features and the labels
X <- spam[, 1:(ncol(spam) - 1)]
Y_f <- spam[, ncol(spam)]
Y <- 2 * (as.numeric(Y_f) - 1.5) # map the factor levels {1, 2} to {-1, +1}
# Sample size
N <- nrow(X)
# Number of dimensions
n_d <- ncol(X)
# Value of the regularization parameter
C <- 1
In order to set up the optimization problem, keep in mind the format employed by package quadprog:
#
# Formulation: min(-d^T * b + 0.5 * b^T * D * b) with the constraints A^T * b >= b_0
#
# solve.QP(Dmat, dvec, Amat, bvec, meq=0, factorized=FALSE)
#
# Arguments
# Dmat: matrix appearing in the quadratic function to be minimized.
# dvec: vector appearing in the quadratic function to be minimized.
# Amat: matrix defining the constraints under which we want to minimize the quadratic function.
# bvec: vector holding the values of b0 (defaults to zero).
# meq: the first meq constraints are treated as equality constraints, all further as inequality
# constraints (defaults to 0).
# factorized: logical flag; if TRUE, then we are passing R^{-1} (where D = R^T R) instead of the
# matrix D in the argument Dmat.
#
Then, organizing the parameter vector as:
# b = (beta_0, beta, zeta),
# where: beta_0 in R, beta in R^n_d, zeta in R^N
such that:
d <- c(0, rep(0, n_d), rep(-C, N)) # the objective contains -d^T * b, so this yields +C * sum(zeta)
# Need a work-around for the matrix D, which must be positive definite (being
# positive semi-definite is not enough...)
# See http://www.r-bloggers.com/more-on-quadratic-progamming-in-r/
eps <- 1e-10 # this will ultimately be the lowest eigenvalue of matrix D (with multiplicity N + 1)
D <- diag(c(eps, rep(1, n_d), rep(eps, N))) # beta^T * beta
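# (Illustrative check, not in the original answer: D is diagonal, so its
# eigenvalues are its diagonal entries, and all are now strictly positive.)
min(diag(D)) > 0 # TRUE; the smallest eigenvalue is eps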
#
# Matrix specifying the constraints
# For zeta_i >= 0:
# beta_0 | beta | zeta
# A_1 = [ 0, 0, 0, ..., 0, 1, 0, 0, ..., 0]
# [ 0, 0, 0, ..., 0, 0, 1, 0, ..., 0]
# [ 0, 0, 0, ..., 0, 0, 0, 1, ..., 0]
# ...
# [ 0, 0, 0, ..., 0, 0, 0, 0, ..., 1]
# where matrix A_1 has N rows, and N + n_d + 1 columns
#
# For beta_0 * y_i + beta^T * x_i * y_i + zeta_i >= 1:
# beta_0 | beta | zeta
# A_2 = [ y_1, y_1 * x_{1, 1}, y_1 * x_{1, 2}, ..., y_1 * x_{1, n_d}, 1, 0, 0, ..., 0]
# [ y_2, y_2 * x_{2, 1}, y_2 * x_{2, 2}, ..., y_2 * x_{2, n_d}, 0, 1, 0, ..., 0]
# ...
# [ y_N, y_N * x_{N, 1}, y_N * x_{N, 2}, ..., y_N * x_{N, n_d}, 0, 0, 0, ..., 1]
#
I_N <- diag(N) # N x N identity matrix
A_1 <- cbind(matrix(0, ncol = n_d + 1, nrow = N), I_N) # zeta_i >= 0, for all i; N rows
A_2 <- as.matrix(cbind(as.matrix(Y), X * as.matrix(Y)[, rep(1, n_d)], I_N)) # zeta_i + beta_0 * y_i + beta^T * x_i * y_i >= 1, for all i; N rows
rownames(A_1) <- NULL; rownames(A_2) <- NULL
colnames(A_1) <- NULL; colnames(A_2) <- NULL
A <- t(rbind(A_1, A_2))
b_0 <- c(rep(0, N), rep(1, N))
Finally, solve the optimization problem and retrieve the parameter values:
library(quadprog)
results <- solve.QP(D, d, A, b_0)
# Retrieve the results
b_optim <- results$solution
beta_0 <- b_optim[1]
beta <- b_optim[1 + (1:n_d)]
zeta <- b_optim[(n_d + 1) + (1:N)]
Afterwards, given a matrix X_test, the model can be used to predict via:
Y_pred <- sign(apply(X_test, 1, function(x) beta_0 + sum(beta * as.vector(x))))
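As a quick sanity check (an illustrative addition, not part of the original answer), the same decision function can be applied to the training data to compute the training accuracy:
# Training accuracy of the fitted linear SVM
Y_pred_train <- sign(apply(X, 1, function(x) beta_0 + sum(beta * as.vector(x))))
mean(Y_pred_train == Y) # fraction of training points classified correctly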

Related

Simple linear transformation of a variable in R: changing the range of a variable. How to do it right?

I am trying to change the value range of a variable (an array of values) while keeping its properties. I don't know the exact mathematical name, but I mean a transformation after which the array has exactly the same properties and the spacing between the values is the same, but the range is different. Maybe the code below will explain what I mean.
I just want to linearly map the values to some other range while the distribution stays the same. In other words, I change the range of the variable using the regression equation y = a * x + b. I assume the transformation is completely linear, the correlation between the variables is exactly 1, and I calculate the new variable (array) from a regression equation, actually a system of two equations into which I substitute the endpoints of both ranges:
min_y = min_x * a + b
max_y = max_x * a + b
from which I can work out the following code to obtain the coefficients a and b:
# this is my input variable
x <- c(-1, -0.5, 0, 0.5, 1)
# this is the range i want to obtain
y.pred <- c(1,2,3,4,5)
max_y = 5
min_y = 1
min_x = min(x)
max_x = max(x)
c1 = max_x-min_x
c2 = max_y-min_y
a.coeff = c2/c1
b.coeff = a.coeff-min_x
y = x * a.coeff + b.coeff
y
# hey, it works! :)
[1] 1 2 3 4 5
the correlation between the variable before and after the transformation is exactly 1, so we have a basis for further action. Let's turn it into a function:
linscale.to.int <- function(max.lengt, vector) {
  max_y = max.lengt
  min_y = 1
  min_x = min(vector)
  max_x = max(vector)
  c1 = max_x - min_x
  c2 = max_y - min_y
  a.coeff = c2 / c1
  b.coeff = a.coeff - min_x
  return(vector * a.coeff + b.coeff)
}
x <- c(-1, -0.5, 0, 0.5, 1)
linscale.to.int(5,x)
[1] 1 2 3 4 5
and it works again. But here's the thing: when I apply this function to a random distribution, like this:
x.rand <- rnorm(50)
y.rand <- linscale.to.int(5,x.rand)
plot(x.rand, y.rand)
or, more visibly, this:
x.rand <- rnorm(500)
y.rand <- linscale.to.int(20,x.rand)
plot(x.rand, y.rand)
the values of the second variable are completely out of range; they should be between 1 and 20, but I get values from about -1 to 15:
And now the question arises - what am I doing wrong here? Where do I go wrong with such a transformation?
The actual bug is the intercept: b.coeff = a.coeff - min_x only happens to give the right value for your example (where a.coeff = 2 and min_x = -1); the correct formula is b.coeff = min_y - a.coeff * min_x, which is why the function breaks on other data. That said, what you are trying to do is very straightforward using rescale from the scales package (which you will already have installed if you have ggplot2 / tidyverse installed). Simply give it the new minimum / maximum values:
x <- c(-1, -0.5, 0, 0.5, 1)
scales::rescale(x, c(1, 5))
#> [1] 1 2 3 4 5
If you want to have your own function written in base R, the following one-liner should do what you want:
linscale_to_int <- function(y, x) (x - min(x)) * (y - 1) / diff(range(x)) + 1
(Note that it is good practice in R to avoid periods in function names, because they can cause confusion with S3 method dispatch.)
Testing, we have:
x <- c(-1, -0.5, 0, 0.5, 1)
linscale_to_int(5, x)
#> [1] 1 2 3 4 5
x.rand <- rnorm(50)
y.rand <- linscale_to_int(5, x.rand)
plot(x.rand, y.rand)
y.rand <- linscale_to_int(20, x.rand)
plot(x.rand, y.rand)
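As a quick check (an illustrative addition), the function maps the sample minimum and maximum exactly onto the requested endpoints:
range(y.rand)
#> [1]  1 20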

Quadratic optimization - portfolio maximization problems

In portfolio analysis, given a target expected return, we aim to find the weights of the assets that minimize the portfolio variance.
Here is the code:
install.packages("quadprog")
library(quadprog)
# Denote the annualized risk as a vector sigma
sigma <- c(0.56, 7.77, 13.48, 16.64)
# Build the correlation matrix given in the question
m <- diag(0.5, nrow = 4, ncol = 4)
m[upper.tri(m)] <- c(-0.07, -0.095, 0.959, -0.095, 0.936, 0.997)
corr <- m + t(m)
sig <- corr * outer(sigma, sigma)
# Define the mean returns and the target expected return
mu = matrix(c(1.73, 6.65, 9.11, 10.30), nrow = 4)
m0 = 8
Amat <- t(matrix(c(1, 1, 1, 1,
                   c(mu),
                   1, 0, 0, 0,
                   0, 1, 0, 0,
                   0, 0, 1, 0,
                   0, 0, 0, 1), 6, 4, byrow = TRUE))
bvec <- c(1, m0, 0, 0, 0, 0)
qp <- solve.QP(sig, rep(0, nrow(sig)), Amat, bvec, meq = 2)
qp
x = matrix(qp$solution)
x
(t(x) %*% sig %*% x)^0.5
I understand the formulation of mu and the covariance matrix, and I know how to use the quadprog package.
However, I don't understand why Amat and bvec are defined this way, i.e., why Amat is built by transposing a 6 x 4 matrix.
m0 is the expected return we aim to have for the portfolio, and it is fixed at 8%.
As you are probably aware, the reason that Amat has four rows is that there are four assets that you are allocating over. It has six columns because there are six constraints in your problem:
The allocations add up to 1 (100%)
Expected return = 8%
'Money market' allocation >= 0
'Capital stable' allocation >= 0
'Balance' allocation >= 0
'Growth' allocation >= 0
Look at the numbers that define each constraint. They are why bvec is [1, 8, 0, 0, 0, 0]. Of these six, the first two are equality constraints, which is why meq is set to 2 (the other four are greater-than-or-equal-to constraints).
Edited to add:
The way the constraints work is this: each column of Amat defines a constraint, which is then multiplied by the asset allocations, with the result equal to (or greater-than-or-equal-to) some target that is set in bvec. For example:
The first column of Amat is [1, 1, 1, 1], and the first entry of bvec is 1. So the first constraint is:
1 * money_market + 1 * capital_stable + 1 * balance + 1 * growth = 1
This is a way of saying that the asset allocations add up to 1.
The second constraint says that the portfolio's expected return equals 8:
1.73 * money_market + 6.65 * capital_stable + 9.11 * balance + 10.30 * growth = 8
Now consider the third constraint, which says that the 'Money market' allocation is greater than or equal to zero. That's because the 3rd column of Amat is [1, 0, 0, 0] and the third entry of bvec is 0. So this constraint looks like:
1 * money_market + 0 * capital_stable + 0 * balance + 0 * growth >= 0
Simplifying, that's the same as:
money_market >= 0
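As an illustrative check (not part of the original answer), you can multiply the constraints out at the solution and compare against bvec; the first two entries must equal 1 and 8 exactly, and the remaining four must be non-negative:
drop(t(Amat) %*% qp$solution) # compare with bvec = c(1, 8, 0, 0, 0, 0)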

R: how to compute the mean and covariance of a truncated normal distribution

I'm interested in finding the mean and covariance of a truncated normal random vector. Suppose Y is a vector containing [Y1 Y2 Y3]. Y follows a multivariate normal distribution with the following mean and covariance:
mu <- c(0.5, 0.5, 0.5)
sigma <- matrix(c(  1, 0.6, 0.3,
                  0.6,   1, 0.2,
                  0.3, 0.2,   2), 3, 3)
The truncation region is the set of Ys such that AY >= 0. For instance,
A <- matrix(c(1, -2, -0.5, 1.5, -2, 0, 3, -1, -1, 4, 0, -2), byrow = TRUE, nrow = 4)
> A
[,1] [,2] [,3]
[1,] 1.0 -2 -0.5
[2,] 1.5 -2 0.0
[3,] 3.0 -1 -1.0
[4,] 4.0 0 -2.0
The following draw of Y, for instance, does not satisfy AY >= 0:
set.seed(3)
library(mvtnorm) # for rmvnorm
Y <- rmvnorm(n = 1, mean = mu, sigma = sigma)
> all(A %*% as.matrix(t(Y)) >= 0)
[1] FALSE
But for other draws of Y, they will satisfy AY >= 0, and I want to find the mean and covariance of those Ys that satisfy AY >= 0.
There are existing packages in R that compute the mean and covariance of a truncated normal distribution. For example, mtmvnorm from the tmvtnorm package:
library(tmvtnorm)
mtmvnorm(mu, sigma, lower = ???, upper = ???)
However, the truncation set that I have, i.e., the set of Ys that satisfy AY >= 0, cannot be described by just lower and upper bounds. Is there another way in R to compute the mean and covariance of such a truncated normal?
You are correct (or maybe noticed) that this is NOT a truncated multivariate normal distribution in the usual sense: you have AY >= 0 as a linear constraint over Y, rather than simple element-wise lower/upper bounds.
If you are not after explicit (closed-form) expressions for the mean and covariance, a straightforward and efficient approach is Monte Carlo simulation.
More specifically, you can choose a sufficiently large N, generate that many samples Y, and then keep the samples that satisfy the constraint AY >= 0. You can then compute the mean and covariance over the selected samples. An attempt is given below:
library(mvtnorm) # for rmvnorm
N <- 1e7
Y <- rmvnorm(n = N, mean = mu, sigma = sigma)
Y_h <- subset(Y, colSums(tcrossprod(A, Y) >= 0) == nrow(A))
mu_h <- colMeans(Y_h)
sigma_h <- cov(Y_h)
and you will see
> mu_h
[1] 0.8614791 -0.1365222 -0.3456582
> sigma_h
[,1] [,2] [,3]
[1,] 0.5669915 0.29392671 0.37487421
[2,] 0.2939267 0.36318397 0.07193513
[3,] 0.3748742 0.07193513 1.37194669
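As a side note (an illustrative addition), the fraction of retained samples estimates the probability of the constraint region under the untruncated distribution:
nrow(Y_h) / N # estimated Pr(AY >= 0)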
Another way follows the same idea, but fixes the number of selected samples in advance, i.e., we keep drawing until we have N samples Y that all satisfy AY >= 0. A while loop does this:
N <- 1e6
Y_h <- list()
nl <- 0
while (nl < N) {
  Y <- rmvnorm(n = N, mean = mu, sigma = sigma)
  v <- subset(Y, colSums(tcrossprod(A, Y) >= 0) == nrow(A))
  nl <- nl + nrow(v)
  Y_h[[length(Y_h) + 1]] <- v
}
Y_h <- head(do.call(rbind, Y_h), N)
mu_h <- colMeans(Y_h)
sigma_h <- cov(Y_h)
and you will see
> mu_h
[1] 0.8604944 -0.1364895 -0.3463887
> sigma_h
[,1] [,2] [,3]
[1,] 0.5683498 0.29492573 0.37524248
[2,] 0.2949257 0.36352022 0.07252898
[3,] 0.3752425 0.07252898 1.37427521
Note: the advantage of the second option is that it guarantees exactly as many selected samples Y_h as you want.

Is it mathematically possible to solve this problem?

x <- abs(rnorm(8))
C <- (x[1]*x[2]*x[3])^(1/3)
y <- log(x/C)
Is it mathematically possible to determine x[1:3] given you only have y? Here, x and y are always vectors of length 8. I should note that x is known for some of my dataset, which could be useful to find a solution for the other portion of the data where x is unknown. All of my code is implemented in R, so R code would be appreciated if this is solvable!
Defining f as
f <- function(x) {
  C <- (x[1] * x[2] * x[3])^(1/3)
  log(x / C)
}
we first note that if k is any positive scalar constant then f(x) and f(k * x) give the same result, so given y = f(x) we can't tell whether y came from x or from k * x. That is, y could have come from any positive scalar multiple of x; therefore, we cannot recover x from y.
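A quick numeric check of this scale invariance (an illustrative addition; the factor 2 is arbitrary):
set.seed(1)
x <- abs(rnorm(8))
all.equal(f(x), f(2 * x))
#> [1] TRUE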
Linear formulation
Although we cannot recover x we can determine x up to a scalar multiple. Define the matrix A:
ones <- rep(1, 8)
a <- c(1, 1, 1, 0, 0, 0, 0, 0)
A <- diag(8) - outer(ones, a) / 3
in which case f(x) equals:
A %*% log(x)
Inverting formula
From this formula, given y and solving for x, the value of x would equal
exp(solve(A) %*% y) ## would equal x if A were invertible
if A were invertible, but unfortunately it is not. For example, rowSums(A) equals zero, which shows that the columns of A are linearly dependent, which implies non-invertibility.
all.equal(rowSums(A), rep(0, 8))
## [1] TRUE
Rank and nullspace
Note that A is a projection matrix. This follows from the fact that it is idempotent, i.e. A %*% A equals A.
all.equal(A %*% A, A)
## [1] TRUE
It also follows from the fact that its eigenvalues are all 0 or 1:
zapsmall(eigen(A)$values)
## [1] 1 1 1 1 1 1 1 0
From the eigenvalues we see that A has rank 7 (the number of nonzero eigenvalues) and the dimension of the nullspace is 1 (the number of zero eigenvalues).
Another way to see this is that knowing that A is a projection matrix its rank equals its trace, which is 7, so its nullspace must have dimension 8-7=1.
sum(diag(A)) # rank of A
## [1] 7
Multiplying x by a scalar k adds log(k) times the vector of ones to log(x), and the ones vector spans a one-dimensional space; since the nullspace of A has dimension 1, the scalar multiples of x must be the entirety of the values that map into the same y.
Key formula
Now, replacing solve in the formula above with the generalized inverse, ginv, we have this key formula for our approximation to x, given that y = f(x) for some x:
library(MASS)
exp(ginv(A) %*% y) # approximation to x accurate up to scalar multiple
or equivalently if y = f(x)
exp(y - mean(y))
While these do not give x, they do determine x up to a scalar multiple. That is, if x' is the value produced by the above expressions, then x equals k * x' for some scalar constant k.
For example, using x and y from the question:
exp(ginv(A) %*% y)
## [,1]
## [1,] 1.2321318
## [2,] 0.5060149
## [3,] 3.4266146
## [4,] 0.1550034
## [5,] 0.2842220
## [6,] 3.7703442
## [7,] 1.0132635
## [8,] 2.7810703
exp(y - mean(y)) # same
## [1] 1.2321318 0.5060149 3.4266146 0.1550034 0.2842220 3.7703442 1.0132635
## [8] 2.7810703
exp(y - mean(y))/x
## [1] 2.198368 2.198368 2.198368 2.198368 2.198368 2.198368 2.198368 2.198368
Note
Note that y - mean(y) can be written as
B <- diag(8) - outer(ones, ones) / 8
B %*% y
and if y = f(x) then y must be in the range of A so we can verify that:
all.equal(ginv(A) %*% A, B %*% A)
## [1] TRUE
It is not true that the matrix ginv(A) equals B. It is only true that they act the same on the range of A which is all that we need.
No, it's not possible. You have three unknowns. That means you need three independent pieces of information (equations) to solve for all three. y gives you only one piece of information. Knowing that the x's are positive imposes a constraint, but doesn't necessarily allow you to solve. For example:
x1 + x2 + x3 = 6
Doesn't allow you to solve. x1 = 1, x2 = 2, x3 = 3 is one solution, but so is x1 = 1, x2 = 1, x3 = 4. There are many other solutions. [Imposing your "all positive" constraint would rule out solutions such as x1 = 100, x2 = 200, x3 = -294, but in general would leave more than one remaining solution.]
x1 + x2 + x3 = 6,
x1 + x2 - x3 = 0
Constrains x3 to be 3, but allows arbitrary solutions for x1 and x2, subject to x1 + x2 = 3.
x1 + x2 + x3 = 6,
x1 + x2 - x3 = 0,
x1 - x2 + x3 = 2
Gives the unique solution x1 = 1, x2 = 2, x3 = 3.
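For completeness (an illustrative addition), base R confirms the unique solution of the last system:
M <- matrix(c(1,  1,  1,
              1,  1, -1,
              1, -1,  1), nrow = 3, byrow = TRUE)
solve(M, c(6, 0, 2))
#> [1] 1 2 3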

punif and qunif Functions in R

I have a question:
Suppose X is a random variable whose density is uniform between -2 and 3.
I want to answer these two questions:
What is the upper quartile of X?
What is the 44% quantile of X?
Now the things I have tried are below:
z <- 1 - punif(0.75, min = -2, max = 3, lower.tail = TRUE)
answer: 0.45
y <- qunif(0.44, min = -2, max = 3, lower.tail = TRUE)
answer: 0.2
First, is this even the right way to go about it?
Second, I understand that punif finds the cumulative probability of X. What does qunif find, and what does the result tell me about X and the distribution?
If you have a random variable X with a uniform distribution from a to b,
X ~ U(a, b)
then punif(x, a, b) is the probability that X <= x,
and qunif(p, a, b) finds the value y such that Pr(X <= y) = p.
You can visualize these plots with
curve(punif(x, -2, 3), from=-2, to=3, main="punif")
curve(qunif(x, -2, 3), from=0, to=1, main="qunif")
Note how punif expects a value anywhere between a and b but qunif expects a probability so it must be between 0 and 1.
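Applied to the original question, both quantities come straight from qunif (an illustrative addition; for U(-2, 3) the p-quantile is -2 + 5 * p):
qunif(0.75, min = -2, max = 3) # upper quartile: 1.75
qunif(0.44, min = -2, max = 3) # 44% quantile: 0.2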
