Weights from linear SVM model (in R)?

Using kernlab I've trained a model with code like the following:
my.model <- ksvm(result ~ f1+f2+f3, data=gold, kernel="vanilladot")
Since it's a linear model, I prefer at run-time to compute the scores as a simple weighted sum of the feature values rather than using the full SVM machinery. How can I convert the model to something like this (some made-up weights here):
> c(.bias=-2.7, f1=0.35, f2=-0.24, f3=2.31)
.bias    f1    f2    f3
-2.70  0.35 -0.24  2.31
where .bias is the bias term and the rest are feature weights?
EDIT:
Here's some example data.
gold <- structure(list(result = c(-1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1), f1 = c(0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0,
1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1), f2 = c(13.4138113499447,
13.2216999857095, 12.964145772169, 13.1975227965938, 13.1031520152764,
13.59351759447, 13.1031520152764, 13.2700658838026, 12.964145772169,
13.1975227965938, 12.964145772169, 13.59351759447, 13.59351759447,
13.0897162110721, 13.364151238365, 12.9483051847806, 12.964145772169,
12.964145772169, 12.964145772169, 12.9483051847806, 13.0937231331592,
13.5362700880482, 13.3654209223623, 13.4356400945176, 13.59351759447,
13.2659406408724, 13.4228886221088, 13.5103065354936, 13.5642812689161,
13.3224757352068, 13.1779418771704, 13.5601730479315, 13.5457299603578,
13.3729010596517, 13.4823595997866, 13.0965264603473, 13.2710281801434,
13.4489887206797, 13.5132372154748, 13.5196188787197), f3 = c(0,
1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0,
0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0)), .Names = c("result",
"f1", "f2", "f3"), class = "data.frame", row.names = c(NA, 40L
))

To get the bias, just evaluate the model with a feature vector of all zeros. To get the coefficient of the first feature, evaluate the model with a feature vector with a "1" in the first position, and zeros everywhere else - and then subtract the bias, which you already know. I'm afraid I don't know R syntax, but conceptually you want something like this:
bias = my.model.eval([0, 0, 0])
f1 = my.model.eval([1, 0, 0]) - bias
f2 = my.model.eval([0, 1, 0]) - bias
f3 = my.model.eval([0, 0, 1]) - bias
To test that you did it correctly, you can try something like this:
assert(bias + f1 + f2 + f3 == my.model.eval([1, 1, 1]))
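In R (with kernlab), the same probing idea might look roughly like the sketch below. I haven't run it against the data above; it assumes the model was fit as a classifier (e.g. type = "C-svc"), so that predict(..., type = "decision") returns the raw decision value rather than a class label, and eval.at is just a helper name made up for this sketch:
library(kernlab)
# probe the decision function at unit feature vectors
eval.at <- function(m, f1, f2, f3)
  as.numeric(predict(m, data.frame(f1 = f1, f2 = f2, f3 = f3), type = "decision"))
bias <- eval.at(my.model, 0, 0, 0)
w1   <- eval.at(my.model, 1, 0, 0) - bias
w2   <- eval.at(my.model, 0, 1, 0) - bias
w3   <- eval.at(my.model, 0, 0, 1) - bias
# with a linear kernel the decision value should be exactly this weighted sum
stopifnot(isTRUE(all.equal(bias + w1 + w2 + w3, eval.at(my.model, 1, 1, 1))))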

If I'm not mistaken, I think you're asking how to extract the W vector of the SVM, where W is defined as:
W = \sum_i y_i * \alpha_i * example_i
i.e. W is just the sum of the support vectors weighted by their (signed) coefficients. After you calculate W, you can extract the "weight" for whichever feature you want.
Assuming this is correct, you'd:
Get the indices of your data that are the support vectors
Get their weights (alphas)
Calculate W
kernlab stores the support-vector indices and their coefficients in lists (so that it also works on multiclass problems); the list manipulation below is just to get at the actual data. For a 2-class problem, which I'm assuming you have, the lists returned by alpha() and alphaindex() have length 1.
my.model <- ksvm(result ~ f1+f2+f3, data=gold, kernel="vanilladot", type="C-svc")
alpha.idxs <- alphaindex(my.model)[[1]] # Indices of SVs in original data
alphas <- alpha(my.model)[[1]]
y.sv <- gold$result[alpha.idxs]
# for unscaled data
sv.matrix <- as.matrix(gold[alpha.idxs, c('f1', 'f2', 'f3')])
weight.vector <- (y.sv * alphas) %*% sv.matrix
bias <- b(my.model)
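If you want the named vector from the question, you can assemble it from these pieces (a sketch based on the objects above; kernlab's decision value for C-svc is, if I recall correctly, w.x - b, so the bias term of the linear score is -b):
# named vector in the c(.bias, f1, f2, f3) format the question asked for
c(.bias = -bias, weight.vector[1, ])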
Note that kernlab actually scales your data before fitting. You can get the weights on the scaled data like so (where, I would guess, the bias should be 0):
weight.vector <- (y.sv * alphas) %*% xmatrix(my.model)[[1]]
If I understood your question, this should get you what you're after.

Related

How to create a confusion matrix using a function in R

I created the following data set:
actual <- c(1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0)
predicted <- c(1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0)
The following code works, but I want to use a function to create a confusion matrix instead:
#create new data frame
new_data <- data.frame(actual, predicted)
new_data["class"] <- ifelse(new_data["actual"] == 0 & new_data["predicted"] == 0, "TN",
                     ifelse(new_data["actual"] == 0 & new_data["predicted"] == 1, "FP",
                     ifelse(new_data["actual"] == 1 & new_data["predicted"] == 0, "FN", "TP")))
(conf.val <- table(new_data["class"]))
What might be the code to do that?
If you want the same output format as the one you posted, then consider this function
confusion <- function(pred, real) {
  stopifnot(all(c(pred, real) %in% 0:1))
  table(matrix(c("TN", "FP", "FN", "TP"), 2L)[cbind(pred, real) + 1L])
}
Output
> confusion(predicted, actual)
FN FP TN TP
1 2 5 4
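In case the indexing trick in that one-liner isn't obvious, here is the lookup matrix on its own (just an illustration): cbind(pred, real) + 1L turns each 0/1 pair into a (row, column) index into it, and table() then counts the resulting labels.
> matrix(c("TN", "FP", "FN", "TP"), 2L)
     [,1] [,2]
[1,] "TN" "FN"
[2,] "FP" "TP"
For example, pred = 1 and real = 0 picks row 2, column 1, i.e. "FP".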
The caret package offers a great collection of methods for machine learning:
library(caret)
actual <- as.factor(c(1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0))
predicted <- as.factor(c(1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0))
caret::confusionMatrix(data = predicted, actual, positive="1")

In-Degree Bonacich Power Centrality in R?

Thank you for your time in advance. I am attempting to identify a method to calculate in-degree Bonacich Power Centrality in R. I'm a long-time UCINET user attempting to make the switch. In UCINET, this is done by selecting Beta Centrality (Bonacich Power) and choosing "in-centrality" for the direction.
In R, it doesn't seem as though there is a way to calculate this using either the sna or igraph packages. Here it is for bonpow in sna:
bonpow(dat, g=1, nodes=NULL, gmode="digraph", diag=FALSE, tmaxdev=FALSE,
exponent=1, rescale=FALSE, tol=1e-07)
I do specify digraph, but I am not able to replicate the analysis in R.
Similarly, here it is for power_centrality in igraph:
power_centrality(graph, nodes = V(graph), loops = FALSE,
exponent = 1, rescale = FALSE, tol = 1e-07, sparse = TRUE)
Here, there does not seem to be a way to specify that it is a directed graph (although you can specify it when defining the network). However, you can estimate it for betweenness centrality.
In neither case do I seem to be able to specify in-degree or out-degree power centrality. Any help is appreciated. Is there something either in these or in a different package that I may be overlooking?
I'm not sure what you mean by direction, since the original paper, as far as I can tell, does not deal with it. That said, a common way to "change the direction" of statistics that are calculated directly from the adjacency matrix is to take the transpose of that matrix (for example, when computing exposure in the netdiffuseR package we let the user compute "incoming" or "outgoing" exposure by just taking the transpose of the adjacency matrix). When you take the transpose, you are essentially flipping the directionality of the ties, i.e. i->j turns into j->i.
If that's what UCINET does (again, not completely sure what it is), then you can get the "incoming"/"outgoing" version by transposing the network. Here is a toy example:
# Loading the sna package (btw: igraph's implementation is a copy of
# sna's). I wrap it in suppressMessages() to avoid the package's verbose
# startup messages
suppressMessages(library(sna))
# This is a random graph I generated with 10 vertices
net <- structure(
c(0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1,
0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1,
0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1,
0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1,
0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0),
.Dim = c(10L, 10L)
)
# Here is the default
bonpow(net)
#> [1] -0.8921521 -0.7658658 -0.9165947 -1.4176664 -0.6151369 -0.7862345
#> [7] -0.9206684 -1.3565601 -1.0347335 -1.0062173
# Here I'm getting the transpose of the adjmat
net2 <- t(net)
# The output is different (as you can see)
bonpow(net2)
#> [1] -0.8969158 -1.1026305 -0.6336011 -0.7158869 -1.2960022 -0.9545159
#> [7] -1.1684592 -0.8845729 -1.0368018 -1.1190876
Created on 2019-11-20 by the reprex package (v0.3.0)
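If you prefer igraph, the same transpose trick should carry over (a sketch, reusing the adjacency matrix net from above; graph_from_adjacency_matrix builds the directed graph and power_centrality is igraph's counterpart of bonpow):
library(igraph)
# build the graph from the matrix and from its transpose
g_out <- graph_from_adjacency_matrix(net, mode = "directed")
g_in  <- graph_from_adjacency_matrix(t(net), mode = "directed")
power_centrality(g_out, exponent = 1)  # analogous to bonpow(net)
power_centrality(g_in, exponent = 1)   # analogous to bonpow(t(net))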

Specifying number of trials, bootstrap

For an assignment, I am applying mixture modeling with the mixtools package in R. When I try to figure out the optimal number of components with the bootstrap, I get the following error:
Error in boot.comp(y, x, N = NULL, max.comp = 2, B = 5, sig = 0.05, arbmean = TRUE, :
Number of trials must be specified!
I found out that I have to supply N, which the documentation describes as "an n-vector of number of trials for the logistic regression type logisregmix. If NULL, then N is an n-vector of 1s for binary logistic regression."
But I don't know what N should actually be to make my bootstrap work.
Link to the data:
https://www.kaggle.com/blastchar/telco-customer-churn
My code:
data <- read.csv("Desktop/WA_Fn-UseC_-Telco-Customer-Churn.csv", stringsAsFactors = FALSE,
na.strings = c("NA", "N/A", "Unknown*", "NULL", ".P"))
data <- droplevels(na.omit(data))
data <- data[c(1:5032),]
testdf <- data[c(5033:7032),]
data <- subset(data, select = -customerID)
set.seed(100)
library(plyr)
library(mixtools)
data$Churn <- revalue(data$Churn, c("Yes"=1, "No"=0))
y <- as.numeric(data$Churn)
x <- model.matrix(Churn ~ . , data = data)
x <- x[, -1] #remove intercept
x <-x[,-c(7, 11, 13, 15, 17, 19, 21)] #multicollinearity
a <- boot.comp(y, x, N = NULL, max.comp = 2, B = 100,
sig = 0.05, arbmean = TRUE, arbvar = TRUE,
mix.type = "logisregmix", hist = TRUE)
Below is some more information about my predictors:
dput(x[1:4,])
structure(c(0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1,
34, 2, 45, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0,
0, 0, 1, 1, 0, 29.85, 56.95, 53.85, 42.3, 29.85, 1889.5, 108.15,
1840.75), .Dim = c(4L, 23L), .Dimnames = list(c("1", "2", "3",
"4"), c("genderMale", "SeniorCitizen", "PartnerYes", "DependentsYes",
"tenure", "PhoneServiceYes", "MultipleLinesYes", "InternetServiceFiber optic",
"InternetServiceNo", "OnlineSecurityYes", "OnlineBackupYes",
"DeviceProtectionYes", "TechSupportYes", "StreamingTVYes", "StreamingMoviesYes",
"ContractOne year", "ContractTwo year", "PaperlessBillingYes",
"PaymentMethodCredit card (automatic)", "PaymentMethodElectronic check",
"PaymentMethodMailed check", "MonthlyCharges", "TotalCharges"
)))
My response variable is binary.
I hope you guys can help me out!
Looking in the source code of mixtools::boot.comp, which is scary as it is over 800 lines long and in serious need of refactoring, the offending lines are:
if (mix.type == "logisregmix") {
  if (is.null(N))
    stop("Number of trials must be specified!")
Despite what the documentation says, N must be specified.
Try to set it to a vector of 1s: N = rep(1, length(y)) or N = rep(1, nrow(x))
In fact, if you look in mixtools::logisregmixEM, the internal function called by boot.comp, you'll see how N is set if NULL:
n <- length(y)
if (is.null(N)) {
  N = rep(1, n)
}
Too bad this code is never reached when N is NULL, because boot.comp stops with an error before getting there. This is a bug.
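So a minimal workaround is to pass the vector of 1s yourself. Reusing the call from the question (only N changed; everything else is as you posted it):
a <- boot.comp(y, x, N = rep(1, length(y)), max.comp = 2, B = 100,
               sig = 0.05, arbmean = TRUE, arbvar = TRUE,
               mix.type = "logisregmix", hist = TRUE)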

How can I make Gurobi (Using R) show all solutions

As the question states: I know there are several solutions (see output of GA and check that value and constraints are correct), but I can't get them out of Gurobi.
Edit after #Paleo13's answer: As he states, his answer is a good workaround. However, I would also love to see if there is a more efficient option. Therefore, I added a bounty. See here and here for what I know.
Reproducible example:
my_fun <- function(x) {
  f <- sum(model$obj * x)
  penalty <- sum(abs(model$A %*% x - model$rhs))
  return_value <- -f - 1e8 * penalty # sum(model$obj^2) * // 1e7 *
  return(return_value)
}
model <- structure(
list(modelsense = "min",
obj = c(0, 40, 20, 40, 0, 20, 20, 20, 0),
A = structure(c(0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 1, 0, 0, 1,
1, 0, -1, 0, 0, 0, 0, -1, 1, 0, 0, 1, 0, 0, 0, 0,
0, 0, 0, 0, 1, 0, 1, -1, 0, 0, 1, 0, -1, 0, 1, 0,
0, 1, 0, 0, -1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0),
.Dim = c(7L, 9L),
.Dimnames = list(
c("constraint1", "constraint2", "", "", "", "", ""),
NULL)),
rhs = c(1, 1, 0, 0, 0, 1, 1),
sense = c("=", "=", "=", "=", "=", "=", "="),
vtype = "B"),
.Names = c("modelsense", "obj", "A", "rhs", "sense", "vtype"))
# Gurobi:
params <- list(OutputFlag = 1, Presolve = 2, LogToConsole = 1, PoolSearchMode = 2, PoolSolutions = 10)
ilp_result <- gurobi::gurobi(model, params)
print(ilp_result$x)
# GA for cross-check
GA <- GA::ga(type = "binary", fitness = my_fun, nBits = length(model$obj),
maxiter = 3000, run = 2000, popSize = 10, seed = 12)
# Crosscheck:
summary(GA)
my_fun(ilp_result$x)
my_fun(GA@solution[1, ])
my_fun(GA@solution[2, ])
sum(abs(model$A %*% ilp_result$x - model$rhs))
sum(abs(model$A %*% GA@solution[1, ] - model$rhs))
sum(abs(model$A %*% GA@solution[2, ] - model$rhs))
What you describe can be done with the Solution Pool. Gurobi added solution pool support to the R API in version 8.0. You set parameters to control the solution pool, and the multiple solutions are returned in named components of the result. This is illustrated in the poolsearch.R example, which can also be found in the examples\R subdirectory.
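For completeness, reading the pool back from R looks roughly like this (a sketch; it assumes the Gurobi 8.0+ R interface, where, as far as I know, the result carries a pool component whose entries hold objval and xn -- poolsearch.R is the authoritative reference):
params <- list(PoolSearchMode = 2, PoolSolutions = 10)
res <- gurobi::gurobi(model, params)
if (!is.null(res$pool)) {
  pool.x   <- t(sapply(res$pool, `[[`, "xn"))   # one row per pooled solution
  pool.obj <- sapply(res$pool, `[[`, "objval")  # objective value of each one
}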
Disclaimer: I manage technical support for Gurobi.
Gurobi can indeed store feasible solutions that it encounters while searching for the optimal solution (or rather a solution that fits within a specified optimality gap). These solutions are stored in a "solution pool". Unfortunately, the gurobi R package does not have the functionality to access the solutions in the solution pool, so if we are looking for a solution that just uses R then we cannot use the solution pool. Also, it's worth noting that the solution pool may not necessarily contain all the feasible solutions; it only contains the solutions that Gurobi found along the way, so if we require all the feasible solutions we cannot rely on the solution pool from a single run of Gurobi.
So, with regard to your question, one strategy is to use a method referred to as "Bender's cuts". This basically involves solving the problem, adding constraints to forbid the solution we just obtained, and then solving the problem again, repeating this process until there aren't any more feasible solutions. I have written a function below that implements this method using the gurobi R package and applied it to your example. This method may not scale very well to problems with a large number of feasible solutions, because ideally we would access the solution pool to reduce the total number of Gurobi runs, but it is the best approach I know of (and I would love to hear if anyone has a better idea).
# define functions
find_all_feasible_solutions <- function(model, params) {
  # initialize variables
  counter <- 0
  solutions <- list()
  objs <- numeric(0)
  # search for feasible solutions until no more exist
  while (TRUE) {
    # increment counter
    counter <- counter + 1
    # solve problem
    s <- gurobi::gurobi(model, params)
    # break if status indicates that no feasible solution was found
    if (s$status %in% c("INFEASIBLE")) break()
    # store set of solutions
    solutions[[counter]] <- s$x
    objs[[counter]] <- s$objval
    # add a constraint to forbid this solution
    model$rhs <- c(model$rhs, sum(s$x) - 1)
    model$sense <- c(model$sense, "<=")
    model$A <- rbind(model$A, (s$x * 2) - 1)
  }
  # throw error if no feasible sets of solutions found
  if (length(solutions) == 0) {
    stop("no feasible solutions found.")
  }
  # return solutions as matrix
  list(x = do.call(rbind, solutions), obj = objs)
}
# create initial model
model <- list(
modelsense = "min",
obj = c(0, 40, 20, 40, 0, 20, 20, 20, 0),
A = structure(c(0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 1, 0, 0, 1,
1, 0, -1, 0, 0, 0, 0, -1, 1, 0, 0, 1, 0, 0, 0, 0,
0, 0, 0, 0, 1, 0, 1, -1, 0, 0, 1, 0, -1, 0, 1, 0,
0, 1, 0, 0, -1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0),
.Dim = c(7L, 9L),
.Dimnames = list(c("constraint1", "constraint2", "", "", "", "", ""),
NULL)),
rhs = c(1, 1, 0, 0, 0, 1, 1),
sense = c("=", "=", "=", "=", "=", "=", "="),
vtype = "B")
# create parameters
params <- list(OutputFlag = 1, Presolve = 2, LogToConsole = 1)
# find all feasible solutions
output <- find_all_feasible_solutions(model, params)
# print number of feasible solutions
print(length(output$obj))

PCA and Constant-Zero Column Error

I have a question about PCA using the caret package and an error message I'm getting, "cannot rescale a constant/zero column to unit variance".
Consider two sets of similar code. The first works just fine:
a = c(0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, -1, -1, NA)
b = c(1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, -1, -1, NA)
c = c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0)
df = data.frame(a, b, c)
trans = preProcess(df, method = c("center", "scale", "pca"))
The variance of each column can be seen as:
apply(df, 2, var, na.rm=TRUE)
Note that the variance of column "c" is 0.11
Let's say I change the second to last integer in column "c" to 1 instead of 0, and then run the same code:
a = c(0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, -1, -1, NA)
b = c(1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, -1, -1, NA)
c = c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0)
df = data.frame(a, b, c)
trans = preProcess(df, method = c("center", "scale", "pca"))
I get an error message:
Error in prcomp.default(x, scale = TRUE, retx = FALSE) :
cannot rescale a constant/zero column to unit variance
If you look at the variance for column c, it's 0.059:
apply(df, 2, var, na.rm=TRUE)
Can anyone please help me understand the difference between these two sets of code and why the second gives an error when the first does not?
Thank you
PCA only uses complete observations. In your second definition of df above, a PCA analysis will drop the last row due to missingness. And column c is constant within the remaining rows.
Note: my answer is around PCA generally and not specific to the caret package.
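You can see this directly by dropping the incomplete rows yourself (a quick sketch using the second df from the question):
# keep only the complete rows, the way the PCA step effectively does here,
# then check the variances: column c has no variance left, which is exactly
# what the error complains about
df_complete <- df[complete.cases(df), ]
apply(df_complete, 2, var)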
