I would like to extract the alpha Lagrange multipliers from the svm function in the e1071 R package; however, I am not sure whether svm$coefs is producing these.
The alphas are defined as in Equation 9.23 (p. 352) of An Introduction to Statistical Learning.
In the documentation for svm, it says that
coefs: The corresponding coefficients times the training labels
Could someone please explain it?
$coefs produces alpha_i * y_i. Since the alpha_i are non-negative by definition, you can simply take the absolute value of coefs to recover the Lagrange multipliers, and extract y_i by taking the sign (the labels are only +1 or -1). This is a simplification often used in SVM packages: the multipliers themselves are never actually used, only their product with the label, so the two are stored as a single number for simplicity and efficiency, and when needed (as in this case) you can always reconstruct them.
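For example, a minimal sketch in R (using a binary subset of iris purely for illustration; coefs and index are fields of the fitted e1071 model object):
library(e1071)

d <- subset(iris, Species != "setosa")   # toy two-class problem
d$Species <- factor(d$Species)
fit <- svm(Species ~ ., data = d, kernel = "linear")

coefs  <- fit$coefs      # alpha_i * y_i, one row per support vector
alpha  <- abs(coefs)     # Lagrange multipliers (alpha_i >= 0 by definition)
y_sv   <- sign(coefs)    # labels (+1 / -1) of the support vectors
sv_idx <- fit$index      # which training rows are the support vectors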
I'm trying to analyze the reliability of repairable systems using growth models.
I have already fitted a Crow-AMSAA model, but I wonder if there is any package or any code for fitting a Generalized Renewal Process (Kijima Model I or Model II)
in R and finding its parameters beta, lambda (or alpha) and q
(or some other model for the mean cumulative function, MCF).
Equation 15 of this article gives an expression for the log-likelihood.
I tried to create the function like this:
likelihood.G1 = function(theta, x) {
  # x is a vector with the failure times, theta is the vector of parameters
  a = theta[1]  # alpha
  b = theta[2]  # beta
  q = theta[3]  # q
  logl2 = log(b/a)  # first part of the equation
  for (i in 1:length(x)) {
    logl2 = logl2 + (b - 1) * log(x[i] / (a * (1 + q)^(i - 1))) - (x[i] / (a * (1 + q)^(i - 1)))^b
  }
  return(-logl2)  # negative of the log-likelihood
}
And then use some routine to minimize the negative log-likelihood:
theta = c(0.5, 1.2, 0.8)  # starting parameters (alpha, beta, q)
nlm(likelihood.G1, theta, x = Data)
or alternatively
optim(theta, likelihood.G1, method = "BFGS", x = Data)
However, there seems to be some mistake, since the parameters it returns make no sense.
Any ideas of what I'm doing wrong?
Thanks
Looking at equation (16) of the paper you reference and comparing it with your code, it looks like you are missing one term in the for loop. Each data point should contribute three terms to the log-likelihood, but inside the loop your code only has two (not counting the updating term).
Specifically, your code does not include the 4th term of equation (16), nor the 7th, and so on. This is at least one error in the code. A further consideration is that α and β are constrained to be greater than zero; I am not sure whether the solver you are using takes this constraint into account.
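One simple way to impose the positivity constraint (a hedged sketch only; it does not fix the missing likelihood terms, and q is log-transformed here just for convenience) is to optimize on the log scale, so the unconstrained optimizer can never propose negative values:
likelihood.G1.log = function(logtheta, x) {
  likelihood.G1(exp(logtheta), x)  # evaluate the original function at exp-transformed parameters
}
start = log(c(0.5, 1.2, 0.8))
fit = optim(start, likelihood.G1.log, method = "BFGS", x = Data)
exp(fit$par)  # back-transform to alpha, beta, q
Alternatively, optim with method = "L-BFGS-B" accepts a lower argument directly, so you could keep the original parameterization and set the lower bounds to a small positive value.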
The example is based on an example in Shumway and Stoffer's "Time Series Analysis and Its Applications: With R Examples". In the original example phi, cq, and cr were scalars, so the authors could use fdHess without any issues (see the commented-out version of the code).
para = list(phi, cq, cr)
Linn = function(para) {  # to evaluate the likelihood at the estimates
  # kf = Kfilter0(num, y, 1, mu0, Sigma0, para[1], para[2], para[3])
  kf = Kfilter0(num, y, A = h, mu0, Sigma0, para[[1]], para[[2]], para[[3]])
  return(kf$like)
}
emhess = fdHess(para, function(para) Linn(para))
SE = sqrt(diag(solve(emhess$Hessian)))
I would like to generalize the code so that it can be applied to multivariate time series models. So in the code shown phi, cq, and cr are n*n arrays.
Is there a package that can calculate the Hessian for a scalar valued function with matrix arguments?
The closest match I can find is this (I also looked at nlme and numDeriv):
calculating the Gradient and the Hessian in R
In this case all the arguments are passed as a vector so the function being called has to be modified so that it can take the list of arguments and reconstruct the required matrices.
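For what it's worth, here is a hedged sketch of that wrapper approach, using utils::relist to rebuild the matrices from a flat vector and numDeriv::hessian for the numerical Hessian (the skeleton and Linn_vec names are just illustrative):
library(numDeriv)

skeleton <- list(phi = phi, cq = cq, cr = cr)  # same shapes as the estimates
Linn_vec <- function(par_vec) Linn(relist(par_vec, skeleton))  # flat vector -> list of matrices -> Linn

H  <- hessian(Linn_vec, unlist(skeleton))  # numerical Hessian at the estimates
SE <- sqrt(diag(solve(H)))                 # mirrors the SE calculation above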
Is there a method that would allow me to calculate the Hessian for a scalar-valued function with matrix arguments without changing the function being called? It seems this would be such a common problem that there would be an off-the-shelf answer, but I haven't been able to find one.
Baz
I am trying to better understand how the values of my feature vector may influence the result. For example, let's say I have the following vector with the final value being the result (this is a classification problem using an SVC, for example):
0.713, -0.076, -0.921, 0.498, 2.526, 0.573, -1.117, 1.682, -1.918, 0.251, 0.376, 0.025291666666667, -200, 9, 1
You'll notice that most of the values center around 0; however, there is one value that is orders of magnitude larger in absolute value: -200.
I'm concerned that this value is skewing the prediction and is being weighted unfairly heavily compared to the rest, simply because the value is so different.
Is this something to be concerned about when creating a feature vector? Or will the statistical test I use to evaluate my vector control for this large (or small) value based on the training set I provide it with? Are there methods available in scikit-learn specifically that you would recommend to normalize the vector?
Thank you for your help!
Yes, it is something you should be concerned about. SVM is heavily influenced by differences in feature scales, so you need a preprocessing step to mitigate this. The most popular techniques are:
Linearly rescale each feature dimension to the [0,1] or [-1,1] interval
Normalize each feature dimension so it has mean 0 and variance 1
Decorrelate the features by the transformation sigma^(-1/2) * X, where sigma = cov(X) (the data covariance matrix)
Each can be performed easily using scikit-learn (although for the third one you will need SciPy for the matrix square root and inversion).
"I am trying to better understand how the values of my feature vector may influence the result."
Then here's the math for you. Let's take the linear kernel as a simple example. It takes a sample x and a support vector sv, and computes the dot product between them. A naive Python implementation of a dot product would be
def dot(x, sv):
    return sum(x_i * sv_i for x_i, sv_i in zip(x, sv))
Now if one of the features has a much more extreme range than all the others (either in x or in sv, or worse, in both), then the term corresponding to this feature will dominate the sum.
A similar situation arises with the polynomial and RBF kernels. The polynomial kernel is essentially a scaled and shifted power of the linear kernel:
def poly_kernel(x, sv, degree, gamma, coef0):
    return (gamma * dot(x, sv) + coef0) ** degree
and the RBF kernel is an exponentially decaying function of the squared distance between x and sv:
from math import exp

def rbf_kernel(x, sv, gamma):
    diff = [x_i - sv_i for x_i, sv_i in zip(x, sv)]
    return exp(-gamma * dot(diff, diff))
In each of these cases, if one feature has an extreme range, it will dominate the result and the other features will effectively be ignored, except to break ties.
scikit-learn tools to deal with this live in the sklearn.preprocessing module: MinMaxScaler, StandardScaler, Normalizer.
I am using R to run a simulation in which I use a likelihood ratio test to compare two nested item response models. One version of the LRT uses the joint likelihood function L(θ,ρ) and the other uses the marginal likelihood function L(ρ). I want to integrate L(θ,ρ) against the density f(θ) to obtain the marginal likelihood L(ρ). I have two conditions: in one, f(θ) is standard normal (μ=0, σ=1), and my understanding is that I can just pick a number of abscissa points, say 20 or 30, and use Gauss-Hermite quadrature to approximate this integral. But in the other condition, f(θ) is a linearly transformed beta distribution (a=1.25, b=10), where the linear transformation B'=11.14*(B-0.11) is such that B' also has (approximately) μ=0, σ=1.
I am confused enough about how to implement quadrature for a beta distribution but then the linear transformation confuses me even more. My question is threefold: (1) can I use some variation of quadrature to approximate f(θ) when θ is distributed as this linearly transformed beta distribution, (2) how would I implement this in R, and (3) is this a ridiculous waste of time such that there is an obviously much faster and better method to accomplish this task? (I tried writing my own numerical approximation function but found that my implementation of it, being limited to the R language, was just too slow to suffice.)
Thanks!
First, I assume you can express your L(θ,ρ) and f(θ) in terms of actual code; otherwise you're kinda screwed. Given that assumption, you can use integrate to perform the necessary computations. Something like this should get you started; just plug in your expressions for L and f.
marglik <- function(rho) {
  integrand <- function(theta, rho) L(theta, rho) * f(theta)
  # set your lower/upper integration limits as appropriate;
  # $value extracts the numeric estimate of the integral
  integrate(integrand, lower = -5, upper = 5, rho = rho)$value
}
For this to work, your integrand has to be vectorized; ie, given a vector input for theta, it must return a vector of outputs. If your code doesn't fit the bill, you can use Vectorize on the integrand function before passing it to integrate:
integrand <- Vectorize(integrand, "theta")
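As a quick illustration that this wrapper works, here is a toy check with placeholder L and f (these are hypothetical stand-ins, not your actual item response likelihood and density):
L <- function(theta, rho) dnorm(theta, mean = rho, sd = 1)  # placeholder "joint likelihood"
f <- function(theta) dnorm(theta)                           # standard normal density
marglik(0.3)  # should be close to dnorm(0.3, 0, sqrt(2))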
Edit: not sure if you're also asking how to define f(θ) for the transformed beta distribution; that seems rather elementary for someone working with joint and marginal likelihoods. But if you are, then the density of B' = a*B + b, given f(B), is
f'(B') = f(B)/a = f((B' - b)/a) / a
So in your case, f(theta) is dbeta(theta/11.14 + 0.11, 1.25, 10) / 11.14
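A quick sanity check of this density in R (a hedged sketch; the integration limits just come from B lying in (0, 1)):
f <- function(theta) dbeta(theta/11.14 + 0.11, 1.25, 10) / 11.14
lo <- 11.14 * (0 - 0.11)  # value of theta when B = 0
hi <- 11.14 * (1 - 0.11)  # value of theta when B = 1
integrate(f, lo, hi)$value                     # should be ~1 (it is a density)
integrate(function(t) t * f(t), lo, hi)$value  # should be ~0 (the stated mean)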
I applied the fitdistrplus package in order to fit an empirical distribution.
It turned out the best fit was the negative binomial distribution with parameters:
size=0.6900788
mu=2.6522087
dnbinom(0:10, mu = 2.6522087, size =0.6900788)
[1] 0.33666338 0.18435650 0.12362301 0.08796440 0.06439416 0.04793144 0.03607044 0.02735574 0.02086667 0.01598815 0.01229390
I am now trying to generate the same numbers in Excel, where the parameters are required in a different format:
NEGBINOMDIST(number_f,number_s,probability_s)
How am I meant to do this? Any ideas?
Many thanks..
According to Microsoft's documentation, Excel's NEGBINOMDIST(number_f, number_s, probability_s) uses the standard parameterization: the probability of number_f failures before the number_s-th success, with success probability probability_s on each trial. The parameterization used by fitdistrplus is the alternative referred to in ?dnbinom as:
An alternative parametrization (often used in ecology) is by the
mean ‘mu’, and ‘size’, the dispersion parameter, where ‘prob’
= ‘size/(size+mu)’. The variance is ‘mu + mu^2/size’ in this
parametrization.
So if you want to get back from mu and size to prob and size (Excel's probability_s and number_s respectively) you need
number_s=size
probability_s=size/(size+mu)
muval <- 2.6522087
sizeval <- 0.6900788
(probval <- sizeval/(sizeval+muval))
## [1] 0.206469
all.equal(dnbinom(0:10,mu=muval,size=sizeval),
dnbinom(0:10,prob=probval,size=sizeval))
## TRUE
However, you're not done yet, because (as commented above by @James) Excel only allows positive integers for number_s, and the estimated value above is 0.69. You may need to search/ask on an Excel-related forum about how to overcome this limitation ... at worst, since Excel does have an implementation of the gamma function, you can use the formula given in ?dnbinom
Gamma(x+n)/(Gamma(n) x!) p^n (1-p)^x
to implement your own calculation of the NB (this formulation allows non-integer values of n). It would be best to use the GAMMALN function in Excel to compute the numerator and denominator of the normalization constant on the log scale ... if you're lucky, someone out there will have saved you some trouble and implemented this already ...
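For what it's worth, here is a quick R check (a sketch, not Excel code) that the Gamma-function formula above reproduces dnbinom for a non-integer size:
n <- 0.6900788   # size (Excel's number_s), non-integer here
p <- 0.206469    # prob (Excel's probability_s)
x <- 0:10
manual <- exp(lgamma(x + n) - lgamma(n) - lfactorial(x)) * p^n * (1 - p)^x
all.equal(manual, dnbinom(x, size = n, prob = p))  # should be TRUE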