Is there any non-linear mixed-integer solver in Julia?

Would you please help me with this error: ERROR: Solver does not support discrete variables.
It occurs, for example, in the following code:
using JuMP, CPUTime, Distributions, Ipopt
#parameters--------------------------------------------------------
sig=0.86;
#---------------------------------------------------------------------------
ALT=Model(solver=IpoptSolver());
# variables-----------------------------------------------------------------
f(x) = cdf(Normal(0, 1), x);
JuMP.register(ALT, :f, 1, f; autodiff = true);
@variable(ALT, h >= 0);
@variable(ALT, L >= 0);
@variable(ALT, n, Int);
#-------------------------------------------------------------------
@NLexpression(ALT,k7,1-f(L-sig*sqrt(n))+f(-L-sig*sqrt(n)));
#constraints--------------------------------------------------------
@NLconstraint(ALT, f(-L) <= 1/400);
#-------------------------------------------------------------------
@NLobjective(ALT, Min, 1/k7)
solve(ALT)
How is it possible to solve the problem? Thanks very much.

The full list of JuMP solvers and their capabilities with regard to model types is available here: https://jump.dev/JuMP.jl/dev/installation/
According to this list, the following solvers support mixed-integer nonlinear programming:
KNITRO.jl
Juniper.jl
SCIP.jl
Also worth noting is Alpine.jl from Los Alamos, which is not mentioned in the JuMP docs.
I recommend starting with Juniper.jl. Since it uses heuristics and delegates to other solvers, your Model line could look like this:
m = Model(optimizer_with_attributes(Juniper.Optimizer, "nl_solver"=>optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0), "mip_solver"=>optimizer_with_attributes(Cbc.Optimizer, "logLevel" => 0)))
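For reference, here is a minimal, self-contained sketch of that solver combination on a toy MINLP (the variables, constraint, and objective are made up purely for illustration):
using JuMP, Ipopt, Juniper, Cbc

m = Model(optimizer_with_attributes(Juniper.Optimizer,
    "nl_solver"  => optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0),
    "mip_solver" => optimizer_with_attributes(Cbc.Optimizer, "logLevel" => 0)))

@variable(m, x >= 0)        # continuous variable
@variable(m, n >= 1, Int)   # integer variable, which Ipopt alone cannot handle

@NLconstraint(m, x^2 + n <= 10)
@NLobjective(m, Min, (x - 2.5)^2 + n)

optimize!(m)
println(termination_status(m), " ", value(x), " ", value(n))
A user-defined function such as f(x) = cdf(Normal(0, 1), x) still has to be registered with JuMP.register before it can appear in @NLconstraint or @NLobjective, and Juniper can also be told about it through its "registered_functions" option, as a later question in this thread does.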

Related

MethodError: some problems with JuMP in Julia

I have some problems with JuMP. When I run it, it says:
MethodError: no method matching (::Interpolations.Extrapolation{Float64, 1, ScaledInterpolation{Float64, 1, Interpolations.BSplineInterpolation{Float64, 1, Vector{Float64}, BSpline{Linear{Throw{OnGrid}}}, Tuple{Base.OneTo{Int64}}}, BSpline{Linear{Throw{OnGrid}}}, Tuple{StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}}}}, BSpline{Linear{Throw{OnGrid}}}, Throw{Nothing}})(::AffExpr)
Use square brackets [] for indexing an Array.
Thanks!
using JuMP
import Ipopt
β = 0.88
Nb = 1000
δ = 1.5
wage = 1
rate = 1
grid_b = range(0, 5, length = 1000)
w = 5 * (grid_b).^2
w_func = LinearInterpolation(grid_b, w)
choice1 = Model(Ipopt.Optimizer)
@variable(choice1, x >= 0)
@NLobjective(choice1, Max, x^δ/(1-δ) + β * (w_func.((grid_b[3]*(1+rate)+wage-x) * 3)))
optimize!(choice1)
If I try to run your code, I get
ERROR: UndefVarError: LinearInterpolation not defined
Which package is that from? Also, what version of JuMP are you using?
You can’t use arbitrary functions in JuMP. You need to use a user-defined function:
https://jump.dev/JuMP.jl/stable/manual/nlp/#User-defined-Functions
But this will only work if it’s possible to automatically differentiate the function. I don’t know if that works with Interpolations.jl.
p.s. Please provide a link when posting the same question in multiple places: https://discourse.julialang.org/t/methoderror-no-method-matching-in-jump/66543/2
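To illustrate the user-defined-function route on your model, here is a rough sketch. It assumes LinearInterpolation comes from Interpolations.jl and, as noted above, it is not guaranteed that automatic differentiation works through the interpolation object:
using JuMP, Ipopt, Interpolations

grid_b = range(0, 5, length = 1000)
w = 5 .* grid_b.^2
w_func = LinearInterpolation(grid_b, w)

# Wrap the interpolation in an ordinary Julia function and register it,
# so it can appear inside @NLobjective; autodiff may or may not succeed here.
wval(x) = w_func(x)

choice1 = Model(Ipopt.Optimizer)
JuMP.register(choice1, :wval, 1, wval; autodiff = true)

@variable(choice1, 0 <= x <= 5)
@NLobjective(choice1, Max, wval(5 - x) - x)   # toy objective for illustration
optimize!(choice1)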

What is NonlinearConstraintIndex in Julia?

I tried to change the right-hand side of a non-linear constraint in the following code. Although kind people have helped me a lot, I couldn't find out how to fix it. Would you please help me again? Thanks so much.
using JuMP, Ipopt, Juniper, Gurobi, CPUTime
#-----Model parameters--------------------------------------------------------
sig=0.86;
landa=50;
E=T0=T1=.0833;
T2=0.75;
gam2=1; gam1=0;
a1=5; a2=4.22; a3=977.4; ap=977.4;
C1=949.2; c0=114.24;
f(x) = cdf(Normal(0, 1), x);
#---------------------------------------------------------------------------
ALT= Model(optimizer_with_attributes(Juniper.Optimizer, "nl_solver"=>optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0),
"mip_solver"=>optimizer_with_attributes(Gurobi.Optimizer, "logLevel" => 0),"registered_functions" =>[Juniper.register( :f, 1, f; autodiff = true)])
);
# variables-----------------------------------------------------------------
JuMP.register(ALT, :f, 1, f; autodiff = true);
@variable(ALT, h >= 0.1);
@variable(ALT, L >= 0.00001);
@variable(ALT, n>=2, Int);
#---------------------------------------------------------------------------
@NLexpression(ALT,k1,h/(1-f(L-sig*sqrt(n))+f(-L - sig*sqrt(n))));
@NLexpression(ALT,k2,(1-(1+landa*h)*exp(-landa*h))/(landa*(1-exp(-landa*h))));
@NLexpression(ALT,k3,E*n+T1*gam1+T2*gam2);
@NLexpression(ALT,k4,1/landa+h/(1-f(L-sig*sqrt(n))+f(-L-sig*sqrt(n))));
@NLexpression(ALT,k5,-(1-(1+landa*h)*exp(-landa*h))/(landa*(1-exp(-landa*h)))+E*n+T1*gam1+T2*gam2);
@NLexpression(ALT,k6,(exp(-landa*h)/1-exp(-landa*h))*(a3/(2*f(-L)))+ap);
@NLexpression(ALT,k7,1-f(L-sig*sqrt(n))+f(-L-sig*sqrt(n)));
@NLexpression(ALT,F,c0/landa+C1*(k1-k2+k3)+((a1+a2*n)/h)*(k4+k5+k3)+k6);
@NLexpression(ALT,FF,k4-k2+E*n+T1+T2+(1-gam1)*((exp(-landa*h)/1-exp(-landa*h)*T0)/(2*f(-L))));
#routing constraints--------------------------------------------------------
@NLconstraint(ALT, f(-L) <= 1/400);
#objective function---------------------------------------------------------
@NLexpression(ALT,f1,F/FF);
@NLexpression(ALT,f2,1/k7);
#-------------------------------------------------------------------------
@NLparameter(ALT, rp1 == 10000);
@NLparameter(ALT, lp1 == -10000);
@NLparameter(ALT, rp2 == 10000);
@NLparameter(ALT, lp2 == -10000);
@NLconstraint(ALT,rf1,f1<=rp1);
@NLconstraint(ALT,lf1,f1>=lp1);
@NLconstraint(ALT,rf2,f2<=rp2);
@NLconstraint(ALT,lf2,f2>=lp2);
#------------------------------------------------------------------------
ZT=zeros(2,1);
ZB=zeros(2,1);
#-----------------------------------------------------------------------------
@NLobjective(ALT,Min,f2);
optimize!(ALT);
f2min=getvalue(f2);
ZB[2]=f2min;
set_value(rp2, f2min);
set_value(lp2, f2min);
@NLobjective(ALT,Min,f1);
optimize!(ALT);
ZB[1]=getvalue(f1);
#--------------------------------------------------------------------------
set_value(rp2, 10000);
set_value(lp2, ZB[2]+0.1);**
@NLobjective(ALT,Min,f1);
optimize!(ALT);
f1min=getvalue(f1);
ZT[1]=f1min;
Although the constraint (**) should limit reaching ZB (the objective values obtained when the second objective is optimized), the model still reaches 949.2000589366443 when the first objective is optimized. Would you please help me understand the reasons?
Could the choice of solvers be a factor?
Or can the non-linear model not be solved with these solvers?
Thank you very much
julia> ZB
2×1 Array{Float64,2}:
949.2000092739842
1.0000000053425355
#--------------------------------------------------
julia> ZT
2×1 Array{Float64,2}:
949.2000589366443
0.0
The code is updated. In fact, this code is trying to find two points of the Pareto front.
This is an example:
using JuMP, CPLEX, CPUTime
#----------------------------------------------------------------------
WES=Model(CPLEX.Optimizer)
#-----------------------------------------------------------------------
@variable(WES,x[i=1:4]>=0);
@variable(WES,y[i=5:6]>=0,Int);
@variable(WES,xp[i=1:4]>=0);
@variable(WES,yp[i=5:6]>=0,Int);
#-----------------------------------------------------------------------
ofv1=[3 6 -3 -5]
ofv2=[-15 -4 -1 -2];
f1=sum(ofv1[i]*x[i] for i=1:4);
f2=sum(ofv2[i]*x[i] for i=1:4);
f1p=sum(ofv1[i]*xp[i] for i=1:4);
f2p=sum(ofv2[i]*xp[i] for i=1:4);
#------------------------------------------------------------------------
@constraint(WES,con1,-x[1]+3y[5]<=0);
@constraint(WES,con2,x[1]-6y[5]<=0);
@constraint(WES,con3,-x[2]+3y[5]<=0);
@constraint(WES,con4,x[2]-6y[5]<=0);
@constraint(WES,con5,-x[3]+4y[6]<=0);
@constraint(WES,con6,x[3]-4.5y[6]<=0);
@constraint(WES,con7,-x[4]+4y[6]<=0);
@constraint(WES,con8,x[4]-4.5y[6]<=0);
@constraint(WES,con9,y[5]+y[6]<=5);
@constraint(WES,con14,-xp[1]+3yp[5]<=0);
@constraint(WES,con15,xp[1]-6yp[5]<=0);
@constraint(WES,con16,-xp[2]+3yp[5]<=0);
@constraint(WES,con17,xp[2]-6yp[5]<=0);
@constraint(WES,con18,-xp[3]+4yp[6]<=0);
@constraint(WES,con19,xp[3]-4.5yp[6]<=0);
@constraint(WES,con20,-xp[4]+4yp[6]<=0);
@constraint(WES,con21,xp[4]-4.5yp[6]<=0);
@constraint(WES,con22,yp[5]+yp[6]<=5);
#------------------------------------------------------------------------
ZT=zeros(2,1);
ZB=zeros(2,1);
#--------------------------------------------------------------------------------
@objective(WES,Min,f2);
optimize!(WES);
f2min=JuMP.value(f2)
set_normalized_rhs(rf2,f2min);
set_normalized_rhs(lf2,f2min);
ZB[2]=getvalue(f2);
@objective(WES,Min,f1);
optimize!(WES);
ZB[1]=getvalue(f1);
#----------------
JuMP.setRHS(rf2,10000);
JuMP.setRHS(lf2,ZB[2]);
@objective(WES,Min,f1);
optimize!(WES);
set_normalized_rhs(rf1,getvalue(f1));
set_normalized_rhs(lf1,getvalue(f1));
ZT[1]=getvalue(f1);
@objective(WES,Min,f2);
optimize!(WES);
ZT[2]=getvalue(f2);
But it raises that error again when the right-hand-side functions are run:
set_normalized_rhs(rf2,f2min)
ERROR: MethodError: no method matching set_normalized_rhs(::ConstraintRef{Model,NonlinearConstraintIndex,ScalarShape}, ::Float64)
Closest candidates are:
set_normalized_rhs(::ConstraintRef{Model,MathOptInterface.ConstraintIndex{F,S},Shape} where Shape<:AbstractShape, ::Any) where {T, S<:Union{MathOptInterface.EqualTo{T}, MathOptInterface.GreaterThan{T}, MathOptInterface.LessThan{T}}, F<:Union{MathOptInterface.ScalarAffineFunction{T}, MathOptInterface.ScalarQuadraticFunction{T}}} at C:\Users\admin\.julia\packages\JuMP\YXK4e\src\constraints.jl:478
Stacktrace:
[1] top-level scope at none:1
I can't find what the problem is. This example was run in Julia 0.6.4.2, where ZB and ZT were:
julia>ZB
2×1 Array{Float64,2}:
270.0
-570.0
julia> ZT
2×1 Array{Float64,2}:
-180.0
-67.5
Thanks indeed.
Duplicate of is there any possibility to change the RHS of non-linear constraints in julia?.
You can use set_value to update the value of a nonlinear parameter. https://jump.dev/JuMP.jl/v0.21.3/nlp/#JuMP.set_value-Tuple{NonlinearParameter,Number}
Here's an example
using JuMP
model = Model()
@variable(model, x)
@NLparameter(model, p == 1)
@NLconstraint(model, sqrt(x) <= p)
# To make RHS p=2
set_value(p, 2)
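Applied to the question above, a sketch of the same idea (reusing the names ALT, f1, f2 from your code; this assumes the parameter is created before the constraint that uses it):
@NLparameter(ALT, rp2 == 10000)
@NLconstraint(ALT, rf2, f2 <= rp2)

@NLobjective(ALT, Min, f2)
optimize!(ALT)
f2min = value(f2)        # value of the nonlinear expression f2

set_value(rp2, f2min)    # this takes the place of changing the RHS
@NLobjective(ALT, Min, f1)
optimize!(ALT)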

Comparing the results from lpSolve to linprog: is it a problem in the implementation?

I would like to minimize a linear programming system with linear equality constraints.
The system is summarized in the following Python 3 code:
>>> obj_func = [1,1,1]
>>> const = [[[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]]
>>> constraints= np.reshape(const, (-1, 3))
>>> constraints
array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1],
[1, 1, 1]])
>>> rhs = [0.4498162176582741, 0.4498162176582741, 0.10036756468345168, 1.0]
Using scipy.optimize.linprog:
>>> res = linprog(obj_func, constraints, rhs, method="interior-point", options={"disp":True})
>>> res
con: array([], dtype=float64)
fun: 1.4722956444515663e-09
message: 'Optimization terminated successfully.'
nit: 4
slack: array([0.44981622, 0.44981622, 0.10036756, 1. ])
status: 0
success: True
x: array([4.34463075e-10, 4.34463075e-10, 6.03369494e-10])
The same system summarized in R and minimized using lpSolve:
> obj.func = c(1,1,1)
> constraints = matrix(c(1,0,0,0,1,0,0,0,1,1,1,1), nrow= 4, byrow = TRUE)
> rhs = c(0.4498162+0i, 0.4498162+0i, 0.1003676+0i, 1.0000000+0i)
> f.dir = c("=","=","=","=")
>
> res = lp("min",obj.func,constraints,f.dir,rhs,compute.sens=FALSE)
> res
Success: the objective function is 1
As detailed above, the results are not close to each other although it is the same system, so I did the same for other systems, but those results are also far apart.
My question: I know that it is not necessary for every LP to have a unique solution, but I think they should produce close values! In my case, I tried to minimize many systems using both solvers, but the results are too far apart. For example:
First system: linprog gave 1.4722956444515663e-09 while lpSolve gave 1
Another system: linprog gave 1.65952852061376e-11 while lpSolve gave 0.8996324
Another system: linprog gave 3.05146726445553e-12 while lpSolve gave 0.8175745
You are solving different models.
res = linprog(obj_func, constraints, rhs, method="interior-point", options={"disp":True})
means
res = linprog(obj_func, A_ub=constraints, b_ub=rhs, method="interior-point", options={"disp":True})
resulting in the constraints:
x0 <= 0.4498162176582741
...
instead of
x0 == 0.4498162176582741
So linprog is using inequalities only, while lpSolve is using equalities only (I have not checked whether f.dir = c("=","=","=","=") does what I think it does, but the result shows this more or less).
The linprog-result:
x: array([4.34463075e-10, 4.34463075e-10, 6.03369494e-10])
is a typical near-zero-vector output of an interior-point method (it only approximates an optimal solution)! In contrast to commercial solvers like Gurobi, there is no crossover step.
Be careful when reading the docs (which contain this information).
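If the goal is the equality-constrained model that lpSolve is solving, the constraints would presumably have to be passed to linprog as equalities instead, along these lines (sketch):
# Pass the system as equalities (A_eq/b_eq) rather than inequalities (A_ub/b_ub).
res = linprog(obj_func, A_eq=constraints, b_eq=rhs,
              method="interior-point", options={"disp": True})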

Julia JuMP Multivariate ML Estimation

I am trying to perform an ML estimation of a normally distributed variable in a linear regression setting in Julia using JuMP and the NLopt solver.
There exists a good working example here; however, if I try to estimate the regression parameters (slopes), the code becomes quite tedious to write, in particular as the parameter space increases.
Maybe someone has an idea of how to write it more concisely. Here is my code:
#type definition to store data
type data
n::Int
A::Matrix
β::Vector
y::Vector
ls::Vector
err::Vector
end
#generate regression data
function Data( n = 1000 )
A = [ones(n) rand(n, 2)]
β = [2.1, 12.9, 3.7]
y = A*β + rand(Normal(), n)
ls = inv(A'A)A'y
err = y - A * ls
data(n, A, β, y, ls, err)
end
#initialize data
d = Data()
println( var(d.y) )
function ml( )
m = Model( solver = NLoptSolver( algorithm = :LD_LBFGS ) )
@defVar( m, b[1:3] )
@defVar( m, σ >= 0, start = 1.0 )
#this is the working example.
#As you can see it's quite tedious to write
#and becomes rather infeasible if there are more than,
#let's say 10, slope parameters to estimate
@setNLObjective( m, Max, -(d.n/2)*log(2π*σ^2)
-sum{(d.y[i]-d.A[i,1]*b[1]
-d.A[i,2]*b[2]
-d.A[i,3]*b[3])^2, i=1:d.n}/(2σ^2) )
#julia returns:
> slope: [2.14,12.85,3.65], variance: 1.04
#which is what is to be expected
#however:
#this is what I would like the code to look like:
@setNLObjective( m, Max, -(d.n/2)*log(2π*σ^2)
-sum{(d.y[i]-(d.A[i,j]*b[j]))^2,
i=1:d.n, j=1:3}/(2σ^2) )
#I also tried:
@setNLObjective( m, Max, -(d.n/2)*log(2π*σ^2)
-sum{sum{(d.y[i]-(d.A[i,j]*b[j]))^2,
i=1:d.n}, j=1:3}/(2σ^2) )
#but unfortunately it returns:
> slope: [10.21,18.89,15.88], variance: 54.78
solve(m)
println( getValue(b), " ", getValue(σ^2) )
end
ml()
Any ideas?
EDIT
As noted by Reza, a working example is:
@setNLObjective( m, Max, -(d.n/2)*log(2π*σ^2)
-sum{(d.y[i]-sum{d.A[i,j]*b[j],j=1:3})^2,
i=1:d.n}/(2σ^2) )
The sum{} syntax is a special syntax that only works inside JuMP macros, and is the preferred syntax for sums.
So your example would be written as:
function ml( )
m = Model( solver = NLoptSolver( algorithm = :LD_LBFGS ) )
@variable( m, b[1:3] )
@variable( m, σ >= 0, start = 1.0 )
@NLobjective(m, Max,
-(d.n/2)*log(2π*σ^2)
- sum{
(d.y[i] - sum{d.A[i,j]*b[j], j=1:3})^2,
i=1:d.n}/(2σ^2) )
where I've expanded it across multiple lines to be as clear as possible.
Reza's answer isn't technically wrong, but isn't idiomatic JuMP and won't be as efficient for larger models.
I didn't trace your code, but anyway, I hope that the following works for you:
sum([(d.y[i]-sum([d.A[i,j]*b[j] for j=1:3]))^2 for i=1:d.n])
As @IainDunning mentioned, the JuMP package has a special syntax for summation inside its macros, so the more efficient and abstract way to do this is:
sum{(d.y[i]-sum{d.A[i,j]*b[j], j=1:3})^2, i=1:d.n}
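Note that in current JuMP releases the sum{} syntax has been replaced by generator expressions, so the same objective would be written roughly as follows (a sketch, assuming the same m, d, b and σ as above):
@NLobjective(m, Max,
    -(d.n / 2) * log(2π * σ^2) -
    sum((d.y[i] - sum(d.A[i, j] * b[j] for j in 1:3))^2 for i in 1:d.n) / (2σ^2))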

(in R) Why is result of ksvm using user-defined linear kernel different from that of ksvm using "vanilladot"?

I wanted to use a user-defined kernel function for ksvm in R.
So, as practice, I tried to make a vanilladot kernel and compare it with the "vanilladot" that is built into "kernlab".
I wrote my kernel as follows.
#
###vanilla kernel with class "kernel"
#
kfunction.k <- function(){
k <- function (x,y){crossprod(x,y)}
class(k) <- "kernel"
k}
l<-0.1 ; C<-1/(2*l)
###use kfunction.k
tmp<-ksvm(x,factor(y),scaled=FALSE, type = "C-svc", kernel=kfunction.k(), C = C)
alpha(tmp)[[1]]
ind<-alphaindex(tmp)[[1]]
x.s<-x[ind,] ; y.s<-y[ind]
w.class.k<-t(alpha(tmp)[[1]]*y.s)%*%x.s
w.class.k
I thought the result of this operation would be equal to that of the following.
However, it isn't.
#
###use "vanilladot"
#
l<-0.1 ; C<-1/(2*l)
tmp1<-ksvm(x,factor(y),scaled=FALSE, type = "C-svc", kernel="vanilladot", C = C)
alpha(tmp1)[[1]]
ind1<-alphaindex(tmp1)[[1]]
x.s<-x[ind1,] ; y.s<-y[ind1]
w.tmp1<-t(alpha(tmp1)[[1]]*y.s)%*%x.s
w.tmp1
I think this problem may be related to the kernel class.
When the class is set to "kernel", this problem occurs.
However, when the class is set to "vanillakernel", the result of ksvm using the user-defined kernel is equal to that of ksvm using the "vanilladot" built into kernlab.
#
###vanilla kernel with class "vanillakernel"
#
kfunction.v.k <- function(){
k <- function (x,y){crossprod(x,y)}
class(k) <- "vanillakernel"
k}
# The only difference between kfunction.k and kfunction.v.k is "class(k)".
l<-0.1 ; C<-1/(2*l)
###use kfunction.v.k
tmp<-ksvm(x,factor(y),scaled=FALSE, type = "C-svc", kernel=kfunction.v.k(), C = C)
alpha(tmp)[[1]]
ind<-alphaindex(tmp)[[1]]
x.s<-x[ind,] ; y.s<-y[ind]
w.class.v.k<-t(alpha(tmp)[[1]]*y.s)%*%x.s
w.class.v.k
I don't understand why the result is different from "vanilladot" when the class is set to "kernel".
Is there an error in my operation?
First, it seems like a really good question!
Now to the point. In the sources of ksvm we can find where the line is drawn between using a user-defined kernel and the built-ins:
if (type(ret) == "spoc-svc") {
if (!is.null(class.weights))
weightedC <- class.weights[weightlabels] * rep(C,
nclass(ret))
else weightedC <- rep(C, nclass(ret))
yd <- sort(y, method = "quick", index.return = TRUE)
xd <- matrix(x[yd$ix, ], nrow = dim(x)[1])
count <- 0
if (ktype == 4)
K <- kernelMatrix(kernel, x)
resv <- .Call("tron_optim", as.double(t(xd)), as.integer(nrow(xd)),
as.integer(ncol(xd)), as.double(rep(yd$x - 1,
2)), as.double(K), as.integer(if (sparse) xd@ia else 0),
as.integer(if (sparse) xd@ja else 0), as.integer(sparse),
as.integer(nclass(ret)), as.integer(count), as.integer(ktype),
as.integer(7), as.double(C), as.double(epsilon),
as.double(sigma), as.integer(degree), as.double(offset),
as.double(C), as.double(2), as.integer(0), as.double(0),
as.integer(0), as.double(weightedC), as.double(cache),
as.double(tol), as.integer(10), as.integer(shrinking),
PACKAGE = "kernlab")
reind <- sort(yd$ix, method = "quick", index.return = TRUE)$ix
alpha(ret) <- t(matrix(resv[-(nclass(ret) * nrow(xd) +
1)], nclass(ret)))[reind, , drop = FALSE]
coef(ret) <- lapply(1:nclass(ret), function(x) alpha(ret)[,
x][alpha(ret)[, x] != 0])
names(coef(ret)) <- lev(ret)
alphaindex(ret) <- lapply(sort(unique(y)), function(x)
which(alpha(ret)[,
x] != 0))
xmatrix(ret) <- x
obj(ret) <- resv[(nclass(ret) * nrow(xd) + 1)]
names(alphaindex(ret)) <- lev(ret)
svindex <- which(rowSums(alpha(ret) != 0) != 0)
b(ret) <- 0
param(ret)$C <- C
}
The important parts are two things. First, if we provide ksvm with our own kernel, then ktype=4 (while for vanillakernel, ktype=0), and this makes two changes:
in the case of a user-defined kernel, the kernel matrix is computed instead of actually using the kernel
the tron_optim routine is run with the information regarding the kernel
Now, in svm.cpp we can find the tron routines, and in tron_run (called from tron_optim) we see that the LINEAR kernel has a separate optimization routine:
if (param->kernel_type == LINEAR)
{
/* lots of code here */
while (Cpj < Cp)
{
totaliter += s.Solve(l, prob->x, minus_ones, y, alpha, w,
Cpj, Cnj, param->eps, sii, param->shrinking,
param->qpsize);
/* lots of code here */
}
totaliter += s.Solve(l, prob->x, minus_ones, y, alpha, w, Cp, Cn,
param->eps, sii, param->shrinking, param->qpsize);
delete[] w;
}
else
{
Solver_B s;
s.Solve(l, BSVC_Q(*prob,*param,y), minus_ones, y, alpha, Cp, Cn,
param->eps, sii, param->shrinking, param->qpsize);
}
As you can see, the linear case is treated in a more complex, more detailed way. There is an inner optimization loop calling the solver many times. It would require a really deep analysis of the actual optimization being performed here, but at this step one can answer your question in the following way:
There is no error in your operation
kernlab's svm has a separate routine for training an SVM with a linear kernel, selected based on the type of kernel passed to the code; changing "kernel" to "vanillakernel" made ksvm think it is actually working with vanillakernel, and so it performed this separate optimization routine
It does not in fact seem to be a bug, as the linear SVM is genuinely very different from the kernelized version in terms of efficient optimization techniques. The amount of heuristics as well as numerical issues that have to be taken care of is really big. As a result, some approximations are required and can lead to different results. While for a rich feature space (like those induced by an RBF kernel) it should not really matter, for simple kernels like linear ones these simplifications can lead to significant output changes.
