I have tried to follow the approach mentioned here: JuMP constraints involving matrix inverse. But I am still not able to get my code to run.
My code is as follows:
using JuMP, Ipopt, LinearAlgebra
FP = Model(solver=IpoptSolver())
@variable(FP, x[1:2,1:2] >= 0)
@objective(FP, Max, 0)
@NLconstraint(FP, inv(x) <= 0.5*I)
status = solve(FP)
I get the following error:
ERROR: LoadError: Unexpected object x[i,j] >= 0 for all i in {1,2}, j in {1,2} in nonlinear expression.
I am not sure what is going wrong. I am using JuMP 0.18.6. Could you please help? Thanks.
Here is my code:
using Plots
using SpecialFunctions
using QuadGK
kappa = 1
B = 1
xi = (kappa/B)^4
function correlation(x)
    quadgk(q -> q * SpecialFunctions.besselj0(x*q)/(q^4 + xi), 0, 1e6)[1]/kappa
end
r = range(-20, 20, length = 1001)
plot(r, correlation(r))
I get an error on the Bessel function. I understand that the argument is the problem and that it should be a scalar (e.g. ::BigFloat, ::Float16, or ::Float32), but I don't know how to achieve that. I tried writing x .* q instead, but the problem remains the same and I get the error:
ERROR: MethodError: no method matching besselj0(::StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}})
Also, I'm searching for a way to write +infinity instead of 1e6.
Just replace correlation(r) with correlation.(r) in your code to use broadcasting, as is explained here.
The core of your problem is that in Julia functions are not broadcastable by default; you usually have to opt in explicitly (especially when you work with numerical code). Here is a basic example:
julia> sin(1)
0.8414709848078965
julia> sin([1])
ERROR: MethodError: no method matching sin(::Array{Int64,1})
Closest candidates are:
sin(::BigFloat) at mpfr.jl:727
sin(::Missing) at math.jl:1197
sin(::Complex{Float16}) at math.jl:1145
...
Stacktrace:
[1] top-level scope at REPL[2]:1
julia> sin.([1])
1-element Array{Float64,1}:
0.8414709848078965
However, in your case the correlation function is quite expensive. In such cases I usually use ProgressMeter.jl to monitor the progress of the computation (it shows how long you can expect the computation to take). So you can write:
using ProgressMeter
result = @showprogress map(correlation, r)
and use the map function to apply the correlation function to all elements of r (in this case the result will be the same as with broadcasting).
Finally, your computations will be much faster if you do not use global variables in quadgk. It is better to pass kappa and xi as arguments to the function like this:
function correlation(x, kappa, xi)
    quadgk(q -> q * SpecialFunctions.besselj0(x*q)/(q^4 + xi), 0, 1e6)[1]/kappa
end
result = @showprogress map(x -> correlation(x, kappa, xi), r)
I'm having trouble with the following constraint in JuMP:
@constraint(m, rBalance[h in H, k in P, m in M], sum(X[i,h,k,m] for i in SO) == (sum(X[h,h,k,r] for r in M if r != m) + sum(X[h,j,k,m] for j in SD if j != h)))
I got the following error message:
"No method matching add_constraint(::String, ::ScalarConstraint{GenericAffExpr{Float64,VariableRef},MathOptInterface.EqualTo{Float64}}, ::String)"
Any thoughts?
Cheers
Guillermo
Both your model and your index are named m, so inside the macro m refers to the index value rather than the model. Rename one of them and the constraint will build. This is a common error; it's why we removed all instances of m = Model() from the examples that we provide.
I am taking a numerical calculus class, and we are not required to know any Scilab programming beyond the very basics, which are taught through a booklet, since the class is mostly theoretical. While reading the booklet I found the following Scilab code, meant to find a root of a function via the bisection method.
The problem is, I can't find a way to make it work. I tried to call it with bissecao(x,-1,1,0.1,40), but it didn't work.
The error I got was:
at line 3 of function bissecao ( E:\Downloads\bisseccao3.sce line 3 )
Invalid index.
As I highly doubt that the code itself is broken, and I couldn't spot anything wrong despite searching, I guess I am probably calling it incorrectly somehow.
The code is the following:
function p = bissecao(f, a, b, TOL, N)
    i = 1
    fa = f(a)
    while (i <= N)
        // iteration of the bisection
        p = a + (b-a)/2
        fp = f(p)
        // stop condition
        if ((fp == 0) | ((b-a)/2 < TOL)) then
            return p
        end
        // bisect the interval
        i = i+1
        if (fa * fp > 0) then
            a = p
            fa = fp
        else
            b = p
        end
    end
    error('Max number of iterations exceeded!')
endfunction
Here f is a function (I guess), a and b are the limits of the interval over which we iterate, TOL is the tolerance at which the program terminates close to a zero, and N is the maximum number of iterations.
Any help on how to make this run is greatly appreciated.
Error in bissecao
The only error your bissecao function has is the call to return:
In a function return stops the execution of the function,
[x1,..,xn]=return(a1,..,an) stops the execution of the function and
put the local variables ai in calling environment under names xi.
So you should either call it without any argument (input or output), and the function will exit and return p.
Or you could call y1 = return(p), and the function will exit and p will be stored in y1.
It is better to use the argument-free form of return in functions, to avoid changing values of variables in the parent/calling script/function (a possible side effect).
The argument form is more useful when interactively debugging with pause:
In pause mode, it allows to return to lower level.
[x1,..,xn]=return(a1,..,an) returns to lower level and put the local
variables ai in calling environment under names xi.
Error in calling bissecao
The problem may also come from your call bissecao(x,-1,1,0.1,40), because you didn't define x. Fixing this by creating a function solves the problem:
function y=x(t)
    y=t+0.3
endfunction
x0=bissecao(x,-1,1,0.1,40) // changed 'return p' to 'return'
disp(x0) // gives -0.3 as expected
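For comparison, the bisection logic above is language-agnostic. Here is a minimal Python sketch of the same algorithm (the function name, tolerance, and test values mirror the Scilab version but are purely illustrative):

```python
def bisect(f, a, b, tol, n_max):
    """Find a root of f in [a, b] by bisection (assumes f(a) and f(b) differ in sign)."""
    fa = f(a)
    for _ in range(n_max):
        p = a + (b - a) / 2           # midpoint of the current interval
        fp = f(p)
        if fp == 0 or (b - a) / 2 < tol:
            return p                   # converged
        if fa * fp > 0:
            a, fa = p, fp              # root lies in the right half
        else:
            b = p                      # root lies in the left half
    raise RuntimeError("Max number of iterations exceeded")

root = bisect(lambda t: t + 0.3, -1, 1, 1e-6, 100)
# root is close to -0.3, matching the Scilab example
```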
I am currently writing a program in R to find solutions of a general polynomial difference equation using Picard's method.
For an insight in the mathematics behind it (as math mode isn't available here):
https://math.stackexchange.com/questions/2064669/picard-iterations-for-general-polynomials/2064732
Now since then I've been trying to work with the Ryacas package for integration. However I ran into trouble trying to work with the combination of expression and integration function.
library(Ryacas)
degrees = 3
a = c(3,5,4,6)
x0 = -1
maxIterations = 10
iteration = vector('expression', length = maxIterations)
iteration[1] = x0
for(i in 2:maxIterations){
  for(j in 1:degrees){
    exp1 = expression( a[j] * iteration[i-1] ^ j)
  }
  iteration[i] = x0 + Integrate(exp1, t)
}
but this results in
"Error in paste("(", ..., ")") :
cannot coerce type 'closure' to vector of type 'character'"
and exp1 = expression(a[j] * iteration[i-1]^j) instead of an actual expression, as I tried to achieve. Is there any way I can make sure R reads this as a real expression (i.e., for example, 3 * ( x0 ) ^ j for i = 2)?
Thanks in advance!
Edit:
I also found the Subst() function, and I am currently trying to see if anything can be fixed using it. Now I am mainly struggling to set up an expression for m coefficients of a, as I can't find a way to create, e.g., a for loop inside the expression() command.
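Independently of Ryacas, the Picard scheme for a polynomial right-hand side can be prototyped with plain coefficient arithmetic, since integrating c*t^k term-wise just gives c/(k+1)*t^(k+1). Here is a pure-Python sketch (names and the coefficient convention a[i] for the x^i term are my own, not from the question):

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = power of t)."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0.0) + (q[i] if i < len(q) else 0.0)
            for i in range(n)]

def poly_integrate(p):
    """Term-wise integration: c*t^k -> c/(k+1)*t^(k+1), with zero constant term."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(p)]

def picard(a, x0, n_iter):
    """Picard iterates for x'(t) = sum_i a[i]*x(t)^i with x(0) = x0.

    Returns the last iterate as a coefficient list in t."""
    x = [float(x0)]                     # x_0(t) = x0
    for _ in range(n_iter):
        rhs, power = [0.0], [1.0]       # power tracks x(t)^i
        for coeff in a:
            rhs = poly_add(rhs, [coeff * c for c in power])
            power = poly_mul(power, x)
        x = poly_add([float(x0)], poly_integrate(rhs))
    return x
```

For example, x'(t) = x(t) with x(0) = 1 (i.e. a = [0, 1]) reproduces the truncated exponential series 1 + t + t^2/2 + ... after a few iterations.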
I'm doing maximum likelihood estimation using the R optim function.
The command I used is
optim(3, func, lower=1.0001, method="L-BFGS-B")$par
The function func has infinite value if the parameter is 1.
Thus I set the lower value to be 1.0001.
But sometime an error occurs.
Error in optim(3, func, lower = 1.0001, method = "L-BFGS-B", sx = sx, :
L-BFGS-B needs finite values of 'fn'
What happened next is hard to understand.
If I run the same command again, then it gives the result 1.0001 which is lower limit.
It seems that the optim function 'learns' that 1 is not the proper answer.
How can the optim function give the answer 1.0001 at my first run?
P.S.
I just found that this problem occurs only in stand-alone R-console. If I run the same code in R Studio, it does not occur. Very strange.
The method "L-BFGS-B" requires all computed values of the function to be finite.
It seems, for some reason, that optim is evaluating your function at the value of 1.0, giving you an inf, then throwing an error.
If you want a quick hack, try defining a new function that returns a very large finite value (or a very low one, if you're trying to maximize) for inputs of 1.
func2 <- function(x){
  if (x == 1){
    return(9999)   # large finite penalty (optim minimizes by default)
  } else {
    return(func(x))
  }
}
optim(3, func2, lower=1.0001, method="L-BFGS-B")$par
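The guard pattern itself is language-agnostic: wrap the objective so it never returns an infinite value. A Python sketch of the same idea (func here is a made-up stand-in with a pole at 1, and the penalty value is arbitrary):

```python
import math

def func(x):
    # stand-in objective with a singularity at x = 1 (illustrative only)
    return x**2 / (x - 1) if x != 1 else math.inf

def func2(x):
    # replace the infinity with a large finite penalty so
    # finite-difference line searches never see inf
    return 9999.0 if x == 1 else func(x)
```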
(Posted as answer rather than comment for now; will delete later if appropriate.)
For what it's worth, I can't get this example (with a singularity at 1) to fail, even using the default control parameters (e.g. ndeps=1e-3):
func <- function(x) 1/(x-1)*x^2
library(numDeriv)
grad(func,x=2) ## critical point at x=2
optim(par=1+1e-4,fn=func,method="L-BFGS-B",lower=1+1e-4)
Try a wide range of starting values:
svec <- 1+10^(seq(-4,2,by=0.5))
sapply(svec,optim,fn=func,method="L-BFGS-B",lower=1+1e-4)
These all work.