I'm trying to solve a nonlinear problem with JuMP, and I get the error below:
/(::VariableRef,::QuadExpr) is not defined. Are you trying to build a nonlinear problem? Make sure you use @NLconstraint/@NLobjective.
And I am using @NLobjective and @NLconstraint.
The problem occurs at the second line of this function, when I call it from my function bvpsolve:
function hamiltonien(z,eps)
u = z[:,8]./(2*eps*Cd2*z[:,6].*z[:,2])
h = z[:,5] .* z[:,1] .* [sin(z[i,4]) for i in size(z[:,4])] + z[:,6] .* (T(z[:,1])./z[:,3] - phi(z[:,1]) * S * z[:,2].^2 /(2*z[:,3])* (Cd1+Cd2*u^2) - g *sin(z[:,4])) - z[:,7] *Cs(z[:,2])*T(z[:,1]) + 1/eps * z[:,8] *(phi(z[:,1])*S*z[:,2] *u /(2*z[:,3]) - g/z[:,2] *cos(z[:,4]))
return 1
end
function bvpsolve(eps,N)
sys = Model(optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 5))
set_optimizer_attribute(sys,"tol",1e-8)
set_optimizer_attribute(sys,"constr_viol_tol",1e-6)
@variables(sys, begin
tf
x[1:N+1 , 1:n]
y[1:N+1 , 1:n]
0. ≤ h[1:N] ≤ 10
end)
Δt = (tf-t0)/N
# Objective
@NLobjective(sys, Min, sum(sum((x[i,j]-y[i,j])^2 for i in 1:N+1) for j in 1:n )/N + α*sum((h[i]-Δt)^2 for i in 1:N))
hx = hamiltonien(x,eps)
hy = hamiltonien(y,eps)
xpoint , ppointx = hvfun(hx,x)
ypoint , ppointy = hvfun(hy,y)
@NLconstraints(sys, begin
con_h0, x[1,1] - 3480. == 0
con_hf, x[N+1,1] - 9144. == 0
con_v0, x[1,2] - 151.67 == 0
con_vf, x[N+1,2] - 191. == 0
con_m0, x[1,3] - 69000. == 0
con_mf, x[N+1,3] - 68100. == 0
con_g0, x[1,4] - 69000. == 0
con_gf, x[N+1,4] - 68100. == 0
end)
...
end)
You can't construct nonlinear expressions outside the macros.
See the documentation: https://jump.dev/JuMP.jl/stable/manual/nlp/
You can use a user-defined function, but I don't understand your example. It hard-codes return 1, so I don't know what u and h are for.
Your example is also non-reproducible, because I don't know what hvfun is.
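For illustration, here is a minimal sketch of the user-defined-function route described in that documentation; the function and variables (my_ham, x, y) are made up for the example, not taken from your model:
using JuMP, Ipopt
# A plain Julia function of scalars (no JuMP variables or vectors inside).
my_ham(a, b) = a * sin(b) + a^2
model = Model(Ipopt.Optimizer)
@variable(model, x >= 0)
@variable(model, y)
# Register the function so the @NL* macros can call and differentiate it.
register(model, :my_ham, 2, my_ham; autodiff = true)
@NLobjective(model, Min, my_ham(x, y))
optimize!(model)
Registered functions take scalar arguments and return a scalar, so you would have to call such a function once per grid point rather than on whole columns of z.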
P.S. If you want to have a longer discussion, please post on the community forum: https://discourse.julialang.org/c/domain/opt/13. It's a bit easier to have a back-and-forth there than on Stack Overflow.
I am trying to check for counterexamples to the conjecture stated in this MSE question, using the Pari-GP interpreter of Sage Cell Server.
I reproduce the statement of the conjecture here: If N > 8 is an even deficient-perfect number and Q = N/(2N - sigma(N)), then Q is prime.
Here, sigma(N) is the classical sum of divisors of N.
I am using the following code:
for(x=9, 1000, if(((Mod(x,(2*x - sigma(x))) == 0)) && ((fromdigits(Vecrev(digits(x / (2*x - sigma(x)))))) == (x / (2*x - sigma(x)))) && !(isprime((x / (2*x - sigma(x))))), print(x,factor(x))))
However, the Pari-GP interpreter of Sage Cell Server would not accept it, and instead gives the following error message:
*** at top-level: for(x=9,1000,if(((Mod(x,(2*x-sigma(x)))==0))&&
*** ^----------------------------
*** Mod: impossible inverse in %: 0.
What am I doing wrong?
Here's a better implementation of your algorithm:
{
forfactored(X = 9, 10^7,
my (s = sigma(X), t = 2*X[1] - s);
if (t <= 0, next);
my ([q, r] = divrem(X[1], t));
if (r == 0 && fromdigits(Vecrev(digits(q))) == q && !ispseudoprime(q),
print(X)))
}
It's a bit more readable, but most importantly it avoids factoring the same x over and over again: each time you write sigma(x), x needs to be factored (the interpreter is not clever enough to compute common subexpressions only once). In fact, it doesn't perform a single factorization, thanks to forfactored, which runs a sieve instead (the loop variable X contains [x, factor(x)]). This is about 3 times faster than the original implementation in this range.
I let it run to 10^9 (about 10 minutes); there was no further counterexample.
I got it to work myself.
Here is the code that I used:
for(x=9, 10000000, if((2*x > sigma(x)) && ((Mod(x,(2*x - sigma(x))) == 0)) && ((fromdigits(Vecrev(digits(x / (2*x - sigma(x)))))) == (x / (2*x - sigma(x)))) && !(isprime((x / (2*x - sigma(x))))), print(x,factor(x))))
The search returns the odd counterexample N = 9018009, which is expected.
It did not return any even counterexamples in the specified range.
I'm trying to translate a C routine from an old sound synthesis program into R, but have indexing issues which I'm struggling to understand (I'm a beginner when it comes to using loops).
The routine creates an exponential lookup table - the vector exptab:
# Define parameters
sinetabsize <- 8192
prop <- 0.8
BP <- 10
BD <- -5
BA <- -1
# Create output vector
exptab <- vector("double", sinetabsize)
# Loop
while(abs(BD) > 0.00001){
BY = (exp(BP) -1) / (exp(BP*prop)-1)
if (BY > 2){
BS = -1
}
else{
BS = 1
}
if (BA != BS){
BD = BD * -0.5
BA = BS
BP = BP + BD
}
if (BP <= 0){
BP = 0.001
}
BQ = 1 / (exp(BP) - 1)
incr = 1 / sinetabsize
x = 0
stabsize = sinetabsize + 1
for (i in (1:(stabsize-1))){
x = x + incr
exptab [[sinetabsize-i]] = 1 - (BQ * (exp(BP * x) - 1))
}
}
Running the code gives the error:
Error in exptab[[sinetabsize - i]] <- 1 - (BQ * (exp(BP * x) - 1)) :
attempt to select less than one element in integerOneIndex
From looking at other posts, I understand this indicates an indexing problem, but I'm finding it difficult to work out the exact issue.
I suspect the error may lie in my translation. The original C code for the last few lines is:
for (i=1; i < stabsize;i++){
x += incr;
exptab[sinetabsize-i] = 1.0 - (float) (BQ*(exp(BP*x) - 1.0));
}
I had thought the R code for (i in (1:(stabsize-1))) was equivalent to the C code for (i=1; i< stabsize;i++) (i.e. the initial value of i is i = 1, the test is whether i < stabsize, and the increment is +1). But now I'm not so sure.
Any suggestions as to where I'm going wrong would be greatly appreciated!
As you say, array indexing in R starts at 1; in C it starts at zero. I reckon that's your problem. Can sinetabsize-i ever get to zero? On the last pass through your loop i equals sinetabsize, so the C code writes to index 0, which doesn't exist in R; indexing exptab[[sinetabsize - i + 1]] instead fixes it.
I am writing a simple Newton's method
x_(n+1) = x_n - f(x_n) / f_prime(x_n)
to find the roots (can be a real number or a complex number) of a quadratic function:
f(x) = a*x*x + b*x + c
(a, b, and c are given constants, all real numbers). I know Newton's method will fail if the starting point or some iterate in the loop has a zero derivative. I want to use an if statement inside my for/while loop to avoid this situation. Does Julia have something like the stop 0 syntax in Fortran?
The generic Newton's Method root-finding code:
function newton_root_finding(f, f_diff, x0, rtol=1e-8, atol=1e-8)
f_x0 = f(x0)
f_diff_x0 = f_diff(x0)
x1 = x0 - f_x0 / f_diff_x0
f_diff_x1 = f_diff(x1)
@assert abs(f_diff_x0) > atol + rtol * abs(f_diff_x0) "Zero derivative. No solution found."
while abs(f_x0) > atol + rtol * (abs(f_x0))
x0 = x1
f_x0 = f(x0)
f_diff_x0 = f_diff(x0)
x1 = x0 - f_x0 / f_diff_x0
end
return x1
end
function quadratic_func(x)
a = 1.0
b = 0.0
c = 2.0
return a*x*x + b*x + c
end
function quadratic_func_diff(x)
a = 1.0
b = 0.0
c = 2.0
return 2.0*a*x + 1.0*b + 0.0*c
end
newton_root_finding(quadratic_func, quadratic_func_diff, 1.0 + 0.5im)
In the above code I used an @assert macro to make that happen, but I don't want to use any macro. I want to use an if statement inside my while loop to halt it. Another thing I've noticed is that if I change it to @assert abs(f_diff_x0) != 0, the test is ignored. Is that because of round-off errors, so that the "zero derivative" doesn't exactly equal 0?
The way to exit from the inside of a loop in general is a break statement; a return fulfills the same purpose, because it just exits the whole function.
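For example, here is a minimal, self-contained sketch of that pattern (the name newton_sketch and the tolerances are arbitrary, just for illustration):
function newton_sketch(f, f_diff, x0; tol = 1e-8, dtol = 1e-12)
    while true
        dfx = f_diff(x0)
        if abs(dfx) < dtol          # guard against a (numerically) zero derivative
            error("Zero derivative at x = $x0. No solution found.")
        end
        x1 = x0 - f(x0) / dfx
        abs(x1 - x0) < tol && return x1   # converged; return also exits the loop
        x0 = x1
    end
end
newton_sketch(x -> x^2 + 2, x -> 2x, 1.0 + 0.5im)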
For the comparisons you can use Base.isapprox(x, y; atol=atol, rtol=rtol). Its documentation starts with:
Inexact equality comparison: true if norm(x-y) <= max(atol, rtol*max(norm(x), norm(y))).
norm falls back to abs for numbers. I also think you have a bug in both comparisons: you always compare the value at x0 to itself.
As for breaking on zero derivatives, an @assert is, I think, appropriate here: if you get a zero derivative, you don't stop the iteration and return a result, you throw an error to signal an infeasible condition. I'd thus write your function as follows:
function newton_root_finding(f, ∂f, x0, rtol=1e-8, atol=1e-8)
x_old = x0
y_old = f(x0)
while true
df_old = ∂f(x_old)
@assert !isapprox(df_old, 0, rtol=rtol, atol=atol) "Zero derivative. No solution found."
x_new = x_old - y_old / df_old
y_new = f(x_new)
isapprox(y_old, y_new, rtol=rtol, atol=atol) && return x_new
x_old, y_old = x_new, y_new
end
end
This returns 3.357392012620626e-26 + 1.4142135623730951im on your test case, approximately sqrt(2)im.
To address your first question, you can use break to exit the while loop, like
function test()
i = 0
while true
i += 1
if i > 10
break
end
end
return i
end
As to your second question, when comparing floating point numbers it is often better to use isapprox (provide an atol if you compare against zero) instead of == or !=.
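For instance, a tiny illustration of why the atol matters when the reference value is zero:
x = 1e-12
x == 0                       # false: the value is tiny but not exactly zero
isapprox(x, 0)               # false: the default rtol is scaled by the arguments, so it is useless against 0
isapprox(x, 0; atol = 1e-8)  # true: an absolute tolerance treats x as numerically zero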
I don't understand why the following snippet of code returns a NoMethodError in Julia:
using Calculus
nx = 101
nt = 101
dx = 2*pi / (nx - 1)
nu = 0.07
dt = dx*nu
function init(x, nu, t)
phi = exp( -x^2 / 4.0*nu ) + exp( -(x - 2.0*pi)^2 / 4.0*nu )
dphi_dx = derivative(phi)
u = ( 2.0*nu /phi )*dphi_dx + 4.0
return u
end
x = range(0.0,stop=2*pi,length=nx)
t = 0.0
u = [init(x0,nu,t) for x0 in x]
My aim here is to populate the elements of an array named u with values as calculated by my function init. The u array should have nx elements with u calculated at every x value in the range between 0.0 and 2*pi.
Next time, please also post the error message and take a detailed look at it first, so you can try to spot the mistake yourself.
I don't really know the Calculus package, but it seems you are using it wrong. Your phi is a number, not a function, and you can't take the derivative of a single number. Change it to
phi = x -> exp( -x^2 / 4.0*nu ) + exp( -(x - 2.0*pi)^2 / 4.0*nu )
and then call phi and derivative at the argument x, i.e. phi(x) and derivative(phi, x) or dphi_dx(x). As I don't know much about the Calculus package, you should take a look at its documentation to verify that the derivative command does exactly what you want here.
A little extra: there are also element-wise (broadcast) operations in Julia (similar to Matlab, for example) that apply a function to a whole array. Instead of [init(x0,nu,t) for x0 in x], you can also write init.(x, nu, t).
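Putting the pieces together, a sketch of how init might then look (treating derivative(phi, x) as evaluating the numerical derivative of phi at the point x, which you should verify against the Calculus documentation):
using Calculus
function init(x, nu, t)
    # phi is now a function of x, so it can be differentiated and evaluated at x
    phi = x -> exp( -x^2 / 4.0*nu ) + exp( -(x - 2.0*pi)^2 / 4.0*nu )
    dphi_dx = derivative(phi, x)
    return ( 2.0*nu / phi(x) )*dphi_dx + 4.0
end
u = init.(x, nu, t)   # element-wise over the range x, as described above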
How can I find the intersection points in the graph shown below using the fsolve function (from Scilab)?
Here is what I've tried so far:
function y=f(x)
y = 30 + 0 * x;
endfunction
function y= g(x)
y=zeros(x)
k1 = find(x >= 5 & x <= 11);
if k1<>[] then
y(k1)= -59.535905 +24.763399*x(k1) -3.135727*x(k1)^2+0.1288967*x(k1)^3;
end;
k2=find(x >= 11 & x <= 12);
if k2 <> [] then
y(k2)=1023.4465 - 270.59543 * x(k2) + 23.715076 * x(k2)^2 - 0.684764 * x(k2)^3;
end;
k3 = find(x >= 12 & x <= 17);
if k3 <> [] then
y(k3) =-307.31448 + 62.094807 *x(k3) - 4.0091108 * x(k3)^2 + 0.0853523 * x(k3)^3;
end;
k4 = find(x >= 17 & x <= 50);
if k4 <> [] then
y(k4) = 161.42601 - 20.624104 *x(k4) + 0.8567075 * x(k4)^2 - 0.0100559 * x(k4)^3;
end;
endfunction
t=[5:50];
plot(t, g(t));
plot2d(t, f(t));
deff('res = fct', ['res(1) = f(x)'; 'res(2) = g(x)']);
k1=[5, 45];
xsol1 = fsolve(k1, f, g)
Your original post was utterly unreadable and chaotic. It took me a while to edit it and understand what you are trying to achieve. However, I will try to help you. Let's go step by step:
I am not sure why you have used the find function this way; probably you were trying to vectorize the g function? Please consider that Scilab does not broadcast functions over arrays by default: you need to either vectorize them or use feval to do so. Please read this other answer I have written before. find is a vectorized operation on an array: given a Boolean condition (for example a comparison with a scalar), it returns the indices of the elements which satisfy it. For example, from the find page:
beers = ["Desperados", "Leffe", "Kronenbourg", "Heineken"];
find(beers == "Leffe")
returns 2 and
A = rand(1, 20);
w = find(A < 0.4)
returns the indices of those elements of array A which are smaller than 0.4.
Please learn about conditionals, specifically the if, then, elseif, else, end statements. If you learn these, you will not use the find function in that way. When you have many ifs in a row, try select, case, else, end instead. Your second function could be written as:
function y = g(x)
if x < 5 | 50 < x then
error("Out of range");
elseif x <= 11 then
y = -59.535905 + 24.763399 * x - 3.135727 * x^2 + 0.1288967 * x^3;
return;
elseif x <= 12 then
y = 1023.4465 - 270.59543 * x + 23.715076 * x^2 - 0.684764 * x^3;
return;
elseif x <= 17 then
y = -307.31448 + 62.094807 * x - 4.0091108 * x^2 + 0.0853523 * x^3;
return;
else
y = 161.42601 - 20.624104 * x + 0.8567075 * x^2 - 0.0100559 * x^3;
end
endfunction
Now apparently you want to find the points on this curve which have a value of 30. Although there are methods to find these points automatically, plotting can be very helpful for finding the proper range:
t = [5:50];
plot(t, feval(t, g) - 30)
showing that the two solutions are in the ranges 20 < x1 < 30 and 40 < x2 < 50.
Now if we use fsolve with the proper initial values it gives us good results:
--> deff('[y] = g2(x)', 'y = g(x) - 30');
--> fsolve([25; 45], g2)
ans =
26.67373
48.396547
The third parameter of the fsolve function is the Jacobian / derivative of the g(x) function. You should either calculate the derivatives of the above polynomials manually (or use proper symbolic software like Maxima), or define them as polynomials using the poly function (see this tutorial for example), then differentiate them and define a new function like dgdx.