function Base.+ must be explicitly imported to be extended - julia

I'm pretty new to Julia, so forgive me if my question is dumb.
For example, I defined a type like this:
type Vector2D
    x::Float64
    y::Float64
end
and two objects v and w:
v = Vector2D(3, 4)
w = Vector2D(5, 6)
If I add them up, it raises this error: MethodError: no method matching +(::Vector2D, ::Vector2D). That's fine, but when I want to define a method for summing these objects,
+(a::Vector2D, b::Vector2D) = Vector2D(a.x+b.x, a.y+b.y)
it raises this error:
error in method definition: function Base.+ must be explicitly imported to be extended
Julia version 0.5

As the error message says, you must tell Julia that you want to extend the + function from Base (the standard library):
import Base: +, -
+(a::Vector2D, b::Vector2D) = Vector2D(a.x + b.x, a.y + b.y)
-(a::Vector2D, b::Vector2D) = Vector2D(a.x - b.x, a.y - b.y)
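With the import in place, the new methods take part in normal dispatch (a quick check, continuing the example above):
julia> v + w
Vector2D(8.0, 10.0)
julia> w - v
Vector2D(2.0, 2.0)
Alternatively, you can skip the import and qualify the operator name at the definition site, e.g. Base.:+(a::Vector2D, b::Vector2D) = Vector2D(a.x + b.x, a.y + b.y), which extends the Base function as well.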

Related

Write Julia macro that returns a function

First post here, thanks for reading!
Problem: I have a Vector{String} - call it A - where each element is a part of an equation, e.g. the first element of A is "x[1] - (0.8*x[1])". I would like to write a macro that takes as arguments i) a String - call it fn_name - with the name of a function, ii) the vector A, and returns a function named fn_name which looks like
function fn_name(f, x)
    f[1] = x[1] - (0.8*x[1])
    f[2] = (exp(x[4]) - 0.8*exp(x[3]))^(-1.1) - (0.99*(exp(x[4]) - 0.8*exp(x[4]))^(-1.1)*(1.0 - 0.025 + 0.30*exp(x[1])*exp(x[2])^(0.30 - 1.0)))
    f[3] = exp(x[2]) - ((1.0 - 0.025)*exp(x[2]) + exp(x[1])*exp(x[2])^0.30 - exp(x[4]))
    f[4] = x[3] - (x[4])
end
where each rhs is one element of
A = ["x[1] - (0.8*x[1])", "(exp(x[4]) - 0.8*exp(x[3]))^(-1.1) - (0.99*(exp(x[4]) - 0.8*exp(x[4]))^(-1.1)*(1.0 - 0.025 + 0.30*exp(x[1])*exp(x[2])^(0.30 - 1.0)))", "exp(x[2]) - ((1.0 - 0.025)*exp(x[2]) + exp(x[1])*exp(x[2])^0.30 - exp(x[4]))", "x[3] - (x[4])"]
What I tried: my best attempt at solving the problem is the following
macro make_fn(fn_name, A)
    esc(quote
        function $(Symbol(fn_name))(f, x)
            for i = 1:length($(A))
                f[$i] = Meta.parse($(A)[$i])
            end
        end
    end)
end
which, however, doesn't work: when I run @make_fn("my_name", A) I get the error LoadError: UndefVarError: i not defined.
I find it quite hard to wrap my head around Julia metaprogramming, and while I'd be very happy to avoid using it, I think for this problem it's unavoidable.
Can you please help me understand where my mistake is?
Thanks
Macros in this case are not only avoidable, but actually inapplicable, unless A is a literal known at compile time.
I can provide a solution using eval and some closures:
julia> function make_fn2(A)
           Af = [@eval(x -> $(Meta.parse(expr))) for expr in A]
           function (f, x)
               for i in eachindex(A, f)
                   f[i] = Af[i](x)
               end
               return f
           end
       end
make_fn2 (generic function with 1 method)
julia> fn_name = make_fn2(A)
#46 (generic function with 1 method)
julia> fn_name(zeros(4), [1,2,3,4])
4-element Array{Float64,1}:
0.19999999999999996
-0.06594092302655707
49.82984401122239
-1.0
with the restrictions that
eval will evaluate the expressions in the global scope of the module where this is defined (so it is potentially a different scope than the scope of the calling function), and
the newly created function will work only if you first return to global scope (i.e. it will not work if you try to run it within the function in which you created it).
But I'd really recommend thinking about a better input format than strings.
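For instance, here is a minimal sketch of such an alternative (my own illustration, not part of the original answer): if A is built from anonymous functions rather than strings, no parsing or eval is needed, and the global-scope restrictions above disappear.
# Hypothetical reformulation: A2 holds callables instead of strings.
A2 = [x -> x[1] - 0.8*x[1],
      x -> exp(x[2]) - ((1.0 - 0.025)*exp(x[2]) + exp(x[1])*exp(x[2])^0.30 - exp(x[4])),
      x -> x[3] - x[4]]

function make_fn3(A)
    # Same shape as make_fn2, but the elements are already functions.
    function (f, x)
        for i in eachindex(A, f)
            f[i] = A[i](x)
        end
        return f
    end
end

fn = make_fn3(A2)
fn(zeros(3), [1, 2, 3, 4])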

How to use new Initialization Schemes in DifferentialEquations.jl?

I am trying to use the new Initialization Schemes option of DifferentialEquations.jl
https://diffeq.sciml.ai/dev/solvers/dae_solve/#Initialization-Schemes-1
But I do not know how to access the new methods.
using DifferentialEquations
import DifferentialEquations: ShampineCollocationInit
using Sundials
using Plots
function f(out,du,u,p,t)
    out[1] = - 0.04u[1] + 1e4*u[2]*u[3] - du[1]
    out[2] = + 0.04u[1] - 3e7*u[2]^2 - 1e4*u[2]*u[3] - du[2]
    out[3] = u[1] + u[2] + u[3] - 1.0
end
u₀ = [1.0, 0, 0]
du₀ = [-0.04, 0.04, 0.0]
tspan = (0.0,100000.0)
differential_vars = [true,true,false]
prob = DAEProblem(f,du₀,u₀,tspan,differential_vars=differential_vars)
sol = solve(prob,IDA(initializealg = ShampineCollocationInit))
plot(sol, xscale=:log10, tspan=(1e-6, 1e5), layout=(3,1))
The previous example returns the following error:
WARNING: could not import DifferentialEquations.ShampineCollocationInit into Main
LoadError: UndefVarError: ShampineCollocationInit not defined
Stacktrace:
[1] top-level scope at /home/Documents/test.jl:19
in expression starting at /home/Documents/test.jl:19
What am I doing wrong?
Those initialization schemes only apply to the OrdinaryDiffEq algorithms; the initialization of IDA (Sundials.jl) is described in the Sundials.jl portion of the documentation. This may change in the near future (with a deprecation warning, of course) as things get more and more homogenized.
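For contrast, a sketch of what the OrdinaryDiffEq path can look like (my illustration based on the linked docs, not code from the original answer; the DABDF2 solver and the initializealg keyword of solve are assumptions to verify against your version):
using OrdinaryDiffEq   # the initialization schemes ship with the native Julia solvers
# Hypothetical sketch: the same DAEProblem as above, solved with a native DAE
# solver, with the initialization scheme chosen explicitly via solve's
# initializealg keyword.
sol = solve(prob, DABDF2(), initializealg = ShampineCollocationInit())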

Using invalid character "²" for squared. Extend Julia syntax with custom operators

In my equations there are many expressions with a^2 and so on. I would like to map "²" to ^2, to obtain something like this:
julia> a² == a^2
true
The above is, however, not legal code in Julia. Any idea how I could implement it?
Here is a sample macro @hoo that does what you requested in a simplified scenario (since the code is long, I will start with usage).
julia> x=5
5
julia> @hoo 3x² + 4x³
575
julia> @macroexpand @hoo 2x³+3x²
:(2 * Main.x ^ 3 + 3 * Main.x ^ 2)
Now, let us see the macro code:
const charsdict = Dict(Symbol.(split("¹²³⁴⁵⁶⁷⁸⁹","")) .=> 1:9)
const charsre = Regex("[$(join(String.(keys(charsdict))))]")

function proc_expr(e::Expr)
    for i = 1:length(e.args)
        el = e.args[i]
        typeof(el) == Expr && proc_expr(el)
        if typeof(el) == Symbol
            mm = match(charsre, String(el))
            if mm != nothing
                a1 = Symbol(String(el)[1:(mm.offset-1)])
                a2 = charsdict[Symbol(mm.match)]
                e.args[i] = :($a1^$a2)
            end
        end
    end
end

macro hoo(expr)
    typeof(expr) != Expr && return expr
    proc_expr(expr)
    expr
end
Of course it would be quite easy to expand this concept into "pure-math" library for Julia.
I don't think that there is any reasonable way of doing this.
When parsing your input, Julia makes no real distinction between the Unicode character ² and any other character you might use in a variable name. Attempting to make it into an operator would be similar to trying to make the suffix square into an operator:
julia> asquare == a^2
The a and the ² are not parsed as two separate things, just like the a and the square in asquare would not be.
a^2, on the other hand, is parsed as three separate things. This is because ^ is not a valid character for a variable name, so it is parsed as an operator instead.
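You can see this directly in the REPL: a name containing ² is just an ordinary identifier with no built-in relationship to squaring (a quick illustration of the point above, not part of the original answer):
julia> a = 3
3
julia> a² = 10   # ² is simply another character in the variable name
10
julia> a² == a^2
false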

How do I evaluate the function in only one of its variables in Scilab

How do I evaluate a function in only one of its variables? That is, I hope to obtain another function after partially evaluating the original one. I have the following piece of code.
deff ('[F] = fun (x, y)', 'F = x ^ 2-3 * y ^ 2 + x * y ^ 3');
fun (4, y)
I hope to get 16 - 3y^2 + 4y^3
If what you want to do is to write x = f(4,y), and later just do x(2) to get 36, that is called partial application:
Intuitively, partial function application says "if you fix the first arguments of the function, you get a function of the remaining arguments".
This is a very useful feature, very common in functional programming languages such as Haskell, and even JS and Python are now able to do it. It is also possible to do this in MATLAB and GNU Octave using anonymous functions (see this answer). In Scilab, however, this feature is not available.
Workaround
Nonetheless, Scilab itself uses a workaround to carry a function along with some of its arguments without fully evaluating it. You can see this being used in ode(), fsolve(), optim(), and others:
Create a list containing the function and the arguments for partial evaluation: list(f,arg1,arg2,...,argn)
Use another function to evaluate such a list together with the last argument: evalPartList(list(...),last_arg)
The implementation of evalPartList() can be something like this:
function y = evalPartList(fList, last_arg)
    //fList: list in which the first element is a function
    //last_arg: last argument to be applied to the function
    func = fList(1); //extract the function from the list
    y = func(fList(2:$), last_arg); //each element of the list, from the second
                                    //to the last, becomes an argument
endfunction
You can test it on Scilab's console:
--> deff ('[F] = fun (x, y)', 'F = x ^ 2-3 * y ^ 2 + x * y ^ 3');
--> x = list(fun,4)
x =
x(1)
[F]= x(1)(x,y)
x(2)
4.
--> evalPartList(x,2)
ans =
36.
This is a very simple implementation of evalPartList(), and you have to be careful not to pass too many or too few arguments.
In the way you're asking, you can't.
What you're looking for is called symbolic (or formal) computational mathematics, because you don't pass actual numerical values to functions.
Scilab is numerical software, so it can't do such a thing. But there is a toolbox, scimax (installation guide), that relies on the free computer algebra system wxMaxima.
BUT
An ugly, stupid, but still sort of working solution is to take advantage of strings:
function F = fun(x, y) // Here we define a function that may return a constant or a string depending on the input
    fmt = '%10.3E'
    if (type(x)==type('')) & (type(y)==type(0)) // x is a string, so F is a string
        ys = msprintf(fmt,y)
        F = x+'^2 - 3*'+ys+'^2 + '+x+'*'+ys+'^3'
    end
    if (type(y)==type('')) & (type(x)==type(0)) // y is a string, so F is a string
        xs = msprintf(fmt,x)
        F = xs+'^2 - 3*'+y+'^2 + '+xs+'*'+y+'^3'
    end
    if (type(y)==type('')) & (type(x)==type('')) // x and y are strings, so F is a string
        F = x+'^2 - 3*'+y+'^2 + '+x+'*'+y+'^3'
    end
    if (type(y)==type(0)) & (type(x)==type(0)) // x and y are constants, so F is a constant
        F = x^2 - 3*y^2 + x*y^3
    end
endfunction
// Then we can use this 'symbolic' function
deff('F2 = fun2(y)',' F2 = '+fun(4,'y'))
F2=fun2(2) // does compute fun(4,2)
disp(F2)

Error on serialize lambda function with closured data

I use code like this:
p = _belineInterpolateGrid( map( p -> sin(norm(p)), grid ), grid )
f = open("/data/test.function", "w")
serialize( f, p )
close(f)
p0 = deserialize( open("/data/test.function", "r") )
where _belineInterpolateGrid is
function _belineInterpolateGrid(PP, Grid)
    ...
    P = Array(Function, N-1, M-1);
    ...
    poly = (x,y) -> begin
        i_x, i_y = i(x, y);
        return P[i_x, i_y](x, y);
    end
    return poly
end
And now, since some version of v0.4, I get an error:
ERROR: MethodError: `convert` has no method matching
convert(::Type{LambdaStaticData}, ::Array{Any,1})
This may have arisen from a call to the constructor LambdaStaticData(...),
since type constructors fall back to convert methods.
Closest candidates are:
call{T}(::Type{T}, ::Any)
convert{T}(::Type{T}, ::T)
...
in deserialize at serialize.jl:435
Why does this happen? Is it a bug, and how can I fix it?
This looks like a bug in Julia to me, and it looks like it has been fixed as of v0.4.6. Try upgrading to that version or newer and see if the problem persists.
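If you are not sure which version you are running, VERSION tells you (the value shown here is just a placeholder):
julia> VERSION
v"0.4.5"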
You're returning a lambda, that's why. I can't tell whether it's a bug (you can serialize a lambda but you can't deserialize it?).
You can avoid this by defining your "get interpolation at x,y" as a type:
import Base: getindex

type MyPoly
    thepoly
end

function getindex(p::MyPoly, x::Int, y::Int)
    p.thepoly[x+5*y]
end

function getindex(p::MyPoly, I...)
    p.thepoly[I...]
end

function call(p::MyPoly, v)
    # collect helps keep eltype(ans) == Int
    powered_v = map(i -> v^i, collect(take(countfrom(), size(p.thepoly, 1))))
    powered_v .* p.thepoly
end
p=MyPoly(collect(1:10))
println(p[1])
f = open("serializedpoly", "w")
serialize( f, p)
close(f)
p0 = deserialize( open("serializedpoly", "r"))
println(p[1,1])
v=call(p, 4) #evaluate poly on 4
EDIT: added extension for call

Resources