How come Julia won't run this code?

I am in the process of trying to learn a little about Julia by translating an old example program that solves the time-dependent Schrödinger equation. Here is what I have so far:
require("setparams.jl")
require("V.jl")
require("InitialRIM.jl")
#require("expevolve.jl")
function doit()
Nspace, R, Im, x0, width, k0, xmax, xmin, V0, a, dx, dx2, n, dt = setparams()
R, Im = initialRIm(width,n,k0,dt,xmin)
ProbDen = zeros(Nspace)
ProbDen = R.*R + Im.*Im
plot(ProbDen)
#Imold = Im;
t=0.0
#t, R =evolve!(R,Im,t,V0,width,a,dx,dx2,dt,xmin,n)
println("Done")
end
After requiring the above code, I then do using Winston and attempt to run it by typing doit(). Nothing appears.
Can someone please let me know what I am doing wrong? I can provide the innards of setparams() and initialRIm() if needed, but I thought I'd first ask whether my expectations about what should happen are at fault. Note that if I run setparams() and initialRIm() in a terminal session and then do plot(ProbDen), the correct graph appears.
Thanks for your help.
Update:
I have now restarted Julia, done using Winston, and then called doit(), to wit:
julia> using Winston
julia> require("driveSch.jl")
julia> doit()
ERROR: dx not defined
in initialRIm at /Users/comerduncan/juliaexamples/TDSch/InitialRIM.jl:8
in doit at /Users/comerduncan/juliaexamples/TDSch/driveSch.jl:11
However, the call to setparams() sets dx along with everything else, as I can see when I run setparams() interactively. So I don't understand what the problem is...

It seems like you use dx in initialRIm, but dx is not one of the arguments you pass to it. If you access a variable that is neither a parameter nor assigned inside a Julia function, Julia looks for a variable with the same name in the surrounding scopes. When you run
Nspace, R, Im, x0, width, k0, xmax, xmin, V0, a, dx, dx2, n, dt = setparams()
in the global scope, you create a global variable dx that initialRIm can access. When you wrap the calls in a function, you create a local variable dx that cannot be accessed from initialRIm.
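To make the difference concrete, here is a minimal, self-contained sketch of the same scoping behavior, meant to be run line by line in the REPL (the names uses_dx and wrapped are hypothetical, not from your files):

uses_dx() = dx + 1            # refers to a dx it neither receives nor defines

function wrapped()
    dx = 2.0                  # local to wrapped(), invisible inside uses_dx()
    uses_dx()                 # ERROR: dx not defined
end
wrapped()                     # fails, just like doit()

dx = 2.0                      # a global dx, by contrast, is visible
uses_dx()                     # returns 3.0

The usual fix is to pass the value explicitly, e.g. give initialRIm a dx argument and call it as initialRIm(width, n, k0, dt, xmin, dx), so it no longer depends on a global.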

Related

Julia CUDA: UndefVarError: parameters not defined

I have a program for doing Fourier series and I wanted to switch to CuArrays to make it faster. The code is as follows (extract):
#Arrays I want to use
coord = CuArray{ComplexF64,1}(complex.(a[:,1],a[:,2]))
t=CuArray{Float64,1}(-L:(2L/(N-1)):L)
#Array of indexes in the form [0,1,-1,2,-2,...]
n=[((-1)^i)div(i,2) for i in 1:grado]
#Array of functions I need for calculations
base= [x -> exp(π * im * i * x / L) / L for i in n]
base[i](1.) #This line is OK
base[i](-1:.1:1) #This line is OK
base[i].(t) #This line gives error!
base[i].(CuArray{Float64,1}(t)) #This line gives error!
And the error is:
GPU broadcast resulted in non-concrete element type Any.
This probably means that the function you are broadcasting contains an error or type instability.
If I change it like this
base= [(x::Float64) -> (exp(π * im * i * x / L) / L)::ComplexF64 for i in n]
the same lines still give error, but the error now is:
UndefVarError: parameters not defined
Any idea how I could fix this?
Thank you in advance!
Package information:
(#v1.6) pkg> st CUDA
Status `C:\Users\marce\.julia\environments\v1.6\Project.toml`
[052768ef] CUDA v2.6.2
P.S.: This other function has the same problem:
function integra(inizio, fine, arr)
    N = size(arr, 1)
    h = (fine - inizio) / N
    integrale = sum(arr)
    integrale -= (first(arr) + last(arr)) / 2
    integrale *= h
end
L=2
integra(-L,L,coord)
The first and easier problem is that you should take care to declare global variables constant so that the compiler can assume a fixed type: const L = 2. A mere L = 2 allows you to later do something like L = SomeOtherType(), and if that type can be anything, then so must be the return type of your functions. On the CPU that's only a performance hit, but it's a no-no for the GPU. If you actually want L to vary in value, pass it in as an argument so the compiler can still infer types within a function.
Your ::ComplexF64 assertion did actually force a concrete return type, though the middle of the function is still type unstable (check with @code_warntype). The second problem you ran into after that patch was probably caused by this recently patched conflict between ExprTools.jl and LLVM.jl. It seems you just need to update the packages, or perhaps reinstall them.
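As a minimal sketch of the const point (grado here is a placeholder value, not your actual data): with const L the closures capture values of known type, so broadcasting them can infer a concrete ComplexF64 element type.

const L = 2.0                          # const: the compiler can assume L is always a Float64
grado = 5                              # placeholder size for this sketch

n = [(-1)^i * div(i, 2) for i in 1:grado]            # [0, 1, -1, 2, -2]
base = [x -> exp(π * im * k * x / L) / L for k in n]

base[1](1.0)                           # now infers a concrete ComplexF64 on the CPU
# With CUDA.jl loaded (using CUDA), the GPU broadcast should compile as well, e.g.:
# t = CuArray{Float64,1}(-L:(2L/99):L)
# base[1].(t)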

plotting a python function that uses an array

In sagemath, I would like to plot the following function foo (Coef is an array that is big enough) :
def foo(x):
    x_approx = floor(x*4)
    return Coef[x_approx]
I wanted to use the command plot(foo(x), (x,0,0.1)).
But I got the error unable to convert floor(4*x) to an integer.
Whereas when foo does not use an array, it works:
def foo(x):
    x_approx = floor(x*4)
    return 4*x_approx
Use plot(foo, (x, 0, 0.1)) instead (that is, replace foo(x) with foo). If you use foo(x), then Sage tries to evaluate foo(x) first, in which case it treats x as a symbolic variable and can't turn it into a number to plot. If you use foo, then it knows to treat it as a plottable/callable function, and it does the right thing.
Edit: I think the issue is that for plotting, Sage requires a certain type of function, a symbolic function, and using a Python construct like Coef[...] doesn't fit into that framework.

using DifferentialEquations: u is not updating

I believe there is a bug in this code. For the sake of brevity, though, I will just write the function that defines the ODE:
function clones(du,u,p,t)
    (Nmut, f) = p
    # average fitness
    phi = sum(f.*u)
    # constructing mutation kernel
    eps = 0.01
    Q = Qmatrix(Nmut, eps)
    # defining differential equations
    Nclones = 2^Nmut;
    du = zeros(Nclones)
    ufQ = transpose(transpose(u.*f)*Q)
    du = ufQ .- phi*u
end
If the whole code is needed I can provide it, but it is messy and I'm not sure how to create a minimal example. I tried this with Nmut = 2 so I can compare to a hard-coded version. The output of du at the first time steps is identical. But this version never seems to update u; it stays at the prescribed u0.
Does anyone have an idea why this might be the case? I can also provide the full script, but wanted to avoid that if someone could just see why u would not update.
EDIT:
maxdim = 4;
for i in 1:maxdim
    du[i] = 0.0;
    for j in 1:maxdim
        du[i] += u[j].*w[j].*Q[j,i]
    end
    du[i] -= u[i].*phi
end
The du is updated correctly if we use this version. Why would that be the case?
You are using the in-place form. With this, you have to update the values of du. In your script you are using
du = ufQ .- phi*u
That rebinds the name du to a new array, but does not mutate the values in the original du array. A quick fix is to use broadcast assignment:
du .= ufQ .- phi*u
Notice that this is .=.
To understand what this means in a more example-based way, consider the following. We have an array:
a = [1,2,3,4]
Now we point a new variable to that same array
a2 = a
When we change a value of a we see that reflected in a2 since they point to the same memory:
a[1] = 5
println(a2) # [5,2,3,4]
But if we now replace a, nothing happens to a2, since they no longer refer to the same array:
a = [1,2,3,4]
println(a2) # [5,2,3,4]
Packages like DifferentialEquations.jl utilize mutating forms so that users can avoid array allocations by repeatedly changing the values in cached arrays. As a consequence, this means you should update the values of du rather than replace the array it points to.
If you feel more comfortable not using mutation, you can use the out-of-place function syntax f(u,p,t), although there will be a performance hit for doing so if the state variables are (non-static) arrays.
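For reference, here is a minimal sketch of the corrected right-hand side, reusing the question's Qmatrix helper and (Nmut, f) parameter tuple (renamed clones! only to follow the mutating-function naming convention); the only substantive change is the broadcast assignment into du:

function clones!(du, u, p, t)
    (Nmut, f) = p
    phi = sum(f .* u)                      # average fitness
    Q = Qmatrix(Nmut, 0.01)                # mutation kernel, as in the question
    ufQ = transpose(transpose(u .* f) * Q)
    du .= ufQ .- phi .* u                  # .= writes into the caller's du in place
    return nothing
end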

Scilab pointer function

I am working on converting the code included here from MATLAB to Scilab.
The @ symbol is used as a memory pointer in MATLAB, pointing to the location of the function tst_callback.
Scilab does not like this, however. Is there a Scilab equivalent for the @?
function test
    sysIDgui(@tst_callback)
end

function tst_callback()
    disp("Hello Ron")
endfunction
What you are trying to do is pass a function as an argument to another function. In Scilab, you don't need any special syntax.
Try it yourself. Define these two functions:
function y = applyFunction(f, x)
    y = f(x);
endfunction

function y = double(x)
    y = x * 2;
endfunction
Then test it on the console:
--> applyFunction(double,7)
ans =
14.
Note: the main usage of @ in MATLAB is to create anonymous functions (see the documentation), i.e. functions that are not defined in a separate file. As for Scilab, there is no way to create anonymous functions.

Function as an argument in Erlang

I'm trying to do something like this:
-module(count).
-export([main/0]).
sum(X, Sum) -> X + Sum.

main() ->
    lists:foldl(sum, 0, [1,2,3,4,5]).
but I get a warning and the code fails:
function sum/2 is unused
How can I fix the code?
NB: this is just a sample that illustrates the problem, so there is no reason to propose a solution that uses a fun-expression.
Erlang has slightly more explicit syntax for that:
-module(count).
-export([main/0]).
sum(X, Sum) -> X + Sum.

main() ->
    lists:foldl(fun sum/2, 0, [1,2,3,4,5]).
See also "Learn you some Erlang":
If function names are written without a parameter list then those names are interpreted as atoms, and atoms can not be functions, so the call fails.
...
This is why a new notation has to be added to the language in order to let you pass functions from outside a module. This is what fun Module:Function/Arity is: it tells the VM to use that specific function, and then bind it to a variable.
