In Julia 1.4.2, I have generated a statement dynamically as a string. What command should I use to execute it?
Example:
import Pkg;
Pkg.add("DataFrames");
using DataFrames
i=1;
e="df_original$i = DataFrame(a = Int[], b = String[])"
#i.e., the statement is "df_original1 = DataFrame(a = Int[], b = String[])"
Julia_exec(e)
What is the equivalent of Julia_exec in Julia that can execute the above statement?
Thanks
Use eval combined with Meta.parse:
eval(Meta.parse(e))
For your example:
julia> eval(Meta.parse(e));
julia> df_original1
0×2 DataFrame
More information can be found in the Julia metaprogramming manual: https://docs.julialang.org/en/v1/manual/metaprogramming/
However, most tasks in Julia can be achieved without metaprogramming, and I strongly encourage you not to use it in normal workflows.
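For instance, instead of generating numbered variable names like df_original1 as strings and eval-ing them, a dictionary keyed by index achieves the same thing without metaprogramming (a minimal sketch):
using DataFrames

# Store the "numbered" data frames in a Dict instead of eval-ing variable names.
dfs = Dict{Int,DataFrame}()
for i in 1:3
    dfs[i] = DataFrame(a = Int[], b = String[])
end
dfs[1]  # the same 0×2 DataFrame, no eval needed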
I have a result variable holding a JSON string that is very long. Is there an option in the Julia REPL to limit the output when a variable is this long? A DataFrame is only partially printed by default; I would like long variables like this string to be printed the same way.
You can overwrite the display method for AbstractStrings:
import Main.display
display(x::AbstractString) =
    show(length(x) <= 50 ? x : SubString(x, 1, 50) * "…")
Let us test it:
julia> str = join(rand('a':'z', 200))
"wcbifwzglgqyenrcdgdxagohlwdoxrrumoaltklkjauptwzrmi…"
I have an ODE that I need to solve over a wide range of parameters.
Previously I used MATLAB's parfor to divide the parameter ranges between multiple threads. I am new to Julia and now need to do the same thing in Julia. Here is the code I am using:
using DifferentialEquations, SharedArrays, Distributed, Plots
function SingleBubble(du,u,p,t)
    du[1] = @. u[2]
    du[2] = @. ((-0.5*u[2]^2)*(3-u[2]/(p[4]))+(1+(1-3*p[7])*u[2]/p[4])*((p[6]-p[5])/p[2]+2*p[1]/(p[2]*p[8]))*(p[8]/u[1])^(3*p[7])-2*p[1]/(p[2]*u[1])-4*p[3]*u[2]/(p[2]*u[1])-(1+u[2]/p[4])*(p[6]-p[5]+p[10]*sin(2*pi*p[9]*t))/p[2]-p[10]*u[1]*cos(2*pi*p[9]*t)*2*pi*p[9]/(p[2]*p[4]))/((1-u[2]/p[4])*u[1]+4*p[3]/(p[2]*p[4]))
end
R0=2e-6
f=2e6
u0=[R0,0]
LN=1000
RS = SharedArray(zeros(LN))
P = SharedArray(zeros(LN))
bif = SharedArray(zeros(LN,6))
@distributed for i = 1:LN
    ps = 1e3 + i*1e3
    tspan = (0, 60/f)
    p = [0.0725,998,1e-3,1481,0,1.01e5,7/5,R0,f,ps]
    prob = ODEProblem(SingleBubble,u0,tspan,p)
    sol = solve(prob,Tsit5(),alg_hints=:stiff,saveat=0.01/f,reltol=1e-8,abstol=1e-8)
    RS[i] = maximum((sol[1,5000:6000])/R0)
    P[i] = ps
    for j = 1:6
        nn = 5500 + (j-1)*100
        bif[i,j] = sol[1,nn]/R0
    end
end
plotly()
scatter(P/1e3,bif,shape=:circle,ms=0.5,label="")#,ma=0.6,mc=:black,mz=1,label="")
When using one worker, the for loop is basically executed as a normal single threaded loop and it works fine. However, when I am using addprocs(n) to add n more workers, nothing gets written into the SharedArrays RS, P and bif. I appreciate any guidance anyone may provide.
These changes are required to make your program work with multiple workers and display the results you need:
Whatever packages and functions are used inside the @distributed loop must be made available in all the processes using @everywhere, as explained here. In your case that means the DifferentialEquations and SharedArrays packages as well as the SingleBubble() function.
You need to draw the plot only after all the workers have finished their tasks. For this, use @sync along with @distributed.
With these changes, your code would look like:
using Distributed, Plots
@everywhere using DifferentialEquations, SharedArrays
@everywhere function SingleBubble(du,u,p,t)
    du[1] = @. u[2]
    du[2] = @. ((-0.5*u[2]^2)*(3-u[2]/(p[4]))+(1+(1-3*p[7])*u[2]/p[4])*((p[6]-p[5])/p[2]+2*p[1]/(p[2]*p[8]))*(p[8]/u[1])^(3*p[7])-2*p[1]/(p[2]*u[1])-4*p[3]*u[2]/(p[2]*u[1])-(1+u[2]/p[4])*(p[6]-p[5]+p[10]*sin(2*pi*p[9]*t))/p[2]-p[10]*u[1]*cos(2*pi*p[9]*t)*2*pi*p[9]/(p[2]*p[4]))/((1-u[2]/p[4])*u[1]+4*p[3]/(p[2]*p[4]))
end
R0=2e-6
f=2e6
u0=[R0,0]
LN=1000
RS = SharedArray(zeros(LN))
P = SharedArray(zeros(LN))
bif = SharedArray(zeros(LN,6))
@sync @distributed for i = 1:LN
    ps = 1e3 + i*1e3
    tspan = (0, 60/f)
    p = [0.0725,998,1e-3,1481,0,1.01e5,7/5,R0,f,ps]
    prob = ODEProblem(SingleBubble,u0,tspan,p)
    sol = solve(prob,Tsit5(),alg_hints=:stiff,saveat=0.01/f,reltol=1e-8,abstol=1e-8)
    RS[i] = maximum((sol[1,5000:6000])/R0)
    P[i] = ps
    for j = 1:6
        nn = 5500 + (j-1)*100
        bif[i,j] = sol[1,nn]/R0
    end
end
plotly()
scatter(P/1e3,bif,shape=:circle,ms=0.5,label="")#,ma=0.6,mc=:black,mz=1,label="")
Output using multiple workers: (scatter plot omitted)
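As an aside, when each iteration is independent and you want the results collected back on the master process, pmap is another common Distributed idiom. A minimal sketch, assuming the same @everywhere definitions and globals (R0, f, u0, LN) as above; the result handling is simplified:
# Sketch: pmap ships each index to a worker and returns the results,
# so no SharedArrays are needed here.
results = pmap(1:LN) do i
    ps = 1e3 + i*1e3
    p = [0.0725,998,1e-3,1481,0,1.01e5,7/5,R0,f,ps]
    prob = ODEProblem(SingleBubble, u0, (0, 60/f), p)
    sol = solve(prob, Tsit5(), saveat=0.01/f, reltol=1e-8, abstol=1e-8)
    (ps, maximum(sol[1,5000:6000])/R0)   # (pressure, normalized peak radius)
end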
I would like to retrieve the path of the currently running Julia interpreter from Julia. In Python, this can be achieved with sys.executable.
Base.julia_cmd() is probably what you need. It returns the full command line that was used to invoke the current julia process, with the default options spelled out. Base.julia_exename() returns the name of the executable.
julia> Base.julia_cmd()
`/Users/aviks/dev/julia/julia5/usr/bin/julia -Cnative -J/usr/lib/julia/sys.dylib --compile=yes --depwarn=yes`
julia> Base.julia_exename()
"julia"
If you just want the location of the julia executable, try one of these:
julia> julia_bin_exe = joinpath(Base.Sys.BINDIR,Base.julia_exename())
"/home/mkitti/src/julia/usr/bin/julia"
julia> Base.julia_cmd()
`/home/mkitti/src/julia/usr/bin/julia -Cnative -J/home/mkitti/src/julia/usr/lib/julia/sys.so -g1`
julia> typeof(Base.julia_cmd())
Cmd
julia> Base.julia_cmd()[1]
"/home/mkitti/src/julia/usr/bin/julia"
julia> julia_bin_exe == Base.julia_cmd()[1]
true
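Either form can then be used to launch a child process running the same Julia; for example (a sketch):
# Re-run the current Julia binary as a subprocess.
julia_bin = joinpath(Base.Sys.BINDIR, Base.julia_exename())
run(`$julia_bin -e 'println("hello from a child julia")'`)

# Or reuse the full command line, including sysimage and flags.
run(`$(Base.julia_cmd()) -e 'println("hello again")'`)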
Is there a command that will prevent Julia from printing out all of the values in an array when I create it? This can be annoying when creating large arrays in the REPL. For example when I run this:
julia> bigArray = Array[ [1:1000000], [1:1000000] ]
I would rather it not print all of the values to the console. Any help is appreciated!
Add a semicolon:
julia> bigArray = Array[ [1:1000000], [1:1000000] ];
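The semicolon only suppresses the REPL's display of the result; the assignment still happens. If you want a compact description instead of silence, summary is handy (a small example in modern syntax; the printed form is from recent Julia versions):
julia> bigArray = [collect(1:1_000_000), collect(1:1_000_000)];

julia> summary(bigArray)
"2-element Vector{Vector{Int64}}"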
I need an efficient implementation of the Cartesian product for a variable number of arrays.
I have tried the product function from Iterators.jl, but the performance was lacking.
I am a Python hacker and have used this function from sklearn with good performance results.
I have tried to write a Julia version of that function, but I am not able to reproduce the Python function's results.
My code is:
function my_repeat(a, n)
    # mimics numpy.repeat
    m = size(a, 1)
    out = Array(eltype(a), n * m)
    out[1:n] = a[1]
    for i=2:m
        out[(i-1)*n+1:i*n] = a[i]
    end
    return out
end
function cartesian(arrs; out=None)
    dtype = eltype(arrs[1])
    n = prod([size(i, 1) for i in arrs])
    if is(out, None)
        out = Array(dtype, n, length(arrs))
    end
    m = int(n / size(arrs[1], 1))
    out[:, 1] = my_repeat(arrs[1], m)
    if length(arrs[2:]) > 0
        cartesian(arrs[2:], out=out[1:m, 2:])
        for j = 1:size(arrs[1], 1)-1
            out[(j*m + 1):(j+1)*m, 2:] = out[1:m, 2:]
        end
    end
    return out
end
I test it with the following:
aa = ([1, 2, 3], [4, 5], [6, 7])
cartesian(aa)
The return value is:
12x3 Array{Float64,2}:
1.0 9.88131e-324 2.13149e-314
1.0 2.76235e-318 2.13149e-314
1.0 9.88131e-324 2.13676e-314
1.0 9.88131e-324 2.13676e-314
2.0 9.88131e-324 2.13149e-314
2.0 2.76235e-318 2.13149e-314
2.0 9.88131e-324 2.13676e-314
2.0 9.88131e-324 2.13676e-314
3.0 9.88131e-324 2.13149e-314
3.0 2.76235e-318 2.13149e-314
3.0 9.88131e-324 2.13676e-314
3.0 9.88131e-324 2.13676e-314
I believe the problem is that when I use the line cartesian(arrs[2:], out=out[1:m, 2:]), the keyword argument out is not updated in place in the recursive calls.
As can be seen, I have done a very naive translation of the Python version of this function (see the link above). It may well be that there are internal language differences that make a naive translation impossible, but I don't think so, because of this quote from the functions section of the Julia documentation:
Julia function arguments follow a convention sometimes called “pass-by-sharing”, which means that values are not copied when they are passed to functions. Function arguments themselves act as new variable bindings (new locations that can refer to values), but the values they refer to are identical to the passed values. Modifications to mutable values (such as Arrays) made within a function will be visible to the caller. This is the same behavior found in Scheme, most Lisps, Python, Ruby and Perl, among other dynamic languages.
How can I make this (or an equivalent) function work in Julia?
There's a repeat function in Base.
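In current Julia, repeat's inner keyword does what my_repeat does (a sketch using today's keyword syntax; very old versions wrote inner=[2]):
julia> repeat([1, 2, 3], inner = 2)   # each element repeated in place, like numpy.repeat
6-element Vector{Int64}:
 1
 1
 2
 2
 3
 3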
A shorter and faster variant might use the @forcartesian macro in the Cartesian package:
using Cartesian
function cartprod(arrs, out=Array(eltype(arrs[1]), prod([length(a) for a in arrs]), length(arrs)))
    sz = Int[length(a) for a in arrs]
    narrs = length(arrs)
    @forcartesian I sz begin
        k = sub2ind(sz, I)
        for i = 1:narrs
            out[k,i] = arrs[i][I[i]]
        end
    end
    out
end
The order of rows is different from your solution, but perhaps that doesn't matter?
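For readers on modern Julia: Cartesian's @forcartesian dates from that early era, and Base's Iterators.product now handles this without any package. A sketch (cartprod_modern is an illustrative name, and the row order again differs):
# Build the same n-by-k matrix from Base's Iterators.product.
function cartprod_modern(arrs...)
    tups = vec(collect(Iterators.product(arrs...)))   # all combinations as tuples
    out = Array{promote_type(map(eltype, arrs)...)}(undef, length(tups), length(arrs))
    for (k, t) in enumerate(tups), i in 1:length(arrs)
        out[k, i] = t[i]
    end
    return out
end

cartprod_modern([1, 2, 3], [4, 5], [6, 7])   # 12×3 Matrix{Int64}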
I figured it out.
It is not an issue of Julia failing to update function arguments in place, but a problem with the slice operator a[ind], which makes a copy of the data instead of indexing the array by reference. This part of the multi-dimensional array documentation held the answer:
SubArray is a specialization of AbstractArray that performs indexing by reference rather than by copying. A SubArray is created with the sub function, which is called the same way as getindex (with an array and a series of index arguments). The result of sub looks the same as the result of getindex, except the data is left in place. sub stores the input index vectors in a SubArray object, which can later be used to index the original array indirectly.
The problem was fixed by changing this line from:
cartesian(arrs[2:], out=out[1:m, 2:])
to the following:
out_end = size(out, 2)
cartesian(arrs[2:], out=sub(out, 1:m, 2:out_end))
This is an old question, but the answer has changed as Julia has progressed.
The basic problem is that slices like a[1:3,:] make a copy. If you update that copy in a function, it has no effect on a itself.
The modern answer is to use @view a[1:3,:] to get a reference to part of the underlying array. Updates to this view will be reflected in the underlying array.
You can force an entire block of code to use view semantics with the @views macro.
See Inplace updating of function arguments for more discussion.
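A tiny demonstration of the copy/view difference (a sketch):
a = collect(1:6)

b = a[1:3]         # slicing makes a copy
b .= 0             # a is unchanged

v = @view a[1:3]   # a view references a's memory
v .= 0             # a[1:3] is now zeros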