Does SymPy.jl work for parallel computing?
@everywhere using SymPy
using LinearAlgebra
using SharedArrays, Distributed
julia> xx = Sym[]
julia> @syms x
julia> @sync @distributed for i = 1:3
xx = [xx, x]
end
Unhandled Task ERROR: On worker 3:
KeyError: key SymPy [24249f21-da20-56a4-8eb1-6a02cf4ae2e6] not found
https://github.com/JuliaPy/SymPy.jl/issues/483
Distributed is for multi-process (as opposed to multi-threaded) parallelism. Think of it as running multiple independent Julia sessions simultaneously. That means you need to declare your imports on all workers - the @everywhere macro is your friend here.
@everywhere using SymPy
will import the package on all workers, which then allows you to use its functionality across all of them in distributed workloads.
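A minimal sketch of the corrected pattern (creating the symbol on each worker avoids having to send Python-backed objects between processes; collecting Sym results on the master would additionally need a reducer such as @distributed vcat, or a SharedArray):
using Distributed
addprocs(2)               # add workers before loading packages

@everywhere using SymPy   # load SymPy on the master and on every worker

@sync @distributed for i = 1:3
    x = symbols("x")      # create the symbol locally on this worker
    println("worker $(myid()): ", x + i)
end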
Related
I am trying to implement parallel processing for nested loops. However, I am getting the following syntax error.
I am modifying the example from here (Parallel computing in Julia - running a simple for-loop on multiple cores).
This works
for N in 1:5:20, H in 1:5:20
println("The N of this iteration in $N, $H")
end
This gives a syntax error:
using Distributed
@distributed for N in 1:5:20, H in 1:5:20
println("The N of this iteration in $N, $H")
end
The @distributed macro supports iterating over only a single loop variable. Hence you could do:
@distributed for (N,H) in collect(Iterators.product(1:5:20,1:5:20))
println("The N of this iteration in $N, $H")
end
Another option is of course a nested loop, as sketched below; in that case only one of the loops should be @distributed.
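A sketch of that nested variant, distributing only the outer loop:
using Distributed

@sync @distributed for N in 1:5:20
    for H in 1:5:20    # inner loop runs serially on each worker
        println("The N of this iteration in $N, $H")
    end
end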
An alternative (for single-node, multi-core parallelization) would be multithreading. This is often easier and more efficient due to lower overhead and shared memory (note that there is no such thing as a Python-style GIL in Julia).
The corresponding Julia commands are Threads.@threads (placed in front of a for loop) and Threads.@spawn (for spawning tasks to be executed in parallel).
Edit: example added
Threads.@threads for (N,H) in collect(Iterators.product(1:5:20,1:5:20))
println("The N of this iteration in $N, $H")
end
2nd case:
f(x) = x^2
futures = [Threads.@spawn f(x) for x in 1:10]
fetch.(futures)
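Each Threads.@spawn call returns a Task immediately; fetch.(futures) then waits for every task to finish and collects the squared values in order.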
julia> using Random: rand; Random
ERROR: UndefVarError: Random not defined
julia> using Random; Random
Random
Is this working as intended? I have a dirty workaround, using Random: Random, rand, but that is quite ugly. Does a better one exist?
I think you might be looking for:
import Random
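A short sketch of the difference (Random.seed! is just an arbitrary example of a function from the module):
import Random                 # binds only the module name Random
Random.seed!(42)              # its functions are reached by qualification

using Random: Random, rand    # the workaround from the question: binds both
                              # the module name and the unqualified rand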
I am working with big data arrays, something of the order of 10^10 elements. I fill the entries of these arrays by calling a certain function. All entries are independent, so I would like to exploit this and fill the array simultaneously with a parallel for loop that runs over a set of indices and calls the function. I know about SharedArrays and this is how I usually implement such things, but because I am using huge arrays, I don't want to share them across all the workers. I want to keep the array only on the main worker, execute a parallel for loop, and transfer the result of each iteration back to the main worker to be stored in the array.
For example, this is what I normally do for small arrays.
H = SharedArray{ComplexF64}(n,n) #creates a shared array of size n*n
@sync @distributed for i in 1:q
H[i] = f(i) #f is a function defined on every worker
end
The problem with such construction is that if the size of the array n is too big, sharing it with all the workers is not very efficient. Is there a way of getting around this? I realize my question might be very naive and I apologize for this.
A SharedArray is not copied among workers! It simply allows the same memory area to be accessible by all processes. This is indeed very fast because there is no communication overhead between the workers. The master process can simply look at the memory area filled by workers and that's it.
The only disadvantage of SharedArrays is that all workers need to be on the same host. With DistributedArrays you only add unnecessary allocations due to inter-process communication, because each worker holds only its own part of the array.
Let us have a look (these are two equivalent implementations using shared and distributed arrays):
using Distributed
using BenchmarkTools
addprocs(4)
using SharedArrays
function f1()
    h = SharedArray{Float64}(10_000) # shared vector of length 10_000
    @sync @distributed for i in 1:10_000
        h[i] = sum(rand(1_000))
    end
    h
end
using DistributedArrays
#everywhere using DistributedArrays
function f2()
    d = dzeros(10_000) # distributed vector of length 10_000
    @sync @distributed for i in 1:10_000
        p = localpart(d)
        p[((i - 1) % 2500) + 1] = sum(rand(1_000)) # 2500 elements live on each of the 4 workers
    end
    d
end
Now the benchmarks:
julia> @btime f1();
7.151 ms (1032 allocations: 42.97 KiB)
julia> @btime sum(f1());
7.168 ms (1022 allocations: 42.81 KiB)
julia> @btime f2();
7.110 ms (1057 allocations: 42.14 KiB)
julia> @btime sum(f2());
7.405 ms (1407 allocations: 55.95 KiB)
Conclusion:
on a single machine the execution times are approximately equal, but collecting the data on the master node adds a significant number of memory allocations when DistributedArrays are used. Hence, on a single machine you always want to go for SharedArrays (and the API is simpler as well).
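To illustrate the point that the master simply reads the shared memory, a minimal sketch for the original question (f stands in for the asker's function; a small n keeps the example quick):
using Distributed, SharedArrays
addprocs(4)
@everywhere f(i) = i^2               # stand-in for the real function

n = 100
H = SharedArray{ComplexF64}(n, n)
@sync @distributed for i in 1:length(H)
    H[i] = f(i)                      # workers write directly into shared memory
end

A = sdata(H)                         # the master reads the same memory; the array
                                     # is never copied between processes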
I'm attempting to use Julia for some Linear Algebra. The documentation lists a number of functions suitable for working with matrices. Some of these work directly on running Julia e.g.
julia> ones(2,2)
2×2 Array{Float64,2}:
1.0 1.0
1.0 1.0
while others give an UndefVarError e.g.
julia> eye(2,2)
ERROR: UndefVarError: eye not defined
Stacktrace:
[1] top-level scope at none:0
Why am I only able to access some of the functions listed on the Linear Algebra section? https://michaelhatherly.github.io/julia-docs/en/latest/stdlib/linalg.html#Base.LinAlg.expm
I have also tried importing the LinearAlgebra package, but this doesn't make a difference:
julia> using LinearAlgebra
julia> eye(2,2)
ERROR: UndefVarError: eye not defined
Stacktrace:
[1] top-level scope at none:0
In fact, some functions do become available, e.g. dot, whilst others which according to the documentation are also part of the linear algebra library continue to give an error:
julia> dot
ERROR: UndefVarError: dot not defined
julia> using LinearAlgebra
julia> dot
dot (generic function with 12 methods)
julia> vecdot
ERROR: UndefVarError: vecdot not defined
Both of the above functions are listed under Base.LinAlg in the documentation.
The packages I currently have installed are:
(v1.0) pkg> status
Status `~/.julia/environments/v1.0/Project.toml`
[0c46a032] DifferentialEquations v5.3.1
[7073ff75] IJulia v1.13.0
[91a5bcdd] Plots v0.21.0
[37e2e46d] LinearAlgebra
[2f01184e] SparseArrays
This problem occurs for many other functions discussed on the linear algebra page:
julia> repmat([1, 2, 3], 2)
ERROR: UndefVarError: repmat not defined
Stacktrace:
[1] top-level scope at none:0
I have Julia v1.0.1 installed.
The documentation you linked to is not the official documentation, which is found at docs.julialang.org. The docs you linked to are an old snapshot hosted on a developer's personal website, which is why they don't line up with current Julia.
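For reference, the Julia 1.0 equivalents of the removed names used above (these renames are listed in the official 0.7/1.0 release notes):
using LinearAlgebra

Matrix{Float64}(I, 2, 2)      # replaces eye(2, 2)
repeat([1, 2, 3], 2)          # replaces repmat([1, 2, 3], 2)
dot([1, 2], [3, 4])           # vecdot was merged into dot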
I have been interested about GPU programming since a couple of months ago and now I am trying to learn how to do it in Julia. Ideally I would like to be able to write an analogous code as the below using the GPU:
addprocs(4);
a = cell(nworkers())
@sync for (idx, pid) in enumerate(workers())
    @async a[idx] = remotecall_fetch(pid, fun, vargs...)
end
I have looked around and I have tried ArrayFire; however, I haven't been able to find a way to use something similar to @sync @async. I understand that CUDArt should be able to do something similar, but it seems not to be ready for Julia 0.5.
Would you please show me how to re-write the example above so that the @async operations are executed on the GPU?
Please note that my graphics card supports CUDA drivers.