Julia pmap performance

I am trying to port some of my R code to Julia;
Basically I have rewritten the following R code in Julia:
library(parallel)
eps_1<-rnorm(1000000)
eps_2<-rnorm(1000000)
large_matrix<-ifelse(cbind(eps_1,eps_2)>0,1,0)
matrix_to_compare = expand.grid(c(0,1),c(0,1))
indices<-seq(1,1000000,4)
large_matrix<-lapply(indices,function(i)(large_matrix[i:(i+3),]))
function_compare<-function(x){
  which((rowSums(x==matrix_to_compare)==2) %in% TRUE)
}
> system.time(lapply(large_matrix,function_compare))
user system elapsed
38.812 0.024 38.828
> system.time(mclapply(large_matrix,function_compare,mc.cores=11))
user system elapsed
63.128 1.648 6.108
As one can notice I am getting significant speed-up when going from one core to 11. Now I am trying to do the same in Julia:
# Define cluster:
addprocs(11);
using Distributions;
@everywhere using Iterators;
d = Normal();
eps_1 = rand(d,1000000);
eps_2 = rand(d,1000000);
# Create a large matrix:
large_matrix = hcat(eps_1,eps_2).>=0;
indices = collect(1:4:1000000)
# Split large matrix:
large_matrix = [large_matrix[i:(i+3),:] for i in indices];
# Define the function to apply:
@everywhere function function_split(x)
    matrix_to_compare = transpose(reinterpret(Int,collect(product([0,1],[0,1])),(2,4)));
    matrix_to_compare = matrix_to_compare.>0;
    find(sum(x.==matrix_to_compare,2).==2)
end
@time map(function_split,large_matrix)
@time pmap(function_split,large_matrix)
5.167820 seconds (22.00 M allocations: 2.899 GB, 12.83% gc time)
18.569198 seconds (40.34 M allocations: 2.082 GB, 5.71% gc time)
As one can see, I am not getting any speed-up with pmap. Maybe somebody can suggest alternatives.

I think that some of the problem here is that @parallel and pmap don't always handle moving data to and from the workers very well. Thus, they tend to work best in situations where what you are executing doesn't require very much data movement at all. I also suspect that there are probably things that could be done to improve their performance, but I'm not certain on the details.
For situations in which you do need more data moving around, it may be best to stick with options that directly call functions on workers, with those functions then accessing objects within the memory space of those workers. I give one example below, which speeds up your function using multiple workers. It uses perhaps the simplest option, which is @everywhere, but @spawn, remotecall() etc. are also worth considering, depending on your situation.
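For reference, a minimal hedged sketch of the @spawnat route for a single chunk might look like this (it assumes function_split has already been defined on the workers with @everywhere, as in the full worked example that follows):

# Sketch only: run function_split on one worker for one chunk and fetch the result back.
r = @spawnat workers()[1] function_split(large_matrix[1])
fetch(r)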
addprocs(11);
using Distributions;
@everywhere using Iterators;
d = Normal();
eps_1 = rand(d,1000000);
eps_2 = rand(d,1000000);
# Create a large matrix:
large_matrix = hcat(eps_1,eps_2).>=0;
indices = collect(1:4:1000000);
# Split large matrix:
large_matrix = [large_matrix[i:(i+3),:] for i in indices];
large_matrix = convert(Array{BitArray}, large_matrix);

function sendto(p::Int; args...)
    for (nm, val) in args
        @spawnat(p, eval(Main, Expr(:(=), nm, val)))
    end
end

getfrom(p::Int, nm::Symbol; mod=Main) = fetch(@spawnat(p, getfield(mod, nm)))

@everywhere function function_split(x::BitArray)
    matrix_to_compare = transpose(reinterpret(Int,collect(product([0,1],[0,1])),(2,4)));
    matrix_to_compare = matrix_to_compare.>0;
    find(sum(x.==matrix_to_compare,2).==2)
end

function distribute_data(X::Array, WorkerName::Symbol)
    size_per_worker = floor(Int,size(X,1) / nworkers())
    StartIdx = 1
    EndIdx = size_per_worker
    for (idx, pid) in enumerate(workers())
        if idx == nworkers()
            EndIdx = size(X,1)
        end
        @spawnat(pid, eval(Main, Expr(:(=), WorkerName, X[StartIdx:EndIdx])))
        StartIdx = EndIdx + 1
        EndIdx = EndIdx + size_per_worker - 1
    end
end

distribute_data(large_matrix, :large_matrix)

function parallel_split()
    @everywhere begin
        if myid() != 1
            result = map(function_split, large_matrix);
        end
    end
    results = cell(nworkers())
    for (idx, pid) in enumerate(workers())
        results[idx] = getfrom(pid, :result)
    end
    vcat(results...)
end

## results given after running once to compile
@time a = map(function_split, large_matrix); ## 6.499737 seconds (22.00 M allocations: 2.899 GB, 13.99% gc time)
@time b = parallel_split(); ## 1.097586 seconds (1.50 M allocations: 64.508 MB, 3.28% gc time)
julia> a == b
true
Note: even with this approach, the speedup from multiple processes is not perfect. But this is to be expected, since a moderate amount of data still has to be returned as the result of your function, and moving that data takes time.
P.S. See this post (Julia: How to copy data to another processor in Julia) or this package (https://github.com/ChrisRackauckas/ParallelDataTransfer.jl) for more on the sendto and getfrom functions I used here.
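For reference, a minimal sketch of how those helpers look when taken from that package instead of defined by hand (sendto/getfrom names as documented in ParallelDataTransfer.jl; check its README for the current signatures, and note that `using Distributed` is only needed on Julia 0.7+):

# Sketch only, assuming ParallelDataTransfer.jl is installed.
using Distributed, ParallelDataTransfer
addprocs(2)
sendto(workers(), x = rand(4, 2))   # bind `x` in Main on every worker
y = getfrom(workers()[1], :x)       # fetch it back from one worker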

Related

Julia: type-stability with DataFrames

How can I access the columns of a DataFrame in a type-stable way?
Let's assume I have the following data:
df = DataFrame(x = fill(1.0, 1000000), y = fill(1, 1000000), z = fill("1", 1000000))
And now I want to do some recursive computation (so I cannot use transform)
function foo!(df::DataFrame)
    for i in 1:nrow(df)
        if (i > 1) df.x[i] += df.x[i-1] end
    end
end
This has terrible performance:
julia> @time foo!(df)
0.144921 seconds (6.00 M allocations: 91.529 MiB)
A quick fix in this simplified example would be the following:
function bar!(df::DataFrame)
    x::Vector{Float64} = df.x
    for i in length(x)
        if (i > 1) x[i] += x[i-1] end
    end
end
julia> @time bar!(df)
0.000004 seconds
However, I'm looking for a solution that is generalisable, e.g. when the recursive computation is just specified as a function:
function foo2!(df::DataFrame, fn::Function)
    for i in 1:nrow(df)
        if (i > 1) fn(df, i) end
    end
end

function my_fn(df::DataFrame, i::Int64)
    x::Vector{Float64} = df.x
    x[i] += x[i-1]
end
While this (almost) doesn't allocate, it is still very slow.
julia> @time foo2!(df, my_fn)
0.050465 seconds (1 allocation: 16 bytes)
Is there an approach that is performant and allows this kind of flexibility / generalisability?
EDIT: I should also mention that in practice it is not known a priori which columns the function fn depends on. I.e. I'm looking for an approach that allows performant access to, and updating of, arbitrary columns inside fn. The needed columns could be specified together with fn as a Vector{Symbol}, for example, if necessary.
EDIT 2: I tried using barrier functions as follows, but it's not performant
function foo3!(df::DataFrame, fn::Function, colnames::Vector{Symbol})
    cols = map(cname -> df[!,cname], colnames)
    for i in 1:nrow(df)
        if (i > 1) fn(cols..., i) end
    end
end

function my_fn1(x::Vector{Float64}, i::Int64)
    x[i] += x[i-1]
end

function my_fn2(x::Vector{Float64}, y::Vector{Int64}, i::Int64)
    x[i] += x[i-1] * y[i-1]
end

@time foo3!(df, my_fn1, [:x])
@time foo3!(df, my_fn2, [:x, :y])
This behavior is intended (to avoid excessive compilation for wide data frames), and the ways to handle it are explained in https://github.com/bkamins/Julia-DataFrames-Tutorial/blob/master/11_performance.ipynb.
In general you should reduce the number of times you index into a data frame. So in this case do:
julia> function foo3!(x::AbstractVector, fn::Function)
           for i in 2:length(x)
               fn(x, i)
           end
       end
foo3! (generic function with 1 method)

julia> function my_fn(x::AbstractVector, i::Int64)
           x[i] += x[i-1]
       end
my_fn (generic function with 1 method)

julia> @time foo3!(df.x, my_fn)
  0.010746 seconds (16.60 k allocations: 926.036 KiB)

julia> @time foo3!(df.x, my_fn)
  0.002301 seconds
(I am using the version where you want to have a custom function passed)
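If fn needs several columns (as in EDIT 2), one way to apply the same advice is to extract all relevant columns once, outside the hot loop, and pass them together. A minimal sketch (column names taken from the question; the function names apply_recursive! and my_fn_xy are hypothetical):

using DataFrames

# Barrier function: `cols` is a NamedTuple of concrete vectors, so the loop
# body is type-stable and the compiler specializes on fn.
function apply_recursive!(fn::Function, cols::NamedTuple, n::Int)
    for i in 2:n
        fn(cols, i)
    end
end

my_fn_xy(cols, i) = (cols.x[i] += cols.x[i-1] * cols.y[i-1])

df = DataFrame(x = fill(1.0, 1_000_000), y = fill(1, 1_000_000))
cols = (x = df.x, y = df.y)   # one DataFrame lookup per column, done up front
apply_recursive!(my_fn_xy, cols, nrow(df))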
My current approach involves wrapping the DataFrame in a struct and overloading getindex / setindex!. Some additional trickery using generated functions is needed to get the ability to access columns by name. While this is performant, it is also quite hacky, and I was hoping there was a more elegant solution using only DataFrames.
For simplicity this assumes all (relevant) columns are of Float64 type.
struct DataFrameWrapper{colnames}
    cols::Vector{Vector{Float64}}
end

function df_to_vectors(df::AbstractDataFrame, colnames::Vector{Symbol})::Vector{Vector{Float64}}
    res = Vector{Vector{Float64}}(undef, length(colnames))
    for i in 1:length(colnames)
        res[i] = df[!,colnames[i]]
    end
    res
end

function DataFrameWrapper{colnames}(df::AbstractDataFrame) where colnames
    DataFrameWrapper{colnames}(df_to_vectors(df, collect(colnames)))
end

get_colnames(::Type{DataFrameWrapper{colnames}}) where colnames = colnames

@generated function get_col_index(x::DataFrameWrapper, ::Val{col})::Int64 where col
    id = findfirst(y -> y == col, get_colnames(x))
    :($id)
end

Base.@propagate_inbounds Base.getindex(x::DataFrameWrapper, col::Val)::Vector{Float64} = x.cols[get_col_index(x, col)]
Base.@propagate_inbounds Base.getindex(x::DataFrameWrapper, col::Symbol)::Vector{Float64} = getindex(x, Val(col))
Base.@propagate_inbounds Base.setindex!(x::DataFrameWrapper, value::Float64, row::Int64, col::Val) = setindex!(x.cols[get_col_index(x, col)], value, row)
Base.@propagate_inbounds Base.setindex!(x::DataFrameWrapper, value::Float64, row::Int64, col::Symbol) = setindex!(x, value, row, Val(col))
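A hedged usage sketch of the wrapper above (the function and variable names here are just for illustration):

df = DataFrame(x = fill(1.0, 1_000_000), y = fill(1.0, 1_000_000))
dfw = DataFrameWrapper{(:x, :y)}(df)

# Column access by name goes through Val plus the generated index lookup above.
function my_fn_wrapped(d::DataFrameWrapper, i::Int64)
    d[:x][i] += d[:x][i-1]
end

function run!(d::DataFrameWrapper, n::Int)
    for i in 2:n
        my_fn_wrapped(d, i)
    end
end

run!(dfw, nrow(df))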

How to run computations for n seconds in Julia?

I'd like to run heavy computations in Julia for a fixed duration, for example 10 seconds. I tried this:
timer = Timer(10.0)
while isopen(timer)
    computation()
end
But this does not work, since the computations never let Julia's task scheduler take control. So I added yield() in the loop:
timer = Timer(10.0)
while isopen(timer)
    yield()
    computation()
end
But now there is significant overhead from calling yield(), especially when one call to computation() is short. I guess I could call yield() and isopen() only every 1000 iterations or so, but I would prefer a solution where I would not have to tweak the number of iterations every time I change the computations. Any ideas?
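For reference, the "check only every N iterations" idea mentioned above might look like the sketch below (N = 1000 is arbitrary, and computation() is the placeholder from the question):

timer = Timer(10.0)
n = 0
while true
    computation()
    n += 1
    if n % 1000 == 0        # amortize the yield/timer-check overhead
        yield()
        isopen(timer) || break
    end
end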
The pattern below uses threads; on my laptop it has a latency of around 35 ms per 1,000,000 checks, which is more than acceptable for any job.
Tested on Julia 1.5 release candidate:
function should_stop(timeout=10)
    handle = Threads.Atomic{Bool}(false)
    mytask = Threads.@spawn begin
        sleep(timeout)
        Threads.atomic_or!(handle, true)
    end
    handle
end

function do_some_job_with_timeout()
    handle = should_stop(5)
    res = BigInt() # save results to some object
    mytask = Threads.@spawn begin
        for i in 1:10_000_000
            # TODO: some complex computations here
            res += 1 # mutate the result object
            handle.value && break
        end
    end
    wait(mytask) # wait for the job to complete
    res
end
You can also use Distributed instead. The code below seems to have a much better latency: only about 1 ms for each 1,000,000 timeout checks.
using Distributed
using SharedArrays
addprocs(1)

function get_termination_handle(timeout=5, workerid::Int=workers()[end])::SharedArray{Bool}
    handle = SharedArray{Bool}([false])
    proc = @spawnat workerid begin
        sleep(timeout)
        handle[1] = true
    end
    handle
end

function fun_within_timeout()
    res = 0
    h = get_termination_handle(0.1)
    for i = 1:100_000_000
        res += i % 2 == 0 ? 1 : 0
        h[1] && break
    end
    res
end
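A short usage sketch (the first call compiles; the exact timing and result will differ by machine):

fun_within_timeout()          # warm-up / compilation
@time fun_within_timeout()    # stops early once h[1] flips, or when the loop finishes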

Use of Memory-mapped in Julia

I have Julia code (version 1.2) that performs a lot of operations on a 10000 x 10000 Array. Because I get an OutOfMemory() error when I run it, I'm exploring other options, such as memory-mapping. Concerning the use of Mmap.mmap, I'm a bit confused about how to use the Array that I map to disk, since the documentation at https://docs.julialang.org/en/v1/stdlib/Mmap/index.html gives few details. Here is the beginning of my code:
using Distances
using LinearAlgebra
using Distributions
using Mmap
data=Float32.(rand(10000,15))
Eucldist=pairwise(Euclidean(),data,dims=1)
D=maximum(Eucldist.^2)
sigma2hat=mean(((Eucldist.^2)./D)[tril!(trues(size((Eucldist.^2)./D)),-1)])
L=exp.(-(Eucldist.^2/D)/(2*sigma2hat))
L is the 10000 x 10000 Array with which I want to work, so I mapped it to my disk with
s = open("mmap.bin", "w+")
write(s, size(L,1))
write(s, size(L,2))
write(s, L)
close(s)
What am I supposed to do after that? The next step is to perform K = eigen(L) and apply other commands to K. How should I do that: with K = eigen(L) or K = eigen(s)? What is the role of the object s, and when does it get involved? Moreover, I don't understand why and when I have to use Mmap.sync!. After each line following eigen(L)? At the end of the code? How can I be sure that I'm using disk space instead of RAM? I would appreciate some pointers about memory-mapping. Thank you!
If memory usage is a concern, it is often best to re-assign your very large arrays to 0, or to a similarly type-safe small matrix, so that the memory can be garbage collected, assuming you are done with those intermediate matrices. After that, you call Mmap.mmap() on your stored data file, passing the type and dimensions of the data as the second and third arguments, and assign the return value to your variable, in this case L, so that L is bound to the file contents:
using Distances
using LinearAlgebra
using Distributions
using Mmap
function testmmap()
    data = Float32.(rand(10000, 15))
    Eucldist = pairwise(Euclidean(), data, dims=1)
    D = maximum(Eucldist.^2)
    sigma2hat = mean(((Eucldist.^2) ./ D)[tril!(trues(size((Eucldist.^2) ./ D)), -1)])
    L = exp.(-(Eucldist.^2 / D) / (2 * sigma2hat))
    s = open("./tmp/mmap.bin", "w+")
    write(s, size(L,1))
    write(s, size(L,2))
    write(s, L)
    close(s)
    # deref and gc collect
    Eucldist = data = L = zeros(Float32, 2, 2)
    GC.gc()
    s = open("./tmp/mmap.bin", "r+") # allow read and write
    m = read(s, Int)
    n = read(s, Int)
    L = Mmap.mmap(s, Matrix{Float32}, (m, n)) # now L references the file contents
    K = eigen(L)
    K
end

testmmap()
@time testmmap() # 109.657995 seconds (17.48 k allocations: 4.673 GiB, 0.73% gc time)
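On the Mmap.sync! part of the question (not covered in the answer above): sync! is only relevant if you modify the mapped array and want those changes flushed to the backing file. A minimal sketch, reusing the file written above:

using Mmap
io = open("./tmp/mmap.bin", "r+")
m = read(io, Int); n = read(io, Int)
A = Mmap.mmap(io, Matrix{Float32}, (m, n))
A[1, 1] = 0.0f0    # modify through the mapping
Mmap.sync!(A)      # push the in-memory change back to disk
close(io)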

Speed up deepcopy for new type

Question: I have a new type type MyFloat; x::Float64 ; end. I want to perform a deepcopy on a Vector{MyFloat}. Using Julia v0.5.0 on Ubuntu 16.04, the operation runs roughly 150 times slower than a deepcopy call on an equivalent length Vector{Float64}. Is it possible to speed up a deepcopy on my Vector{MyFloat}?
Code snippet: The 150 times slowdown can be seen with the following code snippet which can be pasted to the REPL:
# Just my own floating point type
type MyFloat
    x::Float64
end

# This function performs N deepcopy operations on a Vector{MyFloat} of length J
function f1(J::Int, N::Int)
    v = MyFloat.(rand(J))
    x = [ deepcopy(v) for n = 1:N ]
end

# The same as f1, but on Vector{Float64} instead of Vector{MyFloat}
function f2(J::Int, N::Int)
    v = rand(J)
    x = [ deepcopy(v) for n = 1:N ]
end

# Pre-compilation step
f1(2, 2);
f2(2, 2);

# Timings
@time f1(100, 15000);
@time f2(100, 15000);
On my machine this produces:
julia> @time f1(100, 15000);
1.944410 seconds (4.61 M allocations: 167.888 MB, 7.72% gc time)
julia> @time f2(100, 15000);
0.013513 seconds (45.01 k allocations: 19.113 MB, 78.80% gc time)
Looking at the answer here it sounds like I can speed things up by defining my own copy method for MyFloat. I've tried things like:
Base.deepcopy(x::MyFloat)::MyFloat = MyFloat(x.x);
Base.deepcopy(v::Vector{MyFloat})::Vector{MyFloat} = [ MyFloat(y.x) for y in v ]
Base.copy(x::MyFloat)::MyFloat = MyFloat(x.x)
Base.copy(v::Vector{MyFloat})::Vector{MyFloat} = [ MyFloat(y.x) for y in v ]
but this doesn't make any difference.
Final note: Letting a = MyFloat.([1.0, 2.0]), I could just use b = copy(a) and there is no speed penalty. This is fine, as long as I am careful to only ever do operations like b[1] = MyFloat(3.0) (which will modify b but not a). But if I get sloppy and accidentally write b[1].x = 3.0, then this will modify both a and b.
By the way, it is entirely possible that I do not have a deep understanding of the differences between copy and deepcopy... I have read this great blog post (thanks @ChrisRackauckas), but I'm certainly a bit fuzzy about what is happening at a deeper level.
Try changing type MyFloat in the definition to immutable MyFloat or struct MyFloat (the keyword changed in 0.6). This makes the times almost equal.
As @Gnimuc mentioned, a mutable type is not a bits type, so Julia has to keep track of a lot of extra bookkeeping for it. See here and in the comments.
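A minimal sketch of the suggested change (using the 0.6+ struct keyword; the type and function names below are just for illustration, and the timing comment is indicative only):

# With an immutable wrapper the elements are plain bits, so a
# Vector{MyFloatImm} is stored inline and copying it is essentially a memcpy.
struct MyFloatImm
    x::Float64
end

function f1_imm(J::Int, N::Int)
    v = MyFloatImm.(rand(J))
    x = [ deepcopy(v) for n = 1:N ]
end

f1_imm(2, 2);             # compile
@time f1_imm(100, 15000); # expected to be close to the Vector{Float64} timing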

Julia pi approximation slow

I have pi approximation code very similar to that on the official page:
function piaprox()
    sum = 1.0
    for i = 2:m-1
        sum = sum + (1.0/(i*i))
    end
end

m = parse(Int, ARGS[1])
opak = parse(Int, ARGS[2])

@time for i = 0:opak
    piaprox()
end
When I compare the timings of C and Julia, Julia is significantly slower: almost 38 s for m = 100000000 (the C time is 0.1608328933 s). Why is this happening?
julia> m = 100000000
julia> function piaprox()
           sum = 1.0
           for i = 2:m-1
               sum = sum + (1.0/(i*i))
           end
       end
piaprox (generic function with 1 method)
julia> @time piaprox()
 28.482094 seconds (600.00 M allocations: 10.431 GB, 3.28% gc time)
I would like to mention two very important paragraphs from the Performance Tips section of the Julia documentation:

Avoid global variables: A global variable might have its value, and therefore its type, change at any point. This makes it difficult for the compiler to optimize code using global variables. Variables should be local, or passed as arguments to functions, whenever possible.

The macro @code_warntype (or its function variant code_warntype()) can sometimes be helpful in diagnosing type-related problems.
julia> @code_warntype piaprox();
Variables:
sum::Any
#s1::Any
i::Any
It's clear from the @code_warntype output that the compiler could not infer the types of the local variables in piaprox(). So we declare the types and remove the global variable:
function piaprox(m::Int)
    sum::Float64 = 1.0
    i::Int = 0
    for i = 2:m-1
        sum = sum + (1.0/(i*i))
    end
end
julia> @time piaprox(100000000)
0.009023 seconds (11.10 k allocations: 399.769 KB)
julia> @code_warntype piaprox(100000000);
Variables:
m::Int64
sum::Float64
i::Int64
#s1::Int64
EDIT
As @user3662120 commented, the super-fast timing above is the result of a mistake: without a return value, LLVM may optimize the for loop away entirely. After adding a return line, the @time result is:
julia> @time piaprox(100000000)
0.746795 seconds (11.11 k allocations: 400.294 KB, 0.45% gc time)
1.644934057834575
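For reference, the corrected version implied by the EDIT would look roughly like this (a sketch: the same loop as above, with the accumulated sum returned so the loop cannot be dropped):

function piaprox(m::Int)
    sum = 1.0
    for i = 2:m-1
        sum = sum + (1.0/(i*i))
    end
    return sum  # returning the accumulator keeps LLVM from eliminating the loop
end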

Resources