Broadcasting with ArrayFire.jl - julia

I'm trying to broadcast a simple conditional function on an AFArray type.
using ArrayFire
f = x -> x > .5 ? "foo" : "bar"
a = rand(Float32, 10, 10)
b = rand(AFArray{Float32}, 10, 10)
display( f.(a) ) # This one works...
display( f.(b) ) # ...this one does not
Here's the stacktrace:
ERROR: LoadError: TypeError: non-boolean (ArrayFire.AFArray{Bool,2}) used in boolean context
Stacktrace:
[1] (::##1#2)(::ArrayFire.AFArray{Float32,2}) at D:\LeMinaw\Dev\MandelFrac\test.jl:3
[2] broadcast_c(::Function, ::Type{ArrayFire.AFArray}, ::ArrayFire.AFArray{Float32,2}) at C:\Users\LeMinaw\.julia\v0.6\ArrayFire\src\array.jl:317
[3] broadcast(::Function, ::ArrayFire.AFArray{Float32,2}) at .\broadcast.jl:455
[4] include_from_node1(::String) at .\loading.jl:576
[5] include(::String) at .\sysimg.jl:14
while loading D:\LeMinaw\Dev\MandelFrac\test.jl, in expression starting on line 9
How can I handle this without copying the array from the GPU back to the host?
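For context (not part of the original post): the stacktrace shows that ArrayFire.jl's broadcast hands the whole AFArray{Float32,2} to f at once, so the comparison produces an array of Bools and the ternary's if then fails. Below is a CPU-only sketch of that failure mode plus a branch-free formulation; whether ifelse broadcasts usefully over an AFArray is an assumption that would need checking:
# CPU-only illustration of the failure mode (plain arrays, no ArrayFire calls)
a = rand(Float32, 10, 10)
mask = a .> 0.5f0            # element-wise comparison: a BitMatrix, not a single Bool
# mask ? "foo" : "bar"       # TypeError: non-boolean (BitMatrix) used in boolean context

# A branch-free formulation avoids the scalar `if` entirely:
g = x -> ifelse(x > 0.5f0, "foo", "bar")
display(g.(a))               # works on plain arrays; AFArray support must be verified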

Related

ZeroMQ and Dash on Linux cause "Address already in use" error

I'm puzzled by this error message: nothing is running on the port Dash wants (I ran sudo netstat -nlp to make sure), and ZMQ works perfectly on its own, as does the Dash functionality. Put them together, which doesn't seem unreasonable, and they should work well. I don't see how the bindings are causing the issue.
I'm using Linux, and I was told that ZMQ and Dash are using the same port. My problem is that I don't know how to resolve the conflict.
Here's the error:
dave@deepthought:~/tontine_2022/just_messing$ julia 6_25_min_dash.jl
starting IN Socket 5555
Received request: END
dying
ERROR: LoadError: TaskFailedException
Stacktrace:
[1] wait
@ ./task.jl:334 [inlined]
[2] hot_restart(func::Dash.var"#72#74"{Dash.DashApp, String, Int64}; check_interval::Float64, env_key::String, suppress_warn::Bool)
@ Dash ~/.julia/packages/Dash/yscRy/src/utils/hot_restart.jl:23
[3] run_server(app::Dash.DashApp, host::String, port::Int64; debug::Bool, dev_tools_ui::Nothing, dev_tools_props_check::Nothing, dev_tools_serve_dev_bundles::Nothing, dev_tools_hot_reload::Nothing, dev_tools_hot_reload_interval::Nothing, dev_tools_hot_reload_watch_interval::Nothing, dev_tools_hot_reload_max_retry::Nothing, dev_tools_silence_routes_logging::Nothing, dev_tools_prune_errors::Nothing)
@ Dash ~/.julia/packages/Dash/yscRy/src/server.jl:64
[4] build_dash()
@ Main ~/tontine_2022/just_messing/6_25_min_dash.jl:46
[5] top-level scope
@ ~/tontine_2022/just_messing/6_25_min_dash.jl:68
nested task error: LoadError: StateError("Address already in use")
Stacktrace:
[1] bind(socket::Socket, endpoint::String)
@ ZMQ ~/.julia/packages/ZMQ/R3wSD/src/socket.jl:58
[2] top-level scope
@ ~/tontine_2022/just_messing/6_25_min_dash.jl:10
[3] include(mod::Module, _path::String)
@ Base ./Base.jl:418
[4] include(x::String)
@ Main.##274 ~/.julia/packages/Dash/yscRy/src/utils/hot_restart.jl:21
[5] top-level scope
@ ~/.julia/packages/Dash/yscRy/src/utils/hot_restart.jl:21
[6] eval
@ ./boot.jl:373 [inlined]
[7] eval
@ ./Base.jl:68 [inlined]
[8] (::Dash.var"#21#22"{String, Symbol})()
@ Dash ./task.jl:423
in expression starting at /home/dave/tontine_2022/just_messing/6_25_min_dash.jl:10
in expression starting at /home/dave/tontine_2022/just_messing/6_25_min_dash.jl:68
dave@deepthought:~/tontine_2022/just_messing$
Can someone look at my code and tell me what I'm doing wrong, please?
Here's the ZMQ pull code:
using ZMQ
using Dash
using DataFrames
context = Context()
in_socket = Socket(context, PULL)
ZMQ.bind(in_socket, "tcp://*:5555")
println("starting IN Socket 5555")
dash_columns = ["sym","price","sdmove","hv20","hv10","hv5","iv","iv%ile","prc%ile","volume"]
df_dash_table = DataFrame([col => (col == "sym" ? String : Float64)[] for col in dash_columns ])
function build_dash()
    app = dash()
    app.layout = html_div() do
        html_h1("tontine2"),
        dash_datatable(id="table",
                       columns=[Dict("name" => i, "id" => i) for i in names(df_dash_table)],
                       data=Dict.(pairs.(eachrow(df_dash_table))),
                       editable=false,
                       filter_action="native",
                       sort_action="native",
                       sort_mode="multi",
                       row_selectable="multi",
                       row_deletable=false,
                       selected_rows=[],
                       ## page_action="native",
                       ## page_current=0,
                       ## page_size=10,
                       ) # end dash_datatable
    end
    run_server(app, "0.0.0.0", debug=true)
end
while true
    message = String(ZMQ.recv(in_socket))
    println("Received request: $message")
    if message == "END"
        println("dying")
        break
    end
end
build_dash()
And here's the code that triggers the event:
using ZMQ
context = Context()
stk_socket = Socket(context, PUSH)
ZMQ.connect(stk_socket, "tcp://localhost:5555")
ZMQ.send(stk_socket,"END")
ZMQ.close(stk_socket)
ZMQ.close(context)
This works here:
using ZMQ
using Dash
using Distributed
app = dash(external_stylesheets = ["https://codepen.io/chriddyp/pen/bWLwgP.css"])
app.layout = html_div() do
    html_h1("Hello Dash"),
    html_div("Dash.jl: Julia interface for Dash"),
    dcc_graph(
        id = "example-graph",
        figure = (
            data = [
                (x = [1, 2, 3], y = [4, 1, 2], type = "bar", name = "SF"),
                (x = [1, 2, 3], y = [2, 4, 5], type = "bar", name = "Montréal"),
            ],
            layout = (title = "Dash Data Visualization",)
        )
    )
end
@spawn run_server(app, "0.0.0.0", 8080)
function testpush()
    context = Context()
    stk_socket = Socket(context, PUSH)
    ZMQ.connect(stk_socket, "tcp://localhost:5555")
    ZMQ.send(stk_socket, "END")
    ZMQ.close(stk_socket)
    ZMQ.close(context)
end
context = Context()
in_socket = Socket(context, PULL)
ZMQ.bind(in_socket, "tcp://*:5555")
sleep(1)
@spawn testpush()
sleep(5)
If you are running the program from an editor task, note that the spawned task will keep running until you restart the editor. Perhaps a previously spawned process is still holding the port?
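As a further sketch (not from the original answers), wrapping the ZMQ side in try/finally guarantees the socket and context are closed even when the script errors, so port 5555 is released before the next run; the helper name is illustrative:
using ZMQ

# Illustrative helper: bind, drain messages until "END", and always release the port.
function pull_until_end(endpoint::String = "tcp://*:5555")
    context = Context()
    in_socket = Socket(context, PULL)
    try
        ZMQ.bind(in_socket, endpoint)
        println("starting IN Socket 5555")
        while true
            message = String(ZMQ.recv(in_socket))
            println("Received request: $message")
            message == "END" && break
        end
    finally
        ZMQ.close(in_socket)   # releases the bound port even on error
        ZMQ.close(context)
    end
end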

LoadError: UndefVarError: @defVar not defined

I have a piece of code that uses JuMP. When I run it, it says LoadError: UndefVarError: @defVar not defined. I have tried using global forward or backward, but both fail.
See:
function T1(w_func, grid_b, β, u, z)
    # objective for each grid point
    for j in 1:cp.Nb
        b = grid_b[j]
        choice1 = Model(solver=GLPKSolverLP())
        @defVar(choice1, a >= 0)
        @setObjective(choice1, Max, u(a) + cp.β * (w_func.((b*(1+cp.r)+cp.w-a) .* cp.z[i])))
        results1 = solve(choice1)
        Tw1 = getObjectiveValue(choice1)
        c_choice1 = getValue(x)
        return Tw, σ
    end
end
LoadError: UndefVarError: @defVar not defined
in expression starting at In[44]:37
Stacktrace:
[1] top-level scope
@ :0
[2] eval
@ ./boot.jl:360 [inlined]
[3] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
@ Base ./loading.jl:1094
Thanks.
It seems that you're using outdated code. Look at the current documentation and make sure you have installed the latest versions of the libraries and of Julia.
In short, @defVar and @setObjective were replaced by @variable and @objective, respectively.
function T1(w_func, grid_b, β, u, z)
    # objective for each grid point
    for j in 1:cp.Nb
        b = grid_b[j]
        choice1 = Model(solver=GLPKSolverLP())
        @variable(choice1, a >= 0)
        @objective(choice1, Max, u(a) + cp.β * (w_func.((b*(1+cp.r)+cp.w-a) .* cp.z[i])))
        results1 = solve(choice1)
        Tw1 = getObjectiveValue(choice1)
        c_choice1 = getValue(x)
        return Tw, σ
    end
end
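For completeness, here is what the same kind of model looks like in current JuMP (1.x), where the optimizer is attached with Model(GLPK.Optimizer) and solve/getObjectiveValue/getValue have become optimize!/objective_value/value. This is only a sketch with a stand-in objective, not the original model:
using JuMP, GLPK

# Sketch only: a bounded toy LP, not the original model.
model = Model(GLPK.Optimizer)       # replaces Model(solver=GLPKSolverLP())
@variable(model, a >= 0)
@constraint(model, a <= 10)         # keep the toy problem bounded
@objective(model, Max, 2a)          # stand-in for the real objective
optimize!(model)                    # replaces solve(choice1)
println(objective_value(model))     # replaces getObjectiveValue(choice1)
println(value(a))                   # replaces getValue(x)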

How to pass a list of parameters to workers in Julia Distributed

With Julia 1.5.3, I want to pass a list of parameters to the distributed workers.
I first tried it in a non-distributed way:
using Distributed
@everywhere begin
    using SharedArrays
    solve(a,b,c) = return (1,2,3)
    d_rates = LinRange(0.01, 0.33, 5)
    m_rates = LinRange(0.01, 0.25, 5)
    population_size = 10^3
    max_iterations_perloop = 10^3
    nb_repeats = 2
    nb_params = length(d_rates)*length(m_rates)*nb_repeats
    para = enumerate(Base.product(d_rates, m_rates, population_size, max_iterations_perloop, 1:nb_repeats))
    results = SharedArray{Tuple{Int, Int, Int}}(nb_params)
end
for (y, x) in para
    results[y] = solve(x[1], x[2], x[3])
end
which worked fine. I then changed the final loop to:
@sync @distributed for (y, x) in para
    results[y] = solve(x[1], x[2], x[3])
end
I then got an error (truncated):
ERROR: LoadError: TaskFailedException:
MethodError: no method matching firstindex(::Base.Iterators.Enumerate{Base.Iterators.ProductIterator{Tuple{LinRange{Float64},LinRange{Float64},Int64,Int64,UnitRange{Int64}}}})
Closest candidates are:
firstindex(::Cmd) at process.jl:638
firstindex(::Core.SimpleVector) at essentials.jl:599
firstindex(::Base64.Buffer) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.5/Base64/src/buffer.jl:18
...
Stacktrace:
[1] (::Distributed.var"#159#161"{var"#271#272",Base.Iterators.Enumerate{Base.Iterators.ProductIterator{Tuple{LinRange{Float64},LinRange{Float64},Int64,Int64,UnitRange{Int64}}}}})() at ./task.jl:332
Stacktrace:
[1] sync_end(::Channel{Any}) at ./task.jl:314
[2] top-level scope at task.jl:333
[3] include_string(::Function, ::Module, ::String, ::String) at ./loading.jl:1088
[4] include_string(::Module, ::String, ::String) at ./loading.jl:1096
[5] invokelatest(::Any, ::Any, ::Vararg{Any,N} where N; kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at ./essentials.jl:710
[6] invokelatest(::Any, ::Any, ::Vararg{Any,N} where N) at ./essentials.jl:709
Is it possible to pass such a list, if so how?
I assume that all your workers are on a single server and that you have actually added some workers using the addprocs command. The first problem with your code is that you create the SharedArray on all workers. Instead, note the documented SharedArray constructor:
help?> SharedArray
SharedArray{T}(dims::NTuple; init=false, pids=Int[])
SharedArray{T,N}(...)
Construct a SharedArray of a bits type T and size dims across the processes specified by pids - all of which have to be on the same host. (...)
This means that you create the SharedArray only once, from the master process, and you can specify the workers that are aware of it using the pids argument (if you do not specify pids, all worker processes have access).
Hence your code will look like this:
using Distributed, SharedArrays
addprocs(4)
@everywhere using SharedArrays
@everywhere solve(a,b,c) = return (1,2,3)
#(...) # your setup code without @everywhere
results = SharedArray{Tuple{Int, Int, Int}}(nb_params)
@sync @distributed for (y, x) in collect(para)
    results[y] = solve(x[1], x[2], x[3])
end
Note that you need collect because the @distributed macro needs to know the size of the Vector; it does not work well with lazy iterators.
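For reference (not part of the original answer), a minimal end-to-end sketch of that pattern with the setup kept on the master process; the names mirror the question, and addprocs(4) assumes local workers:
using Distributed, SharedArrays
addprocs(4)

@everywhere solve(a, b, c) = (1, 2, 3)

# Setup only needs to run on the master process.
d_rates = LinRange(0.01, 0.33, 5)
m_rates = LinRange(0.01, 0.25, 5)
nb_repeats = 2
nb_params = length(d_rates) * length(m_rates) * nb_repeats
para = enumerate(Base.product(d_rates, m_rates, 10^3, 10^3, 1:nb_repeats))

# One SharedArray, created once on the master; workers write into it.
results = SharedArray{Tuple{Int, Int, Int}}(nb_params)

# collect(para) gives @distributed an indexable Vector it can split across workers.
@sync @distributed for (y, x) in collect(para)
    results[y] = solve(x[1], x[2], x[3])
end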

Saving an OrderedDict to Julia Data Format

I wish to use the JLD package to write an OrderedDict to file in such a way that I can subsequently read it back unchanged.
Here was my first effort:
using JLD, HDF5, DataStructures
function testjld()
    res = OrderedDict("A" => 1, "B" => 2)
    filename = "c:/temp/test.jld"
    save(File(format"JLD", filename), "res", res)
    res2 = load(filename)["res"]
    # Check if round-tripping works
    res == res2
end
But the "round-tripping" doesn't work - the function returns false. It also raises a warning:
julia> testjld()
┌ Warning: type JLD.AssociativeWrapper{Core.String,Core.Int64,OrderedCollections.OrderedDict{Core.String,Core.Int64}} not present in workspace; reconstructing
└ @ JLD C:\Users\Philip\.julia\packages\JLD\1BoSz\src\jld_types.jl:703
false
After reading the docs, I thought that JLD does not support OrderedDict "out of the box", but does support Dict and I can use that fact to write my own custom serialisation for OrderedDict. Something like this:
struct OrderedDictSerializer
    d::Dict
end

JLD.writeas(data::OrderedDict) = OrderedDictSerializer(Dict("contents" => convert(Dict, data),
                                                            "keyorder" => [k for (k, v) in data]))

function JLD.readas(serdata::OrderedDictSerializer)
    unordered = serdata.d["contents"]
    keyorder = serdata.d["keyorder"]
    OrderedDict((k, unordered[k]) for k in keyorder)
end
Hardly an exhaustive test, but this does seem to work:
julia> testjld()
true
Am I correct in thinking I need to write my own serializer for OrderedDict, and can my serializer be improved?
EDIT
The answer to my question "Can my serializer be improved?" seems to be "It will have to be, though I don't yet understand how."
Consider the two following test functions:
function testjld2()
    res = OrderedDict("A" => [1.0, 2.0], "B" => [3.0, 4.0])
    # check if round-tripping of readas and writeas methods works:
    JLD.readas(JLD.writeas(res)) == res
end

function testjld3()
    res = OrderedDict("A" => [1.0, 2.0], "B" => [3.0, 4.0])
    filename = "c:/temp/test.jld"
    save(File(format"JLD", filename), "res", res)
    res2 = load(filename)["res"]
    # Check if round-tripping to jld file and back works
    res == res2
end
testjld2 shows that my writeas and readas methods correctly round-trip for an OrderedDict{String,Array{Float64,1}} with 2 entries:
julia> testjld2()
true
and yet testjld3 doesn't work at all, but yields an error:
julia> testjld3()
HDF5-DIAG: Error detected in HDF5 (1.10.5) thread 0:
#000: E:/mingwbuild/mingw-w64-hdf5/src/hdf5-1.10.5/src/H5Tfields.c line 60 in H5Tget_nmembers(): not a datatype
major: Invalid arguments to routine
minor: Inappropriate type
HDF5-DIAG: Error detected in HDF5 (1.10.5) thread 0:
#000: E:/mingwbuild/mingw-w64-hdf5/src/hdf5-1.10.5/src/H5Tfields.c line 60 in H5Tget_nmembers(): not a datatype
major: Invalid arguments to routine
minor: Inappropriate type
ERROR: Error getting the number of members
Stacktrace:
[1] error(::String) at .\error.jl:33
[2] h5t_get_nmembers at C:\Users\Philip\.julia\packages\HDF5\rF1Fe\src\HDF5.jl:2279 [inlined]
[3] _gen_h5convert!(::Any) at C:\Users\Philip\.julia\packages\JLD\1BoSz\src\jld_types.jl:638
[4] #s27#9(::Any, ::Any, ::Any, ::Any, ::Any, ::Any) at C:\Users\Philip\.julia\packages\JLD\1BoSz\src\jld_types.jl:664
[5] (::Core.GeneratedFunctionStub)(::Any, ::Vararg{Any,N} where N) at .\boot.jl:524
[6] #write_compound#24(::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::typeof(JLD.write_compound), ::JLD.JldGroup, ::String, ::JLD.AssociativeWrapper{String,Any,Dict{String,Any}}, ::JLD.JldWriteSession) at C:\Users\Philip\.julia\packages\JLD\1BoSz\src\JLD.jl:700
[7] write_compound at C:\Users\Philip\.julia\packages\JLD\1BoSz\src\JLD.jl:694 [inlined]
[8] #_write#23 at C:\Users\Philip\.julia\packages\JLD\1BoSz\src\JLD.jl:690 [inlined]
[9] _write at C:\Users\Philip\.julia\packages\JLD\1BoSz\src\JLD.jl:690 [inlined]
[10] write_ref(::JLD.JldFile, ::Dict{String,Any}, ::JLD.JldWriteSession) at C:\Users\Philip\.julia\packages\JLD\1BoSz\src\JLD.jl:658
[11] macro expansion at C:\Users\Philip\.julia\packages\JLD\1BoSz\src\jld_types.jl:648 [inlined]
[12] h5convert!(::Ptr{UInt8}, ::JLD.JldFile, ::OrderedDictSerializer, ::JLD.JldWriteSession) at C:\Users\Philip\.julia\packages\JLD\1BoSz\src\jld_types.jl:664
[13] #write_compound#24(::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::typeof(JLD.write_compound), ::JLD.JldFile, ::String, ::OrderedDictSerializer, ::JLD.JldWriteSession) at C:\Users\Philip\.julia\packages\JLD\1BoSz\src\JLD.jl:700
[14] write_compound at C:\Users\Philip\.julia\packages\JLD\1BoSz\src\JLD.jl:694 [inlined]
[15] #_write#23 at C:\Users\Philip\.julia\packages\JLD\1BoSz\src\JLD.jl:690 [inlined]
[16] _write at C:\Users\Philip\.julia\packages\JLD\1BoSz\src\JLD.jl:690 [inlined]
[17] #write#17(::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::typeof(write), ::JLD.JldFile, ::String, ::OrderedDict{String,Array{Float64,1}}, ::JLD.JldWriteSession) at C:\Users\Philip\.julia\packages\JLD\1BoSz\src\JLD.jl:514
[18] write at C:\Users\Philip\.julia\packages\JLD\1BoSz\src\JLD.jl:514 [inlined]
[19] #35 at C:\Users\Philip\.julia\packages\JLD\1BoSz\src\JLD.jl:1223 [inlined]
[20] #jldopen#14(::Base.Iterators.Pairs{Symbol,Bool,Tuple{Symbol,Symbol},NamedTuple{(:compatible, :compress),Tuple{Bool,Bool}}}, ::typeof(jldopen), ::getfield(JLD, Symbol("##35#36")){String,OrderedDict{String,Array{Float64,1}},Tuple{}},
::String, ::Vararg{String,N} where N) at C:\Users\Philip\.julia\packages\JLD\1BoSz\src\JLD.jl:246
[21] testjld3() at .\none:0
[22] top-level scope at REPL[48]:1
Use JLD2 instead:
using JLD2, DataStructures, FileIO
function testjld2()
    res = OrderedDict("A" => 1, "B" => 2)
    myfilename = "c:/temp/test.jld2"
    save(myfilename, "res", res)
    res2 = load(myfilename)["res"]
    # Check if round-tripping works
    res == res2
end
Testing:
julia> testjld2()
true
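As an aside (not from the original answer), JLD2's @save/@load macros give the same round trip when run at the top level; a small sketch, assuming a writable c:/temp:
using JLD2, DataStructures

res = OrderedDict("A" => 1, "B" => 2)
@save "c:/temp/test.jld2" res        # writes `res` to the file under its own name
res_saved = copy(res)                # keep a copy for comparison
@load "c:/temp/test.jld2" res        # reads the file and rebinds `res`
res == res_saved                     # true: the OrderedDict round-trips intact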
Personally, whenever I can, I use BSON:
using DataStructures, BSON, OrderedCollections
function testbson()
    res = OrderedDict("A" => 1, "B" => 2)
    myfilename = "c:/temp/test.bjson"
    BSON.bson(myfilename, Dict("res" => res))
    res2 = BSON.load(myfilename)["res"]
    # Check if round-tripping works
    res == res2
end
julia> testbson()
true

(Julia 1.x) BoundsError using pmap?

I am having trouble with pmap() throwing a BoundsError when setting the values of array elements: my code works for 1 worker but not for more than 1. I have written a minimal working example that roughly follows the real code flow:
1. Get source data
2. Define set of points over which to iterate
3. Initialise array points to be calculated
4. Calculate each array point
The main file:
# pmapdemo.jl
using Distributed
# addprocs(length(Sys.cpu_info())) # uncomment this line for error
@everywhere include(joinpath(@__DIR__, "pmapdemo2.jl"))

function main()
    # Get source data
    source = Dict{String, Any}("t"=>zeros(5),
                               "x"=>zeros(5,6),
                               "y"=>zeros(5,3),
                               "z"=>zeros(5,3))
    # Define set of points over which to iterate
    iterset = Dict{String, Any}("t"=>source["t"],
                                "x"=>source["x"],
                                "y"=>fill(2, size(source["t"])[1], 1),
                                "z"=>fill(2, size(source["t"])[1], 1))
    data = Dict{String, Any}()
    # Initialise array points to be calculated
    MyMod.initialisearray!(data, iterset)
    # Calculate each array point
    MyMod.calcarray!(data, iterset, source)
    @show data
end

main()
The functionality file:
# pmapdemo2.jl
module MyMod

using Distributed
@everywhere using SharedArrays

# Initialise data array
function initialisearray!(data, fieldset)
    zerofield::SharedArray{Float64, 4} = zeros(size(fieldset["t"])[1],
                                               size(fieldset["x"])[2],
                                               size(fieldset["y"])[2],
                                               size(fieldset["z"])[2])
    data["field"] = deepcopy(zerofield)
end

# Calculate values of array elements according to values in source
function calcpoint!((data, source, a, b, c, d))
    data["field"][a,b,c,d] = rand()
end

# Set values in array
function calcarray!(data, iterset, source)
    for a in eachindex(iterset["t"])
        # [additional functionality f(a) here]
        b = eachindex(iterset["x"][a,:])
        c = eachindex(iterset["y"][a,:])
        d = eachindex(iterset["z"][a,:])
        pmap(calcpoint!, Iterators.product(Iterators.repeated(data,1), Iterators.repeated(source,1), Iterators.repeated(a,1), b, c, d))
    end
end

end
The error output:
ERROR: LoadError: On worker 2:
BoundsError: attempt to access 0×0×0×0 Array{Float64,4} at index [1]
setindex! at ./array.jl:767 [inlined]
setindex! at /build/julia/src/julia-1.1.1/usr/share/julia/stdlib/v1.1/SharedArrays/src/SharedArrays.jl:500 [inlined]
_setindex! at ./abstractarray.jl:1043
setindex! at ./abstractarray.jl:1020
calcpoint! at /home/dave/pmapdemo2.jl:25
#112 at /build/julia/src/julia-1.1.1/usr/share/julia/stdlib/v1.1/Distributed/src/process_messages.jl:269
run_work_thunk at /build/julia/src/julia-1.1.1/usr/share/julia/stdlib/v1.1/Distributed/src/process_messages.jl:56
macro expansion at /build/julia/src/julia-1.1.1/usr/share/julia/stdlib/v1.1/Distributed/src/process_messages.jl:269 [inlined]
#111 at ./task.jl:259
Stacktrace:
[1] (::getfield(Base, Symbol("##696#698")))(::Task) at ./asyncmap.jl:178
[2] foreach(::getfield(Base, Symbol("##696#698")), ::Array{Any,1}) at ./abstractarray.jl:1866
[3] maptwice(::Function, ::Channel{Any}, ::Array{Any,1}, ::Base.Iterators.ProductIterator{Tuple{Base.Iterators.Take{Base.Iterators.Repeated{Dict{String,Any}}},Base.Iterators.Take{Base.Iterators.Repeated{Dict{String,Any}}},Base.Iterators.Take{Base.Iterators.Repeated{Int64}},Base.OneTo{Int64},Base.OneTo{Int64},Base.OneTo{Int64}}}) at ./asyncmap.jl:178
[4] #async_usemap#681 at ./asyncmap.jl:154 [inlined]
[5] #async_usemap at ./none:0 [inlined]
[6] #asyncmap#680 at ./asyncmap.jl:81 [inlined]
[7] #asyncmap at ./none:0 [inlined]
[8] #pmap#213(::Bool, ::Int64, ::Nothing, ::Array{Any,1}, ::Nothing, ::Function, ::Function, ::WorkerPool, ::Base.Iterators.ProductIterator{Tuple{Base.Iterators.Take{Base.Iterators.Repeated{Dict{String,Any}}},Base.Iterators.Take{Base.Iterators.Repeated{Dict{String,Any}}},Base.Iterators.Take{Base.Iterators.Repeated{Int64}},Base.OneTo{Int64},Base.OneTo{Int64},Base.OneTo{Int64}}}) at /build/julia/src/julia-1.1.1/usr/share/julia/stdlib/v1.1/Distributed/src/pmap.jl:126
[9] pmap(::Function, ::WorkerPool, ::Base.Iterators.ProductIterator{Tuple{Base.Iterators.Take{Base.Iterators.Repeated{Dict{String,Any}}},Base.Iterators.Take{Base.Iterators.Repeated{Dict{String,Any}}},Base.Iterators.Take{Base.Iterators.Repeated{Int64}},Base.OneTo{Int64},Base.OneTo{Int64},Base.OneTo{Int64}}}) at /build/julia/src/julia-1.1.1/usr/share/julia/stdlib/v1.1/Distributed/src/pmap.jl:101
[10] #pmap#223(::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::Function, ::Function, ::Base.Iterators.ProductIterator{Tuple{Base.Iterators.Take{Base.Iterators.Repeated{Dict{String,Any}}},Base.Iterators.Take{Base.Iterators.Repeated{Dict{String,Any}}},Base.Iterators.Take{Base.Iterators.Repeated{Int64}},Base.OneTo{Int64},Base.OneTo{Int64},Base.OneTo{Int64}}}) at /build/julia/src/julia-1.1.1/usr/share/julia/stdlib/v1.1/Distributed/src/pmap.jl:156
[11] pmap(::Function, ::Base.Iterators.ProductIterator{Tuple{Base.Iterators.Take{Base.Iterators.Repeated{Dict{String,Any}}},Base.Iterators.Take{Base.Iterators.Repeated{Dict{String,Any}}},Base.Iterators.Take{Base.Iterators.Repeated{Int64}},Base.OneTo{Int64},Base.OneTo{Int64},Base.OneTo{Int64}}}) at /build/julia/src/julia-1.1.1/usr/share/julia/stdlib/v1.1/Distributed/src/pmap.jl:156
[12] calcarray!(::Dict{String,Any}, ::Dict{String,Any}, ::Dict{String,Any}) at /home/dave/pmapdemo2.jl:20
[13] main() at /home/dave/pmapdemo.jl:19
[14] top-level scope at none:0
in expression starting at /home/dave/pmapdemo.jl:23
In pmapdemo2.jl, replacing data["field"][a,b,c,d] = rand() with @show a, b, c, d demonstrates that all workers are running and have full access to the variables being passed; however, replacing it with @show data["field"] instead throws the same error. Surely the entire purpose of SharedArrays is to avoid this? Or am I misunderstanding how to use it with pmap?
This is a crosspost from the Julia discourse here.
pmap will do the work of passing the data to the processes, so you don't need to use SharedArrays. Typically, the function provided to pmap (and indeed map) will be a pure function (one that doesn't mutate any variable) which returns one element of an output array. That function is mapped across each element of the input array, and pmap constructs the output array for you. For example, in your case, the code may look a bit like this:
calcpoint(source, (a, b, c, d)) = rand() # or some function of source and the indices a, b, c, d
data["field"] = pmap(calcpoint, Iterators.repeated(source), Iterators.product(a, b, c, d))
