Julia shared variables within a script [closed] - julia

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 2 years ago.
I'm using a Julia script where I've defined two functions that need to share variables. What is the best practice for doing this? Should I just use global variables, or is there a Julia-specific workaround that I can use instead?

Normally you would pass the variable to both functions as an argument, as Benoit suggested.
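For instance, a minimal sketch of passing the shared value explicitly (hypothetical function names, not from the original post):
# both functions take the shared parameter c explicitly as an argument
loss(x, c) = (x - c)^2
grad(x, c) = 2 * (x - c)
loss(3, 10), grad(3, 10)   # (49, -14)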
However, if you need to avoid that for some reason (a common case is passing a function that depends on some external parameter to an optimization routine), use a closure rather than a global variable. In general, non-const global variables should be avoided in Julia for performance reasons.
Here is an example:
No closure:
julia> using Optim, BenchmarkTools
julia> c = 10
10
julia> f(x) = (x-c)^2
f (generic function with 1 method)
julia> @btime optimize(f, -100, 100);
5.500 μs (235 allocations: 3.81 KiB)
Now the same with a closure. First I use let:
julia> @btime let c = 10
           f(x) = (x-c)^2
           optimize(f, -100, 100)
       end;
139.605 ns (2 allocations: 176 bytes)
You can also use a function:
julia> function wrap()
           c = 10
           f(x) = (x-c)^2
           optimize(f, -100, 100)
       end
wrap (generic function with 1 method)
julia> @btime wrap();
139.100 ns (2 allocations: 176 bytes)
Also, if c were a const, things would also be fast (start a fresh Julia session for this):
julia> using Optim, BenchmarkTools
julia> const c = 10
10
julia> f(x) = (x-c)^2
f (generic function with 1 method)
julia> @btime optimize(f, -100, 100);
138.603 ns (2 allocations: 176 bytes)

Related

Problem with power operator for specific values

I am trying to write a simple function to check the differences between the factorial and Stirling's approximation:
using DataFrames
n = 24
df_res = DataFrame(i = BigInt[],
                   f = BigInt[],
                   st = BigInt[])
for i in 1:n
    fac = factorial(big(i))
    sterling = i^i*exp(-i)*sqrt(i)*sqrt(2*pi)
    res = DataFrame(i = [i],
                    f = [fac],
                    st = [sterling])
    df_res = [df_res; res]
end
first(df_res, 24)
The result for sterling when i = 16 and i = 24 is 0! So I checked the power for both values, and the result is 0:
julia> 16^16
0
julia> 24^24
0
I ran the same code in R, and there are no issues. What am I doing wrong, or what don't I know about Julia that I probably should?
It appears that Julia integers are either 32-bit or 64-bit, depending on your system, according to the Julia documentation for Integers and Floating-Point Numbers. Your exponentiation is overflowing your values, even if they're 64 bits.
Julia looks like it supports Arbitrary Precision Arithmetic, which you'll need to store the large resultant values.
According to the Overflow Section, writing big(n) makes n arbitrary precision.
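Applied to the loop in the question, a minimal sketch of the fix is to promote the base before exponentiating (only the sterling line changes; note the st column would then hold BigFloat rather than BigInt values):
# big(i)^i is computed as a BigInt, so the product no longer overflows
sterling = big(i)^i * exp(-i) * sqrt(i) * sqrt(2*pi)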
While the question has already been answered in another post, one more thing is worth saying.
Julia is one of very few languages that lets you define your own primitive types, so you can stay with fast fixed-precision numbers yet still handle huge values. There is a package, BitIntegers, for that:
using BitIntegers
BitIntegers.@define_integers 512
Now you can do:
julia> Int512(2)^500
3273390607896141870013189696827599152216642046043064789483291368096133796404674554883270092325904157150886684127560071009217256545885393053328527589376
Usually you will get better performance even for big fixed-point numbers. For example:
julia> @btime Int512(2)^500;
174.687 ns (0 allocations: 0 bytes)
julia> @btime big(2)^500;
259.643 ns (9 allocations: 248 bytes)
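Tying this back to the question, the case that overflowed Int64 works with the wider fixed-size type:
julia> Int512(16)^16
18446744073709551616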
There is a simple solution to your problem that does not involve using BigInt or any specialized number types, and which is much faster. Simply tweak your mathematical expression slightly.
foo(i) = i^i*exp(-i)*sqrt(i)*sqrt(2*pi) # this is your function
bar(i) = (i / exp(1))^i * sqrt(i) * sqrt(2*pi) # here's a better way
OK, let's test it:
1.7.2> foo(16)
0.0 # oops. not what we wanted
1.7.2> foo(big(16)) # works
2.081411441522312838373895982304611417026205959453251524254923609974529540404514e+13
1.7.2> bar(16) # also works
2.0814114415223137e13
Let's try timing it:
1.7.2> using BenchmarkTools
1.7.2> @btime foo(n) setup=(n=16)
18.136 ns (0 allocations: 0 bytes)
0.0
1.7.2> @btime foo(n) setup=(n=big(16))
4.457 μs (25 allocations: 1.00 KiB) # horribly slow
2.081411441522312838373895982304611417026205959453251524254923609974529540404514e+13
1.7.2> @btime bar(n) setup=(n=16)
99.682 ns (0 allocations: 0 bytes) # pretty fast
2.0814114415223137e13
Edit: It seems like
baz(i) = float(i)^i * exp(-i) * sqrt(i) * sqrt(2*pi)
might be an even better solution, since the numerical values are closer to the original.
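A quick sanity check of baz against the values above (a sketch; the exact trailing digits may differ slightly from bar):
1.7.2> baz(16)   # ≈ 2.08141e13, in line with foo(big(16)) and bar(16) above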

Julia: Array of ones: add one to zero or repeat?

I have a concrete Array and want to efficiently construct a similar array of the same dimensions filled with ones. What would be the recommended approach?
Here's a random array to work with:
julia> A = rand(0:1, 10, 5)
10×5 Matrix{Int64}:
or A = rand(0:1., 10, 5) (with a dot on 0. and/or 1.) for a random matrix of floats.
Two approaches are very natural. I could do this:
julia> zero(A) .+ 1
10×5 Matrix{Int64}:
Or I could do it this way:
julia> repeat(ones(size(A)[2])', outer = size(A)[1])
10×5 Matrix{Float64}:
The first approach is more elegant. The second feels clunkier and more error-prone (it is easy to accidentally exchange [1] and [2]), but it avoids the addition and so possibly involves fewer allocations (or maybe not, because the compiler is super smart. Edit: the quick benchmark below suggests the compiler is indeed super smart).
And of course there may be another, better approach.
using BenchmarkTools
A = rand(0:1, 1000, 1000)
@btime zero(A) .+ 1
## 1.609 ms (6 allocations: 15.26 MiB)
@btime repeat(ones(size(A)[2])', outer = size(A)[1])
## 3.032 ms (10 allocations: 7.64 MiB)
Edit 2: Follow-up on Bogumił's answer
The following method for a unit-array J, defined for convenience, is efficient:
function J(A::AbstractArray{T,N}) where {T,N}
    ones(T, size(A))
end
J(A)
@btime J(A)
## 789.929 μs (2 allocations: 7.63 MiB)
What about:
ones(Int, size(A))
or
fill(1, size(A))
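As a quick check (for any A, e.g. the A = rand(0:1, 10, 5) from the question), both forms give an integer matrix of ones of matching size:
julia> B = ones(Int, size(A));
julia> C = fill(1, size(A));
julia> size(B) == size(C) == size(A)
true
julia> B == C
true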

Unpacking return of map in Julia

I have a function that returns an array. I'd like to map the function over a vector of inputs and have the output be a simple concatenation of all the arrays. The function is:
function log_it(r, bzero = 0.25, N = 400)
    main = rand(Float16, (N+150));
    main[1] = bzero;
    for i in 2:N+150
        main[i] = *(r, main[i-1], (1-main[i-1]))
    end;
    y = unique(main[(N+1):(N+150)]);
    r_vec = repeat([r], size(y)[1]);
    hcat(r_vec, y)
end;
and I can map it fine:
map(log_it, 2.4:0.001:2.405)
but the result is gross:
[2.4 0.58349609375]
[2.401 0.58349609375]
[2.402 0.583984375; 2.402 0.58349609375]
[2.403 0.583984375]
[2.404 0.583984375]
[2.405 0.58447265625; 2.405 0.583984375]
NB: the length of the nested arrays is unbounded, so I'm looking for a solution that doesn't depend on knowing the length of the nested arrays in advance.
What I want is something like this:
2.4 0.583496
2.401 0.583496
2.402 0.583984
2.402 0.583496
2.403 0.583984
2.404 0.583984
2.405 0.584473
2.405 0.583984
Which I made using a for loop:
results = Array{Float64, 2}(undef, 0, 2)
for i in 2.4:0.001:2.405
    results = cat(results, log_it(i), dims = 1)
end
results
The code works fine, but the for loop takes about four times as long. I also feel like map is the right way to do it and I'm just missing something - either in executing map in such a way that it returns a nice vector of arrays, or in some mutation of the array that will "unnest". I've tried looking through functions like flatten and collect but can't find anything.
Many thanks in advance!
Are you sure you're benchmarking this correctly? Especially with very fast operations benchmarking can sometimes be tricky. As a starting point, I would recommend to ensure you always wrap any code you want to benchmark into a function, and use the BenchmarkTools package to get reliable timings.
There generally shouldn't be a performance penalty for writing loops in Julia, so a 3x increase in runtime for a loop compared to map sounds suspicious.
Here's what I get:
julia> using BenchmarkTools
julia> @btime map(log_it, 2.4:0.001:2.405)
121.426 μs (73 allocations: 14.50 KiB)
julia> function with_loop()
           results = Array{Float64, 2}(undef, 0, 2)
           for i in 2.4:0.001:2.405
               results = cat(results, log_it(i), dims = 1)
           end
           results
       end
julia> @btime with_loop()
173.492 μs (295 allocations: 23.67 KiB)
So the loop is about 50% slower, but that's because you're allocating more.
When you're using map, there's usually a more Julian way of expressing what you're doing: broadcasting. It works for any user-defined function:
julia> @btime log_it.(2.4:0.001:2.405)
121.434 μs (73 allocations: 14.50 KiB)
This is equivalent to your map expression. What you're looking for, I think, is just a way to stack all the resulting arrays; you can use vcat and splatting for that:
julia> @btime vcat(log_it.(2.4:0.001:2.405)...)
122.837 μs (77 allocations: 14.84 KiB)
and just to confirm:
julia> vcat(log_it.(2.4:0.001:2.405)...) == with_loop()
true
So using broadcasting and concatenating gives the same result as your loop at the speed and memory cost of your map solution.
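For what it's worth, an equivalent idiom that avoids splatting a potentially long collection is reduce(vcat, ...); it should produce the same matrix:
julia> reduce(vcat, log_it.(2.4:0.001:2.405)) == with_loop()
true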

using Julia 1.0 findmax equivalent of numpy.argmax

In Julia I want to find the column index of a matrix for the maximum value in each row, with the result being a Vector{Int}. Here is how I am doing it currently (Samples has 7 columns and 10,000 rows):
mxindices = [ i[2] for i in findmax(Samples, dims = 2)[2]][:,1]
This works but feels rather clumsy and verbose. Wondered if there was a better way.
Even simpler: Julia has an argmax function and Julia 1.1+ has an eachrow iterator. Thus:
map(argmax, eachrow(x))
Simple, readable, and fast — it matches the performance of Colin's f3 and f4 in my quick tests.
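Applied to a matrix with the shape from the question (10,000 rows, 7 columns), that is just:
julia> Samples = rand(10000, 7);
julia> mxindices = map(argmax, eachrow(Samples));
julia> mxindices isa Vector{Int}
true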
UPDATE: For the sake of completeness, I've added Matt B.'s excellent solution to the test-suite (and I also forced the transpose in f4 to generate a new matrix rather than a lazy view).
Here are some different approaches (yours is the base-case f0):
f0(x) = [ i[2] for i in findmax(x, dims = 2)[2]][:,1]
f1(x) = getindex.(argmax(x, dims=2), 2)
f2(x) = [ argmax(vec(x[n,:])) for n = 1:size(x,1) ]
f3(x) = [ argmax(vec(view(x, n, :))) for n = 1:size(x,1) ]
f4(x) = begin ; xt = Matrix{Float64}(transpose(x)) ; [ argmax(view(xt, :, k)) for k = 1:size(xt,2) ] ; end
f5(x) = map(argmax, eachrow(x))
Using BenchmarkTools we can examine the efficiency of each (I've set x = rand(100, 200)):
julia> @btime f0($x);
76.846 μs (13 allocations: 4.64 KiB)
julia> @btime f1($x);
76.594 μs (11 allocations: 3.75 KiB)
julia> @btime f2($x);
53.433 μs (103 allocations: 177.48 KiB)
julia> @btime f3($x);
43.477 μs (3 allocations: 944 bytes)
julia> @btime f4($x);
73.435 μs (6 allocations: 157.27 KiB)
julia> @btime f5($x);
43.900 μs (4 allocations: 960 bytes)
So Matt's approach is the fairly obvious winner, as it appears to just be a syntactically cleaner version of my f3 (the two probably compile to something very similar, but I think it would be overkill to check that).
I was hoping f4 might have an edge, despite the temporary created by instantiating the transpose, since it could operate on the columns of a matrix rather than the rows (Julia is column-major, so operations on columns are generally faster because the elements are contiguous in memory). But it doesn't appear to be enough to overcome the disadvantage of the temporary.
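As a rough illustration of the column-major point (a sketch to run yourself; exact timings vary by machine, so none are quoted here):
using BenchmarkTools
y = rand(1000, 1000)
sum_cols(m) = [sum(view(m, :, j)) for j in 1:size(m, 2)]   # contiguous column access
sum_rows(m) = [sum(view(m, i, :)) for i in 1:size(m, 1)]   # strided row access
# @btime sum_cols($y); @btime sum_rows($y)   # the column version is typically faster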
Note, if it is ever the case that you want the full CartesianIndex, that is, both the row and column index of the maximum in each row, then obviously the appropriate solution is just argmax(x, dims=2).
The mapslices function is also a great option for this problem:
julia> Samples = rand(10000, 7);
julia> res = mapslices(row -> findmax(row)[2], Samples, dims=[2])[:,1];
julia> res[1:10]
10-element Array{Int64,1}:
3
1
3
5
4
4
1
4
5
3
Although this is a lot slower than what Colin suggested above, it might be more readable for some people. It is essentially the same code as you started out with, but uses mapslices instead of a comprehension.

Julia: How to convert numeric string with power symbol "^" into floating point number

I'm having trouble converting an array of numeric strings into an array of corresponding floating point numbers. A (hypothetical) string array is:
arr = ["8264.", "7.1050^-7", "9970.", "2.1090^-6", "5.2378^-7"]
I would like to convert it into:
arr = [8264., 1.0940859076672388e-6, 9970., 0.011364243260505457, 9.246079446497013e-6]
As a Julia novice, I have no clue how to make the power operator "^" inside the strings do the right thing in the conversion. I highly appreciate your suggestions!
This function would parse both forms, with or without the exponent.
function foo(s)
    a = parse.(Float64, split(s, '^'))   # split on '^' and parse each piece
    length(a) > 1 && return a[1]^a[2]    # base^exponent form
    a[1]                                 # plain number
end
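Broadcasting it over the arr from the question gives the desired numbers:
julia> foo.(arr)
5-element Array{Float64,1}:
8264.0
1.0940859076672388e-6
9970.0
0.011364243260505457
9.246079446497013e-6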
Somewhat ugly, but gets the job done:
eval.(Meta.parse.(arr))
UPDATE:
Let me elaborate a bit on what this does and why it's maybe not good style.
Meta.parse converts a String into a Julia Expression. The dot indicates that we want to broadcast Meta.parse over every string in arr, that is, apply it to every element. Afterwards, we use eval, again broadcasted, to evaluate the expressions.
This produces the correct result as it literally takes every string as a Julia "command" and hence knows that ^ indicates a power. However, besides being slow, this is potentially insecure as one could inject arbitrary Julia code.
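To see the injection point concretely: any string that happens to be valid Julia is executed, e.g. (a benign, made-up example):
julia> eval(Meta.parse("println(\"this string runs as arbitrary code\")"))
this string runs as arbitrary code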
UPDATE:
A safer and faster way to obtain the desired result is to define a short function that does the conversion:
julia> function mystr2float(s)
           !occursin('^', s) && return parse(Float64, s)
           x = parse.(Float64, split(s, '^'))
           return x[1]^x[2]
       end
mystr2float (generic function with 1 method)
julia> mystr2float.(arr)
5-element Array{Float64,1}:
8264.0
1.0940859076672388e-6
9970.0
0.011364243260505457
9.246079446497013e-6
julia> using BenchmarkTools
julia> @btime eval.(Meta.parse.($arr));
651.000 μs (173 allocations: 9.27 KiB)
julia> @btime mystr2float.($arr);
5.567 μs (18 allocations: 1.02 KiB)
UPDATE:
Performance comparison with @dberge's suggestion above:
julia> @btime mystr2float.($arr);
5.516 μs (18 allocations: 1.02 KiB)
julia> @btime foo.($arr);
5.767 μs (24 allocations: 1.47 KiB)
