Following up on How to add vectors to the columns of some array in Julia?, I would like some analogous clarifications for DataArrays.
Let y = randn(100, 2). I would like to create a matrix x with the lagged values (for lags > 0) of y. I have already written code that seems to work properly (see below). I was wondering whether there is a better way of concatenating a DataArray than the one I have used.
T, n = size(y);
x = @data(zeros(T-lags, 0));
for lag in 1:lags
    x = hcat(x, y[lags-lag+1:end-lag, :]);
end
Unless there is a specific reason to do otherwise, my recommendation would be to start with your DataArray x at the size you want it to be and then fill in the column values you want.
This will give you better performance than recreating the DataArray for each new column, which is what any method for "adding" columns will actually be doing. It's conceivable that the DataArrays package has prettier syntax for it than what you have in your question, but fundamentally that's still what it would be doing.
Thus, in a simplified version of your example, I would recommend:
using DataArrays
N = 5; T = 10;
X = @data(zeros(T, N));
initial_data_cols = 2; ## specify how much of the initial data is filled in
lags = size(X,2) - initial_data_cols
X[:,1:initial_data_cols] = rand(size(X,1), initial_data_cols) ## First two columns of X are fixed in advance
for lag in 1:lags
    X[:,(lag+initial_data_cols)] = rand(size(X,1))
end
If you did find yourself in a situation where you need to add columns to an already created object, you could improve somewhat upon the code that you have by first creating all of the new objects together and then doing a single addition of them to your initial DataArray. E.g.
X = @data(zeros(10, 2))
X = [X rand(10,3)]
For instance, consider the difference in execution time and in the number and size of memory allocations in the two examples below:
n = 10^5; m = 10;
A = @data rand(n,m);
n_newcol = 10;
function t1(A::AbstractArray, n_newcol)  # AbstractArray so a DataArray argument dispatches too
    n = size(A,1)
    for idx = 1:n_newcol
        A = hcat(A, zeros(n))
    end
    return A
end
function t2(A::AbstractArray, n_newcol)
    n = size(A,1)
    [A zeros(n, n_newcol)]
end
# Stats after running each function once to compile
@time r1 = t1(A, n_newcol); ## 0.154082 seconds (124 allocations: 125.888 MB, 75.33% gc time)
@time r2 = t2(A, n_newcol); ## 0.007981 seconds (9 allocations: 22.889 MB, 31.73% gc time)
I have Julia code, version 1.2, which performs a lot of operations on a 10000 x 10000 Array. Due to an OutOfMemory() error when I run the code, I'm exploring other options to run it, such as memory-mapping. Concerning the use of Mmap.mmap, I'm a bit confused about how to use the Array that I map to my disk, because the documentation at https://docs.julialang.org/en/v1/stdlib/Mmap/index.html gives few explanations. Here is the beginning of my code:
using Distances
using LinearAlgebra
using Distributions
using Mmap
data=Float32.(rand(10000,15))
Eucldist=pairwise(Euclidean(),data,dims=1)
D=maximum(Eucldist.^2)
sigma2hat=mean(((Eucldist.^2)./D)[tril!(trues(size((Eucldist.^2)./D)),-1)])
L=exp.(-(Eucldist.^2/D)/(2*sigma2hat))
L is the 10000 x 10000 Array with which I want to work, so I mapped it to my disk with
s = open("mmap.bin", "w+")
write(s, size(L,1))
write(s, size(L,2))
write(s, L)
close(s)
What am I supposed to do after that? The next step is to perform K = eigen(L) and apply other commands to K. How should I do that? With K = eigen(L) or K = eigen(s)? What is the role of the object s and when does it get involved? Moreover, I don't understand why I have to use Mmap.sync! and when: after each line that follows eigen(L)? At the end of the code? How can I be sure that I'm using disk space instead of RAM? I would like some guidance about memory-mapping, please. Thank you!
If memory usage is a concern, it is often best to reassign your very large arrays to a small placeholder of the same type, so that the memory can be garbage collected, assuming you are done with those intermediate matrices. After that, you just call Mmap.mmap() on your stored data file, with the type and dimensions of the data as the second and third arguments to mmap, and then assign the function's return value to your variable, in this case L, resulting in L being bound to the file contents:
using Distances
using LinearAlgebra
using Distributions
using Mmap
function testmmap()
    data = Float32.(rand(10000, 15))
    Eucldist = pairwise(Euclidean(), data, dims=1)
    D = maximum(Eucldist.^2)
    sigma2hat = mean(((Eucldist.^2) ./ D)[tril!(trues(size((Eucldist.^2) ./ D)), -1)])
    L = exp.(-(Eucldist.^2 / D) / (2 * sigma2hat))
    s = open("./tmp/mmap.bin", "w+")
    write(s, size(L,1))
    write(s, size(L,2))
    write(s, L)
    close(s)
    # deref and gc collect
    Eucldist = data = L = zeros(Float32, 2, 2)
    GC.gc()
    s = open("./tmp/mmap.bin", "r+") # allow read and write
    m = read(s, Int)
    n = read(s, Int)
    L = Mmap.mmap(s, Matrix{Float32}, (m, n)) # now L references the file contents
    K = eigen(L)
    K
end
testmmap()
@time testmmap() # 109.657995 seconds (17.48 k allocations: 4.673 GiB, 0.73% gc time)
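As for Mmap.sync!: it only matters if you write into the mapped array and want those changes flushed to the file at a known point (the operating system will eventually write dirty pages back on its own). A minimal sketch, assuming you keep modifying the mapped L from above:
L[1, 1] = 0.0f0   # writes go into the file-backed array, not an ordinary in-RAM copy
Mmap.sync!(L)     # force the modified pages to be written back to mmap.bin now
Because L is backed by the file, only the pages you actually touch get paged into RAM, which is how the mapping keeps you under the memory limit.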
I have the following:
include("as_mod.jl")
solvetimes = 50:200
timevector = Array{Float64}(undef,length(solvetimes))
for i in solvetimes
    global T
    T = i
    include("as_dat_large.jl")
    m, x, z = build_model(true,true)
    setsolver(m, GurobiSolver(MIPGap = 2e-2, TimeLimit = 3600))
    solve(m)
    timevector[i-49] = getsolvetime(m)
end
plot(solvetimes,log.(timevector),
title = "solvetimes vs T", xlabel = "T", ylabel = "log(t)")
And this works great as long as my solvetimes vector is incremented by only 1. However, I'm interested in a 30-increment, and it obviously does not work then, since my timevector goes out of bounds. Is there any way of solving this issue? I read about and attempted to use the push! function, but to no avail.
I apologize if my question is not good, but I don't see how to improve it. The question is essentially about for loops where the index does not start at 1 and is not incremented by 1 up to an upper bound, but instead has a non-unit increment and a start different from zero or one, if that makes sense.
The : syntax in 50:200 or 50:30:200 creates a range object in Julia. These range objects are not only iterable but also implement getindex, which means you can access the steps in the range with a[index] syntax as if it were an array.
julia> solvetimes = 50:30:200 # 50, 80, 110, 140, ...
50:30:200
julia> solvetimes[3]
110
You can solve your problem in several ways.
First, you can introduce an itercount variable to count the number of iterations and know at which index of timevector you will put the solve-time.
solvetimes = 50:30:200 # increment by 30
timevector = Vector{Float64}(undef,length(solvetimes))
itercount = 1
for i in solvetimes
    global itercount
    ...
    timevector[itercount] = getsolvetime(m)
    itercount += 1
end
Another way would be to create an empty timevector and push! into it.
solvetimes = 50:30:200 # increment by 30
timevector = Float64[] # an empty Float64 vector
for i in solvetimes
    ...
    push!(timevector, getsolvetime(m)) # push the value `getsolvetime(m)` into `timevector`
end
The push! operation may require Julia to allocate memory and copy data to accommodate the growing array, so it might not be maximally efficient, although that does not really matter for your problem.
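If the reallocation ever did matter, one option (just a sketch, not needed for your problem) is to reserve capacity up front with sizehint!, so push! does not have to grow the vector repeatedly:
timevector = Float64[]
sizehint!(timevector, length(solvetimes)) # reserve room for one entry per solve time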
Another way would be to iterate from 1 to length of solvetimes. Your loop control variable is still incremented one-by-one but now it represents the index in solvetimes rather than the time point.
solvetimes = 50:30:200 # increment by 30
len = length(solvetimes)
timevector = Vector{Float64}(undef, len)
for i in 1:len
    global T
    T = solvetimes[i]
    ...
    timevector[i] = getsolvetime(m)
end
With these modifications, the kth value in timevector, timevector[k], stands for the solve time of solvetimes[k].
You might also find other ways to solve the issue, like using Dicts etc.
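For completeness, enumerate combines the last two approaches by giving you both the running index and the element at once; a sketch along the lines of the loop in the question (the model-building lines are elided as above):
solvetimes = 50:30:200
timevector = Vector{Float64}(undef, length(solvetimes))
for (i, t) in enumerate(solvetimes)
    global T
    T = t
    ...
    timevector[i] = getsolvetime(m)
end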
I need to identify the rows (/columns) that have stored (non-zero) values in a large sparse Boolean Matrix. I want to use this to 1. slice (actually view) the Matrix by those rows/columns; and 2. slice (/view) vectors and matrices that have the same dimensions as the margins of the Matrix. I.e. the result should probably be a Vector of indices/Bools or (preferably) an iterator.
I've tried the obvious:
a = sprand(10000, 10000, 0.01)
cols = unique(a.colptr)
rows = unique(a.rowvals)
but each of these takes about 20ms on my machine, probably because they allocate about 1MB (at least they allocate cols and rows). This is inside a performance-critical function, so I'd like the code to be optimized. Base seems to have an nzrange iterator for sparse matrices, but it is not easy for me to see how to apply it to my case.
Is there a suggested way of doing this?
Second question: I'd need to also perform this operation on views of my sparse Matrix - would that be something like x = view(a,:,:); cols = unique(x.parent.colptr[x.indices[:,2]]) or is there specialized functionality for this? Views of sparse matrices appear to be tricky (cf https://discourse.julialang.org/t/slow-arithmetic-on-views-of-sparse-matrices/3644 – not a cross-post)
Thanks a lot!
Regarding getting the non-zero rows and columns of a sparse matrix, the following functions should be pretty efficient:
nzcols(a::SparseMatrixCSC) = collect(i
    for i in 1:a.n if a.colptr[i] < a.colptr[i+1])

function nzrows(a::SparseMatrixCSC)
    active = falses(a.m)
    for r in a.rowval
        active[r] = true
    end
    return find(active)
end
For a 10_000×10_000 matrix with 0.1 density, these take 0.2ms and 2.9ms for cols and rows, respectively. They should also be quicker than the method in the question (which has a correctness issue as well).
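As a minimal usage sketch for the slicing/viewing goal in the question (the variable names here are only for illustration), the returned index vectors can be passed straight to view:
a = sprand(10_000, 10_000, 0.01)
rows = nzrows(a)          # indices of rows with at least one stored entry
cols = nzcols(a)          # indices of columns with at least one stored entry
av = view(a, rows, cols)  # view of `a` restricted to those rows/columns
v = rand(size(a, 1))
vr = view(v, rows)        # matching view into a vector aligned with the rows of `a`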
Regarding views of sparse matrices, a quick solution would be to turn the view into a sparse matrix (e.g. using b = sparse(view(a,100:199,100:199))) and use the functions above. In code:
nzcols(b::SubArray{T,2,P}) where {T,P<:AbstractSparseArray} = nzcols(sparse(b))
nzrows(b::SubArray{T,2,P}) where {T,P<:AbstractSparseArray} = nzrows(sparse(b))
A better solution would be to customize the functions according to view. For example, when the view uses UnitRanges for both rows and columns:
# utility predicate returning true if element of sorted v in range r
inrange(v,r) = searchsortedlast(v,last(r))>=searchsortedfirst(v,first(r))
function nzcols(b::SubArray{T,2,P,Tuple{UnitRange{Int64},UnitRange{Int64}}}
                ) where {T,P<:SparseMatrixCSC}
    return collect(i+1-start(b.indexes[2])
        for i in b.indexes[2]
        if b.parent.colptr[i]<b.parent.colptr[i+1] &&
           inrange(b.parent.rowval[nzrange(b.parent,i)],b.indexes[1]))
end
function nzrows(b::SubArray{T,2,P,Tuple{UnitRange{Int64},UnitRange{Int64}}}
                ) where {T,P<:SparseMatrixCSC}
    active = falses(length(b.indexes[1]))
    for c in b.indexes[2]
        for r in nzrange(b.parent,c)
            if b.parent.rowval[r] in b.indexes[1]
                active[b.parent.rowval[r]+1-start(b.indexes[1])] = true
            end
        end
    end
    return find(active)
end
These work faster than the versions for the full matrices (for a 100×100 submatrix of the above 10,000×10,000 matrix, cols and rows take 16μs and 12μs, respectively, on my machine, but these are unstable results).
A proper benchmark would use fixed matrices (or at least fix the random seed). I'll edit this line with such a benchmark if I do it.
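In the meantime, here is one possible shape for such a harness (only a sketch, assuming the BenchmarkTools package is available; the numbers above were not produced with it):
using BenchmarkTools
srand(1)                         # fix the random seed
a = sprand(10_000, 10_000, 0.1)
b = view(a, 100:199, 100:199)
@btime nzcols($a); @btime nzrows($a)
@btime nzcols($b); @btime nzrows($b)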
In case the indices are not ranges, the fallback of converting to a sparse matrix works, but here are versions for indices which are Vectors. If the indices are mixed, yet another set of versions needs to be made. Quite repetitive, but this is the strength of Julia: once the versions are written, the code will choose the optimized method correctly based on the types at the call site without too much effort.
function sortedintersecting(v1, v2)
    i, j = start(v1), start(v2)
    while i <= length(v1) && j <= length(v2)
        if v1[i] == v2[j] return true
        elseif v1[i] > v2[j] j += 1
        else i += 1
        end
    end
    return false
end
function nzcols(b::SubArray{T,2,P,Tuple{Vector{Int64},Vector{Int64}}}
                ) where {T,P<:SparseMatrixCSC}
    brows = sort(unique(b.indexes[1]))
    return [k
        for (k,i) in enumerate(b.indexes[2])
        if b.parent.colptr[i]<b.parent.colptr[i+1] &&
           sortedintersecting(brows,b.parent.rowval[nzrange(b.parent,i)])]
end
function nzrows(b::SubArray{T,2,P,Tuple{Vector{Int64},Vector{Int64}}}
                ) where {T,P<:SparseMatrixCSC}
    active = falses(length(b.indexes[1]))
    for c in b.indexes[2]
        active[findin(b.indexes[1],b.parent.rowval[nzrange(b.parent,c)])] = true
    end
    return find(active)
end
-- ADDENDUM --
Since it was noted that nzrows for Vector{Int} indices is a bit slow, here is an attempt to improve its speed by replacing findin with a version exploiting sortedness:
function findin2(inds,v,w)
    i, j = start(v), start(w)
    res = Vector{Int}()
    while i<=length(v) && j<=length(w)
        if v[i]==w[j]
            push!(res,inds[i])
            i += 1
        elseif (v[i]<w[j]) i += 1
        else j += 1
        end
    end
    return res
end
function nzrows(b::SubArray{T,2,P,Tuple{Vector{Int64},Vector{Int64}}}
                ) where {T,P<:SparseMatrixCSC}
    active = falses(length(b.indexes[1]))
    inds = sortperm(b.indexes[1])
    brows = (b.indexes[1])[inds]
    for c in b.indexes[2]
        active[findin2(inds,brows,b.parent.rowval[nzrange(b.parent,c)])] = true
    end
    return find(active)
end
I want to find the key corresponding to the min or max value of a dictionary in julia. In Python I would to the following:
my_dict = {1:20, 2:10}
min(my_dict, key=my_dict.get)
Which would return the key 2.
How can I do the same in Julia?
my_dict = Dict(1=>20, 2=>10)
minimum(my_dict)
The latter returns 1=>20 instead of 2=>10 or 2.
You could use reduce like this, which will return the key of the first smallest value in d:
reduce((x, y) -> d[x] ≤ d[y] ? x : y, keys(d))
This only works for non-empty Dicts, though. (But the notion of the “key of the minimal value of no values” does not really make sense, so that case should usually be handled separately anyway.)
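For the dictionary from the question this gives:
d = Dict(1 => 20, 2 => 10)
reduce((x, y) -> d[x] ≤ d[y] ? x : y, keys(d)) # returns 2, the key of the smallest value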
Edit regarding efficiency.
Consider these definitions (none of which handle empty collections)...
m1(d) = reduce((x, y) -> d[x] ≤ d[y] ? x : y, keys(d))
m2(d) = collect(keys(d))[indmin(collect(values(d)))]
function m3(d)
    minindex(x, y) = d[x] ≤ d[y] ? x : y
    reduce(minindex, keys(d))
end
function m4(d)
    minkey, minvalue = next(d, start(d))[1]
    for (key, value) in d
        if value < minvalue
            minkey = key
            minvalue = value
        end
    end
    minkey
end
...along with this code:
function benchmark(n)
    d = Dict{Int, Int}(1 => 1)
    m1(d); m2(d); m3(d); m4(d)  # run each once to compile
    while length(d) < n
        setindex!(d, rand(-n:n), rand(-n:n))
    end
    @time m1(d)
    @time m2(d)
    @time m3(d)
    @time m4(d)
end
Calling benchmark(10000000) will print something like this:
1.455388 seconds (30.00 M allocations: 457.748 MB, 4.30% gc time)
0.380472 seconds (6 allocations: 152.588 MB, 0.21% gc time)
0.982006 seconds (10.00 M allocations: 152.581 MB, 0.49% gc time)
0.204604 seconds
From this we can see that m2 (from user3580870's answer) is indeed faster than my original solution m1 by a factor of around 3 to 4, and also uses less memory. This is apparently due to the function call overhead, but also to the fact that the λ expression in m1 is not optimized very well. We can alleviate the second problem by defining a helper function as in m3, which is better than m1 but not as good as m2.
However, m2 still allocates O(n) memory, which can be avoided: If you really need the efficiency, you should use an explicit loop like in m4, which allocates almost no memory and is also faster.
Another option is:
collect(keys(d))[indmin(collect(values(d)))]
It depends on properties of the keys and values iterators which are not guaranteed, but which do in fact hold for Dicts (and are guaranteed for OrderedDicts). Like the reduce answer, it requires d to be non-empty.
Why mention this when the reduce answer pretty much nails it? Because it is 3 to 4 times faster (at least on my computer)!
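The property being relied on is that keys(d) and values(d) iterate in the same order; a quick sanity check for a given dictionary (an illustration, not a guarantee):
d = Dict(1 => 20, 2 => 10)
all(d[k] == v for (k, v) in zip(keys(d), values(d))) # true when the two iterators are aligned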
Here is another way to find the minimum together with its key and value:
my_dict = Dict(1 => 20, 2 => 10)
findmin(my_dict) gives the output below:
(10, 2)
To get only the key, use
findmin(my_dict)[2]
To get only the value, use
findmin(my_dict)[1]
Hope this helps.
If you only need the minimum value, you can use
minimum(values(my_dict))
If you need the key as well, I don't know a built-in function to do so, but you can easily write it yourself for numeric keys and values:
function find_min_key{K,V}(d::Dict{K,V})
    minkey = typemax(K)
    minval = typemax(V)
    for key in keys(d)
        if d[key] < minval
            minkey = key
            minval = d[key]
        end
    end
    minkey => minval
end
my_dict = Dict(1=>20, 2=>10)
find_min_key(my_dict)
findmax(dict)[2]
findmin(dict)[2]
Should also return the key corresponding to the max and min value(s). Here [2] is the index of the key in the returned tuple.
I'm using Julia 0.3.4
I'm trying to write LU-decomposition using Gaussian elimination. So I have to swap rows. And here's my problem:
If I'm using a,b = b,a I get an error,
but if I'm using:
function swapRows(row1, row2)
temp = row1
row1 = row2
row2 = temp
end
then everything works just fine.
Am I doing something wrong or it's a bug?
Here's my source code:
function lu_t(A::Matrix)
    # input value: (A), where A is a matrix
    # return value: (L,U), where L,U are matrices
    function swapRows(row1, row2)
        temp = row1
        row1 = row2
        row2 = temp
        return null
    end
    if size(A)[1] != size(A)[2]
        throw(DimException())
    end
    n = size(A)[1]                         # matrix dimension
    U = copy(A)                            # upper triangular matrix
    L = eye(n)                             # lower triangular matrix
    for k = 1:n-1                          # direct Gaussian elimination for each column `k`
        (val,id) = findmax(U[k:end,k])     # find max pivot element and its row `id`
        if val == 0                        # check matrix for singularity
            throw(SingularException())
        end
        swapRows(U[k,k:end],U[id,k:end])   # swap row `k` and `id`
        # U[k,k:end],U[id,k:end] = U[id,k:end],U[k,k:end] - error
        for i = k+1:n                      # for each row `i` > `k`
            μ = U[i,k] / U[k,k]            # find elimination coefficient `μ`
            L[i,k] = μ                     # save to an appropriate position in lower triangular matrix `L`
            for j = k:n                    # update each value of the row `i`
                U[i,j] = U[i,j] - μ⋅U[k,j]
            end
        end
    end
    return (L,U)
end
###### main code ######
A = rand(4,4)
@time (L,U) = lu_t(A)
@test_approx_eq(L*U, A)
The swapRows function is a no-op and has no effect whatsoever – all it does is swap around some local variable names. See various discussions of the difference between assignment and mutation:
https://groups.google.com/d/msg/julia-users/oSW5hH8vxAo/llAHRvvFVhMJ
http://julia.readthedocs.org/en/latest/manual/faq/#i-passed-an-argument-x-to-a-function-modified-it-inside-that-function-but-on-the-outside-the-variable-x-is-still-unchanged-why
http://julia.readthedocs.org/en/latest/manual/faq/#why-does-x-y-allocate-memory-when-x-and-y-are-arrays
The constant null doesn't mean what you think it does – in Julia v0.3 it's a function that computes the null space of a linear transformation; in Julia v0.4 it still means this but has been deprecated and renamed to nullspace. The "uninteresting" value in Julia is called nothing.
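To make the rebinding-versus-mutation point concrete, here is a small self-contained illustration (the helper names are made up for this example):
function rebind(row)
    row = [0, 0, 0]     # rebinds the local name `row`; the caller's array is untouched
    return nothing
end
function mutate!(row)
    row[:] = [0, 0, 0]  # writes into the array the caller passed in
    return nothing
end
x = [1, 2, 3]
rebind(x)   # x is still [1, 2, 3]
mutate!(x)  # x is now [0, 0, 0]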
I'm not sure what's wrong with your commented out row swapping code, but this general approach does work:
julia> X = rand(3,4)
3x4 Array{Float64,2}:
0.149066 0.706264 0.983477 0.203822
0.478816 0.0901912 0.810107 0.675179
0.73195 0.756805 0.345936 0.821917
julia> X[1,:], X[2,:] = X[2,:], X[1,:]
(
1x4 Array{Float64,2}:
0.478816 0.0901912 0.810107 0.675179,
1x4 Array{Float64,2}:
0.149066 0.706264 0.983477 0.203822)
julia> X
3x4 Array{Float64,2}:
0.478816 0.0901912 0.810107 0.675179
0.149066 0.706264 0.983477 0.203822
0.73195 0.756805 0.345936 0.821917
Since this creates a pair of temporary arrays that we can't yet eliminate the allocation of, this isn't the most efficient approach. If you want the most efficient code here, looping over the two rows and swapping pairs of scalar values will be faster:
function swapRows!(X, i, j)
    for k = 1:size(X,2)
        X[i,k], X[j,k] = X[j,k], X[i,k]
    end
end
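For example, a minimal usage sketch:
X = rand(3, 4)
swapRows!(X, 1, 2) # rows 1 and 2 of X are exchanged in place, with no temporary rows allocated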
Note that it is conventional in Julia to name functions that mutate one or more of their arguments with a trailing !. Currently, closures (i.e. inner functions) have some performance issues, so you'll want such a helper function to be defined at the top-level scope instead of inside of another function the way you've got it.
Finally, I assume this is an exercise since Julia ships with carefully tuned generic (i.e. it works for arbitrary numeric types) LU decomposition: http://docs.julialang.org/en/release-0.3/stdlib/linalg/#Base.lu.
It's quite simple:
julia> A = rand(3,4)
3×4 Array{Float64,2}:
0.241426 0.283391 0.201864 0.116797
0.457109 0.138233 0.346372 0.458742
0.0940065 0.358259 0.260923 0.578814
julia> A[[1,2],:] = A[[2,1],:]
2×4 Array{Float64,2}:
0.457109 0.138233 0.346372 0.458742
0.241426 0.283391 0.201864 0.116797
julia> A
3×4 Array{Float64,2}:
0.457109 0.138233 0.346372 0.458742
0.241426 0.283391 0.201864 0.116797
0.0940065 0.358259 0.260923 0.578814