GaussianMixtures.jl - Equivalent of sklearn GaussianMixture.predict

How can I do the equivalent of the following scikit-learn code using GaussianMixtures.jl?
import numpy as np
from sklearn.mixture import GaussianMixture
X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])
gm = GaussianMixture(n_components=2, random_state=0).fit(X)
gm.predict([[0, 0], [12, 3]]) #prints "array([1, 0])"
Here's how I'm currently handling this requirement:
using GaussianMixtures
using Random
Random.seed!(23);
function _cluster_predict(gmm::GMM, X::Matrix)
    llpg_X = llpg(gmm, X)
    return map(argmin, eachrow(llpg_X))
end
X = [1.0 2; 1 4; 1 0; 10 2; 10 4; 10 0] + rand(Float64, (6, 2))
gm = GMM(2, X)
_cluster_predict(gm, [0.0 0.0; 12.0 3.0]) #returns [2,1]
Is there a better approach?
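One possibly tidier route, assuming GaussianMixtures.jl's gmmposterior (which the package documents as returning the per-point posterior probabilities alongside the log-likelihoods): take the argmax of the posterior in each row, which mirrors what sklearn's predict does. A minimal sketch (cluster_predict is an illustrative name):
using GaussianMixtures
function cluster_predict(gmm::GMM, X::Matrix)
    p, _ = gmmposterior(gmm, X)    # p[i, j] = posterior probability of component j for point i
    return map(argmax, eachrow(p)) # most likely component per point, 1-based
end
Note that component numbering is arbitrary (there is no random_state correspondence with sklearn), so labels may come out permuted relative to the Python output.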

Related

Julia JuMP: define a multidimensional variable when a dimension depends on another dimension

While defining linear programming variables, I have to consider
index_i = 1:3
index_j = J = [1:2, 1:5, 1:3]
I want to define a variable e indexed with both i and j, such that i is in {1,2,3} and j is in {1,2} if i is 1, {1,2,3,4,5} if i is 2, and {1,2,3} if i is 3.
I tried several syntaxes, but none of them worked. Any suggestions?
I wonder why this is not working:
@variable(m, e[i for i in I, j for j in J[i]])
I m expecting a result like this
e[1,1]
e[1,2]
e[1,3]
e[2,1]
e[2,2]
e[2,3]
e[2,4]
e[2,5]
e[3,1]
e[3,2]
e[3,3]
Assuming I = 1:3 and J = [1:2, 1:5, 1:3], you can do:
julia> @variable(m, e[i in I, j in J[i]])
JuMP.Containers.SparseAxisArray{VariableRef, 2, Tuple{Int64, Int64}} with 10 entries:
[1, 1] = e[1,1]
[1, 2] = e[1,2]
[2, 1] = e[2,1]
[2, 2] = e[2,2]
[2, 3] = e[2,3]
[2, 4] = e[2,4]
[2, 5] = e[2,5]
[3, 1] = e[3,1]
[3, 2] = e[3,2]
[3, 3] = e[3,3]
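For reference, a minimal self-contained version of the above (the snippet assumes a model m and the two index sets already exist):
using JuMP
m = Model()
I = 1:3
J = [1:2, 1:5, 1:3]
# j's index set depends on i, so JuMP stores e as a SparseAxisArray
# rather than a dense container.
@variable(m, e[i in I, j in J[i]])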

How to solve a linear system where both inputs are sparse?

Is there any equivalent to scipy.sparse.linalg.spsolve in Julia? Here's the description of the function in Python:
In [59]: ?spsolve
Signature: spsolve(A, b, permc_spec=None, use_umfpack=True)
Docstring:
Solve the sparse linear system Ax=b, where b may be a vector or a matrix.
I couldn't find this in Julia's LinearAlgebra and SparseArrays. Is there anything I missed, or any alternatives?
Thanks
EDIT
For example:
In [71]: A = sparse.csc_matrix([[3, 2, 0], [1, -1, 0], [0, 5, 1]], dtype=float)
In [72]: B = sparse.csc_matrix([[2, 0], [-1, 0], [2, 0]], dtype=float)
In [73]: spsolve(A, B).data
Out[73]: array([ 1., -3.])
In [74]: spsolve(A, B).toarray()
Out[74]:
array([[ 0., 0.],
[ 1., 0.],
[-3., 0.]])
In Julia, with the \ operator:
julia> A = Float64.(sparse([3 2 0; 1 -1 0; 0 5 1]))
3×3 SparseMatrixCSC{Float64,Int64} with 6 stored entries:
[1, 1] = 3.0
[2, 1] = 1.0
[1, 2] = 2.0
[2, 2] = -1.0
[3, 2] = 5.0
[3, 3] = 1.0
julia> B = Float64.(sparse([2 0; -1 0; 2 0]))
3×2 SparseMatrixCSC{Float64,Int64} with 3 stored entries:
[1, 1] = 2.0
[2, 1] = -1.0
[3, 1] = 2.0
julia> A \ B
ERROR: MethodError: no method matching ldiv!(::SuiteSparse.UMFPACK.UmfpackLU{Float64,Int64}, ::SparseMatrixCSC{Float64,Int64})
Closest candidates are:
ldiv!(::Number, ::AbstractArray) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.3/LinearAlgebra/src/generic.jl:236
ldiv!(::SymTridiagonal, ::Union{AbstractArray{T,1}, AbstractArray{T,2}} where T; shift) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.3/LinearAlgebra/src/tridiag.jl:208
ldiv!(::LU{T,Tridiagonal{T,V}}, ::Union{AbstractArray{T,1}, AbstractArray{T,2}} where T) where {T, V} at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.3/LinearAlgebra/src/lu.jl:588
...
Stacktrace:
[1] \(::SuiteSparse.UMFPACK.UmfpackLU{Float64,Int64}, ::SparseMatrixCSC{Float64,Int64}) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.3/LinearAlgebra/src/factorization.jl:99
[2] \(::SparseMatrixCSC{Float64,Int64}, ::SparseMatrixCSC{Float64,Int64}) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.3/SparseArrays/src/linalg.jl:1430
[3] top-level scope at REPL[81]:1
Yes, it's the \ function.
julia> using SparseArrays, LinearAlgebra
julia> A = sprand(Float64, 20, 20, 0.01) + I # just adding the identity matrix so A is non-singular.
julia> typeof(A)
SparseMatrixCSC{Float64,Int64}
julia> v = rand(20);
julia> A \ v
20-element Array{Float64,1}:
0.5930744938331236
0.8726507741810358
0.6846427450637211
0.3135234897986168
0.8366321472466727
0.11338490488638651
0.3679058951515244
0.4931583108292607
0.3057947282994271
0.27481281228206955
0.888942874188458
0.905356044150361
0.17546911165214607
0.13636389619386557
0.9607381212005248
0.2518153541168824
0.6237205353883974
0.6588050295549153
0.14748809413104935
0.9806131247053784
Edit in response to question edit:
If you want v here to instead be a sparse matrix B, then we can proceed by using the QR decomposition of B (note that cases where B is truly sparse are rare):
function myspsolve(A, B)
    # Factor B = Q * R; pad R with zero rows to match Q's column count,
    # so that (A \ Q) * R == A \ (Q * R) == A \ B.
    qrB = qr(B)
    Q, R = qrB.Q, qrB.R
    R = [R; zeros(size(Q, 2) - size(R, 1), size(R, 2))]
    (A \ Q) * R
end
now:
julia> A = Float64.(sparse([3 2 0; 1 -1 0; 0 5 1]))
3×3 SparseMatrixCSC{Float64,Int64} with 6 stored entries:
[1, 1] = 3.0
[2, 1] = 1.0
[1, 2] = 2.0
[2, 2] = -1.0
[3, 2] = 5.0
[3, 3] = 1.0
julia> B = Float64.(sparse([2 0; -1 0; 2 0]))
3×2 SparseMatrixCSC{Float64,Int64} with 3 stored entries:
[1, 1] = 2.0
[2, 1] = -1.0
[3, 1] = 2.0
julia> myspsolve(A, B)
3×2 Array{Float64,2}:
0.0 0.0
1.0 0.0
-3.0 0.0
and we can test to make sure we did it right:
julia> myspsolve(A, B) ≈ A \ collect(B)
true
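If B has only a few columns, a simpler (if less clever) alternative is to factorize A once and solve column by column against densified right-hand sides; since x = A \ b is generally dense even for sparse b, little is lost. A sketch (colwise_spsolve is an illustrative name):
using SparseArrays, LinearAlgebra
function colwise_spsolve(A::SparseMatrixCSC, B::SparseMatrixCSC)
    F = lu(A)                          # sparse LU (UMFPACK); factorize A once
    X = zeros(size(A, 2), size(B, 2))
    for j in axes(B, 2)
        X[:, j] = F \ Vector(B[:, j])  # densify one column at a time
    end
    return X
end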

Pushing arrays of one variable into one array

I have this piece of code:
for i = 1:10
    v = [2i, i]
    @show v
end
and I get this result:
v = [2, 1]
v = [4, 2]
v = [6, 3]
v = [8, 4]
v = [10, 5]
v = [12, 6]
v = [14, 7]
v = [16, 8]
v = [18, 9]
v = [20, 10]
Now what I want to do is collect all these outputs into one array of arrays, something like:
[[2,1],[4,2],[6,3]]
I don't really know how to do it; I've tried several solutions that didn't work.
You can use array comprehensions for this:
julia> x = [[2i,i] for i in 1:10]
10-element Array{Array{Int64,1},1}:
[2, 1]
[4, 2]
[6, 3]
[8, 4]
[10, 5]
[12, 6]
[14, 7]
[16, 8]
[18, 9]
[20, 10]
or go with the manual route of constructing an empty initial array, and pushing the inner arrays into it one-by-one:
julia> y = []
0-element Array{Any,1}
julia> for i in 1:10
           push!(y, [2i, i])
       end
julia> y
10-element Array{Any,1}:
[2, 1]
[4, 2]
[6, 3]
[8, 4]
[10, 5]
[12, 6]
[14, 7]
[16, 8]
[18, 9]
[20, 10]
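If the element type is known up front, a small refinement of the manual route is to give the empty array a concrete element type, so you get an Array{Array{Int64,1},1} instead of Array{Any,1}:
# Typed container: avoids the Any element type above.
y = Vector{Vector{Int}}()
for i in 1:10
    push!(y, [2i, i])
end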

Graph convolutions in Keras

How can we implement graph convolutions in Keras?
Ideally in the form of a layer accepting two inputs: the set of nodes (as a time sequence) and, with the same time-dimension length, a set of integer indexes (into the time dimension) of each node's neighbours.
If we could gather items into the style and shape of Conv layers, we could use normal convolutions.
The gather can be done using the following Keras layer, which uses TensorFlow's gather_nd.
class GatherFromIndices(Layer):
    """
    To have a graph convolution (over a fixed/fixed degree kernel) from a given sequence of nodes, we need to gather
    the data of each node's neighbours before running a simple Conv1D/Conv2D,
    that would be effectively a defined convolution (or even TimeDistributed(Dense()) can be used - only
    based on data format we would output).
    This layer should do exactly that.
    Does not support non-integer values; values less than 0 are automatically masked.
    """
    def __init__(self, mask_value=0, include_self=True, flatten_indices_features=False, **kwargs):
        Layer.__init__(self, **kwargs)
        self.mask_value = mask_value
        self.include_self = include_self
        self.flatten_indices_features = flatten_indices_features

    def get_config(self):
        config = {'mask_value': self.mask_value,
                  'include_self': self.include_self,
                  'flatten_indices_features': self.flatten_indices_features,
                  }
        base_config = super(GatherFromIndices, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

    #def build(self, input_shape):
    #    self.built = True

    def compute_output_shape(self, input_shape):
        inp_shape, inds_shape = input_shape
        indices = inds_shape[-1]
        if self.include_self:
            indices += 1
        features = inp_shape[-1]
        if self.flatten_indices_features:
            return tuple(list(inds_shape[:-1]) + [indices * features])
        else:
            return tuple(list(inds_shape[:-1]) + [indices, features])

    def call(self, inputs, training=None):
        inp, inds = inputs
        # assumes input in the shape of (inp=[...,batches, sequence_len, features],
        #   inds=[...,batches, sequence_ind_len, neighbours] ... indexing into inp)
        # for output we want to get [...,batches, sequence_ind_len, indices, features]
        assert_shapes = tf.Assert(tf.reduce_all(tf.equal(tf.shape(inp)[:-2], tf.shape(inds)[:-2])), [inp])
        assert_positive_ins_shape = tf.Assert(tf.reduce_all(tf.greater(tf.shape(inds), 0)), [inds])
        # the shapes need to be the same (with the exception of the last dimension)
        with tf.control_dependencies([assert_shapes, assert_positive_ins_shape]):
            inp_shape = tf.shape(inp)
            inds_shape = tf.shape(inds)
            features_dim = -1
            # ^^ todo for future variability of the last dimension, because maybe it can be made to take
            # not the last dimension as features, but something else.
            inp_p = tf.reshape(inp, [-1, inp_shape[features_dim]])
            ins_p = tf.reshape(inds, [-1, inds_shape[features_dim]])
            # we have lost the batch dimension by reshaping, so we save it by adding the size to the respective
            # indexes; we do it because we use gather_nd as non-batched (so we do not need to provide batch indices)
            resized_range = tf.range(tf.shape(ins_p)[0])
            different_seqs_ids_float = tf.scalar_mul(1.0 / tf.to_float(inds_shape[-2]), tf.to_float(resized_range))
            different_seqs_ids = tf.to_int32(tf.floor(different_seqs_ids_float))
            different_seqs_ids_packed = tf.scalar_mul(inp_shape[-2], different_seqs_ids)
            thseq = tf.expand_dims(different_seqs_ids_packed, -1)
            # in case there are negative indices, make them all equal to -1
            # and add the masking value to the end of inp_p - that way, everything that should be masked
            # will get the masking value as features.
            mask = tf.greater_equal(ins_p, 0)  # extract where the minuses are, because they will all default to the default value
            # .. before the mod operation, if provided greater id numbers, to wrap correctly small sequences
            offset_ins_p = tf.mod(ins_p, inp_shape[-2]) + thseq  # broadcast to ins_p
            minus_1 = tf.scalar_mul(tf.shape(inp_p)[0], tf.ones_like(mask, dtype=tf.int32))
            '''
            On GPU, if we use index = -1 anywhere it would throw a warning:
            OP_REQUIRES failed at gather_nd_op.cc:50 : Invalid argument:
            flat indices = [-1] does not index into param.
            Which is a warning, that there are -1s. We are using that as a feature and know about that.
            '''
            offset_ins_p = tf.where(mask, offset_ins_p, minus_1)
            # also possible to do something like tf.multiply(offset_ins_p, mask) + tf.scalar_mul(-1, mask)
            mask_value_last = tf.zeros((inp_shape[-1],))
            if self.mask_value != 0:
                mask_value_last += tf.constant(self.mask_value)  # broadcasting if needed
            inp_p = tf.concat([inp_p, tf.expand_dims(mask_value_last, 0)], axis=0)
            # expand dims so that it would slice n times instead of having a slice of length n indices
            neighb_p = tf.gather_nd(inp_p, tf.expand_dims(offset_ins_p, -1))  # [-1, indices, features]
            out_shape = tf.concat([inds_shape, inp_shape[features_dim:]], axis=-1)
            neighb = tf.reshape(neighb_p, out_shape)
            # ^^ [...,batches, sequence_len, indices, features]
            if self.include_self:  # if set, add self at the 0th position
                self_originals = tf.expand_dims(inp, axis=features_dim - 1)
                # ^^ [...,batches, sequence_len, 1, features]
                neighb = tf.concat([neighb, self_originals], axis=features_dim - 1)
            if self.flatten_indices_features:
                neighb = tf.reshape(neighb, tf.concat([inds_shape[:-1], [-1]], axis=-1))
            return neighb
With a debuggable interactive test:
def allow_tf_debug(func):
    """
    Decorator for tests that use tensorflow, to make them more breakpoint-friendly, i.e. to be able to call .eval()
    on tensors immediately.
    """
    def interactive_wrapper():
        sess = tf.InteractiveSession()
        ret = func()
        sess.close()
        return ret
    return interactive_wrapper
@allow_tf_debug
def test_gather_from_indices():
    gat = GatherFromIndices(include_self=False, flatten_indices_features=False)
    # test for include_self=True is not included
    # test for flatten_indices_features not included
    seq = [  # batch of sequences
        # sequences of 2d features
        [[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7], [7, 8]],
        [[10, 1], [11, 2], [12, 3], [13, 4], [14, 5], [15, 6], [16, 7], [17, 8]]
    ]
    ids = [  # batch of sequences
        # sequences of 3 ids of each item in sequence
        [[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3], [5, 5, 5], [6, 6, 6], [7, 7, 7]],
        [[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5], [5, 6, 7], [6, 7, 0], [7, 0, -1]]
        # minus one should mean masking
    ]

    def compute_assert_2ways_gathers(seq, ids):
        seq = np.array(seq, dtype=np.float32)
        ids = np.array(ids, dtype=np.int32)
        # intended look
        result_np = None
        if len(ids.shape) == 3:  # classical batches
            result_np = np.empty(list(ids.shape) + [seq.shape[-1]])
            for b, seq_in_batch in enumerate(ids):
                for i, sid in enumerate(seq_in_batch):
                    for c, copyid in enumerate(sid):
                        assert ids[b, i, c] == copyid
                        if ids[b, i, c] < 0:
                            result_np[b, i, c, :] = 0
                        else:
                            result_np[b, i, c, :] = seq[b, ids[b, i, c], :]
        elif len(ids.shape) == 4:  # some other batching format...
            result_np = np.empty(list(ids.shape) + [seq.shape[-1]])
            for mb, mseq_in_batch in enumerate(ids):
                for b, seq_in_batch in enumerate(mseq_in_batch):
                    for i, sid in enumerate(seq_in_batch):
                        for c, copyid in enumerate(sid):
                            assert ids[mb, b, i, c] == copyid
                            if ids[mb, b, i, c] < 0:
                                result_np[mb, b, i, c, :] = 0
                            else:
                                result_np[mb, b, i, c, :] = seq[mb, b, ids[mb, b, i, c], :]
        output_shape_kerascomputed = gat.compute_output_shape([seq.shape, ids.shape])
        assert isinstance(output_shape_kerascomputed, tuple)
        assert list(output_shape_kerascomputed) == list(result_np.shape)
        #with tf.get_default_session() as sess:
        sess = tf.get_default_session()
        gat.build(seq.shape)
        result = gat.call([tf.constant(seq), tf.constant(ids)])
        tf_result = sess.run(result)
        assert list(tf_result.shape) == list(output_shape_kerascomputed)
        assert np.all(np.equal(tf_result, result_np))

    compute_assert_2ways_gathers(seq, ids)
    compute_assert_2ways_gathers(seq * 5, ids * 5)
    compute_assert_2ways_gathers([seq] * 3, [ids] * 3)
And a usage example for 5 neighbours per node:
fields_input = Input(shape=(None, 10), name='nodedata')
neighbours_ids_input = Input(shape=(None, 5), name='nodes_neighbours_ids', dtype='int32')
fields_input_with_neighbours = GatherFromIndices(mask_value=0,
                                                 include_self=True,
                                                 flatten_indices_features=True)(
    [fields_input, neighbours_ids_input])
fields = Conv1D(128, kernel_size=5, padding='same',
                activation='relu')(fields_input_with_neighbours)  # data_format="channels_last"

Sum of integers with restrictions

Getting right to the gist of the problem:
In how many ways can we add k positive integers to reach a sum of exactly n, if each number is smaller than or equal to a given number m?
The problem is solvable with dynamic programming but I am stuck because I cannot find the optimal substructure or recursion for the solution.
Here's a simple function in Python 3 that should fit your description. I assume that 0 is not an acceptable value but it's a trivial change if it is.
def howMany(k, n, m):
    def sub(pos, currentSum, path):
        if currentSum == n and pos == k:  # reached the sum: print the result and count it
            print(path)
            return 1
        elif currentSum < n and pos < k:  # still worth trying
            count = 0
            for i in range(1, m):  # note: tries 1..m-1; use range(1, m + 1) if m itself is allowed
                count += sub(pos + 1, currentSum + i, path + [i])
            return count
        else:  # abort
            return 0
    return sub(0, 0, [])
print(howMany(3, 10, 6))
yields
[1, 4, 5]
[1, 5, 4]
[2, 3, 5]
[2, 4, 4]
[2, 5, 3]
[3, 2, 5]
[3, 3, 4]
[3, 4, 3]
[3, 5, 2]
[4, 1, 5]
[4, 2, 4]
[4, 3, 3]
[4, 4, 2]
[4, 5, 1]
[5, 1, 4]
[5, 2, 3]
[5, 3, 2]
[5, 4, 1]
18
It could be optimised but that would obfuscate the logic at this stage.
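As for the optimal substructure the question asks about: if f(k, n) counts the ways to write n as an ordered sum of k integers from 1..m, then conditioning on the last summand gives f(k, n) = f(k-1, n-1) + f(k-1, n-2) + ... + f(k-1, n-m), with f(0, 0) = 1 and f(0, n) = 0 otherwise. A memoized sketch in Julia (count_sums is an illustrative name; it treats m as inclusive, so passing 5 reproduces the run above, which tried the values 1..5):
# f(k, n): ways to write n as an ordered sum of k integers in 1:m.
# Recurrence: f(k, n) = sum(f(k - 1, n - i) for i in 1:m), with f(0, 0) = 1.
function count_sums(k, n, m, memo = Dict{Tuple{Int,Int},Int}())
    k == 0 && return n == 0 ? 1 : 0
    n < k && return 0  # each of the k summands is at least 1
    get!(memo, (k, n)) do
        sum(count_sums(k - 1, n - i, m, memo) for i in 1:m)
    end
end
count_sums(3, 10, 5)  # 18, matching the enumeration above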
