I was wondering if it's possible to use the SIMD types defined in <simd/simd.h> (such as vector_float3) in Swift. I can't seem to figure out a way to do it.
Thanks!
Just a heads-up to keep this question up to date: SIMD vectors were announced at WWDC '15 for Swift 2.0:
SIMD Support: Clang extended vectors are imported and usable in Swift, enabling many graphics and other low-level numeric APIs (e.g. simd.h) to be usable in Swift.
So the answer from now on is: yes, it's possible.
Looks like not for now. They might be adding it in a later version, though. https://groups.google.com/forum/#!topic/swift-language/slwe62yKsWo
Swift 5 comes with native support for SIMD vector types. The following Playground sample code shows an element-wise multiplication using SIMD4:
let vector1 = SIMD4(1, 2, 3, 4)
let vector2: SIMD4 = [2, 3, 4, 5]
let vector3 = vector1 &* vector2
print(vector3) // prints: SIMD4<Int>(2, 6, 12, 20)
Source: the Swift Evolution SIMD Vectors proposal
Check out my repo: https://github.com/noxytrux/SwiftGeom. I built a whole library for that, so you can now use vec2, vec3, etc. in your Swift project.
Related
I want to keep only the first 2 elements in a Vec and release any unused capacity. Here is my current solution:
let mut data = vec![1, 2, 3, 4, 5, 6]; // produced by another function
data.truncate(2);
data.shrink_to_fit();
Is there a better way to do this?
Truncating and shrinking is the best way. Releasing unused capacity is a distinct operation; there's no way around it. Rust doesn't do it automatically since you might be removing and then adding more elements.
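As a minimal sketch (standard library only), the truncate-then-shrink approach from the question can be checked directly by inspecting `len` and `capacity`:

```rust
fn main() {
    let mut data = vec![1, 2, 3, 4, 5, 6]; // produced by another function
    data.truncate(2);     // keep only the first 2 elements
    data.shrink_to_fit(); // ask the allocator to release the unused capacity
    assert_eq!(data, vec![1, 2]);
    // shrink_to_fit may leave a few spare slots depending on the allocator,
    // but the capacity can never be smaller than the length.
    assert!(data.capacity() >= data.len());
    println!("len = {}, capacity = {}", data.len(), data.capacity());
}
```

Note that `shrink_to_fit` is only a request to the allocator; the docs allow it to keep a little extra capacity, which is why the sketch asserts a lower bound rather than an exact value.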
The Rust docs (https://static.rust-lang.org/doc/master/std/vec/struct.Vec.html#guarantees) say:
In general, Vec's allocation details are subtle enough that it is strongly recommended that you only free memory allocated by a Vec by creating a new Vec and dropping it.
I'm a Rust noob, but it seems to say that the solution would be:
let v = vec![v[0], v[1]];
(or vec![&v[0], &v[1]] if appropriate);
BTW, https://static.rust-lang.org/doc/master/std/vec/struct.Vec.html#guarantees also says:
push and insert will never (re)allocate if the reported capacity is sufficient. push and insert will (re)allocate if len()==capacity(). That is, the reported capacity is completely accurate, and can be relied on. It can even be used to manually free the memory allocated by a Vec if desired.
I don't understand how to use this information :)
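One way to use that guarantee, sketched below: since pushes never reallocate while `len() < capacity()`, the buffer address stays stable, which you can observe through `as_ptr`:

```rust
fn main() {
    let mut v: Vec<i32> = Vec::with_capacity(4);
    assert!(v.capacity() >= 4);

    // Pushing up to the reported capacity never reallocates,
    // so the pointer to the buffer does not change.
    let ptr_before = v.as_ptr();
    for i in 0..4 {
        v.push(i);
    }
    assert_eq!(ptr_before, v.as_ptr()); // no reallocation happened
    assert_eq!(v, vec![0, 1, 2, 3]);
}
```

This is what "the reported capacity is completely accurate, and can be relied on" buys you in practice: code that hands out raw pointers into a Vec's buffer stays valid as long as you stay within the reported capacity.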
I want to use the plotting functionality of Plots.jl with an image loaded using the load_image() function of ArrayFire.
What I have is :
AFArray: 1000×300×3 Array{Float32,3}
What I want is :
300×1000 Array{RGB{Any},2} with eltype RGB
I couldn't find a direct conversion in the documentation. Is there an efficient way to do this?
I don't know specifically about ArrayFire arrays, but in general you can use reinterpret for operations like this. If you want the new array to reside on the CPU, copy it over first.
Then, ideally, you could just do
rgb = reinterpret(RGB{Float32}, A)
Unfortunately, MxNx3 is not the optimal layout for RGB arrays, since the three color values should be stored sequentially. So you should either make sure that the array has 3xMxN layout, or use permutedims(A, (3, 1, 2)).
Finally, to get a matrix, you must drop the leading singleton dimension, otherwise you get a 1xMxN array.
So,
rgb = dropdims(reinterpret(RGB{Float32}, permutedims(A, (3, 1, 2))); dims=1)
I assumed that you actually want RGB{Float32} instead of RGB{Any}.
BTW, I'm not sure how this will work if you want to keep the final array on the GPU.
Edit: You might consider reshape instead of dropdims, it seems slightly faster on my pc.
Just playing around with Julia (1.0). One thing that I use a lot in Python/numpy/Matlab is the squeeze function to drop the singleton dimensions.
I found out that one way to do this in Julia is:
a = rand(3, 3, 1);
a = dropdims(a, dims = tuple(findall(size(a) .== 1)...))
The second line seems a bit cumbersome and not easy to read and parse instantly (this could also be my bias that I bring from other languages). However, I wonder if this is the canonical way to do this in Julia?
The actual answer to this question surprised me. What you are asking could be rephrased as:
why doesn't dropdims(a) remove all singleton dimensions?
I'm going to quote Tim Holy from the relevant issue here:
it's not possible to have squeeze(A) return a type that the compiler can infer---the sizes of the input matrix are a runtime variable, so there's no way for the compiler to know how many dimensions the output will have. So it can't possibly give you the type stability you seek.
Type stability aside, there are also some other surprising implications of what you have written. For example, note that:
julia> f(a) = dropdims(a, dims = tuple(findall(size(a) .== 1)...))
f (generic function with 1 method)
julia> f(rand(1,1,1))
0-dimensional Array{Float64,0}:
0.9939103383167442
In summary, including such a method in Base Julia would encourage users to use it, resulting in potentially type-unstable code that, under some circumstances, will not be fast (something the core developers are strenuously trying to avoid). In languages like Python, rigorous type-stability is not enforced, and so you will find such functions.
Of course, nothing stops you from defining your own method as you have. And I don't think you'll find a significantly simpler way of writing it. For example, the proposition for Base that was not implemented was the method:
function squeeze(A::AbstractArray)
    singleton_dims = tuple((d for d in 1:ndims(A) if size(A, d) == 1)...)
    return squeeze(A, singleton_dims)
end
Just be aware of the potential implications of using it.
Let me simply add that "uncontrolled" dropdims (drop any singleton dimension) is a frequent source of bugs. For example, suppose you have some loop that asks for a data array A from some external source, and you run R = sum(A, dims=2) on it and then get rid of all singleton dimensions. But then suppose that one time out of 10000, your external source returns A for which size(A, 1) happens to be 1: boom, suddenly you're dropping more dimensions than you intended and perhaps at risk for grossly misinterpreting your data.
If you specify those dimensions manually instead (e.g., dropdims(R, dims=2)) then you are immune from bugs like these.
You can get rid of tuple in favor of a comma ,:
dropdims(a, dims = (findall(size(a) .== 1)...,))
I'm a bit surprised at Colin's revelation; surely something relying on 'reshape' is type stable? (plus, as a bonus, returns a view rather than a copy).
julia> function squeeze(A::AbstractArray)
           keepdims = Tuple(i for i in size(A) if i != 1)
           return reshape(A, keepdims)
       end;
julia> a = randn(2,1,3,1,4,1,5,1,6,1,7);
julia> size( squeeze(a) )
(2, 3, 4, 5, 6, 7)
No?
Hi, I'm a newbie to TensorFlow. What I want to do is something like this in R:
mat = tf$Variable(matrix(1:4, nrow = 2))
apply(mat, 1, cumprod)
Is this doable in TensorFlow, either via the Python API or the R tensorflow package? Thanks!
EDIT: tf$cumprod is actually what I want.
The TensorFlow Python API includes the tf.map_fn(fn, elems) higher-order operator, which allows you to specify a (Python) function fn that will be applied to each slice of elems in the 0th dimension (i.e. to each row if elems is a matrix).
Note that, while tf.map_fn() is very general, it may be more efficient to use specialized ops that either broadcast their arguments on one or more dimensions (e.g. tf.multiply()), or reduce in parallel across one or more dimensions (e.g. tf.reduce_sum()). However, tf.map_fn() is useful when there is no built-in operator to do what you want.
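For intuition, here is a plain-Python sketch of the row-wise cumulative product that tf$cumprod (tf.math.cumprod with axis=1 in the Python API) computes; TensorFlow itself is not needed for the illustration:

```python
def cumprod_rows(mat):
    """Row-wise cumulative product: out[i][j] = mat[i][0] * ... * mat[i][j]."""
    out = []
    for row in mat:
        acc, cum = 1, []
        for x in row:
            acc *= x
            cum.append(acc)
        out.append(cum)
    return out

# matrix(1:4, nrow = 2) in R is filled column-major,
# so it equals [[1, 3], [2, 4]] row by row.
print(cumprod_rows([[1, 3], [2, 4]]))  # [[1, 3], [2, 8]]
```

Using the built-in cumprod op runs this loop as a single fused kernel instead of a Python-level map, which is why it beats a tf.map_fn over rows.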
I want to do a very simple thing:
Given two vectors, I want to encrypt them and do some calculation, then decrypt the result and get the inner product between both vectors.
Can you recommend me some library that can do this thing? Any other material will help me as well.
I found HELIB, but I still don't know if that is the best I can do. Would you recommend this library? Do you know better ones for my purpose? I want it to be as fast as possible, for the largest vector dimension possible.
I have only basic knowledge of crypto, so I would like to use the library as a black box as much as possible, without putting too much effort into the mathematics behind it.
Thanks for any help!
You can do exactly that with the homomorphic encryption library Pyfhel in Python3. Just install it using pip install Pyfhel and create a simple demo:
from Pyfhel import Pyfhel, PyCtxt
he = Pyfhel() # Object in charge of all homomorphic operations
he.contextGen(10000) # Choose the maximum size of your values
he.keyGen(10000) # Generate your private/public key pair
# Now to the encryption/operation/decryption
v1 = [5.34, 3.44, -2.14, 9.13]
v2 = [1, 2.5, -2.1, 3]
p1 = [he.encrypt(val) for val in v1]
p2 = [he.encrypt(val) for val in v2]
pMul = [a*b for a,b in zip(p1, p2)]
pScProd = pMul[0]             # accumulate the products into a single ciphertext
for p in pMul[1:]:
    pScProd += p
round(he.decrypt(pScProd), 3)
#> 45.824
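As a sanity check (plain Python, no encryption involved), the inner product that the encrypted pipeline should reproduce can be computed directly:

```python
v1 = [5.34, 3.44, -2.14, 9.13]
v2 = [1, 2.5, -2.1, 3]

# Plaintext inner product; the homomorphic result should match it
# up to the scheme's fixed-point precision.
inner = sum(a * b for a, b in zip(v1, v2))
print(round(inner, 3))  # 45.824
```

Comparing the decrypted value against this plaintext result is a quick way to confirm the encryption parameters give you enough precision for your data.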