Julia BitArray with 128 Bits

I need a Julia BitArray-like object that can encode more than 64 bits, say 128 bits. Does a simple replacement of UInt64 with UInt128 in bitarray.jl work?

Based on the information in your comment, the existing BitArray would itself serve your needs. Note that BitArray uses UInt64s internally, but that's not a limitation on the size of the array - it actually stores the bits as a Vector of UInt64s, so there's no special size limitation. You can create a 5x5x5 BitArray with no problem.
julia> b = BitArray(undef, 5, 5, 5);
julia> b .= 0;
julia> b[3, 5, 5] = 1
1
julia> b[3, :, :]
5×5 BitMatrix:
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 1
Maybe this part of the documentation threw you off:
BitArrays pack up to 64 values into every 8 bytes, resulting in an 8x space efficiency over Array{Bool, N} and allowing some operations to work on 64 values at once.
but that's talking about internal implementation details. BitArrays are not limited to 8 bytes, so they're not limited to having just 64 values in them either.
Creating a new type of bit array based on UInt128s would likely not be as well optimized as the existing implementation, and it is unnecessary anyway.
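If you are curious about the internals, you can peek at the (undocumented) chunks field, which holds the UInt64 backing store; the 125 bits of the 5×5×5 array above fit into two 64-bit chunks. This is only an illustration of an implementation detail, so the field name and exact display may change between Julia versions:
julia> b.chunks                  # internal Vector{UInt64} backing the 125 bits
2-element Vector{UInt64}:
 0x0000000000000000
 0x0400000000000000

julia> count_ones(b.chunks[2])   # exactly the one bit we set above
1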

Related

Why does Julia Dict.keys show a power-of-2 number of values

Using Julia 0.6.2: when I create a dictionary of 10 items, the array for the keys has length 16, apparently rounded up to the next power of 2.
julia> dk.keys
16-element Array{Int64,1}:
0
4
9
25
100
81
0
0
16
36
64
0
49
0
0
1
When I create a dictionary with 17 keys:
julia> dkk = Dict(k^2 => "*"^k for k = 1:17)
Dict{Int64,String} with 17 entries:
...
julia> dkk.keys
64-element Array{Int64,1}:
0
0
100
0
121
81
0
0
16
0
⋮
4536409040
4536409456
36
225
256
0
0
4536409904
1
Why 64 instead of the next power of 2, which would be 32?
Either way, I really just want the keys and not the hash table.
Note: when the dictionary is accessed directly, the number of entries is what I'd expect.
julia> dk
Dict{Int64,String} with 10 entries:
julia> dkk
Dict{Int64,String} with 17 entries:
The slot array is sized to a power of 2 for internal hashing reasons (a power-of-2 table size lets the hash be reduced to a slot index with a cheap bit mask), and the table grows by more than 2x at a time and is kept sparse to limit collisions, which is why 17 entries end up in 64 slots rather than 32. Either way, avoid directly grabbing internals. Instead, use the iterator keys(dk). If you want the keys as an array, use collect(keys(dk)).
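For example, continuing with the dkk dictionary from the question (output shown in the Julia 0.6 style used above):
julia> ks = collect(keys(dkk));   # just the 17 real keys, no empty hash-table slots

julia> length(ks)
17

julia> sort(ks)[1:5]              # the keys are k^2 for k = 1:17
5-element Array{Int64,1}:
  1
  4
  9
 16
 25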

Lisp - Logical operators

I am new to Lisp, so this might be very simple, but I am curious to know about it in any case.
I am familiar with logical operators like AND and OR, but Lisp does not seem to behave as I expected.
For example, for (and 1 8) I expected:
1 => 0 0 0 1
8 => 1 0 0 0
(and 1 8) => 0 0 0 0
So the answer should have been 0, but what I received instead is 8.
Questions:
How is this calculation done in Lisp?
Are logical operators fundamentally different in Lisp?
In Common Lisp, AND and OR operate on boolean values, not binary digits. NIL is the only false value; anything else is considered true.
To operate on the binary representation of numbers, use LOGAND, LOGIOR, etc. These are all documented at http://clhs.lisp.se/Body/f_logand.htm.
(logand 1 8) ==> 0
In programming languages there are often two kinds of and and or operators. The conditional (short-circuit) operators are called && and || in Algol-family languages; in Common Lisp they are called and and or. The bitwise operators & and |, on the other hand, have the CL equivalents logand and logior.
In Common Lisp every value can be used as a boolean, and with the exception of nil every value is considered true. Perl is very similar, except that it has a couple of false values; still, 1 and 8 are true values in both languages:
1 && 8 # ==> 8
1 & 8 # ==> 0
1 || 8 # ==> 1
1 | 8 # ==> 9
The same in CL:
(and 1 8) ; ==> 8
(logand 1 8) ; ==> 0
(or 1 8) ; ==> 1
(logior 1 8) ; ==> 9
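For comparison, Julia (the language of the main question above) makes the same split between short-circuit and bitwise operators, except that its short-circuit && and || only accept Bool operands instead of treating every non-nil value as true; a quick illustrative REPL session:
julia> 1 & 8          # bitwise and, like CL's logand
0

julia> 1 | 8          # bitwise or, like CL's logior
9

julia> true && false  # short-circuit boolean and
false

julia> 1 && 8         # unlike CL, a non-Bool operand is an error
ERROR: TypeError: non-boolean (Int64) used in boolean context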

Why does 01001000 equal H in Binary

128 + 16 = 144, which isn't on the ASCII chart, yet it supposedly equals H (decimal).
Can somebody help me with the conversion process? I'm quite new to binary, so I don't understand it that well, but the ASCII chart only goes up to 128 and H equals 72.
Summary: why does 01001000 equal H?
Your binary to decimal conversion is incorrect:
01001000 = 1 * 2^6 + 1 * 2^3 = 72
Recall that the right-most binary digit corresponds to 2^0, not 2^1.
The value 01001000 is translated to decimal like this:
  bit:     0    1    0    0    1    0    0    0
  place: 128   64   32   16    8    4    2    1
  value:   0 + 64 +  0 +  0 +  8 +  0 +  0 +  0  =  72
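If you want to double-check a conversion like this, any language's base-2 tools will do; for instance, in Julia (purely as an illustration):
julia> parse(Int, "01001000", base = 2)   # binary string to decimal
72

julia> Char(72)                           # decimal to ASCII character
'H': ASCII/Unicode U+0048 (category Lu: Letter, uppercase)

julia> bitstring(UInt8('H'))              # and back to the bit pattern
"01001000"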

R: filling matrix with values does not work

I have a data frame vec that I need to prepare for an image.plot() plot. The structure of vec is as follows:
> str(vec)
'data.frame': 31212 obs. of 5 variables:
$ x : int 8 24 40 56 72 88 104 120 136 152 ...
$ y : int 8 8 8 8 8 8 8 8 8 8 ...
$ dx: num 0 0 0 0 0 0 0 0 0 0 ...
$ dy: num 0 0 0 0 0 0 0 0 0 0 ...
$ d : num 0 0 0 0 0 0 0 0 0 0 ...
Note: the values in $dx, $dy and $d are not zero; they are just too small to show up in this overview.
Background: the data is the output of pixel-tracking software. $x and $y are pixel coordinates, while $d holds the displacement vector length (in pixels) of that pixel.
image.plot() expects as its first and second arguments the coordinates of the matrix as sorted vectors, so I think sort(unique(vec$x)) and sort(unique(vec$y)) respectively should be good. I would like to end up with image.plot(sort(unique(vec$x)), sort(unique(vec$y)), data).
The third argument is the actual data. To build this I tried:
# spanning an empty matrix
data = matrix(NA,length(unique(vec$x)),length(unique(vec$y)))
# filling the matrix
data[match(vec$x, sort(unique(vec$x))), match(vec$y, sort(unique(vec$y)))] = vec$d
But, unfortunately, this isn't working. It reports no errors but data contains no values! This works:
for(i in c(1:length(vec$x))) data[match(vec$x[i], sort(unique(vec$x))), match(vec$y[i], sort(unique(vec$y)))] = vec$d[i]
But it is very slow.
a) Is there a better way to build data?
b) Is there a better way to deal with my problem anyway?
R allows indexing of a matrix by a two-column matrix, where the first column of the index is interpreted as the row index, and the second column as the column index. So create the indexes into data as a two-column matrix
idx = cbind(match(vec$x, sort(unique(vec$x))),
match(vec$y, sort(unique(vec$y))))
and use that
data[idx] = vec$d
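For what it's worth, the same idea, building the coordinate pairs once and assigning in a single scattered write, carries over to other languages as well. Here is a rough Julia sketch, assuming vectors x, y and d playing the roles of vec$x, vec$y and vec$d (the names are hypothetical):
# Hypothetical x, y, d standing in for vec$x, vec$y, vec$d
xs, ys = sort(unique(x)), sort(unique(y))
data = fill(NaN, length(xs), length(ys))   # empty matrix of the right shape
rows = indexin(x, xs)                      # row index of each observation
cols = indexin(y, ys)                      # column index of each observation
data[CartesianIndex.(rows, cols)] = d      # one vectorized scatter, no loop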

Lossy conversion Matrix -> Quaternion -> Matrix

I have a box defined by 8 points. From those points, I calculate three axes, axis[0], axis[1] and axis[2], and create a rotation matrix as follows:
mat =
{
  axis[0].x  axis[1].x  axis[2].x  0
  axis[0].y  axis[1].y  axis[2].y  0
  axis[0].z  axis[1].z  axis[2].z  0
  0          0          0          1
}
I have this particular rotation matrix:
{
  -1   0   0   0
   0   0   1   0
   0  -1   0   0
   0   0   0   1
}
To the best of my knowledge, this is a valid rotation matrix: its inverse is equal to its transpose.
Now I would like to store this matrix as a quaternion, and later recreate the rotation matrix from that quaternion. I believe that the conversion from matrix to quaternion and back to matrix should be an identity transform, so I should get the same matrix that I had in the beginning (maybe with very small numerical errors).
But this does not seem to be the case. Both SlimDX (C#) and my proprietary math library (C++) return an invalid matrix.
First, the quaternion that I receive:
C#: 0, 0, 0.70710676908493, 0
C++: 0, -0.707107, 0, 0
And the matrix created from this quaternion:
C#:
0 0 0 0
0 0 0 0
0 0 1 0
0 0 0 1
C++:
0 0 0 0
0 1 0 0
0 0 0 0
0 0 0 1
Why is this wrong?
I've also tried this: http://cache-www.intel.com/cd/00/00/29/37/293748_293748.pdf but it gave me bad results as well.
The matrix you gave isn't a rotation matrix; it's a reflection matrix, because its determinant is -1. See the definition on Wikipedia. You can tell something isn't right because you should be getting a unit quaternion, yet the one you get back has length only 1/sqrt(2).
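You can check the determinant quickly; for example, in Julia (just as an illustration, using the 3×3 rotation block of your matrix):
julia> using LinearAlgebra

julia> R = [-1  0  0;
             0  0  1;
             0 -1  0];

julia> det(R)   # an orthogonal matrix with determinant -1 is a reflection, not a rotation
-1.0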
Try using a 4x4 matrix. I'm not a matrix math expert, but I've never used 3x3 matrices when dealing with 3D graphics. I believe the extra dimension is for homogeneous coordinates (handling translation), or something like that.
