Iterate through all possibilities in Julia

If I want to do something on each pair of letters, it could look like this in Julia:
for l1 in 'a':'z'
    for l2 in 'a':'z'
        w = l1*l2
        # ... do something with w ...
    end
end
I want to generalise this to words of any length, given a value n specifying the number of letters desired. How do I best do this in Julia?

You can use:
for ls in Iterators.product(fill('a':'z', n)...)
    w = join(ls)
    # ... do something with w ...
end
In particular if you wanted to collect them in an array you could write:
join.(Iterators.product(fill('a':'z', n)...))
or flatten it to a vector
vec(join.(Iterators.product(fill('a':'z', n)...)))
Note, however, that in most cases this will not be needed and for larger n it is better not to materialize the output but just iterate over it as suggested above.
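For illustration, a minimal self-contained sketch of the lazy version (the helper name each_word and the counting body are just placeholders for your own "do something"):
function each_word(f, n)
    for ls in Iterators.product(fill('a':'z', n)...)
        f(join(ls))
    end
end

counter = Ref(0)
each_word(w -> counter[] += 1, 2)  # lazily visits "aa", "ba", ..., "zz"
counter[]  # 676 == 26^2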

Related

How to remove common elements from both lists in python3.6?

If
L1=[2,4,6,8,2,4,6,8]
L2=[1,3,2,2,4]
then after performing the operation my result should be:
L1=[6,8,4,6,8]
L2=[1,3]
The operation should remove the elements that the two lists have in common. Tell me a method to do this.
For lower complexity I suggest:
uniq = set(L1).intersection(L2)
L1_uniq = [x for x in L1 if x not in uniq]
L2_uniq = [x for x in L2 if x not in uniq]
Alternatively, using plain list comprehensions:
L1_unique = [i for i in L1 if i not in L2]
L2_unique = [i for i in L2 if i not in L1]
This is called a list comprehension, which is a very useful feature of Python. It is shorthand for a for loop, which can be written out explicitly as:
L1_unique = []
for i in L1:
    if i not in L2:
        L1_unique.append(i)
This in turn is equivalent to a double for loop (note that the else belongs to the inner for):
for i in L1:
    for j in L2:
        if i == j:
            break
    else:
        L1_unique.append(i)
As the other answer showed (and I upvoted), building a set from the intersection of the two lists before the list comprehension reduces the time complexity, because each membership test becomes a constant-time set lookup instead of a linear search of the second list. (You can simply run %%timeit to check this if you use IPython.)
In principle, you could also change the data structure of the second list so that an unsuccessful search does not have to traverse the entire list. But I doubt it would beat the set-based list comprehension in practice.
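For example, a small sketch of that idea, keeping the original variable names (converting L2 to a set once makes each membership test O(1)):
L2_set = set(L2)  # one-time O(len(L2)) conversion
L1_unique = [i for i in L1 if i not in L2_set]  # each 'in' test is now a hash lookup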

Julia: Apply 1 dimensional Julia function to multi-dimensional array

I'm a "write Fortran in all languages" kind of person trying to learn modern programming practices. I have a one dimensional function ft(lx)=HT(x,f(x),lx), where x, and f(x) are one dimensional arrays of size nx, and lx is the size of output array ft. I want to apply HT on a multidimensional array f(x,y,z).
Basically I want to apply HT on all three dimensions to go from f(x,y,z) defined on (nx,ny,nz) dimensional grid, to ft(lx,ly,lz) defined on (lx,ly,lz) dimensional grid:
ft(lx,y,z) = HT(x,f(x,y,z) ,lx)
ft(lx,ly,z) = HT(y,ft(lx,y,z) ,ly)
ft(lx,ly,lz) = HT(z,ft(lx,ly,z),lz)
In f95 style I would tend to write something like:
FTx = zeros((lx,ny,nz))
for k = 1:nz
    for j = 1:ny
        FTx[:,j,k] = HT(x, f[:,j,k], lx)
    end
end

FTxy = zeros((lx,ly,nz))
for k = 1:nz
    for i = 1:lx
        FTxy[i,:,k] = HT(y, FTx[i,:,k], ly)
    end
end

FTxyz = zeros((lx,ly,lz))
for j = 1:ly
    for i = 1:lx
        FTxyz[i,j,:] = HT(z, FTxy[i,j,:], lz)
    end
end
I know idiomatic Julia would require using something like mapslices. I was not able to understand how to go about doing this from the mapslices documentation.
So my question is: what would be the idiomatic Julia code, along with proper type declarations, equivalent to the Fortran style version?
A follow up sub-question would be: Is it possible to write a function
FT = HTnD((Tuple of x,y,z etc.),f(x,y,z), (Tuple of lx,ly,lz etc.))
that works with arbitrary dimensions? I.e. it would automatically adjust computation for 1,2,3 dimensions based on the sizes of input tuples and function?
I have a piece of code here which is fairly close to what you want. The key tool is Base.Cartesian.@nexprs, which you can read up on in the linked documentation.
The three essential lines in my code are Lines 30 to 32. Here is a verbal description of what they do.
Line 30: reshape an n1 x n2 x ... nN-sized array C_{k-1} into an n1 x prod(n2,...,nN) matrix tmp_k.
Line 31: Apply the function B[k] to each column of tmp_k. In my code, there are some indirections here since I want to allow for B[k] to be a matrix or a function, but the basic idea is as described above. This is the part where you would want to bring in your HT function.
Line 32: Reshape tmp_k back into an N-dimensional array and circularly permute the dimensions such that the second dimension of tmp_k ends up as the first dimension of C_k. This makes sure that the next iteration of the "loop" implied by @nexprs operates on the second dimension of the original array, and so on.
As you can see, my code avoids forming slices along arbitrary dimensions by permuting such that we only ever need to slice along the first dimension. This makes programming much easier, and it can also have some performance benefits. For example, computing the matrix-vector products B * C[i1,:,i3] for all i1, i3 can be done easily and very efficiently by moving the second dimension of C into the first position of tmp and using gemm to compute B * tmp. Doing the same efficiently without the permutation would be much harder.
Following @gTcV's code, your function would look like:
using Base.Cartesian
ht(x,F,d) = mapslices(f -> HT(x, f, d), F, dims = 1)
@generated function HTnD(
        xx::NTuple{N,Any},
        F::AbstractArray{<:Any,N},
        newdims::NTuple{N,Int}
    ) where {N}
    quote
        F_0 = F
        Base.Cartesian.@nexprs $N k -> begin
            tmp_k = reshape(F_{k-1}, (size(F_{k-1},1), prod(Base.tail(size(F_{k-1})))))
            tmp_k = ht(xx[k], tmp_k, newdims[k])
            F_k = Array(reshape(permutedims(tmp_k), (Base.tail(size(F_{k-1}))..., size(tmp_k,1))))
            # https://github.com/JuliaLang/julia/issues/30988
        end
        return $(Symbol("F_", N))
    end
end
A simpler version, which shows the usage of mapslices, would look like this:
function simpleHTnD(
        xx::NTuple{N,Any},
        F::AbstractArray{<:Any,N},
        newdims::NTuple{N,Int}
    ) where {N}
    for k = 1:N
        F = mapslices(f -> HT(xx[k], f, newdims[k]), F, dims = k)
    end
    return F
end
You could even use foldl if you are a fan of one-liners ;-)
fold_HTnD(xx, F, newdims) = foldl((F, k) -> mapslices(f -> HT(xx[k], f, newdims[k]), F, dims = k), 1:length(xx), init = F)
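As a quick way to try any of the variants above, here is a hypothetical stand-in for HT that simply truncates or zero-pads its input to the requested length (the real transform would go in its place; the names xs, ys, zs are just example grids):
HT(x, f, l) = [i <= length(f) ? f[i] : zero(eltype(f)) for i in 1:l]  # toy placeholder, ignores x

xs, ys, zs = rand(4), rand(5), rand(6)
F = rand(4, 5, 6)
size(simpleHTnD((xs, ys, zs), F, (3, 3, 3)))  # (3, 3, 3): transformed along each dimension in turn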

Elixir loop over a matrix

I have a list of elements and I am converting it into a list of lists using the Enum.chunk_every method.
The code is something like this:
matrix = Enum.chunk_every(list_1d, num_cols)
Now I want to loop over the matrix and access the neighbors
Simply if I have the list [1,2,3,4,5,6,1,2,3] it is converted to a 3X3 matrix like:
[[1,2,3], [4,5,6], [1,2,3]]
Now how do I loop over this matrix? And what if I want to access the neighbors of the elements? For example the neighbors of 5 are 2,4,6 and 2.
I can see that recursion is a way to go but how will that work here?
There are many ways to solve this, and I think you should first consider what your use case is (size of the matrix, number of matrices, number of accesses...) and adapt your data structure accordingly.
Nevertheless, here is a simple implementation (in the Erlang shell; I'll let you adapt it to Elixir):
1> L = [[1,2,3], [4,5,6], [1,2,3]].
[[1,2,3],[4,5,6],[1,2,3]]
2> Get = fun(I,J,L) ->
try
V = lists:nth(I,lists:nth(J,L)),
{ok,V}
catch
_:_ -> {error,out_of_bound}
end
end.
#Fun<erl_eval.18.99386804>
3> Get(1,2,L).
{ok,4}
4> Get(2,3,L).
{ok,2}
5> Get(2,4,L).
{error,out_of_bound}
6> Neighbor = fun(I,J,L) ->
[ V || {I1,J1} <- [{I,J-1},{I-1,J},{I+1,J},{I,J+1}],
{ok,V} <- [Get(I1,J1,L)]
]
end.
#Fun<erl_eval.18.99386804>
7> Neighbor(2,2,L).
[2,4,6,2]
8> Neighbor(1,2,L).
[1,5,1]
9>
Remark: I like list comprehensions; you may prefer to use lists:map in this case. This code is not efficient since it traverses the list 4 times to get the neighbors. Its only advantage is that it is straightforward, so it should be easy to read.
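Translated to Elixir, a rough sketch of the same approach might look like this (the helper names get and neighbors are just for illustration; Enum.at is 0-based, hence the - 1, and the guard keeps negative indices from wrapping around to the end of a list):
get = fn i, j, matrix ->
  if i >= 1 and j >= 1 do
    case matrix |> Enum.at(j - 1, []) |> Enum.at(i - 1) do
      nil -> {:error, :out_of_bound}
      v -> {:ok, v}
    end
  else
    {:error, :out_of_bound}
  end
end

neighbors = fn i, j, matrix ->
  for {i1, j1} <- [{i, j - 1}, {i - 1, j}, {i + 1, j}, {i, j + 1}],
      {:ok, v} <- [get.(i1, j1, matrix)],
      do: v
end

matrix = [[1, 2, 3], [4, 5, 6], [1, 2, 3]]
neighbors.(2, 2, matrix)  # => [2, 4, 6, 2]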

Numpy indexing using array

I'm trying to return a (square) section from an array, where the indices wrap around the edges. I need to juggle some indexing, and it works, but I expect the last two lines of code to have the same result. Why don't they? How does numpy interpret the last line?
And as a bonus question: Am I being woefully inefficient with this approach? I'm using the product because I need to modulo the range so it wraps around, otherwise I'd use a[imin:imax, jmin:jmax, :], of course.
import numpy as np
from itertools import product
i = np.arange(-1, 2) % 3
j = np.arange(1, 4) % 3
a = np.random.randint(1,10,(3,3,2))
print a[i,j,:]
# Gives 3 entries [(i[0],j[0]), (i[1],j[1]), (i[2],j[2])]
# This is not what I want...
indices = list(product(i, j))
print indices
indices = zip(*indices)
print 'a[indices]\n', a[indices]
# This works, but when I'm explicit:
print 'a[indices, :]\n', a[indices, :]
# Huh?
The problem is that advanced indexing is triggered if:
the selection object, obj, is [...] a tuple with at least one sequence object or ndarray
The easiest fix in your case is to use repeated indexing:
a[i][:, j]
An alternative would be to use ndarray.take, which will perform the modulo operation for you if you specify mode='wrap':
a.take(np.arange(-1, 2), axis=0, mode='wrap').take(np.arange(1, 4), axis=1, mode='wrap')
To give another method of advanced indexing which is, in my opinion, better than the product solution:
If you have an integer array for every dimension, they are broadcast together and the output has the broadcast shape (you will see what I mean)...
i, j = np.ix_(i,j) # this adds extra empty axes
print i,j
print a[i,j]
# and now you will actually *not* be surprised:
print a[i,j,:]
Note that this gives a 3x3x2 array, while you had a 9x2 array, but a simple reshape will fix that, and the 3x3x2 array is probably closer to what you want anyway.
Actually the surprise is still hidden in a way, because in your examples a[indices] is the same as a[indices[0], indices[1]], but a[indices,:] is a[(indices[0], indices[1]),:], so it is not a big surprise that it is different. Note that a[indices[0], indices[1],:] does give the same result.
See : http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing
When you add :, you are mixing integer indexing and slicing. The rules are quite complicated and are explained better in the link above than I could here.
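To make this concrete, here is a small sketch using the i, j and indices defined in the question, with the index arrays made explicit (shapes assume the 3x3x2 array a from above):
rows, cols = np.array(indices[0]), np.array(indices[1])
print(a[rows, cols, :].shape)         # (9, 2): rows and cols are paired element-wise, ':' keeps the last axis
print(a[np.array(indices), :].shape)  # (2, 9, 3, 2): the whole (2, 9) index array applies to axis 0 only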

Fortran do-loop over arbitrary indices, like a for-loop in R?

I have two p-times-n arrays x and missx, where x contains arbitrary numbers and missx is an array containing zeros and ones. I need to perform recursive calculations on those points where missx is zero. The obvious solution would be like this:
do i = 1, n
    do j = 1, p
        if (missx(j,i) == 0) then
            z(j,i) = ... something depending on the previous computations and x(j,i)
        end if
    end do
end do
The problem with this approach is that most of the time missx is all zeros, so there are quite a lot of if statements which are always true.
In R, I would do it like this:
for(i in 1:n)
    for(j in which(xmiss[,i]==0))
        z[j,i] <- ... something depending on the previous computations and x[j,i]
Is there a way to do the inner loop like that in Fortran? I did try a version like this:
do i = 1, n
    do j = 1, xlength(i)   ! xlength(i) gives the number of zero elements in x(,i)
        j2 = whichx(j,i)   ! whichx(1:xlength(i),i) contains the indices of zero elements in x(,i)
        z(j2,i) = ... something depending on the previous computations and x(j,i)
    end do
end do
This seemed slightly faster than the first solution (not counting the cost of building xlength and whichx), but is there some cleverer way to do this, like the R version, so I wouldn't need to store those xlength and whichx arrays?
I don't think you are going to get a dramatic speedup anyway. If you must do the iteration for most items, then storing just the list of zero-valued positions for the whole array will not buy you much. You can of course use the WHERE or FORALL constructs.
forall(i = 1: n,j = 1: p,miss(j,i)==0) z(j,i) = ...
or just
where(miss==0) z = ..
But the usual limitations of these constructs apply: the masked assignments must be independent of each other, so they will not help if z(j,i) really depends on values computed in earlier iterations.
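For example, a sketch of a complete masked assignment, assuming the right-hand side is an elemental expression that does not depend on previously computed values of z (which is exactly the limitation mentioned above):
where (missx == 0)
    z = x**2 + 1.0   ! any elemental expression in x; replaces the explicit if inside the loops
elsewhere
    z = 0.0
end where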
