I would like to create a pointer to an array with smaller dimension.
For example, I have some array arr(1:2, 1:10, 1:10).
Now I want to create a pointer to arr(1:1, 1:10, 1:10), but with the first index (I am not sure what to call it) dropped, and a second pointer to arr(2:2, 1:10, 1:10).
I need it because I would like to send an array with 2 dimensions (a matrix) to a function.
Here is an indication of what I want to do, with pseudocode.
INTEGER, DIMENSION (1:2, 1:10, 1:10), TARGET :: BOUNDRIES
INTEGER, DIMENSION (:,:), POINTER :: LEFT_BOUNDRY
LEFT_BOUNDRY => BOUNDRIES(1,1:10,1:10)
DO i = 1,10
DO j = 1,10
write(*,*) LEFT_BOUNDRY(i,j)
END DO
END DO
Is it possible to do it?
When we have a dummy argument in a function or a subroutine (collectively, procedure) we have a corresponding actual argument when we execute that procedure. Consider the subroutine s:
subroutine s(x)
real x(5,2)
...
end subroutine s
The dummy argument x is in this case an explicit shape array, of rank 2, shape [5,2].
If we want to
call s(y)
where y is some real thing, we don't need y to be a whole array of rank 2 and shape [5,2]. We simply need y to have at least ten elements; a mechanism called storage association maps those ten elements to x when we are in the subroutine.
Imagine, then
real y1(10), y2(1,10), y3(29)
call s(y1)
call s(y2)
call s(y3)
Each of these works (in the final case, it's just the first ten elements that become associated with the dummy argument).
Crucially, it's a so-called element sequence that is important when choosing the elements to associate with x. Consider
real y(5,12,10,10)
call s (y(1,1,1:2,3:7))
This is an array section of y of ten elements. Those ten elements together become x in the subroutine s.
To conclude, if you want to pass arr(2,1:10,1:10) (which is actually a rank 2 array section) to a rank 2 argument which is an explicit shape array of no more than 100 elements, everything is fine.
I've encountered a problem when trying to iterate through a two-dimensional array and sum up the lengths of all the elements inside, in Prolog.
I've tried iterating through a simple 1D array and the result was just as expected. However, difficulties appeared when I started writing the code for a 2D array. Here's my code:
findsum(L):-
atom_row(L, Sum),
write(Sum).
atom_row([Head|Tail], Sum) :-
atom_lengths(Head, Sum),
atom_row(Tail, Sum).
atom_row([], 0).
atom_lengths([Head|Tail], Sum):-
atom_chars(Head, CharList),
length(CharList, ThisLenght),
atom_lengths(Tail, Temp),
Sum is Temp + ThisLenght,
write(ThisLenght).
atom_lengths([], 0).
For example, the sum of the lengths of the elements in the array [[aaa, bbbb], [ccccc, dddddd]] should be equal to 18. And this is what I get:
?- findsum([[aaa, bbbb], [ccccc, dddddd]]).
436
false.
The output comes from the write(ThisLenght) line after each iteration.
Typically it helps (a lot) to split the problem into simpler sub-problems. We can solve the problem, for example, with the following three steps:
first we concatenate the list of lists into a single one-dimensional list, for example with append/2;
next we map each atom in that list to the length of that atom, with the atom_length/2 predicate; and
finally we sum up these values, for example with sum_list/2.
So the main predicate looks like:
findsum(LL, S) :-
append(LL, L),
maplist(atom_length, L, NL),
sum_list(NL, S).
Since maplist/3 is a predicate defined in library(apply), we don't need to implement any other predicates ourselves.
Note: you can see the implementations of the linked predicates by clicking on the :- icon in the SWI-Prolog documentation.
For example:
?- findsum([[aaa, bbbb], [ccccc, dddddd]], N).
N = 18.
using ShiftedArrays
struct CircularMatrix{T} <: AbstractArray{T,2}
data::Array{T,2}
view::CircShiftedArray
currentIndex::Int
function CircularMatrix{T}(dims...) where T
data = zeros(T, dims...)
CircularMatrix(data, ShiftedArrays.circshift(data, (0, -1)), 1)
end
end
Base.size(M::CircularMatrix) = size(M.data)
Base.eltype(::Type{CircularMatrix{T}}) where {T} = T
function shift_forward!(M::CircularMatrix)
shift_forward!(M, 1)
end
function shift_forward!(M::CircularMatrix, n)
# replace the view with a view shifted forwards.
M.currentIndex += n
M.view = ShiftedArrays.circshift(M.data, (n, M.currentIndex))
end
@inline Base.@propagate_inbounds Base.getindex(M::CircularMatrix, i) = M.view[i]
@inline Base.@propagate_inbounds Base.setindex!(M::CircularMatrix, data, i) = M.view[i] = data
How can I make CircularMatrix act just like a regular matrix, so that I can access it like this?
m = CircularMatrix{Int}(4,4)
m[1, 1] = 5
x = view(m, 1, :)
Your matrix type is defined to be a subtype of AbstractArray{T, 2}. You need to implement a few methods of Julia's informal array interface for your type so that functions and features that work on AbstractArray{T, 2} also work on your custom type, that is, to make your CircularMatrix an iterable, indexable, fully functioning matrix.
The methods to implement are:
1. size(M::CircularMatrix)
2. getindex(M::CircularMatrix, i::Int)
3. getindex(M::CircularMatrix, I::Vararg{Int, N})
4. setindex!(M::CircularMatrix, v, i::Int)
5. setindex!(M::CircularMatrix, v, I::Vararg{Int, N})
You already implement 1, 2 and 4 but have not yet set your indexing style. You might not need 3 and 5 if you choose the linear indexing style: set IndexStyle to IndexLinear(), make a few small modifications, and then everything should just work for your matrix.
1. size(M::CircularMatrix)
The first one is size. size(A::CircularMatrix) returns a Tuple of the dimensions of A. For your matrix it is probably something like the following:
Base.size(M::CircularMatrix) = size(M.data)
2. getindex(M::CircularMatrix, i::Int)
This method is needed if you choose linear indexing style. getindex(M, i::Int) should give you the value at linear index i. You already implement it in your code. If you choose linear indexing, you need to set IndexStyle for your type and then you simply skip 3 and 5. Julia will automatically convert multiple index accesses, e.g. a[3, 5], to a linear index access.
Base.IndexStyle(::Type{<:CircularMatrix}) = IndexLinear()
Base.@propagate_inbounds function Base.getindex(M::CircularMatrix, i::Int)
    @boundscheck checkbounds(M, i)
    @inbounds M.view[i]
end
It might be better to use @inbounds here on the second line. If the caller doesn't use @inbounds, we check the bounds first and this hopefully makes the subsequent bounds check unnecessary. You might want to omit this during development, though.
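For instance, here is a purely illustrative caller (the name sum_first_row is made up, and it assumes the interface definitions from this answer are in place) showing how @propagate_inbounds interacts with an @inbounds caller:
# Hypothetical helper: sums the first row of the matrix.
# Because getindex is declared Base.@propagate_inbounds, it gets inlined
# here and its @boundscheck block is elided inside this @inbounds loop.
function sum_first_row(M::CircularMatrix)
    s = zero(eltype(M))
    @inbounds for j in 1:size(M, 2)
        s += M[1, j]
    end
    return s
end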
3. getindex(M::CircularMatrix, I::Vararg{Int, N})
The third one is for Cartesian indexing style. If you choose this style you need to implement this method. Vararg{Int, N} in the signature stands for "exactly N Int arguments". Here N should be equal to the dimensionality of CircularMatrix. Since this is a matrix, N should be two. If you choose this style, you need to define something like the following
Base.@propagate_inbounds function Base.getindex(A::CircularMatrix, I::Vararg{Int, 2})
    @boundscheck checkbounds(A, I...)
    @inbounds A.view[...] # convert I[1] and I[2] to a linear index into view
end
or since your dimensionality is not parametric and a matrix is 2D, simply
Base.@propagate_inbounds function Base.getindex(A::CircularMatrix, i::Int, j::Int)
    @boundscheck checkbounds(A, i, j)
    @inbounds A.view[...] # convert i and j to a linear index into view
end
4. setindex!(M::CircularMatrix, v, i::Int)
The fourth one is similar to the second. This method should set the value v at linear index i, if you choose the linear indexing style.
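If you go with the linear style, a minimal sketch of 4 could simply mirror the getindex definition above (this assumes the underlying view field accepts assignment at a linear index):
Base.@propagate_inbounds function Base.setindex!(M::CircularMatrix, v, i::Int)
    @boundscheck checkbounds(M, i)
    # assumes M.view supports linear-index assignment
    @inbounds M.view[i] = v
    return v
end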
5. setindex!(M::CircularMatrix, v, I::Vararg{Int, N})
The fifth one should be similar to the third, if you choose Cartesian indexing style.
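Correspondingly, a sketch of 5 for the Cartesian style (again assuming the underlying view accepts two-dimensional assignment) could look like:
Base.@propagate_inbounds function Base.setindex!(A::CircularMatrix, v, i::Int, j::Int)
    @boundscheck checkbounds(A, i, j)
    # assumes A.view supports Cartesian-index assignment
    @inbounds A.view[i, j] = v
    return v
end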
After the implementations for 1, 2, and 4 and setting IndexStyle, you should have a custom matrix type that just works.
m[1, 1] = 5
x = view(m, 1, :)
for e in m
...
end
for i in eachindex(m)
...
end
display(m)
println(m)
length(m)
ndims(m)
map(f, m)
....
These should all work.
A few notes
There is documentation for the AbstractArray interface in the Julia manual, with a few examples. You can also see the optional methods to implement there.
There is a JuliaArrays organization on GitHub that provides lots of useful custom array implementations, including StaticArrays, OffsetArrays, etc., and also a JuliaMatrices organization that provides custom matrix types. You might want to take a look at their implementations.
@inline is redundant if you use Base.@propagate_inbounds.
@propagate_inbounds
Tells the compiler to inline a function while retaining the caller's inbounds context.
You do not need to define eltype for your matrix, since there is already a definition for AbstractArray{T, N} which returns T.
In R, the function outer structurally allows you to take the outer product of two vectors x and y while providing a number of options for the actual function applied to each combination. For example outer(x,y,'-') creates an "outer product" matrix of the elementwise differences between x and y. Does Julia have something similar?
Broadcast is the Julia operation which occurs when you add .'s around an operator. When the two containers have the same size, it's an element-wise operation. Example: x .* y is element-wise if size(x) == size(y). However, when the shapes don't match, broadcast really comes into effect. If one of them is a row vector and the other is a column vector, then the output will be 2D, with out[i,j] combining the i-th element of the column vector with the j-th element of the row vector. This means x .* y is a peculiar way to write the outer product if one is a row vector and the other is a column vector.
In general, what broadcast is doing is:
This is wasteful when dimensions get large, so Julia offers broadcast(), which expands singleton dimensions in array arguments to match the corresponding dimension in the other array without using extra memory
(This is from the Julia Manual)
But this generalizes to all of the other binary operators, so x .- y' is what you're looking for.
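For example, here is a small sketch of the analogue of R's outer(x, y, '-'); the vectors x and y below are just illustrative values:
x = [1.0, 2.0, 3.0]   # treated as a 3-element column
y = [10.0, 20.0]

# x .- y' broadcasts the column against the transposed (row) vector,
# producing a 3x2 matrix with entries x[i] - y[j]
A = x .- y'

# the same pattern works for other operators and functions
B = x .* y'               # outer product
C = broadcast(-, x, y')   # equivalent spelling of A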
I have an Array of arrays, called y:
y=Array(Vector{Int64}, 10)
which is basically a list of 1-dimensional arrays (10 of them), where each 1-dimensional array has length 5. Below is an example of how they are initialized:
for i in 1:10
y[i]=sample(1:20, 5)
end
Each 1-dimensional array includes 5 randomly sampled integers between 1 to 20.
Right now I am applying a map function that, for each of those 1-dimensional arrays in y, finds which numbers from 1 to 20 it excludes:
map(x->setdiff(1:20, x), y)
However, I want to make sure that when the function is applied to y[i], if the output of setdiff(1:20, y[i]) includes i, then i is excluded from the results. In other words, I want a function that works like
setdiff(deleteat!(Vector(1:20),i) ,y[i])
but with map.
Mainly my question is whether you can access the index in the map function.
P.S. I know how to do it with comprehensions; I wanted to know if it is possible to do it with map.
The comprehension way:
[setdiff(deleteat!(Vector(1:20), index), value) for (index,value) in enumerate(y)]
Like this?
map(x -> setdiff(deleteat!(Vector(1:20), x[1]),x[2]), enumerate(y))
For your example it gives this:
[2,3,4,5,7,8,9,10,11,12,13,15,17,19,20]
[1,3,5,6,7,8,9,10,11,13,16,17,18,20]
....
[1,2,4,7,8,10,11,12,13,14,15,16,17,18]
[1,2,3,5,6,8,11,12,13,14,15,16,17,19,20]
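As a side note (not part of the answer above), on current Julia versions you can also map over the indices themselves and let setdiff subtract several collections at once, which produces the same results:
map(i -> setdiff(1:20, y[i], [i]), eachindex(y))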
I have a task in which I will have several data types together: a character, several integers, and a double precision value, which together represent a solution to a problem.
At the moment, I have a "toy" F90 program that uses MPI with random numbers and a contrived character string for each processor. I want to have a data type that holds the character string and the double precision random number together.
I will use MPI_REDUCE to get the minimum value for the double precision values. I will have the data type for each process brought together to the root (rank = 0) via the MPI_GATHERV function.
My goal is to match up the minimum value from the random values to the data type. That would be the final answer. I have tried all sorts of ideas up to this point, but to no avail. I end up with "forrtl: severe SIGSEGV, segmentation fault occurred".
Now I have looked at several of the other postings too. For instance, I cannot use the "use mpif.h" statement on this particular system.
But, at last, here is the code:
program fredtype
implicit none
include '/opt/apps/intel15/mvapich2/2.1/include/mpif.h'
integer rank,size,ierror,tag,status(MPI_STATUS_SIZE),i,np,irank
integer blocklen(2),type(2),num,rcount(4)
double precision :: x,aout
character(len=4) :: y
type, BIND(C) :: mytype
double precision :: x,aout,test
character :: y
end type mytype
type(mytype) :: foo,foobag(4)
integer(KIND=MPI_ADDRESS_KIND) :: disp(2),base
call MPI_INIT(ierror)
call MPI_COMM_SIZE(MPI_COMM_WORLD,size,ierror)
call MPI_COMM_RANK(MPI_COMM_WORLD,rank,ierror)
aout = 99999999999.99
call random_seed()
call random_number(x)
if(rank.eq.0)y="dogs"
if(rank.eq.1)y="cats"
if(rank.eq.2)y="tree"
if(rank.eq.3)y="woof"
print *,rank,x,y
call MPI_GET_ADDRESS(foo%x,disp(1),ierror)
call MPI_GET_ADDRESS(foo%y,disp(2),ierror)
base = disp(1)
disp(2) = disp(2) - base
blocklen(1) = 1
blocklen(2) = 1
type(1) = MPI_DOUBLE_PRECISION
type(2) = MPI_CHARACTER
call MPI_TYPE_CREATE_STRUCT(2,blocklen,disp,type,foo,ierror)
call MPI_TYPE_COMMIT(foo,ierror)
call MPI_REDUCE(x,aout,1,MPI_DOUBLE_PRECISION,MPI_MIN,0,MPI_COMM_WORLD,ierror)
call MPI_GATHER(num,1,MPI_INT,rcount,1,MPI_INT,0,MPI_COMM_WORLD)
call MPI_GATHERV(foo,num,type,foobag,rcount,disp,type,0,MPI_COMM_WORLD)
if(rank.eq.0)then
print *,'fin ',aout
end if
end program fredtype
Thank you for any help.
Sincerely,
Erin
Your code is definitely too confusing for me to try to fully fix it. So let's just assume that you have your type mytype defined as follows:
type, bind(C) :: mytype
double precision :: x, aout, test
character(len=4) :: y
end type mytype
(Remark: I've added len=4 to the definition of y, as it seemed to be missing from your original code. I might be wrong about that; if so, just adjust blocklen(2) in the subsequent code accordingly.)
Now let's assume that you only want to transfer the x and y fields of your variables of type mytype. For this, you'll need to create an appropriate derived MPI type, first using MPI_Type_create_struct() to define the basic types and their locations within your structure, and then MPI_Type_create_resized() to define the true extent and lower bound of the type, including holes.
The tricky part is usually evaluating what the lower bound and extent of your Fortran type are. Here, since the fields you transfer include the first and last fields of the type, and since you added bind(C), you could just use MPI_Type_get_extent() to get this information. However, if you hadn't included x or y (the first and last fields of the type) in the MPI data type, MPI_Type_get_extent() wouldn't have returned what you needed. So I'll propose an alternative (slightly more cumbersome) approach which will, I believe, always work:
integer :: ierror, typefoo, tmptypefoo
integer :: blocklen(2), types(2)
type(mytype) :: foobag(4)
integer(kind=MPI_ADDRESS_KIND) :: disp(2), lb, extent
call MPI_Get_address( foobag(1), lb, ierror )
call MPI_Get_address( foobag(1)%x, disp(1), ierror )
call MPI_Get_address( foobag(1)%y, disp(2), ierror )
call MPI_Get_address( foobag(2), extent, ierror )
disp(1) = MPI_Aint_diff( disp(1), lb )
disp(2) = MPI_Aint_diff( disp(2), lb )
extent = MPI_Aint_diff( extent, lb )
lb = 0
blocklen(1) = 1
blocklen(2) = 4
types(1) = MPI_DOUBLE_PRECISION
types(2) = MPI_CHARACTER
call MPI_Type_create_struct( 2, blocklen, disp, types, tmptypefoo, ierror )
call MPI_Type_create_resized( tmptypefoo, lb, extent, typefoo, ierror )
call MPI_Type_commit( typefoo, ierror )
So as you can see, lb serves as the base address for the displacements into the structure, and the type extent is computed using the relative addresses of two consecutive elements of an array of type mytype.
Then, we create an intermediate MPI data type tmptypefoo which only contains information about the actual data we will transfer, and we extend it with information about the true lower bound and extent of the Fortran type to obtain typefoo. Finally, only this last one needs to be committed, as only it will be used for data transfers.