Why is this Fortran code incorrect?
function foo(x)
real x
real, dimension(3) :: foo
foo = (/1, 2, 3/)
end
... and in the main program
print*, foo(x)(1)
Why can't we access an element of a function result directly?
While you ponder your own question
Why can't we access an element of a function result directly?
I suggest you also write lines, in your main program, such as
res = foo(x) ! having taken care to declare res appropriately
print*, res(1)
and get on with your coding. It's just not syntactically correct to index a function call the way you've tried.
So one answer to your original question is: because that's the way Fortran's syntax is defined. To which you might be prompted to respond: why is Fortran's syntax defined that way? Even if this process turns up an answer in the form of a reference to the roots of the design of Fortran (now over 50 years old), you're still going to have to modify your code to align with Fortran's syntax. For sure your compiler isn't going to say you know, what you've written is better than the syntax I've been programmed to accept, I'll compile that up right now ...
The answer by High Performance Mark tells you all you need. As you're after syntactic niceness, I'll address one thing in there: "having taken care to declare res appropriately".
One could use an associate construct to hide this a little.
associate (res => foo(x))
print *, res(1)
end associate
This changes nothing in that answer other than reducing junk declarations.
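For reference, here is a minimal complete program combining both suggestions (a sketch; the names demo, res and tmp are arbitrary, and the associate construct needs a Fortran 2003 compiler):
program demo
  implicit none
  real :: x = 0.0
  real, dimension(3) :: res

  ! Store the function result, then index the variable.
  res = foo(x)
  print *, res(1)

  ! Or hide the temporary behind an associate name.
  associate (tmp => foo(x))
    print *, tmp(1)
  end associate

contains

  function foo(x)
    real :: x
    real, dimension(3) :: foo
    foo = (/ 1, 2, 3 /)
  end function foo

end program demo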
Related
In Pascal, I understand that one could create a function returning a pointer which can be dereferenced and then assign a value to that, such as in the following (obnoxiously useless) example:
type ptr = ^integer;
var d: integer;
function f(x: integer): ptr;
begin
f := @x;
end;
begin
f(d)^ := 4;
end.
And now d is 4.
(The actual usage is to access part of a quite complicated array of records data structure. I know that a class would be better than an array of nested records, but it isn't my code (it's TeX: The Program) and was written before Pascal implementations supported object-orientation. The code was written using essentially a language built on top of Pascal that added macros which expand before the compiler sees them. Thus you could define some macro m that takes an argument x and expands into thearray[x + 1].f1.f2 instead of writing that every time; the usage would be m(x) := somevalue. I want to replicate this functionality with a function instead of a macro.)
However, is it possible to achieve this functionality without the ^ operator? Can a function f be written such that f(x) := y (no caret) assigns the value y to x? I know that this is stupid and the answer is probably no, but I just (a) don't really like the look of it and (b) am trying to mimic exactly the form of the macro I mentioned above.
References are not first class objects in Pascal, unlike languages such as C++ or D. So the simple answer is that you cannot directly achieve what you want.
Using a pointer as you illustrated is one way to achieve the same effect although in real code you'd need to return the address of an object whose lifetime extends beyond that of the function. In your code that is not the case because the argument x is only valid until the function returns.
You could use an enhanced record with operator overloading to encapsulate the pointer, and so encapsulate the pointer dereferencing code. That may be a good option, but it very much depends on your overall problem, of which we do not have sight.
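For illustration, here is a minimal Free Pascal sketch of that last idea (the names TIntRef, Put and f are made up; note it still cannot produce the literal form f(d) := 4, only something close):
{$mode objfpc}{$modeswitch advancedrecords}
program RefDemo;

type
  PInteger = ^Integer;

  { Wraps a pointer so callers never see the ^ operator. }
  TIntRef = record
    Ptr: PInteger;
    procedure Put(AValue: Integer); { write through the pointer }
  end;

procedure TIntRef.Put(AValue: Integer);
begin
  Ptr^ := AValue;
end;

function f(var x: Integer): TIntRef;
begin
  Result.Ptr := @x; { a var parameter outlives the call }
end;

var
  d: Integer;
begin
  f(d).Put(4); { spelled f(d).Put(4) rather than f(d) := 4 }
  WriteLn(d);  { prints 4 }
end.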
I know about recursion, but I don't know how it's possible. I'll use the following example to further explain my question.
(def (pow (x, y))
(cond ((y = 0) 1))
(x * (pow (x , y-1))))
The program above is in the Lisp language. I'm not sure the syntax is correct, since I came up with it in my head, but it will do. In the program I am defining the function pow, and within pow it calls itself. I don't understand how it's able to do this. From what I know, the computer has to completely analyze a function before it can be defined. If this is the case, the computer should give an undefined message when I use pow, because I used it before it was defined. The principle I'm describing is the one at play when you use x in x = x + 1 when x was not defined previously.
Compilers are much smarter than you think.
A compiler can turn the recursive call in this definition:
(defun pow (x y)
(cond ((zerop y) 1)
(t (* x (pow x (1- y))))))
into a goto instruction to re-start the function from scratch:
Disassembly of function POW
(CONST 0) = 1
2 required arguments
0 optional arguments
No rest parameter
No keyword parameters
12 byte-code instructions:
0 L0
0 (LOAD&PUSH 1)
1 (CALLS2&JMPIF 172 L15) ; ZEROP
4 (LOAD&PUSH 2)
5 (LOAD&PUSH 3)
6 (LOAD&DEC&PUSH 3)
8 (JSR&PUSH L0)
10 (CALLSR 2 57) ; *
13 (SKIP&RET 3)
15 L15
15 (CONST 0) ; 1
16 (SKIP&RET 3)
If this were a more complicated recursive function that a compiler cannot unroll into a loop, it would merely call the function again.
From what I know the computer has to completely analyze a function before it can be defined.
When the compiler sees that one is defining a function POW, it tells itself: now we are defining the function POW. If it then sees, inside the definition, a call to POW, the compiler says to itself: oh, this seems to be a call to the function I'm currently compiling; it can then create code to make a recursive call.
A function is just a block of code. Its name is just a convenience so that you don't have to calculate the exact address it will end up at; the language implementation turns names into the addresses where execution should continue.
One function calls another by storing the address of the next instruction in the current function on the stack, perhaps pushing arguments onto the stack as well, and then jumping to the address of the called function. The called function jumps back to the return address it finds, so that control goes back to the caller. There are several calling conventions, implemented by the language, that settle which side does what. CPUs don't really have full function support; just as there is nothing called a while loop at the CPU level, functions are emulated with jumps and the stack.
Just as functions have names, arguments have names too; however, they are mere stack locations, just like the return address. When a function calls itself it just pushes a new return address and new arguments onto the stack and jumps to itself. The top of the stack is now different, and thus the same variable names refer to locations unique to this call, so the x and y of the previous call are somewhere else than the current x and y. In fact, no special treatment is needed for a function calling itself rather than calling anything else.
Historically, the first high-level language, Fortran, did not support recursion. A function could call itself, but when it returned it returned to the original caller, without doing the rest of the function after the self-call. Fortran itself would have been impossible to write without recursion, so while its implementers used recursion, the language did not offer it to the programmers who used it. This limitation is part of the reason John McCarthy created Lisp.
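You can watch this happen with trace. In an implementation that does not turn the self-call into a direct jump (or after declaring pow notinline), every nesting level shows up with its own x and y; the exact output format varies by implementation, but it looks roughly like this:
(trace pow)
(pow 2 3)
;;  0: (POW 2 3)
;;    1: (POW 2 2)
;;      2: (POW 2 1)
;;        3: (POW 2 0)
;;        3: POW returned 1
;;      2: POW returned 2
;;    1: POW returned 4
;;  0: POW returned 8
;; => 8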
I think to see how this can work in general, and in particular in cases where recursive calls can't be turned into loops, it's worth thinking about how a general compiled language might work, because the problems are not different.
Let's imagine how a compiler might turn this function into machine code:
(defun foo (x)
(+ x (bar x)))
And let's assume that it does not know anything about bar at the time of compilation. Well, it has two options.
It can compile foo in such a way that the call to bar is translated into a set of instructions which say 'look up the function definition stored under the name bar, whatever it currently is, and arrange to call that function with the right arguments'.
It can compile foo in such a way that there is a machine-level function call to a function, but the address of that function is left as a placeholder of some kind. It can then attach some metadata to foo which says: 'before this function is called you need to find the function named bar, find its address, splice it into the code in the right place, and remove this metadata'.
Both of these mechanisms allow foo to be defined before it's known what bar is. And note that instead of bar I could have written foo: these mechanisms deal with recursive calls too. Apart from that, however, they differ.
The first mechanism means that every time foo is called it needs to do some kind of dynamic lookup for bar, which will involve some overhead (but this overhead can be pretty small):
as a consequence of this the first mechanism will be slightly slower than it might be;
but, also as a consequence of this, if bar gets redefined, then the new definition will get picked up, which is a very desirable thing for an interactive language, which Lisp implementations usually are.
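Here is that first mechanism in action, as a concrete sketch that runs in any Common Lisp:
(defun bar (x) (* x 10))
(defun foo (x) (+ x (bar x)))
(foo 1)                    ; => 11
(defun bar (x) (* x 100))  ; redefine BAR ...
(foo 1)                    ; => 101: FOO picks up the new definition,
                           ; because the call goes through the name BAR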
The second mechanism means that, after foo has all its references to other functions linked in to it, then the calls happen at the machine level:
this means they will be quick;
but that redefinition will be, at best, more complicated or, at worst, not possible at all.
The second of these implementations is close to how traditional compilers compile code: they compile code leaving a bunch of placeholders with associated metadata saying what names those placeholders correspond to. A linker (sometimes known as a link-loader, or loader) then grovels over all the files produced by the compiler, as well as other libraries of code, and resolves all these references, resulting in a bit of code which can actually be run.
A very simple-minded Lisp system might work entirely by the first mechanism (I am pretty sure that this is how Python works, for instance). A more advanced compiler will probably work by some combination of the first and second mechanism. As an example of this, CL allows the compiler to make assumptions that apparent self-calls in functions really are self-calls, and so the compiler may well compile them as direct calls (essentially it will compile the function and then link it on the fly). But when compiling code in general, it might call 'through the name' of the function.
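As an aside, Common Lisp gives the programmer control over that assumption:
;; Forbid the direct-call assumption for POW: all calls, including
;; self-calls, must then go through the name, so redefinition works.
(declaim (notinline pow))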
There are also more-or-less heroic strategies: for instance, at the first call of a function, link it, on the fly, to all the things it refers to, and note in their definitions that if they change then this thing needs to be unlinked as well, so it all happens again. These kinds of tricks once seemed implausible, but compilers for languages like JavaScript do things at least as hairy as this all the time now.
Note that compilers and linkers for modern systems actually do something more complicated than I've described, because of shared libraries &c: what I described is more or less what happened before shared libraries.
I am a newbie to Julia and still trying to figure out everything.
I want to restrict the input variable type to an array that can contain ints and floats.
I would really appreciate any help.
function foo(array::??)
As I mentioned in the comment, you don't want to mix them for performance reasons. However, if your array can be either Floats or Ints, but you don't know which it will be, then the best approach is to make it dispatch on the parametric type:
function foo{T<:Number,N}(array::Array{T,N})
This will make it compile a separate function for arrays of each number type (only when needed), and since the type will be known for the compiler, it will run an optimized version of the function whether you give it foo([0.1,0.3,0.4]), foo([1 2 3]), foo([1//2 3//4]), etc.
Updated syntax in Julia 0.6+
function foo(array::Array{T,N}) where {T<:Number,N}
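For instance, a sketch of an interactive session (sum is just a stand-in body):
julia> foo(array::Array{T,N}) where {T<:Number,N} = sum(array)
foo (generic function with 1 method)

julia> foo([0.5, 1.5, 2.5])   # compiles a Vector{Float64} specialization
4.5

julia> foo([1 2 3])           # a separate specialization for integer arrays
6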
For more generality, you can use Array{Union{Int64,Float64},N} as a type. This will allow Floats and Ints, and you can use its constructor like
arr = Array{Union{Int64,Float64},2}(4,4) # The 2 is the dimension, (4,4) is the size
and you can allow dispatching onto weird things like this as well by doing
function foo{T,N}(array::Array{T,N})
i.e. just remove the restriction on T. However, since the compiler cannot know in advance whether any element of the array is an Int or a Float, it cannot optimize it very well. So in general you should not do this...
But let me explain one way you can work with this and still get something with decent performance. It also works by multiple dispatch. Essentially, if you encase your inner loops with a function call which is a strictly typed dispatch, then when doing all of the hard calculations it can know exactly what type it is and optimize the code anyways. This is best explained by an example. Let's say we want to do:
function foo{T,N}(array::Array{T,N})
for i in eachindex(array)
val = array[i]
# do algorithm X on val
end
end
You can check using @code_warntype that val will not compile as an Int64 or Float64 because it won't be known until runtime what type it will be for each i. If you check @code_llvm (or @code_native for the assembly), you will see that really long code is generated in order to handle this. What we can do instead is define
function inner_foo{T<:Number}(val::T)
# Do algorithm X on val
end
and then instead define foo as
function foo2{T,N}(array::Array{T,N})
for i in eachindex(array)
inner_foo(array[i])
end
end
While this looks the same to you, it is very different to the compiler. Note that inner_foo(array[i]) dispatches a specially-compiled function for whatever number type it sees, so in foo2 algorithm X is calculated efficiently, and the only non-efficient part is the wrapping above inner_foo (so if all your time is spent in inner_foo, you will get basically maximal performance).
This is why Julia is built around multiple-dispatch: it's a design which allows you to push things out to optimized functions whenever possible. Julia is fast because of it. Use it.
This should be a comment to Chris' answer, but I don't have enough points to comment.
As Chris points out, using function barriers can be quite useful to generate optimal code. However be aware that dynamic dispatch has some overhead. This may or may not be important depending on the complexity of the inner function.
function foo1{T,N}(array::Array{T,N})
s = 0.0
for i in eachindex(array)
val = array[i]
s += val*val
end
s
end
function foo2{T,N}(array::Array{T,N})
s = 0.0
for i in eachindex(array)
s += inner_foo(array[i])
end
s
end
function foo3{T,N}(array::Array{T,N})
s = 0.0
for i in eachindex(array)
val = array[i]
if isa(val, Float64)
s += inner_foo(val::Float64)
else
s += inner_foo(val::Int64)
end
end
s
end
function inner_foo{T<:Number}(val::T)
val*val
end
For A = Array{Union{Int64,Float64},N}, foo2 doesn't provide much speedup over foo1, since the benefit of the optimised inner_foo is countered by the cost of dynamic dispatch.
foo3 is much faster (~7 times) and could be used if the possible types are limited and known ahead of time (as in the above example, where the elements are either Int64 or Float64).
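A sketch of how such a comparison might be set up (BenchmarkTools is an assumed dependency, the constructor is the 0.5-era syntax used above, and the exact ratios depend on hardware):
using BenchmarkTools

A = Array{Union{Int64,Float64}}(10^6)
for i in eachindex(A)
    A[i] = rand(Bool) ? rand(1:10) : rand()  # mix Ints and Floats
end

@btime foo1($A)   # dynamic dispatch on every element
@btime foo2($A)   # function barrier, still one dispatch per element
@btime foo3($A)   # manual Int64/Float64 split, static calls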
See https://groups.google.com/forum/#!topic/julia-users/OBs0fmNmjCU for further discussion.
Say I have a Julia trait that relates to two types: one type is a sort of "base" type that may satisfy a sort of partial trait, the other is an associated type that is uniquely determined by the base type. (That is, the relation from BaseType -> AssociatedType is a function.) Together, these types satisfy a composite trait that is the one of interest to me.
For example:
using Traits
@traitdef IsProduct{X} begin
isnew(X) -> Bool
coolness(X) -> Float64
end
@traitdef IsProductWithMeasurement{X,M} begin
@constraints begin
istrait(IsProduct{X})
end
measurements(X) -> M
#Maybe some other stuff that dispatches on (X,M), e.g.
#fits_in(X,M) -> Bool
#how_many_fit_in(X,M) -> Int64
#But I don't want to implement these now
end
Now here are a couple of example types. Please ignore the particulars of the examples; they are just meant as MWEs and there is nothing relevant in the details:
type Rope
color::ASCIIString
age_in_years::Float64
strength::Float64
length::Float64
end
type Paper
color::ASCIIString
age_in_years::Int64
content::ASCIIString
width::Float64
height::Float64
end
function isnew(x::Rope)
(x.age_in_years < 10.0)::Bool
end
function coolness(x::Rope)
if x.color=="Orange"
return 2.0::Float64
elseif x.color!="Taupe"
return 1.0::Float64
else
return 0.0::Float64
end
end
function isnew(x::Paper)
(x.age_in_years < 1.0)::Bool
end
function coolness(x::Paper)
(x.content=="StackOverflow Answers" ? 1000.0 : 0.0)::Float64
end
Since I've defined these functions, I can do
@assert istrait(IsProduct{Rope})
@assert istrait(IsProduct{Paper})
And now if I define
function measurements(x::Rope)
(x.length)::Float64
end
function measurements(x::Paper)
(x.height,x.width)::Tuple{Float64,Float64}
end
Then I can do
@assert istrait(IsProductWithMeasurement{Rope,Float64})
@assert istrait(IsProductWithMeasurement{Paper,Tuple{Float64,Float64}})
So far so good; these run without error. Now, what I want to do is write a function like the following:
@traitfn function get_measurements{X,M;IsProductWithMeasurement{X,M}}(similar_items::Array{X,1})
all_measurements = Array{M,1}(length(similar_items))
for i in eachindex(similar_items)
all_measurements[i] = measurements(similar_items[i])::M
end
all_measurements::Array{M,1}
end
Generically, this function is meant to be an example of "I want to use the fact that I, as the programmer, know that BaseType is always associated to AssociatedType to help the compiler with type inference. I know that whenever I do a certain task [in this case, get_measurements, but generically this could work in a bunch of cases] then I want the compiler to infer the output type of that function in a consistently patterned way."
That is, e.g.
do_something_that_makes_arrays_of_assoc_type(x::BaseType)
will always spit out Array{AssociatedType}, and
do_something_that_makes_tuples(x::BaseType)
will always spit out Tuple{Int64,BaseType,AssociatedType}.
AND, one such relationship holds for all pairs of <BaseType,AssociatedType>; e.g. if BatmanType is the base type to which RobinType is associated, and SupermanType is the base type to which LexLutherType is always associated, then
do_something_that_makes_tuple(x::BatmanType)
will always output Tuple{Int64,BatmanType,RobinType}, and
do_something_that_makes_tuple(x::SupermanType)
will always output Tuple{Int64,SupermanType,LexLutherType}.
So, I understand this relationship, and I want the compiler to understand it for the sake of speed.
Now, back to the function example. If this makes sense, you will have realized that while the function definition I gave as an example is 'correct' in the sense that it satisfies this relationship and does compile, it is un-callable because the compiler doesn't understand the relationship between X and M, even though I do. In particular, since M doesn't appear in the method signature, there is no way for Julia to dispatch on the function.
So far, the only thing I have thought to do to solve this problem is to create a sort of workaround where I "compute" the associated type on the fly, and I can still use method dispatch to do this computation. Consider:
function get_measurement_type_of_product(x::Rope)
Float64
end
function get_measurement_type_of_product(x::Paper)
Tuple{Float64,Float64}
end
@traitfn function get_measurements{X;IsProduct{X}}(similar_items::Array{X,1})
M = get_measurement_type_of_product(similar_items[1]::X)
all_measurements = Array{M,1}(length(similar_items))
for i in eachindex(similar_items)
all_measurements[i] = measurements(similar_items[i])::M
end
all_measurements::Array{M,1}
end
Then indeed this compiles and is callable:
julia> get_measurements(Array{Rope,1}([Rope("blue",1.0,1.0,1.0),Rope("red",2.0,2.0,2.0)]))
2-element Array{Float64,1}:
1.0
2.0
But this is not ideal, because (a) I have to redefine this map each time, even though I feel as though I already told the compiler about the relationship between X and M by making them satisfy the trait, and (b) as far as I can guess (maybe this is wrong; I don't have direct evidence for this) the compiler won't necessarily be able to optimize as well as I want, since the relationship between X and M is "hidden" inside the return value of the function call.
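(For illustration, one variant of this workaround moves the map to the type level, so nothing needs to be read from the array and empty arrays work too; same era syntax as above, and it addresses (b) but not (a):)
get_measurement_type_of_product(::Type{Rope})  = Float64
get_measurement_type_of_product(::Type{Paper}) = Tuple{Float64,Float64}

@traitfn function get_measurements{X;IsProduct{X}}(similar_items::Array{X,1})
    M = get_measurement_type_of_product(X)   # computed from the type itself
    all_measurements = Array{M,1}(length(similar_items))
    for i in eachindex(similar_items)
        all_measurements[i] = measurements(similar_items[i])::M
    end
    all_measurements::Array{M,1}
end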
One last thought: if I had the ability, what I would ideally do is something like this:
@traitdef IsProduct{X} begin
isnew(X) -> Bool
coolness(X) -> Float64
∃ ! M s.t. measurements(X) -> M
end
and then have some way of referring to the type that uniquely witnesses the existence relationship, so e.g.
@traitfn function get_measurements{X;IsProduct{X},IsWitnessType{IsProduct{X},M}}(similar_items::Array{X,1})
all_measurements = Array{M,1}(length(similar_items))
for i in eachindex(similar_items)
all_measurements[i] = measurements(similar_items[i])::M
end
all_measurements::Array{M,1}
end
because this would be somehow dispatchable.
So: what is my specific question? I am asking, given that you presumably by this point understand that my goals are
1. have my code exhibit this sort of structure generically, so that I can effectively repeat this design pattern across a lot of cases and then program in the abstract at the high level of X and M, and
2. do (1) in such a way that the compiler can still optimize to the best of its ability / is as aware of the relationship among types as I, the coder, am,
then, how should I do this? I think the answer is
Use Traits.jl
Do something pretty similar to what you've done so far
Also do ____some clever thing____ that the answerer will indicate,
but I'm open to the idea that in fact the correct answer is
Abandon this approach, you're thinking about the problem the wrong way
Instead, think about it this way: ____MWE____
I'd also be perfectly satisfied by answers of the form
What you are asking for is a "sophisticated" feature of Julia that is still under development, and is expected to be included in v0.x.y, so just wait...
and I'm less enthusiastic about (but still curious to hear) an answer such as
Abandon Julia; instead use the language ________ that is designed for this type of thing
I also think this might be related to the question of typing Julia's function outputs, which as I take it is also under consideration, though I haven't been able to puzzle out the exact representation of this problem in terms of that one.
Original question:
I know Mathematica has a built-in Map[f, list], but what does this function look like? I know you need to look at every element in the list.
Any help or suggestions?
Edit (by Jefromi, pieced together from Mike's comments):
I am working on a program that needs to move through a list like Map does, but I am not allowed to use it. I'm not allowed to use Table either; I need to move through the list without the help of another function. I'm working on a recursive version; I have the empty-list case down, but moving through a list with items in it is not working out. Here is my first case: newMap[f_, {}] := {} (the map of an empty list is just an empty list)
I posted a recursive solution but then decided to delete it, since from the comments this sounds like a homework problem, and I'm normally a teach-to-fish person.
You're on the way to a recursive solution with your definition newMap[f_, {}] := {}.
Mathematica's pattern-matching is your friend. Consider how you might implement the definition for newMap[f_, {e_}], and from there, newMap[f_, {e_, rest___}].
One last hint: once you can define that last function, you don't actually need the case for {e_}.
UPDATE:
Based on your comments, maybe this example will help you see how to apply an arbitrary function:
func[a_, b_] := a[b]
In[4]:= func[Abs, x]
Out[4]= Abs[x]
SOLUTION
Since the OP caught a fish, so to speak (congrats!), here are two recursive solutions to satisfy the curiosity of any onlookers. The first one is probably what I would consider "idiomatic" Mathematica:
map1[f_, {}] := {}
map1[f_, {e_, rest___}] := {f[e], Sequence @@ map1[f, {rest}]}
Here is the approach that does not leverage pattern matching quite as much, which is basically what the OP ended up with:
map2[f_, {}] := {}
map2[f_, lis_] := {f[First[lis]], Sequence @@ map2[f, Rest[lis]]}
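Either version behaves like Map, for example:
In[1]:= map1[#^2 &, {1, 2, 3, 4}]
Out[1]= {1, 4, 9, 16}

In[2]:= map2[f, {a, b, c}]
Out[2]= {f[a], f[b], f[c]}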
The {f[e], Sequence @@ map[f, {rest}]} part can be expressed in a variety of equivalent ways, for example:
Prepend[map[f, {rest}], f[e]]
Join[{f[e]}, map[f, {rest}]] (@Mike used this method)
Flatten[{{f[e]}, map[f, {rest}]}, 1]
I'll leave it to the reader to think of any more, and to ponder the performance implications of most of those =)
Finally, for fun, here's a procedural version, even though writing it made me a little nauseous: ;-)
map3[f_, lis_] :=
(* copy lis since it is read-only *)
Module[{ret = lis, i},
For[i = 1, i <= Length[lis], i++,
ret[[i]] = f[lis[[i]]]
];
ret
]
To answer the question you posed in the comments, the first argument in Map is a function that accepts a single argument. This can be a pure function, or the name of a function that already only accepts a single argument like
In[1]:= f[x_] := x + 2
In[2]:= Map[f, {1, 2, 3}]
Out[2]= {3, 4, 5}
As to how to replace Map with a recursive function of your own devising ... following Jefromi's example, I'm not going to give too much away, as this is homework. But you'll obviously need some way of operating on a piece of the list while keeping the rest of the list intact for the recursive part of your map function. As he said, Part is a good starting place, but I'd look at some of the other functions it references and see if they are more useful, like First and Rest. Also, I can see where Flatten would be useful. Finally, you'll need a way to end the recursion, so learning how to constrain patterns may be useful. Incidentally, this can be done in one or two lines, depending on whether you create a second definition for your map (the easier way) or not.
Hint: Now that you have your end condition, you need to answer three questions:
how do I extract a single element from my list,
how do I reference the remaining elements of the list, and
how do I put it back together?
It helps to think about a single step in the process and what you need to accomplish in that step.