Shadowing in F# - math

I'm wondering why F# allows shadowing, particularly within the same scope.
I've always thought of value binding in purely functional programming constructs as being akin to assignments in algebra/mathematics.
So for example,
y = x + 1
will be a valid mathematical expression but
y = y + 1
won't be.
However, because of shadowing,
let y = y + 1
is a perfectly valid expression in F#.
Why does the language allow this?

The most accurate answer is that the language's creator, Don Syme, felt it was a useful feature to add to the language.
There are some good uses for it, though. One is when working with F#-style optional parameters:
type C() =
    member _.M(?x) =
        let x = Option.defaultValue 0 x
        printfn "%d" x
F# optional parameters are options, which brings a high degree of correctness and consistency. But imagine having three or more optional parameters on a method: without shadowing, you would have to unwrap each one and invent a different name for each unwrapped value. This is one area where shadowing comes in handy.
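For instance, here is a quick sketch of that pattern with several optional parameters (the Connection type and its parameter names are made up for illustration):
type Connection() =
    member _.Open(?host, ?port, ?timeoutSeconds) =
        // Shadow each optional parameter with its unwrapped value,
        // so the rest of the method works with plain values.
        let host = Option.defaultValue "localhost" host
        let port = Option.defaultValue 8080 port
        let timeoutSeconds = Option.defaultValue 30 timeoutSeconds
        printfn "connecting to %s:%d (timeout %ds)" host port timeoutSeconds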
It's also handy when writing recursive subroutines. Consider the following naive implementation of sum for a list:
let mySum xs =
    let rec loop xs acc =
        match xs with
        | [] -> acc
        | h :: t -> loop t (h + acc)
    loop xs 0
I don't need to rebind xs for the inner loop because of shadowing. Since it's generic, xs is about as good a name as I can come up with, so it would be annoying to have to use a different name for the inner loop.
Shadowing isn't all good news, though. If you're not careful, types brought into scope by one open declaration can shadow types from one that was opened previously. This can be confusing; a small sketch of the problem follows below. F# editor tooling can distinguish bindings from the ones that shadow them, but you don't get that help with plain text. So the bottom line is: think carefully when applying shadowing in F#.
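Here is that sketch (the module and type names are made up):
module Drawing =
    type Color = Red | Green | Blue
module Reporting =
    type Color = Black | White

open Drawing
open Reporting

// Color now resolves to Reporting.Color; Drawing.Color is shadowed
// and has to be fully qualified to be used.
let c = Black
let d = Drawing.Red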

Related

Recursive Sequences in F#

Let's say I want to calculate the factorial of an integer. A simple approach to this in F# would be:
let rec fact (n: bigint) =
    match n with
    | x when x = 0I -> 1I
    | _ -> n * fact (n-1I)
But, if my program needs dynamic programming, how could I sustain functional programming whilst using memoization?
One idea I had for this was making a sequence of lazy elements, but I ran into a problem. Assume that the following code were acceptable in F# (it is not):
let rec facts =
    seq {
        yield 1I
        for i in 1I..900I do
            yield lazy (i * (facts |> Seq.item ((i-1I) |> int)))
    }
Is there anything similar to this idea in F#?
(Note: I understand that I could use a .NET Dictionary but isn't invoking the ".Add()" method imperative style?)
Also, is there any way I could generalize this with a function? For example, could I create a sequence of Collatz sequence lengths, using the function defined below:
let rec collatz n i =
    if n = 0 || n = 1 then (i+1)
    elif n % 2 = 0 then collatz (n/2) (i+1)
    else collatz (3*n+1) (i+1)
If you want to do it lazily, this is a nice approach:
let factorials =
    Seq.initInfinite (fun n -> bigint n + 1I)
    |> Seq.scan ((*)) 1I
    |> Seq.cache
The Seq.cache means you won't repeatedly evaluate elements you've already enumerated.
You can then take a particular number of factorials using e.g. Seq.take n, or get a particular factorial using Seq.item n.
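For example, a small usage sketch of the cached sequence above:
factorials |> Seq.item 5                 // 120I, i.e. 5!
factorials |> Seq.take 6 |> List.ofSeq   // [1I; 1I; 2I; 6I; 24I; 120I]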
First of all, I don't see from your example what you mean by "dynamic programming".
Using memoization doesn't mean something is not "functional" or breaks immutability. The important point is not how something is implemented; the important thing is how it behaves. A function that uses a mutable memoization cache internally is still considered pure, as long as it behaves like a pure function. So using mutable variables in a limited scope that is not visible to the caller is still considered pure. If the implementation were what mattered, we would also have to consider tail recursion as not pure, because the compiler transforms it into a loop with mutable variables under the hood. There are also some List.xyz functions that use mutation internally just for speed. Those functions are still considered pure/immutable because they still behave like pure functions.
A sequence itself is already lazy: it only computes its elements when you ask for them. So it doesn't make much sense to me to create a sequence that returns lazy elements.
If you want to speed up the computation, there are multiple ways to do it. Even in the recursive version you can use an accumulator that is passed to the next function call, instead of doing deep recursion.
let fact n =
    let rec loop acc x =
        if x > n
        then acc
        else loop (acc*x) (x+1I)
    loop 1I 1I
That overall is the same as
let fact' n =
    let mutable acc = 1I
    let mutable x = 1I
    while x <= n do
        acc <- acc * x
        x <- x + 1I
    acc
As long as you are learning functional programming it is a good idea to get accustomed to the first version and to learn how looping and recursion relate to each other. But besides learning, there isn't a reason why you should force yourself to always write the first version. In the end you should use what you consider more readable and easier to understand, not whatever avoids a mutable variable in its implementation.
In the end nobody really cares about the exact implementation. We should view functions as black boxes; as long as a function behaves like a pure function, everything is fine.
The above uses an accumulator, so you don't need to repeatedly call a function again to get a value, and so you also don't need an internal mutable cache. If you really have a slow recursive version and want to speed it up with caching, you can use something like this:
let fact x =
    let rec fact x =
        match x with
        | x when x = 1I -> 1I
        | x -> (fact (x-1I)) * x
    let cache = System.Collections.Generic.Dictionary<bigint,bigint>()
    match cache.TryGetValue x with
    | false,_ ->
        let value = fact x
        cache.Add(x,value)
        value
    | true,value ->
        value
But that would probably be slower than the versions with an accumulator. If you want to cache calls to fact even across multiple fact calls in your whole application, then you need an external cache. You could create a Dictionary outside of fact and use a private variable for this. But you can also use a function with a closure, and make the whole process itself generic.
let memoize (f:'a -> 'b) =
    let cache = System.Collections.Generic.Dictionary<'a,'b>()
    fun x ->
        match cache.TryGetValue x with
        | false,_ ->
            let value = f x
            cache.Add(x,value)
            value
        | true,value ->
            value

let rec fact x =
    match x with
    | x when x = 1I -> 1I
    | x -> (fact (x-1I)) * x
So now you can use something like this:
let fact = memoize fact
printfn "%A" (fact 100I)
printfn "%A" (fact 100I)
This way you can create a memoized function out of any other function that takes one parameter.
Note that memoization doesn't automatically speed up everything. If you use the memoize function on fact, nothing gets sped up; it will even be slower than without the memoization. You can add a printfn "Cache Hit" to the | true,value -> branch inside the memoize function. Calling fact 100I twice in a row will then only yield a single "Cache Hit" line.
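For illustration, here is the memoize function with that diagnostic print added (memoizeWithTrace is just a name picked for this sketch):
let memoizeWithTrace (f: 'a -> 'b) =
    let cache = System.Collections.Generic.Dictionary<'a,'b>()
    fun x ->
        match cache.TryGetValue x with
        | false, _ ->
            let value = f x
            cache.Add(x, value)
            value
        | true, value ->
            printfn "Cache Hit"
            value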
The problem is how the algorithm works together with the cache. fact starts at 100I and recurses down to 1I, and because the recursive calls inside fact refer to the original, un-memoized fact, they never consult the cache at all; only the outermost call does. So on the first call you never get a "Cache Hit", and you have the additional work of asking the cache. To really benefit from the cache you would need to change fact itself, for example so that it builds the result up from 1I to 100I through the memoized function. The current version can even throw a StackOverflow for big inputs, with or without the memoize function.
Only the second top-level call benefits from the cache; that is why calling fact 100I twice will only ever print "Cache Hit" once.
This is just an example of how easy it is to get the behaviour wrong with caching/memoization. In general you should try to write a function so that it is tail-recursive and uses accumulators instead. Don't write functions that depend on memoization to perform properly.
I would pick a solution with an accumulator. If you profiled your application and found that this is still too slow, that you have a bottleneck, and that caching fact would help, then you can also just cache the results of fact directly, with something like the following. You could use dict or a Map for this.
let factCache = [1I..100I] |> List.map (fun x -> x,fact x) |> dict
let factCache = [1I..100I] |> List.map (fun x -> x,fact x) |> Map.ofList
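A quick usage sketch of such a precomputed cache (assuming the fact defined above):
factCache.[10I]   // 3628800I, i.e. 10!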

What are real use cases of currying?

I've been reading lots of articles on currying, but almost all of them are misleading, explaining currying as partial function application, and almost all of the examples are about functions with an arity of 2, like an add function or something.
Also, many implementations of a curry function in JavaScript make it accept more than one argument per partial application (see lodash), while the Wikipedia article clearly says that currying is about:
translating the evaluation of a function that takes multiple arguments (or a tuple of arguments) into evaluating a sequence of functions, each with a single argument (partial application)
So basically currying is a series of partial applications each with a single argument. And I really want to know real uses of that, in any language.
The real use case of currying is partial application.
Currying by itself is not terribly interesting. What's interesting is when your programming language supports currying by default, as is the case in F# or Haskell.
You can define higher-order functions for currying and partial application in any language that supports first-class functions, but it's a far cry from the flexibility you get when every function is curried by default, and thus partially applicable without you having to do anything.
So if you see people conflating currying and partial application, that's because of how closely those concepts are tied in those languages: since currying is ubiquitous, you don't really need other forms of partial application than applying curried functions to consecutive arguments.
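For instance, in F# every function is curried by default, so partial application needs no ceremony; a small sketch:
let add x y = x + y          // int -> int -> int
let add5 = add 5             // partially applied: int -> int
[1; 2; 3] |> List.map add5   // [6; 7; 8]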
It is useful for passing context.
Consider the 'map' function. It takes a function as argument:
map : (a -> b) -> [a] -> [b]
Given a function which uses some form of context:
f : SomeContext -> a -> b
This means you can elegantly use the map function without having to state the 'a'-argument:
map (f actualContext) [1,2,3]
Without currying, you would have to use a lambda:
map (\a -> f actualContext a) [1,2,3]
Notes:
map is a function which takes a function f and a list containing values of type a. It constructs a new list by applying f to each a, resulting in a list of b.
e.g. map (+1) [1,2,3] = [2,3,4]
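The same idea rendered in F#, for readers following the F# questions above (applyWithContext and the "item" context are illustrative names):
let applyWithContext (ctx: string) (x: int) = sprintf "%s: %d" ctx x

// With currying, the partially applied function is passed directly:
[1; 2; 3] |> List.map (applyWithContext "item")
// Without it, an explicit lambda would be needed:
[1; 2; 3] |> List.map (fun x -> applyWithContext "item" x)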
The bearing currying has on code can be divided into two sets of issues (I use Haskell to illustrate): syntactic and implementation.
Syntax Issue 1:
Currying allows greater code clarity in certain cases.
What does clarity mean? Reading the function provides clear indication of its functionality.
e.g. The map function.
map : (a -> b) -> ([a] -> [b])
Read in this way, we see that map is a higher order function that lifts a function transforming as to bs to a function that transforms [a] to [b].
This intuition is particularly useful for understanding expressions such as:
map (map (+1))
The inner map has the type above [a] -> [b].
In order to figure out the type of the outer map, we recursively apply our intuition from above. The outer map thus lifts [a] -> [b] to [[a]] -> [[b]].
This intuition will carry you forward a LOT.
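The same intuition can be tried out directly in F#, for example:
// List.map (List.map ((+) 1)) lifts ((+) 1) twice:
// int -> int becomes int list list -> int list list
[[1; 2]; [3]] |> List.map (List.map ((+) 1))   // [[2; 3]; [4]]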
Once we generalize map into fmap, a map over arbitrary containers, it becomes really easy to read expressions like the following (note that I've monomorphised the type of each fmap to a different type for the sake of the example).
showInt : Int -> String
(fmap . fmap . fmap) showInt : Tree (Set [Int]) -> Tree (Set [String])
Hopefully the above illustrates that fmap provides this generalized notion of lifting vanilla functions into functions over some arbitrary container.
Syntax Issue 2:
Currying also allows us to express functions in point-free form.
nthSmallest : Int -> [Int] -> Maybe Int
nthSmallest n = safeHead . drop n . sort
safeHead (x:_) = Just x
safeHead _ = Nothing
The above is usually considered good style as it illustrates thinking in terms of a pipeline of functions rather than the explicit manipulation of data.
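A comparable point-free pipeline can be written in F# (a sketch; List.tryItem plays the role of safeHead . drop n):
let nthSmallest n = List.sort >> List.tryItem n
nthSmallest 2 [5; 1; 4; 2]   // Some 4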
Implementation:
In Haskell, point free style (through currying) can help us optimize functions. Writing a function in point free form will allow us to memoize it.
memoized_fib :: Int -> Integer
memoized_fib = (map fib [0 ..] !!)
  where fib 0 = 0
        fib 1 = 1
        fib n = memoized_fib (n-2) + memoized_fib (n-1)
not_memoized_fib :: Int -> Integer
not_memoized_fib x = map fib [0 ..] !! x
  where fib 0 = 0
        fib 1 = 1
        fib n = not_memoized_fib (n-2) + not_memoized_fib (n-1)
Writing it in point-free form, as in the memoized version, means that map fib [0 ..] is a single value shared across calls rather than being rebuilt each time x is supplied, and therefore the results are effectively memoized.

In pure functional languages, is data (strings, ints, floats.. ) also just functions?

I was thinking about pure Object Oriented Languages like Ruby, where everything, including numbers, int, floats, and strings are themselves objects. Is this the same thing with pure functional languages? For example, in Haskell, are Numbers and Strings also functions?
I know Haskell is based on the lambda calculus, which represents everything, including data and operations, as functions. It would seem logical to me that a "purely functional language" would model everything as a function, as well as keep with the definition that a function always returns the same output for the same inputs and has no state.
It's okay to think about that theoretically, but...
Just like in Ruby not everything is an object (argument lists, for instance, are not objects), not everything in Haskell is a function.
For more reference, check out this neat post: http://conal.net/blog/posts/everything-is-a-function-in-haskell
#wrhall gives a good answer. However you are somewhat correct that in the pure lambda calculus it is consistent for everything to be a function, and the language is Turing-complete (capable of expressing any pure computation that Haskell, etc. is).
That gives you some very strange things, since the only thing you can do to anything is to apply it to something else. When do you ever get to observe something? If you have some value f and want to know something about it, your only choice is to apply it to some value x to get f x, which is another function, and your only choice then is to apply that to another value y to get f x y, and so on.
Often I interpret the pure lambda calculus as talking about transformations on things that are not functions, but only capable of expressing functions itself. That is, I can make a function (with a bit of Haskelly syntax sugar for recursion & let):
purePlus = \zero succ natCase ->
  let plus = \m n -> natCase m n (\m' -> plus m' n)
  in plus (succ (succ zero)) (succ (succ zero))
Here I have expressed the computation 2+2 without needing to know that there are such things as non-functions. I simply took what I needed as arguments to the function I was defining, and the values of those arguments could be church encodings or they could be "real" numbers (whatever that means) -- my definition does not care.
And you could think the same thing of Haskell. There is no particular reason to think that there are things which are not functions, nor is there a particular reason to think that everything is a function. But Haskell's type system at least prevents you from applying an argument to a number (anybody thinking about fromInteger right now needs to hold their tongue! :-). In the above interpretation, it is because numbers are not necessarily modeled as functions, so you can't necessarily apply arguments to them.
In case it isn't clear by now, this whole answer has been somewhat of a technical/philosophical digression, and the easy answer to your question is "no, not everything is a function in functional languages". Functions are the things you can apply arguments to, that's all.
The "pure" in "pure functional" refers to the "freedom from side effects" kind of purity. It has little relation to the meaning of "pure" being used when people talk about a "pure object-oriented language", which simply means that the language manipulates purely (only) in objects.
The reason is that pure-as-in-only is a reasonable distinction to use to classify object-oriented languages, because there are languages like Java and C++, which clearly have values that don't have all that much in common with objects, and there are also languages like Python and Ruby, for which it can be argued that every value is an object1
Whereas for functional languages, there are no practical languages which are "pure functional" in the sense that every value the language can manipulate is a function. It's certainly possible to program in such a language. The most basic versions of the lambda calculus don't have any notion of things that are not functions, but you can still do arbitrary computation with them by coming up with ways of representing the things you want to compute on as functions.2
But while the simplicity and minimalism of the lambda calculus tends to be great for proving things about programming, actually writing substantial programs in such a "raw" programming language is awkward. The function representation of basic things like numbers also tends to be very inefficient to implement on actual physical machines.
But there is a very important distinction between languages that encourage a functional style but allow untracked side effects anywhere, and ones that actually enforce that your functions are "pure" functions (similar to mathematical functions). Object-oriented programming is very strongly wed to the use of impure computations3, so there are no practical object-oriented programming languages that are pure in this sense.
So the "pure" in "pure functional language" means something very different from the "pure" in "pure object-oriented language".4 In each case the "pure vs not pure" distinction is one that is completely uninteresting applied to the other kind of language, so there's no very strong motive to standardise the use of the term.
1 There are corner cases to pick at in all "pure object-oriented" languages that I know of, but that's not really very interesting. It's clear that the object metaphor goes much further in languages in which 1 is an instance of some class, and that class can be sub-classed, than it does in languages in which 1 is something else than an object.
2 All computation is about representation anyway. Computers don't know anything about numbers or anything else. They just have bit-patterns that we use to represent numbers, and operations on bit-patterns that happen to correspond to operations on numbers (because we designed them so that they would).
3 This isn't fundamental either. You could design a "pure" object-oriented language that was pure in this sense. I tend to write most of my OO code to be pure anyway.
4 If this seems obtuse, you might reflect that the terms "functional", "object", and "language" have vastly different meanings in other contexts also.
A very different angle on this question: all sorts of data in Haskell can be represented as functions, using a technique called Church encodings. This is a form of inversion of control: instead of passing data to functions that consume it, you hide the data inside a set of closures, and to consume it you pass in callbacks describing what to do with this data.
Any program that uses lists, for example, can be translated into a program that uses functions instead of lists:
-- | A list corresponds to a function of this type:
type ChurchList a r = (a -> r -> r)  -- ^ how to handle a cons cell
                   -> r              -- ^ how to handle the empty list
                   -> r              -- ^ result of processing the list
listToCPS :: [a] -> ChurchList a r
listToCPS xs = \f z -> foldr f z xs
That function is taking a concrete list as its starting point, but that's not necessary. You can build up ChurchList functions out of just pure functions:
-- | The empty 'ChurchList'.
nil :: ChurchList a r
nil = \f z -> z
-- | Add an element at the front of a 'ChurchList'.
cons :: a -> ChurchList a r -> ChurchList a r
cons x xs = \f z -> f x (xs f z)
foldChurchList :: (a -> r -> r) -> r -> ChurchList a r -> r
foldChurchList f z xs = xs f z
mapChurchList :: (a -> b) -> ChurchList a r -> ChurchList b r
mapChurchList f = foldChurchList step nil
  where step x = cons (f x)
filterChurchList :: (a -> Bool) -> ChurchList a r -> ChurchList a r
filterChurchList pred = foldChurchList step nil
  where step x xs = if pred x then cons x xs else xs
That last function uses Bool, but of course we can replace Bool with functions as well:
-- | A Bool can be represented as a function that chooses between two
-- given alternatives.
type ChurchBool r = r -> r -> r
true, false :: ChurchBool r
true a _ = a
false _ b = b
filterChurchList' :: (a -> ChurchBool r) -> ChurchList a r -> ChurchList a r
filterChurchList' pred = foldChurchList step nil
  where step x xs = pred x (cons x xs) xs
This sort of transformation can be done for basically any type, so in theory, you could get rid of all "value" types in Haskell, and keep only the () type, the (->) and IO type constructors, return and >>= for IO, and a suitable set of IO primitives. This would obviously be hella impractical—and it would perform worse (try writing tailChurchList :: ChurchList a r -> ChurchList a r for a taste).
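The same Church-style encoding can also be written in F#, to connect back to the F# questions above; a minimal sketch for booleans:
// A Church boolean is a function that chooses between two alternatives.
let ctrue  a _ = a
let cfalse _ b = b
let toBool churchBool = churchBool true false
toBool ctrue    // true
toBool cfalse   // false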
Is getChar :: IO Char a function or not? The Haskell Report doesn't provide us with a definition of "function". But it does state that getChar is a function (see here). (Well, at least we can say that it is a function.)
So I think the answer is YES.
I don't think there can be a correct definition of "function" other than "everything is a function". (What is a "correct definition"? Good question...) Consider the following example:
{-# LANGUAGE NoMonomorphismRestriction #-}
import Control.Applicative
f :: Applicative f => f Int
f = pure 1
g1 :: Maybe Int
g1 = f
g2 :: Int -> Int
g2 = f
Is f a function or a piece of data? It depends.

Why does ocaml need both "let" and "let rec"? [duplicate]

Possible Duplicate:
Why are functions in Ocaml/F# not recursive by default?
OCaml uses let to define a new function, or let rec to define a function that is recursive. Why does it need both of these - couldn't we just use let for everything?
For example, to define a non-recursive successor function and recursive factorial in OCaml (actually, in the OCaml interpreter) I might write
let succ n = n + 1;;
let rec fact n =
  if n = 0 then 1 else n * fact (n-1);;
Whereas in Haskell (GHCI) I can write
let succ n = n + 1
let fact n =
  if n == 0 then 1 else n * fact (n-1)
Why does OCaml distinguish between let and let rec? Is it a performance issue, or something more subtle?
Well, having both available instead of only one gives the programmer tighter control on the scope. With let x = e1 in e2, the binding is only present in e2's environment, while with let rec x = e1 in e2 the binding is present in both e1 and e2's environments.
(Edit: I want to emphasize that it is not a performance issue, that makes no difference at all.)
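F# follows the same convention as OCaml here, so the scope difference can be sketched like this (inside a local scope, where shadowing is allowed):
let demo () =
    let f x = x + 1                                // non-recursive binding
    let f x = if x = 0 then 0 else f (x - 1)       // without 'rec': the f on the right is the previous f
    let rec g x = if x = 0 then 0 else g (x - 1)   // with 'rec': g in its body refers to g itself
    f 3, g 3                                       // (3, 0)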
Here are two situations where having this non-recursive binding is useful:
shadowing an existing definition with a refinement that uses the old binding. Something like: let f x = (let x = sanitize x in ...), where sanitize is a function that ensures the input has some desirable property (e.g. it takes a possibly-non-normalized vector and normalizes it, etc.). This is very useful in some cases.
metaprogramming, for example macro writing. Imagine I want to define a macro SQUARE(foo) that desugars into let x = foo in x * x, for any expression foo. I need this binding to avoid code duplication in the output (I don't want SQUARE(factorial n) to compute factorial n twice). This is only hygienic if the let binding is not recursive, otherwise I couldn't write let x = 2 in SQUARE(x) and get a correct result.
So I claim it is very important indeed to have both the recursive and the non-recursive binding available. Now, the default behaviour of the let-binding is a matter of convention. You could say that let x = ... is recursive, and one must use let nonrec x = ... to get the non-recursive binder. Picking one default or the other is a matter of which programming style you want to favor, and there are good reasons to make either choice. Haskell suffers¹ from the unavailability of this non-recursive mode, and OCaml has exactly the same defect at the type level: type foo = ... is recursive, and there is no non-recursive option available -- see this blog post.
¹: when Google Code Search was available, I used it to search Haskell code for the pattern let x' = sanitize x in .... This is the usual workaround when non-recursive binding is not available, but it's less safe because you risk writing x instead of x' by mistake later on -- in some cases you want to have both available, so picking a different name can be voluntary. A good idiom would be to use a longer variable name for the first x, such as unsanitized_x. Anyway, just looking for x' literally (no other variable name) and x1 turned up a lot of results. Erlang (and all languages that try to make variable shadowing difficult: CoffeeScript, etc.) has even worse problems of this kind.
That said, the choice of having Haskell bindings recursive by default (rather than non-recursive) certainly makes sense, as it is consistent with lazy evaluation by default, which makes it really easy to build recursive values -- while strict-by-default languages have more restrictions on which recursive definitions make sense.

Confused over behavior of List.mapi in F#

I am building some equations in F#, and when working on my polynomial class I found some odd behavior using List.mapi
Basically, each polynomial has an array, so 3*x^2 + 5*x + 6 would be [|6; 5; 3|] in the array. So, when adding polynomials, if one array is longer than the other, I just need to append the extra elements to the result, and that is where I ran into a problem.
Later I want to generalize it to not always use a float, but that will be after I get more working.
So, the problem is that I expected List.mapi to return a List not individual elements, but, in order to put the lists together I had to put [] around my use of mapi, and I am curious why that is the case.
This is more complicated than I expected, I thought I should be able to just tell it to make a new List starting at a certain index, but I can't find any function for that.
type Polynomial() =
    let mutable coefficients:float [] = Array.empty
    member self.Coefficients with get() = coefficients
    static member (+) (v1:Polynomial, v2:Polynomial) =
        let ret = List.map2 (fun c p -> c + p) (List.ofArray v1.Coefficients) (List.ofArray v2.Coefficients)
        let a = List.mapi (fun i x -> x)
        match v1.Coefficients.Length - v2.Coefficients.Length with
        | x when x < 0 ->
            ret :: [((List.ofArray v1.Coefficients) |> a)]
        | x when x > 0 ->
            ret :: [((List.ofArray v2.Coefficients) |> a)]
        | _ -> [ret]
I think that a straightforward implementation using lists and recursion would be simpler in this case. An alternative implementation of the Polynomial class might look roughly like this:
// The type is immutable and takes initial list as constructor argument
type Polynomial(coeffs:float list) =
    // Local recursive function implementing the addition using lists
    let rec add l1 l2 =
        match l1, l2 with
        | x::xs, y::ys -> (x+y) :: (add xs ys)
        | rest, [] | [], rest -> rest
    member self.Coefficients = coeffs
    static member (+) (v1:Polynomial, v2:Polynomial) =
        // Add lists using local function
        let newList = add v1.Coefficients v2.Coefficients
        // Wrap result into new polynomial
        Polynomial(newList)
It is worth noting that you don't really need mutable field in the class, since the + operator creates and returns a new instance of the type, so the type is fully immutable (as you'd usually want in F#).
The nice thing in the add function is that after processing all elements that are available in both lists, you can simply return the tail of the non-empty list as the rest.
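For example, a small usage sketch of the list-based type above (the numbers are just illustrative):
let p1 = Polynomial [6.0; 5.0; 3.0]   // 3x^2 + 5x + 6
let p2 = Polynomial [7.0]             // the constant 7
(p1 + p2).Coefficients                // [13.0; 5.0; 3.0]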
If you wanted to implement the same functionality using arrays, then it may be better to use a simple for loop (since arrays are, in principle, imperative, the usual imperative patterns are usually the best option for dealing with them). However, I don't think there is any particular reason for preferring arrays (maybe performance, but that would have to be evaluated later during the development).
As Pavel points out, the :: operator prepends a single element to the front of a list (see the add function above, which demonstrates that). You could write what you wanted using @, which concatenates lists, or using Array.concat (which concatenates a sequence of arrays).
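A quick sketch of the difference:
1 :: [2; 3]                        // [1; 2; 3] - prepends a single element
[1; 2] @ [3; 4]                    // [1; 2; 3; 4] - concatenates two lists
Array.append [| 1; 2 |] [| 3 |]    // [| 1; 2; 3 |] - the array equivalent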
An implementation using higher-order functions and arrays is also possible - the best version I can come up with would look like this:
let add (a1:_[]) (a2:_[]) =
    // Add parts where both arrays have elements
    let l = min a1.Length a2.Length
    let both = Array.map2 (+) a1.[0 .. l-1] a2.[0 .. l-1]
    // Take the rest of the longer array
    let rest =
        if a1.Length > a2.Length
        then a1.[l .. a1.Length - 1]
        else a2.[l .. a2.Length - 1]
    // Concatenate them
    Array.concat [ both; rest ]
add [| 6; 5; 3 |] [| 7 |]
This uses slices (e.g. a.[0 .. l]) which give you a part of an array - you can use these to take the parts where both arrays have elements and the remaining part of the longer array.
I think you're misunderstanding what the :: operator does. It's not used to concatenate two lists; it's used to prepend a single element to a list. Consequently, its type is:
'a -> 'a list -> 'a list
In your case, you're giving ret as the first argument, and ret is itself a float list. Consequently, it expects the second argument to be of type float list list - hence why you need to add an extra [] around the second argument to make it compile - and that will also be the result type of your operator +, which is probably not what you want.
You can use List.concat to concatenate two (or more) lists, but that is inefficient. In your example, I don't see the point of using lists at all - all this converting back & forth is going to be costly. For arrays, you can use Array.append, which is better.
By the way, it's not clear what the purpose of mapi in your code is at all. It's exactly the same as map except for the index argument, but you're not using the index, and your mapping is the identity function, so it's effectively a no-op. What is it for?

Resources