Compute in a new thread and refer to results later in R

In Clojure I can do something like this:
(def x
  ;; perform some expensive computation in a new thread;
  ;; the REPL is not blocked, so you can go on and do something else
  (future
    (do
      (Thread/sleep 500)
      3.14)))
;; ... do something else ...
;; now when you need x,
;; just deref the future to get 3.14
@x
Is there something similar to this in R?

On Linux you can fork a process and then collect it later, as illustrated on the help page ?parallel::mccollect(); this is more of a hack than a robust feature like future.
> p <- mcparallel({ Sys.sleep(5); "done" })
> sqrt(1:5)
[1] 1.000000 1.414214 1.732051 2.000000 2.236068
> mccollect(p)
$`15666`
[1] "done"
One can implement a similar strategy with snow::sendCall() / snow::recvResult(); see the implementation of snow::clusterCall() and note that a cluster can be subset, e.g., cl[1], to set a single node to work. (Identically named functions are available in the parallel package, but they are not exported.)
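A rough sketch of that strategy, using the identically named but unexported functions in parallel via ::: (this leans on internals and the snow-style signatures sendCall(node, fun, args) / recvResult(node), so treat it as illustrative rather than a supported API):
library(parallel)
cl <- makeCluster(2)
# hand a job to the first node only; sendCall() returns immediately
parallel:::sendCall(cl[[1]], fun = function() { Sys.sleep(5); "done" }, args = list())
# ... do something else in the meantime ...
sqrt(1:5)
# block until that node sends its result back
parallel:::recvResult(cl[[1]])
stopCluster(cl)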
To check whether a job has finished, use wait = FALSE. Example:
require(parallel)
p1 = mcparallel({Sys.sleep(90); list(tag = "job1", res = 1)})
p2 = mcparallel({Sys.sleep(80); 2})
p3 = mcparallel({Sys.sleep(60); 3})
res = mccollect(list(p1, p2, p3), wait = FALSE)
is.null(res)
If none of these 3 jobs has been finished, then res is NULL.
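A polling sketch building on the example above (illustrative only): keep checking with wait = FALSE, and only stop waiting once something has finished:
repeat {
  res <- mccollect(list(p1, p2, p3), wait = FALSE)
  if (!is.null(res)) break
  # ... do other useful work here instead of just sleeping ...
  Sys.sleep(1)
}
res   # results of whichever jobs finished first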

Related

ellipsis ... as function in substitute?

I'm having trouble understanding how/why parentheses work here where they otherwise should not.
f = function(...) substitute(...()); f(a, b)
[[1]]
a
[[2]]
b
# but, substitute returns ..1
f2 = function(...) substitute(...); f2(a, b)
a
Normally an error is thrown (could not find function "..." or '...' used in an incorrect context), for example when calling (\(...) ...())(5).
What I've tried
I have looked at the source code of substitute to find out why this doesn't happen here. R Internals sections 1.1.1 and 1.5.2 say ... is of SEXPTYPE DOTSXP, a pairlist of promises. These promises are what substitute extracts.
# \-substitute #R
# \-do_substitute #C
# \-substituteList #C recursive
# \-substitute #C
Going line by line, I am stuck at substituteList, in which h is the current element of ... being processed. This happens recursively at line 2832, if (TYPEOF(h) == DOTSXP) h = substituteList(h, R_NilValue);. I haven't found special handling of a ...() case in the source code, so I suspect something has already happened before this point.
In ?substitute we read that substitute works on a purely lexical basis. Does that mean ...() is a parser trick?
parse(text = "(\\(...) substitute(...()))(a, b)") |> getParseData() |> subset(text == "...", select = c(7, 9))
#> token text
#> 4 SYMBOL_FORMALS ...
#> 10 SYMBOL_FUNCTION_CALL ...
The second ellipsis is recognized during lexical analysis as the name of a function call. It doesn't have its own token like |> does. The output is a pairlist ( typeof(f(a, b)) ), which in this case is the same as a regular list (?). I guess it is not a parser trick. But whatever it is, it has been around for a while!
Question:
How does ...() work?
Note: When referring to documentation and source code, I provide links to an unofficial GitHub mirror of R's official Subversion repository. The links are bound to commit 97b6424 in the GitHub repo, which maps to revision 81461 in the Subversion repo (the latest at the time of this edit).
substitute is a "special" whose arguments are not evaluated (doc).
typeof(substitute)
[1] "special"
That means that the return value of substitute may not agree with parser logic, depending on how the unevaluated arguments are processed internally.
In general, substitute receives the call ...(<exprs>) as a LANGSXP of the form (pseudocode) pairlist(R_DotsSymbol, <exprs>) (doc). The context of the substitute call determines how the SYMSXP R_DotsSymbol is processed. Specifically, if substitute was called inside of a function with ... as a formal argument and rho as its execution environment, then the result of
findVarInFrame3(rho, R_DotsSymbol, TRUE)
in the body of C utility substituteList (source) is either a DOTSXP or R_MissingArg—the latter if and only if f was called without arguments (doc). In other contexts, the result is R_UnboundValue or (exceptionally) some other SEXP—the latter if and only if a value is bound to the name ... in rho. Each of these cases is handled specially by substituteList.
The multiplicity in the processing of R_DotsSymbol is the reason why these R statements give different results:
f0 <- function() substitute(...(n = 1)); f0()
## ...(n = 1)
f1 <- function(...) substitute(...(n = 1)); f1()
## $n
## [1] 1
g0 <- function() {... <- quote(x); substitute(...(n = 1))}; g0()
## Error in g0() : '...' used in an incorrect context
g1 <- function(...) {... <- quote(x); substitute(...(n = 1))}; g1()
## Error in g1() : '...' used in an incorrect context
h0 <- function() {... <- NULL; substitute(...(n = 1))}; h0()
## $n
## [1] 1
h1 <- function(...) {... <- NULL; substitute(...(n = 1))}; h1()
## $n
## [1] 1
Given how ...(n = 1) is parsed, you might have expected f1 to return call("...", n = 1), both g0 and g1 to return call("x", n = 1), and both h0 and h1 to throw an error, but that is not the case for the above, mostly undocumented reasons.
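To see what the parser actually produces, independently of substitute, you can quote the call and inspect its components (a quick sanity check; not from the original post):
as.list(quote(...(n = 1)))
## [[1]]
## ...
##
## $n
## [1] 1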
Internals
When called inside of the R function f,
f <- function(...) substitute(...(<exprs>))
substitute evaluates a call to the C utility do_substitute—you can learn this by looking here—in which argList gets a LISTSXP of the form pairlist(x, R_MissingArg), where x is a LANGSXP of the form pairlist(R_DotsSymbol, <exprs>) (source).
If you follow the body of do_substitute, then you will find that the value of t passed to substituteList from do_substitute is a LISTSXP of the form pairlist(copy_of_x) (source).
It follows that the while loop inside of the substituteList call (source) has exactly one iteration and that the statement CAR(el) == R_DotsSymbol in the body of the loop (source) is false in that iteration.
In the false branch of the conditional (source), h gets the value
pairlist(substituteList(copy_of_x, env)). The loop exits and substituteList returns h to do_substitute, which in turn returns CAR(h) to R (source 1, 2, 3).
Hence the return value of substitute is substituteList(copy_of_x, env), and it remains to deduce the identity of this SEXP. Inside of this call to substituteList, the while loop has 1+m iterations, where m is the number of <exprs>. In the first iteration, the statement CAR(el) == R_DotsSymbol in the body of the loop is true.
In the true branch of the conditional (source), h is either a DOTSXP or R_MissingArg, because f has ... as a formal argument (doc). Continuing, you will find that substituteList returns:
R_NilValue if h was R_MissingArg in the first while iteration and m = 0,
or, otherwise,
a LISTSXP listing the expressions in h (if h was a DOTSXP in the first while iteration) followed by <exprs> (if m ≥ 1), all unevaluated and without substitutions, because the execution environment of f is empty at the time of the substitute call.
Indeed:
f <- function(...) substitute(...())
is.null(f())
## [1] TRUE
f <- function(...) substitute(...(n = 1))
identical(f(a = sin(x), b = zzz), pairlist(a = quote(sin(x)), b = quote(zzz), n = 1))
## [1] TRUE
Misc
FWIW, it helped me to recompile R after adding some print statements to coerce.c. For example, I added the following before UNPROTECT(3); in the body of do_substitute (source):
Rprintf("CAR(t) == R_DotsSymbol? %d\n",
CAR(t) == R_DotsSymbol);
if (TYPEOF(CAR(t)) == LISTSXP || TYPEOF(CAR(t)) == LANGSXP) {
Rprintf("TYPEOF(CAR(t)) = %s, length(CAR(t)) = %d\n",
type2char(TYPEOF(CAR(t))), length(CAR(t)));
Rprintf("CAR(CAR(t)) = R_DotsSymbol? %d\n",
CAR(CAR(t)) == R_DotsSymbol);
Rprintf("TYPEOF(CDR(CAR(t))) = %s, length(CDR(CAR(t))) = %d\n",
type2char(TYPEOF(CDR(CAR(t)))), length(CDR(CAR(t))));
}
if (TYPEOF(s) == LISTSXP || TYPEOF(s) == LANGSXP) {
Rprintf("TYPEOF(s) = %s, length(s) = %d\n",
type2char(TYPEOF(s)), length(s));
Rprintf("TYPEOF(CAR(s)) = %s, length(CAR(s)) = %d\n",
type2char(TYPEOF(CAR(s))), length(CAR(s)));
}
which helped me confirm what was going into and coming out of the substituteList call on the previous line:
f <- function(...) substitute(...(n = 1))
invisible(f(hello, world, hello(world)))
CAR(t) == R_DotsSymbol? 0
TYPEOF(CAR(t)) = language, length(CAR(t)) = 2
CAR(CAR(t)) = R_DotsSymbol? 1
TYPEOF(CDR(CAR(t))) = pairlist, length(CDR(CAR(t))) = 1
TYPEOF(s) = pairlist, length(s) = 1
TYPEOF(CAR(s)) = pairlist, length(CAR(s)) = 4
invisible(substitute(...()))
CAR(t) == R_DotsSymbol? 0
TYPEOF(CAR(t)) = language, length(CAR(t)) = 1
CAR(CAR(t)) = R_DotsSymbol? 1
TYPEOF(CDR(CAR(t))) = NULL, length(CDR(CAR(t))) = 0
TYPEOF(s) = pairlist, length(s) = 1
TYPEOF(CAR(s)) = language, length(CAR(s)) = 1
Obviously, compiling R with debugging symbols and running R under a debugger helps, too.
Another puzzle
Just noticed this oddity:
g <- function(...) substitute(...(n = 1), new.env())
gab <- g(a = sin(x), b = zzz)
typeof(gab)
## [1] "language"
gab
## ...(n = 1)
Someone here can do another deep dive to find out why the result is a LANGSXP rather than a LISTSXP when you supply env different from environment() (including env = NULL).

Problem implementing a BFS algorithm in R

I'm trying to implement a BFS (breadth-first search) algorithm in R. I know about the graph::bfs function and do_bfs from DiagrammeR. I think my problem is in the "for" loop of the bfs function.
The input would be a graph like the following:
1
2 3
4 5 6 7
The output should be the visiting order; in this case, starting from 1: 1, 2, 3, 4, 5, 6, 7.
library(igraph)
library(foreach)
library(flifo)
library(digest)
# devtools::install_github("rdpeng/queue")
These packages seemed useful for the implementation, especially the queue one.
t <- make_tree(7, children = 2, mode = "out")
plot.igraph(t)
bfsg(t, 1)

bfsg <- function(g, n) {
  m <- c(replicate(length(V(t)), 0))
  q <- flifo::fifo()
  m[n] <- 1
  push(q, n)
  pr <- c(replicate(length(V(t)), 0))
}
At this point, 1 should be in the queue; after this, it gets printed and popped off the queue. After the pop, the algorithm should go to 2 and 3:
while (size(q) != 0) {
  print(n)
  pop(q)
}
for (i in unlist(adjacent_vertices(g, n, mode = "out"))) {
  if (m[i] == 0) {
    push(q, i)
    m[i] <- 2
  }
}
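For reference, a minimal working sketch of the intended BFS over the same igraph tree, using a plain vector as the queue instead of flifo::fifo() (an assumption made to keep the example self-contained):
library(igraph)

bfsg <- function(g, n) {
  visited <- rep(0, length(V(g)))   # 0 = not yet seen, 1 = seen
  queue <- c(n)
  visited[n] <- 1
  visit_order <- integer(0)
  while (length(queue) > 0) {
    v <- queue[1]                   # front of the queue
    queue <- queue[-1]              # pop it
    visit_order <- c(visit_order, v)
    for (i in unlist(adjacent_vertices(g, v, mode = "out"))) {
      if (visited[i] == 0) {
        visited[i] <- 1
        queue <- c(queue, i)        # enqueue unvisited neighbours
      }
    }
  }
  visit_order
}

t <- make_tree(7, children = 2, mode = "out")
bfsg(t, 1)
## [1] 1 2 3 4 5 6 7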

Tail recursion in mutually recursive functions

I have the following code:
let rec f n =
    if n < 10 then "f" + g (n+1) else "f"
and g n =
    if n < 10 then "g" + f (n+1) else "g"
I want to make these mutually recursive functions tail recursive for optimization.
I have tried the following :
let rec fT n =
    let rec loop a =
        if n < 10 then "f" + gT (a) else "f"
    loop (n + 1)
and gT n =
    let rec loop a =
        if n < 10 then "g" + fT (a) else "g"
    loop (n + 1)
Is that a correct tail-recursive version? If not, a hint in the right direction would be greatly appreciated.
EDIT (Second take on a solution):
let rec fA n =
    let rec loop n a =
        if n < 10 then loop (n + 1) ("f" + a) else a
    loop n "f"
and gA n =
    let rec loop n a =
        if n < 10 then loop (n + 1) ("g" + a) else a
    loop n "g"
EDIT (Third take on a solution):
let rec fA n a =
    if n < 10 then gA (n + 1) (a + "f") else a
and gA n a =
    if n < 10 then fA (n + 1) (a + "g") else a
EDIT (The correct solution):
let rec fA n a =
    if n < 10 then gA (n + 1) (a + "f") else (a + "f")
and gA n a =
    if n < 10 then fA (n + 1) (a + "g") else (a + "g")
Your solution is most definitely not tail-recursive.
"Tail-recursion" is such recursion where every recursive call is the last thing that the function does. This concept is important, because it means that the runtime can opt out of keeping a stack frame between the calls: since the recursive call is the very last thing, and the calling function doesn't need to do anything else after that, the runtime can skip returning control to the calling function, and have the called function return right to the top-level caller. This allows for expressing recursive algorithms of arbitrary depth without fear of running out of stack space.
In your implementation, however, the function fT.loop calls function gT, and then prepends "f" to whatever gT returned. This prepending of "f" happens after gT has returned, and therefore the call to gT is not the last thing that fT.loop does. Ergo, it is not tail-recursive.
In order to convert "regular" recursion into the "tail" kind, you have to "turn the logic inside out", so to speak. Let's look at the function f: it calls g and then prepends "f" to whatever g returned. This "f" prefix is the whole "contribution" of function f in the total computation. Now, if we want tail recursion, it means we can't make the "contribution" after the recursive call. This means that the contribution has to happen before. But if we do the contribution before the call and don't do anything after, then how do we avoid losing that contribution? The only way is to pass the contribution into the recursive call as argument.
This is the general idea behind tail-recursive computation: instead of waiting for the nested call to complete and then adding something to the output, we do the adding first and pass what has been "added so far" into the recursive call.
Going back to your specific example: since the contribution of f is the "f" character, it needs to add this character to what has been computed "so far" and pass that into the recursive call, which will then do the same, and so on. The "so far" argument should have the semantics of "compute whatever you were going to compute, and then prepend my 'so far' to that".
Since you've only asked for a "hint", and this is obviously homework (forgive me if I'm wrong), I am not going to post the actual code. Let me know if you'd rather I did.
I second the observation that your attempt definitely does not put the recursion in tail position.
How I would handle moving the recursion into tail position is by using a continuation. To do so, we'd have to implement fk and gk variants which take a continuation parameter; then f and g can be implemented using fk and gk respectively.
I'm not an F# expert, but I can illustrate this quite simply with JavaScript. I wouldn't ordinarily post an answer using a different language, but since the syntax is so similar, I think it will be helpful to you. It also has the added benefit that you can run this answer in the browser to see it working.
// f helper
let fk = (n, k) => {
  if (n < 10)
    return gk(n + 1, g => k("f" + g))
  else
    return k("f")
}

// g helper
let gk = (n, k) => {
  if (n < 10)
    return fk(n + 1, f => k("g" + f))
  else
    return k("g")
}

let f = n =>
  fk(n, x => x)

let g = n =>
  gk(n, x => x)
console.log(f(0)) // fgfgfgfgfgf
console.log(g(0)) // gfgfgfgfgfg
console.log(f(5)) // fgfgfg
console.log(g(5)) // gfgfgf
console.log(f(11)) // f
console.log(g(11)) // g

Tail recursion in R

I seem to misunderstand tail recursion; according to this Stack Overflow question, R does not support tail recursion. However, let's consider the following functions to compute the nth Fibonacci number:
Iterative version:
Fibo <- function(n){
a <- 0
b <- 1
for (i in 1:n){
temp <- b
b <- a
a <- a + temp
}
return(a)
}
"Naive" recursive version:
FiboRecur <- function(n){
if (n == 0 || n == 1){
return(n)
} else {
return(FiboRecur(n-1) + FiboRecur(n-2))
}
}
And finally an example I found that should be tail call recursive:
FiboRecurTail <- function(n){
fib_help <- function(a, b, n){
if(n > 0){
return(fib_help(b, a+b, n-1))
} else {
return(a)
}
}
return(fib_help(0, 1, n))
}
Now if we take a look at the traces when these functions are called, here is what we get:
Fibo(25)
trace: Fibo(25)
[1] 75025
trace(FiboRecur)
FiboRecur(25)
(thousands of calls to FiboRecur; takes a long time to run)
FiboRecurTail(25)
trace: FiboRecurTail(25)
[1] 75025
In the cases of Fibo(25) and FiboRecurTail(25), the answer is displayed instantaneously and only one call is made. For FiboRecur(25), thousands of calls are made and it runs for some seconds before showing the result.
We can also take a look at the run times using the benchmark function from the package rbenchmark:
benchmark(Fibo(30), FiboRecur(30), FiboRecurTail(30), replications = 5)
test replications elapsed relative user.self sys.self user.child sys.child
1 Fibo(30) 5 0.00 NA 0.000 0 0 0
2 FiboRecur(30) 5 13.79 NA 13.792 0 0 0
3 FiboRecurTail(30) 5 0.00 NA 0.000 0 0 0
So if R does not support tail recursion, what is happening in FiboRecurTail(25) that makes it run as fast as the iterative version while the "naive" recursive function runs like molasses? Is it rather that R supports tail recursion, but does not optimize a "naive" recursive version of a function to be tail-call recursive like other programming languages (Haskell for instance) do? This is what I understand from this post in R's mailing list.
I would greatly appreciate if someone would shed some light into this. Thanks!
The difference is that for each recursion, FiboRecur calls itself twice. Within FiboRecurTail, fib_help calls itself only once.
Thus you have a whole lot more function calls with the former. In the case of FiboRecurTail(25) you have a recursion depth of ~25 calls. FiboRecur(25) results in 242,785 function calls (including the first).
I didn't time any of the routines, but note that you show 0.00 for both of the faster routines. You should see some difference with a higher input value, but note that Fibo iterates exactly as much as FiboRecurTail recurses.
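A quick way to confirm that call count is to add a counter to the naive version and run it once (FiboRecurCount is a hypothetical helper, not from the original post):
calls <- 0L
FiboRecurCount <- function(n) {
  calls <<- calls + 1L    # count every invocation
  if (n == 0 || n == 1) n
  else FiboRecurCount(n - 1) + FiboRecurCount(n - 2)
}
FiboRecurCount(25)
## [1] 75025
calls
## [1] 242785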
In the naive recursive approach, you repeatedly calculate a lot of values. For example, when you calculate FiboRecur(30) you will calculate FiboRecur(29) and FiboRecur(28), and these two calls are independent. Within FiboRecur(29) you will calculate FiboRecur(28) again, and FiboRecur(27), even though FiboRecur(28) has already been calculated elsewhere as above. And this happens at every stage of the recursion. Simply put, with every increase of n the calculation effort almost doubles, when in reality it should be as simple as adding the last two calculated numbers together.
A little summary of FiboRecur(4): FiboRecur(0) is calculated twice, FiboRecur(1) is calculated three times, FiboRecur(2) is calculated twice, and FiboRecur(3) is calculated once. The first three should really be calculated only once each and stored somewhere so that the values can be looked up whenever they are needed. That is why you see so many function calls even though the input is not a large number.
In the tail-recursive version, however, the previously calculated values are passed on to the next stage via the a and b parameters, which avoids the countless repeated calculations of the naive recursive version and is thus more efficient.
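Unrolling fib_help by hand for a small input makes this concrete; each line is one tail call, with no pending work left behind:
## fib_help(0, 1, 5)
## -> fib_help(1, 1, 4)
## -> fib_help(1, 2, 3)
## -> fib_help(2, 3, 2)
## -> fib_help(3, 5, 1)
## -> fib_help(5, 8, 0)   # n == 0, so a = 5 is returned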
The following algorithm uses the accumulator-parameter technique to make things tail recursive, then wraps it in a memoization function.
The number of function calls shouldn't necessarily differ for tail recursion. This is mostly about managing stack memory, not speed. Every call to fib(n) generates calls to fib(n - 1) and fib(n - 2), except that in tail-recursive cases the stack frame is reused rather than a new one being allocated for each call.
Memoization is what gives a speed-boost. Results are cached for future use.
library(hash)
library(magrittr)   # provides the %>% pipe used below
library(purrr)      # provides partial() and map()

# Generate Fibonacci numbers
# Tail Recursive Algorithm using Accumulator Parameter Technique
fibTR <- function(n) {
  fibLoop <- function(acc, m, k) {
    if (k == 0)
      acc
    else
      fibLoop(acc = m, m = acc + m, k = k - 1)
  }
  fibLoop(acc = 0, m = 1, k = n)
}

# A generic memoization function for function fn taking integer input
memoize <- function(fn, inp) {
  cache <- hash::hash()
  key <- as.character(inp)
  if (hash::has.key(key = key, hash = cache))
    cache[[key]]
  else {
    cache[[key]] <- inp %>% fn
    cache[[key]]
  }
}

# Partial Application of a Function
# Memoized and Tail Recursive Fibonacci Number Generator
fib <- partial(.f = memoize, fn = fibTR)

# Get the first 10 Fibonacci numbers
map(.x = 0:9, .f = fib) %>% unlist
Running fibAux(10000) yields
Error: C stack usage 15927040 is too close to the limit
So, I doubt R does efficient tail call optimization.
Another issue is the construction of the cache or lookaside table. In functional languages such as Haskell, ML, and so on, that intermediary data structure gets built when you first partially apply the function. Assuming the same effect in R, another issue is that memory allocation in R is very expensive, and so is growing vectors, matrices, etc.: here we are growing a dictionary. If we pre-allocate a dictionary of appropriate size, then we have to supply the n argument, and the cache gets constructed every time we call the function, which defeats the purpose.
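One way around that last problem is to keep the cache in a closure so that it persists across calls; a small sketch (memoize_closure is a hypothetical name, not from the original answer):
memoize_closure <- function(fn) {
  cache <- new.env(parent = emptyenv())   # survives between calls
  function(inp) {
    key <- as.character(inp)
    if (exists(key, envir = cache, inherits = FALSE))
      return(get(key, envir = cache))
    res <- fn(inp)
    assign(key, res, envir = cache)
    res
  }
}
fib_memo <- memoize_closure(fibTR)
fib_memo(30)   # computed once ...
fib_memo(30)   # ... then served from the cache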
// Here is F# code to do the same:
// Generate Fibonacci numbers: Tail Recursive Algorithm
let fibTR n =
    let rec fibLoop acc m k =
        match k with
        | 0 -> acc
        | n -> fibLoop m (acc + m) (n - 1)
    fibLoop 0 1 n

// A generic memoization function
let memoize (fn: 'T -> 'U) =
    let cache = new System.Collections.Generic.Dictionary<_, _>()
    fun inp ->
        match cache.TryGetValue inp with
        | true, res -> res
        | false, _ ->
            let res = inp |> fn
            cache.Add(inp, res)
            res

// A tail recursive and memoized Fibonacci number generator
let fib = fibTR |> memoize

// Get the first 10 Fibonacci numbers
[ 0..9 ] |> List.map fib

List comprehensions and tuples in Julia

I am trying to do in Julia what this Python code does. (Find all pairs from the two lists whose combined value is above 7.)
#Python
def sum_is_large(a, b):
    return a + b > 7
l1 = [1,2,3]
l2 = [4,5,6]
l3 = [(a,b) for a in l1 for b in l2 if sum_is_large(a, b)]
print(l3)
There is no if for list comprehensions in Julia. And if I use filter(), I'm not sure if I can pass two arguments. So my best suggestion is this:
#Julia
function sum_is_large(pair)
    a, b = pair
    return a + b > 7
end
l1 = [1,2,3]
l2 = [4,5,6]
l3 = filter(sum_is_large, [(i,j) for i in l1, j in l2])
print(l3)
I don't find this very appealing. So my question is, is there a better way in Julia?
Using the very popular package Iterators.jl, in Julia:
using Iterators # install using Pkg.add("Iterators")
filter(x->sum(x)>7,product(l1,l2))
is an iterator producing the pairs. So to get the same printout as the OP:
l3iter = filter(x->sum(x)>7,product(l1,l2))
for p in l3iter println(p); end
The iterator approach is potentially much more memory efficient. Of course, one could just do l3 = collect(l3iter) to get the pair vector.
@user2317519, just curious, is there an equivalent iterator form for Python?
Guards (if) are now available in Julia v0.5 (currently in the release-candidate stage):
julia> v1 = [1, 2, 3];
julia> v2 = [4, 5, 6];
julia> v3 = [(a, b) for a in v1, b in v2 if a+b > 7]
3-element Array{Tuple{Int64,Int64},1}:
(3,5)
(2,6)
(3,6)
Note that generators are also now available:
julia> g = ( (a, b) for a in v1, b in v2 if a+b > 7 )
Base.Generator{Filter{##18#20,Base.Prod2{Array{Int64,1},Array{Int64,1}}},##17#19}(#17,Filter{##18#20,Base.Prod2{Array{Int64,1},Array{Int64,1}}}(#18,Base.Prod2{Array{Int64,1},Array{Int64,1}}([1,2,3],[4,5,6])))
Another option, similar to the one of @DanGetz, also using Iterators.jl:
function expensive_fun(a, b)
return (a + b)
end
Then, if the condition is also complicated, it can be defined as a function:
condition(x) = x > 7
And last, filter the results:
>>> using Iterators
>>> result = filter(condition, imap(expensive_fun, l1, l2))
result is an iterable that is only computed when needed (inexpensive) and can be collected with collect(result) if required.
The one-liner, if the filter condition is simple enough, would be:
>>> result = filter(x->(x > 7), imap(expensive_fun, l1, l2))
Note: imap works natively with an arbitrary number of parameters.
Perhaps something like this:
julia> filter(pair -> pair[1] + pair[2] > 7, [(i, j) for i in l1, j in l2])
3-element Array{Tuple{Any,Any},1}:
(3,5)
(2,6)
(3,6)
although I'd agree it doesn't look like it ought to be the best way...
I'm surprised nobody mentions the ternary operator to implement the conditional:
julia> l3 = [sum_is_large((i,j)) ? (i,j) : nothing for i in l1, j in l2]
3x3 Array{Tuple,2}:
nothing nothing nothing
nothing nothing (2,6)
nothing (3,5) (3,6)
or even just a normal if block within a compound statement, i.e.
[ (if sum_is_large((x,y)); (x,y); end) for x in l1, y in l2 ]
which gives the same result.
I feel this result makes a lot more sense than filter(), because in Julia the a in A, b in B construct is interpreted dimensionally, and therefore the output is in fact an "array comprehension" with appropriate dimensionality, which in many cases would clearly be advantageous and presumably the desired behaviour (whether we include a conditional or not).
filter, on the other hand, will always return a vector. Obviously, if you really want a vector result you can always collect the result; or, for a conditional list comprehension like the one here, you can simply remove the nothing elements from the array by doing l3 = l3[l3 .!= nothing].
Presumably this is still clearer and no less efficient than the filter() approach.
You can use the @vcomp (vector comprehension) macro in VectorizedRoutines.jl to do Python-like comprehensions:
using VectorizedRoutines
Python.@vcomp Int[i^2 for i in 1:10] when i % 2 == 0    # Int[4, 16, 36, 64, 100]
