Slow asynchronous recursion in the Dart language

I found that asynchronous recursion in the Dart language seems to be incredibly slow in the following code sample, and I'd like to know why.
import "dart:async";
Stream<int> rec(int z) async* {
yield z;
if (z > 0) yield* rec(z - 1);
}
void main() {
Stream<int> stream = rec(10000);
stream.listen((int x) {
if (x % 1000 == 0) print(x);
});
}
I'm testing this on the Dart VM, so I can't believe that a timer is involved, as there likely would be in a JS VM in the browser.
If yield* were efficient enough, I think it could also serve as a replacement for trampolining a recursive function to avoid stack size limitations.

Sounds like the following issue, which has a long history. I recommend you read the thread, since I don't think I can write a summary here that would do it justice: https://github.com/dart-lang/sdk/issues/29189
I want to add that your example can also be written using sync*, which is much faster but of course not asynchronous:
Iterable<int> rec(int z) sync* {
  yield z;
  if (z > 0) yield* rec(z - 1);
}

void main() {
  final iterable = rec(10000);
  iterable.forEach((int x) {
    if (x % 1000 == 0) print(x);
  });
}

Parallel work stealing in arbitrary order in Rust

I'm trying to write a parallel data loader for deep learning in Rust. The task is to write an iterator that under the hood does the following:
1. Reads files from disk and applies some compute-heavy preprocessing to them; the result is generally a numeric array (or multiple).
2. Groups the results of the previous step into batches of size B and "collates" them; this generally means just concatenating the arrays and is moderately compute-heavy.
3. Yields the results from step 2.
Step 1 can be both IO and compute bound, depending on network latency, the size of the files, and the complexity of the preprocessing. It has to be run in parallel by many workers. Step 2 should be off the main thread but likely doesn't need a pool of workers. Step 3 happens on the main thread (exposed to Python).
The reason I write it in Rust is that Python offers two options: a pure Python implementation shipped with PyTorch, based on multiprocessing, which is somewhat slow but very flexible (arbitrary user-defined data preprocessing and batching), and a C++ implementation shipped with Tensorflow, which is assembled by the user from a set of predefined primitives. The latter is substantially faster but too restrictive for the kinds of data processing I wish to do. I expect that Rust will give me the speed of Tensorflow with the flexibility of arbitrary code as in PyTorch.
My question is purely about the way to implement parallelism. The ideal setup is to have N workers for step 1) -> channel -> worker for step 2) -> channel -> step 3. Because the iterator object may be dropped at any time, there is a strict requirement to be able to terminate the whole scheme after Drop. On the other hand, there is the flexibility of loading the files in an arbitrary order: for example if the batch size B == 16 and max_n_threads == 32, it is perfectly fine to start 32 workers and yield the first batch containing the 16 examples which happen to return first. This can be exploited for speed.
My naive implementation creates the DataLoader in 3 steps:
1. Create an n_working: Arc<AtomicUsize> to control the number of active worker threads and a should_shutdown: Arc<AtomicBool> to signal shutdown (when Drop is called).
2. Create a thread responsible for maintaining the pool. It spins on n_working < max_n_threads and keeps spawning worker threads, which terminate on should_shutdown and otherwise fetch a single example, send it down the worker->batcher channel, and decrement n_working.
3. Create a batching thread which polls the worker->batcher channel and, upon receiving B objects, concatenates them into a batch and sends it down the batcher->yielder channel.
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::thread;

use crossbeam::channel::{bounded, Receiver};
use pyo3::prelude::*;

#[pyclass]
struct DataLoader {
    collate_worker: Option<thread::JoinHandle<()>>,
    example_worker: Option<thread::JoinHandle<()>>,
    should_shut_down: Arc<AtomicBool>,
    receiver: Receiver<Batch>,
    length: usize,
}

impl DataLoader {
    fn new(
        dataset: Dataset,
        batch_size: usize,
        capacity: usize,
    ) -> Self {
        let n_batches = dataset.len() / batch_size;
        let max_n_threads = capacity * batch_size;
        let (example_sender, collate_receiver) = bounded((batch_size - 1) * capacity);
        let should_shut_down = Arc::new(AtomicBool::new(false));
        let shutdown_flag = should_shut_down.clone();
        let example_worker = thread::spawn(move || {
            rayon::scope_fifo(|s| {
                let dataset = &dataset;
                let n_working = Arc::new(AtomicUsize::new(0));
                let mut current_index = 0;
                while current_index < n_batches * batch_size {
                    if n_working.load(Ordering::Relaxed) == max_n_threads {
                        continue;
                    }
                    if shutdown_flag.load(Ordering::Relaxed) {
                        break;
                    }
                    let index = current_index;
                    let sender = example_sender.clone();
                    let counter = n_working.clone();
                    let shutdown_flag = shutdown_flag.clone();
                    s.spawn_fifo(move |_s| {
                        let example = dataset.get_example(index);
                        if !shutdown_flag.load(Ordering::Relaxed) {
                            let _ = sender.send(example);
                        } // if we should shut down, skip sending
                        counter.fetch_sub(1, Ordering::Relaxed);
                    });
                    current_index += 1;
                    n_working.fetch_add(1, Ordering::Relaxed);
                }
            });
        });

        let (batch_sender, final_receiver) = bounded(capacity);
        let shutdown_flag = should_shut_down.clone();
        let collate_worker = thread::spawn(move || {
            'outer: loop {
                let mut batch = vec![];
                for _ in 0..batch_size {
                    if let Ok(example) = collate_receiver.recv() {
                        batch.push(example);
                    } else {
                        break 'outer;
                    }
                }
                let collated = collate(batch);
                if shutdown_flag.load(Ordering::Relaxed) {
                    break; // skip sending
                }
                let _ = batch_sender.send(collated);
            }
        });

        Self {
            collate_worker: Some(collate_worker),
            example_worker: Some(example_worker),
            should_shut_down,
            receiver: final_receiver,
            length: n_batches,
        }
    }
}

#[pymethods]
impl DataLoader {
    fn __iter__(slf: PyRef<Self>) -> PyRef<Self> { slf }

    fn __next__(&mut self) -> Option<Batch> {
        self.receiver.recv().ok()
    }

    fn __len__(&self) -> usize {
        self.length
    }
}

impl Drop for DataLoader {
    fn drop(&mut self) {
        self.should_shut_down.store(true, Ordering::Relaxed);
        if self.collate_worker.take().unwrap().join().is_err() {
            println!("Panic in collate worker");
        }
        if self.example_worker.take().unwrap().join().is_err() {
            println!("Panic in example_worker");
        }
        println!("dropped the dataloader");
    }
}
This implementation works and roughly matches the performance of PyTorch but provides no significant speedup. I don't know where to look for improvements, but I imagine it would help to have the thing load-balance automatically in a work-stealing way and to flexibly spawn workers depending on the proportion of IO and compute time. I am also expecting performance issues due to the spinning pool manager and likely corner cases in my handling of Drop.
My question is how to best approach the problem. I am generally unsure if this should be tackled with parallel crates like rayon, async crates like tokio, or a mix of both. I also have the hunch my implementation could be much simpler with the correct use of their combinators/higher order APIs. I tried with rayon but I couldn't get a solution which doesn't wastefully enforce the original sequential returning order and respects the Drop requirement.
Okay I think I've figured out a solution for you that uses rayon parallel iterators.
The trick is to use Results in the rayon iterators, and return Err if the cancellation flag is set.
I first created a utility type to create a cancellable thread in which you can execute rayon iterators. You use it by passing in the thread closure which takes the atomic cancellation token as a parameter. Then you have to check if the cancellation token is true, and if so, exit early.
use std::sync::Arc;
use std::sync::atomic::{Ordering, AtomicBool};
use std::thread::JoinHandle;

fn collate(batch: &[Computed]) -> Batch {
    batch.iter().map(|&x| i128::from(x)).sum()
}

#[derive(Debug)]
struct Cancelled;

struct CancellableThread<Output: Send + 'static> {
    cancel_token: Arc<AtomicBool>,
    thread: Option<JoinHandle<Result<Output, Cancelled>>>,
}

impl<Output: Send + 'static> CancellableThread<Output> {
    fn new<F: FnOnce(Arc<AtomicBool>) -> Result<Output, Cancelled> + Send + 'static>(init: F) -> Self {
        let cancel_token = Arc::new(AtomicBool::new(false));
        let thread_cancel_token = Arc::clone(&cancel_token);
        CancellableThread {
            thread: Some(std::thread::spawn(move || init(thread_cancel_token))),
            cancel_token,
        }
    }

    fn output(mut self) -> Output {
        self.thread.take().unwrap().join().unwrap().unwrap()
    }
}

impl<Output: Send + 'static> Drop for CancellableThread<Output> {
    fn drop(&mut self) {
        self.cancel_token.store(true, Ordering::Relaxed);
        if let Some(thread) = self.thread.take() {
            let _ = thread.join().unwrap();
        }
    }
}
I found it useful to create a closure that returns a Result<(), Cancelled> so I could use the try operator (?) to exit early.
CancellableThread::new(move |cancel_token| {
    let cancelled = || if cancel_token.load(Ordering::Relaxed) {
        Err(Cancelled)
    } else {
        Ok(())
    };
    loop {
        // was the thread dropped?
        // if so, stop what we're doing
        cancelled()?;
        // do stuff and
        // eventually return a result
    }
});
I then used that CancellableThread abstraction in the DataLoader. No need to create a special Drop impl for it, because by default, it will call drop on each field anyway, which handles the cancellation.
type Data = Vec<u8>;
type Dataset = Vec<Data>;
type Computed = u64;
type Batch = i128;

use rayon::prelude::*;
use crossbeam::channel::{unbounded, Receiver};

struct DataLoader {
    example_worker: CancellableThread<()>,
    collate_worker: CancellableThread<()>,
    receiver: Receiver<Batch>,
    length: usize,
}
I used unbounded channels, as it was one less thing to bother about. It shouldn't be hard to switch to bounded ones instead.
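For illustration, here is a minimal sketch of the bounded variant, mirroring the capacity scheme from the question; the helper and the exact sizes are hypothetical, not something the solution depends on:

use crossbeam::channel::{bounded, Receiver, Sender};

// Hypothetical helper: with bounded channels, send() blocks once a buffer
// is full, which throttles the producers instead of letting the queues
// grow without limit.
fn make_channels(
    batch_size: usize,
    capacity: usize,
) -> ((Sender<Computed>, Receiver<Computed>), (Sender<Batch>, Receiver<Batch>)) {
    let example_channel = bounded::<Computed>((batch_size - 1) * capacity);
    let batch_channel = bounded::<Batch>(capacity);
    (example_channel, batch_channel)
}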
impl DataLoader {
    fn new(dataset: Dataset, batch_size: usize) -> Self {
        let (example_sender, collate_receiver) = unbounded();
        let (batch_sender, final_receiver) = unbounded();
I'm not sure if you can always guarantee that the number of items in your dataset will be a multiple of the batch_size, so I decided to handle that explicitly.
        let length = if dataset.len() % batch_size == 0 {
            dataset.len() / batch_size
        } else {
            dataset.len() / batch_size + 1
        };
I created the collating worker first, though that may not be necessary. As you can see, I had to duplicate a little bit to handle partial batches.
        let collate_worker = CancellableThread::new(move |cancel_token| {
            let cancelled = || if cancel_token.load(Ordering::Relaxed) {
                Err(Cancelled)
            } else {
                Ok(())
            };
            'outer: loop {
                let mut batch = Vec::with_capacity(batch_size);
                for _ in 0..batch_size {
                    cancelled()?;
                    if let Ok(data) = collate_receiver.recv() {
                        batch.push(data);
                    } else {
                        if !batch.is_empty() {
                            // handle the last batch, if there
                            // weren't enough items to fill it
                            let collated = collate(&batch);
                            cancelled()?;
                            batch_sender.send(collated).unwrap();
                        }
                        break 'outer;
                    }
                }
                let collated = collate(&batch);
                cancelled()?;
                batch_sender.send(collated).unwrap();
            }
            Ok(())
        });
The example worker is where things are really made much simpler, because we can just use rayon parallel iterators. As you can see, we check for cancellation before each heavy computation.
        let example_worker = CancellableThread::new(move |cancel_token| {
            let cancelled = || if cancel_token.load(Ordering::Relaxed) {
                Err(Cancelled)
            } else {
                Ok(())
            };
            let heavy_compute = |data: Data| -> Result<Computed, Cancelled> {
                cancelled()?;
                Ok(data.iter().map(|&x| u64::from(x)).product())
            };
            dataset
                .into_par_iter()
                .map(heavy_compute)
                .try_for_each(|computed| {
                    example_sender.send(computed?).unwrap();
                    Ok(())
                })
        });
Then we just construct the DataLoader. You can see the Python impl is identical:
        DataLoader {
            example_worker,
            collate_worker,
            receiver: final_receiver,
            length,
        }
    }
}
// #[pymethods]
impl DataLoader {
    fn __iter__(this: Self /* PyRef<Self> */) -> Self /* PyRef<Self> */ { this }

    fn __next__(&mut self) -> Option<Batch> {
        self.receiver.recv().ok()
    }

    fn __len__(&self) -> usize {
        self.length
    }
}
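For reference, here is a hypothetical usage sketch with dummy data, matching the type aliases above (Data = Vec<u8>, Batch = i128); it only shows how the pieces fit together:

fn main() {
    // 100 tiny examples of 8 bytes each (dummy data)
    let dataset: Dataset = (0..100u8).map(|i| vec![i; 8]).collect();
    let mut loader = DataLoader::new(dataset, 16);
    // drive it the way Python's iterator protocol would
    while let Some(batch) = loader.__next__() {
        println!("got batch: {}", batch);
    }
    // when `loader` is dropped here, each CancellableThread sets its
    // cancellation token and joins, so the workers shut down cleanly
}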
playground

Unable to change PathBuf/Path variable in async function

I was unsure if I should post this here or on Code Review; Code Review seems to accept only functioning code.
So I have a multitude of problems I don't really understand.
(I'm a noob.) The full code can be found here: https://github.com/NicTanghe/winder/blob/main/src/main.rs
The main problem is here:
let temp = location_loc1.parent().unwrap();
location_loc1.push(&temp);
I've tried various things to get around the problems with borrowing as mutable or as a reference, and I can't seem to get it to work; I just get a different set of errors with everything I try.
Furthermore, I'm sorry if this is a duplicate, but looking for separate solutions to the errors just gave me a different error, in a circle.
Full function:
async fn print_events(mut selector_loc1: i8, location_loc1: PathBuf) {
    let mut reader = EventStream::new();
    loop {
        //let delay = Delay::new(Duration::from_millis(1_000)).fuse();
        let mut event = reader.next().fuse();
        select! {
            // _ = delay => {
            //     print!("{esc}[2J{esc}[1;1H{}", esc = 27 as char,);
            // },
            maybe_event = event => {
                match maybe_event {
                    Some(Ok(event)) => {
                        //println!("Event::{:?}\r", event);
                        // if event == Event::Mouse(MouseEvent::Up("Left").into()) {
                        //     println!("Cursor position: {:?}\r", position());
                        // }
                        print!("{esc}[2J{esc}[1;1H{}", esc = 27 as char,);
                        if event == Event::Key(KeyCode::Char('k').into()) {
                            if selector_loc1 > 0 {
                                selector_loc1 -= 1;
                            };
                            //println!("go down");
                            //println!("{}", selected)
                        } else if event == Event::Key(KeyCode::Char('j').into()) {
                            selector_loc1 += 1;
                            //println!("go up");
                            //println!("{}", selected)
                        } else if event == Event::Key(KeyCode::Char('h').into()) {
                            //-----------------------------------------
                            //-------------BackLogic-------------------
                            //-----------------------------------------
                            let temp = location_loc1.parent().unwrap();
                            location_loc1.push(&temp);
                            //------------------------------------------
                            //------------------------------------------
                        } else if event == Event::Key(KeyCode::Char('l').into()) {
                            //go to next dir
                        }
                        if event == Event::Key(KeyCode::Esc.into()) {
                            break;
                        }
                        printtype(location_loc1, selector_loc1);
                    }
                    Some(Err(e)) => println!("Error: {:?}\r", e),
                    None => break,
                }
            }
        };
    }
}
Also, it seems that using
use async_std::path::{Path, PathBuf};
makes Rust not recognize the unwrap() function → how would I use this?
There are two problems with your code.
1. Your PathBuf is immutable. It's not possible to modify immutable objects unless they support interior mutability, and PathBuf does not. Therefore you have to make your variable mutable. You can either add mut in front of it like this:
async fn print_events(mut selector_loc1: i8, mut location_loc1: PathBuf) {
Or you can re-bind it:
let mut location_loc1 = location_loc1;
2. You cannot borrow it both mutably and immutably - mutable borrows are exclusive! Given that the method .parent() borrows the buffer, you have to create a temporary owned value:
// the PathBuf instance
let mut path = PathBuf::from("root/parent/child");
// notice the .map(|p| p.to_owned()) method - it helps us avoid the immutable borrow
let parent = path.parent().map(|p| p.to_owned()).unwrap();
// now it's fine to modify it, as it's not borrowed
path.push(parent);
Your second question:
also, it seems that using use async_std::path::{Path, PathBuf}; makes Rust not recognize the unwrap() function → how would I use this?
The async-std version is just a wrapper over std's PathBuf. It just delegates to the standard implementation, so it should not behave differently:
// copied from async-std's PathBuf implementation
pub struct PathBuf {
    inner: std::path::PathBuf,
}
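If it helps, here is a minimal sketch of the same parent()/push fix using the async-std type; it assumes async-std's path-manipulation methods mirror std's, which they should, since they delegate to it:

use async_std::path::PathBuf;

fn main() {
    // build the async-std PathBuf via its conversion from std's PathBuf
    let mut path: PathBuf = std::path::PathBuf::from("root/parent/child").into();
    // clone the parent into an owned value to end the immutable borrow
    let parent = path.parent().map(|p| p.to_path_buf()).unwrap();
    // now push() can take the only (mutable) borrow
    path.push(parent);
    println!("{:?}", path);
}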

StackOverflowError on factorial using recursion in Kotlin

This is my code, which gives a stack overflow error 30 times on the output console:
fun main(args: Array<String>) {
    var no: Int = Integer.parseInt(readLine()) // read input from user and convert to Integer
    var ans: Int = calculateFact(no) // call function and store to ans variable
    println("Factorial of " + no + " is " + ans) // print result
}

fun calculateFact(no: Int): Int { // function for recursion
    if (no == 0) {
        return 1
    }
    return (no * calculateFact(no))
}
I don't know what the error is. Please help me solve it.
You should return
no*calculateFact(no - 1)
not
no*calculateFact(no)
otherwise the recursion can never end.
Other than the mistake in the recursion that was already pointed out, it's worth mentioning that your method will still only work correctly for numbers up to 12, since 13! is larger than the maximum value that you can store in an Int. Therefore, for numbers 13 and up, you'll essentially get "random" results due to overflow.
If you just use BigInteger instead, it will work until the call stack gets too deep and causes a stack overflow; this happens at around 8000 on my machine.
import java.math.BigInteger

fun calculateFact(no: BigInteger): BigInteger {
    if (no == BigInteger.ZERO) {
        return BigInteger.ONE
    }
    return (no * calculateFact(no - BigInteger.ONE))
}

fun main(args: Array<String>) {
    val no: BigInteger = BigInteger(readLine())
    val ans: BigInteger = calculateFact(no)
    println("Factorial of $no is $ans")
}
If you want to handle numbers larger than that, you can use a tailrec function (this specific solution is taken from this article):
tailrec fun calculateFact(acc: BigInteger, n: BigInteger): BigInteger {
    if (n == BigInteger.ZERO) {
        return acc
    }
    return calculateFact(n * acc, n - BigInteger.ONE)
}

fun calculateFact(n: BigInteger): BigInteger {
    return calculateFact(BigInteger.ONE, n)
}

fun main(args: Array<String>) {
    val n: BigInteger = BigInteger(readLine())
    val ans: BigInteger = calculateFact(n)
    println("Factorial of $n is $ans")
}
This will work for numbers up to a couple hundred thousand; your problem with this one will become the time it takes to run rather than the memory constraints.
fun main(args: Array<String>) {
    var no: Int = Integer.parseInt(readLine()) // read input from user and convert to Integer
    var ans: Int = calculateFact(no) // call function and store to ans variable
    println("Factorial of " + no + " is " + ans) // print result
}

fun calculateFact(no: Int): Int { // function for recursion
    if (no == 0) {
        return 1
    }
    return (no * calculateFact(no - 1)) // you forgot to decrease the no here
}
If you do not decrease no, it will call the calculateFact() method forever. Please check the code; it will work.

Clean code for finding the intersection of tuples in a list

I've written an algorithm to find parameters in a command-line tool and am looking to clean up my code, but I'm stuck.
The task
My program receives parameters as: flag output input ... | input flag output
Examples are: -d one/path second/path/to/file.txt and second/path/to/file.txt --dir one/path etc. Each space is used as a delimiter to create an array of parameters. A parameter can be either a flag such as -d or a path.
I've got each flag mapped out in two arrays, which I zip into an array of tuples. I call them the search set.
In math notation
I'm new to both FP and math notation, so please forgive my mistakes (I've learned from Wikipedia and other sites).
S for search and P for parameters
S = { S₁, S₂, S₃ }
where Sn = { flagShort, flagLong }
where flagShort = '-d' | '-r' | '-o'
flagLong = '--dir' | '--replace' | '--output'
P = { P₁, P₂, ... }
where Pn = flag | path
where path = dir | file
So I need to find the output by searching P for occurrences of Sn + the next parameter after the flag.
Ps = Sn ∩ P + Pₙ+₁
Input is just Ps ∉ P, so that is easy if I can get Ps.
Which leads me to the following transformation:
P -> Pn -> S -> Sn -> Sn = Pn -> Pn + Pₙ+₁
In JavaScript it can be written as:
// _.zip comes from a library such as underscore or lodash
const flagsShort = ["-r", "-d", "-o"]
const flagsLong = ["--replace", "--dir", "--output"]
const search = _.zip(flagsShort, flagsLong)

let Ps = tuplesIntersectionParametersPlusNextP(
  ['one/path', '--dir', 'two/path', '-d', 'one/path', '-d'],
  search
)
// Ps -> [ '--dir', 'two/path', '-d', 'one/path' ]

function tuplesIntersectionParametersPlusNextP(P, S) {
  const Ps = [];
  P.forEach( (Pn, n) => {
    S.forEach(Sn => {
      Sn.forEach(flag => {
        if (flag === Pn) Ps.push(Pn, P[n + 1])
      })
    })
  })
  return Ps
}
While the code above works, it doesn't look clean. I've been looking around at different FP libraries such as underscore and at different Python articles, but have yet to figure out how to use all these clever FP functions to clean up my code.
I will accept an answer in any language (Python, Haskell, Scala, etc.), but please don't use list comprehensions. While I'm very confident that I can port your code to JS, I find list comprehensions a tad difficult to port; better to use map, each, reduce, etc.
If you can also point me in the right direction with the set notation, I will be truly grateful!
Disclaimer
I tend to stay away from JavaScript. There might be nicer ways to do certain things in JavaScript, but I believe the general principles still hold.
Your code is fine, albeit nested a little deep.
The trick in making your code cleaner is to extract abstract functionality. In the deepest nesting, what you're really asking is "does element Pn exist in the list of lists S?" This is something I can imagine myself asking more than once in an application, so it's perfect to turn into a function. You could even make this recursive for any level of nesting:
function ElementInNestedLists(e, l) {
  if (l instanceof Array) {
    return l.reduce(function(prev, cur, idx, arr) {
      if (prev || ElementInNestedLists(e, cur)) {
        return true;
      }
      return false;
    }, false);
  } else {
    return l == e;
  }
}
You don't have to use reduce here. There's nothing forbidding you from doing an actual for-loop in functional programming, in languages that support it. This does a better job at preventing the function from continuing after your element has been found:
function ElementInNestedLists(e, l) {
  if (l instanceof Array) {
    for (const elem of l) {
      if (ElementInNestedLists(e, elem)) {
        return true;
      }
    }
    return false;
  } else {
    return l == e;
  }
}
Using this new function, you can simplify tuplesIntersectionParametersPlusNextP:
function tuplesIntersectionParametersPlusNextP(P, S) {
  const Ps = [];
  P.forEach( (Pn, n) => {
    if (ElementInNestedLists(Pn, S)) {
      Ps.push(Pn, P[n + 1]);
    }
  });
  return Ps;
}
But there's a bug. The output of this function, given your input, is [ '--dir', 'two/path', '-d', 'one/path', undefined ], because the last flag has no parameter following it. We need to add a test to ensure that there are at least two elements remaining to be checked.
function tuplesIntersectionParametersPlusNextP(P, S) {
  const Ps = [];
  P.forEach( (Pn, n) => {
    if (n + 1 < P.length && ElementInNestedLists(Pn, S)) {
      Ps.push(Pn, P[n + 1]);
    }
  });
  return Ps;
}
But there's another bug. Consider the input ['-d', '-d', 'foo']. The output would be ['-d', '-d', '-d', 'foo'], which is incorrect. The desired output is ['-d', '-d']. You could decide to add a variable to track whether or not you need to process the next field:
function tuplesIntersectionParametersPlusNextP(P, S) {
  const Ps = [];
  let skip = false;
  P.forEach( (Pn, n) => {
    if (skip) {
      skip = false;
      return;
    }
    if (n + 1 < P.length && ElementInNestedLists(Pn, S)) {
      Ps.push(Pn, P[n + 1]);
      skip = true;
    }
  });
  return Ps;
}
And while this does what you want, you've now lost cleanliness again. The solution (as is often the case in functional programming) is to solve this problem recursively.
function Parse(cmd, flags, acc = []) {
  if (cmd.length < 2) {
    return acc;
  }
  if (ElementInNestedLists(cmd[0], flags)) {
    acc.push(cmd[0], cmd[1]);
    return Parse(cmd.slice(2, cmd.length), flags, acc);
  }
  return Parse(cmd.slice(1, cmd.length), flags, acc);
}
And if you don't want to slice:
function Parse(cmd, flags, idx = 0, acc = []) {
  if (idx + 1 >= cmd.length) {
    return acc;
  }
  if (ElementInNestedLists(cmd[idx], flags)) {
    acc.push(cmd[idx], cmd[idx + 1]);
    return Parse(cmd, flags, idx + 2, acc);
  }
  return Parse(cmd, flags, idx + 1, acc);
}
Of course, you might want to check and discard — or otherwise handle — a flag following a flag. There might be other, more complex requirements that you either haven't mentioned or haven't thought of (yet), but those are outside the scope of this answer.

How to write mutually recursive functions in Haxe

I am trying to write a simple pair of mutually recursive functions in Haxe 3, but I couldn't get the code to compile, because whichever of the mutual functions appears first reports that the other function in the group is undefined. A minimal example is below, in which the mutually defined functions odd and even are used to determine parity.
static public function test(n:Int):Bool {
    var a:Int;
    if (n >= 0) a = n; else a = -n;

    function even(x:Int):Bool {
        if (x == 0)
            return true;
        else
            return odd(x - 1);
    }

    function odd(x:Int):Bool {
        if (x == 0)
            return false;
        else
            return even(x - 1);
    }

    return even(a);
}
Trying to compile it to neko gives:
../test.hx:715: characters 11-14 : Unknown identifier : odd
Uncaught exception - load.c(181) : Module not found : main.n
I tried to give a forward declaration of odd before even, as one would do in C/C++, but that seems to be illegal in Haxe 3. How can one define mutually recursive functions like the above? Is it possible at all?
Note: I want both odd and even to be local functions wrapped in the globally visible function test.
Thanks,
Rather than using the function myFn() {} syntax for a local variable, you can use the myFn = function() {} syntax. Then you are able to declare the function type signatures before you use them.
Your code should now look like:
static public function test(n:Int):Bool {
    var a:Int;
    if (n >= 0) a = n; else a = -n;

    var even:Int->Bool = null;
    var odd = null; // Leave out the type signature, still works.

    even = function (x:Int):Bool {
        if (x == 0)
            return true;
        else
            return odd(x - 1);
    }

    odd = function (x:Int):Bool {
        if (x == 0)
            return false;
        else
            return even(x - 1);
    }

    return even(a);
}
This works because Haxe just needs to know that even and odd exist, and are set to something (even if it's null) before they are used. We know that we'll set both of them to callable functions before they are actually called.
See on try haxe: http://try.haxe.org/#E79D4
