Spark Async interface for Fold, Reduce, Aggregate?

In the official Spark RDD API:
https://spark.apache.org/docs/2.1.0/api/java/org/apache/spark/rdd/AsyncRDDActions.html
count, collect, foreach, and take all have async variants that return a Future.
Why do fold, reduce, and aggregate not have this async/future interface? That seems pretty important.

EDIT:
@Jan Van den bosch is right (see comments below). This question is not about transformations at all. In case someone else was fooled, I've left my misguided answer below.
Original Answer (incorrect):
TL;DR: The difference is between Spark "actions" and "transformations": https://spark.apache.org/docs/2.2.0/rdd-programming-guide.html#rdd-operations
Notice that all the operations you listed with an asynchronous option are Spark "actions", which means they start processing the data right away and attempt to return synchronously. That can take a long time if there's a lot of data, so it's nice to have an asynchronous option.
Meanwhile, the operations you listed without an asynchronous option are Spark "transformations", which are lazily evaluated: calling one instantly creates a plan to do the work, but no data is actually processed until you apply an "action" later to return results.
That said, do you have specific code or a problem you're trying to solve with this?
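If what you actually need is a non-blocking fold/reduce/aggregate, one common workaround is to wrap the blocking action in a future you manage yourself; the job still runs the same way, but your calling thread is free until it finishes. A rough Java sketch (the rdd variable and the thread pool are assumptions for illustration, not part of Spark's async API):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.spark.api.java.JavaRDD;

// Assume rdd is an existing JavaRDD<Integer>.
// A pool reserved for blocking Spark actions keeps the caller's thread free.
ExecutorService pool = Executors.newFixedThreadPool(4);

CompletableFuture<Integer> sumFuture =
    CompletableFuture.supplyAsync(() -> rdd.reduce((a, b) -> a + b), pool);

// React whenever the job finishes, without blocking here.
sumFuture.thenAccept(sum -> System.out.println("Sum: " + sum));

This doesn't give you cancellation of the underlying Spark job the way a FutureAction does, but it covers the "don't block my thread" part of the problem.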

Related

Trigger a function after 5 API calls return (in a distributed context)

My girlfriend was asked the below question in an interview:
We trigger 5 independent APIs simultaneously. Once they have all completed, we want to trigger a function. How will you design a system to do this?
My girlfriend replied that she would use a flag variable, but the interviewer was evidently not happy with it.
So, is there a good way in which this could be handled (in a distributed context)? Note that each of the 5 API calls is made by a different server, and the function to be triggered is on a 6th server.
The other answers suggesting Promises seem to assume all these requests necessarily come from the same client. If the context here is distributed systems, as you said it is, then I don't think those are valid answers. If they were, then the interview question would have nothing to do with distributed systems, except to essay your girlfriend's ability to recognize something that isn't really a distributed systems problem.
And the question does have the shape of some classic problems in distributed systems. It sounds a lot like YouTube view counting: How do you achieve qualities like atomicity and consistency in a multi-threaded, multi-process, or multi-client environment? Failing to recognize this, thinking the answer could be as simple as "a flag", betrayed a lack of experience in distributed systems.
Another thing about that answer is that it leaves many ambiguities. Where does the flag live? As a variable in another (Java?) API? In a database? In a file? Even in a non-distributed context, these are important questions. And if she had gone on to address these questions, even being innocent of all the distributed systems complications, she might have happily fallen into a discussion of the kinds of D.S. problems that occur when you use, say, a file; and how using an ACID-compliant database might solve those problems, and what the tradeoffs might be there... And she might have corrected herself and said "counter" instead of "flag"!
If I were asked this, my first thought would be to use promises/futures. The idea behind them is that you can execute time-consuming operations asynchronously and they will somehow notify you when they've completed, either successfully or unsuccessfully, typically by calling a callback function. So the first step is to spawn five asynchronous tasks and get five promises.
Then I would join the five promises together, creating a unified promise that represents the five separate tasks. In JavaScript I might call Promise.all(); in Java I would use CompletableFuture.allOf().
I would want to make sure to handle both success and failure. The combined promise should succeed if all of the API calls succeed and fail if any of them fail. If any fail there should be appropriate error handling/reporting. What happens if multiple calls fail? How would a mix of successes and failures be reported? These would be design points to mention, though not necessarily solve during the interview.
Promises and futures typically have a modular layering system that allows edge cases like timeouts to be handled by chaining handlers together. If done right, timeouts become just another error condition that is naturally handled by the error handling already in place.
This solution would not require any state to be shared across threads, so I would not have to worry about mutexes or deadlocks or other thread synchronization problems.
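To make that concrete, here is a rough Java sketch. The callApi1() ... callApi5(), triggerFunction() and reportFailure() calls are hypothetical placeholders for whatever client code you actually have; each callApiN() is assumed to return a CompletableFuture of its response, and orTimeout requires Java 9+:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Five independent requests, each returning a future (hypothetical helpers).
CompletableFuture<String> r1 = callApi1();
CompletableFuture<String> r2 = callApi2();
CompletableFuture<String> r3 = callApi3();
CompletableFuture<String> r4 = callApi4();
CompletableFuture<String> r5 = callApi5();

// One combined future that completes when all five have completed,
// and fails if any of them fails or the whole thing takes too long.
CompletableFuture<Void> all = CompletableFuture.allOf(r1, r2, r3, r4, r5)
        .orTimeout(30, TimeUnit.SECONDS);

all.thenRun(() -> triggerFunction())      // runs only if every call succeeded
   .exceptionally(ex -> {                 // runs if any call failed or timed out
       reportFailure(ex);
       return null;
   });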
She said she would use a flag variable to keep track of how many API calls have returned.
One thing that makes great interviewees stand out is their ability to anticipate follow-up questions and explain details before they are asked. The best answers are fully fleshed out. They demonstrate that one has thought through one's answer in detail, and they have minimal handwaving.
When I read the above I have a slew of follow-up questions:
How will she know when each API call has returned? Is she waiting for a function call to return, a callback to be called, an event to be fired, or a promise to complete?
How is she causing all of the API calls to be executed concurrently? Is there multithreading, a fork-join pool, multiprocessing, or asynchronous execution?
Flag variables are booleans. Is she really using a flag, or does she mean a counter?
What is the variable tracking and what code is updating it?
What is monitoring the variable, what condition is it checking, and what's it doing when the condition is reached?
If using multithreading, how is she handling synchronization?
How will she handle edge cases such as API calls failing or timing out?
A flag variable might lead to a workable solution or it might lead nowhere. The only way an interviewer can tell is if she thinks about and proactively discusses these questions. Otherwise, the interviewer will have to pepper her with follow-up questions and will likely lower their evaluation of her.
When I interview people, my mental grades are something like:
S — Solution works and they addressed all issues without prompting.
A — Solution works, follow-up questions answered satisfactorily.
B — Solution works, explained well, but there's a better solution that more experienced devs would find.
C — What they said is okay, but their depth of knowledge is lacking.
F — Their answer is flat out incorrect, or getting them to explain their answer was like pulling teeth.

What is the need for immutable/persistent data structures in erlang

Each Erlang process maintains its own private address space. All communication happens via copying without sharing (except for big binaries). If each process is handling one message at a time with no concurrent access to its objects, I don't see why we need immutable/persistent data structures.
Erlang was initially implemented in Prolog, which doesn't really use mutable data structures either (though some dialects do). So it started off without them. This makes runtime implementation simpler and faster (garbage collection in particular).
So adding mutable data structures would require a lot of effort, could introduce bugs, and Erlang programmers are nearly by definition at least willing to live without them.
Many actually consider their absence to be a positive good: less concern about object identity, no need for defensive copying because you don't know whether some other piece of code is going to modify the data you passed (or might be changed later to modify it), etc.
This absence does mean that Erlang is pretty unusable in some domains (e.g. high performance scientific computing), at least as the main language. But again, this means that nobody in these domains is going to use Erlang in the first place and so there's no particular incentive to make it usable at the cost of making existing users unhappy.
I remember seeing a mailing list post by Joe Armstrong quite a long time ago (which I couldn't find with a quick search now) saying that he initially planned to add mutable variables when he'd need them... except he never quite did, and performance was good enough for everything he was using Erlang for.
It is indeed the case that in Erlang immutability does not solve any "shared state" problems, as immutable data are "process local".
From the functional programming language perspective, however, immutability offers a number of benefits, summarized adequately in this Quora answer:
The simplest definition of functional programming is that it's a programming paradigm where you are transforming immutable data with functions.
The definition uses functions in the mathematical sense, where it's something that takes an input and produces an output.
OO + mutability tends to violate that definition, because when you want to change a piece of data it generally will not return the output; it will likely return void or unit, and when you call a method on the object, the object itself isn't input for the function.
As for what advantages the paradigm has: composability, thread safety, being able to track what went wrong where better, the ability to sort of separate the data from the actual computation being done on it, etc.
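To make the "returns void vs. returns the output" point concrete, here is a tiny contrast sketched in Java (used here only as a familiar OO language; the classes are made up for illustration):

// Mutable, OO style: "changing" the balance returns nothing,
// so the call is not a function from an input to an output.
class MutableAccount {
    private long balance;
    void deposit(long amount) { this.balance += amount; }  // returns void
}

// Immutable, functional style: the change is a function from the old value
// (this) and an amount to a brand-new value.
final class Account {
    private final long balance;
    Account(long balance) { this.balance = balance; }
    Account deposit(long amount) { return new Account(balance + amount); }  // returns the output
}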
How would this work?
factorial(1) -> 1;
factorial(X) ->
    X * factorial(X-1).
If you run factorial(4), a single process will be running the same function. Each time, the function will have its own value of X; if the value of X were in the scope of the process rather than the function, recursive functions wouldn't work. So first we need to understand scope. If you want to say that you don't see why data needs to be immutable within the scope of a single function/block, you would have a point, but it would be a headache to think about where data is immutable and where it isn't.

Using generic functions of R, when and why?

I'm developing a major upgrade to an R package, and as part of the changes I want to start using S3 methods so I can use the generic plot, summary and print functions. But I'm not totally sure I understand why and when to use generic functions in general.
For example, I currently have a function called logLikSSM, which computes the log-likelihood of a state space model. Instead of using this function, I could write a method logLik.SSM or something like that, as there is a generic function logLik in R. The benefit would be that logLik is shorter to write than logLikSSM, but is there really any other point in it?
Similar case: there is a generic function called simulate in the stats package, so in theory I could use that instead of simulateSSM. But the description of the simulate function says it is used to "Simulate Responses", whereas my function actually simulates the hidden states, so it really doesn't fit the description of the simulate function. So probably in this case I shouldn't use the generic function, right?
I apologize if this question is too vague for here.
The advantages of creating methods for generics from the core of R include:
Ease of Use. Users of your package who are already familiar with those generics will have less to remember, making your package easier to use. They might even be able to do a certain amount without reading the documentation. If you come up with your own names, users must discover and remember new names, which is an added cognitive burden.
Leverage Existing Functionality. Any other functions that make use of the generics you write methods for can then automatically work with your objects as well; otherwise, they would have to be changed. For example, AIC uses logLik.
A disadvantage is that the generic involves an extra level of dispatch, and if logLik is in the inner loop of an optimization there could be an impact (although possibly not a material one). In that case you could compare the performance of calling the generic vs. calling the method directly and use the latter if it makes a significant difference.
If your function has a completely different purpose than the generic in the core of R, then a method might be more confusing than helpful, so in that case don't create a method; keep your own function name.
You might want to read the zoo Design manual (see link to zoo Design under Vignettes near the bottom of that page) which discusses the design ideas that went into the zoo package. These include the idea being discussed here.
EDIT: Added disadvantages.
Good question.
I'll split your Question into two parts; here's the first one:
[I]s there really any other point in [making functions generic]?
Well, this pattern is usually invoked when the developer doesn't know the object class of every object he/she expects a user to pass to the method under consideration.
Because of this uncertainty, this design pattern (which is called overloading in many other languages) is invoked; it requires R to evaluate the object's class and then dispatch the object to the appropriate method for that type.
The second part of your Question: [i]n this case I shouldn't use [the generic function] right?
To try to give you an answer useful beyond the detail of your Question, consider what happens to the original method when you call setGeneric, passing that method in.
The original function body is replaced with code that performs a top-level dispatch based on the type of the object passed in; the original body slides down one level and becomes the default method that the top-level (generic) function dispatches to.
showMethods() will let you see all of those methods which are called by the newly created dispatch function (generic function).
And now for one huge disadvantage:
Ease of MISUse:
Users of your package already familiar with those generics might do a certain amount without reading the documentation.
And therein lies the fallacy that components, reusable objects, services, etc are an easy panacea for all software challenges.
And why the overwhelming majority of software is buggy, bloated, and operates inconsistently with little hope of tech support being able to diagnose your problem.
There WAS a reason for static linking and small executables back in the day. But this generation of "code now, get paid now, debug later (if ever, before the layoffs/IPO come)" has no memory of the days when code actually worked very reliably and installation/integration didn't require $200/hr Big 4 consultants or hackers spending a week trying to get some "simple" open source product installed and productively running.
But if you want to continue the tradition of writing ever shorter function/method names, be my guest.

How to implement a callback/producer/consumers scheme?

I'm warming up with Clojure and started to write a few simple functions.
I'm realizing how the language is clearly well-suited for parallel computation and this got me thinking. I've got an app (written in Java but whatever) that works the following way:
one thread waits for input to come in (from the filesystem in my case, but it could be the network or whatever) and, once the input arrives, puts it on a queue
several consumers fetch data from that queue and process the data in parallel
The code that enqueues the input to be processed in parallel might look like this (it's just an example):
asynchFetchInput(new MyCallBack() {
    public void handle(Input input) {
        queue.put(input);
    }
});
Where asynchFetchInput would spawn a Thread and then call the callback.
It's really just an example but if someone could explain how to do something similar using Clojure it would greatly help me understand the "bigger picture".
If you have to transform data, you can make it into a seq and then feed it to either map or pmap; the latter will process it in parallel. filter and reduce are also both really useful, so you might want to see if you can express your logic in those terms.
You might also want to look into the concurrency utilities in basic Java rather than spawning your own threads.
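Since the app in the question is Java anyway, a bare-bones version of the consumer side with those utilities might look roughly like this (Input and process() are placeholders for the question's own types and logic; the producer is the callback shown in the question, which just calls queue.put):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

BlockingQueue<Input> queue = new LinkedBlockingQueue<>();     // shared buffer
ExecutorService consumers = Executors.newFixedThreadPool(4);  // worker pool

// Each worker loops, taking items off the queue and processing them.
for (int i = 0; i < 4; i++) {
    consumers.submit(() -> {
        try {
            while (true) {
                Input input = queue.take();  // blocks until the producer puts something
                process(input);              // placeholder for the actual processing
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  // exit cleanly on shutdownNow()
        }
    });
}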

Recursion of internal functions in Erlang

Playing with Erlang, I've got a process-looping function like:
process_loop(...A long list of parameters here...) ->
    receive
        ...Message processing logic involving the function parameters...
    end,
    process_loop(...Same long list of parameters...).
It looks quite ugly, so I tried a refactoring like that:
process_loop(...A long list of parameters...) ->
    Loop = fun() ->
               receive
                   ...Message processing logic...
               end,
               Loop()
           end,
    Loop().
But it turned out to be incorrect, as the Loop variable is unbound inside the fun's own body. So, I've arranged a workaround:
process_loop(...A long list of parameters...) ->
    Loop = fun(Next) ->
               receive
                   ...Message processing logic...
               end,
               Next(Next)
           end,
    Loop(Loop).
I have two questions:
Is there a way to achieve the idea of snippet #2, but without such "Next(Next)" workarounds?
Do snippets #1 and #3 differ significantly in terms of performance, or are they equivalent?
No. Unfortunately, anonymous functions are just that: anonymous, unless you give them a name.
Snippet #3 is a little bit more expensive. Given that you do pattern matching on messages in the body, I wouldn't worry about it. Optimise for readability in this case. The difference is a very small constant factor.
You might use tuples/records as named parameters instead of passing lots of parameters. You can just reuse the single parameter that the function is going to take.
I guess (but I'm not sure) that this syntax doesn't get proper tail-recursion. If you refactor to use a single parameter, I think you will again be on the right track.
The more conventional way of avoiding repeating the list of parameters in snippet #1 is to put all or most of them in a record that holds the loop state. Then you only have one or a few variables to pass around in the loop. That's easier to read and harder to screw up than playing around with recursive funs.
I must say that in all cases where I do this type of recursion I don't think I have ever come across the case where exactly the same set of variables is passed around in the recursion. Usually variables will change reflecting state change in the process loop. It cannot be otherwise as you have to handle state explicitly. I usually group related parameters into records which cuts down the number of arguments and adds clarity.
You can of course use your solution and have some parameters implicit in the fun and some explicit in the recursive calls but I don't think this would improve clarity.
The same answer applies to "normal" recursion where you are stepping over data structures.
