Trigger a function after 5 API calls return (in a distributed context)

My girlfriend was asked the below question in an interview:
We trigger 5 independent APIs simultaneously. Once they have all completed, we want to trigger a function. How will you design a system to do this?
My girlfriend replied that she would use a flag variable, but the interviewer was evidently not happy with it.
So, is there a good way in which this could be handled (in a distributed context)? Note that each of the 5 API calls is made by a different server, and the function to be triggered is on a 6th server.

The other answers suggesting Promises seem to assume all these requests necessarily come from the same client. If the context here is distributed systems, as you said it is, then I don't think those are valid answers. If they were, then the interview question would have nothing to do with distributed systems, except to assess your girlfriend's ability to recognize something that isn't really a distributed systems problem.
And the question does have the shape of some classic problems in distributed systems. It sounds a lot like YouTube view counting: How do you achieve qualities like atomicity and consistency in a multi-threaded, multi-process, or multi-client environment? Failing to recognize this, thinking the answer could be as simple as "a flag", betrayed a lack of experience in distributed systems.
Another thing about that answer is that it leaves many ambiguities. Where does the flag live? As a variable in another (Java?) API? In a database? In a file? Even in a non-distributed context, these are important questions. And if she had gone on to address them, even being innocent of all the distributed systems complications, she might have happily fallen into a discussion of the kinds of D.S. problems that occur when you use, say, a file; how using an ACID-compliant database might solve those problems; and what the tradeoffs might be there... And she might have corrected herself and said "counter" instead of "flag"!
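For illustration, here is a minimal sketch of that counter idea in TypeScript, assuming a Redis instance reachable by all five servers and the ioredis client (both my additions, not part of the question). Redis's INCR is atomic, so exactly one of the five completions observes the count reaching 5:

```typescript
import Redis from "ioredis";

const redis = new Redis(); // assumes a Redis instance shared by all five servers

// Each of the five servers calls this once its API call completes.
// INCR is atomic, so concurrent completions cannot race on the count.
async function reportCompletion(jobId: string): Promise<void> {
  const done = await redis.incr(`job:${jobId}:completed`);
  if (done === 5) {
    // Exactly one caller sees the counter hit 5, so this fires once.
    await triggerFinalFunction(jobId);
  }
}

// Hypothetical stand-in for notifying the 6th server.
async function triggerFinalFunction(jobId: string): Promise<void> {
  await fetch(`https://server6.example.com/jobs/${jobId}/finish`, { method: "POST" });
}
```

A real system would also need a retry/timeout story for calls that never report in, which is exactly the kind of detail the interviewer was likely probing for.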

If I were asked this, my first thought would be to use promises/futures. The idea behind them is that you can execute time-consuming operations asynchronously and they will somehow notify you when they've completed, either successfully or unsuccessfully, typically by calling a callback function. So the first step is to spawn five asynchronous tasks and get five promises.
Then I would join the five promises together, creating a unified promise that represents the five separate tasks. In JavaScript I might call Promise.all(); in Java I would use CompletableFuture.allOf().
I would want to make sure to handle both success and failure. The combined promise should succeed if all of the API calls succeed and fail if any of them fail. If any fail there should be appropriate error handling/reporting. What happens if multiple calls fail? How would a mix of successes and failures be reported? These would be design points to mention, though not necessarily solve during the interview.
Promises and futures typically have a modular layering system that allows edge cases like timeouts to be handled by chaining handlers together. If done right, timeouts become just another error condition, handled naturally by the error handling already in place.
This solution would not require any state to be shared across threads, so I would not have to worry about mutexes or deadlocks or other thread synchronization problems.
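To make that concrete, here is a minimal sketch in TypeScript (the endpoint URLs and the timeout value are hypothetical). Promise.all resolves only when every call succeeds and rejects on the first failure; Promise.allSettled would be the variant to reach for if you wanted to report a mix of successes and failures:

```typescript
// Hypothetical endpoints standing in for the five independent APIs.
const urls = [1, 2, 3, 4, 5].map(i => `https://api.example.com/call${i}`);

// Wrap a promise with a timeout so a hung call becomes an ordinary rejection.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    promise,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms)
    ),
  ]);
}

async function runAll(): Promise<void> {
  try {
    // Succeeds only if all five succeed; rejects with the first failure.
    const responses = await Promise.all(
      urls.map(url => withTimeout(fetch(url), 5000))
    );
    triggerFunction(responses);
  } catch (err) {
    // A single rejection lands here; use Promise.allSettled to see them all.
    console.error("at least one API call failed:", err);
  }
}

function triggerFunction(responses: Response[]): void {
  console.log(`all ${responses.length} calls completed`);
}
```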
She said she would use a flag variable to keep track of the number of API calls that have returned.
One thing that makes great interviewees stand out is their ability to anticipate follow-up questions and explain details before they are asked. The best answers are fully fleshed out. They demonstrate that one has thought through one's answer in detail, and they have minimal handwaving.
When I read the above I have a slew of follow-up questions:
How will she know when each API call has returned? Is she waiting for a function call to return, a callback to be called, an event to be fired, or a promise to complete?
How is she causing all of the API calls to be executed concurrently? Is there multithreading, a fork-join pool, multiprocessing, or asynchronous execution?
Flag variables are booleans. Is she really using a flag, or does she mean a counter?
What is the variable tracking and what code is updating it?
What is monitoring the variable, what condition is it checking, and what's it doing when the condition is reached?
If using multithreading, how is she handling synchronization?
How will she handle edge cases such as API calls failing or timing out?
A flag variable might lead to a workable solution or it might lead nowhere. The only way an interviewer will know which is if she thinks about and proactively discusses these various questions. Otherwise, the interviewer will have to pepper her with follow-up questions, and will likely lower their evaluation of her.
When I interview people, my mental grades are something like:
S — Solution works and they addressed all issues without prompting.
A — Solution works, follow-up questions answered satisfactorily.
B — Solution works, explained well, but there's a better solution that more experienced devs would find.
C — What they said is okay, but their depth of knowledge is lacking.
F — Their answer is flat out incorrect, or getting them to explain their answer was like pulling teeth.

Related

Could you implement async-await by memcopying stack frames rather than creating state machines?

I am trying to understand all the low-level stuff Compilers / Interpreters / the Kernel do for you (because I'm yet another person who thinks they could design a language that's better than most others)
One of the many things that sparked my curiosity is Async-Await.
I've checked the under-the-hood implementation for a couple languages, including C# (the compiler generates the state machine from sugar code) and Rust (where the state machine has to be implemented manually from the Future trait), and they all implement Async-Await using state machines.
I've not found anything useful by googling ("async copy stack frame" and variations) or in the "Similar questions" section.
To me, this method seems rather complicated and overhead-heavy;
Could you not implement Async-Await by simply memcopying the stack frames of async calls to/from heap?
I'm aware that it is architecturally impossible for some languages (I think the CLR can't do it, so C# can't either).
Am I missing something that makes this logically impossible? I would expect less complicated code and a performance boost from doing it that way; am I mistaken? I suppose that when you have a deep stack hierarchy after an async call (e.g. a recursive async function) the amount of data you would have to memcopy is rather large, but there are probably ways to work around that.
If this is possible, then why isn't it done anywhere?
Yes, an alternative to converting code into state machines is copying stacks around. This is the way the Go language does it now, and the way Java will do it when Project Loom is released.
It's not an easy thing to do for real-world languages.
It doesn't work for C and C++, for example, because those languages let you make pointers to things on the stack. Those pointers can be used by other threads, so you can't move the stack away, and even if you could, you would have to copy it back into exactly the same place.
For the same reason, it doesn't work when your program calls out to the OS or native code and gets called back in the same thread, because there's a portion of the stack you don't control. In Java, Project Loom's 'virtual threads' will not release the thread as long as there's native code on the stack.
Even in situations where you can move the stack, it requires dedicated support in the runtime environment. The stack can't just be copied into a byte array. It has to be copied off in a representation that allows the garbage collector to recognize all the pointers in it. If C# were to adopt this technique, for example, it would require significant extensions to the common language runtime, whereas implementing state machines can be accomplished entirely within the C# compiler.
I would first like to begin by saying that this answer is only meant to serve as a starting point in the actual direction of your exploration. It includes various pointers and builds on the work of various other authors.
I've checked the under-the-hood implementation for a couple languages, including C# (the compiler generates the state machine from sugar code) and Rust (where the state machine has to be implemented manually from the Future trait), and they all implement Async-Await using state machines
You understood correctly that the async/await implementations for C# and Rust use state machines. Let us now understand why those implementations were chosen.
To put the general structure of stack frames in very simple terms: whatever we put inside a stack frame is a temporary allocation that will not outlive the method whose call created that frame (including, but not limited to, local variables). A frame also contains the continuation, i.e. the address of the code that needs to be executed next (in other words, where control has to return to) within the context of the recently called method. In the case of synchronous execution, methods execute one after the other; the caller is suspended until the called method finishes. From a stack perspective this fits intuitively: once we are done executing a called method, control returns to the caller and the frame can be popped off. It is also cheap and efficient from the perspective of the hardware running the code (hardware is optimised for working with stacks).
In the case of asynchronous code, the continuation of a method might have to trigger several other methods that might get called from within the continuations of callers. Take a look at this answer, where Eric Lippert outlines how the stack works in an asynchronous flow. The problem with asynchronous flow is that the method calls do not exactly form a stack, and trying to handle them like pure stacks can get extremely complicated. As Eric says in that answer, that is why C# uses a graph of heap-allocated tasks and delegates to represent a workflow.
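To make the state-machine idea concrete, here is a hand-desugared sketch in TypeScript (my own illustration; the actual C# and Rust compiler output differs in detail). The async function at the top corresponds, conceptually, to the explicit state machine below it; note how the local that survives an await migrates out of the stack frame and into the machine's captured state:

```typescript
// Hypothetical async sources used by both versions.
const fetchName = () => Promise.resolve("Ada");
const fetchTitle = () => Promise.resolve("Countess");

// The sugared version:
async function greet(): Promise<string> {
  const name = await fetchName();   // suspension point 1
  const title = await fetchTitle(); // suspension point 2
  return `${title} ${name}`;
}

// A conceptual hand-desugaring: each await becomes a state, and
// `name` (live across an await) moves into the machine's captured state.
function greetDesugared(): Promise<string> {
  let state = 0;
  let name = "";
  return new Promise((resolve, reject) => {
    function moveNext(result?: string): void {
      switch (state) {
        case 0:
          state = 1;
          fetchName().then(moveNext, reject);  // resume in state 1
          break;
        case 1:
          name = result!;
          state = 2;
          fetchTitle().then(moveNext, reject); // resume in state 2
          break;
        case 2:
          resolve(`${result} ${name}`);
          break;
      }
    }
    moveNext();
  });
}
```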
However, if you consider a language like Go, the asynchrony is handled in a different way altogether. Go has goroutines, and there is no need for await statements. Each goroutine is started on its own lightweight thread (each has its own stack, which defaults to 8KB in size), and synchronization between them is achieved through communication over channels. These lightweight threads are capable of waiting asynchronously for a read operation on a channel and suspending themselves. Go's earlier implementation used the Split Stacks technique; it had its own problems, as listed here, and was replaced by Contiguous Stacks. The article also talks about the newer implementation.
One important thing to note here is that it is not just the complexity involved in handling the continuations between tasks that contributes to the choice of an async/await implementation; other factors, like garbage collection, play a role. The GC process should be as performant as possible, and if we move stacks around, GC becomes inefficient because accessing an object would then require thread synchronization.
Could you not implement Async-Await by simply memcopying the stack frames of async calls to/from heap?
In short, you can. As this answer states, Chicken Scheme uses something similar to what you are exploring. It begins by allocating everything on the stack and moves stack values to the heap when the stack becomes too large for the GC's activities (Chicken Scheme uses a generational GC). However, there are certain caveats with this kind of implementation; take a look at this FAQ of Chicken Scheme. There is also a lot of academic research in this area (linked in the answer referred to at the beginning of this paragraph, which I shall summarise under further reading) that you may want to look at.
Further Reading
Continuation Passing Style
call-with-current-continuation
The classic SICP book
This answer (contains a few links to academic research in this area)
TLDR
Which approach to take is subjective to the factors that affect the overall usability and performance of the language. State machines are not the only way to implement async/await functionality, as done in C# and Rust. Some languages, like Go, implement a contiguous-stack approach coordinated over channels for asynchronous operations. Chicken Scheme allocates everything on the stack and moves stack values to the heap when the stack becomes too heavy for its GC algorithm's performance. Moving stacks around has its own set of implications that affect garbage collection negatively. Going through the research done in this space will help you understand the advancements and the rationale behind each approach. At the same time, you should also give some thought to how you plan to design and implement the other parts of your language for it to be anywhere close to usable in terms of performance and overall usability.
PS: Given the length of this answer, I will be happy to correct any inconsistencies that may have crept in.
I have been looking into various strategies for doing this myself, because I naturally think I can design a language better than anybody else - same as you. I just want to emphasize that when I say better, I actually mean better as in tastes better to my liking, not objectively better.
I have come to a few different approaches, and to summarize: It really depends on many other design choices you have made in the language.
It is all about compromises; each approach has advantages and disadvantages.
It feels like the compiler design community is still very focused on garbage collection and minimizing memory waste, and perhaps there is room for some innovation for lazier and less purist language designers, given the vast resources available to modern computers?
How about not having a call stack at all?
It is possible to implement a language without using a call stack.
Pass continuations. The function currently running is responsible for keeping and resuming the state of the caller. Async/await and generators come naturally (see the sketch after this list).
Preallocated static memory addresses for all local variables in all declared functions in the entire program. This approach causes other problems, of course.
If this is your design, then async functions seem trivial.
Tree shaped stack
With a tree shaped stack, you can keep all stack frames until the function is completely done. It does not matter if you allow progress on any ancestor stack frame, as long as you let the async frame live on until it is no longer needed.
Linear stack
How about serializing the function state? It seems like a variant of continuations.
Independent stack frames on the heap
Simply treat invocations like you treat other pointers to any value on the heap.
All of the above are trivialized approaches, but one thing they have in common relates to your question:
Just find a way to store any locals needed to resume the function. And don't forget to store the program counter in the stack frame as well.
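As a tiny illustration of the continuation-passing idea mentioned above (my sketch, written in TypeScript rather than a hypothetical new language): every function receives its caller's continuation explicitly, so suspending and resuming needs no call stack at all - the stored continuation is the suspended frame:

```typescript
// Continuation-passing style: instead of returning, every function
// invokes the continuation it was handed.
type Cont<T> = (value: T) => void;

function add(a: number, b: number, k: Cont<number>): void {
  k(a + b);
}

// An "async" step: stash the continuation and resume it later.
// Nothing needs to survive on the call stack across the delay.
function delayedDouble(x: number, k: Cont<number>): void {
  setTimeout(() => k(x * 2), 100);
}

// (1 + 2), then doubled, written as a chain of continuations.
add(1, 2, sum =>
  delayedDouble(sum, doubled =>
    console.log(doubled) // prints 6 after ~100ms
  )
);
```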

Detect deadloop in PinTool

I am writing a PinTool which can manipulate certain register/memory values. However, one challenge I am facing after manipulation is deadloops.
In particular, due to the frequent manipulation of certain register values, it is indeed common to create a deadloop in the execution trace. I am thinking of detecting such cases and terminating the execution.
So here is my question: what is a good practice for detecting a deadloop in a PinTool? I can come up with some naive solutions, say, record the executed instructions, and if a certain instruction has been executed a large number of times, just terminate the execution.
Could anyone help me on this issue? Thank you.
Detecting whether a program will terminate isn't a computable problem in general, so no, I don't think it's a good idea.

Resources about Asynchronous Programming Design Patterns

I'm looking for non-trivial resources on concepts of asynchronous programming, preferably books but also substantial articles or papers. This is not about the simple examples like passing a callback to an event listener in GUI programming, or having producer-consumer decoupled over a queue, or writing an onload handler for your HTML (although all those are valid). It's about the kind of problems the lighttpd developers might be concerned with, or someone doing substantial business logic in JavaScript that runs in a browser or on node.js. It's about situations where you need to pass a callback to a callback to a callback ... about complex asynchronous control-flows, and staying sane at the same time. I'm looking for concepts that allow you to do this systematically, to reason about this kind of control-flows, to seriously manage a significant amount of logic distributed in deeply nested callbacks, with all its ensuing issues of timing, synchronization, binding of values, passing of contexts, etc.
I wouldn't shrink away from some abstract explorations like continuation-passing style, linear logic or temporal reasoning. Posts like this seem to go in the right direction, but discuss specific issues rather than a complete theory (e.g. the post mentions the "reactor" pattern, which seems relevant, without describing it).
Thanks.
EDIT:
To give more details about the aspects I'm interested in. I'm interested in a disciplined approach to asynchronous programming, a theory if you will, maybe just a set of specific patterns that I can pass to fellow programmers and say "This is the way we do asynchronous programming" in non-trivial scenarios. I need a theory to disentangle layers of callbacks that randomly fail to work, or produce spurious results. I want an approach which allows me to say "If we do it this way, we can be sure that ...". - Does this make things clearer?
EDIT 2:
As feedback indicates a dependency on the programming language: This will be JavaScript, but maybe it's enough to assume a language that allows higher-order functions.
EDIT 3:
Changed the title to be more specific (although I think design patterns are only one way to look at it; but at least it gives a better direction).
When doing layered callbacks, currying is a useful technique.
For more on this you can look at http://en.wikibooks.org/wiki/Haskell/Higher-order_functions_and_Currying and for javascript you can look at http://www.svendtofte.com/code/curried_javascript/.
Basically, if you have multiple layers of callbacks, rather than having one massive parameter list, you can build it up incrementally, so that by the time you are in a loop calling your function, the various callback functions have already been defined and passed.
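A small sketch of that shape in TypeScript (the names are invented for illustration):

```typescript
// Curry the configuration parameters up front, leaving a slim
// single-argument callback to hand into the inner layer.
const makeOnRowLoaded =
  (tableName: string, log: (msg: string) => void) =>
  (row: { id: number }): void => {
    log(`${tableName}: loaded row ${row.id}`);
  };

// Built once, outside the loop...
const onRowLoaded = makeOnRowLoaded("orders", console.log);

// ...then passed down as an already-configured callback.
[{ id: 1 }, { id: 2 }].forEach(onRowLoaded);
```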
This isn't meant as a complete answer to the question, but I was asked to put this part into an answer, so I did.
After a quick search, here is a blog post where the author shows currying with callbacks:
http://bjouhier.wordpress.com/2011/04/04/currying-the-callback-or-the-essence-of-futures/
UPDATE:
After reading the edit to the original question, to see design patterns for asynchronous programming, this may be a good diagram:
http://www1.cse.wustl.edu/~schmidt/patterns-ace.html, but there is much more to good asynchronous design: higher-order functions will enable this to be simplified, but if you are using the MPI library and Fortran, then you will have different implementations.
How you approach the design is affected so heavily by the language and the technologies involved that any answer will fall short of being complete.

Synchronous vs. Asynchronous updates in Real-time applications

I am looking to update an application in which I have the ability to update either synchronously or asynchronously. Given the real-time nature of the app, which currently executes methods synchronously at frequencies ranging from 1-60Hz, do you see any advantage to updating asynchronously in response to user input? Or should I wait until the next synchronous cycle to incorporate the change?
My thoughts so far:
The current advantage that I see with introducing an asynchronous update is that if a member in a 1Hz method is updated, the 60Hz method may execute 50+ times with the old value. I know this is still a relatively short amount of time to a user (< 1 second), but to me the principle of continuing calculations with bad values for 50+ reps seems bad.
The current advantage that I see with keeping it synchronous is the readability of the code's flow of execution.
Are there any repercussions I am not thinking of?
It's a little hard to say without more of a sense of your application. In general, I'd say it's preferable to stay synchronous for a real-time application where possible, just because it makes it easier to reason about timeliness (often the hardest thing to reason about.) If you can reasonably make something periodic, make it periodic and thank your lucky stars.
Moving to a partially synchronous or async model does have some advantages. Like you say, it might feel less than aesthetic to continue operating on stale data. But consider: this is a real-time application. Presumably you have a requirement that states what the update latency from data input to your 60Hz task must be. As in any general-purpose computing performance setting, don't go to extra work to do better than that unless it's easy, it's clearer in the implementation, or it becomes necessary to achieve correctness.
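For instance, a purely synchronous 60Hz loop can latch the most recent input at the top of each tick (a sketch with invented names), which bounds staleness to one cycle without introducing asynchrony:

```typescript
// Most recent user input, written by an event handler whenever it arrives.
let latestInput = 0;
function onUserInput(value: number): void {
  latestInput = value;
}

function update60Hz(input: number): void {
  // ... real-time calculations, all seeing the same `input` this tick ...
}

// 60Hz synchronous cycle: read the input once at the top of each tick,
// so every calculation in the tick uses a value at most ~17ms old.
setInterval(() => {
  const input = latestInput; // latched once per cycle
  update60Hz(input);
}, 1000 / 60);
```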
So, all that said, there are no hard and fast rules. Make sure your rationale is both written down and reflected in your design.

MSMQ - Message Queue Abstraction and Pattern

Let me define the problem first, and why a message queue has been chosen. I have a data layer that will be transactional and EXTREMELY insert-heavy, and rather than attempt to deal with these issues when they occur, I am hoping to implement my application from the ground up with this in mind.
I have decided to tackle this problem by using the Microsoft Message Queue and performing inserts asynchronously as time permits. However, I quickly ran into a problem: certain inserts that I perform may need to be recalled (i.e. retrieved) immediately (imagine this is for a POS system, and what happens if you need to recall the last transaction - one that still hasn't been inserted).
The way I decided to tackle this problem is by abstracting the MessageQueue and combining it with my data access layer, thereby creating the illusion of a single set of data being returned to the user of the data layer (I have considered the other issues that occur in such a scenario (i.e. essentially dirty reads and such) and have concluded that for my purposes I can control them).
However, this is where things get a little nasty... I've worked out how to get the messages back and such (a trivial enough problem), but where I am stuck is: how do I create a generic (or at least somewhat generic) way of querying my message queue? One where I can minimize the duplication between the SQL queries and the MessageQueue queries. I have considered using LINQ (but have a very limited understanding of the technology) and have also attempted an implementation with predicates, which so far is pretty smelly.
Are there any patterns for such a problem that I can utilize? Am I going about this the wrong way? Does anyone have any of their own ideas about how I can tackle this problem? Does anyone even understand what I am talking about? :-)
Any and ALL input would be highly appreciated and seriously considered…
Thanks again.
For anyone interested: I decided in the end to simply cache the transaction in another location and use the MSMQ as intended and described below.
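A sketch of that cache-beside-the-queue idea (in TypeScript with invented names, not the actual .NET implementation): recent transactions go into a small lookup structure at enqueue time, so an immediate recall never has to enumerate the queue:

```typescript
interface Transaction { id: string; amount: number; }

// The queue stays FIFO and write-only, as MSMQ intends; a small cache
// keyed by id serves the "recall the last transaction" case directly.
const queue: Transaction[] = [];                    // stand-in for the real message queue
const recentCache = new Map<string, Transaction>(); // immediate-recall path

function recordTransaction(tx: Transaction): void {
  queue.push(tx);             // drained into the database as time permits
  recentCache.set(tx.id, tx); // evict entries once the insert has landed
}

function recallTransaction(id: string): Transaction | undefined {
  // Check the cache first; fall back to the database for older transactions.
  return recentCache.get(id);
}
```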
If the queue has a large-ish number of messages on it, then enumerating those messages will become a serious bottleneck. MSMQ was designed for first-in-first-out kind of access and anything that doesn't follow that pattern can cause you lots of grief in terms of performance.
The answer depends greatly on the sort of queries you're going to be executing, but it may be some kind of NoSQL database (CouchDB or BerkeleyDB, etc.).
