Pintool- How can I traverse through all traces (even the ones that have already been executed once)? - intel-pin

I'm trying to count how many times a BBL is executed over the whole program run, but apparently TRACE_AddInstrumentFunction skips traces that have already been executed once. Does anyone have any ideas?

Pin instrumentation works in two phases. The instrumentation callback is called when new code is first encountered, and lets you insert analysis callbacks; the analysis callbacks are then called every time that code executes.
I strongly recommend reading the first part of the Pin manual to understand the difference between instrumentation and analysis functions.

The instrumentation callback is where you insert those analysis calls; in simpler terms, it has you attach a function call to each unit of instrumentation, and you can choose that unit to be an instruction, a trace, or a routine. Now, specific to your question: Pin follows a different definition of a BBL than the textbook one, but counting how many times a BBL (per Pin's definition) is executed is easy. Simply register a trace instrumentation callback, insert an analysis call for every BBL in the trace, increment a counter in that analysis call, and you will get the BBL count.
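For reference, here is a minimal sketch of such a tool, using the standard Pin TRACE/BBL API (exact signatures may vary slightly between Pin releases):

#include "pin.H"
#include <iostream>

static UINT64 bblCount = 0; // incremented once per executed BBL

// Analysis routine: runs every time an instrumented BBL executes.
VOID CountBbl() { bblCount++; }

// Instrumentation routine: runs once per trace, when Pin first sees it.
VOID Trace(TRACE trace, VOID *v)
{
    for (BBL bbl = TRACE_BblHead(trace); BBL_Valid(bbl); bbl = BBL_Next(bbl))
        BBL_InsertCall(bbl, IPOINT_BEFORE, (AFUNPTR)CountBbl, IARG_END);
}

VOID Fini(INT32 code, VOID *v)
{
    std::cerr << "BBLs executed: " << bblCount << std::endl;
}

int main(int argc, char *argv[])
{
    if (PIN_Init(argc, argv)) return 1;
    TRACE_AddInstrumentFunction(Trace, 0);
    PIN_AddFiniFunction(Fini, 0);
    PIN_StartProgram(); // never returns
    return 0;
}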
If you want to go by the textbook definition of a BBL (one entry, one exit), which implies a BBL ends at each branch-or-call instruction, insert a call wherever the INS_IsBranchOrCall API returns true and increment the BBL counter in the callback function.
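A sketch of that variant, reusing the CountBbl analysis routine from the sketch above (INS_IsBranchOrCall comes from the older Pin API; newer releases expose it as INS_IsControlFlow):

VOID Instruction(INS ins, VOID *v)
{
    // A textbook BBL ends at every branch or call instruction.
    if (INS_IsBranchOrCall(ins))
        INS_InsertCall(ins, IPOINT_BEFORE, (AFUNPTR)CountBbl, IARG_END);
}
// registered in main() with: INS_AddInstrumentFunction(Instruction, 0);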
I recommend trying both of them and figuring out the difference between the two definitions.

Related

Understanding (sub,partial,full,one-shot) continuations (in procedural languages)

After reading almost everything I could find about continuations, I still have trouble understanding them. Maybe because all of the explanations are heavily tied to lambda calculus, which I also have trouble understanding.
In general, a continuation is some representation of what to do next, after you have finished doing the current thing, i.e. the rest of the computation.
But then, it gets trickier with all the variety. Maybe some of you could help me out with my custom analogy here and point out where I have made mistakes in my understanding.
Let's say our functions are represented as objects, and for the sake of simplicity:
Our interpreter has a stack of function calls.
Each function call has a stack for local data and arguments.
Each function call has a queue of "instructions" to be executed that operate on the local data stack and the queue itself (and, maybe on the stacks of callers too).
The analogy is meant to be similar to XY concatenative language.
So, in my understanding:
A continuation is the rest of the whole computation (this unfinished queue of instructions + stack of all subsequent computations: queues of callers).
A partial continuation is the current unfinished queue + some delimited part of the caller stack, up until some point (not the full one, for the whole program).
A sub-continuation is the rest of the current instruction queue for currently "active" function.
A one-shot continuation is a continuation that can be executed only once, after being reified into an object.
Please correct me if I am wrong in my analogies.
A continuation is the rest of the whole computation (this unfinished queue of instructions + stack of all subsequent computations: queues of callers)
Informally, your understanding is correct. However, a continuation as a concept is an abstraction of the control state, and of the state the process should attain if the continuation is invoked. It need not contain the entire stack explicitly, as long as that state can be attained. This is one of the classical definitions of continuation. Refer to the Rhino JavaScript documentation.
A partial continuation is the current unfinished queue + some delimited part of the caller stack, up until some point (not the full one, for the whole program)
Again, informally, your understanding is correct. It is also called a delimited continuation or composable continuation. In this case, the process not only needs to attain the state represented by the delimited continuation, it also needs to limit the invocation of the continuation to the specified extent. This is in contrast to a full continuation, which starts from the point specified and continues to the end of the control flow; a delimited continuation starts from that point and ends at another identified point. It is a partial control flow, not the full one, which is precisely why a delimited continuation needs to return a value (see the Wikipedia article); the value usually represents the result of the partial execution.
A sub-continuation is the rest of the current instruction queue for currently "active" function
Your understanding here is slightly hazy. One way to approach it is to consider a multi-threaded environment. If there is a main thread and a new thread is started from it at some point, what should a continuation (from the context of the main or child thread) represent? The entire control flow of both the child and main threads from that point (which in most cases does not make much sense), or just the control flow of the child thread? A sub-continuation is a kind of delimited continuation: a representation of the control-flow state from one point to another that is part of a sub-process or a sub-process tree. Refer to this paper.
A one-shot continuation is such a continuation that can be executed only once, after being reified into an object
As per this paper, your understanding is correct. The paper does not seem to state explicitly whether this is a full/classical continuation or a delimited one; however, in later sections it notes that full continuations are problematic, so I would assume this type of continuation is a delimited continuation.

openCL : No. of iterations in profiling API

Trying to use clGetEventProfilingInfo for timing my kernels.
Is there any facility to specify a number of iterations over which the start-time and end-time values are reported?
If the kernel is run only once then, of course, it has a lot of overhead associated with it. So to get the best timing we should run the kernel several times and take the average time.
Do we have such a parameter when profiling with the API? (We do have such parameters when we use third-party profiling tools.)
The clGetEventProfilingInfo function returns profiling information for a single event, which corresponds to a single enqueued command. There is no built-in mechanism to automatically accumulate timing information across a number of calls; you'll have to code that yourself.
It's pretty straightforward to do: just query the start and end times for each event you care about and add them up. If you are only running a single kernel (in a loop), you could instead use a wall-clock timer (with clFinish before you start and stop timing), or take the difference between the time the first event started and the time the last event finished.
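Here is a minimal sketch of the per-event approach (error checking omitted; it assumes the queue was created with CL_QUEUE_PROFILING_ENABLE, and the function and parameter names are placeholders):

#include <CL/cl.h>

// Returns the average device-side execution time, in milliseconds,
// over `iterations` runs of a 1-D kernel.
double average_kernel_ms(cl_command_queue queue, cl_kernel kernel,
                         size_t global_size, int iterations)
{
    cl_ulong total_ns = 0;
    for (int i = 0; i < iterations; ++i)
    {
        cl_event evt;
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size,
                               NULL, 0, NULL, &evt);
        clWaitForEvents(1, &evt); // profiling info is valid once complete

        cl_ulong start = 0, end = 0;
        clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_START,
                                sizeof(start), &start, NULL);
        clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_END,
                                sizeof(end), &end, NULL);
        total_ns += end - start; // timestamps are in nanoseconds
        clReleaseEvent(evt);
    }
    return (total_ns / (double)iterations) * 1e-6; // ns -> ms
}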

How to properly write a SIGPROF handler that invokes AsyncGetCallTrace?

I am writing a short and simple profiler (in C), which is intended to print out stack traces for threads in various Java clients at regular intervals. I have to use the undocumented function AsyncGetCallTrace instead of GetStackTrace to minimize intrusion and allow for stack traces regardless of thread state. The source code for the function can be found here: http://download.java.net/openjdk/jdk6/promoted/b20/openjdk-6-src-b20-21_jun_2010.tar.gz
in hotspot/src/share/vm/prims/forte.cpp. I found some man pages documenting JVMTI, signal handling, and timing, as well as a blog with details on how to set up the AsyncGetCallTrace call: http://jeremymanson.blogspot.com/2007/05/profiling-with-jvmtijvmpi-sigprof-and.html
What this blog is missing is the code to actually invoke the function within the signal handler (the author assumes the reader can do this on his/her own). I am asking for help in doing exactly this. I am not sure how and where to create the struct ASGCT_CallTrace (and the internal struct ASGCT_CallFrame), as defined in the aforementioned file forte.cpp. The struct ASGCT_CallTrace is one of the parameters passed to AsyncGetCallTrace, so I do need to create it, but I don't know how to obtain the correct values for its fields: JNIEnv *env_id, jint num_frames, and JVMPI_CallFrame *frames. Furthermore, I do not know what the third parameter passed to AsyncGetCallTrace (void* ucontext) is supposed to be.
The above problem is the main one I am having. However, other issues I am faced with include:
SIGPROF doesn't seem to be raised by the timer exactly at the specified intervals, but rather a bit less frequently. That is, if I set the timer to send a SIGPROF every second (1 sec, 0 usec), then in a 5 second run, I am getting fewer than 5 SIGPROF handler outputs (usually 1-3)
SIGPROF handler outputs do not appear at all during a Thread.sleep in the Java code. So, if a SIGPROF is to be sent every second, and I have Thread.sleep(5000);, I will not get any handler outputs during the execution of that code.
Any help would be appreciated. Additional details (as well as parts of code and sample outputs) will be posted upon request.
I finally got a positive result, but since little discussion was spawned here, my own answer will be brief.
The ASGCT_CallTrace structure (and the underlying ASGCT_CallFrame array) can simply be declared in the signal handler, thus existing only on the stack:
ASGCT_CallTrace trace;
JNIEnv *env;
global_VM_pointer->AttachCurrentThread((void **) &env, NULL);
trace.env_id = env;
trace.num_frames = 0;
ASGCT_CallFrame storage[25];
trace.frames = storage;
The following gets the uContext:
ucontext_t uContext;
getcontext(&uContext);
And then the call is just:
AsyncGetCallTrace(&trace, 25, &uContext);
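Putting the pieces together, a handler might look roughly like the sketch below. This is not my exact code: the struct layouts are copied from forte.cpp, get_current_jnienv() is a hypothetical stand-in for however you cache each thread's JNIEnv (e.g. in a JVMTI ThreadStart callback), and note that if you install the handler with SA_SIGINFO, the third argument the kernel passes is already the ucontext, so getcontext() isn't needed:

#include <signal.h>
#include <jni.h>

// struct layouts copied from hotspot/src/share/vm/prims/forte.cpp
typedef struct {
    jint lineno;         // BCI in the method, or a negative error code
    jmethodID method_id; // method executing in this frame
} ASGCT_CallFrame;

typedef struct {
    JNIEnv *env_id;          // JNIEnv of the thread being sampled
    jint num_frames;         // frames collected, or an error code if <= 0
    ASGCT_CallFrame *frames; // caller-allocated frame storage
} ASGCT_CallTrace;

// exported (but undocumented) by libjvm; drop extern "C" if compiling as C
extern "C" void AsyncGetCallTrace(ASGCT_CallTrace *trace, jint depth, void *ucontext);

// hypothetical: however you cache the current thread's JNIEnv
extern JNIEnv *get_current_jnienv(void);

enum { MAX_FRAMES = 25 };

static void sigprof_handler(int sig, siginfo_t *info, void *ucontext)
{
    ASGCT_CallFrame frames[MAX_FRAMES]; // lives only on the stack
    ASGCT_CallTrace trace;
    trace.env_id = get_current_jnienv();
    trace.num_frames = 0;
    trace.frames = frames;
    // with SA_SIGINFO, `ucontext` already points at a ucontext_t
    AsyncGetCallTrace(&trace, MAX_FRAMES, ucontext);
    // trace.num_frames > 0 means success; <= 0 is an error code
}

// installation:
//   struct sigaction sa = {};
//   sa.sa_sigaction = sigprof_handler;
//   sa.sa_flags = SA_SIGINFO | SA_RESTART;
//   sigaction(SIGPROF, &sa, NULL);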
I am sure there are some other nuances that I had to take care of in the process, but I did not really document them. I am not sure I can disclose the full current code I have, which successfully asynchronously requests for and obtains stack traces of any java program at fixed intervals. But if someone is interested in or stuck on the same problem, I am now able to help (I think).
On the other two issues:
[1] The timer imperfections no longer seem to appear. Perhaps I mis-measured earlier.
[2] If a thread is sleeping when a SIGPROF is generated, the thread handles that signal only after waking up. This is normal, since it is the thread's job to handle the signal.

Tail calls in architectures without a call stack

My answer for a recent question about GOTOs and tail recursion was phrased in terms of a call stack. I'm worried that it wasn't sufficiently general, so I ask you: how is the notion of a tail call (or equivalent) useful in architectures without a call stack?
In continuation passing, all called functions replace the calling function, and are thus tail calls, so "tail call" doesn't seem to be a useful distinction. In messaging and event based architectures, there doesn't seem to be an equivalent, though please correct me if I'm wrong. The latter two architectures are interesting cases as they are associated with OOP, rather than FP. What about other architectures? Were the old Lisp machines based on call-stacks?
Edit: According to "What the heck is: Continuation Passing Style (CPS)" (and Alex below), the equivalent of a tail call under continuation passing is not "called function replaces calling function" but "calling function passes the continuation it was given, rather than creating a new continuation". This type of tail call is useful, unlike what I asserted.
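A tiny sketch of that distinction (in C++, with std::function standing in for first-class continuations; factorial is just a convenient example):

#include <cstdio>
#include <functional>

// Non-tail version: each recursive call wraps the continuation it was
// given in a NEW continuation, so the chain grows with n.
void fact_cps(int n, std::function<void(int)> k)
{
    if (n <= 1) k(1);
    else fact_cps(n - 1, [n, k](int r) { k(n * r); });
}

// Tail version: the accumulator lets every call pass k through
// unchanged -- the CPS equivalent of a tail call.
void fact_acc_cps(int n, int acc, std::function<void(int)> k)
{
    if (n <= 1) k(acc);
    else fact_acc_cps(n - 1, n * acc, k);
}

int main()
{
    fact_cps(5, [](int r) { std::printf("%d\n", r); });        // 120
    fact_acc_cps(5, 1, [](int r) { std::printf("%d\n", r); }); // 120
}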
Also, I'm not interested in systems that use call stacks at a lower level, for the higher level doesn't require a call stack. This restriction doesn't apply to Alex's answer because he's writing about the fact that other invocation architectures (is this the right term?) often have a call stack equivalent, not that they have a call stack somewhere under the hood. In the case of continuation passing, the structure is like an arborescence, but with edges in the opposite direction. Call stack equivalents are highly relevant to my interests.
"Architectures without a call stack" typically "simulate" one at some level -- for example, back in the time of IBM 360, we used the S-Type Linkage Convention using register-save areas and arguments-lists indicated, by convention, by certain general-purpose registers.
So "tail call" can still matter: does the calling function need to preserve the information necessary to resume execution after the calling point (once the called function is finished), or does it know there IS going to be no execution after the calling point, and so simply reuse its caller's "info to resume execution" instead?
So for example a tail call optimization might mean not appending the continuation needed to resume execution on whatever linked list is being used for the purpose... which I like to see as a "call stack simulation" (at some level, though it IS obviously a more flexible arrangement -- don't want to have continuation-passing fans jumping all over my answer;-).
On the off chance that this question interests someone other than me, I have an expanded answer for the other question that also answers this one. Here's the nutshell, non-rigorous version.
When a computational system performs sub-computations (i.e. a computation starts and must pause while another computation is performed because the first depends on the result of the second), a dependency relation between execution points naturally arises. In call-stack based architectures, the relation is topologically a path graph. In CPS, it's a tree, where every path between the root and a node is a continuation. In message passing and threading, it's a collection of path graphs. Synchronous event handling is basically message passing. Starting a sub-calculation involves extending the dependency relation, except in a tail call which replaces a leaf rather than appending to it.
Translating tail calling to asynchronous event handling is more complex, so instead consider a more general version. If A is subscribed to an event on channel 1, B is subscribed to the same event on channel 2 and B's handler merely fires the event on channel 1 (it translates the event across channels), then A can be subscribed to the event on channel 2 instead of subscribing B. This is more general because the equivalent of a tail call requires that
A's subscription on channel 1 be canceled when A is subscribed on channel 2
the handlers are self-unsubscribing (when invoked, they cancel the subscription)
Now for two systems that don't perform sub-computations: lambda calculus (or term rewriting systems in general) and RPN. For lambda calculus, tail calls roughly correspond to a sequence of reductions where the term length is O(1) (see iterative processes in SICP section 1.2). Take RPN to use a data stack and an operations stack (as opposed to a stream of operations; the stack holds the operations yet to be processed), plus an environment that maps symbols to sequences of operations. Tail calls could then correspond to processes with O(1) stack growth.

Is Erlang's recursive functions not just a goto?

Just to get it straight in my head. Consider this example bit of Erlang code:
test() ->
    receive
        {From, whatever} ->
            %% do something
            test();
        {From, somethingelse} ->
            %% do something else
            test()
    end.
Isn't the test() call just a goto?
I ask this because in C we learned that if you do a function call, the return location is always put on the stack. I can't imagine this is the case in Erlang here, since it would result in a stack overflow.
We had two different ways of calling functions: goto and gosub.
goto just steered the program flow somewhere else, and gosub remembered where you came from so you could return.
Given this way of thinking, I can look at Erlang's recursion more easily: if I just read test() as a goto, then there is no problem at all.
Hence my question: isn't Erlang just using a goto instead of remembering the return address on a stack?
EDIT:
Just to clarify my point:
I know gotos can be used in some languages to jump all over the place. But just suppose that instead of doing someFunction() you could also do goto someFunction().
In the first example the flow returns; in the second example the flow just continues in someFunction and never returns.
So we limit the normal GOTO behaviour by only being able to jump to function starting points.
If you see it like this, then the Erlang recursive function call looks like a goto.
(A goto, in my opinion, is a function call without the ability to return to where you came from. Which is exactly what is happening in the Erlang example.)
A tail recursive call is more of a "return and immediately call this other function" than a goto because of the housekeeping that's performed.
Addressing your newest points: recording the return point is just one bit of housekeeping that's performed when a function is called. The return point is stored in the stack frame, the rest of which must be allocated and initialized (in a normal call), including the local variables and parameters. With tail recursion, a new frame doesn't need to be allocated and the return point doesn't need to be stored (the previous value works fine), but the rest of the housekeeping needs to be performed.
There's also housekeeping that needs to be performed when a function returns, which includes discarding locals and parameters (as part of the stack frame) and returning to the call point. During a tail recursive call, the locals for the current function are discarded before the new function is invoked, but no return happens.
Rather like how threads allow for lighter-weight context switching than processes, tail calls allow for lighter-weight function invocation, since some of the housekeeping can be skipped.
The "goto &NAME" statement in Perl is closer to what you're thinking of, but not quite, as it discards locals. Parameters are kept around for the newly invoked function.
One more, simple difference: a tail call can only jump to a function entry point, while a goto can jump most anywhere (some languages restrict the target of a goto, such as C, where goto can't jump outside a function).
You are correct: the Erlang compiler will detect that it is a tail recursive call, and instead of growing the stack, it reuses the current function's stack space.
Furthermore, it is also able to detect circular tail recursion, e.g.
test() -> ..., test2().
test2() -> ..., test3().
test3() -> ..., test().
will also be optimized.
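Conceptually, the optimized result behaves like a single loop over states. Here is a rough C++ sketch of what last-call optimization buys here (hypothetical, purely to illustrate the constant stack usage):

enum class State { Test, Test2, Test3 };

void run()
{
    State s = State::Test;
    for (;;) // loops forever, like the Erlang example
    {
        switch (s)
        {
        case State::Test:  /* ... */ s = State::Test2; break;
        case State::Test2: /* ... */ s = State::Test3; break;
        case State::Test3: /* ... */ s = State::Test;  break;
        }
    }
}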
The "unfortunate" side-effect of this is that when you are tracing function calls, you will not be able to see each invocation of a tail recursive function, but the entry and exit point.
You've got two questions here.
First, no, you're not in any danger of overrunning the stack in this case because these calls to test() are both tail-recursive.
Second, no, function calls are not goto, they're function calls. :) The thing that makes goto problematic is that it bypasses any structure in your code. You can jump out of statements, jump into statements, bypass assignments...all kinds of screwiness. Function calls don't have this problem because they have an obvious flow of control.
I think the difference here is between a "real" goto and what can in some cases seem like a goto. In some special cases the compiler can detect that it is free to clean up the stack of the current function before calling another function: namely, when the call is the last call in the function. The difference from a goto is, of course, that as in any other call you can pass arguments to the new function.
As others have pointed out, this optimisation is not restricted to recursive calls but applies to all last calls. This is used in the "classic" way of programming FSMs.
It's a goto in the same way that if is goto and while is goto. It is implemented using (the moral equivalent of) goto, but it does not expose the full shoot-yourself-in-the-foot potential of goto directly to the programmer.
In fact, these recursive functions are the ultimate GOTO according to Guy Steele.
Here's a more general answer, which supersedes my earlier answer based on call stacks. Since the earlier answer has been accepted, I won't replace the text.
Prologue
Some architectures don't have things they call "functions" that are "called", but do have something analogous (messaging may call them "methods" or "message handlers"; event based architectures have "event handlers" or simply "handlers"). I'll be using the terms "code block" and "invocation" for the general case, though (strictly speaking) "code block" can include things that aren't quite functions. You can substitute the appropriately inflected form of "call" for "invocation" or "invoke", as I might in a few places. The features of an architecture that describe invocation are sometimes called "styles", as in "continuation passing style" (CPS), though "style" isn't an official term. To keep things from being too abstract, we'll examine the call stack, continuation passing, messaging (à la OOP) and event handling invocation styles. I should specify the models I'm using for these styles, but I'm leaving them out in the interest of space.
Invocation Features
or, C Is For Continuation, Coordination and Context, That's Good Enough For Me
Hohpe identifies three nicely alliterative invocation features of the call-stack style: Continuation, Coordination, Context (all capitalized to distinguish them from other uses of the words).
Continuation decides where execution will continue when a code block finishes. The "Continuation" feature is related to "first-class continuations" (often simply called "continuations", including by me), in that continuations make the Continuation feature visible and manipulable at a programmatic level.
Coordination means code doesn't execute until the data it needs is ready. Within a single call stack, you get Coordination for free because the program counter won't return to a function until a called function finishes. Coordination becomes an issue in (e.g.) concurrent and event-driven programming, the former because a data producer may fall behind a data consumer and the latter because when a handler fires an event, the handler continues immediately without waiting for a response.
Context refers to the environment that is used to resolve names in a code block. It includes allocation and initialization of the local variables, parameters and return value(s). Parameter passing is also covered by the calling convention (keeping up the alliteration); for the general case, you could split Context into a feature that covers locals, one that covers parameters and another for return values. For CPS, return values are covered by parameter passing.
The three features aren't necessarily independent; invocation style determines their interrelationships. For instance, Coordination is tied to Continuation under the call-stack style. Continuation and Context are connected in general, since return values are involved in Continuation.
Hohpe's list isn't necessarily exhaustive, but it will suffice to distinguish tail-calls from gotos. Warning: I might go off on tangents, such as exploring invocation space based on Hohpe's features, but I'll try to contain myself.
Invocation Feature Tasks
Each invocation feature involves tasks to be completed when invoking a code block. For Continuation, invoked code blocks are naturally related by a chain of invoking code. When a code block is invoked, the current invocation chain (or "call chain") is extended by placing a reference (an "invocation reference") to the invoking code at the end of the chain (this process is described more concretely below). Invocation also involves binding names to code blocks and parameters, so we see even non-bondage-and-discipline languages can have the same fun.
Tail Calls
or, The Answer
or, The Rest Is Basically Unnecessary
Tail calling is all about optimizing Continuation, and it's a matter of recognizing when the main Continuation task (recording an invocation reference) can be skipped. The other feature tasks stand on their own. A "goto" represents optimizing away tasks for Continuation and Context. That's pretty much why a tail call isn't a simple "goto". What follows will flesh out what tail calls look like in various invocation styles.
Tail Calls In Specific Invocation Styles
Different styles arrange invocation chains in different structures, which I'll call a "tangle", for lack of a better word. Isn't it nice that we've gotten away from spaghetti code?
With a call-stack, there's only one invocation chain in the tangle; extending the chain means pushing the program counter. A tail call means no program counter push.
Under CPS, the tangle consists of the extant continuations, which form a reverse arborescence (a directed tree where every edge points towards a central node), where each path back to the center is an invocation chain (note: if the program entry point is passed a "null" continuation, the tangle can be a whole forest of reverse arborescences). One particular chain is the default, which is where an invocation reference is added during invocation. Tail calls won't add an invocation reference to the default invocation chain. Note that "invocation chain" here is basically synonymous with "continuation", in the sense of "first class continuation".
Under message passing, the invocation chain is a chain of blocked methods, each waiting for a response from the method before it in the chain. A method that invokes another is a "client"; the invoked method is a "supplier" (I'm purposefully not using "service", though "supplier" isn't much better). A messaging tangle is a set of unconnected invocation chains. This tangle structure is rather like having multiple thread or process stacks. When a method merely echoes another method's response as its own, the method can have its client wait on its supplier rather than on itself. Note that this gives a slightly more general optimization, one that involves optimizing Coordination as well as Continuation. If the final portion of a method doesn't depend on a response (and the response doesn't depend on the data processed in the final portion), the method can continue once it has passed on its client's wait dependency to its supplier. This is analogous to launching a new thread, where the final portion of the method becomes the thread's main function, followed by a call-stack style tail call.
What About Event Handling Style?
With event handling, invocations don't have responses and handlers don't wait, so "invocation chains" (as used above) isn't a useful concept. Instead of a tangle, you have priority queues of events, which are owned by channels, and subscriptions, which are lists of listener-handler pairs. In some event driven architectures, channels are properties of listeners; every listener owns exactly one channel, so channels become synonymous with listeners. Invoking means firing an event on a channel, which invokes all subscribed listener-handlers; parameters are passed as properties of the event. Code that would depend on a response in another style becomes a separate handler under event handling, with an associated event. A tail call would be a handler that fires the event on another channel and does nothing else afterwards. Tail call optimization would involve re-subscribing listeners for the event from the second channel to the first, or possibly having the handler that fired the event on the first channel instead fire on the second channel (an optimization made by the programmer, not the compiler/interpreter). Here's what the former optimization looks like, starting with the un-optimized version.
Listener Alice subscribes to event "inauguration" on BBC News, using handler "party"
Alice fires event "election" on channel BBC News
Bob is listening for "election" on BBC News, so Bob's "openPolls" handler is invoked
Bob subscribes to event "inauguration" on channel CNN.
Bob fires event "voting" on channel CNN
Other events are fired & handled. Eventually, one of them ("win", for example) fires event "inauguration" on CNN.
Bob's "inauguration" handler fires "inauguration" on BBC News
Alice's inauguration handler is invoked.
And the optimized version:
Listener Alice subscribes to event "inauguration" on BBC News
Alice fires event "election" on channel BBC News
Bob is listening for "election" on BBC News, so Bob's "openPolls" handler is invoked
Bob subscribes anyone listening for "inauguration" on BBC News to the inauguration event on CNN*.
Bob fires event "voting" on channel CNN
Other events are fired & handled. Eventually, one of them fires event "inauguration" on CNN.
Alice's inauguration handler is invoked for the inauguration event on CNN.
Note that tail calls are trickier (untenable?) under event handling because they have to take subscriptions into account. If Alice were later to unsubscribe from "inauguration" on BBC News, the subscription to "inauguration" on CNN would also need to be canceled. Additionally, the system must ensure it doesn't inappropriately invoke a handler multiple times for a listener. In the above optimized example, what if there's another handler for "inauguration" on CNN that fires "inauguration" on BBC News? Alice's "party" handler will be invoked twice, which may get her in trouble at work. One solution is to have *Bob unsubscribe all listeners from "inauguration" on BBC News in step 4, but then you introduce another bug wherein Alice will miss inauguration events that don't come via CNN. Maybe she wants to celebrate both the U.S. and British inaugurations. These problems arise because there are distinctions I'm not making in the model, possibly based on types of subscriptions. For instance, maybe there's a special kind of one-shot subscription (like System V signal handlers) or some handlers unsubscribe themselves, and tail call optimization is only applied in these cases.
What's next?
You could go on to more fully specify invocation feature tasks. From there, you could figure out what optimizations are possible, and when they can be used. Perhaps other invocation features could be identified. You could also think of more examples of invocation styles. You could also explore the dependencies between invocation features. For instance, synchronous and asynchronous invocation involve explicitly coupling or uncoupling Continuation and Coordination. It never ends.
Get all that? I'm still trying to digest it myself.
References:
Hohpe, Gregor; "Event-Driven Architecture"
Sugalski, Dan; "CPS and tail calls--two great tastes that taste great together"
In this case it is possible to do tail-call optimization, since we don't need to do more work or make use of local variables after the call. So the compiler will convert this into a loop.
(a goto in my opinion is a function call without the ability to return where you came from). Which is exactly what is happening in the Erlang example.
That is not what's happening in Erlang: you CAN return to where you came from.
The calls are tail-recursive, which means that they are "sort of" a goto. Make sure you understand what tail recursion is before you attempt to understand or write any code. Reading Joe Armstrong's book probably isn't a bad idea if you are new to Erlang.
Conceptually, when you call yourself using test(), a call is made to the start of the function with whatever parameters you pass (none in this example), but nothing more is added to the stack. So all your variables are thrown away and the function starts fresh, but you didn't push a new return pointer onto the stack. It's like a hybrid between a goto and a traditional imperative-language function call like you'd have in C or Java. But there IS still one entry on the stack from the very first call from the calling function. So when you eventually exit by returning a value rather than doing another test(), that return location is popped from the stack and execution resumes in your calling function.
