Golang: runtime.GC guarantees - collections

Just wondering if there are any guarantees behind Go's runtime.GC() call.
Is it always true that if there are unreferenced objects, runtime.GC() will free them, and that the freed space will be reflected in subsequent calls to runtime.ReadMemStats()?
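For concreteness, this is the kind of pattern I mean (a minimal sketch; the 64 MB figure is arbitrary):

package main

import (
    "fmt"
    "runtime"
)

func main() {
    // Allocate ~64 MB, then drop the only reference to it.
    buf := make([]byte, 64<<20)
    fmt.Println(len(buf))
    buf = nil
    _ = buf

    var m runtime.MemStats
    runtime.GC() // documented to block until the collection completes
    runtime.ReadMemStats(&m)
    fmt.Printf("HeapAlloc after GC: %d bytes\n", m.HeapAlloc)
}

Is HeapAlloc here guaranteed to no longer include that 64 MB?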
Cheers,
Alin

Related

How to cancel a GRPC streaming call?

Usually, the client can cancel the gRPC call with:
(requestObserver as ClientCallStreamObserver<Request>)
.cancel("Cancelled", null)
However, it is shown in the Javadoc:
CancellableContext withCancellation = Context.current().withCancellation();
// do stuff
withCancellation.cancel(t);
Which one is the "correct" way of cancelling a client call, and letting the server know?
Edit:
To make matters more confusing, there's also ManagedChannel.shutdown*.
Both are acceptable and appropriate.
clientCallStreamObserver.cancel() is generally easier as it has less boilerplate, so it should generally be preferred. However, it is not thread-safe; it is like normal sending on the StreamObserver. It also requires direct awareness of the RPC; you can't have higher-level code orchestrate the cancellation, as it may not even be aware of the RPC.
Use Context cancellation for thread-safe and less RPC-aware cancellation. Context cancellation can be used in circumstances similar to thread interruption or Future cancellation. Note that CancellableContexts should be treated as a resource and must be cancelled eventually to avoid leaking memory. (context.cancel(null) can be used when the context reaches its "normal" end of life.)
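For what it's worth, io.grpc.Context is closely analogous to Go's context package, where the same lifecycle rule is idiomatic. A minimal Go sketch of the pattern (my illustration, not gRPC-specific code):

package main

import (
    "context"
    "fmt"
)

func main() {
    // A cancellable context is a resource: cancel it eventually, even on
    // the "normal" end-of-life path (hence the defer).
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    done := make(chan struct{})
    go func() {
        <-ctx.Done() // any goroutine can observe the cancellation
        fmt.Println("cancelled:", ctx.Err())
        close(done)
    }()

    cancel() // thread-safe, and the caller needs no handle on the RPC itself
    <-done
}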

Lock free non-allocating collection

I am looking for a collection data structure that is:
thread safe
lock free
non-allocating (amortized or pre-allocated is fine)
non-intrusive
not using exotic intrinsics
Element order does not matter. Stack, queue, bag, anything is fine. I have found plenty of examples that fulfill four of these five requirements, for example:
.NET's List is not thread safe.
If I put a mutex on it, then it's not lock free.
.NET's ConcurrentStack is thread safe, lock free, uses a simple CompareExchange, but allocates a new Node for each element.
If I move the next pointer from the Node to the element itself, then it's intrusive.
Array based lock free data structures tend to require multi-word intrinsics.
I feel like I'm missing something super obvious. This should be a solved problem.
.NET's ConcurrentQueue fulfills all five requirements. It does allocate when the backing storage runs out of space, similar to List<T>, but as long as there is extra capacity, no allocations occur. Unfortunately, the only way to reserve extra capacity upfront is to initialize it with a collection of the same size and then dequeue all the elements.
The same is true for .NET's ConcurrentBag.
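To make the shape of such a structure concrete, here is a minimal sketch in Go (the thread is about .NET, but the idea translates): a pre-allocated ring buffer that is lock free, non-intrusive, and uses only plain atomic loads and stores. It is deliberately restricted to a single producer and a single consumer; a general multi-producer/multi-consumer version needs considerably more care.

package main

import (
    "fmt"
    "sync/atomic"
)

// SpscQueue is a pre-allocated, lock-free ring buffer. It is thread safe
// only for ONE producer and ONE consumer. It needs nothing beyond atomic
// loads and stores, and elements carry no embedded links.
type SpscQueue[T any] struct {
    slots []T
    head  atomic.Uint64 // next index to read (advanced by the consumer)
    tail  atomic.Uint64 // next index to write (advanced by the producer)
}

func NewSpscQueue[T any](capacity int) *SpscQueue[T] {
    return &SpscQueue[T]{slots: make([]T, capacity)}
}

// Offer returns false when the queue is full. Producer side only.
func (q *SpscQueue[T]) Offer(v T) bool {
    t := q.tail.Load()
    if t-q.head.Load() == uint64(len(q.slots)) {
        return false
    }
    q.slots[t%uint64(len(q.slots))] = v
    q.tail.Store(t + 1) // publishes the slot write to the consumer
    return true
}

// Poll returns false when the queue is empty. Consumer side only.
func (q *SpscQueue[T]) Poll() (T, bool) {
    var zero T
    h := q.head.Load()
    if h == q.tail.Load() {
        return zero, false
    }
    v := q.slots[h%uint64(len(q.slots))]
    q.slots[h%uint64(len(q.slots))] = zero // drop the reference
    q.head.Store(h + 1)
    return v, true
}

func main() {
    q := NewSpscQueue[int](8)
    q.Offer(1)
    q.Offer(2)
    v, ok := q.Poll()
    fmt.Println(v, ok) // 1 true
}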

Understanding (sub,partial,full,one-shot) continuations (in procedural languages)

After reading almost everything I could find about continuations, I still have trouble understanding them, maybe because all of the explanations are heavily tied to lambda calculus, which I also have trouble understanding.
In general, a continuation is some representation of what to do next, after you have finished doing the current thing, i.e. the rest of the computation.
But then, it gets trickier with all the variety. Maybe some of you could help me out with my custom analogy here and point out where I have made mistakes in my understanding.
Let's say our functions are represented as objects, and for the sake of simplicity:
Our interpreter has a stack of function calls.
Each function call has a stack for local data and arguments.
Each function call has a queue of "instructions" to be executed that operate on the local data stack and the queue itself (and, maybe on the stacks of callers too).
The analogy is meant to be similar to the XY concatenative language.
So, in my understanding:
A continuation is the rest of the whole computation (this unfinished queue of instructions + stack of all subsequent computations: queues of callers).
A partial continuation is the current unfinished queue + some delimited part of the caller stack, up until some point (not the full stack for the whole program).
A sub-continuation is the rest of the current instruction queue for currently "active" function.
A one-shot continuation is a continuation that can be executed only once, after being reified into an object.
Please correct me if I am wrong in my analogies.
A continuation is the rest of the whole computation (this unfinished queue of instructions + stack of all subsequent computations: queues of callers)
Informally, your understanding is correct. However, a continuation as a concept is an abstraction of the control state, and of the state the process should attain if the continuation is invoked. It need not contain the entire stack explicitly, as long as the state can be attained. This is one of the classical definitions of continuation. Refer to the Rhino JavaScript doc.
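To see "the rest of the computation" as a value, here is a toy continuation-passing sketch in Go (my own illustration; a real first-class continuation captures control state, not just a callback):

package main

import "fmt"

// k stands for the continuation: everything that remains to be done
// with the result of the current computation.
func factorial(n int, k func(int)) {
    if n == 0 {
        k(1)
        return
    }
    // "Multiply by n, then do whatever was left" becomes the new continuation.
    factorial(n-1, func(r int) { k(n * r) })
}

func main() {
    // The "empty" continuation: nothing left to do but print.
    factorial(5, func(r int) { fmt.Println(r) }) // 120
}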
A partial continuation is the current unfinished queue + some delimited part of the caller stack, up until some point (not the full one, for the whole program)
Again, informally, your understanding is correct. It is also called a delimited continuation or composable continuation. In this case, the process not only needs to attain the state represented by the delimited continuation, it also needs to limit the invocation of the continuation to the specified extent. This is in contrast to a full continuation, which starts from the specified point and continues to the end of the control flow; a delimited continuation starts from that point and ends at another identified point. It is a partial control flow, not the full one, which is precisely why a delimited continuation needs to return a value (see the Wikipedia article). The value usually represents the result of the partial execution.
A sub-continuation is the rest of the current instruction queue for currently "active" function
Your understanding here is slightly hazy. One way to understand this is to consider a multi-threaded environment. If there is a main thread and a new thread is started from it at some point, what should a continuation (captured in the context of the main or child thread) represent? The entire control flow of the child and main threads from that point (which in most cases does not make much sense), or just the control flow of the child thread? A sub-continuation is a kind of delimited continuation: a representation of the control-flow state from one point to another, belonging to a sub-process or a sub-process tree. Refer to this paper.
A one-shot continuation is such a continuation that can be executed only once, after being reified into an object
As per this paper, your understanding is correct. The paper does not seem to explicitly state whether this is a full/classical continuation or a delimited one; however, in later sections the paper states that full continuations are problematic, so I would assume this type of continuation is a delimited continuation.

Should clojure core.async channels be closed when not used anymore?

A close method (at least in the Java world) is something that you, as a good citizen, have to call when you are done using the related resource. Somehow I automatically started to apply the same rule to the close! function from the core.async library. These channels are not tied to any IO as far as I understand, and therefore I am not sure whether it is necessary to call close!. Is it OK to leave channels (local ones) for garbage collection without closing them?
It's fine to leave them to the garbage collector unless closing them has meaning in your program. It's common to close a channel as a signal to other components that it's time to shut themselves down. Other channels are intended to be used forever and are never expected to be closed or garbage collected. In still other cases a channel is used to convey one value to a consumer; then both the producer and the consumer are finished and the channel is GC'd. In these last cases there is no need to close them.
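The same rules happen to hold for Go channels, which gives a compact illustration of both cases (a hedged analogy, not core.async itself):

package main

import "fmt"

func main() {
    // Case 1: close as a shutdown signal to another component.
    done := make(chan struct{})
    finished := make(chan struct{})
    go func() {
        <-done // unblocks when done is closed
        fmt.Println("worker shutting down")
        close(finished)
    }()
    close(done)
    <-finished

    // Case 2: a single hand-off. Nobody closes the channel; it is simply
    // garbage collected once both sides drop their references.
    once := make(chan int, 1)
    once <- 42
    fmt.Println(<-once)
}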

Tail calls in architectures without a call stack

My answer for a recent question about GOTOs and tail recursion was phrased in terms of a call stack. I'm worried that it wasn't sufficiently general, so I ask you: how is the notion of a tail call (or equivalent) useful in architectures without a call stack?
In continuation passing, all called functions replace the calling function, and are thus tail calls, so "tail call" doesn't seem to be a useful distinction. In messaging and event based architectures, there doesn't seem to be an equivalent, though please correct me if I'm wrong. The latter two architectures are interesting cases as they are associated with OOP, rather than FP. What about other architectures? Were the old Lisp machines based on call-stacks?
Edit: According to "What the heck is: Continuation Passing Style (CPS)" (and Alex below), the equivalent of a tail call under continuation passing is not "called function replaces calling function" but "calling function passes the continuation it was given, rather than creating a new continuation". This type of tail call is useful, unlike what I asserted.
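To make that concrete, here's a toy CPS sketch (in Go; the names are mine, not from either answer). The two square calls each wrap the continuation in a new one; the final add forwards it unchanged, which is the tail call:

package main

import "fmt"

// square and add each take a continuation k instead of returning a value.
func square(n int, k func(int)) { k(n * n) }

func add(a, b int, k func(int)) { k(a + b) }

func pythagoras(a, b int, k func(int)) {
    square(a, func(a2 int) {
        square(b, func(b2 int) {
            add(a2, b2, k) // tail call: k is passed through as-is
        })
    })
}

func main() {
    pythagoras(3, 4, func(r int) { fmt.Println(r) }) // 25
}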
Also, I'm not interested in systems that use call stacks at a lower level, for the higher level doesn't require a call stack. This restriction doesn't apply to Alex's answer because he's writing about the fact that other invocation architectures (is this the right term?) often have a call stack equivalent, not that they have a call stack somewhere under the hood. In the case of continuation passing, the structure is like an arborescence, but with edges in the opposite direction. Call stack equivalents are highly relevant to my interests.
"Architectures without a call stack" typically "simulate" one at some level -- for example, back in the time of IBM 360, we used the S-Type Linkage Convention using register-save areas and arguments-lists indicated, by convention, by certain general-purpose registers.
So "tail call" can still matter: does the calling function need to preserve the information necessary to resume execution after the calling point (once the called function is finished), or does it know there IS going to be no execution after the calling point, and so simply reuse its caller's "info to resume execution" instead?
So for example a tail call optimization might mean not appending the continuation needed to resume execution on whatever linked list is being used for the purpose... which I like to see as a "call stack simulation" (at some level, though it IS obviously a more flexible arrangement -- don't want to have continuation-passing fans jumping all over my answer;-).
On the off chance that this question interests someone other than me, I have an expanded answer for the other question that also answers this one. Here's the nutshell, non-rigorous version.
When a computational system performs sub-computations (i.e. a computation starts and must pause while another computation is performed, because the first depends on the result of the second), a dependency relation between execution points naturally arises. In call-stack based architectures, the relation is topologically a path graph. In CPS, it's a tree, where every path between the root and a node is a continuation. In message passing and threading, it's a collection of path graphs. Synchronous event handling is basically message passing. Starting a sub-calculation involves extending the dependency relation, except in a tail call, which replaces a leaf rather than appending to it.
Translating tail calling to asynchronous event handling is more complex, so instead consider a more general version. If A is subscribed to an event on channel 1, B is subscribed to the same event on channel 2 and B's handler merely fires the event on channel 1 (it translates the event across channels), then A can be subscribed to the event on channel 2 instead of subscribing B. This is more general because the equivalent of a tail call requires that
A's subscription on channel 1 be canceled when A is subscribed on channel 2
the handlers are self-unsubscribing (when invoked, they cancel the subscription)
Now for two systems that don't perform sub-computations: lambda calculus (or term rewriting systems in general) and RPN. For lambda calculus, tail calls roughly correspond to a sequence of reductions where the term length is O(1) (see iterative processes in SICP section 1.2). Take RPN to use a data stack and an operations stack (as opposed to a stream of operations; the operations are those yet to be processed), and an environment that maps symbols to a sequence of operations. Tail calls could correspond to processes with O(1) stack growth.
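To illustrate the SICP distinction, a sketch in Go (which does not itself eliminate tail calls; only the shape of the process matters here):

package main

import "fmt"

// Recursive process: a deferred multiplication accumulates at every step,
// so the pending work grows as O(n).
func factRec(n int) int {
    if n == 0 {
        return 1
    }
    return n * factRec(n-1)
}

// Iterative process: all state is carried in the arguments, so the pending
// work stays O(1) -- the tail-call shape.
func factIter(n, acc int) int {
    if n == 0 {
        return acc
    }
    return factIter(n-1, n*acc)
}

func main() {
    fmt.Println(factRec(5), factIter(5, 1)) // 120 120
}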
