What is retracting in the context of OCaml? - functional-programming

I have scribbled the term "retracting in OCaml" in a small space in my notebook and now I can't seem to recollect what it was about nor can I find anything about it on the internet.
Does this term really exist, or is it my lecturer's own notation for some property of OCaml? My classmates don't seem to remember what it was about either, so I just want to confirm whether I was dreaming or not.

Another possible explanation: in math, a retraction is a left inverse of a morphism (see this definition). In particular, a parser can be seen as a retraction w.r.t. a given pretty-printer: start from an abstract syntax tree (AST) and pretty-print it; parsing the resulting source code should then yield the original AST (while the converse is not necessarily true). This doesn't have much to do with OCaml per se, but it is linked to an algebraic view (of compiling) that is quite common in functional programming.
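To make this concrete (a tiny OCaml illustration of my own, nothing course-specific): int_of_string is a retraction of string_of_int in exactly this sense, just as a parser is a retraction of a pretty-printer.

(* int_of_string is a left inverse (a retraction) of string_of_int: *)
let () =
  assert (int_of_string (string_of_int 42) = 42);
  (* ...but string_of_int is not a left inverse of int_of_string:
     several strings parse to the same int, so the original string
     cannot always be recovered. *)
  assert (string_of_int (int_of_string "007") <> "007")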

Related

What's the distinction between `shows` and `obtains` in Isabelle Isar?

I am trying to understand the difference between the shows and obtains commands in Isar (as of Isabelle 2020). The documentation in isar-ref.pdf (p. 137) seems to have a typo and confuses me.
...
Moreover, there are two kinds of conclusions: shows states several
simultaneous propositions (essentially a big conjunction), while
obtains claims several simultaneous simultaneous contexts of
(essentially a big disjunction of eliminated parameters and
assumptions, cf. §6.6).
shows seems straightforward.
From the limited experience I have so far, it seems that obtains is about proving a conclusion that begins with an existential quantifier, as shown in this question (where the conclusion is existential and the goal is then an obtains).
Is this really the distinction between shows and obtains (universal vs existential)?
If not, what is the proper intended use of obtains?
The lemmas "shows ‹∃x. P x›" and "obtains x where ‹P x›` are very similar, but not entirely identical.
In terms of proofs, the obtain version requires to find an explicit witness (look the fact called that in such a proof). Something similar can be achieved by applying the theorem exI after the shows.
The generated lemmas are different. The obtains version generates an elimination rule instead of a quantified, because there is no existential quantifier in Pure. However, the difference rarely matters when using the theorem.

What's a cterm?

The Isabelle implementation manual says:
Types ctyp and cterm represent certified types and terms, respectively. These are abstract datatypes that guarantee that its values have passed the full well-formedness (and well-typedness) checks, relative to the declarations of type constructors, constants etc. in the background theory.
My understanding is: when I write cterm t, Isabelle checks that the term is well-built according to the theory it lives in.
The abstract types ctyp and cterm are part of the same inference kernel that is mainly responsible for thm. Thus syntactic operations on ctyp and cterm are located in the Thm module, even though theorems are not yet involved at that stage.
My understanding is: if I want to modify a cterm at the ML level I will use operations of the Thm module (where can I find that module?)
Furthermore, it looks like cterm t is an entity that converts a term at the theory level into a term at the ML level. So I inspect the code of cterm in the declaration:
ML_val ‹
  some_simproc @{context} @{cterm "some_term"}
›
and get to ml_antiquotations.ML:
ML_Antiquotation.value \<^binding>‹cterm› (Args.term >> (fn t =>
"Thm.cterm_of ML_context " ^ ML_Syntax.atomic (ML_Syntax.print_term t))) #>
This line of code is unreadable to me with my current knowledge.
I wonder if someone could give a better low-level explanation of cterm. What is the meaning of the code above? Where are the checks that cterm performs on theory terms located? Where are the operations that we can perform on cterms located (the Thm module mentioned above)?
The ‘c’ stands for ‘certified’ (or ‘checked’? Not sure). A cterm is basically a term that has undergone checking. The @{cterm …} antiquotation allows you to simply write down terms and directly get a cterm in various contexts (in this case probably the context of ML, i.e. you directly get a cterm value with the intended content). The same works for regular terms, i.e. @{term …}.
You can manipulate cterms directly using the functions from the Thm structure (which, incidentally, can be found in ~~/src/Pure/thm.ML; most of these basic ML files are in the Pure directory). However, in my experience, it is usually easier to just convert the cterm to a regular term (using Thm.term_of – unlike Thm.cterm_of, this is a very cheap operation) and then work with the term instead. Directly manipulating cterms only really makes sense if you need another cterm in the end, because re-certifying terms is fairly expensive (still, unless your code is called very often, it probably isn't really a performance problem).
In most cases, I would say the workflow is like this: If you get a cterm as an input, you turn it into a regular term. This you can easily inspect/take apart/whatever. At some point, you might have to turn it into a cterm again (e.g. because you want to instantiate some theorem with it or use it in some other way that involves the kernel) and then you just use Thm.cterm_of to do that.
I don't know exactly what the @{cterm …} antiquotation does internally, but I would imagine that at the end of the day, it just parses its parameter as an Isabelle term and then certifies it with the current context (i.e. @{context}) using something like Thm.cterm_of.
To gather my findings about cterms, I'm posting an answer.
This is what a cterm looks like in Pure:
abstype cterm =
  Cterm of {cert: Context.certificate,
    t: term, T: typ,
    maxidx: int,
    sorts: sort Ord_List.T}
(To be continued)

In GHC's STG output with -O2, what's this sequence following Str=DmdType all about?

(Misleading title: this is only one of a plethora of inter-related questions below. They may sound like a request for a full reference manual, but keep in mind that for this topic there is no reference manual other than the entirety of GHC's source code for its STG pipeline stage, plus the collective accumulated experience of others/"insiders".)
I'm exploring "transpiling" Haskell (from scratch for fun/learning, ignoring existing projects; the target languages are similarly high-level, "already fit for the STG machine", with existing GC + lambdas/function values + closures), so I'm trying to become ever more familiar with GHC's STG IR. Having repeatedly gone through the dozen or two online articles/videos of varying age, depth and detail that actually deal with the topic (plus the original paper, plus StgSyn.hs), and understanding many, perhaps most, of the basic principles, I still find -ddump-stg output baffling in various parts (I won't parse it manually but will reuse GHC API's in-memory AST later on, of course). Mostly I think I'm stuck mapping my "roughly known" concepts to the still-foreign abbreviated/codified identifiers of that IR. If you know your way around STG a bit, would you mind looking at the following mini-sample to clarify a few open questions and help further solidify my (and future searchers') grasp?
From a very simple .hs module, I have -ddump-stg'ed it twice: first (on the left) with -O0 and then (on the right) with -O2, both captured in this diff.
Walking through everything def-by-def..
Lines L_|R5-11: so in O2, testX1 and testX2 seem to be global constants/literals for the integers 4 and 5 --- O0 doesn't have them. Curious!
Is Str=DmdType something about strictness? "Strictness is of type on-demand" or some such? But then a top-level/heap-ish/"global" constant literal can't be "lazy", can it? (One of the things where I can't just casually Ctrl+F in StgSyn.hs: it's not in there! Which is odd in its own way; how come there's STG syntax that's not in StgSyn.hs?)
Caf: I have a rough idea about constant applicative forms, but Unf=OtherCon? "Other constructor" (unboxed/native Type.S#-related?)...
Line L6|R14: Surprised to still see type-class information in there (Num), is that "just info/annotation" or is this crucial for any of the built-in code-gens to set up some "dictionary" lookup machinery at runtime? (I'd sure hope by the late STG / pre-CMM stage that would be resolved and inlined already where possible at least in O2. After all GHC has also decided to type-default 4 and 5 to Integer). Generally speaking I understand STG is "untyped" other than denoting prim types, saturated cons, perhaps strings (looks like it later on at the bottom), so such "typeclass" annotations can only be.. I guess for readers to find their way around the ddump-ed *.stg. But correct me if not.
GblId probably just "global identifier" aka top-level CAF right? Arity clear.
Line L7|R18: now Str=DmdType for testX is, only in O2, followed by a freakish <S(LLC(C(S))LLLL),U(1*C1(C1(U)),A,1*C1(C1(U)),A,A,A,C(U))><L,U>! What's that, SKI calculus? ;D no seriously, LLC.. LLLL.. stack or other memory layout hints for CMM? Any idea? Must be some optimization, would like to understand which-and-how..
Line L8|R20: $dNum_sGM (left) and $dNum_sIx (right) have me a bit concerned; they don't seem to be "defined at the module level" here anywhere. A typeclass "method dispatch dictionary lookup" kind of thing? Would e.g. CMM take this together with the above Num annotation to set things up? It always appears together with the input func arg.
The whole function "body" for both left and right can be seen here essentially as "3 lets with a lambda-ish form for 3 atoms, 2 of which are statically known literal-constants" --- I suppose this is standard and to be expected in the STG IR AST? For the first of these, funnily enough we could say that O0 has "inlined the global (what is testX1 or testX2 in O2) and O2 hasn't" (making the latter much shorter as that applies to both these constant literals).
I've only ever seen Occ=Once; what are the others, and how should they be interpreted? Once, for one, isn't even in StgSyn.hs...
Now LclId a counterpart to the earlier encountered GblId. That's denoting the scope of the identifier? Could it also be anything else, in this expression context? As in: if traversing the AST I roughly know how deep I am, I can ignore this since if I'm at the top-level it must be GblId and otherwise LclId? Hm.. maybe better take what STG gives me but then I need to be sure about the semantics and possibilities.. guys, using StgSyn.hs I have the wrong source file, right? Nothing on this in there either.. (always hopeful as its comments are quite well-done)
The rest is just metadata as string constants, OK... oh wait, look at O2: there's Str=DmdType m1 and Str=DmdType m. What's the m/m1 about, another thing I don't see "defined anywhere at the module level" here? And it's not in O0...
Still going strong? Merely a bonus question (for now): tell us about srt:SRT:[] ;)
Just a few tidbits - a full answer is quite beyond my knowledge.
The type of your function is
testX :: GHC.Num.Num a => a -> a
It’s compiled to a function with two arguments: a dictionary of the Num type class, and the actual argument.
The $d… names stand for dictionaries of type class instances. The <S(LLC(C(S))LLLL),… annotations are strictness information about the function arguments. They basically say which parts of the argument will be used by your function and which won't. It looks a bit weird here because it contains information about all the members of the class instance.
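If it helps, the same mechanism can be written out by hand in an ML-family language. This is only a sketch of the idea (made-up field names and a made-up body for testX, not GHC's actual representation): a Num a constraint becomes an ordinary extra argument, and that argument is what the $dNum_… binders refer to.

(* A hypothetical, hand-rolled "Num dictionary" in OCaml. *)
type 'a num_dict = {
  add : 'a -> 'a -> 'a;
  mul : 'a -> 'a -> 'a;
  from_int : int -> 'a;
}

(* testX :: Num a => a -> a   roughly corresponds to: *)
let testX (d : 'a num_dict) (x : 'a) : 'a =
  d.add (d.mul x x) (d.from_int 4)

(* An "instance" is just a concrete dictionary value: *)
let int_num : int num_dict =
  { add = ( + ); mul = ( * ); from_int = (fun i -> i) }

let _ = testX int_num 5   (* the dictionary the compiler would insert, passed explicitly *)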
Some of this is explained here:
https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/Demand
The srt:SRT: is the "static reference table", i.e. the list of static (top-level) free variables of the expression; in your case, always [].

What is Lambda definability?

While I was reading about the lambda calculus, I came across the term "lambda definability". Can someone please explain what it is, as I couldn't find any good resources on it?
Thanks
More generally, there is a line of research seeking to characterize "lambda definability" over a broad class of languages. "Lambda definability" itself is typically relative to a semantics of a language given in terms of sets. For a type T in our language, write |T| for its interpretation as a set. Now, take an element of |T|, call it e. We want to know whether there is a term in our language, call it x : T (x of type T), such that |x| is e. If there is such a term, then we say that e is lambda-definable.
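A standard concrete example may help (folklore, not tied to any particular paper): work in the simply typed lambda calculus over a single base type o, interpreted in sets. Up to βη-equivalence, the closed terms of type (o → o) → o → o are exactly the Church numerals
λf. λx. f (f (… (f x)))    (n applications of f, for n = 0, 1, 2, …)
so the lambda-definable elements of |(o → o) → o → o| are exactly the iteration maps g ↦ g^n. As soon as |o| has more than one element, most set-theoretic functions of that type are not of this form, and hence are not lambda-definable.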
Now, in our perfect world, when we interpret a language into sets, we would like to say that the set associated with each type contains precisely the lambda-definable elements of that type and only those (completeness). It would also be nice, perhaps, to say that we can provide an algorithm to determine whether a claimed element of a set has an associated lambda term (decidability).
Now, often we don't just model into sets, but into other funny mathematical constructions. And we don't model just from the lambda calculus, but from other related systems such as Plotkin's PCF or the like. But the property under study is typically still called "lambda-definability".
After decades of research there are still many open problems and questions in this regard -- while certain lower-order terms have been shown to have decidable lambda-definability (the classic results involve terms up to second-order), many terms do not yield so easily. This paper ("The Undecidability of lambda-Definability" by Ralph Loader) gives an important such undecidability result and characterizes some consequences: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.36.6860
See the Church-Turing thesis, where the lambda-definable functions (from Church) are those that give us the "effectively computable" functions. Turing showed that the functions computable by a Turing machine are exactly the lambda-definable functions.

Pure functional bottom up tree algorithm

Say I wanted to write an algorithm that works on an immutable tree data structure and takes a list of leaves as its input. It needs to return a new tree with changes made to the old tree, going upwards from those leaves.
My problem is that there seems to be no way to do this purely functionally without reconstructing the entire tree, checking at each leaf whether it is in the list, because you always need to return a complete new tree as the result of an operation and you can't mutate the existing tree.
Is this a basic problem in functional programming that only can be avoided by using a better suited algorithm or am I missing something?
Edit: I not only want to avoid recreating the entire tree; the functional algorithm should also have the same time complexity as the mutating variant.
The most promising thing I have seen so far (in what is admittedly not a very long time...) is the Zipper data structure: it basically keeps a separate structure, a reverse path from the node to the root, and does local edits on this separate structure.
It can do multiple local edits, most of which are constant time, and write them back to the tree (reconstructing the path to the root, whose nodes are the only ones that need to change) all in one go.
The Zipper is part of the standard library in Clojure (see the heading Zippers - Functional Tree Editing).
And there's the original paper by Huet with an implementation in OCaml.
Disclaimer: I have been programming for a long time, but only started functional programming a couple of weeks ago, and had never even heard of the problem of functional editing of trees until last week, so there may very well be other solutions I'm unaware of.
Still, it looks like the Zipper does most of what one could wish for. If there are other alternatives at O(log n) or below, I'd like to hear them.
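For the curious, here is a minimal OCaml sketch in the spirit of Huet's paper (simplified names, binary trees only), showing why a local edit plus the walk back to the root only touches the nodes on that path:

(* A binary tree and a Huet-style zipper: the "path" records everything
   needed to rebuild the tree on the way back up. *)
type 'a tree = Leaf | Node of 'a tree * 'a * 'a tree

type 'a path =
  | Top
  | LeftOf of 'a * 'a tree * 'a path    (* went left: parent value, right sibling, rest *)
  | RightOf of 'a tree * 'a * 'a path   (* went right: left sibling, parent value, rest *)

type 'a loc = 'a tree * 'a path

(* Move the focus to the left child; O(1). (go_right is symmetric.) *)
let go_left ((t, p) : 'a loc) : 'a loc =
  match t with
  | Node (l, v, r) -> (l, LeftOf (v, r, p))
  | Leaf -> failwith "go_left: at a leaf"

(* Replace the focused subtree; O(1), nothing else is copied. *)
let set_focus (t' : 'a tree) ((_, p) : 'a loc) : 'a loc = (t', p)

(* Zip back up to the root, rebuilding only the nodes along the path;
   all untouched subtrees are shared with the old tree. *)
let rec root ((t, p) : 'a loc) : 'a tree =
  match p with
  | Top -> t
  | LeftOf (v, r, p') -> root (Node (t, v, r), p')
  | RightOf (l, v, p') -> root (Node (l, v, t), p')

Moving the focus between two positions costs the length of the connecting path, so a batch of local edits stays close to the complexity of the mutating version.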
You may enjoy reading
http://lorgonblog.spaces.live.com/blog/cns!701679AD17B6D310!248.entry
This depends on your functional programming language. For instance, Haskell is a lazy functional programming language: results are calculated at the last moment, when they are actually needed.
In your example the assumption is that because your function creates a new tree, the whole tree must be processed, whereas in reality the function is just passed on to the next function and only executed when necessary.
A good example of lazy evaluation is the sieve of Eratosthenes in Haskell, which creates the prime numbers by eliminating the multiples of the current number from the list of numbers. Note that the list of numbers is infinite. Taken from here:
primes :: [Integer]
primes = sieve [2..]
  where
    sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p > 0]
I recently wrote an algorithm that does exactly what you described - https://medium.com/hibob-engineering/from-list-to-immutable-hierarchy-tree-with-scala-c9e16a63cb89
It works in 2 phases:
Sort the list of nodes by their depth in the hierarchy.
Construct the tree from the bottom up.
Some caveats:
No node mutation; the result is an immutable tree.
The complexity is O(n).
Cyclic references in the incoming list are ignored.

Resources