I understand left-recursive grammars (LRG) and how to remove left recursion.
But I don't know how to remove the recursion from a grammar that combines both left and right recursion:
A -> aAb | c
The full question is to construct the LL(1) parsing table for this grammar:
S -> aABb
A -> aAb | e (epsilon)
B -> bB | c
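For reference, here is a hand-worked sketch of the FIRST and FOLLOW sets and the resulting LL(1) table for this grammar (worth double-checking before relying on it):

FIRST(S) = {a}       FOLLOW(S) = {$}
FIRST(A) = {a, ε}    FOLLOW(A) = {b, c}
FIRST(B) = {b, c}    FOLLOW(B) = {b}

+---+-----------+----------+----------+---+
|   | a         | b        | c        | $ |
+---+-----------+----------+----------+---+
| S | S -> aABb |          |          |   |
| A | A -> aAb  | A -> ε   | A -> ε   |   |
| B |           | B -> bB  | B -> c   |   |
+---+-----------+----------+----------+---+

No cell contains more than one production, so the grammar as given appears to be LL(1) already; in particular, A -> aAb | ε is not left-recursive, so no recursion removal should be needed before building the table.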
The well-known Church encoding of natural numbers can be generalized to use an arbitrary functor F. The result is the type, call it C, defined by
data C = Cfix { run :: forall r. (F r -> r) -> r }
Here and below, for simplicity, we will assume that F is a fixed, already defined functor.
It is widely known and stated that the type C is a fixpoint of the functor F, and also that C is an initial F-algebra. For example, if the functor F a is defined by
data F a b = Empty | Cons a b
then a fixpoint of F a is [a] (the list of values of type a). Also, [a] is the initial algebra. The Church encoding of lists is well known. But I could not find a rigorous proof of either of these statements (C is a fixpoint, and C is the initial algebra).
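To make the list example concrete, here is one way the encoding might be written out (a sketch only; the names ListC, runList, fromList, and toList are mine, and the definitions of F and its Functor instance are repeated so the snippet is self-contained):

{-# LANGUAGE RankNTypes #-}

-- the base functor of lists, as in the question
data F a b = Empty | Cons a b

instance Functor (F a) where
  fmap _ Empty      = Empty
  fmap f (Cons x b) = Cons x (f b)

-- C, specialized to the list functor F a
newtype ListC a = ListC { runList :: forall r. (F a r -> r) -> r }

fromList :: [a] -> ListC a
fromList xs = ListC (\g -> foldr (\x r -> g (Cons x r)) (g Empty) xs)

toList :: ListC a -> [a]
toList c = runList c alg
  where
    alg Empty      = []
    alg (Cons x r) = x : r

Here toList . fromList is the identity on finite lists by a direct computation; the other direction, fromList . toList = id, already requires the kind of parametricity argument this question is about.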
The question is, how to prove rigorously one of the two statements:
The type C is a fixpoint of F, i.e., there is a type isomorphism F C ≅ C. In other words, we need to prove that there exist two functions, fix :: F C -> C and unfix :: C -> F C, such that fix . unfix = id and unfix . fix = id.
The type C is the initial algebra of the functor F; that is, the initial object in the category of F-algebras. In other words, for any type A for which a function p :: F A -> A is given (that is, A is an F-algebra), we can find a unique function q :: C -> A that is an F-algebra morphism. This means q must satisfy the law q . fix = p . fmap q. We need to prove that, given A and p, such q exists and is unique.
These two statements are not equivalent, but proving (2) implies (1). (Lambek's theorem says that the structure map of an initial algebra is an isomorphism.)
The code of the functions fix and unfix can be written relatively easily:
fix :: F C -> C
fix fc = Cfix (\g -> g (fmap (\h -> run h g) fc))
unfix :: C -> F C
unfix c = (run c) (fmap fix)
Given a function p :: F A -> A, the code of the function q is written as
q :: C -> A
q c = (run c) p
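To see that q is just the familiar fold (catamorphism), one can specialize it to the hypothetical list encoding sketched above: choosing a concrete algebra for p makes q compute, for example, the sum of the encoded list (sumAlg and sumC are made-up names):

-- an F-algebra on Int
sumAlg :: F Int Int -> Int
sumAlg Empty      = 0
sumAlg (Cons x s) = x + s

-- q specialized to A = Int and p = sumAlg
sumC :: ListC Int -> Int
sumC c = runList c sumAlg

-- e.g. sumC (fromList [1,2,3]) evaluates to 6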
However, it seems difficult to prove directly that the functions fix, unfix, q satisfy the required properties. I was not able to find a complete proof.
Is it easier to prove that C is an initial algebra, i.e., that q is unique, than to prove that fix . unfix = id?
In the rest of this question, I will show some steps that I was able to make towards the proof that fix . unfix = id.
It is not possible to prove either (1) or (2) simply by using the given code of the functions. We need additional assumptions. Similarly to the Yoneda identity,
forall r. (A -> r) -> F r ≅ F A ,
we need to assume that the functions' code is fully parametric (no side effects, no specially chosen values or fixed types) so that the parametricity theorem can be applied. So, we need to assume that the type C contains only functions of type forall r. (F r -> r) -> r that satisfy the appropriate naturality law.
The parametricity theorem gives the following naturality law for this type signature: for any types A and B, and for any functions p :: F B -> A and f :: A -> B, the function c :: forall r. (F r -> r) -> r must satisfy the equation
c (f . p) = f (c (p . fmap f))
Using this naturality law with appropriately chosen p and f, one can show that the composition fix . unfix is a certain function of type C -> C that must be equal to \c -> (run c) fix.
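For instance, one instantiation that seems to work (my guess at the intended choice, so it is worth checking) is to apply the naturality law to run c with A = F C, B = C, f = fix, and p = id :: F C -> F C. The law then specializes to

(run c) (fix . id) = fix ((run c) (id . fmap fix))

that is, (run c) fix = fix ((run c) (fmap fix)) = fix (unfix c), which is exactly the claim that fix . unfix equals \c -> (run c) fix.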
However, further progress in the proof does not seem to be possible; it is not clear why this function must be equal to id.
Let us temporarily define the function m:
m :: (F C -> C) -> C -> C
m t c = (run c) t
Then the result I have is written as
fix . unfix = m fix
One can also show that unfix . fix = fmap (m fix).
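A sketch of that computation, using only the definitions of fix and unfix, the functor composition law, and the previous result (so it should be double-checked):

unfix (fix fc)
  = (run (fix fc)) (fmap fix)                       -- definition of unfix
  = fmap fix (fmap (\h -> (run h) (fmap fix)) fc)   -- definition of fix
  = fmap fix (fmap unfix fc)                        -- definition of unfix, again
  = fmap (fix . unfix) fc                           -- fmap f . fmap g = fmap (f . g)
  = fmap (m fix) fc                                 -- the previous result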
It remains to prove that m fix = id. Once that is proved, we will have proved that F C ≅ C.
The same naturality law of c with different choice of p and f gives the strange identity
m fix . m (m fix . fix) = m (m fix . fix)
But I do not know how to derive from this identity that m fix = id.
I have an example pedigree with a structure as shown here.
My ultimate goal is to extract the ancestry of certain people in the so-called trio format, which is a table with the columns id, mom and dad.
In my example, the result for the pedigree of the two most recent persons G and H would be
+-----+-----+-----+
| id | mom | dad |
+-----+-----+-----+
| D | A | B |
| E | C | B |
| G | D | E |
| H | F | E |
+-----+-----+-----+
The closest thing I could come up with in AQL is the following query.
LET last_generation = ['people/G', 'people/H']
FOR person IN last_generation
    FOR v, e, p IN 1..10 OUTBOUND person is_mom, is_dad
        LET role = CONTAINS(e._id, 'mom') ? 'mom' : 'dad'
        SORT e._from DESC
        RETURN DISTINCT { 'id': DOCUMENT('people', e._from)._key,
                          'parent': DOCUMENT('people', e._to)._key,
                          'role': role }
Although the result is not yet in the right format, post-processing is easy.
Now my questions are:
I am forced to use the DISTINCT keyword to ensure uniqueness of rows. However, I would rather avoid unnecessary traversal in the first place than filter afterwards. Ideally, I think I need the option uniqueEdges: "global", which is sadly not available any more. For instance, after having processed the pedigree of person G, I don't want to traverse the part of the pedigree shared between G and H (i.e., person E and its parents) again. Using uniqueVertices: "global" is not an option, because I would then miss the edge H --> E.
Is there some kind of option to know the edge collection type during a traversal rather than using the kind of cumbersome checking I do? Please note that it is not an option for me to put the sex into a property of the person (which is reasonable for most humans), because in reality I am dealing with plants, which can (usually) be mother and father at the same time.
If I try to compile the following code for adding to a finger tree, the Elm compiler waits a long time and then reports that it is out of memory.
module FingerTree exposing(..)

type Node a
    = Node2 a a
    | Node3 a a a

type Finger a
    = One a
    | Two a a
    | Three a a a
    | Four a a a a

type FingerTree a
    = Empty
    | Single a
    | Deep (Finger a) (FingerTree (Node a)) (Finger a)

fLeftAdd: a -> Finger a -> Finger a
fLeftAdd a0 finger =
    case finger of
        One a1 -> Two a0 a1
        Two a1 a2 -> Three a0 a1 a2
        Three a1 a2 a3 -> Four a0 a1 a2 a3
        Four a1 a2 a3 a4 -> finger

leftAdd: a -> FingerTree a -> FingerTree a
leftAdd a0 fingerTree =
    case fingerTree of
        Empty -> Single a0
        Single a1 -> Deep (One a0) Empty (One a1)
        Deep left middle right ->
            case left of
                Four a1 a2 a3 a4 ->
                    Deep (Two a0 a1) (leftAdd (Node3 a2 a3 a4) middle) right
                _ -> Deep (fLeftAdd left a0) middle right
My first thought was that perhaps you just can't have polymorphic recursion (a polymorphic function calling itself at a different type). However, this variant, replacing the custom "Finger" and "Node" types with lists, compiles fine:
module HackyTree exposing(..)

type HackyTree a
    = Empty
    | Single a
    | Deep (List a) (HackyTree (List a)) (List a)

leftAdd: a -> HackyTree a -> HackyTree a
leftAdd a0 tree =
    case tree of
        Empty -> Single a0
        Single a1 -> Deep [a0] Empty [a1]
        Deep left middle right ->
            case left of
                [a1, a2, a3, a4] ->
                    Deep [a0, a1] (leftAdd [a2, a3, a4] middle) right
                _ -> Deep (a0 :: left) middle right
I'd like to get the first version working. Is this a compiler bug? Is there a recommended way to refactor to avoid this?
Are you sure your last line is _ -> Deep (fLeftAdd left a0) middle right and not _ -> Deep (fLeftAdd a0 left) middle right? If I change it, everything compiles fine.
Note that the signature of fLeftAdd is fLeftAdd: a -> Finger a -> Finger a. You are pattern matching on a FingerTree a, in particular the Deep (Finger a) (FingerTree(Node a)) (Finger a) case.
With _ -> Deep (fLeftAdd left a0) middle right you're applying fLeftAdd to left, which is a Finger a, and to a0, which is an a.
You also have the constraint that the result of (fLeftAdd left a0) and right have the same type.
This means that (fLeftAdd left a0) should produce a Finger a when given a Finger a and an a as parameters, which breaks type inference since fLeftAdd: a -> Finger a -> Finger a.
This is a minimal example where the compiler doesn't go out of memory:
leftAdd: a -> FingerTree a -> FingerTree a
leftAdd a0 fingerTree =
    case fingerTree of
        Deep left middle right ->
            Deep (fLeftAdd left a0) middle right
        _ -> Single a0
I pasted it in Try Elm and I got the following error messages:
-- TYPE MISMATCH ---------------------------------------------------------------
The type annotation for leftAdd does not match its definition.

27| leftAdd: a -> FingerTree a -> FingerTree a
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The type annotation is saying:

    a -> FingerTree a -> FingerTree a

But I am inferring that the definition has this type:

    Finger ? -> FingerTree ? -> FingerTree (Finger ?)

Hint: A type annotation is too generic. You can probably just switch
to the type I inferred. These issues can be subtle though, so read
more about it.
https://github.com/elm-lang/elm-compiler/blob/0.17.0/hints/type-annotations.md

-- INFINITE TYPE ---------------------------------------------------------------
I am inferring a weird self-referential type for left

30| Deep left middle right ->
         ^^^^
Here is my best effort at writing down the type. You will see ? and ∞
for parts of the type that repeat something already printed out infinitely.

    ?

Usually staring at the type is not so helpful in these cases, so
definitely read the debugging hints for ideas on how to figure this
out:
https://github.com/elm-lang/elm-compiler/blob/0.17.0/hints/infinite-type.md
I'd recommend you try to create a simple, self-contained, compilable example and raise an issue on the compiler project.
I have come to love this syntax in OCaml
match myCompare x y with
| Greater -> ...
| Less -> ...
| Equal -> ...
However, it needs two things: a custom type, and a myCompare function that returns that custom type.
Would there be any way to do this without the steps above?
The Pervasives module seems to have compare, which returns 0 if equal, a positive int when greater, and a negative int when less. Is it possible to match on those? Conceptually, something like this (which does not compile):
match myCompare x y with
| (>0) ->
| (0) ->
| (<0) ->
I know I could just use if statements, but pattern matching is more elegant to me. Is there an easy (if perhaps not standard) way of doing this?
Is there an easy … way of doing this?
No!
The advantage of match over what switch does in other languages is that OCaml's match tells you whether you have covered all the cases (it also allows matching in depth and is compiled more efficiently, but that could also be considered an advantage of types). You would lose the advantage of being warned if you do something stupid once you start using arbitrary conditions instead of patterns. You would just end up with a construct that has the same drawbacks as a switch.
This said, actually, Yes!
You can write:
match myCompare x y with
| z when (z > 0) -> 0
| 0 -> 0
| z when (z < 0) -> 0
But using when makes you lose the advantage of being warned if you do something stupid.
The custom type type comparison = Greater | Less | Equal and pattern matching over its three constructors is the right way. It documents what myCompare does instead of letting it return an int that could also, in another language, represent a file descriptor. Type definitions do not have any run-time cost. There is no reason not to use one in this example.
You can use a library that already provides such variant-returning compare functions. This is the case for the BatOrd module of Batteries, for example.
Otherwise your best bet is to define the type and create a conversion function from integers to comparisons.
type comparison = Lt | Eq | Gt

let comp n =
  if n < 0 then Lt
  else if n > 0 then Gt
  else Eq
(* ... *)
match comp (Pervasives.compare foo bar) with
| Lt -> ...
| Gt -> ...
| Eq -> ...
Which of the three (if any; please provide an alternative) would be used to add elements to a list of items?
Fold
Map
Filter
Also: how would items be added? (appended to the end / inserted after the working item / other)
A list in functional programming is usually defined as a recursive data structure that is either a special empty value, or is composed of a value (dubbed "head") and another list (dubbed "tail"). In Haskell:
-- A* = 1 + A x A*
-- there is a builtin list type:
data [a] = [] | (a : [a])
To add an element at the head, you can use "cons": the function that takes a head and a tail, and produces the corresponding list.
-- (:) is "cons" in Haskell
(:) :: a -> [a] -> [a]
x = [1,2,3] -- this is short for (1:(2:(3:[])))
y = 0 : x -- y = [0,1,2,3]
To add an element at the end, you need to recurse down the list. You can do this easily with a fold.
consAtEnd :: a -> [a] -> [a]
consAtEnd x = foldr (:) [x]
-- this "rebuilds" the whole list with cons,
-- but uses [x] in the place of []
-- effectively adding to the end
To add elements in the middle, you need to use a similar strategy:
consAt :: Int -> a -> [a] -> [a]
consAt n x l = consAtEnd x (take n l) ++ drop n l
-- ++ is the concatenation operator: it joins two lists
-- into one.
-- take picks the first n elements of a list
-- drop picks all but the first n elements of a list
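A quick usage sketch (assuming the two helpers above, with the argument-order fixes applied):

consAtEnd 4 [1,2,3]    -- [1,2,3,4]
consAt 1 9 [1,2,3]     -- [1,9,2,3]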
Notice that except for insertions at the head, these operations traverse the whole list, which may become a performance issue.
"cons" is the low-level operation used in most functional programming languages to construct various data structure including lists. In lispy syntax it looks like this:
(cons 0 (cons 1 (cons 2 (cons 3 nil))))
Visually this is a linked list
0 -> 1 -> 2 -> 3 -> nil
Or perhaps more accurately
cons -- cons -- cons -- cons -- nil
 |       |       |       |
 0       1       2       3
Of course you could construct various "tree"-like data structures with cons as well.
A tree like structure might look something like this
(cons (cons 1 2) (cons 3 4))
I.e., visually:

        cons
       /    \
    cons    cons
    /  \    /  \
   1    2  3    4
However, most functional programming languages provide many "higher level" functions for manipulating lists.
For example, in Haskell there's
Append: (++) :: [a] -> [a] -> [a]
List comprehension: [foo c | c <- s]
Cons: (:) :: a -> [a] -> [a] (as Martinho already mentioned)
And many many more
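For instance, a few quick evaluations (as they might look in a GHCi session):

ghci> [1,2,3] ++ [4,5]
[1,2,3,4,5]
ghci> [c | c <- "hello", c /= 'l']
"heo"
ghci> 0 : [1,2,3]
[0,1,2,3]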
Just to offer a concluding remark: you wouldn't often operate on individual elements of a list in the way that you're probably thinking; that is an imperative mindset. You're more likely to copy the entire structure using a recursive function or something along those lines. The compiler/virtual machine is responsible for recognizing when the memory can be modified in place, updating pointers, and so on.