Writing order of ABNF

What order of writing is beneficial to the readability of ABNF?
In fact, writing ABNF is like traversing a potentially infinite tree of rules. So which traversal order should we use? Both have their own disadvantages: breadth-first separates a rule's definition from the definitions of the rules it references, which is not conducive to reading, while depth-first delays the definition of later rules until an entire subtree has been written out, which is not conducive to understanding. Both orders are illustrated below.
There is an example in RFC 5234, the ABNF definition of ABNF itself, but it follows neither breadth-first nor depth-first order, so we can't draw inspiration from it.
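To make the trade-off concrete, here is a hypothetical six-rule grammar written in both orders (CRLF, ALPHA, and VCHAR are core rules from RFC 5234; the rule names are made up):

; breadth-first: rules appear level by level, so "field" ends up
; separated from the "header" rule that references it
message     = header body
header      = field CRLF
body        = *VCHAR
field       = field-name ":" field-value
field-name  = 1*ALPHA
field-value = *VCHAR

; depth-first: the "header" subtree is expanded in full first, so the
; reader does not learn what "body" is until the very end
message     = header body
header      = field CRLF
field       = field-name ":" field-value
field-name  = 1*ALPHA
field-value = *VCHAR
body        = *VCHAR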

Related

how to specify the breadth-first strategy or the depth-first strategy in Gremlin

The depth-first strategy is the default in Gremlin unless specified otherwise. How do I specify the breadth-first strategy?
The Gremlin query is as follows:
g.V().repeat(__.in('edgelabel').simplePath()).times(3).path()
Currently, when using repeat() it defaults to BFS, which is unfortunate given that all other OLTP queries are DFS. There's a pull request to address that: https://github.com/apache/tinkerpop/pull/838

which is more general, recursion or iteration?

Is recursion more general than iteration?
For me, iteration means repetitive control using language constructs other than subroutine calls (like loop constructs and/or explicit gotos), whereas recursion means repetitive control obtained using subroutine calls. Which of the two is more general?
Voted to close as likely to prompt opinion-based responses; my opinion-based response is: recursion is more general because:
it's simply a use case of function calls, which a language will already have; and
recursion captures both a direct one-to-one pattern of repetition and more complicated patterns, such as tree traversal, divide and conquer, etc.
Conversely, iteration tends to be a specific language-level construct allowing only a direct linear traversal.
In so far as it is not opinion-based, the most reasonable answer is that neither recursion nor iteration is more general than the other. A language can be Turing complete with recursion but no iteration (minimal Lisps are like that) and a language can also be Turing complete with iteration but no recursion (earlier versions of Fortran didn't support recursion). Both recursion and iteration are widely used. Iteration is probably more commonly used since for every person who learns programming with something like Lisp or Haskell there are probably a dozen who learn programming with things like Java or Visual Basic -- but I don't think that "most commonly used" is a good synonym for "general".
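To see the equivalence concretely, here is a small Python sketch of the simulation in both directions: a loop rewritten as recursion, and a recursive tree walk rewritten as a loop with an explicit stack standing in for the call stack.

from dataclasses import dataclass
from typing import Optional

# Direction 1: a while-loop expressed as recursion.
def count_down_loop(n):
    while n > 0:
        n -= 1
    return "done"

def count_down_rec(n):
    return "done" if n <= 0 else count_down_rec(n - 1)

# Direction 2: a recursive tree walk expressed as iteration; the
# explicit stack plays the role of the call stack.
@dataclass
class Node:
    value: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def tree_sum_rec(node):
    if node is None:
        return 0
    return node.value + tree_sum_rec(node.left) + tree_sum_rec(node.right)

def tree_sum_loop(root):
    total, stack = 0, [root]
    while stack:
        node = stack.pop()
        if node is not None:
            total += node.value
            stack += [node.left, node.right]
    return total

t = Node(1, Node(2), Node(3, Node(4)))
assert count_down_loop(3) == count_down_rec(3) == "done"
assert tree_sum_rec(t) == tree_sum_loop(t) == 10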

How do rule-based systems deal with recursion?

I am just beginning to learn about rule-based systems, and am having trouble understanding how they can handle recursion.
Some quick background. I have studied a little programming language theory and so am pretty familiar with Context-Free Grammars and stuff like that. I have recently come across rule-based systems and am wondering how they may be related to automata and recursion.
My question is: how do rule-based systems handle recursion? In particular, could a rule-based system be used for parsing nested parentheses? (Or parsing a CFG)?
I don't have a deep understanding of rule-based systems yet, but from what it seems, rules are never organized into a hierarchy or a network. Is this true, or is there a way to represent them as a hierarchy or network?
Without the ability to create a hierarchy of rules, it seems like the rule-system memory would have to keep track of the context manually, and that context would be used to test conditions. So like:
if parentConditionX and parentParentConditionY {
    perform some action
}
But then the problem is, how do you return to a "parent" rule (like a parent function context)? If rules can't be organized/executed hierarchically, it seems you can't do recursion? That's where I'm confused.
Any ideas?
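The intuition in the question, that the working memory has to track the context manually, is exactly how a flat rule system can handle one classic recursive problem. Below is a toy Python sketch (the engine and the rule format are made up, not any particular production system): a depth counter in working memory plays the role of the call stack, which is enough to recognize nested parentheses.

# A toy forward-chaining engine: fire the first rule whose condition
# holds, update working memory, repeat until nothing fires.
def run_rules(rules, memory):
    fired = True
    while fired:
        fired = False
        for condition, action in rules:
            if condition(memory):
                action(memory)
                fired = True
                break
    return memory

# Two flat rules that recognize balanced parentheses. There is no rule
# hierarchy; all "recursion" lives in the depth fact.
rules = [
    # consume "(" and descend one level
    (lambda m: m["input"][:1] == "(",
     lambda m: m.update(input=m["input"][1:], depth=m["depth"] + 1)),
    # consume ")" and return to the "parent" level
    (lambda m: m["input"][:1] == ")" and m["depth"] > 0,
     lambda m: m.update(input=m["input"][1:], depth=m["depth"] - 1)),
]

memory = run_rules(rules, {"input": "(())", "depth": 0})
print(memory["input"] == "" and memory["depth"] == 0)  # True: balanced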

iterative or recursive to implement a binary search tree?

I'm learning Data Structures & Algorithms now.
My lecture notes implement a binary search tree using recursive methods. That is elegant, but my question is: in real-life code, should I implement a binary search tree recursively? Won't it build up a large call stack if the tree is very tall?
I understand that recursion is a key concept behind lots of data structures, but would you choose to use recursion in real-life code?
A tree is recursive by nature. Each node of a tree represents a subtree, and each child of each node represents a subtree of that subtree, so recursion is the best bet, especially in practice, where other people might have to edit and maintain your code.
Now, if depth becomes a problem for your call stack, then I'm afraid there are deeper problems with your data structure (either it's monstrously huge, or it's very unbalanced).
"I understand that recursive is a key concept to understand lots of
data structure, but will you choose to use recursive in real life
code?"
After first learning about recursion I felt the same way. However, having worked in the software industry for over a year now, I can say that I have used recursion to solve several problems. There are often times when recursion is cleaner, easier to understand/read, and just downright better. And to emphasize a point in the previous answer: a tree is a recursive data structure. IMO, there is no other way to traverse a BST :)
Many times the compiler can optimize your code to avoid creating a new stack frame for each recursive call (look up tail recursion, for example). Of course, it all depends on the algorithm and on your data structure. If the tree is reasonably balanced, I don't think a recursive algorithm should cause any problems.
It's true that recursion is intuitive and elegant, and that it produces code which is clear and concise. It's also correct that some methods, such as quicksort and DFS, are really hard to implement iteratively.
But in practice, recursive implementations are almost always slower than their iterative counterparts because of all the function calls (to really understand the performance hit, I suggest you look at how much bookkeeping the generated assembly has to do for a single function call).
The optimizations we talk about are not applicable to every recursive method in general, and many compilers and interpreters don't even support them.
So in summary: if you are writing something performance-critical, such as a data structure, stay away from recursion (or use it if you are sure that your compiler/interpreter has you covered).
PS: CLRS (Introduction to Algorithms, page 290, last line) suggests that the iterative search procedure for a BST is faster than the recursive one.
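To make the comparison concrete, here is BST search in both styles as a minimal Python sketch (the Node class is an assumption, not from the question). CPython performs no tail-call optimization, so the recursive version really does consume one stack frame per level of the tree.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    key: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def search_rec(node, key):
    # Elegant, but one stack frame per level: a degenerate (list-like)
    # tree of ~1000 nodes already approaches CPython's recursion limit.
    if node is None or node.key == key:
        return node
    return search_rec(node.left if key < node.key else node.right, key)

def search_iter(node, key):
    # The same comparisons as a loop: constant stack space at any depth.
    while node is not None and node.key != key:
        node = node.left if key < node.key else node.right
    return node

root = Node(8, Node(3, Node(1)), Node(10))
assert search_rec(root, 1) is search_iter(root, 1)
assert search_rec(root, 7) is search_iter(root, 7) is None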

Explanation of combinators for the working man

What is a combinator?
Is it "a function or definition with no free variables" (as defined on SO)?
Or how about this: according to John Hughes in his well-known paper on Arrows, "a combinator is a function which builds program fragments from program fragments", which is advantageous because "... the programmer using combinators constructs much of the desired program automatically, rather than writing every detail by hand". He goes on to say that map and filter are two common examples of such combinators.
Some combinators which match the first definition:
S
K
Y
others from To Mock a Mockingbird (I may be wrong -- I haven't read this book)
Some combinators which match the second definition:
map
filter
fold/reduce (presumably)
any of >>=, compose, fmap?
I'm not interested in the first definition -- those would not help me to write a real program (+1 if you convince me I'm wrong). Please help me understand the second definition. I think map, filter, and reduce are useful: they allow me to program at a higher level -- fewer mistakes, shorter and clearer code. Here are some of my specific questions about combinators:
What are more examples of combinators such as map, filter?
What combinators do programming languages often implement?
How can combinators help me design a better API?
How do I design effective combinators?
What are combinators similar to in a non-functional language (say, Java), or what do these languages use in place of combinators?
Update
Thanks to @C. A. McCann, I now have a somewhat better understanding of combinators. But one question is still a sticking point for me:
What is the difference between a functional program written with, and one written without, heavy use of combinators?
I suspect the answer is that the combinator-heavy version is shorter, clearer, more general, but I would appreciate a more in-depth discussion, if possible.
I'm also looking for more examples and explanations of complex combinators (i.e. more complex than fold) in common programming languages.
"I'm not interested in the first definition -- those would not help me to write a real program (+1 if you convince me I'm wrong). Please help me understand the second definition. I think map, filter, and reduce are useful: they allow me to program at a higher level -- fewer mistakes, shorter and clearer code."
The two definitions are basically the same thing. The first is based on the formal definition and the examples you give are primitive combinators--the smallest building blocks possible. They can help you to write a real program insofar as, with them, you can build more sophisticated combinators. Think of combinators like S and K as the machine language of a hypothetical "combinatory computer". Actual computers don't work that way, of course, so in practice you'll usually have higher-level operations implemented behind the scenes in other ways, but the conceptual foundation is still a useful tool for understanding the meaning of those higher-level operations.
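For example, S and K can be written directly as curried Python lambdas, and the classic first exercise is recovering the identity function as I = S K K:

# The two primitive combinators, curried so each takes one argument.
S = lambda f: lambda g: lambda x: f(x)(g(x))
K = lambda x: lambda y: x

# S K K x  =  K x (K x)  =  x, i.e. the identity function.
I = S(K)(K)
print(I(42))  # 42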
The second definition you give is more informal and about using more sophisticated combinators, in the form of higher-order functions that combine other functions in various ways. Note that if the basic building blocks are the primitive combinators above, everything built from them is a higher-order function and a combinator as well. In a language where other primitives exist, however, you have a distinction between things that are or are not functions, in which case a combinator is typically defined as a function that manipulates other functions in a general way, rather than operating on any non-function things directly.
What are more examples of combinators such as map, filter?
Far too many to list! Both of those transform a function that describes behavior on a single value into a function that describes behavior on an entire collection. You can also have functions that transform only other functions, such as composing them end-to-end, or splitting and recombining arguments. You can have combinators that turn single-step operations into recursive operations that produce or consume collections. Or all kinds of other things, really.
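For instance, here is a Python sketch of two function-only combinators; the names are conventional rather than taken from any particular library:

# compose glues two functions end-to-end; flip rearranges arguments.
def compose(f, g):
    return lambda x: f(g(x))

def flip(f):
    return lambda a, b: f(b, a)

inc_then_double = compose(lambda x: 2 * x, lambda x: x + 1)
print(inc_then_double(3))                # 8
print(flip(lambda a, b: a - b)(1, 10))   # 9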
What combinators do programming languages often implement?
That's going to vary quite a bit. There are relatively few completely generic combinators--mostly the primitive ones mentioned above--so in most cases combinators will have some awareness of the data structures being used (even if those data structures are built out of other combinators anyway), in which case there are typically a handful of "fully generic" combinators and then whatever specialized forms someone decided to provide. There are a ridiculous number of cases where (suitably generalized versions of) map, fold, and unfold are enough to do almost everything you might want.
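As a small illustration of that last claim, map and filter can both be recovered from fold (Python's functools.reduce), and unfold is the list-producing dual; a sketch, with unfold hand-rolled since Python has no built-in:

from functools import reduce

# map and filter rebuilt from fold alone.
def map_via_fold(f, xs):
    return reduce(lambda acc, x: acc + [f(x)], xs, [])

def filter_via_fold(p, xs):
    return reduce(lambda acc, x: acc + [x] if p(x) else acc, xs, [])

# unfold grows a list from a seed; step returns (value, next_seed)
# or None to stop.
def unfold(step, seed):
    out = []
    while (nxt := step(seed)) is not None:
        value, seed = nxt
        out.append(value)
    return out

print(map_via_fold(lambda x: x * x, [1, 2, 3]))         # [1, 4, 9]
print(filter_via_fold(lambda x: x % 2 == 0, range(6)))  # [0, 2, 4]
print(unfold(lambda n: (n, n * 2) if n < 100 else None, 1))
# [1, 2, 4, 8, 16, 32, 64]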
How can combinators help me design a better API?
Exactly as you said, by thinking in terms of high-level operations, and the way those interact, instead of low-level details.
Think about the popularity of "for each"-style loops over collections, which let you abstract over the details of enumerating a collection. These are just map/fold operations in most cases, and by making that a combinator (rather than built-in syntax) you can do things such as take two existing loops and directly combine them in multiple ways--nest one inside the other, do one after the other, and so on--by just applying a combinator, rather than juggling a whole bunch of code around.
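A Python sketch of that idea, with a hypothetical each combinator that reifies "a loop over a collection" as a value, so whole loops can be nested or sequenced by ordinary function application:

# each turns a per-element function into a whole-collection function.
def each(f):
    return lambda xs: [f(x) for x in xs]

double = each(lambda x: 2 * x)
nested = each(double)                                # one loop inside the other
then = lambda xs: double(each(lambda x: x + 1)(xs))  # one after the other

print(nested([[1, 2], [3]]))  # [[2, 4], [6]]
print(then([1, 2, 3]))        # [4, 6, 8]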
How do I design effective combinators?
First, think about what operations make sense on whatever data your program uses. Then think about how those operations can be meaningfully combined in generic ways, as well as how operations can be broken down into smaller pieces that are connected back together. The main thing is to work with transformations and operations, not direct actions. When you have a function that just does some complicated bit of functionality in an opaque way and only spits out some sort of pre-digested result, there's not much you can do with that. Leave the final results to the code that uses the combinators--you want things that take you from point A to point B, not things that expect to be the beginning or end of a process.
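For example, a pipeline combinator keeps every piece a transformation from point A to point B; nothing in it decides where the data ultimately comes from or goes. A sketch with made-up names:

from functools import reduce

# pipeline(f, g, h) is a new function applying f, then g, then h.
def pipeline(*steps):
    return lambda x: reduce(lambda acc, f: f(acc), steps, x)

normalize = pipeline(str.strip, str.lower)
slugify = pipeline(normalize, lambda s: s.replace(" ", "-"))

print(slugify("  Hello Combinator World  "))  # hello-combinator-world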
What are combinators similar to in a non-functional language (say, Java), or what do these languages use in place of combinators?
Ahahahaha. Funny you should ask, because objects are really higher-order thingies in the first place--they have some data, but they also carry around a bunch of operations, and quite a lot of what constitutes good OOP design boils down to "objects should usually act like combinators, not data structures".
So probably the best answer here is that instead of combinator-like things, they use classes with lots of getter and setter methods or public fields, and logic that mostly consists of doing some opaque, predefined action.
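The contrast can be sketched in Python (both classes are made up): the first object is a bag of fields whose logic lives elsewhere, while the second carries an operation and knows how to build new objects out of existing ones.

# "Data structure" style: fields plus implicit getters/setters.
class ReportData:
    def __init__(self):
        self.rows = []
        self.title = ""

# "Combinator" style: behavior that composes.
class Filter:
    def __init__(self, predicate):
        self.predicate = predicate
    def then(self, other):
        # a new Filter built from two existing Filters
        return Filter(lambda row: self.predicate(row) and other.predicate(row))
    def apply(self, rows):
        return [r for r in rows if self.predicate(r)]

paid = Filter(lambda r: r["paid"])
recent = Filter(lambda r: r["year"] >= 2020)
print(paid.then(recent).apply([{"paid": True, "year": 2021},
                               {"paid": False, "year": 2021}]))
# [{'paid': True, 'year': 2021}]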

Resources