I would like to create a grammar for parsing a toy formula language that resembles S-expression syntax.
I read through the "Getting Started with PyParsing" book and it included a very nice section that sort of covers a similar grammar.
Two examples of data to parse are:
sum(5,10,avg(15,20))+10
stdev(5,10)*2
Now, I have come up with a grammar that sort of parses the formula, but it ignores function expansion and operator precedence.
What would be the best practice for continuing with it: should I add parseActions for words that match oneOf the function names (sum, avg, ...)? If I build a nested list, I could do a depth-first walk of the parse results and evaluate the functions?
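To make the depth-first idea concrete, here is a minimal sketch of what I mean, assuming the parse results can be turned into a nested Python list (the list shape, the FUNCTIONS table and the evaluate helper below are just my own illustration, not actual pyparsing output, and the trailing +10 part of the formula is left out):

from statistics import mean, stdev

# Hypothetical mapping of formula names to Python callables.
FUNCTIONS = {
    "sum": lambda *args: sum(args),
    "avg": lambda *args: mean(args),
    "stdev": lambda *args: stdev(args),
}

def evaluate(node):
    # Depth-first: evaluate the arguments before applying the function.
    if isinstance(node, list):
        name, *args = node
        return FUNCTIONS[name](*(evaluate(arg) for arg in args))
    return node  # a plain number

print(evaluate(["sum", 5, 10, ["avg", 15, 20]]))  # -> 32.5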
It's a little difficult to advise without seeing more of your code. Still, from what you describe, it sounds like you are mostly tokenizing: recognizing the various bits of punctuation and distinguishing variable names from numeric constants from algebraic operators. nestedExpr will impart some structure, but only basic parenthetical nesting - this still leaves operator precedence handling for your post-parsing work.
If you are learning about parsing infix notation, there is a succession of pyparsing examples to look through and study (at the pyparsing wiki Examples page). Start with fourFn.py, which is actually a five function infix notation parser. Look through its BNF() method, and get an understanding of how the recursive definitions work (don't worry about the pushFirst parse actions just yet). By structuring the parser this way, operator precedence gets built right into the parsed results. If you parse 4 + 2 * 3, a mere tokenizer just gives you ['4','+','2','*','3'], and then you have to figure out how to do the 2*3 before adding the 4 to get 10, and not just brute force add 4 and 2, then multiply by 3 (which gives the wrong answer of 18). The parser in fourFn.py will give you ['4','+',['2','*','3']], which is enough structure for you to know to evaluate the 2*3 part before adding it to 4.
This whole concept of parsing infix notation with precedence of operations is so common that I wrote a helper function that does most of the hard work, called operatorPrecedence. You can see how this works in the example simpleArith.py, and then move on to eval_arith.py to see the extensions needed to create an evaluator of the parsed structure. simpleBool.py is another good example showing precedence for logical terms AND'ed and OR'ed together.
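For a quick taste of what that helper buys you, here is a minimal sketch (not taken from the examples above). It assumes a reasonably recent pyparsing, where the helper is spelled infixNotation (older releases call it operatorPrecedence), and uses a plain number as the only operand:

from pyparsing import infixNotation, opAssoc, pyparsing_common

operand = pyparsing_common.number

expr = infixNotation(operand, [
    ("*", 2, opAssoc.LEFT),   # listed first, so it binds tighter
    ("+", 2, opAssoc.LEFT),   # lower precedence
])

print(expr.parseString("4 + 2 * 3").asList())
# -> [[4, '+', [2, '*', 3]]]

The precedence structure comes straight out of the parser, so an evaluator only has to walk the nesting.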
Finally, since you are doing something Excel-like, take a look at excelExpr.py. It tries to handle some of the crazy corner cases you get when trying to evaluate Excel cell references, including references to other sheets and other workbooks.
Good luck!
Related
I'd like something to convert pretty basic math expressions, having nested parentheses and fractions, to LaTeX notation. Like mathquill, but a function (or even the building blocks of one).
There seem to be some Lua and Haskell solutions in a pandoc/Rmarkdown context, but I can't use those, because (a) I'm scared of real languages, and (b) I'm generating PNGs (via webtex) to be featured in a flextable table, outside of a rendered document.
I'm inexperienced with regular expressions, so I don't know how to leverage something like this, but I'd appreciate any pointers if that seems like a productive path.
"Write a parser" is something best left to others, at least in my case!
Example expression below. Just a few levels of nesting, no big deal. I can bound it at, say, 5 levels, if that helps. And the input is parseable as an R expression—I can guarantee matched parentheses, for example.
(y0 - (y0/((1-y0)*exp(B*dx)+y0)))*Pop
Here's what I'd want in the case above. The \cdot's are sugar; I can handle those.
\left(y0-\frac{y0}{\left(1-y0\right)\cdot\exp\left(B\cdot dx\right)+y0}\right)\cdot Pop
I was reading up on the documentation on macros and ran into the following under the 'Hold up: why macros?' section. The reasoning given for using macros is as follows:
Macros are necessary because they execute when code is parsed,
therefore, macros allow the programmer to generate and include
fragments of customized code before the full program is run
This leads me to wonder why someone would want to use "generate and include fragments of customized code before the full program is run". Can someone provide context as to why this would be beneficial and/or other good use cases for macros?
Let me give you my view on macros.
A macro basically is a code -> code function. It takes code (a Julia expression) as input and spits out code (a different Julia expression).
Why is this useful? It has multiple purposes:
compile-time copy-and-paste: You don't have to write the same piece of code multiple times but instead can define a short macro that writes it for you wherever you put it. (example)
domain-specific language (DSL): You can create special syntax that, after the macro's code -> code transform, is replaced by pure Julia constructs. This is used in many packages to define special syntax, for example here and here.
code generation: Imagine you want to write a really long piece of code which, although long, is very simple because it has some kind of pattern that repeats itself rather trivially. Writing that code by hand can be a pain (or even practically impossible). A macro can programmatically generate the code for you. One example is for-loop unrolling (see here and here). But even the @time macro isn't doing much more than just putting a bunch of Base.time_ns() function calls around the provided Julia expression.
special string parsing: If you type the literal 3.2 in Julia it will be parsed and interpreted as a Float64. Now, imagine you want to supply a number literal that goes beyond Float64 precision but would fit into a BigFloat. Typing big(3.123124812498124812498) won't work, because the literal number is first interpreted as a Float64 and then handed to the big function. Instead you need a way to tell Julia at parse time that this should become a BigFloat. This is handled by the @big_str macro, which (for convenience) can also be written as big"3.2". The latter is just syntactic sugar.
There might be many more applications of macros, but those are the most important to me.
Let me end by referencing Steven G. Johnson's great talk at JuliaCon 2019:
Most of the time, don't do metaprogramming :)
In CL, we have many operators to check for equality that depend on the data type: =, string-equal, char=, then equal, eql and whatnot, and so on for other data types, and the same goes for comparison operators (edit: don't forget to answer about these too, please :) do we have generic <, >, etc.? Can we make them work for our own objects?)
However, the language has mechanisms to make them generic, for example generic functions (defgeneric, defmethod) as described in Practical Common Lisp. I can easily imagine one == operator that works on integers, strings and characters, at least!
There has been work in that direction: https://common-lisp.net/project/cdr/document/8/cleqcmp.html
I see this as a major frustration, and even a wall, for beginners (of which I am one), especially those of us coming from other languages like Python, where we use one equality operator (==) for every equality check (with the help of methods on objects to make it work on custom types).
I read a blog post today (not a monad tutorial; a great series) pointing this out. The author moved to Clojure, for other reasons too of course, where there is one (or two?) equality operator.
So why is it so? Are there any good reasons? I can't even find a third-party library, not even in CL21. edit: CL21 has this sort of generic operators, of course.
In other SO questions I read about performance. First, this won't apply to the little code I'll write, so I don't care; and if you think it does, do you have figures to make your point?
edit: despite the tone of the answers, it looks like there are none ;) We discuss this in the comments.
Kent Pitman has written an interesting article that tackles this subject: The Best of Intentions: EQUAL Rights — and Wrongs — in Lisp.
And also note that EQUAL does work on integers, strings and characters. EQUALP also works for lists, vectors, hash tables and other Common Lisp types, but objects… for some definition of "work". The note at the end of the EQUALP page has a nice answer to your question:
Object equality is not a concept for which there is a uniquely determined correct algorithm. The appropriateness of an equality predicate can be judged only in the context of the needs of some particular program. Although these functions take any type of argument and their names sound very generic, equal and equalp are not appropriate for every application.
Specifically note that there is a trick in my last “works” definition.
A newer library adds generic interfaces to standard Common Lisp functions: https://github.com/alex-gutev/generic-cl/
GENERIC-CL provides a generic function wrapper over various functions in the Common Lisp standard, such as equality predicates and sequence operations. The goal of the wrapper is to provide a standard interface to common operations, such as testing for the equality of two objects, which is extensible to user-defined types.
It does this for equality, comparison, arithmetic, objects, iterators, sequences, hash-tables, math functions,…
So one can define their own + operator, for example.
Yes, we have! eq works with all values and it works all the time. It does not depend on the data type at all. It's like the is operator in Python, and it must be exactly what you are looking for. All the other ones agree with eq when it returns t; however, they tend to be t for totally different values that have various levels of similarity.
(defparameter *a* "this is a string")
(defparameter *b* *a*)
(defparameter *c* "this is a string")
(defparameter *d* "THIS IS A STRING")
All of these are equalp to one another, since they have the same meaning. equalp is perhaps the sloppiest of the equal functions. I don't think 2 and 2.0 are the same, but equalp does. In my mind 2 is 2, while 2.0 is somewhere between 1.95 and 2.04; you see, they are not the same.
equal understands me. (equal *c* *d*) is definitely nil, and that is good. However, it returns t for (equal *a* *c*) as well. Both are arrays of characters and each character is the same value; however, the two strings are not the same object, they just happen to look the same.
Notice I'm using strings for every single one of them. We have four equality functions that tell you whether two values have something in common, but only eq tells you whether they are the same object.
None of these are type-specific. They work on all types; however, they are not generic functions, since they were around long before generics were added to the language. You could perhaps make 3-4 generic equality functions, but would they really be any better than the ones we already have?
Fortunately, CL21 introduces (more) generic operators; in particular, for sequences it defines length, append, setf, first, rest, subseq, replace, take, drop, fill, take-while, drop-while, last, butlast, find-if, search, remove-if, delete-if, reverse, reduce, sort, split, join, remove-duplicates, every, some, map, sum (and some more). Unfortunately the doc isn't great; it's best to look at the sources. Those should work at least for strings, lists and vectors, and you can define methods on the new abstract-sequence.
see also
https://github.com/cl21/cl21/wiki
https://lispcookbook.github.io/cl-cookbook/cl21.html
In various web pages, I see references to jq functions with a slash and a number following them. For example:
walk/1
I found the above notation used on a stackoverflow page.
I could not find a definition of this notation on the jq Manual page. I'm guessing it might indicate the walk function that takes 1 argument. If so, I wonder why a more meaningful notation isn't used, such as the signatures used in C++, Java, and other languages:
<function>(type1, type2, ..., typeN)
Can anyone confirm what the notation <function>/<number> means? Are other variants used?
The notation name/arity gives the name and arity of the function. "arity" is the number of arguments (i.e., parameters), so for example explode/0 means you'd just write explode without any arguments, and map/1 means you'd write something like map(f).
The fact that 0-arity functions are invoked by name, without any parentheses, makes the notation especially handy. The fact that a function name can have multiple definitions at any one time (each definition having a distinct arity) makes it easy to distinguish between them.
This notation is not used in jq programs, but it is used in the output of the (new) built-in filter, builtins/0.
By contrast, in some other programming languages, it (or some close variant, e.g. module:name/arity in Erlang) is also part of the language.
Why?
There are various difficulties which typically arise when attempting to graft a notation that's suitable for languages in which method-dispatch is based on types onto ones in which dispatch is based solely on arity.
The first, as already noted, has to do with 0-arity functions. This is especially problematic for jq as 0-arity functions are invoked in jq without parentheses.
The second is that, in general, jq functions do not require their arguments to be any one jq type. Having to write something like nth(string+number) rather than just nth/1 would be tedious at best.
This is why the manual strenuously avoids using "name(type)"-style notation. Thus we see, for example, startswith(str), rather than startswith(string). That is, the parameter names in the documentation are clearly just names, though of course they often give strong type hints.
If you're wondering why the 'name/arity' convention isn't documented in the manual, it's probably largely because the documentation was mostly written before jq supported multi-arity functions.
In summary -- any notational scheme can be made to work, but name/arity is (1) concise; (2) precise in the jq context; (3) easy-to-learn; and (4) widely in use for arity-oriented languages, at least on this planet.
Are there limitations to Rebol's parse function power? Would it be capable of parsing the whole CSS2 / CSS3 spec, or would it run into theoretical impossibilities in forming some of the rules?
Update after HostileFork's answer: I mean, with regexps I think it would be rather impossible; is parse much more powerful?
If yes, does that mean it would be possible to build a browser in Rebol VID compatible with HTML5?
Your question of "are there limits" is slippery. I'll try and give you "the answer" instead of just "yeah, sure"...which would be more expedient albeit not too educational. :)
Consider the following snippet. It captures the parser position into x, and then runs what's in parentheses in the DO dialect. That code re-sets x to the tail of the input if the css-parser function succeeds, or to the head of the input if the function fails. Finally, it sets the parse position to the current x. And as we know, PARSE returns true only if we're at the end of the input series when the rules finish...
parse my-css [x: (x: either css-parser x [tail x] [head x]) :x]
That's valid parse dialect code AND it returns true if (and only if) the css-parser function succeeds. Therefore, if you can write a css parser in Rebol at all, you can write it "in the parse dialect".
(This leads to the question of whether it's possible to solve a given computing problem in a Rebol function. Thankfully, computer scientists don't have to re-answer that question each time a new language pops up. You can compute anything that can be computed by a Turing machine, and nothing that can't...and check out Alan Turing's own words, in layman's terms. CSS parsing isn't precisely the halting problem, so yeah... it can be done.)
I'll take a stab at re-framing your question:
"Is it possible to write a block of rules (which do not use PAREN!, SET-WORD!, or GET-WORD! constructs) that can be passed into the PARSE function and return TRUE on any valid CSS file and FALSE on any malformed one?"
The formal specification of what makes for good or bad CSS is put out by the W3C:
http://www.w3.org/TR/CSS2/grammar.html
But notice that even there, it's not all cut-and-dried. Their "formal" specification of color constants can't rule out #abcd; they had to write about it in the comments, in English:
/*
* There is a constraint on the color that it must
* have either 3 or 6 hex-digits (i.e., [0-9a-fA-F])
* after the "#"; e.g., "#000" is OK, but "#abcd" is not.
*/
hexcolor
: HASH S*
;
This leads us to ask if we would forgive Rebol for not being able to do that kind of recognition after we've tied PARSE's hands by taking away PAREN!/GET-WORD!/SET-WORD! (I just want to point out this kind of issue in light of your question).
As part of the Rebol 3 parse project there's been a write-up of the Theory of Parse...
The PARSE dialect is an enhanced member of the family of Top-down parsing languages (TDPL family) including the Top-down parsing language (TDPL), the Generalized top-down parsing language (GTDPL) and the Parsing expression grammar (PEG) and uses the same "ordered choice" parsing method as the other members of the family.
As pointed out in the link above, being a member of this class makes Rebol's PARSE strictly more powerful than both regular expressions and LL parsers. I assume it's more powerful than LL(k) and LL* parsers as well, but it has been a while since I've studied this stuff and I wouldn't bet my life on it. :)
You don't really need to understand what all that means in order to make use of it to answer your "can it be done" question. Since people have claimed to parse CSS with ANTLR, and ANTLR is an LL* parser, then I'd say Rebol can do it. PAREN! is the ace-in-the-hole which lets you do "anything" if you hit a wall, but it's a slippery slope to start using it too carelessly.
Should be perfectly capable of parsing the spec, should you have motive and patience to write the rules. It'd be a bit more involved than, say, a JSON parser, but it'd be the same idea.