Lex & Yacc AST homework - abstract-syntax-tree

I need to build an AST for this function and print it in preorder, but when I run my yacc file it prints "Segmentation fault (core dumped)". Please help me resolve my problem; it has been a few days and I still do not understand what to do. I checked my syntax and it is working, but for some reason when I add mknode and printtree it prints this message. Please help me.
void foo(int x, y, z; real f){
if (x>y) {
x = x + f;
}
else {
y = x + y + z;
x = f*2;
z = f;
}
}
This is my yacc file, including my functions printtree and mknode.
%{
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
typedef struct node{
char *token;
struct node *left;
struct node *right;
}node;
node *mknode(char *token, node *left, node *right);
void printtree(node *tree);
%}
%union
{
char *s;
struct node *node;
}
%token IF ELSE INT CHAR VOID REAL RETURN GUI
%left '*'
%left '+'
%token <s> NUM ID FUNC
%type <node> S start function func args args1 body if_st ret_st expr block ass calc
%type <s> type
%%
S: start {printtree($1);};
start: function {$$ = mknode("CODE",$1,NULL);};
function: func { $$ = mknode("FUNC",$1, NULL); };
func: type ID '(' args ')' '{' body '}' {$$ = mknode($2,NULL, mknode("ARGS", $4,mknode($1, NULL,$7)));};
type: INT {$$ = "INT";}
| CHAR {$$ = "CHAR";}
| VOID {$$ = "VOID";}
| REAL {$$ = "REAL";};
args: type args1 args {$$ = mknode($1,$2,$3);} | type args1 {$$ = mknode($1,$2,NULL);} ;
args1: ID {$$ = mknode($1,NULL,NULL);}
| ID ';' {$$ = mknode($1,NULL,NULL);}
| ID ',' args1 {$$ = mknode($1,NULL,$3);}
| { $$ = NULL; };
body: if_st {$$ = mknode("BODY", $1, NULL);}
| ret_st {$$ = mknode("BODY", $1, NULL);};
if_st: IF'(' expr ')' '{'block'}' ELSE '{'block'}' {$$ = mknode("IF-ELSE",mknode(NULL,$3,mknode(NULL,$6,$10)),NULL);}
| IF '(' expr ')' '{'block'}'{$$ = mknode("IF",$3,$6);} ;
expr: ID '<' ID {$$ = mknode("<",mknode($1,NULL,NULL),mknode($3,NULL,NULL));}
| ID '>' ID {$$ = mknode(">",mknode($1,NULL,NULL),mknode($3,NULL,NULL));}
| ID '=' ID {$$ = mknode("==",mknode($1,NULL,NULL),mknode($3,NULL,NULL));}
| ID '<' NUM {$$ = mknode("<",mknode($1,NULL,NULL),mknode($3,NULL,NULL));}
| ID '>' NUM {$$ = mknode(">",mknode($1,NULL,NULL),mknode($3,NULL,NULL));}
| ID '=' NUM {$$ = mknode("==",mknode($1,NULL,NULL),mknode($3,NULL,NULL));};
block: block ass {$$ = mknode(NULL,$1,$2);}
| ass {$$ = mknode(NULL,$1,NULL);};
ass: ID '=' calc ';'{$$ = mknode("=",mknode($1,NULL,NULL),mknode(NULL,$3,NULL));};
calc: ID '+' calc {$$ = mknode("+",mknode($1,NULL,NULL),mknode(NULL,$3,NULL));}
| ID '*' calc {$$ = mknode("*",mknode($1,NULL,NULL),mknode(NULL,$3,NULL));}
| NUM '+' calc {$$ = mknode("+",mknode($1,NULL,NULL),mknode(NULL,$3,NULL));}
| NUM '*' calc {$$ = mknode("*",mknode($1,NULL,NULL),mknode(NULL,$3,NULL));}
| NUM {$$ = mknode($1,NULL,NULL);}
| ID {$$ = mknode($1,NULL,NULL);};
ret_st: RETURN GUI calc GUI ';' { $$ = mknode("RET", $3, NULL); };
%%
#include "lex.yy.c"
int main()
{
return yyparse();
}
node *mknode(char *token,node *left,node *right)
{
node *newnode = (node*)malloc(sizeof(node));
char *newstr = (char*)malloc(sizeof*(token)+1);
strcpy(newstr,token);
newnode->left = left;
newnode->right = right;
newnode->token = newstr;
return newnode;
}
void printtree(node *tree)
{
printf("%s\n",tree->token);
if(tree->left)
printtree(tree->left);
if(tree->right)
printtree(tree->right);
}
int yyerror()
{
printf("ERROR\n");
return 0;
}

Most likely cause of the crash:
you call mknode in a couple of places (e.g., the block rule) with NULL as the first argument, but mknode calls strcpy with that argument as the source string, so it will crash
Other problems:
you allocate the copy with sizeof *(token), which is just 1 (the size of a single char), not the length of the string, so the strcpy overruns the buffer. You need strlen(token) + 1. Better yet, use strdup(token) to do the malloc + strcpy all in one.
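For example, a fixed mknode could look like this (just a sketch that keeps your node struct; giving the anonymous nodes a real label such as "BLOCK" instead of passing NULL would be cleaner still):
node *mknode(char *token, node *left, node *right)
{
    node *newnode = malloc(sizeof(node));
    /* strdup does the malloc(strlen(token)+1) + strcpy in one call;
       fall back to "" so a NULL token no longer crashes */
    newnode->token = strdup(token ? token : "");
    newnode->left = left;
    newnode->right = right;
    return newnode;
}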
your grammar is inflexible, with almost-duplicated rules and limited nesting. You're better off using fewer rules -- get rid of all the calc/expr stuff and just have
expr: expr '+' expr
| expr '*' expr
| expr '<' expr
| expr '>' expr
| expr '=' expr
| ID
| NUM
| '(' expr ')'
and set precedence of your operators appropriately. Similarly block and body should be combined into one non-terminal and a couple of rules.
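For example, precedence declarations along these lines near the top of the grammar (a sketch showing only the operators used above; pick the levels you actually want) keep that single expr rule unambiguous:
%left '<' '>' '='
%left '+'
%left '*'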


Finding all possible permutations of the characters in a set of strings using recursion

I have this set of (Greek) strings:
ἸἼΙἹἽ,
ῇηἤήῃὴῆἡἠἢᾖἥἣῄἦᾗᾐἧᾔᾑ,
σς,
οὸόὀὄὅὂ,
ὺὖυῦύὐὑὔΰϋὕὗὓὒῢ
I'd like to find all possible permutations of the characters in these 5 strings. For example, Ἰῇσοὺ, Ἰῇσοὖ, Ἰῇσου, etc. I know it should involve recursion since the number of strings is not fixed but I'm a beginner and I'm completely dumbfounded by recursion.
I did the following in Python and it does give me all combinations of the characters in each string. But I need the 'ἸἼΙἹἽ' to always come first, 'ῇηἤήῃὴῆἡἠἢᾖἥἣῄἦᾗᾐἧᾔᾑ' second, 'σς' third, etc.
# -*- coding: utf-8 -*-
def Gen( wd, pos, chars ):
    if pos < len( chars ):
        for c in chars:
            for l in c:
                Gen( wd + l, pos + 1, chars )
    else:
        print wd
chars = [ u'ἸἼΙἹἽ', u'ῇηἤήῃὴῆἡἠἢᾖἥἣῄἦᾗᾐἧᾔᾑ', u'σς', u'οὸόὀὄὅὂ', u'ὺὖυῦύὐὑὔΰϋὕὗὓὒῢ' ]
Gen( "", 0, chars )
Thanks for the help everybody. My mind is completely blown. Recursion! Here's what I ended up doing in Python:
# -*- coding: utf-8 -*-
s = [ u'ἸἼΙἹἽ', u'ῇηἤήῃὴῆἡἠἢᾖἥἣῄἦᾗᾐἧᾔᾑ', u'σς', u'οὸόὀὄὅὂ', u'ὺὖυῦύὐὑὔΰϋὕὗὓὒῢ' ]
results = []
def recur( wd, strings ):
    index = 0
    if index < len( strings ):
        for c in strings[ index ]:
            recur( wd + c, strings[ index + 1: ] )
    else:
        results.append( wd )
def main():
    recur( '', s )
    for r in results:
        print r.encode( 'utf-8' )
main()
You create a char array which will contain the string you want to work with
char str[] = "ABC";
then you get the length of the string with int n = strlen(str); and lastly you permutate.
You write a new function that takes the input string, the starting index, and the ending index of the string.
Check whether the starting index (int s) equals the ending index (int e); if it does, you're done. If not, loop from start (s) to end (e): swap the values, recurse, then swap again to backtrack.
An example in C (it also compiles as C++):
#include <stdio.h>
#include <string.h>
void swap(char *i, char *j)
{
char temp;
temp = *i;
*i = *j;
*j = temp;
}
void permutate(char *str, int start, int end)
{
int i;
if (start == end)
printf("%s\n", str);
else
{
for (i = start; i <= end; i++)
{
swap((str + start), (str + i));
permutate(str, start + 1, end);
swap((str + start), (str + i)); //backtrack
}
}
}
int main()
{
char str[] = "ABC";
int n = strlen(str);
permutate(str, 0, n - 1);
return 0;
}
I'm not that familiar with Python, but I've found something that might help in your case:
def comb(first_str, second_str):
    if not first_str:
        yield second_str
        return
    if not second_str:
        yield first_str
        return
    for result in comb(first_str[1:], second_str):
        yield first_str[0] + result
    for result in comb(first_str, second_str[1:]):
        yield second_str[0] + result
>>> for result in comb("ἸἼΙἹἽ", "ῇηἤήῃὴῆἡἠἢᾖἥἣῄἦᾗᾐἧᾔᾑ"):
...     print(result)
Just write down the five nested loops. In pseudocode,
for a in "ἸἼΙἹἽ"
for b in "ῇηἤήῃὴῆἡἠἢᾖἥἣῄἦᾗᾐἧᾔᾑ"
for c in "σς"
for d in "οὸόὀὄὅὂ"
for e in "ὺὖυῦύὐὑὔΰϋὕὗὓὒῢ"
emit [a,b,c,d,e]
To encode these five loops with recursion, so it's good for any number of input strings, again in pseudocode,
g(list-of-strings) =
| case list-of-strings
| of empty --> end-of-processing
| of (first-string AND rest-of-strings) -->
for each ch in first-string
DO g(rest-of-strings)
Now you only need to figure out where to hold each current first-string's character ch and how to combine them all while at the end-of-processing (basically, your two options are a global accumulator, or an argument to a function invocation).
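For instance, here is that recursion fleshed out in C (a sketch using the argument-as-accumulator option with plain ASCII demo strings; the Greek strings above are multi-byte UTF-8, so a real version would first have to split them into code points):
#include <stdio.h>
#include <string.h>

/* pick one character from each string, in order, and print every combination */
void gen(char *wd, int pos, const char **strings, int nstrings)
{
    if (pos == nstrings) {            /* end-of-processing: one choice made per string */
        printf("%s\n", wd);
        return;
    }
    size_t len = strlen(wd);
    for (const char *c = strings[pos]; *c; c++) {
        wd[len] = *c;                 /* hold the current character in the accumulator */
        wd[len + 1] = '\0';
        gen(wd, pos + 1, strings, nstrings);
    }
    wd[len] = '\0';                   /* backtrack before returning */
}

int main(void)
{
    const char *strings[] = { "Ab", "cd", "xyz" };
    char wd[8] = "";                  /* room for one char per string plus '\0' */
    gen(wd, 0, strings, 3);
    return 0;
}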

JavaCC LOOKAHEAD( AllSymbols() ): AllSymbols() not chosen even though it is the only alternative that parses correctly

The grammar, in a pinch, is as follows:
Phi ::= Phi_sub ( ("&&" | "||") Phi_sub )*
Phi_sub ::= "(" Phi ")" | ...
Psi ::= Psi_sub ( ("&&" | "||") Psi_sub )*
Psi_sub ::= "(" Psi ")" | ...
Xi ::= LOOKAHEAD( Phi ) Phi | LOOKAHEAD( Psi ) Psi
As you can see, an infinite lookahead would in general be required in the Xi production, because the parser needs to distinguish cases like:
((Phi_sub && Phi_sub) || Phi_sub) vs ((Psi_sub && Psi_sub) || Psi_sub)
i.e. an arbitrary number of leading '(' characters.
I thought that writing the lookahead as above would work, but it doesn't. For example, Phi is chosen even if Xi does not expand to Phi but does expand to Psi. This can easily be checked on a particular stream S with the debugger, just after the parser has decided, within Xi, to choose Phi and is about to call Phi: at that point the debugger shows a proper expansion to Psi, while letting the parser go ahead and call Phi causes a parse exception.
The other way of testing it is swapping Phi and Psi:
Xi ::= LOOKAHEAD( Psi ) Psi | LOOKAHEAD( Phi ) Phi
This will make the parser parse the particular S correctly, and so it seems that simply the first branch within Xi is chosen, be it the valid one or not.
I guess I got some basic assumption wrong, but I have no idea what it can be. Should the above work in general, if there are no additional factors, such as an ignored inner lookahead?
Your assumptions are not wrong. What you are trying to do should work. And it should work for the reasons you think it should work.
Here is a complete example written in JavaCC.
void Start() : {} { Xi() <EOF> }
void Xi() : {} {
LOOKAHEAD( Phi() ) Phi() { System.out.println( "Phi" ) ; }
| LOOKAHEAD( Psi() ) Psi() { System.out.println( "Psi" ) ; }
}
void Phi() : {} { Phi_sub() ( ("&&" | "||") Phi_sub() )*}
void Phi_sub() : {} { "(" Phi() ")" | "Phi_sub" }
void Psi() : {} { Psi_sub() ( ("&&" | "||") Psi_sub() )* }
void Psi_sub() : {} { "(" Psi() ")" | "Psi_sub" }
And here is some sample output:
Input is : <<Phi_sub>>
Phi
Input is : <<Psi_sub>>
Psi
Input is : <<((Phi_sub && Phi_sub) || Phi_sub)>>
Phi
Input is : <<((Psi_sub && Psi_sub) || Psi_sub)>>
Psi
The problem you are having lies in something not shown in the question.
By the way, it's a bad idea to put a lookahead specification in front of every alternative.
void X() : {} { LOOKAHEAD(Y()) Y() | LOOKAHEAD(Z()) Z() }
is roughly equivalent to
void X() : {} { LOOKAHEAD(Y()) Y() | LOOKAHEAD(Z()) Z() | fail with a stupid error message }
For example, here is another run of the above grammar
Input is : <<((Psi_sub && Psi_sub) || Phi_sub)>>
NOK.
Encountered "" at line 1, column 1.
Was expecting one of:
After all lookahead has failed, the parser is left with an empty set of expectations!
If you change Xi to
void Xi() : {} {
LOOKAHEAD( Phi() ) Phi() { System.out.println( "Phi" ) ; }
| Psi() { System.out.println( "Psi" ) ; }
}
you get a slightly better error message
Input is : <<((Psi_sub && Psi_sub) || Phi_sub)>>
NOK.
Encountered " "Phi_sub" "Phi_sub "" at line 1, column 26.
Was expecting one of:
"(" ...
"Psi_sub" ...
You can also make a custom error message
void Xi() : {} {
LOOKAHEAD( Phi() ) Phi() { System.out.println( "Phi" ) ; }
| LOOKAHEAD( Psi() ) Psi() { System.out.println( "Psi" ) ; }
| { throw new ParseException( "Expected either a Phi or a Psi at line "
+ getToken(1).beginLine
+ ", column " + getToken(1).beginColumn + "." ) ;
}
}

Left recursion in ANTLR4

I have the code below:
grammar Hello;
expr : expr ('*'|'/') expr
| expr ('+'|'-') expr
| (expr '[' expr ']')
| ID
| INT
;
ID : ID_LETTER (ID_LETTER | DIGIT)* ; // INDENTIFIER
fragment ID_LETTER : [a-z] | [A-Z] | '_' ;
fragment DIGIT : [0-9] ;
INT : DIGIT+ ;
WS : [ \t\r\n]+ -> skip;
I want to build an index expression, but I get an error:
mutually left-recursive
on the 4th line of my code. I tried:
expr '[' expr ']' // without the parentheses
The error was gone, but when I feed the input a[2] to the rule expr, I get this error:
Hello::expr:1:0: mismatched input 'a[2]' expecting {BOOL, ID, INT, FLOAT, STRING}
I cannot reproduce the error you mention.
Given the grammar:
grammar Hello;
expr
: expr ('*'|'/') expr
| expr ('+'|'-') expr
| expr '[' expr ']'
| ID
| INT
;
ID : ID_LETTER (ID_LETTER | DIGIT)* ; // INDENTIFIER
fragment ID_LETTER : [a-z] | [A-Z] | '_' ;
fragment DIGIT : [0-9] ;
INT : DIGIT+ ;
WS : [ \t\r\n]+ -> skip;
And a driver class that parses both a[2] and a [ 2 ]:
import org.antlr.v4.runtime.*;
public class Main {
public static void main(String[] args) throws Exception {
String[] tests = {"a[2]", "a [ 2 ]"};
for (String test : tests) {
HelloLexer lexer = new HelloLexer(new ANTLRInputStream(test));
HelloParser parser = new HelloParser(new CommonTokenStream(lexer));
System.out.println(parser.expr().toStringTree(parser));
}
}
}
will print (expr (expr a) [ (expr 2) ]) twice.

Date Time Parser using YACC shift reduce conflicts

I have the following YACC parser
%start Start
%token _DTP_LONG // Any number; Max upto 4 Digits.
%token _DTP_SDF // 17 Digit number indicating SDF format of Date Time
%token _DTP_EOS // end of input
%token _DTP_MONTH //Month names e.g Jan,Feb
%token _DTP_AM //Is A.M
%token _DTP_PM //Is P.M
%%
Start : DateTimeShortExpr
| DateTimeLongExpr
| SDFDateTimeExpr EOS
| DateShortExpr EOS
| DateLongExpr EOS
| MonthExpr EOS
;
DateTimeShortExpr : DateShortExpr TimeExpr EOS {;}
| DateShortExpr AMPMTimeExpr EOS {;}
;
DateTimeLongExpr : DateLongExpr TimeExpr EOS {;}
| DateLongExpr AMPMTimeExpr EOS {;}
;
DateShortExpr : Number { rc = vDateTime.SetDate ((Word) $1, 0, 0);
}
| Number Number { rc = vDateTime.SetDate ((Word) $1, (Word) $2, 0); }
| Number Number Number { rc = vDateTime.SetDate ((Word) $1, (Word) $2, (Word) $3); }
;
DateLongExpr : Number AbsMonth { // case : number greater than 31, consider as year
if ($1 > 31) {
rc = vDateTime.SetDateFunc (1, (Word) $2, (Word) $1);
}
// Number is considered as days
else {
rc = vDateTime.SetDateFunc ((Word) $1, (Word) $2, 0);
}
}
| Number AbsMonth Number {rc = vDateTime.SetDateFunc((Word) $1, (Word) $2, (Word) $3);}
;
TimeExpr : Number { rc = vDateTime.SetTime ((Word) $1, 0, 0);}
| Number Number { rc = vDateTime.SetTime ((Word) $1, (Word) $2, 0); }
| Number Number Number { rc = vDateTime.SetTime ((Word) $1, (Word) $2, (Word) $3); }
;
AMPMTimeExpr : TimeExpr _DTP_AM { rc = vDateTime.SetTo24hr(TP_AM) ; }
| TimeExpr _DTP_PM { rc = vDateTime.SetTo24hr(TP_PM) ; }
| _DTP_AM TimeExpr { rc = vDateTime.SetTo24hr(TP_AM) ; }
| _DTP_PM TimeExpr { rc = vDateTime.SetTo24hr(TP_PM) ; }
;
SDFDateTimeExpr : SDFNumber { rc = vDateTime.SetSDF ($1);}
;
MonthExpr : AbsMonth { rc = vDateTime.SetNrmMth ($1);}
| AbsMonth Number { rc = vDateTime.Set ($1,$2);}
;
Number : _DTP_LONG { $$ = $1; }
;
SDFNumber : _DTP_SDF { $$ = $1; }
;
EOS : _DTP_EOS { $$ = $1; }
;
AbsMonth : _DTP_MONTH { $$ = $1; }
;
%%
It gives three shift/reduce conflicts. How can I remove them?
The shift-reduce conflicts are inherent in the "little language" that your grammar describes. Consider the stream of input tokens
_DTP_LONG _DTP_LONG _DTP_LONG EOS
Each _DTP_LONG can be reduced as a Number. But should
Number Number Number
be reduced as a 1-number DateShortExpr followed by a 2-number TimeExpr, or as a 2-number DateShortExpr followed by a 1-number TimeExpr? The ambiguity is built in.
If possible, redesign your language by adding additional symbols to distinguish dates from times--colons to set off the parts of a time and slashes to set off the parts of a date, for instance.
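A sketch of what that could look like, assuming the lexer is extended to return ':' and '/' as literal tokens (these rules are not in the original grammar):
TimeExpr : Number ':' Number { /* hh:mm */ }
 | Number ':' Number ':' Number { /* hh:mm:ss */ }
 ;
DateShortExpr : Number '/' Number '/' Number { /* dd/mm/yyyy */ }
 ;
With the separators in place, a run of plain numbers is no longer ambiguous.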
Update
I don't think that you can use yacc/bison's precedence features here, because the tokens are indistinguishable.
You will have to rely on yacc/bison's default behavior when it encounters a shift/reduce conflict, that is, to shift rather than reduce. Consider this example in your output:
+------------------------- STATE 9 -------------------------+
+ CONFLICTS:
? sft/red (shift & new state 12, rule 11) on _DTP_LONG
+ RULES:
DateShortExpr : Number^ (rule 11)
DateShortExpr : Number^Number
DateShortExpr : Number^Number Number
DateLongExpr : Number^AbsMonth
DateLongExpr : Number^AbsMonth Number
+ ACTIONS AND GOTOS:
_DTP_LONG : shift & new state 12
_DTP_MONTH : shift & new state 13
: reduce by rule 11
Number : goto state 26
AbsMonth : goto state 27
What the parser will do is shift and move to state 12, rather than reduce by rule 11 (DateShortExpr : Number). This means that when another number follows, the parser will never interpret a single Number as a DateShortExpr; it will always shift.
And a difficulty with relying on the default behavior is that it might change as you make modifications to your grammar.
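As an aside (this is not part of the original answer): if you do decide the default shift resolution is what you want, bison and most modern yacc implementations let you record that decision with
%expect 3
in the declarations section, so the build stays quiet as long as exactly three shift/reduce conflicts remain and warns again if that number ever changes.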

ANTLRWorks - Code Generation getting stuck and not generating

I've been defining a grammar for arithmetic expressions using the following syntax. It's a subset of a more complicated whole, but the problems only occurred when I extended the grammar to include logical operations.
When I try to generate code using ANTLRWorks it takes a very long time to even start generating. I think the problem is in the rule for paren, as it includes a loop back to the start of expr. Any help in fixing this would be great.
Thanks in advance
the options used:
options {
tokenVocab = MAliceLexer;
backtrack = true;
}
code for the Grammar is below:
type returns [ASTTypeNode n]
: NUMBER {$n = new IntegerTypeNode();}
| LETTER {$n = new CharTypeNode();}
| SENTENCE { $n = new StringTypeNode();}
;
term returns [ASTNode n]
: IDENTIFIER {$n = new IdentifierNode($IDENTIFIER.text);}
| CHAR {$n = new LetterNode($CHAR.text.charAt(1));}
| INTEGER {$n = new NumberNode(Integer.parseInt( $INTEGER.text ));}
| STRING { $n = new StringNode( $STRING.text ); }
;
paren returns [ASTNode n]
:term { $n = $term.n; }
| LPAR expr RPAR { $n = $expr.n; }
;
negation returns [ASTNode n]
:BITNEG (e = negation) {$n = new BitNotNode($e.n);}
| paren {$n = $paren.n;}
;
unary returns [ASTNode n]
:MINUS (u =unary) {$n = new NegativeNode($u.n);}
| negation {$n = $negation.n;}
;
mult returns [ASTNode n]
: unary DIV (m = mult) {$n = new DivideNode($unary.n, $m.n);}
| unary MULT (m = mult) {$n = new MultiplyNode($unary.n, $m.n);}
| unary MOD (m=mult) {$n = new ModNode($unary.n, $m.n);}
| unary {$n = $unary.n;}
;
binAS returns [ASTNode n]
: mult PLUS (b=binAS) {$n = new AdditionNode($mult.n, $b.n);}
| mult MINUS (b=binAS) {$n = new SubtractionNode($mult.n, $b.n);}
| mult {$n = $mult.n;}
;
comp returns [ASTNode n]
: binAS GREATEREQ ( e =comp) {$n = new GreaterEqlNode($binAS.n, $e.n);}
|binAS GREATER ( e = comp ) {$n = new GreaterNode($binAS.n, $e.n);}
|binAS LESS ( e = comp ) {$n = new LessNode($binAS.n, $e.n);}
|binAS LESSEQ ( e = comp ) {$n = new LessEqNode($binAS.n, $e.n);}
|binAS {$n = $binAS.n;}
;
equality returns [ASTNode n]
: comp EQUAL ( e = equality) {$n = new EqualNode($comp.n, $e.n);}
|comp NOTEQUAL ( e = equality ) {$n = new NotEqualNode($comp.n, $e.n);}
|comp { $n = $comp.n; }
;
bitAnd returns [ASTNode n]
: equality BITAND (b=bitAnd) {$n = new BitAndNode($equality.n, $b.n);}
| equality {$n = $equality.n;}
;
bitXOr returns [ASTNode n]
: bitAnd BITXOR (b = bitXOr) {$n = new BitXOrNode($bitAnd.n, $b.n);}
| bitAnd {$n = $bitAnd.n;}
;
bitOr returns [ASTNode n]
: bitXOr BITOR (e =bitOr) {$n = new BitOrNode($bitXOr.n, $e.n);}
| bitXOr {$n = $bitXOr.n;}
;
logicalAnd returns [ASTNode n]
: bitOr LOGICALAND (e = logicalAnd){ $n = new LogicalAndNode( $bitOr.n, $e.n ); }
| bitOr { $n = $bitOr.n; }
;
expr returns [ASTNode n]
: logicalAnd LOGICALOR ( e = expr ) { $n = new LogicalOrNode( $logicalAnd.n, $e.n); }
| IDENTIFIER INC {$n = new IncrementNode(new IdentifierNode($IDENTIFIER.text));}
| IDENTIFIER DEC {$n = new DecrementNode(new IdentifierNode($IDENTIFIER.text));}
| logicalAnd {$n = $logicalAnd.n;}
;
This seems to be a bug introduced in version 3.3 (and upwards). ANTLR 3.2 produces the following error when generating a parser from your grammar:
warning(205): Test.g:31:2: ANTLR could not analyze this decision in rule equality; often this is because of recursive rule references visible from the left edge of alternatives. ANTLR will re-analyze the decision with a fixed lookahead of k=1. Consider using "options {k=1;}" for that decision and possibly adding a syntactic predicate.
error(10): internal error: org.antlr.tool.Grammar.createLookaheadDFA(Grammar.java:1279): could not even do k=1 for decision 6; reason: timed out (>1000ms)
It looks to me like you've used an LR grammar as the basis for your ANTLR grammar. Consider starting over, but with LL parsing in mind. Have a look at the following Q&A to see how to parse expressions using ANTLR: ANTLR: Is there a simple example?
Also, I see you're using some tokens that look an awful lot like each other: LETTER, CHAR, SENTENCE and IDENTIFIER. You must realize that if all of them may start with, for example, a lower case letter, only one of the rules is matched (the one that matches most, or in case of a tie, the one defined first in the lexer grammar). The lexer does not produce tokens based on what the parser "asks" for, it creates tokens independently from the parser.
Finally, for a simple expression parser, you really don't need predicates (and backtrack=true causes ANTLR to automatically insert predicates in front of all parser rules!).
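For example (just a sketch reusing the token names above, with the AST-building actions left out), the binary rules can be left-factored so that no backtracking is needed:
mult : unary ((MULT | DIV | MOD) unary)* ;
binAS : mult ((PLUS | MINUS) mult)* ;
comp : binAS ((GREATEREQ | GREATER | LESS | LESSEQ) binAS)* ;
and so on up the chain, which also removes the need for backtrack = true.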
