Pyparsing indentedBlock - pyparsing

I'm trying to use the "indentedBlock" based on the following code:
import pyparsing as pp
import pprint

class Parser(object):
    def __init__(self):
        indentStack = [1]
        stmt = pp.Forward()

        start = pp.Word(pp.alphanums + "/" + "\." + "-" + ":" + "!" + "*" + " ")
        funcDecl = (pp.OneOrMore(start) + pp.restOfLine)
        func_body = pp.indentedBlock(stmt, indentStack)
        funcDef = pp.Group(funcDecl + func_body)

        stmt << (funcDef)
        self.__parser = pp.OneOrMore(stmt)

    def parse(self, line):
        try:
            res = self.__parser.parseString(line)
            pprint.pprint(res)
            print("done")
        except pp.ParseException as x:
            print(x)
After applying this, I got the following error:
Expected W:(ABCD...) (at char 226), (line:6, col:37)
The main looks like this:
if __name__ == "__main__":
parser = Parser()
test = """first level config parameter 1-n
second level config parameter 1-n
thirt level config parameter 1-n
second level config parameter 1-n
thirt level config parameter 1-n
first level config parameter 1-n"""
print(test)
parser.parse(test)
Any ideas what went wrong?

Whitespace-sensitive parsing is always a challenge with pyparsing, given its default behavior of skipping over whitespace. In addition, defining an expression as a Word whose valid characters include ' ' is usually asking for trouble. But since this is being wrapped in an indentedBlock (which will take care of looking for newlines in the right places), you might get away with it here.
I expanded your test string to include some blank lines and some multiple line blocks, and came up with this:
import pyparsing as pp

test = """\
first level config parameter 1-n
    second level config parameter 1-n
        thirt level config parameter 1-n
        thirt level config parameter n+1-m
    second level config parameter 1-n
        thirt level config parameter 1-n
first level config parameter 1-n"""

indent_stack = [1]
func_body = pp.Forward()

# what you had as `start` looks like pretty much just any line of characters
stmt = pp.Word(pp.printables + " ")

func_body <<= pp.Group(stmt + pp.indentedBlock(func_body | stmt, indent_stack))

# parse your sample text and output results with pprint
pp.OneOrMore(func_body | pp.Group(stmt)).parseString(test).pprint()
Gives:
[['first level config parameter 1-n',
  [[['second level config parameter 1-n',
     [['thirt level config parameter 1-n'],
      ['thirt level config parameter n+1-m']]]],
   [['second level config parameter 1-n',
     [['thirt level config parameter 1-n']]]]]],
 ['first level config parameter 1-n']]
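If you want to walk the hierarchy as plain nested Python lists rather than ParseResults, asList() does the conversion. A minimal sketch, assuming the stmt/func_body grammar and the test string defined above:
result = pp.OneOrMore(func_body | pp.Group(stmt)).parseString(test)
nested = result.asList()   # plain nested lists instead of ParseResults
print(nested[0][0])        # -> 'first level config parameter 1-n'
print(nested[-1])          # -> ['first level config parameter 1-n']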

Related

Migrating to Qt6/PyQt6: what are all the deprecated short-form names in Qt5?

I'm trying to migrate a codebase from PyQt5 to PyQt6. I read in this article (see https://www.pythonguis.com/faq/pyqt5-vs-pyqt6/) that all enum members must be named using their fully qualified names. The article gives this example:
# PyQt5
widget = QCheckBox("This is a checkbox")
widget.setCheckState(Qt.Checked)
# PyQt6
widget = QCheckBox("This is a checkbox")
widget.setCheckState(Qt.CheckState.Checked)
Then the article continues:
"There are too many updated values to mention them all here. But if you're converting a codebase you can usually just search online for the short-form and the longer form will be in the results."
I get the point. This quote basically says something along the lines of:
"If the Python interpreter runs into an error, and the error turns out to be a short-form enum, you'll likely find the solution online."
I get that. But this is not how I want to migrate the codebase. I want a full list of all the short-form enums and then perform a global search-and-replace for each.
Where can I find such a list?
I wrote a script to extract all the short-form and corresponding fully qualified enum names from the PyQt6 installation. It then does the conversions automatically:
# -*- coding: utf-8 -*-
# ================================================================================================ #
#                                      ENUM CONVERTER TOOL                                         #
# ================================================================================================ #
from typing import *
import os, argparse, inspect, re
q = "'"
help_text = '''
Copyright (c) 2022 Kristof Mulier
MIT licensed, see bottom

ENUM CONVERTER TOOL
===================
The script starts from the toplevel directory (assuming that you put this file in that directory)
and crawls through all the files and folders. In each file, it searches for old-style enums to
convert them into fully qualified names.

HOW TO USE
==========
Fill in the path to your PyQt6 installation folder. See line 57:

    pyqt6_folderpath = 'C:/Python39/Lib/site-packages/PyQt6'

Place this script in the toplevel directory of your project. Open a terminal, navigate to the
directory and invoke this script:

    $ python enum_converter_tool.py

WARNING
=======
This script modifies the files in your project! Make sure to backup your project before you put this
file inside. Also, you might first want to do a dry run:

    $ python enum_converter_tool.py --dry_run

FEATURES
========
You can invoke this script in the following ways:

    $ python enum_converter_tool.py            No parameters. The script simply goes through
                                               all the files and makes the replacements.

    $ python enum_converter_tool.py --dry_run  Dry run mode. The script won't do any replace-
                                               ments, but prints out what it could replace.

    $ python enum_converter_tool.py --show     Print the dictionary this script creates to
                                               convert the old-style enums into new-style.

    $ python enum_converter_tool.py --help     Show this help info
'''
# IMPORTANT: Point at the folder where PyQt6 stub files are located. This folder will be examined to
# fill the 'enum_dict'.
pyqt6_folderpath = 'C:/Python39/Lib/site-packages/PyQt6'
# Figure out where the toplevel directory is located. We assume that this converter tool is located
# in that directory. An os.walk() operation starts from this toplevel directory to find and process
# all files.
toplevel_directory = os.path.realpath(
    os.path.dirname(
        os.path.realpath(
            inspect.getfile(
                inspect.currentframe()
            )
        )
    )
).replace('\\', '/')
# Figure out the name of this script. It will be used later on to exclude oneself from the replace-
# ments.
script_name = os.path.realpath(
    inspect.getfile(inspect.currentframe())
).replace('\\', '/').split('/')[-1]
# Create the dictionary that will be filled with enums
enum_dict:Dict[str, str] = {}
def fill_enum_dict(filepath:str) -> None:
    '''
    Parse the given stub file to extract the enums and flags. Each one is inside a class, possibly a
    nested one. For example:

        ---------------------------------------------------------------------
        | class Qt(PyQt6.sip.simplewrapper):                                 |
        |     class HighDpiScaleFactorRoundingPolicy(enum.Enum):             |
        |         Round = ... # type: Qt.HighDpiScaleFactorRoundingPolicy    |
        ---------------------------------------------------------------------

    The enum 'Round' is from class 'HighDpiScaleFactorRoundingPolicy' which is in turn from class
    'Qt'. The old reference style would then be:

        > Qt.Round

    The new style (fully qualified name) would be:

        > Qt.HighDpiScaleFactorRoundingPolicy.Round

    The aim of this function is to fill the 'enum_dict' with an entry like:

        enum_dict = {
            'Qt.Round' : 'Qt.HighDpiScaleFactorRoundingPolicy.Round'
        }
    '''
    content:str = ''
    with open(filepath, 'r', encoding='utf-8', newline='\n', errors='replace') as f:
        content = f.read()
    p = re.compile(r'(\w+)\s+=\s+\.\.\.\s+#\s*type:\s*([\w.]+)')
    for m in p.finditer(content):
        # Observe the enum's name, eg. 'Round'
        enum_name = m.group(1)
        # Figure out in which classes it is
        class_list = m.group(2).split('.')
        # If it belongs to just one class (no nesting), there is no point in continuing
        if len(class_list) == 1:
            continue
        # Extract the old and new enum's name
        old_enum = f'{class_list[0]}.{enum_name}'
        new_enum = ''
        for class_name in class_list:
            new_enum += f'{class_name}.'
            continue
        new_enum += enum_name
        # Add them to the 'enum_dict'
        enum_dict[old_enum] = new_enum
        continue
    return
def show_help() -> None:
    '''
    Print help info and quit.
    '''
    print(help_text)
    return
def convert_enums_in_file(filepath:str, dry_run:bool) -> None:
    '''
    Convert the enums in the given file.
    '''
    filename:str = filepath.split('/')[-1]
    # Ignore the file in some cases
    if any(filename == fname for fname in (script_name, )):
        return
    # Read the content
    content:str = ''
    with open(filepath, 'r', encoding='utf-8', newline='\n', errors='replace') as f:
        content = f.read()
    # Loop over all the keys in the 'enum_dict'. Perform a replacement in the 'content' for each of
    # them.
    for k, v in enum_dict.items():
        if k not in content:
            continue
        # Compile a regex pattern that only looks for the old enum (represented by the key of the
        # 'enum_dict') if it is surrounded by bounds. What we want to avoid is a situation like
        # this:
        #     k = 'Qt.Window'
        #     k found in 'qt.Qt.WindowType.Window'
        # In the situation above, k is found in 'qt.Qt.WindowType.Window' such that a replacement
        # will take place there, messing up the code! By surrounding k with bounds in the regex pat-
        # tern, this won't happen.
        p = re.compile(fr'\b{k}\b')
        # Substitute all occurences of k (key) in 'content' with v (value). The 'subn()' method re-
        # turns a tuple (new_string, number_of_subs_made).
        new_content, n = p.subn(v, content)
        if n == 0:
            assert new_content == content
            continue
        assert new_content != content
        print(f'{q}{filename}{q}: Replace {q}{k}{q} => {q}{v}{q} ({n})')
        content = new_content
        continue
    if dry_run:
        return
    with open(filepath, 'w', encoding='utf-8', newline='\n', errors='replace') as f:
        f.write(content)
    return
def convert_all(dry_run:bool) -> None:
    '''
    Search and replace all enums.
    '''
    for root, dirs, files in os.walk(toplevel_directory):
        for f in files:
            if not f.endswith('.py'):
                continue
            filepath = os.path.join(root, f).replace('\\', '/')
            convert_enums_in_file(filepath, dry_run)
            continue
        continue
    return
if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description = 'Convert enums to fully-qualified names',
        add_help    = False,
    )
    parser.add_argument('-h', '--help'    , action='store_true')
    parser.add_argument('-d', '--dry_run' , action='store_true')
    parser.add_argument('-s', '--show'    , action='store_true')
    args = parser.parse_args()
    if args.help:
        show_help()
    else:
        #& Check if 'pyqt6_folderpath' exists
        if not os.path.exists(pyqt6_folderpath):
            print(
                f'\nERROR:\n'
                f'Folder {q}{pyqt6_folderpath}{q} could not be found. Make sure that variable '
                f'{q}pyqt6_folderpath{q} from line 57 points to the PyQt6 installation folder.\n'
            )
        else:
            #& Fill the 'enum_dict'
            type_hint_files = [
                os.path.join(pyqt6_folderpath, _filename)
                for _filename in os.listdir(pyqt6_folderpath)
                if _filename.endswith('.pyi')
            ]
            for _filepath in type_hint_files:
                fill_enum_dict(_filepath)
                continue
            #& Perform requested action
            if args.show:
                import pprint
                pprint.pprint(enum_dict)
            elif args.dry_run:
                print('\nDRY RUN\n')
                convert_all(dry_run=True)
            else:
                convert_all(dry_run=False)
    print('\nQuit enum converter tool\n')
# MIT LICENSE
# ===========
# Copyright (c) 2022 Kristof Mulier
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
# associated documentation files (the "Software"), to deal in the Software without restriction, in-
# cluding without limitation the rights to use, copy, modify, merge, publish, distribute, sublicen-
# se, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to
# do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or substan-
# tial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT
# NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRIN-
# GEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Make sure you back up your Python project. Then place this file in the toplevel directory of the project. Modify line 57 (!) so that it points to your PyQt6 installation folder.
First run the script with the --dry_run flag to make sure you agree with the replacements. Then run it without any flags.
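To see what the stub-parsing regex in fill_enum_dict picks up, here is a tiny standalone sketch; the sample stub line is made up, but it follows the format shown in that function's docstring:
import re

stub_line = "    Round = ...  # type: Qt.HighDpiScaleFactorRoundingPolicy"
p = re.compile(r'(\w+)\s+=\s+\.\.\.\s+#\s*type:\s*([\w.]+)')
m = p.search(stub_line)
print(m.group(1))   # Round                                  -> the enum member name
print(m.group(2))   # Qt.HighDpiScaleFactorRoundingPolicy    -> the (possibly nested) class path
The script combines these two groups into an enum_dict entry such as 'Qt.Round' -> 'Qt.HighDpiScaleFactorRoundingPolicy.Round'.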

Update a field within a NamedTuple (from typing)

I'm looking for a more pythonic way to update a field within a NamedTuple (from typing).
I get the field name and value during runtime from a text file and therefore used exec, but I believe there must be a better way:
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
from typing import NamedTuple

class Template(NamedTuple):
    number : int = 0
    name : str = "^NAME^"

oneInstance = Template()
print(oneInstance)

# If I know the variable name during development, I can do this:
oneInstance = oneInstance._replace(number = 77)

# I get this from a file during runtime:
para = {'name' : 'Jones'}
mykey = 'name'

# Therefore, I used exec:
ExpToEval = "oneInstance = oneInstance._replace(" + mykey + " = para[mykey])"
exec(ExpToEval) # How can I do this in a more pythonic (and secure) way?
print(oneInstance)
I guess, from 3.7 on, I could solve this issue with dataclasses, but I need it for 3.6
Using _replace on namedtuples cannot really be made "pythonic". Namedtuples are meant to be immutable; if you use a namedtuple, other developers will expect that you do not intend to alter your data.
A pythonic approach is indeed the dataclass. You can use dataclasses in Python3.6 as well. Just use the dataclasses backport from PyPi.
Then the whole thing gets really readable, and you can use getattr and setattr to address properties by name easily:
from dataclasses import dataclass

@dataclass
class Template:
    number: int = 0
    name: str = "^Name^"

t = Template()

# use setattr and getattr to access a property by a runtime-defined name
setattr(t, "name", "foo")
print(getattr(t, "name"))
This will result in
foo
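To tie this back to the question's runtime data, a small sketch that applies the key/value pairs from the para dict (taken from the question) to the Template dataclass above:
para = {'name': 'Jones'}
t = Template()
for key, value in para.items():
    setattr(t, key, value)   # set each field by its runtime-provided name
print(t)                     # Template(number=0, name='Jones')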

Revit Python Shell - Change Parameter Group

I'm trying to write a quick script to open a family document, change the parameter group of 2 specified parameters, and then close and save the document. I've done multiple tests and I am able to change the parameter groups of the specified parameters, but the changes of the groups don't save back to the family file. When I open the newly saved family, the parameter groups revert back to their original group.
This is with Revit 2017.2.
The same script, when run in RPS in Revit 2018 will do as desired.
import clr
import os

clr.AddReference('RevitAPI')
clr.AddReference('RevitAPIUI')

from Autodesk.Revit.DB import *
from Autodesk.Revit.UI import UIApplication
from System.IO import Directory, SearchOption

searchstring = "*.rfa"
dir = r"C:\Users\dboghean\Desktop\vanity\2017"
docs = []

if Directory.Exists(dir):
    files = Directory.GetFiles(dir, searchstring, SearchOption.AllDirectories)
    for f in files:
        name, extension = os.path.splitext(f)
        name2, extension2 = os.path.splitext(name)
        if extension2:
            os.remove(f)
        else:
            docs.append(f)
else:
    print("Directory does not exist")

doc = __revit__.ActiveUIDocument.Document
app = __revit__.Application
uiapp = UIApplication(app)
currentPath = doc.PathName

pgGroup = BuiltInParameterGroup.PG_GRAPHICS

for i in docs:
    doc = app.OpenDocumentFile(i)
    paramList = [i for i in doc.FamilyManager.Parameters]
    t = Transaction(doc, "test")
    t.Start()
    for i in paramList:
        if i.Definition.Name in ["Right Sidesplash Edge line", "Left Sidesplash Edge line"]:
            i.Definition.ParameterGroup = pgGroup
    t.Commit()
    doc.Close(True)
Any ideas?
Thanks!
I can confirm that this happens in Revit 2017. Strange!
A simple way around it is to arbitrarily rename the parameter using doc.FamilyManager.RenameParameter, then rename it back to the original name.
So in your case this would be three additional lines of code after changing the Parameter group:
originalName = i.Definition.Name
doc.FamilyManager.RenameParameter(i, "temp")
doc.FamilyManager.RenameParameter(i, originalName)
It doesn't get to the root of the problem, but it works around it.
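In context, the inner loop from the question would then look something like this (a sketch reusing the variable names from the code above):
for i in paramList:
    if i.Definition.Name in ["Right Sidesplash Edge line", "Left Sidesplash Edge line"]:
        i.Definition.ParameterGroup = pgGroup
        # work around the Revit 2017 save issue: rename away and back
        originalName = i.Definition.Name
        doc.FamilyManager.RenameParameter(i, "temp")
        doc.FamilyManager.RenameParameter(i, originalName)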

Simplifying device creation in sip.conf

I often have to define many similar devices in sip.conf like this:
[device](!)
; setting some parameters
[device01](device)
callerid=dev01 <01>
[device02](device)
callerid=dev02 <02>
; ...
[deviceXX](device)
callerid=devXX <XX>
The question is: could I perhaps avoid setting device-name-specific parameters by using some variable, like the following?
[device](!)
callerid=dev${DEVICE_NAME:-2} <${DEVICE_NAME:-2}>
; setting some parameters
[device01](device)
[device02](device)
; ...
[deviceXX](device)
P.S.
It would be perfect if there were some device constructor, so I could reduce the config to the following, but I think that is not possible in Asterisk.
[device](!)
callerid=dev${DEVICE_NAME:-2} <${DEVICE_NAME:-2}>
; setting some parameters
;[device${MAGIC_LOOP(1,XX,leading_zeroes)}](device)
I've had good results writing a small program that takes care of it. It checks for a line saying something like
------- Automatically generated -------
and whatever comes after that line gets regenerated as soon as the program detects new values for it (which could come from a database or from a text file). Then I run it with supervisor, and it checks every XX seconds whether there are changes.
If there are changes, it updates the sip.conf file and then issues a sip reload command.
I wrote it in Python, but whatever language you feel comfortable with should work just fine.
That's how I managed it, and it has been working fine so far (after a couple of months). I'd be extremely interested in learning about other approaches, though. It's basically this (called from another script with supervisor):
# imports needed by this fragment; get_users_logic() and settings come from the surrounding script
import os, hashlib, subprocess
from functools import reduce

users = get_users_logic()

# get the data that will be used on the sip.conf file
data_to_be_hashed = reduce(lambda x, y: x + y, map(lambda x: x['username'] + x['password'] + x['company_prefix'], users))
m = hashlib.md5()
m.update(str(data_to_be_hashed).encode("ascii"))
new_md5 = m.hexdigest()

last_md5 = None
try:
    file = open(os.path.dirname(os.path.realpath(__file__)) + '/lastMd5.txt', 'r')
    last_md5 = file.read().rstrip()
    file.close()
except:
    pass

# if it changed...
if new_md5 != last_md5:
    # needs update
    with open(settings['asterisk']['path_to_sip_conf'], 'r') as file:
        sip_content = file.read().rstrip()

    parts = sip_content.split(";-------------- BEYOND THIS POINT IT IS AUTO GENERATED --------------;")
    sip_content = parts[0].rstrip()
    sip_content += "\n\n;-------------- BEYOND THIS POINT IT IS AUTO GENERATED --------------;\n\n"

    for user in users:
        m = hashlib.md5()
        m.update(("%s:sip.ellauri.it:%s" % (user['username'], user['password'])).encode("ascii"))
        md5secret = m.hexdigest()
        sip_content += "[%s]\ntype = friend\ncontext = %sLocal\nmd5secret = %s\nhost = dynamic\n\n" % (
            user['username'], user['company_prefix'], md5secret)

    # write the sip.conf file
    f = open(settings['asterisk']['path_to_sip_conf'], 'w')
    print(sip_content, file=f)
    f.close()

    subprocess.call('asterisk -x "sip reload"', shell=True)

    # write the new md5
    f = open(os.path.dirname(os.path.realpath(__file__)) + '/lastMd5.txt', 'w')
    print(new_md5, file=f)
    f.close()
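The outer script the answer alludes to ("called from another script with supervisor") can be as simple as the sketch below; regenerate_sip_conf is a hypothetical wrapper around the code above, and the 30-second interval is only an example:
import time

def regenerate_sip_conf():
    # hypothetical wrapper around the hashing / sip.conf rewrite logic shown above
    ...

while True:
    regenerate_sip_conf()
    time.sleep(30)   # "it checks every XX seconds if there are changes"
Supervisor then only has to keep this loop alive and restart it if it ever crashes.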

pyparsing multiple lines optional missing data in result set

I am a fairly new pyparsing user and have a missing match I don't understand.
Here is the text I would like to parse:
polraw="""
set policy id 800 from "Untrust" to "Trust" "IP_10.124.10.6" "MIP(10.0.2.175)" "TCP_1002" permit
set policy id 800
set dst-address "MIP(10.0.2.188)"
set service "TCP_1002-1005"
set log session-init
exit
set policy id 724 from "Trust" to "Untrust" "IP_10.16.14.28" "IP_10.24.10.6" "TCP_1002" permit
set policy id 724
set src-address "IP_10.162.14.38"
set dst-address "IP_10.3.28.38"
set service "TCP_1002-1005"
set log session-init
exit
set policy id 233 name "THE NAME is 527 ;" from "Untrust" to "Trust" "IP_10.24.108.6" "MIP(10.0.2.149)" "TCP_1002" permit
set policy id 233
set service "TCP_1002-1005"
set service "TCP_1006-1008"
set service "TCP_1786"
set log session-init
exit
"""
I set up the grammar this way:
KPOL = Suppress(Keyword('set policy id'))
NUM = Regex(r'\d+')
KSVC = Suppress(Keyword('set service'))
KSRC = Suppress(Keyword('set src-address'))
KDST = Suppress(Keyword('set dst-address'))
SVC = dblQuotedString.setParseAction(lambda t: t[0].replace('"',''))
ADDR = dblQuotedString.setParseAction(lambda t: t[0].replace('"',''))
EXIT = Suppress(Keyword('exit'))
EOL = LineEnd().suppress()
P_SVC = KSVC + SVC + EOL
P_SRC = KSRC + ADDR + EOL
P_DST = KDST + ADDR + EOL
x = KPOL + NUM('PId') + EOL + Optional(ZeroOrMore(P_SVC)) + Optional(ZeroOrMore(P_SRC)) + Optional(ZeroOrMore(P_DST))
for z in x.searchString(polraw):
    print z
The result set is as follows:
['800', 'MIP(10.0.2.188)']
['724', 'IP_10.162.14.38', 'IP_10.3.28.38']
['233', 'TCP_1002-1005', 'TCP_1006-1008', 'TCP_1786']
The 800 is missing the service tag?!
What's wrong here?
Thanks in advance,
Laurent
The problem you are seeing is that in your expression, DST's are only looked for after having skipped over optional SVC's and SRC's. You have a couple of options, I'll go through each so you can get a sense of what all is going on here.
(But first, there is no point in writing "Optional(ZeroOrMore(anything))" - ZeroOrMore already implies Optional, so I'm going to drop the Optional part in any of these choices.)
If you are going to get SVC's, SRC's, and DST's in any order, you could refactor your ZeroOrMore to accept any of the three data types, like this:
x = KPOL + NUM('PId') + EOL + ZeroOrMore(P_SVC|P_SRC|P_DST)
This will allow you to intermix different types of statements, and they will all get collected as part of the ZeroOrMore repetition.
If you want to keep these different types of statements in groups, then you can add a results name to each:
x = KPOL + NUM('PId') + EOL + ZeroOrMore(P_SVC("svc*")|
P_SRC("src*")|
P_DST("dst*"))
Note the trailing '*' on each name - this is equivalent to calling setResultsName with the listAllMatches argument equal to True. As each different expression is matched, the results for the different types will get collected into the "svc", "src", or "dst" results name. Calling z.dump() will list the tokens and the results names and their values, so you can see how this works.
set policy id 233
set service "TCP_1002-1005"
set dst-address "IP_10.3.28.38"
set service "TCP_1006-1008"
set service "TCP_1786"
set log session-init
exit
shows this for z.dump():
['233', 'TCP_1002-1005', 'IP_10.3.28.38', 'TCP_1006-1008', 'TCP_1786']
- PId: 233
- dst: [['IP_10.3.28.38']]
- svc: [['TCP_1002-1005'], ['TCP_1006-1008'], ['TCP_1786']]
If you wrap the P_xxx expressions in ungroup, maybe like this:
P_SVC,P_SRC,P_DST = (ungroup(expr) for expr in (P_SVC,P_SRC,P_DST))
then the output is even cleaner-looking:
['233', 'TCP_1002-1005', 'IP_10.3.28.38', 'TCP_1006-1008', 'TCP_1786']
- PId: 233
- dst: ['IP_10.3.28.38']
- svc: ['TCP_1002-1005', 'TCP_1006-1008', 'TCP_1786']
This is actually looking pretty good, but let me pass on one other option. There are a number of cases where parsers have to look for several sub-expressions in any order. Let's say they are A,B,C, and D. To accept these in any order, you could write something like OneOrMore(A|B|C|D), but this would accept multiple A's, or A, B, and C, but not D. The exhaustive/exhausting combinatorial explosion of (A+B+C+D) | (A+B+D+C) | etc. could be written, or you could maybe automate it with something like
from itertools import permutations
mixNmatch = MatchFirst(And(p) for p in permutations((A,B,C,D),4))
But there is a class in pyparsing called Each that allows you to write the same kind of thing:
Each([A,B,C,D])
meaning "must have one each of A, B, C, and D, in any order". And like And, Or, NotAny, etc., there is an operator shortcut too:
A & B & C & D
which means the same thing.
If you want "must have A, B, and C, and optionally D", then write:
A & B & C & Optional(D)
and this will parse with the same kind of behavior, looking for A, B, C, and D, regardless of the incoming order, and whether D is last or mixed in with A, B, and C. You can also use OneOrMore and ZeroOrMore to indicate optional repetition of any of the expressions.
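As an aside, here is a tiny self-contained sketch of the '&'/Each behavior; the color/size expressions are made up for the demo and are not part of the policy grammar:
import pyparsing as pp

color = pp.Word(pp.alphas)("color")
size  = pp.Word(pp.nums)("size")

item = color & size            # shorthand for pp.Each([color, size]): both required, any order

print(item.parseString("12 red")["color"])   # -> red
print(item.parseString("red 12")["size"])    # -> 12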
So you could write your expression as:
x = KPOL + NUM('PId') + EOL + (ZeroOrMore(P_SVC) &
ZeroOrMore(P_SRC) &
ZeroOrMore(P_DST))
I looked at using results names with this expression, and the ZeroOrMore's seem to be confusing things, maybe still a bug in how this is done. So you may have to reserve using Each for more basic cases like my A,B,C,D example. But I wanted to make you aware of it.
Some other notes on your parser:
dblQuotedString.setParseAction(lambda t: t[0].replace('"','')) is probably better written
dblQuotedString.setParseAction(removeQuotes). You don't have any embedded quotes in your examples, but it's good to be aware of where your assumptions might not translate to a future application. Here are a couple of ways of removing the defining quotes:
dblQuotedString.setParseAction(lambda t: t[0].replace('"',''))
print dblQuotedString.parseString(r'"This is an embedded quote \" and an ending quote \""')[0]
# prints 'This is an embedded quote \ and an ending quote \'
# removed leading and trailing "s, but also internal ones too, which are
# really part of the quoted string
dblQuotedString.setParseAction(lambda t: t[0].strip('"'))
print dblQuotedString.parseString(r'"This is an embedded quote \" and an ending quote \""')[0]
# prints 'This is an embedded quote \" and an ending quote \'
# removed leading and trailing "s, and leaves the one internal ones but strips off
# the escaped ending quote
dblQuotedString.setParseAction(removeQuotes)
print dblQuotedString.parseString(r'"This is an embedded quote \" and an ending quote \""')[0]
# prints 'This is an embedded quote \" and an ending quote \"'
# just removes leading and trailing " characters, leaves escaped "s in place
KPOL = Suppress(Keyword('set policy id')) is a bit fragile, as it will break if there are any extra spaces between 'set' and 'policy', or between 'policy' and 'id'. I usually define these kind of expressions by first defining all the keywords individually:
SET, POLICY, ID, SERVICE, SRC_ADDRESS, DST_ADDRESS, EXIT = map(Keyword,
    "set policy id service src-address dst-address exit".split())
and then define the separate expressions using:
KSVC = Suppress(SET + SERVICE)
KSRC = Suppress(SET + SRC_ADDRESS)
KDST = Suppress(SET + DST_ADDRESS)
Now your parser will cleanly handle extra whitespace (or even comments!) between individual keywords in your expressions.
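A quick check of that tolerance (assuming P_SVC, SVC, and EOL are rebuilt on top of these keyword definitions): extra spaces between 'set' and 'service' no longer break the match:
print(P_SVC.parseString('set     service   "TCP_1002-1005"\n'))
# -> ['TCP_1002-1005']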
