How to create a dictionary with a slash as key in PostScript

So I know these basic dict commands in Postscript:
i dict creates a dictionary with i entries
d begin moves dictionary d to the dictionary stack
end removes the topmost dictionary from the dictionary stack
d n w put stores value w under name n in dictionary d
d n get gets the value of name n in dictionary d and puts it on the stack
What I would like to do is create a dictionary that represents the precedence of different operators. The + and - operators share the same priority, therefore they are both set to 0; the * and / operators also share the same priority, but bind more tightly than + and -, and are therefore set to 1, and so on...
The problem is that I am not able to set the dictionary key to / because it is treated as a delimiter. Is there any way around this, since I cannot change the keys? I cannot create a dict like this:
/prioritydict 5 dict def
prioritydict /(-) 0 put
prioritydict /(+) 0 put
prioritydict /(*) 1 put
prioritydict /(/) 1 put
prioritydict /(^) 2 put
nor like this:
/prioritydict 5 dict def
prioritydict /- 0 put
prioritydict /+ 0 put
prioritydict /* 1 put
prioritydict // 1 put
prioritydict /^ 2 put
Any help is greatly appreciated

You could use the cvn (convert to name) operator to turn a string of any form you want into a literal name. These should all be equivalent:
prioritydict <2f> cvn 1 put
prioritydict (\/) cvn 1 put
prioritydict (\057) cvn 1 put

%!
/prioritydict 5 dict def
prioritydict begin
/+ 0 def
/- 0 def
/* 1 def
%// 1 def conflicts with immediately evaluated name
/^ 2 def
* ^ add = % loads 1 and 2, prints 3

If you're only ever going to refer to it by a literal name, then you can get away with just a single slash.
/
That is, an empty name: the name decorator followed by zero characters before the next delimiter (space, newline, parens, or braces).
You should be able to define it and load it, but you can't directly write an executable name for it, so you can't make use of automatic loading the way you can with other definitions.
/ 1 def
/ load =
/ cvx exec =


How to prompt a user for input until the input is valid in Julia

I am trying to make a program to prompt a user for input until they enter a number within a specific range.
What is the best approach to make sure the code does not error out when I enter a letter, a symbol, or a number outside of the specified range?
As an alternative to parse, you can use tryparse:
tryparse(type, str; base)
Like parse, but returns either a value of the requested type, or
nothing if the string does not contain a valid number.
The advantage over parse is that you can have a cleaner error handling without resorting to try/catch, which would hide all exceptions raised within the block.
For example you can do:
while true
    print("Please enter a whole number between 1 and 5: ")
    input = readline(stdin)
    value = tryparse(Int, input)
    if value !== nothing && 1 <= value <= 5
        println("You entered $(input)")
        break
    else
        @warn "Enter a whole number between 1 and 5"
    end
end
Sample run:
Please enter a whole number between 1 and 5: 42
┌ Warning: Enter a whole number between 1 and 5
└ # Main myscript.jl:9
Please enter a whole number between 1 and 5: abcde
┌ Warning: Enter a whole number between 1 and 5
└ # Main myscript.jl:9
Please enter a whole number between 1 and 5: 3
You entered 3
This is one possible way to achieve this sort of thing:
while true
    print("Please enter a whole number between 1 and 5: ")
    input = readline(stdin)
    try
        if 1 <= parse(Int, input) <= 5
            println("You entered $(input)")
            break
        else
            @warn "Enter a whole number between 1 and 5"
        end
    catch
        @warn "Enter a whole number between 1 and 5"
    end
end
Sample Run:
Please enter a whole number between 1 and 5: 2
You entered 2
See the documentation for parse for how to parse the user input into an Int.

How to perform pandas drop_duplicates based on index column

I am banging my head against the wall trying to drop duplicates from a time series, based on the value of a datetime index.
My function is the following:
def csv_import_merge_T(f):
    dfsT = [pd.read_csv(fp, index_col=[0], parse_dates=[0], dayfirst=True, names=['datetime','temp','rh'], header=0) for fp in files]
    dfT = pd.concat(dfsT)
    #print dfT.head(); print dfT.index; print dfT.dtypes
    dfT.drop_duplicates(subset=index, inplace=True)
    dfT.resample('H').bfill()
    return dfT
which is called by:
inputcsvT = ['./input_csv/A08_KI_T*.csv']
for csvnameT in inputcsvT:
    files = glob.glob(csvnameT)
    print ('___'); print (files)
    t = csv_import_merge_T(files)
print csvT
I receive the error
NameError: global name 'index' is not defined
what is wrong?
UPDATE:
The issue appears to arise when the csv input files (which are to be concatenated) overlap.
inputcsvT = ['./input_csv/A08_KI_T*.csv'] gets files
A08_KI_T5
28/05/2015 17:00,22.973,24.021
...
08/10/2015 13:30,24.368,45.974
A08_KI_T6
08/10/2015 14:00,24.779,41.526
...
10/02/2016 17:00,22.326,41.83
and it runs correctly, whereas:
inputcsvT = ['./input_csv/A08_LR_T*.csv'] gathers
A08_LR_T5
28/05/2015 17:00,22.493,25.62
...
08/10/2015 13:30,24.296,44.596
A08_LR_T6
28/05/2015 17:00,22.493,25.62
...
10/02/2016 17:15,21.991,38.45
which leads to an error.
IIUC you can call reset_index and then drop_duplicates and then set_index again:
In [304]:
df = pd.DataFrame(data=np.random.randn(5,3), index=list('aabcd'))
df
Out[304]:
          0         1         2
a  0.918546 -0.621496 -0.210479
a -1.154838 -2.282168 -0.060182
b  2.512519 -0.771701 -0.328421
c -0.583990 -0.460282  1.294791
d -1.018002  0.826218  0.110252
In [308]:
df.reset_index().drop_duplicates('index').set_index('index')
Out[308]:
              0         1         2
index
a      0.918546 -0.621496 -0.210479
b      2.512519 -0.771701 -0.328421
c     -0.583990 -0.460282  1.294791
d     -1.018002  0.826218  0.110252
EDIT
Actually there is a simpler method: call duplicated on the index and invert the resulting mask:
In [309]:
df[~df.index.duplicated()]
Out[309]:
          0         1         2
a  0.918546 -0.621496 -0.210479
b  2.512519 -0.771701 -0.328421
c -0.583990 -0.460282  1.294791
d -1.018002  0.826218  0.110252

Pyparsing - name not starting with a character

I am trying to use Pyparsing to identify a keyword that does not begin with $. So for the following input:
$abc = 5 # is not a valid one
abc123 = 10 # is valid one
abc$ = 23 # is a valid one
I tried the following
var = Word(printables, excludeChars='$')
var.parseString('$abc')
But this doesn't allow any $ in var. How can I specify all printable characters other than $ in the first character position? Any help will be appreciated.
Thanks
Abhijit
You can use the method I used to define "all characters except X" before I added the excludeChars parameter to the Word class:
NOT_DOLLAR_SIGN = ''.join(c for c in printables if c != '$')
keyword_not_starting_with_dollar = Word(NOT_DOLLAR_SIGN, printables)
This should be a bit more efficient than building up with a Combine and a NotAny. But this will match almost anything, integers, words, valid identifiers, invalid identifiers, so I'm skeptical of the value of this kind of expression in your parser.
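If it helps to sanity-check the intended behaviour, the same "any printable character, as long as the first one isn't $" rule can be sketched as a plain regular expression; the printables set below is rebuilt from string.printable and should match pyparsing's printables:

```python
import re
import string

# All printable, non-whitespace characters (same set as pyparsing's printables).
printables = ''.join(c for c in string.printable if not c.isspace())

# First character: anything printable except '$'; remaining characters: anything printable.
first = '[' + re.escape(printables.replace('$', '')) + ']'
rest = '[' + re.escape(printables) + ']*'
token = re.compile(first + rest)

print(bool(token.fullmatch('$abc')))    # False: leading $ rejected
print(bool(token.fullmatch('abc123')))  # True
print(bool(token.fullmatch('abc$')))    # True: $ allowed after the first character
```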

Grep examples - can't understand

Given the following commands:
ls | grep ^b[^b]*b[^b]
ls | grep ^b[^b]*b[^b]*
I know that ^ marks the start of the line, but can anyone give me a brief explanation of these commands? What do they do, step by step?
Thanks!
^ can mean two things:
mark the beginning of a line
or negate a character set (within [])
So the pattern means:
lines starting with 'b'
then any (0 or more) characters other than 'b'
then another 'b'
followed by one character that is not 'b' (or, in the second pattern, zero or more such characters)
The second pattern will match
bb
bzzzzzb
bzzzzzbzzzzzzz
but not
zzzzbb
bzzzzzxzzzzzz
The first pattern additionally requires at least one non-'b' character after the second 'b', so of these it matches only bzzzzzbzzzzzzz.
1) starts with 'b', continues with zero or more characters that are not 'b', then another 'b', then exactly one character that is not 'b'
2) starts with 'b', continues with zero or more characters that are not 'b', then another 'b', then zero or more characters that are not 'b'
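To check these claims without creating test files, the same expressions can be exercised with Python's re module (for these particular patterns, grep's basic-regex syntax and Python's agree):

```python
import re

names = ['bb', 'bzzzzzb', 'bzzzzzbzzzzzzz', 'zzzzbb', 'bzzzzzxzzzzzz']

pat1 = re.compile(r'^b[^b]*b[^b]')   # second 'b' must be followed by a non-'b'
pat2 = re.compile(r'^b[^b]*b[^b]*')  # trailing non-'b' run may be empty

print([n for n in names if pat1.search(n)])  # ['bzzzzzbzzzzzzz']
print([n for n in names if pat2.search(n)])  # ['bb', 'bzzzzzb', 'bzzzzzbzzzzzzz']
```

Note that in a real shell the patterns should be quoted (grep '^b[^b]*b[^b]') so that the bracket expressions and * cannot be expanded as globs by the shell.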

Doing a complex join on files

I have files (~1k) that look (basically) like this:
NAME1.txt
NAME ATTR VALUE
NAME1 x 1
NAME1 y 2
...
NAME2.txt
NAME ATTR VALUE
NAME2 x 19
NAME2 y 23
...
Where the ATTR column is the same in every file and the NAME column is just some version of the filename. I would like to combine them together into one file that looks like:
All_data.txt
ATTR NAME1_VALUE NAME2_VALUE NAME3_VALUE ...
x 1 19 ...
y 2 23 ...
...
Is there a simple way to do this with just command-line utilities, or will I have to resort to writing some script?
Thanks
You need to write a script.
gawk is the obvious candidate
You could build an associative array in the main pattern-action block (FILENAME is not yet set in a BEGIN block), using FILENAME as the key and
ATTR " " VALUE
as the value.
Then create your output in an END block.
gawk can process all the txt files together if you give it *.txt as the filename pattern.
It's a bit optimistic to expect there to be a ready-made command to do exactly what you want.
Very few commands join data horizontally.
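The same idea also fits in a few lines of Python if a script is acceptable. This is just a sketch; the function name merge_value_files, the NA placeholder for missing values, and the whitespace-separated layout are assumptions based on the samples in the question:

```python
import glob

def merge_value_files(paths, out_path):
    """Combine NAME/ATTR/VALUE files into one ATTR-by-NAME table."""
    table = {}   # attr -> {name: value}
    names = []   # column order: names in order of first appearance
    for path in paths:
        with open(path) as fh:
            next(fh)  # skip the "NAME ATTR VALUE" header line
            for line in fh:
                name, attr, value = line.split()
                if name not in names:
                    names.append(name)
                table.setdefault(attr, {})[name] = value
    with open(out_path, 'w') as out:
        out.write('ATTR ' + ' '.join(n + '_VALUE' for n in names) + '\n')
        for attr, row in table.items():
            out.write(attr + ' ' + ' '.join(row.get(n, 'NA') for n in names) + '\n')

# e.g. merge_value_files(sorted(glob.glob('NAME*.txt')), 'All_data.txt')
```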
