How to print trees onto the console?

It would be nice if I could print the binary search trees I am writing to the Python console. Any idea how to do it?

You can use something like this:
def printTree(tree, depth = 0):
    if tree == None or len(tree) == 0:
        print "\t" * depth, "-"
    else:
        for key, val in tree.items():
            print "\t" * depth, key
            printTree(val, depth+1)
(Source: http://www.siafoo.net/snippet/91)
This method will yield:
n1
    n2
        n4
        n5
    n3
        n6
        n7
You can go along these lines and prettify as necessary.
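For instance, if you keep the tree as nested dictionaries (a hypothetical layout; adapt it to however your nodes are actually stored), a call could look like this. Note that the empty dictionaries at the leaves are printed as "-" markers by the code above:
tree = {'n1': {'n2': {'n4': {}, 'n5': {}},
               'n3': {'n6': {}, 'n7': {}}}}
printTree(tree)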

Related

Incrementing a Counter in Prolog

This is the current code I have for a problem I am working on. It is supposed to read in from a file, and increment a counter, R, every time it comes across a vowel.
Currently, I have it stop when reaching a vowel, but I would like it to increment a counter, then continue processing. Once done, I want it to print R to the console. Thanks in advance!
readWord(InStream, W) :-
    get0(InStream, Char),
    checkChar_readRest(Char, Chars, InStream, R),
    atom_codes(Code, Chars),
    write(Code).

%checkChar_readRest(10,[],_) :- !.    % Return
%checkChar_readRest(32,[],_) :- !.    % Space
checkChar_readRest(-1,[],_,_) :- !.   % End of Stream
checkChar_readRest(97,[],_,R) :- !.   % a
checkChar_readRest(101,[],_,R) :- !.  % e
checkChar_readRest(105,[],_,R) :- !.  % i
checkChar_readRest(111,[],_,R) :- incr(R,R1), write(R1). % o
checkChar_readRest(117,[],_,R) :- !.  % u
%checkChar_readRest(end_of_file,[],_,_) :- !.
checkChar_readRest(Char, [Char|Chars], InStream, R) :-
    get0(InStream, NextChar),
    checkChar_readRest(NextChar, Chars, InStream, R).

incr(X, X1) :- X1 is X+1.

vowel(InStream, R) :-
    open(InStream, read, In),
    repeat,
    readWord(In, W),
    close(In).
Here's my attempt (ISO predicates):
% open Src, count vowels, close stream, print to console
count_vowels_in(Src) :-
    open(Src, read, Stream),
    count(Stream, Total),
    close(Stream),
    % cutting here because our Stream is now closed; any backtracking
    % will break things that rely on it.
    % You could also put this after at_end_of_stream/1 in count_/3;
    % not sure what best practices are here.
    !,
    write(Total).

% just a nice wrapper to get started counting with the initial count set to 0
count(Stream, Total) :-
    count_(Stream, 0, Total).

% at end of stream, Count = Total and we're done
count_(Stream, Count, Count) :-
    at_end_of_stream(Stream).

% read from stream recursively, incrementing Count as needed
count_(Stream, Count0, Total) :-
    \+ at_end_of_stream(Stream),
    get_char(Stream, Char),
    char_value(Char, Value),
    Count1 is Count0 + Value,
    % recursively call count_, but now with our new Count1 value instead,
    % carrying forward the results
    count_(Stream, Count1, Total).

char_value(Char, 1) :-
    vowel(Char).
char_value(Char, 0) :-
    \+ vowel(Char).

vowel(a).
vowel(e).
vowel(i).
vowel(o).
vowel(u).
The biggest difference is that I use two variables for keeping track of the count. Count (equivalent to your R) is the current count, and Total is a variable representing the final count. We unify Total with Count when we are finished counting: at the end of the stream.
In the original program posted, there were many singleton variables (variables that are never unified with anything, for example W). This is usually indicative of a bug and will generate warnings. Remember that Prolog is a logical language, it can be good to take a step back and think "what am I actually trying to do with these variables?". It can also help to break the problem down into smaller chunks instead of trying to write one predicate that does everything.
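As a minimal illustration of the same accumulator idea on a plain list of characters (just a sketch reusing vowel/1 from above, not a replacement for the stream-based version):
count_vowels_list(Chars, Total) :-
    count_vowels_list_(Chars, 0, Total).

% base case: the accumulated count becomes the final total
count_vowels_list_([], Count, Count).
% step case: bump the accumulator for vowels, keep it unchanged otherwise
count_vowels_list_([C|Cs], Count0, Total) :-
    ( vowel(C) -> Count1 is Count0 + 1 ; Count1 = Count0 ),
    count_vowels_list_(Cs, Count1, Total).
For example, ?- count_vowels_list([h,e,l,l,o], N). gives N = 2.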
I might approach it like this:
Slurp in the entire file as a list of characters.
Traverse that list and tally the vowels it contains.
Write the tally.
Something like
findall(C, ( get0(V) , V \= -1 , char_code(C,V) ), Cs).
should suffice for slurping the text.
And then, something along these lines:
count_vowels :-
    findall(C, ( get0(V), V \= -1, char_code(C,V) ), Cs),
    count_vowels(Cs, N),
    writeln( total_vowels : N ).

count_vowels( S, N )  :- string(S), !, string_chars(S,Cs), count_vowels(Cs,N).
count_vowels( Cs, N ) :- count_vowels(Cs,0,N).

count_vowels( [],     N, N ).
count_vowels( [C|Cs], T, N ) :- tally(C,T,T1), count_vowels(Cs,T1,N).

tally( C, M, N ) :- vowel(C), !, N is M+1.
tally( _, N, N ).

vowel( a ).
vowel( e ).
vowel( i ).
vowel( o ).
vowel( u ).

Why does my Prolog S-expression tokenizer fail on its base case?

To learn some Prolog (I'm using GNU Prolog) and grok its parsing abilities, I am starting by writing a Lisp (or S-expression, if I'm being exact) tokenizer, which given a set of tokens like ['(', 'f', 'o', 'o', ')'] should produce ['(', 'foo', ')']. It's not working as expected, which is why I'm here! I thought my thought process shined through in my pseudocode:
tokenize([current | rest], buffer, tokens):
    if current is '(' or ')',
        Tokenize the rest,
        And the output will be the current token buffer,
        Plus the parenthesis and the rest.
    if current is ' ',
        Tokenize the rest with a clean buffer,
        And the output will be the buffer plus the rest.
    if the tail is empty,
        The output will be a one-element list containing the buffer.
    otherwise,
        Add the current character to the buffer,
        And the output will be the rest tokenized, with a bigger buffer.
I translated that to Prolog like this:
tokenize([Char | Chars], Buffer, Tokens) :-
    ((Char = '(' ; Char = ')') ->
        tokenize(Chars, '', Tail_Tokens),
        Tokens is [Buffer, Char | Tail_Tokens];
    Char = ' ' ->
        tokenize(Chars, '', Tail_Tokens),
        Tokens is [Buffer | Tail_Tokens];
    Chars = [] -> Tokens is [Buffer];
    atom_concat(Buffer, Char, New_Buffer),
    tokenize(Chars, New_Buffer, Tokens)).

print_tokens([]) :- write('.').
print_tokens([T | N]) :- write(T), write(', '), print_tokens(N).

main :-
    % tokenize(['(', 'f', 'o', 'o', '(', 'b', 'a', 'r', ')', 'b', 'a', 'z', ')'], '', Tokens),
    tokenize(['(', 'f', 'o', 'o', ')'], '', Tokens),
    print_tokens(Tokens).
When I run it like this: gprolog --consult-file lisp_parser.pl, it just tells me no. I traced main, and it gave me the trace below. I do not understand why tokenize fails for the empty case. I see that the buffer is empty, since it was cleared with the previous ')', but even if Tokens is empty at that point in time, wouldn't Tokens accumulate a larger result recursively? Can someone who is good with Prolog give me a few tips here?
| ?- main.
no
| ?- trace.
The debugger will first creep -- showing everything (trace)
(1 ms) yes
{trace}
| ?- main.
1 1 Call: main ?
2 2 Call: tokenize(['(',f,o,o,')'],'',_353) ?
3 3 Call: tokenize([f,o,o,')'],'',_378) ?
4 4 Call: atom_concat('',f,_403) ?
4 4 Exit: atom_concat('',f,f) ?
5 4 Call: tokenize([o,o,')'],f,_429) ?
6 5 Call: atom_concat(f,o,_454) ?
6 5 Exit: atom_concat(f,o,fo) ?
7 5 Call: tokenize([o,')'],fo,_480) ?
8 6 Call: atom_concat(fo,o,_505) ?
8 6 Exit: atom_concat(fo,o,foo) ?
9 6 Call: tokenize([')'],foo,_531) ?
10 7 Call: tokenize([],'',_556) ?
10 7 Fail: tokenize([],'',_544) ?
9 6 Fail: tokenize([')'],foo,_519) ?
7 5 Fail: tokenize([o,')'],fo,_468) ?
5 4 Fail: tokenize([o,o,')'],f,_417) ?
3 3 Fail: tokenize([f,o,o,')'],'',_366) ?
2 2 Fail: tokenize(['(',f,o,o,')'],'',_341) ?
1 1 Fail: main ?
(1 ms) no
{trace}
| ?-
How about this? I think that's what you want to do, but let's use Definite Clause Grammars, which are just Horn clauses with :- replaced by --> and two elided arguments holding the input character list and the remaining character list. An example DCG rule:
rule(X) --> [c], another_rule(X), {predicate(X)}.
The list-processing rule rule//1 says: when you find the character c in the input list, continue list processing with another_rule//1, and once that has worked out, call predicate(X) as an ordinary goal.
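Roughly speaking (the exact translation varies a little between Prolog systems), that rule behaves like an ordinary clause with two extra list arguments threaded through:
rule(X, S0, S) :-
    S0 = [c|S1],              % consume the terminal 'c' from the input list
    another_rule(X, S1, S2),  % let another_rule//1 consume more of the list
    predicate(X),             % the { ... } goal is called as-is
    S = S2.                   % whatever is left over is handed back to the caller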
Then:
% If we encounter a separator symbol '(' or ')', we commit to the
% clause using '!' (no point trying anything else, in particular
% not the clause for "other characters"), tokenize the rest of the
% list, and when we have done that, decide whether 'MaybeToken',
% which is "the part of the leftmost token after '(' or ')'", should
% be retained. It is dropped if it is empty. The caller is then given
% an empty "part of the leftmost token" and the list of tokens, with
% '(' or ')' prepended: "tokenize('', [ '(' | MoreTokens ]) -->"

tokenize('', [ '(' | MoreTokens ]) -->
    ['('],
    !,
    tokenize(MaybeToken, Tokens),
    { drop_empty(MaybeToken, Tokens, MoreTokens) }.

tokenize('', [ ')' | MoreTokens ]) -->
    [')'],
    !,
    tokenize(MaybeToken, Tokens),
    { drop_empty(MaybeToken, Tokens, MoreTokens) }.
% No more characters in the input list (that's what '--> []' says).
% We succeed, with an empty token list and an empty buffer for the
% leftmost token.

tokenize('', []) --> [].

% If we find a 'Ch' that is not '(' or ')', then tokenize more of
% the list via 'tokenize(MaybeToken, Tokens)'. On return, 'MaybeToken'
% is a piece of the leftmost token found in that list, so we have to
% stick 'Ch' onto its start.

tokenize(LargerMaybeToken, Tokens) -->
    [Ch],
    tokenize(MaybeToken, Tokens),
    { atom_concat(Ch, MaybeToken, LargerMaybeToken) }.

% ---
% This drops an empty "MaybeToken". If "MaybeToken" is *not* empty,
% it is actually a token and is prepended to the list "Tokens".
% ---

drop_empty('', Tokens, Tokens) :- !.
drop_empty(MaybeToken, Tokens, [MaybeToken|Tokens]).
% -----------------
% Call the DCG using phrase/2
% -----------------

tokenize(Text, Result) :-
    phrase( tokenize(MaybeToken, Tokens), Text ),
    drop_empty(MaybeToken, Tokens, Result),
    !.
And so:
?- tokenize([h,e,l,l,o],R).
R = [hello].
?- tokenize([h,e,l,'(',l,')',o],R).
R = [hel,(,l,),o].
?- tokenize([h,e,l,'(',l,l,')',o],R).
R = [hel,(,ll,),o].
I think in GNU Prolog, the notation `hello` generates [h,e,l,l,o] directly.
I do not understand why tokenize fails for the empty case.
The reason anything fails in Prolog is because there is no clause that makes it true. If your only clause for tokenize is of the form tokenize([Char | Chars], ...), then no call of the form tokenize([], ...) will ever be able to match this clause, and since there are no other clauses, the call will fail.
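For instance, with a hypothetical helper that only has a clause for non-empty lists, the call on [] has nothing to match and simply fails:
only_nonempty([_|_]).

?- only_nonempty([a, b]).
true.

?- only_nonempty([]).
false.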
So you need to add such a clause. But first:
:- set_prolog_flag(double_quotes, chars).
This allows you to write ['(', f, o, o, ')'] as "(foo)".
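A quick check at the top level (shown as SWI-Prolog prints it):
?- set_prolog_flag(double_quotes, chars).
true.

?- X = "(foo)".
X = ['(', f, o, o, ')'].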
Also, you must plan for the case where the input is completely empty, and for other cases where you may need to emit a token for the buffer, but only if it is not '' (there should be no '' tokens littering the result).
finish_buffer(Tokens, Buffer, TokensMaybeWithBuffer) :-
    (   Buffer = ''
    ->  TokensMaybeWithBuffer = Tokens
    ;   TokensMaybeWithBuffer = [Buffer | Tokens] ).
For example:
?- finish_buffer(MyTokens, '', TokensMaybeWithBuffer).
MyTokens = TokensMaybeWithBuffer.
?- finish_buffer(MyTokens, 'foo', TokensMaybeWithBuffer).
TokensMaybeWithBuffer = [foo|MyTokens].
Note that you can prepend the buffer to the list of tokens, even if you don't yet know what that list of tokens is! This is the power of logical variables. The rest of the code uses this technique as well.
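For instance, you can build [foo | Rest] before Rest is known and fill Rest in later:
?- Tokens = [foo | Rest], Rest = ['(', bar, ')'].
Tokens = [foo, '(', bar, ')'],
Rest = ['(', bar, ')'].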
So, the case for the empty input:
tokenize([], Buffer, Tokens) :-
    finish_buffer([], Buffer, Tokens).
For example:
?- tokenize([], '', Tokens).
Tokens = [].
?- tokenize([], 'foo', Tokens).
Tokens = [foo].
And the remaining cases:
tokenize([Parenthesis | Chars], Buffer, TokensWithParenthesis) :-
    (   Parenthesis = '('
    ;   Parenthesis = ')' ),
    finish_buffer([Parenthesis | Tokens], Buffer, TokensWithParenthesis),
    tokenize(Chars, '', Tokens).
tokenize([' ' | Chars], Buffer, TokensWithBuffer) :-
    finish_buffer(Tokens, Buffer, TokensWithBuffer),
    tokenize(Chars, '', Tokens).
tokenize([Char | Chars], Buffer, Tokens) :-
    Char \= '(',
    Char \= ')',
    Char \= ' ',
    atom_concat(Buffer, Char, NewBuffer),
    tokenize(Chars, NewBuffer, Tokens).
Note how I used separate clauses for the separate cases. This makes the code more readable, but it does have the drawback compared to (... -> ... ; ...) that the last clause must exclude characters handled by previous clauses. Once you have your code in this shape, and you're happy that it works, you can transform it into a form using (... -> ... ; ...) if you really want to.
Examples:
?- tokenize("(foo)", '', Tokens).
Tokens = ['(', foo, ')'] ;
false.
?- tokenize(" (foo)", '', Tokens).
Tokens = ['(', foo, ')'] ;
false.
?- tokenize("(foo(bar)baz)", '', Tokens).
Tokens = ['(', foo, '(', bar, ')', baz, ')'] ;
false.
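As for the transformation to (... -> ... ; ...) mentioned above, the three non-empty-input clauses could be merged into a single clause along these lines (just a sketch; the separate clauses are arguably clearer, and the empty-input clause stays as it is):
tokenize([Char | Chars], Buffer, Tokens) :-
    (   ( Char = '(' ; Char = ')' )
    ->  finish_buffer([Char | Tokens0], Buffer, Tokens),
        tokenize(Chars, '', Tokens0)
    ;   Char = ' '
    ->  finish_buffer(Tokens0, Buffer, Tokens),
        tokenize(Chars, '', Tokens0)
    ;   atom_concat(Buffer, Char, NewBuffer),
        tokenize(Chars, NewBuffer, Tokens)
    ).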
Finally, and very importantly, the is operator is meant only for the evaluation of arithmetic expressions. It will throw an exception when you apply it to anything that is not arithmetic. Unification is different from the evaluation of arithmetic expressions; unification is written as =.
?- X is 2 + 2.
X = 4.
?- X = 2 + 2.
X = 2+2.
?- X is [a, b, c].
ERROR: Arithmetic: `[a,b,c]' is not a function
ERROR: In:
ERROR: [20] throw(error(type_error(evaluable,...),_3362))
ERROR: [17] arithmetic:expand_function([a,b|...],_3400,_3402) at /usr/lib/swi-prolog/library/arithmetic.pl:175
ERROR: [16] arithmetic:math_goal_expansion(_3450 is [a|...],_3446) at /usr/lib/swi-prolog/library/arithmetic.pl:147
ERROR: [14] '$expand':call_goal_expansion([system- ...],_3512 is [a|...],_3492,_3494,_3496) at /usr/lib/swi-prolog/boot/expand.pl:863
ERROR: [13] '$expand':expand_goal(_3566 is [a|...],_3552,_3554,_3556,user,[system- ...],_3562) at /usr/lib/swi-prolog/boot/expand.pl:524
ERROR: [12] setup_call_catcher_cleanup('$expand':'$set_source_module'(user,user),'$expand':expand_goal(...,_3640,_3642,_3644,user,...,_3650),_3614,'$expand':'$set_source_module'(user)) at /usr/lib/swi-prolog/boot/init.pl:443
ERROR: [8] '$expand':expand_goal(user:(_3706 is ...),_3692,user:_3714,_3696) at /usr/lib/swi-prolog/boot/expand.pl:458
ERROR: [6] setup_call_catcher_cleanup('$toplevel':'$set_source_module'(user,user),'$toplevel':expand_goal(...,...),_3742,'$toplevel':'$set_source_module'(user)) at /usr/lib/swi-prolog/boot/init.pl:443
ERROR:
ERROR: Note: some frames are missing due to last-call optimization.
ERROR: Re-run your program in debug mode (:- debug.) to get more detail.
^ Call: (14) call('$expand':'$set_source_module'(user)) ? abort
% Execution Aborted
?- X = [a, b, c].
X = [a, b, c].

Binary trees as nested pairs

I'm trying to represent a generic binary tree as a pair.
I'll use SML syntax as an example. This is my btree type definition:
datatype btree = leaf | branch of btree*btree;
So, I'd like to write a function that, given a btree, prints the following:
bprint leaf = 0
bprint (branch (leaf,leaf)) = (0,0)
bprint (branch (leaf, branch (leaf,leaf))) = (0, (0, 0))
and so on.
The problem is that this function always returns different types. This is obviously a problem for SML and maybe for other functional languages.
Any idea?
Since all you want to do is print the tree structure to the screen, you can just do that and have your function's return type be unit. That is, instead of trying to return the tuple (0, (0, 0)), just print the string (0, (0, 0)) to the screen. This way you won't run into any difficulties with types.
If you really do not need a string representation anywhere else, as already mentioned by others, just printing the tree might be the easiest way:
open TextIO

datatype btree = leaf | branch of btree * btree

fun print_btree leaf = print "0"
  | print_btree (branch (s, t)) =
      (print "("; print_btree s; print ", "; print_btree t; print ")")
In case you also want to be able to obtain a string representing a btree, the naive solution would be:
fun btree_to_string leaf = "0"
  | btree_to_string (branch (s, t)) =
      "(" ^ btree_to_string s ^ ", " ^ btree_to_string t ^ ")"
However, I do not really recommend this variant since for big btrees there is a problem due to the many string concatenations.
Something nice to think about is the following variant, which avoids the concatenation problem by a trick that is, for example, also used in Haskell's Show class: instead of working on strings, work on functions from char lists to char lists. Then concatenation can be replaced by function composition:
fun btree_to_string' t =
  let
    fun add s t = s @ t
    fun add_btree leaf = add [#"0"]
      | add_btree (branch (s, t)) =
          add [#"("] o add_btree s o add [#",", #" "] o add_btree t o add [#")"]
  in implode (add_btree t []) end
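A quick check of the function-composition variant (same tree as above; the result should match the naive version):
val s = btree_to_string' (branch (leaf, branch (leaf, leaf)));
(* s = "(0, (0, 0))" *)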

scapy hexdump()

I wonder which hexdump() Scapy uses, since I would like to modify it, but I simply can't find anything.
What I DO find is:
def hexdump(self, lfilter=None):
    for i in range(len(self.res)):
        p = self._elt2pkt(self.res[i])
        if lfilter is not None and not lfilter(p):
            continue
        print "%s %s %s" % (conf.color_theme.id(i, "%04i"),
                            p.sprintf("%.time%"),
                            self._elt2sum(self.res[i]))
        hexdump(p)
But that is simply an alternative for pkt.hexdump(), which does a pkt.summary() followed by hexdump(pkt).
Could anyone tell me where to find the hexdump(pkt) source code?
What I want is the hex'ed packet, almost like str(pkt[0]) (where I can check byte by byte via str(pkt[0])[0]), but with nothing other than hex values, just as displayed by hexdump(pkt).
Maybe you guys could help me out with this one :)
Found it. To answer my own question: it is located in utils.py.
def hexdump(x):
    x = str(x)
    l = len(x)
    i = 0
    while i < l:
        print "%04x " % i,
        for j in range(16):
            if i+j < l:
                print "%02X" % ord(x[i+j]),
            else:
                print " ",
            if j%16 == 7:
                print "",
        print " ",
        print sane_color(x[i:i+16])
        i += 16
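If all you need are the packet's bytes as hex values (rather than modifying Scapy's hexdump() itself), a minimal sketch along these lines may already be enough; here pkt stands for whatever packet object you have, and only str() and ord() are used, as in the code above:
raw = str(pkt)                               # the packet's raw bytes (Python 2)
hex_values = ["%02X" % ord(b) for b in raw]  # one two-digit hex string per byte
print " ".join(hex_values)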

Prolog parse postfix math expressions

I solved this myself. I'll post the solution when we're past the due date for my homework.
Okay, I'm going to build a parser, or rather an evaluator. The de facto standard when evaluating postfix notation is to just use a stack: push onto the stack if the input is a number; if it is an operator, pop twice, apply the operator, and push the result back onto the stack.
The stack here would be a list, so I need to know how I can apply the operators. The input would be a string, e.g. "(11+2*)", which works out as 1+1=2, then 2*2=4: first it reads 1 and pushes it onto the stack, reads another 1 and pushes it, then reads "+", so it removes (pops) twice from the stack, applies + and puts the result back. It reads 2 and pushes 2 onto the stack, then reads *, pops twice and applies *.
Hope this makes sense. What would the predicate look like? Do I need one variable for the input string, one to maintain the stack, and one for the result? Three?
I'm especially wondering about push and pop on the stack, as well as how to consume the input string as I go.
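For reference, here is a minimal sketch of the stack idea described above, working on a list of already-split tokens rather than a raw string (the predicate names are made up; the posted solution below works directly on the string instead):
eval_postfix(Tokens, Value) :-
    eval_postfix(Tokens, [], Value).

% input exhausted: the stack must hold exactly the final value
eval_postfix([], [Value], Value).
% a number is pushed onto the stack
eval_postfix([T|Ts], Stack, Value) :-
    number(T), !,
    eval_postfix(Ts, [T|Stack], Value).
% an operator pops two operands, applies itself, and pushes the result
eval_postfix([Op|Ts], [B,A|Stack], Value) :-
    Expr =.. [Op, A, B],
    Result is Expr,
    eval_postfix(Ts, [Result|Stack], Value).
For example, ?- eval_postfix([1, 2, 3, '*', '+'], V). gives V = 7.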
I'll post the teacher's solution:
% Solution to assignment 3, INF121, Autumn 2009.
% Written by: Dag Hovland
% Copyright: University of Bergen
% Licensed under GPL v3, www.gnu.org. With permission from the administration.

% Exercise 1

alignment([],[],[]).
alignment([X|Xs],[X|Ys],[X|A]) :- alignment(Xs,Ys,A).
alignment(Xs,[_|Ys],A) :- alignment(Xs,Ys,A).
alignment([_|Xs],Ys,A) :- alignment(Xs,Ys,A).

maximum([X|Xs],Max) :- maximum(Xs,X,Max).

maximum([],(X,_),X).
maximum([X|Xs],(_,LM),MX) :- length(X,LX), LX > LM, !, maximum(Xs, (X,LX), MX).
maximum([X|Xs],(M,LM),MX) :- length(X,LX), LX < LM, !, maximum(Xs, (M,LM), MX).
% Because of the cuts above, we know that if the clauses below are used,
% then X is exactly as long as the longest seen so far.
maximum([X|Xs],_,MX) :- length(X,LX), maximum(Xs, (X,LX), MX).
maximum([_|Xs],M,MX) :- maximum(Xs, M, MX).

maxAlignment(Xs,Ys,A) :- findall((N,A),alignment(Xs,Ys,N,A),All),!,
                         maximum(All,(_,A)).

% Exercise 2

path(S,S,_).
path(S,End,Edges) :- select((S,Next),Edges,EdgesRest),
                     path(Next, End, EdgesRest).

% select is built in. Type "listing(select)." to see the definition:
%select(A, [A|B], B).
%select(B, [A|C], [A|D]) :-
%    select(B, C, D).

% polish(I,V,S) evaluates expression I to value V with stack S.
polish([],V,[V]).
polish(I,V,S) :- append(" ",I1,I), polish(I1,V,S).
polish([NC|I],V,S) :- name(N,[NC]), integer(N), polish(I,V,[N|S]).
polish(I,V,[F1,F2|S]) :- append("+",I1,I), Sum is F1+F2, polish(I1,V,[Sum|S]).
polish(I,V,[F1,F2|S]) :- append("-",I1,I), Sum is F2-F1, polish(I1,V,[Sum|S]).
polish(I,V,[F1,F2|S]) :- append("/",I1,I), Sum is F2/F1, polish(I1,V,[Sum|S]).
polish(I,V,[F1,F2|S]) :- append("*",I1,I), Sum is F1*F2, polish(I1,V,[Sum|S]).

evalPost(S,E) :- polish(S,E,[]).
I'm posting the whole file as it is. The following shows how it works:
?- evalPost("1 2 3 * +", V).
V = 7
?- evalPost("1 3 2 * 2 + +",V).
V = 9
?- evalPost("1 2 3 * 4 + +",V).
V = 11
?- evalPost("1 2 3 * 4 + -",V).
V = -9
?- evalPost("4 2 / 1 +",V).
V = 3
