Drop all of a kind of thing - inform7

I would like the player to be able to drop all of a kind of thing. e.g.
piece is a kind of thing.
Pawn is a kind of piece.
Rook is a kind of piece.
The player carries 8 pawns.
The player carries 2 rooks.
Now if:
>drop all rooks
Rook: Dropped.
Rook: Dropped.
But if:
>drop all pieces
You can't use multiple objects with that verb.
How can I use "pieces" to mean all rooks and pawns?

Kinds don't have nouns or keywords by default, so Inform 7 doesn't realise that the player has anything called a piece. This is easily fixed with Understand:
piece is a kind of thing.
Pawn is a kind of piece.
Rook is a kind of piece.
[note the slightly different syntax for Understanding a kind.]
Understand "piece" and "pieces" as a piece.
The player carries 4 pawns, and 2 rooks.
The result is:
> i
You are carrying:
four Pawns
two Rooks
> drop all pieces
Pawn: Dropped.
Pawn: Dropped.
Pawn: Dropped.
Pawn: Dropped.
Rook: Dropped.
Rook: Dropped.
> i
You are carrying nothing.

Moral of the story from SICP Ex. 1.20?

In this exercise we are asked to trace Euclid's algorithm using first normal-order and then applicative-order evaluation.
(define (gcd a b)
  (if (= b 0)
      a
      (gcd b (remainder a b))))
I've done the manual trace and checked it against the several solutions available on the internet. What I'm after here is consolidating the moral of the exercise.
In gcd above, note that b is used three times in the function body, and the function is recursive. This is what gives rise to 18 calls to remainder under normal order, in contrast to only 4 under applicative order.
So it seems that when a function uses an argument more than once in its body (and perhaps recursively, as here), not evaluating that argument when the function is called (i.e. normal order) leads to redundant recomputation of the same thing.
Note that the question is at pains to point out that the special form if does not change its behaviour when normal order is used; that is, if still evaluates its predicate first, and only then evaluates one of its branches. If this didn't happen, there could be no termination in this example.
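For concreteness, here are the first few normal-order expansion steps for (gcd 206 40), the call used in the exercise. The unevaluated operand expression bound to b is duplicated at each step, and each if predicate then forces one copy of it:

(gcd 206 40)
(gcd 40 (remainder 206 40))                         ; predicate forces 1 remainder call
(gcd (remainder 206 40)
     (remainder 40 (remainder 206 40)))             ; predicate forces 2 calls
(gcd (remainder 40 (remainder 206 40))
     (remainder (remainder 206 40)
                (remainder 40 (remainder 206 40)))) ; predicate forces 4 calls, and so on

The predicates account for 1 + 2 + 4 + 7 = 14 calls, and evaluating the final value of a accounts for the remaining 4, giving 18 in total.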
I'm curious about the delayed evaluation we are seeing here.
As a plus, it might allow us to handle infinite things, like streams.
As a minus, with a function like the one here, it can cause great inefficiency.
To fix the latter it seems there are two conceptual options. One, wrap the argument in some data structure that caches its result to avoid recomputation. Two, selectively force the argument to be evaluated when you know it would otherwise cause repeated recomputation.
The thing is, neither of those options seems very nice, because both represent additional "levers" the programmer must know how to use and decide when to use.
Is all of this dealt with more thoroughly later in the book?
Is there any straightforward consolidation of these points which would be worth making clear at this point (without, perhaps, going into all the detail that is to come)?
The short answer is yes, this is covered extensively later in the book.
It is covered in most detail in Chapter 4, where we implement eval and apply and so get the opportunity to modify their behaviour. For example, Exercise 4.31 suggests the following syntax:
(define (f a (b lazy) c (d lazy-memo))
As you can see, this identifies three different behaviours.
Parameters a and c have applicative order (they are evaluated before they are passed to f).
b is normal, or lazy: it is evaluated every time it is used (but only if it is used).
d is lazy too, but its value is memoized, so it is evaluated at most once.
Although this syntax is possible, it isn't used. I think the philosophy is that the language has an expected behaviour (applicative order) and that normal order is only used by default when necessary (e.g., for the consequent and alternative of an if expression, and in creating streams). When it is necessary (or desirable) to have a parameter with normal-order evaluation, we can use delay and force, and if necessary memo-proc (e.g. Exercise 3.77).
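For reference, (delay <exp>) can be understood as (lambda () <exp>), force as simply calling that thunk, and memo-proc (defined in Section 3.5.1 of the book) as wrapping a thunk so that its body is evaluated at most once:

(define (memo-proc proc)
  (let ((already-run? false) (result false))
    (lambda ()
      (if (not already-run?)
          (begin (set! result (proc))
                 (set! already-run? true)
                 result)
          result))))

;; (force p) is then just (p), and a memoizing delay is
;; (memo-proc (lambda () <exp>)) -- evaluated once, cached thereafter.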
So at this point the authors are introducing the ideas of normal and applicative order so that we have some familiarity with them by the time we get into the nitty gritty later on.
In a sense this is a recurring theme: applicative order is probably more intuitive, but sometimes we need normal order. Recursive functions are simpler to write, but sometimes we need the performance of iterative ones. Programs where we can use the substitution model are easier to reason about, but sometimes we need the environment model because we need mutable state.

Why, after pressing semicolon, is the program back in deep recursion?

I'm trying to understand the semicolon functionality.
I have this code:
del(X,[X|Rest],Rest).
del(X,[Y|Tail],[Y|Rest]) :-
    del(X,Tail,Rest).
permutation([],[]).
permutation(L,[X|P]) :- del(X,L,L1), permutation(L1,P).
It's a simple predicate to show all permutations of a given list.
I used the built-in graphical debugger in SWI-Prolog because I wanted to understand how it works, and I understand the first case, which returns the list given as the argument. Here is the diagram I made for better understanding.
But I don't get it for the next solution. When I press the semicolon it doesn't start at the place where it ended; instead it starts in some deep recursion where L=[] (like in step 9). I don't get it: didn't the recursion end earlier? It had to come out of the recursion to return the answer, and after the semicolon it's deep in recursion again.
Could someone clarify that for me? Thanks in advance.
One analogy that I find useful in demystifying Prolog is that Backtracking is like Nested Loops: when the innermost loop's variables' values are all found, the looping is suspended, the variables' values are reported, and then the looping is resumed.
As an example, let's write down a simple generate-and-test program to find all pairs of natural numbers above 0 that sum up to a prime number. Let's assume is_prime/1 is already given to us.
We write this in Prolog as
above(0, N), between(1, N, M), Sum is M+N, is_prime(Sum).
We write this in an imperative pseudocode as
for N from 1 step 1:
for M from 1 step 1 until N:
Sum := M+N
if is_prime(Sum):
report_to_user_and_ask(Sum)
Now when report_to_user_and_ask is called, it prints Sum out and asks the user whether to abort or to continue. The loops are not exited; on the contrary, they are just suspended. Thus all the loop variables' values that got us this far -- and there may be more tests up the loops chain that sometimes succeed and sometimes fail -- are preserved, i.e. the computation state is preserved, and the computation is ready to be resumed from that point, if the user presses ;.
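To make the Prolog snippet self-contained, here is one possible way to define the two non-built-in predicates it uses (above/2 and is_prime/1 are assumptions of the example, and these particular definitions are just illustrative):

% above(+N0, -N): generates N0+1, N0+2, ... on backtracking.
above(N0, N) :-
    N1 is N0 + 1,
    (   N = N1
    ;   above(N1, N)
    ).

% is_prime(+P): naive trial division, enough for a demo.
is_prime(2).
is_prime(P) :-
    P > 2,
    P mod 2 =\= 0,
    \+ has_odd_factor(P, 3).

has_odd_factor(P, F) :-
    F * F =< P,
    (   P mod F =:= 0
    ->  true
    ;   F2 is F + 2,
        has_odd_factor(P, F2)
    ).

With these, the query ?- above(0, N), between(1, N, M), Sum is M+N, is_prime(Sum). reports Sum = 2, then 3, then 5, ... each time ; is pressed, resuming the suspended "loops" exactly as described above.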
I first saw this in Peter Norvig's AI book's implementation of Prolog in Common Lisp. He used mapping (Common Lisp's mapcan which is concatMap in Haskell or flatMap in many other languages) as a looping construct though, and it took me years to see that nested loops is what it is really all about.
Conjunction of goals is expressed as the nesting of the loops; disjunction of goals is expressed as the alternatives to loop through.
A further twist is that the nested loops' structure isn't fixed from the outset. It is fluid: the nested loops of a given loop can be created depending on the current state of that loop, i.e. depending on the current alternative being explored there; the loops are written as we go. In (most of) the languages where such dynamic creation of nested loops is impossible, it can be encoded with recursive function invocations inside the loops. (Here's one example, with some pseudocode.)
If we keep all such loops (created for each of the alternatives) in memory even after they are finished with, what we get is the AND-OR tree (mentioned in the other answer) thus being created while the search space is being explored and the solutions are found.
(non-coincidentally this fluidity is also the essence of "monad"; nondeterminism is modeled by the list monad; and the essential operation of the list monad is the flatMap operation which we saw above. With fluid structure of loops it is "Monad"; with fixed structure it is "Applicative Functor"; simple loops with no structure (no nesting at all): simply "Functor" (the concepts used in Haskell and the like). Also helps to demystify those.)
So, the proper slogan could be Backtracking is like Nested Loops, either fixed, known from the outset, or dynamically-created as we go. It's a bit longer though. :)
Here's also a Prolog example, which "as if creates the code to be run first (N nested loops for a given value of N), and then runs it." (There's even a whole dedicated tag for it on SO, too, it turns out, recursive-backtracking.)
And here's one in Scheme ("creates nested loops with the solution being accessible in the innermost loop's body"), and a C++ example ("create n nested loops at run-time, in effect enumerating the binary encoding of 2^n, and print the sums out from the innermost loop").
There is a big difference between recursion in functional/imperative programming languages and Prolog (and it really became clear to me only in the last 2 weeks or so):
In functional/imperative programming, you recurse down a call chain, then come back up, unwinding the stack, then output the result. It's over.
In Prolog, you recurse down an AND-OR tree (really, alternating AND and OR nodes), selecting a predicate to call on an OR node (the "choicepoint"), from left to right, and calling every predicate in turn on an AND node, also from left to right. An acceptable tree has exactly one predicate returning TRUE under each OR node, and all predicates returning TRUE under each AND node. Once an acceptable tree has been constructed by the very search procedure, we are (i.e. the "search cursor" is) at the rightmost bottommost node.
Success in constructing an acceptable tree also means a solution to the query entered at the Prolog Toplevel (the REPL) has been found: The variable values are output, but the tree is kept (unless there are no choicepoints).
And this is also important: all variables are global in the sense that if a variable X has been passed all the way down the call chain from predicate to predicate to the rightmost bottommost node, and then constrained at the last possible moment by unifying it with 2 for example, X = 2, then the Prolog Toplevel is aware of that without further ado: nothing needs to be passed up the call chain.
If you now press ;, the search doesn't restart at the top of the tree, but at the bottom, i.e. at the current cursor position: the nearest parent OR node is asked for more solutions. This may result in much search until a new acceptable tree has been constructed, and we are at a new rightmost bottommost node. The new variable values are output and you may again enter ;.
This process cycles until no acceptable tree can be constructed any longer, upon which false is output.
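With the permutation/2 program from the question, that cycle looks like this at the toplevel (an SWI-Prolog transcript; the exact solution order follows from the clause order of del/3):

?- permutation([1,2,3], P).
P = [1, 2, 3] ;
P = [1, 3, 2] ;
P = [2, 1, 3] ;
P = [2, 3, 1] ;
P = [3, 1, 2] ;
P = [3, 2, 1] ;
false.

Each ; resumes the search at the current cursor position, and the final false is printed when no acceptable tree remains.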
Note that having this AND-OR as an inspectable and modifiable data structure at runtime allows some magical tricks to be deployed.
There is bound to be a lot of power in debugging tools which record this tree to help the user who gets the dreaded sphinxian false from a Prolog program that is supposed to work. There are now Time Traveling Debuggers for functional and imperative languages, after all...

Why are there no generic operators in Common Lisp?

In CL, we have many operators to check for equality that depend on the data type: =, string-equal, char=, then equal, eql and whatnot, and so on for other data types, and the same for comparison operators. (edit: don't forget to answer about these too please :) Do we have generic <, >, etc.? Can we make them work for other objects?)
However the language has mechanisms to make them generic, for example generic functions (defgeneric, defmethod) as described in Practical Common Lisp. I can very well imagine the same == operator working on integers, strings and characters, at least!
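That mechanism is indeed enough to roll your own. A minimal sketch (== is a name we are inventing here, not a standard CL operator):

(defgeneric == (a b)
  (:documentation "Type-dispatched equality."))

(defmethod == ((a number) (b number))
  (= a b))                         ; numeric comparison: (== 2 2.0) => T

(defmethod == ((a string) (b string))
  (string= a b))

(defmethod == ((a character) (b character))
  (char= a b))

(defmethod == (a b)
  (eql a b))                       ; fallback for everything else

Users can then extend it to their own classes with further defmethods, which is essentially what Python's __eq__ gives you.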
There has been work in that direction: https://common-lisp.net/project/cdr/document/8/cleqcmp.html
I see this as a major frustration, and even a wall, for beginners (of which I am one), especially those of us who come from other languages like Python, where we use one equality operator (==) for every equality check (with the help of objects' methods to make it work on custom types).
I read a blog post today (not a monad tutorial; a great series) pointing this out. The author moved to Clojure, for other reasons too of course, where there is one (or two?) such operators.
So why is it so? Are there any good reasons? I can't even find a third-party library, not even in CL21. edit: CL21 has this sort of generic operators, of course.
In other SO questions I read about performance. First, this won't apply to the little code I'll write, so I don't care; and if you think it matters, do you have figures to make your point?
edit: despite the tone of the answers, it looks like there aren't ;) We discuss it in the comments.
Kent Pitman has written an interesting article that tackles this subject: The Best of intentions, EQUAL rights — and wrongs — in Lisp.
And also note that EQUAL does work on integers, strings and characters. EQUALP also works for lists, vectors, hash tables and other Common Lisp types, but not objects… for some definition of "work". The note at the end of the EQUALP page has a nice answer to your question:
Object equality is not a concept for which there is a uniquely determined correct algorithm. The appropriateness of an equality predicate can be judged only in the context of the needs of some particular program. Although these functions take any type of argument and their names sound very generic, equal and equalp are not appropriate for every application.
Specifically note that there is a trick in my last “works” definition.
A newer library adds generic interfaces to standard Common Lisp functions: https://github.com/alex-gutev/generic-cl/
GENERIC-CL provides a generic function wrapper over various functions in the Common Lisp standard, such as equality predicates and sequence operations. The goal of the wrapper is to provide a standard interface to common operations, such as testing for the equality of two objects, which is extensible to user-defined types.
It does this for equality, comparison, arithmetic, objects, iterators, sequences, hash-tables, math functions,…
So one can define their own + operator, for example.
Yes we do! eq works with all values and it works all the time. It does not depend on the data type at all, and it is exactly what you are looking for: it's like the is operator in Python. All the other ones agree with eq when it is t; however, they tend to also be t for totally different values that have various levels of similarity.
(defparameter *a* "this is a string")
(defparameter *b* *a*)
(defparameter *c* "this is a string")
(defparameter *d* "THIS IS A STRING")
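Evaluated at a typical REPL, these four variables relate as follows (a compiler is allowed to coalesce identical string literals, so the eq result on *a* and *c* is implementation-dependent):

(eq *a* *b*)     ; => T   -- the very same object
(eq *a* *c*)     ; => NIL -- two objects that merely look alike (typically)
(equal *a* *c*)  ; => T
(equal *c* *d*)  ; => NIL -- equal is case-sensitive on strings
(equalp *c* *d*) ; => T   -- equalp compares strings case-insensitively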
All of these are equalp since they carry the same meaning. equalp is perhaps the sloppiest of the equality functions. I don't think 2 and 2.0 are the same, but equalp does. In my mind 2 is 2, while 2.0 is somewhere between 1.95 and 2.04; you see, they are not the same.
equal understands me. (equal *c* *d*) is definitely nil, and that is good. However it returns t for (equal *a* *c*) as well: both are arrays of characters, and each character has the same value, but the two strings are not the same object. They just happen to look the same.
Notice I'm using strings here for every single one of them. We have four equality functions that tell you whether two values have something in common, but only eq tells you whether they are the same object.
None of these are type specific. They work on all types; however they are not generic functions, since they were around long before generic functions were added to the language. You could perhaps make 3-4 generic equality functions, but would they really be any better than the ones we already have?
Fortunately CL21 introduces (more) generic operators; particularly for sequences it defines length, append, setf, first, rest, subseq, replace, take, drop, fill, take-while, drop-while, last, butlast, find-if, search, remove-if, delete-if, reverse, reduce, sort, split, join, remove-duplicates, every, some, map, sum (and some more). Unfortunately the doc isn't great; it's best to look at the sources. Those should work at least for strings, lists and vectors, and define methods on the new abstract-sequence.
see also
https://github.com/cl21/cl21/wiki
https://lispcookbook.github.io/cl-cookbook/cl21.html

How to find Hash/Cipher

Is there any tool or method to figure out what this hash/cipher function is?
I have only a 500-item list of inputs and outputs, plus I know that all of the inputs are numeric and that the output is always a 2-byte-long hexadecimal representation.
Here are some samples:
794352:6657
983447:efbf
479537:0796
793670:dee4
1063060:623c
1063059:bc1b
1063058:b8bc
1063057:b534
1063056:b0cc
1063055:181f
1063054:9f95
1063053:f73c
1063052:a365
1063051:1738
1063050:7489
I looked around and couldn't find any hash this short. Is this a hash folded on itself (with xor maybe)? Or maybe a simple, trivial cipher?
Is there any tool or method for finding the output for other numbers?
(I want to figure this out; my next option would be training a neural network or a regression, so I thought I'd ask before taking any drastic action.)
Edit: The numbers are directory names, and for accessing them, the hex parts are required.
Actually, Wikipedia's page on hashes lists three CRCs and three checksum methods that it could be. It could also be only half the output of some more complex hashing mechanism. Cross your fingers and hope that it's one of the former; hashes are specifically meant to be difficult (if not impossible) to reverse-engineer.
What it's being used for should be a very strong hint about whether or not it's more likely to be a checksum/CRC or a hash.
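One cheap way to narrow it down is to brute-force the candidate 16-bit checksums over your known pairs. A sketch in Python (this assumes the input is hashed as its ASCII decimal string, which may well not be how the directory names are actually processed; the CRC parameters shown are the common CCITT ones, and you would sweep poly/init over the other well-known variants):

# Compare known samples against a 16-bit CRC candidate.
samples = {
    "794352": 0x6657,
    "983447": 0xefbf,
    "479537": 0x0796,
}

def crc16(data: bytes, poly=0x1021, init=0xFFFF) -> int:
    """Bitwise CRC-16; vary poly/init/reflection to cover other variants."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

for text, expected in samples.items():
    got = crc16(text.encode("ascii"))
    print(text, format(got, "04x"), "MATCH" if got == expected else "no")

If no common CRC/checksum variant matches across all 500 pairs, that is evidence for the "truncated output of a bigger hash" possibility, which you will not realistically reverse from samples alone.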

prolog recursion

I am making a predicate that will give me a list of all possible elements. In each iteration it gives me an answer, but after the recursion I am only getting the last answer back. How can I make it give back every single answer?
Thank you.
The problem is that I am trying to find all possible distributions of a list into other lists. The code:
addIn(_,[],Result,Result).
addIn(C,[Element|Rest],[F|R],Result):-
member( Members , [F|R]),
sumlist( Members, Sum),
sumlist([Element],ElementLength),
Cap is Sum + ElementLength,
(Cap =< Ca,
append([Element], Members,New)....
By calling test, I am getting back the whole list of possible answers. Now if I try something that will fail, like
bp(3,11,[8,2,4,6,1,8,4],Answer).
it just enters an endless loop. Moreover, if I change the
bp(NB,C,OL,A):-
addIn(C,OL,[[],[],[]],A);
bp(NB,C,_,A).
to AND instead of OR, I get the error:
ERROR: is/2: Arguments are not sufficiently instantiated
I appreciate the help. Thanks a lot, @hardmath.
It sounds like you are trying to write your own version of findall/3, perhaps limited to a special case of an underlying goal. Doing it generally (constructing a list of all solutions to a given goal) in a user-defined Prolog predicate is not possible without resorting to side-effects with assert/retract.
However a number of useful special cases can be implemented without such "tricks". So it would be helpful to know what predicate defines your "all possible elements". [It may also be helpful to state which Prolog implementation you are using, if only so that responses may include links to documentation for that version.]
One important special case is where the "universe" of potential candidates already exists as a list. In that case we are really asking to find the sublist of "all possible elements" that satisfy a particular goal.
findSublist([], _, []).
findSublist([H|T], Goal, [H|S]) :-
    call(Goal, H),    % invoke the passed-in goal on H
    !,
    findSublist(T, Goal, S).
findSublist([_|T], Goal, S) :-
    findSublist(T, Goal, S).
Prolog lets you pass the name of a predicate around as an "atom" and invoke it with call/2, as above. But if you have a specific goal in mind, you can leave out the middle argument and just hardcode your particular condition into the middle clause of a similar implementation.
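For example, using the built-in type test integer/1 as the goal:

?- findSublist([1, a, 2, b, 3], integer, S).
S = [1, 2, 3].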
Added in response to code posted:
I think I have a glimmer of what you are trying to do. It's hard to grasp because you are not going about it in the right way. Your predicate bp/4 has a single recursive clause, variously attempted using either AND or OR syntax to relate a call to addIn/4 to a call to bp/4 itself.
Apparently you expect wrapping bp/4 around addIn/4 in this way will somehow cause addIn/4 to accumulate or iterate over its solutions. It won't. It might help you to see this if we analyze what happens to the arguments of bp/4.
You are calling the formal arguments bp(NB,C,OL,A) with simple integers bound to NB and C, with a list of integers bound to OL, and with A as an unbound "output" Answer. Note that nothing is ever done with the value NB, as it is not passed to addIn/4 and is passed unchanged to the recursive call to bp/4.
Based on the variable names used by addIn/4 and supporting predicate insert/4, my guess is that NB was intended to mean "number of bins". For one thing you set NB = 3 in your test/0 clause, and later you "hardcode" three empty lists in the third argument in calling addIn/4. Whatever Answer you get from bp/4 comes from what addIn/4 is able to do with its first two arguments passed in, C and OL, from bp/4. As we noted, C is an integer and OL a list of integers (at least in the way test/0 calls bp/4).
So let's try to state just what addIn/4 is supposed to do with those arguments. Superficially addIn/4 seems to be structured for self-recursion in a sensible way. Its first clause is a simple termination condition: when the second argument becomes an empty list, unify the third and fourth arguments, and that gives the "answer" A to its caller.
The second clause for addIn/4 seems to coordinate with that approach. As written it takes the "head" Element off the list in the second argument and tries to find a "bin" in the third argument that Element can be inserted into while keeping the sum of that bin under the "cap" given by C. If everything goes well, eventually all the numbers from OL get assigned to a bin, all the bins have totals under the cap C, and the answer A gets passed back to the caller. The way addIn/4 is written leaves a lot of room for improvement just in basic clarity, but it may be doing what you need it to do.
Which brings us back to the question of how you should collect the answers produced by addIn/4. Perhaps you are happy to print them out one at a time. Perhaps you meant to collect all the solutions produced by addIn/4 into a single list. To finish up the exercise I'll need you to clarify what you really want to do with the Answers from addIn/4.
Let's say you want to print them all out and then stop, with a special case being to print nothing if the arguments being passed in don't allow a solution. Then you'd probably want something of this nature:
newtest :-
    addIn(12, [7, 3, 5, 4, 6, 4, 5, 2], [[],[],[]], Answer),  % cap of 12, three empty bins
    format("Answer = ~w\n", [Answer]),
    fail.
newtest.
This is a standard way of getting predicate addIn/4 to try all possible solutions, and then stop with the "fall-through" success of the second clause of newtest/0.
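If instead you want all of the solutions collected into a single list, the built-in findall/3 mentioned at the start does this same failure-driven enumeration for you:

?- findall(A, addIn(12, [7, 3, 5, 4, 6, 4, 5, 2], [[],[],[]], A), Answers).

Here Answers is unified with the (possibly empty) list of every Answer that addIn/4 can produce.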
(Added) Suggestions about coding addIn/4:
It will make the code more readable and maintainable if the variable names are clear. I'd suggest using Cap instead of C as the first argument to addIn/4 and BinSum when you take the sum of items assigned to a "bin". Likewise Bin would be better where you used Members. In the third argument to addIn/4 (in the head of the second clause) you don't need an explicit list structure [F|R] since you never refer to either part F or R by itself. So there I'd use Bins.
Some of your predicate calls don't accomplish much that you cannot do more easily. For example, your second call to sumlist/2 involves a list with one item, so the sum is just that item, i.e. ElementLength is the same as Element. Here you could replace both calls to sumlist/2 with one such call:
sumlist([Element|Bin],BinSum)
and then do your test comparing BinSum with Cap. Similarly, your call to append/3 just adjoins the single item Element to the front of the list (which I'm calling) Bin, so you could replace what you have called New with [Element|Bin].
You have used an extra pair of parentheses around the last four subgoals (in the second clause for addIn/4). Since AND is implied for all the subgoals of this clause, using the extra pair of parentheses is unnecessary.
The code for insert/4 isn't shown, but it could be a source of some unintended "backtracking" in special cases. The better approach would be to have the first call (currently to member/2) be your only point of indeterminacy, i.e. when you choose one of the bins, do it by replacing it with a free variable that gets unified with [Element|Bin] at the next-to-last step.
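Putting those suggestions together, the whole predicate might look something like this (a sketch only, since insert/4 was never posted; it assumes SWI-Prolog, where select/4 picks a bin and replaces it in one step and sumlist/2 sums a list of numbers):

addIn(_, [], Bins, Bins).
addIn(Cap, [Element|Rest], Bins, Result) :-
    select(Bin, Bins, [Element|Bin], NewBins),  % choose a bin, put Element in it
    sumlist([Element|Bin], BinSum),
    BinSum =< Cap,
    addIn(Cap, Rest, NewBins, Result).

The single select/4 call is now the only choicepoint per item, so on backtracking each Element simply gets tried in the next bin.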
