CLIPS Error: Illegal use of the module specifier in clipspy - rules

I'm using clipspy. I want to define some module rules that assert module facts. The rule is based on facts from multiple modules, and I get a CLIPS error:
Illegal use of the module specifier
import clips

env = clips.Environment()
env.build("(defmodule module1)")
env.build("(defmodule module2)")
env.build("(deftemplate module1::X (slot A) (slot B))")
env.build("(deftemplate module2::Y (slot C) (slot D))")
env.build("(defrule module2::rule1 (module1::X (A ?A) (B One)) => (printout t hello crlf))")
print(env.find_rule("module2::rule1"))
I learned that if I define a rule within a module that matches a fact template from the same module, I don't get the error:
env.build("(defrule module1::rule1 (X (A ?A) (B One)) => (printout t hello crlf))")
print(env.find_rule("module1::rule1"))
I want to construct a rule using facts from multiple modules. I don't get this error when working with Jess, but I run into it when using clipspy.

In CLIPS, you use the export and import keywords in the defmodule definition to share constructs between modules (section 10.4, Importing and Exporting Constructs, in the Basic Programming Guide). A pattern can't reference a deftemplate in another module using a module specifier (e.g. module1::X).
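For illustration, here is a sketch of the question's example reworked to use export and import instead of a module specifier in the pattern (same templates and slots as above); once module2 imports the deftemplate, the pattern refers to X without any module prefix:
import clips

env = clips.Environment()
# module1 exports its deftemplates so that other modules may import them
env.build("(defmodule module1 (export deftemplate ?ALL))")
env.build("(deftemplate module1::X (slot A) (slot B))")
# module2 imports every deftemplate exported by module1
env.build("(defmodule module2 (import module1 deftemplate ?ALL))")
# the pattern references X directly, with no module specifier
env.build("(defrule module2::rule1 (X (A ?A) (B One)) => (printout t hello crlf))")
print(env.find_rule("module2::rule1"))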

How can I map Frama-C CIL code to the original C statement? And how can I find the documentation of the Frama-C API?

I'm trying to get the program dependence graph (PDG) using Frama-C at the original code's statement level. However, the 'pdg' plug-in in Frama-C prints the PDG at the parsed code's node level.
Since frama-c-gui can highlight the original statement that corresponds to a node in the parsed code, I'm pretty sure there is a mapping between the nodes in the parsed code and the original code's statements. How can I get this mapping? Just the line number in the original code would be fine, too.
Frama-C's GUI presents two views of the code:
The CIL code (C Intermediate Language), often called normalized source code, which corresponds to a pretty-printing of Frama-C's AST, in the top center panel;
And the original source code, on the top right panel.
I'm assuming that by parsed code you are talking about the CIL (normalized) code.
Every element in Frama-C's AST contains a location, which is a pair of positions: the first and last coordinates (line and column) in the original code which correspond to that element (minus a few exceptions, such as generated elements, macro expansions, etc.). Most AST elements have ways to retrieve that location.
In the case of PDG nodes, you can get the associated statements (if any) and then print their location, as in the code below (run with frama-c -pdg -load-module print_pdg.ml <file>):
(* print_pdg.ml *)
let () = Db.Main.extend (fun () ->
    (* iterate over every function in the program *)
    Globals.Functions.iter (fun kf ->
        let pdg = !Db.Pdg.get kf in
        (* for each PDG node, print the location and source of its
           statement, if it has one *)
        !Db.Pdg.iter_nodes (fun n ->
            match PdgTypes.Node.stmt n with
            | None -> ()
            | Some st ->
              Format.printf "%a: %a@."
                Printer.pp_location (Cil_datatype.Stmt.loc st)
                Printer.pp_stmt st)
          pdg))
Note that my example script will print a statement multiple times if there are multiple PDG nodes associated with the same statement.
By default, Printer.pp_location only prints the file name and line of the starting character, but you can make a custom pretty-printer to include the column as well, or the coordinates of the last character.
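For example, a printer along these lines (a sketch, assuming the usual representation of Cil_types.location as a pair of Lexing.position values, as in the Frama-C versions that still have the Db module) would print the start and end coordinates with columns:
(* sketch: print file, start line:column and end line:column of a location *)
let pp_full_location fmt ((p1, p2) : Cil_types.location) =
  let col p = p.Lexing.pos_cnum - p.Lexing.pos_bol in
  Format.fprintf fmt "%s:%d:%d-%d:%d"
    p1.Lexing.pos_fname p1.Lexing.pos_lnum (col p1)
    p2.Lexing.pos_lnum (col p2)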
API and Plug-in Documentation (from the question in the comments)
Some Frama-C plug-ins (Eva, WP, E-ACSL, etc.) have their own manuals, which are available on the Frama-C download page.
There is no specific manual for the Pdg plug-in, but some OCamldoc-generated HTML pages can be obtained from the Frama-C API archive.
However, what most Frama-C plug-in developers prefer is to use the OCaml Merlin plug-in in their favorite editor (Emacs, Vim, etc.) to navigate the code and read the source comments (in the .mli files, for instance).
In Emacs, for instance, C-c C-l on a module/variable name jumps to its definition, and C-c C-a alternates between .ml and .mli files (implementation and interface). Combined with auto-completion for module/function discovery, this provides a form of interactive documentation that many OCaml developers are comfortable with.

Let, flet, macrolet: is there a way to do a "class-let"?

I have a macro which defines a class under certain rules; pseudo-code:
(defvar *all-my-classes* nil)

(defmacro my-macro (param)
  `(if ,param
       (progn
         (defclass class-A () ...)
         (push 'class-A *all-my-classes*))
       (progn
         (defclass class-B () ...)
         (push 'class-B *all-my-classes*))))
I want to test the behaviour of the macro. let is a convenient tool for rebinding special variables. If *all-my-classes* already has a value, I just have to do:
(let ((*all-my-classes* my-new-value)) ; generally `nil` for the test
  (my-macro false))
But I would like to preserve the correspondence between *all-my-classes* and the classes actually defined. Since I want to test all the cases, let us suppose class-A is defined in the current environment, and I want to test whether running (my-macro false) correctly defines class-B.
Since it is just a test, I would like the test to assert that class-B is defined and that class-A is undefined in the current local environment; then, when the test is over, class-B should be undefined in the global environment, and class-A still defined (without any alteration).
Something like this would be best for my use case:
(let ((*all-my-classes* nil))
  (class-let ((class-A nil)  ; or a way to map to a pre-defined
              (class-B nil)) ; empty class temporarily
    (my-macro false)
    (and
     ;; assert that the class is added to the list
     (eql (length *all-my-classes*) 1)
     ;; assert that class-A is not defined
     (null (find-class 'class-A nil))
     ;; assert that class-B is defined
     (find-class 'class-B))))
I've searched to see if I can undefine a class, but that seems to be complex and implementation-dependent, and I want to preserve the current environment.
Restarting Lisp for each test takes too long, and I would prefer a solution that doesn't require loading and unloading packages for each test (I don't know whether that would work, or whether the classes would be garbage-collected when the package is unloaded...).
Thank you for your answers.
I do not think so.
How classes are stored is completely implementation-defined; implementations just need to conform to the MOP (at least as far as it is mandated by the standard). However, the MOP does not prescribe anything that would make the class registry dynamic. In fact, types and class names are specified to be part of the global environment (CLHS section 3.1.1.1), so it would be difficult for a conforming implementation to make them dynamic here.
As you wrote, there is also no specified way to get rid of a class once it is defined.
As a rationale: without this restriction, I think it would be very difficult to provide the kind of optimized runtime that existing implementations have. Class lookup needs to be fast.
Now, to get to the meta question: what are you trying to do? Usually, while code is data, you should not confuse program logic with the programmed logic. What you propose looks like it might be intended to have code represent data. I'd advise thinking about a clean separation and an orthogonal representation.

How can I pass an ML value as an argument to an outer syntax command?

I define an outer syntax command, imake, that writes some code to a file and does some other things. The intended usage is as follows:
theory Scratch
imports Complex_Main "~/Is0/IsS"
begin
imake ‹myfile›
end
The above example will write some contents to the file myfile. myfile should be a path relative to the location of the Scratch theory.
ML ‹val this_path = File.platform_path (Resources.master_directory @{theory})›
I would like to be able to use the value this_path in specifying myfile. The imake command is defined in the import ~/Is0/IsS and currently looks as follows:
ML ‹(* imake *)
val _ = Outer_Syntax.improper_command @{command_spec "imake"} ""
  (Parse.text >>
    (fn path => Toplevel.keep
      (fn _ => Gc.imake path)))›
The argument is parsed using Parse.text, but I need to feed it the path based on the ML value this_path, which is defined later (in the Scratch theory). I searched around a lot, trying to figure out how to use something like Parse.const, but I haven't been able to figure anything out.
So: it's important that I use, in some way, Resources.master_directory @{theory} in Scratch.thy, so that imake gets the folder Scratch is in, which will come from the use of @{theory} in Scratch.
If I'm belaboring this point, it's because in the past I wasted a lot of time getting the wrong folder, because I didn't understand how to use the command above correctly.
How can I achieve this?
Your minimal example uses Resources.master_directory with the parameter @{theory} to define your path. @{theory} refers (statically) to the theory at the point where you write down the antiquotation. This is mostly for interactive use, when you explore stuff. For code that is used in other places, you must use the dynamically passed context and extract the theory from it.
The function Toplevel.keep you use takes a function of type Toplevel.state -> unit as an argument. The Toplevel.state contains a context (see chapter 1 of the Isabelle Implementation Manual), which in turn contains the current theory; with Toplevel.theory_of you can extract the theory from the state. For example, you could use
Toplevel.keep (fn state => writeln
  (File.platform_path (Resources.master_directory (Toplevel.theory_of state))))
to define a command that prints the master_directory for your current theory.
Except in simple cases, it is very likely that you do not only need the theory, but the whole context (which you can get with Toplevel.context_of).
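Putting the pieces together, the imake command itself could resolve its argument against the master directory of the dynamic theory. A sketch (Gc.imake is your function from the question; the path handling via Path.append and Path.explode is my assumption about what you want):
ML ‹
val _ = Outer_Syntax.improper_command @{command_spec "imake"} ""
  (Parse.text >> (fn path => Toplevel.keep (fn state =>
    let
      (* the theory comes from the dynamic toplevel state, not from a
         static @{theory} antiquotation *)
      val dir = Resources.master_directory (Toplevel.theory_of state)
      val full = File.platform_path (Path.append dir (Path.explode path))
    in Gc.imake full end)))
›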
Use setup from preceding (parts of the) theory
In the previous section, I assumed that you always want to use the master directory. For the case where the path should be configurable, Isabelle offers the concept of configuration options.
In your case, you would need to define a configuration option before you declare your imake command:
ML ‹
val imake_path = Attrib.setup_config_string @{binding imake_path}
  (K path)
› (* declares an option imake_path with the value `path` as its default *)
Then, the imake command can refer to this attribute to retrieve the path via Config.get:
Toplevel.keep (fn state =>
  let val path = Config.get (Toplevel.context_of state) imake_path
  in ... end)
The value of imake_path can then be set in Isar (only as a string):
declare [[imake_path="/tmp"]]
or in ML, via Config.map (for updating proof contexts) or Config.map_global (for updating theories). Note that you need to feed the updated context back into the system. Isar has the setup command (which takes an ML expression of type theory -> theory) for that:
setup ‹Config.map_global imake_path (K "/tmp")›
Configuration options are described in detail in the Isar Implementation Manual, section 1.1.5.
Note: This mechanism does not allow you to automatically set imake_path to the master directory for each new theory. You need to set it manually, e.g. by adding
setup ‹
  Config.map_global imake_path
    (K (File.platform_path (Resources.master_directory @{theory})))
›
at the beginning of each theory.
The more general mechanism behind configuration options is context data. For details, see section 1.1, and in particular section 1.1.4, of the Isabelle Implementation Manual. This mechanism is used in a lot of places in Isabelle; the simpset, the configuration of the simplifier, is one example.
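As a flavor of what that looks like, here is a minimal sketch of theory data holding a string (names are hypothetical; this is roughly the kind of slot that Attrib.setup_config_string manages for you):
ML ‹
structure Imake_Path_Data = Theory_Data
(
  type T = string            (* the stored value *)
  val empty = ""             (* value for a fresh theory *)
  val extend = I
  fun merge (a, _) = a       (* resolution when theories are merged *)
)

val set_imake_path = Imake_Path_Data.put
val get_imake_path = Imake_Path_Data.get
›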

Importing scripts at runtime

I'm using Clojure to write a small test framework.
(ns pvt.core.runner
  (:use
    [pvt.tests.deployment]
    [pvt.tests.files]
    [pvt.tests.jms]))

(defn- run-test
  [test-name]
  {test-name (test-and-log test-name)})

(defn- run-all-tests-in-namespace
  [namespace-name]
  (map
    run-test
    (vals (ns-publics (symbol namespace-name)))))

(defn run-all-tests
  [namespace-list]
  (map run-all-tests-in-namespace namespace-list))
My run-all-tests function accepts a list of Clojure scripts, loads all the public functions in those scripts and runs them. This is great, except that I have to actually import those scripts. I call my function like this: (run-all-tests ["pvt.tests.deployment" "pvt.tests.files" "pvt.tests.jms"]), but this only works if I import each of these scripts, as seen at the beginning of my code excerpt. This is not OK, since I have no idea who will call run-all-tests, or what parameters will be used.
I was wondering if there's a way of importing these scripts at runtime. I already know the namespace of each script, so I have all the required information. Can this be done?
Thanks
Yes, you can import Clojure source files from arbitrary file paths using load-file. If the source file contains a namespace declaration, that namespace becomes available to your Clojure application (framework).
Obviously, at a minimum you'll have to write some code that either takes the names of Clojure source files from the command line, or points to directories where the source files are located. Then your code will load the files using load-file.
Your stated problem is that you want to execute some tests from a namespace without knowing the namespace names in advance. There are two ways to achieve this:
1) Use a naming convention, i.e. run your tests for each namespace whose name matches your convention:
user=> (load-file "/home/noahlz/foo.clj")
#<Var@1e955d29: #<core$foo foo.test.core$foo@48a7a9bd>>
user=> (filter #(re-matches #".*\.test\..*" %) (map str (all-ns)))
("foo.test.core")
Using code like the above, you've obtained a list of namespaces upon which you can execute your framework code.
2) Use metadata. Rather than follow a naming convention, require users of your framework to add metadata to their namespaces. This reduces the chance of accidentally testing a namespace that merely happened to follow your naming convention.
(See: What are some uses of Clojure metadata?)
Note that this is the approach used by Clojure's own clojure.test/deftest macro.
Here is an example of finding namespaces with your custom metadata. Your namespace declaration in a source file defining tests:
(ns ^{:doc "some documentation" :my-framework-tests true}
  foo.test.core)
At the REPL, an example of how you can obtain these programmatically:
user=> (load-file "foo.clj")
user=> (filter (fn [[n m]] (:my-framework-tests m))
(map #(vector (str %) (meta %)) (all-ns)))
(["foo.test.core" {:my-framework-tests true, :doc "some documentation"}])
Now you have a list of namespaces that have been flagged as containing tests for your custom test framework. You could even use metadata in the namespace functions to avoid needing a naming convention for those as well.
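For instance, a sketch of that idea, with a hypothetical :my-framework-test flag on the vars themselves:
;; collect only the public vars flagged with :my-framework-test metadata,
;; e.g. (defn ^{:my-framework-test true} check-deployment [] ...)
(defn tests-in-namespace [ns-sym]
  (->> (ns-publics ns-sym)
       vals
       (filter #(:my-framework-test (meta %)))))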
There might be a more concise way to obtain namespaces having certain metadata (if someone knows of it, by all means, comment!)
Another important note: I'm loading arbitrary files to demonstrate that it's possible, but you really should consider following the conventions used by Leiningen, Maven or other build frameworks. For example, see lein-perforate.
Good luck!
Thanks for helping me out. I managed to find what I was looking for. It was actually simpler than I thought. I didn't know that use is actually a function. Now I simply do this:
(defn- run-all-tests-in-namespace
  [namespace-name]
  (use (symbol namespace-name))
  (map
    run-test
    (vals (ns-publics (symbol namespace-name)))))
I create a symbol from the namespace name and then pass it to the use function. Works great!
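One caveat with this approach: map returns a lazy sequence, so if the result of run-all-tests is never consumed, the tests never actually run. Forcing the sequence avoids that, for example:
;; doall forces evaluation so the tests execute even if the result
;; sequence is not otherwise consumed
(doall (run-all-tests ["pvt.tests.deployment" "pvt.tests.files" "pvt.tests.jms"]))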

For a Common Lisp library made of multiple packages, is there a standard way to export the API?

Obviously, the externally visible API is published by exporting symbols. But what if I have multiple packages (say A, B and C) and A's exported symbols are not all meant to be part of the external API, because some of them are needed only by B and C? (Similarly, B exports some symbols for A and C and some for the external API; C is the 'toplevel' package and all its exported symbols are part of the public API. I want to keep things modular and allow A to hide its innards from B and C, so I avoid '::'.)
My solution right now is to re-export from C everything that is meant to be public, and to document that the public API consists only of C's exported symbols: people should stay away from the exported symbols of A and B, under pain of bugs and code broken in the future when internal interfaces change.
Is there a better way?
UPDATE: This is my implementation of my understanding of Xach's answer:
First, let me complete my example. I want to export symbols symbol-a-1 and symbol-a-2 from package a, symbols symbol-b-1 and symbol-b-2 from package b and symbols api-symbol-1 and api-symbol-2 from package c. Only the symbols exported from c are part of the public API.
To start, the definition for a:
(defpackage #:a
  (:use #:cl))
Note that there aren't any exported symbols :-)
A helper macro (uses Alexandria):
(defmacro privately-export (package-name &body symbols)
  `(eval-when (:compile-toplevel :load-toplevel :execute)
     (defun ,(alexandria:format-symbol *package*
                                       "IMPORT-FROM-~a"
                                       (symbol-name package-name)) ()
       (list :import-from
             ,package-name
             ,@(mapcar (lambda (to-intern)
                         `',(intern (symbol-name to-intern) package-name))
                       symbols)))))
Use the macro to 'export privately' :-):
(privately-export :a :symbol-a-1 :symbol-a-2)
Now the definition of b:
(defpackage #:b
  (:use #:cl)
  #.(import-from-a))
... b's 'exports':
(privately-export :b :symbol-b-1 :symbol-b-2)
... c's definition:
(defpackage #:c
  (:use #:cl)
  #.(import-from-a)
  #.(import-from-b)
  (:export :api-symbol-1 :api-symbol-2))
Problems with this approach:
a cannot use symbols from b (without importing b's symbols into a after both packages have been defined);
the package:symbol syntax is basically not usable for symbols exported 'privately' (it's either just symbol or package::symbol).
If A and B are primarily for the implementation of C, you can have C's defpackage form drive things with selective use of :import-from, since you can import symbols that aren't external. Then you can selectively re-export from there.
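A minimal sketch of that arrangement, using the package and symbol names from the question:
;; :import-from works even for symbols that a and b do not export
(defpackage #:c
  (:use #:cl)
  (:import-from #:a #:symbol-a-1 #:symbol-a-2)
  (:import-from #:b #:symbol-b-1 #:symbol-b-2)
  ;; re-export only the public API
  (:export #:api-symbol-1 #:api-symbol-2))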
You could add an additional package, D, that exports all public API symbols, and consider the A, B and C packages private. You could then define all functions and variables of the API package using qualified names, as in
(defun D:blah () ...)
to make it easy to visually spot the definitions of public entry points.
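For illustration, a sketch of that layout (package and symbol names hypothetical):
;; the API package only exports names; it contains no definitions itself
(defpackage #:d
  (:export #:blah))

;; C is one of the private implementation packages
(defpackage #:c
  (:use #:cl))
(in-package #:c)

;; the qualified name d:blah makes the public entry point easy to spot
(defun d:blah ()
  :hello)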
Probably the easiest way is the one proposed by Hans.
You may also want to take a look at Tim Bradshaw's Conduit packages.
