The R7RS report on the programming language Scheme describes two ways of running Scheme code in a Scheme system:
1) A Scheme system can run a program, as described in section 5.1 of the report.
2) A Scheme system can offer a read-eval-print loop (REPL), in which Scheme code is interpreted interactively.
My question is how these two ways of running Scheme code can be reflected inside a Scheme system using only what is included in the R7RS report.
There is the eval library procedure eval, which executes Scheme code inside a running Scheme system, so eval looks like what I am searching for.
However, the only guaranteed-mutable environment I can plug into eval is the environment returned by the interaction-environment repl library procedure. Even with this, I cannot reliably simulate a REPL (point 2 above), because a REPL allows the import form, which the eval procedure does not have to support.
Also, I cannot use the interaction environment for evaluating a full Scheme program, for another reason: it is generally not empty; in particular, it contains all the bindings of (scheme base).
For realising 1) inside a running Scheme system, the eval library procedure environment looks promising, as it allows importing libraries beforehand (which is part of running a program). However, the environment it returns is immutable, so I cannot evaluate defines inside it. A way out would be to wrap the body of the program in a lambda form so that define would create local variables. This also doesn't work, though: inside a lambda form, all defines have to come at the beginning of the body (which is not true for the top level of a Scheme program), and inside a lambda form library bindings can be lexically shadowed, which is not possible for top-level bindings.
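To make the two failure modes concrete, here is a minimal sketch of both attempts, assuming a conforming R7RS-small system with the (scheme eval) and (scheme repl) libraries:

    ;; Attempt at 1): eval in an environment built by `environment`.
    ;; Expressions work, but the environment is immutable:
    (eval '(+ 1 2) (environment '(scheme base)))          ; => 3
    ;; (eval '(define x 42) (environment '(scheme base))) ; error: immutable

    ;; Attempt at 2): the interaction environment is mutable,
    ;; so defines work ...
    (eval '(define x 42) (interaction-environment))
    (eval 'x (interaction-environment))                   ; => 42
    ;; ... but the environment is not empty, and evaluating an
    ;; (import ...) form through eval is not guaranteed to work:
    ;; (eval '(import (scheme char)) (interaction-environment))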
As Scheme is Turing-complete, I can, of course, simulate a Scheme system inside a running Scheme system, but I am wondering whether it is possible using just the eval procedure. One reason for my interest is that eval might be optimized (e.g., by a JIT compiler backend), so using this procedure might give near-native speed, compared to writing a simple interpreter by hand.
R7RS-small is not intended for this sort of reflective implementation. R7RS-large will provide a library that supports user-created mutable environments.
I type the following in a REPL (Clozure Common Lisp):
(defparameter test 1)
The REPL responds with test.
Now I enter:
(format *standard-input* "(defparameter test 2)")
The REPL outputs (defparameter test 2) followed by nil.
But the value of test remains unchanged at 1.
Why is this? Isn't writing to the variable *standard-input* the same as entering the text at the REPL?
How do I achieve the desired evaluation?
Some context:
I'm making a custom frontend for common lisp development using sockets. I need to write to standard input because even though I can evaluate code using eval and read, I cannot debug the code on errors.
For instance, entering 1 to unwind the stack and return to the top level is impossible without writing to standard input (as far as I can tell). I have the output parts figured out.
*standard-input* is an input stream, as its name implies. It's a stream you read from, not one you write to. It may be an output stream as well, but if it is then writing to it is not going to inject strings into the REPL.
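To illustrate the direction mismatch: evaluating a string of code programmatically goes through read and eval, not through writing text to *standard-input*. A minimal sketch in portable Common Lisp:

    ;; Writing "(defparameter test 2)" to a stream does not evaluate it.
    ;; To evaluate a string of code, read a form from it and EVAL it:
    (let ((form (with-input-from-string (stream "(defparameter test 2)")
                  (read stream))))
      (eval form))
    ;; test is now 2 - but this bypasses the REPL loop entirely,
    ;; which is why it does not help with the interactive debugger.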
I'd suggest looking at SLIME or SLY if you want to understand how to have REPLs and debuggers which interact with a running Lisp over streams. In particular, SWANK is probably the interesting bit to understand, or its equivalent for SLY, which is SLYNK (or slynk, not sure of the capitalisation). The implementations of these protocols in various Lisps are not entirely trivial, but the implementations already exist: you don't have to write them. Screen-scraping an interface made for humans to interact with is almost always a terrible approach: it's only reasonable when there is no better way, and in this case there is a better way; in fact there are at least two.
I've got a procedure within a SPARK module that calls the standard Ada.Text_IO.Put_Line.
During proving I get the following warning: warning: no Global contract available for "Put_Line".
I already know how to add the respective data dependency contract to procedures and functions written by myself, but how do I add them to procedures/functions written by others, where I can't edit the source files?
I looked through sections 5.2 and 7.4 of the AdaCore SPARK 2014 user's guide but didn't find an example with a solution to my problem.
This means that the analyzer cannot "see" whether global variables might be affected when this function is called. It therefore assumes this call is not modifying anything (otherwise all other proofs could be refuted immediately). This is likely a valid assumption for your specific example, but it might not be valid on an embedded system, where a custom implementation of Put_Line might do anything.
There are two ways to convey the missing information:
1) The verifier can examine the source code of the function and try to generate Global contracts itself.
2) Global contracts are specified explicitly; see SPARK RM 6.1.4 (http://docs.adacore.com/spark2014-docs/html/lrm/subprograms.html#global-aspects) and the sketch below.
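For subprograms you do control, such a contract is written as an aspect on the declaration. A minimal sketch (the package and variable names are made up):

    package Logging with SPARK_Mode is

       Line_Count : Natural := 0;

       --  Explicit data-dependency contract: the procedure reads and
       --  writes the global Line_Count and touches nothing else.
       procedure Count_Line with
         Global => (In_Out => Line_Count);

    end Logging;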
In this case, the procedure you are calling is part of the run-time system (RTS), and therefore the source is not visible, and you probably cannot/should not change it.
What to do in practice?
Suppressing warnings is almost never a good idea, especially not when you are working on something safety-critical. Usually the code has to be changed until the warning goes away, or some justification process has to start.
If you are serious about the analysis results, I recommend not using such subprograms. If you really need output there, either write your own procedure that replaces the RTS subprogram, or ensure that the subprogram really has no side effects. This is further backed up by what Frédéric has linked: even if the callee has no side effects, you don't know whether it raises an exception for specific inputs (e.g., very long strings).
If you are not so serious about the results, then you can treat this specific one as a warning you can live with.
Wrapper packages for use in development of SPARK applications may be found here:
https://github.com/joakim-strandberg/aida_2012
I think you just can't add SPARK contracts to code you don't own, especially code from the Ada standard library.
About Text_IO, I found something in the reference manual that may be valuable to you.
EDIT
Another solution, besides what Martin said, is to create a wrapper package, as suggested in the book "Building High Integrity Applications with SPARK".
As SPARK requires you to deal with SPARK packages but allows you to depend on a SPARK spec with an Ada body, the solution is to build a SPARK package wrapping your Ada.Text_IO calls.
It might be tedious, as you will have to wrap possible exceptions, possibly define specific types and so on, but this way you'll be able to discharge VCs on your full SPARK package.
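A minimal sketch of such a wrapper (the names are made up; the spec is visible to SPARK, while the body is plain Ada and excluded from analysis):

    --  Spec: analyzable by SPARK; the abstract state Output models
    --  the console, so callers get a usable Global contract.
    package Safe_IO with
      SPARK_Mode,
      Abstract_State => Output
    is
       procedure Put_Line (Item : String) with
         Global => (In_Out => Output);
    end Safe_IO;

    --  Body: excluded from analysis, free to call the run-time library.
    with Ada.Text_IO;

    package body Safe_IO with SPARK_Mode => Off is

       procedure Put_Line (Item : String) is
       begin
          Ada.Text_IO.Put_Line (Item);
       end Put_Line;

    end Safe_IO;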
(define hypot
  (lambda (a b)
    (sqrt (+ (* a a) (* b b)))))
This is the Scheme programming language.
"define" creates a variable and a global binding
lambda creates a procedure
I would like to know whether "define" would be considered an imperative language feature. As far as I know, static scoping is an imperative feature. I think define is an imperative feature, since define creates a global binding, and static scoping looks at global bindings for a variable's definition, whereas dynamic scoping looks at the most recent active binding.
Please help me find the correct answer, and explain why or why not!
In a Scheme program, a (define var expr) form is both a declaration and an initialization. Declarations introduce a new name into a scope. Declarations and initializations are present in both imperative and declarative languages.
However, if the same variable is defined twice, define behaves as an assignment, which belongs to the imperative paradigm.
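For instance (what the second define does at the top level is implementation-dependent; many systems treat it as an assignment):

    (define x 1)  ; declaration + initialization: introduces x
    (define x 2)  ; defining x again: behaves like (set! x 2)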
You've put your finger on a subtle and contentious issue. There have long been two informal camps on how define should work, which I would label (very imperfectly, and very controversially!) as the static vs. dynamic camps.
The static camp sees define as a non-side-effecting top-level declaration—it's a syntax that simply defines a name in a top-level scope, just like let is a syntax that defines a name in a local scope. A bit more precisely, this camp tends to see the top-level environment as equivalent to a big letrec with all the defines as the bindings, and all "loose" top-level expressions as the body. This is, incidentally, similar to the way that simple compilers work—read the whole program from one or more files, figure out all of the top-level bindings and generate code with knowledge of the whole program's source text.
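For example, under this reading a program with mutually recursive top-level definitions and one "loose" expression, such as

    (define (my-even? n) (if (zero? n) #t (my-odd? (- n 1))))
    (define (my-odd? n) (if (zero? n) #f (my-even? (- n 1))))
    (display (my-even? 10))

is, in the static view, conceptually the same program as

    (letrec ((my-even? (lambda (n) (if (zero? n) #t (my-odd? (- n 1)))))
             (my-odd?  (lambda (n) (if (zero? n) #f (my-even? (- n 1))))))
      (display (my-even? 10)))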
The dynamic camp, on the other hand, tends to conceive of the top-level environment as a mutable data structure to which bindings can be added at runtime, and define is then an operation that modifies the top-level environment. This is, incidentally, similar to how simple interactive interpreters work—read definitions interactively from input, one at a time, and incorporate them into the environment as the user provides them.
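In a typical interactive interpreter, that mutability is directly observable (a sketch of a REPL session):

    > (define x 1)                 ; adds a binding for x at runtime
    > (define f (lambda () x))     ; f refers to the top-level x
    > (define x 2)                 ; mutates the environment in place
    > (f)
    2                              ; f sees the new value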
To give one example, the SLIB library is one that I recall has been criticized for being much too firmly in the "dynamic" camp. If you read Section 1.1 on "features", you see this right from the beginning:
SLIB maintains a list of features supported by a Scheme session. The set of features provided by a session may change during that session.
The documentation for the require form that you use in SLIB to "load" modules continues with this:
Procedure: require feature
If (provided? feature) is true, then require just returns.
Otherwise, if feature is found in the catalog, then the corresponding files will be loaded and (provided? feature) will henceforth return #t. That feature is thereafter provided.
Otherwise (feature not found in the catalog), an error is signaled.
If you read this carefully, you will be struck by how it frames the whole thing as modules being "loaded" at runtime, and not as compile-time linking, which is foreign to the design.
So a "session" is a set of bindings whose keys, not just their values, change during the runtime of the program. Programs are able to mutate the session with provide and require. They are able to directly observe the mutation with provided?. And it is implied that they can indirectly observe the set of identifiers bound in the top-level environment change as a result of require: a call to require causes procedure invocations that would have resulted in a runtime error before the call to no longer do so afterwards.
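Concretely, a session can watch its own feature set change ('random is a real SLIB feature, but treat this transcript as illustrative):

    (provided? 'random)   ; => #f   not yet part of the session
    (require 'random)     ;         loads the module at runtime
    (provided? 'random)   ; => #t   the session has been mutated
    (random 10)           ;         a binding that did not exist before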
So we can't help but conclude that going by the philosophy of the people who designed this library, define is imperative. But not every Scheme user or implementer shares this philosophy.
First off, Scheme is lexically scoped. define is usually not limited to top-level bindings; it can also create bindings within other procedure bodies.
In some implementations define can manipulate state, but only for top-level definitions. Otherwise it acts like let and binds a variable in the local scope. Actually taking advantage of the top-level rebinding programmatically is difficult.
So define doesn't introduce an imperative style into Scheme code. Compare define to set! and its relatives, which modify the variable in whatever environment it is bound, thereby allowing an imperative style in Scheme code.
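The contrast in code (a minimal sketch):

    (define counter 0)             ; introduces a new binding
    (set! counter (+ counter 1))   ; mutates an existing binding: imperative

    (define (bump)
      (define step 1)              ; internal define: a local binding, like letrec
      (set! counter (+ counter step)))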
Suppose I wanted to take advantage of Common Lisp's ability to read and execute Common Lisp code, so that my program can execute external code written in Lisp, but I don't trust that code, so I don't want it to have access to the full power of Common Lisp. Is it possible for me to restrict its environment so that it can only see the packages/symbols to which I explicitly give it access, effectively creating a DSL?
To read the code, start by disabling *read-eval* (that stops people injecting execution during parsing, using something like #.(do-evil-stuff)). You probably want to do the reading using a custom readtable that disables most (if not all) reader macros, and with a custom, one-off package, importing only symbols you allow.
Once you've read the user-provided code, you still need to validate that there are no unexpected function/macro references in the code. If you have used a custom package, you should be able to confirm that each symbol falls into one of two classes: "belongs to the custom one-off package" (this is user-supplied stuff) or "explicitly allowed from elsewhere" (you would need this list to construct the custom package).
Once that's been done, you can then evaluate it.
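A minimal sketch of these steps (the package name and the whitelist are made up, the readtable hardening is omitted, and the validation walk is simplified; this is not a complete sandbox):

    ;; A one-off package that inherits nothing; only explicitly
    ;; imported symbols are visible to untrusted code.
    (defpackage :sandbox
      (:use)
      (:import-from :cl #:+ #:- #:* #:/ #:if #:lambda))

    ;; The same symbols, listed for validation (read here as CL symbols).
    (defparameter *allowed* '(+ - * / if lambda))

    (defun read-untrusted (string)
      (let ((*read-eval* nil)                     ; forbid #.(...)
            (*package* (find-package :sandbox)))  ; intern unknowns locally
        (read-from-string string)))

    (defun form-safe-p (form)
      ;; Every symbol must be sandbox-local or explicitly whitelisted.
      (typecase form
        (null   t)
        (cons   (and (form-safe-p (car form)) (form-safe-p (cdr form))))
        (symbol (or (eq (symbol-package form) (find-package :sandbox))
                    (member form *allowed*)))
        (t t)))

    (defun eval-untrusted (string)
      (let ((form (read-untrusted string)))
        (if (form-safe-p form)
            (eval form)
            (error "Untrusted code uses a disallowed symbol."))))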
However, doing this correctly would take a fair bit of care and you really should have someone else have a look at the code and actively try to break out of the sandbox.
Take a look at the section "Reader security" in chapter 4 of Let Over Lambda, which discusses this topic in some depth. In particular, you probably want to set *read-eval* to nil. To address your question regarding restricting access to the environment: this is generally difficult in Common Lisp, as the language is designed to allow access to most pieces of the system in the first place. Maybe you can elaborate the ideas of Let Over Lambda in the direction of whitelisting symbols (in contrast to the blacklisting of macro characters in the linked chapter). I don't think there are any ready-made solutions.