How do I keep global state, not using dynamic scope, in Emacs Lisp? - global-variables

I know I can create a global variable with dynamic binding with defvar. However, using dynamic scope is not recommended these days.
I also know I can create a customization variable with defcustom.
What if I am writing a program that needs to store some internal state (the program uses lexical binding)? Do I simply do an initial setq when the package is loaded, and that's all? What would be the recommended way, currently?
Thank you!

However, using dynamic scope is not recommended these days.
Not true.
Dynamic scope is not recommended for variables which only need to be lexically-scoped. That is all.
If there is any need (or potential) for the variable to be accessed or set outside of the lexical scope, then that variable should be dynamic.
Using defvar or defcustom is absolutely correct for the uses you're talking about. If it's not something users should touch, use defvar together with the double-hyphen naming convention PREFIX--REST-OF-NAME (where PREFIX is your library's prefix) to indicate that the variable is intended to be "private".
Note that there is no default 'top level' lexical environment in a library, so you can't simply setq a variable and have it be visible to (only) all forms in the same library. Lexical environments are only established for "binding constructs" like let. You could wrap your entire library in a let form, of course; but I've never seen anyone recommend doing that.
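For instance, a minimal sketch along those lines, assuming a hypothetical library prefix mylib (the double hyphen marks the variable as internal state that users and other libraries are not expected to touch):

(defvar mylib--cache nil
  "Internal state kept by mylib across calls.")

(defun mylib-remember (key value)
  "Store VALUE under KEY in mylib's internal state."
  (push (cons key value) mylib--cache))

(defun mylib-recall (key)
  "Return the value previously stored under KEY, or nil."
  (cdr (assoc key mylib--cache)))

A user-facing option would use defcustom with the single-hyphen name instead.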

Related

Special variable in hunchentoot

Currently, I'm developing 2 web-based tools for my own need with hunchentoot.
Before starting hunchentoot, I want to set some special variables with let so their values will be available while hunchentoot is running.
Like:
(let ((*db-path* "my-db-file"))
  (start-hunchentoot))
But once the handlers get invoked, they don't seem to be inside the let anymore, and *db-path* falls back to its global value (which is nil).
At the moment I'm resolving this by writing the let in every handler.
But I want a more general approach, so that I can run both applications with a different *db-path* in one runtime.
Is it possible to set *db-path* in a way that is valid for one instance of hunchentoot and not the other?
The used environment is SBCL 1.2.4 on Debian Jessie.
Around method
Adding db-path as a slot in the acceptor might be a suitable option. However, you can also write an around method for handle-request. Assuming *my-acceptor* is globally bound:
(defmethod hunchentoot:handle-request :around ((acceptor (eql *my-acceptor*)) request)
  (let ((*db-path* "my-db-file"))
    (call-next-method)))
Of course, you don't need to specialize with EQL, you can define your own subclass instead. The advantage of an around method over storing a configuration variable in the class instance is that you keep the benefits of using special variables, i.e. bindings are visible from everywhere in the dynamic scope.
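As a rough sketch of that subclass variant (the class name db-acceptor is just a placeholder, and *db-path* is assumed to be declared special elsewhere):

(defclass db-acceptor (hunchentoot:acceptor)
  ())

(defmethod hunchentoot:handle-request :around ((acceptor db-acceptor) request)
  ;; Rebind the special variable for the dynamic extent of each request.
  (let ((*db-path* "my-db-file"))
    (call-next-method)))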
Here is what the documentation string, visible on Lispdoc, says about handle-request (emphasis mine):
This function is called once the request has been read and a REQUEST object has been created. Its job is to actually handle the request, i.e. to return something to the client. Might be a good place for around methods specialized for your subclass of ACCEPTOR which bind or rebind special variables which can then be accessed by your handlers.
I encourage you to read Hunchentoot's documentation.
Special variables and threads
The behavior you observe is because:
The server is running in another thread.
Dynamic bindings are local to each thread, as explained in the manual:
The interaction of special variables with multiple threads is mostly as one would expect, with behaviour very similar to other implementations.
global special values are visible across all threads;
bindings (e.g. using LET) are local to the thread;
threads do not inherit dynamic bindings from the parent thread
If you make your own threads, you can build a closure which binds the variables as you want. You can also rely on the portable bordeaux-threads library, whose make-thread accepts a list of bindings to establish inside the new thread.
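A minimal sketch of both approaches, reusing the question's start-hunchentoot and assuming *db-path* has been declared special with defvar:

;; Bind inside the closure that becomes the thread's body:
(bt:make-thread
 (lambda ()
   (let ((*db-path* "my-db-file"))
     (start-hunchentoot))))

;; Or let bordeaux-threads establish the binding in the new thread.
;; The variables named in :initial-bindings must be special.
(bt:make-thread
 (lambda () (start-hunchentoot))
 :initial-bindings '((*db-path* . "my-db-file")))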

Is the "define" primitive of Scheme an imperative languages feature? Why or why not?

(define hypot
(lambda (a b)
(sqrt (+ (* a a) (* b b)))))
This is written in the Scheme programming language.
"define" creates a variable and a global binding
lambda creates a procedure
I would like to know whether "define" would be considered an imperative language feature. As far as I know, imperative languages feature static scoping. I think it is an imperative feature, since "define" creates a global binding, and static scoping looks at the global binding for a variable definition, whereas dynamic scoping looks at the most recently established active binding.
Please help me find the correct answer, and explain why or why not!
In a Scheme program, a (define var expr) form is both a declaration and an initialization. Declarations introduce a new name into the scope. Declarations and initializations are present in both imperative and declarative languages.
However, if the same variable is defined twice, then define behaves as an assignment, which belongs to the imperative paradigm.
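For example (a small sketch; whether a duplicate top-level define is allowed varies by implementation, but interactive Schemes typically treat it as an assignment):

(define counter 0)    ; declaration + initialization
(define counter 10)   ; second top-level define: effectively (set! counter 10)
(display counter)     ; prints 10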
You've put your finger on a subtle and contentious issue. There have long been two informal camps on how define should work, which I would label (very imperfectly, and very controversially!) as the static vs. dynamic camps.
The static camp sees define as a non-side-effecting top-level declaration—it's a syntax that simply defines a name in a top-level scope, just like let is a syntax that defines a name in a local scope. A bit more precisely, this camp tends to see the top-level environment as equivalent to a big letrec with all the defines as the bindings, and all "loose" top-level expressions as the body. This is, incidentally, similar to the way that simple compilers work—read the whole program from one or more files, figure out all of the top-level bindings and generate code with knowledge of the whole program's source text.
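To illustrate that reading with a small sketch (hypothetical my-even?/my-odd? procedures), a program made of two top-level defines plus a "loose" expression:

(define (my-even? n) (if (zero? n) #t (my-odd? (- n 1))))
(define (my-odd? n)  (if (zero? n) #f (my-even? (- n 1))))
(display (my-even? 10))

;; ...is read, by the static camp, as one big letrec:
(letrec ((my-even? (lambda (n) (if (zero? n) #t (my-odd? (- n 1)))))
         (my-odd?  (lambda (n) (if (zero? n) #f (my-even? (- n 1))))))
  (display (my-even? 10)))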
The dynamic camp, on the other hand, tends to conceive of the top-level environment as a mutable data structure to which bindings can be added at runtime, and define is then an operation that modifies the top-level environment. This is, incidentally, similar to how simple interactive interpreters work—read definitions interactively from input, one at a time, and incorporate them into the environment as the user provides them.
To give one example, the SLIB library is one that I recall has been criticized for being much too firmly in the "dynamic" camp. If you read Section 1.1 on "features", you see this right from the beginning:
SLIB maintains a list of features supported by a Scheme session. The set of features provided by a session may change during that session.
The documentation for the require form that you use in SLIB to "load" modules continues with this:
Procedure: require feature
If (provided? feature) is true, then require just returns.
Otherwise, if feature is found in the catalog, then the corresponding files will be loaded and (provided? feature) will henceforth return #t. That feature is thereafter provided.
Otherwise (feature not found in the catalog), an error is signaled.
If you read this carefully, you will be struck that it's framing the whole thing as modules being "loaded" at runtime—and not as compile-time linking, which is foreign to the design.
So a "session" is a set of bindings whose keys—not just their values—changes during the runtime of the program. Programs are able to mutate the session with provide and require. They are able to directly observe the mutation with provided?. And it is implied that they can indirectly observe the set of identifiers bound in top-level environment change as a result of require—a call to require causes procedure invocations that would result in a runtime error before its invocation to no longer be so afterwards.
So we can't help but conclude that going by the philosophy of the people who designed this library, define is imperative. But not every Scheme user or implementer shares this philosophy.
First off, Scheme is lexically scoped. define is usually not limited to top-level bindings, as in Racket: it can create bindings within other procedure bodies.
In some implementations define can manipulate state, but only for top-level definitions. Otherwise it acts like let and binds a variable in the local scope. Actually taking advantage of the top-level rebinding programmatically is difficult.
So define doesn't introduce an imperative style into Scheme code. Compare define with set! and its relatives, which modify a variable in whatever environment it is bound, thereby allowing an imperative style in Scheme code.
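A small sketch of the contrast, using a hypothetical make-counter: the internal define is just a local binding (like letrec), while set! performs the actual mutation.

(define (make-counter)
  (define count 0)              ; internal define: a local binding, no global side effect
  (lambda ()
    (set! count (+ count 1))    ; set!: imperative mutation of the captured binding
    count))

(define next (make-counter))
(next) ; => 1
(next) ; => 2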

Localizing global variables

When using the Extended Program Check, I get the following warning:
Do not declare fields and field symbols (variable name) globally.
This is from declaring global data before the selection screen. The obvious solution is that they should be declared locally in a subroutine.
If I decide to do this, the data will now be out of scope for the other subroutines, so I would end up creating something to the effect of a main() function from C or Java. This sounds like a good idea - however, events such as INITIALIZATION are not allowed to be inside of subroutines, meaning that it forces a break in scope.
Observe the sample program below:
REPORT Z_EXAMPLE.

SELECTION-SCREEN BEGIN OF BLOCK upload WITH FRAME TITLE text-H01.
PARAMETERS: p_infile TYPE rlgrap-filename LOWER CASE OBLIGATORY.
SELECTION-SCREEN END OF BLOCK upload.

AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_infile.
  PERFORM main1 CHANGING p_infile.

INITIALIZATION.
  PERFORM main2.

TOP-OF-PAGE.
  PERFORM main3.
...
main1, main2, and main3 cannot, to my knowledge, pass any data to one another without global declarations. If the data is parsed from the uploaded file p_infile in main1, it cannot be accessed in main2 or main3. Aside from omitting events altogether, is there any way to abide by the warning but still let data be passed across events?
There are a variety of techniques - I prefer to code almost everything except for the basic selection screen handling in a separate controller class. The report simply defers to that class and calls its methods. Other than that - it's just a warning that you can ignore if you know what you're doing. Writing a program without any global variable at all will certainly not be practical - however, you should think at least twice before using global variables or attributes in a place where a method parameter would be more appropriate.
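A rough sketch of that approach (the class and method names here are invented for illustration; it assumes p_infile is declared as in the question):

CLASS lcl_controller DEFINITION.
  PUBLIC SECTION.
    METHODS:
      load_file IMPORTING iv_filename TYPE rlgrap-filename,
      write_header,
      show_data.
  PRIVATE SECTION.
    DATA mt_lines TYPE TABLE OF string.
ENDCLASS.

CLASS lcl_controller IMPLEMENTATION.
  METHOD load_file.
    " Parse the uploaded file into mt_lines (details omitted).
  ENDMETHOD.
  METHOD write_header.
    " Emit the page header from state held in the instance.
  ENDMETHOD.
  METHOD show_data.
    " Output mt_lines.
  ENDMETHOD.
ENDCLASS.

DATA go_controller TYPE REF TO lcl_controller.

INITIALIZATION.
  CREATE OBJECT go_controller.

AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_infile.
  go_controller->load_file( iv_filename = p_infile ).

TOP-OF-PAGE.
  go_controller->write_header( ).

The only global left is the single reference to the controller instance; everything else lives in the instance attributes and is passed around via method parameters.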
As @vwegert so rightly said, it's almost impossible to write an ABAP program that doesn't have at least a few global variables (the selection screen and events enforce that, unfortunately).
One approach is to use a controller class, another is to have a main subroutine and have it call other subroutines as required, passing values as required. I tend to favour the latter approach in a lot of cases, if only because it's easier to split the subroutines into logical groupings in separate includes (doing so with classes can sometimes be a little ugly). It really is a matter of approach though, but the key thing is reducing global variables to a minimum - unfortunately too few ABAP developers that I've encountered care about such issues.
Update
@Christian has reminded me that as of ABAP AS 7.02, subroutines are considered obsolete:
Subroutines should no longer be created in new programs for the following reasons:
The parameter interface has clear weaknesses when compared with the parameter interface of methods, such as:
positional parameters instead of keyword parameters
no genuine input parameters in pass by reference
typing is optional
no optional parameters
Every subroutine implicitly belongs to the public interface of its program. Generally this is not desirable.
Calling subroutines externally is critical with regard to the assignment of the container program to a program group in the internal session. This assignment cannot generally be defined as static.
Those are all valid points and I think in light of that, using classes for modularisation is definitely the preferred approach (and from a purely aesthetic point of view, they also "fit" better with the syntax enhancements in 7.02 and later).

Is it possible to determine the calling context (function, symbol) in a Common Lisp function?

There are probably several ways to implement this introspection feature through macros and code walkers, but is there a simpler (possibly implementation-dependent) way? I'd imagine invoking and then releasing the debugger could open access to the frame stack, but that seems like overkill too.
What would be some simpler ideas to try?
Macros can take an &environment parameter that passes in the lexical environment of the calling context. You can then query the lexical environment using these functions: https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node102.html
In particular, check out variable-information and function-information.
I believe there are also implementation-specific ways to get the current lexical environment at run-time.
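A small sketch, assuming SBCL's SB-CLTL2 contrib, which exports the CLtL2 environment-access functions (other implementations expose them from different packages):

(require :sb-cltl2)

(defmacro binding-kind (name &environment env)
  "Expand to the kind of binding NAME has where the macro is used."
  `(quote ,(sb-cltl2:variable-information name env)))

(defvar *special* 1)

(binding-kind *special*)        ; => :SPECIAL

(let ((lexical 2))
  (declare (ignorable lexical))
  (binding-kind lexical))       ; => :LEXICAL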

What is the usefulness of the `access` parameter mode?

There are three 'normal' modes of passing parameters in Ada: in, out, and in out.
But then there's a fourth mode, access… is there anything wherein they're required?
(i.e. something that would otherwise be impossible.)
Now, I do know that the GNAT JVM Ada-compiler makes pretty heavy use of them in the imported [library] specifications. (Also, they could arguably be seen as essential for C/C++ translations.)
One of the primary drivers of the access mode was to work around the restriction that, prior to Ada 2012, function parameters could only be of mode 'in'.
So while there may still be areas where they're an appropriate solution, perhaps in bindings, Ada 2012's relaxation of the allowed function parameter modes to include 'in out' will probably significantly reduce the need for access mode.
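A small sketch of the difference, using a hypothetical Counter type: the first function shows the common pre-2012 workaround, the second is only legal from Ada 2012 on.

package Counters is
   type Counter is record
      Value : Natural := 0;
   end record;

   --  Pre-Ada-2012 workaround: functions could only have 'in' parameters,
   --  so mutation had to go through an access parameter.
   function Next (C : access Counter) return Natural;

   --  Ada 2012: 'in out' is now allowed on function parameters.
   function Next_2012 (C : in out Counter) return Natural;
end Counters;

package body Counters is
   function Next (C : access Counter) return Natural is
   begin
      C.Value := C.Value + 1;
      return C.Value;
   end Next;

   function Next_2012 (C : in out Counter) return Natural is
   begin
      C.Value := C.Value + 1;
      return C.Value;
   end Next_2012;
end Counters;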
Regardless of what other uses there are for them, I rather like using them when coding bindings to C APIs that take pointers (if and only if 0 is not a valid value for that parameter on the C side).
This way on the Ada side I can deal with a nice object rather than a messy error-prone pointer.
Of course you can just specify in the bindings that the parameter is passed by reference, which gets you the same thing.
In my latest project, the only time I've needed to use access so far is when defining my own stream subprograms (Read, Write, 'Class'Output, etc.). These subprograms require a parameter of type not null access Ada.Streams.Root_Stream_Type'Class.
For example:
with Ada.Streams;

package Example is
   type Printable_Type is private;

   procedure Print_Printable
     (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
      Print  : in Printable_Type);

   for Printable_Type'Write use Print_Printable;
private
   type Printable_Type is null record;  --  full declaration elided here
end Example;

Resources