Allegro Common Lisp in-package does not work - common-lisp

I need help understanding why Allegro CL does not honor the in-package form in a loaded file. More precisely, I load the LISA environment and would like to make LISA the active package, but after loading the Lisp file containing the in-package form, the prompt still shows CL-USER.

This is the correct behavior.
From the CLHS documentation of LOAD:
load binds *readtable* and *package* to the values they held before loading the file.
So any reassignments to *package* made while loading a file are discarded.
This allows IN-PACKAGE to be used within the file to specify how the code in that file is processed, without having side effects on the user's environment.
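To make LISA the current package at the REPL, evaluate in-package at the REPL itself after the load. A minimal sketch (the file name is hypothetical):
;; IN-PACKAGE inside the loaded file affects only how that file is read
(load "lisa-setup.lisp")
;; switching the REPL's current package is a separate, top-level step
(in-package :lisa)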

Related

How to edit a file from within configure.ac?

I have a configure script to set up some paths for my R package during installation. I wish to edit a file based on some conditions. Is there any way to edit a file from within the configure.ac? It would be great if the solution is provided for all operating systems.
Is there any way to edit a file from within the configure.ac?
configure.ac is not executable, but I suppose you mean that you want the configure script generated from it to edit a file. The configure script is a shell script, and you can cause arbitrary shell code to be included in it, more or less just by including that code at the corresponding point in configure.ac.
The question, then, is how you would automate editing a file with a shell script. There is a variety of alternatives, but sed is high on my list. You will find it on every system that can support Autoconf configure scripts, because such scripts use it internally.
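For example, a portable one-liner in that spirit (the file name and the substitution are hypothetical) that edits a distributed file into its configured form:
# rewrite one line of a packaged file; write to a new file for portability
sed 's/^USE_FOO=0$/USE_FOO=1/' settings.conf.dist > settings.conf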
On the other hand, this sort of thing is one of the main activities of a configure script, in the form of creating files (especially makefiles, but not limited to those) from templates. You should consider building your target file of interest from a template in this way, instead of making custom-programmed edits to a file packaged in your program distribution. This would involve
setting output variables containing the chosen content for the parts of the file that need to be configured;
designating the target file as one for configure to build; and
providing the template, maybe by taking a complete example file and replacing each variable part with a reference to the appropriate @output_variable@.
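A minimal sketch of those three steps (package, variable, and file names are all hypothetical):
# configure.ac
AC_INIT([mypkg], [1.0])
# 1. set an output variable holding the configured content
MYPKG_DATA_DIR="/opt/mypkg/data"
AC_SUBST([MYPKG_DATA_DIR])
# 2. designate src/paths.R as a file for configure to build from src/paths.R.in
AC_CONFIG_FILES([src/paths.R])
AC_OUTPUT

# src/paths.R.in (3. the template)
data_dir <- "@MYPKG_DATA_DIR@"
Running ./configure then generates src/paths.R with the reference replaced by the variable's value.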

.Rprofile not sourced when creating RStudio project

In Windows 7, I have my .Rprofile in a custom location (not R_HOME, not HOME). I informed the OS of this location via the user environment variable R_ENVIRON_USER. There is no other .Rprofile anywhere else.
In RStudio, I set the default working directory (when not in a project) to this same location.
When not in a project, the .Rprofile is properly sourced. However, when inside another project or when creating a new one, the .Rprofile is not sourced.
How do I ensure that my .Rprofile is properly sourced even inside projects (assuming there is no project-specific .Rprofile inside the project directory)? I thought the environment variable would take care of that.
Answer & Update
I had to set the environment variable R_PROFILE_USER and provide the full path and filename of the .Rprofile. In a command prompt, I typed:
SETX R_PROFILE_USER "C:\Users\tspeidel\OneDrive\.Rprofile"
You misunderstand what R_ENVIRON_USER is for; it tells R where to source an (optional) .Renviron file for the user from the location it provides.
It does not affect what the system thinks your home directory is. That is still governed by HOME, which you set on Windows with the same UI. And you can't just substitute R_HOME for it.
You can, however, read very carefully what R tells you about its startup process in help(Startup). It is, as often, somewhat dense and terse, but it does get to the real meat. In short, you want another variable, R_PROFILE_USER, to point to the alternate Rprofile.
None of this has anything to do with RStudio which, after all, just calls R for you (and cannot, as a running process, alter HOME).
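As a quick check (assuming the SETX command above has been run and a fresh R session started), you can verify from inside any RStudio project that R sees the variable and the file:
Sys.getenv("R_PROFILE_USER")                # full path set via SETX
file.exists(Sys.getenv("R_PROFILE_USER"))   # TRUE if the path is valid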

Relative paths in R: how to avoid my computer being set on fire?

A while back I was reading an article about improving project workflow. The advice was not to use setwd or my computer would burn:
If the first line of your R script is
setwd("C:\Users\jenny\path\that\only\I\have")
I will come into your office and SET YOUR COMPUTER ON FIRE 🔥.
I started using the here package and it worked great until I started to schedule scripts using cronR. After asking this question my laptop was again threatened with arson:
If the first line of your #rstats script is wd <- here(), I will come
into your lab and SET YOUR COMPUTER ON FIRE.
Fearing for my laptop's safety I started using the method suggested in the answer to get relative file paths:
wd <- Sys.getenv("HOME")
wd <- file.path(wd, "projects", "my_proj")
This worked for me, but not for the people I was working with, who didn't have the same projects directory. So now I'm confused. What is the safest / best way to get relative file paths so that a project can be portable?
There are quite a few options. My requirements are to source functions/scripts and read/write csv files. Perhaps the rprojroot package is the best bet?
Create an RStudio project and then reference all files with relative paths from the project's root folder. That way, all users will open the project and automatically have the correct working directory.
There are many ways to organize code and data for use with R. Given that the "arsonist" described in the OP has rejected at least two approaches for locating the project files in an R script, the best next step is to ask the arsonist how s/he performs this function, and adjust your code and file structures accordingly.
UPDATE: Since the "arsonists" appear to be someone who writes on Tidyverse.org (see Tidyverse article in OP) and an answer on SO (see additional links in OP), your computer appears to be relatively safe.
If you are sharing code or executing it with batch processes where the "user" is someone other than you, a useful approach is to place the code, data, and configuration under version control, and develop a runbook to explain how others can retrieve the components and execute them on another computer.
As noted in the comments to the OP, there's nothing wrong with here::here() if its use can be made reliable through documentation in a runbook.
I structure all of my R code into Projects within RStudio, which are organized into a gitrepositories directory. All of the projects can be accessed as subdirectories from the gitrepositories directory. If I need to share a project, I make the project accessible to other users on GitHub.
In my R code I reference external files as subdirectories from the project root directory, such as ./data/gen01.csv.
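For instance (a minimal sketch; the path comes from the example above), with the project open in RStudio the working directory is the project root, so the same relative path resolves identically for every collaborator:
# read a data file relative to the project root
gen01 <- read.csv("./data/gen01.csv")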
There are two parts to this question:
how to load data from a relative path, and
how to load code from a relative path
For most use cases (including when invoking tools from a CRON job or similar) the location of the data should either be specified by the user (via command line arguments, standard input or environment variables) or should be relative to the current working directory (getwd() in R).
… Unless the data is a fixed part of the project itself — more on this below.
Loading code from a path that’s relative to other code is simply not supported by base R. For example, source('xyz.r') won’t source xyz.r relative to the file containing the call. It will always try to load it from the current working directory, whatever that happens to be. Which is pretty much never what you want. And as you’ve noticed, the ‘here’ package also doesn’t always work.
R essentially assumes that code is only ever loaded from packages. But packages aren’t suitable for all types of projects, and R has no built-in solution for those other cases. I recommend using ‘box’ modules to solve this. ‘box’ provides a modern module system for R, which means that you can have R projects consisting of multiple code files (and nested sub-projects) without having to wrap them in packages. Loading code from the same relative path inside a module is as simple as
box::use(./xyz)
This always works, as you’d expect from a modern module system, and doesn’t require ‘here’ or similar hacks.
OK, back to the point about data that’s bundled with a project itself. If your project is an R package, you’d use system.file() to load that data. However, this once again doesn’t work for non-package projects. But if you use ‘box’ modules to structure your project, you can use box::file() to load data that’s associated with a module.
Packages such as ‘here’ or ‘rprojroot’, while well-intended, are essentially hacks to work around limitations in R’s handling of non-package code. The proper solution is to make non-package code into a first-class citizen of the R world, and ‘box’ does that.
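A minimal sketch of the module mechanism (file and function names are made up):
# file: utils.r -- a 'box' module in the project directory
#' @export
add_one <- function(x) x + 1

# file: main.r -- in the same directory; works regardless of getwd()
box::use(./utils)
utils$add_one(41)  # 42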
You can check the docs of the RSuite package (https://RSuite.io). It works with script_path, which points to the currently running R script. I use it to build relative paths with the file.path() function.

Load Folder of Scripts in R at startup?

I'm new to R and frankly the amount of documentation is overwhelming, and I haven't been able to find the answer to this question.
I have created a number of .R script files, all stored in a folder that I can access on my server (let's say the folder is \\servername\Paige\myscripts, using the Windows backslash convention).
I know that in R you can call each script individually, for example (using the forward slash required in R)
source(file="//servername/Paige/myscripts/con_mdb.r")
and now this script, con_mdb, is available for use.
If I want to make all the scripts in this folder available at startup, how do I do this?
Briefly:
Use your ~/.Rprofile in the directory found via Sys.getenv("HOME") (or if that fails, in R's own Rprofile.site)
Loop over the contents of the directory via dir() or list.files().
Source each file.
e.g. via this one-liner:
sapply(list.files("//servername/Paige/myscripts/", pattern = "\\.[rR]$", full.names = TRUE), source)
but the real story is that you should not do this. Create a package instead, and load that. Bazillion other questions here on how to build a package. Research it -- it is worth it.
By far the best way is to create a package! But as a first step, you could also create one R script file (collection.r) in your script directory which includes all the other scripts in a relative manner.
In your separate project scripts you can then include only that script with
source(file="//servername/Paige/myscripts/collection.r", chdir = TRUE)
which changes the directory before sourcing. Therefore you would only have to include one file per project.
In the collection file you could use a loop over all files (except collection.r) or simply list them all.
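A minimal sketch of such a collection file (assuming all scripts live next to it):
# collection.r -- sources every other .r file in this directory.
# Because callers pass chdir = TRUE, getwd() here is the script directory.
for (f in setdiff(list.files(pattern = "\\.[rR]$"), "collection.r")) {
  source(f)
}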

Strange symbols in filespec when calling load

I'm trying to get familiar with a large project, possibly, initially written in Allegro Common Lisp. I have come across this piece of code:
(load "epilog:lib;compile.lisp")
Could anyone please explain what it means? Perhaps, if that helps, "epilog" is the name of a package and "lib;compile.lisp" is a file "lib/compile.lisp", or so I understand.
Is this a standard way to do something? And if so, what was the intention of this code? SBCL doesn't recognize the colon as a special character in a file name, i.e. it reports Couldn't load "epilog:lib;compile.lisp": file does not exist.
Logical Pathnames are a standard Common Lisp feature
It's not a symbol, it is a logical pathname.
Common Lisp has a portable logical pathname facility. The purpose is to abstract from physical pathnames like /usr/local/lisp/src/epilog/lib/compile.lisp or lispm:>sources>epilog>lib>compile.lisp.432 or any other type of pathname (just think of the differences between Unix, Mac OS X, Windows, ...).
The purpose is to use one single pathname scheme and one single logical file organization for your software. Regardless on what machine you are and where your files are, all you need is a mapping from the real file organization into the logical Lisp organization.
History
This facility came from a time when there were lots of different operating systems and file systems (DEC VMS, IBM MVS, Multics, Unix, Lisp Machines, MS DOS, Macs, ...). The Lisp Machines were networked and could talk to all kinds of computers - so they learned the native file syntax for all of those. In different laboratories (MIT, Xerox, SRI, ...) there were different machines on the network and different file servers. But the Lisp users wanted to load epilog:src;load.lisp and not remember where the stuff really is: on the local machine? but where? On a file server? But where? So on each network there was a registry for the translations from real file locations to logical pathnames.
So this is like an early 'URI' facility for files: Uniform Resource Identifiers.
The example explained
"epilog:lib;compile.lisp" is the name of a logical pathname.
epilog is the name of the logical host
lib; is the directory path
compile is the file name
lisp is the file type
Logical Pathname Translations
What you need is a translation between logical pathnames and physical pathnames:
Let's say we have a logical host EPILOG with just one translation rule. All files for this Lisp are on this machine under /usr/local/sources/epilog/, so we use some Unix conventions.
CL-USER 40 > (setf (logical-pathname-translations "EPILOG")
                   `(("**;*.*" "/usr/local/sources/epilog/**/*.*")))
(("**;*.*" "/usr/local/sources/epilog/**/*.*"))
The above has only one translation rule:
From EPILOG:**;*.* to /usr/local/sources/epilog/**/*.*.
It maps the logical host and all its subdirectories to a directory in a Unix file system.
One could have more rules, for example (see the sketch after this list):
the documentation might be in a different place
there might be data files on a larger file system
compiled fasl files might be stored somewhere else
it might use logical subdirectories from other physical directories
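A hedged sketch of what such a rule set might look like (all physical paths are made up; the rules are tried in order and the first matching one wins):
(setf (logical-pathname-translations "EPILOG")
      `(("DOC;**;*.*"  "/usr/local/doc/epilog/**/*.*")
        ("DATA;**;*.*" "/bigdisk/epilog/data/**/*.*")
        ("**;*.fasl"   "/var/cache/epilog/fasl/**/*.fasl")
        ("**;*.*"      "/usr/local/sources/epilog/**/*.*")))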
But, again, here we use only one translation rule.
The example explained - part 2
Now we can parse a logical pathname:
CL-USER 41 > (pathname "epilog:lib;compile.lisp")
#P"EPILOG:LIB;COMPILE.LISP"
Let's describe it:
CL-USER 42 > (describe *)
#P"EPILOG:LIB;COMPILE.LISP" is a LOGICAL-PATHNAME
HOST       "EPILOG"
DEVICE     :UNSPECIFIC
DIRECTORY  (:ABSOLUTE "LIB")
NAME       "COMPILE"
TYPE       "LISP"
VERSION    NIL
As you see above, the parts have been parsed from our string.
Now we can also see how a logical pathname translates into a real pathname:
Translate a Logical Pathname to a physical pathname
CL-USER 43 > (translate-logical-pathname "epilog:code;ui;demo.lisp")
#P"/usr/local/sources/epilog/code/ui/demo.lisp"
So, now when you call (load "epilog:lib;compile.lisp"), Lisp will translate the logical pathname and then really load the file from the translated physical pathname.
What we also really want is for Lisp to remember, for all purposes, the logical pathname rather than the physical one. For example, when the file defines a function named FOO, we want Lisp to record the location of FOO's source using the logical pathname. This way you can move a compiled file, a compiled application, or a Lisp image to a different computer, update the translations, and it will immediately be able to locate the source of FOO - if it is available on that machine or somewhere on a network accessible to that machine.
Logical Pathnames need to have a translation
To work with a logical pathname one needs to have a logical pathname translation like the one above. Often the translations are stored in a file by themselves. Define the translation, load it, and then you can use the corresponding logical pathnames to compile and load files. A typical software system using them thus needs a corresponding translation. Sometimes it has to be edited to match your file paths, but sometimes the translations can be computed while the translations file is loaded. You'd have to look at where and how the logical host and the translations are defined.
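A minimal sketch of such a translations file (the file name is hypothetical):
;; epilog-translations.lisp -- load this before loading the system
(setf (logical-pathname-translations "EPILOG")
      `(("**;*.*" "/usr/local/sources/epilog/**/*.*")))

;; afterwards the original form works:
;; (load "epilog:lib;compile.lisp")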
History part 2
On a Symbolics Lisp Machine there is a site-wide directory where systems and logical pathnames can be registered. Loading a system can then look up the system definition using this central directory, and it also usually loads a translations file. Thus the mechanism tells you what the structure of the system is (files, versions, patches, system versions, ...) and where it is located (which can be scattered over several hosts or file systems).
Logical pathnames are not much used in newer software. You will still encounter them in certain older software, especially software that ran on Lisp Machines, where this feature was used extensively throughout the system.
