How do I make time$ work with ctrl+t e in ACL2 and emacs?

When I try to copy a form into my ACL2 shell buffer using ctrl+t e after I have already typed (time$ into that buffer, I get an error saying the form cannot be pasted. How do I change the emacs macro so that I can paste into ACL2 shell buffers that already contain (time$?

Place this at the end of your ~/.emacs file:
(setq *acl2-insert-pats* '(:not ".*%[ ]*$" "[^(]*$[ ]*$" "^$"))

Related

How to stop current execution process in ESS when C-c C-c is unresponsive?

I'm handling a data table with a large amount of text in its fields. When I mistakenly call a command that starts printing it, R freezes or prints everything very slowly, and I then have to kill Emacs and set up all my windows and buffers again, because C-c C-c is unresponsive while the printing is in progress.
Do you know how to handle this without killing the whole working setup?
You could kill just the ESS process with something like:
(defun ess-abort ()
  (interactive)
  (kill-process (ess-get-process)))
(define-key ess-mode-map (kbd "C-c C-a") 'ess-abort)
(define-key inferior-ess-mode-map (kbd "C-c C-a") 'ess-abort)
For example, in the R REPL:
library(ggplot2)
toString(diamonds)
followed by C-c C-a. I haven't tried it on Windows, however.

Vim: execute a command and send the buffer over stdout [duplicate]

This question already has answers here: Redirect ex command to STDOUT in vim (3 answers). Closed 9 years ago.
Here's how you can automate vim in an interesting way:
vim -c '0,$d | r source.txt | 1d | w | q' dest.txt
This uses vim ex commands to erase dest.txt, read source.txt into the buffer, erase the first line (which ends up as a blank line due to the way r works), write to the file (dest.txt), and then quit.
As far as I can tell, this skips loading the entire vim terminal UI, and is conceptually a little like having a vimscript interpreter.
Now I'd love to take this just one little step further to abuse the capabilities of vim: I want a script (as part of an interactive automation shell script) that peers at the currently edited changes of an opened file, which exist in the vim *.swp swapfiles, applies those changes through vim's recover command, and then obtains the output.
Of course it would be perfectly serviceable to use an actual file. Say orig_file.txt is being edited in vim in another terminal; my script could do this each time the swapfile is detected to change:
cp orig_file.txt orig_file_ephemeral.txt
cp .orig_file.txt.swp .orig_file_ephemeral.txt.swp
vim -c 'recover | w | q' orig_file_ephemeral.txt
At this point orig_file_ephemeral.txt shall contain the content of the vim buffer from the other process in which editing is taking place, and we obtained this data without requiring any direct interaction with said process. Which is neat.
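For concreteness, here is a minimal Python sketch of that polling workflow, using the file names from the commands above. It is only an illustration of the idea under the assumptions described here (vim will briefly take over the terminal each time, and the recovery quirks discussed further down are glossed over), not a tested tool:
import shutil
import subprocess
import time
from pathlib import Path

ORIG = Path("orig_file.txt")
SWAP = Path(".orig_file.txt.swp")
EPHEMERAL = Path("orig_file_ephemeral.txt")
EPHEMERAL_SWAP = Path(".orig_file_ephemeral.txt.swp")

last_mtime = None
while True:
    if SWAP.exists():
        mtime = SWAP.stat().st_mtime
        if mtime != last_mtime:
            last_mtime = mtime
            # Copy the file and its swapfile, then let a throwaway vim
            # apply the swapfile via :recover and write the result out.
            shutil.copy2(ORIG, EPHEMERAL)
            shutil.copy2(SWAP, EPHEMERAL_SWAP)
            subprocess.run(
                ["vim", "-c", "recover | w | q", str(EPHEMERAL)],
                check=False,
            )
            print(f"snapshot written to {EPHEMERAL}")
    time.sleep(1)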
Of course, for practical purposes it would probably make more sense to do exactly that and have the primary vim participate in the process. Splitting the script's functionality out into the vim configuration is a downside, but it would be more straightforward conceptually and computationally, since vim already has the buffer contents available for writing, and I believe there is an autocommand we could use (though whether that autocommand runs before or after the swapfile is saved remains to be seen).
Either way, for the sake of completeness I'm curious to know if there exists an ex command to write stuff to the STDOUT of vim. Or if this even makes any sense.
I think it perhaps makes no sense, as STDOUT is bound to be the actual terminal: it is where vim sends its "view" of the UI, the buffer, and everything else. So, for example, if any of the vim -c 'vimscript commands' commands produce vim errors, I'll see vim's terminal output displaying those errors over STDOUT.
Therefore it may only be practical to use a file. But maybe there's some kind of craziness like !tee /dev/fd/3 I could do?
In addition, there is a wrinkle with this roundabout approach: vim presents a Warning: Original file may have been changed message in bright red background text for about a second, which is surely due to renaming the file. I can likely work around that by doing this work inside a sub-directory while keeping the filenames identical.
That's the p command (and where the p in grep comes from):
ex -sc '%p|q' file
This would be a bit like cat file.
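And if the goal is to consume that output from a script rather than a terminal, the same command can be captured with Python's subprocess module, for example (file is the placeholder name from the command above):
import subprocess

# ex in silent mode prints the whole buffer to stdout and exits,
# so the output can simply be captured.
result = subprocess.run(
    ["ex", "-sc", "%p|q", "file"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)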

Making commandArgs comma-delimited or parsing spaces

I'm trying to run R from the command line using command-line arguments, which includes passing in some file paths as arguments for use inside the script. It all works most of the time, but sometimes the paths have spaces in them and R doesn't understand.
I'm running something of the form:
R CMD BATCH --slave "--args inputFile='C:/Work/FolderWith SpaceInName/myinputfile.csv' outputFile='C:/Work/myoutputfile.csv'" RScript.r ROut.txt
R then writes out a file saying:
Fatal error: cannot open file 'C:\Work\FolderWith': No such file or directory
So evidently my single quotes aren't enough to tell R to take everything inside them as the argument value. I'm thinking this means I should find a way to delimit my --args with a comma, but I can't find a way to do this. I'm sure it's simple, but I've not found anything in the documentation.
The current script is very basic:
ca = commandArgs(trailingOnly=TRUE)
eval(parse(text=ca))
tempdata = read.csv(inputFile)
tempdata$total = apply(tempdata[,4:18], 1, sum)
write.csv(tempdata, outputFile, row.names = FALSE)
In case it's relevant, I'm using Windows for this, but it seems like it's not a cmd prompt problem.
Using eval(parse()) is probably not the best and most efficient way to parse command-line arguments. I recommend using a package like optparse to do the parsing for you; parsing command-line arguments is a solved problem, so there is no need to reimplement it. I expect this would also solve your problem, although spaces in path names are a bad idea to begin with.
Alternatively, you could take a very simple approach and pass the arguments like this:
R CMD BATCH --slave arg1 arg2
Where you can retrieve them like:
ca = commandArgs(TRUE)
arg1 = ca[2]
arg2 = ca[3]
This avoids the eval(parse()) call, which I think is causing the issues. Finally, you could try escaping the space like this:
R CMD BATCH --slave "C:/spam\ bla"
You could also give Rscript a try; R CMD BATCH seems to be less favored than Rscript.
As an enhancement of @PaulHimestra's answer, here is how you can use Rscript:
Create a launcher.bat:
echo off
C:
PATH R_PATH;%path%
cd DEMO_PATH
Rscript yourscript.R arg1 arg2
exit
where R_PATH is something like C:/Program Files/R/R-version.
There are many similarities with this post:
R command line passing a filename to script in arguments (Windows)
Also, this question is very OS-specific; my answer applies only to Windows.
Probably what you are looking for is RScript.exe instead of R.exe. RScript has no problem with spaces: path\to\RScript "My script.r".
One tedious thing is searching for or setting the path to RScript, and having to do so again every time R is updated.
Among the convenience scripts I keep in my search path, I wrote a little facility to run RScript without bothering with paths. Just in case it is of interest to someone:
#echo off
setlocal
::Get change to file dir par (-CD must be 1st par)
::================================================
Set CHANGEDIR="F"
If /I %1 EQU -cd (
Set CHANGEDIR="T"
SHIFT
)
::No args given
::=============
If [%1] EQU [] GoTo :USAGE
::Get R path from registry
::========================
:: may check http://code.google.com/p/batchfiles for updates on R reg keys
Call :CHECKSET hklm\software\R-core\R InstallPath
Call :CHECKSET hklm\software\wow6432Node\r-core\r InstallPath
if not defined RINSTALLPATH echo "Error: R not found" & goto:EOF
::Detect filepath when arg not starting with "-"
::==============================================
::Note the space after ARGS down here!!!
Set ARGS=
:LOOP
if [%1]==[] (GoTo :ELOOP)
Set ARGS=%ARGS% %1
::Echo [%ARGS%]
Set THIS=%~1
if [%THIS:~0,1%] NEQ [-] (Set FPATH=%~dp1)
SHIFT
GoTo :LOOP
:ELOOP
::echo %FPATH%
::Run Rscript script, changing to its path if asked
::=================================================
If %CHANGEDIR%=="T" (CD %FPATH%)
Echo "%RINSTALLPATH%\bin\Rscript.exe" %ARGS%
"%RINSTALLPATH%\bin\Rscript.exe" %ARGS%
endlocal
:: ==== Subroutines ====
GoTo :EOF
:USAGE
Echo USAGE:
Echo R [-cd] [RScriptOptions] Script [ScriptArgs]
Echo.
Echo -cd changes to script dir. Must be first par.
Echo To get RScript help on options etc.:
Echo R --help
GoTo :EOF
:CHECKSET
if not defined RINSTALLPATH for /f "tokens=2*" %%a in ('reg query %1 /v %2 2^>NUL') do set RINSTALLPATH=%%~b
GoTo :EOF
The script prints the actual RScript invocation line before running it.
Note that there is an added argument, -cd, to change automatically to the script directory. In fact it is not easy to guess the script path from inside R (and set it with setwd()), in order to call other scripts or read/write data files placed in the same path (or in a relative one).
This (-cd) might make some of your other command args superfluous, as you may find it convenient to call them straight from inside the script.

Is there a way to put comments in a unix command line?

I'm writing a program (in Python) that calls a separate program (via subprocess). I'm finding that in some cases the sub-program gets stuck while running. I can see the sub-program by running top, and if I press "c", I can see its full command line.
What I want is to be able to stick debugging data (like the current thread id, etc.) into the command line when I'm calling the sub-program, so I can further debug my problem.
Is there a way to put comments in command line arguments such that they show up in top?
I can't think of a direct way, but you could write a little shell script to which you pass the actual command to run plus its arguments and the debugging information. It would show up in the top/ps output.
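From the Python side described in the question, calling such a wrapper might look like this (with-label.sh is a hypothetical name for that wrapper script, and sleep stands in for the real sub-program):
import subprocess
import threading

# The wrapper is assumed to drop its first argument and exec the rest;
# the label still shows up in the command line that top/ps display.
label = f"debug:thread-{threading.get_ident()}"
subprocess.run(["./with-label.sh", label, "sleep", "60"])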
Instead of making them comments, put them in the environment. For example, if you have a /proc file system, you could do:
FOO=value cmd
When top shows the pid of the command, do:
tr '\000' '\012' < /proc/pid/environ | grep FOO
to see the value of FOO in the environment of the cmd. If the values contain newlines, you will need to be more careful about the display, something like:
perl -n0E 'say if /FOO/' /proc/pid/environ
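From the Python caller's side, the environment-variable approach could look like this (DEBUG_INFO is an arbitrary variable name, and sleep stands in for the real sub-program):
import os
import subprocess

# Pass the debugging info in the child's environment rather than its argv.
env = dict(os.environ, DEBUG_INFO=f"parent-pid={os.getpid()}")
proc = subprocess.Popen(["sleep", "60"], env=env)
# While it runs:  tr '\000' '\012' < /proc/<pid>/environ | grep DEBUG_INFO
proc.wait()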

How do you pass arguments (i.e. the binary's name) to the Emacs gdb command?

Right now, I have F5 set to start gdb in emacs for me:
(global-set-key [f5] 'gdb)
This switches to the minibuffer, into which I then type a path to an executable. I'd like to find a way to bypass this path typing.
I wrote an executable that looks at the Makefile, parses it, figures out the full path of the executable, and prints it to standard out. Is it possible to call this from my .emacs and then somehow pass the output to the gdb command?
(require 'subr-x)  ; for string-trim

(defun gdb-getpath ()
  "Figure out the path to the executable and launch gdb."
  (interactive)
  ;; shell-command-to-string includes the trailing newline, so trim it
  ;; before building the gdb command line.
  (let ((path (string-trim (shell-command-to-string "/path/to/your/executable"))))
    (gdb (concat "gdb " path))))
(global-set-key [f5] 'gdb-getpath)
