This is a follow-up to "loading a precompiled heap image in Isabelle".
Now I am on Windows. I created a Nominal2 heap image in the standard location:
$HOME/.isabelle/Isabelle2015/heaps/polyml-5.5.2_x86-cygwin
However, I cannot select it for loading in the Theories panel.
I tried to start isabelle jedit -d ... -l ... from a Cygwin bash script, but that did not work. The script contained
#!/bin/bash
isabelle jedit -d /cygdrive/d/phd/thy/Nominal2-Isabelle2015/Nominal -l Nominal2
but it did nothing; jEdit did not come up.
How can I create an executable that automatically loads my prebuilt Nominal2 image? Or, how can I let Isabelle/jEdit know that there is a Nominal2 image in the standard heap location?
UPDATE: I copied the image from the user's home directory to the main heap directory:
in /cygdrive/d/isabelle/Isabelle2015/heaps/polyml-5.5.2_x86-cygwin
$ cp ~/.isabelle/Isabelle2015/heaps/polyml-5.5.2_x86-cygwin/Nominal2 .
and restarted Isabelle/jEdit, but I could not find Nominal2 in the menu of session images.
Instead of trying to assemble heap images manually and moving them around, you should let the system do it. You merely need to tell it where to find session source trees, either via isabelle jedit -d DIR or permanently via a ROOTS file (in an already known session directory).
A good place is $ISABELLE_HOME_USER/ROOTS: just add the directory location (in Isabelle/POSIX notation) on a separate line, and the Isabelle/jEdit logic selector will know the new sessions after a restart.
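For example, a minimal sketch run from the Cygwin terminal that ships with Isabelle (the session directory is the one from the question; adjust it to your setup):

# print the user settings directory (typically ~/.isabelle/Isabelle2015)
isabelle getenv -b ISABELLE_HOME_USER
# append the session directory, in Isabelle/POSIX notation, to the ROOTS file
echo "/cygdrive/d/phd/thy/Nominal2-Isabelle2015/Nominal" >> "$(isabelle getenv -b ISABELLE_HOME_USER)/ROOTS"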
Then you can select a new session, and its heap will be built after the next restart of the application.
I searched the forum but didn't find a post that matches my problem. Maybe there is one and you can point me to it.
My problem is that I want to sync a folder with the command rsync -a -v. The point is that I have 5 different machines. On every machine there is a scratch folder that I want to sync into the folder ~/work_dir/scratch_maschines, and inside the /scratch_maschines folder there should be a folder for maschine_a, maschine_b, and so on.
On the machines it is always the same path: /scratch/my_name. So when I use this command for the first two machines:
rsync -a -v --exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk' --delete sp02:/scratch/my_name ~/work_dir/scratch_maschine01
rsync -a -v --exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk' --delete maschine02:/scratch/my_name ~/work_dir/scratch_maschine02
I get folders scratch_maschine01 and scratch_maschine02 in my working directory, but inside these folders my data is not placed directly; there is first a folder my_name, and that folder contains the data. So my question is: how can I use the rsync command so that the files from the scratch directories go straight into the folder for each machine?
You might want to consider reformulating your commands along the following lines:
START="$(pwd)"
# use an array so the quotes in the patterns are not passed literally to rsync
EXCLUDES=(--exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk')
{ SOURCE="sp02:/scratch/my_name"
  REMOTE="${HOME}/work_dir/scratch_maschine01"
  # SOURCE is remote, so we cannot cd into it; the trailing slash below tells
  # rsync to copy its contents rather than the directory itself
  rsync --recursive -v --delete "${EXCLUDES[@]}" "${SOURCE}/" "${REMOTE}/"
} >"${START}/job.log" 2>"${START}/job.err"
The key elements there are:
the --recursive option, which tells rsync to include all content and subdirectories of the SOURCE directory.
the / behind the ${SOURCE}, which tells rsync to copy the contents of the SOURCE directory rather than the directory itself.
the / behind the ${REMOTE}, which makes the intent explicit that ${REMOTE} is a directory into which the content is deposited; note that rsync creates the final path component if it is missing but fails if the parent directories do not exist, so files cannot silently end up somewhere other than expected.
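Applied to the first machine from the question, the trailing-slash difference looks like this (an illustration only, not part of the commands above):

rsync -a sp02:/scratch/my_name  ~/work_dir/scratch_maschine01    # creates scratch_maschine01/my_name/...
rsync -a sp02:/scratch/my_name/ ~/work_dir/scratch_maschine01    # copies the contents straight into scratch_maschine01/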
The above approach lends itself to a function form that could be placed into a loop with pre-attempt condition checks, with the variable assignments for each destination grouped in a case statement, as sketched below.
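A minimal sketch of that loop form, assuming the host names sp02 and maschine02 from the question (the function name and directory layout are hypothetical):

#!/bin/bash
EXCLUDES=(--exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk')

sync_scratch() {
  local host="$1" dest="$2"
  mkdir -p "${dest}"    # pre-attempt check: make sure the destination exists
  rsync --recursive -v --delete "${EXCLUDES[@]}" "${host}:/scratch/my_name/" "${dest}/"
}

for machine in maschine01 maschine02; do
  case "${machine}" in
    maschine01) host="sp02" ;;
    maschine02) host="maschine02" ;;
  esac
  sync_scratch "${host}" "${HOME}/work_dir/scratch_${machine}"
done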
Using such an approach with meaningful variable names provides a kind of implicit documentation, making the code clearer to someone unfamiliar with it, and serving as a refresher for yourself after a long time away from it.
I try to avoid "~" because I prefer to always enclose variable definitions in double quotes, to avoid issues from paths that include unexpected characters or spaces ("~" is not expanded inside double quotes, which is why ${HOME} is used above). That way, you can be sure your paths are interpreted correctly by the commands in your scripts.
Lastly, I prefer to use the long form of rsync options (and of almost every other command) so that I don't have to consult the manual to translate single-character options when troubleshooting unexpected errors (I have always had a poor memory).
In my own backup command, the only reason the
${PathMirror}${dirC}/
is not encapsulated in single quotes within the double quotes for COM is that I know those variables all evaluate to simple strings which cannot be misinterpreted.
I am spinning up an instance of RStudio Server and I need the working directory of R to be a specific directory. I would also like the file pane in the bottom right corner to point to the same directory. Is there a way to do this? Currently it runs from the home directory of whichever user is running the program. I have tried the --server-working-dir flag, and it does not seem to work. Here is the command I am using:
/usr/lib/rstudio-server/bin/rserver \
--server-daemonize=0 \
--server-user=user \
--server-working-dir=/some/path \
--auth-none=1 \
--auth-minimum-user-id=0
Any help would be useful here.
[edit] Just wanted to clarify that I would like the server to start in this directory. I am building a container that will be deployed multiple times, and I don't want the users to have to set their directories every time it is deployed.
If you want to modify the file pane on the right, you should edit /etc/rstudio/rsession.conf
and add the two lines below:
session-default-working-dir=/some/path
session-default-new-project-dir=/some/path
You can do this by editing the (global) R profile startup script. Here's a step-by-step guide:
1) Run Rscript -e "R.home()" -- this will tell you the location of your R home directory. In my case (Mac) it is /Library/Frameworks/R.framework/Resources
2) Go to /Library/Frameworks/R.framework/Resources/etc -- i.e., $R_HOME/etc
3) sudo touch Rprofile.site if it doesn't exist, then sudo nano Rprofile.site
4) Add the following lines and save:
cat("hi\n")
setwd("/some/path/")
You should avoid overwriting the user's home directory.
Among the [.Rprofile] files, you should edit Rprofile.site only as a last resort, since it acts globally.
Suggested solution:
R reads the "initialization files" at startup in the following order:
Rprofile.site
.Rprofile (located in the current directory).
.Rprofile (in the user's home directory, used only if the previous one is not found).
In your case, if you are planning to log in to the RStudio Server, you will end up in the user's home directory, so I would suggest just editing the [.Rprofile] there. In case the [.Rprofile] is missing, you need to create it.
Add this line to your .Rprofile [in your home directory]:
setwd('/your/path/')
Log out of and back in to your RStudio Server session, and you will notice that the file pane on the right has changed according to what you specified in your .Rprofile.
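Since the question mentions building a container, a hedged sketch of baking this in at build time (assuming the account is called user, as in the rserver command above, and the target directory is /some/path):

# run during the container build, e.g. from a Dockerfile RUN step
echo "setwd('/some/path/')" >> /home/user/.Rprofile
chown user:user /home/user/.Rprofile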
The operating system is AIX. I have done multiple tests by running tail -f commands on text files. Then, from another terminal session, I try to delete the tailed file. I have always been able to delete it, and no problem occurred, but I did not find any formal documentation saying that tail -f does not lock or prevent a file from being deleted. So I would like to know whether there is such formal documentation, and, if the tail command can lock or prevent a file from being deleted, how I can reproduce that use case.
I suspect that the unlink() system call in AIX behaves similarly enough to Linux that the first paragraph in this Linux man page adequately describes it:
unlink deletes a name from the filesystem. If that name was the last
link to a file and no processes have the file open, the file is deleted
and the space it was using is made available for reuse.
When removing large log files that are being tailed (or written to), the disk space isn't freed until all those processes close the file or terminate.
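One way to observe this behaviour (a sketch with arbitrary file names, demonstrating the unlink semantics quoted above):

# terminal 1: create a file and keep it open with tail
seq 1 1000000 > /tmp/demo.log
tail -f /tmp/demo.log

# terminal 2: the delete succeeds while tail is still running
rm /tmp/demo.log    # unlink removes the name, not the open file
df /tmp             # the space is reclaimed only after tail exits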
You can delete or move a file while tail -f is running, but tail will not recreate it if it is deleted; you have to create it manually. Hope this helps.
In Question 2918898, users discussed how to avoid caching because
modules were changing, and solutions focused on reloading. My question is
somewhat different; I want to avoid caching in the first place.
My application runs on Un*x and lives in /usr/local. It imports a
module with some shared code used by this application and another.
It's normally run as an ordinary user, and Python doesn't cache the
module in that case, because it doesn't have write permission for that
system directory. All good so far.
However, I sometimes need to run the application as superuser, and then
it does have write permission and it does cache it, leaving unsightly
footprints in a system directory. Do not want.
So ... any way to tell CPython 3.2 (or later, I'm willing to upgrade)
not to cache the module? Or some other way to solve the problem?
Changing the directory permissions doesn't work; root can still write,
root is all-powerful.
I looked through PEP 3147 but didn't see a way to prevent caching.
I don't recall any way to import code other than import. I suppose I
could read a simple text file and exec it, but that seems inelegant
and bug-prone.
The run-as-root is accomplished by calling the program with sudo in a
shell script, and I can have the shell script delete the cache after the
run, but I'm hoping for something more elegant that doesn't change the
directory's last-modified timestamp.
Implemented solution, based on Wander Nauta's answer:
Since I run the executable as a plain filename, not as python executablename, I went with the environment variable. First, the
sudoers file needs to be changed to allow setting environment
variables:
tom ALL=(ALL) SETENV: NOPASSWD: /usr/local/bkup/bin/mkbkup
Then, the invocation needs to include the variable:
/usr/bin/sudo PYTHONDONTWRITEBYTECODE=true /usr/local/bkup/bin/mkbkup "$@"
You can start python with the -B command-line flag to prevent it from writing cached bytecode.
$ ls
bar.py foo.py
$ cat foo.py
import bar
$ python -B foo.py; ls
bar.py foo.py
$ python foo.py; ls
bar.py foo.py __pycache__
Setting the PYTHONDONTWRITEBYTECODE environment variable to a non-empty string, or setting sys.dont_write_bytecode to True, will have the same effect.
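For example, reusing the foo.py and bar.py files from the listing above, the environment variable behaves like -B:

$ PYTHONDONTWRITEBYTECODE=1 python foo.py; ls
bar.py foo.py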
Of course, I'd say that the benefits in this case (faster loading times for your app, for free) vastly outweigh the perceived unsightliness you were talking about - but if you really want to disable caching, here's how.
Source: man python
I have a bunch of customizations and would like to run my test program in a pristine environment.
Sure, I could use a tiny shell script to wrap the program and pass arguments along, but it would be cool and useful if I could invoke a pre- (and possibly post-) script only for commands located under certain subdirectories. The shell I'm using is zsh.
I don't know what you include in your “pristine environment”.
If you want to isolate yourself from the whole system, then maybe chroot is what you're after. You can set up a complete new system, with its own /etc, /bin and so on, but sharing the kernel, networking and other non-filesystem stuff with your running system. Root's cooperation is required (the chroot system call is reserved to root).
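For instance (a sketch; the chroot directory is hypothetical and must already contain its own /bin, /etc, and so on):

sudo chroot /srv/test-root /bin/zsh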
If you want to isolate yourself from your dot files, run the program with a different value for the HOME environment variable:
HOME=~/test-environment /path/to/test-program
HOME=~/test-environment zsh
If this is specifically about zsh's configuration files, you can set the ZDOTDIR environment variable before starting it to tell zsh to run its own dot files from a directory other than $HOME (or zsh --no-rcs to not load any dot file).
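For example (the directory name is arbitrary):

ZDOTDIR=~/test-environment zsh    # zsh reads its dot files from ~/test-environment instead of $HOME
zsh --no-rcs                      # or start zsh without reading any dot files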
If by pristine environment you mean a fully controlled set of environment variables, then the env program does this.
env -i PATH=$PATH HOME=$HOME program args
will run program args with only the environment variables you specified.