Is it possible to set a default path/directory for a conda environment upon activating it?

I am looking for a method to go to a given directory by default after activating a conda environment. In other words, I am just lazy enough to want to save the extra "cd" command.
I have googled for a while, but most of the answers online explain how to set a default path for storing/creating conda environments, which is not what I am after.

You can do this via an activation script:
## activate 'foo' environment
conda activate foo
## ensure activation script path exists
mkdir -p "${CONDA_PREFIX}/etc/conda/activate.d"
## create script for 'cd bar' (replace 'bar' with your target directory)
echo 'cd bar' > "${CONDA_PREFIX}/etc/conda/activate.d/cd_to_bar.sh"
A more sophisticated approach might also save the current working directory and switch back to it via a deactivation script (see the documentation).
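A minimal sketch of that idea, replacing the one-liner above (CONDA_OLDPWD is just a variable name I picked for illustration, not a conda convention):
## also create the deactivation script directory
mkdir -p "${CONDA_PREFIX}/etc/conda/deactivate.d"
## on activation: remember where we were, then jump to 'bar'
printf 'export CONDA_OLDPWD="$PWD"\ncd bar\n' > "${CONDA_PREFIX}/etc/conda/activate.d/cd_to_bar.sh"
## on deactivation: go back (fall back to $HOME if unset)
printf 'cd "${CONDA_OLDPWD:-$HOME}"\nunset CONDA_OLDPWD\n' > "${CONDA_PREFIX}/etc/conda/deactivate.d/cd_back.sh"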

Related

Prevent R in conda environment from running `~/.Rprofile`

Problem
I have a conda environment with an R installation. My issue is that the conda R runs ~/.Rprofile during startup, which breaks the expectation that conda environments are self-contained. Specifically, my ~/.Rprofile loads packages that are not installed in the conda R (via require(), so this only produces warnings). Profile lookup works the following way:
The first .Rprofile file found on the R startup search path is processed. The search path is (in order): (i) Sys.getenv("R_PROFILE_USER"), (ii) ./.Rprofile, and (iii) ~/.Rprofile. Source: startup package vignette
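The first hit wins, so, as an illustration, pointing R_PROFILE_USER at an empty file suppresses ~/.Rprofile for a single session:
## /dev/null sources as an empty user profile, so ~/.Rprofile is skipped
R_PROFILE_USER=/dev/null R --no-save -q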
Goal
Ideally, I would like to point the third path to a location within the environment directory, and somehow do this in the environment yaml during setup, such that I can easily replicate the setup on another device. I realize that this might not work, so a solution that permanently sets R_PROFILE_USER to an environment-specific location would also be appreciated.
Edit:
Since I am using R through rpy2, I don't think I can use the --no-init-file flag.
Quoting from ?.Rprofile, which invokes the man page for Startup,
unless --no-init-file was given, R searches for a user profile, a file
of R code. The path of this file can be specified by the
R_PROFILE_USER environment variable (and tilde expansion will be
performed). If this is unset, a file called ‘.Rprofile’ is searched
for in the current directory or in the user's home directory (in that
order). The user profile file is sourced into the workspace.
I wrote a bash script to solve this issue. Run it in the project directory containing environment.yml. It prompts for a name for the new conda environment, then creates and activates it. Subsequently, an empty .Rprofile is created in the environment directory and R_PROFILE_USER is set to point at it. Finally, the environment is reactivated for the change to take effect. Thus, any time R is run from this environment, the newly created .Rprofile is used.
It should be noted that an .Rprofile in R_PROFILE_USER takes precedence over an .Rprofile in the project directory. This might lead to confusion if the user wants to use such a file and is unaware of the setup.
## run with 'source' in a conda-initialized shell, since 'conda activate'
## does not take effect in a plain non-interactive subshell
echo "What name do you want to give to the conda environment?"
read -r input
conda env create -n "$input" -f environment.yml
conda activate "$input"
## empty user profile inside the environment directory
touch "$CONDA_PREFIX/.Rprofile"
## R_PROFILE_USER names the profile file itself, not its directory
conda env config vars set R_PROFILE_USER="$CONDA_PREFIX/.Rprofile"
## reactivate so the new environment variable takes effect
conda activate "$input"
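To check that the redirect took effect (test stands in for whatever name you entered):
## should print the path of the env-local .Rprofile
conda activate test
R --no-save -q -e 'Sys.getenv("R_PROFILE_USER")'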

WSL PATH environment variable is incorrect / zsh config not loaded [duplicate]

I am using Ubuntu via WSL 2.0 on Windows 10 and would like to run TeX Live from the Windows command line. To do so, I prepended the TeX Live folder to the path in /etc/environment (I also tried a number of other locations, e.g. $HOME/.bashrc):
C:\Users\scott\Documents>wsl echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/mnt/c/Windows/system32:...
C:\Users\scott\Documents>wsl
scott@SCOTT-PC:/mnt/c/Users/scott/Documents$ echo $PATH
/usr/local/texlive/2020/bin/x86_64-linux:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/mnt/c/Windows/system32:...
Why is there a difference between these two paths? Is it possible to change the first PATH variable?
To be honest, when I first looked at this question, I thought it would be an easy answer. Oh how wrong I was. There are a lot of nuances to how this works.
Let's start with the fairly "easy" part, though. The main difference between the first method and the second:
wsl by itself launches into a login (and interactive) shell
the shell launched with wsl echo $PATH is neither a login shell nor an interactive shell
So the first will source both login scripts (e.g. ~/.profile) and interactive startup scripts (e.g. ~/.bashrc). The second form does not get to source either of these.
You can see this a different way (and get to the solution) with the following commands:
wsl -e bash -c 'echo $PATH'
wsl -e bash -li -c 'echo $PATH'
The -li forces bash to run as a login and interactive shell, thus sourcing all of the applicable startup scripts. And, as @bovquier points out in the comments, single quotes are needed here to prevent PowerShell from interpolating the $ before it gets to Bash. That, or escape it.
You should be able to run TeX Live the same way, just replacing the "echo $PATH" with the startup command you need for TeX Live.
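For instance (assuming pdflatex is the TeX Live binary you're after and that it is on the login-shell PATH):
wsl -e bash -li -c 'pdflatex --version'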
A second option would be to create a script that both adds the path and runs the command, and just launch that script through wsl /path/to/script.sh
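A minimal sketch of such a script, reusing the TeX Live path from the question (adjust it to your install):
#!/usr/bin/env bash
## run-tex.sh: put TeX Live on the PATH, then run whatever command follows
export PATH=/usr/local/texlive/2020/bin/x86_64-linux:$PATH
exec "$@"
You would then invoke it from Windows as, e.g., wsl /path/to/run-tex.sh pdflatex document.tex.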
That said, I honestly don't think that your current login/interactive PATH is coming from /etc/environment. In my testing, at least, /etc/environment has no effect in WSL, and that's to be expected: /etc/environment is only sourced by PAM modules, and since WSL performs no login check, there's no reason for PAM to be invoked by either the wsl or the wsl echo $PATH command.
I'd expect that you still have the PATH setting in ~/.bashrc (or somewhere similar), and that's where the shell is picking it up from at the moment.
While this isn't necessarily critical to understanding the answer, you might also wonder: if /etc/environment isn't used for setting the default (non-login, non-interactive) path in WSL, what is? The answer seems to be that it is hard-coded into the init that starts up WSL. That init is also what appends the Windows path (assuming you don't have that feature disabled in /etc/wsl.conf).
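For reference, that opt-out lives in /etc/wsl.conf (create the file if it does not exist):
[interop]
# when false, WSL no longer appends the Windows PATH to the Linux PATH
appendWindowsPath = false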

Checking package versions in a miniconda environment

I have a conda environment called test and a requirements file, requirements.txt.
What I need to achieve is to check the versions of the installed packages against those in requirements.txt and display which are up to date and which are not.
I need to write a Python script for the task. For example, if requirements.txt has django==2.0.6, I have to check it against the installed version of django in the test environment and report accordingly.
The steps I had in mind are:
Activating the environment inside the script
Running the "conda list" command and saving all the packages along with their versions in a map as key-value pairs
Matching against requirements.txt
How do I activate the environment inside a Python script using "conda activate test" and run the command "conda list"?
conda list accepts the -n argument to specify an environment, like this:
conda list -n test
so there is no need to activate the conda environment first.
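Building on that, here is a shell sketch of the comparison itself (assuming requirements.txt pins versions as name==version; the same logic ports directly to a Python script via subprocess):
## dump installed packages as name=version=build lines
conda list -n test --export > installed.txt
## compare each pinned requirement against the installed version
while IFS= read -r req; do
    pkg=${req%%==*}
    want=${req##*==}
    have=$(grep -i "^${pkg}=" installed.txt | cut -d= -f2)
    if [ "$have" = "$want" ]; then
        echo "$pkg: up to date ($have)"
    else
        echo "$pkg: installed ${have:-none}, required $want"
    fi
done < requirements.txt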

How to change Jupyter start-up folder for different conda environments

After seeing this post on how to set the start-up folder for Jupyter Notebooks, I looked for how to do so for specific conda environments and haven't found an answer.
Is there a way to open a Jupyter notebook in a start-up location that differs depending on which conda environment you are activating it from? I'm looking for a solution like the one above, where I could change c.NotebookApp.notebook_dir = '/the/path/to/home/folder/', but in some environment-specific config file.
I guess an alternative would be to set some macro to activate the environment, cd to the desired folder location for this environment, then run jupyter notebook from that location.
I was able to put together a DOSKEY macro to do the job. I combined this answer, which shows how to set persistent aliases (macros) in Command Prompt, with this answer, which shows how to use multiple separate commands in a DOSKEY macro. As a summary (mostly from Argyll's answer in the persistent macro/DOSKEY post above):
Create a file called something like alias.cmd
Insert a macro that activates a conda environment, changes to the desired folder, and runs a Jupyter notebook from that location:
doskey start_myEnv = conda activate myEnv $T cd C:\Users\user\path\to\my\notebooks\ $T jupyter notebook
Run regedit and go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Command Processor
or HKEY_CURRENT_USER\Software\Microsoft\Command Processor if not on Windows 10.
Add a String entry with the name AutoRun with the value set as the full path to the alias.cmd file.
Anytime you open the command prompt, executing start_myEnv will now activate myEnv, change to the folder that relates to that environment, and start a jupyter notebook.
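An equivalent macro skips the cd step by handing the folder to Jupyter directly via its standard --notebook-dir option:
doskey start_myEnv = conda activate myEnv $T jupyter notebook --notebook-dir=C:\Users\user\path\to\my\notebooks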

How do you create a fake install of a debian package for use in testing?

I have a package that previously only targeted RPM based distros for which I am now building .deb packages for Debian based distros.
The aim is to simulate a test installation from user space that is isolated from the system you are building on. The machine may be multi-user, and you do not want to require root access just to build the software. Many of our tests already simulate the installation directory structure. This is the next step up: simulating an actual installation using the built packages.
For the RPM packages I was able to create test installations using:
WSDIR=/where/I/want/my/tests/to/run
rpmdb --initdb --dbpath "$WSDIR"/rpmdb
rpm --relocate /opt="$WSDIR"/opt --dbpath "$WSDIR"/rpmdb -i <package>.rpm
The equivalent in the Debian world is something like:
dpkg --force-not-root --admindir="$WSDIR"/dpkg --root="$WSDIR"/install --install "$DEB"
However, I am stuck over the equivalent to the rpmdb --initdb step.
Note that I can just unpack the archive using:
dpkg-deb -x "$DEB" "$WSDIR"/install
But I would prefer to be closer to how a real package is installed.
Also, I don't think this will run the preinst and postinst scripts.
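(You can at least inspect the maintainer scripts a package ships by extracting its control area with dpkg-deb, e.g.:
## unpack preinst/postinst etc. into $WSDIR/control for inspection
dpkg-deb -e "$DEB" "$WSDIR"/control
but that still does not run them.)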
Similar questions have suggested using debootstrap to create a chroot environment, but this creates a complete new installation. As well as being overkill, it is too slow for an automated test. I intend to use this for quick tests of the installation package prior to further testing in actual test environments.
My experiments so far:
(cd $WSDIR/dpkg && mkdir alternatives info parts triggers updates)
cp /var/lib/dpkg/status $WSDIR/dpkg/status
have at best resulted in:
dpkg: error: unable to access dpkg status area: No such file or directory
which does not indicate clearly what is wrong.
So how do you create a dpkg admin directory?
Cross posted as https://superuser.com/questions/1271145/how-do-you-create-a-dpkg-admin-directory
Update 24/11/2017
I've tried copying the dpkg dir from an environment created by cowdancer (which uses debootstrap under the hood), and copying the real one from /var/lib/dpkg, but I still get the same error message. So perhaps the error (and/or the --admindir option) doesn't mean quite what I think it means.
Note that:
sudo dpkg --force-not-root --root=$WSDIR/install --admindir=/var/lib/dpkg --install "$DEB"
does work. So it is something to do with the admin dir.
I've also retitled the question, as "How do you create a dpkg admin directory?" is an interesting question but the answer is not necessarily the solution to my problem.
The minimal way to create a dpkg database is something like this:
$ mkdir -p db/{updates,info}
$ touch db/{status,diversions,statoverride}
If you want to use that as non-root, currently the best way is to use fakeroot.
$ mkdir -p fsys
$ PATH=/sbin:/usr/sbin:$PATH fakeroot dpkg --log=/dev/null --admindir=db --instdir=fsys -i pkg.deb
But take into account that passing --root after --admindir or --instdir will reset those paths, which, I think, is the problem you have been having here.
Also, using sudo together with --force-not-root does not make much sense? :) And it is definitely less confined than using just fakeroot. In the near future it will be possible to run dpkg fully unprivileged in some local tree.
I eventually found an answer for this. Thanks to Guillem Jover for some of this.
Pasting a copy of it here:
mkdir -p fake/install
mkdir -p fake/dpkg/info
mkdir -p fake/dpkg/updates
touch fake/dpkg/status
PATH=/sbin:/usr/sbin:$PATH fakeroot dpkg --force-script-chrootless --log="$(pwd)/fake/dpkg.log" --root="$(pwd)/fake" --instdir="$(pwd)/fake" --admindir="$(pwd)/fake/dpkg" --install *.deb
Some points to note:
--force-not-root is not enough. fakeroot is required.
ldconfig and start-stop-daemon must be on the path (hence PATH=/sbin:/usr/sbin:$PATH).
The log file needs to be relocated from the default /var/log/dpkg.log
The order of arguments is significant: if used, --root must come before --instdir and --admindir.
The admindir is supposed to have the installation dir as a prefix.
If the package contains any pre- or post-installation scripts (preinst, postinst), then --force-script-chrootless is required, as these scripts are normally run via chroot(), which gives "operation not permitted" when attempted under fakeroot.
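Afterwards, the private database can be queried just like the system one, for example:
## list what the isolated database thinks is installed
dpkg --admindir="$(pwd)"/fake/dpkg -l
## show the files a package installed under the fake root
dpkg --admindir="$(pwd)"/fake/dpkg -L <package-name>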
For a quick test with trivial dependencies, you can install directly on the system using dpkg -i, then purge the package with dpkg -P and clean up its dependencies with apt-get autoremove.
Another, more secure but slower, solution could be to use the autopkgtest package:
https://people.debian.org/~mpitt/autopkgtest/README.package-tests.html
