After seeing this post on how to set the start-up folder for Jupyter Notebooks, I looked for how to do so for specific conda environments and haven't found an answer.
Is there a way to open a Jupyter notebook in a different location depending on which conda environment you're activating it from? I'm looking for a solution like the one above, where I could change c.NotebookApp.notebook_dir = '/the/path/to/home/folder/', but in some environment-specific config file.
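For reference, the setting from that post goes in the global ~/.jupyter/jupyter_notebook_config.py (created with jupyter notebook --generate-config), which applies everywhere rather than per environment:

# ~/.jupyter/jupyter_notebook_config.py -- applies globally, not per conda environment
c.NotebookApp.notebook_dir = '/the/path/to/home/folder/'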
I guess an alternative would be to set some macro to activate the environment, cd to the desired folder location for this environment, then run jupyter notebook from that location.
I was able to put together a DOSKEY macro to do the job. I combined this answer, which shows how to set persistent aliases (macros) in the command prompt, with this answer, which shows how to use multiple separate commands in a DOSKEY macro. As a summary (mostly from Argyll's answer in the persistent macro/DOSKEY post above):
Create a file called something like alias.cmd
Insert a macro that activates a conda environment, changes to the desired directory, and runs a Jupyter notebook from that location:
doskey start_myEnv = conda activate myEnv $T cd C:\Users\user\path\to\my\notebooks\ $T jupyter notebook
Run regedit and go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Command Processor
or HKEY_CURRENT_USER\Software\Microsoft\Command Processor if not on Windows 10.
Add a String entry with the name AutoRun with the value set as the full path to the alias.cmd file.
Anytime you open the command prompt, executing start_myEnv will now activate myEnv, change to the folder that relates to that environment, and start a jupyter notebook.
Problem
I have a conda environment with an R installation. My issue is that the conda R runs ~/.Rprofile during startup, which breaks the expectation that conda environments are self-contained. Specifically, I am loading packages in my ~/.Rprofile that are not installed in the conda R (I am using require(), so these only produce warnings). The startup process works the following way:
The first .Rprofile file found on the R startup search path is processed. The search path is (in order): (i) Sys.getenv("R_PROFILE_USER"), (ii) ./.Rprofile, and (iii) ~/.Rprofile. Source: startup package vignette
Goal
Ideally, I would like to alter the third path to a location within the environment directory and somehow do this in the environment yaml during setup, such that I can easily replicate the setup on another device. I realize that this might not work, so a solution that permanently sets R_PROFILE_USER to an environment-specific location would also be appreciated.
Edit:
Since I am using R through rpy2, I don't think I can use the --no-init-file flag.
Quoting from ?.Rprofile, which invokes the man page for Startup,
unless --no-init-file was given, R searches for a user profile, a file of R code. The path of this file can be specified by the R_PROFILE_USER environment variable (and tilde expansion will be performed). If this is unset, a file called ‘.Rprofile’ is searched for in the current directory or in the user's home directory (in that order). The user profile file is sourced into the workspace.
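Since rpy2 only initializes the embedded R the first time it is imported, one possible workaround on the Python side (a sketch, assuming CONDA_PREFIX is set by conda activate and that an environment-specific .Rprofile lives there) is to point R_PROFILE_USER at that file before importing rpy2:

import os

# Redirect R's user profile before rpy2 starts the embedded R;
# ~/.Rprofile will then be ignored for this process.
os.environ["R_PROFILE_USER"] = os.path.join(os.environ["CONDA_PREFIX"], ".Rprofile")

import rpy2.robjects  # embedded R is initialized on first import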
I wrote a bash script to solve this issue. Run it in the project directory containing the environment.yml. It prompts for a name for the new conda environment, then creates and activates it. Subsequently, an empty .Rprofile is created in the environment directory and R_PROFILE_USER is pointed at it. Finally, the environment is reactivated for the change to take effect. Thus, any time R is run from this environment, the newly created .Rprofile is used.
It should be noted that an .Rprofile referenced by R_PROFILE_USER takes precedence over an .Rprofile in the project directory. This might lead to confusion if the user wants to use such a file and is unaware of the setup.
# Prompt for a name, then create and activate the environment from environment.yml
echo "What name do you want to give to the conda environment?"
read -r input
conda env create -n "$input" -f environment.yml
conda activate "$input"
# Create an empty, environment-specific .Rprofile and point R at that file
touch "$CONDA_PREFIX/.Rprofile"
conda env config vars set R_PROFILE_USER="$CONDA_PREFIX/.Rprofile"
# Reactivate so the new environment variable takes effect
conda activate "$input"
I've got 10 jupyter notebooks, each with many unique package dependencies (that conflict), so I've created a different anaconda environment for each notebook. Each notebook relies on the output of the previous one, which I store and read from local csv files.
Right now I am running each jupyter notebook manually (with their own anaconda environment) to get the final result. Is there a way to run a single script that runs the code of all the jupyter notebooks sequentially (with the correct anaconda environment for each one)?
You could do it in Python and use runipy. You just have to install it with:
pip install runipy
An example on how to use it from the docs:
from runipy.notebook_runner import NotebookRunner
from IPython.nbformat.current import read
notebook = read(open("MyNotebook.ipynb"), 'json')
r = NotebookRunner(notebook)
r.run_notebook()
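Note that runipy and the IPython.nbformat.current module it imports are quite dated; on a recent Jupyter installation, a roughly equivalent sketch uses nbformat and nbconvert's ExecutePreprocessor instead (both ship with Jupyter):

import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

# Read the notebook, execute all cells in the current directory, and save the result.
nb = nbformat.read("MyNotebook.ipynb", as_version=4)
ExecutePreprocessor(timeout=600).preprocess(nb, {"metadata": {"path": "."}})
nbformat.write(nb, "MyNotebook_executed.ipynb")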
If you want to run each notebook in a different environment, you can activate each conda environment from a Python script. There are multiple ways to do so; one of them is this:
import subprocess
subprocess.run('source activate environment-name && "enter command here" && source deactivate', shell=True)
Replace the "enter command here" with the command you want to run. You don't need the "source deactivate" at the end of the command, but it's included just to be safe.
This will temporarily activate the Anaconda environment for the duration of the subprocess call, after which the environment will revert back to your original environment. This is useful for running any commands you want in a temporary environment.
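Putting the two ideas together, a minimal driver script could look like the sketch below. The notebook file names, environment names, and the use of conda run with jupyter nbconvert --execute are assumptions about your setup (each environment needs jupyter and nbconvert installed), not something taken from the answer above:

import subprocess

# Hypothetical mapping of notebooks to the conda environments they need, in run order.
pipeline = [
    ("step1.ipynb", "env1"),
    ("step2.ipynb", "env2"),
    # ... one entry per notebook
]

for notebook, env in pipeline:
    # conda run executes the command inside the named environment without
    # activating or deactivating anything in the current shell.
    subprocess.run(
        ["conda", "run", "-n", env,
         "jupyter", "nbconvert", "--to", "notebook", "--execute", "--inplace", notebook],
        check=True,  # stop the pipeline if any notebook fails
    )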
No problem in other directories. Is there an environment variable or something else I need to erase?
Deleted cache file...
OK, I think I need to be much clearer here.
First, the software:
MacOS Catalina 10.15.6
jupyter notebook 6.0.3
Python 3.8.3
IPython 7.16.1
jupyter notebook is installed and runs fine.
jupyter notebook runs just fine in any user directory on the computer except exactly one.
There is nothing obvious in this directory that shouldn't be there. An 'ls -al' shows nothing but some .py files.
I can create a Jupyter notebook in this directory, but the kernel crashes and won't restart. I can rename the directory or rename the notebook, but the behavior persists through everything I have tried to reset, including a cold computer restart. It is reproducible and happens every time.
This behavior is not seen in any other directory.
My question: are there environment variables, or caches stored somewhere other than the (visibly empty) directory, that are responsible for this incredibly annoying behavior, and how can I reset them?
Problem solved: Jupyter Notebook apparently treats some module names as reserved when starting up the kernel. So far I've found that string.py and decorator.py cannot be in the startup directory unless they provide what the real modules of those names export; a local file with one of those names shadows the module the kernel tries to import at startup (the standard-library string module, the decorator package used by IPython), which crashes the kernel.
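If you suspect the same problem in another directory, a small ad hoc check like the one below (not a standard tool, just a sketch) lists local .py files whose names collide with modules that are importable from elsewhere on sys.path:

import pathlib
import sys
from importlib.machinery import PathFinder

# Resolve each local .py file's name with the current directory removed from the
# search path, to see which module the kernel would otherwise import.
cwd = str(pathlib.Path.cwd())
search_path = [p for p in sys.path if p not in ("", cwd)]

for path in sorted(pathlib.Path(".").glob("*.py")):
    spec = PathFinder.find_spec(path.stem, search_path)
    if spec is not None:
        print(f"{path.name} shadows {spec.origin}")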
To start up a kernel:
First, activate your virtual environment, for instance:
conda activate vision
Second, type:
jupyter notebook
as stated here.
I have some passwords and such set in my bash_profile that I want to be able to access from within Jupyter notebooks. They are successfully loaded whenever I use just a Jupyter notebook, but not when I use JupyterLab.
I am using Anaconda, and I double-checked that my JupyterLab and Jupyter Notebook installations are in the same location using which jupyter/jupyter-lab/jupyter-notebook; they all point to my Anaconda bin directory. I also made sure the anaconda3/bin directory is in my bash profile and that the conda environment was activated.
Whenever I inspect os.environ in a notebook running under JupyterLab versus a plain notebook, I expect the same output; however, the Lab instance does not include anything I have manually added to my profile, while a plain notebook does.
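One quick way to compare the two front ends is to dump the kernel's environment from each and diff the files (the file name here is just illustrative):

import json
import os

# Run once in a JupyterLab notebook and once in a classic notebook,
# changing the file name each time, then diff the two files.
with open("lab_env.json", "w") as f:
    json.dump(dict(os.environ), f, indent=2, sort_keys=True)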
Is there some way to tell jupyter notebook what the default conda env should be when creating new notebooks? Launching it on AWS Deep Learning AMI's gives me a pretty long list, but I really only care about one specific env.
If you go to your terminal first and activate the virtual environment:
$ source venv/bin/activate
or
$ conda activate venv
for a conda environment.
And after that step, do the following:
$ jupyter notebook
And when you make a new notebook, it should give you the option of choosing python3/python2; choose the one that serves your purpose. This notebook will use the activated environment. You can check it by importing a library specific to that environment.
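A quick additional check, besides importing an environment-specific library, is to print the interpreter path from inside the new notebook; it should point into the activated environment (e.g. .../envs/venv/bin/python):

import sys

# Should resolve to the python inside the activated environment.
print(sys.executable)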