I isolate my data science projects into virtual environments using pipenv. However, running a Jupyter notebook does not pick up the local environment; it uses the default IPyKernel instead. I've seen that you can register a virtual environment from within that environment, but this requires installing the ipykernel package, which itself requires Jupyter!
Is there any way to avoid this and just use a single Jupyter install for all virtual environments?
Generally, you'd install jupyter once and do the following in your virtual environments:
pip install ipykernel
python -m ipykernel install --user
This isn't enough when you're running multiple Python versions.
There's a guide here that tries to address this:
https://medium.com/@henriquebastos/the-definitive-guide-to-setup-my-python-workspace-628d68552e14
It's not 100% failsafe, but it can help you avoid reinstalling jupyter notebook all the time.
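If you register several environments this way, it also helps to give each kernel an explicit name so they stay distinguishable in the notebook UI. These are standard ipykernel install options; the kernel name below is only an example:
pip install ipykernel
python -m ipykernel install --user --name my-venv --display-name "Python (my-venv)"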
I found that there are a few problems when you reinstall Jupyter for each environment separately, i.e. run pip install jupyter jupyterlab in every new environment.
I had multiple issues (with and without conda) where Jupyter would install packages into a different Python environment when you use !pip install a_package_name within a cell. The shell environment still pointed at the non-environment Python, which you can see by comparing the output of !which python with:
import sys
sys.executable
Therefore, when you then tried to import the package, it was not available: pip had installed it into the other Python, while the cells ran on the environment's Python kernel (which had detected the venv directory).
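(Aside: a commonly suggested fix for that mismatch is to route in-cell installs through sys.executable, so pip runs with the kernel's own interpreter; a_package_name is the same placeholder as above:)
import sys
# run pip with the exact interpreter the kernel is using, so the install lands in that environment
!{sys.executable} -m pip install a_package_name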
I found a workaround that I'd appreciate feedback on. I changed pipenv to install virtual environments into the working directory by adding this to .bashrc/.bash_profile:
export PIPENV_VENV_IN_PROJECT=1
Now when opening a Jupyter notebook, I simply tack on the virtual environment's packages to the Python path:
import sys
sys.path.append('./.venv/lib/python3.7/site-packages/')
Is this a terrible idea?
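For what it's worth, the same hack can be written so it doesn't hardcode the Python minor version; this is only a sketch, assuming the in-project .venv layout that PIPENV_VENV_IN_PROJECT creates:
import glob
import sys

# find the in-project virtualenv's site-packages regardless of the Python 3.x minor version
matches = glob.glob('./.venv/lib/python3.*/site-packages')
if matches:
    sys.path.append(matches[0])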
I have a conda virtual environment with Python 3.7.16 and several installed libraries such as 'lifelines'. In the conda console all the installed libraries are shown; however, when I open a Jupyter Notebook in the same environment and try to load, for example, the library 'lifelines', I get the error ModuleNotFoundError: No module named 'lifelines'.
I have searched on Github and Stack Overflow, and tried several solutions such as this one and others, but I still can't import libraries on Jupyter Notebooks that are already installed in the conda environment.
Does anybody know how to solve this issue?
In fact, I needed to install JupyterLab and Notebook in the same environment; otherwise Jupyter was searching for all libraries in the base one.
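In other words, something along these lines at the conda prompt, where my_env stands in for the environment's actual name:
conda activate my_env
conda install jupyterlab notebook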
I have uninstalled and reinstalled Anaconda many times. I have tried:
Installing geopandas directly into the base environment (it hangs and will not install)
Installing geopandas into the geo_env environment (it works, but the environment will not activate in the Jupyter kernel)
(By the way, I have also tried a miniconda environment and rolling back to the Windows 32-bit environment; neither worked.)
Here's where I am now, and I am stumped:
[screenshot: Anaconda Prompt]
I read the part about Linking Jupyter kernelspec to Anaconda Python (issue #2898, https://github.com/jupyter/notebook/issues/2898) and it looks okay; see the screenshot below.
[screenshot: kernel.json]
Which results in this: [screenshot: disconnected kernel]
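For reference, registering an environment's own kernel usually comes down to the commands below; this is a general sketch rather than the exact steps from that issue, with geo_env being the environment name used above:
conda activate geo_env
conda install ipykernel
python -m ipykernel install --user --name geo_env --display-name "Python (geo_env)"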
So I want to avoid using Anaconda. How can I install packages into an ipykernel I made? I have the location; I just don't know how to activate ipykernels. I see the option for creating a new .ipynb file once I'm in the Jupyter interface, but that doesn't help me add the libraries I want to keep isolated on my machine.
You can install packages from inside a Jupyter notebook by putting
!pip install pandas    (replace pandas with your package name)
in a cell and running it.
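As an aside, newer IPython versions also provide a %pip magic, which is designed to install into the environment of the running kernel rather than whatever pip happens to be on the shell path:
%pip install pandas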
I installed PyTorch using Anaconda3 in a conda virtual environment I created named 'torchTest'.
I installed all the modules I need, but the code doesn't work in Jupyter.
I installed torchtext using
1. pip install https://github.com/pytorch/text/archive/master.zip
2. and also pip install torchtext.
Everything I mentioned installed successfully on my macOS machine, but I can't figure out what's wrong with my Jupyter notebook.
After hitting the same issue with torchtext from within my JupyterLab, I opened issues on GitHub against both the jupyterlab and torchtext repositories.
My current solution is to add the Anaconda environment's path to the Python path.
The Anaconda path is usually something like $HOME/anaconda/bin.
You can add it from within Jupyter Lab/Notebook like this:
import sys
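# add the directory containing the environment's packages to the module search path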
sys.path.append("/some/path/to/add")
import torchtext
Here is the big question:
Do I need to explicitly install a library, such as Plotly, in order for my locally hosted notebook to import it?
Yes, you need to have the library installed in your local environment to import it into your Jupyter notebook.
However, you can check whether a package exists from within Jupyter Notebook and also automatically install it if it isn't already available.
You can run pip, as well as other shell commands, from within a notebook cell.
The syntax is as follows: !pip install plotly. Here the ! tells the kernel to run the line as a shell command rather than Python code.
If it's already installed, you'll get a message like: Requirement already satisfied: plotly in /opt/conda/lib/python2.7/site-packages
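A minimal sketch of the check-then-install pattern mentioned above, using plotly as the example package (importlib.util.find_spec just checks whether the module can be found without importing it):
import importlib.util
import subprocess
import sys

# install plotly only if it isn't already importable in this kernel's environment
if importlib.util.find_spec("plotly") is None:
    subprocess.check_call([sys.executable, "-m", "pip", "install", "plotly"])

import plotly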