Jupyter Notebook not loading libraries

I have a conda virtual environment with Python 3.7.16 and several installed libraries such as 'lifelines'. In the conda console all the installed libraries are shown; however, when I open a Jupyter Notebook in the same environment and try to import, for example, the library 'lifelines', I get the error message ModuleNotFoundError: No module named 'lifelines'.
I have searched on GitHub and Stack Overflow, and tried several solutions such as this one and others, but I still can't import libraries in Jupyter Notebooks that are already installed in the conda environment.
Does anybody know how to solve this issue?

In fact, I needed to install JupyterLab and Notebook in the same environment; otherwise Jupyter was searching for all libraries in the base one.
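A minimal sketch of that fix, assuming the environment is named my_env (the name here is hypothetical) and packages come from conda-forge:
conda activate my_env
conda install -c conda-forge notebook jupyterlab
jupyter notebook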

Related

Cannot start jupyter notebook after deleting anaconda

What happened:
I have had Pip and Anaconda installed on my Mac. I tried to tidy up and decided to delete Anaconda since I assumed I only needed Pip for my Python programming.
My question:
I have been using Jupyter Notebook for my university project. Apparently I had installed it using Anaconda. Now that Anaconda is gone, when I try running jupyter notebook, it still looks in the deleted Anaconda directory and throws the following error on the command line:
/Users/username/anaconda3/bin/jupyter: No such file or directory
Furthermore, I now get this warning when installing jupyter using pip3:
WARNING: The scripts jupyter, jupyter-migrate and jupyter-troubleshoot are installed in '/Users/username/Library/Python/3.8/bin' which is not on PATH.
Unfortunately, I barely have an idea of what I am doing when installing anything using the command line. Could you help me out in fixing the issue?
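For reference, that warning means pip3 placed the jupyter launcher scripts in a per-user directory that the shell does not search. A hypothetical fix, assuming the default zsh shell on recent macOS and the exact path from the warning, is to put that directory on PATH:
echo 'export PATH="$HOME/Library/Python/3.8/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc
jupyter notebook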

IJulia fails with precompiling LoadError

I have installed Julia 1.5.3 on Ubuntu, but IJulia fails with a LoadError as shown in the screenshot.
The first path, to the conda environment, is very wrong; this is running on Ubuntu. The path shown below Precompiling is a WINDOWS path.
Where in the scripts can I correct this reference and allow IJulia to install?
It looks like you had an Anaconda installation that is no longer available, yet your paths still point to it. The best thing to do is to install a conda distribution from within Julia; this also works best in practice.
using Pkg
ENV["PYTHON"]=""
Pkg.add("PyCall")
Pkg.build("PyCall")
Pkg.add("Conda")
using Conda
Conda.runconda(`install jupyter --yes`)
Pkg.build("IJulia")
Now your code will work.
using IJulia
notebook(dir=".")
Remember also to try Pluto (Pkg.add("Pluto")), a new generation of notebooks for Julia.
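A quick usage sketch, assuming Pluto installs cleanly (Pluto.run() opens the notebook UI in the browser):
using Pluto
Pluto.run()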

Installed package on Anaconda not accessible in Jupyter Notebook

I installed Anaconda on an external drive (0C707E95707E84EC). I opened Anaconda-Navigator to install r-aer in base (root). So far so good. Then I start Jupyter Notebook from that external drive and am unable to access the aer library.
This all started last week, when I was unable to install aer, and after a circus of affairs I finally ended up updating Anaconda and then having to delete and reinstall it. All I want to do is access aer in Jupyter Notebook. The image below (last image) shows library(raer); I have tried many different command versions, e.g. library("Raer") and variations thereof.
I know that AER is installed in Anaconda. Could this be a path issue?
Calling library in a Jupyter Notebook: for any file on the desktop (Win 10), Jupyter Notebook can see that file without specifying the path, so try putting the file on the desktop.
Also see this; it could help:
Specifying R library path for R kernel in Anaconda Jupyter notebook (Stack Overflow)

How do you add Jupyter Notebook kernels for prior versions of Julia?

I am using a Windows machine and trying to have Jupyter Notebook kernels for multiple versions of Julia (0.7.0 and 1.1.1) because package AWS does not support the latest version, but does support 0.7.0.
I had Julia 1.1.1 installed on my computer first and got something similar to the following error when I tried to install package AWS: https://github.com/JuliaLang/Pkg.jl/issues/792
Then I installed Julia 0.7.0 and was able to install AWS in the Julia 0.7.0 terminal with Pkg.add("AWS") with no problems.
In the Julia 0.7.0 terminal, I installed IJulia again with Pkg.add("IJulia") and restarted my Jupyter notebook instance. Now I'd like to use AWS via Jupyter notebook but when I create a new one, only Julia 1.1.1 appears.
I ended up having success by listing my kernels with jupyter kernelspec list in the terminal, which showed where my existing Julia kernel was located.
>>> jupyter kernelspec list
Available Kernels:
julia-1.1 C:\Users\{%USERNAME%}\AppData\Roaming\jupyter\kernels\julia-1.1
python3 C:\ProgramData\Anaconda3\share\jupyter\kernels\python3
I navigated to the file path listed after julia-1.1
Created a julia-0.7 folder in that same directory
Copied over contents from the julia-1.1 folder
Edited the kernel.json file by replacing every instance of julia-1.1.1 with julia-0.7.0
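For illustration only, the edited kernel.json might look roughly like this; the display name and both file paths below are hypothetical placeholders that depend on where Julia 0.7.0 and the IJulia package actually live on the machine:
{
  "display_name": "Julia 0.7.0",
  "argv": [
    "C:\\path\\to\\Julia-0.7.0\\bin\\julia.exe",
    "-i",
    "--startup-file=yes",
    "--color=yes",
    "C:\\path\\to\\.julia\\packages\\IJulia\\src\\kernel.jl",
    "{connection_file}"
  ],
  "language": "julia"
}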
What I ended up doing seems like a very rudimentary way to solve this problem. I'd like a more elegant way to achieve the same result, similar to adding multiple kernels for different versions of Python (Using both Python 2.x and Python 3.x in IPython Notebook).
Please help, thank you!
You (probably) just need to Pkg.build("IJulia") on the second Julia version.
Since Julia 0.7, the package manager uses separate directories for each version of a package, meaning that, from the package manager's perspective, the package is already installed, and no downloading or building is performed when you install the same version from a different Julia version. The package manager does not know, however, that IJulia needs to be rebuilt for this new Julia version. You can trigger the build manually with Pkg.build("IJulia").
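Concretely, in a Julia 0.7.0 REPL something like this should register the additional kernel:
using Pkg
Pkg.add("IJulia")   # already installed for this version, so effectively a no-op
Pkg.build("IJulia") # regenerates the Jupyter kernelspec for this Julia version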

Do I need to install Jupyter notebook in every virtual environment?

I isolate my data science projects into virtual environments using pipenv. However, running a Jupyter notebook does not access the local environment and uses the default IPyKernel. I've seen that you can register virtual environments from within the environment, but this requires installing the ipykernel package, which itself requires Jupyter!
Is there any way to avoid this and just use a single Jupyter install for all virtual environments?
Generally, you'd install jupyter once and do the following in your virtual environments:
pip install ipykernel
python -m ipykernel install --user
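One caveat: without extra options the kernelspec can overwrite the default python3 one. ipykernel's --name and --display-name options let each environment register under its own name (the names below are only examples):
python -m ipykernel install --user --name my_env --display-name "Python (my_env)"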
This isn't enough when you're running multiple Python versions.
There's a guide here that tries to address this:
https://medium.com/@henriquebastos/the-definitive-guide-to-setup-my-python-workspace-628d68552e14
It's not 100% failsafe, but it can help you avoid reinstalling jupyter notebook all the time.
I found that there are fewer problems when you reinstall Jupyter for each environment separately, i.e. pip install jupyter jupyterlab in new environments.
I had multiple issues (with and without Conda) where Jupyter would install packages to a different Python environment when you used !pip install a_package_name within a cell. The shell environment still kept track of the non-environment Python, and you can tell this by comparing the outputs of !which python and
import sys
sys.executable
Therefore, when you tried to import the package, it would not be available, because the cells used the environment's Python/kernel (as it detected the venv directory).
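A common workaround for that particular mismatch, offered here only as a sketch rather than part of the original answer, is to run pip through the interpreter the kernel is actually using, e.g. in a notebook cell:
import sys
# install into the same interpreter the notebook kernel runs on,
# regardless of which python the shell would pick up
!{sys.executable} -m pip install a_package_name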
I found a workaround that I'd appreciate feedback on. I changed pipenv to install virtual environments into the working directory by adding this to .bashrc/.bash_profile:
export PIPENV_VENV_IN_PROJECT=1
Now when opening a Jupyter notebook, I simply tack on the virtual environment's packages to the Python path:
import sys
sys.path.append('./.venv/lib/python3.7/site-packages/')
Is this a terrible idea?

Resources