I was able to create a customized kernel by using
conda create -n myenv python=3.8 -y
and the customized env myenv works fine in the SageMaker terminal. However, when I try to use the myenv kernel within a SageMaker notebook, the kernel shows up but dies almost every time (more than 90% of the time; occasionally it works by luck) and the notebook falls back to showing no kernel. This happens for even the simplest Python command I try to run in a notebook cell. What is SageMaker doing that breaks such trivial functionality?
I've got 10 Jupyter notebooks, each with many unique package dependencies (that conflict), so I've created a different Anaconda environment for each notebook. Each notebook relies on the output of the previous one, which I store in and read from local CSV files.
Right now I am running each Jupyter notebook manually (with its own Anaconda environment) to get the final result. Is there a way to write a single script that runs the code of all the notebooks sequentially (with the correct Anaconda environment for each one)?
You could do it in Python using runipy. You just have to install it with:
pip install runipy
An example of how to use it, from the docs:
from runipy.notebook_runner import NotebookRunner
from IPython.nbformat.current import read
notebook = read(open("MyNotebook.ipynb"), 'json')
r = NotebookRunner(notebook)
r.run_notebook()
If you want to run each notebook in a different environment, you can activate each conda environment from a Python script. There are multiple ways to do so; one of them is this:
import subprocess
subprocess.run('source activate environment-name && "enter command here" && source deactivate', shell=True)
Replace the "enter command here" with the command you want to run. You
don't need the "source deactivate" at the end of the command but it's
included just to be safe.
This will temporarily activate the Anaconda environment for the
duration of the subprocess call, after which the environment will
revert back to your original environment. This is useful for running
any commands you want in a temporary environment.
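Putting the two pieces together, here is a minimal sketch of a driver script for the original question: it runs every notebook in order, each under its own conda environment. The environment names and notebook filenames are placeholders for your actual setup, and it assumes jupyter/nbconvert is installed in each environment.
import subprocess

# (conda environment, notebook) pairs in the order they must run -- placeholders
pipeline = [
    ("env-step1", "notebook1.ipynb"),
    ("env-step2", "notebook2.ipynb"),
    # ... one entry per notebook
]

for env, notebook in pipeline:
    print(f"Running {notebook} in conda environment {env}")
    # Activate the environment only for this subprocess, execute the notebook
    # in place, and let the activation disappear when the subprocess exits.
    subprocess.run(
        f"source activate {env} && "
        f"jupyter nbconvert --to notebook --execute --inplace {notebook}",
        shell=True,
        check=True,  # stop the whole pipeline if one notebook fails
    )
Because each notebook writes its results to CSV files that the next notebook reads, check=True stops the chain at the first failure instead of letting later notebooks run against stale data.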
I'm trying to use the terminal to run a Jupyter notebook (kernel: Julia v1.6.2), which contains plots generated using Plots.jl, before uploading the notebook to GitHub for viewing on nbviewer.com.
Following this question:
How to run an .ipynb Jupyter Notebook from terminal?
I have been using nbconvert as follows:
jupyter nbconvert --execute --to notebook --inplace
This runs the notebook (if you tweak the timeout limits); however, it does not display plots when using Plots.jl, even when I explicitly call display(plot()) at the end of a cell.
Does anyone have any idea how notebooks can be run remotely in such a manner that plots will be generated and displayed, particularly when using Julia?
I managed to generate Plots.jl plots by getting from IJulia the same configuration it uses to run notebooks (this is probably the most reliable way when you have many Pythons, etc.).
using Conda, IJulia
Conda.add("nbconvert") # I made sure nbconvert is installed
mycmd = IJulia.find_jupyter_subcommand("nbconvert")
append!(mycmd.exec, ["--ExecutePreprocessor.timeout=600","--to", "notebook" ,"--execute", "note1.ipynb"])
Now mycmd has exactly the same environment as seen by IJulia so we can do run(mycmd):
julia> run(mycmd)
[NbConvertApp] Converting notebook note1.ipynb to notebook
Starting kernel event loops.
[NbConvertApp] Writing 23722 bytes to note1.nbconvert.ipynb
The output is saved to note1.nbconvert.ipynb; opening it with nteract confirms that the graphs actually got generated.
Alternatively, launch the notebook with using IJulia and notebook() in the REPL.
Is there some way to tell Jupyter notebook what the default conda env should be when creating new notebooks? Launching it on the AWS Deep Learning AMIs gives me a pretty long list, but I really only care about one specific env.
If you go to your terminal first and activate the virtual environment:
$ source venv/bin/activate
or
$ conda activate venv
for a conda environment.
And after that step, do the following:
$ jupyter notebook
And when you create a new notebook it should give you the option of choosing python3/python2; choose the one that serves your purpose. This notebook will use the activated environment, which you can verify by importing a library specific to that environment.
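If the environment you care about should always show up in the kernel list (no matter which environment Jupyter was launched from), another common approach is to register it as a named kernel with ipykernel. A minimal sketch, assuming ipykernel is installed in that environment and using the placeholder name myenv; run it (or the equivalent shell command) with the environment activated so that python points at it:
import subprocess

# Register the active environment as a Jupyter kernel named "myenv".
# "myenv" and the display name are placeholders for your environment.
subprocess.run(
    [
        "python", "-m", "ipykernel", "install", "--user",
        "--name", "myenv",
        "--display-name", "Python (myenv)",
    ],
    check=True,
)
After this, new notebooks can pick "Python (myenv)" from the kernel menu regardless of which environment the notebook server itself was started from.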
I have some R code, stored in update_db.ipynb, that updates a database. When I try to %run update_db.ipynb from a Jupyter notebook with a Python kernel, I get an error
File "<ipython-input-8-815efb9473c5>", line 14
city_weather <- function(start,end,airports){
^
SyntaxError: invalid syntax
Looks like it thinks that update_db.ipynb is written in python. Can I specify which kernel to use when I use %run?
Your error is not due to the selected kernel. The %run command is made to run Python only, and it has to be a script, not a notebook. You can check the details of the IPython magic commands.
For your use case I would suggest installing both the Python and R kernels in Jupyter. Then you can use the %%R cell magic to run R code in a cell inside the Python notebook. Source: this great article on jupyter - tip 19
Another solution is to put your R code in an R script and then execute it from the Jupyter notebook. For this you can run a shell command from the notebook that will execute the script:
!Rscript path/to/script.r
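The %%R magic mentioned above is provided by the rpy2 package, so here is a sketch of what the tip looks like in practice. It assumes rpy2 is installed in the Python environment (pip install rpy2), and the R body is just a placeholder standing in for the code from update_db.ipynb.
In one notebook cell, load the extension:
%load_ext rpy2.ipython
Then, in a separate cell, everything after the %%R line is executed by R:
%%R
city_weather <- function(start, end, airports) {
    # ... R code from update_db.ipynb goes here ...
}
Variables can also be passed between the two languages with the -i (input) and -o (output) options of %%R, which is handy if the Python kernel needs the results of the database update.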
Once you have Jupyter notebook installed using the Google initialization script, is it possible to create and activate a conda environment on all the worker nodes?
Do I need to ssh onto each worker and create the same environment, or is there a way to accomplish the same via a notebook if the number of modules to add is small?
Once you have a cluster up, there is not a great way to re-configure all workers.
If you create a new cluster, the conda initialization action might be helpful.
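As a rough sketch of that route, a cluster can be created with an initialization action that sets up conda on every node. The bucket path below is a placeholder, not the real location of the conda initialization action; check the Dataproc initialization-actions repository for the current script and for how to pass a package list to it.
import subprocess

# Hypothetical example: create a cluster whose nodes all run the same
# conda-setup script at startup. The GCS path is a placeholder.
subprocess.run(
    [
        "gcloud", "dataproc", "clusters", "create", "my-cluster",
        "--initialization-actions", "gs://my-bucket/conda/bootstrap-conda.sh",
    ],
    check=True,
)
Because the script runs on every node at cluster creation time, the workers end up with the same environment without any per-node ssh sessions.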