Our team has developed Python scripts to process data in 10 separate but functionally related Jupyter notebooks, i.e. the output of notebook 1 is used as input for notebook 2, and so on.
Our next step is to automate this data-processing pipeline. Is there a way to invoke the Jupyter notebooks sequentially?
nbconvert allows you to run notebooks. To run a notebook and replace the existing output with the new one, you can use:
jupyter nbconvert --execute --to notebook --inplace <notebook>
For more options and different approaches, have a look at the nbconvert documentation.
You can create a script with the above command for each notebook, and this will execute the notebooks in sequential order.
Script:
jupyter nbconvert --execute --to notebook --inplace <notebook1>
jupyter nbconvert --execute --to notebook --inplace <notebook2>
Then run the script.
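With ten notebooks, a loop may be easier to maintain than ten copied lines. A minimal Python sketch of the same idea (the filenames are placeholders, not your real ones):

import subprocess

# Placeholder filenames -- substitute your ten notebooks, in pipeline order
notebooks = [f"notebook{i}.ipynb" for i in range(1, 11)]

for nb in notebooks:
    # Same command as above: execute each notebook in place
    subprocess.run(
        ["jupyter", "nbconvert", "--execute", "--to", "notebook", "--inplace", nb],
        check=True,  # stop the pipeline if a notebook fails
    )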
An alternative is to name the notebooks so that alphabetical order matches the execution order; then, in the terminal, you can use
jupyter nbconvert --inplace --execute *.ipynb
Note that --inplace overwrites the notebook file itself after execution.
I'm trying to use the terminal to run a Jupyter notebook (kernel: Julia v1.6.2), which contains plots generated using Plots.jl, before uploading the notebook to GitHub for viewing on nbviewer.com.
Following this question:
How to run an .ipynb Jupyter Notebook from terminal?
I have been using nbconvert as follows:
jupyter nbconvert --execute --to notebook --inplace
This runs the notebook (if you tweak the timeout limits); however, it does not display plots made with Plots.jl, even when I explicitly call display(plot()) at the end of a cell.
Does anyone have any idea how notebooks can be run remotely in such a manner that plots will be generated and displayed, particularly when using Julia?
I managed to generate Plots.jl plots by getting from IJulia the same configuration it uses to run notebooks (this is probably the surest way when you have many Pythons, etc.).
using Conda, IJulia
Conda.add("nbconvert") # I made sure nbconvert is installed
mycmd = IJulia.find_jupyter_subcommand("nbconvert")
append!(mycmd.exec, ["--ExecutePreprocessor.timeout=600", "--to", "notebook", "--execute", "note1.ipynb"])
Now mycmd has exactly the same environment as IJulia sees, so we can do run(mycmd):
julia> run(mycmd)
[NbConvertApp] Converting notebook note1.ipynb to notebook
Starting kernel event loops.
[NbConvertApp] Writing 23722 bytes to note1.nbconvert.ipynb
The outcome was saved to note1.nbconvert.ipynb; I opened it with nteract to confirm that the graphs actually got generated.
Alternatively, launch the notebook with using IJulia and notebook() in the REPL.
I'm trying to run an .ipynb file automatically while starting a SageMaker instance.
The current status is:
CloudWatch (success) -> Lambda (success) -> SageMaker instance (success) -> running the particular notebook (failed)
1. I tried using a SageMaker lifecycle configuration with the command
jupyter nbconvert --execute prediction-12hr.ipynb --ExecutePreprocessor.kernel_name=conda_tensorflow_p36
but I am getting an error:
[NbConvertApp] Converting notebook prediction-12hr.ipynb to html
[NbConvertApp] Executing notebook with kernel: conda_tensorflow_p36
...
raise NoSuchKernel(kernel_name)
jupyter_client.kernelspec.NoSuchKernel: No such kernel named conda_tensorflow_p36
On running !conda env list, I get:
conda environments:
base * /home/ec2-user/anaconda3
JupyterSystemEnv /home/ec2-user/anaconda3/envs/JupyterSystemEnv
chainer_p27 /home/ec2-user/anaconda3/envs/chainer_p27
chainer_p36 /home/ec2-user/anaconda3/envs/chainer_p36
mxnet_p27 /home/ec2-user/anaconda3/envs/mxnet_p27
mxnet_p36 /home/ec2-user/anaconda3/envs/mxnet_p36
python2 /home/ec2-user/anaconda3/envs/python2
python3 /home/ec2-user/anaconda3/envs/python3
pytorch_p27 /home/ec2-user/anaconda3/envs/pytorch_p27
pytorch_p36 /home/ec2-user/anaconda3/envs/pytorch_p36
tensorflow_p27 /home/ec2-user/anaconda3/envs/tensorflow_p27
tensorflow_p36 /home/ec2-user/anaconda3/envs/tensorflow_p36
2. I also tried injecting Python/bash code into the instance start-up, pausing the start-up script until the conda environments are set up by SageMaker.
Still no luck.
Can someone suggest a way to run the .ipynb file, by any means possible?
Try activating the relevant Python virtualenv that the notebook relies on:
source /home/ec2-user/anaconda3/envs/tensorflow_p36/bin/activate
jupyter nbconvert --execute ...
Learn more: How to activate virtualenv?
Can you try activating the tensorflow_p36 env and executing the notebook file in that environment? That way you don't have to specify a kernel.
source activate tensorflow_p36
jupyter nbconvert --execute prediction-12hr.ipynb
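If activating the env still fails, it is worth checking which kernelspecs are actually registered: nbconvert resolves --ExecutePreprocessor.kernel_name against Jupyter kernelspecs, not conda environments. A small diagnostic sketch using the jupyter_client API (the calls are real; where you run it on the instance is an assumption):

from jupyter_client.kernelspec import KernelSpecManager

# Maps registered kernel names to their kernelspec directories
specs = KernelSpecManager().find_kernel_specs()
print(specs)
# If 'conda_tensorflow_p36' is absent here, the NoSuchKernel error is expected:
# the environment exists in conda but has no registered Jupyter kernelspec.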
I have some R code to update a database stored in update_db.ipynb. When I try to %run update_db.ipynb from a Jupyter notebook with a Python kernel, I get an error:
File "<ipython-input-8-815efb9473c5>", line 14
city_weather <- function(start,end,airports){
^
SyntaxError: invalid syntax
Looks like it thinks that update_db.ipynb is written in Python. Can I specify which kernel to use when I use %run?
Your error is not due to the selected kernel: %run is made to run Python only, and it has to be a script, not a notebook. You can check the details in the IPython magic commands documentation.
For your use case I would suggest installing both the Python and R kernels in Jupyter. Then you can use the %%R cell magic to run R for a cell inside the Python notebook, as sketched below. Source: this great article on Jupyter, tip 19.
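For illustration, a minimal sketch of the %%R approach. Note that the %%R magic is provided by the rpy2 IPython extension, so this assumes rpy2 is installed alongside R:

First cell:

%load_ext rpy2.ipython

Second cell:

%%R
# Everything in this cell is executed as R code
x <- c(1, 2, 3)
mean(x)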
Another solution is to put your R code in an R script and then execute it from the Jupyter notebook. For this you can run a shell command from a notebook cell (Rscript is R's command-line script runner):
!Rscript path/to/script.r
I have a Jupyter notebook (python3) which is a batch job -- it runs three separate python3 notebooks using %run. I want to invoke a fourth Jupyter R-kernel notebook from my batch.
Is there a way to execute an external R notebook from a Python notebook in Jupyter / iPython?
Current setup:
run_all.ipynb: (python3 kernel)
%run '1_py3.ipynb'
%run '2_py3.ipynb'
%run '3_py3.ipynb'
%run '4_R.ipynb'
The three python3 notebooks run correctly. The R notebook runs correctly when opened separately in Jupyter -- however it fails when called using %run from run_all.ipynb. It is interpreted as python, and the cell gives a python error on the first line:
cacheDir <- "caches"
TypeError: bad operand type for unary -: 'str'
I am interested in any solution for running a separate R notebook from a python notebook -- Jupyter magic, shell, python library, et cetera. I would also be interested in a workaround -- e.g. a method (like a shell script) that would run all four notebooks (both python3 and R) even if this can't be done from inside a python3 notebook.
(NOTE: I already understand how to embed %%R in a cell. This is not what I am trying to do. I want to call a complete separate R notebook.)
I don't think you can use the %run magic command that way, as it executes the file in the current kernel.
Nbconvert has an execution API that allows you to execute notebooks. So you could create a shell script that executes all your notebooks like so:
#!/bin/bash
jupyter nbconvert --to notebook --execute 1_py3.ipynb
jupyter nbconvert --to notebook --execute 2_py3.ipynb
jupyter nbconvert --to notebook --execute 3_py3.ipynb
jupyter nbconvert --to notebook --execute 4_R.ipynb
Since your notebooks require no shared state, this should be fine. Alternatively, if you really want to do it in a notebook, you can use the execute Python API to call nbconvert from your notebook.
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

with open("1_py3.ipynb") as f1, open("2_py3.ipynb") as f2, open("3_py3.ipynb") as f3, open("4_R.ipynb") as f4:
    nb1 = nbformat.read(f1, as_version=4)
    nb2 = nbformat.read(f2, as_version=4)
    nb3 = nbformat.read(f3, as_version=4)
    nb4 = nbformat.read(f4, as_version=4)

ep_python = ExecutePreprocessor(timeout=600, kernel_name='python3')
# Use `jupyter kernelspec list` to find out what the R kernel is called on your system
ep_R = ExecutePreprocessor(timeout=600, kernel_name='ir')

# 'path' specifies which folder to execute the notebooks in, so set it to
# the one you need so that your file path references are correct
ep_python.preprocess(nb1, {'metadata': {'path': 'notebooks/'}})
ep_python.preprocess(nb2, {'metadata': {'path': 'notebooks/'}})
ep_python.preprocess(nb3, {'metadata': {'path': 'notebooks/'}})
ep_R.preprocess(nb4, {'metadata': {'path': 'notebooks/'}})

with open("1_py3.ipynb", "wt") as f1, open("2_py3.ipynb", "wt") as f2, open("3_py3.ipynb", "wt") as f3, open("4_R.ipynb", "wt") as f4:
    nbformat.write(nb1, f1)
    nbformat.write(nb2, f2)
    nbformat.write(nb3, f3)
    nbformat.write(nb4, f4)
Note that this is pretty much just the example from the nbconvert execute API docs.
I was able to use the answer above to implement two solutions for running an R notebook from a python3 notebook.
1. Call nbconvert from a ! shell command
Adding a simple ! shell command to the python3 notebook:
!jupyter nbconvert --to notebook --execute r.ipynb
So the notebook looks like this:
%run '1_py3.ipynb'
%run '2_py3.ipynb'
%run '3_py3.ipynb'
!jupyter nbconvert --to notebook --execute 4_R.ipynb
This seems simple and easy to use.
2. Invoke nbformat in a cell
Add this to a cell in the batch notebook:
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

rnotebook = "r.ipynb"
rnotebook_out = "r_out.ipynb"
rnotebook_path = '/home/jovyan/work/'

with open(rnotebook) as f1:
    nb1 = nbformat.read(f1, as_version=4)

ep_R = ExecutePreprocessor(timeout=600, kernel_name='ir')
ep_R.preprocess(nb1, {'metadata': {'path': rnotebook_path}})

with open(rnotebook_out, "wt") as f1:
    nbformat.write(nb1, f1)
This is based on the answer from Louise Davies (which is in turn based on the nbconvert docs example), but it only processes one file -- the non-R files can be processed in separate cells with %run.
If the batch notebook is in the same folder as the notebook it is executing, then the path variable can be set with the %pwd magic, which returns the path of the batch notebook; see the sketch below.
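For example, in a cell of the batch notebook (reusing the variable name from the listing above):

rnotebook_path = %pwd  # IPython assigns the magic's result (the current working directory) to the variable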
When we use nbformat.write, we choose between replacing the original notebook (convenient and intuitive, but this could corrupt or destroy the file) and creating a new file for the output. A third option, if the cell output isn't needed (e.g. in a workflow that manipulates files and writes logs), is to skip writing the executed notebook entirely.
Drawbacks
One drawback to both methods is that they do not pipe cell results back into the master notebook display, as opposed to the way %run displays the output of a notebook in its result cell. The !jupyter nbconvert method appears to show stdout from nbconvert, while the import nbconvert method showed me nothing.
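If surfacing at least some progress in the master notebook matters, one workaround (a sketch, not part of the original answers) is to invoke nbconvert via subprocess and print its log; nbconvert writes its [NbConvertApp] progress messages to stderr:

import subprocess

# Execute the R notebook exactly as in method 1, but capture nbconvert's log
result = subprocess.run(
    ["jupyter", "nbconvert", "--to", "notebook", "--execute", "4_R.ipynb"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stderr)  # show nbconvert's progress log in the master notebook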
I'm using the following command to execute file1.ipynb and write the output to file2.ipynb:
jupyter nbconvert file1.ipynb --to notebook --execute --output file2.ipynb
It appears file2.ipynb is created only after the whole notebook has been executed. However, I want to see the output of cells that have already run while execution is still in progress (as opposed to seeing it only at the very end).
Is there a way to update the output file after executing each cell?
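nbconvert does not appear to expose this directly, but one possible approach (a sketch, untested across nbconvert versions) is to subclass ExecutePreprocessor and save the notebook after each cell. preprocess_cell is the documented per-cell hook, and the preprocessor keeps the notebook being executed on self.nb; the filenames match the question:

import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

class IncrementalExecutePreprocessor(ExecutePreprocessor):
    """Writes the partially executed notebook to disk after every cell."""

    def preprocess_cell(self, cell, resources, index):
        # Let the parent class execute the cell as usual
        cell, resources = super().preprocess_cell(cell, resources, index)
        # self.nb holds the notebook being executed; snapshot it after each cell
        nbformat.write(self.nb, "file2.ipynb")
        return cell, resources

nb = nbformat.read("file1.ipynb", as_version=4)
ep = IncrementalExecutePreprocessor(timeout=600, kernel_name="python3")
ep.preprocess(nb, {"metadata": {"path": "."}})
nbformat.write(nb, "file2.ipynb")  # final write once execution completes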