I'm writing a startup script for users of my JupyterLab extension that customizes the pandas DataFrame output display using a formatter. The formatting only makes sense in the context of JupyterLab, but because the file is in the user's .ipython/profile_default/startup folder, it gets executed every time an IPython kernel is started.
I have tried using get_ipython().__class__.__name__; however, 'ZMQInteractiveShell' is returned in JupyterLab, Jupyter Notebook, and VS Code alike. I'm specifically looking for the case where the user is in JupyterLab.
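For reference, the check I tried looks like this (a minimal sketch):

shell = get_ipython().__class__.__name__
# 'ZMQInteractiveShell' for JupyterLab, Jupyter Notebook, and VS Code alike;
# 'TerminalInteractiveShell' for plain terminal IPython
if shell == 'ZMQInteractiveShell':
    print('some Jupyter front end, but which one?')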
Does anyone know how to detect whether the IPython kernel was started by launching JupyterLab, and either 1) prevent the startup file from running altogether, or 2) add a branch in the startup file itself so the formatter isn't loaded when the kernel wasn't started through JupyterLab?
My startup file:
try:
    import pandas as pd

    def df_formatter(obj):
        return f'<div><div onclick="console.log(\'You are in Jupyter Lab\')">In Jupyter Lab?</div> {obj.to_html()}</div>'

    # register the custom HTML formatter for pandas DataFrames
    html_formatter = get_ipython().display_formatter.formatters['text/html']
    html_formatter.for_type(pd.DataFrame, df_formatter)
except Exception:
    print('Unable to run startup script')
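One direction that might work, since the shell class name is identical everywhere: inspect the kernel's parent processes instead. A heuristic sketch (assuming psutil is installed and that Lab was launched via a jupyter-lab process; not verified across platforms):

import psutil

def started_from_jupyter_lab():
    # walk up the kernel's process tree looking for a jupyter-lab launcher
    proc = psutil.Process().parent()
    while proc is not None:
        if 'jupyter-lab' in ' '.join(proc.cmdline()):
            return True
        proc = proc.parent()
    return False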
I've got 10 Jupyter notebooks, each with many unique package dependencies (that conflict), so I've created a separate Anaconda environment for each notebook. Each notebook relies on the output of the previous one, which I store in and read from local CSV files.
Right now I run each Jupyter notebook manually (with its own Anaconda environment) to get the final result. Is there a way to run a single script that runs the code of all the Jupyter notebooks sequentially (with the correct Anaconda environment for each one)?
You could do it in Python using runipy. You just have to install it with:
pip install runipy
An example of how to use it, from the docs:
from runipy.notebook_runner import NotebookRunner
from IPython.nbformat.current import read
notebook = read(open("MyNotebook.ipynb"), 'json')
r = NotebookRunner(notebook)
r.run_notebook()
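If you also want to save the executed notebook, the docs show writing the runner's nb attribute back out with the matching write function (the output filename here is a placeholder):

from IPython.nbformat.current import write

# r.nb holds the executed notebook after run_notebook()
write(r.nb, open("MyExecutedNotebook.ipynb", 'w'), 'json')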
If you want to run each notebook in a different environment, you can activate each conda environment from a Python script. There are multiple ways to do so; one of them is this:
import subprocess

subprocess.run('source activate environment-name && "enter command here" && source deactivate', shell=True)
Replace the "enter command here" with the command you want to run. You
don't need the "source deactivate" at the end of the command but it's
included just to be safe.
This will temporarily activate the Anaconda environment for the duration of the subprocess call, after which the environment will revert back to your original one. This is useful for running any commands you want in a temporary environment.
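Putting the two together, a driver script along these lines should run each notebook in its own environment, in order (a sketch; the notebook filenames and environment names are placeholders, and it uses jupyter nbconvert --execute rather than runipy, since nbconvert ships with Jupyter):

import subprocess

# placeholder notebook -> environment mapping; order matters because
# each notebook reads the CSVs written by the one before it
pipeline = [
    ('notebook1.ipynb', 'env1'),
    ('notebook2.ipynb', 'env2'),
]

for notebook, env in pipeline:
    # check=True aborts the whole pipeline if any notebook fails
    subprocess.run(
        f'source activate {env} && jupyter nbconvert --to notebook '
        f'--execute --inplace {notebook}',
        shell=True,
        check=True,
    )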
I would like to set up a system such that it not only runs jupyter notebook on start, but also starts executing a specific notebook on that Jupyter server (running all cells in sequence).
Is this possible? I specifically want to be able to access the notebook web interface and inspect/stop/etc. the running notebook at any point.
I know nbconvert can execute a notebook, but it seems to run independently of any existing Jupyter server?
Maybe there is some API I can access so that I can write a shell script to run jupyter notebook and then use the API to open and run a notebook?
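For reference, the running notebook server does expose a REST API you can script against. A minimal sketch of querying it, where BASE_URL and TOKEN stand in for the URL and token the server prints at startup (note that actually executing cells additionally requires a WebSocket connection to the kernel channels, which this doesn't cover):

import requests

BASE_URL = 'http://localhost:8888'  # placeholder
TOKEN = 'your-token-here'           # placeholder
headers = {'Authorization': f'token {TOKEN}'}

# list the kernels currently running on this server
kernels = requests.get(f'{BASE_URL}/api/kernels', headers=headers).json()
print(kernels)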
Requirement:
Be able to audit (using logs) all the commands run in a Jupyter Notebook by a user. The Jupyter Notebook is installed on Dataproc.
Is there a way we can log the commands run by the user as they are executed?
I have already tried changing Application.log_level in the Jupyter config file to 0, but no luck.
Looks like there was some discussion about this feature request in the Jupyter community: https://groups.google.com/forum/#!topic/jupyter/sLKCCBwlKEc. You would have to modify the Jupyter kernel to print out all commands to a file.
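One way to approximate this without patching the kernel itself is an IPython startup file that hooks cell execution. A sketch using IPython's pre_run_cell event (the log path is an arbitrary choice, and IPython >= 7.17 is assumed for the ExecutionInfo argument):

import datetime

LOG_PATH = '/var/log/jupyter_audit.log'  # placeholder path

def audit_cell(info):
    # info.raw_cell holds the source of the cell about to run
    with open(LOG_PATH, 'a') as f:
        f.write(f'{datetime.datetime.now().isoformat()}\n{info.raw_cell}\n---\n')

get_ipython().events.register('pre_run_cell', audit_cell)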
I am hoping to use PyROOT, the Python interface to CERN's ROOT data analysis framework, by integrating it into a Jupyter notebook. As far as I can tell, I did a proper installation of ROOT on my macOS 10 machine.
When I launch the ROOT Jupyter notebook using the following command, it launches properly:
root --notebook
and I get the Jupyter notebook editor. However, whenever I start the notebook with:
import ROOT
the command hangs while loading, and a pop-up notification tells me:
The kernel appears to have died. It will restart automatically.
I want to run an .ipynb file while starting a SageMaker instance.
The current status is:
CloudWatch (success) -> Lambda (success) -> SageMaker instance (success) -> Running the particular notebook (failed)
1. I tried using a "SageMaker Lifecycle" configuration with the code
jupyter nbconvert --execute prediction-12hr.ipynb --ExecutePreprocessor.kernel_name=conda_tensorflow_p36
but I'm getting an error:
[NbConvertApp] Converting notebook prediction-12hr.ipynb to html
[NbConvertApp] Executing notebook with kernel: conda_tensorflow_p36
...
raise NoSuchKernel(kernel_name)
jupyter_client.kernelspec.NoSuchKernel: No such kernel named conda_tensorflow_p36
On running
!conda env list
conda environments:
base * /home/ec2-user/anaconda3
JupyterSystemEnv /home/ec2-user/anaconda3/envs/JupyterSystemEnv
chainer_p27 /home/ec2-user/anaconda3/envs/chainer_p27
chainer_p36 /home/ec2-user/anaconda3/envs/chainer_p36
mxnet_p27 /home/ec2-user/anaconda3/envs/mxnet_p27
mxnet_p36 /home/ec2-user/anaconda3/envs/mxnet_p36
python2 /home/ec2-user/anaconda3/envs/python2
python3 /home/ec2-user/anaconda3/envs/python3
pytorch_p27 /home/ec2-user/anaconda3/envs/pytorch_p27
pytorch_p36 /home/ec2-user/anaconda3/envs/pytorch_p36
tensorflow_p27 /home/ec2-user/anaconda3/envs/tensorflow_p27
tensorflow_p36 /home/ec2-user/anaconda3/envs/tensorflow_p36
2. I also tried injecting Python/bash code into the instance start-up, pausing the start-up code to wait until the conda environments are set up by SageMaker.
Still no luck.
Can someone suggest a way to run the .ipynb file, by any means possible?
Try activating the relevant Python virtualenv that the notebook relies on:
source /home/ec2-user/anaconda3/envs/tensorflow_p36/bin/activate
jupyter nbconvert --execute ...
Learn more: How to activate virtualenv?
Can you try activating the tensorflow_p36 env and executing the notebook file in that environment? That way you don't have to specify a kernel.
source activate tensorflow_p36
jupyter nbconvert --execute prediction-12hr.ipynb
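Alternatively, once the right environment is active you can drive the execution from Python with nbconvert's API (a sketch; kernel_name='python3' assumes the default kernelspec inside the activated env, and the output filename is a placeholder):

import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

nb = nbformat.read('prediction-12hr.ipynb', as_version=4)
ep = ExecutePreprocessor(timeout=600, kernel_name='python3')
ep.preprocess(nb, {'metadata': {'path': '.'}})  # run all cells in order

# save the executed notebook alongside the original
nbformat.write(nb, 'prediction-12hr.out.ipynb')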