Shutdown jupyter kernel - jupyter-notebook

I am sharing my notebooks with colleagues by exposing the Jupyter port on our internal network.
When they leave the notebook, a lot of kernels keep running.
I am looking for a way to automatically shut down a jupyter-notebook kernel when they exit.
Does anyone know if this is possible?
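One approach, if the goal is to keep abandoned kernels from piling up, is to let the server cull idle kernels automatically. A minimal sketch for jupyter_notebook_config.py, assuming Jupyter Notebook 5.1 or newer (where the kernel manager supports culling):

c = get_config()  # provided by Jupyter when it loads the config file

# Shut down kernels that have been idle for an hour.
c.MappingKernelManager.cull_idle_timeout = 3600
# Check for idle kernels every five minutes.
c.MappingKernelManager.cull_interval = 300
# Also cull kernels that still have a browser tab connected to them.
c.MappingKernelManager.cull_connected = True

The same options can be passed on the command line, e.g. --MappingKernelManager.cull_idle_timeout=3600. Note that this culls kernels after a period of inactivity rather than at the exact moment the tab is closed, which the server cannot reliably detect.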

Related

Problem with kernel creation in VS Code while connected to a remote Jupyter server

I am trying to connect to a remote Jupyter server, which runs inside a Docker container on a remote machine, from VS Code (local machine) using the Jupyter extension, and to run a newly created notebook.
The problem seems to be the kernel creation via this method for the newly created notebook.
My Jupyter server runs on the remote machine, inside a Docker environment, with port forwarding enabled.
I can access it via my browser at http://{remote_machine_ip}:{port}/ and am able to create a new Jupyter notebook there.
However, when I use VS Code to open a local notebook file, connect to the remote Jupyter server, and try to run the cells, it shows the following error:
Failed to start the Kernel.
'_xsrf' argument missing from POST.
View Jupyter log for further details.
Possible solution
However, if I open a new kernel in the browser and connect VS Code to that same kernel, the problem goes away.
The issue seems to arise when VS Code sends the request to create a Jupyter kernel to the Jupyter server; when I tested with an already running Jupyter kernel in VS Code, it works fine.
The cause is the security implementation of the Jupyter server, which does not allow cross-site requests to create a kernel, since that would be a security vulnerability.
For more details about the _xsrf token, you can read here; it is not specific to the Jupyter server, but it is easy to deduce the logic.
In a JetBrains post I found the solution: make Jupyter ignore the _xsrf token by adding a new flag when starting the Jupyter server,
--NotebookApp.disable_check_xsrf=True
or add it to your notebook config file.
Also, to make sure your requests are not blocked, as suggested by the VS Code blog, add the following flag too:
--NotebookApp.allow_origin='*'
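If you prefer a config file over command-line flags, the equivalent settings can go in ~/.jupyter/jupyter_notebook_config.py. A sketch of the two flags above; note that both relax the server's CSRF/CORS protection, so only use them on a network you trust:

c = get_config()  # provided by Jupyter when it loads the config file

# Accept kernel-creation requests that lack the _xsrf token (e.g. from VS Code).
c.NotebookApp.disable_check_xsrf = True
# Allow cross-origin requests so they are not blocked.
c.NotebookApp.allow_origin = '*'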

Speed up Jupyter server spawn like on Kaggle

Currently, I am running a JupyterHub server inside a Kubernetes cluster, using KubeSpawner with autoscaling on a cloud platform. It takes a while to spawn a notebook server because of the autoscaling, which only scales up once more notebook servers are required.
Could anyone give me advice on how to reach a spawn speed similar to Kaggle's? Currently my setup takes around 3-5 minutes just to spawn a notebook server, while Kaggle can spawn one in at most 30 seconds, which is very impressive compared to mine.
Thanks in advance.
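Not a full answer, but assuming part of the delay comes from pulling the single-user image onto freshly scaled nodes, one small win is to make sure an image already present on a node is reused. A sketch for jupyterhub_config.py (the image name is a placeholder):

# Use the image already pulled on the node where possible.
c.KubeSpawner.image = "my-registry/my-singleuser-image:latest"
c.KubeSpawner.image_pull_policy = "IfNotPresent"

Beyond that, pre-pulling the image onto every node and keeping some warm placeholder capacity (both supported by the zero-to-jupyterhub Helm chart) is usually what brings spawn times down toward the sub-minute range, because a spawn then never has to wait for the autoscaler to add a node.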

Jupyter notebook "Dead kernel" on connect of GDB/LLDB for debugging of extension modules

I would like to debug a custom Python extension module by attaching GDB/LLDB to the Python kernel of an interactive Jupyter Notebook session, where I can interact with the module. However, as soon as I attach to the process (using CLion's "Attach to process" feature), it halts the process. At the same time, the Jupyter notebook reports that the kernel has died, even though the debugger does not report any crash of the process. I therefore suspect that the notebook server exchanges a fast heartbeat with the kernel, which the kernel fails to answer once halted. If that is the case, is there a way to configure the timeout for that heartbeat? And is there a way to prevent LLDB from halting the process when attaching?
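On the timeout part of the question: the kernel is reported dead by the server-side heartbeat/restarter, and jupyter_client exposes how long it waits before declaring a kernel dead. A sketch, assuming the classic notebook server and that the restarter's check is what triggers the message here, would be to raise that limit in jupyter_notebook_config.py:

# Give a kernel that is halted in a debugger more time (in seconds) before it
# is considered dead; the default is only a few seconds.
c.KernelRestarter.time_to_dead = 60.0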

How to activate a debugger or access logs in Jupyter notebooks?

I am trying to run an R notebook on Microsoft's Azure Notebooks cloud service.
When I try to run all cells, it displays Loading required package: ggplot2 in the last cell and then the kernel systematically crashes. I get:
The kernel appears to have died. It will restart automatically.
But the kernel does not restart automatically.
How can I get a log describing the encountered issue? Is there a way to activate a debugger?
When you're running Jupyter, you'll usually see messages about kernel issues on the standard output of the console from which you launched it. In Azure Notebooks this gets redirected to a file at ~/.nb.log. You can open a new terminal by clicking on the Jupyter icon, then choosing New -> Terminal, and running cat ~/.nb.log. You could also start a new Python notebook for this purpose and run "!cat ~/.nb.log" - but unfortunately you can't do that from an R notebook, since R notebooks don't support the "!" shell magic.
Usually that gives you a good starting point. If that doesn't help much, you could try invoking R directly from the terminal, reproducing the steps there, and seeing whether the output is more useful.
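As a variant of the notebook suggestion, the log can also be read from a new Python notebook on the same server without any shell magic (a sketch; ~/.nb.log is the Azure Notebooks path mentioned above):

from pathlib import Path

# Print the server log that Azure Notebooks writes to the home directory.
print(Path.home().joinpath(".nb.log").read_text())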

Locking an IPython notebook for editing

I have IPython notebooks running on a server, and I'm editing/prototyping them locally. I use rsync to push my local notebooks when I'm ready to show them to others.
The problem is that with all these notebooks open, it's easy to accidentally edit the server notebooks instead of the local ones. Is there some reasonable mechanism to prevent accidental editing of notebooks? I still want to be able to run the server notebooks, and they should still be able to write output; I just want to somehow lock them so they can't be edited.
You can change the notebook files' permissions to read-only by opening a new terminal directly in your Jupyter server.
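For example, a sketch assuming the pushed notebooks live in ~/notebooks on the server (a plain chmod a-w *.ipynb in that terminal does the same thing):

import stat
from pathlib import Path

for nb in Path.home().joinpath("notebooks").glob("*.ipynb"):
    # Drop write permission for user, group and others. Outputs still appear
    # in a running session; they just can no longer be saved back to the file.
    mode = stat.S_IMODE(nb.stat().st_mode)
    nb.chmod(mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))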
