How to start jupyter notebook with 'qt' backend by default [duplicate] - jupyter-notebook

Right now the default backend for matplotlib is 'module://ipykernel.pylab.backend_inline'
I want to switch that to TkAGG. I edited the matplotlibrc file in
~/anaconda2/lib/python2.7/site-packages/matplotlib/mpl-data/matplotlibrc
to add
backend : TkAgg
and it did switch the backend for plain Python, but not in Jupyter.
For now, every time I start a new notebook in Jupyter I have to run %matplotlib tk. Isn't there a nicer way to make TkAgg the default backend in Jupyter?

The question is similar to Automatically run %matplotlib inline in IPython Notebook, except that you want to automatically use TK backend instead of inline backend.
So the idea is to locate your IPython configuration file. See configure IPython. It should be
~/.ipython/profile_default/ipython_kernel_config.py
If it doesn't exist yet, create it via ipython profile create.
Inside this file locate the setting c.InteractiveShellApp.matplotlib and set it to "tk". It should then look like
## Configure matplotlib for interactive use with the default matplotlib backend.
c.InteractiveShellApp.matplotlib = "tk"
Save the file and restart the kernel.
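One quick way to verify the change took effect is to check the active backend from a fresh notebook cell, using matplotlib's standard API:
import matplotlib
print(matplotlib.get_backend())   # should report 'TkAgg' once the config is picked up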

Related

How to use Tensorboard within a notebook running on Amazon SageMaker Studio Lab?

I have a Jupyter notebook running within an Amazon SageMaker Studio Lab (https://studiolab.sagemaker.aws/) environment, and I want to use TensorBoard to monitor my model's performance inside the notebook.
I have used the following commands to set up the Tensorboard:
%load_ext tensorboard
# tb_log_dir variable holds the path to the log directory
%tensorboard --logdir tb_log_dir
But nothing useful shows up in the output of the cell where I execute the commands: only two buttons appear, and they do not respond to clicks.
How to solve this problem? Any suggestions would be appreciated.
I would try the canonical way of using TensorBoard in AWS SageMaker, which should also be supported by Studio Lab; it is described here. Basically, install TensorBoard and launch it from the embedded console using your EFS_PATH_LOG_DIR (you can also do the following from a cell):
pip install tensorboard
tensorboard --logdir <EFS_PATH_LOG_DIR>
Be careful with EFS_PATH_LOG_DIR: make sure this folder is a valid path from your current location. For example, by default you are located in studio-lab-user/sagemaker-studiolab-notebooks/, so the proper command would be !tensorboard --logdir logs/fit.
Then open a browser to:
https://<YOUR URL>/studiolab/default/jupyter/proxy/6006/
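For completeness, here is a minimal sketch of how a logs/fit directory might get populated in the first place, assuming a Keras model (the model and data below are placeholders, not taken from the question):
import numpy as np
import tensorflow as tf

# Placeholder data and model, just to produce some TensorBoard event files.
x = np.random.rand(100, 8)
y = np.random.randint(0, 2, size=(100,))
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Write event files under logs/fit, matching the --logdir used above.
tb_callback = tf.keras.callbacks.TensorBoard(log_dir="logs/fit", histogram_freq=1)
model.fit(x, y, epochs=2, callbacks=[tb_callback])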

How do I have my Jupyter notebook server run arbitrary Python code before running notebook code?

I'm trying to replicate the functionality of the code editor on a platform I was previously using called Odoo.sh. The platform would let me create a .ipynb notebook, but in the cells I could reference pre-set variables which required no boilerplate code inside of the notebook. Extremely convenient.
If you're familiar with Odoo, it was like having odoo-bin shell be implicitly run before executing any of the cells inside the notebook. It was wonderful to work with, but Odoo.sh is proprietary, so I'm trying to replicate the same functionality on my local machine.
A minimal example of what I'm going for here would be to have the following python code run before executing any of my .ipynb notebook file's cells.
example_value = False

def example_func():
    global example_value
    example_value = True

example_func()
So that inside of any notebook's cells I could simply run something like example_value and get an output of True.
In the case of Odoo.sh it almost seemed like there was a special custom kernel set up that was nothing more than a regular Python 3 kernel with some initialization code. This may be exactly what was going on, but I don't know enough about how Jupyter works to know for myself. How do I replicate this functionality?
I figured it out! You need to create a custom kernel, but for this use case you can just reuse the default IPython kernel and just pass some variables into the user namespace.
First, create a Python file for your kernel. Let's use test_kernel.py. Here are the contents:
from ipykernel.ipkernel import IPythonKernel
from ipykernel.kernelapp import IPKernelApp

if __name__ == "__main__":
    example_value = False

    def example_func():
        global example_value
        example_value = True

    example_func()

    IPKernelApp.launch_instance(
        kernel_class=IPythonKernel,
        user_ns={"example_value": example_value})
See how the arbitrary code from the question is run before launching the kernel instance. Using the user_ns argument, we can pass arbitrary data to the user environment.
To get our kernel up and running we need to make a test directory and then a test/kernel.json file. It will have these contents:
{
    "argv": ["python", "-m", "test_kernel", "-f", "{connection_file}"],
    "display_name": "Test"
}
Let's install that bad boy. Run jupyter kernelspec install --user test. In that command, test is the name of the directory we created. The --user argument makes Jupyter install the kernel only for the current user. You don't have to use it if you don't want to.
Now we should be good to go! Start things up with jupyter notebook and you will see your new kernel is available when creating notebooks. And check it out: evaluating example_value in a cell shows the variable we passed into the namespace.
Last of all, be sure to note that in order for this to work your test_kernel.py file will need to be somewhere where Python can import it. I'm not an expert on this, but from a bit of Googling I took this to mean that the directory containing the file should either be the current working directory or be in your PYTHONPATH.
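As a quick sanity check of that last point (my own suggestion, using only the standard library), you can ask Python whether the module is importable from wherever you launch Jupyter:
import importlib.util
spec = importlib.util.find_spec("test_kernel")
print(spec)   # None means Python cannot import test_kernel from this location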

Jupyterlab: turn on tab completion for text editor as in Notebook?

In JupyterLab, there is a text editor in which we can open .py files. Is it possible to also turn on tab completion, just like it works in the Notebook?
By now, tab completion in the text editor of JupyterLab has been implemented in this pull request (see also the discussion in this issue). However, for it to work you need to open a console for the editor (right-click in the editor window and select Create Console for Editor).
No, it is currently an open issue. https://github.com/jupyterlab/jupyterlab/issues/1276
The package jupyterlab-lsp now provides tab completion in the text editor. You can install it from pip or conda, along with a language server for Python:
pip install jupyter-lsp
pip install jedi-language-server
I also needed to enable the server side extension:
jupyter server extension enable --user --py jupyter_lsp
And enabled @krassowski/jupyterlab-lsp and @krassowski/completion-theme via JupyterLab's extension GUI (the puzzle piece on the right-hand side). Then I restarted JupyterLab, and completion worked (with Tab). I am not sure if all these steps are necessary; it might depend on your environment.

Cannot import .py file to ipython notebook

With apologies in advance for the "I can't get it to work" question: How should I load a .py file into ipython notebook? I want to convert python code to notebooks (first simple scripts and later scripts that include nbconvert directives embedded as comments-- see bottom of the linked file.)
Perhaps I'm doing it wrong, but perhaps there's something wrong with my set-up. When I drag a .py file to the Notebook's file list, I get the message
Invalid file type: Uploaded notebooks must be .ipynb files.
I even tried changing the extension to .ipynb (keeping the python script unmodified); reasonably enough, I got an error:
Error loading notebook: Bad request
Any idea what's going wrong?
System information: I'm on OS X (10.8, Mountain Lion), using Firefox 28.0 and Anaconda 1.9.2 (x86_64), which supplies python 2.7.6 and ipython 2.0. Anaconda is not on the default PATH; I add it in a bash session from which I then launch notebook with ipython notebook, and I'm able to open and edit .ipynb files normally in the browser.
But I do get some curious behavior:
When exporting from notebook as a .py file, I don't get the control comments documented here but a simpler format, without version number:
# coding: utf-8
# In[ ]:
print "This is a slide"
## Top-level title
### Second-level heading
#### Third-level heading
# This is some `markdown` text.
#
# And some more here.
Any idea what's going on here?
The same format is generated by ipython nbconvert. However, if I start the notebook server with ipython notebook --script (which exports the notebook as a python script every time it is saved), the result contains the nbconvert directives we need to convert back to a notebook!
I had the same problem.
This post helped:
How to load/edit/run/save text files (.py) into an IPython notebook cell?
Basically, we just have to use the following command in the cell. And the .py file has to be in the same directory.
%load filename.py
I'm not sure why notebook doesn't support this natively, but I've concluded that the answer is: It can't be done from the command line or notebook GUI.
Control comments like <markdowncell> can only be interpreted by accessing the notebook's API through Python, as shown by @CliffordVienna in this answer to my related question.
import IPython.nbformat.current as nbf
nb = nbf.read(open('test.py', 'r'), 'py')
nbf.write(nb, open('test.ipynb', 'w'), 'ipynb')
Edit: The above method does not work with the current version (v4) of the Notebook API, so I have added this self-answer to show how it's done.
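For reference, here is a rough sketch of the same idea against the standalone nbformat package (the replacement for IPython.nbformat in later versions). Note that this simply wraps the whole script in a single code cell and does not interpret the control comments:
import nbformat
from nbformat.v4 import new_notebook, new_code_cell

with open('test.py') as f:
    source = f.read()

nb = new_notebook(cells=[new_code_cell(source)])
with open('test.ipynb', 'w') as f:
    nbformat.write(nb, f)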
If you only need to import a local file, first use:
import os, sys
sys.path.append(os.getcwd())
to place the .ipynb file's directory in sys.path, and then import the local file.

Can I use variables on an IPython notebook markup cell?

I have an IPython notebook and I would like to use one of my variables inside a markup cell. Is this even possible? If so, how do you do it?
If you don't mind a code cell that does the job, there is a possibility without adding any extensions.
from IPython.display import Markdown as md
fr=2 #GHz
md("$f_r = %i$ GHz"%(fr))
This will render the cell's output as Markdown with nicely LaTeX-formatted math.
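Equivalently, with an f-string and display (same IPython.display machinery; fr is just the example value from above):
from IPython.display import Markdown, display

fr = 2  # GHz
display(Markdown(f"$f_r = {fr}$ GHz"))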
Currently, this is not possible, however there is a large discussion on this topic here https://github.com/ipython/ipython/pull/2592. The PR is currently closed, but a corresponding issue is opened https://github.com/ipython/ipython/issues/2958 and marked as wishlist.
Update
In the meantime an IPython extension has appeared which allows to render python variables in markdown cells. This extension is part of the IPython notebook extensions and works with IPython 2.x and 3.x. For a detailed description see the wiki page.
It is not officially supported, but installing the python markdown extension will allow you to do so. It is part of the nbextensions, for which you will find installation instructions on their github page. Make sure you'll enable the python markdown extension using a jupyter command or the extension configurator.
Calling python variables then should work with the {{var-name}} syntax, which is described in the readme of the corresponding github page (linked in the wiki):
For example: If you set variable a in Python
a = 1.23
and write the following line in a markdown cell:
a is {{a}}
It will be displayed as:
a is 1.23
Further info on this functionality being integrated into ipython/jupyter is discussed in the issue trackers for ipython and jupyter.
The link installing notebook extension
gives a clear description of what is necessary to enable the use of variables in markdown cells. Following it, I performed the following actions:
conda install -c conda-forge jupyter_contrib_nbextensions
jupyter contrib nbextension install --user
After the above commands completed successfully, I enabled the Python Markdown extension from the Jupyter dashboard (Nbextensions tab).
Last but not least!!! The NOTEBOOK HAS TO BE TRUSTED to make the markup extension work with Python variables,
and it worked for me!

Resources