Julia Jupyter Notebook persists despite logout from browser - julia

I'm running Julia in a terminal (macOS) and then launching a Jupyter notebook:
julia> using IJulia
julia> notebook(detached=true)
This works fine. However, when I log out of the notebook and close the browser, there is still a jupyter-notebook process running, and I have to kill -2 pid to make it go away.
Is this expected behaviour? Is there a parameter I need to set somewhere?

Yes, this is the expected behaviour for the way you've called notebook. The server doesn't stop when your browser page is closed (you might open another, for example!). Using the detached keyword argument means the server process is started in the background, so it doesn't block the Julia session you started it from, but it also means you'd have to do extra work if you wanted to stop it from there. From the IJulia readme:
You can use notebook(detached=true) to launch a notebook server in the background that will persist even when you quit Julia
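If you want to shut the detached server down without hunting for its PID, one option (just a sketch, assuming the jupyter executable is on your PATH, a reasonably recent notebook release, and the default port 8888) is to drive the jupyter CLI from the same Julia session:
julia> run(`jupyter notebook list`)        # show running servers and their ports
julia> run(`jupyter notebook stop 8888`)   # ask the server on port 8888 to shut down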

Related

Unable to set the kernel ("Not specified") for a jupyter notebook in PyCharm

As seen in the screenshot, the kernel is "Not specified" and cannot be set, since the dropdown is disabled. Can this be remedied?
The project interpreter is python3, and the project otherwise works fine for other file types.
I've run into the same issue of PyCharm not allowing me to select a Jupyter kernel other than the one that is saved to the notebook (the drop-down menu is either grayed-out as in your picture, or disappears altogether). This condition appears to occur when the kernel that was active when the notebook was saved is not available in the current environment. The only workaround I've found for this situation is:
1. Start a Jupyter notebook server using the same Python environment as you are using in PyCharm
2. From the Jupyter server web page, open the notebook
3. From the Kernel menu, select Change kernel, and then select the desired kernel
4. Save the notebook, then close/halt it
5. Re-open the notebook in PyCharm. It should then be able to execute with the kernel that was chosen in Step 3 above.
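If the dropdown is empty because the kernel saved in the notebook isn't registered in your current environment, it can also help to check which kernels are actually installed there before running through the steps above; the jupyter CLI lists them (assuming jupyter is on your PATH):
jupyter kernelspec list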

Jupyter notebook seems to remember previous path (!pwd) after being moved to a different directory?

I initially had a notebook in one directory in AWS SageMaker JupyterLab, say /A, but then moved it into /A/B. However, when I run !pwd in a jupyter notebook cell, I still get /A. This happens even when I press 'restart kernel'. How does the notebook remember this, and is there a way to prevent or reset this?
Thanks
I was actually using AWS SageMaker, and restarting the kernel from the toolbar was not enough. I needed to shut down the kernel session completely, by pressing 'shut down' in the "Running terminals and kernels" section in the left navigation.
They are currently discussing warning users about the need to restart the kernel when a notebook is moved.
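For what it's worth, the working directory can also be inspected and changed from inside the notebook itself; in an IJulia kernel, for example, something like this should work (just a sketch; a SageMaker notebook may well be running a Python kernel instead, and /A/B is the path from the question):
pwd()        # the directory the kernel process was started in, not necessarily where the .ipynb now lives
cd("/A/B")   # change it by hand if fully shutting down the kernel session is inconvenient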

Opening a specific Julia notebook (via IJulia) in the REPL

The Julia REPL (I'm using 0.6.2) makes it possible to do some work and then execute
julia> using IJulia
julia> notebook(dir=pwd(), detached=true)
which nicely launches jupyter in the directory specified by dir.
Is it possible, from the REPL, to specify a particular notebook to open?
This worked for me recently (julia v 1.5.1, macOS 10.14):
using IJulia
notebook(dir="/path/to/directory/with/my/notebook",detached=true)
One thing I noticed is that when jupyter is started in the background, Julia doesn't print a link telling you where to open the notebook in the browser.
I was able to open http://localhost:8888/ in my browser to find the notebook, although I thought a URL token would be needed. This link also worked in the browser:
file:///$HOME/Library/Jupyter/runtime/nbserver-12912-open.html
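If you do want the tokenised URL rather than the nbserver-...-open.html redirect, one way (again a sketch, assuming jupyter is on your PATH) is to ask the running server for it from the same REPL:
julia> run(`jupyter notebook list`)   # prints something like http://localhost:8888/?token=...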
I got this from the REPL documentation:
?notebook
#search: notebook
# notebook(; dir=homedir(), detached=false)
# ..etc
This kinda works (but feels like a hack):
julia> ;jupyter notebook someJuliaNotebook.ipynb 2>/dev/null &
which produces a relatively clutter-free terminal window I can keep using.
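If you'd rather stay inside Julia than drop into shell mode, a rough equivalent on Julia 1.x (a sketch, assuming jupyter is on your PATH and the same notebook file as above) is to launch the command in the background with run:
julia> nb = `jupyter notebook someJuliaNotebook.ipynb`
julia> run(pipeline(nb, stdout=devnull, stderr=devnull), wait=false)
This keeps the REPL usable and hides jupyter's log output, much like the 2>/dev/null & trick.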

IOPub data rate exceeded in Jupyter notebook (when viewing image)

I want to view an image in Jupyter notebook. It's a 9.9MB .png file.
from IPython.display import Image
Image(filename='path_to_image/image.png')
I get the below error:
IOPub data rate exceeded.
The notebook server will temporarily stop sending output
to the client in order to avoid crashing it.
A bit surprising and reported elsewhere.
Is this expected and is there a simple solution?
(Error msg suggests changing limit in --NotebookApp.iopub_data_rate_limit.)
Try this from your terminal prompt:
jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10
I ran into this using networkx and bokeh
This works for me in Windows 7 (taken from here):
To create a jupyter_notebook_config.py file, with all the defaults commented out, you can use the following command line:
$ jupyter notebook --generate-config
Open the file and search for c.NotebookApp.iopub_data_rate_limit
Uncomment the line c.NotebookApp.iopub_data_rate_limit = 1000000 and change it to a higher rate. I used c.NotebookApp.iopub_data_rate_limit = 10000000
This unforgiving default config is popping up in a lot of places. See the GitHub issues on jupyter about "IOPub data rate exceeded". It looks like it might get resolved with the 5.1 release.
Update:
Jupyter notebook is now on release 5.2.2. This problem should have been resolved. Upgrade using conda or pip.
Removing print statements can also fix the problem.
Apart from loading images, this error also happens when your code prints continuously at a high rate, e.g. if you have a print statement in a for loop somewhere that is called over 1000 times.
If you type jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10 in the Anaconda PowerShell or prompt, the Jupyter notebook will open with the new configuration. Now try running your query again.
Some additional advice for Windows(10) users:
If you are using Anaconda Prompt/PowerShell for the first time, type "Anaconda" in the search field of your Windows task bar and you will see the suggested software.
Make sure to open the Anaconda prompt as administrator.
Always navigate to your user directory or the directory with your Jupyter Notebook files first before running the command. Otherwise you might end up somewhere in your system files and be confused by an unfamiliar file tree.
The correct way to open Jupyter notebook with new data limit from the Anaconda Prompt on my own Windows 10 PC is:
(base) C:\Users\mobarget\Google Drive\Jupyter Notebook>jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10
I have the same problem in my Jupyter NB on Win 10 when querying from a MySQL database.
Removing any print statements solved my problem.
For already-running Docker containers, try editing the file ~/.jupyter/jupyter_notebook_config.py: uncomment the line c.NotebookApp.iopub_data_rate_limit = and set it to a high number like 1e10.
Then restart the container; that should fix the problem.
I ran into this problem running version 6.3.0. When I tried the top-rated solution by Merlin, the PowerShell prompt notified me that iopub_data_rate_limit has moved from NotebookApp to ServerApp. The solution still worked, but I wanted to mention the variation, especially as the NotebookApp form of the option may eventually be deprecated.
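Presumably the newer spelling of the same command-line flag would then be along these lines (not verified here, just following the rename mentioned above):
jupyter notebook --ServerApp.iopub_data_rate_limit=1.0e10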
An easy workaround is to loop and print one element at a time; then there won't be any issue. Printing wcc directly can trigger the error if the graph is huge. Either of the snippets below works:
import networkx as nx
wcc = list(nx.weakly_connected_components(train_graph))
for i in range(1, 10):
    print(wcc[i])
or
for component in wcc:
    print(component)
Like others pointed out, printing at a high rate can cause this. Resolve it by only printing every k-th iteration, using an if statement inside your loop. Example in Python:
k = 10
if i % k == 0:   # i is the loop counter
    print("Something")
Increase k if the warning persists.
Using Visual Studio Code, the Jupyter extension is able to handle big data; launch it from Anaconda Navigator.
In general, trying to print something that is too long will trigger this error. In my case it was a string 9221593 characters long.

Jupyter Notebooks Hang in Browser on Windows 10

I just installed Miniconda and the R Essentials bundle on my Windows 10 machine, following the instructions given here. Everything went swimmingly until I opened up an Anaconda command prompt and entered jupyter notebook and got an error. I then used ipython notebook which worked, so okay, no problem there.
However, after creating a new folder and trying to create a new R notebook within that folder, my Jupyter tabs started to hang. Whenever I try to do something, whether it is rename the notebook, run a block of code, basically anything, all of the Jupyter tabs sit there loading endlessly saying "Waiting for localhost..."
I try stopping the server and restarting it, but every time I try to do anything I get the same result. I also tried changing the port and running the command prompt as administrator--same result. I am using Chrome, which shouldn't be an issue.
Any ideas? I was really excited about using a Jupyter notebook to keep track of my analyses in R, but if I can't even get it to function out of the box I'll have to find a better solution.
