Got an error when creating a new script in Jupyter Notebook

I get an error when I try to create a new script in Jupyter Notebook.
It only shows an error, with no specific description.
Also, for the scripts I created before, I cannot run any cells.
Does anybody know how to solve these issues? Thanks a lot!

Related

jupyter notebook reindex with pandas error

I'm having a strange issue. I have a problem with reindexing when I call my script from a Jupyter notebook, but it works fine when I call it directly from PyCharm.
The first time I execute the notebook after starting Jupyter Notebook it works, but after that it never works again, and it gives me this error:
ValueError: cannot reindex from a duplicate axis
I suspect a problem between pandas and Jupyter Notebook, because this error never appears when I use PyCharm.
Do you have any idea how I can fix this problem so that I can call my script from a Jupyter notebook?
I'm using the same conda env for both the Jupyter notebook and PyCharm.
I found that Jupyter Notebook does not re-import modules every time you execute a cell.
I had a problem with a variable that was not being overwritten; I changed the constructor of my script so that it can be rewritten, and it works fine now.
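A minimal sketch of a workaround for the stale-module behavior described above, using IPython's autoreload extension (mymodule is a hypothetical placeholder for your own script):
%load_ext autoreload
%autoreload 2
import mymodule   # edits to mymodule.py are now picked up before each cell runs
Alternatively, importlib.reload(mymodule) forces a one-off re-import without restarting the kernel.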

Interactive R script on Binder does not work: **Kernel not found**

I am new to data science and code sharing.
I am trying to share my R code with my team. I created a Jupyter notebook, created a GitHub repository, and then uploaded my notebook to Binder.
My script on Binder is interactive; however, it runs as Python code, not R code.
There are a few steps I was not too sure about that might have messed this up:
1. For the requirements.txt file, I had to manually type the package names with their versions. I would love it if someone could tell me the right way to create a requirements file for an R script.
2. When creating the repository on GitHub, I didn't know what to pick for "Add .gitignore" or "Add a license", so I left them as "None".
When I open my notebook on Binder, it gives me the error "Kernel not found".
What am I doing wrong? Please help.
I have installed R in Jupyter Notebook, and it works fine when I open the notebook using Anaconda.
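As a hedged sketch of the usual mybinder.org convention (not something stated in the thread): Binder's requirements.txt is for Python packages, so an R repository instead needs a runtime.txt pinning an R version and snapshot date, plus an install.R listing the CRAN packages. The version and package names below are only examples:
runtime.txt:
r-4.2-2023-01-10
install.R:
install.packages(c("ggplot2", "dplyr"))
With these two files in the repository root, Binder builds an image with the R kernel registered, which should avoid the "Kernel not found" error.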

Unexpected input in "%load_ext rpy2.ipython" in R

I am new to R and trying to execute the code on this site, but unfortunately I get the error "Error: unexpected input in "%load_ext rpy2.ipython"" when entering "%load_ext rpy2.ipython" in the R console. I tried searching Google for answers, but no luck.
Any help would be appreciated. Thank you in advance.
rpy2.ipython is an extension for IPython (and for Jupyter when you use the IPython kernel), not for R. What you linked to uses IPython and calls into R from IPython, not into Python from within R. I'm not sure why they do that in what you linked; from a quick read, they could have used an R kernel, which would work without writing %%R each time.
So you should be able to reproduce what's there by not loading the rpy2 magic and skipping the %%R prefixes.
If you want to reproduce it exactly, you will need to install IPython/Jupyter and run the code from the Python console.
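For illustration, a minimal sketch of how that code is meant to be run, i.e. from a Python (IPython) kernel rather than the R console; the small vector is just an example. In one cell:
%load_ext rpy2.ipython
Then, in a separate cell, %%R marks the whole cell as R code:
%%R
x <- c(1, 2, 3)
mean(x)
In a plain R session, the same two R lines work on their own once the %-magics are dropped.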

IOPub data rate exceeded in Jupyter notebook (when viewing image)

I want to view an image in Jupyter notebook. It's a 9.9MB .png file.
from IPython.display import Image
Image(filename='path_to_image/image.png')
I get the below error:
IOPub data rate exceeded.
The notebook server will temporarily stop sending output
to the client in order to avoid crashing it.
A bit surprising and reported elsewhere.
Is this expected and is there a simple solution?
(Error msg suggests changing limit in --NotebookApp.iopub_data_rate_limit.)
Try this from your terminal prompt:
jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10
I ran into this using networkx and bokeh
This works for me on Windows 7 (taken from here):
To create a jupyter_notebook_config.py file with all the defaults commented out, you can use the following command line:
$ jupyter notebook --generate-config
Open the file and search for c.NotebookApp.iopub_data_rate_limit.
Uncomment the line c.NotebookApp.iopub_data_rate_limit = 1000000 and change it to a higher rate; I used c.NotebookApp.iopub_data_rate_limit = 10000000.
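For clarity, the edit in ~/.jupyter/jupyter_notebook_config.py goes from the commented default to the new value:
# c.NotebookApp.iopub_data_rate_limit = 1000000
c.NotebookApp.iopub_data_rate_limit = 10000000  # 10x the default (bytes/sec)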
This unforgiving default config is popping up in a lot of places; see the related GitHub issues on the jupyter repositories ("IOPub data rate exceeded").
It looks like it might get resolved with the 5.1 release.
Update:
Jupyter notebook is now on release 5.2.2. This problem should have been resolved. Upgrade using conda or pip.
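For example, depending on how Jupyter was installed, either of these should do it:
conda update notebook
pip install --upgrade notebook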
Removing print statements can also fix the problem.
Apart from loading images, this error also happens when your code prints continuously at a high rate, which causes the "IOPub data rate exceeded" error, e.g. if you have a print statement in a for loop somewhere that is called over 1000 times.
By typing 'jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10' in the Anaconda PowerShell or prompt, Jupyter Notebook will open with the new configuration. Now try to run your query again.
Some additional advice for Windows (10) users:
If you are using Anaconda Prompt/PowerShell for the first time, type "Anaconda" in the search field of your Windows task bar and you will see the suggested software.
Make sure to open the Anaconda prompt as administrator.
Always navigate to your user directory or the directory with your Jupyter Notebook files first before running the command. Otherwise you might end up somewhere in your system files and be confused by an unfamiliar file tree.
The correct way to open Jupyter notebook with new data limit from the Anaconda Prompt on my own Windows 10 PC is:
(base) C:\Users\mobarget\Google Drive\Jupyter Notebook>jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10
I had the same problem in my Jupyter notebook on Windows 10 when querying a MySQL database.
Removing any print statements solved my problem.
For already-running Docker containers, try editing the file ~/.jupyter/jupyter_notebook_config.py:
uncomment the line c.NotebookApp.iopub_data_rate_limit
and set it to a high number like 1e10.
Then restart the container; that should fix the problem.
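A sketch of that workflow, assuming the container is named jupyter (the name and the config path inside the container are placeholders):
docker exec -it jupyter bash
# inside the container: edit ~/.jupyter/jupyter_notebook_config.py as described above, then exit
docker restart jupyter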
I ran into this problem running version 6.3.0. When I tried the top-rated solution by Merlin, the PowerShell prompt notified me that iopub_data_rate_limit has moved from NotebookApp to ServerApp. The solution still worked, but I wanted to mention the variation, especially as the internal handling of the config may become deprecated.
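Under that newer naming, the equivalent invocation would presumably be:
jupyter notebook --ServerApp.iopub_data_rate_limit=1.0e10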
An easy workaround is to create a for loop and print inside it; then there won't be any issue. Printing wcc directly causes the error if the graph is huge. Hence either of the snippets below works as a workaround (train_graph is the directed graph from the original code):
import networkx as nx  # wcc = weakly connected components

wcc = list(nx.weakly_connected_components(train_graph))
for i in range(1, 10):      # print only the first few components
    print(wcc[i])
for component in wcc:       # or print every component, one at a time
    print(component)
Like others pointed out, printing at a high rate can cause this. Resolve it by printing only every k-th iteration, using an if statement. Example in Python:
k = 10
for i in range(10000):
    if i % k == 0:
        print("Something")
Increase k if the warning persists.
Using Visual Studio Code, the Jupyter extension is able to handle big data; launch it from Anaconda Navigator.
In general, trying to print something that is too long will trigger this error. I tried to print a string that was 9221593 characters long, and that triggered the error.

Kernel loading forever in jupyter notebook

I'm using OS X Yosemite.
I've updated IPython via conda, and it turns out Notebook also has an updated version, which I'm very excited to try. The notebook has been converted into Jupyter.
I'm using Python 2.x and already have existing .ipynb files. When I open one, a new window appears, but it is blank. I can create a first cell there, but I already have my notebook, and the kernel also loads forever. There isn't any error log in the console. What do I have to do? Please help!
Never mind, this solves the problem: https://github.com/ipython/ipython/issues/5746
I was using the ccp notebook extension, and as Ian Hawke mentioned in the thread, I removed the call to the extension in profile/static/custom/custom.js.
