ggplot graphs not outputting in Jupyter Notebook (R/tidyverse) - r

My Jupyter notebook won't output any of my ggplot objects no matter what I do. I've restarted the kernel, the Jupyter notebook, Anaconda Navigator, and my computer, and nothing seems to help. I've tried opening new notebooks and reformatting, but nothing is working.

Related

Running Jupyter notebook (and generating plots) from the command line

I'm trying to use the terminal to run a Jupyter notebook (kernel: Julia v1.6.2), which contains plots generated using Plots.jl, before uploading the notebook to GitHub for viewing on nbviewer.com.
Following this question:
How to run an .ipynb Jupyter Notebook from terminal?
I have been using nbconvert as follows:
jupyter nbconvert --execute --to notebook --inplace
This runs the notebook (if you tweak the timeout limits); however, it does not display plots when using Plots.jl, even when I explicitly call display(plot()) at the end of a cell.
Does anyone have any idea how notebooks can be run remotely in such a manner that plots will be generated and displayed, particularly when using Julia?
I managed to generate Plots.jl plots by getting from IJulia the same configuration it uses to run notebooks (this is probably the surest way when you have many Pythons etc.).
using Conda, IJulia
Conda.add("nbconvert") # I made sure nbconvert is installed
mycmd = IJulia.find_jupyter_subcommand("nbconvert")  # the same jupyter command IJulia itself uses
append!(mycmd.exec, ["--ExecutePreprocessor.timeout=600", "--to", "notebook", "--execute", "note1.ipynb"])
Now mycmd has exactly the same environment as seen by IJulia, so we can run it with run(mycmd):
julia> run(mycmd)
[NbConvertApp] Converting notebook note1.ipynb to notebook
Starting kernel event loops.
[NbConvertApp] Writing 23722 bytes to note1.nbconvert.ipynb
The outcome got saved to note1.nbconvert.ipynb; I opened it with nteract to confirm that the graphs actually got generated.
Launch the notebook with using IJulia and then notebook() in the REPL.

jupyter notebook reindex with pandas error

I'm having a strange issue. I have a problem with reindexing when I call my script from a Jupyter notebook, but it works fine when I call it directly from PyCharm.
The first time I execute the notebook after starting Jupyter it works, but then it never works again, and it gives me this error:
ValueError: cannot reindex from a duplicate axis
I suspect a problem between pandas and the Jupyter notebook, because this error never appears when I use PyCharm.
Do you have any idea how I can fix this problem so that I can call my script from a Jupyter notebook?
I'm using the same conda env for both the Jupyter notebook and PyCharm.
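For context, this error usually means the index you are reindexing from contains duplicate labels (the exact wording varies between pandas versions). A minimal sketch with made-up data that reproduces it:
import pandas as pd

# A frame whose index has a duplicate label ("a" appears twice)
df = pd.DataFrame({"x": [1, 2, 3]}, index=["a", "a", "b"])

# Reindexing against the duplicated axis raises the ValueError from the question
df.reindex(["a", "b", "c"])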
I found that Jupyter notebook does not re-import modules every time you execute a cell.
I had a problem with a variable that was not being overwritten; I changed my script's constructor so the variable could be rewritten, and it works fine now.
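If you hit the same caching behaviour, a common workaround (different from the constructor change above) is to force a re-import with importlib; my_script here is just a placeholder for your own module:
import importlib
import my_script  # placeholder name for the module the notebook calls

# Jupyter keeps imported modules cached between cell executions, so a plain
# "import" will not pick up changes; reload forces a fresh import instead.
importlib.reload(my_script)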

jupyter notebook takes forever to open and then pages unresponsive - [MathJax] issue

I'm trying to open a Jupyter notebook and it takes a long time, and I see at the bottom that it's trying to load various [MathJax] extensions, e.g. at the bottom left of the Chrome browser it says:
Loading [MathJax]/extensions/safe.js
Eventually, the notebook loads, but it's frozen and then at the bottom left it keeps showing that it's trying to load other [MathJax] .js files.
Meanwhile, the "pages unresponsive, do you want to kill them" pop-up keeps appearing.
I have no equations or plots in my notebook so I can't understand what is going on. My notebook never did this before.
I googled this and some people said to delete the ipython checkpoints. Where would those be? I'm on Mac OS and using Anaconda.
Install nbstripout and run it on the notebook:
conda install -c conda-forge nbstripout
nbstripout filename.ipynb
Make sure that there is no whitespace in the filename.
I had a feeling that the program in my Jupyter notebook was stuck trying to produce some output, so I restarted the kernel and cleared output and that seemed to do the trick!
If Jupyter crashes while opening the .ipynb file, try "using nbstripout to clear output directly from the .ipynb file via command line" (bndwang). Install it with pip install nbstripout.
I was having the same problem with jupyter notebook. My recommendations to you are as follows:
First, check the size of the .ipynb file you are trying to open. The file size is probably large, in the MB range. One reason for this might be the output of a dataset for which you previously displayed all rows.
For example:
To check a dataset, I sometimes use pd.set_option('display.max_rows', None) instead of the .head() function, so I can view all the rows in the dataset.
The large number of outputs increases the file size, making the notebook slower. Try to delete such outputs.
I think this will solve your problem.
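As a sketch of the point above (the file name is a placeholder): display the full frame only while you need it, then restore the default so the saved output stays small.
import pandas as pd

df = pd.read_csv("data.csv")  # placeholder dataset

# Showing every row writes all of them into the cell output and inflates the .ipynb file
pd.set_option('display.max_rows', None)

# When you are done inspecting, restore the default and prefer head()
pd.reset_option('display.max_rows')
df.head()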
Restarting your kernel will not help here. Instead, use nbstripout to strip the output from the command line.
Run this command -> nbstripout FILE.ipynb
Install nbstripout first if it is not there: https://pypi.org/project/nbstripout/
It happened to me the time I decided to print a matrix 100,000 times. The notebook file grew to 150 MB and Jupyter (in Chrome) was not able to open it: it said all the things you experienced and then the page died saying it was "OutOfMemory".
I solved the issue by opening it in Visual Studio Code, where there is a "Clear All Output" button; then I saved the notebook again and it was back to a few hundred KB, which I could open normally.
If you don't have Visual Studio Code installed, you can open the notebook with another editor (gedit on Linux or Notepad++ on Windows) and try to delete the output cells. This is trickier, since you have to pay close attention to what you are deleting, otherwise the notebook will stop working.
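If you would rather not edit the JSON by hand, one alternative (not mentioned in the answers above, so treat it as a suggestion) is to clear the outputs programmatically with the nbformat library; the file name below is a placeholder:
import nbformat

nb = nbformat.read("notebook.ipynb", as_version=4)  # placeholder file name

# Remove outputs and execution counts from every code cell
for cell in nb.cells:
    if cell.cell_type == "code":
        cell.outputs = []
        cell.execution_count = None

nbformat.write(nb, "notebook.ipynb")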

Use workspace of an RStudio session in Jupyter notebook

Since my RAM is scarce, I'd like to avoid replicating data and instead use objects created in an RStudio session inside my Jupyter notebook (running with an R kernel).
Any idea how to do it?
Basically I'd like to use the same workspace in both, the RStudio and the Jupyter notebook session.
Thanks for help!
One problem I encountered with an R notebook in Jupyter, though, was saving my workspace. In a normal R session I’m used to saving my workspace at the end of the session and coming back to it later to pick up where I left off. However, with the Jupyter notebook I found that I had to rerun all the code to regenerate all the objects again! This appears to be an issue for Python notebook users too.
There’s a very simple fix for this: Just run the standard R command
save.image()
Your workspace will then be saved to the usual hidden .RData file in the same folder as the Jupyter notebook. If you want to share the code and the workspace, you’ll have to make sure that you copy both the notebook file and the .RData file that goes along with it.
Likewise, if you start a notebook in a folder that already has an .RData file, you’ll find that you can access that workspace from the Jupyter notebook – just run ls() to see what’s there.

Jupyter Notebooks Hang in Browser on Windows 10

I just installed Miniconda and the R Essentials bundle on my Windows 10 machine, following the instructions given here. Everything went swimmingly until I opened up an Anaconda command prompt and entered jupyter notebook and got an error. I then used ipython notebook which worked, so okay, no problem there.
However, after creating a new folder and trying to create a new R notebook within that folder, my Jupyter tabs started to hang. Whenever I try to do something, whether it is rename the notebook, run a block of code, basically anything, all of the Jupyter tabs sit there loading endlessly saying "Waiting for localhost..."
I tried stopping the server and restarting it, but every time I try to do anything I get the same result. I also tried changing the port and running the command prompt as administrator, with the same result. I am using Chrome, which shouldn't be an issue.
Any ideas? I was really excited about using a Jupyter notebook to keep track of my analyses in R, but if I can't even get it to function out of the box I'll have to find a better solution.
