I'm looking for a library/tool that can generate a .py file from a notebook, with the added feature that I want to control which cells (both code and markdown cells) get exported and/or excluded from the .py file. For example, adding something like %exclude or %include to cells and having control at export time over how these tags are handled. I found jupytext, which may do this, but its version-control workflow (the pairing of .ipynb and .md files) confused me.
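For context, an .ipynb file is plain JSON, so tag-based filtering can be sketched by hand in a few lines of Python. This is only an illustrative sketch, not any library's API: the tag name exclude and the function name export_py are assumptions made up for the example, and real exporters read the same tags from each cell's metadata.tags field.

```python
import json

def export_py(nb_json, exclude_tag="exclude"):
    """Build a .py script from notebook JSON, skipping tagged cells.

    Code cells are emitted as-is; markdown cells become comments.
    Cells whose metadata carries the `exclude` tag are dropped.
    (Hypothetical helper for illustration only.)
    """
    lines = []
    for cell in nb_json["cells"]:
        tags = cell.get("metadata", {}).get("tags", [])
        if exclude_tag in tags:
            continue
        source = "".join(cell["source"])
        if cell["cell_type"] == "code":
            lines.append(source)
        elif cell["cell_type"] == "markdown":
            # Comment out markdown so the .py file stays runnable.
            lines.append("\n".join("# " + l for l in source.splitlines()))
    return "\n\n".join(lines) + "\n"

# A minimal dict mirroring the JSON layout of a real .ipynb file:
nb = {"cells": [
    {"cell_type": "markdown", "metadata": {}, "source": ["# Title"]},
    {"cell_type": "code", "metadata": {"tags": ["exclude"]}, "source": ["secret = 1\n"]},
    {"cell_type": "code", "metadata": {}, "source": ["x = 42\n"]},
]}
print(export_py(nb))
```

With a real notebook you would load the dict via json.load(open("notebook.ipynb")) instead of building it by hand.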
I wrote a Python script that generates a tree and writes some variable definitions and function calls in Julia syntax to a text file (I am testing the correctness of some Julia tree algorithms in phylogenetics).
I was wondering if there is a way to "run" the text file in a Julia Jupyter notebook?
It gets tedious to manually copy the file and run it as I am generating many files.
You can run include("treealgos.jl") in a Jupyter cell to run the entire file there. It is equivalent to copy-pasting the file's contents into that cell, and all the variables and functions defined in the file become available in the notebook afterwards.
Note that this is very different from using or importing a module, which requires a module name and comes with extra features like namespacing and exports. include, in contrast, is a more basic and simpler feature, similar to #include in C: it simply brings the included code into wherever the include statement appears.
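For comparison, the closest Python analog of Julia's include is executing a file's text in the current namespace, rather than importing it as a module. This is only an illustrative sketch; the file name treealgos_demo.py is made up here, standing in for the generated files:

```python
import pathlib

# Write a small "generated" script, standing in for the files the
# tree-generating script produces (the file name is hypothetical).
pathlib.Path("treealgos_demo.py").write_text(
    "n_leaves = 7\ndef depth():\n    return 3\n"
)

# Rough equivalent of Julia's include("file.jl"): execute the file's
# source in the current global namespace, so its definitions become
# available directly -- no module object, no namespacing.
exec(pathlib.Path("treealgos_demo.py").read_text())

print(n_leaves, depth())
```

As with Julia's include, everything the file defines lands in the caller's namespace, which is convenient for generated scripts but offers none of the isolation a module import provides.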
I am working on my office computer, which, due to security restrictions, does not allow me to install programs (like MiKTeX). So I decided to export my notebook to .html.
As you can see, the rendering is not good:
Some code is cut off.
A lot of space is wasted: there are large blank margins that I would like to remove completely, and I do not need the In[1] prompt in front of each cell, so the code would be more visible.
How would you go about getting an HTML file that prints well?
I'm not sure if the following steps meet your requirements.
(Open a terminal in the directory containing the notebook, then run the command.)
The following command exports the .ipynb file to an .html file; by specifying the TemplateExporter.exclude_input_prompt and TemplateExporter.exclude_output_prompt options, the In[] and Out[] prompts are omitted.
jupyter nbconvert --to html --TemplateExporter.exclude_input_prompt=True --TemplateExporter.exclude_output_prompt=True <your_file>.ipynb
TemplateExporter.exclude_input_prompt=True removes the In[] prompts, while TemplateExporter.exclude_output_prompt=True removes the Out[] prompts from the generated .html file.
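If you export often, the same options can be set once in an nbconvert configuration file instead of being repeated on every command line. A sketch of that config fragment follows; the file location ~/.jupyter/jupyter_nbconvert_config.py is the usual default, but check jupyter --config-dir on your system:

```python
# jupyter_nbconvert_config.py -- read by nbconvert on startup;
# `c` is the config object nbconvert provides to this file.
c.TemplateExporter.exclude_input_prompt = True
c.TemplateExporter.exclude_output_prompt = True
```

With this in place, a plain jupyter nbconvert --to html <your_file>.ipynb produces the same prompt-free output.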
Is there a convenient way to render all markdown cells in a Jupyter notebook at once without running the code cells?
I find it quite annoying that while moving through my notebook and making small corrections, the markdown cells "lose" their formatting. Is there an extension or a command with which I can "run" (i.e. render) all and only the markdown cells? If not, is there a way to at least update the table of contents from the markdown cells? My table of contents is produced via nbextensions.
If you are not limited to plain Jupyter notebooks, you could use JupyterLab, which provides a Render All Markdown Cells action. Doing this programmatically from within the notebook seems to be nontrivial, as we can gather from this GitHub issue. We might be able to implement it ourselves, but I am not aware of any existing resources that provide something similar.
What is the difference between using R console vs writing R code in a text file? I wrote this question on Kaggle but there were no previous questions on this matter.
When you supply code via a text file (.R file), you "run the file with R" without interacting with it, and it can stop somewhere due to an error (which can be handled, etc.). Running an .R file with R (for example via a .bat file) also generates an .Rout file, which is basically a printout of the console plus some additional info such as runtime.
If you feed the code into the console, each line is treated independently: even if one line errors, you can still run the next (though if it depends on the failed command, it will fail too), and you see each result as soon as the command runs. Unlike with an .R file, you have no copy of the code other than what is stored in the session, so you will need to save the code to disk if you want it to persist between sessions. You can use any text format for this, from plain .txt to .docx, but if you use the .R format you can edit the file with Notepad++ or Notepad and still run the file with R (via a .bat file, for example). If you opt against storing the code in an .R file, you will have to feed it to the console again to rerun it.
In RStudio you can open .R files, manage (extend, correct) your code, and feed it to the console command by command or as a block. So one could say you use .R files to manage your code, with the option of running those files directly with R, on a button click or repeatedly, for example.
Not sure if that is what you are looking for?
I wish to use libraries across multiple .Rmd files in an R notebook without having to reload the library each time.
An example: I have loaded the library kableExtra in the index.Rmd file, but when I call it in another .Rmd file, such as ExSum.Rmd, I get this error:
Error in Kable....: could not find function "kable" Calls:...
If I load the kableExtra library again this problem goes away. Is there a workaround?
R Markdown files are intended to be standalone, so you shouldn't do this. There are two workarounds that come close:
If you process your .Rmd files within the R console by running code like rmarkdown::render("file.Rmd") then any packages attached in the session will be available to the code in the .Rmd file.
You can put all the setup code (e.g. library(kableExtra)) into a file (called setup.R, for example), and source it into each document in the first code chunk using source('setup.R'). Every file will run the same setup, but you only need to type it once.
The second is the better approach.