ProgressBar in IJulia printing every new line - julia

I am currently learning Julia and have been practicing in the Jupyter notebook environment. However, the progress bar from the ProgressBars package (similar to tqdm in Python) prints a new line on every update instead of updating in place (picture attached). Is there any way to fix this? Thanks.
UPDATE : Here is the full function that I wrote.
function spike_rate(raw_dat, width)
    N = size(raw_dat)[1]
    domain = collect(1:N)
    spike_rat = zeros(N)
    for i in ProgressBar(1:N)
        dx = i .- domain
        window = gaussian.(dx, width)
        spike_rat[i] = sum(window .* raw_dat) ./ width
    end
    return spike_rat
end

This seems to be a known issue with ProgressBars.jl, unfortunately. It's not clear what changed to make these progress bars not work properly anymore, but the maintainer's comment says that tqdm uses "a custom ipywidget" to make this work for the Python library, and that hasn't been implemented for the Julia package yet.
To expand on @Zitzero's mention of ] up: that calls Pkg.update(), which also prints a progress bar, so the suggestion is to use the mechanism Pkg itself uses. Pkg has an internal module called MiniProgressBars which handles this output.
Edit: Tim Holy's ProgressMeter package seems well-maintained, and is a much better option than relying on an internal non-exported Pkg submodule with no docs. So I'd recommend ProgressMeter over the below.
The Readme mentions a caveat regarding printing additional information with the progress bar when in Jupyter, which likely applies to MiniProgressBar as well. So, using ProgressMeter, and separating the progress output vs other relevant output to different cells, seems like the best option.
(not recommended)
using Pkg.MiniProgressBars

bar = MiniProgressBar(; indent = 2, header = "Progress", color = Base.info_color(),
                      percentage = false, always_reprint = true)
bar.max = 100
# start_progress(stdout, bar)
for i in 1:100
    sleep(0.05) # replace this with your code
    # print_progress_bottom(stdout)
    bar.current = i
    show_progress(stdout, bar)
end
# end_progress(stdout, bar)
This is based on how Pkg uses it, from this file. The commented out lines (with start_progress, print_progress_bottom, and end_progress) are in the original usage in Pkg, but it's not clear what they do and here they just seem to mess up the output - maybe I'm using them wrongly, or maybe Jupyter notebooks only support a subset of the ANSI codes that MiniProgressBars uses.

There is a way; the package manager does this, as far as I know, when you run:
] up
Could you share a little of your code?
For example, where you define what is written to the console.
One guess is that
print("text and progressbar")
instead of
println("text and progressbar")
could help, because println() always appends a newline, while print() does not, so subsequent output stays on the current line (and, with a carriage return, can overwrite it).

Related

Difference between environment etc. in testthat::test_check vs testthat::test_dir

This is a somewhat deep R testing question, and as such I'm not sure if general Stack Overflow is the right place for it, or if there's an R-specific forum that would be better.
Any pointers on that are welcome.
The scenario is: I have a package that uses testthat and has some tests in tests/testthat, and (for reasons that are important but that, to be honest, I don't totally understand) there are some other tests in inst/validation that need to be run as well, as part of a validation script (i.e. the script that this post is about).
I was running test_check(pkg) in my tests folder and it was working fine, but I wasn't getting the extra tests (which makes sense). So then I switched to the following:
test_dirs <- c("tests/testthat", "inst/validation")
for (.t in test_dirs) {
  test_dir(.t)
}
Now a bunch of my tests are failing because they can't find some of the constants, etc. that are part of my package! (see note at the bottom for more details...)
So I dig into the source code and find that test_check() actually calls testthat:::test_package_dir under the hood. Note the :::; this is an unexported function, so I don't really want to just call it in my own code.
testthat:::test_package_dir in turn calls the following, before calling test_dir() itself:
env <- test_pkg_env(package)
withr::local_options(list(topLevelEnvironment = env))
withr::local_envvar(list(
  TESTTHAT_PKG = package,
  TESTTHAT_DIR = maybe_root_dir(test_path)
))
test_dir(...
Sooooo... it seems like test_check() essentially just does some things to load the package environment (note test_pkg_env is also unexported) and then calls test_dir().
So I guess my question is: why? I've actually noticed this before with test_file() not working because it doesn't have everything in the package environment. Why do these functions not load the package environment like the other testing functions do?
Or really, my question is: is there a way to make them load it? And specifically in my case, is there a way to do what I'm trying to do (run tests in a few different directories) and have it load the package environment?
I notice this in the test_dir() docs:
env -- Environment in which to execute the tests. Expert use only.
which is set to test_env() by default. I have a feeling this is my answer, but I can't figure out how to get the package environment without basically copy/pasting a bunch of code out of functions that are hidden in :::. Perhaps I don't qualify as an "expert"...
Thanks for any insight and/or solutions!
note at the bottom:
Specifically my issue is that I have some "constants" in my aaa.R that are mostly just hard-coded strings or lists like:
SUMMARY_NAME  <- "summary"
SUMMARY_COUNT <- "sum_count"
SUMMARY_PATH  <- "sum_path"

SUM_REQ_COLS <- list(
  list(name = SUMMARY_NAME,  type = "character"),
  list(name = SUMMARY_COUNT, type = "numeric"),
  list(name = SUMMARY_PATH,  type = "character")
)
These are things that I use for checking S3 classes and other purposes so that I don't have hard-coded strings all over my code. The point is: I use some of these in my tests, which works fine for test_check() and devtools::check() and devtools::test() but dies when I try to use test_dir() or test_file() because they can't be found, presumably because the package environment isn't loaded.
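For what it's worth, here is a minimal sketch of the workaround I'm leaning toward, assuming the env argument quoted above will accept an arbitrary environment ("mypkg" is a placeholder for the real package name; untested):
library(testthat)

pkg <- "mypkg"  # placeholder: your package's name
test_dirs <- c("tests/testthat", "inst/validation")

for (.t in test_dirs) {
  # parenting the evaluation environment to the package namespace should make
  # unexported objects like SUMMARY_NAME visible to the tests
  test_dir(.t, env = new.env(parent = asNamespace(pkg)))
}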

Editing a function in R using trace?

I noticed there is a bug in a function from a package I want to use. An issue has been made on GitHub, but the creator hasn't addressed this yet, and I need the function as soon as possible.
Therefore I want to edit the code. Apparently this is possible by editing the source and repacking and reinstalling the entire package, or by rewriting the function and reassigning it into the namespace, but also possibly by just editing the function in the current session using trace().
I already found out I can do:
as.list(body(package:::function_inside_function))
The line I want to edit is located in the second step of the function.
Specifically, it is this line in the code I need to edit. I have to change ignore.case to ignore.case=TRUE. An example in case the link dies:
functionx <- function() { if (...) {...} else if (grepl("miRNA", data.type, ignore.case)) {...} }
I haven't really found a practical example on how to proceed from here, so can anyone show me an example of how to do this, or lead me to a practical example of using trace? Or perhaps reassigning the function to the namespace?
For your specific case, you could probably indeed work around it using trace.
From the link you provide I don't know why you speak of a function inside a function, but this should work:
# example
trace("grepl", tracer = quote(ignore.case <- TRUE))
grepl("hi", "Hi")
## Tracing grepl("hi", "Hi") on entry
## [1] TRUE
# your case (I assume)
trace("readTranscriptomeProfiling", tracer = quote(ignore.case <- TRUE))
Note that this would be more complicated if the argument ignore.case that you want to fix wasn't already at the right position in the call.
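As a hedged addition (not tested against your package): since the function appears to be unexported (the package::: in your question suggests so), you may need trace()'s where argument so it patches the copy inside the package namespace, and untrace() to undo the change afterwards. readTranscriptomeProfiling and TCGAbiolinks are taken from your question; adjust as needed.
# patch the unexported function inside the package namespace
trace("readTranscriptomeProfiling",
      tracer = quote(ignore.case <- TRUE),
      where  = asNamespace("TCGAbiolinks"))

# ... run the workflow that eventually calls the traced function ...

# remove the patch when you're done
untrace("readTranscriptomeProfiling", where = asNamespace("TCGAbiolinks"))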
I faced a similar problem once and solved it using assignInNamespace(). I don't have your package installed so I cannot be sure this will work for you, but I think it should. You would proceed as follows:
Make the version of the function you want, as edited:
# I would just copy the function off GitHub and change the offending line
readTranscriptomeProfiling <- function() { "Insert code here" }

# Get the problematic version of the function out of the package namespace
tmpfun <- get("readTranscriptomeProfiling",
              envir = asNamespace("TCGAbiolinks"))

# Make sure the new function has the environment of the old
# function (there are possibly easier ways to do this -- I like
# to get the old function out of the namespace to be sure I can do
# it and am accessing what I want to access)
environment(readTranscriptomeProfiling) <- environment(tmpfun)

# Replace the old version of the function in the package namespace
# with your new version
assignInNamespace("readTranscriptomeProfiling",
                  readTranscriptomeProfiling, ns = "TCGAbiolinks")
I found this solution in another StackOverflow response, but cannot seem to find the original at the moment.

Modifying an R package function for current R session; assignInNamespace not behaving like fixInNamespace?

I would like to be able to modify a hidden function inside an R package in an "automated" way, like using fixInNamespace, but where I can write the code in advance rather than in the edit window that fixInNamespace opens. I think assignInNamespace could do the job, but currently it's not working. Here's an example of the problem.
require(quantmod)
getSymbols("AAPL")
chartSeries(AAPL) # Works fine up to here.
Now say I want the y-axis ticks to be drawn on the left side of the plot, instead of the right. This can be done by modifying the source code in the quantmod package. The relevant code for modifying the plot layout is in a hidden quantmod function called chartSeries.chob.
This could be done by doing this:
fixInNamespace("chartSeries.chob", ns = "quantmod")
and in the edit window, manually modify line 117 from axis(4) to axis(2), click OK, and again run chartSeries(AAPL) (now the y axis labels will plot on the left side of the plot). Everything is good, the plot is generated as expected, no problems.
But ... now suppose I want to modify chartSeries.chob in advance (in an automated way), presumably by sourcing a modified version of the chartSeries.chob function, without using the edit window. I might want to modify dozens of lines in the function, for example, and opening the edit window each time for a new R session is not practical.
How can I do this?
Right now I am doing this, which is not working:
assignInNamespace("chartSeries.chob", value = chartSeries.chob2, ns = "quantmod")
where I source from the console a full copy of chartSeries.chob with the modified code on line 117.
chartSeries.chob2 <- function(x)
{
    old.par <- par(c("pty", "mar", "xpd", "bg", "xaxs", "las",
        "col.axis", "fg"))
    on.exit(par(old.par))
    ...
    axis(2)  # line 117: was axis(4)
    ...
}
When I run from the console:
chartSeries(AAPL)
or
quantmod:::chartSeries(AAPL)
I get errors -- the calls to other functions in quantmod from within the chartSeries.chob function are not found, presumably because the edited chartSeries.chob function is not in the quantmod namespace?
I notice that when typing quantmod:::chartSeries.chob from the console after the assignInNamespace command, there is no environment: namespace:quantmod at the end of the function definition.
But if I do the fixInNamespace modification approach, when I type quantmod:::chartSeries.chob, then I do see environment: namespace:quantmod appended to the end of the function definition.
Since fixInNamespace calls assignInNamespace, you should be able to get it to work; the problem is probably that the environment is not the same, and possibly some other attributes. If you change those to match then I would expect it to work better, possibly using code like:
tmpfun <- get("chartSeries.chob", envir = asNamespace("quantmod"))
environment(chartSeries.chob2) <- environment(tmpfun)
attributes(chartSeries.chob2) <- attributes(tmpfun) # don't know if this is really needed
assignInNamespace("chartSeries.chob", chartseries.chob2, ns="quantmod")
Another option for some changes would be to use the trace function. This would make temporary changes and would be fine for inserting code, but I don't know if it would be reasonable for deleting commands or modifying in place (specifying an editor that changed the code rather than letting you change it might make this possible).
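In the same spirit, here is a hedged sketch of one way to script the edit itself instead of retyping the whole function: deparse the packaged function, do a text substitution, re-parse it, restore its environment, and assign it back. This assumes axis(4) occurs only where you want to change it; it's a sketch, not a supported quantmod API.
orig <- get("chartSeries.chob", envir = asNamespace("quantmod"))

# text-level edit: swap the axis(4) call for axis(2)
src <- deparse(orig)
src <- sub("axis(4)", "axis(2)", src, fixed = TRUE)

# rebuild the function, keep the original namespace environment, assign it back
patched <- eval(parse(text = src))
environment(patched) <- environment(orig)
assignInNamespace("chartSeries.chob", patched, ns = "quantmod")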

Function to clear the console in R and RStudio

I am wondering if there is a function to clear the console in R and, in particular, RStudio. I am looking for a function that I can type into the console, not a keyboard shortcut.
Someone has already provided such a function in this StackExchange post from 2010. Unfortunately, this depends on the RCom package and will not run on Mac OS X.
cat("\014")
is the code to send CTRL+L to the console, and therefore will clear the screen.
Far better than just sending a whole lot of returns.
If you are using the default R console, the key combination Option + Command + L will clear the console.
You may define the following function
clc <- function() cat(rep("\n", 50))
which you can then call as clc().
shell("cls") if on Windows,
shell("clear") if on Linux or Mac.
(shell() passes a command (or any string) to the host terminal.)
cat("\f") may be easier to remember than cat("\014").
It works fine for me on Windows 10.
In Ubuntu-Gnome, simply pressing CTRL+L should clear the screen.
This also seems to work well in Windows 10 and 7, and in Mac OS X Sierra.
Here's a function:
clear <- function() cat(c("\033[2J","\033[0;0H"))
then you can simply call it, as you call any other R function, clear().
If you prefer to simply type clear (instead of having to type clear(), i.e. with the parentheses), then you can do
clear_fun <- function() cat(c("\033[2J","\033[0;0H"));
makeActiveBinding("clear", clear_fun, baseenv())
I developed an R package that will do this, borrowing from the suggestions above. The package is called mise, as in "mise en place." You can install and run it using
install.packages("mise")
library(mise)
mise()
Note that mise() also deletes all variables and functions and closes all figures by default. To just clear the console, use mise(vars = FALSE, figs = FALSE).
In Linux, use system("clear") to clear the screen.
If you are using the default R console CTRL + L
RStudio - CTRL + L
cat("\014") . This will work. no worries
You can combine the following two commands
cat("\014");
cat(rep("\n", 50))
Another option for RStudio is rstudioapi::sendToConsole("\014"). This will work even if output is diverted.
sink("out.txt")
cat("\014") # Console not cleared
rstudioapi::sendToConsole("\014") # Console cleared
sink()
I know this question is very old, but I found myself visiting it many times looking for a totally different answer:
n <- 20
for (i in 0:n) {
  cat(100 * i / n, "% \r")
  flush.console()
  Sys.sleep(0.01) # do something slow
}
flush.console() will kind of "clear the console in R and RStudio", maybe not in OP's terms, but still.
This code will act like a progress bar in the console. For each iteration, the percentage is incremented and then erased on the next iteration.
Note that this won't work without the \r, or with a \n: the carriage return moves the cursor back to the start of the line so the next cat() overwrites it, whereas a newline would push each update onto its own line.

How to preserve changes to function with fix() between R sessions?

If I edit a function with R v2.14.0 using fix(), those fixes are applied during the session.
For example, I might make the following edit to get a white background in a hive plot:
> library(HiveR)
> fix(plotHive)
... :%s/black/white/g
... :w
... :q
> plotHive(myHiveData)
I then get a white background in the hive plot, as expected.
But if I quit and reopen R, I have lost those changes, and the plot has a black background again.
How do I preserve the edits I make with fix() between R sessions?
EDIT
If I source() the modified plotHive() function, I get the following error:
> modifiedPlotHive <- source("modifiedPlotHive.R")
Error in source("modifiedPlotHive.R") :
modifiedPlotHive.R:1160:1: unexpected '<'
1159: }
1160: <
^
In addition: Warning message:
In readLines(file) : incomplete final line found on 'modifiedPlotHive.R'
The final line in the modified plotHive() function is:
<environment: namespace:HiveR>
If I remove this line before source()-ing, then the function no longer works.
Sorry I missed this when it came out, but the latest version of HiveR has an option to control the background color (available on CRAN as 0.2-1). Bryan
Here's the safer way of doing what you want, referenced by @joran.
The sink/source pair is fine for dealing with R code files. But saving to text files and then reading back in other types of objects can strip them of important attributes, especially those relating to environments. That's what you just experienced.
The save/load pair stores objects in R's own binary format, so is much less liable to lose important information/environments attached to functions.
In this example, I define a personal version of ls, which differs from the base function in that it by default lists objects that start with a dot/period:
my_ls <- ls
fix(my_ls)
# 1) On the first line, change 'all.names=FALSE' to 'all.names=TRUE'
# 2) Say "Yes", I want to save the changes
save("my_ls", file="my_ls.Rdata")
# Then, in a later session, test that it works
load("my_ls.Rdata")
.TrysToHide <- 99
my_ls()
# [1] ".TrysToHide" "my_ls"
One more note: it's much cleaner to give your modified function a name of its own. To really edit a packaged function, and have the changes persist, you'd need to edit the sources and recompile the package. But if you do that, beware, as you may well break the function for other packaged functions that depend on it.
There are a couple of options:
Save your workspace before quitting and load it again when you reopen R.
Save the modified function to a script file and source it:
sink("modified_plotHive.r")
plotHive
sink()
In the next session:
plotHive <- source("modified_plotHive.r")
HTH
