I'm using a non-standard module in Julia, created by someone else.
Due to user restrictions I cannot modify this module.
The module prints the step it is currently executing to the console.
My console is full of excess information.
Question: Is there a way to suppress the console printing without modifying the module code?
To suppress the output of a function foo:
oldstd = stdout
redirect_stdout(devnull)
foo()
redirect_stdout(oldstd) # recover original stdout
If foo doesn’t take arguments, one can also use redirect_stdout(foo, devnull).
It works on my Linux box, but I am not 100% sure it works on Windows too.
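As a sketch of the same idea, the function-accepting form can also be written with a do block, which restores the original stream automatically even if the wrapped call throws (`noisy` here is a stand-in for whatever module function is printing):

```julia
# `noisy` stands in for the module function whose printing we want to hide
noisy() = (println("step 1: doing work..."); 42)

# The do-block form of redirect_stdout restores the original stdout
# automatically when the block exits, even on error.
result = redirect_stdout(devnull) do
    noisy()   # the block's return value is passed through
end
```

The do-block form avoids the risk of forgetting to restore stdout after an exception, which the manual save/restore pattern above is vulnerable to.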
I'm using the R package 'here' to define my working directory using the following command at the start of my script:
here::set_here(path='/path/to/my_directory/', verbose = F)
Every time I run the script it prints this to the console:
here() starts at /path/to/my_directory
Is there a way to suppress this output? I tried using the invisible() function but that didn't work...
The message you’re seeing is printed when you’re attaching the `here` package. Simply don’t do that (it’s unnecessary anyway) to prevent the message.
Otherwise, load it as follows:
suppressPackageStartupMessages(library(here))
… yeah, not exactly elegant.
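If the output comes from an ordinary call rather than from package attach, suppressMessages() is the general-purpose tool, since message() is the usual channel for diagnostics like this. A sketch (noisy_call is a made-up stand-in, not part of here):

```r
# message() is the usual channel for diagnostics like
# "here() starts at ..."; suppressMessages() silences it per call.
noisy_call <- function() {
  message("here() starts at /path/to/my_directory")
  invisible("/path/to/my_directory")
}

x <- suppressMessages(noisy_call())  # nothing printed
print(x)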
I can run julia script with arguments from Powershell as > julia test.jl 'a' 'b'. I can run a script from REPL with include("test.jl") but include accepts just one argument - the path to the script.
From playing around with include, it seems to run a script as a code block with all the variables referencing the current(?) scope, so if I explicitly redefine the ARGS variable in the REPL, it catches on and displays the corresponding script results:
>ARGS="c","d"
>include("test.jl") # prints its arguments in a loop
c
d
This however gives a warning for redefining ARGS and doesn't seem the intended way of doing that. Is there another way to run a script from REPL (or from another script) while stating its arguments explicitly?
You probably don't want to run a self-contained script by `include`-ing it. There are two options:
If the script isn't in your control and calling it from the command-line is the canonical interface, just call it in a separate Julia process. run(`$JULIA_HOME/julia path/to/script.jl arg1 arg2`). See running external commands for more details.
If you have control over the script, it'd probably make more sense to split it up into two parts: a library-like file that just defines Julia functions (but doesn't run any analyses) and a command-line file that parses the arguments and calls the functions defined by the library. Both the command-line interface and the second script you're writing now can include the library; better yet, make the library-like file a full-fledged package.
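A sketch of the first option, spawning the script in a fresh Julia process with explicit arguments. Base.julia_cmd() supplies the current julia binary and its flags (the older $JULIA_HOME pattern is deprecated); the throwaway script here is just for illustration:

```julia
# Write a tiny throwaway script that echoes its ARGS, then run it
# in a separate Julia process with explicit arguments.
script = tempname() * ".jl"
write(script, "for a in ARGS; println(a); end")

# Base.julia_cmd() returns the current julia binary plus its flags.
out = read(`$(Base.julia_cmd()) $script c d`, String)
print(out)
```

Because the script runs in its own process, there is no warning about redefining ARGS and no leakage of the script's variables into your REPL session.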
This solution is not clean, nor the Julia way of doing things. But if you insist:
To avoid the warning when messing with ARGS, keep the original ARGS but mutate its contents, like the following:
empty!(ARGS)
push!(ARGS,"argument1")
push!(ARGS,"argument2")
include("file.jl")
This question is also a duplicate of, or related to, juliapassing-argument-to-the-includefile-jl, as @AlexanderMorley pointed out.
Not sure if it helps, but it took me a while to figure this:
Under the path "C:\Users\\.julia\config\" there may be a .jl file called startup.jl.
The trick is that the Julia setup does not always create this. So, if neither the directory nor the .jl file exists, create them.
Julia treats this .jl file as a list of commands to be executed every time you start the REPL. It is very handy for setting the directory of your projects (e.g. C:\MyJuliaProject\MyJuliaScript.jl using cd("")) and for loading frequently used libraries (like using Pkg, using LinearAlgebra, etc.).
I wanted to share this because I didn't find anyone saying explicitly that this directory might not exist in your Julia installation. It took me longer than it should have to figure this out.
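As a sketch, a minimal startup.jl might look like this (the project path and packages are examples, not requirements):

```julia
# ~/.julia/config/startup.jl
# Runs at every REPL start (skip it with `julia --startup-file=no`).
cd("C:/MyJuliaProject")   # example: jump to your usual project directory
using LinearAlgebra       # example: preload frequently used libraries
```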
In line with this question, the debugger browser does not show me where it has stopped when it is called from within a do.call statement. First it prints all the arguments to the console, and then the browser is generally unresponsive, leaving no option other than force-quitting RStudio. Does anyone have experience with anything equivalent and can point to any fixes?
This also seems to describe a similar issue.
This is likely occurring because you have passed some dataset to a function called using do.call. R's default behavior when an error occurs is to enter debugging mode and print the entire function call for context. However, because do.call deparses each argument before calling the function, this can result in a very long statement, causing R to hang.
To limit the length of the call printed when entering browser() mode for debugging, you can set the maximum deparse length:
options(deparse.max.lines = 10)
As of R 4.0, you can limit the length of the function call printed when entering debugging mode without changing other uses of deparse() by setting option traceback.max.lines:
options(traceback.max.lines = 10)
This should prevent the hangs caused by printing deparsed function calls when debugging within functions called by do.call.
A bug has been identified in RStudio that causes it to hang even when these options are set. You may be able to debug this code in the R console or with other tools.
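A sketch of the situation: a large object passed through do.call makes the deparsed call enormous when R prints it on error, and capping deparse.max.lines keeps that output short (the failing function is a made-up example):

```r
# A large argument makes the deparsed call enormous when the debugger
# or traceback prints it; cap the number of deparsed lines up front.
options(deparse.max.lines = 10)

f <- function(df) stop("boom")        # made-up failing function
big <- data.frame(x = seq_len(1e4))   # big argument passed via do.call

res <- tryCatch(
  do.call(f, list(df = big)),
  error = function(e) conditionMessage(e)
)
print(res)
```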
Basically, I want to avoid visiting a browser to get the detailed test results.
I had similar requirements: I wanted to run "drush test-run" from within vim and get the results traversable in vim.
So I started a small and rough project, https://github.com/DirkR/junitlog2vim. It takes the XML results file and generates a line-by-line report. The script junitlog2vim.py requires Python 3.
As a convenience, I created a Makefile. It takes the optional arguments "CASE" and "METHODS" to build the proper arguments for "drush test-run", and it has sensible defaults. You only have to provide a SITE_ALIAS argument or edit the Makefile.
If you run
make SITE_ALIAS=@mysite CASE=MyTestCase
you get a line-wise error report with filename, line number, and error message.
I hope it helps. Feel free to hack or adopt it.
I need to understand an R script. Since I have not used R until now, I am trying to understand the script step by step. At the beginning of the script, command-line arguments (input files) are passed with commandArgs(). I know that one can access additional arguments for an R script with commandArgs().
But I just cannot figure out how to run a script with arguments in the interactive mode, so that I can print all variables used in the script later on. For example source("script.R") does not seem to take arguments.
My apologies if I am just not capable of using the right search query...
I think you're misunderstanding the use of commandArgs: it's for getting the arguments supplied when the script is run through the command line, not the interpreter. If you just want to "supply arguments" when sourcing a file, put those into the global namespace (i.e. create the variables you want to use). Using source is almost like copying the script and pasting it into the interpreter.
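A sketch of that pattern: define the "argument" variables first, then source() the script, which sees them because it evaluates in the calling environment (the script contents and variable names here are hypothetical):

```r
# Simulate "passing arguments" to a sourced script by defining the
# variables it expects before calling source().
script <- tempfile(fileext = ".R")
writeLines('result <- paste("got:", input_file)', script)

input_file <- "data.csv"  # the "argument" the sourced script will read
source(script)            # evaluates in the calling environment
print(result)
```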