I want to conditionally run 3 different modules in one R script. Right now, I am using if(0) around each of them; e.g., I am loading a graph and then running module B using the following code. It is painful to toggle if() blocks for every module (lots of scrolling). Is there a way to do conditional sourcing of an R script in RStudio (like macros in C)?
load_graph()
if(0){
module A
.....
}
if(0){
module B
....
}
if(0){
module C
....
}
Just assign a variable to indicate which module you would like to run:
whichmodule='b'
load_graph()
if(whichmodule=='a'){
module A
.....
}
if(whichmodule=='b'){
module B
....
}
if(whichmodule=='c'){
module C
....
}
With this structure you can set the module at the top, then source the whole document and only the B module will run.
I am working on Julia code where I have several files and call functions from these files in a main file called run.jl. Every time I make changes to any one of these files I need to restart the Julia REPL, which is a bit annoying. Is there any way to work around this?
For example
# --------------------------------- run.jl
include("agent_types.jl")
include("resc_funcs.jl")
using .AgentTypes: Cas, Resc
using .RescFuncs: update_rescuers_at_pma!, travel_to_loc
# ... some code
for i = 1:500
    update_rescuers_at_pma!(model)
    travel_to_loc(model)
end
# ------------------------------ resc_funcs.jl
function travel_to_loc(model)
    println(get_resc_by_prop(model, :on_way_to_iz))
    for resc_id in get_resc_by_prop(model, :on_way_to_iz)
        push!(model[resc_id].dist_traject, model[resc_id].dist_to_agent)  # just for checking purposes
        model[resc_id].dist_to_agent -= model.dist_per_step
        push!(model[resc_id].loc_traject, "on_way_to_iz")
    end
    # If we add a new print statement here and execute run.jl, it won't print
    println(model[resc_id].loc_traject)  # new amendment to code
end
But now when I update the travel_to_loc function, for example, I need to restart the Julia REPL before those changes are reflected. I am wondering if there is a way so that after you save the file (resc_funcs.jl in this case), those amendments are reflected when you execute run.jl.
The simplest workflow could be the following:
Select some folder to be your current working folder in Julia (e.g. by using the cd() command from Julia)
Create a sample file MyMod.jl:
module MyMod

export f1

function f1(x)
    x + 1
end

end
Add the current folder to LOAD_PATH and load Revise.jl:
push!(LOAD_PATH, ".")
using Revise
Load the module and test it:
julia> using MyMod
[ Info: Precompiling MyMod [top-level]
julia> f1(3)
4
Edit the file, e.g. change x+1 to x+3.
Run it again:
julia> f1(3)
6
Notes:
you will still need to restart the REPL when your data structures change (i.e. when you modify the definition of a struct)
you could generate a full package using Pkg.generate, but I wanted to keep things simple.
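Applied to the layout in the question, the same approach would look roughly like this. This is only a sketch: it assumes agent_types.jl and resc_funcs.jl are renamed to AgentTypes.jl and RescFuncs.jl (LOAD_PATH lookup is by module name) and that they sit in the current folder.
# run.jl
push!(LOAD_PATH, ".")   # make the current folder searchable for modules
using Revise            # load Revise before the modules you want tracked
using AgentTypes: Cas, Resc
using RescFuncs: update_rescuers_at_pma!, travel_to_loc
# Edits saved to RescFuncs.jl are now picked up the next time
# travel_to_loc(model) is called, without restarting the REPL
# (struct redefinitions still require a restart, as noted above).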
I'm using Julia to autograde students' work. I have all of their files Student1.jl, Student2.jl, etc. as separate modules Student1, Student2, etc in a directory that is part of LOAD_PATH. What I want to be able to do works completely fine in the REPL, but fails in a file.
macro Student(number)
    return Meta.parse("Main.Student$number")
end
using Student1
@Student(1).call_function(inputs)
works completely fine in the REPL. However, since I'm running this in a script, I need to be able to include the modules with more metaprogramming that is currently not working. I would have thought that the exact same script above would have worked in a file Autograder.jl by calling
@eval(using Student1)
@Student(1).call_function(inputs)
in a module Autograder. But I get either an UndefVarError: Student1 not defined or LoadError: cannot replace module Student1 during compilation depending on how I tweak things.
Is there something small in Julia metaprogramming I'm missing here to make this autograding system work out? Thanks for any advice.
The code just as you have written it works for me on Julia versions 1.1.0, 1.3.1, 1.5.1, 1.6.0 and 1.7.0. By that I mean: if I add an inputs variable, put your first code block in a file Autograder.jl, and run JULIA_LOAD_PATH="modules:$JULIA_LOAD_PATH" julia Autograder.jl with the student modules in the modules directory, I get the output of the call_function function in the Student1 module.
However, if Autograder.jl actually contains a module, then the Student$number module is not loaded into Main and your macro needs to be modified accordingly:
module Autograder

macro Student(number)
    return Meta.parse("Student$number")  # or "Autograder.Student$number"
end

inputs = []
@eval(using Student1)
@Student(1).call_function(inputs)

end
Personally I wouldn't use a macro to accomplish this, here is one possible alternative:
student(id) = Base.require(Main, Symbol("Student$(id)"))
let student_module = student(1)
    student_module.call_function(inputs)
end
or without modifying the LOAD_PATH:
student(id) = include("modules/Student$(id).jl")
let student_module = student(1)
    student_module.call_function(inputs)
end
In OpenMDAO V3.1 I am using an ExternalCodeComp to execute a CFD code. Typically, I would call it as such:
mpirun nodet_mpi --design_run
If the above call is made in the appropriate directory, then it will find the appropriate run file and execute the CFD run. I have tried these command args for the ExternalCodeComp:
execute = ['mpirun', 'nodet_mpi', '--design_run']
execute = ['mpirun', 'nodet_mpi --design_run']
execute = ['mpirun nodet_mpi --design_run']
I either get an error such as:
RunTimeError: 255, execvp error on file "nodet_mpi --design_run" (No such file or directory)
Or that the command cannot be found.
Is there any way to setup the execute statement to include commandline args for the flow solver when an input file is not defined?
Thanks in advance!
One detail in your question seems incorrect: you state that you have tried execute = "...". The ExternalCodeComp uses an option called command. I will assume that you are using the correct option in your code.
The most correct form to use is the list with all arguments as single entries in the list:
self.options['command'] = ['mpirun', 'nodet_mpi', '--design_run']
Your error message seems to indicate that the directory OpenMDAO is running in is not the same as the directory you would like to execute the CFD code from. The absolute simplest solution would be to make sure that you are in the correct directory via cd in the terminal window before executing your Python script.
However, there is likely a reason that your Python script is in a different place, so there are other options I can suggest:
1. You can use a combination of os.getcwd() and os.chdir() inside the compute method you have implemented, to switch into and out of the working directory for the CFD code.
2. If you would like to, you can modify the entries of the list assigned to the self.options['command'] option on the fly within your compute method, again relying on the os module for help. os.path.exists can be used to test whether the specific input files you need exist, and you can modify the command option accordingly.
For option 2, code would look something like this:
def compute(self, inputs, outputs):
    if os.path.exists('some_input.file'):
        self.options['command'] = ['mpirun', 'nodet_mpi', '--design_run']
    else:
        self.options['command'] = ['mpirun', 'nodet_mpi', '--design_run', '--other_options']

    # the parent compute function actually runs the external code
    super().compute(inputs, outputs)
I would like to know how, in the Julia language, I can determine whether a file.jl is run as a script, such as in the call:
bash$ julia file.jl
Only in this case should it start a function main, for example. That way I could use include("file.jl") without actually executing the function.
To be specific, I am looking for something similar to what was already answered in a Python question:
def main():
    pass  # does something

if __name__ == '__main__':
    main()
Edit:
To be more specific, the function Base.isinteractive (see here) does not solve the problem when include("file.jl") is used from within a non-interactive (e.g. script) environment.
The global constant PROGRAM_FILE contains the script name passed to Julia from the command line (it does not change when include is called).
On the other hand, the @__FILE__ macro gives you the name of the file where it is present.
For instance, if you have the files:
a.jl
println(PROGRAM_FILE)
println(@__FILE__)
include("b.jl")
b.jl
println(PROGRAM_FILE)
println(@__FILE__)
You have the following behavior:
$ julia a.jl
a.jl
D:\a.jl
a.jl
D:\b.jl
$ julia b.jl
b.jl
D:\b.jl
In summary:
PROGRAM_FILE tells you the name of the file that Julia was started with;
@__FILE__ tells you in which file the macro was actually called.
tl;dr version:
if !isdefined(:__init__) || Base.function_module(__init__) != MyModule
    main()
end
Explanation:
There seems to be some confusion. Python and Julia work very differently in terms of their "modules" (even though the two use the same term, in principle they are different).
In Python, a source file is either a module or a script, depending on how you chose to "load" / "run" it: the boilerplate exists to detect the environment in which the source code was run, by querying the __name__ of the embedding module at the time of execution. E.g. if you have a file called mymodule.py and you import it normally, then within the module definition the variable __name__ automatically gets set to the value mymodule; but if you ran it as a standalone script (effectively "dumping" the code into the "main" module), the __name__ variable is that of the global scope, namely __main__. This difference gives you the ability to detect how a Python file was run, so you can act slightly differently in each case, and this is exactly what the boilerplate does.
In Julia, however, a module is defined explicitly as code. Running a file that contains a module declaration will load that module regardless of whether you did using or include; however, in the former case the module will not be reloaded if it's already in the workspace, whereas in the latter case it's as if you "redefined" it.
Modules can have initialisation code via the special __init__() function, whose job is to only run the first time a module is loaded (e.g. when imported via a using statement). So one thing you could do is have a standalone script, which you could either include directly to run as a standalone script, or include it within the scope of a module definition, and have it detect the presence of module-specific variables such that it behaves differently in each case. But it would still have to be a standalone file, separate from the main module definition.
If you want the module to do stuff, that the standalone script shouldn't, this is easy: you just have something like this:
module MyModule

__init__() = nothing  # do module-specific initialisation stuff here

include("MyModule_Implementation.jl")

end
If you want the reverse situation, you need a way to detect whether you're running inside the module or not. You could do this, e.g. by detecting the presence of a suitable __init__() function, belonging to that particular module. For example:
### in file "MyModule.jl"
module MyModule
export fun1, fun2;
__init__() = print("Initialising module ...");
include("MyModuleImplementation.jl");
end

### in file "MyModuleImplementation.jl"
fun1(a,b) = a + b;
fun2(a,b) = a * b;

main() = print("Demo of fun1 and fun2. \n" *
               " fun1(1,2) = $(fun1(1,2)) \n" *
               " fun2(1,2) = $(fun2(1,2)) \n");

if !isdefined(:__init__) || Base.function_module(__init__) != MyModule
    main()
end
If MyModule is loaded as a module, the main function in MyModuleImplementation.jl will not run.
If you run MyModuleImplementation.jl as a standalone script, the main function will run.
So this is a way to achieve something close to the effect you want; but it's very different from running a module-defining file as either a module or a standalone script. I don't think you can simply "strip" the module instruction from the code and run the module's "contents" in that manner in Julia.
The answer is available at the official Julia docs FAQ. I am copy/pasting it here because this question comes up as the first hit on some search engines. It would be nice if people found the answer on the first-hit site.
How do I check if the current file is being run as the main script?
When a file is run as the main script using julia file.jl, one might want to activate extra functionality like command-line argument handling. A way to determine that a file is run in this fashion is to check if abspath(PROGRAM_FILE) == @__FILE__ is true.
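A minimal sketch of that pattern (main here is just a placeholder name):
function main()
    # command-line handling, kicking off the real work, etc.
    println("Running ", PROGRAM_FILE, " as the main script")
end

if abspath(PROGRAM_FILE) == @__FILE__
    main()
end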
I wrote
let fact x =
  let result = ref 1 in
  for i = 1 to x do
    result := !result * i;
    Printf.printf "%d %d %d\n" x i !result;
  done;
  !result;;
in a file named "Moduletest.ml", and
val fact : int -> int
in a file named "Moduletest.mli".
But, why don't they work?
When I tried to use it in OCaml,
Moduletest.fact 3
it told me:
Error: Reference to undefined global `Moduletest'
What's happening?
Thanks.
The OCaml toplevel is linked only with the standard library. There are several options for making other code visible to it:
copy-pasting
evaluating from the editor
loading files with the #use directive
making a custom toplevel
loading with ocamlfind
Copy-pasting
This is self-describing: you just copy code from some source and paste it into the toplevel. Don't forget that the toplevel won't evaluate your code until you add ;;
Evaluating from the editor
Where the editor is of course Emacs... Well, indeed it can be any other capable editor, like Vim for example. This method is an elaboration of the previous one, where the editor is actually responsible for copying and pasting the code for you. In Emacs you can evaluate the whole file with the C-c C-b command, or you can narrow it to a selected region with C-c C-r, and the most granular option is C-c C-e, i.e., evaluate an expression, although it is slightly buggy.
Loading with the #use directive
This directive accepts a filename and will essentially copy and paste the code from the file. Notice that it won't create a file-module for you. For example, if you have a file test.ml with these contents:
(* file test.ml *)
let sum x y = x + y
then loading it with the #use directive will actually bring the sum value into your scope:
# #use "test.ml";;
# let z = sum 2 2;;
You must not qualify sum with Test., because no Test module is actually created. The #use directive merely copies the contents of the file into the toplevel, nothing more.
Making custom toplevels
You can create your own toplevel with your code compiled in. This is an advanced topic, so I will skip it.
Loading libraries with ocamlfind
ocamlfind is a tool that allows you to find and load libraries installed on your system into your toplevel. By default, the toplevel is not linked with any code except the standard library. Even then, not all parts of the library are actually linked; e.g., the Unix module is not available and needs to be loaded explicitly. There are primitive directives that can load any library, like #load and #include, but they are not for the casual user, especially when you have the excellent ocamlfind at your disposal. Before using it, you need to load it, since it is also not available by default. The following command will load ocamlfind and add a few new directives:
# #use "topfind";;
In the process of loading, it will show you a little hint on how to use it. The most interesting directive that is added is #require. It accepts a library name and loads (i.e., links) its code into the toplevel:
# #require "unix";;
This will load the unix library. If you're not sure about the name of the library, you can always view all libraries with the #list command. The #require directive is clever and will automatically load all the dependencies of the library.
If you do not want to type all these directives every time you start the OCaml toplevel, you can create a .ocamlinit file in your home directory and put them there. This file will be loaded automatically on toplevel startup.
I have tested your code and it looks fine. You should "load" it from the OCaml toplevel (launched from the same directory as your .ml and .mli files) in the following way:
# #use "Moduletest.ml";;
val fact : int -> int = <fun>
# fact 4;;
4 1 1
4 2 2
4 3 6
4 4 24
- : int = 24