So I have the following:
A Private Package: PrivPack.jl
Within PrivPack.jl, I have 2 modules: mod1 and mod2.
In the PrivPack.jl file I do include("mod1.jl") and include("mod2.jl"), which are the files that house the modules. However, when I try to access mod1 anywhere in my package, I do not seem to have access to it. I either need to include the file again and do using .mod1, or I have to push the file's directory onto my LOAD_PATH in startup.jl.
What's the solution to fix this issue? I don't want to have to re-include the file every time.
If I'm understanding the question correctly, you should just do using .mod1, .mod2 after include-ing the files. Otherwise, you just have modules floating around whose names haven't actually been brought into scope.
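In other words, the top-level package file would look something like this (a minimal sketch, assuming mod1.jl and mod2.jl each define a module of the same name):

# src/PrivPack.jl
module PrivPack

include("mod1.jl")   # defines module mod1 inside PrivPack
include("mod2.jl")   # defines module mod2 inside PrivPack

using .mod1, .mod2   # bring their exported names into scope inside PrivPack

end # module PrivPack

Any file include-d into PrivPack after that point can refer to mod1, mod2, and their exported names directly, and code outside the package can reach them as PrivPack.mod1 and PrivPack.mod2.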
So I use a lot of custom-built functions in R which I save in the Documents folder on my PC. I would like to bring these functions into my R environment (I usually use source()). At the moment I use the entire file path, i.e. C:\Users\username\documents\R functions\my_function.r, and then create a quick-access shortcut link to these functions in my project directory (for easy reference in case it's needed).

However, I was wondering if there is a better way to reference these files. By better I basically mean shorter, or a way to source the files through the quick-access shortcut. An alternative would be to create a secondary directory so I could just type source("&/my_function.r") (the "&" means secondary directory). This is just a minor inconvenience that I think would make life easier if resolved. What do you think? Is this an unnecessary complication? Is there anyone in a similar situation who has any tips for easily sourcing functions?
Thanks a lot!
If these are functions you often use, you could wrap them in a minimalistic package. Then your call would just be library("myhelpers") and you have all of them available.
Creating this package is quite easy. Assuming you use RStudio, you just:
Create a package: File -> New Project -> New Directory -> R Package
Give it the name you want e.g. "myhelpers"
Specify the folder it should be in
Then RStudio directly creates the package structure for you.
Now you have the package structure in your folder. It will look like this:
- DESCRIPTION
- man
- NAMESPACE
- R
- myhelpers.Rproj
You just have to put the .R files with your functions in the R folder. It does not matter if the functions are in one file or in multiple files.
Then in RStudio go to the "Build" tab and click "Install and Restart". That's it!
Now in your other projects or R files you can just type and use all the functions you put in the R folder:
library("myhelpers")
var <- myfunction1(x)
If you later on want to edit your package functions or add new ones, you can just go to the package folder and click on myhelpers.Rproj, and RStudio will open your package project for you. After your changes, just click Build -> Install and Restart again to update the package.
Here is also a short explanation with pictures. This is all you need to use your functions for yourself. The nice thing is that from there you can also go further if needed, e.g. add documentation to your functions (then you would also have a help() page for them).
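For example, a file in the R folder could look something like this (a sketch of my own; the function name and body are made up, and the roxygen2-style comments only matter if you later run devtools::document() to generate a help page):

# R/summarise_numeric.R

#' Summarise the numeric columns of a data frame
#'
#' @param df A data frame.
#' @return A one-row data frame with the mean of each numeric column.
#' @export
summarise_numeric <- function(df) {
  nums <- vapply(df, is.numeric, logical(1))        # which columns are numeric?
  data.frame(lapply(df[nums], mean, na.rm = TRUE))  # mean of each numeric column
}

After Build -> Install and Restart, library("myhelpers") makes summarise_numeric() available in any project.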
As already mentioned I'm using the Atom text editor.
I'm currently working on a project written in C++. Of course it is desirable to jump to the definition of a function (in another project file), or to other uses of this function (within the project). As far as I know this can be achieved with the packages I'll mention below. I want the package to show me the definition along with the path to the file that holds it, and ideally the line where it occurs.
I'll welcome any comments and suggestions on how to solve the problem(s) I mention below with (one of) the packages. I'm also thankful for pointers to possible solutions or posts concerning my problem(s), or to ways of achieving this with another package.
Here is what I found / tried / did so far.
goto
Currently I'm using this package; it is rather slow and does not show the arguments of the function as e.g. atom-ctags does, but it's the only package that shows me the files I need to see.
It shows me where the function is defined as well as where else it is used. However, it does not show me the path to the corresponding file it refers to.
atom-ctags
I also tried this package; building the tags is quite fast and, moreover, it shows me the path to the file. But this package only lists the .cc files and not the .h files. It appears as if it only shows me the other uses but not the definition, which is obviously a problem.
I also tried generating the ctags myself and changing the command options in the settings of the package, unfortunately without any success.
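(For reference, a typical Exuberant Ctags invocation for a C++ project looks roughly like the following; I am not sure these flags match what atom-ctags passes internally, and on Universal Ctags --extra is spelled --extras.)

ctags -R --c++-kinds=+p --fields=+iaS --extra=+q .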
Atom's built-in symbols-view
In order to get this to work, one needs to generate the symbols. This can be achieved, for example, with the symbol-gen package. However, it shows me only some of the definitions, and again no .h files. Moreover, jumping to a definition results in "Selected file does not exist.", so it is not usable at all.
goto-definition
Just for completeness, there is also this package. It does not work for me, since C++ is not supported, but maybe others will find it useful.
symbols-plus
Again for completeness, this should be a replacement for the Atom built-in, but when I disable the built-in it does not offer any jump functionality, nor is a shortcut mentioned.
So, basically, nothing really works well. I have also tried Symbol Tree View, but it barely works.
I have a module I wrote here:
# Hello.jl
module Hello
function foo()
    return 1
end
end
and
# Main.jl
using Hello
foo()
When I run the Main module:
$ julia ./Main.jl
I get this error:
ERROR: LoadError: ArgumentError: Hello not found in path
in require at ./loading.jl:249
in include at ./boot.jl:261
in include_from_node1 at ./loading.jl:320
in process_options at ./client.jl:280
in _start at ./client.jl:378
while loading /Main.jl, in expression starting on line 1
There is a new answer to this question since the release of Julia v0.7 and v1.0 that is slightly different. I just had to do this so I figured I'd post my findings here.
As already explained in other solutions, it is necessary to include the relevant script which defines the module. However, since the custom module is not a package, it cannot be loaded as a package with the same using or import commands as could be done in older Julia versions.
So the Main.jl script would be written with a relative import like this:
include("./Hello.jl")
using .Hello
foo()
I found this explained simply in Stefan Karpinski's discourse comment on a similar question. As he describes, the situation can also get more elaborate when dealing with submodules. The documentation section on module paths is also a good reference.
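For instance, with a submodule the relative import needs the right number of leading dots (a small sketch of my own, not taken from that comment):

# Wrapper.jl
module Wrapper

include("Hello.jl")     # Hello becomes a submodule of Wrapper
using .Hello            # one dot: Hello is a direct child of Wrapper

module Inner
    using ..Hello       # two dots: go up to Wrapper, then into Hello
    bar() = Hello.foo() # qualified call, so no export is needed
end

end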
EDIT: Updated code to apply post-v1.0. The other answers still have a fundamental problem: if you define a module and then include that module definition in multiple places, you will get unexpected, hard-to-understand errors. @kiliantics' answer is correct as long as you only include the file once. If you have a module that you're using across multiple files, make that module into a package, add it with the package manager (add MyModule in the Pkg REPL), and then type using MyModule in as many places as you want, letting Pkg handle module identity for you.
Though 张实唯's answer is the most convenient, you should not use include outside the REPL (except at most once per included file, as a simple way to organize large modules, as in the first example here). If you're writing a program file, go to the trouble of adding the appropriate directory to the LOAD_PATH. Remy gives a very good explanation of how to do so, but it's worth also explaining why you should do so in the first place. (Additionally, from the docs: push!(LOAD_PATH, "/Path/To/My/Module/"), but note that your module and your file have to have the same name.)
The problem is that anything you include will be defined right where you call include, even if it is also defined elsewhere. Since the goal of modules is re-use, you'll probably eventually use MyModule in more than one file. If you call include in each file, then each will have its own definition of MyModule, and even though they are identical, these will be different definitions. That means anything defined in MyModule (such as data types) will not be the same.
To see why this is a huge problem, consider these three files:
types.jl
module TypeModule
struct A end
export A
end
a_function.jl
include("types.jl")
module AFunctionModule
using ..TypeModule
function takes_a(a::A)
println("Took A!")
end
export takes_a
end
function_caller.jl
include("a_function.jl")
include("types.jl") # delete this line to make it work
using .TypeModule, .AFunctionModule
my_a = A()
takes_a(my_a)
If you run julia function_caller.jl you'll get MethodError: no method matching takes_a(::A). This is because the type A used in function_caller.jl is different from the one used in a_function.jl. In this simple case, you can actually "fix" the problem by reversing the order of the includes in function_caller.jl (or just by deleting include("types.jl") entirely from function_caller.jl! That's not good!). But what if you wanted another file b_function.jl that also used a type defined in TypeModule? You would have to do something very hacky. Or you could just modify your LOAD_PATH so the module is only defined once.
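To make that suggestion concrete, the LOAD_PATH version of the same example might look roughly like this (a sketch, assuming you rename the files to match the module names and they live in the same directory as function_caller.jl):

# AFunctionModule.jl  (was a_function.jl; note there is no include("types.jl") any more)
module AFunctionModule
using TypeModule                  # resolved through LOAD_PATH, so there is only one TypeModule
takes_a(a::A) = println("Took A!")
export takes_a
end

# function_caller.jl
push!(LOAD_PATH, @__DIR__)        # make this directory searchable (TypeModule.jl lives here too)
using TypeModule, AFunctionModule
takes_a(A())                      # works: both files now see the same A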
EDIT in response to xji: To distribute a module, you'd use Pkg (docs). I understood the premise of this question to be a custom, personal module. It's also fine for distribution if you know the relative path of the directory containing your module definition from each file that needs to load that module, e.g. if all your files are in the same folder then you'd just have push!(LOAD_PATH, @__DIR__).
Incidentally, if you really don't like the idea of modifying your load path (even if it's only within the scope of a single script...) you could symlink your module into a package directory (e.g. ~/.julia/v0.6/MyModule/MyModule.jl), then Pkg.add("MyModule"), and then import as normal. I find that to be a bit more trouble.
This answer is OUTDATED. Please see the other excellent explanations.
===
You should include("./Hello.jl") before using Hello
This answer was originally written for Julia 0.4.5. There is now an easier way of importing a local file (see @kiliantics' answer). However, I will leave this up, since my answer explains several other methods of loading files from other directories which may still be of use.
There have already been some short answers, but I wanted to provide a more complete answer if possible.
When you run using MyModule, Julia only searches for it in a list of directories known as your LOAD_PATH. If you type LOAD_PATH in the Julia REPL, you will get something like the following:
2-element Array{ByteString,1}:
"/Applications/Julia-0.4.5.app/Contents/Resources/julia/local/share/julia/site/v0.4"
"/Applications/Julia-0.4.5.app/Contents/Resources/julia/share/julia/site/v0.4"
These are the directories that Julia will search for modules to include when you type using Hello. In the example that you provided, since Hello was not in your LOAD_PATH, Julia was unable to find it.
If you wish to include a local module, you can specify its location relative to your current working directory.
julia> include("./src/Hello.jl")
Once the file has been included, you can then run using Hello as normal to get all of the same behavior. For one-off scripts, this is probably the best solution. However, if you find yourself regularly having to include() files from a certain set of directories, you can permanently add those directories to your LOAD_PATH.
Adding directories to LOAD_PATH
Manually adding directories to your LOAD_PATH can be a pain if you wish to regularly use particular modules that are stored outside the standard Julia LOAD_PATH. In that case, you can append additional directories to the LOAD_PATH via the JULIA_LOAD_PATH environment variable. Julia will then automatically search through these directories whenever you issue an import or using command.
One way to do this is to add the following to your .bashrc, .profile, or .zshrc:
export JULIA_LOAD_PATH="/path/to/module/storage/folder"
This will append that directory onto the standard directories that Julia will search. If you then run
julia> LOAD_PATH
It should return
3-element Array{ByteString,1}:
"/path/to/module/storage/folder"
"/Applications/Julia-0.4.5.app/Contents/Resources/julia/local/share/julia/site/v0.4"
"/Applications/Julia-0.4.5.app/Contents/Resources/julia/share/julia/site/v0.4"
You can now freely run using Hello and Julia will automatically find the module (as long as it is stored underneath /path/to/module/storage/folder).
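Equivalently, you can set this from within Julia in your startup file, so it runs in every session (the startup file is ~/.juliarc.jl for Julia 0.6 and earlier, and ~/.julia/config/startup.jl from 0.7 onward):

# ~/.juliarc.jl (Julia 0.6 and earlier) or ~/.julia/config/startup.jl (0.7 onward)
push!(LOAD_PATH, "/path/to/module/storage/folder")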
For more information, take a look at this page from the Julia Docs.
Unless you explicitly load the file (include("./Hello.jl")), Julia looks for module files in the directories defined in the LOAD_PATH variable.
See this page.
I have Julia Version 1.4.2 (2020-05-23). Just using .Hello worked for me.
However, I first had to evaluate the Hello module definition before using .Hello would work. That makes sense when the definition and the use of Hello are in the same file.
If instead we define Hello in one file and use it in a different file, we need include("./Hello.jl"); using .Hello
If you want to access the function foo when importing the module with using, you need to add export foo in the header of the module.
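That is, something like this (a minimal sketch of the Hello.jl from the question with the export added):

# Hello.jl
module Hello
export foo           # makes foo visible after using Hello / using .Hello
function foo()
    return 1
end
end

# Main.jl
include("./Hello.jl")
using .Hello
foo()                # works without qualifying it as Hello.foo()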
I have seen many related answers here, but I didn't find a proper way to solve my problem on a Windows system...
I know the link to the similar question.
I understand that setwd() can point R to the directory I want; however, my R script may be moved to another directory without any modification, so I want to know the directory of the current file itself. This matters because of expressions like source(...): the sourced file and the executing file sit under the same parent directory in an R project. How can I do this?
Any help appreciated.
You can get your current directory using the getwd() function and give it a name, say:
cpath = getwd()
Another useful function is file.path(), which can help you build paths with simple syntax. For example, if you want the directory that is one level "above" the current directory, you can use:
upp.dir = file.path(cpath, "..")
This gives upp.dir as "Your_Current_Dir/..", i.e. the parent of the current directory. How about pointing to another folder (called Folder_A) in the current directory? Use:
folderA = file.path(cpath, "Folder_A")
These may help you navigate the file system easily.
Basically, if you write scripts and those scripts depend on where they are, then you are Doing It Wrong.
Write code in packages. Parameterise functions to make them generally applicable. If you have folders with data in them, then make the folder one of those parameters.
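For instance, instead of hard-coding paths inside the script, let the function take the data folder as an argument (a sketch; the function name and the CSV pattern are made up):

load_measurements <- function(data_dir, pattern = "\\.csv$") {
  files <- list.files(data_dir, pattern = pattern, full.names = TRUE)
  lapply(files, read.csv)   # one data frame per file
}

results <- load_measurements("C:/foo/data")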
A script called with source() cannot reliably locate itself, but that shouldn't be a problem, because WHATEVER CALLED THE SCRIPT knows where the script is (it has to, or how else can it call it?) so it could pass that as a parameter. Something like:
> youarehere = "C:/foo/"
> source("C:/foo/bar.R")
and now bar.R can do setwd(youarehere) and it will work, even if it is badly written such that it relies on sourcing other code in its containing folder.
Or you can do:
> setwd(youarehere)
> source("bar.R")
in your calling function.
But really, it's a failure, a sign of badly written code. Use functions, write packages, use devtools; it's really not that hard. Then your code will work anywhere, and you won't be writing stupid scripts that are a twisty-turny maze of source() calls.
Stay classy.
How can a sourced or Sweaved file find out its own path?
Background:
I work a lot with .R scripts or .Rnw files.
My projects are organized in a directory structure, but the path of the project's base directory frequently varies between different computers (e.g. because I just do parts of the data analysis for someone else, and their directory structure is different from mine: I have project base directories like ~/Projects/StudentName/ or ~/Projects/StudentName/ProjectName, whereas most students who have just their one project usually have it under ~/Measurements/ or ~/DataAnalysis/ or something like that, which wouldn't work for me).
So a line like
setwd (my.own.path ())
would be incredibly useful, as it would ensure that the working directory is the base path of the project regardless of where that project actually is, without the user having to think about setting the working directory.
Let me clarify: I'm looking for a solution that works when an unthinking user just presses the editor's/IDE's source or Sweave keyboard shortcut.
Just FYI, knitr will setwd() to the dir of the input file when (and only when) evaluating the code chunks, i.e. if you call knit('path/to/input.Rnw'), the working dir will be temporarily switched to path/to/. If you want to know the input dir in code chunks, currently you can call an unexported function knitr:::input_dir() (I may export it in the future).
Starting from gsk3's and Seb's suggestions, here's an idea:
the combination of username (login) and IP or name of the computer could be used to select the right directory.
That leads to something like:
setwd (switch (paste (Sys.info () [c ("user", "nodename")], collapse="."),
               user.laptop = "~/Messungen",
               user2.server = "~/Projekte/Projekt/"))
So there is an automatic solution that
- works with source
- works with Sweave
- even works for interactive sessions where the commands are sent line by line
- the combination of user and nodename of course needs to be specific
- the paths need to be edited by hand, though.
Improvements welcome!
Update:
Gabor Grothendieck answered the following to a related question on r-help today:
this.dir <- dirname(parent.frame(2)$ofile)
setwd(this.dir)
which will work for source.
Another update: I now do most of the data analysis work in RStudio. RStudio's projects basically solve the problem: RStudio changes the working directory to the project root directory every time I switch between projects.
I can therefore put the project directory as far down my directory tree as I want (and the students can also put their copy wherever they want) and sync the data files and scripts/.Rnws via version control (We use a private git server). The RStudio project files are kept out of the version control, i.e. .gitignore contains .Rproj.user.
Obviously, within the project, the directory structure needs to be synchronized.
You can use sys.calls() to get the command used to source the file. Then you need a bit of trickery using regular expressions to get the pathname, bearing in mind that source("something/filename") could have used either the absolute or relative path. Here's a first attempt at putting all the pieces together: try inserting the following lines at the top of a source file.
whereFrom=sys.calls()[[1]]
# This should be an expression that looks something like
# source("pathname/myfilename.R")
whereFrom=as.character(whereFrom[2]) # get the pathname/filename
whereFrom=paste(getwd(),whereFrom,sep="/") # prefix it with the current working directory
pathnameIndex=gregexpr(".*/",whereFrom) # we want the string up to the final '/'
pathnameLength=attr(pathnameIndex[[1]],"match.length")
whereFrom=substr(whereFrom,1,pathnameLength-1)
print(whereFrom) # or "setwd(whereFrom)" to set the working directory
It's not very robust—for instance, it will fail on windows with source("pathname\\filename"), and I haven't tested what happens if you have one file sourcing another file—but you might be able to build a solution on top of this.
I have no direct solution for obtaining the directory of the file itself, but if you have a limited range of directories and directory structures, you can probably use
if (file.exists("c:/somedir")) setwd("c:/somedir")
You could check for the pattern of the directory in question and then set the working directory accordingly. Does this help you?
An additional problem is that the working directory is a global variable, which can be changed by any script, so if your script calls another script, it will have to set the wd back. In RStudio I use Session -> Set Working Directory -> To Source File Location (I know, it's not ideal), and then my script does
wd = getwd ()
...
source ("mySubDir/myOtherScript.R", chdir=TRUE); setwd (wd)
...
source ("anotherSubDir/anotherScript.R", chdir=TRUE); setwd (wd)
In this way one can maintain a stack of working directories. I would love to see this implemented in the language itself.
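A small wrapper can avoid having to repeat setwd(wd) after every call (my own sketch, not part of the answer above; the name source_here is made up):

source_here <- function(file, ...) {
    old_wd <- getwd()
    on.exit(setwd(old_wd), add = TRUE)   # restore the caller's working directory afterwards
    source(file, chdir = TRUE, ...)      # run the script with its own directory as the wd
}

source_here("mySubDir/myOtherScript.R")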
This answer works for source and also inside nvim-R - I have no idea if it works with knitr and similar things. Any feedback appreciated.
If you have multiple scripts source-ing each other, it is important to get the correct one. That is, the largest i for which sys.frame(i)$ofile exists.
get.full.path.to.this.sourced.script = function() {
for(i in sys.nframe():1) { # Go through all the call frames,
# in *reverse* order.
x = sys.frame(i)$ofile
if(!is.null(x)) # if $ofile exists,
return(normalizePath(x)) # then return the full absolute path
}
}
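Placed at the top of a sourced script, it could be used like this (a usage sketch; whether you actually want to setwd() afterwards is up to you):

this.script <- get.full.path.to.this.sourced.script()
setwd(dirname(this.script))   # make relative paths resolve from the script's own directory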