I'm sourcing two files into my Tcl script. Both define some variables with the same names but different values. I have to use both files and I can't modify them. How can I overcome this problem?
The two ideas that come immediately to mind are to try namespaces or interpreters.
Separating Scripts with Namespaces
In this case, we're just doing:
namespace eval A {
    source script-a.tcl
}
namespace eval B {
    source script-b.tcl
}
It's not guaranteed that these won't conflict — they're not strongly separated — but they might work. It's definitely an easy first thing to try, and the variables and procedures created should be separated.
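Afterwards, each script's copy of a variable can be reached through its namespace. A small hypothetical example, assuming both scripts set a top-level variable named threshold and no global variable of that name exists yet (if it does, both scripts would end up writing to the global one, which is exactly the weak-separation caveat above):

puts $A::threshold    ;# the value set by script-a.tcl
puts $B::threshold    ;# the value set by script-b.tcl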
Separating Scripts with Interpreters
This method definitely walls scripts off from each other, but can be more awkward to work with since it also walls them off from the rest of your application (unless you provide appropriate cross-interp aliases):
interp create Acontext
interp eval Acontext {
    source script-a.tcl
}
interp create Bcontext
interp eval Bcontext {
    source script-b.tcl
}
As I noted above, this doesn't let the scripts see anything done by the main script by default; the walling off is pretty much total. To grant access to a command in the main interpreter, you need to make an alias:
proc mainContextOperation {callee args} {
    puts "this is in the main context called from ${callee}: $args"
}
interp alias Acontext DoMyThing {} mainContextOperation Acontext
interp alias Bcontext DoMyThing {} mainContextOperation Bcontext
Make these aliases before you do the source calls (but after the interp create calls), and those scripts will be able to use the DoMyThing command to write a message out while the main context will have a really good idea what is going on.
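For illustration, here is what a hypothetical script-a.tcl might do with that alias (the variable name and message are made up):

# inside script-a.tcl, evaluated in the Acontext interpreter
set importantValue 42
DoMyThing setup-done $importantValue
# the main interpreter then prints:
#   this is in the main context called from Acontext: setup-done 42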
Tcl uses this mechanism as a basis for safe interpreters, which are how you can evaluate untrusted Tcl code without exposing much ability to cause mischief.
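A quick taste of that mechanism, just to show the connection (not needed for the question above):

interp create -safe Scontext
interp eval Scontext { expr {6 * 7} }       ;# harmless code runs fine: 42
interp eval Scontext { open /etc/passwd }   ;# error: open is hidden in a safe interp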
What is the sane way to go from a Module object to a path to the file in which it was declared?
To be precise, I am looking for the file where the keyword module occurs.
The indirect method is to find the location of the automatically defined eval method in each module.
moduleloc(mm::Module) = first(functionloc(mm.eval, (Symbol,)))
For example:
moduleloc(mm::Module) = first(functionloc(mm.eval, (Symbol,)))
using DataStructures
moduleloc(DataStructures)
Outputs:
/home/oxinabox/.julia/v0.6/DataStructures/src/DataStructures.jl
This indirect method works, but it feels like a bit of a kludge.
Have I missed some inbuilt function to do this?
I will remind answerers that Modules are not the same thing as packages.
Consider the existence of submodules, or even modules that are loaded by include-ing some absolute path that is outside the package directory or the load path.
Modules simply do not store the file location where they were defined. You can see that for yourself in their definition in C. Your only hope is to look through the bindings they hold.
Methods, on the other hand, do store their file location. And eval is the one function that is defined in every single module (although not baremodules). Slightly more correct might be:
moduleloc(mm::Module) = first(functionloc(mm.eval, (Any,)))
as that more precisely mirrors the auto-defined eval method.
If you aren't looking for a programmatic way of doing it, you can use the methods function.
using DataFrames
locations = methods(DataFrames.readtable).ms
It lists all methods, but it's hardly difficult to find the right one unless you have an enormous number of methods that differ only in small ways.
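If you do want to pull the location out programmatically from there, each Method object records where it was defined; a quick sketch (field names as in recent Julia versions):

m = first(methods(DataFrames.readtable))
m.file, m.line      # file (a Symbol) and line of the definition
# or equivalently: functionloc(m)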
There is now pathof:
using DataStructures
pathof(DataStructures)
"/home/ederag/.julia/packages/DataStructures/59MD0/src/DataStructures.jl"
See also: pkgdir.
pkgdir(DataStructures)
"/home/ederag/.julia/packages/DataStructures/59MD0"
Tested with julia-1.7.3
require obviously needs to perform that operation. Looking into loading.jl, I found that finding the module path has changed a bit recently: in v0.6.0, there is a function
load_hook(prefix::String, name::String, ::Void)
which you can call "manually":
julia> Base.load_hook(Pkg.dir(), "DataFrames", nothing)
"/home/philipp/.julia/v0.6/DataFrames/src/DataFrames.jl"
However, this has changed for the better in the current master; there's now a function find_package, which we can copy:
macro return_if_file(path)
    quote
        path = $(esc(path))
        isfile(path) && return path
    end
end
function find_package(name::String)
    endswith(name, ".jl") && (name = chop(name, 0, 3))
    for dir in [Pkg.dir(); LOAD_PATH]
        dir = abspath(dir)
        @return_if_file joinpath(dir, "$name.jl")
        @return_if_file joinpath(dir, "$name.jl", "src", "$name.jl")
        @return_if_file joinpath(dir, name, "src", "$name.jl")
    end
    return nothing
end
and add a little helper:
find_package(m::Module) = find_package(string(module_name(m)))
Basically, this takes Pkg.dir() and looks in the "usual locations".
Additionally, chop in v0.6.0 doesn't take these additional arguments, which we can fix by adding
chop(s::AbstractString, m, n) = SubString(s, m + 1, endof(s) - n)  # drop m leading and n trailing characters
Also, if you're not on Unix, you might want to care about the definitions of isfile_casesensitive above the linked code.
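With those pieces in place, a usage sketch (the exact path depends on your installation):

find_package("DataFrames")        # or, via the helper, find_package(DataFrames)
# expected: the path to .../DataFrames/src/DataFrames.jl, or nothing if not found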
And if you're not so concerned about corner cases, maybe this is enough or can serve as a basis:
function modulepath(m::Module)
    name = string(module_name(m))
    Pkg.dir(name, "src", "$name.jl")
end
julia> Pkg.dir("DataStructures")
"/home/liso/.julia/v0.7/DataStructures"
Edit: I now realize that you want to use a Module object!
julia> m = DataStructures
julia> Pkg.dir(repr(m))
"/home/liso/.julia/v0.7/DataStructures"
Edit 2: I am not sure whether you are trying to find the path to the module or to an object defined in the module (I hope that parsing the path out of the next result is easy):
julia> repr(which(DataStructures.eval, (String,)))
"eval(x) in DataStructures at /home/liso/.julia/v0.7/DataStructures/src/DataStructures.jl:3"
I have a file where my ZSH functions are defined, and I source it from my zshrc.
There is a set of helper functions which are used only by other functions in that file.
My question is: how can I keep readable names for those helpers (such as 'ask', etc.) and be sure that they will not be overridden later in other sourced files?
So, for example I have two functions:
helper() {
    # do something
}
function-i-want-to-use-in-shell() {
    helper # call helper, I want to be sure that it is 'my' helper
    # do something more
}
I want helper to be available only to functions declared within that file.
It would be nice if I could wrap those functions in, for example, a subshell ( ) and then export function-i-want-to-use-in-shell to the parent (I know this is impossible).
So I am looking for a convenient way to create something like a scope of their own for those functions, making some of them global and some local.
[EDIT]
I think another example will give a better explanation of the behaviour I want to achieve.
So, for the second example I have two files: file1.sh and file2.sh.
file1.sh is the same as in the example above; in file2.sh another function helper is defined. The point is that the helper from file1.sh is just a function for local usage (within that file), just a snippet of code. Later, in the shell, I only want to use function-i-want-to-use-in-shell from file1.sh and helper from file2.sh. I do not want to make helper readonly, I just want it to be for local use only. Maybe I can create something like a "namespace" for the functions in file1.sh, or somehow achieve JavaScript-like scope-lookup behaviour in that file. The only way I see to do it now is to give up on keeping good, readable, self-explanatory names for my helper functions and give them names that are unlikely to be invented by someone else, or to use a prefix for those functions. I just wanted to write something like if ask "question"; then rather than if my-local-ask "question"; then in my other functions, and be sure that if someone (or I myself) later defines another function called ask, nothing will break.
It's a little heavy-handed, but you can use an autoloaded function to, if not prevent overriding a function, at least "reset" it easily before calling it. For example:
# Assumes that $func_dir is a directory in your fpath;
% echo 'print bar' > $func_dir/helper
% helper () { print 9; }
% helper
9
% unset -f helper
% autoload helper
% helper
bar
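For completeness, a sketch of the one-time setup that transcript assumes (the directory name is arbitrary):

# in ~/.zshrc, before autoloading any of these names
func_dir=~/.zsh/functions
mkdir -p $func_dir
fpath=($func_dir $fpath)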
I am new to Julia, so this might be trivial.
I have a function definition within a module that looks like (using URIParser):
function add!(graph::Graph,
              subject::URI,
              predicate::URI,
              object::URI)
    ...
end
Outside of the module, I call:
add!(g, URIParser.URI("http://test.org/1"), URIParser.URI("http://test.org/2"), URIParser.URI("http://test.org/1"))
Which gives me this error:
ERROR: no method add!(Graph,URI,URI,URI)
in include at boot.jl:238
in include_from_node1 at loading.jl:114
at /Users/jbaran/src/RDF/src/RDF.jl:79
Weird, because I can see a matching signature:
julia> methods(RDF.add!)
# 4 methods for generic function "add!":
add!(graph::Graph,subject::URI,predicate::URI,object::Number) at /Users/jbaran/src/RDF/src/RDF.jl:29
add!(graph::Graph,subject::URI,predicate::URI,object::String) at /Users/jbaran/src/RDF/src/RDF.jl:36
add!(graph::Graph,subject::URI,predicate::URI,object::URI) at /Users/jbaran/src/RDF/src/RDF.jl:43
add!(graph::Graph,statement::Statement) at /Users/jbaran/src/RDF/src/RDF.jl:68
At first I thought it was my use of object::Union(...), but even when I define three functions with Number, String, and URI, I get this error.
Is there something obvious that I am missing? I am using Julia 0.2.1 x86_64-apple-darwin12.5.0, by the way.
Thanks,
Kim
This looks like you may be getting bitten by the very slight difference between method extension and function shadowing.
Here's the short of it. When you write function add!(::Graph, ...); …; end;, Julia looks at just your local scope and sees if add! is defined. If it is, then it will extend that function with this new method signature. But if it's not already defined locally, then Julia creates a new local variable add! for that function.
As JMW's comment suggests, I bet that you have two independent add! functions. Base.add! and RDF.add!. In your RDF module, you're shadowing the definition of Base.add!. This is similar to how you can name a local variable pi = 3 without affecting the real Base.pi in other scopes. But in this case, you want to merge your methods with the Base.add! function and let multiple dispatch take care of the resolution.
There are two ways to get the method extension behavior:
Within your module RDF scope, say import Base: add!. This explicitly brings Base.add! into your local scope as add!, allowing method extension.
Explicitly define your methods as function Base.add!(graph::Graph, …). I like this form as it more explicitly documents your intentions to extend the Base function at the definition site.
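Here is a minimal sketch of the difference, using Base.length and current Julia syntax purely for illustration (the original add! methods aren't shown here):

module Shadowing
struct Thing end
length(t::Thing) = 1        # no import: this creates a brand-new function Shadowing.length
end

module Extending
struct Thing end
Base.length(t::Thing) = 1   # qualified name: this adds a method to Base.length
end

length(Extending.Thing())   # returns 1: dispatch finds the extension
length(Shadowing.Thing())   # MethodError: Base.length has no method for Shadowing.Thing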
This could definitely be better documented. There's a short reference to this in the Modules section, and there's currently a pull request that should be merged soon that will help.
I am trying to optimize my ESS + R environment. So far I make use of r-autoyas, I set indentation and other options following style guides, the minibuffer shows eldoc hints for function arguments, and I can press a key to get information about the variable at point (more here).
Are there any other things you use in order to have a nice R environment? Maybe non-ESS people have some nice things to add (I got the variable-at-point idea from watching an Eclipse user). One example could be an easy way to insert "just-before-defined" variables without typing the variable name (there should be something for that?).
(Please help me to change the question instead of "closing" the thread if it is not well formulated)
I am not using autoyas as I find auto-complete integration a better approach.
Insertion of previously defined symbols is a general emacs functionality called 'dabbrev-expand' and is bound to M-/. I have this in my .emacs to make it complete on full symbols:
(setq dabbrev-abbrev-char-regexp "\\sw\\|\\s_\\|\\s.")
(setq dabbrev-case-fold-search t)
Another thing which I use extensively is imenu-based jump-to-symbol-definition. It offers functionality similar to emacs tags, but only for the open buffers in the same mode as the current buffer. It also uses IDO for queries:
Put imenu-anywhere.el into your emacs load path and add this:
(require 'imenu-anywhere)
(global-set-key [?\M-o] 'imenu-anywhere)
Now, if I do M-o foo RET emacs jumps to the function/class/method/generic definition of 'foo' as long as 'foo' is defined in one of the open buffers. This of course works whenever a mode defines imenu-tags. ESS defines those, so you should not need to add more.
There is also, somewhere, a collection of R-yas templates. I haven't gotten around to using them yet, but my guess is that it's a pretty efficient template-insertion mechanism.
[edit] Activate tracebug:
(setq ess-use-tracebug t)
I am a bit of an R novice, and I am stuck with what seems like a simple problem, yet one that touches pretty deep questions about how and when things get evaluated in R.
I am using Rserve quite a bit; the typical syntax to get things evaluated remotely is a bit of a pain to type repeatedly:
RSeval(connection, quote(try(command)))
So I would like a function r which does the same thing with just the call:
r(command)
My first, naive, bound to fail attempt involved:
r <- function(command) {
  RSeval(c, quote(try(command)))
}
You've guessed it: this sends, literally, try(command) to my confused Rserve daemon. I want command to be partially evaluated, if that makes any sense -- i.e. replaced by what I typed as an argument, but without evaluating it locally.
I looked for solutions to this and browsed through the documentation for quote, substitute, eval, call, etc., but I was not able to find something that worked. Either command gets evaluated locally, or not at all.
This is not a big problem; I can type the whole damn quote(try()) thing all the time, but at this point I am mostly curious as to how to get this to work!
EDIT:
More explanations as to what I want to do.
In the text above, command is meant to be a call to a function, ideally -- i.e. not a character string: something like a <- 3 or assign("a", 3) rather than "a<-3" or quote(a<-3).
I believe that this is part of what makes this tricky. It seems really hard to tell R not to evaluate this locally, but only send it literally. Basically I would like my function to be a bit like quote(), which does not evaluate its argument.
Some explanation about my intentions. I wish to use Rserve frequently to pass commands to a remote R daemon. The commands would be my own (or my colleagues'), and the daemon is protected by a firewall and authentication (and would not be run as root), so there is no worry about malicious commands being passed.
To be perfectly honest, this is not a big issue, and I would be pretty happy to always use RSeval(c, quote(try())). At this point I see this more as an interesting inquiry into the subtleties of R :-)
You probably want to use the substitute function; it can give you the argument unevaluated, which you can then build into the call.
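A sketch of that idea, assuming connection is your open Rserve connection as in the question:

r <- function(command) {
  cmd <- substitute(command)            # capture the argument as an unevaluated call
  RSeval(connection, call("try", cmd))  # build try(<command>) and send it off, still unevaluated
}

r(a <- 3)   # sends try(a <- 3) to the daemon instead of evaluating it locally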
I'm not sure if I understood you correctly - would eval(parse(text = command)) do the trick? Notice that command is a character, so you can easily pass it as a function argument. If I'm getting the point...
Anyway, evaluating user-specified commands is potentially dangerous, and therefore not recommended. You should either install AppArmor and tweak it (which is not an easy task), or drop the whole evaluation thing...
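For reference, the parse/eval idiom from this answer works like this in isolation (evaluated locally here, just to show the mechanics):

command <- "1 + 1"            # the command held as a character string
eval(parse(text = command))   # parses and then evaluates it, returning 2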