Is it possible in Itcl to extend a class dynamically with methods from inside the constructor?
I have some functions which are generated dynamically...
They look something like this:
proc attributeFunction fname {
    set res "proc $fname args {
        # set an attribute in the class's attribute list
    }"
    uplevel 1 $res
}
Now I have a file which has a list of possible attributes:
attributeFunction ::func1
attributeFunction ::func2
attributeFunction ::func3
...
This file gets sourced. But so far this only adds global functions.
It would be way nicer to add these functions as methods to an Itcl object.
A little background information:
This is used to generate an abstract language where the user can easily add these attributes by writing them without any other keyword. The use of functions here offers a lot of advantages I do not want to miss.
In Itcl 3, all you can do is redefine an existing method (using the itcl::body command). You can't create new methods in the constructor.
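For reference, a minimal sketch of what that Itcl 3 facility looks like (the method must already be declared in the class; itcl::body can only supply or replace its body):
itcl::class Foo {
    method greet {}    ;# declared here, body filled in later
}
itcl::body Foo::greet {} {
    puts "the body can be (re)defined, but the method itself had to exist already"
}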
You can do this in Itcl 4, because it is built on the foundation of TclOO (a fully dynamic OO core). You'll need the underlying TclOO facilities to do it, but the command you call is something like this:
::oo::objdefine [self] method myMethodName {someargument} {
    puts "in the method we can do what we want..."
}
Here's a more complete example:
% package require itcl
4.0.2
% itcl::class Foo {
    constructor {} {
        ::oo::objdefine [self] method myMethodName {someargument} {
            puts "in the method we can do what we want..."
        }
    }
}
% Foo abc
abc
% abc myMethodName x
in the method we can do what we want...
Looks like it works to me…
I have the following JavaScript method:
myFunc = function(callback) { callback.call(this, "hello", "world"); }
and I'm passing a Java object that implements the 'call' method. In the Java call method I get the two parameters "hello" and "world", but not 'this' (of course). Is there a way to access 'this' from Java?
I'm interfacing Java with d3.js, and d3 has lots of callbacks in this style, and 'this' is where d3 stores a selection.
Thanks
I'm not actually coding in Java but in JRuby. In order to make a Java example
I'll have to simplify my code below. Maybe this can help some. If not,
I'll try to do a Java example.
# Function f1 will call the callback methods cb1 and cb2 with the 'this' variable.
# This is just a notation for creating a JavaScript function. It calls
# @browser.executeJavaScriptAndReturnValue(scrpt), with the function
# body (everything between EOT) modified to make a valid JavaScript script.
# f1 is a function with 2 arguments, cb1 and cb2, which should be the
# callback functions.
f1 = B.function(<<-EOT)
(cb1, cb2) {
cb1.call(this, "banana", "milk");
cb2.call(this, "arroz", "feijao");
}
EOT
# Proc is a closure. It receives two arguments |food1, food2|. This will
# become a Java object through JRuby's magic.
proc = Proc.new { |food1, food2| puts "I hate #{food1} and #{food2}" }
# Now call method f1, passing proc as the first argument and the block as
# the second argument. So cb1 = proc and cb2 = <the block below>. Method
# 'send' grabs the given arguments, converts them to Java objects and then
# calls jxBrowser's 'invoke' method with the given arguments.
f1.send(proc) { |food1, food2| puts "eu gosto de #{food1} e #{food2}" }
The result of executing this code is:
I hate banana and milk
eu gosto de arroz e feijao
As can be seen, the 'this' variable is just gone... I would like to be able to
capture the 'this' variable somehow, in order to use the context in the blocks. I've managed to make a workaround that captures the 'this' variable, but it requires wrapping the block in another JavaScript function.
The whole idea of this code is to allow a JRuby developer to write Ruby code and get this code executed in jxBrowser without needing to use any JavaScript. Examples of this can already be seen by downloading the mdarray-sol gem, or going to https://github.com/rbotafogo/mdarray-sol. There you can see multiple examples of using d3.js with JRuby.
Please make sure that you follow the instructions at https://jxbrowser.support.teamdev.com/support/solutions/articles/9000013062-calling-java-from-javascript and inject your Java object with the call() method correctly:
Java code:
browser.addScriptContextListener(new ScriptContextAdapter() {
    @Override
    public void onScriptContextCreated(ScriptContextEvent event) {
        Browser browser = event.getBrowser();
        JSValue window = browser.executeJavaScriptAndReturnValue("window");
        window.asObject().setProperty("java", new JavaObject());
    }
});
...
public static class JavaObject {
    public void call(JSValue window, String message) {
        System.out.println(message);
    }
}
JavaScript code:
window.java.call(window, 'Hello Java!');
Here is the code of an Alloy controller written in two different ways. Although they both work the same, which one might be best practice?
example 1 of controller.js:
var currentState = true;
$.getState = function() {
    return currentState;
};
example 2 of controller.js:
var currentState = true;
exports.getState = function() {
    return currentState;
};
Titanium's module system is based on the CommonJS specification. The exports variable is a special variable typically used to expose a public API from a module. So when you want to expose a method doSomething() on the MyModule.js module you would use the exports variable like this:
exports.doSomething = function(args) {
    //Some really cool method here
};
Then reference that module using:
var myModule = require('MyModule');
myModule.doSomething();
However, when referencing a view object, the typical way is to use the $ shortcut. You can see they prefer that method in the official documentation.
http://docs.appcelerator.com/platform/latest/#!/guide/Alloy_XML_Markup
The $ variable holds a reference to your controller instance. It also contains references to all indexed views (that is, views for which you supplied an id in your XML markup).
Both ways are strictly equivalent: during compilation, Alloy will merge the content of exports with your controller referenced in $. Adding functions directly to the instance won't change a thing.
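For instance, from elsewhere in the app both styles end up callable on the controller instance (a quick illustration; 'controller' is just the file name from the question):
// in some other controller or in alloy.js
var ctrl = Alloy.createController('controller');
Ti.API.info(ctrl.getState()); // works whether getState was attached to $ or to exports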
Nevertheless, developers are used to seeing the public API as the set of functions exported via the special variable exports; thus, I would recommend keeping its use clean and clear (for instance, define your functions in your module scope, and only expose them at the beginning or end of your controller):
function myFunction1 () { }
function myFunction2 () { }
function myFunction3 () { }
exports.myFunction1 = myFunction1;
exports.myFunction3 = myFunction3;
This way, your API is quite clear for people diving into your source code. (A README file is also highly recommended :) ).
I'm trying to use macro annotations in Scala, where my macro annotation would take an argument of another type. It would then use Scala reflection to look at the passed-in type and add some methods as appropriate. E.g.:
trait MyTrait {
  def x: Int
  def y: Float
}
@MyAnnotation class MyClass //<-- somehow, this annotation should reference MyTrait
class MyAnnotation(val target: Any) extends StaticAnnotation {
  def macroTransform(annottees: Any*) = macro MyAnnotationImpl.impl
}
object MyAnnotationImpl {
  def impl(c: Context)(annottees: c.Expr[Any]*): c.Expr[Any] = {
    // if I can get a handle on the type MyTrait in here
    // then I can call .members on it, etc.
    ...
  }
}
Basically, the same thing as Using Scala reflection in Scala macros, except using macro annotations. However, when I try to template my macro annotation with a TypeTag
class MyAnnotation[T](val target: Any) extends StaticAnnotation {
  def macroTransform[T](annottees: Any*) = macro MyAnnotationImpl.impl[T]
}
object MyAnnotationImpl {
  def impl[T: c.WeakTypeTag](c: Context)(annottees: c.Expr[Any]*): c.Expr[Any] = {
    ...
  }
}
I get
[error] /Users/imran/other_projs/learn_macros/macros/src/main/scala/com/imranrashid/oleander/macros/MacrosWithReflection.scala:7: macro annotation has wrong shape:
[error] required: def macroTransform(annottees: Any*) = macro ...
[error] found : def macroTransform[T](annottees: Any*) = macro ...
[error] class MyAnnotation[T](val target: Any) extends StaticAnnotation {
[error] ^
I've also tried to make the type an argument to my annotation, so I would use it like @MyAnnotation(MyTrait) class Foo .... I can extract the name as a String with something like
val targetTrait = c.prefix.tree match {
  case Apply(Select(New(Ident(_)), nme.CONSTRUCTOR), List(Ident(termName))) => termName
}
but I'm not sure what I can do with that String to get back the full type. I've also tried variants like @MyAnnotation(typeOf[MyTrait]) class Foo ..., and then using c.eval on the typeOf inside my macro, but that doesn't compile either.
In macro paradise 2.0.0-SNAPSHOT we have quite a tricky way of accessing type parameters for macro annotations (the situation will improve later on when we have dedicated APIs for that, but right now it's very difficult to introduce new functionality to scala-reflect.jar in macro paradise, so the current API is a bit rough).
For now it's necessary to specify the type parameter on the annotation class and not to declare any type parameters on the macroTransform method. Then, in macro expansion, access c.macroApplication and extract the untyped tree corresponding to the passed type parameter. Afterwards, do c.typeCheck as described in Can't access Parent's Members while dealing with Macro Annotations.
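A rough sketch of that recipe (assuming Scala 2.10 with macro paradise and quasiquotes; the exact shape of the application tree can vary, so treat the pattern match below as illustrative rather than the definitive API):
import scala.language.experimental.macros
import scala.annotation.StaticAnnotation
import scala.reflect.macros.Context

class MyAnnotation[T] extends StaticAnnotation {
  // note: no type parameter on macroTransform itself
  def macroTransform(annottees: Any*) = macro MyAnnotationImpl.impl
}

object MyAnnotationImpl {
  def impl(c: Context)(annottees: c.Expr[Any]*): c.Expr[Any] = {
    import c.universe._
    // c.macroApplication is the whole call:
    //   new MyAnnotation[MyTrait]().macroTransform(<annottee>)
    val typeArg: Tree = c.macroApplication match {
      case Apply(Select(Apply(Select(New(AppliedTypeTree(_, List(arg))), _), _), _), _) => arg
      case _ => c.abort(c.enclosingPosition, "expected exactly one type argument")
    }
    // the extracted tree is untyped; typechecking a term ascribed with it yields the Type
    val tpe = c.typeCheck(q"(??? : $typeArg)").tpe
    // tpe.members now lists the members of MyTrait
    c.Expr[Any](Block(annottees.map(_.tree).toList, Literal(Constant(()))))
  }
}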
As Eugene points out in his answer, it is possible to match on the tree of the whole macro application. Like every Scala method, annotation macro applications can take multiple type argument lists as well as multiple value argument lists.
Consider the macro application of an annotation macro called test:
#test[A, B][C, D](a, b)(c, d) trait Foo
In the implementation of test we can inspect the macro application by
println(show(c.macroApplication))
which will result in:
new test[A, B][C, D](a, b)(c, d).macroTransform(abstract trait Foo extends scala.AnyRef)
To extract the (type/value) parameters from the tree you have to pattern match on it. A parser for an arbitrary number of parameter lists can be found in this project.
Using this parser, retrieving the first value argument of the macro application is as easy as
val List(List(arg)) = MacroApp(c.macroApplication).termArgs
I want to keep all my commands in a map and map from the command to a function doing the job (just a standard dispatch table). I started with the following code:
package main

import "fmt"

func hello() {
    fmt.Print("Hello World!")
}

func list() {
    for key := range whatever {
        fmt.Print(key)
    }
}

var whatever = map[string]func(){
    "hello": hello,
    "list":  list,
}
However, it fails to compile because there is a recursive reference between the functions and the map. Trying to forward-declare the functions fails with a re-definition error when they are later defined, and the map is at top level. How do you define structures like this and initialize them at top level without having to use an init() function?
I see no good explanation in the language definition.
The forward reference that exists is for "external" functions, and it does not compile when I try to forward-declare the function.
I find no way to forward-declare the variable either.
Update: I'm looking for a solution that does not require you to populate the variable explicitly when you start the program, nor in an init() function. Not sure if that is possible at all, but it works in all comparable languages I know of.
Update 2: FigmentEngine suggested an approach that I gave as answer below. It can handle recursive types and also allow static initialization of the map of all commands.
As you might already have found, the Go specification states (my emphasis):
if the initializer of A depends on B, A will be set after B. Dependency analysis does not depend on the actual values of the items being initialized, only on their appearance in the source. A depends on B if the value of A contains a mention of B, contains a value whose initializer mentions B, or mentions a function that mentions B, recursively. It is an error if such dependencies form a cycle.
So, no, it is not possible to do what you are trying to do. Issue 1817 mentions this problem, and Russ Cox does say that the approach in Go might occasionally be over-restrictive. But it is clear and well defined, and workarounds are available.
So, the way to go around it is still by using init(). Sorry.
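For completeness, the init() workaround applied to the code from the question looks like this:
package main

import "fmt"

var whatever map[string]func()

// populating the map in init() breaks the initialization cycle,
// at the cost of not being a static initializer
func init() {
    whatever = map[string]func(){
        "hello": hello,
        "list":  list,
    }
}

func hello() {
    fmt.Print("Hello World!")
}

func list() {
    for key := range whatever {
        fmt.Print(key)
    }
}

func main() {
    whatever["list"]()
}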
Based on the suggestion by FigmentEngine above, it is actually possible to create a statically initialized map of commands. You do, however, have to pre-declare a type that you pass to the functions. I give the rewritten example below, since it is likely to be useful to others.
Let's call the new type Context. It can contain a circular reference as below.
type Context struct {
    commands map[string]func(Context)
}
Once that is done, it is possible to declare the array on top level like this:
var context = Context{
    commands: map[string]func(Context){
        "hello": hello,
        "list":  list,
    },
}
Note that it is perfectly OK to refer to functions defined later in the file, so we can now introduce the functions:
func hello(ctx Context) {
    fmt.Print("Hello World!")
}

func list(ctx Context) {
    for key := range ctx.commands {
        fmt.Print(key)
    }
}
With that done, we can create a main function that will call each of the functions in the declared context:
func main() {
    for key, fn := range context.commands {
        fmt.Printf("Calling %q\n", key)
        fn(context)
    }
}
Just populate the map inside a function before using list(), like that.
Sorry, I did not see that you wrote "without init()": that is not possible.
I am using a Cairngorm MVC architecture for my current project.
I have several commands which use the same type of function that returns a value. I would like to have this function in one place, and reuse it, rather than duplicate the code in each command. What is the best way to do this?
Create a static class or static method in one of your Cairngorm classes.
class MyStatic
{
    public static function myFunction(value:String):String
    {
        return "Returning " + value;
    }
}
Then where you want to use your function:
import MyStatic;
var str:String = MyStatic.myFunction("test");
Another option is to create a top-level function (à la "trace"). Check out this post I wrote here.
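The shape of such a package-level function is roughly this (a sketch; the function name is just an example, and the file must sit at the root of your source path and be named after the function):
// myFunction.as
package
{
    public function myFunction(value:String):String
    {
        return "Returning " + value;
    }
}
It can then be called from anywhere without an import, just like trace().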
You have lots of options here -- publicly defined functions in your model or controller, such as:
var mySharedFunction:Function = function():void
{
    trace("foo");
};
... static methods on new or existing classes, etc. Best practice probably depends on what the function needs to do, though. Can you elaborate?
Create an abstract base class for your commands and add your function in the protected scope. If you need to reuse it anywhere else, refactor it into a public static method on a utility class.
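A sketch of that first option, assuming the standard Cairngorm 2 ICommand interface (the package and class names here are made up):
package com.example.commands
{
    import com.adobe.cairngorm.commands.ICommand;
    import com.adobe.cairngorm.control.CairngormEvent;

    public class AbstractCommand implements ICommand
    {
        public function execute(event:CairngormEvent):void
        {
            // overridden by each concrete command
        }

        // shared helper available to every subclass
        protected function formatValue(value:String):String
        {
            return "Returning " + value;
        }
    }
}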