Is it possible to use function overloading in R?

Is it possible to use function overloading in R? I've seen answers to this question from 2013, but the language and its capabilities have evolved a lot since then.
In my specific case, I'd like the following:
formFeedback <- function(success, message)
formFeedback <- function(feedback_object)
Is this at all possible? If not, what's the best practice? Should I use two different named functions, should I make all parameters optional, or should I force the users to always pass an object and make the other one impossible?

No, there's no function overloading in R, though there are a few object systems that accomplish similar things. The simplest one is S3: it dispatches based on the class attribute of the first argument (though there are a few cases where it looks at other arguments).
So if your feedback_object has a class attribute, you could make your first form the default method and your second one the method for that class, but you'd have to rename the first argument to match across all methods. The method functions would need different names: formFeedback.default and formFeedback.yourclass.
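For illustration, a minimal S3 sketch; the class name "feedback" and its fields are assumptions, not something from the question:

formFeedback <- function(x, ...) UseMethod("formFeedback")   # the generic

formFeedback.default <- function(x, message) {               # (success, message) form
  list(success = x, message = message)
}

formFeedback.feedback <- function(x, ...) {                  # feedback-object form
  list(success = x$success, message = x$message)
}

formFeedback(TRUE, "saved")    # dispatches to formFeedback.default
fb <- structure(list(success = FALSE, message = "oops"), class = "feedback")
formFeedback(fb)               # dispatches to formFeedback.feedback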
There's also a more complicated system called S4 that can dispatch on the types of all arguments.
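A small S4 sketch of dispatch on more than one argument (the generic combine is made up for illustration):

setGeneric("combine", function(x, y) standardGeneric("combine"))
setMethod("combine", signature("numeric", "character"),
          function(x, y) paste(x, y))
setMethod("combine", signature("character", "numeric"),
          function(x, y) paste(y, x))
combine(1, "a")   # both argument types participate in dispatch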
And you can craft your own dispatch if you like (and this is the basis for several other object systems in packages), using whatever method you like to decide which function to call.

Related

What is multiple dispatch and how does one use it in Julia?

I have seen and heard many times that Julia allows "multiple dispatch", but I am not really sure what that means or looks like. Can anyone provide an example of what it looks like programmatically and what it enables?
From the Julia docs
The choice of which method to execute when a function is applied is called dispatch. Julia allows the dispatch process to choose which of a function's methods to call based on the number of arguments given, and on the types of all of the function's arguments. This is different than traditional object-oriented languages, where dispatch occurs based only on the first argument, which often has a special argument syntax, and is sometimes implied rather than explicitly written as an argument.

Using all of a function's arguments to choose which method should be invoked, rather than just the first, is known as multiple dispatch. Multiple dispatch is particularly useful for mathematical code, where it makes little sense to artificially deem the operations to "belong" to one argument more than any of the others: does the addition operation in x + y belong to x any more than it does to y? The implementation of a mathematical operator generally depends on the types of all of its arguments. Even beyond mathematical operations, however, multiple dispatch ends up being a powerful and convenient paradigm for structuring and organizing programs.
So in short: other languages pick a method based only on the first parameter, whereas Julia takes multiple parameters into account. This allows several methods of the same function to coexist even when they share the same type of initial parameter.
A simple example of multiple dispatch in Julia can be found here.
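Since that example is only linked, here is a minimal sketch (the function describe is made up for illustration):

describe(x::Int, y::Int) = "two integers"
describe(x::Int, y::Float64) = "an integer and a float"
describe(x::Float64, y::Float64) = "two floats"

describe(1, 2)        # "two integers"
describe(1, 2.0)      # "an integer and a float" -- the second argument decided
describe(1.0, 2.0)    # "two floats"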

How to pass an object by reference and value in Julia?

I know that from here:
Julia function arguments follow a convention sometimes called "pass-by-sharing", which means that values are not copied when they are passed to functions. Function arguments themselves act as new variable bindings (new locations that can refer to values), but the values they refer to are identical to the passed values. Modifications to mutable values (such as Arrays) made within a function will be visible to the caller. This is the same behavior found in Scheme, most Lisps, Python, Ruby and Perl, among other dynamic languages.
Given this, it's clear to me that to pass by reference, all you need to do is have a mutable type that you pass into a function and edit.
My question then becomes: how can I clearly distinguish between pass by value and pass by reference? Does anyone have an example that shows a function being called twice, once with pass by reference and once with pass by value?
I saw this post which alludes to some similar ideas, but it did not fully answer my question.
In Julia, functions always have pass-by-sharing argument-passing behavior:
https://docs.julialang.org/en/v1/manual/functions/
This argument-passing convention is also used in most general purpose dynamic programming languages, including various Lisps, Python, Perl and Ruby. A good and useful description can be found here:
https://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_sharing
In short, pass-by-sharing works like pass-by-reference, except that you cannot change which value a binding in the calling scope refers to by reassigning to an argument inside the called function: if you reassign an argument, the binding in the caller is unchanged. This means that in general you cannot use functions to change bindings, for example to swap two variables. (Macros can, however, modify bindings in the caller.) In particular, if a variable in the caller refers to an immutable value like an integer or a floating-point number, a function call cannot change its value: which object the variable refers to cannot be changed by the call, and the value itself cannot be modified because it is immutable.
If you want something like R's or Matlab's pass-by-value behavior, you need to explicitly create a copy of the argument before modifying it. This is precisely what R and Matlab do when an argument is passed in, modified, and an external reference to it remains. In Julia the copy must be made explicitly by the programmer rather than automatically by the system. A downside is that the system can sometimes know that no copy is required (because no external references remain) when the programmer cannot generally know this. That ability, however, is deeply tied to the reference-counting garbage-collection technique, which Julia does not use, for performance reasons.
By convention, functions which mutate the contents of an argument have a ! suffix (e.g., sort vs. sort!).
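A small sketch of the difference (the function names are made up):

function fill_zeros!(v::Vector{Int})
    v .= 0             # in-place mutation: visible to the caller
end

function rebind(v::Vector{Int})
    v = [9, 9, 9]      # rebinding the local name: invisible to the caller
end

function fill_zeros_copy(v::Vector{Int})
    w = copy(v)        # explicit copy emulates pass-by-value
    w .= 0
    return w
end

a = [1, 2, 3]
rebind(a)              # a is still [1, 2, 3]
fill_zeros_copy(a)     # a is still [1, 2, 3]; the zeros are in the returned copy
fill_zeros!(a)         # a is now [0, 0, 0]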

Adding a Base function vs using a unique function name in Julia

I am porting a library to Julia and am curious what would be considered best practice for adding Base functions from a module.
This library contains functions like values(t::MyType) and keys(t::MyType) that take unique struct types but do not really do the same thing or return the same types as the Base functions.
What would be the best practice in this case?
1. Just add Base.values(t::MyType) and Base.keys(t::MyType) methods so they can be used without prefixes.
2. Change the function names to my_type_keys(t::MyType) and my_type_values(t::MyType).
3. Use their original names and require them to be prefixed as MyModule.values(t) and MyModule.keys(t).
If you extend Base functions, you should aim for them to do conceptually the same thing. Also, you should only extend Base functions to dispatch on types you define in your own package. The rule is that you may define methods for outside functions (e.g. from Base or another package) on your own types, or define your own functions on outside types; but defining methods for outside functions on outside types is "type piracy".
Given that you'd be defining them on your own types, it's not a problem that the return values differ, as long as the functions are conceptually the same.
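A sketch of option 1 under that rule (MyType and its field are made up):

module MyModule

struct MyType
    data::Dict{Symbol,Int}
end

Base.keys(t::MyType) = keys(t.data)       # fine: Base function, our own type
Base.values(t::MyType) = values(t.data)   # fine: Base function, our own type

# Base.keys(d::Dict) = ...                # type piracy: Base function, Base type

end

t = MyModule.MyType(Dict(:a => 1))
collect(keys(t))   # works unprefixed, because these are methods of Base.keys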
As for options 2 and 3, you can do both. Option 3 requires that you don't import the Base functions explicitly (if you did, you'd be extending them by defining new methods rather than defining new functions with the same name), that you don't export your functions, and most likely that you won't be using the Base functions inside your module (keys, for example, is really widely used; you can still use it, but you'd have to prefix it with Base. inside the module).
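A sketch of option 3 (again with a made-up type):

module MyModule2

struct MyType
    data::Dict{Symbol,Int}
end

# A new, unexported function that merely shares Base's name:
keys(t::MyType) = Base.keys(t.data)   # Base.keys now needs its prefix in here

end

MyModule2.keys(MyModule2.MyType(Dict(:b => 2)))   # callers must qualify it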
Option 2 is always safe, especially if you can find a better name than my_keys. But I think it is preferable to use the same name if the function is doing conceptually the same thing, as it simplifies the user experience a lot. You can assume most users will know the Base functions and will try them out if that's intuitive.

Changing method dispatch in Common Lisp

I'm trying to simulate something akin to Haskell's typeclasses with Common Lisp's CLOS. That is, I'd like to be able to dispatch a method on an object's "typeclasses" instead of its superclasses.
I have a metaclass defined for classes which have and implement typeclasses (which are just other classes). Those classes (the ones that implement typeclasses) have a slot containing the list of the typeclasses they implement.
I'd like to be able to define methods for a typeclass, and then be able to dispatch that method on objects whose class implement that typeclass. And I'd like to be able to add and remove typeclasses dynamically.
I figure I could probably do this by changing the method dispatch algorithm, though that doesn't seem too simple.
Anybody is comfortable enough with CLOS and the MOP to give me some suggestions?
Thanks.
Edit: My question might be specified as: how do I implement compute-applicable-methods-using-classes and compute-applicable-methods for a "custom" generic-function class such that, if some of the specializers of a generic function's methods are typeclasses (classes whose metaclass is the 'typeclass' class), the corresponding argument's class must implement that typeclass (which simply means having the typeclass stored in a slot of the argument's class) for the method to be applicable?
From what I understand from the documentation, when a generic function is called, compute-discriminating-function is called first; it will first attempt to obtain applicable methods through compute-applicable-methods-using-classes, and if unsuccessful, will try the same with compute-applicable-methods.
While my definition of compute-applicable-methods-using-classes seems to work, the generic function fails to dispatch to an applicable method. So the problem must be in compute-discriminating-function or compute-effective-method.
See code.
This is not easily achievable in Common Lisp.
In Common Lisp, operations (generic functions) are separate from types (classes), i.e. they're not "owned" by types. Their dispatch is done at runtime, with the possibility of adding, modifying and removing methods at runtime as well.
Usually, errors from missing methods are signaled only at runtime. The compiler has no way to know if a generic function is being "well" used or not.
The idiomatic way in Common Lisp is to use generic functions and describe their requirements; in other words, the closest thing to an interface in Common Lisp is a set of generic functions and a marker mixin class. Most usually, though, only a protocol is specified, together with its dependencies on other protocols. See, for instance, the CLIM specification.
As for type classes, they are a key feature that keeps the language not only fully type-safe, but also very extensible in that respect. Otherwise, either the type system would be too strict, or the lack of expressiveness would lead to type-unsafe situations, at least from the compiler's point of view. Note that Haskell doesn't keep, and doesn't have to keep, object types at runtime; it does all type inference at compile time, much in contrast with idiomatic Common Lisp.
To have something similar to type classes in Common Lisp at runtime, you have a few choices:
Should you choose to support type classes with their rules, I suggest you use the meta-object protocol (a rough sketch follows this list):
Define a new generic function meta-class (i.e. one which inherits from standard-generic-function)
Specialize compute-applicable-methods-using-classes to return false as a second value, because classes in Common Lisp are represented solely by their name, they're not "parameterizable" or "constrainable"
Specialize compute-applicable-methods to inspect the argument's meta-classes for types or rules, dispatch accordingly and possibly memoize results
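A rough sketch of those three steps, assuming the Closer to MOP compatibility layer (the typeclass matching itself is left as a stub):

(defclass typeclass-generic-function (standard-generic-function)
  ()
  (:metaclass closer-mop:funcallable-standard-class))

;; Step 2: decline class-only dispatch so CLOS falls back to
;; compute-applicable-methods on the actual arguments.
(defmethod closer-mop:compute-applicable-methods-using-classes
    ((gf typeclass-generic-function) classes)
  (declare (ignore classes))
  (values nil nil))

;; Step 3: inspect each argument's class for implemented typeclasses,
;; select and order methods accordingly (stubbed here), and memoize
;; the result yourself if profitable.
(defmethod compute-applicable-methods
    ((gf typeclass-generic-function) arguments)
  (call-next-method))

;; A generic function opting in to the custom dispatch:
(defgeneric frob (x)
  (:generic-function-class typeclass-generic-function))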
Should you choose to only have parameterizable types (e.g. templates, generics), an existing option is the Lisp Interface Library, where you pass around an object that implements a particular strategy using a protocol. However, I see this mostly as an implementation of the strategy pattern, or an explicit inversion of control, rather than actual parameterizable types.
For actual parameterizable types, you could define abstract unparameterized classes from which you'd intern concrete instances with funny names, e.g. lib1:collection<lib2:object>, where collection is the abstract class defined in the lib1 package, and the lib2:object is actually part of the name as is for a concrete class.
The benefit of this last approach is that you could use these classes and names anywhere in CLOS.
The main disadvantage is that you must still generate concrete classes, so you'd probably have your own defmethod-like macro that would expand into code that uses a find-class-like function which knows how to do this. Thus breaking a significant part of the benefit I just mentioned, or otherwise you should follow the discipline of defining every concrete class in your own library before using them as specializers.
Another disadvantage is that without further non-trivial plumbing, this is too static, not really generic, as it doesn't take into account that e.g. lib1:collection<lib2:subobject> could be a subclass of lib1:collection<lib2:object> or vice-versa. Generically, it doesn't take into account what is known in computer science as covariance and contravariance.
But you could implement it: lib:collection<in out> could represent the abstract class with one contravariant argument and one covariant argument. The hard part would be generating and maintaining the relationships between concrete classes, if at all possible.
In general, a compile-time approach would be more appropriate at the Lisp implementation level. Such Lisp would most probably not be a Common Lisp. One thing you could do is to have a Lisp-like syntax for Haskell. The full meta-circle of it would be to make it totally type-safe at the macro-expansion level, e.g. generating compile-time type errors for macros themselves instead of only for the code they generate.
EDIT: After your question's edit, I must say that compute-applicable-methods-using-classes must return nil as a second value whenever there is a type class specializer in a method. You can call-next-method otherwise.
This is different than there being a type class specializer in an applicable method. Remember that CLOS doesn't know anything about type classes, so by returning something from c-a-m-u-c with a true second value, you're saying it's OK to memoize (cache) given the class alone.
You must really specialize compute-applicable-methods for proper type class dispatching. If there is opportunity for memoization (caching), you must do so yourself here.
I believe you'll need to override compute-applicable-methods and/or compute-applicable-methods-using-classes, which compute the list of methods that will be needed to implement a generic function call. You'll then likely need to override compute-effective-method, which combines that list and a few other things into a function that can be called at runtime to perform the method call.
I really recommend reading The Art of the Metaobject Protocol (as was already mentioned) which goes into great detail about this. To summarize, however, assume you have a method foo defined on some classes (the classes need not be related in any way). Evaluating the lisp code (foo obj) calls the function returned by compute-effective-method which examines the arguments in order to determine which methods to call, and then calls them. The purpose of compute-effective-method is to eliminate as much of the run-time cost of this as is possible, by compiling the type tests into a case statement or other conditional. The Lisp runtime thus does not have to query for the list of all methods each time you make a method call, but only when you add, remove or change a method implementation. Usually all of that is done once at load time and then saved into your lisp image for even better performance while still allowing you to change these things without stopping the system.

How to pass variables to a function: as separate parameters or as one array?

When I try to refactor my functions for new needs, I stumble from time to time over the crucial question:
Shall I add another parameter with a default value? Or shall I use a single array parameter, to which I can add further values without breaking the API?
Unless you need to support a flexible number of variables, I think it's best to explicitly identify each parameter. In most cases you can add an overloaded method that has a different signature to support the extra parameter while still supporting the original method signature. If you use an array for passing variables it just makes it too confusing for users of your API. Obviously there are some inputs that lend themselves to an array (a list of points in a polygon, a list of account IDs you wish to perform an action on, etc.) but if it's not a variable that you would reasonably expect to be an array or list, you should pass it into the method as a separate parameter.
Just like many questions in programming, the right answer is "it depends".
To take Javascript/jQuery as an example, one good rule of thumb is whether the parameter will be required each time the function is called or whether it is optional. For example, the main jQuery function itself requires an expression to determine what element(s) the operation will affect:
jQuery(expression)
It makes no sense to try to pass this parameter as part of an array as it will be required every time this function is called.
On the other hand, many jQuery plugins require several miscellaneous parameters that may be optional. By convention, these are passed via an 'options' object. As you said, this provides a nice interface, as new parameters can be added without affecting the existing API. It also keeps the API clean, since the user can ignore the options that are not applicable.
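A small JavaScript sketch of that convention (the names are made up):

// The required input stays positional; optional settings travel in a
// single, extensible options object with defaults.
function highlight(selector, options = {}) {
  const { color = "yellow", duration = 400, fade = true } = options;
  // ...apply the effect using the resolved settings...
  return { selector, color, duration, fade };
}

highlight("#msg");                                 // all defaults
highlight("#msg", { color: "red", fade: false });  // override a subset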
In general, when several parameters are involved, passing them in a single options structure is a nice convention, as many of them are likely to be optional. This would have helped clean up many Win32 APIs, although such structures are more awkward to deal with in C/C++ than in JavaScript.
It depends on the programming language used.
If you have a run-of-the-mill OO language, you should use an object that you can easily extend, if you are really concerned about API consistency.
If that doesn't matter that much, there is the option of changing the method signature and overloading the method with more / different parameters.
If your language doesn't support either and you want the API to be binary stable, use an array.
There are several considerations that must be made.
Where is the function used? - Only in code you created? One place or hundreds of places? The amount of work that will need to be done to maintain existing code is important. Remember to include the amount of time it will take to communicate to other programmers that may currently be using your function.
How critical is the new parameter? - Do you want to require it to be used? If it has a default value, will that default value break existing use of the function in any subtle ways?
Ease of comprehension - How many parameters are already passed into the function? The larger the number, the more confusing and error-prone it will be. Code Complete recommends that you restrict the number of parameters to seven or fewer. If you need more than that, you should try to abstract some or all of the related parameters into one object.
Other special considerations - Do you want to optimize your efforts for any special conditions such as code speed or size? Are there any special considerations that must be taken into account for your execution environment? Keep in mind your goals for the project and make sure you aren't working against them with whatever design choice you make.
In his book Code Complete, Steve McConnell decrees that a function should never have more than 7 arguments, and rarely even that many. He presents compelling arguments - that I can't cite from memory, alas.
Clean Code, more recently, advocates even fewer arguments.
So unless the number of things to pass is really small, they should be passed in an enveloping structure. If they're homogenous, an array. If not, then a reasonably lightweight object should be built for the purpose.
You should do neither. Just add the parameter and change all callers to supply the proper value explicitly. The reason is that parameters with default values can only appear at the end of the parameter list, so you would not be able to add any more required parameters anywhere in the list without risking misinterpretation.
These are the critical steps to disaster:
1. add one or two parameters with defaults
2. some callers will supply them, and some will rely on the defaults.
[half a year passed]
3. add a required parameter (before them)
4. change all callers to accept the required parameter
5. get a phone call, or some other event that makes you forget to change one of the call sites from step 2
6. now your program compiles perfectly, but is invalid.
Unfortunately, in function call semantics we usually don't have a chance to say, by name, which value goes where.
An array is also not a proper solution. An array should be used as a collection of similar objects on which a uniform operation is performed. As they say here, if it's worth refactoring, it's worth refactoring now.
