Is it possible to send an anonymous message to an object? I want to compose three objects like this (think FP):
" find inner product "
reduce + (applyToAll * (transpose #(1 2 3) #(4 5 6)))
where reduce, applyToAll and transpose are objects, and +, * and the two arrays are arguments passed in anonymous messages sent to those objects. Is it possible to achieve the same using blocks (but without explicit use of value:)?
Perhaps what you really want to do is define a DSL inside Smalltalk?
With HELVETIA we explore a lightweight approach to embed new languages into the host language. The approach reuses the existing toolchain of editor, parser, compiler and debugger by leveraging the abstract syntax tree (AST) of the host environment. Different languages cleanly blend into each other and into existing code.
aRealObject
    reduceMethod: #+;
    applyToAll: #*;
    transpose: #(#(1 2 3) #(4 5 6));
    evaluate
would work when aRealObject has defined the right methods. Where do you need a block?
You are looking for doesNotUnderstand:. If reduce is an object that does not implement + but you send it that message anyway, its doesNotUnderstand: method will be invoked instead. Normally it just raises an error, but you can override the default, access the selector + and the other argument, and do whatever you like with them.
For simplicity, create a class Reduce. On its class side, define the method:
doesNotUnderstand: aMessage
    ^ aMessage argument reduce: aMessage selector
Then you can use it like this:
Reduce + (#(1 2 3) * #(4 5 6))
which in a Squeak workspace answers 32, as expected.
It works because * is already implemented for Collections with suitable semantics.
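A quick way to check this is to evaluate the inner expression on its own in a workspace; the comment shows the expected result under Squeak's element-wise collection arithmetic:

#(1 2 3) * #(4 5 6)     "evaluates to #(4 10 18); Reduce + then folds it with #+ to 32"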
Alternatively, add a class ApplyToAll with this class-side method:
doesNotUnderstand: aMessage
    ^ aMessage argument collect: [:e | e reduce: aMessage selector]
and also add this method to SequenceableCollection:
transposed
    ^ self first withIndexCollect: [:c :i | self collect: [:r | r at: i]]
Then you can write
Reduce + (ApplyToAll * #(#(1 2 3) #(4 5 6)) transposed)
which is pretty close to your original idea.
I am trying to shadow the mathematical operators in the CL package. Except for *, / and +, this works fine. However, the symbol-values of those symbols are set by the implementation to values that I use frequently at the REPL (the function is interactive-eval in SBCL).
Since they're set by the previous form's evaluation, I can't get at them except through the symbol in the CL package, i.e. cl:*, after the form is evaluated. I thought about making * a symbol macro that would dispatch to either my vectorised version of *, if in a function context, or the value of cl:* otherwise.
However there doesn't seem to be an easy way to determine whether the symbol is being used as a function or value.
A stylised version of what I've got so far is:
(in-package :my-math-package)
(setf (fdefinition '+) #'my-vectorised-version-of-+)
Since my version of the + function is a superset of CL's, everything works fine, except for trying to use * at the REPL whilst in my package. I could use cl:* and it works, but I'm trying to keep cl:* and my-package:* value slots synced.
One analogy for the symbol value might be:
(setf (symbol-value '+) #'cl:+)
But that doesn't work, for two reasons:
When compiling, cl:* has no symbol value.
Even if it did, it would not track the value of the last evaluated form; it would hold a 'snapshot' of cl:*'s value at the time.
So I need a way to dynamically keep my-package:* and cl:* sharing the same symbol-value.
Anyone have any ideas? Am I missing something obvious?
This is a classic edge case of the package system in CL. However, for most purposes a symbol macro will do just what you want.
As an example, if I'm in a package where * is not cl:* then I can, for instance, say this:
(defun * (a b)
  (+ a b))

(define-symbol-macro * cl:*)
And now
> (* 1 2)
3
> *
3
> (funcall #'* 3 4)
7
> *
7
This is because symbol macros affect references to a symbol's value, not references to its function definition.
What you will lose is the ability to bind * and have that binding be special, because you can't declare symbol macros special. So
> (funcall (let ((* 2))
             (lambda () *)))
2
for instance. That's probably not a huge problem for *.
More significantly you will also lose any places where * is used just as a symbol. For instance (declare (type (array * (* *))) ...) is no longer going to work, at all. There is nothing you can do about this because these are simply uses of * as a symbol. This is an inherent limitation of the package system.
As an aside: if you overload the operators of the field of numbers (so * and +) you probably want to think much harder about consistency than people usually do. Or just give up on consistency, which is what people normally seem to do. In particular (+) should return the zero of the field and (*) should return the 1 of it. But ... which field?
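For reference, standard CL already follows this convention for the number field, which is part of what makes the choice awkward for a vectorised version:

(+)                 ; => 0, the additive identity
(*)                 ; => 1, the multiplicative identity
(reduce #'+ '())    ; => 0, because REDUCE calls (+) for an empty sequence
(reduce #'* '())    ; => 1, likewise via (*)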
I saw this example in the Julia language documentation. It uses something called Base. What is this Base?
immutable Squares
    count::Int
end
Base.start(::Squares) = 1
Base.next(S::Squares, state) = (state*state, state+1)
Base.done(S::Squares, s) = s > S.count;
Base.eltype(::Type{Squares}) = Int # Note that this is defined for the type
Base.length(S::Squares) = S.count;
Base is a module which defines many of the functions, types and macros used in the Julia language. You can view the files for everything it contains here or call whos(Base) to print a list.
In fact, these functions and types (which include things like sum and Int) are so fundamental to the language that they are included in Julia's top-level scope by default.
This means that we can just use sum instead of Base.sum every time we want to use that particular function. Both names refer to the same thing:
julia> sum === Base.sum
true
julia> @which sum   # show where the name is defined
Base
So why, you might ask, is it necessary to write things like Base.start instead of simply start?
The point is that start is just a name. We are free to rebind names in the top-level scope to anything we like. For instance start = 0 will rebind the name 'start' to the integer 0 (so that it no longer refers to Base.start).
Concentrating now on the specific example in the docs: if we simply wrote start(::Squares) = 1, we would find that we have created a new function with 1 method:
julia> start
start (generic function with 1 method)
But Julia's iterator interface (invoked using the for loop) requires us to add the new method to Base.start! We haven't done this and so we get an error if we try to iterate:
julia> for i in Squares(7)
           println(i)
       end
ERROR: MethodError: no method matching start(::Squares)
If we instead extend Base.start by writing Base.start(::Squares) = 1, the iterator interface can find the method for the Squares type and iteration works as we expect (as long as Base.done and Base.next are also extended for this type).
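With all of the Base.* methods from the question's snippet defined, the same loop now runs:

julia> for i in Squares(7)
           println(i)
       end
1
4
9
16
25
36
49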
I'll grant that for something so fundamental, the explanation is buried a bit far down in the documentation, but http://docs.julialang.org/en/release-0.4/manual/modules/#standard-modules describes this:
There are three important standard modules: Main, Core, and Base.
Base is the standard library (the contents of base/). All modules
implicitly contain using Base, since this is needed in the vast
majority of cases.
I'm learning Erlang from the very basics and have a problem with a tail-recursive function. I want my function to receive a list and return a new list where each element = element + 1. For example, if I send [1,2,3,4,5] as an argument, it must return [2,3,4,5,6]. The problem is that when I send exactly that argument, it returns [[[[[[]|2]|3]|4]|5]|6].
My code is this:
-module(test).
-export([test/0]).

test() ->
    List = [1,2,3,4,5],
    sum_list_2(List).

sum_list_2(List) ->
    sum_list_2(List, []).

sum_list_2([Head|Tail], Result) ->
    sum_list_2(Tail, [Result|Head + 1]);
sum_list_2([], Result) ->
    Result.
However, if I change my function to this:
sum_list_2([Head|Tail], Result) ->
    sum_list_2(Tail, [Head + 1|Result]);
sum_list_2([], Result) ->
    Result.
It outputs [6,5,4,3,2], which is OK. Why doesn't the function work the other way around ([Result|Head+1] outputting [2,3,4,5,6])?
PS: I know this particular problem is solved with list comprehensions, but I want to do it with recursion.
For this kind of manipulation you should use a list comprehension:
1> L = [1,2,3,4,5,6].
[1,2,3,4,5,6]
2> [X+1 || X <- L].
[2,3,4,5,6,7]
It is the fastest and most idiomatic way to do it.
A remark on your first version: [Result|Head + 1] builds an improper list. The construction is always [Head|Tail], where Tail is a list. You could use Result ++ [Head+1], but this would copy the Result list on every recursive call.
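You can watch the improper structure build up in the shell, and it is exactly the shape of the output you saw:

1> [[] | 2].
[[]|2]
2> [[[] | 2] | 3].
[[[]|2]|3]
3> [[[[] | 2] | 3] | 4].
[[[[]|2]|3]|4]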
You can also look at the code of lists:map/2, which is not tail recursive, but it seems that the compiler's current optimizations handle this case well:
inc([H|T]) -> [H+1|inc(T)];
inc([]) -> [].
[edit]
The internal, hidden representation of a list is a linked list: each cell contains a term and a reference to the tail. So adding an element at the head does not need to modify the existing list, but adding something at the end requires mutating the last cell (replacing its reference to the empty list with a reference to the new sublist). As variables are not mutable, that means making a modified copy of the last cell, which in turn requires copying the previous cell, and so on. As far as I know, the compiler's optimizations never decide to mutate a list in place (a deduction from the documentation).
The function produces the result in reverse order as a natural consequence of adding each newly incremented element to the front of the Result list. This isn't uncommon, and the recommended "fix" is to simply lists:reverse/1 the output before returning it.
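In code, that fix is just one extra call in the final clause (a sketch based on your second version):

sum_list_2([Head|Tail], Result) ->
    sum_list_2(Tail, [Head + 1 | Result]);
sum_list_2([], Result) ->
    lists:reverse(Result).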
Whilst in this case you could simply use the ++ operator instead of the [H|T] "cons" operator to join your results the other way around, giving you the desired output in the correct order:
sum_list_2([Head|Tail], Result) ->
    sum_list_2(Tail, Result ++ [Head + 1]);
doing so isn't recommended because the ++ operator always copies its (increasingly large) left-hand operand, causing the algorithm to operate in O(n^2) time instead of the O(n) time of the [Head + 1 | Result] version.
... partial application (or partial function application) refers to the process of fixing a number of arguments to a function, producing another function of smaller arity.
I would like to find out if there is a specific name for the following: (pseudo-code!)
// Given functions:
def f(a, b) := ...
def g(a, b) := ...
def h(a, b) := ...
// And a construct of the following:
def cc(F, A, B) := F(A, B) // cc calls its argument F with A and B as parameters
// Then doing Partial Application for cc:
def call_1(F) := cc(F, 42, "answer")
def call_2(F) := cc(F, 7, "lucky")
// And then calling different matching functions this way:
do call_1(f)
do call_1(g)
do call_2(g)
do call_2(h)
Is there a name for this in functional programming? Or is it just partial application where the unbound parameter just happens to be a function?
Actually, there's more to things like your call_N functions, beyond just partial application. Two things of note:
When you apply call_1 or call_2 to an argument, they can be immediately discarded; everything you do with them will be a tail call.
You could write similar functions that don't just apply the argument, but hold onto it for a while; this essentially lets the functions grab hold of their evaluation context, and give techniques for implementing complicated flow control via "jumping back" to previous contexts.
If you take the above two points and run with the concept, you'll eventually end up with continuation-passing style.
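A minimal Haskell sketch of where this leads (the names are mine, not from the question): in continuation-passing style every function takes "what to do next" as an extra function argument, which is exactly the role F plays in cc.

-- cc from the question: apply the supplied function to the fixed arguments
cc :: (a -> b -> r) -> a -> b -> r
cc f a b = f a b

-- partial applications of cc, as in call_1 / call_2
call1, call2 :: (Int -> String -> r) -> r
call1 f = cc f 42 "answer"
call2 f = cc f 7 "lucky"

-- a fully CPS-style addition: the continuation k receives the result
addCPS :: Int -> Int -> (Int -> r) -> r
addCPS x y k = k (x + y)

main :: IO ()
main = do
  putStrLn (call1 (\n s -> show n ++ " " ++ s))  -- prints "42 answer"
  putStrLn (call2 (\n s -> s ++ show n))         -- prints "lucky7"
  addCPS 1 2 print                               -- prints 3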
Is there a possibility of writing functions which are generic with respect to the collection types they support, other than using the Seq module?
The goal is to avoid resorting to copy and paste when adding new collection functions.
Generic programming with collections can be handled the same way generic programming is done in general: Using generics.
let f (map_fun : ('T1 -> 'T2) -> 'T1s -> 'T2s)
      (iter_fun : ('T2 -> unit) -> 'T2s -> unit)
      (ts : 'T1s) (g : 'T1 -> 'T2) (h : 'T2 -> unit) =
    ts
    |> map_fun g
    |> iter_fun h

type A =
    static member F(ts, g, h) = f (Array.map) (Array.iter) ts g h
    static member F(ts, g, h) = f (List.map) (List.iter) ts g h
A bit ugly and verbose, but it's possible. I'm using a class and static members to take advantage of overloading. In your code, you can just use A.F and the correct specialization will be called.
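For example, a small usage sketch (assuming the definitions above; the helper names double and show are mine):

let double (x : int) = x * 2
let show (x : int) = printfn "%d" x

A.F([| 1; 2; 3 |], double, show)   // resolves to the Array.map/Array.iter overload
A.F([ 1; 2; 3 ], double, show)     // resolves to the List.map/List.iter overload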
For a prettier solution, see https://stackoverflow.com/questions/979084/what-features-would-you-add-remove-or-change-in-f/987569#987569 Although this feature is enabled only for the core library, it should not be a problem to modify the compiler to allow it in your code. That's possible because the source code of the compiler is open.
The seq<'T> type is the primary way of writing computations that work for any collection in F#. There are a few ways you can work with the type:
You can use functions from the Seq module (such as Seq.filter, Seq.windowed etc.)
You can use sequence comprehensions (e.g. seq { for x in col -> x * 2 })
You can use the underlying (imperative) IEnumerator<'T> type, which is sometimes needed e.g. if you want to implement your own zipping of collections (this is returned by calling GetEnumerator)
This is a relatively simple type and it can be used only for reading data from collections. As a result, you'll always get a value of type seq<'T>, which is essentially a lazy sequence.
F# doesn't have any mechanism for transforming collections (e.g. a generic function taking a collection C to a collection C with new values) or any mechanism for creating collections generically (which is available in Haskell or Scala).
In most practical cases, I don't find that a problem: most of the work can be done using seq<'T>, and when you need a specialized collection (e.g. an array for performance), you typically need a slightly different implementation anyway.
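As a small illustration of the seq<'T> approach (the function name addOne is mine), a single function accepts any collection, at the cost of always returning a lazy seq<'T>:

let addOne (xs : seq<int>) = xs |> Seq.map (fun x -> x + 1)

addOne [ 1; 2; 3 ]   |> List.ofSeq    // [2; 3; 4]
addOne [| 1; 2; 3 |] |> Array.ofSeq   // [|2; 3; 4|]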