For instance, mask in Haskell is of type (((forall a . IO a -> IO a) -> IO b) -> IO b). What is the purpose of such a function? Any language with a notion of a higher-order function is welcome.
To keep things concrete, please include only functions that are defined in public libraries or in use in live code.
Okasaki exhibits a sixth-order function in his paper "Even Higher-Order Functions for Parsing, or Why Would Anyone Ever Want to Use a Sixth-Order Function?".
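To make "sixth-order" (and the third-order mask from the question) concrete: the order counts how deeply function arguments nest. A quick Haskell sketch with made-up names:
-- First order: takes only plain values.
succ' :: Int -> Int
succ' x = x + 1
-- Second order: takes a first-order function as an argument.
twice :: (Int -> Int) -> Int -> Int
twice f = f . f
-- Third order: takes a second-order function as an argument, just as mask
-- takes a function whose own argument is the (first-order) restore function.
applyToSucc :: ((Int -> Int) -> Int -> Int) -> Int -> Int
applyToSucc g = g succ'
Here applyToSucc twice 3 evaluates to 5; Okasaki's parser combinators push this nesting several levels deeper.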
When I try to write
fun foo :: "nat ⇒ nat"
where "foo = Suc"
Isabelle complains that "Function has no arguments". Why is this? What's wrong with a fun having no arguments? I know that I can change fun to abbreviation or definition and all is fine. But it seems a shame to spoil the uniformity of my .thy file, in which every other definition is declared with fun.
Alex Krauss, the author of the present generation of fun and function in Isabelle/HOL, had particular opinions about that, and probably also good formal reasons to say that a "function" really needs to have arguments. In SML you actually have a similar situation: "constants" without arguments are defined via val, not fun.
In the rare situations where zero-argument functions are needed in Isabelle/HOL, it is sufficiently easy to use definition [simp] "c = t" to get mostly the same result, apart from the names of the key theorems produced internally: c_def versus c.simps.
I think the main inconvenience and occasional pitfall of function in this respect is its exposure of the auxiliary c_def, which is not meant to be used in applications: it unfolds the internal construction behind the function specification, not its main characterizing equation.
Since no other answer seems to be on its way, let me repeat and extend on my previous comment. In Isabelle/HOL there are three ways of defining functions:
definition for non-recursive functions (which could just be seen as constants that serve as abbreviations for longer statements).
primrec for primitive recursive functions (in the sense that in every recursive call there is a fixed argument where a datatype constructor is removed).
fun for general recursive functions.
Both primrec and fun expect at least one argument. For the former, it is automatically checked that one of its arguments corresponds to the syntactic pattern of primitive recursion on datatypes, while for the latter the task of proving "termination" (or rather the well-foundedness of the call graph) is delegated to the user in hard cases.
Anyway, it would of course be possible to let primrec and fun fall back to definition for easy cases without arguments, but at least to me this seems to obfuscate things for the user rather than clear them up.
I'm studying Elixir, and when I use the only or except options while importing functions from a module, I need to specify an arity. Why?
e.g.
import :math, only: [sqrt: 1]
or
import :math, except: [sin: 1, cos: 1]
Across the Erlang ecosystem, functions are identified by name + arity, whereas in most other languages a function is identified by its name alone. In other words, in the Erlang world foo/1 (that is, foo(one_arg)) is a completely different function from foo/2 (as in, foo(one_arg, two_arg)), but in Python or Ruby "foo" is the complete identity of the function, and it can be invoked with a flexible number of arguments.
The convention is to name functions that mean the same thing the same name, especially in the case of recursively defined iterative functions like:
factorial(N) -> factorial(1, N).
factorial(A, 0) -> A;
factorial(A, N) -> factorial(A * N, N - 1).
Notice there are two periods, meaning there are two completely independent definitions here. We could just as well write:
fac(N) -> sum(1, N).
sum(A, 0) -> A;
sum(A, N) -> sum(A * N, N - 1).
But you will notice that the second version's savings in keystrokes are drastically outweighed by the convolution of its semantics -- the second version's internal function name is an outright lie!
The convention is to name related functions the same thing, but in actuality there is no function overloading in the Erlang ecosystem: a single function cannot accept a variable number of arguments, and same-name definitions with different arities are entirely separate functions. Making true overloading work would require significant feature additions to the compiler of any language that compiles to Erlang's bytecode, and that would be a pointless waste of painful effort. The current situation is about as good as it gets in a dynamically typed functional language (without it becoming a statically typed functional language... and that's another discussion entirely).
The end result is that you have to specify exactly what function you want to import, whether in Erlang or Elixir, and that means identifying it by name + arity. Recognizing that the common convention is to use the same name for functions that do the same thing but have different argument counts (often simply writing a cascade of curried definitions to enclose common defaults), Elixir provides a shortcut to including functions by groups instead of enumerating them.
So when you import :math, only: [sqrt: 1] you take only math:sqrt/1 and leave the rest of the module out (were there a math:sqrt/2, you would have ignored it). When you import :math, except: [sin: 1, cos: 1] you take everything but math:sin/1 and math:cos/1 (were there a math:sin/2, you would have taken it). The name + arity is a distinct identity. Imagine a big KV store of available functions: the keys are {module, func, arity}, so each one is an atomic value to the system. If you're familiar with Erlang even a little, this may strike you as familiar, because you deal with the tuple {Module, Function, Args} all the time.
Functions in Erlang and Elixir are uniquely identified by module/name/arity. In order to import/exclude the correct function, you need to specify all three parts. Another way of understanding this is to consider the case of capturing function references, e.g. &Map.get/2.
Even if two functions share the same name, they are actually completely different functions to the VM. In order to reference the correct one, you have to correctly identify the function you wish to call, hence the need to specify all three components with only, except.
I've been programming in more functional-style languages and have gotten to appreciate things like tuples, and higher-order functions such as maps, and folds/aggregates. Do either VHDL or Verilog have any of these kinds of constructs?
For example, is there any way to do even simple things like
divByThreeCount = count (\x -> x `mod` 3 == 0) myArray
or
myArray2 = map (\x -> x `mod` 3) myArray
or even better yet let me define my own higher-level constructs recursively, in either of these languages?
I think you're right that there is a clash between the imperative style of HDLs and the more functional aspects of combinatorial circuits. Describing parallel circuits with languages which are linear in nature is odd, but I think a full-blown functional HDL would be a disaster.
The thing with functional languages (I find) is that it's very easy to write code which takes huge resources, whether in time, memory, or processing power. That's their power: they can express complexity quite succinctly, but they do so at the expense of resource management. Put that in an HDL and I think you'll have a lot of large, power-hungry designs when synthesised.
Take your map example:
myArray2 = map (\x -> x `mod` 3) myArray
How would you like that to synthesize? To me you've described a modulo operator per element of the array. Ignoring the fact that modulo isn't cheap, was that what you intended, and how would you change it if it wasn't? If I start breaking that function up in some way so that I can say "instantiate a single operator and use it multiple times", I lose a lot of the power of functional languages.
...and then we've got retained state. Retained state is everywhere in hardware. You need it. You certainly wouldn't use a purely functional language.
That said, don't throw away your functional design patterns. Combinatorial processes (VHDL) and "always blocks" (Verilog) can be viewed as functions that apply themselves to the data presented at their input. Pipelines can be viewed as chains of functions. Often the way you structure a design looks functional, and can share a lot with the "Actor" design pattern that's popular in Erlang.
So is there stuff to learn from functional programming? Certainly. Do I wish VHDL and Verilog took more from functional languages? Sometimes. The trouble is functional languages get too high level too quickly. If I can't distinguish between "use one instance of f() many times" and "use many instances of f()" then it doesn't do what a Hardware Description Language must do... describe hardware.
Have a look at http://clash-lang.org for an example of a higher-order language that is transpiled into VHDL/Verilog. It allows functions to be passed as arguments, currying, etc., and even a limited set of recursive data structures (Vec). As you would expect, the pure moore function instantiates a stateful Moore machine given step/output functions and a start state. It has this signature:
moore :: (s -> i -> s) -> (s -> o) -> s -> Signal i -> Signal o
Signals model the time-evolved values in sequential logic.
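For instance, a running accumulator can be written directly against that signature (a sketch against the signature as quoted; accumulate is a made-up name, and in recent Clash versions the types also carry a clock/reset/enable context, elided here):
-- Moore machine whose state is the running sum of its inputs:
-- step function (+), output function id, initial state 0.
accumulate :: Signal Int -> Signal Int
accumulate = moore (+) id 0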
Bluespec is used by a small company known by few people only: Intel.
It's an extension of SystemVerilog (SV) called BSV.
But it's NOT open -- I don't think you can try, use, or even learn it without paying Bluespec.com big money.
There's also Lava, which is used by Xilinx, but I don't know whether you can use it directly.
Note:
In VHDL (and maybe Verilog too), functions can't contain time delays (you can't make a function wait for a clock event).
VHDL has a standard library for complex-number calculations.
But in VHDL you can change or overload a built-in operator such as add ("+") and define your own implementation.
You can declare vector or matrix add/sub/mul/div (+ - * /) functions, use generic (any) sizes and recursive declarations -- most synthesizers will "understand you" and do what you've asked.
Why is the Haskell implementation so focused on linked lists?
For example, I know Data.Sequence is more efficient
with most of the list operations (except for the cons operation), and is used a lot;
syntactically, though, it is "hardly supported". Haskell has put a lot of effort into functional abstractions, such as the Functor and the Foldable class, but their syntax is not compatible with that of the default list.
If, in a project I want to optimize and replace my lists with sequences - or if I suddenly want support for infinite collections, and replace my sequences with lists - the resulting code changes are abhorrent.
So I guess my question can be made concrete in sub-questions such as:
Why isn't the type of map equal to (Functor f) => (a -> b) -> f a -> f b?
Why can't the [] and (:) functions be used for, for example, the type in Data.Sequence?
I am really hoping there is some explanation for this, that doesn't include the words "backwards compatibility" or "it just grew that way", though if you think there isn't, please let me know. Any relevant language extensions are welcome as well.
Before getting into why, here's a summary of the problem and what you can do about it. The constructors [] and (:) are reserved for lists and cannot be redefined. If you plan to use the same code with multiple data types, then define or choose a type class representing the interface you want to support, and use methods from that class.
Here are some generalized functions that work on both lists and sequences (a short sketch using them follows this list). I don't know of a generalization of (:), but you could write your own.
fmap instead of map
mempty instead of []
mappend instead of (++)
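A function written against only Functor and Monoid compiles unchanged for both types (a sketch; withDoubles is a made-up name):
import Data.Sequence (Seq)
import qualified Data.Sequence as Seq
-- Append a doubled copy of the container to itself, using only
-- fmap and mappend, so it works for [Int] and Seq Int alike.
withDoubles :: (Functor f, Monoid (f Int)) => f Int -> f Int
withDoubles xs = xs `mappend` fmap (* 2) xs
asList :: [Int]
asList = withDoubles [1, 2, 3]                 -- [1,2,3,2,4,6]
asSeq :: Seq Int
asSeq = withDoubles (Seq.fromList [1, 2, 3])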
If you plan to do a one-off data type replacement, then you can define your own names for things, and redefine them later.
-- For now, use lists
type List a = [a]
nil = []
cons x xs = x : xs
{- Switch to Seq in the future
-- type List a = Seq a
-- nil = empty
-- cons x xs = x <| xs
-}
Note that [] and (:) are constructors: you can also use them for pattern matching. Pattern matching is specific to one type constructor, so you can't extend a pattern to work on a new data type without rewriting the pattern-matching code.
Why there's so much list-specific stuff in Haskell
Lists are commonly used to represent sequential computations, rather than data. In an imperative language, you might build a Set with a loop that creates elements and inserts them into the set one by one. In Haskell, you do the same thing by creating a list and then passing the list to Set.fromList. Since lists so closely match this abstraction of computation, they have a place that's unlikely to ever be superseded by another data structure.
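For instance (a sketch; evenSquaresUpTo is a made-up name):
import Data.Set (Set)
import qualified Data.Set as Set
-- The "loop" is just a list pipeline; the Set is built in one step at the end.
evenSquaresUpTo :: Int -> Set Int
evenSquaresUpTo n = Set.fromList [ x * x | x <- [1 .. n], even x ]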
The fact remains that some functions are list-specific when they could have been generic. Some common functions like map were made list-specific so that new users would have less to learn. In particular, they provide simpler and (it was decided) more understandable error messages. Since it's possible to use generic functions instead, the problem is really just a syntactic inconvenience. It's worth noting that Haskell language implementations have very little list-specific code, so new data structures and methods can be just as efficient as the "built-in" ones.
There are several classes that are useful generalizations of lists:
Functor supplies fmap, a generalization of map.
Monoid supplies methods useful for collections with list-like structure. The empty list [] is generalized to other containers by mempty, and list concatenation (++) is generalized to other containers by mappend.
Applicative and Monad supply methods that are useful for interpreting collections as computations.
Traversable and Foldable supply useful methods for running computations over collections.
Of these, only Functor and Monad were in the influential Haskell 98 spec, so the others have been overlooked to varying degrees by library writers, depending on when the library was written and how actively it was maintained. The core libraries have been good about supporting new interfaces.
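As a quick illustration of what these interfaces buy you, a single Foldable-based definition covers several containers at once (a sketch; total and totals are made-up names):
import Data.Foldable (Foldable, foldMap)
import Data.Monoid (Sum (..))
import qualified Data.Sequence as Seq
import qualified Data.Set as Set
-- One definition, usable with lists, Seq, Set, and so on.
total :: (Foldable t, Num a) => t a -> a
total = getSum . foldMap Sum
totals :: (Int, Int, Int)
totals =
  ( total [1, 2, 3]
  , total (Seq.fromList [1, 2, 3])
  , total (Set.fromList [1, 2, 3]) )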
I remember reading somewhere that map is for lists by default since newcomers to Haskell would be put off if they made a mistake and saw a complex error about "Functors", which they have no idea about. Therefore, they have both map and fmap instead of just map.
EDIT: That "somewhere" is the Monad Reader Issue 13, page 20, footnote 3:
You might ask why we need a separate map function. Why not just do away with the current list-only map function, and rename fmap to map instead? Well, that's a good question. The usual argument is that someone just learning Haskell, when using map incorrectly, would much rather see an error about lists than about Functors.
For (:), the (<|) function seems to be a replacement. I have no idea about [].
A nitpick: Data.Sequence isn't more efficient for "list operations", it is more efficient for sequence operations. That said, a lot of the functions in Data.List are really sequence operations. The finger tree inside Data.Sequence has to do quite a bit more work for a cons (<|) than the equivalent list (:), and its memory representation is also somewhat larger than a list's, as it is made from two data types, a FingerTree and a Deep.
The extra syntax for lists is fine; it hits the sweet spot of what lists are good at -- cons (:) and pattern-matching from the left. Whether or not sequences should have extra syntax is open to further debate, but as you can get a very long way with lists, and lists are inherently simple, having good syntax for them is a must.
List isn't an ideal representation for Strings -- the memory layout is inefficient, as each Char is wrapped with a constructor. This is why ByteStrings were introduced. Although they are laid out as an array, ByteStrings have to do a bit of administrative work -- [Char] can still be competitive if you are using short strings. In GHC there are language extensions to give ByteStrings more String-like syntax.
The other major lazy functional language, Clean, has always represented strings as byte arrays, but its type system made this more practical -- I believe the ByteString library uses unsafePerformIO under the hood.
As of version 7.8, GHC supports overloading list literals; see the manual. For example, given appropriate IsList instances, you can write
['0' .. '9'] :: Set Char
[1 .. 10] :: Vector Int
[("default",0), (k1,v1)] :: Map String Int
['a' .. 'z'] :: Text
(quoted from the documentation).
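In practice this just needs the extension enabled and a type with an IsList instance in scope; a minimal sketch (digits is a made-up name):
{-# LANGUAGE OverloadedLists #-}
import Data.Set (Set)
-- With OverloadedLists, the literal is desugared via GHC.Exts.fromList,
-- so it can build a Set directly.
digits :: Set Char
digits = ['0' .. '9']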
I am pretty sure this won't be an answer to your question, but still.
I wish Haskell had more liberal function names (mixfix!) à la Agda. Then the syntax for list constructors (:, []) wouldn't have been magic, allowing us to at least hide the list type and use the same tokens for our own types.
The amount of code change when migrating between list and custom sequence types would then be minimal.
About map, you are a bit luckier. You can always hide map, and set it equal to fmap yourself.
import Prelude hiding(map)
map :: (Functor f) => (a -> b) -> f a -> f b
map = fmap
Prelude is great, but it isn't the best part of Haskell.
Is there a way to print polymorphic values in Standard ML (SML/NJ specifically)? I have a polymorphic function that is not doing what I want, and due to the abysmal state that is debugging in SML (see Any real world experience debugging a production functional program?), I would like to see what it is doing with some good-ol' prints. A simple example would be (at a prompt):
fun justThisOnce(x : 'a) : 'a = (print(x); x);
justThisOnce(42);
Other suggestions are appreciated. In the meantime I'll keep staring the offending code into submission.
Update
I was able to find the bug but the question still stands in the hopes of preventing future pain and suffering.
No, there is no way to print a polymorphic value.
You have two choices:
Specialize your function to integers or strings, which are readily printed. Then when the bug is slain, make it polymorphic again.
If the bug manifests only with some other instantiation, pass show as an additional argument to your function. So for example, if your polymorphic function has type
'a list -> 'a list
you extend the type to
('a -> string) -> 'a list -> 'a list
You use show internally to print, and then by partially applying the function to a suitable show, you can get a version you can use in the original context.
It's very tedious but it does help. (But be warned: it may drive you to try Haskell.)
Only in Moscow ML (MOSML): purely for debugging purposes, you can use the printVal function. Note that this function is only available at the interactive toplevel; it will cause an error when you try to compile your program.
Edit: In that case, I'm afraid there is no general solution: you need to translate your values to strings explicitly and print those. See the other answer for good suggestions.