What is define-struct in Racket and why are there no variables? - functional-programming

In one of my CS courses at university we have to work with Racket. Before university I spent most of my programming time with PHP, Java, and JavaScript. I know Racket is a functional programming language, just like JavaScript (Edit: Of course it isn't. But I felt like I was doing 'functional' programming with it, which, after seeing the answers, is a wrong perception.) But I still don't understand some fundamental characteristics of Racket (Scheme).
Why are there no 'real' variables? Why is everything a function in Racket/Scheme? Why did the language designers not include them?
What is define-struct? Is it a function? Is it a class? I somehow, because of my PHP background, always think it's a class, but that can't be really correct.
What I really want is to understand the concepts behind the language. I personally still find it really strange, unlike anything I have worked with before, so my brain keeps trying to compare it with JavaScript, but it just seems so different to me. Parallels/differences to JavaScript would help a lot!

There are 'real' variables in Racket. For example, if you write this
code
(define x 3)
the 'global' variable x will be set to value 3. If you now write
(set! x 4)
the variable x will change its value to 4. So, in Racket you can
have 'normal' variables like in any 'normal' language, if you
want. The fact is that in Racket the preferred programming style is
functional, as opposed to procedural. In the functional programming
style, variable mutation is discouraged.
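For example, compare the two styles (a minimal sketch; total is just an illustrative name):
;; imperative style: accumulate by mutating a variable
(define total 0)
(for-each (lambda (n) (set! total (+ total n))) '(1 2 3))
total ; => 6
;; functional style: no mutation; foldl threads the accumulator for you
(foldl + 0 '(1 2 3)) ; => 6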
define-struct is a Racket macro that you use to define a 'structure
template' along with several other things. For example, if you
write:
(define-struct coord (x y))
you have just defined a 'structure template' (i.e. a user type named
coord that has two "slots": x and y). After that, you can:
create new "instance" of structure coord, for example like this:
(make-coord 2 3)
extract a slot value from the structure object:
(coord-x (make-coord 2 3)) ;will return 2
or
(coord-y (make-coord 2 3)) ;will return 3
ask whether some given object is an instance of that structure. For
example, (coord? 3) will return #f, since 3 is not of type coord
structure, but
(coord? (make-coord 2 3)) ;will return #t
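Putting the pieces together (a minimal sketch; the #:mutable option shown here is plain-Racket behavior and may not be available in the HtDP teaching languages):
(define-struct point (x y) #:mutable)
(define p (make-point 1 2))
(point? p)          ; => #t
(point-x p)         ; => 1
(set-point-x! p 10) ; the setters exist only because of #:mutable
(point-x p)         ; => 10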

Perhaps the most popular or in-fashion way to program (using languages like C++, Javascript, and Java) has a few characteristics. You may take them for granted as self-evident, the only possible way. They include:
Imperative. You focus on saying "do this step, then this next step", and so on.
Using mutation. You declare a variable, and keep assigning it different values ("mutate it").
Object-oriented. You bundle code and data into classes, and declare instances of them as objects. Then you mutate the objects.
Learning Scheme or Racket will help you understand that these aren't the only way to go about it.
It might make your brain hurt at first, in the same way that a philosophy class might cause you to question things you took for granted. Unlike the philosophy class, however, there will be some practical pay-off to making your brain hurt. :)
An alternative:
Functional (instead of imperative). Focus on expressions that return values, instead of making to-do lists of steps.
Immutable (instead of mutable). Ditto: you work with values that don't change, rather than mutating state.
Not object-oriented. Using objects of classes can be a good approach to some problems, but not all. If you want to bundle code with data, there are some more general ways to go about it, such as closures with "let over lambda" and so on (see the sketch below). Sometimes you don't need all the "baggage" of classes, and especially of inheritance.
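Here is a minimal sketch of "let over lambda" in Racket (make-counter is a made-up name): the closure bundles the count state together with the code that updates it, no class required.
(define (make-counter)
  (let ([count 0])          ; private state, captured by the lambda below
    (lambda ()
      (set! count (+ count 1))
      count)))
(define tick (make-counter))
(tick) ; => 1
(tick) ; => 2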
Scheme and Racket make it easy to explore these ideas. But they are not "pure functional" like say Haskell, so if you really want to do imperative, mutable, object-oriented things you can do that, too. However there's not much point in learning Racket to do things the same way you would in Javascript.

Scheme very much has "real" variables.
The difference between a functional language (like Racket) and an imperative language (like JavaScript or PHP) is that in a functional language, you usually don't use mutable state. Variables are better thought of as names for values than as containers that can hold values. Instead of using things like looping constructs to change values in variables, you instead use recursion for flow control.
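For example, here is list length written with recursion instead of a loop over a mutable counter (my-length is just an illustrative name):
(define (my-length lst)
  (if (null? lst)
      0                             ; empty list: length is 0
      (+ 1 (my-length (cdr lst))))) ; otherwise: 1 + length of the rest
(my-length '(a b c)) ; => 3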
define-struct is a special syntactic form, kind of like keywords in other languages. (Unlike other languages, in Scheme you can create your own syntactic forms.) It defines a struct type, which is like a class but doesn't have methods. It also defines a number of functions that help you utilize your new struct type.

There are variables in Scheme.
> (define a 1)
#<unspecified>
> a
1
> (set! a 2)
#<unspecified>
> a
2
There are even mutable data structures in this language.
> (begin
> (define v (make-vector 4))
> (vector-set! v 0 'foo)
> (vector-set! v 1 'bar)
> (vector-set! v 2 'baz)
> (vector-set! v 3 'quux))
#<unspecified>
> v
#(foo bar baz quux)
Scheme is not a pure FP language; it does allow imperative programming, although it is mostly geared towards functional programming. That's a design choice that Scheme's inventors made.
define-struct is a special form; it's syntax, like the function or return keywords in JavaScript.

Related

Higher-order functions in VHDL or Verilog

I've been programming in more functional-style languages and have gotten to appreciate things like tuples, and higher-order functions such as maps, and folds/aggregates. Do either VHDL or Verilog have any of these kinds of constructs?
For example, is there any way to do even simple things like
divByThreeCount = count (\x -> x `mod` 3 == 0) myArray
or
myArray2 = map (\x -> x `mod` 3) myArray
or even better yet let me define my own higher-level constructs recursively, in either of these languages?
I think you're right that there is a clash between the imperative style of HDLs and the more functional aspects of combinatorial circuits. Describing parallel circuits with languages which are linear in nature is odd, but I think a full blown functional HDL would be a disaster.
The thing with functional languages (I find) is that it's very easy to write code which takes huge resources, either in time, memory, or processing power. That's their power; they can express complexity quite succinctly, but they do so at the expense of resource management. Put that in an HDL and I think you'll have a lot of large, power hungry designs when synthesised.
Take your map example:
myArray2 = map (\x -> x `mod` 3) myArray
How would you like that to synthesize? To me you've described a modulo operator per element of the array. Ignoring the fact that modulo isn't cheap, is that what you intended, and how would you change it if it wasn't? If I start breaking that function up in some way so that I can say "instantiate a single operator and use it multiple times", I've lost a lot of the power of functional languages.
...and then we've got retained state. Retained state is everywhere in hardware. You need it. You certainly wouldn't use a purely functional language.
That said, don't throw away your functional design patterns. Combinatorial processes (VHDL) and "always blocks" (Verilog) can be viewed as functions that apply themselves to the data presented at their input. Pipelines can be viewed as chains of functions. Often the way you structure a design looks functional, and can share a lot with the "Actor" design pattern that's popular in Erlang.
So is there stuff to learn from functional programming? Certainly. Do I wish VHDL and Verilog took more from functional languages? Sometimes. The trouble is functional languages get too high level too quickly. If I can't distinguish between "use one instance of f() many times" and "use many instances of f()" then it doesn't do what a Hardware Description Language must do... describe hardware.
Have a look at http://clash-lang.org for an example of a higher-order language that is transpiled into VHDL/Verilog. It allows functions to be passed as arguments, currying, etc., and even a limited set of recursive data structures (Vec). As you would expect, the pure moore function instantiates a stateful Moore machine given step/output functions and a start state. It has this signature:
moore :: (s -> i -> s) -> (s -> o) -> s -> Signal i -> Signal o
Signals model the time-evolved values in sequential logic.
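As a rough sketch against that simplified signature (the real Clash type adds clock/reset constraints, so treat this as illustrative only), a running-sum accumulator is a Moore machine whose output is its state:
acc :: Signal Int -> Signal Int
acc = moore (+) id 0  -- step = (+), output = id, start state = 0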
BlueSpec is used by a small company known by few people only - Intel.
It's an extension of SystemVerilog (SV) called BSV.
But it's NOT open - I don't think you can try, use, or even learn it without paying BlueSpec.com BIG money.
There's also Lava, which is used by Xilinx, but I don't know if you can use it directly.
Note:
In VHDL, functions (and maybe in Verilog too) CAN'T have time delay (you can't ask a function to wait for a clk event).
VHDL has a standard library for complex number calculations.
But in VHDL you can change or overload a system function like add ("+") and define your own implementation.
You can declare vector or matrix add/sub/mul/div (+ - * /) functions,
use generic sizes (any) and recursive declarations - most synthesizers will "understand you" and do what you've asked.

Haskell "collections" language design

Why is the Haskell implementation so focused on linked lists?
For example, I know Data.Sequence is more efficient
with most of the list operations (except for the cons operation), and is used a lot;
syntactically, though, it is "hardly supported". Haskell has put a lot of effort into functional abstractions, such as the Functor and the Foldable class, but their syntax is not compatible with that of the default list.
If, in a project I want to optimize and replace my lists with sequences - or if I suddenly want support for infinite collections, and replace my sequences with lists - the resulting code changes are abhorrent.
So I guess my wondering can be made concrete in questions such as:
Why isn't the type of map equal to (Functor f) => (a -> b) -> f a -> f b?
Why can't the [] and (:) functions be used for, for example, the type in Data.Sequence?
I am really hoping there is some explanation for this, that doesn't include the words "backwards compatibility" or "it just grew that way", though if you think there isn't, please let me know. Any relevant language extensions are welcome as well.
Before getting into why, here's a summary of the problem and what you can do about it. The constructors [] and (:) are reserved for lists and cannot be redefined. If you plan to use the same code with multiple data types, then define or choose a type class representing the interface you want to support, and use methods from that class.
Here are some generalized functions that work on both lists and sequences (a sketch follows the list). I don't know of a generalization of (:), but you could write your own.
fmap instead of map
mempty instead of []
mappend instead of (++)
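As a minimal sketch (double and joined are illustrative names), code written only against Functor and Monoid runs unchanged on lists and on Data.Sequence:
-- sketch: generic code over Functor/Monoid works for [] and Seq alike
import Data.Sequence (fromList)

double :: (Functor f) => f Int -> f Int
double = fmap (* 2)

joined :: (Monoid m) => m -> m -> m
joined = mappend

main :: IO ()
main = do
  print (double [1, 2, 3])                      -- [2,4,6]
  print (double (fromList [1, 2, 3]))           -- fromList [2,4,6]
  print (joined [1 :: Int] [2])                 -- [1,2]
  print (joined (fromList "ab") (fromList "c")) -- fromList "abc"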
If you plan to do a one-off data type replacement, then you can define your own names for things, and redefine them later.
-- For now, use lists
type List a = [a]
nil = []
cons x xs = x : xs
{- Switch to Seq in the future
-- type List a = Seq a
-- nil = empty
-- cons x xs = x <| xs
-}
Note that [] and (:) are constructors: you can also use them for pattern matching. Pattern matching is specific to one type constructor, so you can't extend a pattern to work on a new data type without rewriting the pattern-matching code.
Why there's so much list-specific stuff in Haskell
Lists are commonly used to represent sequential computations, rather than data. In an imperative language, you might build a Set with a loop that creates elements and inserts them into the set one by one. In Haskell, you do the same thing by creating a list and then passing the list to Set.fromList. Since lists so closely match this abstraction of computation, they have a place that's unlikely to ever be superseded by another data structure.
The fact remains that some functions are list-specific when they could have been generic. Some common functions like map were made list-specific so that new users would have less to learn. In particular, they provide simpler and (it was decided) more understandable error messages. Since it's possible to use generic functions instead, the problem is really just a syntactic inconvenience. It's worth noting that Haskell language implementations have very little list-specific code, so new data structures and methods can be just as efficient as the "built-in" ones.
There are several classes that are useful generalizations of lists:
Functor supplies fmap, a generalization of map.
Monoid supplies methods useful for collections with list-like structure. The empty list [] is generalized to other containers by mempty, and list concatenation (++) is generalized to other containers by mappend.
Applicative and Monad supply methods that are useful for interpreting collections as computations.
Traversable and Foldable supply useful methods for running computations over collections.
Of these, only Functor and Monad were in the influential Haskell 98 spec, so the others have been overlooked to varying degrees by library writers, depending on when the library was written and how actively it was maintained. The core libraries have been good about supporting new interfaces.
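A short sketch of the Foldable side of this (relying on the Foldable-generalized foldr in GHC's Prelude): the same summing code consumes either container.
import Data.Foldable (toList)
import Data.Sequence (fromList)

total :: (Foldable t, Num a) => t a -> a
total = foldr (+) 0  -- foldr here is the generic Foldable version

main :: IO ()
main = do
  print (total [1, 2, 3])            -- 6
  print (total (fromList [1, 2, 3])) -- 6
  print (toList (fromList "abc"))    -- "abc"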
I remember reading somewhere that map is for lists by default since newcomers to Haskell would be put off if they made a mistake and saw a complex error about "Functors", which they have no idea about. Therefore, they have both map and fmap instead of just map.
EDIT: That "somewhere" is the Monad Reader Issue 13, page 20, footnote 3:
You might ask why we need a separate map function. Why not just do away with the current list-only map function, and rename fmap to map instead? Well, that's a good question. The usual argument is that someone just learning Haskell, when using map incorrectly, would much rather see an error about lists than about Functors.
For (:), the (<|) function seems to be a replacement. I have no idea about [].
A nitpick: Data.Sequence isn't more efficient for "list operations", it is more efficient for sequence operations. That said, a lot of the functions in Data.List are really sequence operations. The finger tree inside Data.Sequence has to do quite a bit more work for a cons (<|) than the list equivalent (:), and its memory representation is also somewhat larger than a list, as it is made from two data types, a FingerTree and a Deep.
The extra syntax for lists is fine; it hits the sweet spot at what lists are good at - cons (:) and pattern-matching from the left. Whether or not sequences should have extra syntax is a further debate, but as you can get a very long way with lists, and lists are inherently simple, having good syntax for them is a must.
List isn't an ideal representation for Strings - the memory layout is inefficient, as each Char is wrapped in a constructor. This is why ByteStrings were introduced. Although they are laid out as an array, ByteStrings have to do a bit of administrative work - [Char] can still be competitive if you are using short strings. In GHC there are language extensions to give ByteStrings more String-like syntax.
The other major lazy functional language, Clean, has always represented strings as byte arrays, but its type system made this more practical - I believe the ByteString library uses unsafePerformIO under the hood.
With version 7.8, GHC supports overloading list literals; see the manual. For example, given appropriate IsList instances, you can write
['0' .. '9'] :: Set Char
[1 .. 10] :: Vector Int
[("default",0), (k1,v1)] :: Map String Int
['a' .. 'z'] :: Text
(quoted from the documentation).
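A small sketch of what enabling this looks like (assuming the IsList instance that containers provides for Set):
{-# LANGUAGE OverloadedLists #-}
import Data.Set (Set)

digits :: Set Char
digits = ['0' .. '9']  -- desugars through the IsList class, not a plain list

main :: IO ()
main = print digits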
I am pretty sure this won't be an answer to your question, but still.
I wish Haskell had more liberal function names (mixfix!) à la Agda. Then the syntax for the list constructors (:, []) wouldn't have been magic, allowing us to at least hide the list type and use the same tokens for our own types.
The amount of code change while migrating between list and custom sequence types would be minimal then.
About map, you are a bit luckier. You can always hide map, and set it equal to fmap yourself.
import Prelude hiding(map)
map :: (Functor f) => (a -> b) -> f a -> f b
map = fmap
Prelude is great, but it isn't the best part of Haskell.

Learning Scheme Macros. Help me write a define-syntax-rule

I am new to Scheme Macros. If I just have one pattern and I want to combine the define-syntax and syntax-rules, how do I do that?
(define-syntax for
(syntax-rules (from to)
[(for i from x to y step body) ...]
[(for i from x to y body) ...]))
If I just have one for, how do I combine the syntax definition and the rule?
Thanks.
In other words, you decided that for really only needs one pattern and want to write something like:
(defmacro (for ,i from ,x to ,y step ,body)
; code goes here
)
There is nothing built-in to Scheme that makes single-pattern macros faster to write. The traditional solution is (surprise!) to write another macro.
I have used defsubst from Swindle, and PLT Scheme now ships with define-syntax-rule, which does the same thing. If you are learning macros, then writing your own define-syntax-rule equivalent would be a good exercise (a sketch is below), particularly if you want some way to indicate keywords like "for" and "from". Neither defsubst nor define-syntax-rule handles those.
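For reference, here is a minimal sketch of such a macro-writing macro (named my-define-syntax-rule to avoid clashing with the built-in), using the dotted-pattern trick to avoid ellipsis escaping; as noted above, it has no keyword support:
(define-syntax my-define-syntax-rule
  (syntax-rules ()
    [(_ (name . pattern) template)
     (define-syntax name
       (syntax-rules ()
         [(_ . pattern) template]))]))

;; Usage: a single-pattern `for`; `from` and `to` are matched as
;; ordinary pattern variables, not as keywords.
(my-define-syntax-rule (for i from x to y body)
  (let loop ([i x])
    (when (<= i y)
      body
      (loop (+ i 1)))))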

Is there a strongly typed programming language which allows you to define new operators?

I am currently looking for a programming language in which to write a math class. I know that there are lots and lots of them everywhere around, but since I'm going to start studying math next semester, I thought this might be a good way to get a deeper insight into what I've learned.
Thanks for your replies.
BTW: If you are wondering what I wanted to ask:
"Is there a strongly typed programming language which allows you to define new operators?"
Like EFraim said, Haskell makes this pretty easy:
% ghci
ghci> let a *-* b = (a*a) - (b*b)
ghci> :type (*-*)
(*-*) :: (Num a) => a -> a -> a
ghci> 4 *-* 3
7
ghci> 1.2 *-* 0.9
0.6299999999999999
ghci> (*-*) 5 3
16
ghci> :{
let gcd a b | a > b     = gcd (a - b) b
            | b > a     = gcd a (b - a)
            | otherwise = a
:}
ghci> :type gcd
gcd :: (Ord a, Num a) => a -> a -> a
ghci> gcd 3 6
3
ghci> gcd 12 11
1
ghci> 18 `gcd` 12
6
You can define new infix operators (symbols only) using an infix syntax. You can then use
them as infix operators, or enclose them in parens to use them as a normal function.
You can also use normal functions (letters, numbers, underscores and single-quotes) as operators
by enclosing them in backticks.
Well, you can redefine a fixed set of operators in many languages, like C++ or C#. Others, like F# or Scala allow you to define even new operators (even as infix ones) which might be even nicer for math-y stuff.
Maybe Haskell? Allows you to define arbitrary infix operators.
Ted Neward wrote a series of articles on Scala aimed at Java developers, and he finished it off by demonstrating how to write a mathematical domain language in Scala (which, incidentally, is a statically-typed language):
Part 1
Part 2
Part 3
In C++ you can define operators that work on classes, but not on primitive types like ints alone, since at least one operand of an overloaded operator must be a user-defined type. You could make your own number class in C++ and redefine ALL the operators, including +, *, etc.
To get new operators on primitive types you have to turn to functional programming (it seems from the other answers). This is fine, just keep in mind that functional programming is very different from OOP. But it will be a great new challenge, and functional programming is great for math, as it comes from the lambda calculus. Learning functional programming will teach you different skills and help you greatly with math and programming in general. :D
good luck!
Eiffel allows you to define new operators.
http://dev.eiffel.com
Inasmuch as the procedure you apply to the arguments in a Lisp combination is called an “operator,” then yeah, you can define new operators till the cows come home.
Ada has support for overriding infix operators: here is the reference manual chapter.
Unfortunately you can't create your own new operators, it seems you can only override the existing ones.
type wobble is new integer range 23..89;
function "+" (A, B: wobble) return wobble is
begin
...
end "+";
Ada is not a hugely popular language, it has to be said, but as far as strong typing goes, you can't get much stronger.
EDIT:
Another language which hasn't been mentioned yet is D. It also is a strongly typed language, and supports operator overloading. Again, it doesn't support user-defined infix operators.
From http://www.digitalmars.com/d/1.0/rationale.html
Why not allow user definable operators?
These can be very useful for attaching new infix operations to various unicode symbols. The trouble is that in D, the tokens are supposed to be completely independent of the semantic analysis. User definable operators would break that.
Both OCaml and F# have infix operators. They have a special set of characters that are allowed within operator names, but both can also be made to use ordinary functions infix (see the OCaml discussion).
I think you should probably think deeply about why you want to use this feature. It seems to me that there are much more important considerations when choosing a language.
I can only think of one possible meaning for the word "operator" in this context, which is just syntactic sugar for a function call, e.g. foo + bar would be translated as a call to a function +(a, b).
This is sometimes useful, but not often. I can think of very few instances where I have overloaded/defined an operator.
As noted in the other answers, Haskell does allow you to define new infix operators. However, a purely functional language with lazy evaluation can be a bit of a mouthful. I would probably recommend SML over Haskell, if you feel like trying on a functional language for the first time. The type system is a bit simpler, you can use side-effects and it is not lazy.
F# is also very interesting and also features units of measure, which AFAIK is unique to that language. If you have a need for the feature it can be invaluable.
Off the top of my head I can't think of any statically typed imperative languages with user-defined infix operators, but you might want to use a functional language for math programming anyway, since it is much easier to prove facts about a functional program.
You might also want to create a small DSL if syntax issues like infix operators are so important to you. Then you can write the program in whatever language you want and still specify the math in a convenient way.
What do you mean by strong typing? Do you mean static typing (where everything has a type that is known at compile time, and conversions are restricted) or strong typing (everything has a type known at run time, and conversions are restricted)?
I'd go with Common Lisp. It doesn't actually have operators (for example, adding a and b is (+ a b)), but rather functions, which can be defined freely. It has strong typing in that every object has a definite type, even if it can't be known at compile time, and conversions are restricted. It's a truly great language for exploratory programming, and it sounds like that's what you'll be doing.
Ruby does.
require 'rubygems'
require 'superators'
class Array
  superator "<---" do |operand|
    self << operand.reverse
  end
end
["jay"] <--- "spillihp"
You can actually do what you need with C# through operator overloading.
Example:
public static Complex operator -(Complex c)
{
    Complex temp = new Complex();
    temp.x = -c.x;
    temp.y = -c.y;
    return temp;
}

Functional Programming for Basic Algorithms

How good is 'pure' functional programming for basic routine implementations, e.g. list sorting, string matching etc.?
It's common to implement such basic functions within the base interpreter of any functional language, which means that they will be written in an imperative language (c/c++). Although there are many exceptions..
At least, I wish to ask: How difficult is it to emulate imperative style while coding in 'pure' functional language?
How good is 'pure' functional programming for basic routine implementations, e.g. list sorting, string matching etc.?
Very. I'll do your problems in Haskell, and I'll be slightly verbose about it. My aim is not to convince you that the problem can be done in 5 characters (it probably can in J!), but rather to give you an idea of the constructs.
import Data.List -- for `sort`
stdlistsorter :: (Ord a) => [a] -> [a]
stdlistsorter list = sort list
Sorting a list using the sort function from Data.List
import Data.List -- for `delete`
selectionsort :: (Ord a) => [a] -> [a]
selectionsort [] = []
selectionsort list = minimum list : (selectionsort . delete (minimum list) $ list)
Selection sort implementation.
quicksort :: (Ord a) => [a] -> [a]
quicksort [] = []
quicksort (x:xs) =
let smallerSorted = quicksort [a | a <- xs, a <= x]
biggerSorted = quicksort [a | a <- xs, a > x]
in smallerSorted ++ [x] ++ biggerSorted
Quick sort implementation.
import Data.List -- for `isInfixOf`
stdstringmatch :: (Eq a) => [a] -> [a] -> Bool
stdstringmatch list1 list2 = list1 `isInfixOf` list2
String matching using isInfixOf function from Data.list
It's common to implement such basic functions within the base interpreter of any functional language, which means that they will be written in an imperative language (c/c++). Although there are many exceptions..
Depends. Some functions are more naturally expressed imperatively. However, I hope I have convinced you that some algorithms are also expressed naturally in a functional way.
At least, I wish to ask: How difficult is it to emulate imperative style while coding in 'pure' functional language?
It depends on how hard you find Monads in Haskell. Personally, I find them quite difficult to grasp.
1) Good by what standard? What properties do you desire?
List sorting? Easy. Let's do Quicksort in Haskell:
sort [] = []
sort (x:xs) = sort (filter (< x) xs) ++ [x] ++ sort (filter (>= x) xs)
This code has the advantage of being extremely easy to understand. If the list is empty, it's sorted. Otherwise, call the first element x, find elements less than x and sort them, find elements greater than x and sort those. Then concatenate the sorted lists with x in the middle. Try making that look comprehensible in C++.
Of course, Mergesort is much faster for sorting linked lists, but the code is also 6 times longer.
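For comparison, a rough sketch of mergesort on lists (one of many possible formulations):
mergesort :: (Ord a) => [a] -> [a]
mergesort []  = []
mergesort [x] = [x]
mergesort xs  = merge (mergesort front) (mergesort back)
  where
    -- split in half, sort each half, then merge the sorted halves
    (front, back) = splitAt (length xs `div` 2) xs
    merge as [] = as
    merge [] bs = bs
    merge (a:as) (b:bs)
      | a <= b    = a : merge as (b:bs)
      | otherwise = b : merge (a:as) bs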
2) It's extremely easy to implement imperative style while staying purely functional. The essence of imperative style is sequencing of actions. Actions are sequenced in a pure setting by using monads. The essence of monads is the binding function:
(>>=) :: (Monad m) => m a -> (a -> m b) -> m b
This function exists in C++, and it's called ;.
A sequence of actions in Haskell, for example, is written thusly:
putStrLn "What's your name?" >>=
const (getLine >>= \name -> putStrLn ("Hello, " ++ name))
Some syntax sugar is available to make this look more imperative (but note that this is the exact same code):
do { putStrLn "What's your name?";
     name <- getLine;
     putStrLn ("Hello, " ++ name) }
Nearly all functional programming languages have some construct to allow for imperative coding (like do in Haskell). There are many problem domains that can't be solved comfortably with "pure" functional programming. One of those is network protocols, for example, where you need to issue a series of commands in the right order. Such things don't lend themselves well to pure functional programming.
I have to agree with Lothar, though, that list sorting and string matching are not really examples you need to solve imperatively. There are well-known algorithms for such things and they can be implemented efficiently in functional languages already.
I think that 'algorithms' (e.g. method bodies and basic data structures) are where functional programming is best. Assuming nothing completely IO/state-dependent, functional programming excels at authoring algorithms and data structures, often resulting in shorter/simpler/cleaner code than you'd get with an imperative solution. (Don't emulate imperative style; FP style is better for most of these kinds of tasks.)
You want imperative stuff sometimes to deal with IO or low-level performance, and you want OOP for partitioning the high-level design and architecture of a large program, but "in the small" where you write most of your code, FP is a win.
See also
How does functional programming affect the structure of your code?
It works pretty well the other way round emulating functional with imperative style.
Remember that the internals of an interpreter or VM are so close to the metal and so performance-critical that you should even consider going down to assembler level and counting the clock cycles for each instruction (like Dolphin Smalltalk does, and the results are impressive).
CPUs are imperative.
But there is no problem doing all the basic algorithm implementations - the ones you mention are NOT low-level - they are basics.
I don't know about list sorting, but you'd be hard pressed to bootstrap a language without some kind of string matching in the compiler or runtime. So you need that routine to create the language. As there isn't a great deal of point in writing the same code twice, when you create the library for matching strings within the language, you call the code written earlier. The degree to which this happens in successive releases will depend on how self-hosting the language is, but unless that's a strong design goal, there won't be any reason to change it.
