What are the best uses of Logic Programming?

By Logic Programming I mean a sub-paradigm of declarative programming languages. Don't confuse this question with "What problems can you solve with if-then-else?"
A language like Prolog is very fascinating, and it's worth learning for the sake of learning, but I have to wonder what class of real-world problems is best expressed and solved by such a language. Are there better languages? Does logic programming exist by another name in more trendy programming languages? Is the cynical version of the answer a variant of the Python Paradox?

Prototyping.
Prolog is dynamic and has been for 50 years. The compiler is liberal, the syntax minimalist, and "doing stuff" is easy, fun and efficient. SWI-Prolog has a built-in tracer (debugger!), and even a graphical tracer. You can change the code on the fly using make/0, you can dynamically load modules, add a few lines of code without leaving the interpreter, or edit the file you're currently running on the fly with edit/1. Do you think you've found a problem with the foobar/2 predicate?
?- edit(foobar).
And as soon as you leave the editor, that thing is going to be re-compiled. Sure, Eclipse does the same thing for Java, but Java isn't exactly a prototyping language.
Apart from the pure prototyping stuff, Prolog is incredibly well suited for translating a piece of logic into code. So automated theorem provers and that type of thing can easily be written in Prolog.
The first Erlang interpreter was written in Prolog - and for a reason, since Prolog is very well suited for parsing, and encoding the logic you find in parse trees. In fact, Prolog comes with a built-in parser! No, not a library, it's in the syntax, namely DCGs.
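For example, here is a minimal DCG sketch, runnable in SWI-Prolog (the grammar and predicate names are invented for illustration):

% greeting//0 parses the token list [hello, world].
greeting --> [hello], [world].

% digits//1 parses one or more digit character codes.
digits([D|Ds]) --> digit(D), digits(Ds).
digits([D])    --> digit(D).
digit(D)       --> [D], { code_type(D, digit) }.

% Usage: phrase/2 runs a grammar against a list.
% ?- phrase(greeting, [hello, world]).
% true.
% ?- phrase(digits(Ds), `2024`).
% Ds unifies with the four character codes of "2024".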
Prolog is used a lot in NLP, particularly in syntax and computational semantics.
But Prolog is underused and underappreciated. Unfortunately, it seems to bear an academic or "unusable for any real purpose" stigma. But it can be put to very good use in many real-world applications involving facts and the computation of relations between facts. It is not very well suited for number crunching, but CS is not only about number crunching.

Since Prolog = Syntactic Unification + Backward chaining + REPL,
most places where syntactic unification is used are also good places to use Prolog.
Syntactic unification is used in (see the sketch after this list):
AST transformations
Type Inference
Term rewriting
Theorem proving
Natural language processing
Pattern matching
Combinatorial test case generation
Extracting substructures from structured data such as an XML document
Symbolic computation, e.g. calculus
Deductive databases
Expert systems
Artificial Intelligence
Parsing
Query languages
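For instance, the "extract substructures" case often needs no library at all: unification in a clause head does the pattern matching. A minimal sketch (the terms are invented stand-ins for a parsed XML document):

% A term standing in for <book><title>...</title><author>...</author></book>.
example_doc(book(title('The Art of Prolog'), author('Sterling and Shapiro'))).

% The head pattern alone pulls the title out of any matching term.
title_of(book(title(T), _Author), T).

% ?- example_doc(B), title_of(B, T).
% T = 'The Art of Prolog'.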

Constraint Logic Programming (CLP)
Many very good and well-suited use cases of logic programming have already been mentioned. I would like to complement the existing list with several tasks from an extremely important application area of logic programming:
Logic programming blends seamlessly, more seamlessly than other paradigms, with constraints, resulting in a framework called Constraint Logic Programming.
This leads to dedicated constraint solvers for different domains, such as:
CLP(FD) for integers
CLP(B) for Booleans
CLP(Q) for rational numbers
CLP(R) for floating point numbers.
These dedicated constraint solvers lead to several important use cases of logic programming that have not yet been mentioned, some of which I show below.
When choosing a Prolog system, the power and performance of its constraint solvers are often among the deciding factors, especially for commercial users.
CLP(FD) — Reasoning over integers
In practice, CLP(FD) is one of the most important applications of logic programming, and is used to solve tasks from the following areas, among others:
scheduling
resource allocation
planning
combinatorial optimization
See clpfd for more information and several examples; a small sketch follows.
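As a small taste, here is a minimal CLP(FD) scheduling sketch for SWI-Prolog; the task durations and the deadline are invented for illustration:

:- use_module(library(clpfd)).

% Three tasks with durations 2, 3 and 1 must run one after another
% and finish within 10 time units; A, B, C are their start times.
schedule([A, B, C]) :-
    [A, B, C] ins 0..10,
    A + 2 #=< B,      % task 1 (duration 2) precedes task 2
    B + 3 #=< C,      % task 2 (duration 3) precedes task 3
    C + 1 #=< 10,     % task 3 (duration 1) meets the deadline
    label([A, B, C]).

% ?- schedule(Starts).
% Starts = [0, 2, 5] .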
CLP(B) — Boolean constraints
CLP(B) is often used in connection with:
SAT solving
circuit verification
combinatorial counting
See clpb; a small example follows.
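A minimal CLP(B) sketch for SWI-Prolog: checking a Boolean identity (a tiny stand-in for circuit verification) and counting assignments:

:- use_module(library(clpb)).

% De Morgan's law is a tautology: T = 1 means it holds for all inputs.
% ?- taut(~(X*Y) =:= ~X + ~Y, T).
% T = 1.

% Combinatorial counting: in how many ways can exactly 2 of 3 variables be true?
% ?- sat_count(card([2], [A,B,C]), N).
% N = 3.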
CLP(Q) — Rational numbers
CLP(Q) is used to solve important classes of problems arising in Operations Research:
linear programming
integer linear programming
mixed integer linear programming
See clpq; a small example follows.
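A minimal CLP(Q) sketch for SWI-Prolog, solving a tiny linear program over the rationals (the objective and constraints are invented for illustration):

:- use_module(library(clpq)).

% Maximize 3X + 2Y subject to X,Y >= 0, X + Y =< 4, X + 3Y =< 6.
% ?- { X >= 0, Y >= 0, X + Y =< 4, X + 3*Y =< 6 }, maximize(3*X + 2*Y).
% X = 4, Y = 0.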

One of the things Prolog gives you for free is a backtracking search algorithm -- you could implement it yourself, but if your problem is best solved by having that algorithm available, then it's nice to use it.
The two things I've seen it be good at are mathematical proofs and natural language understanding.
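A minimal sketch of what "backtracking for free" buys you: generate candidates, state the test, and let Prolog search (the puzzle is invented for illustration):

% Find an ordering of 1..4 in which no two neighbours differ by 1.
separated(Xs) :-
    permutation([1, 2, 3, 4], Xs),
    ok(Xs).

ok([_]).
ok([A, B|T]) :- abs(A - B) =\= 1, ok([B|T]).

% ?- separated(Xs).
% Xs = [2, 4, 1, 3] ;
% Xs = [3, 1, 4, 2] ;
% false.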

Prolog is ideal for non-numeric problems. This article gives a few examples of some applications of Prolog and it might help you understand the type of problems that it might solve.

Prolog is great at solving puzzles and the like. That said, in the domain of puzzle-solving it makes easy/medium puzzle-solving easier and complicated puzzle-solving harder. Still, writing solvers for grid puzzles and the like, such as Hexiom, Sudoku, or Nurikabe, is not especially tough.

One simple answer is "build systems". The language used to build Makefiles (at least, the part to describe dependencies) is essentially a logic programming language, although not really a "pure" logic programming language.
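A toy illustration of that idea in Prolog (file and target names are invented): each Makefile rule is essentially a Horn clause, and the build request is a query.

% app depends on two object files; each object depends on a source file.
prerequisite(app, main_o).
prerequisite(app, util_o).
prerequisite(main_o, 'main.c').
prerequisite(util_o, 'util.c').

buildable('main.c').                 % source files are simply present
buildable('util.c').
buildable(Target) :-                 % a target is buildable once all
    prerequisite(Target, _),         % of its prerequisites are
    forall(prerequisite(Target, Dep), buildable(Dep)).

% ?- buildable(app).
% true.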

Yes, Prolog has been around since 1972. It was invented by Alain Colmerauer with Philippe Roussel, based on Robert Kowalski's procedural interpretation of Horn clauses. Alain was a French computer scientist and professor at Aix-Marseille University from 1970 to 1995.
And Alain invented it to analyse Natural Language. Several successful prototypes were created by him and his "followers".
His own system, Orbis, understood questions in English and French about the solar system. See his personal site.
Warren and Pereira's system Chat-80 answered questions about world geography.
Today, IBM Watson is a contemporary QA system based on logic, with a huge dose of statistics about real-world phrases.
So you can imagine that is where Prolog's strength lies.
Retired in 2006, he remained active until he died in 2017. He was named Chevalier de la Legion d’Honneur by the French government in 1986.

Related

Which is the easiest functional programming language for someone who has background in imperative languages? [closed]

I would like to learn a functional language in order to broaden my horizon. I have knowledge of Python and C/C++ and I want a language that is easy to learn for someone who comes from the imperative domain of languages. I don't care if the language is powerful enough. I just want a language in order to learn the basics of functional programming, and then I will try a more difficult (and powerful) language.
Thanks
I recommend pure-lang for these pedagogical ends. It's also plenty powerful. If you want something more popular / with more community support, then I'd recommend Scheme or OCaml, depending on whether you'd rather deal with unfamiliar syntax (go with Scheme) or deal with unfamiliar typing (go with OCaml) first. SML and F# are only slightly different from OCaml. Others have or will mention Clojure, Scala, and Haskell.
Clojure is a variant of Scheme, with its own idiosyncrasies (e.g. no tail-call optimization), so using it would be a way of starting with Scheme. I'd expect you'd have an easier time with a less idiosyncratic Scheme implementation, though. Racket is what's often used for teaching. Scala looks to be fundamentally similar to OCaml, but this is based on only casual familiarity.
Unlike Haskell, the other languages mentioned all have two advantages: (1) evaluation order is eager by default, though you can get lazy evaluation by specifically requesting it; in Haskell it's the reverse. (2) Mutation is available, though much of the libraries and code you'll see doesn't use it. I actually think it's pedagogically better to learn functional programming while at the same time having an eye on how it interacts with side-effects, and working your way to monadic-style composition somewhat down the road. So I think this is an advantage. Some will tell you that it's better to be thrown into Haskell's more-quarantined handling of mutation first, though.
Robert Harper at CMU has some nice blog posts on teaching functional programming. As I understand, he also prefers languages like OCaml for teaching.
Among the three classes of languages I recommended (Pure, Scheme and friends, OCaml and friends), the first two have dynamic typing. The first and third have explicit reference cells (as though in Python, you restricted yourself to never reassigning a variable but you could still change what's stored at a list index). Scheme has implicit reference cells: variables themselves look mutable, as in C and Python, and the reference cell handling is done under the covers. In languages like that, you often have some form of explicit reference cell available too (as in the example I just gave in Python, or using mutable pairs/lists in Racket... in other Schemes, including the Scheme standard, those are the default pairs/lists).
One virtue Haskell does have is some textbooks are appearing for it. (I mean this sincerely, not snarkily.) What books/resources to use is another controversial issue with many wars/closed questions. SICP as others have recommended has many fans and also some critics. There seem to me to be many good choices. I won't venture further into those debates.
First, read Structure and Interpretation of Computer Programs. I recommend Lisp (for example, its dialect Scheme) as a first functional programming language.
Another option is Clojure, which I'm given to understand is more "purely" functional than Scheme/Racket (don't ask me about the details here) and possibly similar enough to let you use it in conjunction with SICP (Structure and Interpretation of Computer Programs, a highly recommended book also suggested by another answer).
I would like to learn a functional language in order to broaden my horizon. I have knowledge of Python and C/C++ and I want a language that is easy to learn for someone who comes from the imperative domain of languages. I don't care if the language is powerful enough. I just want a language in order to learn the basics of functional programming, and then I will try a more difficult (and powerful) language.
Great question!
I had done BASIC, Pascal, assembler, C and C++ before I started doing functional programming in the late 1990s. Then I started using two functional languages at about the same time, Mathematica and OCaml, and was using them exclusively within a few years. In particular, OCaml let me write imperative code which looked like the code I had been writing before. I found that valuable as a learner because it let me compare the different approaches which made the advantages of ML obvious.
However, as others have mentioned, the core benefit of Mathematica and OCaml is pattern matching and that is not technically related to functional programming. I have subsequently looked at many other functional languages but I have no desire to go back to a language that lacks pattern matching.
This question is probably off-topic because it is going to result in endless language wars, but here's a general bit of advice:
There is a class of functional programming languages which are sometimes called "mostly functional", in that they permit some imperative features where you want them. Examples include Standard ML, OCaml, F#, and Scala. You might consider one of these if you want to be able to get a grip on the functional idiomatic style while still being able to achieve things in reasonably familiar ways.
I've used Standard ML extensively in the past, but if you're looking for something that has a bit less of a learning curve, I'd personally recommend Scala, which is my second-favourite programming language. The reasons for this include the prevalence of libraries, a healthy-sized community, and the availability of nice books and tutorials to help you get started (particularly if you have ever had any dealings with Java).
One element that was not discussed is the availability of special pattern-matching syntax for algebraic datatypes, as in Haskell, all flavors of ML, and probably several of the other languages mentioned. Pattern-matching syntax tends to help the programmer see their functions as mathematical functions. Haskell's syntax is sufficiently complex, and its implementations have sufficiently poor parse error messages, that syntax is a decent reason not to choose Haskell. Scheme is probably easier to learn than most other options (and Scheme probably has the king of all macro systems), but the lack of pattern matching syntax would steer me away from it for an intro to functional programming.

Is there a software-engineering methodology for functional programming? [closed]

Software Engineering as it is taught today is entirely focused on object-oriented programming and the 'natural' object-oriented view of the world. There is a detailed methodology that describes how to transform a domain model into a class model with several steps and a lot of (UML) artifacts like use-case-diagrams or class-diagrams. Many programmers have internalized this approach and have a good idea about how to design an object-oriented application from scratch.
The new hype is functional programming, which is taught in many books and tutorials. But what about functional software engineering?
While reading about Lisp and Clojure, I came across two interesting statements:
Functional programs are often developed bottom up instead of top down ('On Lisp', Paul Graham)
Functional programmers use maps where OO programmers use objects/classes ('Clojure for Java Programmers', talk by Rich Hickey).
So what is the methodology for a systematic (model-based ?) design of a functional application, i.e. in Lisp or Clojure? What are the common steps, what artifacts do I use, how do I map them from the problem space to the solution space?
Thank God that the software-engineering people have not yet discovered functional programming. Here are some parallels:
Many OO "design patterns" are captured as higher-order functions. For example, the Visitor pattern is known in the functional world as a "fold" (or if you are a pointy-headed theorist, a "catamorphism"). In functional languages, data types are mostly trees or tuples, and every tree type has a natural catamorphism associated with it.
These higher-order functions often come with certain laws of programming, aka "free theorems".
Functional programmers use diagrams much less heavily than OO programmers. Much of what is expressed in OO diagrams is instead expressed in types, or in "signatures", which you should think of as "module types". Haskell also has "type classes", which is a bit like an interface type.
Those functional programmers who use types generally think that "once you get the types right, the code practically writes itself."
Not all functional languages use explicit types, but the How To Design Programs book, an excellent book for learning Scheme/Lisp/Clojure, relies heavily on "data descriptions", which are closely related to types.
So what is the methodology for a systematic (model-based ?) design of a functional application, i.e. in Lisp or Clojure?
Any design method based on data abstraction works well. I happen to think that this is easier when the language has explicit types, but it works even without. A good book about design methods for abstract data types, which is easily adapted to functional programming, is Abstraction and Specification in Program Development by Barbara Liskov and John Guttag, the first edition. Liskov won the Turing award in part for that work.
Another design methodology that is unique to Lisp is to decide what language extensions would be useful in the problem domain in which you are working, and then use hygienic macros to add these constructs to your language. A good place to read about this kind of design is Matthew Flatt's article Creating Languages in Racket. The article may be behind a paywall. You can also find more general material on this kind of design by searching for the term "domain-specific embedded language"; for particular advice and examples beyond what Matthew Flatt covers, I would probably start with Graham's On Lisp or perhaps ANSI Common Lisp.
What are the common steps, what artifacts do I use?
Common steps:
Identify the data in your program and the operations on it, and define an abstract data type representing this data.
Identify common actions or patterns of computation, and express them as higher-order functions or macros. Expect to take this step as part of refactoring.
If you're using a typed functional language, use the type checker early and often. If you're using Lisp or Clojure, the best practice is to write function contracts first including unit tests—it's test-driven development to the max. And you will want to use whatever version of QuickCheck has been ported to your platform, which in your case looks like it's called ClojureCheck. It's an extremely powerful library for constructing random tests of code that uses higher-order functions.
For Clojure, I recommend going back to good old relational modeling. Out of the Tarpit is an inspirational read.
Personally I find that all the usual good practices from OO development apply in functional programming as well - just with a few minor tweaks to take account of the functional worldview. From a methodology perspective, you don't really need to do anything fundamentally different.
My experience comes from having moved from Java to Clojure in recent years.
Some examples:
Understand your business domain / data model - equally important whether you are going to design an object model or create a functional data structure with nested maps. In some ways, FP can be easier because it encourages you to think about the data model separately from functions / processes, but you still have to do both.
Service orientation in design - actually works very well from a FP perspective, since a typical service is really just a function with some side effects. I think that the "bottom up" view of software development sometimes espoused in the Lisp world is actually just good service-oriented API design principles in another guise.
Test Driven Development - works well in FP languages, in fact sometimes even better because pure functions lend themselves extremely well to writing clear, repeatable tests without any need for setting up a stateful environment. You might also want to build separate tests to check data integrity (e.g. does this map have all the keys in it that I expect, to balance the fact that in an OO language the class definition would enforce this for you at compile time).
Prototyping / iteration - works just as well with FP. You might even be able to prototype live with users if you get extremely good at building tools / DSLs and using them at the REPL.
OO programming tightly couples data with behavior. Functional programming separates the two. So you don't have class diagrams, but you do have data structures, and you particularly have algebraic data types. Those types can be written to very tightly match your domain, including eliminating impossible values by construction.
So there aren't books and books on it, but there is a well established approach to, as the saying goes, make impossible values unrepresentable.
In so doing, you can make a range of choices about representing certain types of data as functions instead, and conversely, representing certain functions as a union of data types instead so that you can get, e.g., serialization, tighter specification, optimization, etc.
Then, given that, you write functions over your ADTs such that you establish some sort of algebra - i.e. there are fixed laws which hold for these functions. Some are perhaps idempotent - the same after multiple applications. Some are associative. Some are transitive, etc.
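Written out (these are the standard textbook definitions, included here for concreteness), two such laws look like:

$$ f(f(x)) = f(x) \quad \text{(idempotence)} $$
$$ (x \oplus y) \oplus z = x \oplus (y \oplus z) \quad \text{(associativity)} $$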
Now you have a domain over which you have functions which compose according to well behaved laws. A simple embedded DSL!
Oh, and given properties, you can of course write automated randomized tests of them (ala QuickCheck).. and that's just the beginning.
Object Oriented design isn't the same thing as software engineering. Software engineering has to do with the entire process of how we go from requirements to a working system, on time and with a low defect rate. Functional programming may be different from OO, but it does not do away with requirements, high level and detailed designs, verification and testing, software metrics, estimation, and all that other "software engineering stuff".
Furthermore, functional programs do exhibit modularity and other structure. Your detailed designs have to be expressed in terms of the concepts in that structure.
One approach is to create an internal DSL within the functional programming language of choice. The "model" then is a set of business rules expressed in the DSL.
See my answer to another post:
How does Clojure approach Separation of Concerns?
I agree more needs to be written on the subject on how to structure large applications that use an FP approach (Plus more needs to be done to document FP-driven UIs)
While this might be considered naive and simplistic, I think "design recipes" (a systematic approach to problem solving applied to programming as advocated by Felleisen et al. in their book HtDP) would be close to what you seem to be looking for.
Here, a few links:
http://www.northeastern.edu/magazine/0301/programming.html
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.86.8371
I've recently found this book:
Functional and Reactive Domain Modeling
I think is perfectly in line with your question.
From the book description:
Functional and Reactive Domain Modeling teaches you how to think of the domain model in terms of pure functions and how to compose them to build larger abstractions. You will start with the basics of functional programming and gradually progress to the advanced concepts and patterns that you need to know to implement complex domain models. The book demonstrates how advanced FP patterns like algebraic data types, typeclass based design, and isolation of side-effects can make your model compose for readability and verifiability.
There is the "program calculation" / "design by calculation" style associated with Prof. Richard Bird and the Algebra of Programming group at Oxford University (UK), I don't think its too far-fetched to consider this a methodology.
Personally while I like the work produced by the AoP group, I don't have the discipline to practice design in this way myself. However that's my shortcoming, and not one of program calculation.
I've found Behavior Driven Development to be a natural fit for rapidly developing code in both Clojure and SBCL. The real upside of leveraging BDD with a functional language is that I tend to write much finer grain unit tests than I usually do when using procedural languages because I do a much better job of decomposing the problem into smaller chunks of functionality.
Honestly if you want design recipes for functional programs, take a look at the standard function libraries such as Haskell's Prelude. In FP, patterns are usually captured by higher order procedures (functions that operate on functions) themselves. So if a pattern is seen, often a higher order function is simply created to capture that pattern.
A good example is fmap. This function takes a function as an argument and applies it to all the "elements" of the second argument. Since it is part of the Functor type class, any instance of a Functor (such as a list, graph, etc...) may be passed as a second argument to this function. It captures the general behavior of applying a function to every element of its second argument.
Well,
Generally, many functional programming languages have been used at universities for a long time, for "small toy problems".
They are getting more popular now, since OOP has difficulties with parallel programming because of "state". And sometimes the functional style is better for the problem at hand, as with Google's MapReduce.
I am sure that when the functional guys hit the wall [try to implement systems bigger than 1,000,000 lines of code], some of them will come up with new software-engineering methodologies with buzz words :-). They should answer the old question: how do we divide a system into pieces so that we can "bite" each piece one at a time? [working in an iterative, incremental and evolutionary way] using the functional style.
It is certain that the functional style will affect our object-oriented style. We have "stolen" many concepts from functional systems and adapted them to our OOP languages.
But will functional programs be used for such big systems? Will they become mainstream? That is the question.
And nobody can come up with a realistic methodology without implementing such a big system and getting his or her hands dirty.
First you should get your hands dirty, then suggest a solution. Solutions and suggestions without "real pains and dirt" will be fantasy.

Do functional languages cope well with complexity?

I am curious how functional languages compare (in general) to more "traditional" languages such as C# and Java for large programs. Does program flow become difficult to follow more quickly than if a non-functional language is used? Are there other issues or things to consider when writing a large software project using a functional language?
Thanks!
Functional programming aims to reduce the complexity of large systems, by isolating each operation from others. When you program without side-effects, you know that you can look at each function individually - yes, understanding that one function may well involve understanding other functions too, but at least you know it won't interfere with some other piece of system state elsewhere.
Of course this is assuming completely pure functional programming - which certainly isn't always the case. You can use more traditional languages in a functional way too, avoiding side-effects where possible. But the principle is an important one: avoiding side-effects leads to more maintainable, understandable and testable code.
Does program flow become difficult to follow more quickly than if a non-functional language is used?
"Program flow" is probably the wrong concept to analyze a large functional program. Control flow can become baroque because there are higher-order functions, but these are generally easy to understand because there is rarely any shared mutable state to worry about, so you can just think about arguments and results. Certainly my experience is that I find it much easier to follow an aggressively functional program than an aggressively object-oriented program where parts of the implementation are smeared out over many classes. And I find it easier to follow a program written with higher-order functions than with dynamic dispatch. I also observe that my students, who are more representative of programmers as a whole, have difficulties with both inheritance and dynamic dispatch. They do not have comparable difficulties with higher-order functions.
Are there other issues or things to consider when writing a large software project using a functional language?
The critical thing is a good module system. Here is some commentary.
The most powerful module system I know of is the unit system of PLT Scheme, designed by Matthew Flatt and Matthias Felleisen. This very powerful system unfortunately lacks static types, which I find a great aid to programming.
The next most powerful system is the Standard ML module system. Unfortunately Standard ML, while very expressive, also permits a great many questionable constructs, so it is easy for an amateur to make a real mess. Also, many programmers find it difficult to use Standard ML modules effectively.
The Objective Caml module system is very similar, but there are some differences which tend to mitigate the worst excesses of Standard ML. The languages are actually very similar, but the styles and idioms of Objective Caml make it significantly less likely that beginners will write insane programs.
The least powerful/expressive module system for a functional language is the Haskell module system. This system has a grave defect: there are no explicit interfaces, so most of the cognitive benefit of having modules is lost. Another sad outcome is that while the Haskell module system gives users a hierarchical name space, use of this name space (import qualified, in case you're an insider) is often deprecated, and many Haskell programmers write code as if everything were in one big, flat namespace. This practice amounts to abandoning another of the big benefits of modules.
If I had to write a big system in a functional language and had to be sure that other people understood it, I'd probably pick Standard ML, and I'd establish very stringent programming conventions for use of the module system. (E.g., explicit signatures everywhere, opaque ascription with :>, and no use of open anywhere, ever.) For me the simplicity of the Standard ML core language (as compared with OCaml) and the more functional nature of the Standard ML Basis Library (as compared with OCaml) are more valuable than the superior aspects of the OCaml module system.
I've worked on just one really big Haskell program, and while I found (and continue to find) working in Haskell very enjoyable, I really missed having explicit signatures.
Do functional languages cope well with complexity?
Some do. I've found ML modules and module types (in both the Standard ML and Objective Caml flavors) invaluable tools for managing complexity, understanding complexity, and placing unbreachable firewalls between different parts of large programs. I have had less good experiences with Haskell.
Final note: these aren't really new issues. Decomposing systems into modules with separate interfaces checked by the compiler has been an issue in Ada, C, C++, CLU, Modula-3, and I'm sure many other languages. The main benefit of a system like Standard ML or Caml is that you get explicit signatures and modular type checking (something that the C++ community is currently struggling with around templates and concepts). I suspect that these issues are timeless and are going to be important for any large system, no matter the language of implementation.
I'd say the opposite. It is easier to reason about programs written in functional languages due to the lack of side-effects.
Usually it is not a matter of "functional" vs "procedural"; it is rather a matter of lazy evaluation.
Lazy evaluation is when you can handle values without actually computing them yet; rather, the value is attached to an expression which should yield the value if it is needed. The main example of a language with lazy evaluation is Haskell. Lazy evaluation allows the definition and processing of conceptually infinite data structures, so this is quite cool, but it also makes it somewhat more difficult for a human programmer to closely follow, in his mind, the sequence of things which will really happen on his computer.
For mostly historical reasons, most languages with lazy evaluation are "functional". I mean that these languages have good syntactic support for constructions which are typically functional.
Without lazy evaluation, functional and procedural languages allow the expression of the same algorithms, with the same complexity and similar "readability". Functional languages tend to value "pure functions", i.e. functions which have no side-effects. The order of evaluation for pure functions is irrelevant: in that sense, pure functions help the programmer know what happens by simply flagging the parts for which knowing what happens in what order is not important. But that is an indirect benefit, and pure functions also appear in procedural languages.
From what I can tell, here are the key advantages of functional languages for coping with complexity:
Functional programming hates side-effects. You can really black-box the different layers, and you won't be afraid of parallel processing (the actor model, as in Erlang, is really easier to use than locks and threads).
Culturally, functional programmers are used to designing a DSL to express and solve a problem. Identifying the fundamental primitives of a problem is a radically different approach from rushing to the brand new trendy framework.
Historically, this field has been led by very smart people: garbage collection, object orientation, metaprogramming... all those concepts were first implemented on functional platforms. And there is plenty of literature.
But the downside of those languages is that they lack support and experience in the industry. Having portability, performance and interoperability may be a real challenge where on other platform like Java, all of this seems obvious. That said, a language based on the JVM like Scala could be a really nice fit to benefit from both sides.
Does program flow become difficult to follow more quickly than if a non-functional language is used?
This may be the case, in that functional style encourages the programmer to prefer thinking in terms of abstract, logical transformations, mapping inputs to outputs. Thinking in terms of "program flow" presumes a sequential, stateful mode of operation--and while a functional program may have sequential state "under the hood", it usually isn't structured around that.
The difference in perspective can be easily seen by comparing imperative vs. functional approaches to "process a collection of data". The former tends to use structured iteration, like a for or while loop, telling the program "do this sequence of tasks, then move to the next one and repeat, until done". The latter tends to use abstracted recursion, like a fold or map function, telling the program "here's a function to combine/transform elements--now use it". It isn't necessary to follow the recursive program flow through a function like map; because it's a stateless abstraction, it's sufficient to think in terms of what it means, not what it's doing.
It's perhaps somewhat telling that the functional approach has been slowly creeping into non-functional languages--consider foreach loops, Python's list comprehensions...
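The contrast can be made concrete even in Prolog, the one language already shown on this page: an explicit accumulator "loop" versus handing a combining step to foldl/4 from library(apply) (predicate names are invented):

:- use_module(library(apply)).

% Loop style: walk the list and thread the accumulator by hand.
sum_loop([], Acc, Acc).
sum_loop([X|Xs], Acc0, Sum) :-
    Acc is Acc0 + X,
    sum_loop(Xs, Acc, Sum).

% Fold style: state only how one element combines with the accumulator.
add(X, Acc0, Acc) :- Acc is Acc0 + X.

% ?- sum_loop([1, 2, 3, 4], 0, S).    % S = 10
% ?- foldl(add, [1, 2, 3, 4], 0, S).  % S = 10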

Is functional programming the next step towards natural-language programming? [closed]

This is my very first question so I am a bit nervous about it because I am not sure whether I get the meaning across well enough. Anyhow, here we go....
Whenever new milestones in programming have been reached it seems they always have had one goal in common: to make it easier for programmers, well, to program.
Machine language, opcodes/mnemonics, procedures/functions, structs, classes (OOP) etc. always helped, in their time, to plan, structure and code programs in a more natural, understandable and better maintainable way.
Of course functional programming is by no means a novelty but it seems that it has experienced a sort of renaissance in recent years. I also believe that FP will get an enormous boost when Microsoft will add F# to their mainstream programming languages.
Returning to my original question, I believe that ultimately programming will be done in a natural language (English) with very few restrictions or rules. The compiler will be part of an AI/NLP system that extracts information from the code or should I say text and transforms it into an intermediate language which the compiler can compile.
So, does FP take programming closer to natural-language programming or is it rather an obstacle and mainstream OOP will lead us faster to natural-language programming?
This question should not be used to discuss the useability or feasability of natural-language programming because only the future will tell.
Sorry, I don't agree at all. Code is ultimately a blueprint for making things (objects), so it has to be very precise and rule-governed in order to function reliably. Natural language won't take over programming any sooner than sketching ideas on napkins will take over mechanical engineering.
I personally have come to the conclusion that natural language programming is somewhat crack.
English is not exactly suited to being used fully as a programming language: it has too many abstract words that have no correlation in programming, such as emotive terms and other abstract notions that have no place in programming. To say programming could ever be "natural language" would imply that "natural language" could be programming, but it isn't.
Now, while I get what you're saying here, the problem is that the English language has too many scrap terms and repeated names for the same things, so we'd be using something that isn't even specific to the domain of programming for the task of programming.
I think it's more fitting that people understand that programming is in fact a highly specialized language, and use their brains and learn to code in a language which is simple, declarative, and has a consistent definition, unlike English, where definition is highly subjective.
Once you learn the ins and outs of a language, and learn its schematics and behaviors, you can combine them to do new things.
Take Perl: everyone lambasts it for being line noise, but when you know many programming languages, once you get past the initial hurdles of "OMG LINE NOISE", there is a degree of intuitiveness about it where you can make stuff up you never read about and then see it magically works just as you expected.
And IMHO, domain specific languages trump spoken ones for targeted problem solving.
"So, does FP take programming closer to natural-language programming or is it rather an obstacle and mainstream OOP will lead us faster to natural-language programming?"
Neither. Both operate on the same principle that you have to be specific about what you want the computer to do. There must be no room for uncertainty, and neither paradigm has anything to do with natural languages. They tackle an entirely different problem: That of managing and structuring complex code and large codebases.
The big obstacle in natural languages is the parsing. It is impossible to unambiguously parse natural language. Even humans can't do it without a lot of context information (facial expressions, tone of voice), and even then, we still get it wrong quite often.
OOP and FP are only about what happens after parsing. Which meaning is assigned to each semantic element, once it's been identified and parsed.
Perhaps we'll one day be able to program in natural language. I doubt it'll happen within the next couple of decades, but it may happen one day. But today's programming paradigms will neither speed up this process nor delay it. They simply have nothing to do with it, and won't help solve the parsing problem.
I don't think that functional programming is any closer to natural language programming than OO programming. Functional programming has a very verb-oriented syntax. When you program in Lisp or Scheme, you spend a lot of time thinking about functions and what actions you want to take on your data. In OO programming, you spend most of your time thinking about objects, hence it seems very noun-oriented. However, in Smalltalk, C++, and Java, you also have methods, which allow you to apply verbs to all of your nouns (so to speak).
I don't think that OO programming will necessarily lead us to natural language programming, but from my point of view it's a little bit closer than functional programming. Functional programming, to me, seems a little bit closer to math than to natural language. That's not such a bad thing, since maybe math is the language we should be thinking in when we program anyway.
Just FYI, Inform 7 is probably the closest anyone has gotten to natural-language programming. It is a language for a very specific domain: writing interactive fiction, the kind of software that began with "adventure games".
The current spurt of interest in functional programming, resulting primarily from C# 3.0's cool new features, is basically about enabling parallelism, and denotes a shift towards multi-core computing. IMHO, I don't think we can consider this a next step towards "natural language programming".
If you are looking for the next evolution in programming languages, I would look to DSLs. DSLs allow for highly customized languages that enable sophisticated business users to configure a system without having to worry about coding details such as datatypes, threads, and UI widgets.
Functional languages will have their place in "highly parallel processing" space.
Do you think subjective questions will get this here order for "Windows Internals the 5th Element" added to the database and shipped to my address? If so, natural language programming will be very close to functional programming, since I asked my question in a somewhat functional manner. If not, then natural language programming won't get my order shipped, will it? Functional programming can work because it still has nothing to do with natural languages.
No. Functional programming will take us closer to proving compilers, that is, compilers that prove more assertions about your code. The more compilers can prove for us, the closer software development comes to being engineering rather than art.
A NLP programming language is probably more of a "do what I mean not what I say" style language. That is probably the opposite of the direction functional languages go.
"All programming languages are converging towards LISP."

Is programming a subset of math? [closed]

I've heard many times that all programming is really a subset of math. Some suggest that OO, at its roots, is mathematically based, but I don't get the connection, aside from some obvious examples:
using induction to prove a recursive algorithm,
formal correctness proofs,
functional languages,
lambda calculus,
asymptotic complexity,
DFAs, NFAs, Turing Machines, and theoretical computation in general,
and the fact that everything on the box is binary.
I know math is very important to programming, but I struggle with this "subset" view. In what ways is programming a subset of math?
I'm looking for an explanation that might have relevance to enterprise/OO development, if there is a strong enough connection, that is.
It's math in the sense that it requires abstract thought about algorithms etc.
It's engineering when it involves planning schedules, deliverables, testing.
It's art when you have no idea how it's going to eventually turn out.
Programming is one of the most difficult branches of applied mathematics; the poorer mathematicians had better remain pure mathematicians.
--E. W. Dijkstra
Overall, remember that mathematics is a formal codification of logic, which is also what we do in software.
The list of topics in your question is loaded with mathematical problems. We are able to do programming on a fairly high level of abstraction, so the raw mathematics may not be staring you in the face. For example, you mentioned DFAs.. you can use a regular expression in your programs without knowing any math, but you'll find more of a need for mathematics when you want to design a good regular expression engine.
I think you've hit on an interesting point. Programming is an art and a science. There are a lot of "tools of the trade", and you don't necessarily sit down and do a lot of high-level mathematics in order to simply write a program. In fact, when you're programming, you may not really be doing much mathematics or computer science.
It's when we start to solve difficult problems in computer science that mathematics shows up. The deeper you go, the more it will flesh itself out... often in lower levels of abstraction.
There are also some realms of programming that you don't necessarily have to work in, but they involve more math. For example, while you can certainly learn a language and write some apps without any formal mathematics, you won't get very far in algorithm analysis without some applied math.
OK, I was a math and CS major in college. I would say that if the set A is Math and the set B is CS, then A intersects B. It's not a subset.
There is no doubt that many of the fathers and mothers of computer science were mathematicians, like Turing and Dijkstra. Most of the founders of the internet had PhDs in math, physics, or engineering. Most of the core concepts of computer science come from math, but the act of programming isn't really math. Math helps us in our daily lives, but the two aren't the same.
But there is no doubt that the original reasoning behind the computer was to well, compute things. We have come a long way from there in such a short time.
The following doesn't mention programming, but the idea is still relevant.
Einstein was known in 1917 as a famous mathematician. It wasn't until Hiroshima that the general public finally came around to the realization that physics is not just applied mathematics.
When people don't understand something, they try to understand it as a type of something that they do understand. They think by analogy. Programming has been described as a field of math, engineering, science, art, craft, construction... None of these are completely false; it borrows from all of these. The real issue is that the field of programming is only about 50 years old. People have not integrated it into their mental taxonomies.
There's a lot of confusion here.
First of all, "programming" does not (currently) equal "computer science." When Dijkstra called himself a "programmer" (more or less inventing the title), he was not pumping out CRUD applications, but actually doing applied computer science. Let's not let that confuse us-- today, there is a vast difference between what most programmers in a business setting do and computer science.
Now, the argument can be made that computer science is a branch of mathematics; but, as Knuth points out (in his paper "Computer Science and its Relation to Mathematics", collected in his Selected Papers on Computer Science) it can also be argued that mathematics is a branch of computer science.
In fact, I'd strongly recommend this paper to anyone thinking about the relationship between mathematics and computer science, as Knuth lays out the territory nicely.
But, to return to your original question: to a practitioner, "enterprise/OO development" is pretty far removed from mathematics-- but that's largely because most of the serious mathematics involved at the lower levels of operation have been abstracted away (by compilers, operating systems, instruction sets, etc.). Similarly, advanced knowledge of the physics of the internal combustion engine are not required for driving a car. Naturally, if you want to design a more efficient car....
if your definition of math includes all forms of formal logic, and programming is defined only by the logic and calculations extant in the code, then programming is a subset of math QED ;-)
but this is like saying that painting is merely putting colored pigments on a surface - it completely ignores the art, the insight, the intuition, the entire creative process
one could argue that music is a subset of math by the same reasoning
so i'd have to say no, programming is not a subset of math. Programming uses a subset of math, but requires non-math skills/talent as well [much like music composition]
Disclaimer: I work as an IT consultant and develop mainly portals and architecture stuff. I have a Psychology degree. I never studied maths at university. And I get my job done. And usually well. Why? Because I don't think you need to know maths (as in "heavy" maths stuff) to write code. You need analytical thinking, problem-solving skills, and a high level of abstraction. But maths does not give you that. It's just another discipline that requires similar skills. My studies in Psychology also apply to my daily work when dealing with usability issues and data storage. Linguistics and semiotics also play a part.
But wait, just don't flame me yet. I'm not saying maths is not needed at all for computers - obviously, you need real maths skills when designing encryption algorithms, hardware, and so on - but if, like lots of programmers, you just work in a mid/low-level language (like C) or higher-level stuff (like C# or Java), consuming mostly pre-built frameworks and APIs, you don't really need to understand the mathematical principles behind Fourier transforms or Huffman trees or Moebius strips... let someone else handle that, and let me build value on top of it. I am not stupid. I know the difference between linear and exponential algorithms, and data structures, and so on. I just don't have the interest to rewrite quicksort or a spiffy new video compression technique.
Well, aside from all that...!
Math is used for many aspects of programming such as
Creating efficient and smart algorithms
Understanding Big O notation
Security (such as RSA)
Many more...
I think that programming needs math to survive. But I wouldn't call it a subset. It's just like blowing glass uses properties of physics, but those artists don't call themselves physicists.
The foundation of everything we do is math.
Luckily, we don't need to be good at math itself to do it. Just like you don't need to understand physics to drive a car or even fly a plane.
The difference between programming and pure mathematics is the concept of state.
Have a look at http://en.wikipedia.org/wiki/Dynamic_logic_(modal_logic). It's a way of mathematically analyzing things changing through time. Also, Hoare triples is a way of formalizing the input-output behavior of programs. By having some axioms dealing with sequential composition of programs and how assignment works, you can perfectly well deal with state changing over time in a mathematically rigorous way.
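For concreteness, a Hoare triple {P} C {Q} asserts that if precondition P holds and command C terminates, then postcondition Q holds afterwards. The standard assignment axiom is a one-line example:

$$ \{\, Q[e/x] \,\}\; x := e \;\{\, Q \,\} \qquad \text{e.g.} \qquad \{\, x + 1 > 0 \,\}\; x := x + 1 \;\{\, x > 0 \,\} $$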
If the math you know is insufficient, "invent" some new math to deal with what you want to analyze. Newton and Leibniz did it for analysis (aka calculus, I think). No reason to not do it for computation and programming.
I don't believe I've heard that programming is a subset of math. Even the link you provide is simply a proposed approach to programming (not claiming it's a subset of mathematics) and the wiki page has plenty of disagreements in it as well.
Programming requires (at least some) applied mathematics. Mathematics can be used to help describe and analyze programs and program fragments. Programming has a very close relationship with math and uses it and concepts from it heavily. But subset? no.
I'd love to see someone actually claim that it is one, with some clear reasoning. I don't think I ever have.
Just because you can use mathematics to reason about something does not imply that it is, ipso facto, a mathematical object. Mathematics is used to reason about internal combustion engines, radioactive decay and juggling patterns. Using mathematics is not doing mathematics.
I would say...
It's partly math, especially at the theoretical level. Imagine designing efficient searching/sorting/clustering/allocating/fooifying algorithms, that's all math... running the gamut from number theory to statistics.
It's partly engineering. Complex systems can rarely achieve ideal levels of performance and reliability, and software is no exception. A lot of software development is about achieving robustness in the face of unreliable hardware and (ahem) humans.
And it's partly art. Creative and idiosyncratic software design often comes up with great new ideas... like assembly language, multitasking operating systems, graphical user interfaces, dynamic languages, and the web.
Just my 2¢...
Math + art + logic
You can actually argue that math, in the form of logical proofs, is analogous to programming: check out the Curry-Howard correspondence. It's probably more the way a mathematician would look at things, but I think this is hitting the proverbial nail on the head.
Programming may have originally started as a quasi-subset of math, but the increasing complexity of the field over time has led to programming being the art and science of creating good abstractions for information processing and computation.
Programming does involve math, engineering, and an aesthetic sense for good design and implementation. Algorithms are an extension of mathematics, and the systems engineering side overlaps with other engineering disciplines to some degree. However, neither mathematics nor other engineering fields have the same level of need for complex, flexible, and yet understandable abstractions that can be used and adapted at so many different levels to solve new and evolving problems.
It is the need for useful, flexible, and dynamic abstractions which led first to the creation of function libraries, then class/component libraries, and in more recent years design patterns and service-oriented architectures. Although the latter have more of a design focus, they are a reaction to the increasing need to build high-level abstractional bridges between programming problems and solutions.
For all of these reasons, programming is neither a subset nor a superset of math. It is simply yet another field which uses math that has deeper roots in it than others do.
The topics you listed are topics in Theoretical Computer Science, and THAT is a branch of Pure Mathematics. Programming is an applied science which uses theoretical computer science. Programming itself isn't a branch of mathematics but the Lambda Calculus/theory of computation/formal logic/set theory etc that programming languages are based on is.
Also I completely disagree with Dijkstra. It's either self-congratulatory or Dijkstra is being misquoted/quoted out of context. Pure mathematics is a very very very difficult field. It is so enormously abstract that no branch of applied mathematics is comparable in difficulty. It is one field that requires enormous leaps of imagination. I did my first degree in computer science where I focused a lot on theoretical CS and applied areas like programming, OS, compilers. I also did a degree in Electrical Engineering - arguably the most difficult branch of engineering - and worked on difficult areas of applied mathematics like Maxwell's equations, control theory and partial differential equations in general.
I've also done research in applied and pure mathematics, and to this day I find applied far easier. As for the pure mathematicians, they're a whole different breed.
Now there's a tendency for someone to study a year or two of calculus unhinged from application and conclude that pure mathematics is easy. They have no idea what they're talking about. Studying calculus or even topology unhinged from application does not give you any inkling of what a pure mathematician does. The task of actually proving those theorems is so profoundly difficult that I will defer to a computer scientist to point out the distinction:
"If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in 'creative leaps,' no fundamental gap between solving a problem and recognizing the solution once it’s found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss..." —Scott Aaronson, (Theoretical Computer Scientist, MIT)
I think mathematics provides a set of tools for programmers, which they use at an abstract level to solve real-world problems.
I would say that programming is less about math than it used to be as we move up to 4th Generation Languages. Assembly is very much about math, C#, not so much. Thoughts?
If you just want the design specs handed out to you by your boss, then it's not much math, but such work isn't fun at all... However, coming up with how to do things does require mathematical ideas, at least things like abstraction and graphs, sometimes number theory, and, depending on the problem, calculus. Personally, the more I've been involved with programming, the more I see the mathematical side of it. However, most of the time, IMO, you can just pick up a book from the library and look up the basics of the thing you need to do, but that requires some mathematical grasp upfront.
You really can't design "good" algorithms without understanding the maths behind it. Searching in google takes you only so far.
Programming is too wide a subject. Good software is based not only on math (logic) but also on psychology, linguistics, etc. Algorithms are part of math, but there are many other programming-related things besides algorithms.
As a mathematician, it is clear to me that Math is not equal to Programming but that the process which is used to solve problems in either discipline is extremely similar.
Solving a higher level mathematics questions requires analytical thinking, a toolbox of possible ways of solving problems, experience with the field, and some formalized ways of constructing your answer so that other mathematicians agree. If you find a particularly clever, abstract, or elegant way of solving a problem, you get Kudos from your fellow mathematicians. For particularly difficult math problems, you may solve the problem in stages, and codify your stage arguments using things called conjectures and proofs.
I think programming involves the same set of skills. In programming, the same set of principles applies to solving and presenting solutions to problems. When you have a partial solution to a programming dilemma, you include it as part of your personal library and use it as part of another, bigger problem later. These skills seem very similar to the skills used in mathematics.
The major difference between Math and Programming is the latter has a lot more in common between different disciplines of programming than Math does. Two fields of mathematics can be very, very different in presentation and what is used to communicate the field. By contrast, programming structures, to me at least, look very similar in many different languages.
The difference between programming and pure mathematics is the concept of state. A program is a state machine that uses logic (maths) to transition between states. The actual logic used to transition between states is usually very simple, which is why being a math genius doesn't necessarily help you all that much as a programmer.
Part of the reason I'm a programmer is because I don't like math. I have no problem with math itself, and I'm fine with it conceptually, I just don't like doing calculations by hand. When I found I could tell a computer what the math problem is and let it do the calculating for me, a life-long passion and career was born.
To answer the question, according to my alma mater, math == programming since they allowed me to take Intro to C++ to fulfill my math requirement.
Edit: I should mention my degree is in telecommunications which, at the time, had only the standard liberal arts math requirement of one semester.
Math is the purest form of truth. Everything inherits from math.
Amen.
It's interesting to compare programming with music, too. In the UK, anyway, there are computing-based undergraduate university courses that will accept applicants on the basis of music qualifications, as opposed to computing ones, due to the logic, patterns, etc. involved.
Maths is powerful; programming is powerful. If maths is a subset of programming, then it is equally true to state that programming is a subset of maths.
Maths is described using language, often written down. Is maths therefore a subset of writing too?
Historically, maths came before computer programming, but lists and processes probably preceded maths, and both could equally be thought of as mathematical or to do with programming.
Certainly programming can be represented using maths, so there is some basis for saying that programming is a subset of maths. However, a computer program could also implement maths, representing information symbolically (as maths typically does when done on paper), including the infinite and the only somewhat defined, starting from the fundamental axioms, as well as allowing higher-level structures to be defined that use each other and other sorts of relationships beyond composition, supporting the drawing of diagrams and allowing the system to be expanded. Maths is equally a subset of programming.
While maths can represent structures such as words, maths is by design about numbers. Strings, for example, are more programmatic than mathematical.
It's half math, half man speak, duh.
