What elements does a Turing-complete programming language need to consist of?

I've been looking for this but have come up with nothing.
I saw this explained once in one of the Python books out there, but I don't remember which one.
So, what elements (variables, loops, branching, etc.) must a programming language have in order to be Turing complete?
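For a concrete illustration (a minimal sketch, not an authoritative answer from the thread): one classic sufficient set is variables with assignment, sequencing, branching, and an unbounded loop. The toy OCaml interpreter below implements exactly that "while language" core, which is already Turing complete; all type and function names here are invented for the example.

(* A minimal "while language": variables, assignment, sequencing,
   branching, and an unbounded loop. This core is Turing complete. *)
type expr =
  | Const of int
  | Var of string
  | Add of expr * expr
  | Sub of expr * expr

type stmt =
  | Assign of string * expr
  | Seq of stmt * stmt
  | If of expr * stmt * stmt        (* branch if expr <> 0 *)
  | While of expr * stmt            (* loop while expr <> 0 *)

module Env = Map.Make (String)

let rec eval env = function
  | Const n -> n
  | Var x -> (try Env.find x env with Not_found -> 0)  (* unset vars default to 0 *)
  | Add (a, b) -> eval env a + eval env b
  | Sub (a, b) -> eval env a - eval env b

let rec exec env = function
  | Assign (x, e) -> Env.add x (eval env e) env
  | Seq (s1, s2) -> exec (exec env s1) s2
  | If (c, t, f) -> if eval env c <> 0 then exec env t else exec env f
  | While (c, _) when eval env c = 0 -> env
  | While (_, body) as loop -> exec (exec env body) loop

(* Example: compute 10 * 3 by repeated addition. *)
let () =
  let prog =
    Seq (Assign ("i", Const 10),
    Seq (Assign ("acc", Const 0),
         While (Var "i",
           Seq (Assign ("acc", Add (Var "acc", Const 3)),
                Assign ("i", Sub (Var "i", Const 1))))))
  in
  Printf.printf "acc = %d\n" (Env.find "acc" (exec Env.empty prog))
  (* prints: acc = 30 *)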


Which is the easiest functional programming language for someone who has background in imperative languages? [closed]

I would like to learn a functional language in order to broaden my horizons. I have knowledge of Python and C/C++, and I want a language that is easy to learn for someone who comes from the imperative domain of languages. I don't care whether the language is powerful. I just want a language in order to learn the basics of functional programming, and then I will try a more difficult (and powerful) language.
Thanks
I recommend pure-lang for these pedagogical ends. It's also plenty powerful. If you want something more popular / with more community support, then I'd recommend Scheme or OCaml, depending on whether you'd rather deal with unfamiliar syntax (go with Scheme) or deal with unfamiliar typing (go with OCaml) first. SML and F# are only slightly different from OCaml. Others have or will mention Clojure, Scala, and Haskell.
Clojure is a variant of Scheme, with its own idiosyncrasies (e.g. no tail-call optimization), so using it would be a way of starting with Scheme. I'd expect you'd have an easier time with a less idiosyncratic Scheme implementation, though. Racket is what's often used for teaching. Scala looks to be fundamentally similar to OCaml, but this is based on only casual familiarity.
Unlike Haskell, the other languages mentioned all have two advantages: (1) evaluation order is eager by default, though you can get lazy evaluation by specifically requesting it; in Haskell it's the reverse. (2) Mutation is available, though much of the libraries and code you'll see don't use it. I actually think it's pedagogically better to learn functional programming while at the same time keeping an eye on how it interacts with side effects, and working your way to monadic-style composition somewhat down the road. So I think this is an advantage. Some will tell you that it's better to be thrown into Haskell's more-quarantined handling of mutation first, though.
Robert Harper at CMU has some nice blog posts on teaching functional programming. As I understand, he also prefers languages like OCaml for teaching.
Among the three classes of languages I recommended (Pure, Scheme and friends, OCaml and friends), the first two have dynamic typing. The first and third have explicit reference cells (as though, in Python, you restricted yourself to never reassigning a variable but could still change what's stored at a list index). Scheme has implicit reference cells: variables themselves look mutable, as in C and Python, and the reference-cell handling is done under the covers. In languages like that, you often have some form of explicit reference cell available too (as in the example I just gave in Python, or using mutable pairs/lists in Racket... in other Schemes, including the Scheme standard, those are the default pairs/lists).
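To make the explicit-reference-cell style concrete (an illustrative OCaml sketch, not from the original answer): the binding itself is never reassigned, but the cell's contents are updated, much like mutating a list element in Python.

(* Explicit reference cells in OCaml: the binding `counter` is immutable,
   but the cell it names holds mutable contents. *)
let counter = ref 0                            (* allocate a cell holding 0 *)
let increment () = counter := !counter + 1     (* update the contents *)

let () =
  increment ();
  increment ();
  Printf.printf "counter = %d\n" !counter      (* prints: counter = 2 *)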
One virtue Haskell does have is that some textbooks are appearing for it. (I mean this sincerely, not snarkily.) What books/resources to use is another controversial issue with many wars/closed questions. SICP, as others have recommended, has many fans and also some critics. There seem to me to be many good choices. I won't venture further into those debates.
First, read Structure and Interpretation of Computer Programs. I recommend Lisp (for example, its dialect Scheme) as a first functional programming language.
Another option is Clojure, which I'm given to understand is more "purely" functional than Scheme/Racket (don't ask me about the details here) and possibly similar enough to let you use it in conjunction with SICP (Structure and Interpretation of Computer Programs, a highly recommended book also suggested by another answer).
Great question!
I had done BASIC, Pascal, assembler, C and C++ before I started doing functional programming in the late 1990s. Then I started using two functional languages at about the same time, Mathematica and OCaml, and was using them exclusively within a few years. In particular, OCaml let me write imperative code which looked like the code I had been writing before. I found that valuable as a learner because it let me compare the different approaches which made the advantages of ML obvious.
However, as others have mentioned, the core benefit of Mathematica and OCaml is pattern matching and that is not technically related to functional programming. I have subsequently looked at many other functional languages but I have no desire to go back to a language that lacks pattern matching.
This question is probably off-topic because it is going to result in endless language wars, but here's a general bit of advice:
There are a class of functional programming languages which are sometimes called "mostly functional", in that they permit some imperative features where you want them. Examples include Standard ML, OCaml, F#, and Scala. You might consider one of these if you want to be able to get a grip on the functional idiomatic style while still being able to achieve things in reasonably familiar ways.
I've used Standard ML extensively in the past, but if you're looking for something that has a bit less of a learning curve, I'd personally recommend Scala, which is my second-favourite programming language. The reasons for this include the prevalence of libraries, a healthy-sized community, and the availability of nice books and tutorials to help you get started (particularly if you have ever had any dealings with Java).
One element that was not discussed is the availability of special pattern-matching syntax for algebraic datatypes, as in Haskell, all flavors of ML, and probably several of the other languages mentioned. Pattern-matching syntax tends to help the programmer see their functions as mathematical functions. Haskell's syntax is sufficiently complex, and its implementations have sufficiently poor parse-error messages, that syntax is a decent reason not to choose Haskell. Scheme is probably easier to learn than most other options (and Scheme probably has the king of all macro systems), but the lack of pattern-matching syntax would steer me away from it for an intro to functional programming.
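As a small illustration of that point (a hedged sketch, not from the original answer): here is an algebraic datatype and a pattern-matching evaluator in OCaml, where each clause reads like a defining equation of a mathematical function.

(* An arithmetic-expression datatype and an evaluator by pattern matching;
   each case reads like an equation defining a mathematical function. *)
type expr =
  | Num of int
  | Neg of expr
  | Add of expr * expr

let rec eval = function
  | Num n -> n
  | Neg e -> -(eval e)
  | Add (a, b) -> eval a + eval b

let () = Printf.printf "%d\n" (eval (Add (Num 1, Neg (Num 4))))  (* prints: -3 *)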

Should we teach pointers in a "fundamentals of programming" course?

I will be teaching a course on the fundamentals of programming next Fall, first year computer science course. What are the pros and cons of teaching pointers in such a course? (My position: they should be taught).
Edit: My problem with the "cater your audience" argument is that in the first couple of years in University, we (profs) do not know if students would like to be scientists or not... we wish we knew, but we have to strike a balance between those who will remain in school (4 years does not a scientist make), and those who will be engineers.
Final decision: At least references, but possibly pointers without pointer arithmetic.
At the very least you should teach references or some equivalent concept. I think you should probably take it easy on things like pointer arithmetic, C arrays, and strings, but indirection is a very important concept in computer science, and students should be introduced to it.
Yes.
Pointers underpin a huge number of concepts in other, higher level languages, and I'm firmly of the opinion that you need to teach a certain amount of the lower-level stuff to facilitate a good understanding of why we bother with anything higher level at all.
Once you understand a bit about how memory is allocated, and how it's addressed and manipulated with pointers, explaining a lot of other constructs gets easier. For example, explaining a NullPointerException in Java, or even the concept of references in such languages is child's play if you've got someone who understands pointers in C (and better still, if they also grok references in C++).
Absolutely teach them. Understanding indirection is essential for programming, whether it's with pointers, references, dynamic binding, or any number of other things. Now obviously don't start off with them, but understanding indirection is at least as important as understanding control flow ideas.
The con, of course, is that some people just won't get it and will do poorly or drop out. If this is a course for people who want to be CS majors, then don't sweat it, because you're just giving them incentive to switch majors earlier rather than later. If it's more of a general-ed course for people who are kind of interested in programming, then they should probably still be introduced to pointers, but not graded harshly or heavily on them.
During my first year as a CS student, I took a Java course in the fall, which was the general intro. The professor didn't teach pointers directly, but he did teach the concept of references, and why you can modify objects but not primitives when either is passed as an argument.
During my 2nd semester, I took the next course in the series, which was about C, and this class heavily relied on pointers.
For an intro to programming class, I'd say just mention references, but not pointers directly.
I think that a "fundamentals of programming" course should at least touch on basic processor architecture and assembly language, and if it does, you can't really make a case for not discussing pointers.
If you only teach higher-level (byte-code) languages, then I guess pointers would confuse the audience.
Pros: solid understanding of the way that memory is used by the machine, the difference between (and pitfalls of) pointers to data on the heap vs. pointers to data on the stack, passing methods by address, etc.
Cons: complex for an audience that is not yet knowledgeable about computer architecture (or has not had enough time to assimilate the concepts), including what the stack is, what registers are, calling conventions, etc.
So, to summarize, it depends a lot on your audience and on the language(s) you'll tackle (pointers will be meaningless in the context of LISP or Java), as well as on how deep you are willing to go in the direction of what is heap, what is stack, how scope is translated into stack (i.e. why never to return a pointer to a local variable), etc.
When I taught pointers to an engineering class I ultimately fired up a debugger on a simple "hello world" program, and showed the students the actual machine code, register values and corresponding memory dumps, with the stack manipulation and parameter passing, etc., but they were ready for it. Would your audience be receptive to such a drill-down expedition, to ensure solid understanding of what's going on behind the scenes, and would you be willing to go to such lengths? :)
I think you shouldn't teach it first, but later, once the basic concepts of programming have been acquired.
A good example is the latest Stroustrup book, Programming -- Principles and Practice Using C++, where he teaches how to build a parser and use I/O (streams) and GUIs before even talking about pointers!
I think it will be a good reference for teaching because it is more natural to understand the way we build up ideas than how many constraints (memory management, for example) we have to handle at the same time to make a piece of software work. I really recommend this book for a fresh perspective on teaching the fundamentals of programming.
It really depends on the goal of your course - teaching programming and teaching computer science are two separate goals, and though they are not mutually exclusive, introductory classes generally do not teach both equally well. Here's an example of the difference: say we want to learn how to sort a list. A programming course in C++ would teach you to use the syntax of a std::sort function template, and homework might be writing several comparison functors. A computer science course would explain to you what a merge sort is, what the algorithm looks like in pseudo-code, and its performance/space characteristics, and homework would be writing the sort function itself.
So if you are teaching introductory programming, then yes, you should teach your students about pointers.
If you are teaching computer science, then no, there is no need to understand pointers at an introductory level.
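To make the contrast concrete (an illustrative sketch, in OCaml rather than the C++ of the example above): the computer-science homework amounts to writing something like the merge sort below yourself, rather than calling a library sort.

(* Merge sort written from scratch, the "write the sort yourself" style
   of a computer-science course: O(n log n) time. *)
let rec merge cmp xs ys =
  match xs, ys with
  | [], rest | rest, [] -> rest
  | x :: xs', y :: ys' ->
      if cmp x y <= 0 then x :: merge cmp xs' ys
      else y :: merge cmp xs ys'

let rec sort cmp = function
  | ([] | [ _ ]) as small -> small        (* already sorted *)
  | l ->
      (* split the list into two halves by alternating elements *)
      let rec split = function
        | [] -> [], []
        | [ x ] -> [ x ], []
        | x :: y :: rest ->
            let a, b = split rest in
            (x :: a, y :: b)
      in
      let a, b = split l in
      merge cmp (sort cmp a) (sort cmp b)

let () =
  sort compare [ 5; 1; 4; 2; 3 ]
  |> List.iter (Printf.printf "%d ")      (* prints: 1 2 3 4 5 *)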
Anybody who calls themselves a good programmer must know how pointers work; being a good programmer implies that they do not know only a single programming language, but that they know how programming languages work in general, allowing them to adapt to programming languages they haven't seen before.
This doesn't mean that a fundamentals of programming course should be teaching pointers, however.
If your goal is to give these people a complete, well-rounded familiarity with programming languages in general, then yes, pointers should be part of that.
If your way of introducing them to programming is to use one programming language at first, with the intention of covering other languages in subsequent courses, and pointers are not relevant to that language, then there's no need to talk about pointers yet.
I think there's a lot to be said for starting people out in one language only, rather than trying to cover every style of language at once.
My first introductory programming course used Haskell. It wasn't until a subsequent course using C that pointers were introduced (I was already a good C and C++ programmer when I took the course; those subjects were mandatory).

Concepts that surprised you when you read SICP?

SICP - "Structure and Interpretation of Computer Programs"
An explanation of each concept would be nice.
Can someone explain metalinguistic abstraction?
SICP really drove home the point that it is possible to look at code and data as the same thing.
I understood this before when thinking about universal Turing machines (the input to a UTM is just a representation of a program) or the von Neumann architecture (where a single storage structure holds both code and data), but SICP made the idea much more clear. Scheme (Lisp) helped here, as the syntax for a program is exactly the same as the syntax for lists in general, namely S-expressions.
Once you have the "equivalence" of code and data, suddenly a lot of things become easy. For example, you can write programs that have different evaluation methods (lazy, nondeterministic, etc). Previously, I might have thought that this would require an extension to the programming language; in reality, I can just add it on to the language myself, thus allowing the core language to be minimal. As another example, you can similarly implement an object-oriented framework; again, this is something I might have naively thought would require modifying the language.
Incidentally, one thing I wish SICP had mentioned more: types. Type checking at compilation time is an amazing thing. The SICP implementation of object-oriented programming did not have this benefit.
I haven't read the book yet; I have only watched the video lectures, but they taught me a lot. Functions as first-class citizens were mind-blowing for me. Executing a "variable" was something very new to me. After watching those videos, the way I see JavaScript and programming in general has greatly changed.
Oh, I think I've lied: the thing that really struck me was that + was a function.
I think the most surprising thing about SICP is to see how few primitives are actually required to make a Turing complete language--almost anything can be built from almost nothing.
Since we are discussing SICP, I'll put in my standard plug for the video lectures at http://groups.csail.mit.edu/mac/classes/6.001/abelson-sussman-lectures/, which are the best Introduction to Computer Science you could hope to get in 20 hours.
The one that I thought was really cool was streams with delayed evaluation. The one about generating primes was something I thought was really neat. Like a "PEZ" dispenser that magically dispenses the next prime in the sequence.
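For a flavor of what those delayed streams look like (a sketch using OCaml's built-in Lazy module, not the book's Scheme code): a stream is a head plus a suspended tail, and the primes come out one at a time via a lazy sieve.

(* Lazy streams: the tail is suspended until forced, so the infinite
   stream of primes can be defined by sieving. *)
type 'a stream = Cons of 'a * 'a stream Lazy.t

let rec from n = Cons (n, lazy (from (n + 1)))

let rec filter p (Cons (x, rest)) =
  if p x then Cons (x, lazy (filter p (Lazy.force rest)))
  else filter p (Lazy.force rest)

let rec sieve (Cons (p, rest)) =
  Cons (p, lazy (sieve (filter (fun n -> n mod p <> 0) (Lazy.force rest))))

let primes = sieve (from 2)

let rec take n (Cons (x, rest)) =
  if n = 0 then [] else x :: take (n - 1) (Lazy.force rest)

let () =
  take 10 primes |> List.iter (Printf.printf "%d ")
  (* prints: 2 3 5 7 11 13 17 19 23 29 *)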
One example of "the data and the code are the same thing" from A. Rex's answer got me in a very deep way.
When I was taught Lisp back in Russia, our teachers told us that the language was about lists: car, cdr, cons. What really amazed me was the fact that you don't need those functions at all - you can write your own, given closures. So, Lisp is not about lists after all! That was a big surprise.
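Here is that trick transcribed into OCaml (a sketch of the SICP exercise, not the answerer's original code): pairs built from nothing but closures.

(* cons/car/cdr built purely from closures: a "pair" is a function
   that hands its two parts to whatever selector you pass in. *)
let cons a b = fun sel -> sel a b
let car p = p (fun a _ -> a)
let cdr p = p (fun _ b -> b)

let () =
  let p = cons 1 2 in
  (* OCaml makes p monomorphic here, which is fine: both parts are ints. *)
  Printf.printf "%d %d\n" (car p) (cdr p)   (* prints: 1 2 *)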
A concept I was completely unfamiliar with was the idea of coroutines, i.e. having two functions doing complementary work and having the program flow control alternate between them.
I was still in high school when I read SICP, and I had focused on the first and second chapters. For me at the time, I liked that you could express all those mathematical ideas in code, and have the computer do most of the dirty work.
When I was tutoring SICP, I was impressed by different aspects. For one, the conundrum that data and code are really the same thing, because code is executable data. The chapter on metalinguistic abstraction is mind-boggling to many and has many take-home messages. The first is that all the rules are arbitrary. This bothers some students, especially those who are physicists at heart. I think the beauty is not in the rules themselves, but in studying the consequences of the rules. A one-line change in code can mean the difference between lexical scoping and dynamic scoping.
Today, though SICP is still fun and insightful to many, I do understand that it's becoming dated. For one, it doesn't teach debugging skills and tools (I include type systems in there), which are essential for working in today's gigantic systems.
I was most surprised by how easy it is to implement languages: that one could write an interpreter for Scheme on a blackboard.
I came to feel recursion in a different sense after reading some of the chapters of SICP.
I am right now in the section "Sequences as Conventional Interfaces" and have found the concept of procedures as first-class citizens quite fascinating. Also, recursion is applied in ways I have never seen in any other language.
Closures.
Coming from a primarily imperative background (Java, C#, etc. -- I only read SICP a year or so ago for the first time, and am re-reading it now), thinking in functional terms was a big revelation for me; it totally changed the way I think about my work today.
I read most of the book (without doing the exercises). What I learned is how to abstract the real world at a specific level, and how to implement a language.
Each chapter has ideas that surprised me:
The first two chapters show me two ways of abstracting the real world: abstraction with the procedure, and abstraction with data.
Chapter 3 introduces time in the real world. That results in states. We try assignment, which raises problems. Then we try streams.
Chapter 4 is about metalinguistic abstraction, in other words, we implement a new language by constructing an evaluator, which determines the meaning of expressions.
Since the evaluator in Chapter 4 is itself a Lisp program, it inherits the control structure of the underlying Lisp system. So in Chapter 5, we dive into the step-by-step operation of a real computer with the help of an abstract model, the register machine.
Thanks.

What are the best uses of Logic Programming?

By logic programming I mean the sub-paradigm of declarative programming languages. Don't confuse this question with "What problems can you solve with if-then-else?"
A language like Prolog is very fascinating, and it's worth learning for the sake of learning, but I have to wonder what class of real-world problems is best expressed and solved by such a language. Are there better languages? Does logic programming exist by another name in more trendy programming languages? Is the cynical version of the answer a variant of the Python Paradox?
Prototyping.
Prolog is dynamic and has been for 50 years. The compiler is liberal, the syntax minimalist, and "doing stuff" is easy, fun and efficient. SWI-Prolog has a built-in tracer (debugger!) and even a graphical tracer. You can change the code on the fly using make/0, you can dynamically load modules, add a few lines of code without leaving the interpreter, or edit the file you're currently running on the fly with edit/1. Do you think you've found a problem with the foobar/2 predicate?
?- edit(foobar).
And as soon as you leave the editor, that thing is going to be re-compiled. Sure, Eclipse does the same thing for Java, but Java isn't exactly a prototyping language.
Apart from the pure prototyping stuff, Prolog is incredibly well suited for translating a piece of logic into code. So, automatic provers and that type of stuff can easily be written in Prolog.
The first Erlang interpreter was written in Prolog - and for a reason, since Prolog is very well suited for parsing, and encoding the logic you find in parse trees. In fact, Prolog comes with a built-in parser! No, not a library, it's in the syntax, namely DCGs.
Prolog is used a lot in NLP, particularly in syntax and computational semantics.
But, Prolog is underused and underappreciated. Unfortunately, it seems to bear an academic or "unusable for any real purpose" stigma. But it can be put to very good use in many real-world applications involving facts and the computation of relations between facts. It is not very well suited for number crunching, but CS is not only about number crunching.
Since Prolog = syntactic unification + backward chaining + REPL,
most places where syntactic unification is used are also good uses for Prolog.
Uses of syntactic unification include (a sketch of unification itself follows this list):
AST transformations
Type Inference
Term rewriting
Theorem proving
Natural language processing
Pattern matching
Combinatorial test case generation
Extract sub structures from structured data such as an XML document
Symbolic computation i.e. calculus
Deductive databases
Expert systems
Artificial Intelligence
Parsing
Query languages
Constraint Logic Programming (CLP)
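As promised above, here is a minimal sketch of first-order syntactic unification, the engine behind the uses just listed. It's written in OCaml for illustration, and every name in it is invented for this example.

(* First-order syntactic unification over terms like f(t1, ..., tn);
   constants are App (c, []). Returns a substitution on success. *)
type term =
  | Var of string
  | App of string * term list

module Subst = Map.Make (String)

(* Follow variable bindings until we hit an unbound var or a function term. *)
let rec walk subst = function
  | Var v as t ->
      (match Subst.find_opt v subst with
       | Some t' -> walk subst t'
       | None -> t)
  | t -> t

let rec occurs subst v t =
  match walk subst t with
  | Var v' -> v = v'
  | App (_, args) -> List.exists (occurs subst v) args

let rec unify subst t1 t2 =
  match walk subst t1, walk subst t2 with
  | Var v, Var w when v = w -> Some subst
  | Var v, t | t, Var v ->
      if occurs subst v t then None            (* occurs check *)
      else Some (Subst.add v t subst)
  | App (f, a1), App (g, a2)
    when f = g && List.length a1 = List.length a2 ->
      List.fold_left2
        (fun acc x y ->
           match acc with
           | None -> None
           | Some s -> unify s x y)
        (Some subst) a1 a2
  | _ -> None

(* Example: unify f(X, b) with f(a, Y), yielding X = a, Y = b. *)
let () =
  match unify Subst.empty
          (App ("f", [ Var "X"; App ("b", []) ]))
          (App ("f", [ App ("a", []); Var "Y" ])) with
  | Some s ->
      Subst.iter
        (fun v t ->
           match t with
           | App (c, []) -> Printf.printf "%s = %s\n" v c
           | _ -> ())                          (* print only constant bindings *)
        s
  | None -> print_endline "no unifier"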
Many very good and well-suited use cases of logic programming have already been mentioned. I would like to complement the existing list with several tasks from an extremely important application area of logic programming:
Logic programming blends seamlessly, more seamlessly than other paradigms, with constraints, resulting in a framework called Constraint Logic Programming.
This leads to dedicated constraint solvers for different domains, such as:
CLP(FD) for integers
CLP(B) for Booleans
CLP(Q) for rational numbers
CLP(R) for floating point numbers.
These dedicated constraint solvers lead to several important use cases of logic programming that have not yet been mentioned, some of which I show below.
When choosing a Prolog system, the power and performance of its constraint solvers are often among the deciding factors, especially for commercial users.
CLP(FD) — Reasoning over integers
In practice, CLP(FD) is one of the most important applications of logic programming, and it is used to solve tasks from the following areas, among others:
scheduling
resource allocation
planning
combinatorial optimization
See clpfd for more information and several examples.
CLP(B) — Boolean constraints
CLP(B) is often used in connection with:
SAT solving
circuit verification
combinatorial counting
See clpb.
CLP(Q) — Rational numbers
CLP(Q) is used to solve important classes of problems arising in Operations Research:
linear programming
integer linear programming
mixed integer linear programming
See clpq.
One of the things Prolog gives you for free is a backtracking search algorithm -- you could implement it yourself, but if your problem is best solved by having that algorithm available, then it's nice to use it.
The two things I've seen it be good at are mathematical proofs and natural language understanding.
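To make the backtracking point concrete (an illustrative OCaml sketch; the thread itself shows no code): this is the kind of search you would otherwise write by hand, and which Prolog's resolution strategy gives you implicitly.

(* Hand-rolled backtracking search: n-queens, returning the first
   solution as a list of column positions, one per row. *)
let queens n =
  (* a column is safe if no earlier queen shares it or a diagonal *)
  let rec safe col dist = function
    | [] -> true
    | c :: rest -> c <> col && abs (c - col) <> dist && safe col (dist + 1) rest
  in
  let rec place row placed =
    if row = n then Some placed
    else
      let rec try_col col =
        if col = n then None                    (* exhausted: backtrack *)
        else if safe col 1 placed then
          match place (row + 1) (col :: placed) with
          | Some _ as solution -> solution
          | None -> try_col (col + 1)           (* undo and try the next column *)
        else try_col (col + 1)
      in
      try_col 0
  in
  place 0 []

let () =
  match queens 8 with
  | Some cols -> List.iter (Printf.printf "%d ") (List.rev cols)
  | None -> print_endline "no solution"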
Prolog is ideal for non-numeric problems. This article gives a few examples of some applications of Prolog and it might help you understand the type of problems that it might solve.
Prolog is great at solving puzzles and the like. That said, in the domain of puzzle-solving it makes easy/medium puzzle-solving easier and complicated puzzle solving harder. Still, writing solvers for grid puzzles and the like such as Hexiom, Sudoku, or Nurikabe is not especially tough.
One simple answer is "build systems". The language used to build Makefiles (at least, the part to describe dependencies) is essentially a logic programming language, although not really a "pure" logic programming language.
Yes, Prolog has been around since 1972. It was invented by Alain Colmerauer with Philippe Roussel, based on Robert Kowalski's procedural interpretation of Horn clauses. Alain was a French computer scientist and professor at Aix-Marseille University from 1970 to 1995.
And Alain invented it to analyse natural language. Several successful prototypes were created by him and his "followers":
His own system, Orbis, understood questions in English and French about the solar system. See his personal site.
Warren and Pereira's system Chat-80 answered questions about world geography.
Today, IBM Watson is a contemporary QA system based on logic, with a huge dose of statistics about real-world phrases.
So you can imagine that's where its strength is.
Retired in 2006, he remained active until he died in 2017. He was named Chevalier de la Legion d’Honneur by the French government in 1986.

Is functional programming the next step towards natural-language programming? [closed]

This is my very first question so I am a bit nervous about it because I am not sure whether I get the meaning across well enough. Anyhow, here we go....
Whenever new milestones in programming have been reached it seems they always have had one goal in common: to make it easier for programmers, well, to program.
Machine language, opcodes/mnemonics, procedures/functions, structs, classes (OOP) etc. always helped, in their time, to plan, structure and code programs in a more natural, understandable and better maintainable way.
Of course functional programming is by no means a novelty, but it seems to have experienced a sort of renaissance in recent years. I also believe that FP will get an enormous boost when Microsoft adds F# to its mainstream programming languages.
Returning to my original question, I believe that ultimately programming will be done in a natural language (English) with very few restrictions or rules. The compiler will be part of an AI/NLP system that extracts information from the code (or should I say text) and transforms it into an intermediate language which the compiler can compile.
So, does FP take programming closer to natural-language programming or is it rather an obstacle and mainstream OOP will lead us faster to natural-language programming?
This question should not be used to discuss the usability or feasibility of natural-language programming, because only the future will tell.
Sorry, I don't agree at all. Code is ultimately a blueprint for making things (objects), so it has to be very precise and rule-governed in order to function reliably. Natural language won't take over programming any sooner than sketching ideas on napkins will take over mechanical engineering.
I personally have come to the conclusion that natural-language programming is somewhat crack.
English is not exactly suited to being used fully as a programming language: it has too many abstract words with no correlate in programming, such as emotive terms and other abstract notions that have no place there. To say programming could ever be "natural language" would imply that "natural language" could be programming, and it isn't.
Now, while I get what you're saying here, the problem is that English has too many scrap terms and repeated names for the same things, so we'd be using something that isn't even specific to the domain of programming for the task of programming.
I think it's better for people to understand that programming is in fact a highly specialized language, and to use their brains and learn to code in a language which is simple, declarative, and has a consistent definition, unlike English, where definitions are highly subjective.
Once you learn the ins and outs of a language, and learn its schematics and behaviors, you can combine them to do new things.
Take Perl: everyone lambasts it for being line noise, but when you know many programming languages, once you get past the initial hurdle of "OMG LINE NOISE", there is a degree of intuitiveness about it, where you can make up things you never read about and then see that they magically work just as you expected.
And IMHO, domain specific languages trump spoken ones for targeted problem solving.
"So, does FP take programming closer to natural-language programming or is it rather an obstacle and mainstream OOP will lead us faster to natural-language programming?"
Neither. Both operate on the same principle that you have to be specific about what you want the computer to do. There must be no room for uncertainty, and neither paradigm has anything to do with natural languages. They tackle an entirely different problem: That of managing and structuring complex code and large codebases.
The big obstacle in natural languages is the parsing. It is impossible to unambiguously parse natural language. Even humans can't do it without a lot of context information (facial expressions, tone of voice), and even then, we still get it wrong quite often.
OOP and FP are only about what happens after parsing. Which meaning is assigned to each semantic element, once it's been identified and parsed.
Perhaps we'll one day be able to program in natural language. I doubt it'll happen within the next couple of decades, but it may happen one day. Today's programming paradigms will neither speed up this process nor delay it, though. They simply have nothing to do with it, and won't help solve the parsing problem.
I don't think that functional programming is any closer to natural language programming than OO programming. Functional programming has a very verb-oriented syntax. When you program in Lisp or Scheme, you spend a lot of time thinking about functions and what actions you want to take on your data. In OO programming, you spend most of your time thinking about objects, hence it seems very noun-oriented. However, in Smalltalk, C++, and Java, you also have methods, which allow you to apply verbs to all of your nouns (so to speak).
I don't think that OO programming will necessarily lead us to natural language programming, but from my point of view it's a little bit closer than functional programming. Functional programming, to me, seems a little bit closer to math than to natural language. That's not such a bad thing, since maybe math is the language we should be thinking in when we program anyway.
Just FYI, Inform 7 is probably the closest anyone has gotten to natural-language programming. It is a language for a very specific domain: writing interactive fiction, the kind of software that began with "adventure games".
The current spurt of interest in functional programming, resulting primarily from C# 3.0's cool new features, is basically about enabling parallelism, and it denotes a shift towards multi-core computing. IMHO, I don't think we can consider this a next step towards 'natural-language programming'.
If you are looking for the next evolution in programming languages, I would look to DSLs. DSLs allow for highly customized languages that enable sophisticated business users to configure a system without having to worry about coding details such as datatypes, threads, and UI widgets.
Functional languages will have their place in "highly parallel processing" space.
Do you think subjective questions will get this here order for "Windows Internals the 5th Element" added to the database and shipped to my address? If so, natural language programming will be very close to functional programming, since I asked my question in a somewhat functional manner. If not, then natural language programming won't get my order shipped, will it? Functional programming can work because it still has nothing to do with natural languages.
No. Functional programming will take us closer to proving compilers; that is, compilers that prove more assertions about your code. The more compilers can prove for us, the closer software development comes to being engineering rather than art.
An NLP programming language would probably be more of a "do what I mean, not what I say" style of language. That is probably the opposite of the direction functional languages are going.
"All programming languages are converging towards LISP."

Resources