Prove that regular languages and context-free languages are recursive - recursion

I am a bit confused with the difference between regular and context free languages.
A recursive language is a language for which there exists a TM that always halts.
I am having trouble proving the above statement.

Regular languages are accepted by finite automata, which have only a fixed, finite set of states and no auxiliary memory. Context-free languages are accepted by pushdown automata, which have a single stack (supporting push and pop) as memory. A recursive language is one that can be decided by a Turing machine, which has a (potentially) infinite tape as memory. Regular languages are recursive because you can build a TM equivalent to a given FA that never writes to the tape: it reads the input once and halts. Context-free languages are recursive because you can build a TM equivalent to a given PDA by using the tape as a stack, reading and writing only at the last non-blank symbol. This sketch lacks details but can be turned into a rigorous proof of both claims.
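To make the first construction concrete, here is a minimal Python sketch (my own illustration; the even-number-of-1s automaton is a made-up example). A DFA simulation reads each input symbol exactly once and then stops, so it halts on every input, which is exactly what "recursive" demands:

# A DFA simulated directly: one pass over the input, then halt.
# This mirrors the TM-from-FA construction: never write, only read.
# Example DFA: accepts binary strings with an even number of 1s.
TRANSITIONS = {("even", "0"): "even", ("even", "1"): "odd",
               ("odd", "0"): "odd", ("odd", "1"): "even"}
START, ACCEPTING = "even", {"even"}

def decides(word: str) -> bool:
    """Always halts: exactly one transition per input symbol."""
    state = START
    for symbol in word:
        state = TRANSITIONS[state, symbol]  # total transition function
    return state in ACCEPTING

assert decides("1010") and not decides("111")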

Related

Are there any modern languages not supporting recursion?

There is a lot of talk about loop-less (functional) programming languages, but I haven't heard of any modern recursion-less programming languages. I know COBOL doesn't (or didn't) support recursion and I imagine it's the same for FORTRAN (or at the very least they behave oddly with it), but are there any modern ones?
There are several reasons I am wondering about this...
From what I understand, every recursive algorithm has an iterative implementation that is at least as performant (you can simply "simulate" recursion iteratively by managing the stack yourself within the loop). So there is a) no performance gain and b) no capability gain from using recursion.
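To make point a) concrete, here is what I mean by simulating the recursion with a managed stack (a hypothetical Python sketch; the function is just an example):

def fib_recursive(n: int) -> int:
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n: int) -> int:
    """Same call tree, but the 'call stack' is an explicit list."""
    total, stack = 0, [n]
    while stack:
        k = stack.pop()
        if k < 2:
            total += k           # base case contributes directly
        else:
            stack.append(k - 1)  # pending 'calls' instead of real ones
            stack.append(k - 2)
    return total

assert fib_recursive(10) == fib_iterative(10) == 55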
Recursion can have worse performance than iteration, e.g. for naive recursive implementations of Fibonacci numbers. People like to point out that recursion can be just as fast when the compiler optimizes tail calls, but from my understanding tail-recursive algorithms are always trivial to convert to loops. So there is c) a potential performance loss from recursion.
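And for point c): since a tail-recursive function carries all its state in its arguments, the conversion to a loop really is mechanical (again a hypothetical sketch):

def fact_tail(n: int, acc: int = 1) -> int:
    # Tail-recursive: the recursive call is the entire return value.
    return acc if n <= 1 else fact_tail(n - 1, acc * n)

def fact_loop(n: int) -> int:
    # Mechanical conversion: parameters become loop variables.
    acc = 1
    while n > 1:
        n, acc = n - 1, acc * n
    return acc

assert fact_tail(10) == fact_loop(10) == 3628800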
Academics seem to have a thing for recursion, but then again most of them can't/don't really write any production code. C++ (or object orientation in general) is living proof of how academic ideas can make their way into the business world while being mostly impractical. I wonder if recursion is just another example of this. (I know this is an opinionated topic, but the majority of accomplished software engineers I have heard talk agree on this at least to some extent.)
I have written recursive algorithms in production systems before, but I can't think of a time when I really benefited from using recursion. Recursive algorithms d) may or may not provide code-readability advantages in some cases. I am not sure about this...
Most (all?) modern operating systems rely on specified calling conventions for the call stack. So if you want to call any external library from within your program, your programming language/compiler has to comply with the system's calling convention. This makes it an easy step to adopt recursion in programming languages/compilers. However, if you created a recursion-less language you wouldn't need a call stack at all, which would have interesting effects in terms of memory management...
So I am not hating on recursion... I am just wondering whether the recursion-less-language experiment has been done in the recent past and what the results were, as it seems like a practical experiment to me.

Which is more general: recursion or iteration?

Is recursion more general than iteration?
To me, iteration means repetitive control using language constructs other than subroutine calls (like loop constructs and/or explicit gotos), whereas recursion means repetitive control obtained using subroutine calls. Which of the two is more general?
Voted to close as likely to prompt opinion-based responses; my opinion-based response is: recursion is more general because:
it's simply a use case of function calls, which a language will already have; and
recursion captures both a direct one-to-one pattern of repetition and more complicated patterns, such as tree traversal, divide and conquer, etc.
Conversely, iteration tends to be a specific language-level construct allowing only a direct linear traversal.
In so far as it is not opinion-based, the most reasonable answer is that neither recursion nor iteration is more general than the other. A language can be Turing complete with recursion but no iteration (minimal Lisps are like that) and a language can also be Turing complete with iteration but no recursion (earlier versions of Fortran didn't support recursion). Both recursion and iteration are widely used. Iteration is probably more commonly used since for every person who learns programming with something like Lisp or Haskell there are probably a dozen who learn programming with things like Java or Visual Basic -- but I don't think that "most commonly used" is a good synonym for "general".
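To illustrate the tree-traversal point from the earlier answer (a hypothetical Python sketch; the tuple encoding of trees is my own): the recursive version follows the tree's shape directly, while the iterative version has to manage a worklist by hand.

# A tree as nested tuples: (value, left, right), with None for absent children.
tree = (1, (2, None, None), (3, (4, None, None), None))

def tree_sum_recursive(node) -> int:
    if node is None:
        return 0
    value, left, right = node
    return value + tree_sum_recursive(left) + tree_sum_recursive(right)

def tree_sum_iterative(root) -> int:
    # The branching structure forces an explicit worklist.
    total, stack = 0, [root]
    while stack:
        node = stack.pop()
        if node is not None:
            value, left, right = node
            total += value
            stack.extend((left, right))
    return total

assert tree_sum_recursive(tree) == tree_sum_iterative(tree) == 10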

Functional programming: in what areas is it inefficient, and why is it hard to determine space and time costs?

I have been reading up on functional programming and I have two questions that I was hoping someone could help me with.
I've read that lazy functional programs can be inefficient if you are accessing the same data often because of the extra overhead of checking whether the expression has been evaluated. I have also read in the first answer of the following thread (Are functional programming languages suitable for graphics programming?), that functional programming can be resource demanding in the context of graphical programming because it creates a lot of temporary objects (I assume this has to do with having to create new objects to simulate state?).
Are there any other areas where functional programming might end up being resource-heavy/inefficient in comparison to OOP/procedural programming?
I have read in the first answer in the following thread (Pitfalls/Disadvantages of Functional Programming) that "it is very difficult to predict the time and space costs of evaluating a lazy functional program". Could someone give a simple (if that exists) explanation of why this is the case? I assume it has to do with lazy evaluation only evaluating expressions when needed, but why is it not simple to predict a kind of worst-case scenario similar to imperative programming, where everything is evaluated?
I've read that lazy functional programs can be inefficient if you are accessing the same data often because of the extra overhead of checking whether the expression has been evaluated.
This involves checking a tag bit on a pointer. It is cheap.
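As a rough illustration of what that check amounts to (a Python sketch of a memoized thunk; real implementations such as GHC do this with pointer tagging, not objects):

class Thunk:
    """A deferred computation, evaluated at most once."""
    def __init__(self, compute):
        self._compute = compute
        self._evaluated = False  # plays the role of the tag bit
        self._value = None

    def force(self):
        if not self._evaluated:      # the cheap check on every access
            self._value = self._compute()
            self._compute = None     # drop the closure, keep the result
            self._evaluated = True
        return self._value

expensive = Thunk(lambda: sum(range(1_000_000)))
assert expensive.force() == expensive.force()  # computed once, checked twice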
functional programming can be resource demanding in the context of graphical programming because it creates a lot of temporary objects
This depends on the implementation. Allocation in pure FP languages is cheap, as immutability means you can avoid some write barriers. Object allocation is roughly similar to OO languages, though some GCs, such as GHC's, are very efficient compared to e.g. Java's.
Are there any other areas where functional programming might end up being resource-heavy/inefficient in comparison to OOP/procedural programming?
There are plenty of problems that require very tight resource usage. E.g. operating systems. In such environments you need libraries for direct access to hardware and the ability to mutate memory in place. Depending on the functional language implementation you're using, you may or may not have this.
it is very difficult to predict the time and space costs of evaluating a lazy functional program
It is harder to model lazy evaluation costs because the amount of work done, and when it is done, depends on the input data, which is only available at runtime.
Practically, languages let you choose whether you want strict or lazy evaluation, as neither is appropriate for all situations.
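You can see the trade-off even outside lazy languages; in Python, for instance, lists are strict and generators are lazy (an illustrative sketch):

def squares_strict(n):
    return [i * i for i in range(n)]  # all n results computed now

def squares_lazy(n):
    return (i * i for i in range(n))  # nothing computed yet

# Lazy wins when only a prefix is demanded: this is instant, whereas the
# strict version would build a billion-element list first.
first = next(squares_lazy(10**9))

# But the cost now depends on how much of the structure is consumed,
# which is exactly what makes it hard to predict in advance.
assert first == 0 and sum(squares_lazy(1000)) == sum(squares_strict(1000))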

Is Erlang really a functional language? [closed]

I hear all the time that Erlang is a functional language, yet it is easy to call databases or other non-side-effect-free code from a function, and commands are easily ordered by putting "," commas between them, just like in Ruby or other languages. So where is the "functional" part of Erlang?
The central idea is that each process is a functional program over an input stream of messages. The result of the functional program is an output stream of messages to others. From this perspective, Erlang is a rather clean functional language; there are no destructive updates to data structures (like set-car! in most Schemes).
With few exceptions, all built-in functions such as operations on ETS tables also follow this model: apart from efficiency issues, those BIFs could actually have been implemented with pure Erlang processes and message passing.
So yes, the Erlang language is functional, but a collection of interacting Erlang processes is a different thing. Each process is an ongoing computation, and as such it has a current state, which can change in relation to the other processes. Even a database is just another process in this respect.
In my mind, this is one of the most important things about Erlang: outside the process, there could be a storm raging, but inside, things are calm, letting you focus on what that process should do - and only that.
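That "functional program over a stream of messages" shape can be sketched in any language. Here is a hypothetical Python rendition, in which the process state is just an argument threaded through a pure step function:

import queue, threading

def counter_step(state, message):
    """Pure function: (state, message) -> (new_state, reply)."""
    if message == "incr":
        return state + 1, None
    if message == "get":
        return state, state
    return state, None

def run_process(mailbox, replies, step, state):
    # The only 'mutation' is rebinding the local name, as in an Erlang loop.
    while True:
        msg = mailbox.get()
        if msg == "stop":
            return
        state, reply = step(state, msg)
        if reply is not None:
            replies.put(reply)

mailbox, replies = queue.Queue(), queue.Queue()
worker = threading.Thread(target=run_process,
                          args=(mailbox, replies, counter_step, 0))
worker.start()
for m in ("incr", "incr", "get", "stop"):
    mailbox.put(m)
assert replies.get() == 2
worker.join()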
There's a meme that functional languages must have immutable values and be side-effect free, but I say that any language with first-class functions is a functional programming language.
It is useful and powerful to have strong controls over value mutability and side effects, but I believe these are peripheral to functional programming. Nice to have, but not essential. Even in languages that do have these properties, there is always a way to escape the purity of the paradigm.1
There is a grayscale continuum between truly pure FP languages that you can't actually use for anything practical and languages that are really quite impure but still have some of the FP nature to them:
Book FP: Introductory books on FP languages frequently show only a subset of the language, with all examples done within the language's REPL, so that you never get to see the purely functional paradigm get broken. A good example of this is Scheme as presented in The Little Schemer.
You can come away from reading such a book with the false impression that FP languages can't actually do anything useful.
Haskell: The creators of Haskell went to uncommon lengths to wall off impurity via the famous I/O monad. Everything on one side of the wall is purely functional, so that the compiler can reason confidently about the code.
But the important thing is, despite the existence of this wall, you have to use the I/O monad to get anything useful done in Haskell.2 In that sense, Haskell isn't as "pure" as some would like you to believe. The I/O monad allows you to build any sort of "impure" software you like in Haskell: database clients, highly stateful GUIs, etc.
Erlang: Has immutable values and first-class functions, but lacks a strong wall between the core language and the impure bits.
Erlang ships with Mnesia, a disk-backed in-memory DBMS, which is about as impure as software gets. It's scarcely different in practice from a global variable store. Erlang also has great support for communicating with external programs via ports, sockets, etc.
Erlang doesn't impose any kind of purity policy on you, the programmer, at the language level. It just gives you the tools and lets you decide how to use them.
OCaml and F#: These closely-related multiparadigm languages include both purely functional elements as well as imperative and object-oriented characteristics.
The imperative programming bits allow you to do things like write a traditional for loop with a mutable counter variable, whereas a pure FP program would probably try to recurse over a list instead to accomplish the same result.
The OO parts are pretty much useless without the mutable keyword, which turns a value definition into a variable, so that the compiler won't complain if you change the variable's value. Mutable variables in OCaml and F# have some limitations, but you can escape even those limitations with the ref keyword.
If you're using F# with .NET, you're going to be mutating values all the time, because most of .NET is mutable, in one way or another. Any .NET object with a settable property is mutable, for example, and all the GUI stuff inherently has side-effects. The only immutable part of .NET that immediately comes to mind is System.String.
Despite all this, no one would argue that OCaml and F# are not functional programming languages.
JavaScript, R, Lua, Perl...: There are many languages even less pure than OCaml which can still be considered functional in some sense. Such languages have first-class functions, but values are mutable by default.
Footnotes:
Any truly pure FP language is a toy language or someone's research project.
That is, unless your idea of "useful" is to keep everything in the ghci REPL. You can use Haskell like a glorified calculator, if you like, so it's pure FP.
Yes, it's a functional language. It's not a pure functional language like Haskell, but then again, neither is LISP (and nobody really argues that LISP isn't functional).
The message-passing/process handling of Erlang is an implementation of the Actor model. You could argue that Erlang is an Actor language, with a functional language used for the individual Actors.
The functional part is that you tend to pass around functions. Most languages can be used both as a functional language and as an imperative language, even C (it's quite possible to make a program consisting of only function pointers and constants).
I guess the distinguishing factor is usually the lack of mutable variables in functional languages.

What are the best uses of Logic Programming?

By logic programming I mean a sub-paradigm of declarative programming languages. Don't confuse this question with "What problems can you solve with if-then-else?"
A language like Prolog is very fascinating, and it's worth learning for the sake of learning, but I have to wonder what class of real-world problems is best expressed and solved by such a language. Are there better languages? Does logic programming exist by another name in more trendy programming languages? Is the cynical version of the answer a variant of the Python Paradox?
Prototyping.
Prolog is dynamic and has been for 50 years. The compiler is liberal, the syntax minimalist, and "doing stuff" is easy, fun and efficient. SWI-Prolog has a built-in tracer (debugger!), and even a graphical tracer. You can change the code on the fly using make/0, you can dynamically load modules, add a few lines of code without leaving the interpreter, or edit the file you're currently running on the fly with edit/1. Do you think you've found a problem with the foobar/2 predicate?
?- edit(foobar).
And as soon as you leave the editor, that thing is going to be re-compiled. Sure, Eclipse does the same thing for Java, but Java isn't exactly a prototyping language.
Apart from the pure prototyping stuff, Prolog is incredibly well suited for translating a piece of logic into code. So, automatic provers and that type of stuff can easily be written in Prolog.
The first Erlang interpreter was written in Prolog - and for a reason, since Prolog is very well suited for parsing, and encoding the logic you find in parse trees. In fact, Prolog comes with a built-in parser! No, not a library, it's in the syntax, namely DCGs.
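For comparison, here is roughly what a DCG rule compiles to, threading the remaining input through each goal, hand-written as a Python sketch (no backtracking, so it captures only the deterministic part of what DCGs give you; the grammar and names are made up):

# Each 'rule' consumes a prefix of the token list and returns the rest,
# or None on failure: the difference-list threading a DCG expands into.
def token(expected):
    def rule(tokens):
        return tokens[1:] if tokens[:1] == [expected] else None
    return rule

def seq(*rules):
    def rule(tokens):
        for r in rules:
            if tokens is None:
                return None
            tokens = r(tokens)
        return tokens
    return rule

# Made-up grammar: greeting --> [hello], [world].
greeting = seq(token("hello"), token("world"))

assert greeting(["hello", "world"]) == []    # full parse
assert greeting(["hello", "there"]) is None  # failed parse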
Prolog is used a lot in NLP, particularly in syntax and computational semantics.
But, Prolog is underused and underappreciated. Unfortunately, it seems to bear an academic or "unusable for any real purpose" stigma. But it can be put to very good use in many real-world applications involving facts and the computation of relations between facts. It is not very well suited for number crunching, but CS is not only about number crunching.
Since Prolog = Syntactic Unification + Backward chaining + REPL,
most places where syntactic unification is used are also good uses for Prolog.
Uses of syntactic unification include (a minimal sketch of unification itself follows this list):
AST transformations
Type Inference
Term rewriting
Theorem proving
Natural language processing
Pattern matching
Combinatorial test case generation
Extraction of substructures from structured data such as an XML document
Symbolic computation, e.g. calculus
Deductive databases
Expert systems
Artificial Intelligence
Parsing
Query languages
Constraint Logic Programming (CLP)
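Since most of the items above reduce to unification itself, here is a minimal Python sketch of syntactic unification (the encoding of terms as tuples and of variables as capitalized strings is my own choice; like standard Prolog, it omits the occurs check):

def is_var(term):
    # Convention borrowed from Prolog: capitalized strings are variables.
    return isinstance(term, str) and term[:1].isupper()

def walk(term, subst):
    # Follow variable bindings to their current value.
    while is_var(term) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst=None):
    """Return a substitution making a and b equal, or None if none exists."""
    subst = {} if subst is None else subst
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None  # a subterm clashed
        return subst
    return None              # functor or arity clash

# point(X, 2) unifies with point(1, Y) under {X: 1, Y: 2}
assert unify(("point", "X", 2), ("point", 1, "Y")) == {"X": 1, "Y": 2}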
Many very good and well-suited use cases of logic programming have already been mentioned. I would like to complement the existing list with several tasks from an extremely important application area of logic programming:
Logic programming blends seamlessly, more seamlessly than other paradigms, with constraints, resulting in a framework called Constraint Logic Programming.
This leads to dedicated constraint solvers for different domains, such as:
CLP(FD) for integers
CLP(B) for Booleans
CLP(Q) for rational numbers
CLP(R) for floating point numbers.
These dedicated constraint solvers lead to several important use cases of logic programming that have not yet been mentioned, some of which I show below.
When choosing a Prolog system, the power and performance of its constraint solvers are often among the deciding factors, especially for commercial users.
CLP(FD) — Reasoning over integers
In practice, CLP(FD) is one of the most important applications of logic programming, and is used to solve tasks from the following areas, among others:
scheduling
resource allocation
planning
combinatorial optimization
See clpfd for more information and several examples.
CLP(B) — Boolean constraints
CLP(B) is often used in connection with:
SAT solving
circuit verification
combinatorial counting
See clpb.
CLP(Q) — Rational numbers
CLP(Q) is used to solve important classes of problems arising in Operations Research:
linear programming
integer linear programming
mixed integer linear programming
See clpq.
One of the things Prolog gives you for free is a backtracking search algorithm -- you could implement it yourself, but if your problem is best solved by having that algorithm available, then it's nice to use it.
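That free search is essentially the following, sketched in Python for a made-up toy constraint problem:

def backtrack(assignment, variables, domain, consistent):
    """Generic depth-first backtracking search over variable assignments."""
    if len(assignment) == len(variables):
        return assignment
    var = variables[len(assignment)]
    for value in domain:
        candidate = {**assignment, var: value}
        if consistent(candidate):
            result = backtrack(candidate, variables, domain, consistent)
            if result is not None:
                return result
    return None  # exhausted: backtrack to the caller's next choice

# Toy constraints: all values distinct, and a < b whenever both are assigned.
ok = lambda a: (len(set(a.values())) == len(a)
                and ("a" not in a or "b" not in a or a["a"] < a["b"]))
assert backtrack({}, ["a", "b", "c"], [1, 2, 3], ok) == {"a": 1, "b": 2, "c": 3}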
The two things I've seen it be good at are mathematical proofs and natural language understanding.
Prolog is ideal for non-numeric problems. This article gives a few examples of some applications of Prolog and it might help you understand the type of problems that it might solve.
Prolog is great at solving puzzles and the like. That said, in the domain of puzzle-solving it makes easy and medium puzzles easier to solve and complicated puzzles harder. Still, writing solvers for grid puzzles such as Hexiom, Sudoku, or Nurikabe is not especially tough.
One simple answer is "build systems". The language used to build Makefiles (at least, the part to describe dependencies) is essentially a logic programming language, although not really a "pure" logic programming language.
Yes, Prolog has been around since 1972. It was invented by Alain Colmerauer with Philippe Roussel, based on Robert Kowalski's procedural interpretation of Horn clauses. Alain was a French computer scientist and professor at Aix-Marseille University from 1970 to 1995.
And Alain invented it to analyse natural language. Several successful prototypes were created by him and his "followers":
His own system, Orbis, understood questions in English and French about the solar system. See his personal site.
Warren and Pereira's system Chat80 did QA on world geography.
Today, IBM Watson is a contemporary QA system based on logic, with a huge dose of statistics about real-world phrases.
So you can imagine where its strength lies.
Retired in 2006, he remained active until he died in 2017. He was named Chevalier de la Légion d'honneur by the French government in 1986.
