Recursion using only global variables

For the sake of simplicity, SmallBasic has only global variables. It does not have locals or parameters.
Although this makes the language simpler to teach and learn, it also complicates some matters, like recursive functions. I had a hard time creating a simple recursive function in SmallBasic and had to use a manual stack. This works, but it adds complexity and contradicts the main goal of simplicity!
This is how I can write the factorial:
n = 5
ind = 1
fact()
TextWindow.WriteLine("fact(5)=" + f)

Sub fact
  If n = 1 Then
    f = 1
  Else
    ind = ind + 1
    keepn[ind] = n  ' push the current n onto the manual stack
    n = n - 1
    fact()
    f = f * keepn[ind]  ' multiply by the saved value on the way back up
    ind = ind - 1  ' pop
  EndIf
EndSub
Note: I wrote it just now and it could have errors.
You see the picture: I'm manually creating a stack and using it to simulate local variables so I can recurse.
Is there an easy way to create this recursive function?

I think you do have to resort to global variables to write a recursive function in SmallBasic.
I'd agree that SmallBasic's lack of function arguments is quite limiting and often makes a supposedly simple programming language quite complex to use in practice.
SmallBasic's library however is great for beginners, making it significantly easier to put things on the screen than enterprise frameworks like WinForms or WPF. The library, SmallBasicLibrary.dll, can be easily loaded into other .Net languages including VB.Net, C# and F#. Simply create a console application and add a reference to the library and then use import/using/open against the Library namespace.
While teaching my kids programming I started with SmallBasic; they loved the Turtle functionality, but we quickly switched to F#, which has first-class support for functions and far less ceremony compared to VB.Net or C#. Having to explain public static void Main to a 7-year-old before they could print "Hello World" just wasn't an attractive option to me.
As an experiment I've also created an alternative SmallBasic compiler implementation which you may find interesting as it includes support for function arguments, tuples and pattern matching.

I think it's worth noting that creating a recursive function in this way - i.e. with only global variables, using a stack - is very educational in its own right. This is closer to the way assembly works, so from that perspective having to do things this way could actually be considered a feature...
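For readers who want to experiment with the idea outside SmallBasic, here is a hypothetical Python rendering of the same technique - only module-level "globals" and an explicit stack, no parameters or return values - mirroring the SmallBasic program above:

# Hypothetical Python port of the manual-stack approach used above.
n = 5
f = 0
stack = []  # plays the role of keepn[] together with the ind counter

def fact():
    global n, f
    if n == 1:
        f = 1
    else:
        stack.append(n)   # push the current n before recursing
        n -= 1
        fact()
        f *= stack.pop()  # multiply by the saved value on the way back up

fact()
print("fact(5) = " + str(f))  # fact(5) = 120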

What is the difference between procedural programming and functional programming?

I've read the Wikipedia articles for both procedural programming and functional programming, but I'm still slightly confused. Could someone boil it down to the core?
A functional language (ideally) allows you to write a mathematical function, i.e. a function that takes n arguments and returns a value. If the program is executed, this function is logically evaluated as needed.1
A procedural language, on the other hand, performs a series of sequential steps. (There's a way of transforming sequential logic into functional logic called continuation passing style.)
As a consequence, a purely functional program always yields the same value for an input, and the order of evaluation is not well-defined; which means that uncertain values like user input or random values are hard to model in purely functional languages.
1 As everything else in this answer, that’s a generalisation. This property, evaluating a computation when its result is needed rather than sequentially where it’s called, is known as “laziness”. Not all functional languages are actually universally lazy, nor is laziness restricted to functional programming. Rather, the description given here provides a “mental framework” to think about different programming styles that are not distinct and opposite categories but rather fluid ideas.
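As a tiny illustration of continuation passing style (a sketch in Python, chosen only for familiarity; the names are made up): each step hands its result to a "what to do next" function instead of returning and falling through to the next statement.

def add_cps(a, b, k):
    k(a + b)  # pass the result to the continuation instead of returning it

def double_cps(x, k):
    k(x * 2)

# The sequential steps "add 2+3, then double, then print", written functionally:
add_cps(2, 3, lambda s: double_cps(s, print))  # prints 10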
Basically the two styles are like Yin and Yang. One is organized, while the other is chaotic. There are situations when functional programming is the obvious choice, and other situations where procedural programming is the better choice. This is why there are at least two languages that have recently come out with a new version that embraces both programming styles. ( Perl 6 and D 2 )
#Procedural:#
The output of a routine does not always have a direct correlation with the input.
Everything is done in a specific order.
Execution of a routine may have side effects.
Tends to emphasize implementing solutions in a linear fashion.
##Perl 6 ##
sub factorial ( UInt:D $n is copy ) returns UInt {
    # modify "outside" state
    state $call-count++;
    # in this case it is rather pointless as
    # it can't even be accessed from outside

    my $result = 1;
    loop ( ; $n > 0 ; $n-- ) {
        $result *= $n;
    }

    return $result;
}
##D 2##
int factorial( int n ){
    int result = 1;
    for( ; n > 0 ; n-- ){
        result *= n;
    }
    return result;
}
#Functional:#
Often recursive.
Always returns the same output for a given input.
Order of evaluation is usually undefined.
Must be stateless. i.e. No operation can have side effects.
Good fit for parallel execution
Tends to emphasize a divide and conquer approach.
May have the feature of Lazy Evaluation.
##Haskell##
( copied from Wikipedia ):
fac :: Integer -> Integer
fac 0 = 1
fac n | n > 0 = n * fac (n-1)
or in one line:
fac n = if n > 0 then n * fac (n-1) else 1
##Perl 6 ##
proto sub factorial ( UInt:D $n ) returns UInt {*}
multi sub factorial ( 0 ) { 1 }
multi sub factorial ( $n ) { $n * samewith $n-1 } # { $n * factorial $n-1 }
##D 2##
pure int factorial( invariant int n ){
    if( n <= 1 ){
        return 1;
    } else {
        return n * factorial( n-1 );
    }
}
#Side note:#
Factorial is actually a common example to show how easy it is to create new operators in Perl 6 the same way you would create a subroutine. This feature is so ingrained into Perl 6 that most operators in the Rakudo implementation are defined this way. It also allows you to add your own multi candidates to existing operators.
sub postfix:< ! > ( UInt:D $n --> UInt )
    is tighter(&infix:<*>)
{ [*] 2 .. $n }

say 5!; # 120␤
This example also shows range creation (2..$n) and the list reduction meta-operator ([ OPERATOR ] LIST) combined with the numeric infix multiplication operator. (*)
It also shows that you can put --> UInt in the signature instead of returns UInt after it.
( You can get away with starting the range with 2 as the multiply "operator" will return 1 when called without any arguments )
I've never seen this definition given elsewhere, but I think this sums up the differences given here fairly well:
Functional programming focuses on expressions
Procedural programming focuses on statements
Expressions have values. A functional program is an expression whose value is a sequence of instructions for the computer to carry out.
Statements don't have values and instead modify the state of some conceptual machine.
In a purely functional language there would be no statements, in the sense that there's no way to manipulate state (they might still have a syntactic construct named "statement", but unless it manipulates state I wouldn't call it a statement in this sense). In a purely procedural language there would be no expressions, everything would be an instruction which manipulates the state of the machine.
Haskell would be an example of a purely functional language because there is no way to manipulate state. Machine code would be an example of a purely procedural language because everything in a program is a statement which manipulates the state of the registers and memory of the machine.
The confusing part is that the vast majority of programming languages contain both expressions and statements, allowing you to mix paradigms. Languages can be classified as more functional or more procedural based on how much they encourage the use of statements vs expressions.
For example, C would be more functional than COBOL because a function call is an expression, whereas calling a sub program in COBOL is a statement (that manipulates the state of shared variables and doesn't return a value). Python would be more functional than C because it allows you to express conditional logic as an expression using short circuit evaluation (test and path1 or path2 as opposed to if statements). Scheme would be more functional than Python because everything in Scheme is an expression.
You can still write in a functional style in a language which encourages the procedural paradigm and vice versa. It's just harder and/or more awkward to write in a paradigm which isn't encouraged by the language.
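To make the expression/statement distinction concrete, here is a small illustrative sketch in Python (which has both forms): the same branch written first as a statement that works by mutating a variable, then as an expression that simply has a value.

# Statement form: the 'if' has no value of its own; it mutates 'result'.
def sign_statement(x):
    if x >= 0:
        result = "non-negative"
    else:
        result = "negative"
    return result

# Expression form: the conditional itself evaluates to a value.
def sign_expression(x):
    return "non-negative" if x >= 0 else "negative"

print(sign_statement(-3), sign_expression(-3))  # negative negative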
Functional Programming
num = 1

def function_to_add_one(num):
    num += 1
    return num

function_to_add_one(num)
function_to_add_one(num)
function_to_add_one(num)
function_to_add_one(num)
function_to_add_one(num)
# Final Output: 2
Procedural Programming
num = 1

def procedure_to_add_one():
    global num
    num += 1
    return num

procedure_to_add_one()
procedure_to_add_one()
procedure_to_add_one()
procedure_to_add_one()
procedure_to_add_one()
# Final Output: 6
function_to_add_one is a function
procedure_to_add_one is a procedure
Even if you run the function five times, every time it will return 2.
If you run the procedure five times, at the end of the fifth run it will give you 6.
DISCLAIMER: Obviously this is a hyper-simplified view of reality. This answer just gives a taste of "functions" as opposed to "procedures". Nothing more. Once you have tasted this superficial yet deeply penetrative intuition, start exploring the two paradigms, and you will start to see the difference quite clearly.
Helps my students, hope it helps you too.
In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast with the procedural programming style that emphasizes changes in state.
I believe that procedural/functional/objective programming are about how to approach a problem.
The first style would plan everything into steps and solve the problem by implementing one step (a procedure) at a time. On the other hand, functional programming would emphasize the divide-and-conquer approach, where the problem is divided into sub-problems, each sub-problem is solved (by creating a function to solve it), and the results are combined to create the answer for the whole problem. Lastly, objective programming would mimic the real world by creating a mini-world inside the computer with many objects, each of which has (somewhat) unique characteristics and interacts with others. From those interactions the result would emerge.
Each style of programming has its own advantages and weaknesses. Hence, doing something such as "pure programming" (i.e. purely procedural - no one does this, by the way, which is kind of weird - or purely functional or purely objective) is very difficult, if not impossible, except for some elementary problems specially designed to demonstrate the advantage of a programming style (hence, we call those who like pureness "weenie" :D).
Then, from those styles, we have programming languages that are designed to be optimized for each style. For example, Assembly is all about procedural. Okay, most early languages are procedural, not only Asm, like C, Pascal, (and Fortran, I heard). Then, we have the famous Java in the objective school (actually, Java and C# are also in a class called "money-oriented," but that is a subject for another discussion). Also objective is Smalltalk. In the functional school, we would have the "nearly functional" (some considered them to be impure) Lisp family and ML family and the many "purely functional" Haskell, Erlang, etc. By the way, there are many general languages such as Perl, Python, Ruby.
To expand on Konrad's comment:
As a consequence, a purely functional program always yields the same value for an input, and the order of evaluation is not well-defined;
Because of this, functional code is generally easier to parallelize. Since there are (generally) no side effects of the functions, and they (generally) just act on their arguments, a lot of concurrency issues go away.
Functional programming is also used when you need to be capable of proving your code is correct. This is much harder to do with procedural programming (not easy with functional, but still easier).
Disclaimer: I haven't used functional programming in years, and only recently started looking at it again, so I might not be completely correct here. :)
One thing I hadn't seen really emphasized here is that modern functional languages such as Haskell rely more on first-class functions for flow control than on explicit recursion. You don't need to define factorial recursively in Haskell, as was done above. I think something like
fac n = foldr (*) 1 [1..n]
is a perfectly idiomatic construction, and much closer in spirit to using a loop than to using explicit recursion.
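For comparison, a rough Python equivalent of that fold (functools.reduce is a left fold rather than foldr, which makes no difference for multiplication):

from functools import reduce
from operator import mul

def fac(n):
    # fold multiplication over 1..n, starting from 1, with no explicit recursion
    return reduce(mul, range(1, n + 1), 1)

print(fac(5))  # 120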
Functional programming is identical to procedural programming in which global variables are not used.
Procedural languages tend to keep track of state (using variables) and tend to execute as a sequence of steps. Purely functional languages don't keep track of state, use immutable values, and tend to execute as a series of dependencies. In many cases the status of the call stack will hold the information that would be equivalent to that which would be stored in state variables in procedural code.
Recursion is a classic example of functional style programming.
Konrad said:
As a consequence, a purely functional program always yields the same value for an input,
and the order of evaluation is not well-defined; which means that uncertain values like
user input or random values are hard to model in purely functional languages.
The order of evaluation in a purely functional program may be hard(er) to reason about (especially with laziness) or even unimportant but I think that saying it is not well defined makes it sound like you can't tell if your program is going to work at all!
Perhaps a better explanation would be that control flow in functional programs is based on when the values of a function's arguments are needed. The Good Thing about this is that in well-written programs, state becomes explicit: each function lists its inputs as parameters instead of arbitrarily munging global state. So on some level, it is easier to reason about order of evaluation with respect to one function at a time. Each function can ignore the rest of the universe and focus on what it needs to do. When combined, functions are guaranteed to work the same[1] as they would in isolation.
... uncertain values like user input or random values are hard to model in purely
functional languages.
The solution to the input problem in purely functional programs is to embed an imperative language as a DSL using a sufficiently powerful abstraction. In imperative (or non-pure functional) languages this is not needed because you can "cheat" and pass state implicitly and order of evaluation is explicit (whether you like it or not). Because of this "cheating" and forced evaluation of all parameters to every function, in imperative languages 1) you lose the ability to create your own control flow mechanisms (without macros), 2) code isn't inherently thread safe and/or parallelizable by default, 3) and implementing something like undo (time travel) takes careful work (imperative programmer must store a recipe for getting the old value(s) back!), whereas pure functional programming buys you all these things—and a few more I may have forgotten—"for free".
I hope this doesn't sound like zealotry, I just wanted to add some perspective. Imperative programming and especially mixed paradigm programming in powerful languages like C# 3.0 are still totally effective ways to get things done and there is no silver bullet.
[1] ... except possibly with respect to memory usage (cf. foldl and foldl' in Haskell).
To expand on Konrad's comment:
and the order of evaluation is not
well-defined
Some functional languages have what is called Lazy Evaluation, which means a function is not executed until the value is needed. Until that time, the function itself is what is passed around.
Procedural languages are step 1 step 2 step 3... if in step 2 you say add 2 + 2, it does it right then. In lazy evaluation you would say add 2 + 2, but if the result is never used, it never does the addition.
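A rough sketch of the idea using Python generators (an approximation: Python is an eager language, but a generator defers its work until the value is requested):

def lazy_add(a, b):
    yield a + b  # nothing is computed when lazy_add() is called

pending = lazy_add(2, 2)  # the addition has not happened yet
result = next(pending)    # it happens only when the value is needed
print(result)             # 4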
If you have a chance, I would recommend getting a copy of Lisp/Scheme, and doing some projects in it. Most of the ideas that have lately become bandwagons were expressed in Lisp decades ago: functional programming, continuations (as closures), garbage collection, even XML.
So that would be a good way to get a head start on all these current ideas, and a few more besides, like symbolic computation.
You should know what functional programming is good for, and what it isn't good for. It isn't good for everything. Some problems are best expressed in terms of side-effects, where the same question gives different answers depending on when it is asked.
#Creighton:
In Haskell there is a library function called product:
product list = foldr (*) 1 list
or simply:
product = foldr (*) 1
so the "idiomatic" factorial
fac n = foldr (*) 1 [1..n]
would simply be
fac n = product [1..n]
Procedural programming divides sequences of statements and conditional constructs into separate blocks called procedures that are parameterized over arguments that are (non-functional) values.
Functional programming is the same except that functions are first-class values, so they can be passed as arguments to other functions and returned as results from function calls.
Note that functional programming is a generalization of procedural programming in this interpretation. However, a minority interpret "functional programming" to mean side-effect-free which is quite different but irrelevant for all major functional languages except Haskell.
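A minimal Python sketch of that difference: procedures are parameterized over plain values, while in functional style functions themselves are passed in and returned like any other value.

def twice(f, x):
    return f(f(x))  # 'f' is an ordinary argument that happens to be a function

def increment(n):
    return n + 1

def make_adder(k):
    return lambda n: n + k  # functions can also be returned as results

print(twice(increment, 3))       # 5
print(twice(make_adder(10), 0))  # 20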
None of the answers here show idiomatic functional programming. The recursive factorial answer is great for representing recursion in FP, but the majority of code is not recursive so I don't think that answer is fully representative.
Say you have an array of strings, and each string represents an integer like "5" or "-200". You want to check this input array of strings against your internal test case (using integer comparison). Both solutions are shown below.
Procedural
arr_equal(a : [Int], b : [Str]) -> Bool {
    if(a.len != b.len) {
        return false;
    }

    bool ret = true;
    for( int i = 0; i < a.len /* Optimized with && ret */; i++ ) {
        int a_int = a[i];
        int b_int = parseInt(b[i]);
        ret &= a_int == b_int;
    }
    return ret;
}
Functional
eq = i, j => i == j       # This is usually a built-in
toInt = i => parseInt(i)  # Of course, parseInt === toInt here, but this is for visualization

arr_equal(a : [Int], b : [Str]) -> Bool =
    zip(a, b.map(toInt))                 # Combines into [Int, Int]
        .map(eq)
        .reduce(true, (i, j) => i && j)  # Start with true, and continuously && it with each value
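Here is a runnable Python rendering of the functional sketch above (the helper names are illustrative, and Python's built-in int() stands in for parseInt):

from functools import reduce

def arr_equal(a, b):
    if len(a) != len(b):  # zip() would silently stop at the shorter input
        return False
    # combine into pairs, compare, then fold the booleans together with 'and'
    return reduce(lambda acc, pair: acc and pair[0] == pair[1],
                  zip(a, map(int, b)), True)

print(arr_equal([5, -200], ["5", "-200"]))  # True
print(arr_equal([5, -200], ["5", "200"]))   # False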
While pure functional languages are generally research languages (as the real world likes its free use of side effects), real-world procedural languages will use the much simpler functional syntax when appropriate.
This is usually implemented with an external library like Lodash, or available built-in with newer languages like Rust. The heavy lifting of functional programming is done with functions/concepts like map, filter, reduce, currying, partial, the last three of which you can look up for further understanding.
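For a quick taste of those building blocks, a small illustrative Python snippet:

from functools import reduce, partial

nums = [1, 2, 3, 4]
squares = list(map(lambda x: x * x, nums))        # map: transform each element
evens = list(filter(lambda x: x % 2 == 0, nums))  # filter: keep matching elements
total = reduce(lambda a, b: a + b, nums, 0)       # reduce: fold into a single value

def add(a, b):
    return a + b

add5 = partial(add, 5)  # partial application: pre-fill some arguments
print(squares, evens, total, add5(10))  # [1, 4, 9, 16] [2, 4] 10 15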
Addendum
In order to be used in the wild, the compiler will normally have to work out how to convert the functional version into the procedural version internally, as function call overhead is too high. Recursive cases such as the factorial shown will use tricks such as tail-call optimization to remove O(n) memory usage. The fact that there are no side effects allows functional compilers to implement the && ret optimization even when the .reduce is done last. Using Lodash in JS obviously does not allow for any optimization, so it is a hit to performance (which isn't usually a concern in web development). Languages like Rust will optimize internally (and have functions such as try_fold to assist the && ret optimization).
To understand the difference, one needs to understand that the "godfather" paradigm of both procedural and functional programming is imperative programming.
Basically, procedural programming is merely a way of structuring imperative programs in which the primary method of abstraction is the "procedure" (or "function" in some programming languages). Even Object Oriented Programming is just another way of structuring an imperative program, where state is encapsulated in objects, becoming an object with a "current state," plus this object has a set of functions, methods, and other stuff that let you, the programmer, manipulate or update the state.
Now, in regards to functional programming, the gist in its approach is that it identifies what values to take and how these values should be transferred. (so there is no state, and no mutable data as it takes functions as first class values and pass them as parameters to other functions).
PS: understanding what every programming paradigm is used for should clarify the differences between all of them.
PPS: at the end of the day, programming paradigms are just different approaches to solving problems.
PPPS: this Quora answer has a great explanation.

How to speed up writing to a matrix in a reference class in R

Here is a piece of R code that writes to each element of a matrix in a reference class. It runs incredibly slowly, and I’m wondering if I’ve missed a simple trick that will speed this up.
nx = 2000
ny = 10

ref_matrix <- setRefClass(
  "ref_matrix",
  fields = list(data = "matrix")
)

out <- ref_matrix(data = matrix(0.0, nx, ny))
# tracemem(out$data)

for (iy in 1:ny) {
  for (ix in 1:nx) {
    out$data[ix, iy] <- ix + iy
  }
}
It seems that each write to an element of the matrix triggers a check that involves a copy of the entire matrix. (Uncommenting the tracemem() call shows this.) Now, I've found a discussion that seems to confirm this:
https://r-devel.r-project.narkive.com/8KtYICjV/rd-copy-on-assignment-to-large-field-of-reference-class
and this also seems to be covered by Speeding up field access in R reference classes
but in both of these cases this behaviour can be bypassed by not declaring a class for the field, and this works for the example in the first link, which uses a 1D vector, b, that can just be set as b <<- 1:10000. But I've not found an equivalent way of creating a 2D array without using an explicit "matrix" instance.
Am I just missing something simple, or is this actually not possible?
Let me add a couple of things. First, I’m very new to R, so could easily have missed something. Second, I’m really just curious about the way reference classes work in this case and whether there’s a simple way to use them efficiently; I’m not looking for a really fast way to set the elements of a matrix - I can do that by not having the matrix in a reference class at all, and if I really care about speed I can write a C routine to do it and call it from R.
Here’s some background that might explain why I’m interested in this, which you’re welcome to ignore.
I got here by wanting to see how different languages, and even different compiler options and different ways of coding the same operation, compared for efficiency when accessing 2D rectangular arrays. I’ve been playing with a test program that creates two 2D arrays of the same size, and calls a subroutine that sets the first to the elements of the second plus their index values. (Almost any operation would do, but this one isn’t completely trivial to optimise.) I have this in a number of languages now, C, C++, Julia, Tcl, Fortran, Swift, etc., even hand-coded assembler (spoiler alert: assembler isn’t worth the effort any more) and thought I’d try R. The obvious implementation in R passes the two arrays to a subroutine that does the work, but because R doesn’t normally pass by reference, that routine has to make a copy of the modified array and return that as the function value. I thought using a reference class would avoid the relatively minor overhead of that copy, so I tried that and was surprised to discover that, far from speeding things up, it slowed them down enormously.
Use outer:
out$data <- outer(1:nx, 1:ny, `+`)
Also, don't use reference classes (or R6 classes) unless you actually need reference semantics. KISS and all that.

Replacing an ordinary function with a generic function

I'd like to use names such as elt, nth and mapcar with a new data structure that I am prototyping, but these names designate ordinary functions and so, I think, would need to be redefined as generic functions.
Presumably it's bad form to redefine these names?
Is there a way to tell defgeneric not to generate a program error and to go ahead and replace the function binding?
Is there a good reason for these not being generic functions, or is it just historic?
What's the considered wisdom and best practice here please?
If you are using SBCL or ABCL, and aren't concerned with ANSI compliance, you could investigate Extensible Sequences:
http://www.sbcl.org/manual/#Extensible-Sequences
http://www.doc.gold.ac.uk/~mas01cr/papers/ilc2007/sequences-20070301.pdf
...you can't redefine functions in the COMMON-LISP package, but you could create a new package and shadow the imports of the functions you want to redefine.
Is there a good reason for these not being generic functions, or is it just historic?
Common Lisp has some layers of language in some of its areas. Higher-level parts of the software might need to be built on lower-level constructs.
One of its goals was being fast enough for a range of applications.
Common Lisp also introduced the idea of sequences, the abstraction over lists and vectors, at a time when the language didn't have an object system. CLOS came several years after the initial Common Lisp design.
Take for example something like equality - for numbers.
Lisp has =:
(= a b)
That's the fastest way to compare numbers. = is also defined only for numbers.
Then there are eql, equal and equalp. Those work for numbers, but also for some other data types.
Now, if you need more speed, you can declare the types and tell the compiler to generate faster code:
(locally
  (declare (fixnum a b)
           (optimize (speed 3) (safety 0)))
  (= a b))
So, why is = not a CLOS generic function?
a) it was introduced when CLOS did not exist
but equally important:
b) in Common Lisp it wasn't known (and it still isn't) how to make a CLOS generic function = as fast as a non-generic function for typical usage scenarios - while preserving dynamic typing and extensibility
CLOS generic function simply have a speed penalty. The runtime dispatch costs.
CLOS is best used for higher level code, which then really benefits from features like extensibility, multi-dispatch, inheritance/combinations. Generic functions should be used for defined generic behavior - not as collections of similar methods.
With better implementation technology, implementation-specific language enhancements, etc. it might be possible to increase the range of code which can be written in a performant way using CLOS. This has been tried with programming languages like Dylan and Julia.
Presumably it's bad form to redefine these names?
Common Lisp implementations don't let you replace them just so. Be aware that your replacement functions should be implemented in a way which works consistently with the old functions. Also, old versions could be inlined in some way and not be replaceable everywhere.
Is there a way to tell defgeneric not to generate a program error and to go ahead and replace the function binding?
You would need to make sure that the replacement works while you are replacing it. The code replacing functions might itself use the very functions you are replacing.
Still, implementations allow you to replace CL functions - but this is implementation specific. For example LispWorks provides the variables lispworks:*packages-for-warn-on-redefinition* and lispworks:*handle-warn-on-redefinition*. One can bind them or change them globally.
What's the considered wisdom and best practice here please?
There are two approaches:
use implementation specific ways to replace standard Common Lisp functions
This can be dangerous. Plus you need to support it for all implementations of CL you want to use...
use a language package, where you define your new language. Here this would be standard Common Lisp plus your extensions/changes. Export everything the user would use. In your software use this package instead of CL.

Smart design of a math parser?

What is the smartest way to design a math parser? What I mean is a function that takes a math string (like: "2 + 3 / 2 + (2 * 5)") and returns the calculated value? I did write one in VB6 ages ago but it ended up being way too bloated and not very portable (or smart for that matter...). General ideas, pseudo code or real code is appreciated.
A pretty good approach would involve two steps. The first step involves converting the expression from infix to postfix (e.g. via Dijkstra's shunting yard) notation. Once that's done, it's pretty trivial to write a postfix evaluator.
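To make the two steps concrete, here is a minimal Python sketch (a deliberately tiny subset: the four basic operators, parentheses, and space-separated tokens; a real parser needs proper tokenization and error handling). Note the "greater than or equal" precedence comparison when popping, the detail a later answer on this page calls out as necessary for left-associative operators.

import operator

PRECEDENCE = {'+': 1, '-': 1, '*': 2, '/': 2}
OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

def to_postfix(tokens):
    # Dijkstra's shunting-yard algorithm: infix tokens -> postfix list.
    output, stack = [], []
    for tok in tokens:
        if tok in PRECEDENCE:
            while stack and stack[-1] != '(' and PRECEDENCE[stack[-1]] >= PRECEDENCE[tok]:
                output.append(stack.pop())
            stack.append(tok)
        elif tok == '(':
            stack.append(tok)
        elif tok == ')':
            while stack[-1] != '(':
                output.append(stack.pop())
            stack.pop()  # discard the '('
        else:
            output.append(float(tok))
    while stack:
        output.append(stack.pop())
    return output

def eval_postfix(postfix):
    # Trivial stack evaluator for the postfix form.
    stack = []
    for tok in postfix:
        if tok in OPS:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[tok](a, b))
        else:
            stack.append(tok)
    return stack[0]

expr = "2 + 3 / 2 + ( 2 * 5 )"
print(eval_postfix(to_postfix(expr.split())))  # 13.5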
I wrote a few blog posts about designing a math parser. There is a general introduction, basic knowledge about grammars, sample implementation written in Ruby and a test suite. Perhaps you will find these materials useful.
You have a couple of approaches. You could generate dynamic code and execute it in order to get the answer without needing to write much code. Just perform a search on runtime generated code in .NET and there are plenty of examples around.
Alternatively you could create an actual parser and generate a little parse tree that is then used to evaluate the expression. Again this is pretty simple for basic expressions. Check out codeplex as I believe they have a math parser on there. Or just look up BNF which will include examples. Any website introducing compiler concepts will include this as a basic example.
Codeplex Expression Evaluator
If you have an "always on" application, just post the math string to Google and parse the result. Simple way, but not sure if that's what you need - but smart in some way, I guess.
I know this is old, but I came across this trying to develop a calculator as part of a larger app and ran across some issues using the accepted answer. The links were IMMENSELY helpful in understanding and solving this problem and should not be discounted. I was writing an Android app in Java, and for each item in the expression "string," I actually stored a String in an ArrayList as the user typed on the keypad. For the infix-to-postfix conversion, I iterated through each String in the ArrayList, then evaluated the newly arranged postfix ArrayList of Strings. This was fantastic for a small number of operands/operators, but longer calculations were consistently off, especially as the expressions started evaluating to non-integers. In the provided link for infix-to-postfix conversion, it suggests popping the stack if the scanned item is an operator and the topStack item has a higher precedence. I found that this is almost correct. Popping the topStack item if its precedence is higher OR EQUAL to the scanned operator finally made my calculations come out correct. Hopefully this will help anyone working on this problem, and thanks to Justin Poliey (and fas?) for providing some invaluable links.
The related question Equation (expression) parser with precedence? has some good information on how to get started with this as well.
-Adam
Assuming your input is an infix expression in string format, you could convert it to postfix and, using a pair of stacks: an operator stack and an operand stack, work the solution from there. You can find general algorithm information at the Wikipedia link.
ANTLR is a very nice LL(*) parser generator. I recommend it highly.
Developers always want to have a clean approach, and try to implement the parsing logic from ground up, usually ending up with the Dijkstra Shunting-Yard Algorithm. Result is neat looking code, but possibly ridden with bugs. I have developed such an API, JMEP, that does all that, but it took me years to have stable code.
Even with all that work, you can see even from that project page that I am seriously considering to switch over to using JavaCC or ANTLR, even after all that work already done.
11 years into the future from when this question was asked: If you don't want to re-invent the wheel, there are many exotic math parsers out there.
There is one that I wrote years ago which supports arithmetic operations, equation solving, differential calculus, integral calculus, basic statistics, function/formula definition, graphing, etc.
It's called ParserNG and it's free.
Evaluating an expression is as simple as:
MathExpression expr = new MathExpression("(34+32)-44/(8+9(3+2))-22");
System.out.println("result: " + expr.solve());
result: 43.16981132075472
Or using variables and calculating simple expressions:
MathExpression expr = new MathExpression("r=3;P=2*pi*r;");
System.out.println("result: " + expr.getValue("P"));
Or using functions:
MathExpression expr = new MathExpression("f(x)=39*sin(x^2)+x^3*cos(x);f(3)");
System.out.println("result: " + expr.solve());
result: -10.65717648378352
Or to evaluate the derivative at a given point (note it does symbolic differentiation (not numerical) behind the scenes, so the accuracy is not limited by the errors of numerical approximations):
MathExpression expr = new MathExpression("f(x)=x^3*ln(x); diff(f,3,1)");
System.out.println("result: " + expr.solve());
result: 38.66253179403897
Which differentiates x^3 * ln(x) once at x=3.
The number of times you can differentiate is 1 for now.
or for Numerical Integration:
MathExpression expr = new MathExpression("f(x)=2*x; intg(f,1,3)");
System.out.println("result: " + expr.solve());
result: 7.999999999998261... approx: 8
This parser is decently fast and has lots of other functionality.
Work has been concluded on porting it to Swift via bindings to Objective C and we have used it in graphing applications amongst other iterative use-cases.
DISCLAIMER: ParserNG is authored by me.
