Etymology of reflection?

I've never found a clear explanation for the etymology of reflection in the context of computer languages, so I want to clarify this here.
"Reflection" originates from Latin and has the following definitions:
bend back
turn back
turn round
So the idea behind it is a language that is able to bend back on itself, to be able to look and manipulate its own code.
Or is there something else?
The paper "Reflection in logic, functional and object-oriented programming: a short comparative study" by François-Nicolas Demers and Jacques Malenfant (PDF) seems to agree:
Reflection is the process of reasoning about and/or acting upon oneself.

The term reflection was (according to the Wikipedia article) coined by Brian Cantwell Smith in his dissertation Procedural Reflection in Programming Languages.
The prologue starts:
It is a striking fact about human cognition that we can think not only about the world around us, but also about our ideas, our actions, our feelings, our past experience. This ability to reflect lies behind much of the subtlety and flexibility with which we deal with the world; it is an essential part of mastering new skills, of reacting to unexpected circumstances, ...
[...]
This last aspect -- the self-referential aspect of reflective thought -- has sparked particular interest for cognitive theorists...
[...]
In artificial intelligence, the focus on computational forms of self-referential reflective reasoning has become particularly central.
And then it summarises the reflection hypothesis as
In as much as a computational process can be constructed to reason about an external world in virtue of comprising an ingredient process (interpreter) formally manipulating representations of that world, so too a computational process could be made to reason about itself in virtue of comprising an ingredient process (interpreter) formally manipulating representations of its own operations and structures.
The use of reflection is tied to self-representation and self-reference, which to me suggests that, of the alternatives in the question, the closest is "bend back", as also given in the Etymonline entry on reflection:
Of the mind, from 1670s. Meaning "remark made after turning back one's thought on some subject" is from 1640s. Spelling with -ct- recorded from late 14c., established 18c., by influence of the verb.

Related

Theory of automata prerequisites

I'm interested in automata theory to improve my understanding of programming and compiler design (I would like to create some simple syntaxes in my own projects, for example: L-systems, AI, neural net structures, and intelligent object-to-object conversation 'AI dialog'), but there are things I need to learn before I go forward.
There are a lot of new symbols and mathematical concepts I need to learn before studying automata theory. I could not copy and paste examples because of the symbols, and I don't have the required reputation to post an image, so here's a link to a wiki article:
Context-free grammar article on Wikipedia
Under the heading "Proper CFGs" you can see some definitions. I don't understand them.
Could someone please tell me what this notation is called so I can Google it? Any other pointers or information would also be helpful, but just knowing a few key words will help. Also, if anyone knows of a comprehensive resource on that notation that can be accessed for free (e.g., an IIT video lecture), I would be eternally grateful, as I can't afford tutoring or even textbooks at this time.
The resource I'm using at the moment for automata theory (for anyone who is interested) is Theory of Automata IIT Lectures on YouTube.
The symbols ∀ and ∃ are logical quantifiers, respectively meaning "for all" and "there exists".
Typically you are first introduced to them in a discrete mathematics course, though they're part of predicate logic (also known as first-order logic); in my particular university's CS program, Discrete Math is a prerequisite for Logic for Computer Science, which in turn is a prerequisite for Formal Languages and Automata.
The star symbol * in the term (V ∪ Σ)* is studied in formal languages/automata theory itself: it is the Kleene star operator. Its input is an alphabet (a set of symbols), and it produces the set of all strings of zero or more symbols over that alphabet.
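To make the notation concrete, here is the "no unreachable symbols" condition written out, together with the definition of the Kleene star; this is a standard rendering of those definitions, not a quote from the Wikipedia article:

```latex
% "No unreachable symbols": every nonterminal X appears in some
% string derivable from the start symbol S.
\forall X \in V : \exists \alpha, \beta \in (V \cup \Sigma)^* : S \Rightarrow^* \alpha X \beta

% The Kleene star: all finite strings over an alphabet, including
% the empty string \varepsilon.
\Sigma^* = \bigcup_{n \ge 0} \Sigma^n = \{\varepsilon\} \cup \Sigma \cup \Sigma\Sigma \cup \cdots
```

Read aloud: "for all X in V, there exist strings α and β over (V ∪ Σ)* such that S derives αXβ in zero or more steps."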
A useful tool for studying formal languages and automata is JFLAP.
This topic, at the level that you have referred to in your link, is really only for mathematicians or graduate-level theoretical computer science students. The symbols you are referring to are just symbolic logic. If you are really interested in automata theory, I would recommend trying to find resources that explore the topic from a conceptual level and avoid using complex logical statements. OR, if you really want to dive in, you can teach yourself symbolic logic, some set theory, probably some modern algebra, and then tackle automata theory from there.
I read many books on the subject of languages and automata, including the Dragon books on compilers (and the much more pragmatic Let's Build a Compiler by Jack Crenshaw), but none of it really clicked until I read the classic Computation: Finite and Infinite Machines by Marvin Minsky. Being an old book, it does not cover the latest research and developments in the field at all, but he explains the state of the art of the 1960s in Automata, Neural Networks, Turing Machines, Functional Programming and Lambda Calculus, and the oft-neglected third wheel of String-Rewriting Systems. And the writing is exceptionally clear and engaging. IIRC Minsky even co-authored a robot story with Isaac Asimov, so he has some serious writing credentials.
Like I say, this book will not bring you up-to-date in any of these fields, but it's the best book I've found for explaining everything from the ground up. And it would provide a very firm basis for reading anything more recent. This book is in the bibliography of every book published since.

Generating articles automatically

This question is to learn and understand whether a particular technology exists or not. Following is the scenario.
We are going to provide 200 English words. The software can add an additional 40 words, which is 20% of 200. Now, using these, the software should write dialogs: meaningful dialogs with no grammar mistakes.
For this, I looked into Spintax and article spinning. But you know what they do: take existing articles and rewrite them. That is not the best way to do this (is it? let me know if it is, please). So, is there any technology capable of doing this? Maybe the semantic theory that Google uses? Any proven AI method?
Please help.
To begin with, a word of caution: this is quite at the forefront of research in natural language generation (NLG), and the state-of-the-art research publications are not nearly good enough to replace a human teacher. The problem is especially complicated for students with English as a second language (ESL), because they tend to think in their native tongue before mentally translating the knowledge into English. If we disregard this fearful prelude, the normal way to go about this is as follows:
NLG comprises three main components:
Content Planning
Sentence Planning
Surface Realization
Content Planning: This stage breaks down the high-level goal of communication into structured atomic goals. These atomic goals are small enough to be reached with a single step of communication (e.g. in a single clause).
Sentence Planning: Here, the actual lexemes (i.e. words or word-parts that bear clear semantics) are chosen to be a part of the atomic communicative goal. The lexemes are connected through predicate-argument structures. The sentence planning stage also decides upon sentence boundaries (e.g. should the student write "I went there, but she was already gone." or "I went there to see her. She had already left."? Notice the different sentence boundaries and different lexemes, though both answers indicate the same meaning).
Surface Realization: The semi-formed structure attained in the sentence planning step is morphed into a proper form by incorporating function words (determiners, auxiliaries, etc.) and inflections.
In your particular scenario, most of the words are already provided, so choosing the lexemes is going to be relatively simple. The predicate-argument structures connecting the lexemes need to be learned using a suitable probabilistic learning model (e.g. hidden Markov models). The surface realization, which ensures the final correct grammatical structure, should be a combination of grammar rules and statistical language models.
At a high-level, note that content planning is language-agnostic (but it is, quite possibly, culture-dependent), while the last two stages are language-dependent.
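To make the staging concrete, here is a minimal type-level sketch in Haskell; every name in it is illustrative and each stage is a trivial stand-in (no real NLG system is this simple), but it shows how the three stages compose into a pipeline:

```haskell
-- A type-level sketch of the three NLG stages; all names illustrative.

newtype Goal       = Goal String        -- high-level communicative goal
newtype AtomicGoal = AtomicGoal String  -- reachable in a single clause
newtype Lexeme     = Lexeme String      -- a semantics-bearing word

-- A predicate with its arguments, chosen during sentence planning.
data SentencePlan = SentencePlan Lexeme [Lexeme]

-- Stage 1: break the high-level goal into atomic goals.
contentPlan :: Goal -> [AtomicGoal]
contentPlan (Goal g) = [AtomicGoal g]

-- Stage 2: choose lexemes and a predicate-argument structure.
sentencePlan :: AtomicGoal -> SentencePlan
sentencePlan (AtomicGoal g) = SentencePlan (Lexeme g) []

-- Stage 3: add function words, inflections, and punctuation.
surfaceRealize :: SentencePlan -> String
surfaceRealize (SentencePlan (Lexeme p) args) =
  unwords (p : [a | Lexeme a <- args]) ++ "."

generate :: Goal -> String
generate = unwords . map (surfaceRealize . sentencePlan) . contentPlan

main :: IO ()
main = putStrLn (generate (Goal "greet"))
```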
As a final note, I would like to add that the choice of the 40 extra words is something I have glossed over, but it is no less important than the other parts of this process. In my opinion, these extra words should be chosen based on their syntagmatic relation to the 200 given words.
For further details, the two following papers provide a good start (complete with process flow architectures, examples, etc.):
Natural Language Generation in Dialog Systems
Stochastic Language Generation for Spoken Dialogue Systems
To better understand the notion of syntagmatic relations, I found Sahlgren's article on the distributional hypothesis extremely helpful. The distributional approach in his work can also be used to learn the predicate-argument structures I mentioned earlier.
Finally, to add a few available tools: take a look at this ACL list of NLG systems. I haven't used any of them, but I've heard good things about SPUD and OpenCCG.

Is there a software-engineering methodology for functional programming? [closed]

Software Engineering as it is taught today is entirely focused on object-oriented programming and the 'natural' object-oriented view of the world. There is a detailed methodology that describes how to transform a domain model into a class model with several steps and a lot of (UML) artifacts like use-case-diagrams or class-diagrams. Many programmers have internalized this approach and have a good idea about how to design an object-oriented application from scratch.
The new hype is functional programming, which is taught in many books and tutorials. But what about functional software engineering?
While reading about Lisp and Clojure, I came across two interesting statements:
Functional programs are often developed bottom up instead of top down ('On Lisp', Paul Graham)
Functional programmers use maps where OO programmers use objects/classes ('Clojure for Java Programmers', talk by Rich Hickey).
So what is the methodology for a systematic (model-based ?) design of a functional application, i.e. in Lisp or Clojure? What are the common steps, what artifacts do I use, how do I map them from the problem space to the solution space?
Thank God that the software-engineering people have not yet discovered functional programming. Here are some parallels:
Many OO "design patterns" are captured as higher-order functions. For example, the Visitor pattern is known in the functional world as a "fold" (or if you are a pointy-headed theorist, a "catamorphism"). In functional languages, data types are mostly trees or tuples, and every tree type has a natural catamorphism associated with it.
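To make that concrete, here is a minimal Haskell sketch; the names Tree and foldTree are illustrative (Data.Tree in the containers library offers a similar foldTree for rose trees). The fold is the tree's catamorphism, and each "visitor" is a single call to it:

```haskell
-- A binary tree and its catamorphism (fold).

data Tree a = Leaf | Node (Tree a) a (Tree a)

-- The catamorphism: one argument per constructor.
foldTree :: b -> (b -> a -> b -> b) -> Tree a -> b
foldTree leaf node = go
  where
    go Leaf         = leaf
    go (Node l x r) = node (go l) x (go r)

-- Two "visitors", each written as a single call to the fold:
size :: Tree a -> Int
size = foldTree 0 (\l _ r -> l + 1 + r)

toList :: Tree a -> [a]
toList = foldTree [] (\l x r -> l ++ [x] ++ r)

main :: IO ()
main = print (toList (Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf)))
-- prints [1,2,3]
```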
These higher-order functions often come with certain laws of programming, aka "free theorems".
Functional programmers use diagrams much less heavily than OO programmers. Much of what is expressed in OO diagrams is instead expressed in types, or in "signatures", which you should think of as "module types". Haskell also has "type classes", which are a bit like interface types.
Those functional programmers who use types generally think that "once you get the types right, the code practically writes itself."
Not all functional languages use explicit types, but the How To Design Programs book, an excellent book for learning Scheme/Lisp/Clojure, relies heavily on "data descriptions", which are closely related to types.
So what is the methodology for a systematic (model-based ?) design of a functional application, i.e. in Lisp or Clojure?
Any design method based on data abstraction works well. I happen to think that this is easier when the language has explicit types, but it works even without. A good book about design methods for abstract data types, which is easily adapted to functional programming, is Abstraction and Specification in Program Development by Barbara Liskov and John Guttag, the first edition. Liskov won the Turing award in part for that work.
Another design methodology that is unique to Lisp is to decide what language extensions would be useful in the problem domain in which you are working, and then use hygienic macros to add these constructs to your language. A good place to read about this kind of design is Matthew Flatt's article Creating Languages in Racket. The article may be behind a paywall. You can also find more general material on this kind of design by searching for the term "domain-specific embedded language"; for particular advice and examples beyond what Matthew Flatt covers, I would probably start with Graham's On Lisp or perhaps ANSI Common Lisp.
What are the common steps, what artifacts do I use?
Common steps:
Identify the data in your program and the operations on it, and define an abstract data type representing this data.
Identify common actions or patterns of computation, and express them as higher-order functions or macros. Expect to take this step as part of refactoring.
If you're using a typed functional language, use the type checker early and often. If you're using Lisp or Clojure, the best practice is to write function contracts first, including unit tests; it's test-driven development to the max. And you will want to use whatever version of QuickCheck has been ported to your platform, which in your case looks like it's called ClojureCheck. It's an extremely powerful library for constructing random tests of code that uses higher-order functions.
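As an illustration of that last step, here is what a property test looks like in Haskell's QuickCheck (the original library; ports such as ClojureCheck follow the same pattern):

```haskell
-- Two property tests in Haskell's QuickCheck.
import Test.QuickCheck

-- Reversing twice gives back the original list.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

-- reverse distributes over (++) with its arguments swapped.
prop_reverseAppend :: [Int] -> [Int] -> Bool
prop_reverseAppend xs ys = reverse (xs ++ ys) == reverse ys ++ reverse xs

main :: IO ()
main = do
  quickCheck prop_reverseTwice   -- "+++ OK, passed 100 tests."
  quickCheck prop_reverseAppend
```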
For Clojure, I recommend going back to good old relational modeling. Out of the Tarpit is an inspirational read.
Personally I find that all the usual good practices from OO development apply in functional programming as well - just with a few minor tweaks to take account of the functional worldview. From a methodology perspective, you don't really need to do anything fundamentally different.
My experience comes from having moved from Java to Clojure in recent years.
Some examples:
Understand your business domain / data model - equally important whether you are going to design an object model or create a functional data structure with nested maps. In some ways, FP can be easier because it encourages you to think about the data model separately from functions / processes, but you still have to do both.
Service orientation in design - actually works very well from a FP perspective, since a typical service is really just a function with some side effects. I think that the "bottom up" view of software development sometimes espoused in the Lisp world is actually just good service-oriented API design principles in another guise.
Test Driven Development - works well in FP languages, in fact sometimes even better, because pure functions lend themselves extremely well to writing clear, repeatable tests without any need for setting up a stateful environment. You might also want to build separate tests to check data integrity (e.g. does this map have all the keys in it that I expect?), to balance the fact that in an OO language the class definition would enforce this for you at compile time.
Prototyping / iteration - works just as well with FP. You might even be able to prototype live with users if you get extremely good at building tools / DSLs and using them at the REPL.
OO programming tightly couples data with behavior. Functional programming separates the two. So you don't have class diagrams, but you do have data structures, and you particularly have algebraic data types. Those types can be written to very tightly match your domain, including eliminating impossible values by construction.
So there aren't books and books on it, but there is a well established approach to, as the saying goes, make impossible values unrepresentable.
In so doing, you can make a range of choices about representing certain types of data as functions instead, and conversely, representing certain functions as a union of data types instead so that you can get, e.g., serialization, tighter specification, optimization, etc.
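A minimal Haskell sketch of what "impossible values are unrepresentable" looks like in practice; the connection domain and all names here are invented for illustration:

```haskell
-- Each state carries exactly the data that can exist in that state,
-- so a value like "connected but no socket" cannot be constructed.

type HostName = String
type Socket   = Int  -- stand-in for a real socket handle

data Connection
  = Disconnected               -- no socket, no peer
  | Connecting HostName        -- we know the peer, no socket yet
  | Connected  HostName Socket -- a live socket exists

-- Pattern matching forces every state to be handled.
describe :: Connection -> String
describe Disconnected    = "idle"
describe (Connecting h)  = "dialing " ++ h
describe (Connected h _) = "talking to " ++ h

main :: IO ()
main = putStrLn (describe (Connecting "example.org"))
```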
Then, given that, you write functions over your ADTs such that you establish some sort of algebra, i.e. there are fixed laws which hold for these functions. Some are maybe idempotent (the same after multiple applications). Some are associative. Some are transitive, etc.
Now you have a domain over which you have functions which compose according to well behaved laws. A simple embedded DSL!
Oh, and given properties, you can of course write automated randomized tests of them (à la QuickCheck), and that's just the beginning.
Object Oriented design isn't the same thing as software engineering. Software engineering has to do with the entire process of how we go from requirements to a working system, on time and with a low defect rate. Functional programming may be different from OO, but it does not do away with requirements, high level and detailed designs, verification and testing, software metrics, estimation, and all that other "software engineering stuff".
Furthermore, functional programs do exhibit modularity and other structure. Your detailed designs have to be expressed in terms of the concepts in that structure.
One approach is to create an internal DSL within the functional programming language of choice. The "model" then is a set of business rules expressed in the DSL.
See my answer to another post:
How does Clojure approach Separation of Concerns?
I agree more needs to be written on the subject of how to structure large applications that use an FP approach (plus more needs to be done to document FP-driven UIs).
While this might be considered naive and simplistic, I think "design recipes" (a systematic approach to problem solving applied to programming as advocated by Felleisen et al. in their book HtDP) would be close to what you seem to be looking for.
Here are a few links:
http://www.northeastern.edu/magazine/0301/programming.html
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.86.8371
I've recently found this book:
Functional and Reactive Domain Modeling
I think is perfectly in line with your question.
From the book description:
Functional and Reactive Domain Modeling teaches you how to think of the domain model in terms of pure functions and how to compose them to build larger abstractions. You will start with the basics of functional programming and gradually progress to the advanced concepts and patterns that you need to know to implement complex domain models. The book demonstrates how advanced FP patterns like algebraic data types, typeclass based design, and isolation of side-effects can make your model compose for readability and verifiability.
There is the "program calculation" / "design by calculation" style associated with Prof. Richard Bird and the Algebra of Programming group at Oxford University (UK); I don't think it's too far-fetched to consider this a methodology.
Personally while I like the work produced by the AoP group, I don't have the discipline to practice design in this way myself. However that's my shortcoming, and not one of program calculation.
I've found Behavior Driven Development to be a natural fit for rapidly developing code in both Clojure and SBCL. The real upside of leveraging BDD with a functional language is that I tend to write much finer-grained unit tests than I usually do when using procedural languages, because I do a much better job of decomposing the problem into smaller chunks of functionality.
Honestly if you want design recipes for functional programs, take a look at the standard function libraries such as Haskell's Prelude. In FP, patterns are usually captured by higher order procedures (functions that operate on functions) themselves. So if a pattern is seen, often a higher order function is simply created to capture that pattern.
A good example is fmap. This function takes a function as an argument and applies it to all the "elements" of the second argument. Since it is part of the Functor type class, any instance of a Functor (such as a list, graph, etc.) may be passed as the second argument to this function. It captures the general behavior of applying a function to every element of its second argument.
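For instance, the following uses nothing beyond the standard Prelude; the same fmap works on lists and on Maybe because both are Functor instances:

```haskell
-- fmap applies a function to every "element" of any Functor.
import Data.Char (toUpper)

upList :: [String]
upList = fmap (fmap toUpper) ["fold", "map"]  -- ["FOLD","MAP"]

upMaybe :: Maybe Int
upMaybe = fmap (+ 1) (Just 41)                -- Just 42

noOp :: Maybe Int
noOp = fmap (+ 1) Nothing                     -- Nothing: shape preserved

main :: IO ()
main = print (upList, upMaybe, noOp)
```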
Well,
Many functional programming languages have been used at universities for a long time, mostly for "small toy problems".
They are getting more popular now since OOP has difficulties with "parallel programming" because of "state". And sometimes the functional style is better for the problem at hand, as with Google's MapReduce.
I am sure that when the functional guys hit the wall [try to implement systems bigger than 1,000,000 lines of code], some of them will come up with new software-engineering methodologies with buzzwords :-). They should answer the old question: how do we divide a system into pieces so that we can "bite" each piece one at a time [working in an iterative, incremental and evolutionary way] using the functional style?
It is certain that the functional style will affect our object-oriented style. We have already "stolen" many concepts from functional systems and adapted them to our OOP languages.
But will functional programs be used for such big systems? Will they become mainstream? That is the question.
And nobody can come up with a realistic methodology without implementing such big systems, making his or her hands dirty.
First you should make your hands dirty, then suggest a solution. Solutions and suggestions without "real pains and dirt" will be "fantasy".

Should we teach pointers in a "fundamentals of programming" course?

I will be teaching a course on the fundamentals of programming next Fall, first year computer science course. What are the pros and cons of teaching pointers in such a course? (My position: they should be taught).
Edit: My problem with the "cater your audience" argument is that in the first couple of years in University, we (profs) do not know if students would like to be scientists or not... we wish we knew, but we have to strike a balance between those who will remain in school (4 years does not a scientist make), and those who will be engineers.
Final decision: At least references, but possibly pointers without pointer arithmetic.
At the very least you should teach references or some equivalent concept. I think you should probably take it easy on things like pointer arithmetic, C arrays, and strings, but indirection is a very important concept in computer science, and students should be introduced to it.
Yes.
Pointers underpin a huge number of concepts in other, higher level languages, and I'm firmly of the opinion that you need to teach a certain amount of the lower-level stuff to facilitate a good understanding of why we bother with anything higher level at all.
Once you understand a bit about how memory is allocated, and how it's addressed and manipulated with pointers, explaining a lot of other constructs gets easier. For example, explaining a NullPointerException in Java, or even the concept of references in such languages is child's play if you've got someone who understands pointers in C (and better still, if they also grok references in C++).
Absolutely teach them. Understanding indirection is essential for programming, whether it's with pointers, references, dynamic binding, or any number of other things. Now obviously don't start off with them, but understanding indirection is at least as important as understanding control flow ideas.
The con of course is that some people just won't get it and will do poorly or drop out. If this is a course for people who want to be CS majors, then don't sweat it, because you're just giving them incentive to switch majors earlier rather than later. If it's more of a general-ed course for people who are kind of interested in programming, then they should probably still be introduced to pointers, but not graded harshly or heavily on them.
During my first year as a CS student, I took a Java course in the fall, which was the general intro. The professor didn't teach pointers directly, but he did teach the concept of references, and why you can modify objects but not primitives when either is passed as an argument.
During my 2nd semester, I took the next course in the series, which was about C, and this class heavily relied on pointers.
For an intro to programming class, I'd say just mention references, but not pointers directly.
I think that a "fundamentals of programming" course should at least touch on basic processor architecture and assembly language, and if it does, you can't really make a case for not discussing pointers.
If you only teach higher-level (byte-code) languages, then I guess pointers would confuse the audience.
Pros: solid understanding of the way that memory is used by the machine, the difference between (and pitfalls of) pointers to data on the heap vs. pointers to data on the stack, passing methods by address, etc.
Cons: complex for an audience that is not yet knowledgeable about computer architecture (or has not had enough time to assimilate the concepts), including what the stack is, what registers are, calling conventions, etc.
So, to summarize, it depends a lot on your audience and on the language(s) you'll tackle (pointers will be meaningless in the context of LISP or Java), as well as on how deep you are willing to go in the direction of what is heap, what is stack, how scope is translated into stack (i.e. why never to return a pointer to a local variable), etc.
When I taught pointers to an engineering class I ultimately fired up a debugger on a simple "hello world" program, and showed the students the actual machine code, register values and corresponding memory dumps, with the stack manipulation and parameter passing, etc., but they were ready for it. Would your audience be receptive to such a drill-down expedition, to ensure solid understanding of what's going on behind the scenes, and would you be willing to go to such lengths? :)
I think you shouldn't teach them first, but later, once the basic concepts of programming have been acquired.
A good example is the latest Stroustrup book, Programming: Principles and Practice Using C++, where he teaches how to build a parser, I/O (streams) usage, and GUI usage before even talking about pointers!
I think it will be a good reference for teaching because it is more natural to understand the way we build ideas instead of how many constraints (memory management, for example) we have to handle at the same time to make a piece of software work. I really recommend this book for a fresh perspective on teaching the fundamentals of programming.
It really depends on the goal of your course - teaching programming and teaching computer science are two separate goals, and though they are not mutually exclusive, introductory classes generally do not teach both equally well. Here's an example of the difference: say we want to learn how to sort a list. A programming course in C++ would teach you to use the syntax of a std::sort function template, and homework might be writing several comparison functors. A computer science course would explain to you what a merge sort is, what the algorithm looks like in pseudo-code, and its performance/space characteristics, and homework would be writing the sort function itself.
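For concreteness, here is the "computer science homework" from that example, i.e. merge sort itself; it is sketched in Haskell rather than C++ purely for brevity:

```haskell
-- Merge sort: split in half, sort each half, merge the results.

mergeSort :: Ord a => [a] -> [a]
mergeSort []  = []
mergeSort [x] = [x]
mergeSort xs  = merge (mergeSort left) (mergeSort right)
  where
    (left, right) = splitAt (length xs `div` 2) xs

-- Merging two sorted lists is O(n); with O(log n) levels of
-- splitting, the whole sort is O(n log n).
merge :: Ord a => [a] -> [a] -> [a]
merge [] ys = ys
merge xs [] = xs
merge (x:xs) (y:ys)
  | x <= y    = x : merge xs (y:ys)
  | otherwise = y : merge (x:xs) ys

main :: IO ()
main = print (mergeSort [3, 1, 4, 1, 5, 9, 2, 6 :: Int])
```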
So if you are teaching introductory programming, then yes, you should teach your students about pointers.
If you are teaching computer science, then no, there is no need to understand pointers at an introductory level.
Anybody who calls themselves a good programmer must know how pointers work; being a good programmer implies that they do not know only a single programming language, but that they know how programming languages work in general, allowing them to adapt to programming languages they haven't seen before.
This doesn't mean that a fundamentals of programming course should be teaching pointers, however.
If your goal is to give these people a complete, well-rounded familiarity with programming languages in general, then yes, pointers should be part of that.
If your way of introducing them to programming is to use one programming language at first, with the intention of covering other languages in subsequent courses, and pointers are not relevant to that language, then there's no need to talk about pointers yet.
I think there's a lot to be said by starting people out in one language only, rather than trying to cover every style of language at once.
My first introductory programming course used Haskell. It wasn't until a subsequent course using C that pointers were introduced (I was already a good C and C++ programmer when I took the course; those subjects were mandatory).

Is programming a subset of math? [closed]

I've heard many times that all programming is really a subset of math. Some suggest that OO, at its roots, is mathematically based, but I don't get the connection, aside from some obvious examples:
using induction to prove a recursive algorithm,
formal correctness proofs,
functional languages,
lambda calculus,
asymptotic complexity,
DFAs, NFAs, Turing Machines, and theoretical computation in general,
and the fact that everything on the box is binary.
I know math is very important to programming, but I struggle with this "subset" view. In what ways is programming a subset of math?
I'm looking for an explanation that might have relevance to enterprise/OO development, if there is a strong enough connection, that is.
It's math in the sense that it requires abstract thought about algorithms etc.
It's engineering when it involves planning schedules, deliverables, testing.
It's art when you have no idea how it's going to eventually turn out.
Programming is one of the most difficult branches of applied mathematics; the poorer mathematicians had better remain pure mathematicians.
--E. W. Dijkstra
Overall, remember that mathematics is a formal codification of logic, which is also what we do in software.
The list of topics in your question is loaded with mathematical problems. We are able to do programming at a fairly high level of abstraction, so the raw mathematics may not be staring you in the face. For example, you mentioned DFAs: you can use a regular expression in your programs without knowing any math, but you'll find more of a need for mathematics when you want to design a good regular expression engine.
I think you've hit on an interesting point. Programming is an art and a science. There are a lot of "tools of the trade", and you don't necessarily sit down and do a lot of high-level mathematics in order to simply write a program. In fact, when you're programming, you may not really be doing much mathematics or computer science.
It's when we start to solve difficult problems in computer science that mathematics shows up. The deeper you go, the more it will flesh itself out, often at lower levels of abstraction.
There are also some realms of programming that you don't necessarily have to work in, but they involve more math. For example, while you can certainly learn a language and write some apps without any formal mathematics, you won't get very far in algorithm analysis without some applied math.
OK, I was a math and CS major in college. I would say that if the set A is Math and the set B is CS, then A intersects B. It's not a subset.
There's no doubt that many of the fathers and mothers of computer science were mathematicians, like Turing and Dijkstra. Most of the founders of the internet had PhDs in math, physics, or engineering. Most of the core concepts of computer science come from math, but the act of programming isn't really math. Math helps us in our daily lives, but the two aren't the same.
But there is no doubt that the original reasoning behind the computer was to, well, compute things. We have come a long way from there in such a short time.
It doesn't mention programming, but the idea is still relevant.
Einstein was known in 1917 as a famous mathematician. It wasn't until Hiroshima that the general public finally came around to the realization that physics is not just applied mathematics.
When people don't understand something, they try to understand it as a type of something that they do understand. They think by analogy. Programming has been described as a field of math, engineering, science, art, craft, construction... None of these are completely false; it borrows from all of these. The real issue is that the field of programming is only about 50 years old. People have not integrated it into their mental taxonomies.
There's a lot of confusion here.
First of all, "programming" does not (currently) equal "computer science." When Dijkstra called himself a "programmer" (more or less inventing the title), he was not pumping out CRUD applications, but actually doing applied computer science. Let's not let that confuse us: today, there is a vast difference between what most programmers in a business setting do and computer science.
Now, the argument can be made that computer science is a branch of mathematics; but, as Knuth points out (in his paper "Computer Science and its Relation to Mathematics", collected in his Selected Papers on Computer Science) it can also be argued that mathematics is a branch of computer science.
In fact, I'd strongly recommend this paper to anyone thinking about the relationship between mathematics and computer science, as Knuth lays out the territory nicely.
But, to return to your original question: to a practitioner, "enterprise/OO development" is pretty far removed from mathematics, but that's largely because most of the serious mathematics involved at the lower levels of operation has been abstracted away (by compilers, operating systems, instruction sets, etc.). Similarly, advanced knowledge of the physics of the internal combustion engine is not required for driving a car. Naturally, if you want to design a more efficient car....
if your definition of math includes all forms of formal logic, and programming is defined only by the logic and calculations extant in the code, then programming is a subset of math QED ;-)
but this is like saying that painting is merely putting colored pigments on a surface - it completely ignores the art, the insight, the intuition, the entire creative process
one could argue that music is a subset of math by the same reasoning
so i'd have to say no, programming is not a subset of math. Programming uses a subset of math, but requires non-math skills/talent as well [much like music composition]
Disclaimer: I work as an IT consultant and develop mainly portals and architecture stuff. I have a Psychology degree. I never studied Maths at university. And I get my job done. And usually well. Why? Because I don't think you need to know Maths (as in 'heavy' Maths stuff) to write code. You need analytical thinking, problem-solving skills, and a high level of abstraction. But Maths does not give you that. It's just another discipline that requires similar skills. My studies in Psychology also apply to my daily work when dealing with usability issues and data storage. Linguistics and Semiotics also play a part.
But wait, just don't flame me yet. I'm not saying Maths is not needed at all for computers - obviously, you need real Maths skills when designing encryption algorithms, hardware, etc. - but if, like lots of programmers, you just work in a mid/low-level language (like C) or on higher-level stuff (like C# or Java), consuming mostly pre-built frameworks and APIs, you don't really need to understand the mathematical principles behind Fourier transforms or Huffman trees or Möbius strips... let someone else handle that, and let me build value on top of it. I am not stupid. I know the difference between linear and exponential algorithms, data structures, etc. I just don't have the interest to rewrite quicksort or a spiffy new video compression technique.
Well, aside from all that...!
Math is used for many aspects of programming such as
Creating efficient and smart algorithms
Understanding Big O notation
Security (such as RSA)
Many more...
I think that programming needs math to survive. But I wouldn't call it a subset. It's just like blowing glass uses properties of physics, but those artists don't call themselves physicists.
The foundation of everything we do is math.
Luckily, we don't need to be good at math itself to do it. Just like you don't need to understand physics to drive a car or even fly a plane.
The difference between programming and pure mathematics is the concept of state.
Have a look at http://en.wikipedia.org/wiki/Dynamic_logic_(modal_logic). It's a way of mathematically analyzing things changing through time. Also, Hoare triples are a way of formalizing the input-output behavior of programs. With some axioms dealing with the sequential composition of programs and how assignment works, you can perfectly well deal with state changing over time in a mathematically rigorous way.
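As a small illustration (a standard textbook example, not taken from the linked article), a Hoare triple {P} S {Q} says: if P holds before S runs, then Q holds afterwards. The assignment axiom and one instance of it:

```latex
% General form: precondition, program fragment, postcondition.
\{P\}\; S \;\{Q\}

% The assignment axiom: substitute E for x in the postcondition.
\{Q[E/x]\}\; x := E \;\{Q\}

% One instance, with Q = (x > 0) and E = x + 1:
\{x + 1 > 0\}\; x := x + 1 \;\{x > 0\}
```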
If the math you know is insufficient, "invent" some new math to deal with what you want to analyze. Newton and Leibniz did it for analysis (aka calculus, I think). No reason to not do it for computation and programming.
I don't believe I've heard that programming is a subset of math. Even the link you provide is simply a proposed approach to programming (not claiming it's a subset of mathematics) and the wiki page has plenty of disagreements in it as well.
Programming requires (at least some) applied mathematics. Mathematics can be used to help describe and analyze programs and program fragments. Programming has a very close relationship with math and uses it and concepts from it heavily. But subset? no.
I'd love to see someone actually claim that it is one, with some clear reasoning. I don't think I ever have.
Just because you can use mathematics to reason about something does not imply that it is, ipso facto, a mathematical object. Mathematics is used to reason about internal combustion engines, radioactive decay and juggling patterns. Using mathematics is not doing mathematics.
I would say...
It's partly math, especially at the theoretical level. Imagine designing efficient searching/sorting/clustering/allocating/fooifying algorithms, that's all math... running the gamut from number theory to statistics.
It's partly engineering. Complex systems can rarely achieve ideal levels of performance and reliability, and software is no exception. A lot of software development is about achieving robustness in the face of unreliable hardware and (ahem) humans.
And it's partly art. Creative and idiosyncratic software design often comes up with great new ideas... like assembly language, multitasking operating systems, graphical user interfaces, dynamic languages, and the web.
Just my 2¢...
Math + art + logic
You can actually argue that math, in the form of logical proofs, is analogous to programming: check out the Curry-Howard correspondence. It's probably more the way a mathematician would look at things, but I think this is hitting the proverbial nail on the head.
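A tiny illustration of the correspondence, using plain Haskell: read each type as a proposition and each function as a proof of it.

```haskell
-- Curry-Howard in miniature; no language extensions needed.

-- "A implies A": the identity proof.
refl :: a -> a
refl x = x

-- Modus ponens: from (A implies B) and A, conclude B.
mp :: (a -> b) -> a -> b
mp f x = f x

-- Conjunction elimination: "(A and B) implies A" is a projection.
andElim :: (a, b) -> a
andElim (x, _) = x

-- Hypothetical syllogism: implications compose.
syllogism :: (a -> b) -> (b -> c) -> (a -> c)
syllogism f g = g . f

main :: IO ()
main = print (syllogism (+ 1) (* 2) (20 :: Int))  -- 42
```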
Programming may have originally started as a quasi-subset of math, but the increasing complexity of the field over time has led to programming being the art and science of creating good abstractions for information processing and computation.
Programming does involve math, engineering, and an aesthetic sense for good design and implementation. Algorithms are an extension of mathematics, and the systems engineering side overlaps with other engineering disciplines to some degree. However, neither mathematics nor other engineering fields have the same level of need for complex, flexible, and yet understandable abstractions that can be used and adapted at so many different levels to solve new and evolving problems.
It is the need for useful, flexible, and dynamic abstractions which led first to the creation of function libraries, then class/component libraries, and in more recent years design patterns and service-oriented architectures. Although the latter have more of a design focus, they are a reaction to the increasing need to build high-level abstractional bridges between programming problems and solutions.
For all of these reasons, programming is neither a subset nor a superset of math. It is simply yet another field which uses math that has deeper roots in it than others do.
The topics you listed are topics in Theoretical Computer Science, and THAT is a branch of Pure Mathematics. Programming is an applied science which uses theoretical computer science. Programming itself isn't a branch of mathematics, but the lambda calculus, theory of computation, formal logic, set theory, etc. that programming languages are based on are.
Also I completely disagree with Dijkstra. It's either self-congratulatory or Dijkstra is being misquoted/quoted out of context. Pure mathematics is a very very very difficult field. It is so enormously abstract that no branch of applied mathematics is comparable in difficulty. It is one field that requires enormous leaps of imagination. I did my first degree in computer science where I focused a lot on theoretical CS and applied areas like programming, OS, compilers. I also did a degree in Electrical Engineering - arguably the most difficult branch of engineering - and worked on difficult areas of applied mathematics like Maxwell's equations, control theory and partial differential equations in general.
I've also done research in applied and pure mathematics, and to this day I find applied far easier. As for the pure mathematicians, they're a whole different breed.
Now there's a tendency for someone to study a year or two of calculus unhinged from application and conclude that pure mathematics is easy. They have no idea what they're talking about. Studying calculus or even topology unhinged from application does not give you any inkling of what a pure mathematician does. The task of actually proving those theorems is so profoundly difficult that I will defer to a computer scientist to point out the distinction:
"If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in 'creative leaps,' no fundamental gap between solving a problem and recognizing the solution once it’s found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss..." —Scott Aaronson, (Theoretical Computer Scientist, MIT)
I think mathematics provides a set of tools for programmers which they use at an abstract level to solve real-world problems.
I would say that programming is less about math than it used to be, as we move up to 4th-generation languages. Assembly is very much about math; C#, not so much. Thoughts?
If you just want the design specs handed out to you by your boss, then it's not much math, but such work isn't fun at all... However, coming up with how to do things does require mathematical ideas, at least things like abstraction, graphs, sometimes number theory, and, depending on the problem, calculus. Personally, the more I've been involved with programming, the more I see its mathematical side. However, most of the time, IMO, you can just pick up a book from the library and look up the basics of the thing you need to do, but that requires some mathematical grasp upfront.
You really can't design "good" algorithms without understanding the maths behind them. Searching on Google only takes you so far.
Programming is too wide a subject. Good software is based not only on math (logic) but also on psychology, linguistics, etc. Algorithms are part of math, but there are many other programming-related things besides algorithms.
As a mathematician, it is clear to me that Math is not equal to Programming but that the process which is used to solve problems in either discipline is extremely similar.
Solving higher-level mathematics questions requires analytical thinking, a toolbox of possible ways of solving problems, experience with the field, and some formalized ways of constructing your answer so that other mathematicians agree. If you find a particularly clever, abstract, or elegant way of solving a problem, you get kudos from your fellow mathematicians. For particularly difficult math problems, you may solve the problem in stages and codify your stage arguments using things called conjectures and proofs.
I think programming involves the same set of skills. In programming, the same set of principles applies to solving and presenting solutions to problems. When you have a partial solution to a programming dilemma, you include it as part of your personal library and use it as part of another, bigger problem later. These skills seem very similar to the skills used in mathematics.
The major difference between Math and Programming is the latter has a lot more in common between different disciplines of programming than Math does. Two fields of mathematics can be very, very different in presentation and what is used to communicate the field. By contrast, programming structures, to me at least, look very similar in many different languages.
The difference between programming and pure mathematics is the concept of state. A program is a state machine that uses logic (maths) to transition between states. The actual logic used to transition between states is usually very simple, which is why being a math genius doesn't necessarily help you all that much as a programmer.
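A minimal sketch of that view in Haskell: the classic turnstile state machine, where the transition logic is (as claimed) nothing more than a simple case analysis. All names are illustrative.

```haskell
-- A program as a state machine: the classic turnstile.

data State = Locked | Unlocked deriving Show
data Input = Coin | Push

-- The transition logic really is simple: a case analysis.
step :: State -> Input -> State
step Locked   Coin = Unlocked
step Locked   Push = Locked
step Unlocked Push = Locked
step Unlocked Coin = Unlocked

-- Running the machine over an input sequence is a fold.
run :: [Input] -> State
run = foldl step Locked

main :: IO ()
main = print (run [Coin, Push, Push, Coin])  -- Unlocked
```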
Part of the reason I'm a programmer is because I don't like math. I have no problem with math itself, and I'm fine with it conceptually, I just don't like doing calculations by hand. When I found I could tell a computer what the math problem is and let it do the calculating for me, a life-long passion and career was born.
To answer the question, according to my alma mater, math == programming since they allowed me to take Intro to C++ to fulfill my math requirement.
Edit: I should mention my degree is in telecommunications which, at the time, had only the standard liberal arts math requirement of one semester.
Math is the purest form of truth. Everything inherits from math.
Amen.
It's interesting to compare programming with music too. In the UK, anyway, there are computing-based undergraduate university courses that will accept applicants on the basis of music qualifications as opposed to computing ones, due to the logic, patterns, etc. involved.
Maths is powerful; programming is powerful. If maths is a subset of programming, then it is equally true to state that programming is a subset of maths.
Maths is described using language, often written down. Is maths therefore a subset of writing too?
Historically, maths came before computer programming, but then lists and processes probably preceded maths, both of which could equally be thought of as mathematical or to do with programming.
Certainly programming can be represented using maths, so there is some basis for it being true that programming is a subset of maths. However, a computer program could also implement maths, representing information symbolically, as maths typically does when done on paper, including the infinite and the only somewhat defined, working from the fundamental axioms, as well as allowing higher-level structures to be defined that use each other and other sorts of relationships beyond composition, supporting the drawing of diagrams and allowing the system to be expanded. Maths is equally a subset of programming.
While maths can represent structures such as words, maths is by design about numbers. Strings, for example, are more programmatic than mathematical.
It's half math, half man speak, duh.

Resources