functional programming and self-commenting code - is this really possible? - functional-programming

As I have a lot of spare time at the moment, I've read a few threads/comments on code comments and documentation here.
Like most people here, I think you should write your code so that it's easy to read and self-commenting as far as possible.
On the other hand I am a huge FP-fanboy - and yes if you write your code the right way it will be very readable in FP - or so I thought.
The problem is that tiny things make an awful lot of difference in the FP world. If your colleague doesn't fully understand FP, he might be able to "read" the intention of the code but won't be able to modify or fully understand it. That holds for languages like Haskell, where a '.' or '$' makes a big difference, and also for languages like F#, or even C# or VB.NET with lots of LINQ statements.
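For example, something as small as this (a rough Haskell sketch, just to illustrate the kind of difference I mean):

    f, g :: [Int] -> Int
    f = sum . map (*2)          -- '.' composes: builds a new function to apply later
    g xs = sum $ map (*2) xs    -- '$' applies: it only saves a pair of parentheses

    -- Both compute the same thing, but swap one for the other and it breaks:
    -- "sum . map (*2) [1,2,3]" is a type error, while "sum $ map (*2) [1,2,3]" is 12.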
At first glance the problem might be that your peer just doesn't get the language and it's not the code's fault - but on the other hand: who truly gets all of FP? Look at some papers on Haskell - the code is beautifully crafted and self-commenting, but just as in math you may have to chew on a line for several minutes before you get it.
Of course, in those papers there will be a block of text just after the code trying to clarify it...
So IMHO you have to comment your FP code as long as you work in a shop where not every colleague has a PhD in CS ;)
What do you think?
PS: first post here - I really looked for answers to this question but didn't find any - please be gentle if I just didn't look hard enough :)

Functional languages greatly favor the development of self-documenting code, because you can freely rearrange the order of functions, and easily abstract out any given part of the code, assigning it an explanatory name.
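For instance (a quick Haskell sketch of my own), every step of a pipeline can be given an explanatory name, and the pieces can be reordered or reused freely:

    averageWordLength :: String -> Double
    averageWordLength = mean . wordLengths
      where
        wordLengths = map (fromIntegral . length) . words
        mean xs     = sum xs / fromIntegral (length xs)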
Abstract, abstract, abstract is the key to mastering code complexity, and that's where the functional style shines. But there will always be things that cannot be expressed within the code itself.
One clear example is code for algorithms. It is unlikely that one can easily understand a complex algorithm just by looking at the implementation. Yes, functional languages make understanding simpler, because many gory details (trivial example: memory management) do not have to be coded explicitly, thus exposing the underlying logic more clearly.
However this is no substitute for an explanation in natural language, which conveys in an intuitive way how it works (and sometimes a picture is worth a thousand words). This is because our brain needs to observe difficult concepts from different points of view in order to understand them fully.
What to comment also depends on your audience. Beginners, average programmers or wizards? There is no one-fits-all solution.
E.g. you should explain the meaning of a "." (function composition) in Haskell if you are writing tutorial code, but certainly that would be a redundant explanation for anyone who has gone past chapter one/two of any Haskell book.
On the other hand some specific algorithm, like say red-black trees, could be a given for some programmers, and something very mysterious for others. In the second case you should add many comments to the code, or point to a document with further explanations.
Finally, one should note that there is no consensus even among the masters. E.g. Dennis Ritchie is famous for being extremely parsimonious with comments, whereas Don Knuth is an advocate of "literate programming", where comments are as important as the code itself. A set of rules will never be a substitute for personal taste.

How can I learn higher-level programming-related math without much formal training? [closed]

I haven't taken any math classes above basic college calculus. However, in the course of my programming work, I've picked up a lot of math and comp sci from blogs and reading, and I genuinely believe I have a decent mathematical mind. I enjoy and have success doing Project Euler, for example.
I want to dive in and really start learning some cool math, particularly discrete mathematics, set theory, graph theory, number theory, combinatorics, category theory, lambda calculus, etc.
My impression so far is that I'm well equipped to take these on at a conceptual level, but I'm having a really hard time with the mathematical language and symbols. I just don't "speak the language" and though I'm trying to learn it, the going is extremely slow. It can take me hours to work through even one formula- or terminology-heavy paragraph. And yeah, I can look up terms and definitions, but it's a terribly onerous process that very much obscures the theoretical simplicity of what I'm trying to learn.
I'm really afraid I'm going to have to back up to where I left off, get a mid-level math textbook, and invest some serious time in exercises to train myself in that way of thought. This sounds amazingly boring, though, so I wondered if anyone else has any ideas or experience with this.
If you don't want to attend a class, you still need to get what the class would have given you: time in the material and lots of practice.
So, grab that text book and start doing the practice problems. There really isn't any other way (unless you've figured out how osmosis can actually happen...).
There is no knowledge that can only be gained in a classroom.
Check out the MIT Courseware for Mathematics
Also their YouTube site
Project Euler is also a great way to think about math as it relates to programming
Take a class at your local community college. If you're like me you'd need the structure. There's something to be said for the pressure of being graded. I mean there's so much to learn that going solo is really impractical if you want to have more than just a passing nod-your-head-mm-hmm sort of understanding.
Sounds like you're in the same position I am. What I'm finding out about math education is that most of it is taught incorrectly. Whether a cause or result of this, I also find most math texts are written incorrectly. Exceptions are rare, but notable. For instance, anything written by Donald Knuth is a step in the right direction.
Here are a couple of articles that state the problem quite clearly:
A Gentle Introduction To Learning Calculus
Developing Your Intuition For Math
And here's an article on a simple study technique that aims at retaining knowledge:
Teaching linear algebra
Consider auditing classes in discrete mathematics and proofs at a local university. The discrete math class will teach you some really useful stuff (graph theory, combinatorics, etc.), and the proofs class will teach you more about the mathematical style of thinking and writing.
I'd agree with @John Kugelman that classes are the way to go to get it done properly, but I'd add that if you don't want to take classes, the internet has many resources to help you, including recorded lectures, which I find can be more approachable than books and papers.
I'd recommend checking out MIT Open Courseware. There's a Maths for Computer Science module there, and I'm enjoying working through Gilbert Strang's Linear Algebra course of video lectures.
Youtube and videolectures.com are also good resources for video lectures.
Finally, there's a free Maths for CS book at bookboon.
To this list I would now add The Haskell Road to Logic, Maths and Programming, and Conceptual Mathematics: A First Introduction to Categories.
--- Nov 16 '09 answer for posterity--
Two books. Diestel's Graph Theory, and Knuth's Concrete Mathematics. Once you get the hang of those try CAGES.
Find a good mentor who is an expert in the field who is willing to spend time with you on a regular basis.
There is a sort of trick to learning dense material, like math and mathematical CS. Learning unfamiliar abstract stuff is hard, and the most effective way to do it is to familiarize yourself with it in stages. First, you need to skim it: don't worry if you don't understand everything in the first pass. Then take a break; after you have rested, go through it again in more depth. Lather, rinse, repeat; meditate, and eventually you may become enlightened.
I'm not sure exactly where I'd start, to become familiar with the language of mathematics; I just ended up reading through lots of papers until I got better at it. You might look for introductory textbooks on formal mathematical logic, since a lot of math (especially in language theory) is based off of that; if you learn to hack the formal stuff a bit, the everyday notation might look a bit easier.
You should probably look through books on topics you're personally interested in; the inherent interest should help get you over the hump. Also, make sure you find texts that are actually introductory; I have become wary of slim, undecorated hardbacks labeled Elementary Foobar Theory, which tend to be elementary only to postdocs with a PhD in Foobar.
A word of warning: do not start out with category theory -- it is the most boring math I have ever encountered! Due to its relevance to language design and type theory, I would like to know more about it, but so far I have not been able to deal...
For a nice, scattershot intro to bits of many kinds of CS-ish math, I recommend Gödel, Escher, Bach by Hofstadter (if you haven't read it already, of course). It's not a formal math book, though, so it won't help you with the familiarity problem, but it is quite inspirational.
Mathematical notation is akin to several computer languages:
concise
exacting
based on many idioms
a fair amount of local variations and conventions
As with a computer language, you don't need to "wash the whole elephant at once": take it one part at a time.
A tentative plan for you could be
identify areas of mathematics that are interesting or important to you. (seems you already have a bit of a sense for that, CS has helped you develop quite a culture for it.)
take (or merely audit) a few formal classes in this area. I agree with several answers in this post that an in-person course at a local college is preferable, but, at least at first, or to be sure to get the most out of a particular class, teaching yourself the area with MIT OCW, similar online resources and associated books is fine.
if an area of math turns out to demand too much as a prerequisite - fluency with notation, some underlying concept, or (most often) mechanical computation and transformation techniques - no problem! Just backtrack a bit, learn those foundations (and just those foundations!) and move forward again.
Find a "guru", someone who has a broad mathematical culture and exposure - not necessarily a mathematician; physics folks are good too, and indeed they can often articulate math in a more practical fashion. Use this guru to guide you, as he/she can show you how the big pieces fit together.
Note: There is little to be gained from learning mathematical notation for its own sake. Rather, it should be learned in context, just as, say, a C# idiom is better memorized when used and associated with a specific task than learned in vacuo. A related SO posting, however, provides several resources to decipher and learn mathematical notation.
Project Euler takes problems out of context and drops them in for people to solve. Project Euler cannot teach you anything effectively. I think you should forget about it; just because it is popular does not mean anything. You cannot study mathematics through Project Euler, as it contains only bits and pieces (and some pretty high-level pieces) that you're supposed to know in order to solve the problems. Learning mathematics means picking a subject, reading a book about it, and solving exercises or reading solutions; that's how you learn math. If it so happens that through your reading you find something that is close to some Project Euler problem, you're lucky, but otherwise Project Euler is a complete waste of time. I think the time is much better invested choosing a particular branch of mathematics and studying that. Let me explain why: I solved 3 pretty advanced Project Euler problems and they all appealed to knowledge from number theory, which I happened to have because I had studied some part of it. I do not think I learned anything from Project Euler; it just happened that I already knew some number theory and solved the problems.
For example, if you find out you like number theory, take H. Davenport -> Hardy & Wright -> Kenneth Rosen's, and study those.
If you like graph theory, take Reinhard Diestel's book, which is freely available, and study that (or check books.google.com and find whichever is more appropriate to your taste). But don't spread your attention in 999999 directions just because Project Euler has problems ranging from dynamic programming to advanced geometry or advanced number theory; that is clearly the wrong way to go and it will not bring you closer to your goal.
This sounds amazingly boring
Well ... Mathematics is not boring when you find some problem that you are attached to, which you like and you'd like to find the solution to, and when you have sufficient time to reflect on it while not behind a computer screen. Mathematics is done with pen and paper mostly (yes, you can use computers... but that's not really the point).
So, if you find a real-world problem, or some programming problem, that would benefit from you knowing some advanced maths, and you know what maths you have to study, it can be motivating to learn in that way.
If you feel you are not motivated it is hard to study properly.
There is also the question of what you actually mean when you say learn. Does the learning process stop after you've solved the problems at the end of the chapter of a book? Well, you decide. You can consider that you have finished learning that subject, or you can consider that you have not finished and read more about it. There are entire books on just one equation and variations of it.
The amount of programming-related math that you can learn without formal training is limited, but it's more than enough. But maybe you can teach yourself.
It all boils down to your resources and motivation.
To know mathematics you have to do mathematics, not programming (Project Euler).
For beginning to learn category theory I recommend David Spivak's Category Theory for the Sciences (AKA Category Theory for Scientists) because it's relatively comprehensible, thanks to many examples that enable understanding by analogy, and it quickly builds a foundation for understanding more abstract concepts.
It requires the ability to reason logically and an intuitive notion of what is a set. It proceeds from sets and functions through basic category theory to adjoint functors, categories of functors, sheaves, monads and an introduction to operads. Two main threads throughout are modeling databases in terms of categories and describing categories with annotated diagrams called ologs. The bibliography provides references to more advanced and specialized topics including recent papers by Dr. Spivak.
An expected outcome from reading this book is the capability of understanding category theory texts and papers written for mathematicians, such as Mac Lane's Categories for the Working Mathematician.
In PDF format it is available from http://math.mit.edu/~dspivak/teaching/sp13/ (the dynamic version is recommended since it's the most recent). The open access HTML version is available from https://mitpress.mit.edu/books/category-theory-sciences (which is recommended since it includes additional content, including answers to some exercises).

Structure and Interpretation of Computer Programs, what level of maths ability is required?

I regrettably haven't studied mathematics since I was 16 (GCSE level); I'm now a 27-year-old C# developer.
Would it be a fruitless exercise trying to work through Structure and Interpretation of Computer Programs (SICP)?
What kind of mathematics standard is expected of the reader?
Having worked through all of SICP, I can tell you with confidence that you don't need a lot of math background to understand it. SICP is (used to be?) a first or second semester course in MIT, for students with practically no college/university level math. Whenever it discusses mathematical topics, it provides sufficient background for any intelligent reader to understand.
From the little you tell about yourself, it's great time to work through SICP. Reading the book and solving (at least some of) the exercises, and playing with the code of the projects, can teach you a lot about programming. Don't worry about math - you'll handle it without any problems. What's really needed is a true, deep curiosity about programming, and some patience.
It's never too late to start SICP. And it doesn't really require any higher maths at all, except perhaps in the signal processing with infinite streams parts. That can be skipped without losing too much though.
The most important thing while reading SICP is solving the problems, IMO. Some of the tougher ones can be mind-expanding and force you to really understand the topic. If you are confident about some solution you can skip it though. And the solutions can be found at - http://eli.thegreenplace.net/category/programming/lisp/sicp/
The danger in reading SICP is that after completing it, you will not like using any programming language other than Scheme. :)
I had a gander at this book. My maths knowledge is not great ... but there is a key:
For understanding things like this, provided you have a creative mind and a good grasp of the abstract nature of structures and mathematical principles, you should be fine. My mental arithmetic is pretty poor by anyone's standards, but I love reading about discrete mathematics because of its abstract nature.
I wouldn't consider myself a very good mathematician in the numeric sense, but as a software developer I like to think I have a mathematics (or mechanical) mind.
I wouldn't worry too much about your numeric strength but more about the nature of mathematics and the personality of the concepts underpinning computer science. If you have a good programming mind, maybe try and enhance that with combinatorics/discrete/concrete mathematics (which, besides counting theory, in many cases avoids dry numbers).
I found my love for things like set theory while studying compilers, and I wouldn't want to sit my maths A-level without a lot of cramming!
Give it a go, what have you got to lose?
(I'm 22 and in a similar situation to you)
Good luck
PS: I also found the video lectures interesting. You can torrent them from
http://groups.csail.mit.edu/mac/classes/6.001/abelson-sussman-lectures/
It definitely wouldn't be a fruitless exercise, it's an excellent book. On the other hand, it would be kind of tough going, as they do expect you to have some mathematical sophistication, if not tons of advanced math.
You might find How to Design Programs, by Felleisen et al, a bit of an easier start while giving you much the same approach, using Scheme and all.
From what I can remember of this book, it talks about some matrix calculations, which might be hard to understand at first. But a matrix is just a list of lists, or an array of arrays... so you will need to deal with that sooner or later in programming.
If there was any difficult math, I think you can skip it. This book was (and probably still is) used in Berkeley's first year computer science class (many students take it in the first semester), without any need of understanding calculus at all, so I think general understanding of math is good enough to understand the book.
The book talks about a function being a black box... and after reading the book, I think it helps a person's understanding of math in general as well.
The Numerical Programming section might require some higher math, but you should be able to digest the rest of the book with high-school math.

How do I get my brain moving in "lisp mode?"

My professor told us that we could choose a programming language for our next programming assignment. I've been meaning to try out a functional language, so I figured I'd try out clojure. The problem is that I understand the syntax and understand the basic concepts, but I'm having problems getting everything to "click" in my head. Does anyone have any advice? Or am I maybe picking the wrong language to start functional programming with?
It's a little like riding a bike; it just takes practice. Try solving some problems with it, maybe Project Euler, and eventually it'll click.
Someone mentioned the book "The Little Schemer" and this is a pretty good read. Although it targets Scheme the actual problems will be worth working through.
Good luck!
Well, for me, I encountered the same problem as you when I started doing OCaml, but the trick is that you must start thinking about what you want from the code and not how to do it!
For example, to calculate the squares of a list's elements, forget about the length of the list and such tricks; just think mathematically, like this:
if the list is empty -> I am done
if not, then the list must have a head and tail -> you calculate the square of the head, then ask your function to do the same with the tail.
Just think about the general case and the base one, and that you are emitting data and not modifying it (unless you want to modify it ;) ).
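In code (a quick sketch - in Haskell here rather than OCaml, but the recipe is the same):

    squares :: [Int] -> [Int]
    squares []       = []                  -- the list is empty -> I am done
    squares (x : xs) = x * x : squares xs  -- square the head, let the function handle the tail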
Good luck!
You could check out The Little Schemer.
How about this: http://www.defmacro.org/ramblings/lisp.html
It is a very simple, step-by-step introduction to thinking in lisp from the point of view of a regular imperative programmer (Java, C#, etc.).
For educational purposes I would recommend PLT Scheme. It is a portable and powerful environment with very good examples and even better documentation. It will help you to discover the thoughts behind functional programming step by step and in a very clean way. Choosing a little application to implement will help you learn the new language.
http://www.plt-scheme.org/
Additionally "Structure and Interpretation of Computer Programs" of H. Abelssn, G. Sussman, and J. Sussman is a very good book regarding Scheme (and programming).
Regards
mue
Take a look at 99 Lispy problems
Some thoughts on Lisps, not specific to Clojure (I'm not a Lisp expert, so I hope they're mostly correct and useful):
Coding in AST
I know little about compiler or interpreter theory, but every time I code in Lisp, it amazes me that it feels like directly building an AST.
That's part of what "code = data" means: coding in Lisp is a lot like filling data structures (nested lists) with AST nodes. Amazing, and it's easy to read too (with the right text editor).
A Programmable Programming Language
So code chunks are just nested lists, and list operations are part of the language. So you can very easily write Lisp code that generates Lisp code (see Lisp macros). This makes Lisp a programmable (in itself!) programming language.
This makes building a DSL or an interpreter in Lisp very easy (see also meta-circular evaluation).
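To get a feel for the interpreter half of that claim, here is a tiny evaluator sketched in Haskell (so it only shows the "small interpreter" part, not the code-is-data part - in Lisp the expressions below would themselves just be lists):

    data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b
    eval (Mul a b) = eval a * eval b

    -- eval (Add (Lit 1) (Mul (Lit 2) (Lit 3)))  ==>  7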
Never reboot anything
And in most Lisp systems, code (including documentation) can be introspected and hot swapped at run time.
Advanced OOP
Then, most Lisp Systems have some sort of Object System derived from CLOS, which is an advanced (compared to many OOP implementations) and configurable Object System (see The Art of the Metaobject Protocol).
All these features were invented long ago, but I'm not sure they are available in many other programming languages (although most are catching up, e.g. with closures), so you have to "rediscover" and get used to them by practicing (see the books in other answers).
Just remember: it's all data!
Write some simple classic functions that Lisp is good at, like
reverse a list
tell if an atom is somewhere in an s-expression
write EQUAL to tell if 2 s-expressions are equal
write FRINGE to get the list of atoms at the fringe of an s-expression (see the sketch after this list)
write SUBST, then write SUBLIS
Symbolic differentiation
Algebraic simplification
write a simple EVAL and/or APPLY
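For instance, FRINGE from the list above comes out to just a couple of lines - sketched here in Haskell with a toy s-expression type rather than in Lisp proper:

    data SExpr = Atom String | List [SExpr]

    -- the atoms at the fringe of an s-expression, left to right
    fringe :: SExpr -> [String]
    fringe (Atom a)  = [a]
    fringe (List xs) = concatMap fringe xs

    -- fringe (List [Atom "a", List [Atom "b", Atom "c"]])  ==>  ["a","b","c"]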
Understand that Lisp is good for these kinds of no-side-effect functional programs.
It is also useful for stateful side-effect (non-functional) programs, but those are more like "programs" than "functions".
Which is better for a given application depends on the application. In general, it should contain no less, and no more, state information than necessary.
Easy!
M-x lisp-mode
OK, OK, so you might not have Emacs for a brain. In all seriousness, what you need to do is to get really good at recursion. This can be quite a brain warp initially when trying to extend the concept of recursion beyond the canonical examples, but ultimately it will result in more fluid, lispy code.
Also, a lot of people get hung up on the parentheses, and I don't really know why - the syntax is very simple and consistent and can be mastered in minutes. For me, I came to Scheme after having learned C++ and Java, and I always thought that the difference between "functions" and "operators" was a false dichotomy, so it was refreshing to see that distinction eliminated.
As far as functional programming goes, as long as you can wrap your head around the fact that a function is a first-class value and can be passed both into and out of other functions you should be fine. The usefulness of this will become clear over time, but it's enough that you can write function-taking and function-returning functions.
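For example, a function that both takes a function and returns one (a tiny sketch in Haskell; the same thing is a one-liner in Scheme or Clojure too):

    twice :: (a -> a) -> (a -> a)
    twice f = f . f

    -- twice (+3) 10  ==>  16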
Finally, I'm not sure what support Clojure has for macros, but they're considered an essential part of lisp. However, I wouldn't worry about learning them until you're deeply familiar with the above items - though macros are incredibly useful and versatile, they're also used less often than the other techniques I mentioned.
I'd start with a language that can be interpreted. I found Moscow ML to be fairly easy. It is a lightweight implementation of Standard ML.
My personal practice is to find a small project (something that might take 3-5 nights hacking away) and implement it. How about a blog filter tool? Maybe just a Towers of Hanoi or Linked List implementation (those are usually 1-night projects).
The way it usually works out is I implement it poorly the first time, throw away what I had, and it finally clicks a few hours in.
A HUGE help is taking a course in something like... um... LISP! :) The homework will force you to confront a lot of the concepts and it clicked for me long before the semester ended.
Good luck!!
Good luck. It took me until about halfway through the "Programming Languages" course in college before Scheme "clicked". Once that happened, though, everything just made sense, and I fell in love with functional programming.
Write a Lisp interpreter in Lisp.
If you haven't already, read up on what makes Lisp a unique language. If you don't do this first, you'll be trying to do the same things you could do in some other programming languages.
Then try to implement some small thing (try to make it useful to you or you might not have the motivation).
Lisp in a box is a great way to get your feet wet.
For me the important thing is to make sure you do everything in a 'lisp-y' way. Don't be tempted to think 'In Java I'd use a for loop here, how do I do for loops in Lisp?' - instead, go through enough examples and tutorials (as someone pointed out, SICP is perfect for this) that you can start to spot when code looks 'Lisp-y' and recognise common language paradigms.
I certainly know the feeling of looking at some code I've just written and intuitively knowing that it's correctly idiomatic for that language and platform/framework - that, I think, is when it 'clicks'.
Edit: And kudos for choosing a functional language, lesser students would have just done it in Java :)
Who said it is going to click? I'm always confused
But if you think about how much abstraction it is possible to hide away behind Lisp macros, your brain will explode.
:)
I'd check out Programming Clojure. It's a great book for non-lispers.
In addition to what other SO'ers have already suggested, here are my 2 cents:
Start learning the language and try out a few simple numerical/hobby problems in the language
IMPORTANT: Post the solution/code to Stack Overflow, asking for people's opinion on whether that is really the Lispy way to do it.
Best of luck!

Math, programming, and learning [closed]

It has been discussed on this site before about the relationship between math and programming, and whether one is a subset of the other, etc.
In my recent study of programming, I've found myself more and more wishing I was better at math. You all know the scenario when programming books start to generalize something in a math way ("Therefore, we may say that for all <some single letter>, <lots of letters>"). My eyes glaze over in such situations. I know that that is mostly due to me being stupid, but it seems that if I could just improve my higher math skills, maybe I could get more out of such things.
Major question: Is math indeed something one can "get better at," or is your brain kinda either wired for it or not?
Important follow-up question: If the answer to the above is yes, then what are some ways to go about it?
I think anyone can get better at math. You just have to be determined and practice.
Part of the problem is that math books tend to be written by mathematicians who ceased being math novices decades ago. What you want is books geared to your level and which contain material you can work with.
Some recommendations:
If you can find a copy, get Mathematica and a good book on it (the Schaum's outline is actually pretty good and cheap). I use it all the time to visualize things.
As a programmer, you probably want to aim more for discrete mathematics than calculus.
The Concrete Mathematics book mentioned elsewhere is excellent.
Most introductory discrete math texts have good coverage of the things like logic, sets, combinatorics, probability, graph theory, etc. My school used Rosen's text which I liked.
Linear algebra is useful if you are going to do 3D graphics programming. Most intro texts for engineers will teach you what you need to know. Linear Algebra Done Right is probably the best on "real" linear algebra if you want something more theoretical.
Look for books by Martin Gardner and play with his puzzles. He's an excellent writer and teacher.
Remember that math doesn't change that much. You can get used books for cheap on Amazon and in used bookstores. I always look for the n-1 version when I buy textbooks.
When you first started learning English all those "symbols" (letters) looked like gibberish to you. I'm sure that at some point you were frustrated over your lack of understanding. But slowly, and gradually you began to understand them.
Eventually you were able to construct your own words and sentences using these symbols. After being corrected on their structure and grammar for years, you now have a command of the language.
Math is just like that. Your eyes glaze over because you haven't learned the language. Maybe in school you didn't particularly enjoy math because you didn't see any practical applications for it. Certainly the way we teach math to our students is atrocious, so it is no wonder why many get through school without being well versed in it (for further reading, check out A Mathematician’s Lament which discusses how horrible our current method of teaching math is).
However, it is never too late to get up to a degree of proficiency that will allow you to read many academic Computer Science writings. Start out with Pre-Calculus at your local community college at night (to brush up on everything you have forgotten). Then move on to Calculus and after that take Discrete math. Honestly, this is all the math you will need 99.99% of the time. In less than 2-3 semesters you can be fully caught up and you'll no longer have your eyes glaze over when reading something with some mathematical roots.
I can share my experience...
I have been terrified of math since grade school. Hated it, didn't get the point, didn't pursue it.
By contrast, I have always been fascinated by computers. I study programming on a "need to know" basis - I can't stand not understanding computers and programming from the lowest to the highest levels. I am almost completely self-educated, and have a career as a programmer/architect.
Last year, at my wife's urging, I began to go back to college. I signed up for a remedial Algebra class, knowing it was going to be a pain. It wasn't.
Somehow, through all the years of learning to develop OO software, it seemed that I had tricked myself into learning how to think mathematically. The concepts just weren't that difficult anymore. It may be that I had learned to think in terms of complex systems made up of smaller, less complex ideas.
I am now researching game development, and that is some seriously math-oriented programming. WAY more so than the business development I've been doing to this point. However, I don't find it so daunting because it's applied mathematics. Working to solve practical problems seems to make the study less tedious and far more interesting. I have found Wikipedia and Wolfram's MathWorld to be helpful. If you already know how to program, you're ahead of the game learning math.
I'd say it's certainly something anyone can get better at. It takes time and patience and some texts are obscenely dense as far as the notation involved, but if you're willing to put the time in, I think it shouldn't be too horrible.
I'd check out Wikipedia's list of mathematical symbols, and keep it nearby whenever you see a large blob of symbols pop up. Translate them one at a time and put them together in the way that makes the most sense to you (or ask us a few times until you get the hang of it).
It's both. You can get better at math. But you're also indeed limited/endowed by the particular wiring in your brain. What that means is that most likely you can improve your current mathematical skills. However, because of your mental hardware's limits, you may never discover a new theorem.
And when it comes to improving, I think the way as always is to practice. To read mathematical literature, to try to solve mathematical problems and eventually, develop an outlook where you are able to break, as a matter of habit, real-world conundrums you see before you down in mathematical terms.
As for programming's relation with mathematics, I think there's a pretty strong one. In fact, one could argue that a program is nothing but a proof of a theorem, the requirements document being all the inputs to the proof.
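To make that concrete (this is the Curry-Howard reading of the idea): in a typed functional language, the type is the theorem and the program is its proof. For example, in Haskell:

    -- "A and B implies A", proved by writing a program of that type
    fstProof :: (a, b) -> a
    fstProof (x, _) = x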
Your skills get rusty if not used and knowledge fades with time if not used.
If you don't use your math skills you soon have no math skills. Continuous new learning and practice of the skills you already have will lead to you one day being a math/programming master.
Project Euler has lot of math problems that can only be solved through programming. The problems get more difficult but build on the skills and knowledge acquired in your previous solutions.
I also buy some interesting textbooks at second-hand book shops. They're cheap, and slowly your skills improve. I use them in conjunction with MIT OpenCourseWare.
A fun way to practice math is http://projecteuler.net/.
Although it's less systematic/effective than doing a course or reading a textbook.
I know exactly how you feel. I've always wanted to learn more Math, but as I was unable to do it at college after school (not enough space) and not able to take it at university (not possible with a CS degree), I have yet to study Math formally since the age of 16.
Math is something that anyone can learn. Some will argue that it gets harder with age but I've met people going on for 60 that are taking Math classes with ease. There's one woman at my university that's going on for 70 and she's a few months off of graduating with a degree in a Mathematics related field. If you want to learn Math then now is the right time, although I'll be the first person to say that it is not easy. Whilst you'll find many of the problems extremely easy with programming experience you'll still find that going through a set of problems takes a lot of time out of your day. I almost finished the MIT OpenCourseWare course on Linear Algebra, then ended up getting a new part-time job, working 10 hours a day, 7 days a week, and forgetting the majority of what I had learnt.
That being said, if you have the time and true dedication I can recommend some links to video lectures that may just help you get on your way.
Pre-Calculus Algebra
Pre-Calculus
Calculus
Applied Probability
Introduction to Statistics
Differential Equations
Linear Algebra
I'm not saying that this is what you need to know. This is what I've set out to learn myself before I graduate from my CS degree, so feel free to pick and choose whatever you feel is best for you.
Studying math is like getting a classical education, one specially suited for programmers and other computer professions. And math's the sort of thing that you can appreciate more as you get older. You realize that it's not about grinding out answers so much as it's about thinking deeply and conceptually. The "answers" you might grind out make a lot more sense that way.
At one time, I would have recommended taking a geometry course, and take some time to learn how to prove theorems, see how the concepts flow together. These days, though, I'd say it may be better to take a course in discrete mathematics. It's much more practical, and there's a lot more variety, but there's still enough theory in there to make it challenging if you want.
Discrete math also provides you with programming challenges you might not have thought of before. Maybe you can hack up a good heuristic to solve an NP-complete problem, like an N-city Traveling Salesman problem. Maybe even come up with a couple solutions, and test which ones work best in which circumstances.
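For example, the classic first stab is a greedy nearest-neighbour tour (a rough sketch in Haskell, assuming you're handed a distance function - a hypothetical example, not from any particular course):

    import Data.List (minimumBy, delete)
    import Data.Ord (comparing)

    -- Start at the first city and always hop to the closest unvisited one.
    -- Not optimal, but a reasonable baseline to compare other heuristics against.
    nearestTour :: Eq c => (c -> c -> Double) -> [c] -> [c]
    nearestTour _    []           = []
    nearestTour dist (start:rest) = go start rest
      where
        go here []        = [here]
        go here unvisited =
          let next = minimumBy (comparing (dist here)) unvisited
          in  here : go next (delete next unvisited)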
(I never took CompSci classes in college. You can probably tell.)
Go to the local community college and sign up for Calculus 1. This covers functions in the mathematical sense, and has a rigorous refresher course on algebra, and will use just enough of the symbols to get you ahead.
First of all, I would recommend Steve Yegge's Math For Programmers. It pretty much sums up your struggle.
And now I would like to tell a personal story. I was a double major in Math and CS. I learned a lot in the Math classes, but I honestly didn't appreciate it as much as I should have. I will tell you that a lot of things that I did have helped me in my programming career. And it's not about some formula or knowing calculus, or any of that stuff. It's that a solid Math background teaches you how to think in order to solve a problem. To me, that's the Math that you need.
1) Yes.
2) Explore mathematical questions which sound interesting. Buy/read books that give you the information needed. Repeat.
It can absolutely be learned. I personally had the most benefit from the math (especially proof) courses I took in college.
Recommended courses:
Discrete Math
Mathematical Thought
Abstract Algebra
any other proof courses
Recommended book:
The Nuts and Bolts of Proofs, by Antonella Cupillari
I strongly recommend trying to take one or more of these courses at a school of some sort. Find a local college and audit a course.
A 2nd vote for Lockhart's "A Mathematician's Lament", which recommends that math be taught like painting, poetry, or music -- not for its practical usefulness but for simple pleasure:
There’s no
ulterior practical purpose here. I’m just playing. That’s what math is— wondering, playing,
amusing yourself with your imagination.
Look at the diagrams in a recent Knuth paper, Dancing Links, and tell me he wasn't having fun making those.
Part of the problem is that a few mathematical symbols convey a heck of a lot of information. If you are reading a normal programming book, it is full of words and code. Neither of these is super verbose (although I often have to slow down much more on code than normal words). However one complicated mathematical equation can easily be a screen full of programming code or words. We have all sorts of simple notations that convey complex processes.
Another issue is that notation is standardized, but not exactly. Different books use slightly different notation, so it takes a while to get used to it. Also, many textbooks leave out key steps in mathematical proofs, or even examples. Sometimes even college professors puzzle over the missing steps in a given proof in a textbook and then give their own proof, or give a slightly modified one over the one in the book, because they learned it differently or can't recreate the missing step exactly, which takes the proof in a different direction.
So anyway, just because your eyes glaze over doesn't mean you have to give up. The first time you see the equations you will probably be in "read the English text" mode and have to pause to consider them. Going over them slowly and paying attention to what all the symbols mean, one step at a time, may yield the answer for you. If there is some notation you haven't seen before, there is probably an intro chapter or appendix explaining the notation, so check there. Finally, look for other sources. Use Google/Wikipedia to look up equations for the concept and you may find a derivation and/or proof that you can follow. Additionally, the other source may help you to understand the current proofs/derivations better. Even if your understanding of the proof/derivation does not improve, your additional research will probably aid your understanding of the equation.
I think there are two things to learning math:
1. Learning the general techniques, i.e. how to add two fractions, how to differentiate, how to integrate.
2. Learning to problem solve and apply math to the real world.
I think by picking up math textbooks you will learn 1. Many math textbooks are organized by section, where there will be a few pages showing you a technique and then a bunch of problems. The problems tend to be related to the technique that you just learned and very similar; e.g. a section on logarithms will have all problems on logarithms and probably won't include any polynomials. By doing the problems in the section you will learn the techniques. The more problems you do, the faster you will get and the more you will understand the concepts. Many times you will find that if you work through the problems without explicitly memorizing the formulas, after you do enough of them the required formulas will be implicitly memorized. Ultimately, if you are having trouble looking at probability formulas you will want to read a probability book. If you are having trouble with sum notation you will want to consult that section of an algebra book, etc...
To learn 2, I think math textbooks don't help as much because each section tends to have problems related to that section. Occasionally there are a few "mixed review" problems or a "chapter review" which mixes problems, but they are typically few and far between. Science textbooks like Physics, Biology, Chemistry, etc. tend to be better for this. There you often read the problem, lay it out, and end up using a variety of mathematical tools to solve it. Sometimes calculus, linear algebra, and geometry all within the same problem. The value here is that it teaches you to problem solve. Generally the SAT/GRE do not test whether you know how to do algebra; they test whether you know how to apply it to the real world, and the science problems really help you here. Also, programming in general is about problem solving, and the better you get at problem solving the better you'll be at programming. Basically, in programming you take problems, create a mental model, design a solution, and then model it in your programming language of choice. This is similar to, say, Physics. You look at the problem, extract a mathematical model, design a solution, write down some equations with the model of the solution, then plug numbers in. I highly recommend physics because after my college physics class word problems became simple for me, and they used to be quite difficult (though not impossible).
In day-to-day programming you probably won't use more than algebra and logic (for if statements and loop conditions). There are some places that use higher math, like computer games, cryptology, data mining, etc., but for a typical business application you probably won't use more than algebra and logic, and maybe a bit of set theory (the stuff so basic you already internalized it). Even in places that use higher math (like financial companies), often the business users (or some industry literature) will have done the higher math and you will just need to implement the equations (with some algebra). I only mention this because most programming books don't have more than algebra and logic either, unless you are reading textbooks on Algorithm Analysis (Introduction to Algorithms), Artificial Intelligence, or some other research area. General application books on how to do things are usually short on math.
But depending upon what you are reading math can help. For most computer science algebra + discrete math should be enough. Couple that with some physics practice and you should be good to go. It may still be a slow go but you should have the proper background.
I like combinatorics and algorithms - having fun you learn faster.
study study study!
Wikipedia is actually a fairly good math reference. Start with something you're interested in learning and follow links until you understand all the building blocks for that initial thing.
practice practice practice!
Schaum's outlines are good for this. If you're interested in probability (which touches on combinatorics), see 50 Challenging Problems in Probability.
Short answer:
There may be people who are too stupid to get good at maths. But those people are generally too stupid to program, too.
So if you have some skills in programming, you might consider yourself smart enough to learn maths as well.
Note: I know there are smart people with a serious math learning disability, but I think that's more of an exception.
I would also recommend Project Euler. While it does not exactly teach math, it gives you problems that you can then look up how to solve. I've always preferred solving actual problems instead of just learning theory.

Concepts that surprised you when you read SICP?

SICP - "Structure and Interpretation of Computer Programs"
An explanation of each would be nice.
Can someone explain metalinguistic abstraction?
SICP really drove home the point that it is possible to look at code and data as the same thing.
I understood this before when thinking about universal Turing machines (the input to a UTM is just a representation of a program) or the von Neumann architecture (where a single storage structure holds both code and data), but SICP made the idea much more clear. Scheme (Lisp) helped here, as the syntax for a program is exactly the same as the syntax for lists in general, namely S-expressions.
Once you have the "equivalence" of code and data, suddenly a lot of things become easy. For example, you can write programs that have different evaluation methods (lazy, nondeterministic, etc). Previously, I might have thought that this would require an extension to the programming language; in reality, I can just add it on to the language myself, thus allowing the core language to be minimal. As another example, you can similarly implement an object-oriented framework; again, this is something I might have naively thought would require modifying the language.
Incidentally, one thing I wish SICP had mentioned more: types. Type checking at compilation time is an amazing thing. The SICP implementation of object-oriented programming did not have this benefit.
I haven't read the book yet; I have only watched the video courses, but they taught me a lot. Functions as first-class citizens was mind-blowing for me. Executing a "variable" was something very new to me. After watching those videos, the way I now see JavaScript and programming in general has greatly changed.
Oh, I think I've lied, the thing that really struck me was that + was a function.
I think the most surprising thing about SICP is to see how few primitives are actually required to make a Turing complete language--almost anything can be built from almost nothing.
Since we are discussing SICP, I'll put in my standard plug for the video lectures at http://groups.csail.mit.edu/mac/classes/6.001/abelson-sussman-lectures/, which are the best Introduction to Computer Science you could hope to get in 20 hours.
The one that I thought was really cool was streams with delayed evaluation. The one about generating primes was something I thought was really neat. Like a "PEZ" dispenser that magically dispenses the next prime in the sequence.
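The SICP prime stream translates almost word for word into Haskell, where the laziness comes built in (a sketch of the same idea, not the book's exact code):

    primes :: [Integer]
    primes = sieve [2..]
      where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

    -- take 10 primes  ==>  [2,3,5,7,11,13,17,19,23,29]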
One example of "the data and the code are the same thing" from A. Rex's answer got me in a very deep way.
When I was taught Lisp back in Russia, our teachers told us that the language was about lists: car, cdr, cons. What really amazed me was the fact that you don't need those functions at all - you can write your own, given closures. So, Lisp is not about lists after all! That was a big surprise.
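The trick looks something like this (sketched in Haskell rather than Lisp, but it's the same closure game):

    cons :: a -> b -> ((a -> b -> r) -> r)
    cons x y = \pick -> pick x y

    car :: ((a -> b -> a) -> a) -> a
    car pair = pair (\x _ -> x)

    cdr :: ((a -> b -> b) -> b) -> b
    cdr pair = pair (\_ y -> y)

    -- car (cons 1 "two")  ==>  1
    -- cdr (cons 1 "two")  ==>  "two"

No built-in pairs anywhere - just closures.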
A concept I was completely unfamiliar with was the idea of coroutines, i.e. having two functions doing complementary work and having the program flow control alternate between them.
I was still in high school when I read SICP, and I had focused on the first and second chapters. For me at the time, I liked that you could express all those mathematical ideas in code, and have the computer do most of the dirty work.
When I was tutoring SICP, I was impressed by different aspects. For one, the conundrum that data and code are really the same thing, because code is executable data. The chapter on metalinguistic abstraction is mind-boggling to many and has many take-home messages. The first is that all the rules are arbitrary. This bothers some students, especially those who are physicists at heart. I think the beauty is not in the rules themselves, but in studying the consequences of the rules. A one-line change in code can mean the difference between lexical scoping and dynamic scoping.
Today, though SICP is still fun and insightful to many, I do understand that it's becoming dated. For one, it doesn't teach debugging skills and tools (I include type systems in there), which are essential for working in today's gigantic systems.
I was most surprised by how easy it is to implement languages - that one could write an interpreter for Scheme on a blackboard.
I felt recursion in a different sense after reading some of the chapters of SICP.
I am right now on Section "Sequences as Conventional Interfaces" and have found the concept of procedures as first class citizens quite fascinating. Also, the application of recursion is something I have never seen in any language.
Closures.
Coming from a primarily imperative background (Java, C#, etc. -- I only read SICP a year or so ago for the first time, and am re-reading it now), thinking in functional terms was a big revelation for me; it totally changed the way I think about my work today.
I read most of the book (without doing the exercises). What I have learned is how to abstract the real world at a specific level, and how to implement a language.
Each chapter had ideas that surprised me:
The first two chapters show me two ways of abstracting the real world: abstraction with the procedure, and abstraction with data.
Chapter 3 introduces time in the real world. That results in states. We try assignment, which raises problems. Then we try streams.
Chapter 4 is about metalinguistic abstraction, in other words, we implement a new language by constructing an evaluator, which determines the meaning of expressions.
Since the evaluator in Chapter 4 is itself a Lisp program, it inherits the control structure of the underlying Lisp system. So in Chapter 5, we dive into the step-by-step operation of a real computer with the help of an abstract model, register machine.
Thanks.
