In what areas of programming is a knowledge of mathematics helpful? [closed]

For example: mathematical logic, graph theory.
Everyone around me says that math is necessary for a programmer. I've seen a lot of threads where people say they used linear algebra and some other math, but no one described concrete cases where they used it.
I know that there are similar threads, but I couldn't see any description of such a case.

Computer graphics.
It's all matrix multiplication, vector spaces, affine spaces, projection, etc. Lots and lots of algebra.
For more information, here's the Wikipedia article on projection, along with the more specific case of 3D projection, with all of its various matrices. OpenGL, a common computer graphics library, is an example of applying affine matrix operations to transform and project objects onto a computer screen.
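To make that concrete, here is a tiny sketch (NumPy, my own illustrative example, not code from OpenGL itself) of composing a rotation with a simple perspective projection using the same kind of 4x4 homogeneous matrices:
import numpy as np

def rotation_z(theta):
    # Affine rotation about the Z axis, in homogeneous coordinates.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def perspective(d):
    # Pinhole projection onto the plane z = d: after the divide, x' = d*x/z, y' = d*y/z.
    return np.array([[1, 0, 0,   0],
                     [0, 1, 0,   0],
                     [0, 0, 1,   0],
                     [0, 0, 1/d, 0]])

p = np.array([1.0, 2.0, 5.0, 1.0])              # a point in homogeneous coordinates
clip = perspective(1.0) @ rotation_z(np.pi / 4) @ p
screen = clip[:3] / clip[3]                      # the perspective divide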

I think that a lot of programmers use more math than they think they do. It's just that it comes so intuitively to them that they don't even think about it. For instance, every time you write an if statement, are you not using your discrete math knowledge?

In the graphics world you need a lot of transformations.
In cryptography you need geometry and number theory.
In AI, you need algebra.
And statistics in financial environments.
Computing theory needs mathematical theory: in fact, almost all of its founders came from mathematics.

Given a list of locations with latitudes and longitudes, sort the list in order from closest to farthest from a specific position.
All applications that deal with money need math.
I can't think of a single app that I have written that didn't require math at some point.
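A sketch of the first example, sorting by great-circle (haversine) distance; the reference point and places below are made up:
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance in miles between two lat/lon points.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3958.8 * asin(sqrt(a))            # 3958.8 ~ Earth's mean radius in miles

here = (40.7128, -74.0060)                       # hypothetical reference position
places = [("A", 40.73, -73.99), ("B", 34.05, -118.24), ("C", 41.88, -87.63)]
places.sort(key=lambda p: haversine_miles(here[0], here[1], p[1], p[2]))
The same function also answers the "within 10 miles of me" style of query mentioned in other answers.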

I wrote a parser compiler a few months back, and that's full of graph theory. It was only designed to be slightly more powerful than regular expressions (in that multiple matches were allowed, and some other features were added), but even such a simple compiler requires loop detection, finite state automata, and tons more math.

Implementing the Advanced Encryption Standard (AES) algorithm required some basic understanding of finite field math. See act 4 of my blog post on it for details (code sample included).
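As a taste of that finite-field math (a generic sketch of the standard technique, not the code from the blog post): multiplication in GF(2^8) with the AES reduction polynomial x^8 + x^4 + x^3 + x + 1 comes down to shifts and XORs:
def gf256_mul(a, b):
    # Russian-peasant multiplication in GF(2^8); "addition" in this field is XOR.
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF          # multiply a by x
        if carry:
            a ^= 0x1B                # reduce modulo the AES polynomial 0x11B
        b >>= 1
    return result

assert gf256_mul(0x57, 0x83) == 0xC1   # worked example from the AES specification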

I've used a lot of algebra when writing business apps.
Simple Examples
BMI = weight / (height * height);
compensation = 10 * hours * ((pratio * 2.3) + tratio);

A few years ago, I had a DSP project that had to compute a real radix-2 FFT of size N, in a given time. The vendor-supplied real radix-2 FFT wouldn't run in the allocated time, but their complex FFT of size N/2 would. It is easy to feed the real data into the complex FFT. Getting the answers out afterwards is not so easy: it is called post-weaving, or post-unweaving, or unweaving. Deriving the unweave equations from the FFT and complex number theory was not fun. Going from there to tightly-optimized DSP code was equally not fun.
Naturally, the signal I was measuring did not match the FFT sample size, which causes artifacts. The standard fix is to apply a Hanning window. This causes other artifacts. As part of understanding (and testing) that code, I had to understand the artifacts caused by the Hanning window, so I could interpret the results and decide whether the code was working or not.
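For the curious, here is what the pack/unweave trick looks like in outline (my own NumPy reconstruction of the standard technique, not the original DSP code), checked against the library routine:
import numpy as np

def rfft_via_half_size_complex_fft(x):
    # Real-input FFT of length N computed from one complex FFT of length N/2.
    x = np.asarray(x, dtype=float)
    n = len(x)
    z = x[0::2] + 1j * x[1::2]                       # pack even/odd samples
    Z = np.fft.fft(z)                                # complex FFT of size N/2
    Zr = np.conj(Z[(-np.arange(n // 2)) % (n // 2)]) # Z[(N/2 - k) mod N/2], conjugated
    E = (Z + Zr) / 2                                 # FFT of the even samples
    O = (Z - Zr) / 2j                                # FFT of the odd samples
    k = np.arange(n // 2)
    X = E + np.exp(-2j * np.pi * k / n) * O          # combine with twiddle factors
    return np.concatenate([X, [E[0].real - O[0].real]])  # append the Nyquist bin

x = np.random.randn(16)
assert np.allclose(rfft_via_half_size_complex_fft(x), np.fft.rfft(x))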

I've used tons of math in various projects, including:
Graph theory for dealing with dependencies in large systems (e.g. a Makefile is a kind of directed graph; see the sketch at the end of this answer)
Statistics and linear regression in profiling performance bottlenecks
Coordinate transformations in geospatial applications
In scientific computing, project requirements are often stated in algebraic form, especially for computationally intensive code
And that's just off the top of my head.
And of course, anything involving "pure" computer science (algorithms, computational complexity, lambda calculus) tends to look more and more like math the deeper you go.
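A sketch of the Makefile-style dependency ordering from the first item above (Kahn's algorithm; the rules below are made up):
from collections import defaultdict, deque

def build_order(deps):
    # deps maps a target to the targets it depends on; returns an order in which
    # every dependency is built before its dependents, or raises on a cycle.
    indegree = defaultdict(int)
    dependents = defaultdict(list)
    nodes = set(deps)
    for target, prereqs in deps.items():
        nodes.update(prereqs)
        for p in prereqs:
            dependents[p].append(target)
            indegree[target] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for d in dependents[n]:
            indegree[d] -= 1
            if indegree[d] == 0:
                queue.append(d)
    if len(order) != len(nodes):
        raise ValueError("dependency cycle detected")
    return order

print(build_order({"app": ["main.o", "util.o"], "main.o": ["main.c"], "util.o": ["util.c"]}))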

In answering this image-comparison-algorithm question, I drew on lots of knowledge of math, some of it from other answers and web searches (where I had to apply my own knowledge to filter the information), and some from my own engineering training and lengthy programming background.

General Mindforming
Solving Problems - One fundamental method of math, independent of the area, is transforming an unknown problem into a known one. Even if you don't face the same problems, you need the same skill. In math, as in programming, virtually everything has different representations. Understanding the equivalence between algorithms, problems or solutions that are completely different on the surface helps you avoid the hard parts.
(A similar thing happens in physics: to solve a kinematic problem, choice of the coordinate system is often the difference between one and ten pages full of formulas, even though problem and solution are identical.)
Precision of Language / Logical reasoning - Math has a very terse yet precise language. Learning to deal with that will prepare you for computers doing what you say, not what you meant. Also, the same precision is required to analyse if a specification is sufficient, to check a piece of code if it covers all possible cases, etc.
Beauty and elegance - This may be the argument that's hardest to grasp. I found the notion of "beauty" in code to be very close to the one found in math. A beautiful proof is one whose idea is immediately convincing, and the proof itself is merely a matter of executing the next obvious step, one after another.
The same goes for an elegant implementation.
(Most mathematicians I've encountered have a soft spot for putting the "Aha!" effect at the end rather than at the beginning. As do most elite geeks.)
You can learn these skills without a single math lesson, of course. But math has been perfecting them for centuries.
Applied Skills
Examples:
- Not having to run calc.exe for a quick estimation of memory requirements
- Some basic statistics to tell a valid performance measurement from a shot in the dark
- deducing a formula for a sequence of values, rather than hardcoding them
- Getting a feeling for what c*O(N log N) means.
- Recursion is the same as proof by induction
(That list would probably go on if I actively watched myself for items for a day. This part is admittedly harder than I thought. Further suggestions welcome ;))
Where I use it
The company I work for does a lot of data acquisition, and our claim to fame (compared to our competition) is the brain muscle that goes into extracting something useful out of the data. While I'm mostly unconcerned with that, I get enough math thrown my way. Before that, I implemented and validated random number generators for statistical applications, implemented a differential equation solver, and wrote simulations for selected laws of physics. And probably more.

I wrote some hash functions for mapping airline codes and flight numbers with good efficiency into a fairly limited number of data slots.
I went through a fair number of primes before finding numbers that worked well with my data. Testing required some statistics and estimates of probabilities.
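A minimal sketch of the idea (my own reconstruction, not the author's actual function): fold the airline code and flight number into one integer with prime multipliers, then map it into a prime-sized table:
NUM_SLOTS = 4099                         # prime table size (an assumption)

def slot_for(airline, flight):
    key = 0
    for ch in airline.upper():
        key = key * 31 + ord(ch)         # 31: a classic small prime multiplier
    key = key * 10007 + flight           # mix in the flight number with another prime
    return key % NUM_SLOTS

print(slot_for("BA", 283), slot_for("LH", 400))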

In machine learning: we use Bayesian (and other probabilistic) models all the time, and we use quadratic programming in the form of Support Vector Machines, not to mention all kinds of mathematical transformations for the various kernel functions. Calculus (derivatives) factors into perceptron learning. Not to mention a whole theory of determining the accuracy of a machine learning classifier.
In artificial intelligence: constraint satisfaction and logic weigh very heavily.

I used coordinate geometry to solve the problem of finding the visible parts of a stack of windows that partially overlap one another.
There are many other situations, but this is the one off the top of my head. Inherently, all the operations we do are mathematics, or at least depend on or relate to mathematics.
That's why it's important to know mathematics, to have a clearer understanding of things :)
In fact, in some cases so much math has gone into our common sense that we don't notice we are using math to solve a particular problem, since we have been using it for so long!
Thanks

-Graphics (matrices, translations, shaders, integral approximations, curves, etc., etc., ...ad infinitum)
-Algorithm complexity calculations (especially in line-of-business applications)
-Pointer arithmetic
-Cryptography built on finite-field arithmetic, etc.
-GIS (triangulation algorithms like Delaunay, bounding boxes, and many other things)
-Performance monitor counters and the functions they describe
-Functional Programming (simply that, not saying more :))
-......

I used Combinatorials to stuff 20 bits of data into 14 bits of space.
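For a flavour of how that can work (my own reconstruction, not the author's code): if, say, the 20 bits are constrained to always have exactly five bits set, there are C(20,5) = 15504 possible values, which fits in 14 bits. The combinatorial number system gives a reversible mapping between the bit positions and a compact rank:
from math import comb

def rank_combination(positions):
    # Rank of a sorted k-subset of bit positions in the combinatorial number system.
    return sum(comb(p, i + 1) for i, p in enumerate(sorted(positions)))

def unrank_combination(rank, k):
    # Inverse: recover the k bit positions from a rank.
    positions = []
    for i in range(k, 0, -1):
        p = i - 1
        while comb(p + 1, i) <= rank:    # largest p with comb(p, i) <= rank
            p += 1
        positions.append(p)
        rank -= comb(p, i)
    return sorted(positions)

r = rank_combination([2, 7, 11, 16, 19])     # 0 <= r < 15504, so it fits in 14 bits
assert unrank_combination(r, 5) == [2, 7, 11, 16, 19]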

Machine Vision or Computer Vision requires a thorough knowledge of probability and statistics. Object detection/recognition and many supervised segmentation techniques are based on Bayesian inference. Heavy on linear algebra too.

As an engineer, I'm trying really hard to think of an instance when I did not need math. Same story when I was a grad student. Granted, I'm not a programmer, but I use computers a lot.

Games and simulations need lots of maths - fluid dynamics, in particular, for things like flames, fog and smoke.

As an e-commerce developer, I have to use math every day for programming. At the very least, basic algebra.
There are other apps I've had to write for vector based image generation that require a strong knowledge of Geometry, Calculus and Trigonometry.
Then there is bit-masking...
Converting hexadecimal to base ten in your head...
Estimating load potential of an application...
Yep, if someone is no good with math, they're probably not a very good programmer.

Modern communications would completely collapse without math. If you want to make your head explode sometime, look up Galois fields, error correcting codes, and data compression. Then symbol constellations, band-limited interpolation functions (I'm talking about sinc and raised-cosine functions, not the simple linear and bicubic stuff), Fourier transforms, clock recovery, minimally-ambiguous symbol training sequences, Rayleigh and/or Ricean fading, and Kalman filtering. All of those involve math that makes my head hurt bad, and I got a Masters in Electrical Engineering. And that's just off the top of my head, from my wireless communications class.
The amount of math required to make your cell phone work is huge. To make a 3G cell phone with Internet access is staggering. To prove with sufficient confidence that an algorithm will work in almost all cases sometimes takes entire careers.
But... if you're only ever going to work with this stuff as black boxes imported from a library (at their mercy, really), well, you might get away with just knowing enough algebra to debug mismatched parentheses. And there are a lot more of those jobs than the hard ones... but at the same time, the hard jobs are harder to find a replacement for.

Examples that I've personally coded:
wrote a simple video game where one spaceship shoots a laser at another ship. To know if the ship was in the laser's path, I used basic algebra y=mx+b to calculate if the paths intersect (see the sketch after this list). (I was a child when I did this and was quite amazed that something taught on a chalkboard (algebra) could be applied to computer programming.)
calculating mortgage balances and repayment schedules with logarithms
analyzing consumer buying choices by calculating combinatorics
trigonometry to simulate camera lens behavior
Fourier Transform to analyze digital music files (WAV files)
stock market analysis with statistics (linear regressions)
using logarithms to understand binary search traversals and also disk space savings when packing information into bit fields. (I don't calculate logarithms in actual code, but I figure them out during "design" to see if it's feasible to even bother coding it.)
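A sketch of that first item, with hypothetical slopes and intercepts:
def intersection(m1, b1, m2, b2):
    # Where y = m1*x + b1 meets y = m2*x + b2, or None if the lines are parallel.
    if m1 == m2:
        return None
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1

print(intersection(0.5, 1.0, -2.0, 6.0))   # laser path vs. ship path -> (2.0, 2.0)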
None of my projects (so far) have required topics such as calculus, differential equations, or matrices. I didn't study mathematics in school but if a project requires math, I just reference my math books and if I'm stuck, I search google.
Edited to add: I think it's more realistic for some people to have a programming challenge motivate the learning of particular math subjects. Others enjoy math for its own sake and can learn it ahead of time to apply to future programming problems. I'm of the first type. For example, I studied logarithms in high school but didn't understand their power until I started programming and, all of a sudden, they seemed to pop up all over the place.

The recurring theme I see from these responses is that this is clearly context-dependent.
If you're writing a 3D graphics engine then you'd be well advised to brush up on your vectors and matrices. If you're writing a simple e-commerce website then you'll get away with basic algebra.
So depending on what you want to do, you may not need any more math than you did to post your question(!), or you might conceivably need a PhD (i.e. you would like to write a custom geometry kernel for turbine fan blade design).

One time I was writing something for my Commodore 64 (I forget what, I must have been 6 years old) and I wanted to center some text horizontally on the screen.
I worked out the formula using a combination of math and trial-and-error; years later I would tackle such problems using actual algebra.

Drawing, moving, and guidance of missiles and guns and lasers and gravity bombs and whatnot in this little 2d video game I made: wordwarvi
Lots of uses of sine/cosine and their inverses (via lookup tables... I'm old, OK?)

Any geo based site/app will need math. A simple example is "Show me all Bob's Pizzas within 10 miles of me" functionality on a website. You will need math to return lat/lons that occur within a 10 mile radius.

This is primarily a question whose answer will depend on the problem domain. Some problems require oodles of math and some require only addition and subtraction. Right now, I have a pet project which might require graph theory, not for the math so much as to get the basic vocabulary and concepts in my head.
If you're doing flight simulations and anything 3D, say hello to quaternions! If you're doing electrical engineering, you will be using trig and complex numbers. If you're doing a mortgage calculator, you will be doing discrete math. If you're doing an optimization problem, where you attempt to get the most profits from your widget factory, you will be doing what is called linear programming. If you are doing some operations involving, say, network addresses, welcome to the kind of bit-focused math that comes along with it. And that's just for the high-level languages.
If you are delving into highly-optimized data structures and implementing them yourself, you will probably do more math than if you were just grabbing a library.

Part of being a good programmer is being familiar with the domain in which you are programming. If you are working on software for Fidelity Mutual, you probably would need to know engineering economics. If you are developing software for Gallup, you probably need to know statistics. LucasArts... probably Linear Algebra. NASA... Differential Equations.
The thing about software engineering is you are almost always expected to wear many hats.

More or less anything having to do with finding the best layout, optimization, or object relationships is graph theory. You may not immediately think of it as such, but regardless - you're using math!
An explicit example: I wrote a node-based shader editor and optimizer, which took a set of linked nodes and converted them into shader code. Finding the correct order to output the code in such that all inputs for a certain node were available before that node needed them involved graph theory.
And like others have said, anything having to do with graphics implicitly requires knowledge of linear algebra, coordinate spaces transformations, and plenty of other subtopics of mathematics. Take a look at any recent graphics whitepaper, especially those involving lighting. Integrals? Infinite series?! Graph theory? Node traversal optimization? Yep, all of these are commonly used in graphics.
Also note that just because you don't realize that you're using some sort of mathematics when you're writing or designing software, doesn't mean that you aren't, and actually understanding the mathematics behind how and why algorithms and data structures work the way they do can often help you find elegant solutions to non-trivial problems.

In years of webapp development I haven't had much need for the Math API. As far as I can recall, I have only ever used Math#min() and Math#max().
For example
if (i < 0) {
i = 0;
}
if (i > 10) {
i = 10;
}
can be done as
i = Math.max(0, Math.min(i, 10));

Can any existing Machine Learning structures perfectly emulate recursive functions like the Fibonacci sequence?

To be clear, I don't mean: given the last two numbers in the sequence, provide the next one:
(2, 3 -> 5)
But rather: given any index, provide the Fibonacci number:
(0 -> 1) or (7 -> 21) or (11 -> 144)
Adding two numbers is a very simple task for any machine learning structure, and by extension counting by ones, twos or any fixed number is a simple addition rule. Recursive calculations however...
To my understanding, most learning networks rely on forwards only evaluation, whereas most programming languages have loops, jumps, or circular flow patterns (all of which are usually ASM jumps of some kind), thus allowing recursion.
Sure, some networks aren't forwards-only; but can processing weights using the hyperbolic tangent or sigmoid function reach any computationally complete state?
i.e. conditional statements, conditional jumps, forced jumps, simple loops, complex loops with multiple conditions, providing sort order, actual reordering of elements, assignments, allocating extra registers, etc?
It would seem that even a non-forwards only network would only find a polynomial of best fit, reducing errors across the expanse of the training set and no further.
Am I missing something obvious, or did most of Machine Learning just look at recursion and pretend like those problems don't exist?
Update
Technically any programming language can be considered the DNA of a genetic algorithm, where the compiler (and possibly console out measurement) would be the fitness function.
The issue is that programming (so far) cannot be expressed in a hill-climbing way - literally, the fitness is 0 until the fitness is 1. Things don't half-work in programming, and if they do, there is no way of measuring how 'working' a program is for unknown situations. Even an off-by-one error could appear to be a totally different and chaotic system with no output. This is exactly why learning to code in the first place is so difficult: the learning curve is almost vertical.
Some might argue that you just need to provide stronger foundation rules for the system to exploit - but that just leads to attempting to generalize all programming problems, which circles right back to designing a programming language and loses all notion of some learning machine at all. Following this road brings you to a close variant of LISP with mutate-able code and virtually meaningless fitness functions that brute force the 'nice' and 'simple' looking code-space in attempt to follow human coding best practices.
Others might argue that we simply aren't using enough population or momentum to gain footing on the error surface, or make a meaningful step towards a solution. But as your population approaches the number of DNA permutations, you are really just brute forcing (and very inefficiently at that). Brute forcing code permutations is nothing new, and definitely not machine learning - it's actually quite common in regex golf, I think there's even an xkcd about it...
The real problem isn't finding a solution that works for some specific recursive function, but finding a solution space that can encompass the recursive domain in some useful way.
So other than Neural Networks trained using Backpropagation hypothetically finding the closed form of a recursive function (if a closed form even exists, and they don't in most real cases where recursion is useful), or a non-forwards only network acting like a pseudo-programming language with awful fitness prospects in the best case scenario, plus the virtually impossible task of tuning exit constraints to prevent infinite recursion... That's really it so far for machine learning and recursion?
According to Kolmogorov et al.'s On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition, a three-layer neural network can model an arbitrary continuous function with linear and logistic functions, including f(n) = ((1+sqrt(5))^n - (1-sqrt(5))^n) / (2^n * sqrt(5)), which is the closed-form solution of the Fibonacci sequence.
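As a quick aside, that closed form is easy to sanity-check numerically against the recursive definition (my own sketch, using the F(0)=0, F(1)=1 convention):
from math import sqrt

def fib_closed_form(n):
    phi = (1 + sqrt(5)) / 2
    psi = (1 - sqrt(5)) / 2
    return round((phi ** n - psi ** n) / sqrt(5))   # Binet's formula

def fib_recursive(n):
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

assert all(fib_closed_form(n) == fib_recursive(n) for n in range(20))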
If you would like to treat the problem as a recursive sequence without a closed-form solution, I would view it as a special sliding-window approach (I call it special because your window size seems fixed at 2). There are more general studies on the proper window size for your interest. See these two posts:
Time Series Prediction via Neural Networks
Proper way of using recurrent neural network for time series analysis
Ok, where to start...
Firstly, you talk about 'machine learning' and 'perfectly emulate'. This is not generally the purpose of machine learning algorithms. They make informed guesses given some evidence and some general notions about structures that exist in the world. That typically means an approximate answer is better than an 'exact' one that is wrong. So, no, most existing machine learning approaches aren't the right tools to answer your question.
Second, you talk of 'recursive structures' as some sort of magic bullet. Yet they are merely convenient ways to represent functions, somewhat analogous to higher-order differential equations. Because of the feedbacks they tend to introduce, the functions tend to be non-linear. Some machine learning approaches will have trouble with this, but many (neural networks for example) should be able to approximate your function quite well, given sufficient evidence.
As an aside, having or not having closed-form solutions is somewhat irrelevant here. What matters is how well the function at hand fits the assumptions embodied in the machine learning algorithm. That relationship may be complex (e.g. try approximating Fibonacci with a support vector machine), but that's the essence.
Now, if you want a machine learning algorithm tailored to the search for exact representations of recursive structures, you could set up some assumptions and have your algorithm produce the most likely 'exact' recursive structure that fits your data. There are probably real world problems in which such a thing would be useful. Indeed the field of optimisation approaches similar problems.
The genetic algorithms mentioned in other answers could be an example of this, especially if you provided a 'genome' that matches the sort of recursive function you think you may be dealing with. Closed form primitives could form part of that space too, if you believe they are more likely to be 'exact' than more complex genetically generated algorithms.
Regarding your assertion that programming cannot be expressed in a hill-climbing way, that doesn't prevent a learning algorithm from scoring possible solutions by how much of your evidence they are able to reproduce and how complex they are. In many cases (most? though counting cases here isn't really possible) such an approach will find a correct answer. Sure, you can come up with pathological cases, but with those, there's little hope anyway.
Summing up, machine learning algorithms are not usually designed to tackle finding 'exact' solutions, so aren't the right tools as they stand. But, by embedding some prior assumptions that exact solutions are best, and perhaps the sort of exact solution you're after, you'll probably do pretty well with genetic algorithms, and likely also with algorithms like support vector machines.
I think you also sum things up nicely with this:
The real problem isn't finding a solution that works for some specific recursive function, but finding a solution space that can encompass the recursive domain in some useful way.
The other answers go a long way to telling you where the state of the art is. If you want more, a bright new research path lies ahead!
See this article:
Turing Machines are Recurrent Neural Networks
http://lipas.uwasa.fi/stes/step96/step96/hyotyniemi1/
The paper describes how a recurrent neural network can simulate a register machine, which is known to be a universal computational model equivalent to a Turing machine. The result is "academic" in the sense that the neurons have to be capable of computing with unbounded numbers. This works mathematically, but would have problems pragmatically.
Because the Fibonacci function is just one of many computable functions (in fact, it is primitive recursive), it could be computed by such a network.
Genetic algorithms should be able to do the trick. The important thing is (as always with GAs) the representation.
If you define the search space to be syntax trees representing arithmetic formulas and provide enough training data (as you would with any machine learning algorithm), it probably will converge to the closed-form solution for the Fibonacci numbers, which is:
Fib(n) = ( (1+sqrt(5))^n - (1-sqrt(5))^n ) / ( 2^n * sqrt(5) )
[Source]
If you were asking for a machine learning algorithm to come up with the recursive formula to the Fibonacci numbers, then this should also be possible using the same method, but with individuals being syntax trees of a small program representing a function.
Of course, you also have to define good cross-over and mutation operators as well as a good evaluation function. And I have no idea how well it would converge, but it should at some point.
Edit: I'd also like to point out that in certain cases there is always a closed-form solution to a recursive function:
Like every sequence defined by a linear recurrence with constant coefficients, the Fibonacci numbers have a closed-form solution.
The Fibonacci sequence, where a specific index of the sequence must be returned, is often used as a benchmark problem in Genetic Programming research. In most cases recursive structures are generated, although my own research focused on imperative programs so used an iterative approach.
There's a brief review of other GP research that uses the Fibonacci problem in Section 3.4.2 of my PhD thesis, available here: http://kar.kent.ac.uk/34799/. The rest of the thesis also describes my own approach, which is covered a bit more succinctly in this paper: http://www.cs.kent.ac.uk/pubs/2012/3202/
Other notable research which used the Fibonacci problem is Simon Harding's work with Self-Modifying Cartesian GP (http://www.cartesiangp.co.uk/papers/eurogp2009-harding.pdf).

what kind of programming requires math? [closed]

This is not a "is math required for programming?" question.
I always thought that for programming, a scary amount of complicated math would be involved (I only got as far as intermediate algebra in college because I'm bad with it).
However, I just got my first job as a developer and have found there is not a ton of math above basic arithmetic (as of yet). I also read in a question here on SO that math is used more to ensure the would-be developer can understand complex problems and solve them.
So I guess is there a different kind of programming where a math level above algebra is needed? My guess would be like geometry and other disciplines for video game programming where you create shapes in 3D and play with time and space in environments. What else requires a high level of math?
EDIT: Wow, a lot of answers. One of them made me think of another, similar question... say, in programs like Photoshop, what kind of math (or overall work) is involved in making something twist, crop, edit, and color images?
I think there are at least two types of answer to this question. Firstly, there are the sorts of programming where the problems come from a field in which maths is important. These include:
finance
science research, e.g. physical modelling
engineering implementations, e.g. stress analysis, chemical engineering
experimental science, e.g. physics, psychology
mathematics itself
cryptography
image processing
signal processing
And then there are the sorts of programming where the target is not necessarily mathematical, but the process of achieving that target needs some maths. These include:
games
optimisation processes
high-complexity systems, e.g. flight control software
high-availability systems, e.g. industrial process monitoring and/or safety
complex data transformations, e.g. compiler design
and so on. Various of these require various levels and aspects of mathematics.
Gaming and simulation are obvious answers. The math is not difficult, but it is clearly there.
For example, say you want to build some sort of asteroids game. You'll need to figure out the position of your space ship. That's a vector. Now you want the ship to travel a certain distance in a certain direction every frame. You'll need to add some sort of delta-x to x, and delta-y to y, so your motion is another vector: (dx, dy). In an asteroids game, you accelerate in the direction you're pointing, so whenever you hit the 'thrust' key, you'll need to calculate the change in dx and dy, i.e. an acceleration vector (ddx, ddy).
(Yep, this is the same dx from calculus class, but now I'm building robot zombie opossums with it.)
But that's not all. If you act now, I'll throw in some trig. Normally you think of motion and acceleration as an angle and a distance (r and theta), but the programming language usually needs these values in dx, dy format. Here's where you'll use some trig: dx = r * cos(theta) and dy = r * sin(theta).
But, let's say you want to have a planet with a gravitational pull? You'll need to approximate gravity in a way that gives you orbit behavior (elliptical orbits, firing changes the altitude of the other side of the orbit, and so on). This is easiest to do if you understand Newton's law of universal gravitation: F = G * m1 * m2 / d^2. This tells you how much 'planetward' force to add to your ship every frame. Multiply this value by a normalized vector pointing from the spaceship to the planet, and add that as a new motion vector.
Believe it or not, I encourage people who don't like math to take game programming courses. Often they discover that math can be (dare I say it) kind of fun when they're using it on problems that involve exploding cows in space.
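Putting the pieces above together, a toy per-frame update might look like this (all constants and names are illustrative, not from any particular engine):
import math

G = 0.5                                          # made-up gravitational constant
x, y, dx, dy = 200.0, 0.0, 0.0, 1.2              # ship position and velocity
theta, thrusting = math.radians(30), True        # ship heading and thrust key state
planet_x, planet_y, planet_mass = 0.0, 0.0, 5000.0

def frame():
    global x, y, dx, dy
    if thrusting:
        dx += 0.05 * math.cos(theta)             # resolve thrust with trig
        dy += 0.05 * math.sin(theta)
    px, py = planet_x - x, planet_y - y          # vector from ship to planet
    d = math.hypot(px, py)
    a = G * planet_mass / (d * d)                # F = G*m1*m2/d^2; the ship's mass cancels
    dx += a * px / d                             # add the 'planetward' acceleration
    dy += a * py / d
    x += dx
    y += dy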
As another example, think about digitizing a sound wave. The analog wave has an infinite number of points. It's impossible to store them all in a digital signal, so we break the audio signal into a large number of small segments, then measure every segment. For a perfect representation, we'd need an infinitely large number of infinitely small time slices.
Draw this out, and you've created the Riemann sum, the foundational idea of Integration (and in fact of Calculus)
One more example:
A friend of mine (a biology professor) was trying to build a 'sim city'-style simulation of a lake ecosystem. He was a decent programmer, but he got all bogged down in the various calculations. He wanted the user to vary usage of the lake (outboard motors, fishing restrictions, and dumping restrictions) and then to see how that affected levels of Nitrogen and other key elements.
He tried all kinds of crazy if-then structures with nested conditions and ugly Boolean logic, but never had a clean solution.
We took actual data, ran it through Excel, and found a trendline that accurately reflected his data with a simple logarithmic formula.
Hundreds of lines of messy code were replaced with a simple formula.
Here's a few general places:
Graphics
Cryptography
Statistics
Compression
Optimization
There are also a lot of specific problem areas where complex math is required, but this is due more to the nature of the program and less about programming in general. Things like financial applications fall into this.
Any kind of numerical analysis, like in geophysics or petroleum exploration.
I once built a tool for accident investigators that required a lot of trigonometry.
In commercial programming, not so much math as arithmetic.
All programming requires math. I think that the difference between people with mathematical backgrounds and people with programming backgrounds is how they approach and answer problems. However, if you are advancing your programming skills you are likely unknowingly advancing your mathematical skills as well (and vice versa).
If you abstractly look at both programming and mathematics you'll see they're identical in their approaches: they both strive to answer problems using very fundamental building blocks.
There is a pretty famous essay by Edsger W. Dijkstra in which he attempts to answer your exact question. It is called: On the Interplay Between Mathematics and Programming.
Game programming (especially 3-D, as you mentioned) has a lot of "more advanced" math. For that matter, any projects where you're modeling a system (e.g. physics simulation).
Crypto also uses different forms of math.
Robotics requires hardcore matrices, and AI requires all kinds of math.
Quite a lot of complex(ish) math in the finance sector. Other than that and trig for 3D, I can't honestly think of much else.
I'm sure there are some, though.
Many seemingly non-mathematical industries such as Pharmaceuticals (eg. BioInformatics), Agriculture, Marketing and in general, any "Business Intelligence" relies heavily on statistics. System performance, routing, scheduling, fault tolerance -- the list goes on....
Digital signal processing and AI/simulation/agents are others.
Animation via code, especially when you try to model real physics, also needs math.
I'm a mathematics graduate and I have to say that the only places where I've really seen any maths being used (above very basic arithmetic) are in understanding and simplifying logical statements, for example the equivalence of these two statements:
(!something) && (!otherThing)
!(something || otherThing)
Apart from that the only time that you would need more complex Maths is when you are working with computer graphics or some subject which is Maths based (e.g. finance or computations) - in which case knowing the Maths is more about understanding your subject than it is about the actual programming.
I work on software that's rather similar to CAD software, and a good grasp of geometry and at least an idea of computational geometry is necessary.
I work in computational chemistry. You need a lot of linear algebra and general understanding of techniques such as Taylor expansion, integrals, gradients, Hessians, Fourier transformation (and in general, expansion on a basis set), differential equations. It's not terribly complex math, but you have to know it.
Statistics is used heavily in businesses performing Quality Assurance and Quality Analysis. My first development job was on a contract at the USDA; these were standard "line-of-business" applications except their line-of-business happened to involve a lot of statistical analysis!
Image compression and image recognition both use Fourier series (including classic sine wave series and other orthogonal series such as wavelet transformations) which has some pretty heavy theory usually not covered until a graduate level course in mathematics or engineering.
Non-linear optimization, constrained optimization, and system estimation using hidden models likewise use a significant amount of advanced mathematical analysis.
Computer science is math. Programming is the programmer's job. They are related, but the two areas don't exactly overlap, so I see the point of your question.
Scientific computing and numerical analysis obviously require a solid base of linear algebra, geometry, advanced calculus and maybe more. And the whole study of algorithms, data structures and their complexity and properties makes use of discrete mathematics and graph theory, as well as calculus and probability. Behind the simple JPEG standard there's a lot of information theory, coding theory, Fourier analysis... And these are only some examples.
Although a computer scientist could work an entire life without writing a single line of code, and the best programmer in the world might know only a little math, the fact is that computers execute algorithms. And algorithms require math. I suggest you take a look at Donald Knuth's "The Art of Computer Programming" to get an idea of what is underneath the "simple" programming thing.
I got my masters degree in meteorology, and I can tell you that for that field and other applied physics fields, the kind of coding you will be doing requires an immense amount of mathematics. A lot of what you are coding is things like the time derivatives of equations.
For things like writing code for games, however, you're not always going to be doing a lot of math in your code. Gaming requires lots of logic. The part of game coding where math comes in is when you have to write physics engines and things like that.

How do I become better in math, after being a programmer for several years [duplicate]

Possible Duplicate:
How to improve my math skills to become a better programmer
Basic Math Book for a Programmer
I've had quite a weird career till now.
First I graduated from a medical school. Then I went into marketing (pharmaceuticals).
And then umm, after some time, I decided to go for my (till then) hobby and became a "professional" programmer.
I've been quite successful at it ever since. I have quite a few languages "under my belt". The pay is not bad, and I have been involved in the open-source community quite heavily.
The thing is that I suck at math :). Well, not totally of course, as I get my work done. But I don't know how much I suck. And I don't know how to find out.
Math was never really a priority during my middle/high school years. I only picked up as little as I could get away with, because I was always preparing to go into medicine. Of course I know the basics of algebra. Things like "normal" and square equations. Also the basics of geometry. But well, there are things that I have missed.
And lately I have been fascinated by things like probability theory, infinity, chaos/order etc. But every time I try to learn something about these topics, I hit a wall of terminology, special symbols, and a special kind of thinking that is quite like mine (a programmer's), but also a lot different (and appears weird to me).
So, what kinds of books would you recommend me? It's very hard to find something suitable. All that I find are either too easy (and boring) or totally impenetrable.
Assuming you have your basic algebra down, I'd start with single variable calculus. I've used several calc books, and found Larson's to be the best. Hope you can find it at a library.
Move on to linear algebra shortly after. This book is free and very good.
Don't worry about mastering everything, you'll probably want to come back to linear algebra.
Then find a book that emphasizes proofs, sets, relations, functions, and axioms. I liked Analysis with an introduction to proof by Lay. Learn proof by induction especially well.
From here, you should be able to break that impenetrable wall you've found yourself against. You will be armed with the terminology to read just about any undergraduate mathematics textbook.
I recommend graph theory, combinatorics, and linear algebra, for their applications in computer science.
Good luck!
Of course I know the basics of algebra. Things like "normal" and square equations. Also the basics of geometry. But well, there are things that I have missed. And lately I am being fascinated by things like probability theory, infinity, chaos/order etc.
I find that mathematics is a one-way door: if you don't get through early, it's hard to go back. It's not impossible to pick up, but it is more difficult without discipline.
The key is doing problems. You don't just read math books - you do problems to work the mechanics into your brain. If you're just reading, I'd say it's impossible to learn it.
Best to go back to what you know and work up. If you feel okay about basic algebra and geometry, start thinking about intro calculus or statistics. Start with the basic stuff: one variable differential and/or integral calculus or statistics. Do a lot of problems and get comfortable.
If you're a computer scientist, you'll find discrete math, graphs, numerical methods, and linear algebra helpful.
Don't expect to do it quickly, especially if you're casual about it.
I'd recommend two wonderful resources:
Verzani - Using R for Introductory Statistics
Gil Strang MIT Linear Algebra
Both are free; both are excellent.
You might check out some of the free course material available online from MIT.
The basics:
Basic understanding of real and complex numbers, functions, sets etc.
(Real) analysis in one variable
(Real) linear algebra
(Real) analysis in several variables
Discrete mathematics
Vector calculus
Complex analysis
Complex linear algebra
Statistics and probability theory
More advanced stuff:
Abstract algebra
Fourier analysis (much more important than one may think) (Basic video course from Stanford)
Transform theory (other than Fourier analysis)
Differential geometry
Functional analysis
Partial differential equations
Non-linear phenomena and chaos
Investigate available math classes at a local junior college. Typically, they offer them during the day for enrolled students but they sometimes have night classes as well. Talk to the professor to see if your math skills are sufficient for the class before enrolling, however, or you'll be struggling right out of the gate.

How do I think about math in programming? [closed]

I'm not sure if this is for SO or not. I am reading some of my old math textbooks and trying to understand math in general. Not how to figure something out (I can do that), but rather what it is that math is doing.
I'm sure this is painfully obvious but I never thought about it until I thought more about game programming. Is it right to think about math as the "language" that is used to explain, precisely explain, why things work?
I'm having a hard time asking it, and again, I'm sure it's obvious to most, but after years of math I'm finally realizing, when someone asks to "find the equation of a line", that people recognized certain characteristics of a line (y=mx+b) in space and found a relationship. They needed something besides a huge paragraph (like this one), something very precise. We call this math, and at its base it's nothing more than a symbolic way to represent things.
Really, I was thinking, "I know why they said 'find the equation of a line'."
So now I am thinking, not just googling for a formula that tells me how to turn a curve with a walking man or follow a path, but why and how do I represent this mathematically, and then programmatically.
Just hoping for comments on math in programming.
To my way of thinking, I create a "model" of some aspect of the world. Examples:
Profit = Income - Expenditure
I throw a ball it's path will be a parabola with equation ...
I then represent the model in a computer program. So some kind of abstraction underpins the program; sometimes the math is so "obvious" we hardly notice it, sometimes (e.g. simulation games) it's both very clearly there and pretty darn tricky.
Key idea: math can be used to model reality, and most business systems can be viewed as a model of some part of reality.
Having said that, in 30 years of programming the amount of true (algebra, calculus) maths I have done is negligible.
Steve Yegge wrote a very good article that you may find helpful: Math Every Day
I recommend that you look into materials related to the theory of computation. For example:
On Computable Numbers, with an Application to the Entscheidungsproblem - Alan Turing (1936)
The Mathematical Theory of Communication - Claude Shannon (1948)
The General and Logical Theory of Automata - John Von Neumann (1951)
These are not papers for the faint of heart, but they will give you insights into the beautiful relationship between mathematics and computer science.
You might want to start with a textbook on the subject of computation theory before you tackle the papers listed above, e.g.
Introduction to the Theory of Computation - Michael Sipser
Math for a programmer is like a hammer for a carpenter. The carpenter doesn't use the hammer for everything, but if he doesn't have one, there's a lot he can't do.
Not sure what your precise question is ...
Some thoughts:
Programming is nothing but math (Functional programming, Lambda calculus, programming == math)
Math is a kind of language - An abstract description/representation of an expression in thought
Math helps you to formalize expressions: Instead of For all integer numbers x from one to ten the square of x is less than 250 you can write ∀x ∈ {1..10} (x² < 250)
Programming (a programming language) does the same thing and helps to formalize algorithms.
The kind of math that is commonly used in computer programs is numeric math, but with some effort you can also perform symbolic computations.
I think math is really the concepts behind the symbols rather than the symbols themselves, but when most people speak of math, they're not making the distinction. They're just thinking of the symbols. Partly, this is because of the way math is taught in school, where the focus is on the mechanistic manipulation of the symbols to get correct results, rather than on what the concepts are.
This is similar to the way non-programmers view programming. They look at a computer program and see gibberish, whereas a programmer in the given language (after more or less effort) understands the behavior the code represents.
Some people are better at retaining the meaning of such symbols than others. I think there are people who might appreciate math more than they think if they could get past that barrier to the concepts.
I agree with Taylor. Math inside computers is a very deep topic with numerical methods. The biggest issues are precision and the fact that 32 bits only get you so far. There are some really cool (and complicated) methods for finding integrals and such with computers, but because we can't be exact with our answers, and because computers are limited in what they can do (add, multiply, etc.), there are lots of methods for estimating mathematical results to a great degree of precision.
If you are interested in that topic, all the more power to you. That was one class I struggled through.
I'm looking at something similar (financial models) - similar in that we come up with mathematical models, and then implement these in code.
The main issue you face from a programming perspective is taking a model that is expressed in mathematical terms (which assume continuity, infinitely small time/space steps etc.) and then translate these into 'discrete' models, that assume finite time/space steps (e.g. the ball moves every 1mm, or every 1ms).
The translation of these models is not necessarily trivial, and you should have a look at appropriate references for these (Numerical Recipes is a classic). The implementation in code is often very different to how you might express the problem in mathematical terms.
I think about math in programming with time, silence and good food, such that I have a lot of paper and a pen, friends to ask for help, and a pile of books from Rudin to Bourbaki on top of my MacBook on the floor.
I think why is a philosophical question.
As far as how I think of math/programming and the interplay between them... I think of them as layers of modeling. At the lowest, 'truest' level there is some fundamental truth, whatever that may be. Then there is the mathematical modeling of this truth, upon which the 'language' of mathematics is developed (fortunately there is only one language?). Then there is another layer, that of modeling and approximations. In the case of y=mx+b, it's only a line within one model; it could be anything. Being visual beings, the most beneficial is perhaps geometric (lines, surfaces, etc.). Then upon this there is the computational modeling, the numerical methods/analysis if you will.
As to how I think of things, I like to think from the modeling perspective. That is, I like to conceptually model some process, and then apply the math and then the numerical methods. Middle-out development, if you will (to draw an N-tier analogy).
As an afterthought, perhaps the modeling could be called engineering.
The best way to get the type of understanding that you're looking for is to work through "story problems" (i.e. problems stated in words rather than equations). From this and your other questions, you're mostly looking at trigonometry.
In short, I would recommend trying the trig book from the Schaum's Outline Series -- they are cheap (~$13) and have lots of problems with solutions.
There are other routes to finding problems in math to solve, such as just making up game design problems to solve. Here are two: 1) show an object moving around a circle at constant speed, and 2) show two objects moving along two different lines that don't intersect, and draw a line between them. Or you could get a book that walks you through these types of things. But you've got to work out a number of problems to force yourself to think things through.

Which particular software development tasks have you used math for? And which branch of math did you use?

I'm not looking for a general discussion on if math is important or not for programming.
Instead I'm looking for real world scenarios where you have actually used some branch of math to solve some particular problem during your career as a software developer.
In particular, I'm looking for concrete examples.
I frequently find myself using De Morgan's theorem, as well as general Boolean algebra, when trying to simplify conditionals.
I've also occasionally written out truth tables to verify changes, as in the example below (found during a recent code review)
(showAll and s.ShowToUser are both of type bool.)
// Before
(showAll ? (s.ShowToUser || s.ShowToUser == false) : s.ShowToUser)
// After!
showAll || s.ShowToUser
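The same simplification can be brute-forced over the whole truth table; a quick sketch (in Python, whereas the snippet above is C#):
from itertools import product

def before(show_all, show_to_user):
    return (show_to_user or show_to_user == False) if show_all else show_to_user

def after(show_all, show_to_user):
    return show_all or show_to_user

assert all(bool(before(a, b)) == bool(after(a, b))
           for a, b in product([False, True], repeat=2))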
I also used some basic right-angle trigonometry a few years ago when working on some simple graphics - I had to rotate and centre a text string along a line that could be at any angle.
Not revolutionary...but certainly maths.
Linear algebra for 3D rendering and also for financial tools.
Regression analysis for the same financial tools, like correlations between financial instruments and indices, and such.
Statistics: I had to write several methods to get statistical values, like the F probability distribution, the Pearson product-moment coefficient, and some linear algebra correlations, interpolations and extrapolations for implementing the Arbitrage Pricing Theory for asset pricing and stocks.
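As an illustration of the statistics part, a minimal Pearson product-moment coefficient (my own sketch, toy data):
from math import sqrt

def pearson(xs, ys):
    # Correlation coefficient r = cov(x, y) / (sd(x) * sd(y)).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson([10, 11, 12, 14], [100, 108, 119, 140]))   # a stock vs. an index, toy numbers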
Discrete math for everything, linear algebra for 3D, analysis for physics especially for calculating mass properties.
[Linear algebra for everything]
Projective geometry for camera calibration
Identification of time series / statistical filtering for sound & image processing
(I guess) basic mechanics and hence calculus for game programming
Computing sizes of caches to optimize performance. Not as simple as it sounds when this is your critical path, and you have to go back and work out the times saved by using the cache relative to its size.
I'm in medical imaging, and I use mostly linear algebra and basic geometry for anything related to 3D display, anatomical measurements, etc...
I also use numerical analysis for handling real-world noisy data, and a good deal of statistics to prove algorithms, design support tools for clinical trials, etc...
Games with trigonometry and AI with graph theory in my case.
Graph theory to create a weighted graph to represent all possible paths between two points and then find the shortest or most efficient path.
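A minimal sketch of that shortest-path idea, using Dijkstra's algorithm on a made-up adjacency-list graph:
import heapq

def dijkstra(graph, start, goal):
    # graph: {node: [(neighbour, cost), ...]}; returns (total cost, path).
    dist, prev, heap = {start: 0}, {}, [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                              # stale queue entry
        for nb, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nb, float("inf")):
                dist[nb], prev[nb] = nd, node
                heapq.heappush(heap, (nd, nb))
    path, node = [goal], goal
    while node != start:                          # walk predecessor links backwards
        node = prev[node]
        path.append(node)
    return dist[goal], path[::-1]

graph = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)]}
print(dijkstra(graph, "A", "D"))                  # (4, ['A', 'B', 'C', 'D'])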
Also statistics for plotting graphs and risk calculations. I used both normal distribution and cumulative normal distribution calculations. Pretty commonly used functions in Excel, I would guess, but I actually had to write them myself since there is no built-in support in the .NET libraries. Sadly, the built-in Math support in .NET seems pretty basic.
I've used trigonometry the most and also a small amount a calculus, working on overlays for GIS (mapping) software, comparing objects in 3D space, and converting between coordinate systems.
A general mathematical understanding is very useful if you're using 3rd-party libraries to do calculations for you, as you often need to appreciate their limitations.
I often use math and programming together, but the goal of my work IS the math, so I use software to achieve that.
As for the math I use: mostly calculus (FFTs for analysing continuous and discrete signals) with a dash of linear algebra (CORDIC) to do trig on an MCU with no floating-point unit.
I used analytic geometry for a simple 3D engine in OpenGL in a hobby project in high school.
I also used some geometry computations for dynamically printed reports, where the layout was rotated by 90°.
A year ago I used some derivatives and integrals for store analysis (product item movement in the store).
But all the computations can be found on the internet or in a high-school book.
Statistics: mean, standard deviation, etc., for our analysts.
Linear algebra - particularly Gauss-Jordan elimination
Calculus - derivatives in the form of difference tables for generating polynomials from a table of (x, f(x))
Linear algebra and complex analysis in electronic engineering.
Statistics in analysing data and translating it into other units (different project).
I used probability and log odds (log of the ratio of two probabilities) to classify incoming emails into multiple categories. Most of the heavy lifting was done by my colleague Fidelis Assis.
Real world scenarios: better rostering of staff, more efficient scheduling of flights, shortest paths in road networks, optimal facility/resource locations.
Branch of maths: Operations Research. Vague definition: construct a mathematical model of a (normally complex) real-world business problem, and then use mathematical tools (e.g. optimisation, statistics/probability, queuing theory, graph theory) to interrogate this model to aid in making effective decisions (e.g. minimise cost, maximise efficiency, predict outcomes, etc.).
Statistics for scientific data analyses such as:
calculation of distributions, z-standardisation
Fisher's Z
Reliability (Alpha, Kappa, Cohen)
Discriminant analyses
scale aggregation, pooling, etc.
In actual software development I've only really used quite trivial linear algebra, geometry and trigonometry. Certainly nothing more advanced than the first college course in each subject.
I have however written lots of programs to solve really quite hard math problems, using some very advanced math. But I wouldn't call any of that software development since I wasn't actually developing software. By that I mean that the end result wasn't the program itself, it was an answer. Basically someone would ask me what is essentially a math question and I'd write a program that answered that question. Sure I’d keep the code around for when I get asked the question again, and sometimes I’d send the code to someone so that they could answer the question themselves, but that still doesn’t count as software development in my mind. Occasionally someone would take that code and re-implement it in an application, but then they're the ones doing the software development and I'm the one doing the math.
(Hopefully this new job I've started will actually let me do both, so we'll see how that works out.)
