Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I have been using R for my research in corporate finance and asset pricing, and I really like it due to my background in Math and Statistics. So far I have encountered two main constraints in R. The first is handling big data files, but I have largely circumvented it by combining R with PostgreSQL and Spark, and I believe I can get more RAM from a high-performance computer or the AWS cloud in the future. The second constraint is execution speed (important for handling tick-by-tick security quote data), and it was suggested to me that Julia has a huge speed advantage over R. My question is: since Rcpp offers really fast execution, does the speed advantage of Julia still hold? I am considering whether I should learn Julia.
In addition, R provides excellent database connections to WRDS, Quandl, TrueFX, and TAQ, and I am really used to Hadley Wickham-style data cleaning. As an academic, I rather like the fact that R has support from peer-reviewed journals like the Journal of Statistical Software. I will try Julia and see how it works. Thanks for all the answers and comments!
Rcpp and Julia will get you to the same place performance-wise in the end. In fact, type-stable Julia will compile to essentially the same LLVM IR as clang-compiled C++. Design-wise, there's nothing stopping it from being the same (in the type-stable case), other than a few missing optimizations because the language is young (for example, @fastmath doesn't apply FMA by default, so you'd have to add FMA calls yourself, whereas I believe C++ compiled with fast-math will use FMA). But you can play around and check that @code_llvm and @code_native output the same code, given type stability.
However, Rcpp would require that you write a bunch of C++ code and test/maintain that code along with your R code. C/C++ is much lower level and can be more difficult to maintain (the "two language problem"). If you choose to go with Julia, you can write it all in Julia. That's the main difference.
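To make the trade-off concrete, here is a minimal sketch of the Rcpp route, using Rcpp's cppFunction to compile a C++ snippet from inside an R session (the sumSquares function is just an invented example):
# The C++ body lives as a string in the R script and is compiled on the fly.
library(Rcpp)

cppFunction("
double sumSquares(NumericVector x) {
    double total = 0;
    for (int i = 0; i < x.size(); ++i) {
        total += x[i] * x[i];
    }
    return total;
}
")

sumSquares(c(1, 2, 3))  # 14
Even in this tiny example you can see the two-language problem: the hot loop is C++, with its own types and compiler errors, while everything around it stays R.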
(As for the whole "Julia is 2x slower than C" claim, it should probably be addressed here. Normally it's due to having small sections of type-unstable code, not turning off array bounds checking with @inbounds (the language comparison from the comments notably doesn't do this, which can make a pretty large difference in a tight loop), and relying on vectorized styles (à la R/MATLAB/Python). The last part is much better in Julia v0.6, but it will always have a small cost over looping. In the end, it's opt-in/opt-out choices for concise code and additional safety checks that cause the difference.)
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion.
Closed 11 years ago.
I am participating in a big programming competition tomorrow where I use R.
Time is the main factor (only 2 hours for 7 coding problems).
The problems are very mathematics related.
1) I would like to write "f" instead of "function" when I define a function. This can be done and I had the code to do so, but I lost it and cannot find it.
2) Where do I find sin() functions that take degrees as input, not radians?
3) (optional) Are there any algorithm-specific task views or libraries?
4) Any tips for a programming contest?
I prepared the following cheat sheet for the contest:
http://pastebin.com/h5xDLhvg
======== EDIT: ==========
So I finally have time to write down my lessons learned.
The programming contest was a lot of fun, but unfortunately I did not score very well. I was in the top 50%, but my aim was to be in the top 25%.
The main problem was that there was very little time to program, just 2 hours in total. But I had to read the problem descriptions, and I also needed some time to paste the results into the web form, etc., so it was more like 90 minutes of actual programming.
Hopefully the next contest in December will have extended time, like 3-4 hours. The organizers said that this will perhaps be the case.
Also, there was no Internet access at the contest, and my mobile reception was not really working.
The main lesson for me is that you have to use a language you use daily in order to have a real chance, especially if there are only about 90 minutes to program. Since I use Haskell more than R in my daily work, I think R was not the best choice. During the contest I mixed up Haskell and R function definitions, and I made too many small typos to program fast enough.
What was great about the contest was that there was about 20,000 bucks in prize money in total for the roughly 80 participants. So the top 25% of participants got from 500 to 1500 bucks each. Further, I think the top 15% get a job offer right away from one of the sponsoring IT firms.
So it's a win-win situation. It's fun, plus you can win prize money. Further, the IT firms are more than happy, because they get access to the top programmers.
I used the chance to speak to IT decision makers. One of them was from a larger bank. I boldly suggested that they consider switching their development from Java to Scala, and also that they consider using R and Haskell. It was fun, and they even said they had already looked into Scala!
What was interesting to note was that one of my best friends scored very well in the competition. He is only 19 years old, but he was well within the top 20% and got 500 bucks in prize money. He beat me plus six of my colleagues, who all have respectable computer science degrees. My friend programs more in a hacker style, but he was very fast.
People in the top 10 used:
1) Java
2) C#
3) C++
(No other programming languages made the top 10!)
The only other programming language that scored reasonably well was Ruby, I think.
For the next contest, the programming language of choice will probably be Haskell. For one thing, it's just easier to find two teammates for Haskell than for R programming, and up to three people can form a team.
My ideal scenario would be a very lightweight framework where I could use multiple programming languages at once for the contest. That way, the main code could be written in Haskell (which all teammates can program in), and some specific functions could be programmed in R, in Mathematica, or even in some other language (like Python/Sage).
This sounds a little like overkill, but I think it would be very useful. Take a function that has a matrix as a parameter and returns a matrix: the framework would automatically generate a RESTful service from the R code, so I could call the R function from any programming language, with the matrix passed around as JSON data (or some other serialization). Okay, but this is off topic...
So finally some lessons learned as a bullet list:
Don't bring food. You don't have time to eat, and there is a rich buffet afterwards.
Time is the limiting factor!
If you don't program in R for a living, don't use R.
Look for contests where there is more time (3-4 hours minimum!).
All in all, the concept of the contest is superb, both for the participants and for the sponsors.
BIG THANKS to 'Iterator' for his helpful post!!
I'm going to answer a related, but different question. No offense, but your original suggestions don't seem very wise for a programming competition. Much of the time spent in such contests is in devising an answer and in debugging (or, better, avoiding the need to debug).
Instead, I will answer this question: "What are the key resources in R that are useful for rapid prototyping, with a focus on being able to find resources quickly, being able to debug quickly, and being able to investigate data quickly? If I need to use numerical optimization methods and algebra systems, what should I investigate?"
Here are my answers:
Install RStudio or possibly Revolution Analytics' R, depending on which interface seems more appropriate to you. Both are good. The former has a very smooth GUI, the latter has a more intense interface, with more capabilities for managing code. Both have some nice properties over the "community" R regarding being able to look up information and navigate the help libraries quickly.
Get acquainted with example(), identify where to get vignettes and tutorials (from packages' pages on CRAN), and take a brief look at demo().
Use the sos library, and master findFn.
Look at the Task Views on CRAN - be sure you know about the tools for high performance computing (if that is going to be related) and the tools for optimization - it's quite common to need to use some kind of solver, and there's a task view for that.
If your code is running slowly during the prototyping or competition, you'll need to run Rprof(). Take that for a spin first. You may also benefit from using the compiler package if your code involves much iteration. In short: You do not want to wait on the computer. You might also look at foreach and doSMP or doMC if you can parcel the job to different cores. To aggregate results, become familiar with plyr and methods like ldply, as well as standard *apply functions, like lapply and apply; another good one to know is rapply. (If you have lots of stuff to process and it takes some time, look at mclapply or the .parallel argument for the plyr functions.)
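To illustrate the profiling and parallelism workflow, here is a hedged sketch (the toy computation stands in for your real code, and mclapply is shown via today's parallel package on non-Windows systems):
Rprof("profile.out")                      # start the sampling profiler
x <- replicate(100, sum(sort(runif(1e4))))
Rprof(NULL)                               # stop profiling
summaryRprof("profile.out")$by.self       # where did the time go?

library(compiler)
slowSum <- function(v) { s <- 0; for (e in v) s <- s + e; s }
fastSum <- cmpfun(slowSum)                # byte-compile a loop-heavy function

library(parallel)
res <- mclapply(1:4, function(i) mean(runif(1e6)), mc.cores = 2)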
On Stack Overflow: browse JD Long's questions - much of what you will discover that you do not know will have been asked by him before you thought to ask it. And there's an answer already there.
Create a number of little code templates for yourself. Master functions so that you don't need to learn these in a rush. Learn how to debug and step through these, using debug() and browser().
If you have to count things, learn how to use the hash package (akin to Perl and Python hash tables) and learn to use digest for keys that are too long to be used for hash (see this question for references)
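A small sketch of that combination (assuming the hash and digest packages mentioned above are installed):
library(hash)
library(digest)
counts <- hash()
for (key in c("apple", "banana", "apple")) {
    k <- digest(key)   # short, fixed-length key for arbitrarily large objects
    counts[[k]] <- if (has.key(k, counts)) counts[[k]] + 1 else 1
}
values(counts)   # 2 for apple's digest, 1 for banana's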
If you are going to need to plot things, get some basic example plots prepared, using either plot or ggplot2, along with hist, boxplot, and some others. If you don't know ggplot2 already, then postpone, but you should become familiar with it. If you happen to use a lot of data, then be sure you know hexbin. If you will have to interact with data, then get to know iplots and the interesting tools there, such as iplot, ihist, and parallel coordinate plots (ipcp).
Be sure you know how to use lists, data frames, and matrices, including subscripting, lookups of entries based on (row, column) indices. (Again, be sure to investigate plyr for transforming and operating on some of these objects.)
Get acquainted with data.table() - it's exceptionally efficient for a lot of things you might do with data frames and matrices.
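For instance, a minimal data.table sketch with made-up data:
library(data.table)
DT <- data.table(group = c("a", "a", "b"), value = c(1, 2, 3))
setkey(DT, group)                         # index for fast lookups
DT["a"]                                   # keyed subset
DT[, .(total = sum(value)), by = group]   # grouped aggregation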
If you need to do symbolic mathematics, be sure you know the packages for that or else get another standalone tool for symbolic math. Ryacas is one package that appears to be useful.
Get the PDF of R in a Nutshell, so that you can rapidly search through it for useful methods. Otherwise, get the book itself. Various other books, such as Venables & Ripley, the R Cookbook, and others may be useful, depending on your experience.
If you've already mastered a good editor (e.g. emacs) or IDE (e.g. Eclipse), stick with it and look for bindings to R. Otherwise, a simple one you can begin using right away is Notepad++. Being able to do block selection is a very useful property in an editor. Being able to search through an entire directory hierarchy of code examples is another useful capability.
If you need to do anything involving database data, you may want to know RSQLite and sqldf, though these may not be relevant to a math competition.
Open a bunch of R instances so that you can try things out. :) [This is actually serious: by having multiple instances running, you can somewhat avoid latency associated with sequentially trying things out, waiting for results, and then debugging the results.]
For (1), you can do something like
f <- function(..., body)
{
    # capture the argument names and the body as unevaluated expressions
    dots <- as.list(substitute(list(...)))[-1]
    body <- substitute(body)
    # start from an empty function, then fill in its formals and body
    f <- function() NULL
    formals(f) <- setNames(rep(alist(x = ), length(dots)),
                           sapply(dots, as.character))
    body(f) <- body
    environment(f) <- parent.env(environment())
    f
}
which lets you write, e.g., g <- f(x, y, body = x + y), but I'm not sure how far that gets you.
For (2), you could just do:
sindeg <- function(x) sin(x*pi/180)
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 5 years ago.
Recently there has been a paper floating around by Vinay Deolalikar at HP Labs which claims to have proved that P != NP.
Could someone explain how this proof works for us less mathematically inclined people?
I've only scanned through the paper, but here's a rough summary of how it all hangs together.
From page 86 of the paper.
... polynomial time algorithms succeed by successively "breaking up" the problem into smaller subproblems that are joined to each other through conditional independence. Consequently, polynomial time algorithms cannot solve problems in regimes where blocks whose order is the same as the underlying problem instance require simultaneous resolution.
Other parts of the paper show that certain NP problems cannot be broken up in this manner. Thus P ≠ NP.
Much of the paper is spent defining conditional independence and proving these two points.
Dick Lipton has a nice blog entry about the paper and his first impressions of it. Unfortunately, it also is technical. From what I can understand, Deolalikar's main innovation seems to be to use some concepts from statistical physics and finite model theory and tie them to the problem.
I'm with Rex M on this one: some results, mostly mathematical ones, cannot be explained to people who lack the technical mastery.
I liked this ( http://www.newscientist.com/article/dn19287-p--np-its-bad-news-for-the-power-of-computing.html ):
His argument revolves around a particular task, the Boolean satisfiability problem, which asks whether a collection of logical statements can all be simultaneously true or whether they contradict each other. This is known to be an NP problem.
Deolalikar claims to have shown that there is no program which can complete it quickly from scratch, and that it is therefore not a P problem. His argument involves the ingenious use of statistical physics, as he uses a mathematical structure that follows many of the same rules as a random physical system.
The effects of the above can be quite significant:
If the result stands, it would prove that the two classes P and NP are not identical, and impose severe limits on what computers can accomplish – implying that many tasks may be fundamentally, irreducibly complex. For some problems – including factorisation – the result does not clearly say whether they can be solved quickly. But a huge sub-class of problems called "NP-complete" would be doomed. A famous example is the travelling salesman problem – finding the shortest route between a set of cities. Such problems can be checked quickly, but if P ≠ NP then there is no computer program that can complete them quickly from scratch.
This is my understanding of the proof technique: he uses first-order logic to characterize all polynomial-time algorithms, and then shows that for large SAT problems with certain properties, no polynomial-time algorithm can determine their satisfiability.
Another way of thinking about it, which may be entirely wrong, but is my first impression on a first pass: we think of assigning/clearing terms in circuit satisfaction as forming and breaking clusters of 'ordered structure', and he then uses statistical physics to show that there isn't enough speed in polynomial operations to perform those operations in a particular "phase space" of operations, because these "clusters" end up being too far apart.
Such a proof would have to cover all classes of algorithms, including continuous global optimization.
For example, in the 3-SAT problem we have to assign values to variables so that all alternatives (ORs) of triples of these variables or their negations are fulfilled. Observe that x OR y can be turned into the problem of optimizing
((x-1)^2+y^2)((x-1)^2+(y-1)^2)(x^2+(y-1)^2)
and analogously a product of seven such factors for an alternative of three variables.
Finding the global minimum of a sum of such polynomials over all clauses would solve our problem. (source)
This takes us out of standard combinatorial techniques and into the continuous world: gradient methods, local-minima-removal methods, evolutionary algorithms. It's a completely different kingdom - numerical analysis - and I don't believe such a proof could really cover it (?)
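To make the construction concrete, here is a toy sketch of that encoding in R, using base R's optim (Nelder-Mead) on a made-up two-clause instance; a local optimizer can of course get stuck, which is exactly the point about this being a different kingdom:
clause_poly <- function(x, y) {
    # zero exactly at the satisfying 0/1 assignments of (x OR y)
    ((x - 1)^2 + y^2) * ((x - 1)^2 + (y - 1)^2) * (x^2 + (y - 1)^2)
}
# (x OR y) AND (NOT x OR y); a negated literal enters as 1 - v
objective <- function(v) clause_poly(v[1], v[2]) + clause_poly(1 - v[1], v[2])
fit <- optim(c(0.5, 0.5), objective)
round(fit$par)   # a satisfying assignment whenever fit$value reaches 0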
It's worth noting that with proofs, "the devil is in the detail". The high-level overview is obviously something like:
Show some sort of relationship between items, show that this relationship implies X and that X implies Y, and thus my argument is shown.
I mean, it may be via induction or any other form of proving things, but what I'm saying is the high-level overview is useless. There is no point explaining it. Although the question itself relates to computer science, it is best left to mathematicians (though it is certainly incredibly interesting).
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
For example, mathematical logic or graph theory.
Everyone around me says that math is necessary for a programmer. I have seen a lot of threads where people say that they used linear algebra and some other math, but no one described concrete cases where they used it.
I know that there are similar threads, but I couldn't find any description of such a case.
Computer graphics.
It's all matrix multiplication, vector spaces, affine spaces, projection, etc. Lots and lots of algebra.
For more information, here's the Wikipedia article on projection, along with the more specific case of 3D projection, with all of its various matrices. OpenGL, a common computer graphics library, is an example of applying affine matrix operations to transform and project objects onto a computer screen.
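As a small taste of that algebra, here is a 2D rotation done as a single matrix multiplication (a sketch in base R; a graphics library does the same thing in 3D with 4x4 affine matrices):
theta <- pi / 2                          # rotate 90 degrees
rot <- matrix(c(cos(theta), sin(theta),
                -sin(theta), cos(theta)), nrow = 2)
pts <- rbind(c(1, 0, 1),                 # points as column vectors
             c(0, 1, 1))
rot %*% pts                              # (1,0)->(0,1), (0,1)->(-1,0), (1,1)->(-1,1)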
I think that a lot of programmers use more math than they think they do. It's just that it comes so intuitively to them that they don't even think about it. For instance, every time you write an if statement are you not using your Discrete Math knowledge?
In the graphics world you need a lot of transformations.
In cryptography you need geometry and number theory.
In AI, you need algebra.
And statistics in financial environments.
Computer science theory needs mathematical theory: actually, almost all of the field's founders came from mathematics.
Given a list of locations with latitudes and longitudes, sort the list in order from closest to farthest from a specific position.
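For instance, a hedged sketch of that task in R, using the haversine great-circle distance (coordinates in degrees, Earth radius in km; the sample locations are made up):
haversine_km <- function(lat1, lon1, lat2, lon2, r = 6371) {
    to_rad <- pi / 180
    dlat <- (lat2 - lat1) * to_rad
    dlon <- (lon2 - lon1) * to_rad
    a <- sin(dlat / 2)^2 +
         cos(lat1 * to_rad) * cos(lat2 * to_rad) * sin(dlon / 2)^2
    2 * r * asin(sqrt(a))
}
locs <- data.frame(name = c("A", "B", "C"),
                   lat  = c(40.7, 34.1, 41.8),
                   lon  = c(-74.0, -118.2, -87.7))
here <- c(lat = 41.9, lon = -87.6)        # reference position
locs[order(haversine_km(here["lat"], here["lon"], locs$lat, locs$lon)), ]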
All applications that deal with money need math.
I can't think of a single app that I have written that didn't require math at some point.
I wrote a parser compiler a few months back, and that's full of graph theory. It was only designed to be slightly more powerful than regular expressions (in that multiple matches were allowed, and some other features were added), but even such a simple compiler requires loop detection, finite state automata, and tons more math.
Implementing the Advanced Encryption Standard (AES) algorithm required some basic understanding of finite field math. See act 4 of my blog post on it for details (code sample included).
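For flavor, here is a sketch of the kind of finite-field arithmetic involved: byte multiplication in GF(2^8) reduced by the AES polynomial x^8 + x^4 + x^3 + x + 1 (a generic illustration, not the post's own code):
gf_mul <- function(a, b) {
    p <- 0L
    for (i in 1:8) {                      # shift-and-xor, one bit of b at a time
        if (bitwAnd(b, 1L) != 0L) p <- bitwXor(p, a)
        carry <- bitwAnd(a, 0x80L)
        a <- bitwAnd(bitwShiftL(a, 1L), 0xFFL)
        if (carry != 0L) a <- bitwXor(a, 0x1BL)   # reduce modulo 0x11B
        b <- bitwShiftR(b, 1L)
    }
    p
}
gf_mul(0x57L, 0x83L)   # 193 (0xC1), the worked example in the AES spec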
I've used a lot of algebra when writing business apps.
Simple Examples
BMI = weight / (height * height);
compensation = 10 * hours * ((pratio * 2.3) + tratio);
A few years ago, I had a DSP project that had to compute a real radix-2 FFT of size N, in a given time. The vendor-supplied real radix-2 FFT wouldn't run in the allocated time, but their complex FFT of size N/2 would. It is easy to feed the real data into the complex FFT. Getting the answers out afterwards is not so easy: it is called post-weaving, or post-unweaving, or unweaving. Deriving the unweave equations from the FFT and complex number theory was not fun. Going from there to tightly-optimized DSP code was equally not fun.
Naturally, the signal I was measuring did not match the FFT sample size, which causes artifacts. The standard fix is to apply a Hanning window. This causes other artifacts. As part of understanding (and testing) that code, I had to understand the artifacts caused by the Hanning window, so I could interpret the results and decide whether the code was working or not.
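Here is a hedged sketch of the windowing step in R, using base R's fft (the tone frequency is made up; the point is that it falls between bins, which is what causes the leakage artifacts):
n <- 1024
t <- 0:(n - 1)
signal <- sin(2 * pi * 50.5 * t / n)          # tone between FFT bins -> leakage
hann <- 0.5 * (1 - cos(2 * pi * t / (n - 1))) # Hann(ing) window
spec_raw <- Mod(fft(signal))[1:(n / 2)]
spec_win <- Mod(fft(signal * hann))[1:(n / 2)]
# The windowed spectrum has far lower skirts away from the tone, at the
# cost of a wider main lobe around bin 50 -- the artifact trade-off above.
round(spec_raw[40:45], 2)
round(spec_win[40:45], 2)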
I've used tons of math in various projects, including:
Graph theory for dealing with dependencies in large systems (e.g. a Makefile is a kind of directed graph)
Statistics and linear regression in profiling performance bottlenecks
Coordinate transformations in geospatial applications
In scientific computing, project requirements are often stated in algebraic form, especially for computationally intensive code
And that's just off the top of my head.
And of course, anything involving "pure" computer science (algorithms, computational complexity, lambda calculus) tends to look more and more like math the deeper you go.
In answering this image-comparison-algorithm question, I drew on lots of knowledge of math, some of it from other answers and web searches (where I had to apply my own knowledge to filter the information), and some from my own engineering training and lengthy programming background.
General Mindforming
Solving Problems - One fundamental method of math, independent of the area, is transforming an unknown problem into a known one. Even if you don't face the same problems, you need the same skill. In math, as in programming, virtually everything has different representations. Understanding the equivalence between algorithms, problems, or solutions that are completely different on the surface helps you avoid the hard parts.
(A similar thing happens in physics: to solve a kinematic problem, choice of the coordinate system is often the difference between one and ten pages full of formulas, even though problem and solution are identical.)
Precision of Language / Logical reasoning - Math has a very terse yet precise language. Learning to deal with that will prepare you for computers doing what you say, not what you meant. Also, the same precision is required to analyse if a specification is sufficient, to check a piece of code if it covers all possible cases, etc.
Beauty and elegance - This may be the argument that's hardest to grasp. I found that the notion of "beauty" in code is very close to the one found in math. A beautiful proof is one whose idea is immediately convincing, and the proof itself merely executes a sequence of obvious next steps.
The same goes for an elegant implementation.
(Most mathematicians I've encountered have a weakness for putting the "Aha!" effect at the end rather than at the beginning. As have most elite geeks.)
You can learn these skills without a single math lesson, of course. But math has perfected this over centuries.
Applied Skills
Examples:
- Not having to run calc.exe for a quick estimation of memory requirements
- Some basic statistics to tell a valid performance measurement from a shot in the dark
- Deducing a formula for a sequence of values, rather than hardcoding them
- Getting a feeling for what c*O(N log N) means
- Recursion is the same as proof by induction
(that list would probably go on if I'd actively watch myself for items for a day. This part is admittedly harder than I thought. Further suggestions welcome ;))
Where I use it
The company I work for does a lot of data acquisition, and our claim to fame (compared to our competition) is the brain muscle that goes into extracting something useful out of the data. While I'm mostly unconcerned with that, I get enough math thrown my way. Before that, I implemented and validated random number generators for statistical applications, implemented a differential equation solver, and wrote simulations for selected laws of physics. And probably more.
I wrote some hash functions for mapping airline codes and flight numbers with good efficiency into a fairly limited number of data slots.
I went through a fair number of primes before finding numbers that worked well with my data. Testing required some statistics and estimates of probabilities.
In machine learning: we use Bayesian (and other probabilistic) models all the time, and we use quadratic programming in the form of Support Vector Machines, not to mention all kinds of mathematical transformations for the various kernel functions. Calculus (derivatives) factors into perceptron learning. Not to mention a whole theory of determining the accuracy of a machine learning classifier.
In artificial intelligence: constraint satisfaction and logic weigh very heavily.
I was using coordinate geometry to solve the problem of finding the visible parts of a stack of windows that do not exactly overlap one another.
There are many other situations, but this is the one off the top of my head. Inherently, all operations that we do are mathematics, or at least depend on or relate to mathematics.
That's why it's important to know mathematics, to have a clearer understanding of things :)
In fact, in some cases a lot of math has gone into our common sense, so we don't notice that we are using math to solve a particular problem, since we have been using it for so long!
Thanks
-Graphics (matrices, translations, shaders, integral approximations, curves, etc., etc., ...infinite dots)
-Algorithm complexity calculations (especially in line-of-business applications)
-Pointer arithmetic
-Cryptography under field arithmetic, etc.
-GIS (triangles, squares, algorithms like Delaunay, bounding boxes, and many more)
-Performance monitor counters and the functions they describe
-Functional programming (simply that, not saying more :))
-......
I used Combinatorials to stuff 20 bits of data into 14 bits of space.
Machine Vision or Computer Vision requires a thorough knowledge of probability and statistics. Object detection/recognition and many supervised segmentation techniques are based on Bayesian inference. Heavy on linear algebra too.
As an engineer, I'm trying really hard to think of an instance when I did not need math. Same story when I was a grad student. Granted, I'm not a programmer, but I use computers a lot.
Games and simulations need lots of maths - fluid dynamics, in particular, for things like flames, fog and smoke.
As an e-commerce developer, I have to use math every day for programming. At the very least, basic algebra.
There are other apps I've had to write for vector based image generation that require a strong knowledge of Geometry, Calculus and Trigonometry.
Then there is bit-masking...
Converting hexadecimal to base ten in your head...
Estimating load potential of an application...
Yep, if someone is no good with math, they're probably not a very good programmer.
Modern communications would completely collapse without math. If you want to make your head explode sometime, look up Galois fields, error correcting codes, and data compression. Then symbol constellations, band-limited interpolation functions (I'm talking about sinc and raised-cosine functions, not the simple linear and bicubic stuff), Fourier transforms, clock recovery, minimally-ambiguous symbol training sequences, Rayleigh and/or Ricean fading, and Kalman filtering. All of those involve math that makes my head hurt bad, and I got a Masters in Electrical Engineering. And that's just off the top of my head, from my wireless communications class.
The amount of math required to make your cell phone work is huge. To make a 3G cell phone with Internet access is staggering. To prove with sufficient confidence that an algorithm will work in most all cases sometimes takes people's careers.
But... if you're only ever going to work with this stuff as black boxes imported from a library (at their mercy, really), well, you might get away with just knowing enough algebra to debug mismatched parentheses. And there are a lot more of those jobs than the hard ones... but at the same time, the hard jobs are harder to find a replacement for.
Examples that I've personally coded:
wrote a simple video game where one spaceship shoots a laser at another ship. To know if the ship was in the laser's path, I used basic algebra (y = mx + b) to calculate if the paths intersect (see the sketch after this list). (I was a child when I did this and was quite amazed that something that was taught on a chalkboard (algebra) could be applied to computer programming.)
calculating mortgage balances and repayment schedules with logarithms
analyzing consumer buying choices by calculating combinatorics
trigonometry to simulate camera lens behavior
Fourier Transform to analyze digital music files (WAV files)
stock market analysis with statistics (linear regressions)
using logarithms to understand binary search traversals and also disk space savings when using packing information into bit fields. (I don't calculate logarithms in actual code, but I figure them out during "design" to see if it's feasible to even bother coding it.)
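Here is the kind of calculation behind the first bullet, as a small hedged sketch in R (a hypothetical helper, not the original game code):
path_intersection <- function(m1, b1, m2, b2) {
    # two paths as y = m*x + b; they meet where m1*x + b1 = m2*x + b2
    if (m1 == m2) return(NULL)            # parallel paths never cross
    x <- (b2 - b1) / (m1 - m2)
    c(x = x, y = m1 * x + b1)
}
path_intersection(1, 0, -1, 4)   # crosses at (2, 2)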
None of my projects (so far) have required topics such as calculus, differential equations, or matrices. I didn't study mathematics in school but if a project requires math, I just reference my math books and if I'm stuck, I search google.
Edited to add: I think it's more realistic for some people to have a programming challenge motivate the learning of particular math subjects. For others, they enjoy math for its own sake and can learn it ahead of time to apply to future programming problems. I'm of the first type. For example, I studied logarithms in high school but didn't understand their power until I started programming, and all of a sudden they seemed to pop up all over the place.
The recurring theme I see from these responses is that this is clearly context-dependent.
If you're writing a 3D graphics engine then you'd be well advised to brush up on your vectors and matrices. If you're writing a simple e-commerce website then you'll get away with basic algebra.
So depending on what you want to do, you may not need any more math than you did to post your question(!), or you might conceivably need a PhD (e.g. if you would like to write a custom geometry kernel for turbine fan blade design).
One time I was writing something for my Commodore 64 (I forget what, I must have been 6 years old) and I wanted to center some text horizontally on the screen.
I worked out the formula using a combination of math and trial-and-error; years later I would tackle such problems using actual algebra.
Drawing, moving, and guidance of missiles and guns and lasers and gravity bombs and whatnot in this little 2d video game I made: wordwarvi
Lots of uses of sine/cosine and their inverses (via lookup tables... I'm old, OK?).
Any geo based site/app will need math. A simple example is "Show me all Bob's Pizzas within 10 miles of me" functionality on a website. You will need math to return lat/lons that occur within a 10 mile radius.
This is primarily a question whose answer will depend on the problem domain. Some problems require oodles of math and some require only addition and subtraction. Right now, I have a pet project which might require graph theory, not for the math so much as to get the basic vocabulary and concepts in my head.
If you're doing flight simulations and anything 3D, say hello to quaternions! If you're doing electrical engineering, you will be using trig and complex numbers. If you're doing a mortgage calculator, you will be doing discrete math. If you're doing an optimization problem, where you attempt to get the most profits from your widget factory, you will be doing what is called linear programming. If you are doing some operations involving, say, network addresses, welcome to the kind of bit-focused math that comes along with it. And that's just for the high-level languages.
If you are delving into highly-optimized data structures and implementing them yourself, you will probably do more math than if you were just grabbing a library.
Part of being a good programmer is being familiar with the domain in which you are programming. If you are working on software for Fidelity Mutual, you probably would need to know engineering economics. If you are developing software for Gallup, you probably need to know statistics. LucasArts... probably Linear Algebra. NASA... Differential Equations.
The thing about software engineering is you are almost always expected to wear many hats.
More or less anything having to do with finding the best layout, optimization, or object relationships is graph theory. You may not immediately think of it as such, but regardless - you're using math!
An explicit example: I wrote a node-based shader editor and optimizer, which took a set of linked nodes and converted them into shader code. Finding the correct order in which to output the code, such that all inputs for a certain node were available before that node needed them, involved graph theory.
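That ordering problem is a topological sort. A hedged sketch in R (Kahn's algorithm; the node names are invented):
topo_sort <- function(nodes, edges) {
    # edges: data.frame(from, to), meaning `from` must be emitted before `to`
    indeg <- setNames(rep(0, length(nodes)), nodes)
    tab <- table(edges$to)
    indeg[names(tab)] <- tab
    ready <- nodes[indeg == 0]
    out <- character(0)
    while (length(ready) > 0) {
        n <- ready[1]; ready <- ready[-1]
        out <- c(out, n)
        for (m in edges$to[edges$from == n]) {
            indeg[m] <- indeg[m] - 1
            if (indeg[m] == 0) ready <- c(ready, m)
        }
    }
    if (length(out) < length(nodes)) stop("cycle in the node graph")
    out
}
edges <- data.frame(from = c("uv", "uv", "noise"),
                    to   = c("noise", "color", "color"))
topo_sort(c("uv", "noise", "color"), edges)   # "uv" "noise" "color"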
And like others have said, anything having to do with graphics implicitly requires knowledge of linear algebra, coordinate spaces transformations, and plenty of other subtopics of mathematics. Take a look at any recent graphics whitepaper, especially those involving lighting. Integrals? Infinite series?! Graph theory? Node traversal optimization? Yep, all of these are commonly used in graphics.
Also note that just because you don't realize that you're using some sort of mathematics when you're writing or designing software, doesn't mean that you aren't, and actually understanding the mathematics behind how and why algorithms and data structures work the way they do can often help you find elegant solutions to non-trivial problems.
In years of webapp development I haven't had much need for the Math API. As far as I can recall, I have only ever used Math#min() and Math#max().
For example
if (i < 0) {
    i = 0;
}
if (i > 10) {
    i = 10;
}
can be done as
i = Math.max(0, Math.min(i, 10));