I've noticed the word "moral" keeps coming up in functional programming contexts. A couple examples:
Fast and Loose Reasoning is Morally Correct
Purescript Aff documentation:
This is moral equivalent of ErrorT (ContT Unit (Eff e)) a.
I'm unfamiliar with these usages of the word. I can mostly infer what they're trying to say, but can we clarify more precisely what it means?
(Cross-posted on English Language & Usage)
The term "moral equivalence" in (formalized) logics, and by extension, in programming has nothing to do with appeal to morality (as in, ethical or philosophical questions). It is co-opting the term "morally", but means something different. It is generally supposed to mean "P holds, but only under certain side-conditions". These conditions are often omitted if they have no educational value, are trivial, technical and/or boring. Hence, the linked article about "moral equivalence" has nothing to do it – there are no value judgements involved here.
I don't know much about Purescript, but I'd interpret the statement you mentioned as "you can achieve the same thing with Aff as with ErrorT (ContT Unit (Eff e)) a".
To give another example: let's say you have two functions, and you are only interested in a specific (maybe large) subset of their domain. Let's also say that the two functions agree on that subset, that is, for all x ∈ dom, f(x) = g(x). But, for the sake of the example, they do something different on 0, and you will never pass 0 into them (because 0 violates some assumption). One could reasonably say that f and g "are morally equivalent".
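A minimal sketch of that situation (the functions, their behaviour at 0, and their names are purely illustrative; this is not taken from the paper or the Aff docs):
#include <cassert>
#include <initializer_list>

// Two "morally equivalent" functions: they agree on every input we actually
// use (x != 0) and differ only at the excluded point 0.
double safeInverse(double x) {
    if (x == 0.0) return 0.0;   // arbitrary choice at the point we never pass in
    return 1.0 / x;
}

double fastInverse(double x) {
    return 1.0 / x;             // yields infinity at 0
}

int main() {
    // On the domain we care about, the two are interchangeable.
    for (double x : {-2.0, -0.5, 1.0, 3.0}) {
        assert(safeInverse(x) == fastInverse(x));
    }
    return 0;
}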
Especially in the logics community, there are other uses of "moral", for example in the phrase "the proof is morally questionable", which means that the author considers the proof to be sloppy and possibly to have gaps, but technically fixable. In a particular case, namely carrying out proofs about potentially non-terminating programs, the paper you mentioned gives such a justification, which is echoed in the title "Fast and Loose Reasoning is Morally Correct".
As Conor McBride points out on Twitter, this usage stems from the category theory community, which inspires much in FP.
https://twitter.com/pigworker/status/739971825128607744
Eugenia Cheng has a good paper describing the concept of morality as used in mathematics.
http://www.cheng.staff.shef.ac.uk/morality/morality.pdf
In quite a few of my more recent programs, I've been using basic calculus to replace if statements. For example, in a for loop I've used:
pos = cbltSz*(x-1) to get the position of a small cube relative to a large one, rather than saying something like if(x == 0){pos = -cbltSz}. This is more or less to neaten up the code a little bit. But it got me thinking: to what extent would using maths out-perform pre-defined statements/functions, and how much would it vary from language to language? This is assuming that my maths is preferable to the alternative in some way other than aesthetics.
Modern CPUs have deep pipelines, so that a branch misprediction may come at a considerable performance impact. On the other hand, they tend to have considerable floating-point computation power, often being able to perform multiple floating-point operations in parallel on different ALUs. So there might be a performance benefit to this approach. But it all depends. If the application is already doing a lot of number crunching, the ALUs might be saturated. If the branch predictor does a good job, branch mispredictions may be rare in many applications.
So I'd go with the usual rules for optimizations: don't try to hand-optimize everything. Don't optimize at the cost of readability and maintainability. If you have a performance bottleneck, identify the hot portions of your codebase and consider alternatives for those. Try out alternatives in benchmarks, because theoretic considerations only get you so far.
Note that your statements pos = cbltSz*(x-1) and if(x == 0){pos = -cbltSz} are not equivalent if x is non-zero: the first sets pos to some definite value while the second leaves it to its previous value. The significance of this difference depends on the rest of your program. The two statements also differ in clarity--the second expresses your purpose better and the first does not "neaten up the code a little bit". In most programs, improved clarity is more important than a slight increase in speed.
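A small sketch of the difference described above (cbltSz and x are the names from the question; the initial value of pos is made up for illustration):
#include <iostream>

int main() {
    const double cbltSz = 10.0;
    int x = 3;

    // Branchless form: always overwrites the position, whatever x is.
    double posBranchless = cbltSz * (x - 1);

    // Branched form: only assigns when x == 0; otherwise pos keeps its previous value.
    double posBranched = 42.0;                 // some previous value
    if (x == 0) { posBranched = -cbltSz; }

    std::cout << posBranchless << " vs " << posBranched << '\n';   // 20 vs 42: not equivalent for x != 0
}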
So the answer to your question depends on too many factors to get a full answer here.
What most early programming language designers didn't understand, and many still don't understand, is that mathematical functions are a bit different from user-defined functions. If we've done any trig at all, we all know what sin(PI/8) is going to mean, and we're happy embedding the call in expressions like
rx = cos(theta) * x - sin(theta) * y;
But functions you write yourself are seldom like basic mathematical functions. They take several parameters, they return several values, and it's not usually quite clear what they do. The last thing you want is to embed them in complicated expressions.
Secondly, maths has its own system of notation for a reason. The cut-down, ASCII-only notation of a programming language breaks down as soon as expressions go above a certain, fairly low complexity. (I use the rule of three: three levels of nested parentheses are all your reader can take in.) And, without special programming support, programming functions cannot escape their domain.
pow(sqrt(-1), sqrt(-1));
won't do what you want unless you have a complex math library installed.
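For instance, with a complex number type the expression becomes meaningful; a quick sketch using the C++ standard library (the printed value in the comment is approximate):
#include <complex>
#include <iostream>

int main() {
    std::complex<double> i = std::sqrt(std::complex<double>(-1.0, 0.0));   // (0, 1)
    std::cout << std::pow(i, i) << '\n';   // i^i = e^(-pi/2), prints roughly (0.20788, 0)
}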
As for performance, some Fortran and C compilers optimise mathematical routines very aggressively, with constant propagation and the like. Others won't. It just depends.
On Wikipedia (https://www.wikiwand.com/en/Formal_language), I found the definition of a formal language:
In mathematics, computer science, and linguistics, a formal language is a set of strings of symbols that may be constrained by rules that are specific to it.
This looks quite abstract to me, and I can't imagine any language which doesn't fit this definition. Does anyone have ideas about what an informal language looks like and how it fails to fit the definition?
Let me get to your question first. A good non-example of a formal language is a natural language. English and Slovene are examples; so are Tagalog and Tarifit Berber. Unfortunately, linguists don't seem to have a definition of natural language that all would agree upon.
Noam Chomsky famously tried to model natural language using context-free grammars in his 1956 paper Three Models for the Description of Language. He invented (or discovered, if you prefer) them in that paper, although he didn't call them that; while they weren't adequate for modelling the English language, they revolutionized computer science.
Formally, a formal language is just a set of strings over a finite alphabet. That's it.
Examples include all valid C programs, all valid HTML files, all valid XML files, all strings of "balanced" parentheses (e.g. (), ()(), ((()))()(()), ...), the set of all deterministic Turing machines that always halt (more precisely, their codes under some encoding), the set of all simple graphs that can be colored with k colors (again, their codes under some encoding), the set of all binary strings that begin and end with a 1, etc.
Some are easy to recognize using regexes (or, equivalently, DFAs); some are impossible to recognize using DFAs, but can be recognized using a PDA (or, equivalently, can be described with a context-free grammar); others don't admit such a description, but can be recognized by a Turing machine; some aren't recognizable even by a Turing machine (these are called uncomputable).
This is why the definition is so useful: many things we encounter in CS every day can be cast in terms of formal languages.
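As a quick illustration of two of the languages above (the function names are made up; the parenthesis check uses a simple counter, playing the role of a pushdown stack):
#include <iostream>
#include <regex>
#include <string>

// Regular language: binary strings that begin and end with a 1.
bool beginsAndEndsWith1(const std::string& s) {
    static const std::regex re("1(0|1)*1|1");
    return std::regex_match(s, re);
}

// Context-free language: "balanced" parentheses.
bool balancedParens(const std::string& s) {
    int depth = 0;
    for (char c : s) {
        if (c == '(') ++depth;
        else if (c == ')') { if (--depth < 0) return false; }
        else return false;   // only parentheses belong to this alphabet
    }
    return depth == 0;
}

int main() {
    std::cout << beginsAndEndsWith1("10101") << ' ' << beginsAndEndsWith1("100") << '\n';   // 1 0
    std::cout << balancedParens("((()))()(())") << ' ' << balancedParens("(()") << '\n';    // 1 0
}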
For a good introduction to the subject, I highly recommend the superb book Introduction to Automata Theory, Languages, and Computation by Hopcroft et al.
English isn't a formal language. It's not just a set of strings; it has a spoken form, and evolution over time, and dialects, and all sorts of other things a formal language doesn't have. A formal language couldn't gain the word "email" from one decade to the next.
A language is a set of sequences made up from given symbols. It can be either finite or infinite (the set of English sentences is infinite, even though there are sentences, e.g. excessively long ones, which cannot be comprehended even by a native speaker). If it is finite, then any complete description of it, such as a list of all its members, is a formal definition.
If the language is infinite, say the language of arithmetic expressions involving numbers, the two binary operators '+' and '*', and variables, then you can't possibly list all the strings which belong to the language, but sometimes (see blazs's comment below) you can give a finite description as a set of rules.
E := NUM | v | E '+' E | E '*' E
(where NUM is a sequence of digits and v is a variable) is a finite description of an infinite set. That's what makes it formal.
The various other aspects like speech or the evolution of the language are different issues. Those can also be formalised.
I would like to compile a list of tips and tricks on mathematical programming optimization. Often I read in the forums things like:
Compare distances using the squared distance, because the square root calculation is more expensive
(variable * 0.5) is faster than (variable / 2.0)
As a programming enthusiast, I would like to do my best where optimization is concerned. Any contribution would be much appreciated.
Two key points.
MEASURE - don't assume you know what is slow and/or why. Measure it in real production code. Then only worry about the bit that is chewing up most of your time.
Optimise your algorithm, not your code. Look for something you're doing that is O(N^2) instead of O(N), or O(N) instead of O(ln N), and switch to an algorithm with better asymptotic behaviour.
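A sketch of the "algorithm, not code" point: the same question ("does this vector contain a duplicate?") answered in O(N^2) and in O(N log N). The function names are illustrative only.
#include <cstddef>
#include <iostream>
#include <set>
#include <vector>

bool hasDuplicateQuadratic(const std::vector<int>& v) {
    for (std::size_t i = 0; i < v.size(); ++i)           // O(N^2): compare every pair
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j]) return true;
    return false;
}

bool hasDuplicateLogLinear(const std::vector<int>& v) {
    std::set<int> seen;                                   // O(N log N): one tree lookup per element
    for (int x : v)
        if (!seen.insert(x).second) return true;
    return false;
}

int main() {
    std::vector<int> v{3, 1, 4, 1, 5};
    std::cout << hasDuplicateQuadratic(v) << ' ' << hasDuplicateLogLinear(v) << '\n';   // 1 1
}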
I would say, the first thing to pin down before thinking of optimisation is the scope and intended purpose of your library. For example, is this library 2D or 3D? Does it include geometric algorithms, like convex hull?
Like most people developing such a library, you will run into a few unavoidable issues. Things like precision errors can definitely drive you mad at times. Beware of degenerate triangles as well.
Consider algorithms that include an epsilon or tolerance carefully. This is a neat feature to have, but it will make your algorithms more complex.
If you venture into the world of 3D, treat points and vectors differently (this is one of the most common issues in 3D math). Consider template metaprogramming for multiplications (this one will get flamed, I can feel it), as it can considerably speed up rendering.
In general, try to avoid virtual calls for anything but substantial algorithms; small classes like vectors or points should not be inherited from (another flaming opportunity).
I would say, start by sticking to good development practice and read Effective C++ and More Effective C++ by Scott Meyers. If you take short cuts like comparing the squared value to avoid a square root calculation, comment your code so future developers can understand the maths.
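For example, a sketch of that short cut with the kind of comment the answer asks for (the Point type and function name are illustrative):
struct Point { double x, y; };

// True if a is closer to the origin o than b is. We compare squared
// distances: sqrt is monotonic, so sqrt(d1) < sqrt(d2) exactly when d1 < d2,
// and skipping sqrt avoids a comparatively expensive call.
bool closerToOrigin(Point o, Point a, Point b) {
    double dax = a.x - o.x, day = a.y - o.y;
    double dbx = b.x - o.x, dby = b.y - o.y;
    return dax * dax + day * day < dbx * dbx + dby * dby;
}

int main() {
    Point o{0, 0}, a{1, 2}, b{3, 4};
    return closerToOrigin(o, a, b) ? 0 : 1;   // a is closer, so the program exits with 0
}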
Finally, do not try to over-optimize up front; use a profiler for this. Personally, I often start by coding the most elegant solution (or what I consider the most elegant solution) and then optimize; you will be surprised at how good a job the C++ optimizer often does.
Hope this helps
Martin
Recently there has been a paper floating around by Vinay Deolalikar at HP Labs which claims to have proved that P != NP.
Could someone explain how this proof works for us less mathematically inclined people?
I've only scanned through the paper, but here's a rough summary of how it all hangs together.
From page 86 of the paper:
... polynomial time algorithms succeed by successively “breaking up” the problem into smaller subproblems that are joined to each other through conditional independence. Consequently, polynomial time algorithms cannot solve problems in regimes where blocks whose order is the same as the underlying problem instance require simultaneous resolution.
Other parts of the paper show that certain NP problems cannot be broken up in this manner. Thus NP ≠ P.
Much of the paper is spent defining conditional independence and proving these two points.
Dick Lipton has a nice blog entry about the paper and his first impressions of it. Unfortunately, it also is technical. From what I can understand, Deolalikar's main innovation seems to be to use some concepts from statistical physics and finite model theory and tie them to the problem.
I'm with Rex M on this one: some results, mostly mathematical ones, cannot be explained to people who lack the technical mastery.
I liked this ( http://www.newscientist.com/article/dn19287-p--np-its-bad-news-for-the-power-of-computing.html ):
His argument revolves around a particular task, the Boolean satisfiability problem, which asks whether a collection of logical statements can all be simultaneously true or whether they contradict each other. This is known to be an NP problem.
Deolalikar claims to have shown that there is no program which can complete it quickly from scratch, and that it is therefore not a P problem. His argument involves the ingenious use of statistical physics, as he uses a mathematical structure that follows many of the same rules as a random physical system.
The effects of the above can be quite significant:
If the result stands, it would prove that the two classes P and NP are not identical, and impose severe limits on what computers can accomplish – implying that many tasks may be fundamentally, irreducibly complex. For some problems – including factorisation – the result does not clearly say whether they can be solved quickly. But a huge sub-class of problems called "NP-complete" would be doomed. A famous example is the travelling salesman problem – finding the shortest route between a set of cities. Such problems can be checked quickly, but if P ≠ NP then there is no computer program that can complete them quickly from scratch.
This is my understanding of the proof technique: he uses first-order logic to characterize all polynomial time algorithms, and then shows that, for large SAT problems with certain properties, no polynomial time algorithm can determine their satisfiability.
One other way of thinking about it, which may be entirely wrong, but is my first impression as I'm reading it on the first pass, is that we think of assigning/clearing terms in circuit satisfaction as forming and breaking clusters of 'ordered structure', and that he's then using statistical physics to show that there isn't enough speed in the polynomial operations to perform those operations in a particular "phase space" of operations, because these "clusters" end up being too far apart.
Such a proof would have to cover all classes of algorithms, like continuous global optimization.
For example, in the 3-SAT problem we have to assign values to variables so as to fulfill all clauses (alternatives of triples of these variables or their negations). Note that x OR y can be changed into minimizing
((x-1)^2+y^2)((x-1)^2+(y-1)^2)(x^2+(y-1)^2)
and analogously a product of seven factors for a clause of three variables.
Finding the global minimum of a sum of such polynomials for all terms would solve our problem. (source)
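A quick numerical check of that encoding for the two-variable clause (x OR y): the product below is 0 exactly on the satisfying 0/1 assignments and positive on the single falsifying one, x = y = 0.
#include <iostream>

double clausePoly(double x, double y) {
    auto sq = [](double v) { return v * v; };
    return (sq(x - 1) + sq(y))         // vanishes at (1, 0)
         * (sq(x - 1) + sq(y - 1))     // vanishes at (1, 1)
         * (sq(x)     + sq(y - 1));    // vanishes at (0, 1)
}

int main() {
    for (int x = 0; x <= 1; ++x)
        for (int y = 0; y <= 1; ++y)
            std::cout << x << " OR " << y << " -> " << clausePoly(x, y) << '\n';   // 2 only for 0 OR 0
}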
It goes beyond standard combinatorial techniques into the continuous world, using gradient methods, methods for escaping local minima, evolutionary algorithms. It's a completely different kingdom - numerical analysis - and I don't believe such a proof could really cover it (?)
It's worth noting that with proofs, "the devil is in the detail". The high level overview is obviously something like:
Show some sort of relationship between items, show that this relationship implies X and that implies Y, and thus my argument is shown.
I mean, it may be via induction or any other form of proving things, but what I'm saying is that the high level overview is useless. There is no point explaining it. Although the question itself relates to computer science, it is best left to mathematicians (though it is certainly incredibly interesting).
I'm not sure if this is for SO or not. I am reading some of my old math textbooks and trying to understand math in general - not how to figure something out (I can do that), but rather what it is that math is doing.
I'm sure this is painfully obvious but I never thought about it until I thought more about game programming. Is it right to think about math as the "language" that is used to explain, precisely explain, why things work?
I'm having a hard time asking it, and again, I'm sure it's obvious to most, but after years of math I'm finally thinking that when someone asks to "find the equation of a line", people recognized certain characteristics of a line (y=mx+b) in space and found a relationship. They needed something besides a huge paragraph (like this one), something very precise. We call this math, and at its base it's nothing more than a symbolic way to represent things.
Really, I was thinking, "I know why they said 'find the equation of a line'."
So now I am thinking, not just googling for a formula that tells me how to turn a curve with a walking man or follow a path, but why and how do I represent this mathematically and then programmatically.
Just hoping for comments on math in programming.
To my way of thinking, I create a "model" of some aspect of the world. Examples:
Profit = Income - Expenditure
If I throw a ball, its path will be a parabola with equation ...
I then represent the model in a computer program. So some kind of abstraction underpins the program; sometimes the math is so "obvious" we hardly notice it, sometimes (e.g. simulation games) it's both very clearly there and pretty darn tricky.
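A minimal sketch of turning the thrown-ball model into code, sampling the parabola y = vy*t - 0.5*g*t^2 at discrete time steps (all the numbers are illustrative):
#include <iostream>

int main() {
    const double g  = 9.81;   // gravity, m/s^2
    const double vx = 4.0;    // horizontal speed, m/s
    const double vy = 10.0;   // initial vertical speed, m/s

    for (double t = 0.0; t <= 2.0; t += 0.25) {
        double x = vx * t;
        double y = vy * t - 0.5 * g * t * t;   // the parabola from the model
        std::cout << "t=" << t << "  x=" << x << "  y=" << y << '\n';
    }
}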
Key idea: math can be used to model reality, and most business systems can be viewed as a model of reality.
Having said that, in 30 years of programming the amount of true (algebra, calculus) maths I have done is negligible.
Steve Yegge wrote a very good article that you may find helpful: Math Every Day
I recommend that you look into materials related to the theory of computation. For example:
On Computable Numbers, with an Application to the Entscheidungsproblem - Alan Turing (1936)
The Mathematical Theory of Communication - Claude Shannon (1948)
The General and Logical Theory of Automata - John Von Neumann (1951)
These are not papers for the faint of heart, but they will give you insights into the beautiful relationship between mathematics and computer science.
You might want to start with a textbook on the subject of computation theory before you tackle the papers listed above, e.g.
Introduction to the Theory of Computation - Michael Sipser
Math for a programmer is like a hammer for a carpenter. The carpenter doesn't use the hammer for everything, but if he doesn't have one, there's a lot he can't do.
Not sure what your precise question is ...
Some thoughts:
Programming is nothing but math (Functional programming, Lambda calculus, programming == math)
Math is a kind of language - An abstract description/representation of an expression in thought
Math helps you to formalize expressions: instead of "for all integer numbers x from one to ten, the square of x is less than 250" you can write ∀x ∈ {1..10} (x² < 250) (see the sketch after this list)
Programming (a programming language) does the same thing and helps to formalize algorithms.
The kind of math that is commonly used in computer programs is numerical math, but with some effort you can also perform symbolic computations
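The same formalization carries straight over into code; a tiny sketch checking ∀x ∈ {1..10} (x² < 250):
#include <iostream>

int main() {
    bool holds = true;
    for (int x = 1; x <= 10; ++x)
        if (!(x * x < 250)) { holds = false; break; }   // any counterexample falsifies the "for all"
    std::cout << (holds ? "holds" : "fails") << '\n';   // prints "holds"
}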
I think math is really the concepts behind the symbols instead of the symbols themselves, but when most people speak of math, they're not making the distinction; they're just thinking of the symbols. Partly, this is because of the way math is taught in school, where the focus is on the mechanistic manipulation of the symbols to get correct results, rather than on what the concepts are.
This is similar to the way non-programmers view programming. They look at a computer program and see gibberish, whereas a programmer in the given language (after more or less effort) understands the behavior the code represents.
Some people are better at retaining the meaning of such symbols than others. I think there are people who might appreciate math more than they think if they could get past that barrier to the concepts.
I agree with Taylor. Math inside computers is a very deep topic, with numerical methods. The biggest issue is precision and the fact that 32 bits only get you so far. There are some really cool (and complicated) functions that describe how to find integrals and such with computers, but because we can't be exact with our answers, and because computers are limited in what they can do (add, multiply, etc.), there are lots of methods for approximating mathematical results to a great degree of precision.
If you are interested in that topic, all the more power to you. That was one class I struggled through.
I'm looking at something similar (financial models) - similar in that we come up with mathematical models, and then implement these in code.
The main issue you face from a programming perspective is taking a model that is expressed in mathematical terms (which assume continuity, infinitely small time/space steps etc.) and then translate these into 'discrete' models, that assume finite time/space steps (e.g. the ball moves every 1mm, or every 1ms).
The translation of these models is not necessarily trivial, and you should have a look at appropriate references for these (Numerical Recipes is a classic). The implementation in code is often very different to how you might express the problem in mathematical terms.
I think about math in programming with time, silence and good food, with a lot of paper and a pen, friends to ask for help, and a pile of books from Rudin to Bourbaki on top of my MacBook on the floor.
I think why is a philosophical question.
As far as how I think of math/programming and the interplay between them... I think of them as layers of modeling. At the lowest, 'truest' level there is some fundamental truth, whatever that may be. Then there is the mathematical modeling of this truth, upon which the 'language' of mathematics is developed (fortunately there is only one language?). Then there is another layer, that of modeling and approximations. In the case of y=mx+b, it's only a line within one model; it could be anything. Being visual beings, the most beneficial model is perhaps geometric (lines, surfaces, etc.). Then upon this there is the computational modeling, the numerical methods/analysis if you will.
As to how I think of things, I like to think in the modeling perspective. That is, I like to conceptually model some process, and then apply the math and then the numerical methods. Middle-out development, if you will (to draw an N-tier analogy).
As an afterthought, perhaps the modeling could be called engineering.
The best way to get the type of understanding that you're looking for is to work through "story problems" (i.e. problems stated in words rather than equations). From this and your other questions, you're mostly looking at trigonometry.
In short, I would recommend trying the trig book from the Schaum's Outline Series -- they are cheap (~$13) and have lots of problems with solutions.
There are other routes to finding problems in math to solve, such as making up game design problems to solve. Here are two: 1) show an object moving around a circle at constant speed, and 2) show two objects moving along two different lines that don't intersect, and draw a line between them. Or you could get a book that walks you through these types of things. But you've got to work out a number of problems to force you to think things through yourself.
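A small sketch of problem 1) above: an object moving around a circle of radius r at constant angular speed omega; all the constants are illustrative.
#include <cmath>
#include <iostream>

int main() {
    const double r = 5.0;              // circle radius
    const double omega = 1.0;          // angular speed, radians per second
    const double cx = 0.0, cy = 0.0;   // circle centre

    for (double t = 0.0; t < 6.3; t += 0.5) {   // roughly one full revolution
        double x = cx + r * std::cos(omega * t);
        double y = cy + r * std::sin(omega * t);
        std::cout << "t=" << t << "  (" << x << ", " << y << ")\n";
    }
}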