I was wondering about the use of ternary operators outside of programming. For example, in those pesky calculus classes that are required for a CS degree. Could a person describe something like a hyperbolic function with a ternary operator like this:
1/x ? 1/x : infinity;
This assumes that x is a positive float and should say that if x != 0 then the function returns 1/x, otherwise it returns infinity. Would this circumvent the whole need for limits?
I'm not entirely certain as to the specific question, but yes, a ternary can answer any question posed as 'if/else' or 'if and only if, else'. Traditionally, however, math is not written in a conditional format with any real flow control. 'if' and other flow control mechanisms let code execute in different ways, but with most math, the flow is the same; just the results differ.
Mathematically, any operator can be equivalently described as a function, as in a + b = add(a,b); note that this is true for programming as well. In either case, binary operators are a common way to describe functions of two arguments because they are easy to read that way.
Ternary operators are more difficult to read, and they are correspondingly less common. But, since mathematical typography is not limited to a one-dimensional text string, many mathematical operators have large arity -- for instance, a definite integral arguably has 4 arguments (start, end, integrand, and differential).
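For instance, in LaTeX notation a definite integral reads
\int_{a}^{b} f(x)\, dx
with the four pieces being the start a, the end b, the integrand f(x), and the differential dx.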
To answer your second question: no, this does not circumvent the need for limits; you could just as easily say that the alternative was 42 instead of infinity.
I will also mention that your 1/x example doesn't really match the programming usage of the ?: ternary operator anyway. Note that 1/x is not a boolean; a condition such as x != 0 ? 1/x : infinity would at least be well-typed. It looks like you're trying to use ?: to handle an exception-like condition, which would be better suited to a try/catch form.
Also, when you say "This assumes that x is a positive float", how is a reader supposed to know this? You may recall that there is mathematical notation that solves this specific problem by indicating limits from above.
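For example, in LaTeX notation:
\lim_{x \to 0^{+}} \frac{1}{x} = +\infty
reads "as x approaches 0 from above, 1/x grows without bound", with no conditional needed.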
I am very confused about how CLP works in Prolog. Not only do I find it hard to see the benefits (I do see them in specific cases, but find it hard to generalise), but more importantly, I can hardly work out how to correctly write a recursive predicate. Which of the following would be the correct form in a CLP(R) way?
factorial(0, 1).
factorial(N, F) :- {
    N > 0,
    PrevN = N - 1,
    factorial(PrevN, NewF),
    F = N * NewF }.
or
factorial(0, 1).
factorial(N, F) :- {
    N > 0,
    PrevN = N - 1,
    F = N * NewF },
    factorial(PrevN, NewF).
In other words, I am not sure when I should write code outside the constraints. To me, the first case would seem more logical, because PrevN and NewF belong to the constraints. But if that's true, I am curious to see in which cases it is useful to use predicates outside the constraints in a recursive function.
There are several overlapping questions and issues in your post, probably too many to coherently address to your complete satisfaction in a single post.
Therefore, I would like to state a few general principles first, and then—based on that—make a few specific comments about the code you posted.
First, I would like to address what I think is most important in your case:
LP ⊆ CLP
This means simply that CLP can be regarded as a superset of logic programming (LP). Whether it is to be considered a proper superset, or whether it in fact makes more sense to regard them as denoting the same concept, is somewhat debatable. In my personal view, logic programming without constraints is much harder to understand and much less usable than logic programming with constraints. Given that even the very first Prolog systems had a constraint like dif/2, and that essential built-in predicates like (=)/2 perfectly fit the notion of "constraint", the boundaries, if they exist at all, seem at least somewhat artificial to me, suggesting that:
LP ≈ CLP
Be that as it may, the key concept when working with CLP (of any kind) is that the constraints are available as predicates, and used in Prolog programs like all other predicates.
Therefore, whether you have the goal factorial(N, F) or { N > 0 } is, at least in principle, the same concept: Both mean that something holds.
Note the syntax: The CLP(ℛ) constraints have the form { C }, which is {}(C) in prefix notation.
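For example, a small interaction (a sketch using SWI-Prolog's library(clpr); the exact answer format may differ between systems):

?- use_module(library(clpr)).
true.

?- { X = 3*Y, Y = 2.0 }.
X = 6.0,
Y = 2.0.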
Note that the goal factorial(N, F) is not a CLP(ℛ) constraint! Neither is the following:
?- { factorial(N, F) }.
ERROR: Unhandled exception: type_error({factorial(_3958,_3960)},...)
Thus, { factorial(N, F) } is not a CLP(ℛ) constraint either!
Your first example therefore cannot work for this reason alone: the recursive goal factorial(PrevN, NewF) appears inside the braces, where CLP(ℛ) accepts only arithmetic constraints.
When you learn to work with a constraint solver, check out the predicates it provides. For example, CLP(ℛ) provides {}/1 and a few other predicates, and has a dedicated syntax for stating relations that hold over (in this case) floating point numbers.
Other constraint solvers provide their own predicates for describing the entities of their respective domains. For example, CLP(FD) provides (#=)/2 and a few other predicates to reason about integers. dif/2 lets you reason about any Prolog term. And so on.
From the programmer's perspective, this is exactly the same as using any other predicate of your Prolog system, whether it is built-in or stems from a library. In principle, it's all the same:
A goal like list_length(Ls, L) can be read as: "The length of the list Ls is L."
A goal like { X = A + B } can be read as: "The number X is equal to the sum of A and B." For example, if you are using CLP(Q), it is clear that we are talking about rational numbers in this case.
In your second example, the body of the clause is a conjunction of the form (A, B), where A is a CLP(ℛ) constraint, and B is a goal of the form factorial(PrevN, NewF).
The point is: The CLP(ℛ) constraint is also a goal! Check it out:
?- write_canonical({a,b,c}).
{','(a,','(b,c))}
true.
So, you are simply using {}/1 from library(clpr), which is one of the predicates it exports.
You are right that PrevN and NewF belong to the constraints. However, factorial(PrevN, NewF) is not part of the mini-language that CLP(ℛ) implements for reasoning over floating point numbers. Therefore, you cannot pull this goal into the CLP(ℛ)-specific part.
From a programmer's perspective, a major attraction of CLP is that it blends completely seamlessly into "normal" logic programming, to the point that it can hardly be distinguished from it at all: The constraints are simply predicates, and are written down like all other goals.
Whether you label a library predicate a "constraint" or not hardly makes any difference: All predicates can be regarded as constraints, since they can only constrain answers, never relax them.
Note that both examples you post are recursive! That's perfectly OK. In fact, recursive predicates will likely be the majority of situations in which you use constraints in the future.
However, for the concrete case of factorial, your Prolog system's CLP(FD) constraints are likely a better fit, since they are completely dedicated to reasoning about integers.
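For illustration, a minimal sketch of the integer version using SWI-Prolog's library(clpfd) (assuming that library is available; n_factorial is just a name chosen here):

:- use_module(library(clpfd)).

n_factorial(0, 1).
n_factorial(N, F) :-
    N #> 0,
    N1 #= N - 1,
    F #= N * F1,
    n_factorial(N1, F1).

Note that it has the same shape as your second example: the constraints (#>)/2 and (#=)/2 are ordinary goals, and the recursive call stays outside of them. A query like ?- n_factorial(5, F). yields F = 120.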
Is there a mathematical symbol or otherwise concise notation to represent option values (OCaml's option type, Haskell's Maybe...)?
It appears so often in functional programming that I would expect to find a concise syntax for this type, the same way lists have a somewhat standard [] notation, functions have the -> notation, and so on.
I know that in a more formal context one might use a partial function notation (such as f : A ⇀ B), but in most cases it doesn't fit as nicely as some explicit symbols for Some/None (or Just/Nothing) would.
Ideally, I'd like to write something like:
This function returns #42 if the input is valid, # otherwise.
Where #42 represents Some 42 and # represents None, but in a standard way, easily understandable by most readers (or at least those with some mathematical background).
I haven't seen any such specific notation. The closest I know of is the use of mathematical symbols to express the type: α ⊕ 1. Here ⊕ represents the direct sum (disjoint union) of types, and 1 represents the unit type.
This notation is used in category theory or in typing systems.
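To see the correspondence concretely, here is a sketch in Haskell (the primed names are only to avoid clashing with the Prelude's Maybe):

data Maybe' a = Just' a   -- the "α" summand of α ⊕ 1
              | Nothing'  -- the "1" (unit) summand

So Haskell's Maybe a (or OCaml's 'a option) is exactly the disjoint union of a with a one-element type, which is what α ⊕ 1 denotes.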
I'm just learning R, so please forgive what I'm sure is a very elementary question. How do I take the real part of a complex number?
If you read the help file for complex (?complex), you will see a number of functions for performing complex arithmetic. The details clearly state
The functions Re, Im, Mod, Arg and Conj have their usual interpretation as returning the real part, imaginary part, modulus, argument and complex conjugate for complex values.
Therefore, use Re:
Re(1+2i)
# 1
For the imaginary part:
Im(1+2i)
# 2
?complex will list other useful functions.
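For instance, a few of those (run in any R session; the printed values here are rounded):

Mod(1+2i)    # 2.236068  (sqrt(5))
Arg(1+2i)    # 1.107149  (radians)
Conj(1+2i)   # 1-2i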
Re(z) and Im(z) would be the functions you're looking for.
See also http://www.johnmyleswhite.com/notebook/2009/12/18/using-complex-numbers-in-r/
I'm interested in building a derivative calculator. I've racked my brains over the problem, but I haven't found the right solution at all. Do you have a hint on how to start? Thanks
I'm sorry, I should have been clearer: I want to do symbolic differentiation.
Let's say you have the function f(x) = x^3 + 2x^2 + x
I want to display the derivative, in this case f'(x) = 3x^2 + 4x + 1
I'd like to implement it in objective-c for the iPhone.
I assume that you're trying to find the exact derivative of a function. (Symbolic differentiation)
You need to parse the mathematical expression and store the individual operations in the function in a tree structure.
For example, x + sin²(x) would be stored as a + operation, applied to the expression x and a ^ (exponentiation) operation of sin(x) and 2.
You can then recursively differentiate the tree by applying the rules of differentiation to each node. For example, a + node applied to u and v would become u' + v', and a * node would become uv' + vu' (the product rule).
You need to remember your calculus. Basically, you need two things: a table of derivatives of basic functions, and rules for how to differentiate compound expressions (like d(f + g)/dx = df/dx + dg/dx). Then take an expression parser and recursively go over the tree. (http://www.sosmath.com/tables/derivative/derivative.html)
Parse your string into an S-expression (even though this term usually comes up in a Lisp context, you can do an equivalent thing in pretty much any language), easiest with lex/yacc or an equivalent, then write a recursive "derive" function. In an OCaml-ish dialect, something like this:
let rec derive var = function
  | Const(_) -> Const(0)
  | Var(x) -> if x = var then Const(1) else Deriv(Var(x), Var(var))
  | Add(x, y) -> Add(derive var x, derive var y)
  | Mul(a, b) -> Add(Mul(a, derive var b), Mul(derive var a, b))
  (* ... *)
(If you don't know OCaml syntax: derive is a two-parameter recursive function, with the first parameter being the variable name, and the second being matched against in the successive lines; for example, if this parameter is a structure of the form Add(x, y), return the structure Add built from two fields, with the values of derived x and derived y; and similarly for the other cases of what derive might receive as a parameter; _ in the first pattern means "match anything".)
After this you might want some clean-up function to tidy up the resulting expression (reducing fractions etc.), but this gets complicated, and it is not necessary for the differentiation itself (i.e. what you get without it is still a correct answer).
When your transformation of the s-exp is done, reconvert the resulting s-exp into string form, again with a recursive function.
SLaks already described the procedure for symbolic differentiation. I'd just like to add a few things:
Symbolic math is mostly parsing and tree transformations. ANTLR is a great tool for both. I'd suggest starting with the great book Language Implementation Patterns.
There are open-source programs that do what you want (e.g. Maxima). Dissecting such a program might be interesting, too (but it's probably easier to understand what's going on if you've tried to write it yourself first).
Probably, you also want some kind of simplification of the output. For example, naively applying the basic derivative rules to the expression 2 * x would yield 2*1 + 0*x. This can also be done by tree processing (e.g. by transforming 0 * [...] to 0 and [...] + 0 to [...], and so on); see the sketch below.
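To make that last point concrete, here is a hedged sketch in OCaml (the expr type is a minimal assumption mirroring the constructors used in the OCaml answer above; treat this as an illustration, not a complete simplifier):

type expr = Const of int | Var of string | Add of expr * expr | Mul of expr * expr

(* Bottom-up simplification: simplify the children first, then
   rewrite 0 + e, e + 0, 0 * e, e * 0, 1 * e and e * 1. *)
let rec simplify = function
  | Add (a, b) ->
      (match simplify a, simplify b with
       | Const 0, e | e, Const 0 -> e
       | a', b' -> Add (a', b'))
  | Mul (a, b) ->
      (match simplify a, simplify b with
       | Const 0, _ | _, Const 0 -> Const 0
       | Const 1, e | e, Const 1 -> e
       | a', b' -> Mul (a', b'))
  | e -> e

Applied to the result of naive differentiation, this turns 2*1 + 0*x back into 2.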
For what kinds of operations do you want to compute a derivative? If you allow trigonometric functions like sine, cosine and tangent, their derivatives are probably best stored in a table, while others like polynomials may be much easier to handle directly. Are you allowing functions with multiple inputs, e.g. f(x, y) rather than just f(x)?
Polynomials in a single variable would be my suggestion as a starting point; then consider adding trigonometric, logarithmic, exponential and other advanced functions, whose derivatives may be harder to compute.
Symbolic differentiation over common functions (+, -, *, /, ^, sin, cos, etc.), ignoring regions where the function or its derivative is undefined, is easy. What's difficult, perhaps counterintuitively, is simplifying the result afterward.
To do the differentiation, store the operations in a tree (or even just in Polish notation) and make a table of the derivative of each of the elementary operations. Then repeatedly apply the chain rule and the elementary derivatives, together with setting the derivative of a constant to 0. This is fast and easy to implement.
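As a sketch of how the table of elementary derivatives and the chain rule combine, here is a small self-contained OCaml variant (the Sin/Cos constructors and the treatment of other variables as constants are my own simplifying assumptions, not part of the answers above):

type expr =
  | Const of int
  | Var of string
  | Add of expr * expr
  | Mul of expr * expr
  | Sin of expr
  | Cos of expr

let rec derive var = function
  | Const _ -> Const 0
  | Var x -> Const (if x = var then 1 else 0)    (* other variables treated as constants *)
  | Add (u, v) -> Add (derive var u, derive var v)
  | Mul (u, v) -> Add (Mul (derive var u, v), Mul (u, derive var v))  (* product rule *)
  | Sin u -> Mul (Cos u, derive var u)                                (* chain rule *)
  | Cos u -> Mul (Mul (Const (-1), Sin u), derive var u)

Each case either looks up an elementary derivative (Sin, Cos) or applies a structural rule (Add, Mul), with the chain rule supplying the trailing derive var u factor.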
On the Wikipedia page about summation it says that the equivalent operation in Haskell is to use foldl. My question is: Is there any reason why it says to use this instead of sum? Is one more 'purist' than the other, or is there no real difference?
foldl is a general reduce function, expressed via tail recursion. Recursion is the usual way of thinking about manipulating lists of items in functional programming languages, and provides an alternative to loop iteration that is often much more elegant. In the case of a reduce function like a fold, the tail-recursive implementation can be very efficient. As others have explained, sum is then just a convenient shorthand for foldl (+) 0 l.
Presumably its use on the Wikipedia page is to illustrate the general principle of summation through tail recursion. But since the Haskell Prelude contains sum, which is shorter and more obvious to understand, you should use that in your code.
Here's a nice discussion of Haskell's fold functions with simple examples that's well worth reading.
I don't see where it says anything about Haskell or foldl on that Wikipedia page, but sum in Haskell is just a more specific case of foldl. It can be implemented like this, for example:
sum l = foldl (+) 0 l
Which can be reduced to:
sum = foldl (+) 0
One thing to note is that sum may be lazier than you would want, so consider using foldl'.
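For example, a strict version using foldl' from Data.List (the name sum' is just to avoid clashing with the Prelude's sum):

import Data.List (foldl')

sum' :: Num a => [a] -> a
sum' = foldl' (+) 0

foldl' forces the accumulator at each step, so summing a long list runs in constant space instead of building up a chain of unevaluated thunks.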
As stated by the others, there's no difference. However, a sum-call is easier to read than a fold-call, so I'd go for sum if you need summation.
There is no difference. That page is simply saying that sum is implemented using foldl. Just use sum whenever you need to calculate the sum of a list of numbers.
The concept of summation can be extended to non-numeric types: all you need is something equivalent to a (+) operation and a zero value. In other words, you need a monoid. This leads to the Haskell function mconcat, which returns the "sum" of a list of values of a monoid type. The default mconcat is of course defined in terms of mappend, which plays the role of the plus operation.
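A small sketch of that idea, using the Sum wrapper from Data.Monoid (the names total and greeting are just for illustration):

import Data.Monoid (Sum(..))

-- Summing numbers through the monoid machinery: mconcat combines
-- with mappend (here +) starting from mempty (here 0).
total :: Int
total = getSum (mconcat (map Sum [1, 2, 3, 4]))   -- 10

-- The same mconcat works for any monoid, e.g. lists under (++):
greeting :: String
greeting = mconcat ["foo", "bar", "baz"]          -- "foobarbaz"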