I am trying to write a technical report that explains an algorithm I developed at work. Instead of writing long pseudocode, I want to express it in neat mathematical notation. However, because of my limited knowledge of math, I cannot come up with a clean and straightforward notation for the pseudocode below:
elements = [a, b, ..., z] // a sequence of some elements with size K.
interested_poses = {1, 4, ... t} // a set of indexes of elements that we are interested in.
results = ∅ // an empty set to hold result elements.
FOR I = 1 to K DO
IF I ∈ interested_poses THEN
ADD elements[I] to results
If someone can guide me how I can convert the above to math notations, that would be very much appreciated as well as some references that I can use for further study for myself.
Thanks in advance.
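One common way to express the loop above (a sketch, writing R for results, P for interested_poses, and e_i for elements[i]) is set-builder notation over the indexed sequence:

```latex
\text{Let } (e_1, e_2, \dots, e_K) \text{ be the sequence of elements and } P \subseteq \{1, \dots, K\} \text{ the index set. Then}
R = \{\, e_i \mid i \in P \,\}.
```

The set-builder expression replaces the entire FOR/IF loop: it reads "R is the set of all e_i such that i is in P".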
I am familiar with what the choice operator (?) does: it takes two arguments and non-deterministically yields either of them. We could define it as follows:
a ? _ = a
_ ? b = b
This can be used to introduce non-determinism between two values. However, what I don't understand is why we would want to do that.
What would be an example of a problem that could be solved by using (?)?
One example that is typically used to motivate non-determinism is a function that computes all permutations of a list.
insert e [] = [e]
insert e (x:xs) = (e : x : xs) ? (x : insert e xs)
perm [] = []
perm (x:xs) = insert x (perm xs)
The nice thing here is that you do not need to specify how you want to enumerate all lists; the search algorithm that underlies a logic programming language like Curry (by default, depth-first search) does the job for you. You merely give a specification of what a list in your result should look like.
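If Curry is not at hand, the same idea can be modeled in an ordinary functional language by representing a non-deterministic value as a list of its alternatives, with (?) becoming list concatenation. This is only a sketch of the semantics (note that the base cases then return a one-element list of results, e.g. [[]] rather than []):

```ocaml
(* Model a non-deterministic value as the list of its alternatives;
   the choice operator (?) becomes concatenation of alternatives. *)
let ( <?> ) xs ys = xs @ ys

(* insert e xs: all ways to insert e somewhere into the list xs. *)
let rec insert e = function
  | [] -> [ [e] ]
  | x :: xs ->
      [ e :: x :: xs ] <?> List.map (fun ys -> x :: ys) (insert e xs)

(* perm xs: all permutations of xs, built by inserting each element
   into every permutation of the rest. *)
let rec perm = function
  | [] -> [ [] ]
  | x :: xs -> List.concat_map (insert x) (perm xs)
```

Here the enumeration order is fixed by list concatenation, whereas in Curry the search strategy is left to the runtime.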
Hopefully, you find some more realistic examples in the following papers.
New Functional Logic Design Patterns
Functional Logic Design Patterns
Edit: As I recently published work on this topic, I want to add the application of probabilistic programming. In the paper we show that using non-determinism to model probabilistic values can have advantages over list-based approaches with respect to pruning of the search space. More precisely, when we perform a query on the probabilistic values, i.e., filter a distribution based on a predicate, the non-determinism behaves less strictly than a list-based approach and can prune the search space.
My question concerns Exercise 2.11 in the book Concrete Semantics (http://concrete-semantics.org/):
Define arithmetic expressions in one variable over integers
(type int) as a data type:
datatype exp = Var | Const int | Add exp exp | Mult exp exp
Define a function eval :: exp => int => int such that eval e x evaluates e at
the value x.
A polynomial can be represented as a list of coefficients, starting with the
constant. For example, [4, 2, -1, 3] represents the polynomial 4+2x-x^2+3x^3.
Define a function evalp :: int list => int => int that evaluates a polynomial at
the given value. Define a function coeffs :: exp => int list that transforms an
expression into a polynomial. This may require auxiliary functions. Prove that
coeffs preserves the value of the expression: evalp (coeffs e) x = eval e x.
---end
It's all pretty straightforward until you get to coeffs. We would have to deal with expressions like (X + X)*(2*X + 3*X*X), which have to be recursively expanded bottom-up using the distributive law until they are in polynomial form. The resulting expression might still be something like (X*X + X*2*X + 3*X*X + 4*X*X*X), so it is then necessary to normalize product terms (e.g. X*2*X becomes 2*X*X), collect like terms, and finally order them by increasing degree! This just seems significantly more complicated than any of the exercises so far, so I wonder if I'm missing something or overcomplicating it.
I think this exercise is considerably easier than you think. You can write a single primitively-recursive function coeffs that does the job: the coefficients of Var are [0,1], the coefficients of Const c are [c]. Similarly, if you have two subexpressions and you know their coefficients, you can combine those two coefficient lists into a single list for addition/multiplication.
For that, you should ideally write two auxiliary functions add_coeffs and mult_coeffs which add and multiply two lists of coefficients. (The latter will probably make use of the former.)
You will have to prove that add_coeffs and mult_coeffs do the right thing (w.r.t. eval and evalp). The resulting lemmas also make good [simp] rules.
The proofs are all simple inductions where each case is automatic.
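As a sketch of what those auxiliaries can look like (written here in OCaml rather than Isabelle, so the details will differ slightly in the proof assistant):

```ocaml
(* Add two coefficient lists pointwise; the longer tail carries over. *)
let rec add_coeffs p q =
  match p, q with
  | [], q -> q
  | p, [] -> p
  | a :: p', b :: q' -> (a + b) :: add_coeffs p' q'

(* Multiply two coefficient lists:
   (a + x*p') * q  =  a*q + x*(p' * q),
   where the multiplication by x is a shift (prepending a 0). *)
let rec mult_coeffs p q =
  match p with
  | [] -> []
  | a :: p' ->
      add_coeffs (List.map (fun b -> a * b) q) (0 :: mult_coeffs p' q)
```

Both functions are primitively recursive on the first list, which is exactly the shape that makes the correctness lemmas go through by straightforward induction.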
As a general rule: a good definition often makes the difference between a long and tedious proof and a straightforward or even completely automatic proof. Doing a long-winded expansion and then grouping summands etc. as you suggested in your question is sure to lead to a tedious proof.
Of course, the method that I suggested in this answer is not very efficient, but when you want to do things in a theorem prover, efficiency is usually not a big concern – you want things to be simple and elegant and amenable to nice proofs. If you need efficient code, you can still develop your nice and simple abstract formulation into something more efficient later and show equivalence.
I'm new to OCaml, and I'd like to implement Gaussian Elimination as an exercise. I can easily do it with a stateful algorithm, meaning keep a matrix in memory and recursively operating on it by passing around a reference to it.
This statefulness, however, smacks of imperative programming. I know there are capabilities in OCaml to do this, but I'd like to ask if there is some clever functional way I haven't thought of first.
OCaml arrays are mutable, and it's hard to avoid treating them just like arrays in an imperative language.
Haskell has immutable arrays, but from my (limited) experience with Haskell, you end up switching to monadic, mutable arrays in most cases. Immutable arrays are probably amazing for certain specific purposes. I've always imagined you could write a beautiful implementation of dynamic programming in Haskell, where the dependencies among array entries are defined entirely by the expressions in them. The key is that you really only need to specify the contents of each array entry one time. I don't think Gaussian elimination follows this pattern, and so it seems it might not be a good fit for immutable arrays. It would be interesting to see how it works out, however.
You can use a Map to emulate a matrix. The key would be a pair of integers referencing the row and column. You'll want to use your own get x y function to ensure x < n and y < n though, instead of accessing the Map directly. (edit) You can use the compare function in Pervasives directly.
module OrderedPairs = struct
  type t = int * int
  let compare = Pervasives.compare
end

module Pairs = Map.Make (OrderedPairs)

let get_ n set x y =
  assert (x < n && y < n);
  Pairs.find (x, y) set

let set_ n set x y v =
  assert (x < n && y < n);
  Pairs.add (x, y) v set
Actually, having a general set of functions (get x y and set x y at a minimum), without specifying the implementation, would be an even better option. These functions can then be passed as arguments, or implemented in a module through a functor (a better solution, but having a set of functions that just does what you need is a good first step since you're new to OCaml). In this way you could use a Map, an Array, a Hashtbl, or even a set of functions that accesses a file on the hard drive to implement the matrix. This is the really important aspect of functional programming: you trust the interface rather than exploiting side effects, and you don't worry about the underlying implementation, since it's presumed to be pure.
The answers so far are using/emulating mutable data-types, but what does a functional approach look like?
To see, let's decompose the problem into some functional components:
Gaussian elimination involves a sequence of row operations, so it is useful first to define a function taking 2 rows and scaling factors, and returning the resultant row operation result.
The row operations we want should eliminate a variable (column) from a particular row, so lets define a function which takes a pair of rows and a column index and uses the previously defined row operation to return the modified row with that column entry zero.
Then we define two functions, one to convert a matrix into triangular form, and another to back-substitute a triangular matrix to the diagonal form (using the previously defined functions) by eliminating each column in turn. We could iterate or recurse over the columns, and the matrix could be defined as a list, vector or array of lists, vectors or arrays. The input is not changed, but a modified matrix is returned, so we can finally do:
let out_matrix = to_diagonal (to_triangular in_matrix)
What makes it functional is not whether the data types (array or list) are mutable, but how they are used. This approach may not be particularly 'clever' or the most efficient way to do Gaussian elimination in OCaml, but using pure functions lets you express the algorithm cleanly.
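The first two components above can be sketched like this (a minimal illustration with rows as float lists; the names row_op and eliminate are my own, not from any library, and there is no pivoting or zero-pivot handling):

```ocaml
(* row_op factor pivot row: return row - factor * pivot, elementwise.
   The inputs are untouched; a new row is built. *)
let row_op factor pivot row =
  List.map2 (fun r p -> r -. factor *. p) row pivot

(* eliminate col pivot row: return a copy of row whose entry in
   column col has been zeroed out using the pivot row. *)
let eliminate col pivot row =
  let factor = List.nth row col /. List.nth pivot col in
  row_op factor pivot row
```

to_triangular and to_diagonal would then just fold these over the rows and columns, threading the intermediate matrices as ordinary return values.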
I'm interested in building a derivative calculator. I've racked my brains over the problem, but I haven't found the right solution at all. Do you have a hint about how to start? Thanks.
I'm sorry, I should have been clearer: I want to do symbolic differentiation.
Let's say you have the function f(x) = x^3 + 2x^2 + x
I want to display the derivative, in this case f'(x) = 3x^2 + 4x + 1
I'd like to implement it in objective-c for the iPhone.
I assume that you're trying to find the exact derivative of a function. (Symbolic differentiation)
You need to parse the mathematical expression and store the individual operations in the function in a tree structure.
For example, x + sin²(x) would be stored as a + operation, applied to the expression x and a ^ (exponentiation) operation of sin(x) and 2.
You can then recursively differentiate the tree by applying the rules of differentiation to each node. For example, a + node would become u' + v', and a * node would become uv' + vu'.
You need to remember your calculus. Basically, you need two things: a table of derivatives of basic functions, and rules for differentiating compound expressions (like d(f + g)/dx = df/dx + dg/dx). Then take an expression parser and recursively walk over the tree. (http://www.sosmath.com/tables/derivative/derivative.html)
Parse your string into an S-expression (even though this is usually taken in Lisp context, you can do an equivalent thing in pretty much any language), easiest with lex/yacc or equivalent, then write a recursive "derive" function. In OCaml-ish dialect, something like this:
let rec derive var = function
| Const(_) -> Const(0)
| Var(x) -> if x = var then Const(1) else Deriv(Var(x), Var(var))
| Add(x, y) -> Add(derive var x, derive var y)
| Mul(a, b) -> Add(Mul(a, derive var b), Mul(derive var a, b))
...
(If you don't know OCaml syntax: derive is a two-parameter recursive function; the first parameter is the variable name, and the second is matched against in the successive lines. For example, if this parameter is a structure of the form Add(x, y), return the structure Add built from two fields, with the values of derived x and derived y; and similarly for the other cases of what derive might receive as a parameter. _ in the first pattern means "match anything".)
After this you might want some clean-up function to tidy up the resulting expression (reducing fractions etc.), but this gets complicated and is not necessary for differentiation itself (i.e. what you get without it is still a correct answer).
When your transformation of the s-exp is done, convert the resulting s-exp back into string form, again with a recursive function.
SLaks already described the procedure for symbolic differentiation. I'd just like to add a few things:
Symbolic math is mostly parsing and tree transformations. ANTLR is a great tool for both. I'd suggest starting with this great book Language implementation patterns
There are open-source programs that do what you want (e.g. Maxima). Dissecting such a program might be interesting, too (but it's probably easier to understand what's going on if you tried to write it yourself, first)
Probably, you also want some kind of simplification for the output. For example, just applying the basic derivative rules to the expression 2 * x would yield 2 + 0*x. This can also be done by tree processing (e.g. by transforming 0 * [...] to 0 and [...] + 0 to [...] and so on)
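Such tree rewriting can be sketched as follows (a minimal, hedged example: the expr type and simplify are illustrative names covering only the 0- and 1-rules mentioned above, applied bottom-up in one pass):

```ocaml
type expr =
  | Const of int
  | Var of string
  | Add of expr * expr
  | Mul of expr * expr

(* One bottom-up pass applying e + 0 = e, 0 * e = 0, 1 * e = e.
   Subtrees are simplified first, then the local rule is tried. *)
let rec simplify = function
  | Add (a, b) ->
      (match simplify a, simplify b with
       | Const 0, e | e, Const 0 -> e
       | a', b' -> Add (a', b'))
  | Mul (a, b) ->
      (match simplify a, simplify b with
       | Const 0, _ | _, Const 0 -> Const 0
       | Const 1, e | e, Const 1 -> e
       | a', b' -> Mul (a', b'))
  | e -> e
```

A real simplifier would add more rules (constant folding, collecting like terms), but this shape, recursion first, then local rewrites, stays the same.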
For what kinds of operations do you want to compute a derivative? If you allow trigonometric functions like sine, cosine and tangent, these are probably best stored in a table, while others like polynomials may be much easier to handle. Are you allowing functions to have multiple inputs, e.g. f(x, y) rather than just f(x)?
Polynomials in a single variable would be my suggestion and then consider adding in trigonometric, logarithmic, exponential and other advanced functions to compute derivatives which may be harder to do.
Symbolic differentiation over common functions (+, -, *, /, ^, sin, cos, etc.) ignoring regions where the function or its derivative is undefined is easy. What's difficult, perhaps counterintuitively, is simplifying the result afterward.
To do the differentiation, store the operations in a tree (or even just in Polish notation) and make a table of the derivative of each of the elementary operations. Then repeatedly apply the chain rule and the elementary derivatives, together with setting the derivative of a constant to 0. This is fast and easy to implement.
I am trying to make a 100 x 100 tridiagonal matrix with 2's going down the diagonal and -1's surrounding the 2's. I can make a tridiagonal matrix with only 1's in the three diagonals and perform matrix addition to get what I want, but I want to know if there is a way to customize the three diagonals to whatever you want. The Maple help doesn't list anything useful.
The Matrix function in the LinearAlgebra package can be called with a parameter (init) that is a function that can assign a value to each entry of the matrix depending on its position.
This would work:
f := (i, j) -> if i = j then 2 elif abs(i - j) = 1 then -1 else 0; end if;
Matrix(100, f);
LinearAlgebra[BandMatrix] works too (and will be WAY faster), especially if you use storage=band[1]. You should probably use shape=symmetric as well.
The answers involving an initializer function f will do O(n^2) work for a square n x n Matrix. Ideally, this task should be O(n), since there are just under 3*n entries to be filled.
Suppose also that you want a resulting Matrix without any special (e.g. band) storage or indexing function (so that you can later write to any part of it arbitrarily). And suppose you don't want to get around that issue by wrapping the band-structure Matrix in another generic Matrix() call, which would double the temporary memory used and produce collectible garbage.
Here are two ways to do it (without applying f to each entry in an O(n^2) manner, or using a separate do-loop). The first involves creating the three bands as temporaries (which is garbage to be collected, but at least not n^2 worth of it).
M:=Matrix(100,[[-1$99],[2$100],[-1$99]],scan=band[1,1]);
This second way uses a routine which walks M and populates it with just the three scalar values (hence not needing the 3 band lists explicitly).
M:=Matrix(100):
ArrayTools:-Fill(100,2,M,0,100+1);
ArrayTools:-Fill(99,-1,M,1,100+1);
ArrayTools:-Fill(99,-1,M,100,100+1);
Note that ArrayTools:-Fill is a compiled external routine, and so in principle might well be faster than an interpreted Maple-language method. It would be especially fast for a Matrix M with a hardware datatype such as 'float[8]'.
By the way, the reason that the arrow procedure above failed with the error "invalid arrow procedure" is likely that it was entered in 2D Math mode. The 2D Math parser of Maple 13 does not understand the if...then...end syntax as the body of an arrow operator. Alternatives (apart from writing f as a proc, as someone else answered) are to enter f (unedited) in 1D Maple notation mode, or to edit f to use the operator form of if. The operator form of if here requires a nested if to handle the elif. For example,
f := (i,j) -> `if`(i=j,2,`if`(abs(i-j)=1,-1,0));
Matrix(100,f);
jmbr's proposed solutions can be adapted to work:
f := proc(i, j)
if i = j then 2
elif abs(i - j) = 1 then -1
else 0
end if
end proc;
Matrix(100, f);
Also, I understand your comment as saying you later need to destroy the band matrix nature, which prevents you from using BandMatrix - is that right? The easiest solution to that is to wrap the BandMatrix call in a regular Matrix call, which will give you a Matrix you can change however you'd like:
Matrix(LinearAlgebra:-BandMatrix([1,2,1], 1, 100));