Say I am tossing a fair coin where 'tails' is assigned the value x = -1/2 and 'heads' is assigned x = 1/2.
I do this N times and I want to obtain the sum. This is what I have tried:
p = 0.5
N = 1e4
X(N,p)=(rand(N).<p)
I know this is incomplete, but when I check (rand(N).<p) I see an array consisting of true and false, which I interpret as 'tails' or 'heads'. However, I don't know how to assign the values 1/2 and -1/2 to these elements so that I can compute the sum. If I simply use sum((rand(N).<p)) I do get an integer value, but I don't think this is the right way to do it, because I haven't specified the values 1/2 and -1/2 anywhere.
Any help is greatly appreciated.
As indicated by the comments already, you want to do
sum(rand([-0.5, 0.5], N))
where N must be an integer (you wrote N=1e4, therefore typeof(N) == Float64 and rand won't work).
The documentation of rand (obtained by ?rand) describes what rand(S, N) does:
Pick a random element or array of random elements from the set of
values specified by S
Here, S can be an optional indexable collection, an array of values in your case (or a type like Int). So, above S = [-0.5, 0.5] and rand draws N random elements from this collection, which we can afterwards sum up.
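Putting it together, a minimal sketch of the whole computation (remember that N has to be an Int, so e.g. 10_000 rather than 1e4):
N = 10_000                         # an Int; 1e4 would be a Float64
total = sum(rand([-0.5, 0.5], N))  # sum of N fair draws from {-1/2, 1/2}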
Assigning specific values to a boolean array
Since this is the title of your question, and the answer above doesn't actually address this, let me comment on this as well.
You could do sum((rand(N).<p)-0.5), i.e. you shift all the ones to 0.5 and all the zeros to -0.5 to get the wanted result. Note that this is a general strategy: Let's say you want true to be a and false to be b, where a and b are numbers. You achieve this by (rand(N).<p)*(a-b) + b.
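As a concrete sketch of that mapping (note that in current Julia versions the scalar arithmetic needs broadcasting dots as well):
p = 0.5
N = 10_000
a, b = -0.5, 0.5                    # value for true ('tails') and for false ('heads')
x = (rand(N) .< p) .* (a - b) .+ b  # maps true -> a and false -> b
total = sum(x)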
However, beyond being more "complicated", sum((rand(N).<p)-0.5) will allocate temporary arrays, first one of booleans, then one of numbers, the latter of which will eventually go into sum. Because of these unnecessary allocations this approach will be slower than the solution above.
I can set up a recurrence when the function only passes down one argument (something like $T(n/2)$). However, for a case like this, where the values $u$ and $v$ evolve differently, how do I put them together? This is the problem:
The initial call is RecursiveFunction(n, n) for some n > 2:
RecursiveFunction(a, b)
    if a >= 2 and b >= 2
        u = a / 2
        v = b - 1
        RecursiveFunction(u, v)
The end goal is to find the tight asymptotic bounds for the worst-case running time, but I just need a formula to start first.
There are in fact two different answers to this, depending on the relative sizes of a and b.
The running time can be written as the recurrence $T(a, b) = T(a/2,\, b - 1) + C$ for $a \ge 2$ and $b \ge 2$, and $T(a, b) = C$ otherwise, where $C$ is some constant work done per call (the if statement, pushing u, v onto the call stack etc.). Since the two variables evolve independently, we can analyse their evolution separately.
a - consider the evolution of $a$ on its own: $A(a) = A(a/2) + C$. Expanding the iterative case $m$ times gives $A(a) = A(a/2^m) + mC$. The stopping condition $a < 2$ is reached once $a/2^m < 2$, i.e. after $m \approx \log_2 a$ steps.
b - as before: $B(b) = B(b - 1) + C$, which after $n$ expansions gives $B(b) = B(b - n) + nC$; the stopping condition $b < 2$ is reached after $n \approx b$ steps.
The complexity of $T(a, b)$ thus depends on which variable reaches its stopping condition first, i.e. on the smaller of $m$ and $n$: $T(a, b) = \Theta(\min(\log_2 a,\, b))$. For the original call $T(n, n)$ this is $\Theta(\min(\log_2 n,\, n)) = \Theta(\log n)$.
I'm currently learning programming language concepts and pragmatics, and I need help differentiating between two sub-branches of the declarative language family.
Consider the following code snippets which are written in Scheme and Prolog, respectively:
;Scheme
(define gcd
  (lambda (a b)
    (cond ((= a b) a)
          ((> a b) (gcd (- a b) b))
          (else (gcd (- b a) a)))))
%Prolog
gcd(A, B, G) :- A = B, G = A.
gcd(A, B, G) :- A > B, C is A-B, gcd(C, B, G).
gcd(A, B, G) :- B > A, C is B-A, gcd(C, A, G).
What I don't understand is:
How do these two different programming languages behave differently?
Where do we draw the distinction, so that one is categorized as a functional language and the other as a logic programming language?
As far as I can tell, they do exactly the same thing: they call themselves recursively until the computation terminates.
Since you are using very low-level predicates in your logic programming version, you cannot easily see the increased generality that logic programming gives you over functional programming.
Consider this slightly edited version of your code, which uses CLP(FD) constraints for declarative integer arithmetic instead of the low-level arithmetic you are currently using:
gcd(A, A, A).
gcd(A, B, G) :- A #> B, C #= A - B, gcd(C, B, G).
gcd(A, B, G) :- B #> A, C #= B - A, gcd(C, A, G).
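For example (shown roughly as SWI-Prolog prints it, with library(clpfd) loaded), an ordinary "forward" query still works as expected:
?- gcd(100, 60, G).
G = 20 ;
false.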
Importantly, we can use this as a true relation, which makes sense in all directions.
For example, we can ask:
Are there two integers X and Y such that their GCD is 3?
That is, we can use this relation in the other direction too! Not only can we, given two integers, compute their GCD. No! We can also ask, using the same program:
?- gcd(X, Y, 3).
X = Y, Y = 3 ;
X = 6,
Y = 3 ;
X = 9,
Y = 3 ;
X = 12,
Y = 3 ;
etc.
We can also post even more general queries and still obtain answers:
?- gcd(X, Y, Z).
X = Y, Y = Z ;
Y = Z,
Z#=<X+ -1,
2*Z#=X ;
Y = Z,
_1712+Z#=X,
Z#=<X+ -1,
Z#=<_1712+ -1,
2*Z#=_1712 ;
etc.
That's a true relation, which is more general than a function of two arguments!
See clpfd for more information.
The GCD example only lightly touches on the differences between logic programming and functional programming as they are much closer to each other than to imperative programming. I will concentrate on Prolog and OCaml, but I believe it is quite representative.
Logical Variables and Unification:
Prolog allows us to express partial data structures, e.g. in the term node(24,Left,Right) we don't need to specify what Left and Right stand for; they might be any term. A functional language might insert a lazy function or a thunk which is evaluated later on, but at the creation of the term we need to know what to insert.
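For instance, we can build a partially instantiated term first and fill it in later by unification (a rough sketch of a toplevel interaction):
?- T = node(24, Left, Right), Left = node(10, L, R).
T = node(24, node(10, L, R), Right),
Left = node(10, L, R).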
Logical variables can also be unified (i.e. made equal). A search function in OCaml might look like:
let rec find v = function
| [] -> false
| x::_ when v = x -> true
| _::xs (* otherwise *) -> find v xs
While the Prolog implementation can use unification instead of v=x:
member_of(X,[X|_]).
member_of(X,[_|Xs]) :-
    member_of(X,Xs).
For the sake of simplicity, the Prolog version has some drawbacks (see below in backtracking).
Backtracking:
Prolog's strength lies in successively instantiating variables, which can be easily undone. If you try the above program with variables, Prolog will return all possible values for them:
?- member_of(X,[1,2,3,1]).
X = 1 ;
X = 2 ;
X = 3 ;
X = 1 ;
false.
This is particularly handy when you need to explore search trees, but it comes at a price. If we do not specify the size of the list, we will successively create all lists fulfilling our property - in this case infinitely many:
?- member_of(X,Xs).
Xs = [X|_3836] ;
Xs = [_3834, X|_3842] ;
Xs = [_3834, _3840, X|_3848] ;
Xs = [_3834, _3840, _3846, X|_3854] ;
Xs = [_3834, _3840, _3846, _3852, X|_3860] ;
Xs = [_3834, _3840, _3846, _3852, _3858, X|_3866] ;
Xs = [_3834, _3840, _3846, _3852, _3858, _3864, X|_3872]
[etc etc etc]
This means that you need to be more careful using Prolog, because termination is harder to control. In particular, the old-style ways (the cut operator !) to do that are pretty hard to use correctly and there's still some discussion about the merits of recent approaches (deferring goals (with e.g. dif), constraint arithmetic or a reified if). In a functional programming language, backtracking is usually implemented by using a stack or a backtracking state monad.
Invertible Programs:
Perhaps one more appetizer for using Prolog: functional programming has a direction of evaluation. We can use the find function only to check if some v is a member of a list, but we cannot ask which lists fulfill this. In Prolog, this is possible:
?- Xs = [A,B,C], member_of(1,Xs).
Xs = [1, B, C],
A = 1 ;
Xs = [A, 1, C],
B = 1 ;
Xs = [A, B, 1],
C = 1 ;
false.
These are exactly the three-element lists that contain the element 1 (at least once). Unfortunately, the standard arithmetic predicates are not invertible; this, together with the fact that the GCD of two numbers is always unique, is the reason why you could not find much of a difference between functional and logic programming.
To summarize: logic programming has logical variables, which allow for easier pattern matching, invertibility and exploring multiple solutions of the search tree. This comes at the cost of more complicated flow control. Depending on the problem, it is easier either to have a backtracking execution that you sometimes need to restrict, or to add backtracking to a functional language.
The difference is not very clear from a single example. Programming languages are categorized as logic, functional, etc. based on the characteristics they support, and as a result each is designed to make programming easier in its own paradigm. For example, imperative programming languages (like C) are very different from object-oriented ones (like Java or C++), and there the differences are more obvious.
More specifically, in your question the Prolog programming language has adopted the philosophy of logic programming, and this is obvious to anyone who knows a little bit about mathematical logic. Prolog has predicates (rather than functions - basically almost the same thing) which succeed or fail based on the "world" we have defined: which facts and clauses we have already defined, which arithmetic facts hold, and so on. All of this is inherited from mathematical logic (propositional and first-order logic). So we could say that Prolog is used as a model of logic, which makes logical problems (like games and puzzles) easier to solve. Moreover, Prolog has some features that general-purpose languages have. For example, you could write a program as in your example to calculate the gcd:
gcd(A, B, G) :- A = B, G = A.
gcd(A, B, G) :- A > B, C is A-B, gcd(C, B, G).
gcd(A, B, G) :- B > A, C is B-A, gcd(C, A, G).
In your program you use a predicate gcd that returns TRUE if G unifies with the GCD of A and B, and you use multiple clauses to match all the cases. When you query gcd(2,5,1). it will return true (note that in other languages like Scheme you can't pass the result as a parameter), while if you query gcd(2,5,G). it unifies G with the gcd of A and B and returns G = 1; it is like asking Prolog what G should be in order for gcd(2,5,G). to be true. So you can see that it is all about when the predicate succeeds, and for that reason you can have more than one solution, while in functional programming languages you can't.
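Roughly, the interaction at the toplevel with the clauses above looks like this:
?- gcd(2, 5, 1).
true.

?- gcd(2, 5, G).
G = 1 ;
false.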
Functional languages are based on functions, so they always return the SAME TYPE of result. This does not always hold in Prolog: you could have a predicate predicate_example(Number,List). and query predicate_example(5,List)., which returns List = ... (a list), and also query predicate_example(Number,[1,2,3])., which returns Number = ... (a number).
The result of a function should be unique: in mathematics, a function is a relation between a set of inputs and a set of permissible outputs, with the property that each input is related to exactly one output.
In a functional language it should be clear which parameter is the one that will be returned; for example, the gcd function has type N * N -> N, so it takes parameters A and B, which belong to N (the natural numbers), and returns their gcd. But Prolog (with some changes to your program) could instead return the parameter A, so querying gcd(A,5,1). would give all possible A such that the predicate gcd succeeds: A = 1, 2, 3, 4, 6, ... (every A coprime to 5).
In order to find the gcd, Prolog tries every possible way using choice points, so at every step it will try all three of your clauses and will find every possible solution. Functional programming languages, on the other hand, like mathematical functions, follow a single well-defined sequence of steps to reach the solution.
So you can see that the difference between functional and logic languages may not always be visible, but they are based on different philosophies and ways of thinking.
Imagine how hard it would be to solve tic-tac-toe, the N-queens problem, or the man-goat-wolf-cabbage problem in Scheme.
I'm very new to Prolog and absolutely awful at it.
I am trying to get a sum for all the marks received on assignments, and return it. The sum predicate must be of the form sumAssignments(S).
Here's the knowledge base I've made:
assignment(1, A).
assignment(2, B).
assignment(3, C).
assignment(4, D).
assignment(5, E).
...Where assignment(1, A) means that assignment 1 has a variable grade A (could be 70, could be 50, etc.).
Here is my attempt at getting the sum, just for the first two assignments for testing purposes:
sumAssignments(S) :- assignment(1, A), assignment(2, B), S=A+B.
That always returns yes. The key here is I can't use lists.
I found out that only constants (not unbound variables) belong in facts, so instead of assignment(1, A). we put an actual number in place of 'A', in this case maybe 67 or 100: assignment(1, 67). The summation works fine after doing that, as long as the sum is computed with S is A+B (plain S = A+B only builds the term A+B without evaluating it).
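For example, with made-up grades (the actual numbers are whatever your marks happen to be), the knowledge base and rule might look like this:
assignment(1, 67).
assignment(2, 82).
assignment(3, 90).
assignment(4, 75).
assignment(5, 88).

sumAssignments(S) :-
    assignment(1, A), assignment(2, B), assignment(3, C),
    assignment(4, D), assignment(5, E),
    S is A + B + C + D + E.

?- sumAssignments(S).
S = 402.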
EDIT
So it seems I "underestimated" what varying-length numbers meant. I didn't even think about situations where the operands are 100 digits long. In that case, my proposed algorithm is definitely not efficient. I'd probably need an implementation whose complexity depends on the number of digits in each operand, as opposed to their numerical values, right?
As suggested below, I will look into the Karatsuba algorithm...
Write the pseudocode of an algorithm that takes in two arbitrary length numbers (provided as strings), and computes the product of these numbers. Use an efficient procedure for multiplication of large numbers of arbitrary length. Analyze the efficiency of your algorithm.
I decided to take the (semi) easy way out and use the Russian Peasant Algorithm. It works like this:
a * b = (a/2) * 2b          if a is even
a * b = ((a-1)/2) * 2b + b  if a is odd
My pseudocode is:
rpa(x, y){
    if x is 1
        return y
    if x is even
        return rpa(x/2, 2y)
    if x is odd
        return rpa((x-1)/2, 2y) + y
}
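For example, rpa(6, 7) evaluates as: 6 is even, so it becomes rpa(3, 14); 3 is odd, so it becomes rpa(1, 28) + 14; rpa(1, 28) returns 28; the result is 28 + 14 = 42 = 6 * 7.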
I have 3 questions:
Is this efficient for arbitrary length numbers? I implemented it in C and tried numbers of varying length. The run-time was near-instant in all cases, so it's hard to tell empirically...
Can I apply the Master's Theorem to understand the complexity...?
a = # subproblems in recursion = 1 (max 1 recursive call across all states)
n / b = size of each subproblem = n / 2 -> b = 2 (x is halved at each step)
f(n^d) = work done outside recursive calls = 1 -> d = 0 (the addition when a is odd)
a = 1, b^d = 1, a = b^d -> complexity is in n^d*log(n) = log(n)
this makes sense logically since we are halving the problem at each step, right?
What might my professor mean by providing arbitrary length numbers "as strings". Why do that?
Many thanks in advance
What might my professor mean by providing arbitrary length numbers "as strings". Why do that?
This actually changes everything about the problem (and makes your algorithm incorrect as written).
It means that 1234 is provided as the digits 1,2,3,4 and you cannot operate directly on the whole number. You need to analyze your algorithm in terms of #additions, #multiplications, #divisions.
You should expect a division to be a bit more expensive than a multiplication, and a multiplication to be a lot more expensive than an addition. So a good algorithm tries to reduce the number of divisions and multiplications.
Check out the Karatsuba algorithm (but don't copy it - that's not what your teacher wants); it is one of the fastest for this setting. For comparison, schoolbook long multiplication of two n-digit numbers takes O(n^2) digit operations, while Karatsuba brings this down to O(n^log2(3)) ≈ O(n^1.585).
Regarding 3): Native integers are limited in how large (or small) the numbers they can represent are (32- or 64-bit integers, for example). To represent arbitrary-length numbers you can choose strings, because then you are not really limited in that way. The problem is then, of course, that your arithmetic units are not really made to add strings ;-)