Integers can be used to store individual numbers, but not mathematical expressions. For example, let's say I have the expression:
6x^2 + 5x + 3
How would I store the polynomial? I could create my own object, but I don't see how I could represent the polynomial through member data. I do not want to create a function that evaluates a passed-in argument, because I don't only need to evaluate the expression, I also need to manipulate it.
Is a vector my only option or is there a more apt solution?
A simple yet inefficient way would be to store it as a list of coefficients. For example, the polynomial in the question would look like this:
[6, 5, 3]
If a term is missing, place a zero in its place. For instance, the polynomial 2x^3 - 4x + 7 would be represented like this:
[2, 0, -4, 7]
The degree of the polynomial is given by the length of the list minus one. This representation has one serious disadvantage: for sparse polynomials, the list will contain a lot of zeros.
A more reasonable representation of the term list of a sparse polynomial is as a list of the nonzero terms, where each term is a list containing the order of the term and the coefficient for that order; the degree of the polynomial is given by the order of the first term. For example, the polynomial x^100+2x^2+1 would be represented by this list:
[[100, 1], [2, 2], [0, 1]]
As an example of how useful this representation is, the book SICP builds a simple but very effective symbolic algebra system using the second representation for polynomials described above.
A list is not the only option.
You can use a map (dictionary) mapping the exponent to the corresponding coefficient.
Using a map, your example would be
{2: 6, 1: 5, 0: 3}
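For instance, in Python (a small sketch; the helper names evaluate and add_polys are invented for this example, not taken from any library):

# Sparse polynomial as {exponent: coefficient}; 6x^2 + 5x + 3:
p = {2: 6, 1: 5, 0: 3}

def evaluate(poly, x):
    # Sum coefficient * x^exponent over the stored (nonzero) terms.
    return sum(coeff * x**exp for exp, coeff in poly.items())

def add_polys(a, b):
    # Term-wise addition; exponents missing from one map count as zero.
    result = dict(a)
    for exp, coeff in b.items():
        result[exp] = result.get(exp, 0) + coeff
    return result

print(evaluate(p, 2))  # 6*4 + 5*2 + 3 = 37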
A list of (coefficient, exponent) pairs is quite standard. If you know your polynomial is dense, that is, all the exponent positions are small integers in the range 0 to some small maximum exponent, you can use an array, as I see Óscar Lopez just posted. :)
You can represent expressions as Expression Trees. See for example .NET Expression Trees.
This allows for much more complex expressions than simple polynomials and those expressions can also use multiple variables.
In .NET you can manipulate the expression tree as a tree AND you can evaluate it as a function.
Expression<Func<double,double>> polynomial = x => (x * x + 2 * x - 1);
double result = polynomial.Compile()(23.0);
An object-oriented approach would say that a Polynomial is a collection of Monomials, and a Monomial encapsulates a coefficient and exponent together.
This approach works when you have a polynomial like this:
y(x) = x^1000 + 1
An approach that tied a data structure to a polynomial order would be terribly wasteful for this pathological case.
You need to store two things:
The degree of your polynomial (e.g. "2")
A list containing each coefficient (e.g. "{3, 0, 2}")
In standard C++, "std::vector<>" and "std::list<>" can do both.
A vector/array is the obvious choice. Depending on the type of expressions, you may consider some sort of sparse vector type (custom made, e.g. based on a dictionary, or even a linked list if your expressions have only 2-3 non-zero coefficients, like 5x^100 + x).
In either case, exposing it through a custom class/interface would be beneficial, as you can replace the implementation later. You would likely want to provide standard operations (+, -, *, equals) if you plan to write a lot of expression manipulation code.
Just store the coefficients in an array or vector. For example, in C++ if you are only using integer coefficients, you could use std::vector<int>, or for real numbers, std::vector<double>. Then you just push the coefficients in order and index each one by its exponent.
For example (again in C++), to store 5*x^3 + 9*x - 2 you might do:
std::vector<int> poly;
poly.push_back(-2); // x^0, accessed with poly[0]
poly.push_back(9); // x^1, accessed with poly[1]
poly.push_back(0); // x^2, etc
poly.push_back(5); // x^3, etc
If you have large, sparse polynomials, then maybe you'd want to use a map instead of a vector. If the degree is fixed and known, then you'd perhaps use a fixed-length array instead of a vector.
I've used C++ for examples, but this same scheme can be used in any language.
You can also transform it into reverse Polish notation:
6x^2 + 5x + 3 -> x 2 ^ 6 * x 5 * + 3 +
Where x and numbers are "pushed" onto a stack and operations (^,*,+) take the two top-most values from the stack and replace them with the result of the operation. In the end you get the resultant value on the stack.
In this form it's easy to calculate arbitrarily complex expressions.
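As a rough Python sketch (the whitespace tokenization is just an assumption for this example), such a postfix expression can be evaluated with a stack like this:

# Evaluate "x 2 ^ 6 * x 5 * + 3 +" for a given value of x.
def eval_rpn(tokens, x):
    stack = []
    ops = {'^': lambda a, b: a ** b,
           '*': lambda a, b: a * b,
           '+': lambda a, b: a + b}
    for tok in tokens:
        if tok in ops:
            b = stack.pop()
            a = stack.pop()
            stack.append(ops[tok](a, b))
        elif tok == 'x':
            stack.append(x)
        else:
            stack.append(float(tok))
    return stack[0]  # the final result is the only value left on the stack

print(eval_rpn("x 2 ^ 6 * x 5 * + 3 +".split(), 2))  # 6*4 + 5*2 + 3 = 37.0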
This representation is also close to the tree representation of expressions, where non-leaf nodes represent operations and functions and leaf nodes hold constants and variables.
What's good about trees is that you can easily evaluate expressions, and you can also do things like symbolic differentiation on them. Both operations are naturally recursive.
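A minimal Python sketch of such a tree (the node classes are invented for this example), supporting both recursive evaluation and symbolic differentiation of the earlier x*x + 2*x - 1 expression:

class Const:
    def __init__(self, v): self.v = v
    def eval(self, x): return self.v
    def deriv(self): return Const(0)

class Var:
    def eval(self, x): return x
    def deriv(self): return Const(1)

class Add:
    def __init__(self, l, r): self.l, self.r = l, r
    def eval(self, x): return self.l.eval(x) + self.r.eval(x)
    def deriv(self): return Add(self.l.deriv(), self.r.deriv())

class Mul:
    def __init__(self, l, r): self.l, self.r = l, r
    def eval(self, x): return self.l.eval(x) * self.r.eval(x)
    def deriv(self):  # product rule: (uv)' = u'v + uv'
        return Add(Mul(self.l.deriv(), self.r), Mul(self.l, self.r.deriv()))

# x*x + 2*x - 1, written as (x*x + 2*x) + (-1)
expr = Add(Add(Mul(Var(), Var()), Mul(Const(2), Var())), Const(-1))
print(expr.eval(3))          # 3*3 + 2*3 - 1 = 14
print(expr.deriv().eval(3))  # derivative 2x + 2 evaluated at 3 = 8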
I'm working on a string similarity algorithm, and was thinking about how to give a score between 0 and 1 when comparing two strings. The two variables for this function are the Levenshtein distance D (added, removed and changed characters) and the maximum length of the two strings L (but you could also take the average).
My initial algorithm was just 1 - D/L, but this gave too high scores for short strings, e.g. 'tree' and 'bee' would get a score of 0.5, and too low scores for longer strings which have more in common even if half of the characters are different.
Now I'm looking for a mathematical function that can output a better score. I wasn't able to come up with one, so I sketched this height map of a 3D plot (with L on the x-axis and D on the y-axis).
Does anyone know how to convert such a graph to an equation, if I would be better off to just create a lookup table or if there is an existing solution?
Is it possible to express ANY random set of numbers by a function?
Question clarification:
for example:
if desired result set = {1,2,3,4,5}
so I don't mean something like this:
function getSet(){
    return {1,2,3,4,5};
}
but more like this:
function genSet(){
    result = {}
    for(i=1;i<=5;i++){
        result.push(i);
    }
    return result;
}
So in other words, can there be a logic to calculate any desired set?
There is a lot of mathematics behind this question. There are some interesting results.
Any finite set of (real) numbers can be defined by a polynomial function f(x) = a + b x + c x^2 + ..., such that a number is in the set exactly when f(x) = 0. Technically this is an algebraic curve in 1D. While this might seem an optimistic result, there is no limit on how complex the polynomial can be, and polynomials of degree 5 and above have no general solution in radicals.
There is a whole field of study on computable numbers: real numbers which can be computed to within any desired precision by a finite, terminating algorithm, and their converse, non-computable numbers, which can't. The bad news is there are a lot more non-computable numbers than computable ones.
The above is based on real numbers, which are decidedly more tricky than the integers, or even a finite set of integers, which is all we can represent with int or long datatypes. There is a big field of study in this; see Computability theory (computer science). Turing's halting problem comes into play here: it asks whether you can determine if an algorithm will terminate. Unfortunately this is undecidable in general, and a consequence is that not every set of natural numbers is computable. The proof of this does require the infinite size of the naturals, so I'm not sure about finite sets.
Representations
There are two common representations used for sets when programming. Suppose the set S is a subset of some universe of items U.
Membership Predicate
One way to represent the set S is a function member from U to { true, false }. For all x in U:
member(x) = true if x is in S
member(x) = false if x is not in S
Pseudocode
bool member(int n)
    return 1 <= n <= 5
Enumeration
Another way to represent S is to store all of its members in a data structure, such as a list, hash table, or binary tree.
Pseudocode
enumerable<int> S()
    for int i = 1 to 5
        yield return i
Operations
With either of these representations, most set operations can be defined. For example, the union of two sets would look as follows with each of the two representations.
Membership Predicate
func<int, bool> union(func<int, bool> s, func<int, bool> t)
    return x => s(x) || t(x)
Enumeration
enumerable<int> union(enumerable<int> s, enumerable<int> t)
    hashset<int> r
    foreach x in s
        r.add(x)
    foreach x in t
        if x not in r
            r.add(x)
    return r
Comparison
The membership predicate representation can be extremely versatile because all kinds of set operations from mathematics can be very easily expressed (complement, Cartesian product, etc.). The drawback is that there is no general way to enumerate all the members of a set represented in this way. The set of all positive real numbers, for example, cannot even be enumerated.
The enumeration representation typically involves much more expensive set operations, and some operations (such as the complement of the integer set {1, 2, 3, 4, 5}) cannot even be represented. It should be chosen if you need to be able to enumerate the members of a set, not just test membership.
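To make the trade-off concrete, here is a small Python sketch (the helper name union_pred is invented for this example) of both representations and their union:

# Membership predicate: a set is just a function int -> bool.
s = lambda x: 1 <= x <= 5
t = lambda x: x % 2 == 0   # the (infinite) set of even integers

def union_pred(a, b):
    return lambda x: a(x) or b(x)

print(union_pred(s, t)(7))   # False
print(union_pred(s, t)(8))   # True

# Enumeration: the same finite set stored explicitly (only possible when finite).
s_enum = {1, 2, 3, 4, 5}
print(s_enum | {2, 4, 6, 8})  # union by enumeration: {1, 2, 3, 4, 5, 6, 8}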
If I enter a value, for example
1234567 ^ 98787878
into Wolfram Alpha, it can provide me with a number of details, including the decimal approximation, total length, last digits, etc. How do you evaluate such large numbers? As I understand it, a programming language would have to have a special data type just to store the number, let alone add it to something else. While I can see how one might approach the addition of two very large numbers, I can't see how huge numbers are evaluated.
10^2 could be calculated through repeated addition. However, a number such as the example above would require a gigantic loop. Could someone explain how such large numbers are evaluated? Also, how could someone create a custom large-number datatype in C#, for example?
Well, it's quite easy and you could have done it yourself.
The number of digits can be obtained via logarithms:
since `A^B = 10 ^ (B * log(A, 10))`
we can compute (A = 1234567; B = 98787878) in our case that
`B * log(A, 10) = 98787878 * log(1234567, 10) = 601767807.4709646...`
integer part + 1 (601767807 + 1 = 601767808) is the number of digits
The first, say, five digits can be obtained via logarithms as well;
now we should analyze the fractional part of
B * log(A, 10) = 98787878 * log(1234567, 10) = 601767807.4709646...
f = 0.4709646...
first digits are 10^f (decimal point removed) = 29577...
The last, say, five digits can be obtained as the corresponding remainder:
last five digits = A^B rem 10^5
A rem 10^5 = 1234567 rem 10^5 = 34567
A^B rem 10^5 = ((A rem 10^5)^B) rem 10^5 = (34567^98787878) rem 10^5 = 45009
last five digits are 45009
You may find BigInteger.ModPow (C#) very useful here
Finally
1234567^98787878 = 29577...45009 (601767808 digits)
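If you want to check these steps, Python's built-in big integers and math.log10 make it easy (a small sketch that assumes double-precision logarithms are accurate enough for the leading digits):

from math import log10, floor

A, B = 1234567, 98787878

t = B * log10(A)                     # B * log(A, 10) = 601767807.47096...
digits = floor(t) + 1                # 601767808 digits
first5 = int(10 ** (t % 1) * 10**4)  # 29577, from the fractional part of t
last5 = pow(A, B, 10**5)             # modular exponentiation, never builds the full number;
                                     # the worked example above gets 45009

print(digits, first5, last5)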
There are usually libraries providing a bignum datatype for arbitrarily large integers (e.g. mapping digits k*n ... (k+1)*n - 1, for k = 0 .. m, where m depends on n and the magnitude of the number, to a machine word of size n, and redefining the arithmetic operations). For C#, you might be interested in BigInteger.
Exponentiation can be recursively broken down:
pow(a,2*b) = pow(a,b) * pow(a,b);
pow(a,2*b+1) = pow(a,b) * pow(a,b) * a;
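A small Python sketch of this recursive breakdown (exponentiation by squaring; the function name power is just for illustration):

def power(a, b):
    # pow(a, 2b) = pow(a, b)^2;  pow(a, 2b+1) = pow(a, b)^2 * a
    if b == 0:
        return 1
    half = power(a, b // 2)
    return half * half if b % 2 == 0 else half * half * a

print(power(3, 10))  # 59049, using only about log2(10) multiplications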
There are also number-theoretic results that have engendered special algorithms to determine properties of large numbers without actually computing them (to be precise: without computing their full decimal expansion).
To compute how many digits there are, one uses the following expression:
decimal_digits(n) = 1 + floor(log_10(n))
This gives:
decimal_digits(1234567^98787878) = 1 + floor(log_10(1234567^98787878))
= 1 + floor(98787878 * log_10(1234567))
= 1 + floor(98787878 * 6.0915146640862625)
= 1 + floor(601767807.4709647)
= 601767808
The trailing k digits are computed by doing exponentiation mod 10^k, which keeps the intermediate results from ever getting too large.
The approximation will be computed using a (software) floating-point implementation that effectively evaluates a^(98787878 log_a(1234567)) to some fixed precision for some number a that makes the arithmetic work out nicely (typically 2 or e or 10). This also avoids the need to actually work with millions of digits at any point.
There are many libraries for this, and the capability is built in in the case of Python. You seem primarily concerned with the size of such numbers and the time it may take to do computations like the exponentiation in your example. So I'll explain a bit.
Representation
You might use an array to hold all the digits of large numbers. A more efficient way would be to use an array of 32 bit unsigned integers and store "32 bit chunks" of the large number. You can think of these chunks as individual digits in a number system with 2^32 distinct digits or characters. I used an array of bytes to do this on an 8-bit Atari800 back in the day.
Doing math
You can obviously add two such numbers by looping over all the digits and adding elements of one array to the other and keeping track of carries. Once you know how to add, you can write code to do "manual" multiplication by multiplying digits and putting the results in the right place and a lot of addition - but software will do all this fairly quickly. There are faster multiplication algorithms than the one you would use manually on paper as well. Paper multiplication is O(n^2) where other methods are O(n*log(n)). As for the exponent, you can of course multiply by the same number millions of times but each of those multiplications would be using the previously mentioned function for doing multiplication. There are faster ways to do exponentiation that require far fewer multiplies. For example you can compute x^16 by computing (((x^2)^2)^2)^2 which involves only 4 actual (large integer) multiplications.
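As a toy illustration of the chunks-with-carries idea, here is a Python sketch of addition on base-10^9 chunks stored least significant first (the base and layout are assumptions chosen for readability, not how any particular library works):

BASE = 10**9  # each list element is one "digit" in base 10^9

def add_chunks(a, b):
    # a and b are lists of chunks, least significant chunk first
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        total = carry
        if i < len(a): total += a[i]
        if i < len(b): total += b[i]
        result.append(total % BASE)
        carry = total // BASE
    if carry:
        result.append(carry)
    return result

# 999999999 + 1 = 1000000000, i.e. chunks [0, 1] in base 10^9
print(add_chunks([999999999], [1]))  # [0, 1]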
In practice
It's fun and educational to try writing these functions yourself, but in practice you will want to use an existing library that has been optimized and verified.
I think part of the answer is in the question itself :) To store these expressions, you can store the base (or mantissa) and exponent separately, as scientific notation does. Extending that, you cannot possibly evaluate the expression completely and store such a large number, although you can theoretically predict certain properties of the resulting expression. I will take you through each of the properties you talked about:
Decimal approximation: Can be calculated by evaluating simple log values.
Total number of digits: for the expression a^b, this can be calculated by the formula
Digits = floor(1 + Log10(a^b)), where floor rounds down to the nearest integer. For example, the number of digits in 10^5 is 6.
Last digits: These can be calculated by virtue of the fact that the last digits of successive powers form a repeating cycle. For example, at the units place, the digits 7, 9, 3, 1 repeat for successive powers 7^x. So you can calculate that if x % 4 is 0 (and x > 0), the last digit is 1, as the quick check below shows.
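A quick Python check of that repeating cycle (a tiny sketch, not part of the original answer):

# Last digits of 7^1 .. 7^8 cycle through 7, 9, 3, 1
print([pow(7, x, 10) for x in range(1, 9)])  # [7, 9, 3, 1, 7, 9, 3, 1]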
Whether someone can create a custom datatype for large numbers, I can't say, but I am sure the full number won't be evaluated and stored.
I have 2 questions,
I've made a vector from a document by finding out how many times each word appeared in a document. Is this the right way of making the vector? Or do I have to do something else also?
Using the above method I've created vectors for 16 documents, which are of different sizes. Now I want to apply cosine similarity to find out how similar each document is. The problem I'm having is getting the dot product of two vectors because they are of different sizes. How would I do this?
Sounds reasonable, as long as it means you have a list/map/dict/hash of (word, count) pairs as your vector representation.
You should pretend that you have zero values for the words that do not occur in some vector, without storing these zeros anywhere. Then, you can use the following algorithm to compute the dot product of these vectors (pseudocode):
algorithm dot_product(a : WordVector, b : WordVector):
    dot = 0
    for word, x in a do
        y = lookup(word, b)   // 0 if word does not occur in b
        dot += x * y
    return dot
The lookup part can be anything, but for speed, I'd use hashtables as the vector representation (e.g. Python's dict).
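Concretely, a Python sketch using dicts as the word vectors (the function names dot and cosine_similarity are made up for this example):

from math import sqrt

def dot(a, b):
    # iterate over the smaller dict and look each word up in the other
    if len(a) > len(b):
        a, b = b, a
    return sum(x * b.get(word, 0) for word, x in a.items())

def cosine_similarity(a, b):
    denom = sqrt(dot(a, a)) * sqrt(dot(b, b))
    return dot(a, b) / denom if denom else 0.0

d1 = {"the": 2, "cat": 1, "sat": 1}
d2 = {"the": 1, "cat": 1, "ran": 1}
print(cosine_similarity(d1, d2))  # about 0.707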
EDIT
So it seems I "underestimated" what varying-length numbers meant. I didn't even think about situations where the operands are 100 digits long. In that case, my proposed algorithm is definitely not efficient. I'd probably need an implementation whose complexity depends on the number of digits in each operand as opposed to its numerical value, right?
As suggested below, I will look into the Karatsuba algorithm...
Write the pseudocode of an algorithm that takes in two arbitrary length numbers (provided as strings), and computes the product of these numbers. Use an efficient procedure for multiplication of large numbers of arbitrary length. Analyze the efficiency of your algorithm.
I decided to take the (semi) easy way out and use the Russian Peasant Algorithm. It works like this:
a * b = a/2 * 2b if a is even
a * b = (a-1)/2 * 2b + b if a is odd
My pseudocode is:
rpa(x, y){
    if x is 1
        return y
    if x is even
        return rpa(x/2, 2y)
    if x is odd
        return rpa((x-1)/2, 2y) + y
}
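For reference, a directly runnable Python version of the same recursion (a sketch only; Python's integers are arbitrary precision, so it also works for very large operands):

def rpa(x, y):
    # Russian Peasant multiplication: x * y using halving, doubling and addition
    if x == 1:
        return y
    if x % 2 == 0:
        return rpa(x // 2, 2 * y)
    return rpa((x - 1) // 2, 2 * y) + y

print(rpa(238, 13))  # 3094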
I have 3 questions:
Is this efficient for arbitrary-length numbers? I implemented it in C and tried varying-length numbers. The run-time was near-instant in all cases, so it's hard to tell empirically...
Can I apply the Master's Theorem to understand the complexity...?
a = # subproblems in recursion = 1 (at most 1 recursive call in each case)
n / b = size of each subproblem = n / 2 -> b = 2 (the value is halved at each step)
f(n) = work done outside the recursive calls = O(1) = n^0 -> d = 0 (the addition when a is odd)
a = 1, b^d = 2^0 = 1, a = b^d -> complexity is n^d * log(n) = log(n)
this makes sense logically since we are halving the problem at each step, right?
What might my professor mean by providing arbitrary length numbers "as strings". Why do that?
Many thanks in advance
What might my professor mean by providing arbitrary length numbers "as strings". Why do that?
This actually changes everything about the problem (and makes your algorithm incorrect as written).
It means that 1234 is provided as 1,2,3,4 and you cannot operate directly on the whole number. You need to analyze your algorithm in terms of #additions, #multiplications, #divisions.
You should expect a division to be a bit more expensive than a multiplication, and a multiplication to be a lot more expensive than an addition. So a good algorithm tries to reduce the number of divisions and multiplications.
Check out the Karatsuba algorithm (p.s. don't copy it, that's not what your teacher wants); it is one of the fastest for this kind of problem.
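For reference, here is a compact Karatsuba sketch (hedged: it operates on Python integers rather than digit strings, purely to illustrate the three-multiplication recurrence, so it is not a drop-in answer to the assignment):

def karatsuba(x, y):
    # Multiply x and y with three recursive multiplications instead of four
    if x < 10 or y < 10:
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    a, b = divmod(x, 10 ** half)   # x = a * 10^half + b
    c, d = divmod(y, 10 ** half)   # y = c * 10^half + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    middle = karatsuba(a + b, c + d) - ac - bd   # equals ad + bc
    return ac * 10 ** (2 * half) + middle * 10 ** half + bd

print(karatsuba(1234, 5678))  # 7006652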
Regarding 3): Native integers are limited in how large (or small) a number they can represent (32- or 64-bit integers, for example). To represent arbitrary-length numbers you can choose strings, because then you are not really limited by this. The problem is then, of course, that your arithmetic units are not really made to add strings ;-)