Project Euler 9 Understanding - math

This question states:
A Pythagorean triplet is a set of three natural numbers, a < b < c, for which
a^2 + b^2 = c^2.
For example, 3^2 + 4^2 = 9 + 16 = 25 = 5^2.
There exists exactly one Pythagorean triplet for which a + b + c = 1000.
Find the product abc.
I'm not sure what it's trying to ask. Are we trying to find a^2 + b^2 = c^2 and then plug those numbers into a + b + c = 1000?

You need to find a, b, and c such that both a^2 + b^2 = c^2 and a + b + c = 1000. Then you need to output the product a * b * c.

These problems are often trivially solvable once you find the proper insight. The trick here is to use a little algebra before you ever write a loop. I'll give you one hint: look at the formula for generating Pythagorean triples. Can you write the sum of the side lengths in a useful way?
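To make the hint concrete (a sketch, not the only way to use it): Euclid's formula gives a = k*(m^2 - n^2), b = 2*k*m*n, c = k*(m^2 + n^2), so the perimeter is a + b + c = 2*k*m*(m + n), and 2*k*m*(m + n) = 1000 has very few integer solutions. The function name below is illustrative.

```python
# Euclid's formula: a = k(m^2 - n^2), b = 2kmn, c = k(m^2 + n^2),
# so the perimeter is 2*k*m*(m + n). We only need m*(m + n) to divide 500.
def euler9_euclid(total=1000):
    half = total // 2                      # k * m * (m + n) = 500
    for m in range(2, int(half ** 0.5) + 1):
        for n in range(1, m):
            if half % (m * (m + n)) == 0:  # k comes out an integer
                k = half // (m * (m + n))
                a, b, c = k * (m*m - n*n), 2*k*m*n, k * (m*m + n*n)
                return a * b * c
    return None

print(euler9_euclid())  # 31875000, from the triplet (200, 375, 425)
```

The search space shrinks from roughly a million (a, b) pairs to a couple of dozen (m, n) candidates.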

Like a large number of Project Euler problems, it's all about finding a set of numbers that simultaneously fulfil multiple constraints.
In this case, the constraints are:
1) a^2 + b^2 = c^2
2) a+b+c = 1000
In the early questions the solution can be as simple as nested loops which try each possible combination.
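That nested-loop approach can be sketched as follows (Python; the sum constraint determines c from a and b, so only two loops are needed):

```python
# Brute force: for each pair (a, b), c is forced by a + b + c = 1000,
# so we only have to check the Pythagorean condition.
def euler9(total=1000):
    for a in range(1, total):
        for b in range(a + 1, total - a):
            c = total - a - b
            if a * a + b * b == c * c:
                return a * b * c
    return None

print(euler9())  # 31875000
```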

Related

Given a, b, c and alpha find x

I was doing image processing to determine the distance between two points in a picture. It involves a fair amount of geometry. One of the problems I tried to solve using basic geometry, but failed to arrive at a solution for, is the following. I have transformed the question into mathematical terms so that a wider audience can answer it.
The sides a, b, c, and the angle alpha are given. The length x is to be found.
Using the sine and cosine laws I've found one relation from the cosine law and one from the sine law (the equations were given as images and are not reproduced here), where beta is the angle opposite the side b.
This is not a trivial problem, and maybe it should have been asked on Math.SE.
But here is my take:
Considering the triangle abx:
b^2 = x^2 + a^2 - 2*a*x*cos(β) #1
And the triangle a1cx:
c^2 = x^2 + a1^2 - 2*a1*x*cos(β) #2
sin(α)/a1 = sin(β)/c #3
Three non-linear equations to be solved for x, a1 and β.
Subtract #2 from #1 to eliminate x^2 (with some simplifications)
b^2 - c^2 = -2*x *(a-a1)*cos(β)+a^2 -a1^2 #4
use #3 to eliminate β in terms of a1 in #4
b^2 - c^2 = -2*x*(a-a1)*sqrt(1 - c^2/a1^2*sin(α)^2)+a^2-a1^2 #5
Now subtract (a/a1)*#2 from #1 to eliminate a1^2
b^2 - a*c^2/a1 = -(a-a1)*(x^2-a*a1)/a1 #6
Equations #5 and #6 are two non-linear equations to be solved for x and a1.
From #5 we have x in terms of a1 with
x = a1*(a^2-a1^2-b^2+c^2)/(2*(a-a1)*sqrt(a1^2-c^2*sin(α)^2)) #7
Unfortunately, using the above in #6 results in a sixth-order polynomial to be solved for a1.
At this point it can only be solved numerically. Once a1 is found, #7 also gives us x.
0 = 4*a^2*c^2*g^2
  + a1   * (4*a*g^2*(a^2 - b^2 - c^2))
  + a1^2 * (a^4 - 2*a^2*(b^2 + c^2 + 4*g^2) + b^4 + 2*b^2*(2*g^2 - c^2) + c^4)
  + a1^3 * (-4*a*(a^2 - b^2 - c^2 - g^2))
  + a1^4 * (2*(3*a^2 - b^2 - c^2))
  + a1^5 * (-4*a)
  + a1^6
where g = c*sin(α)
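Since the sextic has no usable closed form, here is a minimal numeric sketch. The values of a, b, c and alpha are illustrative assumptions; any bracketing root-finder (or a polynomial-roots routine) would do equally well.

```python
import math

# Illustrative inputs (assumptions, not from the original problem).
a, b, c = 5.0, 4.0, 3.0
alpha = 0.6  # radians
g = c * math.sin(alpha)

def p(a1):
    # The sixth-order polynomial in a1, term by term as in the derivation.
    return (4*a**2*c**2*g**2
            + a1    * (4*a*g**2*(a**2 - b**2 - c**2))
            + a1**2 * (a**4 - 2*a**2*(b**2 + c**2 + 4*g**2)
                       + b**4 + 2*b**2*(2*g**2 - c**2) + c**4)
            + a1**3 * (-4*a*(a**2 - b**2 - c**2 - g**2))
            + a1**4 * (2*(3*a**2 - b**2 - c**2))
            + a1**5 * (-4*a)
            + a1**6)

def bisect(f, lo, hi, iters=200):
    # Plain bisection; assumes f changes sign on [lo, hi].
    for _ in range(iters):
        mid = (lo + hi) / 2
        if (f(lo) < 0) == (f(mid) < 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Scan (0, a) for sign changes, refine each bracket, then recover x via #7.
steps = 1000
grid = [a * i / steps for i in range(1, steps + 1)]
roots = [bisect(p, lo, hi) for lo, hi in zip(grid, grid[1:])
         if (p(lo) < 0) != (p(hi) < 0)]

for a1 in roots:
    if a1 > g:  # needed so the square root in #7 is real
        x = a1 * (a**2 - a1**2 - b**2 + c**2) / (
            2 * (a - a1) * math.sqrt(a1**2 - g**2))
        print("a1 =", a1, " x =", x)
```

Several real roots may appear in (0, a); the geometrically valid one still has to be selected by checking it against the original triangle.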

How to solve equation with rotation and translation matrices?

I'm working on a computer vision task and have this equation:
R0*c + t0 = R1*c + t1 = Ri*c + ti = ... = Rn*c + tn
n is about 20 (but can be more if needed)
where each pair R, t (a rotation matrix and a translation vector in 3D) is the result of the i-th measurement and is known, and the vector c is what I want to find.
I've got a result with the Ceres solver. It's good that it can handle outliers, but I think it's overkill for this task.
So which methods should I use in two situations:
With outliers
Without outliers
To handle outliers you can use RANSAC:
* In each iteration randomly pick i, j (a "sample") and solve for c:
Ri*c + ti = Rj*c + tj
- Set Y = Ri*c + ti
* Apply to the larger population:
- Select S = {k} for which ||Rk*c + tk - Y|| < e
e ~ 3 * RMS of the errors without outliers
- Find the optimal c over all k equations (least squares)
- Give it a "grade": the size of S
* After a few iterations, use the optimal c found for the max "grade".
* Number of iterations: log(1-p)/log(1-w^2)
[https://en.wikipedia.org/wiki/Random_sample_consensus]
p is the required certainty of the result (e.g. p = 0.999)
w is an estimate of the inlier fraction, (number of non-outliers)/n.
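For the no-outlier case, the chain of equalities reduces to a plain linear least-squares problem: each pair gives (R0 - Ri)*c = ti - t0. Below is a minimal pure-Python sketch with synthetic data; all helper names and numeric values are illustrative. Note that each single matrix R0 - Ri is rank-deficient (every relative rotation keeps its own axis fixed), which is exactly why several measurements must be stacked.

```python
import math, random

def rot_z(t):  # rotation about the z-axis
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rot_x(t):  # rotation about the x-axis
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def solve3(A, b):
    # Gauss-Jordan elimination with partial pivoting for a 3x3 system.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [M[r][j] - f * M[col][j] for j in range(4)]
    return [M[i][3] / M[i][i] for i in range(3)]

# Synthetic data: pick a ground-truth c, random rotations, and choose ti
# so that every Ri*c + ti equals the same fixed point y.
random.seed(1)
c_true, y = [0.3, -1.2, 2.0], [1.0, 2.0, 3.0]
Rs = [matmul(rot_z(random.uniform(0, 6.28)), rot_x(random.uniform(0, 6.28)))
      for _ in range(20)]
ts = [[y[i] - matvec(R, c_true)[i] for i in range(3)] for R in Rs]

# Normal equations: (sum Ai^T Ai) c = sum Ai^T bi,
# with Ai = R0 - Ri and bi = ti - t0.
AtA = [[0.0] * 3 for _ in range(3)]
Atb = [0.0] * 3
R0, t0 = Rs[0], ts[0]
for R, t in zip(Rs[1:], ts[1:]):
    Ai = [[R0[i][j] - R[i][j] for j in range(3)] for i in range(3)]
    bi = [t[i] - t0[i] for i in range(3)]
    for i in range(3):
        for j in range(3):
            AtA[i][j] += sum(Ai[k][i] * Ai[k][j] for k in range(3))
        Atb[i] += sum(Ai[k][i] * bi[k] for k in range(3))

c_est = solve3(AtA, Atb)
print(c_est)  # ≈ c_true
```

In practice you would use a linear-algebra library's least-squares routine instead of hand-rolled normal equations, but the structure of the problem is the same.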

How to compute limiting value

I have to compute the value of this expression:
(e1*cos(θ) - e2*sin(θ)) / ((cos(θ))^2 - (sin(θ))^2)
Here e1 and e2 are some complex expressions.
When θ approaches PI/4, the denominator approaches zero. But in that case e1 and e2 also approach the same value, so at PI/4 the value of the expression is E/sqrt(2), where e1 = e2 = E.
I can do special handling for θ = PI/4, but what about values of θ very, very near PI/4? What is the general strategy for computing such expressions?
This is a bit tricky. You need to know more about how e1 and e2 behave.
First, some notation: I'll use the variable a = theta - pi/4, so that we are interested in a->0, and write e1 = E + d1, e2 = E + d2. Let F = your expression times sqrt(2)
In terms of these we have
F = ((E+d1)*(cos(a) - sin(a)) - (E+d2)*(cos(a) + sin(a))) / (-sin(2*a))
  = (-(2*E+d1+d2)*sin(a) + (d1-d2)*cos(a)) / (-2*sin(a)*cos(a))
  = (E+(d1+d2)/2)/cos(a) - (d1-d2)/(2*sin(a))
Assuming that d1->0 and d2->0 as a->0 the first term will tend to E.
However the second term could tend to anything, or blow up -- for example if d1=d2=sqrt(a).
We need to assume more, for example that d1-d2 has derivative D at a=0.
In this case we will have
F-> E - D/2 as a->0
To be able to compute F for values of a close to 0 we need to know even more.
One approach is to have code like this:
if (fabs(a) < small) { F = E - D/2 + C*a; } else { F = /* normal code */ }
So we need to figure out what 'small' and C should be. In part this depends on what (relative) accuracy you require. The most stringent requirement would be that at a = ±small, the difference between the approximation and the normal code should be too small to represent in a double (if that's what you're using). But note that we mustn't make 'small' too small, or there is a danger that, as doubles, the normal code will evaluate to 0/0 at a = ±small.
One approach would be to expand the numerator and denominator of F (or just the second term) as power series (to, say, second order), divide each by a, and then divide these series, keeping terms up to second order. The first term in this gives you C above, and the second term will allow you to estimate the error in this approximation, and hence estimate 'small'.
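The limit F -> E - D/2 (so the original expression tends to (E - D/2)/sqrt(2)) can be checked numerically. The choices of E, d1 and d2 below are illustrative assumptions: d1(a) = a and d2(a) = -a, so that D = d(d1-d2)/da = 2.

```python
import math

E, D = 2.0, 2.0  # assumed values for this sketch

def expression(a):
    # The original expression, with e1 = E + d1(a), e2 = E + d2(a).
    theta = math.pi / 4 + a
    e1, e2 = E + a, E - a
    return (e1 * math.cos(theta) - e2 * math.sin(theta)) / (
        math.cos(theta) ** 2 - math.sin(theta) ** 2)

limit = (E - D / 2) / math.sqrt(2)
for a in (1e-2, 1e-4, 1e-6):
    print(a, expression(a), limit)
```

Direct evaluation in double precision stays accurate surprisingly close to a = 0 here; the series/threshold machinery only becomes necessary once cancellation in the numerator starts eating significant digits.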

Calculating log(sum of exp(terms) ) when "terms" are very small

I would like to compute log( exp(A1) + exp(A2) ).
The formula below
log(exp(A1) + exp(A2) ) = log[exp(A1)(1 + exp(A2)/exp(A1))] = A1 + log(1+exp(A2-A1))
is useful when A1 and A2 are large and numerically exp(A1)=Inf (or exp(A2)=Inf).
(this formula is discussed in this thread ->
How to calculate log(sum of terms) from its component log-terms). The formula is true when the role of A1 and A2 are replaced.
My concern of this formula is when A1 and A2 are very small. For example, when A1 and A2 are:
A1 <- -40000
A2 <- -45000
then the direct calculation of log(exp(A1) + exp(A2) ) is:
log(exp(A1) + exp(A2))
[1] -Inf
Using the formula above gives:
A1 + log(1 + exp(A2-A1))
[1] -40000
which is the value of A1.
Using the formula above with the roles of A1 and A2 flipped gives:
A2 + log(1 + exp(A1-A2))
[1] Inf
Which of the three values is closest to the true value of log(exp(A1) + exp(A2))? Is there a robust way to compute log(exp(A1) + exp(A2)) that can be used both when A1 and A2 are small and when they are large?
Thank you in advance
You should use something with more accuracy to do the direct calculation.
It’s not “useful when [they’re] large”. It’s useful when the difference is very negative.
When x is near 0, then log(1+x) is approximately x. So if A1>A2, we can take your first formula:
log(exp(A1) + exp(A2)) = A1 + log(1+exp(A2-A1))
and approximate it by A1 + exp(A2-A1) (and the approximation will get better as A2-A1 is more negative). Since A2-A1=-5000, this is more than negative enough to make the approximation sufficient.
Regardless, if y is too far from zero (either way) exp(y) will (over|under)flow a double and result in 0 or infinity (this is a double, right? what language are you using?). This explains your answers. But since exp(A2-A1)=exp(-5000) is close to zero, your answer is approximately -40000+exp(-5000), which is indistinguishable from -40000, so that one is correct.
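Both observations combine into the standard numerically stable log-sum-exp. A sketch in Python (the function name is illustrative; SciPy ships this as `scipy.special.logsumexp`):

```python
import math

# Factor out the max so the remaining exponent is always <= 0 and can
# never overflow; log1p keeps precision when exp(small - big) is tiny.
def logsumexp2(a1, a2):
    big, small = max(a1, a2), min(a1, a2)
    return big + math.log1p(math.exp(small - big))

print(logsumexp2(-40000, -45000))  # -40000.0: the exp(-5000) term vanishes
print(logsumexp2(1000, 1000))      # 1000 + log(2)
```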
With such huge exponent differences, the safest thing you can do without arbitrary precision is:
choose the biggest exponent, let it be Am = max(A1, A2)
so: log(exp(A1)+exp(A2)) -> log(exp(Am)) = Am
That is the closest you can get in such a case.
So in your example the result is -40000 + delta,
where delta is something very small.
If you want to use the second formula, then it all comes down to computing log(1+exp(A)):
if A is positive, then the result is far from the real thing
if A is negative, then it will truncate to log(1) = 0, so you get the same result as above
[Notes]
Your exponent difference is base^5000:
single precision (a 32-bit float) can store numbers up to roughly (+/-)2^(+/-128)
double precision (a 64-bit float) can store numbers up to roughly (+/-)2^(+/-1024)
so when your base is 10 or e, this is nowhere near enough for what you need.
If you have quadruple precision, that should be enough, but when you start changing the exponent difference again, you will quickly get to the same point as now.
[PS] If you need more precision without arbitrary precision:
you can try to create your own number class
with an internal representation like number = a^b, where a and b are floats
but for that you would need to code all the basic functions:
*,/ are easy
+,- are a nightmare, but there may be approaches/algorithms out there even for this

Mathematical (Arithmetic) representation of XOR

I have spent the last 5 hours searching for an answer. Even though I have found many answers, they have not helped in any way.
What I am basically looking for is a mathematical, arithmetic-only representation of the bitwise XOR operator for any 32-bit unsigned integers.
Even though this sounds really simple, nobody (at least it seems so) has managed to find an answer.
I hope we can brainstorm, and find a solution together.
Thanks.
XOR any numerical input between 0 and 1 including both ends
a + b - ab(1 + a + b - ab)
XOR binary input
a + b - 2ab or (a-b)²
Derivation
Basic Logical Operators
NOT = (1-x)
AND = x*y
From those operators we can get...
OR = (1-(1-a)(1-b)) = a + b - ab
Note: If a and b are mutually exclusive then their and condition will always be zero - from a Venn diagram perspective, this means there is no overlap. In that case, we could write OR = a + b, since a*b = 0 for all values of a & b.
2-Factor XOR
Defining XOR as (a OR b) AND (NOT (a AND b)):
(a OR b) --> (a + b - ab)
(NOT (a AND b)) --> (1 - ab)
AND these conditions together to get...
(a + b - ab)(1 - ab) = a + b - ab(1 + a + b - ab)
Computational Alternatives
If the input values are binary, then the power terms can be ignored to arrive at simplified, computationally equivalent forms.
a + b - ab(1 + a + b - ab) = a + b - ab - a²b - ab² + a²b²
If x is binary (either 1 or 0), then we can disregard powers since 1² = 1 and 0² = 0...
a + b - ab - a²b - ab² + a²b² -- remove powers --> a + b - 2ab
XOR (binary) = a + b - 2ab
Binary also allows other equations to be computationally equivalent to the one above. For instance...
Given (a-b)² = a² + b² - 2ab
If input is binary we can ignore powers, so...
a² + b² - 2ab -- remove powers --> a + b - 2ab
Allowing us to write...
XOR (binary) = (a-b)²
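All three forms derived above can be checked against the bitwise XOR on single bits (a quick sketch; the lambda names are just labels):

```python
# full works on the whole [0, 1] range; lin and sq assume binary inputs.
full = lambda a, b: a + b - a*b*(1 + a + b - a*b)
lin  = lambda a, b: a + b - 2*a*b
sq   = lambda a, b: (a - b) ** 2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, a ^ b, full(a, b), lin(a, b), sq(a, b))
```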
Multi-Factor XOR
XOR = (1 - A*B*C...)(1 - (1-A)(1-B)(1-C)...)
Excel VBA example...
Function ArithmeticXOR(R As Range, Optional EvaluateEquation = True)
    Dim AndOfNots As String
    Dim AndGate As String
    For Each c In R
        AndOfNots = AndOfNots & "*(1-" & c.Address & ")"
        AndGate = AndGate & "*" & c.Address
    Next
    AndOfNots = Mid(AndOfNots, 2)
    AndGate = Mid(AndGate, 2)
    'Now all we want is (Not(AndGate) AND Not(AndOfNots))
    ArithmeticXOR = "(1 - " & AndOfNots & ")*(1 - " & AndGate & ")"
    If EvaluateEquation Then
        ArithmeticXOR = Application.Evaluate(ArithmeticXOR)
    End If
End Function
Any n of k
These same methods can be extended to allow for any n number out of k conditions to qualify as true.
For instance, out of three variables a, b, and c, if you're willing to accept any two conditions, then you want a&b or a&c or b&c. This can be arithmetically modeled from the composite logic...
(a && b) || (a && c) || (b && c) ...
and applying our translations...
1 - (1-ab)(1-ac)(1-bc)...
This can be extended to any n number out of k conditions. There is a pattern of variable and exponent combinations, but this gets very long; however, you can simplify by ignoring powers for a binary context. The exact pattern is dependent on how n relates to k. For n = k-1, where k is the total number of conditions being tested, the result is as follows:
c1 + c2 + c3 + ... + cm - n*(product of all k variables)
where c1 through cm are all the n-variable combinations.
For instance, true if 3 of 4 conditions met would be
abc + abe + ace + bce - 3abce
This makes perfect logical sense since what we have is the additive OR of AND conditions minus the overlapping AND condition.
If you begin looking at n = k-2, k-3, etc. The pattern becomes more complicated because we have more overlaps to subtract out. If this is fully extended to the smallest value of n = 1, then we arrive at nothing more than a regular OR condition.
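The OR-of-ANDs construction can be sanity-checked exhaustively for small k. A sketch (`n_of_k` builds 1 minus the product of (1 - each n-term AND), i.e. the arithmetic OR of all n-ANDs, as in the 1 - (1-ab)(1-ac)(1-bc) example above):

```python
from itertools import combinations, product

def prod(xs):
    out = 1
    for x in xs:
        out *= x
    return out

def n_of_k(n, bits):
    # Arithmetic OR over every n-variable AND term.
    out = 1
    for combo in combinations(bits, n):
        out *= 1 - prod(combo)
    return 1 - out

def at_least(n, bits):
    return int(sum(bits) >= n)

for k, n in ((3, 2), (4, 3)):
    for bits in product((0, 1), repeat=k):
        assert n_of_k(n, bits) == at_least(n, bits)
print("n-of-k formulas agree on all binary inputs")
```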
Thinking about Non-Binary Values and Fuzzy Region
The actual algebraic XOR equation a + b - ab(1 + a + b - ab) is much more complicated than the computationally equivalent binary equations like x + y - 2xy and (x-y)². Does this mean anything, and is there any value to this added complexity?
Obviously, for this to matter, you'd have to care about the decimal values outside of the discrete points (0,0), (0,1), (1,0), and (1,1). Why would this ever matter? Sometimes you want to relax the integer constraint for a discrete problem. In that case, you have to look at the premises used to convert logical operators to equations.
When it comes to translating Boolean logic into arithmetic, your basic building blocks are the AND and NOT operators, with which you can build both OR and XOR.
OR = (1-(1-a)(1-b)(1-c)...)
XOR = (1 - a*b*c...)(1 - (1-a)(1-b)(1-c)...)
So if you're thinking about the decimal region, then it's worth thinking about how we defined these operators and how they behave in that region.
Non-Binary Meaning of NOT
We expressed NOT as 1-x. Obviously, this simple equation works for binary values of 0 and 1, but the thing that's really cool about it is that it also provides the fractional or percent-wise complement for values between 0 and 1. This is useful since NOT is also known as the Complement in Boolean logic, and when it comes to sets, NOT refers to everything outside of the current set.
Non-Binary Meaning of AND
We expressed AND as x*y. Once again, it obviously works for 0 and 1, but its effect is a little more arbitrary for values between 0 and 1, where multiplication results in partial truths (decimal values) diminishing each other. It's possible to imagine that you would want to model truth as being averaged or accumulative in this region. For instance, if two conditions are hypothetically half true, is the AND condition only a quarter true (0.5 * 0.5), or is it entirely true (0.5 + 0.5 = 1), or does it remain half true ((0.5 + 0.5) / 2)?
As it turns out, the quarter truth is actually correct for conditions that are entirely discrete, with the partial truth representing probability. For instance, will you flip tails (a binary condition, 50% probability) both now AND again a second time? The answer is 0.5 * 0.5 = 0.25, or 25% true.
Accumulation doesn't really make sense because it's basically modeling an OR condition (remember OR can be modeled by + when the AND condition is not present, so summation is characteristically OR). The average makes sense if you're looking at agreement and measurements, but it's really modeling a hybrid of AND and OR. For instance, ask 2 people to say, on a scale of 1 to 10, how much they agree with the statement "It is cold outside". If they both say 5, then the truth of the statement "It is cold outside" is 50%.
Non-Binary Values in Summary
The take away from this look at non-binary values is that we can capture actual logic in our choice of operators and construct equations from the ground up, but we have to keep in mind numerical behavior. We are used to thinking about logic as discrete (binary) and computer processing as discrete, but non-binary logic is becoming more and more common and can help make problems that are difficult with discrete logic easier/possible to solve. You'll need to give thought to how values interact in this region and how to translate them into something meaningful.
"Mathematical, arithmetic-only representation" are not quite the right terms anyway. What you are looking for is a function which goes from I x I to I (the domain of integer numbers).
Which restrictions would you like to have on this function? Only linear algebra (+, -, *, /)? Then it's impossible to emulate the XOR operator.
If instead you accept some non-linear operators like Max(), Sgn() etc., you can emulate the XOR operator with some "simpler" operators.
Given that (a-b)*(a-b) quite obviously computes XOR for a single bit, you could construct a function with the floor or mod arithmetic operators to split the bits out, then XOR them, then sum to recombine. (a-b)*(a-b) = a^2 - 2*a*b + b^2, so one bit of XOR gives a polynomial with 3 terms.
Without floor or mod, the different bits interfere with each other, so you're stuck with looking at a solution which is a polynomial interpolation treating the inputs a, b as a single value: a xor b = g(a * 2^32 + b)
The polynomial has 2^64 - 1 terms, though it will be symmetric in a and b, as XOR is commutative, so you only have to calculate half of the coefficients. I don't have the space to write it out for you.
I wasn't able to find any solution for 32-bit unsigned integers, but I've found some solutions for 2-bit integers which I was trying to use in my Prolog program.
One of my solutions (which uses exponentiation and modulo) is described in this StackOverflow question, and the others (some without exponentiation, pure algebra) can be found in this code repository on Github: see the different xor0 and o_xor0 implementations.
The nicest XOR representation for 2-bit uints seems to be: xor(A,B) = (A + B*((-1)^A)) mod 4.
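That identity can be verified exhaustively over all 2-bit values (a quick sketch):

```python
# xor(A, B) = (A + B*(-1)^A) mod 4, checked against the built-in XOR.
def xor2bit(a, b):
    return (a + b * (-1) ** a) % 4

for a in range(4):
    for b in range(4):
        print(a, b, a ^ b, xor2bit(a, b))
```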
A solution with +,-,*,/ expressed as an Excel formula (where the cells from A2 to A5 and from B1 to E1 contain the numbers 0-3), to be inserted in the cells from B2 to E5:
(1-$A2)*(2-$A2)*(3-$A2)*($A2+B$1)/6 - $A2*(1-$A2)*(3-$A2)*($A2+B$1)/2 + $A2*(1-$A2)*(2-$A2)*($A2-B$1)/6 + $A2*(2-$A2)*(3-$A2)*($A2-B$1)/2 - B$1*(1-B$1)*(3-B$1)*$A2*(3-$A2)*(6-4*$A2)/2 + B$1*(1-B$1)*(2-B$1)*$A2*($A2-3)*(6-4*$A2)/6
It may be possible to adapt and optimize this solution for 32-bit unsigned integers. It's complicated and it uses logarithms, but it seems to be the most universal one, as it can be used on any integer number. Additionally, you'll have to check whether it really works for all number combinations.
I do realize that this is sort of an old topic, but the question is worth answering and yes, this is possible using an algorithm. Rather than go into great detail about how it works, I'll just demonstrate with a simple example (written in C):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

typedef unsigned long number;

number XOR(number a, number b)
{
    number result = 0;
    /*
       Start with the highest power of two (and thus the most significant
       bit) for this data type. A shift is exact here, unlike pow(), which
       goes through double and can lose precision for wide types.
    */
    number power = (number)1 << (sizeof(number) * 8 - 1);

    /* Loop until no more bits are left to test... */
    while (power != 0)
    {
        result *= 2;
        /* The != comparison works just like the XOR operation. */
        if ((power > a) != (power > b))
            result += 1;
        a %= power;
        b %= power;
        power /= 2;
    }
    return result;
}

int main()
{
    srand(time(0));
    for (;;)
    {
        number a = rand(), b = rand();
        printf("a = %lu\n", a);
        printf("b = %lu\n", b);
        printf("a ^ b = %lu\n", a ^ b);
        printf("XOR(a, b) = %lu\n", XOR(a, b));
        getchar();
    }
}
I think this relation might help in answering your question:
A + B = (A XOR B) + 2*(A AND B)
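The relation is easy to spot-check (a quick sketch; the intuition is that XOR is the carry-less sum of the bits, while A AND B marks the carry positions, which the factor of 2 shifts into place):

```python
import random

# Check A + B = (A XOR B) + 2*(A AND B) on random 32-bit pairs.
random.seed(0)
for _ in range(1000):
    a, b = random.getrandbits(32), random.getrandbits(32)
    assert a + b == (a ^ b) + 2 * (a & b)
print("identity holds on 1000 random 32-bit pairs")
```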
(a-b)*(a-b) is the right answer for a single bit. The only one? I guess so!