I am self-taught and thought that I understood recursion, but I cannot solve this problem:
What is returned by the call recur(12)?
What is returned by the call recur (25)?
public static int recur (int y)
{
    if (y <= 3)
        return y % 4;
    return recur(y-2) + recur(y-1) + 1;
}
Would someone please help me with understanding how to solve these problems?
First of all, I assume you mean:
public static int recur(int y)
but the easiest way to see what this method does is to place a print statement at the beginning of the method:
public static int recur(int y)
{
    System.out.println(y);
    if (y <= 3)
        return y % 4;
    return recur(y-2) + recur(y-1) + 1;
}
I am not sure what you mean by "what is returned", though, since there are several return statements. Anyway, these are the steps to figure this out:
1. Is 12 <= 3? No, so call recur(10). Don't proceed to the second recursive call yet.
   Is 10 <= 3? No, so call recur(8). Again, don't proceed to the second recursive call yet.
   Continue this pattern until y <= 3 is true; at that point you return y % 4 (whatever that number may be).
2. Now you are ready to evaluate the second recursive call in the most recent recur() invocation, i.e. recur(y - 1). Is y <= 3? If so, return y % 4. If not, repeat a process similar to step 1.
3. Once both calls have returned, add up recur(y - 2) + recur(y - 1) + 1. This will be a number, of course.
4. Continue this process until the original call can return (a traced version is sketched below).
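To make that evaluation order visible, here is an instrumented version of the same recurrence (a small C sketch of my own; the logic mirrors the Java method, with the two recursive calls pulled into named temporaries so the order is explicit):

#include <stdio.h>

/* Traced version of the recurrence: the first recursive call (y - 2) is fully
   evaluated before the second one (y - 1) starts, exactly as in the steps above. */
static int recurTraced(int y, int depth) {
    printf("%*srecur(%d)\n", depth * 2, "", y);   /* indent by recursion depth */
    if (y <= 3)
        return y % 4;
    int left  = recurTraced(y - 2, depth + 1);    /* step 1: first recursive call */
    int right = recurTraced(y - 1, depth + 1);    /* step 2: second recursive call */
    return left + right + 1;                      /* step 3: combine the results */
}

int main(void) {
    printf("recur(5) = %d\n", recurTraced(5, 0)); /* 10; small enough to read the whole trace */
    return 0;
}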
Recursion is difficult to follow and understand sometimes even for advanced programmers.
Here is a very common (and similar) problem for you to look into:
Java recursive Fibonacci sequence
I hope this helps!
Good luck!
I have removed the modulus there, since any non-negative n less than 4 just becomes n, so I ended up with:
public static int recur (int y)
{
    return y <= 3
        ? y
        : recur(y-2) + recur(y-1) + 1;
}
I like to start off by testing the base case: what happens for y = 0, 1, 2, 3? Well, each call just returns its argument, so 0, 1, 2, 3 of course.
What about 4? Then it's not less than or equal to 3, so you replace it with recur(4-2) + recur(4-1) + 1, which is recur(2) + recur(3) + 1. Now you can resolve each of those recur calls, since we established earlier that each just becomes its argument, so you end up with 2 + 3 + 1.
Doing this for 12 or 25 is exactly the same, just with more steps. Here is 5, which takes just one more step:
recur(5); //=>
recur(3) + recur(4) + 1; //==>
recur(3) + ( recur(2) + recur(3) + 1 ) + 1; //==>
3 + 2 + 3 + 1 + 1; // ==>
10
So in reality, each recursive call halts the current invocation until the sub-results it needs are available, and the current invocation then adds them together. I could have expanded the calls in the opposite order, but then I would have had to pause every time I needed a previously calculated value.
You should have enough info to do any y.
This is nothing more than an augmented Fibonacci sequence.
The first four terms are defined as 0, 1, 2, 3. Thereafter, each term is the sum of the previous two terms, plus one. This +1 augmentation is where it differs from the classic Fibonacci sequence. Just add up the series by hand:
0
1
2
3
3+2+1 = 6
6+3+1 = 10
10+6+1 = 17
17+10+1 = 28
...
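Continuing that table by machine instead of by hand (a minimal C sketch of my own, using the same rule: previous two terms, plus one):

#include <stdio.h>

int main(void) {
    int terms[26];
    terms[0] = 0; terms[1] = 1; terms[2] = 2; terms[3] = 3;   /* the four defined starting terms */
    for (int n = 4; n <= 25; n++)
        terms[n] = terms[n - 2] + terms[n - 1] + 1;           /* previous two terms, plus one */
    printf("recur(12) = %d\n", terms[12]);   /* 321 */
    printf("recur(25) = %d\n", terms[25]);   /* 167760 */
    return 0;
}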
What happens when the result of a multiplication or sum in OpenCL overflows? Does it wrap?
In particular I'd like to know if I can catch an overflow in
uint4 x = ( get_global_id( 0 ) * 4 + (uint4)(0, 1, 2, 3) ) * q + r;
with
int4 invalid = x < get_global_id( 0 ) * 4;
or how else that would be possible. (Assuming r >= 0 && q > r && q < (1 << 20) and the id will be at most just big enough to cause an overflow.)
Context: I want to check every 32-bit uint x for which x % q == r, where q and r are known. With vectors I can check 4 at a time, but the number of tests may not be divisible by 4.
I'm targeting the GPU, but that shouldn't be relevant, right?
The OpenCL 1.2 standard (section 6.2.3.3) refers to the C99 standard (section 6.3.1.3):
...if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type
until the value is in the range of the new type.
Generally, get_global_id returns size_t, so a narrowing conversion is a bad idea IMO. That said, I have never faced an NDRange big enough to exceed the uint range.
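To see the wrap-around rule in action in plain scalar C (a minimal sketch of my own, not the vectorized OpenCL code from the question; the values are arbitrary examples):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t a = 4000000000u, b = 500000000u;

    /* Unsigned arithmetic reduces the result modulo 2^32, so it wraps rather than traps. */
    uint32_t sum = a + b;                  /* 4500000000 - 2^32 = 205032704 */
    int sum_wrapped = sum < a;             /* a wrapped sum is smaller than either operand */

    uint32_t prod = a * 3u;                /* also reduced modulo 2^32 */
    int prod_wrapped = (a != 0) && (prod / a != 3u);   /* division-based overflow check */

    printf("sum=%u wrapped=%d prod=%u wrapped=%d\n",
           (unsigned)sum, sum_wrapped, (unsigned)prod, prod_wrapped);
    return 0;
}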
I have been trying to get my head around this particular complexity computation, but everything I read about this type of complexity tells me it is O(2^n). However, if I add a counter to the code and check how many times it iterates for a given n, it seems to follow the curve of 4^n instead. Maybe I just misunderstood something when I placed a count++; inside the scope.
Is this not of type big O(2^n)?
public int test(int n)
{
    if (n == 0)
        return 0;
    else
        return test(n-1) + test(n-1);
}
I would appreciate any hints or explanations! I'm completely new to this kind of complexity calculation, and this one has thrown me off track.
//Regards
int test(int n)
{
    printf("%d\n", n);
    if (n == 0) {
        return 0;
    }
    else {
        return test(n - 1) + test(n - 1);
    }
}
With a printout at the top of the function, running test(8) and counting the number of times each n is printed yields this output, which clearly shows 2^n growth.
$ ./test | sort | uniq -c
256 0
128 1
64 2
32 3
16 4
8 5
4 6
2 7
1 8
(uniq -c counts the number of times each line occurs. 0 is printed 256 times, 1 128 times, etc.)
Perhaps you mean you got a result of O(2^(n+1)), rather than O(4^n)? If you add up all of these numbers you'll get 511, which for n=8 is 2^(n+1) - 1.
If that's what you meant, then that's fine. O(2^(n+1)) = O(2⋅2^n) = O(2^n)
First off: the 'else' is redundant, since the if branch already returns when its condition is true.
On topic: every call forks 2 further calls, which each fork 2 more, and so on. For n=1 the function is called 2 times, plus the originating call; for n=2 that is 4 + 2 + 1 calls, then 8 + 4 + 2 + 1, etc. The complexity is therefore clearly O(2^n), since the constant factors and lower-order terms are swallowed by the exponential.
I suspect your counter wasn't properly reset between calls.
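For instance, a counting harness might look like this (a C sketch of my own, mirroring the code above; the important part is that the counter is reset before every measurement):

#include <stdio.h>

static long calls;   /* global call counter, used only for this measurement */

static int test(int n) {
    calls++;
    if (n == 0)
        return 0;
    return test(n - 1) + test(n - 1);
}

int main(void) {
    for (int n = 0; n <= 8; n++) {
        calls = 0;                              /* reset before every measurement */
        test(n);
        printf("n=%d calls=%ld\n", n, calls);   /* 1, 3, 7, 15, ... i.e. 2^(n+1) - 1 */
    }
    return 0;
}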
Let x(n) be the number of base-case calls (calls with argument 0) made by test(n).
x(0) = 1
x(n) = 2 * x(n - 1) = 2 * 2 * x(n-2) = 2 * 2 * ... * 2
There is a total of n twos, hence 2^n base-case calls; the total number of calls is less than twice that, so the whole thing is O(2^n).
The running time T(n) of this function can easily be shown to satisfy T(n) = c + 2*T(n-1). The recurrence
T(0) = 0
T(n) = c + 2*T(n-1)
has the solution c*(2^n - 1), which is O(2^n).
Now, if you take the input size of your function to be m = lg n, as might be appropriate in this scenario (the number of bits needed to represent n, the true input size), then the running time is 2^(2^m): the algorithm is exponential even relative to that much smaller measure.
I have the equation:
C = A^b + (2*A)^b + (4*A)^b.
Where C and A are known, but b is unknown. How to find b?
All numbers are 8 bit bytes. Is there any possible method much faster than brute-force?
Does the + sign indicate addition on bytes and * multiplication, with overflowing bits discarded? If so, I think the answer is
b = C ^ (A + 2 * A + 4* A)
How to reach that conclusion:
C = A^b + (2*A)^b + (4*A)^b
hence
C^b = A^b^b + (2*A)^b^b + (4*A)^b^b = A + 2*A + 4*A
then
C^C^b = b = C^(A + 2*A + 4*A)
EDIT Just to make sure : This answer is not correct. Shame on me. I'll have to think more about it.
I'm taking the same assumptions: + and * are addition and multiplication with overflow ignored.
Look-up Table
This would probably be the fastest solution: precompute the results and store them in a look-up table. It would require 2^16 bytes of memory, or 64 kB.
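A sketch of how such a table could be built (my own code: it indexes by (a, c) and stores a matching b; any (a, c) pair that never occurs simply keeps the initial 0):

#include <stdint.h>

static uint8_t table[256][256];   /* table[a][c] holds a b with c == (a^b) + ((2a)^b) + ((4a)^b) mod 256 */

static void BuildTable(void) {
    for (int a = 0; a < 256; a++) {
        for (int b = 0; b < 256; b++) {
            uint8_t c = (uint8_t)((a ^ b) + ((2 * a) ^ b) + ((4 * a) ^ b));
            table[a][c] = (uint8_t)b;
        }
    }
}

After BuildTable() runs once, every query is a single array lookup.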
Guess, Check, Refine Method
Presented in C (using unsigned char as the byte type; note the explicit parentheses, since ^ binds more loosely than + in C):
typedef unsigned char byte;

byte Solve(byte a, byte c) {
    byte guess = 0, lastGuess = 0, result = 0, lastResult = 0;
    do {
        guess = lastGuess ^ lastResult ^ c;   // see explanation below
        result = (a ^ guess) + ((2 * a) ^ guess) + ((4 * a) ^ guess);
        lastGuess = guess;
        lastResult = result;
    } while (result != c);
    return guess;
}
The idea of this algorithm is that it makes a guess at what b is, then plugs it into the formula for a tentative result, and checks it against c. Whatever bits in the guess caused the result to differ from c are changed. This corresponds to the XOR of the last guess, the last result, and c (if that statement is a bit of a jump, I encourage you to draw a truth table rather than just take my word for it!).
Explanation
It works because changing a bit of the guess can only affect that bit and the more significant bits of the result, never the lower bits (just as with pen-and-paper addition, carries only propagate to the left). So in the worst case the algorithm takes 2 guesses to get the least significant bit correct, another guess for the 2nd least significant bit, another for the 3rd, and so on, for a maximum of 9 guesses given any combination of a and c.
Here's an example trace from my test program:
a: 00001100
c: 01100111
Guess: 01100111
Result: 01000001
Guess: 01000001
Result: 00010111
Guess: 00110001
Result: 01100111
b: 00110001
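For completeness, here is an exhaustive check (my own test harness, assuming the Solve() and byte typedef above): for every (a, b) pair it builds c from the formula and confirms that the returned guess reproduces c.

#include <stdio.h>

int main(void) {
    int failures = 0;
    for (int a = 0; a < 256; a++) {
        for (int b = 0; b < 256; b++) {
            byte c = (byte)((a ^ b) + ((2 * a) ^ b) + ((4 * a) ^ b));
            byte g = Solve((byte)a, c);
            byte check = (byte)((a ^ g) + ((2 * a) ^ g) + ((4 * a) ^ g));
            if (check != c)
                failures++;
        }
    }
    printf("failures: %d\n", failures);   /* expected to print 0 */
    return 0;
}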
I need to programmatically solve a system of linear equations in C, Objective C, or (if needed) C++.
Here's an example of the equations:
-44.3940 = a * 50.0 + b * 37.0 + tx
-45.3049 = a * 43.0 + b * 39.0 + tx
-44.9594 = a * 52.0 + b * 41.0 + tx
From this, I'd like to get the best approximation for a, b, and tx.
Cramer's Rule and Gaussian Elimination are two good, general-purpose algorithms (also see Simultaneous Linear Equations). If you're looking for code, check out GiNaC, Maxima, and SymbolicC++ (depending on your licensing requirements, of course).
EDIT: I know you're working in C land, but I also have to put in a good word for SymPy (a computer algebra system in Python). You can learn a lot from its algorithms (if you can read a bit of python). Also, it's under the new BSD license, while most of the free math packages are GPL.
You can solve this with a program exactly the same way you solve it by hand (with multiplication and subtraction, then feeding results back into the equations). This is pretty standard secondary-school-level mathematics.
-44.3940 = 50a + 37b + c (A)
-45.3049 = 43a + 39b + c (B)
-44.9594 = 52a + 41b + c (C)
(A-B): 0.9109 = 7a - 2b (D)
(B-C): -0.3455 = -9a - 2b (E)
(D-E): 1.2564 = 16a (F)
(F/16): a = 0.078525 (G)
Feed G into D:
0.9109 = 7a - 2b
=> 0.9109 = 0.549675 - 2b (substitute a)
=> 0.361225 = -2b (subtract 0.549675 from both sides)
=> -0.1806125 = b (divide both sides by -2) (H)
Feed H/G into A:
-44.3940 = 50a + 37b + c
=> -44.3940 = 3.92625 - 6.6826625 + c (substitute a/b)
=> -41.6375875 = c (subtract 3.92625 - 6.6826625 from both sides)
So you end up with:
a = 0.0785250
b = -0.1806125
c = -41.6375875
If you plug these values back into A, B and C, you'll find they're correct.
The trick is to use a simple 3x4 matrix (three equations, each with three coefficients plus a result), which reduces in turn to a 2x3 matrix, then a 1x2 one which is simply "a = n", n being an actual number. Once you have that, you feed it into the next matrix up to get another value, then those two values into the next matrix up until you've solved all variables.
Provided you have N distinct equations, you can always solve for N variables. I say distinct because these two are not:
7a + 2b = 50
14a + 4b = 100
They are the same equation multiplied by two so you cannot get a solution from them - multiplying the first by two then subtracting leaves you with the true but useless statement:
0 = 0 + 0
By way of example, here's some C code that works out the simultaneous equations that you placed in your question. First some necessary types, variables, a support function for printing out an equation, and the start of main:
#include <stdio.h>

typedef struct { double r, a, b, c; } tEquation;

tEquation equ1[] = {
    { -44.3940, 50, 37, 1 },    // -44.3940 = 50a + 37b + c (A)
    { -45.3049, 43, 39, 1 },    // -45.3049 = 43a + 39b + c (B)
    { -44.9594, 52, 41, 1 },    // -44.9594 = 52a + 41b + c (C)
};
tEquation equ2[2], equ3[1];

static void dumpEqu (char *desc, tEquation *e, char *post) {
    printf ("%10s: %12.8lf = %12.8lfa + %12.8lfb + %12.8lfc (%s)\n",
            desc, e->r, e->a, e->b, e->c, post);
}

int main (void) {
    double a, b, c;
Next, the reduction of the three equations with three unknowns to two equations with two unknowns:
    // First step, populate equ2 based on removing c from equ.
    dumpEqu (">", &(equ1[0]), "A");
    dumpEqu (">", &(equ1[1]), "B");
    dumpEqu (">", &(equ1[2]), "C");
    puts ("");

    // A - B
    equ2[0].r = equ1[0].r * equ1[1].c - equ1[1].r * equ1[0].c;
    equ2[0].a = equ1[0].a * equ1[1].c - equ1[1].a * equ1[0].c;
    equ2[0].b = equ1[0].b * equ1[1].c - equ1[1].b * equ1[0].c;
    equ2[0].c = 0;

    // B - C
    equ2[1].r = equ1[1].r * equ1[2].c - equ1[2].r * equ1[1].c;
    equ2[1].a = equ1[1].a * equ1[2].c - equ1[2].a * equ1[1].c;
    equ2[1].b = equ1[1].b * equ1[2].c - equ1[2].b * equ1[1].c;
    equ2[1].c = 0;

    dumpEqu ("A-B", &(equ2[0]), "D");
    dumpEqu ("B-C", &(equ2[1]), "E");
    puts ("");
Next, the reduction of the two equations with two unknowns to one equation with one unknown:
    // Next step, populate equ3 based on removing b from equ2.
    // D - E
    equ3[0].r = equ2[0].r * equ2[1].b - equ2[1].r * equ2[0].b;
    equ3[0].a = equ2[0].a * equ2[1].b - equ2[1].a * equ2[0].b;
    equ3[0].b = 0;
    equ3[0].c = 0;

    dumpEqu ("D-E", &(equ3[0]), "F");
    puts ("");
Now that we have a formula of the type number1 = unknown * number2, we can simply work out the unknown value with unknown <- number1 / number2. Then, once you've figured that value out, substitute it into one of the equations with two unknowns and work out the second value. Then substitute both those (now-known) unknowns into one of the original equations and you now have the values for all three unknowns:
    // Finally, substitute values back into equations.
    a = equ3[0].r / equ3[0].a;
    printf ("From (F ), a = %12.8lf (G)\n", a);

    b = (equ2[0].r - equ2[0].a * a) / equ2[0].b;
    printf ("From (D,G ), b = %12.8lf (H)\n", b);

    c = (equ1[0].r - equ1[0].a * a - equ1[0].b * b) / equ1[0].c;
    printf ("From (A,G,H), c = %12.8lf (I)\n", c);

    return 0;
}
The output of that code matches the earlier calculations in this answer:
>: -44.39400000 = 50.00000000a + 37.00000000b + 1.00000000c (A)
>: -45.30490000 = 43.00000000a + 39.00000000b + 1.00000000c (B)
>: -44.95940000 = 52.00000000a + 41.00000000b + 1.00000000c (C)
A-B: 0.91090000 = 7.00000000a + -2.00000000b + 0.00000000c (D)
B-C: -0.34550000 = -9.00000000a + -2.00000000b + 0.00000000c (E)
D-E: -2.51280000 = -32.00000000a + 0.00000000b + 0.00000000c (F)
From (F ), a = 0.07852500 (G)
From (D,G ), b = -0.18061250 (H)
From (A,G,H), c = -41.63758750 (I)
Take a look at the Microsoft Solver Foundation.
With it you could write code like this:
SolverContext context = SolverContext.GetContext();
Model model = context.CreateModel();
Decision a = new Decision(Domain.Real, "a");
Decision b = new Decision(Domain.Real, "b");
Decision c = new Decision(Domain.Real, "c");
model.AddDecisions(a,b,c);
model.AddConstraint("eqA", -44.3940 == 50*a + 37*b + c);
model.AddConstraint("eqB", -45.3049 == 43*a + 39*b + c);
model.AddConstraint("eqC", -44.9594 == 52*a + 41*b + c);
Solution solution = context.Solve();
string results = solution.GetReport().ToString();
Console.WriteLine(results);
Here is the output:
===Solver Foundation Service Report===
Datetime: 04/20/2009 23:29:55
Model Name: Default
Capabilities requested: LP
Solve Time (ms): 1027
Total Time (ms): 1414
Solve Completion Status: Optimal
Solver Selected: Microsoft.SolverFoundation.Solvers.SimplexSolver
Directives:
Microsoft.SolverFoundation.Services.Directive
Algorithm: Primal
Arithmetic: Hybrid
Pricing (exact): Default
Pricing (double): SteepestEdge
Basis: Slack
Pivot Count: 3
===Solution Details===
Goals:
Decisions:
a: 0.0785250000000004
b: -0.180612500000001
c: -41.6375875
For a 3x3 system of linear equations I guess it would be okay to roll your own algorithm.
However, you might have to worry about accuracy, division by zero or by really small numbers, and what to do about systems with infinitely many solutions. My suggestion is to go with a standard numerical linear algebra package such as LAPACK.
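For example, with LAPACK's C interface (LAPACKE), solving the 3x3 system from the question could look roughly like this (a sketch; the header name and link flags such as -llapacke -llapack depend on your installation):

#include <stdio.h>
#include <lapacke.h>

int main(void) {
    double A[9] = {                   /* row-major coefficient matrix */
        50, 37, 1,
        43, 39, 1,
        52, 41, 1,
    };
    double y[3] = { -44.3940, -45.3049, -44.9594 };   /* right-hand side, overwritten with the solution */
    lapack_int ipiv[3];

    lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 3, 1, A, 3, ipiv, y, 1);
    if (info != 0) {
        printf("dgesv failed: %d\n", (int)info);
        return 1;
    }
    printf("a = %.7f, b = %.7f, tx = %.7f\n", y[0], y[1], y[2]);
    return 0;
}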
Are you looking for a software package that will do the work, or do you actually want to do the matrix operations yourself, step by step?
For the first, a coworker of mine just used Ocaml GLPK. It is just a wrapper for GLPK, but it removes a lot of the setup steps. It looks like you're going to have to stick with GLPK in C, though. For the latter, thanks to delicious for saving an old article I used to learn LP a while back (PDF). If you need specific help setting things up further, let us know and I'm sure someone will wander back in and help, but I think it's fairly straightforward from here. Good luck!
Template Numerical Toolkit from NIST has tools for doing that.
One of the more reliable ways is to use a QR Decomposition.
Here's an example of a wrapper so that I can call "GetInverse(A, InvA)" in my code and it will put the inverse into InvA.
void GetInverse(const Array2D<double>& A, Array2D<double>& invA)
{
    QR<double> qr(A);
    invA = qr.solve(I);   // I is assumed to be an identity Array2D of the same size as A (not shown here)
}
Array2D is defined in the library.
In terms of run-time efficiency, others have answered better than I. If you will always have the same number of equations as variables, I like Cramer's rule as it's easy to implement. Just write a function to calculate the determinant of a matrix (or use one that's already written, I'm sure you can find one out there), and divide the determinants of two matrices.
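As a concrete illustration, a bare-bones Cramer's rule for the 3x3 system in the question might look like this (my own sketch, with only a crude near-zero check on the determinant):

#include <stdio.h>
#include <math.h>

/* Determinant of a 3x3 matrix by cofactor expansion along the first row. */
static double det3(double m[3][3]) {
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

int main(void) {
    double A[3][3] = { { 50, 37, 1 }, { 43, 39, 1 }, { 52, 41, 1 } };
    double y[3]    = { -44.3940, -45.3049, -44.9594 };
    double d = det3(A);

    if (fabs(d) < 1e-12) {
        puts("system is singular (or nearly so)");
        return 1;
    }

    double x[3];
    for (int col = 0; col < 3; col++) {
        double Ac[3][3];
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                Ac[r][c] = (c == col) ? y[r] : A[r][c];   /* replace column "col" with the right-hand side */
        x[col] = det3(Ac) / d;   /* Cramer's rule: ratio of determinants */
    }
    printf("a = %.7f, b = %.7f, tx = %.7f\n", x[0], x[1], x[2]);
    return 0;
}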
Personally, I'm partial to the algorithms of Numerical Recipes. (I'm fond of the C++ edition.)
This book will teach you why the algorithms work, plus show you some pretty-well debugged implementations of those algorithms.
Of course, you could just blindly use CLAPACK (I've used it with great success), but I would first hand-type a Gaussian Elimination algorithm to at least have a faint idea of the kind of work that has gone into making these algorithms stable.
Later, if you're doing more interesting linear algebra, looking around the source code of Octave will answer a lot of questions.
From the wording of your question, it seems like you have more equations than unknowns and you want to minimize the inconsistencies. This is typically done with linear regression, which minimizes the sum of the squares of the inconsistencies. Depending on the size of the data, you can do this in a spreadsheet or in a statistical package. R is a high-quality, free package that does linear regression, among a lot of other things. There is a lot to linear regression (and a lot of gotchas), but it's straightforward to do for simple cases. Here's an R example using your data. Note that the "tx" is the intercept of your model.
> y <- c(-44.394, -45.3049, -44.9594)
> a <- c(50.0, 43.0, 52.0)
> b <- c(37.0, 39.0, 41.0)
> regression = lm(y ~ a + b)
> regression
Call:
lm(formula = y ~ a + b)
Coefficients:
(Intercept) a b
-41.63759 0.07852 -0.18061
function x = LinSolve(A,y)
%
% Recursive Solution of Linear System Ax=y
% matlab equivalent: x = A\y
% x = n x 1
% A = n x n
% y = n x 1
% Uses stack space extensively. Not efficient.
% C allows recursion, so convert it into C.
% ----------------------------------------------
n=length(y);
x=zeros(n,1);
if(n>1)
x(1:n-1,1) = LinSolve( A(1:n-1,1:n-1) - (A(1:n-1,n)*A(n,1:n-1))./A(n,n) , ...
y(1:n-1,1) - A(1:n-1,n).*(y(n,1)/A(n,n)));
x(n,1) = (y(n,1) - A(n,1:n-1)*x(1:n-1,1))./A(n,n);
else
x = y(1,1) / A(1,1);
end
For general cases, you could use Python along with NumPy for Gaussian elimination, and then plug the values back in to get the remaining unknowns.