I tried to implement the Bessel function using that formula; this is the code:
function result=Bessel(num);
if num==0
result=bessel(0,1);
elseif num==1
result=bessel(1,1);
else
result=2*(num-1)*Bessel(num-1)-Bessel(num-2);
end;
But if I use MATLAB's bessel function to compare it with this one, I get wildly different values.
For example, if I type Bessel(20) it gives me 3.1689e+005 as the result, whereas bessel(20,1) gives 3.8735e-025, a totally different result.
Such recurrence relations are nice in mathematics, but they are numerically unstable when implemented with limited-precision floating-point numbers.
Consider the following comparison:
x = 0:20;
y1 = arrayfun(@(n) besselj(n,1), x);  % builtin function
y2 = arrayfun(@Bessel, x);            % your function
semilogy(x,y1, x,y2), grid on
legend('besselj','Bessel')
title('J_\nu(z)'), xlabel('\nu'), ylabel('log scale')
So you can see how the computed values start to differ significantly after 9.
According to MATLAB:
BESSELJ uses a MEX interface to a Fortran library by D. E. Amos.
and gives the following as references for their implementation:
D. E. Amos, "A subroutine package for Bessel functions of a complex
argument and nonnegative order", Sandia National Laboratory Report,
SAND85-1018, May, 1985.
D. E. Amos, "A portable package for Bessel functions of a complex
argument and nonnegative order", Trans. Math. Software, 1986.
The forward recurrence relation you are using is not stable. To see why, consider that the values of BesselJ(n,x) become smaller and smaller, by roughly a factor of 1/(2n) at each step. You can see this by looking at the first term of the Taylor series for J: J_n(x) ≈ (x/2)^n / n! for small x.
So, what you're doing is subtracting a large number from a multiple of a somewhat smaller number to get an even smaller number. Numerically, that's not going to work well.
Look at it this way. We know the result is of the order of 10^-25. You start out with numbers that are of the order of 1. So in order to get even one accurate digit out of this, we have to know the first two numbers with at least 25 digits precision. We clearly don't, and the recurrence actually diverges.
Using the same recurrence relation to go backwards, from high orders to low orders, is stable. When you start with correct values for J(20,1) and J(19,1), you can calculate all orders down to 0 with full accuracy as well. Why does this work? Because now the numbers are getting larger in each step. You're subtracting a very small number from an exact multiple of a larger number to get an even larger number.
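To make that concrete, here is a minimal C sketch of the backward (Miller-type) recurrence. The function name besselj_backward, the starting order M = n + 30, and the tiny seed value are my own illustrative choices, not what MATLAB's besselj does; the normalization identity J_0(x) + 2*(J_2(x) + J_4(x) + ...) = 1 that it relies on is standard.

#include <stdio.h>

/* Backward-recurrence sketch: recur downward from an order well above n,
   then rescale using J_0(x) + 2*sum_{k>=1} J_{2k}(x) = 1. */
double besselj_backward(int n, double x)
{
    int M = n + 30;                  /* rough starting order, enough for small x   */
    double f[256];                   /* assumes n + 32 <= 256                      */
    f[M + 1] = 0.0;
    f[M]     = 1e-30;                /* arbitrary tiny seed                        */
    for (int k = M; k >= 1; --k)     /* stable direction: the values grow downward */
        f[k - 1] = (2.0 * k / x) * f[k] - f[k + 1];
    double norm = f[0];
    for (int k = 2; k <= M; k += 2)
        norm += 2.0 * f[k];          /* J_0 + 2*(J_2 + J_4 + ...)                  */
    return f[n] / norm;              /* rescale so the identity equals 1           */
}

int main(void)
{
    printf("%g\n", besselj_backward(20, 1.0));   /* ~3.8735e-25, like besselj(20,1) */
    return 0;
}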
You can adapt the code below, which is for the spherical Bessel function. It is well tested and works over the full argument and order range. Sorry that it is in C#.
// Spherical Bessel function j_n(z). Assumes a Complex type with real/imag
// fields, the usual operator overloads (including implicit conversion from
// double), and sin/cos/abs helpers for complex arguments (not shown here).
public static Complex bessel(int n, Complex z)
{
    if (n == 0) return sin(z) / z;                      // j_0(z) = sin(z)/z
    if (n == 1) return sin(z) / (z * z) - cos(z) / z;   // j_1(z)

    if (n <= System.Math.Abs(z.real))
    {
        // Forward recurrence is safe here because the order does not exceed |Re z|.
        Complex h0 = bessel(0, z);
        Complex h1 = bessel(1, z);
        Complex ret = 0;
        for (int i = 2; i <= n; i++)
        {
            ret = (2 * i - 1) / z * h1 - h0;
            h0 = h1;
            h1 = ret;
            if (double.IsInfinity(ret.real) || double.IsInfinity(ret.imag)) return double.PositiveInfinity;
        }
        return ret;
    }
    else
    {
        // Backward (downward) recurrence on the ratios j_v / j_(v-1): pick a
        // starting order v high enough for ~1e-16 accuracy, recur down to n,
        // then normalize with j_0(z) = sin(z)/z.
        double u = 2.0 * abs(z.real) / (2 * n + 1);
        double a = 0.1;
        double b = 0.175;
        int v = n - (int)System.Math.Ceiling((System.Math.Log(0.5e-16 * (a + b * u * (2 - System.Math.Pow(u, 2)) / (1 - System.Math.Pow(u, 2))), 2)));
        Complex ret = 0;
        while (v > n - 1)
        {
            ret = z / (2 * v + 1.0 - z * ret);   // continued-fraction step down to order n
            v = v - 1;
        }
        Complex jnM1 = ret;                      // ratio j_n / j_(n-1)
        while (v > 0)
        {
            ret = z / (2 * v + 1.0 - z * ret);
            jnM1 = jnM1 * ret;                   // accumulate product of ratios -> j_n / j_0
            v = v - 1;
        }
        return jnM1 * sin(z) / z;                // multiply by j_0(z) = sin(z)/z
    }
}
Is it possible to divide an unsigned integer by 10 using only bit shifts, addition, subtraction and maybe multiplication? I'm using a processor with very limited resources and a slow divide.
Editor's note: this is not actually what compilers do, and gives the wrong answer for large positive integers ending with 9, starting with div10(1073741829) = 107374183 not 107374182. It is exact for smaller inputs, though, which may be sufficient for some uses.
Compilers (including MSVC) do use fixed-point multiplicative inverses for constant divisors, but they use a different magic constant and shift on the high-half result to get an exact result for all possible inputs, matching what the C abstract machine requires. See Granlund & Montgomery's paper on the algorithm.
See Why does GCC use multiplication by a strange number in implementing integer division? for examples of the actual x86 asm gcc, clang, MSVC, ICC, and other modern compilers make.
This is a fast approximation that's inexact for large inputs.
It's even faster than the exact division via multiply + right-shift that compilers use.
You can use the high half of a multiply result for divisions by small integral constants. Assume a 32-bit machine (code can be adjusted accordingly):
int32_t div10(int32_t dividend)
{
int64_t invDivisor = 0x1999999A;  // ~= (1/10) * 2^32, rounded up
return (int32_t) ((invDivisor * dividend) >> 32);
}
What's going on here is that we're multiplying by a close approximation of 1/10 * 2^32 and then removing the 2^32 by taking the high half of the product. This approach can be adapted to different divisors and different bit widths.
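As an illustration of that adaptation, here is a small, hedged C sketch (my own example, not part of the original answer) that computes an approximate reciprocal constant for an arbitrary small divisor in the same spirit as 0x1999999A; like the code above, it is approximate rather than the exact form compilers emit:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t d = 10;                                   /* try any small divisor */
    uint64_t magic = ((1ULL << 32) + d - 1) / d;       /* ceil(2^32 / d)        */
    printf("magic for /%u = 0x%llX\n", (unsigned)d, (unsigned long long)magic);

    uint32_t n = 12345;
    uint32_t q = (uint32_t)(((uint64_t)n * magic) >> 32);  /* approximate n / d */
    printf("%u / %u ~= %u\n", (unsigned)n, (unsigned)d, (unsigned)q);
    return 0;
}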
This works great for the ia32 architecture, since its IMUL instruction will put the 64-bit product into edx:eax, and the edx value will be the wanted value. Viz (assuming dividend is passed in eax and quotient returned in eax)
div10 proc
    mov edx,1999999Ah   ; load 1/10 * 2^32
    imul eax            ; edx:eax = dividend * (1/10 * 2^32)
    mov eax,edx         ; eax = dividend / 10
    ret
div10 endp
Even on a machine with a slow multiply instruction, this will be faster than a software or even hardware divide.
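For contrast with the approximation above, here is a hedged C sketch of the exact multiply-and-shift form that the editor's note alludes to: for 32-bit unsigned division by 10, multiplying by ceil(2^35/10) = 0xCCCCCCCD and shifting the 64-bit product right by 35 is exact for every input. The constant and shift here are the well-known ones for this particular case; deriving them in general is what the Granlund & Montgomery paper covers.

#include <stdint.h>

uint32_t div10_exact(uint32_t n)
{
    /* high 32 bits of n * 0xCCCCCCCD, then >> 3: exact n / 10 for all uint32_t n */
    return (uint32_t)(((uint64_t)n * 0xCCCCCCCDu) >> 35);
}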
Though the answers given so far match the actual question, they do not match the title. So here's a solution heavily inspired by Hacker's Delight that really uses only bit shifts.
unsigned divu10(unsigned n) {
    unsigned q, r;
    q = (n >> 1) + (n >> 2);         /* q ~= n * 0.11 (binary)                    */
    q = q + (q >> 4);                /* q ~= n * 0.110011                         */
    q = q + (q >> 8);                /* q ~= n * 0.1100110011                     */
    q = q + (q >> 16);               /* q ~= n * 0.11001100110011001100           */
    q = q >> 3;                      /* q ~= n / 10, possibly 1 too small         */
    r = n - (((q << 2) + q) << 1);   /* r = n - q * 10                            */
    return q + (r > 9);              /* fix up the estimate if the remainder >= 10 */
}
I think that this is the best solution for architectures that lack a multiply instruction.
Of course you can, if you can live with some loss in precision. If you know the value range of your input values, you can come up with a bit shift and a multiplication which is exact.
Here are some examples of how you can divide by 10, 60, and so on, as described in this blog about formatting time the fastest way possible.
temp = (ms * 205) >> 11; // 205/2048 is nearly the same as /10
To expand on Alois's answer a bit, we can extend the suggested y = (x * 205) >> 11 to a few more multiplier/shift pairs:
y = (ms * 1) >> 3 // first error 8
y = (ms * 2) >> 4 // 8
y = (ms * 4) >> 5 // 8
y = (ms * 7) >> 6 // 19
y = (ms * 13) >> 7 // 69
y = (ms * 26) >> 8 // 69
y = (ms * 52) >> 9 // 69
y = (ms * 103) >> 10 // 179
y = (ms * 205) >> 11 // 1029
y = (ms * 410) >> 12 // 1029
y = (ms * 820) >> 13 // 1029
y = (ms * 1639) >> 14 // 2739
y = (ms * 3277) >> 15 // 16389
y = (ms * 6554) >> 16 // 16389
y = (ms * 13108) >> 17 // 16389
y = (ms * 26215) >> 18 // 43699
y = (ms * 52429) >> 19 // 262149
y = (ms * 104858) >> 20 // 262149
y = (ms * 209716) >> 21 // 262149
y = (ms * 419431) >> 22 // 699059
y = (ms * 838861) >> 23 // 4194309
y = (ms * 1677722) >> 24 // 4194309
y = (ms * 3355444) >> 25 // 4194309
y = (ms * 6710887) >> 26 // 11184819
y = (ms * 13421773) >> 27 // 67108869
Each line is a single, independent calculation, and you'll see your first "error"/incorrect result at the value shown in the comment. You're generally better off taking the smallest shift for a given error value, as this minimises the extra bits needed to store the intermediate value in the calculation; e.g. (x * 13) >> 7 is "better" than (x * 52) >> 9 as it needs two fewer bits of overhead, while both start to give wrong answers above 68.
If you want to calculate more of these, the following (Python) code can be used:
def mul_from_shift(shift):
mid = 2**shift + 5.
return int(round(mid / 10.))
And I did the obvious thing for calculating when this approximation starts to go wrong:
def first_err(mul, shift):
i = 1
while True:
y = (i * mul) >> shift
if y != i // 10:
return i
i += 1
(note that // is used for "integer" division; for the non-negative values used here it truncates, i.e. rounds towards zero)
the reason for the "3/1" pattern in errors (i.e. 8 repeats 3 times followed by 9) seems to be due to the change in bases, i.e. log2(10) is ~3.32. if we plot the errors we get the following:
where the relative error is given by: mul_from_shift(shift) / (1<<shift) - 0.1
Considering Kuba Ober’s response, there is another one in the same vein.
It uses iterative approximation of the result, but I wouldn't expect any surprising performance.
Let's say we have to find x where x = v / 10.
We'll use the inverse operation v = x * 10, because it has the nice property that when x = a + b, then x * 10 = a * 10 + b * 10.
Let's use x as the variable holding the best approximation of the result so far. When the search ends, x will hold the result. We'll set each bit b of x from the most significant to the least significant, one by one, and compare (x + b) * 10 with v. If it is smaller than or equal to v, then the bit b is set in x. To test the next bit, we simply shift b one position to the right (divide by two).
We can avoid the multiplication by 10 by holding x * 10 and b * 10 in other variables.
This yields the following algorithm to divide v by 10.
uint16_t x = 0, x10 = 0, b = 0x1000, b10 = 0xA000;
while (b != 0) {
uint16_t t = x10 + b10;
if (t <= v) {
x10 = t;
x |= b;
}
b10 >>= 1;
b >>= 1;
}
// x = v / 10
Edit: to get the algorithm of Kuba Ober, which avoids the need for the variable x10, we can subtract b10 from v instead. In this case x10 isn't needed anymore. The algorithm becomes
uint16_t x = 0, b = 0x1000, b10 = 0xA000;
while (b != 0) {
if (b10 <= v) {
v -= b10;
x |= b;
}
b10 >>= 1;
b >>= 1;
}
// x = v / 10
The loop may be unrolled, and the different values of b and b10 may be precomputed as constants.
On architectures that can only shift one place at a time, a series of explicit comparisons against decreasing powers of two multiplied by 10 might work better than the solution from Hacker's Delight. Assuming a 16-bit dividend:
uint16_t div10(uint16_t dividend) {
uint16_t quotient = 0;
#define div10_step(n) \
do { if (dividend >= (n*10)) { quotient += n; dividend -= n*10; } } while (0)
div10_step(0x1000);
div10_step(0x0800);
div10_step(0x0400);
div10_step(0x0200);
div10_step(0x0100);
div10_step(0x0080);
div10_step(0x0040);
div10_step(0x0020);
div10_step(0x0010);
div10_step(0x0008);
div10_step(0x0004);
div10_step(0x0002);
div10_step(0x0001);
#undef div10_step
if (dividend >= 5) ++quotient; // round the result (optional)
return quotient;
}
Well, division is repeated subtraction, so yes. Shift right by 1 (divide by 2). Now subtract 5 from the result, counting the number of times you do the subtraction until the value is less than 5. The result is the number of subtractions you did. Oh, and dividing is probably going to be faster.
A hybrid strategy of shift right then divide by 5 using the normal division might get you a performance improvement if the logic in the divider doesn't already do this for you.
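For what it's worth, here is a tiny C sketch of the "halve, then count fives" idea described above (the function name is mine, and it is obviously slow for large inputs); it only illustrates that floor(n/10) equals floor((n/2)/5):

unsigned div10_by_subtraction(unsigned n)
{
    unsigned half = n >> 1;     /* n / 2 */
    unsigned count = 0;
    while (half >= 5) {         /* count how many 5s fit into n / 2 */
        half -= 5;
        ++count;
    }
    return count;               /* floor(n / 10) */
}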
I've designed a new method in AVR assembly, with lsr/ror and sub/sbc only. It divides by 8, then subtracts the number divided by 64 and 128, then subtracts the 1,024th and the 2,048th, and so on. It works very reliably (including exact rounding) and quickly (370 microseconds at 1 MHz).
The source code is here for 16-bit-numbers:
http://www.avr-asm-tutorial.net/avr_en/beginner/DIV10/div10_16rd.asm
The page that comments this source code is here:
http://www.avr-asm-tutorial.net/avr_en/beginner/DIV10/DIV10.html
I hope that it helps, even though the question is ten years old.
brgs, gsc
The code from elemakil's comments can be found here: https://doc.lagout.org/security/Hackers%20Delight.pdf
page 233, "Unsigned divide by 10 [and 11]".
In Linux there is an srand() function: you supply a seed, and it guarantees the same sequence of pseudorandom numbers in subsequent calls to the random() function.
Let's say I want to store this pseudorandom sequence by remembering this seed value.
Furthermore, let's say I want the 100 thousandth number in this pseudorandom sequence later.
One way would be to supply the seed number using srand(), then call random() 100 thousand times, and remember this number.
Is there a better way of skipping the other 99,999 numbers in the pseudorandom list and directly getting the 100 thousandth number in the list?
thanks,
m
I'm not sure there's a defined standard for implementing rand on any platform; however, picking this one from the GNU Scientific Library:
— Generator: gsl_rng_rand
This is the BSD rand generator. Its sequence is
x_{n+1} = (a x_n + c) mod m
with a = 1103515245, c = 12345 and m = 2^31. The seed specifies the initial value, x_1. The period of this generator is 2^31, and it uses 1 word of storage per generator.
So to "know" xn requires you to know xn-1. Unless there's some obvious pattern I'm missing, you can't jump to a value without computing all the values before it. (But that's not necessarily the case for every rand implementation.)
If we start with x_1...
x_2 = (a * x_1 + c) % m
x_3 = (a * ((a * x_1 + c) % m) + c) % m
x_4 = (a * ((a * ((a * x_1 + c) % m) + c) % m) + c) % m
x_5 = (a * ((a * ((a * ((a * x_1 + c) % m) + c) % m) + c) % m) + c) % m
It gets out of hand pretty quickly. Is that function easily reducible? I don't think it is.
(There's a statistics phrase for a series where x_n depends on x_{n-1} -- can anyone remind me what that word is?)
If they're available on your system, you can use rand_r instead of rand & srand, or use initstate and setstate with random. rand_r takes an unsigned * as an argument, where it stores its state. After calling rand_r numerous times, save the value of this unsigned integer and use it as the starting value the next time.
For random(), use initstate rather than srandom. Save the contents of the state buffer for any state that you want to restore. To restore a state, fill a buffer with the saved contents and call setstate. If a buffer is already the current state buffer, you can skip the call to setstate.
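As a minimal sketch of the rand_r half of this (my own example, assuming a POSIX system): the whole generator state is the single unsigned int you pass in, so "saving the state" is just saving that integer.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned int state = 12345u;        /* the seed is the entire state */
    for (int i = 0; i < 5; ++i)
        rand_r(&state);                 /* advance five steps */

    unsigned int saved = state;         /* remember where we are */
    int next = rand_r(&state);          /* the 6th number */

    state = saved;                      /* restore the saved state */
    int next_again = rand_r(&state);    /* the same 6th number again */

    printf("%d %d\n", next, next_again);
    return 0;
}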
This is developed from @Mark's answer, using the BSD rand() function.
rand1() computes the nth random number, starting at seed, by stepping through n times.
rand2() computes the same using a shortcut. It can step up to 2^24-1 steps in one go. Internally it requires only 24 steps.
If the BSD random number generator is good enough for you then this will suffice:
#include <stdio.h>
const unsigned int m = (1U<<31)-1;  /* arithmetic is mod 2^31 */
/* a[i] and b[i] are the multiplier and increment for taking 2^i steps of the
   BSD LCG x' = (1103515245*x + 12345) mod 2^31 in a single update. */
unsigned int a[24] = {
1103515245, 1117952617, 1845919505, 1339940641, 1601471041,
187569281 , 1979738369, 387043841 , 1046979585, 1574914049,
1073647617, 285024257 , 1710899201, 1542750209, 2011758593,
1876033537, 1604583425, 1061683201, 2123366401, 2099249153,
2051014657, 1954545665, 1761607681, 1375731713
};
unsigned int b[24] = {
12345, 1406932606, 1449466924, 1293799192, 1695770928, 1680572000,
422948032, 910563712, 519516928, 530212352, 98880512, 646551552,
940781568, 472276992, 1749860352, 278495232, 556990464, 1113980928,
80478208, 160956416, 321912832, 643825664, 1287651328, 427819008
};
unsigned int rand1(unsigned int seed, unsigned int n)
{
int i;
for (i = 0; i<n; ++i)
{
seed = (1103515245U*seed+12345U) & m;
}
return seed;
}
unsigned int rand2(unsigned int seed, unsigned int n)
{
int i;
for (i = 0; i<24; ++i)
{
if (n & (1<<i))
{
seed = (a[i]*seed+b[i]) & m;
}
}
return seed;
}
int main()
{
printf("%u\n", rand1 (10101, 100000));
printf("%u\n", rand2 (10101, 100000));
}
It's not hard to adapt to any linear congruential generator. I computed the tables in a language with a proper integer type (Haskell), but I could have computed them another way in C using only a few lines more code.
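For reference, here is a hedged C version of that table computation (my own sketch, not the author's Haskell). Composing the LCG x' = (A*x + C) mod 2^31 with itself doubles the step size, with a_{i+1} = a_i^2 and b_{i+1} = (a_i + 1) * b_i, all mod 2^31; unsigned 32-bit arithmetic plus the mask gives the mod-2^31 reduction for free.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint32_t mask = (1u << 31) - 1;    /* reduce mod 2^31 */
    uint32_t a = 1103515245u, b = 12345u;    /* coefficients for a single step */
    for (int i = 0; i < 24; ++i) {
        printf("a[%2d] = %10u   b[%2d] = %10u\n", i, (unsigned)a, i, (unsigned)b);
        b = ((a + 1u) * b) & mask;           /* increment for 2^(i+1) steps   */
        a = (a * a) & mask;                  /* multiplier for 2^(i+1) steps  */
    }
    return 0;
}

The first two rows should reproduce 1103515245/12345 and 1117952617/1406932606 from the tables above.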
If you always want the 100,000th item, just store it for later.
Or you could generate the sequence and store it... and query for the particular element by index later.
How do I map numbers, linearly, between a and b to go between c and d?
That is, I want numbers between 2 and 6 to map to numbers between 10 and 20... but I need the generalized case.
My brain is fried.
If your number X falls between A and B, and you would like Y to fall between C and D, you can apply the following linear transform:
Y = (X-A)/(B-A) * (D-C) + C
That should give you what you want, although your question is a little ambiguous, since you could also map the interval in the reverse direction. Just watch out for division by zero and you should be OK.
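As a trivial, hedged C rendering of that formula (the function name map_range is just illustrative, and it does nothing about the B == A case):

double map_range(double x, double a, double b, double c, double d)
{
    /* maps x from [a, b] onto [c, d]; undefined if b == a */
    return (x - a) / (b - a) * (d - c) + c;
}

For the ranges in the question, map_range(4.0, 2.0, 6.0, 10.0, 20.0) gives 15.0.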
Divide to get the ratio between the sizes of the two ranges, then subtract the starting value of your initial range, multiply by the ratio and add the starting value of your second range. In other words,
R = (20 - 10) / (6 - 2)
y = (x - 2) * R + 10
This evenly spreads the numbers from the first range in the second range.
It would be nice to have this functionality in the java.lang.Math class, as this is such a widely required function and is available in other languages.
Here is a simple implementation:
final static double EPSILON = 1e-12;
public static double map(double valueCoord1,
double startCoord1, double endCoord1,
double startCoord2, double endCoord2) {
if (Math.abs(endCoord1 - startCoord1) < EPSILON) {
throw new ArithmeticException("/ 0");
}
double offset = startCoord2;
double ratio = (endCoord2 - startCoord2) / (endCoord1 - startCoord1);
return ratio * (valueCoord1 - startCoord1) + offset;
}
I am putting this code here as a reference for my future self, and maybe it will help someone.
As an aside, this is the same problem as the classic "convert Celsius to Fahrenheit", where you want to map the number range 0 - 100 (C) to 32 - 212 (F).
https://rosettacode.org/wiki/Map_range
[a1, a2] => [b1, b2]
If s is in the range [a1, a2], then the mapped value t in the range [b1, b2] is:
t = b1 + ((s - a1) * (b2 - b1)) / (a2 - a1)
In addition to @PeterAllenWebb's answer, if you would like to reverse the result back, use the following:
reverseX = (B-A)*(Y-C)/(D-C) + A
Each unit interval on the first range takes up (d-c)/(b-a) "space" on the second range.
Pseudo:
var interval = (d-c)/(b-a)
for n = 0 to (b - a)
print c + n*interval
How you handle the rounding is up to you.
If your range is [a, b] and you want to map it to [c, d], where x is the value you want to map, use this formula (linear mapping):
double R = (d - c) / (b - a);
double y = c + (x - a) * R;
return y;
Where X is the number to map from A-B to C-D, and Y is the result:
Take the linear interpolation formula, lerp(a,b,m) = a + (m * (b - a)), and put C and D in place of a and b to get Y = C + (m * (D - C)). Then, in place of m, put (X - A) / (B - A) to get:
Y = C + (((X - A) / (B - A)) * (D - C))
This is an okay map function, but it can be simplified. Take the (D - C) piece and put it inside the dividend:
Y = C + (((X - A) * (D - C)) / (B - A))
This gives us another piece we can simplify, (X - A) * (D - C), which equates to (X*D) - (X*C) - (A*D) + (A*C). Pop that in, and you get:
Y = C + (((X*D) - (X*C) - (A*D) + (A*C)) / (B - A))
The next thing you need to do is fold in the +C. To do that, multiply C by (B - A) to get ((B*C) - (A*C)) and move it into the dividend:
Y = (((X*D) - (X*C) - (A*D) + (A*C) + (B*C) - (A*C)) / (B - A))
This is redundant, containing both a +(A*C) and a -(A*C), which cancel each other out. Remove them, and you get the final result:
Y = ((X*D) - (X*C) - (A*D) + (B*C)) / (B - A)
TL;DR: The standard map function, Y=C+(((X-A)/(B-A))*(D-C)), can be simplified down to Y=((X*D)-(X*C)-(A*D)+(B*C))/(B-A)
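A quick, hedged numerical check (my own snippet) that the simplified form agrees with the standard one for the ranges in the question:

#include <stdio.h>

int main(void)
{
    double A = 2, B = 6, C = 10, D = 20;
    for (double X = 2; X <= 6; X += 1) {
        double standard   = C + (((X - A) / (B - A)) * (D - C));
        double simplified = ((X * D) - (X * C) - (A * D) + (B * C)) / (B - A);
        printf("X=%g  standard=%g  simplified=%g\n", X, standard, simplified);
    }
    return 0;
}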
int srcMin = 2, srcMax = 6;
int tgtMin = 10, tgtMax = 20;
int nb = srcMax - srcMin;
int range = tgtMax - tgtMin;
float rate = (float) range / (float) nb;
println(srcMin + " > " + tgtMin);
float stepF = tgtMin;
for (int i = 1; i < nb; i++)
{
stepF += rate;
println((srcMin + i) + " > " + (int) (stepF + 0.5) + " (" + stepF + ")");
}
println(srcMax + " > " + tgtMax);
With checks on divide by zero, of course.
I need to programmatically solve a system of linear equations in C, Objective C, or (if needed) C++.
Here's an example of the equations:
-44.3940 = a * 50.0 + b * 37.0 + tx
-45.3049 = a * 43.0 + b * 39.0 + tx
-44.9594 = a * 52.0 + b * 41.0 + tx
From this, I'd like to get the best approximation for a, b, and tx.
Cramer's Rule and Gaussian Elimination are two good, general-purpose algorithms (also see Simultaneous Linear Equations). If you're looking for code, check out GiNaC, Maxima, and SymbolicC++ (depending on your licensing requirements, of course).
EDIT: I know you're working in C land, but I also have to put in a good word for SymPy (a computer algebra system in Python). You can learn a lot from its algorithms (if you can read a bit of python). Also, it's under the new BSD license, while most of the free math packages are GPL.
You can solve this with a program exactly the same way you solve it by hand (with multiplication and subtraction, then feeding results back into the equations). This is pretty standard secondary-school-level mathematics.
-44.3940 = 50a + 37b + c (A)
-45.3049 = 43a + 39b + c (B)
-44.9594 = 52a + 41b + c (C)
(A-B): 0.9109 = 7a - 2b (D)
(B-C): -0.3455 = -9a - 2b (E)
(D-E): 1.2564 = 16a (F)
(F/16): a = 0.078525 (G)
Feed G into D:
0.9109 = 7a - 2b
=> 0.9109 = 0.549675 - 2b (substitute a)
=> 0.361225 = -2b (subtract 0.549675 from both sides)
=> -0.1806125 = b (divide both sides by -2) (H)
Feed H/G into A:
-44.3940 = 50a + 37b + c
=> -44.3940 = 3.92625 - 6.6826625 + c (substitute a/b)
=> -41.6375875 = c (subtract 3.92625 - 6.6826625 from both sides)
So you end up with:
a = 0.0785250
b = -0.1806125
c = -41.6375875
If you plug these values back into A, B and C, you'll find they're correct.
The trick is to use a simple 4x3 matrix which reduces in turn to a 3x2 matrix, then a 2x1 which is "a = n", n being an actual number. Once you have that, you feed it into the next matrix up to get another value, then those two values into the next matrix up until you've solved all variables.
Provided you have N distinct equations, you can always solve for N variables. I say distinct because these two are not:
7a + 2b = 50
14a + 4b = 100
They are the same equation multiplied by two so you cannot get a solution from them - multiplying the first by two then subtracting leaves you with the true but useless statement:
0 = 0 + 0
By way of example, here's some C code that works out the simultaneous equations that you placed in your question. First, some necessary types, variables, a support function for printing out an equation, and the start of main:
#include <stdio.h>
typedef struct { double r, a, b, c; } tEquation;
tEquation equ1[] = {
{ -44.3940, 50, 37, 1 }, // -44.3940 = 50a + 37b + c (A)
{ -45.3049, 43, 39, 1 }, // -45.3049 = 43a + 39b + c (B)
{ -44.9594, 52, 41, 1 }, // -44.9594 = 52a + 41b + c (C)
};
tEquation equ2[2], equ3[1];
static void dumpEqu (char *desc, tEquation *e, char *post) {
printf ("%10s: %12.8lf = %12.8lfa + %12.8lfb + %12.8lfc (%s)\n",
desc, e->r, e->a, e->b, e->c, post);
}
int main (void) {
double a, b, c;
Next, the reduction of the three equations with three unknowns to two equations with two unknowns:
// First step, populate equ2 based on removing c from equ.
dumpEqu (">", &(equ1[0]), "A");
dumpEqu (">", &(equ1[1]), "B");
dumpEqu (">", &(equ1[2]), "C");
puts ("");
// A - B
equ2[0].r = equ1[0].r * equ1[1].c - equ1[1].r * equ1[0].c;
equ2[0].a = equ1[0].a * equ1[1].c - equ1[1].a * equ1[0].c;
equ2[0].b = equ1[0].b * equ1[1].c - equ1[1].b * equ1[0].c;
equ2[0].c = 0;
// B - C
equ2[1].r = equ1[1].r * equ1[2].c - equ1[2].r * equ1[1].c;
equ2[1].a = equ1[1].a * equ1[2].c - equ1[2].a * equ1[1].c;
equ2[1].b = equ1[1].b * equ1[2].c - equ1[2].b * equ1[1].c;
equ2[1].c = 0;
dumpEqu ("A-B", &(equ2[0]), "D");
dumpEqu ("B-C", &(equ2[1]), "E");
puts ("");
Next, the reduction of the two equations with two unknowns to one equation with one unknown:
// Next step, populate equ3 based on removing b from equ2.
// D - E
equ3[0].r = equ2[0].r * equ2[1].b - equ2[1].r * equ2[0].b;
equ3[0].a = equ2[0].a * equ2[1].b - equ2[1].a * equ2[0].b;
equ3[0].b = 0;
equ3[0].c = 0;
dumpEqu ("D-E", &(equ3[0]), "F");
puts ("");
Now that we have a formula of the type number1 = unknown * number2, we can simply work out the unknown value with unknown <- number1 / number2. Then, once you've figured that value out, substitute it into one of the equations with two unknowns and work out the second value. Then substitute both those (now-known) unknowns into one of the original equations and you now have the values for all three unknowns:
// Finally, substitute values back into equations.
a = equ3[0].r / equ3[0].a;
printf ("From (F ), a = %12.8lf (G)\n", a);
b = (equ2[0].r - equ2[0].a * a) / equ2[0].b;
printf ("From (D,G ), b = %12.8lf (H)\n", b);
c = (equ1[0].r - equ1[0].a * a - equ1[0].b * b) / equ1[0].c;
printf ("From (A,G,H), c = %12.8lf (I)\n", c);
return 0;
}
The output of that code matches the earlier calculations in this answer:
>: -44.39400000 = 50.00000000a + 37.00000000b + 1.00000000c (A)
>: -45.30490000 = 43.00000000a + 39.00000000b + 1.00000000c (B)
>: -44.95940000 = 52.00000000a + 41.00000000b + 1.00000000c (C)
A-B: 0.91090000 = 7.00000000a + -2.00000000b + 0.00000000c (D)
B-C: -0.34550000 = -9.00000000a + -2.00000000b + 0.00000000c (E)
D-E: -2.51280000 = -32.00000000a + 0.00000000b + 0.00000000c (F)
From (F ), a = 0.07852500 (G)
From (D,G ), b = -0.18061250 (H)
From (A,G,H), c = -41.63758750 (I)
Take a look at the Microsoft Solver Foundation.
With it you could write code like this:
SolverContext context = SolverContext.GetContext();
Model model = context.CreateModel();
Decision a = new Decision(Domain.Real, "a");
Decision b = new Decision(Domain.Real, "b");
Decision c = new Decision(Domain.Real, "c");
model.AddDecisions(a,b,c);
model.AddConstraint("eqA", -44.3940 == 50*a + 37*b + c);
model.AddConstraint("eqB", -45.3049 == 43*a + 39*b + c);
model.AddConstraint("eqC", -44.9594 == 52*a + 41*b + c);
Solution solution = context.Solve();
string results = solution.GetReport().ToString();
Console.WriteLine(results);
Here is the output:
===Solver Foundation Service Report===
Datetime: 04/20/2009 23:29:55
Model Name: Default
Capabilities requested: LP
Solve Time (ms): 1027
Total Time (ms): 1414
Solve Completion Status: Optimal
Solver Selected: Microsoft.SolverFoundation.Solvers.SimplexSolver
Directives:
Microsoft.SolverFoundation.Services.Directive
Algorithm: Primal
Arithmetic: Hybrid
Pricing (exact): Default
Pricing (double): SteepestEdge
Basis: Slack
Pivot Count: 3
===Solution Details===
Goals:
Decisions:
a: 0.0785250000000004
b: -0.180612500000001
c: -41.6375875
For a 3x3 system of linear equations I guess it would be okay to roll your own algorithms.
However, you might have to worry about accuracy, division by zero or really small numbers and what to do about infinitely many solutions. My suggestion is to go with a standard numerical linear algebra package such as LAPACK.
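To make the LAPACK suggestion concrete, here is a hedged sketch using the LAPACKE C interface (assuming liblapacke is installed; dgesv does an LU factorization and solve). The layout and variable names below are my own choices for the 3x3 system in the question:

#include <stdio.h>
#include <lapacke.h>

int main(void)
{
    /* Row-major coefficient matrix for:  r = a*col1 + b*col2 + tx */
    double A[9] = { 50, 37, 1,
                    43, 39, 1,
                    52, 41, 1 };
    double rhs[3] = { -44.3940, -45.3049, -44.9594 };
    lapack_int ipiv[3];

    lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 3, 1, A, 3, ipiv, rhs, 1);
    if (info != 0) { printf("dgesv failed: %d\n", (int)info); return 1; }

    printf("a = %g, b = %g, tx = %g\n", rhs[0], rhs[1], rhs[2]);
    return 0;
}

This should print roughly a = 0.078525, b = -0.1806125, tx = -41.6375875, matching the other answers.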
Are you looking for a software package that'll do the work, or are you actually going to do the matrix operations and such yourself, step by step?
For the first, a coworker of mine just used Ocaml GLPK. It is just a wrapper for the GLPK, but it removes a lot of the steps of setting things up. It looks like you're going to have to stick with the GLPK, in C, though. For the latter, there's an old article (PDF) I used to learn LP a while back, saved thanks to delicious. If you need specific help setting things up further, let us know and I'm sure me or someone will wander back in and help, but I think it's fairly straightforward from here. Good luck!
Template Numerical Toolkit from NIST has tools for doing that.
One of the more reliable ways is to use a QR Decomposition.
Here's an example of a wrapper so that I can call "GetInverse(A, InvA)" in my code and it will put the inverse into InvA.
void GetInverse(const Array2D<double>& A, Array2D<double>& invA)
{
    // Build an identity matrix the same size as A, then solve A * invA = I
    // column by column using the QR decomposition.
    int n = A.dim1();
    Array2D<double> I(n, n, 0.0);
    for (int i = 0; i < n; i++) I[i][i] = 1.0;

    QR<double> qr(A);
    invA = qr.solve(I);
}
Array2D is defined in the library.
In terms of run-time efficiency, others have answered better than I. If you will always have the same number of equations as variables, I like Cramer's rule as it's easy to implement. Just write a function to calculate the determinant of a matrix (or use one that's already written, I'm sure you can find one out there), and divide the determinants of two matrices.
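In that spirit, here is a small, hedged C sketch of Cramer's rule for the 3x3 case (det3 and cramer3 are illustrative names; this is fine for tiny systems but scales poorly and is numerically worse than elimination for larger ones):

/* Determinant of a 3x3 matrix by cofactor expansion along the first row. */
double det3(double m[3][3])
{
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

/* Solve M * x = r by Cramer's rule; returns 0 on success, -1 if singular. */
int cramer3(double M[3][3], const double r[3], double x[3])
{
    double d = det3(M);
    if (d == 0.0) return -1;
    for (int col = 0; col < 3; ++col) {
        double Mc[3][3];
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                Mc[i][j] = (j == col) ? r[i] : M[i][j];  /* replace one column with r */
        x[col] = det3(Mc) / d;
    }
    return 0;
}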
Personally, I'm partial to the algorithms of Numerical Recipes. (I'm fond of the C++ edition.)
This book will teach you why the algorithms work, plus show you some pretty-well debugged implementations of those algorithms.
Of course, you could just blindly use CLAPACK (I've used it with great success), but I would first hand-type a Gaussian Elimination algorithm to at least have a faint idea of the kind of work that has gone into making these algorithms stable.
Later, if you're doing more interesting linear algebra, looking around the source code of Octave will answer a lot of questions.
From the wording of your question, it seems like you have more equations than unknowns and you want to minimize the inconsistencies. This is typically done with linear regression, which minimizes the sum of the squares of the inconsistencies. Depending on the size of the data, you can do this in a spreadsheet or in a statistical package. R is a high-quality, free package that does linear regression, among a lot of other things. There is a lot to linear regression (and a lot of gotchas), but it's straightforward to do for simple cases. Here's an R example using your data. Note that the "tx" is the intercept to your model.
> y <- c(-44.394, -45.3049, -44.9594)
> a <- c(50.0, 43.0, 52.0)
> b <- c(37.0, 39.0, 41.0)
> regression = lm(y ~ a + b)
> regression
Call:
lm(formula = y ~ a + b)
Coefficients:
(Intercept) a b
-41.63759 0.07852 -0.18061
function x = LinSolve(A,y)
%
% Recursive Solution of Linear System Ax=y
% matlab equivalent: x = A\y
% x = n x 1
% A = n x n
% y = n x 1
% Uses stack space extensively. Not efficient.
% C allows recursion, so convert it into C.
% ----------------------------------------------
n=length(y);
x=zeros(n,1);
if(n>1)
x(1:n-1,1) = LinSolve( A(1:n-1,1:n-1) - (A(1:n-1,n)*A(n,1:n-1))./A(n,n) , ...
y(1:n-1,1) - A(1:n-1,n).*(y(n,1)/A(n,n)));
x(n,1) = (y(n,1) - A(n,1:n-1)*x(1:n-1,1))./A(n,n);
else
x = y(1,1) / A(1,1);
end
For general cases, you could use Python along with NumPy for Gaussian elimination, and then plug in the values and get the remaining ones.