Maxima: How to factor out a finite sum? - math

This question is about getting a human-readable representation of Maxima output.
In short, how do I make
b*d4 + b*d3 + b*d2 + b*d1 + a*sin(c4 + alpha) + a*sin(c3 + alpha) + a*sin(c2 + alpha) + a*sin(c1 + alpha)
look like
b*\sum_{i=1}^{4} d_i + a*\sum_{j=1}^{4} sin(c_j + \alpha)
where \sum_{*}^{*} * is a summation sign followed by an expression with subscripts?
Or, going deeper, how do I properly model a finite set of items here?
Consider a finite set of entities $x_i$ (trying to speak TeX here) that are numbered from 1 to n, where n is known. Let a function $F$ depend on several characteristics of those entities, $c_{ji} = c_j(x_i), j = 1..k$ (k is also known), so that $F = F(c_{11},...,c_{kn})$.
Now when I try to implement that in Maxima and do things with it, it yields sums and products of all kinds, where the numbered items are represented something like $c_1*a + c_2*a + c_3*a + c_4*a + d_1*b + d_2*b + d_3*b + d_4*b$, which you would write down on paper as $a*\sum_{i=1}^{4}c_i + b*\sum_{i=1}^{4}d_i$.
So how can I make Maxima do that sort of expression contraction?
To be more specific, here is an actual code example:
(Maxima output marked as ">>>")
/* let's have 4 entities: */
n: 4 $
/* F is a sum of similar components corresponding to each entity F = F_1 + F_2 + F_3 + F_4 */
F_i: a*sin(alpha + c_i) + b*d_i;
>>> b*d_i + a*sin(c_i + alpha)
/* defining the characteristics */
c(i) := concat(c, i) $
d(i) := concat(d, i) $
/* now let's see what F looks like */
/* first, we should model the fact that we have 4 entities somehow: */
F_i(i) := subst(c(i), c_i, subst(d(i), d_i, F_i)) $
/* now we can evaluate F: */
F: sum(F_i(i), i, 1, 4);
>>> b*d4 + b*d3 + b*d2 + b*d1 + a*sin(c4 + alpha) + a*sin(c3 + alpha) + a*sin(c2 + alpha) + a*sin(c1 + alpha)
/* at this point it would be nice to do something like: */
/* pretty(F); */
/* and get an output of: */
/* $b*\sum_{i=1}^{4}d_i + a*\sum_{j=1}^4 sin(c_j + \alpha)$ */
/* not to mention having Maxima write things in the same order as I do */
So, to sum up, there are three questions here:
How do I factor out a sum from an expression like the one at the top of this post?
How do I properly let Maxima know what I'm talking about here?
How do I make Maxima preserve my ordering of terms in the output?
Thanks in advance.

Here's a way to go about what I think you want.
(%i1) n: 4 $
(%i2) F(i) := a*sin(alpha + c[i]) + b*d[i];
(%o2)     F(i) := a sin(alpha + c ) + b d
                                 i       i
(%i3) 'sum(F(i), i,1,4);
         4
       ====
       \
(%o3)   >    (a sin(c  + alpha) + b d )
       /             i               i
       ====
       i = 1
(%i4) declare (nounify(sum), linear);
(%o4) done
(%i5) 'sum(F(i), i,1,4);
          4                        4
         ====                     ====
         \                        \
(%o5)  a  >   sin(c  + alpha) + b  >   d
         /         i              /     i
         ====                     ====
         i = 1                    i = 1
(%i6)
The most important thing here is that I've written what we would call "c sub i" and "d sub i" as c[i] and d[i] respectively. These are indexed variables named c and d, and i is the index. It's not necessary for there to actually exist arrays or lists named c or d, and i might or might not have a specific value.
I have written F as an ordinary function. I've avoided the construction of variable names via concat and avoided the substitution of those names into an expression. I would like to emphasize that such operations are almost certainly not the best way to go about it.
In %i3 note that I wrote the summation as 'sum(...) which makes it a so-called noun expression, which means it is maintained in a symbolic form and not evaluated.
By default, summations are not treated as linear, so in %i4 I declared summations as linear so that the result in %o5 is as expected.
Maxima doesn't have a way to collect expressions such as a1 + a2 + a3 back into a symbolic summation, but perhaps you don't need such an operation.
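(For comparison only, not part of the Maxima answer above: the same noun-form idea can be sketched in sympy, where the summation is kept symbolic until you explicitly expand it. The symbol and IndexedBase names below are just an illustration.)
from sympy import symbols, IndexedBase, Sum, sin

a, b, alpha, i = symbols('a b alpha i')
c, d = IndexedBase('c'), IndexedBase('d')

# keep the sum symbolic, roughly analogous to Maxima's 'sum(...) noun form
F = Sum(a*sin(c[i] + alpha) + b*d[i], (i, 1, 4))
print(F)          # the sum stays unevaluated
print(F.doit())   # expands into the eight explicit terms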

Related

Why is my approximation of the Gamma function not exact?

So I'm setting out to recreate some math functions from Python's math library.
One of those functions is math.gamma. Since I know my way around JavaScript, I thought I might try to translate the JavaScript implementation of the Lanczos approximation on Rosetta Code into AppleScript code:
on gamma(x)
    set p to {1.0, 676.520368121885, -1259.139216722403, 771.323428777653, -176.615029162141, 12.507343278687, -0.138571095266, 9.98436957801957E-6, 1.50563273514931E-7}
    set E to 2.718281828459045
    set g to 7
    if x < 0.5 then
        return pi / (sin(pi * x) * (gamma(1 - x)))
    end if
    set x to x - 1
    set a to item 1 of p
    set t to x + g + 0.5
    repeat with i from 2 to count of p
        set a to a + ((item i of p) / (x + i))
    end repeat
    return ((2 * pi) ^ 0.5) * (t ^ x + 0.5) * (E ^ -t) * a
end gamma
The required function for this to run is:
on sin(x)
    return (do shell script "python3 -c 'import math; print(math.sin(" & x & "))'") as number
end sin
All the other functions of the JavaScript implementation have been removed so as not to need too many helper handlers, but the inline operations I introduced produce the same results.
The JavaScript code works great when run in the browser console, but my AppleScript implementation doesn't produce answers anywhere near the actual result. Is it because...
...I implemented something wrong?
...Applescript doesn't have enough precision?
...something else?
You made two mistakes in your code:
First of all, the i in your repeat statement starts at 2 rather than 1, which is fine for (item i of p), but it means that 1 needs to be subtracted from i in the (x + i) term.
Secondly, in the expression (t ^ x + 0.5) in the return statement, t ^ x is evaluated first and 0.5 is then added to it, but according to the JavaScript implementation x and 0.5 need to be added together first, i.e. it should be t ^ (x + 0.5).
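For reference, here is a rough Python sketch with both fixes applied (it reuses the same coefficient list as the AppleScript above; with Python's 0-based indexing the loop denominator is simply x + i, and the exponent is written t ** (x + 0.5)):
import math

# same coefficients as the AppleScript list p above
P = [1.0, 676.520368121885, -1259.139216722403, 771.323428777653,
     -176.615029162141, 12.507343278687, -0.138571095266,
     9.98436957801957e-6, 1.50563273514931e-7]
G = 7

def gamma(x):
    if x < 0.5:
        # reflection formula for arguments below 0.5
        return math.pi / (math.sin(math.pi * x) * gamma(1 - x))
    x -= 1
    a = P[0]
    t = x + G + 0.5
    for i in range(1, len(P)):
        a += P[i] / (x + i)   # 0-based index, so no off-by-one correction is needed here
    return math.sqrt(2 * math.pi) * t ** (x + 0.5) * math.exp(-t) * a

print(gamma(5))   # roughly 24, i.e. 4!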

sympy: rootfinding of parameterized function in range

I'm having trouble finding the roots of a parameterized quintic polynomial. Background: I want to find the parameter s_f such that for any given parameter d_f, the curvature of the polynomial is smaller than a threshold (yeah .. sounds complex, but the math is rather straightforward);
# define quintic polynomial (jerk-minimized trajectory)
# see http://courses.shadmehrlab.org/Shortcourse/minimumjerk.pdf
from sympy import symbols, diff, simplify, solveset, Interval, Rational, S
from IPython.display import display

s = symbols('s', real=True, positive=True)
s_f = symbols('s_f', real=True, positive=True, nonzero=True)
d_0 = 0
d_f = symbols('d_f', real=True, positive=True, nonzero=True)
d_of_s = d_0 + (d_f - d_0) * ( 10*(s/s_f)**3 - 15*(s/s_f)**4 + 6*(s/s_f)**5 )
display(d_of_s)
# define curvature of d_of_s
# see https://en.wikipedia.org/wiki/Curvature#In_terms_of_a_general_parametrization
y = d_of_s
dy = diff(d_of_s, s)
ddy = diff(dy, s)
x = s
dx = diff(x, s) # evaluates to 1
ddx = diff(dx, s) # evaluates to 0
k = (dx*ddy - dy*ddx) / ((dx*dx + dy*dy)**Rational(3,2))
# the goal is to find s_f for any given d_f, such that k(s) < some_threshold
# strategy: find the roots of the derivative of k in the range of s∈[0, s_f]
dk = diff(k, s)
dk = simplify(dk)
display(dk)
# now solve
res = solveset(dk, s, Interval(0, s_f).intersection(S.Reals))
display(res)
The function dk(s, d_f, s_f) has two roots in the interval s∈[0, s_f], however solveset returns this:
ConditionSet(s, Eq(5400*d_f**2*s**4*(-s**2 + 2*s*s_f - s_f**2)*(2*s**2 - 3*s*s_f + s_f**2)**2 + (900*d_f**2*s**4*(s**2 - 2*s*s_f + s_f**2)**2 + s_f**10)*(6*s**2 - 6*s*s_f + s_f**2), 0), Interval(0, s_f))
.. which is, afaik, equivalent to: "I can't solve this, we've got an infinite number of results." Well, that is true for the function in general: limit(dk, s, -oo) and limit(dk, s, +oo) are both zero. But since I stated the domain interval, why am I not getting the two roots I'm expecting? I'd also expect a more granular result:
- the set containing the roots for s < 0
- the set containing the roots for s > s_f
- the two roots when s∈[0, s_f]
I started with solve() and a lot of different assumptions on my symbols. I get different results for different assumptions, but no combination seems to yield what I need. When I state no assumptions, I get back a set with a huge condition and 8 roots that don't seem to be real or correct. In general, the constraints are:
- all symbols are real
- s_f > 0
- d_f > 0
- s ∈ [0, s_f] (domain range .. the polynomial is only evaluated in this interval)
I guess the problem is that I'm not setting up my solveset correctly:
- how to specify that s_f and d_f are real? afaik the symbol assumptions are ignored when using solveset?
- how to specify intervals and assumptions on other multivariate functions, i.e. other symbols than the domain one?
This is what d_of_s looks like for s_f = 1, d_f = 1:
And this is what dk(s) looks like (I plotted outside the domain range to visualize the problem).
Substitute in the known values of d_f and s_f and use real_roots to find the real roots of the numerator of dk. Keep the ones that have a value in the range of interest:
>>> s_fi = 3
>>> [i for i in real_roots(dk.subs(d_f,2).subs(s_f, s_fi).as_numer_denom()[0])
... if 0 <= i.n(2) <= s_fi]
[CRootOf(800*s**10 - 12000*s**9 + 74000*s**8 - 240000*s**7 + 432000*s**6 - 410400*s**5
+ 162000*s**4 - 4374*s**2 + 13122*s - 6561, 1), CRootOf(800*s**10 - 12000*s**9 +
74000*s**8 - 240000*s**7 + 432000*s**6 - 410400*s**5 + 162000*s**4 - 4374*s**2 +
13122*s - 6561, 2)]
I kept the CRootOf instances because they can be computed to arbitrary precision, e.g. to 3 digits:
>>> [i.n(3) for i in _]
[0.433, 2.57]

why powerset gives 2^N time complexity?

The following is a recursive function for generating a power set:
void powerset(int[] items, int s, Stack<Integer> res) {
    System.out.println(res);
    for (int i = s; i < items.length; i++) {
        res.push(items[i]);
        powerset(items, i + 1, res);
        res.pop();
    }
}
I don't really understand why this would take O(2^N). Where's that 2 coming from?
Why does T(N) = T(N-1) + T(N-2) + T(N-3) + .... + T(1) + T(0) solve to O(2^N)? Can someone explain why?
We are doubling the number of operations we do every time we decide to add another element to the original array.
For example, let us say we only have the empty set {}. What happens to the power set if we want to add {a}? We would then have 2 sets: {}, {a}. What if we wanted to add {b}? We would then have 4 sets: {}, {a}, {b}, {ab}.
Notice 2^n also implies a doubling nature.
2^1 = 2, 2^2 = 4, 2^3 = 8, ...
Below is a more generic explanation.
Note that generating a power set is basically generating all combinations.
(nCr is the number of combinations that can be made by taking r items from n total items)
formula: nCr = n!/((n-r)! * r!)
Example: the power set of {1,2,3} is {{}, {1}, {2}, {3}, {1,2}, {2,3}, {1,3}, {1,2,3}}, which has 8 = 2^3 members.
1) 3C0 = #combinations possible taking 0 items from 3 = 3! / ((3-0)! * 0!) = 1
2) 3C1 = #combinations possible taking 1 item from 3 = 3! / ((3-1)! * 1!) = 3
3) 3C2 = #combinations possible taking 2 items from 3 = 3! / ((3-2)! * 2!) = 3
4) 3C3 = #combinations possible taking 3 items from 3 = 3! / ((3-3)! * 3!) = 1
If you add the above four counts you get 1 + 3 + 3 + 1 = 8 = 2^3. So basically a power set of n items turns out to contain 2^n possible sets.
So if an algorithm generates a power set with all these combinations, it's going to take time proportional to 2^n, and so the time complexity is O(2^n).
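If you want to sanity-check that the binomial counts really do sum to 2^n, here is a quick Python snippet (just an illustration, not part of the answer above; math.comb needs Python 3.8+):
from math import comb

n = 3
print(sum(comb(n, r) for r in range(n + 1)), 2 ** n)   # prints: 8 8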
Something like this
T(1)=T(0);
T(2)=T(1)+T(0)=2T(0);
T(3)=T(2)+T(1)+T(0)=2T(2);
Thus we have
T(N)=2T(N-1)=4T(N-2)=... = 2^(N-1)T(1), which is O(2^N)
Shriganesh Shintre's answer is pretty good, but you can simplify it even more:
Assume we have a set S:
{a1, a2, ..., aN}
Now we can describe any subset s by giving each item in the set the value 1 (included) or 0 (excluded). We can then see that the number of possible sets s is a product of:
2 * 2 * ... * 2 or 2^N
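That included/excluded view maps directly onto bitmasks, so one quick way to see the 2^N count is to enumerate subsets from the bits of a counter (a small illustrative sketch in Python, not the asker's code):
def subsets(items):
    n = len(items)
    result = []
    for mask in range(2 ** n):   # one bit per element: included or excluded
        result.append([items[i] for i in range(n) if mask & (1 << i)])
    return result

print(len(subsets(['a', 'b', 'c'])))   # 8 == 2**3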
I can explain this in several mathematical ways. The first one:
Consider one element, say a. Every subset has 2 options regarding a: either it contains a or it doesn't. The same holds for each of the n elements, so there must be $2^n$ subsets, and since you need one function call to create each subset, the function gets called $2^n$ times.
Another solution:
This one works with the recursion and produces your equation. Let me define T(0) = 2. First, for a set with one element we have T(1) = 2: you just call the function and it ends there. Now suppose that for every set with k < n elements we have this formula:
T(k) = T(k-1) + T(k-2) + ... + T(1) + T(0) (call it formula *)
I want to prove that the equation also holds for k = n.
Consider all the subsets that contain the first element (as at the beginning of your algorithm, where you push the first element). We are left with n-1 elements, so it takes T(n-1) to find every subset that contains the first element. So far we have:
T(n) = T(n-1) + T(subsets that don't have the first element) (call it formula **)
At the end of your for loop you pop the first element; now we need every subset that doesn't contain the first element, as in (**), and again there are n-1 elements, so:
T(subsets that don't have the first element) = T(n-1) (call it formula ***)
From formulas (*) and (***) we have:
T(subsets that don't have the first element) = T(n-1) = T(n-2) + ... + T(1) + T(0) (call it formula ****)
And now we have what we wanted from the start; from formulas (****) and (**) we have:
T(n) = T(n-1) + T(n-2) + ... + T(1) + T(0)
We also have T(n) = T(n-1) + T(n-1) = 2 * T(n-1), so T(n) = $2^n$.

When dealing with integer division, is there a way to collect like terms?

For example, when NOT working with integer division the following is true:
x/4 + x/2 = x*(1/4+1/2) = x * 3/4
When dealing with integer division, is there a way to reduce x/4 + x/2 into this form:
x * (int1/int2)? If so, how?
The request "reduce x/4 + x/2 into this form: x * (int1/int2)" appears to be not quite the query you want. Forcing the (int1/int2) division first simply results in int3.
So let's work with
reduce x/4 + x/2 into this form: (x * int1)/int2
As others have mentioned, there are issues with this that hints to its impossibility. So I'll propose yet another form that might work for you in that it is still one access to x and no branching.
reduce x/4 + x/2 into this form: ((x/int1)*int2)/int3
x/4 + x/2 reduces to ((x/2)*3)/2. It takes advantage of the fact that 4 is a multiple of 2.
Note: there remains a possibility of overflow for large |x|, beginning around INT_MAX/3*2 or so.
Test code
#include <stdio.h>

int test2(int x) {
    int y1 = x/4 + x/2;
    int y2 = ((x/2)*3)/2;
    printf("%3d %3d %3d %d\n", x, y1, y2, y1==y2);
    return y1==y2;
}
I don't think you'll be able to do this. Take for example
5 \ 3 + 5 \ 2 = 1 + 2 = 3
where \ denotes integer division.
Now look at the same expression with regular division
a / b + a / c = a(b + c) / bc
If we were to try to apply this rule to the example above, substituting \ for /, we would get this:
5 \ 3 + 5 \ 2 = 5(3 + 2) \ (2 * 3) = 25 \ 6 = 4 [wrong answer!]
                                              ^^^
                                      This must be wrong
I'm not trying to make the claim that there doesn't exist some identity similar to this that is correct.
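A quick numeric check of that counterexample (hypothetical values; Python's floor division // stands in for the \ operator):
a, b, c = 5, 3, 2
lhs = a // b + a // c              # 1 + 2 = 3
rhs = (a * (b + c)) // (b * c)     # 25 // 6 = 4
print(lhs, rhs)                    # 3 4 -- the identity fails under integer division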

Solving a linear equation

I need to programmatically solve a system of linear equations in C, Objective C, or (if needed) C++.
Here's an example of the equations:
-44.3940 = a * 50.0 + b * 37.0 + tx
-45.3049 = a * 43.0 + b * 39.0 + tx
-44.9594 = a * 52.0 + b * 41.0 + tx
From this, I'd like to get the best approximation for a, b, and tx.
Cramer's Rule and Gaussian Elimination are two good, general-purpose algorithms (also see Simultaneous Linear Equations). If you're looking for code, check out GiNaC, Maxima, and SymbolicC++ (depending on your licensing requirements, of course).
EDIT: I know you're working in C land, but I also have to put in a good word for SymPy (a computer algebra system in Python). You can learn a lot from its algorithms (if you can read a bit of python). Also, it's under the new BSD license, while most of the free math packages are GPL.
You can solve this with a program exactly the same way you solve it by hand (with multiplication and subtraction, then feeding results back into the equations). This is pretty standard secondary-school-level mathematics.
-44.3940 = 50a + 37b + c (A)
-45.3049 = 43a + 39b + c (B)
-44.9594 = 52a + 41b + c (C)
(A-B): 0.9109 = 7a - 2b (D)
(B-C): -0.3455 = -9a - 2b (E)
(D-E): 1.2564 = 16a (F)
(F/16): a = 0.078525 (G)
Feed G into D:
0.9109 = 7a - 2b
=> 0.9109 = 0.549675 - 2b (substitute a)
=> 0.361225 = -2b (subtract 0.549675 from both sides)
=> -0.1806125 = b (divide both sides by -2) (H)
Feed H/G into A:
-44.3940 = 50a + 37b + c
=> -44.3940 = 3.92625 - 6.6826625 + c (substitute a/b)
=> -41.6375875 = c (subtract 3.92625 - 6.6826625 from both sides)
So you end up with:
a = 0.0785250
b = -0.1806125
c = -41.6375875
If you plug these values back into A, B and C, you'll find they're correct.
The trick is to use a simple 4x3 matrix which reduces in turn to a 3x2 matrix, then a 2x1 which is "a = n", n being an actual number. Once you have that, you feed it into the next matrix up to get another value, then those two values into the next matrix up until you've solved all variables.
Provided you have N distinct equations, you can always solve for N variables. I say distinct because these two are not:
7a + 2b = 50
14a + 4b = 100
They are the same equation multiplied by two so you cannot get a solution from them - multiplying the first by two then subtracting leaves you with the true but useless statement:
0 = 0 + 0
By way of example, here's some C code that works out the simultaneous equations that you posed in your question. First, some necessary types, variables, a support function for printing out an equation, and the start of main:
#include <stdio.h>

typedef struct { double r, a, b, c; } tEquation;

tEquation equ1[] = {
    { -44.3940, 50, 37, 1 },    // -44.3940 = 50a + 37b + c (A)
    { -45.3049, 43, 39, 1 },    // -45.3049 = 43a + 39b + c (B)
    { -44.9594, 52, 41, 1 },    // -44.9594 = 52a + 41b + c (C)
};
tEquation equ2[2], equ3[1];

static void dumpEqu (char *desc, tEquation *e, char *post) {
    printf ("%10s: %12.8lf = %12.8lfa + %12.8lfb + %12.8lfc (%s)\n",
        desc, e->r, e->a, e->b, e->c, post);
}

int main (void) {
    double a, b, c;
Next, the reduction of the three equations with three unknowns to two equations with two unknowns:
    // First step, populate equ2 based on removing c from equ1.
    dumpEqu (">", &(equ1[0]), "A");
    dumpEqu (">", &(equ1[1]), "B");
    dumpEqu (">", &(equ1[2]), "C");
    puts ("");

    // A - B
    equ2[0].r = equ1[0].r * equ1[1].c - equ1[1].r * equ1[0].c;
    equ2[0].a = equ1[0].a * equ1[1].c - equ1[1].a * equ1[0].c;
    equ2[0].b = equ1[0].b * equ1[1].c - equ1[1].b * equ1[0].c;
    equ2[0].c = 0;

    // B - C
    equ2[1].r = equ1[1].r * equ1[2].c - equ1[2].r * equ1[1].c;
    equ2[1].a = equ1[1].a * equ1[2].c - equ1[2].a * equ1[1].c;
    equ2[1].b = equ1[1].b * equ1[2].c - equ1[2].b * equ1[1].c;
    equ2[1].c = 0;

    dumpEqu ("A-B", &(equ2[0]), "D");
    dumpEqu ("B-C", &(equ2[1]), "E");
    puts ("");
Next, the reduction of the two equations with two unknowns to one equation with one unknown:
    // Next step, populate equ3 based on removing b from equ2.
    // D - E
    equ3[0].r = equ2[0].r * equ2[1].b - equ2[1].r * equ2[0].b;
    equ3[0].a = equ2[0].a * equ2[1].b - equ2[1].a * equ2[0].b;
    equ3[0].b = 0;
    equ3[0].c = 0;

    dumpEqu ("D-E", &(equ3[0]), "F");
    puts ("");
Now that we have a formula of the type number1 = unknown * number2, we can simply work out the unknown value with unknown <- number1 / number2. Then, once you've figured that value out, substitute it into one of the equations with two unknowns and work out the second value. Then substitute both those (now-known) unknowns into one of the original equations and you now have the values for all three unknowns:
    // Finally, substitute values back into equations.
    a = equ3[0].r / equ3[0].a;
    printf ("From (F ), a = %12.8lf (G)\n", a);

    b = (equ2[0].r - equ2[0].a * a) / equ2[0].b;
    printf ("From (D,G ), b = %12.8lf (H)\n", b);

    c = (equ1[0].r - equ1[0].a * a - equ1[0].b * b) / equ1[0].c;
    printf ("From (A,G,H), c = %12.8lf (I)\n", c);

    return 0;
}
The output of that code matches the earlier calculations in this answer:
>: -44.39400000 = 50.00000000a + 37.00000000b + 1.00000000c (A)
>: -45.30490000 = 43.00000000a + 39.00000000b + 1.00000000c (B)
>: -44.95940000 = 52.00000000a + 41.00000000b + 1.00000000c (C)
A-B: 0.91090000 = 7.00000000a + -2.00000000b + 0.00000000c (D)
B-C: -0.34550000 = -9.00000000a + -2.00000000b + 0.00000000c (E)
D-E: -2.51280000 = -32.00000000a + 0.00000000b + 0.00000000c (F)
From (F ), a = 0.07852500 (G)
From (D,G ), b = -0.18061250 (H)
From (A,G,H), c = -41.63758750 (I)
Take a look at the Microsoft Solver Foundation.
With it you could write code like this:
SolverContext context = SolverContext.GetContext();
Model model = context.CreateModel();
Decision a = new Decision(Domain.Real, "a");
Decision b = new Decision(Domain.Real, "b");
Decision c = new Decision(Domain.Real, "c");
model.AddDecisions(a,b,c);
model.AddConstraint("eqA", -44.3940 == 50*a + 37*b + c);
model.AddConstraint("eqB", -45.3049 == 43*a + 39*b + c);
model.AddConstraint("eqC", -44.9594 == 52*a + 41*b + c);
Solution solution = context.Solve();
string results = solution.GetReport().ToString();
Console.WriteLine(results);
Here is the output:
===Solver Foundation Service Report===
Datetime: 04/20/2009 23:29:55
Model Name: Default
Capabilities requested: LP
Solve Time (ms): 1027
Total Time (ms): 1414
Solve Completion Status: Optimal
Solver Selected: Microsoft.SolverFoundation.Solvers.SimplexSolver
Directives:
Microsoft.SolverFoundation.Services.Directive
Algorithm: Primal
Arithmetic: Hybrid
Pricing (exact): Default
Pricing (double): SteepestEdge
Basis: Slack
Pivot Count: 3
===Solution Details===
Goals:
Decisions:
a: 0.0785250000000004
b: -0.180612500000001
c: -41.6375875
For a 3x3 system of linear equations I guess it would be okay to roll your own algorithms.
However, you might have to worry about accuracy, division by zero or really small numbers and what to do about infinitely many solutions. My suggestion is to go with a standard numerical linear algebra package such as LAPACK.
Are you looking for a software package that'll do the work, or do you actually want to do the matrix operations and such yourself, step by step?
For the first, a coworker of mine just used OCaml GLPK. It is just a wrapper for GLPK, but it removes a lot of the setup steps. It looks like you're going to have to stick with GLPK, in C, though. For the latter, thanks to delicious for saving an old article (PDF) I used to learn LP a while back. If you need specific help setting things up further, let us know and I'm sure someone will wander back in and help, but I think it's fairly straightforward from here. Good luck!
Template Numerical Toolkit from NIST has tools for doing that.
One of the more reliable ways is to use a QR Decomposition.
Here's an example of a wrapper so that I can call "GetInverse(A, InvA)" in my code and it will put the inverse into InvA.
void GetInverse(const Array2D<double>& A, Array2D<double>& invA)
{
    // Build an identity matrix I of the same size as A, then solve A * invA = I.
    Array2D<double> I(A.dim1(), A.dim2(), 0.0);
    for (int i = 0; i < A.dim1(); i++)
        I[i][i] = 1.0;

    QR<double> qr(A);
    invA = qr.solve(I);
}
Array2D (and QR) are defined in the library.
In terms of run-time efficiency, others have answered better than I. If you always will have the same number of equations as variables, I like Cramer's rule as it's easy to implement. Just write a function to calculate determinant of a matrix (or use one that's already written, I'm sure you can find one out there), and divide the determinants of two matrices.
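As a sketch of that approach (not the answerer's code; numpy is used here only for the determinants), Cramer's rule applied to the 3x3 system from the question looks roughly like this:
import numpy as np

A = np.array([[50.0, 37.0, 1.0],
              [43.0, 39.0, 1.0],
              [52.0, 41.0, 1.0]])
y = np.array([-44.3940, -45.3049, -44.9594])

def cramer(A, y):
    det_a = np.linalg.det(A)
    solution = []
    for col in range(A.shape[1]):
        Ai = A.copy()
        Ai[:, col] = y   # replace one column with the right-hand side
        solution.append(np.linalg.det(Ai) / det_a)
    return solution

print(cramer(A, y))   # approximately [0.078525, -0.1806125, -41.6375875]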
Personally, I'm partial to the algorithms of Numerical Recipes. (I'm fond of the C++ edition.)
This book will teach you why the algorithms work, plus show you some pretty-well debugged implementations of those algorithms.
Of course, you could just blindly use CLAPACK (I've used it with great success), but I would first hand-type a Gaussian Elimination algorithm to at least have a faint idea of the kind of work that has gone into making these algorithms stable.
Later, if you're doing more interesting linear algebra, looking around the source code of Octave will answer a lot of questions.
From the wording of your question, it seems like you have more equations than unknowns and you want to minimize the inconsistencies. This is typically done with linear regression, which minimizes the sum of the squares of the inconsistencies. Depending on the size of the data, you can do this in a spreadsheet or in a statistical package. R is a high-quality, free package that does linear regression, among a lot of other things. There is a lot to linear regression (and a lot of gotchas), but it's straightforward to do for simple cases. Here's an R example using your data. Note that "tx" is the intercept of your model.
> y <- c(-44.394, -45.3049, -44.9594)
> a <- c(50.0, 43.0, 52.0)
> b <- c(37.0, 39.0, 41.0)
> regression = lm(y ~ a + b)
> regression
Call:
lm(formula = y ~ a + b)
Coefficients:
(Intercept)            a            b
  -41.63759      0.07852     -0.18061
function x = LinSolve(A,y)
%
% Recursive Solution of Linear System Ax=y
% matlab equivalent: x = A\y
% x = n x 1
% A = n x n
% y = n x 1
% Uses stack space extensively. Not efficient.
% C allows recursion, so convert it into C.
% ----------------------------------------------
n = length(y);
x = zeros(n,1);
if (n > 1)
    x(1:n-1,1) = LinSolve( A(1:n-1,1:n-1) - (A(1:n-1,n)*A(n,1:n-1))./A(n,n) , ...
                           y(1:n-1,1) - A(1:n-1,n).*(y(n,1)/A(n,n)));
    x(n,1) = (y(n,1) - A(n,1:n-1)*x(1:n-1,1))./A(n,n);
else
    x = y(1,1) / A(1,1);
end
For general cases, you could use python along with numpy for Gaussian elimination. And then plug in values and get the remaining values.
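For instance, a minimal numpy sketch along those lines (np.linalg.solve handles the square 3x3 system from the question; np.linalg.lstsq would give the least-squares fit if there were more equations than unknowns):
import numpy as np

# coefficient matrix and right-hand side from the question
A = np.array([[50.0, 37.0, 1.0],
              [43.0, 39.0, 1.0],
              [52.0, 41.0, 1.0]])
y = np.array([-44.3940, -45.3049, -44.9594])

print(np.linalg.solve(A, y))   # approximately [ 0.078525, -0.1806125, -41.6375875 ]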
