Simplify algebraic expressions in R

I would like to solve a first order condition given an expression in R. I have successfully taken the derivative, but, I suspect because I am taking the derivative of a bunch of call objects, I cannot get my expressions to simplify.
In the example code below I build three quoted expressions, combine them with arithmetic, and then take the derivative with respect to my K variable. The associated output has also been placed below.
Q = quote(K^(a)*L^(1-a))
P = quote(wL+x)
MC = quote(wL+K)
profit = substitute(Q*(P-MC), list(Q=Q,P=P,MC=MC))
D(profit,'K')
Output:
K^((a) - 1) * (a) * L^(1 - a) * (wL + x - (wL + K)) - K^(a) * L^(1 - a)
Note that the (wL + x - (wL + K)) in the above output ought to simplify to (x - K). While the output is correct, I am afraid that if I go much further, or try to solve a particular first order condition, the expressions will become unwieldy; I want to be working with the most simplified expressions possible.
My question: Is there a function or means by which I can simplify expressions either prior to or once they have been mathematically evaluated?
My desired output (or any algebraic equivalent):
K^((a) - 1) * (a) * L^(1 - a) * (x - K) - K^(a) * L^(1 - a)

Using the Ryacas0 package (also see the Ryacas package) we can write:
library(Ryacas0)
der <- D(profit,'K')
e <- as.expression(der)
Simplify(e)
giving:
yacas_expression(K^(a - 1) * a * L^(1 - a) * x - K^(a - 1) * a * L^(1 - a) * K - L^(1 - a) * K^a)
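As a quick sanity check (an illustrative aside, not part of the original answer), you can confirm that the simplified form is algebraically equivalent to the raw derivative by evaluating both at arbitrary values (note that wL is a single symbol in the question's expressions):
vals <- list(K = 2, L = 3, a = 0.4, wL = 1.5, x = 5)
raw  <- eval(der, vals)   # the unsimplified derivative from D(profit, 'K')
simp <- eval(quote(K^(a - 1) * a * L^(1 - a) * (x - K) - K^a * L^(1 - a)), vals)
all.equal(raw, simp)      # TRUE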


Solving a system of differential equations in R

I have a simple flux model in R. It boils down to two differential equations that model two state variables within the model; we'll call them A and B. They are calculated as simple difference equations of four component fluxes flux1-flux4, 5 parameters p1-p5, and a 6th parameter, of_interest, that can take on values between 0 and 1.
parameters<- c(p1=0.028, p2=0.3, p3=0.5, p4=0.0002, p5=0.001, of_interest=0.1)
state <- c(A=28, B=1.4)
model <- function(t, state, parameters){
  with(as.list(c(state, parameters)), {
    # fluxes
    flux1 = (1 - of_interest) * p1 * (B / (p2 + B)) * p3
    flux2 = p4 * A  # microbial death
    flux3 = of_interest * p1 * (B / (p2 + B)) * p3
    flux4 = p5 * B
    # differential equations of component fluxes
    dAdt <- flux1 - flux2
    dBdt <- flux3 - flux4
    list(c(dAdt, dBdt))
  })
}
I would like to write a function to take the derivative of dAdt with respect to of_interest, set the derived equation to 0, then rearrange and solve for the value of of_interest. This will be the value of the parameter of_interest that maximizes the function dAdt.
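A naive first attempt (a minimal illustrative sketch) is to hand the flux expression straight to base R's D(); note that this treats A and B as constants, so by itself it cannot locate the maximum, because the steady-state values of A and B themselves depend on of_interest:
# symbolic derivative of dAdt = flux1 - flux2 with respect to of_interest,
# holding A and B fixed (illustrative only)
D(expression((1 - of_interest) * p1 * (B / (p2 + B)) * p3 - p4 * A), "of_interest")
# gives -(p1 * (B/(p2 + B)) * p3) (up to formatting), which is strictly negative
# for B > 0, so the dependence of B on of_interest has to be taken into account.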
So far I have been able to solve the model at steady state, across the possible values of of_interest to demonstrate there should be a maximum.
require(rootSolve)
range <- seq(0, 1, by = 0.01)
out <- numeric(0)
for(i in range){
  of_interest = i
  parameters <- c(p1=0.028, p2=0.3, p3=0.5, p4=0.0002, p5=0.001, of_interest=of_interest)
  state <- c(A=28, B=1.4)
  ST <- stode(y = state, func = model, parms = parameters, pos = TRUE)
  out <- c(out, ST$y[1])
}
Then plotting:
plot(out~range, pch=16,col='purple')
lines(smooth.spline(out~range,spar=0.35), lwd=3,lty=1)
How can I analytically solve for the value of of_interest that maximizes dAdt in R? If an analytical solution is not possible, how can I know, and how can I go about solving this numerically?
Update: I think this problem can be solved with the deSolve package in R, linked here; however, I am having trouble implementing it using my particular example.
Your equation in B(t) is just-about separable since you can divide out B(t), from which you can get that
B(t) = C * exp{-p5 * t} * (p2 + B(t)) ^ {of_interest * p1 * p3}
This is an implicit solution for B(t) which we'll solve point-wise.
You can solve for C given your initial value of B. I suppose t = 0 initially? In which case
C = B_0 / (p2 + B_0) ^ {of_interest * p1 * p3}
This also gives a somewhat nicer-looking expression for A(t):
dA(t) / dt = B_0 / (p2 + B_0) * p1 * p3 * (1 - of_interest) * exp{-p5 * t} * ((p2 + B(t)) / (p2 + B_0)) ^ {of_interest * p1 * p3 - 1} - p4 * A(t)
This can be solved with an integrating factor (= exp{p4 * t}), via numerical integration of the term involving B(t). We specify the lower limit of the integral as 0 so that we never have to evaluate B outside the range [0, t], which means the constant of integration is simply A_0 and thus:
A(t) = (A_0 + integral_0^t { f(tau; parameters) d tau}) * exp{-p4 * t}
The basic gist is B(t) is driving everything in this system -- the approach will be: solve for the behavior of B(t), then use this to figure out what's going on with A(t), then maximize.
First, the "outer" parameters; we also need nleqslv to get B:
library(nleqslv)
t_min <- 0
t_max <- 10000
t_N <- 10
#we'll only solve the behavior of A & B over t_rng
t_rng <- seq(t_min, t_max, length.out = t_N)
#I'm calling of_interest ttheta
ttheta_min <- 0
ttheta_max <- 1
ttheta_N <- 5
tthetas <- seq(ttheta_min, ttheta_max, length.out = ttheta_N)
B_0 <- 1.4
A_0 <- 28
#No sense storing this as a vector when we'll only ever use it as a list
parameters <- list(p1 = 0.028, p2 = 0.3, p3 = 0.5,
p4 = 0.0002, p5 = 0.001)
From here, the basic outline is:
Given the parameter values (in particular ttheta), solve for BB over t_rng via non-linear equation solving
Given BB and the parameter values, solve for AA over t_rng by numerical integration
Given AA and your expression for dAdt, plug & maximize.
derivs <-
  sapply(tthetas, function(th){
    #append current ttheta
    params <- c(parameters, ttheta = th)
    #declare a function we'll use to solve for B (see above)
    b_slv <- function(b, t)
      with(params, b - B_0 * ((p2 + b)/(p2 + B_0)) ^
             (ttheta * p1 * p3) * exp(-p5 * t))
    #solving point-wise (this is pretty fast)
    # **See below for a note**
    BB <- sapply(t_rng, function(t) nleqslv(B_0, function(b) b_slv(b, t))$x)
    #this is f(tau; params) that I mentioned above;
    # we have to do linear interpolation since the
    # numerical integrator isn't constrained to the grid.
    # **See below for note**
    a_int <- function(t){
      #approximate t to the grid (t_rng)
      # (assumes B is monotonic, which seems to be true)
      # (also, if t ends up negative, just assign t_rng[1])
      t_n <- max(1L, which.max(t_rng - t >= 0) - 1L)
      idx <- t_n:(t_n + 1)
      ts <- t_rng[idx]
      #distance-weighted average of the local B values
      B_app <- sum((-1) ^ (0:1) * (t - ts) / diff(ts) * BB[idx])
      #finally, f(tau; params)
      with(params, (1 - ttheta) * p1 * p3 * B_0 / (p2 + B_0) *
             ((p2 + B_app)/(p2 + B_0)) ^ (ttheta * p1 * p3 - 1) *
             exp((p4 - p5) * t))
    }
    #a_int only works on scalars; the numeric integrator
    # requires a version that works on vectors
    a_int_v <- function(t) sapply(t, a_int)
    AA <- exp(-params$p4 * t_rng) *
      sapply(t_rng, function(tt)
        #I found the subdivisions constraint binding in some cases
        # at the default value; no trouble at 1000.
        A_0 + integrate(a_int_v, 0, tt, subdivisions = 1000L)$value)
    #using the explicit version of dAdt given as flux1 - flux2
    max(with(params, (1 - ttheta) * p1 * p3 * BB / (p2 + BB) - p4 * AA))
  })
Finally, simply run `tthetas[which.max(derivs)]` to get the maximizer.
Note:
This code is not optimized for efficiency. There are a few places where there are some potential speed-ups:
probably faster to run the equation solver recursively, as it'll converge faster with better initial guesses -- using the previous value instead of the initial value is surely better
Will be faster to simply use Riemann sums to integrate; the tradeoff is in accuracy, but should be fine if you have a dense enough grid. One beauty of Riemann is you won't have to interpolate at all, and numerically they're simple linear algebra. I ran this with t_N == ttheta_N == 1000L and it ran within a few minutes.
Probably possible to vectorize a_int directly instead of just sapplying on it, with a concomitant speed-up from more direct appeal to BLAS.
Loads of other small stuff. Pre-compute ttheta * p1 * p3 since it's re-used so much, etc.
I didn't bother including any of that stuff, though, because you're honestly probably better off porting this to a faster language -- Julia is my own pet favorite, but of course R speaks well with C++, C, Fortran, etc.
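That said, a rough sketch of the Riemann-sum idea (a trapezoidal variant, illustrative only, and assuming BB has already been solved point-wise on t_rng for a given ttheta, as in the code above) might look like:
params <- c(parameters, ttheta = 0.1)   # pick one value of ttheta for illustration
# f(tau; params) evaluated on the grid itself -- no interpolation needed
f_grid <- with(params, (1 - ttheta) * p1 * p3 * B_0 / (p2 + B_0) *
                 ((p2 + BB) / (p2 + B_0)) ^ (ttheta * p1 * p3 - 1) *
                 exp((p4 - p5) * t_rng))
# cumulative trapezoidal integral of f from 0 to each grid point
cum_int <- c(0, cumsum(diff(t_rng) * (head(f_grid, -1) + tail(f_grid, -1)) / 2))
AA <- exp(-params$p4 * t_rng) * (A_0 + cum_int)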

Numerically stable evaluation of sqrt(x+a) - sqrt(x)

Is there an elegant way of evaluating the following expression in a numerically stable manner over the full parameter range x, a >= 0?
f(x,a) = sqrt(x+a) - sqrt(x)
Also, is there any programming language or library that provides this kind of function? If so, under what name? I have no specific problem using the above expression right now, but I have encountered it many times in the past and always thought that this problem must have been solved before!
Yes, there is! Provided that at least one of x and a is positive, you can use:
f(x, a) = a / (sqrt(x + a) + sqrt(x))
which is perfectly numerically stable, but hardly worth a library function in its own right. Of course, when x = a = 0, the result should be 0.
Explanation: sqrt(x + a) - sqrt(x) is equal to (sqrt(x + a) - sqrt(x)) * (sqrt(x + a) + sqrt(x)) / (sqrt(x + a) + sqrt(x)). Now multiply the first two terms to get sqrt(x+a)^2 - sqrt(x)^2, which simplifies to a.
Here's an example demonstrating the stability: the troublesome case for the original expression is where x + a and x are very close in value (or equivalently when a is much smaller in magnitude than x). For example, if x = 1 and a is small, we know from a Taylor expansion around 1 that sqrt(1 + a) should be 1 + a/2 - a^2/8 + O(a^3), so sqrt(1 + a) - sqrt(1) should be close to a/2 - a^2/8. Let's try that for a particular choice of small a. Here's the original function (written in Python, in this case, but you can treat it as pseudocode):
from math import sqrt

def f(x, a):
    return sqrt(x + a) - sqrt(x)
and here's the stable version:
def g(x, a):
    if a == 0:
        return 0.0
    else:
        return a / (sqrt(x + a) + sqrt(x))
Now let's see what we get with x = 1 and a = 2e-10:
>>> a = 2e-10
>>> f(1, a)
1.000000082740371e-10
>>> g(1, a)
9.999999999500001e-11
The value we should have got is (up to machine accuracy): a/2 - a^2/8 - for this particular a, the cubic and higher order terms are insignificant in the context of IEEE 754 double-precision floats, which only provide around 16 decimal digits of precision. Let's compute that value for comparison:
>>> a/2 - a**2/8
9.999999999500001e-11
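The same comparison in R (an illustrative aside, since the question is language-agnostic) should reproduce essentially the same numbers, as both use IEEE 754 doubles:
x <- 1; a <- 2e-10
print(sqrt(x + a) - sqrt(x), digits = 16)        # naive form: ~1.000000082740371e-10
print(a / (sqrt(x + a) + sqrt(x)), digits = 16)  # stable form: ~9.999999999500001e-11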

How to find where an equation equals zero

Say I have a function and I find the second derivative like so:
xyr <- D(expression(14252/(1+exp((-1/274.5315)*(x-893)))), 'x')
D2 <- D(xyr, 'x')
it gives me back, as typeof 'language':
-(14252 * (exp((-1/274.5315) * (x - 893)) * (-1/274.5315) * (-1/274.5315))/(1 +
exp((-1/274.5315) * (x - 893)))^2 - 14252 * (exp((-1/274.5315) *
(x - 893)) * (-1/274.5315)) * (2 * (exp((-1/274.5315) * (x -
893)) * (-1/274.5315) * (1 + exp((-1/274.5315) * (x - 893)))))/((1 +
exp((-1/274.5315) * (x - 893)))^2)^2)
how do I find where this is equal to 0?
A little bit clumsy to use a graph/solver for this, since your initial function has the form:
f(x) = c / ( 1 + exp(ax+b) )
Differentiating twice and solving f''(x) = 0:
f''(x) = c * a^2 * exp(ax+b) * (1+exp(ax+b)) * [exp(ax+b) - 1] / (1+exp(ax+b))^4
This is zero exactly when the numerator is zero; since a, c, exp() and 1+exp() are always positive, the only factor that can vanish is:
exp(ax+b) - 1 = 0
So:
x = -b/a
Here a = -1/274.5315 and b = a*(-893), so x = -b/a = 893.
Just maths ;)
++:
From an applied mathematician's point of view, it's always better to have a closed-form/semi-closed-form solution than to use a solver or optimization. You gain in speed and in accuracy.
From a pure mathematician's point of view, it's more elegant!
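As a quick check (illustrative; it assumes the D2 object from the question is in scope), plugging the closed-form root back into the second derivative gives zero up to floating-point rounding:
eval(D2, list(x = 893))
# effectively 0 (on the order of machine precision)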
You can use uniroot after having created a function from your derivative expression:
f = function(x) eval(D2)
uniroot(f,c(0,1000)) # The second argument is the interval over which you want to search roots.
#Result:
#$root
#[1] 893
#$f.root
#[1] -2.203307e-13
#$iter
#[1] 7
#$init.it
#[1] NA
#$estim.prec
#[1] 6.103516e-05

Roots of a Quartic Function

I came across a situation doing some advanced collision detection, where I needed to calculate the roots of a quartic function.
I wrote a function that seems to work fine using Ferrari's general solution as seen here: http://en.wikipedia.org/wiki/Quartic_function#Ferrari.27s_solution.
Here's my function:
private function solveQuartic(A:Number, B:Number, C:Number, D:Number, E:Number):Array{
// For parameters: Ax^4 + Bx^3 + Cx^2 + Dx + E
var solution:Array = new Array(4);
// Using Ferrari's formula: http://en.wikipedia.org/wiki/Quartic_function#Ferrari.27s_solution
var Alpha:Number = ((-3 * (B * B)) / (8 * (A * A))) + (C / A);
var Beta:Number = ((B * B * B) / (8 * A * A * A)) - ((B * C) / (2 * A * A)) + (D / A);
var Gamma:Number = ((-3 * B * B * B * B) / (256 * A * A * A * A)) + ((C * B * B) / (16 * A * A * A)) - ((B * D) / (4 * A * A)) + (E / A);
var P:Number = ((-1 * Alpha * Alpha) / 12) - Gamma;
var Q:Number = ((-1 * Alpha * Alpha * Alpha) / 108) + ((Alpha * Gamma) / 3) - ((Beta * Beta) / 8);
var PreRoot1:Number = ((Q * Q) / 4) + ((P * P * P) / 27);
var R:ComplexNumber = ComplexNumber.add(new ComplexNumber((-1 * Q) / 2), ComplexNumber.sqrt(new ComplexNumber(PreRoot1)));
var U:ComplexNumber = ComplexNumber.pow(R, 1/3);
var preY1:Number = (-5 / 6) * Alpha;
var RedundantY:ComplexNumber = ComplexNumber.add(new ComplexNumber(preY1), U);
var Y:ComplexNumber;
if(U.isZero()){
var preY2:ComplexNumber = ComplexNumber.pow(new ComplexNumber(Q), 1/3);
Y = ComplexNumber.subtract(RedundantY, preY2);
} else{
var preY3:ComplexNumber = ComplexNumber.multiply(new ComplexNumber(3), U);
var preY4:ComplexNumber = ComplexNumber.divide(new ComplexNumber(P), preY3);
Y = ComplexNumber.subtract(RedundantY, preY4);
}
var W:ComplexNumber = ComplexNumber.sqrt(ComplexNumber.add(new ComplexNumber(Alpha), ComplexNumber.multiply(new ComplexNumber(2), Y)));
var Two:ComplexNumber = new ComplexNumber(2);
var NegativeOne:ComplexNumber = new ComplexNumber(-1);
var NegativeBOverFourA:ComplexNumber = new ComplexNumber((-1 * B) / (4 * A));
var NegativeW:ComplexNumber = ComplexNumber.multiply(W, NegativeOne);
var ThreeAlphaPlusTwoY:ComplexNumber = ComplexNumber.add(new ComplexNumber(3 * Alpha), ComplexNumber.multiply(new ComplexNumber(2), Y));
var TwoBetaOverW:ComplexNumber = ComplexNumber.divide(new ComplexNumber(2 * Beta), W);
solution["root1"] = ComplexNumber.add(NegativeBOverFourA, ComplexNumber.divide(ComplexNumber.add(W, ComplexNumber.sqrt(ComplexNumber.multiply(NegativeOne, ComplexNumber.add(ThreeAlphaPlusTwoY, TwoBetaOverW)))), Two));
solution["root2"] = ComplexNumber.add(NegativeBOverFourA, ComplexNumber.divide(ComplexNumber.subtract(NegativeW, ComplexNumber.sqrt(ComplexNumber.multiply(NegativeOne, ComplexNumber.subtract(ThreeAlphaPlusTwoY, TwoBetaOverW)))), Two));
solution["root3"] = ComplexNumber.add(NegativeBOverFourA, ComplexNumber.divide(ComplexNumber.subtract(W, ComplexNumber.sqrt(ComplexNumber.multiply(NegativeOne, ComplexNumber.add(ThreeAlphaPlusTwoY, TwoBetaOverW)))), Two));
solution["root4"] = ComplexNumber.add(NegativeBOverFourA, ComplexNumber.divide(ComplexNumber.add(NegativeW, ComplexNumber.sqrt(ComplexNumber.multiply(NegativeOne, ComplexNumber.subtract(ThreeAlphaPlusTwoY, TwoBetaOverW)))), Two));
return solution;
}
The only issue is that I seem to get a few exceptions. Most notably when I have two real roots, and two imaginary roots.
For example, this equation:
y = 0.9604000000000001x^4 - 5.997600000000001x^3 + 13.951750054511718x^2 - 14.326264455924333x + 5.474214401412618
Returns the roots:
1.7820304835380467 + 0i
1.34041662585388 + 0i
1.3404185025061823 + 0i
1.7820323472855648 + 0i
If I graph that particular equation, I can see that the actual roots are closer to 1.2 and 2.9 (approximately). I can't dismiss the four incorrect roots as random, because they're actually two of the roots for the equation's first derivative:
y = 3.8416x^3 - 17.9928x^2 + 27.9035001x - 14.326264455924333
Keep in mind that I'm not actually looking for the specific roots to the equation I posted. My question is whether there's some sort of special case that I'm not taking into consideration.
Any ideas?
For finding roots of polynomials of degree >= 3, I've always had better results using Jenkins-Traub ( http://en.wikipedia.org/wiki/Jenkins-Traub_algorithm ) than explicit formulas.
I do not know why Ferrari's solution does not work, but I tried to use the standard numerical method (create a companion matrix and compute its eigenvalues), and I obtain the correct solution, i.e., two real roots at 1.2 and 1.9.
This method is not for the faint of heart. After constructing the companion matrix of the polynomial, you run the QR algorithm to find the eigenvalues of that matrix. Those are the zeroes of the polynomial.
I suggest you use an existing implementation of the QR algorithm, since a good deal of it is closer to a kitchen recipe than to algorithmics. But it is, I believe, the most widely used algorithm to compute eigenvalues, and thereby, roots of polynomials.
You can see my answer to a related question. I support the view of Olivier: the way to go may just be the companion matrix / eigenvalue approach (very stable, simple, reliable, and fast).
Edit
I guess it does not hurt if I reproduce the answer here, for convenience:
The numerical solution for doing this many times in a reliable, stable manner involves: (1) forming the companion matrix, (2) finding the eigenvalues of the companion matrix.
You may think this is a harder problem to solve than the original one, but this is how the solution is implemented in most production code (say, Matlab).
For the polynomial:
p(t) = c0 + c1 * t + c2 * t^2 + t^3
the companion matrix is:
[[0 0 -c0],[1 0 -c1],[0 1 -c2]]
Find the eigenvalues of such matrix; they correspond to the roots of the original polynomial.
For doing this very fast, download the eigenvalue subroutines from LAPACK, compile them, and link them to your code. Do this in parallel if you have too many (say, about a million) sets of coefficients. You could use QR decomposition, or any other stable methodology for computing eigenvalues (see the Wikipedia entry on "matrix eigenvalues").
Notice that the coefficient of t^3 is one; if this is not the case in your polynomials, you will have to divide the whole thing by that coefficient and then proceed.
Good luck.
Edit: NumPy and Octave also depend on this methodology for computing the roots of polynomials. See, for instance, this link.
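To make the companion-matrix recipe concrete, here is a minimal R sketch (illustrative only; the coefficients are the example quartic from the question, and the matrix layout follows the form given above). Base R's polyroot(), which per its documentation uses the Jenkins-Traub algorithm mentioned earlier, is a ready-made alternative:
# example quartic from the question, coefficients from highest to lowest degree
cf <- c(0.9604000000000001, -5.997600000000001, 13.951750054511718,
        -14.326264455924333, 5.474214401412618)
monic <- cf / cf[1]                 # divide through so the leading coefficient is 1
n <- length(monic) - 1
# companion matrix: ones on the subdiagonal, last column = -(c0, c1, ..., c_{n-1})
cm <- cbind(rbind(0, diag(n - 1)), -rev(monic[-1]))
eigen(cm)$values                    # roughly 1.2078, 1.9146, and 1.5612 +/- 0.1654i
polyroot(rev(cf))                   # same roots; coefficients in increasing order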
The other answers are good and sound advice. However, recalling my experience with the implementation of Ferrari's method in Forth, I think your wrong results are probably caused by: 1. a wrong implementation of the necessary and rather tricky sign combinations; 2. not yet realizing that ".. == beta" in floating point should become "abs(.. - beta) < eps"; 3. not yet having found out that there are other square roots in the code that may return complex solutions.
For this particular problem my Forth code in diagnostic mode returns:
x1 = 1.5612244897959360787072371026316680470492e+0000 -1.6542769593216835969789894020584464029664e-0001 i
--> -4.2123274051525879873007970023884313331788e-0054 3.4544674220377778501545407451201598284464e-0077 i
x2 = 1.5612244897959360787072371026316680470492e+0000 1.6542769593216835969789894020584464029664e-0001 i
--> -4.2123274051525879873007970023884313331788e-0054 -3.4544674220377778501545407451201598284464e-0077 i
x3 = 1.2078440724224197532447709413299479764843e+0000 0.0000000000000000000000000000000000000000e-0001 i
--> -4.2123274051525879873010733597821943554068e-0054 0.0000000000000000000000000000000000000000e-0001 i
x4 = 1.9146049071693819497220585618954851525216e+0000 -0.0000000000000000000000000000000000000000e-0001 i
--> -4.2123274051525879873013497171759573776348e-0054 0.0000000000000000000000000000000000000000e-0001 i
The text after "-->" follows from backsubstituting the root into the original equation.
For reference, here are Mathematica/Alpha's results to the highest possible precision I managed to set it:
Mathematica:
x1 = 1.20784407242
x2 = 1.91460490717
x3 = 1.56122449 - 0.16542770 i
x4 = 1.56122449 + 0.16542770 i
A good alternative to the methods already mentioned is TOMS Algorithm 326, which is based on the paper "Roots of Low Order Polynomials" by Terence R. F. Nonweiler, CACM (Apr 1968).
This is an algebraic solution to 3rd and 4th order polynomials that is reasonably compact, fast, and quite accurate. It is much simpler than Jenkins-Traub.
Be warned, however, that the TOMS code doesn't work all that well.
This Iowa Hills Root Solver page has code for a Quartic / Cubic root finder that is a bit more refined. It also has a Jenkins-Traub-type root finder.

Which method of matrix determinant calculation is this?

This is the approach John Carmack uses to calculate the determinant of a 4x4 matrix. From my investigations I have determined that it starts out like the Laplace expansion theorem, but then goes on to calculate 3x3 determinants, which doesn't seem to agree with any papers I've read.
// 2x2 sub-determinants
float det2_01_01 = mat[0][0] * mat[1][1] - mat[0][1] * mat[1][0];
float det2_01_02 = mat[0][0] * mat[1][2] - mat[0][2] * mat[1][0];
float det2_01_03 = mat[0][0] * mat[1][3] - mat[0][3] * mat[1][0];
float det2_01_12 = mat[0][1] * mat[1][2] - mat[0][2] * mat[1][1];
float det2_01_13 = mat[0][1] * mat[1][3] - mat[0][3] * mat[1][1];
float det2_01_23 = mat[0][2] * mat[1][3] - mat[0][3] * mat[1][2];
// 3x3 sub-determinants
float det3_201_012 = mat[2][0] * det2_01_12 - mat[2][1] * det2_01_02 + mat[2][2] * det2_01_01;
float det3_201_013 = mat[2][0] * det2_01_13 - mat[2][1] * det2_01_03 + mat[2][3] * det2_01_01;
float det3_201_023 = mat[2][0] * det2_01_23 - mat[2][2] * det2_01_03 + mat[2][3] * det2_01_02;
float det3_201_123 = mat[2][1] * det2_01_23 - mat[2][2] * det2_01_13 + mat[2][3] * det2_01_12;
return ( - det3_201_123 * mat[3][0] + det3_201_023 * mat[3][1] - det3_201_013 * mat[3][2] + det3_201_012 * mat[3][3] );
Could someone explain to me how this approach works or point me to a good write up which uses the same approach?
NOTE
If it matters this matrix is row major.
It seems to be the method that involves using minors. The mathematical aspect can be found on Wikipedia at
http://en.wikipedia.org/wiki/Determinant#Properties_characterizing_the_determinant
Basically you reduce the matrix to something smaller and easier to compute, and sum those results up (it involves some (-1) factors which are described on the page I linked to).
He uses the standard formula where you can compute, in pseudocode,
det(M) = sum(M[0, i] * det(M.minor[0, i]) * (-1)^i)
Here minor[0, i] is the matrix you obtain by crossing out the 0-th row and i-th column of your original matrix, and (-1)^i stands for the i-th power of -1.
The same (up to an overall sign) formula will work if you take a different row or if you make a loop over a column. If you think about how det is defined, it's pretty self-explanatory. Note how for a 2-matrix this becomes:
det(M) = M[0, 0] * M[1, 1] * (+1) + M[0, 1] * M[1, 0] * (-1)
or, by row 1 rather than 0,
-det(M) = M[1, 0] * M[0, 1] * (+1) + M[1, 1] * M[0, 0] * (-1)
You should recognize the standard formula for the determinant of a 2x2 matrix.
Similarly, for a 3-matrix composed as N = [[a, b, c], [d, e, f], [g, h, i]] this leads to the formula
det(N) = a * det([[e, f], [h, i]]) - b * det([[d, f], [g, i]]) + c * det([[d, e], [g, h]])
which of course becomes the textbook formula
a*e*i + b*f*g + c*d*h - c*e*g - a*f*h - b*d*i
once you expand each of 2x2 determinants.
Now if you take a 4-matrix X, you will see that to compute det(X) you need to compute the determinants of 4 minors, each minor being a 3x3 matrix; but you can also expand them further, so you'll end up with the determinants of 6 2x2 matrices with some coefficients. You should really try it yourself, similarly to what is done above for 3x3 matrices.
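As a quick numerical cross-check (an illustrative R sketch, not part of the original answers), the cofactor expansion along the last row -- which is what the Carmack code effectively computes -- agrees with a library determinant:
# expand along the last row of a random 4x4 matrix and compare with det()
set.seed(42)
m <- matrix(rnorm(16), 4, 4)
minor_det <- function(M, i, j) det(M[-i, -j, drop = FALSE])
expansion <- sum(sapply(1:4, function(j) (-1)^(4 + j) * m[4, j] * minor_det(m, 4, j)))
all.equal(det(m), expansion)   # TRUE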
