R: Understanding formulas

I'm trying to get a better understanding of what R formulas mathematically mean.
For example: lm(y ~ x) would fit the line y = Ax + B.
Would lm(y ~ x + z) be fitting the plane y = Ax + Bz + C?
Would lm(y ~ x + z + x:z) be fitting the surface y = Ax + Bz + Cxz + D?

Your understanding is correct! Though it may help to think about it a bit more abstractly. A linear model (lm) only means that the fit is linear in its parameters: each term enters as Ax, not Ax^2 or Asin(x) or anything fancier than that.
But that does not mean it only fits 1 to 3 parameters. Imagine that foods represent dimensions: grains, fruits, vegetables, meats, and dairy make up our 5 "dimensions of food". These things are clearly related--and maybe not even independent--but still not describable in exactly the same way. We can think of our model as the tool that estimates our coefficients--which in this food example we can imagine as "flavors", like sweet, spicy, sour, etc.
Our model then takes points in different dimensions (food groups) and attempts to relate them through their coefficient values (flavors) in a function. The model then allows us to describe other foods/flavors. This is really what most models "do": they "train" themselves on annotated data and build a relationship--linear models just treat flavors as directly proportional to the amount of each food group.
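To make the third question concrete, here is a minimal sketch with simulated data (all names and numbers are made up for illustration) showing that lm(y ~ x + z + x:z) recovers the coefficients of y = Ax + Bz + Cxz + D:
set.seed(1)
x <- runif(200); z <- runif(200)
y <- 1 + 2 * x - 3 * z + 5 * x * z + rnorm(200, sd = 0.1)  # D = 1, A = 2, B = -3, C = 5
fit <- lm(y ~ x + z + x:z)
coef(fit)  # estimates come back close to 1, 2, -3 and 5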
I hope this explanation was helpful. If there's anything that's unclear, please let me know.

Related

What does a colon (:) do in a linear mixed effects model analysis?

Let me clarify that I am a complete beginner at R.
I'm working on a problem and having a bit of trouble understanding a formula I'm supposed to use in a linear mixed effects model analysis of a dataset, more specifically this formula,
ModelName <- lmer(outcome ~ predictor1 + predictor2 + predictor1:predictor2 + (random_structure), data = DatasetName)
I don't know what the predictor1:predictor2 part of it means, could anyone please help me understand or link to something I can read to understand?
I've run the code and it gives an additional output for the predictor1:predictor2 part of the formula, which doesn't happen when you don't include that part.
Wow! You may be new to R, but you ask a great question!
As you probably know already, the + operator separates terms in a model.
Y ~ a + b + c means that the response is modeled by a linear combination of a, b, and c.
The colon operator denotes interaction between the items it separates, for example:
Y ~ a + b + a:b means that the response is modeled by a linear combination of a, b, and the interaction between a and b.
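As a small illustration with made-up numeric data, you can inspect the design matrix R builds from such a formula; for numeric predictors the a:b term simply adds a column that is the element-wise product of a and b (for factors it would instead add columns for combinations of levels):
d <- data.frame(a = c(1, 2, 3, 4), b = c(10, 20, 30, 40))
model.matrix(~ a + b + a:b, data = d)
# columns: (Intercept), a, b, a:b -- the a:b column is a * b, row by row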
I hope this helps!
Rose Hartman explains how interactions affect linear models, and why it’s important to consider them in Understanding Interactions in Linear Models https://education.arcus.chop.edu/understanding-interactions/

Min timepoints to model longitudinal data with natural quadratic splines?

I'm new to applying splines to longitudinal data, so here comes my question:
I have some longitudinal data on growing mice at 3 timepoints: x, y and z months. It's known from the existing literature that growth trajectories in this type of data are usually better modeled in non-linear terms.
However, since I have only 3 timepoints, I wonder whether that is enough to apply a natural quadratic spline to the age variable in my lmer model?
Edit: I mean, is
lmer <- mincLmer(File ~ ns(Age, 2) * Genotype + Sex + (1 | Subj_ID), data, mask = mask)
a legitimate way to go about it?
I'm sorry if this is a stupid question - I'm just a lonely PhD student without supervision, and I would be super-grateful for any advice!!!
Marina
With the nls() function you can fit your data to whatever non-linear function you want. From the biological point of view, your data are probably described by a Gompertz-like (sigmoidal) function, but since you have only three time points, you can probably simplify that kind of function to an exponential one. Try the following:
fit_formula <- dependent_variable ~ a * exp(b * independent_variable)
result <- nls(formula = fit_formula, data = your_Dataset)
It will probably give you an error the first time, something like singular gradient matrix at initial parameter estimates; if this happens, add the start argument, providing starting values for a and b that are closer to the true values. Remember that the column names in your dataset must match the variable names used in the formula.
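A minimal sketch of that workflow with simulated data (the column names, values and starting guesses below are all made up; adapt them to your dataset):
set.seed(1)
your_Dataset <- data.frame(age = c(1, 2, 3))
your_Dataset$size <- 5 * exp(0.4 * your_Dataset$age) + rnorm(3, sd = 0.05)  # toy growth data
fit_formula <- size ~ a * exp(b * age)
result <- nls(formula = fit_formula, data = your_Dataset,
              start = list(a = 4, b = 0.3))  # rough guesses near the expected scale
coef(result)                                 # estimates close to a = 5, b = 0.4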

R: Linear Regression with N Features

I saw quite a few examples of how to do regression (linear, multiple... etc.) but on every example I saw, you had to define every single feature in the formula...
linearMod <- lm(Y ~ x1 + x2 + x3 + ..., data=myData)
Well, we used TSFresh to generate more features. Around 100. So how am I supposed to do this now? I don't really want to type in x1 .. all the way to .. x100.
In Python's scikit-learn I could just put in all the data:
lm = linear_model.LinearRegression()
model = lm.fit(X,y)
And then repeat this for each 'feature group' to create a multiple linear regression.
Is there a way to do this in R? Or am I doing it wrong? Maybe another approach?
Originally we had 8 features/properties per row, and with TSFresh we generated more of them (mean, STD and so on).
And every one of those features has a pretty linear influence on the Y result. So how can I define a multiple linear model that simply uses all of the extended features, ideally without having to list each one by hand?
So for example, one formula would use features 1-12 for Y, the next one features 13-24 for Y, and so on. Is there an easy way to do this?
If you want to regress on all variables except Y you can do
lm(Y ~ ., data = myData)
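If you only want a particular feature group rather than everything, one option (a sketch; the column names x1 ... x100 and the data frame myData are assumptions taken from the question) is to build the formula programmatically:
vars <- paste0("x", 1:12)               # feature group 1: columns x1 ... x12
f <- reformulate(vars, response = "Y")  # builds  Y ~ x1 + x2 + ... + x12
fit_group1 <- lm(f, data = myData)
summary(fit_group1)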

Is there an implementation of loess in R with more than 3 parametric predictors or a trick to a similar effect?

Calling all experts on local regression and/or R!
I have run into a limitation of the standard loess function in R and hope you have some advice. The current implementation supports only 1-4 predictors. Let me set out our application scenario to show why this can easily become a problem as soon as we want to employ globally fit parametric covariables.
Essentially, we have a spatial distortion s(x,y) overlaid over a number of measurements z:
z_i = s(x_i,y_i) + v_{g_i}
These measurements z can be grouped by the same underlying undistorted measurement value v for each group g. The group membership g_i is known for each measurement, but the underlying undistorted measurement values v_g for the groups are not known and should be determined by (global, not local) regression.
We need to estimate the two-dimensional spatial trend s(x,y), which we then want to remove. In our application, say there are 20 groups of at least 35 measurements each, in the most simple scenario. The measurements are randomly placed. Taking the first group as reference, there are thus 19 unknown offsets.
The below code for toy data (with a spatial trend in one dimension x) works for two or three offset groups.
Unfortunately, the loess call fails for four or more offset groups with the error message
Error in simpleLoess(y, x, w, span, degree, parametric, drop.square, normalize, :
  only 1-4 predictors are allowed
I tried overriding the restriction and got
k>d2MAX in ehg136. Need to recompile with increased dimensions.
How easy would that be to do? I cannot find a definition of d2MAX anywhere, and it seems this might be hardcoded -- the error is apparently triggered by line #1359 in loessf.f
if(k .gt. 15) call ehg182(105)
Alternatively, does anyone know of an implementation of local regression with global (parametric) offset groups that could be applied here?
Or is there a better way of dealing with this? I tried lme with correlation structures but that seems to be much, much slower.
Any comments would be greatly appreciated!
Many thanks,
David
###
#
# loess with parametric offsets - toy data demo
#
x <- seq(0, 9, .1)
x.N <- length(x)
o <- c(0.4, -0.8, 1.2 #, -0.2  # works for three but not four
       )                       # these are the (unknown) offsets
o.N <- length(o)
f <- sapply(seq(o.N), function(n) {
  ifelse((seq(x.N) <= n     * x.N / (o.N + 1) &
          seq(x.N) >  (n-1) * x.N / (o.N + 1)),
         1, 0)
})
f <- f[sample(NROW(f)), ]
y <- sin(x) + rnorm(length(x), 0, .1) + f %*% o
s.fs <- sapply(seq(NCOL(f)), function(i) { paste('f', i, sep = '') })
s <- paste(c('y~x', s.fs), collapse = '+')
d <- data.frame(x, y, f)
names(d) <- c('x', 'y', s.fs)
l <- loess(formula(s), parametric = s.fs, drop.square = s.fs, normalize = F,
           data = d, span = 0.4)
yp <- predict(l, newdata = d)
plot(x, y, pch = '+', ylim = c(-3, 3), col = 'red')  # input data
points(x, yp, pch = 'o', col = 'blue')               # fit of that
d0 <- d; d0$f1 <- d0$f2 <- d0$f3 <- 0
yp0 <- predict(l, newdata = d0)
points(x, y - f %*% o)                               # spatial distortion
lines(x, yp0, pch = '+')                             # estimate of that
op <- sapply(seq(NCOL(f)), function(i) { (yp - yp0)[!!f[, i]][1] })
cat("Demo offsets:", o, "\n")
cat("Estimated offsets:", format(op, digits = 1), "\n")
Why don't you use an additive model for this? Package mgcv will handle this sort of model, if I understand your question, just fine. I might have this wrong, but the code you show relates y ~ x, whereas your question describes z ~ s(x, y) + g. What I show below for gam() is for a response z modelled by a spatial smooth in x and y, with g estimated parametrically and stored as a factor in the data frame:
require(mgcv)
m <- gam(z ~ s(x,y) + g, data = foo)
Or have I misunderstood what you wanted? If you want to post a small snippet of data I can give a proper example using mgcv...?
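For instance, a self-contained sketch with simulated data (the data frame foo, the smooth surface and the group offsets below are all made up for illustration):
library(mgcv)
set.seed(1)
n <- 700
foo <- data.frame(x = runif(n), y = runif(n),
                  g = factor(rep(1:20, length.out = n)))
offsets <- c(0, rnorm(19))                          # first group as reference
foo$z <- sin(2 * pi * foo$x) * cos(2 * pi * foo$y) +
  offsets[as.integer(foo$g)] + rnorm(n, sd = 0.1)
m <- gam(z ~ s(x, y) + g, data = foo)
coef(m)[2:20]                                       # estimated offsets for groups 2..20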

Fitting polynomials to data

Is there a way, given a set of values (x,f(x)), to find the polynomial of a given degree that best fits the data?
I know polynomial interpolation, which is for finding a polynomial of degree n given n+1 data points, but here there are a large number of values and we want to find a low-degree polynomial (find best linear fit, best quadratic, best cubic, etc.). It might be related to least squares...
More generally, I would like to know the answer when we have a multivariate function -- points like (x,y,f(x,y)), say -- and want to find the best polynomial (p(x,y)) of a given degree in the variables. (Specifically a polynomial, not splines or Fourier series.)
Both theory and code/libraries (preferably in Python, but any language is okay) would be useful.
Thanks for everyone's replies. Here is another attempt at summarizing them. Pardon if I say too many "obvious" things: I knew nothing about least squares before, so everything was new to me.
NOT polynomial interpolation
Polynomial interpolation is fitting a polynomial of degree n given n+1 data points, e.g. finding a cubic that passes exactly through four given points. As said in the question, this was not what I wanted—I had a lot of points and wanted a small-degree polynomial (which will only approximately fit, unless we've been lucky)—but since some of the answers insisted on talking about it, I should mention them :) Lagrange polynomial, Vandermonde matrix, etc.
What is least-squares?
"Least squares" is a particular definition/criterion/"metric" of "how well" a polynomial fits. (There are others, but this is simplest.) Say you are trying to fit a polynomial
p(x,y) = a + bx + cy + dx^2 + ey^2 + fxy
to some given data points (x_i, y_i, Z_i) (where "Z_i" was "f(x_i, y_i)" in the question). With least-squares the problem is to find the "best" coefficients (a,b,c,d,e,f), such that what is minimized (kept "least") is the "sum of squared residuals", namely
S = Σ_i (a + bx_i + cy_i + dx_i^2 + ey_i^2 + fx_i y_i - Z_i)^2
Theory
The important idea is that if you look at S as a function of (a,b,c,d,e,f), then S is minimized at a point at which its gradient is 0. This means that for example ∂S/∂f=0, i.e. that
Σ_i 2(a + … + fx_i y_i - Z_i) x_i y_i = 0
and similar equations for a, b, c, d, e.
Note that these are just linear equations in a…f. So we can solve them with Gaussian elimination or any of the usual methods.
This is still called "linear least squares", because although the function we wanted was a quadratic polynomial, it is still linear in the parameters (a,b,c,d,e,f). Note that the same thing works when we want p(x,y) to be any "linear combination" of arbitrary functions fj, instead of just a polynomial (= "linear combination of monomials").
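Equivalently (a standard restatement, added here in LaTeX notation for reference): if A is the matrix whose i-th row holds the basis values (1, x_i, y_i, x_i^2, y_i^2, x_i y_i), c the coefficient vector (a, ..., f) and Z the vector of the Z_i, then setting the gradient of S to zero gives the normal equations
S(c) = \lVert A c - Z \rVert^2, \qquad \nabla S = 0 \;\Longleftrightarrow\; A^{\top} A \, c = A^{\top} Z
which is exactly the linear system that Gaussian elimination (or a more stable QR/SVD-based routine) solves.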
Code
For the univariate case (when there is only the variable x — the f_j are the monomials x^j), there is Numpy's polyfit:
>>> import numpy
>>> xs = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> ys = [1.1, 3.9, 11.2, 21.5, 34.8, 51, 70.2, 92.3, 117.4, 145.5]
>>> p = numpy.poly1d(numpy.polyfit(xs, ys, deg=2))
>>> print(p)
       2
1.517 x + 2.483 x + 0.4927
For the multivariate case, or linear least squares in general, there is SciPy. As explained in its documentation, it takes a matrix A of the values f_j(x_i). (The theory is that it finds the Moore-Penrose pseudoinverse of A.) With our above example involving (x_i, y_i, Z_i), fitting a polynomial means the f_j are the monomials x^p y^q. The following finds the best quadratic (or best polynomial of any other degree, if you change the "degree = 2" line):
from scipy import linalg
import random
n = 20
x = [100*random.random() for i in range(n)]
y = [100*random.random() for i in range(n)]
Z = [(x[i]+y[i])**2 + 0.01*random.random() for i in range(n)]
degree = 2
A = []
for i in range(n):
    A.append([])
    for xd in range(degree+1):
        for yd in range(degree+1-xd):
            A[i].append((x[i]**xd)*(y[i]**yd))  # f_j(x_i)
c, _, _, _ = linalg.lstsq(A, Z)
j = 0
for xd in range(0, degree+1):
    for yd in range(0, degree+1-xd):
        print(" + (%.2f)x^%dy^%d" % (c[j], xd, yd), end="")
        j += 1
prints
+ (0.01)x^0y^0 + (-0.00)x^0y^1 + (1.00)x^0y^2 + (-0.00)x^1y^0 + (2.00)x^1y^1 + (1.00)x^2y^0
so it has discovered that the polynomial is x^2 + 2xy + y^2 + 0.01. [The last term is sometimes -0.01 and sometimes 0, which is to be expected because of the random noise we added.]
Alternatives to Python+Numpy/Scipy are R and Computer Algebra Systems: Sage, Mathematica, Matlab, Maple. Even Excel might be able to do it. Numerical Recipes discusses methods to implement it ourselves (in C, Fortran).
Concerns
It is strongly influenced by how the points are chosen. When I had x = y = range(20) instead of the random points, it always produced 1.33x^2 + 1.33xy + 1.33y^2, which was puzzling... until I realised that because I always had x[i] = y[i], the polynomials were the same: x^2 + 2xy + y^2 = 4x^2 = (4/3)(x^2 + xy + y^2). So the moral is that it is important to choose the points carefully to get the "right" polynomial. (If you can choose, you should use Chebyshev nodes for polynomial interpolation; I'm not sure whether the same holds for least squares as well.)
Overfitting: higher-degree polynomials can always fit the data better. If you change the degree to 3 or 4 or 5, it still mostly recognizes the same quadratic polynomial (the coefficients of the higher-degree terms are 0), but for larger degrees it starts fitting higher-degree polynomials. Even with degree 6, though, taking a larger n (more data points, say 200 instead of 20) still fits the quadratic polynomial. So the moral is to avoid overfitting, for which it may help to take as many data points as possible.
There might be issues of numerical stability I don't fully understand.
If you don't need a polynomial, you can obtain better fits with other kinds of functions, e.g. splines (piecewise polynomials).
Yes, the way this is typically done is by using least squares. There are other ways of specifying how well a polynomial fits, but the theory is simplest for least squares. The general theory is called linear regression.
Your best bet is probably to start with Numerical Recipes.
R is free and will do everything you want and more, but it has a big learning curve.
If you have access to Mathematica, you can use the Fit function to do a least squares fit. I imagine Matlab and its open source counterpart Octave have a similar function.
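For the R route mentioned above, here is a sketch reusing the ten (xs, ys) points from the numpy example earlier in this answer (raw = TRUE asks for plain monomials rather than orthogonal polynomials):
xs <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
ys <- c(1.1, 3.9, 11.2, 21.5, 34.8, 51, 70.2, 92.3, 117.4, 145.5)
fit <- lm(ys ~ poly(xs, 2, raw = TRUE))  # least-squares quadratic
coef(fit)  # roughly 0.49, 2.48, 1.52 for the constant, x and x^2 terms, matching the numpy fit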
For the (x, f(x)) case:
import numpy
x = numpy.arange(10)
y = x**2
coeffs = numpy.polyfit(x, y, deg=2)
poly = numpy.poly1d(coeffs)
print(poly)
yp = numpy.polyval(poly, x)
print(yp - y)
Bear in mind that a polynomial of higher degree ALWAYS fits the data better. Polynomials of higher degree typically lead to highly improbable functions (see Occam's razor), though (overfitting). You want to find a balance between simplicity (degree of the polynomial) and fit (e.g. least-squares error). Quantitatively, there are tests for this: the Akaike information criterion and the Bayesian information criterion. These tests give a score indicating which model is to be preferred.
If you want to fit the (x_i, f(x_i)) to a polynomial of degree n, then you would set up a linear least-squares problem with the data rows (1, x_i, x_i^2, ..., x_i^n, f(x_i)). This will return a set of coefficients (c0, c1, ..., cn) so that the best-fitting polynomial is y = c0 + c1*x + c2*x^2 + ... + cn*x^n.
You can generalize this to more than one independent variable by including powers of y and combinations of x and y in the problem.
Lagrange polynomials (as @j w posted) give you an exact fit at the points you specify, but with polynomials of degree more than say 5 or 6 you can run into numerical instability.
Least squares gives you the "best fit" polynomial with error defined as the sum of squares of the individual errors (take the distance along the y-axis between the points you have and the function that results, square them, and sum them up). The MATLAB polyfit function does this, and with multiple return arguments, you can have it automatically take care of scaling/offset issues (e.g. if you have 100 points all between x = 312.1 and 312.3, and you want a 6th-degree polynomial, you're going to want to calculate u = (x - 312.2)/0.1 so the u-values are distributed between -1 and +1).
NOTE that the results of least-squares fits are strongly influenced by the distribution of x-axis values. If the x-values are equally spaced, then you'll get larger errors at the ends. If you have a case where you can choose the x values and you care about the maximum deviation from your known function and an interpolating polynomial, then the use of Chebyshev polynomials will give you something that is close to the perfect minimax polynomial (which is very hard to calculate). This is discussed at some length in Numerical Recipes.
Edit: From what I gather, this all works well for functions of one variable. For multivariate functions it is likely to be much more difficult if the degree is more than, say, 2. I did find a reference on Google Books.
At college we had this book, which I still find extremely useful: Conte, de Boor; Elementary Numerical Analysis; McGraw-Hill. The relevant section is 6.2: Data Fitting.
The example code comes in FORTRAN, and the listings are not very readable either, but the explanations are deep and clear at the same time. You end up understanding what you are doing, not just doing it (as is my experience of Numerical Recipes).
I usually start with Numerical Recipes but for things like this I quickly have to grab Conte-de Boor.
Maybe it's better to post some code... it's a bit stripped down, but the most relevant parts are there. It relies on numpy, obviously!
import numpy

# Memoize is assumed to be a simple memoizing wrapper (left out of this stripped-down listing).

def Tn(n, x):
    if n == 0:
        return 1.0
    elif n == 1:
        return float(x)
    else:
        return (2.0 * x * Tn(n - 1, x)) - Tn(n - 2, x)

class ChebyshevFit:

    def __init__(self):
        self.Tn = Memoize(Tn)

    def fit(self, data, degree=None):
        """Fit the data by a least-squares linear combination of Chebyshev polynomials.

        cfr: Conte, de Boor; Elementary Numerical Analysis; McGraw-Hill (6.2: Data Fitting)
        """
        if degree is None:
            degree = 5
        data = sorted(data)
        self.range = start, end = (min(data)[0], max(data)[0])
        self.halfwidth = (end - start) / 2.0
        vec_x = [(x - start - self.halfwidth) / self.halfwidth for (x, y) in data]
        vec_f = [y for (x, y) in data]
        mat_phi = [numpy.array([self.Tn(i, x) for x in vec_x]) for i in range(degree + 1)]
        mat_A = numpy.inner(mat_phi, mat_phi)
        vec_b = numpy.inner(vec_f, mat_phi)
        self.coefficients = numpy.linalg.solve(mat_A, vec_b)
        self.degree = degree

    def evaluate(self, x):
        """Use the Clenshaw algorithm.

        http://en.wikipedia.org/wiki/Clenshaw_algorithm
        """
        x = (x - self.range[0] - self.halfwidth) / self.halfwidth
        b_2 = float(self.coefficients[self.degree])
        b_1 = 2 * x * b_2 + float(self.coefficients[self.degree - 1])
        for i in range(2, self.degree):
            b_1, b_2 = 2.0 * x * b_1 + self.coefficients[self.degree - i] - b_2, b_1
        else:
            b_0 = x * b_1 + self.coefficients[0] - b_2
        return b_0
Remember, there's a big difference between approximating the polynomial and finding an exact one.
For example, if I give you 4 points, you could
Approximate a line with a method like least squares
Approximate a parabola with a method like least squares
Find an exact cubic function through these four points.
Be sure to select the method that's right for you!
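A small R sketch of that difference (the four points are made up): a least-squares line only approximates them, while a cubic through four points fits them exactly:
x <- c(0, 1, 2, 3)
y <- c(1, 2, 0, 5)                           # four arbitrary points
line_fit  <- lm(y ~ x)                       # approximate: least-squares line
cubic_fit <- lm(y ~ poly(x, 3, raw = TRUE))  # exact: cubic through all four points
round(resid(line_fit), 3)                    # nonzero residuals
round(resid(cubic_fit), 3)                   # all (numerically) zero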
It's rather easy to scare up a quick fit using Excel's matrix functions if you know how to represent the least squares problem as a linear algebra problem. (That depends on how reliable you think Excel is as a linear algebra solver.)
The Lagrange polynomial is in some sense the "simplest" interpolating polynomial that fits a given set of data points.
It is sometimes problematic because it can vary wildly between data points.
