Smooth change of day length

I want to model what it might look like to have the day length change smoothly over time (but remain sinusoidal). The formula for a "chirp", which changes the instantaneous frequency, is given at https://en.wikipedia.org/wiki/Chirp, but it doesn't look right when coded for a 24 h period over 5 days followed by a transition to a 12 h period over another 5 days:
period = list( c(24,24,5), c(24,12,5) )
alpha = list( c(0,5), c(0,5) )
s_samples = 100
A=50
O=50
simulatedData = data.frame(t=numeric(), v=numeric()) #initialise the output
daySteps = c(0, cumsum(unlist(period)[seq(3,length(unlist(period)), by=3)])) #set up the period starts and ends to set over, starting at 0
##Cycle over each of the items in the list
for (set in seq(period)) {
  t_points = s_samples*period[[set]][3]
  t = seq(daySteps[set], daySteps[set+1], length.out=t_points) # make the time
  slope = (24/period[[set]][2]-24/period[[set]][1])/(max(t)-min(t)) # get the slope
  f0 = 24/period[[set]][1] - slope*(min(t)) # find the freq when t0
  c = (24/period[[set]][2]-f0)/(max(t)) # calculate the chirp, see https://en.wikipedia.org/wiki/Chirp and https://dsp.stackexchange.com/questions/57904/chirp-after-t-seconds
  wt = ((c*(t^2))/2) + f0*(t) # calc the freq
  a = alpha[[set]][1]
  v = A * cos(2*pi*wt - a) + O
  simulatedData = rbind(simulatedData, data.frame(t, v))
}
plot(simulatedData, type="l", lwd=2)
t = seq(0,sum(unlist(period)[seq(3,length(unlist(period)), by=3)]), by=1/24)
points(t, A*cos(2*pi*t)+O, col=3, type="l", lty=2)
points(t, A*cos(2*(24/12)*pi*t)+O, col=4, type="l", lty=2)
The first 24 h cycles are perfect, as expected, and the last part of the second 5 days matches a 12 h cycle, but the first part of that period looks 180 degrees out of phase. What's wrong?

I think you're making this a lot more complex than it needs to be. Remember that many R functions are already vectorized. The following function will produce a linear chirp from frequency f0 to f1 between times t0 and t1, with an optional phi parameter to specify at what point in the cycle you want your sequence to begin:
chirp <- function(f0, f1, t0, t1, phi = 0, n_steps = 1000) {
  C <- (f1 - f0) / (t1 - t0)
  x <- seq(t0, t1, length.out = n_steps)
  y <- sin(2 * pi * (C / 2 * (x - t0)^2 + f0 * (x - t0)) + phi) # Ref Wikipedia
  data.frame(x, y)
}
Of course, it can also produce the static first half of your plot by "chirping" between two identical frequencies, so we can get a data frame of x, y points on the plot by doing
df <- rbind(chirp(1, 1, 0, 5), chirp(1, 2, 5, 10))
Which results in:
plot(df$x, df$y, type = "l")
Note that between 5 and 10 days there are 7.5 cycles, so if you wanted to smoothly continue frequency 2, you would need to set the phi parameter to a half cycle (i.e. to pi):
df <- rbind(df, chirp(2, 2, 10, 15, phi = pi))
plot(df$x, df$y, type = "l")
Note that the phases of the chirped signal and a frequency-2 signal will only match at the end of the chirp if the chirp occurs over an even number of periods of the original signal. For an odd number, the phase will be out by 180 degrees; this is a mathematical consequence of a linear chirp. To see this, let's use our function to chirp over 6 time units, so that the phases match at t = 10.
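The code defining these data frames isn't included above; a plausible reconstruction, assuming df2 and df3 are the pure frequency-1 and frequency-2 reference signals and df is rebuilt so the chirp runs from t = 4 to t = 10, would be:
# hypothetical reconstruction: hold frequency 1 until t = 4, chirp over 6 time
# units (an even number of original periods), then hold frequency 2 with no phase offset
df  <- rbind(chirp(1, 1, 0, 4), chirp(1, 2, 4, 10), chirp(2, 2, 10, 15))
df2 <- chirp(1, 1, 0, 15) # pure frequency-1 reference (green, dashed)
df3 <- chirp(2, 2, 0, 15) # pure frequency-2 reference (blue, dashed)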
plot(df$x, df$y, type = "l")
lines(df2$x, df2$y, lty = 2, col = "green")
lines(df3$x, df3$y, lty = 2, col = "blue")
lines(df$x, df$y)

Related

Plotting a function in R with multiple values for variable

I need to plot the log-likelihood function in R. I have a fixed n, multiple values for k, and arbitrary pi between 0 and 1.
I tried this code, but the result is not what I want:
n<-10
k<-c(8,8,5,4,6)
pi = seq(0,1,length=100)
l = function(pi){k*log(pi) + (n-k) * log(1-pi)}
plot(x=pi,y=l(pi),ylab="l(pi)",xlab="q",type="l",ylim=c(-10,0))
The plot is far from a normal-looking curve. Please help.
You might be looking for curve. Just define your function in the first curve argument, using x as the variable and the first value k[1]. In a second step, loop the same over the remaining k[2:5], using the add=TRUE argument.
n <- 10
k <- c(8, 8, 5, 4, 6)  # as in the question
curve(k[1]*log(x) + (n - k[1]) * log(1 - x), ylim = c(-40, 0))
invisible(sapply(seq(k[-1]) + 1, function(i)
  curve(k[i]*log(x) + (n - k[i]) * log(1 - x), col = i, add = TRUE)))
legend("bottomright", legend = k, lty = 1, col = 1:5, title = "k", horiz = TRUE, cex = .8)
The problem lies in how R handles your 'multiple values of k'.
Because k is a vector of length 5, and pi is a vector of length 100, k is 'recycled', meaning that R pairs each pi with a value of k in turn:
pi: 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08
k: 8 8 5 4 6 [back to:] 8 8 5 ...
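A quick way to see the recycling in action (a small illustration, not part of the original code):
pi_grid <- seq(0, 1, length = 100)
k <- c(8, 8, 5, 4, 6)
head(cbind(pi = pi_grid, k = k), 8) # k wraps around after every fifth row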
Instead, you want to plot each k separately:
n <- 10  # n comes from the question
pi <- seq(0, 1, length = 100)
l <- function(pi, k) k * log(pi) + (n - k) * log(1 - pi)
plot(x = pi, y = l(pi, 8), ylab = "l(pi)", xlab = "q",
     ylim = c(-10, 0), type = "l")
lines(x = pi, y = l(pi, 5), col = 'red')
lines(x = pi, y = l(pi, 4), col = 'blue')
lines(x = pi, y = l(pi, 6), col = 'orange')

How to calculate x-values of the convolution of two distributions?

(This question may be better suited for https://stats.stackexchange.com/, but I think my question is really about how to calculate what I want in R.)
I'm trying to add multiple distributions together, and then look at the resulting distribution. I'll illustrate my problem with a simple example using normally distributed random variables, p1 and p2.
set.seed(21)
N <- 1000
p1 <- rnorm(N, mean = 0, sd = 1)
p2 <- rnorm(N, mean = 10, sd = 1)
Which we can plot:
library(ggplot2); library(tidyr); library(dplyr) # for ggplot(), gather(), %>% and bind_rows() below
data.frame(p1, p2) %>%
  gather(key = "dist", value = "value") %>%
  ggplot(aes(value, color = dist)) + geom_density()
I can add these distributions together using convolve. Okay, so that's fine. But what I can't figure out is how to plot the sum of the distributions with the appropriate x-values. In the examples I've seen, it looks like the x-values are manually added in a way that doesn't seem "accurate", for lack of a better word. See this example.
I can "add" them together and plot:
pdf.c <- convolve(pdf1.y, pdf2.y, type = "open")
plot(pdf.c, type="l")
My question is how to get the corresponding x-values of the new distribution. I'm sure I'm missing something from a foundational statistics point of view.
Appendix for pdf1 and pdf2:
set.seed(21)
N <- 1000
p1 <- rnorm(N, mean = 0, sd = 1)
p2 <- rnorm(N, mean = 10, sd = 1)
pdf1.x <- density(p1)$x
pdf2.x <- density(p2)$x
pdf1.y <- density(p1)$y / sum(density(p1)$y)
pdf2.y <- density(p2)$y / sum(density(p2)$y)
df1 <- data.frame(pdf.x = pdf1.x, pdf.y = pdf1.y, dist = "1", stringsAsFactors = FALSE)
df2 <- data.frame(pdf.x = pdf2.x, pdf.y = pdf2.y, dist = "2", stringsAsFactors = FALSE)
df <- bind_rows(df1, df2)
Assume that p1 and p2 are discretized uniformly, with the same interval dx between successive x values. (I see that you have discretized p1 and p2 at random points -- that's not the same, and, without thinking about it some more, I don't have an answer for that.) Let x1_k = x1_0 + (k - 1)*dx, k = 1, 2, 3, ..., n1 be the points at which p1 is discretized, and x2_k = x2_0 + (k - 1)*dx, k = 1, 2, 3, ..., n2 be the points at which p2 is discretized.
Each point xi_k = xi_0 + (k - 1)*dx represents the center of a bar of width dx and height pi(xi_k), i = 1, 2. Thus the mass of the bar is dx*pi(xi_k), and the total mass over all bars approaches 1 as dx approaches 0. These masses are the values which are convolved. If the discretized masses are normalized to sum to 1, then their convolution will also sum to 1.
To be very careful, the range over which each distribution is discretized is xi_0 - dx/2 to xi_0 + (ni - 1)*dx + dx/2. After computing the convolution, the range for the result likewise extends dx/2 below its first point and dx/2 above its last point.
The convolution has n = n1 + n2 - 1 points, namely x1_0 + x2_0 + (k - 1)*dx, k = 1, 2, 3, ..., n1 + n2 - 1. The first point is x1_0 + x2_0 (i.e. the first point of p1 plus the first point of p2) and the last point is x1_0 + x2_0 + (n1 + n2 - 2)*dx = (x1_0 + (n1 - 1)*dx) + (x2_0 + (n2 - 1)*dx) (i.e. the last point of p1 plus the last point of p2). From this you can construct the x values corresponding to the convolution via seq or something like that.
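As a concrete sketch using the question's objects (and assuming the two density() grids share a common spacing dx, which is only approximately true for density()'s default output), the x values can be built as follows; note that convolve() needs rev() on its second argument to give the textbook convolution (see ?convolve):
dx <- pdf1.x[2] - pdf1.x[1]                            # grid spacing (assumed equal for both pdfs)
pdf.c <- convolve(pdf1.y, rev(pdf2.y), type = "open")  # convolution of the two mass vectors
conv.x <- pdf1.x[1] + pdf2.x[1] + (seq_along(pdf.c) - 1) * dx # n1 + n2 - 1 points
plot(conv.x, pdf.c, type = "l")                        # the peak should sit near 0 + 10 = 10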

topoplot in ggplot2 – 2D visualisation of e.g. EEG data

Can ggplot2 be used to produce a so-called topoplot (often used in neuroscience)?
Sample data:
label x y signal
1 R3 0.64924459 0.91228430 2.0261520
2 R4 0.78789621 0.78234410 1.7880972
3 R5 0.93169511 0.72980685 0.9170998
4 R6 0.48406513 0.82383895 3.1933129
Full sample data.
Rows represent individual electrodes. Columns x and y represent the projection into 2D space and the column signal is essentially the z-axis representing voltage measured at a given electrode.
stat_contour doesn't work, apparently due to unequal grid.
geom_density_2d only provides a density estimation of x and y.
geom_raster is either not suited for this task or I must be using it incorrectly, since it quickly runs out of memory.
Smoothing (like in the image on the right) and head contours (nose, ears) aren't necessary.
I want to avoid Matlab and transforming the data so that it fits this or that toolbox… Many thanks!
Update (26 January 2016)
The closest I've been able to get to my objective is via
library(colorRamps)
ggplot(channels, aes(x, y, z = signal)) +
  stat_summary_2d() +
  scale_fill_gradientn(colours = matlab.like(20))
which produces an image like this:
Update 2 (27 January 2016)
I've tried @alexforrence's approach with the full data and this is the result:
It's a great start, but there are a couple of issues:
The last call (ggplot()) takes about 40 seconds on an Intel i7 4790K while Matlab toolboxes manage to generate these almost instantly; my ‘emergency solution’ above takes about a second.
As you can see, the upper and lower border of the central part appear to be ‘sliced’ – I'm not sure what causes this but it could be the third issue.
I'm getting these warnings:
1: Removed 170235 rows containing non-finite values (stat_contour).
2: Removed 170235 rows containing non-finite values (stat_contour).
Update 3 (27 January 2016)
Comparison between two plots produced with different interp(xo, yo) and stat_contour(binwidth) values:
Ragged edges appear if one chooses a low interp(xo, yo) resolution, in this case xo/yo = seq(0, 1, length = 100):
Here's a potential start:
First, we'll attach some packages. I'm using akima to do linear interpolation, though it looks like EEGLAB uses some sort of spherical interpolation here? (the data was a little sparse to try it).
library(ggplot2)
library(akima)
library(reshape2)
Next, reading in the data:
dat <- read.table(text = " label x y signal
1 R3 0.64924459 0.91228430 2.0261520
2 R4 0.78789621 0.78234410 1.7880972
3 R5 0.93169511 0.72980685 0.9170998
4 R6 0.48406513 0.82383895 3.1933129")
We'll interpolate the data, and stick that in a data frame.
datmat <- interp(dat$x, dat$y, dat$signal,
                 xo = seq(0, 1, length = 1000),
                 yo = seq(0, 1, length = 1000))
datmat2 <- melt(datmat$z)
names(datmat2) <- c('x', 'y', 'value')
datmat2[, 1:2] <- datmat2[, 1:2]/1000 # scale it back
I'm going to borrow from some previous answers. The circleFun below is from Draw a circle with ggplot2.
circleFun <- function(center = c(0, 0), diameter = 1, npoints = 100) {
  r = diameter / 2
  tt <- seq(0, 2*pi, length.out = npoints)
  xx <- center[1] + r * cos(tt)
  yy <- center[2] + r * sin(tt)
  return(data.frame(x = xx, y = yy))
}
circledat <- circleFun(c(.5, .5), 1, npoints = 100) # center on [.5, .5]
# ignore anything outside the circle
datmat2$incircle <- (datmat2$x - .5)^2 + (datmat2$y - .5)^2 < .5^2 # mark
datmat2 <- datmat2[datmat2$incircle,]
And I really liked the look of the contour plot in R plot filled.contour() output in ggpplot2, so we'll borrow that one.
ggplot(datmat2, aes(x, y, z = value)) +
  geom_tile(aes(fill = value)) +
  stat_contour(aes(fill = ..level..), geom = 'polygon', binwidth = 0.01) +
  geom_contour(colour = 'white', alpha = 0.5) +
  scale_fill_distiller(palette = "Spectral", na.value = NA) +
  geom_path(data = circledat, aes(x, y, z = NULL)) +
  # draw the nose (haven't drawn ears yet)
  geom_line(data = data.frame(x = c(0.45, 0.5, .55), y = c(1, 1.05, 1)),
            aes(x, y, z = NULL)) +
  # add points for the electrodes
  geom_point(data = dat, aes(x, y, z = NULL, fill = NULL),
             shape = 21, colour = 'black', fill = 'white', size = 2) +
  theme_bw()
With the improvements mentioned in the comments (setting extrap = TRUE and linear = FALSE in the interp call to fill in gaps and do a spline smoothing, respectively, and removing NAs before plotting), we get a smooth, filled-in topoplot.
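The exact modified call isn't spelled out in the answer; a sketch of the interpolation step with those changes applied might look like this (everything else stays the same):
# assumed variant of the earlier interp() call: extrapolate beyond the convex
# hull of the electrodes and use spline (non-linear) interpolation, then drop NAs
datmat <- interp(dat$x, dat$y, dat$signal,
                 xo = seq(0, 1, length = 1000),
                 yo = seq(0, 1, length = 1000),
                 extrap = TRUE, linear = FALSE)
datmat2 <- melt(datmat$z)
names(datmat2) <- c('x', 'y', 'value')
datmat2[, 1:2] <- datmat2[, 1:2] / 1000
datmat2 <- datmat2[!is.na(datmat2$value), ]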
mgcv can do spherical splines. This replaces akima (the chunk containing interp() isn't necessary).
library(mgcv)
spl1 <- gam(signal ~ s(x, y, bs = 'sos'), data = dat)
# fine grid, coarser is faster
datmat2 <- data.frame(expand.grid(x = seq(0, 1, 0.001), y = seq(0, 1, 0.001)))
resp <- predict(spl1, datmat2, type = "response")
datmat2$value <- resp
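To visualise this fit, the masking and plotting code from the akima-based answer above can be reused essentially unchanged; a sketch, assuming circledat and dat are still defined as above:
# keep only the head area, as before
datmat2$incircle <- (datmat2$x - .5)^2 + (datmat2$y - .5)^2 < .5^2
datmat2 <- datmat2[datmat2$incircle, ]
ggplot(datmat2, aes(x, y, z = value)) +
  geom_tile(aes(fill = value)) +
  stat_contour(aes(fill = ..level..), geom = 'polygon', binwidth = 0.01) +
  geom_contour(colour = 'white', alpha = 0.5) +
  scale_fill_distiller(palette = "Spectral", na.value = NA) +
  geom_path(data = circledat, aes(x, y, z = NULL)) +
  geom_point(data = dat, aes(x, y, z = NULL, fill = NULL),
             shape = 21, colour = 'black', fill = 'white', size = 2) +
  theme_bw()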

How to detect peaks in a particular range in R

I have a time series and I would like to detect (and identify) some peaks, but only within a particular range, in R.
Here is an example:
## generate test data with 3 peaks
set.seed(123)
x <- seq(0, 360, length = 20)
y <- abs(rnorm(20, mean = 1, sd = 0.1))
y[5:10] <- c(2, 4, 7, 3, 4, 2)
y <- c(y, 0.8 * y, 1.2 * y)
x <- seq(0, 360, along = y)
y[6] <- y[7] # test case with 2 neighbouring equal points
plot(x, y, type="b")
In that example, let's say I want to select peaks (y) only between 6 and 9 (2 peaks) or only between 2 and 4 (also 2 peaks).
I am aware of several packages in R for detecting peaks (e.g. Peaks, pastecs, quantmod, pracma, splus2R), but none seems to have this feature; they usually offer only a minimum threshold.
Any advice would be appreciated.
Thank you,
Martin
Edit: The code provided by Eric works perfectly, but with my own datasets I have a small problem. How would you detect only one peak if the same value occurs twice within a certain window of x? Basically, I would like a conditional statement saying that a certain number of points (in x) is needed between peaks for them to be considered two distinct peaks.
Something like this gets close (I'm not sure whether you care about detecting the peak that has the same value twice).
# Reproduce your data
set.seed(123)
x <- seq(0, 360, length = 20)
y <- abs(rnorm(20, mean = 1, sd = 0.1))
y[5:10] <- c(2, 4, 7, 3, 4, 2)
y <- c(y, 0.8 * y, 1.2 * y)
x <- seq(0, 360, along = y)
y[6] <- y[7] # test case with 2 neighbouring equal points
plot(x, y, type="b")
# shift y up and down a position (for peak identification)
yu <- c(tail(y, -1), NA)
yd <- c(NA, head(y, -1))
# identify peaks that are in the correct range
# where y is higher than the point before and after
high <- which(y - yu >= 0 & y - yd >= 0 & y > 6 & y < 9)
low <- which(y - yu >= 0 & y - yd >= 0 & y >= 2 & y <= 4) # one peak is at 4
# plot lines at peaks
abline(v = x[high], col = 'blue')
abline(v = x[low], col = 'red')
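Regarding the edit about requiring a minimum separation between peaks: a possible sketch (not part of the original answer; min_gap is an illustrative parameter) is to walk through the candidate indices and keep a peak only if it lies at least min_gap points after the previously kept one:
# keep only peaks separated by at least 'min_gap' points (illustrative helper)
filter_peaks <- function(idx, min_gap = 3) {
  keep <- integer(0)
  for (i in idx) {
    if (length(keep) == 0 || i - tail(keep, 1) >= min_gap) keep <- c(keep, i)
  }
  keep
}
high2 <- filter_peaks(high, min_gap = 3) # e.g. merge peaks closer than 3 points apart
abline(v = x[high2], col = 'blue', lty = 3)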

Visual Comparison of Regression & PCA

I'm trying to perfect a method for comparing regression and PCA, inspired by the blog Cerebral Mastication, which has also been discussed from a different angle on SO. Before I forget, many thanks to JD Long and Josh Ulrich for much of the core of this. I'm going to use this in a course next semester. Sorry this is long!
UPDATE: I found a different approach which almost works (please fix it if you can!). I posted it at the bottom. A much smarter and shorter approach than I was able to come up with!
I basically followed the previous schemes up to a point: generate random data, figure out the line of best fit, draw the residuals. This is shown in the Second Code Chunk below. But I also dug around and wrote some functions to draw lines normal to a line through a random point (the data points in this case). I think these work fine, and they are shown in the First Code Chunk along with proof that they work.
Now, the Second Code Chunk shows the whole thing in action using the same flow as @JD Long, and I'm adding an image of the resulting plot. Data in black, the regression in red with residuals in pink, the 1st PC in blue, and the light blue lines should be the normals, but obviously they are not. The functions in the First Code Chunk that draw these normals seem fine, but something is not right with the demonstration: I think I must be misunderstanding something or passing the wrong values. My normals come out horizontal, which seems like a useful clue (but so far, not to me). Can anyone see what's wrong here?
Thanks, this has been vexing me for a while...
First Code Chunk (Functions to Draw Normals and Proof They Work):
##### The functions below are based very loosely on the citation at the end
pointOnLineNearPoint <- function(Px, Py, slope, intercept) {
  # Px, Py is the point to test, can be a vector.
  # slope, intercept is the line to check distance.
  Ax <- Px - 10*diff(range(Px))
  Bx <- Px + 10*diff(range(Px))
  Ay <- Ax * slope + intercept
  By <- Bx * slope + intercept
  pointOnLine(Px, Py, Ax, Ay, Bx, By)
}
pointOnLine <- function(Px, Py, Ax, Ay, Bx, By) {
  # This approach based upon comingstorm's answer on
  # stackoverflow.com/questions/3120357/get-closest-point-to-a-line
  # Vectorized by Bryan
  PB <- data.frame(x = Px - Bx, y = Py - By)
  AB <- data.frame(x = Ax - Bx, y = Ay - By)
  PB <- as.matrix(PB)
  AB <- as.matrix(AB)
  k_raw <- k <- c()
  for (n in 1:nrow(PB)) {
    k_raw[n] <- (PB[n, ] %*% AB[n, ]) / (AB[n, ] %*% AB[n, ])
    if (k_raw[n] < 0) {
      k[n] <- 0
    } else if (k_raw[n] > 1) {
      k[n] <- 1
    } else {
      k[n] <- k_raw[n]
    }
  }
  x <- k * Ax + (1 - k) * Bx
  y <- k * Ay + (1 - k) * By
  ans <- data.frame(x, y)
  ans
}
# The following proves that pointOnLineNearPoint
# and pointOnLine work properly and accept vectors
par(mar = c(4, 4, 4, 4)) # otherwise the plot is slightly distorted
# and right angles don't appear as right angles
m <- runif(1, -5, 5)
b <- runif(1, -20, 20)
plot(-20:20, -20:20, type = "n", xlab = "x values", ylab = "y values")
abline(b, m )
Px <- rnorm(10, 0, 4)
Py <- rnorm(10, 0, 4)
res <- pointOnLineNearPoint(Px, Py, m, b)
points(Px, Py, col = "red")
segments(Px, Py, res[,1], res[,2], col = "blue")
##========================================================
##
## Credits:
## Theory by Paul Bourke http://local.wasp.uwa.edu.au/~pbourke/geometry/pointline/
## Based in part on C code by Damian Coventry Tuesday, 16 July 2002
## Based on VBA code by Brandon Crosby 9-6-05 (2 dimensions)
## With grateful thanks for answering our needs!
## This is an R (http://www.r-project.org) implementation by Gregoire Thomas 7/11/08
##
##========================================================
Second Code Chunk (Plots the Demonstration):
set.seed(55)
np <- 10 # number of data points
x <- 1:np
e <- rnorm(np, 0, 60)
y <- 12 + 5 * x + e
par(mar = c(4, 4, 4, 4)) # otherwise the plot is slightly distorted
plot(x, y, main = "Regression minimizes the y-residuals & PCA the normals")
yx.lm <- lm(y ~ x)
lines(x, predict(yx.lm), col = "red", lwd = 2)
segments(x, y, x, fitted(yx.lm), col = "pink")
# pca "by hand"
xyNorm <- cbind(x = x - mean(x), y = y - mean(y)) # mean centers
xyCov <- cov(xyNorm)
eigenValues <- eigen(xyCov)$values
eigenVectors <- eigen(xyCov)$vectors
# Add the first PC by denormalizing back to original coords:
new.y <- (eigenVectors[2,1]/eigenVectors[1,1] * xyNorm[x]) + mean(y)
lines(x, new.y, col = "blue", lwd = 2)
# Now add the normals
yx2.lm <- lm(new.y ~ x) # zero residuals: already a line
res <- pointOnLineNearPoint(x, y, yx2.lm$coef[2], yx2.lm$coef[1])
points(res[,1], res[,2], col = "blue", pch = 20) # segments should end here
segments(x, y, res[,1], res[,2], col = "lightblue1") # the normals
############ UPDATE
Over at Vincent Zoonekynd's Page I found almost exactly what I wanted, but it doesn't quite work (it apparently used to). Here is a code excerpt from that site which plots normals to the first PC, reflected through a vertical axis:
set.seed(1)
x <- rnorm(20)
y <- x + rnorm(20)
plot(y~x, asp = 1)
r <- lm(y~x)
abline(r, col='red')
r <- princomp(cbind(x,y))
b <- r$loadings[2,1] / r$loadings[1,1]
a <- r$center[2] - b * r$center[1]
abline(a, b, col = "blue")
title(main='Appears to use the reflection of PC1')
u <- r$loadings
# Projection onto the first axis
p <- matrix( c(1,0,0,0), nrow=2 )
X <- rbind(x,y)
X <- r$center + solve(u, p %*% u %*% (X - r$center))
segments( x, y, X[1,], X[2,] , col = "lightblue1")
And here is the result:
Alright, I'll have to answer my own question! After further reading and comparison of methods that people have put on the internet, I have solved the problem. I'm not sure I can clearly state what I "fixed" because I went through quite a few iterations. Anyway, here is the plot and the code (MWE). The helper functions are at the end for clarity.
# Comparison of Linear Regression & PCA
# Generate sample data
set.seed(39) # gives a decent-looking example
np <- 10 # number of data points
x <- -np:np
e <- rnorm(length(x), 0, 10)
y <- rnorm(1, 0, 2) * x + 3*rnorm(1, 0, 2) + e
# Plot the main data & residuals
plot(x, y, main = "Regression minimizes the y-residuals & PCA the normals", asp = 1)
yx.lm <- lm(y ~ x)
lines(x, predict(yx.lm), col = "red", lwd = 2)
segments(x, y, x, fitted(yx.lm), col = "pink")
# Now the PCA using built-in functions
# rotation = loadings = eigenvectors
r <- prcomp(cbind(x,y), retx = TRUE)
b <- r$rotation[2,1] / r$rotation[1,1] # gets slope of loading/eigenvector 1
a <- r$center[2] - b * r$center[1]
abline(a, b, col = "blue") # Plot 1st PC
# Plot normals to 1st PC
X <- pointOnLineNearPoint(x, y, b, a)
segments( x, y, X[,1], X[,2], col = "lightblue1")
###### Needed Functions
pointOnLineNearPoint <- function(Px, Py, slope, intercept) {
  # Px, Py is the point to test, can be a vector.
  # slope, intercept is the line to check distance.
  Ax <- Px - 10*diff(range(Px))
  Bx <- Px + 10*diff(range(Px))
  Ay <- Ax * slope + intercept
  By <- Bx * slope + intercept
  pointOnLine(Px, Py, Ax, Ay, Bx, By)
}
pointOnLine <- function(Px, Py, Ax, Ay, Bx, By) {
  # This approach based upon comingstorm's answer on
  # stackoverflow.com/questions/3120357/get-closest-point-to-a-line
  # Vectorized by Bryan
  PB <- data.frame(x = Px - Bx, y = Py - By)
  AB <- data.frame(x = Ax - Bx, y = Ay - By)
  PB <- as.matrix(PB)
  AB <- as.matrix(AB)
  k_raw <- k <- c()
  for (n in 1:nrow(PB)) {
    k_raw[n] <- (PB[n, ] %*% AB[n, ]) / (AB[n, ] %*% AB[n, ])
    if (k_raw[n] < 0) {
      k[n] <- 0
    } else if (k_raw[n] > 1) {
      k[n] <- 1
    } else {
      k[n] <- k_raw[n]
    }
  }
  x <- k * Ax + (1 - k) * Bx
  y <- k * Ay + (1 - k) * By
  ans <- data.frame(x, y)
  ans
}
Try changing this line of your code:
res <- pointOnLineNearPoint(x, y, yx2.lm$coef[2], yx2.lm$coef[1])
to
res <- pointOnLineNearPoint(x, new.y, yx2.lm$coef[2], yx2.lm$coef[1])
That way you're passing the correct y values.
In Vincent Zoonekynd's code, change the line u <- r$loadings to u <- solve(r$loadings). In the second call to solve(), the predicted component scores along the first principal axis (i.e., the matrix of predicted scores with the second predicted component scores set to zero) need to be multiplied by the inverse of the loadings/eigenvectors. Multiplying data by the loadings gives predicted scores; dividing predicted scores by the loadings gives data. Hope that helps.
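For reference, here is how that change slots into the excerpt above (identical to the original apart from the u <- line and the added comments):
u <- solve(r$loadings)                 # invert the loadings, as described above
p <- matrix(c(1, 0, 0, 0), nrow = 2)   # keep only the first component score
X <- rbind(x, y)
X <- r$center + solve(u, p %*% u %*% (X - r$center)) # back to original coordinates
segments(x, y, X[1, ], X[2, ], col = "lightblue1")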
